\begin{document}
\title{Quantum system characterization with limited resources}
\author{D. K. L. Oi$^a$, S. G. Schirmer$^b$} \address{$^a$SUPA,
Department of Physics, University of Strathclyde, Glasgow, G4 0NG,
United Kingdom, $^b$Department of Physics, Swansea University,
Singleton Park, Swansea, SA2 8PP, United Kingdom}
\date{\today}
\abstract{The construction and operation of large scale quantum
information devices presents a grand challenge. A major issue is
the effective control of coherent evolution, which requires accurate
knowledge of the system dynamics that may vary from device to
device. We review strategies for obtaining such knowledge from
minimal initial resources and in an efficient manner, and apply
these to the problem of characterization of a qubit embedded into a
larger state manifold, made tractable by exploiting prior structural
knowledge. We also investigate adaptive sampling for
estimation of multiple parameters.}
\keywords{quantum, control, estimation}
\maketitle
\section{Introduction}
Recently, much effort has been put into the construction of large
scale quantum devices operating in the coherent regime. This has been
spurred by the possibilities offered by quantum communication and
information processing, from secure transmission and the simulation of
quantum dynamics to the solution of currently intractable
mathematical problems~\cite{NatureReview2008}. Many different physical
systems have been proposed as the basic architecture upon which to
construct quantum devices, ranging from atoms, ions and photons to
quantum dots and superconductors. For large scale commercial applications, it
is likely that this will involve scalable engineered and constructed
devices with tailored dynamics requiring precision control.
Due to inevitable manufacturing tolerances and variation, each device
will display different behaviour even though they may be nominally
``identical''. For their operation, they will need to be characterized
as to their basic properties, response to control fields, and noise or
decoherence~\cite{BIRS2007}. We may also need to know how ideal the
system is in the first place, for instance the effective Hilbert space;
what we may assume to be a qubit may have dynamics involving more than
two effective levels. Extracting this information efficiently and
robustly is crucial.
In a laboratory setting, an experimentalist may have access to many
tools with which to study a system, e.g. spectroscopy and external
probes. In a production setting, provision of these extra resources may
be difficult, expensive, or impossible to integrate with the device. It
is therefore important to understand what sort of characterisation
can be performed simply using what is available \emph{in situ}. Ideally,
we would also like to characterize the performance of a
device with as little prior knowledge of its behaviour as possible, e.g. how it
responds to control fields, if this is the information which we are
trying to obtain in the first place. Characterization using only the
\emph{in situ} resources of state preparation and measurement, even
where it is possible, is challenging, due to the increasing complexity
of the signals, the number of parameters to be estimated, and the difficulty
of reconstructing a valid Hamiltonian from the resulting signal
parameters. Robust and efficient methods of data gathering and
analysis, preferably as automated as possible, are therefore
essential. Here, we review progress in tackling these problems and we
present new results for the problem of characterizing the Hamiltonian of
a qubit embedded in a larger state manifold.
\section{System identification paradigms}
A standard tool for determining discrete quantum dynamics is
\emph{quantum process tomography} (QPT). Assume a completely positive
trace preserving map $\Lambda$ acting on a system state $\rho$ by
\begin{equation}
\Lambda(\rho)=\sum_j \lambda_j \rho \lambda_j^\dagger
\end{equation}
where $\lambda_j$ are the Kraus operators satisfying $\sum_j
\lambda_j^\dagger \lambda_j = \mathbb{I}$. QPT is a method by which we can
determine $\{\lambda_j\}$ by seeing how initial states of the system
evolve under the map. For a complete characterisation, a complete set of
initial states $\{\rho_j\}$ and quantum state tomography on the final
states are both required~\footnote{There are variants of QPT which use
ancillas, entangling or other coherent operations which may offer
certain advantages~\cite{MRL2008,SBLP2011}. However they all require
operational resources beyond what we assume for system
characterisation.}.
In general, for a $d$-dimensional system, we need to use $d^2-1$ input
states, and quantum tomography either requires a measurement with at
least $d^2$ outcomes, for instance, symmetric informationally complete
positive operator-valued measures (SIC-POVMs)~\cite{SICPOVM}, or
projective measurements in several different bases.
The ability to generate a complete set of initial states is a strong
assumption. In many physical systems, it is only possible to directly
prepare a set of orthogonal states, e.g. by projective measurement, or
even only a single ``fiducial'' state. The usual assumption that we can
generate any state by coherent control of a fiducial state is invalid
when we are trying to determine how to control the system in the first
place. Equally, most systems can only be measured directly in a single
fixed basis, and other measurement bases are assumed to be available
through coherent control. This shows that QPT cannot be used as a
starting point for quantum system identification in a setting where control
has not been already established. We require a mechanism by which we
can bootstrap our knowledge and abilities until control can be enacted
upon the system.
Assuming the system dynamics can be approximated to a reasonable degree
by Hamiltonian dynamics, the first core challenge is the identification
of the \emph{intrinsic system Hamiltonian} $H=\sum_j E_j \proj{E_j}$,
which can be specified by the energy eigenstates and eigenvalues (up to
rescaling). Optimum protocols for identification of the Hamiltonian
dynamics depend on the available resources. One general paradigm
introduced in \cite{SKO2004} assumes a situation where we are restricted
to preparation and measurement in a single fixed basis $\{\ket{e_j}\}$,
i.e., we can prepare the system in one of the basis states $\ket{e_j}$,
allow it to evolve for some time $t$ before projectively measuring in
the same basis, obtaining state $\ket{e_k}$ with some probability
$p_{jk}(t)$. The computational basis in this case can be defined in
terms of preparation and measurement and our task is to characterize the
system Hamiltonian in this basis, i.e. obtain the eigenenergies $E_j$
and the eigenstates $\ket{E_j}$ (up to a global phase), solely from the
data traces of $p_{jk}(t)$.
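As a concrete illustration of this paradigm, the following sketch simulates the data traces $p_{jk}(t)$ for a single qubit; the Hamiltonian parameters here are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

# Illustrative qubit Hamiltonian in the fixed preparation/measurement basis:
# H = (omega0*sigma_z + g*sigma_x)/2, with assumed (not paper-specific) values.
omega0, g = 1.0, 0.4
H = 0.5 * np.array([[omega0, g], [g, -omega0]])

def p_jk(H, j, k, t):
    """Prepare |e_j>, evolve for time t under H, measure in the same basis;
    return the probability of obtaining |e_k>."""
    lam, V = np.linalg.eigh(H)                       # H is Hermitian
    U = V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T
    return np.abs(U[k, j]) ** 2

ts = np.linspace(0.0, 20.0, 200)
trace = np.array([p_jk(H, 0, 0, t) for t in ts])     # the data trace p_00(t)
```

In this paradigm, such traces in the fixed basis are the only data available for reconstructing $H$; here the trace oscillates at the Rabi frequency $\sqrt{\omega_0^2+g^2}$.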
Hamiltonian characterization in this paradigm has been considered in a
number of papers. It has been shown that a generic Hamiltonian for a
single qubit can be recovered from preparation and measurement in a
fixed basis up to a certain set of phases and an unobservable global
energy shift. The extra phases become relevant only when additional
resources are available that allow us to initialize the system in
non-measurement basis states, or apply control that alters the system
Hamiltonian. In the latter setting it was also shown how the relative
phases for two Hamiltonians could be recovered by composite rotations,
vaguely reminiscent of Ramsey spectroscopy~\cite{Ramsey1949}. If the
Hamiltonians do not commute or coincide with the preparation/measurement
basis, full control over a single qubit can be realized and the system
and control Hamiltonians can be characterized up to a sign
factor~\cite{SOKC2004}. The results can be extended to multi-qubit or
higher level systems~\cite{SO2009}.
Upon characterisation of the intrinsic system Hamiltonian, the effect
of applying controls must be identified. We can model this by
assuming a Hamiltonian $H(\vec{\lambda})$ that depends on classical
control field parameters $\vec{\lambda}$. In the simplest case we may
approximate the response of the system by $H(\vec{\lambda})=H_0+\sum_j
\lambda_j H_j$, where $H_0$ is the intrinsic system Hamiltonian and
$H_j$ are perturbations resulting from the application of control
$\lambda_j$. This has been considered in~\cite{SKO2004,SOKC2004}
where the effect of multiple control fields on a single qubit was
characterized with respect to a reference Hamiltonian. For coherent
operation, \emph{incoherent effects} should be small but they
will still need to be characterized. Although complete
characterization of the dynamics of open systems is a daunting task,
under certain simplifying assumptions on the type of decoherence,
e.g., pure dephasing or relaxation in a natural basis, the number of
parameters to be estimated can be reduced and the relevant information
extracted~\cite{OiSchirmer2011,CGOSWH2006,SO2009A,qph1012_4593}.
Although the general characterization paradigm described is quite
restrictive, in some cases further restrictions must be imposed to
deal with limited resources. Characterization of the coupling
constants in a spin network when only a subset of the spins at the
boundary can be measured and initialized is an
example~\cite{BMN2008,BMN2011,FMK2011}. Another is estimation of
leakage out of a subspace or coupling to unknown states, where we
generally cannot measure the populations of these states directly.
General bounds on subspace leakage were derived in~\cite{DSOCH2007}
based on the Fourier spectrum of the observed Rabi oscillations. This
approach is useful when there are potentially many states very weakly
coupled to the subspace of interest. In many cases, however, leakage
may be due to coupling to a small number of states outside the
subspace, e.g., when we encode a qubit using the lowest two states of
a slightly anharmonic oscillator. In this case a control applied to
the qubit transition will induce some coupling to the third (nuisance)
level, giving rise to unwanted dynamics. Characterization of this
coupling allows the design of pulses that can suppress such unwanted
excitations.
\section{Hamiltonian estimation for embedded qubit}
Formally, we consider a three-level system subject to a control field
resonant with the $1-2$ transition (see also~\cite{Leghtas2009}), where
$\ket{1}$ and $\ket{2}$ can be regarded as the qubit states. Assuming a
constant amplitude field $f(t)=A \cos(\omega t)$, transforming to a
rotating frame and making the rotating wave approximation, this leads to
an effective Hamiltonian of the form
\begin{equation}
\label{eq:H1}
H= \begin{pmatrix}
0 & d_1 & d_3 \\
d_1 & 0 & d_2 \\
d_3 & d_2 & \delta
\end{pmatrix}.
\end{equation}
We choose a rotating frame where the off-diagonal
elements $d_n$ are real and positive. If higher order
processes such as two-photon transitions between states $1$ and $3$ are
negligible, we can assume $d_3=0$. The objective is to characterize
both the qubit transition coupling $d_1$ as well as the coupling to the
nuisance level $d_2$ and the detuning $\delta$ as a result of the
anharmonicity.
If the system can be initialized in the basis states $\ket{n}$ for
$n=1,2,3$ and we perform complete projective measurements in the same
basis, then the probabilities
\begin{equation}
p_{k\ell} = |\bra{k} e^{-i H t} \ket{\ell}|^2
\end{equation}
can be determined for all $k,\ell$ for different times $t_j$. However,
if we can only initialize and measure the system in the ground state
then only a single population evolution trace $p_{11}(t)$ is available.
For a two-level system this is sufficient to infer the population
$p_{22}$, but the presence of the third level means that $p_{11}(t)$ only
gives us limited information about the population of the other levels.
The existence of a non-zero detuning $\delta$ complicates the problem
substantially. In the absence of anharmonicity, i.e., for $\delta=0$
analytic expressions for $p_{11}(t)$ can be obtained and the problem
reduced to a single frequency estimation problem~\cite{SL2010}. For
$\delta\neq 0$ there is no simple closed form for the signal $p_{11}(t)$,
and the transition frequencies are no longer of the form $0,\pm \lambda$, but are
instead $\omega_{12}=\lambda_2-\lambda_1=\omega-\Delta\omega$ and
$\omega_{23}=\lambda_3-\lambda_2 = \omega+\Delta\omega$. For small
detunings $\delta$ relative to the coupling strengths $d_n$, the
frequency splitting $\Delta\omega$ is much smaller than $\omega$. To
obtain a frequency resolution of $\Delta\omega$ using spectral
analysis would require a signal length of at least $1/(\Delta\omega)$.
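The splitting can be seen directly from the eigenvalues of the Hamiltonian in Eq.~(\ref{eq:H1}). The sketch below, with illustrative parameter values (assumed, not the paper's data), computes the two transition frequencies numerically and checks that the splitting vanishes in the resonant case $\delta=0$:

```python
import numpy as np

# Illustrative couplings (assumed values): d1 = 1, d2 = sqrt(2), delta = 2.
d1, d2 = 1.0, np.sqrt(2.0)

def transition_freqs(delta):
    H = np.array([[0.0, d1, 0.0],
                  [d1, 0.0, d2],
                  [0.0, d2, delta]])
    lam = np.linalg.eigvalsh(H)                 # ascending eigenvalues
    return lam[1] - lam[0], lam[2] - lam[1]     # omega_12, omega_23

w12, w23 = transition_freqs(2.0)
omega = 0.5 * (w12 + w23)                       # mean transition frequency
dw = 0.5 * (w23 - w12)                          # frequency splitting

# For delta = 0 the spectrum is (-L, 0, +L) with L = sqrt(d1^2 + d2^2),
# so the two transition frequencies coincide and there is no splitting.
w12_0, w23_0 = transition_freqs(0.0)
```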
However, using the structure of the signal and restricting to solutions
consistent with our prior knowledge we can do considerably better. We
know that the observed signal must be of the form
\begin{equation}
p_{11}(t) = a_0 + a_1 \cos((\omega-\Delta\omega)t)
+ a_2 \cos((\omega+\Delta\omega)t)
+ a_3 \cos(2\omega t).
\end{equation}
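This form follows from writing $p_{11}(t)=|\sum_k |\langle 1|E_k\rangle|^2 e^{-i\lambda_k t}|^2$ and expanding the cross terms. The sketch below, with illustrative (assumed) parameter values, checks numerically that the simulated signal is reproduced exactly by these four components:

```python
import numpy as np

d1, d2, delta = 1.0, np.sqrt(2.0), 2.0          # assumed illustrative values
H = np.array([[0.0, d1, 0.0],
              [d1, 0.0, d2],
              [0.0, d2, delta]])
lam, V = np.linalg.eigh(H)
w = np.abs(V[0, :]) ** 2                        # overlaps |<1|E_k>|^2

ts = np.linspace(0.0, 20.0, 400)
# amplitude <1|exp(-iHt)|1> = sum_k w_k exp(-i lambda_k t)
amp = (w[None, :] * np.exp(-1j * np.outer(ts, lam))).sum(axis=1)
p11 = np.abs(amp) ** 2                          # exact signal p_11(t)

w12, w23 = lam[1] - lam[0], lam[2] - lam[1]     # omega -/+ Delta_omega
design = np.column_stack([np.ones_like(ts),
                          np.cos(w12 * ts),
                          np.cos(w23 * ts),
                          np.cos((w12 + w23) * ts)])   # 2*omega = w12 + w23
coeffs, *_ = np.linalg.lstsq(design, p11, rcond=None)
residual = np.max(np.abs(design @ coeffs - p11))
```

Expanding the cross terms gives $a_0=\sum_k w_k^2$, $a_1=2w_1w_2$, $a_2=2w_2w_3$ and $a_3=2w_1w_3$, so the least-squares fit is exact to machine precision.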
A natural starting point for a maximum likelihood estimation is thus to
choose basis functions $g_0=1$, $g_1(t)=\cos((\omega-\Delta \omega)t)$,
$g_2(t)=\cos((\omega+\Delta \omega)t)$ and $g_3(t)=\cos(2\omega t)$, and
following standard techniques, maximize the log-likelihood
function~\cite{SO2009,Bretthorst88}
\begin{equation}
\label{eq:loglikelihood}
L(\{\omega,\Delta\omega\}|\vec{d})
\propto \frac{m_b-N_t}{2} \log
\left[1-\frac{m_b \ave{\vec{h}^2}} {N_t\ave{\vec{d}^2}}\right],
\end{equation}
where $m_b=4$ is the number of basis functions, $N_t$ is the number of
data points and
\begin{equation}
\ave{\vec{d}^2} = \frac{1}{N_t} \sum_{n=0}^{N_t-1} d_n^2, \quad
\ave{\vec{h}^2} = \frac{1}{m_b} \sum_{m=0}^{m_b-1} h_m^2.
\end{equation}
The elements $h_m$ of the $(m_b,1)$-vector $\vec{h}$ are projections
of the $(1,N_t)$-data vector $\vec{d}$ onto a set of orthonormal basis
vectors derived from the non-orthogonal basis functions $g_m(t)$
evaluated at the respective sample times $t_n$. Concretely, setting
$G_{mn}=g_m(t_n)$, let $\lambda_m$ and $\vec{e}_m$ be the eigenvalues
and corresponding (normalized) eigenvectors of the $m_b\times m_b$
matrix $G G^\dag$ with $G=(G_{mn})$, and let $E=(e_{m'm})$ be the matrix
whose columns are the $\vec{e}_m$. Then $\vec{h}=V G
\vec{d}^\dag$ with $V =\operatorname{diag}(\lambda_m^{-1/2}) E^\dag$, and the
corresponding coefficient vector is $\vec{a}=\vec{h}^\dag
V$~\cite{SO2009}.
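A minimal sketch of this construction, applied to synthetic data with assumed (illustrative) signal parameters, is:

```python
import numpy as np

def log_likelihood(ts, data, omega, dw):
    """Log-likelihood for the four-component signal model via projection of
    the data onto an orthonormalized version of the basis functions (a sketch)."""
    G = np.vstack([np.ones_like(ts),
                   np.cos((omega - dw) * ts),
                   np.cos((omega + dw) * ts),
                   np.cos(2 * omega * ts)])          # G[m, n] = g_m(t_n)
    m_b, N_t = G.shape
    evals, E = np.linalg.eigh(G @ G.T)               # G G^T is real symmetric
    Vmat = np.diag(evals ** -0.5) @ E.T              # orthonormalizing transform
    h = (Vmat @ G) @ data                            # projections h_m of the data
    msq_h = np.mean(h ** 2)                          # <h^2>
    msq_d = np.mean(data ** 2)                       # <d^2>
    return 0.5 * (m_b - N_t) * np.log(1.0 - m_b * msq_h / (N_t * msq_d))

# Synthetic noisy data generated from the model itself (illustrative values):
rng = np.random.default_rng(0)
w_true, dw_true = 2.0, 0.15
ts = np.linspace(0.0, 20.0, 200)
signal = (0.5 + 0.2 * np.cos((w_true - dw_true) * ts)
              + 0.2 * np.cos((w_true + dw_true) * ts)
              + 0.1 * np.cos(2 * w_true * ts))
data = signal + 0.02 * rng.standard_normal(ts.size)

L_true = log_likelihood(ts, data, w_true, dw_true)       # peak of the likelihood
L_off  = log_likelihood(ts, data, 1.1 * w_true, dw_true) # detuned trial model
```

The likelihood is largest when the trial frequencies capture most of the signal power, so the true parameters score well above a detuned trial.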
\begin{figure*}
\caption{``Qubit'' with leakage to a slightly detuned third level.
The raw signal $p_{11}(t)$.}
\label{fig1}
\end{figure*}
Fig.~\ref{fig1} shows that the log-likelihood function of the data
provides strong evidence for a non-zero detuning, even though no peak
splitting is detectable in the Fourier spectrum of the signal. For the
given input data the log-likelihood function has a squeezed peak that is
narrow in one direction but much broader in the other. The fact that
the peak is squeezed in a direction not aligned with a coordinate axis
shows that the uncertainties in $\omega$ and $\Delta\omega$ are not
independent. The plot also shows that the width of the peak along the
$\Delta\omega$ direction is much greater than that in the $\omega$ direction.
\begin{figure*}
\caption{Standard deviation of $\omega_{est}$ and $\Delta\omega_{est}$.}
\label{fig:uncert}
\end{figure*}
This is also reflected in the observed uncertainties of the estimates
of $\omega$ and $\Delta \omega$. If we take $(\omega_{est},\Delta
\omega_{est})$ to be the coordinates for which the log-likelihood
peaks then their standard deviations give an indication of the
uncertainty in our estimates. Fig.~\ref{fig:uncert} shows the
standard deviation of $\omega_{est}$ and $\Delta\omega_{est}$ for 256
simulated experiments each as a function of the number of time samples
$N_t$ and the number of experiment repetitions $N_e$, on a logarithmic
scale. The plots look qualitatively similar, suggesting a similar
scaling, but the scale shows that the uncertainty of the
$\Delta\omega$ estimates is about one order of magnitude greater than
that of the $\omega$ estimates. We observe a similar scaling for the
estimated amplitudes $a_m$ for $m=0,1,2,3$ (not shown). The median
relative error in the estimated Hamiltonian shows a similar behaviour
though with some kinks, and it is interesting to note that the
relative errors in the Hamiltonian tend to be larger than the errors
in the estimated frequencies and amplitudes of the signal.
Preliminary results suggest that fragility of the reconstruction
procedure is responsible for the observed larger spread in the errors of
the reconstructed Hamiltonian even if the uncertainty of the estimated
signal parameters is quite low as shown in Fig.~\ref{fig:error-H}
(left). The reconstruction procedure can sometimes fail, leading to
outliers in the relative Hamiltonian error histogram shown in
Fig.~\ref{fig:error-H} (right), which typically correspond to unphysical
Hamiltonians.
\begin{figure*}
\caption{(left) Error scaling as a function of sampling times and
accuracy. (right) Histogram of reconstruction error showing outliers
corresponding to unphysical solutions. (Colour online)}
\label{fig:error-H}
\end{figure*}
If the likelihood function does not have a sufficiently sharp peak, we
can adaptively refine the sampling to reduce the uncertainty. A simple
adaptive strategy is as follows:
\begin{enumerate}
\item Preliminary sampling: Over a pre-chosen sampling period, measure
at randomly chosen sampling times and compute the likelihood of
various models based on this initial data.
\item Uncertainty estimate: The uncertainty of the solution can be
estimated from the sharpness and relative height of the highest
peak in the likelihood plot.
\item Refinement: From the initial data, choose an ensemble of
probable models and calculate the weighted expected variance of
their data traces as a function of time.
\item New samples: We make additional measurements at times for which
the most probable models differ the most and use the new data points
to update the estimates of the signal parameters and Hamiltonian.
\item Repeat as necessary.
\end{enumerate}
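A minimal sketch of steps 1--4 for a simplified single-frequency model (all names, values, and the Gaussian likelihood approximation here are illustrative assumptions, not the implementation used for the results above):

```python
import numpy as np

rng = np.random.default_rng(1)
w_true, N_e = 1.3, 100                       # hypothetical system and repetitions

def measure(t, n):
    """Simulate n binary measurements at time t; return the observed frequency."""
    return rng.binomial(n, 0.5 * (1 + np.cos(w_true * t))) / n

# 1. Preliminary sampling at randomly chosen times.
t_init = rng.uniform(0.0, 20.0, 30)
d_init = np.array([measure(t, N_e) for t in t_init])

# 2. Likelihood of candidate models on a frequency grid (Gaussian approximation).
w_grid = np.linspace(0.5, 2.0, 601)
def log_post(ts, ds):
    res = (ds[None, :] - 0.5 * (1 + np.cos(np.outer(w_grid, ts)))) ** 2
    return -0.5 * N_e * res.sum(axis=1) / 0.25
lp = log_post(t_init, d_init)
post = np.exp(lp - lp.max()); post /= post.sum()

# 3. Weighted variance of the model predictions over candidate times.
t_cand = np.linspace(0.0, 20.0, 400)
preds = 0.5 * (1 + np.cos(np.outer(w_grid, t_cand)))
var = (post[:, None] * (preds - (post @ preds)[None, :]) ** 2).sum(axis=0)

# 4. Measure where the probable models disagree most, then update the estimate.
t_new = t_cand[np.argsort(var)[-10:]]
d_new = np.array([measure(t, N_e) for t in t_new])
lp2 = log_post(np.concatenate([t_init, t_new]), np.concatenate([d_init, d_new]))
w_est = w_grid[np.argmax(lp2)]
```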
Although similar in spirit to the adaptive Bayesian identification
strategy proposed by Wiseman \emph{et al.} for single parameter
Hamiltonian estimation~\cite{SCCBW2011}, we do not minimize the
variance of a single parameter as the Hamiltonian depends on multiple
parameters. Another difficulty is that the multi-parameter likelihood
function is usually far from Gaussian and the expectation values of
$\omega$ and $\Delta\omega$ tend to differ substantially from the
maximum likelihood estimate. In this case using the expectation values
and variances with respect to a given parameter is not necessarily a
good indicator of the real uncertainty of the model.
\begin{figure}
\caption{Log-likelihood comparison: (left) initial; (right) after
adaptive sampling. The probability is more highly concentrated and
the accuracy of the solution greatly improved. (Colour
online)}
\label{fig:log-likelihood-compare}
\end{figure}
We apply this to the above qutrit system. Starting with a 100-point
low-discrepancy sampling of the selected time range $[0,20]$, we
obtain $N_e=100$ measurements per time point. The resulting
log-likelihood function, shown in
Fig.~\ref{fig:log-likelihood-compare}(left), has a squeezed peak
centred at $(\omega,\Delta\omega)=(1.9468,0.1087)$. To narrow the
peak width we resample using the initial estimate for
$(\omega,\Delta\omega)$. A simple and computationally cheap strategy
is to choose new sample points at integer multiples of
$T/2=\pi/\omega_{est}$, and get a few accurate samples using
e.g. $N_e=1000$. The idea is that we will be most sensitive to small
modulations in the peak heights due to $\Delta\omega$ at these times.
Indeed we find that the relative errors for the frequency and
amplitude estimates improve substantially,
Fig.~\ref{fig:log-likelihood-compare}(right). Yet, the relative error
of the reconstructed Hamiltonians does not always follow the same
trend and sometimes actually \emph{increases}. Further analysis
shows that this is due to the reconstruction step and the complex
dependence of $H$ on the estimated parameters.
This suggests that it would be desirable to estimate the Hamiltonian
parameters directly from the data, maximizing
the likelihood
\begin{equation}
\label{eq:P2}
P(\vec{d}|\{\Omega,\alpha,\epsilon\}) \propto \left[
\sum_{j=1}^{N_t} |d_j -p_{11}(t_j)|^2\right]^{-N_t/2}
\end{equation}
or its logarithm, where for convenience we have defined
$d_1=\Omega\cos{\alpha}$, $d_2=\Omega\sin{\alpha}$, and
$\delta=4\epsilon$. Although there is no simple closed form for
$p_{11}(t_j)$ in this case, it can be computed numerically and the
challenge is finding the global optimum of the 3-parameter likelihood
function. Without any prior information, we first compute the
log-likelihood on a 3D grid of parameter values, find the region where
the global optimum is expected, and use this information as a starting
point of a local optimization routine to find the peak. For the
example above, this yields
$(\Omega,\alpha,\epsilon)_{opt}=(1.7159,0.9494,0.5340)$ for the initial
data, versus $(\Omega,\alpha,\epsilon)_{opt}=(1.7323,0.9557,0.4999)$ with
the new data (actual values $(1.7321,0.9553,0.5000)$). The relative error of
the reconstructed Hamiltonian for the initial data is $0.0363$,
comparable to (in fact slightly higher than) the estimate obtained using the
previous two-step approach. Using the new data, the relative error is
substantially lower, $3.8672 \times 10^{-4}$ if we maximize
(\ref{eq:P2}) instead of maximizing (\ref{eq:loglikelihood}) followed
by reconstruction. This approach appears suitable for systematic
adaptive estimation, which will be investigated further in future
work.
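A sketch of this direct approach, on synthetic data with the parameter values quoted above (the grid ranges, noise level and sampling times are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def p11(theta, ts):
    """Numerical p_11(t) for parameters (Omega, alpha, epsilon)."""
    Om, al, ep = theta
    d1, d2, delta = Om * np.cos(al), Om * np.sin(al), 4.0 * ep
    H = np.array([[0.0, d1, 0.0], [d1, 0.0, d2], [0.0, d2, delta]])
    lam, V = np.linalg.eigh(H)
    w = np.abs(V[0, :]) ** 2
    return np.abs((w[None, :] * np.exp(-1j * np.outer(ts, lam))).sum(axis=1)) ** 2

def neg_log_like(theta, ts, data):
    # Up to constants, minus the logarithm of Eq. (eq:P2).
    return np.sum(np.abs(data - p11(theta, ts)) ** 2)

rng = np.random.default_rng(2)
theta_true = np.array([np.sqrt(3.0), 0.9553, 0.5])   # d1=1, d2=sqrt(2), delta=2
ts = np.linspace(0.1, 10.0, 100)
data = p11(theta_true, ts) + 0.01 * rng.standard_normal(ts.size)

# Coarse 3D grid scan for a starting point, then local refinement of the peak.
grid = [np.linspace(1.0, 2.5, 16), np.linspace(0.2, 1.4, 13), np.linspace(0.1, 1.0, 10)]
best = min((tuple(th) for th in np.stack(np.meshgrid(*grid), -1).reshape(-1, 3)),
           key=lambda th: neg_log_like(np.array(th), ts, data))
res = minimize(neg_log_like, np.array(best), args=(ts, data), method="Nelder-Mead")
```

The local optimizer refines the grid estimate towards the true parameters, skipping the separate signal-fitting and Hamiltonian-reconstruction stages.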
\section{Conclusion}
We have outlined the problem of system characterisation and why it is an
essential basic building block of quantum control for many prospective
quantum information processing devices. We have
shown that such a bootstrapping procedure is possible and that much
information can be gained using only the limited operational
capabilities initially present \emph{in situ}. By exploiting prior
knowledge and reasonable assumptions on the structure and behaviour of
the system, maximum likelihood analysis offers efficient
and robust characterisation and reconstruction of complex systems.
Scalability remains a challenge. In the general setting, slightly
increasing the size of the system leads to an explosion in signal
complexity~\cite{SO2009}, and directly applying system identification to
three or more qubits is a formidable task. Some
complexity reduction can be achieved using Bayesian signal
estimation in order to split up frequency and amplitude estimation,
though this has some drawbacks in ensuring physically allowed
reconstructed Hamiltonians. Extending the Bayesian estimation directly
to Hamiltonian parameters would circumvent the reconstruction validity
problem, but direct optimisation of the likelihood is challenging for more
than a few parameters. Hence there is a need for a similar complexity reduction
for Hamiltonian parameter estimation. Exploiting as much structural
information about the system as possible is essential in making the problem
tractable, especially in the case of restricted resources.
Adaptive estimation for multiple parameters is a ripe area for further
exploration. Practical online schemes may require pragmatic methods of
determining adaptive measurements, as full Bayesian optimisation
involves integration over a highly peaked payoff function in a high
dimensional parameter space. Development of effective yet
computationally efficient optimisation routines is imperative.
Relaxing more assumptions or expanding the set of resources available
would give greater experimental relevance. Preparation and measurement
capabilities may vary, for instance instead of projective measurement,
some experimental proposals implement continuous~\cite{ContMeas}, weak
or generalized measurements. States may also be prepared by relaxation
or adiabatic passage and may not coincide with the measurement basis.
More generally, there are interesting connections between the
compressive sensing (CS) and sparse reconstruction
paradigm~\cite{CSreference,CSGross} and how our model-based system
characterisation techniques work. Instead of the union of
low-dimensional (linear) sub-spaces model in CS, we instead have a
solution space as the union of low-dimensional manifolds of
parameters. An extension of the notion of ``basis incoherence'' and
general techniques for efficient reconstruction from sparse data would
be extremely beneficial.
\end{document}
\begin{document}
\title{On Poisson operators and Dirichlet-Neumann maps in $H^s$ for divergence form elliptic operators with Lipschitz coefficients}
\date{}
\author{
\null\\
Yasunori Maekawa\\
Mathematical Institute, Tohoku University\\
6-3 Aoba, Aramaki, Aoba, Sendai 980-8578, Japan\\
{\tt [email protected] }
\and
\\
Hideyuki Miura \\
Department of Mathematics, Graduate School of Science, Osaka University\\
1-1 Machikaneyama, Toyonaka, Osaka 560-0043, Japan\\
{\tt [email protected]}}
\maketitle
\begin{center}
{\bf Abstract}
\end{center}
We consider second order uniformly elliptic operators of divergence form in $\mathbb{R}^{d+1}$ whose coefficients are independent of one variable.
Under the Lipschitz condition on the coefficients
we characterize the domain of
the Poisson operators and the Dirichlet-Neumann maps
in the Sobolev space $H^s(\mathbb{R}^d)$ for each $s\in [0,1]$. Moreover,
we also show a factorization formula for the elliptic operator
in terms of the Poisson operator.
\noindent {\bf Keywords:} Divergence form elliptic operators, Poisson operators, Dirichlet-Neumann maps
\noindent {\bf 2010 Mathematics Subject Classification:} 35J15, 35J25, 35S05
\section{Introduction}\label{sec.intro}
In the present paper we consider the second order elliptic operator of divergence form in $\mathbb{R}^{d+1}= \{ (x,t) \in \mathbb{R}^d \times \mathbb{R} \}$,
\begin{equation}
\mathcal{A} = -\nabla \cdot A \nabla, ~~~~~~~~~~~~~~A = A (x) = \big (a_{i,j} (x) \big ) _{1\leq i,j\leq d+1}~.\label{def.calA}
\end{equation}
Here $d\in \mathbb{N}$, $\nabla = (\nabla_x,\partial_t)^\top$ with $\nabla_x=(\partial_1,\cdots,\partial_d)^\top$, and each $a_{i,j}$ is complex-valued and assumed to be $t$-independent. The adjoint matrix of $A$ will be denoted by $A^*$. We assume the uniform ellipticity condition
\begin{align}
{\rm Re} \langle A (x) \eta, \eta \rangle \geq \nu_1 |\eta|^2, ~~~~~~~~~~ | \langle A (x) \eta,\zeta\rangle | \leq \nu_2 |\eta| |\zeta|\label{ellipticity}
\end{align}
for all $\eta,\zeta\in \mathbb{C}^{d+1}$ with positive constants $\nu_1,\nu_2$.
Here $\langle \cdot,\cdot \rangle$ denotes the inner product of $\mathbb{C}^{d+1}$, i.e., $\langle \eta, \zeta\rangle = \sum_{j=1}^{d+1}\eta_j \bar{\zeta}_j$ for $\eta,\zeta\in \mathbb{C}^{d+1}$. For later use we set
\begin{align*}
A' =(a_{i,j})_{1\leq i,j\leq d},~~~b=a_{d+1, d+1}, ~~~{\bf r_1} = ( a_{1, d+1},\cdots , a_{d, d+1} )^\top, ~~~{\bf r_2} = ( a_{d+1,1} ,\cdots , a_{d+1,d})^\top.
\end{align*}
We will also use the notation $\mathcal{A}'=-\nabla_x\cdot A' \nabla_x$.
In this paper, we are concerned with the Poisson operator and
the Dirichlet-Neumann map associated with $\mathcal{A}$,
which play fundamental roles in the boundary value problems
for the elliptic operators.
They are defined through the $\mathcal{A}$-extension of the boundary data on $\mathbb{R}^d=\partial\mathbb{R}^{d+1}_+$ to the upper half space.
\begin{df}\label{def.A-extension} {\rm (i)} For a given $h\in \mathcal{S}' (\mathbb{R}^d)$ we denote by $M_h: \mathcal{S}(\mathbb{R}^d)\rightarrow \mathcal{S}'(\mathbb{R}^d)$ the multiplication $M_h u = h u$.
\noindent {\rm (ii)} We denote by $E_{\mathcal{A}}: \dot{H}^{1/2} (\mathbb{R}^d)\rightarrow \dot{H}^1 (\mathbb{R}^{d+1}_+)$ the $\mathcal{A}$-extension operator, i.e., $u=E_{\mathcal{A}} f$ is the solution to the Dirichlet problem
\begin{equation}\label{eq.dirichlet0}
\begin{cases}
& \mathcal{A} u = 0~~~~~~~{\rm in}~~~\mathbb{R}^{d+1}_+,\\
& \hspace{0.3cm} u = f~~~~~~~{\rm on}~~\partial\mathbb{R}^{d+1}_+=\mathbb{R}^d.
\end{cases}
\end{equation}
The one parameter family of linear operators $\{E_{\mathcal{A}} (t)\}_{t\geq 0}$, defined by $E_{\mathcal{A}} (t) f = (E_{\mathcal{A}} f )(\cdot,t)$ for $f\in \dot{H}^{1/2}(\mathbb{R}^d)$, is called the Poisson semigroup associated with $\mathcal{A}$.
\noindent {\rm (iii)} We denote by $\Lambda_{\mathcal{A}}: D_{L^2} (\Lambda_{\mathcal{A}}) \subset \dot{H}^{1/2} (\mathbb{R}^d)\rightarrow \dot{H}^{-1/2} (\mathbb{R}^d)$ the Dirichlet-Neumann map associated with $\mathcal{A}$, which is defined through the sesquilinear form
\begin{equation}
\langle \Lambda_{\mathcal{A}} f, g\rangle_{\dot{H}^{-\frac12},\dot{H}^{\frac12}} = \langle A\nabla E_{\mathcal{A}}f, \nabla E_{\mathcal{A}} g \rangle_{L^2(\mathbb{R}^{d+1}_+)},~~~~~~~~~~f,g \in \dot{H}^{\frac12}(\mathbb{R}^d).\label{def.Lambda}
\end{equation}
Here $\langle \cdot,\cdot\rangle _{\dot{H}^{-1/2},\dot{H}^{1/2}}$ denotes the duality coupling of $\dot{H}^{-1/2}(\mathbb{R}^d)$ and $\dot{H}^{1/2}(\mathbb{R}^d)$.
\end{df}
\noindent Here $\dot{H}^s (\mathbb{R}^d)$ is the homogeneous Sobolev space of the order $s\in \mathbb{R}$ and $D_H(T)$ denotes the domain of a linear operator $T$
in a Banach space $H$. Since the ellipticity condition \eqref{ellipticity}
ensures that $E_\mathcal{A}$ is well-defined in $\dot{H}^{1/2}(\mathbb{R}^d)$
via the Lax-Milgram theorem,
it is not difficult to see that $\{E_{\mathcal{A}} (t)\}_{t\geq 0}$
is realized as a strongly continuous and analytic semigroup
in $\dot{H}^{1/2}(\mathbb{R}^d)$ and in $H^{1/2}(\mathbb{R}^d)$ (see, e.g. \cite[Proposition 2.4]{MaekawaMiura1}).
Then the generator of the Poisson semigroup will be denoted by $-\mathcal{P}_{\mathcal{A}}$, and $\mathcal{P}_{\mathcal{A}}$ is called the {\it Poisson operator} (associated with $\mathcal{A}$).
As for the Dirichlet-Neumann map, it is well known from the theory of sesquilinear forms that \eqref{ellipticity} guarantees
the generation of a strongly continuous and analytic semigroup in $L^2 (\mathbb{R}^d)$; see \cite{Kato}. On the other hand, the realization of the Poisson semigroup in $L^2 (\mathbb{R}^d)$ is nothing but the solvability of the elliptic boundary
value problem \eqref{eq.dirichlet0} for $L^2$ boundary data (see \cite{MaekawaMiura1} for details), and there have been many works on
this subject by now. Moreover, the characterization of $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ has been studied as well, for it provides precise information on
the behavior of the $\mathcal{A}$-extension near the boundary.
As far as the authors know,
these problems are affirmatively settled at least for the following classes of $A$.
(I) $A$ is a constant matrix, i.e., $A(x)=A$; (II) $A$ is Hermitian, i.e., $A^*=A$; (III) $A$ is of block type, i.e., ${\bf r_1}={\bf r_2} =0$; (IV) $A$ is a small $L^\infty$ perturbation of a matrix $B$ satisfying one of (I)-(III) above.
The case (I) is easy since one can directly derive the solution formula for
\eqref{eq.dirichlet0} with the aid of the Fourier transform. The case (II) is a classical problem, as it is closely related to the Laplace equation in Lipschitz domains,
and it is studied in \cite{Dahlberg1, JerisonKenig1, JerisonKenig2,
Verchota, Dahlberg2, KenigPipher, Aucher.et.al.2, Aucher.et.al.3}.
The case (III) is considered in
\cite{Aucher.et.al.1, Aucher.et.al.2}. In this case, the Poisson operator
essentially coincides with the Dirichlet-Neumann map, and the characterization
$D_{L^2}(\mathcal{P}_{\mathcal{A}})=H^1 (\mathbb{R}^d)$
is known as the Kato square root problem
for divergence form elliptic operators,
which was settled in \cite{Aucher.et.al.1}.
The case (IV) is solved in \cite{Fabes.et.al.1}
when $B$ is a constant matrix,
and in \cite{Aucher.et.al.2,Aucher.et.al.3,Alfonseca.et.al.1} when $B$ is a Hermitian or block matrix.
Recently, in \cite{MaekawaMiura1}, the authors of the present
paper showed the $L^2$ solvability of
\eqref{eq.dirichlet0} and verified
the characterization $D_{L^2}(\mathcal{P}_{\mathcal{A}})=H^1 (\mathbb{R}^d)$,
when ${\bf r_1}$, ${\bf r_2}$, and $b$ are real and
$\nabla_x \cdot({\bf r_1}+{\bf r_2})$
belongs to $L^d(\mathbb{R}^d)+L^\infty(\mathbb{R}^d)$.
We note that in the cases (II)-(IV)
the coefficients of $A$ are not necessarily continuous.
However, it is shown in \cite{Kenig.et.al.1}
that if one imposes only \eqref{ellipticity} and allows
discontinuous coefficients,
the Dirichlet problem \eqref{eq.dirichlet0} is not always
solvable for boundary data in $L^2 (\mathbb{R}^d)$.
This means that some additional conditions on $A$, such as
(I)-(IV), are required
in order to extend the Poisson semigroup in $H^{1/2}(\mathbb{R}^d)$
to a semigroup in $L^2 (\mathbb{R}^d)$.
As our first result,
we show the realization of the Poisson semigroup
and the characterization of the domain of the generator
in $H^s(\mathbb{R}^d)$ for $s \in [0,1]$ under the Lipschitz regularity assumption:
\begin{align}
{\rm Lip} (A) = \sum_{i,j} \sup_{x,y\in \mathbb{R}^d} \frac{|a_{i,j} (x) - a_{i,j} (y) |}{|x-y|} <\infty. \label{lipschitz}
\end{align}
The precise statement of the result is given as follows:
\begin{thm}\label{thm.main.1} Let $A=A(x)$ be a $t$-independent complex coefficient matrix satisfying \eqref{ellipticity} and \eqref{lipschitz}. Then the following statements hold:
\\
{\rm (i)} The Poisson semigroup in $H^{1/2}(\mathbb{R}^d)$ is extended as a strongly continuous and analytic semigroup in $L^2 (\mathbb{R}^d)$. Moreover, its generator $-\mathcal{P}_{\mathcal{A}}$ satisfies $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ with equivalent norms, and $\mathcal{P}_{\mathcal{A}}$ admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$.
\noindent {\rm (ii)} Let $s\in [0,1]$. Then $H^s(\mathbb{R}^d)$ is invariant under the action of the Poisson semigroup $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ in $L^2 (\mathbb{R}^d)$, and its restriction to $H^s(\mathbb{R}^d)$ defines a strongly continuous and analytic semigroup in $H^s(\mathbb{R}^d)$. Moreover, its generator, denoted again by $-\mathcal{P}_{\mathcal{A}}$, satisfies $D_{H^s}(\mathcal{P}_{\mathcal{A}}) = H^{1+s}(\mathbb{R}^d)$ with equivalent norms.
\end{thm}
For the definition of bounded $H^\infty$ calculus for sectorial operators, see, e.g., \cite[Chapter 5]{Haase}.
The main feature in this result is that we do not assume any {\it structural} conditions such as (I)-(IV).
In contrast to approaches taken in the aforementioned results,
we analyze $\mathcal{P}_{\mathcal{A}}$ by looking
at its principal symbol, which is explicitly calculated as
\begin{align}
\mu_{\mathcal{A}} (x,\xi) = - \frac{{\bf v} (x) \cdot \xi}{2} + i \big \{ \frac{1}{b (x) } \langle A'(x) \xi,\xi\rangle -\frac{1}{4}({\bf v}(x) \cdot \xi )^2\big \}^\frac12,~~~~~~~~{\bf v} = \frac{{\bf r_1} + {\bf r_2}}{b}.\label{mu_A.intro}
\end{align}
Here $x,\xi\in \mathbb{R}^d$.
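For orientation, we record a simple sanity check (an illustration, not part of the results of this paper): in the case $A=I_{d+1}$ one has $A'=I_d$, ${\bf r_1}={\bf r_2}=0$, and $b=1$, so that ${\bf v}=0$ and \eqref{mu_A.intro} reduces to
\begin{align*}
\mu_{\mathcal{A}} (x,\xi) = i \langle \xi, \xi \rangle^\frac12 = i |\xi |, ~~~~~~~~ e^{it\mu_{\mathcal{A}} (x,\xi)} = e^{-t|\xi|}.
\end{align*}
This is the symbol of the classical Poisson semigroup $e^{-t(-\Delta_x)^{1/2}}$ for the Laplacian, for which $\mathcal{P}_{\mathcal{A}} = (-\Delta_x)^{1/2}$ and the characterization $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ is immediate.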
As is expected, the associated pseudo-differential operator
$-i\mu_{\mathcal{A}}(\cdot,D_x)$ is shown to be an
approximation of $\mathcal{P}_{\mathcal{A}}$.
Since $\mu_{\mathcal{A}}$ is Lipschitz in $x$
and homogeneous of degree $1$ in $\xi$,
one may apply the general theory of pseudo-differential operators with nonsmooth symbols \cite{KumanogoNagase, Marschall, Taylor, Abels, ES} to
$\mu_{\mathcal{A}}(\cdot,D_x)$ in order to show Theorem \ref{thm.main.1}, at least for $s <1$.
Here we provide another approach to
the analysis of $\mathcal{P}_{\mathcal{A}}$
which does not rely on the detailed properties of
$\mu_{\mathcal{A}}(\cdot,D_x)$ obtained from this general theory;
see Remark \ref{rem.pseudo}.
Indeed, the key ingredient underlying the proof of Theorem \ref{thm.main.1}
in our argument is the factorization
of the operators $\mathcal{A}'$ and $\mathcal{A}$
in terms of $\mathcal{P}_{\mathcal{A}}$, which
we state in the next theorem.
We note here that the assertion (ii) includes the critical case $s=1$,
which seems to be out of reach of the general theory of pseudo-differential
operators in the works cited above.
\begin{rem}
{\rm
If $\mathbb{R}^{d+1}_+$ is replaced by a bounded Lipschitz domain
satisfying a $VMO$ condition on the unit normal of the boundary,
then the Dirichlet problem for $A$ with $VMO$ coefficients is
solved in \cite{MMS} in $L^p$ and Besov spaces.
In view of local regularity,
the Lipschitz condition \eqref{lipschitz}
assumed in our paper is rather strong.
However, one has to be careful about the lack of compactness
of the boundary in our case. In fact, the authors in \cite{MMS} apply
a localization argument which enables them to
approximate $A(x)$ by a constant matrix in each localized domain; the
boundary value problem is then reduced to a finite sum of problems
for small $VMO$ perturbations of constant matrices.
In our case, however, one cannot use such a localization procedure, since
$A(x)$ is not necessarily close to a constant matrix
as $|x|\rightarrow \infty$. This difficulty is overcome
with the aid of the calculus of the symbol \eqref{mu_A.intro}.
}
\end{rem}
In order to state the next result,
let us recall the realization of
$\mathcal{A}$ in $L^2 (\mathbb{R}^{d+1})$:
\begin{align}
D_{L^2}(\mathcal{A}) & = \big \{ u\in H^1 (\mathbb{R}^{d+1})~|~{\rm there ~is}~ F\in L^2 (\mathbb{R}^{d+1})~{\rm such ~that}~\nonumber \\
& ~~~~~~~~~~~~~~~~~~ \langle A\nabla u, \nabla v\rangle _{L^2(\mathbb{R}^{d+1})} = \langle F, v\rangle _{L^2 (\mathbb{R}^{d+1})}~{\rm for~all}~v\in H^1 (\mathbb{R}^{d+1})\big \},\label{realization.A}\\
\mathcal{A} u & = F ~~~~~ {\rm for} ~~ u\in D_{L^2}(\mathcal{A}).\nonumber
\end{align}
Note that $D_{L^2}(\mathcal{A}) = H^2 (\mathbb{R}^{d+1})$ holds with equivalent norms because of \eqref{lipschitz}. The realization of $\mathcal{A}' = - \nabla_x\cdot A'\nabla_x$ in $L^2 (\mathbb{R}^d)$ is defined in a similar manner, and we have $D_{L^2}(\mathcal{A}') = H^2 (\mathbb{R}^d)$ with equivalent norms since $A'$ is Lipschitz continuous.
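Indeed, writing $A=(a_{i,j})_{1\leq i,j\leq d+1}$ and $\nabla = (\partial_1,\ldots,\partial_{d+1})$, one may expand, at least formally,
\begin{align*}
\mathcal{A} u = - \nabla\cdot ( A \nabla u ) = - \sum_{i,j} a_{i,j} \partial_i \partial_j u - \sum_{i,j} (\partial_i a_{i,j}) \partial_j u,
\end{align*}
and \eqref{lipschitz} guarantees $\partial_i a_{i,j}\in L^\infty (\mathbb{R}^{d+1})$, so the first-order term is controlled by $\| u \|_{H^1 (\mathbb{R}^{d+1})}$ and the identification $D_{L^2}(\mathcal{A}) = H^2 (\mathbb{R}^{d+1})$ follows from the standard elliptic regularity theory for operators with uniformly continuous leading coefficients.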
The following theorem shows the factorization of operators $\mathcal{A}$
and $\mathcal{A'}$, and it clarifies the relation between the Poisson operator and the Dirichlet-Neumann map:
\begin{thm}\label{thm.main.2} Under the same assumption as in Theorem
\ref{thm.main.1}, the following statements hold:
\\
{\rm (i)} The realization of $\mathcal{A}'$ in $L^2 (\mathbb{R}^d)$ and the realization of $\mathcal{A}$ in $L^2 (\mathbb{R}^{d+1})$ are respectively factorized as
\begin{align}
\mathcal{A}' & = M_b \mathcal{Q}_{\mathcal{A}} \mathcal{P}_{\mathcal{A}}, ~~~~~~~~~\mathcal{Q}_{\mathcal{A}} = M_{1/b} ( M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} )^*,\label{eq.thm.main.2.1}\\
\mathcal{A} & = - M_b (\partial_t - \mathcal{Q}_{\mathcal{A}}) (\partial_t + \mathcal{P}_{\mathcal{A}}).\label{eq.thm.main.2.2}
\end{align}
Here $( M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} )^*$ is the adjoint operator of $M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*}$ in $L^2 (\mathbb{R}^d)$, while $\mathcal{P}_{\mathcal{A}^*}$ is the Poisson operator in $L^2 (\mathbb{R}^d)$ associated with $\mathcal{A}^*=-\nabla\cdot A^* \nabla$.
\noindent {\rm (ii)} It follows that $D_{L^2}(\Lambda_{\mathcal{A}}) = D_{L^2}(\mathcal{Q}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ with equivalent norms and that
\begin{align}
\mathcal{P}_{\mathcal{A}} & = M_{1/b} \Lambda_{\mathcal{A}} + M_{{\bf r_2}/b} \cdot \nabla_x, \label{eq.thm.main.2.3} \\
\mathcal{Q}_{\mathcal{A}} & = M_{1/b} \Lambda_{\mathcal{A}} - M_{{\bf r_1}/b} \cdot \nabla_x - M_{(\nabla_x\cdot {\bf r_1})/b}, \label{eq.thm.main.2.4}
\end{align}
as the operators in $L^2 (\mathbb{R}^d)$.
\end{thm}
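At a formal level, the consistency of \eqref{eq.thm.main.2.2} with \eqref{eq.thm.main.2.3} and \eqref{eq.thm.main.2.4} can be checked directly. Since the coefficients are $t$-independent, $\partial_t$ commutes with $\mathcal{P}_{\mathcal{A}}$ and $\mathcal{Q}_{\mathcal{A}}$, and hence
\begin{align*}
- M_b (\partial_t - \mathcal{Q}_{\mathcal{A}}) (\partial_t + \mathcal{P}_{\mathcal{A}}) & = - M_b \partial_t^2 - M_b ( \mathcal{P}_{\mathcal{A}} - \mathcal{Q}_{\mathcal{A}} ) \partial_t + M_b \mathcal{Q}_{\mathcal{A}} \mathcal{P}_{\mathcal{A}} \\
& = - M_b \partial_t^2 - M_{{\bf r_1} + {\bf r_2}} \cdot \nabla_x \partial_t - M_{\nabla_x \cdot {\bf r_1}} \partial_t + \mathcal{A}',
\end{align*}
which agrees with the expansion of $\mathcal{A} = -\nabla\cdot A\nabla$ for $t$-independent coefficients.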
Finally we state the counterpart of Theorem \ref{thm.main.1} for
the Dirichlet-Neumann map in $H^s(\mathbb{R}^d)$.
\begin{thm}\label{thm.main.3}
Let $s \in [0,1]$. Under the same assumption as in Theorem \ref{thm.main.1}, $H^s(\mathbb{R}^d)$ is invariant under the action of the Dirichlet-Neumann semigroup $\{e^{-t\Lambda_{\mathcal{A}}}\}_{t\geq 0}$ in $L^2 (\mathbb{R}^d)$, and its restriction to $H^s(\mathbb{R}^d)$ defines a strongly continuous and analytic semigroup in $H^s(\mathbb{R}^d)$. Moreover, its generator, denoted again by $-\Lambda_{\mathcal{A}}$, satisfies $D_{H^{s}}(\Lambda_{\mathcal{A}}) = H^{1+s}(\mathbb{R}^d)$ with equivalent norms.
\end{thm}
It is also shown that $\Lambda_{\mathcal{A}}$ admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$; see Theorem \ref{thm.domain.DN}.
A similar result is obtained in \cite{ES}, where the
Dirichlet-Neumann map for the Laplace operator in a bounded domain with
$C^{1+\alpha}$ boundary is studied.
See also \cite{Taylor2} for general properties
of the Dirichlet-Neumann map and the relation with
the layer potentials.
This paper is organized as follows. In Section \ref{sec.preliminary} we state some general results on Poisson operators from \cite{MaekawaMiura1}, which play a central role in our argument. Section \ref{sec.domain} is the core of this paper. In Section \ref{subsec.domain.L^2} we study the Poisson semigroup and its generator in $L^2 (\mathbb{R}^d)$ with the aid of the calculus of the symbol $\mu_{\mathcal{A}}$, while the Dirichlet-Neumann map in $L^2 (\mathbb{R}^d)$ is studied in Section \ref{subsec.domain.DN.L^2}. The analysis of these operators in $H^s(\mathbb{R}^d)$ is performed in Sections \ref{subsec.domain.H^1} - \ref{subsec.proof.thm}. As stated in Remark \ref{rem.pseudo}, our approach recovers some properties of the pseudo-differential operator $\mu_{\mathcal{A}}(\cdot,D_x)$ in $H^s(\mathbb{R}^d)$, which is stated in the appendix.
\section{Preliminaries}\label{sec.preliminary}
In this section we recall some results in \cite{MaekawaMiura1}. As stated in the introduction, the Poisson semigroup $\{E_{\mathcal{A}}(t)\}_{t\geq 0}$ defines a strongly continuous and analytic semigroup in $H^{1/2}(\mathbb{R}^d)$, and thus we have the representation $E_{\mathcal{A}}(t) = e^{-t\mathcal{P}_{\mathcal{A}}}$ with its generator $-\mathcal{P}_{\mathcal{A}}$. The next proposition gives a condition under which $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ is extended as a semigroup in $L^2 (\mathbb{R}^d)$.
\begin{prop}[{\cite[Proposition 3.3]{MaekawaMiura1}}]\label{prop.pre.1} The following two statements are equivalent.
\noindent {\rm (i)} $D_{H^{1/2}} (\mathcal{P}_{\mathcal{A}})\subset D_{L^2}(\Lambda_{\mathcal{A}^*})$ and $\| \Lambda_{\mathcal{A}^*} f\|_{L^2 (\mathbb{R}^d)} \leq C \| f \|_{H^1 (\mathbb{R}^d)}$ holds for $f\in D_{H^{1/2}}(\mathcal{P}_{\mathcal{A}})$,
\noindent {\rm (ii)} $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ is extended as a strongly continuous semigroup in $L^2 (\mathbb{R}^d)$ and $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ is continuously embedded in $H^1 (\mathbb{R}^d)$.
\noindent Moreover, if the condition {\rm (ii)} (and hence, {\rm (i)}) holds then $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ is continuously embedded in $D_{L^2}(\Lambda_{\mathcal{A}})$, $H^1 (\mathbb{R}^d)$ is continuously embedded in $D_{L^2}(\Lambda_{\mathcal{A}^*})$, and it follows that
\begin{align}
\mathcal{P}_{\mathcal{A}} f & = M_{1/b} \Lambda_{\mathcal{A}} f + M_{{\bf r_2}/b} \cdot \nabla_x f,\label{assume.prop.pre.1.1}\\
\langle A' \nabla_x f, \nabla_x g\rangle _{L^2(\mathbb{R}^d)} & = \langle \mathcal{P}_{\mathcal{A}} f, \Lambda_{\mathcal{A}^*} g + M_{\bf \bar{r}_1}\cdot \nabla_x g\rangle _{L^2(\mathbb{R}^d)}\label{assume.prop.pre.1.2}
\end{align}
for $f\in D_{L^2}(\mathcal{P}_{\mathcal{A}})$ and $g\in H^1(\mathbb{R}^d)$.
\end{prop}
In order to show $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ we will use
\begin{prop}[{\cite[Corollary 3.5, Proposition 3.6]{MaekawaMiura1}}]\label{prop.pre.2} Assume that $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ and $\{e^{-t\mathcal{P}_{\mathcal{A}^*}}\}_{t\geq 0}$ are extended as strongly continuous semigroups in $L^2 (\mathbb{R}^d)$ and that $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ and $D_{L^2}(\mathcal{P}_{\mathcal{A}^*})$ are continuously embedded in $H^1 (\mathbb{R}^d)$. Then we have
\begin{align}
& \langle A'\nabla_x f, \nabla_x g\rangle _{L^2 (\mathbb{R}^d)} = \langle \mathcal{P}_{\mathcal{A}}f, M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} g\rangle _{L^2 (\mathbb{R}^d)},~~~~~~~~ f\in D_{L^2}(\mathcal{P}_{\mathcal{A}}), ~~g\in D_{L^2}(\mathcal{P}_{\mathcal{A}^*}),\\
& C' \| f \|_{H^1 (\mathbb{R}^d)} \leq \| \mathcal{P}_{\mathcal{A}} f\|_{L^2 (\mathbb{R}^d)} + \| f\|_{L^2(\mathbb{R}^d)} \leq C \| f\|_{H^1 (\mathbb{R}^d)},~~~~~~~~~f\in D_{L^2}(\mathcal{P}_{\mathcal{A}}).
\end{align}
If, in addition, $\displaystyle \liminf_{t\rightarrow 0} \| \,{\rm d} / \,{\rm d} t ~ e^{-t\mathcal{P}_{\mathcal{A}}} f \|_{L^2 (\mathbb{R}^d)} <\infty$ holds for all $f\in C_0^\infty (\mathbb{R}^d)$, then $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ with equivalent norms.
\end{prop}
As for the factorizations of $\mathcal{A}'$ and $\mathcal{A}$, we have
\begin{prop}[{\cite[Lemma 3.7]{MaekawaMiura1}}]\label{prop.pre.3} Assume that the semigroups $\{ e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ and $\{ e^{-t\mathcal{P}_{\mathcal{A}^*}}\}_{t\geq 0}$ in $H^{1/2}(\mathbb{R}^d)$ are extended as strongly continuous semigroups in $L^2 (\mathbb{R}^d)$ and that $D_{L^2}(\mathcal{P}_{\mathcal{A}})= D_{L^2}(\mathcal{P}_{\mathcal{A}^*}) = H^1 (\mathbb{R}^d)$ holds with equivalent norms. Then $H^1 (\mathbb{R}^d)$ is continuously embedded in $D_{L^2}(\Lambda_{\mathcal{A}})\cap D_{L^2}(\Lambda_{\mathcal{A}^*})$ and
\begin{align}
\mathcal{P}_{\mathcal{A}} f & = M_{1/b} \Lambda_{\mathcal{A}} f + M_{{\bf r_2}/b}\cdot \nabla_x f,~~~~~~~~~~f\in H^1 (\mathbb{R}^d),\label{eq.prop.pre.3.1} \\
\mathcal{P}_{\mathcal{A}^*} g & = M_{1/\bar{b}} \Lambda_{\mathcal{A}^*} g + M_{{\bf \bar{r}_1}/\bar{b}}\cdot \nabla_x g,~~~~~~~~~~ g\in H^1 (\mathbb{R}^d).\label{eq.prop.pre.3.2}
\end{align}
Moreover, the realizations of $\mathcal{A}'$ in $L^2 (\mathbb{R}^d)$ and of $\mathcal{A}$ in $L^2 (\mathbb{R}^{d+1})$ are respectively factorized as
\begin{align}
\mathcal{A}' & = M_b \mathcal{Q}_{\mathcal{A}} \mathcal{P}_{\mathcal{A}},~~~~~~~\quad \mathcal{Q}_{\mathcal{A}} = M_{1/b} ( M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} )^*,\label{eq.prop.pre.3.3'} \\
\mathcal{A} & = - M_b (\partial _t - \mathcal{Q}_{\mathcal{A}} ) ( \partial_t + \mathcal{P}_{\mathcal{A}}).\label{eq.prop.pre.3.3}
\end{align}
Here $( M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} )^*$ is the adjoint of $ M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*}$ in $L^2 (\mathbb{R}^d)$.
\end{prop}
\section{Analysis of Poisson operator in $H^s (\mathbb{R}^d)$}\label{sec.domain}
To study the Poisson operator we consider the boundary value problem
\begin{equation}\label{eq.dirichlet}
\begin{cases}
& \mathcal{A} u = F~~~~~~~~{\rm in}~~~\mathbb{R}^{d+1}_+,\\
& ~~u = g~~~~~~~~~{\rm on}~~\partial\mathbb{R}^{d+1}_+.
\end{cases}
\end{equation}
Let $x,\xi\in \mathbb{R}^d$ and let $\mu_{\mathcal{A}}=\mu_{\mathcal{A}}(x,\xi)\in \{\mu \in \mathbb{C} ~|~{\rm Im} \,\mu >0\}$ be the root of
\begin{align}
b (x) \mu^2 + \big ({\bf r_1} (x) + {\bf r_2} (x) \big ) \cdot \xi \mu + \langle A'(x) \xi,\xi\rangle =0.\label{eq.root}
\end{align}
Then we have
\begin{align}
\mu_{\mathcal{A}} (x,\xi) = - \frac{{\bf v} (x) \cdot \xi}{2} + i \big \{ \frac{1}{b (x) } \langle A'(x) \xi,\xi\rangle -\frac{1}{4}({\bf v}(x) \cdot \xi )^2\big \}^\frac12,~~~~~~~~{\bf v} = \frac{{\bf r_1} + {\bf r_2}}{b}.\label{def.mu_A}
\end{align}
Here the square root in \eqref{def.mu_A} is taken as the principal branch. From \eqref{ellipticity} one can check the estimates
\begin{align}
|\mu_{\mathcal{A}} (x,\xi) | \leq C |\xi |, ~~~&~~~ {\rm Im} \,\mu_{\mathcal{A}} (x,\xi) \geq C' |\xi |, \label{estimate.mu_A.1}\\
{\rm Re} \big ( \langle A'\xi,\xi\rangle - \frac{b}{4} ({\bf v}\cdot\xi)^2 \big ) & \geq \nu_1 \big ( |\xi|^2 + \frac{|{\bf v}\cdot \xi |^2}{4} \big ), \label{estimate.mu_A.2}
\end{align}
where $C, C'$ are positive constants depending only on $\nu_1, \nu_2$. As is well known, $\mu_{\mathcal{A}}$ describes the principal symbol of the Poisson operator.
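Let us briefly recall how \eqref{def.mu_A} is derived from \eqref{eq.root}. Dividing \eqref{eq.root} by $b(x)$ and completing the square, we obtain
\begin{align*}
\big ( \mu + \frac{{\bf v} (x) \cdot \xi}{2} \big )^2 = - \big \{ \frac{1}{b (x) } \langle A'(x) \xi,\xi\rangle -\frac{1}{4}({\bf v}(x) \cdot \xi )^2\big \},
\end{align*}
and taking the principal branch of the square root and selecting the root with ${\rm Im}\,\mu>0$ yields \eqref{def.mu_A}; the lower bound for ${\rm Im}\,\mu_{\mathcal{A}}$ in \eqref{estimate.mu_A.1} then follows from \eqref{estimate.mu_A.2}.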
\subsection{Domain of Poisson operator in $L^2 (\mathbb{R}^d)$}\label{subsec.domain.L^2}
The aim of this section is to prove that the domain of the Poisson operator in $L^2 (\mathbb{R}^d)$ is $H^1 (\mathbb{R}^d)$. For a given $h\in \mathcal{S}(\mathbb{R}^d)$ we set
\begin{align}
\big ( U_{\mathcal{A},0} (t) h \big )(x) = \frac{1}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} e^{i t \mu_{\mathcal{A}} (x,\xi) + ix\cdot \xi } \hat{h}(\xi) \,{\rm d} \xi,\label{def.U_A0}
\end{align}
where $\hat{h}$ is the Fourier transform of $h$. The operator $U_{\mathcal{A},0}(t)$ represents the principal part of the Poisson semigroup, and we first give some estimates of $U_{\mathcal{A},0}(t)$. To this end let us introduce the operator
\begin{align}
\big ( G_p (t) h \big ) (x) = \frac{1}{(2\pi)^\frac{d}{2}}\int_{\mathbb{R}^d} p (x,\xi,t ) e^{it\mu_{\mathcal{A}} (x,\xi) + ix\cdot \xi} \hat{h} (\xi )\,{\rm d}\xi,\label{def.G_p}
\end{align}
for a given measurable function $p=p(x,\xi,t)$ on $\mathbb{R}^d\times \mathbb{R}^d \times \mathbb{R}_+$.
\begin{lem}\label{lem.G_p} Let $T\in (0,\infty]$. Assume that $p=p(x,\xi,t)$ satisfies
\begin{align}
\sup_{x\in \mathbb{R}^d,\xi\ne 0}\sup_{0<t<T} ~ \big ( \sum_{k=0}^{d+1} (1 + t |\xi| )^{-l_k} |\xi |^{k} |\nabla_\xi^k p (x,\xi,t) | + (t |\xi| )^{-l_{j_0}} |\xi |^{j_0} |\nabla_\xi^{j_0} p (x,\xi,t) | \big ) \leq L<\infty \label{assume.lem.G_p.1}
\end{align}
for some $l_k\geq 0$ and for some $l_{j_0}>0$, $j_0\in \{0,\cdots,d\}$. Then we have
\begin{align}
\sup_{0<t<T} \| G_p (t) h\|_{L^2 (\mathbb{R}^d)} \leq C \| h \|_{L^2 (\mathbb{R}^d)},\label{est.lem.G_p.1}
\end{align}
where $C$ depends only on $d$, $\nu_1$, $\nu_2$, $l_k$, $l_{j_0}$, and $L$. Furthermore, if $p$ satisfies
\begin{align}
\sup_{x\in \mathbb{R}^d, \xi\ne 0} \sup_{t>0} ~ \sum_{k=0}^{d+1} (t |\xi| )^{-l_k} |\xi |^{k} |\nabla_\xi^{k} p (x,\xi,t) | \leq L<\infty \label{assume.lem.G_p.2}
\end{align}
for some $l_k>0$, $k=0,\cdots, d+1$, then
\begin{align}
\int_0^\infty \| G_p (t) h \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} t}{t} \leq C' \| h \|_{L^2 (\mathbb{R}^d)}^2,\label{est.lem.G_p.2}
\end{align}
where $C'$ depends only on $d$, $\nu_1$, $\nu_2$, $l_k$, and $L$.
\end{lem}
The proof of Lemma \ref{lem.G_p} is rather standard and is given in the appendix for the convenience of the reader. Now we have
\begin{lem}\label{lem.U_0} Let $k\in \mathbb{N}\cup \{0\}$. For $h\in \mathcal{S} (\mathbb{R}^d)$ and $t>0$ it follows that
\begin{align}
& \| t^k \frac{\,{\rm d}^k}{\,{\rm d} t^k}U_{\mathcal{A}, 0} (t) h \|_{L^2 (\mathbb{R}^d)} + \| t e^{-t} \nabla_x U_{\mathcal{A},0} (t) h \|_{L^2 (\mathbb{R}^d)} + \| t U_{\mathcal{A},0} (t) (-\Delta_x)^\frac12 h \|_{L^2 (\mathbb{R}^d)} \nonumber \\
&{\phantom{ \| U_{\mathcal{A}, 0} (t) h \|_{L^2 (\mathbb{R}^d)} + \| t \nabla U_{\mathcal{A},0} (t) h \|_{L^2 (\mathbb{R}^d)}}} ~~~~~~~~ + \| [U_{\mathcal{A},0} (t), \nabla_x] h \|_{L^2 (\mathbb{R}^d)} \leq C \| h \|_{L^2 (\mathbb{R}^d)},\label{est.lem.U_0.1}
\end{align}
where $[B_1,B_2]$ is the commutator of the operators $B_1$, $B_2$, and
\begin{align}
\int_0^\infty \| e^{-t} U_{\mathcal{A},0} (t) h \|_{\dot{H}^\frac12 (\mathbb{R}^d)}^2 \,{\rm d} t \leq C \| h \|_{L^2 (\mathbb{R}^d)}^2. \label{est.lem.U_0.3}
\end{align}
In particular, $\displaystyle \lim_{t\rightarrow 0} U_{\mathcal{A},0} (t) h =h$ in $L^2 (\mathbb{R}^d)$ for any $h\in L^2 (\mathbb{R}^d)$.
\end{lem}
\noindent {\it Proof.} The estimate \eqref{est.lem.U_0.1} is a direct consequence of Lemma \ref{lem.G_p}. For example, we take $p(x,\xi,t) =t |\xi|$ for the estimate of $t U_{\mathcal{A},0} (t) (-\Delta_x)^{1/2} h$, and take $p (x,\xi,t) = i t \nabla_x \mu_{\mathcal{A}} (x,\xi)$ for $[U_{\mathcal{A},0}, \nabla_x ]h$, and so on. As for \eqref{est.lem.U_0.3}, we use the Schur lemma as in the proof of \eqref{est.lem.G_p.2}.
By \eqref{est.lem.U_0.1} and $\| f \|_{\dot{H}^{1/2}}\leq \| f \|_{L^2}^{1/2} \| \nabla_x f \|_{L^2}^{1/2}$ it is easy to see that $t^{1/2} e^{-t} \| U_{\mathcal{A},0} (t) h \|_{\dot{H}^{1/2} (\mathbb{R}^d)} \leq C \| h \|_{L^2 (\mathbb{R}^d)}$ for all $t>0$. Let $t\geq s>0$, and let $\psi_s$ be the function defined in the proof of Lemma \ref{lem.G_p} in the appendix. Then we have from $\psi_s = \Delta_x \tilde \psi_s$ and \eqref{est.lem.U_0.1},
\begin{align*}
t^\frac12 \| e^{-t} U_{\mathcal{A},0} (t) \psi_s * h \|_{\dot{H}^\frac12 (\mathbb{R}^d)} & \leq t^\frac12 \| e^{-t} U_{\mathcal{A},0} (t) \Delta_x \tilde \psi_s * h \|_{L^2 (\mathbb{R}^d)}^\frac12 \| \nabla_x e^{-t} U_{\mathcal{A},0} (t) \psi_s * h \|_{L^2 (\mathbb{R}^d)}^\frac12\\
& \leq C t^{-\frac12} \| \nabla_x \tilde \psi_s * h \|_{L^2 (\mathbb{R}^d)}^\frac12 \| h \|_{L^2 (\mathbb{R}^d)}^\frac12 \leq C t^{-\frac12} s^\frac12 \| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
Let $s\geq t>0$. Then the relation $\nabla_x U_{\mathcal{A},0} (t) = t^{1/2} G_p (t) (-\Delta_x)^{1/4} + U_{\mathcal{A},0} (t) \nabla_x$ with $p = i t^{1/2}|\xi|^{-1/2} \nabla_x \mu_{\mathcal{A}} $ combined with \eqref{est.lem.G_p.1} and \eqref{est.lem.U_0.1} yields
\begin{align*}
\| \nabla_x U_{\mathcal{A},0} (t) h \|_{L^2(\mathbb{R}^d)}\leq C t^\frac12 \| (-\Delta_x)^\frac14 \psi_s * h \|_{L^2(\mathbb{R}^d)} + C \| \nabla_x \psi_s * h \|_{L^2 (\mathbb{R}^d)} \leq C( t^\frac12 s^{-\frac12} + s^{-1} ),
\end{align*}
which implies
\begin{align*}
t^\frac12 \| e^{-t} U_{\mathcal{A},0} (t) \psi_s * h \|_{\dot{H}^\frac12 (\mathbb{R}^d)} \leq C t^\frac12 e^{-t} ( t^\frac14 s^{-\frac14} + s^{-\frac12} ) \| h \|_{L^2 (\mathbb{R}^d)} \leq C t^\frac14 s^{-\frac14} \| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
Collecting the estimates above, we can apply the Schur lemma \cite[pp.643-644]{Grafakos} to $\{ t^{1/2} e^{-t} (-\Delta_x)^{1/4} U_{\mathcal{A},0} (t)\}_{t>0}$ to obtain \eqref{est.lem.U_0.3}. The last statement of the lemma follows from \eqref{est.lem.U_0.1} and a density argument. The proof is complete.
We look for a solution $u$ to \eqref{eq.dirichlet} with $F=0$ and $g=h$ of the form
\begin{align}
u=M_\chi U_{\mathcal{A},0} h + U_{\mathcal{A},1} h,
\end{align}
where $\chi=\chi(t)$ is a smooth cut-off function such that $\chi(t)=1$ if $t\in [0,1]$ and $\chi(t) =0$ if $t\geq 2$, and $U_{\mathcal{A},1} h$ is a solution to \eqref{eq.dirichlet} with $F= - \mathcal{A} (M_ \chi U_{\mathcal{A},0} h)$ and $g=0$.
\begin{lem}\label{lem.U_1} For any $h\in \mathcal{S} (\mathbb{R}^d)$ there exists a unique solution $U_{\mathcal{A},1} h\in \dot{H}_0^1(\mathbb{R}^{d+1}_+)$ to \eqref{eq.dirichlet} with $F= - \mathcal{A} ( M_\chi U_{\mathcal{A},0} h)$ and $g=0$, which satisfies
\begin{align}
\| \nabla U_{\mathcal{A},1} h\|_{L^2 (\mathbb{R}^{d+1}_+)}\leq C \| (I - \Delta_x)^{-\frac14} h \|_{L^2 (\mathbb{R}^d)}.\label{est.lem.U_1.1}
\end{align}
\end{lem}
\noindent {\it Proof.} For simplicity we write $U_0$ and $U_1$ for $U_{\mathcal{A},0}$ and $U_{\mathcal{A},1}$. We set
$$A'=(a_{i,j})_{1\leq i,j\leq d},~~~~~~~~~~ ~{\bf a'} = \nabla_x\cdot A'= (\sum_{1\leq k\leq d}\partial_k a_{k,j})_{1\leq j\leq d},$$
and
\begin{equation}
\Pi h =
\begin{pmatrix}
& \Pi' h \mbox{} ~\\
& \Pi_{d+1} h \mbox{} ~
\end{pmatrix}
=
\begin{pmatrix}
& \displaystyle G_{ it A' \nabla_x\mu_{\mathcal{A}} } h \\
& \displaystyle G_{\nabla_x\cdot {\bf r_1}} h + G_{it ({\bf r_1} +{\bf r_2} )\cdot \nabla_x\mu_{\mathcal{A}}} h
\end{pmatrix}
\in \mathbb{C}^{d+1}.\label{proof.lem.U_1.0}
\end{equation}
Then a direct computation yields
\begin{align*}
-\mathcal{A} U_0 h = \nabla\cdot \Pi h + G_\zeta h, ~~~~~~\zeta (x,\xi, t) & = i \big ({\bf r_1} (x) + {\bf r_2}(x) + i t A' (x) \xi \big ) \cdot \nabla_x \mu_{\mathcal{A}} (x,\xi) + i {\bf a'} (x) \cdot \xi.
\end{align*}
Hence $U_1h$ should be constructed as the solution of \eqref{eq.dirichlet} with $g=0$ and
\begin{align}
F & = - \mathcal{A} (M_\chi U_0 h) = \nabla \cdot M_\chi \Pi h + M_\chi G_\zeta h + R h,\label{proof.lem.U_1.1} \\
R h & = \nabla_x \cdot M_{({\bf r_1} + {\bf r_2})\partial_t \chi} U_0 h+ \partial_t M_{b \partial_t\chi} U_0 h - M_{\partial_t\chi } \Pi_{d+1} h - M_{b\partial_t^2\chi + \nabla_x\cdot {\bf r_2}\partial_t\chi } U_0 h.\nonumber
\end{align}
To obtain \eqref{est.lem.U_1.1},
let us estimate each term of $F$ in $\dot{H}^{-1}(\mathbb{R}^{d+1}_+)$.
Note that $R h$ is supported in $\{(x,t)\in \mathbb{R}^{d+1}_+~|~1\leq t\leq 2\}$ by the definition of $\chi$. In particular, it is not difficult to show
\begin{align}
| \langle R h, \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_+)} | \leq C \| (I-\Delta_x)^{-\frac14} h \|_{L^2 (\mathbb{R}^d)} \| \nabla \varphi \|_{L^2 (\mathbb{R}^{d+1}_+)},~~~~~~\varphi\in H_0^1(\mathbb{R}^{d+1}_+).\label{proof.lem.U_1.2}
\end{align}
Thus we focus on the leading terms $\nabla \cdot M_\chi \Pi h$ and $M_\chi G_\zeta h$. By using Lemma \ref{lem.G_p} one can easily check the estimates
\begin{align*}
|\langle \nabla \cdot M_\chi \Pi h , \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_+)} |& \leq \| M_\chi \Pi h \|_{L^2 (\mathbb{R}^{d+1}_+)} \| \nabla \varphi \|_{L^2 (\mathbb{R}^{d+1}_+)} \leq C \| h \|_{L^2 (\mathbb{R}^d)} \| \nabla \varphi \|_{L^2 (\mathbb{R}^{d+1}_+)},\\
|\langle M_\chi G_\zeta h, \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_+)} | & = | \int_0^\infty \langle M_\chi G_\zeta h, \int_0^t \partial_s \varphi \,{\rm d} s \rangle _{L^2 (\mathbb{R}^d)} \,{\rm d} t | \\
& \leq \int_0^\infty t^\frac12 \| M_\chi G_\zeta h \|_{L^2 (\mathbb{R}^d)}\,{\rm d} t \| \nabla \varphi \|_{L^2(\mathbb{R}^{d+1}_+)}\leq C \| h \|_{L^2 (\mathbb{R}^d)} \| \nabla \varphi \|_{L^2 (\mathbb{R}^{d+1}_+)},
\end{align*}
for $\varphi \in H^1_0 (\mathbb{R}^{d+1}_+)$.
Next we consider the estimate of $M_\chi G_\zeta (-\Delta_x)^\frac14 h$.
By using the following general relation for $G_p$ that
\begin{align}
G_p (t) h =\partial_t G_{p/(i\mu_{\mathcal{A}})} (t) h - G_{\partial_t p / (i\mu_{\mathcal{A}})} (t)h, ~~~~~~G_p (t) (-\Delta_x)^\frac14 h = G_{p |\xi|^{1/2}} (t) h,
\label{proof.lem.U_1.4}
\end{align}
and by using $\partial_t^2 \zeta =0$
and $\partial_t\mu_{\mathcal{A}}=0$,
we observe that it suffices to estimate
\[
\partial_t M_\chi G_{\zeta |\xi|^{1/2} /(i\mu_{\mathcal{A}}) } h + \partial_t M_\chi G_{\partial_t \zeta |\xi|^{1/2}/\mu_{\mathcal{A}}^2} h,
\]
for the other terms are of lower order. We see
\begin{align*}
|\langle \partial_t M_\chi G_{\zeta |\xi|^{1/2} /(i\mu_{\mathcal{A}}) } h + \partial_t M_\chi G_{\partial_t \zeta |\xi|^{1/2}/\mu_{\mathcal{A}}^2} h, \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_+)} | & \leq C \| M_\chi G_{\tilde \zeta } h\|_{L^2 (\mathbb{R}^{d+1}_+)} \| \nabla \varphi \|_{L^2 (\mathbb{R}^{d+1}_+)}
\end{align*}
for $\varphi\in H^1_0(\mathbb{R}^{d+1}_+)$, where $\tilde \zeta = \zeta |\xi|^{1/2} /(i\mu_{\mathcal{A}}) + \partial_t \zeta |\xi|^{1/2}/\mu_{\mathcal{A}}^2$. Then the bound of $\| M_\chi G_{\tilde \zeta } h\|_{L^2 (\mathbb{R}^{d+1}_+)}$ is reduced
to that of
$\int_{\mathbb{R}_+} \| G_p (t) h \|_{L^2(\mathbb{R}^d)}^2 t^{-1} \,{\rm d} t$ with $p$ satisfying \eqref{assume.lem.G_p.2}, and hence, $\| M_\chi G_{\tilde \zeta } h\|_{L^2 (\mathbb{R}^{d+1}_+)} \leq C \| h \|_{L^2 (\mathbb{R}^d)}$ follows
from \eqref{est.lem.G_p.2}, as desired. Similarly, we have
\begin{align*}
\| M_\chi \Pi (-\Delta_x)^\frac14 h \|_{L^2 (\mathbb{R}^{d+1}_+)} \leq C \| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
Collecting the estimates above, we arrive at
\begin{align}
| \langle - \mathcal{A} (M_ \chi U_0 h), \varphi\rangle _{L^2 (\mathbb{R}^{d+1}_+)} |\leq C \| (I-\Delta_x )^{-\frac14} h \|_{L^2 (\mathbb{R}^d)} \| \nabla\varphi \|_{L^2(\mathbb{R}^{d+1}_+)},~~~~~~~\varphi \in H^1_0 (\mathbb{R}^{d+1}_+).\label{proof.lem.U_1.5}
\end{align}
Thus, by the Lax-Milgram theorem there is a unique solution $U_1h\in \dot{H}^1_0 (\mathbb{R}^{d+1}_+)$ to \eqref{eq.dirichlet} with $F=- \mathcal{A} (M_\chi U_0 h)$ and $g=0$, which satisfies \eqref{est.lem.U_1.1}. The proof is complete.
Lemmas \ref{lem.U_0} and \ref{lem.U_1} imply the estimate $\| e^{-t\mathcal{P}_{\mathcal{A}}} h \|_{L^2 (\mathbb{R}^d)} \leq C \| h\|_{L^2 (\mathbb{R}^d)}$ for $t\in (0,1]$ and $h\in H^1 (\mathbb{R}^d)$. Since the same argument can be applied for $\{e^{-t\mathcal{P}_{\mathcal{A}^*}}\}_{t\geq 0}$, we have
\begin{cor}\label{cor.lem.U_0.U_1} The semigroups $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ and $\{e^{-t\mathcal{P}_{\mathcal{A}^*}}\}_{t\geq 0}$ in $H^{1/2}(\mathbb{R}^d)$ are extended as strongly continuous analytic semigroups in $L^2 (\mathbb{R}^d)$. Moreover, $H^1 (\mathbb{R}^d)$ is continuously embedded in $D_{L^2}(\Lambda_{\mathcal{A}})\cap D_{L^2}(\Lambda_{\mathcal{A}^*})$, and $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ and $D_{L^2}(\mathcal{P}_{\mathcal{A}^*})$ are continuously embedded in $H^1 (\mathbb{R}^d)$. We also have the estimate
\begin{align}
\| e^{-t\mathcal{P}_{\mathcal{A}}} f \|_{L^2 (\mathbb{R}^d)} \leq C t^{-\frac12} \| f \|_{H^{-\frac12} (\mathbb{R}^d)},~~~~~~~~~0<t<1.\label{est.cor.lem.U_0.U_1}
\end{align}
\end{cor}
\noindent {\it Proof.} By the variational characterization of $\dot{H}^{1/2}(\mathbb{R}^d)$, the estimate \eqref{est.lem.U_1.1} implies that
\begin{align}
\| U_{\mathcal{A},1} (t) h \|_{\dot{H}^\frac12 (\mathbb{R}^d)} \leq \| \nabla U_{\mathcal{A},1} h \|_{L^2 (\mathbb{R}^{d+1}_+)} \leq C \| h \|_{H^{-\frac12} (\mathbb{R}^d)}, ~~~~~~~~~t>0.\label{proof.cor.lem.U_0.U_1.1}
\end{align}
Thus \eqref{est.lem.U_0.3} and \eqref{proof.cor.lem.U_0.U_1.1} verify the condition (i) of \cite[Proposition 4.3]{MaekawaMiura1}, which gives $D_{L^2}(\mathcal{P}_{\mathcal{A}})\hookrightarrow H^1 (\mathbb{R}^d)$. The same is true for $D_{L^2}(\mathcal{P}_{\mathcal{A}^*})$, and then we also obtain the embedding $H^1 (\mathbb{R}^d)\hookrightarrow D_{L^2}(\Lambda_{\mathcal{A}}) \cap D_{L^2}(\Lambda_{\mathcal{A}^*})$ by Proposition \ref{prop.pre.1}. Now it remains to show \eqref{est.cor.lem.U_0.U_1}. Let us recall the representation $e^{-t\mathcal{P}_{\mathcal{A}}} f = M_\chi U_{\mathcal{A},0} (t) f + U_{\mathcal{A},1} (t)f$. By the definition of $G_p$ in \eqref{def.G_p} we have $U_{\mathcal{A},0} (t) f = t^{-1/2} G_{t^{1/2}\langle \xi \rangle^{1/2}} (I - \Delta_x)^{-1/4}f$, where $\langle \xi \rangle = (1+|\xi|^2)^{1/2}$. Then it is easy to see that $p(x,\xi,t)=t^{1/2}\langle \xi \rangle^{1/2}$ satisfies \eqref{assume.lem.G_p.1} with $T=1$, and therefore,
\begin{align*}
\| U_{\mathcal{A},0} (t) f \|_{L^2 (\mathbb{R}^d)} = t^{-\frac12} \| G_{t^{1/2}\langle \xi \rangle^{1/2}} (I - \Delta_x)^{-1/4}f\|_{L^2 (\mathbb{R}^d)} \leq Ct^{-\frac12} \| (I - \Delta_x)^{-1/4}f\|_{L^2 (\mathbb{R}^d)}.
\end{align*}
On the other hand, we have already proved the desired estimate for $U_{\mathcal{A},1} (t) f$ by \eqref{est.lem.U_1.1}. The proof is complete.
In order to establish the characterization of $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ and $D_{L^2}(\mathcal{P}_{\mathcal{A}^*})$,
we need further estimates of $U_{\mathcal{A},1}$ as follows.
\begin{lem}\label{lem.U_1'} For any $h\in H^{1/2}(\mathbb{R}^d)$ we have $\,{\rm d} /\,{\rm d} t ~ U_{\mathcal{A},1} (t) h\in C ([0,\infty); H^{1/2} (\mathbb{R}^d))$ and
\begin{align}
\sup_{t>0} \| \frac{\,{\rm d} }{\,{\rm d} t} U_{\mathcal{A},1} (t) h \|_{H^\frac12 (\mathbb{R}^d)} \leq C \| h \|_{H^\frac12 (\mathbb{R}^d)}. \label{est.lem.U_1'.1}
\end{align}
\end{lem}
\noindent {\it Proof.} As in the proof of Lemma \ref{lem.U_1} we write $U_0$ and $U_1$ for $U_{\mathcal{A},0}$ and $U_{\mathcal{A},1}$. First we assume that $h\in D_{L^2}(\mathcal{P}_{\mathcal{A}})$. Let us recall that $U_1 (t)h\in \dot{H}^1_0(\mathbb{R}^{d+1}_+)$ solves \eqref{eq.dirichlet} with $F= - \mathcal{A}(M_\chi U_0 h) = \nabla\cdot M_\chi \Pi h + M_\chi G_\zeta h + R h$ and $g=0$.
We first show that $\lim_{\delta\rightarrow 0}\,{\rm d}/\,{\rm d} t~ U_1(\delta)h$ exists in
$H^{-1/2}(\mathbb{R}^d)$. Set $\mathbb{R}^{d+1}_{\delta,+}=\{(x,t)\in \mathbb{R}^{d+1}~|~t>\delta\}$ for $\delta>0$. Since
$\,{\rm d} /\,{\rm d} t~ U_1 h \in \cap_{\delta>0} H^1 (\mathbb{R}^{d+1}_{\delta,+})$,
we have
\begin{align}
\langle \frac{\,{\rm d}}{\,{\rm d} t} U_1 (\delta ) h + M_{{\bf r_2}/b}\cdot \nabla_x U_1 (\delta ) h, \phi \rangle _{L^2 (\mathbb{R}^{d})} & = - \langle A \nabla U_1 h , \nabla\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{\delta,+})}- \langle M_\chi \Pi' h , \nabla_x \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{\delta,+})}\nonumber \\
& ~~~ + \langle \partial_t M_\chi \Pi_{d+1} h + M_\chi G_\zeta h + R h,\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{\delta,+})}\label{proof.lem.U_1'.1}
\end{align}
for any $\delta\in (0,1/2)$ and $\varphi\in H^1 (\mathbb{R}^{d+1}_{\delta,+})$ with $\varphi|_{t=\delta} = M_{1/\bar{b}}\phi$, $\phi\in H^{1/2}(\mathbb{R}^d)$. By taking $\varphi = e^{-(t-\delta)\mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}}\phi$, a calculation similar to that in the proof of Lemma \ref{lem.U_1} yields the estimate for the right-hand side of \eqref{proof.lem.U_1'.1} from above by $C \| h \|_{H^{1/2}(\mathbb{R}^d)}\|M_{1/\bar{b}} \phi \|_{H^{1/2}(\mathbb{R}^d)}$, and hence by $C \| h \|_{H^{1/2}(\mathbb{R}^d)}\| \phi \|_{H^{1/2}(\mathbb{R}^d)}$. On the other hand, we have
\begin{align}
|\langle M_{{\bf r_2}/b}\cdot \nabla_x U_1 (\delta ) h, \phi \rangle _{L^2 (\mathbb{R}^{d})}| & \leq C \| \nabla_x U_1 (\delta ) h \|_{L^2 (\mathbb{R}^d)} \| \phi \|_{L^2 (\mathbb{R}^d)} \nonumber \\
& = C \| \nabla_x \big ( e^{-\delta \mathcal{P}_{\mathcal{A}}} - U_0 (\delta ) \big ) h\|_{L^2 (\mathbb{R}^d)} \| \phi \|_{L^2 (\mathbb{R}^d)} \nonumber \\
& \leq C \big ( \| [\nabla_x, e^{-\delta \mathcal{P}_{\mathcal{A}}} ] h \|_{L^2 (\mathbb{R}^d)} + \| [\nabla_x, U_0 (\delta ) ] h\|_{L^2 (\mathbb{R}^d)} \big ) \| \phi \|_{L^2 (\mathbb{R}^d)}\nonumber \\
& ~~~ + C \| U_1 (\delta ) \nabla_x h\|_{L^2 (\mathbb{R}^d)} \| \phi \|_{L^2 (\mathbb{R}^d)}. \label{proof.lem.U_1'.2}
\end{align}
In particular, we see $\| M_{{\bf r_2}/b}\cdot \nabla_x U_1 (\delta ) h \|_{L^2(\mathbb{R}^d)}\rightarrow 0$ as $\delta\rightarrow 0$, for the facts $h\in D_{L^2}(\mathcal{P}_{\mathcal{A}})$ and $D_{L^2}(\mathcal{P}_{\mathcal{A}})\hookrightarrow H^1 (\mathbb{R}^d)$ by Corollary \ref{cor.lem.U_0.U_1} imply $\displaystyle \lim_{\delta\rightarrow 0} \| [\nabla_x, e^{-\delta \mathcal{P}_{\mathcal{A}}} ] h \|_{L^2 (\mathbb{R}^d)}=0$, while the convergence $\displaystyle \lim_{\delta\rightarrow 0} \| [\nabla_x, U_0 (\delta ) ] h\|_{L^2 (\mathbb{R}^d)}=0$ follows from \eqref{est.lem.U_0.1} and the density argument. The limit $\displaystyle \lim_{\delta\rightarrow 0} \| U_1 (\delta ) \nabla_x h\|_{L^2 (\mathbb{R}^d)}=0$ follows from Lemma \ref{lem.U_1} with $h$ replaced by $\nabla_x h$. By applying \eqref{proof.lem.U_1'.1} to another $\delta'\in (0,1/2)$ and then by estimating the difference $\langle \,{\rm d}/\,{\rm d} t ~ U_1 (\delta ) h - \,{\rm d}/\,{\rm d} t ~ U_1 (\delta' ) h, \phi \rangle _{L^2 (\mathbb{R}^{d})}$ we conclude that $\,{\rm d}/\,{\rm d} t~ U_1 (\delta ) h$ converges to some limit, denoted by $S_{\mathcal{A},1} h$, in $H^{-1/2}(\mathbb{R}^d)$ as $\delta\rightarrow 0$. We claim that
\begin{align}
\| S_{\mathcal{A},1} h \|_{H^s(\mathbb{R}^d)} \leq C_s \| h \|_{H^s (\mathbb{R}^d)},~~~~~~~~~h\in D_{L^2}(\mathcal{P}_{\mathcal{A}}),~~~~s\in (0,\frac12].
\label{proof.lem.U_1'.5}
\end{align}
To this end, we first note that the following equality holds with the choice
$\varphi (t) = e^{-t\mathcal{P}_{\mathcal{A}^*}} \phi$,
$\phi\in H^{1/2}(\mathbb{R}^d)$:
\begin{align}
\langle S_{\mathcal{A},1} h, M_{\bar{b}} \phi \rangle _{H^{-\frac12},H^\frac12} & = \lim_{\delta\rightarrow 0} \langle \frac{\,{\rm d}}{\,{\rm d} t} U_1 (\delta ) h, M_{\bar{b}} \phi \rangle _{H^{-\frac12},H^\frac12} =\lim_{\delta\rightarrow 0} \langle \frac{\,{\rm d}}{\,{\rm d} t} U_1 (\delta ) h, M_{\bar{b}} \varphi (\delta) \rangle _{H^{-\frac12},H^\frac12} \nonumber \\
& = \lim_{\delta\rightarrow 0} \langle M_b \frac{\,{\rm d}}{\,{\rm d} t} U_1 (\delta ) h + M_{{\bf r_2}}\cdot \nabla_x U_1 (\delta ) h , \varphi (\delta) \rangle _{L^2 (\mathbb{R}^d )} \nonumber \\
& = - \langle A \nabla U_1 h , \nabla\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}- \langle M_\chi \Pi' h , \nabla_x \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}\nonumber \\
& ~~~ + \langle \partial_t M_\chi \Pi_{d+1} h + M_\chi G_\zeta h + R h,\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}\nonumber \\
& =- \langle M_\chi \Pi' h , \nabla_x \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})} + \langle \partial_t M_\chi \Pi_{d+1} h + M_\chi G_\zeta h + R h,\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}. \label{proof.lem.U_1'.3}
\end{align}
Here for each line we have respectively used $\varphi (\delta) = e^{-\delta\mathcal{P}_{\mathcal{A}^*}} \phi \rightarrow \phi$ in $H^{1/2} (\mathbb{R}^d)$ for $\phi \in H^{1/2} (\mathbb{R}^d)$, $M_{{\bf r_2}/b}\cdot \nabla_x U_1 (\delta ) h\rightarrow 0$ in $L^2 (\mathbb{R}^d)$, \eqref{proof.lem.U_1'.1}, and then the fact that $ \langle A \nabla U_1 h , \nabla\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})} =0$, since $U_1 h\in \dot{H}^1_0(\mathbb{R}^{d+1}_+)$ and $\varphi (t) = e^{-t\mathcal{P}_{\mathcal{A}^*}} \phi$. Hence our next task
is to estimate the right-hand side of \eqref{proof.lem.U_1'.3}. Let $s\in (0,1/2]$. We will show
\begin{align}
& ~~~ |\langle M_\chi \Pi' h , \nabla_x \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}| + |\langle \partial_t M_\chi G_{it ({\bf r_1} +{\bf r_2} )\cdot \nabla_x\mu_{\mathcal{A}}} h, \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}| \nonumber \\
& ~~~~~~~+ |\langle M_\chi\partial_t G_{\nabla_x\cdot {\bf r_2}} h + M_\chi G_\zeta h,\varphi \rangle _{L^2 (\mathbb{R}^{d+1}_+)}| \leq C \| h \|_{H^s (\mathbb{R}^d)} \| (I-\Delta_x)^{-\frac{s}{2}} \phi \|_{L^2 (\mathbb{R}^d)}, \label{proof.lem.U_1'.4}
\end{align}
which gives \eqref{proof.lem.U_1'.5}, since the other terms in \eqref{proof.lem.U_1'.3} are of lower order and easy to handle. We observe that $\varphi (t) = e^{-t\mathcal{P}_{\mathcal{A}^*}} \phi = M_\chi U_{\mathcal{A}^*,0} (t) \phi + U_{\mathcal{A}^*,1}(t) \phi $, and thus, it suffices to check \eqref{proof.lem.U_1'.4} with $\varphi$ replaced by $U_{\mathcal{A}^*,0}(t) \phi$, for $U_{\mathcal{A}^*,1}(t) \phi$ is of lower order if $s$ is less than or equal to $1/2$ thanks to Lemma \ref{lem.U_1}. The argument below is based on the quadratic estimate as in \eqref{est.lem.G_p.2}. Note that the counterpart of Lemma \ref{lem.G_p} is valid for $U_{\mathcal{A}^*,0}(t)$. First we see from the definition of $\Pi'$ in \eqref{proof.lem.U_1.0},
\begin{align*}
& ~~~ |\langle M_\chi \Pi' h , \nabla_x U_{\mathcal{A}^*,0} (t) (-\Delta_x)^\frac{s}{2} \phi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}| \\
& \leq \int_0^\infty M_\chi \| G_{it A' \nabla_x\mu_{\mathcal{A}}} (t) h \|_{L^2 (\mathbb{R}^d)} \| \nabla_x U_{\mathcal{A}^*,0} (t) (-\Delta_x)^\frac{s}{2}\phi \|_{L^2 (\mathbb{R}^d)} \,{\rm d} t \\
& \leq \big ( \int_0^\infty M_\chi \| t^{1-s} G_{i A' \nabla_x\mu_{\mathcal{A}}} (t) h \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} t}{t} \big )^\frac12 \big ( \int_0^\infty M_\chi \| t^{1+ s} \nabla_x U_{\mathcal{A}^*,0} (t) (-\Delta_x)^\frac{s}{2} \phi \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} t}{t} \big )^\frac12\\
& \leq C \| h \|_{\dot{H}^s (\mathbb{R}^d)} \| \phi \|_{L^2 (\mathbb{R}^d)}, ~~~~~~s\in [0,\frac12 ],
\end{align*}
by applying Lemma \ref{lem.G_p}. Similarly, we have
\begin{align*}
& ~~~ |\langle \partial_t M_\chi G_{it ({\bf r_1} +{\bf r_2} )\cdot \nabla_x\mu_{\mathcal{A}}} h, \varphi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}|\nonumber\\
& = |\langle M_\chi G_{it ({\bf r_1} +{\bf r_2} )\cdot \nabla_x\mu_{\mathcal{A}}} h, \frac{\,{\rm d}}{\,{\rm d} t} U_{\mathcal{A}^*,0} (t) (-\Delta_x)^\frac{s}{2}\phi \rangle _{L^2 (\mathbb{R}^{d+1}_{+})}| \leq C \| h \|_{\dot{H}^s (\mathbb{R}^d)} \| \phi \|_{L^2 (\mathbb{R}^d)}, ~~~~~~~s\in [0,\frac12 ].
\end{align*}
Finally we consider the term $M_\chi\partial_t G_{\nabla_x\cdot {\bf r_2}} h + M_\chi G_\zeta h = M_\chi G_{\eta} h$, $\eta = \mu_{\mathcal{A}} \nabla_x\cdot {\bf r_2} + \zeta$. Lemma \ref{lem.G_p} yields
\begin{align*}
B_s (h) : = \big ( \int_0^\infty M_\chi \| t^{1-s} G_{\eta} (t) h \|_{L^2(\mathbb{R}^d)}^2 \frac{\,{\rm d} t}{t}\big )^\frac12 \leq C \| h \|_{\dot{H}^s (\mathbb{R}^d )},~~~~~~s\in [0,\frac12 ].
\end{align*}
Thus it follows that
\begin{align}
& |\langle M_\chi G_{\eta} h , U_{\mathcal{A}^*,0} (t) \phi \rangle _{L^2 (\mathbb{R}^{d+1}_+)}| \leq \int_0^\infty M_\chi \| G_{\eta} (t) h \|_{L^2 (\mathbb{R}^d )} \| U_{\mathcal{A}^*,0} (t) \phi \|_{L^2 (\mathbb{R}^d)} \,{\rm d} t \nonumber \\
&~~~~~~~~~~~~~~ \leq \sup_{t>0} \| U_{\mathcal{A}^*,0} (t) (I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d )} \int_0^\infty M_\chi \| G_{\eta} (t) h \|_{L^2 (\mathbb{R}^d )} \,{\rm d} t\nonumber \\
& ~~~~~~~~~~~~~~~~~~ + \int_0^\infty M_\chi \| G_{\eta} (t) h \|_{L^2 (\mathbb{R}^d )} \| U_{\mathcal{A}^*,0} (t) (-\Delta_x ) (I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d)} \,{\rm d} t,\nonumber
\end{align}
and the first term of the right-hand side is bounded from above by
\[
C \|(I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d )} \big (\int_0^\infty \chi (t) t^{2 s} \frac{\,{\rm d} t}{t} \big )^{\frac12} B_s (h)\leq C_s B_s (h) \|(I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d )},
\]
while the second term is estimated by
\begin{align*}
& ~~~B_s (h) \big ( \int_0^\infty M_\chi \| t^{s} U_{\mathcal{A}^*,0} (t) (-\Delta_x)^{\frac{s}{2}}(-\Delta_x)^{1-\frac{s}{2}} (I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} t}{t}\big )^\frac12 \\
& \leq C B_s (h) \| (-\Delta_x)^{1-\frac{s}{2}} (I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
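The passage from these two bounds to the final estimate uses two elementary facts, which we record for the reader's convenience; the assumption that the cut-off $\chi$ is bounded and supported in $[0,2]$ (consistent with the restriction $0<t\leq 2$ used elsewhere) is taken for granted here. First,
\begin{align*}
\int_0^\infty \chi (t)\, t^{2s} \frac{\,{\rm d} t}{t} \leq C \int_0^2 t^{2s-1} \,{\rm d} t = \frac{C\, 2^{2s}}{2s},~~~~~~s\in (0,\frac12],
\end{align*}
which is finite for each fixed $s$ but blows up as $s\rightarrow 0$; this is the source of the behaviour of the constant $C_s$ below. Second, on the Fourier side one has $(1+|\xi|^2)^{-1}\leq (1+|\xi|^2)^{-\frac{s}{2}}$ and $|\xi|^{2-s} (1+|\xi|^2)^{-1}\leq (1+|\xi|^2)^{-\frac{s}{2}}$, so that by Plancherel's theorem,
\begin{align*}
\|(I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d)} + \| (-\Delta_x)^{1-\frac{s}{2}} (I-\Delta_x)^{-1} \phi \|_{L^2 (\mathbb{R}^d)} \leq C \| (I-\Delta_x)^{-\frac{s}{2}} \phi \|_{L^2 (\mathbb{R}^d)}.
\end{align*}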
Thus we have
\begin{align}
|\langle M_\chi G_{\eta} h , U_{\mathcal{A}^*,0} (t) \phi \rangle _{L^2 (\mathbb{R}^{d+1}_+)}|& \leq C_s \|(I-\Delta_x)^{-\frac{s}{2}} \phi \|_{L^2 (\mathbb{R}^d )} \| h \|_{\dot{H}^s (\mathbb{R}^d)},\nonumber
\end{align}
where $C_s$ is a constant which tends to $\infty$ as $s\rightarrow 0$. Collecting the estimates above, we arrive at \eqref{proof.lem.U_1'.5}.
Now let $V_1 (t) h\in C([0,\infty); H^{1/2} (\mathbb{R}^d))$ be the unique weak solution to \eqref{eq.dirichlet} with $F=- \partial_t\mathcal{A}(M_\chi U_0 h) = \nabla\cdot \partial_t M_\chi \Pi h + \partial_t M_\chi G_\zeta h + \partial_t R h $ and $g=S_{\mathcal{A},1} h$. Hence it has the form
\begin{align}
V_1 (t) h = e^{-t\mathcal{P}_{\mathcal{A}}} S_{\mathcal{A},1} h + W_{\mathcal{A}} (t) h = M_\chi U_0 (t) S_{\mathcal{A},1} h + U_1 (t) S_{\mathcal{A},1} h + W_{\mathcal{A}} (t) h,\label{proof.lem.U_1'.6}
\end{align}
where $W_{\mathcal{A}} h\in \dot{H}^1_0 (\mathbb{R}^{d+1}_+)$ is a weak solution to \eqref{eq.dirichlet} with $F= \nabla\cdot \partial_t M_\chi \Pi h + \partial_t M_\chi G_\zeta h + \partial_t R h $ and $g=0$. Note that
\begin{align}
\int_0^\infty M_\chi\big ( \| t^\frac12 \partial_t \Pi h \|_{L^2 (\mathbb{R}^d)}^2 + \| t^\frac12 G_\zeta (t) h \|_{L^2 (\mathbb{R}^d)}^2 \big )\frac{\,{\rm d} t}{t} \leq C \| h \|_{\dot{H}^\frac12 (\mathbb{R}^d)}^2\nonumber
\end{align}
by Lemma \ref{lem.G_p}, which implies the estimate $\| \nabla W_{\mathcal{A}} h \|_{L^2 (\mathbb{R}^{d+1}_+)}\leq C \| h \|_{\dot{H}^{1/2} (\mathbb{R}^d)}$, and thus,
\begin{align}
\| W_{\mathcal{A}} (t) h \|_{\dot{H}^\frac12(\mathbb{R}^d)}\leq \| \nabla W_{\mathcal{A}} h \|_{L^2 (\mathbb{R}^{d+1}_{t,+})}\leq C \| h \|_{\dot{H}^{1/2} (\mathbb{R}^d)},~~~~~~~~t>0.\label{proof.lem.U_1'.7}
\end{align}
On the other hand, since $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ defines a strongly continuous semigroup in $H^{1/2}(\mathbb{R}^d)$, we have from \eqref{proof.lem.U_1'.5},
\begin{align}
\| e^{-t\mathcal{P}_{\mathcal{A}}} S_{\mathcal{A},1} h \|_{H^\frac12 (\mathbb{R}^d)} \leq C \| S_{\mathcal{A},1} h \|_{H^\frac12 (\mathbb{R}^d)} \leq C \| h \|_{H^\frac12 (\mathbb{R}^d)},~~~~~~0<t\leq 2. \label{proof.lem.U_1'.8}
\end{align}
Finally let us prove $V_1 (t) h = \,{\rm d} / \,{\rm d} t~U_1 (t) h$. To see this, we note that, for each $\delta\in (0,1/2)$, $\,{\rm d} / \,{\rm d} t~U_1 (t+\delta) h $ is the unique weak solution to \eqref{eq.dirichlet} with $F = \tau_{\delta} \big ( \nabla\cdot \partial_t M_\chi \Pi h + \partial_t M_\chi G_\zeta h + \partial_t R h \big )$ and $g= \,{\rm d} / \,{\rm d} t~U_1 (\delta) h$, where $\tau_{\delta} f (t) =f(t+\delta)$. Hence we can write
\begin{align}
\frac{\,{\rm d} }{\,{\rm d} t} U_1 (t+\delta )h = e^{-t\mathcal{P}_{\mathcal{A}}} \frac{\,{\rm d} }{ \,{\rm d} t} U_1 (\delta) h + W_{\mathcal{A}}^{(\delta)} (t) h,~~~~~~~~\delta\in (0,1/2), \label{proof.lem.U_1'.9}
\end{align}
where $W_{\mathcal{A}}^{(\delta)} h$ is the solution to \eqref{eq.dirichlet} with $F = \tau_{\delta} \big ( \nabla\cdot \partial_t M_\chi \Pi h + \partial_t M_\chi G_\zeta h + \partial_t R h \big )$ and $g=0$. Then, arguing as in the estimate of $W_{\mathcal{A}}h$, we obtain a uniform bound of $W_{\mathcal{A}}^{(\delta)}h$ in $\dot{H}_0^1(\mathbb{R}^{d+1}_+)$, and it is not difficult to show that $W_{\mathcal{A}}^{(\delta)}h$ converges weakly to $W_{\mathcal{A}} h$ in $\dot{H}_0^1 (\mathbb{R}^{d+1}_+)$. On the other hand, we have from \eqref{est.cor.lem.U_0.U_1},
\begin{align*}
\| e^{-t\mathcal{P}_{\mathcal{A}}} S_{\mathcal{A},1} h - e^{-t\mathcal{P}_{\mathcal{A}}} \frac{\,{\rm d} }{ \,{\rm d} t} U_1 (\delta) h \|_{L^2 (\mathbb{R}^d)} \leq C t^{-\frac12} \| S_{\mathcal{A},1} h - \frac{\,{\rm d} }{ \,{\rm d} t} U_1 (\delta) h \|_{H^{-\frac12}(\mathbb{R}^d)},
\mathbf{e}nd{align*}
which converges to zero as $\delta \rightarrow 0$ by the definition of $S_{\mathcal{A},1}h$. Hence, the right-hand side of \eqref{proof.lem.U_1'.9} converges to $V_1(t)h$ in $L^2_{loc} (\mathbb{R}^{d+1}_+)$. On the other hand, since $\,{\rm d} /\,{\rm d} t~ U_1 h \in \cap_{\delta>0} H^1 (\mathbb{R}^{d+1}_{\delta,+})$, the left-hand side of \eqref{proof.lem.U_1'.9} converges to $\,{\rm d} /\,{\rm d} t~ U_1 (t)h$ in $L^2 (\mathbb{R}^d)$ as $\delta\rightarrow 0$ for each $t>0$. Thus we have $\,{\rm d} / \,{\rm d} t~U_1 (t) h= V_1 (t)h$, and then \eqref{proof.lem.U_1'.7} - \eqref{proof.lem.U_1'.8} imply
$\| \,{\rm d}/\,{\rm d} t~ U_1 (t) h \|_{H^{1/2}(\mathbb{R}^d)}\leq C \| h \|_{H^{1/2} (\mathbb{R}^d)}$ for $0<t\leq 2$. For $t>2$ we have from the equality $\,{\rm d} /\,{\rm d} t ~ U_1 (t) h = \,{\rm d} / \,{\rm d} t~ e^{-t\mathcal{P}_{\mathcal{A}}} h$ that
\begin{align*}
\| \frac{\,{\rm d}}{\,{\rm d} t} ~ U_1 (t) h\|_{H^\frac12 (\mathbb{R}^d)} \leq C \| e^{-s\mathcal{P}_{\mathcal{A}}} h \mid _{s=1} \|_{L^2 (\mathbb{R}^d)} \leq C \| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
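For the reader's convenience we sketch the reasoning behind the last display (the factorization point $s=1$ is a convenient choice, not a canonical one): for $t>2$ one may write
\begin{align*}
\frac{\,{\rm d}}{\,{\rm d} t} e^{-t\mathcal{P}_{\mathcal{A}}} h = - \mathcal{P}_{\mathcal{A}} e^{-(t-1)\mathcal{P}_{\mathcal{A}}} \big ( e^{-s\mathcal{P}_{\mathcal{A}}} h \mid_{s=1} \big ),
\end{align*}
and the analyticity of the semigroup $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ in $L^2 (\mathbb{R}^d)$, combined with the embedding $D_{L^2}(\mathcal{P}_{\mathcal{A}})\hookrightarrow H^1 (\mathbb{R}^d)$ of Corollary \ref{cor.lem.U_0.U_1}, bounds the $H^{1/2}(\mathbb{R}^d)$ norm of the right-hand side by $C \| e^{-s\mathcal{P}_{\mathcal{A}}} h \mid_{s=1} \|_{L^2 (\mathbb{R}^d)}$ uniformly in $t>2$.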
Hence, \eqref{est.lem.U_1'.1} holds when $h\in D_{L^2}(\mathcal{P}_{\mathcal{A}})$. Since $D_{L^2}(\mathcal{P}_{\mathcal{A}})$ is dense in $H^{1/2}(\mathbb{R}^d)$, the estimate \eqref{est.lem.U_1'.1} extends to all $h\in H^{1/2}(\mathbb{R}^d)$ by the density argument. The proof is complete.
\begin{cor}\label{cor.lem.U_1'} It follows that $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = D_{L^2}(\mathcal{P}_{\mathcal{A}^*}) = H^1(\mathbb{R}^d)$ with equivalent norms.
\end{cor}
\noindent {\it Proof.} Note that the same result as in Lemma \ref{lem.U_1'} is valid for $\,{\rm d} /\,{\rm d} t ~ U_{\mathcal{A}^*,1} (t)$. On the other hand, by Lemma \ref{lem.U_0} and the definition \eqref{def.U_A0}, it is straightforward to see
\begin{align*}
\sup_{t>0} \| \frac{\,{\rm d}}{\,{\rm d} t} U_{\mathcal{A},0}(t) h \|_{L^2 (\mathbb{R}^d)} +\sup_{t>0} \| \frac{\,{\rm d}}{\,{\rm d} t} U_{\mathcal{A}^*,0} (t) h \|_{L^2 (\mathbb{R}^d)}<\infty ~~~~~~~{\rm for}~~h\in C_0^\infty (\mathbb{R}^d)
\end{align*}
Thus we have
\begin{align}
\sup_{t>0} \| \frac{\,{\rm d}}{\,{\rm d} t} e^{-t\mathcal{P}_{\mathcal{A}}} h \|_{L^2 (\mathbb{R}^d)} +\sup_{t>0} \| \frac{\,{\rm d}}{\,{\rm d} t} e^{-t\mathcal{P}_{\mathcal{A}^*}} h \|_{L^2 (\mathbb{R}^d)}<\infty ~~~~~~{\rm for}~~~ h\in C_0^\infty (\mathbb{R}^d ). \label{proof.cor.lem.U_1'.1}
\end{align}
Hence Proposition \ref{prop.pre.2} (ii), together with Corollary \ref{cor.lem.U_0.U_1}, shows $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = D_{L^2}(\mathcal{P}_{\mathcal{A}^*}) = H^1(\mathbb{R}^d)$ with equivalent norms. The proof is complete.
\begin{rem}\label{rem.pseudo}{\rm Let $\Phi:A \longmapsto \mu_{\mathcal{A}}$ be the map defined by \eqref{def.appendix.Phi}. The arguments of the present section essentially rely on the integration by parts technique, and in particular, we did not use the mapping properties of the pseudo-differential operator $\Phi (A)(\cdot, D_x)$ such as the equivalence $\| \Phi (A)(\cdot,D_x) f \|_{L^2(\mathbb{R}^d)} + \| f\|_{L^2(\mathbb{R}^d)} \simeq \| f \|_{H^1 (\mathbb{R}^d)}$. Since the above proof implies the identity $-\mathcal{P}_{\mathcal{A}}=i\Phi (A) (\cdot,D_x) + S_{\mathcal{A},1}$, where $S_{\mathcal{A},1}=\displaystyle \lim_{t\rightarrow 0}\,{\rm d}/\,{\rm d} t~U_{\mathcal{A},1}(t)$ is a lower order operator, our result actually gives an alternative (although lengthy) proof of the mapping properties of $\Phi (A) (\cdot,D_x)$ in $H^s(\mathbb{R}^d)$ which are well known in the theory of pseudo-differential operators with nonsmooth coefficients; cf. \cite{KumanogoNagase, Marschall, Taylor, Abels, ES}. In particular, the fact that $i\Phi (A) (\cdot,D_x)$ in $L^2 (\mathbb{R}^d)$ with the domain $H^1 (\mathbb{R}^d)$ generates a strongly continuous and analytic semigroup in $L^2 (\mathbb{R}^d)$ is recovered by regarding $i\Phi (A) (\cdot,D_x)$ as a perturbation of $-\mathcal{P}_{\mathcal{A}}$. More precise statements will be given in Theorem \ref{thm.class.Phi}.
}
\end{rem}
\subsection{Domain of Dirichlet-Neumann map in $L^2 (\mathbb{R}^d)$}\label{subsec.domain.DN.L^2}
In this section we consider the domain of the Dirichlet-Neumann map in $L^2 (\mathbb{R}^d)$. The result is stated as follows.
\begin{thm}\label{thm.domain.DN} It follows that $D_{L^2}(\Lambda_{\mathcal{A}}) = D_{L^2} (\Lambda_{\mathcal{A}^*}) = H^1 (\mathbb{R}^d)$ with equivalent norms. Moreover, $\Lambda_{\mathcal{A}}$ (and hence, $\Lambda_{\mathcal{A}^*}$) admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$.
\end{thm}
\noindent {\it Proof.} It suffices to show $D_{L^2}(\Lambda_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$. Let $\mu_{\mathcal{A}}$ be as in \eqref{def.mu_A}. Let $f\in H^1 (\mathbb{R}^d)$. Since we have already shown $D_{L^2}(\mathcal{P}_{\mathcal{A}}) =H^1 (\mathbb{R}^d)$ in the previous section, Proposition \ref{prop.pre.3} gives
\begin{align}
\mathcal{P}_{\mathcal{A}} f & = M_{1/b} \Lambda_{\mathcal{A}} f + M_{{\bf r_2}/b} \cdot \nabla_x f,\label{proof.thm.domain.DN.1}
\end{align}
while, as stated in Remark \ref{rem.pseudo}, we have
\begin{align}
\mathcal{P}_{\mathcal{A}} f & = - i \mu_{\mathcal{A}} (\cdot,D_x) f - S_{\mathcal{A},1} f = - i M_{1/b} \lambda_{\mathcal{A}} (\cdot,D_x) f - S_{\mathcal{A},1} f + M_{{\bf r_2}/b} \cdot \nabla_x f,\label{proof.thm.domain.DN.2}
\end{align}
where $S_{\mathcal{A},1}$ is the linear operator given in the proof of Lemma \ref{lem.U_1'} and $\lambda_{\mathcal{A}} (x,\xi )= b(x) \mu_{\mathcal{A}} (x,\xi) + {\bf r_2} (x) \cdot \xi$. Now let us define the linear operator $\mathcal{J}_{\mathcal{A}}$ in $L^2 (\mathbb{R}^d)$ by
\begin{align}
D_{L^2} (\mathcal{J}_{\mathcal{A}}) = H^1 (\mathbb{R}^d), ~~~~~~~~~~ \mathcal{J}_{\mathcal{A}} f = - i \lambda_{\mathcal{A}} (\cdot,D_x) f - M_b S_{\mathcal{A},1} f,\label{proof.thm.domain.DN.3}
\end{align}
which gives $\Lambda_{\mathcal{A}} f = \mathcal{J}_{\mathcal{A}}f $ for $f\in H^1 (\mathbb{R}^d)$ by \eqref{proof.thm.domain.DN.1} - \eqref{proof.thm.domain.DN.2}. Thanks to Lemma \ref{lem.invariant.lambda} together with Remark \ref{rem.pseudo} the operator $i \lambda_{\mathcal{A}} (\cdot,D_x)$ in $L^2 (\mathbb{R}^d)$ with $D_{L^2}(\lambda_{\mathcal{A}} (\cdot,D_x)) =H^1 (\mathbb{R}^d)$ generates a strongly continuous and analytic semigroup in $L^2 (\mathbb{R}^d)$, and $-i \lambda_{\mathcal{A}} (\cdot,D_x)$ admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$. On the other hand, we have from \eqref{proof.lem.U_1'.5},
\begin{align*}
\| M_b S_{\mathcal{A},1} f\|_{L^2(\mathbb{R}^d)} \leq C\| S_{\mathcal{A},1} f\|_{H^s(\mathbb{R}^d)} \leq C \| f\|_{H^s(\mathbb{R}^d)}~~~~~{\rm for~all}~~s\in (0,\frac12].
\end{align*}
In particular, $M_b S_{\mathcal{A},1}$ is a lower order operator, and hence, the standard perturbation theory (cf. \cite[Section 2.4]{Lunardi}) implies that $-\mathcal{J}_{\mathcal{A}}$ generates a strongly continuous and analytic semigroup $\{e^{-t \mathcal{J}_{\mathcal{A}}}\}_{t\geq 0}$ in $L^2 (\mathbb{R}^d)$. It also follows that $\mathcal{J}_{\mathcal{A}}$ admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$. Now we observe that $u(t)=e^{-t\mathcal{J}_{\mathcal{A}}}f$, $f\in L^2 (\mathbb{R}^d)$, satisfies $0=\partial_t u + \mathcal{J}_{\mathcal{A}} u =\partial_t u + \Lambda_{\mathcal{A}} u$ for $t>0$ and $u(t)\rightarrow f$ as $t\rightarrow 0$ in $L^2 (\mathbb{R}^d)$. Thus we also have the representation $u(t) = e^{-t\Lambda_{\mathcal{A}}} f$, that is, $e^{-t\mathcal{J}_{\mathcal{A}}} = e^{-t \Lambda_{\mathcal{A}}}$ for all $t\geq 0$. This proves $\mathcal{J}_{\mathcal{A}}=\Lambda_{\mathcal{A}}$, i.e., $D_{L^2}(\Lambda_{\mathcal{A}}) = D_{L^2}(\mathcal{J}_{\mathcal{A}})=H^1 (\mathbb{R}^d)$. The proof is complete.
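We add one line of detail to the perturbation step above (a routine interpolation argument, recorded for convenience): by \eqref{proof.lem.U_1'.5} with $s=1/2$, the interpolation inequality $\| f \|_{H^{1/2} (\mathbb{R}^d)} \leq \| f \|_{L^2 (\mathbb{R}^d)}^{\frac12} \| f \|_{H^1 (\mathbb{R}^d)}^{\frac12}$, and Young's inequality,
\begin{align*}
\| M_b S_{\mathcal{A},1} f \|_{L^2 (\mathbb{R}^d)} \leq C \| f \|_{H^\frac12 (\mathbb{R}^d)} \leq \epsilon \| f \|_{H^1 (\mathbb{R}^d)} + \frac{C^2}{4\epsilon} \| f \|_{L^2 (\mathbb{R}^d)} ~~~~~~{\rm for~any}~~\epsilon>0,
\end{align*}
that is, $M_b S_{\mathcal{A},1}$ is relatively bounded with respect to $-i\lambda_{\mathcal{A}} (\cdot,D_x)$ with relative bound zero, which is the form of perturbation covered by \cite[Section 2.4]{Lunardi}.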
\subsection{Domain of Poisson operator in $H^1 (\mathbb{R}^d)$}\label{subsec.domain.H^1}
In this section we study the Poisson operator in $H^1 (\mathbb{R}^d)$. First we consider the operator $\mathcal{Q}_{\mathcal{A}}= M_{1/b} (M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*})^*$ in $L^2 (\mathbb{R}^d)$, the analysis of which is the key step in proving $D_{H^1}(\mathcal{P}_{\mathcal{A}}) = H^2 (\mathbb{R}^d)$.
\begin{thm}\label{thm.domain.Q} Let $\mathcal{Q}_{\mathcal{A}} = M_{1/b} ( M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*})^*$. Then $D_{L^2}(\mathcal{Q}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ with equivalent norms and $\mathcal{Q}_{\mathcal{A}} f = M_{1/b} \Lambda_{\mathcal{A}} f - M_{{\bf r_1}/b} \cdot \nabla_x f - M_{(\nabla_x\cdot {\bf r_1})/b} f$ ~ for $f\in H^1 (\mathbb{R}^d)$.
\end{thm}
\noindent {\it Proof.} Assume that $f\in H^1 (\mathbb{R}^d) = D_{L^2}(\Lambda_{\mathcal{A}})$ (by Theorem \ref{thm.domain.DN}). Then for any $g\in D_{L^2} (\mathcal{P}_{\mathcal{A}^*})= H^1 (\mathbb{R}^d)$ (by Corollary \ref{cor.lem.U_1'}) we have from Proposition \ref{prop.pre.3},
\begin{align*}
\langle f, M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} g\rangle _{L^2 (\mathbb{R}^d)} & = \langle f, \Lambda_{\mathcal{A}^*} g + M_{ {\bf \bar{r}_1}}\cdot \nabla_x g\rangle _{L^2(\mathbb{R}^d)} = \langle \Lambda_{\mathcal{A}} f - \nabla_x \cdot M_{{\bf r_1}} f, g\rangle_{L^2 (\mathbb{R}^d)}.
\end{align*}
In particular, we have the estimate $|\langle f, M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} g\rangle _{L^2 (\mathbb{R}^d)}|\leq C \| f \|_{H^1 (\mathbb{R}^d)} \| g\|_{L^2(\mathbb{R}^d)}$ for all $g\in D_{L^2} (\mathcal{P}_{\mathcal{A}^*})$. This implies $f\in D_{L^2}( (M_{\bar{b}} \mathcal{P}_{\mathcal{A}^*} )^* )=D_{L^2}( \mathcal{Q}_{\mathcal{A}} )$ and
\[
\mathcal{Q}_{\mathcal{A}} f = M_{1/b} \Lambda_{\mathcal{A}} f - M_{{\bf r_1}/b} \cdot \nabla_x f - M_{(\nabla_x\cdot {\bf r_1})/b} f, ~~~~~~~~ f\in H^1 (\mathbb{R}^d),
\]
that is, we have from \eqref{proof.thm.domain.DN.1}-\eqref{proof.thm.domain.DN.2},
\begin{align}
\mathcal{Q}_{\mathcal{A}} f = -i \mu_{\mathcal{A}} (\cdot,D_x) f - M_{({\bf r_1} + {\bf r_2})/b} \cdot \nabla_x f - S_{\mathcal{A},1} f - M_{(\nabla_x\cdot {\bf r_1})/b} f, ~~~~~~~~ f\in H^1 (\mathbb{R}^d). \label{proof.thm.domain.Q.1}
\end{align}
To prove the converse embedding we appeal to the argument of the proof of Theorem \ref{thm.domain.DN}. Set $q_{\mathcal{A}}(x,\xi) =\mu_{\mathcal{A}} (x,\xi) + b(x)^{-1}({\bf r_1} (x) + {\bf r_2} (x) ) \cdot \xi$ and let $\mathcal{K}_{\mathcal{A}}$ be the linear operator in $L^2 (\mathbb{R}^d)$ defined by
\begin{align}
D_{L^2}(\mathcal{K}_{\mathcal{A}}) = H^1 (\mathbb{R}^d ), ~~~~~~~\mathcal{K}_{\mathcal{A}} f = - i q_{\mathcal{A}} (\cdot,D_x) f - S_{\mathcal{A},1} f - M_{(\nabla_x\cdot {\bf r_1})/b} f.\label{proof.thm.domain.Q.2}
\end{align}
Then $\mathcal{K}_{\mathcal{A}}f = \mathcal{Q}_{\mathcal{A}}f$ for $f\in H^1 (\mathbb{R}^d)$ by \eqref{proof.thm.domain.Q.1}. On the other hand, Lemma \ref{lem.invariant.lambda} with Remark \ref{rem.pseudo} shows that $ i q_{\mathcal{A}} (\cdot,D_x)$ generates a strongly continuous and analytic semigroup in $L^2 (\mathbb{R}^d)$, and hence the same is true for $\mathcal{K}_{\mathcal{A}}$, since the operators $S_{\mathcal{A},1}$ and $M_{(\nabla_x\cdot {\bf r_1})/b}$ are of lower order. Then, arguing as in the proof of Theorem \ref{thm.domain.DN}, we conclude that $e^{-t\mathcal{K}_{\mathcal{A}}} = e^{-t\mathcal{Q}_{\mathcal{A}}}$ for all $t\geq 0$. Thus we have $\mathcal{K}_{\mathcal{A}} = \mathcal{Q}_{\mathcal{A}}$, as desired. The proof is complete.
\begin{thm}\label{thm.domain.poisson.H^1} The restriction of $\{e^{-t\mathcal{P}_{\mathcal{A}}}\}_{t\geq 0}$ in $L^2 (\mathbb{R}^d)$ to the invariant subspace $H^1 (\mathbb{R}^d)$ defines a strongly continuous and analytic semigroup. Moreover, we have $D_{H^1} (\mathcal{P}_{\mathcal{A}}) = H^2 (\mathbb{R}^d)$ with equivalent norms.
\end{thm}
\noindent {\it Proof.} The first statement is trivial since we have already proved $D_{L^2}(\mathcal{P}_{\mathcal{A}}) =H^1 (\mathbb{R}^d)$. Thus it suffices to show $D_{H^1}(\mathcal{P}_{\mathcal{A}}) = H^2 (\mathbb{R}^d)$. Since $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = D_{L^2}(\mathcal{P}_{\mathcal{A}^*}) =H^1 (\mathbb{R}^d)$ by Corollary \ref{cor.lem.U_1'}, we have from \eqref{eq.prop.pre.3.3'},
\begin{align*}
u\in H^2 (\mathbb{R}^d) \Longleftrightarrow u\in D_{L^2} (\mathcal{A}') \Longleftrightarrow u\in D_{L^2}(\mathcal{Q}_{\mathcal{A}}\mathcal{P}_{\mathcal{A}} )\Longleftrightarrow \mathcal{P}_{\mathcal{A}} u\in D_{L^2} (\mathcal{Q}_{\mathcal{A}}) \Longleftrightarrow \mathcal{P}_{\mathcal{A}} u\in H^1 (\mathbb{R}^d),
\end{align*}
where we have used $D_{L^2}(\mathcal{Q}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ by Theorem \ref{thm.domain.Q}. It is also easy to see that $\|u\|_{H^2(\mathbb{R}^d)}\simeq \| \mathcal{P}_{\mathcal{A}} u \|_{H^1 (\mathbb{R}^d)} + \| u\|_{H^1(\mathbb{R}^d)}$. The proof is complete.
\subsection{Further estimates for the remainder part of the Poisson operator in $H^s(\mathbb{R}^d)$}\label{subsec.S_A}
Let $S_{\mathcal{A},1}$ be the bounded linear operator in $H^{1/2}(\mathbb{R}^d)$ defined by $S_{\mathcal{A},1}h =\displaystyle \lim_{t\rightarrow 0} \,{\rm d}/\,{\rm d} t~U_{\mathcal{A},1} (t) h$ as in the proof of Lemma \ref{lem.U_1'}. In this section we study the mapping property of $S_{\mathcal{A},1}$ in $H^s(\mathbb{R}^d)$.
\begin{prop}\label{prop.S_A} Let $0<s,\epsilon <1$. Then we have
\begin{align}
\| S_{\mathcal{A},1} h \|_{H^s(\mathbb{R}^d)}\leq C \| h \|_{H^s(\mathbb{R}^d)},~~~~~~~~\| S_{\mathcal{A},1} h \|_{H^1(\mathbb{R}^d)}\leq C \| h \|_{H^{1+\epsilon}(\mathbb{R}^d)}. \label{est.prop.S_A.1}
\end{align}
\end{prop}
\noindent {\it Proof.} Let $h\in \mathcal{S}(\mathbb{R}^d)$. The estimate \eqref{est.prop.S_A.1} with $s\in (0,1/2]$ is already proved by \eqref{proof.lem.U_1'.5}. Next we consider the case $s\in (1/2,1)$. Let us recall that $U_{\mathcal{A},1}(t)h$ is the solution to \eqref{eq.dirichlet} with $F$ given by \eqref{proof.lem.U_1.1} and $g=0$. By \cite[Theorem 5.1]{MaekawaMiura1} the characterization of $D_{L^2}(\mathcal{P}_{\mathcal{A}}) = D_{L^2}(\mathcal{P}_{\mathcal{A}^*}) = H^1(\mathbb{R}^d)$ provides the integral representation of $U_{\mathcal{A},1}(t)h$ such that
\begin{align*}
U_{\mathcal{A},1} (t) h & = \int_0^t e^{-(t-s)\mathcal{P}_{\mathcal{A}}} \int_s^\infty e^{-(\tau-s)\mathcal{Q}_{\mathcal{A}}} M_{1/b}\big ( \nabla\cdot M_\chi \Pi h + M_\chi G_\zeta h + R h\big ) \,{\rm d}\tau \,{\rm d} s,
\mathbf{e}nd{align*}
which gives
\begin{align}
S_{\mathcal{A},1} h = \int_0^\infty e^{-\tau \mathcal{Q}_{\mathcal{A}}} M_{1/b}\big ( \nabla\cdot M_\chi \Pi h + M_\chi G_\zeta h + R h\big ) \,{\rm d} \tau. \label{proof.prop.S_A.1}
\end{align}
Here we will only show
\begin{align}
\| \int_0^\infty e^{-\tau \mathcal{Q}_{\mathcal{A}}} M_{1/b} \nabla_x \cdot M_\chi \Pi' h \,{\rm d} \tau \|_{\dot{H}^s(\mathbb{R}^d)} \leq C \| h \|_{H^s(\mathbb{R}^d)}, \label{proof.prop.S_A.2}
\end{align}
for the other terms are treated in a similar manner. We note that the term $\Pi'h$ is not differentiable in $x$ (see the definition \eqref{proof.lem.U_1.0}), and thus, the term $e^{-\tau \mathcal{Q}_{\mathcal{A}}} M_{1/b} \nabla_x \cdot M_\chi \Pi' h$ in \eqref{proof.prop.S_A.2} has to be interpreted as
\begin{align}
e^{-\tau \mathcal{Q}_{\mathcal{A}}} M_{1/b} \nabla_x \cdot M_\chi \Pi' h & = ( I + \mathcal{Q}_{\mathcal{A}} ) e^{-\tau\mathcal{Q}_{\mathcal{A}}} \big (- \nabla_x ( I + \mathcal{P}_{\mathcal{A}^*} )^{-1} M_{1/\bar{b}}\big )^* \cdot M_\chi \Pi'h,\label{proof.prop.S_A.3}
\end{align}
where we have used the formal adjoint relation $( I + \mathcal{Q}_{\mathcal{A}} )^{-1} M_{1/b} \nabla_x = \big (- \nabla_x ( I + \mathcal{P}_{\mathcal{A}^*} )^{-1} M_{1/\bar{b}}\big )^*$. Since $\big (- \nabla_x ( I + \mathcal{P}_{\mathcal{A}^*} )^{-1} M_{1/\bar{b}}\big )^*$ is a bounded linear operator in $L^2 (\mathbb{R}^d)$ by $D_{L^2}(\mathcal{P}_{\mathcal{A}^*})=H^1 (\mathbb{R}^d)$, the right-hand side of \eqref{proof.prop.S_A.3} is well-defined for each $\tau>0$. Then from $\mathcal{Q}_{\mathcal{A}}e^{-\tau \mathcal{Q}_{\mathcal{A}}} = - \,{\rm d}/\,{\rm d}\tau e^{-\tau \mathcal{Q}_{\mathcal{A}}}$ and from the integration by parts together with $\Pi 'h|_{t=0} =0$ for $h\in \mathcal{S}(\mathbb{R}^d)$ (due to the definition of $\Pi'$) the estimate \eqref{proof.prop.S_A.2} is essentially reduced to
\begin{align}
\| \int_0^\infty e^{-\tau \mathcal{Q}_{\mathcal{A}}} B^* \cdot M_\chi \partial_\tau \Pi'h\,{\rm d} \tau \|_{\dot{H}^s(\mathbb{R}^d)} \leq C \| h \|_{H^s(\mathbb{R}^d)},~~~ ~~~ B=- \nabla_x ( I + \mathcal{P}_{\mathcal{A}^*} )^{-1} M_{1/\bar{b}},\label{proof.prop.S_A.4}
\end{align}
since the other terms are of lower order. We appeal to the duality argument and consider the integral
\begin{align}
& ~~~\langle (-{\rm d}elta_x)^\frac{s}{2}\int_0^\infty e^{-\tau \mathcal{Q}_{\mathcal{A}}} B^* \cdot M_\chi \partial_\tau \Pi'h\,{\rm d} \tau, \varphi \rangle _{L^2 (\mathbb{R}^d)} \nonumber \\
& = \int_0^\infty \langle B^* \cdot M_\chi \partial_\tau \Pi'h, M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-{\rm d}elta_x)^\frac{s}{2} \varphi \rangle _{L^2 (\mathbb{R}^d)} \,{\rm d}\tau\label{proof.prop.S_A.5}
\end{align}
for $\varphi\in \mathcal{S}(\mathbb{R}^d)$. Then we have
\begin{align}
&~~~ {\rm R.H.S.~of~\eqref{proof.prop.S_A.5}}\nonumber \\
& \leq C \big ( \int_0^\infty M_\chi \tau^{2(1-s)}\| \partial_\tau \Pi'h \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} \tau}{\tau} \big )^\frac12 \big ( \int_0^\infty M_\chi \tau^{2 s}\| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-{\rm d}elta_x)^\frac{s}{2} \varphi \|_{L^2 (\mathbb{R}^d)}^2\frac{\,{\rm d} \tau}{\tau} \big )^\frac12.\label{proof.prop.S_A.6}
\end{align}
By \mathbf{e}qref{proof.lem.U_1.0} we see $\partial_\tau \Pi'= G_{i (1 + \tau i\mu_{\mathcal{A}})A'\nabla_x\mu_{\mathcal{A}} }$ and it is straightforward to check that
\[
p (x,\xi,\tau) = \tau^{1-s} |\xi|^{-s}(1 + \tau i\mu_{\mathcal{A}}) i A'\nabla_x\mu_{\mathcal{A}}, ~~~~~~~s\in (0,1)
\]
satisfies the condition \eqref{assume.lem.G_p.2}. Hence we have from \eqref{est.lem.G_p.2},
\begin{align*}
\int_0^\infty M_\chi \tau^{2(1-s)}\| \partial_\tau \Pi' h \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d} \tau}{\tau} = \int_0^\infty M_\chi \| G_p (-\Delta_x)^\frac{s}{2} h \|_{L^2 (\mathbb{R}^d)}^2 \frac{\,{\rm d}\tau}{\tau}\leq C \| (-\Delta_x)^\frac{s}{2} h \|_{L^2 (\mathbb{R}^d)}^2.
\end{align*}
Next we estimate the second integral of the right-hand side of \eqref{proof.prop.S_A.6}. By the duality argument and $D_{L^2}(\mathcal{Q}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ it is easy to see $\| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-\Delta_x)^\frac{\kappa}{2} \varphi \|_{L^2 (\mathbb{R}^d)}\leq C \tau^{-\kappa} \|\varphi \|_{L^2 (\mathbb{R}^d)}$ for any $\kappa\in [0,1]$ and $\tau\in (0,2)$. Let $\{\psi_r\}_{r>0}$ be the family of functions introduced in Appendix \ref{appendix.proof.lemma} (with $s$ replaced by $r$). Then one can verify the estimates
\begin{align*}
\tau^s \| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-\Delta_x)^\frac{s}{2}\psi_r * \varphi \|_{L^2 (\mathbb{R}^d)} & =\tau^s \| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-\Delta_x)^\frac{1}{2} (-\Delta_x)^\frac{s-1}{2}\psi_r * \varphi \|_{L^2 (\mathbb{R}^d)}\\
& \leq C \tau^{-1+s}r^{1-s} \|\varphi\|_{L^2(\mathbb{R}^d)}, ~~~~~~~~~~~0<r\leq \tau<2,\\
\tau^s \| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-\Delta_x)^\frac{s}{2}\psi_r * \varphi \|_{L^2 (\mathbb{R}^d)} & \leq \tau^s \|(-\Delta_x)^\frac{s}{2}\psi_r * \varphi \|_{L^2 (\mathbb{R}^d)}\\
& \leq C \tau^{s}r^{-s} \|\varphi\|_{L^2(\mathbb{R}^d)}, ~~~~~~~~~~~0<\tau\leq r, ~~ 0<\tau<2.
\end{align*}
Hence the Schur lemma \cite[pp.643-644]{Grafakos} yields
\begin{align*}
\int_0^\infty M_\chi \tau^{2 s}\| M_{\bar{b}} e^{-\tau \mathcal{P}_{\mathcal{A}^*}} M_{1/\bar{b}} (-\Delta_x)^\frac{s}{2} \varphi \|_{L^2 (\mathbb{R}^d)}^2\frac{\,{\rm d} \tau}{\tau} \leq C \| \varphi \|_{L^2 (\mathbb{R}^d)}^2,
\end{align*}
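For the reader's convenience we sketch the Schur-test computation behind this last step (a routine verification, not part of the original argument): the two estimates above give the kernel bound $K(\tau,r)\leq C\min\{(r/\tau)^{1-s},\,(\tau/r)^{s}\}$, and with the substitution $\sigma = r/\tau$,
\begin{align*}
\sup_{\tau>0}\int_0^\infty \min\Big\{\Big(\frac{r}{\tau}\Big)^{1-s},\Big(\frac{\tau}{r}\Big)^{s}\Big\}\,\frac{\,{\rm d} r}{r} = \int_0^1 \sigma^{-s}\,{\rm d}\sigma + \int_1^\infty \sigma^{-1-s}\,{\rm d}\sigma = \frac{1}{1-s}+\frac{1}{s}<\infty,
\end{align*}
with the same bound for the integral in $\tau$ by symmetry; this uniform integrability is exactly the hypothesis of the Schur lemma for $s\in(0,1)$.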
as desired. This completes the proof of \eqref{est.prop.S_A.1} with $s\in (0,1)$. To prove the second estimate in \eqref{est.prop.S_A.1} we go back to the representation \eqref{proof.prop.S_A.1}. Here we will only show, instead of \eqref{proof.prop.S_A.4},
\begin{align}
\| \int_0^\infty e^{-\tau \mathcal{Q}_{\mathcal{A}}} B^* \cdot M_\chi \partial_\tau \Pi' h\,{\rm d} \tau \|_{\dot{H}^1(\mathbb{R}^d)} \leq C \| h \|_{H^{1+\epsilon} (\mathbb{R}^d)}, ~~~~~~\epsilon\in (0,1).\label{proof.prop.S_A.7}
\end{align}
We use the identity $\partial_\tau \Pi' h = G_q (I-\Delta_x)^{(1+\epsilon)/2} h$ with $q= i (1+|\xi|^2)^{-(1+\epsilon)/2}
(1+ \tau i\mu_{\mathcal{A}})A'\nabla_x \mu_{\mathcal{A}} $. For $h\in \mathcal{S}(\mathbb{R}^d)$ the limit $\displaystyle \lim_{\tau \rightarrow 0}G_q (I-\Delta_x)^{(1+\epsilon)/2} h$ exists in $L^2 (\mathbb{R}^d)$ and we also have
\begin{align}
\sup_{0<\tau<2} \| G_q (\tau) f \|_{L^2 (\mathbb{R}^d)}\leq C \| f\|_{L^2 (\mathbb{R}^d)},~~~~~~~f\in \mathcal{S}(\mathbb{R}^d ). \label{proof.prop.S_A.8}
\end{align}
The proof of \eqref{proof.prop.S_A.8} is postponed to the appendix. Then from $D_{L^2}(\mathcal{Q}_{\mathcal{A}}) = H^1 (\mathbb{R}^d)$ we have
\begin{align}
{\rm L.H.S.~of~\eqref{proof.prop.S_A.7}} &\leq C \| \mathcal{Q}_{\mathcal{A}}\int_0^\infty M_\chi e^{-\tau \mathcal{Q}_{\mathcal{A}}} B^* \cdot G_q (I-\Delta_x)^{\frac{1+\epsilon}{2}} h \,{\rm d}\tau \|_{L^2 (\mathbb{R}^d)} + ~{\rm (lower~order)} \nonumber \\
& \leq C \lim_{\tau\rightarrow 0} \| B^* \cdot G_q (I-\Delta_x)^{\frac{1+\epsilon}{2}} h \|_{L^2 (\mathbb{R}^d)} \nonumber \\
& ~~~~~~~~~~ + C \int_0^\infty M_\chi \| e^{-\tau \mathcal{Q}_{\mathcal{A}}} B^* \cdot \partial_\tau G_q (I-\Delta_x)^{\frac{1+\epsilon}{2}} h \|_{L^2 (\mathbb{R}^d)} \,{\rm d}\tau +~{\rm (lower ~order)}\nonumber \\
&\leq C \| h \|_{H^{1+\epsilon}(\mathbb{R}^d)} + C \int_0^\infty M_\chi \| \partial_\tau G_q (I-\Delta_x)^{\frac{1+\epsilon}{2}} h \|_{L^2 (\mathbb{R}^d)} \,{\rm d}\tau +{\rm ~ (lower~order)}. \label{proof.prop.S_A.9}
\end{align}
By the definition of $q$ we see $\partial_\tau G_q = \tau^{-1+\epsilon} G_{\tilde q}$ with $\tilde q =\tau^{1-\epsilon} ( i q\mu_{\mathcal{A}} + \partial_\tau q )$, and $\tilde q$ satisfies the condition \eqref{assume.lem.G_p.1} with $T=2$. Thus, \eqref{est.lem.G_p.1} implies ~${\rm R.H.S.~of~\eqref{proof.prop.S_A.9}}\leq C \| h \|_{H^{1+\epsilon}(\mathbb{R}^d)}$. The proof is complete.
\subsection{Proof of Theorems \ref{thm.main.1}, \ref{thm.main.2}, and \ref{thm.main.3}}\label{subsec.proof.thm}
\noindent {\it Proof of Theorem \ref{thm.main.1}.} The assertion (ii) of Theorem \ref{thm.main.1} with $s=0$ and $s=1$ is already proved in Corollary \ref{cor.lem.U_1'} and Theorem \ref{thm.domain.poisson.H^1}. Then the case $s\in (0,1)$ follows from the interpolation inequality and the details are omitted. It remains to show the last statement of (i). By Theorem \ref{thm.domain.poisson.H^1} we have
\begin{align}
u\in D_{L^2}(\mathcal{P}_{\mathcal{A}}^2) \Longleftrightarrow \mathcal{P}_{\mathcal{A}}u \in D_{L^2} (\mathcal{P}_{\mathcal{A}}) = H^1 (\mathbb{R}^d) \Longleftrightarrow u\in D_{H^1} (\mathcal{P}_{\mathcal{A}}) = H^2 (\mathbb{R}^d).
\end{align}
It is also easy to see the norm equivalence between $H^2 (\mathbb{R}^d)$ and $D_{L^2} (\mathcal{P}_{\mathcal{A}}^2 )$. Then the sectorial operator $T=I + \mathcal{P}_{\mathcal{A}}$ in $L^2 (\mathbb{R}^d)$, which is invertible by \cite[Remark 2.6]{MaekawaMiura1}, satisfies
\begin{align}
(L^2 (\mathbb{R}^d), D_{L^2} (T^2) )_{\frac12,2} = (L^2 (\mathbb{R}^d), H^2 (\mathbb{R}^d) )_{\frac12,2} = H^1 (\mathbb{R}^d) = D_{L^2} (T).\label{identity.komatsu}
\end{align}
By the Komatsu theorem \cite[Theorem 6.6.8]{Haase} the identity \eqref{identity.komatsu} implies that the operator $T$ admits a bounded $H^\infty$ calculus in $L^2(\mathbb{R}^d)$. The proof is complete.
\noindent {\it Proof of Theorem \ref{thm.main.2}.} The assertions follow from Corollary \ref{cor.lem.U_1'} and Theorem \ref{thm.domain.Q} together with Proposition \ref{prop.pre.3}. The proof is complete.
\noindent {\it Proof of Theorem \ref{thm.main.3}.} The case $s=0$ is already proved by Theorem \ref{thm.domain.DN}. It suffices to consider the endpoint case $s=1$. Theorem \ref{thm.domain.DN} implies that $H^1 (\mathbb{R}^d)$ is invariant under the action of $\{e^{-t\Lambda_{\mathcal{A}}}\}_{t\geq 0}$ and the restriction of this semigroup in $H^1 (\mathbb{R}^d)$ is also analytic and strongly continuous. Hence it suffices to show that the generator of this restriction semigroup satisfies $D_{H^1}(\Lambda_{\mathcal{A}}) = H^2(\mathbb{R}^d)$ with equivalent norms. By the proof of Theorem \ref{thm.domain.DN} we have
\begin{align*}
\Lambda_{\mathcal{A}}=\mathcal{J}_{\mathcal{A}},~~~~~~~\mathcal{J}_{\mathcal{A}}=-i\lambda_{\mathcal{A}}(\cdot,D_x) - M_b S_{\mathcal{A},1}~~~~~{\rm as ~an ~operator ~in~}~ L^2 (\mathbb{R}^d),
\end{align*}
where $\lambda_{\mathcal{A}}(\cdot,D_x)$ is the pseudo-differential operator with symbol $\lambda_{\mathcal{A}}(x,\xi) = b(x)\mu_{\mathcal{A}}(x,\xi) +{\bf r_2} (x) \cdot \xi$. On the other hand, by Lemma \ref{lem.invariant.lambda} and Theorem \ref{thm.class.Phi} the operator $i\lambda_{\mathcal{A}}(\cdot,D_x)$ generates a strongly continuous and analytic semigroup in $H^1(\mathbb{R}^d)$, and $D_{H^1}(\lambda_{\mathcal{A}}(\cdot,D_x)) = H^2(\mathbb{R}^d)$ holds with equivalent norms. The same is then true for $\mathcal{J}_{\mathcal{A}}$, since $b$ is Lipschitz and $S_{\mathcal{A},1}$ is of lower order by Proposition \ref{prop.S_A}. Since it is easy to see that $\Lambda_{\mathcal{A}}=\mathcal{J}_{\mathcal{A}}$ as an operator in $H^1 (\mathbb{R}^d)$, we conclude that $D_{H^1}(\Lambda_{\mathcal{A}}) = H^2(\mathbb{R}^d)$ holds with equivalent norms. The proof is complete.
\begin{rem}
\label{Hinfty}
{\rm In the proof of Theorem \ref{thm.main.1} we have established the expansion $\mathcal{P}_{\mathcal{A}}=-i \mu_{\mathcal{A}} (\cdot, D_x) + R$, where $\mu_{\mathcal{A}}(\cdot, D_x)$ is the pseudo-differential operator with symbol \eqref{mu_A.intro} and $R$ is a bounded operator in $H^s(\mathbb{R}^d)$ for $s\in (0,1)$, while it is a bounded operator from $H^{1+\epsilon}(\mathbb{R}^d)$, $\epsilon>0$, to $H^1 (\mathbb{R}^d)$. A similar expansion is obtained also for $\Lambda_{\mathcal{A}}$. Then one can apply the results of \cite[Theorem 4.8]{ES} to obtain the stronger statement that $\mathcal{P}_{\mathcal{A}}$ (and $\Lambda_{\mathcal{A}}$) admits a bounded $H^\infty$ calculus in $H^s(\mathbb{R}^d)$, $s\in [0,1)$.
Our proof of the bounded $H^\infty$ calculus for $\mathcal{P}_{\mathcal{A}}$ is based on the characterization $D_{H^1}(\mathcal{P}_{\mathcal{A}})=H^2 (\mathbb{R}^d)$ and the Komatsu theorem \cite[Proposition 2.7]{Haase}, which is different from the approach in \cite{ES}.
}
\end{rem}
\appendix
\section{Appendix}\label{sec.appendix}
\subsection{Remark on pseudo-differential operator $\mu_{\mathcal{A}}(\cdot,D_x)$}\label{appendix.pseudo-differential}
In view of the definition \eqref{def.mu_A} for $\mu_{\mathcal{A}}(x,\xi)$, which is the root of \eqref{eq.root} with positive imaginary part, it is natural to introduce the map $\Phi: A\longmapsto \mu_{\mathcal{A}}$, i.e.,
\begin{align}
\Phi (A) =- \frac{{\bf v} (x) \cdot \xi}{2} + i \big \{ \frac{1}{b (x) } \langle A'(x) \xi,\xi\rangle -\frac{1}{4}({\bf v}(x) \cdot \xi )^2\big \}^\frac12, ~~~~~~~~~{\bf v} = \frac{{\bf r_1} + {\bf r_2}}{b}, \label{def.appendix.Phi}
\end{align}
where $A$ is a matrix satisfying the ellipticity condition \eqref{ellipticity}. We denote by $\mathcal{R}_{Lip} (\Phi)$ the range of $\Phi$ over the Lipschitz class of $A$, that is,
\begin{align*}
\mathcal{R}_{Lip} (\Phi) & = \{ \mu (x,\xi)\in {\rm Lip} (\mathbb{R}^d \times \mathbb{R}^d)~|~{\rm there~ is ~a ~matrix}~A\in ({\rm Lip}(\mathbb{R}^d ) )^{(d+1)\times (d+1)}~{\rm satisfying }~\eqref{ellipticity}\nonumber \\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {\rm for ~ some~} \nu_1,\nu_2>0~{\rm such ~that~} \mu =\Phi (A)\}.
\end{align*}
\noindent The next lemma is used in the study of $\Lambda_{\mathcal{A}}$ and $\mathcal{Q}_{\mathcal{A}}$.
\begin{lem}\label{lem.invariant.lambda} Assume that $\mu = \Phi (A) \in \mathcal{R}_{Lip} (\Phi)$ with $A=(a_{i,j})_{1\leq i,j\leq d+1}$. Set $b=a_{d+1,d+1}$, ${\bf r_1} = (a_{j,d+1})_{1\leq j\leq d}$, and ${\bf r_2} = (a_{d+1,j})_{1\leq j\leq d}$. Then the functions $\lambda(x,\xi)$ and $q (x,\xi)$ defined by
\begin{align}
\lambda (x,\xi ) = b (x) \mu (x,\xi) + {\bf r_2} (x) \cdot \xi, ~~~~~~~ q (x,\xi) = \mu (x,\xi) + \frac{{\bf r_1} (x) + {\bf r_2} (x) }{b(x)} \cdot \xi,
\end{align}
belong to $ \mathcal{R}_{Lip} (\Phi)$.
\end{lem}
\noindent {\it Proof.} Since $\mu$ solves \eqref{eq.root}, $\lambda$ and $q$ respectively satisfy
\begin{align*}
&\lambda^2 + ({\bf r_1} (x) - {\bf r_2} (x) )\cdot \xi \lambda + b(x) \langle A' (x) \xi,\xi\rangle - {\bf r_1}(x) \cdot \xi ~{\bf r_2}(x) \cdot \xi = 0,\\
& b(x) q^2 - ({\bf r_1} (x) + {\bf r_2} (x) )\cdot \xi q + \langle A' (x) \xi, \xi \rangle = 0.
\end{align*}
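These identities can be checked by direct substitution; for instance, assuming \eqref{eq.root} has the form $b\mu^2 + ({\bf r_1}+{\bf r_2})\cdot \xi\, \mu + \langle A'\xi,\xi\rangle = 0$ suggested by \eqref{def.appendix.Phi}, the substitution $\lambda = b\mu + {\bf r_2}\cdot\xi$ gives
\begin{align*}
\lambda^2 + ({\bf r_1} - {\bf r_2})\cdot \xi\,\lambda
= b \big ( b\mu^2 + ({\bf r_1}+{\bf r_2})\cdot \xi\, \mu \big ) + {\bf r_1}\cdot \xi~ {\bf r_2}\cdot \xi
= - b \langle A'\xi,\xi\rangle + {\bf r_1}\cdot \xi~ {\bf r_2}\cdot \xi,
\end{align*}
which is the first quadratic above; the computation for $q$ is analogous.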
Set $M' = \big (m_{i,j} \big )_{1\leq i,j\leq d} = \big (|b|^2 a_{i,j} - \bar{b}a_{d+1,i} a_{j,d+1} \big )_{1\leq i,j\leq d}$, ${\bf s_1} = \bar{b}{\bf r_1}$, ${\bf s_2}= -\bar{b}{\bf r_2}$, and set $N'=A'$, ${\bf u_1}=-{\bf r_1}$, ${\bf u_2}=-{\bf r_2}$. Then the matrices
\begin{equation}
M =
\begin{pmatrix}
\mbox{} & \mbox{} & \mbox{} & \mbox{} \\
\mbox{} & \large{M'} & \mbox{} & {\bf s_1} \\
\mbox{} & \mbox{} & \mbox{} & \mbox{}\\
\mbox{} & {\bf s_2}^\top & \mbox{} & \bar{b} \end{pmatrix}, ~~~~~~~~~N =
\begin{pmatrix}
\mbox{} & \mbox{} & \mbox{} & \mbox{} \\
\mbox{} & \large{N'} & \mbox{} & {\bf u_1} \\
\mbox{} & \mbox{} & \mbox{} & \mbox{}\\
\mbox{} & {\bf u_2}^\top & \mbox{} & b \end{pmatrix}~
\end{equation}
satisfy \eqref{ellipticity} and \eqref{lipschitz} (with possibly different ellipticity constants), while we have $\lambda=\Phi (M)$ and $q=\Phi (N)$. The proof is complete.

As stated in Remark \ref{rem.pseudo}, in the proof of the characterization $D_{H^s}(\mathcal{P}_{\mathcal{A}}) = H^{1+s} (\mathbb{R}^d)$ we did not use the mapping property of the pseudo-differential operator $\mu_{\mathcal{A}} (\cdot, D_x)$, which has the representation
\begin{align}
\mu_{\mathcal{A}} (\cdot, D_x ) f (x) = \frac{1}{(2\pi)^\frac{d}{2}} \int_{\mathbb{R}^d} \mu_{\mathcal{A}} (x,\xi ) e^{ix\cdot \xi} \hat{f} (\xi) \,{\rm d} \xi ~~~~~~~~{\rm for}~~f\in \mathcal{S} (\mathbb{R}^d).
\end{align}
\noindent The results of Theorem \ref{thm.main.1} and Proposition \ref{prop.S_A} yield
\begin{thm}\label{thm.class.Phi} Let $s\in [0,1]$. Let $\mu = \Phi (A) \in \mathcal{R}_{Lip} (\Phi)$. Then the associated pseudo-differential operator $i\mu (\cdot,D_x)$ generates a strongly continuous and analytic semigroup in $H^s(\mathbb{R}^d)$, and $D_{H^s}(\mu (\cdot, D_x) ) = H^{1+s}(\mathbb{R}^d)$ holds with equivalent norms. Moreover, for the Poisson operator $\mathcal{P}_{\mathcal{A}}$ associated with $\mathcal{A}=-\nabla\cdot A\nabla$ we have the identity
\begin{align}
\mathcal{P}_{\mathcal{A}} f = - i \mu (\cdot, D_x) f- S_{\mathcal{A},1} f,~~~~~~~~f\in H^{1} (\mathbb{R}^d),\label{eq.thm.class.Phi}
\end{align}
where $S_{\mathcal{A},1}$ is the linear operator defined in the proof of Lemma \ref{lem.U_1'}, which is bounded in $H^s(\mathbb{R}^d)$, $s\in (0,1)$. In particular, $-i\mu (\cdot,D_x)$ admits a bounded $H^\infty$ calculus in $L^2 (\mathbb{R}^d)$.
\end{thm}
\begin{rem}{\rm In fact, by applying the general results of \cite{ES} for pseudo-differential operators with nonsmooth symbols, it follows that $-i\mu (\cdot,D_x)$ admits a bounded $H^\infty$ calculus in $H^s (\mathbb{R}^d)$, $s\in [0,1)$.
In this sense, the properties of $i\mu (\cdot,D_x)$ stated in Theorem \ref{thm.class.Phi} themselves are not essentially new. As commented in Remark \ref{rem.pseudo}, the special feature of our proof is that we use the information of $\mathcal{P}_{\mathcal{A}}$ to derive the properties of $-i\mu (\cdot,D_x)$, where the underlying key structure is the factorizations of $\mathcal{A}'$ and $\mathcal{A}$ in Theorem \ref{thm.main.2}.
}
\end{rem}
\noindent {\it Proof of Theorem \ref{thm.class.Phi}.} For $f\in H^{1+s}(\mathbb{R}^d)$ we define $\mu (\cdot, D_x) f = \displaystyle \lim_{n\rightarrow \infty}\mu (\cdot,D_x)f_n$ in $H^s(\mathbb{R}^d)$, where $\{f_n\}$ is a sequence in $\mathcal{S}(\mathbb{R}^d)$ converging to $f$ in $H^{1+s}(\mathbb{R}^d)$. This is well-defined since \eqref{eq.thm.class.Phi} holds for $f\in \mathcal{S}(\mathbb{R}^d)$, and then Theorem \ref{thm.main.1} and Proposition \ref{prop.S_A} imply $\|\mu (\cdot, D_x) f \|_{H^s(\mathbb{R}^d)} \leq \| \mathcal{P}_{\mathcal{A}} f \|_{H^s(\mathbb{R}^d)} + \| S_{\mathcal{A},1} f \|_{H^s(\mathbb{R}^d)} \leq C \| f \|_{H^{1+s}(\mathbb{R}^d)}$ for $f\in \mathcal{S}(\mathbb{R}^d)$. Since $D_{H^s}(\mathcal{P}_{\mathcal{A}})=H^{1+s}(\mathbb{R}^d)$ and $\mathcal{P}_{\mathcal{A}}$ is closed in $H^s(\mathbb{R}^d)$, we observe also from Proposition \ref{prop.S_A} that the above realization of $i\mu (\cdot, D_x)$ in $H^s(\mathbb{R}^d)$ satisfies \eqref{eq.thm.class.Phi} for any $f\in H^{1+s}(\mathbb{R}^d)$. Hence $i\mu (\cdot, D_x)$ defined above is a perturbation of $-\mathcal{P}_{\mathcal{A}}$ by the lower order operator $S_{\mathcal{A},1}$, and the desired properties of $i\mu (\cdot, D_x)$ then follow from those of $-\mathcal{P}_{\mathcal{A}}$ by the standard perturbation theory of sectorial operators. The proof is complete.
\subsection{Proofs of Lemma \ref{lem.G_p} and \eqref{proof.prop.S_A.8}}\label{appendix.proof.lemma}
\noindent {\it Proof of Lemma \ref{lem.G_p}.} We set
\begin{align}
G_p (x,y,t) = \frac{1}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} p (x,\xi, t) e^{i t \mu_{\mathcal{A}} (x,\xi,t )} e^{i y\cdot \xi }\,{\rm d} \xi .\label{proof.lem.G_p.1}
\end{align}
Then $\big (G_p (t) h \big )(x) = (2\pi)^{d/2} \int_{\mathbb{R}^d} G_p (x,x-y,t) h (y) \,{\rm d} y$, and thus, it suffices to show
\begin{align}
|G_p (x,y,t )|\leq C t^{-d} ( 1+\frac{|y|}{t})^{-d-\epsilon},~~~~~~~~0<t<T,~x,y\in \mathbb{R}^d,\label{proof.lem.G_p.2}
\end{align}
for some $\epsilon>0$. When $|y|\leq t$ we have from \eqref{estimate.mu_A.1} and \eqref{assume.lem.G_p.1},
\begin{align*}
|G_p (x,y,t) |\leq C \int_{\mathbb{R}^d} e^{-c t|\xi|} \,{\rm d}\xi\leq C t^{-d}\leq C t^{-d} ( 1+\frac{|y|}{t})^{-d-1}.
\end{align*}
Next we consider the case $|y|\geq t$. For any multi-index $\alpha$ of length $|\alpha|=j_0$, integration by parts yields
\begin{align*}
y^\alpha G_p (x,y,t) & = \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} ( \partial_\xi^\alpha p ) e^{it \mu_{\mathcal{A}}} e^{i y\cdot \xi} \,{\rm d} \xi + \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \sum_{\alpha\geq \beta,|\beta|\ne 0} C_{\alpha,\beta} \int_{\mathbb{R}^d} (\partial_\xi ^{\alpha-\beta} p ) ( \partial_\xi^\beta e^{it\mu_{\mathcal{A}}}) e^{i y\cdot\xi} \,{\rm d} \xi \\
& =: I + II.
\end{align*}
Let $\chi_R (\xi)$ be a smooth function such that $\chi_R =1$ for $|\xi|\leq R$, $\chi_R =0$ for $|\xi|\geq 2 R$, and $\|\nabla^k \chi_R \|_{L^\infty}\leq C R^{-k}$. We divide $I$ into a low frequency part $I_1$ and a high frequency part $I_2$, where
\begin{align*}
I_1 = \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} \chi_R \cdots \,{\rm d} \xi, ~~~~~~~~ I_2 = \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} (1-\chi_R) \cdots \,{\rm d} \xi.
\end{align*}
Then the condition \eqref{assume.lem.G_p.1} leads to $|I_1|\leq C t^{l_{j_0}} \int_{|\xi|\leq 2 R} |\xi|^{-j_0+l_{j_0}} \,{\rm d}\xi \leq C t^{l_{j_0}} R^{d-j_0 +l_{j_0}}$, while the integration by parts combined with \eqref{estimate.mu_A.1} and \eqref{assume.lem.G_p.1} gives $|y^\gamma I_2|\leq C \int_{|\xi|\geq R} |\xi|^{-d-1}\,{\rm d}\xi \leq C R^{-1}$ for any multi-index $\gamma$ satisfying $|\gamma|=d+1-j_0$. Similarly, we divide $II$ into $II_1$ and $II_2$, where
\begin{align*}
II_1 = \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} \chi_R \cdots \,{\rm d} \xi, ~~~~~~~~ II_2 = \frac{i^{j_0}}{(2 \pi)^\frac{d}{2}} \int_{\mathbb{R}^d} (1-\chi_R) \cdots \,{\rm d} \xi.
\end{align*}
Then we have from $|\beta|\geq 1$ that $|II_1|\leq C t \int_{|\xi|\leq 2 R} |\xi|^{-j_0 + 1} \,{\rm d}\xi \leq C t R^{d-j_0 + 1}$, while $II_2$ is estimated as $|y^\gamma II_2|\leq C R^{-1}$. Collecting these, we see
\begin{align*}
|G_p (x,y,t)|\leq C |y|^{-j_0} (I_1 + I_2 + II_1 + II_2) \leq C |y|^{-j_0}\big ( t^{l_{j_0}} R^{d-j_0 + l_{j_0}} + t R^{d-j_0 +1} + |y|^{-d-1+j_0}R^{-1}\big ).
\end{align*}
By taking
\begin{align*}
R=t^{-\frac{l_{j_0}}{d+1-j_0+l_{j_0}}} |y|^{-\frac{d+1-j_0}{d+1-j_0+l_{j_0}}}~~~{\rm if}~~l_{j_0}\in (0,1],~~~~~~~~ R=t^{-\frac{1}{d+2-j_0}} |y|^{-\frac{d+1-j_0}{d+2-j_0}}~~~{\rm if}~~l_{j_0}\geq 1,
\end{align*}
we get the desired estimate \eqref{proof.lem.G_p.2} also for $|y|\geq t$. Now the proof of \eqref{est.lem.G_p.1} is complete. To prove \eqref{est.lem.G_p.2} let $\psi\in C_0^\infty (\mathbb{R}^d)$ be a real-valued function with zero average such that
\[
\int_0^\infty \| \psi_s * f \|_{L^2(\mathbb{R}^d)}^2 \frac{\,{\rm d} s}{s} = \| f\|_{L^2 (\mathbb{R}^d)}^2,~~~~~~~~ f\in L^2 (\mathbb{R}^d).
\]
Here $\psi_s (x) = s^{-d} \psi (x/s)$. We may take $\psi=\Delta \tilde \psi$ so that $\| s^{-1} \nabla_x \tilde \psi_s * f \|_{L^2(\mathbb{R}^d)}\leq C \| f\|_{L^2 (\mathbb{R}^d)}$ holds. Thanks to \eqref{est.lem.G_p.1} we have $\| G_p (t) \psi_s* h \|_{L^2(\mathbb{R}^d)}\leq C \| h \|_{L^2 (\mathbb{R}^d)}$ for all $t,s>0$. Moreover, when $t\geq s>0$ we apply \eqref{est.lem.G_p.1} with $p$ replaced by $p_1 =t\xi p$ and get
\begin{align*}
\| G_p (t) \psi_s*h \|_{L^2(\mathbb{R}^d)} & = \| G_{p} (t) \nabla_x \cdot \nabla_x \tilde \psi_s * h \|_{L^2(\mathbb{R}^d)} \\
& = t^{-1} \| G_{p_1} (t) \cdot \nabla_x \tilde \psi_s*h \|_{L^2(\mathbb{R}^d)} \leq C t^{-1} \| \nabla_x \tilde \psi_s * h\|_{L^2(\mathbb{R}^d)} \leq C t^{-1} s\| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
When $s\geq t>0$ we take $l=\min \{l_0,\cdots,l_{d+1}\}>0$ and set $p_2 = (t|\xi |)^{-l/2} p$. Then it is easy to see that $p_2$ satisfies \eqref{assume.lem.G_p.1}. Hence we have
\begin{align*}
\|G_p (t) \psi_s*h \|_{L^2(\mathbb{R}^d)} & = t^{\frac{l}{2}} \| G_{p_2} (t) (-\Delta_x)^\frac{l}{4} \psi_s * h \|_{L^2(\mathbb{R}^d)} \\
& \leq C t^{\frac{l}{2}} \|(-\Delta_x)^{\frac{l}{4}} \psi_s * h\|_{L^2(\mathbb{R}^d)} \leq C t^\frac{l}{2} s^{-\frac{l}{2}}\| h \|_{L^2 (\mathbb{R}^d)}.
\end{align*}
Now we can apply the Schur lemma (cf. \cite[pp.643-644]{Grafakos}) to $\{G_p(t)\}_{t>0}$, which leads to \eqref{est.lem.G_p.2}. The proof of Lemma \ref{lem.G_p} is complete.
\noindent {\it Proof of \eqref{proof.prop.S_A.8}.} With the notation \eqref{proof.lem.G_p.1} it suffices to show
\begin{align}
|G_q(x,y,t)|\leq C \min\{ |y|^{-d+\delta},~|y|^{-d-\delta}\},~~~~~~~x,y\in \mathbb{R}^d,~0<t<2\label{proof.lem.G_p.3}
\end{align}
for some $\delta>0$, where $q(x,\xi,t) = i (1+|\xi|^2)^{-(1+\epsilon)/2} (1+i t \mu_{\mathcal{A}} )A'\nabla_x\mu_{\mathcal{A}}$. For any multi-index $\alpha$ with $d-1\leq |\alpha|\leq d$ we have
\begin{align*}
y^\alpha G_q (x,y,t) & = \frac{i^{|\alpha|}}{(2\pi)^\frac{d}{2}} \sum_{\alpha\geq \beta} C_{\alpha,\beta} \int_{\mathbb{R}^d} (\partial_\xi^{\alpha-\beta} q) (\partial_\xi^\beta e^{it\mu_{\mathcal{A}}} ) e^{i y\cdot \xi} \,{\rm d}\xi\\
& = \frac{i^{|\alpha|}}{(2\pi)^\frac{d}{2}} \sum_{\alpha\geq \beta} C_{\alpha,\beta}\big ( \int_{\mathbb{R}^d} \chi_R \cdots \,{\rm d} \xi + \int_{\mathbb{R}^d} (1-\chi_R ) \cdots \,{\rm d}\xi \big ).
\end{align*}
Here $\chi_R$ is the cut-off function from the proof of \eqref{proof.lem.G_p.2}. By the definition of $q$ we see
\begin{align*}
|\int_{\mathbb{R}^d} \chi_R \cdots \,{\rm d} \xi | \leq C \int_{|\xi|\leq 2 R} |\xi |^{-|\alpha | +1} (1+|\xi|)^{-1-\epsilon} \,{\rm d}\xi \leq C R^{d+1-|\alpha| },
\end{align*}
and
\begin{align*}
|y \int_{\mathbb{R}^d} (1-\chi_R ) \cdots \,{\rm d}\xi |\leq C \int_{|\xi|\geq R} |\xi |^{-|\alpha |} (1+|\xi| )^{ - 1 -\epsilon}\,{\rm d}\xi\leq C R^{d-|\alpha| -1-\epsilon}.
\end{align*}
Thus it follows that $|G_q(x,y,t) |\leq C |y|^{-|\alpha|} (R^{d+1-|\alpha|} + |y|^{-1} R^{d-|\alpha|-1-\epsilon} )$ for $x,y\in \mathbb{R}^d$ and $0<t<2$. If $|y|\leq 1$ then we take $|\alpha|=d-1$, while if $|y|>1$ we take $|\alpha|=d$. Then, putting $R=|y|^{-\kappa}$ with sufficiently small $\kappa>0$, we get \eqref{proof.lem.G_p.3}. The proof is complete.
\begin{thebibliography}{99}
\bibitem{Abels} H. Abels,
\newblock Pseudodifferential boundary value problems with non-smooth coefficients, Comm. Part. Diff. Eq., {\bf 30} (2005) 1463-1503.
\bibitem{Alfonseca.et.al.1} M. A. Alfonseca, P. Auscher, A. Axelsson, S. Hofmann, and S. Kim,
\newblock Analyticity of layer potentials and $L^2$ solvability of boundary value problems for divergence form elliptic equations with complex $L^\infty$ coefficients, Adv. Math. {\bf 226} (2011) 4533-4606.
\bibitem{Aucher.et.al.2} P. Auscher, A. Axelsson, and S. Hofmann,
\newblock Functional calculus of Dirac operators and complex perturbations of Neumann and Dirichlet problems, J. Funct. Anal. {\bf 255} (2008) 374-448.
\bibitem{Aucher.et.al.3} P. Auscher, A. Axelsson, and A. McIntosh,
\newblock Solvability of elliptic systems with square integrable boundary data, Ark. Mat. {\bf 48} (2010) 253-287.
\bibitem{Aucher.et.al.1} P. Auscher, S. Hofmann, M. Lacey, A. McIntosh, and Ph. Tchamitchian,
\newblock The solution of the Kato square root problem for second order elliptic operators on $\mathbb{R}^n$, Ann. Math. (2) {\bf 156} (2002) 633-654.
\bibitem{Dahlberg1} B. Dahlberg,
\newblock Estimates of harmonic measure, Arch. Ration. Mech. Anal. {\bf 65} (3) (1977) 275-288.
\bibitem{Dahlberg2} B. Dahlberg,
\newblock Poisson semigroup and singular integrals, Proc. Amer. Math. Soc. {\bf 97} (1) (1986) 41-48.
\bibitem{ES} J. Escher and J. Seiler,
\newblock Bounded $H_\infty$-calculus for pseudodifferential operators and applications to the Dirichlet-Neumann operator, Trans. Amer. Math. Soc. {\bf 360} (2008) 3945-3973.
\bibitem{Fabes.et.al.1} E. Fabes, D. Jerison, and C. Kenig,
\newblock Necessary and sufficient conditions for absolute continuity of elliptic-harmonic measure, Ann. Math. (2) {\bf 119} (1984) 121-141.
\bibitem{Grafakos} L. Grafakos,
\newblock {\it Classical and modern Fourier analysis}, Pearson Education, Inc., Upper Saddle River, NJ, 2004.
\bibitem{Haase} M. Haase,
\newblock {\it The functional calculus for sectorial operators}, Operator Theory: Advances and Applications 169, Birkh{\"a}user Verlag, Basel, 2006.
\bibitem{JerisonKenig1} D. Jerison and C. Kenig,
\newblock The Neumann problem on Lipschitz domains, Bull. Amer. Math. Soc. {\bf 4} (1981) 203-207.
\bibitem{JerisonKenig2} D. Jerison and C. Kenig,
\newblock The Dirichlet problem in nonsmooth domains, Ann. Math. (2) {\bf 113} (1981) 367-382.
\bibitem{Kato} T. Kato,
\newblock {\it Perturbation Theory for Linear Operators, second edition}, Springer-Verlag, 1976.
\bibitem{KenigPipher} C. Kenig and J. Pipher,
\newblock The Neumann problem for elliptic equations with nonsmooth coefficients, Invent. Math. {\bf 113} (3) (1993) 447-509.
\bibitem{Kenig.et.al.1} C. Kenig, H. Koch, J. Pipher, and T. Toro,
\newblock A new approach to absolute continuity of elliptic measure, with applications to non-symmetric equations, Adv. Math. {\bf 153}
(2000), 231-298.
\bibitem{KumanogoNagase} H. Kumano-Go and M. Nagase,
\newblock Pseudo-differential operators with non-regular symbols and applications, Funkcial Ekvac. {\bf 21} (1978) 151-192.
\bibitem{Lunardi} A. Lunardi,
\newblock {\it Analytic semigroups and optimal regularity in parabolic problems}, Progress in Nonlinear Differential Equations and their Applications 16, Birkh{\"a}user Verlag, Basel, 1995.
\bibitem{MaekawaMiura1} Y. Maekawa and H. Miura,
\newblock On domain of Poisson operators and factorization for divergence form elliptic operators, preprint.
\bibitem{Marschall} J. Marschall,
\newblock Pseudo-differential operators with nonregular symbols of the class $S^m_{\rho,\delta}$, Comm. Part. Diff. Eq. {\bf 12} (1987) 921-965.
\bibitem{MMS} V. Maz'ya, M. Mitrea and T. Shaposhnikova,
\newblock{The Dirichlet problem in Lipschitz domains for higher order elliptic systems with rough coefficients}, J. Anal. Math. {\bf 110} (2010) 167-239.
\bibitem{Taylor} M. E. Taylor,
\newblock {\it Pseudodifferential Operators and Nonlinear PDE}, Birkh{\"a}user, Boston, 1991.
\bibitem{Taylor2} M. E. Taylor,
\newblock {\it Tools for PDE. Pseudodifferential operators, paradifferential operators, and layer potentials.} Mathematical Surveys and Monographs, 81. American Mathematical Society, Providence, RI, 2000.
\bibitem{Verchota} G. Verchota,
\newblock Layer potentials and regularity for the Dirichlet problem for Laplace's equation in Lipschitz domains, J. Funct. Anal. {\bf 59} (1984) 572-611.
\end{thebibliography}
\end{document}
\begin{document}
\title{Extremal Graphs Without 4-Cycles}
\author[FF]{
Frank A. Firke}
\author[PK]{Peter M. Kosek}
\author[EN]{Evan D. Nash}
\author[JW]{Jason Williford}
\address[FF]{Department of Mathematics, Carleton College, Northfield, Minnesota 55057, USA}
\address[PK]{Department of Mathematics, The College at Brockport, State University of New York, Brockport, NY 14420, USA}
\address[EN]{Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588, USA}
\address[JW]{ Department of Mathematics, University of Wyoming, Laramie, Wyoming 82071, USA}
\begin{abstract}
We prove an upper bound for the number of edges a $C_4$-free graph on $q^2+q$ vertices can contain for $q$ even. This upper bound is achieved whenever there is an orthogonal polarity graph of a plane of even order $q$.
\end{abstract}
\maketitle
Let $n$ be a positive integer and $G$ a graph. We define $ex(n,G)$ to be the largest number of edges possible in a graph on $n$ vertices that does not contain $G$ as a subgraph; we call a graph on $n$ vertices \emph{extremal} if it has $ex(n,G)$ edges and does not contain $G$ as a subgraph. $EX(n,G)$ is the set of all extremal $G$-free graphs on $n$ vertices.
The problem of determining $ex(n,G)$ (and $EX(n,G)$) for general $n$ and $G$ belongs to an area of graph theory called \emph{extremal graph theory}. Extremal graph theory officially began with Tur\'an's theorem that solves $EX(n,K_m)$ for all $n$ and $m$, a result that is striking in its precision. In general, however, exact results for $ex(n,G)$ (and especially $EX(n,G)$) are very rare; most results are upper or lower bounds and asymptotic results. For many bipartite $G$ there is a large gap between upper and lower bounds.
The question of $ex(n,C_4)$ (where $C_4$ is a cycle of length 4) has an interesting history; Erd\H{o}s originally posed the problem in 1938, and the bipartite version of this problem was solved by Reiman using a construction derived from the projective plane (see \cite{B} and the references therein for a more detailed history). Reiman also determined the upper bound $ex(n,C_4) \leq \frac n 4 (1 + \sqrt{4n-3})$ for general graphs, but this is known not to be sharp \cite{R}. Erd\H{o}s, R\'enyi, and S\'os later showed that this is asymptotically correct using a construction known as the \emph{Erd\H{o}s-R\'enyi graph} derived from the orthogonal polarity graph of the classical projective plane \cite{ER} \cite{ERS}. This is part of a more general family of graphs which we define below.
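As a quick numerical illustration (our own, not from the papers cited above; the function name is ours): at $n=q^2+q+1$ one has $4n-3=(2q+1)^2$, so Reiman's bound evaluates exactly to $\frac 1 2 q(q+1)^2 + \frac{q+1}{2}$, i.e.\ it exceeds the polarity-graph edge count by $(q+1)/2$.

```python
def reiman_bound(n):
    # Reiman's upper bound: ex(n, C4) <= (n/4) * (1 + sqrt(4n - 3))
    return n * (1 + (4 * n - 3) ** 0.5) / 4

for q in (2, 3, 4, 5, 7, 8, 9):
    n = q * q + q + 1                       # number of points of a plane of order q
    polarity_edges = q * (q + 1) ** 2 // 2  # edges of an orthogonal polarity graph
    gap = reiman_bound(n) - polarity_edges
    assert abs(gap - (q + 1) / 2) < 1e-6    # the bound misses by exactly (q+1)/2
```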
Let $\pi$ be a finite projective plane with point set $P$ and line set $L$. A polarity $\phi$ of $\pi$ is an involutory permutation of $P \cup L$ which maps points to lines and lines to points and reverses containment. We call points absolute when they are contained in their own polar image. A polarity is called orthogonal if there are exactly $q+1$ absolute points. We define the polarity graph of $\pi$ to be the graph with vertex set $P$, with two distinct vertices $x,y$ adjacent whenever $x \in \phi(y)$. The graph is called an orthogonal polarity graph if the polarity is orthogonal. This graph is $C_4$-free, has $q^2$ vertices of degree $q+1$, and $q+1$ of degree $q$, for a total of $\frac 1 2 q(q+1)^2$ edges.
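To make the construction concrete, the following Python sketch (illustrative only; we take $q$ prime so that arithmetic modulo $q$ gives the field $GF(q)$) builds the polarity graph of the classical plane $PG(2,q)$ with the orthogonal polarity $x \mapsto x^{\perp}$, and checks the properties just stated:

```python
from itertools import combinations, product

def polarity_graph(q):
    """Orthogonal polarity graph of PG(2, q), q prime: vertices are projective
    points, and x ~ y iff x . y = 0 (mod q) with x != y."""
    # Normalized point representatives: first nonzero coordinate equals 1.
    pts = [v for v in product(range(q), repeat=3)
           if any(v) and v[next(i for i, c in enumerate(v) if c)] == 1]
    edges = [(i, j) for i, j in combinations(range(len(pts)), 2)
             if sum(a * b for a, b in zip(pts[i], pts[j])) % q == 0]
    return pts, edges

def has_c4(n, edges):
    # A C4 exists iff some pair of vertices has two or more common neighbours.
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    return any(len(adj[i] & adj[j]) >= 2 for i, j in combinations(range(n), 2))

pts, edges = polarity_graph(3)
assert len(pts) == 13                       # q^2 + q + 1 points
assert len(edges) == 3 * (3 + 1) ** 2 // 2  # q(q+1)^2 / 2 = 24 edges
assert not has_c4(len(pts), edges)          # C4-free
```

The $q+1$ absolute points (here the $4$ self-orthogonal points of $PG(2,3)$) are exactly the vertices of degree $q$.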
F\"uredi determined the first exact result that encompasses infinitely many $n$, namely that for $q>13$ we have $ex(q^2+q+1,C_4) \leq \frac 1 2 q(q+1)^2$ \cite{F1} \cite{F2}, with equality if and only if the graph is an orthogonal polarity graph of a plane of order $q$.
In particular, this shows $ex(q^2+q+1,C_4) = \frac 1 2 q(q+1)^2$ for all prime powers $q$.
The question of finding $ex(n,C_4)$ exactly for general $n$ appears to be a difficult problem. Computer searches by Clapham et al. \cite{CFS} and Yuansheng and Rowlinson \cite{YR} determined $EX(n,C_4)$ for all $n \leq 31$.
More general lower bounds are given in \cite{A} by deleting carefully chosen vertices from the Erd\H{o}s-R\'enyi graph. It is not known if any of these bounds are sharp in general. In particular it is not even known whether deleting a single vertex of
degree $q$ from an orthogonal polarity graph yields a graph which is still extremal, a question posed by Lazebnik in 2003 \cite{L1}.
More generally, is $ex(q^2+q,C_4) \leq \frac{1}{2}q(q+1)^2-q$? In this paper we will prove the following theorem:
\begin{theorem}\label{main}
For $q$ even, $ex(q^2+q,C_4) \leq \frac 1 2 q(q+1)^2-q$.
\end{theorem}
It follows that equality holds for all $q$ which are powers of 2.
The question of determining $EX(q^2+q,C_4)$ in this case is subtler; the searches referred to above showed that there are multiple constructions that achieve the bound for $q = 2, 3$, but for $q = 4,5$ there is only one. In a subsequent paper, we will prove the following:
\begin{theorem} For all but finitely many even $q$, any $C_4$-free graph with $ex(q^2+q,C_4)$ edges is derived from an orthogonal polarity graph by removing a vertex of minimum degree. \end{theorem}
The proof of this result is much more lengthy and complicated than that of the inequality in Theorem \ref{main}, and requires $q$ to be sufficiently large. The purpose of this paper is to give a simpler proof of the inequality and show it holds for all even $q$. We start with some notation.
We let $X_k$ be the set of vertices of degree $k$, $X_{\leq k}$ be the set of vertices of degree at most $k$, $E_0$ be $\frac 1 2 q(q+1)^2-q$, and $n$ be the number of vertices ($q^2+q$). We will use $\Gamma ( x )$ to represent the vertices in the neighborhood of $x$. For our various lemmas, we will specify in each case whether $q$ is an even number or simply a positive integer; however, in all cases we consider $q \geq 6$. (We know from \cite{CFS} and \cite{YR} that the inequality in Theorem \ref{main} is true for $q \leq 5$.)
In general, we proceed indirectly. We will show that no $C_4$-free graph with $E_0+1$ edges can exist, from which we conclude that a $C_4$-free graph cannot have more than $E_0$ edges (since deleting edges from such a graph would produce an impossible $C_4$-free graph with exactly $E_0+1$ edges).
We will use and generalize the techniques found in \cite{F1}, \cite{F2}, and \cite{F3}.
\begin{lemma}
Let $q$ be a natural number greater than 2 and let $G$ be a $C_4$-free graph on $q^2+q$
vertices with at least $E_0$ edges. Then
the
maximum degree
of a vertex
in $G$ is at most $q+2$.
\end{lemma}
\begin{proof}
Let $u$ be a vertex of $G$ of maximum degree $d$, and suppose for contradiction that $d \geq q+3$. Let $e$ be the number of edges of $G$, so that $e \geq \frac{1}{2}q(q+1)^2-q$. We proceed by bounding the
number of 2-paths in $G$ which have no endpoints in $\Gamma ( u )$.
This gives us:
$$\binom{n-d}{2} \geq \sum\limits_{v \neq u} \binom{d(v)-1}{2} $$
Using Jensen's inequality for the function $f(x) = \binom{x}{2}$ we have:
$$\sum\limits_{ v \neq u } \binom{d(v)-1}{2}
\geq (n-1)\binom{(2e-(n-1)-d)/(n-1)}{2}$$
Multiplying by $2 (n-1) (q+1)$ and simplifying yields:
\begin{equation}\label{inq}
(q+1)(n-1)(n-d)(n-d-1) \geq (q+1)(2e-n-d+1)(2e-2n-d+2)
\end{equation}
However, using $e \geq E_0$, $n = q^2+q$, and $d \geq q+3$, we also have:
$$
(q+1)(2e-2n-d+2)-(n-1)(n-d-1) \geq (q^2-2)d-q^3-2q^2+q+1
$$ $$\geq (q^2-2)(q+3)-q^3-2q^2+q+1 = q^2-q-5,$$
with $q^2-q-5>0$ for $q>2$, \\
which gives us:
\begin{equation}\label{inq1}
(q+1)(2e-2n-d+2) > (n-1)(n-d-1).
\end{equation}
We also have the inequality:
$$
(2e-n-d+1) - (q+1)(n-d) \geq -q^2-3q+1+qd
$$ $$\geq -q^2-3q+1+q(q+3) =1>0,$$
which demonstrates that:
\begin{equation}\label{inq2}
(2e-n-d+1) > (q+1)(n-d).
\end{equation}
Therefore the product of (\ref{inq1}) and (\ref{inq2}) contradicts (\ref{inq}), and the lemma follows.
\end{proof}
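The two polynomial estimates used in this proof can be spot-checked numerically; the sketch below (an illustrative sanity check, not part of the argument) verifies that at $e = E_0$ both estimates are exact polynomial identities in $q$ and $d$.

```python
# Spot-check the algebraic identities behind (inq1) and (inq2) at e = E0.
for q in range(3, 30):
    n = q * q + q
    E0 = q * (q + 1) ** 2 // 2 - q          # E_0 = q(q+1)^2/2 - q
    for d in range(q + 3, n):
        # first estimate, an identity when e = E0
        lhs1 = (q + 1) * (2 * E0 - 2 * n - d + 2) - (n - 1) * (n - d - 1)
        assert lhs1 == (q * q - 2) * d - q ** 3 - 2 * q ** 2 + q + 1
        # second estimate, likewise an identity when e = E0
        lhs2 = (2 * E0 - n - d + 1) - (q + 1) * (n - d)
        assert lhs2 == -q * q - 3 * q + 1 + q * d
```

Since $e \geq E_0$ only increases the left-hand sides, the displayed lower bounds follow.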
Since we now have an upper bound on the maximum degree of an extremal graph, we next treat graphs whose maximum degree is at most $q+1$.
\begin{theorem} \label{degseqs}
Let $q$ be an even number and let $G$ be a $C_4$-free graph on $q^2+q$ vertices,
with maximum degree $q+1$
or less.
Then, if $e$ denotes the number of edges of $G$, we have $e \leq E_0$. Furthermore, if equality holds, then the degree sequence of $G$ must be one of the following (where $z$ is a parameter):
\begin{center}
\begin{tabular}{ |l | l | l | l |}
\hline
$|X_{q+1}|$ & $|X_q|$ &$|X_{q-1}|$ &$|X_{q-2}|$ \\ \hline
$q^2-q+z$ & $2q-2z$ & $z$ & 0 \\ \hline
$q^2-q+z+1$ & $2q-2z-1$ & $z-1$ & 1 \\ \hline
\end{tabular}
\end{center}
\end{theorem}
\begin{proof}
If the maximum degree of $G$ is $q$ or less, we have $e \leq
\frac{1}{2}q(q^2+q) < E_0$ and the
theorem is immediate.
Therefore, we take $G$ to have maximum degree $q+1$.
It is clear that:
$$|X_{q+1}|+|X_{q}|+|X_{\leq q-1}|=q^2+q
$$
Noting that the degree sum of all the vertices of a graph is equal to $2e$, we
have:
\begin{equation}\label{edge}
2e \leq (q+1)|X_{q+1}|+q|X_{q}|+(q-1)|X_{\leq q-1}|=q(q^2+q) +|X_{q+1}|-|X_{\leq q-1}|
\end{equation}
Let $v$ be a vertex of degree $q+1$. We wish to bound the average degree of
$\Gamma(v)$.
We will do this by noting that the number of vertices in the collection $C(v)$ of vertices at distance at most 2 from $v$ is naturally at most $q^2+q$. We then show that $|\bigcup_{w \in \Gamma(v)} \Gamma(w)|+1 \leq |C(v)|$.
Each vertex in $\Gamma(v)$ is connected to at most one other vertex in
$\Gamma(v)$, as
otherwise it would imply a $C_4$ is in $G$. Also, since $q+1$ is odd, there
must be a
vertex in $\Gamma(v)$ which is connected to no other vertex in $\Gamma(v)$.
Rephrasing,
this means that there is at least one vertex in $C(v)$ that is not in
$\bigcup_{w \in
\Gamma(v)} \Gamma(w)$. We also know that for $w,u \in \Gamma(v)$ we have
$\Gamma(w)
\cup
\Gamma(u) = \{v\}$, otherwise it again implies $G$ has a $C_4$. We then have:
$$
\left | \bigcup_{w \in \Gamma(v)} \Gamma(w) \right |+1= \left (\sum_{w \in \Gamma(v)}
d(w)\right)-q+1
\leq |C(v)| \leq q^2+q
$$
Then we have:
$$\frac{ \sum_{w \in \Gamma(v)} d(w)}{q+1} \leq
\frac{q^2+2q-1}{q+1}=q+1-\frac{2}{q+1}
$$
Therefore, we can conclude that if a vertex of degree $q+1$ is connected to no vertex of degree at most $q-1$, then it must be connected to at least two vertices of degree $q$. Let $A$ be the set of vertices of degree $q+1$ which are connected to at least two vertices of $X_{q}$ but to no vertex of $X_{\leq q-1}$, and $B$ be the set of vertices of $X_{q+1}$ connected to at least one vertex of $X_{\leq q-1}$. Let $a=|A|$ and $b=|B|$. Naturally $a+b=|X_{q+1}|$.
We consider the number of edges $e'$ with one endpoint in $A$ and the other in $X_{q}$. As each vertex of $A$ is connected to at least two vertices of $X_{q}$, and each vertex of $X_{q}$ is connected to at most $q$ vertices of $X_{q+1}$, we have:
\begin{equation}\label{ineq1}
2a \leq e' \leq q|X_{q}|
\end{equation}
Let $e''$ be the number of edges with one endpoint in $B$ and the other in $X_{\leq q-1}$. As each vertex of $B$ is connected to at least one vertex of $X_{\leq q-1}$ and each vertex in $X_{\leq q-1}$ is connected to at most $q-1$ vertices in $B$, we have:
\begin{equation}\label{ineq2}
b \leq e'' \leq (q-1)|X_{\leq q-1}|
\end{equation}
Adding twice (\ref{ineq2}) to (\ref{ineq1}) we get:
$$2|X_{q+1}| \leq q|X_{q}|+(2q-2)|X_{\leq q-1}|
$$
Adding $q|X_{q+1}|$ to both sides we have:
$$(q+2)|X_{q+1}| \leq q(|X_{q+1}|+|X_{q}|+|X_{\leq q-1}|)+(q-2)|X_{\leq q-1}|=q^3+q^2+(q-2)|X_{\leq q-1}|
$$
Dividing both sides by $q+2$ and expanding, we obtain:
$$|X_{q+1}| \leq q^2-q+|X_{\leq q-1}|-\frac{4|X_{\leq q-1}|}{q+2}-\frac{4}{q+2}+2
$$
This implies
$$|X_{q+1}|-|X_{\leq q-1}| \leq q^2-q + 1$$
Using this with (\ref{edge}) we have:
$$2e \leq q^3+q^2 +q^2-q +1
$$
Since $q$ and $2e$ are even, we must have:
$$e \leq \frac{1}{2}q(q+1)^2-q$$
If equality holds, then we have $ q^2-q \leq |X_{q+1}|-|X_{\leq q-1}| \leq q^2-q+1 $ and $2e \leq (q+1)|X_{q+1}| + q|X_{q}| + (q-1)|X_{\leq q-1}| \leq 2e+1$. This implies that
at most one of the vertices in $X_{\leq q-1}$ has degree $q-2$, and the rest have degree $q-1$. The remainder of the theorem follows. \end{proof}
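For the smallest case $q = 2$ the degree sequences above can be observed directly: deleting a minimum-degree vertex from the orthogonal polarity graph of $PG(2,2)$ leaves $E_0 = 7$ edges and the first degree sequence of the table with $z = 0$. The following sketch (an illustrative check, not part of the proof) verifies this.

```python
q = 2
# Orthogonal polarity graph of PG(2,2): normalized projective points,
# adjacency given by the standard dot product mod 2.
pts = [(1, b, c) for b in range(q) for c in range(q)]
pts += [(0, 1, c) for c in range(q)] + [(0, 0, 1)]
dot = lambda x, y: sum(a * b for a, b in zip(x, y)) % q
adj = {p: {r for r in pts if r != p and dot(p, r) == 0} for p in pts}

v = min(pts, key=lambda p: len(adj[p]))      # an absolute (degree-q) vertex
rest = [p for p in pts if p != v]
radj = {p: adj[p] - {v} for p in rest}       # delete v

E0 = q * (q + 1) ** 2 // 2 - q
assert sum(len(radj[p]) for p in rest) // 2 == E0          # E0 = 7 edges
degs = sorted(len(radj[p]) for p in rest)
# first row of the table with z = 0: |X_{q+1}| = q^2 - q, |X_q| = 2q
assert degs.count(q + 1) == q * q - q and degs.count(q) == 2 * q
```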
It is now clear that any graph with more than $E_0$ edges must have maximum degree equal to $q+2$. The rest of the paper is devoted to showing that no such graph exists. This is done by utilizing a connection between vertices of degree $q+2$ and vertices of relatively small degree.
\begin{lemma} \label{doubleconnect}
If there is a $C_4$-free graph on $n$ vertices and $E_0+1$ edges (with $q \in \mathbb{N}$), then any vertex of degree $\delta \leq \frac q 2 + 1$ connects to every vertex of degree $q+2$.
\end{lemma}
\begin{proof} Assume for the sake of contradiction that this statement is not true. Then there is a vertex $v$ of degree $\delta$ (where $\delta \leq \frac q 2 +1$) and a vertex $u$ of degree $q+2$ such that $u \nsim v$. Then we remove all $\delta$ edges incident to $v$ and add a new edge from $u$ to $v$, so that our graph now has $e = E_0+2-\delta$ edges, a vertex $v$ of degree 1, and a vertex $u$ of degree $q+3$. Note that the new graph is still $C_4$-free, since $v$ now has degree $1$ and so lies on no cycle.
Now we extend the lemma used in \cite{F1} concerning 2-paths containing no endpoints in $\Gamma(u)$, the neighborhood of $u$. We know that we can bound the number of such 2-paths above with ${{n-d(u)} \choose 2}$, as there can be at most one 2-path between any pair of points not in $\Gamma(u)$. We also know that each vertex that is not $u$ has at most one neighbor in $\Gamma(u)$, which means that this inequality must hold: $${{n -d(u)} \choose 2} \geq \sum_{x \neq u} {{d(x) - 1} \choose 2}$$ as the right side of that inequality is a lower bound on the number of actual 2-paths in the graph.
To actually calculate this, we first note that every 2-path involving $v$ must have an endpoint in $\Gamma(u)$, which means that we can write $$\sum_{x \neq u} {{d(x) - 1} \choose 2} = \sum_{x \neq u,v} {{d(x)-1} \choose 2}.$$ If we consider the total sum being chosen from (i.e. $\sum_{x \neq u,v} (d(x) -1)$) we get the number $$2e - (n-2)-(q+3)-1$$ (since the total degree sum is $2e$ and we subtract first the degrees of the two uncounted vertices and then $1$ from each of the remaining $n-2$ terms). We can thus use Jensen's inequality to obtain this expression: $${n-(q+3) \choose 2} \geq (n-2) {\frac{2e-(n-2)-(q+3)-1}{n-2} \choose 2}$$
Now we take $$e = E_0 - \frac q 2 + 1$$ (which corresponds to $\delta = \frac q 2 + 1$). Since the left side does not depend on $e$ while the right side increases with $e$, if the inequality fails for our chosen $e$ then it will certainly fail for larger values of $e$, which correspond to smaller values of $\delta$. When we expand and simplify the above inequality, we find it is equivalent to the following: $$- \frac{2q^3-2q^2-10q+12}{(q^2+q-2)} \geq 0, $$ which is not true for any relevant $q$. \end{proof}
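The failure of the inequality under the substitutions of the proof can be confirmed with exact rational arithmetic; the check below is illustrative only.

```python
from fractions import Fraction as F

def comb2(t):
    # binomial coefficient C(t, 2) for a rational argument t
    return t * (t - 1) / 2

for q in range(6, 32, 2):
    n = q * q + q
    E0 = q * (q + 1) ** 2 // 2 - q
    e = E0 - q // 2 + 1              # edge count after the modification
    d = q + 3                        # new degree of u
    lhs = comb2(F(n - d))
    total = 2 * e - (n - 2) - d - 1  # degree sum fed into Jensen's inequality
    rhs = (n - 2) * comb2(F(total, n - 2))
    assert lhs < rhs                 # the inequality fails, as claimed
```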
\begin{corollary} \label{dcc}
For any $q$, if $|X_{q+2}| \geq 2$ in a $C_4$-free graph with $E_0 + 1$ edges on $n$ vertices, there can be only one vertex $v$ of degree $\frac q 2 + 1$ or less. In that case, $|X_{q+2}| \leq d(v).$
\end{corollary}
\begin{proof} The first part follows from the prior lemma and the fact that the graph is $C_4$-free; the second part follows from the lemma and the first part of the corollary. \end{proof}
\begin{lemma} \label{total2plemma}
The number of 2-paths in a $C_4$-free graph with $n$ vertices and $e = E_0+1$ edges (with $q$ even) is at most $qe - |X_{q+1}| + \frac 1 2 |X_{q+2}|$.
\end{lemma}
\begin{proof}
Since there can only be one 2-path between any two vertices (if there are more, there would be a $C_4$ in the graph), we can bound the number of 2-paths by ${n \choose 2}$. However, this may be improved by bounding the number of pairs of vertices which are not the endpoints of a 2-path. We consider how many other vertices cannot be reached in two steps from a given vertex, a function we will denote by $f(v)$. The exact number of 2-paths in the graph is ${n \choose 2} - \frac{1}{2} \sum\limits_v f(v)$, so any lower bound on $\sum\limits_v f(v)$ will in turn yield an upper bound on the number of 2-paths in the graph.
To bound $\sum\limits_v f(v)$, we can compute what $f(v)$ would be if $v$ is connected only to vertices of degree $q+1$, giving us a function we call $g(v)$. This leads us to this table:
\begin{center}
\begin{tabular}{ | l | l |}
\hline
$d(v)$ & $g(v)$\\ \hline
$q-2$ & $3q-1$\\ \hline
$q-1$ & $2q-1$ \\ \hline
$q$ & $q-1$\\ \hline
$q+1$ & $1$\\ \hline
$q+2$ & $0$\\ \hline
\end{tabular}
\end{center}
As a sample calculation, if $d(v) = q$ then $g(v) = q^2 + q - (1 + q \cdot q) = q-1$, because $v$ has $q$ neighbors that each have $q$ neighbors other than $v$. The $1$ in the row for degree $q+1$ comes from the fact that a vertex of degree $q+1$ must have one neighbor it cannot be joined to by a 2-path, since $q$ is even and hence $q+1$ is odd.
Strictly speaking, the values of $g(v)$ are not lower bounds on $f(v)$, since there are also vertices of degree $q+2$. In general, if $v$ is adjacent to $k(v)$ vertices of degree $q+2$, we have $f(v) \geq g(v)-k(v)$. Since $\sum_v k(v) = (q+2)|X_{q+2}|$, subtracting $(q+2)|X_{q+2}|$ from $\sum\limits_v g(v)$ gives us the bound
$\sum\limits_v f(v) \geq \sum\limits_v g(v) - (q+2)|X_{q+2}|$.
We note that, for $d(v) \leq q$, $g(v) = q( q+1 - d(v)) -1$; for $d(v) = q+1$, we must add 2 to that formula, while for $d(v) = q+2$ we must add $q+1$.
This allows us to establish an upper bound on the number of 2-paths in the graph as follows: \begin{align*} {n \choose 2} - \frac 1 2 \sum_{v \in V(G)} f(v) & \leq {q^2 + q \choose 2} - \frac 1 2 \Big( \sum_{v \in V(G)} g(v) - (q+2)|X_{q+2}| \Big) \\ &= \frac 1 2 [ (q^2+q)(q^2+q-1) - (\sum_{v \in V(G)} (q( q+1 - d(v)) -1 ) + 2|X_{q+1}| - |X_{q+2}|)] \\ &= \frac 1 2 [(q^2+q)(q^2+q-1) - |V(G)|(q^2+q-1) -2|X_{q+1}| + |X_{q+2}| + q\sum_{v \in V(G)} d(v) ]\\ &= qe - |X_{q+1}| + \frac 1 2 |X_{q+2}| \end{align*} and so we have our result. \end{proof}
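The two counting facts underlying this proof — that the number of 2-paths equals $\sum_v \binom{d(v)}{2}$, and that in a $C_4$-free graph each pair of vertices is joined by at most one 2-path, so there are at most $\binom n2$ of them — can be seen on a small example (an illustrative sketch, not part of the argument).

```python
from itertools import combinations
from math import comb

# bowtie: two triangles sharing vertex 0 -- a C4-free graph on 5 vertices
n = 5
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)]
adj = [set() for _ in range(n)]
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# count 2-paths two ways: by central vertex and by pairs of endpoints
by_center = sum(comb(len(adj[v]), 2) for v in range(n))
by_pairs = sum(len(adj[u] & adj[w]) for u, w in combinations(range(n), 2))
assert by_center == by_pairs
# C4-freeness: every pair of vertices has at most one common neighbour,
# hence the number of 2-paths is at most C(n, 2)
assert all(len(adj[u] & adj[w]) <= 1 for u, w in combinations(range(n), 2))
assert by_center <= comb(n, 2)
```

For the bowtie both counts equal $\binom 52 = 10$, so the bound is attained.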
\begin{lemma}
For any $C_4$-free graph $G$ on $n$ vertices with $E_0+1$ edges ($q$ even), $\delta(G) > \frac q 2 +1$.
\end{lemma}
\begin{proof}
Assume for the sake of contradiction that there is a vertex $v$ such that $d(v) = \delta \leq \frac q 2 +1$. We know from Lemma \ref{doubleconnect} and Corollary \ref{dcc} that $v$ is unique and $v$ is connected to every vertex in $X_{q+2}$. We also know from Lemma \ref{total2plemma} that this inequality must hold: \begin{equation} \label{2path} qe - |X_{q+1}| + \frac 1 2 |X_{q+2}| \geq |X_{q+2}| {{q+2} \choose 2} + \sum_{x \neq v,\, x \not\in X_{q+2}} {{d(x)} \choose 2} + {{\delta} \choose 2} \end{equation} since the right hand side is the total number of 2-paths in the graph.
Take $A$ to be the set of vertices that are neither $v$ nor in $X_{q+2}$. We wish to find the average degree of $A$, which we will denote by $c$. Since the maximum degree of any vertex in $A$ is $q+1$, $c \leq q+1$. Moreover, $c$ is minimized when $|X_{q+2}| = \delta = \frac q 2 +1$; in that case, we can calculate $c$: $$ c = \frac{2(E_0+1) - (\frac q 2 +1) - (\frac q 2 +1)(q+2)} {n - (\frac q 2 + 1 + 1)}, $$ which yields $c = q+1 - \frac{4q-2}{2q^2+q-4}$. Since the subtracted term is less than 1 for all $q$, we have $q < c \leq q+1$.
Now, clearly, for a fixed $|X_{q+2}|$ the left side of (\ref{2path}) is maximized when $|X_{q+1}|$ is minimized. We also wish to minimize the term $M = \sum_{x \in A} {{d(x)} \choose 2}$. If there is a vertex $y$ of degree $q - k$ (for some integer $k \geq 1$), then we keep the same degree sum in $A$ (which is fixed, since $\delta$ and $|X_{q+2}|$ are fixed) if we take a vertex of degree $q+1$, turn it into a vertex of degree $q$, and raise the degree of $y$ by 1. Moreover, this will actually decrease $M$, because of the following arithmetic: \begin{align*} {q \choose 2} - {{q+1} \choose 2} + {{q - k +1} \choose 2} - {{q-k} \choose 2} &= -q + (q-k) \\ &= -k < 0, \end{align*} and so if there is a vertex of degree $q-1$ or less in $A$ then $M$ is not minimal.
Thus we see that both $M$ and $|X_{q+1}|$ are minimized when every vertex in $A$ has degree $q$ or degree $q+1$. Thus, if (\ref{2path}) does not hold in that case, it cannot hold in any case. To obtain values for $|X_{q+1}|$ and $|X_{q}|$, we solve this system of equations: \begin{align*} |X_{q+2}| + |X_{q+1}| + |X_{q}| + 1 &= n \\ |X_{q+2}|(q+2) + |X_{q+1}|(q+1) + |X_{q}| q + \delta &= 2(E_0+1) \end{align*} which makes $|X_{q+1}| = q^2 + 2 - \delta - 2|X_{q+2}|$ and $|X_{q}| = q -3 +|X_{q+2}| +\delta$.
When we plug those values into (\ref{2path}) and group terms in terms of $\delta$ we obtain this expression: $$-\frac 1 2 \delta^2 +(q+\frac 3 2)\delta + \frac 3 2 |X_{q+2}| - \frac 3 2 q - \frac 1 2 q^2 - 2 \geq 0.$$ Viewed as a function of $\delta$, the left side is a downward-opening quadratic, so the inequality can hold only between its zeros; in particular the discriminant $-7+12|X_{q+2}|$ must be nonnegative (which already forces $|X_{q+2}| \geq 1$), and $$\delta \geq \frac 3 2 + q - \frac 1 2 \sqrt{-7 + 12|X_{q+2}|}.$$ Combining this with $\delta \leq \frac q 2 + 1$ yields $\sqrt{-7+12|X_{q+2}|} \geq q+1$, that is, $$|X_{q+2}| \geq \frac{q^2+2q+8}{12}.$$ For $q \geq 6$ this gives $|X_{q+2}| > \frac q 2 + 1$; in particular $|X_{q+2}| \geq 2$, so Corollary \ref{dcc} applies and $\delta \geq |X_{q+2}| > \frac q 2 + 1$, which contradicts our initial assumption. Therefore $\delta(G) > \frac q 2 + 1$. \end{proof}
\begin{lemma}
If $G$ is a $C_4$-free graph on $n$ vertices with $E_0+1$ edges (with no restrictions on $q$), then any two vertices of degree $q+2$ in $G$ must share exactly one neighbor. \end{lemma}
\begin{proof} By way of contradiction, suppose that there exist two vertices $u,v\in G$ both of degree $q+2$ that share no neighbors.
We will expand on a technique used by F\"uredi in \cite{F1} to consider 2-paths without endpoints in either $\Gamma(u)$ or $\Gamma(v)$. Denote this quantity by $P$. Let $d=q+2$ be the degree of $u$ and $v$.
We know that ${{n-d(x)} \choose 2}$ is an upper bound for the number of 2-paths without endpoints in the neighborhood of a chosen vertex $x$ with degree $d(x)$, since every two vertices not in the neighborhood of $x$ are endpoints of at most one 2-path. In our case, we have an upper bound on $P$ of ${{n-2d} \choose 2}$, as we are removing two disjoint neighborhoods of size $d$.
Now, we know that $\sum_{w\neq u,v}{{d(w)} \choose 2}$ is precisely the number of 2-paths whose central vertex is not $u$ or $v$, but we must subtract two from each degree to account for the possibility that a given $w$ shares a neighbor with both $u$ and $v$. Thus we find that $\sum_{w\neq u,v}{{d(w)-2} \choose 2}$ is a lower bound for $P$, as it assumes all vertices $w$ share a neighbor with both $u$ and $v$.
We use Jensen's inequality to get the following result: $$\sum_{w\neq u,v}{{d(w)-2} \choose 2}\geq (n-2){{(2(E_0+1)-2(n-2)-2d)/(n-2)} \choose 2}.$$
Then we have an inequality that must hold for the graph $G$ to exist: $${{n-2d}\choose 2}-1\geq P \geq (n-2){{(2(E_0+1)-2(n-2)-2d)/(n-2)} \choose 2}.$$
After simplifying and solving for $q$, we get the following inequality: $$\frac{-q^4-6q^3+17q^2+34q-48}{q^2+q-2}\geq 0.$$ However, this inequality cannot hold for any $q$ in the range we are concerned with. This contradiction shows that $u$ and $v$ must share at least one neighbor, and we know they cannot share more than one because this would create a $C_4$. This result implies that every pair of vertices of degree $q+2$ must have exactly one neighbor in common. \end{proof}
\begin{lemma}
If $G$ is a $C_4$-free graph on $n$ vertices with $E_0+1$ edges (where $q$ can be even or odd), any two vertices of degree $q+2$ must share a neighbor of degree $d < \frac q 2$.
\end{lemma}
\begin{proof}
We know from the previous lemma that two vertices of degree $q+2$ must have a neighbor in common. Consider two such vertices $x$ and $y$, and let their unique common neighbor be $u$. By adapting the argument of the previous lemma and applying it to $B = \Gamma(x) \cup \Gamma(y)$, i.e. looking at $P'$, the number of 2-paths with no endpoints in $B$, we obtain the following inequality:
$${{n-2(q+2)+1} \choose 2} \geq \sum_{v \not\in \Gamma(u)} {{d(v)-2} \choose 2} + \sum_{v \in \Gamma(u),\, v\neq x,y} {{d(v)-1} \choose 2}$$
We see the left hand side is an upper bound on $P'$; we add 1 back to the number of vertices excluded because $\Gamma(x)$ and $\Gamma(y)$ intersect in $u$. The right hand side is separated into two sums. The first sum runs over all vertices $v$ not in the neighborhood of $u$, and relies on the fact that $v$ can connect to at most one vertex in the neighborhood of each of the two degree-$(q+2)$ vertices before creating a $C_4$. The second sum runs over all vertices $v \in \Gamma(u)$ other than $x$ and $y$: since $u \in \Gamma(x) \cap \Gamma(y)$, a second neighbor of $v$ in $\Gamma(x)$ or in $\Gamma(y)$ would create a $C_4$, so at least $d(v)-1$ of the neighbors of $v$ lie outside $B$.
We use Jensen's inequality on the right hand side to obtain the following expression:
$$\sum_{v \not\in \Gamma(u)} {{d(v)-2} \choose 2} + \sum_{w \in \Gamma(u)} {{d(w)-1} \choose 2} \geq (n-2){{\frac{2e-2(q+2)-2(n-2)+d(u)-2}{n-2}} \choose 2}$$ (We subtract 2 because we should not count $x$ and $y$ for $u$'s total.)
Clearly, if this inequality fails for a given value of $d(u)$, it must fail for any larger value, since the right side increases with $d(u)$ and the left side is static. Thus, we plug in $\frac q 2$, which yields the following expression: $$- \frac{6q^3-25q^2-28q+96}{8q^2+8q-16} \geq 0$$
That inequality fails for all relevant $q$, thus the statement is proven. \end{proof}
\begin{corollary} \label{noqp2}
If $q$ is even, any $C_4$-free graph with $E_0+1$ edges and $n$ vertices has exactly one vertex of degree $q+2$.
\end{corollary}
\begin{proof} By the two preceding lemmas, any two distinct vertices of degree $q+2$ would share a neighbor of degree less than $\frac q 2$, which is impossible since $\delta(G) > \frac q 2 +1$; hence there is at most one vertex of degree $q+2$. On the other hand, by Theorem \ref{degseqs} a $C_4$-free graph with $E_0+1$ edges must have maximum degree at least $q+2$, and by the first lemma its maximum degree is at most $q+2$. \end{proof}
Having reduced the hypothetical counterexamples to a single case, we proceed with the proof of the theorem.
\begin{theorem}
For $q$ even, $ex(q^2+q,C_4) \leq \frac 1 2 q(q+1)^2-q$.
\end{theorem}
\begin{proof} We know from Corollary \ref{noqp2} that we must consider only the case when $|X_{q+2}| = 1$. It is clear that any graph $G$ with $n$ vertices, $E_0+1$ edges, and $|X_{q+2}| = 1$ can only be created by taking a graph $G'$ with $E_0$ edges and $\Delta(G') = q+1$ and connecting a vertex of degree $q+1$ to a vertex of strictly lower degree. We know from Theorem \ref{degseqs} that $G'$ can have one of two possible degree sequences up to a parameter $z$. When we examine all the ways to make the necessary connection, we get these four possible degree sequences for $G$:
\begin{center}
\begin{tabular}{ | l | l | l | l | l | l |}
\hline
& $|X_{q+2}|$ & $|X_{q+1}|$ & $|X_q|$ &$|X_{q-1}|$ &$|X_{q-2}|$ \\ \hline
A & 1 & $q^2-q+z$ & $2q-2z-1$ & $z$ & 0\\ \hline
B & 1 & $q^2-q+z$ & $2q-2z$ & $z-2$ & 1\\ \hline
C & 1 & $q^2-q+z+1$ & $2q-2z-2$ & $z-1$ & 1\\ \hline
D & 1 & $q^2-q+z-1$ & $2q-2z+1$ & $z-1$ & 0\\ \hline
\end{tabular}
\end{center}
Since we have a specific degree sequence, we can use Lemma \ref{total2plemma} concerning the total number of 2-paths in $G$ to generate the following inequality: $$qe - |X_{q+1}| + \frac 1 2 |X_{q+2}| \geq |X_{q+2}| {{q+2} \choose 2}$$ $$+ \ |X_{q+1}|{{q+1} \choose 2} + |X_{q}| {q \choose 2} + |X_{q-1}|{{q-1} \choose 2} + |X_{q-2}|{{q-2} \choose 2}$$
When we solve that inequality for $z$, we get the following results:
\begin{center}
{
\begin{tabular}{| l | l |}
\hline
\bigstrut A & $z \leq - \frac 1 4$ \\ \hline
\bigstrut B & $z \leq -\frac 3 4 $ \\ \hline
\bigstrut C & $z \leq -\frac 7 4 $ \\ \hline
\bigstrut D & $z \leq \frac 3 4 $ \\ \hline
\end{tabular}}
\end{center}
Since $z$ counts vertices, it must be a nonnegative integer, so cases A, B, and C are immediately impossible; in case D we would need $z = 0$, and then $|X_{q-1}| = z-1 = -1$. Thus no such $G$ is possible. \end{proof}
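For $q = 2$ the value $ex(6,C_4) = 7$, consistent with the searches of \cite{CFS} and \cite{YR}, can be confirmed by exhaustive search; the brute-force sketch below is illustrative and only feasible for very small $n$.

```python
from itertools import combinations

def is_c4_free(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # C4-free iff no pair of vertices has two common neighbours
    return all(len(adj[u] & adj[v]) <= 1
               for u, v in combinations(range(n), 2))

def ex_c4(n):
    # brute-force ex(n, C4) over all 2^C(n,2) graphs; tiny n only
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        if bin(mask).count("1") <= best:
            continue  # cannot improve the record
        edges = [p for i, p in enumerate(pairs) if mask >> i & 1]
        if is_c4_free(n, edges):
            best = len(edges)
    return best

q = 2
E0 = q * (q + 1) ** 2 // 2 - q
best = ex_c4(q * q + q)
assert best == E0
```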
\end{document}
\begin{document}
\title{Equivariant one-parameter formal deformations of Hom-Leibniz algebras}
\author{Goutam Mukherjee}
\email{[email protected]}
\address{Stat-Math Unit,
Indian Statistical Institute, Kolkata 700108,
West Bengal, India.}
\author{Ripan Saha}
\email{[email protected]}
\address{Department of Mathematics,
Raiganj University, Raiganj, 733134,
West Bengal, India.}
\subjclass[2010]{16E40, 17A30, 55N91.}
\keywords{Group action, Hom-Leibniz algebra, equivariant cohomology, formal deformation, rigidity.}
\begin{abstract}
The aim of this paper is to define a new type of cohomology for multiplicative Hom-Leibniz algebras which controls deformations of the Hom-Leibniz algebra structure. The cohomology and the associated deformation theory for Hom-Leibniz algebras developed here are also extended to the equivariant context, in the presence of finite group actions on Hom-Leibniz algebras.
\end{abstract}
\maketitle
\section{Introduction}
Gerstenhaber, in a series of papers \cite{G1, G2, G3, G4, G5}, introduced algebraic deformation theory for associative algebras. Following Gerstenhaber, deformation theories of other algebraic structures have been studied extensively in various contexts (\cite{NR}, \cite{fox}, \cite{GS}, \cite{MM}, \cite{NR66}). For example, A. Nijenhuis and R. Richardson studied the formal one-parameter deformation theory of Lie algebras \cite{NR}.
To study the deformation theory of a type of algebra one needs a suitable cohomology, called the deformation cohomology, which controls the deformations in question. In the case of associative algebras, the deformation cohomology is Hochschild cohomology, and for Lie algebras it is Chevalley-Eilenberg cohomology.
Hartwig, Larsson, and Silvestrov introduced the notion of Hom-Lie algebras in \cite{HLS}. Hom-Lie algebras appeared as examples of $q$-deformations of the Witt and Virasoro algebras. In \cite{MZ}, Makhlouf and Zusmanovich described Hom-Lie algebra structures on affine Kac-Moody algebras. The notion of a Hom-Lie algebra generalizes that of a Lie algebra: a vector space equipped with a skew-symmetric bracket and a linear self-map, called the structure map, is a Hom-Lie algebra if it satisfies the Hom-Jacobi identity, which is the Jacobi identity twisted by the structure map. Obviously, a Hom-Lie algebra is a Lie algebra when the structure map is the identity.
In \cite{loday93}, J.-L. Loday introduced a non-antisymmetric version of Lie algebras and its (co)homology, known as Leibniz algebras. The bracket of a Leibniz algebra satisfies the Leibniz identity instead of the Jacobi identity; in the presence of skew-symmetry, the Leibniz identity reduces to the Jacobi identity. Cohomology of a Leibniz algebra with coefficients in a bimodule was introduced in \cite{LP}.
Makhlouf and Silvestrov introduced the notion of a Hom-Leibniz algebra in \cite{MS08}, generalizing Hom-Lie algebras. Thus, Hom-Leibniz algebras are generalizations of both Leibniz algebras and Hom-Lie algebras. In a Hom-Leibniz algebra, the Leibniz identity is twisted by a linear self-map; the twisted identity is called the Hom-Leibniz identity. Other variants of Hom-type algebras have been studied in \cite{AEM}, \cite{M10}, \cite{MS08}, \cite{MS10}, \cite{MS102}, \cite{saha}.
In \cite{MS10}, Makhlouf and Silvestrov introduced deformation cohomologies of first and second order to study one-parameter formal deformation theory for Hom-associative and Hom-Lie algebras. Hurle and Makhlouf introduced a new type of cohomology theory considering the structure map for Hom-associative and Hom-Lie algebras in \cite{HM19}, \cite{HM19glas}. Cheng and Cai defined cohomology groups of all orders for Hom-Leibniz algebras in \cite{CA}.
In the present paper, we define a cohomology theory for multiplicative Hom-Leibniz algebras generalizing \cite{CA}. We call this new cohomology the $\alpha$-type cohomology of Hom-Leibniz algebras. We also develop a one-parameter formal deformation theory for Hom-Leibniz algebras using the $\alpha$-type cohomology as the deformation cohomology.
Finally, we define a notion of finite group action on Hom-Leibniz algebras along the lines of Bredon cohomology of a $G$-space \cite{bredon67}, and define an equivariant version of $\alpha$-type cohomology. It turns out that for a Hom-Leibniz algebra equipped with an action of a finite group, its equivariant deformations are controlled by this equivariant $\alpha$-type cohomology. Note that an action of a finite group $G$ on a Hom-Leibniz algebra $L$ over a field $\mathbb K$ naturally extends to the formal power series $L[[t]]$ by bilinearly extending the multiplication of $L$, making $L[[t]]$ a Hom-Leibniz algebra over $\mathbb K [[t]].$
The paper is organized as follows. In Section \ref{sec 1}, we recall basics of Hom-Leibniz algebras which we shall use throughout the paper.
In Section \ref{sec 2}, we show that there is a Gerstenhaber bracket on shifted cochains for the Hom-Leibniz cohomology introduced in \cite{CA}, and that the bracket induces a graded Lie algebra structure on the graded cohomology. In Section \ref{sec 3}, we introduce the $\alpha$-type cohomology of multiplicative Hom-Leibniz algebras. In Section \ref{sec 4}, we introduce the one-parameter formal deformation theory of Hom-Leibniz algebras. We define infinitesimal deformations, study the problem of extending a given deformation of order $n$ to a deformation of order $(n+1)$, and define the associated obstruction. We also study rigidity conditions for formal deformations. In Section \ref{sec 5}, we define the notion of finite group actions on Hom-Leibniz algebras and introduce equivariant $\alpha$-type cohomology for Hom-Leibniz algebras equipped with a finite group action. In the final Section \ref{sec 6}, we define equivariant formal deformations and prove that equivariant $\alpha$-type cohomology is the right notion of deformation cohomology in the present context. We end with a brief discussion of rigidity of equivariant deformations for Hom-Leibniz algebras equipped with a finite group action.
\section{Preliminaries}\label{sec 1}
In this section, we recall the basics of Hom-Leibniz algebras. Let $\mathbb{K}$ be a field of characteristic zero, though most of the constructions also work in other characteristics (except $2$) or when $\mathbb{K}$ is a ring containing the rational numbers.
\begin{defn}
A Hom-Leibniz algebra is a $\mathbb{K}$-vector space $L$ together with a $\mathbb{K}$-bilinear map $[.,.]:L\times L\to L$ and a $\mathbb{K}$-linear map (structure map) $\alpha:L\to L$ satisfying the Hom-Leibniz identity:
\begin{center}
$[\alpha(x),[y,z]]=[[x,y],\alpha(z)]-[[x,z],\alpha(y)].$
\end{center}
\end{defn}
A Hom-Leibniz algebra $(L,[.,.],\alpha)$ is called multiplicative if the structure map $\alpha$ satisfies $\alpha ([x,y])=[\alpha(x),\alpha(y)]$.
A morphism between Hom-Leibniz algebras $(L_1,[.,.]_1,\alpha_1)$ and $(L_2,[.,.]_2,\alpha_2)$ is a $\mathbb{K}$-linear map $\phi: L_1\to L_2$ which satisfies $\phi([x,y]_1)=[\phi(x),\phi(y)]_2$ and $\phi\circ\alpha_1=\alpha_2\circ\phi$.
\begin{example}
Any Hom-Lie algebra is automatically a Hom-Leibniz algebra, as in the presence of skew-symmetry the Hom-Leibniz identity is the same as the Hom-Jacobi identity.
\end{example}
\begin{example}
Given a Leibniz algebra $(L,[.,.])$ and a Leibniz algebra morphism $\alpha: L\to L$, one always gets a Hom-Leibniz algebra $(L,[.,.]_\alpha,\alpha)$, where $[x, y]_\alpha= [\alpha(x), \alpha(y)]$.
\end{example}
\begin{example}
Let $L$ be a two-dimensional $\mathbb{C}$-vector space with basis $\lbrace e_1, e_2\rbrace$. We define a bracket operation by $[e_2,e_2]=e_1$ and zero elsewhere, and the structure map is given by the matrix \[
\alpha=
\begin{bmatrix}
1 & 1 \\
0 & 1
\end{bmatrix}.
\]
It is routine to check that $(L, [.,.],\alpha)$ is a multiplicative Hom-Leibniz algebra which is not Hom-Lie.
\end{example}
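The routine verification can also be carried out mechanically; the following sketch (ours, with coordinate pairs standing for $e_1, e_2$) checks multiplicativity and the Hom-Leibniz identity on basis triples, which suffices by multilinearity, and confirms that the bracket is not skew-symmetric.

```python
# basis vectors as coordinate pairs: e1 = (1, 0), e2 = (0, 1)
def br(x, y):
    # bilinear extension of [e2, e2] = e1, zero on all other basis pairs
    return (x[1] * y[1], 0)

def alpha(x):
    # the matrix [[1, 1], [0, 1]]: alpha(e1) = e1, alpha(e2) = e1 + e2
    return (x[0] + x[1], x[1])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

basis = [(1, 0), (0, 1)]
for x in basis:
    for y in basis:
        # multiplicativity
        assert alpha(br(x, y)) == br(alpha(x), alpha(y))
        for z in basis:
            # Hom-Leibniz identity on basis triples
            lhs = br(alpha(x), br(y, z))
            rhs = sub(br(br(x, y), alpha(z)), br(br(x, z), alpha(y)))
            assert lhs == rhs
# the bracket is not skew-symmetric, so (L, [.,.], alpha) is not Hom-Lie
assert br((0, 1), (0, 1)) == (1, 0) != (-1, 0)
```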
\begin{defn}
A Hom-vector space is a $\mathbb{K}$-vector space $M$ together with a $\mathbb{K}$-linear map $\beta:M\to M$ such that vector space operations are compatible with $\beta$. We write a Hom-vector space as $(M,\beta)$.
\end{defn}
\begin{defn}
Let $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra. An $L$-bimodule is a Hom-vector space $(M,\beta)$ together with two $L$-actions (left and right multiplications), $m_l:L\otimes M\to M $ and $m_r:M\otimes L\to M $, satisfying the following conditions:
\begin{align*}
&\beta (m_l (x, m)) = m_l(\alpha(x), \beta(m)),\\
&\beta (m_r (m, x)) = m_r(\beta(m), \alpha(x)),\\
&m_r (\beta(m), [x,y]) = m_r(m_r(m, x), \alpha(y)) - m_r (m_r(m, y), \alpha(x)),\\
& m_l (\alpha(x), m_r(m, y))= m_r(m_l(x, m), \alpha (y))- m_l([x, y], \beta(m)),\\
& m_l(\alpha(x), m_l(y, m)) = m_l([x, y], \beta(m)) - m_r(m_l(x, m), \alpha(y)),
\end{align*}
for any $x, y \in L$ and $m\in M$.
\end{defn}
Note that any Hom-Leibniz algebra $(L,[.,.],\alpha)$ can be considered as a bimodule over itself by taking $m_l=m_r=[.,.]$ and $\beta=\alpha$.
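For instance, spelling out a case left implicit here: for a multiplicative Hom-Leibniz algebra, with $m_l=m_r=[.,.]$ and $\beta=\alpha$ the first two bimodule conditions reduce to the multiplicativity of $\alpha$, while the third condition becomes
\begin{align*}
[\alpha(m),[x,y]]=[[m,x],\alpha(y)]-[[m,y],\alpha(x)],
\end{align*}
which is precisely the Hom-Leibniz identity; the remaining two conditions are likewise instances of it.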
We recall the cohomology of Hom-Leibniz algebra $(L,[.,.],\alpha)$ defined in \cite{CA}.
Let
$${CL}_\alpha^n(L, L) = \lbrace \phi : L^{\otimes n} \to L \mid \alpha \circ \phi = \phi \circ \alpha^{\otimes n}\rbrace.$$
For $n\geq 1$, $\delta^n:CL^n_{\alpha}(L,L)\to CL^{n+1}_{\alpha}(L,L)$ is defined as follows:
\begin{align}
&(\delta^n \phi)(x_1,\dots,x_{n+1}) \\ \nonumber
& =[\alpha^{n-1}(x_1),\phi(x_2,\ldots,x_{n+1})]\\\nonumber
&+\sum^{n+1}_{i=2}(-1)^{i}[\phi(x_1,\ldots,\hat{x_i},\ldots,x_{n+1}),\alpha^{n-1}(x_i)] \\
& +\sum_{1\leq i<j\leq n+1}(-1)^{j+1}\phi(\alpha(x_1),\ldots,\alpha(x_{i-1}),[x_i,x_j],\alpha(x_{i+1}),\ldots,\widehat{\alpha(x_j)},\ldots,\alpha(x_{n+1})). \nonumber
\end{align}
One checks that $\delta^{n+1}\circ\delta^{n}=0$ for all $n\geq 1$, so $({CL}_\alpha^\ast(L, L), \delta)$ is a cochain complex. The cohomology of this cochain complex is discussed in \cite{CA}.
\section{Gerstenhaber bracket on cochains for Hom-Leibniz Cohomology}\label{sec 2}
In \cite{AEM}, the authors studied the Gerstenhaber algebra structure on the shifted cochains of Hom-associative algebras. In this section, we define a Gerstenhaber bracket on the shifted cochains of the Hom-Leibniz algebra cohomology introduced by Cheng and Cai \cite{CA}, and show that this bracket induces a graded Lie algebra structure on the cohomology of a Hom-Leibniz algebra.
\begin{defn}
Let $S_n$ be the permutation group of $n$ elements $1, \ldots, n.$ A permutation $\sigma \in S_n$ is called a $(p, q)$-shuffle if $p+q = n$ and
$$\sigma (1) < \cdots <\sigma (p) ~~\mbox{and}~~~\sigma (p+1) < \cdots <\sigma (p+q).$$
\end{defn}
In the group algebra $\mathbb K [S_n],$ let $Sh_{p,q}$ be the element $$Sh_{p,q}: = \sum_{\sigma} \sigma,$$ where the summation is over all $(p, q)\mbox{-shuffles}.$
For $n\geq-1$, let $CH^n(L,L)$ be the space of all $(n+1)$-linear maps $\phi: L^{\otimes n+1}\to L$ satisfying
$$\alpha\circ \phi=\phi\circ \alpha^{\otimes n+1}.$$
Let $\phi\in CH^p(L,L)$ and $\psi\in CH^q(L,L)$, where $p, q\geq 0$. We define $\psi\circ\phi\in CH^{p+q}(L,L)$ as follows,
\begin{align*}
\psi\circ\phi(x_1,\ldots,x_{p+q+1})=&\sum^{q+1}_{k=1}(-1)^{p(k-1)}\lbrace\sum_{\sigma\in Sh(p,q-k+1)}sgn(\sigma)\psi(\alpha^p(x_1),\ldots,\alpha^p(x_{k-1}),\\&\phi(x_k,x_{\sigma(k+1)},\ldots,x_{\sigma(k+p)}),\alpha^p(x_{\sigma(k+p+1)}),\ldots,\alpha^p(x_{\sigma(p+q+1)}))\rbrace.
\end{align*}
Suppose
$$CH^*(L,L)=\bigoplus_{p\geq -1} CH^p(L,L).$$
We define a bracket $[.,.]$ on $CH^*(L,L)$ as $[\psi,\phi]=\psi\circ\phi+(-1)^{pq+1}\phi\circ\psi$.
\begin{remark}
For $p=q=1$,
\begin{align*}
& \psi\circ\phi(x_1,x_2,x_{3})\\
& =\psi(\alpha(x_1),\phi(x_2,x_3))-\psi(\phi(x_1,x_2),\alpha(x_3))+\psi(\phi(x_1,x_3),\alpha(x_2))=\psi\circ_{\alpha}\phi.
\end{align*}
For $\phi=\psi$, this is nothing but the $\alpha$-associator of $\phi$.
\end{remark}
\begin{prop}
Suppose $m_1\in CH^1(L,L)$. Then $(L,m_1,\alpha)$ is a Hom-Leibniz algebra if and only if $[m_1,m_1]=0.$
\end{prop}
\begin{proof}
\begin{align*}
[m_1,m_1] (x_1, x_2, x_3)& =2\big(m_1(\alpha(x_1),m_1(x_2,x_3)) -m_1(m_1(x_1,x_2), \alpha(x_3))\\
&+m_1(m_1(x_1,x_3),\alpha(x_2))\big).
\end{align*}
Thus, $(L,m_1,\alpha)$ is a Hom-Leibniz algebra if and only if $[m_1,m_1]=0.$
\end{proof}
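A step left implicit in the proof above: for $p=q=1$ the definition of the bracket gives
\begin{align*}
[m_1,m_1]=m_1\circ m_1+(-1)^{1\cdot 1+1}\,m_1\circ m_1=2\, m_1\circ m_1,
\end{align*}
and by the preceding remark $m_1\circ m_1$ is the $\alpha$-associator of $m_1$, whose vanishing is exactly the Hom-Leibniz identity for $(L,m_1,\alpha)$. Since $\mathbb{K}$ has characteristic zero, the factor $2$ is invertible and may be discarded.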
\begin{lemma}
Let $(L,m_0,\alpha)$ be a Hom-Leibniz algebra and $\phi\in CH^p(L,L)$, then $\delta\phi=-[\phi,m_0]$, where $m_0=[.,.]$ is the Leibniz bracket of $L$.
\end{lemma}
\begin{proof}
Let $x_1,x_2,\ldots,x_{p+2}\in L$. From the coboundary formula, we have
\begin{align*}
& \delta\phi(x_1,\ldots,x_{p+2})\\
&=[\alpha^{p}(x_1),\phi(x_2,\ldots,x_{p+2})]\\
&+\sum^{p+2}_{i=2}(-1)^{i}[\phi(x_1,\ldots,\hat{x_i},\ldots,x_{p+2}),\alpha^{p}(x_i)]\\
&+\sum_{1\leq i<j\leq {p+2}}(-1)^{j+1}\phi(\alpha(x_1),\ldots,\alpha(x_{i-1}),[x_i,x_j],\alpha(x_{i+1}),\ldots,\widehat{\alpha(x_j)},\ldots,\alpha(x_{p+2})).
\end{align*}
Note that $m_0\in CH^1(L,L)$ and $[\phi,m_0]=\phi\circ m_0+(-1)^{p+1}m_0\circ \phi$.
\begin{align*}
\phi\circ m_0(x_1,\ldots,x_{p+2})=&\sum^{p+1}_{k=1}(-1)^{k-1}\lbrace\sum_{\sigma\in Sh(1,p-k+1)}sgn(\sigma)\phi(\alpha(x_1),\ldots,\alpha(x_{k-1}),\\& [x_k,x_{\sigma(k+1)}],\alpha(x_{\sigma(k+2)}),\ldots,\alpha(x_{\sigma(p+2)}))\rbrace\\
&=\sum_{1\leq k<j\leq (p+2)}(-1)^{j}\phi(\alpha(x_1),\ldots,\alpha(x_{k-1}),\\& [x_k,x_j],\alpha(x_{(k+1)}),\ldots,\alpha(x_{j-1}),\widehat{\alpha(x_j)},\alpha(x_{j+1}),\ldots,\alpha(x_{(p+2)})).
\end{align*}
On the other hand, we have
\begin{align*}
&m_0\circ\phi(x_1,\ldots,x_{p+2})\\
&=\sum_{\sigma\in Sh(p,1)}sgn(\sigma)m_0(\phi(x_1,x_{\sigma(2)},\ldots,x_{\sigma(p+1)}),\alpha^p(x_{\sigma(p+2)}))\\
&+(-1)^p[\alpha^p(x_1),\phi(x_2,\ldots,x_{p+2})]\\
&=\sum_{2\leq j\leq {p+2}}(-1)^{p+2-j}[\phi(x_1,x_2,\ldots,x_{j-1},\hat{x_j},x_{j+1},\ldots,x_{p+2}),\alpha^p(x_j)]\\
&+(-1)^p[\alpha^p(x_1),\phi(x_2,\ldots,x_{p+2})].
\end{align*}
Therefore,
\begin{align*}
[\phi,m_0] (x_1,\dots, x_{p+2}) & =(\phi\circ m_0+(-1)^{p+1}m_0\circ\phi)(x_1,\ldots,x_{p+2})\\
&=\sum_{1\leq k<j\leq p+2}(-1)^{j}\phi(\alpha(x_1), \ldots,\alpha(x_{k-1}), [x_k,x_j],\alpha(x_{k+1}),\ldots,\\
& \quad \alpha(x_{j-1}),\widehat{\alpha(x_j)},
\alpha(x_{j+1}),\ldots,\alpha(x_{p+2}))\\
&+\sum_{2\leq j\leq {p+2}}(-1)^{j+1}[\phi(x_1,x_2,\ldots,x_{j-1},\hat{x_j},x_{j+1},\ldots,x_{p+2}),\alpha^p(x_j)]\\
& -[\alpha^p(x_1),\phi(x_2,\ldots,x_{p+2})]\\
&=-\delta\phi(x_1,\ldots,x_{p+2}).
\end{align*}
Thus, we have $\delta\phi=-[\phi,m_0].$
\end{proof}
The graded $\mathbb{K}$-module $CH^*(L,L)=\bigoplus_{p\geq -1} CH^p(L,L)$ together with the bracket $[\psi,\phi]=\psi\circ\phi+(-1)^{pq+1}\phi\circ\psi$ is a graded Lie algebra \cite{bala}. If $\phi\in CH^p(L,L)$, then we define $|\phi|=p+1$.
We define a linear map $d:CH^*(L,L)\to CH^*(L,L)$ as follows:
$$d(\phi)=(-1)^{|\phi|}\delta(\phi).$$
Using the graded Lie algebra structure on $CH^*(L,L)$, we prove the following lemma.
\begin{lemma}\label{differential}
The differential $d$ satisfies the following graded derivation formula:
$$d[\psi,\phi]=[d\psi,\phi]+(-1)^{|\psi|}[\psi,d\phi],$$
for $\psi,\phi\in CH^*(L,L)$.
\end{lemma}
\begin{proof}
Let $\phi\in CH^p(L,L)$ and $\psi\in CH^q(L,L)$. To prove the lemma, we use the properties of the graded Lie algebra structure on $CH^*(L,L)$.
\begin{align*}
d[\psi,\phi]&=(-1)^{p+q+1}\delta[\psi,\phi]\\
&=-(-1)^{p+q+1}[[\psi,\phi],m_0]\\
&=[m_0,[\psi,\phi]]\\
&=-(-1)^{p(q+1)}[\phi,[m_0,\psi]]-(-1)^{p+q}[\psi,[\phi,m_0]]\\
&=(-1)^{q+1}[[m_0,\psi],\phi]+(-1)^{q+1}[\psi,d\phi]\\
&=[(-1)^{q+1}\delta\psi,\phi]+(-1)^{q+1}[\psi,d\phi]\\
&=[d\psi,\phi]+(-1)^{q+1}[\psi,d\phi]\\
&=[d\psi,\phi]+(-1)^{|\psi|}[\psi,d\phi].
\end{align*}
\end{proof}
By Lemma \ref{differential}, $(CH^*(L,L),[.,.],d)$ is a differential graded Lie algebra. The cohomology group of $(CH^*(L,L),[.,.],d)$ is denoted by $H^*_{HL}(L,L)$. It is clear from Lemma \ref{differential} that the Gerstenhaber bracket on graded cochains induces a bracket $[.,.]$ at the cohomology level, and we have the following theorem.
\begin{theorem}
$(H^*_{HL}(L,L),[.,.])$ is a graded Lie algebra.
\end{theorem}
\section{\texorpdfstring{$\alpha$}{alpha}-type Leibniz cohomology of Hom-Leibniz algebras}\label{sec 3}
For the deformation theory we need a new type of cohomology that captures deformations of both the multiplication and the structure map of a Hom-Leibniz algebra. We begin this section by introducing a cohomology theory for Hom-Leibniz algebras that takes into account both the bracket and the structure map $\alpha$; we call it the $\alpha$-type cohomology of Hom-Leibniz algebras. We will see that this cohomology generalizes the cohomology recalled in the first section.
Let $\gamma(x, y) =[x, y]$. We define the cochain complex for the cohomology of $(L, \gamma, \alpha)$ with coefficients in $L$ as follows:
\begin{align*}
\widetilde{CL}^n(L, L) & = \widetilde{CL}_\gamma^n(L, L) \oplus \widetilde{CL}_\alpha^n(L, L)\\
& = \text{Hom}_\mathbb{K}(L^{\otimes n}, L) \oplus \text{Hom}_\mathbb{K}(L^{\otimes {n-1}}, L),~~~\text{for all}~ n\geq 2.
\end{align*}
\begin{align*}
\widetilde{CL}^1 (L, L) & = \widetilde{CL}_\gamma^1 (L, L) \oplus \widetilde{CL}_\alpha^1 (L, L) = \text{Hom}_\mathbb{K}(L, L) \oplus \lbrace 0 \rbrace.
\end{align*}
\begin{align*}
\widetilde{CL}^n(L, L)= \lbrace 0 \rbrace~~~\text{for all}~ n\leq 0.
\end{align*}
We may write elements of $\widetilde{CL}^n(L, L)$ as $(\phi, \psi)$ or $\phi + \psi$, where $\phi \in \widetilde{CL}_\gamma^n(L, L)$ and $\psi \in \widetilde{CL}_\alpha^n(L, L)$. Note that we set $\text{Hom}_\mathbb{K}(\mathbb{K}, L) = \lbrace 0 \rbrace$ instead of $\mathbb{K}$ as usual, otherwise $\alpha^{-1}$ would be needed in the definition of the differential.
We define four maps with domain and range given in the following diagram:
\[
\begin{tikzcd}
\widetilde{CL}_\gamma^n(L, L) \arrow[rdd, crossing over, "{\partial_{\gamma \alpha}}" pos=.2] \arrow[r, "{\partial_{\gamma \gamma}}"] & [10 ex] \widetilde{CL}_\gamma^{n+1}(L, L)\\ [-3 ex]
\bigoplus & \bigoplus\\ [-3 ex]
\widetilde{CL}_\alpha^n(L, L)\arrow[uur, "{\partial_{\alpha \gamma}}" pos=.2] \arrow[r, "{\partial_{\alpha \alpha}} "] & \widetilde{CL}_\alpha^{n+1}(L, L)
\end{tikzcd}
\]
\begin{align}
&(\partial_{\gamma \gamma} \phi)(x_1,\dots,x_{n+1}) \\ \nonumber
& =[\alpha^{n-1}(x_1),\phi(x_2,\ldots,x_{n+1})]\\ \nonumber
&+\sum^{n+1}_{i=2}(-1)^{i}[\phi(x_1,\ldots,\hat{x_i},\ldots,x_{n+1}),\alpha^{n-1}(x_i)] \\
& +\sum_{1\leq i<j\leq n+1}(-1)^{j+1}\phi(\alpha(x_1),\ldots,\alpha(x_{i-1}),[x_i,x_j],\alpha(x_{i+1}),\ldots,\widehat{\alpha(x_j)},\ldots,\alpha(x_{n+1})). \nonumber
\end{align}
\begin{align}
&(\partial_{\alpha \alpha} \psi)(x_1,\dots,x_{n}) \\ \nonumber
& =[\alpha^{n-1}(x_1),\psi(x_2,\ldots,x_{n})]\\ \nonumber
&+\sum^{n}_{i=2}(-1)^{i}[\psi(x_1,\ldots,\hat{x_i},\ldots,x_{n}),\alpha^{n-1}(x_i)] \\
& +\sum_{1\leq i<j\leq n}(-1)^{j+1}\psi(\alpha(x_1),\ldots,\alpha(x_{i-1}),[x_i,x_j],\alpha(x_{i+1}),\ldots,\widehat{\alpha(x_j)},\ldots,\alpha(x_{n})). \nonumber
\end{align}
\begin{align}
(\partial_{\gamma \alpha} \phi) (x_1,\ldots, x_{n}) = \alpha( \phi(x_1,\ldots, x_{n}) ) - \phi(\alpha(x_1),\ldots,\alpha( x_{n}))
\end{align}
\begin{align}
& (\partial_{\alpha \gamma} \psi)(x_1,\ldots, x_{n+1})\\ \nonumber
&= \sum^{n+1} _{i=2} (-1)^i [ [\alpha^{n-2}(x_1), \alpha^{n-2} (x_i)],\psi(x_2,\ldots, \hat{x_i}, \dots,x_{n+1})] \\ \nonumber
&+\sum_{2\leq i<j \leq n+1}(-1)^{j}[\psi(x_1,\ldots, x_{i-1},\hat{x_i}, x_{i+1},\ldots,\hat{x_j},\ldots,x_{n+1}), [\alpha^{n-2}(x_i), \alpha^{n-2} (x_j)]]\nonumber
\end{align}
We set
\begin{align}
\partial (\phi + \psi) & = (\partial_{\gamma \gamma} + \partial_{\gamma \alpha})(\phi) - (\partial_{\alpha \alpha} + \partial_{\alpha \gamma})(\psi)\\
\partial (\phi, \psi) & = (\partial_{\gamma \gamma} \phi - \partial_{\alpha \gamma}\psi, \partial_{\gamma \alpha}\phi - \partial_{\alpha \alpha}\psi). \nonumber
\end{align}
\begin{thm}
$\widetilde{CL}^\ast(L, L)$ together with the map $\partial (\phi , \psi) = (\partial_{\gamma \gamma} \phi - \partial_{\alpha \gamma}\psi, \partial_{\gamma \alpha}\phi - \partial_{\alpha \alpha}\psi)$ is a cochain complex.
\end{thm}
\begin{proof}
We need to show that $\partial^2=0$. This amounts to the following equations:
\begin{align*}
& \partial_{\gamma \gamma} \partial_{\gamma \gamma}\phi - \partial_{\alpha \gamma} \partial_{\gamma \alpha}\phi - \partial_{\gamma \gamma} \partial_{\alpha \gamma}\psi + \partial_{\alpha \gamma} \partial_{\alpha \alpha}\psi = 0,\\
& \partial_{\gamma \alpha} \partial_{\gamma \gamma}\phi - \partial_{\alpha \alpha} \partial_{\gamma \alpha}\phi - \partial_{\gamma \alpha} \partial_{\alpha \gamma}\psi + \partial_{\alpha \alpha} \partial_{\alpha \alpha}\psi = 0.
\end{align*}
For this we verify the following equations
\begin{align*}
& \partial_{\gamma \gamma} \partial_{\gamma \gamma}= \partial_{\alpha \gamma} \partial_{\gamma \alpha},\\
& \partial_{\gamma \gamma} \partial_{\alpha \gamma} = \partial_{\alpha \gamma} \partial_{\alpha \alpha},\\
& \partial_{\alpha \alpha} \partial_{\alpha \alpha} = \partial_{\gamma \alpha} \partial_{\alpha \gamma},\\
& \partial_{\gamma \alpha} \partial_{\gamma \gamma} = \partial_{\alpha \alpha} \partial_{\gamma \alpha}.
\end{align*}
We verify the above equations only for $n=1$. The proof of the general case is lengthy and can be obtained following \cite{HM19}; we omit the detailed computation.
Observe that
\begin{align*}
&\partial_{\gamma \gamma} \partial_{\gamma \gamma} \phi(x_1, x_2, x_3) \\
& = [\alpha(x_1), \partial_{\gamma \gamma} \phi(x_2, x_3)] + [\partial_{\gamma \gamma}\phi (x_1, x_3), \alpha(x_2)] - [\partial_{\gamma \gamma}\phi (x_1, x_2), \alpha(x_3)] \\
& - \partial_{\gamma \gamma} \phi ([x_1, x_2], \alpha(x_3)) + \partial_{\gamma \gamma} \phi ([x_1, x_3], \alpha(x_2)) + \partial_{\gamma \gamma} \phi (\alpha(x_1), [x_2, x_3])\\
& = [\alpha (x_1), [x_2 ,\phi(x_3)]] + [\alpha(x_1), [\phi(x_2), x_3]] - [\alpha(x_1), \phi([x_2,x_3])] + [[x_1, \phi(x_3)], \alpha(x_2)] \\
& + [[\phi(x_1), x_3], \alpha(x_2)] - [\phi([x_1, x_3]), \alpha(x_2)] - [[x_1, \phi(x_2)], \alpha(x_3)] - [[\phi(x_1), x_2], \alpha(x_3)]\\
& + [\phi[x_1, x_2], \alpha(x_3)] - [[x_1, x_2], \phi \alpha(x_3)] - [\phi[x_1, x_2], \alpha(x_3)] + \phi([[x_1, x_2], \alpha(x_3)])\\
&+ [[x_1, x_3], \phi\alpha(x_2)] + [\phi[x_1, x_3], \alpha(x_2)] - \phi([[x_1,x_3], \alpha(x_2)]) + [\alpha(x_1), \phi[x_2, x_3]] \\
& + [\phi \alpha(x_1), [x_2, x_3]] - \phi([\alpha(x_1), [x_2, x_3]])\\
& = [[x_1, x_2], \alpha \phi (x_3)] - [[x_1,x_2], \phi \alpha(x_3)] - [[x_1,x_3], \alpha \phi(x_2)] + [[x_1, x_3], \phi \alpha(x_2)] \\
& - [\alpha \phi(x_1), [x_2, x_3]] + [\phi \alpha(x_1), [x_2, x_3]].
\end{align*}
Note that in the above computation the third equality is obtained by using the Hom-Leibniz identity and cancellation of terms.
On the other hand, we have
\begin{align*}
&\partial_{\alpha \gamma} \partial_{\gamma \alpha} \phi (x_1, x_2, x_3)\\
& = [[x_1, x_2], \partial_{\gamma \alpha} \phi (x_3)] - [[x_1, x_3], \partial_{\gamma \alpha} \phi (x_2)] - [\partial_{\gamma \alpha} \phi(x_1), [x_2, x_3]] \\
& = [[x_1, x_2], \alpha \phi (x_3)] - [[x_1,x_2], \phi \alpha(x_3)] - [[x_1,x_3], \alpha \phi(x_2)] + [[x_1, x_3], \phi \alpha(x_2)] \\
& - [\alpha \phi(x_1), [x_2, x_3]] + [\phi \alpha(x_1), [x_2, x_3]].
\end{align*}
Thus, $\partial_{\gamma \gamma} \partial_{\gamma \gamma} = \partial_{\alpha \gamma} \partial_{\gamma \alpha}$.
\begin{align*}
& \partial_{\gamma \alpha} \partial_{\gamma \gamma} \phi (x_1, x_2) \\
& = \alpha (\partial_{\gamma \gamma} \phi(x_1, x_2))- \partial_{\gamma \gamma} \phi (\alpha(x_1), \alpha(x_2)) \\
& = [\alpha(x_1), \alpha \phi (x_2)] + [\alpha \phi (x_1), \alpha(x_2)] - \alpha \phi ([x_1, x_2]) - [\alpha(x_1), \phi \alpha (x_2)] \\
& - [\phi \alpha (x_1), \alpha(x_2)] + \phi \alpha ([x_1, x_2]).
\end{align*}
On the other hand, we have
\begin{align*}
&\partial_{\alpha \alpha} \partial_{\gamma \alpha} \phi (x_1, x_2)\\
& = [\alpha(x_1), \partial_{\gamma \alpha} \phi (x_2)] + [\partial_{\gamma \alpha} \phi (x_1),\alpha(x_2)] - \partial_{\gamma \alpha} \phi ([x_1, x_2])\\
& = [\alpha(x_1), \alpha \phi (x_2)] - [\alpha(x_1), \phi \alpha(x_2)] + [\alpha \phi (x_1), \alpha(x_2)] - [\phi(\alpha(x_1)), \alpha(x_2)] \\
& - \alpha \phi ([x_1, x_2]) + \phi ([\alpha(x_1), \alpha(x_2)]).
\end{align*}
Thus, $\partial_{\gamma \alpha} \partial_{\gamma \gamma} = \partial_{\alpha \alpha} \partial_{\gamma \alpha}$.
As $\widetilde{CL}^1_{\alpha} (L, L) = \lbrace 0 \rbrace$, we have
\begin{align*}
& \partial_{\gamma \gamma} \partial_{\alpha \gamma}\psi (x_1, x_2) = \partial_{\alpha \gamma} \partial_{\alpha \alpha}\psi (x_1, x_2) = 0 ,\\
& \partial_{\alpha \alpha} \partial_{\alpha \alpha}\psi (x_1, x_2) = \partial_{\gamma \alpha} \partial_{\alpha \gamma}\psi (x_1, x_2) = 0.
\end{align*}
Therefore, we have proved $\partial^2 =0$ for $n=1$.
\end{proof}
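The $n=1$ computations above can be cross-checked numerically. The sketch below (our addition; it encodes the two-dimensional example from the first section together with a generic linear $1$-cochain $\phi$) verifies $\partial_{\gamma \gamma} \partial_{\gamma \gamma}\phi = \partial_{\alpha \gamma} \partial_{\gamma \alpha}\phi$ and $\partial_{\gamma \alpha} \partial_{\gamma \gamma}\phi = \partial_{\alpha \alpha} \partial_{\gamma \alpha}\phi$ on all basis tuples:

```python
from itertools import product

# The two-dimensional example: [e2,e2] = e1, alpha = [[1,1],[0,1]].
def br(u, v):   return (u[1] * v[1], 0)
def alpha(v):   return (v[0] + v[1], v[1])
def add(u, v):  return (u[0] + v[0], u[1] + v[1])
def neg(v):     return (-v[0], -v[1])

def vsum(terms):
    s = (0, 0)
    for t in terms:
        s = add(s, t)
    return s

# A generic linear 1-cochain phi (any integer 2x2 matrix will do here).
A = ((3, -1), (2, 5))
def phi(v):
    return (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])

# The four partial differentials, specialized to the lowest degrees.
def d_gg1(f):  # 1-cochain -> 2-cochain
    return lambda x, y: vsum([br(x, f(y)), br(f(x), y), neg(f(br(x, y)))])

def d_ga1(f):  # 1-cochain -> alpha-part (a map L -> L)
    return lambda x: add(alpha(f(x)), neg(f(alpha(x))))

def d_gg2(F):  # 2-cochain -> 3-cochain
    return lambda x, y, z: vsum([br(alpha(x), F(y, z)), br(F(x, z), alpha(y)),
                                 neg(br(F(x, y), alpha(z))),
                                 neg(F(br(x, y), alpha(z))),
                                 F(br(x, z), alpha(y)), F(alpha(x), br(y, z))])

def d_ag2(g):  # alpha-part in degree 2 -> 3-cochain
    return lambda x, y, z: vsum([br(br(x, y), g(z)), neg(br(br(x, z), g(y))),
                                 neg(br(g(x), br(y, z)))])

def d_aa2(g):  # alpha-part in degree 2 -> 2-ary alpha-part
    return lambda x, y: vsum([br(alpha(x), g(y)), br(g(x), alpha(y)),
                              neg(g(br(x, y)))])

def d_ga2(F):  # 2-cochain -> 2-ary alpha-part
    return lambda x, y: add(alpha(F(x, y)), neg(F(alpha(x), alpha(y))))

basis = [(1, 0), (0, 1)]
for x, y, z in product(basis, repeat=3):
    assert d_gg2(d_gg1(phi))(x, y, z) == d_ag2(d_ga1(phi))(x, y, z)
for x, y in product(basis, repeat=2):
    assert d_ga2(d_gg1(phi))(x, y) == d_aa2(d_ga1(phi))(x, y)
print("partial squared vanishes for n = 1")
```

This is of course only a sanity check in one small example; the general statement is the theorem above.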
We denote the cohomology of the cochain complex $\big( \widetilde{CL}^\ast(L, L), \partial \big)$ by $\widetilde{HL}^\ast (L, L)$ and call it $\alpha$-type Hom-Leibniz cohomology of $L$ with coefficients in itself.
\begin{remark}
Note that the $\alpha$-type cohomology for multiplicative Hom-Leibniz algebras generalizes the cohomology introduced in \cite{CA}. To show this, we consider only those elements in $\widetilde{CL}^n(L, L)$ whose second summand is zero, that is, $\widetilde{CL}_\alpha^n(L, L) = \lbrace 0 \rbrace.$ Thus, we have elements of the form $(\phi, 0)$. We define a subcomplex of $\widetilde{CL}^n(L, L)$ as follows:
\begin{align*}
{CL}_\alpha^n(L, L) & = \lbrace (\phi, 0) \in \widetilde{CL}^n(L, L) \mid \partial_{\gamma \alpha} \phi = 0 \rbrace\\
& = \lbrace \phi \in \widetilde{CL}_\gamma^n(L, L) \mid \alpha \circ \phi = \phi \circ \alpha^{\otimes n}\rbrace.
\end{align*}
The map $\partial_{\gamma \gamma}$ defines a differential on this complex, and this complex coincides with the complex defined in \cite{CA}. Thus, the $\alpha$-type cohomology generalizes the cohomology developed in \cite{CA}.
\end{remark}
\section{Deformation theory of Hom-Leibniz algebra structure}\label{sec 4}
In this section, we introduce one-parameter formal deformation theory for multiplicative Hom-Leibniz algebras and discuss how $\alpha$-type cohomology controls deformations. We only consider multiplicative Hom-Leibniz algebras.
\begin{defn}
A one-parameter formal deformation of multiplicative Hom-Leibniz algebra $(L,[.,.],\alpha)$ is given by a $\mathbb{K}[[t]]$-bilinear map $m_t:L[[t]]\times L[[t]]\to L[[t]]$ and a $\mathbb{K}[[t]]$-linear map $\alpha_t:L[[t]]\to L[[t]]$ of the forms
$$m_t=\sum_{i\geq 0} m_it^i\,\text{and}\, \alpha_t=\sum_{i\geq 0}\alpha_it^i,$$
such that,
\begin{enumerate}
\item For all $i\geq 0$, $m_i:L\times L\to L$ is a $\mathbb{K}$-bilinear map, and $\alpha_i:L\to L$ is a $\mathbb{K}$-linear map.
\item $m_0(x,y)=[x,y]$ is the bracket and $\alpha_0=\alpha$ is the structure map of $L$.
\item $(L[[t]],m_t,\alpha_t)$ satisfies the Hom-Leibniz identity, that is, \\ $m_t(\alpha_t(x),m_t(y,z))=m_t(m_t(x,y),\alpha_t(z))-m_t(m_t(x,z),\alpha_t(y)).$\label{deform equ}
\item The map $\alpha_t$ is multiplicative, that is, $m_t (\alpha_t(x), \alpha_t(y)) = \alpha_t (m_t(x, y))$.\label{deform equ mult}
\end{enumerate}
\end{defn}
Condition (\ref{deform equ}) in the last definition is equivalent to
\begin{align}
\label{deform equ 1} \sum_{\substack{i+j+k=n\\i,j,k\geq 0}}\big(m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\big)=0.
\end{align}
Condition (\ref{deform equ mult}) in the last definition is equivalent to
\begin{align}
\label{deform equ 12} \sum_{\substack{i+j+k=n\\i,j,k\geq 0}}m_i(\alpha_j(x), \alpha_k(y))- \sum_{\substack{i+j=n\\i,j\geq 0}}\alpha_i(m_j(x,y))=0.
\end{align}
For a Hom-Leibniz algebra $(L,[.,.],\alpha)$, an $\alpha_j$-associator is a map
\begin{align*}
Hom(L^{\times 2},L)\times &Hom(L^{\times 2},L)\to Hom(L^{\times 3},L),\\
&(m_i,m_k)\mapsto m_i\circ_{\alpha_j}m_k,
\end{align*}
defined as
$m_i\circ_{\alpha_j}m_k(x,y,z)=m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))$.
Using the $\alpha_j$-associators, the deformation equation may be written as
\begin{align*}
&\sum_{i,j,k\geq 0} (m_i\circ_{\alpha_j}m_k)t^{i+j+k}=0,\\
&\sum_{n\geq 0}\bigg( \sum_{\substack{i + j + k= n\\ i, j, k \geq 0}} (m_i\circ_{\alpha_j}m_k) \bigg)t^n=0.
\end{align*}
Thus, for $n=0,1,2,\ldots$, we have the following infinite family of equations:
\begin{align}
\label{deform equ 2}\sum_{\substack{i + j + k= n\\ i, j, k \geq 0}} (m_i\circ_{\alpha_j}m_k)=0.
\end{align}
We can rewrite Equation (\ref{deform equ 2}) as follows:
\begin{align} \label{deform equ 3}
( \partial_{\gamma \gamma} m_n - \partial_{\alpha \gamma} \alpha_n ) (x, y, z) = - \sum_{\substack{i + j + k= n\\ 0\leq i, j, k < n}} (m_i\circ_{\alpha_j}m_k)(x, y, z),
\end{align}
where
\begin{align}
\partial_{\gamma \gamma} m_n(x,y,z)& = [\alpha(x), m_n(y,z)] + [m_n(x,z), \alpha(y)] - [m_n(x, y), \alpha(z)] \\
& - m_n ([x, y], \alpha(z)) + m_n ([x,z], \alpha(y)) + m_n (\alpha(x), [y,z]), \nonumber
\end{align}
and
\begin{align}
\partial_{\alpha \gamma} \alpha_n (x,y,z) = - [\alpha_n(x), [y,z]] + [[x,y], \alpha_n(z)] - [[x,z], \alpha_n(y)].
\end{align}
From the multiplicativity of $\alpha_t$, we have
\begin{align} \label{df alpha 1}
\sum_{\substack{i + j + k= n\\ i, j, k \geq 0}} m_i (\alpha_j (x), \alpha_k(y)) - \sum_{\substack{i + j = n\\ i, j \geq 0}} \alpha_i (m_j (x, y)) = 0.
\end{align}
We can rewrite the Equation (\ref{df alpha 1}) as follows:
\begin{align}
(\partial_{\alpha \alpha} \alpha_n - \partial_{\gamma \alpha} m_n) (x, y) = - \sum_{\substack{i + j + k= n\\ 0\leq i, j, k < n}} m_i (\alpha_j (x), \alpha_k(y)) + \sum_{\substack{i + j = n\\ i, j > 0}} \alpha_i (m_j (x, y)) ,
\end{align}
where
\begin{align}
& \partial_{\alpha \alpha} \alpha_n (x, y) = [\alpha(x), \alpha_n(y)] + [\alpha_n(x), \alpha(y)] - \alpha_n ([x, y]),\\
& \partial_{\gamma \alpha} m_n(x, y) = \alpha m_n(x, y) - m_n (\alpha(x), \alpha(y)).
\end{align}
For $n=0$,
\begin{align*}
&m_0\circ_{\alpha_0}m_0=0,\\
&m_0(\alpha_0(x),m_0(y,z))-m_0(m_0(x,y),\alpha_0(z))+m_0(m_0(x,z),\alpha_0(y))=0,\\
&[\alpha(x),[y,z]]-[[x,y],\alpha(z)]+[[x,z],\alpha(y)]=0.
\end{align*}
This is the original Hom-Leibniz relation and from the Equation (\ref{df alpha 1}) we have
\begin{align*}
& m_0(\alpha_0(x), \alpha_0(y)) -\alpha_0 (m_0(x, y)) =0, \\
& \alpha[x, y] = [\alpha(x), \alpha(y)].
\end{align*}
This just shows $\alpha$ is multiplicative.
For $n=1$, from the Equation (\ref{deform equ 2}) we have
$$m_0\circ_{\alpha_0}m_1+m_1\circ_{\alpha_0}m_0+m_0\circ_{\alpha_1}m_0=0,$$
$[\alpha(x),m_1(y,z)]-[m_1(x,y),\alpha(z)]+[m_1(x,z),\alpha(y)]+m_1(\alpha(x),[y,z])-m_1([x,y],\alpha(z))+m_1([x,z],\alpha(y)) + [\alpha_1(x), [y, z]] - [[x,y], \alpha_1(z)] + [[x,z], \alpha_1(y)]=0$.
This is the same as
$$\partial_{\gamma \gamma} m_1(x,y,z) - \partial_{\alpha \gamma} \alpha_1 (x, y, z)=0.$$
Now from the multiplicative part of the deformation, we have
$$[\alpha(x), \alpha_1(y)] + [\alpha_1(x), \alpha(y)] + m_1(\alpha(x), \alpha(y)) - \alpha(m_1(x,y)) - \alpha_1 [x, y] = 0.$$
This is the same as
$$\partial_{\alpha \alpha} \alpha_1(x, y) - \partial_{\gamma \alpha} m_1 (x, y)= 0.$$
Thus, we have
$$\partial (m_1, \alpha_1) = 0.$$
\begin{defn}
The infinitesimal of the deformation $(m_t, \alpha_t)$ is the pair $(m_1, \alpha_1)$. More generally, if $(m_n, \alpha_n)$ is the first non-zero term of $(m_t, \alpha_t)$ after $(m_0, \alpha_0)$, then $(m_n, \alpha_n)$ is called an $n$-infinitesimal of the deformation.
\end{defn}
Therefore, we have the following theorem.
\begin{thm}\label{cocycle}
Let $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra and let $(L_t,m_t,\alpha_t)$ be a one-parameter deformation of it. Then the infinitesimal of the deformation is a $2$-cocycle of the $\alpha$-type Hom-Leibniz cohomology.
\end{thm}
\subsection{Obstructions of deformations}
Now we discuss obstructions of deformations for multiplicative Hom-Leibniz algebras from the cohomological point of view.
\begin{defn}
An $n$-deformation of a Hom-Leibniz algebra is a formal deformation of the form
$$m_t=\sum^n_{i=0}m_it^i,~~~ \alpha_t=\sum^n_{i=0}\alpha_it^i,$$
such that $m_t$ and $\alpha_t$ satisfy the Hom-Leibniz identity
$$m_t(\alpha_t(x),m_t(y,z))=m_t(m_t(x,y),\alpha_t(z))-m_t(m_t(x,z),\alpha_t(y)),$$\label{n-deform}
and $\alpha_t$ is multiplicative:
$$m_t (\alpha_t(x), \alpha_t(y)) = \alpha_t (m_t(x, y)).$$
\end{defn}
We say that an $n$-deformation $(m_t, \alpha_t)$ of a Hom-Leibniz algebra extends to an $(n+1)$-deformation if there are elements $m_{n+1}\in \widetilde{CL}_\gamma^2(L, L)$ and $\alpha_{n+1} \in \widetilde{CL}_\alpha^2(L, L)$ such that
\begin{align*}
&\bar{m_t}=m_t+m_{n+1}t^{n+1},\\
&\bar{\alpha_t}=\alpha_t+\alpha_{n+1}t^{n+1},
\end{align*}
and $(\bar{m_t}, \bar{\alpha_t})$ satisfies all the conditions of one-parameter formal deformations.
The $(n+1)$-deformation $(\bar{m_t}, \bar{\alpha_t})$ gives us the following equations.
\begin{align}
\label{obs deform equ 1} \sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}\big(m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\big)=0.
\end{align}
\begin{align}
\label{obs deform equ 12} \sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}m_i(\alpha_j(x), \alpha_k(y))- \sum_{\substack{i+j=n+1\\i,j\geq 0}}\alpha_i(m_j(x,y))=0.
\end{align}
This is the same as the following equations:
\begin{align*}
&( \partial_{\gamma \gamma} m_{n+1} - \partial_{\alpha \gamma} \alpha_{n+1} ) (x, y, z) \\
&=-\sum_{\substack{i+j+k=n+1\\0\leq i,j,k\leq n}}\big(m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\big)\\
&=-\sum_{\substack{i+j+k=n+1\\0\leq i,j,k\leq n}}(m_i\circ_{\alpha_j}m_k)(x,y,z).
\end{align*}
\begin{align*}
(\partial_{\alpha \alpha} \alpha_{n+1} - \partial_{\gamma \alpha} m_{n+1}) (x, y) = - \sum_{\substack{i + j + k= n+1\\ 0\leq i, j, k \leq n}} m_i (\alpha_j (x), \alpha_k(y)) + \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (x, y)).
\end{align*}
We define the $n$th obstruction to extending a deformation of a Hom-Leibniz algebra of order $n$ to order $n+1$ as $\text{Obs}^n = (\text{Obs}^n_\gamma, \text{Obs}^n_\alpha)$, where
\begin{align}
\label{obs equ 222}&\text{Obs}^n_\gamma (x, y, z):=-\sum_{\substack{i+j+k=n+1\\0\leq i,j,k\leq n}}(m_i\circ_{\alpha_j}m_k) (x, y, z)= (\partial_{\gamma \gamma} m_{n+1} - \partial_{\alpha \gamma} \alpha_{n+1})(x, y, z),\\
&\text{Obs}^n_\alpha (x, y)\\ \nonumber
&:= \sum_{\substack{i + j + k= n+1\\ 0\leq i, j, k \leq n}} m_i (\alpha_j (x), \alpha_k(y)) - \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (x, y)) \\ \nonumber
&= (\partial_{\gamma \alpha} m_{n+1} - \partial_{\alpha \alpha} \alpha_{n+1}) (x, y).
\end{align}
Thus, $(\text{Obs}^n_\gamma, \text{Obs}^n_\alpha) \in \widetilde{CL}^3(L, L)$ and $(\text{Obs}^n_\gamma, \text{Obs}^n_\alpha) = \partial (m_{n+1}, \alpha_{n+1})$.
\begin{theorem}
A deformation of order $n$ extends to a deformation of order $n+1$ if and only if the cohomology class of $\text{Obs}^n$ vanishes.
\end{theorem}
\begin{proof}
Suppose a deformation $(m_t, \alpha_t)$ of order $n$ extends to a deformation of order $n+1$. From the obstruction equations, we have
$$\text{Obs}^n = (\text{Obs}^n_\gamma, \text{Obs}^n_\alpha) = \partial (m_{n+1}, \alpha_{n+1}).$$
As $\partial \circ \partial=0$, the cohomology class of $\text{Obs}^n$ vanishes.
Conversely, suppose the cohomology class of $\text{Obs}^n$ vanishes, that is,
$$\text{Obs}^n=\partial (m_{n+1}, \alpha_{n+1}),$$
for some $2$-cochain $(m_{n+1}, \alpha_{n+1})$. We define $(m'_t, \alpha'_t)$ extending the deformation $(m_t, \alpha_t)$ of order $n$ as follows:
\begin{align*}
&m'_t=m_t+m_{n+1}t^{n+1},\\
&\alpha'_t=\alpha_t+ \alpha_{n+1}t^{n+1}.
\end{align*}
The maps $m'_t$ and $\alpha'_t$ satisfy the following equations for all $x,y,z\in L$:
\begin{align*}
&\sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}\big(m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\big)=0,\\
&\sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}m_i(\alpha_j(x), \alpha_k(y))- \sum_{\substack{i+j=n+1\\i,j\geq 0}}\alpha_i(m_j(x,y))=0.
\end{align*}
Thus, $(m'_t, \alpha'_t)$ is a deformation of order $n+1$ which extends the deformation $(m_t, \alpha_t)$ of order $n$.
\end{proof}
\begin{cor}
If $\widetilde{HL}^3 (L, L)=0$ then any $2$-cocycle gives a one-parameter formal deformation of $(L,[.,.],\alpha)$.
\end{cor}
\subsection{Equivalent and trivial deformations}
Let $L_t=(L,m_t,\alpha_t)$ and $L\rq_t=(L,m\rq_t,\alpha\rq_t)$ be two one-parameter Hom-Leibniz algebra deformations of $(L,[.,.],\alpha)$, where $m_t=\sum_{i\geq 0} m_it^i,\, \alpha_t=\sum_{i\geq 0}\alpha_it^i$ and $m\rq_t=\sum_{i\geq 0} m\rq_it^i,\, \alpha\rq_t=\sum_{i\geq 0}\alpha\rq_it^i$.
\begin{defn}
Two deformations $L_t$ and $L\rq_t$ are said to be equivalent if there exists a $\mathbb{K}[[t]]$-linear isomorphism $\Psi_t:L[[t]]\to L[[t]]$ of the form $\Psi_t=\sum_{i\geq 0}\psi_it^i$, where $\psi_0=Id$ and $\psi_i:L\to L$ are $\mathbb{K}$-linear maps, such that the following relations hold:
\begin{align}\label{equivalent}
\Psi_t\circ m_t\rq=m_t\circ (\Psi_t\otimes \Psi_t)\,\,\,\text{and}\,\,\, \alpha_t\circ \Psi_t=\Psi_t\circ \alpha\rq_t.
\end{align}
\end{defn}
\begin{defn}
A deformation $L_t$ of a Hom-Leibniz algebra $L$ is called trivial if $L_t$ is equivalent to $L$. A Hom-Leibniz algebra $L$ is called rigid if it has only trivial deformations up to equivalence.
\end{defn}
Condition (\ref{equivalent}) may be written as
\label{equivalent 11}$$\Psi_t(m\rq_t(x,y))=m_t(\Psi_t(x),\Psi_t(y))\,\,\, \text{and}\,\,\, \alpha_t(\Psi_t(x))=\Psi_t(\alpha\rq_t(x)),\,\,\,\text{for all}~ x,y\in L.$$
The above conditions are equivalent to the following equations:
\begin{align}
&\sum_{i\geq 0}\psi_i\bigg( \sum_{j\geq 0}m\rq_j(x,y)t^j \bigg)t^i=\sum_{i\geq 0}m_i\bigg( \sum_{j\geq 0}\psi_j(x)t^j,\sum_{k\geq 0}\psi_k(y)t^k \bigg)t^i,\\
&\sum_{i\geq 0}\alpha_i\bigg( \sum_{j\geq 0}\psi_j(x)t^j \bigg)t^i=\sum_{i\geq 0}\psi_i\bigg( \sum_{j\geq 0}\alpha\rq_j(x)t^j \bigg)t^i.
\end{align}
This is the same as the following equations:
\begin{align}
\label{equivalent 10}&\sum_{i,j\geq 0}\psi_i(m\rq_j(x,y))t^{i+j}=\sum_{i,j,k\geq 0}m_i(\psi_j(x),\psi_k(y))t^{i+j+k},\\
&\label{equivalent 101}\sum_{i,j \geq 0}\alpha_i(\psi_j(x))t^{i+j}=\sum_{i,j\geq 0}\psi_i(\alpha\rq_j(x))t^{i+j}.
\end{align}
Comparing constant terms on both sides of the above equations, we have
\begin{align*}
&m\rq_0(x,y)=m_0(x,y)=[x,y],\,\,\,\text{as}\,\,\,\psi_0=Id,\\
&\alpha_0(x)=\alpha\rq_0(x)=\alpha(x).
\end{align*}
Now comparing coefficients of $t$, we have
\begin{align}\label{equivalent main}
&m\rq_1(x,y)+\psi_1(m\rq_0(x,y))=m_1(x,y)+m_0(\psi_1(x),y)+m_0(x,\psi_1(y)),\\
\label{equivalent main 1}&\alpha_1(x)+\alpha_0(\psi_1(x))=\alpha\rq_1(x)+\psi_1(\alpha\rq_0(x)).
\end{align}
Equations (\ref{equivalent main}) and (\ref{equivalent main 1}) are the same as
\begin{align*}
& m\rq_1(x,y)-m_1(x,y)=[\psi_1(x),y]+[x,\psi_1(y)]-\psi_1([x,y])=\partial_{\gamma \gamma} \psi_1(x,y).\\
& \alpha_1\rq (x) - \alpha_1(x) = \alpha(\psi_1(x)) - \psi_1(\alpha(x)) = \partial_{\gamma \alpha} \psi_1(x).
\end{align*}
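These two relations can be packaged compactly (an observation in the notation of Section \ref{sec 3}): applying the differential $\partial$ to the pair $(\psi_1,0)\in \widetilde{CL}^1(L,L)$ gives
\begin{align*}
(m\rq_1-m_1,\,\alpha\rq_1-\alpha_1)=(\partial_{\gamma \gamma}\psi_1,\,\partial_{\gamma \alpha}\psi_1)=\partial(\psi_1,0),
\end{align*}
so the difference of the infinitesimals is a coboundary in $\widetilde{CL}^2(L,L)$.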
Thus, we have the following proposition.
\begin{prop}
Two equivalent deformations have cohomologous infinitesimals.
\end{prop}
\begin{proof}
Let $L_t=(L,m_t,\alpha_t)$ and $L\rq_t=(L,m\rq_t,\alpha\rq_t)$ be two equivalent one-parameter Hom-Leibniz deformations of $(L,[.,.],\alpha)$, and let $(m_n, \alpha_n)$ and $(m\rq_n, \alpha\rq_n)$ be the $n$-infinitesimals of the deformations $(m_t, \alpha_t)$ and $(m\rq_t, \alpha\rq_t)$ respectively. Using Equation (\ref{equivalent 10}) we get
\begin{align*}
&m\rq_n(x,y)+\psi_n(m\rq_0(x,y))=m_n(x,y)+m_0(\psi_n(x),y)+m_0(x,\psi_n(y)),\\
&m\rq_n(x,y)-m_n(x,y)=m_0(\psi_n(x),y)+m_0(x,\psi_n(y))-\psi_n(m\rq_0(x,y)),\\
&m\rq_n(x,y)-m_n(x,y)=[\psi_n(x),y]+[x,\psi_n(y)]-\psi_n([x,y])=\partial_{\gamma \gamma} \psi_n(x,y).
\end{align*}
Using equation (\ref{equivalent 101}) we get,
\begin{align*}
&\alpha_0(\psi_n(x)) - \psi_0(\alpha\rq_n(x)) + \alpha_n(\psi_0(x)) - \psi_n(\alpha\rq_0(x))=0,\\
&\alpha\rq_n(x) - \alpha_n(x) = \alpha(\psi_n(x)) - \psi_n(\alpha(x)) = \partial_{\gamma \alpha}\psi_n(x).
\end{align*}
Thus, the infinitesimals of the two deformations determine the same cohomology class.
\end{proof}
\begin{theorem}
A non-trivial deformation of a Hom-Leibniz algebra is equivalent to a deformation whose infinitesimal is not a coboundary.
\end{theorem}
\begin{proof}
Let $(m_t,\alpha_t)$ be a deformation of a Hom-Leibniz algebra $L$ and let $(m_n, \alpha_n)$ be the $n$-infinitesimal of the deformation for some $n\geq 1$. Then by Theorem \ref{cocycle}, $(m_n, \alpha_n)$ is a $2$-cocycle, that is, $\partial (m_n, \alpha_n)=0$. Suppose $m_n=-\partial_{\gamma \gamma}\phi_n$ and $\alpha_n = -\partial_{\gamma \alpha}\phi_n$ for some $\phi_n\in \widetilde{CL}_\gamma^1(L, L)$, that is, $(m_n, \alpha_n)$ is a coboundary. We define a formal isomorphism $\Psi_t$ of $L[[t]]$ as follows:
$$\Psi_t(a)=a+\phi_n(a)t^n.$$
We set
$$\bar{m_t}=\Psi^{-1}_t\circ m_t\circ (\Psi_t\otimes\Psi_t) \,\,\,\text{and}\,\,\,\bar{\alpha_t}=\Psi^{-1}_t\circ\alpha_t\circ\Psi_t.$$
Thus, we have a new deformation $\bar{L_t}$ which is isomorphic to $L_t$. By expanding the above equations and comparing coefficients of $t^n$, we get
$$\bar{m_n}-m_n=\partial_{\gamma \gamma} \phi_n,~~~\bar{\alpha_n} - \alpha_n = \partial_{\gamma \alpha}\phi_n.$$
Hence, $\bar{m_n}=0$ and $\bar{\alpha_n}=0$. By repeating this argument, we can successively remove any infinitesimal which is a coboundary. Since the deformation is non-trivial, this process must terminate, yielding an equivalent deformation whose infinitesimal is not a coboundary.
\end{proof}
\begin{cor}
Let $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra. If $\widetilde{HL}^2 (L, L)=0$ then $L$ is rigid.
\end{cor}
\section{Group action and equivariant cohomology}\label{sec 5}
The notion of a finite group action on a Leibniz algebra was introduced by the authors in \cite{MS19}. In this section, we introduce a notion of a finite group action on a Hom-Leibniz algebra. We also define an equivariant cohomology of a Hom-Leibniz algebra equipped with an action of a finite group.
\begin{defn}
Let $G$ be a finite group and $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra. We say the group $G$ acts on the Hom-Leibniz algebra $L$ from the left if there is a function
$$\Phi:G\times L\to L,$$
satisfying
\begin{enumerate}
\item For each $g\in G$, the map $\psi_g:L\to L,\,x\mapsto gx$ is a $\mathbb{K}$-linear map.
\item $ex=x$, where $e$ denotes the identity element of the group $G$.
\item For all $g_1,g_2\in G$ and $x\in L$, $(g_1g_2)x=g_1(g_2x)$.
\item For all $g\in G$ and $x,y\in L$, $g[x,y]=[gx,gy]$ and $\alpha(gx)=g\alpha(x).$
\end{enumerate}
\end{defn}
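A simple illustrative example of such an action is the following.
\begin{exam}
Let $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra and let $\sigma:L\to L$ be an involutive automorphism, that is, a $\mathbb{K}$-linear map satisfying $\sigma([x,y])=[\sigma(x),\sigma(y)]$, $\sigma\circ\alpha=\alpha\circ\sigma$ and $\sigma\circ\sigma=\mathrm{Id}$. Then the group $\mathbb{Z}/2=\lbrace e,g\rbrace$ acts on $L$ from the left by $ex:=x$ and $gx:=\sigma(x)$, and all the conditions of the above definition are readily verified.
\end{exam}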
We may write a Hom-Leibniz algebra $(L,[.,.],\alpha)$ equipped with a finite group action $G$ as $(G,L,[.,.],\alpha)$.
An alternative way to present the above definition is the following:
\begin{prop}
Let $G$ be a finite group and $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra. The group $G$ acts on $L$ from the left if and only if there is a group homomorphism
\begin{align*}
\Psi:G\to Iso_{Hom\text{-}Leib}(L),\,g\mapsto\psi_g,
\end{align*}
where $Iso_{Hom\text{-}Leib}(L)$ denotes the group of isomorphisms of Hom-Leibniz algebras from $L$ to $L$.
\end{prop}
Let $M,M\rq$ be Hom-Leibniz algebras equipped with actions of group $G$. We say a $\mathbb{K}$-linear map $f:M\to M\rq$ is equivariant if for all $g\in G$ and $x\in M$, $f(gx)=gf(x)$. We write the set of all equivariant maps from $M$ to $M\rq$ as $\text{Hom}^G_\mathbb{K}(M,M\rq)$.
A $G$-Hom-vector space is a Hom-vector space $(M,\beta)$ together with an action of $G$ on $M$ such that $\beta:M\to M$ is an equivariant map.
We denote an equivariant Hom-vector space as triple $(G,M,\beta)$.
\begin{exam}
Any $G$-Hom-vector space $(G,M,\beta)$ together with the trivial bracket (i.e. $[x,y]=0$ for all $x,y\in M$) is a Hom-Leibniz algebra equipped with an action of $G$.
\end{exam}
\begin{exam}\label{example-3}
Let $V$ be a $\mathbb K$-module which is a representation space of a finite group $G.$ On
$$\bar{T}(V) = V\oplus V^{\otimes 2}\oplus \cdots \oplus V^{\otimes n}\oplus \cdots $$ there is a unique bracket that makes it into a Hom-Leibniz algebra with $\alpha=\text{Id}$ and satisfies
$$v_1 \otimes v_2 \otimes \cdots \otimes v_n = [\cdots [[v_1,v_2],v_3],\cdots ,v_n]~\mbox{for}~v_i\in V~\mbox{and}~i=1,\ldots,n.$$ This is the free Hom-Leibniz algebra over the $\mathbb{K}$-module $V$. The linear action of $G$ on $V$ extends naturally to an action on $\bar{T}(V)$.
\end{exam}
\begin{defn}
Let $(G,L,[.,.],\alpha)$ be a Hom-Leibniz algebra equipped with an action of a finite group $G$. A $G$-bimodule over $L$ is a $G$-Hom-vector space $(G,M,\beta)$ together with two $L$-actions (left and right multiplications), $m_l:L\otimes M\to M $ and $m_r:M\otimes L\to M $, such that $m_l,m_r$ satisfy the following conditions:
\begin{align*}
& m_l(gx, gm) = g m_l(x, m),\\
& m_r(gm, gx) =g m_r(m,x),\\
&\beta (m_l (x, m)) = m_l(\alpha(x), \beta(m)),\\
&\beta (m_r (m, x)) = m_r(\beta(m), \alpha(x)),\\
&m_r (\beta(m), [x,y]) = m_r(m_r(m, x), \alpha(y)) - m_r (m_r(m, y), \alpha(x)),\\
& m_l (\alpha(x), m_r(m, y))= m_r(m_l(x, m), \alpha (y))- m_l([x, y], \beta(m)),\\
& m_l(\alpha(x), m_l(y, m)) = m_l([x, y], \beta(m)) - m_r(m_l(x, m), y),
\end{align*}
for any $x, y \in L, m\in M$, and $g \in G$.
\end{defn}
\begin{remark}
Any Hom-Leibniz algebra equipped with an action of a finite group $G$ is a $G$-bimodule over itself. In this paper, we shall only consider this $G$-bimodule structure of $L$ over itself.
\end{remark}
We now introduce an equivariant cohomology of a Hom-Leibniz algebra $L$ equipped with an action of a finite group $G$.
Set
\begin{align*}
&\widetilde{CL}^n_{G}(L,L)\\
&:=\lbrace (c_\gamma, c_\alpha) \in \widetilde{CL}^n(L, L) : c_\gamma(\psi_g(x_1),\ldots,\psi_g(x_n))=gc_\gamma(x_1,\ldots,x_n),\\
&~ c_\alpha(\psi_g(x_1),\ldots,\psi_g(x_{n-1}))=gc_\alpha(x_1,\ldots,x_{n-1})\rbrace\\
&=\lbrace (c_\gamma, c_\alpha) \in \widetilde{CL}^n(L, L) : c_\gamma(gx_1,\ldots,gx_n)=gc_\gamma(x_1,\ldots,x_n),\\
&~ c_\alpha(gx_1,\ldots,gx_{n-1})=gc_\alpha(x_1,\ldots,x_{n-1}) \rbrace.
\end{align*}
Here $\widetilde{CL}^n(L, L)$ is the $n$-cochain group of the Hom-Leibniz algebra $(L,[.,.],\alpha)$ and $\widetilde{CL}^n_{G}(L,L)$ consists of all $n$-cochains which are equivariant. Clearly, $\widetilde{CL}^n_{G}(L,L)$ is a submodule of $\widetilde{CL}^n(L, L)$, and an element $(c_\gamma, c_\alpha) \in \widetilde{CL}^n_{G}(L,L)$ is called an invariant $n$-cochain.
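We note in passing that, when $|G|$ is invertible in $\mathbb{K}$, invariant cochains may be produced from arbitrary ones by the standard averaging operator
$$A(c_\gamma)(x_1,\ldots,x_n):=\frac{1}{|G|}\sum_{g\in G}g^{-1}c_\gamma(gx_1,\ldots,gx_n),$$
and similarly for $c_\alpha$. A direct computation shows that $A(c_\gamma)$ is equivariant and that $A$ fixes every invariant cochain, so $A$ is a projection onto $\widetilde{CL}^n_{G}(L,L)$.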
\begin{lemma}
If an $n$-cochain $(c_\gamma, c_\alpha)$ is invariant, then $\partial(c_\gamma, c_\alpha)$ is also an invariant $(n+1)$-cochain. In other words,
$$(c_\gamma, c_\alpha) \in \widetilde{CL}^n_{G}(L,L)\implies \partial(c_\gamma, c_\alpha)\in \widetilde{CL}^{n+1}_{G}(L,L).$$
\end{lemma}
\begin{proof}
As $(c_\gamma, c_\alpha) \in \widetilde{CL}^n_{G}(L,L)$, we have
\begin{align*}
& c_\gamma(gx_1,gx_2,\ldots,gx_{n})=gc_\gamma(x_1,x_2,\ldots,x_{n}),\\
& c_\alpha(gx_1,gx_2,\ldots,gx_{n-1})=gc_\alpha(x_1,x_2,\ldots,x_{n-1}),
\end{align*}
for all $g\in G$ and $x_1,\ldots,x_{n+1}\in L$. It is enough to show that the four differentials $\partial_{\gamma \gamma}, \partial_{\gamma \alpha}, \partial_{\alpha \alpha}, \partial_{\alpha \gamma}$ respect the group action.
Observe that
\begin{align*}
&\partial_{\gamma \gamma}(c_\gamma)(\psi_g(x_1),\psi_g(x_2),\ldots,\psi_g(x_{n+1}))\\
&=\partial_{\gamma \gamma}(c_\gamma)(gx_1,gx_2,\ldots,gx_{n+1})\\
&=[\alpha^{n-1}(gx_1), c_\gamma(gx_2,\ldots,gx_{n+1})]+\sum^{n+1}_{i=2}(-1)^{i}[c_\gamma(gx_1,\ldots,\widehat{gx_i},\ldots,gx_{n+1}),\alpha^{n-1}(gx_i)]\\
&+\sum_{1\leq i<j\leq n+1}(-1)^{j+1} c_\gamma(\alpha(gx_1),\ldots,\alpha(gx_{i-1}),[gx_i,gx_j],\alpha(gx_{i+1}),\ldots,\widehat{\alpha(gx_j)},\ldots,\alpha(gx_{n+1}))\\
&=[g\alpha^{n-1}(x_1),g c_\gamma(x_2,\ldots,x_{n+1})]+\sum^{n+1}_{i=2}(-1)^{i}[gc_\gamma(x_1,\ldots,\widehat{x_i},\ldots,x_{n+1}),g\alpha^{n-1}(x_i)]\\
&+\sum_{1\leq i<j\leq n+1}(-1)^{j+1}gc_\gamma(\alpha(x_1),\ldots,\alpha(x_{i-1}),[x_i,x_j],\alpha(x_{i+1}),\ldots,\widehat{\alpha(x_j)},\ldots,\alpha(x_{n+1}))\\
&=g\partial_{\gamma \gamma}(c_\gamma)(x_1,x_2,\ldots,x_{n+1}).
\end{align*}
On the other hand, we have
\begin{align*}
&(\partial_{\gamma \alpha} c_\gamma) (gx_1,\ldots, gx_{n})\\
&= \alpha( c_\gamma(gx_1,\ldots, gx_{n}) ) - c_\gamma(\alpha(gx_1),\ldots,\alpha( gx_{n})) \\
&= g \big(\alpha( c_\gamma(x_1,\ldots, x_{n}) ) - c_\gamma(\alpha(x_1),\ldots,\alpha( x_{n}))\big)\\
&=g(\partial_{\gamma \alpha} c_\gamma) (x_1,\ldots, x_{n}).
\end{align*}
Similarly, it is easy to show that
\begin{align*}
&(\partial_{\alpha \alpha} c_\alpha) (gx_1,\ldots, gx_{n})= g(\partial_{\alpha \alpha} c_\alpha) (x_1,\ldots, x_{n}),\\
&(\partial_{\alpha \gamma} c_\alpha) (gx_1,\ldots, gx_{n+1})= g(\partial_{\alpha \gamma} c_\alpha) (x_1,\ldots, x_{n+1}).
\end{align*}
Thus, $\partial(c_\gamma, c_\alpha)\in \widetilde{CL}^{n+1}_{G}(L,L)$.
\end{proof}
The cochain complex $\lbrace \widetilde{CL}^\ast_{G}(L,L),\partial\rbrace $ is called the equivariant cochain complex of $(G,L,[.,.],\alpha)$. We define the $n$th equivariant cohomology group of $(G,L,[.,.],\alpha)$ with coefficients in itself by
$$\widetilde{HL}_G^n (L, L):=H^n(\widetilde{CL}^\ast_{G}(L,L)).$$
\section{Equivariant formal deformation of Hom-Leibniz algebra structure}\label{formal defn}\label{sec 6}
In this section, we introduce an equivariant one-parameter formal deformation theory, including the deformation of the structure map, for Hom-Leibniz algebras equipped with an action of a finite group $G$. We show that the equivariant cohomology controls such equivariant deformations.
\begin{defn}
An equivariant one-parameter formal deformation of $(G,L,[.,.],\alpha)$ is given by a $\mathbb{K}[[t]]$-bilinear map $m_t:L[[t]]\times L[[t]]\to L[[t]]$ and a $\mathbb{K}[[t]]$-linear map $\alpha_t:L[[t]]\to L[[t]]$ of the forms
$$m_t=\sum_{i\geq 0}m_it^i\,\,\text{and}\,\,\alpha_t=\sum_{i\geq 0}\alpha_it^i,$$
where each $m_i:L\times L\to L$ is a $\mathbb{K}$-bilinear map and each $\alpha_i:L\to L$ is a $\mathbb{K}$-linear map satisfying the following:
\begin{enumerate}
\item $m_0(x,y)=[x,y]$ is the original Hom-Leibniz bracket on $L$ and $\alpha_0(x)=\alpha(x)$.
\item $m_t$ and $\alpha_t$ satisfy the following Hom-Leibniz algebra condition:\label{equ defor}$$
m_t(\alpha_t(x),m_t(y,z))=m_t(m_t(x,y),\alpha_t(z))-m_t(m_t(x,z),\alpha_t(y)).
$$
\item The map $\alpha_t$ is multiplicative, that is, $m_t (\alpha_t(x), \alpha_t(y)) = \alpha_t (m_t(x, y))$.\label{equ defor mult}
\item For all $g\in G$,~$x,y\in L$ and $i\geq 0$,$$m_i(gx,gy)=gm_i(x,y)\,\,\text{and}\,\,\alpha_i(gx)=g\alpha_i(x),$$
that is, $m_i\in\text{Hom}^G_\mathbb{K}(L\otimes L, L)$ and $\alpha_i\in\text{Hom}^G_\mathbb{K}(L,L).$
\end{enumerate}
\end{defn}
For all $n\geq 0$, the Condition (\ref{equ defor}) in the above Definition is equivalent to
\begin{align}
\label{equ deform equ 11}\sum_{\substack{i+j+k=n\\i,j,k\geq 0}}m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))=0.
\end{align}
For all $n\geq 0$, the Condition (\ref{equ defor mult}) in the above Definition is equivalent to
\begin{align}
\label{equ deform equ 12} \sum_{\substack{i+j+k=n\\i,j,k\geq 0}}m_i(\alpha_j(x), \alpha_k(y))- \sum_{\substack{i+j=n\\i,j\geq 0}}\alpha_i(m_j(x,y))=0.
\end{align}
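For instance, for $n=1$ Equation (\ref{equ deform equ 12}) reads
$$m_1(\alpha(x),\alpha(y))+[\alpha_1(x),\alpha(y)]+[\alpha(x),\alpha_1(y)]=\alpha(m_1(x,y))+\alpha_1([x,y]),$$
so the first-order terms $(m_1,\alpha_1)$ are constrained by the original structure maps.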
\begin{defn}
The equivariant $2$-cochain $(m_1, \alpha_1)$ is called an equivariant infinitesimal of the equivariant deformation $(m_t, \alpha_t)$. More generally, if $(m_n, \alpha_n)$ is the first non-zero term of $(m_t, \alpha_t)$ after $(m_0, \alpha_0)$, then $(m_n, \alpha_n)$ is called an equivariant $n$-infinitesimal of the equivariant deformation.
\end{defn}
\begin{prop}\label{equ cocycle}
Let $G$ be a finite group and $(L,[.,.],\alpha)$ be a Hom-Leibniz algebra. Suppose $(G,L_t,m_t,\alpha_t)$ is an equivariant one-parameter deformation of $L$. Then the equivariant infinitesimal of the equivariant deformation is a $2$-cocycle of the equivariant Hom-Leibniz cohomology.
\end{prop}
\begin{proof}
Let $(m_n, \alpha_n)$ be an equivariant $n$-infinitesimal of an equivariant deformation $(m_t, \alpha_t)$. Thus, $m_i=\alpha_i=0$ for all $0<i<n$ and $m_i(gx,gy)=gm_i(x,y),~ \alpha_i(gx)=g\alpha_i(x)$. From the Equation (\ref{equ deform equ 11}), we have
\begin{align*}
&[\alpha(x),m_n(y,z)]-[m_n(x,y),\alpha(z)]+[m_n(x,z),\alpha(y)]+m_n(\alpha(x),[y,z])\\
&-m_n([x,y],\alpha(z))+m_n([x,z],\alpha(y))+ [\alpha_n(x), [y,z]] - [[x,y], \alpha_n(z)] + [[x,z], \alpha_n(y)]\\
&=( \partial_{\gamma \gamma} m_n - \partial_{\alpha \gamma} \alpha_n ) (x, y, z)=0.
\end{align*}
From the Equation (\ref{equ deform equ 12}), we have
\begin{align*}
&[\alpha(x), \alpha_n(y)] + [\alpha_n(x), \alpha(y)] + m_n(\alpha(x), \alpha(y)) - \alpha(m_n(x,y)) - \alpha_n [x, y] \\
&= (\partial_{\alpha \alpha} \alpha_n - \partial_{\gamma \alpha} m_n)(x, y)\\
&= 0.
\end{align*}
This is the same as $\partial (m_n, \alpha_n)=0$. Thus, the desired result follows.
\end{proof}
An equivariant $n$-deformation of a Hom-Leibniz algebra equipped with a finite group action is a formal deformation of the forms
$$m_t=\sum^n_{i=0}m_it^i,~~~ \alpha_t=\sum^n_{i=0}\alpha_it^i,$$
such that
\begin{enumerate}
\item For each $0\leq i\leq n$, $m_i\in \text{Hom}^G_\mathbb{K}(L\otimes L,L)$ and $\alpha_i\in\text{Hom}^G_\mathbb{K}(L,L)$, that is, each $m_i$ and $\alpha_i$ are equivariant $\mathbb{K}$-linear maps.
\item $m_t$ satisfies the Hom-Leibniz identity, that is,
$m_t(\alpha_t(x),m_t(y,z))=m_t(m_t(x,y),\alpha_t(z))-m_t(m_t(x,z),\alpha_t(y)).$\label{equ n-deform}
\item The map $\alpha_t$ is multiplicative, that is, $m_t (\alpha_t(x), \alpha_t(y)) = \alpha_t (m_t(x, y))$.
\end{enumerate}
We say an equivariant $n$-deformation $(m_t, \alpha_t)$ of a Hom-Leibniz algebra $(G,L,[.,.],\alpha)$ is extendable to an equivariant $(n+1)$-deformation if there is an element $(m_{n+1}, \alpha_{n+1}) \in \widetilde{CL}^{n+1}_{G}(L,L)$ such that
\begin{align*}
&\bar{m_t}=m_t+m_{n+1}t^{n+1},\\
&\bar{\alpha_t}=\alpha_t+\alpha_{n+1}t^{n+1},
\end{align*}
and $(\bar{m_t}, \bar{\alpha_t})$ satisfies all the conditions of an equivariant formal deformation.
For $n\geq -1$, we can rewrite Equations (\ref{equ deform equ 11}) and (\ref{equ deform equ 12}) in the following forms using Hom-Leibniz cohomology
\begin{align}
\label{eq obs deform equ 1} \sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))=0.
\end{align}
\begin{align}
\label{eq obs deform equ 12} \sum_{\substack{i+j+k=n+1\\i,j,k\geq 0}}m_i(\alpha_j(x), \alpha_k(y))- \sum_{\substack{i+j=n+1\\i,j\geq 0}}\alpha_i(m_j(x,y))=0.
\end{align}
This is the same as the following equations:
\begin{align*}
&( \partial_{\gamma \gamma} m_{n+1} - \partial_{\alpha \gamma} \alpha_{n+1} ) (x, y, z) \\
&=-\sum_{\substack{i+j+k=n+1\\i,j,k>0}}m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\\
&=-\sum_{\substack{i+j+k=n+1\\i,j,k>0}}m_i\circ_{\alpha_j}m_k (x, y, z).
\end{align*}
\begin{align*}
(\partial_{\alpha \alpha} \alpha_{n+1} - \partial_{\gamma \alpha} m_{n+1}) (x, y) = - \sum_{\substack{i + j + k= n+1\\ i, j, k > 0}} m_i (\alpha_j (x), \alpha_k(y)) + \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (x, y)).
\end{align*}
We define the $n$th obstruction to extending a deformation of a Hom-Leibniz algebra of order $n$ to order $n+1$ as $\text{Obs}_G^n = (\text{Obs}^n_{G,\gamma}, \text{Obs}^n_{G,\alpha})$, where
\begin{align}
\label{eq obs equ 222}&\text{Obs}^n_{G,\gamma}(x, y, z):=-\sum_{\substack{i+j+k=n+1\\i,j,k>0}}m_i\circ_{\alpha_j}m_k (x, y, z)=(\partial_{\gamma \gamma} m_{n+1} - \partial_{\alpha \gamma} \alpha_{n+1})(x, y, z),\\
&\text{Obs}^n_{G,\alpha}(x, y) \\ \nonumber
& :=- \sum_{\substack{i + j + k= n+1\\ i, j, k > 0}} m_i (\alpha_j (x), \alpha_k(y)) + \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (x, y)) \\ \nonumber
&= (\partial_{\alpha \alpha} \alpha_{n+1} - \partial_{\gamma \alpha} m_{n+1})(x, y).
\end{align}
\begin{lemma}
Suppose $(m_t, \alpha_t)$ is an equivariant $n$-deformation. Then $\text{Obs}_G^n\in \widetilde{CL}^{3}_{G}(L,L)$ is a cocycle for all $n\geq 1$.
\end{lemma}
\begin{proof}
Since $m_i\in \text{Hom}^G_\mathbb{K}(L\otimes L,L)$ and $\alpha_i\in \text{Hom}^G_\mathbb{K}(L,L)$ for all $i\geq 0$, we have $m_i(gx,gy)=gm_i(x,y)$ and $\alpha_i(gx)=g\alpha_i(x)$ for all $x,y\in L$ and $g\in G$.
Now,
\begin{align*}
&\text{Obs}^n_{G,\gamma}(gx,gy,gz)
=-\sum_{\substack{i+j+k=n+1\\i,j,k>0}}m_i(\alpha_j(gx),m_k(gy,gz))\\
&-m_i(m_k(gx,gy),\alpha_j(gz))+m_i(m_k(gx,gz),\alpha_j(gy))\\
&=-g\sum_{\substack{i+j+k=n+1\\i,j,k>0}}m_i(\alpha_j(x),m_k(y,z))-m_i(m_k(x,y),\alpha_j(z))+m_i(m_k(x,z),\alpha_j(y))\\
&=g\text{Obs}^n_{G,\gamma}(x,y,z).
\end{align*}
\begin{align*}
\text{Obs}^n_{G,\alpha}(gx, gy)& =- \sum_{\substack{i + j + k= n+1\\ i, j, k > 0}} m_i (\alpha_j (gx), \alpha_k(gy)) + \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (gx, gy))\\
&= g\big(-\sum_{\substack{i + j + k= n+1\\ i, j, k > 0}} m_i (\alpha_j (x), \alpha_k(y)) + \sum_{\substack{i + j = n+1\\ i, j > 0}} \alpha_i (m_j (x, y))\big)\\
&= g\text{Obs}^n_{G,\alpha}(x, y).
\end{align*}
Thus, $\text{Obs}_G^n\in \widetilde{CL}^{3}_{G}(L,L).$
As $\text{Obs}_G^n(x,y,z)=\partial (m_{n+1},\alpha_{n+1})(x,y,z)$, it follows that $\text{Obs}_G^n$ is an equivariant cocycle.
\end{proof}
We can prove the following theorem along the same lines as in the non-equivariant case.
\begin{theorem}
An equivariant $n$-deformation extends to an equivariant $(n+1)$-deformation if and only if the cohomology class of $\text{Obs}_G^n$ vanishes.
\end{theorem}
\begin{cor}
If $\widetilde{HL}_G^3 (L, L)=0$ then any equivariant $2$-cocycle gives an equivariant one-parameter formal deformation of $(G,L,[.,.],\alpha)$.
\end{cor}
Finally, we study rigidity conditions for equivariant deformations. Observe that an action of a finite group $G$ on a Hom-Leibniz algebra $L$ induces an action on $L[[t]]$ by linearity.
\begin{defn}\label{equ equivalent defn}
Let $L^G_t=(G,L,m_t,\alpha_t)$ and ${L\rq}^G_t=(G,L,m\rq_t,\alpha\rq_t)$ be two equivariant deformations of $(G, L,[.,.],\alpha)$, where $m_t=\sum_{i\geq 0} m_it^i,\, \alpha_t=\sum_{i\geq 0}\alpha_it^i$ and $m\rq_t=\sum_{i\geq 0} m\rq_it^i,\, \alpha\rq_t=\sum_{i\geq 0}\alpha\rq_it^i$. We say $L^G_t$ and ${L\rq}_t^G$ are equivalent if there is a formal isomorphism $\Psi_t:L[[t]]\to L[[t]]$ of the following form:
$$\Psi_t(a)=\psi_0(a)+\psi_1(a)t+\psi_2(a)t^2+\cdots,$$
such that
\begin{enumerate}
\item $\psi_0=Id$ and for $i\geq 1$, $\psi_i:L\to L$ are equivariant $\mathbb{K}$-linear maps.
\item $\Psi_t\circ m_t\rq=m_t\circ (\Psi_t\otimes \Psi_t)\,\,\,\text{and}\,\,\, \alpha_t\circ \Psi_t=\Psi_t\circ \alpha\rq_t.$
\end{enumerate}
\end{defn}
\begin{rmk}
Suppose $L^G_t$ and ${L\rq}_t^G$ are equivalent deformations. For every subgroup $H\leq G$, the $H$-fixed point set $L^H$ is a Hom-Leibniz subalgebra. A formal equivariant isomorphism $\Psi_t$ induces a formal isomorphism $L^H[[t]]\to {L\rq}^H[[t]]$ for all subgroups $H$ of $G$.
\end{rmk}
From the second condition of Definition (\ref{equ equivalent defn}) we have the following equations:
\begin{align}
&\sum_{i,j\geq 0}\psi_i(m\rq_j(x,y))t^{i+j}=\sum_{i,j,k\geq 0}m_i(\psi_j(x),\psi_k(y))t^{i+j+k},\\
&\sum_{i,j \geq 0}\alpha_i(\psi_j(x))t^{i+j}=\sum_{i,j\geq 0}\psi_i(\alpha\rq_j(x))t^{i+j}.
\end{align}
Comparing coefficients of powers of $t$ on both sides of the above equations, we have the following proposition.
\begin{prop}
Equivariant infinitesimals of two equivalent equivariant deformations determine the same cohomology class.
\end{prop}
Similar to the non-equivariant case, we have the following rigidity theorem for equivariant deformations.
\begin{theorem}
Let $(G,L,[.,.],\alpha)$ be a Hom-Leibniz algebra equipped with an action of finite group $G$. If $\widetilde{HL}_G^2 (L, L)=0$ then $L$ is equivariantly rigid.
\end{theorem}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
The purpose of the paper is twofold. First, we show that partial-data transmission eigenfunctions associated with a conductive boundary condition vanish locally around a polyhedral or conic corner in $\mathbb{R}^n$, $n=2,3$. Second, we apply the spectral property to the geometrical inverse scattering problem of determining the shape, as well as the boundary impedance parameter, of a conductive scatterer, independent of its medium content, by a single far-field measurement. We establish several new unique recovery results. The results extend the relevant ones in \cite{DCL} in two directions: first, we consider a more general geometric setup where both polyhedral and conic corners are investigated, whereas in \cite{DCL} only polyhedral corners are concerned; second, we significantly relax the regularity assumptions in \cite{DCL}, which is particularly useful for the geometrical inverse problem mentioned above. We develop novel technical strategies to achieve these new results.
\noindent{\bf Keywords:}~~Transmission eigenfunctions; spectral geometry; vanishing; microlocal analysis; inverse scattering; conductive scatterer; single measurement.
\end{abstract}
\section{Introduction}
\subsection{Mathematical setup and summary of major findings}
The purpose of the paper is twofold. We are concerned with the spectral geometry of transmission eigenfunctions and the geometrical inverse scattering problem of recovering the shape of an anomalous scatterer, independent of its medium content, by a single far-field measurement. We first introduce the mathematical setup of our study.
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^n$, $n=2,3$, with a connected complement $\mathbb{R}^n\backslash\overline{\Omega}$. Let $V\in L^\infty(\Omega)$ and $\eta\in L^\infty(\partial\Omega)$ be complex-valued functions. Let $\Gamma$ denote an open subset of $\partial\Omega$. Consider the following conductive transmission eigenvalue problem associated with $k\in \mathbb R_+$ and $(w,v)\in H^{1}(\Omega)\times H^1(\Omega)$:
\begin{equation}\label{0eq:tr}
\begin{cases}
\ \ \Delta w+k^{2}(1+V)w=0 &\quad \mbox {$\mathrm {in}\ \ \Omega$},\\
\ \ \Delta v+k^{2}v=0 &\quad \mbox {$\mathrm {in}\ \ \Omega$},\\
\ \ w=v,\ \partial_{\nu}w=\partial_{\nu}v+\eta v &\quad \mbox{$\mathrm {on}\ \ \Gamma$},
\end{cases}
\end{equation}
where and also in what follows, $\nu\in\mathbb{S}^{n-1}$ signifies the exterior unit normal vector to $\partial\Omega$. Clearly, $(w, v)=(0, 0)$ is a trivial solution to \eqref{0eq:tr}. If there exists a nontrivial pair of solutions to \eqref{0eq:tr}, $k$ is referred to as a conductive transmission eigenvalue and $(w, v)$ is the corresponding pair of conductive transmission eigenfunctions. In the case $\Gamma=\partial\Omega$, \eqref{0eq:tr} is said to be the full-data conductive transmission eigenvalue problem, and otherwise it is called the partial-data problem. $\eta$ is called the boundary impedance or conductive parameter. If $\eta\equiv 0$, then \eqref{0eq:tr} is reduced to the standard transmission eigenvalue problem. Hence, the conductive transmission eigenvalue problem \eqref{0eq:tr} is a generalized formulation of the transmission eigenvalue problem. Nevertheless, it has its own physical background when $\eta\not\equiv 0$, as shall be discussed in what follows.
One of the main purposes of this paper is to quantitatively characterize the geometric property of the partial-data conductive transmission eigenfunctions (assuming their existence). The major findings can be briefly summarized as follows. If there is a polyhedral or conic corner on $\partial\Omega$, then under certain regularity conditions the eigenfunctions must vanish at the corner. The regularity conditions are characterized by the H\"older continuity of the parameters $q:=1+V$ and $\eta$ locally around the corner as well as a certain Herglotz extension property of the eigenfunction $v$, which is weaker than the H\"older continuity. The results extend the relevant ones in \cite{DCL} in two directions: first, we consider a more general geometric setup where both polyhedral and conic corners are investigated, whereas in \cite{DCL} only polygonal and edge corners are concerned; second, we significantly relax the regularity assumptions in \cite{DCL} which is particularly useful for the geometrical inverse problem discussed in what follows. We develop novel technical strategies to achieve those new results. More detailed discussion shall be given in the next subsection.
The other focus of our study is the inverse scattering problem from a conductive medium scatterer. Let $V$ be extended by setting $V=0$ in $\mathbb{R}^n\backslash\overline{\Omega}$. Throughout, we set $q=1+V$. Let $u^i(\mathbf{x})$ be a time-harmonic incident wave which is an entire solution to
\begin{align}\label{eq:incident}
\Delta u^i(\mathbf x)+k^2 u^i(\mathbf x)=0,\quad \quad \mathbf x\in \mathbb R^n,
\end{align}
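A prototypical example of such an incident field is the plane wave
$$u^i(\mathbf x)=e^{\mathrm{i}k\mathbf x\cdot \mathbf d},\quad \mathbf d\in\mathbb{S}^{n-1},$$
which is an entire solution to \eqref{eq:incident} for every propagation direction $\mathbf d$.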
where $k\in \mathbb R_+$ signifies the wave number. Let $(\Omega, q, \eta)$ denote a conductive medium scatterer with $\Omega$ signifying its shape and $q, \eta$ being its medium parameters. The impingement of $u^i$ on $(\Omega, q, \eta)$ generates wave scattering and it is described by the following system:
\begin{equation}\label{eq:contr}
\begin{cases}
\Delta u^-+k^2qu^-=0, &\quad \mbox{in}\quad \Omega,\\
\Delta u^++k^2u^+=0, &\quad \mbox{in}\quad \mathbb R^n\setminus \overline{\Omega},\\
u^+=u^-,\quad \partial_\nu u^++\eta u^+=\partial _\nu u^-, &\quad \mbox{on}\quad \partial \Omega,\\
u^+=u^i+u^s, &\quad\mbox{in}\quad \mathbb R^n\backslash\overline{\Omega},\\
\lim_{r\to \infty}r^{(n-1)/2}(\partial _ru^s-\mathrm iku^s)=0, &\quad r=\vert \mathbf x\vert,
\end{cases}
\end{equation}
where $\mathrm i:=\sqrt{-1}$ and the last limit in \eqref{eq:contr} is known as the Sommerfeld radiation condition that characterises the outward radiating of the scattered wave field $u^s$. The well-posedness of the direct problem \eqref{eq:contr} can be found in \cite{BO} for the unique existence of $u:=u^-\chi_\Omega+u^+\chi_{\mathbb{R}^n\backslash\overline{\Omega}} \in H^1_{\rm loc}(\mathbb R^n)$.
Moreover, the scattered field admits the following asymptotic expansion:
\begin{equation}\notag
u^s(\mathbf x)=\frac{e^{ik\vert \mathbf x\vert }}{\vert \mathbf x\vert^{(n-1)/2} }\left(u^\infty(\hat{\mathbf x})+\mathcal O\left(\frac{1}{\vert \mathbf x\vert^{(n-1)/2}}\right)\right), \quad \vert \mathbf x\vert \to \infty,
\end{equation}
which holds uniformly in all directions $\hat{\mathbf{x}}:={\mathbf x}/\vert \mathbf x\vert\in\mathbb{S}^{n-1}$. The function $u^\infty$ defined on the unit sphere $\mathbb S^{n-1}$ is known as the far field pattern of $u^s$. Associated with \eqref{eq:contr}, we are concerned with the following geometrical inverse problem:
\begin{equation}\label{5eq:ineta}
u^\infty(\hat{\mathbf x};u^i),\ u^i\ \mbox{fixed}\longrightarrow \Omega\quad \mbox{independent of $q$ and $\eta$.}
\end{equation}
That is, we intend to recover the geometrical shape of the conductive scatterer independent of its physical content by the associated far-field pattern generated by a single incident wave (which is usually referred to as a single far-field measurement in the literature).
Determining the shape of a scatterer from a single far-field measurement constitutes a longstanding problem in the inverse scattering theory \cite{DR2018,DR,Liu22}. In this paper, based on the spectral geometric results discussed earlier, we derive several new unique identifiability results for the inverse problem \eqref{5eq:ineta}. In brief, we establish local unique recovery results by showing that if two conductive scatterers possess the same far-field pattern, then their difference cannot possess a polyhedral or conic corner. If we further impose a certain a-priori global convexity on the scatterer, then one can establish the global uniqueness result. Moreover, we can show that the boundary impedance parameter $\eta$ can also be uniquely recovered. It is emphasized that all of the results established in this paper hold equally for the case $\eta\equiv 0$. If $\eta\equiv 0$, \eqref{eq:contr} describes the scattering from a regular medium scatterer $(\Omega, q)$. In the case $\eta\neq 0$, $(\Omega, q, \eta)$ (effectively) characterises a regular medium scatterer $(\Omega, q)$ coated by a thin layer of highly lossy medium \cite{Ang,BO,CDL2020}; in two dimensions \eqref{eq:contr} describes the corresponding transverse electromagnetic scattering, whereas in three dimensions \eqref{eq:contr} describes the corresponding acoustic scattering. In addition to its physical significance, introducing a boundary parameter $\eta$ makes our study more general, including $\eta\equiv 0$ as a special case. Hence, in what follows, we also refer to the solutions $(w, v)$ of \eqref{0eq:tr} as generalized transmission eigenfunctions.
\subsection{Connection to existing studies and discussions}
Before discussing the relevant existing studies, we note one intriguing connection between the scattering problem \eqref{eq:contr} and the spectral problem \eqref{0eq:tr}. If $u^\infty\equiv 0$, which by Rellich's theorem implies that $u^+=u^i$ in $\mathbb{R}^n\backslash\overline{\Omega}$, one can show that $(v, w)=(u^i|_{\Omega}, u^-|_{\Omega})$ fulfils the spectral system \eqref{0eq:tr} with $\Gamma=\partial\Omega$. In the case $u^\infty\equiv 0$, no scattering pattern can be observed outside $\Omega$, and hence the scatterer $(\Omega, q, \eta)$ is invisible/transparent with respect to the exterior observation under the wave interrogation by $u^i$. On the other hand, if $(w, v)$ is a pair of full-data transmission eigenfunctions to \eqref{0eq:tr}, then by the Herglotz extension $v$ can give rise to an incident wave whose impingement on $(\Omega, q, \eta)$ produces (nearly) no scattering, i.e. $(\Omega, q, \eta)$ is (nearly) invisible/transparent.
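For the reader's convenience, we recall that a Herglotz wave function is an entire solution to the Helmholtz equation \eqref{eq:incident} of the form
$$v_g(\mathbf x)=\int_{\mathbb{S}^{n-1}}e^{\mathrm{i}k\mathbf x\cdot \mathbf d}g(\mathbf d)\,\mathrm{d}\sigma(\mathbf d),\quad g\in L^2(\mathbb{S}^{n-1}),$$
and the Herglotz extension of $v$ mentioned above refers to approximating $v$ in $\Omega$ by such functions.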
Recently, there has been considerable interest in quantitatively characterising the singularities of scattering waves induced by the geometric singularities on the shape of the underlying scatterer, as well as their implications to invisibility and geometrical inverse problems. There are two perspectives in the literature. The first one is mainly concerned with the occurrence or non-occurrence of the non-scattering phenomenon, namely whether invisibility can occur or not. The main rationale is that if the scatterer possesses a geometric singularity (in a proper sense) on its shape, then it scatters a generic incident wave nontrivially, namely invisibility cannot occur. Here, the generic condition is usually characterized by a non-vanishing property of the incident wave at the geometrically singular place. This line of study started from \cite{BPS} for acoustic scattering, with many subsequent developments in different physical contexts \cite{LME,B2018,BL,Blasten2020,CX21,CV,DCL,ElH,SS,VX,BLY,BLX2020,2021,LX,DFLY,BL2018}. The other one is a spectral perspective which is mainly concerned with the spectral geometry of transmission eigenfunctions. According to the connection mentioned above, the spectral geometric results characterise the patterns of the wave propagation inside a (nearly) invisible/transparent scatterer. It was first discovered in \cite{BL2017} that transmission eigenfunctions are generically vanishing around a corner point, and such a local geometric property was further extended to conductive transmission eigenfunctions in \cite{DCL}, elastic transmission eigenfunctions in \cite{BL,DLS} and electromagnetic transmission eigenfunctions in \cite{2021,DFLY,BLX2020}. Though the two perspectives share some similarities, especially about the vanishing of the wave fields around the geometrically singular places, there are subtle and technical differences.
In fact, it is numerically observed in \cite{BLLW} that there exist transmission eigenfunctions which do not vanish, but instead localize, around geometrically singular places. A natural reason to account for such (locally) localizing behaviour of the transmission eigenfunctions is the regularity of the eigenfunctions at the geometrically singular places. In general, if the transmission eigenfunctions are H\"older continuous, they locally vanish around the singular places. Nevertheless, it is shown in \cite{LT} that under a certain Herglotz extension property, the local vanishing property still holds; moreover, the aforementioned regularity criterion in terms of the Herglotz extension is weaker than the H\"older regularity. In addition to the local geometric pattern, the spectral geometric perspective also leads to the discovery of certain global geometric patterns of the transmission eigenfunctions. Indeed, it is discovered in \cite{CDHLW,DJLZ,DLX1} that the (full-data) transmission eigenfunctions tend to (globally) localize on $\partial\Omega$ with many subtle structures. Those spectral geometric results have produced a variety of interesting applications, including super-resolution imaging \cite{CDHLW}, artificial mirage \cite{DLX1} and pseudo plasmon resonance \cite{ACL}. We also refer to \cite{Liu22} for more related results in different physical contexts.
In this paper, we adopt the second perspective to study the (local) geometric properties of the conductive transmission eigenfunctions, and apply them to address the unique identifiability issue for the geometrical inverse scattering problem. As discussed in the previous subsection, our results derived in this paper extend the relevant ones in \cite{DCL} in terms of the geometric setup as well as the regularity requirements. To achieve these new results, we develop novel technical strategies. In principle, we employ microlocal tools to quantitatively characterise the singularities of the eigenfunctions induced by the corner or conic singularities. In particular, we utilise CGO (Complex Geometric Optics) solutions of the PDO (partial differential operator) $\Delta+(1+V)$ in our quantitative analysis, whereas in \cite{DCL}, the analysis made use of certain CGO solutions to $\Delta$. This entails various subtle and technical quantitative estimates and asymptotic analyses. Finally, as also discussed in the previous subsection, we apply the newly derived spectral geometric results to establish several novel unique identifiability results for the geometric inverse problem \eqref{5eq:ineta}. We would also like to mention in passing some recent results on determining the shape of a scattering object by a single or at most a few far-field measurements in different physical contexts \cite{DLS,DLW,DLZZ21,CDL2,CDL3,DLW20,BL2018,BLX2020,DFLY,B2018,CDLZ22,BL,LPRX,Liu-Zou3}.
The rest of the paper is organized as follows. In Section \ref{sec:preliminary}, we collect some preliminary results which are needed in the subsequent analysis. In Section \ref{sec:2D}, we show that the conductive transmission eigenfunctions to \eqref{0eq:tr} near a convex sectorial corner in $\mathbb R^2$ must vanish. In Section \ref{sec:3D}, we study the vanishing of conductive transmission eigenfunctions to \eqref{0eq:tr} near a convex conic or polyhedral corner in $\mathbb R^3$. In Section \ref{sec:inverse}, we discuss the visibility of a scatterer associated with \eqref{eq:contr}. Furthermore, the unique recovery of the shape of $\Omega$ associated with the corresponding conductive scattering problem \eqref{eq:contr} is investigated.
\section{Preliminaries}\label{sec:preliminary}
In this section, we present some preliminary results which shall be frequently used in our subsequent analysis.
Given $s\in \mathbb{R}$ and $p\geq 1$, the Bessel potential space is defined by
\begin{equation}\label{1eq:Hsp}
H^{s,p}:=\{f\in L^{p}(\mathbb{R}^{n}); \mathcal{F}^{-1}[(1+\vert \xi \vert^{2})^{\frac{s}{2}}\mathcal{F}f]\in L^{p}(\mathbb{R}^{n}) \},
\end{equation}
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively.
The following proposition on a multiplication property for Sobolev spaces can be directly proved by utilizing the results in \cite[Theorem 7.5]{AM}, \cite[Proposition 7.6]{Blasten2020}, and \cite[Proposition 3.1]{CX21}.
\begin{prop}\label{1pro:nor}
Suppose that $q \in H^{1,1+\epsilon_{0}}$, where $ 0<\epsilon_{0}<1$. It holds that
\begin{equation}\notag
\vert \vert qf\vert \vert _{H^{1,\tilde{p}}} \leq C\vert \vert f\vert \vert_{H^{1,p}}\quad \mbox{for any}\quad f\in H^{1,p} \mbox{ and } p\geq 1,
\end{equation}
where $C$ is a positive constant and $1<\tilde{p}<2$ satisfies
\begin{equation}\label{eq:p til cond}
\frac{1}{p}+\frac{1}{1+\epsilon_{0}}=\frac{1}{\tilde{p}}\quad \mbox{and}
\quad \frac{1}{n+1}+\frac{1}{p}\leq \frac{1}{\tilde{p}} <\frac{1}{p}+\min\left\{\frac{1}{p},\frac{1}{n}\right\}.
\end{equation}
\end{prop}
We introduce a complex geometrical optics (CGO) solution $u_0$ defined by \eqref{1eq:cgo} in Lemma \ref{1lem:nor}.
\begin{lem}\cite{CX21,LME}\label{1lem:nor}
Given the space dimensions $n=2,3$, let $q$ satisfy the assumption in Proposition \ref{1pro:nor} with the constant $p$ subject to $p>\frac{1}{n-1}$ and $\frac{n}{p}<\frac{2}{n+1}+1$. Let
\begin{equation}\label{1eq:cgo}
u_{0}(\mathbf x)=(1+\psi (\mathbf x))e^{\rho\cdot \mathbf x},\ \mathbf x\in \mathbb R^n
\end{equation}
where
\begin{equation}\label{1eq:eta}
\rho=-\tau(\mathbf{d}+\mathrm{i} \mathbf{d}^{\perp}),
\end{equation}
with $\mathbf{d},\ \mathbf{d}^{\perp} \in \mathbb{S}^{n-1}$ satisfying $\mathbf{d}\perp \mathbf{d}^{\perp}$,
and $\tau \in \mathbb{R}_+$. If $\tau$ is sufficiently large, there exists a solution $u_0(\mathbf x)$ of the form (\ref{1eq:cgo}) satisfying
\begin{equation}\label{1eq:u_0}
\Delta u_0 +k^2qu_0=0\quad \mbox{in} \quad \mathbb R^n,
\end{equation}
and $\psi(\mathbf x)$ satisfies
\begin{equation}\label{1nor:psi}
\vert \vert \psi(\mathbf x)\vert \vert_{H^{1,p}}=\mathcal O\left(\tau^{n(\frac{1}{\tilde p}-\frac{1}{p})-2}\right ),
\end{equation}
where $\widetilde p $ satisfies \eqref{eq:p til cond}.
\end{lem}
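It is instructive to record a direct computation (given here for the reader's convenience; it is not part of the cited lemma) explaining why the phase $\rho$ in \eqref{1eq:eta} is effective for the subsequent corner analysis. Since $\mathbf d\perp\mathbf d^{\perp}$ and $\vert\mathbf d\vert=\vert\mathbf d^{\perp}\vert=1$, one has
\begin{equation}\notag
\rho\cdot\rho=\tau^{2}(\mathbf d+\mathrm i\mathbf d^{\perp})\cdot(\mathbf d+\mathrm i\mathbf d^{\perp})=\tau^{2}\left(1-1+2\mathrm i\,\mathbf d\cdot\mathbf d^{\perp}\right)=0,
\end{equation}
so that $\Delta e^{\rho\cdot\mathbf x}=(\rho\cdot\rho)e^{\rho\cdot\mathbf x}=0$, i.e. $e^{\rho\cdot\mathbf x}$ is harmonic. Moreover,
\begin{equation}\notag
\vert e^{\rho\cdot\mathbf x}\vert=e^{\Re(\rho\cdot\mathbf x)}=e^{-\tau\,\mathbf d\cdot\mathbf x},
\end{equation}
so $u_0$ decays exponentially with respect to $\tau$ wherever $\mathbf d\cdot\mathbf x>0$, which underlies the asymptotic estimates in the subsequent analysis.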
\begin{prop}\cite[Lemma 4.4]{DLW}\label{1prop:gamma}
For any given $\alpha>0$ and $0<\epsilon<e$, we have the following estimates
\begin{subequations}
\begin{align}
& \left\vert \int_{\epsilon}^{\infty} r^{\alpha}e^{-\mu r}\mathrm dr \right\vert \leq \frac{2}{\Re\mu}e^{-\frac{\epsilon}{2}\Re\mu} ,\label{1eq:Ir} \\
& \int_{0}^{\epsilon}r^{\alpha}e^{-\mu r}\mathrm dr =
\frac{\Gamma(\alpha+1)}{\mu^{\alpha+1}} +\mathcal O\left( \frac{2}{\Re\mu}e^{-\frac{\epsilon}{2}\Re\mu}\right),\label{1eq:gamma}
\end{align}
\end{subequations}
as $\Re(\mu)\rightarrow \infty$, where $\Gamma (s)$ stands for the Gamma function.
\end{prop}
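For the reader's convenience, we briefly indicate how \eqref{1eq:gamma} follows (a routine argument; see \cite[Lemma 4.4]{DLW} for details). Splitting the integral and evaluating the half-line integral by the substitution $t=\mu r$, one obtains
\begin{equation}\notag
\int_{0}^{\epsilon}r^{\alpha}e^{-\mu r}\mathrm dr=\int_{0}^{\infty}r^{\alpha}e^{-\mu r}\mathrm dr-\int_{\epsilon}^{\infty}r^{\alpha}e^{-\mu r}\mathrm dr=\frac{\Gamma(\alpha+1)}{\mu^{\alpha+1}}-\int_{\epsilon}^{\infty}r^{\alpha}e^{-\mu r}\mathrm dr,
\end{equation}
and the tail integral is controlled by \eqref{1eq:Ir}, which yields the remainder term in \eqref{1eq:gamma}.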
\begin{lem}\cite{costabel88}\label{2lem:green}
Let $\Omega \subset \mathbb R^n$ be a bounded Lipschitz domain. For any $f, g\in H^{1,{\Delta}}:=\{f\in H^{1}(\Omega)~|~\Delta f\in L^{2}(\Omega)\}$, the following Green's formula holds
\begin{equation}\label{2eq:2green}
\int_{\Omega}(g\Delta f-f\Delta g){\rm d}\mathbf x=\int_{\partial \Omega}(g\partial _{\nu}f-f\partial_\nu g) {\rm d}\sigma,
\end{equation}
where $\partial_\nu f$ is the exterior normal derivative of $f$ to $\partial \Omega$.
\end{lem}
\section{Vanishing of transmission eigenfunctions near a convex planar corner}\label{sec:2D}
In this section, we consider the vanishing property of conductive transmission eigenfunctions to \eqref{0eq:tr} near corners in $\mathbb R^2$. First, let us introduce some notation for subsequent use. Let $(r,\theta)$ be the polar coordinates in $\mathbb{R}^{2}$; that is, $\mathbf{x}=(x_{1},x_{2})=(r\cos \theta, r\sin \theta)\in \mathbb{R}^{2}$. For $\mathbf{x}\in \mathbb {R}^{2}$, $B_{h}(\mathbf{x})$ denotes the open ball of radius $h \in \mathbb {R}_{+}$ centered at $\mathbf{x}$. For simplicity, we denote $B_{h}:=B_{h}(\mathbf{0})$. Consider an open sector in $\mathbb{R}^{2}$ with the boundary $\Gamma^{\pm}$ as follows,
\begin{equation}\label{1eq:sec}
\mathcal{K}=\{ \mathbf{x} \in \mathbb{R}^{2}~|~\theta_{m}<\arg(x_{1}+\mathrm ix_{2})<\theta_{M} \},
\end{equation}
where $-\pi <\theta_{m}<\theta_{M}<\pi$, $\mathrm{i}:=\sqrt{-1}$, and the two boundaries $\Gamma^{\pm}$ of $\mathcal K$
correspond to $(r,\theta_{m})$ and $(r,\theta_{M})$ with $r>0$, respectively. Set
\begin{equation}\label{1eq:not}
S_{h}=\mathcal{K}\cap B_{h},\ \Gamma_{h}^{\pm}=\Gamma^{\pm}\cap B_{h},\ \Lambda_{h}=\mathcal{K}\cap \partial B_{h}.
\end{equation}
Let the Herglotz wave function be defined by
\begin{align}
u(\mathbf x)=\int_{\mathbb S^{n-1}} e^{\mathrm{i}k\xi\cdot \mathbf{x}}g (\xi)\mathrm d\xi,\quad \mathbf{x} \in \mathbb{R}^n,\ g \in L^{2}(\mathbb S^{n-1}), \quad n= 2 \mbox{ or }3,
\end{align}
which is an entire solution of
$$
(\Delta +k^2)u(\mathbf x)=0\quad \mbox{ in } \quad \mathbb R^n,\quad n=2 \mbox{ or }3.
$$
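Indeed, the entireness of $u$ can be verified directly: differentiating under the integral sign and using $\vert\xi\vert=1$ on $\mathbb S^{n-1}$, one computes
\begin{equation}\notag
(\Delta+k^{2})u(\mathbf x)=\int_{\mathbb S^{n-1}}\left(-k^{2}\vert\xi\vert^{2}+k^{2}\right)e^{\mathrm ik\xi\cdot\mathbf x}g(\xi)\mathrm d\xi=0.
\end{equation}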
By \cite[Theorem 2 and Remark 2]{Wec}, we know that the set of Herglotz wave functions is dense with respect to the $H^1$-norm in the set of solutions to
$$
(\Delta +k^2 ) v(\mathbf x)=0 \quad \mbox{ in } \quad D, \quad D\subset \mathbb R^n,\quad n=2 \mbox{ or }3,
$$
where $D$ is a bounded Lipschitz domain with a connected complement.
Consider the transmission eigenvalue problem \eqref{0eq:tr} defined in a bounded Lipschitz domain $\Omega$ with a connected complement. Since $\Delta$ is invariant under rigid motions, without loss of generality, we always assume that $\mathbf 0 \in \partial \Omega$ throughout the rest of this paper. In Theorem \ref{thm:2D}, we establish the vanishing property of the transmission eigenfunctions near a convex planar corner under $H^1$ regularity together with a certain Herglotz wave approximation assumption at the underlying corner. We postpone the proof of Theorem \ref{thm:2D} to Subsection \ref{subsec:3.1}. Compared with the assumptions in \cite[Theorem 2.1]{DCL}, we remove the technical condition $qw\in C^{\alpha}(\overline{S}_h )$, which is critical for the analysis in \cite{DCL}.
\begin{thm}\label{thm:2D}
Consider a pair of transmission eigenfunctions $v\in H^{1}(\Omega)$ and $w\in H^{1}(\Omega)$ to (\ref{0eq:tr}) associated with $k\in \mathbb R_+$, where $\Omega$ is a bounded Lipschitz domain with a connected complement. Suppose that $\mathbf 0\in \Gamma \subset \partial \Omega$ such that $\Omega \cap B_h=\mathcal K \cap B_h=S_h$, where the sector $\mathcal K$ is defined by (\ref{1eq:sec}) and $h\in \mathbb R_+$ is sufficiently small such that $q\in H^2(\overline{S}_h)$ and $\eta \in C^{\alpha}(\overline{\Gamma_h^\pm} )$, where $\alpha \in (0,1)$. If the following conditions are fulfilled:
\begin{itemize}
\item[(a)] for any given positive constants $\beta$ and $\gamma$ satisfying
\begin{equation}\label{eq:assump1}
\gamma <\alpha \beta,
\end{equation}
the transmission eigenfunction $v$ can be approximated in $H^{1}(S_h)$ by the Herglotz wave functions
$$
v_j=\int_{\mathbb S^{1}} e^{\mathrm{i}k\xi\cdot \mathbf{x}}g_j (\xi)\mathrm d\xi,\quad j=1,2,\cdots,
$$
with the kernels $g_j$ satisfying the approximation property
\begin{equation}\label{1eq:herg}
\|v-v_{j}\|_{H^{1}}\leq j^{-\beta},\quad \|g_{j}\|_{L^{2}(\mathbb S^{1})}\leq j^{\gamma };
\end{equation}
\item[(b)] $\eta$ does not vanish at $\mathbf 0$, where $\mathbf 0$ is the vertex of $S_h$;
\item[(c)] the open angle of $S_h$ satisfies
\begin{equation}\notag
-\pi <\theta_m<\theta_M<\pi \quad \mbox{and} \quad 0<\theta_M-\theta_m <\pi;
\end{equation}
\end{itemize}
then one has
\begin{equation}\label{1eq:v0}
\lim_{\lambda \to +0}\frac{1}{m(B(\mathbf 0,\lambda)\cap \Omega)}\int_{B(\mathbf 0,\lambda)\cap \Omega}\vert v(\mathbf x)\vert \mathrm d\mathbf x=0,
\end{equation}
where $m(B(\mathbf 0,\lambda)\cap \Omega)$ is the area of $B(\mathbf 0,\lambda)\cap \Omega$.
\end{thm}
It is remarked that the Herglotz approximation property in \eqref{1eq:herg} characterises a regularity lower than H\"older continuity (cf. \cite{LT}). In the following theorem, if the stronger H\"older regularity is imposed on the transmission eigenfunction $v$ near the corner, we can prove that $v$ vanishes at the corner point. The proof of Theorem \ref{2D:delta} is a slight modification of that of Theorem \ref{thm:2D}, and we only sketch it at the end of Subsection \ref{subsec:3.1}.
\begin{thm}\label{2D:delta}
Consider a pair of transmission eigenfunctions $v\in H^{1}(\Omega)$ and $w\in H^{1}(\Omega)$ to (\ref{0eq:tr}) associated with $k\in \mathbb R_+$, where $\Omega$ is a bounded Lipschitz domain with a connected complement. Suppose that $\mathbf 0\in\Gamma \subset \partial \Omega$ such that $\Omega \cap B_h=\mathcal K \cap B_h=S_h$, where the sector $\mathcal K$ is defined by (\ref{1eq:sec}) and $h\in \mathbb R_+$. If the following conditions are fulfilled:
\begin{itemize}
\item[(a)] $q\in H^2(\overline{S}_h)$, $v\in C^\alpha(\overline{S}_h) $ and $\eta \in C^{\alpha}(\overline{\Gamma_h^\pm} )$, where $0<\alpha<1$;
\item[(b)] the function $\eta$ does not vanish at the vertex $\mathbf 0$ of $S_h$, i.e.,
\begin{equation}\label{2eq:q0}
\eta(\mathbf 0)\not=0;
\end{equation}
\item[(c)] the open angle of $S_h$ satisfies
$$-\pi <\theta_m<\theta_M<\pi \quad \mbox{and}\quad 0<\theta_M-\theta_m<\pi;$$
\end{itemize}
then one has
\begin{equation}\label{1eq:delv0}
v(\mathbf 0) =0.
\end{equation}
\end{thm}
Recall that $\Omega$ is a bounded Lipschitz domain and $\Gamma$ is an open subset of $\partial \Omega$. Consider the classical transmission eigenvalue problem:
\begin{equation}\label{3eq:treta0}
\begin{cases}
&\Delta w+k^{2}qw=0 \quad \hspace*{0.55cm} \mbox {$\mathrm {in}\ \Omega$},\\
&\Delta v+k^{2}v=0 \quad \hspace*{0.9cm}\mbox {$\mathrm {in}\ \Omega$},\\
&w=v,\ \partial_{\nu}w=\partial_{\nu}v \hspace*{0.5cm}\mbox{$\mathrm {on}\ \Gamma$},
\end{cases}
\end{equation}
which can be formulated from \eqref{0eq:tr} by setting $\eta\equiv 0$. When $\Gamma=\partial \Omega$, \eqref{3eq:treta0} is referred to as the {\it interior transmission eigenvalue problem}, which has a colorful history in inverse scattering theory (cf. \cite{CCH,CH2013,Liu22} and the references therein). It was revealed in \cite[Theorem 1.2]{B2018} that the transmission eigenfunctions $v$ and $w$ to \eqref{3eq:treta0} must vanish near a planar corner of $\Gamma$ if $v$ or $w$ is $H^2$-smooth near the underlying corner and $q$ is H\"older continuous at the corner point. In the following Corollary \ref{cor1}, we shall establish the vanishing characterization of transmission eigenfunctions to \eqref{3eq:treta0} near a convex planar corner under two regularity criteria on the underlying transmission eigenfunctions near the corner. We emphasize that we remove the assumption of $H^2$-smoothness near the corner on $v$ and $w$ as stated in \cite[Theorem 1.2]{B2018}: we only require that $v$ is H\"older continuous at the corner point or satisfies a certain regularity condition in terms of Herglotz wave approximations (which is weaker than H\"older continuity, as remarked earlier). The proof of Corollary \ref{cor1} is postponed to Subsection \ref{subsec:32}.
\begin{cor}\label{cor1}
Consider a pair of transmission eigenfunctions $v\in H^{1}(\Omega)$ and $w\in H^{1}(\Omega)$ to \eqref{3eq:treta0} associated with $k\in \mathbb R_+$, where $\Omega$ is a bounded Lipschitz domain with a connected complement. Suppose that $\mathbf 0\in \Gamma \subset \partial \Omega$ such that $\Omega \cap B_h=\mathcal K \cap B_h=S_h$, where the sector $\mathcal K$ is defined by (\ref{1eq:sec}) and $h\in \mathbb R_+$ is sufficiently small such that $q\in H^2(\overline S_h)$ and $q(\mathbf 0)\neq 1$. The following two statements are valid.
\begin{itemize}
\item[(a)] For any given positive constants $\beta$ and $\gamma$ satisfying
$\gamma <\beta$,
if the transmission eigenfunction $v$ and the Herglotz wave functions $v_j$ with kernels $g_j$ satisfy the approximation property \eqref{1eq:herg}, then $v$ vanishes near $S_h$ in the sense of \eqref{1eq:v0}.
\item[(b)] If $v\in C^\alpha (\overline{S_h})$ with $\alpha\in (0,1)$, then it holds that $v(\mathbf 0)=0$.
\end{itemize}
\end{cor}
\subsection{Proof of Theorem \ref{thm:2D} }\label{subsec:3.1}
Given a convex sector $\mathcal K$ defined by \eqref{1eq:sec} and a positive constant $\zeta$, we define $\mathcal{K}_{\zeta}$ as the open set of $\mathbb{S}^{1}$ which is composed of all directions $\mathbf d\in \mathbb{S}^{1}$ satisfying that
\begin{equation}\label{2eq:zeta}
\mathbf d\cdot \hat{\mathbf{x}}>\zeta>0 \quad \mbox{for all}\ \hat{\mathbf{x}} \in\mathcal{K} \cap\mathbb S^1 .
\end{equation}
Throughout the present section, we always assume that the unit vector $ \mathbf d$ in the form of the CGO solution $u_0$ given by \eqref{1eq:cgo} fulfills \eqref{2eq:zeta}.
\begin{prop}\label{2prop:tau2}
Let $S_{h}$ and $\rho$ be defined in (\ref{1eq:not}) and (\ref{1eq:eta}), respectively, where $ \mathbf d$ satisfies \eqref{2eq:zeta}. Then we have
\begin{equation}\label{1eq:tau2}
\left\vert \int_{\Gamma_h^{\pm}}e^{\rho \cdot \mathbf x}\mathrm d \sigma\right \vert \geq \frac{C_{S_h }}{\tau}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right),
\end{equation}
for sufficiently large $\tau$, where $C_{S_h}$ is a positive number only depending on the opening angle $\theta_M-\theta_m$ of $\mathcal K$ and $\zeta$.
\end{prop}
\begin{proof}
Using polar coordinates transformation and Proposition \ref{1prop:gamma}, we have
\begin{equation}\notag
\begin{aligned}
\int_{\Gamma_h^{\pm}}e^{\rho\cdot \mathbf x}\mathrm d \sigma
&=\frac{\Gamma(1)}{\tau}\frac{1}{(\mathbf d+\mathrm i\mathbf d^{\perp})\cdot \mathbf{\hat x_1}}-I_{R_1}+\frac{\Gamma(1)}{\tau}\frac{1}{(\mathbf d+\mathrm i\mathbf d^{\perp})\cdot \mathbf{\hat x_2}}-I_{R_2},
\end{aligned}
\end{equation}
where $\mathbf {\hat x}_1$ and $\mathbf {\hat x}_2$ are the unit direction vectors along $\Gamma^-$ and $\Gamma ^+$, respectively, and
\begin{equation}\notag
I_{R_1}=\int_{\Gamma^-\setminus \Gamma^-_h}e^{\rho\cdot \mathbf x}\mathrm d\sigma,\quad I_{R_2}=\int_{\Gamma^+\setminus \Gamma^+_h}e^{\rho\cdot \mathbf x}\mathrm d\sigma.
\end{equation}
Hence, with the help of Proposition \ref{1prop:gamma}, for sufficiently large $\tau$, we have the following integral inequality
\begin{equation}\label{1eq:etatau2}
\begin{aligned}
\left\vert \int_{\Gamma_h^{\pm}}e^{\rho\cdot \mathbf x}\mathrm d \sigma \right \vert&\geq\frac{1}{\tau}\left \vert \frac{1}{(\mathbf d+\mathrm i\mathbf d^{\perp})\cdot \mathbf{\hat x_1}} + \frac{1}{(\mathbf d+\mathrm i\mathbf d^{\perp})\cdot \mathbf{\hat x_2}} \right \vert-\vert I_{R_1}\vert -\vert I_{R_2} \vert\\
&\geq \frac{C_{S_h}}{\tau}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
\end{aligned}
\end{equation}
by using \eqref{2eq:zeta}.
\end{proof}
The following proposition can be directly derived by using \eqref{2eq:zeta} and Proposition \ref{1prop:gamma}.
\begin{prop}\label{1prop:enorm}
For any given $t>0$, we let $S_h$ and $\Gamma_h^\pm$ be defined by \eqref{1eq:not}. Then one has
\begin{equation}\label{1eq:enorm}
\vert\vert e^{\rho \cdot \mathbf {x}} \vert \vert_{L^t(S_h)}\leq C \left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\frac{t}{2}\zeta h\tau}\right)^{\frac{1}{t}},
\end{equation}
\begin{equation}\label{1eq:enormGamma}
\vert\vert e^{\rho \cdot \mathbf {x}} \vert \vert_{L^t(\Gamma_h^{\pm})}\leq C \left(\frac{1}{\tau}+\frac{1}{\tau}e^{-\frac{t}{2}\zeta h\tau}\right)^{\frac{1}{t}},
\end{equation}
as $\tau\rightarrow \infty$, where $\vert\vert e^{\rho \cdot \mathbf {x}} \vert \vert_{L^t(S_h)}=\left(\int_{S_h} |e^{\rho \cdot \mathbf {x}} |^t {\mathrm d}\mathbf x\right)^{1/t}$, $\rho$ is defined in (\ref{1eq:eta}) and $C$ is a positive constant only depending on $t$ and $\zeta$.
\end{prop}
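We briefly sketch the derivation of \eqref{1eq:enorm}; the estimate \eqref{1eq:enormGamma} follows in the same way, with the area integral replaced by line integrals over the two rays $\Gamma_h^\pm$ (where the radial integrand carries no Jacobian factor $r$). Using polar coordinates, the bound $\vert e^{\rho\cdot\mathbf x}\vert\leq e^{-\zeta\tau\vert\mathbf x\vert}$ on $\mathcal K$ implied by \eqref{2eq:zeta}, and \eqref{1eq:gamma} with $\alpha=1$ and $\mu=t\zeta\tau$, one obtains
\begin{equation}\notag
\vert\vert e^{\rho \cdot \mathbf {x}} \vert\vert_{L^t(S_h)}^{t}\leq(\theta_M-\theta_m)\int_{0}^{h}r\,e^{-t\zeta\tau r}\mathrm dr=(\theta_M-\theta_m)\left[\frac{\Gamma(2)}{(t\zeta\tau)^{2}}+\mathcal O\left(\frac{1}{t\zeta\tau}e^{-\frac{t}{2}\zeta h\tau}\right)\right],
\end{equation}
and \eqref{1eq:enorm} follows upon taking the $t$-th root.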
\begin{lem}\label{lem:31}
Under the same setup of Theorem \ref{thm:2D}, let the CGO solution $u_0$ be defined by \eqref{1eq:cgo}. Denote $u=w-v$, where $(v,w)$ is a pair of transmission eigenfunctions of \eqref{0eq:tr} associated with $k$. Then it holds that
\begin{equation}\label{eq:pde sys 2}
\begin{cases}
&\Delta u_0 +k^2qu_0=0\hspace*{2.1cm}\mbox{in}\quad S_h,\\
&\Delta u +k^2qu=k^2(1-q)v \hspace*{0.9cm} \mbox{in}\quad S_h,\\
&u=0,\quad \partial_\nu u=\eta v \hspace*{1.9cm} \mbox{on}\quad \Gamma_h^\pm,
\end{cases}
\end{equation}
and \begin{equation}\label{1eq:psi}
\vert \vert \psi(\mathbf x) \vert \vert_{H^{1,8}}=\mathcal O(\tau^{-\frac{2}{3}}),
\end{equation}
where $\psi$ and $\tau$ are defined in \eqref{1eq:cgo}.
\end{lem}
\begin{proof}
Since $q \in H^2(\overline{S}_h)$, let $\widetilde q\in H^2$ be a Sobolev extension of $q$. Hence $\widetilde q\in H^{1,1+\epsilon_0}$ with $\epsilon_{0}=\frac{1}{2}$, and therefore $\widetilde q$ satisfies the assumption in Proposition \ref{1pro:nor}. Letting $\tilde{p}=1+\frac{10}{19}\epsilon_{0}$, according to \eqref{eq:p til cond}, one has $p=8$. Since $\widetilde q$ and $p$ fulfill the assumptions of Lemma \ref{1lem:nor}, there exists a CGO solution of the form \eqref{1eq:cgo} satisfying
\begin{align}\label{eq:cgo 2}
\Delta u_0 +k^2\widetilde qu_0=0\quad \mbox{in} \quad \mathbb R^n,\quad \widetilde q~\big|_{S_h} =q.
\end{align}
Then \eqref{1eq:psi} follows from \eqref{1nor:psi}. Using \eqref{0eq:tr} and \eqref{eq:cgo 2}, we obtain \eqref{eq:pde sys 2}.
\end{proof}
\begin{lem}\label{lem:trace thm}\cite{E.G,LME,GG07,SEM}
Let $\Omega$ be a bounded Lipschitz and connected subset of $\mathbb R^n$, $n=2,3$, whose bounded and orientable boundary is denoted by $\Gamma$. Let $\gamma_0(u)=u|_\Gamma$ denote the restriction; then the operator $\gamma_0$ is linear and continuous from $H^{1,p}(\Omega)$ onto $H^{1-\frac{1}{p},p}(\Gamma)$ for $1\leq p<\infty$.
\end{lem}
\begin{lem}\label{2lem:psi8}
Let $\Gamma_h^\pm$ be defined in \eqref{1eq:not}, and let $\psi$ and $\rho$ be given by \eqref{1eq:cgo} and \eqref{1eq:eta}, respectively. For sufficiently large $\tau$, it holds that
\begin{equation}
\left\vert \int_{\Gamma_h^\pm} e^{\rho \cdot \mathbf x}\psi(\mathbf x)\mathrm d\sigma\right\vert \lesssim \tau^{-\frac{17}{12}}.
\end{equation}
Throughout the rest of this paper, $\lesssim$ means that we only give the leading-order asymptotics as $\tau\rightarrow \infty$, neglecting a generic positive constant $C$ independent of $\tau$.
\end{lem}
\begin{proof}
Taking $\mathbf y=\tau \mathbf x$, and using the H\"older inequality and Lemma \ref{lem:trace thm}, one has
\begin{align}\label{2eq:etagamma}
\int_{\Gamma_h^\pm}\vert e^{\rho \cdot \mathbf x}\vert&\vert \psi(\mathbf x)\vert \mathrm d\sigma
\lesssim \frac{1}{\tau}\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma_{\tau h}^\pm)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\Gamma_{\tau h}^\pm)} \\
&
\lesssim \frac{1}{\tau}\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma^\pm)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{H^{1,8}(S_{\tau h})} \lesssim \frac{1}{\tau}\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma^\pm)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{H^{1,8}(\mathcal K)},\notag
\end{align}
for sufficiently large $\tau$. Since $\frac{1}{\tau}<1$, it holds that
\begin{align}\label{2eq:psik}
\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{H^{1,8}(\mathcal K)}
&\leq\tau^{\frac{1}{4}}\left \|\psi(\mathbf x)\right \|_{H^{1,8}(\mathcal K)}=\mathcal O(\tau^{-\frac{5}{12}}),\quad \mbox{as $\tau \rightarrow \infty$. }
\end{align}
Furthermore,
\begin{equation}\label{2eq:edy}
\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma ^\pm )}
\leq \left(\int_{\Gamma^\pm}e^{-\frac{8}{7}\zeta \vert \mathbf y\vert}\mathrm d\sigma\right)^\frac{7}{8}
\leq 2\left( \frac{7}{8}\frac{1}{\zeta}\right)^{\frac{7}{8}},
\end{equation}
where $\zeta$ is defined in \eqref{2eq:zeta}. Hence, $\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma ^\pm )}$ is a positive constant which only depends on $\zeta$.
Combining \eqref{2eq:psik} and \eqref{2eq:edy} with \eqref{2eq:etagamma}, we can prove Lemma \ref{2lem:psi8}.
\end{proof}
\begin{lem}\label{2lem:u0est}
Let $\Lambda_h,\ S_h$ be defined in \eqref{1eq:not} and $u_0(\mathbf x)$ be given by \eqref{1eq:cgo}. Then $u_0(\mathbf x)\in H^1(S_h)$ and it holds that
\begin{subequations}
\begin{align}
\|u_0(\mathbf x)\|_{L^2(\Lambda_h)}&\lesssim \left(1+\tau^{-\frac{2}{3}}\right)e^{-\zeta h\tau},\label{2eq:u0lambda} \\
\label{2eq:nablau0}
\|\nabla u_0(\mathbf x)\|_{L^2(\Lambda_h)}&\lesssim (1+\tau) \left(1+\tau^{-\frac{2}{3}}\right)e^{-\zeta h\tau},\\
\label{2eq:u0alpha}
\int_{S_h}\vert \mathbf x\vert^{\alpha}\vert u_0(\mathbf x)\vert\,\mathrm d\mathbf x
&\lesssim
\tau^{-(\alpha+\frac{29}{12})}+\left(\frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
\end{align}
\end{subequations}
as $\tau \rightarrow \infty$, where $\zeta$ is defined in \eqref{2eq:zeta} and $\alpha \in (0,1)$.
\end{lem}
\begin{proof} Using polar coordinates transformation, \eqref{1eq:Ir} and \eqref{2eq:zeta}, we can obtain that
\begin{equation}\label{2eq:eeta}
\|e^{\rho \cdot \mathbf x}\|_{ L^{t}(\Lambda_{h})} \lesssim e^{-\zeta h\tau},
\end{equation}
where $\rho$ is defined in \eqref{1eq:eta} and $t$ is a positive constant.
According to \eqref{1eq:psi} and Lemma \ref{lem:trace thm}, for sufficiently large $\tau$, one can show that
\begin{align}\label{1eq:psi est}
\|\psi(\mathbf x)\|_{L^{4}(\Lambda_{h})}
\leq C \| \psi(\mathbf x)\|_{H^{1,8}(S_h)}=O(\tau^{-\frac{2}{3}}),
\end{align}
where $C$ is a positive constant, which is not a function of $\tau$.
By virtue of \eqref{1eq:psi est} and H\"older inequality, it can be directly verified that
\begin{equation}\label{2eq:I42}
\begin{aligned}
\|u_{0}\|_{ L^{2}(\Lambda_h)}
&\leq\|e^{\rho \cdot \mathbf x}\|_{ L^{2}(\Lambda_{h})}+\|e^{\rho \cdot \mathbf x}\|_{ L^{4}(\Lambda_{h})}\|\psi(\mathbf x)\|_{ L^{4}(\Lambda_{h})}\\
&\lesssim \left(1+\tau^{-\frac{2}{3}}\right)e^{-\zeta h\tau},\quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
Similarly, using Cauchy-Schwarz inequality, \eqref{2eq:u0lambda} and Proposition \ref{1prop:gamma}, we have
\begin{equation}\label{2eq:I43}
\begin{aligned}
\|\nabla u_{0}(\mathbf x)\|_{L^{2}(\Lambda_h)}
&\leq \sqrt{2}\tau \|u_{0}\|_{ L^{2}(\Lambda_h)}+\|e^{\rho \cdot \mathbf x}\|_{ L^{4}(\Lambda_h)} \|\nabla \psi(\mathbf x)\|_{ L^{4}(\Lambda_h)}\\
&\lesssim(1+\tau)(1+\tau^{-\frac{2}{3}})e^{-\zeta h\tau},\quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
Moreover, by using Cauchy-Schwarz inequality, we know that
\begin{equation}\label{2eq:xalpha}
\begin{aligned}
\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert u_{0} \vert\mathrm{d}\mathbf x
&\leq\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}} \vert \mathrm{d}\mathbf x+\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}}\vert\vert \psi(\mathbf x)\vert\mathrm{d}\mathbf x.\\
\end{aligned}
\end{equation}
Using polar coordinates transformation and Proposition \ref{1prop:gamma}, we can deduce that
\begin{equation}\label{2eq:I111}
\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}} \vert \mathrm{d}\mathbf x
\lesssim \frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau },\quad \mbox{as $\tau \rightarrow \infty$. }
\end{equation}
Next, by letting $\mathbf y=\tau \mathbf x$ and using the H\"older inequality, it can be calculated that
\begin{equation}\label{2eq:u0K}
\begin{aligned}
\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}}\vert\vert \psi(\mathbf x)\vert\mathrm{d}\mathbf x
&\leq \frac{1}{\tau^{\alpha+2}}\int_{\mathcal K}\vert \mathbf y\vert^{\alpha}\vert e^{-\mathbf d\cdot \mathbf y}\vert \left\vert \psi\left(\frac{\mathbf y}{\tau}\right)\right\vert\mathrm d\mathbf y\\
&\leq \frac{1}{\tau^{\alpha+2}}\|\vert \mathbf y\vert^{\alpha}\vert e^{-\mathbf d\cdot \mathbf y}\vert \|_{L^{\frac{8}{7}}(\mathcal K)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^{8}(\mathcal K)}.
\end{aligned}
\end{equation}
With the help of variable substitution and \eqref{1eq:psi}, we can calculate that
\begin{equation}\label{2eq:psiL8K}
\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\mathcal K)}=\tau^{\frac{1}{4}}\|\psi\left(\mathbf x\right)\|_{L^8(\mathcal K)}=\mathcal O(\tau^{-\frac{5}{12}}),\quad \mbox{as $\tau \rightarrow \infty$. }
\end{equation}
Similar to \eqref{2eq:edy}, by using the polar coordinates transformation, one can verify that
$
\| \vert \mathbf y\vert^\alpha \vert e^{-\mathbf d\cdot \mathbf y}\vert \|_{L^\frac{8}{7}(\mathcal K)}
$
is a positive constant independent of $\tau$.
Therefore, combining \eqref{2eq:psiL8K}, \eqref{2eq:u0K} and \eqref{2eq:I111} with \eqref{2eq:xalpha}, we have \eqref{2eq:u0alpha}.
The proof is complete.
\end{proof}
Now we are in a position to prove Theorem \ref{thm:2D}.
\begin{proof}[{\bf Proof of Theorem \ref{thm:2D}}]
By Green's formula \eqref{2eq:2green} and \eqref{eq:pde sys 2}, the following integral equality holds
\begin{equation}\label{1eq:green}
\begin{aligned}
\int_{\Lambda_{h}}\left[(w-v)\partial_{\nu}u_{0}-u_{0}\partial_{\nu}(w-v)\right]\mathrm{d}\sigma-\int_{\Gamma_h^{\pm}}\eta u_0v\mathrm d\sigma
&=k^{2}\int_{S_{h}}(q-1)vu_{0}\mathrm{d}\mathbf x.
\end{aligned}
\end{equation}
Denote
$$
f_{j}=(q-1)v_{j}.
$$
Since $q\in H^2(\overline S_h)$, by the Sobolev embedding, one has $q\in C^{\alpha}(\overline S_h)$ for $\alpha \in (0,1)$. Clearly, $v_{j}\in C^{\alpha}(\overline S_h)$, and hence $f_{j}\in C^{\alpha}(\overline S_h)$. Since $v_j\in C^{\alpha}$ and $\eta\in C^{\alpha}$, we have the expansions
\begin{equation}\label{1eq:fj}
\begin{aligned}
f_{j}&=f_{j}(\mathbf{0})+\delta f_{j},\ \vert \delta f_{j}\vert\leq \| f_{j}\|_{C^{\alpha}( {\overline S}_h)}\vert \mathbf{x} \vert^{\alpha},\\
v_j&=v_j(\mathbf 0)+\delta v_j,\ \vert\delta v_j\vert \leq\|v_j\|_{C^{\alpha}(\overline S_h)}\vert\mathbf x\vert^{\alpha},\\
\eta&=\eta(\mathbf 0)+\delta\eta,\ \vert\delta\eta\vert\leq\|\eta\|_{C^{\alpha}(\overline{ \Gamma_h^\pm } )}\vert\mathbf x\vert^{\alpha}.
\end{aligned}
\end{equation}
By virtue of \eqref{1eq:fj} and \eqref{1eq:cgo}, it yields that
\begin{equation}\label{2eq:fj}
\begin{aligned}
&k^2\int_{S_{h}}(q-1)vu_0\mathrm d\mathbf x =-\sum_{m=1}^3 I_{m},\quad \int_{\Gamma_h^{\pm}}\eta u_0v\mathrm d\sigma=I-\sum_{m=4}^9I_m,
\end{aligned}
\end{equation}
where
\begin{align*}
I_{1}&=-k^{2}\int_{S_{h}}(q-1)(v-v_{j})u_{0}\mathrm d\mathbf x
,\quad
I_{2}=-\int_{S_{h}}\delta f_{j}u_{0}\mathrm d\mathbf x,\quad
\\
I_{3}&=-f_{j}(\mathbf{0})\int_{S_{h}}u_0\mathrm{d}\mathbf x,\quad
I_4=-\eta(\mathbf 0)\int_{\Gamma_h^{\pm}}(v-v_j)u_0\mathrm{d}\sigma,\\
I_5&=-\int_{\Gamma_h^{\pm}}\delta \eta(v-v_j)u_0\mathrm{d}\sigma,\quad
I_6=-\eta(\mathbf 0)v_j(\mathbf 0)\int_{\Gamma_h^{\pm}}e^{\rho\cdot \mathbf x}\psi(\mathbf x)\mathrm{d}\sigma,\\
I_{7}&=-\eta(\mathbf 0)\int_{\Gamma_h^{\pm}}\delta v_ju_0\mathrm d\sigma,\quad
I_8=-v_j(\mathbf 0) \int_{\Gamma_h^{\pm}}\delta \eta u_0\mathrm d\sigma,\\
I_9&=-\int_{\Gamma_h^{\pm}}\delta \eta\delta v_j u_0\mathrm d\sigma,\quad
I=\eta(\mathbf 0)v_j(\mathbf 0)\int_{\Gamma_h^{\pm}}e^{\rho\cdot\mathbf x}\mathrm d\sigma.
\end{align*}
Substituting \eqref{2eq:fj} into \eqref{1eq:green}, we have the following integral identity
\begin{align}\notag
I=\sum_{m=1}^9I_m+J_1+J_2,
\end{align}
where
\begin{equation}\label{2eq:chaifen}
\begin{aligned}
J_{1}&=\int_{\Lambda_{h}}(w-v)\partial_{\nu}u_{0}\mathrm d\sigma,\quad
J_{2}=-\int_{\Lambda_{h}} u_{0}\partial_{\nu}(w-v)\mathrm d\sigma.
\end{aligned}
\end{equation}
Therefore, it yields that
\begin{equation}\label{2eq:I}
\vert I\vert\leq \sum_{m=1}^9\vert I_m\vert+\vert J_1\vert+\vert J_2\vert.
\end{equation}
In the following, we give detailed asymptotic estimates of $I_m,m=1,\cdots,9$ and $J_j,\ j=1,2$ as $\tau\to \infty$, separately.
With the help of Proposition \ref{1prop:enorm}, H\"older inequality and \eqref{1eq:psi}, it arrives at
\begin{equation}\label{2eq:I1}
\begin{aligned}
\vert I_{1} \vert
&\lesssim \|v-v_j\|_{L^2(S_h)}\left(\|e^{\rho\cdot \mathbf x}\|_{L^2(S_h)}+\| e^{\rho\cdot \mathbf x}\psi(\mathbf x)\|_{L^2(S_h)}\right)\\
&\lesssim \|v-v_j\|_{L^2(S_h)}\left(\|e^{\rho\cdot \mathbf x}\|_{L^2(S_h)}+\| e^{\rho\cdot \mathbf x}\|_{L^4(S_h)}\|\psi(\mathbf x)\|_{L^4(S_h)}\right)\\
&\lesssim j^{-\beta}\left[\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{3}} \right]
\end{aligned}
\end{equation}
as $\tau \rightarrow\infty$.
By virtue of (\ref{1eq:fj}), it yields that
\begin{equation}\label{2eq:I21}
\vert I_{2} \vert \leq \|f_{j} \|_{C^{\alpha}( \overline S_h)}\int_{S_{h}}\vert \mathbf x\vert^{\alpha}\vert u_{0} \vert\mathrm d\mathbf x,
\end{equation}
where
\begin{equation}\label{2eq:fj1}
\begin{aligned}
\|f_{j}\|_{C^{\alpha}
(S_h)}&
\leq k^2\left(\|q\|_{C^{\alpha}(\overline S_h)}\sup_{S_h}\vert v_{j} \vert+\|v_{j}\|_{C^{\alpha}( \overline S_h)}\sup_{S_h}\vert q-1\vert\right).
\end{aligned}
\end{equation}
Using the embedding properties of H\"older spaces, we can derive that
\begin{equation}\label{2eq:calpha}
\| v_{j}\|_{C^{\alpha}}\leq {\mathrm {diam} }\left(S_{h}\right)^{1-\alpha}\|v_{j}\|_{C^{1}(S_h)},
\end{equation}
where diam$\left(S_{h}\right)$ is the diameter of $S_{h}$. By direct computations, we obtain
\begin{equation}\label{2eq:vjc1}
\|v_{j}\|_{C^{1}}\leq \sqrt{2\pi}(1+k)\|g\|_{L^{2}(\mathbb S^{1})}.
\end{equation}
Furthermore, by the Cauchy-Schwarz inequality, we can also deduce that
\begin{equation}\label{2eq:vj}
\vert v_{j}\vert\leq\sqrt{2\pi}\|g\|_{L^{2}(\mathbb S^{1})}.
\end{equation}
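For completeness, we sketch the short computation behind \eqref{2eq:vj}: since $\vert e^{\mathrm ik\xi\cdot \mathbf x}\vert=1$, the Cauchy-Schwarz inequality applied to the Herglotz representation \eqref{1eq:herg} of $v_j$ gives
\begin{equation}\notag
\vert v_{j}(\mathbf x)\vert=\left\vert \int_{\mathbb S^1}e^{\mathrm ik\xi\cdot \mathbf x}g(\xi)\mathrm d\xi\right\vert\leq \left(\int_{\mathbb S^1}1\,\mathrm d\xi\right)^{\frac{1}{2}}\left(\int_{\mathbb S^1}\vert g(\xi)\vert^2\mathrm d\xi\right)^{\frac{1}{2}}=\sqrt{2\pi}\|g\|_{L^{2}(\mathbb S^{1})}.
\end{equation}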
Due to \eqref{1eq:herg} and the fact that $q\in C^{\alpha}(\overline S_h)$, substituting (\ref{2eq:calpha}), (\ref{2eq:vjc1}) and (\ref{2eq:vj}) into (\ref{2eq:fj1}), we have
\begin{equation}\label{2eq:fjc}
\|f_{j}\|_{C^{\alpha}(\overline S_h)}\lesssim j^{\gamma },\quad \|v_j\|_{C^\alpha(\overline S_h)}\lesssim j^{\gamma},
\end{equation}
where $\gamma$ is a given positive constant defined in \eqref{1eq:herg}. Substituting \eqref{2eq:u0alpha} and \eqref{2eq:fjc} into \eqref{2eq:I21}, we can deduce that
\begin{equation}\label{2eq:I2}
\vert I_{2} \vert \lesssim j^{\gamma }\left[\tau^{-(\alpha+\frac{29}{12})}+\left(\frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]
\end{equation}
as $\tau \rightarrow \infty$.
Using the Cauchy-Schwarz inequality, it can be easily calculated that
\begin{equation}\label{2eq:I31}
\vert I_{3} \vert
\lesssim \int_{S_h}\vert e^{\rho \cdot \mathbf x}\vert\mathrm d\mathbf x+\int_{S_h}\vert e^{\rho \cdot \mathbf x}\psi(\mathbf x)\vert \mathrm d\mathbf x\lesssim \int_{S_h}\vert e^{\rho \cdot \mathbf x}\vert \mathrm d\mathbf x+\int_{\mathcal K}\vert e^{\rho \cdot \mathbf x}\psi(\mathbf x)\vert\mathrm d\mathbf x.
\end{equation}
By integral substitution and using \eqref{2eq:psiL8K}, we obtain that
\begin{equation}\label{2eq:I33}
\begin{aligned}
\int_{\mathcal K}\vert e^{\rho \cdot \mathbf x}\psi(\mathbf x)\vert\mathrm d\mathbf x&=\frac{1}{\tau^2}\int_{\mathcal K}e^{-\mathbf d\cdot \mathbf y}\left \vert \psi\left(\frac{\mathbf y}{\tau}\right)\right \vert\mathrm d\mathbf y
\leq \frac{1}{\tau^2} \|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\mathcal K)}\|\psi\left(\frac{\mathbf y}{\tau}\right)\|_{L^{8}(\mathcal K)}\\
&\lesssim \tau^{-\frac{29}{12}},\quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
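For the reader's convenience, we sketch how the exponent $-\frac{29}{12}$ in \eqref{2eq:I33} arises (assuming, as in \eqref{2eq:psiL8K}, that $\|\psi\|_{L^8(\mathcal K)}\lesssim \tau^{-\frac{2}{3}}$). The substitution $\mathbf y=\tau\mathbf x$ in two dimensions produces the Jacobian factor $\tau^{-2}$, and since $\tau\mathcal K=\mathcal K$ for the sector $\mathcal K$, the rescaled factor satisfies
\begin{equation}\notag
\left\|\psi\left(\frac{\cdot}{\tau}\right)\right\|_{L^{8}(\mathcal K)}=\tau^{\frac{2}{8}}\|\psi\|_{L^{8}(\mathcal K)}\lesssim \tau^{\frac{1}{4}}\cdot\tau^{-\frac{2}{3}}=\tau^{-\frac{5}{12}},\quad\mbox{whence}\quad \tau^{-2}\cdot\tau^{-\frac{5}{12}}=\tau^{-\frac{29}{12}},
\end{equation}
where we have also used that $\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\mathcal K)}=\mathcal O(1)$, since $\mathbf d\cdot \hat{\mathbf y}\geq \zeta>0$ on $\mathcal K$.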
With the help of Proposition \ref{2prop:tau2}, substituting (\ref{2eq:I33}) into (\ref{2eq:I31}), we can derive that
\begin{equation}\label{2eq:I3}
\vert I_{3} \vert \lesssim \tau^{-\frac{29}{12}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right), \mbox{as $\tau \rightarrow \infty$. }
\end{equation}
Using the Cauchy-Schwarz inequality, the trace theorem and the H\"older inequality, we have
\begin{equation}\label{2eq:I4}
\begin{aligned}
\vert I_4\vert
&\lesssim \|v-v_j\|_{L^2(\Gamma_h^{\pm})}(\|e^{\rho\cdot \mathbf x}\|_{L^2(\Gamma_h^{\pm})}+\|e^{\rho\cdot \mathbf x}\|_{L^4(\Gamma_h^{\pm})}\|\psi(\mathbf x)\|_{L^4(\Gamma_h^{\pm})})\\
&\lesssim j^{-\beta}\left[\left(\frac{1}{\tau}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{3}}\right],
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$. Similarly, by virtue of the Cauchy-Schwarz inequality, the trace theorem and the H\"older inequality, it can be calculated that
\begin{equation}
\begin{aligned}\notag
\vert I_5\vert
&\lesssim \|v-v_j\|_{H^1(S_h)}\left(\|e^{\rho\cdot \mathbf x}\vert \mathbf x\vert^{\alpha}\|_{L^2(\Gamma_h^{\pm})}+\|e^{\rho\cdot \mathbf x}\vert \mathbf x\vert^{\alpha}\|_{L^4(\Gamma_h^{\pm})}\|\psi(\mathbf x)\|_{L^4(\Gamma_h^{\pm})}\right)\\
&\lesssim j^{-\beta}\left[ \left(\frac{1}{\tau^{(2\alpha+1)}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^\frac{1}{2}+\left(\frac{1}{\tau^{(4\alpha+1)}}+\frac{1}{\tau}e^{-2\zeta h\tau} \right)^{\frac{1}{4}}\tau^{-\frac{2}{3}}\right],
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$.
By using Lemma \ref{2lem:psi8}, one can show that
\begin{equation}\label{2eq:I63}
\begin{aligned}
I_6&\lesssim \int_{\Gamma_h^\pm}\vert e^{\rho \cdot \mathbf x}\psi(\mathbf x)\vert \mathrm d\mathbf x\lesssim \tau^{-\frac{17}{12}}, \quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
Using \eqref{1eq:fj}, \eqref{2eq:fjc}, and Proposition \ref{1prop:gamma}, we have the following inequality
\begin{align}\label{2eq:I7}
\vert I_7\vert
&\lesssim j^{\gamma}\left[\int_{\Gamma_{h}^{\pm}}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf x}\vert\mathrm d\sigma+\left\|\vert \mathbf x\vert^{\alpha} e^{\rho\cdot \mathbf x}\right\|_{L^2(\Gamma_h^{\pm})}\|\psi(\mathbf x)\|_{L^2(\Gamma_h^{\pm})} \right] \nonumber \\
&\lesssim j^{\gamma}\left[\left(\frac{1}{\tau^{\alpha+1}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\left(\frac{1}{\tau^{2\alpha+1}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}\tau^{-\frac{2}{3}}\right],
\end{align}
as $\tau \rightarrow \infty$.
By arguments similar to those for \eqref{2eq:I7}, we can derive that
\begin{subequations}
\begin{align}\label{2eq:I8}
& \vert I_8\vert\lesssim \left(\frac{1}{\tau^{\alpha+1}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\left(\frac{1}{\tau^{2\alpha+1}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}\tau^{-\frac{2}{3}},\\
& \vert I_9\vert\lesssim
j^{\gamma}\left[\left(\frac{1}{\tau^{2\alpha+1}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\left(\frac{1}{\tau^{4\alpha+1}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}\tau^{-\frac{2}{3}}\right], \label{2eq:I9}
\end{align}
\end{subequations}
as $\tau \rightarrow \infty$. By the Cauchy-Schwarz inequality and the trace theorem, we deduce that
\begin{align}
\vert J_1\vert&
\leq C\|u_{0}\|_{H^{1}(\Lambda_{h})} \|w-v\|_{H^{1}(\Lambda_{h})} \label{2eq:J11} \\
&\lesssim \|u_{0}\|_{H^{1}(\Lambda_{h})} \nonumber
\end{align}
as $\tau \rightarrow \infty$, where $C$ is a positive constant arising from the trace theorem. Hence, by virtue of \eqref{2eq:u0lambda} and \eqref{2eq:nablau0}, from \eqref{2eq:J11} we readily obtain
\begin{equation}\label{2eq:J1}
\begin{aligned}
\vert J_1 \vert \lesssim (1+\tau)(1+\tau^{-\frac{2}{3}})e^{-\zeta h\tau}
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$, where $\zeta$ is a positive constant given in \eqref{2eq:zeta}.
Similarly, using the Cauchy-Schwarz inequality, the trace theorem and \eqref{2eq:nablau0}, we can obtain that
\begin{align}
\vert J_2 \vert &\leq \|\partial _{\nu} u_0\|_{ L^{2}(\Lambda_h)} \|w-v\|_{ L^{2}(\Lambda_h)}
\leq C \|\partial _{\nu} u_0\|_{ L^{2}(\Lambda_h)} \|w-v\|_{ H^{1}(S_{h})} \label{2eq:J2} \\
&\lesssim \|\nabla u_{0}\|_{ L^{2}(\Lambda_h)} \lesssim (1+\tau)(1+\tau^{-\frac{2}{3}})e^{-\zeta h\tau}. \nonumber
\end{align}
Substituting the above estimates of $I_m$, $m=1,\ldots,9$, and of $J_1$, $J_2$ into (\ref{2eq:I}), by virtue of (\ref{1eq:tau2}), we derive that
\begin{align}
\left(\frac{C_{S_h}}{\tau}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)&\vert \eta(\mathbf 0)v_j(\mathbf 0)\vert
\lesssim
j^{-\beta}\left[\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{3}} \right ] \nonumber \\
& + j^{\gamma }\left[\tau^{-(\alpha+\frac{29}{12})}+\left(\frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right] \nonumber \\
&+j^{-\beta}\left[\left(\frac{1}{\tau}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{3}}\right] \label{2D:1} \\
&+j^{-\beta}\left[ \left(\frac{1}{\tau^{(2\alpha+1)}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^\frac{1}{2}+\left(\frac{1}{\tau^{(4\alpha+1)}}+\frac{1}{\tau}e^{-2\zeta h\tau} \right)^{\frac{1}{4}}\tau^{-\frac{2}{3}}\right] \nonumber \\
&+(j^{\gamma}+1)\left[\left(\frac{1}{\tau^{\alpha+1}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\left(\frac{1}{\tau^{2\alpha+1}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}\tau^{-\frac{2}{3}}\right] \nonumber \\
&+j^{\gamma}\left[\left(\frac{1}{\tau^{2\alpha+1}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\left(\frac{1}{\tau^{4\alpha+1}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}\tau^{-\frac{2}{3}}\right] \nonumber \\
&+(1+\tau)(1+\tau^{-\frac{2}{3}})e^{-\zeta h\tau}+
\tau^{-\frac{29}{12}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)+\tau^{-\frac{17}{12}} \nonumber
\end{align}
as $\tau \rightarrow \infty$, where $C_{S_h}$ is a positive constant given in \eqref{1eq:tau2}.
Multiplying both sides of (\ref{2D:1}) by $\tau$ and letting $\tau=j^s$, where $s>0$, it can be derived that
\begin{equation}\label{2D:2}
\begin{aligned}
\left(C_h-e^{-\frac{1}{2}\zeta hj^s}\right)\vert \eta(\mathbf 0)v_j(\mathbf 0)\vert&\lesssim j^{-\beta+s}+j^{\gamma-(\alpha+1)s}+j^{-\beta+\frac{1}{2}s}+j^{-\beta+(-\alpha +\frac{1}{2})s}\\
&+j^{\gamma-\alpha s}+j^{-\frac{13}{24}s}+j^{-\frac{17}{12}s},
\end{aligned}
\end{equation}
as $j \rightarrow \infty$. Under the assumption \eqref{eq:assump1}, we can choose $s \in (\gamma/ \alpha , \beta )$.
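We remark that for $s\in(\gamma/\alpha,\beta)$ every exponent of $j$ on the right-hand side of \eqref{2D:2} is strictly negative; indeed,
\begin{equation}\notag
-\beta+s<0,\quad \gamma-\alpha s<0,\quad \gamma-(\alpha+1)s<\gamma-\alpha s<0,\quad -\beta+\frac{s}{2}<-\beta+s<0,
\end{equation}
and similarly $-\beta+\left(\frac{1}{2}-\alpha\right)s<0$, $-\frac{13}{24}s<0$ and $-\frac{17}{12}s<0$, so that the right-hand side of \eqref{2D:2} tends to $0$ as $j\to\infty$.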
Hence, letting $j\rightarrow \infty$ in \eqref{2D:2}, it readily follows that
$$
\lim_{j \rightarrow \infty } \left\vert \eta(\mathbf 0)v_j(\mathbf 0)\right\vert=0.$$
Since $\eta(\mathbf 0)\not = 0$, one has $\lim_{j \rightarrow \infty }\vert v_j(\mathbf 0)\vert=0 $. Using \eqref{1eq:herg} and the integral mean value theorem, we can obtain \eqref{1eq:v0}.
The proof is complete.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{2D:delta}]
Since $q\in H^2(\overline S_h)$, the Sobolev embedding property implies that $q\in C^\alpha(\overline{S_h})$ with $\alpha\in (0,1]$. Under the assumption $v\in C^\alpha(\overline S_h)$ ($\alpha\in (0,1]$), it readily follows that $f(\mathbf x):=(q(\mathbf x)-1)v(\mathbf x) \in C^{\alpha} (\overline{S_h})$. Hence we have the following expansions of $f(\mathbf x)$, $\eta$ and $v(\mathbf x)$ near the origin:
\begin{equation}\label{eq:f expan 2}
\begin{aligned}
f(\mathbf x)&= f(\mathbf 0)+\delta f,\quad \vert \delta f\vert\leq \|f\|_{C^\alpha}\vert \mathbf x \vert ^\alpha\\
\eta&= \eta(\mathbf 0)+\delta \eta,\quad \vert \delta \eta\vert\leq \|\eta\|_{C^\alpha}\vert \mathbf x\vert ^\alpha\\
v(\mathbf x)&= v(\mathbf 0)+\delta v,\quad \vert \delta v\vert\leq \|v\|_{C^\alpha}\vert \mathbf x\vert ^\alpha
\end{aligned}
\end{equation}
Plugging \eqref{eq:f expan 2} into the integral identity \eqref{1eq:green} yields
\begin{align}
\eta(\mathbf 0)v(\mathbf 0)\int_{\Gamma_h^{\pm}}e^{\rho\cdot \mathbf x}\mathrm d\sigma&=f(\mathbf 0)\int_{S_h}u_0\mathrm d\mathbf x+\int_{S_h}\delta fu_0\mathrm d\mathbf x
+\eta(\mathbf 0)v(\mathbf 0)\int_{\Gamma_h^{\pm}}\psi(\mathbf x)e^{\rho \cdot \mathbf x}\mathrm d\sigma\notag \\
&+\eta(\mathbf 0)\int_{\Gamma_h^{\pm}}\delta v u_0\mathrm d\sigma
+v(\mathbf 0)\int_{\Gamma_h^{\pm}}\delta \eta u_0\mathrm d\sigma
+\int_{\Gamma_h^\pm}\delta v\delta\eta u_0\mathrm d\sigma\notag \\
&-\int_{\Lambda_h}(w-v)\partial_\nu u_0\mathrm d\sigma+\int_{\Lambda_h}u_0\partial_\nu (w-v)\mathrm d\sigma. \label{eq:int 349}
\end{align}
By adopting a similar asymptotic analysis with respect to the parameter $\tau$ for each integral in \eqref{eq:int 349} as in the proof of Theorem \ref{thm:2D}, and letting $\tau \rightarrow \infty$, we can prove Theorem \ref{2D:delta}.
\end{proof}
\subsection{Proof of Corollary \ref{cor1} }\label{subsec:32}
Next, we give the proof of Corollary \ref{cor1} regarding the vanishing property of transmission eigenfunctions to \eqref{3eq:treta0} near a convex planar corner under the two regularity conditions described in Corollary \ref{cor1}. Since the proof of statement (b) in Corollary \ref{cor1} can be obtained by an asymptotic analysis similar to that for proving Corollary \ref{cor1} (a), we omit it here. In order to prove statement (a) in Corollary \ref{cor1}, we first give the following proposition, which is obtained by slightly modifying the proof of Proposition \ref{2prop:tau2}.
\begin{prop}\label{prop:tau2}
Let $S_{h}$ and $\rho$ be defined in (\ref{1eq:not}) and (\ref{1eq:eta}), respectively, where $ \mathbf d$ satisfies \eqref{2eq:zeta}. Then we have
\begin{equation}\label{1eq:tau}
\left\vert \int_{S_h}e^{\rho \cdot \mathbf x}\mathrm d \mathbf x\right \vert \geq \frac{\widetilde {C_{S_h }}}{\tau^2}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right),
\end{equation}
for sufficiently large $\tau$, where $\widetilde {C_{S_h }}$ is a positive number depending only on the opening angle $\theta_M-\theta_m$ of $\mathcal K$ and on $\zeta$.
\end{prop}
\begin{proof}
Using the polar coordinate transformation and \eqref{1eq:gamma} in Proposition \ref{1prop:gamma}, we have
\begin{equation}\notag
\begin{aligned}
\int_{S_h} e^{\rho \cdot \mathbf x}\mathrm d \mathbf x
&=\int_{\theta_{m}}^{\theta_M}\left[\frac{\Gamma(2)}{(\tau(\mathbf d+\mathrm i \mathbf d^\perp)\cdot \mathbf {\hat x} )^2}-I_{\sf R}\right]\mathrm d\theta\\
&=\frac{\Gamma(2)}{\tau^2}\int_{\theta_{m}}^{\theta_M}\frac{1}{\left(\mathbf d\cdot \mathbf{\hat x}+\mathrm i \mathbf{d}^\perp \cdot \mathbf{\hat x} \right)^2}\mathrm d \theta-\int_{\theta_{m}}^{\theta_M}I_{\sf{R}}\mathrm d\theta,
\end{aligned}
\end{equation}
where $I_{\sf R}= \int_{h}^{\infty}e^{-\tau (\mathbf d +\mathrm i\mathbf d^\perp)\cdot \hat{\mathbf x}r}r\mathrm d r$.
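The remainder term $I_{\sf R}$ decays exponentially in $\tau$: since $\mathbf d\cdot \hat{\mathbf x}\geq \zeta$ by \eqref{2eq:zeta}, a direct computation gives
\begin{equation}\notag
\vert I_{\sf R}\vert\leq \int_{h}^{\infty}e^{-\tau\zeta r}r\,\mathrm d r=e^{-\tau\zeta h}\left(\frac{h}{\tau\zeta}+\frac{1}{\tau^2\zeta^2}\right)\lesssim \frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau},
\end{equation}
which accounts for the exponentially small term in \eqref{1eq:eta0tau2}.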
Hence, it can be directly calculated that
\begin{equation}\notag
\int_{\theta_{m}}^{\theta_M}\frac{1}{\left(\mathbf d\cdot \mathbf{\hat x}+\mathrm i \mathbf{d}^\perp \cdot \mathbf{\hat x} \right)^2}\mathrm d \theta
\geq \frac{\theta_M-\theta_m }{2}
\end{equation}
by using the integral mean value theorem.
With the help of Proposition \ref{1prop:gamma}, for sufficiently large $\tau$, we have the following integral inequality
\begin{equation}\label{1eq:eta0tau2}
\begin{aligned}
\left\vert \int_{S_h} e^{\rho \cdot \mathbf x}\mathrm d \mathbf x \right\vert
&\geq \frac{\Gamma(2)(\theta_{M}-\theta_{m})}{\tau^2} \frac{1}{\left\vert \mathbf d\cdot \mathbf{\hat x}( \theta_{\xi})+ \mathrm i \mathbf{d}^\perp \cdot \mathbf{\hat x} (\theta_{\xi}) \right\vert^2}- \left\vert \int_{\theta_{m}}^{\theta_M} I_{\sf{R}} \mathrm d \theta \right\vert\\
&\geq \frac{\Gamma(2)(\theta_{M}-\theta_{m})}{\tau^2}\frac{1}{\left(\mathbf d\cdot \mathbf{\hat x}(\theta_{\xi}) \right)^2+ \left(\mathbf{d}^\perp \cdot \mathbf{\hat x}(\theta_{\xi}) \right)^2}-\int_{\theta_{m}}^{\theta_M} \vert I_{\sf{R}} \vert \mathrm d\theta\\
&\geq \frac{\widetilde {C_{S_h }}}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau},
\end{aligned}
\end{equation}
by using \eqref{2eq:zeta}.
\end{proof}
\begin{proof}[\bf {Proof of Corollary \ref{cor1}(a)}]\label{proof:holder2}
Similar to the proof of Theorem \ref{thm:2D}, we have the following integral identity according to \eqref{1eq:green} by noting $\eta \equiv 0$ on $\Gamma_h^\pm$,
\begin{equation}\label{2eq:eta0I}
k^2f_j(\mathbf 0)\int_{S_h}e^{\rho \cdot \mathbf x}\mathrm d \mathbf x=I_1+I_2+I_3+J_1+J_2,
\end{equation}
where
\begin{align*}
I_1&=-k^2\int_{S_h}(q-1)(v-v_j)u_0\mathrm d \mathbf x
,\quad I_2=-k^2\int_{S_h}\delta f_ju_0\mathrm d\mathbf x, \\
I_3&=-k^2f_j(\mathbf 0)\int_{S_h}e^{\rho \cdot \mathbf x}\psi(\mathbf x)\mathrm d\mathbf x,
\end{align*}
and $J_1$, $J_2$ are defined in \eqref{2eq:chaifen}.
By the Sobolev embedding theorem and $q\in H^2(\overline S_{h})$, we have $q\in C^\alpha(\overline S_{h} )$, where $\alpha =1$. Combining \eqref{2eq:eta0I} with \eqref{2eq:I1}, \eqref{2eq:I2} and \eqref{2eq:I33}, we can deduce that
\begin{equation}\label{2eq:fj0}
\begin{aligned}
k^2\left[ \frac{\widetilde{C_{S_h }}}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right] &\vert f_j(\mathbf 0)\vert\lesssim
j^{-\beta}\left[\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{3}} \right]\\
&+j^{\gamma }\left[\tau^{-(\alpha+\frac{29}{12})}+\left(\frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]\\
&+\tau^{-\frac{29}{12}}
+(1+\tau)(1+\tau^{-\frac{2}{3}})e^{-\zeta h\tau}
\end{aligned}
\end{equation}
as $\tau \to \infty$. Multiplying both sides of \eqref{2eq:fj0} by $\tau^2$, using the assumption \eqref{1eq:herg} and letting $\tau=j^s$, it is easy to see that
\begin{equation}\label{2eq:j}
k^2\widetilde {C_{S_h }}\vert f_j(\mathbf 0)\vert \lesssim j^{-\beta +s}+j^{\gamma - \alpha s}.
\end{equation}
Under the assumption $\gamma/\alpha<\beta$, we choose $s\in(\gamma/\alpha,\beta)$. Letting $j\to \infty$ in \eqref{2eq:j}, we obtain that
$$\lim_{j\to\infty}\vert f_j(\mathbf 0)\vert =0.$$
Since $q(\mathbf 0)\not =1$, we complete the proof of this corollary.
\end{proof}
\section{Vanishing of transmission eigenfunctions near a convex conic corner or polyhedral corner}\label{sec:3D}
In this section, we study the vanishing of transmission eigenfunctions near a corner in $\mathbb R^3$, where the corner may be either a convex conic corner or a convex polyhedral corner. Let us first introduce the corresponding geometrical setup for our study.
For a given point $\mathbf{x}_0\in\mathbb{R}^3$, let $\mathbf{v}_0=\mathbf y_0-\mathbf x_0$ where $\mathbf y_0\in\mathbb{R}^3$ is fixed. Hence
\begin{equation}\label{eq:cone1}
{\mathcal C}={\mathcal C}_{\mathbf{x}_0,\theta_0}:= \left\{\mathbf{y} \in \mathbb R^3 ~|~0\leqslant \angle(\mathbf y-\mathbf{x}_0,\mathbf{v}_0)\leqslant \theta_0\right \}\ (\theta_0 \in(0,\pi/2))
\end{equation}
is a strictly convex conic cone with apex $\mathbf x_0$ and opening angle $2\theta_0 \in(0,\pi)$ in $\mathbb R^3$. Here $\mathbf v_0$ is referred to as the axis of $\mathcal C_{\mathbf x_0,\theta_0}$. Specifically, when $\mathbf{x}_0=\mathbf 0$ and $\mathbf{v}_0=(0,0,1)^\top$, we write $\mathcal C_{\mathbf x_0,\theta_0 }$ as $\mathcal{C}_{\theta_0}$. Define the truncated conic cone $\mathcal C^{h}:=\mathcal C^{h} _{ \mathbf 0}$ as
\begin{equation}\label{eq:cone2}
\mathcal C^{h}:=\mathcal C_{\theta_0}\cap B_{h},\quad \Gamma_h=\partial \mathcal C\cap B_h,\quad \Lambda_h=\mathcal C \cap \partial B_h,
\end{equation}
where $B_{h}$ is an open ball centered at $\mathbf 0$ with the radius $h\in \mathbb R_+$.
Assume that $\mathcal K_{\mathbf x_0;\mathbf e_1,\ldots, \mathbf e_\ell}$ is a polyhedral cone with apex $\mathbf x_0$ and edges $\mathbf e_j$ ($j=1,\ldots, \ell$, $\ell\geq 3$). Throughout this paper we always suppose that $ \mathcal K_{\mathbf x_0;\mathbf e_1,\ldots, \mathbf e_\ell}$ is strictly convex, which implies that it can be fitted into a conic cone $\mathcal C_{\mathbf x_0, \theta_0}$ with opening angle $\theta_0\in (0,\pi/2)$, where $\mathcal C_{\mathbf x_0, \theta_0}$ is defined in \eqref{eq:cone1}. Without loss of generality, we assume that the axis of $\mathcal C_{\mathbf x_0, \theta_0}$ coincides with the positive $x_3$-axis and that $\mathbf x_0=\mathbf 0$. Given a constant $h\in \mathbb R_+$, we define the truncated polyhedral corner $\mathcal K_{\mathbf x_0}^{h}$ as
\begin{equation}\label{eq:kr0}
\mathcal K_{\mathbf x_0}^{h}=\mathcal K_{\mathbf{x_0}; {\mathbf e_1},\ldots {\mathbf e_\ell }}\cap B_{h}.
\end{equation}
For convenience, we adopt a geometric setup similar to \eqref{eq:cone2}:
\begin{equation}\label{eq:cone3}
\mathcal K^h=\mathcal K_{\mathbf 0}^h,\ \Gamma_h=\partial \mathcal K\cap B_h, \ \Lambda _h=\mathcal K\cap \partial B_h.
\end{equation}
The following theorem states that the transmission eigenfunctions to \eqref{0eq:tr} must vanish at a conic corner if they have $H^1$ regularity and $v$ can be approximated by a sequence of Herglotz wave functions with certain properties near the underlying conic corner; the detailed proof is postponed to Subsection \ref{subsec:41}.
\begin{thm}\label{3thm:v0}
Let $\Omega$ be a bounded Lipschitz domain with a connected complement and let $v,w\in H^1(\Omega)$ be a pair of transmission eigenfunctions to (\ref{0eq:tr}) associated with $k\in \mathbb R_+$. Assume that $\mathbf 0\in \Gamma \subset \partial \Omega$ such that $\Omega\cap B_h=\mathcal C\cap B_h=\mathcal C^h$, where $\mathcal C$ is defined by (\ref{eq:cone1}) and $h\in \mathbb R_+$ is sufficiently small such that $q\in H^2( \overline{\mathcal C^h})$ and $\eta\in C^{\alpha_1} (\overline{\Gamma_h})$, where $\alpha_1\in (0,1)$. If the following conditions are fulfilled:
\begin{itemize}
\item[(a)]for any given positive constants $\beta$ and $\gamma$ satisfying
\begin{equation}\label{eq:assump2}
\gamma<\frac{10}{11 }\alpha\beta,\quad \alpha=\min\{\alpha_1,1/2 \},
\end{equation}
the transmission eigenfunction $v$ can be approximated in $H^1(\mathcal C^h)$ by Herglotz functions
\begin{equation}\label{2eq:herg}
v_j=\int_{\mathbb S^2}e^{\mathrm ik\xi\cdot \mathbf x}g_j(\xi)\mathrm d\xi,\quad \xi\in \mathbb S^2, j=1,2,\cdots,
\end{equation}
with the kernels $g_j$ satisfying the approximation property
\begin{equation}\label{3eq:kernel}
\|v-v_j\|_{H^1(\mathcal C^h)}\leq j^{-\beta},\quad \|g_j\|_{L^2(\mathbb S^2)}\leq j^\gamma ;
\end{equation}
\item[(b)]the function $\eta$ does not vanish at the apex $\mathbf 0$ of $\mathcal C^h$;
\end{itemize}
then one has
\begin{equation}\label{3eq:delv0}
\lim_{\lambda\to \infty}\frac{1}{m(B(\mathbf 0,\lambda)\cap\Omega)}\int_{B(\mathbf 0,\lambda)\cap \Omega}\vert v(\mathbf x)\vert \mathrm d \mathbf x=0,
\end{equation}
where $m(B(\mathbf 0,\lambda)\cap \Omega)$ is the volume of $B(\mathbf 0,\lambda)\cap \Omega$.
\end{thm}
As remarked earlier, the Herglotz approximation property in \eqref{3eq:kernel} characterises a regularity of $v$ that is weaker than H\"older continuity (cf. \cite{LT}). In the following theorem, if a stronger H\"older regularity condition on the transmission eigenfunction $v$ to \eqref{0eq:tr} near a conic corner is satisfied, we also have the vanishing characterization of the corresponding transmission eigenfunction $v$. Namely, when $v$ is H\"older continuous near the underlying conic corner, we show that it must vanish at the apex of the corner. The proof can be obtained by directly modifying the corresponding proof of Theorem \ref{3thm:v0}, as in the two dimensional case, and is therefore omitted.
\begin{thm}\label{thm:3alpha}
Let $v\in H^1(\Omega)$ and $w\in H^{1}(\Omega)$ be a pair of transmission eigenfunctions to \eqref{0eq:tr} associated with $k\in \mathbb R_+$. Assume that the Lipschitz domain $\Omega\subset \mathbb R^3$ with $\mathbf 0 \in \partial \Omega$ contains a conic corner $\Omega\cap B_h=\mathcal C\cap B_h=\mathcal C^h$, such that $v\in C^{\alpha}(\overline{\mathcal C^h})$, $q\in H^2(\overline {\mathcal C^h})$ and $\eta \in C^\alpha (\overline{\Gamma _h}) $ for $0<\alpha <1$, where $B_h,\ \Gamma_h $ and $\mathcal C^h$ are defined in (\ref{eq:cone2}). If $\eta(\mathbf 0)\not=0$, where $\mathbf 0$ is the apex of $\mathcal C^h$,
then one has
\begin{equation}\label{3eq:v0}
v(\mathbf 0)=0.
\end{equation}
\end{thm}
Consider a cuboid corner $\mathcal K_{\mathbf x_0;\mathbf e_1, \mathbf e_2, \mathbf e_3}$ defined by \eqref{eq:kr0}. In Theorem \ref{3:cubiod}, we show that the transmission eigenfunctions to \eqref{0eq:tr} vanish at the cuboid corner $\mathcal K_{\mathbf x_0;\mathbf e_1, \mathbf e_2, \mathbf e_3}$ when they are H\"older continuous at the corner point. The proof of Theorem \ref{3:cubiod} can be found in Subsection \ref{subsec:42}. Since $\Delta$ is invariant under rigid motions, we assume that the apex $\mathbf x_0$ of $\mathcal K_{\mathbf x_0;\mathbf e_1, \mathbf e_2, \mathbf e_3}$ coincides with the origin, and that the edges of $\mathcal K_{\mathbf x_0;\mathbf e_1, \mathbf e_2, \mathbf e_3}$ satisfy $\mathbf e_1=(1,0,0)^\top $, $\mathbf e_2=(0,1,0)^\top $ and $\mathbf e_3=(0,0,1)^\top $.
\begin{thm}\label{3:cubiod}
Let $v\in H^1(\Omega),\ w\in H^1(\Omega)$ be a pair of transmission eigenfunctions of \eqref{0eq:tr} with $k>0$. Assume that the Lipschitz domain $\Omega \subset \mathbb R^3$ with $\mathbf 0 \in \Gamma \subset \partial \Omega$ contains a cuboid corner $\Omega \cap B_h =\mathcal K \cap B_h =\mathcal K^h$, such that $v\in C^\alpha (\overline{\mathcal K^h}) $, $q\in H^2(\overline{\mathcal K^h})$ and $\eta \in C^\alpha (\overline{\Gamma _h})$ for $0<\alpha < 1$, where $\mathcal K^h$ and $\Gamma _h$ are defined in \eqref{eq:kr0}. If $\eta (\mathbf 0)\not=0$, then
$$v(\mathbf 0)=0.$$
\end{thm}
\begin{rem}
Consider the classical transmission eigenvalue problem \eqref{3eq:treta0} in $\mathbb R^3$, namely $\eta\equiv 0$ on $\Gamma$ in \eqref{0eq:tr}. When the underlying domain $\Omega$ of \eqref{3eq:treta0} has a cuboid corner $\mathcal K_{\mathbf x_0;\mathbf e_1, \mathbf e_2, \mathbf e_3}$, if the corresponding potential $q$ is $\alpha$-H\"older continuous for $\alpha >\frac{1}{4}$ near the cuboid corner (cf. \cite[Definition 2.2 and Theorem 3.2]{BL2017}), then the transmission eigenfunction $v$ must vanish near the corner. Compared with the results in \cite{BL2017}, the vanishing property of transmission eigenfunctions to \eqref{0eq:tr} near the underlying cuboid corner holds under a more general scenario. Namely, the assumption in Theorem \ref{3:cubiod} only requires that $q$ has $H^2$ regularity and that $v$ and the boundary parameter $\eta$ are H\"older continuous near $\mathbf x_0$ with $\eta(\mathbf x_0) \neq 0$.
\end{rem}
In the following two corollaries, we consider the classical transmission eigenvalue problem \eqref{3eq:treta0}, namely $\eta\equiv 0$ on $\Gamma$ in \eqref{0eq:tr}, where the domain $\Omega$ contains a conic or polyhedral corner. The proof of Corollary \ref{cor2} is postponed to Subsection \ref{subsec:43}.
\begin{cor}\label{cor2}
Let $\Omega$ be a bounded Lipschitz domain with a connected complement and let $v,w\in H^1(\Omega)$ be a pair of transmission eigenfunctions to \eqref{3eq:treta0} associated with $k\in \mathbb R_+$. Assume that $\mathbf 0\in \Gamma\subset \partial \Omega$ such that $\Omega\cap B_h=\mathcal C\cap B_h=\mathcal C^h$, where $\mathcal C$ is defined by (\ref{eq:cone1}) and $h\in \mathbb R_+$ is sufficiently small such that $q\in H^2( \overline{\mathcal C^h})$ and $q(\mathbf 0)\neq 1$.
\begin{itemize}
\item[(a)]For any given positive constants $\beta$ and $\gamma$ satisfying
$
\gamma <\frac{20}{37}\alpha\beta,
$
if the transmission eigenfunction $v$ can be approximated in $H^1(\mathcal C^h)$ by Herglotz wave functions $v_j$ defined by \eqref{2eq:herg}
with the kernels $g_j$ satisfying the approximation property
\eqref{3eq:kernel}, then we have the vanishing of the transmission eigenfunction $v$ near $\mathcal C^h$ in the sense of \eqref{3eq:delv0}.
\item[(b)]If $v\in C^\alpha (\overline{\mathcal C ^h})$ with $\alpha\in (0,1)$, then one has $v(\mathbf 0)=0$.
\end{itemize}
\end{cor}
In the subsequent corollary, we consider the case where $\Omega$ contains a polyhedral corner $\mathcal K^{h}$ defined by \eqref{eq:kr0}. When the transmission eigenfunction $v$ to \eqref{3eq:treta0} satisfies the two regularity assumptions, we can establish a similar geometrical characterization of $v$ near the polyhedral corner. The proofs are similar to those of Theorem \ref{thm:3alpha} and Corollary \ref{cor2}, where we only need to use the asymptotic analysis in \cite[Lemma 2.2]{BLX2020} with respect to the parameter of the corresponding CGO solution introduced in the following subsection. Hence we omit the proof.
\begin{cor}\label{thm:poly}
Let $\Omega$ be a bounded Lipschitz domain with a connected complement and let $v,w\in H^1(\Omega)$ be a pair of transmission eigenfunctions to \eqref{3eq:treta0} associated with $k\in \mathbb R_+$. Assume that $\mathbf 0\in \Gamma \subset \partial \Omega$ such that $\Omega\cap B_h=\mathcal K_{\mathbf 0;\mathbf e_1,\cdots,\mathbf e_{\ell}}\cap B_{h}=\mathcal K^{h}$, where $\mathcal K^{h}$ is defined by \eqref{eq:kr0} and $h\in \mathbb R_+$ is sufficiently small such that $q\in H^2( \overline{\mathcal K^h})$ and $q(\mathbf 0)\neq 1$.
\begin{itemize}
\item[(a)]
For any given positive constants $\beta$ and $\gamma$ satisfying $\gamma <\frac{20}{37}\alpha\beta$, if
the transmission eigenfunction $v$ can be approximated in $H^1(\mathcal K^h)$ by Herglotz wave functions $v_j$ defined by \eqref{2eq:herg}
with the kernels $g_j$ satisfying the approximation property \eqref{3eq:kernel}, then we have the vanishing property of $v$ near $\mathcal K^{h}$ in the sense of \eqref{3eq:delv0}.
\item[(b)] If $v\in C^{\alpha}(\overline{ \mathcal K^{h}})$ with $\alpha\in (0,1)$, then one has $v(\mathbf 0)=0$.
\end{itemize}
\end{cor}
\subsection{Proof of Theorem \ref{3thm:v0}}\label{subsec:41}
Since the conic cone $\mathcal C$ defined by \eqref{eq:cone1} is strictly convex, for any given positive constant $\zeta$, we define $\mathcal C_\zeta$ as the open subset of $\mathbb S^2$ composed of all unit directions $\mathbf d\in \mathbb S^2$ satisfying
\begin{equation}\label{3eq:3zeta}
\mathbf d\cdot \mathbf {\hat{x}}>\zeta>0\quad \mbox{for all}\ \mathbf {\hat{x}}\in \mathcal C\cap \mathbb S^2.
\end{equation}
Throughout this subsection, we always assume that the unit vector $\mathbf d$ in the CGO solution $u_0$ given by (\ref{1eq:cgo}) satisfies (\ref{3eq:3zeta}). In order to prove Theorem \ref{3thm:v0}, we need several key propositions and lemmas.
\begin{prop}\label{3prop:eta}
Let $\Gamma_{h}$ and $\rho$ be defined in (\ref{eq:cone2}) and (\ref{1eq:eta}), respectively. Then we have
\begin{equation}\label{3eq:eta}
\left\vert \int_{\Gamma_{h}} e^{\rho \cdot \mathbf x}\mathrm d\sigma\right\vert \geq \frac{C_{\mathcal C^h}}{\tau^2}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right),
\end{equation}
for sufficiently large $\tau$, where $C_{\mathcal C^h}$ is a positive number depending only on the opening angle $\theta_0$ of $\mathcal C$ and on $\zeta$.
\end{prop}
\begin{proof}
Using the polar coordinate transformation and the mean value theorem for integrals, we have
\begin{equation}\label{eq:estconic}
\begin{aligned}
\int_{\Gamma_h}e^{\rho \cdot \mathbf x}\mathrm d\sigma
&=\sin\theta_0\frac{2\pi\Gamma(2)}{\tau^2}\frac{1}{((\mathbf d+\mathrm i\mathbf d^{\perp})\cdot \mathbf{ \hat{x}}(\theta_0,\varphi_\xi))^2}-\sin\theta_0\int_{0}^{2\pi}I_{R}\mathrm d\varphi,
\end{aligned}
\end{equation}
where $I_{R}= \int_{h}^{\infty}e^{-\tau (\mathbf d +\mathrm i\mathbf d^\perp)\cdot \hat{\mathbf x}r}r\mathrm d r$.
Furthermore, for sufficiently large $\tau$, it is readily seen that
$$\frac{1}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}>0.$$
Hence, by virtue of \eqref{3eq:3zeta} and Proposition \ref{1prop:gamma}, we have the following integral inequality
\begin{equation}\notag
\begin{aligned}
\left\vert \int_{\Gamma _h}e^{\rho \cdot \mathbf x}\mathrm d\sigma \right \vert
&\geq \sin\theta_0 \frac{2\pi \Gamma (2)}{\tau^2}\frac{1}{(\mathbf d\cdot \mathbf{\hat x}(\theta_{0},\varphi_{\xi}))^2+(\mathbf d^\perp \cdot \mathbf{\hat x}(\theta_{0},\varphi_\xi))^2}-\sin\theta_0\int_{0}^{2\pi}|I_R|\mathrm d\varphi\\
&\geq \frac{C_{\mathcal C^h}}{\tau^2}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right),
\end{aligned}
\end{equation}
which completes the proof of this proposition.
\end{proof}
Similar to Proposition \ref{1prop:enorm}, the following proposition can be obtained by direct verification.
\begin{prop}\label{2prop:enorm}
Let $\mathcal C^h$ be defined by \eqref{eq:cone2}. For any given $t>0$, it yields that
\begin{equation}\label{2eq:enorm}
\begin{aligned}
\| e^{\rho \cdot \mathbf {x}} \|_{L^t(\mathcal C^h)} &\leq C \left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{t}{2}\zeta h\tau}\right)^{\frac{1}{t}},\\
\| e^{\rho \cdot \mathbf {x}} \|_{L^t(\Gamma _h)} &\leq C \left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\frac{t}{2}\zeta h\tau}\right)^{\frac{1}{t}},
\end{aligned}
\end{equation}
as $\tau\rightarrow \infty$, where $\rho$ is defined in (\ref{1eq:eta}) and $C$ is a positive constant only depending on $t,\zeta$.
\end{prop}
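For instance, the first estimate in \eqref{2eq:enorm} can be verified as follows (a sketch; recall that $\vert e^{\rho\cdot \mathbf x}\vert=e^{-\tau \mathbf d\cdot \mathbf x}$ and $\mathbf d$ satisfies \eqref{3eq:3zeta}). Using spherical coordinates,
\begin{equation}\notag
\int_{\mathcal C^h}\vert e^{\rho\cdot \mathbf x}\vert^{t}\mathrm d\mathbf x=\int_{\mathcal C^h}e^{-t\tau \mathbf d\cdot \mathbf x}\mathrm d\mathbf x\leq \int_{\mathcal C\cap \mathbb S^2}\int_{0}^{h}e^{-t\tau\zeta r}r^2\,\mathrm d r\,\mathrm d\sigma\lesssim \frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{t}{2}\zeta h\tau},
\end{equation}
and taking the $t$-th root yields the first inequality in \eqref{2eq:enorm}; the surface estimate on $\Gamma_h$ follows in a similar manner.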
\begin{lem}
Under the same setup of Theorem \ref{3thm:v0}, let the CGO solution $u_0$ be defined by (\ref{1eq:cgo}). We also denote $u=w-v$, where $(v,w)$ is a pair of transmission eigenfunctions of \eqref{0eq:tr} associated with $k \in \mathbb R_+$. Then it holds that
\begin{equation}\label{eq:pde sys 3}
\begin{cases}
&\Delta u_0 +k^2qu_0=0\hspace*{1.7cm}\mbox{in}\quad {\mathcal C}^h,\\
&\Delta u +k^2qu=k^2(1-q)v \hspace*{0.5cm} \mbox{in}\quad {\mathcal C}^h,\\
&u=0,\quad \partial_{\nu}u=0 \hspace*{1.7cm} \mbox{on}\quad \Gamma_h,
\end{cases}
\end{equation}
where $\mathcal C^h$ and $\Gamma_h$ are defined by \eqref{eq:cone2},
and
\begin{equation}\label{3eq:psinorm}
\| \psi(\mathbf x) \|_{H^{1,8}}
= \mathcal O(\tau^{-\frac{2}{5}}),
\end{equation}
where $\psi$ and $\tau$ are defined in \eqref{1eq:cgo}.
\end{lem}
\begin{proof}
The proof of this lemma is similar to that of Lemma \ref{lem:31}. Since $q\in H^2(\mathcal C^h)$, by the Sobolev extension property there exists a Sobolev extension $\tilde q\in H^{2}(\mathbb R^3)$ of $q$; hence $\tilde{q}\in H^{1,1+\epsilon_0}$.
Therefore $\tilde q$ satisfies the assumption of Proposition \ref{1pro:nor}. Taking $\tilde p=\frac{120}{79}$ and $\epsilon_0=\frac{7}{8}$, we obtain $p=8$ from (\ref{eq:p til cond}), and then (\ref{3eq:psinorm}) follows from (\ref{1nor:psi}).
\end{proof}
\begin{lem}\label{3lem:u0est}
Let $\Lambda_h$ and $\mathcal C^h$ be defined in (\ref{eq:cone2}). Recall that $u_0(\mathbf x)$ is given by (\ref{1eq:cgo}). Then $u_0(\mathbf x)\in H^1({\mathcal C}^h)$ and it holds that
\begin{subequations}
\begin{align}
\|u_0(\mathbf x)\|_{L^2(\Lambda_h)}&\lesssim \left(1+\tau^{-\frac{2}{5}}\right)e^{-\zeta h\tau},\label{3eq:u0lambda} \\
\label{3eq:nablau0}
\|\nabla u_0(\mathbf x)\|_{L^2(\Lambda_h)}&\lesssim (1+\tau) \left(1+\tau^{-\frac{2}{5}}\right)e^{-\zeta h\tau},\\
\label{3eq:u0alpha}
\int_{\mathcal C^h}\vert \mathbf x\vert^{\alpha}\vert u_0(\mathbf x)\vert\,\mathrm d\mathbf x
&\lesssim \tau^{-(\alpha+\frac{121}{40})}+\left(\frac{1}{\tau^{\alpha+3}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),\\
\int_{\Gamma _h} \vert \mathbf x\vert ^\alpha \vert u_0(\mathbf x)\vert \,\mathrm d \sigma
&\lesssim \tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\label{3eq:u0alphagam}
\end{align}
\end{subequations}
as $\tau \rightarrow \infty$, where $\zeta$ is defined in \eqref{2eq:zeta} and $\alpha \in (0,1)$.
\end{lem}
\begin{proof}
Using (\ref{3eq:psinorm}) and the trace theorem (Lemma \ref{lem:trace thm}), we deduce that
\begin{equation}\notag
\|\psi(\mathbf x)\|_{L^{4}(\Lambda_h)}
\leq C \|\psi(\mathbf x)\|_{H^{\frac{7}{8},8}(\Lambda_h)}\leq C\|\psi(\mathbf x)\|_{H^{1,8}(\mathcal C^h)}= O(\tau^{-\frac{2}{5}}),
\end{equation}
as $\tau \rightarrow \infty$.
By using the polar coordinate transformation, (\ref{1eq:Ir}) and (\ref{3eq:3zeta}), one can derive that
\begin{equation}\label{3eq:etabound}
\|e^{\rho \cdot \mathbf x}\|_{L^t(\Lambda_h)}\lesssim e^{-\zeta h\tau},
\end{equation}
where $\rho$ is defined in (\ref{1eq:eta}) and $t$ is a positive constant.
Due to the polar coordinate transformation, (\ref{3eq:etabound}), \eqref{3eq:psinorm} and the H\"older inequality, it can be calculated that
\begin{equation}\label{3eq:I42}
\begin{aligned}
\|u_{0}\|_{ L^{2}(\Lambda_{h})}
&\leq\|e^{\rho \cdot \mathbf x}\|_{ L^{2}(\Lambda_{h})}+\|e^{\rho \cdot \mathbf x}\|_{ L^{4}(\Lambda_{h})}\|\psi(\mathbf x)\|_{ L^{4}(\Lambda_{h})}\\
&\lesssim \left(1+\tau^{-\frac{2}{5}}\right)e^{-\zeta h\tau}, \quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
By virtue of the Cauchy-Schwarz inequality, (\ref{3eq:u0lambda}) and Proposition \ref{1prop:gamma}, we can deduce that
\begin{equation}\label{3eq:I43}
\begin{aligned}
\|\nabla u_{0}\|_{L^{2}(\Lambda_{h})}
&\leq \sqrt{2}\tau \|u_{0}\|_{ L^{2}(\Lambda_{h})}+\|e^{\rho \cdot \mathbf x}\|_{ L^{4}(\Lambda_{h})} \|\nabla \psi(\mathbf x)\|_{ L^{4}(\Lambda_{h})}\\
&\lesssim(1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau},\quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
It is clear that we can get the following integral inequality,
\begin{equation}\label{3eq:xalpha}
\begin{aligned}
\int_{\mathcal C^h}\vert \mathbf x\vert^{\alpha}\vert u_{0} \vert\mathrm{d}\mathbf x
&\leq\int_{\mathcal C^h}\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}} \vert \mathrm{d}\mathbf x+\int_{\mathcal C^h}(\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}}\vert)(\vert \psi(\mathbf x)\vert)\mathrm{d}\mathbf x.
\end{aligned}
\end{equation}
By virtue of the polar coordinate transformation and Proposition \ref{1prop:gamma}, one arrives at
\begin{equation}\label{3eq:I22}
\begin{aligned}
\int_{\mathcal C^h}\vert\mathbf x\vert^{\alpha}e^{-\zeta \tau \vert \mathbf x\vert}\mathrm{d}\mathbf x
\lesssim \frac{1}{\tau^{\alpha+3}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau },
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$. Next, letting $\mathbf y=\tau \mathbf x$ and using the Cauchy-Schwarz and H\"older inequalities, one arrives at
\begin{align}
\int_{\mathcal C^h}(\vert \mathbf x\vert^{\alpha}\vert e^{\rho \cdot \mathbf{x}}\vert)(\vert \psi(\mathbf x)\vert)\mathrm{d}\mathbf x
&\leq
\frac{1}{\tau^{\alpha +3}}\int_{\mathcal C}\vert \mathbf y\vert^\alpha\vert e^{-\mathbf d\cdot \mathbf y}\vert \left\vert \psi\left(\frac{\mathbf y}{\tau}\right)\right\vert \mathrm d\mathbf y\notag \\
&\leq\frac{1}{\tau^{\alpha +3}}\|\vert \mathbf y\vert^\alpha\vert e^{-\mathbf d\cdot \mathbf y}\vert \|_{L^{\frac{8}{7}}(\mathcal C)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^{8}(\mathcal C)}, \label{3eq:u0C}
\end{align}
as $\tau \rightarrow \infty$. Using the change of variables and \eqref{3eq:psinorm}, one arrives at
\begin{equation}\label{3eq:psiL8C}
\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\mathcal C)} =\tau^{\frac{3}{8}}\|\psi(\mathbf x)\|_{L^8(\mathcal C)}\leq\tau^{\frac{3}{8}}\|\psi(\mathbf x)\|_{H^{1,8}(\mathcal C)} =\mathcal O(\tau^{-\frac{1}{40}}),
\end{equation}
as $\tau \rightarrow \infty$.
Furthermore, one has
\begin{equation}\label{3eq:edy}
\|\vert \mathbf y\vert^\alpha \vert e^{\rho \cdot \mathbf x}\vert \|_{L^{\frac{8}{7}}(\mathcal C)}
=\left( \int_{\mathcal C}\vert \mathbf y\vert ^{\frac{8}{7}\alpha}e^{-\frac{8}{7}\mathbf d\cdot \mathbf y}\mathrm d\mathbf y\right)^{\frac{7}{8}}
\leq\left( \int_{\mathcal C}\vert \mathbf y\vert ^{\frac{8}{7}\alpha}e^{-\frac{8}{7}\zeta\vert \mathbf y\vert}\mathrm d\mathbf y \right)^\frac{7}{8}\leq \frac{C}{\zeta^{3+\frac{8}{7}\alpha}},
\end{equation}
where $C=2\pi\theta_0 \Gamma(3+\frac{8}{7}\alpha)(\frac{7}{8})^{3+\frac{8}{7}\alpha}$. Hence, $\|\vert \mathbf y\vert^\alpha \vert e^{\rho \cdot \mathbf x}\vert \|_{L^{\frac{8}{7}}(\mathcal C)}$ is bounded by a positive constant depending only on $\theta_0$, $\zeta$ and $\alpha$.
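For completeness, the computation behind \eqref{3eq:edy} is the standard Gamma-function integral in polar coordinates; the following sketch assumes, consistently with the notation of Proposition \ref{1prop:gamma}, that the angular measure of $\mathcal C$ is bounded by $2\pi\theta_0$:

```latex
\begin{equation}\notag
\int_{\mathcal C}\vert \mathbf y\vert ^{\frac{8}{7}\alpha}e^{-\frac{8}{7}\zeta\vert \mathbf y\vert}\,\mathrm d\mathbf y
\leq 2\pi\theta_0\int_{0}^{\infty}r^{2+\frac{8}{7}\alpha}e^{-\frac{8}{7}\zeta r}\,\mathrm d r
=2\pi\theta_0\,\frac{\Gamma\big(3+\frac{8}{7}\alpha\big)}{\big(\frac{8}{7}\zeta\big)^{3+\frac{8}{7}\alpha}},
\end{equation}
```

and taking the power $\frac{7}{8}$ then yields a finite bound depending only on $\theta_0$, $\zeta$ and $\alpha$, in agreement with \eqref{3eq:edy}.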
Combining \eqref{3eq:psiL8C}, \eqref{3eq:u0C} and \eqref{3eq:I22} with \eqref{3eq:xalpha}, one has \eqref{3eq:u0alpha}.
Furthermore, we have
\begin{equation}\label{3eq:u0alhagam1}
\int_{\Gamma_h}\vert \mathbf x\vert ^\alpha \vert u_0\vert \mathrm d \sigma
\leq \int_{\Gamma_h}\vert \mathbf x\vert ^\alpha \vert e^{\rho \cdot \mathbf x}\vert \mathrm d\sigma
+\int_{\Gamma_h}\vert \mathbf x\vert ^\alpha \vert e^{\rho \cdot \mathbf x}\vert\vert \psi(\mathbf x)\vert \mathrm d\sigma,
\end{equation}
and \eqref{3eq:xalphagam} follows readily from the polar coordinate transformation and Proposition \ref{1prop:gamma}:
\begin{equation}\label{3eq:xalphagam}
\int_{\Gamma_h}\vert \mathbf x\vert ^\alpha \vert e^{\rho \cdot \mathbf x}\vert \mathrm d\sigma \lesssim \frac{1}{\tau^{\alpha +2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau},
\end{equation}
as $\tau \rightarrow \infty$.
Then, letting $\mathbf y=\tau\mathbf x$ and utilizing the H\"older inequality, it can be obtained that
\begin{align}
\int_{\Gamma_h}\vert \mathbf x\vert ^\alpha\vert e^{\rho \cdot \mathbf x}\vert \vert \psi(\mathbf x)\vert \mathrm d\sigma
&\leq
\frac{1}{\tau^{\alpha +2}}\int_{\partial \mathcal C}\vert \mathbf y \vert ^\alpha \vert e^{-\mathbf d \cdot \mathbf y}\vert \left \vert \psi \left(\frac{\mathbf y}{\tau}\right)\right \vert \mathrm d \sigma \notag \\
&\leq \frac{1}{\tau^{\alpha +2}} \|\vert \mathbf y \vert ^\alpha \vert e^{-\mathbf d\cdot \mathbf y}\vert \|_{L^{\frac{8}{7}}(\partial \mathcal C)}\left\| \psi \left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\partial \mathcal C)}. \label{3eq:xalphagam1}
\end{align}
Similar to \eqref{3eq:edy}, we know that $\|\vert \mathbf y \vert ^\alpha \vert e^{-\mathbf d\cdot \mathbf y}\vert \|_{L^{\frac{8}{7}}(\partial \mathcal C)}$ is bounded by a positive constant. By virtue of the change of variables, the trace theorem and \eqref{3eq:psinorm}, one arrives at
\begin{equation}\label{3eq:psigam}
\left\| \psi \left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\partial \mathcal C)}
\lesssim \tau^{\frac{1}{4}}\|\psi(\mathbf x)\|_{H^{\frac{7}{8},8}(\partial \mathcal C)}\lesssim \tau^{\frac{1}{4}}\|\psi(\mathbf x)\|_{H^{1,8}( \mathcal C)}\lesssim \tau^{-\frac{3}{20}},
\end{equation}
as $\tau \rightarrow \infty$.
Combining \eqref{3eq:xalphagam}, \eqref{3eq:xalphagam1} and \eqref{3eq:psigam} with \eqref{3eq:u0alhagam1}, one has \eqref{3eq:u0alphagam}.
\end{proof}
Now we are in a position to prove Theorem \ref{3thm:v0}.
\begin{proof}[\bf{Proof of Theorem \ref{3thm:v0}}]
The proof of this theorem is similar to that of Theorem \ref{thm:2D}. Recall that $(v,w)$ is a pair of transmission eigenfunctions to \eqref{0eq:tr}.
Using Green's formula \eqref{2eq:2green} and the boundary conditions in \eqref{eq:pde sys 3}, the following integral identity holds:
\begin{equation}\label{3eq:green}
\int_{\mathcal C^h}k^2(q-1)vu_0\mathrm d\mathbf x=\int_{\Lambda _h}\big((w-v)\partial_\nu u_0-u_0\partial_\nu(w-v)\big)\mathrm d\sigma-\int_{\Gamma_h}\eta u_0v\mathrm d\sigma,
\end{equation}
where $\mathcal C^h$, $\Lambda_h$ and $\Gamma_h$ are defined by \eqref{eq:cone2}. Let
$$
f_{j}=(q-1)v_j.
$$
Since $q\in H^2(\overline{\mathcal C^h})$, we know that $q\in C^{1/2}(\overline{ \mathcal C^h})$ by the Sobolev embedding theorem.
Recall that $\eta \in C^{\alpha_1}(\overline{ \Gamma_h})$. Let $\alpha=\min\{ \alpha_1,1/2 \}$. Furthermore, since the Herglotz wave function $v_j\in C^{\alpha}(\overline{ \mathcal C^h})$, it follows that $f_j\in C^\alpha(\overline{ \mathcal C^h})$. Hence one has the expansions
\begin{equation}\label{3eq:fj}
\begin{aligned}
f_{j}&=f_j(\mathbf 0)+\delta f_j,\ \vert \delta f_{j}\vert\leq \| f_{j}\|_{C^{\alpha}(\overline{\mathcal C^h })}\vert \mathbf{x} \vert^{\alpha},\\
v_{j}&=v_j(\mathbf 0)+\delta v_j,\ \vert \delta v_j\vert \leq\| v_j\|_{C^\alpha(\overline{\mathcal C^h })}\vert\mathbf x\vert ^\alpha,\\
\eta&=\eta(\mathbf 0)+\delta \eta,\ \vert\delta\eta\vert\leq\|\eta\|_{C^\alpha(\overline{\Gamma_h})}\vert\mathbf x\vert^\alpha.
\end{aligned}
\end{equation}
By virtue of (\ref{3eq:fj}), we have the following integral identities
\begin{equation}\label{3eq:vu0}
k^2\int_{\mathcal C^h}(q-1)vu_0\mathrm d\mathbf x=-\sum_{m=1}^3I_m,\quad
\int_{\Gamma_h}\eta u_0v\mathrm d\sigma=I-\sum_{m=4}^9I_m,
\end{equation}
where
\begin{equation}\label{3eq:chaifen}
\begin{aligned}
I_1&=-k^2\int_{\mathcal C^h}(q-1)(v-v_j)u_0\mathrm d\mathbf x,\quad I_2=-\int_{\mathcal C^h}\delta f_j u_0\mathrm d\mathbf x,\\
I_3&=-f_j(\mathbf 0)\int_{\mathcal C^h}u_0\mathrm d\mathbf x,\quad
I_4=-\eta(\mathbf 0)\int_{\Gamma _h}(v-v_j)u_0\mathrm d \sigma,\\
I_5&=-\int_{\Gamma_h}\delta \eta(v-v_j)u_0\mathrm d\sigma,\quad I_6=-\eta(\mathbf 0)v_j(\mathbf 0)\int_{\Gamma_h}e^{\rho \cdot \mathbf x}\psi(\mathbf x)\mathrm d\sigma, \\
I_7&=\eta(\mathbf 0)\int_{\Gamma_h}\delta v_ju_0\mathrm d\sigma,\quad I_8=-v_j(\mathbf 0)\int_{\Gamma_h}\delta \eta u_0\mathrm d\sigma,\\
I_9&= \int_{\Gamma_h}\delta \eta \delta v_ju_0\mathrm d\sigma ,\quad I=\eta(\mathbf 0)v_j(\mathbf 0)\int_{\Gamma_h}e^{\rho\cdot \mathbf x}\mathrm d\sigma.
\end{aligned}
\end{equation}
Substituting (\ref{3eq:vu0}) into (\ref{3eq:green}), it yields that
$$
I=\sum_{m=1}^9I_m+J_1+J_2,
$$
where
\begin{equation}\label{3eq:I4I5}
J_1=\int_{\Lambda_{h}}u_{0}\partial_{\nu}(w-v)\mathrm d\sigma,\quad
J_2=-\int_{\Lambda_{h}}(w-v)\partial_{\nu}u_{0}\mathrm d\sigma.
\end{equation}
Hence, it readily yields that
\begin{equation}\label{3eq:I}
\vert I\vert\leq \sum_{m=1}^9\vert I_m\vert+\vert J_1\vert+\vert J_2\vert.
\end{equation}
In the sequel, we derive the asymptotic estimates of $I_m$ $(m=1,\ldots,9)$ and $J_m$ $(m=1,2)$ with respect to the parameter $\tau$ in the CGO solution $u_0$ as $\tau \rightarrow \infty$.
Using H\"older inequality, Proposition \ref{2prop:enorm} and (\ref{3eq:psinorm}), it is clear that
\begin{equation}\label{3eq:I11}
\begin{aligned}
\vert I_1\vert
&\leq \|v-v_j\|_{L^2(\mathcal C^h)}\|e^{\rho\cdot \mathbf x}\|_{L^2(\mathcal C^h)}+\|v-v_j\|_{L^2(\mathcal C^h)}\|e^{\rho\cdot \mathbf x}\psi(\mathbf x)\|_{L^2(\mathcal C^h)}\\
&\lesssim j^{-\beta}\left[\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^\frac{1}{4}\tau^{-\frac{2}{5}} \right],
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$.
With the help of (\ref{3eq:fj}), we have
\begin{equation}\label{3eq:I21}
\vert I_{2} \vert \leq k^{2}\|f_{j} \|_{C^{\alpha}}\int_{\mathcal C^h}\vert \mathbf x\vert^{\alpha}\vert u_{0} \vert\mathrm d\mathbf x,
\end{equation}
and
\begin{equation}\label{3eq:fj1}
\begin{aligned}
\|f_{j}\|_{C^{\alpha}(\mathcal C^h )}&
\leq \|q\|_{C^{\alpha}(\mathcal C^h )}\sup_{\mathcal C^h }\vert v_{j}\vert+\|v_{j}\|_{C^{\alpha}(\mathcal C^h )}\sup_{\mathcal C^h }\vert q-1\vert.
\end{aligned}
\end{equation}
Moreover, by the embedding properties of H\"older spaces, one has
\begin{equation}\label{3eq:calpha}
\| v_{j}\|_{C^{\alpha}(\mathcal C^h )}\leq \mathrm{ diam}\left(\mathcal C^h\right)^{1-\alpha}\|v_{j}\|_{C^{1}(\mathcal C^h )},
\end{equation}
where diam($\mathcal C^h$) is the diameter of $\mathcal C^h$. It can be directly shown that
\begin{equation}\label{3eq:vjc1}
\|v_{j}\|_{C^{1}(\mathcal C^h )}\leq 4\sqrt{\pi}(1+k)\|g\|_{L^{2}(\mathbb S^{2})}.
\end{equation}
On the other hand, we can obtain the following estimate by using the Cauchy-Schwarz inequality,
\begin{equation}\label{3eq:vj}
\vert v_{j}\vert\leq 4\sqrt{\pi}\|g\|_{L^{2}(\mathbb S^{2})}.
\end{equation}
Using (\ref{3eq:kernel}) and $q\in C^{\alpha}(\overline {\mathcal C^h})$,
and plugging (\ref{3eq:calpha}), (\ref{3eq:vjc1}) and (\ref{3eq:vj}) into (\ref{3eq:fj1}), one can arrive at
\begin{equation}\label{3eq:fjc}
\|f_{j}\|_{C^{\alpha}(\mathcal C^h)}\lesssim j^{\gamma},
\end{equation}
where $\gamma$ is a given positive constant defined in (\ref{3eq:kernel}). Substituting (\ref{3eq:u0alpha}) and (\ref{3eq:fjc}) into (\ref{3eq:I21}), we obtain
\begin{equation}\label{3eq:I2}
\vert I_{2} \vert \lesssim j^{\gamma}\left[\tau^{-(\alpha+\frac{121}{40})}+\left(\frac{1}{\tau^{\alpha+3}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]
\end{equation}
as $\tau \to \infty$.
With the help of Cauchy-Schwarz inequality and (\ref{3eq:kernel}), it yields that
\begin{equation}\label{3eq:I31}
\vert I_{3} \vert
\leq \int_{\mathcal C^h} \vert e^{\rho\cdot \mathbf x}\vert \mathrm d\mathbf x+\int_{\mathcal C}\vert e^{\rho\cdot \mathbf x}\vert\vert\psi(\mathbf x)\vert\mathrm d\mathbf x.
\end{equation}
Similar to \eqref{3eq:edy}, we have that $\|e^{-\mathbf d\cdot \mathbf y}\|_{L^\frac{8}{7}(\mathcal C)}$ is bounded by a positive constant depending only on $\zeta$ and $\theta_0$. Letting $\mathbf y=\tau \mathbf x$ and using \eqref{3eq:psiL8C}, it can be calculated that
\begin{equation}\label{3eq:psiC}
\begin{aligned}
\int_{\mathcal C}\vert e^{\rho\cdot \mathbf x}\vert\vert \psi(\mathbf x)\vert\mathrm d\mathbf x
\leq\frac{1}{\tau^3}\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\mathcal C)}\left \|\psi\left (\frac{\mathbf y}{\tau}\right )\right \|_{L^8(\mathcal C)}
\lesssim \tau^{-\frac{121}{40}},
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$.
Therefore, with the help of Proposition \ref{2prop:enorm} and plugging \eqref{3eq:psiC} into (\ref{3eq:I31}), one has
\begin{equation}\label{3eq:I3}
\vert I_{3} \vert \lesssim \tau^{-\frac{121}{40}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),\quad \mbox{as $\tau \rightarrow \infty$. }
\end{equation}
By virtue of Cauchy-Schwarz inequality and Lemma \ref{lem:trace thm}, we can obtain that
\begin{equation}\label{3eq:I41}
\begin{aligned}
\vert I_4 \vert&\lesssim \|v-v_j\|_{L^{2}(\Gamma_h)}\|e^{\rho \cdot \mathbf x}\|_{L^2(\Gamma_h)}+\|v-v_j\|_{L^2(\Gamma_h)}\|e^{\rho \cdot \mathbf x}\|_{L^4(\Gamma_h)}\|\psi(\mathbf x )\|_{L^4(\Gamma_h)}\\
&\lesssim j^{-\beta}\left[\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^\frac{1}{2}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^\frac{1}{4}\tau^{-\frac{2}{5}}\right]
\end{aligned}
\end{equation}
as $\tau\to \infty$.
With the help of Cauchy-Schwarz inequality, Lemma \ref{lem:trace thm} and H\"older inequality, one has
\begin{equation}\label{3eq:I5}
\begin{aligned}
\vert I_5\vert
&\lesssim\|v-v_j\|_{L^2(\Gamma_h)}(\| \vert e^{\rho\cdot \mathbf x}\vert \vert \mathbf x\vert^\alpha\|_{L^2(\Gamma_h)}+\|\vert e^{\rho\cdot \mathbf x}\vert \vert \mathbf x\vert^\alpha\|_{L^4(\Gamma_h)}\|\psi(\mathbf x)\|_{L^{4}(\Gamma_h)})\\
&\lesssim j^{-\beta}\left[\left(\frac{1}{\tau^{(2\alpha +2)}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^{(4\alpha +2)}}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{5}}\right],
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$.
Similar to \eqref{2eq:psik}, it can be directly obtained that
\begin{equation}\label{3eq:psiH8}
\left\| \psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{H^{1,8}(\mathcal C)}\leq\tau^{\frac{3}{8}}\|\psi(\mathbf x)\|_{H^{1,8}(\mathcal C)}=\mathcal O(\tau^{-\frac{1}{40}}),
\end{equation}
as $\tau \rightarrow \infty$. Therefore, following the proof of Lemma \ref{2lem:psi8} and using H\"older inequality and Lemma \ref{lem:trace thm}, we have
\begin{equation}\label{3eq:I61}
\begin{aligned}
\vert I_6\vert &
\lesssim \frac{1}{\tau^2}\|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma)}\left\|\psi\left(\frac{\mathbf y}{\tau}\right)\right\|_{L^8(\Gamma)}\\
&\lesssim \frac{1}{\tau^2} \|e^{-\mathbf d\cdot \mathbf y}\|_{L^{\frac{8}{7}}(\Gamma)} \tau^{\frac{3}{8}}\|\psi(\mathbf x)\|_{H^{1,8}(\mathcal C)}
\lesssim\tau^{-\frac{81}{40}}, \quad \mbox{as $\tau \rightarrow \infty$. }
\end{aligned}
\end{equation}
Moreover, by virtue of \eqref{3eq:u0alphagam}, we directly have the following estimates for $I_7$, $I_8$ and $I_9$:
\begin{align}
\vert I_7\vert
&\lesssim \|v_j\|_{C^\alpha(\Gamma_h)}\int_{\Gamma_h}\vert \mathbf x\vert^\alpha\vert u_0\vert\mathrm d\sigma
\lesssim j^\gamma\left[ \tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right) \right], \notag \\
\vert I_8\vert& \lesssim
\tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),\notag \\
\vert I_9\vert&\lesssim j^\gamma\left[ \tau^{-(2\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{2\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right],\quad \mbox{as $\tau \rightarrow \infty$. }\label{3eq:I9}
\end{align}
Using Cauchy-Schwarz inequality and Lemma \ref{lem:trace thm}, we obtain that
\begin{equation}\label{3eq:J11}
\begin{aligned}
\vert J_1 \vert&\leq \|u_{0}\|_{H^{\frac{1}{2}}(\Lambda_{h})} \|\partial_{\nu}(w-v)\|_{H^{-\frac{1}{2}}(\Lambda_{h})}
\leq C\|u_{0}\|_{H^{1}(\Lambda_{h})} \|\partial_{\nu}(w-v)\|_{H^{1}(\Lambda_{h})}\\
&\lesssim \|u_{0}\|_{H^{1}(\Lambda_{h})}
\end{aligned}
\end{equation}
as $\tau\to \infty$, where $C$ is a positive constant arising from the trace theorem. By virtue of (\ref{3eq:u0lambda}) and (\ref{3eq:nablau0}), it can be calculated that
\begin{equation}\label{3eq:J1}
\begin{aligned}
\vert J_1 \vert \lesssim (1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau}
\end{aligned}
\end{equation}
as $\tau \to \infty$, where $\zeta$ is a positive constant given in (\ref{3eq:3zeta}).
Finally, using Cauchy-Schwarz inequality, the trace theorem and (\ref{3eq:nablau0}), we can obtain that
\begin{equation}\label{3eq:J2}
\begin{aligned}
\vert J_2 \vert &\leq \|\partial _{\nu} u_0\|_{ L^{2}(\Lambda_{h})} \|w-v\|_{ L^{2}(\Lambda_{h})}
\leq C\|\partial _{\nu} u_0\|_{ L^{2}(\Lambda_{h})} \|w-v\|_{ H^{1}(\mathcal C^h)}\\
&\lesssim (1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau},
\end{aligned}
\end{equation}
as $\tau \rightarrow \infty$.
Substituting (\ref{3eq:I11}), (\ref{3eq:I2}), (\ref{3eq:I3})$-$\eqref{3eq:I9}, \eqref{3eq:J1} and \eqref{3eq:J2} into (\ref{3eq:I}), we have
\begin{align}
\vert \eta(\mathbf 0)v_j(\mathbf 0)\vert&\left(\frac{C_{\mathcal C^h}}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right)
\lesssim j^{-\beta}\left[\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^\frac{1}{4}\tau^{-\frac{2}{5}} \right]\notag \\
&
+j^{\gamma}\left[\tau^{-(\alpha+\frac{121}{40})}+\left(\frac{1}{\tau^{\alpha+3}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]\notag \\
&
+j^{-\beta}\left[\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^2}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^\frac{1}{4}\tau^{-\frac{2}{5}}\right]\notag \\
&
+j^{-\beta}\left[\left(\frac{1}{\tau^{(2\alpha +2)}}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^{(4\alpha +2)}}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^{\frac{1}{4}}\tau^{-\frac{2}{5}}\right]\notag \\
&
+(j^\gamma+1)\left[ \tau^{-(\alpha+\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
\right]\notag \\
&
+j^\gamma\left[\tau^{-(2\alpha+\frac{43}{20})}+\left(\frac{1}{\tau^{2\alpha+2}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]\notag \\
&
+\tau^{-\frac{81}{40}}+\tau^{-\frac{121}{40}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
+(1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau} \label{3D1}
\end{align}
as $\tau \to \infty$, where $C_{\mathcal C^h}$ is a positive constant given in (\ref{3eq:eta}). Moreover, for sufficiently large $\tau$, we know that
$$\frac{C_{\mathcal C^h}}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}>0.$$
Hence, multiplying both sides of (\ref{3D1}) by $\tau^2$ and taking $\tau=j^s$ with $s>0$, we derive that
\begin{equation}\label{3D2}
\begin{aligned}
\left(C_{\mathcal C^h}-j^s e^{-\frac{1}{2}\zeta hj^s}\right)\vert \eta(\mathbf 0)v_j(\mathbf 0)\vert &\lesssim j^{-\beta +\frac{17}{20}s}+j^{\gamma-(\alpha+1)s}+j^{-\beta+\frac{11}{10}s}\\
&+j^{-\beta+(-\alpha+\frac{11}{10})s}
+j^{\gamma-\alpha s}+j^{\gamma-2\alpha s}
\end{aligned}
\end{equation}
as $j\to \infty$. Recalling that $\gamma/\alpha<\frac{10}{11}\beta$, we can choose $s\in (\gamma/\alpha ,\frac{10}{11}\beta )$. Hence in (\ref{3D2}), by letting $j\to \infty$, we prove that
$$\lim_{j\to \infty}\vert\eta(\mathbf 0)v_j(\mathbf 0)\vert=0.$$
Since $\eta(\mathbf 0)\not=0$, we have $\lim_{j\to \infty }\vert v_j(\mathbf 0) \vert=0$. Using (\ref{2eq:herg}) and integral mean value theorem, we can obtain (\ref{3eq:delv0}).
The proof is complete.
\end{proof}
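As a sanity check on the exponent bookkeeping in \eqref{3D2}, one can verify numerically that every $j$-exponent on the right-hand side is negative throughout the admissible window $s\in(\gamma/\alpha,\frac{10}{11}\beta)$; the following sketch uses illustrative sample values of $\alpha$, $\beta$, $\gamma$ (they are assumptions for the check, not values from the paper) satisfying $\gamma/\alpha<\frac{10}{11}\beta$.

```python
# Sketch: check that all j-exponents on the right-hand side of (3D2) are
# negative for s in the window (gamma/alpha, 10*beta/11), so the right-hand
# side vanishes as j -> infinity.  alpha, beta, gamma below are illustrative.
import numpy as np

alpha, beta, gamma = 0.5, 2.0, 0.4          # assumed sample values; gamma/alpha = 0.8 < (10/11)*beta
s_lo, s_hi = gamma / alpha, 10 * beta / 11  # admissible window for s

def exponents(s):
    """The six j-exponents appearing on the right-hand side of (3D2)."""
    return [-beta + 17 * s / 20,            # from the I_1 contribution
            gamma - (alpha + 1) * s,        # from the I_2 contribution
            -beta + 11 * s / 10,            # from the I_4 contribution
            -beta + (11 / 10 - alpha) * s,  # from the I_5 contribution
            gamma - alpha * s,              # from the I_7, I_8 contributions
            gamma - 2 * alpha * s]          # from the I_9 contribution

for s in np.linspace(s_lo + 1e-6, s_hi - 1e-6, 100):
    assert all(e < 0 for e in exponents(s))
print("all exponents negative on the window")
```

The two endpoint constraints are tight: $-\beta+\frac{11}{10}s$ forces $s<\frac{10}{11}\beta$, while $\gamma-\alpha s$ forces $s>\gamma/\alpha$; the remaining exponents are then automatically negative for $\alpha\in(0,1)$.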
\subsection{Proof of Theorem \ref{3:cubiod}}\label{subsec:42}
In order to prove Theorem \ref{3:cubiod}, we first give a crucial estimate in the following proposition. It is pointed out that, in this subsection, $\mathcal K$ is a cuboid cone with apex $\mathbf 0$. Denote ${\sf cone}(\mathbf a, \mathbf b)=\{\mathbf x\in \mathbb R^3~|~\mathbf x=c_1\mathbf a+c_2\mathbf b,\ c_i\geq 0,\ i=1,2\} $, where $\mathbf a$ and $\mathbf b$ are fixed vectors. Let $\mathbf e_1=(1,0,0)^\top$, $\mathbf e_2=(0,1,0)^\top$ and $\mathbf e_3=(0,0,1)^\top$. Suppose that $\partial \mathcal K=\cup_{i=1}^3 \mathcal K_i $, where $ \mathcal K_1={\sf cone}(\mathbf e_1, \mathbf e_3) $, $ \mathcal K_2={\sf cone}(\mathbf e_1, \mathbf e_2) $ and $ \mathcal K_3={\sf cone}(\mathbf e_2, \mathbf e_3) $.
\begin{prop}\label{pro:43}
Let $\mathbf d=(1,1,1)^\top $ and $\mathbf d^\perp=(1,-1,0)^\top $. Denote $z_j=\rho_1 \cdot \hat{\mathbf x}_j(\theta_\xi )$, where
\begin{align}\label{eq:x_j}
\hat{\mathbf x}_1(\theta_\xi )=\begin{bmatrix}
0\\ \sin \theta_\xi \\ \cos \theta_\xi
\end{bmatrix},\quad \hat{\mathbf x}_2(\theta_\xi )=\begin{bmatrix}
\sin \theta_\xi\\ 0 \\ \cos \theta_\xi
\end{bmatrix},\quad \hat{\mathbf x}_3(\theta_\xi )=\begin{bmatrix}
\cos \theta_\xi \\ \sin \theta_\xi \\ 0
\end{bmatrix}
\end{align}
with a fixed $\theta_\xi \in (0,\pi/2)$, and $\rho_1=\mathbf d +\mathrm i \mathbf d^\perp$. It holds that
\begin{align}\label{eq:bound z}
\left| \sum_{j=1}^3 \frac{1}{z_j^2} \right| \geq \frac{\sin^3\theta_\xi }{30} >0.
\end{align}
\end{prop}
\begin{proof}
By direct calculations, we have
\begin{equation}\label{eq:456 theta}
\sum_{j=1}^3 \frac{1}{z_j^2}=\frac{S(\theta_\xi )}{z_1^2},\quad S(\theta_\xi )=1+\left(\frac{z_1}{\bar z_1}\right)^2 +c_1 z_4,
\end{equation}
where
\begin{align*}
c_1=&\frac{|z_1|^4}{||z_1|^2-\sin \theta_\xi \cos \theta_\xi +\mathrm i \cos \theta_\xi (\cos\theta_\xi+\sin \theta_\xi )|^4},\\
z_4=&\left(|z_1|^2-\frac{1}{2}\sin2\theta_\xi -\mathrm i \cos \theta_\xi (\cos\theta_\xi +\sin\theta_\xi ) \right)^2.
\end{align*}
Noting that $\theta_\xi \in (0,\pi/2)$, it follows that $c_1\geq 0.05\sin\theta_\xi$ and $\mathrm{Re}(z_4) \geq 2 \sin^2 \theta_\xi $. Hence, according to \eqref{eq:456 theta}, we obtain \eqref{eq:bound z}.
\end{proof}
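The lower bound \eqref{eq:bound z} is fully explicit, so it can also be checked numerically; the following sketch (outside the formal proof) samples $\theta_\xi$ over $(0,\pi/2)$ and verifies that $\vert \sum_{j=1}^3 z_j^{-2}\vert -\frac{1}{30}\sin^3\theta_\xi$ stays positive on the grid.

```python
# Numerical sanity check of the bound (eq:bound z) in Proposition pro:43:
# z_j = rho_1 . x_hat_j(theta), with rho_1 = d + i*d_perp, d = (1,1,1),
# d_perp = (1,-1,0), and x_hat_j as in (eq:x_j).
import numpy as np

def lower_bound_gap(theta):
    """Return |sum_j 1/z_j^2| - sin^3(theta)/30 for theta in (0, pi/2)."""
    s, c = np.sin(theta), np.cos(theta)
    rho1 = np.array([1 + 1j, 1 - 1j, 1.0])  # componentwise d + i*d_perp
    xhat = [np.array([0.0, s, c]),           # x_hat_1
            np.array([s, 0.0, c]),           # x_hat_2
            np.array([c, s, 0.0])]           # x_hat_3
    z = [rho1 @ x for x in xhat]
    total = sum(1.0 / zj**2 for zj in z)
    return abs(total) - s**3 / 30.0

thetas = np.linspace(1e-3, np.pi / 2 - 1e-3, 2000)
gaps = np.array([lower_bound_gap(t) for t in thetas])
print(gaps.min() > 0)  # the bound holds strictly on the sample grid
```

On this grid the quantity $\vert \sum_j z_j^{-2}\vert$ in fact never drops below roughly $1/2$, so the constant $\frac{1}{30}$ in \eqref{eq:bound z} is far from sharp.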
\begin{prop}\label{pro:44}
Assume that $\mathcal K^h$ is a truncated cuboid. Let $\Gamma _h=\partial \mathcal K^h\cap B_h$ and $\rho$ be defined in \eqref{1eq:eta}, where $\mathbf d=(1,1,1)^\top $ and $\mathbf d^\perp=(1,-1,0)^\top $. Then one has
\begin{equation}\label{eq:estcuboid}
\left \vert \int_{\Gamma_h}e^{\rho\cdot \mathbf x}\mathrm d \sigma\right \vert \geq \frac{C^{'}_{\mathcal K^h}}{\tau^2}-\mathcal O(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}),
\end{equation}
for sufficiently large $\tau$, where $C^{'}_{\mathcal K ^h}$ is a positive constant independent of $\tau$.
\end{prop}
\begin{proof}
Since $\mathcal K$ is a cuboid, by the geometrical setup and notations in this subsection, we have $\Gamma_h=\Gamma_{h1}\cup \Gamma_{h2}\cup\Gamma_{h3}$, where $\Gamma _{h1}:=\partial \mathcal K_1\cap B_h,\ \Gamma _{h2}:=\partial \mathcal K_2\cap B _h,\ \Gamma _{h3}:=\partial \mathcal K_3\cap B_h$.
According to Proposition \ref{1prop:gamma}, it can be derived that
\begin{equation}
\begin{aligned}\notag
\int_{\Gamma_h}e^{\rho \cdot \mathbf x} \mathrm d \sigma
&=\int_{\Gamma _{h1}}e^{\rho \cdot \mathbf x} \mathrm d \sigma+\int_{\Gamma _{h2}}e^{\rho \cdot \mathbf x} \mathrm d \sigma+\int_{\Gamma _{h3}}e^{\rho \cdot \mathbf x} \mathrm d \sigma\\
&= \frac{1}{2\pi \tau^2} \sum_{j=1}^3 \frac{1}{(\rho_1\cdot \hat{\mathbf x}_j(\theta_\xi))^2} -\mathcal O(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}),
\end{aligned}
\end{equation}
where $\theta_\xi \in (0,\pi/2)$ is fixed.
By virtue of Proposition \ref{pro:43}, we complete the proof.
\end{proof}
\begin{proof}[\bf{Proof of Theorem \ref{3:cubiod}}]
Using the fact that $f=(q-1)v\in C^\alpha(\overline{\mathcal K^h}),\ v\in C^\alpha(\overline{\mathcal K^h}) ,\eta\in C^\alpha(\overline {\Gamma _h}) $, we have the following expansion
\begin{equation}\label{3eq:exalpha}
\begin{aligned}
f&=f(\mathbf 0)+\delta f,\quad \vert \delta f \vert \leq \|f\|_{C^\alpha}\vert \mathbf x \vert ^\alpha,\\
v&=v(\mathbf 0)+\delta v,\quad \vert \delta v \vert \leq \|v\|_{C^\alpha}\vert \mathbf x \vert ^\alpha,\\
\eta&=\eta(\mathbf 0)+\delta \eta, \quad \vert \delta \eta\vert \leq \|\eta\|_{C^\alpha}\vert \mathbf x\vert ^\alpha.
\end{aligned}
\end{equation}
Combining the integral identity \eqref{3eq:green} with \eqref{3eq:exalpha}, one arrives at
\begin{equation}\label{3eq:cubiod1}
\eta(\mathbf 0)v(\mathbf 0)\int_{\Gamma_h}e^{\rho \cdot \mathbf x}\mathrm d \sigma=\sum_{i=1}^6 I_i+J_1+J_2,
\end{equation}
where
\begin{equation}\label{3eq:chaicub}
\begin{aligned}
I_1&=f(\mathbf 0)\int_{\mathcal K^h}u_0\mathrm d\mathbf x,\quad I_2= \int_{\mathcal K^h}\delta f u_0\mathrm d\mathbf x,\quad
I_3=\eta(\mathbf 0)v(\mathbf 0)\int_{\Gamma _h}\psi(\mathbf x)e^{\rho \cdot \mathbf x}\mathrm d\sigma,\\
I_4&=\eta (\mathbf 0)\int_{\Gamma _h}\delta v u_0\mathrm d\sigma,\quad
I_5=v(\mathbf 0)\int_{\Gamma _h}\delta \eta u_0\mathrm d\sigma,\quad I_6=\int_{\Gamma _h}\delta \eta\delta v u_0\mathrm d\sigma,\\
J_1&=-\int_{\Lambda_h}(w-v)\partial_{\nu}u_0\mathrm d\sigma,\quad J_2=\int_{\Lambda_h}u_0\partial_{\nu}(w-v)\mathrm d\sigma.
\end{aligned}
\end{equation}
There must exist a convex conic cone $\mathcal C$ containing the cuboid cone $\mathcal K$, namely $\mathcal K \subset \mathcal C$. Hence, by virtue of \eqref{3eq:I3} and \eqref{3eq:u0alphagam}, we have
\begin{equation}\label{I1'}
\vert I_1\vert
\leq\vert f(\mathbf 0)\vert \int_{\mathcal K^h}\vert u_0\vert\mathrm d\mathbf x
\leq \vert f(\mathbf 0)\vert \int_{\mathcal C^h}\vert u_0\vert\mathrm d\mathbf x \lesssim \tau^{-\frac{121}{40}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),
\end{equation}
and
\begin{equation}\label{I2'}
\vert I_2\vert
\leq\int_{\mathcal K^h} \vert \delta f u_0\vert \mathrm d\mathbf x
\leq \int_{\mathcal C^h} \vert \delta f u_0\vert \mathrm d\mathbf x
\lesssim \tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),
\end{equation}
as $\tau \rightarrow \infty$.
In view of \eqref{3eq:I61}, we have
\begin{equation}\label{I3'}
\vert I_3\vert \lesssim \tau^{-\frac{81}{40}}.
\end{equation}
In addition, by using \eqref{3eq:u0alphagam} in Lemma \ref{3lem:u0est}, we have the following inequalities:
\begin{align}\label{I4'}
\vert I_4\vert &\lesssim \tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right),\\
\vert I_5 \vert &\lesssim \tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right), \label{I5'} \\
\vert I_6\vert &\lesssim \tau^{-(2\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{2\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right), \label{I6'}
\end{align}
as $\tau \rightarrow \infty$. Moreover, by using \eqref{3eq:J1} and \eqref{3eq:J2}, we have
\begin{equation}\label{J1'}
\vert J_1 \vert \lesssim (1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau}
\end{equation}
and
\begin{equation}\label{J2'}
\vert J_2 \vert \lesssim (1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau},
\end{equation}
as $\tau \rightarrow \infty$.
Let $\rho$ be defined in \eqref{1eq:eta} with $\mathbf d=(1,1,1)^\top $ and $\mathbf d^\perp=(1,-1,0)^\top $. By Proposition \ref{pro:44}, one has \eqref{eq:estcuboid}. Plugging \eqref{I1'}-\eqref{J2'} and \eqref{eq:estcuboid} into \eqref{3eq:cubiod1}, one arrives at
\begin{equation}\label{I'}
\begin{aligned}
\vert \eta(\mathbf 0) v(\mathbf 0)\vert &\left(\frac{C^{'}_{\mathcal K^h}}{\tau^2}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right)
\lesssim \tau^{-\frac{121}{40}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
+\tau^{-(\alpha +\frac{43}{20})}\\
&+\left(\frac{1}{\tau^{\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)
+\tau^{-\frac{81}{40}}
+\tau^{-(\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{\alpha +2}}
+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\\
&+\tau^{-(2\alpha +\frac{43}{20})}+\left(\frac{1}{\tau^{2\alpha +2}} +\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\\
&+(1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau},
\end{aligned}
\end{equation}
where the positive constant $C^{'}_{\mathcal K^h}$, which does not depend on $\tau$, is defined in \eqref{eq:estcuboid}.
Multiplying both sides of \eqref{I'} by $\tau^2$ and letting $\tau \to \infty $, one has
$$\vert \eta (\mathbf 0)v(\mathbf 0)\vert=0.$$
Since $\eta (\mathbf 0)\not =0$, we complete the proof of Theorem \ref{3:cubiod}.
\end{proof}
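The final limiting step above is pure exponent bookkeeping: after multiplying \eqref{I'} by $\tau^2$, every algebraic term on the right-hand side carries a negative power of $\tau$ for any H\"older exponent $\alpha\in(0,1)$, while the exponential terms decay faster than any power. A small numerical sketch (outside the formal proof) confirming this:

```python
# Sketch: after multiplying the rhs of (I') by tau^2, each algebraic term has
# tau-exponent < 0 for every alpha in (0,1), so the rhs -> 0 as tau -> infinity.
import numpy as np

def rhs_exponents(a):
    """tau-exponents of the algebraic terms of (I') after multiplying by tau^2."""
    return [-121 / 40 + 2,           # from the I_1 contribution
            -3 + 2,                  # from the I_1 contribution
            -(a + 43 / 20) + 2,      # from the I_2, I_4, I_5 contributions
            -(a + 2) + 2,            # from the I_2, I_4, I_5 contributions
            -81 / 40 + 2,            # from the I_3 contribution
            -(2 * a + 43 / 20) + 2,  # from the I_6 contribution
            -(2 * a + 2) + 2]        # from the I_6 contribution

for a in np.linspace(1e-3, 1 - 1e-3, 500):
    assert max(rhs_exponents(a)) < 0
print("rhs of (I') times tau^2 vanishes as tau -> infinity")
```

The binding term is $\tau^2\cdot\tau^{-\frac{81}{40}}=\tau^{-\frac{1}{40}}$ from $I_3$, which still tends to zero, albeit slowly.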
\subsection{Proof of Corollary \ref{cor2}}\label{subsec:43}
Since the proof of Corollary \ref{cor2}(b) can be obtained by a process similar to that of Corollary \ref{cor2}(a), we only give the proof of Corollary \ref{cor2}(a). First, we state the following proposition.
\begin{prop}\cite[Lemma\ 2.4]{DFLY}\label{3prop:eta3}
Let $\mathcal C^{h}$ and $\rho$ be defined in (\ref{eq:cone2}) and (\ref{1eq:eta}), respectively. Then we have
\begin{equation}\label{3eq:eta3}
\left\vert \int_{\mathcal C^{h}} e^{\rho \cdot \mathbf x}\mathrm d\mathbf x\right\vert \geq \frac{\widetilde{C_{\mathcal C^h}}}{\tau^3}-\mathcal O\left(\frac{1}{\tau}e^{-\frac{1}{2}\zeta h \tau}\right),
\end{equation}
for sufficiently large $\tau$, where $\widetilde{C_{\mathcal C^h}}$ is a positive number only depending on the opening angle $\theta_0$ of $\mathcal C$ and $\zeta$.
\end{prop}
\begin{proof}[\bf{The proof of Corollary \ref{cor2}}(a)]
The following integral identity can be obtained according to \eqref{3eq:green}:
\begin{equation}
k^2f_j(\mathbf 0)\int_{\mathcal C^h}e^{\rho \cdot \mathbf x}\mathrm d\mathbf x=I_1+I_2+I_3+J_1+J_2,
\end{equation}
where $I_m$, $m=1,2,3$, $J_1$ and $J_2$ are defined in \eqref{3eq:chaifen}.
With the help of \eqref{3eq:I11}, \eqref{3eq:I2}, \eqref{3eq:I3}, \eqref{3eq:J1} and Proposition \ref{3prop:eta3}, we have the following integral inequality
\begin{equation}\label{3eq:fj0}
\begin{aligned}
k^2\left[\frac{\widetilde{C_{\mathcal C^h}}}{\tau^3}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right]&\vert f_j(\mathbf 0)\vert \lesssim j^{-\beta}\left[\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-\zeta h\tau}\right)^{\frac{1}{2}}+\left(\frac{1}{\tau^3}+\frac{1}{\tau}e^{-2\zeta h\tau}\right)^\frac{1}{4}\tau^{-\frac{2}{5}} \right]\\
&+j^{\gamma}\left[\tau^{-(\alpha+\frac{121}{40})}+\left(\frac{1}{\tau^{\alpha+3}}+\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}\right)\right]\\
&+\tau^{-\frac{121}{40}}+(1+\tau)(1+\tau^{-\frac{2}{5}})e^{-\zeta h\tau}
\end{aligned}
\end{equation}
as $\tau \to \infty$. For sufficiently large $\tau$, we know that
$$ \frac{\widetilde{C_{\mathcal C^h}}}{\tau^3}-\frac{1}{\tau}e^{-\frac{1}{2}\zeta h\tau}>0.$$
Then, multiplying both sides of \eqref{3eq:fj0} by $\tau^3$ and taking $\tau =j^s$, for sufficiently large $j$ one has
\begin{equation}\label{3eq:fjalpha}
k^2\widetilde{C_{\mathcal C^h}}\vert f_j(\mathbf 0)\vert \lesssim j^{-\beta+\frac{37}{20}s}+j^{\gamma-\alpha s}.
\end{equation}
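For the reader's convenience, we record the exponent bookkeeping behind \eqref{3eq:fjalpha}. After multiplying \eqref{3eq:fj0} by $\tau^3$ and substituting $\tau=j^s$, the dominant algebraically decaying contributions are
$$
\tau^3\cdot j^{-\beta}\,\tau^{-\frac{3}{4}}\tau^{-\frac{2}{5}}=j^{-\beta}\tau^{\frac{37}{20}}=j^{-\beta+\frac{37}{20}s}
\quad\mbox{and}\quad
\tau^3\cdot j^{\gamma}\,\tau^{-(\alpha+3)}=j^{\gamma}\tau^{-\alpha}=j^{\gamma-\alpha s},
$$
while the remaining terms are of lower algebraic order or decay exponentially in $\tau$.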
By the assumption that $\gamma <\frac{20}{37}\alpha\beta$, we may choose $s\in (\gamma/\alpha, \frac{20}{37}\beta)$ so that both exponents in \eqref{3eq:fjalpha} are negative. Letting $j\to \infty$, we have
$$\vert f_j(\mathbf 0)\vert =0.$$
Since $q(\mathbf 0)\neq 1$, the proof of this corollary is complete.
\end{proof}
\section{Visibility and unique recovery results for the inverse scattering problem}\label{sec:inverse}
In this section, we show that when a medium scatterer with a conductive transmission boundary condition possesses either a convex planar corner, a convex polyhedral corner, or a convex conic corner, it radiates a non-trivial far-field pattern; namely, the scatterer is visible. Furthermore, when the medium scatterer is visible, it can be uniquely determined by a single far-field measurement under generic physical scenarios.
The following theorem indicates that a conductive medium possessing one of the aforementioned corners always scatters under generic physical conditions.
\begin{thm}\label{thm:medium conical scat}
Consider the conductive medium scattering problem \eqref{eq:contr}. Let $(\Omega ; q,\eta )$ be the medium scatterer associated with \eqref{eq:contr}, where $\Omega$ is a bounded Lipschitz domain with a connected complement in $\mathbb R^n$, $n=2,3$. If one of the following conditions is fulfilled:
\begin{itemize}
\item[(a)] when $\Omega\Subset \mathbb R^2$, there exists a sufficiently small $h\in \mathbb R_+$ such that $\Omega\cap B_h=S_h$, where $S_h$ is defined by \eqref{1eq:sec}, $q\in H^2(\overline{S_h} )$, $\eta \in C^\alpha(\overline{\Gamma_h^\pm } )$ with $\alpha\in (0,1)$ and $\eta(\mathbf 0) \neq 0$, and $\Gamma_h^\pm =\partial S_h\setminus \partial B_h $;
\item[(b)] when $\Omega\Subset \mathbb R^3$, there exists a sufficiently small $h\in \mathbb R_+$ such that $\Omega\cap B_h=\mathcal K^h$, where $\mathcal K^h$ is a cuboid defined by \eqref{eq:cone3}, $q\in H^2(\overline{\mathcal K^h} )$, $\eta \in C^\alpha(\overline{\Gamma_h } )$ with $\alpha\in (0,1)$ and $\eta(\mathbf 0) \neq 0$, and $\Gamma_h =\partial \mathcal K^h\setminus \partial B_h $;
\item[(c)] when $\Omega\Subset \mathbb R^3$, there exists a sufficiently small $h\in \mathbb R_+$ such that $\Omega\cap B_h=\mathcal K^h$, where $\mathcal K^h$ is a polyhedral corner but not a cuboid, $q \in H^2(\overline{\mathcal K^h})$ satisfies $q(\mathbf 0)\not =1$, and $\eta \equiv 0$ on $ \partial {\mathcal K^h}\setminus \partial B_h$;
\item[(d)] when $\Omega\Subset \mathbb R^3$, there exists a sufficiently small $h\in \mathbb R_+$ such that $\Omega\cap B_h=\mathcal C^h$, where $\mathcal C^h$ is defined by \eqref{eq:cone1}, $q\in H^2(\overline{\mathcal C^h} )$, $\eta \in C^\alpha(\overline{\Gamma_h } )$ with $\alpha\in (0,1)$ and $\eta(\mathbf 0) \neq 0$, and $\Gamma_h =\partial \mathcal C^h\setminus \partial B_h $;
\end{itemize}
then $\Omega$ always scatters for any incident wave satisfying \eqref{eq:incident}.
\end{thm}
\begin{proof}
By contradiction, suppose that the medium scatterer $\Omega$ possesses either a convex planar corner, a convex polyhedral corner, or a convex conic corner, where one of the assumptions (a)-(d) is fulfilled. Assume that $\Omega$ is non-radiating, namely, the far-field pattern $u^\infty \equiv 0$. By virtue of Rellich's lemma, the total wave field $u$ and the incident wave $u^i$ satisfy \eqref{0eq:tr} associated with the incident wave number $k$. It is clear that the incident wave $u^i$ is $\alpha$-H\"older continuous and non-vanishing near the underlying corner. According to Corollaries \ref{cor1} and \ref{thm:poly} and Theorems \ref{thm:3alpha} and \ref{3:cubiod}, $u^i$ must vanish at the corresponding corner point, which yields a contradiction.
The proof is complete.
\end{proof}
In the following, we shall study the unique recovery for the inverse problem \eqref{5eq:ineta} associated with the conductive scattering problem \eqref{eq:contr} in $\mathbb R^3$. In the field of inverse scattering problems, one is concerned with the shape determination of $\Omega$ from a minimum number of far-field measurements (cf. \cite{DR2018}). We utilize the local geometrical characterization of transmission eigenfunctions near a corner in Section \ref{sec:3D} to establish uniqueness regarding the shape determination of \eqref{5eq:ineta} by a single measurement under generic physical scenarios, where a single far-field measurement means that the underlying far-field pattern is generated by only one incident wave $u^i$. Unique determination results of \eqref{5eq:ineta} for recovering the material parameters associated with \eqref{eq:contr} by infinitely many far-field measurements at a fixed frequency can be found in \cite{OX,BHK,HK2020}. In this section we obtain local unique recovery results for the determination of $\Omega$ without a priori knowledge of the material parameters $q$ and $\eta$. When $\Omega$ is a cuboid or a corona shape scatterer with a conductive transmission boundary condition, the corresponding global uniqueness results on the shape determination can be drawn under generic physical scenarios. We point out that when $\eta\equiv 0$ on $\partial \Omega$, namely when the inverse problem \eqref{5eq:ineta} is associated with the corresponding scattering problem
\begin{equation}\label{polyeq:contr}
\begin{cases}
&\Delta u^-+k^2qu^-=0 \hspace*{3.1cm} \mbox{in}\quad \Omega,\\
&\Delta u^++k^2u^+=0 \hspace*{3.3cm} \mbox{in}\quad \mathbb R^n\setminus \Omega,\\
&u^+=u^-,\quad \partial_\nu u^+=\partial _\nu u^-, \hspace*{1.7cm} \mbox{on} \quad\partial \Omega,\\
&u^+=u^i+u^s,\hspace*{3.8cm} \mbox{in}\quad \mathbb R^n,\\
&\lim_{r\to \infty}r^{(n-1)/2}(\partial _ru^s-iku^s)=0,\hspace*{0.4cm} r=\vert \mathbf x\vert,
\end{cases}
\end{equation}
we can establish global unique recovery results for the shape of $\Omega$ within convex polyhedral or corona shape geometries by a single far-field measurement, whereas the corresponding single-measurement uniqueness result regarding the shape determination of a convex polygonal or cuboid medium associated with \eqref{polyeq:contr} was studied in \cite{HU&SALO}.
In Theorem \ref{thm:u10}, we show a local uniqueness result for \eqref{5eq:ineta}, which aims to recover a scatterer $(\Omega;q,\eta)$ from knowledge of the far-field pattern $u^\infty(\hat{\mathbf x};u^i)$ with a single measurement. First, let us introduce the admissible class of conductive scatterers and the related notation used in our study.
\begin{defn}\label{def:admiss}
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb R^3$ with a connected complement and $(\Omega;k,\mathbf d,q,\eta)$ be a conductive scatterer with the incident plane wave $u^i=e^{ik\mathbf x\cdot \mathbf d}$, where $\mathbf d\in \mathbb S^2$ and $k\in \mathbb R_+$. Consider the scattering problem \eqref{eq:contr} and denote by $u$ the associated total wave field. The scatterer $\Omega$ is said to be admissible if the following conditions are fulfilled:
\begin{itemize}
\item[(a)] $q\in L^\infty (\Omega)$ and $\eta\in L^\infty(\partial \Omega)$.
\item [(b)] After rigid motions, we assume that $\mathbf 0\in \partial \Omega$. Recall that $\mathcal C^h$ and $\mathcal K^h$ are defined in \eqref{eq:cone2} and \eqref{eq:kr0} respectively, where $\mathbf 0$ is the apex of the conic corner $\mathcal C^h$ or the convex polyhedral corner $\mathcal K^h$. If $\Omega$ possesses a convex conic corner $\mathcal C^h$ (or a cuboid corner $\mathcal K^h$), then $q \in H^2(\overline{\mathcal C^h})$ (or $q \in H^2(\overline{\mathcal K^h})$) and $\eta\in C^\alpha (\overline {\Gamma_h})$ satisfying $\eta(\mathbf 0)\not =0$ and $\alpha\in (0,1)$, where $\Gamma_h= \mathcal C^h\cap \partial \Omega$ (or $\Gamma_h= \mathcal K^h\cap \partial \Omega$). If $\Omega$ possesses a convex polyhedral corner $\mathcal K^h=B_h\cap \Omega$, then $q \in H^2(\overline{\mathcal K^h})$ satisfying $q(\mathbf 0)\not =1$ and $\eta \equiv 0$ on $ \overline{\mathcal K^h}\cap \partial \Omega$.
\item [(c)] The total wave field $u$ is non-vanishing everywhere in the sense that for any $\mathbf x\in \mathbb R^3$,
\begin{equation}\label{eq:assum 51}
\lim_{\lambda \to +0}\frac{1}{m(B(\mathbf x,\lambda ))}\int_{B(\mathbf x,\lambda )}\vert u(\mathbf x)\vert\mathrm d\mathbf x\not=0,
\end{equation}
where $m(B(\mathbf x,\lambda ))$ is the measure of $B(\mathbf x,\lambda )$.
\end{itemize}
\end{defn}
\begin{rem}\label{rem:51}
The assumption \eqref{eq:assum 51} is a technical condition for deriving the uniqueness results, and it can be fulfilled under generic physical scenarios. For example, when $k\cdot {\rm diam}(\Omega )\ll 1 $, by the well-posedness of the direct scattering problem \eqref{eq:contr} (cf. \cite[Theorem 2.4]{OX}), the condition \eqref{eq:assum 51} is satisfied. A detailed discussion of this point can be found in \cite[Page 44]{DCL}. We believe that \eqref{eq:assum 51} can be fulfilled under other physical settings as well, but we choose not to explore this aspect in this paper and shall investigate it in the future.
\end{rem}
\begin{thm}\label{thm:u10}
Consider the conductive scattering problem \eqref{eq:contr} with two conductive scatterers $(\Omega_j;k,\mathbf d,q_j,\eta_j),j=1,2,$ in $ \mathbb R^3$.
Let $u^\infty_{j}(\hat{\mathbf {x}};u^i)$ be the far-field pattern associated with the scatterers $(\Omega_j;k,\mathbf d,q_j,\eta_j),j=1,2$ and the incident field $u^i$. If $(\Omega_j;k,\mathbf d, q_j,\eta_j)$ are admissible and
\begin{equation}\label{eq:thm51 cond}
u^\infty_1(\hat{\mathbf x};u^i)=u^\infty_2(\hat{\mathbf x};u^i)
\end{equation}
for all $\hat{\mathbf x}\in \mathbb S^2$ with a fixed incident wave $u^i$, then
\begin{equation}\label{eq:set min}
\Omega_1\Delta \Omega_2:=(\Omega_1\setminus \Omega_2)\cup(\Omega_2\setminus\Omega_1)
\end{equation}
cannot contain a convex conic corner or a cuboid corner. Furthermore, if $\Omega_1$ and $\Omega_2$ are two cuboids, then $\Omega_1=\Omega_2$.
\end{thm}
\begin{proof}
We prove this theorem by contradiction. Suppose that $\Omega_1\Delta \Omega_2$ contains a convex conic corner. Without loss of generality, we assume that the underlying convex conic corner $\mathcal C^h \subset {\Omega_2}\setminus {\Omega_1 } $, where $\mathbf 0\in \partial \Omega_2$ and $\Omega_2\cap B_h=\mathcal C^h$ with a sufficiently small $h\in \mathbb R_+$ such that $B_h\subset \mathbb R^3 \setminus \overline{\Omega_1}$.
Due to \eqref{eq:thm51 cond} and with the help of Rellich's theorem (cf.\ \cite{DR}), it holds that $u_1^s=u_2^s$ in $\mathbb R^3\setminus(\overline{\Omega}_1\cup \overline{\Omega}_2)$; hence
\begin{equation}\label{eq:u1=u2}
u_1(\mathbf x)=u_2(\mathbf x),\ \forall \mathbf x \in \mathbb R^3\setminus(\overline{\Omega}_1\cup \overline{\Omega}_2).
\end{equation}
Since $\Gamma_h=\partial \mathcal C^h\cap \partial \Omega_2$, by virtue of the transmission conditions on $\partial \Omega_2$ of \eqref{eq:contr} and \eqref{eq:u1=u2}, it follows that
\begin{equation}\label{eq:55 trans}
u_2^+=u_2^-=u_1^+,\ \partial_\nu u_2^-=\partial_\nu u_2^++\eta_2 u_2^+=\partial_\nu u_1^+ +\eta_2 u_1^+\ \mathrm {on} \ \Gamma_h.
\end{equation}
According to \eqref{eq:55 trans} and the direct scattering problems \eqref{eq:contr} associated with $(\Omega_j;k,\mathbf d,q_j,\eta_j)$, one has
\begin{equation}\notag
\begin{cases}
&\Delta u_2^-+k^{2}q_2u_2^-=0 \quad \hspace*{2.1cm} \mbox {$\mathrm {in}\ \mathcal C^h$},\\
&\Delta u_1^++k^{2}u_1^+=0 \quad \hspace*{2.5cm}\mbox {$\mathrm {in}\ \mathcal C^h$},\\
&u_2^-=u_1^+,\ \partial_{\nu}u_2^-=\partial_{\nu}u_1^++\eta_2 u_1^+ \hspace*{0.5cm}\mbox{$\mathrm {on}\ \Gamma_h$}.
\end{cases}
\end{equation}
By the well-posedness of the direct scattering problem \eqref{eq:contr}, it follows that $u_2^-\in H^1( \mathcal C^h)$ and $u_1^+$ is real analytic in $B_h$. By virtue of condition (b) in Definition \ref{def:admiss} and Theorem \ref{thm:3alpha}, we know that $u_1(\mathbf 0)=0$,
which contradicts the admissibility condition (c) in Definition \ref{def:admiss}.
The first conclusion of this theorem concerning a cuboid corner can be proved similarly by using Theorem \ref{3:cubiod}; we omit the proof.
By the convexity of the two cuboids $\Omega_1$ and $\Omega_2$ and the first conclusion of this theorem, it readily follows that $\Omega_1=\Omega_2$.
The proof is complete.
\end{proof}
In the following we introduce an admissible class $\mathcal T$ of corona shape, which shall be used in Theorem \ref{thm:finiteconic}.
\begin{defn}\label{def:cone3}
Let $D$ be a convex bounded Lipschitz domain with a connected complement $\mathbb R^3\setminus\overline{D}$. If there exist finitely many strictly convex conic cones $\mathcal C_{\mathbf x_j,\theta_j}$ $(j=1,2,\dots,\ell,\ \ell\in \mathbb N)$ defined in \eqref{eq:cone1} such that
\begin{itemize}
\item [(a)]
the apex $\mathbf x_j\in \mathbb R^3\setminus \overline{D}$, and set $\mathcal C_{\mathbf x_j,\theta_j}^{\ast}=\mathcal C_{\mathbf x_j,\theta_j}\setminus \overline D$, where the apex $\mathbf x_j$ belongs to the strictly convex bounded conic corner of $\mathcal C_{\mathbf x_j,\theta_j}^{\ast}$;
\item [(b)]
$\partial \overline{\mathcal C_{\mathbf x_j,\theta_j}^{\ast}}\setminus \partial \overline{\mathcal C_{\mathbf x_j,\theta_j}}\subset \partial \overline D$ and $\cap _{j=1}^\ell\partial \overline{\mathcal C_{\mathbf x_j,\theta_j}^{\ast}}\setminus \partial \overline{\mathcal C_{\mathbf x_j,\theta_j}}=\emptyset$;
\item[(c)] $\Omega :=\cup_{j=1}^\ell\mathcal C_{\mathbf x_j,\theta_j} \cup D$ is admissible described by Definition \ref{def:admiss};
\end{itemize}
then $\Omega$ is said to belong to an admissible class $\mathcal T$ of corona shape.
\end{defn}
A global unique recovery result for admissible scatterers belonging to the class $\mathcal T$ of corona shape is shown in Theorem \ref{thm:finiteconic}, which can be proved by using Theorem \ref{thm:u10} together with the assumptions in Theorem \ref{thm:finiteconic}. Indeed, the assumptions \eqref{eq:ass 56a} and \eqref{eq:ass 56b} imply that the symmetric difference of the two scatterers $\Omega_1$ and $\Omega_2$ cannot contain a convex conic corner if $\Omega_j \in \mathcal T$, $j=1,2$.
\begin{thm}\label{thm:finiteconic}
Suppose that $\Omega_{m},m=1,2$ belong to the admissible class $\mathcal T$ of corona shape, where
$$
\Omega_m=\cup_{j^{(m)}=1}^{\ell^{(m)}}\mathcal C_{\mathbf x_{j^{(m)}},\theta_{j^{(m)}}}\cup D_m,\quad m=1,2.
$$
Consider the conductive scattering problem \eqref{eq:contr} associated with the admissible conductive scatterers $\Omega_{m}$, $m=1,2$. Let $u_m^\infty(\hat{\mathbf x};u^i)$ be the far-field pattern associated with the scatterer $\Omega_m$, $m=1,2$, and the incident field $u^i$. If the conditions
\begin{subequations}
\begin{align}
D_1&=D_2, \label{eq:ass 56a} \\
\theta_{i^{(1)}}&=\theta_{j^{(2)}} \ \mbox{for} \ i^{(1)} \in \{1,\ldots,\ell^{(1)}\}\ \mbox{and} \ j^{(2)}\in \{1,\ldots,\ell^{(2)}
\}\ \mbox{when} \ \mathbf x_{i^{(1)}}=\mathbf x_{j^{(2)}}, \label{eq:ass 56b}
\end{align}
\end{subequations}
and \eqref{eq:thm51 cond} are satisfied,
then $\ell^{(1)}=\ell^{(2)},\ \mathbf x_{j^{(1)}}=\mathbf x_{j^{(2)}}$ and $\theta_{j^{(1)}}=\theta_{j^{(2)}}$, where $j^{(m)}=1,\dots,\ell^{(m)}$, $m=1,2$. Namely, one has $\Omega_1=\Omega_2$.
\end{thm}
In Theorem \ref{thm:polycon}, we first show a local uniqueness result regarding a polyhedral corner by a single measurement; this theorem can be proved in a similar manner to Theorem \ref{thm:u10} by utilizing Corollary \ref{thm:poly}, hence the detailed proof is omitted. We emphasize that an admissible convex polyhedral scatterer $\Omega$ can be uniquely determined by a single far-field measurement, whereby a global uniqueness result for \eqref{5eq:ineta} associated with \eqref{eq:contr} is established.
\begin{thm}\label{thm:polycon}
Consider the conductive scattering problem \eqref{eq:contr} with conductive scatterers $(\Omega_j;k,\mathbf d,q_j,\eta_j),j=1,2,$ in $\mathbb R^3$.
Let $u^\infty_{j}(\hat{\mathbf {x}};u^i)$ be the far-field pattern associated with the scatterers $(\Omega_j;k,\mathbf d,q_j,\eta_j),j=1,2$ and the incident field $u^i$. If $(\Omega_j;k,\mathbf d,q_j,\eta_j)$ are admissible and \eqref{eq:thm51 cond} is fulfilled, then
$ \Omega_1\Delta\Omega_2$ defined by \eqref{eq:set min}
cannot contain a convex polyhedral corner. Furthermore, if $\Omega_1$ and $\Omega_2$ are two admissible convex polyhedra, then
$$\Omega_1=\Omega_2.$$
\end{thm}
Consider the direct scattering problem \eqref{polyeq:contr} associated with a convex polyhedral medium $(\Omega;k,\mathbf d,q)$, which is a special case of \eqref{eq:contr} obtained by letting $\eta \equiv 0$ on $\partial \Omega$. In Corollary \ref{cor:53 noeta}, we give a global unique determination of a convex polyhedron $\Omega$ associated with the direct scattering problem \eqref{polyeq:contr} by a single far-field measurement under generic physical settings. Corollary \ref{cor:53 noeta} can be proved directly by using Theorem \ref{thm:polycon}, and the detailed proof is omitted. Compared with the corresponding uniqueness result in \cite{HU&SALO} for the shape determination of a cuboid scatterer by a single measurement, we relax the geometrical restriction on the unique determination of medium shapes by a single measurement from a cuboid to a general convex polyhedron.
\begin{cor}\label{cor:53 noeta}
Consider the scattering problem \eqref{polyeq:contr} with scatterers $(\Omega_j;k,\mathbf d,q_j),j=1,2,$ in $\mathbb R^3$.
Let $u^\infty_{j}(\hat{\mathbf {x}};u^i)$ be the far-field pattern associated with the scatterers $(\Omega_j;k,\mathbf d,q_j)$, $j=1,2$, and the incident field $u^i$. Assume that the total wave field $u_j$ corresponding to \eqref{polyeq:contr} associated with $(\Omega_j;k,\mathbf d,q_j)$ $(j=1,2)$ satisfies \eqref{eq:assum 51}. Suppose that $\Omega_j$ is a convex polyhedron, $j=1,2$. Denote by $\mathcal V(\Omega_j )$ the set of all vertices of $\Omega_j$, $j=1,2$. For any $\mathbf x_{c,j} \in \mathcal V(\Omega_j )$, if there exists a sufficiently small $h\in\mathbb R_+$ such that $q_j\in H^2(\overline{ \mathcal K_{\mathbf x_{c,j} }^h } )$ with $q_j(\mathbf x_{c,j} ) \neq 1$ for $j=1,2$, where $\mathcal K_{\mathbf x_{c,j} }^h=\Omega_j \cap B_h( \mathbf x_{c,j})\Subset \Omega_j$, then the condition \eqref{eq:thm51 cond} implies that $\Omega_1=\Omega_2.$
\end{cor}
When the shape of an admissible scatterer $\Omega$ is uniquely determined by a single measurement, with a priori knowledge of the potential $q$ associated with $\Omega$ we can recover the surface parameter $\eta$ by a single measurement, provided that $\eta$ is a non-zero constant. We can use an argument similar to the proof of \cite[Theorem 4.2]{DCL} to establish Theorem \ref{eta0}; the detailed proof is omitted. The technical condition \eqref{eq:assum k} can be easily fulfilled under generic physical scenarios; see the detailed discussion in \cite[Remark 4.2]{DCL}.
\begin{thm}\label{eta0}
Consider the conductive scattering problem \eqref{eq:contr} with the admissible conductive scatterers $(\Omega_m;k,\mathbf d,q,\eta_m)$ in $\mathbb R^3$, where $\eta_m\not =0$, $m=1,2$, are two constants. Let $u_m^\infty(\hat{\mathbf x};u^i)$ be the far-field pattern with the scatterers $(\Omega_m;k,\mathbf d,q,\eta_m),m=1,2$ and the incident field $u^i$. Suppose that
$$u_1^\infty(\hat{\mathbf x};u^i)=u_2^\infty(\hat{\mathbf x};u^i)\quad \mbox{for all}\ \hat{\mathbf x}\in \mathbb S^2$$
with a fixed incident wave $u^i$. If
\begin{equation}\label{eq:assum k}
\mbox{ $k$ is not an eigenvalue of the partial differential operator $\Delta +k^2q$, }
\end{equation}
and $\Omega_m$ is a cuboid ($m=1,2$), we have $\eta_1=\eta _2$. Similarly, when
$$
\Omega_m=\cup_{j^{(m)}=1}^{\ell^{(m)}}\mathcal C_{\mathbf x_{j^{(m)}},\theta_{j^{(m)}}}\cup D_m \in \mathcal T,\quad m=1,2,
$$
if the conditions \eqref{eq:assum k}, \eqref{eq:ass 56a} and \eqref{eq:ass 56b} are fulfilled, one has $\eta_1=\eta _2$.
\end{thm}
\end{document}
\begin{document}
\title{On polytopes associated to factorisations of prime-powers}
{\sl Abstract\footnote{Keywords:
Polytope, Prime-power, Symmetric positive definite matrix.
Math. class: 52B05, 52B11, 52B20}:
We study polytopes associated to factorisations of prime powers.
These polytopes have explicit
descriptions either in terms of their vertices or as
intersections of closed halfspaces associated to their facets.
We give formulae for their $f-$vectors.}
\section{Main results}\label{sectmainresults}
Polytopes have two dual descriptions: They can be given either as
convex hulls of finite sets or as compact sets of the form
$\cap_{f \in\mathcal F} f^{-1}(\mathbb R_+)$ where
$\mathcal F$ is a finite set of affine functions and where
$f^{-1}(\mathbb R_+)$ denotes the closed half-space on which the
affine function $f$ is non-negative.
It is difficult to construct families of polytopes where both
descriptions are explicit. The aim of this paper is to study a
new family of such examples. These polytopes are associated to
vector-factorisations
of prime-powers where a {\it $d-$dimensional vector-factorisation}
of a prime-power
$p^e$ is an integral vector $(v_1,v_2,\dots,v_d)\in\mathbb N^d$
such that $p^e=v_1\cdot v_2\cdots v_d$.
Given a prime power $p^e\in\mathbb N$ and a
natural integer $d\geq 1$, we denote by
$\mathcal P(p^e,d)$ the convex hull of all
$d-$dimensional vector-factorisations
$(v_1,v_2,\dots,v_d)\in\mathbb N^d$ of $p^e$.
The case $e=0$ yields the unique vector-factorisation $(1,1,\dots,1)$
and is without interest.
For $d=2$ and $e\geq 2$ the polytope
$\mathcal P(p^e,2)$ is a $2-$dimensional polygon with
vertices $(1,p^e),(p,p^{e-1}),\dots,(p^{e-1},p),(p^e,1)$.
For $e=1$, the polytope
$\mathcal P(p,d)$ is a $(d-1)-$dimensional simplex with vertices
$(p,1,\dots,1),(1,p,1,\dots,1),\dots,(1,\dots,1,p)$.
The observation that the combinatorial properties of
$\mathcal P(p^e,d)$ are independent of
the prime $p$ in these examples is a general fact:
The combinatorial properties of the polytope $\mathcal P(p^e,d)$
are always independent of the prime number $p$.
It is in fact possible to replace every occurrence of $p$ by an arbitrary
real constant which is strictly greater than $1$.
(The choice of a strictly positive real number which is strictly smaller than
$1$ leads to a combinatorially
equivalent polytope with the opposite orientation.)
Let us also mention that the polytopes $\mathcal P(p^e,d)$
are invariant under permutations of coordinates.
In the sequel we always suppose $e\geq 2$ and $d\geq 2$. This ensures that
$\mathcal P(p^e,d)$ is $d-$dimensional.
In order to state our first main result, the description of
$\mathcal P(p^e,d)$ in terms of inequalities, we consider
for $\lambda\in\{1,\dots,\min(e,d-1)\}$ the set
$\mathcal R_\lambda(d,e)$ consisting of all integral vectors
$\alpha=(\alpha_1,\dots,\alpha_d)\in\mathbb N^d$ such that
$\min(\alpha_1,\dots,\alpha_d)=0$,
$$\max(\alpha_1,\dots,\alpha_d)\ d<e+\sum_{i=1}^d\alpha_i$$
and
$$e+\sum_{i=1}^d\alpha_i\equiv \lambda\pmod d\ .$$
We call $\mathcal R_\lambda(d,e)$ the set of {\it regular vectors
of type $\lambda$}. The union
$$\mathcal R(d,e)=\cup_{\lambda=1}^{\min(e,d-1)}\mathcal R_\lambda(d,e)$$
is the set of {\it regular vectors}.
We denote by $\mu$ the
function on $\mathcal R(d,e)$ defined by the equality
$$\mu(\alpha)\ d+\lambda=e+\sum_{i=1}^d\alpha_i$$
where $\alpha=(\alpha_1,\dots,\alpha_d)$ is an element
of $\mathcal R_\lambda(d,e)$. The formula
$$\mu(\alpha)=\left\lfloor\left(e+\sum_{i=1}^d\alpha_i\right)/d\right\rfloor=
\left( e-\lambda+\sum_{i=1}^d \alpha_i\right)/d$$
shows that $\mu(\alpha)$ is a natural integer.
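As a quick illustration (a brute-force sketch, not used in the proofs), the regular vectors and the values $\mu(\alpha)$ can be enumerated directly from the definitions above. Since $\min(\alpha)=0$ gives $\sum_i\alpha_i\leq \max(\alpha)(d-1)$, the strict inequality forces $\max(\alpha)<e$, so a finite search is exhaustive.

```python
from itertools import product

def regular_vectors(d, e):
    """Enumerate the sets R_lambda(d, e) of regular vectors of type lambda.

    A vector alpha in N^d is regular of type lambda if min(alpha) = 0,
    max(alpha)*d < e + sum(alpha), and e + sum(alpha) == lambda (mod d),
    with lambda in {1, ..., min(e, d-1)}.
    """
    sets = {lam: [] for lam in range(1, min(e, d - 1) + 1)}
    # min(alpha) = 0 gives sum(alpha) <= max(alpha)*(d-1), so the strict
    # inequality forces max(alpha) < e: entries in {0, ..., e-1} suffice.
    for alpha in product(range(e), repeat=d):
        if min(alpha) == 0 and max(alpha) * d < e + sum(alpha):
            lam = (e + sum(alpha)) % d
            if lam in sets:
                sets[lam].append(alpha)
    return sets

def mu(alpha, d, e):
    """mu(alpha) = floor((e + sum(alpha)) / d), a natural integer."""
    return (e + sum(alpha)) // d

# Consistency with the defining equality mu(alpha)*d + lambda = e + sum(alpha).
for d, e in [(2, 2), (3, 4), (4, 3)]:
    for lam, vecs in regular_vectors(d, e).items():
        for alpha in vecs:
            assert mu(alpha, d, e) * d + lam == e + sum(alpha)
```

For instance, `regular_vectors(2, 2)` returns the two regular vectors $(0,1)$ and $(1,0)$ of type $\lambda=1$, matching the three facets of the triangle $\mathcal P(p^2,2)$ together with the single exceptional inequality of Theorem \ref{thminequalities}.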
\begin{thm} \label{thminequalities}
Let $e\geq 2$ and $d\geq 2$ be two integers.
The polytope $\mathcal P(p^e,d)$ is defined by the inequalities
$$\sum_{i=1}^dx_i\leq (d-1)+p^e\ ,$$
$$x_i\geq 1,\ i=1,\dots,d\ ,$$
$$\sum_{i=1}^d p^{\alpha_i}x_i\geq\lambda p^{\mu(\alpha)+1}+
(d-\lambda)p^{\mu(\alpha)},\ \alpha=(\alpha_1,\dots,\alpha_d)\in
\mathcal R_\lambda(d,e)$$
for $\lambda$ in $\{1,\dots,\min(e,d-1)\}$.
This list of inequalities is minimal if $d\geq 3$. For $d=2$,
the minimal list is obtained by removing
the two inequalities $x_1\geq 1$ and $x_2\geq 1$.
\end{thm}
For $\lambda\in\{1,\dots,d-1\}$, we denote by
$$\Delta(\lambda)=
\mathrm{Conv}((\epsilon_1,\dots,\epsilon_d)\in\{0,1\}^d,\
\sum_{i=1}^d \epsilon_i=\lambda)$$
the {\it $(d-1)-$dimensional hypersimplex of parameter
$\lambda$} (see for example page 19 of \cite{Z}).
Faces of codimension $1$ (or $(d-1)-$dimensional faces) of a
$d-$dimensional polytope are called {\it facets}.
The following result describes all facets of $\mathcal P(p^e,d)$
in terms of their vertices.
\begin{thm} \label{thmfacets}
The set of all facets of $\mathcal P(p^e,d)$ (for $e\geq 2$
and $d\geq 3$) is given by the set of convex hulls of the following sets:
$$\{(p^e,1,\dots,1),(1,p^e,1,\dots,1),\dots,(1,\dots,1,p^e)\},\ $$
$$\{(p^{\beta_1},\dots,p^{\beta_d})\in\mathbb N^d\ \vert \beta_i=0,\sum_{j=1}^d
\beta_j=e\},\ i=1,\dots,d$$
$$\{(p^{\beta_1+\epsilon_1},\dots,p^{\beta_d+\epsilon_d})\in\mathbb N^d\ \vert
\ (\epsilon_1,\dots,\epsilon_d)\in\Delta(\lambda)\}$$
with $\lambda=1,\dots,\min(e,d-1)$ and $(p^{\beta_1},\dots, p^{\beta_d})$
going through all $d-$dimensional vector-factorisations of $p^{e-\lambda}$.
\end{thm}
Recall that the $f-$vector of a $d-$dimensional polytope $\mathcal P$
counts the number $f_k$ of $k-$dimensional faces contained in $\mathcal P$.
The following result describes the coefficients of the $f-$vector of
$\mathcal P(p^e,d)$.
\begin{thm} \label{thmfvector}
Let $e\geq 2$ and $d\geq 2$ be two integers.
The numbers $f_0,\dots,f_d$ with $f_k$ counting the number
of $k-$dimensional faces of the polytope
$\mathcal P(p^e,d)$ are given by the formulae
$$\begin{array}{l}
\displaystyle f_0={e+d-1\choose d-1}\ ,\\
\displaystyle f_1={d\choose 2}+{d\choose 2}{e+d-2\choose d-1}\ ,\\
\displaystyle f_k
={d+1\choose k+1}+{d\choose k+1}{e+d-1\choose d}-{d\choose k+1}
{e-k+d-1\choose d},\ 2\leq k<d\\
\displaystyle f_d=1\ .\end{array}$$
\end{thm}
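As a sanity check on these formulae (not part of the proof), one can verify numerically that they satisfy the Euler relation $\sum_{k=0}^{d-1}(-1)^kf_k=1-(-1)^d$ for small values of $d$ and $e$:

```python
from math import comb

def f_vector(d, e):
    """f-vector (f_0, ..., f_d) of P(p^e, d) per the formulae above."""
    f = [0] * (d + 1)
    f[0] = comb(e + d - 1, d - 1)
    f[1] = comb(d, 2) + comb(d, 2) * comb(e + d - 2, d - 1)
    for k in range(2, d):
        # math.comb(n, k) returns 0 when k > n, which handles the last term.
        f[k] = (comb(d + 1, k + 1)
                + comb(d, k + 1) * comb(e + d - 1, d)
                - comb(d, k + 1) * comb(e - k + d - 1, d))
    f[d] = 1
    return f

# Euler's relation for d-polytopes: alternating sum of proper face numbers.
for d in range(2, 7):
    for e in range(2, 7):
        f = f_vector(d, e)
        assert sum((-1) ** k * f[k] for k in range(d)) == 1 - (-1) ** d
```

For example, `f_vector(3, 2)` gives $(6,12,8,1)$, so $\mathcal P(p^2,3)$ has the face numbers of an octahedron.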
The formula for the number $f_0$ of vertices is easy:
Identification of a vector-factorisation $(p^{\beta_1},p^{\beta_2},\dots,
p^{\beta_d})$ with the monomial $x_1^{\beta_1}x_2^{\beta_2}\cdots
x_d^{\beta_d}$ of $\mathbb Q[x_1,\dots,x_d]$ shows that
$f_0$ is the dimension of the vector space of homogeneous
polynomials of degree $e$ in $d$ variables.
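This count can be confirmed by brute force; the following sketch (an illustration, not part of the paper's argument) enumerates the exponent vectors directly:

```python
from itertools import product
from math import comb

def vertex_count(e, d):
    """Number of d-dimensional vector-factorisations of p^e, i.e. of
    exponent vectors (b_1, ..., b_d) in N^d with b_1 + ... + b_d = e."""
    return sum(1 for b in product(range(e + 1), repeat=d) if sum(b) == e)

# Agrees with the stars-and-bars count binom(e+d-1, d-1) = f_0.
for e in range(2, 6):
    for d in range(2, 5):
        assert vertex_count(e, d) == comb(e + d - 1, d - 1)
```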
The plan of the paper is as follows:
Section \ref{sectgen} describes a few generalisations
of the polytopes $\mathcal P(p^e,d)$.
The next three sections are devoted to the proof of Theorem
\ref{thminequalities} and Theorem \ref{thmfacets}.
The idea for proving Theorem \ref{thminequalities} is as follows:
We show first that all inequalities of
Theorem \ref{thminequalities} hold and that they correspond to facets
of $\mathcal P(p^e,d)$.
Section \ref{sectregfacets} contains the details
for facets associated to regular vectors, called {\it
regular facets}. Section \ref{sectexcfacets} is devoted
to $d+1$ other obvious facets, called {\it exceptional facets}.
The proofs contain explicit descriptions of all facets
and imply easily Theorem \ref{thmfacets} from
Theorem \ref{thminequalities}.
It remains to show that $\mathcal P(p^e,d)$ has no ``exotic'' facets
(neither regular nor exceptional).
This is achieved in Section \ref{sectallfacets}
by showing that any facet $f$ of the $(d-1)-$dimensional polytope
defined
by a (regular or exceptional) facet $F$ of $\mathcal P$ is also contained
in a second (regular or exceptional) facet $F'$ of $\mathcal P$.
This implies the completeness of the list of facets.
Finally, Section \ref{sectfvector} contains a proof of Theorem
\ref{thmfvector}.
\section{Generalisations}\label{sectgen}
A straightforward generalisation of the polytope $\mathcal P(p^e,d)$
is obtained by considering the polytopes with vertices
given by vector-factorisations of an
arbitrary natural integer $N$. The vertices of
such a polytope $\mathcal P(N,d)$ are the
$$\prod_{j=1}^h {e_j+d-1\choose d-1}$$
different $d-$dimensional vector-factorisations of
$N=p_1^{e_1}\dots p_h^{e_h}$ where $p_1<p_2<\dots<p_h$ are all
prime-divisors of $N$. A further generalisation is given by replacing
$N$ with a monomial $\mathbf T=T_1^{e_1}\cdots T_h^{e_h}$ and by
considering real evaluations $T_i=t_i>0$ of
all $d-$dimensional vector-factorisations of $\mathbf T$
(defined in the obvious way). The combinatorial
type of these polytopes depends on these evaluations (or on the
primes $p_1,\dots,p_h$ involved in $N=p_1^{e_1}\cdots p_h^{e_h}$).
There should however exist a ``limit-type'' if the increasing sequence
$p_1<p_2<\dots<p_h$ formed by all prime-divisors of $N$ grows extremely fast.
\begin{rem} One can also consider polytopes defined as the convex hull of
vector-factorisations (of a given integer) subject to various
restrictions. A perhaps interesting case is given by considering
only factorisations with decreasing coordinates. The number of
such decreasing $d-$dimensional vector-factorisations of $p^e$
equals the number of partitions of $e$ having at most $d$
parts if $N=p^e$ is the $e-$th power of a prime $p$.
\end{rem}
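The equality of counts in the remark above can be checked computationally; the following sketch compares decreasing vector-factorisations of $p^e$ with a standard partition count (via the conjugate description, partitions into parts of size at most $d$):

```python
from itertools import product

def decreasing_factorisations(e, d):
    """Exponent vectors b_1 >= ... >= b_d >= 0 with sum equal to e,
    i.e. decreasing d-dimensional vector-factorisations of p^e."""
    return [b for b in product(range(e + 1), repeat=d)
            if sum(b) == e and all(b[i] >= b[i + 1] for i in range(d - 1))]

def partitions_at_most_d_parts(e, d):
    """Partitions of e into at most d parts, counted via the conjugate
    problem (parts of size at most d) by dynamic programming."""
    table = [1] + [0] * e
    for part in range(1, d + 1):
        for n in range(part, e + 1):
            table[n] += table[n - part]
    return table[e]

for e in range(2, 8):
    for d in range(2, 6):
        assert len(decreasing_factorisations(e, d)) == \
            partitions_at_most_d_parts(e, d)
```

For example, $p^4$ has the three decreasing factorisations $(p^4,1)$, $(p^3,p)$, $(p^2,p^2)$ in dimension $2$, matching the three partitions $4$, $3+1$, $2+2$ of $4$ into at most two parts.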
The polytopes $\mathcal P(N,d)$ have the following natural
generalisation: Consider an integral symmetric matrix $A$ of size $d\times d$.
For a given natural number $N$, consider the set $\mathcal D_A(N)$ of
all integral diagonal
matrices $D$ of size $d\times d$ such that $A+D$ is positive definite
and has determinant $N$. One can show that
$\mathcal D_A(N)$ is always finite.
Brunn--Minkowski's inequality for mixed volumes (see e.g. Theorem 6.2
in \cite{SY}) states that
$$\det\left(A+\sum_{D\in\mathcal D_A(N)}\lambda_D D\right)^{1/d}\geq
\sum_{D\in\mathcal D_A(N)}\lambda_D \det(A+D)^{1/d}=N^{1/d}$$
if $\sum_{D\in\mathcal D_A(N)}\lambda_D=1$ with $\lambda_D\geq 0$.
This inequality is strict except in
the obvious case where $\lambda_D$ is equal to $1$ for a unique
matrix $D\in\mathcal D_A(N)$.
This implies that $\mathcal D_A(N)$ is the set of vertices of
the polytope $\mathcal P_A(N)$ defined as the convex hull of
$\mathcal D_A(N)$.
The polytope
$\mathcal P(N,d)$ discussed above
corresponds to the case where $A$ is the zero matrix of size $d\times d$.
The choice of a Dynkin matrix of size $d\times d$ for a root system of
type $A$ and of $N=1$ leads to polytopes having ${2d\choose d}/(d+1)$
vertices, see for example the solution of problem (18) in
\cite{CC} or \cite{LN}.
These polytopes are different from the Stasheff polytopes
(or associahedra) since they are of dimension $d$ for $d\geq 3$.
Another natural and perhaps interesting choice is given by considering
for $A$ (an integral multiple of) the all one matrix.
The determination of the number of vertices
of $\mathcal P_A(N)$ (or even of $\mathcal P_A(1)$) is perhaps
a non-trivial problem, say, for $A$ the adjacency matrix of a
connected finite simple graph.
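For small $d$, the count ${2d\choose d}/(d+1)$ can be confirmed by a brute-force search. The following Python sketch (an illustration of ours, not part of the text; the search range for the diagonal entries is an ad hoc assumption that suffices for these sizes) enumerates $\mathcal D_A(1)$ for the Cartan matrix $A$ of type $A_d$:

```python
from itertools import product
from fractions import Fraction
from math import comb

def dynkin_A(d):
    # Cartan matrix of type A_d: 2 on the diagonal, -1 between neighbours.
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(d)]
            for i in range(d)]

def det(M):
    # Exact determinant by Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            sign = -sign
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    res = Fraction(sign)
    for k in range(n):
        res *= M[k][k]
    return res

def pos_def(M):
    # Sylvester's criterion: all leading principal minors are positive.
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, len(M) + 1))

def D_A_count(d, N=1, bound=8):
    # Enumerate integral diagonal matrices D with A+D positive definite
    # of determinant N.  The diagonal entries of A+D must be positive,
    # so d_i >= -1; the upper bound is an ad hoc assumption.
    A = dynkin_A(d)
    count = 0
    for diag in product(range(-1, bound), repeat=d):
        M = [row[:] for row in A]
        for i in range(d):
            M[i][i] += diag[i]
        if pos_def(M) and det(M) == N:
            count += 1
    return count

for d in (2, 3, 4):
    print(d, D_A_count(d), comb(2 * d, d) // (d + 1))
```

For $d=2$ the two solutions are the diagonal matrices giving $A+D$ with diagonal $(1,2)$ and $(2,1)$; the counts $2,5,14$ are the Catalan numbers.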
\section{Regular facets}\label{sectregfacets}
We show first that all inequalities of Theorem \ref{thminequalities}
associated to regular vectors in $\mathcal R_\lambda(d,e)$
are satisfied on $\mathcal P(p^e,d)$.
We show then that every such inequality is
sharp on a subset of vertices in $\mathcal P(p^e,d)$
spanning a polytope affinely equivalent to a $(d-1)-$dimensional
hypersimplex. All these inequalities define thus facets and are
necessary. We call a facet $F_\alpha$ associated to such a regular
vector $\alpha\in\mathcal R_\lambda(d,e)$ a {\it regular facet}.
\begin{prop} \label{propineqregular}
Let $\alpha\in \mathcal R_\lambda(d,e)$
be a regular vector. Setting
$\mu=\mu(\alpha)$, we have
$$\sum_{i=1}^d p^{\alpha_i}x_i\geq \lambda p^{\mu+1}+(d-\lambda)p^\mu$$
for every element $(x_1,\dots,x_d)$ of $\mathcal P(p^e,d)$.
\end{prop}
{\bf Proof} Given $\alpha=(\alpha_1,\dots,\alpha_d)$ in $\mathcal
A_\lambda=\mathcal R_\lambda(d,e)$,
we denote by $l_\alpha$ the linear form defined by
$$l_\alpha(x_1,\dots,x_d)=\sum_{i=1}^dp^{\alpha_i}x_i\ .$$
We have to show that $l_\alpha(x)\geq \lambda p^{\mu+1}+(d-\lambda)p^\mu$
for $x$ in $\mathcal P=\mathcal P(p^e,d)$ and $\mu=\mu(\alpha)$.
Since $l_\alpha$ is linear, it is enough to establish the inequality
for all vertices of $\mathcal P$.
Let $v=(p^{\beta_1},\dots,p^{\beta_d})$ be a vertex of $\mathcal P$
realising the minimum $l_\alpha(v)=\min l_\alpha(\mathcal P)$. Set
$$\begin{array}{l}
\displaystyle a=\min(\alpha_1+\beta_1,\dots,\alpha_d+\beta_d)\ ,\\
\displaystyle A=\max(\alpha_1+\beta_1,\dots,\alpha_d+\beta_d)\ .\end{array}$$
Choose indices $i,j$ in $\{1,\dots,d\}$ such that
$a=\alpha_i+\beta_i$ and $A=\alpha_j+\beta_j$.
If $a=A$, the equality
$$dA=\sum_{k=1}^d(\alpha_k+\beta_k)=e+\sum_{k=1}^d\alpha_k\equiv \lambda
\pmod d$$
shows $\lambda=0$, in contradiction with $\lambda\in\{1,\dots,d-1\}$.
We claim next that $\beta_j\geq 1$. Indeed, we have otherwise
$A=\alpha_j$ and
$$e+\sum_{k=1}^d\alpha_k=\sum_{k=1}^d(\alpha_k+\beta_k)\leq A d=
\max(\alpha_1,\dots,\alpha_d)\ d$$
in contradiction with the inequality
$\max(\alpha_1,\dots,\alpha_d)\ d<e+\sum_{k=1}^d \alpha_k$ satisfied by
$\alpha\in\mathcal R_\lambda$.
We consider now the vertex
$$\tilde v=(p^{\tilde \beta_1},\dots,p^{\tilde \beta_d})$$
where $\tilde\beta_k=\beta_k$ if $k\not\in \{i,j\},\ \tilde
\beta_i=\beta_i+1=a+1$ and $\tilde \beta_j=\beta_j-1=A-1$.
We have
$$l_\alpha(v)-l_\alpha(\tilde v)=\sum_{k=1}^d p^{\alpha_k+\beta_k}-
\sum_{k=1}^d p^{\alpha_k+\tilde\beta_k}=p^A+p^a-(p^{A-1}+p^{a+1})\ .$$
Since $p>1$ and $A>a$ we have
$$p^A+p^a-(p^{A-1}+p^{a+1})=(p^{A-1}-p^a)(p-1)\geq 0\ .$$
By minimality of $l_\alpha(v)$ this difference must vanish, which forces $A-1=a$.
In order to compute the value of $l_\alpha(v)$, we use
the equalities
$$\sum_{i=1}^d(\alpha_i+\beta_i)=e+\sum_{i=1}^d\alpha_i=\lambda+\mu d\ .$$
Since the vector
$w=(\alpha_1+\beta_1,\dots,\alpha_d+\beta_d)$ has all its coefficients
in $\{a,a+1=A\}$, we get $a=\mu$ and $w$ takes the value $\mu$ with
multiplicity $d-\lambda$ and $\mu+1$ with multiplicity $\lambda$.
This shows $l_\alpha(v)=\lambda p^{\mu+1}+(d-\lambda)p^\mu$.
$\Box$
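For small parameters, Proposition \ref{propineqregular} (and the sharpness established below in Corollary \ref{corfacet}) can be verified by direct enumeration. In the following Python sketch (ours, for illustration), the regular vectors of type $\lambda$ are generated through the map $\psi$ of Proposition \ref{propbij}:

```python
def compositions(total, d):
    # All (b_1,...,b_d) in N^d with b_1+...+b_d = total.
    if d == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, d - 1):
            yield (first,) + rest

def check_regular_inequality(p, d, e):
    # Vertices of P(p^e,d): d-dimensional vector-factorisations of p^e.
    vertices = [tuple(p ** b for b in beta) for beta in compositions(e, d)]
    for lam in range(1, min(e, d - 1) + 1):
        for beta in compositions(e - lam, d):
            B = max(beta)
            alpha = tuple(B - b for b in beta)   # psi of Prop. propbij
            mu = (e - lam + sum(alpha)) // d     # here mu = B
            bound = lam * p ** (mu + 1) + (d - lam) * p ** mu
            m = min(sum(p ** a * x for a, x in zip(alpha, v)) for v in vertices)
            assert m == bound                    # inequality holds and is sharp
    return True

assert check_regular_inequality(2, 3, 4)
assert check_regular_inequality(3, 4, 3)
```

For instance, with $p=2$, $d=3$, $e=4$ and $\alpha=(0,0,1)$ (so $\lambda=2$, $\mu=1$) the minimum of $l_\alpha$ over the $15$ vertices is $2\cdot 4+1\cdot 2=10$, attained e.g.\ at $(4,2,4)$.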
\begin{prop} \label{propbij}
The map $(\alpha_1,\dots,\alpha_d)\longmapsto
(p^{\mu-\alpha_1},\dots,p^{\mu-\alpha_d})$
(where $\mu=\lfloor (e+\sum_{i=1}^d \alpha_i)/d\rfloor=
(e-\lambda+\sum_{i=1}^d\alpha_i)/d$) is a one-to-one map from
the set $\mathcal R_\lambda(d,e)$ of regular vectors of type $\lambda$
onto the set of $d-$dimensional vector-factorisations of $p^{e-\lambda}$.
The inverse map is given by $(p^{\beta_1},\dots,p^{\beta_d})
\longmapsto (B-\beta_1,\dots,B-\beta_d)$
where $B=\max(\beta_1,\dots,\beta_d)$.
\end{prop}
{\bf Proof}
We define $\mathcal F_\lambda=\mathcal F_\lambda(d,e)$ as the
finite set of all integral vectors
$(\beta_1,\dots,\beta_d)\in\mathbb N^d$ such that
$\sum_{i=1}^d\beta_i=e-\lambda$. The map
$$\mathcal F_\lambda\ni(\beta_1,\dots,\beta_d)\longmapsto
(p^{\beta_1},\dots,p^{\beta_d})$$
yields a bijection between $\mathcal F_\lambda$ and
the set of $d-$dimensional vector-factorisations of $p^{e-\lambda}$.
We show first the inclusions $\varphi(\mathcal R_\lambda)\subset
\mathcal F_\lambda$ and $\psi(\mathcal F_\lambda)\subset
\mathcal R_\lambda$ where
$$\varphi(\alpha_1,\dots,\alpha_d)=(\mu-\alpha_1,\dots,\mu-\alpha_d)$$
with $\mu=\left(e-\lambda+\sum_{i=1}^d\alpha_i\right)/d$ and
$$\psi(\beta_1,\dots,\beta_d)=(B-\beta_1,\dots,B-\beta_d)$$
with $(\beta_1,\dots,\beta_d)\in\mathbb N^d$ such that
$\sum_{i=1}^d \beta_i=e-\lambda$ and $B=\max(\beta_1,\dots,\beta_d)$.
The inclusion $\varphi(\mathcal R_\lambda)\subset
\mathcal F_\lambda$ follows from
$$\max(\alpha_1,\dots,\alpha_d)\leq \lfloor(e+\sum_{i=1}^d\alpha_i)/d\rfloor
=\mu=(e-\lambda+\sum_{i=1}^d\alpha_i)/d$$
showing $\varphi(\alpha_1,\dots,\alpha_d)\in\mathbb N^d$ and from
$$\sum_{i=1}^d(\mu-\alpha_i)=\mu d-\sum_{i=1}^d\alpha_i=e-\lambda\ .$$
Consider now $(\beta_1,\dots,\beta_d)\in \mathcal F_\lambda$.
We have $(B-\beta_1,\dots,B-\beta_d)\in \mathbb N^d$ and
$\min(B-\beta_1,\dots,B-\beta_d)=0$ where $B=\max(\beta_1,\dots,\beta_d)$.
We have moreover the inequalities
$$e+\sum_{i=1}^d(B-\beta_i)=\lambda+Bd>Bd\geq
\max(B-\beta_1,\dots,B-\beta_d)\ d$$
since $\lambda>0$. Finally, the computation
$$e+\sum_{i=1}^d(B-\beta_i)=\lambda+Bd\equiv \lambda\pmod d$$
proves the inclusion of $\psi(\beta_1,\dots,\beta_d)=(B-\beta_1,\dots,
B-\beta_d)$ in $\mathcal R_\lambda$.
The computation of
$$\mu=\left(e-\lambda+\sum_{i=1}^d(B-\beta_i)\right)/d=B$$
shows that $\varphi\circ \psi$ is the identity map of $\mathcal F_\lambda$.
Consider $(\alpha_1,\dots,\alpha_d)\in \mathcal R_\lambda$.
Since $\min(\alpha_1,\dots,\alpha_d)=0$, we have
$\max(\mu-\alpha_1,\dots,\mu-\alpha_d)=\mu$. This implies that
$$\psi\circ\varphi(\alpha_1,\dots,\alpha_d)=(\mu-(\mu-\alpha_1),\dots,
\mu-(\mu-\alpha_d))=(\alpha_1,\dots,\alpha_d)\ ,$$
i.e., that $\psi\circ\varphi$ is the identity map of the set $\mathcal R_\lambda$.
$\Box$
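The bijection can be tested numerically. The sketch below (ours) generates $\mathcal R_\lambda(d,e)$ from the properties used in the proofs (minimum zero, congruence modulo $d$, strict maximum inequality), which we take here as the defining conditions:

```python
from itertools import product
from math import comb

def regular_vectors(d, e, lam):
    # Our reading of R_lambda(d,e), from the properties used in the proofs:
    # alpha in N^d with min(alpha)=0, d*max(alpha) < e+sum(alpha),
    # and e+sum(alpha) congruent to lam modulo d.
    out = []
    for alpha in product(range(e), repeat=d):
        s = e + sum(alpha)
        if min(alpha) == 0 and d * max(alpha) < s and s % d == lam:
            out.append(alpha)
    return out

def check_bijection(d, e, lam):
    R = regular_vectors(d, e, lam)
    # F_lambda: integral vectors in N^d with coordinate sum e-lambda.
    F = [beta for beta in product(range(e - lam + 1), repeat=d)
         if sum(beta) == e - lam]

    def phi(alpha):
        mu = (e - lam + sum(alpha)) // d
        return tuple(mu - a for a in alpha)

    def psi(beta):
        B = max(beta)
        return tuple(B - b for b in beta)

    assert sorted(phi(a) for a in R) == sorted(F)  # phi is a bijection onto F_lambda
    assert all(psi(phi(a)) == a for a in R)        # psi inverts phi
    assert len(R) == comb(e - lam + d - 1, d - 1)  # cardinality of F_lambda
    return True

for d, e in ((3, 4), (4, 4), (2, 5)):
    for lam in range(1, min(e, d - 1) + 1):
        assert check_bijection(d, e, lam)
```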
\begin{prop} \label{propaffhypersimplex}
(i) Consider an integral vector $(\beta_1,\dots,\beta_d)\in\mathbb N^d$,
an integer $\lambda\in\{1,\dots,d-1\}$ and a prime $p$.
The affine isomorphism
$$(x_1,\dots,x_d)\longmapsto \left(\frac{
x_1-p^{\beta_1}}{p^{\beta_1+1}-p^{\beta_1}},\dots,
\frac{x_d-p^{\beta_d}}{p^{\beta_d+1}-p^{\beta_d}}\right)$$
of $\mathbb R^d$ induces a one-to-one map between the set
$$S=\{(p^{\beta_1+\epsilon_1},\dots,p^{\beta_d+\epsilon_d})\in
\mathbb N^d\ \vert\
(\epsilon_1,\dots,\epsilon_d)
\in\Delta(\lambda)\}$$
and the set
$$\left\{(\epsilon_1,\dots,\epsilon_d)
\in\{0,1\}^d,\ \sum_{i=1}^d \epsilon_i=\lambda\right\}$$
of vertices of the $(d-1)-$dimensional hypersimplex
$\Delta(\lambda)$.
\ \ (ii) The facets of the convex hull of $S$
are of the form $x_i=p^{\beta_i+\epsilon}$ for $i=1,\dots,d$ and $\epsilon\in
\{0,1\}$ except if $\lambda=1$ or $\lambda=d-1$ where all facets are of the
form $x_i=p^{\beta_i}$ respectively $x_i=p^{\beta_i+1}$.
\end{prop}
{\bf Proof} We leave the easy proof of assertion (i) to the reader.
Assertion (ii) follows from the peculiar form of the affine isomorphism
introduced in assertion (i) and from the observation
that facets of $\Delta(\lambda)$
are given as intersections of the hyperplane defined by the equation
$\sum_{i=1}^d x_i=\lambda$ with one of the $2d$ facets of the $d-$dimensional
cube $[0,1]^d$.
$\Box$
\begin{cor} \label{corfacet} The inequality
$$\sum_{i=1}^dp^{\alpha_i} x_i\geq\lambda p^{\mu+1}+(d-\lambda)p^\mu$$
associated to a regular vector $(\alpha_1,\dots,\alpha_d)$ in
$\mathcal R_\lambda(d,e)$
is sharp on the set
$$S_\alpha=\{(p^{\mu-\alpha_1+\epsilon_1},\dots,p^{\mu-\alpha_d
+\epsilon_d})\
\vert\ (\epsilon_1,\dots,\epsilon_d)\in \Delta(\lambda)\}$$
of vertices of $\mathcal P(p^e,d)$.
The convex hull of $S_\alpha$ is a facet of $\mathcal P(p^e,d)$
which is affinely equivalent to the
$(d-1)-$dimensional hypersimplex $\Delta(\lambda)$.
\end{cor}
{\bf Proof} The obvious equalities
$$\begin{array}{l}
\displaystyle
\mu=\min(\mu-\alpha_1+\epsilon_1+\alpha_1,\dots,\mu-\alpha_d+\epsilon_d
+\alpha_d)\\
\displaystyle
\mu+1=\max(\mu-\alpha_1+\epsilon_1+\alpha_1,\dots,\mu-\alpha_d+\epsilon_d
+\alpha_d)\end{array}$$
and the definition of $\Delta(\lambda)$ show that $S_\alpha$
consists exactly of all vertices of $\mathcal P(p^e,d)$ such that $A=a+1$
with $a,A$ as in the proof of Proposition \ref{propineqregular}.
The arguments of the proof of Proposition \ref{propineqregular}
imply thus that $S_\alpha$ is the subset of vertices of
$\mathcal P(p^e,d)$ on which the linear form
$$x=(x_1,\dots,x_d)\longmapsto l_\alpha(x)=\sum_{i=1}^d p^{\alpha_i}x_i$$
is minimal. This shows that the convex hull $F_\alpha$ of the set $S_\alpha$
defines a $k-$dimensional face of $\mathcal P(p^e,d)$ for some integer
$k\in\{0,\dots,d-1\}$. Assertion (i) of Proposition
\ref{propaffhypersimplex} implies now that $F_\alpha$ is affinely
equivalent to a hypersimplex $\Delta(\lambda)$ of
dimension $d-1$. In particular, the convex hull $F_\alpha$ of
$S_\alpha$ is a facet of $\mathcal P(p^e,d)$.
$\Box$
\section{Exceptional facets}\label{sectexcfacets}
We leave it to the reader to check that we have
$$\sum_{i=1}^dx_i\leq d-1+p^e$$
for $(x_1,\dots,x_d)\in\mathcal P(p^e,d)$. The details are straightforward
and involve computations similar to those used for
proving Proposition \ref{propineqregular}.
Equality holds for the elements of the exceptional facet $F_\infty$
given by the $(d-1)-$dimensional simplex with vertices
$(p^e,1,\dots,1),\dots,(1,\dots,1,p^e)$.
The inequalities $x_i\geq 1,\ i=1,\dots,d$ hold obviously for
$(x_1,\dots,x_d)\in
\mathcal P(p^e,d)$. For $d\geq 3$,
these inequalities define $d$ exceptional facets $F_1,\dots,F_d$
which are all affinely equivalent to the $(d-1)-$dimensional
polytope $\mathcal P(p^e,d-1)$.
\begin{rem}
The $d+1$ inequalities associated to exceptional facets define
a $d-$dimensional simplex with vertices
$(1,1,\dots,1,1),(p^e,1,\dots,1),\dots,(1,\dots,1,p^e)$.
This simplex contains $\mathcal P(p^e,d)$.
\end{rem}
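These inequalities and their equality sets are easy to confirm by enumeration in small cases; the following Python sketch (ours, for illustration) does so:

```python
def vertices(p, d, e):
    # All d-dimensional vector-factorisations of p^e.
    if d == 1:
        return [(p ** e,)]
    return [(p ** b,) + rest
            for b in range(e + 1)
            for rest in vertices(p, d - 1, e - b)]

def check_exceptional(p, d, e):
    V = vertices(p, d, e)
    bound = d - 1 + p ** e
    assert all(min(v) >= 1 for v in V)      # the d facets F_1,...,F_d
    assert all(sum(v) <= bound for v in V)  # the facet F_infinity
    # Equality holds exactly at the d vertices of F_infinity.
    tight = {v for v in V if sum(v) == bound}
    expected = {tuple(p ** e if i == j else 1 for i in range(d))
                for j in range(d)}
    assert tight == expected
    return True

assert check_exceptional(2, 3, 4)
assert check_exceptional(3, 4, 3)
```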
\section{Proof of Theorem \ref{thminequalities} and \ref{thmfacets}}
\label{sectallfacets}
The main tool for proving Theorem \ref{thminequalities}
is the following obvious and well-known result.
\begin{prop} \label{propcriterefaces} Let $\mathcal F$ be a non-empty
set of facets of a polytope $\mathcal P$.
The set $\mathcal F$ contains all facets of $\mathcal P$
if and only if for every element $F\in \mathcal F$ and for every facet
$f$ of $F$, there exists a distinct element $F'\not=F$ in $\mathcal F$
such that $f$ is also a facet of $F'$.
\end{prop}
{\bf Proof} Call two facets of a $d-$dimensional polytope $\mathcal P$
{\it adjacent} if they intersect in a common $(d-2)-$face of $\mathcal P$.
Consider the graph with vertices formed by all facets of $\mathcal P$
and edges given by adjacent pairs of facets. This graph is connected
and its edges are in bijection with $(d-2)-$faces of $\mathcal P$
since the intersection of three distinct facets is of dimension $\leq d-3$.
Proposition \ref{propcriterefaces}
boils now down to the trivial observation that a non-empty
subset $\mathcal V'$ of vertices of a connected graph $\Gamma$ coincides with
the set of vertices of $\Gamma$ if and only if for every vertex $v$ of
$\mathcal V'$, the set $\mathcal V'$ contains also all vertices of $\Gamma$
which are adjacent to $v$.
$\Box$
Given a subset $S$ of vertices of $\mathcal P(p^e,d)$,
we consider
$$m_i=\min_{(p^{\beta_1},\dots,p^{\beta_d})\in S}(\beta_i)$$
and
$$M_i=\max_{(p^{\beta_1},\dots,p^{\beta_d})\in S}(\beta_i)\ .$$
We have thus
$$m_i\leq \beta_i\leq M_i$$
for every element $(p^{\beta_1},\dots,p^{\beta_d})$
of $S$ and these inequalities are sharp.
We set $m(S)=(m_1,\dots,m_d)$ and $M(S)=(M_1,\dots,M_d)$.
An important ingredient of all proofs is the following result.
\begin{lem} \label{lemregsubset}
Let $S$ be a subset of vertices of $\mathcal P(p^e,d)$
such that $M-m\in\{0,1\}^d$
where $m=m(S)$ and $M=M(S)$ are as above.
Then $S$ is contained in the set of vertices of a
regular facet of $\mathcal P(p^e,d)$.
\end{lem}
{\bf Proof} Set $\lambda=e-\sum_{i=1}^dm_i$.
Suppose first $\lambda=0$. This implies that $S$ is reduced to a
unique element $v=(p^{m_1},\dots,p^{m_d})$. Choose two distinct
indices $i,j$ in $\{1,\dots,d\}$
such that $m_i>0$ in order to construct
the element $\tilde v=(p^{\tilde m_1},\dots,p^{\tilde m_d})$
where $\tilde m_k=m_k$ if $k\not=i,j$, $\tilde m_i=m_i-1,\tilde m_j=m_j+1$.
The set $\tilde S=\{v,\tilde v\}$ contains $S$ and
satisfies the conditions of Lemma \ref{lemregsubset}
with $\lambda=1$. We may thus assume $\lambda\geq 1$.
The obvious identity $\prod_{i=1}^d p^{\beta_i}=p^e$
shows the equality $\sum_{i=1}^d \beta_i=\lambda+\sum_{i=1}^d
m_i$ for every element $(p^{\beta_1},\dots,p^{\beta_d})$ of $S$.
The inclusion $M-m\in\{0,1\}^d$ shows
$m_i\leq \beta_i\leq M_i\leq m_i+1$ and implies $\lambda\leq d$.
Moreover, if $\lambda=d$ then $\beta_i=m_i+1$
for every element $(p^{\beta_1},\dots,p^{\beta_d})$ of $S$.
This shows that $S$ is reduced to the unique element
$(p^{m_1+1},\dots,p^{m_d+1})$ and contradicts the definition
of $m=(m_1,\dots,m_d)$. We have thus $\lambda\in\{1,\dots,d-1\}$.
Up to enlarging the set $S$, we can assume
$$S=\{(p^{m_1+\epsilon_1},\dots,p^{m_d+\epsilon_d})\in\mathbb N^d\ \vert\
(\epsilon_1,\dots,\epsilon_d)\in \Delta(\lambda)\}$$
for some integer $\lambda\in\{1,\dots,d-1\}$.
Consider the function $\psi$ defined by $\psi(m_1,\dots,m_d)=
(\mu-m_1,\dots,\mu-m_d)$ where $\mu=\max(m_1,\dots,m_d)$,
see the proof of Proposition \ref{propbij}.
Proposition \ref{propbij} shows that we have
$\psi(m_1,\dots,m_d)=(\mu-m_1,\dots,
\mu-m_d)\in\mathcal R_\lambda(d,e)$.
Corollary \ref{corfacet} shows now that $S$ is
the vertex set of the regular facet defined by the regular vector
$\psi(m_1,\dots,m_d)=(\mu-m_1,\dots,\mu-m_d)$ of
$\mathcal R_\lambda(d,e)$.
$\Box$
{\bf Proof of Theorem \ref{thminequalities}}
Corollary \ref{corfacet} and Section \ref{sectexcfacets}
show that all inequalities
of Theorem \ref{thminequalities} are satisfied and necessary if
$d\geq 3$. We have thus to show that every facet of $\mathcal P(p^e,d)$
is either in $\{F_\infty,F_1,F_2,\dots,F_d\}$ or is among the set
$\{F_\alpha\}_{\alpha\in\mathcal R}$ of
regular facets indexed by the set $\mathcal R=\cup_{\lambda=1}
^{\min(e,d-1)}\mathcal R_\lambda(d,e)$ of regular vectors.
We show this using Proposition \ref{propcriterefaces}
with respect to the set of facets
$$\mathcal F=
\{F_\infty\}\cup\{F_1,\dots,F_d\}\cup\{F_\alpha\}_{\alpha\in\mathcal R}
\ .$$
We consider first the exceptional facet $F_\infty$.
A facet $f$ of $F_\infty$ is defined by
an additional equality $x_i=1$ for some $i\in\{1,\dots,d\}$
and we have thus $f\subset F_\infty\cap F_i$ where $F_i$ is the
exceptional facet defined by $x_i=1$.
Consider next an exceptional facet $F_i$ defined by $x_i=1$ for some
$i\in\{1,\dots,d\}$. Such a facet $F_i$ coincides with the polytope
$\mathcal P'=\mathcal P(p^e,d-1)$ of all $(d-1)-$dimensional
vector-factorisations of $p^e$.
Facets of $F_i$ are thus in bijection with facets of $\mathcal P'$.
Using induction on $d$ (the initial case $d=2$ is easy), we thus
know the complete list of facets of $F_i$.
Consider first a facet $f$ of $F_i$ corresponding to an ordinary facet
of $\mathcal P'$. Its vertices satisfy the conditions of Lemma
\ref{lemregsubset} and are thus also contained in a regular facet of
$\mathcal P$.
A facet $f$ of $F_i$ corresponding to the exceptional facet $F'_\infty$
of $\mathcal P'$ is also contained in the exceptional facet
$F_\infty$ of $\mathcal P$.
All other exceptional facets of $\mathcal P'$
are given by $x_i=x_j=1$ for
some $j\not=i$ and are thus contained in $F_i\cap F_j$.
(Note that the last case never arises for $d=3$.)
We consider now a regular facet $F$ of type $\lambda$
with vertices
$$\{(p^{\beta_1+\epsilon_1},\dots,p^{\beta_d+\epsilon_d})\ \vert \
(\epsilon_1,\dots,\epsilon_d)\in \Delta(\lambda)\}\ .$$
Since $F$
is affinely equivalent, via multiplication by a diagonal matrix
followed by a translation,
to the $(d-1)-$dimensional hypersimplex $\Delta(\lambda)$,
a facet $f$ of $F$ is given by an additional equality $x_i=c$
with $i$ in $\{1,\dots,d\}$ and $c$ in $\{p^{\beta_i},
p^{\beta_i+1}\}$.
If $c=1$ then $f$ is also contained in the exceptional facet $F_i$.
If $c=p^{\beta_i}>1$ and $\lambda<d-1$ then $f$ belongs also to the
regular facet
of type $\lambda+1$ with vertices
$$\{(p^{\beta_1+\epsilon_1},\dots,p^{\beta_{i-1}+\epsilon_{i-1}},p^{
\beta_i-1+\epsilon_i},p^{\beta_{i+1}+\epsilon_{i+1}},\dots,p^{\beta_d+
\epsilon_d})\ \vert\ (\epsilon_1,\dots,\epsilon_d)\in \Delta(\lambda+1)\}\ .$$
The case $c=p^{\beta_i}>1$ and $\lambda=d-1$ implies that
the set of vertices of $f$ is reduced to
$(p^{\beta_1+1},\dots,p^{\beta_{i-1}+1},p^{\beta_i},p^{\beta_{i+1}+1},
\dots,p^{\beta_d+1})$. This is impossible for $d\geq 3$.
If $c=p^{\beta_i+1}$ and $\lambda>1$ then $f$ belongs also to the
regular facet of type $\lambda-1$ with vertices
$$\{(p^{\beta_1+\epsilon_1},\dots,p^{\beta_{i-1}+\epsilon_{i-1}},p^{
\beta_i+1+\epsilon_i},p^{\beta_{i+1}+\epsilon_{i+1}},\dots,p^{\beta_d+
\epsilon_d})\ \vert\ (\epsilon_1,\dots,\epsilon_d)\in \Delta(\lambda-1)\}\ .$$
In the case $c=p^{\beta_i+1}$ and $\lambda=1$ the set of vertices
of $f$ is reduced to
$(p^{\beta_1},\dots,p^{\beta_{i-1}},p^{\beta_i+1},p^{\beta_{i+1}},
\dots,p^{\beta_d})$ and this is impossible for $d\geq 3$.
Proposition \ref{propcriterefaces}
shows that $\mathcal F$ is the complete list of facets for
$\mathcal P$. This ends the proof of Theorem \ref{thminequalities}.
$\Box$
{\bf Proof of Theorem \ref{thmfacets}} Theorem \ref{thmfacets}
follows easily from Theorem \ref{thminequalities}
and from the explicit descriptions of regular facets given by
Proposition \ref{propbij} and Corollary \ref{corfacet}.
\section{Proof of Theorem \ref{thmfvector}}\label{sectfvector}
Theorem \ref{thmfvector} is easily checked in the case $d=2$
where $\mathcal P(p^e,2)$ is the polygon defined by the $e+1$
vertices $(p^e,1),(p^{e-1},p),\dots,(p,p^{e-1}),(1,p^e)$.
We suppose henceforth $e\geq 2$ and $d\geq 3$ and we
consider $\mathcal P=\mathcal P(p^e,d)$ (where $p$ is a prime).
The formula for the number $f_0$ of vertices of $\mathcal P$ certainly
holds. Indeed, $f_0$ is equal to the number of $d-$dimensional
vector-factorisations of $p^e$. Such vector-factorisations are in
one-to-one correspondence with homogeneous monomials of degree $e$
in $d$ commuting variables. The vector space spanned by these monomials
is of dimension ${e+d-1\choose d-1}$.
We call a $k-$dimensional face of $\mathcal P$
{\it regular} if it is contained in a
regular facet of $\mathcal P$.
A $k-$dimensional face of $\mathcal P$ is {\it exceptional} otherwise.
A $1-$dimensional face is exceptional if and only if it
is contained in the exceptional facet $F_\infty$. There are thus
${d\choose 2}$ exceptional $1-$dimensional faces.
For $k\geq 2$, a $k-$dimensional face $f$ which is exceptional is
either contained in the exceptional facet $F_\infty$ and there are
${d\choose k+1}$ such faces, or it is the intersection of
$d-k$ distinct exceptional facets in $\{F_1,\dots,F_d\}$.
For $k$ in $\{2,\dots,d-1\}$, the polytope $\mathcal P$ contains thus
$${d\choose k+1}+{d\choose d-k}={d+1\choose k+1}$$
exceptional $k-$dimensional faces.
A regular face $f$ of dimension $k\geq 1$ has
a type $\lambda=e-\sum_{i=1}^d m_i\in \{1,\dots,k\}$
where $m(S)=(m_1,\dots,m_d)$ is associated to the vertex set
$S$ of $f$ as in Lemma \ref{lemregsubset}. The face $f$ is then defined
by the support consisting of the $k+1$ non-zero coordinates of
$M(S)-m(S)\in\{0,1\}^d$ and by the regular facet with vertices
$$\{(p^{m_1+\epsilon_1},\dots,p^{m_d+\epsilon_d})\ \vert\
(\epsilon_1,\dots,\epsilon_d)\in\Delta(\lambda)\}\ .$$
The number of regular $k-$dimensional faces contained in $\mathcal P$
is thus given by
$${d\choose k+1}\sum_{\lambda=1}^{\min(k,e)}\sharp(\mathcal A_\lambda)=
{d\choose k+1}\sum_{\lambda=1}^{\min(k,e)}{e-\lambda+d-1\choose d-1}$$
where the first factor corresponds to the choice of a support for
$M(S)-m(S)$, the sum corresponds to all possibilities for $\lambda$
and the factor $\sharp(\mathcal A_\lambda)={e-\lambda+d-1\choose d-1}$
corresponds to all possibilities for the \lq\lq minimal'' regular
facet with vertices
$$\{(p^{m_1+\epsilon_1},\dots,p^{m_d+\epsilon_d})\ \vert\
(\epsilon_1,\dots,\epsilon_d)\in\Delta(\lambda)\}$$
which contains $f$.
Iterated application of the identity
${a-1\choose b-1}+{a-1\choose b}={a\choose b}$ shows
$$\sum_{\lambda=1}^{\min(k,e)}{e-\lambda+d-1\choose d-1}={e-
1+d\choose d}-{e-\min(e,k)+d-1\choose d}\ .$$
This yields the closed expression
$$f_k={d+1\choose k+1}+{d\choose k+1}{e+d-1\choose d}-{d\choose k+1}
{e-k+d-1\choose d}$$
(with the convention ${n\choose d}=0$ for $0\leq n<d$)
for $f_2,\dots,f_{d-1}$ and ends the proof of Theorem
\ref{thmfvector}.
$\Box$
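As a consistency check (ours, not part of the proof), the face numbers above satisfy the Euler relation $\sum_{k=0}^{d-1}(-1)^kf_k=1-(-1)^d$ valid for every $d$-dimensional polytope; here $f_1$ is assembled from the counts of exceptional and regular $1$-dimensional faces given above:

```python
from math import comb

def f_vector(d, e):
    # f_k for P(p^e,d), e >= 2, d >= 3, following the proof:
    # exceptional faces plus regular faces of each type lambda.
    f = [comb(e + d - 1, d - 1)]                          # f_0: vertices
    f.append(comb(d, 2) * (1 + comb(e + d - 2, d - 1)))   # f_1: edges
    for k in range(2, d):
        reg = comb(d, k + 1) * (comb(e - 1 + d, d)
                                - comb(e - min(e, k) + d - 1, d))
        f.append(comb(d + 1, k + 1) + reg)
    return f

# E.g. d=3, e=2 gives (6, 12, 8): P(p^2,3) is combinatorially an octahedron.
for d in (3, 4, 5):
    for e in (2, 3, 4):
        f = f_vector(d, e)
        euler = sum((-1) ** k * fk for k, fk in enumerate(f))
        assert euler == 1 - (-1) ** d
```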
\begin{rem} The proof of Theorem \ref{thmfvector} contains the
detailed description in terms of vertices of all faces of
$\mathcal P$. It is thus easy to work out the
face-lattice of $\mathcal P(p^e,d)$.
\end{rem}
I thank F. Mouton for helpful comments.
\end{document}
\noindent Roland BACHER
\noindent INSTITUT FOURIER
\noindent Laboratoire de Math\'ematiques
\noindent UMR 5582 (UJF-CNRS)
\noindent BP 74
\noindent 38402 St Martin d'H\`eres Cedex (France)
\noindent e-mail: [email protected]
\end{document}
\begin{document}
\title{Billiards with Markovian reflection laws}
\author{Clayton Barnes, Krzysztof Burdzy and Carl-Erik Gauthier}
\address{Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195}
\email{[email protected] or [email protected]}
\email{[email protected]}
\email{[email protected] or [email protected]}
\thanks{KB's research was supported in part by Simons Foundation Grant 506732.
CEG's research was supported by the Swiss National Foundation for Research Grant P2NEP2\_171951. }
\keywords{Reflection distribution, stationary distribution, billiards}
\subjclass{Primary 60J99; Secondary 60K35}
\pagestyle{headings}
\begin{abstract}
We construct a class of reflection laws for billiard processes in the unit interval whose stationary distribution for the billiard position and its velocity is the product of the uniform distribution and the standard normal distribution. These billiard processes have Markovian reflection laws, meaning their velocity is constant between reflections but changes in a Markovian way at reflection times.
\end{abstract}
\maketitle
\section{Introduction}
\noindent
Consider a billiard process $\{(X(t), L(t)), t\geq 0\}$ with values in $ [0, 1] \times \mathbb{R}$, where $X$ represents the billiard position, reflecting at the endpoints 0 and 1, and $L$ represents the velocity of $X$.
Under the totally elastic collision assumption, i.e., when the kinetic energy is preserved, the long run distribution of $X$ is uniform in $[0,1]$ and the speed, i.e. $|L|$, is constant.
The Boltzmann-Gibbs distribution assigns a probability proportional to $\Ep(-c \mathcal{E}(x))$ to a state $x$ of a physical system, where $\mathcal{E}(x)$ is the energy of the state $x$. This suggests that, if the process $X$ does not move in a potential, i.e., the particle $X$ does not have potential energy, then the probability of a state $(x, \ell)$ of $(X,L)$, in the stationary regime, should be proportional to $\Ep (-c \ell^2)$ because the kinetic energy is proportional to $\ell^2$. In other words, position should be distributed uniformly in the interval $[0,1]$ and velocity should be normally distributed in the stationary regime. For this to be true, speed (i.e., the norm of velocity) must change at reflection times. We will present examples of Markovian reflection laws for billiard processes giving rise to the stationary density of the form $c_1\Ep (-c_2 \ell^2)$.
We will now state our main result and discuss related articles which inspired this research and provided some of the techniques used in this paper.
\subsection{Main result}
We will define our process on an interval $[0,T)$, possibly random, for some $0< T \leq \infty$, because we cannot assume from the outset that the process is well defined for all times $t>0$.
\begin{definition}\label{def:billiardProcess}
A process $\{(X(t), L(t)): t \in [0, T)\} $ with values in $ [0, 1] \times \mathbb{R}$ will be called a \emph{billiard process with Markovian reflections}
if and only if
\begin{enumerate}[label = (\roman*)]
\item There exists an infinite sequence of random times $0=t_0<t_1<t_2 < \dots$ such that $\sup_i t_i =T$.
\item $X(t) \in\{0,1\}$ if and only if $t=t_i$ for some $i$, a.s.
\item $L(t_0), L(t_1), \dots$, is a Markov chain with non-zero values. The sign of $L(t_i)$ alternates, i.e., $L(t_i)L(t_{i+1}) < 0$ for every $i$.
\item $L$ is constant on $[t_i, t_{i + 1})$ for every $i$, a.s.
\item $\displaystyle X(t) = \int_0^tL(s)ds + X(0)$ for all $0\leq t<T$, a.s.
\end{enumerate}
\end{definition}
A billiard process with Markovian reflections is a billiard whose
velocity after reflection is random; it depends on the incoming velocity and only on the incoming velocity.
Our notation for distributions and conditional distributions will be $\mc{L}(\,\cdot\,)$ and $\mc{L}(\,\cdot\,\mid\,\cdot\,)$.
To construct a billiard process with Markovian reflections we need an initial condition $(X(0), L(0))$ and the Markov chain determining the laws of reflection.
This is sufficient to construct a billiard process with Markovian reflections on the time interval $[0, \sup_j t_j)$; see Section \ref{Sect3}.
Let $\mathcal{U}(0, 1)$ denote the uniform distribution on
$[0, 1]$ and let $\mathcal{N}(0, 1)$ be the standard normal distribution on $\mathbb{R}$.
We will provide a large family of reflection laws $\mc{L}( L(t_{i + 1}) \mid L(t_i))$ for which $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$
is the stationary distribution for the billiard process with Markovian reflections. The family will be indexed by an integer $N\geq 0$ and $\vec{\beta} = (\beta_0, \dots, \beta_N)\in (0,\infty)^{N+1}$.
\begin{definition}\label{def:vel}\
Consider an integer $N \geq 0$ and suppose that $\beta_i > 0$, $i = 0, \dots, N$, are reals such that $\sum_i\beta_i = 1$ and $\beta_i \geq \beta_{i+1}$ for all $i = 0, \dots, N-1$. Set $\lambda_i = \beta_i/\sum_{j = 0}^i\beta_j$ for $i = 0, \dots, N$, and $\mu_i = \beta_{i + 1}/\sum_{j = 0}^i\beta_j$ for $i = 0, \dots, N - 1$.
Suppose that $\ell < 0$ and let
\begin{align}\label{s16.1}
p_N(\ell) &= \Ep\left(-\frac{(1-\beta_N)\ell^2}{2\beta_N}\right),\\
p_k(\ell) &=
\left(\prod_{m=k}^{N-1} \frac1{\mu_m}\right)
\sum_{j=k-1}^{N-1}
\Ep\left(-\frac{\ell^2}{2\mu_j}\right)
\left(\prod_{\substack{m=k-1\\m\neq j}}^{N-1}\frac{1}{1/\mu_m- 1/\mu_j}\right), \quad k=1,\cdots, N-1, \label{s16.2}\\
p_0(\ell) &= 1 - \sum_{k=1}^N p_k(\ell). \label{s23.5}
\end{align}
Let
$Z(\ell)\in\{0,1, \dots, N\}$ be a random variable such that $\mathbb{P}(Z(\ell)=j)=p_j(\ell)$, $ j=0,\cdots, N$. Suppose that
$E_0, E_1,\cdots , E_N$ are i.i.d.{} exponential, mean one, random variables
independent of $Z(\ell)$. Let $\mc{V}(\ell, N, \vec{\beta})$ be the distribution of
\begin{align}\label{s16.3}
\sum_{j=0}^N \left(2\sum_{i=j}^N\lambda_i E_i\right)^{1/2} \mathbbm{1}_{Z(\ell)=j}.
\end{align}
We extend the definition of $\mc{V}(\ell, N, \vec{\beta})$ to $\ell>0$ by saying that $\mc{V}(\ell, N, \vec{\beta})$ is the distribution of $X$ if the distribution of $-X$ is $\mc{V}(-\ell, N, \vec{\beta})$.
\end{definition}
\begin{remark}\label{def:velold}
Some factors in the product on the right hand side of \eqref{s16.2} are negative so it is not obvious that $p_k(\ell)$'s are non-negative.
In fact, this is the case, as can be seen from Proposition \ref{LevelDistrib} and Theorem \ref{ThinBoundaries} (i). The same results imply that $\sum_{k=0}^N p_k(\ell)=1$.
\end{remark}
\begin{example}\label{s17.1}
Formulas \eqref{s16.1}-\eqref{s16.3} are complicated so we present three concrete examples.
(i)
In the case $N=0$ we necessarily have $\beta_0 =1$. If $\ell<0$, the distribution
$\mc{V}(\ell, 0, (1))$ is the law of $\sqrt{2E_0}$, where $E_0$ is an exponential random variable with mean 1. For $\ell >0$, $\mc{V}(\ell, 0, (1))$ is the law of $-\sqrt{2E_0}$.
The distribution $\mc{V}(\ell, 0, (1))$ of $\sqrt{2E_0}$
is known as the Rayleigh distribution with parameter one.
(ii) Next consider the case $N=1$. Suppose that $0<\beta_1\leq 1/2$, $\ell<0$ and let
\begin{align}\label{s23.4}
p_1&= \Ep\left(-\frac{(1-\beta_1)\ell^2}{2\beta_1}\right),\qquad
p_0= 1-\Ep\left(-\frac{(1-\beta_1)\ell^2}{2\beta_1}\right),
\end{align}
Suppose that the following three random variables are independent: two mean-one exponentials $E_0$ and $E_1$, and $Z(\ell)$ such that $\mathbb{P}(Z(\ell)=j)= p_j(\ell)$, $j=0,1$.
Although $\beta_0$ does not enter the following formula, we note that necessarily $\beta_0=1-\beta_1$. The distribution
$\mc{V}(\ell, 1, (\beta_0,\beta_1))$ is the law of
\begin{align*}
\sqrt{2\left(E_0 + \beta_1 E_1\right)}\mathbbm{1}_{Z(\ell)=0}+ \sqrt{2\beta_1 E_1}\mathbbm{1}_{Z(\ell)=1}.
\end{align*}
(iii) Suppose that $N\geq 2$ and let $\beta_i = 1/(N+1)$ for $i=0,\dots, N$.
Then $\mu_i = \lambda_i = 1/(i+1)$. Elementary calculations show that
formulas \eqref{s16.1}-\eqref{s23.5} for $p_k(\ell)$
reduce to the binomial probabilities with parameters $N$ and $q:=\Ep(-\ell^2/2)$, i.e.,
\begin{align*}
p_k(\ell) = \binom N k q^ k (1-q) ^{N-k}, \qquad k=0,1,\cdots, N.
\end{align*}
Formula \eqref{s16.3} becomes
\begin{align*}
\sum_{j=0}^N \left(2\sum_{i=j}^N \frac{E_i}{i+1}\right)^{1/2} \mathbbm{1}_{Z(\ell)=j}.
\end{align*}
\end{example}
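The reduction to binomial probabilities in part (iii) can be confirmed numerically. The Python sketch below (ours, for illustration) implements \eqref{s16.1}--\eqref{s23.5} literally and compares the result with the binomial distribution:

```python
from math import exp, comb, prod

def reflection_probs(ell, betas):
    # p_0,...,p_N from (s16.1)-(s23.5) for an incoming velocity ell < 0.
    N = len(betas) - 1
    mu = [betas[i + 1] / sum(betas[: i + 1]) for i in range(N)]  # mu_0..mu_{N-1}
    p = [0.0] * (N + 1)
    p[N] = exp(-(1 - betas[N]) * ell ** 2 / (2 * betas[N]))
    for k in range(1, N):
        pref = prod(1 / mu[m] for m in range(k, N))
        s = 0.0
        for j in range(k - 1, N):
            term = exp(-ell ** 2 / (2 * mu[j]))
            term *= prod(1 / (1 / mu[m] - 1 / mu[j])
                         for m in range(k - 1, N) if m != j)
            s += term
        p[k] = pref * s
    p[0] = 1 - sum(p[1:])
    return p

# Equal weights beta_i = 1/(N+1) should give binomial(N, q), q = exp(-ell^2/2).
N, ell = 4, -0.7
betas = [1 / (N + 1)] * (N + 1)
q = exp(-ell ** 2 / 2)
p = reflection_probs(ell, betas)
binom = [comb(N, k) * q ** k * (1 - q) ** (N - k) for k in range(N + 1)]
assert all(abs(a - b) < 1e-9 for a, b in zip(p, binom))
```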
The next theorem is our main result.
Note that we may have different families of reflection laws for reflections at 0 and 1.
\begin{theorem}\label{mainTheorem}
Suppose that $N^-, N^+\geq 0$ are integers, $\vec{\beta}^- \in (0,\infty)^{N^- + 1}$ and $\vec{\beta}^+ \in (0,\infty)^{N^+ + 1}$.
\begin{enumerate}[label = (\roman*)]
\item There exists a billiard process $\{(X(t), L(t)) : t \in [0, \infty)\}$ with Markovian reflection laws
\begin{align*}
\mc{L}( L(t_{i + 1}) \mid L(t_i)=\ell)&=\mc{V}(\ell, N^-, \vec{\beta}^-),
\quad \text{ if } \ell<0,\\
\mc{L}( L(t_{i + 1}) \mid L(t_i)=\ell)&=\mc{V}(\ell, N^+, \vec{\beta}^+),
\quad \text{ if } \ell>0.
\end{align*}
\item $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$ is the unique stationary distribution for $(X, L)$.
\end{enumerate}
\end{theorem}
\begin{remark}
(i) The reflection laws in Definition \ref{def:vel} correspond to the model considered in Theorem \ref{ThinBoundaries}. Two other results, Theorems \ref{Infinite_layers} and \ref{Cvge_Veloc}, implicitly contain two other families of reflection laws for which Theorem \ref{mainTheorem} holds. We do not give explicit formulas for these reflection laws because they are more complicated than those in Definition \ref{def:vel}. The interested reader will have no difficulty extracting the definitions of these reflection-law families from the discussion in Section \ref{Sect2} of the case when $N_0(n)$ goes to infinity (the ``noiseless case'') and of the ``noisy case''.
(ii)
In view of Example \ref{s17.1} (i) and Theorem \ref{mainTheorem} (ii), the stationary distribution of the process $(X, L)$ is $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$ if the speeds $|L(t_i)|$ are i.i.d. with the standard Rayleigh
distribution. It is easy to see that this is the only example of a Markovian reflection law such that $L(t_{i+1})$ does not depend on $L(t_i)$ and the stationary distribution is $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$.
\end{remark}
\subsection{Proof strategy}
We will approximate a billiard with Markovian reflections by a sequence of processes $(X_n, L_n)$ with state spaces $\mathcal{D}_n \times \mathbb{R}$, where $\mathcal{D}_n = \{0, 1/n, 2/n, \dots, 1\}$. The processes $(X_n, L_n)$ will belong to a particular class introduced in \cite{BW1}, which we describe in Section \ref{Sect1}. These processes have the stationary distribution $\mathcal{U}(\mathcal{D}_n) \times \mathcal{N}(0, 1)$, where $\mathcal{U}(\mathcal{D}_n)$ denotes the uniform distribution on $\mathcal{D}_n$. Consequently, if $(X_n, L_n)$ converges to a process, classical limit results show that $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$ is the stationary
distribution for the limit. Constructing a sequence of processes that converge to a billiard process with
Markovian reflections relies, roughly speaking, on a finite system of equations involving transition rates
between states of $X_n$ together with a process $L_n$ which represents the ``memory'' of $X_n$. Manipulation of these equations will give rise to the variety of reflection laws described above.
As we have already mentioned, we will approximate the unit interval with the discrete interval $\{0, 1/n, 2/n, \dots, 1\}$. We will reserve
a tiny fraction of these points to serve the role of boundaries, namely the first $N_0(n)+1$ (resp. the last $N_1(n)+1$)
points will form the ``boundary'' at $0$ (resp. at $1$). We think of these short discrete intervals as
layers in which the random reflection takes place. In the limit, the layers will collapse to the respective endpoints. Thus
we take $N_j(n)/n\rightarrow 0$ as $n\to\infty$, for $j=0,1$. For fixed $n$, one can think of the points in $[0, N_0(n)/n] \cup [(n - N_1(n))/n, 1]$ as holding a potential that reverses the direction of the motion of the particle $X_n$ as it
approaches either boundary. After this reversal it will leave the potential layer with a random ``velocity.'' These potential layers will disappear as $n$
approaches infinity. Because of this, the limiting process will have ballistic trajectories, but the randomness of
the reflected velocity will be retained. The
``velocity'' $L_n$ will not change outside of the potential layers in our model.
To make the model tractable, we will consider only two types of dynamics
inside the boundary layers. In the first case, the particle $X_n$ will be able to jump in only one direction, depending on the sign of $L_n$. In the second case, $X_n$ will be able to jump to both neighbors but the boundary layers will be very thin, i.e., $N_0(n)= N_1(n)=1$.
\begin{remark}
The formula for the reflected velocity \eqref{s16.3} is complicated and hard to comprehend intuitively. One may wonder whether more accessible examples might arise by letting $N$ go to infinity and scaling $\vec{\beta}$ appropriately. This does not seem to be the case. The limit seems to be deterministic. In other words, the limiting reflection would be totally elastic, resulting in a constant speed for all times. The reason is that
Theorem \ref{mainTheorem} is based on a ``noiseless'' approximation scheme where the particle can jump in only one direction, depending on its current drift. For large $N$, the law of large numbers would generate deterministic reflections.
We expect that in the ``noisy'' case, when the particle can jump in both directions, there may exist interesting limiting distributions. However, we can effectively analyze the ``noisy'' case only for $N=1$.
\end{remark}
\subsection{Related results}
We have already indicated, at the beginning of the introduction, that our research is inspired by certain ideas from physics. On the mathematical side, this paper is related to models of Markov processes with ``memory''
presented in \cite{BBCH,BCG15,CEGMB,BKS12,BKS13}.
We will not review these models in detail because they are quite diverse. What they have in common is that, in every case, the stationary distribution has the product form---it is uniform (on an appropriate space) for the ``position'' component of the process and it is Gaussian for the ``memory.'' The product form of the stationary distribution is far from obvious because the components, position and memory, are not independent; they are not even Markov on their own.
In view of the history of the model, we will interchangeably refer to the second component of $(X_n, L_n)$ as ``velocity'' or ``memory.''
The perspective of this paper is
the reversal of the classical problem of finding the stationary distribution. We are looking for models that have the prescribed product-form stationary distribution.
Our specific model has the following roots.
In \cite{BBCH}, a reflected Brownian motion with drift was analyzed.
The drift had memory---it accumulated proportionally to the vector-valued local time on the boundary. As a part of the analysis, the authors of \cite{BBCH} considered a sequence of Brownian motions not reflected on the boundary but repulsed by a sequence of smooth potentials converging to 0 inside the domain and to infinity on the boundary. The diffusion coefficient remained constant.
One may wonder what limiting processes could arise if we let the potentials converge in the manner described above and at the same time we let diffusivity go to 0 at an appropriate rate. It is clear that the limiting process must have ballistic trajectories inside the domain but its reflection law might be random. Our present article can be viewed as a simplified version of the problem, but one that tries to go to the heart of the matter.
At the technical level, we will use a discrete approximation, originally introduced in \cite{BW1}. So far, this type of approximation has been used only for generating conjectures which were subsequently proved using other methods, as in \cite{BKS12} and \cite{BKS13}. Convergence of a discrete approximation of this type to a Markov process with memory was proved for the first time in \cite{B}.
Finally, we would like to
point out that \cite{BW2} presented a process with sawtooth paths, just like our process $X$. In that case, the sawtooth process had a Gaussian stationary distribution. The speed was constant and the locations of direction reversals were random, whereas in our case, the locations of direction reversals are fixed but the speed is random.
\subsection{Organization of the paper}
In Section \ref{Sect1} we introduce approximating processes and state our assumptions. In Section \ref{Sect2} we state, without proof, all intermediate results needed to prove that
the approximating processes converge in distribution to a billiard process with Markovian
reflections. All these results and Theorem \ref{mainTheorem}, our main result, are proved in Section \ref{Sect4}.
\section{Discrete approximations}
\label{Sect1}
\subsection{Discrete-space Markov processes with memory}\label{ch3:section:MarkovProcessesMemory}
We will review the context as well as the main result from \cite{BW1} in this subsection. Let $(X, L)$ be
a continuous time Markov process with state space $\mathcal{D}_n\times\mathbb{R}^d$,
where $d\geq 1$ and $\mathcal{D}_n=\{0,1,\dots,n\}$.
We associate a vector $v_j \in \mathbb{R}^d$ to each $j \in \mathcal{D}_n$, and define
\[
L_j(t) = \Leb(s\in[0, t] : X(s) = j)
\] as the
time $X$ has spent at location $j$ until time $t.$ The ``memory'' process is defined as
\[
L(t) = \sum_{j \in \mathcal{D}_n}v_jL_j(t).
\]
Functions
\[
a_{ij}(\ell) : \mathbb{R}^d \to \mathbb{R}
\]
govern the intensity of transitions of $X$ from $i$ to $j$. In other words, conditional on $X(t_0) = i$ and $L(t_0) = \ell$, the intensity of jumps of $X$ from $i$ to $j$ is $a_{ij}(\ell + [t-t_0]v_i)$ for $t \geq t_0$, until $X$ jumps away from $i$. More precisely, the evolution of the process can be
described as follows. Let $(E_k^j)_{j\in \mathcal{D}_n ,\;k\geq 0}$ be a family of i.i.d.{} exponential random variables with
parameter one and let $(T_i)_{i\geq 0}$ be the sequence of times when $X$
changes its position, with $T_0=0$. Assuming that the process is defined up to time $T_i$, we recursively define
\begin{align}\notag
T_{i+1}^j&= \inf\left(t>T_i: \int_{T_i}^t a_{X(T_i)j}\big(L(T_i)+ v_{X(T_i)}(s-T_i)\big)ds\geq E_i^j\right),\\
T_{i+1}&=\min_{j\in \mathcal{D}_n} T_{i+1}^j,\label{o11.1}
\end{align}
with the convention that $\inf\emptyset =\infty$. Then, set
\begin{align}\label{Evol_Interjumps}
L(s)&=L(T_i)+ v_{X(T_i)}(s-T_i),& \text{ for } s\in [T_i, T_{i+1}],\\
X(s)&= X(T_i),& \text{ for } s\in [T_i, T_{i+1}),\notag\\
X(T_{i+1})&= \argmin(T_{i+1}^j: j\in \mathcal{D}_n).&\notag
\end{align}
Note that
\[
\mathbb{P}(T_{ i+1}^j > t + T_i \mid X(T_i) = k, L(T_i) = \ell) = \Ep\left(-\int_0^ta_{kj}(\ell + sv_k)ds\right),
\]
for all $t > 0$. The pair $(X, L)$ is a strong Markov process with
infinitesimal generator
\begin{align*}
\mc{A}f(j,\ell)= \langle v_j,\nabla_\ell f(j,\ell)\rangle +\sum_{i\neq j}a_{ji}(\ell)\big(f(i,\ell)-f(j,\ell)\big)
\end{align*}
for $f:\mathcal{D}_n\times \mathbb{R}^d \rightarrow \mathbb{R}$ of sufficient smoothness.
It is assumed in \cite{BW1} that $(X, L)$ is irreducible in the sense that there are $j_0 \in \mathcal{D}_n$ and a non-empty open set $U\subset \mathbb{R}^d$ such that
\[
\mathbb{P}((X(t), L(t)) \in \{j_0\} \times U \mid X(0) = i, L(0) = \ell) > 0,
\]
for every $(i, \ell) \in \mathcal{D}_n\times \mathbb{R}^d$ and some $t>0$ (depending on $(i, \ell)$).
\begin{remark}
See \cite[Chap. 2]{Bremaud} for a formal definition and characterization of doubly-stochastic jump processes such as $X$. Note that
the stochastic jump intensity of $X$ is adapted to the right continuous filtration generated by $X$.
\end{remark}
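The clock construction \eqref{o11.1}-\eqref{Evol_Interjumps} can be simulated numerically. The following Python sketch is a hypothetical illustration (the function name and the Riemann-sum inversion of the integrated intensity are our own choices, and the inversion is only approximate): it produces one jump of $(X, L)$ from the competing exponential clocks.

```python
import math
import random

def step(x, ell, v, rates, rng, dt=1e-3, t_max=50.0):
    """One jump of (X, L): while X sits at x, the memory evolves as
    ell + s*v[x], and the clock for the jump x -> j rings when the
    integrated intensity of rates[(x, j)] exceeds an independent Exp(1)
    variable; compare (o11.1).  Returns (new site, memory at jump, jump time)."""
    targets = [j for (i, j) in rates if i == x]
    E = {j: rng.expovariate(1.0) for j in targets}
    acc = {j: 0.0 for j in targets}
    s = 0.0
    while s < t_max:
        for j in targets:
            acc[j] += rates[(x, j)](ell + s * v[x]) * dt
            if acc[j] >= E[j]:
                return j, ell + s * v[x], s
        s += dt
    raise RuntimeError("no jump occurred before t_max")
```

As a sanity check, for the hard-boundary rate $a_{01}(\ell)=n\max(\ell,0)$ with $v_0=n$ and start $(0,0)$, the memory at the first jump should be approximately Rayleigh distributed, as computed later in the paper.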
Let $\mc{U}(\mathcal{D}_n)$ denote the uniform distribution on $\mathcal{D}_n$ and let $\mathcal{N}_d$ be the $d$-dimensional standard normal distribution.
Our model and arguments will be based on the following result.
\begin{theorem}\label{Theorem:BurdzyWhite}\cite[Cor. 2.3]{BW1}
The stationary distribution for $(X, L)$ is $\mathcal{U}(\mathcal{D}_n)\times \mathcal{N}_d$ if and only if
\begin{align}\label{eq1}
v_j\cdot \ell + \sum_{i \in \mathcal{D}_n}a_{ij}(\ell) - \sum_{i\in \mathcal{D}_n} a_{ji}(\ell) = 0,
\end{align}
for all $j\in \mathcal{D}_n$ and $\ell \in \mathbb{R}^d$.
\end{theorem}
\begin{remark}
Heuristically,
condition \eqref{eq1} can be represented as
\begin{align}\label{s21.1}
v_j\cdot \ell + (\text{flow into $j$}) - (\text{flow out of $j$}) = 0,
\end{align}
for all $j\in \mathcal{D}_n$ and $\ell \in \mathbb{R}^d$.
\end{remark}
In the next two sections we will specify $v_j$ and $a_{ij}$ that will give rise to a billiard process with Markovian reflections.
\subsection{Approximating processes}
We will consider a sequence of processes $(X_n, L_n)$, $n\geq 2$, defined
as in Section \ref{ch3:section:MarkovProcessesMemory}, with the state space $\mathcal{D}_n \times \mathbb{R}$, where
$\mathcal{D}_n = \{ 0, 1,\dots, n\}$.
We will always assume that $a_{ij}(\ell) =0 $ whenever $|i-j| \ne 1$ (we will suppress $n$ in the notation $a_{ij}(\ell) $). Hence $X_n$ will be a nearest neighbor random walk with random transition probabilities.
Heuristically, $\mathcal{D}_n$ should be thought of as a discretization of $[0,1]$. We chose to label the elements of $\mathcal{D}_n$ as $\{ 0, 1,\dots, n\}$ rather than $\{ 0, 1/n,2/n,\dots, (n-1)/n,1\}$ for typographical reasons.
The state space $\mathcal{D}_n$ will have two ``boundary regions''
$\partial\mathcal{D}_n^-:= \{0,1,\dots,N_0(n)\}$ and $\partial\mathcal{D}_n^+:=\{ n - N_1(n) , \dots, n-1, n\}$.
\begin{assumption}\label{o8.1}
The following are (some of) our standing assumptions.
\begin{enumerate}[label=(\roman*)]
\item $0 \leq N_0(n), N_1(n)< n/2$, for
$ n\geq2$,
\item
$\lim_{n\rightarrow \infty} N_k(n)/n=0$, for $ k=0,1$,
\item $v_j(n) = 0$ if and only if $j\in \{N_0(n)+1, n - N_1(n) -1\}$,
\item $v_j(n) > 0$ if $j \in \partial\mathcal{D}_n^-$,
\item $v_j(n) < 0$ if $j \in \partial\mathcal{D}_n^+$.
\end{enumerate}
\end{assumption}
Assumption \ref{o8.1} (iii) means that the memory process $L_n$ is not affected when $X_n$ is outside the boundary regions
$\partial\mathcal{D}_n^-$ and $\partial\mathcal{D}_n^+$. We will choose $a_{ij}(\ell)$ so that, as a consequence of Assumption \ref{o8.1} (iii), the ``drift'' of $X_n$ will not be affected outside the boundary regions.
\begin{definition} The boundary $0$ (resp. $n$) is said to be \textit{hard} if and only if $N_0(n)=0$ (resp. $N_1(n)=0$); otherwise it is said to be \textit{soft}. The boundaries are said to be \textit{noiseless} if $a_{i,j}(\ell)>0$ if and only if $(j-i)\ell >0$ for all $i\in\partial\mathcal{D}_n^-\cup\partial\mathcal{D}_n^+$; otherwise they are said to be \textit{noisy}.
\end{definition}
Our motivation for this terminology is the following. Since the process $(X_n,L_n)$ is supposed to approximate a billiard process, its ``velocity component'' $L_n$ should change
only if $X_n$ is in one of the boundary regions $\partial\mathcal{D}_n^-$ and $\partial\mathcal{D}_n^+$. The term \textit{``soft''} refers to the idea that the repulsive effect is felt away from the boundaries 0 and $n$, while \textit{``hard''} designates the opposite case.
The term \textit{``noisy''} refers to the idea that the ``drift'' of the particle does not determine the direction of the motion in a deterministic way---the particle can go in both directions with
positive probabilities.
\subsubsection{Noiseless case}
We will discuss only the lower boundary region $\partial\mathcal{D}_n^-$. Implicitly, we make analogous assumptions for the region $\partial\mathcal{D}_n^+$; therefore, analogous results hold for the upper boundary region.
Recall that we have assumed $a_{ij}(\ell) =0 $ whenever $|i-j| \ne 1$.
In the noiseless case we will assume that for all $n, \ell $ and $i\in\mathcal{D}_n\setminus\{n\}$, and some $c_i(n) >0$, the transition rates have the form
\begin{align}\label{s21.7}
a_{i, i + 1}(\ell) &=
\begin{cases}
c_i(n)\ell,&\text{if }\ell\geq 0,\\
0 & \text{otherwise},
\end{cases}\\
a_{i + 1, i}(\ell) &=
\begin{cases}
c_i(n)(- \ell ), & \text{if }\ell\leq 0,\\
0 & \text{otherwise}.
\end{cases}\label{s21.8}
\end{align}
Let
\begin{align}\label{o8.2}
c_i(n) = n, \qquad \text{ for } N_0(n)\leq i \leq n - N_1(n).
\end{align}
When $\ell < 0$ we have the following
schematic representation of the probability mass flow into and out of $i\in\partial\mathcal{D}_n^-\setminus\{0\}$ (see \eqref{s21.1}),
\begin{center}
\begin{equation}\label{schema1}
\begin{tikzpicture}
\node[draw,circle,fill=gray!0] (A)at(3,3){$i-1$};
\node[draw,circle,fill=gray!0] (B)at(7,3){$i$};
\node[draw,circle,fill=gray!0] (C)at(11,3){$i+1$};
\node(draw) (D)at(7,5){};
\draw[<-, >= latex] (A)--(B);
\draw (4.5,3) node [above right] {$c_{i-1}(n)|\ell|$};
\draw[<-, >= latex] (B)--(C);
\draw (8.5,3) node [above right] {$c_{i}(n)|\ell|$};
\draw[->, >= latex] (B)--(D);
\draw (7,4) node [above right] {$v_i(n)|\ell|$};
\end{tikzpicture}
\end{equation}
\end{center}
Assumption \ref{o8.1} (iv) implies that $v_i(n)\ell < 0$. Therefore, the corresponding arrow shows the ``outflow'' from $i$.
Following \eqref{eq1} we equate the sum of signed flows to zero, to obtain for $i\in\partial\mathcal{D}_n^-\setminus\{0\}$,
\begin{align}\label{s21.2}
0 = c_i(n)|\ell| - c_{i - 1}(n)|\ell | - v_i(n)|\ell| \iff v_i(n) + c_{i - 1}(n) = c_i(n).
\end{align}
When $i = 0$, the schematic is the following,
\begin{center}
\begin{equation}\label{s23.3}
\begin{tikzpicture}
\node[draw,circle,fill=gray!0] (A)at(3,3){$0$};
\node[draw,circle,fill=gray!0] (B)at(7,3){$1$};
\node(draw) (D)at(3,5){};
\draw[<-, >= latex] (A)--(B);
\draw (4.5,3) node [above right] {$c_{0}(n)|\ell|$};
\draw[->, >= latex] (A)--(D);
\draw (3,4) node [above right] {$v_0(n)|\ell|$};
\end{tikzpicture}
\end{equation}
\end{center}
which yields the formula
\begin{align}\label{s21.3}
v_0(n) = c_0(n).
\end{align}
We combine \eqref{s21.2}-\eqref{s21.3} to obtain the following system of equations for $c_i(n)$'s and $ v_i(n)$'s,
\begin{align}
\begin{split}
v_{N_0(n)}(n) + c_{N_0(n) - 1}(n) &= c_{N_0(n)}(n) = n,\\
v_{N_0(n) - 1}(n) + c_{N_0(n) - 2}(n) &= c_{N_0(n) - 1}(n),\\
\vdots\\
v_1(n) + c_0(n) &= c_1(n),\\
v_0(n) &= c_0(n).
\label{eq:system}
\end{split}
\end{align}
It follows from \eqref{eq1} and \eqref{s21.7}-\eqref{s21.8} that we obtain the same system of equations \eqref{eq:system} in the case when $\ell > 0$.
It follows easily from \eqref{eq:system} that
\begin{align}\label{s22.1}
c_k(n) = \sum_{i = 0}^kv_i(n),\quad 0\leq k \leq N_0(n).
\end{align}
In particular,
\begin{align}\label{s22.2}
\sum_{i = 0}^{N_0(n)}v_i(n) = c_{N_0(n)}(n) = n.
\end{align}
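Condition \eqref{eq1} with the rates \eqref{s21.7}-\eqref{s21.8} and $c_k(n)=\sum_{i=0}^k v_i(n)$ can be checked numerically. The following Python sketch (a hypothetical helper, not part of the paper) computes the residuals $v_j\ell + (\text{flow into }j) - (\text{flow out of }j)$ at the boundary sites; they vanish for either sign of $\ell$.

```python
def flow_balance_residuals(v, ell):
    """Residuals of (eq1) at the boundary sites j = 0, ..., N0, for the
    noiseless rates (s21.7)-(s21.8) with c_k = v_0 + ... + v_k as in
    (s22.1).  The site N0 also exchanges flow with the interior site
    N0 + 1 at rate c_{N0} per unit of |ell|."""
    N0 = len(v) - 1
    c = [sum(v[:k + 1]) for k in range(N0 + 1)]
    def up(i, l):    # a_{i, i+1}(l) = c_i * l for l >= 0, else 0
        return c[i] * l if l >= 0 else 0.0
    def down(i, l):  # a_{i+1, i}(l) = c_i * (-l) for l <= 0, else 0
        return c[i] * (-l) if l <= 0 else 0.0
    res = []
    for j in range(N0 + 1):
        inflow = (up(j - 1, ell) if j > 0 else 0.0) + down(j, ell)
        outflow = up(j, ell) + (down(j - 1, ell) if j > 0 else 0.0)
        res.append(v[j] * ell + inflow - outflow)
    return res
```

For example, `flow_balance_residuals([3.0, 2.0, 1.0], -0.7)` returns residuals that are all zero, reflecting the balance derived in \eqref{s21.2}-\eqref{s21.3}.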
In order to analyze the evolution of $L_n$ inside the soft boundaries, we will need the following quantities:
\begin{align}\label{s28.5}
\lambda_{i}(n) &:= \frac{v_i(n)}{c_i(n)}, \quad i= 0, \dots, N_0(n),\\
\mu_i(n) &:= \frac{v_{i+1}(n)}{c_{i}(n)}, \quad i= 0, \dots, N_0(n) - 1.\label{s28.6}
\end{align}
These are the ratios of the ``memory accumulation rates'' at sites $i$ and $i+1$ to the jump rate between these two sites (per unit of memory $L_n$); see the schematic \eqref{schema1}.
In view of \eqref{s21.3}, we have $\lambda_0(n)=1$ for all $n$.
We will use the following assumptions in some of our arguments.
\begin{enumerate}
\item[$\bold{F1}$:] $\lambda_{i}(n)\neq \lambda_{j}(n)$ for all $i,j\in \partial \mathcal{D}_n^-$ such that $j\neq i$.
\item[$\bold{F2}$:] $\mu_{i}(n)\neq \mu_{j}(n)$ for all $i,j= 0, \dots, N_0(n) - 1$ such that $j\neq i$.
\item[$\bold{F'}$: ] $v_j(n) \geq v_{j+1}(n) >0$ for all $j\in \{0,\dots, N_0(n) -1\}$.
\end{enumerate}
We will argue that $\bold{F'}$ implies $\textbf{F1}$-$\textbf{F2}$. We will use \eqref{s22.1}. We have $\lambda_i(n) > \lambda_{i+1}(n)$ if and only if the following equivalent conditions hold,
\begin{align}\label{s30.10}
\frac{v_i(n)}{c_i(n)} > \frac{v_{i+1}(n)}{c_{i+1}(n)}
&\Longleftrightarrow
v_i(n)c_{i+1}(n) > v_{i+1}(n)c_{i}(n)
\Longleftrightarrow v_i(n)\sum_{j = 0}^{i+1}v_j(n) > v_{i+1}(n)\sum_{j = 0}^{i}v_j(n)\\
&\Longleftrightarrow
v_i(n)v_{i+1}(n) + (v_i(n)-v_{i+1}(n))\sum_{j = 0}^{i}v_j(n) >0. \label{s30.13}
\end{align}
If $\bold{F'}$ holds then the last inequality is true and, therefore, $\lambda_i(n) > \lambda_{i+1}(n)$. This shows that $\bold{F'}$ implies $\textbf{F1}$. The calculations showing that $\bold{F'}$ implies $\textbf{F2}$ are similar:
\begin{align}\label{s30.11}
\frac{v_{i+1}(n)}{c_i(n)} > \frac{v_{i+2}(n)}{c_{i+1}(n)}
&\Longleftrightarrow
v_{i+1}(n)c_{i+1}(n) > v_{i+2}(n)c_{i}(n) \\
&\Longleftrightarrow v_{i+1}(n)\sum_{j = 0}^{i+1}v_j(n) > v_{i+2}(n)\sum_{j = 0}^{i}v_j(n)\\
&\Longleftrightarrow
v_{i+1}(n)^2 + (v_{i+1}(n)-v_{i+2}(n))\sum_{j = 0}^{i}v_j(n) >0. \label{s30.12}
\end{align}
In the case when $N_0(n)=N$ for all $n$, we will make the following assumption.
\begin{enumerate}
\item[$\bold{F3}$:] $\lim_{n\rightarrow \infty} v_j(n)/n=\beta_j >0$ for all $j=0,\dots, N_0(n)=N$.
\end{enumerate}
\begin{remark}\label{s30.20}
(i) If $\bold{F3}$ holds then $\sum_{j=0}^{N}\beta_j = 1$ because of \eqref{s22.2}.
(ii) It is easy to check that if \textbf{F3} is true then the limits
\begin{align}\label{o2.1}
\lambda_i :=\lim_{n\rightarrow\infty}\lambda_i(n) &=\frac{\beta_i}{\sum_{j=0}^i \beta_j},\quad i = 0, \dots, N_0(n),\\
\mu_i :=\lim_{n\rightarrow\infty}\mu_i(n) &=\frac{\beta_{i+1}}{\sum_{j=0}^i \beta_j},\quad i = 0, \dots, N_0(n)-1,\label{o2.2}
\end{align}
exist.
(iii)
Assumption $\bold{F3}$ and \eqref{s22.1} imply that the limits $\lim_{n\rightarrow \infty} c_j(n)/n=c_j >0$ exist for all $j=0,\dots, N_0(n)=N$.
By assumptions $\bold{F'}$ and $\bold{F3}$, we have $0<\beta_{j+1}\leq \beta_j$ for all $j\in \{0,\dots, N_0(n)-1\}$. The calculations \eqref{s30.11}-\eqref{s30.12} can be repeated with $v_j(n)$ replaced with $\beta_j$ for $j=i+1, i+2$, and $c_j(n)$ replaced with $c_j$ for $j=i, i+1$. With this substitution, the conclusion of that calculation is that $\mu_i > \mu_{i+1} >0$.
\end{remark}
If $N_0(n)$ grows to infinity with $n$, instead of \textbf{F3}, we will adopt the following assumptions. First, let
\begin{align}\label{o2.3}
\lambda'_j(n) &=
\begin{cases}
\lambda_{j}(n) & \text{if } j\leq N_0(n),\\
0 & \text{otherwise, }
\end{cases}\\
\mu'_j(n) &=
\begin{cases}
\mu_{j}(n) & \text{if } j\leq N_0(n)-1,\\
0 & \text{otherwise. }
\end{cases}
\label{o2.4}
\end{align}
The new assumptions are
\begin{enumerate}
\item[$\bold{G1}$:] $(\mu'_j(n))_{j\geq 0}$ converges in $\ell^1$ to $(\mu_j)_{j\geq 0}$ as $n\to\infty$.
\item[$\bold{G2}$:] $(\lambda'_j(n))_{j\geq 0}$ converges in $\ell^1$ to $(\lambda_j)_{j\geq 0}$ as $n\to\infty$.
\end{enumerate}
\subsubsection{Noisy case}
In this case, we will give explicit formulas only in the case $N_0(n)=N_1(n)=1$.
In the general case the formulas are too complicated to be useful or informative.
Recall that we have assumed that $a_{ij}(\ell) =0 $ whenever $|i-j| \ne 1$.
In the noisy case we will assume that for all $n, \ell $ and $i\in\mathcal{D}_n\setminus\{n\}$, and some $b_i(n),c_i(n) >0$, the transition rates have the form,
\begin{align}\label{o3.1}
a_{i, i + 1}(\ell) &=
\begin{cases}
c_i(n)\ell,&\text{if }\ell\geq 0,\\
b_{i+1}(n)(-\ell) & \text{if }\ell< 0,
\end{cases}\\
a_{i + 1, i}(\ell) &=
\begin{cases}
c_i(n)(- \ell ), & \text{if }\ell\leq 0,\\
b_{i+1}(n)\ell & \text{if }\ell> 0.
\end{cases}\label{o3.2}
\end{align}
For $i \in \{1,\dots, n - 2\}$ we set $c_i(n) = n$ and $b_{i+1}(n)=0$.
By symmetry of our model, we can focus on the lower boundary $\partial \mathcal{D}_n^-$. Updating the noiseless schematics in \eqref{schema1} and \eqref{s23.3}, we obtain in the noisy case, when $\ell < 0$,
\begin{center}
\begin{tikzpicture}
\node[draw,circle,fill=gray!0] (A)at(3,3){$0$};
\node[draw,circle,fill=gray!0] (B)at(7,3){$1$};
\node[draw,circle,fill=gray!0] (C)at(11,3){$2$};
\node(draw) (D)at(7,5){};
\node(draw) (E)at(3,5){};
\draw[<-, >= latex] (A)to[bend left](B);
\draw (4.5,3) node [above right] {$c_{0}(n)|\ell|$};
\draw[->, >= latex] (A)to[bend right](B);
\draw (4.5,3) node [below right] {$b_{1}(n)|\ell|$};
\draw[<-, >= latex] (B)--(C);
\draw (8.5,3) node [above right] {$c_{1}(n)|\ell|$};
\draw[->, >= latex] (B)--(D);
\draw (7,4) node [above right] {$v_1(n)|\ell|$};
\draw[->, >= latex] (A)--(E);
\draw (3,4) node [above right] {$v_0(n)|\ell|$};
\end{tikzpicture}
\end{center}
In this schematic, $v_i \geq 0$; the values adjacent to the arrows indicate the magnitude of the incoming or outgoing flow,
and the direction designates the sign. When $\ell >0$, the schematic remains valid except that the direction of the arrows should be reversed.
Following \eqref{eq1}-\eqref{s21.1} we equate the sum of signed flows to zero. With the convention $c_{-1}(n)=c_{n}(n)=b_0(n)= b_{n+1}(n)=0$, we get for all $i\in\mathcal{D}_n$,
\begin{align*}
&0 = c_i(n)|\ell| + b_i(n)|\ell| - c_{i - 1}(n)|\ell| - v_i(n)|\ell|- b_{i+1}(n)|\ell| \\
\iff &b_{i+1}(n)+ v_i(n) + c_{i - 1}(n) = b_i(n) + c_i(n).
\end{align*}
Hence,
\begin{align*}
c_0(n) &= v_0(n) + b_1(n),\\
c_1(n) &= c_2(n)= v_0(n)+ v_1(n) =n.
\end{align*}
In the noisy case, we will use the following assumption.
\begin{enumerate}
\item[$\bold{K}$: ] $\lim_{n\rightarrow\infty}
\displaystyle \frac{b_1(n)}{n}= \vartheta_1 \geq 0$.
\end{enumerate}
\section{Convergence of approximations}\label{Sect2}
This section contains intermediate results needed to prove Theorem \ref{mainTheorem}. Some of them may have independent interest. All proofs will be postponed to Section \ref{Sect4}.
\subsection{Noiseless case}
The discussion of the noiseless case will be further subdivided into two cases, those of the hard boundary and soft boundary.
\subsubsection{Hard boundary}
Recall that ``hard boundary'' refers to the case $N_0(n) = 0$. Hence, $s\mapsto L_n(s)$ changes only if $X_n(s)\in \{0, n\}$.
This implies that if $X_n$ jumps to 0 at some time $t>0$, we must have $L_n(t) < 0$. Our transition rates are chosen so that $X_n$ cannot leave 0 until $L_n$ changes sign to positive. Thus, let us suppose that $(X_n(0),L_n(0))=(0,0)$. Recall our notation from \eqref{o11.1} and the assumption that $c_0(n)=v_{0}(n)=n$. Let $E_1$ be an exponential random variable with mean 1.
We have
\begin{align*}
T_1&=\inf\left(t\geq 0 : \int_0^t a_{01}(sv_{0})ds\geq E_1\right)
= \inf\left(t\geq 0 : \int_0^t n^2 s ds\geq E_1\right)
=\frac{\sqrt{2E_1}}{n}.
\end{align*}
Hence,
if $\ell<0$ and $(X_n(0),L_n(0))=(0,\ell)$ then, since the jump rate vanishes until $L_n$ reaches 0, the distribution of $L_n(T_1)= \ell + v_0(n)T_1$ is the same as the distribution of $\sqrt{2E_1}$.
Therefore,
\begin{align*}
\mathbb{P}(L_n(T_1)>r )&= \mathbb{P}(E_1>r^2/2)
=\Ep(-r^2/2).
\end{align*}
Consequently, the density of $L_n(T_1)$ is
$r\Ep(-r^2/2)$ for $r>0$. This is the density of what is called the Rayleigh distribution with parameter 1.
The unique feature of the hard boundary reflection is that the distribution of the ``velocity'' just after the reflection depends neither on the incoming velocity nor on $n$.
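This reflection law is easy to verify by direct sampling. The Python sketch below (a hypothetical illustration) draws $\sqrt{2E_1}$ and can be checked against the Rayleigh mean $\sqrt{\pi/2}$ and tail $\mathbb{P}(R>r)=e^{-r^2/2}$.

```python
import math
import random

def sample_outgoing_speed(rng):
    """Hard-boundary reflection: the speed just after reflection is
    sqrt(2*E_1) with E_1 ~ Exp(1), i.e. Rayleigh with parameter 1; it
    depends neither on the incoming velocity nor on n."""
    return math.sqrt(2.0 * rng.expovariate(1.0))
```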
\subsubsection{Soft boundary}\label{soft_boundary}
In the soft boundary case, the evolution is more interesting than in the hard boundary case.
At the moment when the process $X_n$ enters the lower boundary layer $\partial \mathcal{D}_n^-$, its ``velocity'' $L_n$ must be negative. The particle $X_n$ will continue to transition downward until $L_n$ changes sign or $X_n$ reaches 0. Consequently, we must determine the distribution of the level at which the velocity $L_n$ changes sign. Once the velocity becomes positive, it increases
until $X_n$ exits $\partial \mathcal{D}_n^-$.
Let
\begin{align}\label{s30.21}
\mc{T}_n &= \inf\{t\geq 0: L_n(t) \geq 0\}, \\
G_n &= X_n(\mc{T}_n), \label{s30.22}\\
\mc{U}_n &= \inf\{t\geq \mc{T}_n: X_n(t) \notin \partial \mathcal{D}_n^-\}, \label{s30.23}\\
\mc{V}_n(\ell) &= \mc{L}(L_n(\mc{U}_n)\mid X_n(0) =N_0(n),L_n(0) = \ell),
\qquad \ell<0,
\label{o5.3}\\
\mc{V}^+_n(\ell) &= \mc{L}(L_n(\mc{U}_n)\mid X_n(0) = n - N_1(n),L_n(0) = \ell),
\qquad \ell>0,
\label{o5.4}\\
p_j(n, \ell) &=
\mathbb{P}\left( G_n =j \mid X_n(0) =N_0(n), L_n(0) = \ell \right).\label{s24.1}
\end{align}
\begin{proposition}\label{LevelDistrib} Assume either $\bold{F2}$ or $\bold{F'}$. For $\ell <0$,
\begin{align}\label{s30.1}
&p_{N_0(n)}(n, \ell)=\Ep\left(-\frac{\ell^2}{2\mu_{N_0(n)-1}(n)}\right),\\
&p_k(n, \ell)
=\left(\prod_{j=k}^{N_0(n)-1} \frac1{\mu_j(n)} \right)
\sum_{j=k-1}^{N_0(n)-1} \left(\Ep\left(-\frac{\ell^2}{2 \mu_j(n)}\right)
\prod_{\substack{i=k-1\\i \ne j}}^{N_0(n)-1}
\frac 1{1/\mu_i(n) - 1/\mu_j(n)}
\right),\label{o1.2}\\
& \qquad\qquad \text{ for } 0 < k < N_0(n),\notag\\
&p_0(n, \ell) = 1-\sum_{k=1}^{N_0(n)}p_k(n, \ell) .\label{s30.3}
\end{align}
\end{proposition}
\begin{proposition}\label{CondExitVel} Assume either $\bold{F1}$ or $\bold{F'}$. Given $k\in \partial \mathcal{D}_n^-$ and $\ell<0$, and conditional on $X_n(0) =N_0(n)$, $L_n(0) = \ell$, and $G_n = k $, the distribution of $L_n(\mc{U}_n)$ is the same as that of
\begin{align}\label{s29.3}
\left(2\sum_{j=k}^{N_0(n)} \lambda_j(n) E_j\right)^{1/2},
\end{align}
where $E_k, \dots, E_{N_0(n)}$ are i.i.d. exponential random variables with mean 1. The density of this random variable is equal to
\begin{align}\label{s29.4}
f_{k,n}(r) &:=
r\left(\prod_{j=k}^{N_0(n)}\frac1{\lambda_j(n)} \right)
\sum_{j=k}^{N_0(n)} \left(\Ep\left(-\frac{r^2}{2\lambda_j(n)}\right)
\prod_{\substack{i=k\\i \ne j}}^{N_0(n)}
\frac 1 {1/\lambda_i(n) - 1/\lambda_j(n)}
\right).
\end{align}
\end{proposition}
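Both the representation \eqref{s29.3} and the density \eqref{s29.4} are straightforward to implement. The Python sketch below uses hypothetical helper names and assumes the $\lambda_j$'s are pairwise distinct, as guaranteed by $\bold{F1}$; the density should integrate to 1 and the squared exit speed should have mean $2\sum_j \lambda_j$.

```python
import math
import random

def sample_exit_speed(lam, k, rng):
    """Sample (2 * sum_{j=k}^{N0} lam[j] * E_j)**0.5 as in (s29.3),
    with E_j i.i.d. Exp(1)."""
    return math.sqrt(2.0 * sum(lam[j] * rng.expovariate(1.0)
                               for j in range(k, len(lam))))

def exit_speed_density(lam, k, r):
    """Evaluate the density (s29.4); requires lam[k:] pairwise distinct."""
    ls = lam[k:]
    prod_inv = 1.0
    for l in ls:
        prod_inv /= l
    total = 0.0
    for j, lj in enumerate(ls):
        term = math.exp(-r * r / (2.0 * lj))
        for i, li in enumerate(ls):
            if i != j:
                term /= (1.0 / li - 1.0 / lj)
        total += term
    return r * prod_inv * total
```

The two functions describe the same law: \eqref{s29.3} is a square root of a hypoexponential random variable, and \eqref{s29.4} is the corresponding push-forward density.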
The following corollary follows easily from Propositions \ref{LevelDistrib} and \ref{CondExitVel} and the strong Markov property applied at $\mc{T}_n$, so we will not supply a formal proof.
\begin{corollary}\label{o2.7}
Assume $\bold{F'}$. If $ \ell<0$, $\mc{V}_n(\ell)$ is the same as the distribution of
\begin{align}\label{s24.5}
\sum_{k=0}^{N_0(n)}\left(2\sum_{j=k}^{N_0(n)}\lambda_j(n)E_j\right)^{1/2} \mathbbm{1}_{Z=k},
\end{align}
where the $E_j$'s are i.i.d. exponential random variables with mean 1 and $Z$ is an independent random variable with
$\mathbb{P}(Z=j) = p_j(n,\ell)$ for $j\in \partial \mathcal{D}_n^-$, where $p_j(n,\ell)$ are as in \eqref{s30.1}-\eqref{s30.3}.
\end{corollary}
\begin{theorem}\label{ThinBoundaries}
Suppose that $\ell_n < 0$ for $n\geq 1$ and $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$. Assume $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, and $\bold{F3}$. Suppose that $E_1, E_2, \dots$ are i.i.d. exponential random variables with mean 1.
(i) Assume that $N_0(n) = N< \infty$ for all $n$. Then for every $k\in \partial \mathcal{D}_n^-$, the following limit exists,
\begin{align}\label{s30.4}
p_k(\ell) := \lim_{n\rightarrow\infty} p_k(n, \ell_n).
\end{align}
(ii) Assume that $N_0(n) = N< \infty$. Then, when $n\to \infty$, $\mc{V}_n(\ell_n)$ converges to the distribution of
\begin{align*}
\sum_{j=0}^N \left(2\sum_{i=j}^N\lambda_i E_i\right)^{1/2}
\mathbbm{1}_{Z(\ell)=j},
\end{align*}
where $Z(\ell)$ is a random variable
with values in $\{0,1, \dots, N\}$, independent of $E_j$'s and such that
$\mathbb{P}(Z(\ell)=j)=p_j(\ell)$, $ j=0,\dots, N$. The values of $p_j(\ell)$, $j=0,\dots, N$ are given by \eqref{s16.1}-\eqref{s23.5}.
(iii) Assume that $N_0(n) = 1$. Then, when $n\to \infty$, $\mc{V}_n(\ell_n)$ converges to the distribution of
$$\sqrt{2\left(E_0 + \beta_1 E_1\right)}\mathbbm{1}_{Z(\ell)=0}+ \sqrt{2\beta_1 E_1}\mathbbm{1}_{Z(\ell)=1},$$
where $Z(\ell)$ is a random variable independent of the collection of $E_j$'s such that $\mathbb{P}(Z(\ell)=j)= p_j(\ell),\; j=0,1$. The values of $p_0$ and $p_1$ are given by \eqref{s23.4}.
\end{theorem}
\begin{remark} We presented the case $N_0(n)=1$ in Theorem \ref{ThinBoundaries}, in addition to the general case $N_0(n)=N$, so that Theorem \ref{ThinBoundaries} (iii) may be directly compared to Theorem \ref{Cvge_Veloc}, its counterpart in the case of a noisy soft boundary.
\end{remark}
\begin{proposition}\label{CVGE_probs} Assume $\bold{G1}$. Suppose that $\ell_n < 0$ for $n\geq 1$, $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$, and $\lim_{n\to \infty } N_0(n) =\infty$. Then, for every $k\geq 0$, the following limit exists,
\begin{align}\label{s23.6}
p_k(\ell) := \lim_{n\rightarrow\infty} p_k(n, \ell_n).
\end{align}
\end{proposition}
When $N_0(n) \to \infty$ as $n\to\infty$, the counterpart of Theorem \ref{ThinBoundaries} is the following.
\begin{theorem}\label{Infinite_layers}
Assume $\bold{G1}$-$\bold{G2}$. Suppose that $\ell_n < 0$ for $n\geq 1$, $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$, and $\lim_{n\to \infty } N_0(n) =\infty$.
Then, when $n\to \infty$, $\mc{V}_n(\ell_n)$ converges to the distribution of
\begin{align*}
\sum_{j=0}^\infty \left(2\sum_{i=j}^\infty\lambda_i E_i\right)^{1/2}
\mathbbm{1}_{Z(\ell)=j},
\end{align*}
where $E_1, E_2, \dots$ are i.i.d.\ exponential random variables with mean 1 and $Z(\ell)\geq 0$ is independent of the collection of $E_j$'s such that
$\mathbb{P}(Z(\ell)=j)=p_j(\ell)$, $ j\geq0$. The probabilities $p_j(\ell)$ are defined in \eqref{s23.6}.
\end{theorem}
\subsection{Noisy case}
The following result is a noisy counterpart of Proposition \ref{LevelDistrib}.
\begin{proposition}\label{Layer_Reverse} Assume that $\ell<0$.
Let $\beta_1(n) = c_0(n)/v_1(n)$ and $\beta_2(n) = b_1(n)/v_0(n)$.
(i)
If $\beta_1(n)\neq \beta_2(n)$ then,
\begin{align*}
p_1(n, \ell)
&=\sum_{k\geq 0}\Bigg(\beta_1(n)^k\beta_2(n)^k \Ep\left(-\beta_1(n)\ell^2/2\right)\times\\
&\qquad\times\int_0^{\ell^2/2}\Bigg[\sum_{i=1}^2\sum_{j=1}^k \frac{(-1)^{k-j}}{(j-1)!}u^{j-1}\Ep(-(\beta_i(n)-\beta_1(n))u)\times\\
&\qquad\qquad\qquad\times \binom{2k-j-1}{k-j}\big(\beta_{3-i}(n)-\beta_i(n)\big)^{-(2k-j)}\Bigg]du\Bigg).
\end{align*}
(ii)
If $\beta_1(n)= \beta_2(n)$ then,
\begin{align*}
p_1(n, \ell)
=\sum_{k\geq 0} \frac{(\beta_1(n)\ell^2)^{2k} }{2^{2k}(2k)!}\Ep\left(-\beta_1(n) \ell^2/2\right).
\end{align*}
\end{proposition}
\begin{corollary}\label{Limit_Layer_reverse} Assume $\bold{F3}$ and $\bold{K}$.
Suppose that $\ell_n < 0$ for $n\geq 1$ and $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$. Then $p_1(\ell) = \lim_{n\rightarrow\infty}p_1(n,\ell_n) $ exists.
\end{corollary}
The term ``geometric distribution'' may refer to either of two closely related distributions. In this article, a random variable $R$ will be called geometric with parameter $p$ if $\mathbb{P}(R = j) = (1-p)^{j-1}p$ for $j=1,2,\dots$.
We have the following analogue of Proposition \ref{CondExitVel}.
\begin{proposition}\label{Exit_Velo}
If $\ell<0$ then $\mc{V}_n(\ell)$ is the distribution of
\begin{align}\label{s24.3}
\left(2\frac{v_0(n)}{c_0(n)} E + S_n\right)^{1/2}\mathbbm{1}_{Z=0}+ S_n^{1/2} \; \mathbbm{1}_{Z=1} ,
\end{align}
where
\begin{align}\label{SnL}
S_n = 2\frac{v_1(n)}{c_1(n)}E'+\sum_{k=1}^{J(n) -1} 2\frac{v_1(n)}{b_1(n)}E_k'+\sum_{k=1}^{J(n) -1} 2\frac{v_0(n)}{c_0(n)}E_k''.
\end{align}
The random variables $E$, $E'$, $(E_k')_k$, $(E_k'')_k$ are i.i.d.\ exponential with mean 1, $J(n)$ is geometric with parameter $c_1(n)/(c_1(n)+b_1(n)) $, and $Z$ takes values 0 or 1
and satisfies $\mathbb{P}(Z=1) = p_1(n,\ell)$, where $p_1(n,\ell)$ is given in Proposition \ref{Layer_Reverse}. All of these random variables are assumed to be independent.
\end{proposition}
\begin{theorem}\label{Cvge_Veloc}
Assume $\bold{F3}$ and $\bold{K}$. Suppose that $\ell_n < 0$ for $n\geq 1$ and $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$.
Then there exist constants $\gamma_0,\gamma_1,\gamma_2 \in (0,+\infty)$ and $s\in[0,1)$ such that
\begin{align}\label{o5.1}
\gamma_0 &= \lim_{n\rightarrow\infty}\frac{2v_0(n)}{c_0(n)}, \qquad
\gamma_1 = \lim_{n\rightarrow\infty}\frac{2v_1(n)}{c_1(n)},
\qquad \gamma_2 = \lim_{n\rightarrow\infty}\frac{2v_1(n)}{b_1(n)} ,\\
\gamma_3&= \lim_{n\rightarrow\infty}\frac{c_1(n)}{c_1(n)+b_1(n)}. \label{o5.2}
\end{align}
Moreover, the distributions $\mc{V}_n(\ell_n)$ converge to the distribution of
\begin{align}\label{o5.8}
\sqrt{\gamma_0 E + S}\;\mathbbm{1}_{Z(\ell)=0}+ \sqrt{S}\;\mathbbm{1}_{Z(\ell)=1},
\end{align}
where
\begin{align}\label{o5.7}
S = \gamma_1 E' + \sum_{j=1}^{J-1}\left(\gamma_0 E_j' + \gamma_2 E_j''\right).
\end{align}
The random variables $E$, $E'$, $(E_k')_k$, $(E_k'')_k$ are i.i.d. exponential with mean 1.
The distribution of
$Z(\ell)\in \{0,1\}$ is determined by $\mathbb{P}(Z(\ell)=1)= p_1(\ell)$, with $p_1(\ell)$ defined in Corollary \ref{Limit_Layer_reverse}. The
random variable $J$ is geometric with parameter $\gamma_3$.
All these random variables are independent.
\end{theorem}
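The limit law \eqref{o5.8}-\eqref{o5.7} is straightforward to sample. The following Python sketch draws from it; the parameter values used below are illustrative choices of ours, not quantities computed in the paper.

```python
import random

def sample_limit_velocity(gammas, p1, rng):
    """Draw from the limit law (o5.8): with probability p1 (= P(Z=1))
    return sqrt(S), otherwise sqrt(gamma0*E + S), where
    S = gamma1*E' + sum_{j=1}^{J-1} (gamma0*Ej' + gamma2*Ej'') and
    J is geometric, P(J=j) = (1-gamma3)**(j-1) * gamma3, j >= 1.
    All parameter values fed in are illustrative."""
    g0, g1, g2, g3 = gammas
    J = 1
    while rng.random() >= g3:          # geometric number of round trips
        J += 1
    S = g1 * rng.expovariate(1.0)
    for _ in range(J - 1):
        S += g0 * rng.expovariate(1.0) + g2 * rng.expovariate(1.0)
    return (S if rng.random() < p1 else g0 * rng.expovariate(1.0) + S) ** 0.5
```

The second moment of the output, $p_1\,\mathbb{E}S + (1-p_1)(\gamma_0 + \mathbb{E}S)$ with $\mathbb{E}S = \gamma_1 + (\gamma_3^{-1}-1)(\gamma_0+\gamma_2)$, gives a quick consistency check on the sampler.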
\subsection{Convergence to the billiard process}\label{Sect3}
We will prove, under appropriate assumptions, that the sequence $(X_n/n, L_n)$ converges in distribution to a billiard process with Markovian reflections. Let
\begin{align}\label{o5.10}
t_0(n) & = 0,\\
s_j(n) & = \inf\{t\geq t_j(n): X_n(t) \in \partial \mathcal{D}_n^- \cup \partial \mathcal{D}_n^+\} ,\quad j\geq 0,\label{o5.11}\\
t_{j+1}(n) & = \inf\{t\geq s_j(n): X_n(t) \in \mathcal{D}_n \setminus( \partial \mathcal{D}_n^- \cup \partial \mathcal{D}_n^+)\}, \quad j\geq 0.\label{o5.12}
\end{align}
These are successive times when the process $X_n$ enters or leaves the boundaries.
\begin{proposition}\label{timeOnBoundary}
Suppose that the distributions of $ L_n(0)$, $n\geq 1$, are tight.
Make one of the following assumptions.
(i) Consider the noiseless model.
If $N_0(n) = N < \infty$ for all $n$,
suppose that $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, and $\bold{F3}$ hold. If $\lim_{n\rightarrow\infty} N_0(n)=\infty$, assume instead that $\bold{G1}$-$\bold{G2}$ hold.
(ii) Consider the noisy model and suppose that $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, $\bold{F3}$, and $\bold{K}$ hold. Recall that $N_0(n)=1$ for all $n$.
Then for every $j\geq 0$,
\begin{align*}
\lim_{n\rightarrow\infty} \left(t_{j+1}(n)-s_{j}(n)\right) =0,\qquad\text{ in distribution.}
\end{align*}
\end{proposition}
\begin{definition}
Recall definitions \eqref{o5.3}-\eqref{o5.4} of $\mc{V}_n(\ell)$ and $\mc{V}^+_n(\ell)$.
Let
\begin{align}\label{s25.1}
\mc{V}_\infty(\ell) &= \lim_{n \to \infty} \mc{V}_n(\ell),\qquad \ell<0,\\
\mc{V}^+_\infty(\ell)& = \lim_{n \to \infty} \mc{V}^+_n(\ell),\qquad \ell>0,
\label{o5.5}
\end{align}
if the limits exist.
\end{definition}
For assumptions under which the limit in \eqref{s25.1} exists, see Theorems \ref{ThinBoundaries},
\ref{Infinite_layers} and \ref{Cvge_Veloc}. Analogous results hold for the second limit, by symmetry. The distributions $\mc{V}_\infty(\ell)$ and $\mc{V}^+_\infty(\ell)$ (if the limits in \eqref{s25.1}-\eqref{o5.5} exist) give no mass to 0.
\begin{remark}
It follows from Theorem \ref{ThinBoundaries} that every distribution given in Definition \ref{def:vel} arises as the limit in \eqref{s25.1} for some sequence $(X_n, L_n)$.
\end{remark}
Suppose that the limits in \eqref{s25.1}-\eqref{o5.5} exist for every $\ell$ in the corresponding range.
For any $(x_0, \ell_0) \in (0,1)\times \mathbb{R}\setminus\{0\}$, we will define a billiard process $(X,L)$ with Markovian reflections starting from $(x_0, \ell_0)$.
Consider $\ell_0<0$. The case $\ell_0>0$ can be treated in an analogous way.
First we form a Markov chain $\{R_j, j\geq 0\}$ by setting $R_0 = \ell_0$ and giving it the Markovian transition mechanism
\begin{align}\label{s26.1}
\mc{L} (R_{2j+1} \mid R_{2j} = \ell, R_{2j-1}, \dots, R_0) &= \mc{V}_\infty(\ell), \qquad j\geq 0,\\
\mc{L} (R_{2j+2} \mid R_{2j+1} = \ell, R_{2j}, \dots, R_0) &= \mc{V}^+_\infty(\ell), \qquad j\geq 0.
\label{o5.6}
\end{align}
We define
the process $(X,L)$ by
\begin{align}\label{s26.2}
u_0&=0,\\
W_0(t) &= x_0+R_0 t, \quad t \geq 0,\\
u_{j+1} &= \inf\{t> u_j: W_{j}(t) \notin (0,1)\}, \quad j \geq 0,\\
W_j(t) & = W_{j-1}(u_j) + R_j (t - u_j), \quad t\geq u_j, j \geq 1,\\
X(t) & = W_j(t), \quad t\in[u_j, u_{j+1}), \quad j\geq 0,\\
L(t) &= R_j, \quad t\in[u_j, u_{j+1}),\quad j\geq 0.\label{s26.3}
\end{align}
Note that $u_j > u_{j-1}$ for all $j\geq 1$, a.s., because the distributions $\mc{V}_\infty(\ell)$ and $\mc{V}^+_\infty(\ell)$ give no mass to 0 and, therefore, $R_j \ne 0$ for all $j\geq 0$, a.s.
We have constructed a billiard process $(X, L)$ with Markovian reflections on $ [0, \sup_j u_j)$.
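The construction \eqref{s26.2}-\eqref{s26.3} amounts to a simple event-driven simulation. In the following Python sketch, `sample_v_minus` and `sample_v_plus` are hypothetical stand-ins for samplers of $\mc{V}_\infty(\ell)$ and $\mc{V}^+_\infty(\ell)$.

```python
def simulate_billiard(x0, l0, sample_v_minus, sample_v_plus, n_reflections=10):
    """Event-driven sketch of the billiard process (X, L) defined in
    (s26.2)-(s26.3) on [0, 1].  sample_v_minus(l) stands in for a
    sampler of V_infty(l) (l < 0, returns a positive velocity) and
    sample_v_plus(l) for V_infty^+(l) (l > 0, returns a negative
    velocity).  Returns (time, position, velocity) triples at the
    reflection times u_j."""
    t, x, v = 0.0, x0, l0
    path = [(t, x, v)]
    for _ in range(n_reflections):
        # u_{j+1} - u_j: time until the free motion W_j leaves (0, 1)
        dt = (1.0 - x) / v if v > 0 else -x / v
        t, x = t + dt, (1.0 if v > 0 else 0.0)
        # Markovian reflection: draw the next velocity R_{j+1}
        v = sample_v_minus(v) if v < 0 else sample_v_plus(v)
        path.append((t, x, v))
    return path
```

With the degenerate samplers $\ell \mapsto -\ell$ this reduces to specular reflection; in the setting of Theorem \ref{convProc}, the samplers would draw from the laws in \eqref{s25.1}-\eqref{o5.5}.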
\begin{theorem}\label{convProc}
Assume that $(X_n(0)/n, L_n(0))$ converge in distribution to a pair of random variables $(X(0), L(0))$, as $n\to \infty$. Suppose that $X(0)\in(0,1)$ and $L(0) \ne 0$, a.s.
Make one of the following assumptions.
(i) Consider the noiseless model.
Assume that $N_0(n) = N < \infty$ for all $n$, and
suppose that $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, and $\bold{F3}$ hold.
(ii) Consider the noiseless model.
Assume that $\lim_{n\rightarrow\infty} N_0(n)=\infty$, and suppose that $\bold{G1}$-$\bold{G2}$ hold.
(iii) Assume the noisy model and suppose that $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, $\bold{F3}$, and $\bold{K}$ hold. Recall that $N_0(n)=1$ for all $n$.
Then
(a) $\sup_j u_j = \infty$, a.s. It follows that $(X,L)$ is defined on $[0,\infty)$.
(b)
$\{(X_n(t)/n, L_n(t)) : t \in [0, T]\}$ converges in distribution
to a billiard process with Markovian reflections $\{(X(t), L(t)) : t\in [0, T]\}$ as $n \to \infty$, for every fixed $T <\infty$.
The distribution of $(X,L)$ is determined by \eqref{s25.1}-\eqref{s26.3}.
\end{theorem}
\section{Proofs}\label{Sect4}
\begin{remark}\label{o4.1}
We will use the following results from \cite{MA,JK}.
(i) Suppose that $E_1, E_2, \dots, E_k$ are i.i.d. exponential random variables with mean 1 and consider $ \alpha _j\in(0,\infty)$, $j=1,\dots, k$, such that $\alpha_i\ne \alpha_j$ for $i\ne j$.
Then, according to \cite[Thm. 2.1]{MA}, the density of $\sum_{j=1}^k \alpha_j E_j $ is equal to
\begin{align}\label{s27.1}
f(r) = \left(\prod_{m=1}^{k}\frac1{\alpha_m} \right)
\sum_{j=1}^k \left(\Ep(-r / \alpha_j)
\prod_{\substack{i=1\\i \ne j}}^{k}
\frac1{1/\alpha_i -1/ \alpha_j}
\right).
\end{align}
(ii) Suppose that $z_1, z_2, \dots , z_m$ are distinct complex numbers and $z\ne z_j$ for all $j$. Then, according to \cite[(2.4)]{MA},
\begin{align}\label{o1.1}
\frac 1 {\prod_{j=1}^m (z_j-z)} = \sum_{i=1}^m \displaystyle\frac 1 {(z_i-z)\displaystyle\prod_{\substack{j=1\\j \ne i}}^m (z_j-z_i)}.
\end{align}
(iii) Consider the sum $S$ of $k_1+k_2 + \dots+ k_r$ independent random variables with exponential distributions.
Suppose that exactly $k_j$ of these
random variables have mean $\alpha_j$, for $j=1,\dots, r$. Assume that $\alpha_i\ne \alpha_j$ for $i\ne j$. According to \cite[Thm. 1]{JK} (see also \cite[Thm. 4.1]{MA}), the density $f_S(u)$ of $S$ is given by the following formula, for $u>0$,
\begin{align*}
\sum_{i=1}^r \frac 1{\alpha_i^{k_i}}
\Ep\left(-u / \alpha_i\right)
\sum_{j=1}^{k_i} \frac{(-1)^{k_i-j}}{(j-1)!}
u^{j-1}
\sum_{\substack{m_1+m_2+\dots+m_r=k_i-j\\m_i =0}}
\prod_{\substack{l=1\\l\ne i }}^r
\binom{k_l + m_l -1 }{m_l}
\frac{\alpha_l^{-k_l}}{(\alpha_l^{-1} - \alpha_i^{-1})^{k_l+m_l}}.
\end{align*}
\end{remark}
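The density formula \eqref{s27.1} is easy to check numerically. A Python sketch, with illustrative $\alpha_j$'s:

```python
import math, random

def hypoexp_density(r, alphas):
    """Density (s27.1) of sum_j alpha_j*E_j, with E_j i.i.d. mean-1
    exponentials and the alphas pairwise distinct ([MA, Thm 2.1])."""
    prefac = 1.0
    for a in alphas:
        prefac /= a
    total = 0.0
    for j, aj in enumerate(alphas):
        prod = 1.0
        for i, ai in enumerate(alphas):
            if i != j:
                prod /= (1.0 / ai - 1.0 / aj)
        total += math.exp(-r / aj) * prod
    return prefac * total
```

Integrating the density over $(0,\infty)$ and comparing with the empirical distribution of $\sum_j \alpha_j E_j$ confirms the formula.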
\begin{proof}[Proof of Proposition \ref{LevelDistrib}]
The proof has two steps. First we compute the value of the memory process $L_n$ at the times when the particle jumps from site $i$ to site $i-1$. This will allow us to compute the distribution of $G_n$ in the second part of the proof.
Consider a family $E_j$, $j \geq 1$, of i.i.d. exponential random variables with mean 1. We can represent $(X_n,L_n)$ as follows.
Suppose that $X_n(0) = N_0(n)$ and $L_n(0)<0$.
For $0 \leq i \leq N_0(n)$ let
\begin{align}
\tau_0 &=0,\label{s28.1}\\
\tau_i &= \inf\left\{t > \tau_{i - 1} : \int_{\tau_{i - 1}}^ta_{N_0(n)-i+1,N_0(n)- i}(L_n(s))ds > E_i\right\}, \qquad i\geq 1,\label{s28.2}\\
\Delta \tau_i &= \tau_{i+ 1} - \tau_{i}, \qquad i\geq 0.\label{s28.3}
\end{align}
The following remarks apply as long as $L_n$ stays negative.
The $\tau_i$'s are the times when $X_n$ jumps from $N_0(n)-i+1$ to $N_0(n)-i $. The amount of time that the process spends at $N_0(n)-i$ is represented by $\Delta \tau_i$. It follows that $v_{N_0(n)-i}(n)\Delta \tau_i$
is the amount of memory (i.e., the increment of $L_n$) that is accumulated at $N_0(n)-i$. If the process $X_n$ arrives at
$N_0(n)-i$ with $L_n(\tau_{i}) = \ell<0$ then $X_n$ will leave $N_0(n)-i$ for $N_0(n)-i - 1$ at time $\tau_{i + 1}$ with the memory process $L_n$ taking the value
\begin{align}\label{s28.4}
L_n(\tau_{i + 1}) = \ell + v_{N_0(n)-i}(n)\Delta \tau_i
=-(\vert \ell\vert-v_{N_0(n)-i}(n)\Delta \tau_i),
\end{align}
provided this quantity is
negative.
By definition, $X_n$ arrives at $N_0(n) - i$ at time $\tau_{i}$.
Recall our choice for $a_{i,i+1}(\ell)$ from \eqref{s21.7}-\eqref{s21.8}. It follows from this and \eqref{s28.1}-\eqref{s28.3} that if
$L_n(\tau_{i}
) = \ell<0$ then
$\Delta \tau_{i}$ is the smallest $t>0$ such that
\begin{align*}
\int_0^tc_{N_0(n) - i - 1}(n)(|\ell| - v_{N_0(n) - i}(n)s)ds > E_{i+1}.
\end{align*}
We set both sides equal to each other and solve the resulting equation for $t$ as follows. First, we have
\begin{align*}
|\ell| t - v_{N_0(n) - i}(n)t^2/2 = E_{i+1}/c_{N_0(n) - i - 1}(n),
\end{align*}
and then we find zeros using the quadratic formula,
\begin{align}\label{Time_Spent}
\frac{|\ell|}{v_{N_0(n) - i}(n)} \pm \frac{\sqrt{\ell^2 - 2(v_{N_0(n) - i}(n)E_{i+1}/c_{N_0(n) - i - 1}(n))}}{v_{N_0(n) - i}(n)}.
\end{align}
Provided $2(v_{N_0(n) - i}(n)E_{i+1}/c_{N_0(n) - i - 1}(n))\leq \ell^2$, the smaller of the two
nonnegative zeros is
\begin{align}\label{Time_Spent2}
\frac{|\ell|}{v_{N_0(n) - i}(n)} - \frac{\sqrt{\ell^2 - 2(v_{N_0(n) - i}(n)E_{i+1}/c_{N_0(n) - i - 1}(n))}}{v_{N_0(n) - i}(n)}.
\end{align}
If we take this as the value of $\Delta \tau_{i}$ and combine this formula with \eqref{s28.4}, we obtain,
\begin{align*}
L_n(\tau_{i+1})=-\left(\ell^2-2\frac{v_{N_0(n)-i}(n)E_{i+1}}{c_{N_0(n)-i-1}(n)}\right)^{1/2}.
\end{align*}
It follows from this and \eqref{s28.6} that
\begin{align*}
L_n^2(\tau_{i+1})&=L_n^2(\tau_i)-2\frac{v_{N_0(n)-i}(n)E_{i+1}}{c_{N_0(n)-i-1}(n)}
=L_n^2(\tau_i)-2 \mu_{N_0(n)-i-1}(n) E_{i+1}.
\end{align*}
Hence, if $L_n(0) = \ell <0$ then for $k=0,1,\dots, N_0(n)-1$,
\begin{align*}
&\{G_n= N_0(n)-k\} \\
&= \left\{\sum_{i=0}^{k-1}\mu_{N_0(n)-i-1}(n)E_{i+1} < \ell^2/2,
\sum_{i=0}^{k}\mu_{N_0(n)-i-1}(n)E_{i+1} \geq \ell^2/2\right\}\\
&= \left\{\sum_{j=N_{0}(n)-k}^{N_0(n)-1}\mu_{j}(n)E_{N_0(n)-j} < \ell^2/2,
\sum_{j=N_{0}(n)-k-1}^{N_0(n)-1}\mu_{j}(n)E_{N_0(n)-j} \geq \ell^2/2\right\}.
\end{align*}
If we change the variable by taking $m=N_0(n)-k$ then we obtain for $m=1,2,\dots, N_0(n)$,
\begin{align}\label{s29.1}
\{G_n= m\}
= \left\{\sum_{j=m}^{N_0(n)-1}\mu_{j}(n)E_{N_0(n)-j} < \ell^2/2,
\sum_{j=m-1}^{N_0(n)-1}\mu_{j}(n)E_{N_0(n)-j} \geq \ell^2/2\right\}.
\end{align}
Set
\begin{align*}
Z_m(n)=\sum_{j=m}^{N_0(n)-1}\mu_j(n)E_{N_0(n)-j},\qquad m=1,2,\dots, N_0(n).
\end{align*}
It follows from \eqref{s27.1} that the density of $Z_m(n)$ is given by
\begin{align}\label{s29.2}
f_{Z_{m}(n)}(t):=
\left(\prod_{j=m}^{N_0(n)-1}\frac 1{\mu_j(n)} \right)
\sum_{k=m}^{N_0(n)-1} \left(\Ep(-t / \mu_k(n))
\prod_{\substack{i=m\\i \ne k}}^{N_0(n)-1}
\frac1{1/\mu_i(n) -1/ \mu_k(n)}
\right),
\end{align}
provided that $\mu_j(n)\neq \mu_k(n)$ for $k\neq j$; this is the case because we assumed that $\bold{F2}$ or $\bold{F'}$ holds.
For $m= N_0(n)$, we do not need \eqref{s29.2}; formula \eqref{s29.1} yields,
\begin{align*}
\mathbb{P}(G_n=N_0(n)) &= \mathbb{P}\left(\mu_{N_0(n)-1}(n)E_{1}\geq \ell^2/2\right)
= e^{-\mu_{N_0(n)-1}(n)^{-1}\ell^2/2}.
\end{align*}
For $m=1, \dots, N_0(n)-1$, we use \eqref{s29.1} and \eqref{s29.2} as follows,
\begin{align}\label{o4.3}
&\mathbb{P} (G_n=m)= \mathbb{P}\left(Z_{m-1}(n)\geq \ell^2/2, Z_{m}(n)<\ell^2/2\right)\\
&= \int_0^{\ell^2/2} \mathbb{P}\left(\mu_{m-1}(n)E_{N_0(n) -m+1}\geq \ell^2/2-u\right)f_{Z_{m}(n)}(u)du \notag \\
&= \int_0^{\ell^2/2} \Ep\left(-\mu_{m-1}(n)^{-1}(\ell^2/2-u)\right) \times \notag \\
&\qquad \times \left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\sum_{k=m}^{N_0(n)-1} \left(\Ep(-u / \mu_k(n))
\prod_{\substack{i=m\\i \ne k}}^{N_0(n)-1}
\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right)du \notag \\
&= \Ep\left(-\mu_{m-1}(n)^{-1}\ell^2/2\right)
\left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right) \times \notag \\
&\qquad \times
\sum_{k=m}^{N_0(n)-1} \left(\int_0^{\ell^2/2}\Ep(- (\mu_k(n)^{-1}-\mu_{m-1}(n)^{-1})u) du
\prod_{\substack{i=m\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
&= \Ep\left(-\mu_{m-1}(n)^{-1}\ell^2/2\right)
\left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right) \times \notag \\
&\qquad \times
\sum_{k=m}^{N_0(n)-1} \left(
\frac{1- \Ep(- (\mu_k(n)^{-1}-\mu_{m-1}(n)^{-1})\ell^2/2)}
{\mu_k(n)^{-1}-\mu_{m-1}(n)^{-1}}
\prod_{\substack{i=m\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
&= \left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)\times \notag \\
&\quad \times
\sum_{k=m}^{N_0(n)-1} \left(
\left(\Ep\left(-\mu_{k}(n)^{-1}\ell^2/2\right)- \Ep(- \mu_{m-1}(n)^{-1}\ell^2/2)\right)
\prod_{\substack{i=m-1\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
&= \left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\sum_{k=m}^{N_0(n)-1} \left(\Ep\left(-\mu_{k}(n)^{-1}\ell^2/2\right)
\prod_{\substack{i=m-1\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
& \qquad-\left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\Ep(- \mu_{m-1}(n)^{-1}\ell^2/2)
\sum_{k=m}^{N_0(n)-1}
\prod_{\substack{i=m-1\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}. \notag
\end{align}
We now apply \eqref{o1.1} to the last line to obtain
\begin{align*}
\mathbb{P} & (G_n=m)
= \left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\sum_{k=m}^{N_0(n)-1} \left(\Ep\left(-\mu_{k}(n)^{-1}\ell^2/2\right)
\prod_{\substack{i=m-1\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
& \qquad-\left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\Ep(- \mu_{m-1}(n)^{-1}\ell^2/2)
\prod_{i=m}^{N_0(n)-1}
\frac1{1/\mu_i(n) - 1/\mu_{m-1}(n)}
\notag \\
&= \left(\prod_{j=m}^{N_0(n)-1}\frac1{\mu_j(n)} \right)
\sum_{k=m-1}^{N_0(n)-1} \left(\Ep\left(-\mu_{k}(n)^{-1}\ell^2/2\right)
\prod_{\substack{i=m-1\\i \ne k}}^{N_0(n)-1}\frac1{1/\mu_i(n) - 1/\mu_k(n)}
\right) \notag \\
&= \mu_{m-1}(n)f_{Z_{m-1}(n)}(\ell^2/2).
\end{align*}
This and \eqref{s29.2} yield \eqref{o1.2}.
Finally, \eqref{s30.3} is true because $p_j(n, \ell) = 0$ for $j\notin \partial \mathcal{D}_n^-$.
\end{proof}
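The holding-time computation \eqref{Time_Spent2} and the resulting memory update can be sanity-checked numerically. A minimal Python sketch with illustrative values of $\ell$, $v = v_{N_0(n)-i}(n)$, $c = c_{N_0(n)-i-1}(n)$:

```python
import math

def next_memory(ell, v, c, e):
    """Noiseless model: arriving at a site with L_n = ell < 0, drift
    slope v and rate constant c, given an exponential draw e with
    2*v*e/c <= ell**2, return (dt, new_ell): the holding time
    (Time_Spent2) and the updated memory value
    new_ell = -sqrt(ell**2 - 2*v*e/c) from (s28.4)."""
    disc = ell * ell - 2.0 * v * e / c
    dt = (abs(ell) - math.sqrt(disc)) / v
    return dt, -(abs(ell) - v * dt)
```

The returned `dt` should solve the crossing equation $c(|\ell| t - v t^2/2) = e$, and the squared update $L_n^2 \mapsto L_n^2 - 2ve/c$ follows immediately.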
\begin{proof}[Proof of Proposition \ref{CondExitVel}]
Suppose that $E_j$, $j\geq 0$, are i.i.d. exponential with mean 1.
Suppose that $X_n(0)=i\in \partial\mathcal{D}_n^-$ and $L_n(0)=\ell\geq 0$, and let $T$ be the time of the first jump, necessarily to $i+1$. We can represent $T$ as follows,
\begin{align}\label{o3.3}
T=\inf\left\{t>0: \int_0^t c_i(n)\big(\ell+ v_i(n)s\big)ds\geq E_i\right\}.
\end{align}
Hence, $T$ is the smallest positive solution to
\begin{align}\label{o3.4}
T\ell + T^2 v_i(n) /2 = E_i/c_i(n).
\end{align}
This yields
\begin{align}\label{TimeToWait}
v_i(n) T &= -\ell +\left(\ell^2 + 2\frac{v_i(n)}{c_i(n)}E_i\right)^{1/2},\\
\label{Pos_initial_velo}
L_n(T)&= \ell+ v_i(n)T=\left(\ell^2 + 2\frac{v_i(n)}{c_i(n)}E_i\right)^{1/2}.
\end{align}
Recall notation from \eqref{s30.21} and \eqref{s30.23}.
By the strong Markov property applied at the stopping time $\mc{T}_n$, the distribution of $L_n(\mc{U}_n)$ is the same in the following cases: (i) $X_n(0) =N_0(n)$, $L_n(0) = \ell<0$, and $G_n = k $, and (ii)
$X_n(0) = k$ and $L_n(0) =0$.
Assume (ii).
An application of the strong Markov property at the jump times from $j$ to $j+1$ for $j=k, \dots , N_0(n)$, and \eqref{Pos_initial_velo} show that the distribution of $L_n(\mc{U}_n)$ is the same as that of
\begin{align*}
\left(2\sum_{j=k}^{N_0(n)}E_{j}\frac{v_{j}(n)}{c_{j}(n)}\right)^{1/2}
=\left(2\sum_{j=k}^{N_0(n)}E_{j}\lambda_{j}(n)\right)^{1/2}.
\end{align*}
This proves \eqref{s29.3}.
To prove \eqref{s29.4}, note that $L_n(\mc{U}_n)^2/2$ can be represented as the sum of independent exponential random variables. Their means are all distinct, i.e., $\lambda_j(n)\neq \lambda_i(n)$ for all $i\neq j$,
because we assumed that either $\bold{F1}$ or $\bold{F'}$ holds.
Thus, we can use \eqref{s27.1} to conclude that
\begin{align*}
\mathbb{P}&(L_n(\mc{U}_n)\leq r) = \mathbb{P}( L_n(\mc{U}_n)^2/2\leq r^2/2)\\
&= \int_0^{r^2/2}\left(\prod_{j=k}^{N_0(n)}\frac1{\lambda_j(n)} \right)
\sum_{j=k}^{N_0(n)} \left(\Ep(-t / \lambda_j(n))
\prod_{\substack{i=k\\i \ne j}}^{N_0(n)}
\frac1{1/\lambda_i(n) - 1/\lambda_j(n)}
\right)dt.
\end{align*}
Differentiating the above expression with respect to $r$ yields \eqref{s29.4}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{ThinBoundaries}]
(i) In view of Remark \ref{s30.20} (ii) and the explicit formulas \eqref{s30.1}-\eqref{s30.3}, the limit in \eqref{s30.4} exists once we check that passing to the limit does not introduce division by 0. By Remark \ref{s30.20} (iii), $\mu_i > \mu_{i+1} >0$, so one can take the limit in \eqref{s30.1}-\eqref{s30.3} as $n\to \infty$ and the limiting formulas do not involve division by 0.
(ii)
It follows from Remark \ref{s30.20} (ii) that, for every $k\leq N$,
\begin{align*}
\left(2\sum_{j=k}^{N}\lambda_j(n)E_j\right)^{1/2}
\to
\left(2\sum_{j=k}^{N}\lambda_j E_j\right)^{1/2} ,
\end{align*}
in distribution. This and part (i) of the theorem easily imply part (ii).
Part (iii) is a special case of part (ii).
\end{proof}
\begin{proof}[Proof of Proposition \ref{CVGE_probs}]
Suppose that $E_j$, $j\geq 1$, are i.i.d. exponential with mean 1. Recall notation from \eqref{o2.1}-\eqref{o2.4}.
Fix some $k\geq 1$ and consider $n$ such that $N_0(n)>k$. Set
\begin{align*}
&Y_{n,k} = \sum_{j= k}^\infty\mu'_j(n)E_j,
\qquad Y_k = \sum_{j= k}^\infty \mu_j E_j.
\end{align*}
We have assumed $\bold{G1}$ so $\sum_{j\geq 1} \mu_j < \infty$. This, the fact that the $E_j$'s are exponential, and Kolmogorov's three-series theorem easily imply that $Y_k$ is well defined and finite, a.s.
Since $\mu_kE_k$ has a density, so does $Y_k =\mu_kE_k+ \sum_{j= k+1}^\infty \mu_j E_j$. Hence, $\mathbb{P}(Y_k= \ell^2/2) =0$ for every $\ell$. Fix some $\delta>0$ and $\ell<0$, and find $\varepsilon>0$ so small that
\begin{align}\label{o2.6}
\mathbb{P}\left( (\ell^2-\varepsilon)/2 \leq Y_{k}\leq (\ell^2+\varepsilon)/2\right)\leq \delta .
\end{align}
We apply formula \eqref{s29.1} to see that
\begin{align*}
p_k(n,\ell_n) &=
\mathbb{P}( G_n = k\mid X_n(0) = N_0(n), L_n(0) = \ell_n)\\
&=\mathbb{P}\left(\sum_{j=k-1}^{N_0(n)-1}\mu_j(n)E_j\geq \frac{\ell_n^2}{2}, \sum_{j=k}^{N_0(n)-1}\mu_j(n)E_j<\frac{\ell_n^2}{2}\right)\\
&=\mathbb{P}\left(\sum_{j=k-1}^{N_0(n)-1}\mu_j(n)E_j\geq \frac{\ell_n^2}{2}\right) - \mathbb{P}\left(\sum_{j=k}^{N_0(n)-1}\mu_j(n)E_j\geq \frac{\ell_n^2}{2}\right)\\
&=\mathbb{P}\left(\sum_{j= k-1}^\infty\mu'_j(n)E_j\geq \frac{\ell_n^2}{2}\right) - \mathbb{P}\left(\sum_{j= k}^\infty\mu'_j(n)E_j\geq \frac{\ell_n^2}{2}\right)\\
&= \mathbb{P}\left(Y_{n,k-1}\geq \ell_n^2/2\right)-\mathbb{P}\left(Y_{n,k}\geq \ell_n^2/2\right) .
\end{align*}
It will suffice to prove that, for any fixed $k$, $\mathbb{P}\left(Y_{n,k}\geq \ell_n^2/2\right)$ converges to $\mathbb{P}\left(Y_{k}\geq \ell^2/2\right)$
as $n$ goes to infinity.
Since $\ell_n\to \ell$, we can find $n_1$ so large that $\left\vert\ell_n^2/2- \ell^2/2\right\vert < \varepsilon/2$ for $n\geq n_1$.
Then, for $n\geq n_1$,
\begin{align*}
\mathbb{P}\left(Y_{n,k}\geq \frac{\ell^2+\varepsilon}{2}\right)
\leq \mathbb{P}\left(Y_{n,k}\geq \frac{\ell_n^2}{2}\right)
\leq \mathbb{P}\left(Y_{n,k}\geq \frac{\ell^2-\varepsilon}{2}\right),
\end{align*}
and, therefore,
\begin{align}\label{o2.5}
&\left\vert \mathbb{P}\left(Y_{n,k}\geq \frac{\ell_n^2}{2}\right) - \mathbb{P}\left(Y_{k}\geq \frac{\ell^2}{2}\right)\right\vert \\
&\leq \max\left(\left\vert \mathbb{P}\left(Y_{n,k}\geq \frac{\ell^2-\varepsilon}{2}\right) - \mathbb{P}\left(Y_{k}\geq \frac{\ell^2}{2}\right)\right\vert ,\
\left\vert \mathbb{P}\left(Y_{n,k}\geq \frac{\ell^2+\varepsilon}{2}\right) - \mathbb{P}\left(Y_{k}\geq \frac{\ell^2}{2}\right)\right\vert \right). \notag
\end{align}
We will estimate one of the quantities under ``max'' on the right hand side. The other one can be estimated in a similar way.
We have, by assumption $\bold{G1}$,
\begin{align}\label{o2.8}
\mathbb{E}\left|Y_{n,k}-Y_k\right|
&=
\mathbb{E}\left(\left\vert\sum_{j\geq k}\mu'_j(n)E_j- \sum_{j\geq k}\mu_j E_j\right\vert\right)
\leq \mathbb{E}\left(\sum_{j\geq k}\left\vert\mu'_j(n)- \mu_j\right\vert E_j\right)\\
&= \sum_{j\geq k}\left\vert\mu'_j(n)- \mu_j \right\vert
\to 0, \qquad \text{ when } n\to\infty.\notag
\end{align}
Hence $Y_{n,k}$ converges in $L^1$, thus in distribution, to $Y_k$ as $n$ goes to infinity.
Since $Y_k$ has a density, Portmanteau's theorem implies that there exists $n_2$ such that for all $n\geq n_2$,
\begin{align*}
\left\vert\mathbb{P}\left( Y_{n,k} \geq \frac{\ell^2-\varepsilon}{2}\right)- \mathbb{P}\left( Y_{k} \geq \frac{\ell^2-\varepsilon}{2}\right)\right\vert\leq \delta .
\end{align*}
Combined with \eqref{o2.6}, this yields
\begin{align*}
\left\vert\mathbb{P}\left( Y_{n,k} \geq \frac{\ell^2-\varepsilon}{2}\right)- \mathbb{P}\left( Y_{k} \geq \frac{\ell^2}{2}\right)\right\vert\leq 2\delta .
\end{align*}
An analogous estimate holds for the other quantity under ``max'' on the right hand side of \eqref{o2.5} so, for large $n$,
\begin{align*}
&\left\vert \mathbb{P}\left(Y_{n,k}\geq \frac{\ell_n^2}{2}\right) - \mathbb{P}\left(Y_{k}\geq \frac{\ell^2}{2}\right)\right\vert \leq 2 \delta.
\end{align*}
Since $\delta>0$ is arbitrarily small, this completes the proof.
\end{proof}
\begin{lemma}\label{o16.1}
Suppose that for every $n\geq 1$, random variables $A_{k,n}$, $k\geq1$, and $Z_n$ are defined on the same probability space, and for each $n$, $Z_n$ is independent of $A_{k,n}$, $k\geq1$.
Suppose that $A_k$, $k\geq 1$, and $Z$ are also defined on the same probability space and $Z$ is independent of $A_k$, $k\geq 1$.
Assume that $Z$ and $Z_n$ take only strictly positive integer values, for each $n$, a.s.
Suppose that $A_{k,n} \to A_k$ and $Z_n \to Z$, in distribution, as $n\to \infty$, for every $k$.
Let
\begin{align*}
S_n = \sum_{k=1}^\infty A_{k,n} \mathbbm{1}_{\{Z_n=k\}}, \qquad
S = \sum_{k=1}^\infty A_{k} \mathbbm{1}_{\{Z=k\}}.
\end{align*}
Then $S_n$ converges to $S$ in distribution, as $n\to\infty$.
\end{lemma}
\begin{proof}
The proof is routine so we only sketch it.
For any $\varepsilon >0$, there is $k_0$ such that $\mathbb{P}(Z\geq k_0) < \varepsilon$. Hence, there is $n_0$ such that for $n\geq n_0$, $\mathbb{P}(Z_n\geq k_0) < 2\varepsilon$.
This implies that it will suffice to show that
\begin{align*}
\sum_{k=1}^j A_{k,n} \mathbbm{1}_{\{Z_n=k\}} \to
\sum_{k=1}^j A_{k} \mathbbm{1}_{\{Z=k\}},
\end{align*}
in distribution, for every fixed $j\geq 1$.
For any random variable $X$, let $\phi_{X}(t)$ denote its characteristic function.
We need to show that
\begin{align*}
\mathbb{E}\left(\sum_{k=1}^j \phi_{A_{k,n}}(t) \mathbbm{1}_{\{Z_n=k\}} \right)\to
\mathbb{E}\left(\sum_{k=1}^j \phi_{A_{k}}(t) \mathbbm{1}_{\{Z=k\}}\right),
\end{align*}
for every real $t$, as $n\to \infty$. This follows from (i) the pointwise convergence $\phi_{A_{k,n}}(t) \to \phi_{A_{k}}(t)$, (ii) the Skorokhod representation theorem, which lets us assume that $Z_n \to Z$, a.s., and (iii) the dominated convergence theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Infinite_layers}]
Recall notation and definitions from \eqref{o2.1}-\eqref{o2.4} and \eqref{s30.21}-\eqref{s24.1}.
Suppose that $\ell_n < 0$ for $n\geq 1$, $\lim_{n\rightarrow \infty}\ell_n =\ell < 0$, and $\lim_{n\to \infty } N_0(n) =\infty$.
Assume that $X_n(0) =N_0(n)$ and $L_n(0) = \ell_n$ for all $n$.
By Corollary \ref{o2.7},
the distribution of $L_n(\mc{U}_n)$ is the same as that of
\begin{align*}
\sum_{j=0}^\infty \left(2\sum_{i=j}^\infty\lambda'_i(n) E_i\right)^{1/2}
\mathbbm{1}_{Z_n(\ell_n)=j},
\end{align*}
where $E_1, E_2, \dots$ are i.i.d. exponential random variables with mean 1; $Z_n(\ell_n)\geq 0$ is an integer valued random variable, independent of $E_j$'s and such that
$\mathbb{P}(Z_n(\ell_n)=j)=p_j(n,\ell_n)$, $ j\geq0$.
In other words,
if $\mc{L}_{j,n}$ denotes the distribution of $\left(2\sum_{i=j}^\infty\lambda'_i(n) E_i\right)^{1/2}$ and $\nu_n$ denotes the distribution of $Z_n(\ell_n)$ then
$\mc{V}_n(\ell_n)$ is a mixture of distributions $\mc{L}_{j,n}$ with the mixing measure $\nu_n$ for the index $j$.
Let $\mc{L}_{j}$ denote the distribution of $\left(2\sum_{i=j}^\infty\lambda_i E_i\right)^{1/2}$ and let $\nu$ denote the distribution of $Z(\ell)$.
The argument given in \eqref{o2.8} shows that $\mc{L}_{j,n}\to \mc{L}_{j}$ for every $j$, except that we have to replace $\mu$'s with $\lambda$'s, and use assumption $\bold{G2}$.
Distributions $\nu_n$ converge to $\nu$ by Proposition \ref{CVGE_probs}.
We use Lemma \ref{o16.1} to conclude that $L_n(\mc{U}_n)$ converge in distribution to the mixture of distributions $\mc{L}_{j}$ with the mixing measure $\nu$ for the index $j$.
This is equivalent to the statement of the theorem.
\end{proof}
\begin{lemma}\label{AgainstFlow}
Consider the noisy model and
recall that in the noisy case we assume that $N_0(n) =1$.
Let $T_1$ denote the time of the first jump of $X_n$. We have
\begin{align*}
\mathbb{P}\left( X_n(T_1) =0 \mid X_n(0)= 1, \; L_n(0)= \ell > 0\right)
&=\frac{b_1(n)}{c_1(n)+b_1(n)}.
\end{align*}
\end{lemma}
\begin{proof}
Recall from \eqref{o3.1}-\eqref{o3.2} that, for $\ell>0$, the jump rates are $a_{1,0}(\ell)= b_1(n)|\ell|$ and $a_{1,2}(\ell)= c_1(n)|\ell|$.
Suppose that $E_1$ and $E_1'$ are independent exponential random variables with parameter $1$.
An argument similar to that in \eqref{o3.3}-\eqref{TimeToWait} yields the following representation of the probability in question,
\begin{align*}
&\mathbb{P}\left( X_n(T_1) =0 \mid X_n(0)= 1, \; L_n(0)= \ell > 0\right)\\
&=\mathbb{P}\left(-\frac{\ell}{v_1(n)}+\frac{1}{v_1(n)}
\left(\ell^2 + 2\frac{v_1(n)}{b_1(n)}E_1\right)^{1/2}
<-\frac{\ell}{v_1(n)}+\frac{1}{v_1(n)}
\left(\ell^2 + 2\frac{v_1(n)}{c_1(n)}E_1'\right)^{1/2}\right)\\
&= \mathbb{P}\left(\frac{E_1}{b_1(n)}<\frac{E_1'}{c_1(n)}\right)
=\frac{b_1(n)}{c_1(n)+b_1(n)}.
\end{align*}
\end{proof}
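The race between the two holding times in the proof above can be checked by Monte Carlo simulation; a sketch with illustrative values of $b_1(n)$, $c_1(n)$, $\ell$ and $v_1(n)$:

```python
import random

def p_down_first(b1, c1, ell, v1, n_trials=100000, seed=1):
    """Monte Carlo check of Lemma AgainstFlow: starting at site 1 with
    memory ell > 0 and drift v1, the down jump (rate b1*|L|) beats the
    up jump (rate c1*|L|) with probability b1/(c1+b1).  All input
    values are illustrative."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        e_down, e_up = rng.expovariate(1.0), rng.expovariate(1.0)
        # holding times as in (o3.3)-(TimeToWait), with rates b1, c1
        t_down = (-ell + (ell ** 2 + 2 * v1 * e_down / b1) ** 0.5) / v1
        t_up = (-ell + (ell ** 2 + 2 * v1 * e_up / c1) ** 0.5) / v1
        wins += t_down < t_up
    return wins / n_trials
```

As in the proof, the comparison of the two square roots reduces to $E_1/b_1 < E_1'/c_1$, so the answer does not depend on $\ell$ or $v_1$.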
\begin{proof}[Proof of Proposition \ref{Layer_Reverse}]
The proof is similar to that of Proposition \ref{LevelDistrib} so we will only sketch the main steps. The key to our calculation is a formula analogous to \eqref{s29.1}. In the present case, $X_n$ starts from 1 and may jump between 0 and 1 any number of times before $L_n$ changes the sign from negative to positive. Every visit to 0 or 1 is associated with a positive increment of $L_n$. These observations can be implemented as follows.
Assume that $\beta_1(n)= c_0(n)/v_1(n)\neq b_1(n)/v_0(n) = \beta_2(n)$.
For $k\geq 0$, let
\begin{align*}
Y_k(n) = \sum_{j=1}^k \frac{v_1(n)}{c_0(n)} E_j + \sum_{j=1}^k \frac{v_0(n)}{b_1(n)} E_j'
= \sum_{j=1}^k \beta_1(n)^{-1} E_j + \sum_{j=1}^k \beta_2(n)^{-1} E_j',
\end{align*}
where $(E_j)_j$ and $(E_j')_j$ are i.i.d.\ exponential random variables with parameter $1$.
It follows from Remark \ref{o4.1} (iii) with $r=2$ and $k_1=k_2=k$ that for $k\geq 1$, the density function of $Y_k(n)$ is
\begin{align}\label{o4.2}
f_{k,n}(u)=
\beta_1(n)^k\beta_2(n)^k\sum_{i=1}^2\sum_{j=1}^k \frac{(-1)^{k-j}}{(j-1)!}u^{j-1}e^{-\beta_i(n)u}
\binom{2k-j-1}{k-j}\big(\beta_{3-i}(n)-\beta_i(n)\big)^{-(2k-j)}.
\end{align}
The following formula is analogous to \eqref{o4.3},
\begin{align}\label{Prob_rev_@1}
p_1(n,\ell) &=\sum_{k\geq 0}\mathbb{P}\left(Y_k(n) + \beta_1(n)^{-1}E_{k+1}\geq \ell^2/2\; ,\; Y_k(n)<\ell^2/2\right).
\end{align}
A single term in the sum has the following representation,
\begin{align}\label{at_kth_step}
&\mathbb{P}\left(Y_k(n) + \beta_1(n)^{-1}E_{k+1}\geq \ell^2/2\; ,\; Y_k(n)<\ell^2/2\right)\\
&= \int_0^{\ell^2/2} \mathbb{P}\left(\beta_1(n)^{-1} E_{k+1}\geq \ell^2/2-u\right)f_{k,n}(u)du\notag\\
&=\int_0^{\ell^2/2} \Ep\left(-\beta_1(n)(\ell^2/2-u)\right) f_{k,n}(u)du\notag\\
&=\beta_1(n)^k\beta_2(n)^k \Ep(-\beta_1(n)\ell^2/2)\int_0^{\ell^2/2}\Bigg[\sum_{i=1}^2\sum_{j=1}^k \frac{(-1)^{k-j}}{(j-1)!}u^{j-1}
\Ep(-(\beta_i(n)-\beta_1(n))u)\notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times \binom{2k-j-1}{k-j}\big(\beta_{3-i}(n)-\beta_i(n)\big)^{-(2k-j)}\Bigg]du.\notag
\end{align}
This and \eqref{Prob_rev_@1} prove part (i) of the proposition.
If $\beta_1(n)= c_0(n)/v_1(n)= b_1(n)/v_0(n) = \beta_2(n)$ then $Y_k(n)$ is
the sum of $2k$ i.i.d. exponential random variables with parameter $\beta_1(n)$ and, therefore, it has the following Gamma density,
\begin{align*}
f_{k,n}(u) =\frac{\beta_1(n)^{2k} u^{2k-1}}{(2k-1)!}\Ep(-\beta_1(n) u).
\end{align*}
It follows that
\begin{align}\label{o4.4}
&\mathbb{P}\left(Y_k(n) + \beta_1(n)^{-1}E_{k+1}\geq \ell^2/2\; ,\; Y_k(n)<\ell^2/2\right)\\
&= \int_0^{\ell^2/2} \mathbb{P}\left(\beta_1(n)^{-1} E_{k+1}\geq \ell^2/2-u\right)f_{k,n}(u)du\notag\\
&=\int_0^{\ell^2/2} \Ep\left(-\beta_1(n)(\ell^2/2-u)\right)
\frac{\beta_1(n)^{2k} u^{2k-1}}{(2k-1)!}\Ep(-\beta_1(n) u) du\notag\\
&= \frac{\left(\beta_1(n)\ell^2/2\right)^{2k} }{(2k)!}
\Ep(-\beta_1(n) \ell^2/2).\notag
\end{align}
The second part of the proposition follows from this formula and \eqref{Prob_rev_@1}.
\end{proof}
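In the equal-rates case, the series in part (ii) sums in closed form: with $x = \beta_1(n)\ell^2/2$ it equals $e^{-x}\cosh(x) = (1+e^{-2x})/2$ (an observation of ours, not used in the text). A Python check, also verifying the series against a direct simulation of the crossing event in \eqref{Prob_rev_@1}:

```python
import math, random

def p1_equal_betas(beta, ell, k_max=60):
    """Series from Proposition Layer_Reverse (ii):
    sum_{k>=0} (beta*ell^2)^(2k) / (2^(2k)*(2k)!) * exp(-beta*ell^2/2).
    With x = beta*ell**2/2 this is exp(-x)*cosh(x) = (1+exp(-2x))/2."""
    x = beta * ell * ell / 2.0
    return math.exp(-x) * sum(x ** (2 * k) / math.factorial(2 * k)
                              for k in range(k_max))
```

In the simulation below, increments of mean $1/\beta$ are added alternately (one for each $E_j$, one for each $E_j'$), and the trial succeeds if the level $\ell^2/2$ is first crossed by an $E$-increment, matching a single term of \eqref{Prob_rev_@1}.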
\begin{proof}[Proof of Corollary \ref{Limit_Layer_reverse}]
We have to prove that we can pass to the limit in formulas given in Proposition \ref{Layer_Reverse}. We will refer to the proof of that proposition below.
It follows easily from assumptions $\bold{F3}$ and $\bold{K}$ and explicit formulas in \eqref{at_kth_step} and \eqref{o4.4} that for every $\ell<0$ and $k\geq 0$,
\begin{align}\label{o4.6}
\lim_{n\rightarrow \infty}
\mathbb{P}\left(Y_k(n) + \beta_1(n)^{-1}E_{k+1}\geq \ell^2/2\; ,\; Y_k(n)<\ell^2/2\right)
\end{align}
exists.
For $k\geq 1$, we have
\begin{align}\label{o4.5}
\mathbb{P}&\left(Y_k(n) + \beta_1(n)^{-1}E_{k+1}\geq \ell_n^2/2\; ,\; Y_k(n)<\ell_n^2/2\right)
\leq \mathbb{P}\left( Y_k(n)<\ell_n^2/2\right)\\
&\leq \mathbb{P}\left(\beta_1(n)^{-1}E_j<\ell_n^2/2 \;\forall\; j=1,\dots, k\right)
= \left(1-e^{-\beta_1(n)\ell_n^2/2}\right)^k.\notag
\end{align}
The assumptions made in the corollary and $\bold{F3}$ imply that for some constant $C< \infty$ and some $n_1$, we have $\beta_1(n)\ell_n^2/2 < C$ for all $n\geq n_1$. Hence, there exists $q\in (0,1)$ such that $\Big(1-e^{-\beta_1(n)\ell_n^2/2}\Big)\leq q$ for $n\geq n_1$. This and \eqref{o4.5} imply that the series in \eqref{Prob_rev_@1} is dominated by a geometric series. Thus, in view of \eqref{o4.6}, the limit stated in the corollary exists.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Exit_Velo}]
The proof of the proposition is analogous to that of Corollary \ref{o2.7}. The evolution of the process is split into two parts by the stopping time $\mc{T}_n$. The pre-$\mc{T}_n$ evolution is captured by Proposition \ref{Layer_Reverse}. The amount accumulated by $L_n$ between times $\mc{T}_n$ and $\mc{U}_n$ can be represented by a formula analogous to \eqref{s24.5} in the noiseless case. In the present case, $X_n$ can jump between 0 and 1 even when $L_n$ is positive so, to account for these jumps, we need two sequences of exponential random variables, representing repeated visits to 0 and 1. Once $L_n$ becomes positive, the number of jumps between 0 and 1 is geometric with the parameter determined in Lemma \ref{AgainstFlow}.
We leave the details of the proof to the reader.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Cvge_Veloc}]
The limits in \eqref{o5.1}-\eqref{o5.2} exist because we assumed
$\bold{F3}$ and $\bold{K}$.
To prove the second claim of the theorem, note that
the distributions of random variables in \eqref{s24.3} and \eqref{SnL} are mixtures, with mixing measures being the distributions of random variables $Z$ and $J(n)$. Due to convergence of the parameters stated in \eqref{o5.1}-\eqref{o5.2}, the distributions of the individual components
$2\frac{v_1(n)}{c_1(n)}E', 2\frac{v_1(n)}{b_1(n)}E_j'$ and $ 2\frac{v_0(n)}{c_0(n)}E_j''$
in the mixtures converge to the limits $\gamma_1 E',\gamma_0 E_j'$ and $ \gamma_2 E_j''$, which are the terms of the sum in \eqref{o5.7}.
The distributions of the mixing random variables, $Z$ and $J(n)$, converge due to Corollary \ref{Limit_Layer_reverse} and \eqref{o5.2}.
This proves that the mixtures converge, which is precisely the claim of the theorem.
\end{proof}
\begin{proof}[Proof of Proposition \ref{timeOnBoundary}]
(i)
First we will consider the noiseless model.
Fix some $\ell <0$ and assume that $X_n(0) = N_0(n)$ and $L_n(0) = \ell$. It is routine to modify our argument for the case $\ell >0$ and $X_n(0) \ne N_0(n)$. Under these assumptions, $s_0(n) = 0$. We will estimate $t_1(n) - s_0(n) = t_1(n)$. Recall notation from \eqref{s30.21} and \eqref{s30.23} and note that $[s_0(n) , t_1(n)] = [s_0(n), \mc{T}_n) \cup [\mc{T}_n, \mc{U}_n]$.
The process $X_n$ will jump toward 0 until it reaches a random point $G_n$ (see \eqref{s30.22} for the definition) and then $X_n$ will jump away from 0 until it exits $\partial \mathcal{D}_n^-$ at time $\mc{U}_n$.
Fix some site $i\in \partial \mathcal{D}_n^- \setminus\{0\}$ and
suppose that $X_n$ arrives at $i$ at a time $u_-$, and $L_n(u_-) =\ell_i<0$. Let
\begin{align*}
\Delta u_i^- = \inf\{t\geq 0: X_n(u_- + t)\ne i \text{ or }
u_- +t = \mc{T}_n\}.
\end{align*}
The following representation of $\Delta u_i^-$ is the same as in the proof of Proposition \ref{LevelDistrib}, except that we are using different notation.
Consider an exponential random variable $E_i$ with mean 1.
If $\ell_i^2> 2\frac{v_i(n)}{c_{i-1}(n)}E_i $ then, by \eqref{Time_Spent2},
\begin{align}\label{o7.1}
\Delta u_i^-
&= \inf\{t\geq 0: X_n(u_- + t)\ne i\} = \frac{\vert\ell_i\vert}{v_i(n)} - \frac{1}{v_i(n)}\left(\ell_i^2 -2 \frac{v_i(n)}{c_{i-1}(n)}E_i\right)^{1/2}\\
&< \left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2}.\notag
\end{align}
The inequality on the right hand side holds because it is equivalent to $\ell_i^2> 2\frac{v_i(n)}{c_{i-1}(n)}E_i $, as an elementary calculation shows.
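For the reader's convenience, the elementary calculation can be spelled out. Write $s:=2\frac{v_i(n)}{c_{i-1}(n)}E_i$ and note that the right hand side of \eqref{o7.1} equals $\sqrt{s}/v_i(n)$. Multiplying through by $v_i(n)$, the claimed inequality reads
\begin{align*}
\vert\ell_i\vert-\left(\ell_i^2-s\right)^{1/2}<\sqrt{s}
\quad\Longleftrightarrow\quad
\vert\ell_i\vert-\sqrt{s}<\left(\ell_i^2-s\right)^{1/2}.
\end{align*}
On the event $\ell_i^2>s$, both sides of the last inequality are non-negative, so squaring shows that it is equivalent to $\ell_i^2-2\vert\ell_i\vert\sqrt{s}+s<\ell_i^2-s$, i.e., to $\sqrt{s}<\vert\ell_i\vert$, which is precisely the condition $\ell_i^2>s$.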
If $\ell_i^2\leq 2\frac{v_i(n)}{c_{i-1}(n)}E_i $ then
\begin{align}\label{o7.2}
\Delta u_i^- = \inf\{t\geq 0:
u_- +t = \mc{T}_n\}
= \frac{\vert \ell_i\vert}{v_i(n)}\leq \left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2}.
\end{align}
It follows from \eqref{o7.1}-\eqref{o7.2} that
\begin{align}\label{o7.3}
\Delta u_i^- \leq \left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2}.
\end{align}
If $i=0$ then $X_n$ can jump to site 1 only after time $\mc{U}_n$ so
\begin{align}\label{o7.4}
\Delta u_0^- = \inf\{t\geq 0:
u_- +t = \mc{T}_n\}
= \frac{\vert \ell_0\vert}{v_0(n)}\leq \frac{\vert \ell\vert}{v_0(n)}.
\end{align}
The above inequality holds because when $X_n$ is on its way from $N_0(n)$ to 0 and the initial value of $L_n$ is $\ell<0$ then the value of $|L_n|$ can only decrease.
If $X_n$ does not visit $i$ on its way from $N_0(n)$ to 0 then we let $\Delta u_i^- =0$.
Summing $\Delta u_i^- $ over all $i\in \partial \mathcal{D}_n^-$, we obtain from \eqref{o7.3} and \eqref{o7.4},
\begin{align}\label{o7.5}
\mc{T}_n = \sum_{i=0}^{N_0(n)} \Delta u_i^-
\leq \frac{\vert \ell\vert}{v_0(n)} + \sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2},
\end{align}
where $E_i$ are i.i.d. exponential with mean 1.
Fix some site $i\in \partial \mathcal{D}_n^- \setminus\{0\}$ and
suppose that $X_n$ arrives at $i$ at a time $u_+$, but now suppose that $L_n(u_+) =\ell_i>0$. Let
\begin{align*}
\Delta u_i^+ = \inf\{t\geq 0: X_n(u_+ + t)\ne i\}.
\end{align*}
Reasoning as in the previous part of the proof and using \eqref{Pos_initial_velo}, we obtain
\begin{align}\label{o7.6}
\Delta u_i^+
& = -\frac{\ell_i}{v_i(n)} + \frac{1}{v_i(n)}\left(\ell_i^2 +2 \frac{v_i(n)}{c_{i}(n)}E'_i\right)^{1/2}
\leq \left(\frac{2}{v_i(n)c_{i}(n)}E'_i\right)^{1/2},
\end{align}
where $E'_i$ is mean one exponential.
The above inequality is elementary.
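To spell out the elementary step, write $s':=2\frac{v_i(n)}{c_{i}(n)}E'_i$ and multiply through by $v_i(n)$; the inequality in \eqref{o7.6} then reads
\begin{align*}
\left(\ell_i^2+s'\right)^{1/2}\leq \ell_i+\sqrt{s'},
\end{align*}
which follows by squaring both sides, because $\ell_i>0$ makes the cross term $2\ell_i\sqrt{s'}$ on the right hand side non-negative.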
Since $G_n$ does not have to be 0, we have the following bound
\begin{align}\label{o7.7}
\mc{U}_n -\mc{T}_n\leq \sum_{i=0}^{N_0(n)} \Delta u_i^+
\leq \sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i}(n)}E'_i\right)^{1/2},
\end{align}
where $E'_i$ are i.i.d. exponential with mean 1.
Combining \eqref{o7.5} and \eqref{o7.7}, we obtain
\begin{align*}
\mc{U}_n
\leq \frac{\vert \ell\vert}{v_0(n)} + \sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2}
+\sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i}(n)}E'_i\right)^{1/2},
\end{align*}
where $E_i$ and $E'_i$ are i.i.d. exponential with mean 1.
We now remove the condition $L_n(0)=\ell$ and write
\begin{align}\label{o7.10}
\mc{U}_n
\leq \frac{\vert L_n(0)\vert}{v_0(n)} + \sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i-1}(n)}E_i\right)^{1/2}
+\sum_{i=0}^{N_0(n)}
\left(\frac{2}{v_i(n)c_{i}(n)}E'_i\right)^{1/2}.
\end{align}
By assumptions $\bold{F3}$ and $\bold{G1}$-$\bold{G2}$, there exists a constant $C>0$ such that
\begin{align*}
\mc{U}_n
\leq \frac{C\vert L_n(0)\vert}{n} + \frac C n \sum_{i=0}^{N_0(n)}
\sqrt{E_i}
+\frac C n \sum_{i=0}^{N_0(n)}
\sqrt{E'_i}.
\end{align*}
The first term on the right hand side converges to 0 in distribution, as $n\to \infty$, because we assumed that
the distributions of $ L_n(0)$, $n\geq 1$, are tight. The other two terms converge to 0 in distribution by the law of large numbers, in view of Assumption \ref{o8.1} (ii).
This completes the proof that
$\lim_{n\to\infty} t_1(n) - s_0(n) =0$ in distribution.
To extend the proof to show that $\lim_{n\to\infty} t_{j+1}(n) - s_j(n) =0$ for $j\geq 1$, it will suffice, by the strong Markov property, to argue that for every fixed $j$, the distributions of $L_n(s_j(n))$, $n\geq 1$, are tight.
By randomizing $\ell$ in Corollary \ref{o2.7}, we see that if $ L_n(0)$, $n\geq 1$, are tight then $L_n(t_1(n))$, $n\geq 1$, are tight. Since $L_n(s_{j+1}(n)) = L_n(t_{j+1}(n))$ for all $j$, we see that $L_n(s_1(n))$, $n\geq 1$, are tight. Then we proceed by induction and use the strong Markov property to conclude that for every $j$,
$L_n(s_j(n))$, $n\geq 1$, are tight and $L_n(t_j(n))$, $n\geq 1$, are tight.
(ii) In the noisy case, the argument is very similar to that in the noiseless case so we will only sketch the proof. The new version of the key estimate will be based on
Proposition \ref{Exit_Velo}. We will
use the representation \eqref{s24.3}-\eqref{SnL}
together with the inequality $\sqrt{x+y}\leq \sqrt{x}+\sqrt{y}$.
We obtain the following
analogue of \eqref{o7.10},
\begin{align*}
\mc{U}_n
\leq & \frac{\vert L_n(0)\vert}{v_0(n)} +
\left(\frac{2}{v_0(n)c_0(n)}\right)^{1/2}\sqrt{E_0}
+ \left(\frac{2}{v_1(n)c_1(n)}\right)^{1/2}\sqrt{E_1}\\
&+ \sum_{j=1}^{J(n)-1}\left(
\left(\frac{2}{v_{1}(n)b_{1}(n)}\right)^{1/2}\sqrt{E_{j}'}
+\left(\frac{2}{v_0(n)c_0(n)}\right)^{1/2}\sqrt{E_{j}''}\right).
\end{align*}
Random variables $E_0$, $E_1$, $(E_k')_k$, $(E_k'')_k$ are i.i.d. exponential with mean 1. Random variable $J(n)$ is geometric with parameter $c_1(n)/(c_1(n)+b_1(n)) $. All of these random variables are assumed to be independent.
Assumptions $\bold{F1}$-$\bold{F2}$ or $\bold{F'}$, $\bold{F3}$, and $\bold{K}$ imply that there exists $C$ such that
\begin{align*}
\mc{U}_n
\leq & \frac C n \left (|L_n(0)| + \sqrt{E_0}
+ \sqrt{E_1} + \sum_{j=1}^{J(n)-1}\left(
\sqrt{E_{j}'}
+\sqrt{E_{j}''}\right)\right),
\end{align*}
and the parameter of $J(n)$ is within $(1/C,C)$.
Just as in part (i) of the proof, we can use tightness arguments and induction to show that $\lim_{n\to\infty} t_{j+1}(n) - s_j(n) =0$ in distribution, for $j\geq 0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{convProc}]
(a)
(i) Noiseless case.
Set $N_0(\infty)=\lim_{n\rightarrow\infty}N_0(n)$ and note that, in our models, $N_0(\infty)$ can be finite or infinite.
In both cases, in view of \eqref{o2.1} and assumption $\bold{G2}$, we have $\sum_{j=0}^{N_0(\infty)} \lambda_j < \infty$. Hence,
$\mathbb{E}\left(2\sum_{j = 0}^{N_0(\infty)}\lambda_j E_j\right)^{1/2} < \infty$, and, therefore, $\left(2\sum_{j = 0}^{N_0(\infty)}\lambda_j E_j\right)^{1/2} < \infty$, a.s. This and Theorems \ref{ThinBoundaries} and \ref{Infinite_layers} imply that the distributions $\mc{V}_\infty(\ell)$ and $\mc{V}^+_\infty(\ell)$ are stochastically bounded by a single distribution (not depending on $\ell$) of a finite valued random variable.
It follows from this and \eqref{s26.1}-\eqref{o5.6} that, on some probability space, we can construct an i.i.d. sequence $A_j$, $j\geq 0$, of strictly positive and finite random variables such that $L(u_j)\leq A_j$, a.s., for all $j$. Note that $\mathbb{E}(1/A_j) > 0$, possibly $\mathbb{E}(1/A_j) = \infty$.
Consequently, for every $k\geq 2$,
$u_k = u_1+ \sum_{i = 1}^{k-1} 1/|L(u_i)| \geq \sum_{i = 1}^{k-1} 1/A_i$, a.s., and the right hand side approaches infinity, a.s., by the strong law of large numbers. This completes the proof in the noiseless case.
(ii) Noisy case.
In view of \eqref{o5.2}, the distribution of $S$ in \eqref{o5.7} does not depend on $\ell$. It follows from this and \eqref{o5.8} that the distributions $\mc{V}_\infty(\ell)$ and $\mc{V}^+_\infty(\ell)$ are stochastically bounded by a single distribution (not depending on $\ell$) of a finite valued random variable. The rest of the proof is the same as in the noiseless case.
(b)
Recall notation from \eqref{o5.10}-\eqref{o5.12}.
We have assumed that
$(X_n(0)/n, L_n(0))$ converge in distribution to $(X(0), L(0))$, and $X(0)\in(0,1)$, a.s.
This and Assumption \ref{o8.1} (iii) imply that $L_n(s_0(n)) = L_n(0)$ for large $n$.
By the strong Markov property applied at stopping times $s_0(n)$ and
Theorems \ref{ThinBoundaries} (ii), \ref{Infinite_layers} and \ref{Cvge_Veloc}, $L_n(t_1(n))$ converge in distribution to a random variable, say $R_1$.
We proceed by induction. Note that, due to Assumption \ref{o8.1} (iii) we have $L_n(s_j(n)) = L_n(t_j(n))$ for all $j\geq 1$. Suppose that $L_n(t_j(n))$ converge in distribution to a random variable $R_j$. Then $L_n(s_j(n))$ converge in distribution to $R_j$. By the strong Markov property applied at $s_j(n)$ and
Theorems \ref{ThinBoundaries} (ii), \ref{Infinite_layers} and \ref{Cvge_Veloc}, $L_n(t_{j+1}(n))$ converge in distribution to a random variable $R_{j+1}$. To complete our notation, we let $R_0 $ have the same distribution as that of $ L(0)$, the weak limit of $L_n(0)$.
Our argument actually implies a stronger claim, i.e., that we can define $R_j$'s on a common probability space so that the joint distribution of $\{R_j, j\geq 0\}$ is the same as that of $R_j$'s defined in
\eqref{s26.1}-\eqref{s26.3}, assuming that $\ell_0$ in that definition
is randomized and given the distribution of $L(0)$.
It follows from \eqref{o8.2} and a formula analogous to \eqref{s28.2} that for every fixed $j\geq 1$, we can represent the length of the time interval $[t_j(n), s_j(n)]$ as follows,
\begin{align}\label{o9.2}
s_j(n) - t_j(n) = \sum_{i\in \mathcal{D}_n\setminus(\partial \mathcal{D}_n^- \cup \partial \mathcal{D}_n^+)} E_{i,j}/(n \big\vert L_n(t_j(n))\big\vert),
\end{align}
where $E_{i,j}$, $i\in \mathcal{D}_n\setminus(\partial \mathcal{D}_n^- \cup \partial \mathcal{D}_n^+)$, are i.i.d. exponential with mean 1, independent of $L_n(t_j(n))$'s.
For $j=0$, the analogous formula is
\begin{align*}
s_0(n) - t_0(n) =
\begin{cases}
\sum_{i \leq X_n(0),\, i\notin \partial \mathcal{D}_n^- } E_{i,0}/(n\vert L_n(0)\vert),&
\text{ if } L_n(0) < 0,\\
\sum_{i \geq X_n(0),\, i\notin \partial \mathcal{D}_n^+ } E_{i,0}/(nL_n(0)),&
\text{ if } L_n(0) > 0.
\end{cases}
\end{align*}
It follows from this, the assumption that $(X_n(0)/n,L_n(0)) $ converge weakly to $(X(0),L(0))$, the assumption that $X(0)\in(0,1)$ and $L(0)$ does not take value 0, and the law of large numbers that the following limit exists,
\begin{align}\label{o9.1}
\Delta u_1 := \lim_{n\to\infty}
s_0(n) - t_0(n) =
\begin{cases}
X(0)/(-L(0)) = X(0)/(-R_0),&
\text{ if } L(0) < 0,\\
(1-X(0))/L(0) =(1- X(0))/R_0,&
\text{ if } L(0) > 0,
\end{cases}
\end{align}
in distribution.
For similar reasons, \eqref{o9.2} yields
\begin{align}\label{o9.3}
\Delta u_j := \lim_{n\to\infty}
s_j(n) - t_j(n) = 1 /| R_j|,
\end{align}
in distribution.
It follows from Proposition \ref{timeOnBoundary}, \eqref{o9.1}-\eqref{o9.3} and the strong Markov property that for $j\geq 0$, the following limits exist,
\begin{align}\label{o9.4}
\lim_{n\to\infty}
s_j(n) =\lim_{n\to\infty} t_{j+1}(n)
= \sum_{i=1}^{j+1} \Delta u_i = \Delta u_1 + \sum _{i=2}^{j+1} 1 /\vert R_i\vert =: u_{j+1},
\end{align}
in distribution. Moreover, by the strong Markov property, we have joint convergence, in the sense that for every $j\geq 1$,
the vectors
\begin{align*}
( t_0(n), s_0(n), t_1(n), s_1(n), \dots, t_j(n), s_j(n))
\end{align*}
converge in distribution to
\begin{align*}
(0, u_1, u_1, u_2, u_2, \dots , u_j,u_j,u_{j+1}).
\end{align*}
In view of part (a), to finish the proof, it will suffice to fix $j\geq 0$ and analyze trajectories of $(X_n/n,L_n)$ on time intervals $[ t_j(n), s_j(n)]$ and $[ s_j(n), t_{j+1}(n)]$.
It follows easily from \eqref{o5.10}-\eqref{o5.12} and Assumption \ref{o8.1} (ii) that
\begin{align*}
\lim_{n\to \infty} \sup\{ X_n(t)/n: t \in [ s_{2j}(n), t_{2j+1}(n)]\} =0,
\qquad \text{ if } j\geq 0, L(0)< 0,\\
\lim_{n\to \infty} \sup\{1- X_n(t)/n: t \in [ s_{2j+1}(n), t_{2j+2}(n)]\} =0,
\qquad \text{ if } j\geq 0, L(0)< 0,\\
\lim_{n\to \infty} \sup\{1- X_n(t)/n: t \in [ s_{2j}(n), t_{2j+1}(n)]\} =0,
\qquad \text{ if } j\geq 0, L(0)> 0,\\
\lim_{n\to \infty} \sup\{ X_n(t)/n: t \in [ s_{2j+1}(n), t_{2j+2}(n)]\} =0,
\qquad \text{ if } j\geq 0, L(0)> 0.
\end{align*}
Assumption \ref{o8.1} (iii) implies that $L_n$ does not change its value on the interval $[ t_j(n), s_j(n)]$. Hence, the sequence of jump times of $X_n$ on this interval is a Poisson process, and jumps always take $X_n$ in the same direction. The same reasoning based on the law of large numbers that is behind \eqref{o9.2} proves that $X_n/n$ converge on $[ t_j(n), s_j(n)]$ to a linear function going either from 0 to 1 or vice versa (depending on the sign of $L(0)$ and, therefore, on the sign of $L_n(t_j(n))$), in the supremum norm, weakly, as $n\to \infty$. This completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainTheorem}]
Every family of reflection laws in Definition \ref{def:vel} is the limit of reflection laws for discrete approximations $(X_n,L_n)$, according to Theorem \ref{ThinBoundaries}. For every family of reflection laws given in Definition \ref{def:vel} there
exists
a billiard process $(X(t), L(t))$ with Markovian reflections by Theorem \ref{convProc} (a).
By Theorem \ref{convProc} (b) there exists a sequence of processes $(X_n, L_n)$ converging in distribution to $(X, L)$ where
each $(X_n, L_n)$ satisfies equation \eqref{eq1}, by construction.
By Theorem \ref{Theorem:BurdzyWhite}, every process $(X_n, L_n)$ has $\mathcal{U}(\mathcal{D}_n) \times \mathcal{N}(0, 1)$ as its stationary distribution. Consequently, the limiting
billiard process $(X, L)$ has $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$ as its stationary distribution; see the discussion in \cite[Chap. 4]{EthierKurtz}, particularly \cite[Chap. 4, Thm. 9.10]{EthierKurtz}.
In order to prove that $\mathcal{U}(0, 1) \times \mathcal{N}(0, 1)$ is the unique stationary distribution for $(X, L)$, first note that $(X,L)$ is Feller (i.e., its semi-group maps continuous bounded functions onto continuous bounded functions). From Theorems \ref{ThinBoundaries}, \ref{Infinite_layers} and \ref{Cvge_Veloc}, one obtains that for any initial condition $(x,\ell)$ and any non-empty open set $\mathcal{K} \subset [0,1]\times \mathbb{R}$,
\begin{align*}
\mathbb{P}\Big(\exists\, t>0 \text{ such that } (X_t,L_t)\in\mathcal{K}\Big)>0.
\end{align*}
Therefore the support of any invariant probability measure is $[0,1]\times\mathbb{R}$. Since two distinct ergodic invariant probability measures are mutually singular, there can be at most one ergodic invariant probability measure, and hence at most one invariant probability measure.
\end{proof}
\end{document}
\begin{document}
\author{Siva Athreya}
\address{Siva Athreya \\ Indian Statistical Institute \\
8th Mile Mysore Road \\ Bangalore 560059, India.}
\email{[email protected]}
\thanks{}
\author{Michael Eckhoff}
\author{Anita Winter}
\address{Anita Winter\\ Fakult\"at f\"ur Mathematik\\ Universit\"at Duisburg-Essen\\
Universit\"atsstrasse 2\\ 45141 Essen, Germany}
\thanks{}
\email{[email protected]}
\keywords{${\mathbb R}$-trees, Brownian motion, Diffusions on metric measure trees, Dirichlet forms, Spectral gap, Mixing times, Recurrence}
\subjclass[2000]{Primary: 60B05, 60J27; Secondary: 60J80,
60B99.}
\title{Brownian Motion on ${\mathbb R}$-trees}
\date{\today}
\begin{abstract}
The real trees form a class of metric spaces that extends
the class of trees with edge lengths by allowing behavior such as infinite
total edge length and vertices with infinite branching degree.
We use Dirichlet form methods to construct Brownian motion on any given locally compact
${\mathbb R}$-tree {$(T,r)$} equipped with a Radon measure $\nu$ {on $(T,{\mathcal B}(T))$}.
We specify a criterion under which the Brownian motion is recurrent or transient.
For compact recurrent ${\mathbb R}$-trees we provide bounds on the mixing time.
\end{abstract}
\maketitle
\section{Introduction and main results}
\label{S:motiv}
Let $r_1,r_2\in{\mathbb R}\cup\{-\infty, \infty\}$ with $r_1<r_2$ and $\nu$
be a Radon measure on $(r_1,r_2)$, i.e., $\nu$
is an inner regular non-negative Borel measure on $(r_1,r_2)$ which is finite on compact sets and positive on any ball.
Then the {\it $\nu$-Brownian motion} on $(r_1,r_2)$ is the unique (up to $\nu$-equivalence) strong Markov process which is associated with the regular Dirichlet form
\be{e:formR}
{\mathcal E}(f,g)
:=
{\tfrac{1}{2}\int_{(r_1,r_2)}\mathrm{d}\lambda\,f'\cdot g'}
\end{equation}
with domain
\be{e:domainR}
{\mathcal D}({\mathcal E}) :=
\big\{f\in L^2(\nu)\cap {\mathcal A_{{\mathbb R}}} : f'\in L^2(\lambda)
\big\}
\end{equation}
where $\lambda$ denotes Lebesgue measure and ${\mathcal A_{{\mathbb R}}}$ is the space of absolutely continuous functions that vanish at {regular boundary points}. As usual, we call the left boundary point $r_1$ {\it regular} if it is finite and there exists a point $x\in(r_1,r_2)$
with $\nu(r_1,x)<\infty$. Regularity of the right boundary point, $r_{2},$ is defined in the same way. If $\nu=\lambda$ we obtain {\it standard Brownian motion}, while a general $\nu$ plays the r\^{o}le of the {\it speed measure}. The goal of this paper is to extend this construction of Brownian motion to locally compact ${\mathbb R}$-trees.
In \cite{KumagaiSturm05} a sufficient condition is given to construct non-trivial diffusion processes on a locally compact metric measure space. These processes are associated with local
regular Dirichlet forms which are obtained as suitable limits of approximating
non-local Dirichlet forms. On self-similar sets which can be approximated by an increasing set $(V_m)_{m\in\mathbb{N}}$ diffusions have been studied from a probabilistic and analytic point of view. For example, \cite{Kusuoka1987,Goldstein1987,BarlowPerkins1988,Lindstrom1990} consider random walks on $V_m$ and construct Brownian motion as the scaling limit. From an analytical point of view this corresponds to constructing the {\it Laplace operator} as the limit of the difference operators corresponding to the approximating random walks.
Tree-like objects have been studied this way as well. An approximation scheme of the Brownian continuum random tree was exploited in \cite{Kre95}. The notion of finite resistance forms was introduced in \cite{Kigami95}, and these approximating forms yield a regular Dirichlet form on complete, locally compact ${\mathbb R}$-trees. More recently, in \cite{Cro08} and \cite{Cro10}, scaling limits of simple random walks on random discrete trees have been shown to converge to Brownian motion on limiting compact ${\mathbb R}$-trees. In a couple of instances diffusions have been constructed using the specific structure of the given ${\mathbb R}$-tree (\cite{DJ93,Eva00}). In \cite{Eva00} the ``richest'' $\mathbb{R}$-tree is considered and a particular diffusion is constructed such that the height process (with respect to a distinguished root) is a standard one-dimensional Brownian motion which at any branch point chooses a direction according to a measure prescribed on the leaves.
The main purpose of the paper is to provide an explicit description of the Dirichlet form of Brownian motion on a given locally compact ${\mathbb R}$-tree without requiring an approximation scheme, thus providing a unifying theory from which various properties of the process can easily be read off. Towards this, we imitate the construction of Brownian motion on the real line via Dirichlet forms by exploiting the one-dimensional structure of the skeleton of the ${\mathbb R}$-tree. The first step lies in capturing the key ingredients, namely the length measure and a notion of a gradient (Proposition \ref{P:grad}). Given these ingredients one can then define a bilinear form similar to the real line construction. The second step is to show that this bilinear form is a regular Dirichlet form (Proposition~\ref{P:00} and Proposition~\ref{L:04}), which ensures the existence (Theorem~\ref{T:01}) of a Markov process. In Proposition~\ref{P:prop} we obtain the characterizing identities for the occupation measure and hitting probabilities to conclude that the Markov process so constructed is indeed the desired Brownian motion.
On complete and locally compact ${\mathbb R}$-trees the Brownian motions constructed this way are the diffusions associated with the finite resistance form introduced in \cite{Kigami95} (see Remark~\ref{Rem:03}). As we will show in Section~\ref{s:BMdrift}, our construction covers all the examples of Brownian motions on particular ${\mathbb R}$-trees which can be found in the literature (see Examples \ref{Exp:02} and \ref{Exp:06}), and can also easily be adapted to construct diffusions with a drift. Furthermore, we are able to provide geometric conditions under which the {Brownian motion} is recurrent or transient (Theorems~\ref{T:04} and~\ref{T:trareha}). An interesting application of this result (see Example \ref{karrayt}) generalizes the results shown for random walks on discrete trees in~\cite{Lyo90}. Bounds on eigenvalues and mixing times (Theorem~\ref{C:mix}), and various properties of random walks on discrete trees (Theorem~\ref{nashwillconverse}), are obtained for generic ${\mathbb R}$-trees, thus highlighting the advantages of having an explicit limiting Dirichlet form along with an explicit description of its domain.
We begin by stating some preliminaries in Subsection~\ref{prelim}, which will be followed by statements of our main results in Subsection~\ref{mainresults}.
\subsection{Set-up for Brownian motion on ${\mathbb R}$-tree} \label{prelim}
In this subsection we discuss the preliminaries required for constructing Brownian motion on ${\mathbb R}$-trees.
{\bf ${\mathbb R}$-Tree:} A metric space $(T,r)$ is said to be a {\it real tree} (${\mathbb R}$-{\it tree}) if it
satisfies the following axioms.
\begin{itemize}
\item[{}]{\bf Axiom~1 (Unique geodesic) } For all ${u},{v}\in T$
there exists a unique isometric embedding
$\phi_{{u},{v}}:[0,{r}({u},{v})]\to T$ such that $\phi_{{u},{v}}(0)={u}$
and $\phi_{{u},{v}}({r}({u},{v}))={v}$.
\item[{}]{\bf Axiom~2 (Loop-free) } For every injective
continuous map $\kappa:[0,1]\to T$ one has
$\kappa([0,1])=\phi_{\kappa(0),\kappa(1)}([0,r(\kappa(0),\kappa(1))])$.
\end{itemize}
Axiom~1 states that there is a unique ``unit speed'' path between
any two points, whereas Axiom~2 then implies that the image
of any injective path connecting two points coincides with
the image of the unique unit speed path. Consequently any injective path between two points can be re-parameterized
to become the unit speed path. Thus, Axiom~1
is satisfied by many other spaces such as ${\mathbb R}^d$
with the usual metric, whereas Axiom~2 expresses the
property of ``tree-ness'' and is only satisfied
by ${\mathbb R}^d$ when $d=1$. We refer the reader to \cite{Dre84, DreMouTer96, DreTer96,
Ter97, MR2003e:20029} for background on ${\mathbb R}$-trees.
For $a,b\in T$, let
\begin{equation}\label{EPWarc}
[a,b]\,:=\phi_{a,b}(\,[0,r(a,b)]\,)\quad \mbox{and}\quad
]a,b[\,:=\phi_{a,b}(\,]0,r(a,b)[\,)
\end{equation}
be the unique closed and open,
respectively, {\it arc}\index{arc} between them.
An immediate consequence of both axioms together is that
real trees are $0$-hyperbolic. For a
given real tree $(T,r)$ and for all $x,a,b\in T$, this implies that there exists a unique
point $c(a,b,x)\in T$ such that
\be{e:branch}
[a,x]\cap[a,b]=[a,c(a,b,x)].
\end{equation}
The point $c(a,b,x)$ also satisfies $[b,x]\cap[b,a]=[b,c(a,b,x)]$ and $[x,a]\cap[x,b]=[x,c(a,b,x)]$ (see, for example, Lemma~3.20 in \cite{Eva} and compare with Figure 1).
\begin{figure}\label{Fig:01}
\end{figure}
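As a concrete illustration of the branch point (an example added here for the reader's orientation), consider the tripod obtained by gluing three unit intervals together at a common point $o$,
\begin{align*}
T=\{o\}\,\cup\,]o,a]\,\cup\,]o,b]\,\cup\,]o,x],
\qquad r(a,b)=r(a,x)=r(b,x)=2,
\end{align*}
with leaves $a$, $b$ and $x$. Then $[a,x]\cap[a,b]=[a,o]$, so $c(a,b,x)=o$. If instead $x$ lies on the arc $]o,a]$, then $[a,x]\cap[a,b]=[a,x]$ and hence $c(a,b,x)=x$.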
In this paper, we will assume that $(T,r)$ is locally compact. By virtue of Lemma~5.7 in \cite{Kigami95} such ${\mathbb R}$-trees are separable and by Lemma~5.9 in \cite{Kigami95} the complete and bounded subsets are compact.
{\bf Length measure:} We follow \cite{EvaPitWin2006} to introduce the notion of the {\it length measure}
$\lambda^{(T,r)}$ on a separable ${\mathbb R}$-tree $(T,r)$ which extends the
Lebesgue measure on ${\mathbb R}$.
Let $\mathcal B(T)$ denote the Borel-$\sigma$-algebra of $(T,r)$.
Denote the {\it skeleton} of $(T,r)$ by
\begin{equation}
\label{sce}
{T}^o:=
\bigcup\nolimits_{a,b\in {T}}\,]a,b[.
\end{equation}
Observe that if
${T}^\prime \subset {T}$ is a
dense countable set, then (\ref{sce}) holds with ${T}$ replaced by
${T}^\prime$. In particular, ${T}^o \in {\mathcal B}({T})$ and
${\mathcal B}({T})\big|_{{T}^o}=\sigma(\{]a,b[;\,a,b\in {T}^\prime\})$, where
${\mathcal B}({T})\big|_{{T}^o}:=\{A \cap {T}^o;\,A\in{\mathcal B}({T})\}$.
Hence, there exists a unique $\sigma$-finite measure $\lambda^{(T,r)}$ on $T$, called
{\it length measure}, such that $\lambda^{(T,r)}({T}\setminus {T}^o)=0$ and
\begin{equation}
\label{length}
\lambda^{(T,r)}(]a,b[)=r(a,b),
\end{equation}
for all $a,b\in T$.
In particular, $\lambda^{(T,r)}$ is the trace onto ${T}^o$ of one-dimensional
Hausdorff measure on $T$.
{\bf Gradient:} We now introduce the notion of weak differentiability and integrability. We will proceed as in~\cite{Eva00}.
Let ${\mathcal C}(T)$ be the space of all real continuous functions on $T$. Consider the subspaces
\begin{equation}
{\mathcal C}_0(T):=
\big\{f\in {\mathcal C}(T):\, f \mbox{ has compact support}
\big\}
\end{equation}
and
\begin{equation}
\label{e:Cinfty}
{\mathcal C}_{\infty}(T)
:=
\big\{f \in {\mathcal C}(T):\, \forall\, \varepsilon>0\; \exists\, K \mbox{ compact }\;\forall\, x \in T\setminus K,\; |f(x)| \leq \varepsilon
\big\}
\end{equation}
which is often referred to as the space of continuous functions which {\it vanish at infinity}.
We call a function $f\in{\mathcal C}(T)$
{\it locally absolutely continuous} if and
only if for all $\varepsilon>0$ and all subsets $S\subseteq T$
with $\lambda^{(T,r)}(S)<\infty$
there exists a $\delta=\delta(\varepsilon,S)$ such
that if $[x_1,y_1],\ldots,[x_{n},y_n]\subseteq S$ are disjoint arcs with
$\sum_{i=1}^{n} r(x_i,y_i)<\delta$ then
$\sum_{i=1}^{n}
\big|f(x_i)-f(y_i)
\big|<\varepsilon$.
Put
\begin{equation}\label{mathcalA}
{\mathcal A}={\mathcal A}^{(T,r)}
:=
\big\{f\in{\mathcal C}(T):\,f\mbox{ is locally absolutely continuous}
\big\}.
\end{equation}
In order to define a {\it gradient} of a locally absolutely continuous function, we need the notion of directions on $(T,r)$. For that purpose, from now on we fix a point $\rho\in T$ which in the following is referred to as the {\it root}.
Notice that $\rho\in T$ allows us to define a partial order (with respect to $\rho$), $\le_\rho$, on $T$ by saying
that $x\le_\rho y$ for all $x,y\in T$ with $x\in[\rho,y]$.
For all $x,y\in T$ we write
\begin{equation}
\label{e:wedge}
x\wedge y
:=
c(\rho,x,y).
\end{equation}
The root enables an orientation sensitive
integration given by
\begin{equation}\label{osi}
\begin{aligned}
&\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,g(z)
\\
&:= -\int_{[{x\wedge y},x ]}\lambda^{(T,r)}(\mathrm{d}z)\,g(z)+\int_{[{x\wedge y},y]}\lambda^{(T,r)}(\mathrm{d}z)\,g(z),
\end{aligned}
\end{equation}
for all $x,y\in T$.
The definition of the gradient is then based on the following observation.
\begin{proposition}
Let $f\in\mathcal A$. \label{P:grad}
There exists a unique (up to $\lambda^{(T,r)}$-zero sets) function
$g\in L_{\mathrm{loc}}^1(\lambda^{(T,r)})$ such that
\begin{equation}\label{con.1}
f(y)-f(x)
=
\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,g(z),
\end{equation}
for all $x,y\in T$. Moreover, $g$ is already uniquely determined (up to $\lambda^{(T,r)}$-zero sets) if we only require (\ref{con.1}) to hold for all $x,y\in T$ with $x\in[\rho,y]$.
\end{proposition}
\begin{definition}[Gradient] The gradient, $\nabla f=\nabla^{(T,r,\rho)} f,$ of $f\in{\mathcal A}$ is the unique (up to $\lambda^{(T,r)}$-zero sets) function $g$ which satisfies (\ref{con.1}) for all $x,y\in T$.
\label{Def:01}
\end{definition}
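As a sanity check (a simple example added for illustration), let $T=[0,1]$ with the Euclidean metric $r(x,y)=\vert x-y\vert$ and root $\rho=0$. Then $\lambda^{(T,r)}$ is Lebesgue measure, $x\wedge y=\min(x,y)$, and the orientation sensitive integral in (\ref{osi}) reduces to the usual oriented integral, so that (\ref{con.1}) becomes
\begin{align*}
f(y)-f(x)=\int_x^y \nabla f(z)\,\mathrm{d}z,
\end{align*}
i.e., $\nabla f$ is the ordinary weak derivative $f'$. Choosing the root $\rho=1$ instead reverses the orientation and yields $\nabla f=-f'$, consistent with the root dependence discussed in Remark~\ref{Rem:06}.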
{
\begin{remark}[Dependence on the choice of the root] \rm Fix a separable $\mathbb{R}$-tree $(T,r)$.
Notice that the gradient $\nabla f$ of a function $f\in{\mathcal A}$ depends on the particular choice of the root $\rho\in T$ (compare Examples~\ref{Exp:04} and~\ref{Exp:01}). It is, however, easy to verify that for each $\rho\in T$ there exists a $\{-1,1\}$-valued function $\sigma^\rho:T\to\{-1,1\}$ and for all $f\in{\mathcal A}$ a function $g^f:T\to\mathbb{R}$ such that $\nabla f$ is of the following form:
\begin{equation}
\label{e:formgrad}
\nabla f=\sigma^\rho\cdot g^f.
\end{equation}
\label{Rem:06}
\end{remark}
}
{\bf The Dirichlet form: } {Let $(T,r)$ be a separable $\mathbb{R}$-tree and $\nu$ a Borel measure on $(T,{\mathcal B}(T))$. Denote, as usual, by $L^2(\nu)$ the space of Borel-measurable functions on $T$ which are square integrable with respect to $\nu$. For $f,g\in L^2(\nu)$ we denote by
\begin{equation}
\label{e:inner}
\big(f,g
\big)_\nu:=\int\mathrm{d}\nu\,f\cdot g
\end{equation}
the {\it inner product} of $f$ and $g$ with respect to $\nu$.
Put}
\begin{equation}
\label{e:F}
{\mathcal F}:=
\big\{f \in {\mathcal A}:\, \nabla f\in L^2(\lambda^{(T,r)})
\big\},
\end{equation}
and consider the domain
\begin{equation}
\label{domainp}
{\mathcal D}(\mathcal E)
:=
{\mathcal F} \cap L^2(\nu) \cap {\mathcal C}_{\infty}(T)
\end{equation}
together with the bilinear form
\begin{equation}\label{con.2p}
\begin{aligned}
{\mathcal E}(f,g)
&:=
\frac{1}{2}\int\lambda^{(T,r)}(\mathrm{d}z)\,\nabla f(z)\,
\nabla g(z)
\end{aligned}
\end{equation}
for all $f,g\in{\mathcal D}({\mathcal E})$.
{Notice that this bilinear form is independent of the particular choice of $\rho$ by Remark~\ref{Rem:06}.}
\subsection{Main Results}
\label{mainresults}
In this subsection we shall state all our main results.
Unless stated otherwise throughout the paper we shall assume that
\begin{itemize}
\item[(A1)] $(T,r)$ is a locally compact ${\mathbb R}$-tree.
\item[(A2)] $\nu$ is a Radon measure on $(T,{\mathcal B}(T))$, i.e., $\nu$ is finite on compact sets and positive on
any open ball
\begin{equation}
\label{e:ball}
B(x,\varepsilon)
:=
\big\{x'\in T:\,r(x,x')<\varepsilon
\big\}
\end{equation}
with $x\in T$ and $\varepsilon>0$.
\end{itemize}
Our first main result is the following:
\begin{theorem}[Brownian motion on $(T,r,\nu)$]
Assume (A1) and (A2). There exists a unique (up to $\nu$-equivalence)
continuous $\nu$-symmetric strong Markov process
$B=((B_t)_{t\ge 0},({\mathbf P}^x)_{x\in T})$ on $(T,r)$
whose Dirichlet form is
$({\mathcal E}, {\mathcal D}({\mathcal E}))$.
\label{T:01}\end{theorem}
This leads to the following definition.
\begin{definition}[Brownian motion] The $\nu$-symmetric strong Markov process
$B=((B_t)_{t\ge 0},({\mathbf P}^x)_{x\in T})$ on $(T,r)$
associated with the Dirichlet form $({\mathcal E}, {\mathcal D}({\mathcal E}))$ is called $\nu$-Brownian motion on the ${\mathbb R}$-tree $(T,r)$.
\label{Def:05}
\end{definition}
\begin{remark}[The role of $\nu$]\it $\nu$-Brownian motion on $(T,r)$ can be thought of as a diffusion on $(T,r)$ which is on {\it natural scale} and has
{\it speed measure} $\nu$. With a slight abuse of terminology we shall refer to $B$ as the {\it standard Brownian motion} if $\nu$ equals the Hausdorff measure on $(T,r)$.
\label{Rem:07}
\hfill$\qed$
\end{remark}
\begin{remark}[\bf Kigami's resistance form on dendrites] \rm Let $(T,r)$ be a locally compact and complete ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Let furthermore $(V_m)_{m\in\mathbb{N}}$ be an increasing
and compatible (in the sense of Definition~0.2 in \cite{Kigami95}) family of finite subsets of $T$ such that $V^\ast:=\cup_{m\in\mathbb{N}}V_m$ is
countable and dense. For each $m\in\mathbb{N}$ and $x,y\in V_m$, let
$x\sim y$ whenever $]x,y[\,\cap V_m=\emptyset$, and put for all $f,g:V_m\to{\mathbb R}$
\begin{equation}
\label{Kigami:m}
{\mathcal E}_m(f,g)
:=
\tfrac{1}{2}\sum\nolimits_{x,y\in V_m;x\sim y}\tfrac{(f(x)-f(y))(g(x)-g(y))}{r(x,y)}.
\end{equation}
In \cite{Kigami95} the bilinear form
\begin{equation}
\label{Kigami:form}
{\mathcal E}^{\mathrm{Kigami}}(f,g)
:=
\lim_{m\to\infty}{\mathcal E}_m
\big(f
\big|_{V_m},g
\big|_{V_m}
\big)
\end{equation}
with domain
\begin{equation}
\label{Kigami:label}
{\mathcal F}^{\mathrm{Kigami}}
:=
\big\{f:V^\ast\to\mathbb{R}:\,\mbox{limit on r.h.s. of (\ref{Kigami:form}) exists}
\big\}
\end{equation}
is studied.
Put
\begin{equation}
\label{Kigami:labelD}
{\mathcal D}
\big({\mathcal E}^{\mathrm{Kigami}}
\big) := \overline{ {\mathcal F}^{\mathrm{Kigami}} \cap {\mathcal C}_0(T)} ^{{\mathcal E}^{\mathrm{Kigami}}_{1}},
\end{equation}
where the closure is with respect to the ${\mathcal E}^{\mathrm{Kigami}}_1$-norm given by
\begin{equation}
\label{e:024}
{\mathcal E}^{\mathrm{Kigami}}_{1}(f,g):={\mathcal E}^{\mathrm{Kigami}}(f,g) + (f,g)_\nu.
\end{equation}
It is (partly) shown in
Theorem~5.4 in \cite{Kigami95} that $({\mathcal E}^{\mathrm{Kigami}},{\mathcal D}({\mathcal E}^{\mathrm{Kigami}}))$
is a regular Dirichlet form.
Notice that Theorem~5.4 in \cite{Kigami95} actually only assumes the measure $\nu$ to be a $\sigma$-finite Borel measure that charges all open sets,
and defines the domain to be ${\mathcal F}^{\mathrm{Kigami}} \cap L^{2}(\nu)$.
In order to ensure regularity, however, one needs indeed to close the domain suggested by Kigami,
${\mathcal F}^{\mathrm{Kigami}} \cap{\mathcal C}_0(T)$, with respect to the ${\mathcal E}^{\mathrm{Kigami}}_1$-norm. Moreover, regularity forces
$\nu$ to be a Radon measure, a fact which is used in Kigami's proof.
We will prove in Remark~\ref{Rem:05} that $({\mathcal E},{\mathcal D}({\mathcal E}))$ agrees with Kigami's form on complete locally compact ${\mathbb R}$-trees. Note that our set-up is slightly more general (we do not require completeness), and the notion of a gradient at hand provides an explicit description of the form.
\label{Rem:03}
\hfill$\qed$
\end{remark}
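In the simplest case the approximating forms (\ref{Kigami:m}) can be made explicit: take $T=[0,1]$ with the Euclidean metric and $V_m:=\{i2^{-m}:\,0\le i\le 2^m\}$. Then $x\sim y$ if and only if $|x-y|=2^{-m}$, so that
\begin{equation}
{\mathcal E}_m(f,f)
=
\tfrac{1}{2}\sum\nolimits_{x,y\in V_m;\,x\sim y}2^m\big(f(x)-f(y)\big)^2,
\end{equation}
the standard discrete Dirichlet energy on the dyadic grid. For continuously differentiable $f$ the limit in (\ref{Kigami:form}) exists and equals a constant multiple of $\int_0^1\mathrm{d}t\,(f'(t))^2$, so such $f$ belong to ${\mathcal F}^{\mathrm{Kigami}}$.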
For all closed $A\subseteq T$, let
\begin{equation}\label{tauA}
\tau_A
:=
\inf
\big\{t > 0:\,B_t\in A
\big\}
\end{equation}
denote the {\it first hitting time} of the set $A$. In particular, put $\tau_A:=\infty$ if $\cup_{t > 0}\{B_t\}\subseteq T\setminus A$. Abbreviate $\tau_x:=\tau_{\{x\}}$, $x\in T$.
\begin{definition}[Recurrence/transience]
The $\nu$-Brownian motion $B$ on the ${\mathbb R}$-tree $(T,r)$
is called {\rm transient} iff
\begin{equation}\label{e:013}
\int_0^\infty\mathrm{d}u\,\mathbf P^{\rho}\{B_u \in K \}<\infty,
\end{equation}
for all compact subsets $K\subseteq T$.
Otherwise, the $\nu$-Brownian motion on the ${\mathbb R}$-tree $(T,r)$ is called {\rm recurrent}.
We say that a recurrent $\nu$-Brownian motion on $(T,r)$ is {\rm null-recurrent} if there exist $x,y\in T$ such that $\mathbf E^x[\tau_y]=\infty$, and {\rm positive recurrent} otherwise.
\label{Def:04}
\end{definition}
\begin{remark}\rm As we will observe in Lemma~\ref{L:02}, $B$ admits $\nu$-symmetric transition densities $p_t(x,y)$ with respect
to $\nu$ such that $p_t(x,y) > 0$ for all $x, y\in T$. Consequently, in the terminology
of \cite{FukushimaOshimaTakeda1994}, $B$ is irreducible. Therefore, by Lemma~1.6.4 of \cite{FukushimaOshimaTakeda1994}, $B$ is either
transient or recurrent.
\hfill$\qed$
\end{remark}
To justify the name ``Brownian motion'', we
next verify that $\nu$-Brownian motion on $(T,r)$ satisfies analogues of the classical characterizations of Brownian motion on ${\mathbb R}$.
\begin{proposition}[Occupation time measure]
Assume (A1) and (A2). Let $B=((B_t)_{t\ge 0},({\mathbf P}^x)_{x\in T})$ be the continuous $\nu$-symmetric strong Markov process\label{P:prop}
on $(T,r)$
whose Dirichlet form is $({\mathcal E}, {\mathcal D}({\mathcal E}))$.
Then the following hold:
\begin{itemize}
\item[(i)]
For all $a,b,x\in T$ such that $\mathbf P^x\{\tau_a\wedge \tau_b<\infty\}=1$,
\begin{equation}\label{Xhit}
\mathbf P^x
\big\{\tau_a<\tau_b
\big\}
=
\frac{r(c(x,a,b),b)}{r(a,b)}.
\end{equation}
\item[(ii)] Assume furthermore that the measure ${\mathbb R}$-tree $(T,r,\nu)$ is such that the $\nu$-Brownian motion on $(T,r)$ is recurrent.
For all $b,x\in T$ and bounded measurable $f$,
\begin{equation}\label{Xocc}
\mathbf{E}^x
\big[\int_0^{\tau_b}\mathrm{d}s\,f(B_s)
\big]=2\int_{T}\nu(\mathrm{d}y)\,r
\big(c(y,x,b),b
\big)f(y).
\end{equation}
\end{itemize}
\end{proposition}
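For orientation, on $T={\mathbb R}$ with root $\rho=0$ and $\nu$ Lebesgue measure, Proposition~\ref{P:prop} recovers two classical formulas: for $a<x<b$ one has $c(x,a,b)=x$, so (\ref{Xhit}) is the gambler's-ruin probability
\begin{equation}
\mathbf P^x
\big\{\tau_a<\tau_b
\big\}
=
\frac{b-x}{b-a},
\end{equation}
and for $b=0<x$ the right-hand side of (\ref{Xocc}) becomes the familiar Green-function formula $2\int_0^\infty\mathrm{d}y\,(x\wedge y)f(y)$, since $r(c(y,x,0),0)=x\wedge y$ for $y>0$ and vanishes for $y\le 0$.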
\begin{remark}\rm
Proposition~\ref{P:prop} has been verified for the $\nu$-Brownian motion on the Brownian CRT for two particular choices of $\nu$ in \cite{Kre95} and \cite{Cro08}
(compare also Example~\ref{Exp:02}).
\hfill$\qed$
\label{Rem:04}
\end{remark}
A second goal of this paper is to give a criterion for the $\nu$-Brownian motion on $(T,r)$ to be recurrent or transient. For a subset $A\subseteq T$, denote by
\begin{equation}\label{e:diam}
\mathrm{diam}^{(T,r)}(A)
:=
\sup
\big\{r(x,y):\,x,y\in A
\big\}
\end{equation}
its diameter. For bounded trees (i.e.\ those with finite diameter) recurrence and transience depend on whether or not $(T,r)$ is compact.
\begin{theorem}[Recurrence/transience on bounded trees] Let $(T,r)$ be a bounded ${\mathbb R}$-tree. Assume (A1) and (A2).
\begin{itemize}
\item[(i)] If $T$ is compact then $\nu$-Brownian motion on $(T,r)$ is positive recurrent.
\item[(ii)] If $T$ is not compact then $\nu$-Brownian motion on $(T,r)$ is transient.
\end{itemize}
\label{T:04}
\end{theorem}
Obviously, a bounded and locally compact ${\mathbb R}$-tree is complete if and only if it is compact. Therefore Theorem~\ref{T:04} states that the $\nu$-Brownian motion on a bounded locally compact ${\mathbb R}$-tree is positive recurrent if the tree is complete and transient if the tree is incomplete.
In the case of compact ${\mathbb R}$-trees we can also give bounds on the mixing time.
\begin{theorem}[Mixing time] Let $(T,r)$ be a compact ${\mathbb R}$-tree, and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Let $(P_t)_{t\ge 0}$ be the semi-group associated with the $\nu$-Brownian motion on $(T,r)$.
\label{C:mix}
If $\nu'$ is a probability measure on $(T,{\mathcal B}(T))$ with $\nu'\ll\nu$ such that
$\tfrac{\mathrm d\nu'}{\mathrm d\nu}\in L^1(\nu')$, then for all $t\ge 0$,
\begin{equation}\label{e:mix.1}
\begin{aligned}
&
\big\|\nu'P_t-
\big(\nu(T)
\big)^{-1}\nu
\big\|_{\mathrm{TV}}
\\
&\leq
\left(1+\nu(T)\sqrt{(\mathbf{1}_T,\tfrac{\mathrm d\nu'}{\mathrm d\nu})_{\nu'}} \,\,\right)\cdot\mathrm e^{-t/(2\,\mathrm{diam}^{(T,r)}(T)\,\nu(T))},
\end{aligned}
\end{equation}
where $\|\boldsymbol{\cdot}\|_{\mathrm{TV}}$ denotes the total variation norm.
\end{theorem}
We next state a geometric criterion for recurrence versus
transience for {\it unbounded} trees.
As a preparation we introduce the space of ends at infinity and recall the notion of the {\it Hausdorff dimension}.
{\bf The space of ends at infinity $(E_{\infty},\bar{r})$: }
If $(T,r)$ is unbounded then there exists an isometric embedding $\phi$ from ${\mathbb R}_+:=[0,\infty)$ into $T$ with $\phi(0)=\rho$.
In the following we refer to each such isometry as an {\it end at infinity}, and let
\begin{equation}
\label{Einfty}
E_\infty
:=
\mbox{ set of all ends at infinity.}
\end{equation}
Recall that $\rho\in T$ is a fixed root which allows to define a partial order $\le_\rho$ on $T$ by saying
that $x\le_\rho y$ for all $x,y\in T$ with $x\in[\rho,y]$.
This partial order $\le_\rho$ extends to a partial order on $T\cup E_\infty$ by letting for each $x\in T$ and $y\in E_\infty$,
$x\le_\rho y$ if and only if $x\in y({\mathbb R}_+)$. Further, for $x,y \in E_{\infty}$, $x\le_\rho y$ if and only if $x=y$. Each pair $x,y\in T\cup E_\infty$ then has a well-defined {\it greatest common lower bound}
\begin{equation}
\label{brach}
x\wedge y
=
x\wedge_\rho y\in T\cup E_\infty.
\end{equation}
We equip $E_\infty$ with the metric $\overline r(\boldsymbol{\cdot},\boldsymbol{\cdot})$ defined by
\begin{equation}\label{e:barr}
\bar{r}(x,y)=\bar{r}_\rho(x,y)
:=
1\wedge \frac{1}{r(\rho,x\wedge y)},
\end{equation}
for all $x,y\in E_\infty$.
It is not difficult to see that $(E_\infty,\bar{r})$ is ultra-metric.
Hence by Theorem~3.38 in \cite{Eva}, for all subsets $E'\subseteq E_\infty$ there is a (smallest) ${\mathbb R}$-tree $(T',r')$ with $E'\subseteq T'$ such that $\bar{r}(x,y)=r'(x,y)$ for all $x,y\in E'$. We will refer to this smallest ${\mathbb R}$-tree as the ${\mathbb R}$-tree {\it spanned by $(E',\bar{r})$} and denote it by \begin{equation}
\label{e:023}
\mathrm{span}(E',\bar{r}).
\end{equation}
It is easy to see that $\mathrm{span}(E',\bar{r})$ is a compact ${\mathbb R}$-tree which has the same tree-topology as $(T,r)$ outside $B(\rho,1)$.
{\bf Hausdorff dimension of $E_{\infty}$: }
For all $\alpha\ge 0$,
the {\it $\alpha$-dimensional Hausdorff measure} ${\mathcal H}^\alpha$ on $(E_\infty,{\mathcal B}(E_\infty))$ is defined as follows:
for all $A\in{\mathcal B}(E_{\infty})$, let
\begin{equation}\label{e:inha}
\begin{aligned}
&{\mathcal H}^\alpha(A)
\\
&:=
\lim_{\varepsilon\downarrow 0}\,
\inf
\big\{\sum_{i\ge 1}
\big(\mathrm{diam}^{(E_{\infty}, \overline r)}(E_i)
\big)^\alpha:\;
\bigcup_{i\ge 1}E_i\supseteq A,\,\mathrm{diam}^{(E_{\infty}, \overline r)}(E_i)\le\varepsilon
\big\}.
\end{aligned}
\end{equation}
The {\it Hausdorff dimension} of a subset $A\in{\mathcal B}(E_{\infty})$ is then defined as
\begin{equation}\label{dim}
\begin{aligned}
\mathrm{dim}_{\mathrm{H}}^{(E_{\infty},\overline r)}(A)
&:=
\inf
\big\{\alpha\ge 0:\,{\mathcal H}^\alpha(A)=0
\big\}
\\
&=
\sup
\big\{\alpha\ge 0:\,{\mathcal H}^\alpha(A)=\infty
\big\}.
\end{aligned}
\end{equation}
\begin{remark}\rm
Note that $\dim_{\mathrm{H}}^{(E_\infty,\overline r)}(E_\infty)$ does not depend on the particular choice of $\rho\in T$.
\label{Rem:08}
\hfill$\qed$
\end{remark}
We are now ready to state a geometric criterion for recurrence and transience on trees with ends at infinity.
\begin{theorem}[Recurrence/transience on unbounded trees] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree such that $E_\infty\neq\emptyset$, and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$.
\label{T:trareha}
\begin{itemize}
\item[(i)] If $(T,r)$ is complete and ${\mathcal H}^1$ is a finite measure,
then
the $\nu$-Brownian motion on $(T,r)$ is recurrent.
\item[(ii)] If
$\dim_{\mathrm{H}}^{(E_\infty,\overline r)}(E_\infty)>1$ or $(T,r)$ is incomplete, then
the $\nu$-Brownian motion on $(T,r)$ is transient.
\end{itemize}
\end{theorem}
The following example illustrates an application of Theorems~\ref{T:04} and~\ref{T:trareha},
suggesting a duality between bounded and unbounded trees.
\begin{example}[The $k$-ary tree] \rm We want to illustrate the theorems with the example of symmetric trees. Fix $k\ge2$ and $c>0$, and
\label{karrayt}
let $(T,r)$ be the
locally compact
${\mathbb R}$-tree uniquely characterized as follows:
\begin{itemize}
\item
There is a root $\rho\in T$.
\item A point $x\in T$ is a {\it branch point}, i.e., $T\setminus\{x\}$ consists of more than $2$
connected components, if and only if
$r(\rho,x)=\sum_{l=0}^mc^{l}$, for some $m\in{\mathbb N} \cup \{0\}$.
\item All branch points are of {\it degree} $k+1$, i.e., $T\setminus\{x\}$ consists of $k+1$
connected components.
\end{itemize}
It is easy to check that for all choices of $c>0$, the length measure $\lambda^{(T,r)}$ is Radon.
Hence $\lambda^{(T,r)}$-Brownian motion on $(T,r)$
exists by Theorem~\ref{T:01}.
Since
\begin{equation}\label{e:diamk}
\mathrm{diam}^{(T,r)}(T)
=
\sum\nolimits_{l\in{\mathbb N}}c^l
\left\{\begin{array}{cc}<\infty, & \mbox{ if }c<1,\\ =\infty, & \mbox{ if }c\ge 1,\end{array} \right.
\end{equation}
\end{equation}
the tree is bounded iff $c<1$. We discuss bounded and unbounded trees separately.
Assume first that $c<1$. By construction, $(T,r)$ is not compact and hence $\lambda^{(T,r)}$-Brownian motion is transient.
Notice that since
\begin{equation}
\lambda^{(T,r)}(T)
=
\sum\nolimits_{l\in{\mathbb N}}k^l\cdot c^l
\left\{\begin{array}{cc}<\infty, & \mbox{ if }c<\frac{1}{k},\\ &\\ =\infty, & \mbox{ if }c\ge \frac{1}{k},\end{array} \right.
\end{equation}
$\lambda^{(T,r)}$-Brownian motion exists also on the completion $(\bar{T},r)$ of $(T,r)$ in the case $c\in(0,\frac{1}{k})$.
Since a complete, bounded, and locally compact ${\mathbb R}$-tree is compact,
$\lambda^{(T,r)}$-Brownian motion on $(\bar{T},r)$ is positive recurrent by Theorem~\ref{T:04}. Note that $\lambda^{(T,r)}$-Brownian motion on $(T,r)$ and on $(\bar{T},r)$
differ in their behaviour on the boundary $\partial T:=\bar{T}\setminus T$. While the first process gets killed on $\partial T$, the second gets reflected at $\partial T$.
Assume next that $c\ge 1$. An easy calculation shows that $\mathrm{dim}_{\mathrm{H}}^{(E_\infty,\bar{r})}(E_\infty)=\log_c(k)$, and hence the
$\lambda^{(T,r)}$-Brownian motion is recurrent if
$c>k$ and transient if $c<k$ by Theorem~\ref{T:trareha}. The latter has been shown for random walks in~\cite{Lyo90}.
Further, when $c=k$ it can be easily verified that the Hausdorff measure ${\mathcal H}^1$ of $E_\infty$ is bounded by $2k<\infty$, which implies that
$\lambda^{(T,r)}$-Brownian motion is recurrent at the critical value $c=k$.
\hfill$\qed$
\end{example}
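\begin{remark}\rm We sketch the dimension computation in Example~\ref{karrayt} for $c>1$. Two ends $x,y\in E_\infty$ which separate at a branch point of generation $m$ satisfy $r(\rho,x\wedge y)=\sum_{l=0}^{m}c^l$, so that $\bar r(x,y)$ is of order $c^{-m}$. Since $E_\infty$ is covered by the roughly $k^m$ balls consisting of the ends which share the first $m$ branch choices, one obtains
\begin{equation}
{\mathcal H}^\alpha(E_\infty)
\le
\mathrm{const}\cdot\liminf_{m\to\infty}k^m c^{-\alpha m},
\end{equation}
which is finite for $\alpha>\log k/\log c$; a matching lower bound via the uniform measure on $E_\infty$ yields $\mathrm{dim}_{\mathrm{H}}^{(E_\infty,\bar r)}(E_\infty)=\log_c(k)$.
\hfill$\qed$
\end{remark}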
We conclude this section with a result that shows how the $\lambda^{(T,r)}$-Brownian motion on locally compact ${\mathbb R}$-trees which are spanned by their ends at infinity
can be used to decide whether or not random walks, simple or weighted, on graph-theoretical trees are recurrent.
{\bf Graph-Theoretical Trees:}
Consider a non-empty countable set~$V$ and a family of non-negative weights $\{r_{\{x,y\}};\,x,y\in V\}$ such that $(V,E)$ is a locally finite graph-theoretical tree, where $E:=\{\{x,y\}\mbox{ with }x,y\in V;r_{\{x,y\}}>0\}$. In the following we refer to $(V,\{r_{\{x,y\}};\,x,y\in V\})$ as a {\it weighted, discrete tree}. A Markov chain $X=(X_n)_{n\in{\mathbb N}_0}$ on the weighted, discrete tree $(V,\{r_{\{x,y\}};\,x,y\in V\})$
allows transitions between any neighboring points $x,y\in V$ with $r_{\{x,y\}}>0$, with probabilities proportional to the {\it conductance} $c_{\{x,y\}}:=(r_{\{x,y\}})^{-1}$.
Call an infinite sequence $(x_n)_{n\in{\mathbb N}_0}$ of distinct vertices in $V$ with $x_0=\rho$ and $r_{\{x_n,x_{n+1}\}}>0$ for all $n\in{\mathbb N}$ a {\it direction} in $(V,\{r_{\{x,y\}};\,x,y\in V\})$, and denote, similarly to (\ref{Einfty}), by $\tilde{E}_\infty$ the set of all directions. For any two directions $x=(x_n)_{n\in{\mathbb N}}$ and $y=(y_n)_{n\in{\mathbb N}}$, let
$k(x,y)$ denote the last index $k\in{\mathbb N}\cup\{\infty\}$ for which $x_k=y_k$, and define
$x\wedge y:=x_{k(x,y)}$ if $k(x,y)\in\mathbb{N}$, and $x\wedge y:=x\in\tilde{E}_\infty$ if $k(x,y)=\infty$.
Recall from (\ref{e:barr}) the metric $\bar{r}$, and define in a similar way a metric $\tilde{r}$ on $\tilde{E}_\infty$ by letting, for all $x,y\in\tilde{E}_\infty$, $\tilde{r}(x,y):=(r(x\wedge y,\rho))^{-1}\wedge 1$.
\begin{theorem}[Recurrence versus transience of random walks on trees]
Let $(V,\{r_{\{x,y\}};\,x,y\in V\})$ be a weighted discrete tree such that for all directions $x=(x_n)_{n\in{\mathbb N}}$, $\sum_{n\in{\mathbb N}}r_{\{x_n,x_{n+1}\}}=\infty$.
Then the random walk $X$ is recurrent if \label{nashwillconverse}
${\mathcal H}^1$ is a finite measure on $(\tilde{E}_\infty,{\mathcal B}(\tilde{E}_\infty))$ and
transient if $\dim_{\mathrm H}(\tilde{E}_\infty,\tilde r)>1$.
\end{theorem}
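\begin{example}[Simple random walk on the binary tree]\rm To illustrate Theorem~\ref{nashwillconverse}, let $(V,\{r_{\{x,y\}}\})$ be the rooted binary tree with $r_{\{x,y\}}=1$ for every edge. Every direction $x=(x_n)_{n\in{\mathbb N}_0}$ satisfies $\sum_{n\in{\mathbb N}}r_{\{x_n,x_{n+1}\}}=\infty$, and two directions with $k(x,y)=m\ge 1$ satisfy $\tilde r(x,y)=\tfrac{1}{m}$. Covering $\tilde E_\infty$ at scale $\tfrac{1}{m}$ requires $2^m$ balls, and since $2^m m^{-\alpha}\to\infty$ for every $\alpha\ge 0$, a standard mass distribution argument shows ${\mathcal H}^\alpha(\tilde E_\infty)=\infty$ for all $\alpha\ge 0$. Hence $\dim_{\mathrm H}(\tilde E_\infty,\tilde r)=\infty>1$, and the simple random walk on the binary tree is transient, in accordance with \cite{Lyo90}.
\hfill$\qed$
\end{example}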
\subsection{Outline.} The rest of the paper is organized as follows. In Section~\ref{S:Dirichlet} we introduce the Dirichlet space associated with the Brownian motion.
In Section~\ref{S:capacities} we recall the relevant potential theory and apply it to give explicit
expressions for the
capacities and Green kernels associated with the Dirichlet form. In Section~\ref{S:existence} we
prove the existence of a strong
Markov process with continuous paths which is associated with the Dirichlet form.
In Section~\ref{S:compact} we study the basic long-term behavior
of Brownian motions on locally compact and bounded ${\mathbb R}$-trees. More precisely, we prove Theorem~\ref{T:04}
and give in the recurrent case lower and upper bounds for the principal eigenvalue and the spectral gap. We prove Theorem~\ref{T:trareha} in Section~\ref{S:transinfty}.
In Section~\ref{Sub:contdisc} we recover and generalize, for ${\mathbb R}$-trees which can be spanned by their ends at infinity, results for the embedded
random walks as known from~\cite{Lyo90}. In particular, we give the proof of Theorem~\ref{nashwillconverse}. Finally, in Section~\ref{s:BMdrift} we discuss examples in the literature and diffusions that are not on natural scale.
\subsection*{Acknowledgments} Michael Eckhoff passed away during the completion of this work. The core theme and ideas in the paper are in part due to him. Further, several key estimates and ideas from Dirichlet form theory were brought to our notice by him. Our deepest condolences to his family.
We would like to thank David Aldous for proposing a problem that initiated this project and Zhen-Qing Chen, Steve Evans, Wolfgang L\"ohr and Christoph Schumacher for helpful discussions. Thanks are due to an anonymous referee, whose earlier detailed report helped us in preparing this revised version of the article.
Siva Athreya was supported in part by a CSIR Grant in Aid scheme and a Homi Bhabha Fellowship. Anita Winter was supported in part at the Technion by a fellowship from the Aly Kaufman Foundation.
\section{The Dirichlet space}
\label{S:Dirichlet}
Fix $(T,r)$ to be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$.
In this section we construct the Dirichlet space (to be) associated with the $\nu$-Brownian motion.
In Subsection~\ref{Sub:proof21} we begin with the proof of Proposition~\ref{P:grad}. In Subsection~\ref{Sub:form}
we verify that $({\mathcal E},{\mathcal D}({\mathcal E}))$ from (\ref{domainp}) and (\ref{con.2p}) is indeed a Dirichlet form.
\subsection{The gradient (Proof of Proposition~\ref{P:grad})}
\label{Sub:proof21}
\begin{proof}[Proof of Proposition~\ref{P:grad}]
Fix a root $\rho\in T$, and $x,y\in T$.
Assume for the moment that $x,y\in T$ are such that $x\in[\rho,y]$. By Axiom~1, there is a unique isometric embedding $\phi_{x,y}:[0,r(x,y)]\to[x,y]$. Fix $f\in{\mathcal A}$, and define the function $F_{x,y}:[0,r(x,y)]\to{\mathbb R}$ by
$F_{x,y}:=f\circ\phi_{x,y}$. Since $\phi_{x,y}$ is an isometry, $F_{x,y}$ is locally absolutely continuous.
Hence by standard theory (compare, for example, Theorem~7.5.10 in \cite{AthSun09}),
$F_{x,y}$ is almost everywhere differentiable,
its derivative $F'_{x,y}$ is Lebesgue integrable and
\begin{equation}\label{grund}
\begin{aligned}
f(y)-f(x)
&=
F_{x,y}(r(x,y))-F_{x,y}(0)
\\
&=
\int_{[0,r(x,y)]}\mathrm{d}t\,F_{x,y}'(t)
\\
&=
\int_{[x,y]}\lambda^{(T,r)}(\mathrm{d}z)\,F_{x,y}'(\phi^{-1}_{x,y}(z))
\\
&=
\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,F_{x,y}'(\phi^{-1}_{x,y}(z)).
\end{aligned}
\end{equation}
Notice that for all $z\in [x,y]$, we have $F_{x,y}(\phi^{-1}_{x,y}(z))=F_{\rho,y}(\phi^{-1}_{\rho,y}(z))$. Hence, $F_{x,y}'(\phi^{-1}_{x,y}(z))$ does not depend explicitly on $x\in[\rho,y]$. Similarly, for any $y_{1},y_{2}\in T$, $F_{\rho,y_i}(\phi^{-1}_{\rho,y_i})=F_{\rho,y_1\wedge y_2}(\phi^{-1}_{\rho,y_1\wedge y_2})$, for $i=1,2$, on $[\rho,y_1\wedge y_2]$. This implies that for all $y_1,y_2\in T$, $F'_{\rho,y_1}(\phi^{-1}_{\rho,y_1})=F'_{\rho,y_2}(\phi^{-1}_{\rho,y_2})$ on $[\rho,y_1\wedge y_2]$. Therefore $F'_{x,y}(\phi^{-1}_{x,y}(z))$ does not depend on the direction given through $[\rho,y]$, and so does not depend on $x,y$.
Consequently, $g:T\rightarrow {\mathbb R}$ given by $g(z):= F_{x,y}'(\phi^{-1}_{x,y}(z))$ when $z \in [x,y]$ satisfies (\ref{con.1}). Local integrability and uniqueness follow by standard measure-theoretic arguments.
Let now $x,y\in T$ be arbitrary. Then, by what we have shown so far,
\begin{equation}\label{grund2}
\begin{aligned}
&f(y)-f(x)
\\
&=
f(y)-f(\rho)+f(\rho)-f(x)
\\
&=
-\int_{\rho}^x\lambda^{(T,r)}(\mathrm{d}z)\,\nabla f(z)+\int_{\rho}^y\lambda^{(T,r)}(\mathrm{d}z)\,\nabla f(z)
\\
&=
\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,\nabla f(z),
\end{aligned}
\end{equation}
and the claim follows.
\end{proof}
\begin{example}[Distance to a fixed point]\rm Fix $a\in T$, and define $g_a:T\to{\mathbb R}_+$ by
\begin{equation}\label{e:h0}
g_{a}(x)
:=
r
\big(a,x
\big),
\end{equation}
for all $x\in T$.
Obviously,
$g_{a}$ is
absolutely continuous. Observe that moving the argument outside the arc $[\rho,a]$ away from the root lets the distance grow at speed one, while moving the argument inside the arc $[\rho,a]$ away from the root lets the distance decrease at speed one. We therefore expect that a version of $\nabla g_{a}$ is given by
\begin{equation}\label{e:nablah0}
\nabla g_{a}(x)
=
\mathbf{1}_{T}(x)-2\cdot \mathbf{1}_{[\rho,a]}(x)
\end{equation}
for all $x\in T$.
To see this it is enough to verify (\ref{osi}) for all $x,y\in T$ with $x\in[\rho,y]$. Indeed,
\begin{equation}\label{e:yy0}
\begin{aligned}
&\int^y_x\lambda^{(T,r)}(\mathrm{d}z)\,
\big(\mathbf{1}_{T}(z)-2\cdot \mathbf{1}_{[\rho,a]}(z)
\big)
\\
&=
r(\rho,y)-r(\rho,x)-2\cdot r(\rho,a\wedge y)+2\cdot r(\rho,a\wedge x)
\\
&=
r(\rho,a)+r(\rho,y)-2\cdot r(\rho,a\wedge y)-r(\rho,a)-r(\rho,x)+2\cdot r(\rho,a\wedge x)
\\
&=
g_a(y)-g_a(x).\mbox{\hfill$\qed$}
\end{aligned}
\end{equation}
\label{Exp:04}
\end{example}
\begin{example}[Distance between branch and end point on an arc]\rm Fix $a,b\in T$, and recall the definition of branch points
from (\ref{e:branch}). Define $f_{a,b}:T\to{\mathbb R}_+$ by
\begin{equation}\label{e:h}
f_{a,b}(x)
:=
r
\big(c(x,a,b),b
\big),
\end{equation}
for all $x\in T$.
Obviously,
$f_{a,b}$ is
absolutely continuous. Observe that now moving the argument outside the arc $[a,b]$ does not change the value of the function, while moving the argument away from the root along $[a,a\wedge b]$ and $[a\wedge b,b]$ lets the distance grow and decrease, respectively, at speed one. We therefore expect that a version of $\nabla f_{a,b}$ is given by
\begin{equation}\label{e:nablah}
\nabla f_{a,b}(x)
=
\mathbf{1}_{[a,a\wedge b]}(x)-\mathbf{1}_{[a\wedge b,b]}(x)
\end{equation}
for all $x\in T$.
To see this it is enough to verify (\ref{osi}) for all $x,y\in T$ with $x\in[\rho,y]$.
Indeed,\label{Exp:01}
\begin{equation}\label{e:yy}
\begin{aligned}
&\int^y_x\lambda^{(T,r)}(\mathrm{d}z)\,
\big(\mathbf{1}_{[a,a\wedge b]}(z)-\mathbf{1}_{[a\wedge b,b]}(z)
\big)
\\
&=
\lambda^{(T,r)}([x,y]\cap[a,a\wedge b])-\lambda^{(T,r)}([x,y]\cap[b,a\wedge b])
\\
&=
\big(\mathbf{1}\{c(x,a,b)\in[\rho,a]\}-\mathbf{1}\{c(x,a,b)\in[\rho,b]\}
\big)\cdot r
\big(c(y,a,b),c(x,a,b)
\big)
\\
&=
f_{a,b}(y)-f_{a,b}(x).\mbox{\hfill$\qed$}
\end{aligned}
\end{equation}
\end{example}
\subsection{The Dirichlet form}
\label{Sub:form}
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a {\it Radon measure} on $(T,{\mathcal B}(T))$.
Recall from (\ref{domainp}) and (\ref{con.2p}) the bilinear form $({\mathcal E},{\mathcal D}({\mathcal E}))$.
{
\begin{lemma} Fix $a,b\in T$, and recall the function $f_{a,b}$ from Example~\ref{Exp:01}.
Then for all $f\in {\mathcal D}({\mathcal E})$, we have that also $\tilde{f}_{a,b}:=f\cdot f_{a,b}\in {\mathcal D}({\mathcal E})$.
\label{L:13}
In particular, if $\mathbf{1}_T\in {\mathcal D}({\mathcal E})$ then also $f_{a,b}\in {\mathcal D}({\mathcal E})$.
\end{lemma}
\begin{proof} By definition,
\begin{equation}\label{e:calcc}
\begin{aligned}
\nabla\tilde{f}_{a,b}
&=
\nabla f\cdot f_{a,b}+f\cdot
\big(\mathbf{1}_{[a,a\wedge b]}-\mathbf{1}_{[b,a\wedge b]}
\big).
\end{aligned}
\end{equation}
Furthermore,
\begin{equation}\label{e:calcc2}
\begin{aligned}
&
\big(\nabla\tilde{f}_{a,b}
\big)^2
\\
&=
\big(\nabla f
\big)^2\cdot f^2_{a,b}+f^2\cdot\mathbf{1}_{[a,b]}+2\cdot f\cdot\nabla f\cdot f_{a,b}
\cdot
\big(\mathbf{1}_{[a,a\wedge b]}-\mathbf{1}_{[b,a \wedge b]}
\big)
\\
&\le
\big(\nabla f
\big)^2\cdot r^2(a,b)+f^2\cdot\mathbf{1}_{[a,b]}+2r(a,b)\cdot |f|\cdot
\big|\nabla f
\big|
\cdot\mathbf{1}_{[a,b]},
\end{aligned}
\end{equation}
which implies that\label{Exp:05}
\begin{equation}\label{dirform}
\begin{aligned}
&{\mathcal E}
\big(\tilde{f}_{a,b},\tilde{f}_{a,b}
\big)
\\
&\le
r^2(a,b){\mathcal E}(f,f)+\frac{1}{2}\int_{[a,b]}\lambda^{(T,r)}(\mathrm{d}z)\,
\big(f^2+2r(a,b)|f|\cdot|\nabla f|
\big)
\\
&\le
r^2(a,b){\mathcal E}(f,f)+\frac{1}{2}\int_{[a,b]}\lambda^{(T,r)}(\mathrm{d}z)\,
\big(2f^2+r^{2}(a,b)(\nabla f)^{2}
\big)
\\
&\le
2r^2(a,b){\mathcal E}(f,f)+ \int_{[a,b]}\lambda^{(T,r)}(\mathrm{d}z)\,f^2.
\end{aligned}
\end{equation}
Here we have applied in the second inequality that $2xy\le x^2+y^2$, for all $x,y\in{\mathbb R}$, with $x:=|f|$ and $y:=r(a,b)\cdot|\nabla f|$.
Since $f\in{\mathcal D}({\mathcal E})$ is continuous and hence bounded on $[a,b]$,
it follows that $\tilde{f}_{a,b}\in{\mathcal D}({\mathcal E})$.
\end{proof}
}
For technical purposes we also introduce, for all $\alpha> 0$,
the bilinear form
\begin{equation}\label{con.2b}
\mathcal E_{\alpha}
\big(f,g
\big)
:=
\mathcal E
\big(f,g
\big)+\alpha
\big(f,g
\big)_{\nu}
\end{equation}
with domain
\begin{equation}\label{domainalp}
{\mathcal D}({\mathcal E}_{\alpha})
:=
{\mathcal D}(\mathcal E).
\end{equation}
Moreover, we also consider for any given closed subset $A\subseteq T$ the domain
\begin{equation}\label{con.2a}
\mathcal D_A(\mathcal E_{\alpha})
:=
\mathcal D_A(\mathcal E)
=
\big\{f\in\mathcal D(\mathcal E):\,f|_A=0
\big\}.
\end{equation}
The main result of this section states that
the form
$(\mathcal E,\mathcal D_A(\mathcal E))$ is a
{\it Dirichlet form}, i.e., symmetric, closed and
Markovian (see, for example,
\cite{FukushimaOshimaTakeda1994} for notation
and terminology).
\begin{proposition}[Dirichlet forms]
For any closed $A\subseteq T$,\label{P:00}
$(\mathcal E,\mathcal D_A(\mathcal E))$
is a Dirichlet form.
\end{proposition}
\begin{proof}
By an argument analogous to Example~1.2.1 in
\cite{FukushimaOshimaTakeda1994}, it can be shown that $({\mathcal E},{\mathcal D}_A({\mathcal E}))$ is
well-defined and symmetric.
The following lemma states that the form $({\mathcal E},{\mathcal D}_A(\mathcal E))$ is closed.
\begin{lemma}[Closed form] For any closed $A\subseteq T$,
\label{L:EDstarclosed}
the form
$({\mathcal E},{\mathcal D}_A(\mathcal E))$
is closed, that is, $\mathcal D_A(\mathcal E)$ equipped with
the inner product $\mathcal E_1$ is complete.
\end{lemma}
\begin{proof} Let $(f_n)_{n\in{\mathbb N}}$ be an $\mathcal E_1$-Cauchy
sequence in $\mathcal D_A(\mathcal E)$. Then there exist
$f,g\in L^2(\nu)$ such that $\lim_{n\to\infty}f_n=f$
in $L^2(\nu)$ and
$\lim_{n\to\infty}\nabla f_n=g$ in $L^2(\lambda^{(T,r)})$.
In particular,
along a subsequence, $f=\lim_{k\to\infty}f_{n_k}$, $\nu$-almost
surely. By the Cauchy--Schwarz inequality,
\begin{equation}\label{e:nablaa}
\begin{aligned}
\Big|\int_x^y&\lambda^{(T,r)}(\mathrm{d}z)\,g(z)-f(y)+f(x)\Big|^2
\\
&=
\lim_{k\to\infty}\Big|\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,g(z)
-f_{n_k}(y)+f_{n_k}(x)\Big|^2
\\
&=
\lim_{k\to\infty}\Big|\int_x^y\lambda^{(T,r)}(\mathrm{d}z)\,
\big(g(z)-\nabla f_{n_k}(z)
\big)\Big|^2
\\
&\leq
r(x,y)\lim_{k\to\infty}
\big(g-\nabla f_{n_k},g-\nabla
f_{n_k}
\big)_{\lambda^{(T,r)}}
=
0,
\end{aligned}
\end{equation}
for $\lambda^{(T,r)}$-almost all $x,y\in T$. Hence
$\nabla f=g$, $\lambda^{(T,r)}$-almost
surely. Similarly, by Fatou's Lemma,
along a subsequence $(f_{n_l})_{l\in\mathbb{N}}$ with $f=\lim_{l\to\infty}f_{n_l}$, $\lambda^{(T,r)}$-almost
surely,
\begin{equation}\label{e:nabla}
\begin{aligned}
\lim_{n\to\infty}{\mathcal E}(f_n-f,f_n-f)
&=
\lim_{n\to\infty}\tfrac{1}{2}\int\lambda^{(T,r)}(\mathrm{d}z)\,
\lim_{l\to\infty}(\nabla f_n(z)-\nabla f_{n_l}(z))^2
\\
&\le
\lim_{n\to\infty}\liminf_{l\to\infty}
\mathcal E(f_n-f_{n_l},f_n-f_{n_l})
=
0.
\end{aligned}
\end{equation}
Clearly, $f|_A=0$, and
the assertion follows.
\end{proof}
{The following lemma shows that the form is {\it contractive}. The conclusions are easily verified, so we omit the proof.
\begin{lemma}[Contraction property] If $f\in{\mathcal D}({\mathcal E})$ then for all $\varepsilon>0$, $f^\varepsilon:=(f\sqrtedge\varepsilon)\varepsilone(-\varepsilon)\in{\mathcal D}({\mathcal E})$ and ${\mathcal E}(f^\varepsilon,f^\varepsilon)\le {\mathcal E}(f,f)$.
\label{L:14}
\end{lemma}
Since the form $({\mathcal E},{\mathcal D}({\mathcal E}))$ is closed and has the contraction property, it immediately follows that it is Markovian
(compare, e.g., {Theorem~1.4.1 in \cite{FukushimaOshimaTakeda1994}}).
\begin{cor}[Markovian form]
For any closed {$A\subseteq T$},
the form
$({\mathcal E},{\mathcal D}_A({\mathcal E}))$
is Markovian, that is, for all $\varepsilon>0$
there exists a Lipschitz continuous function
\label{L:EDstarmarkovian}
$\varphi_\varepsilon:{\mathbb R}\to[-\varepsilon,1+\varepsilon]$ with Lipschitz
constant one such that
\begin{itemize}
\item[(i)]
$\varphi_\varepsilon(t)=t$, for all $t\in[0,1]$, and
\item[(ii)] for all $f\in\mathcal D_A(\mathcal E)$,
$\varphi_\varepsilon\circ f\in\mathcal D_A(\mathcal E)$, and
$\mathcal E(\varphi_\varepsilon\circ f,\varphi_\varepsilon\circ f)
\leq
\mathcal E(f,f)$.
\end{itemize}
\end{cor}
}
By Lemma~\ref{L:EDstarclosed} and Corollary~\ref{L:EDstarmarkovian}, the proof of Proposition~\ref{P:00} is complete.
\end{proof}
We conclude this subsection with the following useful fact.
\begin{lemma}[Transience of $({\mathcal E},{\mathcal D}_A({\mathcal E}))$] \label{L:dtrans} Assume that $(T,r)$ is a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on
$(T,{\mathcal B}(T))$.
For any closed, non-empty {$A\subseteq T$} the Dirichlet form
$(\mathcal E,\mathcal D_A(\mathcal E))$ is transient, \label{L:01}
that is, there exists a bounded $\nu$-integrable reference function $g$ which is strictly positive,
$\nu$-almost surely, and satisfies,
for all $f\in{\mathcal D}_A({\mathcal E})$,
\begin{equation}\label{e:trans}
\int\mathrm{d}\nu\,|f|\cdot g
\le
\sqrt{{\mathcal E}(f,f)}.
\end{equation}
\end{lemma}
\begin{proof} Let $A\subset T$ be a non-empty and closed subset, and fix $\rho'\in A$. Put
\begin{equation}
g :=
1\wedge \gamma\sum_{n\in{\mathbb N}}\frac{\mathbf{1}_{\bar{B}(\rho',n)\setminus\bar{B}(\rho',n-1)}}{n^2\nu(\bar{B}(\rho',n)\setminus\bar{B}(\rho',n-1))}
\end{equation}
with a normalizing constant $\gamma:=(\sqrt{2}\sum_{n\ge 1}n^{-3/2})^{-1}$.
Obviously, $g$ is positive and
\begin{equation}\label{e:g}
\int\nu(\mathrm{d}x)\,g(x)\le\gamma\sum_{n\ge 1}n^{-2}<\infty.
\end{equation}
For all $f\in{\mathcal D}_A({\mathcal E})$ and
$x,y\in T$,
\begin{equation}\label{con.3}
\begin{aligned}
\big|f(y)-f(x)\big|^2
&=
\big|\int^y_x\lambda^{(T,r)}(\mathrm{d}z)\,\nabla f(z)\big|^2
\\
&\leq
2\mathcal E(f,f)r(x,y),
\end{aligned}
\end{equation}
by the Cauchy-Schwarz inequality. Since $f(\rho')=0$, (\ref{con.3}) implies
in particular that $(f(y))^2\le 2\mathcal E(f,f)r(\rho',y)$, and therefore
\begin{equation}
\label{e:trans.1}
\begin{aligned}
\int\mathrm d\nu\,|f|\cdot g
&\le
\sqrt{2}\sqrt{{\mathcal E}(f,f)}\int\mathrm d\nu\,\sqrt{r(\rho',\boldsymbol{\cdot})}\cdot g
\\
&\le
\sqrt{2\mathcal E(f,f)}\,\gamma\sum_{n\ge 1}n^{-3/2}=\mathcal E(f,f)^{1/2}.
\end{aligned}
\end{equation}
\end{proof}
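{\begin{remark}[Illustration of the reference function]\rm
For illustration (this simple example is not used in the sequel), take $T=[0,\infty)$ with the Euclidean metric, $A=\{0\}$, $\rho'=0$, and $\nu$ the Lebesgue measure. Then $\bar{B}(\rho',n)\setminus\bar{B}(\rho',n-1)=(n-1,n]$ has $\nu$-mass one, so the reference function from the preceding proof becomes
\begin{equation*}
g
=
1\wedge\gamma\sum_{n\in{\mathbb N}}n^{-2}\,\mathbf{1}_{(n-1,n]},
\end{equation*}
which is strictly positive and satisfies $\int\mathrm{d}\nu\,g\le\gamma\sum_{n\ge1}n^{-2}<\infty$, as required in Lemma~\ref{L:dtrans}.
\hfill$\qed$
\end{remark}}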
\section{Capacity, Green kernel and resistance}
\label{S:capacities}
In this section we recall well-known facts on capacities and the Green kernel which we will use frequently throughout the paper.
In Subsection~\ref{Sub:extended} we give the notion of the extended Dirichlet space and discuss a frequently used example.
In Subsection~\ref{Sub:capacity} we introduce the capacity and in Subsection~\ref{Sub:Green} the Green kernel. In Subsection~\ref{Sub:resistance}
we discuss the relation between resistance and capacities.
Later on in the article we will relate these potential theoretic notions with the corresponding probabilistic properties of the $\nu$-Brownian motion
associated with the Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$.
\subsection{Extended Dirichlet space}
\label{Sub:extended}
Let $A\subseteq T$ be a closed set. Let
\begin{equation}
\label{extended}
(\mathcal E,\bar{\mathcal D}_A(\mathcal E))
:=
\mbox{ the extended transient Dirichlet space,}
\end{equation}
i.e., $\bar{\mathcal D}_A(\mathcal E)$ is the family of all Borel-measurable functions $f$ on $T$ such that $|f|<\infty$, $\nu$-almost surely, and there exists an ${\mathcal E}$-Cauchy sequence $\{f_n;\,n\in{\mathbb N}\}$ of functions in $\mathcal D_A(\mathcal E)$ such that $\lim_{n\to\infty}f_n=f$, $\nu$-almost surely. By
Theorem 1.5.3 in \cite{FukushimaOshimaTakeda1994} this space can be
identified with the completion of $\mathcal D_A(\mathcal E)$ with
respect to the inner product~$\mathcal E$.
{\begin{remark}[Connection of Kigami's domain with ${\mathcal D}({\mathcal E})$] \rm
{Recall the forms $({\mathcal E}^{\mathrm{Kigami}},{\mathcal F}^{\mathrm{Kigami}})$ and $({\mathcal E}^{\mathrm{Kigami}},{\mathcal D}({\mathcal E}^{\mathrm{Kigami}}))$
from (\ref{Kigami:form}) through (\ref{Kigami:labelD}), and the forms $({\mathcal E},{\mathcal F})$ and $({\mathcal E},{\mathcal D}({\mathcal E}))$
from (\ref{e:F}) through (\ref{con.2p}).
In analogy to (\ref{extended}) write $\bar{\mathcal D}({\mathcal E}^{\mathrm{Kigami}})$ for the extension of Kigami's domain.
We will now show that
\begin{equation}
\label{e:028}
\bar{{\mathcal D}}({\mathcal E})=\bar{\mathcal D}({\mathcal E}^{\mathrm{Kigami}}).
\end{equation}}
Choose $f\in{\mathcal D}({\mathcal E})={\mathcal F}\cap{\mathcal C}_\infty(T)\cap L^2(\nu)$,
and put for all $\varepsilon>0$, $f^\varepsilon:=f-(f\vee(-\varepsilon))\wedge\varepsilon$. Since $f\in{\mathcal C}_\infty(T)$, $f^\varepsilon\in{\mathcal C}_0(T)$
for all $\varepsilon>0$.
Moreover, $f^\varepsilon\in{\mathcal F}$ for all $\varepsilon>0$.
Since $f^\varepsilon{_{\displaystyle\longrightarrow\atop \varepsilon\to 0}} f$, $\nu$-almost everywhere and in ${\mathcal E}$ by Theorem~1.4.1 in \cite{FukushimaOshimaTakeda1994}, we find that
$f\in{\mathcal D}({\mathcal E}^{\mathrm{Kigami}})$. This implies that $\bar{{\mathcal D}}({\mathcal E})\subseteq\bar{{\mathcal D}}({\mathcal E}^{\mathrm{Kigami}})$.
On the other hand, if $f\in\bar{{\mathcal D}}({\mathcal E}^{\mathrm{Kigami}})$, then we find an ${\mathcal E}$-Cauchy sequence $(f_n)_{n\in\mathbb{N}}$ in ${\mathcal D}({\mathcal E}^{\mathrm{Kigami}})$
such that $f_n{_{\displaystyle\longrightarrow\atop n\to\infty}} f$, $\nu$-almost everywhere. For each $n\in\mathbb{N}$ we can find, however, a sequence $(h^n_{k})_{k\in\mathbb{N}}$ in ${\mathcal F}\cap{\mathcal C}_0(T)$ such that $h^n_{k}{_{\displaystyle\longrightarrow\atop k\to\infty}} f_n$, $\nu$-almost everywhere and in ${\mathcal E}$. Thus, along a subsequence $(k_n)_{n\in\mathbb{N}}$ with $k_n{_{\displaystyle\longrightarrow\atop n\to\infty}}\infty$, $h^n_{k_n}{_{\displaystyle\longrightarrow\atop n\to\infty}} f$ $\nu$-almost everywhere and in ${\mathcal E}$. Since ${\mathcal F}\cap{\mathcal C}_0(T)\subseteq{\mathcal F}\cap{\mathcal C}_\infty(T)\cap L^2(\nu)$, $f\in\bar{\mathcal D}({\mathcal E})$ and the claim follows.
\label{Rem:05}
\hfill$\qed$
\end{remark}
}
The following will be used frequently.
\begin{lemma} Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$.
Assume that $(T,r,\nu)$ is such that $\mathbf{1}_T\in\bar{\mathcal D}({\mathcal E})$.
\label{L:08}
Then for all ${a},{b}\in T$ with ${a}\not ={b}$,
the function $h_{{a},{b}}:=\frac{f_{{a},{b}}}{r({a},{b})}$ with $f_{{a},{b}}$ as defined in (\ref{e:h}) belongs to the extended domain.
\end{lemma}
\begin{proof} Assume that $\mathbf{1}_T \in \bar{\mathcal D}({\mathcal E})$. Then there exists an
${\mathcal E}$-Cauchy
sequence $\{f_n;\,n\in{\mathbb N}\}$ of functions in ${\mathcal D}({\mathcal E})$ such that $\lim_{n\to\infty}f_n=\mathbf{1}_T$, $\nu$-almost surely.
Fix $a,b\in T$ with $a\not=b$.
For each $n\in{\mathbb N}$, put
$g_{n}:=f_{n}\cdot h_{{a},{b}}$.
By definition, $h_{{a},{b}}$ is a bounded function. By Example~\ref{Exp:01}, $h_{{a},{b}}$ is absolutely continuous with
{$\nabla h_{{a},{b}}
=\frac{1}{r({a},{b})}
\big(\mathbf{1}_{[{a},{a}\wedge {b}]}-\mathbf{1}_{[{a}\wedge {b},{b}]}\big)$.}
{It follows from Lemma~\ref{L:13} that} $g_{n}\in\mathcal D(\mathcal E)$, and moreover by (\ref{dirform}),
\begin{equation}
\begin{aligned}
\label{e:cca}
&{\mathcal E}\big(g_{n}-g_m,g_{n}-g_m\big)
\\
&=
{\mathcal E}\big((f_{n}-f_m)\cdot h_{a,b},(f_{n}-f_m)\cdot h_{a,b}\big)
\\
&\le
{2{\mathcal E}\big(f_{n}-f_m,f_{n}-f_m\big)+\tfrac{1}{r(a,b)^2}\int_{[a,b]}\mathrm{d}\lambda^{(T,r)}\,(f_{n}-f_m)^2}
\end{aligned}
\end{equation}
for all $n\in{\mathbb N}$. Since $\{f_n;\,n\in{\mathbb N}\}$ is ${\mathcal E}$-Cauchy, the first summand on the right hand side of (\ref{e:cca}) goes to zero as $m,n\rightarrow\infty$.
As $(f_{n})_{n\in\mathbb{N}}$ converges $\nu$-almost everywhere, there exists $e\in T$ such that $(f_{n} - f_{m}) (e) \rightarrow 0$ as $n,m \rightarrow \infty$. Then the second summand is
\begin{equation}
\begin{aligned}
&=\tfrac{1}{r(a,b)^2}\int_{[a,b]}\mathrm{d}\lambda^{(T,r)}(x) \,\Big ( (f_{n} - f_{m}) (e) + \int_{e}^{x} \mathrm{d}\lambda^{(T,r)} (z)\, \nabla(f_{n}-f_m)(z) \Big)^2
\\
&\leq 2 \tfrac{1}{r(a,b)}\left ( (f_{n} - f_{m})(e) \right )^{2}
\\
&+ 2\tfrac{1}{r(a,b)^2}\int_{[a,b]}\mathrm{d}\lambda^{(T,r)}(x) \, r(e,x) \int_{e}^{x} \mathrm{d}\lambda^{(T,r)} (z) \left (\nabla(f_{n}-f_m)(z)\right)^2
\\
&\leq c_{1} \left [ \left ( (f_{n} - f_{m}) (e) \right )^{2} + {\mathcal E}\big(f_{n}-f_m,f_{n}-f_m\big) \int_{[a,b]}\mathrm{d}\lambda^{(T,r)}(x) \, r(e,x) \right ]\\
&\leq c_{2} \left [ {\mathcal E}\big(f_{n}-f_m,f_{n}-f_m\big) + \left ( (f_{n} - f_{m}) (e) \right )^{2} \right ],
\end{aligned}
\end{equation}
{for suitable constants $c_1$ and $c_2$,}
and tends to $0$ as $m,n \rightarrow \infty$.
This shows that $h_{{a},{b}}\in\bar{\mathcal D}({\mathcal E})$.
\end{proof}
\subsection{Capacity}
\label{Sub:capacity}
In this subsection we introduce the notion of capacity via a minimization problem with respect to the Dirichlet form $({\mathcal E}_\alpha,{\mathcal D}_A({\mathcal E}))$. Furthermore, we discuss various characterizations of the minimizers. In particular cases we provide explicit formulae for the minimizer.
For any closed $A\subseteq T$ and another closed set
{$B\subset T\setminus {A}$}, put
\begin{equation}\label{e:LAB}
\bar{\mathcal L}_{A,{B}}
:=
\big\{f\in\bar{\mathcal D}_A(\mathcal E):\,f|_{B}=1\big\}.
\end{equation}
\begin{definition}[$\alpha$-capacities] For
$\alpha\ge 0$, let the {\rm $\alpha$-capacity} of any closed set ${B}\subseteq T$ with respect to
some other closed set {$A\subset T\setminus {B}$} be defined as
\begin{equation}\label{cap.1}
\mathrm{cap}^{\alpha}_{A}({B})
:=
\inf\big\{\mathcal E_\alpha(f,f)\,:\,f\in\bar{\mathcal L}_{A,{B}}\big\}.
\end{equation}
\label{Def:02}
If $\alpha=0$, we abbreviate $\mathrm{cap}_{A}({B}):=\mathrm{cap}^0_{A}({B})$.
If $A=\emptyset$, we will denote $\mathrm{cap}^{\alpha}_{A}({B})$ by $\mathrm{cap}^{\alpha}({B})$.
{Moreover, if $B=\{b\}$ is a singleton, we will write $\mathrm{cap}_{A}(b)$, $\mathrm{cap}^\alpha_{A}(b)$
and $\mathrm{cap}^\alpha(b)$, and so on.}
\end{definition}
We note that one is not restricted to closed sets, but we shall be content with
this choice here.
\begin{lemma}[Non-empty sets have positive capacity] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure {on $(T,{\mathcal B}(T))$}. {
For
any $x\in T\setminus A$, $\mathrm{cap}^1_{A}(\{x\})>0$}.
\label{L:02}
\end{lemma}
\begin{proof} We follow the argument in the proof of Lemma~4 in \cite{Kre95} to
show that singletons have positive capacity. By Theorem~2.2.3 in
\cite{FukushimaOshimaTakeda1994} it is enough to show that
for all $x\in T$ the Dirac measure $\delta_x$ is of finite energy integral, i.e., that
there exists a constant $C_x>0$ such that for all $f\in{\mathcal D}({\mathcal E})\cap {\mathcal C}_0(T)$,
\begin{equation}\label{fei}
f(x)^2\le C_x\,{\mathcal E}_1(f,f)
\end{equation}
(compare (2.2.1) in \cite{FukushimaOshimaTakeda1994}).
Fix $f\in{\mathcal D}({\mathcal E})\cap {\mathcal C}_0(T)$ and $x\in T$. Then by (\ref{con.3}) together with $2ab\le a^2+b^2$ applied with $a:=f(y)$ and $b:=f(x)-f(y)$,
for all $y\in T$,
\begin{equation}
\label{e:dirac}
\begin{aligned}
\tfrac{1}{2} f^2(x)&\le |f(x)-f(y)|^2+f^2(y)\\&\le 2{\mathcal E}(f,f)r(x,y)+f^2(y).
\end{aligned}
\end{equation}
Since $(T,r)$ is locally compact we can find a compact neighborhood, $K=K_x$, of $x$.
Integrating the latter over all $y$ with respect to $\mathbf{1}_{K_x}\cdot\nu$ gives
\begin{equation}
\label{e:dirac1}
\begin{aligned}
\tfrac{1}{2}f^2(x)\nu(K_x)&\le 2{\mathcal E}(f,f)\int_{K_x}\nu(\mathrm{d}y)\,r(x,y)+(f\cdot \mathbf{1}_{K_x},f)_\nu
\\
&\le
2{\mathcal E}(f,f)\int_{K_x}\nu(\mathrm{d}y)\,r(x,y)+(f,f)_\nu.
\end{aligned}
\end{equation}
Hence (\ref{fei}) clearly holds with $C_x:=\frac{2\cdot\max \{2\int_{K_x}\nu(\mathrm{d}y)\,r(x,y);1\}}{\nu(K_x)}$.
\end{proof}
\begin{proposition}[Capacity between two points] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure {on $(T,{\mathcal B}(T))$}. Assume furthermore that $(T,r,\nu)$ is such that $\mathbf{1}_T\in\bar{\mathcal D}({\mathcal E})$.
Then for all ${a},{b}\in T$ with ${a}\not ={b}$,
the function $h_{{a},{b}}:=\frac{f_{{a},{b}}}{r({a},{b})}$ with $f_{{a},{b}}$ as defined in (\ref{e:h}) is the unique minimizer of (\ref{cap.1}).
\label{P:04}
In particular,
\begin{equation}
\label{capxy}
\mathrm{cap}_{{b}}({a}):=\mathrm{cap}_{\{{b}\}}(\{{a}\})=
\big(2r({a},{b})\big)^{-1}.
\end{equation}
\end{proposition}
Before providing a proof of the above proposition, we state well-known characterizations of the solution of the minimizing problem (\ref{cap.1}).
\begin{lemma}[Characterization of minimizers; Capacities] Fix a locally compact ${\mathbb R}$-tree $(T,r)$ and a Radon measure $\nu$ on $(T,{\mathcal B}(T))$. Let $A$ be a closed subset,
${B}\subseteq T\setminus A$ be another {non-empty} closed subset and $\alpha\ge 0$.
\begin{itemize}
\item[(i)]
For a function $h^\ast\in\bar{\mathcal L}_{A,B}$ the
following are equivalent: \label{L:05}
\begin{itemize}
\item[(a)] For all $g\in\bar{\mathcal D}_{A\cup B}({\mathcal E})$,
$\mathcal E_\alpha(h^\ast,g)=0$.
\item[(b)] For all $h\in\bar{\mathcal L}_{A,B}$,
${\mathcal E}_\alpha(h^\ast,h^\ast)\le{\mathcal E}_\alpha(h,h)$.
\end{itemize}
\item[(ii)] If $\bar{\mathcal L}_{A,B} \neq \emptyset$ then there exists a unique function $h^\ast\in\bar{\mathcal L}_{A,B}$ such that $h^\ast$ is $[0,1]$-valued and
$\mathcal{E}_\alpha(h^\ast,h^\ast)=\mathrm{cap}_{A}^\alpha({B})$.
\end{itemize}
\end{lemma}
\begin{proof} (i) {\bf (b) $\Longrightarrow$ (a).}
Assume that $h^\ast\in\bar{\mathcal L}_{A,B}$ is such that ${\mathcal E}_\alpha(h^\ast,h^\ast)\le{\mathcal E}_\alpha(h,h)$ for all $h\in\bar{\mathcal L}_{A,B}$. Choose a function $g\in\bar{\mathcal D}_{A\cup B}({\mathcal E}_\alpha)$, and put $h^{\pm}=h^\ast\pm\varepsilon g$. Then $h^\pm\in\bar{\mathcal L}_{A,B}$ and
\begin{equation}
\begin{aligned}
{\mathcal E}_\alpha\big(h^\ast,h^\ast\big)
&\le
{\mathcal E}_\alpha\big(h^{\pm},h^{\pm}\big)
\\
&=
{\mathcal E}_\alpha\big(h^\ast,h^\ast\big)+\varepsilon^2{\mathcal E}_\alpha\big(g,g\big)\pm 2\varepsilon {\mathcal E}_\alpha\big(g,h^\ast\big),
\end{aligned}
\end{equation}
or equivalently,
\begin{equation}\label{emin3}
\begin{aligned}
2\big|{\mathcal E}_\alpha\big(g,h^\ast\big)\big|
&\le
\varepsilon{\mathcal E}_\alpha\big(g,g\big).
\end{aligned}
\end{equation}
Letting $\varepsilon\downarrow 0$ implies that ${\mathcal E}_\alpha\big(g,h^\ast\big)=0$, which proves (a) since
$g\in\bar{\mathcal D}_{A\cup B}({\mathcal E}_\alpha)$ was chosen arbitrarily.
{\bf (a) $\Longrightarrow$ (b).} Assume that (a) holds. Then for each $h\in\bar{\mathcal L}_{A,B}$,
$g_{h}:=h^\ast-h\in\bar{\mathcal D}_{A\cup B}({\mathcal E})$. Therefore
\begin{equation}\label{emin}
\begin{aligned}
{\mathcal E}_\alpha\big(h,h\big)
&=
{\mathcal E}_\alpha\big(h^\ast-g_h,h^\ast-g_h\big)
\\
&=
{\mathcal E}_\alpha\big(h^\ast,h^\ast\big)+{\mathcal E}_\alpha\big(g_h,g_h\big)
\\
&\ge
{\mathcal E}_\alpha\big(h^\ast,h^\ast\big).
\end{aligned}
\end{equation}
(ii) See Theorem~2.1.5 in \cite{FukushimaOshimaTakeda1994}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:04}] Fix $a,b\in T$ with $a\not=b$. Recall from Lemma~\ref{L:08} that under the assumption $\mathbf{1}_T\in \bar{\mathcal D}({\mathcal E})$, also
$h_{{a},{b}}\in \bar{\mathcal D}({\mathcal E})$.
Since for any $g\in\bar{\mathcal D}_{\{{a},{b}\}}(\mathcal E)$,
\begin{equation}\label{e:dir.6}
\begin{aligned}
\mathcal E\big(h_{{a},{b}},g\big)
&=
\frac{1}{2r({a},{b})}\int\mathrm{d}\lambda^{(T,r)}\,
\big(\mathbf{1}_{[{a},{a}\wedge {b}]}-\mathbf{1}_{[{b},{a}\wedge {b}]}\big)\cdot \nabla g
\\
&=
\frac{g({a})-g({a}\wedge {b})-g({b})+g({a}\wedge {b})}{2r({a},{b})}
\\
&=0,
\end{aligned}
\end{equation}
by (\ref{e:nablah}),
$h_{{a},{b}}$ is the unique minimizer by Lemma~\ref{L:05}. In particular,
it follows from Lemma~\ref{L:13} that
$\mathrm{cap}_{b}(a)={\mathcal E}(h_{a,b},h_{a,b})=(2r(a,b))^{-1}$.
\end{proof}
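{\begin{remark}[Illustration of Proposition~\ref{P:04}]\rm
As a simple illustration (not needed in the sequel), let $T=[0,L]$ with the Euclidean metric, $a=0$ and $b=L$. One checks that $h_{a,b}(x)=1-\tfrac{x}{L}$, so $\nabla h_{a,b}\equiv-\tfrac{1}{L}$ on $[0,L]$ and
\begin{equation*}
\mathrm{cap}_{b}(a)
=
{\mathcal E}\big(h_{a,b},h_{a,b}\big)
=
\tfrac{1}{2}\int_0^L\mathrm{d}z\,L^{-2}
=
\big(2L\big)^{-1},
\end{equation*}
in agreement with (\ref{capxy}).
\hfill$\qed$
\end{remark}}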
\subsection{Green kernel}
\label{Sub:Green}
To prove the characterization of the occupation time measure of the process associated with the Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$ as stated in Proposition~\ref{P:prop}, we introduce a more general variational
problem. Its solution corresponds to the {\it Green kernel}. Consider a closed subset $A\subset T$.
Let $\kappa$ be a positive finite measure with $\int\mathrm{d}\kappa\,r({{\rho}},\boldsymbol{\cdot})<\infty$. For each $\alpha\ge 0$ consider the following
variational problem:
\begin{equation}\label{greenvar}
H^{\alpha,{{\rho}},\kappa}_{A}
:=
\inf\big\{\mathcal E_\alpha(g,g)-{2}\int{\mathrm{d}\kappa\, g};\,g\in\bar{\mathcal D}_A(\mathcal E_\alpha)\big\}.
\end{equation}
There is a well-known characterization of the unique solution to (\ref{greenvar}).
\begin{lemma}[Characterization of minimizers; Green kernel] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree, {$\nu$ a Radon measure on $(T,{\mathcal B}(T))$,}
$A\subseteq T$ be a closed subset,
$\kappa$ be a positive and finite measure with $\int\mathrm{d}\kappa\,
r(\rho,\boldsymbol{\cdot})<\infty$, {for some (and therefore all) $\rho\in T$,} and $\alpha\ge 0$.
\begin{itemize}
\item[(i)]
For a function $g^\ast\in\bar{{\mathcal D}}_{A}({\mathcal E}_\alpha)$ the
following are equivalent: \label{L:03}
\begin{itemize}
\item[(a)] For all $g\in\bar{\mathcal D}_{A}({\mathcal E}_\alpha)$,
$\mathcal E_\alpha(g^\ast,g)=\int\mathrm{d}\kappa\,g$.
\item[(b)] For all $g\in\bar{{\mathcal D}}_{A}({\mathcal E}_\alpha)$,
${\mathcal E}_\alpha(g^\ast,g^\ast)-{2}\int\mathrm{d}\kappa\,g^\ast\le
{\mathcal E}_\alpha(g,g)-{2}\int\mathrm{d}\kappa\,g$.
\end{itemize}
\item[(ii)] Assume $\bar{{\mathcal D}}_{A}({\mathcal E}_\alpha) \neq \emptyset$. There exists a unique minimizer $g^\ast\in \bar{{\mathcal D}}_{A}({\mathcal E}_\alpha)$ for (\ref{greenvar}).
\item[(iii)] If $g^{\ast,\alpha,\kappa}\in\bar{{\mathcal D}}_{A}({\mathcal E}_\alpha)$
is {the} minimizer for (\ref{greenvar}) then $g^{\ast,\alpha,\kappa}$ is non-negative.
\end{itemize}
\end{lemma}
\begin{proof}
(i) The proof is very similar to that of Lemma~\ref{L:05}(i), so we omit it here.
(ii) Let $(f_n)_{n\in{\mathbb N}}$ be a minimizing sequence { in $\bar{\mathcal D}_A(\mathcal E_\alpha)$, i.e.,
\begin{equation}
\label{e:minim}
{\mathcal E}_\alpha(f_n,f_n)-{2}\int\mathrm{d}\kappa\,f_n{_{\displaystyle\longrightarrow\atop n\to\infty}} H_A^{\alpha,{{\rho}},\kappa}.
\end{equation}}
Notice first that
for all $f\in{\mathcal D}_A(\mathcal E_{\alpha})$,
\begin{equation}\label{e:squareenerint}
\begin{aligned}
\big(\int_{{T}}\mathrm d\kappa\,|f|\big)^2
&\leq
\kappa(T)\cdot\int_{{T}}\mathrm{d}\kappa\,f^2
\\
&\leq
2\kappa(T)\int_{{T}}\mathrm{d}\kappa\,r(\rho,\boldsymbol{\cdot})\cdot \mathcal E(f,f),
\end{aligned}
\end{equation}
where we have applied (\ref{con.3}) with $y:={{\rho}}$ and used that $f(\rho)=0$.
The latter implies that, in particular, $(\int\mathrm{d}\kappa\, f_n)_{n\in{\mathbb N}}$ is bounded.
Hence, for all $n,l\in\mathbb{N}$,
{\begin{equation}
\begin{aligned}
&\Big(
\mathcal E_\alpha\big(\tfrac{f_n-f_{n+l}}{2},\tfrac{f_n-f_{n+l}}{2}\big)-{2}\int\mathrm{d}\kappa\,\tfrac{f_n-f_{n+l}}{2}
\Big)+
H^{\alpha,{{\rho}},\kappa}_{A}
\\
&\le
\Big(
\mathcal E_\alpha\big(\tfrac{f_n-f_{n+l}}{2},\tfrac{f_n-f_{n+l}}{2}\big)-{2}\int\mathrm{d}\kappa\,\tfrac{f_n-f_{n+l}}{2}
\Big)
\\
&\qquad+
\Big(
\mathcal E_\alpha\big(\tfrac{f_n+f_{n+l}}{2},\tfrac{f_n+f_{n+l}}{2}\big)-{2}\int\mathrm{d}\kappa\,\tfrac{f_n+f_{n+l}}{2}
\Big)
\\
&=
\mathcal E_\alpha\big(\tfrac{f_n}{2},\tfrac{f_n}{2}\big)+\mathcal E_\alpha\big(\tfrac{f_{n+l}}{2},\tfrac{f_{n+l}}{2}\big)-{2}\int\mathrm{d}\kappa\,\tfrac{f_n}{2}-{2}\int\mathrm{d}\kappa\,\tfrac{f_{n+l}}{2}
-{2}\int\mathrm{d}\kappa\,\tfrac{f_n-f_{n+l}}{2}.
\end{aligned}
\end{equation}}
It follows from ({\mathbb R}ef{e:minim}) that
\begin{equation}\label{e:dir.4al}
\begin{aligned}
\limsup_{n\to\infty}\sup_{l\in\mathbb{N}}\mathcal E_\alpha\big(f_n-f_{n+l},f_n-f_{n+l}\big)
&=0,
\end{aligned}
\end{equation}
i.e., $(f_n)_{n\in{\mathbb N}}$ is ${\mathcal E}_1$-Cauchy. By completeness, a limit $f\in\bar{\mathcal D}_A({\mathcal E}_\alpha)$ exists.
Uniqueness follows easily by an application of the Riesz representation theorem (see Theorem~13.9 in \cite{AliBor1999}).
(iii) Since the form $({\mathcal E}_\alpha,{\mathcal D}_A({\mathcal E}))$ is Markovian,
$(0\vee h)\in {\mathcal D}_A({\mathcal E})$ whenever
$h\in{\mathcal D}_A({\mathcal E})$
(see Theorem~1.4.2 in \cite{FukushimaOshimaTakeda1994}).
Furthermore,
\begin{equation}
\label{e:gwedge0}
\begin{aligned}
&{\mathcal E}_\alpha(0{\vee} g^{\ast,\alpha,\kappa},0{\vee} g^{\ast,\alpha,\kappa})-{2}\int\mathrm{d}\kappa\,0{\vee} g^{\ast,\alpha,\kappa}
\\
&\le
{{\mathcal E}_\alpha(g^{\ast,\alpha,\kappa},g^{\ast,\alpha,\kappa})-{2}\int\mathrm{d}\kappa\,g^{\ast,\alpha,\kappa}}
\end{aligned}
\end{equation}
where equality holds if and only if $0{\vee} g^{\ast,\alpha,\kappa}=g^{\ast,\alpha,\kappa}$, $\kappa$- and $\nu$-almost surely.
By uniqueness of the minimizer, this implies that $0{\vee} g^{\ast,\alpha,\kappa}=g^{\ast,\alpha,\kappa}$, which proves the claim.
\end{proof}
Consequently, we arrive at the following definition.
\begin{definition}[Green kernel] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree,
$A\subseteq T$ a
closed subset,
$\kappa$ a positive and finite measure with $\int\mathrm{d}\kappa\,r({{\rho}},\boldsymbol{\cdot})<\infty$,
and $\alpha\ge 0$.
The {\it Green kernel} $g_A^{\ast,\alpha}\big(\kappa,\boldsymbol{\cdot}\big)$ is the minimizer
for (\ref{greenvar}).
\label{Def:03}
For $x\in T$, we use the abbreviations $g^{\ast,\alpha}_A(x,\boldsymbol{\cdot}):=
g^{\ast,\alpha}_A(\delta_x,\boldsymbol{\cdot})$ and
$g^{\ast,\alpha}_x(\kappa,\boldsymbol{\cdot}):=g^{\ast,\alpha}_{\{x\}}(\kappa,\boldsymbol{\cdot})$.
{For $A:=\emptyset$, we simply write $g^{\ast,\alpha}(x,\boldsymbol{\cdot})$ and $g^{\ast,\alpha}(\kappa,\boldsymbol{\cdot})$, respectively.}
\end{definition}
We conclude this section with providing an explicit formula for the Green kernel in some specific cases.
\begin{proposition}[Green kernel; an explicit formula]
{Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a {\it Radon measure} on $(T,{\mathcal B}(T))$.}
Fix $A\subseteq T$ non-empty and closed. Let $\kappa$ be a positive and finite measure with $\int\mathrm{d}\kappa\,r({{\rho}},\boldsymbol{\cdot})<\infty$, for some (and therefore all) ${\rho}\in A$, and $\alpha\ge 0$. Assume further that $h^{\ast,\alpha}_{A,\boldsymbol{\cdot}}$, the unique minimizer to (\ref{cap.1}), exists. The Green kernel is given by\label{P:06}
\begin{equation}\label{GBmin}
g_A^{\ast,\alpha}\big(\kappa,\boldsymbol{\cdot}\big)
=
\int\kappa(\mathrm{d}x)\,\frac{h^{\ast,\alpha}_{A,\boldsymbol{\cdot}}(x)}
{\mathrm{cap}^\alpha_{A}(\boldsymbol{\cdot})}.
\end{equation}
\end{proposition}
\begin{proof} Fix $x,y\in T$ with $y\not\in A$. Since
{$h_{A,y}^{\ast,\alpha}\in\bar{\mathcal L}_{A,\{y\}}(\mathcal E_\alpha)$ and ${g_A^{\ast,\alpha}(x,\boldsymbol{\cdot})}\in\bar{\mathcal D}_{A}(\mathcal E_\alpha)$},
\begin{equation}
\label{e:025}
g_A^{\ast,\alpha}(x,\boldsymbol{\cdot})-g_A^{\ast,\alpha}(x,y)\cdot h_{A,y}^{\ast,\alpha}\in
\bar{\mathcal D}_{\{y\}\cup A}(\mathcal E_\alpha).
\end{equation}
Furthermore, by Lemmata~\ref{L:05} and~\ref{L:03} we find that
\begin{equation}
\label{e:core.1}
\begin{aligned}
&h^{\ast,\alpha}_{A,y}(x)
\\
&=
{\mathcal E}_\alpha\big(h^{\ast,\alpha}_{A,y}(\boldsymbol{\cdot}),g_A^{\ast,\alpha}(x,{\boldsymbol{\cdot}})\big)
\\
&=
{\mathcal E}_\alpha\big(h^{\ast,\alpha}_{A,y},g_A^{\ast,\alpha}(x,{\boldsymbol{\cdot}})-g_A^{\ast,\alpha}(x,y)\cdot h^{\ast,\alpha}_{A,y}\big)
+{\mathcal E}_\alpha\big(h^{\ast,\alpha}_{A,y},g_A^{\ast,\alpha}(x,y)\cdot h^{\ast,\alpha}_{A,y}\big)
\\
&=
g_A^{\ast,\alpha}(x,y)\cdot{\mathcal E}_\alpha\big(h^{\ast,\alpha}_{A,y},h^{\ast,\alpha}_{A,y}\big)
\\
&=
g_A^{\ast,\alpha}(x,y)\cdot\mathrm{cap}^\alpha_A(y),
\end{aligned}
\end{equation}
which implies (\ref{GBmin}).
\end{proof}
\begin{cor}[Green kernel; $\alpha=0$, two points]
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree
and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Assume furthermore that $(T,r,\nu)$ is such that
$\mathbf{1}_{T}\in\bar{\mathcal D}({\mathcal E})$. Then for all
$x,y\in T$ with $x\not =y$, the Green kernel is given by \label{Cor:01}
\begin{equation}\label{GBmin2}
g^\ast_y\big(x,\boldsymbol{\cdot}\big)
=
2\cdot r\big(c(\boldsymbol{\cdot},x,y),y\big).
\end{equation}
\end{cor}
\begin{proof} {Fix $y,z\in T$, and let $h_{y,z}$ be as defined in Lemma~\ref{L:08}. Since $h_{y,z}\in\bar{\mathcal L}_{y,z}$
by the assumption of the corollary together with
Lemma~\ref{L:08}, it follows from part~(ii) of Lemma~\ref{L:05} that a unique minimizer $h^\ast_{y,z}$ to (\ref{cap.1}) exists.} So we {are in a position to} apply Proposition~\ref{P:06} with $A:=\{y\}$, $\alpha:=0$ and $\kappa:=\delta_{x}$. {Thus,} $g^\ast_y(x,z)=\tfrac{h^\ast_{y,z}(x)}{\mathrm{cap}_{y}(z)}$.
By Proposition~\ref{P:04}, $h^\ast_{y,z}(x)=\frac{r(c(z,x,y),y)}{r(z,y)}$ and $\mathrm{cap}_{y}(z)=\tfrac{1}{2r(z,y)}$. The result {therefore} follows immediately.
\end{proof}
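{\begin{remark}[Illustration of Corollary~\ref{Cor:01}]\rm
For illustration (not needed in the sequel), let again $T=[0,L]$ with the Euclidean metric and $\nu$ the Lebesgue measure, and take $y=0$. For $x,z\in(0,L]$ the branch point satisfies $r\big(c(z,x,y),y\big)=x\wedge z$, so that (\ref{GBmin2}) reads
\begin{equation*}
g^\ast_0(x,z)
=
2\,(x\wedge z),
\end{equation*}
which formally coincides with the classical Green kernel of standard Brownian motion on $[0,L]$ killed upon hitting $0$ and reflected at $L$.
\hfill$\qed$
\end{remark}}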
\begin{remark}[Resolvent]\rm \label{resolvent} For $x,y \in T$ and a bounded measurable $f: T \rightarrow {\mathbb R}$, put
\begin{equation}
\label{e:026}
G^{y}f(x)
:=
\int_T\mathrm{d}\nu\, {g^\ast_y\big(\boldsymbol{\cdot},x\big)}\cdot f.
\end{equation}
By Lemma~\ref{L:03}(i),
\begin{equation}
\label{e:027}
{\mathcal E}\big(G^{y}f, h\big)
=
\int_T\mathrm{d}\nu\, h\cdot f,
\end{equation}
for all $h\in\bar{\mathcal D}_{y}({\mathcal E})$. As usual, we refer to $G^{y}$ as the resolvent corresponding to ${\mathcal E}$.
{\hfill$\qed$}
\end{remark}
\subsection{Relation between resistance and capacity}
\label{Sub:resistance}
In this subsection we define a notion of resistance and discuss its connection to capacity. We will use this in Section~\ref{S:transinfty} where we provide the proof of Theorem~\ref{T:trareha}.
Fix a root ${\rho}\in T$, assume $E_{\infty} \neq \emptyset$, and recall from (\ref{brach}) the last common lower bound $x\wedge y$ for any two $x,y\in E_{\infty}$. We define the {\it mutual energy},
$\bar{\mathcal E}_{\rho}(\pi,\mu)$, of two probability measures $\pi$ and $\mu$ on $(E_\infty,\mathcal{B}(E_\infty))$ by
\begin{equation}
\label{e:014b}
\bar{\mathcal E}_{\rho}\big(\pi,\mu\big)
:=
2\int\pi(\mathrm{d}x)\,\int\mu(\mathrm{d}y)\,r\big({\rho},x\wedge y\big).
\end{equation}
Moreover, we introduce
the corresponding {\it resistance} of $T$ {with respect to ${\rho}$} by
\begin{equation}
\label{e:resis}
\bar{\mathrm{res}}_{{\rho}}
:=
\inf\big\{\bar{\mathcal E}_{\rho}(\pi,\pi):\,\pi\in \mathcal M_1(E_\infty)\big\},
\end{equation}
where ${\mathcal M}_1(E_\infty)$ denotes the space of all probability measures on $(E_\infty,{\mathcal B}(E_\infty))$.
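{\begin{remark}[Illustration of the mutual energy]\rm
As an illustration (not used in the proofs), consider the rooted infinite binary tree with all edge lengths equal to one, and let $\pi$ be the uniform measure on the boundary $E_\infty$. If $x,y\in E_\infty$ are sampled independently according to $\pi$, then $r(\rho,x\wedge y)\ge n$ with probability $2^{-n}$, whence, by (\ref{e:014b}),
\begin{equation*}
\bar{\mathcal E}_{\rho}(\pi,\pi)
=
2\sum_{n\ge 1}2^{-n}
=
2,
\end{equation*}
and therefore $\bar{\mathrm{res}}_{\rho}\le 2$ for this tree.
\hfill$\qed$
\end{remark}}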
\begin{proposition}
Let $(T,r)$ be a locally compact and unbounded ${\mathbb R}$-tree and $\nu$ be a Radon measure on $(T,{\mathcal B}(T))$. Then for all ${\rho}\in T$, \label{P:07}
\begin{equation}\label{P:07.1a}
\bar{\mathrm{res}}_{{\rho}}
\ge
{\big(\mathrm{cap}({\rho})\big)^{-1}}.
\end{equation}
\end{proposition}
The proof of Proposition~\ref{P:07} relies on the following lemma.
\begin{lemma}
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree
such that $E_\infty \neq \emptyset$, { and $\nu$ be a Radon measure on $(T,{\mathcal B}(T))$.}
\label{L:07}
{For all $\pi\in{\mathcal M}_1(E_\infty)$ and $h\in\bar{\mathcal D}({\mathcal E})$ with $h({\rho})=1$, }
\begin{equation}\label{e:020}
\bar{\mathcal E}_{\rho}\big(\pi,\pi\big)\cdot {\mathcal E}\big(h,h\big)\ge 1.
\end{equation}
\end{lemma}
\begin{proof} We follow an idea of \cite{Lyo90}.
Notice first that by Fubini's theorem,
\begin{equation}\label{e:022}
\begin{aligned}
\bar{\mathcal E}_{\rho}(\pi,\pi)
&=
2\int\pi(\mathrm{d}x)\int\pi(\mathrm{d}y)\,
\int_{[{\rho},x\wedge y]}\lambda^{(T,r)}(\mathrm{d}z)
\\
&=
2\int\lambda^{(T,r)}(\mathrm{d}z)\,\pi\big\{x\in E_\infty:\,{z\in x({\mathbb R}_+)}\big\}^2.
\end{aligned}
\end{equation}
By the Cauchy-Schwarz inequality,
\begin{equation}\label{e:021}
\begin{aligned}
{\mathcal E}\big(h,h\big)\,\bar{\mathcal E}_{\rho}\big(\pi,\pi\big)
&\ge
\Big(\int\lambda^{(T,r)}(\mathrm{d}z)\,\nabla h(z)\,
\pi\big\{x\in E_\infty:\,{z\in x({\mathbb R}_+)}\big\}\Big)^2
\\
&=
\Big(\int\pi(\mathrm{d}y)\,\int_{{y({\mathbb R}_+)}}\lambda^{(T,r)}(\mathrm{d}z)\,\nabla
h(z)\Big)^2
\\
&=
\big(\int\pi(\mathrm{d}y)\, h({\rho})\big)^2 = 1,
\end{aligned}
\end{equation}
and the claim follows.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:07}]
The statement holds trivially when $ \bar{\mathrm{res}}_{{\rho}}=\infty$.
Assume therefore that $\bar{\mathrm{res}}_{{\rho}}<\infty$. By (\ref{e:020}),
\begin{equation}
\label{e:proofres}
\begin{aligned}
\bar{\mathrm{res}}_{{\rho}}
&=\inf\big\{\bar{{\mathcal E}}_{\rho}(\pi,\pi):\,\pi\in{\mathcal M}_1(E_\infty)\big\}
\\
&\ge
\big(\inf\{{\mathcal E}(h,h):\,h\in\bar{{\mathcal D}}({\mathcal E}),h({\rho})=1\}\big)^{-1}
\\
&=
\big(\mathrm{cap}({{\rho}})\big)^{-1}.
\end{aligned}
\end{equation}
\end{proof}
\section{Existence, uniqueness, basic properties (Proof of Theorem~\ref{T:01})}
\label{S:existence}
In this section we establish existence and uniqueness (up to $\nu$-equivalence) of a strong Markov process associated with the Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$.
The proof will rely on regularity as specified by the following proposition.
{\begin{proposition}[Regularity]
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree
and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$.
Then the Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$ is {\it regular}, i.e.,
\begin{itemize}
\item[(i)] ${\mathcal D}({\mathcal E})\cap {\mathcal C}_0(T)$ is dense in ${\mathcal D}({\mathcal E})$ with respect to the topology generated by ${\mathcal E}_1$.
\item[(ii)] ${\mathcal D}({\mathcal E})\cap {\mathcal C}_0(T)$ is dense in ${\mathcal C}_0(T)$ with respect to the uniform topology.
\end{itemize}
\label{L:04}
\end{proposition}
The proof will rely on the following lemma:
\begin{lemma} Let $(T,r)$ be a locally compact and complete ${\mathbb R}$-tree, and $A\subseteq T$ non-empty and closed.
${\mathcal F}\cap {\mathcal C}_0(T)$ is dense in ${\mathcal C}_0(T)$ with respect to the uniform topology.
\label{L:09}
\end{lemma}}
For the proof we borrow ideas from the proof of Lemma~5.13 in~\cite{Kigami95}. Quoting that proof almost verbatim might suffice, but for completeness, and to illustrate the benefit of the explicit limiting form, we present the argument in detail.
\begin{proof}
Fix $\rho\in T$, and $f\in {\mathcal C}_0(T)$. Then there exists $R>0$ such that $f\big|_{B^c(\rho,R)}\equiv 0$. For each $n\in {\mathbb N}$ choose $\delta_n>0$ such that
$|f(y)-f(x)|<\tfrac{1}{n}$ whenever $x,y\in B(\rho,R+5\delta_n)$ with $r(x,y)<\delta_n$.
Choose for all $n\in{\mathbb N}$ a finite subset $V_{n} \subset T$ with the following three properties:
\begin{itemize}
\item[(i)] for any three points $x,y,z\in V_n$, the branch point $c(x,y,z)\in V_n$;
\item[(ii)] $\cup_{z \in V_{n}} B(z, \frac{\delta_{n}}{2}) \supset \bar{U}_{n}$;
\item[(iii)] if $W$ is a connected component of $T\setminus V_n$ with $\mathrm{diam}^{(T,r)}(W) > \delta_{n}$, then $W \cap U_{n} =\emptyset$.
\end{itemize}
Denote
\begin{equation}
\label{e:DV}
D(V_n)
:=
\big\{\bar{W}:\,W\mbox{ is a connected component of }T\setminus V_n\big\},
\end{equation}
and let $\partial{W}:=\bar{W}\cap V_n$ for all $\bar{W}\in D(V_n)$.
Notice that by the above properties of $V_{n}$, $\partial{W}$ consists of either one or two points.
Consider for each $p\in V_n$ the function $h_{p,V_n\setminus\{p\}}$ on $T$ which is the linear interpolation on the subtree, $\mathrm{span}(V_n)$, spanned by $V_n$ with respect to the constraint $h_{p,V_n\setminus\{p\}}\big|_{V_n}=\mathbf{1}_p$, and which satisfies $\nabla h_{p,V_n\setminus\{p\}}\big|_{\mathrm{span}(V_n)^c}\equiv 0$. In particular, on each portion of $W$ not in the subtree spanned by $V_{n}$, the function is extended as a constant by its value at the appropriate branch point.
Put for all $n\in\mathbb{N}$,
\begin{equation}
\label{e:tildef}
\tilde{f}_n
:=
\sum_{p\in V_n}f(p){h}_{p,V_n\setminus\{p\}}.
\end{equation}
Clearly $\tilde{f}_{n} \in {\mathcal F}.$
Let $\bar{W}\in D(V_n)$ be such that $\mathrm{diam}^{(T,r)}(W) \leq \delta_{n}$.
For $x\in W$ and $p \in \partial W$,
\begin{equation}
\label{e:ftildef}
\begin{aligned}
\big|f(x)-\tilde{f}_n(x)\big|
&\le
\big|f(x)-f(p)\big|+\big|\tilde{f}_n(p)-\tilde{f}_n(x)\big|
\\
&\le
\big|f(x)-f(p)\big|+\sup_{p'\in\partial W}\big|\tilde{f}_n(p)-\tilde{f}_n(p')\big|
\le
\tfrac{2}{n}.
\end{aligned}
\end{equation}
On the other hand, if $\bar{W}\in D(V_n)$ is such that $\mathrm{diam}^{(T,r)}(W) > \delta_{n}$, then ${W}\cap K_{n}=\emptyset$ (see, for example, Lemma~5.12 in \cite{Kigami95}). Therefore
$\tilde{f}_{n} = 0$ on $W$, and the support of $\tilde{f}_{n}$ is contained in $K_{n}$. Thus
\begin{equation}
\label{e:030}
\sup_{x \in T}\big|f(x)-\tilde{f}_n(x)\big| \leq \tfrac{2}{n}.
\end{equation}
\end{proof}
\begin{lemma}[Regularity; compact tree] Let $(T,r)$ be a compact ${\mathbb R}$-tree, and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Then Proposition~\ref{L:04} holds, i.e., the Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$ is {\it regular}.
\end{lemma}
\begin{proof} (i) If $(T,r)$ is compact, then ${\mathcal D}({\mathcal E})\cap{\mathcal C}_0(T)={\mathcal D}({\mathcal E})$, and (i) trivially holds.
(ii) Fix $f\in{\mathcal C}_0(T)={\mathcal C}(T)$. Applying Lemma~\ref{L:09} we can find a sequence $(f_n)_{n\in\mathbb{N}}$ in ${\mathcal F}$ such that $\|f_n-f\|_\infty\to 0$, as $n\to\infty$. Since $(T,r)$ is compact and $\nu$ is Radon, $f_n\in L^2(\nu)$ for all $n\in\mathbb{N}$, and thus the second claim also follows immediately.
\end{proof}
For general (not necessarily complete) ${\mathbb R}$-trees we will make use of the following:
\begin{cor} Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. If $K\subseteq T$ is a compact subset of $T$ and $U\supset K$ an open subset of $T$ such that $\bar{U}$ is compact, then
there exists a function $\psi^{K,U}\in {{\mathcal D}}({\mathcal E})$ such that $0\le \psi^{K,U}\le 1$, $\psi^{K,U}|_K\equiv 1$ and $\mathrm{supp}(\psi^{K,U})\subseteq\bar{U}$.
\label{Cor:06}
\end{cor}
\begin{proof} By assumption, $(\bar{U},r)$ is a compact ${\mathbb R}$-tree. The Dirichlet form $({\mathcal E},{\mathcal D}({\mathcal E}))$ is a regular Dirichlet form on $L^{2}(\bar{U},\nu)$. We can find open subsets $V_{1}, V_{2}$ of $\bar{U}$ such that $K\subset V_{1}\subset\bar{V_{1}}\subset V_{2}\subset\bar{V_{2}}\subset U$. By Theorem~4.4.3 in \cite{FukushimaOshimaTakeda1994} the form $({\mathcal E},{\mathcal D}_{\bar{U}\setminus{V_{2}}}({\mathcal E}))$ is a regular Dirichlet form on $L^{2}(V_{2},\nu)$.
By Urysohn's lemma there exists a continuous function $f: V_{2}\to {\mathbb R}$ with $f\big|_K\equiv 1$ and $f\big|_{\bar{U}\setminus V_{1}}\equiv 0$. Since, in particular, the Dirichlet form $({\mathcal E},{\mathcal D}_{\bar{U}\setminus{V_{2}}}({\mathcal E}))$ is regular and $f\in{\mathcal C}_0(V_{2})$, we can find a function $g\in{\mathcal D}_{\bar{U}\setminus{V_{2}}}({\mathcal E})$ such that $\|g-f\|_\infty<\tfrac{1}{2}$. Put $\psi^{K,U}:=\min\{1,2g\}$ on ${\bar{V_{2}}}$. Obviously, $g\big|_K\ge \tfrac{1}{2}$ and $g\big|_{\bar{U}\setminus V_{2}}\equiv 0$, and therefore $\psi^{K,U}$ can be extended to all of $T$ in such a way that $\psi^{K,U} \in {{\mathcal D}}({\mathcal E})$, $\psi^{K,U}\big|_K\equiv 1$ and $\psi^{K,U}\big|_{T\setminus U}\equiv 0$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{L:04}] Recall that ${\mathcal D}({\mathcal E})\cap {\mathcal C}_\infty(T)={\mathcal D}({\mathcal E})$. Applying Lemma~1.4.2(i) in \cite{FukushimaOshimaTakeda1994}, $({\mathcal E}, {\mathcal D}({\mathcal E}))$ is regular once we show that
\begin{equation} \label{inreg}
{\mathcal D}({\mathcal E}) \mbox{ is dense in } {\mathcal C}_\infty(T) \mbox{ with respect to the uniform topology.}
\end{equation}
Fix therefore $f\in {\mathcal C}_\infty(T)$. For each $n \in {\mathbb N}$ we can choose a compact set $K_n$ such that $f\big|_{K_{n}^{c}}\le\tfrac1n$.
Moreover, since $(T,r)$ is locally compact, we can also find an open set $U_n\supset K_n$ such that $\bar{U}_{n}$ is compact. We can then choose
a $\delta_n>0$ such that $|f(y)-f(x)|<\tfrac{1}{n}$ whenever $r(x,y)<\delta_n$ and $x,y\in U_n$.
We generalize the reasoning and the notation of the proof of Lemma~\ref{L:09}, and choose again for all $n\in{\mathbb N}$ a finite subset $V_{n} \subset T$ satisfying properties~(i) through~(iii), and consider the corresponding piecewise linear functions $h_{p,V_n\setminus\{p\}}$.
By Corollary~\ref{Cor:06}, for each $n\in\mathbb{N}$ there exists a $[0,1]$-valued function $\phi_{n}\in \mathcal{D}({\mathcal E})$ such that $\phi_{n}=1$ on $K_{n}$ and $\phi_{n} =0$ on $U_{n}^{c}$. This time we put
\begin{equation}
\label{e:tildefn}
\tilde{f}_n
:=
\sum_{p\in V_n}f(p) \phi_{n}{h}_{p,V_n\setminus\{p\}}.
\end{equation}
By the same reasoning we can show that for all $n\in{\mathbb N}$, $\tilde{f}_n\in{\mathcal D}(\mathcal E)$, and that
$\|\tilde{f}_n-f\|_\infty\le\tfrac{1}{n}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:01}]
By Proposition~\ref{L:04} the form $({\mathcal E}, {\mathcal D}({\mathcal E}))$ is regular. Therefore, by Theorem~7.2.1 in~\cite{FukushimaOshimaTakeda1994}, there exists a $\nu$-symmetric Hunt process\footnote{For an introduction to Hunt processes, see Section A.2 in \cite{FukushimaOshimaTakeda1994}.} $B$ on $(T, {\mathcal B}(T))$ whose Dirichlet form is ${\mathcal E}$.
By Theorem~4.2.7 in \cite{FukushimaOshimaTakeda1994}, the process $B$ is unique (i.e., the transition probability function is determined up to an exceptional set). Moreover, the Dirichlet form
$({\mathcal E},{\mathcal D}({\mathcal E}))$
possesses the {\it local property}, i.e., if
$f,g\in{\mathcal D}({\mathcal E})$ have disjoint compact
supports then ${\mathcal E}(f,g)=0$. Hence, by Theorem~7.2.2 in~\cite{FukushimaOshimaTakeda1994}, the process $B$ has continuous paths.
Finally, by Lemma~\ref{L:02} there are no non-trivial exceptional sets, and
the above therefore implies that $B$ is a continuous $\nu$-symmetric strong Markov process.
\end{proof}
We conclude this section by providing a proof of Proposition~\ref{P:prop}, which
identifies the Brownian motion on the real line as the
$\lambda^{(T,r)}$-Brownian motion on $\mathbb{R}$.
\begin{proof}[Proof of Proposition~\ref{P:prop}] Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Assume that $(T,r,\nu)$ is such that the $\nu$-Brownian motion on $(T,r)$ is recurrent.
(i) Let $(P_t)_{t\ge 0}$ be the semi-group associated with the process.
By (\ref{e:dir.6}) together with Theorem~2.2.1 in
\cite{FukushimaOshimaTakeda1994}, $f_{a,b}$ and $-f_{a,b}$ are excessive (i.e., $P_t f_{a,b}\ge f_{a,b}$ and $P_t(- f_{a,b})\ge -f_{a,b}$), and hence the process $Y:=(Y_t)_{t\ge 0}$ given by
\begin{equation}\label{e:YY}
Y_t
:=
f_{a,b}\big(X_t\big)
\end{equation}
is a bounded non-negative martingale.
Hence, by the optional stopping theorem,
$\mathbf{E}^x[Y_0]=\mathbf{E}^x[Y_{\tau_a\wedge\tau_b}]$ for all $x\in T$. Thus,
\begin{equation}\label{stopp}
\begin{aligned}
f_{a,b}(x)
&=
r\big(c(x,a,b),b\big)
\\
&=
f_{a,b}(a)\cdot\mathbf{P}^x\big\{\tau_a<\tau_b\big\}+f_{a,b}(b)\cdot\big(1-\mathbf{P}^x\big\{\tau_a<\tau_b\big\}\big),
\end{aligned}
\end{equation}
and hence, since $f_{a,b}(b)=0$ and $f_{a,b}(a)=r(a,b)$,
\begin{equation}\label{allgemein}
\mathbf{P}^x\big\{\tau_a<\tau_b\big\}
=
\frac{r\big(c(x,a,b),b\big)}{r(a,b)},
\end{equation}
which proves (\ref{Xhit}).
(ii) As the $\nu$-Brownian motion is recurrent, Lemma~\ref{L:02} and Theorem~4.6.6(ii) in \cite{FukushimaOshimaTakeda1994} yield $\mathbf{P}^{x}\{\tau_{b} < \infty\} = 1$. Therefore, by Theorem~4.4.1(ii) in \cite{FukushimaOshimaTakeda1994}, $Rf(x) = \mathbf{E}^{x}[ \int_{0}^{\tau_{b}} f(B_{s})\, \mathrm{d}s]$ is the resolvent of the $\nu$-Brownian motion killed on hitting $b$, i.e.,
\begin{equation}
\label{e:031}
{\mathcal E}(Rf,h) = \int \mathrm{d}\nu\, h\cdot f,
\end{equation}
for all $h \in \bar{\mathcal D}_{b}(\mathcal E)$.
Consequently, using the uniqueness of the resolvent (see Theorem~1.4.3 in \cite{FukushimaOshimaTakeda1994}), Remark~\ref{resolvent} and Corollary~\ref{Cor:01}, (\ref{Xocc}) follows.
\end{proof}
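As a sanity check of (\ref{Xhit}) (an illustration added here, not part of the original argument), consider the special case $(T,r)=(\mathbb{R},|\boldsymbol{\cdot}|)$ with $\nu$ the Lebesgue measure, so that the $\nu$-Brownian motion is standard Brownian motion. For $a<x<b$ the branch point is $c(x,a,b)=x$, and (\ref{Xhit}) reduces to

```latex
% Illustration (not part of the original proof): formula (\ref{Xhit})
% on T = R with a < x < b, where c(x,a,b) = x.
\begin{equation*}
  \mathbf{P}^x\big\{\tau_a<\tau_b\big\}
  =\frac{r\big(c(x,a,b),b\big)}{r(a,b)}
  =\frac{b-x}{b-a},
\end{equation*}
```

the classical gambler's-ruin probability for one-dimensional Brownian motion exiting the interval $[a,b]$.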
\section{Bounded Trees (Proof of Theorems~\ref{T:04} and~\ref{C:mix})}
\label{S:compact}
In this section we consider bounded ${\mathbb R}$-trees. We start by proving the basic long-term behavior stated in Theorem~\ref{T:04}.
We then restrict to compact ${\mathbb R}$-trees or, equivalently, to recurrent Brownian motions.
In Subsection~\ref{Sub:eigen} we
provide bounds on the spectral gap. In
Subsection~\ref{Sub:mix} we apply the latter to study mixing times,
and provide the proof of Theorem~\ref{C:mix}.
\begin{proof}[Proof of Theorem~\ref{T:04}] Let $(T,r)$ be a locally compact and bounded ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. We will rely on Theorem~1.6.3 in \cite{FukushimaOshimaTakeda1994}, which states that the $\nu$-Brownian motion $B$ on $(T,r)$
is recurrent if and only if $\mathbf{1}_T\in\bar{\mathcal D}({\mathcal E})$ and ${\mathcal E}(\mathbf{1}_T,\mathbf{1}_T)=0$.
Assume first that $(T,r)$ is {\it compact}. In this case, ${\mathcal C}_\infty(T)={\mathcal C}(T)$, and thus $\mathbf{1}_T\in{\mathcal D}({\mathcal E})$. Clearly, ${\mathcal E}(\mathbf{1}_T,\mathbf{1}_T)=0$. Hence $B$ is recurrent. Moreover, it follows from Proposition~\ref{P:prop} (with the choice $f\equiv 1$ in (\ref{Xocc}))
that $\mathbf{E}^x[\tau_b]\le 2\nu(T)\cdot r(x,b)<\infty$ for all $b,x\in T$. Hence $\nu$-Brownian motion on compact ${\mathbb R}$-trees is positive recurrent.
If $(T,r)$ is {\it not compact}, then we can find an $x\in\partial T:=\bar{T}\setminus T$, where $\bar{T}$ denotes the completion of $T$. Let $x$ be such a ``missing boundary point'' and fix a
Cauchy sequence $(x_n)_{n\in{\mathbb N}}$ in $(T,r)$ which converges to $x$ in $\bar{T}$. Then each compact subset $K\subset T$ contains only finitely many points of the sequence $(x_n)_{n\in{\mathbb N}}$. It therefore follows for any
$f\in{\mathcal D}({\mathcal E})\subset{\mathcal C}_{\infty}(T)$ that $\lim_{n\to\infty}f(x_n)=0$.
By the definition of the gradient we have for all $y\in T$ and $n\in{\mathbb N}$,
\begin{equation}
\label{e:eqq}
\begin{aligned}
f(y)
&=
f(x_n)+\int_{x_n}^y\mathrm{d}\lambda^{(T,r)}\,\nabla f
\\
&\le
f(x_n)+\sqrt{2\cdot r(y,x_n)\cdot{\mathcal E}(f,f)}.
\end{aligned}
\end{equation}
Letting $n\to\infty$ implies that for all $y\in T$ and $f \in \bar{{\mathcal D}}({\mathcal E})$,
\begin{equation}
\label{e:003}
\big(f(y)\big)^2\le 2\,\mathrm{diam}^{(T,r)}(T) \cdot{\mathcal E}(f,f),
\end{equation}
which implies that $\mathbf{1}_T\notin\bar{\mathcal D}({\mathcal E})$ and that $B$ is transient.
\end{proof}
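For illustration (an addition, not part of the original proof), take $T=[0,1)$ with the Euclidean metric and $\nu$ the Lebesgue measure: $(T,r)$ is bounded but not compact, the completion adding the single boundary point $1$. Since $\mathrm{diam}^{(T,r)}(T)=1$, the bound (\ref{e:003}) specializes to

```latex
% Illustration: the bound (\ref{e:003}) on T = [0,1), where diam = 1.
\begin{equation*}
  \big(f(y)\big)^2\le 2\,{\mathcal E}(f,f),
  \qquad y\in[0,1),\; f\in\bar{{\mathcal D}}({\mathcal E}),
\end{equation*}
```

so $\mathbf{1}_T\notin\bar{\mathcal D}({\mathcal E})$. This is consistent with the classical picture that Brownian motion on $[0,1)$, reflected at $0$, reaches the missing endpoint $1$ in finite time and is killed there.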
\subsection{Principal eigenvalue}
\label{Sub:eigen}
In this subsection we
give estimates on the principal
eigenvalue of the $\nu$-Brownian motion on a locally compact and bounded ${\mathbb R}$-tree $(T,r)$.
For a closed and non-empty subset $A\subseteq T$,
denote by
\begin{equation}\label{specvar}
\lambda_A(T)
:=
\inf\big\{\mathcal E(f,f)\,:\,f\in\bar{\mathcal D}_A(\mathcal E),\,(f,f)_{\nu}=1\big\}
\end{equation}
the {\it principal eigenvalue} (with respect to $A$).
\begin{lemma}[Estimates on the principal eigenvalue]
Fix a locally compact and bounded ${\mathbb R}$-tree $(T,r)$ and a Radon measure $\nu$ on $(T,{\mathcal B}(T))$.
\label{L:11}
Let $A\subseteq T$ be a closed, non-empty and connected subset. Assume that the unique minimizer $h^\ast_{A,x}$ of (\ref{cap.1}) with $B:=\{x\}$ and $\alpha=0$ exists. Then
\begin{equation}\label{e:11.0}
\inf_{x\in T}\frac{\mathrm{cap}_{A}(x)}{(\mathbf 1_T,h^\ast_{A,x})_{\nu}}
\leq
\lambda_A(T)
\leq
\inf_{x\in T}\frac{\mathrm{cap}_{A}(x)}{(h^\ast_{A,x},h^\ast_{A,x})_{\nu}}.
\end{equation}
\end{lemma}
To prepare the proof we provide characterizations of the principal eigenvalue which are
very similar to Lemmata~\ref{L:05} and~\ref{L:03}.
\begin{lemma}[Characterization of minimizers; principal eigenvalue]
Let $(T,r)$ be a locally compact and bounded ${\mathbb R}$-tree,
$\nu$ a Radon measure on $(T,{\mathcal B}(T))$, and $A\subset T$ a closed and non-empty subset. \label{L:06}
Then $\lambda_A(T)$ is well-defined.
\begin{itemize}
\item[(i)] For all $h^\dagger\in\bar{\mathcal D}_A(\mathcal E)$ with $(h^\dagger,h^\dagger)_{\nu}=1$ the following are equivalent.
\begin{itemize}
\item[(a)] For all $g\in\bar{\mathcal D}_{A}({\mathcal E})$,
$\mathcal E(h^\dagger,g)=\lambda_A(T)(h^\dagger,g)_{\nu}$.
\item[(b)] For all $h\in\bar{{\mathcal D}}_{A}(\mathcal E)$ with $(h,h)_{\nu}=1$,
${\mathcal E}(h^\dagger,h^\dagger)\le{\mathcal E}(h,h)$.
\item[(c)] ${\mathcal E}(h^\dagger,h^\dagger)=\lambda_A(T)(h^\dagger,h^\dagger)_\nu$.
\end{itemize}
\item[(ii)] $\lambda_A(T)$ is positive.
\item[(iii)] Any minimizer of (\ref{specvar}) is sign definite.
\end{itemize}
\end{lemma}
\begin{proof}
(i) Fix $h^\dagger\in\bar{\mathcal D}_A({\mathcal E})$ such that $(h^\dagger,h^\dagger)_\nu=1$.
{\bf (b) $\Longrightarrow$ (a).}
Assume that for all $h\in\bar{\mathcal D}_{A}({\mathcal E})\setminus\{h^\dagger\}$ with $(h,h)_\nu=1$, ${\mathcal E}(h^\dagger,h^\dagger)\le{\mathcal E}(h,h)$.
Fix
$g\in\bar{\mathcal D}_A({\mathcal E})$, and put $h^{\pm}:=h^\dagger\pm\varepsilon (g-(h^\dagger,g)_{\nu}\cdot h^\dagger)$.
Then $h^\pm\in\bar{\mathcal D}_A({\mathcal E})$, and
\begin{equation}\label{emindag}
\begin{aligned}
&{\mathcal E}\big(h^\dagger,h^\dagger\big)
\\
&\le
{\mathcal E}\big(h^{\pm},h^{\pm}\big)/ (h^\pm,h^\pm)_\nu
\\
&=
{\mathcal E}\big(h^{\pm},h^{\pm}\big)/
\big(1+\varepsilon^2(g,g)_\nu-\varepsilon^2(h^\dagger,g)^2_\nu\big).
\end{aligned}
\end{equation}
Hence
\begin{equation}\label{emindag0}
\begin{aligned}
&{\mathcal E}\big(h^\dagger,h^\dagger\big)\big(1+\varepsilon^2(g,g)_\nu-\varepsilon^2(h^\dagger,g)^2_\nu\big)
\\
&\leq
{\mathcal E}\big(h^\dagger,h^\dagger\big)+
\varepsilon^2{\mathcal E}\big(g-(h^\dagger,g)_{\nu}\cdot h^\dagger,g-(h^\dagger,g)_{\nu}\cdot h^\dagger\big)
\\
&\qquad\pm 2\varepsilon {\mathcal E}\big(g-(h^\dagger,g)_{\nu}\cdot h^\dagger,h^\dagger\big),
\end{aligned}
\end{equation}
or equivalently,
\begin{equation}\label{emin3dag}
\begin{aligned}
&2\big|{\mathcal E}\big(g-(h^\dagger,g)_{\nu}\cdot h^\dagger,h^\dagger\big)\big|
\\
&\le
\varepsilon{\mathcal E}\big(g-(h^\dagger,g)_{\nu}\cdot h^\dagger,g-(h^\dagger,g)_{\nu}\cdot h^\dagger\big)\\
&\qquad-\varepsilon(g,g)_\nu{\mathcal E}(h^\dagger,h^\dagger)+\varepsilon{\mathcal E}(h^\dagger,h^\dagger)(h^\dagger,g)^2_\nu.
\end{aligned}
\end{equation}
Letting $\varepsilon\downarrow 0$ implies that ${\mathcal E}(g,h^\dagger)=\lambda_A(T)\cdot(h^\dagger,g)_{\nu}$, which proves (a) since
$g\in\bar{\mathcal D}_{A}({\mathcal E})$ was chosen arbitrarily.
{\bf (a) $\Longrightarrow$ (c).} Assume (a) holds. Then (c) follows with the particular choice $g:=h^\dagger$.
{\bf (c) $\Longrightarrow$ (b).} This is an immediate consequence of the definition (\ref{specvar}).
(ii) Transience (\ref{e:trans}) implies $\lambda_A(T)>0$ if
$A$ is non-empty.
(iii)
Let $h^\dagger$ be a minimizer. Let
$S_\pm:=\{x\in T\,:\,\pm h^\dagger(x)>0\}$, and put
$h_\pm^\dagger:=\pm\mathbf 1_{S_\pm} h^\dagger = \pm \frac{h^\dagger \pm |h^\dagger|}{2}$, i.e., $h^\dagger=h^\dagger_+-h^\dagger_-$.
To verify that $h^\dagger$ is sign definite, we proceed by contradiction and
assume that $\nu(S_-)\cdot\nu(S_+)>0$.
In this case we can define
\begin{equation}
\label{e:004}
\tilde{h}
:=
\frac{(h_-^\dagger,h_-^\dagger)_\nu h_+^\dagger+(h^\dagger_+,h^\dagger_+)_\nu h_-^\dagger}{\sqrt{2(h_-^\dagger,h_-^\dagger)_\nu(h^\dagger_+,h^\dagger_+)_\nu}} .
\end{equation}
It is easy to see that $\tilde{h}\in\bar{{\mathcal D}}_A({\mathcal E})$ is orthogonal to $h^\dagger$ and that
$(\tilde{h},\tilde{h})_\nu=1$.
Orthogonality together with (a) applied to $g:=\tilde{h}$ implies ${\mathcal E}(h^\dagger,\tilde{h})=0$, while we can also read off from (a) that
\begin{equation}
\label{e:readoff}
\begin{aligned}
0 &= \sqrt{2(h_-^\dagger,h_-^\dagger)_\nu(h^\dagger_+,h^\dagger_+)_\nu}\,{\mathcal E}(h^\dagger,\tilde{h})
\\
&=
{\mathcal E}\big(h^\dagger_+-h^\dagger_{-},(h_-^\dagger,h_-^\dagger)_\nu h_+^\dagger+(h^\dagger_+,h^\dagger_+)_\nu h_-^\dagger\big)
\\
&=
2\lambda_A(T)(h_-^\dagger,h_-^\dagger)_\nu(h^\dagger_+,h^\dagger_+)_\nu.
\end{aligned}
\end{equation}
This, of course, is a contradiction since $\lambda_A(T)>0$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{L:11}] Let $\varphi_A$ be
a non-negative minimizer of (\ref{specvar}) and $g^\ast_A(\nu,\boldsymbol{\cdot})$
the unique minimizer of (\ref{greenvar}) with $\kappa:=\nu$ and $\alpha:=0$.
Then by Lemma~\ref{L:06} together with Lemma~\ref{L:07},
\begin{equation}\label{e:11.1}
\begin{aligned}
\lambda_A(T)
&=
\frac{{\mathcal E}(\varphi_A,g^\ast_A(\nu,\boldsymbol{\cdot}))}{(\varphi_A,g_A^\ast(\nu,\boldsymbol{\cdot}))_{\nu}}
= \frac{(\varphi_A,\mathbf{1}_T)_{\nu}}{(\varphi_A,g^\ast_A(\nu,\boldsymbol{\cdot}))_{\nu}}
\geq
\inf_{x\in T}\big(g_A^\ast(\nu,x)\big)^{-1}.
\end{aligned}
\end{equation}
Moreover, by Proposition~\ref{P:06},
\begin{equation}\label{e:11.4}
g_A^\ast(\nu,x)
=
\frac{(h^\ast_{A,x},\mathbf{1}_T)_{\nu}}{\mathrm{cap}_{A}(x)},
\end{equation}
where $h^\ast_{A,x}$ is the unique minimizer of (\ref{cap.1}) with $\alpha:=0$. This together with (\ref{e:11.1})
implies the {\it lower bound} in (\ref{e:11.0}).
To obtain the {\it upper bound}, insert $f_{A,x}:=h^\ast_{A,x}/(h^\ast_{A,x},h^\ast_{A,x})_{\nu}^{1/2}$, $x\not\in A$,
into (\ref{specvar}). Then for all $x\in T\setminus A$,
\begin{equation}\label{e:11.2}
\lambda_A(T)
\leq
{\mathcal E}(f_{A,x},f_{A,x})
=
\frac{{\mathrm{cap}_{A}(x)}}{(h^\ast_{A,x},h^\ast_{A,x})_{\nu}},
\end{equation}
which gives the claimed upper bound.
\end{proof}
The following proposition is an immediate consequence for compact ${\mathbb R}$-trees. The lower bound in (\ref{P:08.1}) has been verified for $\alpha$-stable trees in the proof of Lemma~2.1 of \cite{CroydonHumbly2010}.
\begin{proposition} Fix a compact ${\mathbb R}$-tree $(T,r)$ and a Radon measure $\nu$ on $(T,{\mathcal B}(T))$.
For all $b\in T$,
\label{P:08}
\begin{equation} \label{P:08.1}
\tfrac{1}{2\left(\mathrm{diam}^{(T,r)}(T)\cdot\nu(T)\right)}
\le
\lambda_{\{b\}}(T)
\le
\tfrac{1}{2}\inf_{x\in T\setminus\{b\}}\big(\nu\big\{y\in T:\,x\in[y,b]\big\}\cdot r(x,b)\big)^{-1}.
\end{equation}
\end{proposition}
\begin{proof} When $A=\{b\}$, the minimizer $h^{\ast}_{b,x}$ of (\ref{cap.1}) exists, so the assumptions of Lemma~\ref{L:11} and Corollary~\ref{Cor:01} are satisfied.
For the {\it lower bound}, recall from (\ref{e:11.1}) together with Corollary~\ref{Cor:01} that
\begin{equation}\label{newe}
\begin{aligned}
\lambda_{\{b\}}(T)
&\ge
\inf_{x\in T\setminus\{b\}}\big(g^\ast_{\{b\}}(\nu,x)\big)^{-1}
\\
&=
\inf_{x\in T\setminus\{b\}}\big(2\int\nu(\mathrm{d}y)\,r(c(x,y,b),b)\big)^{-1}
\\
&\ge
\big(2\cdot\mathrm{diam}^{(T,r)}(T)\cdot\nu(T)\big)^{-1},
\end{aligned}
\end{equation}
as claimed.
For the {\it upper bound}, recall from (\ref{e:11.2}) together with Proposition~\ref{P:04} that
\begin{equation}\label{upper}
\begin{aligned}
\lambda_{\{b\}}(T)
&\le
\inf_{x\in T\setminus\{b\}}\frac{{\mathrm{cap}_{b}(x)}}{(h^\ast_{x,b},h^\ast_{x,b})_{\nu}}
\\
&=
\inf_{x\in T\setminus\{b\}}\frac{r(x,b)}{2\cdot (r(c(\boldsymbol{\cdot},x,b),b),r(c(\boldsymbol{\cdot},x,b),b))_{\nu}}
\\
&\le
\tfrac{1}{2}\inf_{x\in T\setminus\{b\}}\big(\nu\big\{y\in T:\,x\in[y,b]\big\}\cdot r(x,b)\big)^{-1},
\end{aligned}
\end{equation}
where we have used that for all $x\in T\setminus\{b\}$,
\begin{equation}\label{P:08.4}
\begin{aligned}
\int\nu(\mathrm{d}y)\,r\big(c(y,x,b),b\big)^2
&\ge
\nu\big\{y\in T:\,x\in[y,b]\big\}\cdot r(x,b)^2.
\end{aligned}
\end{equation}
\end{proof}
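As a plausibility check of (\ref{P:08.1}) (an illustration added here, not part of the original text), take $T=[0,L]$ with the Euclidean metric, $\nu$ the Lebesgue measure and $b:=0$. Then $\mathrm{diam}^{(T,r)}(T)=L$, $\nu(T)=L$ and $\nu\{y\in T:\,x\in[y,0]\}=L-x$, so the two bounds read

```latex
% Plausibility check of (\ref{P:08.1}) on the interval [0,L] with b = 0;
% the infimum of 1/((L-x)x) over 0 < x < L is attained at x = L/2.
\begin{equation*}
  \frac{1}{2L^2}
  \;\le\;
  \lambda_{\{0\}}\big([0,L]\big)
  \;\le\;
  \frac{1}{2}\inf_{0<x<L}\frac{1}{(L-x)\,x}
  \;=\;\frac{2}{L^2}.
\end{equation*}
```

For comparison, the principal eigenvalue of $-\tfrac{1}{2}\tfrac{\mathrm{d}^2}{\mathrm{d}x^2}$ on $[0,L]$ with a Dirichlet condition at $0$ and a Neumann condition at $L$ is $\pi^2/(8L^2)\approx 1.23/L^2$, which indeed lies between the two bounds.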
\subsection{Mixing times}
\label{Sub:mix}
In this subsection we give the proof of Theorem~\ref{C:mix},
based on estimates of the spectral gap of the process associated with the Dirichlet form.
Denote by
\begin{equation}
\label{specgapvar}
\begin{aligned}
\lambda_2(T)
&:=
\inf\big\{\mathcal E(f,f)\,:\,f\in\bar{\mathcal D}(\mathcal E),\,(f,f)_{\nu}=1,\,(f,\mathbf{1}_T)_{\nu}=0\big\}
\end{aligned}
\end{equation}
the {\it spectral gap}.
Here is a useful characterization of the spectral gap.
\begin{lemma}[Characterization of minimizers; spectral gap]
Let $(T,r)$ be a compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. \label{L:12}
\begin{itemize}
\item[(i)]
For all $h^\ddagger\in\bar{\mathcal D}(\mathcal E)$ with $(h^\ddagger,h^\ddagger)_{\nu}=1$
and $(h^\ddagger,\mathbf{1}_T)_{\nu}=0$ the following are equivalent.
\begin{itemize}
\item[(a)] For all $g\in\bar{\mathcal D}(\mathcal E)$ with $(g,\mathbf{1}_T)_{\nu}=0$,
$\mathcal E(h^\ddagger,g)=\lambda_2(T)(h^\ddagger,g)_{\nu}$.
\item[(b)] For all $h\in\bar{{\mathcal D}}(\mathcal E)$ with $(h,h)_{\nu}=1$
and $(h,\mathbf{1}_T)_{\nu}=0$,
${\mathcal E}(h^\ddagger,h^\ddagger)\le{\mathcal E}(h,h)$.
\item[(c)] ${\mathcal E}(h^\ddagger,h^\ddagger)=\lambda_2(T)$.
\end{itemize}
\item[(ii)] If $h^\ddagger$ is a minimizer of the minimum problem (\ref{specgapvar}), then $\lambda_2(T)\ge\lambda_{\{b\}}(T)$ for all $b\in T$ with $h^\ddagger(b)=0$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) The proof is very similar to that of Lemma~\ref{L:06}; we do not repeat it here.
(ii) Let $h^\ddagger\in\bar{\mathcal D}(\mathcal E)$ be a minimizer corresponding to (\ref{specgapvar}), so that $(h^\ddagger,h^\ddagger)_\nu=1$ and $(h^\ddagger,\mathbf{1}_T)_\nu=0$.
Since $(h^\ddagger,\mathbf{1}_T)_\nu=0$, the zero set $S_0:=\{x\in T:\,h^\ddagger(x)=0\}\not=\emptyset$. Moreover,
if $b\in S_0$ then $h^\ddagger\in\bar{\mathcal D}_b({\mathcal E})$, and therefore by definition~(\ref{specvar}),
$\lambda_{2}(T)={\mathcal E}(h^\ddagger,h^\ddagger)\ge\lambda_{\{b\}}(T)$.
\end{proof}
We close the section with the proof of Theorem~\ref{C:mix}.
\begin{proof}[Proof of Theorem~\ref{C:mix}] Notice first that since $\nu$-Brownian motion is recurrent on compact ${\mathbb R}$-trees, it is conservative.
Consequently, if $(P_t)_{t\ge 0}$ denotes the semi-group, then $P_t\mathbf{1}_T=\mathbf{1}_T$ for all $t\ge 0$. Thus, by $\nu$-symmetry, we can conclude for
all probability measures $\nu'$ on $(T,{\mathcal B}(T))$ such that $\nu'\ll\nu$ with $\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu}\in L^1(\nu')$ that
\begin{equation}\label{e:mix.2}
\begin{aligned}
\big\|\nu' P_t-\tfrac{\nu}{\nu(T)}\big\|_{\mathrm{TV}}
&=
\big\|\big(P_t\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu}\,\nu(T)-\mathbf 1_T\big)\tfrac{\nu}{\nu(T)}\big\|_{\mathrm{TV}}
\\
&=
\big\|P_t\big(\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu}\,\nu(T)-\mathbf 1_T\big)\tfrac{\nu}{\nu(T)}\big\|_{\mathrm{TV}}.
\end{aligned}
\end{equation}
By Jensen's inequality, the fact that $(\mathbf 1_T,\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu})_\nu=1$, and
the spectral theorem applied to $P_t$ (see the discussion on page~2 in \cite{wang00} and the references therein),
\begin{equation}\label{e:mix.3}
\begin{aligned}
\big\|\nu' P_t-\tfrac{\nu}{\nu(T)}\big\|_{\mathrm{TV}}
&\leq
\Big(\int\tfrac{\mathrm{d}\nu}{\nu(T)}\,\big|P_t\big(\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu}\,\nu(T)-\mathbf 1_T\big)\big|^2\Big)^{1/2}
\\
&\leq
\mathrm e^{-\lambda_2(T) t}\big(\sqrt{\nu(T)}\,(\mathbf{1}_T,\tfrac{\mathrm{d}\nu'}{\mathrm{d}\nu})_{\nu'}^{1/2}+1\big).
\end{aligned}
\end{equation}
The assertion now follows from (\ref{P:08.1}) and Lemma~\ref{L:12}~(ii).
\end{proof}
\section{Trees with infinite diameter}
\label{S:transinfty}
In this section we consider the $\nu$-Brownian motion on a locally compact and unbounded ${\mathbb R}$-tree $(T,r)$. We shall give the proof of Theorem~\ref{T:trareha}, which is based on the following criterion for recurrence and transience,
relating the potential theoretic and the dynamic
approach in a transparent way. Recall from (\ref{Einfty}) the set $E_\infty$ of ends at infinity.
The following proposition relates transience to a positive capacity between the root and the ends at ``infinity''.
\begin{proposition} \label{P:03}
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree and $\nu$ a Radon measure on $(T,{\mathcal B}(T))$. Then the following are equivalent.
\begin{itemize}
\item[(a)] The $\nu$-Brownian motion on $(T,r)$ is recurrent.
\item[(b)] $\mathrm{cap}(\rho)=0$.
\end{itemize}
\end{proposition}
\begin{proof} By Theorem~1.6.3 in \cite{FukushimaOshimaTakeda1994}, $\nu$-Brownian motion on $(T,r)$ is recurrent if and only if
there exists a sequence $(h_{{k}})_{k\in\mathbb{N}}$ in ${\mathcal D}({\mathcal E})$ such that $h_{{k}}\to 1$, $\nu$-almost everywhere, and
$\mathcal E(h_{{k}},h_{{k}})\to 0$, as $k\to\infty$.
{$\mathbf (b) \Longrightarrow (a)$: }
Suppose $\mathrm{cap}(\rho)=0$. Then there exists for each $n\in\mathbb{N}$ a function $h_{n}\in\bar{\mathcal D}(\mathcal E)$ with $h_{n}(\rho) = 1$ and such that $\mathcal E(h_{n},h_{n}) \to 0$, as $n\to\infty$. By standard $L^{2}$-theory there exists a subsequence with $\nabla h_{n_{k}}\to 0$, $\lambda^{(T,r)}$-almost everywhere, as $k\to\infty$. As for each $k\in\mathbb{N}$ and $x\in T$,
\begin{equation}
\label{l:031}
h_{n_{k}}(x)
=
1+\int_{\rho}^{x}\mathrm{d}\lambda^{(T,r)}\, \nabla h_{n_{k}},
\end{equation}
$h_{n_{k}}\to 1$ pointwise, as $k\to\infty$.
Thus, $\nu$-Brownian motion on $(T,r)$ is recurrent.
{$\mathbf (a) \Longrightarrow (b)$: } Suppose $\nu$-Brownian motion on $(T,r)$ is recurrent. Then we
can choose a sequence $(h_{{k}})_{k\in\mathbb{N}}$ in ${\mathcal D}({\mathcal E})$ such that $h_{{k}}\to 1$, $\nu$-almost everywhere, and
$\mathcal E(h_{{k}},h_{{k}})\to 0$, as $k\to\infty$.
Since $\nu$ is Radon, there exists an $a\in B(\rho, 1)$ such that $h_{{k}}(a)\to 1$, as $k\to\infty$.
As
\begin{equation}
\label{l:032}
h_{{k}}(\rho)
=
h_{{k}}(a)-\int_{\rho}^{a}\mathrm{d}\lambda^{(T,r)}\, \nabla h_{{k}},
\end{equation}
the Cauchy-Schwarz inequality implies that
\begin{equation}
\label{l:033}
|h_{{k}}(\rho)-1|
\leq
|h_{k}(a) -1| + \sqrt{2\,\mathcal E\big(h_{{k}},h_{{k}}\big)}.
\end{equation}
Therefore $h_{{k}}(\rho)\to 1$, as $k\to\infty$. Consequently, we can assume without loss of generality that $h_{k}(\rho) >0$ for all $k\in\mathbb{N}$.
Put $f_{k}:=\frac{h_{k}}{h_{k}(\rho)}$. It is easy to verify that $f_{k} \in{\mathcal D}({\mathcal E})$ with $f_{k}(\rho)=1$ and such that
$\mathcal E(f_{{k}},f_{{k}})\to 0$, as $k\to\infty$.
This implies that $\mathrm{cap}(\rho)=0$.
\end{proof}
We conclude this section with the proof of Theorem~\ref{T:trareha}.
\begin{proof}[Proof of Theorem~\ref{T:trareha}] (i) Recall from (\ref{Einfty}) and (\ref{e:barr}) the set $E_\infty$ of ends at infinity equipped with the distance $\bar{r}$, and from (\ref{e:inha}) the $1$-dimensional Hausdorff measure ${\mathcal H}^1$ on $(E_\infty,\bar{r})$.
Assume that $(T,r,\nu)$ is such that
\begin{equation}
\label{e:010}
{\mathcal H}^1\big(E_\infty,\bar{r}\big)<\infty.
\end{equation}
Then for all $\varepsilon\in(0,1)$
there exists a disjoint finite covering of $E_\infty$ by sets $E_i\subseteq E_\infty$, $i=1,...,m=m(\varepsilon)$, with $\mathrm{diam}^{(E_\infty,\bar{r})}(E_i)\le\varepsilon$; furthermore, the covering can be chosen so that
\begin{equation}
\label{finitelength}
\lim_{\varepsilon\to 0}\sum_{i=1}^{m(\varepsilon)}\mathrm{diam}^{(E_\infty,\bar{r})}(E_i)<\infty.
\end{equation}
For each such collection a so-called finite {\it cut set} $\{x_i;\,i=1,...,m=m(\varepsilon)\}$ in $T$ is given by letting $x_i:=\min E_i$. Note that
$\mathrm{diam}^{(E_\infty,\bar{r})}(E_i)\le\varepsilon$ if and only if $r(\rho,x_i)\ge \varepsilon^{-1}$.
Let $y_{i} \in T$ be such that $y_{i}\in[\rho,x_{i}]$ and $r(\rho,y_i)=\tfrac{r(\rho,x_i)}{2}$. Put $V:=\{x_{i},y_{i}:\,i=1,2,...,m(\varepsilon)\}$, and recall from (\ref{e:DV}) the set $D(V)$ of closures of the connected components of $T\setminus V$. As before, let $\partial \bar{W}:=\bar{W}\cap V$ for all $\bar{W}\in D(V)$. For any $p,q\in T$ with $p\not=q$, let $h^\ast_{p,q}$ be the minimizer of (\ref{cap.1}) with $\alpha:=0$, $A:=\{q\}$ and $B:=\{p\}$.
For each $\varepsilon>0$, put
\begin{equation}
\label{e:V1}
h_{\varepsilon}
:=
\sum_{i=1}^{m(\varepsilon)}\mathbf{1}_{\bar{W}_{x_i,y_i}}\cdot h^{\ast}_{y_i,x_i}+\mathbf{1}_{T\setminus\cup_{i=1}^{m(\varepsilon)}(\bar{W}_{x_i,y_i}\cup E_i)}.
\end{equation}
Since $\{E_{i};\,i=1,...,m\}$ cover $E_{\infty}$, the support of $h_{\varepsilon}$ is a compact set, and therefore in particular, $h_{\varepsilon}\in{\mathcal D}({\mathcal E})$. Furthermore,
\begin{equation}
\label{e:intnabla1}
\begin{aligned}
{\mathcal E}\big(h_\varepsilon,h_\varepsilon\big)
&=
\tfrac{1}{2}\int\mathrm{d}\lambda^{(T,r)}(\nabla h_\varepsilon)^2
\\
&=
\tfrac{1}{2}\sum\nolimits_{i=1}^{m(\varepsilon)}\big(r(y_{i},x_i)\big)^{-1}
\\
&=
\sum\nolimits_{i=1}^{m(\varepsilon)}\big(r(\rho,x_i)\big)^{-1}
\\
&=
\sum\nolimits_{i=1}^{m(\varepsilon)}\mathrm{diam}^{(E_\infty,\bar{r})}(E_i).
\end{aligned}
\end{equation}
In particular, $h_\varepsilon\in{\mathcal D}({\mathcal E})$ and $\limsup_{\varepsilon\to 0}\int \mathrm{d}\lambda^{(T,r)}(\nabla h_\varepsilon)^2<\infty$ by (\ref{finitelength}).
Moreover, $h_\varepsilon\to\mathbf{1}_T$, as $\varepsilon\to 0$, and an application of H\"older's inequality yields that $(h_\varepsilon)$ is ${\mathcal E}$-Cauchy. Therefore $h_\varepsilon\to\mathbf{1}_T$ in ${\mathcal E}_1$ as $\varepsilon\to 0$.
That is, $\mathbf{1}_T\in\bar{\mathcal D}({\mathcal E})$ and therefore the $\nu$-Brownian motion is recurrent.
(ii) Next assume that $\mathrm{dim}_H(E_\infty,\overline r)>1$. Then by the converse of Frostman's energy theorem (compare, e.g., Theorem~4.13(ii) in \cite{Fal2003}) there exists $\pi \in{\mathcal M}_1(E_\infty)$ with $\bar{\mathcal E}(\pi,\pi)<\infty$, and hence $\mathrm{res}_\rho<\infty$. Thus the $\nu$-Brownian motion is transient by Proposition~\ref{P:03} together with Proposition~\ref{P:07}.
\end{proof}
\section{Connection to the discrete world (Proof of Theorem~\ref{nashwillconverse})}
\label{Sub:contdisc}
In this section we give the proof of Theorem~\ref{nashwillconverse}. It will be concluded from Theorem~\ref{T:trareha} by considering
the embedded Markov chains. For that, notice that we can associate any weighted discrete tree $(V,\{r_{\{x,y\}};\,x,y\in V\})$
with the following locally compact ${\mathbb R}$-tree: fix a root $\rho\in V$ and introduce the metric
$r_V(x,y)=\sum_{e\in |x,y|}r_e$, $x,y\in V$, where $|x,y|$ is the set of
edges of the self-avoiding path connecting $x$ and $y$. Notice that $(V,r_V)$ is a $0$-hyperbolic space, or equivalently, $r_V(v_1,v_2)+r_V(v_3,v_4)\le\max\{r_V(v_1,v_3)+r_V(v_2,v_4);r_V(v_1,v_4)+r_V(v_2,v_3)\}$ for all $v_1,v_2,v_3,v_4\in V$.
By Theorem~3.38 in~\cite{Eva} we can find a smallest ${\mathbb R}$-tree $(T,r)$
such that $r(x,y)=r_V(x,y)$ for all $x,y\in V$. The following lemma complements the latter to a one-to-one correspondence between
rooted weighted discrete trees and rooted ${\mathbb R}$-trees.
\begin{lemma}[Locally compact ${\mathbb R}$-trees induce weighted discrete trees]
Let $(T,r,\rho)$ be a locally compact rooted ${\mathbb R}$-tree which is spanned by its ends at infinity.
Then the following holds:
\begin{itemize}
\item[(i)] All $x\in T$ are of finite degree, i.e., the number of
connected components of $T\setminus\{x\}$ is finite.
\item[(ii)] Any ball contains only finitely many branch points, i.e., points of degree at least $3$.
\end{itemize}
\label{L:061}
In particular, $\lambda^{(T,r)}(B(\rho,n))<\infty$, for all $n\in{\mathbb N}$.
\end{lemma}
\begin{remark}[Locally compact ${\mathbb R}$-trees induce weighted discrete trees]\rm
Given a locally compact rooted ${\mathbb R}$-tree which is spanned by its ends at infinity, let $V$ be the set of branch points in $(T,r)$ and $r_{\{x,y\}}:=r(x,y)$
\label{Rem:02}
for all $x,y\in V$ such that $]x,y[\cap V=\emptyset$. Obviously, $(V,\{r_{\{x,y\}};\,x,y\in V\})$ is a weighted discrete tree.
\hfill$\qed$
\end{remark}
\begin{proof} Recall from Lemma~5.9 in \cite{Kigami95}
that in a locally compact and complete
metric space all closed balls are compact.
(i) We give an indirect proof and assume to the contrary that $x\in T$ is a point of infinite degree. Then
$T\setminus\{x\}$ decomposes into at least countably many connected components, $T_1,T_2,...$, with only leaves at infinite distance to the root, i.e., $T_n=T_n^o$. We can therefore pick points $\{y_1,y_2,...\}$ with $y_i\in T_i$ and $r(x,y_i)=1$, $i=1,2,...$. Thus the mutual distance between any two of $\{y_1,y_2,...\}\subseteq B(\rho,r(\rho,x)+2)$ is $2$. This implies that the closed ball $\bar{B}(\rho,r(\rho,x)+2)$ cannot be compact. The latter, however, contradicts the local compactness of $(T,r)$.
(ii) Let $n\in{\mathbb N}$ be arbitrary. Assume that $B(\rho,n)$ contains an infinite sequence of mutually distinct branch points $\{y_1,y_2,...\}$.
Since the closed ball $\bar{B}(\rho,n)$ is compact, we can find a subsequence $(n_k)_{k\in{\mathbb N}}$ and a limit point $y\in\bar{B}(\rho,n)$ such that $y_{n_k}\to y$, as $k\to\infty$. Fix $\varepsilon\in(0,\frac{n}{2})$. Then there is $K=K(\varepsilon)$ such that $y_{n_k}\in B(y,\varepsilon)$ for all $k\ge K$. Moreover, we can pick for any $k\ge K$ a point $z_{n_k}$ such that $y_{n_k}\in[\rho,z_{n_k}]$, $r(y_{n_k},z_{n_k})=\varepsilon$ and $r(z_{n_k},z_{n_l})\ge 2\varepsilon$ for all $l\not =k\ge K$. This, however, again contradicts the fact that closed balls in $(T,r)$ are compact. Since $n$ was chosen arbitrarily, this implies the claim.
Combining the two facts, we can bound $\lambda^{(T,r)}(B(\rho,n))$ from above by the number of branch points in $B(\rho,n)$ times their maximal degree times $n$. This finishes the proof.
\end{proof}
It follows immediately that $\lambda^{(T,r)}$-Brownian motion $B:=(B_t)_{t\ge 0}$ is well-defined on locally compact ${\mathbb R}$-trees $(T,r)$
which are spanned by their ends at infinity. Let $(V,\{r_{\{x,y\}};\,x,y\in V\})$ be the corresponding weighted discrete tree.
\begin{lemma}[Embedded Markov chain]
Let $(T,r)$ be a locally compact ${\mathbb R}$-tree which is spanned by its ends at infinity and $B:=(B_t)_{t\ge 0}$ the
$\lambda^{(T,r)}$-Brownian motion on $(T,r)$. We introduce $\tau_0:=\inf\{t\ge 0:\,B_t\in V\}$, and put
$Y_0:=B_{\tau_0}$. Define then recursively for all $n\in{\mathbb N}$,\label{L:10}
\begin{equation}\label{e:f4}
\tau_n
:=
\inf\big\{t>\tau_{n-1}\,|\,B_t\in V\setminus\{Y_{n-1}\}\big\},
\end{equation}
and put
\begin{equation}\label{e:f4Y}
Y_n
:=
B_{\tau_n}.
\end{equation}
Then the stochastic process $Y=(Y_n)_{n\in\mathbb{N}_0}$ is a Markov chain on the
weighted discrete tree $(V,\{r_{\{x,y\}};\,x,y\in V\})$.
\end{lemma}
One can perhaps use the ``Trace Theorem'', Theorem 6.2.1 in \cite{FukushimaOshimaTakeda1994}, to prove the above lemma but we present a direct proof instead. As a preparation, we state the following lemma.
\begin{lemma}[Hitting times]
Fix a locally compact ${\mathbb R}$-tree $(T,r)$ {spanned by its ends at infinity} and a Radon measure $\nu$ on $(T,{\mathcal B}(T))$.
Let $B=((B_t)_{t\ge 0},({\mathbf P}^x)_{x\in T})$ be the continuous $\nu$-symmetric strong Markov process \label{L:hitting}
on $(T,r)$
whose Dirichlet form is $({\mathcal E}, {\mathcal D}({\mathcal E}))$. Consider a branch point $x\in T$ and the finite family $\{x_1,...,x_n\}$ in $T$, for some $n\in{\mathbb N}$, of all branch points adjacent to $x$, i.e., $r(x_i,x_j)=r(x_i,x)+r(x,x_j)$, for all $1\le i<j\le n$, and for all $i=1,...,n$ the open arc $]x_i,x[$ does not contain further branch points.
Then the following holds:
\begin{itemize}
\item[(i)] $\mathbf{P}^x\{\wedge_{i=1}^n\tau_{x_i}<\infty\}=1$.
\item[(ii)]
For all $i=1,...,n$,
\begin{equation}\label{Xhit1}
\mathbf P^x\big\{\tau=\tau_{x_i}\big\}
=
\frac{(r(x_i,x))^{-1}}{\sum_{j=1}^n(r(x_j,x))^{-1}},
\end{equation}
where $\tau:=\wedge_{j=1}^n\tau_{x_j}$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:hitting}]
Let $(T,r)$, $\nu$, $n\in\mathbb{N}$, and $x_1,...,x_n$ be as by assumption.
(i) Let $D$ be the compact sub-tree formed by $x$ along with $x_{1},..., x_{n}$, and $\tau_D$ denote the exit time of $B$ from $D$, i.e., $\tau_D:=\wedge_{i=1}^{n}\tau_{x_{i}}$. Reasoning as in the proof of Proposition~\ref{P:prop}, it follows that
\begin{equation}
\mathbf{E}^{x}\big[\int_{0}^{\tau_{D}}\mathrm{d}s\, f(B_{s})\big] = \int_{D}\nu(\mathrm{d}y)\, g^{\ast}_{D}(x,y) f(y)
\end{equation}
whenever $f \in L^{1}(\nu)$, where $g^{\ast}_{D}(x,\cdot)$ is the Green kernel as defined in Definition~\ref{Def:03}. As $D$ is a non-empty compact subset of $T$, $g^{\ast}_{D} (x,\boldsymbol{\cdot})$ is a bounded function on $D$. The result follows if we choose specifically $f:=\mathbf{1}$.
(ii)
By Lemma~\ref{L:061}, we can choose for all $i=1,...,n$ a finite set $V_i\subset T$ such that for all $v\in V_i$, $x_i\in[v,x]$ and $]v,x_i[$
does not contain any branch points.
Define then for all $i=1,...,n$ a function $h_{i}:\,T\to[0,1]$
by the following requirements: $h_i(x_i)=1$, $h_i(x):=\frac{(r(x_i,x))^{-1}}{\sum_{j=1}^n(r(x_j,x))^{-1}}$,
$h_i$ is supported on the subtree spanned by $V_i\cup \{x_1,...,x_n\}\setminus\{x_i\}$, and is linear on the arcs $[x,x_j]$, for all $j=1,...,n$, and $[v,x_i]$ for all $v\in V_i$.
Obviously, $h_i\in {\mathcal L}_{V_i\cup\{x_1,...,x_n\}\setminus\{x_i\},\{x_i\}}$.
Moreover, if we choose $x$ as the root,
\begin{equation}
\label{e:005}
\begin{aligned}
&\nabla h_i
\\
&:=\sum_{v\in V_i}\tfrac{h_i(v)-h_i(x_i)}{r(x_i,v)}\mathbf{1}_{[v,x_i]}+\tfrac{h_i(x_i)-h_i(x)}{r(x_i,x)}\mathbf{1}_{[x,x_i]}+\sum_{j=1,j\not =i}^n\tfrac{h_i(x_j)-h_i(x)}{r(x_j,x)}\mathbf{1}_{[x,x_j]}
\\
&=
-\sum_{v\in V_i}r^{-1}(x_i,v)\mathbf{1}_{[v,x_i]}+\frac{\sum_{j=1,j\not =i}^n(r(x_j,x))^{-1}}{r(x,x_i)\cdot \sum_{j=1}^n(r(x_j,x))^{-1}}\mathbf{1}_{[x,x_i]}
\\
&\;-\sum_{j=1,j\not =i}^n\tfrac{(r(x_i,x))^{-1}}{r(x_j,x)\cdot \sum_{k=1}^n(r(x_k,x))^{-1}}\mathbf{1}_{[x,x_j]}.
\end{aligned}
\end{equation}
Hence, for
all $g\in{\mathcal D}_{V_i\cup\{x_1,...,x_n\}}({\mathcal E})$,
\begin{equation}
\label{e:hn}
\begin{aligned}
{\mathcal E}\big(h_i,g\big)
&=
\tfrac{1}{2}\sum_{v\in V_i}r^{-1}(x_i,v)\big(g(x_i)-g(v)\big)\\
&\;\ +\tfrac{1}{2}\frac{\sum_{j=1,j\not =i}^n(r(x_j,x))^{-1}}{r(x,x_i)\cdot \sum_{j=1}^n(r(x_j,x))^{-1}}\big(g(x_i)-g(x)\big)
\\
&\;\;+\tfrac{1}{2}\sum_{j=1,j\not =i}^n\tfrac{(r(x_i,x))^{-1}}{r(x_j,x)\cdot \sum_{k=1}^n(r(x_k,x))^{-1}}\big(g(x)-g(x_j)\big)
\\
&=
0.
\end{aligned}
\end{equation}
By part (i) of Proposition~\ref{L:05}, this identifies $h_i$ as the unique minimizer of (\ref{cap.1}) with
$\alpha=0$, $A:=V_i\cup\{x_1,...,x_n\}\setminus\{x_i\}$ and $B:=\{x_i\}$. Hence we can conclude similarly as in the proof of Proposition~\ref{P:prop} that for all $i\in\{1,...,n\}$, the process $Y^i_t:=h_i(B_t)$ is a bounded martingale. Thus the optional sampling theorem applied with
$\tau:=\tau_{x_1}\wedge ...\wedge\tau_{x_n}<\infty$, $\mathbf{P}^x$-almost surely, yields
\begin{equation}
\label{equation}
\frac{(r(x_i,x))^{-1}}{\sum_{j=1}^n(r(x_j,x))^{-1}}=\mathbf{E}^x\big[Y^{i}_0\big]=\mathbf{E}^x\big[Y^{i}_\tau\big]
=
\mathbf{P}^x\big\{\tau=\tau_{x_i}\big\},
\end{equation}
for all $i=1,...,n$ and the claim follows.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{L:10}] Without loss of generality, we may assume that $B_0=x\in V$ is a branch point.
Let $x_1,...,x_k\in V$ be the collection of all vertices
incident to $x$. It suffices to prove for $\tau:=\tau_{x_1}\wedge ...\wedge\tau_{x_k}$
and all $i\leq k$,
\begin{equation}\label{e:09.1}
\mathbf P^x\{\tau_{x_i}=\tau\}=
\big(r_{\{x,x_i\}}\pi(x)\big)^{-1},
\end{equation}
where $\pi(x):=\sum_{x'\sim x}\tfrac{1}{r(x,x')}$,
which is the claim of Lemma~\ref{L:hitting}.
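Indeed, the right-hand side of (\ref{e:09.1}) coincides with the hitting probability in (\ref{Xhit1}): with $\pi(x)=\sum_{x'\sim x}\tfrac{1}{r(x,x')}$ and the $k$ neighbors $x_1,...,x_k$ of $x$,
\[
\big(r_{\{x,x_i\}}\pi(x)\big)^{-1}
=\Big(r(x,x_i)\sum_{j=1}^k\tfrac{1}{r(x,x_j)}\Big)^{-1}
=\frac{(r(x_i,x))^{-1}}{\sum_{j=1}^k(r(x_j,x))^{-1}}.
\]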
\end{proof}
We conclude this section by giving the proof of Theorem~{\mathbb R}ef{nashwillconverse}.
\begin{proof}[Proof of Theorem \ref{nashwillconverse}] By Remark~\ref{Rem:02} we can
construct a locally compact ${\mathbb R}$-tree which is spanned by its ends at infinity and
with branch points in $V$ such that
on $V$ its metric coincides with $r$.
The
assertion now follows from the previous lemma in combination with
Theorem~{\mathbb R}ef{T:trareha}.
\end{proof}
\smallskip
\section{Examples and diffusions with more general scale function}
\label{s:BMdrift}
As suggested by Proposition~\ref{P:prop}, $\nu$-Brownian motion can be thought of as a diffusion on natural scale with speed measure $\nu$. We begin by listing a couple of examples, which can be found in the literature:
\begin{example}[Time changed Brownian motion on (subsets of) ${\mathbb R}$]\rm
Let $-\infty\le a<b\le\infty$, and let the ${\mathbb R}$-tree $(T,r)$ be $(a,b)$, $[a,b)$, $(a,b]$ or $[a,b]$ equipped with the Euclidean distance.
Consider the solution of the stochastic differential equation
\begin{equation}
\label{e:BM}
\mathrm{d}X_t=\sqrt{a(X_t)}\mathrm{d}B_t,
\end{equation}
where $B:=(B_t)_{t\ge 0}$ is standard Brownian motion on the real line and $a:T\to{\mathbb R}_+$ a measurable function such that
\begin{equation}
\label{e:speed}
\nu(\mathrm{d}x):=\tfrac{1}{a(x)}\mathrm{d}x
\end{equation}
defines a Radon measure on $(T,{\mathcal B}(T))$. It is well-known that under (\ref{e:speed}), the equation (\ref{e:BM}) has a unique weak solution $X:=(X_t)_{t\ge 0}$
whose Dirichlet form is given by (\ref{con.2p}) with domain ${\mathcal D}({\mathcal E}):=L^2(\nu)\cap{\mathcal A}_{\mathbb R}$, where ${\mathcal A}_{\mathbb R}$ is the space of absolutely continuous functions that vanish at infinity.
\hfill$\qed$
\end{example}
A less standard example is the Brownian motion on the CRT.
\begin{example}[$\nu$-Brownian motion on the CRT]\rm
Let $(T,r)$ be the CRT coded as an ${\mathbb R}$-tree. That is, let $B^{\mathrm{exc}}$ denote a standard Brownian excursion on $[0,1]$. Define an equivalence relation $\sim$ on $[0,1]$
by letting \label{Exp:02}
\begin{equation}
\label{e:CRT1}
u\sim v\hspace{1cm}\mbox{ iff }\hspace{1cm}B^{\mathrm{exc}}_{u}=B^{\mathrm{exc}}_v=\inf_{u'\in[u\wedge v,u\vee v]}B^{\mathrm{exc}}_{u'}.
\end{equation}
Consider the following pseudo-metric on the quotient space $T:=[0,1]\big|_\sim$:
\begin{equation}
\label{e:CRT2}
r(u,v):=2\cdot B^{\mathrm{exc}}_{u}+2\cdot B^{\mathrm{exc}}_v-4\cdot\inf_{u'\in[u\wedge v,u\vee v]}B^{\mathrm{exc}}_{u'}.
\end{equation}
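Note that $r$ vanishes precisely on the equivalence classes of (\ref{e:CRT1}): abbreviating $\underline{B}_{u,v}:=\inf_{u'\in[u\wedge v,u\vee v]}B^{\mathrm{exc}}_{u'}$, one has
\[
r(u,v)=2\big(B^{\mathrm{exc}}_{u}-\underline{B}_{u,v}\big)+2\big(B^{\mathrm{exc}}_{v}-\underline{B}_{u,v}\big)\ge 0,
\]
with equality if and only if $u\sim v$, so that $r$ is indeed a metric on the quotient space $T$.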
By Lemma~3.1 in \cite{EvaPitWin2006} the CRT is compact, almost surely, and thus $\nu$-Brownian motion exists if $\nu$ is a finite measure on $(T,{\mathcal B}(T))$
with $\mathrm{supp}(\nu)=T$. The following two choices for $\nu$ can be found in the literature.
\begin{itemize}
\item In \cite{Kre95} first an enumerated countable dense subset $\{e_1,e_2,...\}$ of the set $T\setminus T^o$ of boundary points is fixed, and then $\nu$ is chosen to be $\nu:=\sum_{i=1}^\infty 2^{-i}\lambda^{[\rho,e_i]}$.
\item In \cite{Cro07,Cro08} $\nu$ is chosen to be the uniform distribution on $(T,r)$, defined as the push forward of the Lebesgue measure on $[0,1]$ under the map which sends $u\in [0,1]$ to its equivalence class in $T=[0,1]\big|_\sim$.
\end{itemize}\hfill$\qed$
\end{example}
In the remainder of this section we consider diffusions that are not on natural scale. That is, we look for conditions on a measure $\mu$ on $(T,{\mathcal B}(T))$ such that
the form
\begin{equation}
\label{con.2pmu}
\begin{aligned}
{\mathcal E}(f,g)
&:=
\frac{1}{2}\int\mu(\mathrm{d}z)\,\nabla f(z)\,\nabla g(z)
\end{aligned}
\end{equation}
for all $f,g\in{\mathcal D}({\mathcal E})$, with the same domain ${\mathcal D}({\mathcal E})$ as before (compare (\ref{domainp})),
defines again a regular Dirichlet form. If this is the case, we refer to the corresponding diffusion as $(\mu,\nu)$-Brownian motion.
\begin{example}[Diffusion on ${\mathbb R}$]\rm Let $X:=(X_t)_{t\ge 0}$ be the diffusion on ${\mathbb R}$ with differentiable scale function $s:{\mathbb R}\to{\mathbb R}_{+}$ and speed measure $\nu:{\mathcal B}({\mathbb R})\to{\mathbb R}_+$. Then $X$ is the
continuous strong Markov process associated with the Dirichlet form
\begin{equation}
\label{e:007}
{\mathcal E}(f,g)
:=
\tfrac{1}{2}\int\tfrac{\mathrm{d}z}{s'(z)}\cdot f'(z)\cdot g'(z)
\end{equation}
for all $f,g\in L^2(\nu)\cap{\mathcal A}{_{\mathbb R}}$ such that ${\mathcal E}(f,g)<\infty$, with ${\mathcal A}{_{\mathbb R}}$ denoting the set of all absolutely continuous functions which vanish at infinity.
It is well-known for regular diffusions that one can do a ``scale change'' resulting in a diffusion on the natural scale.
For that purpose, let for all $x,y\in{\mathbb R}$,
\begin{equation}
\label{e:rc}
r_{s}(x,y)
:=
\int_{[x\wedge y,x\vee y]}\mathrm{d}z\,s'(z).
\end{equation}
It is easy to see that $({\mathbb R},r_s)$ is isometric to a connected subset of ${\mathbb R}$ {and therefore a locally compact} ${\mathbb R}$-tree which
has length measure $\mathrm{d}\lambda^{({\mathbb R},r_s)}=s'(x)\,\mathrm{d}x$.
We find that
\begin{equation}
\label{e:006}
\begin{aligned}
{\mathcal E}(f,g)
&=
\frac{1}{2}\int\tfrac{\mathrm{d}z}{s'(z)}\,(s'(z)\nabla_{r_s} f(z))\cdot (s'(z)\nabla_{r_s} g(z))
\\
&=
\frac{1}{2}\int\mathrm{d}\lambda^{({\mathbb R},r_s)}\,\nabla_{r_s} f\cdot \nabla_{r_s} g,
\end{aligned}
\end{equation}
where $f,g\in L^2(\nu)\cap{\mathcal A}{_{\mathbb R}}$ are such that ${\mathcal E}(f,g)<\infty$.
This implies that the $\nu$-Brownian motion, $B^{s}$, on $({\mathbb R},r_{s})$ has the same distribution as $X$ on $({\mathbb R},|\boldsymbol{\cdot}|)$.
Moreover, Theorems~\ref{T:04} and~\ref{T:trareha} imply that $X$ is recurrent iff $\int_0^\infty\mathrm{d}y\,s'(y)=\infty$ and $\int_{-\infty}^0\mathrm{d}y\,s'(y)=\infty$.
\label{Exp:03}
Specifically, if $X^c_{t} = B_{t} + c\cdot t$ is the (standard) Brownian motion on ${\mathbb R}$ with drift $c\in{\mathbb R}$, then its
scale function is $s(x):=\int^x_{\cdot}e^{-2cy}\mathrm{d}y$ and its speed measure is $\nu(\mathrm{d}x):=e^{2cx}\mathrm{d}x$. Thus with the choice
\begin{equation}
r_{c}(x,y)
:=
\tfrac{1}{2c}\,e^{-2c\, x\wedge y}\big(1-e^{-2c|y-x|}\big),
\end{equation}
for all $x,y\in{\mathbb R}$,
$X^c$ on $({\mathbb R},|\boldsymbol{\cdot}|)$ has the same distribution as the $e^{2cx}\mathrm{d}x$-Brownian motion on $({\mathbb R},r_c)$.
Since
$({\mathbb R},r_c)$ is isometric
to $(\tfrac{1}{2c},\infty)$ if $c<0$ and $(-\infty,\tfrac{1}{2c})$ if $c>0$, $X^c$ is recurrent iff $c=0$.
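The expression for $r_c$ follows from (\ref{e:rc}) with $s'(z)=e^{-2cz}$ by direct integration: for $x<y$,
\[
r_c(x,y)=\int_x^y e^{-2cz}\,\mathrm{d}z
=\tfrac{1}{2c}\big(e^{-2cx}-e^{-2cy}\big)
=\tfrac{1}{2c}\,e^{-2cx}\big(1-e^{-2c(y-x)}\big).
\]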
\hfill$\qed$
\end{example}
We want to formalize the notion of a ``scale change'' discussed in Example~\ref{Exp:03} on general separable ${\mathbb R}$-trees $(T,r)$, and consider a method by which
we can construct diffusions on $(T,r)$ which are not necessarily on natural scale.
Assume we are given a separable ${\mathbb R}$-tree $(T,r)$, a Radon measure $\nu$ on $(T,{\mathcal B}(T))$ and a further measure $\mu$ on $(T,{\mathcal B}(T))$ which is absolutely continuous {with density $e^{-2\phi}$} with respect to the length measure $\lambda^{(T,r)}$. Define the form $({\mathcal E},{\mathcal D({\mathcal E})})$ with ${\mathcal E}$ as in (\ref{con.2pmu}) and ${\mathcal D}({\mathcal E})$ as in (\ref{domainp}).
In the following we refer to a {\it potential} as a function $\phi:T\to{\mathbb R}$ such that for all $a,b\in T$,
\begin{equation}\label{e:rphi}
r_\phi(a,b)
:=
\int_{[a,b]}\lambda^{(T,r)}(\mathrm{d}x)\,\mathrm e^{-2\phi(x)}<\infty.
\end{equation}
An implicit assumption in this definition is that the function $\phi$ has enough regularity for the integral above to make sense.
It is easy to check that $r_\phi$ is a metric on $T$
which generates the same topology
as $r$ and that the metric space $(T,r_\phi)$ is
also an ${\mathbb R}$-tree.
If the potential $\phi$ is such that
the ${\mathbb R}$-tree $(T,r_\phi)$ is locally compact,
then $({\mathcal E},{\mathcal D}({\mathcal E}))$ is a regular Dirichlet form, and the corresponding
$(\mu,\nu)$-Brownian motion on $(T,r)$ agrees in law with
the $\nu$-Brownian motion on $(T,r_\phi)$.
We close this section with the example of a diffusion which is extensively studied in \cite{Eva00}.
\begin{example}[Evans's Brownian motion on THE ${\mathbb R}$-tree]\rm
In \cite{Eva00} Evans constructs a continuous path Markov process on the ``richest'' ${\mathbb R}$-tree, which branches ``everywhere'' in ``all possible'' directions. More formally, consider the set $T$ of all bounded subsets of ${\mathbb R}$ that contain their supremum. Denote for all $A,B\in T$ by
\label{Exp:06}
\begin{equation}
\begin{aligned}
&\tau(A,B)
\\
&:=
\sup\big\{t\le \sup(A)\wedge\sup(B):\,(A\cap(-\infty,t])\cup\{t\}=(B\cap(-\infty,t])\cup\{t\}\big\}
\end{aligned}
\end{equation}
the ``generation'' at which the lineages of $A$ and $B$ diverge, and put
\begin{equation}
r(A,B)
:=
\sup(A)+\sup(B)-2\cdot\tau(A,B).
\end{equation}
Then $(T,r)$ is an ${\mathbb R}$-tree which is spanned by its ends at ``infinity''. Note that $(T,r)$ is not locally compact.
Suppose that $\mu$ is a $\sigma$-finite Borel measure on $E_\infty$ such that $0<\mu(B)<\infty$ for every ball $B$ in the metric $\bar{r}(\xi,\eta):=2^{-\sup(\xi\wedge\eta)}$. In particular, the support of $\mu$ is all of $E_\infty$.
Distinguish an element $\rho\in E_\infty$. The ``root'' $\rho$ defines a partial order on $(T,r)$ in a canonical way by saying that $x\le y$ if $x\in[\rho,y]$.
For each $x\in T$, denote by $S^x:=\{\xi\in E_\infty:\,x\in[\rho,\xi]\}$, and consider the measure $\nu(\mathrm{d}x):=\mu(S^x)\lambda^{(T,r)}(\mathrm{d}x)$. It was shown in Section~5 in \cite{Eva00} that the measure $\nu$ is Radon. Moreover, a continuous path Markov process was constructed which is a $(\nu,\nu)$-Brownian motion on $(T,r)$ in our notation.
Hence if $\int^b_a\mathrm{d}\lambda^{(T,r)}(\mu(S^x))^{-1}<\infty$, for all $a,b\in T$, and if $(T,r_{\mathrm{natural}})$ is locally compact, where
\begin{equation}
\label{e:008}
r_{\mathrm{natural}}(x,y):=\int_{[x\wedge y,x\vee y]}\tfrac{\lambda^{(T,r)}(\mathrm{d}z)}{\mu(S^z)},\hspace{1cm}x,y\in T,
\end{equation}
then its law is the same as that of the $\nu$-Brownian motion on $(T,r_{\mathrm{natural}})$.
\hfill$\qed$
\end{example}
\bibliographystyle{alpha}
\end{document}
\begin{document}
\title[Synchronization of 2D Cellular Neural Networks]{Exponential Synchronization of 2D Cellular Neural Networks with Boundary Feedback}
\author[L. Skrzypek]{Leslaw Skrzypek}
\address{L. Skrzypek, Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA}
\email{[email protected]}
\thanks{}
\author[C. Phan]{Chi Phan}
\address{C. Phan, Department of Mathematics and Statistics, Sam Houston State University, Huntsville, TX 77340, USA}
\email{[email protected]}
\thanks{}
\author[Y. You]{Yuncheng You}
\address{Y. You (Emeritus), Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA}
\email{[email protected]}
\thanks{}
\subjclass[2010]{34A33, 34D06, 37B15, 37L60, 92B20}
\date{}
\keywords{Cellular neural networks, lattice FitzHugh-Nagumo equations, exponential synchronization, boundary feedback, dissipative dynamics}
\begin{abstract}
In this work we propose a new model of 2D cellular neural networks (CNN) in terms of the lattice FitzHugh-Nagumo equations with boundary feedback and prove a threshold condition for the exponential synchronization of the entire neural network through the \emph{a priori} uniform estimates of solutions and the analysis of dissipative dynamics. The threshold to be satisfied by the gap signals between pairwise boundary cells of the network is expressed in terms of the structural parameters and is adjustable. The new result and method of this paper can also be generalized to 3D and higher dimensional FitzHugh-Nagumo type or Hindmarsh-Rose type cellular neural networks.
\end{abstract}
\maketitle
\section{\textbf{Introduction}}
For the collective dynamic phenomena of many network systems that attract scientific research interests, two of the most ubiquitous concepts for the relevant mathematical models presented by various differential equations are synchronization and pattern formation \cite{AA, A, Chow2, Chua2, I, PC, PRK}. The mechanisms of synchronization and control depend on the spatiotemporal structures of the designed models.
In this work we shall focus on the two-dimensional cellular neural networks (briefly CNN) modeled by the 2D lattice FitzHugh-Nagumo equations with the boundary feedback and prove the exponential synchronization when the adjustable and explicit threshold condition is satisfied by the pairwise boundary gap signals.
As is well known, CNN was invented by Chua and Yang \cite{Chua1, Chua2} in 1988. CNN physically consists of network-like interacting analog or digital signal processors such as VLSI. Mathematically, CNN is defined \cite{Chua3, CTR} to be a 2D, 3D or higher dimensional array of identical template dynamical systems (called cells), which satisfies two properties: the interactions (called the synaptic laws) are local within a neighborhood of finite radius $r > 0$, and the state variables are all continuous-time signals. In a large sense the dynamics of CNN can be studied as a lattice dynamical system generated by lattice differential equations in time \cite{Chow1, Chow2, Chua3, CR, S2}.
The diversified cellular neural networks as well as the Hopfield neural networks have found effective applications in many areas such as computational image processing, medical visualization, data driven optimization, pattern recognition, associative memory, and secure communications, cf. \cite{Chow2, Chua2, CR, GJH, L, PC, SG, S1, S2, Wu}.
In the expanding front of deep learning and artificial intelligence in general, the theory of cellular neural networks, convolutional neural networks, and variants of complex neural networks is closely linked to discrete nonlinear partial differential equations and delay differential equations \cite{Chow1, CR, GJH, S2}.
This work aims to prove the exponential synchronization of the 2D FitzHugh-Nagumo cellular neural networks with the new feature of the boundary feedback, which is a substantial generalization of the feedback synchronization for one-dimensional FitzHugh-Nagumo cellular neural networks \cite{LY2} shown by the authors. In this CNN model, the cell template with its synaptic law is described by the discrete version of the 2D partly diffusive FitzHugh-Nagumo equations with boundary feedback control.
We consider a cellular neural network of the 2D grid-structure, which consists of the cells $\{N(i, k)\}$ located at the grid points $\{(ih_x, kh_y): i = 1, 2, \cdots , m \;\text{and} \; k = 1, 2, \cdots, n\}$ for given $h_x, h_y > 0$ along the $x$-row direction and the $y$-column direction, respectively. We shall study the synchronization problem of the following 2D lattice FitzHugh-Nagumo equations:
\begin{equation} \label{CN}
\begin{split}
\frac{dx_{i k}}{dt} &= a [(x_{i-1, k} - 2x_{i k} + x_{i+1,k}) + (x_{i, k-1} - 2x_{i k} + x_{i, k+1})] \\
&\quad + f(x_{i k}) - b\, y_{i k} + p u_{i k}, \\
\frac{dy_{i k}}{dt} &= c\, x_{i k} - \delta \, y_{i k},
\end{split}
\end{equation}
where $1 \leq i \leq m, \; 1 \leq k \leq n, \; t > 0$, the integers $m, n \geq 4$, and the two-dimensional discrete Laplacian operator \cite{Chow1, Chua1, CR, S2}
\begin{align*}
D_{i k} (x) &= a [(x_{i-1, k} - 2x_{i k} + x_{i+1,k}) + (x_{i, k-1} - 2x_{i k} + x_{i, k+1})] \\
&= a (x_{i-1,k} + x_{i+1, k} + x_{i, k-1} + x_{i, k+1} - 4 x_{i k})
shows the synaptic law of cell coupling. The nonlinear function $f(\cdot )$ will be specified below. In this 2D model of CNN, we consider the periodic boundary condition:
\begin{equation} \label{pbc}
\begin{split}
x_{0, k} (t) &= x_{m, k} (t), \quad x_{m+1, k}(t) = x_{1, k} (t), \quad \text{for} \;\, 1 \leq k \leq n, \\
x_{i, 0} (t) &= x_{i, n} (t), \, \; \quad x_{i, n+1}(t) = x_{i, 1} (t), \;\, \quad \text{for} \;\, 1 \leq i \leq m,
\end{split}
\end{equation}
and the \emph{boundary feedback} control $\{u_{i\, k}: 1 \leq i \leq m, 1 \leq k \leq n\}$: For $t \geq 0$,
\begin{equation} \label{bfc1}
\begin{split}
&u_{1, k} (t) = u_{m+1, k} (t) = x_{m, k} (t) - x_{1, k}(t), \quad \text{for} \; 1 \leq k \leq n, \\
&u_{i, k} (t) = 0, \quad 2 \leq i \leq m - 1, \quad \text{for} \;\; 1 \leq k \leq n,\\
&u_{m, k} (t) = u_{0, k} (t) = x_{1, k} (t) - x_{m, k} (t), \quad \text{for} \;\; 1 \leq k \leq n,
\end{split}
\end{equation}
and
\begin{equation} \label{bfc2}
\begin{split}
&u_{i, 1} (t) = u_{i, n+1} (t) = x_{i, n} (t) - x_{i, 1}(t), \quad \text{for} \; 1 \leq i \leq m, \\
&u_{i, k} (t) = 0, \quad 2 \leq k \leq n - 1, \quad \text{for} \;\; 1 \leq i \leq m,\\
&u_{i, n} (t) = u_{i, 0} (t) = x_{i, 1} (t) - x_{i, n} (t), \quad \text{for} \;\; 1 \leq i \leq m.
\end{split}
\end{equation}
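Note that the feedback signals defined in \eqref{bfc1} and \eqref{bfc2} are pairwise antisymmetric between the opposite boundary cells: for $t \geq 0$,
$$
u_{1, k}(t) + u_{m, k}(t) = 0 \quad \text{for} \;\, 1 \leq k \leq n, \qquad u_{i, 1}(t) + u_{i, n}(t) = 0 \quad \text{for} \;\, 1 \leq i \leq m.
$$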
All the parameters $a, b, c, \delta, p$ can be any given positive constants, and $p > 0$ is the adjustable coefficient of the boundary feedback signals. Here $x_{m, k} (t) - x_{1, k}(t)$ measures the boundary gap signal between the two boundary node cells on the same row of the cellular neural network and $x_{i, n} (t) - x_{i, 1} (t)$ measures the boundary gap signal between the two boundary node cells on the same column of the cellular neural network. The initial conditions for the system \eqref{CN} are denoted by
\begin{equation} \label{inc}
x_{i k} (0) = x_{i k}^0 \in \mathbb{R} \quad \text{and} \quad y_{i k} (0) = y_{i k}^0 \in \mathbb{R}, \quad 1 \leq i \leq m, \;\; 1 \leq k \leq n.
\end{equation}
We make the following Assumption: The scalar function $f \in C^1 (\mathbb{R}, \mathbb{R})$ satisfies
\begin{equation} \label{Asp}
\begin{split}
&f(s) s \leq - \lambda s^4 + \beta, \quad s \in \mathbb{R}, \\
& f^{\,\prime} (s) \leq \gamma, \quad s \in \mathbb{R}, \\
\end{split}
\end{equation}
where $\lambda, \beta$ and $\gamma$ can be any given positive constants. Note that the nonlinear term in the original FitzHugh-Nagumo ordinary differential equations \cite{FH} is
$$
f(s) = s(s-\alpha)(1 - s)
$$
and the constant $0 < \alpha < 1$. It satisfies the Assumption \eqref{Asp}:
\begin{align*}
f(s)s &= - \alpha s^2 + (\alpha + 1)s^3- s^4 \leq -\alpha s^2 + \left(\frac{1}{2}s^4 + 2^3 (\alpha +1)^4\right) - s^4 \\
&\leq - \left(\alpha s^2 + \frac{1}{2}s^4 \right) + 8(\alpha + 1)^4 \leq - \frac{1}{2} s^4 + 8(\alpha + 1)^4, \\[3pt]
f^{\, \prime} (s) &= - \alpha + 2(\alpha +1)s - 3s^2 \leq - \alpha + (\alpha +1)^2 - 2s^2 \leq 1 + \alpha + \alpha^2.
\end{align*}
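Hence the Assumption \eqref{Asp} is satisfied by this cubic nonlinearity with the explicit constants
$$
\lambda = \frac{1}{2}, \qquad \beta = 8(\alpha + 1)^4, \qquad \gamma = 1 + \alpha + \alpha^2.
$$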
Synchronization and its control play a significant role for biological neural networks and for the artificial neural networks as well \cite{A, I, WC}. Fast and effective synchronization may lead to enhanced functionality and performance of complex neural networks.
For biological or artificial neural networks, synchronization topics have been studied with several mathematical models, including the FitzHugh-Nagumo neural networks, typically with the synaptic coupling by clamped gap junctions \cite{AA, A, IJ, Yong} and the mean field couplings \cite{QT, WLZ}. For chaotic and stochastic neural networks with various applications, the pinning control is usually exploited \cite{L, PC, SG, WZML, ZCW}. Exponential synchronization of neural networks with or without time delays has also been studied in \cite{BW, CLH, GZG, YHJ}.
The methods commonly used in the reported research on stability and synchronization of CNN and complex neural networks are mainly based on the analysis of eigenvalues for the coupled matrices, the linear matrix inequalities \cite{CR, FBA, GM, LPK, PC}, and the Lyapunov functionals \cite{CLH, I, S1}, with many references therein.
Recently the authors proved results on the exponential synchronization for the boundary coupled Hindmarsh-Rose neuron networks in \cite{PLY, PY}, the boundary coupled partly diffusive FitzHugh-Nagumo neural networks in \cite{LY1}, and the feedback synchronization of the one-dimensional FitzHugh-Nagumo CNN in \cite{LY2}.
The feature of this work is to present a new model of 2D FitzHugh-Nagumo CNN with the computationally favorable boundary feedback and to prove a sufficient condition for realization of its exponential synchronization. Moreover, this work is characterized by a new mathematical approach of dynamical \emph{a priori} estimates to show the existence of absorbing set for the solution semiflow of this CNN, which leads to the main result on the threshold condition for the exponential synchronization. The threshold is explicitly expressed in terms of the neural network parameters and can be adjusted by the strength coefficient $p$ of the boundary feedback in applications.
\section{\textbf{Absorbing Set and Dissipative Dynamics}}
Define the following Hilbert space:
$$
H = \ell^2 (\mathbb{Z}_{mn}, \mathbb{R}^{2mn}) = \{ (x, y) = ((x_{i k}, y_{i k}): 1 \leq i \leq m, 1 \leq k \leq n) \}
$$
where $\mathbb{Z}_{m n} = \{1 ,2, \cdots, m\} \times \{1, 2, \cdots , n\}$ and $m, n \geq 4$. The norm in $H$ is denoted and defined by $\|(x, y)\|^2 = \sum_{i=1}^m \sum_{k=1}^n (| x_{i k} |^2 + |y_{i k} |^2)$. The inner product of $H$ or $\mathbb{R}^d$ is denoted by $\inpt{\,\cdot , \cdot\,}$. The space of all continuous and bounded functions of time $t \geq 0$ valued in $H$ is denoted by $C([0, \infty), H)$, which is a Banach space with the sup-norm.
Since the right-hand side functions in \eqref{CN} are locally Lipschitz continuous, there exists a unique local solution in time of the initial value problem \eqref{CN}-\eqref{inc} under the Assumption \eqref{Asp}. We shall first prove the global existence in time of all the solutions in the space $H$. Then by the uniform estimates we show the dissipative dynamics of the solution semiflow in terms of the existence of an absorbing set.
\begin{theorem} \label{Tm}
Under the setting in Section \textup{1}, for any given initial state $((x_{i k}^0, y_{i k}^0): (i, k) \in \mathbb{Z}_{mn}) \in H$, there exists a unique solution
$$
((x_{i k} (t, x_{i k}^0), y_{i k} (t, y_{i k}^0)): (i, k) \in \mathbb{Z}_{mn}, \, t \geq 0 ) \in C([0, \infty), H)
$$
of the initial-boundary value problem \eqref{CN}-\eqref{inc} for this 2D FitzHugh-Nagumo cellular neural network.
\end{theorem}
\begin{proof}
Multiply the $x_{i k}$-equation in \eqref{CN} by $C_1 x_{i k} (t)$ for $(i,k) \in \mathbb{Z}_{mn}$, where the constant $C_1 > 0$ is to be chosen, then sum them up and use the Assumption \eqref{Asp} to get
\begin{equation} \label{u1}
\begin{split}
&\frac{C_1}{2} \frac{d}{dt} \sum_{i = 1}^m \sum_{k=1}^n |x_{i k} |^2 = C_1 \sum_{i=1}^m \sum_{k=1}^n \left[\, a(x_{i-1, \, k} - 2x_{i k} + x_{i+1, \, k}) x_{i k} \right. \\[4pt]
&\left. + a(x_{i, \, k-1} - 2x_{i k} + x_{i, \, k+1}) x_{i k} + f(x_{i k}) x_{i k} - b x_{i k} y_{i k} + p u_{i k} x_{i k} \, \right] \\[4pt]
\leq &\, C_1 \sum_{i=1}^m \sum_{k=1}^n \left[\, a(x_{i-1, \, k} - 2x_{i k} + x_{i+1, \, k}) x_{i k} + a(x_{i, \, k-1} - 2x_{i k} + x_{i, \, k+1}) x_{i k} \, \right] \\
+ &\, C_1 \sum_{i=1}^m \sum_{k=1}^n \left[- \lambda |x_{i k} |^4 + \beta + \frac{b}{2}\, |x_{i k} |^2 + \frac{b}{2}\, |y_{i k} |^2 \right] \\
- &\, C_1 \sum_{i=1}^m \sum_{k=1}^n p \left[(x_{1, k} - x_{m, k})^2 + (x_{i,1} - x_{i, n})^2 \right],
\end{split}
\end{equation}
for $t \in I_{max} = [0, T_{max})$, which is the maximal existence interval of the solution. By the discrete ``divergence'' formula and the boundary condition \eqref{pbc}, we have
\begin{equation} \label{key}
\begin{split}
&\sum_{i=1}^m \sum_{k=1}^n \left[\, a(x_{i-1, \, k} - 2x_{i k} + x_{i+1, \, k}) x_{i k} + a(x_{i, \, k-1} - 2x_{i k} + x_{i, \, k+1}) x_{i k} \right] \\
= &\, \sum_{i=1}^m \sum_{k=1}^n a [ (x_{i+1, k} - x_{i k})x_{i k} - (x_{i k} - x_{i-1, k})x_{i k} ] \\
&\,+ \sum_{i=1}^m \sum_{k=1}^n a [ (x_{i, \, k+1} - x_{i k})x_{i k} - (x_{i k} - x_{i, k-1})x_{i k} ] \\
= &\, \sum_{k=1}^n \left[ \sum_{i=1}^{m-1} a(x_{i+1,k} - x_{i k}) x_{i k}- \sum_{i=2}^m a(x_{i k} - x_{i-1, k})x_{i k} \right] \\
&\, + \sum_{k=1}^n \left[ a(x_{m+1, \,k} - x_{m\, k})x_{m\, k} - a(x_{1, \,k} - x_{0, \,k})x_{1,\, k} \right] \\
&\, + \sum_{i=1}^m \left[ \sum_{k=1}^{n-1} a(x_{i,\, k+1} - x_{i k}) x_{i k}- \sum_{k=2}^n a(x_{i k} - x_{i,\, k-1})x_{i k} \right] \\
&\, + \sum_{i=1}^m \left[ a(x_{i, \,n+1} - x_{i\, n})x_{i\, n} - a(x_{i, \,1} - x_{i, \,0})x_{i,\, 1} \right] \\
= &\, - \sum_{k=1}^n \sum_{i=1}^{m} a(x_{i k} - x_{i-1, \, k})^2 - \sum_{i=1}^m \sum_{k=1}^{n} a(x_{i k} - x_{i, \, k-1})^2\leq 0.
\end{split}
\end{equation}
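The summation-by-parts identity \eqref{key} can also be confirmed numerically on random periodic data. The sketch below (an illustration with arbitrarily chosen sizes and coupling strength) uses \texttt{np.roll} to implement the wrap-around indices of \eqref{pbc}.

```python
import numpy as np

# Numerical check of the discrete 'divergence' (summation-by-parts)
# identity on a random periodic grid: with x_{0,k} = x_{m,k},
# x_{m+1,k} = x_{1,k} (and likewise in k), the coupling term equals
# minus a sum of squared first differences, hence is non-positive.
rng = np.random.default_rng(0)
a, m, n = 0.7, 6, 5
x = rng.standard_normal((m, n))

# periodic shifts: np.roll implements the wrap-around of the indices
lap = (np.roll(x, 1, axis=0) - 2 * x + np.roll(x, -1, axis=0)
       + np.roll(x, 1, axis=1) - 2 * x + np.roll(x, -1, axis=1))
lhs = a * np.sum(lap * x)
rhs = -a * (np.sum((x - np.roll(x, 1, axis=0))**2)
            + np.sum((x - np.roll(x, 1, axis=1))**2))
assert abs(lhs - rhs) < 1e-10 and lhs <= 0.0
print("summation-by-parts identity holds:", lhs)
```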
Then \eqref{u1} with \eqref{key} yields the differential inequality
\begin{equation} \label{u2}
\begin{split}
&C_1 \frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n |x_{i k} |^2 + 2C_1 p\, \sum_{i=1}^m \sum_{k=1}^n \left[(x_{1, k} - x_{m, k})^2 + (x_{i, 1} - x_{i, n})^2 \right] \\
\leq &\,C_1 \sum_{i=1}^m \sum_{k=1}^n \left[- 2\lambda |x_{i k} (t)|^4 + 2\beta + b |x_{i k} (t)|^2 + b |y_{i k} (t)|^2 \right], \quad t \in I_{max}.
\end{split}
\end{equation}
Next multiply the $y_{i k}$-equation in \eqref{CN} by $y_{i k} (t)$ for $1 \leq i \leq m, 1 \leq k \leq n$ and then sum them up. By using Young's inequality, we obtain
\begin{equation} \label{w1}
\begin{split}
&\frac{1}{2} \frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n |y_{i k} (t) |^2 = \sum_{i=1}^m \sum_{k=1}^n ( cx_{i k} \, y_{i k} - \delta y_{i k}^2) \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \left[\left(\frac{c^2}{\delta} x_{i k}^2 + \frac{1}{4} \delta \,y_{i k}^2\right) - \delta \,y_{i k}^2\right] \\
= &\, \sum_{i=1}^m \sum_{k=1}^n \left[\frac{c^2}{\delta} \, |x_{i k} (t)|^2 - \frac{3}{4} \delta \,|y_{i k}(t)|^2\right], \quad \text{for} \;\, t \in I_{max}.
\end{split}
\end{equation}
Add the inequality \eqref{u2} to twice the inequality \eqref{w1}. We obtain
\begin{equation} \label{uw}
\begin{split}
&\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left[C_1 |x_{i k} (t)|^2 + | y_{i k} (t)|^2 \right] \\
&+ 2C_1 p\, \sum_{i=1}^m \sum_{k=1}^n \left[(x_{1, k} - x_{m, k})^2 + (x_{i, 1} - x_{i, n})^2 \right] \\
\leq &\sum_{i=1}^m \sum_{k=1}^n \left[ \left(C_1 b + \frac{2c^2}{\delta}\right) |x_{i k} (t)|^2 - 2C_1 \lambda |x_{i k} (t)|^4 + 2C_1 \beta \right] \\
& + \sum_{i=1}^m \sum_{k=1}^n \left[\left(C_1 b - \frac{3 \delta}{2}\right) |y_{i k} (t)|^2 \right], \;\; t \in I_{max} = [0, T_{max}).
\end{split}
\end{equation}
We now choose the constant
\begin{equation} \label{C1}
C_1 = \frac{\delta}{2b} \quad \text{so that} \quad C_1 b - \frac{3\delta}{2} = - \delta.
\end{equation}
Then from \eqref{uw} with the fact $2C_1 p \sum_{i=1}^m \sum_{k=1}^n [ \cdots ] \geq 0$ on the left-hand side and from the choice of the constant $C_1$ in \eqref{C1}, we have
\begin{gather*}
\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(C_1 |x_{i k} |^2 + | y_{i k} |^2 \right) \\
\leq \sum_{i=1}^m \sum_{k=1}^n \left[ \left(C_1 b + \frac{2c^2}{\delta}\right)|x_{i k} (t)|^2- 2C_1 \lambda |x_{i k} (t)|^4 + 2C_1 \beta - \delta |y_{i k} (t)|^2\right]
\end{gather*}
and consequently,
\begin{equation} \label{Cuw}
\begin{split}
&\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(C_1 |x_{i k} (t) |^2 + | y_{i k} (t)|^2 \right) + \delta \sum_{i=1}^m \sum_{k=1}^n \left( C_1 |x_{i k} (t) |^2 + | y_{i k} (t)|^2 \right) \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \left[ \left(C_1 b + C_1 \delta + \frac{2c^2}{\delta}\right)|x_{i k} (t)|^2- 2C_1 \lambda |x_{i k} (t)|^4 + 2C_1 \beta \right] \\
= &\, \sum_{i=1}^m \sum_{k=1}^n \left[ \left(\frac{\delta}{2} + \frac{\delta^2}{2b} + \frac{2c^2}{\delta}\right) |x_{i k} (t)|^2- \frac{\delta \lambda}{b} |x_{i k} (t)|^4 + \frac{\delta \beta}{b} \right], \quad t \in I_{max}.
\end{split}
\end{equation}
Completing the square shows that
\begin{align*}
&\left(\frac{\delta}{2} + \frac{\delta^2}{2b} + \frac{2c^2}{\delta} \right) |x_{i k} (t)|^2- \frac{\delta \lambda}{b} |x_{i k} (t)|^4 \\
= &\, - \frac{\delta \lambda}{b} \left[ | x_{i k} (t)|^2 - \frac{b}{2\delta \lambda} \left(\frac{\delta^2}{2b} + \frac{\delta}{2} + \frac{2 c^2}{\delta}\right) \right]^2 + C_2
\end{align*}
where
\begin{equation} \label{C2}
C_2 = \frac{b}{4\delta \lambda} \left(\frac{\delta^2}{2b} + \frac{\delta}{2} + \frac{2 c^2}{\delta}\right)^2.
\end{equation}
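The completing-the-square step and the value of $C_2$ can be verified symbolically. The following sketch (using \texttt{sympy}, with symbol names of our choosing) checks that the perfect-square form reproduces the quadratic-quartic expression term by term.

```python
import sympy as sp

# Symbolic check of the completing-the-square step: the expression
# A*s^2 - (delta*lam/b)*s^4 equals the stated perfect square plus C2,
# where A = delta/2 + delta^2/(2b) + 2c^2/delta.
s, b, c, delta, lam = sp.symbols('s b c delta lamda', positive=True)
A = delta/2 + delta**2/(2*b) + 2*c**2/delta      # coefficient of |x|^2
expr = A * s**2 - (delta*lam/b) * s**4
C2 = b/(4*delta*lam) * A**2
square_form = -(delta*lam/b) * (s**2 - b*A/(2*delta*lam))**2 + C2
assert sp.expand(expr - square_form) == 0
print("completing-the-square identity verified; C2 =", sp.simplify(C2))
```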
Therefore, \eqref{Cuw} yields
\begin{equation} \label{Suw}
\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n (C_1 |x_{i k} |^2 + | y_{i k} |^2) + \delta \sum_{i=1}^m \sum_{k=1}^n ( C_1 |x_{i k} |^2 + | y_{i k} |^2) \leq mn \left[C_2 + \frac{\delta \beta}{b}\right], \;\, t \in I_{max}.
\end{equation}
Apply the Gronwall inequality to \eqref{Suw}. Then we have the following bounded estimate for all the solutions of the system \eqref{CN}-\eqref{inc},
\begin{equation} \label{dse}
\begin{split}
& \sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t, x_{i k}^0) |^2 + |y_{i k} (t, y_{i k}^0)|^2 \right) \\
\leq &\, \frac{1}{\min \{C_1, 1\}} \left[e^{- \delta \, t} \sum_{i=1}^m \sum_{k=1}^n (C_1 |x_{i k}^0 |^2 + |y_{i k}^0 |^2) + \frac{mn}{\delta} \left(C_2 + \frac{\delta \beta}{b}\right)\right], \;\; t \in [0, \infty). \\
\end{split}
\end{equation}
Here we can assert that $I_{max} = [0, \infty)$ for all the solutions because the bounded estimate \eqref{dse} shows that the solutions will never blow up at any finite time. Thus it is proved that for any given initial state in $H$ there exists a unique global solution $((x_{i k} (t, x_{i k}^0), y_{i k} (t, y_{i k}^0)): (i,k) \in \mathbb{Z}_{mn} ), \, t \in [0, \infty)$, in $H$.
\end{proof}
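The Gronwall step leading from \eqref{Suw} to \eqref{dse} can be illustrated on the extremal scalar case: any $E$ with $E' + \delta E \leq K$ obeys $E(t) \leq e^{-\delta t} E(0) + K/\delta$. The sketch below (parameter values are arbitrary choices for illustration) integrates the borderline equation by forward Euler and checks the bound along the trajectory.

```python
import numpy as np

# Numerical illustration of the Gronwall step: integrate the extremal
# case E' = -delta*E + K by forward Euler and verify at each step that
# E(t) <= exp(-delta*t)*E(0) + K/delta.
delta, K, E0, dt, T = 0.8, 2.0, 10.0, 1e-3, 12.0
E, t = E0, 0.0
while t < T:
    E += dt * (-delta * E + K)
    t += dt
    bound = np.exp(-delta * t) * E0 + K / delta
    assert E <= bound + 1e-6
print("Gronwall bound verified, final E =", E)
```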
The global existence and uniqueness of the solutions to the initial-boundary value problem \eqref{CN}-\eqref{inc} and their continuous dependence on the initial data enable us to define the solution semiflow $\{S(t): H \to H\}_{t \geq 0}$ of this system of the two-dimensional FitzHugh-Nagumo cellular neural network:
$$
S(t): ((x_{i k}^0, y_{i k}^0): (i,k) \in \mathbb{Z}_{mn}) \longmapsto ((x_{i k} (t, x_{i k}^0), y_{i k} (t, y_{i k}^0)): (i,k) \in \mathbb{Z}_{mn}).
$$
We call $\{S(t)\}_{t \geq 0}$ the semiflow of the FitzHugh-Nagumo CNN with the boundary feedback.
\begin{theorem} \label{Dsp}
The semiflow $\{S(t)\}_{t \geq 0}$ of the FitzHugh-Nagumo CNN with the boundary feedback is dissipative in the sense that there exists a bounded ball in the space $H$,
\begin{equation} \label{abs}
B^* = \{g \in H: \| g \|^2 \leq Q\}
\end{equation}
where the constant $Q$, which is independent of the initial data, is given by
\begin{equation} \label{Q}
Q = \frac{1}{\min \{C_1, 1\}} \left[1 + \frac{mn}{\delta} \left(C_2 + \frac{\delta \beta}{b}\right) \right]
\end{equation}
such that for any given bounded set $B \subset H$, there is a finite time $T_B > 0$ such that all the solutions with initial states in the set $B$ enter the ball $B^*$ permanently for $t \geq T_B$. The ball $B^*$ is then called an absorbing set of the semiflow in the space $H$.
\end{theorem}
\begin{proof}
The bounded estimate \eqref{dse} implies that
\begin{equation} \label{lsp}
\limsup_{t \to \infty} \,\sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t, x_{i k}^0) |^2 + |y_{i k} (t, y_{i k}^0)|^2 \right) < Q
\end{equation}
for all the solutions of \eqref{CN}-\eqref{inc} with any initial data $((x_{i k}^0, y_{i k}^0): (i,k) \in \mathbb{Z}_{mn}) \in H$. Moreover, for any given bounded set $B = \{g \in H: \|g \|^2 \leq \rho \}$ in $H$, there is a finite time
$$
T_B = \frac{1}{\delta} \log^+ (\rho \, \max \{C_1, 1\} )
$$
such that
$$
e^{- \delta \, t} \sum_{i=1}^m \sum_{k=1}^n \left(C_1 |x_{i k}^0 |^2 + |y_{i k}^0 |^2 \right) < 1, \quad \text{for} \;\, t \geq T_B,
$$
which means that all the solution trajectories starting from the set $B$ will uniformly and permanently enter the bounded ball $B^*$ shown in \eqref{abs} for $t \geq T_B$. Therefore, the ball $B^*$ in \eqref{abs} is an absorbing set and the semiflow of the FitzHugh-Nagumo CNN with the boundary feedback is dissipative in $H$.
\end{proof}
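The dissipativity of Theorem \ref{Dsp} can be probed numerically. The sketch below is a qualitative illustration only: the parameter values are arbitrary, and the explicit form of the boundary feedback $u_{ik}$ (zero at interior cells, equal to the opposite boundary gap at boundary cells) is our assumption, since \eqref{bfc1}-\eqref{bfc2} are stated elsewhere in the paper.

```python
import numpy as np

# Forward-Euler sketch of the 2D FitzHugh-Nagumo CNN with periodic
# indexing.  The parameter values and the explicit boundary feedback
# u (zero at interior cells, the opposite boundary gap at boundary
# cells) are illustrative assumptions, not quoted from the text.
a, b, c, delta, p, alpha = 1.0, 1.0, 1.0, 0.8, 0.5, 0.3
m, n, dt, steps = 8, 8, 1e-3, 20000
rng = np.random.default_rng(1)

def f(s):                       # cubic FitzHugh-Nagumo nonlinearity
    return s * (s - alpha) * (1.0 - s)

def rhs(x, y):
    # discrete Laplacian with periodic wrap-around indices
    lap = (np.roll(x, 1, 0) - 2 * x + np.roll(x, -1, 0)
           + np.roll(x, 1, 1) - 2 * x + np.roll(x, -1, 1))
    u = np.zeros_like(x)        # assumed boundary feedback pattern
    u[0, :] += x[-1, :] - x[0, :];  u[-1, :] += x[0, :] - x[-1, :]
    u[:, 0] += x[:, -1] - x[:, 0];  u[:, -1] += x[:, 0] - x[:, -1]
    return a * lap + f(x) - b * y + p * u, c * x - delta * y

x = 3.0 * rng.standard_normal((m, n))
y = 3.0 * rng.standard_normal((m, n))
e0 = np.sum(x**2 + y**2)
for _ in range(steps):
    dx, dy = rhs(x, y)
    x, y = x + dt * dx, y + dt * dy
energy = np.sum(x**2 + y**2)
assert np.isfinite(energy) and energy < e0   # trajectory is absorbed
print("initial vs final squared norm:", e0, energy)
```

This only demonstrates boundedness of trajectories from a large random initial state; it does not compute the explicit radius $Q$ of \eqref{Q}.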
\section{\textbf{Synchronization of the 2D FitzHugh-Nagumo CNN}}
Define the difference of solutions for two adjacent double-indexed cells of this FitzHugh-Nagumo CNN \eqref{CN} to be
\begin{equation} \label{DF}
\begin{split}
\Gamma_{i k} (t) &= x_{i k} (t) - x_{i-1,k} (t), \quad V_{i k} (t) = y_{i k} (t) - y_{i-1,k} (t), \;\; \text{for}\;\, (i, k) \in \mathbb{Z}_{mn} ; \\
\Pi_{i k} (t) &= x_{i k} (t) - x_{i,k-1} (t), \quad W_{i k} (t) = y_{i k} (t) - y_{i,k-1} (t), \;\; \text{for}\;\, (i, k) \in \mathbb{Z}_{mn} .
\end{split}
\end{equation}
We shall consider the system of the \emph{row-differencing} equations for this CNN:
\begin{equation} \label{dHR}
\begin{split}
\frac{\partial \Gamma_{i k}}{\partial t} = \,& a (\Gamma_{i-1,k} - 2\Gamma_{i k} + \Gamma_{i+1,k}) + a (\Gamma_{i, \,k-1} - 2\Gamma_{i k} + \Gamma_{i, \,k+1}) \\[2pt]
& + f(x_{i k}) - f(x_{i-1,k}) - b V_{i k} + p(u_{i k} - u_{i-1,k}), \\
\frac{\partial V_{i k} }{\partial t} = \, & c\, \Gamma_{i k} - \delta V_{i k}, \quad \text{for} \;\, 1 \leq k \leq n;
\end{split}
\end{equation}
and the system of the \emph{column-differencing} equations for this CNN:
\begin{equation} \label{gHR}
\begin{split}
\frac{\partial \Pi_{i k}}{\partial t} = \,& a (\Pi_{i-1,k} - 2\Pi_{i k} + \Pi_{i+1,k}) + a (\Pi_{i, \,k-1} - 2\Pi_{i k} + \Pi_{i, \,k+1}) \\[2pt]
& + f(x_{i k}) - f(x_{i, k-1}) - b W_{i k} + p(u_{i k} - u_{i, k-1}), \\
\frac{\partial W_{i k} }{\partial t} = \, & c\, \Pi_{i k} - \delta W_{i k}, \quad \text{for} \;\, 1 \leq i \leq m.
\end{split}
\end{equation}
According to the periodic boundary condition \eqref{pbc}, the corresponding boundary conditions for the equations \eqref{dHR} and \eqref{gHR} are
\begin{equation} \label{pBC}
\begin{split}
\Gamma_{0, k} (t) & = \Gamma_{m,k} (t), \quad \Gamma_{m+1, k} (t) = \Gamma_{1, k} (t), \quad \text{for} \;\; 1 \leq k \leq n; \\
\Pi_{i, 0} (t) & = \Pi_{i, n} (t), \, \quad \Pi_{i, n+1} (t) = \Pi_{i, 1} (t), \; \quad \text{for} \;\; 1 \leq i \leq m.
\end{split}
\end{equation}
Here is the main result on the synchronization of this 2D FitzHugh-Nagumo CNN with the boundary feedback.
\begin{theorem} \label{ThM}
If the following threshold condition for the boundary gap signals of the 2D FitzHugh-Nagumo cellular neural network \eqref{CN}-\eqref{inc} is satisfied,
\begin{equation} \label{SC}
\liminf_{t \to \infty} \left[\sum_{k=1}^n |x_{m,k}(t) - x_{1,k}(t)|^2 + \sum_{i=1}^m |x_{i, n}(t) - x_{i, 1}(t)|^2 \right] > Q \left[1 + \frac{1}{p} (\delta + \gamma + 2|c - b|) \right]
\end{equation}
where the constant $Q$ is given in \eqref{Q}, then this cellular neural network is asymptotically synchronized in the space $H$ at a uniform exponential rate. That is, for any initial data $((x_{i k}^0, y_{i k}^0): (i,k) \in \mathbb{Z}_{mn}) \in H$,
\begin{equation} \label{rsyn}
\begin{split}
&\lim_{t \to \infty} \sum_{i=1}^m \sum_{k=1}^n \left(| x_{i k} (t) - x_{i-1, k}(t)|^2 + |y_{i k} (t) - y_{i-1, k}(t)|^2 \right) \\
+ &\, \lim_{t \to \infty} \sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t) - x_{i, k-1} (t)|^2 + |y_{i k} (t) - y_{i, k-1} (t)|^2 \right) \\
= &\, \lim_{t \to \infty} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |V_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |W_{i k} (t)|^2\right) = 0.
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
For any given $1 \leq k \leq n$, multiply the first equation in \eqref{dHR} by $\Gamma_{i k} (t)$ and the second equation in \eqref{dHR} by $V_{i k} (t)$. For any given $1 \leq i \leq m$, multiply the first equation in \eqref{gHR} by $\Pi_{i k} (t)$ and the second equation in \eqref{gHR} by $W_{i k} (t)$. Then sum all of them up for all $(i,k) \in \mathbb{Z}_{mn} $. We obtain
\begin{equation} \label{eG}
\begin{split}
&\frac{1}{2} \frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |V_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[a(\Gamma_{i-1,k} - 2\Gamma_{i k} + \Gamma_{i+1,k}) \Gamma_{i k} + a (\Gamma_{i, \,k-1} - 2\Gamma_{i k} + \Gamma_{i, \,k+1})\Gamma_{i k} \right] \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[a(\Pi_{i-1, k} - 2\Pi_{i k} + \Pi_{i+1, k}) \Pi_{i k} + a (\Pi_{i, \,k-1} - 2\Pi_{i k} + \Pi_{i, \,k+1})\Pi_{i k} \right] \\
= &\, \sum_{i=1}^m \sum_{k=1}^n \,\left[f(x_{i k}) - f(x_{i-1,k}) - b V_{i k} + p(u_{i k} - u_{i-1,k})\right] \Gamma_{i k} \\
&+ \sum_{i=1}^m \sum_{k=1}^n \,\left[ f(x_{i k}) - f(x_{i, k-1}) - b W_{i k} + p(u_{i k} - u_{i, k-1}) \right] \Pi_{i k} \\
&+ \sum_{i=1}^m \sum_{k=1}^n \,\left[(c\, \Gamma_{i k} - \delta V_{i k}) V_{i k} + (c\, \Pi_{i k} - \delta W_{i k})W_{i k} \right] \\
= &\, \sum_{i=1}^m \sum_{k=1}^n \, \left[ (f(x_{i k}) - f(x_{i-1,k})) \Gamma_{i k} + (f(x_{i k}) - f(x_{i, k-1})) \Pi_{i k} \right] \\
&\,+ \sum_{i=1}^m \sum_{k=1}^n \, \left[(c - b)(\Gamma_{i k} V_{i k} + \Pi_{i k} W_{i k}) - \delta (|V_{i k} |^2 + |W_{i k} |^2) \right] \\
&\, + \sum_{i=1}^m \sum_{k=1}^n \, p \left[ (u_{i k} - u_{i-1,k}) \Gamma_{i k} + (u_{i k} - u_{i, k-1}) \Pi_{i k} \right].
\end{split}
\end{equation}
By the Assumption \eqref{Asp}, from \eqref{eG} it follows that
\begin{equation} \label{AG}
\begin{split}
&\frac{1}{2} \frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |V_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[a(\Gamma_{i-1,k} - 2\Gamma_{i k} + \Gamma_{i+1,k}) \Gamma_{i k} + a (\Gamma_{i, \,k-1} - 2\Gamma_{i k} + \Gamma_{i, \,k+1})\Gamma_{i k} \right] \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[a(\Pi_{i-1, k} - 2\Pi_{i k} + \Pi_{i+1, k}) \Pi_{i k} + a (\Pi_{i, \,k-1} - 2\Pi_{i k} + \Pi_{i, \,k+1})\Pi_{i k} \right] \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \, \left[\gamma (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + |c - b| (|\Gamma_{i k} V_{i k}| + |\Pi_{i k} W_{i k}|) - \delta (|V_{i k} |^2 + |W_{i k} |^2) \right] \\
&\, + \sum_{i=1}^m \sum_{k=1}^n \, p \left[ (u_{i k} - u_{i-1,k}) \Gamma_{i k} + (u_{i k} - u_{i, k-1}) \Pi_{i k} \right], \quad t \in [0, \infty),
\end{split}
\end{equation}
because for any $(i, k) \in \mathbb{Z}_{mn}$, there are $0 \leq \xi_{i k} \leq 1$ and $0 \leq \eta_{i k} \leq 1$ such that
\begin{align*}
&(f(x_{i k}) - f(x_{i-1,k})) \Gamma_{i k} + (f(x_{i k}) - f(x_{i, k-1})) \Pi_{i k} \\[3pt]
= &\, f^{\,\prime} (\xi_{i k} x_{i k} + (1-\xi_{i k}) x_{i-1, k}) \Gamma^2_{i k} + f^{\,\prime} (\eta_{i k} x_{i k} + (1-\eta_{i k}) x_{i, k-1}) \Pi^2_{i k} \leq \gamma (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2).
\end{align*}
The rest of the proof proceeds through the following steps.
Step 1. The two sums without the coefficient $a > 0$ on the left-hand side of \eqref{AG} can be expressed as
\begin{equation} \label{rcLp}
\begin{split}
& - \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Gamma_{i-1,k} - 2\Gamma_{i k} + \Gamma_{i+1,k}) \Gamma_{i k} + (\Gamma_{i, \,k-1} - 2\Gamma_{i k} + \Gamma_{i, \,k+1})\Gamma_{i k} \right] \\
& - \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Pi_{i-1, k} - 2\Pi_{i k} + \Pi_{i+1, k}) \Pi_{i k} + (\Pi_{i, \,k-1} - 2\Pi_{i k} + \Pi_{i, \,k+1})\Pi_{i k} \right] \\[2pt]
= &\, - \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Gamma_{i+1, k} - \Gamma_{i k})\Gamma_{i k} - (\Gamma_{i k} - \Gamma_{i-1, k})\Gamma_{i k} \right] \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Gamma_{i, k+1} - \Gamma_{i k})\Gamma_{i k} - (\Gamma_{i k} - \Gamma_{i, k-1})\Gamma_{i k} \right] \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Pi_{i+1, k} - \Pi_{i k})\Pi_{i k} - (\Pi_{i k} - \Pi_{i-1, k})\Pi_{i k} \right] \\
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Pi_{i, k+1} - \Pi_{i k})\Pi_{i k} - (\Pi_{i k} - \Pi_{i, k-1})\Pi_{i k} \right].
\end{split}
\end{equation}
The four sums on the right-hand side of \eqref{rcLp} can be treated by using the periodic boundary condition \eqref{pBC}. Among them the first sum is
\begin{equation} \label{Gi}
\begin{split}
& - \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Gamma_{i+1, k} - \Gamma_{i k})\Gamma_{i k} - (\Gamma_{i k} - \Gamma_{i-1, k})\Gamma_{i k} \right] \\[4pt]
= &\, - \sum_{k=1}^n \left(\sum_{i=1}^{m-1}\, (\Gamma_{i+1, k} - \Gamma_{i k} ) \Gamma_{i k} - \sum_{i=2}^m (\Gamma_{i k}- \Gamma_{i-1, k})\Gamma_{i k} \right) \\[3pt]
&\, - \sum_{k=1}^n \left((\Gamma_{m+1, k} - \Gamma_{m, k})\Gamma_{m, k} - (\Gamma_{1, k} - \Gamma_{0, k})\Gamma_{1, k} \right) \\
= &\,\sum_{k=1}^n \left(\sum_{i=2}^{m}\, (\Gamma_{i k} - \Gamma_{i-1, k})^2 + (\Gamma_{1, k}^2 + \Gamma_{m, k}^2) - (\Gamma_{m+1, k}\, \Gamma_{m, k} + \Gamma_{0, k}\, \Gamma_{1,k})\right) \\[3pt]
= &\, \sum_{k=1}^n \left( \sum_{i=2}^{m}\, (\Gamma_{i k} - \Gamma_{i-1, k})^2 + (\Gamma_{1, k}^2 + \Gamma_{0, k}^2) - 2 \Gamma_{1, k}\, \Gamma_{0,k} \right) \\[3pt]
= &\, \sum_{k=1}^n \left(\sum_{i=2}^{m}\, (\Gamma_{i k} - \Gamma_{i-1, k})^2 + (\Gamma_{1, k} - \Gamma_{0, k})^2 \right) = \sum_{k=1}^n \sum_{i=1}^{m} \, (\Gamma_{i k}- \Gamma_{i-1, k})^2 \geq 0.
\end{split}
\end{equation}
Similarly the three other sums on the right-hand side of \eqref{rcLp} can be treated and result in the following inequalities,
\begin{equation} \label{Gk}
\begin{split}
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Gamma_{i, k+1} - \Gamma_{i k})\Gamma_{i k} - (\Gamma_{i k} - \Gamma_{i, k-1})\Gamma_{i k} \right] \, = \sum_{i=1}^m \sum_{k=1}^{n} \, (\Gamma_{i k}- \Gamma_{i, k-1})^2 \geq 0 \\[4pt]
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Pi_{i+1, k} - \Pi_{i k})\Pi_{i k} - (\Pi_{i k} - \Pi_{i-1, k})\Pi_{i k} \right] = \sum_{k=1}^n \sum_{i=1}^{m} \, (\Pi_{i k} - \Pi_{i-1, k})^2 \geq 0 \\[4pt]
&- \sum_{i=1}^m \sum_{k=1}^n \, \left[(\Pi_{i, k+1} - \Pi_{i k})\Pi_{i k} - (\Pi_{i k} - \Pi_{i, k-1})\Pi_{i k} \right] = \sum_{i=1}^m \sum_{k=1}^{n} \, (\Pi_{i k} - \Pi_{i, k-1})^2 \geq 0.
\end{split}
\end{equation}
Substitute the inequalities \eqref{Gi} and \eqref{Gk} into \eqref{rcLp}. Then the nonnegativity of \eqref{rcLp} shows that \eqref{AG} implies
\begin{equation} \label{GPVW}
\begin{split}
&\frac{1}{2} \frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |V_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) + \sum_{i=1}^m \sum_{k=1}^n \delta (|V_{i k} |^2 + |W_{i k} |^2) \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \, \left[\gamma (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + |c - b| (|\Gamma_{i k} V_{i k}| + |\Pi_{i k} W_{i k}|) \right] \\
&\, + \sum_{i=1}^m \sum_{k=1}^n \, p \left[ (u_{i k} - u_{i-1,k}) \Gamma_{i k} + (u_{i k} - u_{i, k-1}) \Pi_{i k} \right] \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \, \left[\gamma (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + \frac{1}{2} |c - b| (|\Gamma_{i k}|^2 + |V_{i k} |^2 + |\Pi_{i k} |^2 + |W_{i k} |^2) \right] \\
&\, + \sum_{i=1}^m \sum_{k=1}^n \, p \left[ (u_{i k} - u_{i-1,k}) \Gamma_{i k} + (u_{i k} - u_{i, k-1}) \Pi_{i k} \right], \quad t \in [0, \infty).
\end{split}
\end{equation}
Step 2. The boundary feedback \eqref{bfc1}-\eqref{bfc2} and the periodic boundary condition \eqref{pbc} imply that
\begin{equation} \label{Gu}
\begin{split}
& \sum_{i=1}^m \sum_{k=1}^n \, p(u_{i k} - u_{i-1, k})\Gamma_{i k} = \sum_{k=1}^n \left[\sum_{i=1}^m \, p(u_{i k} - u_{i-1, k})(x_{i k} - x_{i-1, k})\right] \\
= &\, p \sum_{k=1}^n \left[(u_{1, k} - u_{0, k})(x_{1, k} - x_{0, k}) + (u_{2, k} - u_{1, k})(x_{2, k} - x_{1, k})\right] \\
+ &\, p \sum_{k=1}^n (u_{m, k} - u_{m-1, k})(x_{m,k} - x_{m-1,k}) \\
= &\, p \sum_{k=1}^n \left[(u_{1, k} - u_{0, k})(x_{1, k} - x_{0, k}) - u_{1, k} (x_{2, k} - x_{1, k}) + u_{m, k} (x_{m,k} - x_{m-1,k}) \right] \\
= &\, p \sum_{k=1}^n 2(x_{m,k} - x_{1, k})(x_{1,k} - x_{m,k}) \\
- &\, p \sum_{k=1}^n \left[(x_{m,k} - x_{1,k})(x_{2,k} - x_{1,k}) - (x_{1,k} - x_{m,k}) (x_{m,k} - x_{m-1,k})\right] \; (\text{by} \, \eqref{pbc}) \\
= &\, p \sum_{k=1}^n \left[- 3(x_{m,k} - x_{1,k})^2 + (x_{m,k} - x_{1,k})(x_{m-1,k} - x_{2,k})\right] \\
\leq &\, p \sum_{k=1}^n \left[- 2(x_{m,k} - x_{1,k})^2 + (x_{m-1, k} - x_{2, k})^2 \right].
\end{split}
\end{equation}
Similarly we can deduce that
\begin{equation} \label{Pu}
\begin{split}
& \sum_{i=1}^m \sum_{k=1}^n \, p(u_{i k} - u_{i, k-1}) \Pi_{i k} = \sum_{i=1}^m \left[\sum_{k=1}^n \, p(u_{i k} - u_{i, k-1})(x_{i k} - x_{i, k-1})\right] \\
\leq &\, p \sum_{i=1}^m \left[- 2(x_{i, n} - x_{i, 1})^2 + (x_{i, n-1} - x_{i, 2})^2 \right].
\end{split}
\end{equation}
Substitute \eqref{Gu} and \eqref{Pu} into \eqref{GPVW}. Then we come up with the following differential inequality
\begin{equation} \label{TK}
\begin{split}
&\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) \\[2pt]
&\, + 2 \delta \sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} |^2 + |W_{i k} |^2) \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \, 2 \left[ (\delta + \gamma) (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + |c - b| (|\Gamma_{i k}|^2 + |V_{i k} |^2 + |\Pi_{i k} |^2 + |W_{i k} |^2) \right] \\
&\, + \sum_{i=1}^m \sum_{k=1}^n \, 2p \left[ (u_{i k} - u_{i-1,k}) \Gamma_{i k} + (u_{i k} - u_{i, k-1}) \Pi_{i k} \right] \\
\leq &\, \sum_{i=1}^m \sum_{k=1}^n \, 2 \left[ (\delta + \gamma) (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + |c - b| (|\Gamma_{i k}|^2 + |V_{i k} |^2 + |\Pi_{i k} |^2 + |W_{i k} |^2) \right] \\
&\, - 4p \left[\sum_{k=1}^n (x_{m,k} - x_{1,k})^2 + \sum_{i=1}^m (x_{i, n} - x_{i, 1})^2\right] \\
&\, + 2p \left[\sum_{k=1}^n (x_{m-1,k} - x_{2,k})^2 + \sum_{i=1}^m (x_{i, n-1} - x_{i, 2})^2\right], \quad t \in [0, \infty).
\end{split}
\end{equation}
Step 3. Note that \eqref{Q}-\eqref{lsp} in Theorem \ref{Dsp} confirms that for all solutions of \eqref{CN}-\eqref{inc},
$$
\limsup_{t \to \infty}\, \sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t, x_{i k}^0)|^2 + |y_{i k} (t, y_{i k}^0)|^2\right) < Q
$$
and the bounded ball $B^*$ shown in \eqref{abs} is an absorbing set in the space $H$ for the semiflow of this 2D FitzHugh-Nagumo CNN with the boundary feedback. Therefore, for any given bounded set $B \subset H$ and any initial data $((x_{i k}^0, y_{i k}^0): (i,k) \in \mathbb{Z}_{mn}) \in B$, there is a finite time $T_B \geq 0$ such that
\begin{equation} \label{bd}
\begin{split}
& \sum_{i=1}^m \sum_{k=1}^n \, 2 \left[ (\delta + \gamma) (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) + |c - b| (|\Gamma_{i k}|^2 + |V_{i k} |^2 + |\Pi_{i k} |^2 + |W_{i k} |^2) \right] \\
& + 2p \left[\sum_{k=1}^n (x_{m-1,k} - x_{2,k})^2 + \sum_{i=1}^m (x_{i, n-1} - x_{i, 2})^2\right] \\[3pt]
< &\, 4 \left(\delta + \gamma + 2|c - b| \right) Q + 4p\,Q = 4 \left(\delta + \gamma + 2|c - b| + p\right) Q, \quad \text{for} \;\, t \geq T_B.
\end{split}
\end{equation}
Here we used \eqref{DF} which implies that for $t \geq T_B$,
$$
\sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k} |^2 + |\Pi_{i k} |^2) < 2Q , \quad \sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k}|^2 + |V_{i k} |^2 + |\Pi_{i k} |^2 + |W_{i k} |^2) < 4Q,
$$
and
$$
\sum_{k=1}^n (x_{m-1,k} - x_{2,k})^2 + \sum_{i=1}^m (x_{i, n-1} - x_{i, 2})^2 < 2Q.
$$
Combining \eqref{TK} and \eqref{bd}, we have shown that
\begin{equation} \label{Mq}
\begin{split}
&\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) \\
&\, + 2 \delta \sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} (t)|^2 + |W_{i k} (t) |^2) \\
&\, + 4p \left[\sum_{k=1}^n (x_{m,k}(t) - x_{1,k}(t))^2 + \sum_{i=1}^m (x_{i, n}(t) - x_{i, 1}(t))^2\right] \\[3pt]
< &\, 4\left(\delta + \gamma + 2|c - b| + p\right) Q, \quad \text{for} \;\, t \geq T_B.
\end{split}
\end{equation}
For any given initial state $(x^0, y^0) = ((x_{i k}^0, y_{i k}^0): (i,k) \in \mathbb{Z}_{mn}) \in H$, regarded as a single-point bounded set, there exists a finite time $T_{(x^0, \, y^0)} > 0$ such that the differential inequality \eqref{Mq} holds for $t \geq T_{(x^0, \, y^0)}$ as well as
$$
\sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t, x_{i k}^0) |^2 + |y_{i k} (t, y_{i k}^0)|^2 \right) < Q, \quad \text{for} \;\; t \geq T_{(x^0, \, y^0)}.
$$
Under the threshold condition \eqref{SC} of this theorem, for $t \geq T_{(x^0, \, y^0)}$,
\begin{equation} \label{Thrs}
4p \left[\sum_{k=1}^n (x_{m,k}(t) - x_{1,k}(t))^2 + \sum_{i=1}^m (x_{i, n}(t) - x_{i, 1}(t))^2 \right] > 4\left(\delta + \gamma + 2|c - b| + p\right) Q.
\end{equation}
It follows from \eqref{Mq} and \eqref{Thrs} that
\begin{equation} \label{Gwq}
\begin{split}
&\frac{d}{dt} \sum_{i=1}^m \sum_{k=1}^n \left(|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} (t)|^2 + |W_{i k} (t)|^2 \right) \\
+ \, 2 \delta &\, \sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k} (t)|^2 + |\Pi_{i k} (t)|^2 + |V_{i k} (t) |^2 + |W_{i k} (t)|^2) < 0, \;\; t \geq T_{(x^0, \, y^0)}.
\end{split}
\end{equation}
Finally, the Gronwall inequality applied to \eqref{Gwq} shows that
\begin{equation} \label{Syn}
\begin{split}
&\sum_{i=1}^m \sum_{k=1}^n (|\Gamma_{i k} (t)|^2 + |V_{i k} (t)|^2) + \sum_{i=1}^m \sum_{k=1}^n (|\Pi_{i k} (t)|^2 + |W_{i k} (t)|^2) \\
\leq & e^{- 2\delta [t - T_{(x^0, \, y^0)}]} \sum_{i=1}^m \sum_{k=1}^n [ |\Gamma_{i k} (T_{(x^0, \, y^0)})|^2 + |\Pi_{i k} (T_{(x^0, \, y^0)})|^2 + |V_{i k} (T_{(x^0, \, y^0)})|^2 + |W_{i k} (T_{(x^0, \, y^0)})|^2] \\[3pt]
\leq &\, 4e^{- 2\delta [t - T_{(x^0, \, y^0)}]}\,Q \to 0, \quad \text{as} \;\, t \to \infty.
\end{split}
\end{equation}
Thus it is proved that for all solutions of the problem \eqref{CN}-\eqref{inc} for this 2D FitzHugh-Nagumo CNN with the boundary feedback, the following convergence holds at a uniform exponential rate,
\begin{equation} \label{gik}
\begin{split}
&\lim_{t \to \infty} \sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t) - x_{i-1,k}(t) |^2 + |y_{i k} (t) - y_{i-1,k}(t) |^2 \right) = 0; \\
&\lim_{t \to \infty} \sum_{i=1}^m \sum_{k=1}^n \left(|x_{i k} (t) - x_{i,k-1}(t) |^2 + |y_{i k} (t) - y_{i,k-1}(t)|^2 \right) = 0.
\end{split}
\end{equation}
The convergence in \eqref{gik} shows that this 2D FitzHugh-Nagumo CNN with the boundary feedback is row synchronized and column synchronized. Therefore, it is uniformly and exponentially synchronized. The proof is completed.
\end{proof}
\textbf{Conclusions}. We summarize the new contributions in this paper.
1. We propose a new mathematical model of 2D cellular neural networks, whose cell template is the 2D lattice FitzHugh-Nagumo equations with boundary feedback control \eqref{CN}-\eqref{bfc2}. It features the synaptic coupling in terms of the 2D discrete Laplacian operator and the more meaningful and implementable boundary feedback instead of the pinning control or space-clamped feedback placed in all the interior cell nodes of the network. The boundary feedback is computationally better than the mean-field feedback structures studied in synchronization of neural networks.
2. For this new model of cellular neural networks, we tackle the global dynamics of the solutions by the approach of dynamical system analysis instead of algebraic spectral analysis and Lyapunov functionals. Through the uniform \emph{a priori} estimates, we proved the existence of an absorbing set in the state space $H = \ell^2 (\mathbb{Z}_{mn}, \mathbb{R}^{2mn})$, which signifies that the CNN system is dynamically dissipative and paves the way toward the proof of the main result on synchronization.
3. The main result stated in Theorem \ref{ThM} provides a sufficient condition for the exponential synchronization of the 2D FitzHugh-Nagumo cellular neural networks with boundary feedback. The threshold condition \eqref{SC} on
$$
\liminf_{t \to \infty} \left[\sum_{k=1}^n |x_{m,k}(t) - x_{1,k}(t)|^2 + \sum_{i=1}^m |x_{i, n}(t) - x_{i, 1}(t)|^2 \right]
$$
is to be satisfied by the boundary gap signals between the pairwise boundary cells of the two-dimensional grid. The threshold in \eqref{SC} with \eqref{Q} is explicitly expressed in terms of the network parameters and is adjustable by the feedback coefficient $p$ designed in applications.
4. More importantly, the exponential synchronization result and the new methodology contributed in this paper can be directly generalized (through more tedious steps though) to the 3D and even higher dimensional CNN modeled by the corresponding cell template of lattice FitzHugh-Nagumo equations with boundary feedback or by some other type models such as the lattice Hindmarsh-Rose equations.
We comment on two related open problems. One is that we can change the orthogonal cellular network intercoupling defined by the neighborhood indices $|\widetilde{i} - i | + |\widetilde{k} - k | = 1$ of the cell $N(i, k)$ shown in \eqref{CN} to a different coupling neighborhood $| \widetilde{i} - i | = |\widetilde{k} - k| = 1$ for the two-dimensional CNN \cite{Chow1, Chua2, S2}. Then the discrete Laplacian operator is to be adapted and we conjecture that the same kind of results can still be achieved through the approach from dissipativity to synchronization. Another challenging problem is whether the synchronization is achievable for the same setting of the 2D FitzHugh-Nagumo CNN with the periodic boundary conditions \eqref{pbc} but the boundary feedback \eqref{bfc1} and \eqref{bfc2} are exclusively restricted to the equations associated with the four corner cells $\{N(i, k): i = 1, m;\, k = 1, n\}$.
The presented modeling and synchronization of cellular neural networks with boundary feedback are expected to be useful and effective, with potential applications in the field of artificial intelligence.
\end{document}
\begin{document}
\begin{titlepage}
\begin{center}
\Huge
\vspace*{0.01cm}
\textbf{The Method of Alternating Projections}
\vspace*{2cm}
\includegraphics[width=0.4\textwidth]{figure1.pdf}
\vspace*{1.25cm}
\LARGE
Omer Ginat \\
\vspace*{1cm}
Honour School of Mathematics: Part C\\
University of Oxford\\
Hilary 2018 \\
\Large
Word count: 9967
\end{center}
\end{titlepage}
\pagenumbering{roman}
\thispagestyle{plain}
\begin{abstract}
The method of alternating projections involves orthogonally projecting an element of a Hilbert space onto a collection of closed subspaces. It is known that the resulting sequence always converges in norm if the projections are taken periodically, or even quasiperiodically. We present proofs of such well known results, and offer an original proof for the case of two closed subspaces, known as von Neumann's theorem. Additionally, it is known that this sequence always converges with respect to the weak topology, regardless of the order projections are taken in. By focusing on projections directly, rather than the more general case of contractions considered previously in the literature, we are able to give a simpler proof of this result. We end by presenting a technical construction taken from a recent paper, of a sequence for which we do not have convergence in norm.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\pagenumbering{arabic}
\section{Introduction} \label{introduction}
The method of alternating projections has been widely studied in mathematics. Interesting not only for its rich theory, it also has many wide-reaching applications, for instance to the iterative solution of large linear systems, in the theory of partial differential equations, and even in image restoration; see \cite{Deu92} for a survey.
\subsection{What is the method of alternating projections?}
We begin by defining what we mean by the method of alternating projections. Let $H$ be a real or complex Hilbert space, $J\geq2$ an integer, and suppose that $M_1,\dots ,M_J$ are closed subspaces of $H$. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $(j_n)_{n\geq1}$ be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq 0}$ by choosing an element $x_0 \in H$, and letting
\begin{equation*}
x_n = P_{j_n}x_{n-1}, \quad n\geq 1.
\end{equation*}
It is natural to ask under what conditions this sequence $(x_n)$ converges. This is often referred to as the method of alternating projections, and will be the focus of this dissertation.
In order to motivate why we might expect $(x_n)$ to converge, it is useful to look at a simple example. Let $H = \mathbb{R}^2$, and consider the two closed subspaces
\begin{align*}
M_1&= \{(x,y) \in \mathbb{R}^2 : x=y\}, \\
M_2 &= \{(x,y) \in \mathbb{R}^2 : y=0\}.
\end{align*}
We investigate what happens when we project $x_0 \in H$ repeatedly between $M_1$ and $M_2$ (see Figure \ref{alternating projections}).
\begin{figure}
\caption{The method of alternating projections for two subspaces of $\mathbb{R}^2$}
\label{alternating projections}
\end{figure}
We see that the resulting sequence converges to $(0,0)$: the projection of $x_0$ onto $M_1 \cap M_2$. More generally, we will see in Section \ref{convergence in norm} that if the sequence $(j_n)$ is taken to be periodic, then $(x_n)$ always converges in norm to the projection of $x_0$ onto $\bigcap_{j=1}^J M_j$. However, as we will observe in Section \ref{failure strong convergence}, we may find a sequence $(j_n)$ for which $(x_n)$ does not converge in norm.
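As a quick numerical illustration of this example (a Python sketch, not part of the dissertation proper; the matrices below are the standard orthogonal projection matrices onto $M_1$ and $M_2$), alternating the two projections drives any starting point towards the origin:

```python
import numpy as np

# Orthogonal projections onto M1 = {(x, y) : x = y} and M2 = {(x, y) : y = 0}.
P1 = np.array([[0.5, 0.5],
               [0.5, 0.5]])
P2 = np.array([[1.0, 0.0],
               [0.0, 0.0]])

x = np.array([3.0, 1.0])   # starting point x_0
for _ in range(25):        # alternate P1 then P2
    x = P2 @ (P1 @ x)

print(x)  # close to (0, 0), the projection of x_0 onto M1 ∩ M2
```

Here the composition $P_2P_1$ halves the first coordinate at each step, so the iterates converge geometrically to $(0,0)$.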
In this dissertation we work through the major results relating to the convergence of $(x_n)$, sometimes offering new or more direct proofs than those in the literature today, and including important details where they have been omitted.
\subsection{A brief history} \label{a brief history}
The first major result relating to the method of alternating projections is due to von Neumann \cite{von49}. In 1949, he proved that when we have two projections onto closed subspaces of a Hilbert space (that is, $J=2$), then $(x_n)$ converges in norm to the projection of $x_0$ onto the intersection of the two subspaces. The next significant advance happened in 1960, when Pr\'ager proved that the sequence $(x_n)$ converges in norm whenever $H$ is finite-dimensional \cite{Pra60}. Shortly after, in 1962, Halperin generalised von Neumann's theorem by proving that when the sequence $(j_n)$ is periodic, $(x_n)$ converges in norm \cite{Hal62}. \\ \\
In 1965, Amemiya and Ando proved a convergence result about products (compositions) of contractions, of which a corollary is that our sequence $(x_n)$ always converges weakly \cite{AmAn65}. This subsumes the result by Pr\'ager, since in finite-dimensional Hilbert spaces, the weak topology and norm topology coincide, and so weak convergence is equivalent to convergence in norm.
There had been no further convergence results until 1995, when Sakai improved on Halperin's theorem. He proved that when the sequence $(j_n)$ is so-called \textit{quasiperiodic}, we have convergence in norm \cite{Sak95}. Based on these positive results, it is natural to ask whether $(x_n)$ always converges in norm without restrictions on $H$ or the sequence $(j_n)$. Indeed, Amemiya and Ando posed this question in their paper \cite{AmAn65}.
It was only in 2012 when Paszkiewicz \cite{Pas12} proved that for an infinite-dimensional Hilbert space, we may find five subspaces, a vector $x_0 \in H$, and a sequence $(j_n)$ such that $(x_n)$ does not converge in norm. In 2014, Kopeck\'a and M\"uller improved Paszkiewicz's construction from five subspaces to three \cite{KoMu14}. Indeed, this is the best we can do, since for the case of two subspaces, we are guaranteed convergence in norm by von Neumann's theorem \cite{von49}. Kopeck\'a and Paszkiewicz refined this construction in 2017. They went on to show that for any infinite-dimensional Hilbert space $H$, we may find three subspaces such that for any non-zero $x_0 \in H$, there is a sequence $(j_n)$ for which $(x_n)$ does not converge in norm \cite{KoPa17}.
\begin{figure}
\caption{A history of the method of alternating projections}
\end{figure}
\subsection{Notation} \label{notation}
Throughout this dissertation, $H$ will be a (real or complex) Hilbert space, $J\geq2$ an integer, and $M_1,\dots, M_J$ a family of closed subspaces of $H$ with intersection $M=\bigcap_{j=1}^{J} M_j$. Given a closed subspace $Y$ of $H$, we write $P_Y$ for the orthogonal projection onto $Y$, and for ease of notation, we write $P_1,\dots,P_J$ for the orthogonal projections onto $M_1, \dots ,M_J$. Throughout, $(j_n)_{n\geq1}$ will be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq 0}$ by choosing a vector $x_0 \in H$, and letting
\begin{equation*}
x_n = P_{j_n}x_{n-1}, \quad n\geq1.
\end{equation*}
This will be the general setting for this dissertation. In particular, when we mention $(j_n)$ or $(x_n)$, we are referring to the sequences described above.
We will write $B(H)$ for the space of bounded linear operators on $H$, and $B_H$ for the closed unit ball in $H$. Additionally, we write $\mathbb{F}$ for the (real or complex) scalar field of $H$. Other ad hoc notation will be introduced as needed.
\section{Preliminaries}
We begin by recalling what it means to project orthogonally onto a closed subspace. It is a standard fact that for a closed subspace $Y$ of a Hilbert space $H$, we have $H=Y \oplus Y^\perp$. Hence each $x \in H$ can be written uniquely as $x=y+z$, where $y \in Y$ and $z \in Y^\perp$. The orthogonal projection $P_Y \colon H \to H$ onto $Y$ is given by $P_Y(x) = y$. In fact, it is simple to see that $P_Y(x)$ is the unique closest point in $Y$ to $x$.
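The decomposition $x = y + z$ can be made concrete in finite dimensions (a Python sketch with an arbitrarily chosen subspace, purely for illustration; the formula $P = A(A^{\mathsf T}A)^{-1}A^{\mathsf T}$ is the standard projection onto the column space of a full-rank matrix $A$):

```python
import numpy as np

# Illustrative subspace Y of R^4, spanned by the two columns of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
# Orthogonal projection onto Y = column space of A.
P = A @ np.linalg.inv(A.T @ A) @ A.T

x = np.array([1.0, 2.0, 3.0, 4.0])
y = P @ x       # component of x lying in Y
z = x - y       # component of x lying in Y-perp

assert np.allclose(P @ P, P)      # P is idempotent
assert np.allclose(P, P.T)        # P is self-adjoint
assert np.allclose(A.T @ z, 0.0)  # z is orthogonal to every vector in Y
```

The assertions check exactly the properties recorded in the lemma below: idempotence, self-adjointness, and the orthogonality of the residual $x - Px$ to $Y$.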
Before proving the important results about the method of alternating projections, it will help to introduce some elementary facts, to be referred to throughout this dissertation. The proofs are simple, but we include them to make the dissertation self-contained.
In the following lemma, we present a few elementary facts, mainly about projections.
\begin{lemma} \label{foundational}
Let $x \in H$, $Y$ be a closed subspace of $H$, and $P$ the orthogonal projection onto $Y$. Then
\begin{enumerate}[label=(\alph*)]
\item $P$ is linear, idempotent ($P^2 = P$), and self-adjoint ($P=P^*$). \label{preliminary}
\item For vectors $u,v \in H$ with $u\perp v$, we have $\|u+v\|^2 = \|u\|^2 + \|v\|^2$. \label{pythagoras}
\item $\|x-Px\|^2 = \|x\|^2 - \|Px\|^2$. \label{projection equality}
\item $\|Px\| \leq \|x\|$ with equality if and only if $Px=x$. \label{projection norm}
\item $\|P\|=1$ if $Y\neq\{0\}$, and $\|P\|=0$ if $Y=\{0\}$. \label{projection norm 1}
\item For any $x \in H$ and $y\in Y$, $\|x-Px\| \leq \|x-y\|$ with equality if and only if $Px=y$. \label{projection is onto closest point}
\item If $U$ and $V$ are closed subspaces of $H$ with $U \perp V$, then $U+V$ is closed.\label{sum projections closed}
\item If $U$ and $V$ are closed subspaces of $H$ with $U \perp V$, then $P_U+P_V = P_{U+V}$. \label{adding projections}
\end{enumerate}
\end{lemma}
\begin{proof} \ref{preliminary} For $i \in \{1,2\}$, let $x_i \in H$. Since $Y$ is a closed subspace of $H$, we have $H = Y \oplus Y^\perp$, so there are unique $y_i \in Y$ and $z_i \in Y^\perp$ such that $x_i = y_i + z_i$. For $\lambda \in \mathbb{F}$, we have \[P(x_1+\lambda x_2) = P(y_1 + \lambda y_2 + z_1 + \lambda z_2) = y_1 + \lambda y_2 = P(x_1) + \lambda P(x_2). \] Hence $P$ is linear. We also note that \[P^2(x_1) = P\big{(}P(y_1 + z_1)\big{)} = P(y_1) = y_1,\] so $P$ is idempotent. Finally, we have
\begin{equation*}
\langle Px_1,x_2 \rangle = \langle y_1,y_2 + z_2 \rangle = \langle y_1,y_2 \rangle = \langle y_1 + z_1,y_2 \rangle = \langle x_1,Px_2 \rangle.
\end{equation*}
Therefore $P$ is self-adjoint. \\ \\
\ref{pythagoras} Since $u \perp v$, we have $\langle u,v \rangle= 0 = \langle v,u \rangle$, and so \[\|u+v\|^2 = \langle u + v, u + v \rangle = \langle u,u \rangle + \langle v,v \rangle = \|u\|^2 + \|v\|^2. \]
\ref{projection equality} The result follows by applying \ref{pythagoras} with $u=Px$ and $v=x-Px$. \\ \\
\ref{projection norm} Applying \ref{projection equality}, we have that \[ \|Px\|^2 = \|x\|^2 - \|x - Px\|^2 \leq \|x\|^2,\] with equality if and only if $Px=x$. \\ \\
\ref{projection norm 1} The result follows immediately from \ref{projection norm}. \\ \\
\ref{projection is onto closest point} Since $H=Y \oplus Y^\perp$, then given $x\in H$, there are unique $\widetilde{y}\in Y$ and $\widetilde{z} \in Y^\perp$ such that $x=\widetilde{y}+\widetilde{z}$. So for any $y \in Y$, we have by \ref{pythagoras} that
\begin{equation*}
\|x-y\|^2 = \|(\widetilde{y}-y) + \widetilde{z}\|^2 = \|\widetilde{y}-y\|^2 + \|\widetilde{z}\|^2 \geq \|\widetilde{z}\|^2 = \|x - \widetilde{y}\|^2 = \|x - Px\|^2,
\end{equation*}
with equality if and only if $Px=y$. \\ \\
\ref{sum projections closed} Let $(x_n)$ be a Cauchy sequence in $U+V$. We write each $x_n$ as $u_n + v_n$, where $u_n \in U$, and $v_n \in V$. Since $U\perp V$, we have by \ref{pythagoras} that \[\|x_n - x_m\|^2 = \|(u_n-u_m) + (v_n - v_m)\|^2 =\|u_n-u_m\|^2 + \|v_n - v_m\|^2, \quad n,m \in \mathbb{N}.\] In particular, $\|u_n - u_m\| \leq \|x_n - x_m\|$ and $\|v_n - v_m\| \leq \|x_n - x_m\|$, so that $(u_n)$ and $(v_n)$ are both Cauchy sequences. Since $U$ and $V$ are closed subspaces of the Hilbert space $H$, they must be complete. Therefore $(u_n)$ and $(v_n)$ converge to some limits $u$ and $v$ respectively, and so $(x_n) = (u_n + v_n)$ converges to $u+v$. Hence $U+V$ is a complete subspace of $H$, and therefore closed. \\ \\
\ref{adding projections} By \ref{sum projections closed}, we know that $U+V$ is closed, and so $P_{U+V}$ is well defined. Since $U \perp V$, and since projections are self-adjoint, we have \[0 = \langle P_Vx,P_Uy \rangle = \langle P_UP_Vx,y \rangle = \langle x,P_VP_Uy \rangle, \quad x,y \in H.\] Hence $P_UP_V = 0 = P_VP_U$. Therefore, since $P_U$ and $P_V$ are idempotent, \[(P_U+P_V)^2 = P_U^2 + P_V^2 = P_U + P_V,\] and so $P_U + P_V$ is indeed a projection. We now note that for $x \in U$, $y\in V$, we have
\begin{equation*}
(P_U + P_V)(x+y) = P_Ux + P_Vx + P_Uy + P_Vy = x+y,
\end{equation*}
and for $z \in (U+V)^\perp$, $w \in H$, we have
\begin{equation*}
\langle (P_U+P_V)z,w \rangle = \langle z,(P_U+P_V)w \rangle = 0.
\end{equation*}
Hence $(P_U + P_V)$ is the identity on $U+V$, and zero on $(U+V)^\perp$, and so $P_U + P_V = P_{U+V}$ as claimed.
\end{proof}
The following lemma is still elementary, but the results are more specific. They will be particularly useful in proving Theorem \ref{von neumann} (von Neumann) \cite{von49} and Theorem \ref{halperin} (Halperin) \cite{Hal62}.
\begin{lemma} \label{foundational 2} Let $M_1,\dots ,M_J$ be a finite family of closed subspaces of a Hilbert space $H$, with intersection $M = \bigcap_{j=1}^J M_j $. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $P_M$ be the orthogonal projection onto M. Let $T = P_J\dots P_1$. Then
\begin{enumerate}[label=(\alph*)]
\item $\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-P_j\dots P_1)$ for $j \in \{1,\dots ,J\}$. \label{useful for amemiya}
\item $Tx=x$ if and only if $x \in M$. \label{Tx=x}
\item $T^*x=x$ if and only if $x \in M$. \label{T*x=x}
\item Let $A$ be a contraction on $H$ (a bounded operator with operator norm at most $1$). Suppose that $\|A^{n+1}x - A^nx\| \to 0$ as $n\to \infty$ for every $x \in H$. Then $A^ny \to 0$ as $n\to \infty$ for every $y \in \overline{\ran(I-A)}$. \label{using kakutani}
\item If $V$ is a subspace of $H$, then $H=\overline{V} \oplus V^\perp$. \label{direct sum subspace}
\item $(\ran(I-T))^\perp = \ker(I-T^*)$. \label{ran ker equality}
\item $H=\overline{\ran(I-T)} \oplus \ker(I-T^*)$. \label{direct sum ran ker}
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{useful for amemiya} If $x \in \bigcap_{k=1}^j \ker(I-P_k)$, then $P_kx=x$ for each $k \in \{1,\dots ,j\}$, and hence $P_j\dots P_1x = x$. Conversely, if $x\in \ker(I-P_j\dots P_1)$, then
\begin{equation*}
\|x\| = \|P_j\dots P_1x\| \leq \|P_{j-1}\dots P_1x\| \leq \dots \leq \|P_1x\| \leq \|x\|.
\end{equation*}
Hence $\|P_1x\| = \|x\|$, and so by Lemma \ref{foundational}\ref{projection norm}, we have $P_1x=x$. But also $\|P_2P_1x\| = \|x\|$, so that $\|P_2x\| = \|x\|$. Lemma \ref{foundational}\ref{projection norm} then gives that $P_2x=x$. In this way, a simple induction shows that for each $k \in \{1,\dots ,j\}$, $P_kx=x$. \\ \\
\ref{Tx=x} Applying \ref{useful for amemiya} with $j=J$, we have
\begin{equation*}
Tx=x \iff x\in \ker(I-T) \iff x \in \bigcap_{k=1}^J \ker(I-P_k) \iff x \in M.
\end{equation*}
\ref{T*x=x} An identical argument to \ref{useful for amemiya}, but with each $P_k$ replaced by $P_{j+1-k}$, gives \[\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-P_1\dots P_j), \quad j \in \{1,\dots ,J\}.\] We apply this with $j=J$, noting that $T^* = P_1 \dots P_J$, to get
\begin{equation*}
T^*x=x \iff x\in \ker(I-T^*) \iff x \in \bigcap_{k=1}^J \ker(I-P_k) \iff x \in M.
\end{equation*}
\ref{using kakutani} If $x\in \ran(I-A)$, then $x=(I-A)w$ for some $w \in H$. So, by assumption,
\begin{equation*}
\|A^nx\| = \|A^n(I-A)w\| = \|A^nw - A^{n+1}w\| \to 0 \textnormal{ as } n \to \infty.
\end{equation*}
Now suppose $y \in \overline{\ran(I-A)}$. Let $\varepsilon > 0$. We can find $x \in \ran(I-A)$ such that $\|x-y\| < \varepsilon$. Then
\begin{equation*}
\|A^ny\| \leq \|A^nx\| + \|A^n(x-y)\| \leq \|A^nx\| + \|x-y\| < \|A^nx\| + \varepsilon.
\end{equation*}
Hence $\limsup_{n\to\infty} \|A^ny\| \leq \varepsilon$, and since $\varepsilon$ was arbitrary, we have $\|A^ny\| \to 0$ as $n\to \infty$. \\ \\
\ref{direct sum subspace} Since $\overline{V}$ is closed, we have $H = \overline{V} \oplus (\overline{V})^\perp$. So we are done if we can show that $(\overline{V})^\perp = V^\perp$. Since $V\subseteq \overline{V}$, it follows that $(\overline{V})^\perp \subseteq (V)^\perp$. We now show that the other inclusion holds.
Let $x \in (V)^\perp$, so that $\langle x,v\rangle = 0$ for all $v \in V$, and let $y \in \overline{V}$, so that there exists a sequence $y_n \in V$ converging in norm to $y$. By continuity of inner products (with one argument fixed), we have \[\langle x,y \rangle = \langle x,\lim_{n\to\infty} y_n \rangle = \lim_{n\to\infty} \langle x,y_n \rangle = \lim_{n\to\infty} 0 = 0.\] Hence $x\in (\overline{V})^\perp$, and so $(V)^\perp \subseteq (\overline{V})^\perp$. \\ \\
\ref{ran ker equality} Noting that $(I-T)^* = I-T^*$, we have
\begin{equation*}
\begin{aligned}
x\in (\ran(I-T))^\perp &\iff \langle x,y \rangle = 0 \textnormal{ for all } y\in \ran(I-T) \\
&\iff \langle x,(I-T)w \rangle = 0 \textnormal{ for all } w\in H \\
&\iff \langle (I-T^*)x,w \rangle = 0 \textnormal{ for all } w \in H \\
&\iff (I-T^*)x = 0 \\
&\iff x\in \ker(I-T^*).
\end{aligned}
\end{equation*}
Hence $(\ran(I-T))^\perp = \ker(I-T^*)$. \\ \\
\ref{direct sum ran ker} Applying \ref{direct sum subspace} and \ref{ran ker equality}, we have
\begin{equation}
\nonumber H=\overline{\ran(I-T)} \oplus (\ran(I-T))^\perp = \overline{\ran(I-T)} \oplus \ker(I-T^*),
\end{equation}
as required.
\end{proof}
\section{Motivation}
This dissertation focuses on proving the convergence results discussed in Section \ref{a brief history}. However, it is important to understand how these results interact with other areas of mathematics. We present three applications of the method of alternating projections beyond functional analysis. The first is a playful example in which we make use of von Neumann's theorem in the unexpected context of dividing a string into equal thirds. The other two highlight its use in finding iterative solutions to systems of linear equations and in the theory of partial differential equations.
A key theme throughout this section is that the usefulness of the method of alternating projections stems from it often being easier to compute projections onto a single closed subspace, rather than directly computing the projection onto the intersection of closed subspaces. It is also worth remarking that there are many more applications beyond the three we present in this section; see \cite{Deu92} for a survey.
\subsection{Dividing a string into equal thirds}
We begin with a charming demonstration of how von Neumann's theorem (to be proved later as Theorem \ref{von neumann}) can be applied to divide a string into equal thirds. This is due to Burkholder, and presented in the paper ``Stochastic Alternating Projections'' \cite{DiKhSa10}.
We take a string and attach two paperclips anywhere along it, calling these the `left' and `right' paperclips. We will present an iterative process, so that the positions of the paperclips will converge to one third and two thirds of the total length of the string.
At any given stage, we apply the following steps.
\begin{enumerate}[label=(\alph*)]
\item We fold over the right end of the string so that it touches the left paperclip, and slide the right paperclip until it reaches the loop. We then unfold the string.
\begin{figure}
\caption{Step (a) of the iteration}
\label{step a}
\end{figure}
\item This time, we fold over the left end of the string so that it touches the right paperclip, and slide the left paperclip until it reaches the loop. We then unfold the string.
\begin{figure}
\caption{Step (b) of the iteration}
\label{step b}
\end{figure}
\end{enumerate}
Applying (a) followed by (b) makes up one iteration.
\begin{claim*}
Repeating these iterations, the positions of the paperclips converge to one third and two thirds of the total length of the string.
\end{claim*}
Although there are simpler ways to prove this, it is interesting to see how von Neumann's theorem may be applied in this unexpected context to give a slick proof.
\begin{proof} Suppose the three sections of the string have lengths $x$, $y$ and $z$ respectively. Then, as shown in Figures \ref{step a} and \ref{step b}, an application of (a) leaves the three sections with lengths $x$, $(y+z)/2$, $(y+z)/2$, and an application of (b) leaves the sections with lengths $(x+y)/2$, $(x+y)/2$, and $z$. Hence, applications of (a) and (b) correspond to the projections
\[ P_1 =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1/2 & 1/2 \\
0 & 1/2 & 1/2
\end{pmatrix}, \quad P_2 =
\begin{pmatrix}
1/2 & 1/2 & 0 \\
1/2 & 1/2 & 0 \\
0 & 0 & 1
\end{pmatrix},\]
applied to the vector $(x,y,z)^T$.
We work in the Hilbert space $H = \mathbb{R}^3$, and let $M_1$ and $M_2$ be the closed subspaces $P_1(H)$ and $P_2(H)$ respectively. Since $P_1$ and $P_2$ are idempotent and self-adjoint, they are in fact orthogonal projections.
Let $w \in H$. For $j \in \{1,2\}$, we have \[ w \in M_j \iff P_jw = w, \] and so \[ w \in M_1\cap M_2 \iff P_1w = P_2w = w.\] It is a simple check to see that $P_1w = P_2w = w$ if and only if $w = (a,a,a)^T$ for some $a \in \mathbb{R}$, and so, \[M_1\cap M_2 = \big{\{} (a,a,a)^T : a \in \mathbb{R} \big{\}}. \]
Hence, we have \[P_{M_1\cap M_2} \begin{pmatrix}
x \\
y \\
z
\end{pmatrix} = \begin{pmatrix}
\frac{x+y+z}{3} \\
\frac{x+y+z}{3} \\
\frac{x+y+z}{3}
\end{pmatrix}. \]
By von Neumann's theorem, which states that the limit of alternating projections onto two closed subspaces converges in norm to the projection onto the intersection of these subspaces, we have \[ \Bigg{\|}(P_1P_2)^n\begin{pmatrix}
x \\
y \\
z
\end{pmatrix} - \begin{pmatrix}
\frac{x+y+z}{3} \\
\frac{x+y+z}{3} \\
\frac{x+y+z}{3}
\end{pmatrix} \Bigg{\|} \to 0 \textnormal{ as $n \to \infty$}. \] Hence the positions of the paperclips converge to one third and two thirds of the total length of string, as claimed.
\end{proof}
We may also be interested in how quickly the position of the paperclips converge to one third and two thirds of the total length of the string. It turns out (after some simple calculations, which we omit here) that for a string of length $c$, the deviation of the left paperclip from $c/3$ and the right from $2c/3$, after $n$ iterations, is at most $\frac{2c}{3} \cdot 4^{-n}$ and $\frac{c}{3} \cdot 4^{1-n}$ respectively. To put this into perspective, for a string $1$ metre in length, only $3$ iterations are needed for an error of less than $1.1$ centimetres for the left paperclip.
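The iteration is easy to simulate (a Python sketch, illustrative only; $P_1$ and $P_2$ are the two projection matrices derived in the proof above, and the stated $4^{-n}$ convergence rate is visible numerically):

```python
import numpy as np

# The projections corresponding to steps (a) and (b) of the string argument.
P1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])
P2 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])

v = np.array([0.9, 0.05, 0.05])  # arbitrary initial section lengths (total 1)
for _ in range(10):              # one full iteration = step (a) then step (b)
    v = P2 @ (P1 @ v)

print(v)  # each entry is very close to 1/3
```

Note that both projections preserve the total length $x+y+z$, as they must, and the deviation from $(1/3,1/3,1/3)$ shrinks by a factor of four per iteration.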
\subsection{Solving systems of linear equations}
As mentioned in Section \ref{a brief history}, Halperin proved that the sequence obtained by periodically projecting an element of a Hilbert space orthogonally onto a collection of closed subspaces converges in norm to the projection of the element onto the intersection of these closed subspaces \cite{Hal62}. Inspired by Deutsch \cite{Deu01}, we demonstrate how Halperin's theorem (to be proved later as Theorem \ref{halperin}) can be used to find an iterative solution to a system of linear equations.
\subsubsection{The setup}
Let $H$ be a real or complex Hilbert space, $y_1,\dots,y_J \in H \setminus \{0\}$, and $c_1,\dots,c_J \in \mathbb{F}$. We want to find an element $x \in H$ satisfying the equations
\begin{equation} \label{satisfy equation}
\langle x,y_i \rangle = c_i, \quad i\in \{1,\dots, J\}.
\end{equation}
We consider the hyperplanes
\begin{equation*}
V_i = \{y \in H \mid \langle y,y_i \rangle = c_i \}, \quad i\in \{1,\dots, J\},
\end{equation*}
and note that they are closed. We set
\begin{equation*}
V = \bigcap_{i=1}^J V_i.
\end{equation*}
Then $x$ satisfies (\ref{satisfy equation}) if and only if $x \in V$. Throughout this section, we assume a solution exists, so that $V \neq \emptyset$.
At this stage, we would like to take periodic projections of a vector $x_0 \in H$ onto the $J$ hyperplanes, and show it converges in norm to an element of $V$ (i.e. to a solution of (\ref{satisfy equation})). It seems natural to appeal to Halperin's theorem to obtain such a result. However, the hyperplanes $V_i$ may not be subspaces since they do not necessarily contain the origin.
\subsubsection{An interlude about affine spaces}
We can resolve this problem through the notion of affine spaces. There are many equivalent definitions of affine spaces, but we will use the one which is most natural in this context.
We say that $U \subset H$ is an \textit{affine space} if $U=L+u$ for some (unique) subspace $L$ of $H$, and (any) $u \in U$. We may define a projection of $z \in H$ onto an affine space $U$ by \[P_U(z) = P_L(z - u) + u.\] This is well defined since given $u,\tilde{u} \in U$, there is some $l \in L$ such that $u = l+\tilde{u}$, and so \[P_L(u-\tilde{u}) = P_L(l) = l = u - \tilde{u}.\] Therefore by linearity of $P_L$, we have that for any $z \in H$,
\begin{equation*}
P_L(z-u) + u = P_L(z - \tilde{u}) + \tilde{u}.
\end{equation*}
For each $i \in \{1,\dots,J\}$, we let $M_i$ be the subspace given by
\begin{equation*}
M_i = \{y \in H \mid \langle y,y_i \rangle = 0 \}.
\end{equation*}
Then for any $i \in \{1,\dots,J\}$, we have
\begin{equation*}
V_i = M_i + v_i, \quad v_i \in V_i,
\end{equation*}
and hence each $V_i$ is an affine space. Let $M = \bigcap_{i=1}^J M_i$. It is a simple check that we have
\begin{equation*}
V = M + v, \quad v \in V.
\end{equation*}
and so $V$ is also an affine space. Therefore for any $v \in V$, we have
\begin{equation*}
\begin{aligned}
&V_i = M_i + v, \\
& V = M + v.
\end{aligned}
\end{equation*}
\subsubsection{Finding an iterative solution}
We are now in a position to be able to make use of Halperin's theorem. We begin by choosing a starting vector $x_0 \in H$, and fixing some $v \in V$. Then for any $i,j \in \{1,\dots,J\}$, we have
\begin{equation} \label{composition of v}
\begin{aligned}
P_{V_j}P_{V_i}x_0 &= P_{V_j}(P_{M_i}(x_0-v) + v) = P_{M_j}P_{M_i}(x_0-v) + v. \\
\end{aligned}
\end{equation}
Therefore, letting $T = P_{V_J}\dots P_{V_1}$ and applying (\ref{composition of v}) repeatedly gives \[T^n x_0 = v + (P_{M_J}\dots P_{M_1})^n (x_0-v), \quad n\in \mathbb{N}.\] Hence by Halperin's Theorem,
\begin{equation*}
\|T^nx_0 - P_Vx_0\| = \|(P_{M_J}\dots P_{M_1})^n(x_0-v) - P_M(x_0 - v)\| \to 0 \textnormal{ as } n\to \infty.
\end{equation*}
In particular, since $P_Vx_0 \in V$, we see that $T^n x_0$ converges in norm to a solution of (\ref{satisfy equation}).
By the Hilbert projection theorem, there is a unique $\tilde{v} \in V$ such that $\|x_0-\tilde{v}\|$ is minimised over $V$. We show that this unique $\tilde{v}$ is in fact $P_Vx_0$. For any $w \in V$, Lemma \ref{foundational}\ref{projection is onto closest point} gives
\begin{align*}
\|x_0-P_Vx_0\| &= \|(x_0 - w) - P_M(x_0-w)\| \leq \|(x_0-w) \|.
\end{align*}
Since $w \in V$ was arbitrary, this unique $\tilde{v}$ is indeed $P_Vx_0$. Hence, setting $x_0=0$, we have that $T^n 0$ converges in norm to the unique minimal norm solution of (\ref{satisfy equation}).
It is a simple check that for each $z \in H$,
\begin{equation} \label{project hyperplane}
P_{V_i}(z) = z - \frac{y_i\big{(}\langle z,y_i \rangle - c_i\big{)}}{\|y_i\|^2}, \quad i \in \{1,\dots, J\}.
\end{equation}
Thus we have a formula to easily calculate $P_{V_i}(z)$ for any $z \in H$.
A special case of particular interest is when $H = \mathbb{R}^N$ ($N \in \mathbb{N}$), where the inner product is taken to be the dot product. Writing $x = (x_1,\dots, x_N)$ and $y_i = (a_{i1},\dots, a_{iN})$ for $i \in \{1,\dots,J\}$, equation (\ref{satisfy equation}) may be rewritten as
\begin{equation*}
\sum_{j=1}^N a_{ij}x_j = c_i, \quad i \in \{1,\dots, J\},
\end{equation*}
a system of linear equations. Assuming a solution exists, we have that $T^n x_0$ converges in norm to the unique solution closest in norm to our initial `guess' $x_0$.
This is called the Kaczmarz Method, first suggested in 1937 \cite{Kac37} (see \cite{Kac93} for an English translation). Its practical value stems from our being able to easily project onto a hyperplane by using (\ref{project hyperplane}). It has a computational advantage over other known methods of solving systems of linear equations if the system is sparse. In particular, when the matrix $A=[a_{ij}]$ is sparse, the computation of $P_{V_i}(z)$ is very fast \cite{Deu92}.
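The Kaczmarz method can be sketched directly from the projection formula (\ref{project hyperplane}) (a Python sketch, illustrative only; the function name `kaczmarz` and the small test system are our own choices, not from the literature cited above):

```python
import numpy as np

def kaczmarz(A, c, x0, sweeps=100):
    """Cyclically project onto the hyperplanes <x, a_i> = c_i,
    using the closed-form projection onto a single hyperplane."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a_i, c_i in zip(A, c):
            # This is exactly P_{V_i}(x) = x - a_i(<x, a_i> - c_i)/||a_i||^2.
            x = x - a_i * (np.dot(x, a_i) - c_i) / np.dot(a_i, a_i)
    return x

# A small consistent system A x = c with exact solution (1, 1).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
c = np.array([3.0, 4.0])
x = kaczmarz(A, c, x0=np.zeros(2))
print(x)  # close to [1, 1]
```

Starting from $x_0 = 0$ recovers, as discussed above, the minimal norm solution; each inner step costs only one sparse dot product when the row $a_i$ is sparse, which is the source of the method's practical appeal.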
\subsection{Solving PDEs on composite domains}
The final application we present is known as the Schwarz alternating method. It allows us to find an iterative solution of an elliptic partial differential equation on a region made up of two overlapping regions, in which the partial differential equation is easy to solve. We present this method for the Dirichlet problem; further examples can be found in \cite{Lio88}.
We consider the Sobolev space $H = H_0^1(\Omega)$, where the domain $\Omega = \Omega_1 \cup \Omega_2 \subset \mathbb{R}^2$ is a union of two sufficiently smooth subdomains (for example, domains whose boundaries are locally the graph of a Lipschitz continuous function). We view $H$ as a Hilbert space with inner product given by
\begin{equation*}
\langle u,v \rangle_H = \langle \nabla u, \nabla v \rangle_{L^2(\Omega)} = \int_\Omega \nabla u \cdot \overline{ \nabla v} \,dx.
\end{equation*}
Let $\Gamma=\partial \Omega$, and for $k\in\{1,2\}$, let $\Gamma_k = \partial \Omega_k \cap \partial \Omega$ and $\gamma_k = \partial \Omega_k \setminus \partial \Omega$.
\begin{figure}
\caption{An illustration of the domain $\Omega = \Omega_1 \cup \Omega_2$}
\label{Dirichlet}
\end{figure}
For $f \in L^2(\Omega)$, we would like to find a weak solution to the Dirichlet problem
\begin{equation} \label{dirichlet union}
\left\{\begin{alignedat}{2}
-\Delta &u = f && \quad \text{in}\ \Omega, \\
&u=0 && \quad \text{on}\ \Gamma. \\
\end{alignedat}\right.
\end{equation}
Finding a weak solution means finding $u \in H$ such that we have
\begin{equation*}
\langle f,v \rangle_{L^2(\Omega)} = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}, \quad v\in H.
\end{equation*}
The motivation behind the definition of a weak solution is that for test functions $u,v \in C_c^\infty(\Omega)$, with $u$ satisfying (\ref{dirichlet union}), integration by parts gives
\begin{equation*}
\begin{aligned}
\langle f,v \rangle_{L^2(\Omega)} = \int_\Omega f\overline{v} \,dx = \int_\Omega -\Delta u \cdot \overline{v} \,dx = \int_\Omega \nabla u \cdot \overline{\nabla v} \,dx = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}.
\end{aligned}
\end{equation*}
Noting that $v \mapsto \langle f,v \rangle_{L^2(\Omega)}$ is a (conjugate) linear bounded functional on $H$, the Riesz representation theorem gives that there exists a unique $u \in H$ such that
\begin{equation*}
\langle f,v \rangle_{L^2(\Omega)} = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}, \quad v \in H.
\end{equation*}
That is to say, there is a unique weak solution of (\ref{dirichlet union}). In what follows, we will use von Neumann's theorem \cite{von49} to find a sequence converging in norm to the weak solution of (\ref{dirichlet union}).
We begin by fixing $u_0 \in H$. We obtain $u_1 \in H$ by first finding a weak solution of
\begin{equation} \label{dirichlet single}
\left\{\begin{alignedat}{2}
-\Delta &u_1 = f && \quad \text{in}\ \Omega_1, \\
&u_1=0 && \quad \text{on}\ \Gamma_1, \\
&u_1=u_0 && \quad \text{on}\ \gamma_1, \\
\end{alignedat}\right.
\end{equation}
and then extending $u_1$ from $\Omega_1$ to $\Omega$ by letting $u_1=u_0$ on $\Omega_2 \setminus \Omega_1$. We note that finding a weak solution of (\ref{dirichlet single}) means finding $u_1 \in H^1(\Omega_1)$ with $u_1 = 0$ on $\Gamma_1$, and $u_1 = u_0$ on $\gamma_1$, such that
\begin{equation*}
\langle f,v \rangle_{L^2(\Omega_1)} = \langle\nabla u_1, \nabla v \rangle_{L^2(\Omega_1)}, \quad v \in H_0^1(\Omega_1).
\end{equation*}
The Riesz representation theorem again gives that there is a unique such $u_1$.
We then define $u_2 \in H$ by solving an analogous problem on $\Omega_2$, with $u_0$ replaced by $u_1$. Continuing in this way, we generate a sequence $(u_n)_{n\geq0}$ in $H$. We will show that $u_n$ converges in norm to $u$, the unique weak solution.
For $k\in\{1,2\}$, let $Y_k = H_0^1(\Omega_k)$, viewed as a closed subspace of $H$ by extending functions defined on $\Omega_k$ by zero to all of $\Omega$; let $M_k = Y_k^\perp$, and let $P_k$ be the orthogonal projection onto $M_k$. We also write $M = M_1 \cap M_2$.
Since $u$ and $u_1$ are both weak solutions of the Dirichlet problem on $\Omega_1$, we have that for every $v \in Y_1$,
\begin{equation*}
\begin{aligned}
\langle u-u_1,v \rangle_{H}&= \langle u-u_1,v \rangle_{Y_1} \\
&=\langle \nabla (u-u_1), \nabla v \rangle_{L^2(\Omega_1)} \\
&=\langle \nabla u, \nabla v \rangle_{L^2(\Omega_1)} - \langle \nabla u_1, \nabla v \rangle_{L^2(\Omega_1)} \\
&= \langle f,v \rangle_{L^2(\Omega_1)} - \langle f,v \rangle_{L^2(\Omega_1)} = 0.
\end{aligned}
\end{equation*}
Therefore $u-u_1 \in M_1$. We also note that $u_1-u_0 \in Y_1 = M_1^\perp$. Hence we have
\begin{equation*}
u-u_0 = \underbrace{(u-u_1)}_{\in M_1} + \underbrace{(u_1-u_0)}_{\in M_1^\perp},
\end{equation*}
and so $P_1(u-u_0) = u-u_1$. Similarly, we see that $P_2(u-u_1)= u-u_2$, and so on.
More generally, defining $x_n \in H$ by $x_n = u - u_{n}$ for $n \geq 0$, we have that $x_{2n+2}= P_2P_1 x_{2n}$, and so
\begin{equation*}
x_{2n} = (P_2P_1)^n x_0, \quad n\geq 1.
\end{equation*}
By von Neumann's theorem, we have
\begin{gather*}
\| x_{2n} - P_Mx_0\| \to 0, \\
\|x_{2n+1} - P_Mx_0\| = \|P_1(x_{2n} - P_Mx_0)\| \leq \| x_{2n} - P_Mx_0\| \to 0,
\end{gather*}
as $n \to \infty$, and therefore
\begin{equation*}
\|x_{n} - P_Mx_0\| \to 0 \textnormal{ as } n\to \infty.
\end{equation*}
Since $Y_1^\perp \cap Y_2^\perp = (Y_1+Y_2)^\perp$ (generally true for subspaces of a Hilbert space), and since the space $Y=Y_1 + Y_2$ can be shown to be dense in $H$, we have that $M=Y^\perp = \{0\}$. Hence $x_n \to 0$ as $n \to \infty$, and so
\begin{equation*}
\|u_n - u\| \to 0 \textnormal{ as } n\to \infty.
\end{equation*}
So we have generated a sequence $u_n \in H$ converging in norm to the unique weak solution. In fact, it turns out that $u_n$ converges in norm to the weak solution exponentially fast (although this is not guaranteed if, for example, we were to switch to Neumann boundary conditions); see \cite{Lio88} for more detail.
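To make the iteration above concrete, here is a small numerical sketch (illustrative only, and not part of the argument): it runs the discrete analogue of the alternating scheme for $-u''=1$ on $(0,1)$, with the overlapping subdomains $(0,0.6)$ and $(0.4,1)$ chosen arbitrarily for the demonstration, using central finite differences.

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, h):
    """Solve -u'' = f on (a, b) with u(a) = ua, u(b) = ub by central finite differences."""
    n = int(round((b - a) / h)) - 1          # number of interior nodes
    x = a + h * np.arange(1, n + 1)
    # Tridiagonal matrix discretising -u'' with step h
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x).astype(float)
    rhs[0] += ua / h**2                      # fold boundary values into the right-hand side
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

h = 1.0 / 100
grid = np.linspace(0.0, 1.0, 101)
f = lambda x: np.ones_like(x)
exact = grid * (1 - grid) / 2                # -u'' = 1, u(0) = u(1) = 0

# Overlapping subdomains Omega_1 = (0, 0.6) and Omega_2 = (0.4, 1)
u = np.zeros_like(grid)                      # the initial guess u_0 = 0
for _ in range(30):
    # Solve on Omega_1 with boundary data taken from the current iterate at x = 0.6
    u[1:60] = solve_dirichlet(f, 0.0, 0.6, 0.0, u[60], h)
    # Solve on Omega_2 with boundary data taken from the updated iterate at x = 0.4
    u[41:100] = solve_dirichlet(f, 0.4, 1.0, u[40], 0.0, h)

print(np.max(np.abs(u - exact)))
```

Each sweep solves on $\Omega_1$ with boundary data taken from the current iterate, then on $\Omega_2$ with the updated data; the maximum error against the exact solution $u(x)=x(1-x)/2$ decays geometrically thanks to the overlap.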
Given Halperin's theorem, it is not surprising that we may extend this method to more than two subdomains. This extension is, again, discussed in \cite{Lio88}.
We end this section by remarking how the method of alternating projections was applied in very different ways in the examples above. When solving systems of linear equations, we used it to find an element in the intersection of closed affine subspaces. In contrast, for the Schwarz alternating method, we knew that the intersection of our subspaces was $\{0\}$, and we applied the projections to a sequence whose terms were not known explicitly. This highlights how versatile the method of alternating projections is, and is another reason why it has so many applications.
\section{Convergence in norm} \label{convergence in norm}
In this section, we work through the major results that give conditions for $(x_n)$ to converge in norm, including those by von Neumann ($J=2$), Halperin (periodic projections), and Sakai (quasiperiodic projections).
\subsection{Two closed subspaces}
We will begin by proving von Neumann's theorem, that for a sequence of projections onto two closed subspaces, we are guaranteed convergence in norm \cite{von49}.
\begin{theorem} [von Neumann] \label{von neumann}
Let $P_1,P_2$ be orthogonal projections onto the closed subspaces $M_1,M_2$ of the real or complex Hilbert space $H$, and $P_M$ the orthogonal projection onto $M=M_1\cap M_2$. Then for any $x \in H$, \[\textnormal{$\|(P_2P_1)^n x -P_Mx\| \to 0 $ as $n \to \infty$.}\]
\end{theorem}
Rather than follow von Neumann's proof, we present one that does not appear to feature in the literature yet. Our proof is inspired by \cite{BaDeHu09}, where the following version of the spectral theorem is used in a similar context.
\begin{theorem*}[Spectral theorem] \label{spectral}
Let $H$ be a real or complex Hilbert space, and $T \in B(H)$ be a self-adjoint linear operator. Then there exists a measure space $(\Omega , \Sigma , \mu )$, a unitary map $U \colon H \to L^2(\Omega,\mu)$, and $m\in L^\infty(\Omega,\mu)$ with $m(t) \in \mathbb{R}$ for almost all $t\in \Omega$, such that
\begin{equation*}
UTU^{-1}f = m \cdot f, \quad f \in L^2(\Omega,\mu),
\end{equation*}
where $(m \cdot f)(t) = m(t)f(t)$ for $t \in \Omega$. Here, $\|m\|_\infty = \|T\|$.
\end{theorem*}
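As a finite-dimensional sanity check of the statement (an illustration, not needed for the proof): for a self-adjoint operator on $\mathbb{C}^2$, the multiplication form can be written down explicitly by diagonalising.

```latex
% Illustration: the spectral theorem in multiplication form for T self-adjoint
% on C^2, with orthonormal eigenbasis (e_1, e_2) and eigenvalues lambda_1, lambda_2.
% Take Omega = {1,2}, Sigma its power set, mu the counting measure, so that
% L^2(Omega, mu) is C^2, and let U read off coordinates in the eigenbasis:
\[
  (Ux)(j) = \langle x, e_j \rangle, \qquad m(j) = \lambda_j \in \mathbb{R},
\]
\[
  (UTU^{-1}f)(j) = \lambda_j f(j) = (m \cdot f)(j), \qquad
  \|m\|_\infty = \max_j |\lambda_j| = \|T\|.
\]
```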
The idea is to consider $(P_1P_2P_1)^n$ rather than $(P_2P_1)^n$. The operator $P_1P_2P_1$ being self-adjoint allows us to apply the spectral theorem to shift the problem into some $L^2(\Omega,\mu)$, where we may make use of tools such as the dominated convergence theorem.
\begin{proof}[Proof of Theorem \ref{von neumann}] Let $T=P_1P_2P_1$. Since $P_1$ is idempotent, $T^n = P_1(P_2P_1)^n$ and $(P_2P_1)^{n+1} = P_2T^n$; as $P_1P_M = P_2P_M = P_M$, it follows that $(P_2P_1)^nx$ converges in norm to $P_Mx$ if and only if $T^nx$ does. We will prove the latter.
Since $T$ is self-adjoint, the spectral theorem gives that there exists a measure space $(\Omega , \Sigma , \mu )$, a unitary map $U: H \to L^2(\Omega,\mu)$, and $m\in L^\infty(\Omega,\mu)$ with $m(t) \in \mathbb{R}$ for almost all $t\in \Omega$, such that
\begin{equation*}
UTU^{-1}f = m \cdot f, \quad f \in L^2(\Omega,\mu).
\end{equation*}
We note that for $f \in L^2(\Omega,\mu)$, we have $m\cdot f \in L^2(\Omega,\mu)$ and also
\begin{equation*}
UT^nU^{-1}f = m^n \cdot f,\quad n\geq 0.
\end{equation*}
Let $x \in H$, and set $f=Ux$, so that $UTx = m \cdot f$. We have
\begin{equation*}
\begin{aligned}
\int_\Omega m|f|^2 \,d\mu &= \langle m \cdot f,f \rangle = \langle UTx,Ux \rangle = \langle U^*UTx,x \rangle = \langle Tx,x \rangle = \langle P_1P_2P_1x,x \rangle \\
& = \langle P_2P_1x,P_1x \rangle = \langle P_2^2P_1x,P_1x \rangle = \langle P_2P_1x,P_2P_1x \rangle = \|P_2P_1x\|^2 \geq 0.
\end{aligned}
\end{equation*}
Since $x \in H$ was arbitrary and $U$ is surjective, this holds for every $f \in L^2(\Omega,\mu)$, and hence $m(t) \geq 0$ for almost all $t\in \Omega$. Since $\|m\|_\infty = \|T\| \leq 1$, we also have $m(t) \leq 1$ for almost all $t \in \Omega$.
So for almost all $t \in \Omega$, we have $0\leq m(t) \leq 1$. Hence, defining
\begin{equation*}
\begin{alignedat}{2}
&\widetilde{\Omega} &&= \{ t \in \Omega: m(t)\leq 1\}, \\
&\Omega' &&= \{t \in \Omega : m(t)<1\}, \\
&\Omega^* &&= \{t \in \Omega : m(t)=1\},
\end{alignedat}
\end{equation*}
we have that $\Omega \setminus \widetilde{\Omega}$ has zero measure. Additionally, noting that $ \Omega' \cap \Omega^* = \emptyset$, $\widetilde{\Omega} = \Omega' \cup \Omega^*$, and $1-m= 0$ on $\Omega^*$, we have
\begin{equation} \label{dct inequality}
\begin{aligned}
\Big{(}\|(m^n - m^{n+1})\cdot f\|_{L^2(\Omega,\mu)}\Big{)}^2 &= \int_{\Omega} |m^n(1-m)\cdot f|^2 \,d\mu \\
&= \int_{\widetilde{\Omega}} |m^n(1-m)\cdot f|^2 \,d\mu \\
&= \int_{\Omega'} |m^n(1-m)\cdot f|^2 \,d\mu \\
&\qquad + \int_{\Omega^*} |m^n(1-m)\cdot f|^2 \,d\mu \\
&= \int_{\Omega'} |m^n(1-m) \cdot f|^2 \,d\mu \\
& \leq \int_{\Omega'} |m^n \cdot f|^2 \,d\mu.
\end{aligned}
\end{equation}
We note that $(m(t))^n \to 0$ as $n\to \infty$ for $t \in \Omega'$. We also note that $f \in L^2(\Omega,\mu)$, so $|f|^2$ is integrable and $f$ is finite almost everywhere on $\Omega$, and therefore on $\Omega'$. Hence $(m(t))^n f(t) \to 0$ as $n \to \infty$ for almost all $t \in \Omega'$.
Since $|m^n \cdot f|^2 \leq |f|^2$ on $\Omega'$, and $|f|^2$ is integrable on $\Omega'$, we may apply the dominated convergence theorem to get
\begin{equation} \label{dct}
\lim_{n\to\infty} \int_{\Omega'} |m^n \cdot f|^2 d\mu = \int_{\Omega'} \lim_{n\to\infty} |m^n \cdot f|^2 d\mu = \int_{\Omega'} 0 \textnormal{ } d\mu = 0.
\end{equation}
We note that $\|U^{-1}\| = 1$ (since $U$ is unitary), and that $U(T^n - T^{n+1})U^{-1}f = (m^n - m^{n+1}) \cdot f$. Applying these, along with (\ref{dct inequality}) and (\ref{dct}), gives
\begin{equation} \label{kakutani inequality}
\begin{aligned}
\|T^nx - T^{n+1}x\| &= \|U^{-1}U(T^n - T^{n+1})U^{-1}Ux\| \\
& \leq \|U^{-1}\| \cdot \|U(T^n - T^{n+1})U^{-1}f\|_{L^2(\Omega,\mu)} \\
&= \|U(T^n - T^{n+1})U^{-1}f\|_{L^2(\Omega,\mu)} \\
&= \|(m^n - m^{n+1}) \cdot f\|_{L^2(\Omega,\mu)} \\
&\leq \Big( \int_{\Omega'} |m^n \cdot f|^2 d\mu \Big)^{1/2} \textnormal{ $\to 0$ as $n \to \infty$}.
\end{aligned}
\end{equation}
We know by Lemma \ref{foundational 2}\ref{T*x=x} that $(I-T^*)x = 0 \iff x \in M$. Hence by Lemma \ref{foundational 2}\ref{direct sum ran ker}, we have
\begin{equation*}
H=\overline{\ran(I-T)} \oplus \ker(I-T^*) =\overline{\ran(I-T)} \oplus M.
\end{equation*}
So for any $x \in H$, there is a unique pair $y\in \overline{\ran(I-T)}$ and $z \in M$ such that $x=y+z$. We have by (\ref{kakutani inequality}) that $\|T^nx - T^{n+1}x\| \to 0$ as $n \to \infty$, so we can apply Lemma \ref{foundational 2}\ref{using kakutani} to see that \[\|T^n x - P_Mx\| = \|T^ny + T^n z - z\| = \|T^ny\| \to 0 \, \, \textnormal{ as } n \to \infty,\] thus concluding our proof. \end{proof}
We end this section by remarking that since projections are idempotent, any sequence of projections involving $P_1$ and $P_2$ may be reduced to one where $P_1$ and $P_2$ are alternating. Therefore Theorem \ref{von neumann} does indeed show that if $J=2$, and $(j_n)$ is any sequence, then $(x_n)$ converges in norm.
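The two-subspace case is easy to visualise numerically. The following sketch (with two arbitrarily chosen lines, purely illustrative) iterates $P_2P_1$ for two lines through the origin in $\mathbb{R}^2$; here $M=\{0\}$, and the iterates contract to $0$ geometrically with ratio $\cos^2\theta$, where $\theta$ is the angle between the lines.

```python
import numpy as np

def line_projection(theta):
    """Matrix of the orthogonal projection onto the line spanned by (cos t, sin t)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Two distinct lines through the origin; their intersection is M = {0},
# so (P2 P1)^n x should converge in norm to P_M x = 0.
P1 = line_projection(0.0)            # the x-axis
P2 = line_projection(np.pi / 3)      # a line at 60 degrees

x = np.array([2.0, 1.0])
norms = []
for _ in range(50):
    x = P2 @ (P1 @ x)
    norms.append(np.linalg.norm(x))

print(norms[-1])                     # essentially 0
# After the first step the contraction per iteration is exactly cos^2(theta).
ratios = [norms[k + 1] / norms[k] for k in range(10)]
print(ratios[0], np.cos(np.pi / 3) ** 2)
```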
\subsection{Periodic projections}
In 1962, Halperin improved on von Neumann's theorem to show that $(x_n)$ converges in norm whenever $(j_n)$ is periodic.
\begin{theorem} [Halperin's theorem] \label{halperin}
Let $H$ be a real or complex Hilbert space, $J \geq 2$ an integer, and $M_1,\dots,M_J$ be a collection of closed subspaces of $H$, with $P_j$ the orthogonal projection onto $M_j$ for each $j$. Let $T = P_J \dots P_1$, and let $P_M$ be the orthogonal projection onto the intersection $M = \bigcap_{j=1}^J M_j$. Then for each $x\in H$, \[\textnormal{$\|T^n x - P_M x\| \to 0$ as $n\to \infty$.}\]
\end{theorem}
In this section, we follow a proof by Netyanun and Solomon \cite{NeSo06}, which makes use of Kakutani's lemma \cite{Kak40} to prove Theorem \ref{halperin} in a succinct way.
\subsubsection{Kakutani's lemma}
We begin with the core lemma of the proof, due to Kakutani \cite{Kak40}. We remark that we essentially proved Kakutani's lemma for the special case $J=2$ as part of our proof of Theorem \ref{von neumann} (von Neumann).
\begin{lemma} [Kakutani's lemma] \label{kakutani}
Let $T=P_J\dots P_1$. For each $x \in H$, \[\| T^n x - T^{n+1} x \| \to 0 \textnormal{ as } n \to \infty.\]
\end{lemma}
\begin{proof}
Let $x \in H$. Since $\|P_j\| \leq 1$ for each $j \in \{1,\dots, J\}$, we have $\|T^{n+1} x\| \leq \|T^n x\|$. Therefore $(\|T^n x\|)$ is a monotonically decreasing sequence bounded below by $0$, hence convergent, and so
\begin{equation} \label{decreasing sequence}
\|T^nx\|^2 - \|T^{n+1}x\|^2 \to 0 \textnormal{ as } n \to \infty.
\end{equation}
We now let $Q_0 = I $, and for each $ j \in \{1,\dots ,J\}$, we recursively define $Q_j = P_jQ_{j-1}$. Then
\begin{align*}
&\|T^nx - T^{n+1}x\|^2 \\
&= \|(T^n x - P_1T^n x) + (P_1T^nx - P_2P_1T^n x) + \dots \\
&\quad \quad + (P_{J-1}\dots P_1T^nx - P_J\dots P_1T^n x) \|^2 \\
&= \big{\|}\sum_{j=0}^{J-1} (Q_jT^nx - Q_{j+1}T^nx)\big{\|}^2 \\
&\leq \Big{(}\sum_{j=0}^{J-1} \|Q_jT^nx - Q_{j+1}T^nx\| \Big{)}^2 && \textnormal{$\big{[}$triangle inequality$\big{]}$}\\
&\leq J \sum_{j=0}^{J-1} \|Q_jT^nx - Q_{j+1}T^nx\|^2 && \textnormal{$\Big{[}\big{(}\sum_{j=0}^{J-1} a_j\big{)}^2 \leq J\sum_{j=0}^{J-1} {a_j}^2 \Big{]}$} \\
& = J \sum_{j=0}^{J-1} (\|Q_jT^nx\|^2 - \|Q_{j+1}T^nx\|^2) && \textnormal{$\big{[}$Lemma \ref{foundational}\ref{projection equality}$\big{]}$}\\
&= J(\|Q_0T^nx\|^2 - \|Q_JT^nx\|^2) && \textnormal{$\big{[}$telescoping series$\big{]}$} \\
& = J(\|T^nx\|^2 - \|T^{n+1}x\|^2) \to 0 \textnormal{ as } n \to \infty. && \textnormal{$\big{[}Q_J=T$ and (\ref{decreasing sequence})$\big{]}$}
\end{align*}
This concludes the proof.
\end{proof}
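Kakutani's lemma is also easy to check numerically. The sketch below (with three arbitrarily chosen lines, purely illustrative) takes $T = P_3P_2P_1$ for three lines through the origin in $\mathbb{R}^2$ and records $\|T^nx - T^{n+1}x\| = \|T^n(x - Tx)\|$, which is non-increasing (since $\|T\|\leq1$) and tends to $0$.

```python
import numpy as np

def line_projection(theta):
    """Orthogonal projection onto the line spanned by (cos t, sin t) in R^2."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Three distinct lines through the origin: pairwise intersections are {0}.
P1 = line_projection(0.0)
P2 = line_projection(np.pi / 3)
P3 = line_projection(2 * np.pi / 3)
T = P3 @ P2 @ P1

x = np.array([1.0, 0.0])
diffs = []
y = x
for n in range(20):
    y_next = T @ y
    diffs.append(np.linalg.norm(y - y_next))
    y = y_next

# ||T^n x - T^{n+1} x|| = ||T^n (x - Tx)|| is non-increasing (as ||T|| <= 1)
# and tends to 0, as Kakutani's lemma predicts.
print(diffs[0], diffs[-1])
```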
\subsubsection{Proving Halperin's theorem}
We are now ready to prove Theorem \ref{halperin}.
\begin{proof}[Proof of Theorem \ref{halperin}]
As in the proof of Theorem \ref{von neumann}, we have by Lemma \ref{foundational 2}\ref{T*x=x} that $(I-T^*)x = 0 \iff x \in M$. Hence by Lemma \ref{foundational 2}\ref{direct sum ran ker},
\begin{equation*}
H=\overline{\ran(I-T)} \oplus \ker(I-T^*) =\overline{\ran(I-T)} \oplus M.
\end{equation*}
So for any $x \in H$, there is a unique pair $y\in \overline{\ran(I-T)}$ and $z \in M$ such that $x=y+z$. By Lemma \ref{kakutani} (Kakutani), we have $\|T^nx - T^{n+1}x\| \to 0$ as $n \to \infty$. Therefore, we can apply Lemma \ref{foundational 2}\ref{using kakutani} to see that \[\|T^n x - P_Mx\| = \|T^ny + T^n z - z\| = \|T^ny\| \to 0 \, \, \textnormal{ as } n \to \infty,\] thus completing the proof.\end{proof}
We remark that it is in fact relatively simple to extend Theorem \ref{halperin} (Halperin) so that, instead of projections, we consider contractions that are non-negative. The proof of Theorem \ref{halperin} then goes through verbatim; the only additional ingredient needed is that, for a non-negative contraction $A$, we have
\begin{equation} \label{inequality for kakutani}
\|x-Ax\|^2 \leq \|x\|^2 - \|Ax\|^2, \quad x\in H.
\end{equation}
In Lemma \ref{foundational}\ref{projection equality}, we proved (\ref{inequality for kakutani}) for the special case where $A$ is a projection. It turns out it is also true more generally when $A$ is a non-negative contraction; a proof can be found in \cite{NeSo06}. We note that this extension was in fact first proved by Amemiya and Ando \cite{AmAn65}, and more recently by Bauschke, Deutsch, Hundal and Park \cite{BaDeHuPa03}.
\subsection{Quasiperiodic projections}
It turns out we can generalise Theorem \ref{halperin} (Halperin) by finding a weaker condition than periodicity on the sequence of projections under which we still have convergence in norm. In 1995, Sakai proved that $(x_n)$ converges in norm whenever the sequence of projections is so-called \textit{quasiperiodic} \cite{Sak95}. Before defining what it means for a sequence to be quasiperiodic, or formally stating Sakai's theorem, we remind ourselves of our usual setting.
Let $H$ be a real or complex Hilbert space, $J\geq 2$ be an integer, and $M_1,\dots ,M_J$ be a family of closed subspaces of $H$ with intersection $M=\bigcap_{j=1}^J M_j$. Given a closed subspace $Y$ of $H$, we write $P_Y$ for the orthogonal projection onto $Y$, and for ease of notation, we write $P_1,\dots, P_J$ for the orthogonal projections onto the closed subspaces $M_1,\dots,M_J$ respectively. Let $(j_n)_{n\geq1}$ be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq0}$ by picking an arbitrary element $x_0 \in H$, and letting
\begin{equation*}
x_n = P_{j_n}x_{n-1}, \quad n\geq 1.
\end{equation*}
For ease of notation, we write
\begin{equation*}
s = (j_n)_{n\geq1}.
\end{equation*}
We now define what it means for a sequence to be quasiperiodic.
\begin{definition*}
Consider a sequence $s= (j_n)_{n\geq1}$, where each $j_n \in \{1,\dots,J\}$. We say that $s$ is \textit{quasiperiodic} if each $i \in \{1,\dots,J\}$ appears in $s$ infinitely many times, and that for each such $i$,
\begin{equation*}
I(s,i) = \sup_n\big{(}k_n(i) - k_{n-1}(i)\big{)}
\end{equation*}
is finite, where $k_0(i) = 0$, and $(k_n(i))_{n\geq1}$ is the increasing sequence of all natural numbers such that $j_{k_n(i)}=i$.
Put more simply, $s$ is quasiperiodic if, for each $i \in \{1,\dots,J\}$, the gap between consecutive appearances of $i$ in $s$ is bounded (by $I(s,i) < \infty$).
\end{definition*}
\begin{theorem}[Sakai's Theorem] \label{sakai theorem}
If $(j_n)$ is a quasiperiodic sequence, then $(x_n)$ converges in norm to the orthogonal projection of $x_0$ onto $M= \bigcap_{j=1}^J M_j$.
\end{theorem}
We follow Sakai's proof of this result \cite{Sak95}, splitting it into small subsections, each highlighting a key element or idea of the proof.
\subsubsection{A criterion for convergence}
We begin by proving a lemma which gives us a criterion for $(x_n)$ to converge in norm.
\begin{lemma} \label{key result sakai}
Suppose there is a constant $A$ (which may depend on the sequence $s$), such that
\begin{equation} \label{key inequality}
\|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2, \quad n>m\geq 1.
\end{equation}
Then the sequence $(x_n)$ converges in norm.
\end{lemma}
\begin{proof}
Since $x_{k+1}$ is the orthogonal projection of $x_k$ onto $M_{j_{k+1}}$, by Lemma \ref{foundational}\ref{projection equality} we have
\begin{equation*}
\|x_{k+1}\|^2 + \|x_{k} - x_{k+1}\|^2 = \|x_{k+1}\|^2 + \|x_{k}\|^2 - \|x_{k+1}\|^2 = \|x_k\|^2.
\end{equation*}
Hence $(\|x_k\|)$ is monotonically decreasing. Adding the equalities from $k=m$ to $k=n-1$, we obtain
\begin{equation*}
\|x_{m}\|^2 = \|x_n\|^2 + \sum_{k=m}^{n-1} \|x_{k+1} - x_{k}\|^2.
\end{equation*}
Therefore (\ref{key inequality}) is equivalent to
\begin{equation*}
\|x_n - x_m\|^2 \leq A(\|x_{m}\|^2 - \|x_n\|^2).
\end{equation*}
Since $(\|x_k\|)$ is monotonically decreasing and bounded below by $0$, the sequence $(\|x_k\|^2)$ converges to some limit $c\geq0$. In particular, given $\varepsilon >0$, there exists $K \in \mathbb{N}$ such that whenever $n \geq K$,\[ 0 \leq \|x_n\|^2 - c \leq \frac{\varepsilon}{2A}.\]
Therefore for $n,m\geq K$, we have
\begin{equation*}
\begin{aligned}
\|x_n - x_m\|^2 &\leq A(\|x_{m}\|^2 - \|x_n\|^2) \\
&\leq A\big{|} \|x_m\|^2 - c\big{|} + A\big{|}c - \|x_n\|^2 \big{|} \\
& \leq A\cdot \frac{\varepsilon}{2A} + A\cdot \frac{\varepsilon}{2A} = \varepsilon.
\end{aligned}
\end{equation*}
Hence $(x_n)$ is a Cauchy sequence, and so converges in norm (since $H$, being a Hilbert space, is complete).
\end{proof}
So under the conditions of Lemma \ref{key result sakai}, we have that the sequence $(x_n)$ converges in norm to some limit, say $x_\infty$. In particular, it also converges weakly to this limit. In the next lemma, we show that under the additional assumption that each projection appears infinitely many times in the sequence $(P_{j_n})_{n\geq 1}$, we have that $x_\infty = P_Mx_0$.
\begin{lemma} \label{key result sakai 2}
Suppose the sequence $(x_n)$ converges weakly. Suppose also that $s = (j_n)$ takes every value in $\{1,\dots,J\}$ infinitely many times. Then the limit is the orthogonal projection of $x_0$ onto $M= \bigcap_{j=1}^J M_j$.
\end{lemma}
\begin{proof}
By assumption, $(x_n)$ converges weakly to some limit, say $x_\infty$. Each $j \in \{1,\dots,J\}$ occurs infinitely many times in $s$, and so there is some subsequence $(x_{n_k})_{k\geq1}$ such that each $x_{n_k} \in M_j$. Then for every $y \in M_j^\perp$, we have $\langle x_{n_k},y \rangle =0$, and therefore
\begin{equation*}
\langle x_\infty , y \rangle = \lim_{k\to\infty} \langle x_{n_k}, y \rangle = \lim_{k\to\infty}0 = 0.
\end{equation*}
Hence $x_\infty \in M_j$ for each $j \in \{1,\dots,J\}$, and so $x_\infty \in M$.
To show that $x_\infty$ is the orthogonal projection of $x_0$ onto $M$, it suffices to show that $x_0 - x_\infty \in M^\perp$, since then we would have
\begin{equation*}
x_0 = \underbrace{x_\infty}_{\in M} + \underbrace{x_0 - x_\infty}_{\in M^\perp}.
\end{equation*}
Let $x \in M$. For every $n\geq0$, we have $(I-P_{j_{n+1}})x_n \in (M_{j_{n+1}})^\perp$ and $x \in M_{j_{n+1}}$. Therefore,
\begin{align*}
\langle x_n - x_{n+1},x \rangle &= \langle x_n - P_{j_{n+1}}x_n,x \rangle \\
&= \langle (I-P_{j_{n+1}})x_n, x \rangle \\
&=0.
\end{align*}
Adding these from $n=0$ to $n=h-1$, we have $\langle x_0 - x_h,x \rangle=0$, and so
\begin{equation*}
\langle x_0 - x_\infty,x \rangle = \lim_{h\to \infty} \langle x_0 - x_h,x \rangle = \lim_{h\to \infty} 0 = 0.
\end{equation*}
Hence $x_0 - x_\infty \in M^\perp$, and so $(x_n)$ converges weakly to the orthogonal projection of $x_0$ onto $M$.
\end{proof}
We now state a simple corollary of Lemmas \ref{key result sakai} and \ref{key result sakai 2}.
\begin{corollary} \label{key result sakai 3}
Suppose the sequence $s$ is quasiperiodic. Suppose also that there is a constant $A$ (which may depend on the sequence $s$), such that
\begin{equation} \label{key inequality 2}
\|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2, \quad n>m\geq 1.
\end{equation}
Then the sequence $(x_n)$ converges in norm to the orthogonal projection of $x_0$ onto $M$.
\end{corollary}
\begin{proof}
By Lemma \ref{key result sakai}, we immediately have that $(x_n)$ converges in norm, and so in particular converges weakly to some limit, say $x_\infty$. Since $s$ is quasiperiodic, it takes every value in $\{1,\dots,J\}$ infinitely many times. Therefore, Lemma \ref{key result sakai 2} gives that $x_\infty = P_Mx_0$.
\end{proof}
In particular, given a quasiperiodic sequence $s$, if we can find a constant $A$ such that (\ref{key inequality 2}) holds, then Theorem \ref{sakai theorem} (Sakai) follows immediately. The rest of this section involves finding such a constant.
\subsubsection{Useful lemmas}
We proceed to prove two simple, but useful, lemmas. These will be used in later parts of the proof of Theorem \ref{sakai theorem} (Sakai).
\begin{lemma} \label{easy inequality sakai}
Let $y_1, y_2, \dots , y_N, y_{N+1} \in H$. Then
\begin{equation*}
\|y_{N+1} - y_1\|^2 \leq N\sum_{k=1}^N \|y_{k+1} - y_k\|^2.
\end{equation*}
\end{lemma}
\begin{proof}
Applying the triangle inequality along with $\big{(}\sum_{k=1}^{N} a_k\big{)}^2 \leq N\sum_{k=1}^{N} a_k^2$, we have
\begin{align*}
\|y_{N+1} - y_1\|^2 &= \Big{\|}\sum_{k=1}^N (y_{k+1} - y_k) \Big{\|}^2 \\
&\leq \Big{(}\sum_{k=1}^N \|y_{k+1} - y_k\|\Big{)}^2 \\
&\leq N\sum_{k=1}^N \|y_{k+1} - y_k\|^2. \qedhere
\end{align*}
\end{proof}
\begin{lemma} \label{small lemma}
Let $P$ be the orthogonal projection onto a closed subspace of $H$, and let $x,y \in H$. Then
\begin{enumerate}[label=(\alph*)]
\item $\|x-Py\|^2 \leq \|x-y\|^2 + \|x-Px\|^2$. \label{small lemma a}
\item $\|x-y\|^2 \leq \|x-Py\|^2 + \|x-Px\|^2 + 2\|y-Py\|^2$. \label{small lemma b}
\end{enumerate}
\end{lemma}
\begin{proof}
For (a), we note that $x-Px \perp P(x-y)$, and so Lemma \ref{foundational}\ref{pythagoras} gives
\begin{align*}
\|x-Py\|^2 &= \|x-Px+P(x-y)\|^2 \\
& = \|x-Px\|^2 + \|P(x-y)\|^2 \\
&\leq \|x-Px\|^2 + \|x-y\|^2.
\end{align*}
For (b), we begin by noting that since $Px,Py \perp y-Py$, we have
\begin{align*}
\langle x-Py,y-Py \rangle &= \langle x,y-Py \rangle = \langle x-Px,y-Py \rangle.
\end{align*}
Therefore, appealing to the Cauchy-Schwarz inequality and noting that $2ab \leq a^2+b^2$, we have
\begin{align*}
\|x-y\|^2 &= \|x-Py-(y-Py)\|^2 \\
&\leq \|x-Py\|^2 + \|y-Py\|^2 + 2|\langle x-Py,y-Py \rangle| \\
&= \|x-Py\|^2 + \|y-Py\|^2 + 2|\langle x-Px,y-Py \rangle| \\
&\leq \|x-Py\|^2 + \|y-Py\|^2 + 2\|x-Px\|\|y-Py\| \\
&\leq \|x-Py\|^2 + \|y-Py\|^2 + \|x-Px\|^2 + \|y-Py\|^2 \\
&= \|x-Py\|^2 + \|x-Px\|^2 + 2\|y-Py\|^2. \qedhere
\end{align*}
\end{proof}
\subsubsection{Two statements implying Sakai's theorem}
There are two steps left in the proof. We will first find two statements from which Theorem \ref{sakai theorem} (Sakai) follows, and then we will show these statements are true. In this subsection, we do the former.
We begin by defining $I = I(s) = \sup_{1\leq j\leq J} I(s,j)$, and \[S_l = \sum_{k=l}^{l+I-2} \|x_{k+1} - x_k\|^2.\]
By Corollary \ref{key result sakai 3}, to prove Theorem \ref{sakai theorem} (Sakai), it suffices to show that
\begin{equation} \label{want to prove sakai}
\|x_n - x_m\|^2 \leq \big{(}(I(s)-1)(I(s)-2)+3\big{)} \sum_{k=m}^{n-1} \|x_{k+1} - x_k\|^2, \quad n>m\geq1.
\end{equation}
Let $n > m \geq 1$. Suppose first that \[ \textnormal{(a)} \quad n-m \leq 2I-3. \] Then by Lemma \ref{easy inequality sakai} (with $N=n-m$, $y_1 = x_m$, $y_2 = x_{m+1}$,$\dots$, $y_N=x_{n-1}$, $y_{N+1} = x_n$), we have
\begin{equation} \label{almost there}
\|x_n-x_m\|^2 \leq (n-m)\sum_{k=m}^{n-1} \|x_{k+1} - x_k\|^2.
\end{equation}
Since $n-m \leq 2I-3 \leq (I-1)(I-2) + 3$, we see that (\ref{want to prove sakai}) holds.
We may therefore assume that \[\textnormal{(b)} \quad n-m \geq 2I-2. \] We now note that in order to show (\ref{want to prove sakai}) holds, it is sufficient to prove the following statements.
\noindent (i) If $S_{n-I+1} \leq S_m$, then
\begin{equation*}
\|x_n - x_m\|^2 \leq \|x_n - x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m.
\end{equation*}
(ii) If $S_m < S_{n-I+1}$, then
\begin{equation*}
\|x_n - x_m\|^2 \leq \|x_{n-I+1} - x_m\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_{n-I+1}.
\end{equation*}
Indeed, if we have (i) and (ii), then we can apply them repeatedly (whichever of the two we are able to apply at each step), until we are in case (a). For example, suppose we have just applied (i) to $\|x_n - x_m\|^2$. Then either $n-(m+I-1) \geq 2I-2$, so that we can apply one of (i) or (ii) to $\|x_n - x_{m+I-1}\|^2$, or $n-(m+I-1) \leq 2I-3$, so that we are in case (a) and we get, as in (\ref{almost there}),
\begin{equation*}
\|x_n - x_{m+I-1}\|^2 \leq (n-(m+I-1))\sum_{k=m+I-1}^{n-1} \|x_{k+1} - x_k\|^2.
\end{equation*}
After repeated applications of (i) or (ii), and once we are in case (a) so that we have a similar inequality to (\ref{almost there}), we obtain (\ref{want to prove sakai}), and we are done.
\subsubsection{Proving Sakai's theorem}
In the last section, we showed that in order to prove Theorem \ref{sakai theorem} (Sakai), it suffices to prove that for $n-m \geq 2I-2$, the following two statements hold.
\noindent (i) If $S_{n-I+1} \leq S_m$, then
\begin{equation*}
\|x_n - x_m\|^2 \leq \|x_n - x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m.
\end{equation*}
(ii) If $S_m < S_{n-I+1}$, then
\begin{equation*}
\|x_n - x_m\|^2 \leq \|x_{n-I+1} - x_m\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_{n-I+1}.
\end{equation*}
\begin{proof}[Proof of (i)] For $k \in \{m,m+1, \dots, m+I-2\}$, applying Lemma \ref{small lemma}\ref{small lemma b} with $x=x_n,y=x_k$, $P=P_{j_{k+1}}$, and setting $p_{j_{k+1}}=P_{j_{k+1}}x_n$, we have
\begin{equation*}
\|x_n-x_k\|^2 \leq \|x_n-x_{k+1}\|^2 + \|x_n - p_{j_{k+1}}\|^2 + 2\|x_{k+1} - x_k\|^2.
\end{equation*}
Applying this inequality one by one to each of $\|x_n-x_m\|^2$, $\|x_n-x_{m+1}\|^2,\dots,$ $\|x_n-x_{m+I-2}\|^2$, we obtain
\begin{equation} \label{intermediate inequality}
\begin{aligned}
\|x_n-x_m\|^2 &\leq \|x_n-x_{m+1}\|^2 + \|x_n - p_{j_{m+1}}\|^2 + 2\|x_{m+1} - x_m\|^2 \\
&\leq \dots \leq \|x_n-x_{m+I-1}\|^2 + \sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m.
\end{aligned}
\end{equation}
The set $\{ x_{n-I+1},x_{n-I+2},\dots , x_n\}$ consists of $I$ consecutive elements of the sequence $(x_n)$. So by definition of $I$, at least one of these elements, say $x_h$, belongs to $M_{j_{k+1}}$. We choose the largest such number $h$, and denote it by $h(j_{k+1})$.
Since $p_{j_{k+1}}=P_{j_{k+1}}x_n$ is the projection of $x_n$ onto $M_{j_{k+1}}$, and $x_{h(j_{k+1})} \in M_{j_{k+1}}$, Lemma \ref{foundational}\ref{projection is onto closest point} gives
\begin{equation} \label{closest point inequality}
\|x_n - p_{j_{k+1}}\|^2 \leq \|x_n - x_{h(j_{k+1})}\|^2.
\end{equation}
Applying Lemma \ref{easy inequality sakai}, we obtain
\begin{equation} \label{intermediate inequality 2}
\begin{aligned}
\|x_n - x_{h(j_{k+1})}\|^2 & \leq (n-h(j_{k+1})) \sum_{l=h(j_{k+1})}^{n-1} \|x_{l+1} - x_l\|^2 \\
& \leq (n-h(j_{k+1})) S_{n-I+1}.
\end{aligned}
\end{equation}
Since $k \in \{m,\dots,m+I-2\}$, $k$ ranges over $I-1$ consecutive numbers. Therefore, there is some number $a$ in this range such that $M_{j_{a+1}}$ is equal to one of $M_{j_{n-1}}$ or $M_{j_{n}}$. Rephrasing this, there is some $a \in \{m,\dots,m+I-2\}$ for which $n-h(j_{a+1})$ is equal to $0$ or $1$. Since $ 0 \leq n-h(j_{k+1}) \leq I-1$, we have
\begin{equation} \label{intermediate inequality 3}
\begin{aligned}
&\sum_{k=m}^{m+I-2} \big{(}n-h(j_{k+1})\big{)} \\
&= \sum_{k=m}^{a-1} \big{(}n-h(j_{k+1})\big{)} + \big{(}n-h(j_{a+1})\big{)} + \sum_{k=a+1}^{m+I-2} \big{(}n-h(j_{k+1})\big{)} \\
& \leq \Big{(}\sum_{k=m}^{a-1} (I-1) \Big{)} + 1 + \Big{(}\sum_{k=a+1}^{m+I-2} (I-1) \Big{)} \\
& \leq (I-1)(I-2) + 1.
\end{aligned}
\end{equation}
Hence applying (\ref{closest point inequality}), (\ref{intermediate inequality 2}) and (\ref{intermediate inequality 3}) in that order, and recalling that $S_{n-I+1} \leq S_m$ (by assumption), we have
\begin{equation} \label{intermediate inequality 4}
\begin{aligned}
\sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m & \leq \sum_{k=m}^{m+I-2} \|x_n - x_{h(j_{k+1})}\|^2 + 2S_m \\
& \leq \sum_{k=m}^{m+I-2} (n-h(j_{k+1})) S_{n-I+1} + 2S_m \\
& \leq \big{(}(I-1)(I-2) + 1\big{)}S_{n-I+1} + 2S_m \\
& \leq \big{(}(I-1)(I-2) + 3\big{)}S_m.
\end{aligned}
\end{equation}
So finally, by (\ref{intermediate inequality}) and (\ref{intermediate inequality 4}), we have
\begin{equation*}
\begin{aligned}
\|x_n-x_m\|^2 & \leq \|x_n-x_{m+I-1}\|^2 + \sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m \\
& \leq \|x_n-x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m,
\end{aligned}
\end{equation*}
and so (i) is proved.
\end{proof}
\begin{proof}[Proof of (ii)]
For each $k \in \{n-I+1,\dots, n-1\}$, applying Lemma \ref{small lemma}\ref{small lemma a} with $x=x_m$, $y=x_k$, $P=P_{j_{k+1}}$, and setting $p_{j_{k+1}}=P_{j_{k+1}}x_m$, we have
\begin{equation*}
\|x_m - x_{k+1}\|^2 \leq \|x_m - x_k\|^2 + \|x_m - p_{j_{k+1}} \|^2.
\end{equation*}
Applying these inequalities repeatedly (as in the proof of (i)), we obtain
\begin{equation} \label{intermediate inequality b1}
\|x_m - x_n\|^2 \leq \|x_m - x_{n-I+1}\|^2 + \sum_{k=n-I+1}^{n-1} \|x_m - p_{j_{k+1}}\|^2.
\end{equation}
An argument similar to (\ref{intermediate inequality 4}) shows that we have
\begin{equation} \label{intermediate inequality b2}
\sum_{k=n-I+1}^{n-1} \|x_m - p_{j_{k+1}}\|^2 \leq \big{(}(I-1)(I-2) + 1\big{)}S_m.
\end{equation}
Combining (\ref{intermediate inequality b1}) and (\ref{intermediate inequality b2}), and recalling that $S_m<S_{n-I+1}$, we obtain (ii) and so the proof is complete.
\end{proof}
\subsubsection{Concluding remarks} \label{concluding remarks}
Sakai's paper ends by posing several open questions about the convergence of sequences of projections. He also mentions a few simple results. In this subsection we briefly discuss some of the questions and ideas raised by Sakai.
The first question he poses is the following. For an arbitrary sequence $s$, does (\ref{key inequality}) always hold with $A=J-1$?
We remark that this question appears not to have been addressed in the literature, perhaps because it can be resolved easily given a result of Kopeck\'a and Paszkiewicz \cite{KoPa17} (stated later as Theorem \ref{big result kopecka}). We resolve Sakai's question in Corollary \ref{Sakai open question}, where we find a sequence $s$ for which (\ref{key inequality}) does not hold for any constant $A$.
Another interesting question posed by Sakai is whether we still have convergence in norm in the case that $J= \infty$ and $(j_n)$ is quasiperiodic. It is worth noting that quasiperiodic sequences taking every positive integer as a value do exist. We offer the following example:
\begin{equation*}
1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,5,\dots
\end{equation*}
That is, the sequence in which $1$ appears at every $2$\textsuperscript{nd} position, $2$ at every $4$\textsuperscript{th} position, and in general $n$ at every $2^n$-th position.
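This is the classical ruler sequence, and it can be generated directly: the $n$-th term is one plus the exponent of the largest power of $2$ dividing $n$. A short sketch (illustrative only):

```python
def ruler_term(n):
    """n-th term (n >= 1): one plus the exponent of the largest power of 2 dividing n."""
    count = 1
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

seq = [ruler_term(n) for n in range(1, 17)]
print(seq)   # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5]
```

The value $i$ occurs exactly at the positions divisible by $2^{i-1}$ but not by $2^i$, so consecutive occurrences of $i$ are $2^i$ apart, and hence $I(s,i)=2^i<\infty$ for each $i$.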
More generally, for the case $J=\infty$, a quasiperiodic sequence always has $I = \sup_{j\in \mathbb{N}}I(s,j) = \infty$, and so the argument in the proof of Theorem \ref{sakai theorem} (Sakai) does not extend to this case. However, even when $J= \infty$, we can still show convergence for special cases.
For example, suppose the sequence of closed subspaces $(M_j)_{j\geq1}$ is monotonically decreasing (i.e. $M_1\supseteq M_2 \supseteq M_3 \supseteq \dots $). Then we have
\begin{equation} \label{commuting projections}
P_bP_a = P_aP_b = P_b, \quad b\geq a \geq1.
\end{equation}
Consider the sequence $s$ given by $j_n = n$ for every $n \in \mathbb{N}$. Applying (\ref{commuting projections}) and Lemma \ref{foundational}\ref{projection equality}, and then telescoping, gives that for $n\geq m \geq 1$,
\begin{equation*}
\|x_n - x_m\|^2 = \|x_m\|^2 - \|x_n\|^2 = \sum_{k=m}^{n-1} \big{(}\|x_k\|^2 - \|x_{k+1}\|^2\big{)} = \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2.
\end{equation*}
Hence (\ref{key inequality}) holds with $A=1$, and so $(x_n)$ converges in norm.
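The identity $\|x_m - x_n\|^2 = \|x_m\|^2 - \|x_n\|^2$ for nested subspaces can be checked numerically in a finite-dimensional model. The Python sketch below is our illustration only; the coordinate subspaces $M_j = \operatorname{span}\{e_1,\dots,e_{d-j}\}$ are chosen purely for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
x0 = rng.standard_normal(d)

# Nested subspaces M_1 ⊇ M_2 ⊇ ...: M_j = span{e_1,...,e_{d-j}},
# so projecting onto M_j just zeroes the last j coordinates,
# and P_b P_a = P_a P_b = P_b for b >= a.
def project(v, j):
    w = v.copy()
    w[d - j:] = 0.0
    return w

xs = [x0]
for j in range(1, d + 1):        # the sequence j_n = n
    xs.append(project(xs[-1], j))

# ||x_m - x_n||^2 = ||x_m||^2 - ||x_n||^2 for n >= m (orthogonal increments)
m, n = 2, 6
lhs = np.linalg.norm(xs[m] - xs[n]) ** 2
rhs = np.linalg.norm(xs[m]) ** 2 - np.linalg.norm(xs[n]) ** 2
print(abs(lhs - rhs) < 1e-12)  # True
```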
Finally, we note that in his paper, Sakai observed that if at least one of the $J$ subspaces is finite-dimensional, then for any sequence $(j_n)$, we have that $(x_n)$ converges in norm. To prove this, we will need to make use of a result by Amemiya and Ando, to be stated later as Theorem \ref{amemiya}. We therefore defer this result (and its proof) to Section \ref{amemiya section}, where we state it as Lemma \ref{Sakai mistake}.
\section{Weak convergence} \label{amemiya section}
All of our results so far have had some restriction on the sequence of projections, or on the Hilbert space $H$. It is natural to ask what happens if we do not have such restrictions. In 1965, Amemiya and Ando proved that for any sequence of projections, we always have weak convergence \cite{AmAn65}.
\begin{theorem} \label{amemiya}
Let $H$ be a real or complex Hilbert space, $J \geq 2$ an integer, and $M_1,\dots ,M_J$ a family of closed subspaces in $H$. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $(j_n)$ be any sequence taking values in $\{1,\dots,J\}$. Let $x_0 \in H$ be a vector, and let $(x_n)$ be the sequence defined by
\begin{equation*}
x_n = P_{j_n}x_{n-1}, \quad n\geq1.
\end{equation*}
Then $x_n$ converges weakly as $n\to \infty$.
\end{theorem}
In fact, Amemiya and Ando proved a slightly stronger result about contractions \cite{AmAn65}, of which Theorem \ref{amemiya} is a corollary. By proving Theorem \ref{amemiya} directly, we are able to simplify their proof. Additionally, by including details not originally present in \cite{AmAn65}, we aim to make the proof easier to follow.
We present our proof through a series of four lemmas.
For simplicity, we write `neighbourhood' to mean a basic weakly open neighbourhood of $0$ in $H$. We also write $B_H$ for the closed unit ball in $H$.
\begin{lemma} \label{neighbourhood}
For any neighbourhood $U$, and any $j \in \{1,\dots,J\}$, there is an $\varepsilon = \varepsilon(j) > 0 $ such that for $x\in B_H$
\begin{equation*}
\|P_jx\| \geq 1-\varepsilon \implies (I-P_j)x \in U.
\end{equation*}
\end{lemma}
\begin{proof}
Let $U$ be a neighbourhood, and $x \in B_H$. Then there are $y_1,\dots ,y_r \in H$ and $\delta >0$, such that
\begin{equation*}
U = \{z \in H: |\langle z,y_k\rangle| < \delta \textnormal{ for each } 1\leq k \leq r \}.
\end{equation*}
Let $\varepsilon$ be small enough such that $0 < \eta \sqrt{2\varepsilon - \varepsilon^2}<\delta$, where $\eta = \max \{\|y_k\| : 1\leq k \leq r \}$. For example, $\varepsilon = \min \{1, \frac{\delta^2}{2\eta ^2}\}$ works. Suppose that $\|P_jx\| \geq 1-\varepsilon$. Then by Lemma \ref{foundational}\ref{projection equality},
\begin{equation*}
\begin{aligned}
\|(I-P_j)x\|^2 = \|x-P_jx\|^2 = \|x\|^2 - \|P_jx\|^2 \leq 1 - (1-\varepsilon)^2 = 2\varepsilon - \varepsilon^2.
\end{aligned}
\end{equation*}
Hence by the Cauchy--Schwarz inequality, for any $k \in \{1,\dots ,r\}$,
\begin{equation*}
|\langle (I-P_j)x,y_k \rangle| \leq \|(I-P_j)x\| \|y_k\| \leq \eta \sqrt{2\varepsilon - \varepsilon^2} < \delta,
\end{equation*}
and thus $(I-P_j)x \in U$.
\end{proof}
\begin{lemma} \label{commutes}
Let $Q_j$ be the orthogonal projection onto $\ker(I-P_j\dots P_1)$. Then for each $k \in \{1,\dots ,j\}$, $Q_j$ and $P_k$ commute.
\end{lemma}
\begin{proof}
By Lemma \ref{foundational 2}\ref{useful for amemiya}, for each $x\in H$, \[Q_j x \in \ker(I-P_j\dots P_1) = \bigcap_{k=1}^j\ker(I-P_k).\] Therefore for each $k \in \{1,\dots ,j\}$, we have $(I-P_k)Q_jx = 0$, and so $P_kQ_jx = Q_jx$. Hence,
\begin{equation} \label{equality P Q}
P_kQ_j = Q_j.
\end{equation}
Since $P_k$ and $Q_j$ are self-adjoint, \[Q_j = (Q_j)^* = (P_kQ_j)^* = (Q_j)^*(P_k)^* = Q_jP_k,\] and so $P_kQ_j = Q_j = Q_jP_k$.
\end{proof}
\begin{lemma} \label{new neighbourhood}
Let $R_j = I - Q_j$. Then for any neighbourhood $U$, there is another neighbourhood $V$ such that for $x \in B_H$,
\begin{equation*}
(I-P_k)x \in V, \quad k \in \{1,\dots ,j\} \implies R_jx \in U.
\end{equation*}
\end{lemma}
\begin{proof}
Let $H^j$ be the Cartesian product $H\times \dots \times H$ ($j$ times), viewed as a Hilbert space with addition and scalar multiplication given by
\begin{equation*}
(u_1,\dots,u_j) + \lambda(v_1,\dots,v_j) = (u_1 + \lambda v_1, \dots, u_j + \lambda v_j), \quad u_i,v_i \in H, \, \, \lambda \in \mathbb{F},
\end{equation*}
and inner product given by
\begin{equation*}
\big{\langle}(u_1,\dots,u_j),(v_1,\dots,v_j)\big{\rangle}_{H^j} = \langle u_1,v_1\rangle + \dots + \langle u_j,v_j\rangle, \quad u_i,v_i \in H.
\end{equation*}
Therefore the norm on $H^j$ is
\begin{equation*}
\|(u_1,\dots,u_j)\|_{H^j} = \sqrt{\|u_1\|^2 + \dots + \|u_j\|^2}, \quad u_1,\dots,u_j \in H.
\end{equation*}
We consider the map $g\colon R_j(H) \to H^j$ given by \[g(R_jx) = \big{(}(I-P_1)x,\dots ,(I-P_j)x\big{)}, \quad x \in H,\] where both spaces are endowed with their respective weak topologies. Lemma \ref{foundational 2}\ref{useful for amemiya} gives that for any $x\in H$,
\begin{equation*}
\begin{aligned}
(I-P_k)x = 0, \quad k \in \{1,\dots ,j\} &\iff x\in \bigcap_{k=1}^j \ker(I-P_k) \\
&\iff x\in \ker(I-P_j\dots P_1) \\
&\iff Q_jx = x \\
&\iff R_jx=0.
\end{aligned}
\end{equation*}
The $\impliedby$ implication shows that $g$ is well defined, while the $\implies$ implication shows that $g$ is injective. We now note that (\ref{equality P Q}) gives
\begin{equation*}
(I-P_k)R_j = (I-P_k)(I-Q_j) = I-P_k - Q_j + P_kQ_j = I-P_k - Q_j + Q_j = I-P_k.
\end{equation*}
So given $x \in H$, and noting that $\|I-P_j\|\leq \|I\| + \|P_j\| \leq 2$, we have
\begin{equation*}
\begin{aligned}
\|g(R_jx)\|_{H^j} &= \| \big{(}(I-P_1)x,\dots ,(I-P_j)x\big{)} \|_{H^j} \\
&= \|\big{(}(I-P_1)R_jx,\dots ,(I-P_j)R_jx\big{)}\|_{H^j} \\
&= \sqrt{\|(I-P_1)R_jx\|^2 + \dots + \|(I-P_j)R_jx\|^2} \\
&\leq \|(I-P_1)R_jx\| + \dots + \|(I-P_j)R_jx\| \\
& \leq 2j\|R_jx\|.
\end{aligned}
\end{equation*}
Therefore $g$ is bounded. It is a simple check to see that $g$ is linear. Hence $g$ is continuous. Let $f$ be the restriction of $g$ to $R_j(B_H)$ (where $R_j(B_H)$ is endowed with the relative weak topology). Then $f$ must also be continuous and injective.
We know that the closed unit ball of a normed vector space is weakly compact if and only if the space is reflexive. In our case, $H$ is a Hilbert space, so is indeed reflexive. Therefore $B_H$ is weakly compact. Since $R_j$ is weakly continuous, $R_j(B_H)$ is also weakly compact. The weak topology on any vector space is Hausdorff, so in particular $H^j$ is Hausdorff with respect to the weak topology. Hence $f$ is an injective continuous map from a compact topological space into a Hausdorff space, so that \[\textnormal{ $R_j(B_H)$ and $f(R_j(B_H))$ are homeomorphic}.\] Therefore we can replace the codomain ($H^j$) of $f$ with the image of $f$ (endowed with the relative weak topology), so that $f$ becomes a bijection. In particular, $f^{-1}$ is then continuous at the origin, and hence the claim follows.
\end{proof}
For $j \in\{1,\dots ,J\}$, let $\mathcal{M}_j$ be the collection of maps which lie in the free semigroup generated by some $j$ of the projections $P_1,\dots,P_J$. We also set $\mathcal{M}_0 = \{I\}$.
\begin{lemma} \label{important neighbourhood lemma}
Let $U$ be a neighbourhood, and let $S \in \mathcal{M}_j$. There exists a positive number $\varepsilon = \varepsilon(U,j)$ depending only on $U$ and $j$ such that given $x \in B_H$, we have
\begin{equation*}
\|Sx\| \geq 1-\varepsilon \implies (I-S)x \in U.
\end{equation*}
\end{lemma}
\begin{proof}
We prove this by induction on $j$. The case $j=1$ follows immediately from Lemma \ref{neighbourhood} (since $S$ is just $P_i$ for some $i \in \{1,\dots ,J\}$). Suppose the assertion is true for $j-1$. Let $S \in \mathcal{M}_j$. If $S \in \mathcal{M}_{j-1}$, we would be done by the induction hypothesis. Therefore we may assume that \[S \in \mathcal{M}_j \setminus \mathcal{M}_{j-1}.\] Without loss of generality, we may also assume that $S$ is in the free semigroup generated by $P_1,P_2,\dots ,P_j$ (if not we simply relabel the projections). Then since $S \notin \mathcal{M}_{j-1}$, for any index $k \in \{1,\dots, j\}$, $S$ can be written in the form
\begin{equation*}
S=T_1P_kT_2 = T_3P_kT_4,
\end{equation*}
where $T_1,T_4 \in \mathcal{M}_{j-1}$, and $T_2,T_3 \in \mathcal{M}_{j}$. Let $U$ be a neighbourhood, and pick $V$ as in Lemma \ref{new neighbourhood}. Since $P_i$ is continuous, and so weakly continuous for each $i \in \{1,\dots ,j\}$, we may pick a neighbourhood $W$ such that
\begin{equation*}
4W + 4P_iW \subseteq V, \quad i \in \{1,\dots ,j\}.
\end{equation*}
Indeed, since $V$ is a basic neighbourhood, it is convex, and $\frac{1}{8}V$ is again a neighbourhood. Since $P_i$ is continuous, and hence weakly continuous, for each $i \in \{1,\dots,j\}$, the set
\begin{equation*}
W = \frac{1}{8}V \cap \bigcap_{i=1}^j P_i^{-1}\Big{(}\frac{1}{8}V\Big{)}
\end{equation*}
is a finite intersection of weakly open sets containing $0$, and hence contains a neighbourhood, which we again denote by $W$. Then $W \subseteq \frac{1}{8}V$ and $P_iW \subseteq \frac{1}{8}V$, so that by the convexity of $V$, for each $i \in \{1,\dots ,j\}$,
\begin{equation*}
4W + 4P_iW \subseteq \frac{1}{2}V + \frac{1}{2}V \subseteq V.
\end{equation*}
By the induction hypothesis, it is possible to find $\varepsilon_1$ such that for $x\in B_H$ and $T\in \mathcal{M}_{j-1}$, we have \[\|Tx\| \geq 1-\varepsilon_1 \implies (I-T)x \in W. \] By Lemma \ref{neighbourhood}, we can find $\varepsilon_2$ such that for $x\in B_H$ and $T=P_k$, \[\|Tx\| \geq 1-\varepsilon_2 \implies (I-T)x \in W.\] We set $\varepsilon = \min\{\varepsilon_1,\varepsilon_2\}$, and note it is independent of $S$ (since $\varepsilon_1$ and $\varepsilon_2$ are). Then given $x \in B_H$ and either $T\in \mathcal{M}_{j-1}$ or $T=P_k$,
\begin{equation} \label{neighbourhood implication}
\|Tx\| \geq 1-\varepsilon \implies (I-T)x \in W.
\end{equation}
We now fix $x \in B_H$, and we assume that $\|Sx\| \geq 1-\varepsilon$. If we can show that $(I-S)x \in U$, then our induction is complete. We note that
\begin{equation*}
1\geq \|T_4x\| \geq \|P_kT_4x\| \geq \|T_3P_kT_4x\| = \|Sx\| \geq 1-\varepsilon.
\end{equation*}
In particular, $\|T_4x\| \geq 1-\varepsilon$ and $\|P_k(T_4x)\| \geq 1-\varepsilon$. Then by (\ref{neighbourhood implication}), we have $(I-T_4)x \in W$ and $(I - P_k)(T_4x) \in W$. Hence,
\begin{equation} \label{element of 1}
\begin{alignedat}{2}
(I-P_k)x &= (I-T_4)x + (I-P_k)(T_4x) - P_k(I-T_4)x \\
& \in W + W + P_kW \subseteq 2W + 2P_kW \subseteq \frac{1}{2} V.
\end{alignedat}
\end{equation}
We also note that $ \|T_1(P_kT_2x)\| = \|Sx\| \geq 1-\varepsilon$, so that by (\ref{neighbourhood implication}), we have $(I-T_1)(P_kT_2x) \in W$. Hence,
\begin{equation} \label{element of 2}
\begin{aligned}
(I-P_k)Sx &= (T_1 - I)P_kT_2x + P_k(I-T_1)P_kT_2x \\
&\in W + P_kW \subseteq \frac{1}{2} V.
\end{aligned}
\end{equation}
Since (\ref{element of 1}) and (\ref{element of 2}) are valid for all $k \in \{1,\dots ,j\}$, Lemma \ref{new neighbourhood} guarantees that $R_jx \in \frac{1}{2} U$ and $R_j(Sx) \in \frac{1}{2} U$. Therefore, \[R_j(I-S)x \in U.\]
Recalling that we assumed $S \in \mathcal{M}_j \setminus \mathcal{M}_{j-1}$ is in the free semigroup generated by $P_1,P_2,\dots,P_j$, a similar argument to Lemma \ref{foundational 2}\ref{useful for amemiya} gives
\begin{equation*}
\ker(I-P_j\dots P_1)=\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-S).
\end{equation*}
Since $Q_j$ is the projection onto $\ker(I-P_j\dots P_1) = \ker(I-S)$, we have that $(I-S)(I-R_j)x = (I-S)Q_jx = 0$. Rearranging, we have \[(I-S)x = (I-S)R_jx.\] We note also that since $Q_j$ commutes with $P_k$ for each $k \in \{1,\dots ,j\}$ (by Lemma \ref{commutes}), then $R_j$ commutes with $P_k$ for each $k \in \{1,\dots ,j\}$. Therefore $R_j$ commutes with $S$, and so $R_j$ commutes with $I-S$. Hence
\begin{equation*}
(I-S)x = (I-S)R_jx = R_j(I-S)x \in U,
\end{equation*}
thus completing the induction.
\end{proof}
We are finally able to prove that $(x_n)$ always converges with respect to the weak topology.
\begin{proof}[Proof of Theorem \ref{amemiya}]
We begin by noting that without loss of generality, we may assume that \[\|x_0\|=1.\] Indeed, if we prove Theorem \ref{amemiya} for this case, then we may extend it to any $x_0 \in H$ by noting that \[P_{j_n}\dots P_{j_1}x_0 \textnormal{ converges weakly } \iff P_{j_n}\dots P_{j_1}\frac{x_0}{\|x_0\|} \textnormal{ converges weakly}.\]
We also note that $(\|x_n\|)$ is a monotonically decreasing sequence, bounded below by $0$, and so it converges to some non-negative limit. If $\|x_n\| \to 0$, then $x_n$ converges in norm, and hence converges weakly. We may therefore suppose that $\lim_{n\to \infty} \|x_n\| >0$, and so $\inf_{n\geq 0}\|x_n\|>0.$
For any neighbourhood $U$, let $\varepsilon = \varepsilon(U,J)$ be as in Lemma \ref{important neighbourhood lemma}. Then there exists an $N \in \mathbb{N}$ such that for $n \geq m \geq N$, we have \[\|x_n\| \geq (1-\varepsilon)\|x_m\|.\]
Indeed, suppose for a contradiction there was no such $N$, so that for any $N_i\in \mathbb{N}$ there are $n_i \geq m_i \geq N_i$ such that $\|x_{n_i}\| < (1-\varepsilon)\|x_{m_i}\|$. We begin by picking $N_1 = 0$ and finding appropriate $n_1 \geq m_1 \geq N_1=0$. Then, letting $N_2 = n_1+1$, we pick $n_2 \geq m_2 \geq N_2$, and continue inductively in this way. We then have for $k \in \mathbb{N}$,
\begin{equation*}
\begin{aligned}
\|x_{n_k}\| &< (1-\varepsilon)\|x_{m_k}\| \leq (1-\varepsilon)\|x_{N_k}\| \leq (1-\varepsilon)\|x_{n_{k-1}}\| \\
& < (1-\varepsilon)^2\|x_{n_{k-2}}\| <\dots <(1-\varepsilon)^k\|x_{n_1}\| \\
&\leq (1-\varepsilon)^k\|x_0\| \to 0 \textnormal{ as } k\to \infty,
\end{aligned}
\end{equation*}
contradicting $\inf_{n\geq0}\|x_n\|>0$.
For a given $U$, we find an $N$ as above, and let $n > m \geq N$ (for $n = m$ there is nothing to prove). Let $x = \frac{x_m}{\|x_m\|}$, and note that there is some $S\in \mathcal{M}_J$ such that $x_n = Sx_m$. Then
\begin{equation*}
\|Sx\| = \frac{\|x_n\|}{\|x_m\|} \geq 1-\varepsilon,
\end{equation*}
so Lemma \ref{important neighbourhood lemma} guarantees that $(I-S)x \in U$. Hence
\begin{equation} \label{difference in neighbourhood}
x_m - x_n = (I-S)x_m = \|x_m\|(I-S)x \in U,
\end{equation}
where the final membership holds because $(I-S)x \in U$, $0 < \|x_m\| \leq \|x_0\| = 1$, and $U$ is convex and contains $0$.
Let $\delta >0$ and $y \in H$. Since $U$ was an arbitrary neighbourhood, we can pick $U = \{x \in H : | \langle x,y \rangle | < \delta/2\}$. Then (\ref{difference in neighbourhood}) gives that for $n \geq m \geq N$, we have $x_m - x_n \in U$, and therefore \[| \langle x_m - x_n, y \rangle | < \delta / 2.\] But we also note that $(x_n)$ is a bounded sequence ($\|x_n\| \leq \|x_0\|$ for each $n \in \mathbb{N}$), and so has a weakly convergent subsequence, say $x_{n_k}$, converging weakly to some limit $x_\infty \in H$. Then there exists some $K \geq N$ such that \[| \langle x_{n_K} - x_\infty, y \rangle | < \delta /2 .\] Therefore for $n\geq n_K$,
\begin{equation*}
|\langle x_n - x_\infty, y \rangle| \leq |\langle x_n - x_{n_{K}},y \rangle| + |\langle x_{n_{K}} - x_\infty,y \rangle| < \delta /2 + \delta /2 = \delta.
\end{equation*}
Hence $x_n$ converges weakly to $x_\infty$, completing our proof.
\end{proof}
We have shown that $(x_n)$ always converges weakly, but we do not yet know to what limit. Rather than taking Amemiya and Ando's approach in finding the limit, we notice that a simpler argument due to Sakai (Lemma \ref{key result sakai 2}) \cite{Sak95} works here too. Combining this with Theorem \ref{amemiya}, we obtain the following.
\begin{corollary} \label{limit}
Suppose that $(j_n)$ takes every value in $\{1,\dots,J\}$ infinitely many times. Then $(x_n)$ converges weakly to the orthogonal projection of $x_0$ onto $M = \bigcap_{j=1}^J M_j$.
\end{corollary}
Since in a finite-dimensional Hilbert space, convergence in norm is equivalent to weak convergence, the following corollary is immediate from Theorem \ref{amemiya}.
\begin{corollary}
Assume the same setting as in Theorem \ref{amemiya}, except that we now specify that $H$ is a finite-dimensional Hilbert space. Then $(x_n)$ converges in norm.
\end{corollary}
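The simplest finite-dimensional instance is von Neumann's alternating projection between two lines in $\mathbb{R}^2$ meeting at an angle $\theta$: since $M_1 \cap M_2 = \{0\}$, the iterates converge in norm to $0$, contracting by $\cos^2\theta$ per cycle. The Python sketch below is a numerical illustration only, not part of the text's argument.

```python
import numpy as np

theta = np.pi / 6
u1 = np.array([1.0, 0.0])                      # unit vector spanning M_1
u2 = np.array([np.cos(theta), np.sin(theta)])  # unit vector spanning M_2

def proj(v, u):
    # orthogonal projection onto span{u}, assuming ||u|| = 1
    return np.dot(v, u) * u

x = np.array([1.0, 0.0])
for _ in range(50):              # apply P_1 P_2 fifty times
    x = proj(proj(x, u2), u1)

# each cycle P_1 P_2 contracts the norm by exactly cos^2(theta)
print(np.linalg.norm(x))  # equals cos(theta)**100, about 5.7e-07
```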
In fact, even more can be said. We end this section with a nice observation due to Sakai \cite{Sak95} (briefly mentioned in Section \ref{concluding remarks}). We consider the set $D=\{1\leq d \leq J : (j_n) \textnormal{ takes value $d$ infinitely many times}\}.$
\begin{lemma} \label{Sakai mistake}
Suppose there is some $i \in D$ such that $M_i$ is finite-dimensional. Then $(x_n)$ converges in norm.
\end{lemma}
Sakai gave a proof of this result in his paper \cite{Sak95}, but it appears to be incorrect. In particular, it seems as though Sakai assumed that if a sequence converges weakly to a limit, and has a subsequence which converges in norm, then the sequence converges in norm. However, this is not generally true, and we demonstrate this as follows.
It is known that there are sequences which converge weakly, but not in norm; for example, an orthonormal sequence $(a_n)$ in an infinite-dimensional Hilbert space converges weakly to $0$, while $\|a_n\| = 1$ for all $n$. But then the sequence $(b_n)$ given by
\begin{equation*}
(b_n) = (a_1,0,a_2,0,a_3,0 \dots)
\end{equation*}
converges weakly to $0$ and has a subsequence which converges in norm, but does not itself converge in norm.
However, we find that Lemma \ref{Sakai mistake} still turns out to be true. We offer the following proof.
\begin{proof}[Proof of Lemma \ref{Sakai mistake}]
We know, due to Theorem \ref{amemiya}, that $(x_n)$ converges weakly to some limit $x_\infty$. Since $i \in D$, we may pass to a subsequence $(x_{n_k})_{k\geq 1}$ such that each $x_{n_k} \in M_i$, and note that it must also converge weakly to $x_\infty$ (as $k \to \infty$). However $M_i$ is finite-dimensional, and in a finite-dimensional space, weak convergence is equivalent to convergence in norm, so we have that $(x_{n_k})$ converges in norm to $x_\infty$.
By definition of $D$, we may find some $t \in \mathbb{N}$ such that for $n\geq t$, we have $j_n \in D$. Moreover, for each $j \in D$, infinitely many terms of $(x_n)$ lie in $M_j$, and since closed subspaces are weakly closed, it follows that $x_\infty \in M_j$ for every $j \in D$. In particular, for $n\geq t$, we have $P_{j_n}x_\infty = x_\infty$. Since $(x_{n_k})$ converges in norm to $x_\infty$, for any $\varepsilon>0$, we may find some $K \geq t$ such that $\|x_{n_K} - x_\infty\| < \varepsilon$. So for $m\geq n_K$, we have
\begin{equation*}
\begin{aligned}
\|x_m - x_\infty\| &= \|P_{j_m}P_{j_{m-1}}\dots P_{j_{n_K+1}}x_{n_{K}} - x_\infty\| \\
& = \|P_{j_m}P_{j_{m-1}}\dots P_{j_{n_K+1}}(x_{n_{K}} - x_\infty)\| \\
& \leq \|x_{n_{K}} - x_\infty\| < \varepsilon.
\end{aligned}
\end{equation*}
Hence $(x_m)$ converges in norm to $x_\infty$, concluding our proof.
\end{proof}
\section{Failure of strong convergence} \label{failure strong convergence}
As mentioned in the introduction, Amemiya and Ando's question as to whether there is a sequence of projections that does not converge in norm \cite{AmAn65} went unanswered for a long time. It was resolved only in 2012, when Paszkiewicz proved that for any infinite-dimensional Hilbert space, we may find five subspaces, a vector $x_0 \in H$, and a sequence $(j_n)$, so that $(x_n)$ does not converge in norm \cite{Pas12}. This construction was improved by Kopeck\'a and M\"uller from five subspaces to three \cite{KoMu14}, and then refined in 2017 by Kopeck\'a and Paszkiewicz \cite{KoPa17}.
In this section, we will closely follow Kopeck\'a and Paszkiewicz's construction, presenting a series of technical lemmas as stated in \cite{KoPa17}, leading to a proof of the following theorem.
\begin{theorem} \label{big result kopecka}
There exists a sequence $(j_n)$ with the following property. If $H$ is an infinite-dimensional Hilbert space, and $x_0 \in H$ is a non-zero vector, then there exist three closed subspaces $M_1,M_2,M_3 \subset H$ intersecting only at the origin such that the sequence $(x_n)$ does not converge in norm.
\end{theorem}
We aim to present the proofs of the lemmas in a more accessible way, adding additional details where they have been omitted.
\subsection{Notation}
Given subsets $X,Y \subset H$, we will write $\bigvee X$ for the closed linear span of $X$, and $X \vee Y$ for the closed linear span of $X \cup Y$. We will also write $\bigvee_{i\in I} X_i$ for the closed linear span of $\bigcup_{i \in I} X_i$. Given $x,y \in H$, we will write $\vee x$ and $x\vee y$ for $\bigvee \{x\}$ and $\bigvee\{x,y\}$, respectively.
For $m \in \mathbb{N}$, we will write $\mathcal{S}_m$ to denote the free semigroup with generators $a_1, \dots, a_m$. If $A_1, \dots, A_m \in B(H)$, and $\varphi = a_{i_r}\dots a_{i_1} \in \mathcal{S}_m$ for some $r \in \mathbb{N}$ and $i_j \in \{1,\dots,m\}$, then we write
\begin{equation*}
\varphi(A_1,\dots,A_m) = A_{i_r}\dots A_{i_1} \in B(H).
\end{equation*}
We refer to elements of a semigroup as `words', made up of the `letters' from the set $\{a_1,\dots,a_m\}$. We denote the length of the word $\varphi$ by $|\varphi| = r$, and the number of occurrences of the letter $a_i$ in the word $\varphi$ by $|\varphi _i|$, so that $\sum_{i=1}^m |\varphi_i| = |\varphi|=r$.
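For computations it can be convenient to represent a word concretely by its tuple of indices $(i_r,\dots,i_1)$. The following Python sketch (ours, purely illustrative) evaluates $\varphi(A_1,\dots,A_m)$ as a matrix product and counts the letter occurrences $|\varphi_i|$.

```python
import numpy as np

def evaluate(word, mats):
    """phi = a_{i_r} ... a_{i_1}, given as word = (i_r, ..., i_1);
    returns A_{i_r} ... A_{i_1} (indices are 1-based into mats)."""
    out = np.eye(mats[0].shape[0])
    for i in word:               # left-to-right, so A_{i_r} multiplies first
        out = out @ mats[i - 1]
    return out

def letter_count(word, i):
    # |phi_i|: number of occurrences of the letter a_i in phi
    return word.count(i)

# two non-commuting orthogonal projections in R^2
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])  # onto span{e_1}
P2 = np.array([[0.5, 0.5], [0.5, 0.5]])  # onto span{(1,1)/sqrt(2)}

phi = (1, 2, 1)                  # the word a_1 a_2 a_1
M = evaluate(phi, [P1, P2])      # equals P1 @ P2 @ P1
```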
\subsection{Continuous dependence of words on letters}
We begin by proving that if we have control over the number of appearances of a contraction in a product, then replacing the contraction with one close in norm to the original does not change the product much.
\begin{lemma} \label{contraction inequality}
Let $\psi \in \mathcal{S}_n$ for some $n \in \mathbb{N}$. Assume that $E \in B(H)$ is a contraction and that, for each $i \in \{1,2,\dots ,n\}$, $A_i, B_i \in B(H)$ are contractions such that each $A_i$ commutes with $E$. Then
\begin{equation*}
\|\psi(A_1,\dots ,A_n)E - \psi(B_1,\dots ,B_n)E\| \leq \sum_{1\leq i \leq n} |\psi_i| \cdot \|A_iE - B_iE\|.
\end{equation*}
\end{lemma}
\begin{proof}
We prove this by induction on $|\psi|$. For $|\psi|=0$, we have that $\psi(A_1,\dots ,A_n) = \psi(B_1,\dots ,B_n)$ is the identity on $H$, so the inequality holds (with both sides being $0$). For our induction hypothesis (IH), suppose the assertion is true for $|\psi| \leq r$. We now suppose $|\psi|= r+1$. Then we have $\psi = \varphi a_j$ for some $\varphi \in \mathcal{S}_n$ with $|\varphi| = r$ and $j\in \{1,2,\dots ,n\}$. Hence
\begin{align*}
&\|\psi(A_1,\dots ,A_n)E - \psi(B_1,\dots ,B_n)E\| && \\
& =\|\varphi(A_1,\dots ,A_n)A_jE - \varphi(B_1,\dots ,B_n)B_jE\| \\
& \leq \|\varphi(A_1,\dots ,A_n)A_jE - \varphi(B_1,\dots ,B_n)A_jE\| \\
& \quad + \|\varphi(B_1,\dots ,B_n)A_jE - \varphi(B_1,\dots ,B_n)B_jE\| && \textnormal{\big{[}triangle inequality\big{]}} \\
& \leq \|\varphi(A_1,\dots ,A_n)E - \varphi(B_1,\dots ,B_n)E\| \\
& \quad + \|\varphi(B_1,\dots ,B_n)\| \cdot \|A_jE-B_jE\| && \textnormal{\big{[}$A_jE=EA_j$,\, $\|A_j\| \leq 1$\big{]}} \\
& \leq \|A_jE-B_jE\| + \sum_{1\leq i \leq n} |\varphi_i| \cdot \|A_iE - B_iE\| && \textnormal{\big{[}IH, $\|\varphi(B_1,\dots, B_n)\| \leq 1$\big{]}} \\
& = \sum_{1\leq i \leq n} |\psi_i| \cdot \|A_iE - B_iE\|.
\end{align*}
This completes the induction.
\end{proof}
We now state two simple corollaries of Lemma \ref{contraction inequality}, which will be useful later on.
\begin{corollary} \label{contraction inequality corollary} Let $\psi \in \mathcal{S}_3$. Suppose $E,W,X,X',Y,Y',Z$ are closed subspaces of $H$ such that $W,X,Y \subset E$ and $X',Y' \perp E$. Then
\begin{equation*}
\|\psi(P_W,P_X,P_Y)P_E - \psi(P_Z,P_{X\vee X'}, P_{Y\vee Y'})P_E\| \leq | \psi_1| \cdot \|P_ZP_E-P_W\|.
\end{equation*}
\end{corollary}
\begin{proof}
We apply Lemma \ref{contraction inequality} for $n=3$, and note that $P_WP_E=P_W$, $P_X=P_XP_E=P_{X \vee X'}P_E$, and $P_Y=P_{Y}P_E=P_{Y \vee Y'}P_E$.
\end{proof}
\begin{corollary} \label{bound projection word}
Let $n \in \mathbb{N}$, $\varphi \in \mathcal{S}_n$, and let $A_1,\dots, A_n$, $B_1,\dots, B_n$ be contractions. Then
\begin{equation*}
\|\varphi(A_1,\dots ,A_n) - \varphi(B_1,\dots ,B_n)\| \leq |\varphi | \cdot \max_{1\leq i \leq n} \|A_i - B_i\|.
\end{equation*}
\end{corollary}
\begin{proof}
We apply Lemma \ref{contraction inequality}, with $E$ taken to be the identity map.
\end{proof}
\subsection{Constructing three subspaces and a finite product of projections} \label{triples}
In this subsection, given orthonormal vectors $u$ and $v$ in $H$, we construct three subspaces $X$, $Y$, and $W=u \vee v$, and a finite product of projections $\psi(P_W,P_X,P_Y)$ onto $W$ such that $\psi(P_W,P_X,P_Y)u$ is close to $v$. This will be useful later when we `glue' together countably many copies of these triples of subspaces.
Let $\varepsilon >0$, and let $k=k(\varepsilon)$ be the smallest positive integer $k$ such that \[\Big{(}\cos \frac{\pi}{2k}\Big{)}^k > 1 - \varepsilon.\] We note that such a $k$ exists since $(\cos \frac{\pi}{2r})^r \to 1$ as $r \to \infty$. Let $u$ and $v$ be orthonormal vectors in $H$, and for $j \in \{0,\dots ,k\}$, let \[h_j=u\cos \frac{\pi j}{2k} + v\sin \frac{\pi j}{2k},\] so that $h_0 = u$ and $h_k=v$.
Then by definition of $k$, if we project $u$ consecutively onto $\vee h_1,\dots ,\vee h_k$, then we arrive at $v$ with error less than $\varepsilon$, so that
\begin{equation} \label{consecutive projections}
\|(P_{\vee h_k}\dots P_{\vee h_1})u - v\|<\varepsilon.
\end{equation}
This is illustrated in the following figure.
\begin{figure}
\caption{Approximating $v$ by projections of $u$}
\label{segment projections}
\end{figure}
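The choice of $k(\varepsilon)$ and the estimate (\ref{consecutive projections}) can be verified numerically: in the plane spanned by $u$ and $v$, each projection step contracts by exactly $\cos\frac{\pi}{2k}$, so the final error is $1 - (\cos\frac{\pi}{2k})^k < \varepsilon$. The Python sketch below is an illustration only, not part of the construction.

```python
import numpy as np

eps = 0.1
# smallest positive integer k with (cos(pi/2k))^k > 1 - eps
k = 1
while np.cos(np.pi / (2 * k)) ** k <= 1 - eps:
    k += 1

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = u
for j in range(1, k + 1):
    # h_j = u cos(pi j / 2k) + v sin(pi j / 2k)
    h = np.array([np.cos(np.pi * j / (2 * k)), np.sin(np.pi * j / (2 * k))])
    x = np.dot(x, h) * h         # project onto the line spanned by h_j

# each step contracts by cos(pi/2k), so x = (cos(pi/2k))^k * v
print(np.linalg.norm(x - v) < eps)  # True
```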
We have by Theorem \ref{von neumann} (von Neumann) \cite{von49} that a projection onto $\vee h_j$ can be arbitrarily well approximated by iterating projections between two subspaces with intersection $\vee h_j$. In the following lemma, we call these subspaces $W$ and $X_j'$, and show that we can approximate $X_j'$ with a subspace $X_j$ so that $X_1 \subset \dots \subset X_k$. We will then see in Lemma \ref{replace projection} that we are able to replace the projections onto each $X_j$ by a product $(XYX)^{s(j)}$ where $X=X_k$, and $Y$ is such that $\|P_X - P_Y\|$ is small.
Therefore, instead of projecting onto several subspaces to get from $u$ to $v$, we only need to project onto three of them, $W$, $X$, and $Y$, finitely many times.
\begin{lemma} \label{quarter circle}
Let $\varepsilon >0$, and recall that $k=k(\varepsilon)$ is the smallest positive integer $k$ such that $(\cos\frac{\pi}{2k})^k > 1 -\varepsilon$. There exists $\varphi \in \mathcal{S}_{k+1}$ with the following property.
Suppose $X$ is a subspace of $H$ with $\dim X = \infty$, and $u, v \in X$ are vectors such that $\|u\| = \|v\| = 1$ and $u \perp v$. Let $W=u \vee v$. Then there exist subspaces $X_1 \subset \dots \subset X_{k} \subset X$ such that $\dim X_j = j+1$ for all $j \in \{1,\dots ,k \}$, and
\begin{equation*}
\| \varphi(P_W,P_{X_1},\dots ,P_{X_{k}})u - v\| < 2\varepsilon.
\end{equation*}
\end{lemma}
\begin{proof}
We pick orthonormal vectors $z_0,z_1,\dots ,z_{k-1} \in W^{\perp} \cap X$. We then construct inductively a sequence $\alpha_1>\dots >\alpha_{k-1}>\alpha_k = 0$, and a sequence of subspaces $X_1 \subset \dots \subset X_k \subset X$ in the following way.
We pick $\alpha_0 \in (0,1)$ arbitrarily. Suppose that the sequence $\alpha_1>\dots > \alpha_{j-1}$, and the subspaces $X_1 \subset \dots \subset X_{j-1}$ have already been constructed for some $j \in \{1,\dots, k-1\}$. We then set
\begin{equation*}
X_j' = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{j-1}+\alpha_{j-1}z_{j-1},h_j \}.
\end{equation*}
Since $z_i$ is orthogonal to $W$ for each $i\in \{0,\dots ,j-1\}$, we have \[W\cap X_j' = \vee h_j.\] Therefore by Theorem \ref{von neumann} (von Neumann), for each $x\in H$, we have
\begin{equation*}
(P_{X_j'}P_WP_{X_j'})^r x\to P_{\vee h_j}x \textnormal{ as } r\to\infty.
\end{equation*}
Since both $\vee h_j$ and $X_j'$ are finite-dimensional, both $P_{\vee h_j}$ and $P_{X_j'}P_WP_{X_j'}$ map into finite-dimensional subspaces of $H$. Hence there exists $r(j) \in \mathbb{N}$ such that
\begin{equation*}
\|(P_{X_j'}P_WP_{X_j'})^{r(j)} - P_{\vee h_j}\| < \frac{\varepsilon}{k}.
\end{equation*}
Let $X_j = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{j-1}+\alpha_{j-1}z_{j-1},h_j + \alpha_jz_j\}$. For small $\alpha_j > 0$, the subspace $X_j$ is just a small perturbation of $X_j'$, so we can pick $\alpha_j \in (0,\alpha_{j-1})$ small enough that
\begin{equation} \label{small perturbation}
\|(P_{X_j}P_WP_{X_j})^{r(j)} - P_{\vee h_j}\| < \frac{\varepsilon}{k}.
\end{equation}
Suppose we have constructed $X_1 \subset \dots \subset X_{k-1}$ and $\alpha_1>\dots >\alpha_{k-1}$ as above. We set $\alpha_k = 0$ and $X_k = X_k' = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{k-1}+\alpha_{k-1}z_{k-1},h_k\}$. We now find $r(k) \in \mathbb{N}$ such that (\ref{small perturbation}) holds also for $j=k$. Let $\varphi \in \mathcal{S}_{k+1}$ and $\psi \in \mathcal{S}_k$ be given by
\begin{align*}
\varphi (c,b_1,\dots ,b_k) &= (b_kcb_k)^{r(k)}\dots (b_1cb_1)^{r(1)}, \\
\psi(a_1,\dots ,a_k) &= a_1\dots a_k.
\end{align*}
Then by Corollary \ref{bound projection word},
\begin{equation} \label{consecutive projections 2}
\begin{aligned}
&\| \varphi(P_W,P_{X_1},\dots ,P_{X_k}) - P_{\vee h_k}\dots P_{\vee h_1} \| \\
&= \|\psi\big{(}(P_{X_k}P_WP_{X_k})^{r(k)},\dots ,(P_{X_1}P_WP_{X_1})^{r(1)}\big{)} - \psi(P_{\vee h_k},\dots ,P_{\vee h_1})\| \\
& \leq |\psi|\cdot \max_{1\leq j \leq k} \|(P_{X_j}P_WP_{X_j})^{r(j)} - P_{\vee h_j} \| \\
&< k \cdot \frac{\varepsilon}{k} = \varepsilon.
\end{aligned}
\end{equation}
Hence, (\ref{consecutive projections}) and (\ref{consecutive projections 2}) give that
\begin{equation*}
\begin{aligned}
&\|\varphi(P_W,P_{X_1},\dots ,P_{X_k})u - v\| \\
&\leq \| \varphi(P_W,P_{X_1},\dots ,P_{X_k})u - (P_{\vee h_k}\dots P_{\vee h_1})u\| + \|(P_{\vee h_k}\dots P_{\vee h_1})u - v\| \\
&< \varepsilon + \varepsilon = 2\varepsilon,
\end{aligned}
\end{equation*}
completing the proof. We note that $\varphi$ does not depend on $X$, $u$, or $v$.
\end{proof}
We have constructed above a family of $k$ finite-dimensional subspaces. We see in the next lemma that these can be replaced by projections onto just two subspaces: the largest subspace in the family and a small variation of it.
\begin{lemma} \label{replace projection}
Let $k \in \mathbb{N}$, $\varepsilon >0$, $\eta >0$, and $a>0$ be given. There exist natural numbers $a<s(k)<s(k-1)<\dots <s(1)$ with the following property.
Suppose $X_1 \subset \dots \subset X_k \subset X \subset E$ are closed subspaces of $H$, with $X$ separable and $\dim (X^\perp \cap E) = \infty$. Then there exists a closed subspace $Y \subset E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$, and
\begin{equation*}
\|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \varepsilon, \quad j \in \{1,\dots ,k\}.
\end{equation*}
\end{lemma}
\begin{proof}
We may assume that $0<\eta<1$; if the statement holds in this case, then it clearly holds for any $\eta >0$. We begin by fixing $0<\beta_{k+1}<\frac{\eta}{2}$, and choosing $s(k) > a$ large enough that $\frac{1}{(1+\beta_{k+1}^2)^{s(k)}} < \varepsilon$. We then inductively choose numbers $\beta_k,s(k-1),\beta_{k-1},s(k-2),\dots ,s(1),\beta_1$ such that
\begin{equation} \label{inductive numbers}
\begin{gathered}
\beta_{k+1} > \beta_k >\dots >\beta_1 > 0, \\
a < s(k) < s(k-1) < \dots < s(1), \\
\frac{1}{(1+\beta_{j+1}^2)^{s(j)}} < \varepsilon \textnormal{ \, and \, } \Big{|}\frac{1}{(1+\beta_{j}^2)^{s(j)}} - 1\Big{|}< \varepsilon, \quad j \in \{1,\dots ,k\}.
\end{gathered}
\end{equation}
We will show that these $s(j)$'s are as required.
Since $X_1, \dots, X_k$ are closed subspaces of the separable Hilbert space $X$, they are themselves separable Hilbert spaces under the same norm. A Hilbert space is separable if and only if it has an at most countable orthonormal basis. Hence we can find an at most countable orthonormal basis $\{e_i\}_{i\in I}$ in $X$ such that there are sets $\emptyset = I_0 \subset I_1 \subset \dots \subset I_k \subset I_{k+1} = I$ with the property that $\{e_i\}_{i\in I_j}$ is an orthonormal basis in $X_j$ for $j \in \{1,\dots ,k\}$.
For $i \in I_j \setminus I_{j-1}$ we define $\gamma_i = \beta_j$. Since $\dim (X^\perp \cap E) = \infty$, we can find a set of orthonormal vectors $\{w_i\}_{i\in I}$ in $X^\perp \cap E$ indexed by $I$. Let $Y = \bigvee \{e_i + \gamma_iw_i : i\in I\}$. Then for $i \in I$, it is a simple check that
\begin{equation*}
e_i = \underbrace{\frac{e_i + \gamma_iw_i}{1+\gamma_i^2}}_{\in Y} + \underbrace{e_i - \frac{e_i + \gamma_iw_i}{1+\gamma_i^2}}_{\in Y^\perp}.
\end{equation*}
Hence $P_Ye_i = \frac{e_i + \gamma_iw_i}{1+\gamma_i^2}$. Therefore, since $e_i \in X$ and $w_i \in X^\perp$,
\begin{equation*}
(P_XP_YP_X)e_i = P_XP_Y(P_Xe_i) = P_X(P_Ye_i) = P_X\Big{(}\frac{e_i + \gamma_iw_i}{1+\gamma_i^2}\Big{)} = \frac{e_i}{1+\gamma_i^2}.
\end{equation*}
Hence, we have
\begin{equation*}
(P_XP_YP_X)^me_i = \frac{e_i}{(1+\gamma_i^2)^m}, \quad m\in\mathbb{N}.
\end{equation*}
Let $x \in X$. Writing $x = \sum_{i\in I} a_ie_i$, we see that Lemma \ref{foundational}\ref{pythagoras}, together with $\|e_i\| = 1$, (\ref{inductive numbers}), and the choice $\gamma_i = \beta_j$ for $i \in I_j \setminus I_{j-1}$, gives
\begin{equation} \label{projections bound X}
\begin{aligned}
&\|(P_XP_YP_X)^{s(j)}x - P_{X_j}x\|^2 \\
&= \Big{\|} \sum_{i\in I} a_i \frac{e_i}{(1+\gamma_i^2)^{s(j)}} - \sum_{i\in I_j} a_ie_i \Big{\|}^2 \\
&= \Big{\|} \sum_{i\in I_j} a_ie_i\Big{(}-1+\frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)} + \sum_{i\in I\setminus I_j} a_i\frac{e_i}{(1+\gamma_i^2)^{s(j)}} \Big{\|}^2 \\
&= \sum_{i\in I_j} \Big{\|}a_ie_i\Big{(}-1+\frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)}\Big{\|}^2 + \sum_{i\in I\setminus I_j} \Big{\|}a_i\frac{e_i}{(1+\gamma_i^2)^{s(j)}} \Big{\|}^2 \\
&= \sum_{i\in I_j} |a_i|^2 \Big{(}1- \frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)}^2 + \sum_{i\in I\setminus I_j} |a_i|^2 \frac{1}{(1+\gamma_i^2)^{2s(j)}} \\
&\leq \sum_{i\in I_j} |a_i|^2 \varepsilon^2 + \sum_{i\in I\setminus I_j} |a_i|^2\varepsilon^2 \\
&= \varepsilon^2 \sum_{i\in I} |a_i|^2 = \varepsilon^2 \|x\|^2.
\end{aligned}
\end{equation}
We note that $P_{X_j}P_X = P_{X_j}$ (since $X_j \subset X$), and recall that projections are idempotent. Hence, by (\ref{projections bound X}), we have that for any $z \in H$ and $j \in \{1,\dots ,k\}$,
\begin{align*}
\|(P_XP_YP_X)^{s(j)}z - P_{X_j}z\|^2 &= \|(P_XP_YP_X)^{s(j)}(P_Xz) - P_{X_j}(P_Xz)\|^2 \\
&\leq \varepsilon^2 \|P_Xz\|^2.
\end{align*}
Therefore,
\begin{equation*}
\|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \varepsilon, \quad j \in \{1,\dots ,k\}.
\end{equation*}
It remains to verify that $\|P_X-P_Y\| < \eta$, and that $X \cap Y = \{0\}$. For the latter, suppose that $z \in X \cap Y$. Then since $z \in X$, $z$ can be written as $\sum_{i\in I} b_ie_i$, and since $z \in Y$, $z$ can be written as $\sum_{i\in I} c_i(e_i+\gamma_iw_i)$ (where each $b_i,c_i \in \mathbb{F}$). Since each $w_i \in X^\perp$, and each $e_i \in X$, then for every $j \in I$,
\begin{equation*}
b_j = \langle z, e_j \rangle = \langle \sum_{i\in I} c_i(e_i+\gamma_iw_i), e_j \rangle = \langle \sum_{i\in I} c_ie_i,e_j \rangle = c_j.
\end{equation*}
Therefore, \[0 = z - z = \sum_{i\in I} b_i(e_i+\gamma_iw_i) - \sum_{i\in I} b_ie_i = \sum_{i\in I} b_i\gamma_iw_i.\] Hence $b_i = 0$ for every $i \in I$, and so $z = \sum_{i\in I} b_ie_i = 0$.
Finally, we want to show that $\|P_X-P_Y\| < \eta$. It is known that if $U = \bigvee \{f_i : i\in I\}$ for some orthonormal set $\{f_i\}_{i \in I}$, then $P_Ux = \sum_{i\in I} \langle x,f_i \rangle f_i $ for all $x \in H$. This, along with $0<\gamma_i<\beta_{k+1}<\frac{\eta}{2}<1$ and $|a-b|^2 \leq 2|a|^2 + 2|b|^2$, gives that for any $0\neq z \in H$,
\begin{equation*}
\begin{aligned}
&\|P_Xz - P_Yz\|^2 \\
&= \Big{\|} \sum_{i\in I} \langle z,e_i \rangle e_i - \sum_{i\in I} \frac{e_i + \gamma_iw_i }{1+\gamma_i^2} \langle z,e_i+\gamma_iw_i \rangle \Big{\|}^2 \\
&\leq \sum_{i\in I} \frac{1}{(1+\gamma_i^2)^2} \big{|}\gamma_i^2\langle z,e_i\rangle - \gamma_i \langle z,w_i \rangle \big{|}^2 + \frac{\eta^2}{4} \sum_{i\in I} \big{|}\frac{1}{1+\gamma_i^2} \langle z,e_i + \gamma_iw_i \rangle \big{|}^2 \\
&\leq 2(\eta/2)^4\|z\|^2 + 2(\eta /2)^2\|z\|^2 + (\eta^2/4)\|z\|^2 \\
&< \eta^2 \|z\|^2.
\end{aligned}
\end{equation*}
So indeed $\|P_X-P_Y\| < \eta$.
\end{proof}
As before, let $u$ and $v$ be orthonormal vectors, $W=u\vee v$, and $\varepsilon >0$. We proceed to make use of Lemmas \ref{quarter circle} and \ref{replace projection} to find a word $\psi$, and two (almost parallel) subspaces $X$ and $Y$, such that $\|\psi(P_W,P_X,P_Y)u - v\| < 3\varepsilon$.
\begin{lemma} \label{word and almost parallel subspaces}
For every $\varepsilon >0$, there exists $N=N(\varepsilon)$, such that for every $\eta>0$, there exists $\psi \in S_3$ with $|\psi_1| \leq N$ that has the following property.
Let $X\subset E$ be subspaces of $H$ such that $X$ is separable and $\dim (X^\perp \cap E) = \infty$. Let $u,v \in X$ be vectors such that $\|u\|=\|v\|=1$ and $u\perp v$. Let $W = u \vee v$. Then there exists a subspace $Y \subset E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$, and
\begin{equation*}
\|\psi(P_W,P_X,P_Y)u - v\| < 3\varepsilon.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\varepsilon >0$ and $\eta >0$ be given. Let $\varphi \in S_{k(\varepsilon)+1}$ be as in Lemma \ref{quarter circle}, and let $N = |\varphi_1|$.
Since $\|u\|=\|v\|=1$ and $u\perp v$, we can apply Lemma \ref{quarter circle} to see that there exist subspaces $X_1 \subset \dots \subset X_{k(\varepsilon)} \subset X$ such that
\begin{equation*}
\| \varphi(P_W,P_{X_1},\dots ,P_{X_{k(\varepsilon)}})u - v\| < 2\varepsilon.
\end{equation*}
For $k=k(\varepsilon)$, the given $\eta$, $a=1$, and $\varepsilon$ replaced by $\frac{\varepsilon}{|\varphi|}$, we choose natural numbers $s(k) < s(k-1) < \dots < s(1)$ as in Lemma \ref{replace projection}. Since $X$ is separable and $\dim (X^\perp \cap E) = \infty$, Lemma \ref{replace projection} gives that there exists a subspace $Y$ of $E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$ and for each $j\in \{1,\dots ,k\}$,
\begin{equation*}
\|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \frac{\varepsilon}{|\varphi|}.
\end{equation*}
We then define $\psi$ to be $\varphi$, but with $a_i$ replaced by $(a_2a_3a_2)^{s(i-1)}$ for each $i\in \{2,\dots ,k+1\}$, so that
\begin{equation*}
\psi(P_W,P_X,P_Y) = \varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)}).
\end{equation*}
It is simple to see that $|\psi_1| = |\varphi_1| = N$. Finally, by Corollary \ref{bound projection word},
\begin{align*}
& \|\psi(P_W,P_X,P_Y)u - v\| \\
&= \|\varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)})u - v\| \\
& \leq \| \varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)})u - \varphi(P_W,P_{X_1},\dots ,P_{X_k})u \| \\
& \quad + \|\varphi(P_W,P_{X_1},\dots ,P_{X_k})u - v\| \\
& \leq |\varphi| \cdot \frac{\varepsilon}{|\varphi|} + 2\varepsilon = 3\varepsilon.
\end{align*}
This concludes the proof.
\end{proof}
We may now make use of Corollary \ref{contraction inequality corollary} to show that we in fact have some freedom in our choice of $W$, $X$, and $Y$ above.
\begin{lemma} \label{freedom choice spaces}
For every $\varepsilon>0$, there exists $\delta = \delta(\varepsilon)$ such that for every $\eta >0$, there exists $\psi \in S_3$ with the following property.
Let $X\subset E$ be subspaces of $H$ such that $X$ is separable and $\dim X=\dim (X^\perp \cap E) = \infty$. Let $u,v \in X$ be vectors such that $\|u\|=\|v\|=1$ and $u\perp v$. Let $W=u\vee v$. Then there exists a subspace $Y\subset E$ such that $X \cap Y = \{0\}$ and $\|P_X - P_Y\|<\eta$ with the following property. If $X',Y',Z$ are subspaces such that $X',Y' \subset E^\perp$ and $\|P_W-P_ZP_E\|<\delta$, then
\begin{equation*}
\|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - v\| < 4\varepsilon.
\end{equation*}
\end{lemma}
\begin{proof}
Given $\varepsilon >0$, we pick $N \in \mathbb{N}$ as in Lemma \ref{word and almost parallel subspaces}, and let $\delta = \frac{\varepsilon}{N}$. For these $\varepsilon$ and $N$, and a given $\eta > 0$, we choose $\psi$ according to Lemma \ref{word and almost parallel subspaces}. For a given subspace $X$, we also choose $Y$ according to this lemma. Let $X',Y',Z$ be as above. Then applying both Corollary \ref{contraction inequality corollary} and Lemma \ref{word and almost parallel subspaces}, we have
\begin{align*}
& \|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - v\| \\
& \leq \|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - \psi(P_W,P_X,P_Y)u\| + \|\psi(P_W,P_X,P_Y)u - v\| \\
&\leq |\psi_1| \cdot \|P_W-P_ZP_E\| + 3\varepsilon \\
&\leq N\delta + 3\varepsilon = 4\varepsilon. \qedhere
\end{align*}
\end{proof}
\subsection{`Gluing' together the triples}
The last step in proving Theorem \ref{big result kopecka} uses Lemma \ref{freedom choice spaces} to show that given an orthonormal set $\{e_i\}_{i=1}^\infty$ with an infinite-dimensional orthogonal complement, we can construct three closed subspaces $X,Y,Z$ of $H$ and words $\Psi^{(i)}$ such that $\Psi^{(i)}(P_Z,P_X,P_Y)e_i$ is close to $e_{i+1}$ for every $i \in \mathbb{N}$. Kopeck\'a and Paszkiewicz refer to this as `gluing' together countably many of the triples $W$, $X$, and $Y$ considered in Section \ref{triples} \cite{KoPa17}.
\begin{lemma} \label{almost there eva}
For any sequence $(\varepsilon_i)_{i\in\mathbb{N}}$ of positive numbers, there exist words $\Psi^{(i)} \in S_3$ with the following property.
Suppose $\{e_i\}_{i=1}^{\infty}$ is an orthonormal set in $H$ with an infinite-dimensional orthogonal complement. Then there are three closed subspaces $X,Y,Z$ of $H$ such that
\begin{equation} \label{Psi projection inequality}
\|\Psi^{(i)}(P_Z,P_X,P_Y)e_i - e_{i+1}\| < 4\varepsilon_i, \quad i\in \mathbb{N}.
\end{equation}
\end{lemma}
\begin{proof}
For each $\varepsilon_i >0$ ($i \in \mathbb{N}$), we define $\delta_i = \delta(\varepsilon_i)$ as in Lemma \ref{freedom choice spaces}. We set $\delta_0 = 1$ and $\eta_i = \frac{1}{2}\min\{\delta_{i-1},\delta_{i+1}\}$, and choose $\psi^{(i)} \in S_3$ as in Lemma \ref{freedom choice spaces}. We define $\Psi^{(i)}$ as follows:
\begin{equation*}
\Psi^{(i)}(P_Z,P_X,P_Y) =
\begin{cases}
\psi^{(i)}(P_Z,P_X,P_Y) & \text{if $i$ is even},\\
\psi^{(i)}(P_Y,P_X,P_Z) & \text{if $i$ is odd}.
\end{cases}
\end{equation*}
We begin by finding, for each $i \in \mathbb{N}$, closed infinite-dimensional subspaces $E_i$ of $H$ such that
\begin{equation} \label{finding ei}
\begin{gathered}
e_i,e_{i+1} \in E_i, \\
P_{\vee e_{i+1}} = P_{E_i}P_{E_{i+1}} = P_{E_{i+1}}P_{E_i}, \\
E_i \perp E_j \textnormal{ if } |i-j| \geq 2.
\end{gathered}
\end{equation}
We first note that since $\{e_i\}_{i=1}^\infty$ has an infinite-dimensional orthogonal complement, we may find an orthonormal set $\{f_i\}_{i=1}^\infty$ such that $e_i \perp f_j$ for every $i,j \in \mathbb{N}$. We then consider the infinite-dimensional spaces \[F_k = \bigvee \big{\{}f_i : i=(p_k)^r \textnormal{ for some } r\in\mathbb{N} \big{\}} ,\] where $p_k$ is the $k$\textsuperscript{th} prime number. We set \[E_{i} = \langle e_{i} \rangle \oplus \langle e_{i+1} \rangle \oplus F_{i},\] and note that these $E_i$ do indeed satisfy (\ref{finding ei}).
For each $i \in \mathbb{N}$, we find a closed subspace $X_i \subset E_i$, such that $e_i,e_{i+1} \in X_i$, and $\dim X_i = \dim(X_i^\perp \cap E_i) = \infty$. We then have $X_i = \langle e_{i} \rangle \oplus \langle e_{i+1} \rangle \oplus \widetilde{F_{i}}$ for some closed infinite-dimensional subspace $\widetilde{F_{i}} \subset F_i$, and
\begin{equation*} \label{subspace conditions}
\begin{gathered}
e_i,e_{i+1} \in X_i, \\
P_{\vee e_{i+1}} = P_{X_i}P_{X_{i+1}} = P_{X_{i+1}}P_{X_i}, \\
X_i \perp X_j \textnormal{ if } |i-j| \geq 2.
\end{gathered}
\end{equation*}
By Lemma \ref{freedom choice spaces}, there exist closed subspaces $Y_i \subset E_i$ such that $\|P_{X_i} - P_{Y_i}\| < \eta_i$, and
\begin{equation} \label{above lemma property}
\|\psi^{(i)}(P_{Z_i},P_{X_i\vee X'},P_{Y_i\vee Y'})e_i - e_{i+1}\| < 4\varepsilon_i,
\end{equation}
whenever $W_i = e_i\vee e_{i+1}$, and $X',Y',Z_i$ are subspaces such that $X',Y' \subset E_i^\perp$ and $\|P_{W_i} - P_{Z_i}P_{E_i}\|<\delta_i$.
We now set $Y_0 = \vee e_1$ and
\begin{equation*}
X = \bigvee_{i \in \mathbb{N}} X_i, \quad Y= \bigvee_{i\in\mathbb{N}_{\geq0}} Y_{2i}, \quad Z = \bigvee_{i\in\mathbb{N}_{\geq0}} Y_{2i+1}.
\end{equation*}
Then setting \[X_i' = \widetilde{F_{i-1}} \vee \widetilde{F_{i+1}} \vee \bigvee_{\substack{j\in\mathbb{N} \\ j\notin \{i-1,i+1\}}}X_j,\]we have $X_i' \perp E_i$ and $X = X_i \vee X_i'$. We proceed to show (\ref{Psi projection inequality}) by considering the cases where $i$ is even and odd separately.
Suppose first that $i$ is even. Then, as above, we can find a subspace $Y_i'$ of $H$ such that $Y_i' \perp E_i$ and $Y=Y_i \vee Y_i'$.
We note that $P_ZP_{E_i} = P_{Y_{i-1} \vee Y_{i+1}}P_{E_i}$, $P_{W_i} = P_{X_{i-1} \vee X_{i+1}}P_{E_i}$, $X_{i-1} \perp X_{i+1}$, and $Y_{i-1}\perp Y_{i+1}$. By Lemma \ref{foundational}\ref{sum projections closed}, for orthogonal closed subspaces $U,V$ of $H$, we have that $U+V = U\vee V$. Applying this, along with Lemma \ref{foundational}\ref{adding projections}, we have
\begin{equation*}
\begin{aligned}
\|P_{W_i} - P_{Z}P_{E_i}\| &= \|P_{X_{i-1} \vee X_{i+1}}P_{E_i} - P_{Y_{i-1} \vee Y_{i+1}}P_{E_i}\| \\
&= \|P_{X_{i-1} + X_{i+1}}P_{E_i} - P_{Y_{i-1} + Y_{i+1}}P_{E_i}\| \\
&= \|(P_{X_{i-1}} + P_{X_{i+1}})P_{E_i} - (P_{Y_{i-1}} + P_{Y_{i+1}})P_{E_i} \| \\
&= \|(P_{X_{i-1}} - P_{Y_{i-1}})P_{E_i} + (P_{X_{i+1}} - P_{Y_{i+1}})P_{E_i} \| \\
&\leq \|P_{X_{i-1}} - P_{Y_{i-1}}\| + \|P_{X_{i+1}} - P_{Y_{i+1}} \| < \eta_{i-1} + \eta_{i+1} \\
&\leq \tfrac{1}{2}\delta_i + \tfrac{1}{2}\delta_i = \delta_i.
\end{aligned}
\end{equation*}
Hence, by (\ref{above lemma property}),
\begin{equation*}
\|\Psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| = \|\psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| < 4\varepsilon_i.
\end{equation*}
If $i$ is odd, then we can find a subspace $Y_i'$ of $H$ such that $Y_i' \perp E_i$ and $Z=Y_i \vee Y_i'$. As above, we show that $\|P_{W_i} - P_{Y}P_{E_i}\| < \delta_i$, and so by (\ref{above lemma property}),
\begin{align*}
& \|\Psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| = \|\psi^{(i)}(P_{Y},P_{X},P_{Z})e_i - e_{i+1}\| < 4\varepsilon_i. \qedhere
\end{align*}
\end{proof}
We are finally able to prove Theorem \ref{big result kopecka}, that a sequence of alternating projections may diverge.
\begin{proof}[Proof of Theorem \ref{big result kopecka}]
For $\varepsilon_i = 9^{-i}$ ($i \in \mathbb{N}$), we pick $\Psi^{(i)}$ as in Lemma \ref{almost there eva}. Let $e_1 = \frac{x_0}{\|x_0\|}$. Since $H$ is infinite-dimensional, we can extend $e_1$ to an orthonormal set $\{e_i\}_{i=1}^\infty$ with an infinite-dimensional orthogonal complement. We choose closed subspaces $X, Y, Z$ as in Lemma \ref{almost there eva}, renaming $Z$, $X$, and $Y$ as $M_1$, $M_2$, and $M_3$ respectively. Let $A_k = \Psi^{(k)}(P_{M_1},P_{M_2},P_{M_3})$. We then have, for all $k \in \mathbb{N}$,
\begin{equation} \label{KoPa last}
\begin{aligned}
&\|A_kA_{k-1}\dots A_1e_1 - e_{k+1}\| \\
& \leq \|A_kA_{k-1}\dots A_2(A_1e_1 - e_2)\| + \|A_kA_{k-1}\dots A_2 e_2- e_{k+1}\| \\
& < 4\varepsilon_1 + \|A_kA_{k-1}\dots A_2 e_2- e_{k+1}\| \\
& \leq 4\varepsilon_1 + \|A_kA_{k-1}\dots A_3(A_2e_2 - e_3)\| + \|A_kA_{k-1}\dots A_3e_3- e_{k+1}\| \\
& < 4\varepsilon_1 + 4\varepsilon_2 + \|A_kA_{k-1}\dots A_3e_3- e_{k+1}\| \\
&\vdotswithin{=} \\
&< 4\varepsilon_1 + 4\varepsilon_2 + \dots + 4\varepsilon_{k-1} + \|A_ke_k - e_{k+1}\| \\
&< 4(9^{-1} + \dots + 9^{-k}) < 4 \sum_{j=1}^{\infty}\frac{1}{9^j} = \frac{1}{2}.
\end{aligned}
\end{equation}
By construction, each $A_k$ is some product of orthogonal projections onto $M_1$, $M_2$ or $M_3$. Let $n_k$ be the total number of projections in the product $A_kA_{k-1}\dots A_1$. We define the sequence $(j_n)$ by letting $j_n$ take value $i$ whenever the $n$\textsuperscript{th} projection in $A_kA_{k-1}\dots A_1$ is onto $M_i$. We define the sequence $(x_n)$ as in the statement of the theorem, so that $x_{n_k} = A_k \dots A_1 x_0$.
We will now show that the subsequence $(x_{n_k})_{k\geq1}$ does not converge in norm. This then implies that $(x_n)$ does not converge in norm either, and we are done.
We define the sequence \[(y_k) = \Big{(}\frac{x_{n_k}}{\|x_0\|}\Big{)} = (A_k\dots A_1e_1).\] By (\ref{KoPa last}), for each $k \in \mathbb{N}$, we have \[ \|y_k-e_{k+1}\| < \frac{1}{2} .\] Applying the reverse triangle inequality and the Cauchy-Schwarz inequality (and noting that $\|e_{k+1}\|=1$), we have
\begin{equation*}
\begin{aligned}
|\langle y_k,e_{k+1} \rangle | &\geq |\langle e_{k+1},e_{k+1} \rangle| - |\langle y_k - e_{k+1},e_{k+1} \rangle| \\
&= 1 - |\langle y_k - e_{k+1},e_{k+1} \rangle| \\
&\geq 1 - \|y_k - e_{k+1}\| \\
&> 1- \frac{1}{2} = \frac{1}{2}.
\end{aligned}
\end{equation*}
Now suppose for a contradiction that $y_k$ converges in norm to some limit $y$. Then there is some $m \in \mathbb{N}$ such that for $k\geq m$, \[ \|y_k-y\| < \frac{1}{4}. \] Again, applying the reverse triangle inequality and Cauchy-Schwarz inequality, we have that for $k \geq m$,
\begin{equation*}
\begin{aligned}
|\langle y,e_{k+1} \rangle | &\geq |\langle y_k, e_{k+1} \rangle | - | \langle y - y_k,e_{k+1} \rangle| \\
& \geq |\langle y_k, e_{k+1} \rangle | - \|y-y_k\| \\
& > \frac{1}{2} - \frac{1}{4} = \frac{1}{4}.
\end{aligned}
\end{equation*}
We therefore have, by Bessel's inequality, that
\begin{equation*}
\|y\|^2 \geq \sum_{k=1}^\infty |\langle y,e_k \rangle|^2 \geq \sum_{k=m}^\infty |\langle y,e_{k+1} \rangle|^2 \geq \sum_{k=m}^\infty \frac{1}{16} = \infty,
\end{equation*}
a contradiction. Hence $(y_k)$ does not converge in norm. Therefore $(x_{n_k})$ does not converge in norm, and so neither does $(x_n)$. This completes the proof.
\end{proof}
\subsection{An extension}
In fact, Kopeck\'a and Paszkiewicz went on to prove that there exist three closed subspaces in $H$ such that for any non-zero vector $x_0 \in H$, there is some sequence of projections $(j_n)$ for which $(x_n)$ does not converge in norm \cite{KoPa17}. In particular, here we begin by choosing three subspaces, and then given a non-zero vector $x_0 \in H$, we find an appropriate sequence $(j_n)$. This is in contrast with Theorem \ref{big result kopecka}, where we first find a sequence $(j_n)$, and then given a non-zero vector $x_0 \in H$, we find appropriate subspaces.
The main idea of the proof is showing the following. Suppose we have three closed subspaces $X_1,X_2,X_3 \subset H$, a non-zero vector $x_0 \in H$, and a sequence of projections $(j_n)$ such that $(x_n)$ does not converge in norm (which we know is possible by Theorem \ref{big result kopecka}). Then we may find a closed infinite-dimensional subspace $L$ of $H$ such that for every non-zero $y_0 \in L$, there exists a sequence $(k_n)$ taking values in $\{1,2,3\}$ such that the sequence given by
\begin{equation*}
y_n = P_{Y_{k_n}}y_{n-1}, \quad n\geq 1,
\end{equation*}
does not converge in norm, where $Y_i = X_i \cap L$ for $i \in \{1,2,3\}$.
The proof is technical, non-constructive, and not directly relevant to our focus, so we omit it.
We end the section with a brief remark. As mentioned in Section \ref{concluding remarks}, Sakai's paper \cite{Sak95} ends by posing the following open question. For an arbitrary sequence $s = (j_n)$, does (\ref{key inequality}) always hold with $A=J-1$? That is, does
\begin{equation*}
\|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2
\end{equation*}
hold with $A=J-1$, and any $n \geq m \geq 1$?
In light of Theorem \ref{big result kopecka}, this is easily resolved. We find a sequence $s$ for which (\ref{key inequality}) does not hold for any constant $A$.
\begin{corollary} \label{Sakai open question}
There exists a sequence $s$, and closed subspaces $M_1,M_2,M_3$ in $H$ such that there is no constant $A$ for which
\begin{equation*}
\|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2
\end{equation*}
holds for all $n>m\geq1$.
\end{corollary}
\begin{proof}
Let $s=(j_n)$ be the sequence in Theorem \ref{big result kopecka}. We pick a vector $x_0 \in H$, and choose $M_1$, $M_2$, and $M_3$ according to this theorem. Suppose for a contradiction there is such a constant $A$. Then by Lemma \ref{key result sakai}, we have that $x_n$ converges in norm, contradicting Theorem \ref{big result kopecka}.
\end{proof}
\section{Concluding remarks}
There is a lot of interesting mathematics related to the method of alternating projections that we could not fit into this dissertation. Two main areas we have not covered are the behaviour of alternating projections onto closed convex subsets rather than closed subspaces, and the rate of convergence of the method.
\subsection{Closed convex subsets}
There are many extensions of the method of alternating projections. These include considering closed convex subsets rather than closed subspaces, contractions rather than projections, or generalising results to Banach spaces with certain properties (for example, considering a uniformly convex Banach space instead of a Hilbert space; see \cite{BaLy10, BaSe17, BrRe77}). Here, we give a brief summary of known results when we have closed convex subsets.
We begin by remarking that we can indeed define a projection $P$ onto a closed convex subset $C$ of $H$. By the Hilbert projection theorem, for any $x \in H$, there exists a unique $y \in C$ minimising $\|x-y\|$ over $C$. We define the projection $P_C$ of $x$ onto $C$ by $P_C(x) = y$.
In 1965, Bregman proved that any sequence of periodic projections converges weakly to an element in the intersection of the closed convex subsets (assuming the intersection is non-empty) \cite{Bre65}. We note that the intersection of a finite number of closed convex subsets is also closed and convex. However, as opposed to the case of closed subspaces, the point we converge to need not be the projection onto the intersection of the closed convex subsets. We offer an example to illustrate this.
\begin{figure}
\caption{An example of two closed convex subsets where $(x_n)$ does not converge to the projection of $x_0$ onto the intersection of the two subsets}
\label{intersection projection}
\end{figure}
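The phenomenon in the figure can also be seen numerically. The following is a minimal sketch (our own illustrative choice of sets, not taken from \cite{Bre65}): with $C_1$ the closed unit disk, $C_2$ the half-plane $\{(x,y) : x\geq0\}$, and $x_0=(-2,1)$, the alternating projections stabilise at $(0,1/\sqrt{5})$, a point of $C_1\cap C_2$, whereas the nearest point of $C_1\cap C_2$ to $x_0$ is $(0,1)$.

```python
import math

def proj_disk(p):
    # projection onto the closed unit disk C1 = {p : |p| <= 1}
    n = math.hypot(p[0], p[1])
    return p if n <= 1 else (p[0] / n, p[1] / n)

def proj_half(p):
    # projection onto the closed half-plane C2 = {(x, y) : x >= 0}
    return (max(p[0], 0.0), p[1])

x0 = (-2.0, 1.0)
x = x0
for _ in range(50):
    x = proj_half(proj_disk(x))

limit = x              # fixed point of the alternating scheme, (0, 1/sqrt(5))
nearest = (0.0, 1.0)   # P_{C1 ∩ C2}(x0), found by direct minimisation

d_limit = math.dist(x0, limit)
d_nearest = math.dist(x0, nearest)
```

Since $d_{\text{nearest}} < d_{\text{limit}}$, the limit of the iteration is not the projection of $x_0$ onto $C_1 \cap C_2$.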
We note that Bregman's result implies that we have convergence in norm for periodic projections when $H$ is finite-dimensional, since in this case, convergence in norm and weak convergence are equivalent. In fact, an identical argument to our proof of Lemma \ref{Sakai mistake} shows that it is enough for only one of the closed subsets to be contained in a finite-dimensional space.
As in the case for closed subspaces, it is natural to ask if we always have convergence in norm. In 2004, Hundal constructed an example of two closed convex subsets $C_1$ and $C_2$, intersecting only at the origin, such that $(P_{C_2}P_{C_1})^n$ converges weakly to $0$ (by Bregman's result), but does not converge in norm \cite{Hun04}. In fact, this proof was an important input towards Paszkiewicz's construction of five subspaces, a non-zero vector $x_0 \in H$, and a sequence $(j_n)$ such that $(x_n)$ does not converge in norm \cite{KoPa17, Pas12}.
\subsection{Rates of convergence}
Let $H=\mathbb{R}^2$, and let $\theta \in (0,\pi/2)$ be fixed. We consider the two closed subspaces
\begin{align*}
M_1 &= \big{\{}(x,y) \in \mathbb{R}^2 : x=y\big{\}}, \\
M_2 &= \big{\{}(t\cos \theta,t \sin \theta) \in \mathbb{R}^2 : t \in \mathbb{R}\big{\}}.
\end{align*}
Our example in the introduction is the case $\theta = \pi/4$. Looking at Figure \ref{alternating projections}, it is not surprising that if we increase the angle $\theta$, we converge faster to $M = M_1\cap M_2 = \{0\}$.
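A short numerical sketch of this example (our own code, with illustrative values of $\theta$) makes the dependence on the angle explicit: once the iterate lies in $M_2$, each further round of alternating projections contracts its norm by exactly the factor $\cos^2(\theta-\pi/4)$, so a larger angle between the lines gives faster convergence to $M=\{0\}$.

```python
import math

def proj_line(p, angle):
    # orthogonal projection of the point p onto the line through 0 at `angle`
    u = (math.cos(angle), math.sin(angle))
    t = p[0] * u[0] + p[1] * u[1]
    return (t * u[0], t * u[1])

def norm_after(theta, rounds, x0=(1.0, 0.0)):
    # norm of the iterate after `rounds` rounds of alternating projections
    # onto M1 (angle pi/4) and M2 (angle theta), starting from x0
    x = x0
    for _ in range(rounds):
        x = proj_line(proj_line(x, math.pi / 4), theta)
    return math.hypot(x[0], x[1])

slow = norm_after(math.pi / 3, 10)       # lines 15 degrees apart
fast = norm_after(5 * math.pi / 12, 10)  # lines 30 degrees apart

# per-round contraction factor for theta = pi/3 (after the first round)
ratio = norm_after(math.pi / 3, 6) / norm_after(math.pi / 3, 5)
```

Here `ratio` agrees with $\cos^2(\pi/12)$, and `fast` is far smaller than `slow`.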
What may be more surprising is that we may extend this idea to define the notion of an angle between subspaces.
The Friedrichs angle between two closed subspaces $M_1$ and $M_2$ of $H$ is defined to be the angle in $[0,\frac{\pi}{2}]$ whose cosine is given by
\begin{equation*}
c(M_1,M_2) = \sup \{ |\langle x,y \rangle | : x \in M_1 \cap M^\perp \cap B_H, \, y\in M_2 \cap M^\perp \cap B_H \},
\end{equation*}
where $M = M_1 \cap M_2$ and $B_H$ denotes the closed unit ball of $H$.
It is known that for all $n\geq1$, we have
\begin{equation*}
\|(P_{M_2}P_{M_1})^n - P_M\| = c(M_1,M_2)^{2n-1}.
\end{equation*}
The upper bound was proved by Aronszajn \cite{Aro50}, and equality by Kayalar and Weinert \cite{KaWe88}. Hence, letting $T=P_{M_2}P_{M_1}$, we have that $T^n$ converges uniformly (in operator norm) to $P_M$ if and only if $c(M_1,M_2)<1$ (i.e. the Friedrichs angle between $M_1$ and $M_2$ is positive). When this happens, $T^n$ converges uniformly to $P_M$ at a geometric rate, in the sense that there exist $C \geq 0 $ and $\alpha \in (0,1)$ such that
\begin{equation*}
\|T^n - P_M\| \leq C\alpha^n, \quad n\geq 1.
\end{equation*}
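The identity of Kayalar and Weinert can be checked numerically for the two lines $M_1$ and $M_2$ above, where $M=\{0\}$, $P_M = 0$, and $c(M_1,M_2)=\cos(\theta-\pi/4)$. The sketch below (our own code) verifies $\|(P_{M_2}P_{M_1})^n\| = c(M_1,M_2)^{2n-1}$ for small $n$.

```python
import math

def proj_matrix(angle):
    # 2x2 matrix of the orthogonal projection onto the line at `angle`
    c, s = math.cos(angle), math.sin(angle)
    return [[c * c, c * s], [c * s, s * s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_norm(A):
    # operator norm: square root of the largest eigenvalue of A^T A
    M = matmul([[A[0][0], A[1][0]], [A[0][1], A[1][1]]], A)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2)

theta = math.pi / 3
T = matmul(proj_matrix(theta), proj_matrix(math.pi / 4))  # T = P_{M2} P_{M1}
c = math.cos(theta - math.pi / 4)  # cosine of the Friedrichs angle, as M = {0}

powers = []
Tn = T
for n in range(1, 5):
    powers.append((spectral_norm(Tn), c ** (2 * n - 1)))
    Tn = matmul(Tn, T)
```

Each pair in `powers` agrees to machine precision, in line with the identity above.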
It turns out that $c(M_1,M_2)=1$ can only happen in an infinite-dimensional space. When $c(M_1,M_2)=1$, we do not have uniform convergence, but we still have strong convergence (for every $x \in H$, $\|T^nx - P_Mx\| \to 0$) by Theorem \ref{von neumann} (von Neumann). In 2009, Bauschke, Deutsch and Hundal \cite{BaDeHu09} proved that in this case, convergence can be arbitrarily slow, in the sense that for any monotonically decreasing sequence $(\lambda_n)$ in $[0,1]$ tending to $0$, there exists $x_\lambda \in H$ such that
\begin{equation*}
\|T^n(x_\lambda) - P_M(x_\lambda)\| \geq \lambda_n, \quad n\geq 1.
\end{equation*}
Hence we have a dichotomy:
\begin{equation*}
\begin{aligned}
&c(M_1,M_2) < 1 \implies \textnormal{ convergence at a uniform geometric rate}, \\
&c(M_1,M_2) = 1 \implies \textnormal{ arbitrarily slow convergence}.
\end{aligned}
\end{equation*}
In 2012, Badea, Grivaux and M\"uller \cite{BaGrMu12} extended the notion of Friedrichs angle and the results discussed above to the case of $J\geq2$ closed subspaces $M_1,\dots,M_J$. In particular, the same dichotomy still holds, except with $c(M_1,M_2)$ replaced by $c(M_1,\dots, M_J)$.
The most recent result concerning the rate of convergence is the following. Let $M = \bigcap_{j=1}^J M_j$ be the intersection of $J$ closed subspaces, and $T=P_{M_J}\dots P_{M_1}$. In 2017, Badea and Seifert \cite{BaSe16} proved that there exists a dense subspace $H_0$ of $H$ such that for any $x_0 \in H_0$, we have
\begin{equation*}
\|T^n(x_0) - P_M(x_0)\| = o(n^{-k}) \quad \textnormal{for every } k\geq1.
\end{equation*}
They referred to this as `super-polynomially fast' convergence \cite{BaSe16}. Their result tells us that given $\varepsilon>0$, even in the bad case where $c(M_1,\dots, M_J)=1$, if we pick an initial point where we have slow convergence, we are a distance of less than $\varepsilon$ away from a point where we have super-polynomially fast convergence.
For applications, it would be useful to be able to get a better idea of where the points (elements of $H$) that give fast and slow convergence are located. However, fairly little is known about this. Nevertheless, there is a conjecture by Deutsch and Hundal as to where points that give slow convergence can be found \cite{DeHu10}. The paper proves equivalent conditions for $c(M_1,\dots,M_J)<1$, from which it follows that
\begin{equation*}
c(M_1,\dots, M_J) = 1 \iff \sum_{j=1}^J M_j^\perp \textnormal{ is not closed in } H.
\end{equation*}
In this case, we know that given a monotonically decreasing sequence $(\lambda_n)$ in $[0,1]$ tending to $0$, there exists $x_\lambda \in H$ such that
\begin{equation*}
\|T^n(x_\lambda) - P_M(x_\lambda)\| \geq \lambda_n, \quad n\geq 1.
\end{equation*}
Deutsch and Hundal's conjecture is that for $(\lambda_n)$ tending to $0$ sufficiently slowly,
\begin{equation*}
x_\lambda \in M^\perp \setminus \sum_{j=1}^J M_j^\perp.
\end{equation*}
This would be useful in knowing how to avoid points where we have slow convergence, but it remains to be seen if this conjecture is true.
\subsection{Conclusion}
In this dissertation, we presented proofs of some well known results concerning the method of alternating projections. These include an original proof of von Neumann's theorem \cite{von49}, a clarification of a remark in Sakai's paper \cite{Sak95}, and a simplification of Amemiya and Ando's proof \cite{AmAn65} for the case of orthogonal projections. The key results are that $(x_n)$ always converges weakly, that $(x_n)$ converges in norm when $(j_n)$ is quasiperiodic (and in particular periodic), and that we may find a sequence $(j_n)$ such that for any given vector $x_0 \in H$, we may find three closed subspaces intersecting only at the origin, for which $(x_n)$ does not converge in norm.
Beyond those mentioned in the dissertation, we do not know of any other results regarding the convergence of $(x_n)$. In particular, given a sequence $(j_n)$ that is not quasiperiodic, and with none of the closed subspaces $M_j$ finite-dimensional, no further results are available to determine whether $(x_n)$ converges in norm. Whether in the future we will be able to say more about the convergence of $(x_n)$ remains to be seen.
\renewcommand{\abstractname}{Acknowledgements}
\begin{abstract}
\thispagestyle{plain}
I would like to thank David Seifert for sparking my interest in the method of alternating projections, and for taking the time to supervise my dissertation.
\end{abstract}
\nocite{*}
\end{document}
\begin{document}
\title{Probing nonclassicality with matrices of phase-space distributions}
\author{Martin Bohmann}
\email{[email protected]}
\affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\affiliation{QSTAR, INO-CNR, and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy}
\orcid{0000-0003-3857-4555}
\author{Elizabeth Agudelo}
\affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\orcid{0000-0002-5604-9407}
\author{Jan Sperling}
\affiliation{Integrated Quantum Optics Group, Applied Physics, Paderborn University, 33098 Paderborn, Germany}
\orcid{0000-0002-5844-3205}
\maketitle
\begin{abstract}
We devise a method to certify nonclassical features via correlations of phase-space distributions by unifying the notions of quasipro\-babilities and matrices of correlation functions.
Our approach complements and extends recent results that were based on Chebyshev's integral inequality [\href{https://doi.org/10.1103/PhysRevLett.124.133601}{Phys. Rev. Lett. \textbf{124}, 133601 (2020)}].
The method developed here correlates arbitrary phase-space functions at arbitrary points in phase space, inclu\-ding multimode scenarios and higher-order correlations.
Furthermore, our approach provides necessary and sufficient nonclassicality criteria, applies to phase-space functions beyond $s$-parametrized ones, and is accessible in experiments.
To demonstrate the power of our technique, the quantum characteristics of discrete- and continuous-variable, single- and multimode, as well as pure and mixed states are certified only employing second-order correlations and Husimi functions, which always resemble a classical probability distribution.
Moreover, nonlinear generalizations of our approach are studied.
Therefore, a versatile and broadly applicable framework is devised to uncover quantum properties in terms of matrices of phase-space distributions.
\end{abstract}
\section{Introduction}\label{sec:Introduction}
Telling classical and quantum features of a physical system apart is a key challenge in quantum physics.
Besides its fundamental importance, the notion of (quantum-optical) nonclassicality provides the basis for many applications in photonic quantum technology and quantum information \cite{KLM01,RL09a,OFJ09,KMSUZ16,SP19}.
Nonclassicality is, for example, a resource in quantum networks \cite{YBTNGK18}, quantum metrology \cite{KCTVJ19}, boson sampling \cite{SLR17}, or distributed quantum computing \cite{SLR19}.
The corresponding free (i.e., classical) operations are passive linear optical transformations and measurement.
By exceeding such operations, protocols which utilize nonclassical states can be realized.
Furthermore, nonclassicality is closely related to entanglement.
Each entangled state is nonclassical, and single-mode nonclassicality can be converted into two- and multi-mode entanglement \cite{KSBK02,VS14,KSP16}.
Consequently, a plethora of techniques to detect nonclassical properties have been developed, each coming with its own operational meanings for applications.
For example, quantumness criteria which are based on correlation functions and phase-space representations have been extensively studied in the context of nonclassical light \cite{MBWLN10,SV20}.
The description of physical systems using the phase-space formalism is one of the cornerstones of modern physics \cite{S01,ZFC05,N10}.
Beginning with ideas introduced by Wigner and others \cite{W27,W32,G46,M49}, the notion of a phase-space distribution for quantum systems generalizes principles from classical statistical theories (including statistical mechanics, chaos theory, and thermodynamics) to the quantum domain.
However, the nonnegativity condition of classical probabilities does not translate well to the quantum domain.
Rather, the notion of quasiprobabilities---i.e., normalized distributions that do not satisfy all axioms of probability distributions and particularly can attain negative values---was established and found to be the eminent feature that separates classical concepts from genuine quantum effects.
(See Refs. \cite{SW18,SV20} for thorough introductions to quasiprobabilities.)
In particular, research in quantum optics significantly benefited from the concept of phase-space quasiprobability distributions, including prominent examples, such as the Wigner function \cite{W32}, the Glauber-Sudarshan $P$ function \cite{G63,S63}, and the Husimi $Q$ function \cite{H40}.
In fact, the very definition of nonclassicality---the impossibility of describing the behavior of quantum light with classical statistical optics---is directly connected to negativities in such quasiprobabilities, more specifically, the Glauber-Sudarshan $P$ function \cite{TG65,M86}.
Because of the general success of quasiprobabilities, other phase-space distributions for light have been conceived \cite{C66,CG69,AW70}, each coming with its own advantages and drawbacks.
For example, squeezed states are represented by nonnegative (i.e., classical) Wigner functions although they form the basis for continuous-variable quantum information science and technology \cite{BL05,WPGCRSL12,ARL14}, also having a paramount role for quantum metrology \cite{GDDSSV13,T19}.
Another way of revealing nonclassical effects is by using correlation constraints which, when violated, witness nonclassicality.
Typically, such conditions are formulated in terms of inequalities involving expectation values of different observables.
Examples in optics are photon anti-bunching \cite{CW76,KM76,KDM76} and sub-Poissonian photon-number distributions \cite{M79,ZM90}, using intensity correlations, as well as various squeezing criteria, being based on field-operator correlations \cite{Y76,W83,LK87,A93}.
They can follow, for instance, from applying Cauchy-Schwarz inequalities \cite{A88} and uncertainty relations \cite{H87}, as well as from other violations of classical bounds \cite{K96,RL09,BQVC19}.
Remarkably, many of these different criteria can be jointly described via so-called matrix of moments approaches \cite{AT92,SV05a,SV06,MPHH09,SV06b}.
However, each of the mentioned kinds of nonclassicality, such as squeezed and sub-Poissonian light, requires a different (sub-)matrix of moments, a hurdle we aim at overcoming.
Over the last two decades, there have been many attempts to unify matrix-of-moments-based criteria with quasiprobabilities.
For example, the Fourier transform of the $P$ function can be used, together with Bochner's theorem, to correlate such transformed phase-space distributions through determinants of a matrix \cite{V00,RV02}, being readily available in experimental applications \cite{LS02,ZPB07,KVHDSS09,MKNPE11}, and further extending to the Laplace transformation \cite{SVA16}.
Furthermore, a joint description of field-operator moments and transformed phase-space functions has been investigated as well \cite{RSAMKHV15}.
Rather than considering matrices of phase-space quasiprobabilities, concepts like matrix-valued distributions enable us to analyze nonclassical hybrid systems \cite{WMV97,ASCBZV17}.
Very recently, a first successful strategy that truly unifies correlation functions and phase-space functions has been conceived \cite{BA19}.
However, these first demonstrations of combining phase-space distributions and matrices of moments are still restricted to rather specific scenarios.
In this contribution, we formulate a general framework for uncovering quantum features through correlations in phase-space matrices which unifies these two fundamental approaches to characterizing quantum systems.
By combining matrices of moments and quasiprobabilities, this method enables us to probe nonclassical characteristics at different points in phase space, even using different phase-space distributions at the same time.
We specifically study implications from the resulting second- and higher-order phase-space distribution matrices for single- and multimode quantum light.
Furthermore, a direct measurement scheme is proposed and non-Gaussian phase-space distributions are analyzed.
To benchmark our method, we consider a variety of examples representing vastly different types of quantum features.
In particular, we show that our matrix-based approach can certify nonclassicality even if the underlying phase-space distribution is nonnegative.
In summary, our approach renders it possible to test for nonclassicality by providing easily accessible nonclassicality conditions.
While previously derived phase-space-correlation conditions \cite{BA19} were restricted to single-mode scenarios, the present approach straightforwardly extends to multimode cases.
In addition, our phase-space matrix technique includes nonclassicality-certification approaches based on phase-space distributions and matrices of moments as special cases, resulting in an overarching structure that combines both previously separated techniques.
The paper is structured as follows.
Some initial remarks are provided in Sec. \ref{sec:Prelim}.
Our method is rigorously derived and thoroughly discussed in Sec. \ref{sec:PSM}.
Section \ref{sec:GenImp} concerns several generalizations and potential implementations of our toolbox.
Various examples are analyzed in Sec. \ref{sec:Ex}.
Finally, we conclude in Sec. \ref{sec:Conclusion}.
\section{Preliminaries}\label{sec:Prelim}
In their seminal papers \cite{G63,S63}, Glauber and Sudarshan showed that all quantum states of light can be represented diagonally in a coherent-state basis through the Glauber-Sudarshan $P$ distribution.
Specifically, a single-mode quantum state can be expanded as
\begin{align}
\label{eq:GSrepresentation}
\hat \rho=\int d^{2}\alpha\, P(\alpha)|\alpha\rangle\langle\alpha|,
\end{align}
where $|\alpha\rangle$ denotes a coherent state with a complex amplitude $\alpha$.
Then, classical states are identified as statistical (i.e., incoherent) mixtures of pure coherent states, which resemble the behavior of a classical harmonic oscillator most closely \cite{S26,H85}.
For this diagonal representation to exist for nonclassical states as well, the Glauber-Sudarshan distribution has to exceed the class of classical probability distributions \cite{TG65,M86}, particularly violating the nonnegativity constraint, $P\ngeq 0$.
This classification into states which have a classical correspondence and those which are genuinely quantum is the common basis for certifying nonclassical light.
As laid out in the introduction, nonclassicality is a vital resource for utilizing quantum phenomena, ranging from fundamental to applied \cite{YBTNGK18,KCTVJ19,SLR17,SLR19}.
In this context, it is worth adding that, in contrast to other notions of quantumness, nonclassicality is benchmarked against a classical wave theory.
That is, it is essential to discern nonclassical coherence phenomena from those which are accessible with classical statistical optics, as formalized through Eq. \eqref{eq:GSrepresentation} with $P\geq0$.
See, e.g., Ref. \cite{RSG19} for a recent experiment that separates classical and quantum interference effects in such a manner.
For instance, the free operations, i.e., those maps which preserve classical states, include beam-splitter transformations; consequently, entanglement, which is vital for many quantum protocols, can be generated from single-mode nonclassical states via such a free operation \cite{KSBK02,VS14,KSP16}.
\subsection{Phase-space distributions}
Since the Glauber-Sudarshan distribution can be a highly singular distribution (see, e.g., Ref. \cite{S16}), generalized phase-space functions have been devised.
Within the wide range of quantum-optical phase-space representations, the family of $s$-parametrized distributions \cite{CG69,AW70} is of particular interest.
Such distributions can be expressed as
\begin{align}
\label{eq:RelExpPSD}
P(\alpha;\sigma)
=\frac{\sigma}{\pi}
\left\langle {:}\exp\left(
-\sigma\hat n(\alpha)
\right){:}\right\rangle,
\end{align}
where colons indicate normal ordering \cite{VW06} and $\hat n(\alpha)=(\hat a-\alpha)^\dagger (\hat a-\alpha)$ is the displaced photon-number operator, written in terms of bosonic annihilation and creation operators, $\hat a$ and $\hat a^\dag$, respectively.
It is worth recalling that the normal ordering acts on the expression surrounded by the colons in such a way that creation operators are arranged to the left of annihilation operators whilst ignoring commutation relations.
Note that, for convenience, we parametrize distributions via the width parameter $\sigma$, rather than using $s$.
Both are related via
\begin{equation}
\label{eq:sTOsigma}
\sigma=\frac{2}{1-s}.
\end{equation}
From this relation, we can identify the Husimi function, $Q(\alpha)=P(\alpha;1)$, for $s=-1$ and $\sigma=1$; the Wigner function, $W(\alpha)=P(\alpha;2)$, for $s=0$ and $\sigma=2$; and the Glauber-Sudarshan function, $P(\alpha)=P(\alpha;\infty)$, for $s=1$ and $\sigma=\infty$.
Whenever a phase-space distribution contains a negative contribution, i.e., $P(\alpha;\sigma)<0$ for at least one pair $(\alpha;\sigma)$, the underlying quantum state is nonclassical \cite{TG65,M86}.
In such a case, the distribution $P(\alpha;\sigma)$ is referred to as a quasiprobability distribution, which is incompatible with classical probability theory.
Nonetheless, for any $\sigma\geq0$ and any state, this function represents a real-valued distribution which is normalized, $P(\alpha;\sigma)=P(\alpha;\sigma)^\ast$ and $\int d^2\alpha\, P(\alpha;\sigma)=1$.
In addition, it is worth mentioning that the normalization of the state is guaranteed through the limit
\begin{equation}
\lim_{\sigma\to 0}\frac{\pi}{\sigma} P(\alpha;\sigma)=\langle{:}\exp(0){:}\rangle=\langle\hat 1\rangle=1.
\end{equation}
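As a concrete illustration (our own, not part of the original derivation), the full $\sigma$ family can be written in closed form for the single-photon state $|1\rangle$: rewriting the standard Laguerre expression for the $s$-parametrized distributions of Fock states via $\sigma=2/(1-s)$ gives $P_1(\alpha;\sigma)=(\sigma/\pi)e^{-\sigma|\alpha|^2}(1-\sigma+\sigma^2|\alpha|^2)$. A short numerical sketch checks the special cases and the normalization stated above; all names are local to the snippet.

```python
import numpy as np

def P_fock1(alpha, sigma):
    # sigma-parametrized distribution of the single-photon state |1>,
    # P(alpha; sigma) = (sigma/pi) e^{-sigma|alpha|^2} (1 - sigma + sigma^2 |alpha|^2),
    # i.e. the standard Laguerre form for Fock states rewritten via sigma = 2/(1-s)
    a2 = np.abs(alpha) ** 2
    return sigma / np.pi * np.exp(-sigma * a2) * (1.0 - sigma + sigma**2 * a2)

# sigma = 1: the (always nonnegative) Husimi function Q(alpha) = |alpha|^2 e^{-|alpha|^2}/pi
assert np.isclose(P_fock1(0.7, 1.0), 0.49 * np.exp(-0.49) / np.pi)

# sigma = 2: the Wigner function, negative at the origin -- a nonclassical signature
assert np.isclose(P_fock1(0.0, 2.0), -2.0 / np.pi)

# the distribution stays normalized for every width (radial trapezoidal integration)
r = np.linspace(0.0, 10.0, 20001)
for sigma in (0.5, 1.0, 2.0, 5.0):
    f = P_fock1(r, sigma) * 2.0 * np.pi * r
    integral = np.sum((f[:-1] + f[1:]) / 2.0) * (r[1] - r[0])
    assert np.isclose(integral, 1.0, atol=1e-5)
```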
\subsection{Matrix of moments approach}
Besides phase-space distributions, a second family of nonclassicality criteria is based on correlation functions; see, e.g., Refs. \cite{SRV05,SV05} for introductions.
For this purpose, we can consider an operator function $\hat f=f(\hat a, \hat a^\dagger)$.
Then,
\begin{align}
\label{eq:fdagf}
\langle{:}\hat f^\dagger\hat f{:}\rangle
=\int d^2\alpha\, P(\alpha)|f(\alpha,\alpha^*)|^2
\stackrel{\text{cl.}}{\geq}0
\end{align}
holds true for all $P\geq 0$.
Now, one can expand $\hat f$ in terms of a given set of operators, e.g., $\hat f=\sum_{i} c_i \hat O_i$, resulting in $\langle{:}\hat f^\dag\hat f{:}\rangle=\sum_{i,j} c_i^\ast c_j \langle {:}\hat O_i^\dag\hat O_j{:}\rangle$.
Furthermore, this expression is nonnegative [cf. Eq. \eqref{eq:fdagf}] iff the matrix $(\langle {:}\hat O_i^\dag\hat O_j{:}\rangle)_{i,j}$ is positive semidefinite.
This constraint can, for example, be probed using Sylvester's criterion \cite{HJ90}, which states that a Hermitian matrix is positive definite if and only if all its leading principal minors are positive.
It is worth mentioning that Eq. \eqref{eq:fdagf} defines the notion of a nonclassicality witness, where $\langle{:}\hat f^\dagger\hat f{:}\rangle<0$ certifies nonclassicality.
The above observations form the basis for many experimentally accessible nonclassicality criteria, such as using basis operators which are powers of quadrature operators \cite{AT92}, photon-number operators \cite{A93}, and general creation and annihilation operators \cite{SRV05,SV05}.
See Refs. \cite{MBWLN10} for an overview of moment-based inequalities.
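To make the matrix-of-moments construction concrete, the following minimal sketch (our own illustration, using the operator basis $\{\hat 1,\hat n\}$, so that $\langle{:}\hat O_i^\dag\hat O_j{:}\rangle=\langle\hat a^{\dag(i+j)}\hat a^{i+j}\rangle$) shows how the sub-Poissonian statistics of a Fock state violate the classicality constraint:

```python
import numpy as np

dim = 12
a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)   # truncated annihilation operator
ad = a.conj().T

def fock_expval(op, n):
    # expectation value <n| op |n> in the truncated Fock basis
    ket = np.zeros(dim)
    ket[n] = 1.0
    return float(ket @ op @ ket)

n_fock = 3
# matrix of normally ordered moments for the basis {1, n}:
# M_ij = <: O_i^dag O_j :> = <a^{dag(i+j)} a^{i+j}>
M = np.array([[fock_expval(np.linalg.matrix_power(ad, i + j)
                           @ np.linalg.matrix_power(a, i + j), n_fock)
               for j in range(2)] for i in range(2)])

# for |3>: M = [[1, 3], [3, 6]], so det(M) = -3 < 0, i.e. the
# sub-Poissonian statistics of the Fock state certify nonclassicality
assert np.allclose(M, [[1.0, 3.0], [3.0, 6.0]])
assert np.linalg.det(M) < 0
```

A negative leading principal minor of this matrix is exactly a violation of the classical bound in Eq. \eqref{eq:fdagf}.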
In the following, we are going to combine the phase-space distribution technique with the method of matrices of moments to arrive at the sought-after unifying approach of both techniques.
\section{Matrix of phase-space distributions}\label{sec:PSM}
Both phase-space distributions and matrices of moments exhibit a rather dissimilar structure when it comes to formulating constraints for classical light.
Consequently, a full unification of both approaches is missing to date, apart from the few attempts mentioned in Sec. \ref{sec:Introduction}.
In this section, we bridge this gap and derive a matrix of phase-space distributions which leads to previously unknown nonclassicality criteria, also overcoming the limitations of earlier methods.
\subsection{Derivation}
For the purpose of deriving our criteria, we consider an operator function $\hat f=\sum_i c_i \exp[-\sigma_i\hat n(\alpha_i)]$.
Then, the normally ordered expectation value of $\hat f^\dag\hat f$ can be expanded as
\begin{align}
\label{eq:QuadraticForm}
\begin{aligned}
&\langle{:}\hat f^\dagger \hat f{:}\rangle
=\sum_{i,j} c_i^\ast c_j\langle{:}e^{-\sigma_i\hat n(\alpha_i)}e^{-\sigma_j\hat n(\alpha_j)}{:}\rangle
\\
=&\sum_{i,j} c_i^\ast c_j
\exp\left[-\frac{\sigma_i\sigma_j}{\sigma_i+\sigma_j}|\alpha_i-\alpha_j|^2\right]
\\ &\times
\left\langle{:}\exp\left[
-(\sigma_i+\sigma_j)\,\hat n \left(\frac{\sigma_i\alpha_i+\sigma_j\alpha_j}{\sigma_i+\sigma_j}\right)
\right]{:}\right\rangle.
\end{aligned}
\end{align}
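Within the colons, operators commute, so Eq. \eqref{eq:QuadraticForm} rests on completing the square in the exponents. Evaluated in a coherent state $|\beta\rangle$, where $\langle\beta|{:}f(\hat a^\dag,\hat a){:}|\beta\rangle=f(\beta^\ast,\beta)$, it reduces to a c-number identity that can be spot-checked numerically; a sketch (all symbols local to the snippet):

```python
import numpy as np

# c-number form of the exponent bookkeeping behind the quadratic form:
#   s1|b - a1|^2 + s2|b - a2|^2 = ts|a1 - a2|^2 + S|b - A|^2,
# with S = s1 + s2, ts = s1*s2/S, and A = (s1*a1 + s2*a2)/S
rng = np.random.default_rng(1)
for _ in range(100):
    a1, a2, b = rng.normal(size=3) + 1j * rng.normal(size=3)
    s1, s2 = rng.uniform(0.1, 3.0, size=2)
    S = s1 + s2                       # total width Sigma
    ts = s1 * s2 / S                  # reduced width sigma-tilde
    A = (s1 * a1 + s2 * a2) / S       # barycenter of the coherent amplitudes
    lhs = s1 * abs(b - a1) ** 2 + s2 * abs(b - a2) ** 2
    rhs = ts * abs(a1 - a2) ** 2 + S * abs(b - A) ** 2
    assert np.isclose(lhs, rhs)
```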
Based on the above relation, we may define two matrices,
one for classical amplitudes,
\begin{align}
M^\mathrm{(c)}=\left(
\exp\left[-\frac{\sigma_i\sigma_j}{\sigma_i+\sigma_j}|\alpha_i-\alpha_j|^2\right]
\right)_{i,j},
\end{align}
and one for the quantum-optical expectation values,
\begin{align}
\begin{aligned}
M^\mathrm{(q)}=&\left(
\left\langle{:}\exp\left[
-(\sigma_i+\sigma_j)\,\hat n \left(\frac{\sigma_i\alpha_i+\sigma_j\alpha_j}{\sigma_i+\sigma_j}\right)
\right]{:}\right\rangle
\right)_{i,j}
\\ =&
\left(\frac{\pi}{\sigma_i+\sigma_j}\,P\left(\frac{\sigma_i\alpha_i+\sigma_j\alpha_j}{\sigma_i+\sigma_j};\sigma_i+\sigma_j\right)\right)_{i,j},
\end{aligned}
\end{align}
which can be expressed in terms of phase-space distributions using Eq. \eqref{eq:RelExpPSD}.
Specifically, $M^\mathrm{(q)}$ corresponds to a matrix of phase-space distributions.
Moreover, the fact that the normally ordered expectation value of $\hat f^\dag\hat f$ is nonnegative for classical light [Eq. \eqref{eq:fdagf}] is then identical to the entry-wise product (i.e., the Hadamard product $\circ$) of both matrices being positive semidefinite,
\begin{align}
M\stackrel{\text{cl.}}{\geq}0,
\text{ with }
M=M^\mathrm{(c)}\circ M^\mathrm{(q)}
\end{align}
defining our phase-space matrix $M$.
For classical light, all principal minors of $M$ have to be nonnegative according to Sylvester's criterion.
Conversely, the violation of this constraint certifies a nonclassical state,
\begin{align}
\label{eq:NclPSM}
\det(M)<0,
\end{align}
where $M$ is defined through arbitrarily small or large sets of parameters $\sigma_i$ and $\sigma_j$ and coherent amplitudes $\alpha_i$ and $\alpha_j$.
Therefore, inequality \eqref{eq:NclPSM} enables us to formulate various nonclassicality conditions which correlate distinct phase-space distributions, as is typically done only for matrix-of-moments-based techniques when using different kinds of observables.
We finally remark that the expression in Eq. \eqref{eq:NclPSM} resembles a nonlinear nonclassicality witnessing approach.
As a first example, we may explore the first-order criterion, i.e., a $1\times 1$ matrix of quasiprobabilities.
Selecting arbitrary $\sigma$ parameters and coherent amplitudes, i.e., $(\alpha_1;\sigma_1)=(\alpha;\sigma)$, we find the following restriction for classical states [cf. Eq. \eqref{eq:NclPSM}]:
\begin{align}\label{eq:1x1}
\frac{\pi}{2\sigma}P(\alpha;2\sigma)\stackrel{\text{cl.}}{\geq}0.
\end{align}
This inequality reflects the fact that finding negativities in a parametrized phase-space distribution $P(\alpha;2\sigma)$ is sufficient to certify nonclassicality.
Also recall that we retrieve the Glauber-Sudarshan distribution in the limit $\sigma\to\infty$.
Since negativities in this distribution define the very notion of a nonclassical state \cite{TG65,M86}, we can conclude from this example that our approach is necessary and sufficient for certifying nonclassicality.
However, the Glauber-Sudarshan distribution has the disadvantage of being highly singular for many relevant nonclassical states of light and, thus, hard to reconstruct from experimental data.
Consequently, it is of practical importance (see Secs. \ref{sec:GenImp} and \ref{sec:Ex}) to consider higher-order criteria beyond this trivial one.
\subsection{Second-order criteria}
We begin our consideration with an interesting second-order case.
We choose $(\alpha_1;\sigma_1)=(0;0)$ and $(\alpha_2;\sigma_2)=(\alpha;\sigma)$.
This yields the $2\times 2$ phase-space matrix
\begin{align}
\label{eq:2x2specific}
M=\begin{pmatrix}
1 & \langle{:}\exp(-\sigma \hat n(\alpha)){:}\rangle
\\ \langle{:}\exp(-\sigma \hat n(\alpha)){:}\rangle & \langle{:}\exp(-2\sigma \hat n(\alpha)){:}\rangle
\end{pmatrix}.
\end{align}
Up to a positive scaling, the determinant of this matrix results in the following nonclassicality criterion:
\begin{equation}
\label{eq:WQ}
P(\alpha;2\sigma)
-\frac{2\pi}{\sigma}\left(P(\alpha;\sigma)\right)^2
< 0.
\end{equation}
In particular, we can set $\sigma=1$ to relate this condition to the Wigner and Husimi functions, leading to $W(\alpha)-2\pi Q(\alpha)^2<0$.
This special case of our general approach has recently been derived in a very different manner, via Chebyshev's integral inequality \cite{BA19}.
There it was shown that, by applying the inequality \eqref{eq:WQ} for $\sigma=1$, it is possible to certify nonclassicality even if the Wigner function of the state under study is nonnegative.
In this context, remember that the Husimi function, $Q(\alpha)=\langle\alpha|\hat\rho|\alpha\rangle/\pi$, is always nonnegative, regardless of the state $\hat\rho$.
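This effect can be reproduced numerically for a squeezed vacuum state, whose Wigner function is a nonnegative Gaussian. The sketch below (our own illustration; it assumes the convention $\alpha=x+\mathrm{i}y$ in which the vacuum Wigner function has variance $1/4$ per quadrature, and the Husimi function adds a further $1/4$ of smoothing) shows that $W(\alpha)-2\pi Q(\alpha)^2<0$ along the squeezed axis:

```python
import numpy as np

def gauss2d(x, y, vx, vy):
    # normalized zero-mean Gaussian on the alpha = x + i*y phase-space plane
    return np.exp(-x**2 / (2 * vx) - y**2 / (2 * vy)) / (2 * np.pi * np.sqrt(vx * vy))

r = 1.0                                         # squeezing parameter (example value)
vx, vy = np.exp(-2 * r) / 4, np.exp(2 * r) / 4  # Wigner variances of squeezed vacuum
W = lambda x, y: gauss2d(x, y, vx, vy)          # everywhere nonnegative
Q = lambda x, y: gauss2d(x, y, vx + 0.25, vy + 0.25)  # Husimi: Wigner plus vacuum smoothing

# classical states obey W(alpha) - 2*pi*Q(alpha)^2 >= 0; at the origin the
# test is inconclusive for this state ...
assert W(0.0, 0.0) - 2 * np.pi * Q(0.0, 0.0) ** 2 >= 0
# ... but along the squeezed axis the bound is violated, certifying
# nonclassicality although the Wigner function has no negativities
assert W(1.0, 0.0) - 2 * np.pi * Q(1.0, 0.0) ** 2 < 0
```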
Beyond this scenario, we now study a more general $2\times 2$ phase-space matrix $M$.
For an efficient description, it is convenient to redefine transformed parameters as
\begin{subequations}
\begin{eqnarray}
\label{eq:TrafoAmp}
\Delta\alpha = \alpha_2-\alpha_1
&\text{ and }&
A = \frac{\sigma_1\alpha_1+\sigma_2\alpha_2}{\sigma_1+\sigma_2},
\\ \label{eq:TrafoS}
\tilde\sigma = \frac{\sigma_1\sigma_2}{\sigma_1+\sigma_2}
&\text{ and }&
\Sigma = \sigma_1+\sigma_2.
\end{eqnarray}
\end{subequations}
Note that these parameters are analogous to those of the two-body problem.
That is, the quantities in Eq. \eqref{eq:TrafoAmp} define the relative position and barycenter in phase space, respectively, and the two quantities in Eq. \eqref{eq:TrafoS} resemble the reduced and total mass in a mechanical system, respectively.
In this alternative parametrization, the two matrices, giving the total phase-space matrix $M=M^\mathrm{(c)}\circ M^\mathrm{(q)}$, read
\begin{subequations}
\begin{align}
M^\mathrm{(c)}=&\begin{pmatrix}
1 & e^{-\tilde\sigma|\Delta\alpha|^2}
\\
e^{-\tilde\sigma|\Delta\alpha|^2} & 1
\end{pmatrix}, \quad \text{and}
\\
M^\mathrm{(q)}=&\begin{pmatrix}
\langle{:} e^{-2\sigma_1\hat n(\alpha_1)} {:}\rangle
& \langle{:} e^{-\Sigma\hat n\left(A\right)} {:}\rangle
\\
\langle{:} e^{-\Sigma\hat n\left(A\right)} {:}\rangle
& \langle{:} e^{-2\sigma_2\hat n(\alpha_2)} {:}\rangle
\end{pmatrix}.
\end{align}
\end{subequations}
The determinant of the Hadamard product of both matrices then reads
\begin{align}
\label{eq:2x2}
\det(M) =
\langle{:} e^{-2\sigma_1\hat n(\alpha_1)} {:}\rangle
\langle{:} e^{-2\sigma_2\hat n(\alpha_2)} {:}\rangle\nonumber\\
{-} e^{-2\tilde\sigma|\Delta\alpha|^2}
\langle{:} e^{-\Sigma\hat n\left(A\right)} {:}\rangle^2.
\end{align}
If this determinant is negative for the state of light under study, its nonclassicality is proven.
In terms of phase-space distributions, this condition can also be recast into the form
\begin{align}
\label{eq:2x2condition}
P(\alpha_1;2\sigma_1)P(\alpha_2;2\sigma_2)
-\frac{4\tilde\sigma}{\Sigma}\left[e^{-\tilde\sigma|\Delta\alpha|^2}P(A;\Sigma)\right]^2<0.
\end{align}
Interestingly, this nonclassicality criterion correlates different points in phase space for different distributions, $P(\alpha_1;2\sigma_1)$ and $P(\alpha_2;2\sigma_2)$, with a phase-space distribution with the total width $\Sigma$ at the barycenter $A$ of coherent amplitudes, $P(A;\Sigma)$.
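As a numerical sanity check of this $2\times 2$ condition (our own illustration, using the closed-form single-photon distribution $P_1(\alpha;\sigma)=(\sigma/\pi)e^{-\sigma|\alpha|^2}(1-\sigma+\sigma^2|\alpha|^2)$), choosing $\sigma_1=\sigma_2=1/2$ means every matrix entry involves only widths $\leq1$, i.e., nothing beyond the everywhere-nonnegative Husimi function, and the determinant is nevertheless negative:

```python
import numpy as np

def P_fock1(alpha, sigma):
    # sigma-parametrized distribution of |1> (standard Laguerre form
    # for Fock states rewritten via sigma = 2/(1-s))
    a2 = abs(alpha) ** 2
    return sigma / np.pi * np.exp(-sigma * a2) * (1.0 - sigma + sigma**2 * a2)

def M_entry(ai, si, aj, sj):
    # M_ij = M^(c)_ij * M^(q)_ij: Gaussian overlap factor times the
    # phase-space distribution of total width si + sj at the barycenter
    S = si + sj
    A = (si * ai + sj * aj) / S
    ts = si * sj / S
    return np.exp(-ts * abs(ai - aj) ** 2) * np.pi / S * P_fock1(A, S)

# pairs (alpha_i; sigma_i) with sigma_1 = sigma_2 = 1/2: only Husimi data enter
pairs = [(0.0, 0.5), (1.0, 0.5)]
M = np.array([[M_entry(*p_i, *p_j) for p_j in pairs] for p_i in pairs])

assert np.all(M >= 0)         # every entry is nonnegative ...
assert np.linalg.det(M) < 0   # ... yet det(M) < 0 certifies nonclassicality
```

This mirrors the claim in the abstract that nonclassicality can be certified from Husimi functions alone.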
\subsection{Higher-order cases}\label{sec:HigherOrderCases}
The next natural extension concerns the analysis of higher-order correlations.
Clearly, one can obtain an increasingly large set of nonclassicality tests with an increasing dimensionality of $M$, determined by the number of pairs $(\alpha_i;\sigma_i)$.
In order to exemplify this potential, let us focus on one specific $3\times 3$ scenario and more general scenarios for specific choices of parameters.
\begin{widetext}
Let us first discuss the $3\times 3$ case, for which we consider $\sigma_3=0$.
From this, one obtains the following phase-space matrix:
\begin{equation}
M=\begin{pmatrix}
\langle{:}\exp(-2\sigma_1\hat n(\alpha_1)){:}\rangle & \langle{:}\exp(-\sigma_1\hat n(\alpha_1)-\sigma_2\hat n(\alpha_2)){:}\rangle & \langle{:}\exp(-\sigma_1\hat n(\alpha_1)){:}\rangle
\\
\langle{:}\exp(-\sigma_1\hat n(\alpha_1)-\sigma_2\hat n(\alpha_2)){:}\rangle & \langle{:}\exp(-2\sigma_2\hat n(\alpha_2)){:}\rangle & \langle{:}\exp(-\sigma_2\hat n(\alpha_2)){:}\rangle
\\
\langle{:}\exp(-\sigma_1\hat n(\alpha_1)){:}\rangle & \langle{:}\exp(-\sigma_2\hat n(\alpha_2)){:}\rangle & 1
\end{pmatrix}.
\end{equation}
Again, directly expressing this matrix in terms of phase-space functions, as done previously, we get a third-order nonclassicality criterion from its determinant \cite{comment:3x3}.
It reads
\begin{equation}
\begin{aligned}
\frac{\det(M)}{\pi^2} =&
\left(
\frac{P(\alpha_1;2\sigma_1)}{2\sigma_1}
-\pi\left(\frac{P(\alpha_1;\sigma_1)}{\sigma_1}\right)^2
\right)
\left(
\frac{P(\alpha_2;2\sigma_2)}{2\sigma_2}
-\pi\left(\frac{P(\alpha_2;\sigma_2)}{\sigma_2}\right)^2
\right)
\\ & -
\left(
\exp(-\tilde\sigma|\Delta\alpha|^2)\frac{P(A;\Sigma)}{\Sigma}
-\pi\frac{P(\alpha_1;\sigma_1)}{\sigma_1}\frac{P(\alpha_2;\sigma_2)}{\sigma_2}
\right)^2<0,
\end{aligned}
\end{equation}
using the parameters defined in Eqs. \eqref{eq:TrafoAmp} and \eqref{eq:TrafoS}.
In fact, this condition combines the earlier derived criteria of the forms \eqref{eq:WQ} and \eqref{eq:2x2} in a manner similar to cross-correlation nonclassicality conditions known from matrices of moments \cite{SVA16}.
\end{widetext}
Another higher-order matrix scenario corresponds to having identical coherent amplitudes, i.e., $\alpha_i=\alpha$ for all $i$.
In this case, we find that the two Hadamard-product components of the matrix $M$ simplify to
\begin{align}
\begin{aligned}
M^\mathrm{(c)}&=(1)_{i,j}\quad
\text{and}\quad\\
M^\mathrm{(q)}&=\left(
\frac{\pi}{\sigma_i+\sigma_j} P\left(\alpha;\sigma_i+\sigma_j\right)
\right)_{i,j},
\end{aligned}
\end{align}
thus resulting in $M=M^\mathrm{(q)}$.
Therefore, we can formulate nonclassicality criteria which correlate an arbitrary number of different phase-space distributions, defined via $\sigma_i$, at the same point in phase space, $\alpha$.
Analogously, one can consider a scenario in which all $\sigma$ parameters are identical, $\sigma_i=\sigma$.
Then, we get
\begin{align}
\begin{aligned}
M^\mathrm{(c)}=&(e^{-\sigma|\alpha_i-\alpha_j|^2/2})_{i,j}
\qquad\text{and}
\\
M^\mathrm{(q)}=&\left(
\frac{\pi}{2\sigma} P\left(\frac{\alpha_i+\alpha_j}{2};2\sigma\right)
\right)_{i,j}.
\end{aligned}
\end{align}
Consequently, we obtain nonclassicality criteria which correlate an arbitrary number of different points in phase space, $\alpha_i$, for a single phase-space distribution, parametrized by $\sigma$.
\subsection{Comparison with Chebyshev's integral inequality approach}
As mentioned previously, a related method based on Chebyshev's integral inequality has been introduced recently \cite{BA19}.
It also provides inequality conditions for different phase-space distributions.
The nonclassicality conditions based on Chebyshev's integral inequality take the form
\begin{align}
\label{eq:Chebyshev}
P(\alpha;\Sigma)-\frac{\Sigma}{\pi}\prod_{i=1}^{D} \left[\frac{\pi}{\sigma_i} P(\alpha;\sigma_i)\right]<0,
\end{align}
where $\Sigma=\sum_{i=1}^{D} \sigma_i$.
To compare both approaches, let us discuss their similarities and differences.
In its simplest form, involving only $\sigma_1$ and $\sigma_2$, the condition in Eq. \eqref{eq:Chebyshev} resembles the tests based on the $2\times2$ matrix in Eq. \eqref{eq:2x2specific}.
In particular, for the case $\sigma_1=\sigma_2=\sigma$, both methods yield exactly the same conditions.
For $\sigma_1\neq\sigma_2$, such an agreement of both methods cannot be found because of the inherent symmetry of the phase-space matrix approach, $M=M^\dag$, which stems from its construction via a quadratic form; cf. Eq. \eqref{eq:QuadraticForm}.
Also, for more general, higher-order conditions, i.e., $D>2$, such similarities cannot be found either.
Conditions of the form in Eq. \eqref{eq:Chebyshev} consist of only two summands.
The first term is a single phase-space function with the width parameter $\Sigma$, the largest width involved in the inequality.
The second term is a product of $D$ phase-space distributions, each with a width parameter $\sigma_i$, bound together by the condition $\Sigma=\sum_{i=1}^{D} \sigma_i$.
By comparison, our phase-space matrix approach yields, in general, a richer and more complex set of higher-order nonclassicality tests, as demonstrated in Sec. \ref{sec:HigherOrderCases}.
Let us point out further differences between the two approaches.
Firstly, we observe that the inequalities based on Chebyshev's integral inequality apply only to a single point in phase space.
In contrast, the phase-space matrix method devised here includes conditions that combine different points in phase space; cf. Eq. \eqref{eq:QuadraticForm}.
Secondly, the Chebyshev integral inequality approach cannot be extended to multimode settings.
No such limitation exists for the matrix approach, as we show in the following Sec. \ref{subsec:multimode}.
We conclude that both the technique in Ref. \cite{BA19} and our phase-space matrix approach for obtaining phase-space inequalities yield similar second-order conditions but, in general, give rise to rather different nonclassicality criteria.
In particular, the phase-space matrix framework offers a broader range of variables---be it coherent amplitudes or widths---that lead to a richer set of nonclassicality conditions.
\subsection{Extended relations to nonclassicality criteria}
To finalize our first discussions we now focus on the relation to matrices of moments.
Previously, we have shown that, already at first order, our criteria are necessary and sufficient to verify nonclassicality, and we have discussed our method in relation to Chebyshev's integral inequality.
Furthermore, indirect techniques using transformed phase-space functions, such as the characteristic function \cite{RSAMKHV15} and the two-sided Laplace transform \cite{SVA16}, have been previously related to moments.
Thus, the question arises of how our direct technique relates to such matrices of moments.
For showing that our framework includes the matrix of moments technique, we may remind ourselves that derivatives can be understood as a linear combination, specifically as a limit of a differential quotient, $\partial_z^mg(z)=\lim_{\epsilon\to0}\epsilon^{-m}\sum_{k=0}^m\binom{m}{k}(-1)^{m-k}g(z+k\epsilon)$.
This enables us to write \cite{comment:derivatives}
\begin{equation}
\hat a^{\dag m}\hat a^{n}
=\sigma^{-(m+n)}\left.\partial_{\alpha}^m\partial^n_{\alpha^\ast} e^{\sigma|\alpha|^2}{:}e^{-\sigma\hat n(\alpha)}{:}\right|_{\alpha=0 \text{ and }\sigma=0},
\end{equation}
expressing arbitrary moments $\hat a^{\dag m}\hat a^{n}$ via linear combinations of the normally ordered operators that represent $\sigma$-parametrized phase-space distributions.
Thus, in the corresponding limits, we can identify the operator $\hat f$ in Eq. \eqref{eq:fdagf} with $\hat f=\sum_{m,n}c_{m,n}\sigma^{-(m+n)}\partial_{\alpha}^m\partial^n_{\alpha^\ast} e^{\sigma|\alpha|^2}{:}e^{-\sigma\hat n(\alpha)}{:}|_{\alpha=0,\sigma=0}=\sum_{m,n}c_{m,n}\hat a^{\dag m}\hat a^{n}$.
For such a choice of $\hat f$, $\langle{:}\hat f^\dag\hat f{:}\rangle<0$ is in fact identical to the most general form of the matrix-of-moments criterion for nonclassicality \cite{SRV05,SV05}.
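As a consistency check of this differential-quotient argument (our own sketch, not from the original derivation), one can recover $\langle\hat a^\dag\hat a\rangle=|\beta|^2$ for a coherent state $|\beta\rangle$, for which $e^{\sigma|\alpha|^2}\langle{:}e^{-\sigma\hat n(\alpha)}{:}\rangle=e^{\sigma|\alpha|^2-\sigma|\beta-\alpha|^2}$ holds exactly, via finite differences:

```python
import numpy as np

beta = 0.8 - 0.3j   # coherent-state amplitude (example value)
sigma = 1e-3        # small width; the moment emerges in the limit sigma -> 0
h = 1e-2            # finite-difference step

def g(alpha):
    # e^{sigma|alpha|^2} <beta| :e^{-sigma n(alpha)}: |beta> for a coherent state
    return np.exp(sigma * abs(alpha) ** 2 - sigma * abs(beta - alpha) ** 2)

# d_alpha d_alpha* = (d_x^2 + d_y^2)/4 in Wirtinger calculus; 5-point stencil
lap = (g(h) + g(-h) + g(1j * h) + g(-1j * h) - 4.0 * g(0)) / h**2
moment = lap / 4.0 / sigma**2   # approximates <a^dag a> = |beta|^2

assert np.isclose(moment, abs(beta) ** 2, rtol=1e-2)
```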
In conclusion, we find that our necessary and sufficient methodology not only includes nonclassicality criteria based on phase-space functions themselves [cf. Eq. \eqref{eq:1x1}], but it also includes the technique of matrices of moments as a special case.
In a hierarchical picture, this means that our family of nonclassicality criteria, including arbitrary orders of $\sigma$-parametrized phase-space functions, encompasses both negativities of phase-space functions and matrices of moments.
Because of the above relation, the order of moments required to certify nonclassicality also sets an upper bound on the size of the matrix of phase-space distributions needed to certify nonclassicality.
Therefore, our approach unifies and subsumes both earlier types of nonclassicality conditions.
\section{Generalizations and implementation}\label{sec:GenImp}
In this section, we generalize our approach to arbitrary multimode nonclassical light and propose a measurement scheme to experimentally access the matrix of phase-space distributions.
In addition, we show that our approach applies to phase-space distributions which are no longer limited to $\sigma$ parametrizations and relate these findings to the response of nonlinear detection devices.
\subsection{Multimode case}\label{subsec:multimode}
After our in-depth analysis of single-mode phase-space matrices, the multimode case follows almost straightforwardly.
For the purpose of such a generalization, we consider $N$ optical modes, represented via the annihilation operators $\hat a_m$ for $m=1,\ldots,N$ and extending to the displaced photon-number operators $\hat n_m(\alpha^{(m)})=(\hat a_m-\alpha^{(m)})^\dag(\hat a_m-\alpha^{(m)})$.
Now, $\sigma$-parametrized multimode phase-space functions can be expressed as
\begin{equation}
\begin{aligned}
&\left\langle{:}
e^{-\sigma^{(1)}\hat n_1(\alpha^{(1)})}
\cdots
e^{-\sigma^{(N)}\hat n_N(\alpha^{(N)})}
{:}\right\rangle
\\
=& \frac{\pi^N}{\sigma^{(1)}\cdots \sigma^{(N)}}
P(\alpha^{(1)},\ldots,\alpha^{(N)} ; \sigma^{(1)},\ldots,\sigma^{(N)}),
\end{aligned}
\end{equation}
where we allow for different $s$ parameters for each mode, with $s^{(m)}=1-2/\sigma^{(m)}$ [Eq. \eqref{eq:sTOsigma}].
As in the single-mode case, we can now formulate a matrix $M$ of multimode phase-space functions,
\begin{align*}
M{=}\left(\left\langle{:}
e^{-\sum\limits_{m=1}^N\sigma_i^{(m)}\hat n_m(\alpha_i^{(m)})}
e^{-\sum\limits_{m=1}^N\sigma_j^{(m)}\hat n_m(\alpha_j^{(m)})}
{:}\right\rangle\right)_{i,j}.
\end{align*}
Consequently, this matrix of phase-space functions also has to be positive semidefinite if the underlying state of multimode light is classical.
That is,
\begin{equation}
M\stackrel{\text{cl.}}{\geq}0
\end{equation}
holds true for classical light, for any dimension (or order) of the multimode matrix $M$, and for any values of the $\sigma$ and $\alpha$ parameters.
Conversely, $\det(M)<0$ is a nonlinear witness of multimode nonclassicality.
Similarly to the single-mode case, an increasingly large matrix $M$ with increasingly dense sets of parameters for the various $\alpha$ and $\sigma$ values then enables one to probe the nonclassicality of arbitrary multimode states.
Since we have already exemplified various scenarios for single-mode phase-space correlations, in the following, we restrict ourselves to a particular multimode case.
Specifically, we focus on two optical modes, for which the $3\times 3$ phase-space matrix $M$ reads
\begin{equation}
\begin{pmatrix}
1 & \frac{\pi P(\alpha^{(1)};\sigma)}{\sigma} & \frac{\pi P(\alpha^{(2)};\sigma)}{\sigma}
\\
\frac{\pi P(\alpha^{(1)};\sigma)}{\sigma} & \frac{\pi P(\alpha^{(1)};2\sigma)}{2\sigma} & \frac{\pi^2P(\alpha^{(1)},\alpha^{(2)};\sigma,\sigma)}{\sigma^2}
\\
\frac{\pi P(\alpha^{(2)};\sigma)}{\sigma} & \frac{\pi^2P(\alpha^{(1)},\alpha^{(2)};\sigma,\sigma)}{\sigma^2} & \frac{\pi P(\alpha^{(2)};2\sigma)}{2\sigma}
\end{pmatrix},\nonumber
\end{equation}
where quasiprobabilities as a function of single-mode parameters indicate marginal phase-space distributions.
Adopting a notation of pairs of coherent amplitudes and widths, $M$ is thus defined via the following two-mode parameters: $(\alpha^{(1)}_1,\alpha^{(2)}_1;\sigma^{(1)}_1,\sigma^{(2)}_1)=(0,0;0,0)$, $(\alpha^{(1)}_2,\alpha^{(2)}_2;\sigma^{(1)}_2,\sigma^{(2)}_2)=(\alpha^{(1)},0;\sigma,0)$, and $(\alpha^{(1)}_3,\alpha^{(2)}_3;\sigma^{(1)}_3,\sigma^{(2)}_3)=(0,\alpha^{(2)};0,\sigma)$.
In particular, we can express the nonclassicality constraint from the determinant of $M$ \cite{comment:3x3} for $\sigma=1$ via joint and marginal Wigner and Husimi functions,
\begin{align}
\begin{aligned}
\frac{\det M}{\pi^4}
=& \left[
\tfrac{W(\alpha^{(1)})}{2\pi}-Q(\alpha^{(1)})^2
\right]\left[
\tfrac{W(\alpha^{(2)})}{2\pi}-Q(\alpha^{(2)})^2
\right]
\\ &
-\left[
Q(\alpha^{(1)},\alpha^{(2)})-Q(\alpha^{(1)})Q(\alpha^{(2)})
\right]^2 \stackrel{\text{cl.}}{\geq}0.
\end{aligned}
\end{align}
Violating this inequality verifies the nonclassicality of the two-mode state under study.
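As a plausibility check of this inequality, a product of coherent states saturates the classical bound: the joint $Q$ function factorizes, and each marginal bracket $W/(2\pi)-Q^2$ vanishes identically. A minimal numerical sketch (Python; all function names are our own, not part of the formalism):

```python
import math

def W_coh(alpha, beta):
    """Wigner function of a coherent state |beta> at phase-space point alpha."""
    return (2 / math.pi) * math.exp(-2 * abs(alpha - beta) ** 2)

def Q_coh(alpha, beta):
    """Husimi Q function of a coherent state |beta> at phase-space point alpha."""
    return (1 / math.pi) * math.exp(-abs(alpha - beta) ** 2)

def detM_over_pi4(a1, a2, b1, b2):
    """det(M)/pi^4 for the product coherent state |b1, b2>."""
    t1 = W_coh(a1, b1) / (2 * math.pi) - Q_coh(a1, b1) ** 2
    t2 = W_coh(a2, b2) / (2 * math.pi) - Q_coh(a2, b2) ** 2
    # the joint Q of a product state factorizes: Q(a1, a2) = Q1(a1) * Q2(a2)
    cross = Q_coh(a1, b1) * Q_coh(a2, b2) - Q_coh(a1, b1) * Q_coh(a2, b2)
    return t1 * t2 - cross ** 2

# the classical bound >= 0 is saturated (equals zero) at every point
for a1, a2 in [(0, 0), (0.5, -0.3), (1 + 0.7j, 0.2j)]:
    assert abs(detM_over_pi4(a1, a2, 0.4, -0.9)) < 1e-12
```

Each bracket vanishes because $W(\alpha)/(2\pi)=e^{-2|\alpha-\beta|^2}/\pi^2=Q(\alpha)^2$ for a coherent state, so no violation can occur, as required for classical light.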
\subsection{Direct measurement scheme}\label{subsec:Detect}
The reconstruction of phase-space distributions can be a challenging task \cite{LR09}.
For this reason, we are going to devise a directly accessible setup to infer the phase-space matrix.
See Fig. \ref{fig:setup} for an outline which is based on the approaches in Refs. \cite{WV96,SV05,BRWK99}.
For convenience, we restrict ourselves to a single optical mode; the extension to multiple modes follows straightforwardly.
That is, each of the multiple modes can be detected individually by a correlation-measurement setup as depicted in Fig. \ref{fig:setup}.
Furthermore, it is noteworthy that our phase-space matrix approach is not limited to this specific measurement scheme proposed here and generally applies to any detection scenario which allows for a reconstruction of quasiprobability distributions.
\begin{figure}
\caption{
Outline of phase-space matrix correlation measurement.
The signal, i.e., the state $\hat\rho$ of the light field under study, is split into two identical outputs at a 50:50 beam splitter.
Each of the resulting beams is combined with a local oscillator (LO) on a $|t|^2:|r|^2$ beam splitter and measured with a photon-number-based detector, represented through $\Pi(\hat n)$.
The resulting correlations yield the entries of our phase-space matrix $M$.
}
\label{fig:setup}
\end{figure}
For the setup in Fig. \ref{fig:setup}, we begin our considerations with a coherent state $|\beta\rangle$, representing our signal $\hat\rho=|\beta\rangle\langle \beta|$.
Firstly, we split this signal equally into $2$ modes, resulting in a two-mode coherent state $|\beta/\sqrt2,\beta/\sqrt2\rangle$.
In addition, local oscillator states are prepared, $|\beta_i\rangle$ and $|\beta_j\rangle$ for each mode.
Each of the two signals is then mixed with its local oscillator on a $|t|^2{:}|r|^2$ beam splitter, where $|t|^2+|r|^2=1$.
One output of each beam splitter is discarded, namely the lower and upper one for the top and bottom path in Fig. \ref{fig:setup}, respectively.
This results in the input-output relation
\begin{align}
|\beta\rangle\mapsto \left|t\frac{\beta}{\sqrt2}+r\beta_i,t\frac{\beta}{\sqrt2}+r\beta_j\right\rangle,
\end{align}
which is then detected as follows.
Each of the resulting modes is measured with a detector or detection scheme based on photon absorption, thus being described by a positive operator-valued measure (POVM) which is diagonal in the photon-number representation \cite{KK64}.
Consequently, one or a combination of detector outcomes (e.g., in a generating-function-type combination \cite{SPBTEWLNLGVASW20}) corresponds to a POVM element of the form $\Pi(\hat n)={:}e^{-\Gamma(\hat n)}{:}$.
Using $|m\rangle\langle m|={:}e^{-\hat n}\hat n^m/m!{:}$ for an $m$-photon projector, this means that we identify $\sum_{m=0}^\infty\pi_m|m\rangle\langle m|={:}e^{-\hat n}\sum_{m=0}^\infty\pi_m \hat n^m/m!{:}=\Pi(\hat n)={:}e^{-\Gamma(\hat n)}{:}$, where the eigenvalues $\pi_m$ correspond to the Taylor expansion coefficients of the function $z\mapsto\exp[z-\Gamma(z)]$.
Accordingly, the function $\Gamma(\hat n)$ models the detector response \cite{KK64,VW06}.
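For a linear response, $\Gamma(\hat n)=\sigma\hat n$, the identification above yields $\pi_m=(1-\sigma)^m$, which can be cross-checked against the normally ordered expansion $\langle m|{:}e^{-\sigma\hat n}{:}|m\rangle=\sum_k\binom{m}{k}(-\sigma)^k$. A short consistency sketch (Python; function names are ours):

```python
from math import comb

def povm_eigenvalue(m, sigma):
    """<m|:exp(-sigma n):|m> from the normally ordered expansion:
    <m| a^dag^k a^k |m> = m!/(m-k)! gives sum_k C(m, k) (-sigma)^k."""
    return sum(comb(m, k) * (-sigma) ** k for k in range(m + 1))

def taylor_eigenvalue(m, sigma):
    """pi_m = m! * [z^m] exp(z - Gamma(z)); for Gamma(z) = sigma z this is
    m! * [z^m] exp((1 - sigma) z) = (1 - sigma)^m."""
    return (1 - sigma) ** m

sigma = 0.4
for m in range(6):
    assert abs(povm_eigenvalue(m, sigma) - taylor_eigenvalue(m, sigma)) < 1e-12
```

The agreement reflects the binomial identity $\sum_k\binom{m}{k}(-\sigma)^k=(1-\sigma)^m$, i.e., a linear-response detector simply attenuates each Fock component geometrically.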
Finally, the correlation measurement of this response for our coherent signal states takes the form
\begin{equation}
\label{eq:DetectCohState}
\begin{aligned}
M_{i,j}&=
\exp\left(
-\Gamma\left(\frac{|t|^2}{2}\left|
\beta + \frac{r\sqrt 2\beta_i}{t}
\right|^2\right)
\right)
\\ &\times
\exp\left(
-\Gamma\left(\frac{|t|^2}{2}\left|
\beta + \frac{r\sqrt 2\beta_j}{t}
\right|^2\right)
\right).
\end{aligned}
\end{equation}
Now it is convenient to define $\tilde\Gamma(\hat n)=\Gamma(|t|^2\hat n/2)$ and
\begin{equation}
\alpha_i=-\frac{r\sqrt 2\beta_{i}}{t},
\end{equation}
for all LO choices $i$ and, similarly, for $j$.
Furthermore, we generalize this treatment to arbitrary states, $\hat\rho=\int d^2\beta\, P(\beta)|\beta\rangle\langle\beta|$, using the Glauber-Sudarshan representation [Eq. \eqref{eq:GSrepresentation}].
Therefore, the correlations measured as described above [Eq. \eqref{eq:DetectCohState}] obey
\begin{equation}
M_{i,j}=\left\langle{:}
e^{-\tilde\Gamma(\hat n(\alpha_i))}
e^{-\tilde\Gamma(\hat n(\alpha_j))}
{:}\right\rangle,
\end{equation}
which corresponds to a directly measured phase-space matrix element, e.g., for a linear detector response $\tilde\Gamma(\hat n)=\sigma \hat n$.
Conversely, we can choose $\hat f=\sum_{i} c_i\exp(-\tilde\Gamma(\hat n(\alpha_i)))$ for the general classicality constraint in Eq. \eqref{eq:fdagf}, even for nonlinear detector responses.
Then, the matrix-of-phase-space-distributions approach applies, regardless of whether the detection model is linear or nonlinear.
(See also Refs. \cite{ASW93,SPBTEWLNLGVASW20} in this context.)
As an example, we consider a case with two on-off click detectors (represented by $\Pi(\hat n)$ in Fig. \ref{fig:setup}) with a non-unit quantum efficiency $\eta_{\det}$ and a non-vanishing dark-count rate $\delta$ \cite{comment:darkcount}, which represent realistic detectors in experiments.
In addition, we can introduce neutral density (ND) filters to attenuate the light that impinges on each detector.
The POVM element for the no-click event in combination with the ND filters then reads $\hat\Pi(\hat n)={:}\exp(-(\eta\hat n+\delta)){:}$, where $0\leq \eta\leq\eta_{\det}$ is a controllable efficiency.
The measured correlation for this scenario takes the form
\begin{equation}
M_{i,j}=\exp(-2\delta)\langle{:}\exp\left(
-\eta_i\hat n(\alpha_i)-\eta_j\hat n(\alpha_j)
\right){:}\rangle.
\end{equation}
Therein, the adjustable efficiency $\eta_i$ plays the role of $\sigma_i$.
Also, the positive factor that includes the dark counts is irrelevant because it does not change the sign of the determinant of $M$, i.e., the verified nonclassicality.
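To make explicit why the dark-count prefactor is irrelevant: every entry of a $2\times2$ matrix $M$ carries the same factor $e^{-2\delta}$, so its determinant is rescaled by the positive factor $e^{-4\delta}$, leaving the sign untouched. A minimal sketch with hypothetical ideal correlation entries $C$ (Python; the numbers are placeholders, not measured data):

```python
import math

def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# hypothetical dark-count-free correlation entries (illustrative values only)
C = [[1.0, 0.8], [0.8, 0.5]]
delta = 0.3
# measured matrix: every entry picks up the same factor exp(-2 delta)
M = [[math.exp(-2 * delta) * c for c in row] for row in C]

# det(M) = exp(-4 delta) * det(C): a positive rescaling, so the sign
# of the determinant (the verified nonclassicality) is unchanged
assert abs(det2(M) - math.exp(-4 * delta) * det2(C)) < 1e-12
assert (det2(M) < 0) == (det2(C) < 0)
```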
In summary, the measurement layout in Fig. \ref{fig:setup} enables us to directly measure the entries of our phase-space matrix $M$.
As an experimental setup, this scheme also underlines the strong connection between correlation measurements and the reconstruction of phase-space quasiprobabilities.
We may emphasize that all experimental techniques and components that are used in the proposed setup are readily available; see, e.g., the related quantum state reconstruction experiments reported in Refs. \cite{BTBSSV18,SPBTEWLNLGVASW20}.
\subsection{Generalized phase-space functions}\label{subsec:Regularized}
The $\sigma$-parametrized phase-space distributions we considered so far are related to each other via convolutions with Gaussian distributions \cite{C66,CG69,AW70}.
However, there are additional means to represent a state without relying on Gaussian convolutions only.
Such generalized phase-space functions can be obtained from the Glauber-Sudarshan $P$ function via
\begin{equation}
P_\Omega(\alpha)
=\int d^2\tilde\alpha\, P(\tilde \alpha)\,\Omega(\alpha;\tilde\alpha,\tilde\alpha^\ast)
=\langle{:}\Omega(\alpha;\hat a,\hat a^\dag){:}\rangle
\end{equation}
for a kernel $\Omega\geq0$ \cite{AW70,KV10}.
The construction of this so-called filter or regularizing function $\Omega$ can be done so that the resulting distribution $P_\Omega$ is regular (i.e., without the singular behavior known from the $P$ function) and is positive semidefinite for all classical states \cite{KV10}.
For instance, a non-Gaussian filter $\Omega$ has been used to experimentally characterize squeezed states via regular distributions which exhibit negativities in phase space \cite{KVHS11};
this cannot be done with $s$-parametrized quasiprobability distributions, which are either nonnegative or highly singular for squeezed states.
As done for the previously considered distributions, we can define an operator $\hat f=\sum_i c_i \Omega_i(\alpha;\hat a,\hat a^\dag)$, which leads to a phase-space matrix with the entries
\begin{equation}
M_{i,j}=\langle{:}\Omega_i(\alpha;\hat a,\hat a^\dag)\Omega_j(\alpha;\hat a,\hat a^\dag){:}\rangle=P_{\Omega_i\Omega_j}(\alpha).
\end{equation}
This expression utilizes the product of filters, $\Omega(\alpha;\tilde\alpha,\tilde\alpha^\ast)=\Omega_i(\alpha;\tilde\alpha,\tilde\alpha^\ast)\Omega_j(\alpha;\tilde\alpha,\tilde\alpha^\ast)$, which is convolved with the $P$ function.
From this definition of a regularized phase-space matrix, we can proceed as we did earlier to formulate nonclassicality criteria in terms of phase-space functions.
Moreover, the non-Gaussian filter functions can even be related to nonlinear detectors.
For this purpose, we assume that $\Omega(\alpha;\tilde\alpha,\tilde\alpha^\ast)=\Omega(|\alpha-\tilde\alpha|^2)$ (likewise, ${:}\Omega(\alpha,\hat a,\hat a^\dag){:}={:}\Omega(\hat n(\alpha)){:}$ in the normally ordered operator representation).
In this form, the function is invariant under rotations.
As we did for the general POVM element $\Pi(\hat n)$, we can now identify
\begin{equation}
\Gamma(\hat n)=-\ln\Omega(\hat n).
\end{equation}
This enables us to associate non-Gaussian filters and nonlinear detectors and, by extension, generalized phase-space matrices for certifying nonclassical states of light.
An example for this treatment is studied in Sec. \ref{subsec:ExNL}.
\section{Examples and benchmarking}\label{sec:Ex}
In the following, we apply our method of phase-space matrices to various examples and benchmark its performance.
For the latter benchmark, we could consider different phase-space functions.
Using the $P$ function would be impractical as it is often a highly singular distribution.
The Wigner function is regular and can exhibit negativities.
But error estimation from measured data can turn out to be rather difficult because it requires diverging pattern functions \cite{R96,LMKRR96} (see Ref. \cite{SVKMH15} for an in-depth analysis).
Beyond those practical hurdles, we focus on the $Q$ function here because, already in theory, it is always nonnegative.
Thus, it is hard to verify nonclassical features based on this particular phase-space distribution.
Additionally, the $Q$ function is easily accessible in experiments and can be directly measured via the widely-used double-homodyne (aka, eight-port homodyne) detection scheme \cite{VW06}.
Nonetheless, we are going to demonstrate that, with our method, it is already sufficient for many examples to consider second-order correlations of $Q$ functions.
For this purpose, we use the condition in Eq. \eqref{eq:2x2condition}, which follows from the $2\times2$ matrix condition with $\sigma_1=\sigma_2=1/2$.
This special case of that condition then reads as
\begin{align}
\label{eq:QQ}
\det(M)=Q(\alpha_1)Q(\alpha_{2})
-e^{-|\alpha_2-\alpha_{1}|^2/2}Q\left(\tfrac{\alpha_1+\alpha_2}{2}\right)^2<0.
\end{align}
This means that, when the correlations of $Q$ functions at different points in phase space fall below the classical limit of zero, nonclassical light is certified with the nonnegative family of $Q$ distributions.
Moreover, since $Q$ functions are nonnegative, the second term in Eq. \eqref{eq:QQ} is subtractive in nature.
Thus, it is sufficient to find a point $\alpha_1$ in phase space for which $Q(\alpha_1)=0$ holds true---together with an $\alpha_2$ such that $Q([\alpha_1+\alpha_2]/2)>0$, which has to exist because of normalization---in order to certify nonclassicality through Eq. \eqref{eq:QQ}.
Setting $\alpha_1=\alpha$, this leads to the simple nonclassicality condition $Q(\alpha)=0$, which applies to arbitrary quantum states.
In Ref. \cite{LB95}, this specific condition has been independently verified as a nonclassical signature of non-Gaussian states.
Here, we see that this nonclassical signature is indeed a corollary of our general approach.
Furthermore, we remark that this condition only holds if the $Q$ function is exactly zero.
In experimental scenarios, in which errors have to be accounted for, it is infeasible to get this exact value.
Therefore, the condition Eq. \eqref{eq:QQ} is more practical as it allows us to certify nonclassicality through a finite negative value.
Furthermore, this condition is applicable even if $Q(\alpha)=0$ does not hold true.
\subsection{Discrete-variable states}
We start our analysis of nonclassicality by considering discrete-variable states for a single mode.
In the case of quantized harmonic oscillators, such as electromagnetic fields, a family of discrete-variable states that are of particular importance are number states $|n\rangle$.
They represent an $n$-fold excitation of the underlying quantum field and show the particle nature of said fields, thus being nonclassical when compared to classical electromagnetic wave phenomena.
However, photon-number states require Glauber-Sudarshan $P$ distributions that are highly singular because they involve up to $2n$th-order derivatives of delta distributions \cite{VW06}.
On the other hand, the $Q$ function of photon-number states,
\begin{align}
\label{eq:PhotonQ}
Q_{|n\rangle}(\alpha)=\frac{|\alpha|^{2n}}{\pi n!} e^{-|\alpha|^2},
\end{align}
is an accessible and smooth, but nonnegative function.
Thus, by itself, it cannot behave as a quasiprobability which includes negative contributions that uncover nonclassicality.
\begin{figure}
\caption{
Nonclassicality of number states $|n\rangle$ via Eq. \eqref{eq:QQ}.
\label{fig:QQFock}
}
\end{figure}
Except for vacuum, the $Q_{|n\rangle}$ function is zero for $\alpha=0$ and positive for all other arguments $\alpha$ [Eq. \eqref{eq:PhotonQ}].
Consequently, we can apply Eq. \eqref{eq:QQ} with $\alpha_1=0$ and $\alpha_2\neq0$, yielding $\det(M)<0$.
Furthermore, a straightforward optimization shows that $|\alpha_2|=\sqrt{2n}$ results in the minimal value $\det(M)=-e^{-2n}(n/2)^{2n}/(\pi n!)^2$.
Note that this family of discrete-variable number states is rotationally invariant, rendering the phase of $\alpha_2$ irrelevant.
In Fig. \ref{fig:QQFock}, we visualize the results of our analysis.
For all number states, we observe a successful verification of nonclassicality in terms of inequality Eq. \eqref{eq:QQ}.
The single-photon state shows the largest violation for this specific nonclassicality test, and the negativity of $\det(M)$ decreases with the number of photons.
A possible explanation for this behavior is that this condition is most sensitive towards the particle nature of the quantum states, being most prominent in the single excitation of the quantized radiation field.
Again, let us emphasize that we verified nonclassicality via a matrix $M$ of classical (i.e., nonnegative) phase-space functions.
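The number-state analysis can be reproduced numerically from Eqs. \eqref{eq:PhotonQ} and \eqref{eq:QQ}. The sketch below (Python; function names are ours) checks the optimal displacement $|\alpha_2|=\sqrt{2n}$ against the stated closed form of the minimal determinant:

```python
import math

def Q_fock(alpha, n):
    """Husimi Q function of the number state |n>, Eq. (eq:PhotonQ)."""
    return abs(alpha) ** (2 * n) * math.exp(-abs(alpha) ** 2) / (
        math.pi * math.factorial(n))

def detM(n, a1, a2):
    """det(M) of the 2x2 Q-function matrix, Eq. (eq:QQ)."""
    return (Q_fock(a1, n) * Q_fock(a2, n)
            - math.exp(-abs(a2 - a1) ** 2 / 2) * Q_fock((a1 + a2) / 2, n) ** 2)

for n in range(1, 6):
    a2 = math.sqrt(2 * n)  # optimal displacement |alpha_2| = sqrt(2n)
    closed_form = -math.exp(-2 * n) * (n / 2) ** (2 * n) / (
        math.pi * math.factorial(n)) ** 2
    assert detM(n, 0.0, a2) < 0                      # nonclassicality certified
    assert abs(detM(n, 0.0, a2) - closed_form) < 1e-12
```

Since $Q_{|n\rangle}(0)=0$ for $n\geq1$, the first term vanishes and the determinant is manifestly negative, consistent with Fig. \ref{fig:QQFock}.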
\subsection{Continuous-variable states}
After studying essential examples of discrete-variable quantum states, we now divert our attention to typical examples of continuous-variable states.
To this end, we consider squeezed vacuum states which are defined as $|\xi\rangle=(\cosh{r})^{-1/2}\sum_{n=0}^\infty (-e^{i\varphi}\tanh[r]/2)^n\sqrt{(2n)!}|2n\rangle/n!$, for a squeezing parameter $r=|\xi|$ and a phase $\varphi=\arg(\xi)$.
Without loss of generality, we set $\varphi=0$.
Squeezed states are widely used in quantum optical experiments and provide the basis of continuous-variable quantum information processing \cite{BL05}.
Their parametrized phase-space distributions are known to be either highly singular or nonnegative Gaussian functions (see, e.g., Refs. \cite{WPGCRSL12,S16}).
For example, the $Q$ function of the states under study can be written as
\begin{align}
\label{eq:Qsqueezed}
Q_{|\xi\rangle}(\alpha)=\frac{
\exp\left[-|\alpha|^2-\tanh(r)\mathrm{Re}(\alpha^2)\right]
}{\pi \cosh (r)}.
\end{align}
In the context of earlier discussions, note that this $Q$ function is not zero for $\alpha=0$, or anywhere else.
\begin{figure}
\caption{
The maximally negative value for inequality \eqref{eq:QQ}.
\label{fig:SqueezedQ}
}
\end{figure}
In Fig. \ref{fig:SqueezedQ}, the left-hand side of inequality Eq. \eqref{eq:QQ} is shown for the $Q_{|\xi\rangle}$ function of a squeezing parameter $r$.
The points in phase space are determined by choosing $\alpha_1=0$ and minimizing $\det(M)$, being solved for $\alpha_2=[(2/\lambda)\ln[(1+\lambda)/(1+ \lambda/2)]]^{1/2}$, where $\lambda=\tanh(r)$.
We observe negative values as a direct signature of the nonclassicality of squeezed states.
Remarkably, this is achieved using the same criterion that applies to photon-number states; typically, vastly different correlation functions are required for these two families of states (using either photon numbers \cite{M79} or quadratures \cite{Y76}).
While inequality Eq. \eqref{eq:QQ} is violated for any squeezing parameter $r>0$, we see that there exists an optimal region of squeezing values around $r=0.6$ (likewise, $5\,\mathrm{dB}$ of squeezing) for which the considered criterion is optimal.
In particular, this shows that this condition works optimally in a range of moderate squeezing values and, thus, is compatible with typical experiments.
We also want to recall that $Q_{|\xi\rangle}$ is a Gaussian distribution which does not have any zeros in phase space.
Thus, criteria based on the zeros of the Husimi $Q$ function \cite{LB95} cannot detect nonclassicality in this scenario.
In contrast, our inequality condition can even certify this Gaussian nonclassicality, hence providing a more sensitive approach to detecting quantum light.
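The behavior in Fig. \ref{fig:SqueezedQ} can be reproduced from Eqs. \eqref{eq:Qsqueezed} and \eqref{eq:QQ}. A brief numerical sketch for real $\alpha$ (Python; function names are ours), confirming negativity for all $r>0$ and the strongest violation at moderate squeezing:

```python
import math

def Q_sq(alpha, r):
    """Husimi Q of squeezed vacuum, Eq. (eq:Qsqueezed), for real alpha
    (so that Re(alpha^2) = alpha^2) and phase phi = 0."""
    return math.exp(-alpha ** 2 * (1 + math.tanh(r))) / (math.pi * math.cosh(r))

def detM(r):
    """det(M) of Eq. (eq:QQ) with alpha_1 = 0 and the minimizing alpha_2."""
    lam = math.tanh(r)
    a2 = math.sqrt((2 / lam) * math.log((1 + lam) / (1 + lam / 2)))
    return (Q_sq(0.0, r) * Q_sq(a2, r)
            - math.exp(-a2 ** 2 / 2) * Q_sq(a2 / 2, r) ** 2)

for r in (0.1, 0.6, 2.0):
    assert detM(r) < 0  # nonclassicality certified for any squeezing r > 0
# the violation is deepest in the moderate-squeezing regime around r ~ 0.6
assert detM(0.6) < detM(0.1) and detM(0.6) < detM(2.0)
```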
\subsection{Mixed two-mode states}
\begin{figure}
\caption{
In plot (a), the two-mode $Q$ function in Eq. \eqref{eq:Qbipart} is shown.
\label{fig:ElizaState}
}
\end{figure}
To further challenge our approach, we now consider a bipartite mixed state.
We begin with a two-mode squeezed vacuum state, $|\lambda\rangle=\sqrt{1-|\lambda|^2}\sum_{n=0}^\infty\lambda^n|n,n\rangle$.
This state undergoes a full phase diffusion, leading to the mixed state
\begin{equation}
\label{Eq:State}
\begin{aligned}
\hat\rho&=\frac{1}{2\pi}\int\limits_{0}^{2\pi}d\varphi\, |\lambda e^{i\varphi}\rangle\langle \lambda e^{i\varphi}|
\\
&=\sum_{n=0}^\infty (1{-}|\lambda|^2)|\lambda|^{2n} |n,n\rangle\langle n,n|.
\end{aligned}
\end{equation}
This state presents a particular challenge for nonclassicality verification because it shows only weak nonclassicality and quantum correlations.
Namely, this state is not entangled, has zero quantum discord, and has classical marginal single-mode states (i.e., the partial traces $\mathrm{tr}_1(\hat\rho)=\mathrm{tr}_2(\hat\rho)$ yield thermal states) \cite{ASV13}.
However, it shows nonclassical photon-photon correlations \cite{FP12,ASV13,SBVHBAS15}.
The state's two-mode $Q$ function can be computed using Gaussian functions and the phase averaging in Eq. \eqref{Eq:State}, which gives
\begin{align}
\label{eq:Qbipart}
\begin{aligned}
Q_{\hat\rho}(\alpha^{(1)},\alpha^{(2)})
= \frac{1{-}|\lambda|^2}{\pi^2}
e^{-|\alpha^{(1)}|^2-|\alpha^{(2)}|^2}\\
\times\,
I_0(2|\lambda||\alpha^{(1)}||\alpha^{(2)}|),
\end{aligned}
\end{align}
where $I_0$ denotes the zeroth modified Bessel function of the first kind.
See also Fig. \ref{fig:ElizaState}(a) in this context.
To apply our approach, whilst using $Q$ functions only, we can directly generalize our criterion in Eq. \eqref{eq:QQ} to the multimode case (see also Sec. \ref{subsec:multimode}).
For $N$ modes, this results in the nonclassicality criterion
\begin{align}
\label{eq:QQmulti}
\begin{aligned}
\det(M)
=& Q(\alpha^{(1)}_{1},\ldots,\alpha^{(N)}_{1}) Q(\alpha^{(1)}_{2},\ldots,\alpha^{(N)}_{2})
\\
& -e^{-\sum_{m=1}^N |\alpha^{(m)}_2-\alpha^{(m)}_1|^2/2}
\\
&\times Q\left(\frac{\alpha^{(1)}_1{+}\alpha^{(1)}_2}{2},\ldots,\frac{\alpha^{(N)}_1{+}\alpha^{(N)}_2}{2}\right)^2<0.
\end{aligned}
\end{align}
In Fig. \ref{fig:ElizaState}(b), we apply the case $N=2$ of this inequality to identify the nonclassicality of $\hat\rho$ for $|\lambda|^2=1/2$.
The same approach as used in the single-mode scenarios enables us yet again to uncover the nonclassical behavior of this bipartite state for all nonzero choices of the parameters $|\alpha|$ and $|\beta|$.
Note in this context that the phase of these parameters does not contribute because of the fully phase-randomized structure of the mixed state in Eq. \eqref{Eq:State}.
\begin{figure}
\caption{
Determinant ($\times 10^4$) of the multimode $2\times2$ phase-space matrix $M$ of $Q$ functions [$\det(M)$ in Eq. \eqref{eq:QQmulti}].
\label{fig:OddCoh}
}
\end{figure}
\subsection{Multimode superposition states}
To go beyond the previous, bipartite example, we consider an $N$-mode state in this part.
Specifically, we focus on a multimode superposition of coherent states \cite{SMM74},
\begin{align}
\label{eq:MultimodeState}
|\Psi^{(\pm)}_{\gamma,N}\rangle=\frac{|\gamma\rangle^{\otimes N}\pm|-\gamma\rangle^{\otimes N}}{\sqrt{2\left(1\pm e^{-2N|\gamma|^2}\right)}},
\end{align}
which consists of two $N$-fold tensor products of polar opposite coherent states, $|\pm\gamma\rangle$.
Specifically, the skew-symmetric state $|\Psi^{(-)}_{\gamma,N}\rangle$ is of interest because it yields a GHZ state for $|\gamma|\to\infty$ and a W state for $|\gamma|\to0$, combining in an asymptotic manner two inequivalent forms of multipartite entanglement \cite{DVC00,SV20}.
\begin{widetext}
The $Q$ functions for the states in Eq. \eqref{eq:MultimodeState} can be straightforwardly computed; they read
\begin{align}
\begin{aligned}
Q_{|\Psi^{(\pm)}_{\gamma,N}\rangle}(\alpha^{(1)},\ldots,\alpha^{(N)})
=
\frac{e^{-N|\gamma|^2} e^{-|\alpha^{(1)}|^2}\cdots e^{-|\alpha^{(N)}|^2}}{2\pi^N\left[1\pm e^{-2N|\gamma|^2}\right]}
\!\!\left(
\!\!\cosh\!\!\left[2\,\mathrm{Re}\!\!\left(\!\!\gamma^\ast\sum_{m=1}^N \alpha^{(m)} \right)\right]
\!\!{\pm}
\cos\!\!\left[2\,\mathrm{Im}\!\!\left(\!\!\gamma^\ast\sum_{m=1}^N \alpha^{(m)} \right)\right]
\right)\!\!.
\end{aligned}
\end{align}
\end{widetext}
To apply our criteria in Eq. \eqref{eq:QQmulti}, and for simplicity, we set $\alpha^{(m)}_j=\alpha_j$ for all mode numbers $m$ and points in phase space, $\alpha_j$.
In Fig. \ref{fig:OddCoh}, we exemplify the certification of nonclassicality for the state $|\Psi^{(-)}_{\gamma,3}\rangle$ [Eq. \eqref{eq:MultimodeState}] as a function of $\alpha_1=\mathrm{Re}(\alpha_1)$ and for a fixed $\alpha_2=0$.
We remark that, for other mode numbers $N$, the plot looks quite similar.
Most pronounced are nonclassical features for $\gamma$ close to zero, relating to a W state in which a single photon is uniformly distributed over three modes.
For large $\gamma$ values, relating to a GHZ state, the negativities decrease, but $\det(M)$ remains below zero.
We reiterate that our relatively simple, second-order correlations of $Q$ functions render it possible to certify the nonclassical properties of multimode, non-Gaussian states.
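The negativity discussed above can be checked directly: for real $\gamma$ and equal real amplitudes $\alpha^{(m)}_j=\alpha_j$, the imaginary part in the $Q$ function vanishes, and choosing $\alpha_2=0$ makes the first term of Eq. \eqref{eq:QQmulti} vanish because $Q_{|\Psi^{(-)}_{\gamma,N}\rangle}(0,\ldots,0)=0$. A numerical sketch for $N=3$ (Python; function names are ours):

```python
import math

def Q_odd(alpha, gamma, N=3):
    """Q function of |Psi^(-)_{gamma,N}> evaluated on the diagonal
    alpha^(1) = ... = alpha^(N) = alpha, for real gamma and real alpha
    (so the cos term equals 1 and the minus superposition gives cosh - 1)."""
    pre = math.exp(-N * gamma ** 2) * math.exp(-N * alpha ** 2) / (
        2 * math.pi ** N * (1 - math.exp(-2 * N * gamma ** 2)))
    return pre * (math.cosh(2 * gamma * N * alpha) - 1)

def detM(a1, a2, gamma, N=3):
    """Multimode criterion, Eq. (eq:QQmulti), on the diagonal alpha^(m)_j = alpha_j."""
    return (Q_odd(a1, gamma, N) * Q_odd(a2, gamma, N)
            - math.exp(-N * (a2 - a1) ** 2 / 2)
            * Q_odd((a1 + a2) / 2, gamma, N) ** 2)

# with alpha_2 = 0 the first term vanishes [Q_odd(0) = 0], so det(M) < 0
# both near the W-state limit (small gamma) and toward the GHZ limit
for gamma in (0.1, 0.5, 1.5):
    assert detM(1.0, 0.0, gamma) < 0
```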
\subsection{Generalized phase-space representations and nonlinear detection model}\label{subsec:ExNL}
For demonstrating how our phase-space matrix approach functions beyond $s$-parametrized distributions, we consider an on-off detector that is based on two-photon absorption \cite{JA69}.
In this case, the POVM element for no click is approximated by
\begin{align}
\label{eq:NLdetector}
\hat\Pi
={:}e^{-\eta \hat n+\chi\hat n^2}{:}
=\sum_{n=0}^\infty \frac{(2n)!}{n!}\left(\frac{\chi}{\eta^2}\right)^n
{:}\frac{(\eta\hat n)^{2n}}{(2n)!}e^{-\eta\hat n}{:},
\end{align}
where ${:}(\eta\hat n)^{2n}e^{-\eta\hat n}/(2n)!{:}$ describes a measurement operator for $2n$-photon states with a linear quantum efficiency $\eta$.
In this context, it is worth mentioning that $\chi\ll [e\eta^2]/[4n]$ has to be satisfied to ensure that the approximated POVM element correctly applies for photon numbers up to $2n$ \cite{comment:POVM}.
The parameter $\chi$ relates to the nonlinear absorption efficiency.
Based on such a nonlinear detector, we then define the non-Gaussian operator $\hat \Omega(\alpha;\eta,\chi)={:}e^{-\eta \hat n(\alpha)+\chi\hat n(\alpha)^2}{:}$, as described in Secs. \ref{subsec:Detect} and \ref{subsec:Regularized}.
For a correlation measurement with two detectors (see Fig. \ref{fig:setup}), this then results in the correlation matrix elements $\langle{:}\hat\Omega(\alpha_i;\eta_i,\chi_i)\hat\Omega(\alpha_j;\eta_j,\chi_j){:}\rangle$.
For specific parameters and up to a scaling with $\pi$, this correlation function also results in the nonlinear $Q_{\Omega}(\alpha)=\langle{:}\hat\Omega(\alpha;1,\chi)\hat\Omega(0;0,0){:}\rangle$ function (cf. Sec. \ref{subsec:Regularized} for the similarly defined $P_\Omega$), where $\sigma=\eta=1$.
By extension, and using $\chi=\chi'$ and $\eta=1=\eta'$, these phase-space correlation functions also provide the entries required for the nonclassicality criterion.
Here, it reads
\begin{align}
\label{eq:NLNC}
\begin{aligned}
\langle{:}\hat\Omega(\alpha_1;1,\chi)\hat\Omega(\alpha_1;1,\chi){:}\rangle
\langle{:}\hat\Omega(\alpha_2;1,\chi)\hat\Omega(\alpha_2;1,\chi){:}\rangle
\\
-\langle{:}\hat\Omega(\alpha_1;1,\chi)\hat\Omega(\alpha_2;1,\chi){:}\rangle^2
<0,
\end{aligned}
\end{align}
which applies to the nonlinear detection scenario under study.
\begin{figure}
\caption{
Application of the nonclassicality criterion in Eq. \eqref{eq:NLNC}.
\label{fig:EvenCoh}
}
\end{figure}
In Fig. \ref{fig:EvenCoh}, we apply this approach to the single-mode even coherent state $|\Psi^{(+)}_{\gamma,1}\rangle$ [cf. Eq. \eqref{eq:MultimodeState} for $N=1$], a non-Gaussian state that complements the odd coherent state studied in the previous example.
It is worth emphasizing that other methods to infer nonclassical light (e.g., the Chebyshev approach from Ref. \cite{BA19}) are incapable of detecting this state's quantum features.
Here, we can directly certify nonclassicality of this non-Gaussian state despite the challenge of also having a non-Gaussian detection model.
\section{Conclusion}\label{sec:Conclusion}
In summary, we devised a generally applicable method that unifies nonclassicality criteria from correlation functions with quasiprobability distributions.
Thereby, we created an advanced toolbox of nonclassicality tests which exploit the capabilities of both phase-space distributions and matrices of moments to probe for nonclassical effects.
Furthermore, our framework is applicable to an arbitrary number of modes, arbitrary orders of correlation, and even phase-space functions perturbed through convolutions with non-Gaussian kernels.
A measurement scheme was proposed to directly determine the elements of the phase-space matrix, the underlying key quantities of our method.
In addition, we showed and discussed in detail that our treatment includes previous findings as special cases, is experimentally accessible even if other methods are not, and overcomes challenges of previous techniques when identifying nonclassicality.
The phase-space-matrix approach incorporates nonclassicality tests based on negativities of the phase-space distributions, including the Glauber-Sudarshan $P$ function, and the matrix-of-moments approach as special cases.
Thus, we were able to unify two major techniques for certification of nonclassicality.
As the $P$ function and the matrix of moments themselves are already necessary and sufficient conditions for the detection of nonclassicality, the introduced phase-space-matrix approach obeys the same universal feature.
In other words, for any nonclassical state there exists a phase-space matrix condition which certifies its nonclassicality.
By applying our nonclassicality criteria to a diverse set of examples, we further demonstrated the power and versatility of our method.
These examples covered discrete- and continuous-variable, single- and multimode, Gaussian and non-Gaussian, as well as pure and mixed quantum states of light.
Remarkably, for all these states, we used only second-order correlations of phase-space distributions that are always nonnegative.
Nevertheless, these basic criteria were already sufficient to certify distinct nonclassical effects on one common ground, further demonstrating the strength of our method.
In comparison, a pure matrix-of-moments treatment of the kinds of nonclassicality under study would require very different moments for determining each state's distinct quantum properties.
Finally, we put forward an experimental scheme, only relying on readily available optical components, to directly measure the quantities required to apply our method.
This scheme applies even for imperfect detectors with a nonlinear response.
Furthermore, we want to add that the practicality and strength of the matrix of phase-space distributions in certifying nonclassicality of lossy and noisy quantum states can be experimentally demonstrated \cite{BBABZ20}.
Here, we focused on nonclassical effects of light, owing to their relevance for photonic quantum computation and optical quantum communication.
The introduced approach may be further developed for the certification of other quantum features, such as non-Gaussianity.
Currently, our method detects nonclassicality for Gaussian and non-Gaussian states equally, which could be further developed for a more fine-grained quantumness analysis.
However, instead of applying normal ordering, the construction of linear and nonlinear witnesses has to be adapted for this purpose.
Furthermore, other kinds of quantum effects, such as entanglement, can be interpreted in terms of quasiprobabilities \cite{SW18} and are similarly witnessed through correlations \cite{HHHH09}.
Thus, an extension to entanglement might be feasible as well.
Therefore, our findings may provide the starting point for uncovering quantum characteristics through matrices of quasiprobabilities in other physical systems.
Additionally, the derived framework can be utilized in the context of quantum information theory, such as in recently formulated resource theories of nonclassicality \cite{YBTNGK18,KCTVJ19} and other measures of nonclassicality \cite{TCJ20}, which employ the phase-space formalism \cite{SW18}, thus potentially benefiting from our phase-space correlation conditions for future applications.
\paragraph*{Note added.}
After finalizing this work, we have been made aware of a related work in preparation by J. Park, J. Lee, and H. Nha \cite{N20}.
\begin{acknowledgments}
M.B. acknowledges financial support by the Leopoldina Fellowship Programme of the German National Academy of Science (LPDS 2019-01) and by the Erwin Schr\"odinger International Institute for Mathematics and Physics (ESI), Vienna (Austria) through the thematic programme on \textit{Quantum Simulation - from Theory to Application}.
E.A. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie IF InDiQE (EU project 845486).
The authors thank J. Park, J. Lee, and H. Nha for valuable comments.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Schr\"{o}dinger Equation \\
with the Potential $V(r)=a r^2+b r^{-4}+c r^{-6}$}
\author{Shi-Hai Dong\thanks{Electronic address: [email protected]}\\
{\scriptsize Institute of High Energy Physics, P. O. Box 918(4),
Beijing 100039, People's Republic of China}\\
\\
Xi-Wen Hou \\
{\scriptsize Institute of High Energy Physics,
P. O. Box 918(4), Beijing 100039, and}\\
{\scriptsize Department of Physics, Hubei University,
Wuhan 430062, People's Republic of China} \\
\\
Zhong-Qi Ma\\
{\scriptsize China Center for Advanced Science and Technology
(World Laboratory), P. O. Box 8730, Beijing 100080}\\
{\scriptsize and Institute of High Energy Physics, P. O. Box 918(4),
Beijing 100039, People's Republic of China}}
\date{}
\maketitle
\begin{abstract}
By making use of an {\it ansatz} for the eigenfunction,
we obtain the exact solutions to the Schr\"{o}dinger equation
with the anharmonic potential, $V(r)=a r^2+b r^{-4}+c r^{-6}$,
both in three dimensions and in two dimensions, where
the parameters $a$, $b$, and $c$ in the potential satisfy
some constraints.
\vskip 6mm
PACS numbers: 03.65.Ge.
\vskip 4mm
{\bf Key words}: Exact solution, Anharmonic potential,
Schr\"{o}dinger equation.
\end{abstract}
\begin{center}
{\large 1. Introduction}\\
\end {center}
The exact solutions to the fundamental dynamical equations
play crucial roles in physics. It is well-known that the
exact solutions to the Schr\"{o}dinger equation have been
obtained only for a few potentials, and some approximate methods
are frequently applied to arrive at the approximate solutions.
In recent years, higher-order anharmonic potentials have
drawn increasing attention from physicists and mathematicians
seeking to understand a number of newly discovered
phenomena, such as structural phase transitions [1],
polaron formation in solids [2], and the concept
of false vacua in field theory [3]. Interest in these
anharmonic oscillator-like interactions comes from the fact
that the study of the relevant Schr\"{o}dinger equation, for
example, in the atomic and molecular physics, provides us with
insight into the physical problem in question.
For the Schr\"{o}dinger equation ($\hbar=2m=1$ for convenience)
$$-\nabla^{2} \psi +V(r) \psi =E \psi, \eqno (1) $$
\noindent
with the potential
$$V(r)=a r^2+b r^{-4}+c r^{-6},~~~~~ a>0, ~~c>0, \eqno (2) $$
\noindent
let
$$\psi(r,\theta, \varphi)=r^{-1} R_{\ell}(r) Y_{\ell m}(\theta, \varphi),
\eqno (3) $$
\noindent
where $\ell$ and $E$ denote the angular momentum and the
energy, respectively, and the radial wave function
$R_{\ell}(r)$ satisfies
$$\displaystyle {d^{2} R_{\ell}(r) \over dr^{2} }
+\left[E-V(r)-\displaystyle {\ell(\ell+1) \over r^{2}} \right]
R_{\ell}(r)=0. \eqno (4) $$
Znojil [4,5] converted Eq. (4) into a difference
equation in terms of a Laurent-series ansatz for the radial
function
$$R_{\ell}(r)=N_{0}r^{\kappa}\exp [-(\sqrt{a}r^{2}+\sqrt{c}
r^{-2})/2] \displaystyle \sum_{m=-M}^{N}~h_{m}r^{2m}. \eqno (5) $$
\noindent
He defined the continued fraction solutions to accelerate the
convergence of the series, and obtained the solutions
for the ground state and the first excited state.
Kaushal and Parashar greatly simplified the ansatz for
calculating those solutions
$$R_{0}(r)=N_{0}r^{\kappa_{0}}\exp [-(\sqrt{a}r^{2}+\sqrt{c} r^{-2})/2]
,~~~~~\kappa_{0}=(b+3\sqrt{c})/(2\sqrt{c}), \eqno (6) $$
\noindent
for the ground state [6], and
$$R_{0}(r)=N_{1}r^{\kappa_{1}}\left(1+\beta r^{2}+\gamma r^{-2}
\right)\exp [-(\sqrt{a}r^{2}+\sqrt{c} r^{-2})/2] , \eqno (7) $$
\noindent
for the first excited state [7]. By this ansatz, the
parameters in the potential (2) have to satisfy two constraints:
$$\left(2\sqrt{c}+b\right)^{2}=c\left[(2\ell+1)^{2}+8\sqrt{ac}\right],
\eqno (8) $$
\noindent
and
$$\begin{array}{l}
\eta_{\ell}\left[(\eta_{\ell}-4)^{2}-4(2\kappa_{1}-1)^{2}\right]
=64\sqrt{ac}(\eta_{\ell}-4), \\
\kappa_{1}=(b+7\sqrt{c})/(2\sqrt{c}),~~~~~
\eta_{\ell}=\ell(\ell+1)+2\sqrt{ac}-\kappa_{1}^{2}+\kappa_{1}.
\end{array} \eqno (9) $$
\noindent
Note that there was a sign misprint in [7] (see Eq. (13) in [7]).
They set the values of the parameters by
$$\ell=0,~~~~~a=1.0,~~~~~c=0.18,~~~~~b=0.04082,
\eqno (10) $$
\noindent
and found that $\beta=-0.1787$ and $\gamma=0.8485$,
and the energies for the ground state and the first
excited state were $E_{0}=4.096214$ and $E_{1}=12.09621$,
respectively. Unfortunately, their parameters given in Eq. (10)
do not satisfy the second constraint (9), so that the so-called
solution of the first excited state in [7] does not satisfy
Eq. (4). As a matter of fact, they assumed
that the angular momentum $\ell$ is the same for both the ground
state and the first excited state, and that the normalization
factor $N_{1} \neq 0$, so that, as shown in Sec. 2 of the
present letter, they must obtain divergent values of
$\beta$ and $\gamma$ if the parameters in the
potential satisfy the constraints (8) and (9).
In our view, Kaushal and Parashar presented a good
idea for studying the Schr\"{o}dinger equation (1) with
the higher order anharmonic potential (2), but their
calculation was wrong. In the present letter, we
recalculate the solutions following their idea, and then
generalize this method to the two-dimensional Schr\"{o}dinger
equation, motivated by the recent wide interest in
lower-dimensional field theories. Moreover, with the advent of
growth techniques for the realization of semiconductor quantum
wells, the quantum mechanics of low-dimensional systems has
become a major research field. Almost all of the computational
techniques developed for three-dimensional
problems have already been extended to two dimensions.
This letter is organized as follows. In Sec. 2, we
recalculate the ground state and the first excited state
of the Schr\"{o}dinger equation with this potential using
an {\it ansatz} for the eigenfunctions. This method is
applied to two dimensions in Sec. 3. The figures of the
unnormalized radial functions of the solutions are plotted
in the corresponding sections.
\begin{center}
{\large 2. Ansatz }
\end{center}
Assume that the radial function in Eq. (3) is
$$R_{\ell}(r)=r^{\kappa}\left(\alpha+\beta r^{2}+\gamma r^{-2}
\right)\exp [-(\sqrt{a}r^{2}+\sqrt{c} r^{-2})/2] , \eqno (11) $$
\noindent
where $\beta=\gamma=0$ and $\kappa=\kappa_{0}$ for the ground
state, and $\beta \neq 0$, $\gamma \neq 0$ and $\kappa=\kappa_{1}$
for the first excited state. Substituting Eq. (11) into Eq. (4),
we have
$$\begin{array}{rl}
\displaystyle {d^{2} R_{\ell}(r) \over dr^{2} }
&=~\left\{r^{4} a \beta
+r^{2}[\alpha a-\beta E]
+[-\alpha E+ \beta \ell (\ell+1)+ \gamma a] \right. \\
&~~~+r^{-2}[\alpha \ell (\ell +1)+\beta b-\gamma E]
+r^{-4}[\alpha b+\beta c+\gamma \ell (\ell +1) ] \\
&~~~\left.+r^{-6}[\alpha c+ \gamma b]
+r^{-8} \gamma c \right\}
r^{\kappa} \exp [-(\sqrt{a}r^{2}+\sqrt{c} r^{-2})/2] .
\end{array} \eqno (12a) $$
\noindent
On the other hand, the derivative of the radial function
can be calculated directly from Eq. (11),
$$\begin{array}{rl}
\displaystyle {d^{2} R_{\ell}(r) \over dr^{2} }
&=~\left\{r^{4} a \beta
+r^{2}[a \alpha-\beta \sqrt{a}(2\kappa+5)]\right. \\
&~~~+[- \alpha \sqrt{a} (2\kappa+1)+\beta (2+3\kappa+\kappa^{2}-2\sqrt{ac})
+\gamma a ] \\
&~~~+r^{-2}[\alpha (\kappa^{2}-\kappa-2\sqrt{ac})
+\beta \sqrt{c} (2\kappa+1)-\gamma \sqrt{a} (2\kappa-3)]\\
&~~~+r^{-4}[\alpha \sqrt{c}(2\kappa -3)+\beta c
+\gamma(6-5\kappa-2\sqrt{ac}+\kappa^{2})]\\
&\left.~~~+r^{-6}[\alpha c+\gamma \sqrt{c}(2\kappa-7)]
+r^{-8} \gamma c\right\}
r^{\kappa} \exp [-(\sqrt{a}r^{2}+\sqrt{c} r^{-2})/2] .
\end{array} \eqno (12b) $$
\noindent
Comparing the coefficients of the same powers of $r$, we obtain
$$\beta [E-\sqrt{a}(2\kappa+5)]=0, \eqno (13a) $$
$$\alpha [E-\sqrt{a}(2\kappa+1)]
=\beta[\ell(\ell+1)+2\sqrt{ac}-\kappa^{2}-3\kappa-2], \eqno (13b) $$
$$\alpha [\ell(\ell+1)+2\sqrt{ac}-\kappa^{2}+\kappa]=
\beta [-b+\sqrt{c}(2\kappa+1)]+\gamma[E-\sqrt{a}(2\kappa-3)], \eqno (13c) $$
$$\alpha [b-\sqrt{c}(2\kappa-3)]=
-\gamma [\ell(\ell+1)+2\sqrt{ac}-\kappa^{2}+5\kappa-6], \eqno (13d) $$
$$\gamma [b-\sqrt{c} (2\kappa-7)]=0. \eqno (13e) $$
For the ground state, where $\beta=\gamma=0$ and $\alpha\neq 0$,
we obtain the constraint (8) and
$$\kappa_{0}=(3\sqrt{c}+b)/(2\sqrt{c}),~~~~~
E_{0}=\sqrt{a/c}~(b+4\sqrt{c}). \eqno (14) $$
\noindent
For the first excited state, $\beta \neq 0$ and
$\gamma \neq 0$. From Eqs. (13a) and (13e) we have
$$\kappa_{1}=(7\sqrt{c}+b)/(2\sqrt{c}),~~~~~
E_{1}=\sqrt{a/c}~(b+12\sqrt{c}). \eqno (15) $$
\noindent
It is easy to see from Eqs. (8) and (15) that the bracket on the
right-hand side of Eq. (13d) vanishes; since
$b-\sqrt{c}(2\kappa_{1}-3)=-4\sqrt{c}\neq 0$, this forces $\alpha=0$.
Since Kaushal and Parashar [7] assumed $\alpha \neq 0$, they
must obtain a divergent $\gamma$ if the parameters in
the potential satisfy the two constraints (8) and (9). Now,
we obtain from Eqs. (13)
$$\alpha=0,~~~~~~ \gamma=-\sqrt{c/a}~\beta,
\eqno (16) $$
\noindent
and another constraint
$$b=-6\sqrt{c}. \eqno (17) $$
\noindent
It is easy to check that the constraints (8) and (17) coincide
with the constraints (8) and (9).
Setting $\ell=0$ and $a=1.0$ for comparison with Znojil [4]
and Kaushal-Parashar [7], we obtain
$$\begin{array}{llll}
b=-11.25,~~~~~&\sqrt{c}=1.875,~~~~~&\gamma=-1.875 \beta,~~~~~& \\
\kappa_{0}=-1.5, &\kappa_{1}=0.5,~~~~~
&E_{0}=-2,~~~~~ &E_{1}=6. \end{array} \eqno (18) $$
\noindent
Thus, the radial functions $R^{(0)}_{0}(r)$ for the ground state and
$R^{(1)}_{0}(r)$ for the first excited state are
$$\begin{array}{l}
R^{(0)}_{0}(r)=N_{0}r^{-1.5}\exp \{-(r^{2}+1.875 r^{-2})/2\},\\
R^{(1)}_{0}(r)=N_{1}r^{0.5}(r^{2}-1.875 r^{-2})
\exp \{-(r^{2}+1.875 r^{-2})/2\},
\end{array} \eqno (19) $$
\noindent
where the normalization factors are determined by the normalization
condition:
$$\displaystyle \int_{0}^{\infty}|R_{0}^{(i)}(r)|^{2} dr=1,~~~~~
i=0 {\rm ~~and}~~ 1. \eqno (20) $$
\noindent
Since normalization does not change the shape of the solutions, we show
the unnormalized radial functions in Fig. 1 and Fig. 2.
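As a consistency check (this is illustrative only and not part of the original letter), the closed-form solutions (19) can be substituted back into the radial equation (4) and the residual evaluated numerically; the sketch below assumes NumPy and uses the parameter values of Eq. (18).

```python
import numpy as np

# Parameters from Eq. (18): ell = 0, a = 1, sqrt(c) = 1.875, b = -6*sqrt(c)
a, sc = 1.0, 1.875
c, b = sc**2, -6.0 * sc                      # c = 3.515625, b = -11.25
E0 = np.sqrt(a / c) * (b + 4 * sc)           # Eq. (14): E0 = -2
E1 = np.sqrt(a / c) * (b + 12 * sc)          # Eq. (15): E1 = 6

def V(r):
    # the anharmonic potential of Eq. (2)
    return a * r**2 + b * r**-4 + c * r**-6

def R0(r):
    # ground state of Eq. (19), kappa_0 = -1.5, unnormalized
    return r**-1.5 * np.exp(-(r**2 + sc * r**-2) / 2)

def R1(r):
    # first excited state, carrying r^{kappa_1} with kappa_1 = 0.5, unnormalized
    return r**0.5 * (r**2 - sc * r**-2) * np.exp(-(r**2 + sc * r**-2) / 2)

# residual of the radial equation (4) with ell = 0: R'' + (E - V) R = 0,
# approximated by central finite differences
r = np.linspace(0.8, 2.2, 4001)
h = r[1] - r[0]
for R, E in [(R0, E0), (R1, E1)]:
    y = R(r)
    ypp = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2
    resid = ypp + (E - V(r[1:-1])) * y[1:-1]
    assert np.max(np.abs(resid)) < 1e-3      # residual at finite-difference level
```

Note that the first excited state carries the factor $r^{\kappa_1}=r^{0.5}$ from the ansatz (11) with $\kappa_1=0.5$.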
Furthermore, if the angular momentum $\ell'$ for the first
excited state is different from the angular momentum $\ell$
for the ground state, equation (16) and the constraint (17) become
$$\begin{array}{l}
\beta=4 \alpha \sqrt{a}/
[\ell'(\ell'+1)-\ell(\ell+1)-4(b+6\sqrt{c})/\sqrt{c}], \\
\gamma=4 \alpha \sqrt{c}/[\ell'(\ell'+1)-\ell(\ell+1)], \\
\left[\ell'(\ell'+1)-\ell(\ell+1)-2(b+4\sqrt{c})/\sqrt{c}\right]
/(32\sqrt{ac}) \\
~~~~=\left[\ell'(\ell'+1)-\ell(\ell+1)-4(b+6\sqrt{c})/\sqrt{c}\right]^{-1}
+\left[\ell'(\ell'+1)-\ell(\ell+1)\right]^{-1}.
\end{array} \eqno (21) $$
Setting $a=1.0$, $\ell=0$ and $\ell'=1$, we obtain
$$\begin{array}{llll}
b=-4.2011,~~~~~& c=0.75878,~~~~~& \kappa_{0}=-0.91144,
~~~~~&\kappa_{1}=1.08856,\\
\beta=-1.47683 ~\alpha , &\gamma=1.74216~\alpha ,
&E_{0}=-0.82288, &E_{1}=7.17713. \end{array} \eqno (22) $$
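The quoted values in Eq. (22) can likewise be checked against the constraints (8) and (21); a short sketch in plain Python (not from the letter; the tolerances reflect the five-digit rounding of $b$ and $c$, and $\alpha$ is set to 1):

```python
import math

# Values quoted in Eq. (22): a = 1.0, ell = 0, ell' = 1
a, b, c = 1.0, -4.2011, 0.75878
sc = math.sqrt(c)
ell, ellp = 0, 1
D = ellp * (ellp + 1) - ell * (ell + 1)   # ell'(ell'+1) - ell(ell+1) = 2

# ground-state constraint (8) with ell = 0
lhs8 = (2 * sc + b)**2
rhs8 = c * ((2 * ell + 1)**2 + 8 * math.sqrt(a * c))

# third relation of Eq. (21)
lhs21 = (D - 2 * (b + 4 * sc) / sc) / (32 * math.sqrt(a * c))
rhs21 = 1 / (D - 4 * (b + 6 * sc) / sc) + 1 / D

# derived quantities, cf. Eq. (22)
kappa0 = (b + 3 * sc) / (2 * sc)
kappa1 = (b + 7 * sc) / (2 * sc)
E0 = math.sqrt(a / c) * (b + 4 * sc)      # Eq. (14)
E1 = math.sqrt(a) * (2 * kappa1 + 5)      # from Eq. (13a)
beta = 4 * math.sqrt(a) / (D - 4 * (b + 6 * sc) / sc)   # Eq. (21), alpha = 1
gamma = 4 * sc / D                                      # Eq. (21), alpha = 1

assert abs(lhs8 - rhs8) < 1e-3 and abs(lhs21 - rhs21) < 1e-4
assert abs(kappa0 + 0.91144) < 1e-4 and abs(kappa1 - 1.08856) < 1e-4
assert abs(E0 + 0.82288) < 1e-4 and abs(E1 - 7.17713) < 1e-4
assert abs(beta + 1.47683) < 1e-4 and abs(gamma - 1.74216) < 1e-4
```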
\begin{center}
{\large 3. Solutions in two dimensions}
\end{center}
For the Schr\"{o}dinger equation in two dimensions with the potential,
$$V(\rho)=a\rho^{2}+b\rho^{-4}+c\rho^{-6},~~~~~a>0,~~c>0,
\eqno (23) $$
\noindent
let
$$\psi(\rho, \varphi)=\rho^{-1/2} R_{m}(\rho) e^{\pm im\varphi},
~~~~~m=0,1,2,\cdots , \eqno (24) $$
\noindent
where the radial function $R_{m}(\rho)$ satisfies the
radial equation
$$\displaystyle {d^{2} R_{m}(\rho) \over d\rho^{2} }
+\left[E-V(\rho)-\displaystyle {m^{2}-1/4 \over \rho^{2}} \right]
R_{m}(\rho)=0. \eqno (25) $$
Making the ansatz for the radial functions of the ground state
and the first excited state:
$$\begin{array}{l}
R_{m}^{(0)}(\rho)=N_{0}\rho^{\kappa_{0}}
\exp [-(\sqrt{a}\rho^{2}+\sqrt{c} \rho^{-2})/2] , \\
R_{m}^{(1)}(\rho)=N_{1}\rho^{\kappa_{1}}
\left(\alpha+\rho^{2}+\gamma \rho^{-2}\right)
\exp [-(\sqrt{a}\rho^{2}+\sqrt{c} \rho^{-2})/2] ,
\end{array} \eqno (26) $$
\noindent
where $\gamma \neq 0$, and substituting Eq. (26) into Eq. (25), we have
$$\begin{array}{l}
\beta [E-\sqrt{a}(2\kappa+5)]=0, \\
\alpha [E-\sqrt{a}(2\kappa+1)]
=\beta[m^{2}-1/4+2\sqrt{ac}-\kappa^{2}-3\kappa-2],\\
\alpha [m^{2}-1/4+2\sqrt{ac}-\kappa^{2}+\kappa]=
\beta [-b+\sqrt{c}(2\kappa+1)]+\gamma[E-\sqrt{a}(2\kappa-3)], \\
\alpha [b-\sqrt{c}(2\kappa-3)]=
-\gamma [m^{2}-1/4+2\sqrt{ac}-\kappa^{2}+5\kappa-6], \\
\gamma [b-\sqrt{c} (2\kappa-7)]=0. \end{array} \eqno (27) $$
Hence, if the angular momentum $m$ of the ground state is
the same as that of the first excited state, we obtain from Eq. (27)
$$\begin{array}{ll}
\left(2\sqrt{c}+b\right)^{2}=4c\left[m^{2}+2\sqrt{ac}\right],
~~~~~&b=-6\sqrt{c}, \\
\kappa_{0}=(3\sqrt{c}+b)/(2\sqrt{c}),~~~~~~~~~~~
&E_{0}=\sqrt{a/c}~(b+4\sqrt{c}), \\
\kappa_{1}=(7\sqrt{c}+b)/(2\sqrt{c}),~~~~~~
&E_{1}=\sqrt{a/c}~(b+12\sqrt{c}), \\
\alpha=0, & \gamma=-\sqrt{c/a}. \end{array} \eqno (28) $$
If $m=0$ and $a=1.0$, the values of the corresponding parameters
are
$$\begin{array}{llll}
b=-12,~~~~~&c=4,~~~~~&\gamma=-2,~~~~~& \\
\kappa_{0}=-1.5,~~~~~ &\kappa_{1}=0.5, ~~~~~& E_{0}=-2,~~~~~
&E_{1}=6. \end{array} \eqno (29) $$
\noindent
The unnormalized radial functions are shown in Fig. 3 and Fig. 4.
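The same finite-difference check applies in two dimensions with the parameters of Eq. (29), now including the $(m^{2}-1/4)/\rho^{2}$ term of the radial equation (25); again an illustrative sketch assuming NumPy, not part of the original letter.

```python
import numpy as np

# 2D parameters from Eq. (29): m = 0, a = 1, c = 4, b = -12
a, sc, m = 1.0, 2.0, 0
c, b = sc**2, -6.0 * sc
E0 = np.sqrt(a / c) * (b + 4 * sc)     # = -2
E1 = np.sqrt(a / c) * (b + 12 * sc)    # = 6

def V(rho):
    # the potential of Eq. (23)
    return a * rho**2 + b * rho**-4 + c * rho**-6

def R0(rho):
    # ground state of Eq. (26), kappa_0 = -1.5, unnormalized
    return rho**-1.5 * np.exp(-(rho**2 + sc * rho**-2) / 2)

def R1(rho):
    # first excited state, kappa_1 = 0.5, alpha = 0, gamma = -2, unnormalized
    return rho**0.5 * (rho**2 - sc * rho**-2) * np.exp(-(rho**2 + sc * rho**-2) / 2)

# residual of Eq. (25): R'' + [E - V - (m^2 - 1/4)/rho^2] R = 0
rho = np.linspace(0.9, 2.3, 4001)
h = rho[1] - rho[0]
for R, E in [(R0, E0), (R1, E1)]:
    y = R(rho)
    ypp = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2
    resid = ypp + (E - V(rho[1:-1]) - (m**2 - 0.25) / rho[1:-1]**2) * y[1:-1]
    assert np.max(np.abs(resid)) < 1e-3
```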
To summarize, we have discussed the ground state and the first
excited state of the Schr\"{o}dinger equation with the
potential $V(r)=a r^2+b r^{-4}+c r^{-6}$ using a simple
{\it ansatz} for the eigenfunctions. Two constraints on the
parameters in the potential are obtained by comparing the
coefficients. This simple and intuitive method can be
generalized to other potentials, such as the sextic
potential, the octic potential, and the inverse potential.
{\bf Acknowledgments}. This work was supported by the National
Natural Science Foundation of China and Grant No. LWTZ-1298 from
the Chinese Academy of Sciences.
\vskip 0.5cm
\begin{figure}
\caption{The ground state wave function $R_{0}^{(0)}(r)$ in three dimensions.}
\end{figure}
\vskip 0.5cm
\begin{figure}
\caption{The first excited state wave function $R_{0}^{(1)}(r)$ in three dimensions.}
\end{figure}
\vskip 0.5cm
\begin{figure}
\caption{The ground state wave function $R_{0}^{(0)}(\rho)$ in two dimensions.}
\end{figure}
\vskip 0.5cm
\begin{figure}
\caption{The first excited state wave function $R_{0}^{(1)}(\rho)$ in two dimensions.}
\end{figure}
\end{document}
\begin{document}
\title{A decomposition method to construct cubature formulae of degree 3\thanks{The project is
supported by NNSF of China(Nos. 61033012,11171052,11271060,61272371)
and also supported by ``the Fundamental Research Funds for the
Central Universities''.}}
\author{Zhaoliang Meng\thanks{Corresponding author. E-mail: m\_zh\[email protected]}
\\[4pt]
School of Mathematical Sciences, Dalian University of Technology,
Dalian, 116024, China\\[5pt]
Zhongxuan Luo\\[4pt]
School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, China\\
School of Software, Dalian University of Technology, Dalian, 116620, China}
\date{}
\maketitle
\begin{abstract}
Numerical integration formulas in $n$-dimensional Euclidean space
of degree three are discussed. For integrals with permutation
symmetry we present a method to construct third-degree
integration formulas with $2n$ real points. The construction is
based on a decomposition method that reduces the problem to $n$
independent one-dimensional moment problems.
\end{abstract}
\begin{quote}
\textbf{Keywords:} Numerical integration; Degree three; Cubature formulae; Decomposition
method; One-dimensional moment problem
\end{quote}
\section{Introduction}
Let $\Pi^n=\mathbb{R}[x_1,\ldots,x_n]$ be the space of polynomials
in $n$ real variables and $\mathscr{L}$ be a square positive linear
functional defined on $\Pi^n$ such as those given by
$\mathscr{L}(f)=\int_{\mathbb{R}^n}f(x)W(x)\mathrm{d}x$, where $W$
is a nonnegative weight function with finite moments of all order. Let $\Pi^n_d$ be the space of polynomials of degree at most $d$.
Here we discuss numerical integration formulas of the form
\begin{eqnarray}\label{Eq:int}
\mathscr{L}(f)\thickapprox \sum_{k}a_kf(u^{(k)}),
\end{eqnarray}
where the $a_k$ are constants and the $u^{(k)}$ are points in $\mathbb{R}^n$.
A formula is said to be of degree $d$ if it is exact for
all polynomials of degree at most $d$ but not for all
polynomials of degree $d+1$.
In this paper, we only deal with the construction of third-degree cubature
formulae. Although it looks like a simple problem, it remains open
in general. The best known result is due to Stroud
\cite{stroud1957,stroud1960,stroud1971}. He presented a method to
construct numerical integration formulas of degree 3 for centrally
symmetric regions, and recently Xiu \cite{xiu} considered
similar numerical formulas for integrals of the form
\begin{eqnarray*}
\mathscr{L}(f)=\int_{a}^b\ldots\int_a^bw(x_1)\ldots,w(x_n)f(x_1,x_2\ldots,x_n)\mathrm{d}x_1\ldots\mathrm{d}x_n.
\end{eqnarray*}
In \cite{xiu}, Xiu assumed that every single integral is symmetric, which means his result naturally belongs to the centrally
symmetric case. Recently, the authors \cite{meng} extended Stroud's results and
presented formulas of degree 3 with $2n$ points or $2n+1$ points for
integrals of the form
\begin{eqnarray*}
&& \mathscr{L}(f)=\int_{a_1}^{b_1}\ldots\int_{a_n}^{b_n}w_1(x_1)\ldots
w_n(x_n)f(x_1,\ldots,x_n)\ \mathrm{d}x_1\ldots \mathrm{d}x_n,\\
&& w_i(x_i)\geq 0,\quad x_i\in [a_i,b_i],\quad i=1,\ldots,n.\notag
\end{eqnarray*}
Besides, many scholars have employed the invariant theory method to deal
with the symmetric case; we refer to
\cite{cools2001,cools2002,cools2003} and the references therein. As
far as we know, $2n$ is the minimum number of integration points except for
some special regions (see \cite{stroud1961}), and for centrally
symmetric regions Mysovskikh \cite{mysovskikh} has shown this
point. However, for the general integration case, it remains unknown
how to construct formulas of degree 3 with $2n$ points. In the
two-dimensional case, third-degree integration formulas with 4 real
points were given in \cite{gunther,luo} for arbitrary regions, but it is
difficult to extend them to higher dimensions. For other related work,
we refer to \cite{Om,meng2,erich} and the references therein.
This paper extends the results in \cite{meng} from integrals over product regions to
integrals with permutation symmetry. First we present a
condition which is satisfied by the integral. Then we prove that under
this condition the construction of a cubature formula of degree
three can be transformed into two smaller sub-cubature problems. Finally,
the construction of cubature rules for integrals with permutation
symmetry is decomposed into $n$ one-dimensional moment problems.
This paper is organized as follows. The construction of cubature
formulas of degree 3 is presented in Section 2. Section 3
presents two examples to illustrate the construction process.
Finally, Section 4 draws a conclusion.
\section{The construction of third-degree formulas}
Assume that $\mathscr{L}$ has the following property:
\begin{minipage}[c]{0.1\textwidth}
(P)
\end{minipage}
\begin{minipage}[c]{0.8\textwidth}
\emph{There exist $n$ linearly independent polynomials
$l_i(x_1,\ldots,x_n)$, $i=1,2,\ldots,n$, such that all $l_il_n\ (i=1,\ldots,n-1)$
are orthogonal polynomials of degree two with respect to
$\mathscr{L}$.}
\end{minipage}
Let
\begin{eqnarray}\label{eq:transformation}
T:\quad l_i(x_1,\ldots,x_n)\longrightarrow t_i, \quad i=1,2,\ldots,n
\end{eqnarray}
be a linear transformation under which $\mathscr{L}$ is transformed into $\mathscr{L}'$. Then by
the assumption all $t_it_n\ (i=1,2,\ldots,n-1)$ are orthogonal polynomials of degree two
with respect to $\mathscr{L}'$. Here we do not require that the $t_it_n\ (i=1,2,\ldots,n-1)$
constitute a basis of the orthogonal polynomials of degree 2 with respect to
$\mathscr{L}'$. We assume that the third-degree formula of $\mathscr{L}'$ has
the following form
\begin{eqnarray}\label{eq:cub_form}
\begin{split}
&v^{(1)}=(v_{1,1},v_{1,2},v_{1,3}\ldots,v_{1,n-1},0)\quad \omega_{1}\\
&v^{(2)}=(v_{2,1},v_{2,2},v_{2,3}\ldots,v_{2,n-1},0)\quad \omega_{2}\\
&\cdots \cdots \cdots\cdots \cdots \cdots \cdots \cdots \cdots\cdots \\
&v^{(N)}=(v_{N,1},v_{N,2},v_{N,3}\ldots,v_{N,n-1},0)\quad \omega_{N}\\
&v^{(N+j)}=(0,0,0\ldots,0,v_{N+j,n})\quad \omega_{N+j},\ j=1,2.
\end{split}
\end{eqnarray}
To enforce polynomial exactness of degree 3, it suffices to require
\eqref{eq:cub_form} to be exact for
$$
1, \ t_1,\ t_2,\ldots,t_n,\ t_it_j,\ t_it_jt_k\quad
i,j,k=1,2,\ldots,n.
$$
Then we have
\begin{subequations}\label{eq:moment_equation}
\begin{eqnarray}
&&\quad \omega_1+\omega_2+\ldots+\omega_{N+2}=\mathscr{L}'(1) \label{eq:moment_equation_constant}\\
&&\left\{
\begin{array}{ll}
\omega_1v_{1,i}+\omega_2v_{2,i}+\ldots+\omega_{N}v_{N,i}=\mathscr{L}'(t_i),\quad i=1,2,\ldots,n-1 \\
\omega_{N+1}v_{N+1,n}+\omega_{N+2}v_{N+2,n}=\mathscr{L}'(t_n)
\end{array}
\right. \\
&&\left\{
\begin{array}{ll}
\omega_1v_{1,i}v_{1,j}+\omega_2v_{2,i}v_{2,j}+\ldots+\omega_{N}v_{N,i}v_{N,j}=\mathscr{L}'(t_it_j),\quad i,j=1,2,\ldots,n-1 \\
\omega_{N+1}v_{N+1,n}^2+\omega_{N+2}v_{N+2,n}^2=\mathscr{L}'(t_n^2)
\end{array}
\right. \\
&&\left\{
\begin{array}{ll}
\omega_1v_{1,i}v_{1,j}v_{1,k}+\ldots+\omega_{N}v_{N,i}v_{N,j}v_{N,k}=\mathscr{L}'(t_it_jt_k),\quad i,j,k=1,2,\ldots,n-1 \\
\omega_{N+1}v_{N+1,n}^3+\omega_{N+2}v_{N+2,n}^3=\mathscr{L}'(t_n^3)
\end{array}
\right.
\end{eqnarray}
\end{subequations}
and the equation \eqref{eq:moment_equation_constant} can be rewritten as
\begin{eqnarray*}
&&\omega_{1}+ \omega_{2}+\ldots+\omega_{N}=\xi_1,\\
&&\omega_{N+1}+\omega_{N+2}=\xi_2 \\
&&\xi_1+\xi_2=\mathscr{L}'(1).
\end{eqnarray*}
Hence we can rewrite the equations \eqref{eq:moment_equation} as
\begin{subequations}\label{eq:moment_equation_new}
\begin{eqnarray}
&&\left\{
\begin{array}{ll}
\omega_{1}+ \omega_{2}+\ldots+\omega_{N}=\xi_1, \label{eq:moment_equation2}\\
\omega_1v_{1,i}+\omega_2v_{2,i}+\ldots+\omega_{N}v_{N,i}=\mathscr{L}'(t_i),\quad i=1,2,\ldots,n-1 \\
\omega_1v_{1,i}v_{1,j}+\omega_2v_{2,i}v_{2,j}+\ldots+\omega_{N}v_{N,i}v_{N,j}=\mathscr{L}'(t_it_j),\quad i,j=1,2,\ldots,n-1 \\
\omega_1v_{1,i}v_{1,j}v_{1,k}+\ldots+\omega_{N}v_{N,i}v_{N,j}v_{N,k}=\mathscr{L}'(t_it_jt_k),\quad i,j,k=1,2,\ldots,n-1 \\
\end{array}
\right. \label{eq:cuba_dimn-1}\\
&&\left\{
\begin{array}{ll}
\omega_{N+1}+\omega_{N+2}=\xi_2 \\
\omega_{N+1}v_{N+1,n}+\omega_{N+2}v_{N+2,n}=\mathscr{L}'(t_n)\\
\omega_{N+1}v_{N+1,n}^2+\omega_{N+2}v_{N+2,n}^2=\mathscr{L}'(t_n^2)\\
\omega_{N+1}v_{N+1,n}^3+\omega_{N+2}v_{N+2,n}^3=\mathscr{L}'(t_n^3)
\end{array}
\right. \label{eq:cuba_dim1}\\
&&\quad\ \xi_1+\xi_2=\mathscr{L}'(1). \label{eq:xi}
\end{eqnarray}
\end{subequations}
Once the $\xi_i$ are determined by \eqref{eq:xi}, \eqref{eq:cuba_dimn-1} and
\eqref{eq:cuba_dim1} become an $(n-1)$-dimensional and a one-dimensional moment
problem, respectively. If these two lower-dimensional moment problems can be solved, then
we obtain a cubature formula of degree 3 for the original integration
problem. Generally speaking, the one-dimensional moment problem is easy to solve,
whereas the $(n-1)$-dimensional moment problem is difficult. However, if the
$(n-1)$-dimensional problem can in turn be divided into an $(n-2)$-dimensional moment
problem and a one-dimensional moment problem, and the $(n-2)$-dimensional moment problem
can continue this process, then the original integration problem can be turned into $n$
one-dimensional moment problems.
In what follows, we will prove that if $\mathscr{L}$ is an integral functional with
permutation symmetry, then the construction of third-degree cubature formulae can
be turned into $n$ one-dimensional moment problems. In fact, we frequently encounter this
kind of integral functional, for example, the integration over the simplex, the square,
the ball, or the positive sector of the ball, that is, $\{(x_1,x_2,\ldots,x_n)\,|\,x_i\geq
0,\ i=1,2,\ldots,n;\ x_1^2+x_2^2+\ldots+x_n^2\leq 1\}$.
We first prove that the integral functional with permutation
symmetry must meet the property (P). Thus the original cubature
problem can be turned into two sub-cubature problems.
\begin{thm}\label{th:l1ln}
Let $n\geq 2$ and
\begin{eqnarray*}
&&l_i(x_1,\ldots,x_n)=x_i-x_n,\quad i=1,2,\ldots,n-1\\
&&l_n(x_1,\ldots,x_n)=x_1+x_2+\ldots+x_n+c_n
\end{eqnarray*}
where
$$
c_n=-\frac{\mathscr{L}(x_1^3+(n-3)x_1^2x_2-(n-2)x_1x_2x_3)}{\mathscr{L}(x_1^2-x_1x_2)}.
$$
If $\mathscr{L}$ is permutation symmetrical,
then $l_il_n\ (i=1,2,\ldots,n-1)$ is orthogonal to the polynomials
of degree $\leq 1$.
\end{thm}
\begin{proof}
Take $l_1l_n$ as an example. We first prove $\mathscr{L}(x_1^2-x_1x_2)\neq 0$ to confirm the existence of $l_n$. In
fact, by the symmetry and the positivity,
$$
\mathscr{L}(x_1^2-x_1x_2)=\frac{1}{2}\mathscr{L}(x_1^2-x_1x_2-x_1x_2+x_2^2)=\frac{1}{2}\mathscr{L}((x_1-x_2)^2)>0.
$$
Let us examine the orthogonality of $l_1l_n$. Assume $n\geq 3$. By the symmetry, we have
\begin{eqnarray*}
&&\mathscr{L}(l_1l_n)=\mathscr{L}\big(x_1^2-x_n^2+(x_1-x_n)(x_2+\ldots+x_{n-1}+c_n)\big)=0,\\
&&\mathscr{L}(x_il_1l_n)=\mathscr{L}\big(x_1^2x_i-x_n^2x_i+(x_1-x_n)(x_2x_i+\ldots+x_{n-1}x_i+c_nx_i)\big)=0,\
2\leq i\leq n-1
\end{eqnarray*}
and for $i=1$ (the case $i=n$ is analogous)
\begin{eqnarray*}
&&\mathscr{L}(x_1l_1l_n)=\mathscr{L}\big(x_1^3-x_n^2x_1+(x_1-x_n)(x_2x_1+\ldots+x_{n-1}x_1+c_nx_1)\big)\\
&=&\mathscr{L}\big(x_1^3+(n-3)x_1^2x_2-(n-2)x_1x_2x_3+c_n(x_1^2-x_1x_2)\big)=0.
\end{eqnarray*}
It is easy to verify that the result holds when $n=2$. This completes the proof.
\end{proof}
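To illustrate the theorem (this concrete example is not in the paper), one can verify the orthogonality relations with exact rational arithmetic for the permutation symmetric functional $\mathscr{L}(f)=\int_{[0,1]^3}f\,\mathrm{d}x$; a sketch in Python, with polynomials stored as dictionaries mapping exponent tuples to coefficients:

```python
from fractions import Fraction

n = 3  # dimension; L(f) = integral of f over the unit cube [0,1]^3

def mul(p, q):
    # product of two polynomials {exponent tuple: coefficient}
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(u + v for u, v in zip(e1, e2))
            out[e] = out.get(e, Fraction(0)) + c1 * c2
    return out

def add(*polys):
    out = {}
    for p in polys:
        for e, c in p.items():
            out[e] = out.get(e, Fraction(0)) + c
    return out

def scale(p, s):
    return {e: c * s for e, c in p.items()}

def L(p):
    # exact moments: the integral of x^e over [0,1]^3 is prod_i 1/(e_i + 1)
    total = Fraction(0)
    for e, c in p.items():
        d = 1
        for ei in e:
            d *= ei + 1
        total += c * Fraction(1, d)
    return total

def x(i):
    e = [0] * n
    e[i] = 1
    return {tuple(e): Fraction(1)}

one = {(0,) * n: Fraction(1)}

# c_n as in the theorem: -L(x1^3+(n-3)x1^2 x2-(n-2)x1 x2 x3) / L(x1^2-x1 x2)
num = add(mul(mul(x(0), x(0)), x(0)),
          scale(mul(mul(x(0), x(0)), x(1)), Fraction(n - 3)),
          scale(mul(mul(x(0), x(1)), x(2)), Fraction(-(n - 2))))
den = add(mul(x(0), x(0)), scale(mul(x(0), x(1)), Fraction(-1)))
c_n = -L(num) / L(den)   # equals -3/2 for the cube

l_n = add(x(0), x(1), x(2), scale(one, c_n))
for i in range(n - 1):
    p = mul(add(x(i), scale(x(n - 1), Fraction(-1))), l_n)  # l_i * l_n
    assert L(p) == 0                                    # orthogonal to 1
    assert all(L(mul(p, x(j))) == 0 for j in range(n))  # and to each x_j
```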
Let $\mathscr{L}_1$ and $\mathscr{L}^1$ be two linear functionals
defined on $\Pi^{n-1}$ and $\Pi^{1}$, whose moments are determined
by
\begin{eqnarray}
&&\mathscr{L}_1(t_it_jt_k)=\mathscr{L}(l_il_jl_k),\
\mathscr{L}_1(t_it_j)=\mathscr{L}(l_il_j),\
\mathscr{L}_1(t_i)=\mathscr{L}(l_i),\ \mathscr{L}_1(1)=\xi_1^{(0)},\\
&&\mathscr{L}^1(t_n^3)=\mathscr{L}(l_n^3),\
\mathscr{L}^1(t_n^2)=\mathscr{L}(l_n^2),\
\mathscr{L}^1(t_n)=\mathscr{L}(l_n),\ \mathscr{L}^1(1)=\xi_2^{(0)}\\
&&\hspace{2cm}\xi_1^{(0)}+\xi_2^{(0)}=\mathscr{L}(1) \quad\text{and}
\quad 1\leq i,j,k\leq n-1 \notag
\end{eqnarray}
respectively. Thus the construction problem of third-degree
formulas with respect to $\mathscr{L}$ is turned into two smaller
problems, one of which is the construction with respect to
$\mathscr{L}_1$ and the other of which is the construction with
respect to $\mathscr{L}^1$. It is easy to compute
\begin{eqnarray*}
&& \mathscr{L}^1({t_n})=\mathscr{L}(l_n)=n\cdot
\mathscr{L}(x_1)+c_n\mathscr{L}(1),\\
&&
\mathscr{L}^1({t_n}^2)=n\mathscr{L}(x_1^2)+n(n-1)\mathscr{L}(x_1x_2)+2nc_n\cdot
\mathscr{L}(x_1)+c_n^2\mathscr{L}(1)\\
&&
\mathscr{L}^1({t_n}^3)=n\mathscr{L}(x_1^3)+6\binom{n}{2}\mathscr{L}(x_1^2x_2)+6\binom{n}{3}\mathscr{L}(x_1x_2x_3)\\
&&\hspace{1.7cm}+3c_n\big(n\mathscr{L}(x_1^2)+n(n-1)\mathscr{L}(x_1x_2)\big)+3nc_n^2\mathscr{L}(x_1)+c_n^3\mathscr{L}(1)\\
&&\hspace{1.4cm}=n\mathscr{L}(x_1^3)+3n(n-1)\mathscr{L}(x_1^2x_2)+n(n-1)(n-2)\mathscr{L}(x_1x_2x_3)\\
&&\hspace{1.7cm}+3c_n\big(n\mathscr{L}(x_1^2)+n(n-1)\mathscr{L}(x_1x_2)\big)+3nc_n^2\mathscr{L}(x_1)+c_n^3\mathscr{L}(1).\\
\end{eqnarray*}
Next we will show this decomposition process can continue.
Let us consider the cubature formula with respect to
$\mathscr{L}_1$. Obviously, $\mathscr{L}_1$ is also permutation
symmetrical, which allows us to employ theorem \ref{th:l1ln}
continuously. Define
\begin{eqnarray*}
&&l^{(1)}_i(t_1,\ldots,t_{n-1})=t_i-t_{n-1},\quad i=1,2,\ldots,n-2,\\
&&l^{(1)}_{n-1}(t_1,\ldots,t_{n-1})=t_1+t_2+\ldots+t_{n-1}+c_{n-1},
\end{eqnarray*}
where
$$
c_{n-1}=-\frac{\mathscr{L}_1(t_1^3+(n-4)t_1^2t_2-(n-3)t_1t_2t_3)}{\mathscr{L}_1(t_1^2-t_1t_2)},
$$
then $l^{(1)}_il^{(1)}_{n-1}$, $i=1,2,\ldots,n-2$, are orthogonal
polynomials of degree two with respect to $\mathscr{L}_1$. Noticing
$t_i=l_i(x_1,x_2,\ldots,x_n)$, we have
\begin{eqnarray*}
&&l^{(1)}_i(t_1,\ldots,t_{n-1})=t_i-t_{n-1}=x_i-x_{n-1},\quad i=1,2,\ldots,n-2,\\
&&l^{(1)}_{n-1}(t_1,\ldots,t_{n-1})=t_1+t_2+\ldots+t_{n-1}+c_{n-1} \\
&& \hspace{2.6cm}=x_1+x_2+\ldots+x_{n-1}-(n-1)x_n+c_{n-1}.
\end{eqnarray*}
Again let $\mathscr{L}_2$ and $\mathscr{L}^2$ be two linear
functionals defined on $\Pi^{n-2}$ and $\Pi^{1}$, whose moments are
determined by
\begin{eqnarray*}
&&\mathscr{L}_2(\tau_i\tau_j\tau_k)=\mathscr{L}_1(l_i^{(1)}l_j^{(1)}l_k^{(1)}),\
\mathscr{L}_2(\tau_i\tau_j)=\mathscr{L}_1(l_i^{(1)}l_j^{(1)}),\
\mathscr{L}_2(\tau_i)=\mathscr{L}_1(l_i^{(1)}),\ \mathscr{L}_2(1)=\xi_1^{(1)},\\
&&\mathscr{L}^2(\tau_{n-1}^3)=\mathscr{L}_1((l_{n-1}^{(1)})^3),\
\mathscr{L}^2(\tau_{n-1}^2)=\mathscr{L}_1((l_{n-1}^{(1)})^2),\
\mathscr{L}^2(\tau_{n-1})=\mathscr{L}_1(l_{n-1}^{(1)}),\ \mathscr{L}^2(1)=\xi_2^{(1)}\\
&&\hspace{2cm}\xi_1^{(1)}+\xi_2^{(1)}=\xi_1^{(0)} \quad\text{and}
\quad 1\leq i,j,k\leq n-2
\end{eqnarray*}
respectively. It is easy to compute
\begin{eqnarray*}
&& \mathscr{L}^2({\tau_{n-1}})=\mathscr{L}_1(l_{n-1}^{(1)})=\mathscr{L}(l_{n-1}^{(1)})+c_{n-1}\big(\mathscr{L}_1(1)-\mathscr{L}(1)\big),\\
&&
\mathscr{L}^2({\tau_{n-1}}^2)=\mathscr{L}_1((l_{n-1}^{(1)})^2)=\mathscr{L}_1\big([(t_1+t_2+\ldots+t_{n-1})+c_{n-1}]^2\big)\\
&&\hspace{1.9cm} =\mathscr{L}((l_{n-1}^{(1)})^2)+c_{n-1}^2\big(\mathscr{L}_1(1)-\mathscr{L}(1)\big), \\
&& \mathscr{L}^2({\tau_{n-1}}^3)= \mathscr{L}((l_{n-1}^{(1)})^3)+c_{n-1}^3\big(\mathscr{L}_1(1)-\mathscr{L}(1)\big).\\
\end{eqnarray*}
Assume that $\mathscr{L}_{k}$ is a linear functional defined on
$\Pi^{n-k}$ for every $k(0\leq k<n)$ and satisfies property (P).
Then a cubature problem of degree 3 with respect to
$\mathscr{L}_{k}$ can be divided into two smaller cubature
problems---one with respect to $\mathscr{L}_{k+1}$ and the other
with respect to $\mathscr{L}^{k+1}$. Moreover $\mathscr{L}_{k+1}$
also satisfies the property (P) and then this process can continue
and will end when $k=n-1$. Finally, an $n$-dimensional cubature
problem can be transformed into $n$ one-dimensional cubature
problems.
\begin{thm}\label{th:cn}
Let
$$
l_{n-k}^{(k)}=x_1+x_2+\ldots+x_{n-k}-(n-k)x_{n-k+1}+c_{n-k}
$$
and $\mathscr{L}^{k+1}$ be the linear functional defined on $\Pi^1$
by the above process; then the corresponding moments are
\begin{eqnarray*}
\mathscr{L}^{k+1}(1) &=& \xi^{(k)}_{2}, \\
\mathscr{L}^{k+1}(\tau) &=& \mathscr{L}(l_{n-k}^{(k)})+c_{n-k}\big(\mathscr{L}_{k}(1)-\mathscr{L}(1)\big) \\
&=& c_{n-k}\mathscr{L}_{k}(1),\\
\mathscr{L}^{k+1}(\tau^2) &=& \mathscr{L}((l_{n-k}^{(k)})^2)+c_{n-k}^2\big(\mathscr{L}_{k}(1)-\mathscr{L}(1)\big) \\
&=& (n-k)(n-k+1)\mathscr{L}(x_1^2-x_1x_2)+c_{n-k}^2\mathscr{L}_k(1),\\
\mathscr{L}^{k+1}(\tau^3) &=&
\mathscr{L}((l_{n-k}^{(k)})^3)+c_{n-k}^3\big(\mathscr{L}_{k}(1)-\mathscr{L}(1)\big)\\
&=&
(n-k)(n-k+1)(n-k+2)\mathscr{L}(-x_1^3+3x_1^2x_2-2x_1x_2x_3)+c_{n-k}^3\mathscr{L}_{k}(1)
\end{eqnarray*}
where
\begin{eqnarray}\label{eq:cnk}
c_{n-k}=-\dfrac{\mathscr{L}(x_1^3-3x_1^2x_2+2x_1x_2x_3)}{\mathscr{L}(x_1^2-x_1x_2)},\quad
k=1,2,\ldots,n-2.
\end{eqnarray}
\end{thm}
\begin{proof}
It remains to prove Eq.\eqref{eq:cnk}. It follows from theorem
\ref{th:l1ln} that
$$
c_{n-k}=-\frac{\mathscr{L}_k(t_1^3+(n-3-k)t_1^2t_2-(n-2-k)t_1t_2t_3)}{\mathscr{L}_k(t_1^2-t_1t_2)}.
$$
According to the definition of $\mathscr{L}_k$, we have
\begin{eqnarray*}
&&\mathscr{L}_k(t_1^3+(n-3-k)t_1^2t_2-(n-2-k)t_1t_2t_3)\\
&=&\mathscr{L}(t_1^3+(n-3-k)t_1^2t_2-(n-2-k)t_1t_2t_3)\\
&=&\mathscr{L}\Big((x_1-x_{n-k})^3+(n-3-k)(x_1-x_{n-k})^2(x_2-x_{n-k})\\
&&\hspace{2cm}-(n-2-k)(x_1-x_{n-k})(x_2-x_{n-k})(x_3-x_{n-k})\Big)\\
&=&\mathscr{L}\Big(x_1^3-3x_1^2x_2+2x_1x_2x_3\Big),
\end{eqnarray*}
\begin{eqnarray*}
\mathscr{L}_k(t_1^2-t_1t_2)&=&\mathscr{L}((x_1-x_{n-k})^2-(x_1-x_{n-k})(x_2-x_{n-k}))\\
&=& \mathscr{L}(x_1^2-x_1x_2),
\end{eqnarray*}
where the permutation symmetry is used. This completes the proof.
\end{proof}
\begin{rem}
In fact $\mathscr{L}_{n-1}$ is also a one-dimensional integration
functional. The corresponding moment can be calculated by
\begin{eqnarray}
\begin{split}
&\mathscr{L}_{n-1}(1)=\xi_1^{(n-2)}, \\
&\mathscr{L}_{n-1}(\tau)=\mathscr{L}(x_1-x_2)=0, \\
&\mathscr{L}_{n-1}(\tau^2)=\mathscr{L}((x_1-x_2)^2)=2\mathscr{L}(x_1^2-x_1x_2), \\
&\mathscr{L}_{n-1}(\tau^3)=\mathscr{L}((x_1-x_2)^3)=0. \\
\end{split}
\label{eq:moment_nth}
\end{eqnarray}
For convenience, in what follows let
$\mathscr{L}^n=\mathscr{L}_{n-1}$.
\end{rem}
Suppose that
\begin{eqnarray}\label{eq:OneDim}
\mathscr{L}^{k}(g)\approx \sum_{i=1}^{n_k}w_{i,k}g(t_{i,k}),
k=1,2,\ldots,n
\end{eqnarray}
is exact for every $g\in \Pi_3^1$, and let
$v^{i,k}=(v_{i,k}^{(1)},v_{i,k}^{(2)},\ldots,v_{i,k}^{(n)})$ be the
solution of
\begin{align}
&\left\{
\begin{array}{l}
x_1+x_2+\ldots+x_n+c_n=t_{i,1} \\
x_{n-1}-x_n=0 \\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
x_{2}-x_n=0\\
x_1-x_{n}=0
\end{array}
\right. \qquad\hbox{for $k=1$,}\label{eq:Solution1}\\
&\left\{
\begin{array}{l}
x_1+x_2+\ldots+x_n+c_n=0 \\
x_1+x_2+\ldots+x_{n-1}-(n-1)x_n+c_{n-1}=0 \\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
x_1+x_2-2x_3+c_{2}=0\\
x_1-x_2=t_{i,n}
\end{array}
\right.\qquad\hbox{for $k=n$,}\label{eq:Solution3}\\
\intertext{and}
&\left\{
\begin{array}{l}
x_1+x_2+\ldots+x_n+c_n=0 \\
x_1+x_2+\ldots+x_{n-1}-(n-1)x_n+c_{n-1}=0 \\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
x_1+x_2+\ldots+x_{n-k+1}-(n-k+1)x_{n-k+2}+c_{n-k+1}=t_{i,k}\\
x_{n-k}-x_{n-k+1}=0\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
x_1-x_{n-k+1}=0
\end{array}
\right.\qquad\hbox{for $k=2,3,\ldots,n-1$.}\label{eq:Solution2}
\end{align}
Hence the final cubature formula can be written as
\begin{eqnarray*}
\mathscr{L}(f)\approx\sum_{k=1}^n\sum_{i=1}^{n_k}w_{i,k}f(v^{i,k})
\end{eqnarray*}
which is exact for any $f\in \Pi^n_3$. It is clear that the solution
of Eq.\eqref{eq:Solution1} is
$$
(\eta_i,\eta_i,\ldots,\eta_i), \quad \eta_i=\frac{t_{i,1}-c_n}{n}
$$
and the solution of Eq.\eqref{eq:Solution2} is
\begin{eqnarray*}
\left\{
\begin{array}{ll}
x_n=\dfrac{c_{n-1}-c_n-\delta_{2,k}t_{i,2}}{n}, \\
x_{n-1}=x_n+\dfrac{c_{n-2}-c_{n-1}}{n-1}=x_n \\
\ldots\ldots\ldots \\
x_{n-k+3}=x_{n-k+4}\\
x_{n-k+2}=x_{n-k+3}+\dfrac{c_{n-k+2}-c_{n-k+3}-t_{i,k}}{n-k+2}=x_{n-k+3}-\dfrac{t_{i,k}}{n-k+2}\\
x_1=x_2=\ldots=x_{n-k+1}=x_{n-k+2}+\dfrac{t_{i,k}-c_{n-k+1}}{n-k+1}
\end{array}
\right.
\end{eqnarray*}
where $\delta_{2,k}=1$ if $k=2$ and $\delta_{2,k}=0$ if $k\neq 2$,
and the solution of Eq.\eqref{eq:Solution3} is
\begin{eqnarray*}
\left\{
\begin{array}{ll}
x_n=x_{n-1}=\ldots=x_3=\dfrac{c_{n-1}-c_n}{n} \\
x_{2}=x_3-\dfrac{c_{n-1}+t_{i,n}}{2}\\
x_1=x_2+t_{i,n}=x_3+\dfrac{t_{i,n}-c_{n-1}}{2}
\end{array}
\right.
\end{eqnarray*}
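As an independent sanity check (not part of the derivation), the following Python sketch substitutes the closed-form solution of Eq.~\eqref{eq:Solution3} back into the system, for hypothetical illustration values of $n$, $t_{i,n}$ and the $c_j$'s, with $c_2=\ldots=c_{n-1}$ as Eq.~\eqref{eq:cnk} implies:

```python
# Verify the closed-form solution of system (Solution3); the values of
# n, t and the c's below are hypothetical illustration values.
n, t = 5, 0.7
c = {i: -0.3 for i in range(2, n)}   # c_2 = ... = c_{n-1} (Eq. (cnk))
c[n] = -1.1

x = [0.0] * (n + 1)                  # 1-based coordinates x[1..n]
for i in range(3, n + 1):            # x_3 = ... = x_n = (c_{n-1} - c_n)/n
    x[i] = (c[n - 1] - c[n]) / n
x[2] = x[3] - (c[n - 1] + t) / 2
x[1] = x[2] + t

assert abs(sum(x[1:]) + c[n]) < 1e-12            # x_1 + ... + x_n + c_n = 0
for j in range(2, n):                # x_1 + ... + x_j - j*x_{j+1} + c_j = 0
    assert abs(sum(x[1:j + 1]) - j * x[j + 1] + c[j]) < 1e-12
assert abs(x[1] - x[2] - t) < 1e-12              # x_1 - x_2 = t
```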
Summarizing the above discussion, we obtain
\begin{thm}\label{th:coef}
Assume that $\mathscr{L}$ is permutation symmetrical. Then there
must exist a cubature formula
\begin{eqnarray*}
\mathscr{L}(f)&\approx
&\sum_{k=2}^{n}\sum_{i=1}^{m_k}w_{i,k}f(\underbrace{\alpha_{i,k},\ldots,\alpha_{i,k}}_{n-k+1},\beta_{i,k},\underbrace{\gamma,\gamma,\ldots,\gamma}_{k-2})+\sum_{i=1}^{m_1}w_{i,1}f(\alpha_{i,1},\ldots,\alpha_{i,1}):=C(f)\\
\end{eqnarray*}
which is exact for every polynomial of degree $\leq 3$. In the
formula, $\gamma$, the $\alpha$'s and the $\beta$'s can be computed by
\begin{eqnarray*}
\left\{
\begin{array}{ll}
\gamma=\dfrac{c_{n-1}-c_n}{n}\\
\beta_{i,k}=\gamma-\dfrac{t_{i,k}}{n-k+2}, & \hbox{$2\leq k \leq n-1$;} \\
\alpha_{i,k}=\beta_{i,k}+\dfrac{t_{i,k}-c_{n-k+1}}{n-k+1}, & \hbox{$2\leq k \leq n-1$;} \\
\alpha_{i,1}=\dfrac{t_{i,1}-c_{n}}{n}, & \\
\beta_{i,n}=\gamma-\dfrac{t_{i,n}+c_2}{2}\\
\alpha_{i,n}=\beta_{i,n}+t_{i,n}
\end{array}
\right.
\end{eqnarray*}
where $t_{i,k}$'s and $w_{i,k}$'s are the nodes and weights of the
quadrature formula \eqref{eq:OneDim} with respect to
$\mathscr{L}^k$.
\end{thm}
The proof is a direct consequence of the computations above and is omitted.
\begin{rem}
For the quadrature problem associated with a one-dimensional moment
sequence, it is well known that the number of nodes is $n_k=2$ in
the general case. Hence the total number of nodes of the cubature
formula with respect to $\mathscr{L}$ is generally $2n$, and $2n$ is
usually the minimum among all cubature formulae of degree 3, except in one case of integration over the $n$-dimensional simplex \cite{stroud1961}.
For more on the one-dimensional moment problem, we refer to the
appendix of \cite{tnt}.
\end{rem}
\begin{rem}
For convenience, we present the relations of $\xi$'s as follows
\begin{eqnarray*}
\left.
\begin{array}{ccccccccccc}
\mathscr{L}(1) & \longrightarrow & \xi_1^{(0)} & \longrightarrow & \xi_1^{(1)} & \longrightarrow & \ldots & \longrightarrow & \xi_1^{(n-2)}\\
& & + & & + & &&& + \\
& & \xi_2^{(0)} & & \xi_2^{(1)} & && & \xi_2^{(n-2)}\\
\end{array}
\right.
\end{eqnarray*}
According to the previous discussion, it is clear that
$$
\mathscr{L}^k(1)=\xi_2^{(k-1)}\quad \text{for}\quad 1\leq k\leq n-1
\quad \text{and} \quad
\mathscr{L}^n(1)=\mathscr{L}_{n-1}(1)=\xi_1^{(n-2)}.
$$
\end{rem}
\section{Numerical Examples}
$\bullet$ First, we take the integration over the $n$-dimensional simplex as
an example. Define
$$
\mathscr{L}(f)=\int_{T_n}f(x_1,x_2,\ldots,x_n)\mathrm{d}x_1\mathrm{d}x_2\ldots\mathrm{d}x_n
$$
where
$$
T_n=\{(x_1,x_2,\ldots,x_n)|x_1+x_2+\ldots+x_n\leq 1,\quad x_i\geq
0,\quad i=1,2,\ldots,n\}.
$$
It is well known that
$$
\mathscr{L}(x_1^{\alpha_1}x_2^{\alpha_2}\ldots
x_n^{\alpha_n})=\dfrac{\alpha_1!\alpha_2!\ldots
\alpha_n!}{(n+\alpha_1+\alpha_2+\ldots+\alpha_n)!}.
$$
Then by a simple computation, we have
\begin{eqnarray*}
&&c_i=-\frac{2}{n+3},i=2,3,\ldots,n-1, \quad c_n=-\frac{n+2}{n+3},\\
&&\gamma=\frac{1}{n+3}.
\end{eqnarray*}
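These values can be cross-checked directly against Eq.~\eqref{eq:cnk} using the factorial moment formula; a minimal Python sketch:

```python
import math

def L_simplex(alpha, n):
    # simplex moment: L(x^alpha) = prod(alpha_i!) / (n + |alpha|)!
    alpha = alpha + (0,) * (n - len(alpha))
    num = math.prod(math.factorial(a) for a in alpha)
    return num / math.factorial(n + sum(alpha))

# Eq. (cnk): c = -L(x1^3 - 3 x1^2 x2 + 2 x1 x2 x3) / L(x1^2 - x1 x2)
for n in range(3, 9):
    num = (L_simplex((3,), n) - 3 * L_simplex((2, 1), n)
           + 2 * L_simplex((1, 1, 1), n))
    den = L_simplex((2,), n) - L_simplex((1, 1), n)
    assert abs(-num / den - (-2 / (n + 3))) < 1e-12
```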
Here if we take
$$
\xi_2^{(i)}=t_{i+1}\cdot \frac{1}{n\cdot n!},\ \xi_1^{(n-2)}=t_n \cdot \frac{1}{n\cdot
n!}, \ i=0,1,\ldots,n-2,
$$
and $\sum_{i=1}^n t_i=n$, then the moments of
$\mathscr{L}^{k+1}(1\leq k\leq n-2)$ are
\begin{eqnarray*}
\mathscr{L}^{k+1}(1) &=& t_{k+1}\frac{1}{n\cdot
n!}, \\
\mathscr{L}^{k+1}(\tau) &=& c_{n-k}\mathscr{L}_{k}(1)=-\frac{2(n-\sum_{i=1}^k t_i)}{(n+3)n\cdot
n!}, \\
\mathscr{L}^{k+1}(\tau^2) &=& \mathscr{L}((l_{n-k}^{(k)})^2)+c_{n-k}^2\big(\mathscr{L}_{k}(1)-\mathscr{L}(1)\big)=\frac{(n-k)(n-k+1)}{(n+2)!}+\frac{4(n-\sum_{i=1}^k t_i)}{n(n+3)^2\cdot
n!}, \\
\mathscr{L}^{k+1}(\tau^3) &=&
(n-k)(n-k+1)(n-k+2)\mathscr{L}(-x_1^3+3x_1^2x_2-2x_1x_2x_3)+c_{n-k}^3\mathscr{L}_{k}(1)\\
&=&\frac{-2(n-k)(n-k+1)(n-k+2)}{(n+3)!}-\frac{8(n-\sum_{i=1}^k t_i)}{n(n+3)^3\cdot
n!}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}^{1}(1) &=& \frac{t_1}{n\cdot
n!}, \\
\mathscr{L}^{1}(\tau) &=& \frac{-2}{(n+1)!(n+3)}, \\
\mathscr{L}^{1}(\tau^2) &=& \frac{n^2+5n+8}{(n+3)!(n+3)}, \\
\mathscr{L}^{1}(\tau^3) &=&
-\frac{2(n+2)(n+4)}{(n+3)^2(n+3)!}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}^{n}(1) &=& \frac{t_n}{n\cdot
n!}, \\
\mathscr{L}^{n}(\tau) &=& 0, \\
\mathscr{L}^{n}(\tau^2) &=& \frac{2}{(n+2)!}, \\
\mathscr{L}^{n}(\tau^3) &=&
0.
\end{eqnarray*}
By taking different values for the $\xi$'s, we can obtain different
cubature formulae. For $n=3$ and $n=4$, if we take all $t_i=1$, we
obtain the formulas shown in Tables \ref{tab:t31} and \ref{tab:t41}.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & weight \\ \hline
0.34240723692377 & 0.34240723692377 & 0.34240723692377 & 0.01469064053612 \\
0.14125289379518 & 0.14125289379518 & 0.14125289379518 & 0.04086491501944 \\
0.41353088165296 & 0.41353088165296 & 0.00627157002742 & 0.01887111233337 \\
0.12380973765487 & 0.12380973765487 & 0.58571385802358 & 0.03668444322218 \\
0.60719461208592 & 0.05947205458075 & 0.16666666666667 & 0.02777777777778 \\
0.05947205458075 & 0.60719461208592 & 0.16666666666667 & 0.02777777777778 \\
\hline
\end{tabular}
\caption{Nodes and weights for $T_3$}\label{tab:t31}
\end{table}
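As a numerical cross-check, the $n=3$ construction with all $t_i=1$ can be reproduced end to end; assuming the moments displayed above, the following Python sketch rebuilds the two-point rules for $\mathscr{L}^1,\mathscr{L}^2,\mathscr{L}^3$, maps their nodes through the formulas of Theorem~\ref{th:coef}, and verifies degree-3 exactness:

```python
import math
from itertools import product

n = 3                         # dimension of the simplex; all t_i = 1
fact = math.factorial

def L(alpha):
    # simplex moment: L(x^alpha) = prod(alpha_i!) / (n + |alpha|)!
    return math.prod(fact(a) for a in alpha) / fact(n + sum(alpha))

c2 = -2 / (n + 3)             # c_2 = ... = c_{n-1}
cn = -(n + 2) / (n + 3)       # c_n
gamma = (c2 - cn) / n         # = 1/(n+3)

def two_point_rule(m0, m1, m2, m3):
    # Gaussian-type 2-point quadrature matching the moments m0..m3
    det = m1 * m1 - m0 * m2
    a = (m0 * m3 - m1 * m2) / det     # monic orthogonal poly t^2 + a t + b
    b = (m2 * m2 - m1 * m3) / det
    d = math.sqrt(a * a - 4 * b)
    t1, t2 = (-a - d) / 2, (-a + d) / 2
    w1 = (m1 - m0 * t2) / (t1 - t2)
    return [(t1, w1), (t2, m0 - w1)]

rule = []                     # list of (node, weight) pairs
# k = 1: nodes (alpha, ..., alpha), moments of L^1
mom = (1 / (n * fact(n)), -2 / (fact(n + 1) * (n + 3)),
       (n * n + 5 * n + 8) / (fact(n + 3) * (n + 3)),
       -2 * (n + 2) * (n + 4) / ((n + 3) ** 2 * fact(n + 3)))
for t, w in two_point_rule(*mom):
    rule.append((((t - cn) / n,) * n, w))
# k = 2, ..., n-1: moments of L^k (j = k - 1 in the displayed formulas)
for k in range(2, n):
    j = k - 1
    mom = (1 / (n * fact(n)),
           -2 * (n - j) / ((n + 3) * n * fact(n)),
           (n - j) * (n - j + 1) / fact(n + 2)
           + 4 * (n - j) / (n * (n + 3) ** 2 * fact(n)),
           -2 * (n - j) * (n - j + 1) * (n - j + 2) / fact(n + 3)
           - 8 * (n - j) / (n * (n + 3) ** 3 * fact(n)))
    for t, w in two_point_rule(*mom):
        beta = gamma - t / (n - k + 2)
        alpha = beta + (t - c2) / (n - k + 1)    # c_{n-k+1} = c_2 here
        rule.append(((alpha,) * (n - k + 1) + (beta,) + (gamma,) * (k - 2), w))
# k = n: moments of L^n
for t, w in two_point_rule(1 / (n * fact(n)), 0.0, 2 / fact(n + 2), 0.0):
    beta = gamma - (t + c2) / 2
    rule.append(((beta + t, beta) + (gamma,) * (n - 2), w))

# exactness for every monomial of total degree <= 3
err = max(abs(sum(w * math.prod(p ** a for p, a in zip(pt, alpha))
              for pt, w in rule) - L(alpha))
          for alpha in product(range(4), repeat=n) if sum(alpha) <= 3)
assert err < 1e-12
```

The six resulting nodes and weights agree with Table~\ref{tab:t31} to the printed precision.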
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ \\ \hline
0.27145760185760 & 0.27145760185760 & 0.27145760185760 & 0.27145760185760 \\
0.12024746726682 & 0.12024746726682 & 0.12024746726682 & 0.12024746726682 \\
0.30652570925957 & 0.30652570925957 & 0.30652570925957 & -0.06243427063585 \\
0.11154151763119 & 0.11154151763119 & 0.11154151763119 & 0.52251830424930 \\
0.37131176827505 & 0.37131176827505 & -0.02833782226438 & 0.14285714285714 \\
0.09266869570542 & 0.09266869570542 & 0.52894832287488 & 0.14285714285714 \\
0.54391317546145 & 0.02751539596712 & 0.14285714285714 & 0.14285714285714 \\
0.02751539596712 & 0.54391317546145 & 0.14285714285714 & 0.14285714285714 \\
\hline
\multicolumn{4}{|c|}{$w_1=0.00254167472911,\ w_2=
0.00787499193755,\ w_3=0.00294495824332,$}\\
\multicolumn{4}{|c|}{$w_4=0.00747170842335,\ w_5=0.00365639117145,\
w_6=0.00676027549522,$}\\ \multicolumn{4}{|c|}{$\
w_7=0.00520833333333,\ w_8=0.00520833333333.$} \\ \hline
\end{tabular}
\caption{Nodes and weights for $T_4$}\label{tab:t41}
\end{table}
In Table \ref{tab:t31}, the first node lies outside the region. To
avoid this, we can take
$$
t_1=\frac{93}{85},\ t_2=\frac{378}{391},\ t_3=\frac{108}{115},
$$
and the corresponding cubature formula is listed in Table
\ref{tab:t32}. In this formula, the first and third nodes lie on the
boundary of the region $T_3$.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & weight \\ \hline
0.33333333333333 & 0.33333333333333 & 0.33333333333333 & 0.01875000000000 \\
0.14285714285714 & 0.14285714285714 & 0.14285714285714 & 0.04203431372549 \\
0.41666666666667 & 0.41666666666667 & 0.00000000000000 & 0.01875000000000 \\
0.12037037037037 & 0.12037037037037 & 0.59259259259259 & 0.03495843989770 \\
0.61593041596355 & 0.05073625070311 & 0.16666666666667 & 0.02608695652174 \\
0.05073625070311 & 0.61593041596355 & 0.16666666666667 & 0.02608695652174 \\
\hline
\end{tabular}
\caption{Nodes and weights for $T_3$}\label{tab:t32}
\end{table}
If we take
$$
t_1=\frac{94}{85},\ t_2=1,\ t_3=\frac{76}{85},
$$
then all the nodes lie inside the region; see Table \ref{tab:t33}.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & weight \\ \hline
0.33237874197689 & 0.33237874197689 & 0.33237874197689 & 0.01927056497746 \\
0.14303319621635 & 0.14303319621635 & 0.14303319621635 & 0.04216734351927 \\
0.41247250927755 & 0.41247250927755 & 0.00838831477823 & 0.02043165185637 \\
0.12085734331926 & 0.12085734331926 & 0.59161864669481 & 0.03512390369919 \\
0.61593041596355 & 0.04371016618787 & 0.16666666666667 & 0.02516339869281 \\
0.04371016618787 & 0.61593041596355 & 0.16666666666667 & 0.02516339869281 \\
\hline
\end{tabular}
\caption{Nodes and weights for $T_3$}\label{tab:t33}
\end{table}
In Table \ref{tab:t41}, all the weights are positive. However, three
of the nodes lie outside the region. If we take
$$
t_1=\frac{104}{75},\ t_2=\frac{3577}{2775},\ t_3=\frac{9947}{8880},\
t_4=\frac{49}{60}
$$
and add one more node with weight $-49/80$, then the corresponding
formula is shown in Table \ref{tab:t42}.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ \\ \hline
0.25000000000000 & 0.25000000000000 & 0.25000000000000 & 0.25000000000000 \\
0.12500000000000 & 0.12500000000000 & 0.12500000000000 & 0.12500000000000 \\
0.28571428571429 & 0.28571428571429 & 0.28571428571429 & 0.00000000000000 \\
0.11224489795918 & 0.11224489795918 & 0.11224489795918 & 0.52040816326531 \\
0.35714285714286 & 0.35714285714286 & 0.00000000000000 & 0.14285714285714 \\
0.07792207792208 & 0.07792207792208 & 0.55844155844156 & 0.14285714285714 \\
0.00000000000000 & 0.42857142857143 & 0.14285714285714 & 0.14285714285714 \\
0.42857142857143 & 0.00000000000000 & 0.14285714285714 & 0.14285714285714 \\
0.28571428571429 & 0.28571428571429 & 0.14285714285714 & 0.14285714285714 \\
\hline
\multicolumn{4}{|c|}{$w_1=0.00555555555556,\ w_2=
0.00888888888889,\
w_3=0.00600490196078,$}\\
\multicolumn{4}{|c|}{$w_4=0.00742227521640,\
w_5=0.00633074935401,\
w_6=0.00533755957993,$}\\
\multicolumn{4}{|c|}{$\
w_7=0.00425347222222,\ w_8=0.00425347222222, w_9=-49/80.$} \\
\hline
\end{tabular}
\caption{Nodes and weights for $T_4$}\label{tab:t42}
\end{table}
In Table \ref{tab:t42}, there are five nodes on the boundary. To bring
all the nodes inside the region, we can take
$$
t_1=\frac{7}{5},\ t_2=\frac{187}{145},\ t_3=\frac{179522}{160283},\ t_4=\frac{5}{6}
$$
and add one more node with weight $-618391/961698$; the corresponding formula is shown
in Table \ref{tab:t43}.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ \\ \hline
0.24955035825246 & 0.24955035825246 & 0.24955035825246 & 0.24955035825246 \\
0.12511271452382 & 0.12511271452382 & 0.12511271452382 & 0.12511271452382 \\
0.28567845223177 & 0.28567845223177 & 0.28567845223177 & 0.00010750044754 \\
0.11210697374487 & 0.11210697374487 & 0.11210697374487 & 0.52082193590823 \\
0.35714280321315 & 0.35714280321315 & 0.00000010785942 & 0.14285714285714 \\
0.07749399446444 & 0.07749399446444 & 0.55929772535683 & 0.14285714285714 \\
0.56855699818890 & 0.00287157323967 & 0.14285714285714 & 0.14285714285714 \\
0.00287157323967 & 0.56855699818890 & 0.14285714285714 & 0.14285714285714 \\
0.28571428571429 & 0.28571428571429 & 0.14285714285714 & 0.14285714285714 \\
\hline
\multicolumn{4}{|c|}{$w_1=0.00566710734383,\ w_2=0.0089162259895080,\
w_3=0.006035977209200,$}\\
\multicolumn{4}{|c|}{$w_4=0.00739793083678,\
w_5=0.0063627387083462,\ w_6=0.005780572945942,$}\\
\multicolumn{4}{|c|}{$\
w_7=0.00434027777778,\ w_8=0.0043402777777778,\ w_9=-0.643019950129875.$} \\
\hline
\end{tabular}
\caption{Nodes and weights for $T_4$}\label{tab:t43}
\end{table}
The integration problem on the $n$-dimensional simplex has been
studied extensively. According to the collection of R.~Cools on the
website (http://nines.cs.kuleuven.be/research/ecf/ecf.html), the
minimum number of nodes in a third-degree formula is $n+2$, in
which case one weight is negative. Apart from the $(n+2)$-point formula,
the minimum number is 8 and 10 for $n=3$ and $n=4$, respectively. If
we only consider formulas with positive weights, the minimum number of
points is 8 and 11 for $n=3$ and $n=4$, respectively. Therefore
our formulas for $n=3$ and $n=4$ have the fewest nodes among all
formulas with positive weights.
$\bullet$ Second, we take the integration over the positive sector of a ball as an example.
Define
$$
\mathscr{L}(f)=\int_{S_n}f(x_1,x_2,\ldots,x_n)\mathrm{d}x_1\mathrm{d}x_2\ldots\mathrm{d}x_n
$$
where
$S_n=\{(x_1,x_2,\ldots,x_n)|x_1^2+x_2^2+\ldots+x_n^2\leq 1, x_1\geq 0,\ldots,
x_n\geq 0\}$. It is easy to see that
$$
\mathscr{L}(x_1^{\alpha_1}x_2^{\alpha_2}\ldots
x_n^{\alpha_n})=\dfrac{(\alpha_1-1)!!(\alpha_2-1)!!\ldots
(\alpha_n-1)!!}{(n+\alpha_1+\alpha_2+\ldots+\alpha_n)!!}\cdot
\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n-n_o}{2}\big]}
$$
where $n_o$ denotes the number of odd entries among
$\alpha_1,\alpha_2,\ldots,\alpha_n$, $m!!$ denotes the double
factorial of $m$, and $m!!=1$ for $m\leq 0$.
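This double-factorial formula can be checked against the classical closed form for these Dirichlet-type integrals in terms of Gamma functions; a minimal Python sketch (the Gamma-function expression is the standard one, not taken from this paper):

```python
import math
from itertools import product

def dfact(m):
    # double factorial with the convention m!! = 1 for m <= 0
    return 1 if m <= 0 else m * dfact(m - 2)

def moment_df(alpha):
    # the double-factorial moment formula for the positive sector S_n
    n, n_o = len(alpha), sum(a % 2 for a in alpha)
    return (math.prod(dfact(a - 1) for a in alpha) / dfact(n + sum(alpha))
            * (math.pi / 2) ** ((n - n_o) // 2))

def moment_gamma(alpha):
    # classical closed form for int_{S_n} x^alpha dx via Gamma functions
    n, s = len(alpha), sum(alpha)
    b = [(a + 1) / 2 for a in alpha]
    return (math.prod(math.gamma(t) for t in b)
            / (2 ** (n - 1) * math.gamma(sum(b)) * (n + s)))

for n in (2, 3, 4):
    for alpha in product(range(4), repeat=n):
        assert abs(moment_df(alpha) - moment_gamma(alpha)) < 1e-12
```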
Then by a simple computation, we have
\begin{eqnarray*}
&&c_i=-\dfrac{(n+2)!!}{(n+3)!!}\cdot\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n-1}{2}\big]-\big[\frac{n}{2}\big]}\cdot
\dfrac{4-\pi}{\pi-2},i=2,3,\ldots,n-1, \\
&& c_n=-\dfrac{(n+2)!!}{(n+3)!!}\cdot\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n-1}{2}\big]-\big[\frac{n}{2}\big]}\cdot
\dfrac{(n-1)\pi-(2n-4)}{\pi-2},\\
&&\gamma=\dfrac{(n+2)!!}{(n+3)!!}\cdot\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n-1}{2}\big]-\big[\frac{n}{2}\big]}.
\end{eqnarray*}
Here if we take
$$
\xi_2^{(i)}=\frac{t_{i+1}}{n\cdot n!!}\cdot
\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n}{2}\big]},\ \xi_1^{(n-2)}=\frac{t_{n}}{n\cdot
n!!}\cdot \Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n}{2}\big]},\ i=0,1,2,\ldots,n-2,
$$
then the moments of $\mathscr{L}^{k+1}(1\leq k\leq n-2)$ are
\begin{eqnarray*}
\mathscr{L}^{k+1}(1) &=& \frac{t_{k+1}}{n\cdot
n!!}\cdot \big(\frac{\pi}{2}\big)^{[\frac{n}{2}]}, \\
\mathscr{L}^{k+1}(\tau) &=&
c_{n-k}\mathscr{L}_{k}(1)=-\dfrac{(n+2)\cdot (n-\sum_{i=1}^kt_i)}{n\cdot (n+3)!!}\cdot\Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n-1}{2}\big]}\cdot \dfrac{4-\pi}{\pi-2}, \\
\mathscr{L}^{k+1}(\tau^2) &=&
\frac{(n-k)(n-k+1)}{(n+2)!!}\Big(\frac{\pi}{2}\Big)^{[\frac{n-2}{2}]}\cdot
\Big(\frac{\pi}{2}-1\Big)\\
&&+\frac{(n-\sum_{i=1}^kt_i)[(n+2)!!]^2}{n\cdot n!!\cdot
[(n+3)!!]^2}\cdot \Big(\frac{4-\pi}{\pi-2}\Big)^2\cdot
\Big(\frac{\pi}{2}\Big)^{2[\frac{n-1}{2}]-[\frac{n}{2}]}, \\
\mathscr{L}^{k+1}(\tau^3) &=&
\frac{(n-k)(n-k+1)(n-k+2)}{(n+3)!!}\Big(\frac{\pi}{2}\Big)^{[\frac{n-3}{2}]}\cdot
\Big(\frac{\pi}{2}-2\Big)\\
&&-\frac{(n-\sum_{i=1}^kt_i)[(n+2)!!]^3}{n\cdot n!!\cdot
[(n+3)!!]^3}\cdot \Big(\frac{4-\pi}{\pi-2}\Big)^3\cdot
\Big(\frac{\pi}{2}\Big)^{3[\frac{n-1}{2}]-2[\frac{n}{2}]}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}^{1}(1) &=& \frac{t_1}{n\cdot
n!!}\cdot \Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n}{2}\big]}, \\
\mathscr{L}^{1}(\tau) &=& \frac{2(n\pi-3n+\pi-4)}{(n+3)!!\cdot (\pi-2)}\cdot
\Big(\frac{\pi}{2}\Big)^{[\frac{n-1}{2}]}, \\
\mathscr{L}^{1}(\tau^2) &=&
\frac{n(\frac{\pi}{2}+n-1)}{(n+2)!!}\Big(\frac{\pi}{2}\Big)^{[\frac{n-2}{2}]}-\frac{2n\cdot
(n+2)!!}{(n+1)!!\cdot (n+3)!!}\cdot
\Big(\frac{\pi}{2}\Big)^{2[\frac{n-1}{2}]-[\frac{n}{2}]}\cdot
\dfrac{(n-1)\pi-(2n-4)}{\pi-2}\\
&& + \frac{n+2}{[(n+3)!!]^2}\cdot \Big(\frac{\pi}{2}\Big)^{2[\frac{n-1}{2}]-[\frac{n}{2}]}\cdot
\Big(\dfrac{(n-1)\pi-(2n-4)}{\pi-2}\Big)^2,\\
\mathscr{L}^{1}(\tau^3) &=& \frac{3n^2-n}{(n+3)!!}\cdot
\Big(\frac{\pi}{2}\Big)^{[\frac{n-1}{2}]}+\frac{n(n-1)(n-2)}{(n+3)!!}\Big(\frac{\pi}{2}\Big)^{[\frac{n-3}{2}]}\\
&& -\frac{3n(\frac{\pi}{2}+n-1)}{(n+3)!!}\cdot
\Big(\frac{\pi}{2}\Big)^{[\frac{n-3}{2}]}\cdot
\dfrac{(n-1)\pi-(2n-4)}{\pi-2}\\
&& +\frac{3n}{(n+1)!!}\cdot\Big[\frac{(n+2)!!}{(n+3)!!}\Big]^2\cdot
\Big(\frac{\pi}{2}\Big)^{3[\frac{n-3}{2}]-2[\frac{n}{2}]}\cdot\Big(\dfrac{(n-1)\pi-(2n-4)}{\pi-2}\Big)^2\\
&& -\frac{1}{n!!}\cdot\Big[\frac{(n+2)!!}{(n+3)!!}\Big]^3\cdot
\Big(\frac{\pi}{2}\Big)^{3[\frac{n-3}{2}]-2[\frac{n}{2}]}\cdot\Big(\dfrac{(n-1)\pi-(2n-4)}{\pi-2}\Big)^3
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}^{n}(1) &=& \frac{t_n}{n\cdot
n!!}\cdot \Big(\dfrac{\pi}{2}\Big)^{\big[\frac{n}{2}\big]}, \\
\mathscr{L}^{n}(\tau) &=& 0, \\
\mathscr{L}^{n}(\tau^2) &=&
\frac{2}{(n+2)!!}\Big(\frac{\pi}{2}\Big)^{[\frac{n-2}{2}]}\cdot
\Big(\frac{\pi}{2}-1\Big), \\
\mathscr{L}^{n}(\tau^3) &=&
0.
\end{eqnarray*}
If we take all $t_i=1$ for $n=3$ and $n=4$, we obtain the formulae shown in Tables
\ref{tab:s31} and \ref{tab:s41}. If we take $t_1=0.8$, $t_2=1.31$, $t_3=1.11$ and $t_4=0.78$
for $n=4$, we obtain a formula with all the nodes inside the region; see Table
\ref{tab:s42}.
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & weight \\ \hline
0.53887049476004 & 0.53887049476004 & 0.53887049476004 & 0.07852747507104 \\
0.18341741723402 & 0.18341741723402 & 0.18341741723402 & 0.09600545012840 \\
0.57520979290336 & 0.57520979290336 & 0.02206116228206 & 0.06975676243570 \\
0.20283315000517 & 0.20283315000517 & 0.76681444807844 & 0.10477616276373 \\
0.76016315955181 & 0.09981758853698 & 0.31250000000000 & 0.08726646259972 \\
0.09981758853698 & 0.76016315955181 & 0.31250000000000 & 0.08726646259972 \\
\hline
\end{tabular}
\caption{Nodes and weights for $S_3$}\label{tab:s31}
\end{table}
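As a numerical sanity check, the formula of Table~\ref{tab:s31} can be tested directly against the moment formula above; the Python sketch below hard-codes the printed nodes and weights and verifies degree-3 exactness up to roughly the printed precision:

```python
import math
from itertools import product

def dfact(m):
    # double factorial with the convention m!! = 1 for m <= 0
    return 1 if m <= 0 else m * dfact(m - 2)

def exact_moment(alpha):
    # double-factorial moment formula for the positive sector S_n
    n, n_o = len(alpha), sum(a % 2 for a in alpha)
    return (math.prod(dfact(a - 1) for a in alpha) / dfact(n + sum(alpha))
            * (math.pi / 2) ** ((n - n_o) // 2))

# nodes and weights of Table s31 (n = 3, all t_i = 1)
nodes = [(0.53887049476004, 0.53887049476004, 0.53887049476004),
         (0.18341741723402, 0.18341741723402, 0.18341741723402),
         (0.57520979290336, 0.57520979290336, 0.02206116228206),
         (0.20283315000517, 0.20283315000517, 0.76681444807844),
         (0.76016315955181, 0.09981758853698, 0.31250000000000),
         (0.09981758853698, 0.76016315955181, 0.31250000000000)]
weights = [0.07852747507104, 0.09600545012840, 0.06975676243570,
           0.10477616276373, 0.08726646259972, 0.08726646259972]

err = max(abs(sum(w * math.prod(x ** a for x, a in zip(p, alpha))
              for p, w in zip(nodes, weights)) - exact_moment(alpha))
          for alpha in product(range(4), repeat=3) if sum(alpha) <= 3)
assert err < 1e-8
```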
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ \\ \hline
0.47721483105875 & 0.47721483105875 & 0.47721483105875 & 0.47721483105875 \\
0.17126237887529 & 0.17126237887529 & 0.17126237887529 & 0.17126237887529 \\
0.48420041705925 & 0.48420041705925 & 0.48420041705925 & -0.06966276495181 \\
0.20526869095372 & 0.20526869095372 & 0.20526869095372 & 0.76713241336478 \\
0.56004995494835 & 0.56004995494835 & -0.02818760532449 & 0.29102618165375 \\
0.16531879543241 & 0.16531879543241 & 0.76127471370739 & 0.29102618165375 \\
0.74847573599445 & 0.05241038692400 & 0.29102618165375 & 0.29102618165375 \\
0.05241038692400 & 0.74847573599445 & 0.29102618165375 & 0.29102618165375 \\
\hline
\multicolumn{4}{|c|}{$w_1=0.03771636146294,\ w_2=0.03938992292057,\ w_3=0.02874740384082,$}\\
\multicolumn{4}{|c|}{$w_4=0.04835888054269,\ w_5=0.03167997303102,\
w_6=0.04542631135249,$}\\ \multicolumn{4}{|c|}{$\
w_7=0.03855314219176,\
w_8=0.03855314219176.$} \\
\hline
\end{tabular}
\caption{Nodes and weights for $S_4$}\label{tab:s41}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|cccc|}
\hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ \\ \hline
0.49819378497585 & 0.49819378497585 & 0.49819378497585 & 0.49819378497585 \\
0.15640817934597 & 0.15640817934597 & 0.15640817934597 & 0.15640817934597 \\
0.46048445804733 & 0.46048445804733 & 0.46048445804733 & 0.00148511203764 \\
0.21848509706654 & 0.21848509706654 & 0.21848509706654 & 0.72748319509617 \\
0.54494615070832 & 0.54494615070832 & 0.00202000298217 & 0.29102618165375 \\
0.16998718727254 & 0.16998718727254 & 0.75193793030368 & 0.29102618165375 \\
0.79451246595922 & 0.00637365695922 & 0.29102618165375 & 0.29102618165375 \\
0.00637365695922 & 0.79451246595922 & 0.29102618165375 & 0.29102618165375 \\
\hline
\multicolumn{4}{|c|}{$w_1=0.02857087469598,\ w_2=0.03311415281083,\ w_3=0.04213154579078,$}\\
\multicolumn{4}{|c|}{$w_4=0.05887768675432,\ w_5=0.03842850515254,\
w_6=0.04715947052132,$}\\ \multicolumn{4}{|c|}{$\
w_7=0.03855314219176,\
w_8=0.03855314219176.$} \\
\hline
\end{tabular}
\caption{Nodes and weights for $S_4$}\label{tab:s42}
\end{table}
\section{Conclusion}
In this paper, we have presented a method for constructing third-degree
cubature formulae for integrals with permutation symmetry. Our method
is a decomposition method and is easy to compute with. At the end of
the paper, we present some numerical results, which appear to be new.
In contrast to existing methods, we focus on the case of permutation
symmetry, which is seldom considered. The numerical results show that
the number of nodes attains, or is close to, the minimum. Moreover, in
most cases the weights in our formulas are either all positive or
include at most one negative weight.
\end{document}
\begin{document}
\title{Graph Hausdorff dimension, Kolmogorov complexity and construction of fractal networks}
\begin{abstract}
In this paper we introduce
and study discrete analogues of Lebesgue and Hausdorff dimensions for graphs. It turns out that they are
closely related to well-known graph characteristics such as rank dimension and Prague (or Ne{\v s}et{\v r}il-R{\"o}dl) dimension. This
allows us to formally define fractal graphs and establish the fractality of some graph classes. We show how the Hausdorff dimension of graphs is related to their Kolmogorov complexity. We also demonstrate applications of this approach by establishing a novel property of general compact metric spaces using ideas from hypergraph theory, and by proving an estimate of the Prague dimension of almost all graphs using methods from algorithmic information theory.
{\bf Keywords:} Prague dimension, rank dimension, Lebesgue dimension, Hausdorff dimension, Kolmogorov complexity, fractal
\end{abstract}
\section{Introduction}
Recently there has been growing interest in studying self-similarity and fractal
properties of graphs, largely inspired by applications in biology, sociology and chemistry \cite{song2005self,shanker2007defining}. Such studies often employ
statistical physics methods that use ideas from graph theory and general topology, but they are not intended to approach the problems under consideration in a rigorous mathematical way. Several studies that translate certain notions of topological dimension theory to graphs using combinatorial methods are also known \cite{smyth2010topological,evako1994dimension}. However, to the best of our knowledge, a rigorous combinatorial theory that defines and studies graph-theoretical analogues of topological fractals has not yet been developed.
In this paper we introduce and study graph analogues of Lebesgue and Hausdorff dimensions of topological spaces from the graph-theoretical point of view. We show that they are closely related to well-known graph characteristics such as rank dimension \cite{berge1984hypergraphs} and Prague (or Ne{\v s}et{\v r}il-R{\"o}dl) dimension \cite{hell2004graphs}. This
allows us to define fractal graphs and to determine the fractality of some graph classes. In particular, it turns out that fractal graphs can, in a certain sense, be considered generalizations of class 2 graphs. We demonstrate that various properties of dimensions of compact topological spaces have graph-theoretical analogues. Moreover, we also show how these relations allow a reverse transfer of combinatorial results to general topology by proving a new property of general compact metric spaces using machinery from the theory of hypergraphs. Finally, we show how the Hausdorff (and Prague) dimension of graphs is related to Kolmogorov complexity. This relation allows us to obtain a lower bound on the Prague dimension of almost all graphs using the incompressibility method from the theory of Kolmogorov complexity.
\section{Basic definitions and facts from measure theory, dimension theory and graph theory}
\label{sec:examples}
Let $X$ be a compact metric space. A family $\mathcal{C} = \{C_{\alpha} : \alpha \in A\}$ of open subsets of $X$ is a {\it cover} if $X = \bigcup_{\alpha \in A} C_{\alpha}$. A cover $\mathcal{C}$ is a {\it $k$-cover} if every $x\in X$ belongs to at most $k$ sets from $\mathcal{C}$; an {\it $\epsilon$-cover} if the diameter $diam(C_i)$ of every set $C_i\in \mathcal{C}$ does not exceed $\epsilon$; and an {\it $(\epsilon, k)$-cover} if it is both an $\epsilon$-cover and a $k$-cover. The {\it Lebesgue dimension} ({\it cover dimension}) $dim_L(X)$ of the space $X$ is the minimal integer $k$ such that for every $\epsilon > 0$ there exists an $(\epsilon, k+1)$-cover of $X$.
Let $\mathcal{F}$ be a semiring of subsets of a set $X$. A function $m:\mathcal{F}\rightarrow \mathbb{R}^+_0$ is {\it a measure} if $m(\emptyset) = 0$ and $m(A\cup B) = m(A) + m(B)$ for any disjoint sets $A,B\in \mathcal{F}$ with $A\cup B\in\mathcal{F}$.
Now let $X$ be a subspace of a Euclidean space $\mathbb{R}^d$. A {\it hyper-rectangle} $R$ is a Cartesian product of semi-open intervals: $R = [a_1,b_1)\times\dots\times [a_d,b_d)$, where $a_i < b_i$, $a_i,b_i \in \mathbb{R}$; the {\it volume} of the hyper-rectangle $R$ is defined as $vol(R) = \prod_{i=1}^d (b_i - a_i)$. The {\it $d$-dimensional Jordan measure} of the set $X$ is the value
$\mathcal{J}^d(X) = \inf\{\sum_{R\in \mathcal{C}} vol(R)\},$
where the infimum is taken over all finite covers $\mathcal{C}$ of $X$ by disjoint hyper-rectangles. The {\it $d$-dimensional Lebesgue measure} $\mathcal{L}^d(X)$ of a measurable set $X$ is defined analogously, with the difference that the infimum is taken over all countable covers $\mathcal{C}$ of $X$ by (not necessarily disjoint) hyper-rectangles.
Let $s >0$ and $\epsilon > 0$. Consider the quantity
$\mathcal{H}^s_{\epsilon}(X) = \inf\{\sum_{C\in \mathcal{C}}diam(C)^s\},$ where the infimum is taken over all $\epsilon$-covers of $X$. The {\it $s$-dimensional Hausdorff measure} of the set $X$ is defined as $\mathcal{H}^s(X) = \lim_{\epsilon \rightarrow 0} \mathcal{H}^s_{\epsilon}(X)$.
The aforementioned measures are related as follows. If the Jordan measure of the set $X$ exists, then it is equal to its Lebesgue measure. For Borel sets, the Lebesgue measure and the Hausdorff measure are equivalent in the sense that for any Borel set $Y$ we have $\mathcal{L}^d(Y) = C_d\mathcal{H}^d(Y)$, where $C_d$ is a constant depending only on $d$.
{\it Hausdorff dimension} $dim_H(X)$ of the set $X$ is the value
\begin{equation}\label{hdimtop}
dim_H(X) = \inf\{s \geq 0 :\mathcal{H}^s(X) < \infty\}.
\end{equation}
Lebesgue and Hausdorff dimension of $X$ are related as follows:
\begin{equation}\label{lebvshaus}
dim_L(X) \leq dim_H(X).
\end{equation}
The set $X$ is {\it a fractal} \cite{falconer2004fractal}, if the inequality (\ref{lebvshaus}) is strict.
Let $G=(V(G),E(G))$ be a simple graph. A family of subgraphs $\mathcal{C} = \{C_1,...,C_m\}$ of $G$ is a {\it cover} if every edge $uv\in E(G)$ belongs to at least one subgraph from $\mathcal{C}$. A cover $\mathcal{C}$ is a {\it $k$-cover} if every vertex $v\in V(G)$ belongs to at most $k$ subgraphs of $\mathcal{C}$, and a {\it clique cover} if all subgraphs $C_i$ are cliques. A set $W\subseteq V(G)$ {\it separates} vertices $u,v\in V(G)$ if $|W\cap \{u,v\}| = 1$.
For a hypergraph $\mathcal{H} = (\mathcal{V}(\mathcal{H}),\mathcal{E}(\mathcal{H}))$, its {\it rank} $r(\mathcal{H})$ is the maximal size of its edges. A hypergraph $\mathcal{H}$ is {\it strongly $k$-colorable} if each vertex can be assigned a color from the set $\{1,...,k\}$ in such a way that the vertices of every edge receive pairwise different colors.
The {\it intersection graph} $L = L(\mathcal{H})$ of a hypergraph $\mathcal{H}$ is a simple graph whose vertex set $V(L) = \{v_E : E\in \mathcal{E}(\mathcal{H})\}$ is in a bijective correspondence with the edge set of $\mathcal{H}$, with two distinct vertices $v_E,v_F\in V(L)$ adjacent if and only if $E\cap F \ne \emptyset$. The following theorem establishes a connection between intersection graphs and clique $k$-covers:
\begin{theorem}\cite{berge1984hypergraphs}\label{thm:covervsinter}
A graph $G$ is an intersection graph of a hypergraph of rank $\leq k$ if and only if it has a clique $k$-cover.
\end{theorem}
{\it Rank dimension} \cite{metelsky2003} $dim_R(G)$ of a graph $G$ is the minimal $k$ such that $G$ satisfies conditions of Theorem \ref{thm:covervsinter}. In particular, graphs with $dim_R(G)=1$ are disjoint unions of cliques (such graphs are called {\it equivalence graphs} \cite{alon1986covering} or {\it $M$-graphs} \cite{tyshkevich1989matr}), and graphs with $dim_R(G)=2$ are line graphs of multigraphs.
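To make Theorem \ref{thm:covervsinter} concrete, here is a small Python illustration (a toy example, not taken from the paper): the edge set of the cycle $C_4$, viewed as a rank-2 hypergraph, has $C_4$ itself as its intersection graph, and the stars $X_v$ of the hypergraph vertices form the corresponding clique 2-cover:

```python
from itertools import combinations

def intersection_graph(edges):
    # vertices of L(H) are the hyperedges; adjacency = nonempty intersection
    return {(i, j) for i, j in combinations(range(len(edges)), 2)
            if edges[i] & edges[j]}

H = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]      # rank-2 hypergraph: edges of C4
L = intersection_graph(H)
assert L == {(0, 1), (1, 2), (2, 3), (0, 3)}      # L(H) is again a 4-cycle

# clique 2-cover from the proof idea: X_v = {hyperedges containing v}
cover = {v: {i for i, e in enumerate(H) if v in e} for v in {1, 2, 3, 4}}
for clique in cover.values():             # each X_v induces a clique in L(H)
    assert all((min(p), max(p)) in L for p in combinations(clique, 2))
assert all(sum(i in c for c in cover.values()) <= 2 for i in range(len(H)))
assert all(any({i, j} <= c for c in cover.values()) for i, j in L)
```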
{\it Categorical product} of graphs $G_1$ and $G_2$ is the graph $G_1 \times G_2$ with the vertex set $V(G_1\times G_2) = V(G_1)\times V(G_2)$ with two vertices $(u_1,u_2)$ and $(v_1,v_2)$ being adjacent whenever $u_1v_1\in E(G_1)$ and $u_2v_2\in E(G_2)$. {\it Prague dimension} $dim_P(G)$ is the minimal integer $d$ such that $G$ is an induced subgraph of a categorical product of $d$ complete graphs.
An {\it equivalent cover} $\mathcal{M} = \{M_1,...,M_k\}$ of the graph $G$ consists of spanning subgraphs such that each subgraph $M_i$ is an equivalence graph. An equivalent cover is {\it separating} if every two distinct vertices of $G$ are separated by one of the connected components of some subgraph from $\mathcal{M}$.
Relations between Prague dimension, clique covers, vertex labeling and intersection graphs are described by the following theorem:
\begin{theorem}\cite{hell2004graphs,babaits1996kmern}\label{thm:pdimcharact}
The following statements are equivalent:
1) $dim_P(\overline{G}) \leq k$;
2) there exists a separating equivalent cover of $G$;
3) $G$ is an intersection graph of strongly $k$-colorable hypergraph without multiple edges;
4) there exists an injective mapping $\phi: V(G) \rightarrow \mathbb{N}^k$, $v \mapsto (\phi_1(v),\dots,\phi_k(v))$ such that $uv\not\in E(G)$ whenever $\phi_j(u) \ne \phi_j(v)$ for every $j=1,...,k$.
\end{theorem}
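Condition 4) is easy to check mechanically; the following Python sketch (a toy example, not taken from the paper) verifies that the labeling $\phi$ of the 4-cycle below satisfies it with $k=2$, certifying $dim_P(\overline{C_4}) \leq 2$:

```python
from itertools import combinations

def satisfies_condition4(vertices, edges, phi):
    # phi must be injective, and any pair of vertices whose labels differ
    # in every coordinate must be non-adjacent (condition 4 of the theorem)
    if len(set(phi.values())) != len(vertices):
        return False
    for u, v in combinations(vertices, 2):
        differs_everywhere = all(a != b for a, b in zip(phi[u], phi[v]))
        if differs_everywhere and {u, v} in edges:
            return False
    return True

V = ['a', 'b', 'c', 'd']
E = [{'a', 'b'}, {'b', 'c'}, {'c', 'd'}, {'d', 'a'}]      # the 4-cycle
phi = {'a': (1, 1), 'b': (1, 2), 'c': (2, 2), 'd': (2, 1)}
assert satisfies_condition4(V, E, phi)    # adjacent labels share a coordinate
```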
The numbers of vertices and edges of a graph $G$ are denoted by $n$ and $m$, respectively. The subgraph of $G$ induced by a vertex subset $U\subseteq V(G)$ is denoted by $G[U]$. For two graphs $G_1$ and $G_2$, the notation $G_1 \leq G_2$ indicates that $G_1$ is an induced subgraph of $G_2$.
\section{Lebesgue dimension of graphs}\label{lebgraph}
Lebesgue dimension of a metric space is defined through $k$-covers by sets of arbitrary small diameter. It is natural to transfer this definition to graphs using graph $k$-covers by subgraphs of smallest possible diameter, i.e. by cliques. Thus by Theorem \ref{thm:covervsinter} we define Lebesgue dimension of a graph through its rank dimension:
\begin{equation}\label{kdimeqleb}
dim_L(G) = dim_R(G)-1.
\end{equation}
The analogy between the Lebesgue and rank dimensions is further justified by Theorem \ref{topspacehyper} below, which states that any compact metric space of bounded Lebesgue dimension can be approximated by intersection graphs of (infinite) hypergraphs of bounded rank. To prove it, we will use the following fact:
\begin{lemma}\label{lebnumber}\cite{edgar2007measure}
Let $X$ be a compact metric space and $\mathcal{U}$ be its open cover. Then there exists $\delta>0$ (called a {\it Lebesgue number} of $\mathcal{U}$) such that for every subset $A\subseteq X$ with $diam(A) < \delta$ there is a set $U\in \mathcal{U}$ such that $A\subseteq U$.
\end{lemma}
\begin{theorem}\label{topspacehyper}
Let $X$ be a compact metric space with a metric $\rho$. Then $dim_L(X) \leq k-1$ if and only if for any $\epsilon > 0$ there exists a number $0 < \delta < \epsilon$ and a hypergraph $\mathcal{H}(\epsilon)$ on a finite vertex set $V(\mathcal{H}(\epsilon))$ with an edge set $E(\mathcal{H}(\epsilon)) = \{e_x : x\in X\}$ with the following properties:
1) $rank(\mathcal{H}(\epsilon)) \leq k$;
2) $e_x\cap e_y \ne \emptyset$ for every $x,y\in X$ such that $\rho(x,y) < \delta$;
3) $\rho(x,y) < \epsilon$ for every $x,y\in X$ such that $e_x \cap e_y \ne \emptyset$;
4) for every $v\in V(\mathcal{H}(\epsilon))$ the set $X_v = \{x\in X : v\in e_x \}$ is open.
\end{theorem}
\begin{proof}
The proof borrows some ideas from the theory of intersection graphs (see \cite{berge1984hypergraphs}). Suppose that $dim_L(X) \leq k-1$, fix $\epsilon > 0$ and let $\mathcal{C}$ be a corresponding $(\epsilon,k)$-cover of $X$. Since $X$ is compact, we can assume that $\mathcal{C}$ is finite, i.e. $\mathcal{C} = \{C_1,...,C_m\}$. Let $\delta$ be a Lebesgue number of $\mathcal{C}$.
For a point $x\in X$ let $e_x = \{i\in [m] : x\in C_i\}$. Consider the hypergraph $\mathcal{H}$ with $V(\mathcal{H}) = [m]$ and $E(\mathcal{H}) = \{e_x : x\in X\}$. Then $\mathcal{H}$ satisfies conditions 1)-4). Indeed, $rank(\mathcal{H}) \leq k$, since $\mathcal{C}$ is a $k$-cover. If $\rho(x,y) < \delta$, then by Lemma \ref{lebnumber} there is $i\in [m]$ such that $\{x,y\}\subseteq C_i$, i.e. $i\in e_x\cap e_y$. The condition $j\in e_x\cap e_y$ means that $x,y\in C_j$, and so $\rho(x,y) < \epsilon$, since $diam(C_j) < \epsilon$. Finally, for every $i\in V(\mathcal{H})$ we have $X_i = C_i$, and thus $X_i$ is open.
Conversely, let $\mathcal{H}$ be a hypergraph with $V(\mathcal{H}) = [m]$ satisfying conditions 1)-4). Then it is straightforward to check that $\mathcal{C} = \{X_1,...,X_m\}$ is an open $(\epsilon,k)$-cover of $X$.
\end{proof}
So, $dim_L(X)\leq k-1$ whenever for any $\epsilon > 0$ there is a well-defined hypergraph $\mathcal{H}(\epsilon)$ with $rank(\mathcal{H}(\epsilon)) \leq k$ whose edges are in bijective correspondence with the points of $X$, such that two points are close if and only if the corresponding edges intersect.
\section{Graph measure and Hausdorff dimension of graphs}\label{measuregraph}
In order to rigorously define a graph analogue of Hausdorff dimension, we first need to define a corresponding measure. Note that in any meaningful finite graph topology every set is a Borel set. As mentioned above, for measurable Borel sets in $\mathbb{R}^n$ the Jordan, Lebesgue and Hausdorff measures are equivalent. Thus in what follows we work with a graph analogue of the Jordan measure.
It is known that every graph is isomorphic to an induced subgraph of a categorical product of complete graphs \cite{hell2004graphs}. Consider a graph $G$ embedded into a categorical product of $d$ complete graphs $K^1_{n_1}\times\dots \times K^d_{n_d}$. Without loss of generality we may assume that $n_1 = ... = n_d = n$, i.e.
\begin{equation}\label{graphembed}
G\cong G' \leq S = (K_n)^d.
\end{equation}
Suppose that $V(K_n) = \{1,\dots,n\}$. The graph $S$ will be referred to as {\it a space} of dimension $d$, and $G'$ as an embedding of $G$ into $S$. It is easy to see that, by definition, every vertex $v\in V(S)$ is a vector $v = (v_1,...,v_d)$ with $v_i\in [n]$, and two vertices $u$ and $v$ are adjacent in $S$ if and only if $v_r\ne u_r$ for every $r\in [d]$.
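The adjacency rule in $S$ translates directly into code. A minimal Python sketch (the function names and small example are ours):

```python
from itertools import product

def space_vertices(n, d):
    """Vertices of S = (K_n)^d: all vectors of length d over [n]."""
    return list(product(range(1, n + 1), repeat=d))

def adjacent(u, v):
    """u ~ v in S iff the vectors differ in every coordinate."""
    return all(a != b for a, b in zip(u, v))

print(adjacent((1, 1), (2, 2)))  # True: differ in both coordinates
print(adjacent((1, 1), (1, 2)))  # False: agree in the first coordinate
```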
A {\it hyper-rectangle} $R = R(J_1,...,J_d)$ is a subgraph of $S$ defined as follows: for every $i=1,...,d$ choose a non-empty subset $J_i \subseteq [n]$; then $R = K_n[J_1]\times\dots\times K_n[J_d]$. The {\it volume} of a hyper-rectangle $R$ is the value $vol(R) = |V(R)| = \prod_{i=1}^d |J_i|$.
A family $\mathcal{R} = \{R^1,...,R^m\}$ of hyper-rectangles is a {\it rectangle co-cover} of $G'$ if the subgraphs $R^i$ are pairwise vertex-disjoint, $V(G')\subseteq \bigcup_{i=1}^m V(R^i)$ and $\mathcal{R}$ covers all non-edges of $G'$, i.e. for every $x,y\in V(G')$ with $xy\not\in E(G')$ there exists $j\in[m]$ such that $x,y\in V(R^j)$. We define the {\it $d$-volume} of a graph $G$ as
\begin{equation}\label{grvol}
vol^d(G) = \min_{G'}\min_{\mathcal{R}} \sum_{R\in \mathcal{R}} vol(R),
\end{equation}
where the first minimum is taken over all embeddings $G'$ of $G$ into $d$-dimensional spaces $S$, and the second over all rectangle co-covers of $G'$ (see Fig. \ref{fig:embedP4}).
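The three defining conditions of a rectangle co-cover (pairwise vertex-disjointness, covering all vertices, covering all non-edges) can also be verified directly. A Python sketch, with hyper-rectangles encoded as lists of coordinate sets $J_1,\dots,J_d$ (an illustrative encoding of ours):

```python
from itertools import product, combinations
from math import prod

def rect_vertices(J):
    """Vertex set of the hyper-rectangle R(J_1,...,J_d)."""
    return set(product(*J))

def is_cocover(Gv, Gedges, rects):
    cells = [rect_vertices(J) for J in rects]
    if any(a & b for a, b in combinations(cells, 2)):  # pairwise vertex-disjoint
        return False
    if not set(Gv) <= set().union(*cells):             # cover all vertices
        return False
    E = {frozenset(e) for e in Gedges}
    for u, v in combinations(Gv, 2):                   # cover all non-edges
        if frozenset((u, v)) not in E and not any(u in c and v in c for c in cells):
            return False
    return True

# Two non-adjacent vertices embedded in (K_2)^2, covered by one rectangle of volume 2
Gv, Gedges = [(1, 1), (1, 2)], []
R = [[{1}, {1, 2}]]
print(is_cocover(Gv, Gedges, R), sum(prod(len(J) for J in Js) for Js in R))  # True 2
```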
\begin{figure}
\caption{An embedding of the path $P_4$ and its rectangle co-cover.}
\label{fig:embedP4}
\end{figure}
We define the {\it $d$-measure} of a graph $F$ as follows:
\begin{equation}\label{neasuregraph}
\mathcal{H}^d(F) = \left \{
\begin{array}{ll}
vol^d(\overline{F}), & \text{if } \overline{F} \text{ can be represented as in (\ref{graphembed})},\\
+\infty, & \text{otherwise.}
\end{array}
\right.
\end{equation}
In the remaining part of this section we will prove that $\mathcal{H}^d$ indeed satisfies the additivity property of a measure, i.e.
$\mathcal{H}^d(F^1\cup F^2) = \mathcal{H}^d(F^1) + \mathcal{H}^d(F^2)$, where $F^1\cup F^2$ is the disjoint union of graphs $F^1$ and $F^2$.
Let $W_1,W_2\subseteq V(S)$. We write $W_1\sim W_2$ if every vertex from $W_1$ is adjacent to every vertex from $W_2$. Denote by $P_k(W_1)$ {\it the $k$-th projection} of $W_1$, i.e. the set of all $k$-th coordinates of vertices of $W_1$: $$P_k(W_1) = \{v_k : v\in W_1\}.$$
In particular, $P_k(R(J_1,...,J_d)) = J_k$.
The following proposition follows directly from the definition of $S$:
\begin{proposition}\label{padjcoord}
$W_1 \sim W_2$ if and only if $P_k(W_1)\cap P_k(W_2) = \emptyset$ for every $k\in [d]$.
\end{proposition}
Assume that $\mathcal{R} = \{R^1,...,R^m\}$ is a minimal rectangle co-cover of a minimal embedding $G'$, i.e. $vol^d(G) = \sum_{R\in \mathcal{R}} vol(R)$. We now demonstrate that $\mathcal{R}$ has a rather simple structure.
Let $J_k^i = P_k(R^i)$.
\begin{proposition}\label{dvolumeinterindex}
$J^i_k \cap J^j_k = \emptyset$ for all distinct $i,j\in [m]$ and every $k\in [d]$.
\end{proposition}
\begin{proof}
First note that for every hyper-rectangle $R^i = R(J^i_1,...,J^i_d)$, every coordinate $k\in [d]$ and every $l\in J^i_k$ there exists $v\in V(G')\cap V(R^i)$ such that $v_k = l$. Indeed, suppose that this does not hold for some $l\in J^i_k$. If $|J_k^i| = 1$, this means that $V(G')\cap V(R^i) = \emptyset$. Thus $\mathcal{R}' = \mathcal{R}\setminus \{R^i\}$ is a rectangle co-cover, which contradicts the minimality of $\mathcal{R}$. If $|J_k^i| > 1$, consider the hyper-rectangle $(R^i)' = R(J^i_1,...,J_k^i\setminus \{l\},...,J^i_d)$. The set $\mathcal{R}' = \mathcal{R}\setminus \{R^i\}\cup \{(R^i)'\}$ is a rectangle co-cover, and the volume of $(R^i)'$ is smaller than the volume of $R^i$. Again this contradicts the minimality of $\mathcal{R}$.
Now assume that $l \in J^i_k \cap J^j_k$ for some distinct $i,j\in [m]$ and some $k\in [d]$. Then there exist $u\in V(G')\cap V(R^i)$ and $v\in V(G')\cap V(R^j)$ such that $u_k = v_k = l$. So $uv\not\in E(G')$, and therefore by definition $uv$ is covered by some $R^h\in \mathcal{R}$. The hyper-rectangle $R^h$ intersects both $R^i$ and $R^j$, which contradicts the vertex-disjointness required of a rectangle co-cover.
\end{proof}
\begin{proposition}\label{dvolumecoconcomp}
Let $U^i = V(G')\cap V(R^i)$, $i=1,...,m$. Then the set $U=\{U^1,...,U^m\}$ coincides with the set of co-connected components of $G'$.
\end{proposition}
\begin{proof}
Propositions \ref{padjcoord} and \ref{dvolumeinterindex} imply that vertices of distinct hyper-rectangles from the co-cover $\mathcal{R}$ are pairwise adjacent. So $U^i \sim U^j$ for every $i,j\in [m]$, $i\ne j$.
Let $\mathcal{C}=\{C^1,...,C^r\}$ be the set of co-connected components of $G'$ (thus $V(G') = \bigsqcup_{l=1}^r C^l = \bigsqcup_{i=1}^m U^i$). Consider a component $C^l\in \mathcal{C}$ and the sets $C^l_i = C^l \cap U^i$, $i=1,...,m$. We have $C^l = \bigsqcup_{i=1}^m C^l_i$ and $C^l_i \sim C^l_j$ for all $i\ne j$. Therefore, due to co-connectedness of $C^l$, exactly one of the sets $C^l_i$ is non-empty.
So, we have demonstrated that every co-connected component $C^l$ is contained in one of the sets $U^i$. Suppose now that some $U^i$ consists of several components, i.e. without loss of generality $U^i = C^1\sqcup\dots\sqcup C^q$, $q\geq 2$. Let $I_k^j = P_k(C^j)$, $j=1,...,q$. By Proposition \ref{padjcoord} we have $I_k^{j_1} \cap I_k^{j_2} = \emptyset$ for all $j_1\ne j_2$, $k=1,...,d$. Consider the hyper-rectangles $R^{i,1} = R(I_1^1,...,I_d^1)$, ..., $R^{i,q} = R(I_1^q,...,I_d^q)$. These hyper-rectangles are pairwise vertex-disjoint, and $C^j \subseteq V(R^{i,j})$ for all $j\in [q]$. Since every pair of non-adjacent vertices of $G'$ is contained in one of its co-connected components, we arrive at the conclusion that the set $\mathcal{R}' = \mathcal{R}\setminus \{R^i\} \cup \{R^{i,1},...,R^{i,q}\}$ is a rectangle co-cover. Moreover, $V(R^{i,1})\sqcup\dots\sqcup V(R^{i,q}) \subsetneq V(R^i)$, and therefore $\sum_{j=1}^q vol(R^{i,j}) < vol(R^i)$, which contradicts the minimality of $\mathcal{R}$.
\end{proof}
\begin{corollary}\label{voljoin}
Let $\mathcal{C}=\{C^1,...,C^m\}$ be the set of co-connected components of $G'$. Then $R^i = R(P_1(C^i),...,P_d(C^i))$ and $C^i \subseteq V(R^i)$.
\end{corollary}
\begin{proof}
By Proposition \ref{dvolumecoconcomp}, $|\mathcal{R}| = |\mathcal{C}|$, and every component $C^i\in \mathcal{C}$ is contained in a unique hyper-rectangle $R^i\in \mathcal{R}$, $i=1,...,m$. Every pair of non-adjacent vertices of $G'$ belongs to one of its co-connected components. This fact, together with the minimality of $\mathcal{R}$, implies that $R^i$ is the minimal hyper-rectangle that contains $C^i$. Thus $R^i = R(P_1(C^i),...,P_d(C^i))$.
\end{proof}
\begin{corollary}\label{cor:minvolspace}
If $\overline{G}$ is connected, then $vol^d(G)$ is the minimal volume of a $d$-dimensional space $S$ into which $G$ can be embedded.
\end{corollary}
\begin{corollary}\label{volconcomp}
Let $\mathcal{D}=\{D^1,...,D^m\}$ be the set of co-connected components of $G$. Then $vol^d(G) = \sum_{i=1}^m vol^d(G[D^i])$.
\end{corollary}
\begin{proof}
Suppose that $\{C^1,...,C^m\}$ is the set of co-connected components of $G'$, and $G[D^i]\cong G'[C^i]$. Proposition \ref{dvolumecoconcomp} and Corollary \ref{voljoin} imply that $\{R^i\}$ is a rectangle co-cover of the embedding $G'[C^i]$ of $G[D^i]$. Therefore we have $vol(R^i) \geq vol^d(G[D^i])$ and thus $vol^d(G) = \sum_{i=1}^m vol(R^i) \geq \sum_{i=1}^m vol^d(G[D^i])$.
Now let $G'^i$ be a minimal embedding of $G[D^i]$ into $(K_n)^d$. By Proposition \ref{dvolumecoconcomp}, every minimal hyper-rectangle co-cover of $G'^i$ consists of a single hyper-rectangle $R^i = R(J^i_1,...,J^i_d)$ or, in other words, $G[D^i]$ is embedded into $R^i$. Now construct an embedding $G'$ of $G$ into $S=(K_{nm})^d$ and its hyper-rectangle co-cover as follows:
let $I^i_k = (i-1)n + J^i_k = \{(i-1)n + l : l\in J^i_k\}$, $k=1,...,d$. Obviously, $Q^i = K_{nm}[I^i_1]\times\dots\times K_{nm}[I^i_d] \cong R^i$, so we can embed $G[D^i]$ into $Q^i$; denote the resulting embedding again by $G'^i$. By Proposition \ref{padjcoord}, $V(G'^i) \sim V(G'^j)$ for $i\ne j$, so $G' = G'^1\cup...\cup G'^m$ is indeed an embedding of $G$.
All $Q^i$ are pairwise vertex-disjoint. Since every pair of non-adjacent vertices belongs to some $G[D^i]$, the set $\mathcal{Q} = \{Q^1,...,Q^m\}$ is a hyper-rectangle co-cover of $G'$. Therefore $vol^d(G) \leq \sum_{i=1}^m vol(Q^i) = \sum_{i=1}^m vol^d(G[D^i])$.
\end{proof}
\begin{theorem}\label{measureadditivity}
Let $F^1$ and $F^2$ be two graphs. Then
\begin{equation}\label{eq:measaddit}
\mathcal{H}^d(F^1\cup F^2) = \mathcal{H}^d(F^1) + \mathcal{H}^d(F^2)
\end{equation}
\end{theorem}
\begin{proof}
It can be shown \cite{babai1992linear} that $\overline{F^1\cup F^2}$ can be embedded into a categorical product of $d$ complete graphs if and only if both $\overline{F^1}$ and $\overline{F^2}$ have such embeddings. Therefore the relation (\ref{eq:measaddit}) holds if one of its sides is equal to $+\infty$.
If all of $\overline{F^1}$, $\overline{F^2}$ and $\overline{F^1\cup F^2}$ can be embedded into a product of $d$ complete graphs, then (\ref{eq:measaddit}) follows from Corollary \ref{volconcomp}.
\end{proof}
Following the analogy with the Hausdorff dimension of topological spaces (\ref{hdimtop}), we define the {\it Hausdorff dimension} of a graph $G$ as
\begin{equation}\label{hdimgraph}
dim_H(G) = \min\{s \geq 0 :\mathcal{H}^s(G) < \infty\}-1.
\end{equation}
Thus, the Hausdorff dimension of a graph can be identified with the Prague dimension of its complement minus 1.
\section{Relations with Kolmogorov complexity}
Let $\mathbb{B}^*$ be the set of all finite binary strings and $\Phi:\mathbb{B}^* \rightarrow \mathbb{B}^*$ be a computable function. The {\it Kolmogorov complexity} $K_{\Phi}(s)$ of a binary string $s$ with respect to $\Phi$ is defined as the minimal length of a string $s'$ such that $\Phi(s') = s$. Since Kolmogorov complexities with respect to any two functions differ only by an additive constant \cite{li2009Kolmogorov}, it is usually assumed that some canonical function $\Phi$ is fixed, and the Kolmogorov complexity is denoted simply by $K(s)$. Thus, informally, $K(s)$ can be described as the length of a shortest encoding of the string $s$ that allows one to reconstruct it completely. Analogously, for two strings $s,t\in \mathbb{B}^*$, the {\it conditional Kolmogorov complexity} $K(s|t)$ is the length of a shortest encoding of $s$ when $t$ is known in advance. More information on the properties of Kolmogorov complexity can be found in \cite{li2009Kolmogorov}.
Every graph $G$ can be naturally encoded using the string representation of the upper triangle of its adjacency matrix. The Kolmogorov complexity of a graph can then be defined as the Kolmogorov complexity of that string \cite{mowshowitz2012entropy,buhrman1999kolmogorov}. This gives the estimates $K(G) = O(n^2)$, $K(G|n) = O(n^2)$. Alternatively, an $n$-vertex connected labeled graph can be represented as a list of edges, with the ends of each edge encoded by their binary representations, concatenated with a binary representation of $n$. This gives the estimates $K(G) \leq 2m\log(n) + \log(n) = O(m\log(n))$, $K(G|n) \leq 2m\log(n) = O(m\log(n))$ \cite{li2009Kolmogorov,mowshowitz2012entropy}.
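To make the comparison of the two encodings concrete, the bit counts (ignoring lower-order terms for encoding $n$ itself) can be computed in a few lines of Python; the specific numbers below are ours, purely for illustration:

```python
from math import ceil, log2

def adj_bits(n):
    """Upper triangle of the adjacency matrix: one bit per vertex pair."""
    return n * (n - 1) // 2

def edgelist_bits(n, m):
    """Edge list: two endpoints per edge, ceil(log2 n) bits each."""
    return 2 * m * ceil(log2(n))

# For a sparse graph (here a tree: m = n - 1) the edge list is far shorter
n, m = 1024, 1023
print(adj_bits(n), edgelist_bits(n, m))  # 523776 20460
```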
In the rest of this section we will, for simplicity, consider connected graphs (for disconnected graphs all considerations below can be applied to every connected component). Let $dim_H(G) = dim_P(\overline{G}) - 1 = d-1$ and $\mathcal{H}^d(G) = h$. Then by Corollary \ref{cor:minvolspace} $\overline{G}$ is an induced subgraph of a product
\begin{equation}\label{graphembedmin}
K_{p_1}\times\dots \times K_{p_d},
\end{equation}
where $h = p_1\cdot\dots \cdot p_d$.
So, by Theorem \ref{thm:pdimcharact}, $G$ and $\overline{G}$ can be encoded using a collection of vectors $\phi(v) = (\phi_1(v),\dots,\phi_d(v))$, $v\in V(G)$, $\phi_j(v)\in [p_j]$. Such an encoding can be stored as a string containing the binary representations of the coordinates $\phi_j(v)$ using $\log(p_j)$ bits each, concatenated with binary representations of $n$ and of $p_j$, $j=1,...,d$. The length of this string is $(n+1)\sum_{j=1}^d \log(p_j) + \log(n)$. Analogously, if $n$ and the $p_j$ are given, then the length of the encoding is $n\sum_{j=1}^d \log(p_j)$. Thus the following estimate holds:
\begin{proposition} For any $G$,
\begin{equation}\label{eq:KolG}
K(G)\leq (n+1)\log(\mathcal{H}^d(G)) + \log(n)
\end{equation}
\begin{equation}\label{eq:KolcondG}
K(G|n,p_1,...,p_d)\leq n\log(\mathcal{H}^d(G))
\end{equation}
\end{proposition}
Let $p^* = \max_j p_j$. Then we have $K(G)\leq (n+1)d\log(p^*) + \log(n),$\newline $K(G|n,p_1,...,p_d)\leq nd\log(p^*).$ By the minimality of the representation (\ref{graphembedmin}), we have $p^* \leq n$. Thus $K(G) = O(dn\log(n))$ and $K(G|n,p_1,...,p_d) = O(dn\log(n))$. So, the Hausdorff (and Prague) dimension can be considered as a measure of the descriptive complexity of a graph. In particular, for graphs with a small Hausdorff dimension, (\ref{eq:KolG})-(\ref{eq:KolcondG}) give a better estimate of their Kolmogorov complexity than the standard estimates mentioned above.
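As a numerical illustration of how the product-labeling encoding behind (\ref{eq:KolcondG}) can beat the quadratic adjacency-matrix bound when the dimension is small, consider this Python sketch (the graph parameters are hypothetical):

```python
from math import log2

def labeling_bits(n, p):
    """Length of the product-labeling encoding: n vertices, one coordinate
    per factor K_{p_j}, log2(p_j) bits per coordinate."""
    return n * sum(log2(pj) for pj in p)

# d = 3 factors of size 16 (so p* = 16) versus the n(n-1)/2-bit adjacency encoding
n, p = 200, [16, 16, 16]
print(labeling_bits(n, p), n * (n - 1) // 2)  # 2400.0 19900
```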
The relations between Hausdorff (Prague) dimension and Kolmogorov complexity can be used to derive a lower bound for the Hausdorff (and Prague) dimension in the typical case. More rigorously, let $X$ be a graph property and $\mathcal{P}_n(X)$ be the set of labeled $n$-vertex graphs having $X$. Property $X$ holds for {\it almost all graphs} \cite{erdHos1977chromatic} if $|\mathcal{P}_n(X)|/2^{\binom{n}{2}} \rightarrow 1$ as $n \rightarrow \infty$. We will use the following lemma:
\begin{lemma}\label{lem:kolcomplbgr}\cite{buhrman1999kolmogorov}
For every $n > 0$ and $\delta: \mathbb{N}\rightarrow \mathbb{N}$, there are at least $2^{\binom{n}{2}}(1-2^{-\delta(n)})$ $n$-vertex labeled graphs $G$ such that $K(G|n) \geq \frac{n(n-1)}{2} - \delta(n)$.
\end{lemma}
Then the following theorem is true:
\begin{theorem}\label{thm:almostalldim}
For every $\epsilon > 0$, almost all graphs have Prague dimension $d$ such that
\begin{equation}\label{lowbounddim}
d \geq \frac{1}{1+\epsilon}\Big(\frac{n-1}{2\log(n)} - \frac{1}{n}\Big)
\end{equation}
\end{theorem}
\begin{proof}
Let $n_{\epsilon} = \ceil{\frac{2}{\epsilon}}$. Consider a graph $G$ with $n\geq n_{\epsilon}$. From (\ref{eq:KolG}) and $p^*\leq n$ we have $K(G)\leq (n+1)d\log(n) + \log(n)$. Using the fact that $\frac{1}{n} + \frac{1}{nd}\leq \frac{2}{n}\leq \epsilon$, it is straightforward to check that $(n+1)d\log(n) + \log(n) \leq (1+\epsilon)nd\log(n)$. Therefore we have
\begin{equation}\label{eq:kolmepsilon}
K(G)\leq (1+\epsilon)nd\log(n)
\end{equation}
Let $X$ be the set of all graphs $G$ such that
\begin{equation}\label{eq:kolmcondlog}
K(G|n) \geq \frac{n(n-1)}{2} - \log(n)
\end{equation}
Using Lemma \ref{lem:kolcomplbgr} with $\delta(n) = \log(n)$, we conclude that $|\mathcal{P}_n(X)|/2^{\binom{n}{2}} \geq 1-\frac{1}{n}$, and so almost all graphs have the property $X$.
Now it is easy to see that for graphs with the property $X$ and with $n\geq n_{\epsilon}$ the inequality (\ref{lowbounddim}) holds: it follows by combining the inequalities (\ref{eq:kolmepsilon})-(\ref{eq:kolmcondlog}) with the fact that $K(G|n)\leq K(G)$. This concludes the proof.
\end{proof}
The considerations above and the proof of Theorem \ref{thm:almostalldim} imply that for every $\epsilon > 0$ and every $n$-vertex graph $G$ with sufficiently large $n$, $dim_H(G) \geq C\frac{K(G)}{n}$, where $C=\frac{1}{(1+\epsilon)\log(p^*)}$. Interestingly, similar relations between Kolmogorov complexity and analogues of Hausdorff dimension hold for other objects. In particular, for the Cantor space $\mathcal{C}$ (the space of all infinite 0-1 sequences) it is proved in \cite{mayordomo2002kolmogorov} that the Kolmogorov complexity and the effective (or constructive) Hausdorff dimension $dim_H(s)$ of each sequence $s$ are related as follows: $dim_H(s)= \liminf\limits_{n \rightarrow \infty} \frac{K(s_n)}{n}$ (here $s_n$ is the prefix of $s$ of length $n$). Similar estimates are known for other variants of Hausdorff dimension \cite{ryabko1994complexity,staiger1993kolmogorov}.
\section{Fractal graphs}\label{fracgraph}
Importantly, the relation (\ref{lebvshaus}) between Lebesgue and Hausdorff dimensions of topological spaces remains true for graphs.
\begin{proposition}\label{lebvshausgraph}
For any graph $G$
\begin{equation}
dim_R(G) - 1 = dim_L(G) \leq dim_H(G) = dim_P(\overline{G})-1.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that the Prague dimension of the graph $\overline{G}$ is equal to $k$. Then by Theorem \ref{thm:pdimcharact} $G$ is an intersection graph of a strongly $k$-colorable hypergraph. Since the rank of every such hypergraph obviously does not exceed $k$, Theorem \ref{thm:covervsinter} implies that $dim_R(G)\leq k$.
\end{proof}
The definitions of the dimensions immediately imply that both the Lebesgue and the Hausdorff dimension are monotone with respect to induced subgraphs.
Analogously to the definition of fractals for topological spaces, we say that a graph $G$ is a {\it fractal} if $dim_L(G) < dim_H(G)$, i.e. $dim_R(G) < dim_P(\overline{G})$.
The following proposition provides a first non-trivial example of fractal graphs:
\begin{proposition}
Triangle-free fractals are exactly the triangle-free graphs of class 2.
\end{proposition}
\begin{proof}
Note that for triangle-free graphs $dim_R(G) = \Delta(G)$. Moreover, it can be shown (see \cite{hell2004graphs}) that if a triangle-free graph $G$ is not a disjoint union of edges $nK_2$, then $dim_P(\overline{G}) = \chi'(G)$, where $\chi'(G)$ is the chromatic index. Therefore triangle-free fractals that are not disjoint unions of edges are exactly the triangle-free graphs of class 2. On the other hand, bipartite graphs other than $nK_2$ are not fractals, since they are all triangle-free graphs of class 1.
\end{proof}
The connection between fractality and class 2 graphs continues to hold for graphs with maximum degree $\Delta(G) \leq 3$. To formulate the corresponding theorem, we need to introduce the following graph operation. Let vertices $a,b,c\in V(G)$ form a triangle.
Replace the subgraph $G[\{a,b,c\}]$ with the graph $H_{12}$ shown in Fig. \ref{fig:H12}, and replace the edges $ua,vb,wc\in E(G)$ (if such edges exist) with the edges $ux_a$, $vx_b$, $wx_c$, where $x_a,x_b,x_c$ are the vertices of degree 1 of the graph $H_{12}$. Let $\mathcal{T} = \{T_1,...,T_k\}$ be a set of disjoint triangles of the graph $G$. The graph $\widetilde{G}(\mathcal{T})$ is obtained by applying the operation described above to each triangle $T_i$, $i=1,...,k$.
\begin{figure}
\caption{The graph $H_{12}$.}
\label{fig:H12}
\end{figure}
\begin{theorem}\label{fractaldelta3}
Let $G$ be a connected graph with $\Delta(G) \leq 3$. Then $G$ is a fractal if and only if one of the following conditions holds:
\begin{itemize}
\item[1)] $G=K_4$
\item[2)] $G$ is claw-free, but contains a diamond or an odd hole
\item[3)] For any set of disjoint triangles $\mathcal{T}$, the graph $\widetilde{G}(\mathcal{T})$ is of class 2
\end{itemize}
\end{theorem}
\begin{proof}
It is easy to see from the definitions that $dim_R(K_4) = 1 < dim_P(O_4) = 2$, which implies that $K_4$ is a fractal. Assume further that $G \ne K_4$. Then, since $dim_R(G) \leq 3$, $G$ is obviously a fractal if and only if one of the following conditions holds:
\begin{itemize}
\item [(a)] $dim_R(G) = 2$, $dim_P(\overline{G}) = 3$
\item [(b)] $dim_P(\overline{G}) \geq 4$
\end{itemize}
First we will show that condition (a) is equivalent to condition 2) from the formulation of the theorem. By Theorem \ref{thm:covervsinter}, $dim_R(G) = 2$ if and only if $G$ is a line graph of a multigraph. Such graphs are characterized by a list of 7 forbidden induced subgraphs \cite{bermond1973representative}, only one of which ($K_{1,3}$) has maximum degree at most 3 (see e.g. \cite{berge1984hypergraphs}). Therefore $dim_R(G) = 2$ if and only if $G$ is claw-free.
Furthermore, by Theorem \ref{thm:pdimcharact}, $dim_P(\overline{G}) = 2$ if and only if $G$ is a line graph of a bipartite graph. These graphs are exactly the (claw, diamond, odd-hole)-free graphs \cite{brandstadt1999graph}. By combining these characterizations, we get that $dim_R(G) = 2$ and $dim_P(\overline{G}) = 3$ if and only if $G$ is claw-free but contains a diamond or an odd hole.
Now we will show that condition (b) is equivalent to condition 3) from the formulation of the theorem. To do this, we will use the following property of the graph $H_{12}$:
\begin{lemma}\label{h12color}
In every edge 3-coloring of the graph $H_{12}$, the edges $x_ay_9,x_by_3,x_cy_6$ have the same color.
\end{lemma}
\begin{proof}
Let $c:E(H_{12}) \rightarrow \{1,2,3\}$ be an edge 3-coloring of $H_{12}$. Assume first that $c(y_1y_9) = c(y_5y_6) = 1$, $c(y_1y_2) = c(y_5y_6) = 2$.
\end{proof}
\end{proof}
As another example of fractal graphs, consider the Sierpinski gasket graphs $S_n$ \cite{teguia06serp,klavzar2008coloring,teufl2006spanning}. This class of graphs is associated with the Sierpinski gasket, a well-known topological fractal with Hausdorff dimension $\log(3)/\log(2)\approx 1.585$. The edges of $S_n$ are the line segments of the $n$-th approximation of the Sierpinski gasket, and the vertices are the intersection points of these segments (Fig. \ref{fig:serpGasket}).
Sierpinski gasket graphs can be defined recursively as follows. Consider tetrads $T_n = (S_n,x_1,x_2,x_3)$, where $x_1,x_2,x_3$ are distinct vertices of $S_n$ called {\it contact vertices}. The first Sierpinski gasket graph $S_1$ is a triangle $K_3$ with vertices $x_1,x_2,x_3$, the first tetrad is defined as $T_1 = (S_1,x_1,x_2,x_3)$. The $(n+1)$-th Sierpinski gasket graph $S_{n+1}$ is constructed from 3 disjoint copies $(S_n,x_1,x_2,x_3)$, $(S'_n,x'_1,x'_2,x'_3)$, $(S''_n,x''_1,x''_2,x''_3)$ of $n$-th tetrad $T_n$ by gluing together $x_2$ with $x'_1$, $x'_3$ with $x''_2$ and $x_3$ with $x''_1$; the corresponding $(n+1)$-th tetrad is $T_{n+1} = (S_{n+1},x_1,x'_2,x''_3)$.
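The recursive construction can be implemented directly. The Python sketch below builds the edge set of $S_n$ by gluing three tagged copies exactly as described (the labeling scheme is ours); the resulting counts agree with the known values $|E(S_n)| = 3^n$ and $|V(S_n)| = 3(3^{n-1}+1)/2$.

```python
def sierpinski(n):
    """Return (edges, contact vertices) of the Sierpinski gasket graph S_n.
    Copies are made disjoint by tagging vertex labels; gluing is done by
    redirecting a copy's contact vertex onto an already existing vertex."""
    if n == 1:
        a, b, c = 'a', 'b', 'c'
        return {frozenset(e) for e in [(a, b), (b, c), (a, c)]}, (a, b, c)

    E, (x1, x2, x3) = sierpinski(n - 1)

    def copy(tag, glue):
        f = lambda v: glue.get(v, (tag, v))
        return {frozenset(map(f, e)) for e in E}, tuple(f(x) for x in (x1, x2, x3))

    E0, (a1, a2, a3) = copy(0, {})
    E1, (b1, b2, b3) = copy(1, {x1: a2})           # glue x'_1 with x_2
    E2, (c1, c2, c3) = copy(2, {x1: a3, x2: b3})   # glue x''_1 with x_3, x''_2 with x'_3
    return E0 | E1 | E2, (a1, b2, c3)

E, contacts = sierpinski(3)
V = {v for e in E for v in e}
print(len(V), len(E))  # 15 27
```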
\begin{proposition}\label{serpgasket}
For every $n\geq 2$ the Sierpinski gasket graph $S_n$ is a fractal with $dim_L(S_n) = 1$ and $dim_H(S_n) = 2$.
\end{proposition}
\begin{proof}
First, we will prove that
\begin{equation}\label{formula:dimssn}
dim_R(S_n)\leq 2, dim_P(\overline{S_n})\leq 3.
\end{equation}
for every $n\geq 2$. We will show this by induction on $n$. In fact, we will prove a slightly stronger fact: for any $n\geq 2$ there exists a clique cover $\mathcal{C} = \{C_1,...,C_m\}$ such that (i) every non-contact vertex is covered by two cliques from $\mathcal{C}$; (ii) every contact vertex is covered by one clique from $\mathcal{C}$; (iii) the cliques from $\mathcal{C}$ can be colored using 3 colors in such a way that intersecting cliques receive different colors and cliques containing different contact vertices also receive different colors; (iv) every two distinct vertices are separated by some clique from $\mathcal{C}$.
For $n=2$ the clique cover $\mathcal{C}$ consisting of the 3 cliques that contain the contact vertices obviously satisfies conditions (i)-(iv) (see Fig. \ref{fig:serpGasket}). Now suppose that $\mathcal{C}$, $\mathcal{C'}$ and $\mathcal{C''}$ are clique covers of $S_n$, $S'_n$ and $S''_n$ with properties (i)-(iv). Assume that $x_i\in C_i$, $x'_i\in C'_i$, $x''_i\in C''_i$, and that $C_i,C'_i,C''_i$ have color $i$, $i=1,2,3$. Then it is straightforward to check that $\mathcal{C}\cup \mathcal{C'}\cup \mathcal{C''}$, with all cliques keeping their colors, is a clique cover of $S_{n+1}$ that satisfies (i)-(iv). So, (\ref{formula:dimssn}) is proved.
Finally, note that for every $n\geq 2$ the graph $S_n$ contains the graphs $K_{1,2}$ and $K_4 - e$ as induced subgraphs. Since $dim_R(K_{1,2}) = 2$ and $dim_P(\overline{K_4 - e}) = 3$ (the latter is easy to see using the clique cover formulation), we have equalities in (\ref{formula:dimssn}).
\end{proof}
\begin{figure}
\caption{Sierpinski gasket graphs.}
\label{fig:serpGasket}
\end{figure}
It is important to emphasize that the Lebesgue and Hausdorff dimensions of Sierpinski gasket graphs agree with the corresponding dimensions of the Sierpinski gasket fractal. By contrast, note that rectangular grid graphs (cartesian products of 2 paths) are not fractals (by K{\"o}nig's theorem, since they are bipartite), just as rectangles are not fractals in $\mathbb{R}^2$.
\begin{comment}
So, separation of graphs into fractals and non-fractals in some sense generalizes separation onto class I and class II. For triangle-free graphs the difference between Lebesgue and Hausdorff dimensions does not exceed 1. The natural question to ask is how different can be Lebesgue dimension and Hausdorff dimension of general graphs. We will try to answer this question in the rest of this section.
Note first, that equivalence number and complement Prague dimension??? are close to each other.
\begin{observation}\label{obs:eqvshaus}
For any $n$-vertex graph $G$ $eq(G) \leq dim_P(\overline{G}) \leq eq(G)+1$
\end{observation}
\begin{proof}
The inequalities follow from second characterization of Theorem \ref{thm:pdimcharact}. Left inequality directly follows from the definition. If $\mathcal{M} = \{M_1,...,M_r\}$ is an equivalent cover of $G$, which is not separating, then $M\cup \{O_n\}$ is a separating equivalent cover of $G$. It proves the second inequality.
\end{proof}
In light of Observation \ref{obs:eqvshaus} we will further consider relations between weak krausz dimension and equivalence number.
Let $dim_R(G) = k$, which, by Theorem \ref{thm:covervsinter}, implies that $G$ is a line graph of a $k$-uniform hypergraph $\mathcal{H}$. Obviously, $eq(G) \leq \chi_s(\mathcal{H})$, where $\chi_s(\mathcal{H})$ is a strong chromatic number of $\mathcal{H}$. This upper bound can be an overestimation. However, we will show that $eq(G)$ can be bounded above by a function of $\chi_s(\mathcal{H})$.
To do it, we will generalize a construction proposed in \cite{esperet2010cover}. Obviously, every clique in $G$ corresponds to a family of pairwise intersecting edges of $\mathcal{H}$ ({\it intersecting family} \cite{berge1984hypergraphs}). Therefore, each spanning equivalence subgraph of $G$ corresponds to a partition of $\mathcal{E}(H)$ into intersecting families ({\it intersecting partition}). Equivalent cover of $G$ transforms into a family $\mathcal{M}$ of intersecting partitions of $\mathcal{H}$ such that every pair of intersecting hyperedges belongs to some intersecting family from one of the partitions from $\mathcal{M}$. The family $\mathcal{M}$ will be called an {\it intersecting cover} of $\mathcal{H}$.
We will consider special type of intersecting covers, where each intersecting family $S=\{E_1,...,E_p\}$ is required to have {\it Helly property}, i.e. $\bigcap_{j=1}^p E_j \ne \emptyset$ \cite{berge1984hypergraphs}. Formally, let $v\in \mathcal{V}\mathcal{H}$. The family $S(v)=\{E_1,...,E_p\}$ is a {\it Helly family}, if either it is empty or $v\in \bigcap_{j=1}^p E_j$.
{\it Helly partition} is a collection $\mathcal{M}^i$ of disjoint Helly families $\mathcal{M}_i = \{S^i_v : v\in \mathcal{V}\mathcal{H}\}$ such that $\bigcup_{v\in \hypvert{H}} S^i(v) = \hypedge{H}$. {\it Helly cover} of size $r$ is a set $\mathcal{M} = \{\mathcal{M}_1,...,\mathcal{M}_r\}$ of Helly partitions such that for every intersecting pair of edges $E_1,E_2\in \hypedge{H}$ there exists $i\in [r]$ and $v\in E_1\cap E_2$ such that $E_1,E_2\in S^i(v)$. Finally, $\sigma(\mathcal{H})$ is the minimal size of Helly cover of $\mathcal{H}$. Obviously, every Helly cover is an intersecting cover, therefore
\begin{equation}\label{eqvssigma}
eq(G) \leq \sigma(\mathcal{H})
\end{equation}
In \cite{esperet2010cover}, the notion of {\it orientation cover} of a simple graph $H$ was introduced. Let $\mathcal{O}=\{\overrightarrow{H}^1,...,\overrightarrow{H}^r\}$ be a set of orientations of $H$. This set is an orientation cover, if for every pair of edges $uv,uw\in E(H)$ there exists an orientation $\overrightarrow{H}^i$ such that $\overrightarrow{uv},\overrightarrow{uw} \in E(\overrightarrow{H}^i)$.
It is easy to see that for simple graphs Helly covers and orientation covers are equivalent: every orientation $\overrightarrow{H}^i$ can be transformed into Helly partition $M^i$ by setting $C^i_v = \{vw : \overrightarrow{vw}\in E(\overrightarrow{H}_i)\}$, and vise versa. It was proved in \cite{esperet2010cover}, that for a simple graph $H$
\begin{equation}\label{sigmavschisimpgraph}
\sigma(H) \leq 2 \lceil\log_2\log_2 \chi(H)\rceil + 2
\end{equation}
\end{comment}
\begin{comment}
Two vertices $u,v\in V(G)$ are called {\it twins}, if $N[u] = N[v]$, where $N[u]$ is a closed neighborhood of $u$. For such graphs $dim_P(\overline{G}) = eq(G)$.
Graphs without twins with $dim_R \leq 2$ are line graphs. For such graphs, equivalence number was studied in \cite{esperet2010cover}. The main result obtained in \cite{esperet2010cover} is the following theorem.
\begin{theorem}\cite{esperet2010cover}\label{thm:eqline}
Let $G$ be a line graph of a simple graph $G'$. Then
\begin{equation}
\frac{1}{3}\log_2\log_2 \chi(G') \leq eq(G) \leq 2\log_2\log_2\chi(G') + 2
\end{equation}
\end{theorem}
So, for any graph without twins with Lebesgue dimension 1 its Hausdorff dimension can be arbitrarily large, although it is bounded above by
\end{comment}
\section{Acknowledgements} The authors were partially supported by the NIH grant 1R01EB025022-01 "Viral Evolution and Spread of Infectious Diseases in Complex Networks: Big Data Analysis and Modeling".
\end{document}
\begin{document}
\begin{titlepage}
\vskip 2cm
\begin{flushright}
Preprint CNLP-1994-02
\end{flushright}
\vskip 2cm
\begin{center}
{\bf
SOLITON EQUATIONS IN 2+1 DIMENSIONS
AND DIFFERENTIAL GEOMETRY OF CURVES/SURFACES}
\footnote{Preprint
CNLP-1994-02. Alma-Ata. 1994 }
\end{center}
\vskip 2cm
\begin{center}
Ratbay MYRZAKULOV
\footnote{E-mail: [email protected]}
\end{center}
\vskip 1cm
\begin{center}
Centre for Nonlinear Problems, PO Box 30, 480035, Alma-Ata-35, Kazakhstan
\end{center}
\begin{abstract}
Some aspects of the relation between differential geometry of curves and
surfaces and (2+1)-dimensional soliton equations are discussed.
For the (2+1)-dimensional case,
the self-coordination of the geometrical formalism with Hirota's bilinear
method is established.
A connection
between supersymmetry, geometry and soliton equations is also considered.
\end{abstract}
\end{titlepage}
\setcounter{page}{1}
\tableofcontents
\section{Introduction}
Consider a curve in 3-dimensional space. Following [1], the equations
of such curves can be written in the form
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x}= C
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(1a)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{t}= G
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right) \eqno(1b)
$$
where
$$
C =
\left ( \begin{array}{ccc}
0 & k & 0 \\
-\beta k & 0 & \tau \\
0 & -\tau & 0
\end{array} \right) ,\quad
G =
\left ( \begin{array}{ccc}
0 & \omega_{3} & -\omega_{2} \\
-\beta\omega_{3} & 0 & \omega_{1} \\
\beta\omega_{2} & -\omega_{1} & 0
\end{array} \right) \eqno(2)
$$
Here
$$
{\bf e}_{1}^{2}=\beta = \pm 1, {\bf e}_{2}^{2}={\bf e}^{2}_{3}=1 \eqno(3)
$$
Note that equation (1a) is the Serret-Frenet equation (SFE). The compatibility condition of (1a) and (1b) gives
$$
C_t - G_x + [C, G] = 0 \eqno(4)
$$
or
$$
k_{t} - \omega_{3x} - \tau \omega_{2} = 0 \eqno (5a)
$$
$$
\omega_{2x} - \tau \omega_{3} + k \omega_{1} = 0 \eqno (5b)
$$
$$
\tau_{t} - \omega_{1x} + \beta k \omega_{2} = 0. \eqno (5c)
$$
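The component equations (5a)-(5c) follow from (4) by a routine matrix computation. As an illustration (not part of the original text, assuming the sympy library is available), the following short symbolic check reproduces them from the matrices (2):

```python
import sympy as sp

# symbols and functions of (x, t); beta = +1 or -1 as in (3)
x, t, beta = sp.symbols('x t beta')
k, tau = sp.Function('k')(x, t), sp.Function('tau')(x, t)
w1, w2, w3 = [sp.Function(f'w{j}')(x, t) for j in (1, 2, 3)]

# the matrices C and G of equation (2)
C = sp.Matrix([[0, k, 0], [-beta*k, 0, tau], [0, -tau, 0]])
G = sp.Matrix([[0, w3, -w2], [-beta*w3, 0, w1], [beta*w2, -w1, 0]])

# zero-curvature condition (4): C_t - G_x + [C, G] = 0
Z = C.diff(t) - G.diff(x) + C*G - G*C

# the independent entries reproduce (5a), (5b), (5c)
print(sp.expand(Z[0, 1]))   # k_t - w3_x - tau*w2
print(sp.expand(Z[0, 2]))   # w2_x - tau*w3 + k*w1
print(sp.expand(Z[1, 2]))   # tau_t - w1_x + beta*k*w2
```

The remaining entries of $Z$ repeat these three equations up to factors of $\beta$, reflecting the (skew-)symmetry of $C$ and $G$.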
We now consider the isotropic Landau-Lifshitz equation (LLE)
$$
{\bf S}_{t} = {\bf S}\wedge {\bf S}_{xx}. \eqno(6)
$$
If
$$
{\bf e}_{1} \equiv {\bf S} \eqno(7)
$$
then
$$
q=\frac{k}{2}e^{i\partial^{-1}_{x} \tau} \eqno(8)
$$
satisfies the NLSE
$$
iq_{t}+q_{xx}+2\beta \mid q\mid^{2}q =0. \eqno(9)
$$
This equivalence between the LLE (6) and the NLSE (9) we call the Lakshmanan
equivalence or L-equivalence [1]. These results were obtained in [2] for the case
$\beta =+1$ and in [1] for the case $\beta =-1$.
Note that these equations are also gauge equivalent (G-equivalent) to each other [6].
In this paper, starting from Lakshmanan's idea,
we will discuss some aspects of the relation between
the differential geometry of curves and surfaces and (2+1)-dimensional soliton
equations. Earlier, in [1], we proposed some approaches to this problem,
namely the A-, B-, C-, and D-approaches. Below we will work with
the B-, C-, and D-approaches. We will discuss the relation between geometry and Hirota's bilinear
method. Also, we will consider the connection between
supersymmetry, geometry and soliton equations.
\section{Curves and Solitons in 2+1}
In this section, we work with the D-approach. Using it, we will
establish a connection between curves and (2+1)-dimensional soliton
equations.
\subsection{Some 2-dimensional extensions of the SFE}
According to the D-approach, to establish the connection between (2+1)-dimensional soliton equations
and the differential geometry of curves, some
two (spatial) dimensional generalizations of the SFE (1a) were constructed in [1]. Here we present
some of them.
\subsubsection{The M-LIX equation}
This equation has the form [1]
$$
\alpha {\bf e}_{1y}=f_{1}{\bf e}_{1x}+
\sum_{j=1}^{n}b_{j}{\bf e}_{1}\wedge \frac{\partial^{j}}{\partial x^{j}}{\bf e}_{1} +
c_{1}{\bf e}_{2}+d_{1}{\bf e}_{3} \eqno(10a)
$$
$$
\alpha {\bf e}_{2y} = Exercise \quad N1 \eqno(10b)
$$
$$
\alpha {\bf e}_{3y} = Exercise \quad N1 \eqno(10c)
$$
We leave finding the explicit forms of the right-hand sides of (10b,c)
as exercises (see Section 7).
\subsubsection{The M-LX equation}
The M-LX equation reads as [1]
$$
\alpha\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{y}= A
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x} + B
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(11a)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{t}=
\sum_{j=0}^{n}C_{j}\frac{\partial^{j}}{\partial x^{j}}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(11b)
$$
where $A, B, C_{j}$ are some matrices.
\subsubsection{The M-LXI equation}
This extension has the form [1]
$$
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x}= C
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right), \quad \left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{y}= D
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(12a)
$$
where
$$
C =
\left ( \begin{array}{ccc}
0 & k & 0 \\
-\beta k & 0 & \tau \\
0 & -\tau & 0
\end{array} \right) ,
\quad
D=
\left ( \begin{array}{ccc}
0 & m_{3} & -m_{2} \\
-\beta m_{3} & 0 & m_{1} \\
\beta m_{2} & -m_{1} & 0
\end{array} \right). \eqno(12b)
$$
\subsubsection{The modified M-LXI equation}
We usually write the modified M-LXI (mM-LXI) equation in the form [1]
$$
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x}= C_{m}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right), \quad \left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{y}= D_{m}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(13a)
$$
where
$$
C_{m} =
\left ( \begin{array}{ccc}
0 & k & -\sigma \\
-\beta k & 0 & \tau \\
\beta\sigma & -\tau & 0
\end{array} \right) ,
\quad
D_{m}= D=
\left ( \begin{array}{ccc}
0 & m_{3} & -m_{2} \\
-\beta m_{3} & 0 & m_{1} \\
\beta m_{2} & -m_{1} & 0
\end{array} \right) \eqno(13b)
$$
and so on [1]. In this paper, we work with the M-LIX, M-LXI and mM-LXI
equations. Note that the M-LXI equation is the particular case of the
mM-LXI equation with $\sigma=0$.
\subsection{The mM-LXI equation and the mM-LXII equation}
Let us return to the mM-LXI equation (13), which we write in the form
$$
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x}= C_{m}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right) \eqno(14a)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{y}= D_{m}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(14b)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{t}= G
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right) \eqno(14c)
$$
where
$$
G =
\left ( \begin{array}{ccc}
0 & \omega_{3} & -\omega_{2} \\
-\beta\omega_{3} & 0 & \omega_{1} \\
\beta\omega_{2} & -\omega_{1} & 0
\end{array} \right) \eqno(15)
$$
From (14a,b), we obtain the following mM-LXII equation [1]
$$
C_y - D_x + [C, D] = 0 \eqno (16a)
$$
or
$$
k_{y} - m_{3x} + \sigma m_{1} - \tau m_{2} = 0 \eqno(16b)
$$
$$
\sigma_{y} - m_{2x} + \tau m_{3} - km_{1} =0 \eqno(16c)
$$
$$
\tau_{y} - m_{1x} + \beta (km_{2} - \sigma m_{3}) =0. \eqno(16d)
$$
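As in the one-dimensional case, (16b)-(16d) are the independent entries of the matrix compatibility condition (16a). A short symbolic check (an illustration, not part of the original text, assuming sympy is available) confirms this for the matrices (13b):

```python
import sympy as sp

# functions of (x, y); beta = +1 or -1
x, y, beta = sp.symbols('x y beta')
k, sig, tau = [sp.Function(n)(x, y) for n in ('k', 'sigma', 'tau')]
m1, m2, m3 = [sp.Function(f'm{j}')(x, y) for j in (1, 2, 3)]

# the matrices C_m and D_m of equation (13b)
Cm = sp.Matrix([[0, k, -sig], [-beta*k, 0, tau], [beta*sig, -tau, 0]])
Dm = sp.Matrix([[0, m3, -m2], [-beta*m3, 0, m1], [beta*m2, -m1, 0]])

# compatibility condition (16a): C_y - D_x + [C, D] = 0
Z = Cm.diff(y) - Dm.diff(x) + Cm*Dm - Dm*Cm

print(sp.expand(Z[0, 1]))   # k_y - m3_x + sigma*m1 - tau*m2, i.e. (16b)
```

Setting $\sigma = 0$ in `Cm` recovers the M-LXII reduction mentioned below.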
For $\sigma=0$ the mM-LXII equation reduces to the M-LXII equation [1].
We can rewrite the mM-LXII equation (16) in the form
$$
k_{y} - m_{3x} = \frac{1}{\beta}{\bf e}_{3}\cdot ({\bf e}_{3x}\wedge {\bf e}_{3y}) \eqno(17a)
$$
$$
\sigma_{y} - m_{2x} = \frac{1}{\beta}{\bf e}_{2}\cdot ({\bf e}_{2x}\wedge {\bf e}_{2y}) \eqno(17b)
$$
$$
\tau_{y} - m_{1x} = {\bf e}_{1}\cdot ({\bf e}_{1x}\wedge {\bf e}_{1y}) \eqno(17c)
$$
Also from (14) we get
$$
k_{t} - \omega_{3x} + \sigma \omega_{1} - \tau \omega_{2} = 0 \eqno (18a)
$$
$$
\sigma_{t} - \omega_{2x} + \tau \omega_{3} - k \omega_{1} = 0 \eqno (18b)
$$
$$
\tau_{t} - \omega_{1x} + \beta (k \omega_{2} - \sigma \omega_{3}) = 0 \eqno (18c)
$$
and
$$
m_{1t} - \omega_{1y} + \beta (m_{3} \omega_{2} - m_{2} \omega_{3}) = 0 \eqno (19a)
$$
$$
m_{2t} - \omega_{2y} + m_{1} \omega_{3} - m_{3} \omega_{1} = 0 \eqno (19b)
$$
$$
m_{3t} - \omega_{3y} + m_{2} \omega_{1} - m_{1} \omega_{2} = 0. \eqno (19c)
$$
\subsection{On the topological invariants}
From the mM-LXII equation (16) it follows that
$$
[C, D]_{t} + C_{ty} - D_{tx} = 0 \eqno (20a)
$$
or
$$
(\sigma m_{1} - \tau m_{2})_{t} + k_{ty} - m_{3tx} = 0 \eqno(20b)
$$
$$
(\tau m_{3} - km_{1})_{t} + \sigma_{ty} - m_{2tx} = 0 \eqno(20c)
$$
$$
\beta (km_{2} - \sigma m_{3})_{t} + \tau_{ty} - m_{1tx} = 0. \eqno(20d)
$$
Hence we get
$$
(\sigma m_1 -\tau m_2)_t -(\sigma \omega_1 -\tau \omega_2)_y +
(m_2 \omega_1 -m_1 \omega_2)_x =0, \eqno (21a)
$$
$$
(\tau m_3 -k m_1)_t -(\tau \omega_3 -k \omega_1)_y +
(m_1 \omega_3 -m_3 \omega_1)_x =0, \eqno (21b)
$$
$$
(k m_2 -\sigma m_3)_t -(k \omega_2 -\sigma \omega_3)_y +
(m_3 \omega_2 -m_2 \omega_3)_x =0. \eqno (21c)
$$
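The densities in (21) obey continuity equations as a consequence of (16), (18) and (19). The following symbolic check (an illustration, not part of the original text, assuming sympy is available; the signs follow from (16), (18) and (19)) verifies this for the density $\sigma m_1 - \tau m_2$:

```python
import sympy as sp

x, y, t, beta = sp.symbols('x y t beta')
F = lambda n: sp.Function(n)(x, y, t)
k, sig, tau = F('k'), F('sigma'), F('tau')
m1, m2, m3 = F('m1'), F('m2'), F('m3')
w1, w2, w3 = F('w1'), F('w2'), F('w3')

# (18b), (18c), (19a), (19b) solved for the t-derivatives,
# and (16c), (16d) solved for the y-derivatives of sigma, tau
rules = {
    sig.diff(t): w2.diff(x) - tau*w3 + k*w1,
    tau.diff(t): w1.diff(x) - beta*(k*w2 - sig*w3),
    m1.diff(t): w1.diff(y) - beta*(m3*w2 - m2*w3),
    m2.diff(t): w2.diff(y) - m1*w3 + m3*w1,
    sig.diff(y): m2.diff(x) - tau*m3 + k*m1,
    tau.diff(y): m1.diff(x) - beta*(k*m2 - sig*m3),
}

# continuity equation for the density sigma*m1 - tau*m2
expr = ((sig*m1 - tau*m2).diff(t)
        - (sig*w1 - tau*w2).diff(y)
        + (m2*w1 - m1*w2).diff(x))
print(sp.expand(expr.subs(rules)))   # 0
```

The other two densities can be checked in exactly the same way; the double integrals of these densities are then conserved for fields decaying at infinity.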
So we have proved the following
\\
{\bf Theorem}: The (2+1)-dimensional nonlinear evolution equations (NLEE),
or dynamical curves, which
are given by the mM-LXI equation have the following integrals of motion
$$
K_{1} = -\beta\int \int (k m_2 -\sigma m_{3})dxdy, \quad
K_{2} = \beta\int \int (k m_1 - \tau m_{3})dxdy, \quad
K_{3} = \beta\int \int (\tau m_2 - \sigma m_{1})dxdy \eqno(22a)
$$
or
$$
K_{1} = \int \int {\bf e}_1\cdot({\bf e}_{1x} \wedge {\bf e}_{1y})dxdy \eqno(22b)
$$
$$
K_{2} = \int \int {\bf e}_2\cdot({\bf e}_{2x} \wedge {\bf e}_{2y})dxdy \eqno(22c)
$$
$$
K_{3} = \int \int {\bf e}_3\cdot({\bf e}_{3x} \wedge {\bf e}_{3y})dxdy. \eqno(22d)
$$
So we have the following three topological invariants
$$
Q_{1} =\frac{1}{4\pi} \int \int {\bf e}_{1}\cdot ({\bf e}_{1x}\wedge {\bf e}_{1y})dxdy \eqno(23a)
$$
$$
Q_{2} =\frac{1}{4\pi} \int \int {\bf e}_{2}\cdot ({\bf e}_{2x}\wedge {\bf e}_{2y})dxdy \eqno(23b)
$$
$$
Q_{3} =\frac{1}{4\pi} \int \int {\bf e}_{3}\cdot ({\bf e}_{3x}\wedge {\bf e}_{3y})dxdy \eqno(23c)
$$
We note that possibly not all of these topological invariants are independent.
\subsection{The M-LXI equation and Soliton equations in 2+1}
In this section we will establish the connection between the
M-LXI equation (12) and soliton equations in 2+1 dimensions.
Let us assume
$$
{\bf e}_{1} \equiv {\bf S} \eqno(24)
$$
Moreover we introduce two complex functions
$q, p$ according to the following expressions
$$
q = a_{1}e^{ib_{1}}, \quad p=a_{2}e^{ib_{2}} \eqno(25)
$$
where $a_{j}, b_{j}$ are real functions. Now we are ready to consider some
examples.
\subsubsection{The Ishimori equation}
The Ishimori equation (IE) reads as [7]
$$
{\bf S}_{t} = {\bf S}\wedge ({\bf S}_{xx} +\alpha^2 {\bf S}_{yy})+
u_x{\bf S}_{y}+u_y{\bf S}_{x} \eqno (26a)
$$
$$
u_{xx}-\alpha^2 u_{yy} = -2\alpha^2 {\bf S}\cdot ({\bf S}_{x}\wedge
{\bf S}_{y}). \eqno (26b)
$$
In this case we have
$$
m_{1}=\partial_{x}^{-1}[\tau_{y}-\frac{\beta}{2\alpha^2}M_2^{Ish}u] \eqno(27a)
$$
$$
m_{2}= -\frac{1}{2\alpha^2 k}M_2^{Ish}u \eqno (27b)
$$
$$
m_{3} =\partial_{x}^{-1}[k_y +\frac{\tau}{2\alpha^2 k}M_2^{Ish}u] \eqno(27c)
$$
and
$$
\omega_{1} = \frac{1}{k}[-\omega_{2x}+\tau\omega_{3}]
\eqno (28a)
$$
$$
\omega_{2}= -k_{x}-
\alpha^{2}(m_{3y}+m_{2}m_{1})+im_{2}u_{x}
\eqno (28b)
$$
$$
\omega_{3}= -k \tau+\alpha^{2}(m_{2y}-m_{3}m_{1})
+ik u_{y}+im_{3}u_{x}.
\eqno (28c)
$$
where
$$
M_2^{Ish}=M_2|_{a=b=-\frac{1}{2}}.
$$
Functions $q, p$ are given by (25) with
$$
a_{1}^2 =a_{1}^{\prime^{2}}=\frac{1}{4}k^2+
\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-\frac{1}{2}\alpha_{R}km_3-
\frac{1}{2}\alpha_{I}km_2
\eqno(29a)
$$
$$
b_1 =\partial_{x}^{-1}\{-\frac{\gamma_1}{2ia_1^{\prime^{2}}}-(\bar A-A+D-\bar D)\} \eqno(29b)
$$
$$
a_2^2=a_{2}^{\prime^{2}}=\frac{1}{4}k^2+
\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)+\frac{1}{2}\alpha_{R}km_3
-\frac{1}{2}\alpha_{I}km_2
\eqno(29c)
$$
$$
b_{2} =\partial_{x}^{-1}\{-\frac{\gamma_2}{2ia_2^{\prime^{2}}}-(A-\bar A+\bar D-D)\} \eqno(29d)
$$
where
$$
\gamma_1=i\{\frac{1}{2}k^{2}\tau+
\frac{|\alpha|^2}{2}(m_3km_1+m_2k_y)-
$$
$$
\frac{1}{2}\alpha_{R}(k^{2}m_1+m_3k\tau+
m_2k_x)
+\frac{1}{2}\alpha_{I}[k(2k_y-m_{3x})-
k_x m_3]\}. \eqno(30a)
$$
$$
\gamma_2=-i\{\frac{1}{2}k^{2}\tau+
\frac{|\alpha|^2}{2}(m_3km_1+m_2 k_y)+
$$
$$
\frac{1}{2}\alpha_{R}(k^{2}m_1+m_3k\tau+
m_2k_x )
+\frac{1}{2}\alpha_{I}[k(2k_y-m_{3x})-
k_x m_3]\}. \eqno(30b)
$$
Here $\alpha=\alpha_{R}+i\alpha_{I}$. In this case, $q,p$ satisfy the following
DS equation
$$
iq_t + q_{xx}+\alpha^{2}q_{yy} + vq = 0 \eqno (31a)
$$
$$
-ip_t + p_{xx}+\alpha^{2}p_{yy} + vp = 0 \eqno (31b)
$$
$$
v_{xx}-\alpha^{2}v_{yy} + 2[(p q)_{xx}+\alpha^{2}(p q)_{yy}] = 0.
\eqno (31c)
$$
So we have proved that the IE (26) and the DS equation (31) are L-equivalent to each other.
It is well known that these equations are G-equivalent to each other [5].
Note that the IE contains two reductions:
the Ishimori I equation as $\alpha_{R}=1, \alpha_{I}=0$ and
the Ishimori II equation as $\alpha_{R}=0, \alpha_{I}=1$.
We obtain the corresponding versions of the DS equation (31) for
the corresponding values of the parameter $\alpha$ [1].
\subsubsection{The Myrzakulov IX equation}
Now we find the connection between the Myrzakulov IX (M-IX) equation
and the curves (the M-LXI equation). The M-IX equation reads as
$$
{\bf S}_t = {\bf S} \wedge M_1{\bf S}+A_2{\bf S}_x+A_1{\bf S}_y \eqno(32a)
$$
$$
M_2u=2\alpha^{2} {\bf S}({\bf S}_x \wedge {\bf S}_y) \eqno(32b)
$$
where $\alpha, b, a$ are constants and
$$
M_1= \alpha ^2\frac{\partial ^2}{\partial y^2}+4\alpha (b-a)\frac{\partial^2}
{\partial x \partial y}+4(a^2-2ab-b)\frac{\partial^2}{\partial x^2},
$$
$$
M_2=\alpha^2\frac{\partial^2}{\partial y^2} -2\alpha(2a+1)\frac{\partial^2}
{\partial x \partial y}+4a(a+1)\frac{\partial^2}{\partial x^2},
$$
$$
A_1=i\{\alpha (2b+1)u_y - 2(2ab+a+b)u_{x}\},
$$
$$
A_2=i\{4\alpha^{-1}(2a^2b+a^2+2ab+b)u_x - 2(2ab+a+b)u_{y}\}.
$$
The M-IX equation was introduced in [1] and is integrable. It admits several
integrable reductions:
\\
1) the Ishimori equation as $a=b=-\frac{1}{2}$
\\
2) the M-VIII equation as $a=b=-1$
\\
and so on [1].
In this case we have
$$
m_{1}=\partial_{x}^{-1}[\tau_{y}-\frac{\beta}{2\alpha^2}M_2 u] \eqno(33a)
$$
$$
m_{2}=-\frac{1}{2\alpha^2 k}M_2 u \eqno (33b)
$$
$$
m_{3}=\partial_{x}^{-1}[k_y +\frac{\tau}{2\alpha^2 k}M_2 u] \eqno(33c)
$$
and
$$
\omega_{1} = \frac{1}{k}[-\omega_{2x}+\tau\omega_{3}],
\eqno (34a)
$$
$$
\omega_{2}= -4(a^{2}-2ab-b)k_{x}-
4\alpha (b-a)k_{y} -\alpha^{2}(m_{3y}+m_{2}m_{1})+m_{2}A_{1}
\eqno (34b)
$$
$$
\omega_{3}= -4(a^{2}-2ab-b)k \tau-
4\alpha (b-a)k m_{1}+\alpha^{2}(m_{2y}-m_{3}m_{1})
+k A_{2}+m_{3}A_{1}
\eqno (34c)
$$
Functions $q, p$ are given by (25) with
$$
a_{1}^2 =\frac{|a|^2}{|b|^2}a_{1}^{\prime^{2}}=\frac{|a|^2}{|b|^2}\{(l+1)^2k^2
+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-(l+1)\alpha_{R}km_3-
(l+1)\alpha_{I}km_2\}
\eqno(35a)
$$
$$
b_{1} =\partial_{x}^{-1}\{-\frac{\gamma_1}{2ia_1^{\prime^{2}}}-(\bar A-A+D-\bar D)\} \eqno(35b)
$$
$$
a_{2}^2 =\frac{|b|^2}{|a|^2}a_{2}^{\prime^{2}}=\frac{|b|^2}{|a|^2}\{l^2k^2
+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-l\alpha_{R}km_3+
l\alpha_{I}km_2\}
\eqno(35c)
$$
$$
b_{2} =\partial_{x}^{-1}\{-\frac{\gamma_2}{2ia_2^{\prime^{2}}}-(A-\bar A+\bar D-D)\} \eqno(35d)
$$
where
$$
\gamma_1=i\{2(l+1)^2k^{2}\tau+\frac{|\alpha|^2}{2}(m_3km_1+m_2k_y)-
$$
$$
(l+1)\alpha_{R}[k^{2}m_1+m_3k\tau+
m_2k_x]+(l+1)\alpha_{I}[k(2k_y-m_{3x})-
k_x m_3]\} \eqno(36a)
$$
$$
\gamma_2=-i\{2l^2k^{2}\tau+
\frac{|\alpha|^2}{2}(m_3km_1+m_2k_y)-
$$
$$
l\alpha_{R}(k^{2}m_1+m_3k\tau+
m_2k_x)-l\alpha_{I}[k(2k_y-m_{3x})-
k_x m_3]\}. \eqno(36b)
$$
Here $\alpha=\alpha_{R}+i\alpha_{I}$. In this case, $q,p$
satisfy the following Zakharov equation [4]
$$
iq_t+M_{1}q+vq=0 \eqno(37a)
$$
$$
ip_t-M_{1}p-vp=0 \eqno(37b)
$$
$$
M_{2}v=-2M_{1}(pq) \eqno(37c)
$$
It is well known that the M-IX equation admits several reductions:
1) the M-IXA equation as $\alpha_{R}=1, \alpha_{I}=0$;
2) the M-IXB equation as $\alpha_{R}=0, \alpha_{I}=1$;
3) the M-VIII equation as $a=b=-1$;
4) the IE as $a=b=-\frac{1}{2}$;
and so on. We obtain the corresponding versions of the ZE (37) for
the corresponding values of the parameter $\alpha$.
\subsection{The modified M-LXI equation and Soliton equations in 2+1}
In this section we will establish the connection between the modified
M-LXI equation (14) and soliton equations in 2+1 dimensions. As above we assume
$$
{\bf e}_{1} \equiv {\bf S} \eqno(38)
$$
and
$$
q = a_{1}e^{ib_{1}}, \quad p=a_{2}e^{ib_{2}} \eqno(39)
$$
where $a_{j}, b_{j}$ are, as above, real functions. Consider some examples.
\subsubsection{The Ishimori equation}
Consider the IE (26).
For this equation we obtain
$$
m_{1}=\partial_{x}^{-1}[\tau_{y}-\frac{\beta}{2\alpha^2}M_2^{Ish}u] \eqno(40a)
$$
$$
m_{2}=\frac{\sigma}{k}m_3 -\frac{1}{2\alpha^2 k}M_2^{Ish}u \eqno (40b)
$$
$$
m_{3x} +\frac{\tau\sigma}{k}m_3=k_y +\sigma\partial_x^{-1}[\tau_y -
\frac{\beta}{2\alpha^2}M_2^{Ish}u]+\frac{\tau}{2\alpha^2 k}M_2^{Ish}u \eqno(40c)
$$
and
$$
\omega_{1} = \frac{1}{k}[\sigma_{t}-\omega_{2x}+\tau\omega_{3}]
\eqno (41a)
$$
$$
\omega_{2}= -(k_{x}+\sigma \tau)-
\alpha^{2}(m_{3y}+m_{2}m_{1})+i\sigma u_{y}+im_{2}u_{x}
\eqno (41b)
$$
$$
\omega_{3}= (\sigma_{x}-k \tau)+
\alpha^{2}(m_{2y}-m_{3}m_{1})
+ik u_{y}+im_{3}u_{x}.
\eqno (41c)
$$
Functions $q, p$ are given by (39) with
$$
a_{1}^2 =a_{1}^{\prime^{2}}=\frac{1}{4}(k^2+
\sigma^2)+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-\frac{1}{2}\alpha_{R}(km_3+\sigma m_2)-
\frac{1}{2}\alpha_{I}(km_2+\sigma m_3)
\eqno(42a)
$$
$$
b_{1} =\partial_{x}^{-1}\{-\frac{\gamma_1}{2ia_1^{\prime^{2}}}-(\bar A-A+D-\bar D)\} \eqno(42b)
$$
$$
a_2^2=a_{2}^{\prime^{2}}=\frac{1}{4}(k^2+
\sigma^2)+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)+\frac{1}{2}\alpha_{R}(km_3+\sigma m_2)
-\frac{1}{2}\alpha_{I}(km_2+\sigma m_3)
\eqno(42c)
$$
$$
b_{2} =\partial_{x}^{-1}\{-\frac{\gamma_2}{2ia_2^{\prime^{2}}}-(A-\bar A+\bar D-D)\} \eqno(42d)
$$
where
$$
\gamma_1=i\{\frac{1}{2}[k(k\tau-\sigma_x)+\sigma(\sigma\tau+k_x)]+
\frac{|\alpha|^2}{2}[m_3(km_1-\sigma_y)+m_2(\sigma m_1 +k_y)]-
$$
$$
\frac{1}{2}\alpha_{R}[k(km_1-\sigma_y)+\sigma(\sigma m_1+k_y)+m_3(k\tau-\sigma_x)+
m_2(\sigma\tau+k_x)]+
$$
$$
\frac{1}{2}\alpha_{I}[k(2k_y-m_{3x})+\sigma(2\sigma_y-m_{2x})-
k_x m_3-\sigma_x m_2]\} \eqno(43a)
$$
$$
\gamma_2=-i\{\frac{1}{2}[k(k\tau-\sigma_x)+\sigma(\sigma\tau+k_x)]+
\frac{|\alpha|^2}{2}[m_3(km_1-\sigma_y)+m_2(\sigma m_1 +k_y)]+
$$
$$
\frac{1}{2}\alpha_{R}[k(km_1-\sigma_y)+\sigma(\sigma m_1+k_y)+m_3(k\tau-\sigma_x)+
m_2(\sigma\tau+k_x)]+
$$
$$
\frac{1}{2}\alpha_{I}[k(2k_y-m_{3x})+\sigma(2\sigma_y-m_{2x})-
k_x m_3-\sigma_x m_2]\}. \eqno(43b)
$$
Here
$$
\alpha=\alpha_{R}+i\alpha_{I}, \quad A=\frac{i}{4}[u_{y}-
\frac{2a}{\alpha}u_{x}],
\quad D=\frac{i}{4}[\frac{(2a+1)}{\alpha}u_{x}-u_{y}].
$$
In this case, $q,p$ satisfy the
DS equation (31).
We get the Ishimori I and DS I equations as $\alpha_R=1, \alpha_I=0$,
and the Ishimori II and DS II equations as
$\alpha_R=0, \alpha_I=1$. Details can be found in [1].
\subsubsection{The Myrzakulov IX equation}
Now let us establish the connection between the M-IX equation (32)
and the mM-LXI equation (14). From (32) and (14) we get
$$
m_{1}=\partial_{x}^{-1}[\tau_{y}-\frac{\beta}{2\alpha^2}M_2 u] \eqno(44a)
$$
$$
m_{2}=\frac{\sigma}{k}m_3 -\frac{1}{2\alpha^2 k}M_2 u \eqno (44b)
$$
$$
m_{3x} +\frac{\tau\sigma}{k}m_3=k_y +\sigma\partial_x^{-1}[\tau_y -
\frac{\beta}{2\alpha^2}M_2 u]+\frac{\tau}{2\alpha^2 k}M_2 u \eqno(44c)
$$
and
$$
\omega_{1} = \frac{1}{k}[\sigma_{t}-\omega_{2x}+\tau\omega_{3}],
\eqno (45a)
$$
$$
\omega_{2}= -4(a^{2}-2ab-b)(k_{x}+\sigma \tau)-
4\alpha (b-a)(k_{y}+\sigma m_{1}) -\alpha^{2}(m_{3y}+m_{2}m_{1})+\sigma A_{2}+m_{2}A_{1}
\eqno (45b)
$$
$$
\omega_{3}= 4(a^{2}-2ab-b)(\sigma_{x}-k \tau)+
4\alpha (b-a)(\sigma_{y}-k m_{1}) +\alpha^{2}(m_{2y}-m_{3}m_{1})
+k A_{2}+m_{3}A_{1}
\eqno (45c)
$$
Functions $q, p$ are given by (39) with
$$
a_{1}^2 =\frac{|a|^2}{|b|^2}a_{1}^{\prime^{2}}=\frac{|a|^2}{|b|^2}\{(l+1)^2(k^2+
\sigma^2)+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-(l+1)\alpha_{R}(km_3+\sigma m_2)-
(l+1)\alpha_{I}(km_2+\sigma m_3)\}
\eqno(46a)
$$
$$
b_{1} =\partial_{x}^{-1}\{-\frac{\gamma_1}{2ia_1^{\prime^{2}}}-(\bar A-A+D-\bar D)\} \eqno(46b)
$$
$$
a_{2}^2 =\frac{|b|^2}{|a|^2}a_{2}^{\prime^{2}}=\frac{|b|^2}{|a|^2}\{l^2(k^2+
\sigma^2)+\frac{|\alpha|^2}{4}(m_3^2 +m_2^2)-l\alpha_{R}(km_3+\sigma m_2)+
l\alpha_{I}(km_2+\sigma m_3)\}
\eqno(46c)
$$
$$
b_{2} =\partial_{x}^{-1}\{-\frac{\gamma_2}{2ia_2^{\prime^{2}}}-(A-\bar A+\bar D-D)\} \eqno(46d)
$$
where
$$
\gamma_1=i\{2(l+1)^2[k(k\tau-\sigma_x)+\sigma(\sigma\tau+k_x)]+
\frac{|\alpha|^2}{2}[m_3(km_1-\sigma_y)+m_2(\sigma m_1 +k_y)]-
$$
$$
(l+1)\alpha_{R}[k(km_1-\sigma_y)+\sigma(\sigma m_1+k_y)+m_3(k\tau-\sigma_x)+
m_2(\sigma\tau+k_x)]+
$$
$$
(l+1)\alpha_{I}[k(2k_y-m_{3x})+\sigma(2\sigma_y-m_{2x})-
k_x m_3-\sigma_x m_2]\} \eqno (47a)
$$
$$
\gamma_2=-i\{2l^2[k(k\tau-\sigma_x)+\sigma(\sigma\tau+k_x)]+
\frac{|\alpha|^2}{2}[m_3(km_1-\sigma_y)+m_2(\sigma m_1 +k_y)]-
$$
$$
l\alpha_{R}[k(km_1-\sigma_y)+\sigma(\sigma m_1+k_y)+m_3(k\tau-\sigma_x)+
m_2(\sigma\tau+k_x)]-
$$
$$
l\alpha_{I}[k(2k_y-m_{3x})+\sigma(2\sigma_y-m_{2x})-
k_x m_3-\sigma_x m_2]\}. \eqno (47b)
$$
Direct calculation shows that $q,p$ satisfy the ZE (37).
These results give: 1) as $\alpha_R=1, \alpha_I=0$, the M-IXA equation;
2) as $\alpha_R=0, \alpha_I=1$, the M-IXB equation; 3) as $a=b=-\frac{1}{2},
\alpha_R=1, \alpha_I=0$, the Ishimori I and DS I equations; 4) as $a=b=-\frac{1}{2},
\alpha_R=0, \alpha_I=1$, the Ishimori II and DS II equations; 5) as $a=b=-1$,
the M-VIII and the corresponding Zakharov equations; and so on [1].
\subsection{The M-LIX equation and Soliton equations in 2+1}
Now let us consider the connection between the M-LIX equation and
(2+1)-dimensional soliton equations. Recall that the M-LIX equation is
one of the (2+1)-dimensional extensions of the SFE (1a).
As an example, let us consider the connection between the M-LIX equation
and the M-IX
equation (32). Let the M-LIX equation have the form [1]
$$
\alpha {\bf e}_{1y}=\frac{2a+1}{2}{\bf e}_{1x}+
\frac{i}{2}{\bf e}_{1}\wedge {\bf e}_{1x}
+i(q+p){\bf e}_{2}+(q-p){\bf e}_{3} \eqno(48a)
$$
$$
\alpha {\bf e}_{2y} = Exercise \quad N1 \eqno(48b)
$$
$$
\alpha {\bf e}_{3y} = Exercise \quad N1. \eqno(48c)
$$
In matrix terms, we can write this equation in the form
$$
\alpha \hat e_{1y} = \frac{2a+1}{2}\hat e_{1x}
+ \frac{1}{4}[\hat e_{1},\hat e_{1x}] +
i(q+p)\hat e_{2}+(q-p)\hat e_{3} \eqno(49a)
$$
$$
\alpha \hat e_{2y} = Exercise \quad N1 \eqno(49b)
$$
$$
\alpha \hat e_{3y} = Exercise \quad N1 \eqno(49c)
$$
where
$$
\hat e_{1} = g^{-1}\sigma_{3}g, \quad \hat e_{2}=g^{-1}\sigma_{2}g,
\quad \hat e_{3} = g^{-1}\sigma_{1}g \eqno(50)
$$
Here $\sigma_{j}$ are Pauli matrices
$$
\sigma_{1} =
\left ( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right) , \quad
\sigma_{2} =
\left ( \begin{array}{cc}
0 & -i \\
i & 0
\end{array} \right) , \quad
\sigma_{3} =
\left ( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array} \right) \eqno(51)
$$
So we have
$$
\sigma_{1}\sigma_{2} = i\sigma_{3} = -\sigma_{2}\sigma_{1}, \quad
\sigma_{1}\sigma_{3} = -i\sigma_{2} = -\sigma_{3}\sigma_{1}, \quad
\sigma_{3}\sigma_{2} = -i\sigma_{1} = -\sigma_{2}\sigma_{3}
\eqno(52a)
$$
and
$$
\sigma_{j}^{2} =I = {\rm diag}(1,1). \eqno(52b)
$$
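The multiplication rules (52) can be checked numerically; a minimal illustration (not part of the original text, assuming numpy is available):

```python
import numpy as np

# Pauli matrices of equation (51)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# products (52a): each pair anticommutes and gives +/- i times the third
assert np.allclose(s1 @ s2, 1j*s3) and np.allclose(s2 @ s1, -1j*s3)
assert np.allclose(s1 @ s3, -1j*s2) and np.allclose(s3 @ s1, 1j*s2)
assert np.allclose(s3 @ s2, -1j*s1) and np.allclose(s2 @ s3, 1j*s1)

# squares (52b): each Pauli matrix squares to the identity
for s in (s1, s2, s3):
    assert np.allclose(s @ s, np.eye(2))
```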
We can rewrite equations (49) in the form
$$
[\sigma_{3},B_{0}] = i(q+p)\sigma_{2}+(q-p)\sigma_{1} \eqno(53a)
$$
$$
[\sigma_{2},B_{0}] = -i(q+p)\sigma_{3} \eqno(53b)
$$
$$
[\sigma_{1},B_{0}] =-(q-p)\sigma_{3} \eqno(53c)
$$
where
$$
B_{0} = \alpha g_{y}g^{-1} - B_{1}g_{x}g^{-1}, \quad
B_{1}=\frac{2a+1}{2}I+\frac{1}{2}\sigma_{3} \eqno(54)
$$
Hence we get
$$
B_{0} =
\left ( \begin{array}{cc}
0 & q \\
p & 0
\end{array} \right). \eqno(55)
$$
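That (55) indeed solves the commutator relations (53) can be verified directly; a short symbolic check (an illustration, not part of the original text, assuming sympy is available):

```python
import sympy as sp

q, p = sp.symbols('q p')
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
B0 = sp.Matrix([[0, q], [p, 0]])   # the matrix (55)

comm = lambda A, B: A*B - B*A

# relations (53a)-(53c)
assert sp.expand(comm(s3, B0) - (sp.I*(q + p)*s2 + (q - p)*s1)) == sp.zeros(2, 2)
assert sp.expand(comm(s2, B0) + sp.I*(q + p)*s3) == sp.zeros(2, 2)
assert sp.expand(comm(s1, B0) + (q - p)*s3) == sp.zeros(2, 2)
```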
Thus the matrix function $g$ satisfies the equation
$$
\alpha g_{y}= B_{1}g_{x} + B_{0}g. \eqno(56)
$$
To find the time evolution of the matrices $\hat e_{j}$ or the vectors
${\bf e}_{j}$, we require that the matrix $\hat e_{1}$
satisfy the M-IX equation, i.e.
$$
i\hat e_{1t} = \frac{1}{2}[\hat e_{1}, M_{1}\hat e_{1}] +A_{1}\hat e_{1y}+A_{2}\hat e_{1x}
\eqno(57a)
$$
$$
M_{2}u = \frac{\alpha^{2}}{2i}tr(\hat e_{1}[\hat e_{1x},\hat e_{1y}])
\eqno(57b)
$$
From this information we find the time evolution of the matrices
$\hat e_{2}, \hat e_{3}$. After some algebra we obtain
$$
[\sigma_{3},C_{0}] = i(c_{12}+c_{21})\sigma_{2}+(c_{12}-c_{21})\sigma_{1} \eqno(58a)
$$
$$
[\sigma_{2},C_{0}] = i(c_{11}-c_{22})\sigma_{1}-i(c_{12}+c_{21})\sigma_{3} \eqno(58b)
$$
$$
[\sigma_{1},C_{0}] =-i(c_{11}-c_{22})\sigma_{2}-(c_{12}-c_{21})\sigma_{3} \eqno(58c)
$$
where
$$
C_{0} = g_{t}g^{-1} - 2iC_{2}g_{xx}g^{-1}-C_{1}g_{x}g^{-1}, \quad
C_{2}=\frac{2b+1}{2}I+\frac{1}{2}\sigma_{3},
\quad C_{1}=iB_{0}. \eqno(59)
$$
Hence we get
$$
C_{0} =
\left ( \begin{array}{cc}
c_{11} & c_{12} \\
c_{21} & c_{22}
\end{array} \right) \eqno(60)
$$
with
$$
c_{12}=i(2b-a+1)q_{x}+i\alpha q_{y},\quad c_{21}=i(a-2b)q_{x}-i\alpha p_{y} \eqno(61a)
$$
and $c_{jj}$ are the solutions of the following equations
$$
(a+1)c_{11x}-\alpha c_{11y}=i[(2b-a+1)(pq)_{x}+\alpha(pq)_{y}] \eqno(61b)
$$
$$
ac_{22x}-\alpha c_{22y}=i[(a-2b)(pq)_{x}-\alpha (pq)_{y}]. \eqno(61c)
$$
Thus the matrix $g$ satisfies the equation
$$
g_{t}= 2C_{2}g_{xx} + C_{1}g_{x} +C_{0}g. \eqno(62)
$$
So we have identified the curve
given by the M-LIX equation (48) with
the M-IX equation (32). On the other hand, the compatibility
condition of equations (56) and (62) is equivalent to the ZE (37).
Thus we have also established the connection between the curve
(the M-LIX equation) and the ZE, and we have shown once more that
the M-IX equation (32) and the ZE (37) are L-equivalent to each other. Finally,
we note that for $a=b=-\frac{1}{2}$ these results yield the corresponding
connection between the M-LIX, Ishimori and DS equations [1], and
for $a=b=-1$ we get the relation between the
M-VIII, M-LIX and other Zakharov equations (for details, see [1]).
\subsection{Spin systems as reductions of the M-0 equation}
Consider the (2+1)-dimensional M-0 equation [1]
$$
{\bf S}_{t} = a_{12} {\bf e}_{2} + a_{13}{\bf e}_{3}, \quad
{\bf S}_{x} = b_{12} {\bf e}_{2} + b_{13}{\bf e}_{3}, \quad
{\bf S}_{y} = c_{12} {\bf e}_{2} + c_{13}{\bf e}_{3} \eqno(63)
$$
where
$$
{\bf e}_{2} = \frac{c_{13}}{\triangle}{\bf S}_{x} -
\frac{b_{13}}{\triangle}{\bf S}_{y}, \quad
{\bf e}_{3} = -\frac{c_{12}}{\triangle}{\bf S}_{x} +
\frac{b_{12}}{\triangle}{\bf S}_{y}, \quad \triangle =
b_{12}c_{13}- b_{13}c_{12}. \eqno(64)
$$
All known spin systems (integrable and nonintegrable) in 2+1 dimensions
are particular reductions of the M-0 equation (63). In particular,
the IE (26) is an integrable reduction of equation (63). In this case,
we have
$$
a_{12} = \omega_{3}, a_{13}= -\omega_{2}, b_{12}= k, b_{13}= -\sigma,
c_{12}= m_{3}, c_{13}= -m_{2}. \eqno(65)
$$
Sometimes we use the M-0 equation in the following form [1]
$$
{\bf S}_{t} = d_{2} {\bf S}_{x} + d_{3}{\bf S}_{y} \eqno(66)
$$
with
$$
d_{2}= \frac{a_{12}c_{13}-a_{13}c_{12}}{\triangle}, \quad
d_{3}= \frac{a_{13}b_{12}-a_{12}b_{13}}{\triangle}. \eqno(67)
$$
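The coefficients $d_2, d_3$ follow by substituting (64) into (63) and collecting the ${\bf S}_x$ and ${\bf S}_y$ terms. A quick symbolic computation (an illustration, not part of the original text, assuming sympy is available; the symbols `Sx`, `Sy` stand in for the vectors ${\bf S}_x, {\bf S}_y$):

```python
import sympy as sp

a12, a13, b12, b13, c12, c13 = sp.symbols('a12 a13 b12 b13 c12 c13')
Sx, Sy = sp.symbols('Sx Sy')          # placeholders for S_x, S_y
tri = b12*c13 - b13*c12               # triangle of (64)

# e2, e3 from (64), then S_t from (63)
e2 = (c13*Sx - b13*Sy)/tri
e3 = (-c12*Sx + b12*Sy)/tri
St = sp.expand(a12*e2 + a13*e3)

d2 = St.coeff(Sx)
d3 = St.coeff(Sy)
print(sp.simplify(d2 - (a12*c13 - a13*c12)/tri))   # 0
print(sp.simplify(d3 - (a13*b12 - a12*b13)/tri))   # 0
```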
\section{Surfaces and Solitons in 2+1}
\subsection{The M-LVIII equation and Soliton equations in 2+1}
In the C-approach [1], our starting point is the following (2+1)-dimensional
M-LVIII equation [1]
$$
{\bf r}_{t} = \Upsilon_{1} {\bf r}_{x} + \Upsilon_{2} {\bf r}_{y}
+ \Upsilon_{3}{\bf n} \eqno(68a)
$$
$$
{\bf r}_{xx} = \Gamma^{1}_{11} {\bf r}_{x} + \Gamma^{2}_{11} {\bf r}_{y}
+ L{\bf n} \eqno(68b)
$$
$$
{\bf r}_{xy} = \Gamma^{1}_{12} {\bf r}_{x} + \Gamma^{2}_{12} {\bf r}_{y}
+ M{\bf n} \eqno(68c)
$$
$$
{\bf r}_{yy} = \Gamma^{1}_{22} {\bf r}_{x} + \Gamma^{2}_{22} {\bf r}_{y}
+ N{\bf n} \eqno(68d)
$$
$$
{\bf n}_{x} = p_{11} {\bf r}_{x} + p_{12} {\bf r}_{y} \eqno(68e)
$$
$$
{\bf n}_{y} = p_{21} {\bf r}_{x} + p_{22} {\bf r}_{y}. \eqno(68f)
$$
This equation admits several integrable reductions. Practically all
integrable spin systems in 2+1 dimensions are integrable reductions
of the M-LVIII equation (68).
\subsection{The M-LXIII equation and Soliton equations in 2+1}
Sometimes it is convenient to work using the B-approach. In this approach
the starting equation is the following M-LXIII equation [1]
$$
{\bf r}_{tx} = \Gamma^{1}_{01} {\bf r}_{x} + \Gamma^{2}_{01} {\bf r}_{y}
+ \Gamma^{3}_{01}{\bf n} \eqno(69a)
$$
$$
{\bf r}_{ty} = \Gamma^{1}_{02} {\bf r}_{x} + \Gamma^{2}_{02} {\bf r}_{y}
+ \Gamma^{3}_{02}{\bf n} \eqno(69b)
$$
$$
{\bf r}_{xx} = \Gamma^{1}_{11} {\bf r}_{x} + \Gamma^{2}_{11} {\bf r}_{y}
+ L{\bf n} \eqno(69c)
$$
$$
{\bf r}_{xy} = \Gamma^{1}_{12} {\bf r}_{x} + \Gamma^{2}_{12} {\bf r}_{y}
+ M{\bf n} \eqno(69d)
$$
$$
{\bf r}_{yy} = \Gamma^{1}_{22} {\bf r}_{x} + \Gamma^{2}_{22} {\bf r}_{y}
+ N{\bf n} \eqno(69e)
$$
$$
{\bf n}_{t} = p_{01} {\bf r}_{x} + p_{02} {\bf r}_{y} \eqno(69f)
$$
$$
{\bf n}_{x} = p_{11} {\bf r}_{x} + p_{12} {\bf r}_{y} \eqno(69g)
$$
$$
{\bf n}_{y} = p_{21} {\bf r}_{x} + p_{22} {\bf r}_{y}. \eqno(69h)
$$
This equation follows from the M-LVIII equation (68) under the following conditions
$$
\Gamma^{1}_{01}= \Upsilon_{1x}+ \Upsilon_{1} \Gamma^{1}_{11}
+ \Upsilon_{2}\Gamma^{1}_{12}+\Upsilon_{3}p_{11}
$$
$$
\Gamma^{2}_{01}= \Upsilon_{2x}+ \Upsilon_{1} \Gamma^{2}_{11}
+ \Upsilon_{2}\Gamma^{2}_{12}+\Upsilon_{3}p_{12}
$$
$$
\Gamma^{3}_{01}= \Upsilon_{3x}+ \Upsilon_{1} L
+ \Upsilon_{2}M
$$
$$
p_{01}= \frac{F \Gamma^{3}_{02}}{\Lambda}, \quad
p_{02}= -\frac{E \Gamma^{3}_{02}}{\Lambda}, \quad
\Lambda = EG-F^{2} \eqno(70)
$$
Note that we usually use the M-LXIII equation (69) in the following form
$$
Z_{x} = AZ \eqno(71a)
$$
$$
Z_{y} = BZ \eqno(71b)
$$
$$
Z_{t} = CZ \eqno(71c)
$$
where
$Z=({\bf r}_{x}, {\bf r}_{y}, {\bf n})^{t}$ and
$$
A =
\left ( \begin{array}{ccc}
\Gamma^{1}_{11} & \Gamma^{2}_{11} & L \\
\Gamma^{1}_{12} & \Gamma^{2}_{12} & M \\
p_{11} & p_{12} & 0
\end{array} \right) , \quad
B =
\left ( \begin{array}{ccc}
\Gamma^{1}_{12} & \Gamma^{2}_{12} & M \\
\Gamma^{1}_{22} & \Gamma^{2}_{22} & N \\
p_{21} & p_{22} & 0
\end{array} \right), \quad
C =
\left ( \begin{array}{ccc}
\Gamma^{1}_{01} & \Gamma^{2}_{01} & \Gamma^{3}_{01} \\
\Gamma^{1}_{02} & \Gamma^{2}_{02} & \Gamma^{3}_{02} \\
\Gamma^{1}_{03} & \Gamma^{2}_{03} & 0
\end{array} \right) . \eqno(72)
$$
\subsection{The M-LXIV equation}
In this subsection we derive the M-LXIV equation, which expresses some
relations between the
coefficients of the M-LXIII equation (69) or (71). From (71) we have
$$
A_{y}-B_{x}+[A,B]=0 \eqno(73a)
$$
$$
A_{t}-C_{x}+[A,C]=0 \eqno(73b)
$$
$$
B_{t}-C_{y}+[B,C]=0 \eqno(73c)
$$
This is the M-LXIV equation. These equations are equivalent to the relations
$$
{\bf r}_{yxx} = {\bf r}_{xxy}, \quad
{\bf r}_{yyx}={\bf r}_{xyy} \eqno(74a)
$$
$$
{\bf r}_{txx} = {\bf r}_{xxt}, \quad
{\bf r}_{txy}={\bf r}_{xyt}, \quad {\bf r}_{tyy}={\bf r}_{yyt}. \eqno(74b)
$$
Note that (73a) is the well known Codazzi-Mainardi-Peterson equation (CMPE).
\subsection{Orthogonal basis and LR of the M-LXIV equation}
Let us introduce the orthogonal trihedral
$$
{\bf e}_{1} = \frac{{\bf r}_{x}}{\sqrt E}, \quad
{\bf e}_{2} = {\bf n},
\quad {\bf e}_{3} = {\bf e}_{1} \wedge {\bf e}_{2}. \eqno(75)
$$
Let ${\bf e}_{1}^{2}=\beta = \pm 1, {\bf e}_{2}^{2}={\bf e}_{3}^{2}= 1$.
Then these vectors satisfy the following equations
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x}=\frac{1}{\sqrt{E}}
\left ( \begin{array}{ccc}
0 & L & -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{11} \\
-\beta L & 0 & -\Lambda p_{12} \\
\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{11}& \Lambda p_{12} & 0
\end{array} \right)
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(76a)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{y}=\frac{1}{\sqrt{E}}
\left ( \begin{array}{ccc}
0 & M & -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{12} \\
-\beta M & 0 & -\Lambda p_{22} \\
\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{12}& \Lambda p_{22} & 0
\end{array} \right)
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)
\eqno(76b)
$$
$$
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{t}=\frac{1}{\sqrt{E}}
\left ( \begin{array}{ccc}
0 & \Gamma^{3}_{01} & -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{01} \\
-\beta\Gamma^{3}_{01} & 0 & -\Lambda \Gamma^{2}_{03} \\
\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{01}& \Lambda \Gamma^{2}_{03} & 0
\end{array} \right)
\left ( \begin{array}{c}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right).
\eqno(76c)
$$
The matrix form of this equation is
$$
\hat e_{1x}=
\frac{1}{\sqrt{E}}
( L \hat e_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{11} \hat e_{3}) \eqno(77a)
$$
$$
\hat e_{2x}=\frac{1}{\sqrt{E}}
(-\beta L \hat e_{1} -\Lambda p_{12} \hat e_{3}) \eqno(77b)
$$
$$
\hat e_{3x}=
\frac{1}{\sqrt{E}}
(\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{11} \hat e_{1}+
\Lambda p_{12} \hat e_{2})\eqno(77c)
$$
$$
\hat e_{1y} =
\frac{1}{\sqrt{E}}
( M \hat e_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{12} \hat e_{3})
\eqno(78a)
$$
$$
\hat e_{2y} =
\frac{1}{\sqrt{E}}
(-\beta M \hat e_{1} -\Lambda p_{22} \hat e_{3})
\eqno(78b)
$$
$$
\hat e_{3y} = \frac{1}{\sqrt{E}}
( \frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{12}\hat e_{1}+
\Lambda p_{22} \hat e_{2}) \eqno(78c)
$$
$$
\hat e_{1t}=
\frac{1}{\sqrt{E}}
(\Gamma^{3}_{01}\hat e_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{01} \hat e_{3})
\eqno(79a)
$$
$$
\hat e_{2t}=\frac{1}{\sqrt{E}}
(-\beta\Gamma^{3}_{01} \hat e_{1} -\Lambda \Gamma^{2}_{03} \hat e_{3})
\eqno(79b)
$$
$$
\hat e_{3t}=
\frac{1}{\sqrt{E}}
(\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{01} \hat e_{1}+
\Lambda \Gamma^{2}_{03} \hat e_{2}) \eqno(79c)
$$
where
$$
\hat e_{1} = g^{-1}\sigma_{3}g, \quad \hat e_{2}=g^{-1}\sigma_{2}g,
\quad \hat e_{3} = g^{-1}\sigma_{1}g. \eqno(80)
$$
Equations (77)-(79) can be rewritten in the form
$$
[\sigma_{3}, U] = \frac{1}{\sqrt{E}}
( L \sigma_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{11} \sigma_{1}) \eqno(81a)
$$
$$
[\sigma_{2},U] =\frac{1}{\sqrt{E}}
(-\beta L \sigma_{3} -\Lambda p_{12} \sigma_{1}) \eqno(81b)
$$
$$
[\sigma_{1},U] =
\frac{1}{\sqrt{E}}
(\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{11} \sigma_{3}+
\Lambda p_{12} \sigma_{2})\eqno(81c)
$$
$$
[\sigma_{3},V]= \frac{1}{\sqrt{E}}
( M \sigma_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{12} \sigma_{1})
\eqno(82a)
$$
$$
[\sigma_{2},V] =
\frac{1}{\sqrt{E}}
(-\beta M \sigma_{3} -\Lambda p_{22} \sigma_{1})
\eqno(82b)
$$
$$
[\sigma_{1},V] = \frac{1}{\sqrt{E}}
( \frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{12}\sigma_{3}+
\Lambda p_{22} \sigma_{2}) \eqno(82c)
$$
$$
[\sigma_{3}, W] =
\frac{1}{\sqrt{E}}
(\Gamma^{3}_{01}\sigma_{2} -\frac{\Lambda}{\sqrt{E}}\Gamma^{2}_{01} \sigma_{1})
\eqno(83a)
$$
$$
[\sigma_{2},W] = \frac{1}{\sqrt{E}}
(-\beta\Gamma^{3}_{01} \sigma_{3} -\Lambda \Gamma^{2}_{03} \sigma_{1})
\eqno(83b)
$$
$$
[\sigma_{1},W] = \frac{1}{\sqrt{E}}
(\frac{\beta\Lambda}{\sqrt{E}}\Gamma^{2}_{01} \sigma_{3}+
\Lambda \Gamma^{2}_{03} \sigma_{2}) \eqno(83c)
$$
where
$$
U = g_{x}g^{-1},\quad V = g_{y}g^{-1}, \quad W = g_{t}g^{-1} \eqno(84)
$$
Hence we get
$$
U =
\frac{1}{2i\sqrt{E}}
\left ( \begin{array}{cc}
-\sqrt{\Lambda} p_{12} & L+i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{11} \\
L-i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{11} &\sqrt{\Lambda} p_{12}
\end{array} \right)
\eqno(85a)
$$
$$
V=
\frac{1}{2i\sqrt{E}}
\left ( \begin{array}{cc}
-\sqrt{\Lambda} p_{22} & M-i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{12} \\
M+i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{12} &\sqrt{\Lambda} p_{22}
\end{array} \right)
\eqno(85b)
$$
$$
W=
\frac{1}{2i\sqrt{E}}
\left ( \begin{array}{cc}
-\sqrt{\Lambda}\Gamma^{2}_{03} & \Gamma^{3}_{01}-i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{01} \\
\Gamma^{3}_{01}+i\sqrt{\frac{\Lambda}{E}}\Gamma^{2}_{01} &\sqrt{\Lambda} \Gamma^{2}_{03}
\end{array} \right).
\eqno(85c)
$$
Thus the matrix-function $g$ satisfies the equations
$$
g_{x}=Ug,\quad g_{y}=Vg, \quad g_{t}=Wg. \eqno(86)
$$
From these equations it follows that
$$
U_{y}-V_{x}+[U,V]=0 \eqno(87a)
$$
$$
U_{t}-W_{x}+[U,W]=0 \eqno(87b)
$$
$$
V_{t}-W_{y}+[V,W]=0 \eqno(87c)
$$
This is the M-LXIV equation. Equation (87a) is the CMPE.
Note that the M-LXIII equation in the form (76) has
the same form as the mM-LXI equation (14) under the following
identifications
$$
k=
\frac{L}{\sqrt{E}}, \quad \sigma =\frac{\Lambda}{E}\Gamma^{2}_{11},
\quad \tau = -\frac{\Lambda}{\sqrt{E}}p_{12} \eqno(88a)
$$
$$
m_{1}= -\frac{\Lambda}{\sqrt{E}}p_{22},\quad
m_{2}= \frac{\Lambda}{E}\Gamma^{2}_{12}, \quad
m_{3}=
\frac{M}{\sqrt{E}} \eqno(88b)
$$
$$
\omega_{1}=-\frac{1}{\sqrt{E}}\Lambda \Gamma^{2}_{03}, \quad
\omega_{2}=\frac{\Lambda}{E}\Gamma^{2}_{01},
\quad
\omega_{3}=\frac{1}{\sqrt{E}}\Gamma^{3}_{01} \eqno(88c)
$$
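The compatibility conditions (87) hold automatically for any smooth invertible $g$ once $U=g_{x}g^{-1}$ and $V=g_{y}g^{-1}$, and this can be spot-checked symbolically. A minimal sketch in Python (sympy assumed available; the sample $g$ is our own illustrative choice, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative invertible matrix function g(x, y); any smooth choice works.
g = sp.Matrix([[1, x*y], [0, 1]])
ginv = g.inv()

# U = g_x g^{-1}, V = g_y g^{-1}, cf. (84) and (86).
U = g.diff(x) * ginv
V = g.diff(y) * ginv

# Zero-curvature condition (87a): U_y - V_x + [U, V] = 0.
residual = U.diff(y) - V.diff(x) + (U * V - V * U)
assert sp.simplify(residual) == sp.zeros(2, 2)
```

The same check applies verbatim to (87b) and (87c) with $t$-derivatives and $W=g_{t}g^{-1}$.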
\section{Self-coordination of the geometrical formalism and Hirota's bilinear method}
The main goal of this section is to establish the self-coordination
of our geometrical formalism presented above with another
powerful tool of soliton theory, Hirota's bilinear method.
We demonstrate our idea with some examples. Usually,
for the spin vector ${\bf S}=(S_{1},S_{2},S_{3})$ one takes the following transformation
$$
S^{+} = S_{1}+iS_{2}= \frac{2\bar f g}{\bar f f+\bar g g}, \quad S_{3} = \frac{\bar f f
- \bar g g}{\bar f f +\bar g g}. \eqno(89)
$$
Also, in this section, we assume
$$
{\bf S} = {\bf e}_{1} \eqno(90)
$$
Now consider examples.
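Note that the transformation (89) automatically yields a unit spin vector, since $S_1^2+S_2^2+S_3^2=(\bar ff+\bar gg)^2/(\bar ff+\bar gg)^2=1$. A quick numerical sanity check in plain Python (the sample values of $f$ and $g$ are arbitrary):

```python
# Check that (89) gives a unit spin vector for arbitrary complex f, g.
f = 0.7 + 1.3j
g = -0.4 + 0.9j

denom = abs(f) ** 2 + abs(g) ** 2        # \bar f f + \bar g g
S_plus = 2 * f.conjugate() * g / denom   # S^+ = S_1 + i S_2
S3 = (abs(f) ** 2 - abs(g) ** 2) / denom

# S_1^2 + S_2^2 + S_3^2 = |S^+|^2 + S_3^2 = 1
assert abs(abs(S_plus) ** 2 + S3 ** 2 - 1.0) < 1e-12
```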
\subsection{The Ishimori equation}
It is well known that for the IE (26) the bilinear
representation has the form
$$
u_{x}=-2i\alpha^{2}\frac{D_{y}(\bar f \circ f + \bar g\circ g )}
{\bar f f +\bar g g},
\quad u_{y}=-2i\frac{D_{x}(\bar f \circ f + \bar g\circ g )}
{\bar f f + \bar g g}
\eqno(91)
$$
Then the IE (26) is transformed into the bilinear equations [7]
$$
(iD_{t}-D_{x}^{2}-\alpha^{2}D^{2}_{y}) (\bar f \circ f - \bar g \circ g)=0
\eqno(92a)
$$
$$
(iD_{t}-D_{x}^{2}-\alpha^{2}D^{2}_{y}) \bar f \circ g =0.
\eqno(92b)
$$
These are supplemented by the additional condition
$$
u_{xy}=u_{yx} \eqno(93)
$$
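The bilinear forms (91)-(92) are written in terms of the Hirota operator, $D_x^n\,f\circ g=(\partial_x-\partial_{x'})^n f(x)g(x')|_{x'=x}$, so that at first order $D_x(f\circ g)=f_xg-fg_x$. A small symbolic sketch of this operator (sympy assumed; not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')

def hirota_Dx(f, g):
    """First-order Hirota derivative: D_x(f o g) = f_x g - f g_x."""
    return sp.expand(f.diff(x) * g - f * g.diff(x))

f = x ** 2
g = sp.sin(x)

# D_x is antisymmetric, so D_x(f o f) vanishes identically.
assert sp.expand(hirota_Dx(f, g) + hirota_Dx(g, f)) == 0
assert hirota_Dx(f, f) == 0
```

Higher-order operators such as $D_x^2$ and $D_y$ in (91)-(92) follow from the same bracketed definition.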
Now we assume that
$$
\tau =\frac{1}{2}u_{y}, \quad
m_{1} =\frac{1}{2\alpha^{2}}u_{x} \eqno(94)
$$
Then, the second equation of the IE (26b) has the same form
as the third equation of the mM-LXII equation (17c).
So, we get
$$
e^{+}_{1} = \frac{2\bar f g}{\Lambda}, \quad e_{13} = \frac{\bar f f
- \bar g g}{\Lambda} \eqno(95a)
$$
$$
\tau=-i\frac{D_{x}(\bar f \circ f + \bar g\circ g )}{\Lambda},
\quad m_{1}=-i\frac{D_{y}(\bar f \circ f + \bar g\circ g )}{\Lambda}
\eqno(95b)
$$
Similarly, after some algebra we obtain
$$
e^{+}_{2} = i\frac{\bar f^{2} +\bar g^{2}}{\Lambda}, \quad
e_{23} = i\frac{fg- \bar f \bar g}{\Lambda}, \quad
e^{+}_{3} = \frac{\bar f^{2} - \bar g^{2}}{\Lambda}, \quad
e_{33} = - \frac{fg + \bar f \bar g}{\Lambda} \eqno(96)
$$
and
$$
k =-i\frac{D_{x}(g \circ f - \bar g\circ \bar f )}{\Lambda},
\quad \sigma = -i\frac{D_{x}(g \circ f + \bar g\circ \bar f )}{\Lambda},
\eqno(97a)
$$
$$
m_{2}=-i\frac{D_{y}(g \circ f + \bar g\circ \bar f)}{\Lambda},
\quad m_{3}=-i\frac{D_{y}(g\circ f + \bar g\circ \bar f)}{\Lambda}
\eqno(97b)
$$
Here ${\bf e}_{j} = (e_{j1}, e_{j2}, e_{j3}), \quad e_{j}^{\pm} = e_{j1}
\pm ie_{j2}$.
\subsection{The M-I equation}
Let us now consider the Myrzakulov I (M-I) equation, which reads [1]
$$
{\bf S}_{t} = ({\bf S}\wedge {\bf S}_{y}+u{\bf S})_{x} \eqno (98a)
$$
$$
u_{x} = - {\bf S}\cdot ({\bf S}_{x}\wedge{\bf S}_{y}). \eqno (98b)
$$
For this equation we take
$$
\tau =0, \quad
m_{1} =u \eqno(99)
$$
Then equations (17c) and (98b) have the same form. From (95) and (99) it follows that
$$
D_{x}(\bar f \circ f + \bar g\circ g )=0 \eqno (100)
$$
$$
u=-i\frac{D_{y}(\bar f \circ f + \bar g\circ g )}{\Lambda}
\eqno(101)
$$
\subsection{The M-IX equation}
In this case, we take
$$
\tau =\frac{1}{2\alpha}[\alpha u_{y}-(2a+1)u_{x}], \quad
m_{1} =\frac{1}{2\alpha^{2}}[\alpha ((2a+1)u_{y} -4a(a+1)u_{x}] \eqno(102)
$$
So, for the potential we have
$$
u_{x}=2i\alpha (2a+1)\frac{D_{x}(\bar f \circ f + \bar g\circ g )}
{\Lambda}-2i\alpha^{2}\frac{D_{y}(\bar f \circ f + \bar g\circ g )}
{\Lambda}\eqno(103a)
$$
$$
u_{y}=8ia(a+1)\frac{D_{x}(\bar f \circ f + \bar g\circ g )}
{\Lambda}-2i\alpha (2a+1)\frac{D_{y}(\bar f \circ f + \bar g\circ g )}
{\Lambda}\eqno(103b)
$$
\section{Supersymmetry, geometry and soliton equations}
In this section we establish a connection between geometry and supersymmetric
(susy) soliton equations. As an example, we consider the susy generalizations
of the NLSE (1) and the LLE (9). To this purpose, we must first construct a
susy extension of the SFE (9). A simple example of such an extension
is the OSP(2$\mid$1) M-LXV equation [1]. It is convenient to work with the
matrix form of the OSP(2$\mid$1) M-LXV equation,
which we write as [1]
$$
\hat e_{1x} = 2q\hat e_{2}-2p\hat e_{3} +\beta \hat e_{4}
-\epsilon\hat e_{5}
\eqno(104a)
$$
$$
\hat e_{2x} = p \hat e_{1}-2i\lambda \hat e_{2}+\epsilon\hat e_{4}
\eqno(104b)
$$
$$
\hat e_{3x} = -q \hat e_{1}+2i\lambda \hat e_{3}+\beta\hat e_{5}
\eqno(104c)
$$
$$
\hat e_{4x} = \epsilon \hat e_{1}-2\beta \hat e_{2} -i\lambda \hat e_{4}
-p\hat e_{5}
\eqno(104d)
$$
$$
\hat e_{5x} =-\beta \hat e_{1}+2\epsilon \hat e_{2} -q\hat e_{4} +
i\lambda \hat e_{5} \eqno(104e)
$$
Here, $\hat e_{1},\hat e_{2},\hat e_{3}$ are bosonic matrices,
$\hat e_{4}, \hat e_{5}$ are fermionic matrices,
$p(q)=p(p)=0, p(\beta)=p(\epsilon)=1$ and
$$
\hat e_{1} = g^{-1}l_{1}g, \quad \hat e_{2}=g^{-1}l_{2}g,
\quad \hat e_{3} = g^{-1}l_{3}g, \quad
\hat e_{4} = g^{-1}l_{4}g, \quad \hat e_{5}=g^{-1}l_{5}g
\eqno(105)
$$
The generators of the supergroup OSP(2$\mid$1) have the form
$$
l_{1} =
\left ( \begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 0
\end{array} \right), \quad
l_{2} =
\left ( \begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array} \right), \quad
l_{3} =
\left ( \begin{array}{ccc}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{array} \right),
$$
$$
l_{4} =
\left ( \begin{array}{ccc}
0 & 0 & 1\\
0 & 0 & 0 \\
0 & -1 & 0
\end{array} \right), \quad
l_{5} =
\left ( \begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & 1 \\
1 & 0 & 0
\end{array} \right)
\eqno(106)
$$
These generators satisfy the following commutation and anticommutation relations
$$
[l_{1},l_{2}]=2l_{2}, \quad [l_{1},l_{3}]=-2l_{3}, \quad [l_{2},l_{3}]=l_{1},
\quad [l_{1},l_{4}]=l_{4}, \quad [l_{1},l_{5}]=-l_{5}
$$
$$
[l_{2},l_{4}]=0, \quad [l_{2},l_{5}]=l_{4}, \quad [l_{3},l_{4}]=l_{5},\quad
[l_{3},l_{5}]=0
$$
$$
\{l_{4},l_{4}\}=-2l_{2},\quad \{l_{4},l_{5}\}=l_{1}, \quad
\{l_{5},l_{5}\}=2l_{3} \eqno(107)
$$
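These relations can be spot-checked numerically from the explicit matrices (106). A sketch in Python (numpy assumed); note that with the matrices as written, the second commutator comes out as $[l_1,l_3]=-2l_3$:

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

def acomm(a, b):
    return a @ b + b @ a

# The OSP(2|1) generators of (106).
l1 = np.diag([1.0, -1.0, 0.0])
l2 = np.zeros((3, 3)); l2[0, 1] = 1
l3 = np.zeros((3, 3)); l3[1, 0] = 1
l4 = np.zeros((3, 3)); l4[0, 2] = 1; l4[2, 1] = -1
l5 = np.zeros((3, 3)); l5[1, 2] = 1; l5[2, 0] = 1

assert np.allclose(comm(l1, l2), 2 * l2)
assert np.allclose(comm(l1, l3), -2 * l3)   # sign as given by these matrices
assert np.allclose(comm(l2, l3), l1)
assert np.allclose(comm(l1, l4), l4)
assert np.allclose(comm(l1, l5), -l5)
assert np.allclose(acomm(l4, l4), -2 * l2)
assert np.allclose(acomm(l4, l5), l1)
assert np.allclose(acomm(l5, l5), 2 * l3)
```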
From (104) and (105) it follows that
$$
[l_{1}, U] = 2ql_{2}-2pl_{3} +\beta l_{4}
-\epsilon l_{5}
\eqno(108a)
$$
$$
[l_{2},U] = p l_{1}-2i\lambda l_{2}+\epsilon l_{4}
\eqno(108b)
$$
$$
[l_{3} ,U]= -q l_{1}+2i\lambda l_{3}+\beta l_{5}
\eqno(108c)
$$
$$
[l_{4},U] = \epsilon l_{1}-2\beta l_{2} -i\lambda l_{4}-pl_{5}
\eqno(108d)
$$
$$
[l_{5},U] =-\beta l_{1}+2\epsilon l_{2} -ql_{4} +
i\lambda l_{5} \eqno(108e)
$$
where
$$
g_{x}g^{-1} = U \eqno(109)
$$
Hence we get
$$
U = i\lambda l_{1}+ql_{2}+pl_{3}+\beta l_{4} + \epsilon l_{5}
\eqno(110)
$$
Now we consider the (1+1)-dimensional M-V equation [1]
$$
iR_{t} = \frac{1}{2}[R, R_{xx}] + \frac{3}{2}[R^{2}, (R^{2})_{xx}]
\eqno (111)
$$
Here $R \in osp(2|1)$, i.e., it has the form
$$
R =
\left ( \begin{array}{ccc}
S_{3} & S^{-} & \gamma_{1} \\
S^{+} & -S_{3} & \gamma_{2} \\
\gamma_{2} & -\gamma_{1} & 0
\end{array} \right)
\eqno(112)
$$
and satisfies the condition
$$
R^{3} = R \eqno(113a)
$$
or in elements
$$
S_{3}^{2} + S^{+}S^{-} + 2\gamma_{1} \gamma_{2} = 1. \eqno(113b)
$$
Here $S_{ij}$ are bosonic functions and $\gamma_{j}$ are fermionic
functions, i.e. $p(S_{ij}) = 0, p(\gamma_{j})=1$.
The M-V equation is the simplest supersymmetric generalization of
the LLE (9) on group OSP(2$\mid$1). It admits two reductions: the
UOSP(2$\mid$1) M-V equation and the UOSP(1,1$\mid$1) M-V equation [1].
As was established in [1], the gauge equivalent counterpart of the M-V
equation (111) is the OSP(2$\mid$1) NLSE [8,9]. The UOSP(1,1$\mid$1) M-V
equation was studied in [10].
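In the purely bosonic sector ($\gamma_1=\gamma_2=0$), the equivalence of (113a) and (113b) is easy to verify numerically. A sketch (numpy assumed; the sample angles are arbitrary, not from the text):

```python
import numpy as np

# A bosonic sample point satisfying (113b): S_3^2 + S^+ S^- = 1.
theta, phi = 0.8, 2.1
S1, S2, S3 = (np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta))
Sp, Sm = S1 + 1j * S2, S1 - 1j * S2   # S^+, S^-

R = np.array([[S3,  Sm, 0.0],
              [Sp, -S3, 0.0],
              [0.0, 0.0, 0.0]], dtype=complex)

assert abs(S3 ** 2 + Sp * Sm - 1.0) < 1e-12   # (113b)
assert np.allclose(R @ R @ R, R)              # (113a): R^3 = R
```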
The LR of the M-V equation has the form [1]
$$
\psi_{x}= U^{\prime}\psi, \eqno (114a)
$$
$$
\psi_{t} = V^{\prime}\psi \eqno (114b)
$$
with
$$
U^{\prime} = i\lambda R, \eqno (115a)
$$
$$
V^{\prime} = 2i\lambda^{2} R + \frac{3\lambda}{2}[R^{2}, (R^{2})_{x}]. \eqno (115b)
$$
Now let us return to our supercurves. To find the time evolution of these
supercurves for the OSP(2$\mid$1) group case, we assume that
$$
\hat e_{1} \equiv R \eqno(116)
$$
Then $\hat e_{1}$ satisfies the M-V equation, i.e.
$$
i\hat e_{1t} = \frac{1}{2}[\hat e_{1}, \hat e_{1xx}] +
\frac{3}{2}[\hat e_{1}^{2}, (\hat e_{1}^{2})_{xx}]
\eqno (117)
$$
Now we are in a position to write down the time evolution of $\hat e_{j}$. We have
$$
\hat e_{1t} =2\lambda \hat e_{1x} -2iq_{x}\hat e_{2}-2ip_{x}\hat e_{3}
-2i\beta_{x} \hat e_{4}-2i\epsilon_{x}\hat e_{5} \eqno(118a)
$$
$$
\hat e_{2t}=2\lambda \hat e_{2x} -2i(pq+2\beta \epsilon)\hat e_{2} +
ip_{x} \hat e_{1}+2i\epsilon_{x}\hat e_{4} \eqno(118b)
$$
$$
\hat e_{3t} = 2\lambda \hat e_{3x} +2i(pq + 2\beta \epsilon)\hat e_{3}
+ir_{x}\hat e_{1}-2i\beta_{x}\hat e_{5} \eqno(118c)
$$
$$
\hat e_{4t} = 2\lambda \hat e_{4x}+2i\epsilon_{x}\hat e_{1}+
4i\beta_{x}\hat e_{2}-i(pq+2\beta\epsilon)\hat e_{4}-ip_{x}\hat e_{5} \eqno(118d)
$$
$$
\hat e_{5t}=2\lambda\hat e_{5x} -2i\beta_{x}\hat e_{1}+4i\epsilon_{x}
\hat e_{3}+ir_{x}\hat e_{4} + i(pq+2\beta \epsilon) \hat e_{5} \eqno(118e)
$$
Hence we obtain
$$
[l_{1}, V-2\lambda U] = -2iq_{x}l_{2}-2ip_{x}l_{3}
-2i\beta_{x} l_{4}-2i\epsilon_{x}l_{5} \eqno(119a)
$$
$$
[l_{2},V-2\lambda U]= -2i(pq+2\beta \epsilon)l_{2} +
ip_{x} l_{1}+2i\epsilon_{x}l_{4} \eqno(119b)
$$
$$
[l_{3},V- 2\lambda U]=2i(pq + 2\beta \epsilon)l_{3}
+ir_{x}l_{1}-2i\beta_{x}l_{5} \eqno(119c)
$$
$$
[l_{4},V- 2\lambda U]=2i\epsilon_{x}l_{1}+
4i\beta_{x}l_{2}-i(pq+2\beta\epsilon)l_{4}-ip_{x}l_{5} \eqno(119d)
$$
$$
[l_{5},V-2\lambda U]= -2i\beta_{x}l_{1}+4i\epsilon_{x}
l_{3}+ir_{x}l_{4} + i(pq+2\beta \epsilon) l_{5} \eqno(119e)
$$
where
$$
g_{t}g^{-1} = V \eqno (120)
$$
From (119) it follows that
$$
V = 2\lambda U +i(pq+2\beta \epsilon)l_{1} -iq_{x}l_{2}+ip_{x}l_{3}
-2i\beta_{x}l_{4}+2i\epsilon_{x}l_{5} \eqno(121)
$$
So for $g$ we have the following set of linear equations
$$
g_{x} = Ug \eqno(122a)
$$
$$
g_{t}=Vg \eqno(122b)
$$
The compatibility condition of these equations gives
$$
iq_t + q_{xx} - 2rq^2 - 4q \beta \epsilon - 4 \epsilon \epsilon _{x} = 0,
\eqno(123a)
$$
$$
ir_{t} - r_{xx} +2qr^2 + 4r \beta \epsilon - 4 \beta \beta _{x} =0,
\eqno (123b)
$$
$$
i \epsilon _{t} +2 \epsilon _{xx} + 2q \beta _{x}+q_{x} \beta - \epsilon rq=0,
\eqno (123c)
$$
$$
i \beta _{t}-2 \beta_{xx}- 2r \epsilon_{x}-r_{x} \epsilon + \beta rq =0,
\eqno (123d)
$$
This is the OSP(2$\mid$1) NLSE [8,9]. So we have proved that the M-V equation
and the OSP(2$\mid$1) NLSE are equivalent to each other in the geometrical sense.
\section{Conclusion}
To conclude, in this paper, starting from Lakshmanan's idea [2],
we have discussed some aspects of the
relation between the differential geometry of curves/surfaces and soliton
equations in 2+1 dimensions. We have also presented our point of view
on the connection between the geometry of curves and supersymmetric
soliton equations. The self-coordination of geometry and Hirota's bilinear
method has been established.
Finally, we would like to note that the results presented above are rather
a formulation of problems than their solutions. Further studies of these
problems seem to be very interesting. In this connection, I would like to
ask you, dear colleague: if you have or will have any results in these or
closely related directions, please inform me. Also, any comments and
questions are welcome.
\section{Exercises}
In closing, we would also like to pose the following particular questions
as exercises: \\
{\bf Exercise N1:} Write the vector form of the M-LIX equation.\\
{\bf Exercise N2:} Write the vector form of the M-LXIV equation.\\
{\bf Exercise N3:} Find a surface corresponding to the M-LIX equation.\\
{\bf Exercise N4:} Find the integrable reductions of the M-LVIII equation.\\
{\bf Exercise N5:} Find the integrable reductions of the M-LXIII equation.\\
{\bf Exercise N6:} Find the integrable reductions of the M-LXV equation.\\
{\bf Exercise N7:} As is well known, the M-XXXIV equation (9) is integrable.
Find the other integrable equations among the spin-phonon
systems (3)-(9).\\
{\bf Exercise N8:} Study the following version of the M-LIX equation
$$
\alpha {\bf e}_{1y}=
\frac{2a+1}{2}{\bf e}_{1x}+
\frac{i}{2}{\bf e}_{1}\wedge{\bf e}_{1x} +
c{\bf e}_{2}-d{\bf e}_{3} \eqno(124a)
$$
$$
\alpha {\bf e}_{2y} =
\frac{2a+1}{2}{\bf e}_{2x}+
\frac{i}{2}{\bf e}_{2}\wedge{\bf e}_{2x}
-c{\bf e}_{1}+n{\bf e}_{3} \eqno(124b)
$$
$$
\alpha {\bf e}_{3y} = \frac{2a+1}{2}{\bf e}_{3x}+
\frac{i}{2}{\bf e}_{3}\wedge{\bf e}_{3x} +
d{\bf e}_{1}-n{\bf e}_{2} \eqno(124c)
$$
{\bf Exercise N9:} Find physical applications of the equations
presented above and of the spin-phonon systems from the Appendix.
\section{Appendix: Spin-phonon systems}
Here we wish to present some spin-phonon systems, which describe the nonlinear
dynamics of compressible magnets [1]. Maybe some of these equations are
integrable. For example, the M-XXXIV equation is integrable.
\subsection{The 0-class}
The M-LVII equation:
$$
2iS_t=[S,S_{xx}]+(u+h)[S,\sigma_3] \eqno (125)
$$
The M-LVI equation:
$$
2iS_t=[S,S_{xx}]+(uS_3+h)[S,\sigma_3] \eqno (126)
$$
The M-LV equation:
$$
2iS_t=\{(\mu \vec S^2_x-u+m)[S,S_x]\}_x+h[S,\sigma_3] \eqno (127)
$$
The M-LIV equation:
$$
2iS_t=n[S,S_{xxxx}]+2\{(\mu \vec S^2_x-u+m)[S,S_x]\}_x+
h[S,\sigma_3] \eqno (128)
$$
The M-LIII equation:
$$
2iS_t=[S,S_{xx}]+2iuS_x \eqno (129)
$$
where $\nu_{0}, \mu, \lambda, n, m, a, b, \alpha, \beta, \rho, h$ are constants,
$u$ is a scalar potential,
$$
S= \pmatrix{
S_3 & rS^- \cr
rS^+ & -S_3
}, \quad S^{\pm}=S_{1}\pm i S_{2},\quad r^{2}=\pm 1,\quad S^2=I.
$$
\subsection{The 1-class}
The M-LII equation:
$$
2iS_t=[S,S_{xx}]+(u+h)[S,\sigma_3] \eqno (130a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\lambda(S_3)_{xx} \eqno (130b)
$$
The M-LI equation:
$$
2iS_t=[S,S_{xx}]+(u+h)[S,\sigma_3] \eqno (131a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\alpha(u^2)_{xx}+\beta u_{xxxx}+
\lambda(S_3)_{xx} \eqno (131b)
$$
The M-L equation:
$$
2iS_t=[S,S_{xx}]+(u+h)[S,\sigma_3] \eqno (132a)
$$
$$
u_t+u_x+\lambda(S_3)_x=0 \eqno (132b)
$$
The M-XLIX equation:
$$
2iS_t=[S,S_{xx}]+(u+h)[S,\sigma_3] \eqno (133a)
$$
$$
u_t+u_x+\alpha(u^2)_x+\beta u_{xxx}+\lambda(S_3)_x=0 \eqno (133b)
$$
\subsection{The 2-class}
The M-XLVIII equation:
$$
2iS_t=[S,S_{xx}]+(uS_3+h)[S,\sigma_3] \eqno (134a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\lambda(S^2_3)_{xx} \eqno (134b)
$$
The M-XLVII equation:
$$
2iS_t=[S,S_{xx}]+(uS_3+h)[S,\sigma_3] \eqno (135a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\alpha(u^2)_{xx}+\beta u_{xxxx}+
\lambda (S^2_3)_{xx} \eqno (135b)
$$
The M-XLVI equation:
$$
2iS_t=[S,S_{xx}]+(uS_3+h)[S,\sigma_3] \eqno (136a)
$$
$$
u_t+u_x+\lambda(S^2_3)_x=0 \eqno (136b)
$$
The M-XLV equation:
$$
2iS_t=[S,S_{xx}]+(uS_3+h)[S,\sigma_3] \eqno (137a)
$$
$$
u_t+u_x+\alpha(u^2)_x+\beta u_{xxx}+\lambda(S^2_3)_x=0 \eqno (137b)
$$
\subsection{The 3-class}
The M-XLIV equation:
$$
2iS_t=\{(\mu \vec S^2_x - u +m)[S,S_x]\}_x \eqno (138a)
$$
$$
\rho u _{tt}=\nu^2_0 u_{xx}+\lambda(\vec S^2_x)_{xx} \eqno (138b)
$$
The M-XLIII equation:
$$
2iS_t=\{(\mu \vec S^2_x - u +m)[S,S_x]\}_x \eqno (139a)
$$
$$
\rho u _{tt}=\nu^2_0 u_{xx}+\alpha (u^2)_{xx}+\beta u_{xxxx}+ \lambda
(\vec S^2_x)_{xx} \eqno (139b)
$$
The M-XLII equation:
$$
2iS_t=\{(\mu \vec S^2_x - u +m)[S,S_x]\}_x \eqno (140a)
$$
$$
u_t+u_x +\lambda (\vec S^2_x)_x = 0 \eqno (140b)
$$
The M-XLI equation:
$$
2iS_t=\{(\mu \vec S^2_x - u +m)[S,S_x]\}_x \eqno (141a)
$$
$$
u_t+u_x +\alpha(u^2)_x+\beta u_{xxx}+\lambda (\vec S^2_x)_{x} = 0 \eqno (141b)
$$
\subsection{The 4-class}
The M-XL equation:
$$
2iS_t=[S,S_{xxxx}]+2\{((1+\mu)\vec S^2_x-u+m)[S,S_x]\}_{x} \eqno (142a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\lambda (\vec S^2_x)_{xx} \eqno (142b)
$$
The M-XXXIX equation:
$$
2iS_t=[S,S_{xxxx}]+2\{((1+\mu)\vec S^2_x-u+m)[S,S_x]\}_{x} \eqno (143a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\alpha(u^2)_{xx}+\beta u_{xxxx}+\lambda (\vec S^2_x)_{xx}
\eqno (143b)
$$
The M-XXXVIII equation:
$$
2iS_t=[S,S_{xxxx}]+2\{((1+\mu)\vec S^2_x-u+m)[S,S_x]\}_{x} \eqno (144a)
$$
$$
u_t + u_x + \lambda (\vec S^2_x)_x = 0 \eqno (144b)
$$
The M-XXXVII equation:
$$
2iS_t=[S,S_{xxxx}]+2\{((1+\mu)\vec S^2_x-u+m)[S,S_x]\}_{x} \eqno (145a)
$$
$$
u_t + u_x + \alpha(u^2)_x + \beta u_{xxx}+\lambda (\vec S^2_x)_x = 0 \eqno(145b)
$$
\subsection{The 5-class}
The M-XXXVI equation:
$$
2iS_t=[S,S_{xx}]+2iuS_x \eqno (146a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\lambda (f)_{xx} \eqno (146b)
$$
The M-XXXV equation:
$$
2iS_t=[S,S_{xx}]+2iuS_x \eqno(147a)
$$
$$
\rho u_{tt}=\nu^2_0 u_{xx}+\alpha(u^2)_{xx}+\beta u_{xxxx}+\lambda
(f)_{xx} \eqno (147b)
$$
The M-XXXIV equation:
$$
2iS_t=[S,S_{xx}]+2iuS_x \eqno(148a)
$$
$$
u_t + u_x + \lambda (f)_x = 0 \eqno (148b)
$$
The M-XXXIII equation:
$$
2iS_t=[S,S_{xx}]+2iuS_x \eqno (149a)
$$
$$
u_t + u_x + \alpha(u^2)_x + \beta u_{xxx}+\lambda (f)_x = 0 \eqno (149b)
$$
Here $f = \frac{1}{4}tr(S^{2}_{x}), \quad \lambda =1. $
\end{document}
\begin{document}
\title{\textsc{\SystemName}}
\abstract{Creating comprehensible visualizations of highly overlapping set-typed data is a challenging task due to its complexity.
To facilitate insights into set connectivity and to leverage semantic relations between intersections, we propose a fast two-step layout technique for Euler diagrams that are both well-matched and well-formed.
Our method conforms to established form guidelines for Euler diagrams regarding semantics, aesthetics, and readability.
First, we establish an initial ordering of the data, which we then use to incrementally create a planar, connected, and monotone dual graph representation.
In the next step, the graph is transformed into a circular layout that maintains the semantics and yields simple Euler diagrams with smooth curves.
When the data cannot be represented by simple diagrams, our algorithm always falls back to a solution that is not well-formed but still well-matched, whereas previous methods often fail to produce expected results.
We show the usefulness of our method for visualizing set-typed data using examples from text analysis and infographics.
Furthermore, we discuss the characteristics of our approach and evaluate our method against state-of-the-art methods.}
\keywords{Multi-class visualization, layout, Venn diagrams, Euler diagrams, chain decomposition}
\firstsection{Introduction}
Set-typed data is ubiquitous across many different research areas, such as multi-label classification~\cite{wei2015hcp} in machine learning, RNA and DNA sequencing~\cite{ramirez2018high, hentze2018brave, d2012banana} in computational biology, and topic modeling~\cite{blei2003latent} in natural language processing.
There are two prominent methods to visualize set relations.
Venn diagrams~\cite{venn1880diagrammatic} show all possible relations between sets.
In contrast, Euler diagrams~\cite{leonhard1768lettres} only depict non-empty relations.
Many special-purpose visualizations have been developed for set-specific tasks~\cite{alsallakh2014visual}.
Still, traditional Venn and Euler diagrams remain an essential tool for showing set intersections because they are easy to read, familiar to most users, and can incorporate data points directly.
As such, they are often part of larger systems, such as \textit{UpSet}~\cite{LexGSVP14}.
Due to their combinatorial nature, the construction of Venn diagrams is straightforward.
However, automatically creating Euler diagrams of high quality remains a challenging task, in particular for highly intersecting datasets.
An Euler diagram should only include relations that are present in the data and avoid introducing superfluous areas.
Further, the diagram should be monotone~\cite{Cao2010}.
We call Euler diagrams that adhere to these properties \textit{semantics-preserving}, following the definition of semantics in the domain of linguistics.
Accordingly, representing the data faithfully and preserving neighbourhood relations are a part of semantics, as how a set intersection is read depends on its neighbours.
An example result of our method and the impact of the above-mentioned properties is shown in \autoref{fig:teaser:our-euler}.
The Euler diagram on the right has lost the symmetry of the Venn diagram (\autoref{fig:teaser:our-venn}) but represents the data faithfully.
First, we introduce and formalize the properties of Euler diagrams.
Next, we propose a two-step algorithm for constructing such diagrams efficiently.
The first step computes the \textit{Euler dual}, a graph representation of the diagram.
The second step creates the \textit{Euler diagram}, whose curves follow guidelines~\cite{Blake2016} for creating intuitive Euler diagrams.
We show the usefulness and characteristics of our algorithm on three examples from different domains and compare our method to previous work.
In summary, the main contributions of this paper are:
\begin{itemize}
\setlength\itemsep{0.25em}
\item \textsc{spEuler}, a \textbf{novel method} for constructing semantics-preserving Euler diagrams that yields fast and reliable results.
\item Extensive \textbf{analysis of existing construction methods} and how they relate to properties of the Euler diagrams.
\item \textbf{Three examples} from different domains that show the characteristics and potential of our approach.
\item An \textbf{extensive evaluation} based on established guidelines of Euler diagrams and direct comparison to state-of-the-art methods.
\end{itemize}
\section{Characteristics of Euler Diagrams}\label{sec:properties}
Before we go into the previous work that is related to our method, we want to introduce important properties and concepts of Euler diagrams that will help to understand the subsequent sections.
Formally, an Euler diagram is a set of smooth, closed Jordan curves that represent the different sets~\cite{chow2007generating}.
Together, these curves comprise various areas in the drawing that represent the intersections of the sets.
All set relations that exist in the data can be described by the \textit{abstract description}---a list of the existing intersections.
Euler diagrams can exhibit several different properties that directly influence their appearance and effectiveness in visualizing information.
The two most important properties are well-formedness and well-matchedness, as defined by Chow~\cite{chow2007generating}.
\paragraph{Properties}
An Euler diagram is \textbf{well-formed} if it is \textit{simple} (i.e., at most two curves meet at any given point and there is no concurrency), and exactly a \textit{single curve} represents each set.
In a \textbf{well-matched} Euler diagram, all intersections are correctly represented, thereby retaining the semantics from the original data: each intersection is represented only once, and the diagram does not contain areas of intersections that are not part of the abstract description.
Alsallakh et al.~\cite{alsallakh2014visual} discuss different properties of algorithms for Euler diagrams and their connection to well-formedness.
However, there is no such discussion for the well-matchedness and the interplay between both properties, which plays a big role in the effectiveness of the diagram~\cite{Gurr99}.
The two properties are visualized in \autoref{fig:properties}, which shows a Venn diagram with 4 curves and their 16 intersections.
We use uppercase letters to refer to a curve or all nodes that participate in a set, and lowercase letters to refer to specific intersections, which are faces (also called \textit{zones}) in the diagram.
We will revisit this simple example throughout the next sections to help showcase our method.
\autoref{fig:properties} shows the visual differences of adhering to only one or both of these two properties for the same data. Each zone is marked with its respective intersection.
As can be observed in \autoref{fig:properties:well-matched}, all four curves intersect on the lower-left corner, resulting in concurrent lines.
By creating a well-matched and well-formed diagram, this can be avoided (\autoref{fig:properties:well-formed}).
It is important to note that there exist many abstract descriptions for which both properties cannot be satisfied at the same time, requiring a trade-off.
However, as analyzed by Chow~\cite{chow2007generating}, it is currently not possible to infer from a given abstract description whether both properties can be maintained.
If a trade-off has to be made, we adhere to the guidance of the work by Chapman et al.~\cite{ChapmanSRMB14},
which concludes that users prefer well-matched diagrams over well-formed ones.
As a result, in these cases, our algorithm always produces well-matched diagrams while minimizing the violations of well-formedness.
\paragraph{Euler Dual}
A key concept that frequently shows up in construction algorithms is modeling the Euler diagram as a graph.
Instead of thinking about the Euler diagram as a set of curves, it can be modeled directly as an edge-labeled graph, called the \textit{Euler graph}.
In this representation, each intersection of the curves is represented by a node, and each curve segment is represented by a link, labeled with the respective curve of the underlying original Euler diagram.
Instead of creating the Euler diagram directly from the data using curves, it is also possible to indirectly create it by constructing the \textbf{Euler dual} of the Euler graph.
Each node in the Euler dual represents a face of the Euler graph, and neighboring zones are represented by linked nodes in the Euler dual. However, in theory, all nodes that differ by one set could be linked in the dual---a graph that contains all possible links is therefore called the \textit{super dual}.
The \textit{rank} of a node in the Euler dual equals the number of sets participating in that intersection.
We can find an ordered representation of the Euler dual by grouping all nodes of the dual that have the same rank. The resulting graph is the \textit{rank-based Euler dual}. \autoref{fig:properties:duals:well-matched} and \autoref{fig:properties:duals:well-formed} show the respective rank-based duals of \autoref{fig:properties:well-matched} and \autoref{fig:properties:well-formed}---the non-pairwise intersection of \autoref{fig:properties:well-matched} is equal to the face $\Set{ABCD}$ in \autoref{fig:properties:duals:well-matched}. In comparison, all the faces of \autoref{fig:properties:duals:well-formed} are quads---we will explain what this means for the diagram in \autoref{sec:euler-dual}.
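The super dual and node ranks can be made concrete in a few lines of code. A small illustrative sketch in Python (our own naming and a toy abstract description, not the paper's implementation):

```python
from itertools import combinations

def super_dual(abstract_description):
    """Super dual of an abstract description: nodes are zones, and two
    zones are linked iff their set memberships differ by exactly one set."""
    zones = [frozenset(z) for z in abstract_description]
    links = [(a, b) for a, b in combinations(zones, 2)
             if len(a ^ b) == 1]   # symmetric difference of size 1
    return zones, links

def rank(zone):
    """Rank of a dual node: number of sets participating in the intersection."""
    return len(zone)

# Toy abstract description; the empty string denotes the outer zone.
zones, links = super_dual(["", "A", "B", "AB", "ABC"])

assert rank(frozenset("ABC")) == 3
assert (frozenset(""), frozenset("A")) in links      # differ by A
assert (frozenset("AB"), frozenset("ABC")) in links  # differ by C
assert (frozenset("A"), frozenset("ABC")) not in links
```

Grouping the resulting nodes by `rank` then yields the rank-based Euler dual described above.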
\begin{figure}
\caption{well-matched}
\label{fig:properties:well-matched}
\caption{well-formed and well-matched}
\label{fig:properties:well-formed}
\caption{Euler dual of (a)}
\label{fig:properties:duals:well-matched}
\caption{Euler dual of (b)}
\label{fig:properties:duals:well-formed}
\caption{
(a) A well-matched diagram and (b) an additionally well-formed diagram.
Well-matched diagrams may exhibit concurrent curves and points where more than two curves intersect, e.g., the intersection of curves $\Set{ABCD}$.}
\label{fig:properties}
\end{figure}
\begin{table*}[htb]
\small
\centering
\caption{Details of different construction methods for any number of curves and their properties.}\label{table:Properties}
\begin{tabular}{lcccccccc}
\toprule
Method & Construction & \makecell{Any \\ relation} & \makecell{\bf Well-\\\bf matched} & \makecell{\bf Well-\\\bf formed} & \makecell{Monotonicity} & simple & \makecell{Duplicate \\ curves} & \makecell{Non-pairwise \\ intersections} \\
\midrule
SCD~\cite{ruskey2006search} / nVenn~\cite{Perez-SilvaAQ18} & Euler dual & yes & no & no & yes & no & no & yes\\
Stapleton\revised{~\cite{StapletonFRH12}}/ Rodgers~\cite{RodgersSAMBT16} & direct & yes & no & no & yes & yes & yes & no\\
\midrule
Venn~\cite{venn1880diagrammatic} & direct & yes & no & \textbf{yes} & yes & yes & no & no\\
Edwards~\cite{edwards1989venn} & direct & yes & no & \textbf{yes} & yes & yes & no & no \\
vennEuler~\cite{Wilkinson2012} & direct & no & no & \textbf{yes} & yes & yes & no & no\\
eulerr~\cite{Larsson2020} & direct & no & no & \textbf{yes} & yes & yes & no & no \\
\midrule
Chow-Ruskey~\cite{ChowR05} & Euler dual & yes & \textbf{yes} & no & yes & no & no & yes \\
Simonetto~\cite{SimonettoA09} & Intersection graph & yes & \textbf{yes} & no & yes & yes & yes & no\\
MetroSets~\cite{Jacobsen2021} & hypergraph & yes & \textbf{yes} & no & no & - & - & yes\\
\midrule
Flower~\cite{FlowerH02,FlowerFH08}\textsuperscript{\textasteriskcentered} & Euler dual & no & \textbf{yes} & \textbf{yes} & yes & yes & yes & no\\
\textbf{Our method} & Euler dual & no & \textbf{yes} & \textbf{yes} & yes & yes & yes & no\\
\bottomrule
\end{tabular}
\parbox{0.85\textwidth}{\footnotesize
\small
\textsuperscript{\textasteriskcentered}Note: The authors only provide a rough sketch of their method.
}
\end{table*}
\section{Related Work}\label{sec:related}
Many set visualization approaches have been proposed in the past. Good starting points are the survey of Venn diagrams by Ruskey and Weston~\cite{ruskey1997survey}, or Rodgers~\cite{Rodgers14}, who focuses on Euler diagrams. Alsallakh et al.~\cite{alsallakh2014visual} offer a comprehensive survey of set visualizations and group the techniques based on their best-suited tasks: Element tasks, set relation tasks, and element attributes tasks.
\subsection{General Set Visualization}
Alternative approaches to visualize set-typed data are matrix and aggregation-based techniques, such as UpSet~\cite{LexGSVP14} or RadialSets~\cite{AlsallakhAMH13}.
These are usually very well suited for element and element attribute tasks. However, they can become verbose when all set relations of complex data are shown at once.
For spatial data, such as maps, there are also techniques that focus on highlighting the connections between sets, such as BubbleSets~\cite{collins2009Bubblesets} or KelpFusion~\cite{MeulemansRSAD13}.
Most methods are not able to directly encode information about the original data points in a unified visualization.
For this task, Venn and Euler diagrams are especially well suited and therefore have been combined with glyphs~\cite{MicallefDF12}, and graphs~\cite{MuttonRF04, SathiyanarayananSBH14}.
Finally, Jacobsen et al.~\cite{Jacobsen2021} propose using the metro map metaphor to visualize set relations in their MetroSets technique.
The visualization can show individual data points for each set relation, and the layout can be fine-tuned according to different optimization strategies.
\subsection{Constructing Venn and Euler Diagrams}
Venn diagrams always show all possible set relations, with many different methods for their construction~\cite{venn1880diagrammatic, edwards1989venn, ruskey2006search, Rodgers14, Bannier2017}.
Euler diagrams are more flexible in this regard, but many construction algorithms are limited to specific abstract descriptions and might produce unexpected results~\cite{chow2007generating, RodgersZF08, FlowerFH08,micallef2014eulerforce, simonetto2016simple}.
Inductive methods construct diagrams by adding one curve at a time.
Venn himself proposed an inductive method to create diagrams for any number of curves.
Edwards later proposed an alternative inductive construction method that creates diagrams by projecting the curves onto a sphere~\cite{edwards1989venn}.
This method always creates diagrams that are well-formed and well-matched. However, for a larger number of sets, the result becomes hard to understand as the area of new zones becomes smaller and smaller.
Other methods focus on the creation of simple, convex Venn diagrams, e.g., Mamakani et al.~\cite{MamakaniMR12}, which are aesthetically more pleasing.
Ruskey et al.~\cite{ruskey2006search} use a general Venn construction method to analyze methods that create symmetric Venn diagrams.
nVenn~\cite{Perez-SilvaAQ18}, a recently developed area-preserving Euler-like visualization technique, allows users to get a compact overview, even for larger set counts.
It uses a conventional Venn construction algorithm~\cite{ruskey2006search} as its initial layout and adapts it using a force-directed optimization. It heavily relies on the initial position\revised{ing and}\st{s during construction, as well as the} parameters of the force-directed strategy.
If the given dataset does not cover all possible set relations, Venn diagrams produce\st{, as mentioned above,} additional (unwanted) zones, and the diagram is not well-matched.
For diagrams that are not well-matched, there is a discrepancy between the semantically correct representation of the abstract description and the visualization.
Oftentimes, this problem is solved using shading to mark such additional faces~\cite{venn1880diagrammatic, StapletonFRH12}.
In any case, this encodes unnecessary information that the reader has to process. A solution to this mismatch is well-matched Euler diagrams.
By design, methods that create Euler diagrams usually produce well-matched results.
Their drawback, however, is that they often cannot make any guarantee about the aesthetics of the diagrams, i.e., their well-formedness.
Results might contain crossings, concurrent curves, and non-smooth shapes.
To alleviate this problem, Stapleton et al.~\cite{StapletonFRH12} proposed an inductive method to create (semi) well-formed Euler diagrams using circles.
Such diagrams weaken the well-formedness constraints and allow curve labels to be used multiple times.
A current hindrance in the application of Euler diagrams is that most methods only produce expected results for certain datasets.
Users do not know beforehand which method will produce well-formed or well-matched diagrams or if it will produce a valid result at all.
\st{It is important to note that most e}Existing implementations often fail silently without producing any results or create unwanted zones without communicating this to the user.
It is challenging to create a well-formed and well-matched diagram for any abstract description because of the intricate interplay between the different properties.
Therefore, many construction methods that only optimize for one property often cannot make guarantees for the others.
This can be seen in \autoref{table:Properties}.
Usually, an Euler diagram is either \textit{directly} constructed via curves or \textit{indirectly} through an intermediate representation, which is then transformed into the Euler diagram.
Examples are constructions using the Euler dual, Euler graph, connectivity graph, closeness graph, or intersection graph.
Based on the surveys by Ruskey~\cite{ruskey1997survey}, Rodgers et al.~\cite{Rodgers14}, and Alsallakh et al.~\cite{alsallakh2014visual}, we created \autoref{table:Properties}, in which we compare different properties of Euler and Venn construction methods.
It should be noted that the properties of the final Euler diagram highly depend on the used construction steps as well as the properties of the intermediate representations.
As we can observe from the table, direct construction methods usually produce well-formed diagrams, as they directly model the curves.
This means the produced curves are usually constrained heavily, for example, by only using circles.
As a trade-off, they only produce Venn diagrams or introduce unwanted zones for higher set counts.
Alternatively, indirect methods only create the exact intersections needed and then transform the graph to the diagram but fail to create well-formed diagrams from them.
Some methods try to transform non-well-formed diagrams into more aesthetic ones, but doing this in hindsight is often not possible.
Examples can be found in~\cite{StapletonRHZ11, SimonettoA09, RodgersZF08, StapletonFRH12}.
There is only a single approach that allows for the creation of Euler diagrams with any number of curves that are both well-matched and well-formed.
Flower et al.~\cite{FlowerFH08} provide an initial sketch of a solution but no general implementation.
They resort to heuristics to create solutions for fewer than five curves.
There are two main differences between our algorithm and the approach by Flower et al.~\cite{FlowerFH08}: They do not use the rank-based dual as an intermediate, and they cannot fall back to a sub-optimal solution when no well-formed and well-matched result exists.
\subsection{Evaluation of Euler Diagrams}
As mentioned previously, the properties of Euler diagrams can be generally divided into well-matched and well-formed diagrams.
However, there are many more properties that influence the semantics (e.g., monotonicity) and the aesthetics (e.g., shape, color, and symmetry).
A general overview is given by Blake et al.~\cite{Blake2016}, which introduces different guidelines that good Euler diagrams should adhere to.
\revised{They directly compare real-world examples with adapted diagrams that follow their proposed guidelines.
This comparison shows that such diagrams improve user comprehension.
However, it is still unknown which of the guidelines has the largest impact, and how they might influence each other.}
\st{Further, t}\revised{T}here are several studies that analyze the readability of well-matched vs. well-formed diagrams~\cite{ChapmanSRMB14, wallinger2021readability}. \st{and how diagrams can be combined with graphs~\cite{RodgersSAMBT16} to solve tasks concerning set visualizations.}
\revised{Chapman et al.~\cite{ChapmanSRMB14} compare various types of set diagrams
\st{ regarding task completion time and error rates}
and found that linear diagrams outperform all other methods, followed by unshaded Euler diagrams.
They explain their results by the well-matchedness of those approaches, combined with well-formedness as a secondary influence.
Rodgers et al.~\cite{RodgersSAMBT16} evaluate methods that combine the Euler diagram with a graph of the datapoints. As their results are not consistent with previous studies of the same methods, they suggest that this might be due to them using datapoint-specific tasks, whereas the previous studies used intersection-related tasks.
They conclude that for graph specific tasks,
the properties that we summarize as ``semantics preserving'' may
explain why some methods perform better than others.}
Wallinger et al.~\cite{wallinger2021readability} compare Euler diagrams with MetroSets and \revised{LineSets}\st{BubbleSets} for set-related tasks.
To conclude: it is still an open problem to design and implement an algorithm that produces well-matched and well-formed Euler diagrams for any number of curves if the abstract description allows for it.
Generating Euler diagrams with specific properties was also identified as an open problem by Alsallakh et al.~\cite{alsallakh2014visual}.
Depending on the existing relations in the data, some properties are impossible to guarantee.
We therefore propose a semantics-preserving construction method that generates Euler diagrams for any number of curves.
It creates well-matched and well-formed diagrams if allowed for by the data.
If not, we retain well-matchedness and relax as few of the individual well-formedness properties as possible.
\begin{figure*}
\caption{Abstract description}
\label{fig:overview:abstract-description}
\caption{Rank-based Euler dual}
\label{fig:overview:rank-based-dual}
\caption{Circular layout}
\label{fig:overview:circular-dual}
\caption{Final diagram}
\label{fig:overview:euler-diagram}
\caption{
Overview of our method: After finding all set intersections that exist in the dataset (a), the rank-based Euler dual is created from the abstract description (b). The graph is then transformed to be circular, and nodes are arranged in a well-distributed manner across several rings (c). In (d) we create the final curve for each set using splines.}
\label{fig:overview}
\end{figure*}
\section{Overview}\label{sec:overview}
Our method constructs Euler diagrams for a given list of sets and their intersections---the abstract description.
For example, the three sets $\{ \Set{A}, \Set{B}, \Set{C} \}$ can have relations
$\{ \emptyset,\Set{a},\Set{b},\Set{c},\Set{ab},\Set{bc},\Set{ac},\Set{abc}\}$, where we abbreviate the zone $\Set{A} \cap \Set{B} \cap \Set{C}$ with $\Set{abc}$.
However, in real-world data usually not all intersections are realized, for example, the intersection $\Set{ac}$ could be missing.
For some abstract descriptions, it is possible to find well-matched and well-formed diagrams.
However, there are also many configurations where this is not possible---in these cases our algorithm yields well-matched diagrams, while minimizing the violations of the well-formedness property.
We provide further discussion on the influence of the abstract description on these properties \st{of the diagram} in \autoref{sec:discussion}.
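To make the relation between an abstract description and the full power set of zones concrete, the following sketch (using the same ad-hoc zone-string encoding as above, not our prototype's data structures) enumerates all possible zones and reports which ones a given description leaves unrealized:

```javascript
// Enumerate all non-empty zones (set intersections) for a list of set labels.
function allZones(sets) {
  const zones = [];
  for (let mask = 1; mask < (1 << sets.length); mask++) {
    let zone = "";
    for (let i = 0; i < sets.length; i++) {
      if (mask & (1 << i)) zone += sets[i];
    }
    zones.push(zone);
  }
  return zones;
}

// Zones that are possible in principle but absent from the abstract description.
function missingZones(sets, description) {
  const present = new Set(description);
  return allZones(sets).filter(z => !present.has(z));
}

// For sets {A, B, C} with the intersection "ac" missing from the data:
const description = ["a", "b", "c", "ab", "bc", "abc"];
// missingZones(["a", "b", "c"], description) → ["ac"]
```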
Our algorithm consists of four main steps, see also \autoref{fig:overview}.
Starting from the \textbf{abstract description} (\autoref{fig:overview:abstract-description}), we first find the appropriate order in which we place each set.
Our algorithm then iteratively grows the graph based on this order while ensuring that new nodes conform to the well-formed property.
After finding the connected, planar \textbf{rank-based Euler dual} (\autoref{fig:overview:rank-based-dual}), our algorithm arranges the nodes in a \textbf{circular layout} (\autoref{fig:overview:circular-dual}), which we then use to draw the curves that correspond to each set.
Because of the properties of the dual, it is possible to generate a planar \textbf{Euler diagram} (\autoref{fig:overview:euler-diagram}) from this circular layout.
We use smooth curves to create compact and simple shapes.
In contrast to other techniques, we guarantee a semantic match between the data and the final diagram. In addition, by creating mostly simple set curves and diagrams, we support the readability of the diagram, avoiding unnecessary crossings and concurrency of curves.
To demonstrate the usefulness and evaluate the characteristics of our algorithm, we implemented a prototype in JavaScript and D3\footnote{D3: \texttt{\url{https://www.d3js.org}}}.
This implementation also allows stepping through the individual steps of our algorithm.
The prototype shows exemplary abstract descriptions that can be found in the paper, as well as different interactions that support set-related tasks such as visual identification of subsets and hovering.
The implementation of our prototype, together with the example datasets, can be found online\footnote{ \url{https://github.com/RKehlbeck/spEuler}}.
\begin{figure*}
\caption{
Creating the Euler dual:
(a) shows the initial order of nodes and their respective groups.
(b) Simply inserting the nodes with this initial order results in a dual that is non-planar.
(c) We first remove crossings by changing the insertion order.
(d) We finalize the graph by choosing the consecutive ones sequence which does not destroy monotone faces.
The final result is a planar graph.
}
\label{fig:insertion}
\end{figure*}
\begin{algorithm}[t]
\DontPrintSemicolon{}
\SetKwFunction{createDual}{createDual}
\SetKwProg{fn}{function}{}{}
\fn{\createDual{$ nodes[] $}}{
$ G \gets $ group $nodes$ by extended set and rank\; \label{algo:grouping:1}
$ G \gets $ sort $G$ by $rank(G_{i_{0}})$ and $len(G_{i})$ \; \label{algo:grouping:2}
\ForAll{$S$ in $G$}{
$R \gets$ group $S$ by rank \; \label{algo:CO:1}
\ForAll{$r$ of $R$}{
${cos} \gets$ Calculate COs for $r$ \; \label{algo:CO:2}
$r \gets $ sort $r$ by len($cos$) and $dist\_twin$ \; \label{algo:CO:3}
\ForAll{$n$ of $r$}{
$list \gets []$ \;
\ForAll{co of $cos_{n}$}{
\ForAll{possible position p of co}{
$pp \gets \{len(co), \#mono, \#cross\}$ \; \label{algo:insert:1}
list.push(pp) \;
}
}
sort $list$ to maximize monotone faces\;
insert\_node($list$[0])\; \label{algo:insert:2}
remove\_crossings() \; \label{algo:insert:3}
}
}
}
}
\caption{Dual construction algorithm}
\label{alg:sort-nodes}
\end{algorithm}
\section{Rank-based Euler dual}\label{sec:euler-dual}
We already introduced the rank of an intersection in \autoref{sec:properties}, which is the number of its participating sets.
In the context of the Euler dual, we will call these intersections \textit{nodes}.
For example, the node $\Set{a}$ has rank $1$, while node $\Set{ab}$ has rank $2$.
In the rank-based Euler dual, there can only be a link from a node with rank $r$ to a node with rank $r+1$.
This means that with each link, an additional set gets added to the intersection.
In the example above, this is the set $\Set{b}$, which we call the \textit{color} of a link.
By definition, a link always has a single distinct color.
Accordingly, we color the links in the figures containing Euler duals throughout this paper.
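The link condition and its color can be sketched as follows (zone strings as an illustrative encoding; not the actual implementation):

```javascript
// Two zones are linked in the rank-based Euler dual iff the higher-rank zone
// contains every set of the lower-rank zone plus exactly one additional set.
// That additional set is the "color" of the link; returns null if no link.
function linkColor(lower, upper) {
  if (upper.length !== lower.length + 1) return null;
  const added = [...upper].filter(s => !lower.includes(s));
  return added.length === 1 ? added[0] : null;
}

// linkColor("a", "ab") → "b"   (the link adds set b)
// linkColor("a", "bc") → null  (bc is not a superset of a)
```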
Our goal is to draw Euler diagrams with minimal violations of the well-formedness property.
As we create our diagram using the dual, we need to find the equivalent property in the dual that guarantees a well-formed result---the faces of the dual.
A face in the Euler dual is an area that is enclosed by links and nodes.
\textit{Monotone faces} are enclosed by exactly four links that have two alternating colors.
It is also important to note that by this definition, monotone faces always span exactly three ranks in the Euler dual.
Monotone faces are essential for well-formedness, as they limit how many curves can intersect at a given face---this directly corresponds to how simple the final Euler diagram will be.
\autoref{fig:properties} shows an example of a monotone and a non-monotone face.
Our goal is therefore to \textit{remove} possible links and \textit{reorder} the nodes across all ranks until we are left with a \textit{connected}, \textit{crossing-free} and \textit{monotone} version of the dual.
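The monotone-face condition itself is easy to check; a minimal sketch (the list-of-colors encoding of a face boundary is ad hoc, not the prototype's data structure):

```javascript
// A face of the dual is monotone if it is enclosed by exactly four links
// whose colors alternate between two distinct values.
// `colors` lists the link colors in the order they enclose the face.
function isMonotone(colors) {
  if (colors.length !== 4) return false;
  const [a, b, c, d] = colors;
  return a === c && b === d && a !== b;
}

// isMonotone(["b", "c", "b", "c"]) → true  (alternating two colors)
// isMonotone(["b", "c", "d", "c"]) → false (three colors involved)
```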
Computing the rank-based dual can be structured into three parts:
First, we group the nodes and decide in which order these groups should be placed (lines \ref{algo:grouping:1}--\ref{algo:grouping:2}).
Second, we look at each of the groups and sort the nodes by rank, and improve the sorting using \textit{consecutive ones sequences} (\autoref{sec:euler-dual:co}) and distance to the previous set group (lines \ref{algo:CO:1}--\ref{algo:CO:3}).
Finally, we place each node so that it maximizes the number of monotone faces while linking them to the already existing nodes in the graph, removing unwanted crossings (lines \ref{algo:insert:1}--\ref{algo:insert:3}).
\subsection{Grouping by Participating Sets}\label{sec:euler-dual:grouping}
As described in the previous section, each node has one or more participating sets.
To create the dual, we start by separating these nodes into groups (\autoref{algo:grouping:1}).
We first sort the sets by the lowest rank of each set and the number of nodes they participate in (\autoref{algo:grouping:2}).
In practice, this means that sets that contain nodes with lower ranks such as $\Set{a}$ are considered first.
We then iterate over the individual sets and group nodes that extend the nodes of the previous set with the current set (set extension) in the previously computed order.
An example of this can be seen in \autoref{fig:insertion}a:
Nodes are arranged vertically by rank and the different colors represent the resulting groups from each extension step.
The numbers for each node describe the order in which nodes are added to the groups.
Grouping the nodes using the extension of each set gives us a general order in which nodes are inserted into the dual graph.
For each group, nodes are inserted in a rank-based order, meaning nodes with a lower rank are placed first.
However, this on its own is not enough.
If we were to insert the nodes in the order determined only by their rank and grouping order, the resulting dual would not be planar.
This can be seen in \autoref{fig:insertion}b.
Here, we are currently inserting the nodes from the red set with rank 2.
If we naively insert and connect nodes, each node of the rank is connected to all nodes in the rank above using all possible links.
Therefore, we need to establish the correct order for the nodes within a group, and the correct subset of links, which we will discuss next.
\subsection{Consecutive Ones Sequences}\label{sec:euler-dual:co}
To determine the order of nodes within a group we need to introduce the concept of \textit{consecutive ones} (CO).
Imagine the following scenario: given the adjacency matrix of a graph, the graph has the consecutive ones property if we can reorder the rows of the matrix so that all $1$s in each column are consecutive.
This property was defined by Booth and Lueker~\cite{BoothL76} and holds for graphs that have a planar embedding.
A \textit{consecutive ones sequence} is then a group of consecutive nodes.
As we want to create planar duals, we can use this property in our construction algorithm.
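Computing the CO sequences of a node can be sketched as follows (a simplified model: we take the current order of a rank and the subset of nodes a new node could link to; the encoding is illustrative, not the actual implementation):

```javascript
// Given the ordered nodes of a rank and the subset a new node could link to,
// compute the consecutive ones (CO) sequences: maximal runs of linkable
// nodes that are adjacent in the current order.
function coSequences(rankOrder, linkable) {
  const seqs = [];
  let current = [];
  for (const node of rankOrder) {
    if (linkable.has(node)) {
      current.push(node);
    } else if (current.length) {
      seqs.push(current);
      current = [];
    }
  }
  if (current.length) seqs.push(current);
  return seqs;
}

// With rank order [ab, ac, bc, bd] and possible parents {ab, bc, bd}:
// coSequences(...) → [["ab"], ["bc", "bd"]] — two sequences, so only one
// of them can be realized without introducing a crossing.
```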
Remember our goal is to insert nodes into the Euler dual so that as many monotone faces as possible are created.
\st{Here it is important to note that w}We do not need to realize all possible links between nodes.
The only thing that we need to ensure is that the resulting graph is \textit{planar} and \textit{connected}.
In the rank-based dual, it suffices to ensure the CO property for neighboring ranks.
As there are more links in the abstract description than we need for the Euler dual, there are also multiple potential CO sequences, from which we choose the CO sequence that \textbf{maximizes the number of monotone faces}.
Recalling the overall goal of maximizing monotone faces, the longer a consecutive ones sequence is, the more monotone faces are closed and created when the corresponding node is inserted.
Therefore, we change the order of the nodes on each rank, so that nodes with longer CO sequences are placed before nodes with shorter consecutive ones sequences.
If the lengths of the CO sequences are equal, we further sort the nodes by their respective \textit{twins} in the previous group (the same node without the current set), ordering them by their distance to the closest CO sequence of length 1 of the current group in the rank above (dist\_twin in \autoref{algo:CO:3}).
In the example of \autoref{fig:insertion}c, we can see that by reordering the nodes so that we first place node \nodeIcon{redNode}{12} and \nodeIcon{redNode}{9}, and then node \nodeIcon{redNode}{10}, the nodes \nodeIcon{redNode}{12} and \nodeIcon{redNode}{9} will not produce crossings anymore.
However, as shown in \autoref{fig:insertion}c, some crossings still remain. To adhere to the CO property, all possible CO sequences of a node have to be reduced to a single CO sequence.
For example, node \nodeIcon{redNode}{10} has two possible parent nodes in the rank above---nodes \nodeIcon{orangeNode}{2} and \nodeIcon{redNode}{8}.
These two are not adjacent and therefore generate two CO sequences.
So, to insert \nodeIcon{redNode}{10}, we have to choose one of the two.
For each CO sequence and for each possible position in the current rank, we calculate a set of attributes that helps to make the decision where to place it.
These attributes consider the length of the CO sequence, and the change in \#monotone faces and \#crossings (\autoref{algo:insert:1}).
We collect these attributes across the CO sequences and the possible positions of the node in a list.
As an example, a new node might destroy an existing face if we place it inside that face: because the newly inserted node has to be connected to the next rank, a crossing will appear, and the \#monotone faces decreases.
After we have sorted the list accordingly, we insert the node at the current best position (\autoref{algo:insert:2}), resolve crossings (\autoref{algo:insert:3}), and move on to the next node.
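The selection of the best insertion position from the collected attribute tuples can be sketched as follows; note that the exact tie-breaking order shown here is our own assumption for illustration:

```javascript
// Rank candidate insert positions by their attribute tuple: prefer longer
// CO sequences, then more monotone faces, then fewer crossings
// (a simplified stand-in for the attribute-based sorting in the algorithm).
function bestPosition(candidates) {
  return [...candidates].sort((a, b) =>
    b.coLength - a.coLength ||
    b.monotone - a.monotone ||
    a.crossings - b.crossings
  )[0];
}

// bestPosition([{coLength: 1, monotone: 2, crossings: 0},
//               {coLength: 2, monotone: 1, crossings: 1}])
// → the second candidate (the longer CO sequence wins first).
```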
Returning to our previous example, which can be observed in \autoref{fig:insertion}d, the CO sequence $\Set{d}$ is chosen, because this way no previously created face is destroyed. This is because the node \nodeIcon{redNode}{8} has already been inserted in the rank above, and has created an open space between the nodes \nodeIcon{redNode}{12} and \nodeIcon{redNode}{9}.
We therefore insert the node \nodeIcon{redNode}{10} into this open space, keeping existing monotone faces intact.
Once all nodes of the current rank have been placed, we continue on to the next rank. If we have placed all nodes of the current group, we move on to the next set, get all its nodes, sort the nodes on each rank, and insert the nodes, rank by rank. Using this method, it is possible to create Euler duals that are \textit{connected}, \textit{planar} and contain only \textit{monotone faces}.
We will discuss problematic cases, where this is not possible, in \autoref{sec:discussion}.
\section{Circular Layout of the Euler diagram}\label{sec:circular-layout}
Based on the rank-based Euler dual, we can create the curves of the final Euler diagram.
We do this by first removing the empty set and then arranging the nodes in a circular layout (\autoref{fig:overview:circular-dual}).
At the center of this layout is the intersection with the largest rank, which is usually the full set.
The other nodes are placed on rings around the center, depending on their rank.
Using this layout, we then devise a strategy to draw smooth curves that result in the final diagram (\autoref{fig:overview:euler-diagram}).
\subsection{Circular Layout}
To guarantee a good distribution of nodes on each ring, we need to place the nodes at well-defined distances to each other.
The rank with the largest number of nodes---usually the middle rank---is placed first to guarantee an overlap-free and well-distributed result.
Rings are then placed outwards and inwards of this rank.
The radii are chosen so that there is still enough space in the inner rings for all nodes, while the outer rings are not too distant from each other.
Distributing nodes evenly on each ring can result in clutter in the ranks above and below.
Therefore, we place nodes so that enough space is reserved for their children and parents.
Accounting for this, nodes with many children require more space compared to nodes with only a few children.
This approach is similar to the layout of \textit{radial trees}, but with the tree growing in both directions.
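Such a child-weighted ring placement can be sketched as follows (a simplification: it only weights by the number of children, whereas the actual layout also reserves space for parents and chooses overlap-free radii):

```javascript
// Distribute nodes on a ring, reserving angular space proportional to the
// number of children each node has (akin to a radial tree layout).
function ringPositions(nodes, radius) {
  const weights = nodes.map(n => Math.max(1, n.children));
  const total = weights.reduce((s, w) => s + w, 0);
  let angle = 0;
  return nodes.map((n, i) => {
    const span = (2 * Math.PI * weights[i]) / total;
    const mid = angle + span / 2; // node sits at the center of its slot
    angle += span;
    return { id: n.id, x: radius * Math.cos(mid), y: radius * Math.sin(mid) };
  });
}

// A node with 3 children gets three times the angular slot of a leaf node.
```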
The circular layout is then used to create the final Euler diagram.
\autoref{fig:curves:circular-dual} shows the circular distribution of the Euler dual from \autoref{fig:properties:duals:well-matched}.
On each ring, the nodes are well-distributed.
\subsection{Drawing Curves}
To create an Euler diagram from the dual, the simplest approach would be to use the convex hull of the nodes for each set to create a closed shape.
However, such a curve would not consider the nodes outside of the current set.
This results in closed curves that create many unwanted zones\st{for a single intersection} and a very uneven distribution of areas across the faces.
This is clearly not well-matched and decreases readability.
Therefore, we developed an approach to directly control the curve of each set by introducing additional virtual nodes that act as control points for its shape.
We call these nodes \textit{gate nodes}.
They lie on the same circular path as the intersection nodes but are distributed so that they always lie at the midpoint between two nodes on the ring (\autoref{fig:curves:circular-layout}, dark gray circles).
When we move between ranks, we cross different circular paths in the circular layout, depending on the ranks of the current and following link.
As we want to control the shape, we define where this crossing is allowed to happen: only at a gate node position.
Due to the properties of the circular graph, which is still a dual of the Euler diagram, we can then create a set curve by finding the order of the links of each set and connecting the midpoints of the links with the gate nodes.
This generates a path that moves between the rings, ``cutting'' the dual graph into two disconnected components.
Using the gate nodes in combination with the midpoints of the links in their respective link order, we create a mostly compact, closed curve for each set.
Additionally, we can control the shape of the curve by using different interpolation strategies and adapting the link midpoints.
We achieved the best curve results using Catmull-Rom splines.
\autoref{fig:curves:circular-layout} shows a circular graph with the shape of a set defined by intersection link midpoints and gate nodes.
Even though we use splines in our approach to maximize the smoothness, it would be possible to constrain the curve further to generate curves that can only use diagonal, rectangular, or octagonal lines.
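The placement of gate nodes at the angular midpoints between neighboring intersection nodes can be sketched as follows (a simplified model with angles in radians; not the actual implementation):

```javascript
// Gate nodes lie on the same ring as the intersection nodes, at the angular
// midpoint between each pair of neighboring nodes (wrapping around the ring).
function gateNodes(nodeAngles, radius) {
  const sorted = [...nodeAngles].sort((a, b) => a - b);
  return sorted.map((a, i) => {
    const next = i + 1 < sorted.length ? sorted[i + 1] : sorted[0] + 2 * Math.PI;
    const mid = (a + next) / 2;
    return {
      angle: mid % (2 * Math.PI),
      x: radius * Math.cos(mid),
      y: radius * Math.sin(mid),
    };
  });
}

// Two nodes at angles 0 and π/2 yield gates at π/4 and 5π/4.
```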
\subsection{Concurrency}
Euler duals that only consist of monotone faces will only have pairwise crossings in the diagram.
However, if we have a non-monotone face, this is not the case.
Instead, we will create non-pairwise crossings.
Curves that use the same gate nodes to cross a ring will produce a concurrent curve segment.
To retain a well-matched diagram, we control the curves, which avoids creating unwanted zones.
This means that for concurrent segments, each curve is offset according to the order in which they enter the concurrent segment.
As the Catmull-Rom interpolation cannot handle straight line segments easily, we instead split the curve into different segments and add additional points to create segments that are concurrent.
This can be observed in the bottom left part of \autoref{fig:curves:circular-layout}.
If the concurrency is not a straight line but instead happens on the outside of the diagram, we create the curve normally but offset the curves as previously explained.
These are then combined with normal curve segments to create the final closed smooth shape for each set.
\st{We tried different strategies to visualize the concurrency, e.g., alternating stripes and concurrent lines.}
\begin{figure}
\caption{To define the shape of the Euler diagram, we order the links for each set along the circles and shape them with gate nodes---shown here as grey dots---between the intersection nodes. This enables us to fine tune their shapes.}
\label{fig:curves}
\end{figure}
\section{Evaluation}
\begin{figure*}
\caption{UpSet plot}
\caption{EulerR}
\caption{VennEuler}
\caption{SetNet}
\caption{nVenn}
\caption{MetroSets}
\caption{Our method}
\caption{Comparing relevant previous works for Euler diagrams of a topic modeling dataset. Problems in the results are marked, e.g., violations of well-formedness (\textbf{P2}).}
\label{fig:topic-modeling:metrosets}
\label{fig:TopicModelling}
\end{figure*}
\begin{figure*}
\caption{Original}
\caption{MetroSets}
\caption{SetNet}
\caption{Our method}
\label{fig:flags:original}
\label{fig:flags:metro-sets}
\label{fig:flags:setnet}
\label{fig:flags:ours}
\label{fig:flags}
\end{figure*}
We directly compare our method to several other state-of-the-art approaches across three different datasets.
Many older set visualization techniques are not publicly available~\cite{SimonettoA09,wallinger2021readability}, so it is not possible to compare against them directly, or they only work on a very limited number of set curves~\cite{Micallef2014}.
We evaluate our method against vennEuler~\cite{Wilkinson2012}, EulerR~\cite{Larsson2020}, SetNet~\cite{RodgersSAMBT16}, nVenn~\cite{Perez-SilvaAQ18}, and MetroSets~\cite{Jacobsen2021}.
vennEuler, EulerR, and SetNet only allow circles as curves, whereas nVenn allows for arbitrarily shaped curves.
MetroSets are conceptually different from the other techniques as they only produce the Euler graph as an output. Therefore, we only compared them based on criteria that can be applied to the lines of the graph, such as line intersections, concurrency, and overall compactness.
Some of these approaches also have an additional weight parameter for each intersection that is used to create area-proportionate diagrams.
Because such factors skew the comparison, we used equal weights for all set intersections in these methods to avoid biasing the individual areas.
MetroSets allow data points to be shown directly---examples can be seen in \autoref{fig:flags:metro-sets} and \autoref{fig:topic-modeling:metrosets}.
This is not supported by other methods, including ours.
To overcome this, the data points in \autoref{fig:flags} and \autoref{fig:teaser} were added manually.
As datasets, we chose three examples from different domains: topic modeling and info-graphics. These datasets show a wide variety in their structure, as well as in the kind of data points that can be overlaid on top of the diagram. We will discuss the results according to well-matchedness and well-formedness, as well as additional properties that we introduce in the next section.
\subsection{Guidelines for Euler diagrams}\label{sec:guidelines}
We base our evaluation on the guidelines proposed by Blake et al.~\cite{Blake2016}.
They define 10 different measures which can be used to judge the quality of Euler diagrams, ranked by their importance.
There are three properties that we will not discuss in detail: Diverging lines, orientation, and color.
Diverging lines are not applicable, and orientation is not considered by any of the presented methods.
Orientation can also easily be changed by rotating the visualization.
In order to strengthen the comparability of the methods, we changed the color and style of all evaluated works to match ours.
As Blake et al.~\cite{Blake2016} found that plain outlines are preferred, we refrain from filling the curves.
Some of the measures, such as \textbf{well-matchedness} (\textbf{P1}{}) and \textbf{well-formedness} (\textbf{P2}{}), have already been defined in \autoref{sec:properties}.
The others will be described briefly.
They all relate to the form of the diagram but are not captured in the notion of well-formedness.
\paragraph{Curve Guidelines}
The \textbf{Compactness} (\textbf{P3}{}) defines how close a shape is to a perfect circle.
Blake et al.~\cite{Blake2016} call this property \textit{shape}, as they only consider circles.
This is closely connected to the convexity of a shape, and there are further studies on the general understanding of convex vs. non-convex shapes~\cite{Hulleman2000, Bertamini2012, Schmidtmann2015}.
In general, they conclude that convexity allows users to finish set-related tasks faster, although it may make individual curves harder to distinguish.
\textbf{Smooth curves} (\textbf{P4}{}) are preferred by users and result in diagrams that are easier to read.
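Blake et al.~\cite{Blake2016} do not fix a formula for compactness. A common formalization, used here purely for illustration (the function name and polygon representation are our assumptions), is the isoperimetric quotient $4\pi A / P^2$, which equals $1$ for a perfect circle:

```python
import math

def compactness(polygon):
    """Isoperimetric quotient 4*pi*A / P**2 of a closed polygon.

    Equals 1 for a perfect circle and decreases as the shape
    becomes elongated or non-convex.
    """
    n = len(polygon)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        area += x0 * y1 - x1 * y0          # shoelace formula
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2
```

A unit square, for example, scores $\pi/4 \approx 0.785$, reflecting that circles are maximally compact among all closed curves.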
\paragraph{Diagram Guidelines}
\textbf{Symmetry} (\textbf{P5}{}) can be beneficial: curves should be as symmetric as possible while retaining the features that distinguish individual faces.
This property measures the similarity across all shapes in a uniform way.
Circular approaches will always retain perfect symmetry, while more relaxed shapes might produce symmetric, pseudo-symmetric, or non-symmetric results.
If the diagram is symmetric, finding a given intersection face can be challenging because many faces will have a similar shape.
Therefore, \textbf{shape discrimination} (\textbf{P6}{}) is another important property: it defines the uniqueness of individual faces and allows for effective search tasks.
\textbf{Zone area equality} (\textbf{P7}{}) measures the area of each face in relation to the other faces.
In general, for Euler diagrams that are not area-preserving, the area of each zone should be as similar as possible.
Area-preserving Euler diagrams, in contrast, try to adapt the size of faces to be equal to a property, for example, to the number of contained data points (cardinality).
Infringing this property means that users might misinterpret the difference in size as a difference in the cardinality of the face.
\subsection{Topic Modeling}
Using \textit{latent Dirichlet allocation}~\cite{blei2003latent}, a common topic modeling algorithm, we extracted 5 topics from a political debate.
The result of such a topic modeling algorithm is usually a list of keywords that describe each topic, together with their probability of belonging to said topic.
We filter keywords to retain words for many combinations of topics while still creating an interesting abstract description that has a well-matched and well-formed diagram.
One common problem with topic modeling results is that it is very hard to compare them visually using just their descriptive keywords.
Often words are attributed to multiple topics, but when they are merely represented as a list, one cannot easily discern this.
These words, however, might be of special interest to the user.
They might describe all the topics very well, in which case the topics might be very similar to each other, or they might be general ``common'' words that should not be considered by the topic modeling algorithm, as they reduce descriptiveness.
In this section, we only show the resulting curves; the full diagram including the words can be found in the supplemental material.
\autoref{fig:TopicModelling} shows the results for the above dataset across all methods.
For easier comparison, we have highlighted problematic zones in the respective diagrams, which result from infringements of the well-matchedness (\textbf{P1}{}), well-formedness (\textbf{P2}{}), and area-equality (\textbf{P7}{}) properties.
Only a subset of the infringements is shown, as the diagrams might otherwise become unreadable.
Some approaches are very similar (vennEuler and EulerR), while others diverge substantially.
Most methods preserve the abstract description faithfully (\textbf{P1}{}).
However, both EulerR and nVenn create intersections that do not appear in the abstract description.
nVenn even realizes some relations with multiple faces that appear in different parts of the visualization.
Regarding well-formedness (\textbf{P2}{}), we can observe that all circular visualizations (b-d) are simple.
However, this comes at a cost: SetNet creates duplicate curves for $\Set{E}$, while the other two approaches are not well-matched.
nVenn and MetroSets are not simple, as they contain non-pairwise crossings and concurrency.
vennEuler, SetNet, and EulerR use circles and are therefore perfectly compact (\textbf{P3}{}).
However, nVenn also produces relatively compact shapes.
MetroSets, on the other hand, produce a very spread out intersection graph that does not fit into a compact shape.
All results that produce Euler diagrams create smooth curves (\textbf{P4}{}).
Symmetry is not considered in any of the related work (\textbf{P5}{}).
Circular methods create zones that are easily distinguishable, whereas the zones produced by nVenn are very similar to each other.
As MetroSets only create intersection nodes, no real shape is created that can be considered here (\textbf{P6}{}). EulerR, vennEuler, SetNet, and nVenn all create very small zones that are difficult to recognize (\textbf{P7}{}).
Our method produces a both well-matched (\textbf{P1}{}) and well-formed (\textbf{P2}{}) result.
The resulting shapes are mostly compact (\textbf{P3}{}).
As we use curve interpolation, the produced curves are smooth (\textbf{P4}{}).
In our method, some curves retain their symmetry at least partly (\textbf{P5}{}).
However, because of this, the zones are also more similar, affecting how easy they are to distinguish (\textbf{P6}{}).
The area of the zones remains relatively equal across all ranks (\textbf{P7}{}).
In summary, our method adheres to the guidelines better than all other related works, except for zone discrimination, where we lie between nVenn and vennEuler/EulerR.
Most importantly, the result is a well-matched and well-formed diagram.
\subsection{Size Venn Diagram}
As a second example, we show a Venn diagram published on \texttt{xkcd.com}\footnote{By \textsc{Randall Munroe} at \texttt{\url{https://xkcd.com/2122}} \ccbync} in \autoref{fig:teaser:original}, which describes possible combinations of words with five different adjectives: \textit{little}, \textit{large}, \textit{small}, \textit{great}, and \textit{big}.
The original visualization uses a 5-Venn diagram to show which words can occur together with these adjectives.
However, there are some combinations for which no words were specified, such as \textit{little}, \textit{large} and \textit{great}.
We can recreate the symmetric 5-Venn diagram used by the author using our algorithm, as can be seen in \autoref{fig:teaser:our-venn}.
The words for each relation are added manually on top of the generated layout.
This gives us the direct equivalent to the hand-made Euler diagram by the author.
Then, we can remove the empty intersections and instead create a well-matched (\textbf{P1}{}) Euler diagram.
In this case, the result is not well-formed (\textbf{P2}{}), so we retain minimal concurrency as well as one non-pairwise intersection.
The resulting curves are mostly compact (\textbf{P3}{}) and smooth (\textbf{P4}{}).
As only a few relations are empty, the diagram retains its high symmetry (\textbf{P5}{}), but in turn, many zones are similarly shaped (\textbf{P6}{}).
The area is evenly distributed across the zones (\textbf{P7}{}).
A comparison across the related works can be found in the supplemental material.
\subsection{Supranational Caribbean Bodies}
We recreate another info-graphic visualization published on Wikipedia commons\footnote{By \textsc{Wdcf} at \texttt{\url{https://w.wiki/39HJ}} \ccbysa}, where countries are grouped by organizations.
In this case, we look at all Caribbean countries that are contained in Supranational Caribbean Bodies.
There are three different bodies: the \textit{Association of Caribbean States}, the \textit{Caribbean Community}, and the \textit{Organization of Eastern Caribbean States}.
However, not all intersections between the three exist, as there are no countries for some relations.
The original visualization uses a 3-Venn diagram to visualize the relations.
Existing relations are filled with flags that represent each country.
This makes the visualization quite large, as a lot of space is needed to visualize the empty intersections, even though no data is shown.
As an additional comparison, we show how SetNet and MetroSets visualize this dataset.
In \autoref{fig:flags:setnet}, we can observe that SetNet does not always preserve well-matchedness (\textbf{P1}{}), as an empty zone is created.
SetNet handles this by placing red dots inside faces that are part of the abstract description.
Since we already show the flags of the countries that belong to each intersection directly, we chose to omit this in our recreation.
MetroSets (\autoref{fig:flags:metro-sets}) shows all the data points, in this case countries, directly in the visualization.
However, the visualization needs a lot of space, as lines extend outwards, resulting in a non-compact shape (\textbf{P3}{}).
Using our technique, we can visualize the relations as a well-matched (\textbf{P1}{}) Euler diagram.
The diagram has concurrency, as some bodies do not contain countries that are only in this body, and is therefore not well-formed (\textbf{P2}{}).
Curves are compact (\textbf{P3}{}) and smooth (\textbf{P4}{}).
The symmetry is limited (\textbf{P5}{}), but still the zones are similar (\textbf{P6}{}).
The area is evenly distributed across the zones (\textbf{P7}{}).
Our visualization allows the reader to immediately see that there are four relations in total.
The central intersection is shared by all three bodies, while a single relation has outer concurrency.
\section{Discussion and Future Work}\label{sec:discussion}
First, we will discuss runtime, problematic abstract descriptions, and alternative construction methods.
Then we will further analyze the influence of design decisions on the aesthetics of the visualization.
\begin{figure}
\caption{non-monotone faces}
\label{fig:non-monotone:no-quad}
\caption{no common sink}
\label{fig:non-monotone:no-sink}
\caption{Examples with problematic abstract descriptions: (a) non-monotone faces will result in complex Euler diagrams. (b) If there is no shared intersection on the highest rank, all sets intersect in the center, causing a non-pairwise intersection.}
\label{fig:non-monotone}
\end{figure}
\subsection{Runtime}
We performed experiments to compare the runtime of our approach to two other state-of-the-art methods.
Because of its optimization strategy, MetroSets does not scale well with the number of nodes, and its runtime grows quadratically~\cite{Jacobsen2021}.
SetNet extends the \textit{iCircles}~\cite{StapletonFRH12} algorithm and runs in polynomial time.
For lower node counts, our algorithm has a similar runtime ($n=64$, $0.072$s) to SetNet ($n=64$, $0.14$s), whereas MetroSets is a bit slower ($n=64$, ${\sim}2$s)\footnote{All experiments were run on a desktop PC with an Intel i5-8400 CPU.}.
For larger numbers of intersection nodes, we can still achieve fast results ($n=1024$, $27.48$s).
Our approach is a greedy algorithm that uses grouping and reordering to reduce the search space of possible positions for a new node.
This allows us to avoid complex optimization strategies and makes the output deterministic.
From our experiments, we expect our algorithm to run in polynomial time, with the grouping of nodes (\autoref{sec:euler-dual:grouping}) as the limiting factor.
However, we hope to prove stronger bounds for this in future work.
\subsection{Problematic Abstract Descriptions}\label{sec:problematic-descriptions}
As we have discussed before in \autoref{sec:properties}, the layout of an Euler diagram strongly depends on the abstract description, and in particular on whether a well-formed solution exists for it.
Our method handles this by relaxing the well-formedness properties if otherwise no such diagram can be found while guaranteeing the well-matchedness property.
In contrast, other related works, such as EulerR, vennEuler, nVenn, or sometimes even SetNet, fail silently for these abstract descriptions or, arguably worse, create diagrams that are not well-matched.
As we rate well-matchedness above all other attributes, this usually means that we create non-monotone faces for problematic abstract descriptions.
An example of this is shown in \autoref{fig:non-monotone}.
If there is no common intersection for all sets, all sets will intersect in the center of the diagram and cause a non-pairwise intersection (\autoref{fig:non-monotone:no-sink}).
If there is no monotone connection between the empty set and the lowest-ranked nodes, the outer curve will have concurrent curves, as can be seen in \autoref{fig:flags:ours}.
There is one aspect of abstract descriptions that we have not considered so far:
in some cases, the resulting diagram as a whole may be disconnected, or only some sets may be disconnected, meaning nodes cannot be connected via a strong monotone link to at least one parent node and one child node.
These datasets can currently not be visualized in our tool.
We plan to remove this limitation in future work by adding special solutions for these disconnected components, for example, separating the nodes so that each disconnected component is visualized by its own curve, or using concurrency on the rings, similar to the method of collapsing faces by Chow~\cite{ChowR05}.
Finally, we want to point out that although our algorithm always produces a valid well-matched result, so far we have not proven that this result is optimal with respect to minimal violations of the well-formedness property.
We also aim to pursue this in future work.
\subsection{Alternative Construction Methods}\label{sec:alternative-construction}
Initially, we also tried alternative construction methods for Euler diagrams.
We adapted the backtracking algorithm that Mamakani et al.~\cite{MamakaniMR12} proposed for finding symmetric Venn diagrams to handle arbitrary Euler diagrams.
The basic idea behind this technique is to find suitable crossings by permuting the order of curves.
We used this approach to investigate the proportion of abstract descriptions for which well-formed diagrams exist.
The number of possible intersections grows drastically with each additional set:
For $n = 4$ there are 65K+ combinations---for $n = 5$ there are already 4M+ possible combinations.
Furthermore, we only consider descriptions where each node has at least one incoming and one outgoing link, all sets have a common source as well as sink, and the empty set always exists.
As described in \autoref{sec:problematic-descriptions}, these are the abstract descriptions for which the diagram is connected and monotone.
This significantly lowers the number of combinations, to 3152 for $n=4$.
Using the modified backtracking, we found that for $n=4$, only 125 out of the 3152 combinations have monotone diagrams (3,96 \%).
In the remaining cases, the backtracking approach finds no solution at all.
The reason for this is simple:
Backtracking cannot readily find sub-optimal solutions, which motivates our proposed method.
By allowing non-monotone faces as the last resort, our approach always yields a valid Euler diagram.
\subsection{Design Considerations}
\paragraph{Curves}
When we create the diagram using our method, we use customized splines to create a single, smooth curve for each set.
We have also performed experiments using convex hulls and linear poly-hulls for drawing curves, but these approaches cannot guarantee well-formedness, which is why we chose Catmull-Rom curves.
In particular, we segment the curve into parts and use different strategies depending on what kind of curve segment occurs.
There are three different types of segments: regular segments, U-turn segments, and concurrent segments.
This differentiation allows us to be flexible in our choice of interpolation strategies, and we can control the smoothness of the curve by adapting the number of control points.
For instances where concurrency cannot be avoided, we tried different approaches to mitigate it, e.g., concurrent lines or thinner lines so that an equal line width is retained.
Another approach would be dashed stroke segments with alternating colors.
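The uniform Catmull-Rom interpolation underlying the regular segments can be sketched as follows. This is the generic textbook formulation under our own naming, not our exact implementation; the segment typing and concurrent-segment handling described above are omitted:

```python
def catmull_rom(p0, p1, p2, p3, steps=16):
    """Sample the uniform Catmull-Rom segment between p1 and p2.

    p0 and p3 are neighbouring control points that shape the
    tangents; the returned polyline starts at p1 and ends at p2.
    """
    pts = []
    for i in range(steps + 1):
        t = i / steps
        t2, t3 = t * t, t * t * t
        pts.append(tuple(
            0.5 * ((2 * b) + (-a + c) * t
                   + (2 * a - 5 * b + 4 * c - d) * t2
                   + (-a + 3 * b - 3 * c + d) * t3)
            for a, b, c, d in zip(p0, p1, p2, p3)))
    return pts
```

Because the curve passes exactly through its control points, placing gate nodes between intersection nodes gives direct control over the final shape of each set curve.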
Another interesting consideration is how to visualize the individual intersections.
We show the curves without filled-in areas~\cite{Blake2016}.
The problem with filling the areas of the curves is that, because of blending, each intersection will have a unique color that is not part of the original color set.
With increasing numbers of sets, the visual difference between these colors decreases, which makes it hard to distinguish the intersections.
Methods have been developed to alleviate these effects~\cite{alsallakh2014visual}, but we consider their application out of the scope of this paper.
A limitation of using Catmull-Rom curves is that for large abstract descriptions, curves might become complex and non-convex, which might have a negative impact on the readability of the diagram. This effect can already be observed in \autoref{fig:teaser:our-euler}, and it increases for highly-intersecting datasets, as can also be seen in the supplementary material.
Solutions to this problem could either be to directly adapt the Euler dual by reordering nodes, or to post-process the faces of the diagram by optimizing the convexity of their outlines.
\paragraph{Area-proportionate Diagrams}
Cardinality is an important characteristic that is inherent to the data, if it exists in a given dataset.
Currently, our method does not incorporate information about the cardinality of set intersection nodes.
In the future, we plan to integrate a method to adapt the zones to given data point weights, creating area-proportionate diagrams, and to fill these zones automatically with data points. This will also remove a current limitation of our tool, which does not scale with a large number of data points in a single zone.
\paragraph{Symmetry}
Another factor that influences the aesthetics of the diagram is symmetry, which many approaches do not retain.
However, we believe that symmetry is an important aspect to investigate with regard to user engagement.
Symmetric objects are often perceived as more aesthetic~\cite{Cawthon2007}, especially those with rotational symmetry~\cite{Makin2012}.
However, striving for symmetry goes against the guidelines by Blake et al.~\cite{Blake2016}, which we introduced in \autoref{sec:guidelines}, as symmetry leads to zones that are visually very similar to one another.
These violate the shape discrimination property (\textbf{P6}{}) and can hinder user understanding.
An extreme example of this is nVenn.
Ultimately, we believe that this is a trade-off that has to be made depending on the goal of the visualization.
For efficiency, the diagram should be simple and easy to read.
If, on the other hand, the goal is to achieve an aesthetic representation, a more symmetric result can be chosen, based on user preference.
By default, our method creates well-matched diagrams.
Depending on the task, it can also make sense to draw empty intersections to emphasize the absence of instances.
An example of this can be seen in \autoref{fig:teaser:our-venn}.
Using our method, it is also possible to create well-matched and well-formed Venn diagrams for any number of curves very quickly. This is due to the inductive nature of Venn diagrams: each new set doubles the number of nodes, so for each rank we can simply invert the order of the nodes of the previous rank, extend those nodes with the new set, and insert the resulting nodes and links.
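The doubling construction above can be sketched as follows, assuming nodes are represented as frozensets of set indices. The function name `venn_ranks` is our own; this minimal sketch only produces the per-rank node orderings of the Venn dual, not the links or the final curves:

```python
def venn_ranks(n):
    """Per-rank node orderings for an n-set Venn dual.

    Adding a set doubles the nodes: each new rank keeps the previous
    diagram's rank as-is, then appends the reversed nodes of the rank
    below, extended by the new set (mirroring the insertion described
    in the text).
    """
    ranks = [[frozenset()]]            # n = 0: only the empty set
    for s in range(n):
        new = [[] for _ in range(s + 2)]
        for k, nodes in enumerate(ranks):
            new[k].extend(nodes)
            new[k + 1].extend(node | {s} for node in reversed(nodes))
        ranks = new
    return ranks
```

For three sets this yields ranks of sizes 1, 3, 3, 1 (the binomial coefficients), i.e., all $2^3$ intersection nodes, and each additional set doubles the total count.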
However, some artifacts currently appear at higher set counts ($n\geq8$), because the space of the individual faces is strongly compressed. We plan to improve our curve interpolation to handle Venn diagrams with more curves. The current problems can be observed in the supplementary material.
A more difficult problem is creating symmetric Venn diagrams. Currently, we are able to create symmetric Venn diagrams for 5 and 7 sets by constraining the number of children a node can have. This influences the insertion strategy so that CO sequences with fewer children are preferred while still maximizing monotone faces.
It is still an open problem to find simple symmetric Venn diagrams for every prime number of sets; the largest produced so far is a 13-Venn diagram~\cite{Mamakani2014}.
\section{Conclusion}
We have presented \textsc{spEuler}, a novel approach to create well-matched Euler diagrams that focuses on creating mostly simple, planar, and connected solutions.
The visualization of highly connected sets is a challenging problem, as the number of possible intersections increases exponentially with the number of sets.
Our solution is fast and can scale to a large number of intersection nodes.
We imagine our technique and accompanying visualization being used as part of a larger system that gives a brief and intuitive overview of set-typed data.
\acknowledgments{
This work was supported by the German Research Foundation (DFG) within project KE 740/17-2 of the FOR2111 ``Questions at the Interfaces'' as well as Project-ID 251654672 -- TRR 161 ``Quantitative methods for visual computing''. We would further like to thank Matthias Albrecht for help during the implementation and Patrick Paetzold for valuable feedback during the revision process.}
\end{document}
\begin{document}
\title{A combinatorial PROP for bialgebras}
\date{\today}
\author{Jorge Becerra}
\address{Bernoulli Institute, University of Groningen, The Netherlands}
\email{\href{mailto:[email protected]}{[email protected]}}
\urladdr{ \href{https://sites.google.com/view/becerra/}{https://sites.google.com/view/becerra/}}
\begin{abstract}
It is a classical result that the category of finitely-generated free monoids serves as a PROP for commutative bialgebras. Attaching permutations to fix the order of multiplication, we construct an extension of this ca\-te\-gory that is equivalent to the PROP for bialgebras.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
Bialgebras are an important algebraic structure that one commonly finds in many areas of mathematics, e.g.\ algebraic topology \cite{maymoreconcise}, homotopical algebra \cite{lodayvallete}, quantum group theory \cite{kassel}, knot theory \cite{habiro}, etc. Therefore it is sometimes useful to go one level of abstraction up and study their structure maps on their own. A convenient setup for doing this is given by symmetric monoidal categories whose family of objects can be identified with the set of non-negative integers, also known as \textit{PROPs}. One can think of a PROP as a gadget for studying abstract algebraic operations, whereas an \textit{algebra} over a PROP is a concrete realisation of these operations. This realisation is achieved by means of a strong monoidal functor $\mathsf{P} \longrightarrow \mathcal{C}$, where $\mathsf{P}$ is a PROP and $\mathcal{C}$ is a symmetric monoidal category.
In general, an algebraic structure is given by some structure maps that satisfy certain compatibility conditions. A rather straightforward way to describe a PROP for such an algebraic structure is to set the structure maps as generators and take the compatibility conditions as relations. In this paper we focus on the PROP $\mathsf{B}$ for (noncommutative, noncocommutative) bialgebras, monoidally generated by the maps $\mu: 2 \longrightarrow 1$, $\eta: 0 \longrightarrow 1$, $\Delta: 1 \longrightarrow 2$ and $\varepsilon: 1 \longrightarrow 0$, subject to the usual relations that define a bialgebra (see Figure \ref{fig bialgebra axioms}). Though it may seem that this gives rise to complex combinations of these maps, it turns out that any morphism in $\mathsf{B}$ has a unique normal form.
\begin{theorem}[Theorem \ref{thm normal form}]\label{thm normal form intro}
Every morphism $f: n \longrightarrow m$ in $\mathsf{B}$ factors in a unique way as $$f= (\mu^{[q_1]} \otimes \cdots \otimes \mu^{[q_m]}) \circ P_{\sigma} \circ (\Delta^{[p_1]} \otimes \cdots \otimes \Delta^{[p_n]}) $$ for some (unique) integers $p_1, \ldots , p_n, q_1, \ldots , q_m, s \geq 0$ and a permutation ${\sigma \in \Sigma_s}$, where the map $\mu^{[k]}: k \longrightarrow 1$ (resp. $\Delta^{[k]}: 1 \longrightarrow k$) stands for the iterated mul\-ti\-pli\-ca\-tion (resp. comultiplication).
\end{theorem}
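As a concrete illustration (our own worked example, not part of the theorem): the bialgebra compatibility axiom already exhibits this normal form. For $f = \Delta \circ \mu : 2 \longrightarrow 2$ one has $p_1 = p_2 = q_1 = q_2 = 2$, $s = 4$ and $\sigma = (2\,3) \in \Sigma_4$:

```latex
\Delta \circ \mu
  = (\mu^{[2]} \otimes \mu^{[2]})
    \circ P_{(2\,3)}
    \circ (\Delta^{[2]} \otimes \Delta^{[2]})
```

On elements this is the familiar Sweedler-notation identity $\Delta(ab) = a_{(1)}b_{(1)} \otimes a_{(2)}b_{(2)}$.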
Even though this is an important step towards understanding the PROP $\mathsf{B}$, computationally it does not present any advantage, as it gives no information about how to determine the coefficients $p_i, q_i, s$. Hence we would like to find an equivalent, computationally explicit description of $\mathsf{B}$.
Let $\mathsf{fgFMon}$ be the category of finitely-generated free monoids and let $\mathcal{C}$ be a symmetric monoidal category. It turns out that the datum of a strong monoidal functor $\mathsf{fgFMon} \longrightarrow \mathcal{C}$ uniquely determines a commutative bialgebra $A$ in $\mathcal{C}$, where the multiplication $\mu: A \otimes A \longrightarrow A$ is given by the image of the monoid map $F(x,y) \longrightarrow F(z)$, $x,y \mapsto z$ and the comultiplication $\Delta: A \longrightarrow A \otimes A$ is the image of the monoid map $F(x) \longrightarrow F(y,z)$, $x \mapsto yz$ (Example \ref{ex ComB = fgFMon}). In a similar fashion, if $\mathsf{fSet}$ denotes the category of finite sets, then the datum of a strong monoidal functor $\mathsf{fSet} \longrightarrow \mathcal{C}$ uniquely determines a commutative algebra $A$ in $\mathcal{C}$, where this time the multiplication $\mu: A \otimes A \longrightarrow A$ is given by the image of the unique map of sets $\{1,2 \} \longrightarrow \{ 1 \}$ (Example \ref{ex fSet A}). These define equivalences of categories $\mathsf{ComA} \overset{\simeq}{\longrightarrow} \mathsf{fSet} $ and $\mathsf{ComB} \overset{\simeq}{\longrightarrow} \mathsf{fgFMon} $, where $\mathsf{ComA}$ and $\mathsf{ComB}$ are the PROPs for commutative algebras and bialgebras, respectively (we think of these as described in terms of generators and relations).
For instance, assume for simplicity $\mathcal{C} = \mathsf{Vect}_k$, the category of vector spaces over some field $k$. Given a commutative bialgebra $A$, consider the monoid maps $$f: F(x) \longrightarrow F(a), \quad x \mapsto a^n \qquad , \qquad g:F(x) \longrightarrow F(a,b), \quad x \mapsto aba^2b.$$ Under the previous equivalences, they correspond to the $k$-linear maps
$$f': A \longrightarrow A, \ x \mapsto x_{(1)} \cdots x_{(n)} \quad , \quad g': A \longrightarrow A \otimes A, \ x \mapsto x_{(1)} x_{(3)} x_{(4)} \otimes x_{(2)} x_{(5)}$$
where the order of the multiplication is arbitrary as $A$ is commutative. If $A$ were noncommutative, we could fix the order of the multiplication by means of a permutation $\sigma \in \Sigma_n$ for $f'$ and a pair of permutations $(\tau_1, \tau_2) \in \Sigma_3 \times \Sigma_2$ for $g'$. The upshot of this observation is that by attaching these permutations to the monoid maps $f$ and $g$, we can produce noncommutative bialgebras from free monoids. A similar observation holds for maps of finite sets.
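The bookkeeping of Sweedler indices in this example can be made concrete with a small sketch (our own illustration; the function name and the representation of monoid words as strings are assumptions):

```python
def sweedler_indices(word, generators):
    """For a monoid map F(x) -> F(generators), x -> word, list the
    Sweedler indices that land in each tensor factor.

    Position i of `word` (1-based) contributes x_(i) to the tensor
    factor of its generator; within each factor the indices appear in
    order of occurrence, so for a noncommutative bialgebra an extra
    permutation is needed to record the multiplication order.
    """
    return {g: [i + 1 for i, c in enumerate(word) if c == g]
            for g in generators}
```

For the map $g: x \mapsto aba^2b$, i.e., the word `"abaab"`, this returns `{'a': [1, 3, 4], 'b': [2, 5]}`, matching $x_{(1)} x_{(3)} x_{(4)} \otimes x_{(2)} x_{(5)}$ above.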
One question that arises here is whether this attaching of permutations can be made functorial in the following sense: the canonical faithful functor $\mathsf{ComA} \lhook\joinrel\longrightarrow \mathsf{ComB}$ corresponds, under the previous equivalences, to the free monoid functor, so that the diagram
$$\begin{tikzcd}
\mathsf{ComA} \dar{\simeq} \rar[hook] & \mathsf{ComB} \dar{\simeq}\\
\mathsf{fSet} \rar{F} & \mathsf{fgFMon}
\end{tikzcd} $$
commutes (Lemma \ref{lem fSet fgFMon}). One would like to lift this diagram to a noncommutative context along the canonical full functors $\mathsf{A} \longrightarrow \mathsf{ComA}$ and $\mathsf{B} \longrightarrow \mathsf{ComB}$, where $\mathsf{A}$ and $\mathsf{B}$ are the PROPs for (noncommutative) algebras and bialgebras, respectively.
Our main result states that the attaching of permutations gives an affirmative answer to the previous question.
\begin{theorem}[Theorem \ref{thm main thm 2}]\label{thm main thm}
There exist symmetric monoidal categories $\widehat{\mathsf{fSet}}$ and $\widehat{\mathsf{fgFMon}} $, together with full and essentially surjective functors $$\widehat{\mathsf{fSet}} \longrightarrow \mathsf{fSet} \qquad , \qquad \widehat{\mathsf{fgFMon}} \longrightarrow \mathsf{fgFMon},$$monoidal equivalences $$\mathsf{A} \overset{\simeq}{\longrightarrow}\widehat{\mathsf{fSet}} \qquad , \qquad \mathsf{B} \overset{\simeq}{\longrightarrow} \widehat{\mathsf{fgFMon}} $$
and a strong monoidal functor between them $$\widehat{F}: \widehat{\mathsf{fSet}} \longrightarrow \widehat{\mathsf{fgFMon}} $$ making the following cube commutative,
\begin{equation}
\begin{tikzcd}[row sep=scriptsize,column sep=scriptsize]
& \widehat{\mathsf{fSet}} \arrow[from=dl,"\simeq"] \arrow[rr,"\widehat{F}"] \arrow[dd] & & \widehat{\mathsf{fgFMon}} \arrow[from=dl,"\simeq"]\arrow[dd] \\
\mathsf{A} \arrow[rr,crossing over,hook]\arrow[dd] & & \mathsf{B} \\
& \mathsf{fSet}\arrow[from=dl, "\simeq"]\arrow[rr, "F \phantom{holaaa}"] & & \mathsf{fgFMon}\arrow[from=dl,"\simeq"] \\
\mathsf{ComA} \arrow[rr,hook] & & \mathsf{ComB}\arrow[from=uu,crossing over]
\end{tikzcd}
\end{equation}
where $F$ is the free monoid functor.
\end{theorem}
The composite law in $\widehat{\mathsf{fgFMon}}$ gives an explicit formula that keeps track of the changes of indices arising when the structure maps of a bialgebra are iterated, which might be of interest in computational algebra, see \eqref{eq composite law fgFMonhat}. The category $\widehat{\mathsf{fSet}}$ is a suitable modification of the category $\widetilde{\mathsf{fSet}}$ of finite sets and maps of sets with a total order on fibres (Proposition \ref{prop fSethat fSettilde}), originally described by Pirashvili \cite{pirashvili} under the name $\mathcal{F}(\mathrm{as})$. On the other hand, the category $\widehat{\mathsf{fgFMon}}$ can be viewed as a combinatorial, explicit description of the Quillen $Q$-construction $\mathcal{Q}\mathcal{F}(\mathrm{as})$ carried out by Pirashvili. In particular, the composite law of $\widehat{\mathsf{fgFMon}}$ encodes an expression for the bialgebra maps $1 \longrightarrow 1$ from \cite[5.3]{pirashvili} as a special case.
\subsection*{Outline of the paper} The paper is structured as follows: in Section \ref{sec Bialgebras and PROPs} we recall the definition of a bialgebra and fix notation. We also recall the notion of PROP and algebra over a PROP and a general construction via generators and relations. Examples \ref{ex fSet A} and \ref{ex ComB = fgFMon} make explicit the equivalences $\mathsf{ComA} \simeq \mathsf{fSet}$ and $\mathsf{ComB} \simeq \mathsf{fgFMon}$, and in Proposition \ref{prop fSethat fSettilde} we lift the previous equivalence for algebras to the noncommutative setting, showing that $\widehat{\mathsf{fSet}} \simeq \mathsf{A}$.
In Section \ref{sec A PROP for bialgebras} we start with a few motivating examples showing the need to choose permutations to fix the order of multiplication. Next we briefly detour to prove Theorem \ref{thm normal form intro}, which will be needed to construct a PROP equivalent to $\mathsf{B}$. We then define the category $\widehat{\mathsf{fgFMon}}$ and show the equivalence $\widehat{\mathsf{fgFMon}} \simeq \mathsf{B}$ in Theorem \ref{thm fundamental theorem}. The key step is to properly define the composite law in $\widehat{\mathsf{fgFMon}}$, for which we will need a few constructions relating permutations and elements of a free monoid. These are carried out on pages \pageref{constr Phi} -- \pageref{lem utilisimo}. We end the section with a couple of examples illustrating our construction.
Lastly, in Section \ref{sec Applications} we include several rather immediate applications of the equivalence $\widehat{\mathsf{fgFMon}} \simeq \mathsf{B}$ and prove Theorem \ref{thm main thm}.
\subsection*{Acknowledgments} The author would like to thank Roland van der Veen for helpful discussions and valuable comments.
\section{Bialgebras and PROPs}\label{sec Bialgebras and PROPs}
For this section we fix $(\mathcal{C}, \otimes, \mathds{1},P)$ a symmetric monoidal category, where $\otimes$ stands for the monoidal product, $\mathds{1}$ for the unit object and $P: \otimes \overset{\cong}{\Longrightarrow} \otimes^{op}$ for the symmetry, so that $P_{Y, X} \circ P_{X,Y} = \mathrm{Id}_{X\otimes Y}$. We will implicitly use MacLane's coherence theorem \cite{maclane} to remove the associativity and unit constraints from the formulae. For an introduction to monoidal categories see \cite{turaevvirelizier}.
\subsection{Algebras, coalgebras and bialgebras}\label{subsec Algebras, coalgebras and bialgebras}
An \textit{algebra} in $\mathcal{C}$ (more precisely, an \textit{algebra object} in $\mathcal{C}$) is an object $A \in \mathcal{C}$ together with two arrows $\mu: A \otimes A \longrightarrow A$ and $\eta: \mathds{1} \longrightarrow A$, called the \textit{multiplication} and the \textit{unit}, satisfying the associativity and unit conditions
\begin{equation}\label{eq algebra axioms}
\mu (\mu \otimes \mathrm{Id}) = \mu (\mathrm{Id} \otimes \mu) \qquad , \qquad \mu(\mathrm{Id} \otimes \eta) = \mathrm{Id} = \mu(\eta \otimes \mathrm{Id}).
\end{equation}
If $\mu \circ P_{A,A} =\mu$, we say that $A$ is \textit{commutative}.
A \textit{coalgebra} in $\mathcal{C}$ is an algebra in $\mathcal{C}^{op}$, that is, an object $A \in \mathcal{C}$ endowed with two arrows $\Delta: A\longrightarrow A \otimes A $ and $\varepsilon: A \longrightarrow \mathds{1} $, called the \textit{comultiplication} and the \textit{counit}, satisfying the coassociativity and counit conditions
\begin{equation}\label{eq coalgebra axioms}
(\Delta \otimes \mathrm{Id})\Delta =(\mathrm{Id} \otimes \Delta)\Delta \qquad , \qquad (\mathrm{Id} \otimes \varepsilon)\Delta = \mathrm{Id} = (\varepsilon \otimes \mathrm{Id})\Delta.
\end{equation}
If $ P_{A,A} \circ \Delta =\Delta$, we say that $A$ is \textit{cocommutative}.
As one would expect, a \textit{(co)algebra morphism} between (co)algebras is an arrow in $\mathcal{C}$ that respects the structure maps.
If $A, A'$ are algebras in $\mathcal{C}$, the monoidal product $A \otimes A'$ is naturally an algebra with structure maps $(\mu \otimes \mu')(\mathrm{Id} \otimes P_{A,A'} \otimes \mathrm{Id})$ and $\eta \otimes \eta'$. The same observation with reversed arrows follows for coalgebras. A \textit{bialgebra} in $\mathcal{C}$ is an object $A \in \mathcal{C}$ which is both an algebra and a coalgebra and whose structure maps are compatible in the sense that the coalgebra structure maps are algebra morphisms (alternatively, the algebra structure maps are coalgebra morphisms). Explicitly,
\begin{equation}\label{eq bialgebra axioms}
\begin{gathered}
\Delta \mu = (\mu \otimes \mu)(\mathrm{Id} \otimes P_{A,A} \otimes \mathrm{Id})(\Delta \otimes \Delta) \\
\Delta \eta = \eta \otimes \eta \quad , \quad \varepsilon \mu = \varepsilon \otimes \varepsilon \quad , \quad \varepsilon \eta = \mathrm{Id}_{\mathds{1}}.
\end{gathered}
\end{equation}
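As a quick sanity check (not taken from the paper), the first compatibility axiom can be verified on a small toy bialgebra: the monoid algebra of $M = (\{0,1\}, \max)$ in $\mathsf{Vect}_k$, where every basis element is group-like, i.e.\ $\Delta(g) = g \otimes g$ and $\varepsilon(g) = 1$. The code below, with names of our own choosing, checks $\Delta\mu = (\mu \otimes \mu)(\mathrm{Id} \otimes P \otimes \mathrm{Id})(\Delta \otimes \Delta)$ on basis elements.

```python
from itertools import product

# Toy bialgebra (a sketch, not from the paper): the monoid algebra k[M]
# of M = ({0,1}, max), with every basis element group-like.
M = [0, 1]
mul = lambda g, h: max(g, h)     # multiplication on basis elements
delta = lambda g: (g, g)         # comultiplication on basis elements

def compat(g, h):
    # LHS of the compatibility axiom: Delta(mu(g, h))
    lhs = delta(mul(g, h))
    # RHS: (mu x mu)(Id x P x Id)(Delta x Delta), evaluated on g (x) h
    a, b = delta(g)
    c, d = delta(h)
    rhs = (mul(a, c), mul(b, d))  # after swapping the middle tensor factors
    return lhs == rhs

assert all(compat(g, h) for g, h in product(M, M))
```

For group-like basis elements the axiom reduces to $gh \otimes gh = gh \otimes gh$, which is what the check confirms.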
It will be convenient to depict the previous maps as planar diagrams,
\begin{center}
\begin{tikzpicture}
\draw (0, 0) node[inner sep=0] {\includegraphics[width=1cm]{multiplication.pdf}};
\draw (-1, 0) node {$\mu=$};
\draw (1.5, 0) node {, \quad $\eta=$};
\draw (2.5, 0) node[inner sep=0] {\includegraphics[width=1cm]{unit.pdf}};
\draw (4, 0) node {, \quad $\Delta=$};
\draw (5.5, 0) node[inner sep=0] {\includegraphics[width=1cm]{comultiplication.pdf}};
\draw (7, 0) node {, \quad $\varepsilon=$};
\draw (8, 0) node[inner sep=0] {\includegraphics[width=1cm]{counit.pdf}};
\end{tikzpicture}
\end{center}
whereas the symmetry will be represented as
\begin{center}
\begin{tikzpicture}
\draw (-1.5, 0) node {\quad $P_{A,A}=$};
\draw (0, 0) node[inner sep=0] {\includegraphics[width=1cm]{crossing_sym.pdf}};
\end{tikzpicture}
\end{center}
By doing this, we can represent the bialgebra axioms graphically as in Figure \ref{fig bialgebra axioms}.
\begin{figure}
\caption{The bialgebra axioms represented graphically.}
\label{fig bialgebra axioms}
\end{figure}
In the following we will adhere to Habiro's notation \cite{habiro2016}: for $k \geq 0$ we define the iterated multiplication and comultiplication $$\mu^{[k]}: A^{\otimes k} \longrightarrow A \qquad , \qquad \Delta^{[k]} : A \longrightarrow A^{\otimes k}$$ as follows:
\begin{gather*}
\mu^{[0]} := \eta , \quad \mu^{[1]} := \mathrm{Id} \quad , \quad \mu^{[k]} := \mu \circ (\mu^{[k-1]} \otimes \mathrm{Id}), \quad k \geq 2\\
\Delta^{[0]} := \varepsilon , \quad \Delta^{[1]} := \mathrm{Id} \quad , \quad \Delta^{[k]} := (\Delta^{[k-1]} \otimes \mathrm{Id})\circ \Delta, \quad k \geq 2
\end{gather*}
For integers $p >0$ and $k_1, \ldots , k_p \geq 0$ let us write $$\mu^{[k_1, \ldots , k_p]} := \mu^{[k_1]} \otimes \cdots \otimes \mu^{[ k_p]} \qquad , \qquad \Delta^{[k_1, \ldots , k_p]} := \Delta^{[k_1]} \otimes \cdots \otimes \Delta^{[ k_p]}. $$
The associativity and coassociativity axioms imply the following generalised relations:
$$\mu^{[p]} \circ \mu^{[k_1, \ldots , k_p]} = \mu^{[k_1 + \cdots + k_p]} \qquad , \qquad \Delta^{[k_1, \ldots , k_p]} \circ \Delta^{[p]} = \Delta^{[k_1+ \cdots + k_p]}. $$
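The first of these generalised relations can be illustrated in a toy model (our own, not from the paper): strings under concatenation form an algebra object in $\mathsf{Set}$, with the empty string as unit. The helpers `mu` and `mu_blocks` below stand for $\mu^{[k]}$ and $\mu^{[k_1,\ldots,k_p]}$.

```python
# A toy model: strings under concatenation form an algebra (in Set),
# with unit the empty string. mu(k) plays the role of mu^[k].
def mu(k):
    return lambda *xs: "".join(xs)      # mu(0)() == "" is the unit eta

def mu_blocks(ks):
    # mu^[k_1, ..., k_p]: apply mu^[k_i] to the i-th consecutive block
    def f(*xs):
        out, pos = [], 0
        for k in ks:
            out.append(mu(k)(*xs[pos:pos + k]))
            pos += k
        return tuple(out)
    return f

# generalised associativity: mu^[p] o mu^[k_1,...,k_p] == mu^[k_1+...+k_p]
ks = [2, 0, 3]
xs = ("a", "b", "c", "d", "e")
lhs = mu(len(ks))(*mu_blocks(ks)(*xs))
rhs = mu(sum(ks))(*xs)
assert lhs == rhs == "abcde"
```

Note how the empty block $k_2 = 0$ contributes the unit (empty string), exactly as $\mu^{[0]} = \eta$ prescribes.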
\subsection{PROPs and their algebras}\label{subsec PROPs}
A \textit{(classical, monocoloured) PROP} is a symmetric monoidal category monoidally generated by a single object. By MacLane's coherence theorem, every PROP is monoidally equivalent to one whose objects are the non-negative integers, with monoidal product given by addition.
PROPs are useful for studying the structural morphisms of large classes of algebraic structures. Let us sketch the general construction \cite{zanasi}: let $(\mathcal{G}, \mathcal{E})$ be a pair of sets, where $\mathcal{G}$ contains \textit{generators}, a set of formal arrows $g: n \longrightarrow m$ between non-negative integers, which includes the special arrows $\mathrm{Id}: 1 \longrightarrow 1$ and $P: 2 \longrightarrow 2$. The set of \textit{$\mathcal{G}$-terms} is obtained by (formally) combining the generators with the operations ``composition'' $\circ$ and ``monoidal product'' $\otimes$. The set $\mathcal{E}$ contains \textit{equations}, pairs of $\mathcal{G}$-terms $(t,t':n \longrightarrow m )$ with the same arity and coarity.
Any such pair $(\mathcal{G}, \mathcal{E})$ induces a PROP $\mathsf{P}=\mathsf{P}_{(\mathcal{G}, \mathcal{E})}$ by letting $\hom{\mathsf{P}}{n}{m}$ be the set of $\mathcal{G}$-terms $n \longrightarrow m$ modulo the least congruence relation (with respect to the composition and the monoidal product) that contains the laws of strict symmetric categories and the equations $t=t'$ for every pair $(t,t') \in \mathcal{E}$.
Whereas a PROP encodes algebraic operations abstractly, an algebra over a PROP is an evaluation of these operations on a concrete object. More precisely, a \textit{$\mathsf{P}$-algebra} in a symmetric monoidal category $\mathcal{C}$ is a strong monoidal functor ${A: \mathsf{P} \longrightarrow \mathcal{C}}$. The class of $\mathsf{P}$-algebras in $\mathcal{C}$ forms a category $\mathsf{Alg}_{\mathsf{P}} (\mathcal{C})$ in the obvious way, where morphisms are monoidal natural transformations between functors $ \mathsf{P} \longrightarrow \mathcal{C}$.
The following observation will be useful:
\begin{lemma}\label{lemma yoneda}
Let $\mathsf{P}$, $\mathsf{P}'$ be PROPs. If there is a natural bijection $$\mathsf{Alg}_{\mathsf{P}} (\mathcal{C}) \longrightarrowiso \mathsf{Alg}_{\mathsf{P}'} (\mathcal{C})$$ for any symmetric monoidal category $\mathcal{C}$, then $\mathsf{P}$ and $\mathsf{P}'$ are monoidally equivalent.
\end{lemma}
\begin{proof}
This is a direct consequence of the Yoneda lemma.
\end{proof}
Let us illustrate the above construction with a few examples:
\begin{example}
Let $\mathcal{G}$ consist of two arrows $\mu: 2\longrightarrow 1$ and $\eta: 0 \longrightarrow 1$, which we can think of as the planar diagrams depicted in the previous section. If $\mathcal{E}$ is the set given by the two equations depicted in Figures \ref{fig associativity} and \ref{fig unitality}, then the resulting PROP is denoted by $\mathsf{A}$ and $\mathsf{Alg}_{\mathsf{A}} (\mathcal{C})$ is equivalent to the category of algebras in $\mathcal{C}$. In this case, one says that $\mathsf{A}$ \textit{is a PROP for algebras}. If the relation $\mu \circ P =\mu$ is added, then the corresponding category $\mathsf{ComA}$ is a PROP for commutative algebras.
A similar remark applies to the categories of coalgebras or bialgebras, perhaps with the (co)commutativity axiom included. If $\mathcal{G}$ contains $\mu:2\longrightarrow 1$, $\eta: 0\longrightarrow 1$, $\Delta: 1\longrightarrow 2$ and $\varepsilon: 1 \longrightarrow 0$ and $\mathcal{E}$ consists of the equations given in Figure \ref{fig bialgebra axioms}, then the resulting PROP for bialgebras will be denoted by $\mathsf{B}$. If the equation $\mu \circ P =\mu$ is added, then we will denote the resulting PROP for commutative bialgebras by $\mathsf{ComB}$.
\end{example}
An important drawback of the previous construction is that it might seem ad hoc or artificial, being produced by generators and relations. A sensible question is the following:
\begin{question}
Is it possible to describe the PROPs for ((co)commutative) algebras, coalgebras or bialgebras without using generators and relations?
\end{question}
The answer for ((co)commutative) (co)algebras is quite simple. Before addressing this case let us introduce some notation: for $n \geq 0$, let $\Sigma_n$ be the symmetric group on $n$ letters (if $n=0$ we set $\Sigma_0 = \emptyset$). Given an object $X \in \mathcal{C}$, any element $\sigma \in \Sigma_n$ induces an arrow $$P_\sigma: X^{\otimes n} \longrightarrow X^{\otimes n}$$ determined by $P_{(i,i+1)}=\mathrm{Id}^{\otimes (i-1)} \otimes P_{X,X} \otimes \mathrm{Id}^{\otimes (n-i-1)}$ and the property that the passage $\Sigma_n \longrightarrow \hom{\mathcal{C}}{X^{\otimes n}}{X^{\otimes n}}$ is a monoid homomorphism.
\begin{example}[essentially \cite{pirashvili}]\label{ex fSet A}
Let $\mathsf{fSet}$ be the category of finite sets. Up to monoidal equivalence, we can view it as a PROP with the ordinals $[n] = \{ 1< \cdots < n \}$, $n \geq0$, as objects and the sum of ordinals (disjoint union) as the monoidal product ($[0]$ is by definition the empty set).
For any $k \geq 0$, there is a unique map $[k] \longrightarrow [1]$, in the same way that there is a unique map $\mu^{[k]}: A^{\otimes k} \longrightarrow A$ for a commutative algebra $A$ in $\mathcal{C}$.\footnote{Homotopy theorists will recognise the commutative operad $\mathsf{Com}(n)= *$ here.} This suggests the following: given a strong monoidal functor $F: \mathsf{fSet} \longrightarrow \mathcal{C}$, let $X:= F([1])$ and let $\mu^{[k]} := F([k] \longrightarrow [1])$. This determines the structure of a commutative algebra object on $X$. Conversely, if $A\in \mathcal{C}$ is a commutative algebra, we define a strong monoidal functor $F: \mathsf{fSet} \longrightarrow \mathcal{C}$ as follows: first set $F([1]):=A$. For a map of sets $f:[n] \longrightarrow [m]$, let $k_i := \#f^{-1}(i) \geq 0$ for every $i \in [m]$. If $\sigma \in \Sigma_n$ is any permutation that maps the subset $\{k_1 + \cdots + k_{i-1} +1 , \ldots , k_1 + \cdots + k_i \}$ to $f^{-1}(i)$, then define $$Ff := \mu^{[k_1, \ldots , k_m]} \circ P_\sigma,$$ where $P_\sigma: A^{\otimes n} \longrightarrow A^{\otimes n}$ is as above. The commutativity of $A$ ensures that any choice of $\sigma \in \Sigma_n$ yields the same arrow. This shows, by Lemma \ref{lemma yoneda}, that $\mathsf{fSet}$ is equivalent to the PROP $\mathsf{ComA}$ for commutative algebras.
Let us illustrate the previous construction in the case $\mathcal{C} = \mathsf{Vect_k}$, the category of vector spaces over a field $k$. If $A$ is a commutative algebra and $f:[n] \longrightarrow [m]$, then $$(Ff)(a_1 \otimes \cdots \otimes a_n)= \left( \prod_{i \in f^{-1}(1)} a_i \right) \otimes \cdots \otimes \left( \prod_{i \in f^{-1}(m)} a_i \right),$$ where an empty product stands for $1 \in A$.
By reversing arrows, we have that $\mathsf{fSet}^{op}$ is equivalent to the PROP $\mathsf{CocomC}$ for cocommutative coalgebras.
\end{example}
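The formula for $Ff$ in $\mathsf{Vect}_k$ can be mimicked computationally. In the sketch below (names `F`, `fmap` are ours), basis-like elements are single-letter symbols and the commutative product is modelled by sorting the letters of each fibre, so that ``$ca$'' and ``$ac$'' coincide.

```python
# Sketch: compute Ff on symbols, modelling the commutative product of
# single-letter factors by sorting them (so "ca" == "ac").
def F(fmap, m):
    # fmap: dict sending each element of [n] to its image in [m]
    def Ff(*a):                      # a = (a_1, ..., a_n), single letters
        return tuple(
            "".join(sorted(a[i - 1] for i in sorted(fmap) if fmap[i] == j))
            for j in range(1, m + 1))
    return Ff

# f: [3] -> [2] with f(1)=2, f(2)=1, f(3)=2
Ff = F({1: 2, 2: 1, 3: 2}, 2)
assert Ff("a", "b", "c") == ("b", "ac")   # an empty fibre would give ""
```

The empty string in an empty tensor slot plays the role of the unit $1 \in A$ coming from an empty product.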
\begin{example}[\cite{pirashvili}]\label{ex pirashvili ordered}
Now suppose we want to refine the previous example to obtain the PROP $\mathsf{A}$ for (noncommutative) algebras. If the order of multiplication matters, we are forced to impose an order on the fibres $f^{-1}(i)$, which in turn determines a unique permutation.
More concretely, define $\widetilde{\mathsf{fSet}}$ as the following category: its objects are the ordinals $[n]$, $n \geq 0$. An arrow $f: [n] \longrightarrow [m]$ is the data of a map of sets $f: [n] \longrightarrow [m]$ together with a total order on $f^{-1}(i)$ for every $i \in [m]$. Given arrows $f:[n] \longrightarrow [m]$ and $g: [m] \longrightarrow [p]$, the composite $g \circ f$ is defined as the composite of the underlying maps of sets together with the following order on fibres: for $i \in [p]$, let
\begin{equation}\label{eq composite in fSettilde}
(g \circ f)^{-1}(i) := f^{-1}(j_1) \amalg \cdots \amalg f^{-1}(j_r)
\end{equation}
where $g^{-1}(i)= \{j_1 < \cdots < j_r \}$ and the disjoint union carries the order imposed by the writing from left to right. The disjoint union makes it a symmetric monoidal category in the obvious way. Reasoning as in the previous example, we see that given a map $f$ there is a unique $\sigma \in \Sigma_n$ mapping the ordered set $\{k_1 + \cdots + k_{i-1} +1 < \cdots < k_1 + \cdots + k_i \}$ to $f^{-1}(i)$ in an order-preserving way\footnote{This is another way of saying that the associative operad $\mathsf{Ass}(n)$ equals $\Sigma_n$.}, and therefore $\widetilde{\mathsf{fSet}}$ is equivalent to the PROP for algebras.
As a concrete instance of this construction, let
\begin{align*}
f:[5]\longrightarrow [4] \quad &, \quad f^{-1}(1)= \{ 4<2 \} \quad , \quad f^{-1}(3)= \{ 1<3 \} \quad , \quad f^{-1}(4)= \{ 5 \},\\
g:[4]\longrightarrow [2] \quad &, \quad g^{-1}(1)= \{ 4<3 \} \quad , \quad g^{-1}(2)= \{ 1<2 \} .
\end{align*}
Then the composite is determined by $$(g \circ f)^{-1}(1) = \{ 5<1<3 \} \quad , \quad (g \circ f)^{-1}(2) = \{ 4<2\}.$$
The order on fibres that $f$ determines yields a unique permutation $\sigma = (143) \in \Sigma_5$ such that $Ff = \mu^{[2, 0,2,1]} \circ P_\sigma$. Likewise, $g$ determines a unique $\tau = (1423) \in \Sigma_4$ such that $Fg = \mu^{[2,2]} \circ P_\tau$. If $\mathcal{C}= \mathsf{Vect_k}$, then $(Ff)(a_1 \otimes \cdots \otimes a_5) = a_4a_2 \otimes 1 \otimes a_1a_3\otimes a_5$ and $(Fg)(b_1 \otimes \cdots \otimes b_4) = b_4b_3 \otimes b_1b_2 $.
\end{example}
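The composition law \eqref{eq composite in fSettilde} is easy to implement; the sketch below (our own encoding: an arrow is a pair of a map of sets and its ordered fibres) reproduces the concrete instance above.

```python
# Sketch of the composition law of fSet-tilde: maps of sets together
# with a total order on each fibre, fibres listed in their given order.
def compose(g, f):
    gmap, gord = g                   # gord[i]: ordered fibre g^{-1}(i)
    fmap, ford = f
    hmap = {x: gmap[fmap[x]] for x in fmap}
    # (g o f)^{-1}(i) = concatenation of the f-fibres over g^{-1}(i)
    hord = {i: [x for j in gord[i] for x in ford[j]] for i in gord}
    return hmap, hord

# the example above: f: [5] -> [4] and g: [4] -> [2]
f = ({1: 3, 2: 1, 3: 3, 4: 1, 5: 4},
     {1: [4, 2], 2: [], 3: [1, 3], 4: [5]})
g = ({1: 2, 2: 2, 3: 1, 4: 1},
     {1: [4, 3], 2: [1, 2]})
_, hord = compose(g, f)
assert hord == {1: [5, 1, 3], 2: [4, 2]}   # {5<1<3} and {4<2}
```

The assertion recovers exactly the composite fibres $(g \circ f)^{-1}(1) = \{5<1<3\}$ and $(g \circ f)^{-1}(2) = \{4<2\}$ computed in the example.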
What we learned from this example is that by adding extra data to the PROP for commutative algebras, we were able to ``fix the order of multiplication'' and obtain a PROP for algebras. We also see that the data of a total order on fibres is equivalent to the datum of a permutation, since for $f: [n] \longrightarrow [m]$ and $\sigma$ as before $$f^{-1}(i)= \{\sigma (k_1 + \cdots + k_{i-1} +1) < \cdots < \sigma (k_1 + \cdots + k_i) \}.$$ We can then replace the data of one by the data of the other.
\begin{construction}
Let $\alpha \in \Sigma_m$ and let $k_1, \ldots, k_m \geq 0$ be a sequence of $m$ non-negative integers. If $n= \sum_i k_i$, then $\alpha$ induces a permutation $\langle \alpha \rangle_{k_1, \ldots, k_m} \in \Sigma_n$ which permutes the $m$ consecutive blocks of $[n]$ with sizes $ k_1, \ldots, k_m$. More precisely, for $i \in [m]$, define $\langle \alpha \rangle_{k_1, \ldots, k_m}$ as the unique permutation that sends $\{k_{\alpha(1)} + \cdots + k_{\alpha(i-1)}+1, \ldots , k_{\alpha(1)} + \cdots + k_{\alpha(i)} \}$ to $\{k_1 + \cdots + k_{\alpha(i)-1}+1, \ldots , k_1 + \cdots + k_{\alpha(i)} \}$ in an order-preserving way.
A schematic of this construction is shown in Figure \ref{fig langle rangle constr}. It is immediate to check that $$\langle \alpha \rangle_{1, \overset{m}{\ldots}, 1} = \alpha \qquad , \qquad \langle \mathrm{Id}_m \rangle_{k_1, \ldots, k_m} = \mathrm{Id}_n$$ for any $\alpha \in \Sigma_m$ and $k_1, \ldots , k_m \geq 0$.
\begin{figure}
\caption{Illustration of the construction of $\langle \alpha \rangle_{k_1, \ldots, k_m}$.}
\label{fig langle rangle constr}
\end{figure}
\end{construction}
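The construction of $\langle \alpha \rangle_{k_1, \ldots, k_m}$ is purely combinatorial, so it can be sketched in code (the helper `block_perm` and its one-line notation, `perm[i-1] = perm(i)`, are our own conventions).

```python
# Sketch of <alpha>_{k_1,...,k_m}: permutations in one-line notation,
# perm[i-1] = perm(i), with values in 1..n.
def block_perm(alpha, ks):
    m = len(alpha)
    starts = [sum(ks[:j]) for j in range(m)]   # 0-based start of block j
    out = []
    for i in range(m):
        j = alpha[i] - 1            # the i-th domain block is block alpha(i)
        out.extend(starts[j] + t + 1 for t in range(ks[j]))
    return out

# sanity checks from the text
assert block_perm([2, 1, 3], [1, 1, 1]) == [2, 1, 3]   # <alpha>_{1,...,1} = alpha
assert block_perm([1, 2], [3, 2]) == [1, 2, 3, 4, 5]   # <Id>_{k_1,...,k_m} = Id
assert block_perm([2, 1], [1, 2]) == [2, 3, 1]
```

The last assertion says that $\langle (12) \rangle_{1,2}$ sends the block $\{1,2\}$ to $\{2,3\}$ and the block $\{3\}$ to $\{1\}$, order-preservingly, as the definition prescribes.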
We also introduce the following notation: given two permutations $\alpha \in \Sigma_n$, ${\beta \in \Sigma_m}$, we denote by $\alpha \otimes \beta \in \Sigma_{n+m}$ the \textit{block product} of $\alpha$ and $\beta$, that is, the image of the pair $(\alpha , \beta)$ under the ``block inclusion'' $$\Sigma_n \times \Sigma_m \lhook\joinrel\longrightarrow \Sigma_{n+m}.$$ Concretely, $$(\alpha \otimes \beta) (i) = \begin{cases}
\alpha (i), & i=1, \ldots ,n\\
\beta (i-n)+n, & i=n+1, \ldots , n+m.
\end{cases} $$
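In one-line notation the block product is simply concatenation with a shift; a minimal sketch (the helper name `block_product` is ours):

```python
# Sketch of the block product, one-line notation (perm[i-1] = perm(i)):
# alpha acts on the first n letters, a shifted copy of beta on the rest.
def block_product(alpha, beta):
    n = len(alpha)
    return alpha + [b + n for b in beta]

assert block_product([2, 1], [1, 3, 2]) == [2, 1, 3, 5, 4]
```

This is the operation used for the monoidal product of $\widehat{\mathsf{fSet}}$ below.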
\begin{definition}
Let $\widehat{\mathsf{fSet}}$ be the following category: its objects are the ordinals. An arrow is a pair $(f, \sigma)$ where $f$ is a map of sets $f: [n] \longrightarrow [m]$ and $\sigma \in \Sigma_n$ is a permutation. Given pairs $(f, \sigma)$ and $(g, \tau)$ with $f$ as before and $g: [m] \longrightarrow [p]$, define the composite as $$(g, \tau) \circ (f, \sigma):= (g \circ f, \sigma \circ \langle \tau \rangle_{k_1, \ldots , k_m}) $$ where $k_i := \#f^{-1} (i)$ for every $i \in [m]$. The identity is the pair $(\mathrm{Id}, \mathrm{Id})$. Furthermore there is a monoidal product given by $$(f_1, \sigma_1) \otimes (f_2, \sigma_2) := (f_1 \amalg f_2, \sigma_1 \otimes \sigma_2).$$
\end{definition}
\begin{proposition}\label{prop fSethat fSettilde}
The categories $\widehat{\mathsf{fSet}}$ and $\widetilde{\mathsf{fSet}}$ are monoidally equivalent, so $\widehat{\mathsf{fSet}}$ is also equivalent to $\mathsf{A}$.
\end{proposition}
\begin{proof}
We have already described how giving an order on fibres amounts to giving a permutation. Hence, to see that this defines a functor $\widetilde{\mathsf{fSet}} \longrightarrow \widehat{\mathsf{fSet}} $, all that is left to check is that under this correspondence composites are mapped to composites.
Let $f:[n] \longrightarrow [m]$, $g:[m]\longrightarrow [p]$ be composable arrows in $\widetilde{\mathsf{fSet}}$, and let $\sigma \in \Sigma_n$, $\tau \in \Sigma_m$ be the permutations that arise from the orders on the fibres of $f$ and $g$, respectively, as explained in Example \ref{ex pirashvili ordered}. If $\delta$ is the permutation associated to the composite, then it is determined by the following equality of ordered sets:
\begin{align*}
\{ \delta(1) < \cdots < \delta(n) \} &= (g \circ f)^{-1}(1) \amalg \cdots \amalg (g \circ f)^{-1}(p)\\
&= \coprod_{j \in g^{-1}(1) \amalg \cdots \amalg g^{-1}(p)} f^{-1}(j)\\
&= \coprod_{j \in \{ \tau(1) < \cdots < \tau (m) \}} \{\sigma (k_1 + \cdots + k_{j-1} +1) < \cdots < \sigma (k_1 + \cdots + k_j) \}.
\end{align*}
This expresses that $\delta$ is given by a block permutation of the set $\{ \sigma(1) < \cdots < \sigma(n) \}$, where the blocks have sizes $k_1 , \ldots , k_m$ and the order of the block permutation is determined by $\tau$. This exactly says that $\delta =\sigma \circ \langle \tau \rangle_{k_1, \ldots , k_m}$.
\end{proof}
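The identity $\delta = \sigma \circ \langle \tau \rangle_{k_1, \ldots, k_m}$ can be checked on the data of Example \ref{ex pirashvili ordered}, where $\sigma = (143)$, $\tau = (1423)$ and the fibre sizes of $f$ are $(2,0,2,1)$. A sketch in our own one-line notation (`perm[i-1] = perm(i)`):

```python
# Sketch: verify delta = sigma o <tau>_{k_1,...,k_m} on the example data.
def block_perm(alpha, ks):
    starts = [sum(ks[:j]) for j in range(len(alpha))]
    out = []
    for i in range(len(alpha)):
        j = alpha[i] - 1
        out.extend(starts[j] + t + 1 for t in range(ks[j]))
    return out

def compose_perm(s, t):
    # (s o t)(i) = s(t(i))
    return [s[t[i] - 1] for i in range(len(t))]

sigma = [4, 2, 1, 3, 5]    # the cycle (143); fibres {4<2}, {1<3}, {5}
tau   = [4, 3, 1, 2]       # the cycle (1423); fibres {4<3}, {1<2}
ks    = [2, 0, 2, 1]       # fibre sizes of f
delta = compose_perm(sigma, block_perm(tau, ks))
# delta lists the composite fibres {5<1<3} and {4<2} consecutively
assert delta == [5, 1, 3, 4, 2]
```

Reading the one-line notation of `delta` blockwise with block sizes $3$ and $2$ recovers exactly the ordered composite fibres $\{5<1<3\}$ and $\{4<2\}$.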
\subsection{Finitely generated free monoids}
Recall that there is a free-forgetful adjunction
$$\begin{tikzcd}[column sep={4em,between origins}]
\mathsf{Set}
\arrow[rr, bend left, swap, "F"' pos=0.5]
& \bot & \mathsf{Mon}
\arrow[ll, bend left, swap, "U"' pos=0.5]
\end{tikzcd}$$
between the category of sets and the category of monoids, where $F(X):= \coprod_{n \geq 0} X^{\times n}$ is given by ``words on the alphabet $X$''. We denote by $\mathsf{fgFMon}$ the category of finitely-generated free monoids, that is, the full subcategory of $\mathsf{Mon}$ on the objects $F([n])$, $n \geq 0$.
For the sake of clarity let us label $[n]=\{x_1 < \cdots < x_n \}$ with ``$x$'s'' or any other letter so that $$F(x_1, \ldots , x_n) := F([n])= F(\{ x_1, \ldots , x_n \}).$$ Observe that by the adjunction,
\begin{align*}
\hom{\mathsf{Mon}}{F(x_1, \ldots , x_n)}{F(y_1, \ldots , y_m)} &\cong \hom{\mathsf{Set}}{\{ x_1, \ldots , x_n \}}{UF(y_1, \ldots , y_m)} \\
&\cong \prod_{i=1}^n \hom{\mathsf{Set}}{\{ x_i \}}{UF(y_1, \ldots , y_m)}\\
&\cong \prod_{i=1}^n UF(y_1, \ldots , y_m),
\end{align*}
so any monoid map between free monoids is completely determined by a tuple of words (the images of the elements $x_i$).
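This description is directly implementable: a monoid map is a tuple of words, and composition is substitution. A sketch with our own encoding (generators as $1$-based indices, words as lists of indices):

```python
# Sketch: a monoid map F(x_1,...,x_n) -> F(y_1,...,y_m) as the tuple of
# words (lists of generator indices) to which the generators x_i map.
def apply_map(images, word):
    # substitute: each letter i of `word` becomes the word images[i-1]
    return [y for x in word for y in images[x - 1]]

def compose_mon(g_images, f_images):
    return tuple(apply_map(g_images, w) for w in f_images)

# f: F(x) -> F(a), x |-> a^2   and   g: F(a) -> F(b,c), a |-> bc
f = ([1, 1],)
g = ([1, 2],)
assert compose_mon(g, f) == ([1, 2, 1, 2],)    # g o f: x |-> bcbc
```

The empty word `[]` encodes the unit of the target monoid, so maps to and from the trivial monoid $\mathbf{1}$ are also covered.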
The category $\mathsf{fgFMon}$ has finite coproducts given by the free product of monoids, $$F(x_1, \ldots , x_n) * F(y_1, \ldots , y_m) \cong F(x_1, \ldots , x_n,y_1, \ldots , y_m ),$$ which turns it into a cocartesian monoidal category, where the initial object $\mathbf{1}:= F([0])$ (the trivial monoid) serves as the unit of the monoidal product.
\begin{example}[\cite{habiro2016}]\label{ex ComB = fgFMon}
The object $F(x)= F([1]) \in \mathsf{fgFMon}$ is a bialgebra object: the multiplication and comultiplication are given by
\begin{align*}
&\mu: F(x,y) \longrightarrow F(z) \qquad , \qquad x,y \mapsto z,\\
&\Delta: F(x) \longrightarrow F(y,z) \qquad , \qquad x \mapsto yz,
\end{align*}
and the unit and counit are the unique maps $\eta: \mathbf{1} \longrightarrow F(x)$ and $\varepsilon: F(x)\longrightarrow \mathbf{1}$. It is straightforward to check that the bialgebra axioms \eqref{eq algebra axioms} -- \eqref{eq bialgebra axioms} hold. For instance, both sides of the associativity axiom from Figure \ref{fig bialgebra axioms}(\subref{fig associativity}) are the map $x,y,z \mapsto x$; from Figure \ref{fig bialgebra axioms}(\subref{fig coassociativity}) they are the map $x \mapsto xyz$; and from Figure \ref{fig bialgebra axioms}(\subref{fig bialgebra1}) they are the map $x, y \mapsto ab$. The rest of the axioms can be checked similarly.
Therefore there is a functor $$T: \mathsf{ComB} \longrightarrow \mathsf{fgFMon}.$$
It is a well-known result that the above functor is a monoidal equivalence. Habiro \cite{habiro2016} showed this directly\footnote{Actually, Habiro showed the more general result that the PROP for commutative Hopf algebras is equivalent to the category of finitely generated free groups, but his construction easily restricts to the bialgebra case.}, but it can also be proven using Lawvere theories \cite{pirashvili}.
\end{example}
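With monoid maps encoded as tuples of words (as in the previous subsection; the encoding is our own, with generators as $1$-based indices), the compatibility axiom $\Delta\mu = (\mu \otimes \mu)(\mathrm{Id} \otimes P \otimes \mathrm{Id})(\Delta \otimes \Delta)$ for $F(x)$ can be checked mechanically:

```python
# Sketch: verify the compatibility axiom for F(x), with monoid maps
# encoded as tuples of words (lists of 1-based generator indices).
def apply_map(images, word):
    return [y for x in word for y in images[x - 1]]

def compose_mon(g_images, f_images):
    return tuple(apply_map(g_images, w) for w in f_images)

mu    = ([1], [1])             # F(x,y) -> F(z):    x, y |-> z
Delta = ([1, 2],)              # F(x)   -> F(y,z):  x |-> yz
DD    = ([1, 2], [3, 4])       # Delta (x) Delta:   F(x,y) -> F(a,b,c,d)
IPI   = ([1], [3], [2], [4])   # Id (x) P (x) Id on F(a,b,c,d)
mm    = ([1], [1], [2], [2])   # mu (x) mu: F(a,b,c,d) -> F(a,b)

lhs = compose_mon(Delta, mu)
rhs = compose_mon(mm, compose_mon(IPI, DD))
assert lhs == rhs == ([1, 2], [1, 2])    # both send x, y |-> ab
```

Both composites send each of the two generators to the word $ab$, as claimed in the example.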
\begin{lemma}\label{lem fSet fgFMon}
The free monoid functor $F: \mathsf{fSet} \longrightarrow \mathsf{fgFMon}$ makes the following diagram commutative,
$$\begin{tikzcd}
\mathsf{ComA} \dar{\simeq} \rar[hook] & \mathsf{ComB} \dar{\simeq}\\
\mathsf{fSet} \rar{F} & \mathsf{fgFMon}
\end{tikzcd} $$
where the upper arrow is the natural inclusion.
\end{lemma}
\begin{proof}
This is immediate: the generator $\mu^{[k]}: k \longrightarrow 1$ of $\mathsf{ComA}$, $k \geq 0$, is mapped under the two possible composites to the monoid map $F(x_1, \ldots , x_k) \longrightarrow F(y)$ determined by $x_i \mapsto y$ for all $1 \leq i \leq k$, the iterated multiplication in $\mathsf{fgFMon}$.
\end{proof}
\begin{remark}
The non-existence of a right adjoint for the free functor $F: \mathsf{fSet} \longrightarrow \mathsf{Mon}$ can be seen as the failure of the inclusion $\mathsf{ComA} \lhook\joinrel\longrightarrow \mathsf{ComB}$ to have a right adjoint.
\end{remark}
\section{A PROP for bialgebras}\label{sec A PROP for bialgebras}
The main question we want to address is whether it is possible to obtain a PROP for (noncommutative) bialgebras out of $\mathsf{fgFMon}$ by adding extra data, as we did in the algebra case. We start with some motivating examples:
\begin{example}\label{ex motivating 1}
Let $A$ be a commutative bialgebra in $\mathsf{Vect}_k$, and let $\mathsf{fgFMon}\longrightarrow \mathsf{Vect}_k$ be the functor making the diagram
$$\begin{tikzcd}
\mathsf{ComB} \arrow{rr}{\simeq} \drar[swap]{A} & & \mathsf{fgFMon} \dlar\\
&\mathsf{Vect}_k &
\end{tikzcd}$$
commutative (up to natural isomorphism), so that we can view $A$ as a functor $A: \mathsf{fgFMon}\longrightarrow \mathsf{Vect}_k$. Let us investigate this point of view: the non-cocommutativity of the comultiplication $\Delta: A \longrightarrow A \otimes A$, $$\sum_{(a)} a_{(1)} \otimes a_{(2)} \neq \sum_{(a)} a_{(2)} \otimes a_{(1)},$$ can be seen as a consequence of the non-equality of the monoid maps $F(x) \longrightarrow F(y,z)$ given by $(x \mapsto yz)$ and $(x \mapsto zy)$.
Similarly, the commutativity of the multiplication $\mu: A \otimes A \longrightarrow A$, $ab = ba$, can be seen as a consequence of the commutativity of the following diagram:
$$\begin{tikzcd}
F(x,y) \arrow{rr}{P} \drar[swap]{\mu} & & F(x,y) \dlar{\mu}\\
&F(z) &
\end{tikzcd}$$
where $P=(x\mapsto y, y \mapsto x)$. So producing noncommutative bialgebras amounts to being able to distinguish $\mu$ and $\mu \circ P $. One way to achieve this is as follows: note that the ordinals $[n] = \{1 < \cdots <n \}$ carry a canonical order, which induces an order\footnote{Usually we write $\{x_1, x_2, x_3, \ldots \} = \{a, b,c, \ldots \}$. In the latter case the order will always be alphabetical.} on the subset $\{ x_1, \ldots , x_n \} \subset F(x_1, \ldots , x_n)$. In particular, there is a ``canonically ordered'' word $x_1 \cdots x_n \in F(x_1, \ldots , x_n)$. Under the multiplication ${\mu: F(x,y) \longrightarrow F(z)}$, the element $xy$ maps to $z^2$. This corresponds to $a\otimes b \mapsto ab=ba$ once we pass to the algebra. If we are interested in fixing the order of multiplication, we could associate the permutation $\mathrm{Id}_2 \in \Sigma_2$ to the first option, $a \otimes b \mapsto ab$, and $(12) \in \Sigma_2$ to the second one, $a \otimes b \mapsto ba$. At the level of free monoids, this corresponds to associating $\mathrm{Id}_2$ to $\mu$ and $(12)$ to $\mu \circ P$.
\end{example}
\begin{example}
Let $f: F(x) \longrightarrow F(y)$ be a monoid map, which necessarily must be $x \mapsto y^n$ for some $n \geq 0$. Given a strong monoidal functor $A: \mathsf{fgFMon}\longrightarrow \mathsf{Vect}_k$, this map will be sent to the map $A \longrightarrow A$, $a \mapsto a_{(1)} \cdots a_{(n)}$ (one way to see this is that $f$ factors as $f= \mu^{[n]} \circ \Delta^{[n]}$). Since $A$ is a commutative algebra, $ a_{(1)} \cdots a_{(n)} = a_{(\sigma (1))} \cdots a_{(\sigma (n))}$ for any $\sigma \in \Sigma_n$. If we want to lift this correspondence to the noncommutative case, we are forced to fix the indeterminacy in the order of multiplication that the assignment $x \mapsto y^n$ carries. If we fix $\sigma \in \Sigma_n$, then we also fix the order of multiplication.
\end{example}
The above two examples suggest that to extend $\mathsf{fgFMon}$ to a PROP for (noncommutative) bialgebras, we have to add the extra data of some permutations that will determine the order of multiplication once we evaluate in an algebra.
\begin{notation}\label{notation}
Let $A$ be a bialgebra in $\mathsf{Vect}_k$ and let $f: n \longrightarrow m$ be an arrow in $\mathsf{B}$. Viewing $A: \mathsf{B} \longrightarrow \mathsf{Vect}_k$ as a functor, it is possible to read off $Af$ (and hence $f$) from the expression of $(Af)(a_1 \otimes \cdots \otimes a_n) \in A^{\otimes m}$. For instance, if $(a \otimes b \mapsto \varepsilon (a_{(2)}) b_{(1)} \otimes 1 \otimes b_{(2)}a_{(1)} )$ then $Af$ must be
$$Af = (\varepsilon \otimes \mathrm{Id} \otimes \eta \otimes \mu) \circ P_{(1234)} \circ (\mathcal{D}elta \otimes \mathcal{D}elta) $$ (obviously $f$ has the same expression). Therefore, for a algebra object $A$ of a symmetric monoidal category $\mathcal{C}$, we will adopt the convention of writing maps $Af$ with the symbols that are used for $k$-algebras. Thence when convenient we will denote a map in $\mathcal{C}$ as eg $(a \otimes b \mapsto \varepsilon (a_{(2)}) b_{(1)} \otimes 1 \otimes b_{(2)}a_{(1)} )$ keeping in mind that there is no underlying set in $A$ and the expressing is purely notational.
\end{notation}
\subsection{A unique normal form in $\mathsf{B}$}
On our way to obtaining an equivalent PROP for bialgebras it will become essential to have control over the arrows we can find in $\mathsf{B}$. The main result of this section ensures that any such arrow has a unique ``normal form''. The main tool to show this will be the so-called ``diamond lemma''. Let us recall the setup.
Let $S$ be a set. A \textit{reduction} on $S$ is a strictly antisymmetric relation on $S$, which we denote by $\rightarrow$. A \textit{reduction chain} on $S$ is a sequence $x_1 \rightarrow x_2 \rightarrow x_3 \rightarrow \cdots$. We say that the relation $\rightarrow$ satisfies the \textit{descending chain condition} if every reduction chain is finite. An element $x \in S$ is said to be in \textit{normal form} if it is minimal with respect to $\rightarrow$, that is, if there is no $y \in S$ such that $x \rightarrow y$. Given a reduction $\rightarrow$ on $S$, its \textit{reflexive-transitive closure} is the order relation $\twoheadrightarrow$ defined as $x \twoheadrightarrow y$ if and only if there is a reduction chain connecting $x$ and $y$. More formally, $x \twoheadrightarrow y$ if and only if there exist $x_0, \ldots , x_n \in S$ such that $x=x_0 \rightarrow x_1 \rightarrow \cdots \rightarrow x_n=y$. Finally, we say that $\rightarrow$ satisfies the \textit{diamond condition} if given $y \leftarrow x \rightarrow z$, there exists $w \in S$ such that $y \twoheadrightarrow w$ and $z \twoheadrightarrow w$.
\begin{lemma}[Diamond lemma]\label{lem diamond}
Let $\rightarrow$ be a reduction on a non-empty set $S$. If $\rightarrow$ satisfies the descending chain condition and the diamond condition, then every element in $S$ has a unique normal form.
\end{lemma}
We refer the reader to \cite[4.6]{barnatanveen} or \cite[4.47]{becker} for a proof of the lemma.
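To make the hypotheses concrete, here is a small executable illustration (our own toy example, not from the source): the string rewriting system on words over $\{a,b\}$ with the single elementary reduction $ba \to ab$ satisfies both the descending chain condition and the diamond condition, so every word has a unique normal form, namely its sorted spelling.

```python
# Toy illustration of the diamond lemma: the reduction "ba" -> "ab"
# on strings over {a, b} has unique normal forms (sorted strings).

def reductions(word):
    """All words obtained from `word` by one elementary reduction."""
    return [word[:i] + "ab" + word[i + 2:]
            for i in range(len(word) - 1)
            if word[i:i + 2] == "ba"]

def normal_form(word):
    """Apply reductions until none is possible."""
    while True:
        nexts = reductions(word)
        if not nexts:
            return word
        word = nexts[0]  # any choice works: the normal form is unique

def all_normal_forms(word):
    """Exhaustively follow every reduction chain."""
    nexts = reductions(word)
    if not nexts:
        return {word}
    return set().union(*(all_normal_forms(n) for n in nexts))

# Every reduction path from "babba" ends at the same word:
assert all_normal_forms("babba") == {"aabbb"}
```

Each reduction strictly decreases the number of inversions, giving the descending chain condition; overlapping reductions can always be completed to a common word, giving the diamond condition.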
\begin{theorem}\label{thm normal form}
Every morphism $f: n \longrightarrow m$ in $\mathsf{B}$ factors in a unique way as $$f= \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]} $$ for some (unique) integers $p_1, \ldots , p_n, q_1, \ldots , q_m \geq 0$ such that $\sum_i p_i = \sum_i q_i =:s$, and a (unique) permutation $\sigma \in \Sigma_s$.
\end{theorem}
Graphically, we can represent this factorisation as depicted in Figure \ref{fig factorisation}.
\begin{figure}
\caption{Graphical representation of the factorisation $f= \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]}$.}
\label{fig factorisation}
\end{figure}
The observation that ignites the proof is the following: among the relations defining the bialgebra PROP $\mathsf{B}$, depicted in Figure \ref{fig bialgebra axioms}, with the exception of (a) and (c) (which are absorbed into the iterated (co)multiplications appearing in the statement of the above theorem), only one side of each equation is in the normal form that the theorem suggests.
Following the construction described in subsection \ref{subsec PROPs}, let $\mathcal{B}$ be the PROP arising from a set of generators $\mathcal{G}$ containing $\mu:2\longrightarrow 1$, $\eta: 0\longrightarrow 1$, $\Delta: 1\longrightarrow 2$ and $\varepsilon: 1 \longrightarrow 0$ and a set of equations $\mathcal{E}$ containing the equations appearing in Figure \ref{fig bialgebra axioms}(a),(c). Next we are going to define a reduction on the set $$\mathrm{arr} (\mathcal{B}) := \coprod_{n,m \geq 0} \hom{\mathcal{B}}{n}{m}. $$
Let
\begin{align*}
\mu(\mathrm{Id} \otimes \eta) \rightarrow \mathrm{Id} \leftarrow \mu(\eta \otimes \mathrm{Id}) \qquad &, \qquad (\mathrm{Id} \otimes \varepsilon)\Delta \rightarrow \mathrm{Id} \leftarrow (\varepsilon \otimes \mathrm{Id})\Delta\\
\Delta \mu \rightarrow (\mu \otimes \mu)(\mathrm{Id} \otimes P_{A,A} \otimes \mathrm{Id})(\Delta \otimes \Delta) \qquad &, \qquad \Delta \eta \rightarrow \eta \otimes \eta\\
\varepsilon \mu \rightarrow \varepsilon \otimes \varepsilon \qquad &, \qquad \varepsilon \eta \rightarrow \mathrm{Id}_{\mathds{1}}
\end{align*}
be the \textit{elementary reductions}, and extend $\rightarrow$ to $\mathrm{arr} (\mathcal{B})$ in a way compatible with composition and the monoidal product, that is, if $f \rightarrow g$ is an elementary reduction then $$h_4 \circ (h_2 \otimes f \otimes h_3) \circ h_1 \rightarrow h_4 \circ (h_2 \otimes g \otimes h_3) \circ h_1 $$
for any arrows $h_1, \ldots, h_4 \in \mathrm{arr} (\mathcal{B}) $ (so long as the composite makes sense).
\begin{lemma}\label{lem elements in normal form}
The subset of elements of $\mathrm{arr} (\mathcal{B})$ in normal form with respect to the reduction described above equals the set $$ \{ \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]} : p_1, \ldots , p_n, q_1, \ldots , q_m \geq 0, \sum_i p_i = \sum_i q_i =:s , \sigma \in \Sigma_s \}.$$
\end{lemma}
\begin{proof}
It is clear that the elements of this set are in normal form, since no (elementary) reduction is possible on them. Conversely, suppose an element of $\mathrm{arr} (\mathcal{B})$ is in normal form. Then it must be some (not \textit{any}) monoidal product and composite of the following ``building blocks'': $$ P_{\sigma} \quad , \quad \mu^{[q]} \quad , \quad \Delta^{[p]} \quad , \quad (\mu \otimes \mu)(\mathrm{Id} \otimes P_{A,A} \otimes \mathrm{Id})(\Delta \otimes \Delta) \quad , \quad \eta \otimes \eta \quad , \quad \varepsilon \otimes \varepsilon$$ (note that each of the first three families includes the identity). By the naturality of the symmetry $P$, any monoidal product and composite of arrows $P_{\sigma}$ and iterated multiplications $\mu^{[q]}$ equals (in the category $\mathcal{B}$) $ \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma '} $ for some $\sigma' \in \Sigma_{q_1 + \cdots + q_m}$ and some $m \geq 0$. Similarly, any monoidal product and composite of arrows $P_{\sigma}$ and iterated comultiplications $\Delta^{[p]}$ equals $ P_{\sigma '} \circ \Delta^{[p_1, \ldots , p_n]} $ for some $n \geq 0$. Furthermore, we can obtain the last three building blocks from the first three, setting $p=0=q$ and using the monoidal product. Hence we are reduced to a monoidal product and composite of the first three families. But in such a composite we cannot find a $\mu^{[q]}$ applied before a $\Delta^{[p]}$, since that would violate the normal form hypothesis. So we are left with the elements of the given set, as required.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm normal form}]
By the previous lemma, it suffices to check that the reduction $\rightarrow$ on $\mathrm{arr} (\mathcal{B})$ satisfies the conditions of the Diamond lemma, since there is a canonical full functor $\pi: \mathcal{B} \longrightarrow \mathsf{B}$ with the property that reduction chains map to chains of equalities.
That the relation $\rightarrow$ satisfies the descending chain condition follows since any element of $\mathrm{arr} (\mathcal{B})$ is a finite composite of finitely many generators, there are finitely many elementary reductions, there are no reduction loops (if $f \rightarrow g$ then no reduction chain starting from $g$ can hit $f$, as $\mathcal{B}$ arises as a free category where only the (co)associativity relations are modded out) and the identity morphism is in normal form.
To check the diamond condition, let $f_1 \leftarrow f \rightarrow f_2$. If either $f_1$ or $f_2$ is in normal form, then so is the other and $f_1=f_2$. For if $f_2$ is in normal form then $f$ must differ from it by only one reduction. This means that there is no other reduction possible in $f$, so if $f \rightarrow f_1$ then $f_1=f_2$. For the general case, if $f_1 \leftarrow f \rightarrow f_2$, then $f_1$ and $f_2$ differ from $f$ by two (possibly different) reductions. But then we can always apply the elementary reduction that induces $f \rightarrow f_2$ to $f_1$, and the one that induces $f \rightarrow f_1$ to $f_2$, which by construction must yield the same element, $f_1 \rightarrow g \leftarrow f_2$, hence the diamond.
\end{proof}
\begin{corollary}\label{cor normal form algebras}
Every morphism $f: n \longrightarrow m$ in $\mathsf{A}$ factors in a unique way as $$f= \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} $$ for some (unique) integers $q_1, \ldots , q_m \geq 0$ and $\sigma \in \Sigma_n$.
\end{corollary}
\begin{remark}
The existence of the factorisation of \ref{thm normal form} is shown in \cite[Lemma 2]{habiro2016} in a more general context (namely for the PROP for commutative Hopf algebras) using a different approach, yet the arguments used there are essentially the same as the ones used in \ref{lem elements in normal form}. We prefer to use an argument involving the diamond lemma to get existence and uniqueness at once. Also note that for the PROP for (commutative) Hopf algebras, uniqueness is not possible, by the antipode axiom.
\end{remark}
\subsection{The main theorem}
Let us write $\mathbb{N}$ for the monoid of non-negative integers. Recall that the forgetful functor $U: \mathsf{CMon} \longrightarrow \mathsf{Mon}$ from commutative monoids to monoids has a left-adjoint $(-)^{ab} : \mathsf{Mon} \longrightarrow \mathsf{CMon}$ given by the \textit{abelianisation} of the monoid, that is, the quotient by the smallest congruence relation containing the relations $mn \sim nm$. In particular, we have $$F(x_1, \ldots , x_n )^{ab}\cong \mathbb{N} x_1 \oplus \cdots \oplus \mathbb{N} x_n$$ where $\oplus$ is the biproduct (both product and coproduct) of the category $\mathsf{CMon}$.
Denote by $\pi_i$ the composite $$F(x_1, \ldots , x_n ) \longrightarrow \mathbb{N} x_1 \oplus \cdots \oplus \mathbb{N} x_n \longrightarrow \mathbb{N} x_i$$ where the first map is the natural map to the abelianisation and the second map is the $i$-th projection. Likewise, we will denote as $\pi$ the composite $$F(x_1, \ldots , x_n ) \longrightarrow \mathbb{N} x_1 \oplus \cdots \oplus \mathbb{N} x_n \longrightarrow \mathbb{N}$$ where the first map is as before and the second map is the unique map coming from the coproduct such that the composites $\mathbb{N} x_i \longrightarrow \mathbb{N} x_1 \oplus \cdots \oplus \mathbb{N} x_n \longrightarrow \mathbb{N}$ are the identity maps (this is the ``sum'' map). For $w \in F(x_1, \ldots , x_n )$, it should be clear that $\pi_i (w)$ counts how many times the letter $x_i$ appears in $w$, whereas $\pi(w)$ counts the total number of letters $w$ contains.
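As a quick sanity check (an illustration of our own, not from the source), the counting maps $\pi_i$ and $\pi$ can be modelled on words encoded as lists of letter indices:

```python
# Modelling the maps pi_i (occurrences of x_i) and pi (total length)
# on words of F(x_1, ..., x_n), encoded as lists of letter indices;
# e.g. the word x1 x1 x2 is [1, 1, 2].

def pi_i(word, i):
    """Number of occurrences of the letter x_i in the word."""
    return word.count(i)

def pi(word):
    """Total number of letters in the word."""
    return len(word)

w = [1, 2, 1, 3]            # the word x1 x2 x1 x3
assert pi_i(w, 1) == 2      # x1 appears twice
assert pi(w) == 4           # four letters in total
```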
\begin{definition}
Let $\widehat{\mathsf{fgFMon}}$ be the following category: its objects are the free monoids $F([n]) = F(x_1, \ldots , x_n)$, $n \geq 0$. An arrow is a pair $(f, \bm{\sigma})$, where $$f: F(x_1, \ldots , x_n)\longrightarrow F(y_1, \ldots , y_m)$$ is a monoid map and $\bm{\sigma} = (\sigma_1, \ldots , \sigma_m)$ is a tuple of $m$ permutations $\sigma_i \in \Sigma_{k_i}$, where $k_i := \pi_i (f(x_1 \cdots x_n))$.
The identity of an object $ F(x_1, \ldots , x_n)$ is given by the identity monoid homomorphism and $n$ copies of the unique element $\mathrm{Id}_1 \in \Sigma_1$. Moreover, this category is symmetric monoidal (hence a PROP), where the monoidal product is given by the free product $*$ of monoids on objects and $$(f,\bm{\sigma}) \otimes (g, \bm{\tau} ) := (f * g, \bm{\sigma \tau})$$ on morphisms. In the above expression, $\bm{\sigma \tau}$ stands for the concatenation of the two tuples of permutations. The unit is the trivial monoid $\bm{1} = F([0])$.
We still have to describe the composite law in $\widehat{\mathsf{fgFMon}}$. Setting up a formula for the composite is a highly non-trivial task and requires several constructions that we describe in the next subsection. Yet we would like to place the expression here for future reference. Given arrows $(f, \bm{\sigma}): F(x_1, \ldots x_n) \longrightarrow F(a_1, \ldots , a_m)$ and $(g, \bm{\tau}): F(a_1, \ldots , a_m) \longrightarrow F(s_1, \ldots, s_\ell)$, the composite $(g, \bm{\tau})\circ (f, \bm{\sigma}) =: (g \circ f, \bm{\tau} \circ \bm{\sigma})$ will consist of $\ell$ permutations $$\bm{\tau} \circ \bm{\sigma} := ((\bm{\tau} \circ \bm{\sigma})_1, \ldots , (\bm{\tau} \circ \bm{\sigma})_\ell)$$ where $ (\bm{\tau} \circ \bm{\sigma})_i \in \Sigma_{\pi_i (gf (x_1 \cdots x_n))}$. These permutations are uniquely determined by the following formula:
\begin{align}\label{eq composite law fgFMonhat}
\begin{split}
(\bm{\tau} \circ \bm{\sigma})_1 \otimes \cdots \otimes (\bm{\tau} \circ \bm{\sigma})_\ell &= \xi_{(g\circ f)(x_1 \cdots x_n)}\circ \rho_{f, \mu^{[m]} \circ g} \circ (\gamma_{(k_1, p_1)} \otimes \cdots \otimes \gamma_{(k_m, p_m)})\\
&\circ (\sigma_1^{\otimes p_1} \otimes \cdots \otimes \sigma_m^{\otimes p_m}) \circ \langle \Psi (g(a_1 \cdots a_m), \bm{\tau}) \rangle_{k_1^{\times p_1}, \ldots , k_m^{\times p_m}}
\end{split}
\end{align}
where $p_i:= \pi (g (a_i))$ and $k_i := \pi_i (f (x_1 \cdots x_n))$ for $1 \leq i \leq m$. The symbols used here are explained in \ref{constr Phi}, \ref{constr xi gamma} and \ref{constr Psi}.
\end{definition}
Although it may look strange, the previous composite law yields the following
\begin{theorem}\label{thm fundamental theorem}
The categories $\widehat{\mathsf{fgFMon}}$ and $ \mathsf{B}$ are monoidally equivalent. In other words, they are equivalent PROPs.
\end{theorem}
The proof of this theorem consists of a long construction and will be carried out in the next subsection. Part of this construction will consist of establishing the expression \eqref{eq composite law fgFMonhat}.
\begin{remark}
Informally, we would like to convince the reader that the previous theorem is true. Starting from the fact that there is an equivalence $\mathsf{ComB} \simeq \mathsf{fgFMon}$ as explained in \ref{ex ComB = fgFMon}, the permutations that we added to the monoid maps will fix the order of the multiplication. More concretely, $(\mu, \mathrm{Id}) \circ (P, \bm{\mathrm{Id}}) = (\mu, (12))$, compare with \ref{ex motivating 1}. Yet these permutations do not add extra data to the rest of the structure maps: for the iterated comultiplication $\Delta^{[m]}: F(x) \longrightarrow F(y_1, \ldots , y_m)$, $x \mapsto y_1 \cdots y_m$, we have that $\pi_i (y_1 \cdots y_m) =1$, so the $m$ permutations that accompany $\Delta^{[m]}$ must be the trivial ones, $\mathrm{Id}_1 \in \Sigma_1$. For the unit $\eta: \bm{1} \longrightarrow F(y)$ and the counit $\varepsilon : F(x) \longrightarrow \bm{1}$ this is even more vacuous, since there cannot be any permutation attached. Hence the permutations only add extra data for the multiplication map, which is precisely what we want to refine.
\end{remark}
\subsection{Proof of Theorem \ref{thm fundamental theorem}}
Our strategy consists of constructing, for any symmetric monoidal category $\mathcal{C}$, a bijection $$\mathsf{Alg}_{\widehat{\mathsf{fgFMon}}} (\mathcal{C}) \longrightarrowiso \mathsf{Alg}_{\mathsf{B}} (\mathcal{C})$$ natural in $\mathcal{C}$ and appealing to \ref{lemma yoneda}. The hardest part of the proof will be to check (or rather, to define) that the cumbersome composite law \eqref{eq composite law fgFMonhat} models the order of multiplication in a bialgebra. The first step in the construction is the following
\begin{lemma}\label{lem F(x) bialgebra object}
The object $ F([1])=F(x) \in \widehat{\mathsf{fgFMon}}$ is a bialgebra object. It has structure maps $\mu, \eta, \Delta, \varepsilon$ whose underlying monoid maps are the same as in \ref{ex ComB = fgFMon} and whose attached permutations are the identities.
\end{lemma}
The proof of the lemma requires handling the composition law of $\widehat{\mathsf{fgFMon}}$, which we agreed to describe later. Thus we defer this proof until page \pageref{proof lem F(x) bialg}.
This gives rise to a functor $$T: \mathsf{B} \longrightarrow \widehat{\mathsf{fgFMon}},$$ which by precomposition defines a map
\begin{equation}\label{eq bijection alg_B = alg_fgFMonhat}
\mathsf{Alg}_{\widehat{\mathsf{fgFMon}}} (\mathcal{C}) \longrightarrow \mathsf{Alg}_{\mathsf{B}} (\mathcal{C})
\end{equation}
natural in $\mathcal{C}$.
We aim to define an inverse for the previous map \eqref{eq bijection alg_B = alg_fgFMonhat}. For that purpose a key point will be to understand how we can get large permutations from smaller ones ``patched'' according to a certain element of a free monoid.
Recall that a \textit{partition} in $m\geq 0$ blocks on a set $S$ is a collection of $m$ pairwise disjoint subsets $S_1, \ldots , S_m \subseteq S$ (some of them possibly empty) such that $\cup_i S_i =S$. If $S$ is ordered, so are the subsets $S_i$. We denote by $\mathrm{Part}_m (S)$ the set of partitions in $m$ blocks on $S$.
\begin{construction}\label{constr Phi}
Let $m \geq 0$. Given a word $w \in F(y_1, \ldots , y_m) $, we will construct a partition of the set $[\pi (w)]$ given by $m$ subsets $S_i$, each of them with $\pi_i(w)$ elements. Write $w$ as a product of the elements $y_i$ with no powers (this is a product of $\pi (w)$ letters). Then $S_i$ is the ordered subset containing the positions, from left to right, where the letter $y_i$ appears in $w$. Denote by $\Phi (w)$ the partition defined this way.
\end{construction}
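The construction and its inverse can be sketched in a few executable lines (an illustration with our own encoding of words as lists of letter indices, positions counted from 1):

```python
# Sketch of Construction Phi: a word w in F(y_1, ..., y_m) determines a
# partition of [pi(w)] into m ordered blocks S_i, where S_i collects the
# positions (left to right) at which the letter y_i occurs.
# Words are lists of letter indices, e.g. y1 y1 y2 is [1, 1, 2].

def phi(word, m):
    """Return the partition (S_1, ..., S_m) of the positions 1..len(word)."""
    return tuple(tuple(pos for pos, letter in enumerate(word, start=1)
                       if letter == i)
                 for i in range(1, m + 1))

def phi_inverse(blocks):
    """Rebuild the word from a partition of 1..N into m blocks."""
    n = sum(len(b) for b in blocks)
    word = [0] * n
    for i, block in enumerate(blocks, start=1):
        for pos in block:
            word[pos - 1] = i
    return word

w = [1, 1, 2, 1, 2]                   # the word y1^2 y2 y1 y2
assert phi(w, 2) == ((1, 2, 4), (3, 5))
assert phi_inverse(phi(w, 2)) == w    # Phi is a bijection
```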
\begin{lemma}
Let $m \geq 0$. The previous construction defines a bijection
\begin{equation}
\Phi : F(y_1, \ldots , y_m) \longrightarrowiso \coprod_{N \geq 0} \mathrm{Part}_m ([N]).
\end{equation}
\end{lemma}
\begin{proof}
The obvious inverse is defined as the map taking a partition $S_1, \ldots , S_m$ of $[N]$ to the word $w$ product of $N$ letters $y_i$ according to the order specified by the subsets $S_i$. This establishes the bijection.
\end{proof}
\begin{construction}\label{constr xi gamma}
Given a word $w \in F(y_1, \ldots , y_m)$ and $\Phi(w)=(S_1, \ldots , S_m)$, let $\xi_w \in \Sigma_{\pi(w)}$ be the unique permutation mapping $S_i$ to $$\left\lbrace \sum_{j=1}^{i-1} \pi_j(w) + 1 < \cdots < \sum_{j=1}^{i} \pi_j(w) \right\rbrace $$ in an order-preserving way. We can think of this permutation as ``turning'' the word $y_{1}^{\pi_1(w)} \cdots y_{m}^{\pi_m(w)}$ into $w$. A schematic of this permutation is depicted in Figure \ref{fig omega}. A special instance of this construction is the case where $w_0= (y_1 \cdots y_m) \overset{p}{\cdots}(y_1 \cdots y_m)$. In this case we denote $$\gamma_{(m,p)}:= \xi_{w_0}^{-1} \in \Sigma_{mp}.$$ Explicitly, $$\gamma_{(m,p)} (lm+r) := (r-1)p+(l+1)$$ for $0 \leq l \leq p-1$ and $1 \leq r \leq m$. It is straightforward to check that $\gamma_{(m,1)} = \mathrm{Id}_m $ and $\gamma_{(1,p)} = \mathrm{Id}_p$.
Note that if
\begin{equation}\label{eq w k_i,j}
w= y_1^{k_{1,1}} y_2^{k_{2,1}} \cdots y_m^{k_{m,1}} y_1^{k_{1,2}}y_2^{k_{2,2}} \cdots y_m^{k_{m,p}} \in F(y_1, \ldots , y_m)
\end{equation}
for some $k_{i,j} \geq 0$, then we have $$ \xi_{w}= \langle \xi_{w_0} \rangle_{k_{1,1}, k_{2,1}, \ldots , k_{m,p}} \in \Sigma_{\pi(w)}.$$
\begin{figure}
\caption{Schematic of the permutation $\xi_w \in \Sigma_{\pi(w)}$.}
\label{fig omega}
\end{figure}
\end{construction}
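The closed formula for $\gamma_{(m,p)}$ is easy to check by machine; the following sketch (our own dictionary encoding of permutations, mapping each point to its image) builds $\gamma_{(m,p)}$ from the formula and verifies the stated identities $\gamma_{(m,1)} = \mathrm{Id}_m$ and $\gamma_{(1,p)} = \mathrm{Id}_p$:

```python
# Sketch of the permutation gamma_{(m,p)} in Sigma_{mp}, via the
# closed formula from the construction:
#   gamma_{(m,p)}(l*m + r) = (r-1)*p + (l+1),  0 <= l <= p-1, 1 <= r <= m.

def gamma(m, p):
    """Return gamma_{(m,p)} as a dict j -> gamma(j) on {1, ..., m*p}."""
    return {l * m + r: (r - 1) * p + (l + 1)
            for l in range(p) for r in range(1, m + 1)}

def is_identity(perm):
    """Check that a permutation (as a dict) fixes every point."""
    return all(v == k for k, v in perm.items())

# gamma_{(m,1)} and gamma_{(1,p)} are identities, as stated:
assert is_identity(gamma(4, 1))
assert is_identity(gamma(1, 5))
# gamma_{(2,3)} interleaves, sending block positions to index positions:
assert gamma(2, 3) == {1: 1, 2: 4, 3: 2, 4: 5, 5: 3, 6: 6}
```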
If $f: F(x) \longrightarrow F(y_1, \ldots , y_m)$ with $f(x)=w$ and $g: F(y_1, \ldots , y_m) \longrightarrow F(s)$, with $g(y_i)= s^{p_i}$, $p_i \geq 0$, we define
\begin{equation}
\rho_{f,g} := (\langle \xi_w \rangle_{p_1^{\times k_1}, \ldots , p_m^{\times k_m}})^{-1}
\end{equation}
where $k_i:= \pi_i (w)$ and $p_i^{\times k_i}$ stands for the sequence $p_i , \ldots , p_i$ in which $p_i$ appears $k_i$ times.
\begin{construction}\label{constr Psi}
Let $k_1, \ldots , k_m \geq 0$ and let $n= \sum_i k_i$. Write $$\mathcal{W}(k_1, \ldots , k_m):= \{ w \in F(y_1, \ldots, y_m) : \pi_i (w) = k_i , \ 1 \leq i \leq m \}.$$
We define a map
$$\Psi_{k_1, \ldots , k_m} = \Psi : \mathcal{W}(k_1, \ldots , k_m) \times \Sigma_{k_1} \times \cdots \times \Sigma_{k_m} \longrightarrow \Sigma_n$$ as follows:
$$\Psi (w, \sigma_1, \ldots , \sigma_m) := \xi_w^{-1} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m) .$$
\end{construction}
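A brute-force sketch of this construction (with one-line notation for permutations, an encoding of our own choosing) lets one confirm the bijectivity claim of the next lemma on small inputs:

```python
# Sketch of Construction Psi: Psi(w, s_1, ..., s_m) = xi_w^{-1} o (s_1 ⊗ ... ⊗ s_m).
# Words are lists of letter indices; a permutation is a tuple in
# one-line notation, perm[j-1] being the image of j.

from itertools import permutations, product

def xi(word, m):
    """xi_w: send the positions of each letter, in order, to consecutive slots."""
    perm, image = [0] * len(word), 1
    for i in range(1, m + 1):
        for pos, letter in enumerate(word):
            if letter == i:
                perm[pos] = image
                image += 1
    return tuple(perm)

def inverse(perm):
    inv = [0] * len(perm)
    for j, v in enumerate(perm, start=1):
        inv[v - 1] = j
    return tuple(inv)

def compose(s, t):
    """(s o t)(j) = s(t(j))."""
    return tuple(s[t[j] - 1] for j in range(len(t)))

def block(*perms):
    """Block product s_1 ⊗ ... ⊗ s_m."""
    out, offset = [], 0
    for p in perms:
        out.extend(offset + v for v in p)
        offset += len(p)
    return tuple(out)

def psi(word, m, *perms):
    return compose(inverse(xi(word, m)), block(*perms))

# Psi: W(2,1) x Sigma_2 x Sigma_1 -> Sigma_3 hits all 3! = 6 permutations:
k1, k2 = 2, 1
words = [w for w in product([1, 2], repeat=3)
         if w.count(1) == k1 and w.count(2) == k2]
images = {psi(list(w), 2, s1, s2)
          for w in words
          for s1 in permutations(range(1, k1 + 1))
          for s2 in permutations(range(1, k2 + 1))}
assert len(images) == 6   # 3 words x 2 x 1 inputs, no collisions
```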
\begin{lemma}\label{lem utilisimo}
The previous map $$ \Psi: \mathcal{W}(k_1, \ldots , k_m) \times \Sigma_{k_1} \times \cdots \times \Sigma_{k_m} \longrightarrowiso \Sigma_n $$ is a bijection. Furthermore, it is ``monoidal'' in the sense that the diagram
$$
\begin{tikzcd}
\mathcal{W}(k_1, \ldots , k_m) \times \prod_{i=1}^m \Sigma_{k_i} \times \mathcal{W}(k_1', \ldots , k'_r) \times \prod_{i=1}^r \Sigma_{k'_i} \rar{\Psi_{k_1, \ldots , k_m} \times \Psi_{k'_1, \ldots , k'_r}} \dar &[3.8em] \Sigma_n \times \Sigma_{n'} \dar{\otimes} \\
\mathcal{W}(k_1, \ldots , k_m, k'_1, \ldots , k'_r) \times \prod_{i=1}^m \Sigma_{k_i} \times \prod_{i=1}^r \Sigma_{k'_i} \rar{\Psi_{k_1, \ldots , k_m, k'_1, \ldots , k'_r}} & \Sigma_{n +n'}
\end{tikzcd}
$$
commutes, where $n'= \sum_i k'_i$ and the left-hand arrow concatenates the words and acts as the identity on the symmetric groups.
\end{lemma}
\begin{proof}
Observe that the cardinality of $\mathcal{W}(k_1, \ldots , k_m)$ is the multinomial coefficient $\binom{n}{ k_1, \ldots , k_m}$, which means that the source and target of $\Psi$ have the same cardinality since $$ \binom{n}{ k_1, \ldots , k_m} = \frac{n!}{k_1 ! \ldots k_m !}.$$ Hence it suffices to check that $\Psi$ is surjective: given $\alpha \in \Sigma_n$, turn the formal word $z_{1} \cdots z_{n}$ into a word $w \in F(y_1, \ldots, y_m)$ by the following rule: $$z_{i} := \begin{cases}
y_1, & 1 \leq \alpha^{-1} (i) \leq k_1\\
y_2, & k_1+1 \leq \alpha^{-1} (i) \leq k_1+k_2\\
\vdots \\
y_m, & \sum_{j=1}^{m-1} k_j+1 \leq \alpha^{-1} (i) \leq \sum_{j=1}^{m} k_j
\end{cases}$$ It is clear that $w \in \mathcal{W}(k_1, \ldots , k_m)$. By construction, $ \xi_w \circ \alpha$ is of the form $\sigma_1 \otimes \cdots \otimes \sigma_m$ for some (unique) permutations $\sigma_i \in \Sigma_{k_i}$, which establishes the bijection.
The commutativity of the diagram follows from the observation that the block product of permutations is compatible with the group law of the symmetric groups\footnote{A more concise way to restate this is by saying that if we view the disjoint union $\coprod_n \Sigma_n$ as a groupoid in the usual way, then the block product $\otimes$ makes it a monoidal category.} in the sense that $$(\sigma \otimes \sigma') \circ (\tau \otimes \tau') = (\sigma \circ \tau) \otimes (\sigma' \circ \tau') .$$ Then we have
\begin{align*}
\Psi_{k_1, \ldots , k_m} (w, \sigma_1, \ldots , \sigma_m) &\otimes \Psi_{k'_1, \ldots , k'_r} (w', \sigma'_1, \ldots , \sigma'_r) \\ &= [\xi_w^{-1} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m)] \otimes [\xi_{w'}^{-1} \circ (\sigma'_1 \otimes \cdots \otimes \sigma'_r)]\\
&= (\xi_w^{-1} \otimes \xi_{w'}^{-1}) \circ (\sigma_1 \otimes \cdots \otimes \sigma_m \otimes \sigma'_1 \otimes \cdots \otimes \sigma'_r)\\
&= \xi_{ww'}^{-1} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m \otimes \sigma'_1 \otimes \cdots \otimes \sigma'_r)\\
&= \Psi_{k_1, \ldots , k_m, k'_1, \ldots , k'_r} (ww', \sigma_1, \ldots , \sigma_m, \sigma'_1, \ldots , \sigma'_r)
\end{align*}
which concludes the proof.
\end{proof}
Now we are ready to define an inverse for \eqref{eq bijection alg_B = alg_fgFMonhat}. Given a bialgebra $A \in \mathcal{C}$, define a functor $$\widehat{A}: \widehat{\mathsf{fgFMon}} \longrightarrow \mathcal{C}$$ as follows: on objects, $\widehat{A}(F(x_1, \ldots , x_n)):= A^{\otimes n}$. On morphisms, given an arrow $(f, \bm{\sigma}): F(x_1, \ldots , x_n) \longrightarrow F(y_1, \ldots , y_m)$, let
$$ p_i := \pi(f(x_i)) \quad , \quad q_j := \pi_j (f(x_1 \cdots x_n)) \quad , \quad \sigma := \Psi (f(x_1 \cdots x_n), \bm{\sigma}) $$ for $1 \leq i \leq n$ and $1 \leq j \leq m$ and define
\begin{equation}\label{eq def Ahat}
\widehat{A} (f, \bm{\sigma}):= \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]} .
\end{equation}
Let us illustrate the above assignment.
\begin{example}
Let $(f, \bm{\sigma}): F(x,y) \longrightarrow F(a,b)$ be the map in $\widehat{\mathsf{fgFMon}}$ determined by $$ f(x):= a^2 b \quad , \quad f(y):= abab \quad , \quad \sigma_1 := (4321) \quad , \quad \sigma_2 := (13),$$ where $\bm{\sigma} = (\sigma_1, \sigma_2)$. Note that $\sigma_1 \in \Sigma_4$ as $4= \pi_1 (f(xy)) = \pi_1 (a^2babab)$ and similarly $\sigma_2 \in \Sigma_3$. We easily see that $p_1 = 3$, $p_2= 4$, $q_1 =4$, $q_2 = 3$. Then
\begin{align*}
\sigma &= \Psi (a^2babab, (4321), (13)) = \xi_{a^2babab}^{-1} \circ [(4321) \otimes (13)] \\ &= (3465)(4321)(57)= (165732) \in \Sigma_7.
\end{align*}
Alternatively, this can also be computed as follows: consider the ordered sets $A:= \{ z_1, z_2, z_3, z_4 \}$ and $B:= \{ z_5, z_6, z_7 \}$, and let $\sigma_1$ act on $A$ and $\sigma_2$ on $B$, which yields $\sigma_1 A = \{z_4, z_1, z_2, z_3 \}$ and $ \sigma_2 B = \{ z_7, z_6, z_5 \}$. Next consider the ordered union $\sigma_1 A \amalg \sigma_2 B = \{z_4, z_1, z_2, z_3 , z_7, z_6, z_5 \}$. Now for $1 \leq i \leq 4$, we replace the letter $z_i$ by $z_{o(i)}$ where $o(i)$ is the position of the $i$-th letter $a$ in the pattern $a a b abab$. Similarly for $1 \leq i \leq 3$ replace $z_{i+4}$ by $z_{o'(i)}$ where $o'(i)$ is the position of the $i$-th letter $b$ in the pattern, according to the order from left to right. This yields the pattern $z_6 z_1 z_2 z_4 z_7 z_5 z_3$. Then $\sigma$ is the permutation such that the previous pattern equals $z_{\sigma (1)} \cdots z_{\sigma(7)}$, that is, $\sigma = (165732) $, as above.
Following \ref{notation} we conclude that $\widehat{A} (f, \bm{\sigma})$ is the map $$x \otimes y \mapsto y_{(3)} x_{(1)} x_{(2)} y_{(1)} \otimes y_{(4)} y_{(2)} x_{(3)}.$$
\end{example}
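The example can be checked numerically; the following self-contained sketch (one-line notation for permutations, an encoding of our own) recomputes $\sigma = \xi_w^{-1} \circ (\sigma_1 \otimes \sigma_2)$ and confirms the one-line form $(6,1,2,4,7,5,3)$ of the cycle $(165732)$:

```python
# Numerical check of the example: sigma = xi_w^{-1} o (sigma_1 ⊗ sigma_2)
# for w = a^2 b a b a b, sigma_1 = (4321), sigma_2 = (13).

w = [1, 1, 2, 1, 2, 1, 2]            # a^2 b a b a b, with a = 1, b = 2

# xi_w sends the positions of each letter, in order, to consecutive slots
xi = [0] * 7
image = 1
for letter in (1, 2):
    for pos, x in enumerate(w):
        if x == letter:
            xi[pos] = image
            image += 1

xi_inv = [0] * 7
for j, v in enumerate(xi, start=1):
    xi_inv[v - 1] = j

# Block product sigma_1 ⊗ sigma_2: the cycle (4321) is (4,1,2,3) in
# one-line notation on {1..4}, and (13) acts as (7,6,5) on {5,6,7}.
blk = [4, 1, 2, 3, 7, 6, 5]

sigma = [xi_inv[blk[j] - 1] for j in range(7)]
assert sigma == [6, 1, 2, 4, 7, 5, 3]   # the cycle (165732)
```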
Let us continue with the proof of \ref{thm fundamental theorem}. The key point will be to set the expression \eqref{eq composite law fgFMonhat}, which has to model all changes of indices that take place when applying a combination of multiplication and comultiplication maps to an element of a bialgebra. In other words, we have to set \eqref{eq composite law fgFMonhat} so that $\widehat{A}$ actually defines a functor.
\begin{proposition}
Given a bialgebra $A \in \mathcal{C}$, we have that
\begin{equation}\label{eq what is the composite}
\widehat{A} ( (g, \bm{\tau}) \circ (f, \bm{\sigma}) ) = \widehat{A}(g, \bm{\tau}) \circ \widehat{A}(f, \bm{\sigma}) ,
\end{equation}
so that $\widehat{A}: \widehat{\mathsf{fgFMon}} \longrightarrow \mathcal{C}$ is indeed a functor.
\end{proposition}
\begin{proof}
Rather than a verification that \eqref{eq what is the composite} holds using \eqref{eq composite law fgFMonhat}, the proof will construct the composite law \eqref{eq composite law fgFMonhat} of $\widehat{\mathsf{fgFMon}}$ by \textit{imposing} the equality \eqref{eq what is the composite} and the associativity of the composite. We will subdivide the proof in several steps, increasing the complexity of the maps at hand.
\noindent \textit{Step 1.} We start by composing maps between the objects $F([1])$. Let $(f, \sigma): F(x) \longrightarrow F(a)$, $x \mapsto a^k$ and $(g, \tau): F(a) \longrightarrow F(s)$, $a \mapsto s^p$ for integers $k,p \geq 0$. If $\sigma= \mathrm{Id}_k \in \Sigma_k $ and $\tau = \mathrm{Id}_p \in \Sigma_p$, then their images under $\widehat{A}$ correspond to the maps $x \mapsto x_{(1)} \cdots x_{(k)}$ and $a \mapsto a_{(1)} \cdots a_{(p)}$. Their composite $\widehat{A}(g, \mathrm{Id}_p) \circ \widehat{A}(f, \mathrm{Id}_k)$ is
\begin{equation}\label{eq base case}
x \mapsto (x_{(1)} \cdots x_{(k)})_{(1)} \cdots ( x_{(1)} \cdots x_{(k)})_{(p)} = \prod_{i=1}^p x_{(i)} x_{(p+i)} \cdots x_{((k-1)p+i)}.
\end{equation}
This implies that the permutation associated to the composite is given by the element ${\gamma_{(k,p)} \in \Sigma_{kp}}$, which was already defined in \ref{constr xi gamma}. If $\sigma, \tau$ are not the identity permutations, then the left-hand side of the above expression becomes $$x \mapsto (x_{(\sigma 1)} \cdots x_{(\sigma k)})_{(\tau 1)} \cdots ( x_{(\sigma 1)} \cdots x_{(\sigma k)})_{(\tau p)} .$$ This says that $\tau$ acts on \eqref{eq base case} permuting blocks of size $k$, whereas $\sigma$ acts inside each of the $p$ blocks. In total, the permutation associated to the composite $\widehat{A} ( (g, \tau) \circ (f,\sigma) )$ must be
\begin{equation}
\gamma_{(k,p)} \circ (\sigma \otimes \overset{p}{\cdots} \otimes \sigma) \circ \langle \tau \rangle_{k, \overset{p}{\cdots} , k} \in \Sigma_{kp}.
\end{equation}
\noindent \textit{Step 2.} We now want to study the case where $(f, \bm{\sigma}): F(x) \longrightarrow F(a_1, \ldots , a_m)$, $x \mapsto w$, $\bm{\sigma} = (\sigma_1, \ldots, \sigma_m)$ and $(g, \tau): F(a_1, \ldots , a_m) \longrightarrow F(s)$, $a_i \mapsto s^{p_i}$, $p_i \geq 0$. Let us start from the base case where $\sigma_i= \mathrm{Id}$ , $\tau=\mathrm{Id}$ and $w = a_1^{k_1} \cdots a_m^{k_m}$. Here the situation is almost identical to the first part of Step 1: $\widehat{A} (f,\bm{\sigma}) $ corresponds to the map $x \mapsto x_{(1)} \cdots x_{(k_1)} \otimes x_{(k_1+1)} \cdots x_{(k_1+k_2)} \otimes \cdots$ whereas $\widehat{A} (g,\tau)$ corresponds to $a_1 \otimes \cdots \otimes a_m \mapsto (a_1)_{(1)} \cdots (a_1)_{(p_1)} \cdots (a_m)_{(1)} \cdots (a_m)_{(p_m)}$, so it follows reasoning with each of the ``monoidal blocks'' of $\widehat{A} (f,\bm{\sigma}) $ that the composite gets as permutation $$\gamma_{(k_1,p_1)}\otimes \cdots \otimes \gamma_{(k_m,p_m)} \in \Sigma_{N}$$ where $N= k_1p_1 + \cdots + k_mp_m$. Now suppose that $f(x)=w \in F(a_1, \ldots , a_m)$ is a general element. We want to show that if $ \rho_{f,g}$ is defined from $\Phi (w)$ as above, the permutation associated to the composite $\widehat{A} ( (g, \tau) \circ (f,\bm{\sigma}) )$ is $$ \rho_{f,g} \circ ( \gamma_{(k_1,p_1)}\otimes \cdots \otimes \gamma_{(k_m,p_m)} ) \in \Sigma_{N}.$$ Observe that the permutation $\xi_w$ is designed to keep track of the ``reordering'' of the word $a_{1}^{\pi_1(w)} \cdots a_{m}^{\pi_m(w)}$ turning into $w$. When passing to $ \langle \xi_w \rangle_{p_1^{\times k_1}, \ldots , p_m^{\times k_m}} = \rho_{f,g}^{-1}$, we extend this permutation to a block permutation according to the $p_i$'s. Taking the inverse is needed (just as with $\gamma_{(m,p)}$) to match the indices and not the positions.
Lastly, if $\bm{\sigma}$ and $\tau$ are not trivial, then the situation is again similar to the one in Step 1: each $ \sigma_i$ acts inside $p_i$ blocks (of size $k_i$), whereas $\tau$ acts by permuting these blocks. An illustration of this is in Figure \ref{fig permutations}.
\begin{figure}
\caption{Illustration of the action of $\bm{\sigma}$ and $\tau$ on the composite.}
\label{fig permutations}
\end{figure}
In total, we conclude the permutation associated to the composite $\widehat{A} ( (g, \tau) \circ (f,\bm{\sigma}) )$ must be
\begin{equation}\label{eq composite law Step 2}
\rho_{f,g} \circ ( \gamma_{(k_1,p_1)}\otimes \cdots \otimes \gamma_{(k_m,p_m)} ) \circ (\sigma_1^{ \otimes p_1} \otimes \cdots \otimes\sigma_m^{ \otimes p_m}) \circ \langle \tau \rangle_{k_1^{\times p_1}, \ldots k_m^{\times p_m}} \in \Sigma_{N}
\end{equation}
where $k_i = \pi_i (w)$ and $N= k_1p_1 + \cdots + k_mp_m.$
\noindent \textit{Step 3.} Next we consider the case where $(f, \bm{\sigma}): F(x_1, \ldots , x_n) \longrightarrow F(a_1, \ldots , a_m)$, $x_i \mapsto w_i$, $\bm{\sigma} = (\sigma_1, \ldots, \sigma_m)$ and $(g, \tau): F(a_1, \ldots , a_m) \longrightarrow F(s)$, $a_i \mapsto s^{p_i}$, $p_i \geq 0$ as before. In this case, the claim is that the formula \eqref{eq composite law Step 2} obtained in Step 2 still holds, taking $w := f(x_1 \cdots x_n)= w_1 \cdots w_n$. Vaguely speaking, the reason is that in this case $(f, \bm{\sigma})$ models a map $A^{\otimes n} \longrightarrow A^{\otimes m}$, which is determined by the image of the element $x_1 \otimes \cdots \otimes x_n$.
More precisely, let us show that if we impose \eqref{eq what is the composite}, then the equation
\begin{equation}\label{eq imposed associativity 1}
(f, \bm{\sigma}) \circ (\Delta^{[n]}, \bm{\mathrm{Id}}) = (f \circ \Delta^{[n]},\bm{\sigma} )
\end{equation}
must hold. We have that
\begin{align*}
\widehat{A}(f, \bm{\sigma}) \circ \widehat{A}(\Delta^{[n]}, \bm{\mathrm{Id}}) &= (\mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]} ) \circ \Delta^{[n]} \\
&= \mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1+ \cdots + p_n]} .
\end{align*}
Since we want the equation
$$\widehat{A}(f, \bm{\sigma}) \circ \widehat{A}(\Delta^{[n]}, \bm{\mathrm{Id}}) = \widehat{A}(f \circ \Delta^{[n]},\bm{\sigma} \circ \bm{\mathrm{Id}} )$$
to hold and $\sigma = \Psi (f(x_1 \cdots x_n), \bm{\sigma}) = \Psi ((f \circ \Delta^{[n]})(x), \bm{\sigma}) $, then $\bm{\sigma} \circ \bm{\mathrm{Id}}$ must be $\bm{\sigma}$. By associativity of the composite, we must have
\begin{align*}
(g \circ (f \circ \Delta^{[n]}), \tau \circ \bm{\sigma}) &= (g, \tau) \circ [ (f, \bm{\sigma}) \circ (\Delta^{[n]}, \bm{\mathrm{Id}}) ]\\
&= [(g, \tau) \circ (f, \bm{\sigma})] \circ (\Delta^{[n]}, \bm{\mathrm{Id}})\\
&= (g \circ f , \tau \circ \bm{\sigma})\circ (\Delta^{[n]}, \bm{\mathrm{Id}})\\
&= ((g \circ f) \circ \Delta^{[n]}, \tau \circ \bm{\sigma})
\end{align*}
where we have applied \eqref{eq imposed associativity 1} in the first and third equality. Since the left-hand side of the equation is computable in terms of the formula given in Step 2, the claim follows.
\noindent \textit{Step 4.} Let us finally deal with the general case where $(f, \bm{\sigma}): F(x_1, \ldots x_n) \longrightarrow F(a_1, \ldots , a_m)$ and $(g, \bm{\tau}): F(a_1, \ldots , a_m) \longrightarrow F(s_1, \ldots, s_\ell)$, $a_i \mapsto w'_i$ and $\bm{\tau} =( \tau_1, \ldots , \tau_\ell )$. By Step 3, it is enough to consider $(f, \bm{\sigma}): F(x) \longrightarrow F(a_1, \ldots , a_m)$, $x \mapsto w$, $\bm{\sigma} = (\sigma_1, \ldots, \sigma_m)$. As explained before, the composite $(g, \bm{\tau})\circ (f, \bm{\sigma}) = (gf, \bm{\tau} \circ \bm{\sigma})$ will have $\ell$ associated permutations that are denoted as $(\bm{\tau} \circ \bm{\sigma})_1, \ldots , (\bm{\tau} \circ \bm{\sigma})_\ell$.
The strategy we follow is ``dual'' to the one of Step 3, where we read off the composite with $(\Delta^{[n]}, \bm{\mathrm{Id}})$. This time we want to show that if $(f,\bm{\sigma}): F(x_1, \ldots , x_n) \longrightarrow F(a_1, \ldots , a_m)$ then the equation
\begin{equation}\label{eq imposed associativity 2}
(\mu^{[m]}, \bm{\mathrm{Id}}) \circ (f,\bm{\sigma}) = (\mu^{[m]} \circ f, \Psi (f(x_1 \cdots x_n), \bm{\sigma}))
\end{equation}
must hold. Indeed for any bialgebra $A \in \mathcal{C}$ we have
\begin{align*}
\widehat{A}(\mu^{[m]}, \bm{\mathrm{Id}}) \circ \widehat{A}(f, \bm{\sigma}) &= \mu^{[m]} \circ (\mu^{[q_1, \ldots , q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]} ) \\
&= \mu^{[q_1 + \cdots + q_m]} \circ P_{\sigma} \circ \Delta^{[p_1, \ldots , p_n]}
\end{align*}
where $\sigma = \Psi (f(x_1 \cdots x_n), \bm{\sigma})$. Since we want the equation
$$\widehat{A}(\mu^{[m]}, \bm{\mathrm{Id}}) \circ \widehat{A}(f, \bm{\sigma}) = \widehat{A}(\mu^{[m]} \circ f ,\bm{\mathrm{Id}}\circ \bm{\sigma} )$$
to hold, then $\bm{\mathrm{Id}}\circ \bm{\sigma}$ must be\footnote{Alternatively, we could use \eqref{eq composite law Step 2}, which yields the same result.} $\sigma = \Psi (f(x_1 \cdots x_n), \bm{\sigma}) = \xi_{f(x_1 \cdots x_n)}^{-1} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m)$. In particular, this says that $\bm{\sigma}$ is determined by $\sigma$, since
\begin{equation}\label{eq sigma determines boldsigma}
\sigma_1 \otimes \cdots \otimes \sigma_m = \xi_{f(x_1 \cdots x_n)} \circ \sigma
\end{equation}
and the map $$\Sigma_{k_1} \times \cdots \times \Sigma_{k_m} \lhook\joinrel\longrightarrow \Sigma_{k_1 + \cdots + k_m} \quad , \quad (\sigma_1, \ldots, \sigma_m ) \mapsto \sigma_1 \otimes \cdots \otimes \sigma_m $$ is injective. Therefore, for the composite $ (g, \bm{\tau})\circ (f, \bm{\sigma})$, using the associativity of the composite, we must have
\begin{align*}
((\mu^{[m]} \circ g) \circ f , \Psi (g(a_1 \cdots a_m), \bm{\tau}) \circ \bm{\sigma}) &= [ (\mu^{[m]}, \bm{\mathrm{Id}}) \circ (g, \bm{\tau})] \circ (f, \bm{\sigma}) \\
&= (\mu^{[m]}, \bm{\mathrm{Id}}) \circ [ (g, \bm{\tau}) \circ (f, \bm{\sigma})]\\
&= (\mu^{[m]}, \bm{\mathrm{Id}}) \circ (g \circ f, \bm{\tau} \circ \bm{\sigma}).
\end{align*}
The first term is computable in terms of Step 2: if $p_i= \pi (g (a_i))$ and $k_i = \pi_i (f (x))$ for $1 \leq i \leq m$, then
\begin{align*}
\Psi (g(a_1 \cdots a_m), \bm{\tau}) \circ \bm{\sigma} &= \rho_{f, \mu^{[m]} \circ g} \circ (\gamma_{(k_1, p_1)} \otimes \cdots \otimes \gamma_{(k_m, p_m)})\\
&\circ (\sigma_1^{\otimes p_1} \otimes \cdots \otimes \sigma_m^{\otimes p_m}) \circ \langle \Psi (g(a_1 \cdots a_m), \bm{\tau}) \rangle_{k_1^{\times p_1}, \ldots, k_m^{\times p_m}}.
\end{align*}
Hence by \eqref{eq sigma determines boldsigma}, we conclude that
\begin{align*}
(\bm{\tau} \circ \bm{\sigma})_1 \otimes \cdots \otimes (\bm{\tau} \circ \bm{\sigma})_\ell &= \xi_{(g\circ f)(x)} \circ \rho_{f, \mu^{[m]} \circ g} \circ (\gamma_{(k_1, p_1)} \otimes \cdots \otimes \gamma_{(k_m, p_m)})\\
&\circ (\sigma_1^{\otimes p_1} \otimes \cdots \otimes \sigma_m^{\otimes p_m}) \circ \langle \Psi (g(a_1 \cdots a_m), \bm{\tau}) \rangle_{k_1^{\times p_1}, \ldots, k_m^{\times p_m}}
\end{align*}
as we wanted to show.
\end{proof}
\begin{proof}[Proof (of Lemma \ref{lem F(x) bialgebra object})]\label{proof lem F(x) bialg}
As in \ref{ex ComB = fgFMon}, all we need to check is that the maps
\begin{align*}
&(\mu, \mathrm{Id}): F(x,y) \longrightarrow F(z) \qquad , \qquad x,y \mapsto z,\\
&(\Delta, \bm{\mathrm{Id}}): F(x) \longrightarrow F(y,z) \qquad , \qquad x \mapsto yz,
\\
&\eta: \bm{1} \longrightarrow F(x)\\
&\varepsilon : F(x) \longrightarrow \bm{1}
\end{align*}
satisfy the bialgebra axioms \eqref{eq algebra axioms} -- \eqref{eq bialgebra axioms} (note that neither $\eta$ nor $\varepsilon$ has an associated permutation, as $\Sigma_0$ is trivial). We already checked in \ref{ex ComB = fgFMon} that the equalities hold for the underlying free monoid maps, so we are left to check that the associated permutations coincide. For instance, for the associativity axiom from Figure \ref{fig bialgebra axioms}(\subref{fig associativity}), both sides of the equation yield $\mathrm{Id}_3 \in \Sigma_3$, since $\langle \mathrm{Id} \rangle_{k_1, \ldots}= \mathrm{Id}$, $\gamma_{(k,1)}=\mathrm{Id}$ and $\xi_{y^2z}= \mathrm{Id}$. Perhaps the least obvious verification is the bialgebra axiom from Figure \ref{fig bialgebra axioms}(\subref{fig bialgebra1}). In this case $x,y \mapsto ab$ and the two associated permutations are given by the pair $\bm{\mathrm{Id}} = (\mathrm{Id}_2, \mathrm{Id}_2)$. For both the left- and right-hand sides this follows since $(23)(23) = \mathrm{Id}_4 = \mathrm{Id}_2 \otimes \mathrm{Id}_2$. The rest of the axioms are also easy to check and are left to the reader.
\end{proof}
We can now wrap up the proof of \ref{thm fundamental theorem}. We are left to check that the ``hat'' construction \eqref{eq def Ahat} defines a two-sided inverse for \eqref{eq bijection alg_B = alg_fgFMonhat}. If $A \in \mathcal{C}$ is a bialgebra, then the composite $$\mathsf{B} \longrightarrow \widehat{\mathsf{fgFMon}} \overset{\widehat{A}}{\longrightarrow} \mathcal{C}$$ defines the same bialgebra as $A$. This follows at once from \ref{thm normal form} and the functoriality of $\widehat{A}$. Conversely, if $G: \widehat{\mathsf{fgFMon}} \longrightarrow \mathcal{C}$ is a functor, let $A \in \mathcal{C}$ be the bialgebra determined by the composite $$\mathsf{B} \longrightarrow \widehat{\mathsf{fgFMon}} \overset{G}{\longrightarrow} \mathcal{C}.$$ By construction, the structure maps of $A$ are given by $G(\mu, \bm{\mathrm{Id}})$, $G(\mathcal{D}elta, \bm{\mathrm{Id}})$, $G(\eta)$ and $G(\varepsilon)$. Given an arrow $(f, \bm{\sigma})$ in $\widehat{\mathsf{fgFMon}}$, we have that
\begin{align*}
\widehat{A} (f, \bm{\sigma}) &= G(\mu^{[q_1, \ldots , q_m]}, \bm{\mathrm{Id}}) \circ G(P_{\sigma}, \bm{\mathrm{Id}}) \circ G(\Delta^{[p_1, \ldots , p_n]}, \bm{\mathrm{Id}})\\
&= G((\mu^{[q_1, \ldots , q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma}, \bm{\mathrm{Id}}) \circ (\Delta^{[p_1, \ldots , p_n]}, \bm{\mathrm{Id}}))
\end{align*}
so to check that $\widehat{A} =G$ it is enough to show that
\begin{equation}\label{eq decomposition fsigma}
(f, \bm{\sigma}) = (\mu^{[q_1, \ldots , q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma}, \bm{\mathrm{Id}}) \circ (\Delta^{[p_1, \ldots , p_n]}, \bm{\mathrm{Id}})
\end{equation}
where $p_i, q_i$ and $\sigma$ are as described in \eqref{eq def Ahat}. The equality for the underlying monoid map holds since $\Delta^{[p_1, \ldots , p_n]}$ produces as many letters (counting multiplicities) as $f(x_1 \cdots x_n)$ has, $P_\sigma$ shuffles them according to the pattern of $f(x_1 \cdots x_n)$ and $\bm{\sigma}$ (via $\Psi$), and $\mu^{[q_1, \ldots , q_m]}$ evaluates the variables producing the pattern. For the equality at the level of the permutations, by \eqref{eq imposed associativity 1} and \eqref{eq imposed associativity 2} we have that the equality \eqref{eq decomposition fsigma} holds if and only if it holds after precomposition with $(\Delta^{[n]}, \bm{\mathrm{Id}})$ and composition with $(\mu^{[m]}, \bm{\mathrm{Id}})$. For the left-hand side this yields $$(\mu^{[m]} \circ f \circ \Delta^{[n]}, \sigma ),$$ where $\sigma = \Psi (f(x_1 \cdots x_n), \bm{\sigma})$ as above. For the right-hand side, we have
\begin{align*}
(\mu^{[m]}, \bm{\mathrm{Id}}) &\circ (\mu^{[q_1, \ldots , q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma}, \bm{\mathrm{Id}}) \circ (\Delta^{[p_1, \ldots , p_n]}, \bm{\mathrm{Id}}) \circ (\Delta^{[n]}, \bm{\mathrm{Id}}) \\
&= (\mu^{[q_1+ \cdots + q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma}, \bm{\mathrm{Id}}) \circ (\Delta^{[p_1 + \cdots + p_n]}, \bm{\mathrm{Id}})\\ &= (\mu^{[q_1+ \cdots + q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]}, \bm{\mathrm{Id}})\\
&= ( \mu^{[q_1+ \cdots + q_m]} \circ P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]}, \Psi ((P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]})(x), \bm{\mathrm{Id}} ))\\
&= ( \mu^{[q_1+ \cdots + q_m]} \circ P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]}, \xi^{-1}_{(P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]})(x)})\\
&= ( \mu^{[q_1+ \cdots + q_m]} \circ P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]}, \xi^{-1}_{x_{\sigma (1)} \cdots x_{\sigma(p_1 + \cdots + p_n)}})\\
&= ( \mu^{[q_1+ \cdots + q_m]} \circ P_{\sigma} \circ \Delta^{[p_1 + \cdots + p_n]}, \sigma)
\end{align*}
as required. This shows that the map \eqref{eq bijection alg_B = alg_fgFMonhat} is indeed a bijection. By \ref{lemma yoneda}, we obtain the equivalence of categories $\widehat{\mathsf{fgFMon}} \simeq \mathsf{B}$. This concludes the proof of \ref{thm fundamental theorem}.
\subsection{A few illustrative examples} To finish off this section, we would like to illustrate the composite law of $\widehat{\mathsf{fgFMon}} $ with a couple of examples.
\begin{example}
Let $(f, \bm{\sigma}): F(x) \longrightarrow F(a,b)$ and $(g, \tau): F(a,b) \longrightarrow F(s)$ be determined by the following data: $$f(x) = a^2bab \quad, \quad \bm{\sigma} = ((132), (12)) \quad, \quad g(a)=s \quad, \quad g(b)= s^2 \quad, \quad \tau =\mathrm{Id}.$$ For any bialgebra $A \in \mathcal{C}$, the map $(f, \bm{\sigma})$ induces the map $$x \mapsto x_{(4)} x_{(1)} x_{(2)} \otimes x_{(5)} x_{(3)}$$
since $$\Psi (a^2bab, (132), (12)) = \xi_{a^2bab}^{-1} \circ [(132) \otimes (12)] = (34)(132)(45) = (14532).$$ Similarly $(g, \tau)$ induces $a \otimes b \mapsto a b_{(1)} b_{(2)}$. Therefore $\widehat{A}(g, \tau) \circ \widehat{A}(f, \bm{\sigma})$ must be
$$x \mapsto x_{(4)} x_{(1)} x_{(2)} (x_{(5)} x_{(3)})_{(1)} (x_{(5)} x_{(3)})_{(2)} = x_{(5)} x_{(1)} x_{(2)} x_{(6)} x_{(3)} x_{(7)} x_{(4)}.$$ Let us check that we obtain the same result by computing $\widehat{A}( g \circ f, \tau \circ \bm{\sigma})$ via the composite law \eqref{eq composite law fgFMonhat} (or rather its slightly simplified version \eqref{eq composite law Step 2}). We compute:
\begin{align*}
\tau \circ \bm{\sigma} &= \rho_{f,g} \circ ( \gamma_{(3,1)}\otimes \gamma_{(2,2)} ) \circ (\sigma_1 \otimes \sigma_2^{ \otimes 2}) \circ \langle \mathrm{Id}_3 \rangle_{3, 2,2}\\
&= (\langle \xi_{a^2bab} \rangle_{1,1,1,2,2})^{-1} \circ [\mathrm{Id}_3 \otimes (23)] \circ [(132)\otimes (12) \otimes (12)]\\
&= (435)(56)(132)(45)(67)\\
&= (1532)(467),
\end{align*}
which yields the same result as above.
\end{example}
\begin{example}
Let $(f, \bm{\sigma}): F(x) \longrightarrow F(a,b)$, $f(x) = abab$ with $\bm{\sigma} = (\mathrm{Id}, (12))$ and $(g, \bm{\tau}): F(a,b) \longrightarrow F(s,t)$, $g(a)=s, g(b)= ts$ with $\bm{\tau} =((12), \mathrm{Id})$. Let us compute the composite $(g, \bm{\tau}) \circ (f, \bm{\sigma}) $ in $\widehat{\mathsf{fgFMon}}$. To start with, write $$\tau = \Psi (g(ab), (12), \mathrm{Id}) = \Psi (sts, (12), \mathrm{Id}) = \xi_{sts}^{-1} \circ (12) = (23)(12)= (132).$$ Since $k_1 = \pi_1 (abab) =2 = \pi_2 (abab) = k_2$ and $p_1 = \pi (s) =1$, $p_2 = \pi (ts)=2$, we have
\begin{align*}
(\bm{\tau} \circ \bm{\sigma})_1 \otimes (\bm{\tau} \circ \bm{\sigma})_2 &= \xi_{sts^2ts} \circ \rho_{f, \mu g} \circ (\gamma_{(2, 1)} \otimes \gamma_{(2,2)}) \circ (\sigma_1 \otimes \sigma_2 \otimes \sigma_2) \circ \langle \tau \rangle_{2,2,2}\\
&= (25643) \circ (\langle (23) \rangle_{1,1,2,2})^{-1} \circ (\mathrm{Id}_2 \otimes (23)) \\ &\phantom{--} \circ (\mathrm{Id}_2 \otimes (12) \otimes (12)) \circ \langle (132) \rangle_{2,2,2}\\
&= (25643)(432) (45) (34)(56) (153)(264)\\
&= (25643)(1623) \hspace{5cm} (*)\\
&= (143)(56)\\
&= (143) \otimes (12).
\end{align*}
Hence $(\bm{\tau} \circ \bm{\sigma})_1 = (143)$ and $(\bm{\tau} \circ \bm{\sigma})_2 = (12)$.
A bialgebra $A \in \mathcal{C}$ induces the map $\widehat{A}(f, \bm{\sigma})$ mapping $x \mapsto x_{(1)}x_{(3)} \otimes x_{(4)}x_{(2)}$, and similarly $\widehat{A}(g, \bm{\tau})$ maps $a \otimes b \mapsto b_{(2)}a \otimes b_{(1)}$. Therefore, their composite $\widehat{A}(g, \bm{\tau}) \circ \widehat{A}(f, \bm{\sigma})$ is the map $$x \mapsto (x_{(4)}x_{(2)})_{(2)} x_{(1)}x_{(3)} \otimes (x_{(4)}x_{(2)})_{(1)} = x_{(6)} x_{(3)} x_{(1)} x_{(4)} \otimes x_{(5)} x_{(2)}.$$
We can already see that this agrees with our previous calculation: the rightmost permutation in the equality marked as $(*)$ above is precisely the one that determines the order of the ``elements'' of the image of $\widehat{A}(g, \bm{\tau}) \circ \widehat{A}(f, \bm{\sigma})$, according to the definition of $\Psi$.
\end{example}
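The cycle computations in the examples above can be checked mechanically. The following Python sketch (ours, not part of the paper; the helper names are hypothetical) verifies the product $(34)(132)(45) = (14532)$ appearing in the first example, composing permutations right-to-left as in the text.

```python
# Sketch (not from the paper): verify the cycle computation
# Psi(a^2bab, ((132), (12))) = (34)(132)(45) = (14532) from the first example.
# Permutations act on {1, ..., n}; composition is right-to-left.

def cycle_to_map(cycle, n):
    """Turn a single cycle, e.g. (1, 3, 2), into a mapping on {1, ..., n}."""
    m = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

def compose(maps):
    """Compose mappings right-to-left: compose([f, g]) sends x to f(g(x))."""
    n = len(maps[0])
    out = {}
    for i in range(1, n + 1):
        x = i
        for f in reversed(maps):
            x = f[x]
        out[i] = x
    return out

n = 5
product = compose([cycle_to_map(c, n) for c in [(3, 4), (1, 3, 2), (4, 5)]])
target = cycle_to_map((1, 4, 5, 3, 2), n)
assert product == target  # (34)(132)(45) = (14532)
```

The same helpers can be used to check the longer products in the second example line by line.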
\section{Applications}\label{sec Applications}
We would like to give a few consequences which follow in a rather straightforward way from Theorem \ref{thm fundamental theorem}.
\begin{corollary}
For any symmetric monoidal category $\mathcal{C}$, there is an equivalence of categories $$\mathsf{Alg}_{\mathsf{B}}(\mathcal{C}) \simeq \mathsf{Alg}_{\widehat{\mathsf{fgFMon}}}(\mathcal{C}). $$
\end{corollary}
\begin{proof}
This is immediate from \ref{thm fundamental theorem}.
\end{proof}
\begin{theorem}\label{thm factorisation fgFMonhat}
Every morphism $(f, \bm{\sigma}): F(x_1, \ldots , x_n) \longrightarrow F(y_1, \ldots , y_m)$ in $\widehat{\mathsf{fgFMon}}$ factors in a unique way as $$(f, \bm{\sigma}) = (\mu^{[q_1, \ldots , q_m]}, \bm{\mathrm{Id}}) \circ (P_{\sigma}, \bm{\mathrm{Id}}) \circ (\Delta^{[p_1, \ldots , p_n]}, \bm{\mathrm{Id}}) $$ for some (unique) integers $p_1, \ldots , p_n, q_1, \ldots , q_m \geq 0$ such that $\sum_i p_i = \sum_i q_i =:s$, and a (unique) $\sigma \in \Sigma_s$. In particular, these elements are determined as follows: $$ p_i = \pi(f(x_i)) \quad , \quad q_i = \pi_i (f(x_1 \cdots x_n)) \quad , \quad \sigma = \Psi (f(x_1 \cdots x_n), \bm{\sigma}). $$
\end{theorem}
\begin{proof}
This is again immediate from \ref{thm normal form}, \ref{thm fundamental theorem} and \eqref{eq def Ahat}.
\end{proof}
\begin{proposition}\label{prop fgFMon hat nohat}
The following diagram is commutative:
$$\begin{tikzcd}
\mathsf{B} \rar{\simeq} \dar & \widehat{\mathsf{fgFMon}} \dar\\
\mathsf{ComB} \rar{\simeq} & \mathsf{fgFMon}
\end{tikzcd} $$
where the left vertical functor is the natural inclusion and the right vertical functor forgets the permutations.
\end{proposition}
\begin{proof}
This follows directly from \ref{thm normal form} and \ref{thm fundamental theorem}.
\end{proof}
\begin{proposition}\label{prop fSethat fgFMonhat}
There is a strong monoidal functor $$\widehat{F}: \widehat{\mathsf{fSet}} \longrightarrow \widehat{\mathsf{fgFMon}} $$ defined as $$\widehat{F} (f, \sigma) := (Ff, \bm{\sigma})$$ where $Ff$ is the monoid map induced by $f$ via the free monoid functor $F$ and $\bm{\sigma}$ is the image of $\sigma$ under the composite $$\Sigma_N \overset{\Psi^{-1}}{\longrightarrow} \mathcal{W}(k_1, \ldots , k_m) \times \Sigma_{k_1} \times \cdots \times \Sigma_{k_m} \overset{pr}{\longrightarrow} \Sigma_{k_1} \times \cdots \times \Sigma_{k_m} $$ where $(f, \sigma) : [n] \longrightarrow [m]$ and $k_i := \# f^{-1}(i)$, which makes the following diagram commutative,
$$\begin{tikzcd}
\mathsf{A} \dar{\simeq} \rar[hook] & \mathsf{B} \dar{\simeq}\\
\widehat{\mathsf{fSet}} \dar \rar{\widehat{F}} & \widehat{\mathsf{fgFMon}} \dar\\
\mathsf{fSet} \rar{F} & \mathsf{fgFMon}
\end{tikzcd} $$
where the functors $\widehat{\mathsf{fSet}} \longrightarrow \mathsf{fSet}$ and $\widehat{\mathsf{fgFMon}} \longrightarrow \mathsf{fgFMon}$ are the ones that forget the associated permutations.
\end{proposition}
\begin{proof}
Let us first check that the assignment indeed defines a functor. This is certainly true for the underlying monoid maps since $F: \mathsf{fSet} \longrightarrow \mathsf{fgFMon} $ is already a functor. For $(f, \sigma): [n] \longrightarrow [m]$ and $(g, \tau): [m] \longrightarrow [\ell]$ in $\widehat{\mathsf{fSet}}$, let us write for convenience $\delta := \sigma \circ \langle \tau \rangle_{k_1, \ldots , k_m}$ for the permutation associated to the composite $(g, \tau) \circ (f, \sigma)$, and let $\bm{\delta} = (pr \circ \Psi^{-1})(\delta)$. Then
\begin{align*}
(\bm{\tau} \circ \bm{\sigma})_1 \otimes \cdots \otimes (\bm{\tau} \circ \bm{\sigma})_\ell &= \xi_{F(g \circ f)(x_1 \cdots x_n)} \circ( \langle \xi_{(Ff)(x_1 \cdots x_n)} \rangle_{1, \ldots , 1})^{-1} \\
&\phantom{--} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m) \circ \langle \tau \rangle_{k_1, \ldots , k_m}\\
&= \xi_{x_{gf(1)} \cdots x_{gf(n)}} \circ\xi_{x_{f(1)} \cdots x_{f(n)}}^{-1} \circ (\sigma_1 \otimes \cdots \otimes \sigma_m)\\ &\phantom{--} \circ \langle \tau \rangle_{k_1, \ldots , k_m}\\
&= \xi_{x_{gf(1)} \cdots x_{gf(n)}} \circ \sigma \circ \langle \tau \rangle_{k_1, \ldots , k_m}\\
&= \xi_{x_{gf(1)} \cdots x_{gf(n)}} \circ \delta\\
&= \delta_1 \otimes \cdots \otimes \delta_\ell,
\end{align*}
where in the third and fifth equality we have used that for a map $(f, \sigma): [n] \longrightarrow [m]$, the word in $\Psi^{-1}(\sigma)$ can be described as $x_{f(1)} \cdots x_{f(n)}$, as can be easily seen. That $\widehat{F}(\mathrm{Id}, \mathrm{Id}) = (\mathrm{Id}, \bm{\mathrm{Id}})$ is straightforward. The fact that $\widehat{F}$ is strong monoidal follows directly from $F$ being monoidal and the commutative diagram of \ref{lem utilisimo}, which implies that $$(pr \circ \Psi^{-1}) (\sigma \otimes \tau) = ((pr \circ \Psi^{-1})\sigma, (pr \circ \Psi^{-1})\tau),$$ which is precisely the required condition for the associated permutations. Finally, the upper square commutes since the image of $([k] \longrightarrow [1], \mathrm{Id})$ under $\widehat{F}$ is precisely $(\mu^{[k]}, \mathrm{Id})$. The commutativity of the lower diagram is obvious.
\end{proof}
Putting all the pieces together, we finally have
\begin{theorem}\label{thm main thm 2}
The following cube is commutative:
\begin{equation}
\begin{tikzcd}[row sep=scriptsize,column sep=scriptsize]
& \widehat{\mathsf{fSet}} \arrow[from=dl,"\simeq"] \arrow[rr,"\widehat{F}"] \arrow[dd] & & \widehat{\mathsf{fgFMon}} \arrow[from=dl,"\simeq"]\arrow[dd] \\
\mathsf{A} \arrow[rr,crossing over,hook]\arrow[dd] & & \mathsf{B} \\
& \mathsf{fSet}\arrow[from=dl, "\simeq"]\arrow[rr, "F \phantom{holaaa}"] & & \mathsf{fgFMon}\arrow[from=dl,"\simeq"] \\
\mathsf{ComA} \arrow[rr,hook] & & \mathsf{ComB}\arrow[from=uu,crossing over]
\end{tikzcd}
\end{equation}
\end{theorem}
\begin{proof}
This follows at once from \ref{lem fSet fgFMon}, \ref{thm fundamental theorem}, \ref{prop fgFMon hat nohat} and \ref{prop fSethat fgFMonhat}.
\end{proof}
If $\mathsf{C}$ (resp. $\mathsf{CocomC}$) is the PROP for (cocommutative) coalgebras, by reversing arrows we directly obtain
\begin{corollary}\label{cor main thm for coalg}
There are monoidal equivalences $$\mathsf{C} \overset{\simeq}{\longrightarrow}\widehat{\mathsf{fSet}}^{op} \qquad , \qquad \mathsf{B} \overset{\simeq}{\longrightarrow} \widehat{\mathsf{fgFMon}}^{op} $$
making the following cube commutative,
\begin{equation}
\begin{tikzcd}[row sep=scriptsize,column sep=scriptsize]
& \widehat{\mathsf{fSet}}^{op} \arrow[from=dl,"\simeq"] \arrow[rr,"\widehat{F}^{op}"] \arrow[dd] & & \widehat{\mathsf{fgFMon}}^{op} \arrow[from=dl,"\simeq"]\arrow[dd] \\
\mathsf{C} \arrow[rr,crossing over,hook]\arrow[dd] & & \mathsf{B} \\
& \mathsf{fSet}^{op}\arrow[from=dl, "\simeq"]\arrow[rr, "F^{op} \phantom{holaaa}"] & & \mathsf{fgFMon}^{op}\arrow[from=dl,"\simeq"] \\
\mathsf{CocomC} \arrow[rr,hook] & & \mathsf{CocomB}\arrow[from=uu,crossing over]
\end{tikzcd}
\end{equation}
\end{corollary}
\subsection{Related work}
Our work can be seen as the bialgebra case of what is called in the literature the \textit{acyclic Hopf word problem} \cite{thomas}. It gives a complete answer to the question of whether two different combinations of the bialgebra structure maps represent the same morphism in $\mathsf{B}$. More precisely, the question asks whether, given two arrows $f_1,f_2: n \longrightarrow m$ in the PROP $\mathcal{B}$ as defined just before \ref{lem elements in normal form}, we have $\pi (f_1)= \pi (f_2)$, where $\pi: \mathcal{B} \longrightarrow \mathsf{B}$ is the canonical projection. Indeed if $(f_1', \bm{\sigma})$, $(f_2', \bm{\tau})$ are the images of $\pi (f_1), \pi (f_2)$ under the equivalence $\mathsf{B} \overset{\simeq}{\longrightarrow} \widehat{\mathsf{fgFMon}} $, then \ref{thm factorisation fgFMonhat} ensures that $\pi (f_1)= \pi (f_2)$ if and only if the following equalities hold:
$$ \pi(f_1'(x_i)) = \pi(f_2'(x_i)) \quad , \quad \pi_j (w_1) = \pi_j(w_2) \quad , \quad \Psi (w_1, \bm{\sigma})= \Psi (w_2, \bm{\tau}) $$ for all $1 \leq i \leq n$, $1\leq j \leq m$, where $w_1 := f_1' (x_1 \cdots x_n)$ and $w_2 := f_2' (x_1 \cdots x_n)$.
In \cite{thomas} (and in \cite{kuperberg} in a different context) this problem is used to produce an invariant of decorated 3-manifolds.
\subsection{Future work} The commutative cube from \ref{thm main thm} is only a part of the larger solid diagram shown below.
\begin{equation}
\begin{tikzcd}[row sep=scriptsize,column sep=scriptsize]
& \widehat{\mathsf{fSet}} \arrow[from=dl,"\simeq"] \arrow[rr,"\widehat{F}"] \arrow[dd] & & \widehat{\mathsf{fgFMon}} \arrow[from=dl,"\simeq"]\arrow[dd] \arrow[rr, dashed] & & \widehat{\mathsf{fgFGp}} \arrow[from=dl,"\simeq", dashed]\arrow[dd, dashed] \\
\mathsf{A} \arrow[rr,crossing over,hook]\arrow[dd] & & \mathsf{B} \arrow[rr,crossing over,hook] & & \mathsf{H} \\
& \mathsf{fSet}\arrow[from=dl, "\simeq"]\arrow[rr, "F \phantom{holaaa}"] & & \mathsf{fgFMon}\arrow[from=dl,"\simeq"] \arrow[rr, "G \phantom{holaaa}"] \arrow[dd] & & \mathsf{fgFGp}\arrow[from=dl,"\simeq"] \arrow[dd] \\
\mathsf{ComA} \arrow[rr,hook] & & \mathsf{ComB}\arrow[from=uu,crossing over] \arrow[rr,crossing over,hook] & & \mathsf{ComH}\arrow[from=uu,crossing over]\\
& & & \mathsf{fgFCMon}\arrow[from=dl,"\simeq"] \arrow[rr, "K \phantom{holaaa}"] & & \mathsf{fgFAb}\arrow[from=dl,"\simeq"] \\
& & \mathsf{ComCocomB} \arrow[from=uu,crossing over] \arrow[rr,hook] & & \mathsf{ComCocomH} \arrow[from=uu,crossing over]
\end{tikzcd}
\end{equation}
Let us say a few words about this diagram. Here $\mathsf{H}$ (resp. $\mathsf{ComH}$) stands for the PROP for (commutative) Hopf algebras. All functors in the diagram are bijective-on-objects. Horizontal functors are faithful, vertical functors are full, and perpendicular (with respect to the page) functors are equivalences of categories. The functor $K$ is the restriction of the \textit{Grothendieck group construction} $K: \mathsf{ CMon} \longrightarrow \mathsf{Ab} $, that is, the left adjoint to the forgetful functor $\mathsf{Ab} \longrightarrow \mathsf{ CMon}$, with which topologists should be familiar. The functor $G$ is its noncommutative counterpart, the restriction of the left adjoint to the forgetful functor $\mathsf{Gp} \longrightarrow \mathsf{Mon}$, known as the \textit{group completion} or \textit{universal enveloping group} of a monoid, $G: \mathsf{Mon} \longrightarrow \mathsf{Gp}$. The vertical functors $\mathsf{Mon} \longrightarrow \mathsf{CMon}$ and $\mathsf{Gp} \longrightarrow \mathsf{Ab}$ are the abelianisation functors.
In this situation, we conjecture the existence of a category $\widehat{\mathsf{fgFGp}}$ built out of $\mathsf{fgFGp}$ in an analogous way to our construction of $\widehat{\mathsf{fgFMon}}$, together with a monoidal equivalence $\mathsf{H} \simeq \widehat{\mathsf{fgFGp}}$, a functor $\widehat{G}: \widehat{\mathsf{fgFMon}} \longrightarrow \widehat{\mathsf{fgFGp}}$ lifting $G$ and a full (forgetful) functor $\widehat{\mathsf{fgFGp}} \longrightarrow \mathsf{fgFGp}$. This will be the subject of a forthcoming paper.
\end{document}
\begin{document}
\title{A (forgotten) upper bound for the spectral radius of a graph}
\author{Clive Elphick\thanks{\texttt{[email protected]}},~~~~~~
Chia-an Liu\thanks{\texttt{[email protected]}}}
\maketitle
\abstract{The best degree-based upper bound for the spectral radius is due to Liu and Weng. This paper begins by demonstrating that a (forgotten) upper bound for the spectral radius dating from 1983 is equivalent to their much more recent bound. This bound is then used to compare lower bounds for the clique number.
A series of line graph based upper bounds for the Q-index is then proposed and compared experimentally with a graph based bound.
Finally, a new lower bound for generalised $r$-partite graphs is proved, by extending a result due to Erd\H{o}s.}
\section{Introduction}
Let $G$ be a simple and undirected graph with $n$ vertices, $m$ edges, and degrees $\Delta = d_1 \ge d_2 \ge \ldots \ge d_n = \delta$. Let $d$ denote the average vertex degree, $\omega$ the clique number and $\chi$ the chromatic number. Finally let $\mu(G)$ denote the spectral radius of $G$, $q(G)$ denote the spectral radius of the signless Laplacian of $G$ and $G^L$ denote the line graph of $G$.
In 1983, Edwards and Elphick \cite{edwards83} proved in their Theorem 8 (and its corollary) that $\mu \le y - 1$, where $y$ is defined by the equality:
\begin{equation}\label{eq:edwards}
y(y - 1) = \sum_{k = 1}^{\lfloor{y}\rfloor} d_k + (y - \lfloor{y}\rfloor)d_{\lceil{y}\rceil}.
\end{equation}
Edwards and Elphick \cite{edwards83} show that $1 \le y \le n$ and that $y$ is a single-valued function of $G$.
This bound is exact for regular graphs because we then have:
\[
d = \mu \le y - 1 = \frac{1}{y}\left(\sum_{k = 1}^{\lfloor{y}\rfloor} d + (y - \lfloor{y}\rfloor)d\right) = d.
\]
The bound is also exact for various bidegreed graphs. For example, let $G$ be the star graph on $n$ vertices, which has $\mu =\sqrt{n - 1}$. It is easy to show that $\lfloor y \rfloor = \lfloor \sqrt{n - 1} \rfloor + 1$. It then follows that $y$ is the solution to the equation:
\[
y(y - 1) = (n - 1) + \lfloor \sqrt{n - 1} \rfloor + (y - \lfloor \sqrt{n - 1} \rfloor - 1) = n - 2 + y,
\]
which has the solution $y = 1 + \sqrt{n - 1}$, so $\mu \le y - 1 = \sqrt{n - 1}$.
Similarly, let $G$ be the wheel graph on $n$ vertices, which has $\mu = 1 + \sqrt{n}$. It is straightforward to show that $y = 2 + \sqrt{n}$ is the solution to (\ref{eq:edwards}), so again the bound is exact.
\section{An upper bound for the spectral radius}
The calculation of $y$ can involve a two-step process.
1. Restrict $y$ to integers, so (\ref{eq:edwards}) simplifies to:
\[
y(y - 1) = \sum_{k=1}^y d_k.
\]
Since $d \le \mu$, we can begin with $y = \lfloor d + 1 \rfloor$, and then increase $y$ by one until $y(y - 1) \ge \sum_{k=1}^y d_k$. This determines that either $y = a$ or $a < y < a + 1$, where $a$ is an integer.
2. Then, if necessary, solve the following quadratic equation:
\begin{equation}\label{eq:elphick}
y(y - 1) = \sum_{k=1}^a d_k + (y - a)d_{a+1}.
\end{equation}
For convenience let $c = \sum_{k=1}^a d_k$. Equation (\ref{eq:elphick}) then becomes:
\[
y^2 - y(1 + d_{a+1}) - (c - ad_{a+1}) = 0.
\]
Therefore
\[
y = \frac{d_{a+1} + 1 + \sqrt{(d_{a+1} + 1)^2 + 4(c - ad_{a+1})}}{2}
\]
so
\[
\mu \le y - 1 = \frac{d_{a+1} - 1 + \sqrt{(d_{a+1} + 1)^2 + 4(c - ad_{a+1})}}{2}.
\]
This two-step process can be combined as follows, by letting $a + 1 = k$:
\begin{equation}\label{eq:elphick2}
\mu \le \frac{d_k - 1 + \sqrt{(d_k + 1)^2 + 4\sum_{i=1}^{k-1}(d_i - d_k)}}{2}, \mbox{ where } 1 \le k \le n.
\end{equation}
In 2012, Liu and Weng \cite{liu2013} proved (\ref{eq:elphick2}) using a different approach. They also proved that equality holds if and only if $G$ is regular or there exists $2 \le t \le k$ such that $d_1 = d_{t-1} = n - 1$ and $d_t = d_n$. Note that if $k = 1$ this reduces to $\mu \le \Delta$.
If we set $k = n$ in (\ref{eq:elphick2}) then:
\[
\mu \le \frac{\delta - 1 + \sqrt{(\delta + 1)^2 - 4n\delta + 8m}}{2}
\]
which was proved by Nikiforov \cite{nikiforov02} in 2002.
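As a quick numerical sanity check, the following Python sketch (ours, not part of the paper; the helper name is hypothetical) evaluates bound (\ref{eq:elphick2}) on the star graph $K_{1,n-1}$, for which $\mu = \sqrt{n-1}$; as noted in Section 1, the bound is attained there (at $k = 2$).

```python
# Sketch (not from the paper): evaluate bound (3) on the star K_{1,n-1},
# whose spectral radius is sqrt(n-1); the bound is attained at k = 2.
import math

def phi(degrees, k):
    """phi_k from bound (3), for a non-increasing degree sequence (k is 1-indexed)."""
    d_k = degrees[k - 1]
    excess = sum(degrees[i] - d_k for i in range(k - 1))
    return (d_k - 1 + math.sqrt((d_k + 1) ** 2 + 4 * excess)) / 2

n = 10
star = [n - 1] + [1] * (n - 1)               # degree sequence of K_{1,9}
best = min(phi(star, k) for k in range(1, n + 1))
assert abs(best - math.sqrt(n - 1)) < 1e-9   # best bound equals mu = 3
```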
\section{Lower bounds for the clique number}
Tur\'an's Theorem, proved in 1941, is a seminal result in extremal graph theory. In its concise form it states that:
\[
\frac{n}{n - d} \le \omega(G).
\]
Edwards and Elphick \cite{edwards83} used $y$ to prove the following lower bound for the clique number:
\begin{equation}\label{eq:chris}
\frac{n}{n - y + 1} < \omega(G) + \frac{1}{3}.
\end{equation}
In 1986, Wilf \cite{wilf86} proved that:
\[
\frac{n}{n - \mu} \le \omega(G).
\]
Note, however, that:
\[
\frac{n}{n - y + 1} \not\le \omega(G),
\]
since for example $\frac{n}{n - y + 1} = 2.13$ for $K_{7,9}$ and $\frac{n}{n - y + 1} = 3.1$ for $K_{3,3,4}$.
Nikiforov \cite{nikiforov02} proved a conjecture due to Edwards and Elphick \cite{edwards83} that:
\begin{equation}\label{eq:niki}
\frac{2m}{2m - \mu^2} \le \omega(G).
\end{equation}
Experimentally, bound (\ref{eq:niki}) performs better than bound (\ref{eq:chris}) for most graphs.
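For a concrete comparison, the following Python sketch (ours, not part of the paper) evaluates Tur\'an's bound and bound (\ref{eq:niki}) on the complete graph $K_4$, where both are tight.

```python
# Sketch (not from the paper): Turan's bound and the Nikiforov bound on K_4.
n, m, mu = 4, 6, 3                 # K_4: n = 4, m = 6, mu = n - 1 = 3
turan = n / (n - 2 * m / n)        # n / (n - d), with average degree d = 2m/n
nikiforov = 2 * m / (2 * m - mu ** 2)
assert turan == 4.0 and nikiforov == 4.0   # both equal omega(K_4) = 4
```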
\section{Upper bounds for the Q-index}
Let $q(G)$ denote the spectral radius of the signless Laplacian of $G$. In this section we investigate graph and line graph based bounds for $q(G)$ and then compare them experimentally.
\subsection{Graph bound}
Nikiforov \cite{nikiforov14} has recently strengthened various upper bounds for $q(G)$ with the following theorem.
\begin{thm}
If $G$ is a graph with $n$ vertices, $m$ edges, with maximum degree $\Delta$ and minimum degree $\delta$, then
\[
q(G) \le \min\left(2\Delta, \frac{1}{2}\left(\Delta + 2\delta - 1 + \sqrt{(\Delta + 2\delta - 1)^2 + 16m -8(n - 1 + \Delta)\delta}\right)\right).
\]
Equality holds if and only if $G$ is regular or $G$ has a component of order $\Delta + 1$ in which every vertex is of degree $\delta$ or $\Delta$, and all other components are $\delta$-regular.
\end{thm}
\subsection{Line graph bounds}
The following well-known lemma (see, for example, Lemma 2.1 in \cite{chen02}) relates the spectral radius of the signless Laplacian matrix of a graph to the spectral radius of the adjacency matrix of its line graph.
\begin{lem}\label{lem:chen02}
If $G^L$ denotes the line graph of $G$ then:
\begin{equation}\label{eq:chen02}
q(G) = 2 + \mu(G^L).
\end{equation}
\end{lem}
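Lemma~\ref{lem:chen02} is easy to confirm numerically. The following sketch (using numpy; the helper names are ours) builds the line graph of the star $K_{1,3}$, whose line graph is $K_3$, and checks that $q(G) = 2 + \mu(G^L) = 4$:

```python
import numpy as np
from itertools import combinations

def line_graph_adjacency(edges):
    """Adjacency matrix of the line graph: its vertices are the edges
    of G, and two are adjacent iff they share an endpoint."""
    m = len(edges)
    AL = np.zeros((m, m))
    for a, b in combinations(range(m), 2):
        if set(edges[a]) & set(edges[b]):
            AL[a, b] = AL[b, a] = 1
    return AL

def q_index(n, edges):
    """Spectral radius of the signless Laplacian Q = D + A."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    Q = np.diag(A.sum(axis=1)) + A
    return max(np.linalg.eigvalsh(Q))

# star K_{1,3}: q = 4, its line graph is K_3 with mu = 2
edges = [(0, 1), (0, 2), (0, 3)]
q = q_index(4, edges)
mu_L = max(np.linalg.eigvalsh(line_graph_adjacency(edges)))
print(q, 2 + mu_L)
```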
Let $\Delta_{ij}=d_{i}+d_{j}-2$, for $i \sim j$, be the degrees of the vertices of $G^{L}$,
and let $\Delta_{1} \geq \Delta_{2} \geq \ldots \geq \Delta_{m}$ be a renumbering of them in non-increasing order.
Cvetkovi\'c \emph{et al.} proved the following theorem using Lemma~\ref{lem:chen02}.
\begin{thm}\label{thm:cvetkovic07}(Theorem 4.7 in \cite{cvetkovic07})
$$q(G) \leq 2+\Delta_{1}$$
with equality if and only if $G$ is regular or semi-regular bipartite.
\end{thm}
The following lemma is proved in varying ways in \cite{shu04,das11,liu2013}.
\begin{lem}\label{lem:shu04}
$$\mu(G) \leq \frac{d_{2}-1+\sqrt{(d_{2}-1)^{2}+4d_{1}}}{2}$$
with equality if and only if $G$ is regular or $n-1=d_{1}>d_{2}=d_{n}.$
\end{lem}
Chen \emph{et al.} combined Lemma~\ref{lem:chen02} and Lemma~\ref{lem:shu04} to prove the following result.
\begin{thm}\label{thm:chen11}(Theorem 3.4 in \cite{chen11})
$$q(G) \leq 2+\frac{\Delta_{2}-1+\sqrt{(\Delta_{2}-1)^{2}+4\Delta_{1}}}{2}$$
with equality if and only if $G$ is regular, or semi-regular bipartite, or the tree obtained by joining
an edge to the centers of two stars $K_{1,\frac{n}{2}-1}$ with even $n,$ or $n-1=d_{1}=d_{2} > d_{3}=d_{n}=2.$
\end{thm}
Stating (\ref{eq:elphick2}) as a Lemma we have:
\begin{lem}\label{lem:edwards83}
For $1 \leq k \leq n,$
\begin{equation}\label{eq:dwards83}
\mu(G) \leq \phi_{k} := \frac{d_{k}-1+\sqrt{(d_{k}+1)^{2}+4\sum_{i=1}^{k-1}(d_{i}-d_{k})}}{2}
\end{equation}
with equality if and only if $G$ is regular or there exists $2 \leq t \leq k$ such that
$n-1=d_{1}=d_{t-1}>d_{t}=d_{n}.$ Furthermore,
$$\phi_{\ell}=\min\{\phi_{k} ~|~ 1 \leq k \leq n \}$$
where $3 \leq \ell \leq n$ is the smallest integer such that $\sum_{i=1}^{\ell}d_{i}<\ell(\ell-1).$
\end{lem}
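The quantities $\phi_k$ and the minimising index $\ell$ of Lemma~\ref{lem:edwards83} are straightforward to compute from a degree sequence. A sketch (the function names are ours), using the degree sequence $3,2,2,1$ of the graph $K_{1,3}^{+}$:

```python
import math

def phi(k, d):
    """phi_k from the lemma; d is the degree sequence sorted
    non-increasingly, so d[0] corresponds to d_1."""
    dk = d[k - 1]
    s = sum(d[i] - dk for i in range(k - 1))
    return (dk - 1 + math.sqrt((dk + 1) ** 2 + 4 * s)) / 2

d = [3, 2, 2, 1]  # degree sequence of K_{1,3}^+
values = [phi(k, d) for k in range(1, len(d) + 1)]

# ell: the smallest integer >= 3 with sum of the first ell degrees
# strictly less than ell * (ell - 1)
ell = next(l for l in range(3, len(d) + 1)
           if sum(d[:l]) < l * (l - 1))
print(values, ell)
```

For this degree sequence $\ell = 4$ and $\phi_4 = \sqrt{5} \approx 2.236$ is indeed the smallest of the $\phi_k$.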
Combining Lemma~\ref{lem:chen02} and Lemma~\ref{lem:edwards83}
provides the following series of upper bounds for the signless Laplacian spectral radius.
\begin{thm}\label{thm:signlessLap}
For $1 \leq k \leq m,$ we have
\begin{equation}\label{eq:signlessLap}
q(G) \leq \psi_{k} := 1 + \frac{\Delta_{k}+1+
\sqrt{(\Delta_{k}+1)^{2}+4\sum_{i=1}^{k-1}(\Delta_{i}-\Delta_{k})}}{2}
\end{equation}
with equality if and only if
$\Delta_{1}=\Delta_{m}$ or there exists $2 \leq t \leq k$ such that
$m-1=\Delta_{1}=\Delta_{t-1} > \Delta_{t}=\Delta_{m}.$ Furthermore,
$$\psi_{\ell}=\min\{\psi_{k} ~|~ 1 \leq k \leq m \}$$
where $3 \leq \ell \leq m$ is the smallest integer such that $\sum_{i=1}^{\ell}\Delta_{i}<\ell(\ell-1).$
\end{thm}
\begin{proof}
$G^{L}$ is simple.
Hence (\ref{eq:signlessLap}) follows directly from (\ref{eq:chen02}) and (\ref{eq:dwards83}).
The necessary and sufficient conditions for equality are precisely those in Lemma~\ref{lem:edwards83}.
\end{proof}
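The bounds $\psi_k$ of Theorem~\ref{thm:signlessLap} can be evaluated directly from the line-graph degrees $\Delta_{ij}=d_i+d_j-2$. A sketch (our own helper names) for the $5$-cycle $C_5$, a regular graph, where $\Delta_1=\Delta_m$ and therefore $\psi_1 = 2 + \Delta_1 = 4 = q(C_5)$:

```python
import math

def line_graph_degrees(edges, deg):
    """Degrees Delta_ij = d_i + d_j - 2 of the line graph,
    sorted non-increasingly."""
    return sorted((deg[u] + deg[v] - 2 for u, v in edges), reverse=True)

def psi(k, D):
    """psi_k from the theorem; D[0] corresponds to Delta_1."""
    Dk = D[k - 1]
    s = sum(D[i] - Dk for i in range(k - 1))
    return 1 + (Dk + 1 + math.sqrt((Dk + 1) ** 2 + 4 * s)) / 2

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # C_5
deg = {v: 2 for v in range(5)}
D = line_graph_degrees(edges, deg)
print(psi(1, D))  # equals q(C_5) = 4
```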
\begin{rem}
Note that Theorem~\ref{thm:signlessLap} generalizes both Theorem~\ref{thm:cvetkovic07} and Theorem~\ref{thm:chen11}
since these bounds are precisely $\psi_{1}$ and $\psi_{2}$ in (\ref{eq:signlessLap}) respectively.
We list all the extremal graphs with equalities in (\ref{eq:signlessLap}) in the following.
From Theorem~\ref{thm:cvetkovic07} the graphs with $q(G)=\psi_{1},$
i.e. $\Delta_{1}=\Delta_{m},$ are regular or semi-regular bipartite.
From Theorem~\ref{thm:chen11} the graphs
with $q(G) < \psi_{1}$ and $q(G)=\psi_{2},$ i.e. $m-1=\Delta_{1}>\Delta_{2}=\Delta_{m},$
are the tree obtained by joining an edge to the centers of two stars $K_{1,\frac{n}{2}-1}$ with even $n$,
or $n-1=d_{1}=d_{2}>d_{3}=d_{n}=2.$
The only graph with $q(G) < \min\{\psi_{i}|i=1,2\}$ and $q(G)=\psi_{3},$ i.e.
$m-1=\Delta_{1}=\Delta_{2}>\Delta_{3}=\Delta_{m},$ is the $4$-vertex graph $K_{1,3}^{+}$
obtained by adding one edge to $K_{1,3}.$
\begin{picture}(50,75)
\put(10,14){\circle*{3}} \put(40,14){\circle*{3}}
\put(25,40){\circle*{3}} \put(25,70){\circle*{3}}
\qbezier(10,14)(25,14)(40,14)
\qbezier(10,14)(17.5,27)(25,40)
\qbezier(40,14)(32.5,27)(25,40)
\qbezier(25,40)(25,55)(25,70)
\put(15,2){$K_{1,3}^{+}$}
\end{picture}
We now prove that no graph satisfies $q(G) < \min\{\psi_{i}~|~1\leq i \leq k - 1\}$ and $q(G)=\psi_{k}$ for $m \geq k \geq 4.$ Suppose, for a contradiction, that $G$ is such a graph, so that $m-1=\Delta_{1}=\Delta_{k-1}>\Delta_{k}=\Delta_{m}.$
Since $k \geq 4$ we have $\Delta_{3}=m-1,$ so there are at least $3$ edges each incident to all other edges of $G.$
If these $3$ edges form a $3$-cycle then there is nowhere to place a fourth edge,
which is a contradiction. Hence they are incident to a common vertex, and $G$ has to be a star graph.
However a star graph is semi-regular bipartite, so $q(G)=\psi_{1},$ which completes the proof.
\end{rem}
\begin{rem}
By analogy with (\ref{eq:edwards}), if $z$ is defined by the equality
\[
z(z - 1) = \sum_{k = 1}^{\lfloor{z}\rfloor} \Delta_k + (z - \lfloor{z}\rfloor)\Delta_{\lceil{z}\rceil},
\]
then $q \le z + 1$.
This bound is exact for $d$-regular graphs: every vertex of $G^{L}$ then has degree $\Delta_{k} = 2d - 2$, so $z - 1 = 2d - 2$ and
\[
2d = q \le z + 1 = 2 + (z - 1) = 2 + \frac{1}{z}\left(\sum_{k = 1}^{\lfloor{z}\rfloor} (2d - 2) + (z - \lfloor{z}\rfloor)(2d - 2)\right) = 2 + (2d - 2) = 2d.
\]
\end{rem}
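The implicit definition of $z$ can be solved numerically, for instance by bisection on $[1, m]$. A sketch (the function is ours; the line-graph degrees are assumed to be given in non-increasing order), checked on the $2$-regular cycle $C_4$, where $z = 2d - 1 = 3$ and the bound $q \le z + 1 = 4$ is exact:

```python
import math

def solve_z(D, iters=60):
    """Bisection solve of z(z - 1) = sum_{k <= floor(z)} D_k
    + (z - floor(z)) * D_{ceil(z)}, with D sorted non-increasingly."""
    def g(z):
        f = math.floor(z)
        frac_deg = D[min(f, len(D) - 1)]  # D_{ceil(z)}, 0-indexed
        return sum(D[:f]) + (z - f) * frac_deg
    lo, hi = 1.0, float(len(D))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid * (mid - 1) < g(mid):
            lo = mid
        else:
            hi = mid
    return lo

# C_4 is 2-regular, so every line-graph degree is 2d - 2 = 2
D = [2, 2, 2, 2]
z = solve_z(D)
print(z + 1)
```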
\subsection{Experimental comparison}
It is straightforward to compare the above bounds experimentally using the named graphs and the LineGraph function in Wolfram Mathematica. Theorem 1 is exact for some graphs (e.g.\ wheels) for which Theorems 5 and 7 are inexact, and Theorems 5 and 7 are exact for some graphs (e.g.\ complete bipartite graphs) for which Theorem 1 is inexact. Tabulated below are the numbers of named irregular graphs on 10, 16, 25 and 28 vertices in Mathematica, together with the average values of $q$ and of the bounds in Theorems 1, 5 and 7.
\[
\begin{array}{cccccc}
n & \mbox{irregular graphs} & q(G) & \mbox{Theorem 1} & \mbox{Theorem 5} & \mbox{Theorem 7} \\
10 &59 &9.3 &10.0 &10.3 &9.8 \\
16 &48 &10.3 &11.2 &11.5 &11.0 \\
25 &25 &11.5 &13.4 &13.1 &12.6 \\
28 &21 &11.2 &12.6 &12.7 &12.2 \\
\end{array}
\]
It can be seen that Theorem 5 gives results that are on average broadly comparable with Theorem 1, while Theorem 7 gives results which are on average modestly better. This is unsurprising, since Theorem 7 uses more data than the other two theorems. For some graphs, the bound in Theorem 7 is minimised at large values of $k$.
\section{A lower bound for the Q-index}
Elphick and Wocjan \cite{elphick14} defined a measure of graph irregularity, $\nu$, as follows:
\[
\nu = \frac{n\sum d_i^2}{4m^2},
\]
where $\nu \ge 1$, with equality only for regular graphs.
It is well known that $q \ge 2\mu$ and Hofmeister \cite{hofmeister88} has proved that $\mu^2 \ge \sum d_i^2/n$, so it is immediate that:
\[
q \ge 2\mu \ge \frac{4m\sqrt{\nu}}{n}.
\]
Liu and Liu \cite{liu09} improved this bound in the following theorem, for which we provide a simpler proof using Lemma 2.
\begin{thm}
Let $G$ be a graph with irregularity $\nu$ and Q-index $q(G)$. Then
\[
q(G) \ge \frac{4m\nu}{n}.
\]
This is exact for complete bipartite graphs.
\end{thm}
\begin{proof}
Let $G^L$ denote the line graph of $G$. From Lemma 2 we know that $q(G) = 2 + \mu(G^L)$ and it is well known that $n(G^L) = m$ and $m(G^L) = (\sum d_i^2/2) - m$. Therefore:
\[
q = 2 + \mu(G^L) \ge 2 + \frac{2m(G^L)}{n(G^L)} = 2 + \frac{2}{m}\left(\frac{\sum d_i^2}{2} - m\right) = \frac{\sum d_i^2}{m} = \frac{4m\nu}{n}.
\]
For the complete bipartite graph $K_{s,t}$, every edge $ij$ has $d_i + d_j = s + t$, so:
\[
q \ge \frac{\sum_i d_i^2}{m} = \frac{\sum_{ij\in E} (d_i + d_j)}{m} = \frac{m(s + t)}{m} = s + t = n,
\]
which equals $q(K_{s,t})$, so the bound is exact.
\end{proof}
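The equality case can be confirmed numerically (a sketch using numpy; the helper is ours) on $K_{2,3}$, where the bound $(\sum d_i^2)/m = (2\cdot 9 + 3\cdot 4)/6 = 5$ coincides with $q = n = 5$:

```python
import numpy as np

def q_and_bound(n, edges):
    """Return q(G) and the lower bound (sum d_i^2)/m = 4*m*nu/n."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    d = A.sum(axis=1)
    m = d.sum() / 2
    q = max(np.linalg.eigvalsh(np.diag(d) + A))
    return q, (d ** 2).sum() / m

# K_{2,3}: vertices 0,1 on one side, 2,3,4 on the other
edges = [(u, v) for u in range(2) for v in range(2, 5)]
q, bound = q_and_bound(5, edges)
print(q, bound)
```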
\section{Generalised $r-$partite graphs}
In a series of papers, Bojilov and others have generalised the concept of an $r-$partite graph. They define the parameter $\phi$ to be the smallest integer $r$ for which $V(G)$ has an $r-$partition:
\[
V(G) = V_1\cup V_2 \cup \ldots \cup V_r, \mbox{ such that } d(v) \le n - n_i, \mbox{ where } n_i = |V_i|,
\]
for all $v \in V_i$ and for $i = 1, 2, \ldots, r$.
Bojilov \emph{et al.} \cite{bojilov13} proved that $\phi(G) \le \omega(G)$ and Khadzhiivanov and Nenov \cite{khad04} proved that:
\[
\frac{n}{n - d} \le \phi(G).
\]
Despite this bound, Elphick and Wocjan \cite{elphick14} demonstrated that:
\[
\frac{n}{n - \mu} \not\le \phi(G).
\]
However, it is proved below in Corollary 10 that:
\[
\frac{n}{n - \mu} \le \frac{n}{n - y +1} < \phi(G)+ \frac{1}{3}.
\]
\emph{Definition}
If $H$ is any graph of order $n$ with degree sequence $d_H(1) \ge d_H(2) \ge \ldots \ge d_H(n)$, and if $H^*$ is any graph of order $n$ with degree sequence $d_{H^*}(1) \ge d_{H^*}(2) \ge \ldots \ge d_{H^*}(n)$, such that $d_H(i) \le d_{H^*}(i)$ for all $i$, then $H^*$ is said to ``dominate'' $H$.
Erd\H{o}s proved that if $G$ is any graph of order $n$, then there exists a graph $G^*$ of order $n$, where $\chi(G^*) = \omega(G) = r$, such that $G^*$ dominates $G$ and $G^*$ is complete $r-$partite.
\begin{thm}
If $G$ is any graph of order $n$, then there exists a graph $G^*$ of order $n$, where $\omega(G^*) = \phi(G) = r$, such that $G^*$ dominates $G$, and $G^*$ is complete $r-$partite.
\end{thm}
\begin{proof}
Let $G$ be a generalised $r-$partite graph with $\phi(G) = r$ and $n_i = |V_i|$, and let $G^*$ be the complete $r-$partite graph $K_{n_1, ... , n_r}$. Let $d(v)$ denote the degree of vertex $v$ in $G$ and $d^*(v)$ denote the degree of vertex $v$ in $G^*$. Clearly $\chi(G^*) = \omega(G^*) = r$, and by the definition of a generalised $r-$partite graph:
\[
d^*(v) = n - n_i \ge d(v)
\]
for all $v \in V_i$ and for $i = 1, ..., r$. Therefore $G^*$ dominates $G$.
\end{proof}
\begin{lem} (Lemma 4 in \cite{edwards83})
Assume $G^*$ dominates $G$. Then $y(G^*) \ge y(G)$.
\end{lem}
\begin{thm}
\[
\frac{n}{n - y(G) +1} < \phi(G) + \frac{1}{3}.
\]
\end{thm}
\begin{proof}
Let $G^*$ be any graph of order $n$, where $\omega(G^*) = \phi(G)$ such that $G^*$ dominates $G$. (By Theorem 7 at least one such graph $G^*$ exists.) Then, using Lemma 8:
\[
\frac{n}{n - y(G) + 1} \le \frac{n}{n - y(G^*) + 1} < \omega(G^*) + \frac{1}{3} = \phi(G) + \frac{1}{3} \le \omega(G) + \frac{1}{3}.
\]
\end{proof}
\begin{cor}
\[
\frac{n}{n - \mu} < \phi(G) + \frac{1}{3}.
\]
\end{cor}
\begin{proof}
Immediate since $\mu \le y - 1$.
\end{proof}
\end{document}
\begin{document}
\baselineskip 3ex
\parskip 1ex
\title{Scalable Wake-up of Multi-Channel Single-Hop Radio Networks \footnotemark[1]
}
\author{ Bogdan S. Chlebus \footnotemark[2] \and
Gianluca De Marco \footnotemark[3] \and
Dariusz R. Kowalski \footnotemark[4]}
\footnotetext[1]{This paper was published in a preliminary form as~\cite{ChlebusDK-OPODIS14} and in its final form as~\cite{ChlebusDK16}.}
\footnotetext[2]{Department of Computer Science and Engineering,
University of Colorado Denver, Denver, Colorado 80217, USA.
Work supported by the National Science Foundation under Grant No. 1016847.}
\footnotetext[3]{Dipartimento di Informatica,
Universit\`a degli Studi di Salerno,
Fisciano, 84084 Salerno, Italy.}
\footnotetext[4]{Department of Computer Science,
University of Liverpool,
Liverpool L69 3BX, United Kingdom.}
\date{}
\maketitle
\begin{abstract}
We consider single-hop radio networks with multiple channels as a model of wireless networks.
There are $n$ stations connected to $b$ radio channels that do not provide collision detection.
A station uses all the channels concurrently and independently.
Some $k$ stations may become active spontaneously at arbitrary times.
The goal is to wake up the network, which occurs when all the stations hear a successful transmission on some channel.
Duration of a waking-up execution is measured starting from the first spontaneous activation.
We present a deterministic algorithm that wakes up a network in ${\mathcal O}(k\log^{1/b} k\log n)$ time, where $k$ is unknown.
We give a deterministic scalable algorithm for the special case when $b>d \log \log n$, for some constant $d>1$, which wakes up a network in ${\mathcal O}(\frac{k}{b}\log n\log(b\log n))$ time, with $k$ unknown.
This algorithm misses time optimality by at most a factor of ${\mathcal O}(\log n(\log b +\log\log n))$, because any deterministic algorithm requires $\Omega(\frac{k}{b}\log \frac{n}{k})$ time.
We give a randomized algorithm that wakes up a network within ${\mathcal O}(k^{1/b}\ln \frac{1}{\varepsilon})$ rounds with a probability that is at least $1-\varepsilon$, for any $0<\varepsilon<1$, where $k$ is known.
We also consider a model of jamming, in which each channel in any round may be jammed to prevent a successful transmission, which happens with some known probability~$p$, treated as a parameter, independently across all channels and rounds.
For this model, we give two deterministic algorithms for unknown~$k$: one wakes up a network in time ${\mathcal O}(\log^{-1}(\frac{1}{p})\, k\log n\log^{1/b} k)$, and the other in time ${\mathcal O}(\log^{-1}(\frac{1}{p}) \, \frac{k}{b} \log n\log(b\log n))$ when the inequality $b>\log(128b\log n)$ holds, both with probabilities that are at least $1-1/\mbox{poly}(n)$.
\noindent
\textbf{Keywords:}
multiple access channel,
radio network,
multi channel,
wake-up,
synchronization,
deterministic algorithm,
randomized algorithm,
distributed algorithm.
\end{abstract}
\thispagestyle{empty}
\setcounter{page}{0}
\section{Introduction}
We consider wireless networks organized as a group of stations connected to a number of channels.
Each channel provides the functionality of a single-hop radio network.
A station can use any of these channels to communicate directly and concurrently with all the stations.
A restriction often assumed about such networks is that a station can connect to at most one channel at a time for either transmitting or listening.
We depart from this constraint and consider an apparently stronger model in which a station can use all the available channels simultaneously and independently from each other, for instance, some for transmitting and others for listening.
On the other hand, we do not assume collision detection on any channel.
The algorithmic problem we consider is to wake up such a network.
Initially, all the stations are dormant but connected and passively listening to all channels.
Some stations become active spontaneously and want the whole network to be activated and synchronized.
The first successful transmission on any channel suffices to accomplish this goal.
The algorithms we develop are oblivious in the sense that actions of stations are scheduled in advance.
Deterministic oblivious algorithms are determined by decisions for each station when to transmit on each channel and when not.
Randomized oblivious algorithms are determined by the probabilities, for each station and each channel, of transmitting on the channel in a round.
We use the following parameters to characterize a multi-channel single-hop radio network.
The number of stations is denoted by $n$ and the number of shared channels by~$b$.
All stations know~$b$.
At most $k$ stations become active spontaneously at arbitrary times and join execution with the goal to wake up the network.
The parameter $k$ is used to characterize scalability of wake-up algorithms, along with the number of channels~$b$.
\Paragraph{Our results.}
We give randomized and deterministic oblivious algorithms to wake up a multi-channel single-hop radio network.
One of the algorithms scales well with both the number of stations $k$ that may be activated spontaneously and with the number of channels~$b$.
We develop two deterministic algorithms for the case of unknown~$k$.
Our general deterministic algorithm wakes up a network in ${\mathcal O}(k\log^{1/b} k\log n)$ rounds.
We also give a deterministic algorithm which performs well when sufficiently many channels are available: it wakes up a network in ${\mathcal O}(\frac{k}{b}\log n\log(b\log n))$ rounds when the numbers of nodes~$n$ and channels~$b$ satisfy the inequality $b>\log(128b\log n)$.
An algorithm of time performance ${\mathcal O}(\frac{k}{b}\log n\log(b\log n))$, like this one, misses time optimality by at most a factor of $\log n(\log b+\log\log n)$, because $\Omega(\frac{k}{b}\log \frac{n}{k})$ rounds are required by any deterministic algorithm.
This algorithm is best among those we develop, with respect to scalability with parameters $k$ and~$b$.
We give a randomized algorithm that wakes up a network within ${\mathcal O}(k^{1/b}\ln \frac{1}{\varepsilon})$ rounds with a probability that is at least $1-\varepsilon$, for any $0<\varepsilon<1$, for known~$k$.
This algorithm demonstrates a separation between time performance of fastest deterministic algorithms and randomized ones that can use the knowledge of~$k$.
We also consider a model of jamming, in which each channel in any round may be jammed to prevent a successful transmission, which happens with some known probability~$p$, treated as a parameter, independently across all channels and rounds.
For this model, we give two deterministic algorithms.
One of them wakes up the network in time ${\mathcal O}(\log^{-1}\frac{1}{p}\, k\log n\log^{1/b} k)$ with a probability that is at least $1-1/\mbox{poly}(n)$.
Another algorithm is designed for the case when the inequality $b>\log(128b\log n)$ holds; in such networks the algorithm operates in time ${\mathcal O}(\log^{-1}(\frac{1}{p}) \, \frac{k}{b} \log n\log(b\log n))$ with a large probability.
\Paragraph{Previous work.}
G\k asieniec et al.~\cite{GasieniecPP-JDM01} gave a deterministic oblivious algorithm to wake up a single-hop single-channel radio network in time ${\mathcal O}(n\log^2 n)$, where $n$ is known and any number of stations may be activated spontaneously.
Our deterministic oblivious algorithms have time performance bounds expressed by formulas in which the three parameters $n$, $b$, and $k$ appear, of which $k$ is unknown while $n$ and $b$ are known.
Observe that if we substitute $k=n$ and $b=1$ in the upper bound ${\mathcal O}(k\log^{1/b} k\log n)$, which holds for the general deterministic oblivious algorithm, then what is obtained is ${\mathcal O}(n\log^2 n)$.
Our algorithm, when applied in networks with one channel, has the advantage of scaling with the unknown number $k$ of stations that are activated spontaneously, and provides an asymptotic improvement over the upper bound ${\mathcal O}(n\log^2 n)$ even for just two channels.
Jurdzi\'nski and Stachowiak~\cite{JurdzinskiS05} gave two randomized algorithms to wake up a multiple access channel.
One of them works in time ${\mathcal O}(\log^2 n)$ with high probability, when performance is optimized with respect to~$n$, and another works in time ${\mathcal O}(k)$ with high probability, when performance is optimized with respect to~$k$.
Our randomized algorithm for multi-channel networks has performance sub-linear in~$k$ for even just two channels.
Koml\'os and Greenberg~\cite{KomlosG-TIT85} showed how to resolve conflict for access to one channel among any of~$k$ stations in time ${\mathcal O}(k + k\log\frac{n}{k})$, when the stations begin an execution in the same round.
This can be compared to two of our results for the apparently more challenging problem of waking up a network, albeit equipped with multiple channels.
First, our general deterministic wake-up algorithm runs in time ${\mathcal O}(k\log^{1/b} k\log n)$.
Second, when the number of channels satisfies $b=\Omega(\log\log n)$ then another of our algorithms wakes up a network in time~${\mathcal O}(k\log n)$.
\Paragraph{Related work.}
Shi et al.~\cite{ShiHYWL12} considered the model of a multi-channel network, where there are $n$ nodes connected to $n$ channels, each channel being a single-hop radio network.
A node can use all the available channels concurrently for transmitting and/or receiving transmissions.
They studied the information-exchange problem, in which some $\ell$ nodes start with a rumor each and the goal is to disseminate all rumors across all stations.
They gave an information-exchange algorithm of time performance ${\mathcal O}(\log \ell\log\log \ell)$.
The work reported by Shi et al.~\cite{ShiHYWL12} is, to our knowledge, the only one to use the model in which nodes can use all the available channels concurrently and independently.
All the other work on algorithms for multi-channel single-hop radio networks used the model in which a node has to choose a channel per round to participate in communication only through this particular channel, either as a listener or as transmitter; variants of this model with adversarial disruptions of channels were also considered.
Next we review work done for this very multi-channel model, in which a station can use at most one channel at a time for communication.
Dolev et al.~\cite{DolevGGN-DISC07} studied a parametrized variant of gossip for multi-channel radio networks.
They gave oblivious deterministic algorithms for an adversarial setting in which a malicious adversary can disrupt one channel per round.
Daum et al.~\cite{DaumGKN12} considered leader election and Dolev et al.~\cite{DolevGGKN-PODC09} gave algorithms to synchronize a network, both papers about an adversarial setting in which the adversary can disrupt a number of channels in each round, this number treated as a parameter in performance bounds.
Information exchange has been investigated extensively for multi-channel wireless networks.
The problem is about some $\ell$ nodes initialized with a rumor each and the goal is either to disseminate the rumors across the whole network or, when the communication environment is prone to failures, to have each node learn as many rumors as possible.
Gilbert et al.~\cite{GilbertGKN-INFOCOM09} gave a randomized algorithm for the scenario when an adversary can disrupt a number of channels per round, this number being an additional parameter in performance bounds.
Holzer et al. in~\cite{HolzerPSW11} and~\cite{HolzerLPW17} gave deterministic and randomized algorithms to accomplish the information-exchange task in time ${\mathcal O}(\ell)$, for $\ell$ rumors and for suitable numbers of channels that make this achievable.
This time bound ${\mathcal O}(\ell)$ is optimal when multiple rumors cannot be combined into compound messages.
Wang et al.~\cite{WangWYY14} considered information-exchange in a model when rumors can be combined into compound messages and collision detection is available.
They gave an algorithm of time performance ${\mathcal O}(\ell/b +n\log^2n)$, for $\ell$ rumors and $b$ channels.
Ning et al.~\cite{YuNZJWLF17} gave a randomized algorithm for the model with collision detection that completes information exchange of $\ell$ rumors held by $\ell$ nodes in time ${\mathcal O}(\ell/b + b\log n)$ with a high probability, where $n$ is not known.
A multi-channel single-hop network is a generalization of a multiple-access channel, which consists of just one channel.
For recent work on algorithms for multiple-access channels see~\cite{AnantharamuC15, AnantharamuCKR-JCSS19,AnantharamuCR-TCS17, AntaMM13,BienkowskiKKK-STACS10, ChlebusKR-DC09, ChlebusKR-TALG12, CzyzowiczGKP11,DeMarcoK15,JurdzinskiS15,Kowalski05}.
The problem of waking up a radio network was first investigated by G\k asieniec et al.~\cite{GasieniecPP-JDM01} in the case of multiple-access channels, see \cite{DeMarcoK17, jDMK13, DeMarcoPS07, JurdzinskiS05} for more on related work.
A broadcast from a synchronized start in a radio network was considered in~\cite{ChlebusGGPR02,ChrobakGR-JA02,ClementiMS03,CzumajR-JA06,DeMarco-soda08,DeMarco-JC10,KowalskiP-DC05}.
The general problem of waking up a multi-hop radio network was studied in~\cite{ChlebusGKR-ICALP05,ChlebusK-PODC04,ChrobakGK-SICOMP07}.
A lower bound for a multiple-access channel was given by Greenberg and Winograd~\cite{GreenbergW-JACM85}.
Lower bounds for multi-hop radio networks were proved by Alon et al.~\cite{AlonBLP91}, Clementi et al.~\cite{ClementiMS03}, Farach-Colton et al.~\cite{Farach-ColtonFM06} and Kushilevitz and Mansour~\cite{KushilevitzM-SICOMP98}.
Ad-hoc multi-hop multi-channel networks were studied by Alonso et al.~\cite{AlonsoKSWW03}, Daum et al. in~\cite{DaumGGKN13} and \cite{DaumKN12}, Dolev et al.~\cite{DolevGKN11}, and So and Vaidya~\cite{SoV04}.
\Paragraph{Structure and history of this document.}
We summarize the technical preliminaries in Section~\ref{sec:technical-preliminaries}.
A lower bound on time performance of deterministic algorithms is given in Section~\ref{sec:lowerbound}.
A randomized wake-up algorithm is given in Section~\ref{sec:randomized-algorithm}.
The concept of a generic deterministic oblivious algorithm is discussed in Section~\ref{sec:deterministic-oblivious-algorithms}.
Instantiations of such a generic algorithm are presented in Section~\ref{sec:general-deterministic-algorithm}, and the next Section~\ref{sec:large} discusses specialized instantiations of the generic deterministic algorithm when the number of channels is sufficiently large with respect to the number of stations.
The results of this paper appeared in a preliminary form in~\cite{ChlebusDK-OPODIS14}.
\section{Technical Preliminaries}
\label{sec:technical-preliminaries}
The model of \emph{multi-channel single-hop radio network} is defined as follows.
There are $n$ nodes attached to a spectrum of $b$ frequencies.
Each frequency determines a multiple access channel.
We use the terms ``station'' and ``node'' interchangeably.
The set of all stations is denoted by~$V$, where $|V|=n$.
Each station has a unique name assigned to it, which is an integer in $[1,n]$.
All the available channels operate concurrently and independently from each other.
Each channel has a unique identifier, which is an integer in the interval $[1,b]$.
A station identifies a channel by its identifier, which is the same for all stations.
A station can transmit on any set of channels at any time.
A station obtains the respective feedback from each channel, separately and concurrently among the channels.
\Paragraph{The semantics of channels.}
We say that a station \emph{hears} a message on a channel when the station successfully receives a message transmitted on this channel.
A channel is \emph{silent} in a time interval when no station transmits on this channel in this time interval.
When more than one station transmits on a channel so that the transmissions overlap, we say that a \emph{collision} occurs on this channel during the time of overlap.
We say that a channel is equipped with \emph{collision detection} when the feedback from the channel allows stations to distinguish between the channel being silent and a collision occurring on the channel.
When stations receive the same feedback from a channel whether it is silent or a collision occurs on it, the channel is said to be \emph{without collision detection}.
When a station transmits on some channel and no collision occurs on this channel during such a transmission then each station hears the transmitted message on this channel.
When a station transmits a message and a collision occurs during the transmission on this channel then no station hears this transmitted message.
There could be a collision on one channel and at the same time a message may be heard on some other channel.
There is no collision detection on any channel.
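The round semantics above can be modelled concretely. The following sketch (our own encoding, not part of any algorithm in this paper) computes the feedback of a single synchronous round on $b$ channels without collision detection:

```python
from collections import defaultdict

def round_feedback(transmissions, b):
    """One synchronous round on b channels without collision detection.
    `transmissions` is a list of (station, channel, message) triples.
    Returns, per channel, the message heard, or None when the channel
    is silent or a collision occurs (the two are indistinguishable)."""
    per_channel = defaultdict(list)
    for station, channel, msg in transmissions:
        per_channel[channel].append(msg)
    return {ch: (per_channel[ch][0] if len(per_channel[ch]) == 1 else None)
            for ch in range(1, b + 1)}

# stations 1 and 2 collide on channel 1; station 3 is heard on channel 2
heard = round_feedback([(1, 1, "a"), (2, 1, "b"), (3, 2, "c")], b=3)
print(heard)  # {1: None, 2: 'c', 3: None}
```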
\Paragraph{Synchrony.}
Transmissions on all channels are synchronized.
This means that an execution of an algorithm is partitioned into \emph{rounds}.
Rounds are understood to be of equal length.
Each station has its private clock which is ticking at the rate of rounds.
Rounds begin and end at the same time for all stations.
When we refer to a round number then this means the indication of some station's private clock, while this station is understood from context.
Messages are scaled to duration of rounds, so that transmitting a message takes a whole round.
Two transmissions overlap in time precisely when they are performed in the same round.
This means that two messages result in a collision when and only when they are transmitted on the same channel and in the same round.
\Paragraph{Spontaneous activations.}
Initially, all stations are \emph{passive}, in that they do not execute any communication algorithm, and in particular do not transmit any messages on any channel.
Passive stations listen to all channels all the time, in that when a message is heard on a channel then all passive stations hear it too.
At an arbitrary point in time, some stations become \emph{activated} spontaneously and afterwards they are \emph{active}.
Passive stations may keep getting activated spontaneously after the round of the first activations.
A specific scenario of timings of certain stations being activated is called an \emph{activation pattern}.
An activated station resets its private clock to zero at the round of activation.
When a station becomes active, it starts from the first round indicated by its private clock to execute a communication algorithm.
Time, as measured by an external observer, is called \emph{global}.
Its units are of the same duration as rounds.
A unit of the global time is called a \emph{time step}.
The first round of a spontaneous activation of some station is considered as the first time step of the global time.
The time step in which a station~$u$ becomes activated spontaneously is denoted by~$\sigma_u$.
The set of stations that are active by time step~$t$ is denoted by~$W(t)$.
\Paragraph{The task of waking up a network.}
The algorithms we consider have as their goal to wake up the network that is executing it.
A network gets \emph{woken up} in the first round when some active station transmits on some channel as the only station transmitting in this round on this particular channel.
This moment is understood as resulting in all passive stations receiving a signal to ``wake up'' and next proceed with executing a predetermined communication algorithm.
The round of waking up a network can be used to synchronize local clocks so that they begin to indicate the same number of a time step.
Time performance of wake-up algorithms is measured as the number of rounds counted from the first spontaneous activation until the round of the first message heard on the network.
Performance bounds of wake-up algorithms in this paper employ the following three parameters: $n$, $b$, and $k$, which are natural numbers such that $1\le k\le n$.
Here $n$ is the number of stations, $b$ is the number of channels, and $k$ denotes an upper bound on the number of stations that may get activated spontaneously in an execution.
Given the parameters $n$, $k$, and $b$, they determine what can be called the \emph{$(n,k,b)$-wake-up problem}: find an algorithm that minimizes the time of waking up a network with $n$ nodes and $b$ channels when up to $k$ stations can be activated spontaneously.
We consider deterministic and randomized algorithms whose goal is to wake up a network.
They are \emph{oblivious} in that the actions of stations are determined in advance; such a determination is given as the probabilities of actions in the case of randomized algorithms.
A parameter of a system or executions is \emph{known} when it can be used in codes of algorithms.
For an instance of an $(n,k,b)$-wake-up problem, the number of channels $b$ is assumed to be known, which is natural, since stations need to know channels in order to use them.
Regarding the other parameters $n$ and $k$ in this paper, the assumptions are as follows.
If $n$ is known then $k$ is not assumed as known, which is the case of deterministic algorithms.
If $k$ is known then $n$ is not assumed to be known, which is the case of a randomized algorithm.
\section{A Lower Bound}
\label{sec:lowerbound}
We present a lower bound on time performance of any deterministic algorithm for the $(n,k,b)$-wake-up problem.
A family~${\mathcal F}$ of subsets of~$[n]$ is said to be \emph{$(n,k)$-selective} when for any subset $A\subseteq [n]$ of $k$ elements there exists a set $B\in{\mathcal F}$ such that $A\cap B$ is a singleton set.
There is a straightforward correspondence between $(n,k)$-selective families and deterministic oblivious wake-up protocols on a multiple-access channel with $n$ stations when up to $k$ stations are activated spontaneously.
Clementi et al.~\cite{ClementiMS03} showed that $\Omega(k\log\frac{n}{k})$ is a lower bound on the time needed to wake up a single-channel single-hop radio network with $n$ nodes, when some $k$ nodes are activated spontaneously.
More precisely, Clementi et al.~\cite{ClementiMS03} showed that any $(n,k)$-selective family needs to have at least $\frac{k}{24}\lg\frac{n}{k}$ elements, for $k$ such that $2 < k\le\frac{n}{64}$.
Wake-up protocols for our model of multi-channel networks can also be interpreted as $(n,k)$-selective families.
An additional aspect is that we can apply $b$ sets from the family simultaneously as concurrent transmissions on different channels.
This directly implies that $\frac{1}{b}\cdot \frac{k}{24}\lg\frac{n}{k}$ time is required of a wake-up protocol for a $b$-channel network of $n$ nodes, for $k$ such that $2< k\le \frac{n}{64}$, by a lower bound on the size of selective families given in~\cite{ClementiMS03}.
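The selectivity property itself can be checked by brute force for small parameters. The following Python sketch is our own illustration (the function name and the test families are ours, not part of the constructions in this paper); a time step of a $b$-channel protocol corresponds to $b$ such sets applied concurrently.

```python
from itertools import combinations

def is_selective(family, n, k):
    """Check whether `family` (an iterable of subsets of range(n)) is
    (n, k)-selective: every k-element subset A of range(n) must be
    hit by some F in the family with |A & F| == 1."""
    return all(
        any(len(set(A) & F) == 1 for F in family)
        for A in combinations(range(n), k)
    )

# The n singletons always form an (n, k)-selective family of size n,
# while a single set covering everything does not select.
print(is_selective([{i} for i in range(6)], 6, 3))   # -> True
print(is_selective([set(range(6))], 6, 3))           # -> False
```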
In the remaining part of this section, we demonstrate a lower bound on time of wake-up for multi-channel networks.
The arguments we expound follow the main ideas of the proof of a lower bound given by Clementi et al.~\cite{ClementiMS03} for one channel; in particular, we also refer to properties of intersection-free families proved by Frankl and~F{\"u}redi~\cite{FranklF85}.
There are two reasons for including this section, rather than simply accepting $\frac{1}{b}\cdot \frac{k}{24}\lg\frac{n}{k}$ as a lower bound.
First, proving Theorem~\ref{thm:lower-bound} makes the paper self-contained.
Second, the lower bound in Theorem~\ref{thm:lower-bound} is stated in a form that both improves the constants involved, yielding $\frac{1}{b}\cdot \frac{k}{4}\lg\frac{n}{k}+ {\mathcal O}(\frac{k}{b})$ instead of $\frac{1}{b}\cdot \frac{k}{24}\lg\frac{n}{k}$, and relaxes the restriction $k\le \frac{n}{64}$ to the general case $k\le n$.
We define a \emph{query} to be a set of ordered pairs $(x,\beta)$, for $x \in V$ and $1\leq \beta\leq b$.
An interpretation of a pair $(x,\beta)\in Q$, for a query $Q$, is that station $x$ is to transmit on channel~$\beta$ at the time step assigned for the query.
In this section, a deterministic oblivious algorithm ${\mathcal A}$ is represented as a sequence of queries ${\mathcal A} = \{Q_1,\ldots, Q_t\}$.
The index $i$ of a query $Q_i$ in such a sequence ${\mathcal A}$ is interpreted as the time step assigned for the query.
We use the notation
\[
Q_{i,\beta} = \{ x\in V : (x,\beta) \in Q_i \}
\ ,
\]
for a query~$Q_i$.
This represents the subset of all stations that transmit on channel~$\beta$ in time step~$i$.
We use Iverson's bracket $[{\mathcal P}]$, where ${\mathcal P}$ is a logical statement that can be either true or false, defined as follows: $[{\mathcal P}]=1$ if ${\mathcal P}$ is true and $[{\mathcal P}]=0$ if ${\mathcal P}$ is false.
We denote by ${\mathcal F}_{k}^n$ the family of sets with exactly~$k$ elements out of $n$ possible elements, interpreted as $k$-sets of stations taken from among all $n$ stations.
\begin{lemma}
\label{pigeonhole}
Let ${\mathcal A} =\{Q_1,Q_2,\ldots,Q_t\}$ be a sequence of queries representing an algorithm.
There exists a sub-family ${\mathcal S} \subseteq {\mathcal F}_{k}^n$ with at least $|{\mathcal F}_{k}^n|/2^{bt}$ elements such that any two sets $A$ and $B$ in~${\mathcal S}$ satisfy
\begin{equation}
\label{eqn:equivalence}
[|A\cap Q_{i,\beta}| \text{\rm\ is odd}\,] = [|B\cap Q_{i,\beta}| \text{\rm\ is odd}\,]
\ ,
\end{equation}
for all $i$ and $\beta$ such that $1\le i \le t$ and $1\le \beta\le b$.
\end{lemma}
\begin{proof}
Two sets $A$ and $B$ in ${\mathcal F}_{k}^n$ are said to be \emph{$i$-similar} when the equality~\eqref{eqn:equivalence} holds for all $\beta$ such that $1\le \beta\le b$.
The relation of $i$-similarity is an equivalence relation on~${\mathcal F}_{k}^n$.
The proof is by induction on~$t$.
The base of induction is obtained by taking an equivalence class of $1$-similarity that is of a largest size.
This size is at least $|{\mathcal F}_{k}^n|/2^{b}$, by the pigeonhole principle.
For the inductive step, assume that the claim holds for some $t\ge 1$, that is, there exists a sub-family ${\mathcal S} \subseteq {\mathcal F}_{k}^n$ with at least $|{\mathcal F}_{k}^n|/2^{bt}$ elements such that any two sets $A$ and $B$ in~${\mathcal S}$ satisfy the identity~\eqref{eqn:equivalence}, for all $i$ and $\beta$ such that $1\le i \le t$ and $1\le \beta\le b$.
Consider the relation of $(t+1)$-similarity determined on~${\mathcal S}$ by the query~$Q_{t+1}$.
There are at most $2^b$ nonempty equivalence classes of this relation.
One of them has at least $|{\mathcal S}|/2^b$ elements, by the pigeonhole principle.
By the inductive assumption, the size of this equivalence class is at least $|{\mathcal F}_{k}^n|/2^{b(t+1)}$.
\end{proof}
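The counting in the proof can be replayed directly: group the $k$-sets by the vector of parities $[\,|A\cap Q_{i,\beta}| \text{ is odd}\,]$ and apply the pigeonhole principle. A small Python sketch, with parameters and queries chosen by us for illustration:

```python
from itertools import combinations
from math import comb

def signature(A, queries):
    """Vector of parities [|A & Q| is odd] over all (step, channel) queries."""
    return tuple(len(A & Q) % 2 for Qi in queries for Q in Qi)

n, k, b, t = 7, 3, 2, 2
queries = [[{0, 1}, {2, 3}], [{1, 2, 4}, {0, 5}]]  # t steps, b channels each

classes = {}
for A in combinations(range(n), k):
    classes.setdefault(signature(set(A), queries), []).append(A)

largest = max(len(c) for c in classes.values())
# At most 2^(b t) distinct signatures, so some class holds
# at least C(n, k) / 2^(b t) of the k-sets.
assert largest * 2 ** (b * t) >= comb(n, k)
print(largest)
```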
For $\lambda \leq \kappa \leq n$, a family ${\mathcal F} \subseteq {\mathcal F}_{\kappa}^n$ is said to be
\emph{$(n, \kappa, \lambda)$-intersection free} if $|F_1 \cap F_2 | \not= \lambda$ for every distinct $F_1$ and $F_2$ in ${\mathcal F}$.
\begin{fact}[\cite{FranklF85}]
\label{fact:FF}
For any $(n, \kappa, \lambda)$-intersection free family ${\mathcal F}$ the following inequality holds:
\[
|{\mathcal F}|
\leq
{n \choose \lambda} \cdot
\frac{{ 2\kappa-\lambda-1 \choose \kappa}}{{2\kappa -\lambda -1 \choose \lambda}}
\ ,
\]
assuming the inequality $2\lambda + 1 \geq \kappa$ and that $\kappa - \lambda$ is a prime power.
$\square$
\end{fact}
\begin{lemma}
\label{lem:binomial-simplification}
The following identity holds true:
\[
{3k/2-1 \choose k} \big/ {3k/2-1 \choose k/2} = \frac{1}{2}
\ ,
\]
for even integers $k\ge 2$.
\end{lemma}
\begin{proof}
Let $k=2m$.
It is sufficient to verify the following equation:
\[
2\cdot \binom{3m-1}{ 2m} = \binom{3m-1 }{ m}
\ .
\]
This indeed is the case, as the following transformations
\begin{eqnarray*}
\binom{3m-1 }{ m} &=& \binom{3m-1}{2m-1}\\
&=& \frac{(3m-1)!}{m! \cdot (2m-1)!}\\
&=& \frac{2\cdot (3m-1)!}{(m-1)!\cdot (2m)!}\\
&=& 2\cdot\binom{3m-1}{2m}
\end{eqnarray*}
provide the needed verification.
\end{proof}
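The identity of the lemma is also easy to confirm numerically; a quick Python check (the range of $m$ is our choice), with $k = 2m$:

```python
from math import comb

# Verify 2 * C(3m-1, 2m) == C(3m-1, m), which is the identity of the
# lemma written with k = 2m, over a range of values of m.
assert all(2 * comb(3 * m - 1, 2 * m) == comb(3 * m - 1, m)
           for m in range(1, 100))
print("identity verified for m = 1..99")
```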
We use the notation $\lg x$ for the binary logarithm $\log_2 x$.
\begin{lemma}
\label{l:sets}
Let ${\mathcal A} =\{Q_1,Q_2,\ldots,Q_t\}$ be an algorithm, where the following inequality holds
\[
t\leq \frac{k}{2b}\lg \frac{n}{k} - \frac{3k-2}{2b}
\]
and $\frac{k}{2}$ is a prime power.
There exist two sets $A,B\in {\mathcal F}_k^n$ such that the following are satisfied:
\begin{enumerate}
\item[\rm (a)] $|A \cap B| = \frac{k}{2}$,
\item[\rm (b)] $[|A\cap Q_{i,\beta}| \text{\rm\ is odd}\,] = [|B\cap Q_{i,\beta}| \text{\rm\ is odd}\,]$,
for every $i$ and $\beta$ such that $1\le i \le t$ and $1\le \beta\le b$.
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma~\ref{pigeonhole}, there exists a sub-family ${\mathcal S} \subseteq {\mathcal F}_{k}^n$ such that
\begin{equation}\label{S}
|{\mathcal S}|\ge |{\mathcal F}_{k}^n|/2^{bt} = {n \choose {k}} / 2^{bt}
\end{equation}
and
\[
[|A \cap Q_{i,\beta}| \mbox{ is odd}] = [|B\cap Q_{i,\beta}| \mbox{ is odd}]
\ ,
\]
for every $A,B\in{\mathcal S}$, $1\le \beta\le b$ and $1\le i \le t$.
It follows that any two sets $A$ and $B$ in ${\mathcal S} \subseteq {\mathcal F}_{k}^n$ satisfy condition~(b).
It remains to demonstrate that there are at least two sets in ${\mathcal S}$ that also satisfy condition~(a), that is, their intersection has $\frac{k}{2}$ elements.
We use Fact~\ref{fact:FF} with $\kappa=k$ and $\lambda=\frac{k}{2}$, relying on the assumption that $\frac{k}{2}$ is a prime power.
It gives that any sub-family of ${\mathcal F}_{k}^n$ containing sets whose pairwise intersections have size
different from $k/2$ has at most this many elements:
\[
{n \choose k/2} \cdot {3k/2-1 \choose k} \big/ {3k/2-1 \choose k/2}
\ ,
\]
which equals $\frac{1}{2}{n \choose k/2}$ by Lemma~\ref{lem:binomial-simplification}.
It follows that it suffices to show the following inequality:
\begin{equation}
\label{eqn:half-of-binom}
|{\mathcal S}| > \frac{1}{2} {n \choose k/2}
\ .
\end{equation}
To demonstrate this, we start from inequality~\eqref{S} and proceed through a sequence of inequalities.
In the process, we use the following estimates on binomial coefficients, for positive integers $x$:
\[
\left(\frac{n}{x}\right)^x \le \binom{n }{x} < \left(\frac{ne}{x}\right)^x
\ ,
\]
along with the assumed bound on $t$ in the form $bt\le \frac{k}{2}\lg \frac{n}{k}-\frac{3k-2}{2}$.
The algebraic manipulations are as follows:
\begin{eqnarray*}
|{\mathcal S}| &\geq& \binom{n}{k}/2^{bt} \\
&\geq& 2^{k\lg (n/k) - bt} \\
&\geq& 2^{k\lg (n/k) - (k/2)\lg (n/k) + (3k-2)/2} \;\;\mbox{\small after substituting the bound on $bt$}\\
&=& 2^{(k/2)\lg (n/k) + 3k/2-1} \\
&=& 2^{(k/2)\lg (8n/k) -1} \\
&>& 2^{(k/2)\lg (2ne/k)-1} \\
&=& \frac{1}{2} \left(\frac{2ne}{k}\right)^{k/2} \\
&>& \frac{1}{2} \binom{n }{ k/2} \ .
\end{eqnarray*}
We have thus justified~\eqref{eqn:half-of-binom}.
This in turn implies that there exist two sets in ${\mathcal S}$ whose intersection has exactly $\frac{k}{2}$ elements.
This completes the proof of existence of two sets $A$ and $B$ in ${\mathcal S}$ that satisfy part~(a).
\end{proof}
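The chain of inequalities can be spot-checked numerically at the threshold value of~$t$; the parameters below are our own choices, with $k/2$ a prime power:

```python
from math import comb, floor, log2

def bound_holds(n, k, b):
    """Check that C(n, k) / 2^(b t) > C(n, k/2) / 2 at the largest t
    allowed by the lemma, t = k/(2b) * lg(n/k) - (3k-2)/(2b)."""
    t = floor(k / (2 * b) * log2(n / k) - (3 * k - 2) / (2 * b))
    return comb(n, k) / 2 ** (b * t) > comb(n, k // 2) / 2

for n, k, b in [(1024, 8, 1), (1024, 8, 2), (4096, 16, 2)]:
    assert bound_holds(n, k, b)
print("inequality chain verified")
```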
Now we proceed to prove the lower bound, which is formulated as follows.
\begin{theorem}
\label{thm:lower-bound}
Any deterministic oblivious algorithm that wakes up a network of $n$ nodes with $b$ channels, when at most $k$ nodes are activated spontaneously, for $2< k\le n$, requires more than
\[
\frac{k}{4b}\lg \frac{n}{k} - \frac{3k-2}{2b}
\]
time steps.
\end{theorem}
\begin{proof}
Let $i$ be the largest integer such that $2< 2^i \leq k$.
Assume that $k' = 2^i$ stations, out of at most $k$ available stations, are activated simultaneously at time step zero.
Let ${\mathcal A} =\{Q_1,Q_2,\ldots,Q_t\}$ be an algorithm such that the following inequality holds:
\begin{equation}
\label{eqn:lower-bound}
t \leq \frac{k'}{2b}\lg \frac{n}{k'} - \frac{3k'-2}{2b}
\ .
\end{equation}
Lemma~\ref{l:sets} is applicable, because $k'/2$ is a power of $2$ and hence a prime power.
Let $A$ and $B$ be two sets in ${\mathcal F}_{k'}^n$ with the properties stated in Lemma~\ref{l:sets}.
Let us set $A'=A\setminus B$ and $B'=B \setminus A$.
Observe that if $A$ and~$B$ have properties (a) and~(b) of Lemma~\ref{l:sets} then the following holds for $A'$ and $B'$:
\begin{enumerate}
\item[\rm (a*)] $|A'| = |B'| = \frac{k'}{2}$,
\item[\rm (b*)] $A'\cap B'=\emptyset$,
\item[\rm (c*)] $[|A'\cap Q_{i,\beta}| \mbox{ is odd}] = [|B'\cap Q_{i,\beta}| \mbox{ is odd}]$,
for every $1\le \beta\le b$ and $1\le i \le t$.
\end{enumerate}
We set $X = A' \cup B'$ to obtain that (a*) and (b*) imply $|X| = k'$.
Moreover, from (c*) it follows that
either $X\cap Q_{i,\beta} = \emptyset$
or $|X\cap Q_{i,\beta}| {\gamma}e 2$, for all $1\le \beta\le b$ and $1\le i \le t$.
Consider an execution in which the stations in $X$ are simultaneously activated as the only stations activated spontaneously.
Then, during the first $t$ time steps after activations, no station in $X$ is heard on any channel.
We conclude that if an algorithm ${\mathcal A}$ always wakes up the network within $t$ steps then the
inequality~\eqref{eqn:lower-bound} cannot hold.
Consequently, since $k/2<k'\le k$, the lower bound follows.
\end{proof}
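The mechanism behind the proof is that an activated set $X$ with $|X\cap Q_{i,\beta}|\neq 1$ for every query position is never heard. A minimal Python illustration, with $A'$, $B'$, and the queries chosen by us so that the parities of all intersections agree:

```python
def heard_positions(X, queries):
    """Return the (step, channel) pairs at which exactly one
    station of X transmits, i.e. at which X is heard."""
    return [(i, beta)
            for i, Qi in enumerate(queries)
            for beta, Q in enumerate(Qi)
            if len(X & Q) == 1]

# A' and B' are disjoint and have equal intersection parities with
# every query, so X = A' | B' meets each query in 0 or >= 2 stations.
A_prime, B_prime = {0, 1}, {2, 3}
X = A_prime | B_prime
queries = [[{0, 2}, {1, 3}], [{0, 1, 2, 3}, {4, 5}]]
print(heard_positions(X, queries))        # -> []
print(heard_positions({0, 1}, queries))   # -> [(0, 0), (0, 1)]
```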
\section{A Randomized Algorithm}
\label{sec:randomized-algorithm}
A pseudocode of a randomized algorithm, called \textsc{Channel-Screening}, is given in Figure~\ref{alg:channel-screening}.
All random bits generated during an execution are independent from each other.
The pseudocode refers to~$k$, which means that $k$ is assumed to be known.
At the same time, $n$ need not be known, because only active stations participate in the execution, and their number is always bounded above by~$k$.
\begin{figure}
\caption{\label{alg:channel-screening} A pseudocode of algorithm \textsc{Channel-Screening}.}
\end{figure}
\begin{lemma}
\label{lem:class}
Let $t$ be a time step and let $1\leq \beta\leq b$ be such that the following inequalities hold:
\[
k^{(\beta-1)/b} \leq |W(t)| \leq k^{\beta/b}
\ .
\]
When algorithm \textsc{Channel-Screening} is executed then the probability of hearing a message at time step~$t$ on channel $\beta$ is at least $\frac{1}{2ek^{1/b}}$.
\end{lemma}
\begin{proof}
Let $S(\beta, t)$ be the event of a successful transmission on channel $\beta$ at time $t$.
The probability that a station $w\in W(t)$ transmits at time $t$ on channel $\beta$ while all the
others remain silent can be bounded as follows:
\begin{eqnarray*}
\Pr (S(\beta, t) ) &\geq& \frac{|W(t)|}{k^{\beta/b}} \left( 1- \frac{1}{k^{\beta/b}} \right)^{|W(t)|-1}\\
&\geq& \frac{k^{(\beta-1)/b}}{k^{\beta/b}} \left( 1- \frac{1}{k^{\beta/b}} \right)^{ k^{\beta/b}},
\end{eqnarray*}
where the last inequality follows from the hypothesis that $k^{(\beta-1)/b} \leq |W(t)| \leq k^{\beta/b}$.
Hence
\[
\Pr (S(\beta, t) ) \geq \frac{1}{2ek^{1/b}}\ ,
\]
which completes the proof.
\end{proof}
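Since the pseudocode itself is relegated to a figure, we note that the proof works with the transmission probability $1/k^{\beta/b}$ on channel~$\beta$; under that assumption the bound of the lemma can be spot-checked numerically. The parameters below are our own choices:

```python
from math import e

def success_prob(w, p):
    """Probability that exactly one of w stations transmits,
    each independently with probability p."""
    return w * p * (1 - p) ** (w - 1)

k, b = 64, 3
for beta in range(1, b + 1):
    p = 1 / k ** (beta / b)
    low = max(round(k ** ((beta - 1) / b)), 1)
    high = round(k ** (beta / b))
    for w in range(low, high + 1):
        # Lemma: for k^((beta-1)/b) <= |W(t)| <= k^(beta/b), the
        # success probability is at least 1 / (2 e k^(1/b)).
        assert success_prob(w, p) >= 1 / (2 * e * k ** (1 / b))
print("lemma bound verified for k = 64, b = 3")
```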
An estimate of the number of time steps needed to make the probability of failure smaller than a threshold $\varepsilon$ is given in the following theorem:
\begin{theorem}
\label{thm:randomized-algorithm}
Algorithm \textsc{Channel-Screening}, executed by $n$ nodes on a network with $b$ channels when at most $k$ stations get activated, succeeds in waking up the network in ${\mathcal O}(k^{1/b}\ln \frac{1}{\varepsilon})$ time with a probability that is at least $1-\varepsilon$.
\end{theorem}
\begin{proof}
Let us consider a set of contiguous time steps $T$.
For $1\leq \beta \leq b$, let us use the following notation:
\[
T_{\beta} = \{ t \in T |\ k^{(\beta-1)/b} \leq |W(t)| \leq k^{\beta/b}\}\ .
\]
Let $\bar{E}(t)$ be the event of an unsuccessful time step $t$, in which no station transmits as the only transmitter on any channel, and let $\bar{E}(\beta, t)$ be the event of an unsuccessful time step $t$ on channel~$\beta$, with $1\leq \beta \leq b$.
By Lemma~\ref{lem:class}, the probability of having a sequence of $\lambda = |T|$ unsuccessful
time steps can be estimated as follows:
\begin{eqnarray*}
\Pr\Bigl( \bigcap_{t \in T} \bar{E}(t) \Bigr)
& \leq &
\Pr\Bigl( \bigcap_{t \in T_1} \bar{E}(1, t) \Bigr) \cdot
\Pr\Bigl( \bigcap_{t \in T_2} \bar{E}(2, t) \Bigr) \cdots
\Pr\Bigl( \bigcap_{t \in T_b} \bar{E}(b, t) \Bigr) \\
& \leq &
\Bigl( 1 - \frac{1}{2ek^{1/b}} \Bigr)^{|T_1|} \cdot
\Bigl( 1 - \frac{1}{2ek^{1/b}} \Bigr)^{|T_2|} \cdots
\Bigl( 1 - \frac{1}{2ek^{1/b}} \Bigr)^{|T_b|} \\
& \leq &
\Bigl( 1 - \frac{1}{2ek^{1/b}} \Bigr)^{\lambda} \\
& \leq &
\varepsilon
\ ,
\end{eqnarray*}
for $\lambda \geq 2ek^{1/b}\ln \frac{1}{\varepsilon}$.
\end{proof}
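The choice $\lambda \geq 2ek^{1/b}\ln\frac{1}{\varepsilon}$ at the end of the proof can be checked directly; a small Python sketch with parameters of our own choosing:

```python
from math import ceil, e, log

def rounds_needed(k, b, eps):
    """Number of time steps ceil(2 e k^(1/b) ln(1/eps)), after which
    the failure bound of the theorem drops below eps."""
    return ceil(2 * e * k ** (1 / b) * log(1 / eps))

k, b, eps = 100, 2, 0.01
lam = rounds_needed(k, b, eps)
# (1 - x)^lam <= exp(-x * lam) <= eps for lam >= ln(1/eps) / x.
assert (1 - 1 / (2 * e * k ** (1 / b))) ** lam <= eps
print(lam)
```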
Algorithm \textsc{Channel-Screening} demonstrates that the lower bound of Theorem~\ref{thm:lower-bound} can be beaten by a randomized algorithm that can use the magnitude of the parameter~$k$ explicitly.
Actually, the bound of Theorem~\ref{thm:randomized-algorithm} is such that already for $b=2$ channels the network is woken up with a positive probability in time ${\mathcal O}(\sqrt{k})$, while the lower bound of Theorem~\ref{thm:lower-bound} implies that time~$\Omega(k)$ is required.
\section{A Generic Deterministic Oblivious Algorithm}
\label{sec:deterministic-oblivious-algorithms}
Deterministic oblivious algorithms are represented as schedules of transmission precomputed for each station.
A schedule is simply a binary sequence.
Such schedules of transmission are organized as rows of a binary matrix, for the sake of visualization and discussion.
Let $\ell$ be a positive integer treated as a parameter.
An array~${\mathcal T}$ of entries of the form $T(u,\beta,j)$, where $u\in V$ is a station, $\beta$ is a channel with $1\leq \beta \leq b$, and $j$ is an integer with $0\leq j\leq \ell$, is a \emph{transmission array} when each entry is either a~$0$ or a~$1$.
The parameter $\ell=\ell({\mathcal T})$ is called the \emph{length} of array~${\mathcal T}$.
Entries of a transmission array~${\mathcal T}$ are called \emph{transmission bits} of~${\mathcal T}$.
The number~$j$ is the \emph{position} of a transmission bit~$T(u,\beta,j)$.
For a transmission array ${\mathcal T}$, a station~$u$ and channel~$\beta$, the sequence of entries $T(u,\beta,j)$, for $j=1,\ldots,\ell$, is called a \emph{$(u,\beta)$-schedule} and is denoted ${\mathcal T}(u,\beta,\ast)$.
A $(u,\beta)$-schedule ${\mathcal T}(u,\beta,\ast)$ defines the following \emph{schedule of transmissions} for station~$u$: transmit on channel~$\beta$ in the $j$th round if and only if $T(u,\beta,j) = 1$.
Every station $u\in V$ is provided with a copy of all entries $T(u,\ast,\ast)$ of some transmission array~${\mathcal T}$ as a way to instantiate the code of a wake-up algorithm.
A pseudocode representation of such a generic deterministic oblivious algorithm is given in Figure~\ref{alg:wake-up}.
In the analysis of performance, which is based on properties of~${\mathcal T}$, it is assumed that the number~$n$ is known while the parameter~$k$ is unknown.
\begin{figure}
\caption{\label{alg:wake-up} A pseudocode of the generic algorithm \textsc{Wake-Up}\,$({\mathcal T})$.}
\end{figure}
Time is measured by each station's private clock, with rounds counted from its activation.
Let us recall that if a station $u$ is active in a time step~$t$ then $u$ perceives this time step~$t$ as round~$t-\sigma_u$.
A station $v$ is $\beta$-{\em isolated} at time step~$t$ when $v \in W(t)$ and when both $T(v,\beta, t-\sigma_v) = 1$ and $T(u,\beta, t-\sigma_u) = 0$, for every $u\in W(t) \setminus \{v\}$.
A station $v$ is \emph{isolated at time step $t$} when $v$ is $\beta$-isolated at time step~$t$ for some channel $1\leq \beta \leq b$.
For a given transmission array, by an \emph{isolated position} we understand a pair $(t,\beta)$ of time step~$t$ and channel~$\beta$ such that there is a $\beta$-isolated station at time step~$t$.
When $(t,\beta)$ is an isolated position of a transmission array~${\mathcal T}$ then a successful wake-up occurs by time~$t$ when algorithm \textsc{Wake-Up}\,$({\mathcal T})$ is executed.
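How a transmission array drives the generic algorithm can be simulated in a few lines. The following Python sketch is our own minimal model (the dictionary representation of~${\mathcal T}$ and the helper name are ours): it scans global time and reports the first isolated position.

```python
def first_success(T, ell, b, sigma):
    """Scan global time steps and report the first (t, beta, u) at which
    exactly one active station u has a transmission bit set, i.e. at
    which u is beta-isolated.  `sigma[u]` is u's activation time step;
    an active station indexes its schedule by its private round t - sigma[u]."""
    for t in range(max(sigma.values()) + ell):
        for beta in range(b):
            transmitters = [u for u, s in sigma.items()
                            if s <= t and T.get((u, beta, t - s), 0) == 1]
            if len(transmitters) == 1:
                return t, beta, transmitters[0]
    return None

# Two stations on one channel (b = 1), both activated at time 0:
# they collide at t = 0, and u is isolated at t = 1.
T = {("u", 0, 0): 1, ("u", 0, 1): 1,
     ("v", 0, 0): 1, ("v", 0, 1): 0}
print(first_success(T, ell=2, b=1, sigma={"u": 0, "v": 0}))  # -> (1, 0, 'u')
```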
We organize a transmission array by partitioning it into sections of increasing length.
We will use the fact that the mapping $i\mapsto 2^i\cdot i^{1/b}$ is strictly increasing, which can be verified directly.
Let $c$ be a positive integer and let us define the function $\varphi$ as follows:
\begin{enumerate}
\item
$\varphi (0) = 0$, and
\item
$\varphi(i) = c2^i\cdot i^{1/b} \lg n$, for positive integers~$i$.
\end{enumerate}
The $i$th \emph{section} of a $(u,\beta)$-schedule $T(u,\beta,*)$, for $1\leq i \leq \lg n$, consists of a concatenation of all the segments
\[
T(u,\beta, \varphi(i)), T(u,\beta, \varphi(i)+1), \ldots, T(u,\beta, \varphi(i+1)-1)
\]
of transmission bits.
A station executing the $i$th section of its schedules is said to be \emph{in stage~$i$}.
The stations that are in a stage $i$ at a time step~$j$ are denoted by~$W_i(j)$.
The constant $c$ will be determined later as needed.
The identity
\[
\bigcup_{i=1}^{\lg n} W_i(j) = W(j)
\]
holds for every time step $j$, because an active station is in some stage.
Observe that the length of the $i$th section for any $(u,\beta)$-schedule is $\varphi(i+1) - \varphi(i)$, which can be verified to be at least as large as~$\varphi(i)$.
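Both properties, the monotonicity of $i\mapsto 2^i\, i^{1/b}$ and the bound $\varphi(i+1)-\varphi(i)\geq\varphi(i)$, follow from $\varphi(i+1)\geq 2\varphi(i)$ and can be verified numerically; the constants below are our own choices:

```python
from math import log2

def phi(i, c, n, b):
    # phi(0) = 0; phi(i) = c * 2^i * i^(1/b) * lg n for i >= 1.
    return 0 if i == 0 else c * 2 ** i * i ** (1 / b) * log2(n)

c, n, b = 4, 1 << 20, 3
values = [phi(i, c, n, b) for i in range(1, 21)]
assert all(x < y for x, y in zip(values, values[1:]))       # strictly increasing
assert all(y - x >= x for x, y in zip(values, values[1:]))  # section length >= phi(i)
print("growth properties of phi verified")
```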
The time steps at which sufficiently many stations are in a stage, say~$\omega$, and no station is in a stage with index larger than~$\omega$, play a special role in the analysis.
The relevant notions are that of a balanced time step, given in Definition~\ref{bal}, and of a balanced time interval, given in Definition~\ref{baltime}.
Intuitively, a balanced time step is a round at which there are $\Theta(2^{\omega})$ awake stations involved in section~$\omega$ and no station involved in the subsequent sections, for some $1\leq \omega \leq \log k$.
Similarly, a balanced time interval is a time interval that contains sufficiently many balanced time steps.
These notions are precisely defined as follows.
\begin{definition}[Balanced time steps.]
\label{bal}
For a stage $\omega$, where $1\leq \omega \leq \log k$, a time step~$j$ is \emph{$\omega$-balanced} when the following properties hold:
\begin{enumerate}
\item[\rm (a)] ${2^{\omega}}\leq |W_{\omega}(j)| \leq {2^{\omega+2}}$, and
\item[\rm (b)] $|W_{i}(j)| = 0$, for all stages $i$ such that $i > \omega$.
\end{enumerate}
\end{definition}
When we refer to time intervals then this means intervals of time steps of the global time.
We identify time intervals with sufficiently large sequences of consecutive time steps that
contain only balanced time steps as follows:
\begin{definition}[Balanced time intervals.]
\label{baltime}
Let $\omega$ be a stage, where $1\leq \omega \leq \log k$.
A time interval $[t_1, t_2]$ of size $\varphi(\omega-1)$,
is said to be $\omega$-\emph{balanced}, if every time step $j \in [t_1, t_2]$ is $\omega$-balanced.
An interval is called \emph{balanced} when there exists a stage $\omega$, for $1\leq \omega\leq \log k$, such that it is $\omega$-balanced.
\end{definition}
For a time step $j$, we define $\Psi (j)$ as follows:
\[
\Psi (j) = \sum_{\omega=1}^{\log k} \frac{|W_\omega(j)|}{2^{\omega}}.
\]
We categorize balanced time intervals further by considering their useful properties:
\begin{definition}[Light time intervals.]
\label{d:light}
Let $\omega$ be a stage, where $1\leq \omega \leq \log k$.
An $\omega$-balanced time interval $[t_1, t_2]$ is called \emph{$\omega$-light} when
\begin{enumerate}
\item[\rm (1)] the inequality $\bigl| \bigcup_{i=1}^{\omega} W_{i}(j) \bigr| \leq 2^{\omega+4}$
holds for every time step $j\in [t_1,t_2]$, and
\item[\rm (2)] interval $[t_1, t_2]$ contains at least $\varphi(\omega-2)$ time steps $j$ such that
\begin{equation}
\label{condition}
1 \leq \Psi(j) \leq 128 \cdot \omega
\ .
\end{equation}
\end{enumerate}
An interval is called \emph{light} when there exists a stage $\omega$, for $1\leq \omega\leq \log k$, such that it is $\omega$-light.
\end{definition}
We show existence of a suitable array of waking schedules by the probabilistic method.
In the argument, we refer to randomized transmission arrays, as defined next.
These are arrays with entries being independent random variables.
\begin{definition}[Regular randomized transmission arrays.]
\label{def:array}
A \emph{randomized transmission array~${\mathcal T}$} has the structure of a transmission array. Transmission bits $T(u,\beta,j)$ are not fixed but instead are independent Bernoulli random variables.
Let $u$ be a station and $\beta$ denote a channel.
For $1\leq i\leq \lg n$, the entries of the $i$th section of the $(u,\beta)$-schedule are stipulated to have the following probability distribution
\[
\Pr (T(u,\beta,j) = 1 ) = 2^{-i}\cdot i^{-\beta/b}
\ ,
\]
for $j= \varphi(i), \ldots, \varphi(i+1)-1$.
\end{definition}
We say that the number of channels $b$ is \emph{$n$-large}, or simply \emph{large}, or that there are \emph{$n$-many channels}, when the inequality $b>\log(128\,b\log n)$ holds.
We set
\[
\varphi(i)=c\cdot (2^i/b) \lg n\log(128\,b\log n)
\]
for such $b$, where $c$ is a sufficiently large constant to be specified later.
Recall the notation
\[
\Psi (j) = \sum_{i=1}^{\log k} \frac{|W_i(j)|}{2^i}
\ ,
\]
for a time step~$j$.
For $n$-many channels, we use a modified version of a light time interval (see Definition~\ref{d:light}), where condition~(2) is replaced by the following one:
\begin{equation}
\label{newcondition}
1 \leq \Psi(j) \leq 128 \cdot \log n.
\end{equation}
For a channel~$\beta$, we use the following notation:
\begin{equation}
\label{eqn:beta-star}
\beta^*=\beta \bmod \log(128\,b\log n)
\ .
\end{equation}
A \emph{modified randomized transmission array~${\mathcal T}$} has the structure of a transmission array.
Transmission bits $T(u,\beta,j)$ are not fixed but instead are independent Bernoulli random variables.
Let $u$ be a station and $\beta$ denote a channel.
For $1\leq i\leq \lg n$, the entries of the $i$th section of the $(u,\beta)$-schedule are stipulated to have the following probability distribution
\[
\Pr (T(u,\beta,j) = 1 ) = b \cdot 2^{-i -\beta^*}
\ ,
\]
for $j= \varphi(i), \ldots, \varphi(i+1)-1$, where we use the notation stipulated in~\eqref{eqn:beta-star}.
A randomized transmission array, whether regular or modified, is used to represent a randomized wake-up algorithm.
To decide if a station~$u$ transmits on a channel~$\beta$ in a round~$j$, this station first carries out a Bernoulli trial with the probability of success as stipulated in the definition of the respective randomized array, and transmits when the experiment results in a success.
Regular arrays are used in the general case and modified arrays when there are $n$-many channels.
\begin{definition}[Waking arrays.]
\label{schedule}
A transmission array ${\mathcal T}$ is said to be \emph{waking} when for every $k$ such that $1\leq k\leq n$ and for every light interval $[t_1, t_2]$ such that $|W(t)| \leq k$ whenever $t_1 \leq t \leq t_2$,
there exist both a time step~$j\in [t_1, t_2]$ and a station $w\in W(j)$ such that $w$ is isolated
at time step~$j$.
\end{definition}
The length of a waking array is the worst-case time bound on performance of the wake-up algorithm determined by this transmission array.
\section{A General Deterministic Algorithm}
\label{sec:general-deterministic-algorithm}
We consider waking arrays ${\mathcal T}$ to be used as instantiations of algorithm \textsc{Wake-Up} given in Section~\ref{sec:deterministic-oblivious-algorithms}.
The goal is to minimize their length in terms of $n$ while their effectiveness is to be expressed in terms of both $n$ and $k\le n$.
The existence of waking arrays of small length is shown by the probabilistic method.
The main fact proved in this section is the following:
\begin{theorem}
\label{thm:general-deterministic}
There exists a deterministic waking array ${\mathcal T}$ of length ${\mathcal O}(n\log n \log^{1/b} n)$ such that the algorithm \textsc{Wake-Up}\,$({\mathcal T})$, obtained by instantiating the generic algorithm \textsc{Wake-Up} with~${\mathcal T}$, wakes up a network in time ${\mathcal O}(k\log n\log^{1/b} k)$, for up to $k\le n$ stations activated spontaneously.
\end{theorem}
We proceed with a sequence of preparatory Lemmas.
Let $X$ be the set of stations that are activated first.
Let $\sigma_X$ be the time step at which they become active.
All stations are passive before the time step~$\sigma_X$.
Let ${\gamma}_0 = 0$ and for $i = 1,2,\ldots,\lg n$, define ${\gamma}_i$ as the sum of the lengths of the first $i$ sections.
We have the following identities
\[
{\gamma}_i = \sum_{h=0}^{i} (\varphi(h+1) - \varphi(h)) = \varphi(i+1)
\ .
\]
\begin{lemma}
\label{zigzag}
For $i = 1,2,\ldots,\lg n$, all stations in $X$ are in section $i$ of their transmission
schedules between time $\sigma_X + {\gamma}_{i-1}$ and time $\sigma_X + {\gamma}_i-1$.
\end{lemma}
\begin{proof}
Any station $x \in X$, woken up at time $\sigma_X$, reaches section~$i$ at time $\sigma_X + {\gamma}_{i-1}$, for $1 \leq i \leq \lg n$, and continues to transmit according to transmission bits in section~$i$ until time $\sigma_X + {\gamma}_i-1$.
\end{proof}
\begin{lemma}
\label{join}
Fix a time step $j'$ and an integer $\omega$, with $1 \leq \omega \leq \log n$.
For any integer $h \geq 1$, there exists a time step $j''\geq j'$ such that the following holds for
$j = j'', \ldots, j'' + \varphi(\omega+h)$:
\[
\bigcup_{i=1}^{\omega} W_i(j') \subseteq W_{\omega+h+1}(j)
\ .
\]
\end{lemma}
\begin{proof}
Let us fix $h\geq 1$.
Recall that the sum of the lengths of the first $i$ sections is ${\gamma}_i = \varphi(i+1)$.
Any station $x\in W_1(j')$ is in section $\omega+h+1$ by time $j'+\varphi(\omega+h+1)$.
Analogously, a station $y\in W_{\omega}(j')$ cannot leave section $\omega+h+1$
before time step $j' + \varphi(\omega+h+2) - \varphi(\omega+1)$.
It follows that all stations in $W_i(j')$, for $1\leq i\leq \omega$, are in section $\omega+h+1$
between the time step
\[
j'' = j'+\varphi(\omega+h+1)
\]
and the time step
\[
\tau = j' + \varphi(\omega+h+2) - \varphi(\omega+1)
\ .
\]
It remains to count the number of time steps between~$j''$ and~$\tau$.
We have that
\begin{eqnarray*}
\tau - j'' &=& \varphi(\omega+h+2) - \varphi(\omega+h+1) - \varphi(\omega+1) \\
&\geq& \varphi(\omega+h+1) - \varphi(\omega+1) \\
&\geq& \varphi(\omega+h),
\end{eqnarray*}
for every $h\geq 1$.
\end{proof}
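The final count, $\tau - j'' \geq \varphi(\omega+h)$, can be spot-checked with the concrete $\varphi$ of this section; the constants below are our choice:

```python
from math import log2

def phi(i, c=4, n=1 << 20, b=2):
    # phi(0) = 0; phi(i) = c * 2^i * i^(1/b) * lg n for i >= 1.
    return 0 if i == 0 else c * 2 ** i * i ** (1 / b) * log2(n)

# tau - j'' = phi(w+h+2) - phi(w+h+1) - phi(w+1) >= phi(w+h).
assert all(
    phi(w + h + 2) - phi(w + h + 1) - phi(w + 1) >= phi(w + h)
    for w in range(1, 10) for h in range(1, 10)
)
print("overlap bound verified")
```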
\begin{lemma}
\label{fat}
Fix a time step $j'$ and an integer $\omega$, with $1 \leq \omega \leq \log n$.
Assume the following two inequalities:
\[
\Bigl| \bigcup_{i=1}^{\omega-1} W_{i}(j') \Bigr| \geq 3\cdot |W_{\omega}(j')| \ \ \ \mbox{and}\ \ \
|W_{\omega}(j')| \geq 2^{\omega}
\ .
\]
Then there exists an interval $[t_1, t_2]$ of size $\varphi(\omega+1)$ with $t_1 \geq j'$ such that
$|W_{\omega+2}(j)| \geq 2^{\omega+2}$, for every $j \in [t_1, t_2]$.
\end{lemma}
\begin{proof}
We have the following estimate:
\begin{eqnarray}
\Bigl| \bigcup_{i=1}^{\omega} W_i(j') \Bigr| &=& \Bigl| \bigcup_{i=1}^{\omega-1} W_i(j') \Bigr| + |W_\omega(j')|\nonumber \\
&\geq& 3\cdot |W_\omega(j')| + |W_\omega(j')|\nonumber \\
&\geq& 2^{\omega+2}. \label{und}
\end{eqnarray}
By Lemma~\ref{join}, there exists a round $j''\geq j'$ such that for
$j = j'', \ldots, j'' + \varphi (\omega+1)$,
\[
\bigcup_{i=1}^{\omega} W_i(j') \subseteq W_{\omega+2}(j).
\]
Therefore, for $j = j'', \ldots, j'' + \varphi(\omega+1)$, the following bounds hold:
\[
\bigl|W_{\omega+2}(j)\bigr| \geq \Bigl| \bigcup_{i=1}^{\omega} W_i(j') \Bigr| \geq {2^{\omega+2}}
\ ,
\]
where the last step follows from~\eqref{und}.
We conclude by setting $t_1 = j''$ and $t_2 = j'' + \varphi(\omega+1)$.
\end{proof}
\begin{lemma}
\label{iter}
Suppose that $[t_1, t_2]$ is an interval of size $\varphi(\omega-1)$, for some $1 \leq \omega\leq \log k$,
such that for every round $j \in [t_1, t_2]$, the following conditions hold:
\begin{itemize}
\item [\rm (a')] $|W_{\omega}(j)| \geq {2^{\omega}}$;
\item [\rm (b')] for $i>\omega$, $|W_{i}(j)| = 0$.
\end{itemize}
Then there exists an $\omega'$-balanced interval for some $\omega \leq \omega'\leq \log k$.
\end{lemma}
\begin{proof}
If $|W_{\omega}(j)| \leq {2^{\omega+2}}$ for every $j\in [t_1, t_2]$ then condition~(a) of Definition~\ref{bal} holds and there is nothing to prove.
Therefore, assume that there exists $j'\in [t_1, t_2]$ such that $|W_{\omega}(j')| > {2^{\omega+2}}$.
Observe that since at most $k$ stations can be activated, we must have $\omega < \log k - 2$.
Let $h\geq 1$ be an integer such that the following inequalities hold:
\[
2^{\omega+h+1} < |W_{\omega}(j')| \leq 2^{\omega+h+3}
\ .
\]
By Lemma~\ref{join}, there exists a round $j''\geq j'$ such that, for
$j = j'', \ldots, j'' + \varphi (\omega+h)$, the inclusion
\[
\bigcup_{i=1}^{\omega} W_{i}(j') \subseteq W_{\omega+h+1} (j)
\]
holds.
Therefore $|W_{\omega+h+1} (j)| \geq 2^{\omega+h+1}$ for $j = j'', \ldots, j'' + \varphi (\omega+h)$.
Let $t'_1 = j''$, $t'_2 = j'' + \varphi (\omega+h)$ and $\omega' = \omega+h+1$. We have found an interval
$[t'_1, t'_2]$ of size $\varphi (\omega'-1)$ such that for every round $j \in [t'_1, t'_2]$,
the following conditions hold:
\begin{itemize}
\item [1.] $|W_{\omega'}(j)| \geq {2^{\omega'}}$;
\item [2.] for $i>\omega'$, $|W_{i}(j)| = 0$.
\end{itemize}
If $|W_{\omega'} (j)| \leq 2^{\omega'+2}$ for every $j \in [t'_1, t'_2]$ then interval $[t'_1, t'_2]$ is $\omega'$-balanced and we are done; otherwise we repeat the same reasoning to find a new interval, with a strictly larger value of~$\omega'$.
Since the number of stations that can be woken up is bounded by $k$, the value of~$\omega'$ cannot grow beyond $\log k$, so there must exist
an interval $[\tau_1, \tau_2]$ of size $\varphi (\iota-1)$, for some $1 \leq \iota \leq \log k$, such that
$|W_{\iota} (j)| \leq 2^{\iota+2}$ for every $j \in [\tau_1, \tau_2]$.
\end{proof}
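The choice of the integer $h$ in the proof above always exists once $|W_{\omega}(j')| > 2^{\omega+2}$. The following snippet is only an illustrative sanity check of this arithmetic; the function name \texttt{find\_h} is ours, not part of the paper:

```python
def find_h(w_size, omega):
    """Search for an integer h >= 1 with
    2**(omega + h + 1) < w_size <= 2**(omega + h + 3),
    as required in the proof of the lemma."""
    for h in range(1, w_size.bit_length() + 1):
        if 2 ** (omega + h + 1) < w_size <= 2 ** (omega + h + 3):
            return h
    return None

# Whenever |W_omega(j')| > 2**(omega + 2), a suitable h exists.
for omega in range(1, 7):
    for w_size in range(2 ** (omega + 2) + 1, 2 ** (omega + 8)):
        assert find_h(w_size, omega) is not None
```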
\begin{lemma}
\label{proper}
There exists an $\omega$-balanced interval $[t_1, t_2]$,
for some $1\leq \omega \leq \log k$, such that
\begin{equation}\label{up}
\Bigl| \bigcup_{i=1}^{\omega-1} W_{i}(j) \Bigr| < 3\cdot |W_{\omega}(j)|
\end{equation}
for every $j \in [t_1,t_2]$.
\end{lemma}
\begin{proof}
Let us pick $\iota$ such that $1\leq \iota \leq \log k$ and $2^{\iota}\leq |X| \leq 2^{\iota+1}$.
By Lemma~\ref{zigzag}, all stations in $X$ are in section~$\iota$, for $j\in [\sigma_X + \gamma_{\iota-1}, \sigma_X + \gamma_{\iota}-1]$.
Therefore the inequality $|W_{\iota}(j)| \geq |X| \geq 2^{\iota}$ holds for every
$j\in [\sigma_X + \gamma_{\iota-1}, \sigma_X + \gamma_{\iota}-1]$.
Since no active station was woken up before $t_X$, we also have that
$|W_{i}(j)| = 0$ for $i>\iota$ and every $j\in [\sigma_X + \gamma_{\iota-1}, \sigma_X + \gamma_{\iota}-1]$.
By Lemma~\ref{iter}, there is a $\iota^{*}$-balanced interval $[\tau_1, \tau_2]$
for some $\iota^{*}\geq \iota$.
Let us assume that \eqref{up} does not hold, otherwise we are done. Let $j' \in [\tau_1, \tau_2]$
be such that $ \left| \bigcup_{i=1}^{\iota^*-1} W_{i}(j') \right| \geq 3\cdot |W_{\iota^*}(j')|$.
By Lemma~\ref{fat}, there exists an interval $[t_1, t_2]$ of size $\varphi(\iota^*+1)$
with $t_1 \geq j'$ such that
$|W_{\iota^*+2}(j)| \geq 2^{\iota^*+2}$ for every $j \in [t_1, t_2]$. Letting $\omega = \iota^*+2$, we have found an interval of size
$\varphi (\omega-1)$ such that $|W_{\omega}(j)| \geq 2^{\omega}$ for every round $j$ in it.
By Lemma~\ref{iter}, there is an $\omega'$-balanced interval for some $\omega'\geq \omega$.
This process can be iterated until a balanced interval that satisfies condition \eqref{up} is identified.
\end{proof}
\begin{lemma}\label{C64}
There exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log k$.
\end{lemma}
\begin{proof}
Let $[t_1,t_2]$ be an $\omega$-balanced interval for some $1\leq \omega\leq \log k$, whose existence is guaranteed by Lemma~\ref{proper}.
We can assume that every $j \in [t_1,t_2]$ satisfies condition~\eqref{up}, by that lemma.
Moreover, since the interval is $\omega$-balanced, we also have that $|W_{\omega}(j)| \leq 2^{\omega+2}$ for every $j \in [t_1,t_2]$, by condition (a) of Definition~\ref{bal}.
This yields
\begin{eqnarray}\label{small}
\Bigl| \bigcup_{i=1}^{\omega} W_i(j) \Bigr| &=& \Bigl| \bigcup_{i=1}^{\omega-1} W_i(j) \Bigr| + |W_{\omega}(j)| \nonumber \\
&<& 3|W_{\omega}(j)| + |W_{\omega}(j)| \nonumber \\
&\leq& 4\cdot 2^{\omega+2} = 2^{\omega+4},
\end{eqnarray}
for every $j \in [t_1,t_2]$.
Thus condition~(1) of Definition~\ref{d:light} is proved.
Next, we demonstrate condition~(2).
By condition (a) of Definition~\ref{bal}, we have that $|W_{\omega}(j)| \geq 2^{\omega}$ for every
$j \in [t_1, t_2]$.
Therefore the following inequalities hold for every $j \in [t_1, t_2]$:
\[
\Psi(j) \geq \frac{|W_{\omega}(j)|}{2^{\omega}} \geq 1
\ .
\]
It remains to prove that the upper bound of~\eqref{condition} holds for at least $\varphi(\omega-2)$
time steps.
Suppose, towards a contradiction, that the number of time steps $j$ in $[t_1, t_2]$ that satisfy the rightmost inequality of condition~\eqref{condition} is less than $\varphi(\omega-2)$.
Let $B \subseteq [t_1, t_2]$ be the set of balanced time steps $j \in [t_1, t_2]$ such that
condition~\eqref{condition} is not satisfied.
By the assumption, the following inequalities hold:
\begin{equation}\label{contra}
|B| > |[t_1,t_2]| - \varphi(\omega-2) \geq \frac{\varphi(\omega-2)}{2}.
\end{equation}
For any $j \in [t_1, t_2]$, let us consider
\[
U(j) = \bigcup_{i=1}^{\lg n} W_{i}(j) = \bigcup_{i=1}^{\omega} W_{i}(j)
\ ,
\]
where the second identity follows by condition (b) of Definition~\ref{bal}.
We have that $|W_i(j)| = 0$ for $\omega < i\leq \lg n$ and for every $j \in [t_1, t_2]$, because $[t_1, t_2]$ is $\omega$-balanced.
Hence all stations in $W(j)$ lie on sections $i \leq \omega$ for every $j\in [t_1, t_2]$.
By the specification of sections, a station is in section~$i$, for $1\leq i\leq \omega$, during $\varphi(i+1) - \varphi(i) \geq \varphi(i)$ time steps.
Therefore, for every $1\leq i\leq \omega$
\[
\varphi(i)\max_{t_1 \leq j\leq t_2} |U(j)| \geq \sum_{j=t_1}^{t_2} |W_{i}(j)|
\geq \sum_{j\in B} |W_{i}(j)|.
\]
We continue with the following estimates:
\begin{eqnarray*}
\sum_{i=1}^{\omega} \max_{t_1 \leq j\leq t_2} |U(j)| &\geq& \sum_{i=1}^{\omega} \sum_{j\in B} \frac{|W_{i}(j)|}{\varphi(i)} \\
& = & \sum_{j\in B} \sum_{i=1}^{\omega} \frac{|W_{i}(j)|}{\varphi(i)} \\
& = & \frac{1}{c\log n} \sum_{j\in B} \sum_{i=1}^{\omega} \frac{|W_{i}(j)|}{2^i\cdot i^{1/b}} \\
& \geq & \frac{1}{c\log n\log^{1/b}k} \sum_{j\in B} \sum_{i=1}^{\omega} \frac{|W_{i}(j)|}{2^i} \\
& > & \frac{1}{c\log n\log^{1/b}k} \sum_{j\in B} 128\cdot\omega \;\;\mbox{ by the assumption}\\
& = & \frac{|B|\cdot 128\cdot \omega}{c\log n\log^{1/b} k}.
\end{eqnarray*}
Therefore the following inequality holds:
\[
\max_{t_1 \leq j\leq t_2} |U(j)| > \frac{|B|\cdot 128}{c\log n\log^{1/b} k}.
\]
By \eqref{contra} we obtain that the following estimate holds:
\[
\max_{t_1 \leq j\leq t_2} |U(j)| > \frac{2^{\omega-3} c\log n\log^{1/b} k \cdot 128}{c\log n\log^{1/b} k} = 2^{\omega+4}.
\]
This implies that there exists $j'\in [t_1, t_2]$ such that the following inequality holds
\[
\Bigl| \bigcup_{i=1}^{\omega} W_{i}(j') \Bigr| > 2^{\omega+4},
\]
which contradicts \eqref{small}.
\end{proof}
\begin{lemma}
\label{isolating}
Let $\beta$ be a channel, for $1\leq \beta \leq b$.
Let every station be executing a randomized algorithm as represented by a regular randomized transmission array.
Let $[t_1, t_2]$ be a light interval.
The probability that there exists a station $w\in W(t)$ that is $\beta$-isolated at an arbitrary time step~$j$ such that $j\leq t$ and $t_1 \leq j\leq t_2$, is at least
\[
\frac{\Psi(j)}{\log^{\beta/b}k}\cdot 4^{-\frac{\Psi(j)}{\log^{\beta/b}k}}.
\]
\end{lemma}
\begin{proof}
Let $E_1(\beta, i,j)$ be the event ``there exists $w\in W_{i}(j)$ such that $T(\beta, w, j) = 1$'', and
let $E_2(\beta, i,j)$ be the event ``$T(\beta, u, j) = 0$ for all $l$ with $l\not = i$ and for every $u\in W_{l}(j)$.''
Let us say that $W(t)$ is $\beta$-isolated at time step $j\leq t$ if and only if there exists a station
$w\in W(t)$ that is $\beta$-isolated at time step $j$.
Clearly, $W(t)$ is $\beta$-isolated at time $j$ if and only if the following event occurs:
\[
\bigcup_{i=1}^{\log n} \left(E_1(\beta, i,j) \cap E_2(\beta, i,j)\right)
\ .
\]
We use the following estimate on probability:
\begin{eqnarray*}
\Pr ( E_1(\beta, i,j) )
&\geq&
\frac{|W_{i}(j)|}{2^{i}\lpbi} \left(1-\frac{1}{2^{i}\lpbi}\right)^{|W_{i}(j)|-1} \\
&\geq&
\frac{|W_{i}(j)|}{2^{i}\lpbi} \left( 1-\frac{1}{2^{i}\lpbi}\right)^{|W_{i}(j)|}
\end{eqnarray*}
and the following identity:
\[
\Pr (E_2(\beta, i,j) ) = \prod_{l=1,l \not = i}^{\log n} \left(1-\frac{1}{2^{l} l^{\beta/b} }\right)^{|W_{l}(j)|}.
\]
Events $E_1(\beta, i,j)$ and $E_2(\beta, i,j)$ are independent, so the following can be derived:
\begin{eqnarray*}
\Pr (E_1(\beta, i,j) \cap E_2(\beta, i,j) )
& \geq &
\frac{|W_{i}(j)|}{2^{i}\lpbi} \prod_{l = 1}^{\log n} \left(1-\frac{1}{2^{l}l^{\beta/b}}\right)^{|W_{l}(j)|}\\
&=&
\frac{|W_{i}(j)|}{2^{i}\lpbi} \prod_{l = 1}^{\log n} \left(1-\frac{1}{2^{l}l^{\beta/b}}\right)^{2^{l}l^{\beta/b} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} }\\
&\geq&
\frac{|W_{i}(j)|}{2^{i}\lpbi} \cdot {4}^{-\sum_{l = 1}^{\log n} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} }
\ .
\end{eqnarray*}
The events $E_1(\beta, i,j) \cap E_2(\beta, i,j)$ are mutually exclusive, for any fixed $j$ and all $1 \leq i \leq \lg n$.
Additionally, $W_i(j) = \emptyset$ for all $i > \log k$, as $[t_1, t_2]$ is a light interval.
Combining all this gives
\begin{eqnarray*}
\Pr \Bigl( \bigcup_{i=1}^{\lg n} \bigl(E_1(\beta, i,j) \cap E_2(\beta, i,j)\bigr) \Bigr)
&\geq&
\sum_{l = 1}^{\log n} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} \cdot {4}^{-\sum_{l = 1}^{\log n} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} } \\
& = &
\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} \cdot {4}^{-\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}} }
\ .
\end{eqnarray*}
Observe that the function $x\cdot 4^{-x}$ is monotonically decreasing in~$x$ for $x\geq 1$.
We apply this for
\[
x = \sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}}
\ .
\]
Observe that the following inequality holds:
\[
\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l} l^{\beta/b}}
< \frac{1}{\log^{\beta/b}k }\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l}}
\ .
\]
Combining these facts together justifies the following estimates
\begin{eqnarray*}
\Pr \Bigl( \bigcup_{i=1}^{\lg n} \bigl(E_1(\beta, i,j) \cap E_2(\beta, i,j)\bigr) \Bigr)
& \geq &
\frac{1}{\log^{\beta/b}k }\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l}} \cdot {4}^{-\frac{1}{\log^{\beta/b}k}\sum_{l = 1}^{\log k} \frac{|W_{l}(j)|}{2^{l}} } \\
& \geq &
\frac{\Psi(j)}{\log^{\beta/b}k} \cdot 4^{-\frac{\Psi(j)}{\log^{\beta/b}k}}
\ ,
\end{eqnarray*}
which completes the proof.
\end{proof}
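Two elementary analytic facts underpin the chain of estimates in this proof: $(1-1/m)^m \geq 4^{-1}$ for all $m\geq 2$ (used when the product is replaced by a power of $4$), and the monotone decrease of $x\cdot 4^{-x}$ for $x\geq 1$. A quick numeric check, purely illustrative and not part of the argument:

```python
f = lambda x: x * 4.0 ** (-x)

# (1 - 1/m)**m >= 1/4 for all m >= 2, with equality at m = 2.
for k in range(20, 2000):
    m = k / 10.0
    assert (1.0 - 1.0 / m) ** m >= 0.25 - 1e-12

# x * 4**(-x) decreases for x >= 1; its only critical point
# is at x = 1/ln 4 < 1.
xs = [1.0 + 0.01 * i for i in range(2000)]
assert all(f(a) > f(b) for a, b in zip(xs, xs[1:]))
```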
\begin{lemma}
\label{probability}
Let every station be executing a randomized algorithm as represented by a regular randomized transmission array.
There exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log k$, that
contains at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$ such that
the probability that there exists a station $w\in W(j)$ isolated at time $j$ is at least
\[
\frac{1}{4^{128} \cdot \omega^{1/b}} .
\]
\end{lemma}
\begin{proof}
By Lemma~\ref{C64}, there exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log k$.
There are at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$ with
$1 \leq \Psi(j) \leq 128\cdot \omega$.
Let $T$ be the set of such time steps~$j$.
We define sets $T_q$ as follows, for $1\le q\le b$:
\[
T_1 = \{ j \in T |\ 1 \leq \Psi(j) \leq 128\cdot\omega^{1/b} \}
\]
and, for $q = 2, \ldots, b$,
\[
T_{q} = \{ j \in T |\ 128\cdot\omega^{(q-1)/b} < \Psi(j) \leq 128\cdot\omega^{q/b} \}\ .
\]
It suffices to show that for every time step $t\in T$ there exists a channel $\beta$, for $1\leq \beta \leq b$, such that the probability of $\beta$-isolating a station $w\in W(t)$ at time $t$ is at least
\[
\frac{1}{4^{128} \cdot \omega^{1/b} }
\ .
\]
Let us consider a time step $t\in T$, so that $t \in T_q$ for some $1\leq q\leq b$.
By Lemma~\ref{isolating}, we have that if $t \in T_1$ then the probability that a station is $1$-isolated at time step~$t$ is at least
\[
\frac{\Psi(t)}{\lpbo}\cdot 4^{-\frac{\Psi(t)}{\lpbo}} > \frac{1}{\lpbo}\cdot 4^{-128\frac{\lpbo}{\lpbo}}
= \frac{1}{4^{128} \cdot \lpbo}
\ .
\]
If $t\in T_\beta$, for $2\leq \beta \leq b$, then the probability that a station is $\beta$-isolated at time step $t$ is at least
\[
\frac{\Psi(t)}{\lpbo}\cdot 4^{-\frac{\Psi(t)}{\lpbo}} > \frac{128}{\lpbo}\cdot 4^{-\frac{\Psi(t)}{\lpbo}}
\geq \frac{128}{4^{128} \cdot \lpbo}
\ ,
\]
which completes the proof.
\end{proof}
\begin{lemma}
\label{l:t2}
Let $s$ be the time at which the first station wakes up and
let $[t_1,t_2]$ be an $\omega$-light interval, for some $\omega\le \log k$.
Then $t_2\le s + \varphi(\omega+1)$.
\end{lemma}
\begin{proof}
The interval $[t_1,t_2]$ is $\omega$-balanced, by Definition~\ref{d:light}.
We have that $W_j(t_2)$ is empty for every~$j$ such that $j>\omega$, by Definitions~\ref{baltime} and~\ref{bal}(b).
This means that no station is in a section higher than~$\omega$, including those activated first, at time step~$s$.
A station leaves section~$\omega$ within $\varphi(\omega+1)$ time steps of its activation, because the sum of the lengths of the first $i$ sections is $\gamma_i = \varphi(i+1)$; hence $t_2 \le s + \varphi(\omega+1)$.
\end{proof}
\begin{lemma}
\label{l:fraction}
Let $c$ in the definition of $\varphi$ be bigger than some sufficiently large constant.
There exists a waking array of length $2cn\log n\log^{1/b} k$ such that, for any activation pattern, there is an integer $0\le\omega\le\log k$ with the following properties:
\begin{enumerate}
\item[\rm (1)]
There are at least $c\cdot 2^{\omega-259}\log n$ isolated positions by time $c\cdot 2^{\omega+1}\log n\log^{1/b} k$.
\item[\rm (2)]
At least $c\cdot 2^{\omega-259}\log n$ isolated positions occur at time steps with at least $2^{\omega}$ but no more than~$2^{\omega+4}$ activated stations.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider a regular randomized transmission array, as defined in Definition~\ref{def:array}.
Assume also a sufficiently large $c>0$ in the definition of $\varphi(i)= c\cdot 2^{i} \log n\cdot i^{1/b}$, for any $1\le i\le \log n$.
Consider an activation pattern, with the first activation at point zero.
By Lemma~\ref{probability}, there is an $\omega\le \log k$ and an $\omega$-light interval $[t_1,t_2]$ such that there are at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$ at which the probability that some station $w\in W(j)$ is isolated at time step~$j$ is at least~$1/(4^{128}\lpbou)$.
We choose the smallest such $\omega$ and associate the corresponding $\omega$-light interval $[t_1,t_2]$ with the activation pattern.
Note that we can partition all activation patterns into disjoint classes based on the intervals associated with them.
The expected number of isolated positions in the $\omega$-light interval $[t_1,t_2]$ is at least
\[
\varphi(\omega-2) \cdot \frac{1}{4^{128} \lpbou}
\geq
\varphi(\omega-2)\cdot \frac{1}{4^{128} (\omega-2)^{1/b} } \cdot\frac{(\omega-2)^{1/b}}{\lpbou}
\geq
c\cdot 2^{\omega -258} \log n
\ ,
\]
where we take $\omega\geq 3$.
By the Chernoff bound, the probability that the number of isolated positions is smaller than $c\cdot 2^{\omega -259} \log n$ is at most $\exp(-c\cdot 2^{\omega -261} \log n)$.
We want to apply the argument of the probabilistic method to the class of activation patterns associated with the $\omega$-light time interval $[t_1,t_2]$.
To this end, we need an estimate from above of the number of all such activation patterns.
By Lemma~\ref{l:t2}, the rightmost end~$t_2$ of this time interval is not bigger than
\[
\varphi(\omega+1) \le c\cdot 2^{\omega+1}\log n \log^{1/b} k\ .
\]
There are no more than $2^{\omega+4}$ stations activated by time step~$t_2$, because $[t_1,t_2]$ is $\omega$-light.
The number of different activation patterns in the class associated with the
$\omega$-light interval $[t_1,t_2]$ is at most $\binom{n}{2^{\omega+4}}(t_2)^{2^{\omega+4}}$.
This quantity can be estimated from above as
\begin{eqnarray*}
\left(\frac{ne}{2^{\omega+4}}\right)^{2^{\omega+4}}
\bigl(c\cdot 2^{\omega+1}\log n \log^{1/b} k \bigr)^{2^{\omega+4}}
&=&
\exp\bigl(2^{\omega+4}\cdot \ln ((ce/8)\cdot n \log n \log^{1/b} k)\bigr)\\
&\le&
\exp\left(3\ln c \cdot 2^{\omega+4}\cdot \log n \right)
\ .
\end{eqnarray*}
This bound is smaller than $\exp(c\cdot 2^{\omega -261} \log n - 4\log(2cn \log n \log^{1/b} k))$ for a sufficiently large constant~$c$.
We combine the following two bounds:
\begin{itemize}
\item
the upper bound $\exp(c\cdot 2^{\omega -261} \log n - 4\log(2cn \log n \log^{1/b} k))$ on the number of all activation patterns in the class associated with the $\omega$-light time interval $[t_1,t_2]$, with
\item
the upper bound $\exp(-c\cdot 2^{\omega -261} \log n)$ on the probability that for any fixed such activation pattern the number of isolated positions is smaller than $c\cdot 2^{\omega -259} \log n$.
\end{itemize}
We conclude that the probability of the event that there is an activation pattern associated with the $\omega$-light time interval $[t_1,t_2]$ with less than $c\cdot 2^{\omega -259} \log n$ isolated positions, is smaller than
\[
\exp(c\cdot 2^{\omega -261} \log n - 4\log(2cn \log n \log^{1/b} k)) \cdot
\exp(-c\cdot 2^{\omega -261} \log n)
=
\exp(- 4\log(2cn \log n \log^{1/b} k))
\ .
\]
Finally, observe that there are at most $2cn \log n \log^{1/b} k$ candidates for time step $t_1$
and also for $t_2$, by Lemma~\ref{l:t2} and the bound $\omega\le \log n$.
Hence, applying the union bound to the above events over all such feasible intervals, we obtain that the probability of the event that there is $\omega\le \log n$ and an activation pattern associated with some $\omega$-light time interval $[t_1,t_2]$ with less than $c\cdot 2^{\omega -259} \log n$ isolated positions is smaller than
\[
\exp(- 4\log(2cn \log n \log^{1/b} k)) \cdot (2cn \log n \log^{1/b} k)^2 < 1/n^2 \le 1
\ .
\]
By the probabilistic-method argument, there is an instantiation of the random array, which is a regular array, for which the complementary event holds.
Note that more than a $1-1/n^2$ fraction of the random arrays defined at the beginning of the proof satisfy the complementary event.
Hence, this array satisfies Claim (1) with respect to any activation pattern.
Claim (2) follows by noticing that these occurrences of isolated positions take place in the corresponding $\omega$-light interval.
The interval, by definition, has no more than $2^{\omega+4}$ stations activated by its end, and at least $2^{\omega}$ activated stations in the beginning.
This is because an $\omega$-light interval is by definition an $\omega$-balanced interval, according to Definitions~\ref{bal}, \ref{baltime} and~\ref{d:light}.
\end{proof}
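The Chernoff step in the proof above is the standard multiplicative bound $\Pr[X < \mu/2] \leq e^{-\mu/8}$ for a sum $X$ of independent indicator variables with expectation~$\mu$ (take $\delta = 1/2$ in $\Pr[X \le (1-\delta)\mu] \le e^{-\delta^2\mu/2}$). An exact-binomial spot check of this inequality, for illustration only; the parameter choices below are arbitrary:

```python
import math

def binom_lower_tail(n, p, k):
    """Exact P[Bin(n, p) <= k]."""
    return sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k + 1))

for n, p in [(200, 0.1), (400, 0.05), (1000, 0.02)]:
    mu = n * p
    # P[X < mu/2] is at most exp(-mu/8).
    tail = binom_lower_tail(n, p, int(mu / 2) - 1)
    assert tail <= math.exp(-mu / 8.0)
```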
\Paragraph{Proof completed.}
We conclude with a proof of Theorem~\ref{thm:general-deterministic}.
There is an isolated position for every activation pattern by time ${\mathcal O}(k\log n\log^{1/b} k)$; this follows from point (1) of Lemma~\ref{l:fraction}.
Indeed, otherwise the $\omega$-light interval, which is also $\omega$-balanced, would have at least $2^\omega > k$ stations activated, by Definitions~\ref{bal} and~\ref{baltime}, contradicting the assumption that at most $k$ stations are activated spontaneously.
Theorem~\ref{thm:general-deterministic} is thereby proved.
\Paragraph{Channels with random jamming.}
In the final part of this Section, we consider a model of a network in which channels may get jammed.
Assume that at each time step and on every channel a jamming error occurs with the probability~$p$, for $0\le p<1$, independently over time steps and channels.
When a channel is jammed then the feedback it provides to the stations is the same as if there were a collision on this channel.
The case $p=0$ is covered by Theorem~\ref{thm:general-deterministic}.
\begin{theorem}
\label{thm:randomized-jamming}
For a given error probability $0< p<1$, there exists a waking array of length ${\mathcal O}(\log^{-1}(\frac{1}{p})\,n\log n\log^{1/b} k)$ providing wake-up in ${\mathcal O}(\log^{-1}(\frac{1}{p}) \, k\log n\log^{1/b} k)$ time, for any number $k\le n$ of spontaneously activated stations, with a probability that is at least $1 - 1/\text{\emph{poly}}(n)$.
\end{theorem}
\begin{proof}
Let us set $c=c'\cdot\log^{-1}\frac{1}{p}$ for sufficiently large constant $c'$, and consider any activation pattern.
By Lemma~\ref{l:fraction}, at least $c\cdot 2^{\omega-259}\log n$ isolated positions occur by time $c\cdot 2^{\omega+1}\log n\log^{1/b} k$, and by that time no more than $2^{\omega+4}$ stations are activated.
Each such isolated position can be jammed independently with probability~$p$.
Therefore, the probability that all these positions are jammed, and thus no successful transmission
occurs by time
\[
c\cdot 2^{\omega+1}\log n\log^{1/b} k = {\mathcal O}\Bigl(\log^{-1}\Bigl(\frac{1}{p}\Bigr) \, k \log n\log^{1/b} k\Bigr)\ ,
\]
is at most
\[
p^{c\cdot 2^{\omega-259}\log n}
=
\exp\left(c'\cdot \log^{-1}\Bigl(\frac{1}{p}\Bigr) \cdot 2^{\omega-259}\log n\cdot \ln p\right)
\ .
\]
This is smaller than $1/\mbox{poly}(n)$ for sufficiently large constant $c'$.
Here we use the fact that $\frac{\ln p}{\log(1/p)}$ is negative for $0<p<1$.
When estimating the time of a successful wake-up, we relied on the fact that $2^{\omega}$, which is the lower bound on the number of activated stations by Lemma~\ref{l:fraction}(2), must be smaller than~$k$.
\end{proof}
\section{A Specialized Deterministic Algorithm}
\label{sec:large}
We give a deterministic algorithm that has a better time-performance bound than the one given in Theorem~\ref{thm:general-deterministic}.
The construction applies to networks with sufficiently many channels with respect to the number of nodes.
The main fact proved in this Section is as follows:
\begin{theorem}
\label{thm:many-channels}
If the numbers of channels~$b$ and nodes~$n$ satisfy $b>\log(128\,b\log n)$ then there exists a deterministic waking array ${\mathcal T}$ of length ${\mathcal O}(\frac{n}{b}\log n\log(b\log n))$ which, when used to instantiate the generic algorithm \textsc{Wake-Up}, produces an algorithm \textsc{Wake-Up}\,$({\mathcal T})$ that wakes up the network in time ${\mathcal O}(\frac{k}{b}\log n\log(b\log n))$, for up to $k\le n$ stations activated spontaneously.
\end{theorem}
The proof is by way of showing the existence of a waking array, as defined in Definition~\ref{schedule}, for a section length defined as $\varphi(i)=c\cdot (2^i/b) \lg n\log(128\,b\log n)$.
Note that Lemmas~\ref{zigzag} to~\ref{proper} as well as Lemma~\ref{l:t2} hold for the current specification of function $\varphi$, as their proofs do not refer to the value of this function.
The following Lemma corresponds to Lemma~\ref{C64}, which was proved for $\varphi(i)= c\cdot 2^i\cdot i^{1/b} \log n$, while now we prove an analogous statement for $\varphi(i)=c\cdot (2^i/b) \lg n\log(128\,b\log n)$.
\begin{lemma}
\label{C64-large}
There exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log n$.
\end{lemma}
\begin{proof}
Let $[t_1,t_2]$ be an $\omega$-balanced interval, which exists by Lemma~\ref{proper}.
By that very Lemma, we can assume that every $j \in [t_1,t_2]$ satisfies condition~\eqref{up}.
Moreover, since the interval is $\omega$-balanced, we also have that $|W_{\omega}(j)| \leq 2^{\omega+2}$ for every $j \in [t_1,t_2]$, by condition (a) of Definition~\ref{bal}.
We conclude with the following upper bound, for every $j \in [t_1,t_2]$:
\begin{eqnarray}\label{small-large}
\Bigl| \bigcup_{i=1}^{\omega} W_i(j) \Bigr| &=& \Bigl| \bigcup_{i=1}^{\omega-1} W_i(j) \Bigr| + |W_{\omega}(j)| \nonumber \\
&<& 3|W_{\omega}(j)| + |W_{\omega}(j)| \nonumber \\
&\leq& 4\cdot 2^{\omega+2} = 2^{\omega+4}
\ .
\end{eqnarray}
This proves condition~(1) of Definition~\ref{d:light}.
Next we prove condition~(2).
By condition~(a) of Definition~\ref{bal}, we know that $|W_{\omega}(j)| \geq 2^{\omega}$ for every
$j \in [t_1, t_2]$.
Therefore, the following bounds hold for every $j \in [t_1, t_2]$:
\[
\Psi(j) \geq \frac{|W_{\omega}(j)|}{2^{\omega}} \geq 1
\ .
\]
What remains to show is the upper bound of~\eqref{newcondition}.
Suppose, towards a contradiction, that the number of time steps $j$ in $[t_1, t_2]$ that satisfy
the rightmost inequality of condition~\eqref{newcondition} is less than $\varphi(\omega-2)$.
Let $B \subseteq [t_1, t_2]$ be the set of balanced time steps $j \in [t_1, t_2]$ such that
condition~\eqref{newcondition} is not satisfied.
By the assumption, the following is the case:
\begin{equation}\label{contra-large}
|B| > |[t_1,t_2]| - \varphi(\omega-2) \geq \frac{\varphi(\omega-2)}{2}.
\end{equation}
For any $j \in [t_1, t_2]$, let us consider
\[
U(j) = \bigcup_{i=1}^{\lg n} W_{i}(j) = \bigcup_{i=1}^{\omega} W_{i}(j)
\ ,
\]
where the second identity follows by condition~(b) of Definition~\ref{bal}.
By the specification of the array, any station belongs to section~$i$ during $\varphi(i+1) - \varphi(i) \geq \varphi(i)$ time steps, for $1\leq i\leq \lg n$.
Therefore we have the following bounds
\[
\varphi(i)\max_{t_1 \leq j\leq t_2} |U(j)| \geq \sum_{j=t_1}^{t_2} |W_{i}(j)|
\geq \sum_{j\in B} |W_{i}(j)|
\]
for every $1\leq i\leq \lg n$.
This in turn allows us to obtain the following bound:
\begin{eqnarray*}
\sum_{i=1}^{\lg n} \max_{t_1 \leq j\leq t_2} |U(j)| &\geq& \sum_{i=1}^{\lg n} \sum_{j\in B} \frac{|W_{i}(j)|}{\varphi(i)} \\
& = & \sum_{j\in B} \sum_{i=1}^{\lg n} \frac{|W_{i}(j)|}{\varphi(i)} \\
& = & \frac{1}{c(\lg n\log(128\,b\log n))/b} \sum_{j\in B} \sum_{i=1}^{\lg n} \frac{|W_{i}(j)|}{2^i} \\
& > & \frac{1}{c(\lg n\log(128\,b\log n))/b} \sum_{j\in B} 128\cdot\log n \\
& = & \frac{128 b |B|}{c\log(128\,b\log n)} \ .
\end{eqnarray*}
This gives the following estimate:
\[
\max_{t_1 \leq j\leq t_2} |U(j)| > \frac{128 b |B|}{c\lg n\log(128\,b\log n)} \ .
\]
By applying~\eqref{contra-large}, we obtain
\[
\max_{t_1 \leq j\leq t_2} |U(j)| > \frac{128 b \cdot c (2^{\omega-3}/b)\lg n\log(128\,b\log n)}{c\lg n\log(128\,b\log n)} = 2^{\omega+4} \ .
\]
This implies that there exists $j'\in [t_1, t_2]$ such that
\[
\Bigl| \bigcup_{i=1}^{\omega} W_{i}(j') \Bigr| > 2^{\omega+4} \ ,
\]
which contradicts \eqref{small-large}.
\end{proof}
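The final cancellation in this proof is pure arithmetic in the specialized section length $\varphi(i) = c\,(2^i/b)\lg n\log(128\,b\log n)$: the quantity $128\,b\cdot\varphi(\omega-2)/2$ divided by $c\lg n\log(128\,b\log n)$ collapses to $2^{\omega+4}$. A numeric spot check, for illustration only (logarithms taken base $2$ by assumption; parameter choices are arbitrary):

```python
import math

def phi(i, c, n, b):
    # Specialized section length used in this section of the paper.
    return c * (2.0 ** i / b) * math.log2(n) * math.log2(128 * b * math.log2(n))

for c, n, b, omega in [(4, 2 ** 10, 8, 5), (2, 2 ** 16, 16, 9)]:
    denom = c * math.log2(n) * math.log2(128 * b * math.log2(n))
    lhs = 128 * b * (phi(omega - 2, c, n, b) / 2.0) / denom
    assert abs(lhs - 2.0 ** (omega + 4)) < 1e-6
```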
\begin{lemma}
\label{isolating-large}
Let $\beta$ be a channel, for $1\leq \beta \leq b$.
Let every station be executing the randomized algorithm as represented by a modified randomized transmission array.
The probability that there exists a station $w\in W(t)$ that is $\beta$-isolated at any time step $j\leq t$ is at least
\[
\Psi(j)\cdot b\cdot 2^{-\beta^*} \cdot 4^{-\Psi(j)\cdot b\cdot 2^{-\beta^*}} \ .
\]
\end{lemma}
\begin{proof}
Let $E_1(\beta, i,j)$ be the event ``there exists $w\in W_{i}(j)$ such that $T(\beta, w, j) = 1$'', and
let $E_2(\beta, i,j)$ be the event ``$T(\beta, u, j) = 0$ for all $l$ with $l\not = i$ and for every $u\in W_{l}(j)$.''
Let us say that $W(t)$ is $\beta$-isolated at time step $j\leq t$ if and only if there exists a station
$w\in W(t)$ that is $\beta$-isolated at time step $j$.
Clearly, $W(t)$ is $\beta$-isolated at time $j$ if and only if the following event occurs:
\[
\bigcup_{i=1}^{\log n} \bigl(E_1(\beta, i,j) \cap E_2(\beta, i,j)\bigr)
\ .
\]
We use the following inequalities
\begin{eqnarray*}
\Pr (E_1(\beta, i,j) )
&\geq&
|W_{i}(j)|\cdot b \cdot 2^{-i -\beta^*} \bigl(1-b \cdot 2^{-i -\beta^*}\bigr)^{|W_{i}(j)|-1} \\
&\geq &
|W_{i}(j)|\cdot b \cdot 2^{-i -\beta^*} \bigl(1-b \cdot 2^{-i -\beta^*}\bigr)^{|W_{i}(j)|}
\end{eqnarray*}
combined with the following identity
\[
\Pr (E_2(\beta, i,j) ) = \prod_{l=1,l \not = i}^{\log n} \left(1-b \cdot 2^{-l -\beta^*}\right)^{|W_{l}(j)|} \ .
\]
Events $E_1(\beta, i,j)$ and $E_2(\beta, i,j)$ are independent.
It follows that
\begin{eqnarray*}
\Pr (E_1(\beta, i,j) \cap E_2(\beta, i,j) )
& \geq &
|W_{i}(j)|\cdot b \cdot 2^{-i -\beta^*} \prod_{l = 1}^{\log n}
\left(1-b \cdot 2^{-l -\beta^*}\right)^{|W_{l}(j)|}
\\
&=& |W_{i}(j)|\cdot b \cdot 2^{-i -\beta^*} \prod_{l = 1}^{\log n}
\left(1-b \cdot 2^{-l -\beta^*}\right)^{(2^{l +\beta^*}/b) \cdot b|W_{l}(j)|2^{-l-\beta^*}}
\\
&\geq& |W_{i}(j)|\cdot b \cdot 2^{-i -\beta^*} \cdot
{4}^{-\sum_{l = 1}^{\log n} (b|W_{l}(j)|2^{-l-\beta^*})} \\
&=& \frac{|W_{i}(j)|}{2^{i}}\cdot (b\cdot 2^{-\beta^*}) \cdot{4}^{-\Psi(j)\cdot (b\cdot 2^{-\beta^*})} \ .
\end{eqnarray*}
To conclude, observe that the events $E_1(\beta, i,j) \cap E_2(\beta, i,j)$ are mutually exclusive, for any~$j$ and $1 \leq i \leq \lg n$.
\end{proof}
\begin{lemma}
\label{probability-large}
Let every station be executing the randomized algorithm as represented by a modified randomized transmission array.
There exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log n$, that
contains at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$ such that in each of these steps the number of channels~$\beta$, with the probability of a $\beta$-isolated station being at least $1/8$, is at least $\bigl\lfloor\frac{b}{\log(128\,b\log n)}\bigr\rfloor$.
\end{lemma}
\begin{proof}
By Lemma~\ref{C64-large}, there exists an $\omega$-light interval $[t_1, t_2]$, for some $1\leq \omega \leq \log n$, and there are at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$ such
that the inequalities $1 \leq \Psi(j) \leq 128\cdot \lg n$ hold.
Let $T$ be the set of such time steps.
Let us define a partition of $T$ into sets
\[
T_{q} = \{ j \in T |\ 2^{q} < b\cdot \Psi(j) \leq 2^{q+1} \}
\ ,
\]
for $0\le q< \log(128\,b\log n)$.
It suffices to show that for every time step $t\in T$ there exist at least $\bigl\lfloor\frac{b}{\log(128\,b\log n)}\bigr\rfloor$ channels $\beta$, for $1\leq \beta \leq b$, such that the probability of $\beta$-isolating a station $w\in W(t)$ at time $t$ is at least~$1/8$.
Let us take any time step $t\in T$, so that $t \in T_q$ for some $0\leq q<\log(128\,b\log n)$.
By Lemma~\ref{isolating-large}, if $t \in T_q$ then for each $\beta$ such that $\beta^*=q$,
the probability that a station is $\beta$-isolated at time step~$t$ is at least
\[
\Psi(t)\cdot b\cdot 2^{-\beta^*} \cdot 4^{-\Psi(t)\cdot b\cdot 2^{-\beta^*}}
\geq
\left(2^{q+1}\cdot 2^{-q}\right) \cdot 4^{-2^{q+1}\cdot 2^{-q}}
=
1/8
\ ,
\]
where we use the fact that the function $x\cdot 4^{-x}$ is monotonically decreasing in~$x$ for $x\geq 1$.
To conclude, notice that there are at least $\bigl\lfloor\frac{b}{\log(128\,b\log n)}\bigr\rfloor$ channels~$\beta$ satisfying $\beta^*=q$, for any given $q$ such that $0\le q< \log(128\,b\log n)$.
\end{proof}
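The constant $1/8$ obtained above is exactly the minimum of $x\cdot 4^{-x}$ over the range $x = \Psi(t)\,b\,2^{-\beta^*} \in (1,2]$, attained at $x=2$. An illustrative numeric check, not part of the proof:

```python
f = lambda x: x * 4.0 ** (-x)

# Sample the half-open interval (1, 2]; the minimum sits at x = 2.
xs = [1.0 + 0.001 * i for i in range(1, 1001)]
assert min(f(x) for x in xs) >= 1.0 / 8.0 - 1e-12
assert f(2.0) == 2.0 * 4.0 ** (-2.0) == 0.125
```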
\begin{lemma}
\label{l:fraction-large}
Let $c$ in the definition of $\varphi$ be bigger than some sufficiently large constant.
There exists a waking array of length $\frac{2cn}{b}\log n\log(128\,b\log n)$
such that for any activation pattern, there is an integer $0\le\omega\le\log n$ with the following properties:
\begin{enumerate}
\item[\rm (1)]
There are at least $c\cdot 2^{\omega-6}\log n$ isolated positions by time step $c\cdot (2^{\omega+1}/b)\log n\log(128\,b\log n)$,
\item[\rm(2)]
These positions occur at time steps with at least $2^{\omega}$ but no more than $2^{\omega+4}$ activated stations.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us consider a modified randomized transmission array.
Let us assume that $c>0$ in the specification
\[
\varphi(i)= c\cdot (2^{i}/b) \log n \log(128\,b\log n)
\]
\]
is sufficiently large, for any $1\le i\le \log n$.
Observe that the length of the schedules is not bigger than
\[
\varphi(i+1) \le \frac{2cn}{b}\log n\log(128\,b\log n)\ .
\]
Let us consider an activation pattern with the first activation at time step~$0$.
By Lemma~\ref{probability-large}, there is an $\omega\le \log n$ and an $\omega$-light interval $[t_1,t_2]$ such that
there are at least $\varphi(\omega-2)$ time steps $j\in [t_1, t_2]$, each of which has
at least $\bigl\lfloor\frac{b}{\log(128\,b\log n)}\bigr\rfloor$ channels $\beta$ with
the probability of $\beta$-isolation of a station $w\in W(j)$ being at least $1/8$.
We choose the smallest such~$\omega$ and associate the corresponding $\omega$-light interval $[t_1,t_2]$
with the activation pattern.
We can clearly partition all activation patterns into disjoint classes based on the intervals associated
with them.
Observe that the expected number of isolated positions in the $\omega$-light interval $[t_1,t_2]$ is at least
\[
\varphi(\omega-2) \cdot \left\lfloor\frac{b}{\log(128\,b\log n)}\right\rfloor \cdot \frac{1}{8}
=
c\cdot 2^{\omega -5} \log n
\ .
\]
By the Chernoff bound, the probability that the number of isolated positions is smaller than
$c\cdot 2^{\omega -6} \log n$ is at most $\exp(-c\cdot 2^{\omega -8} \log n)$.
In order to apply the probabilistic-method argument to the class of activation patterns associated with
the $\omega$-light time interval $[t_1,t_2]$, it remains to estimate from above the number of all such activation patterns.
By Lemma~\ref{l:t2}, the rightmost end~$t_2$ of this time interval is not bigger than
\[
\varphi(\omega+1) \le c\cdot (2^{\omega+1}/b)\log n\log(128\,b\log n)\ .
\]
Next observe that since $[t_1,t_2]$ is $\omega$-light, there are no more than
$2^{\omega+4}$ stations activated by time step~$t_2$.
Hence, the number of different activation patterns in the class associated with the
$\omega$-light interval $[t_1,t_2]$
is at most
\begin{eqnarray*}
{n \choose 2^{\omega+4}} \left(t_2\right)^{2^{\omega+4}}
&\le&
\left( \frac{ne}{2^{\omega+4}}\right)^{2^{\omega+4}} \left(c\cdot (2^{\omega+1}/b)\log n \log(128b\log n) \right)^{2^{\omega+4}} \\
&=&
\exp\left(2^{\omega+4}\cdot \ln \left((ce/8)\cdot (n/b)\log n\log(128b\log n)\right)\right)\\
&\le&
\exp\left(3\ln c \cdot 2^{\omega+4}\cdot \log n\right)
\ ,
\end{eqnarray*}
which is smaller than $\exp(c\cdot 2^{\omega -8} \log n - 4\log(2cn\log n))$, for a sufficiently large constant~$c$.
Next we combine the following two bounds:
\begin{itemize}
\item
the upper bound $\exp(c\cdot 2^{\omega -8} \log n - 4\log(2cn\log n))$ on the number of all activation patterns in the class associated with the $\omega$-light time interval $[t_1,t_2]$, with
\item
the upper bound $\exp(-c\cdot 2^{\omega -8} \log n)$ on the probability that, for any fixed such activation pattern, the number of isolated positions is smaller than $c\cdot 2^{\omega -6} \log n$.
\end{itemize}
This allows us to conclude that the probability of the event that there is an activation pattern associated with the $\omega$-light time interval $[t_1,t_2]$ with less than $c\cdot 2^{\omega -6} \log n$ isolated positions is smaller than
\[
\exp(c\cdot 2^{\omega -8} \log n - 4\log(2cn\log n)) \cdot
\exp(-c\cdot 2^{\omega -8} \log n)
=
\exp(- 4\log(2cn\log n))
\ .
\]
There are at most $2cn\log n$ candidates for time step $t_1$ and also for $t_2$, by Lemma~\ref{l:t2} applied to $\varphi(\omega)=c(2^\omega/b)\log n\log(128\,b\log n)$
and the bound $\omega\le \log n$.
We apply the union bound to these events over all such feasible intervals.
It follows that the probability that there are $\omega\le \log n$ and an activation pattern associated with some $\omega$-light time interval $[t_1,t_2]$ with less than $c\cdot 2^{\omega -6} \log n$ isolated positions is smaller than
\[
\exp(- 4\log(2cn\log n))
\cdot
(2cn\log n)^2
<
1/n^2 \leq 1
\ .
\]
Thus, by the probabilistic-method argument, there is an instantiation of the random array, which is a deterministic array, for which the complementary event holds.
Hence, this array satisfies claim~(1) of this Lemma with respect to any activation pattern.
Claim~(2) follows when one observes that these occurrences of isolated positions
take place in the corresponding $\omega$-light interval, which by definition has no more than
$2^{\omega+4}$ stations activated by its end, and at least $2^{\omega}$ activated stations
in its beginning.
This is because an $\omega$-light interval is by definition an $\omega$-balanced interval,
according to Definitions~\ref{bal}, \ref{baltime} and~\ref{d:light}.
\end{proof}
\Paragraph{Proof completed.}
We conclude with the proof of Theorem~\ref{thm:many-channels}.
There is an isolated position by time ${\mathcal O}(\frac{k}{b}\log n\log(128b\log n))$ for every activation pattern.
This follows from point (1) of Lemma~\ref{l:fraction-large}.
Indeed, otherwise the $\omega$-light interval, which is also $\omega$-balanced, would have
at least $2^\omega > k$ stations activated, by Definitions~\ref{bal} and~\ref{baltime}, contrary to the assumptions.
\Paragraph{Channels with random jamming.}
We also consider a model of random jamming of channels for the case of sufficiently many channels.
Let us assume that at each time step and on every channel, a jamming error occurs with probability~$p$, where $0\le p<1$, independently over time steps and channels.
The case $p=0$ is covered by Theorem~\ref{thm:general-deterministic}.
\begin{theorem}
\label{matrix-failures-large}
For a given error probability~$p$, where $0< p<1$, if the numbers of channels~$b$ and nodes~$n$ satisfy the inequality $b>\log(128\,b\log n)$, then there exists a waking array of length ${\mathcal O}(\log^{-1}(\frac{1}{p})\, \frac{n}{b}\log n\log(b\log n))$ providing wake-up in time ${\mathcal O}(\log^{-1}(\frac{1}{p}) \, \frac{k}{b} \log n\log(b\log n))$, for any number $k\le n$ of spontaneously activated stations, with a probability that is at least $1 - 1/\text{\emph{poly}}(n)$.
\end{theorem}
\begin{proof}
Let us set $c=c'\cdot\log^{-1}\frac{1}{p}$, for a sufficiently large constant $c'$, and consider any activation pattern.
By Lemma~\ref{l:fraction-large}, at least $c\cdot 2^{\omega-6}\log n$ isolated positions occur by time step $c\cdot (2^{\omega+1}/b)\log n\log(128b\log n)$, and by that time step no more than $2^{\omega+4}$ stations get activated.
Each such isolated position is jammed independently with probability~$p$.
Therefore, the probability that all these positions are jammed, and thus no successful transmission
occurs by time
\[
c\cdot (2^{\omega+1}/b)\log n\log(128b\log n) = {\mathcal O}\Bigl(\log^{-1}\bigl(\tfrac{1}{p}\bigr) \, \frac{k}{b}\log n\log(b\log n)\Bigr)
\ ,
\]
is at most
\[
p^{c\cdot 2^{\omega-6}\log n}
=
\exp\left(c'\cdot \log^{-1}\bigl(\tfrac{1}{p}\bigr) \cdot 2^{\omega-6}\log n\cdot \ln p\right)
\ ,
\]
which is smaller than $1/\mbox{poly}(n)$ for a sufficiently large constant $c'$.
Here we use the fact that $\frac{\ln p}{\log(1/p)}$ is a negative constant, independent of~$p$, for $0<p<1$.
When bounding the time step of a successful wake-up to occur, we rely on the fact that $2^{\omega}$, which is the lower bound on the number of activated stations by Lemma~\ref{l:fraction-large}(2), must be smaller than~$k$.
\end{proof}
\section{Conclusion}
We considered waking up a multi-channel single-hop radio network by deterministic and randomized algorithms.
To assess optimality of a solution, we gave a lower bound $\frac{k}{4b}\lg \frac{n}{k} - \frac{k+1}{b}$ on the time of any deterministic algorithm, which holds when both $k$ and $n$ are known.
This lower bound can be beaten by randomized algorithms when $k$ is known, as we demonstrated that a randomized algorithm exists that refers to $k$ and works in time ${\mathcal O}(k^{1/b}\ln \frac{1}{\varepsilon})$ with a large probability.
This shows a separation between the best performance bounds of randomized and deterministic wake-up algorithms when the parameter~$k$ is known, even for just two channels.
We may interpret the parameters $k$ and $b$ as representing scalability of an algorithmic solution, by the presence of factors $k$ and $1/b$ in time-performance bounds.
This could mean that an algorithm that scales perfectly with $k$ and $b$ has time performance of the form ${\mathcal O}(\frac{k}{b} \cdot f(n,b,k))$, for some function~$f(n,b,k)$ such that $f(n_0,b,k)={\mathcal O}(1)$ for any constant $n_0$ and the variables $b$ and $k$ growing unbounded.
Deterministic algorithms given in this paper are developed for the case when $n$ is known but $k$ is unknown.
Our general solution operates in time ${\mathcal O}(k\log^{1/b} k\log n)$.
This means that $k\log^{1/b} k$ reflects scalability with $k$, which is close to linear in~$k$, while the scalability with~$b$ is poor, as $1/b$ is not a factor in the performance bound at all.
When sufficiently many channels are available, we show that a multi-channel network can be woken up deterministically in time ${\mathcal O}(\frac{k}{b}\log n\log(b\log n))$.
The respective algorithm is effective in two ways.
The first one is about time performance: the algorithm misses time optimality by at most a poly-logarithmic factor that is ${\mathcal O}(\log n(\log b +\log\log n))$, because of the lower bound~$\frac{k}{4b}\lg \frac{n}{k} - \frac{k+1}{b}$.
The second one is about scalability: the algorithm scales perfectly with the unknown~$k$, and also its scalability with~$b$ is~$\frac{\log b}{b}$, so it misses optimality in that respect by the factor of~$\log b$ only.
\noindent
\textbf{Acknowledgement:}
We want to thank the anonymous reviewers of a manuscript that resulted in publishing~\cite{ChlebusDK16} for comments that led to improving the submission; in particular, for pointing out that the lower bound from Clementi et al.~\cite{ClementiMS03} could be used to structure a short argument for a lower bound for multi-channel networks.
\end{document}
\begin{document}
\begin{abstract}
Given a finite set $\{S_1,\dots,S_k \}$ of substitution maps acting on a certain finite number (up to translations) of tiles in $\mathbb{R}^d$, we consider the multi-substitution tiling space associated to each sequence $\bar a\in \{1,\ldots,k\}^{\mathbb{N}}$. The action by translations on such spaces gives rise to uniquely ergodic dynamical systems.
In this paper we investigate the rate of convergence of ergodic limits of patch frequencies and prove that these limits vary continuously with $\bar a$.
\end{abstract}
\subjclass[2010]{37A15, 37A25, 52C22}
\maketitle
\section{Introduction}
Roughly speaking, a \emph{tiling} of $\mathbb{R}^d$ is an arrangement of tiles that covers $\mathbb{R}^d$ without overlapping. An important class of tilings is that of \emph{self-similar tilings}.
In order to construct a self-similar tiling $x$, one starts with a finite number (up to translation) of tiles and a \emph{substitution map} $S$ that determines how to inflate and subdivide these tiles into certain configurations of the same tiles. Many examples can be found in \cite{F,GS}. The \emph{substitution tiling space} $X_S$ is the closure of all the translations of $x$ in an appropriate metric, with respect to which $X_S$ is compact and the group $\mathbb{R}^d$ acts continuously on $X_S$ by translations, defining a \emph{substitution dynamical system}. The ergodic and spectral properties of such dynamical systems were studied in detail by Solomyak \cite{Sol}.
A substitution tiling space $X_S$ is then associated to a hierarchy. The zero level of this hierarchy is constituted by the initial set of tiles, and level $i>0$ is constituted by the patches of tiles obtained from those of level $i-1$ by applying the substitution map $S$. Recently, Frank and Sadun \cite{FS} have introduced a framework for handling general hierarchical (\emph{fusion}) tiling spaces, where the procedure for obtaining patches of level $i$ from those of level $i-1$ is not necessarily an ``inflate-subdivide" procedure and can depend on $i$. However,
many of the ergodic and spectral properties available for substitution dynamical systems are hard to achieve in such generality.
In the present paper we deal with \emph{multi-substitution tiling spaces}, also referred to in the literature as \emph{mixed substitution} tiling spaces \cite{GM} or \emph{$S$-adic systems} \cite{Du,Fe}. They form a particular class of hierarchical tiling spaces which includes the substitution tiling spaces. A multi-substitution tiling space is determined by a finite number (up to translation) of tiles, a finite set $\mathcal{S}=\{S_1,\dots,S_k \}$ of substitution maps acting on these tiles and a sequence $\bar a=(a_1,a_2,\ldots)$ in $\Sigma:=\{1,\ldots,k\}^{\mathbb{N}}$. In the corresponding hierarchy, the patches of level $i$ are obtained from those of level $i-1$ by applying the substitution map $S_{a_{i}}$. The continuous action of $\mathbb{R}^d$ by translations on a multi-substitution tiling space $X_{\bar{a}}(\mathcal{S})$ defines a uniquely ergodic dynamical system. The unique invariant measure $\mu_{\bar a,\mathcal{S}}$ is closely related with the patch frequencies in tilings of $X_{\bar{a}}(\mathcal{S})$. In this paper we prove that, in the usual topology of $\Sigma$, the ergodic limits of patch frequencies vary continuously with $\bar a$ (theorem \ref{ss}). Moreover, we prove that
the convergence of patch frequencies to their ergodic limits is locally uniform in some open subset of $\Sigma$ (theorem \ref{cu}).
\section{Tilings and Substitutions}
We start by recalling some standard definitions and results concerning substitution tiling spaces. For details, motivation and examples see \cite{F,GS,Rob,Sol}. We introduce also the concept of \emph{strongly recognizable} substitution. As we will see later, such substitutions provide isomorphisms between ergodic dynamical systems associated to certain multi-substitution tiling spaces.
Consider $\mathbb{R}^d$ with its usual Euclidean norm $\|\cdot\|$ and write $B_r=\{\vec v\in\mathbb{R}^d:\,\|\vec v\|\leq r\}$. A set $D\subset \mathbb{R}^d$ is called a \emph{tile} if it is compact, connected and equal to the closure of its interior. A \emph{patch} is a collection $x=\{D_i\}_{i\in I}$ of tiles such that $D^{\circ}_i\cap D^{\circ}_j=\emptyset$, for all $i,j\in I$ with $i\neq j$. The \emph{support} of $x$ is defined by $\mathrm{supp}(x):=\bigcup_{i\in I}D_i$.
If $\mathrm{supp}(x)=\mathbb{R}^d$, we say that $x$ is a \emph{tiling} of $\mathbb{R}^d$. When a patch has a single tile $D$, we identify this patch with the corresponding tile. Given a patch $x=\{D_i\}_{i\in I}$ and $\vec{t}\in\mathbb{R}^d$, $\vec{t}+x:=\{\vec{t}+D_i\}_{i\in I}$ is another patch. In particular, if $x$ is a tiling of $\mathbb{R}^d$, $\vec{t}+x$ is another tiling of $\mathbb{R}^d$. Hence we have an action of $\mathbb{R}^d$ on the space of all tilings of $\mathbb{R}^d$ by translations, which we denote by $T$.
Two patches $x$ and $x'$ are said to be \emph{equivalent} if $x'=\vec{t}+x$ for some $\vec{t}\in\mathbb{R}^d$. We denote by $[x]$ the equivalence class of $x$.
Let $X$ be a space of tilings of $\mathbb{R}^d$ invariant by $T$
and $\mathcal{P}^N(X)$ be the set of all patches $x'=\{D_i\}_{i\in I}$ such that $|I|=N$ and $x'\subset x$ for some $x\in X$. We denote by $\mathcal{T}^N(X)$ the set of equivalence classes with representatives in $\mathcal{P}^N(X)$. These representatives are called $N$-\emph{protopatches} of $X$. The $1$-protopatches are more usually called \emph{prototiles}. The tiling space $X$ has \emph{finite local complexity} if $\mathcal{T}^2(X)$ is finite. Equivalently, $\mathcal{T}^N(X)$ is finite for each $N$.
If $K\subset \mathbb{R}^d$ is compact and $x\in X$, we denote by $x[[K]]$ the set of all patches $x'\subset x$ with bounded support satisfying $K\subseteq \mathrm{supp}( x')$.
For $x,y\in X$, we set
\begin{align*}
\nonumber d_T(x,y)=\inf\Big\{\{\sqrt{2}/2\}\cup&\{0<r<\sqrt{2}/2: \,\textrm{there exist $x'\in x[[B_{1/r}]]$, $\,y'\in y[[B_{1/r}]],$}\\ &\textrm{and $\vec{t}\in \mathbb{R}^d$ with $\|\vec{t}\|\leq r$ and $\vec{t}+x'=y'$}\}\Big\}.\label{distance}
\end{align*}
\begin{thm}\cite{Rob,Sol}
$(X,d_T)$ is a complete metric space. Moreover, if $X$ has finite local complexity, then $(X,d_T)$ is compact and the action $T$ is continuous.
\end{thm}
From now on we assume that $X$ is equipped with the metric $d_T$ and that $X$ has finite local complexity.
\begin{rem}
The above equivalence relation between patches and the corresponding definition of distance could be defined with respect to rather general ``actions" of groups on patches (see \cite{PV}). However, the metric $d_T$ is adequate to the purposes of this paper since we shall only be concerned with the dynamics associated to the action $T$.
\end{rem}
A \emph{(self-similar) substitution} is a map $S:\mathcal{P}^1(X)\to\mathcal{P}(X):=\bigcup_N\mathcal{P}^N(X)$ such that:
\begin{itemize}
\item[(S$_1$)] there is $\lambda>1$ (the \emph{dilatation factor} of $S$) such that
$\mathrm{supp}(S(P))=\lambda \mathrm{supp}(P)$ for all $P\in\mathcal{P}^1(X)$;
\item[(S$_2$)] if $P=\vec{t}+Q$ then $S(P)=\lambda \vec{t}+S(Q)$.
\end{itemize}
Take a finite number of (non-equivalent) prototiles $\{D_1,\ldots,D_l\}$ of $X$ such that $$\mathcal{T}^1(X)=\{[D_1],\ldots, [D_l]\}.$$ The \emph{structure matrix} $A_S$ associated to the substitution $S$ is the $l\times l$ matrix with entries $A_{ij}$ equal to the number of tiles equivalent to $D_i$ that appear in $S(D_j)$. If $A_S^m>0$ for some $m>0$, $S$ is said to be \emph{primitive}. In the particular case $m=1$, $S$ is \emph{strongly primitive}.
Given a patch $x=\{D_i\}_{i \in I}$ with $D_i\in \mathcal{P}^1(X)$, we define the patch $S(x):=\bigcup_{i\in I} S(D_i)$. Assume that the substitution $S$ can be extended to maps $S: \mathcal{P}(X)\to \mathcal{P}(X)$ and $S:X\to X$. In this case, take a tile $D\in \mathcal{P}^1(X)$ and define inductively the following sequence of patches in $\mathcal{P}(X)$: $x_1=D$, and $x_k=S(x_{k-1})$ for $k>1$. Consider the closed (hence compact) tiling space $X_S\subseteq X$, commonly known as the \emph{substitution tiling space} associated to $S$, defined by: a tiling $x\in X$ belongs to $X_S$ if, and only if, for any finite patch $x'\subset x$ there exist $k>0$ and a vector $\vec{t}\in \mathbb{R}^d$ such that $\vec{t}+x'\subseteq x_k$. We have:
\begin{prop}\cite{Rob,Sol} Suppose that $S$ is primitive. Then $X_S\neq \emptyset$, $S(X_S)\subseteq X_S$ and $X_S$ is independent of the initial tile $D\in\mathcal{P}^1(X)$.
\end{prop}
The Perron-Frobenius (PF) theorem for non-negative matrices is a crucial tool for the study of substitution tiling spaces:
\begin{thm}\cite{Rue}
Let $A\geq 0$ be a real square matrix with $A^m>0$ for some $m>0$. Then there is a simple positive eigenvalue $\omega >0$ of $A$ with $\omega>|\omega'|$ for all other eigenvalues $\omega'$. Moreover, there exist eigenvectors $\vec p$ and $\vec q$ corresponding to $\omega$ for $A$ and $A^T$, respectively, such that $\vec p\cdot \vec q=1$ and $\vec p,\vec q >0$. In this case, for any vector $\vec v$
$$\lim_n \frac{A^n\vec v}{\omega^n}=(\vec q\cdot \vec v)\vec p.$$
\end{thm}
The eigenvalue $\omega >0$ is called the \emph{PF-eigenvalue} of $A$. The eigenvectors $\vec p >0$ and $\vec q>0$ are called the \emph{right PF-eigenvector} and \emph{left PF-eigenvector} of $A$, respectively.
In general, if $S$ is a substitution acting on a set of prototiles $\{D_1,\ldots,D_l\}$ with Euclidean volumes $V_1,\ldots,V_l$, the vector $\vec q=(V_1,\ldots,V_l)$ is a left eigenvector of $A_S$ associated to the eigenvalue $\lambda^d$. For primitive substitutions, $\vec q$ and $\omega=\lambda^d$ are precisely the left PF-eigenvector and the corresponding PF-eigenvalue of $A_S$, respectively.
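As a standard illustration (not used elsewhere in this paper), consider the one-dimensional Fibonacci substitution acting on the prototiles $D_1=[0,\lambda]$ and $D_2=[0,1]$, where $\lambda=(1+\sqrt 5)/2$, defined by $S(D_1)=\{D_1,\lambda+D_2\}$ and $S(D_2)=\{D_1\}$. Its structure matrix is
\[
A_S=\begin{pmatrix}1&1\\1&0\end{pmatrix},
\]
which satisfies $A_S^2>0$, so $S$ is primitive. Since $d=1$, the PF-eigenvalue is $\omega=\lambda^d=\lambda$, and indeed $(\lambda,1)A_S=(\lambda+1,\lambda)=\lambda(\lambda,1)$, so the vector of volumes $\vec q=(\lambda,1)$ spans the left PF-eigenspace, as asserted above.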
A substitution $S$ is said to be \emph{recognizable} if $S:X\to S(X)$ is injective. In this case, we say that $S$ is \emph{strongly recognizable} if for any $x\in X$ and any tile $D\in \mathcal{P}^1(X)$ the following holds: if $S(D)$ is a patch of $S(x)$, then $D\in x$.
\begin{eg}
The Ammann A3 substitution (figure \ref{ammann}; see \cite{GS} for a detailed description of this substitution) is recognizable but not strongly recognizable.
\begin{figure}
\caption{Ammann A3 substitution.}
\label{ammann}
\end{figure}
More generally, any substitution for which one patch $S(D_i)$ contains another patch $S(D_j)$ cannot be strongly recognizable.
The pentiamond substitution (figure \ref{pentiamond}; see the Tilings Encyclopedia at http://tilings.math.uni-bielefeld.de) is strongly recognizable.
\begin{figure}
\caption{Pentiamond substitution.}
\label{pentiamond}
\end{figure}
\end{eg}
\section{Multi-Substitution Tiling Spaces}
In this section we establish the definition and basic properties of multi-substitution tiling spaces. These spaces are also referred to as \emph{mixed substitution} tiling spaces \cite{GM} or \emph{$S$-adic systems} \cite{Du,Fe}. In \cite{FS}, the authors developed a framework for studying the ergodic theory and topology
of hierarchical tilings in great generality. The classical substitution tiling spaces and the multi-substitution tiling spaces fit in this general framework. In fact, they are particular cases of \emph{fusion} tiling spaces. Certain properties, like minimality or
unique ergodicity, can be derived within the framework of fusion tiling spaces. However, naturally, some other properties become hard to achieve in such generality.
Let $\mathcal{S}=\{S_i\}_{i\in J}$ be a finite collection of substitutions $S_i:\mathcal{P}^1(X)\to\mathcal{P}(X)$. Assume that the substitution $S_i$ can be extended to maps $S_i: \mathcal{P}(X)\to \mathcal{P}(X)$ and $S_i:X\to X$, for each $i\in J$. Denote by $\lambda_i>1$ and $A_i$ the dilatation factor and the structure matrix, respectively, associated to $S_i$. Provide the space of sequences $$\Sigma:=\{\bar a=(a_1,a_2,\ldots): {a_i}\in J\}$$ with the usual structure $(\Sigma,d_\Sigma)$ of metric space: given $\bar a=(a_1,a_2,\ldots)$ and $\bar b=(b_1,b_2,\ldots)$ in $\Sigma$, we set $d_\Sigma(\bar a,\bar b)=1/L$ if $L$ is the smallest integer such that $a_L\neq b_L$. We also introduce the standard shift map $\sigma:\Sigma\to\Sigma$, given by $\sigma(a_1,a_2,\ldots)=(a_2,a_3,\ldots)$, which is continuous with respect to $d_\Sigma$. Given $\bar a =(a_1,a_2,\ldots)\in\Sigma$, we denote by $[\bar a]_n$ the periodic sequence $(a_1,...,a_n,a_1,...,a_n,\ldots)\in\Sigma$. Clearly, for each $k>0$, $S_{\bar{a}}^k:=S_{a_1}\circ S_{a_2}\circ \ldots \circ S_{a_k}$ is itself a substitution with structure matrix given by $A_{\bar{a}}^k:=A_{a_1}A_{a_2} \ldots A_{a_k}$ and dilatation factor $\lambda_{\bar a}^k:=\lambda_{a_1}\lambda_{a_2}\ldots \lambda_{a_k}$.
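To fix the notation with a concrete instance: if $J=\{1,2\}$, the sequences $\bar a=(1,2,1,2,\ldots)$ and $\bar b=(1,2,2,2,\ldots)$ first differ at position $L=3$, so
\[
d_\Sigma(\bar a,\bar b)=\frac{1}{3},
\qquad
S^{2}_{\bar a}=S_{1}\circ S_{2},
\qquad
A^{2}_{\bar a}=A_{1}A_{2},
\qquad
\lambda^{2}_{\bar a}=\lambda_{1}\lambda_{2}.
\]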
The sequence of substitutions $(S_{a_n})$ is called \emph{primitive} if for each $n$ there exists a least $N^{\bar a}_n$ such that $A_{a_{n}}A_{a_{n+1}}\ldots A_{a_{n+N^{\bar a}_n}}>0$. Observe that, in this case, each matrix $A_i$ does not have any column of all zeroes. Hence,
$A_{a_{n}}A_{a_{n+1}}\ldots A_{a_{n+N^{\bar a}_n+j}}>0$ for all $j\geq 0$.
If, for each $n$, the substitution $S_{a_n}$ is strongly primitive, that is, $A_{a_{n}}>0$, the sequence $(S_{a_n})$ is called \emph{strongly primitive}. The set $\mathcal{S}$ of substitutions is \emph{primitive} if, for any $\bar a\in \Sigma$, $(S_{a_n})$ is primitive. Moreover, we say that a primitive set of substitutions $\mathcal{S}$ is \emph{bounded primitive} if the set $\{N^{\bar a}_n:\, n\in \mathbb{N}, \bar a\in \Sigma\}$ is bounded.
\begin{lem}
If $\mathcal{S}$ is primitive then it is bounded primitive.
\end{lem}
\begin{proof}
The number of possible configurations of zero entries in finite products of structure matrices associated to substitutions in $\mathcal{S}$ is finite, say $L(\mathcal{S})$. Now, assume that
$\mathcal{S}$ is not bounded primitive. Then, for some $q>L(\mathcal{S})$, there is $\bar a\in \Sigma$ such that $A_{\bar a}^q$ has some zero entry. This means that we can find $p'<p\leq q$ such that $A_{\bar a}^{p'}$ has the same configuration of zero entries as $A_{\bar a}^p$. Then the sequence of substitutions $(S_{b_n})$, with $\bar b=(a_1,\ldots, a_{p'},a_{p'+1},\ldots,a_{p},a_{p'+1},\ldots,a_{p},a_{p'+1},\ldots)$, is non-primitive.
\end{proof}
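In particular, the constant $L(\mathcal{S})$ in the proof admits a crude explicit bound: if the structure matrices are $l\times l$, each entry of a product is either zero or positive, so the number of configurations of zero entries satisfies
\[
L(\mathcal{S})\le 2^{l^2}.
\]
Consequently, for a primitive set $\mathcal{S}$, every product of more than $2^{l^2}$ consecutive structure matrices along a sequence in $\Sigma$ is positive.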
Before introducing multi-substitution tiling spaces, let us prove the following useful lemma.
\begin{lem}\label{super}
Take $D\in \mathcal{P}^1(X)$ and assume that the sequence of substitutions $(S_{a_n})$ is primitive. Given $n>0$, there are $R>0$ and $N_0>n$ such that, for any $N>N_0$ and any ball $B$ of radius $R$ contained in the support of $S^N_{\bar{a}}(D)$, the following holds: $\vec t + S^n_{\bar{a}}(D)\subset S^N_{\bar{a}}(D)$ and $\mathrm{supp}(\vec t + S^n_{\bar{a}}(D))\subset B$, for some $\vec t \in \mathbb{R}^d$.
\end{lem}
\begin{proof}
Take a finite number of prototiles $\{D_1,\ldots,D_l\}$ of $X$ such that $$\mathcal{T}^1(X)=\{[D_1],\ldots, [D_l]\}.$$ By primitivity, we can take ${N}_0$ such that a translated copy of $ S^n_{\bar{a}}(D)$ can be found in each $S^{{N}_0}_{\bar{a}}(D_i)$ for all $i\in\{1,\ldots,l\}$. Now, consider the tiles $\tilde{D}_i=\mathrm{supp}(S^{ N_0}_{\bar{a}}(D_i))$. For sufficiently large $R$, if $B$ is a ball with radius $R$ and $x'$ is a patch formed with translated copies of the tiles $\tilde{D}_i$, with $B\subset \mathrm{supp}(x')$, then, for some $i\in\{1,\ldots,l\}$ and $\vec{t}_i\in \mathbb{R}^d$, we have $\vec{t}_i+\tilde{D}_i\subset B$ and $\vec{t}_i+\tilde{D}_i\in x'$. Take $N> N_0$ such that the support of $S^{N}_{\bar a}(D)$ contains some ball $B$ of radius $R$. Since $S^{N}_{\bar a}(D)$ is the disjoint union of translated copies of patches of the form $S^{N_0}_{\bar a}(D_i)$, the support of one of these copies must be contained in $B$, and we are done.
\end{proof}
\begin{rem}\label{remsei}
It is clear from the proof that, given $n>0$ and a tile $D$, if $\mathcal{S}$ is primitive (hence bounded primitive) we can take $R>0$ and $N_0>n$ so that the statement of lemma \ref{super} holds for any $\bar a\in \Sigma$.
\end{rem}
Take $D\in \mathcal{P}^1(X)$, $\bar{a}\in\Sigma$ and the corresponding sequence of patches in $\mathcal{P}(X)$: $$x_{\bar{a}}^1=D,\quad\textrm{ and }\quad x_{\bar{a}}^k=S_{\bar a}^k(D),\,\textrm{ for } k>1.$$ We define the \emph{multi-substitution tiling space} $X_{\bar{a}}:=X_{\bar{a}}(\mathcal{S})\subseteq X$ as follows: a tiling $x\in X$ belongs to $X_{\bar{a}}$ if, and only if, for any finite patch $x'\subset x$ there exist $k>0$ and a vector $\vec{t}\in \mathbb{R}^d$ such that $\vec{t}+x'\subseteq x_{\bar{a}}^k$.
\begin{prop}\label{Xa} If the sequence of substitutions $(S_{a_n})$ is primitive, then:
\begin{itemize}
\item[a)] $X_{\bar{a}}\neq \emptyset$;
\item[b)] $X_{\bar{a}}$ is independent of the initial tile $D\in\mathcal{P}^1(X)$;
\item[c)] $X_{[\bar{a}]_n}$ coincides with the substitution tiling space $X_{S_{\bar a}^n}$;
\item[d)] $X_{\bar{a}}$ is closed.
\end{itemize}
\end{prop}
\begin{proof}
Take $D\in \mathcal{P}^1(X)$. By primitivity, the definition of $X_{\bar{a}}$ is independent of the initial tile and, taking into account lemma \ref{super}, for each $n>0$ there are $\vec{t}_n\in\mathbb{R}^d$, $k_n>0$ and $r_n>0$, with $\lim_n r_n=\infty$, such that
the sequence $(x'_n)$ of patches $x'_n=\vec{t}_n+S_{\bar a}^{k_n}(D)$ satisfies: $x'_{n-1}\subset x'_n$ and $\mathrm{supp}(x'_{n-1})\subset B_{r_n}\subset \mathrm{supp}(x'_n)$. Set
\begin{equation}\label{x0}x_{\bar a}^\infty=\bigcup_{n\geq 1}x'_n\end{equation} and observe that $x_{\bar a}^\infty$ is a tiling of $\mathbb{R}^d$ in $X_{\bar{a}}$. Hence $X_{\bar{a}}\neq \emptyset$.
It is clear that $X_{S_{\bar a}^n}\subset X_{[\bar{a}]_n}$. Now, take $x\in X_{[\bar{a}]_n}$ and a finite patch $x'\subset x$. By definition, a translated copy of $x'$ appears in $S_{[\bar{a}]_n}^k(D)$ for some $k>0$. Take $N>k$ such that $A_{a_{k+1}}\ldots A_{a_{N}}>0$ and $m$ such that $N\leq mn$. Then we also have $A_{a_{k+1}}\ldots A_{a_{nm}}>0$. In particular, a translated copy of $D$ appears in $S_{a_{k+1}}\circ\ldots \circ S_{a_{nm}}(D)$. Consequently, a translated copy of $x'$ appears in $S_{[\bar{a}]_n}^{nm}(D)=(S_{[\bar{a}]_n}^{n})^m(D)$. This means that $x\in X_{S_{\bar a}^n}$.
To prove that $X_{\bar{a}}$ is closed, take a sequence of tilings $(x_n)$ in $X_{\bar{a}}$ converging to some $x\in X$. Take a patch $x''$ in $x$ and $r>0$ such that $\mathrm{supp}(x'')\subset B_{1/r}$. We know that there exists $n_0$ such that $d_T(x_n,x)<r$ for all $n>n_0$. This means that, for each $n>n_0$, there are patches $x'_n\in x_n[[B_{1/r}]]$ and $x'\in x[[B_{1/r}]]$, and a vector $\vec{t}_n$ with $\|\vec{t}_n\|<r$, such that $x'=\vec{t}_n+x'_n$. Since $x_n\in X_{\bar{a}}$, there exists a translation of $x'_n$, and consequently of $x''\subset x'$, that is contained in some $S_{\bar a}^{k_n}(D)$. Hence $x\in X_{\bar a}$.
\end{proof}
Henceforth we assume that the sequence of substitutions $(S_{a_n})$ is primitive. As for substitution tiling spaces, recognizability is closely related with non-periodicity:
\begin{prop}
Let $\mathcal{S}=\{S_1,\ldots,S_k\}$ be a set of recognizable substitutions, $\bar a\in \Sigma$, and $X_{\bar a}$ the corresponding multi-substitution tiling space. Then any tiling $x\in X_{\bar{a}}$ is aperiodic. \end{prop}
\begin{proof}The argument is standard.
Take $x\in X_{\bar{a}}$ and suppose that $\vec t+x = x$ for some $\vec t\neq 0$. Since our substitutions are recognizable, for each $n\geq 1$ there exists a unique $x_n\in X_{\sigma^n (\bar{a})}$ such that $S_{\bar a}^n(x_n)=x$. We have
$$S_{\bar a}^n(x_n)=\vec t+S_{\bar a}^n(x_n)=S_{\bar a}^n\Big(\frac{\vec t}{\lambda^n_{\bar a}}+x_n\Big),$$
which means, by recognizability that
$x_n={\vec t}/{\lambda^n_{\bar a}}+x_n.$
Now, for $n$ sufficiently large, it is clear that
$\bigl(\mathrm{supp}(D)+ {\vec t}/{\lambda^n_{\bar a}}\bigr)\cap\mathrm{supp}(D)\neq \emptyset$
for any prototile $D$, which is a contradiction.
\end{proof}
It is well known that the set of periodic points of $\sigma$ is dense in $\Sigma$. Together with proposition \ref{Xa}, this result suggests that any multi-substitution tiling can be approximated arbitrarily closely by substitution tilings. In fact we have:
\begin{prop}For each $x\in X_{\bar{a}}$, there exists a sequence of tilings $(x_n)$, with $x_n\in X_{S^{j_n}_{\bar a}}$, for some $j_n\geq 1$, such that $\lim_n x_n=x$.
\end{prop}
\begin{proof} For each $n$ take a patch $x'_n\in x[[B_{n}]]$. By definition of multi-substitution tiling space, we have $x'_n\subset \vec t_n+ S_{\bar a}^{j_n}(D)$, for some $j_n\geq 1$ and $\vec t_n\in \mathbb{R}^d$. Adapting the procedure we have used in the proof of proposition \ref{Xa} to construct a tiling in $X_{\bar{a}}$, it is possible to construct a tiling $x_n\in X_{S^{j_n}_{\bar a}}=X_{[\bar{a}]_{j_n}}$ containing $\vec t_n+ S_{\bar a}^{j_n}(D)$. Clearly we have $\lim_n x_n=x$.
\end{proof}
\section{Minimality and Repetitivity}
As Frank and Sadun \cite{FS} have shown, fusion tiling spaces are minimal and their elements are repetitive. For completeness, we shall next give a proof of this result in the particular case of multi-substitution tiling spaces.
A \emph{dynamical system} will be a pair $(Y,G)$ where $Y$ is a compact metric space and $G$ is a continuous action of a group. $(Y,G)$ is \emph{minimal} if $Y$ is the orbit closure $\overline{\mathfrak{m}athcal{O}(y)}$ of any of its elements $y$. A point $y\in Y$ is \emph{almost periodic} if
$$G(y,U)=\{g\in G:\,\,g(y) \in U\}$$ is \emph{relatively dense} (that is, there exists
a compact set $K\subseteq G$ such that $g\cdot K$ intersects $G(y,U)$ for all $g\in G$) for every open set $U\subseteq Y$ with $G(y,U)\neq \emptyset$.
Minimality and almost periodicity are related by Gottschalk's theorem:
\begin{thm}\cite{Go} Let $(Y,G)$ be a dynamical system. If $y\in Y$ is an almost periodic point, then $(\overline{\mathfrak{m}athcal{O}(y)}, G)$ is
minimal. Moreover, if $(Y,G)$ is minimal, then any point in $Y$ is almost periodic.
\end{thm}
It is well known \cite{Rob} that, for any primitive substitution $S$, $(X_S,T)$ is minimal. More generally, for multi-substitutions tiling spaces we have:
\begin{thm}
The dynamical system $(X_{\bar{a}},T)$ is minimal.
\end{thm}
\begin{proof}
Let $x$ and $y$ be two tilings in $X_{\bar{a}}$ and $\epsilon >0$. Fix $y'\in y[[B_{1/\epsilon}]]$. By definition of $X_{\bar{a}}$, there is $n$ such that a translated copy of $y'$ can be found in $S_{\bar a}^n(D)$. Taking into account lemma \ref{super}, there are $R>0$ and $N'>0$ such that, for any ball $B$ of radius $R$ with $B\subset \mathrm{supp}( S_{\bar a}^{N'}(D))$, there is $\vec{t}$ for which $\vec{t}+S_{\bar a}^n(D)\subset S_{\bar a}^{N'}(D)$ and $\mathrm{supp}( \vec t + S_{\bar a}^{n}(D))\subset B $. On the other hand, again by definition of $X_{\bar{a}}$, given a patch $x'\in x[[B_{R}]]$, there is some $N''$ such that $S_{\bar a}^{N''}(D)$ contains a translated copy of $x'$. Hence, due to primitivity, we can take some $N\geq \max\{N',N''\}$ such that
$$y'\subseteq \vec t_1+S_{\bar a}^n(D)\subseteq \vec t_2+ x' \subseteq \vec t_3+ S_{\bar a}^{N}(D).$$ In particular, $d_T(\vec t_2+ x,y)<\epsilon$.
\end{proof}
A tiling $x$ of $\mathbb{R}^d$ is said to be \emph{repetitive} if for every patch $x'$ of $x$ with bounded support there is some $r(x')>0$ such that, for every ball $B$ of $\mathbb{R}^d$ with radius $r(x')$, there exists $\vec{t}\in\mathbb{R}^d$ such that $\mathrm{supp}(\vec t+x' )\subseteq B$ and $\vec t+x' \subset x$. It is also common to refer to repetitive tilings as tilings satisfying the \emph{local isomorphism} property \cite{Rad}. As explained in \cite{Rob}, for tiling dynamical systems $(X,T)$, repetitivity is equivalent to almost periodicity. From
Gottschalk's theorem it follows that:
\begin{thm}
Any $x\in X_{\bar a}$ is repetitive.
\end{thm}
Of course, this can also be seen as an easy consequence of lemma \ref{super}. Given a tiling $x\in X_{\bar{a}}$ and a finite patch $x'\subset x$, we have $x'\subset S_{\bar a}^n(D)$ for some $n$ and $D$. The radius $r(x')$ can be taken as the radius $R$ of lemma \ref{super}, which does not depend on the tiling $x$ of $X_{\bar{a}}$ we take. On the other hand, $x'\in\mathcal{P}(X_{\bar b})$ for any $\bar b\in \Sigma$ with $d_\Sigma(\bar a, \bar b)<1/n$.
Hence, taking into account remark \ref{remsei}, we have:
\begin{prop}\label{R}
Assume that $\mathcal{S}$ is primitive and take $x'\subset S_{\bar a}^n(D)$. Then, there exists $R>0$ such that,
for any $\bar b\in \Sigma$ with $d_\Sigma(\bar a, \bar b)<1/n$, any $x\in X_{\bar b}$ and any ball $B$ of radius $R$, a translated copy $\vec t+x'$ of $x'$ can be found in $x$ with $\mathrm{supp}(\vec t+x')\subseteq B$.
\end{prop}
\section{Statistical stability}
\subsection{Unique ergodicity} The unique ergodicity of the system $(X_{\bar a}, T)$ was established in \cite{FS} in the general framework of fusion tiling spaces. We next revisit the proof of unique ergodicity for multi-substitution tiling spaces, based on a result of Solomyak \cite{Sol}, and prove that
the convergence of patch frequencies to their ergodic limits is locally uniform in some open subset of $\Sigma$ (theorem \ref{cu}).
Given a patch $x'\in\mathcal{P}(X_{\bar{a}})$ and a measurable subset $U$ of $\mathbb{R}^d$, define the \emph{cylinder set} $X^{\bar{a}}_{x',U}$ as
$$X^{\bar{a}}_{x',U}=\{x\in X_{\bar{a}}: \,\,\, x'+\vec{t}\subset x\,\,\mbox{for some $\vec{t}\in U$} \}.$$ These cylinders form a semi-algebra and a topology base for $X_{\bar{a}}$.
For any set $H\subset \mathbb{R}^d$ and $r \geq 0$ we define
\begin{equation*}
H^{+r}=\{\vec t\in\mathbb{R}^d:\,\mathrm{dist}(\vec{t},H)\leq r\},
\quad H^{-r}=\{\vec t\in H:\,\mathrm{dist}(\vec{t},\partial H)\geq r\},
\end{equation*}
where $\partial H$ denotes the boundary of $H$.
A sequence $(H_n)$ of subsets of $\mathbb{R}^d$ is a \emph{Van Hove} sequence if for any $r\geq 0$
$$\lim_n\frac{\mathrm{vol}((\partial H_n)^{+r})}{\mathrm{vol}(H_n)}=0.$$
Clearly, the sequence $(\mathrm{supp}(S_{\bar a}^n(D)))$ is a Van Hove sequence for each $D\in\mathcal{P}^1(X)$.
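To make the Van Hove condition concrete, the following sketch (a hypothetical numerical check, not an example from the paper) bounds the boundary-to-volume ratio for the cubes $H_n=[0,n]^d$: points within distance $r$ of $\partial H_n$ lie in the shell $[-r,n+r]^d$ minus $[r,n-r]^d$, whose volume is $(n+2r)^d-(n-2r)^d$.

```python
def van_hove_ratio(n, r, d=2):
    """Upper bound for vol((boundary of H_n)^{+r}) / vol(H_n), H_n = [0, n]^d.

    Points within distance r of the boundary sit in the shell
    [-r, n+r]^d minus [r, n-r]^d, of volume (n+2r)^d - (n-2r)^d.
    """
    assert n > 2 * r
    return ((n + 2 * r) ** d - (n - 2 * r) ** d) / n ** d


# The ratio vanishes as n grows, so the cubes form a Van Hove sequence.
ratios = [van_hove_ratio(n, r=1.0) for n in (10, 100, 1000)]
```

The same boundary-versus-bulk comparison, with supports of supertiles in place of cubes, is what underlies the claim about $(\mathrm{supp}(S_{\bar a}^n(D)))$.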
Consider the tiling $x_{\bar a}^\infty$ given by \eqref{x0}. For any patch $x'\in \mathcal{P}(X_{\bar{a}})$, denote by $L^{\bar a}_{x'}(H)$ (respectively, $N^{\bar a}_{x'}(H)$) the number of distinct translated copies of $x'$ in $x_{\bar a}^\infty$ whose support is completely contained in $H$ (respectively, intersects the border of $H$). If $y'\in \mathcal{P}(X_{\bar{a}})$ is another patch, we denote by $L_{x'}(y')$ the number of distinct translated copies of $x'$ in $y'$ and by $\mathrm{vol}(y')$ the Euclidean volume of the support of $y'$.
\begin{thm}\label{freq vs van Hove}\cite{Sol}
The dynamical system $(X_{\bar{a}},T)$ is uniquely ergodic if for any patch $x'\in \mathcal{P}(X_{\bar{a}})$ there is a number $\mathrm{freq}_{\bar a}(x')>0$ such that, for any Van Hove sequence $(H_n)$,
$$\mathrm{freq}_{\bar{a}} (x')=\lim_n \frac{L^{\bar a}_{x'}(H_n)}{\mathrm{vol}(H_n)}.$$ In this case, the unique ergodic measure $\mu_{\bar a}$ on $X_{\bar{a}}$ satisfies
$$\mu_{\bar a}(X^{\bar a}_{x',U})=\mathrm{freq}_{\bar{a}} (x')\mathrm{vol}(U)$$
for all Borel subsets $U$ with $\mathrm{diam}(U)<\eta$, where $\eta>0$ is such that any prototile contains a ball of radius $\eta$.
\end{thm}
For a primitive substitution tiling space $X_{S}$, such \emph{frequencies} exist \cite{GH,Sol}. For example, if $\vec p=(p_1,\dots,p_l)$ and $\vec q=(q_1,\ldots,q_l)$ are right and left PF-eigenvectors, respectively, of $A_S$ satisfying $\vec{p}\cdot \vec{q}=1$, then
$\mathrm{freq}_{S}{(D_i)}=p_i$.
Consequently, $(X_S,T)$ is uniquely ergodic. We want to extend this result to multi-substitution tiling spaces.
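As a hypothetical numerical illustration (using the Fibonacci substitution $a\to ab$, $b\to a$, which is not an example taken from this paper), relative tile frequencies can be read off a normalized PF-eigenvector of the substitution matrix:

```python
import numpy as np

# Substitution matrix of the Fibonacci substitution a -> ab, b -> a:
# entry (i, j) counts how many tiles of type i appear in S(D_j).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))   # index of the PF-eigenvalue
omega = eigvals.real[k]            # PF-eigenvalue: the golden ratio
p = np.abs(eigvecs[:, k].real)
p /= p.sum()                       # relative tile frequencies

# Tile a occurs omega times as often as tile b: p[0] / p[1] equals omega.
```

Here the exact normalization $\vec p\cdot\vec q=1$ of the statement is replaced by the simpler $\sum_i p_i=1$, which suffices to exhibit the frequency ratio.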
So, start with a primitive set $\mathcal{S}=\{S_1,\ldots, S_k\}$ of substitutions acting on the set of prototiles $\{D_1,\ldots, D_l\}$.
Being primitive, there exists some $N(\mathcal{S})>0$ such that $A>0$ for any $A\in\mathcal{A}_\mathcal{S}^{N(\mathcal{S})}$, the set of all $N(\mathcal{S})$-products of matrices from $\{\frac{A_1}{\omega_1},\ldots ,\frac{A_k}{\omega_k}\},$ where $\omega_i$ is the PF-eigenvalue of $A_i$.
For each $i,j\in\{1,\ldots,l\}$, set
\begin{equation}\label{eqcDi}c_{\bar{a}}(D_i):=\lim_n \frac{L_{D_i}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))}=\lim_n\frac{(A_{\bar a}^n)_{ij}}{\omega_{\bar a}^n\mathrm{vol}(D_j)}.\end{equation}
\begin{lem}\label{cDi}
The limit \eqref{eqcDi} exists, does not depend on $j$ and is uniform with respect to $\bar{a}\in \Sigma$. Moreover, there are $C>0$ and $0<\theta<1$ such that, for any $\delta>0$ and $\bar a, \bar b\in\Sigma$ with $d_\Sigma(\bar a,\bar b)<\delta$, we have
\begin{equation}\label{contfreq}
|c_{\bar a}(D_i)-c_{\bar{b}}(D_i)|<C\theta^{1/\delta},\end{equation}
for all $i\in\{1,\ldots,l\}$.
\end{lem}
\begin{proof}Let $\bar a\in \Sigma$. For each $n\geq 1$, consider the matrix $E_{\bar a}^n=[(E_{\bar a}^n)_{ij}]$ defined by
$$(E_{\bar a}^n)_{ij}=\frac{(A_{\bar a}^n)_{ij}}{\omega_{\bar a}^n\mathrm{vol}(D_j)}.$$
Let $\Delta_{\bar a}^n\subset \mathbb{R}^l$ be the convex hull of the columns of $E_{\bar a}^n$. It is easy to check that each column of $E_{\bar a}^{n+N}$ sits in $\Delta_{\bar a}^n$. Indeed,
$$(E_{\bar a}^{n+N})_{ij}=\sum_{k=1}^l \frac{(A_{\bar a}^n)_{ik}}{\omega_{\bar a}^n\mathrm{vol}(D_k)}\frac{(A_{\sigma^{n}(\bar a)}^N)_{kj}\mathrm{vol}(D_k)}{\omega_{\sigma^{n}(\bar a)}^N\mathrm{vol}(D_j)}.$$
Hence, the $j$-column $v_{\bar a,j}^{n+N}$ of $E_{\bar a}^{n+N}$ is given by
\begin{equation}\label{vs}
v_{\bar a,j}^{n+N}=\sum_{k=1}^l v_{\bar a,k}^{n}\frac{(A_{\sigma^n(\bar a)}^N)_{kj}\mathrm{vol}(D_k)}{\omega_{\sigma^{n}(\bar a)}^N\mathrm{vol}(D_j)}.\end{equation}
Since $$\sum_{k=1}^l\frac{(A_{\sigma^n(\bar a)}^N)_{kj}\mathrm{vol}(D_k)}{\omega_{\sigma^{n}(\bar a)}^N\mathrm{vol}(D_j)}=1,$$
we have $v_{\bar a,j}^{n+N}\in \Delta_{\bar a}^n$, that is, $\Delta_{\bar a}^{{n+N}}\subseteq \Delta_{\bar a}^{{n}}$ for all $N>0$. Set $\Delta_{\bar a}=\bigcap_{n=1}^\infty \Delta_{\bar a}^{{n}}$ and take $\theta'$ (not depending on $\bar a$) satisfying $\theta' l<1$ and
$$0<\theta'<\min _{A\in\mathcal{A}_\mathcal{S}^{N(\mathcal{S})}}\min_{k,j} \Big\{\frac{A_{kj}\mathrm{vol}(D_k)}{\mathrm{vol}(D_j)}\Big\}.$$
Taking into account \eqref{vs}, we have
\begin{align*}
v_{\bar a,j}^{n+N(\mathcal{S})}&=\sum_{k=1}^l (1-\theta'l)v_{\bar a,k}^{n}\Big(\frac{(A_{\sigma^n(\bar a)}^{N(\mathcal{S})})_{kj}\mathrm{vol}(D_k)}{(1-\theta'l)\omega_{\sigma^n(\bar a)}^{N(\mathcal{S})}\mathrm{vol}(D_j)}-\frac{\theta'}{1-\theta'l}\Big)+ \sum_{k=1}^l v_{\bar a,k}^{n}\theta'.
\end{align*}
Observe that
$$\sum_{k=1}^l\Big( \frac{(A_{\sigma^n(\bar a)}^{N(\mathcal{S})})_{kj}\mathrm{vol}(D_k)}{(1-\theta'l)\omega_{\sigma^n(\bar a)}^{N(\mathcal{S})}\mathrm{vol}(D_j)}-\frac{\theta'}{1-\theta'l}\Big)=1.$$
Hence $\Delta_{\bar a}^{n+N(\mathcal{S})}$ is contained, up to translation, in $(1-\theta'l)\Delta_{\bar a}^n$. Then
\begin{equation}\label{diam}
\mathrm{diam}(\Delta_{\bar a}^{1+nN(\mathcal{S})})\leq \rho(1-\theta'l)^n
\end{equation} for all $n$, where $\rho$ is the maximum of the $l$ possible values of $\mathrm{diam}(\Delta_{\bar a}^1)$.
Consequently $\Delta_{\bar a}$ has diameter zero, hence it consists of a single point, which means that the limit \eqref{eqcDi} exists and does not depend on $j$. Moreover, this limit is uniform with respect to $\bar{a}\in \Sigma$ since the bound \eqref{diam} does not depend on $\bar a$.
Finally, note that if $d_{\Sigma}(\bar a,\bar b)=1/k<\delta$, that is $a_j=b_j$ and $\Delta_j:=\Delta_{\bar a}^j=\Delta_{\bar b}^j$ for all $j\in\{1,\ldots,k-1\}$, then
$$|c_{\bar a}(D_i)-c_{\bar b}(D_i) |\leq \mathrm{diam}(\Delta_{k-1})\leq C\theta^{1/\delta}$$
for $\theta=(1-\theta'l)^{1/N(\mathcal{S})}$ and some constant $C>0$.
\end{proof}
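The nested-simplex argument above can be checked numerically. The following sketch (a hypothetical example with two primitive $2\times 2$ matrices and unit tile volumes, not taken from the paper) tracks the distance between the volume-normalized columns of the products $A_{a_1}\cdots A_{a_n}$, i.e.\ the diameter of the simplices $\Delta_{\bar a}^n$, which should shrink towards zero:

```python
import numpy as np

# Two primitive "substitution" matrices (hypothetical example); with unit
# tile volumes, Delta^n is the convex hull of the columns of the product
# A_{a_1} ... A_{a_n} after each column is scaled to sum to 1.
A = {1: np.array([[1.0, 1.0], [1.0, 0.0]]),
     2: np.array([[2.0, 1.0], [1.0, 1.0]])}

def column_diameter(word):
    """Distance between the two normalized columns of the product."""
    P = np.eye(2)
    for a in word:
        P = P @ A[a]
    cols = P / P.sum(axis=0)    # scale each column to sum to 1
    return float(np.linalg.norm(cols[:, 0] - cols[:, 1]))

word = [1, 2, 2, 1] * 10        # a fixed length-40 multi-substitution word
diams = [column_diameter(word[:n]) for n in (5, 10, 20, 40)]
# diams should decrease towards 0: both columns approach one common
# frequency vector, independently of the starting prototile.
```

Since each normalized column of a longer product is a convex combination of the columns of a shorter one, the diameters are non-increasing, and primitivity forces the geometric decay observed here.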
Take a finite patch $x'\in\mathcal{P}(X)$. For $\bar a\in \Sigma$, set
\begin{equation}\label{cx}c_{\bar a}({x'}):=\lim_n \frac{L_{x'}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))}.\end{equation}
\begin{lem}\label{exists freq P}
The limit \eqref{cx} exists and does not depend on $j$. Moreover, if $x'\in\mathcal{P}(X_{\bar a})$, then the limit is uniform in a small neighborhood of $\bar a$.
\end{lem}
\begin{proof}
Take a finite patch $x'\in\mathcal{P}(X)$ and a sequence $\bar a\in \Sigma$.
Given a subset $H$ of $\mathbb{R}^d$, observe that
\begin{equation}\label{vol}
\frac{L^{\bar a}_{x'}(H)}{\mathrm{vol}(H)}\leq \frac{1}{{\mathrm{vol}(x')}}.
\end{equation}
Given $n>m$,
write
$$S_{\sigma^m(\bar a)}^{n-m}(D_j)=\bigcup_{i,k} D_{ik}\,,$$ where $i\in \{1,\ldots ,l\}$, $k\in \{1,\ldots, \big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}\}$ and each
$D_{ik}$ is a translated copy of the prototile $D_i$. Denote by $N_{x'}^{\bar a}(n,m,i,j)$ the number of translated copies of $x'$ contained in $S_{\bar a}^n(D_j)$ whose support intersects the boundary of some $\mathrm{supp}(S_{\bar a}^m(D_{ik}))$.
Since the sequence $(S_{\bar{a}}^m(D_i))$ is Van Hove, for each $\epsilon >0$ there exists $m(\bar a)$ such that
\begin{equation*}\label{Nx}
N_{x'}^{\bar a}(n,m,i,j)<\epsilon L_{x'}(S_{\bar{a}}^m(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij},
\end{equation*}
for all $m>m(\bar a)$ and $i\in\{1,\ldots,l\}$. Hence
\begin{equation}\label{conf}
\sum_{i=1}^l\frac{L_{x'}(S_{\bar{a}}^m(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\omega_{\bar a}^m\mathrm{vol}(S_{\sigma^m(\bar a)}^{n-m}(D_j))}\leq \frac{L_{x'}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))}\leq (1+\epsilon)\sum_{i=1}^l\frac{L_{x'}(S_{\bar{a}}^m(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\omega_{\bar a}^m\mathrm{vol}(S_{\sigma^m(\bar a)}^{n-m}(D_j))}.
\end{equation}
Taking into account \eqref{vol} and lemma \ref{cDi}, we have
\begin{align*}
\limsup_n\frac{L_{x'}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))}-\liminf_n\frac{L_{x'}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))}&\leq \frac{\epsilon}{\mathrm{vol} (x')}.
\end{align*}
Since this holds for an arbitrary $\epsilon$, it follows that the limit $c_{\bar a}({x'})$ exists. The independence of $c_{\bar a}(x')$ with respect to $j$ follows from the observation that
the lower and upper bounds in \eqref{conf} do not depend on $j$ when $n$ goes to infinity.
Suppose now that $x'\in\mathcal{P}(X_{\bar a})$. Let us prove that the limit is uniform in a small neighborhood of $\bar a$. We have $x'\subset S_{\bar a}^{n_0}(D)$ for some $n_0$ and $D$.
Take the radius $R>0$ associated to $x'$ given by proposition \ref{R}. For $\lambda>0$, let $L(R,\lambda,i)$ be the maximum number of open disjoint balls of radius $R$ contained in $\lambda D_i$. Clearly, there exists $\lambda_0$ such that
$$\frac{\mathrm{vol}((\partial(\lambda D_i))^{+r})}{\mathrm{vol}(x')} <\epsilon L(R,\lambda,i)$$
for all $\lambda>\lambda_0$, with $r$ the diameter of $\mathrm{supp}(x')$. Let $\omega=\min_i\{\omega_i\}$ and take $m_0>0$ such that $\omega^{m_0}>\lambda_0$. Then, for all $m>m_0$ and $\bar b\in \Sigma$ with $d_\Sigma(\bar a, \bar b)<1/n_0$, we have $x'\in\mathcal{P}(X_{\bar b})$ and
\begin{align*}
N_{x'}^{\bar b}(n,m,i,j)&< \frac{\mathrm{vol}\big((\partial(\omega_{\bar b}^m D_i))^{+r}\big)}{\mathrm{vol}(x')}\big(A_{\sigma^m(\bar b)}^{n-m}\big)_{ij}\\&<\epsilon L(R,\lambda,i) \big(A_{\sigma^m(\bar b)}^{n-m}\big)_{ij} \\&<\epsilon L_{x'}(S_{\bar{b}}^m(D_i))\big(A_{\sigma^m(\bar b)}^{n-m}\big)_{ij},
\end{align*}
and we are done.
\end{proof}
\begin{lem}\label{freq VH implies uni erg}
Assume that the limit \eqref{cx} exists. Then,
for any Van Hove sequence $(H_n)$,
$$c_{\bar a}(x')=\lim_n \frac{L^{\bar a}_{x'}(H_n)}{\mathrm{vol}(H_n)}.$$
\end{lem}
\begin{proof}Although the proof follows closely that of \cite{LMS} for substitution tilings, we present it here since it is essential to establish theorem \ref{cu}.
For each $m>0$, there is $\mathcal{D}^{\bar a}_m\subseteq\mathcal{P}^1(X_{\bar a})$ such that the tiling $x_{\bar a}^\infty$ in \eqref{x0} is the disjoint union of patches $S_{\bar a}^{m}(D)$, with $D\in \mathcal{D}^{\bar a}_m$. Given $m>0$ and $n>0$, define
\begin{align*}G^{\bar a}_{m,n}:=\{D\in\mathcal{D}^{\bar a}_m: \mathrm{supp}(S_{\bar a}^m(D))\cap H_n \neq\emptyset\},\,\,\, H^{\bar a}_{m,n}:=\{D\in\mathcal{D}^{\bar a}_m: \mathrm{supp}(S_{\bar a}^m(D))\subseteq H_n \}.
\end{align*}
Hence
\begin{equation}\label{ineq aux freq v hove 0}\sum_{D\in H^{\bar a}_{m,n}}L_{x'}(S_{\bar a}^m(D))\leq L^{\bar a}_{x'}(H_n)\leq \sum_{D\in G^{\bar a}_{m,n}} \Big(L_{x'}(S_{\bar a}^m(D))+N^{\bar a}_{x'}(\partial S_{\bar a}^m(D))\Big),\end{equation}
where $\partial S_{\bar a}^m(D)$ denotes the border of the support of $S_{\bar a}^m(D)$.
Now, fix $\epsilon>0$. Taking into account that $(S_{\bar a}^m(D_j))$ is a Van Hove sequence, we can take $m$ large enough so that, for every $D\in \mathcal{D}^{\bar a}_m$, we have
\begin{equation}\label{unif}
\left|\frac{L_{x'}( S_{\bar a}^m(D))}{\textrm{vol}(S_{\bar a}^m(D))}-c_{\bar a}(x')\right|<\epsilon\quad\textrm{and}\quad
N^{\bar a}_{x'}(\partial S_{\bar a}^m(D))<\epsilon L_{x'}( S_{\bar a}^m(D)).\end{equation}
Together with \eqref{ineq aux freq v hove 0}, this gives
\begin{equation}\label{ineq aux freq v hove}(c_{\bar a}(x')-\epsilon)\sum_{D\in H^{\bar a}_{m,n}}\textrm{vol}(S_{\bar a}^m(D))\leq L^{\bar a}_{x'}(H_n)\leq(1+\epsilon)(c_{\bar a}(x')+\epsilon) \sum_{D\in G^{\bar a}_{m,n}}\textrm{vol}(S_{\bar a}^m(D)).\end{equation}
On the other hand, note that, setting $t_m:=\max_{j}\{\textrm{diam}(S_{\bar a}^m(D_j))\}$, since $m$ is fixed, for large enough $n$ we have
$$\sum_{D\in H^{\bar a}_{m,n}}\textrm{vol}(S_{\bar a}^m(D))\geq \textrm{vol}(H_n^{-t_m})\geq (1-\epsilon)\textrm{vol}(H_n)$$
and
$$\sum_{D\in G^{\bar a}_{m,n}}\textrm{vol}(S_{\bar a}^m(D))\leq \textrm{vol}(H_n^{+t_m})\leq (1+\epsilon)\textrm{vol}(H_n),$$
which, combined with \eqref{ineq aux freq v hove}, concludes the proof, since $\epsilon>0$ is arbitrary.
\end{proof}
Combining theorem \ref{freq vs van Hove} with lemmas \ref{exists freq P} and \ref{freq VH implies uni erg}, with $c_{\bar a}(x')=\mathrm{freq}_{\bar a}(x')$, we conclude that
\begin{thm}\label{unique erg}
If $\mathcal{S}$ is primitive, $(X_{\bar{a}},T)$ is uniquely ergodic for all $\bar a\in \Sigma$.
\end{thm}
The following theorem establishes that the patch frequencies converge uniformly to their ergodic limits in an open subset of $\Sigma$.
\begin{thm}\label{cu}
If $x'\in\mathcal{P}(X_{\bar a})$ and $(H_n)$ is a Van Hove sequence, then the sequences $$\frac{L^{\bullet}_{x'}(H_n)}{\mathrm{vol}(H_n)}$$ converge uniformly to $\mathrm{freq}_{\bullet}(x')$ in a small neighborhood of $\bar a$.
\end{thm}
\begin{proof}
This follows easily from lemma \ref{exists freq P} and from the proof of lemma \ref{freq VH implies uni erg}, since \eqref{unif} holds for any sequence in a small neighborhood of $\bar a$.
\end{proof}
We denote by $\mu_{\bar a}:=\mu_{\bar a,\mathcal{S}}$ the unique ergodic measure of $(X_{\bar{a}},T)$. Two ergodic dynamical systems $(X,G,\mu)$ and $(Y,H,\nu)$ are said to be \emph{isomorphic} if there exist a group isomorphism $\xi:G\to H$ and a measure preserving homeomorphism $F:X\to Y$ such that $F\circ g=\xi(g)\circ F$ for all $g\in G$.
\begin{thm}
Let $\mathcal{S}=\{S_1,\ldots,S_k\}$ be a set of strongly recognizable substitutions.
Then the ergodic dynamical systems $(X_{\bar a},T,\mu_{\bar a})$ and $(X_{\sigma(\bar a)},T,\mu_{\sigma(\bar a)})$ are isomorphic.
\end{thm}
\begin{proof}
First note that $S_{a_1}\circ \vec t=\lambda_{a_1}\vec t\circ S_{a_1}$ and $\xi:T\to T$ defined by $\xi(\vec t\,)=\lambda_{a_1}\vec t$ is an isomorphism.
By recognizability, $S_{a_1}$ is a bijection from $X_{\sigma(\bar a)}$ onto $X_{\bar{a}}$. On the other hand, if $x,y\in X_{\sigma(\bar a)}$ and $d(x,y)< \epsilon$, then $d(S_{a_1}(x),S_{a_1}(y))<\lambda_{a_1}\epsilon$; similarly, if $x,y\in X_{\bar a}$ and $d(x,y)< \epsilon$, then $d(S^{-1}_{a_1}(x),S^{-1}_{a_1}(y))<\lambda_{a_1}\epsilon$. Hence $S_{a_1}:X_{\sigma(\bar a)}\to X_{\bar{a}}$ is a homeomorphism. To prove that $S_{a_1}$ is measure preserving, it is sufficient to prove it for cylinders $X_{P,U}$ with $U$ sufficiently small.
By strong recognizability, it is clear that $L_{S_{a_1}(P)}(S_{\bar a}^n(D_j))=L_{P}(S_{\sigma(\bar a)}^{n-1}(D_j))$. Then
\begin{align*}
\nonumber \mu_{\bar a}\big(S_{a_1}(X^{\sigma(\bar a)}_{P,U})\big)&=\mu_{\bar a}\big(X^{\bar a}_{S_{a_1}(P),\lambda_{a_1}U}\big)=\mathrm{freq}_{\bar a}\big(S_{a_1}(P)\big)\mathrm{vol}(\lambda_{a_1}U)\\
&=\frac{1}{\lambda_{a_1}^d}\mathrm{freq}_{\sigma(\bar a)}(P)\lambda_{a_1}^d\mathrm{vol}(U)=\mu_{\sigma(\bar a)}\big(X^{\sigma(\bar a)}_{P,U}\big).
\end{align*}
\end{proof}
\subsection{Statistical stability}
The inequality \eqref{contfreq} says, in particular, that $\bar a\mapsto c_{\bar a}(D)$ defines a continuous map $\Sigma\to \mathbb{R}$ for each tile $D$. In this subsection we extend this result to arbitrary patches $x'$. As a consequence, we will see that the unique ergodic measures $\mu_{\bar a}$, although defined on different spaces, also satisfy a certain kind of continuity with respect to $\bar a\in \Sigma$.
Recall that the \emph{upper Minkowski dimension} of a subset $H\subset \mathbb{R}^d$ can be defined as
$$\mathcal{D}_H:=\inf\{\beta:\,\mathrm{vol}(H^{+r})=O(r^{d-\beta})\,\,\mbox{as $r\to 0^+$}\}.$$ Set $d-1\leq\mathcal{D}:=\max_i\{\mathcal{D}_{\partial D_i}\}<d$.
\begin{thm}\label{ss} Given $N>0$, there is $C_N>0$ such that, for any $\delta>0$ and any $\bar a, \bar b\in\Sigma$ with $d_\Sigma(\bar a,\bar b)<\delta$, we have, for all $x'\in\mathcal{P}^N(X)$,
$$|c_{\bar a}(x')-c_{\bar{b}}(x')|<C_N\theta_0^{1/\delta},$$ where $\theta_0=\theta^{\frac{\mathcal{D}-d}{\mathcal{D}-d+\log_\omega\theta}}<1$ and $\theta$ is given by lemma \ref{cDi}.
\end{thm}
\begin{proof}
Set $\omega=\min_i\{\omega_i\}$, $t=\max_{j}\{\textrm{diam}(D_j)\}$ and $v=\min_{j}\{\mathrm{vol}(D_j)\}$. We have
$$\frac{\mathrm{vol}\big((\partial S_{\bar{a}}^m(D_i))^{+t}\big)}{\mathrm{vol}(S_{\bar{a}}^m(D_i))}\leq \frac{\mathrm{vol}\big((\partial D_i)^{+\frac{t}{\omega^m}}\big)}{\mathrm{vol}(D_i)}=O\Big({1}/{\omega^{m(d-\mathcal{D})}}\Big)$$ as $m\to \infty$, for all $\bar a\in \Sigma$.
Hence, for some constants $C_1$ and $m_1$,
\begin{align*}
N_{x'}^{\bar a}(n,m,i,j)&<\frac{N\mathrm{vol}((\partial S^m_{\bar a}(D_i))^{+t})\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{v} < \frac{C_1N}{v}\frac{\mathrm{vol}(S^m_{\bar a}(D_i))
\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\omega^{m(d-\mathcal{D})}}
\end{align*}
for all $\bar a\in \Sigma$ and $n>m>m_1$.
On the other hand,
$$\sum_{i=1}^l \frac{\mathrm{vol}(S^m_{\bar a}(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\mathrm{vol}(S^n_{\bar a}(D_j))}=1$$
for all $j$. Hence
\begin{align*}
\sum_{i=1}^l&\frac{L_{x'}(S_{\bar{a}}^m(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\omega_{\bar a}^m\mathrm{vol}(S_{\sigma^m(\bar a)}^{n-m}(D_j))}\leq \frac{L_{x'}(S_{\bar a}^n(D_j))}{\mathrm{vol}(S_{\bar a}^n(D_j))} \leq \sum_{i=1}^l\frac{L_{x'}(S_{\bar{a}}^m(D_i))\big(A_{\sigma^m(\bar a)}^{n-m}\big)_{ij}}{\omega_{\bar a}^m\mathrm{vol}(S_{\sigma^m(\bar a)}^{n-m}(D_j))}+\frac{C_1N}{v\omega^{m(d-\mathcal{D})}}.
\end{align*}
Taking the limit $n\to\infty$ we obtain
\begin{equation}\label{conf2}
\sum_{i=1}^l\frac{L_{x'}(S_{\bar a}^m(D_i))}{\omega_{\bar a}^m}c_{\sigma^m(\bar a)}(D_i) \leq c_{\bar a}({x'})\leq \sum_{i=1}^l\frac{L_{x'}(S_{\bar a}^m(D_i))}{\omega_{\bar a}^m}c_{\sigma^m(\bar a)}(D_i)+\frac{C_1N}{v\omega^{m(d-\mathcal{D})}},
\end{equation}
for all $m>m_1$ and $\bar a\in \Sigma$. Set
\begin{equation}\label{gamma}
\gamma := 1-\frac{d-\mathcal{D}}{\log_\omega\theta}>1,
\end{equation}
with $\theta<1$ given by lemma \ref{cDi}. Observe also that
\begin{equation}\label{qqq}
L_{x'}(S_{\bar a}^m(D_i))\leq \frac{\mathrm{vol}(S_{\bar a}^m(D_i))}{\mathrm{vol}(x')}\leq \frac{\mathrm{vol}(S_{\bar a}^m(D_i))}{Nv}.
\end{equation}
In view of \eqref{conf2}, \eqref{gamma}, \eqref{qqq} and lemma \ref{cDi}, there are constants $C'$ and $C_N$ such that, if, for some $m>0$,
$d_\Sigma(\bar a,\bar b)<\frac{1}{\gamma m}=\delta$, then
\begin{align*}
|c_{\bar a}(x')- c_{\bar b}(x')|&\leq \sum_{i=1}^l\frac{L_{x'}(S_{\bar a}^m(D_i))}{\omega_{\bar a}^m}|c_{\sigma^m(\bar a)}(D_i)-c_{\sigma^m(\bar b)}(D_i)|+\frac{C_1N}{v\omega^{m(d-\mathcal{D})}} \\ &\leq\sum_{i=1}^l\frac{\mathrm{vol}(D_i)}{Nv}C'\theta^{(\gamma-1)m}+ \frac{C_1N}{v\omega^{m(d-\mathcal{D})}} \\ & \leq
C_N\big(\theta^{\frac{\mathcal{D}-d}{\gamma \log_\omega \theta}}\big)^{1/\delta}\end{align*}
for all $x'\in\mathcal{P}^N(X)$.
\end{proof}
From theorem \ref{freq vs van Hove} and theorem \ref{ss} we have:
\begin{cor}\label{stat}
Given $N>0$, there is $C_N>0$ such that, for any $\delta>0$ and any $\bar a, \bar b\in\Sigma$ with $d_\Sigma(\bar a,\bar b)<\delta$, we have, for all $x'\in\mathcal{P}^N(X)$ and all sufficiently small Borel subsets $U\subset \mathbb{R}^d$,
$$|\mu_{\bar a}(X^{\bar a}_{x',U})-\mu_{\bar b}(X^{\bar b}_{x',U})|<C_N\theta_0 ^{1/\delta},$$
where $\theta_0$ is given by theorem \ref{ss}.
\end{cor}
Finally, we extend the previous corollary:
\begin{thm}\label{int} Take finite patches $P,Q\in\mathcal{P}(X)$ and measurable sets $U,V\subset \mathbb{R}^d$. There exists $C>0$ such that, for any $\delta>0$ sufficiently small and any $\bar a, \bar b\in\Sigma$ with $d_{\Sigma}(\bar a, \bar b)<\delta$, we have
\begin{equation*}
|\mu_{\bar a}(X^{\bar a}_{ P,U}\cap X^{\bar a}_{Q,V})-\mu_{\bar b}(X^{\bar b}_{ P,U}\cap X^{\bar b}_{Q,V})|<C\theta_0^{\frac{1}{\delta}},\end{equation*}
where $\theta_0$ is given by theorem \ref{ss}.
\end{thm}
\begin{proof} Let $B(P,U)$ and $B(Q,V)$ be two open balls such that $\mathrm{supp}(\vec u+P)\subseteq B(P,U)$ and $\mathrm{supp}(\vec v+Q) \subseteq B(Q,V)$ for all $\vec u\in U$ and $\vec v\in V$.
Consider the set $\mathcal{P}(P,Q,{\bar a})$ of finite patches $x'$ of $X_{\bar{a}}$ such that $B:=B(P,U)\cup B(Q,V)$ is contained in the interior of $\mathrm{supp}(x')$ and all tiles of $x'$ intersect $B$. There are finitely many equivalence classes $\{[x_1'],\ldots,[x'_{k}]\}$ of such patches, with $x'_i\in\mathcal{P}(P,Q,{\bar a})$. For each $i\in I:=\{1,\ldots,k\}$, set $$U_i=\{\vec w:\,\, \vec w+x'_i\in \mathcal{P}(P,Q,{\bar a})\},$$ which is an open subset of $\mathbb{R}^d$. Then we have a disjoint cylinder decomposition
$$X_{\bar a}=\bigcup_{i\in I}X^{\bar a}_{x'_i,U_i}.$$ The diameter of $U_i$ is smaller than $\max_j\{\mathrm{diam}(D_j)\}$. Take $N_0,\psi >0$ such that, for all $\bar a$ and $i$, the following hold: $\mathrm{vol}(U_i)\leq \psi $; if $x'_i\in \mathcal{P}^N(X)$ then $N\leq N_0$. For simplicity of exposition we assume that $\mathrm{diam}(U_i)<\eta$, where $\eta$ is such that any tile contains a ball of radius $\eta$. Otherwise, we could decompose each $U_i$ into a fixed number of disjoint subsets satisfying this property.
Consider the subset $W_i\subset U_i$ of vectors $\vec w$ such that $\vec u+P$ and $\vec v+Q$ are contained in $\vec w+x'_i$ for some $\vec u\in U$ and $\vec v\in V$. We have
$$X^{\bar a}_{P,U}\cap X^{\bar a}_{Q,V}\cap X^{\bar a}_{x'_i,U_i}=X^{\bar{a}}_{x'_i,W_i}.$$
Then
\begin{equation}\label{mua}
\mu_{\bar a}(X^{\bar a}_{P,U}\cap X^{\bar a}_{Q,V})=\!\!\sum_{i\in I}\mu_{\bar a}(X^{\bar a}_{x'_i,W_{i}}).\end{equation}
Take $\bar b\in \Sigma$ with $d_{\Sigma}(\bar a, \bar b)<\delta$.
We have a finite disjoint cylinder decomposition
\begin{equation*}\label{decomcylindersb}
X^{\bar b}_{ P,U}\cap X^{\bar b}_{Q,V}=\!\!\bigcup_{i\in I,j\in \hat I}\!\!\!X^{\bar b}_{x'_i,W_i}\cup X^{\bar b}_{\hat x'_j,\hat W_j},
\end{equation*}
where, for each $i\in \hat I$, $\hat x'_i$ is a patch in $\mathcal{P}(P,Q,{\bar b})$ but not in $\mathcal{P}(P,Q,{\bar a})$.
Hence
\begin{equation}\label{mub}
\mu_{\bar b}(X^{\bar b}_{P,U}\cap X^{\bar b}_{Q,V})= \!\!\sum_{i\in I} \mu_{\bar b}(X^{\bar b}_{x'_i,W_i})+\!\!\sum_{i\in \hat{I}}\mu_{\bar b}(X^{\bar b}_{\hat{x}'_i,\hat W_i}).\end{equation}
Equations
\eqref{mua} and \eqref{mub} give
\begin{align}
\nonumber |\mu_{\bar b}(X^{\bar b}_{P,U}\cap X^{\bar b}_{Q,V})-&\mu_{\bar a}(X^{\bar a}_{P,U}\cap X^{\bar a}_{Q,V})| \\ &\leq
\sum_{i\in I}|\mu_{\bar b}(X^{\bar b}_{x'_i,W_i})-\mu_{\bar a}(X^{\bar a}_{x'_i,W_i})|+\!\!\sum_{i\in \hat{I}} \mu_{\bar b}(X^{\bar b}_{\hat{x}'_i,\hat W_i}).\label{a1}
\end{align}
But, taking into account theorem \ref{ss}, we have
\begin{align}\nonumber
\sum_{i\in I}|\mu_{\bar b}(X^{\bar b}_{x'_i, W_i})-\mu_{\bar a}(X^{\bar a}_{x'_i,W_i})|&= \sum_{i\in I}|c_{\bar b}(x'_i)-
c_{\bar a}(x'_i)|\mathrm{vol}(W_i)\\ &\leq
C| I| \psi\theta_0^{\frac{1}{\delta}},\label{a2}
\end{align}
for some constant $C$. On the other hand, it follows from
\begin{equation*}
\sum_{i\in {I}} \mu_{\bar a}(X^{\bar a}_{{x}'_i, U_{i}}) = \!\sum_{i\in {I}} \mu_{\bar b}(X^{\bar b}_{{x}'_i, U_{i}})+\! \sum_{i\in \hat{I}} \mu_{\bar b}(X^{\bar b}_{\hat{x}'_i,\hat U_{i}})
\end{equation*}
that
\begin{equation}\label{a3}
\sum_{i\in \hat{I}} \mu_{\bar b}(X^{\bar b}_{\hat{x}'_i,\hat U_{i}})\leq \! \sum_{i\in {I}}| \mu_{\bar b}(X^{\bar b}_{{x}'_i, U_{i}})- \mu_{\bar a}(X^{\bar a}_{{x}'_i, U_{i}}) |
\leq C| I| \psi \theta_0^{\frac{1}{\delta}}.
\end{equation}
Finally, the result follows from \eqref{a1}, \eqref{a2} and \eqref{a3}.
\end{proof}
\end{document}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\if11
{
\title{Robust Inference for Change Points in High Dimension }
\author[1]{Feiyu Jiang}
\author[2]{Runmin Wang}
\author[3]{Xiaofeng Shao}
\affil[1]{Department of Statistics and Data Science\\ School of Management, Fudan University}
\affil[2]{Department of Statistical Science\\ Southern Methodist University}
\affil[3]{Department of Statistics\\
University of Illinois at Urbana-Champaign}
\date{}
\maketitle
} \fi
\if01
{
\title{Robust Inference for Change Points in High Dimension}
\author{}
\date{}
\maketitle
} \fi
\baselineskip=2\baselineskip
\begin{abstract}
\baselineskip=1.25\baselineskip
This paper proposes a new test for a change point in the mean of high-dimensional data based on the spatial sign and self-normalization. The test is easy to implement with no tuning parameters, robust to heavy-tailedness, and theoretically justified with both fixed-$n$ and sequential asymptotics under both the null and alternatives, where $n$ is the sample size. We demonstrate that the fixed-$n$ asymptotics provide a better approximation to the finite sample distribution and thus should be preferred in both testing and testing-based estimation. To estimate the number and locations of change-points when multiple change-points are present, we propose to combine the p-value under the fixed-$n$ asymptotics with the seeded binary segmentation (SBS) algorithm. Through numerical experiments, we show that the spatial sign based procedures are robust with respect to heavy-tailedness and strong coordinate-wise dependence, whereas their non-robust counterparts proposed in
\cite{wang2021inference} appear to under-perform. A real data example is also provided to illustrate the robustness and broad applicability of the proposed test and its corresponding estimation algorithm.
\end{abstract}
\noindent
{\it Keywords:} Change Points, High Dimensional Data, Segmentation, Self-Normalization, Spatial Sign
\setcounter{section}{0}
\setcounter{equation}{0}
\spacingset{1.25}
\section{Introduction}
High-dimensional data analysis often encounters testing and estimation of change-points in the mean, and it has attracted a lot of attention in statistics recently. See \cite{horvath2012change}, \cite{jirak2015uniform}, \cite{cho2016change}, \cite{wang2018high}, \cite{liu2020unified}, \cite{wang2021inference}, \cite{zhang2021adaptive} and \cite{yu2021finite,yu2022robust} for some recent literature. Among the proposed tests and estimation methods, most of them require quite strong moment conditions (e.g., Gaussian or sub-Gaussian assumption, or sixth moment assumption) and some of them also require weak component-wise dependence assumption. There are only a few exceptions, such as \cite{yu2022robust}, where they used anti-symmetric and
nonlinear kernels in a U-statistics framework to achieve robustness. However, the limiting distribution of their test statistic is non-pivotal and their procedure requires bootstrap calibration, which could be computationally demanding. In addition, their test statistic targets the sparse alternative only. As pointed out in the review paper by \cite{liu2021high}, the interest in the dense alternative can be well motivated by real data and is often the type of alternative the practitioners want to detect. For example, copy
number variations in cancer cells are commonly manifested as change-points occurring at the same
positions across many related data sequences corresponding to cancer samples and biologically related individuals; see \cite{fan2017}.
In this article, we propose a new test for a change point in the mean of high-dimensional data that works for a broad class of data generating processes. In particular, our test targets the dense alternative, is robust to heavy-tailedness, and can accommodate both weak and strong coordinate-wise dependence. Our test is built on two recent advances in high-dimensional testing: the spatial sign based two-sample test developed in \cite{chakraborty2017tests} and the U-statistic based change-point test developed in \cite{wang2021inference}.
Spatial sign based tests have been studied in the multivariate data literature, where they are usually used to handle heavy-tailedness; see \cite{oja2010multivariate} for a book-length review. However, it was not until recently that \cite{wang2015high} and \cite{chakraborty2017tests} discovered that spatial sign could also help relax the restrictive moment conditions in high-dimensional testing problems. In \cite{wang2021inference}, the authors advanced the high-dimensional two-sample U-statistic pioneered by \cite{chen2010two} to the change-point setting by adopting self-normalization (SN) \citep{shao2010self,shao2010testing}. Their test targets the dense alternative, but requires a sixth moment assumption and only allows for weak coordinate-wise dependence.
Building on these two recent advances, we shall propose a spatial signed SN-based test for a change point in the mean of high-dimensional data.
Our contribution to the literature is threefold. Firstly, we derive the limiting null distribution of our test statistic under the so-called fixed-$n$ asymptotics, where the sample size $n$ is fixed and the dimension $p$ grows to infinity. We discover that the fixed-$n$ asymptotics provide a better approximation to the finite sample distribution when the sample size is small or moderate. We also let $n$ grow to infinity after deriving the $n$-dependent asymptotic distribution, and obtain the limit under the sequential asymptotics \citep{phillips1999}. This type of asymptotics seems new to the high-dimensional change-point literature and may be more broadly adopted in change-point testing and other high-dimensional problems. Secondly, our asymptotic theory covers both scenarios: weak coordinate-wise dependence via $\rho$-mixing, and strong coordinate-wise dependence under the framework of ``randomly scaled $\rho$-mixing sequence" (RSRM) in \cite{chakraborty2017tests}. The process convergence associated with the spatial signed U-process that we develop in this paper further facilitates the application of our test under the sequential asymptotics where $n$, in addition to $p$, also goes to infinity. In particular, we develop novel theory to establish the process convergence result under the RSRM framework. In general, this requires showing finite dimensional convergence and asymptotic equicontinuity (tightness). For the tightness, we derive a bound for the eighth moment of the increment of the sample path based on a conditional argument under the sequential asymptotics, which is new to the literature. Using this new technique, we provide the unconditional limiting null distribution of the test statistic for the fixed-$n$ and growing-$p$ case. This is stronger than the result in \cite{chakraborty2017tests}, which is a conditional limiting null distribution.
Thirdly, we extend our test to estimate multiple change-points by combining the p-value based on the fixed-$n$ asymptotics with the seeded binary segmentation (SBS) \citep{kovacs2020seeded}. The use of the fixed-$n$ asymptotics is especially recommended because, in popular generic segmentation algorithms such as WBS \citep{fryzlewicz2014wild} and SBS, test statistics over many intervals of small/moderate lengths are calculated, and the sequential asymptotics is less accurate in approximating the finite sample distribution than its fixed-$n$ counterpart. The superiority and robustness of our estimation algorithm are corroborated in a small simulation study.
The rest of the paper is organized as follows. In Section \ref{sec:test}, we define the spatial signed SN test. Section \ref{sec:theory} studies the asymptotic behavior of the test under both the null and local alternatives. Extensions to estimating multiple change-points are
elaborated in Section \ref{sec:est}. Numerical studies for both testing and estimation are relegated to Section \ref{sec:num}. Section \ref{sec:real} contains a real data example and Section \ref{sec:con} concludes. All proofs with auxiliary lemmas are given in the appendix.
Throughout the paper, we denote $\to_p$ as the convergence in probability, $\overset{\mathcal{D}}{\rightarrow}$ as the convergence in distribution and $\rightsquigarrow$ as the
weak convergence for stochastic processes. The notations $\bm{1}_d$ and $\bm{0}_d$ are used to represent vectors of dimension $d$ whose entries are all ones and zeros, respectively. For $a,b\in\mathbb{R}$, denote $a\wedge b=\min(a,b)$ and $a\vee b=\max(a,b)$. For a vector $a\in \mathbb{R}^d$, $\|a\|$ denotes its Euclidean norm. For a matrix $A$, $\|A\|_F$ denotes its Frobenius norm.
\section{Test Statistics}\label{sec:test}
Let $\{X_i\}_{i=1}^n$ be a sequence of i.i.d. $\mathbb{R}^p$-valued random vectors with mean $\bm{0}_p$ and covariance $\Sigma$. We assume that the observed data $\{Y_i\}_{i=1}^n$ satisfy $Y_i=\mu_i+X_i$, where $\mu_i\in\mathbb{R}^p$ is the mean at time $i$. We are interested in the following testing problem:
\begin{equation}\label{test}
H_0: \mu_1=\cdots=\mu_n,\quad\text{v.s.}\quad H_1:\mu_1=\cdots=\mu_{k^*}\neq \mu_{k^*+1}=\cdots=\mu_n,\quad\text{for some }2\leq k^*\leq n-1.
\end{equation}
In (\ref{test}), under the null, the mean vectors are constant over time while under the alternative,
there is one change-point at unknown time point $k^*$.
Let $S(X)=X/\|X\|\mathbf{1}(X\neq \mathbf{0})$ denote the spatial sign of a vector $X$. Consider the following spatial signed SN test statistic:
\begin{equation}\label{teststat}
T_n^{(s)}:=\sup_{k=4,\cdots,n-4}\frac{(D^{(s)}(k;1,n))^2}{W_n^{(s)}(k;1,n)},
\end{equation}
where for $1\leq l\leq k<m\leq n$,
\begin{flalign}\label{DW}
D^{(s)}(k;l,m)=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}S(Y_{j_1}-Y_{j_2})'S(Y_{j_3}-Y_{j_4}),\\
\label{W}W_n^{(s)}(k;l,m)=&\frac{1}{n}\sum_{t=l+1}^{k-2}D^{(s)}(t;l,k)^2+\frac{1}{n}\sum_{t=k+2}^{m-2}D^{(s)}(t;k+1,m)^2.
\end{flalign}
Here, the superscript $^{(s)}$ is used to highlight the role the spatial sign plays in constructing the test statistic. This is in contrast to \cite{wang2021inference}, which introduced the test statistic
\begin{equation}
T_n:=\sup_{k=2,\cdots,n-3}\frac{(D(k;1,n))^2}{W_n(k;1,n)},
\end{equation}
with $D(k;l,m)$ and $W_n(k;l,m)$ defined in the same way as (\ref{DW}) and (\ref{W}) but without the spatial sign.
As pointed out by \cite{wang2021inference}, the limiting distribution of (properly standardized) $D(k;1,n)$ relies heavily on the covariance (correlation) structure of $Y_i$, which is typically unknown in practice. One may replace it with a consistent estimator, and this is indeed adopted in high dimensional one-sample or two-sample testing problems, see, for example, \cite{chen2010two} and \cite{chakraborty2017tests}. Unfortunately, in the context of change-point testing, the unknown location $k^*$ makes this method practically unreliable. For our spatial sign based test, the limiting distribution of (properly standardized) $D^{(s)}(k;1,n)$ depends on unknown nuisance parameters that could be a complex functional of the underlying distribution.
To circumvent this issue, following \cite{wang2021inference} and \cite{zhang2021adaptive}, we propose to adopt the SN technique in \cite{shao2010testing} to avoid consistent estimation of the unknown nuisance parameters. The SN technique was initially developed in \cite{shao2010self} and \cite{shao2010testing} in the low-dimensional time series setting; its main idea is to use an inconsistent variance estimator (i.e., a self-normalizer) built on recursive subsample test statistics, so that the limiting distribution is pivotal under the null. See \cite{shao2015self} for a recent review.
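For concreteness, the statistic in (\ref{teststat})--(\ref{W}) can be evaluated directly from its definition. The following Python sketch is ours, not the authors' implementation; it uses the naive four-fold sum, which is only practical for small $n$:

```python
import numpy as np

def spatial_sign(x):
    """S(x) = x / ||x|| for x != 0, and 0 otherwise."""
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else np.zeros_like(x)

def D_s(Y, k, l, m):
    """D^{(s)}(k; l, m): four-fold sum of inner products of spatial signs.

    Indices follow the paper's 1-based convention; observation Y_j is row
    Y[j - 1].  Naive O((k-l)^2 (m-k)^2) evaluation, fine for small n.
    """
    total = 0.0
    for j1 in range(l, k + 1):
        for j3 in range(l, k + 1):
            if j1 == j3:
                continue
            for j2 in range(k + 1, m + 1):
                for j4 in range(k + 1, m + 1):
                    if j2 == j4:
                        continue
                    total += spatial_sign(Y[j1 - 1] - Y[j2 - 1]) @ \
                             spatial_sign(Y[j3 - 1] - Y[j4 - 1])
    return total

def W_s(Y, n, k, l, m):
    """Self-normalizer W_n^{(s)}(k; l, m) from recursive subsample statistics."""
    left = sum(D_s(Y, t, l, k) ** 2 for t in range(l + 1, k - 1))
    right = sum(D_s(Y, t, k + 1, m) ** 2 for t in range(k + 2, m - 1))
    return (left + right) / n

def T_s(Y):
    """T_n^{(s)}: maximize the self-normalized ratio over k = 4, ..., n-4."""
    n = Y.shape[0]
    return max(D_s(Y, k, 1, n) ** 2 / W_s(Y, n, k, 1, n)
               for k in range(4, n - 3))
```

Because the self-normalizer sits in the denominator, no estimate of the unknown covariance structure is needed, which is precisely the point of the SN construction.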
\section{Theoretical Properties}\label{sec:theory}
We first introduce the concept of $\rho$-mixing; see, e.g., \cite{bradley1986basic}. Typical $\rho$-mixing sequences include i.i.d. sequences, $m$-dependent sequences, stationary strong ARMA processes and many Markov chain models.
\begin{defn}[$\rho$-mixing]
A sequence $(\xi_1,\xi_2,\cdots)$ is said to be $\rho$-mixing if
$$
\rho(d)=\sup_{k\geq 1}\sup_{f\in\mathcal{F}_1^{k},g\in\mathcal{F}_{d+k}^{\infty}}|\mathrm{Corr}(f,g)|\to 0,\quad\text{as }d\to\infty,
$$
where $\mathrm{Corr}(f,g)$ denotes the correlation between $f$ and $g$, and $\mathcal{F}_{i}^{j}$ is the $\sigma$-field generated by $(\xi_{i},\xi_{i+1},\cdots,\xi_j)$. Here $\rho(\cdot)$ is called the $\rho$-mixing coefficient of $(\xi_1,\xi_2,\cdots)$.
\end{defn}
\subsection{Assumptions}
To analyze the asymptotic behavior of $T_n^{(s)}$, we make the following assumptions.
\begin{ass}\label{ass_model}
$\{X_i\}_{i=1}^n$ are i.i.d. copies of $\xi$, where $\xi$ is formed by the first $p$ observations from a sequence of strictly stationary and $\rho$-mixing random variables $(\xi_1,\xi_2,\cdots)$ such that $E\xi_1=0$ and $E\xi_1^2=\sigma^2$.
\end{ass}
\begin{ass}\label{ass_mixing}
The $\rho$-mixing coefficients of $\xi$ satisfy $\sum_{k=1}^{\infty}\rho(2^k)<\infty$.
\end{ass}
Assumptions \ref{ass_model} and \ref{ass_mixing} are imposed in \cite{chakraborty2017tests} to analyze the behaviour of the spatial sign based two-sample test statistic for the equality of high-dimensional means. In particular, Assumption \ref{ass_model} allows us to analyze the behavior of $T_n^{(s)}$ in the fixed-$n$ scenario by letting $p$ go to infinity alone. Assumption \ref{ass_mixing} allows weak dependence among the $p$ coordinates of the data, and similar assumptions are also made in, e.g., \cite{wang2021inference} and \cite{zhang2021adaptive}. The strict stationarity assumption can be relaxed under additional conditions, and the scenario corresponding to strong coordinate-wise dependence is treated in Section \ref{sec:rsrm}.
\subsection{Limiting Null}
We begin by deriving the limiting distribution of $T_n^{(s)}$ when $n$ is fixed while letting $p\to\infty$, and then analyze the large sample behavior of the fixed-$n$ limit by letting $n\to\infty$. The sequential asymptotics is fairly common in statistics and econometrics, see \cite{phillips1999}.
\begin{thm}\label{thm_main}
Suppose Assumptions \ref{ass_model} and \ref{ass_mixing} hold; then under $H_0$:
(i) for any fixed $n\geq 8$, as $p\to\infty$, we have
$$
T_n^{(s)}\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n,\quad\text{and}\quad T_n\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n,
$$
where
$$
\mathcal{T}_n:= \sup_{k=4,\cdots,n-4}\frac{nG_n^2(\frac{k}{n};\frac{1}{n},1)}{\sum_{t=2}^{k-2}G_n^2(\frac{t}{n};\frac{1}{n},\frac{k}{n})+\sum_{t=k+2}^{n-2}G_n^2(\frac{t}{n};\frac{k+1}{n},1)},$$ with
\begin{flalign*}
G_n\Big(\frac{k}{n};\frac{l}{n},\frac{m}{n}\Big)=&\frac{(m-l)}{n}\frac{(m-k-1)}{n}Q_n\Big(\frac{l}{n},\frac{k}{n}\Big)+\frac{(m-l)}{n}\frac{(k-l)}{n}Q_n\Big(\frac{k+1}{n},\frac{m}{n}\Big)\\&-\frac{(k-l)}{n}\frac{(m-k-1)}{n}Q_n\Big(\frac{l}{n},\frac{m}{n}\Big),
\end{flalign*}
and $Q_n(\cdot,\cdot)$ is a centered Gaussian process defined on $[0,1]^2$ with covariance structure given by:
\begin{flalign*}
&\mathrm{Cov}(Q_n(a_1,b_1),Q_n(a_2,b_2))\\=&n^{-2}( \lfloor nb_1\rfloor\land \lfloor nb_2\rfloor-\lfloor na_1\rfloor\lor \lfloor na_2\rfloor)(\lfloor nb_1\rfloor\land \lfloor nb_2\rfloor-\lfloor na_1\rfloor\lor\lfloor na_2\rfloor+1)\mathbf{1}(b_1\land b_2>a_1\lor a_2).
\end{flalign*}
(ii) Furthermore, if $n\to\infty$, then
\begin{equation}\label{limitT}\mathcal{T}_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}:=\sup_{r\in (0,1)}\frac{G(r;0,1)^2}{\int_{0}^{r}G(u;0,r)^2du+\int_{r}^{1}G(u;r,1)^2du},\end{equation}
with
\begin{flalign*}
G(r;a,b)=(b-a)(b-r)Q(a,r)+(r-a)(b-a)Q(r,b)-(r-a)(b-r)Q(a,b),
\end{flalign*}
and $Q(\cdot,\cdot)$ is a centered Gaussian process defined on $[0,1]^2$ with covariance structure given by:
$$
\mathrm{Cov}(Q(a_1,b_1),Q(a_2,b_2))=(b_1\land b_2-a_1\lor a_2)^2\mathbf{1}(b_1\land b_2>a_1\lor a_2).
$$
\end{thm}
Theorem \ref{thm_main} (i) states that for each fixed $n\geq 8$, as $p\to\infty$, the limiting distribution $\mathcal{T}_n$ is a functional of a Gaussian process, which is pivotal and can be easily simulated; see Table \ref{tab_cri} for tabulated quantiles with $n=10,20,30,40,50,100,200$ (based on 50,000 Monte Carlo replications). Theorem \ref{thm_main} (ii) indicates that $\mathcal{T}_n$ converges in distribution as $n$ diverges, which is indeed supported by Table \ref{tab_cri}. In fact, $\mathcal{T}$ is exactly the same as the limiting null distribution obtained in \cite{wang2021inference} under the joint asymptotics where both $p$ and $n$ diverge at the same time.
Our spatial signed SN test builds on the test by \cite{chakraborty2017tests}, where an estimator $\widehat{\Sigma}$ for the covariance $\Sigma$ is necessary, as indicated by Section 2.1 therein. However, if the sample size $n$ is fixed, their estimator $\widehat{\Sigma}$ is unbiased but not consistent.
In contrast,
the SN technique adopted in this paper enables us to avoid such estimation, and thus makes fixed-$n$ inference feasible in practice. It is worth noting that the test statistics $T_n^{(s)}$ and $T_n$ share the same limiting null distribution under both fixed-$n$ and sequential asymptotics.
Our test statistic is based on spatial signs and only assumes a finite second moment, which is much weaker than the sixth moment required in \cite{wang2021inference} under joint asymptotics of $p$ and $n$.
The fixed-$n$ asymptotics provides a better approximation to the finite sample distribution of $T_n^{(s)}$ and $T_n$ when $n$ is small or moderate, so the corresponding critical values should be preferred to their counterparts derived under the joint asymptotics.
Thus, when the data are heavy-tailed and the sample size is small, our test is more appealing.
\begin{table}[H]
\caption{Simulated $100\gamma\%$ quantiles of $\mathcal{T}_n$}
\label{tab_cri}
\centering
\begin{tabular}{lllllll}
\hline
$n\backslash\gamma$ & 80\% & 90\% & 95\% & 99\% & 99.5\% & 99.9\% \\ \hline
10 & 1681.46 & 3080.03 & 5167.81 & 14334.10 & 20405.87 & 46201.88 \\
20 & 719.03 & 1124.26 & 1624.11 & 3026.24 & 3810.61 & 5899.45 \\
30 & 633.70 & 965.12 & 1350.52 & 2403.64 & 2988.75 & 4748.03 \\
40 & 609.65 & 926.45 & 1283.00 & 2292.31 & 2750.01 & 4035.71 \\
50 & 596.17 & 889.34 & 1224.98 & 2186.99 & 2624.72 & 3846.51 \\
100 & 594.54 & 881.93 & 1200.31 & 2066.37 & 2482.51 & 3638.74 \\
200 & 592.10 & 878.23 & 1195.25 & 2049.32 & 2456.71 & 3533.44 \\ \hline
\end{tabular}
\end{table}
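The quantiles in Table \ref{tab_cri} can be reproduced by Monte Carlo. The sketch below is ours, not the authors' code: on the grid points entering $\mathcal{T}_n$, the stated covariance of $Q_n$ coincides with that of $\sqrt{2}\,n^{-1}\sum_{l\le i<j\le k}g_{ij}$ for i.i.d. standard Gaussians $g_{ij}$ (an identity we verified only against the covariance formula on the grid, and which we state as an assumption), so $Q_n$ can be simulated through such pair sums with 2D prefix sums:

```python
import numpy as np

def simulate_Tn(n, n_mc=2000, rng=None):
    # Monte Carlo draws from the fixed-n limit T_n of Theorem 1(i).
    # Assumption: Q_n(l/n, m/n) is realized as (sqrt(2)/n) * sum of g[i, j]
    # over l <= i < j <= m with g[i, j] i.i.d. N(0, 1), which matches the
    # stated covariance of Q_n at the grid points.
    rng = np.random.default_rng() if rng is None else rng
    draws = np.empty(n_mc)
    for it in range(n_mc):
        g = np.triu(rng.standard_normal((n + 1, n + 1)), 1)  # g[i, j], i < j
        P = g.cumsum(axis=0).cumsum(axis=1)  # 2D prefix sums for O(1) queries

        def Q(l, m):
            # scaled sum of g[i, j] over l <= i < j <= m (1-based indices)
            if m <= l:
                return 0.0
            s = P[m, m] - P[l - 1, m] - P[m, l - 1] + P[l - 1, l - 1]
            return np.sqrt(2.0) * s / n

        def G(k, l, m):
            # G_n(k/n; l/n, m/n) from Theorem 1
            return ((m - l) / n) * ((m - k - 1) / n) * Q(l, k) \
                 + ((m - l) / n) * ((k - l) / n) * Q(k + 1, m) \
                 - ((k - l) / n) * ((m - k - 1) / n) * Q(l, m)

        best = 0.0
        for k in range(4, n - 3):
            num = n * G(k, 1, n) ** 2
            den = sum(G(t, 1, k) ** 2 for t in range(2, k - 1)) \
                + sum(G(t, k + 1, n) ** 2 for t in range(k + 2, n - 1))
            best = max(best, num / den)
        draws[it] = best
    return draws
```

Taking empirical quantiles of `simulate_Tn(n, 50000)` should then mirror the corresponding row of Table \ref{tab_cri}.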
\subsection{Power Analysis}
Denote $\delta=\mu_n-\mu_1$ as the shift in mean under the alternative, and $\iota^2=\lim_{p\to\infty}p^{-1}\|\delta\|^2$ as the limiting average signal. Next, we study the behavior of the test under both fixed ($\iota>0$) and local alternatives ($\iota=0$).
We first consider the case when the average signal is non-diminishing.
\begin{ass}\label{ass_fix}
(i) $\iota>0$; (ii) $np\|\Sigma\|_F^{-1}\to\infty$ as $p\to\infty$.
\end{ass}
Assumption \ref{ass_fix} (ii) is quite mild and is satisfied by many weakly dependent sequences such as ARMA sequences.
\begin{thm}\label{thm_fix}
[Fixed Alternative] Suppose Assumptions \ref{ass_model}--\ref{ass_fix} hold, then
$$
T_n^{(s)}\to_p\infty, \quad T_n\to_p\infty.$$
\end{thm}
Theorem \ref{thm_fix} shows that when the average signal is non-diminishing, both $T_n^{(s)}$ and $T_n$ are consistent tests. Next, we analyze $T_n^{(s)}$ under local alternatives with $\iota=0$.
\begin{ass}\label{ass_power}
(i) $\iota=0$; (ii) $\delta'\Sigma\delta=o(\|\Sigma\|_F^2)$ as $p\to\infty$.
\end{ass}
Assumption \ref{ass_power} regulates the behavior of the shift size, and is used to simplify the theoretical analysis of $T_n^{(s)}$ under local alternatives. Similar assumptions are also made in \cite{chakraborty2017tests}.
Clearly, when $\Sigma$ is the identity matrix, Assumption \ref{ass_power} (ii) automatically holds if $\iota=0$.
\begin{thm}\label{thm_power}
[Local Alternative] Suppose Assumptions \ref{ass_model}, \ref{ass_mixing} and \ref{ass_power} hold. Assume there exists a $k^*$ such that $\mu_i=\mu$, $i=1,\cdots, k^*$ and $\mu_i=\mu+\delta$, $i=k^*+1,\cdots,n$. Then for any fixed $n$, as $p\to\infty$,
(i) if $np\|\Sigma\|_F^{-1}\iota^2\to\infty$, then $T_n^{(s)}\to_p\infty$ and $T_n\to_p\infty$;
(ii) if $np\|\Sigma\|_F^{-1}\iota^2\to0$, then $T_n^{(s)}\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n$ and $T_n\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n$;
(iii) if $np\|\Sigma\|_F^{-1}\iota^2\to c_n\in(0,\infty)$, then
$
T_n^{(s)}\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n(c_n,\Delta_n),
$ and $
T_n\overset{\mathcal{D}}{\rightarrow} \mathcal{T}_n(c_n,\Delta_n),
$
where \begin{flalign*} &\mathcal{T}_n(c_n,\Delta_n)\\=&\sup_{k=4,\cdots,n-4}\frac{n[\sqrt{2}G_n(\frac{k}{n};\frac{1}{n},1)+c_n\Delta_n(\frac{k}{n};\frac{1}{n},1)]^2}{\sum_{t=2}^{k-2}[\sqrt{2}G_n(\frac{t}{n};\frac{1}{n},\frac{k}{n})+c_n\Delta_n(\frac{t}{n};\frac{1}{n},\frac{k}{n})]^2+\sum_{t=k+2}^{n-2}[\sqrt{2}G_n(\frac{t}{n};\frac{k+1}{n},1)+c_n\Delta_n(\frac{t}{n};\frac{k+1}{n},1)]^2},\end{flalign*}and $$
\Delta_n\Big(\frac{k}{n};\frac{l}{n},\frac{m}{n}\Big)=
\begin{cases}
\frac{4{k-l+1\choose 2}{m-k^*\choose 2}}{n^4},&l<k\leq k^*<m;\\
\frac{4{k^*-l+1\choose 2}{m-k\choose 2}}{n^4},&l<k^*<k<m;\\
0,&\text{otherwise}.
\end{cases}
$$
Furthermore, if $\lim_{n\to\infty}c_n=c\in(0,\infty)$, then as $n\to\infty$,
\begin{equation}\label{TND}
\mathcal{T}_n(c_n,\Delta_n)\overset{\mathcal{D}}{\rightarrow} \mathcal{T}(c,\Delta)
\end{equation}
where $$
\mathcal{T}(c,\Delta):=\sup _{r \in[0,1]} \frac{\{\sqrt{2} G(r ; 0,1)+c \Delta(r, 0,1)\}^{2}}{\int_{0}^{r}\{\sqrt{2} G(u ; 0, r)+c \Delta(u, 0, r)\}^{2} d u+\int_{r}^{1}\{\sqrt{2} G(u ; r, 1)+c \Delta(u, r, 1)\}^{2} d u},
$$
and for $b^*=\lim_{n\to\infty}(k^*/n)$,
$$
\Delta(r, a, b):= \begin{cases}\left(b^{*}-a\right)^{2}(b-r)^{2}, & a<b^{*} \leq r<b; \\ (r-a)^{2}\left(b-b^{*}\right)^{2}, & a<r<b^{*}<b; \\ 0, & \text{otherwise}.\end{cases}
$$
\end{thm}
The above theorem implies that, holding $n$ fixed, the asymptotic power of $T_n^{(s)}$ and $T_n$ depends on the joint behavior of $\delta$ and $\|\Sigma\|_F$. If $\Sigma$ is the identity matrix, then $T_n^{(s)}$ and $T_n$ will exhibit different power behaviors according to whether $\|\delta\|/p^{1/4}$ converges to zero, infinity, or some constant $c_n\in(0,\infty)$.
In addition, under the local alternative, the limiting distribution of $T_n^{(s)}$ and $T_n$ under the sequential asymptotics coincides with that in \cite{wang2021inference} under the joint asymptotics; see Theorem 3.5 therein. In Figure \ref{fig:local}, we plot $\mathcal{T}(c,\Delta)$ at the 10\%, 50\% and 90\% quantile levels with $b^*$ fixed at $1/2$; the plot suggests that $\mathcal{T}(c,\Delta)$ is stochastically increasing in $c$, which further supports the consistency of both tests.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{local.eps}
\caption{$\mathcal{T}(c,\Delta)$ at 10\%, 50\% and 90\% quantile levels. }
\label{fig:local}
\end{figure}
\subsection{Analysis Under Stronger Dependence Structure}\label{sec:rsrm}
In this section, we focus on a special class of probability models for high dimensional data termed ``randomly scaled $\rho$-mixing (RSRM)" sequence.
\begin{defn}[RSRM, \cite{chakraborty2017tests}]
A sequence $(\eta_1,\eta_2,\cdots)$ is a randomly scaled $\rho$-mixing sequence if there exist a zero-mean $\rho$-mixing sequence $(\xi_1,\xi_2,\cdots)$ and an independent positive non-degenerate random variable $R$ such that $\eta_i=\xi_i/R$, $i=1,2,\cdots$.
\end{defn}
RSRM sequences introduce a stronger dependence structure among the coordinates than $\rho$-mixing sequences, and many models fall into this category; see, e.g., the elliptically symmetric models in \cite{wang2015high} and the non-Gaussian sequences in \cite{cai2014two}.
\begin{ass}\label{ass_RSRM}
Suppose $Y_i=X_i/R_i+\mu_i$, where $\{X_i\}_{i=1}^n$ satisfies Assumptions \ref{ass_model} and \ref{ass_mixing}, and $\{R_i\}_{i=1}^n$ are i.i.d. copies of a positive random variable $R$.
\end{ass}
Clearly, when $R$ is degenerate (i.e., a positive constant), Assumption \ref{ass_RSRM} reduces to the model assumed in previous sections. However, when $R$ is non-degenerate, Assumption \ref{ass_RSRM} imposes a stronger dependence structure on the coordinates of $Y_i$ than $\rho$-mixing sequences, and hence results in additional theoretical difficulties. We refer to \cite{chakraborty2017tests} for more discussion of RSRM sequences.
\begin{thm}\label{thm_rsrm}
Suppose Assumption \ref{ass_RSRM} holds, then under $H_0$,
(i) let $\mathcal{R}_n=\{R_i\}_{i=1}^n$; for any fixed $n\geq 8$, if $\mathbb{E}(R_i^2)<\infty$ and $\mathbb{E}(R_i^{-2})<\infty$, then as $p\to\infty$, there exist two random variables $\mathcal{T}_n^{(\mathcal{R}_n,s)}$ and $\mathcal{T}_n^{(\mathcal{R}_n)}$ dependent on $\mathcal{R}_n$ such that
$$
T_n^{(s)}\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n,s)},\quad T_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n)}.$$
(ii)
Furthermore, if we assume $\mathbb{E}(R_i^4)<\infty$ and $\mathbb{E}(R_i^{-4})<\infty$, then as $n\to\infty$, we have $$\mathcal{T}_n^{(\mathcal{R}_n,s)} \overset{\mathcal{D}}{\rightarrow} \mathcal{T},\quad \mathcal{T}_n^{(\mathcal{R}_n)} \overset{\mathcal{D}}{\rightarrow} \mathcal{T}, $$
where $\mathcal{T}$ is defined in (\ref{limitT}).
\end{thm}
In general, if the sample size $n$ is small and $Y_i$ is generated from an RSRM sequence, the unconditional limiting distributions of $T_n^{(s)}$ and $T_n$ as $p\to\infty$ are no longer pivotal due to the randomness in $R_i$. Nevertheless, using the pivotal limiting distribution $\mathcal{T}_n$ in hypothesis testing can still deliver relatively good performance for $T_n^{(s)}$ in both size and power, see Section \ref{sec:size} below for numerical evidence.
If $n$ is also diverging, the same pivotal limiting distribution as presented in Theorem \ref{thm_main} (ii) and in Theorem 3.4 of \cite{wang2021inference} can still be reached.
Let $\Sigma_Y$ be the covariance of $Y_i$ (or equivalently $X_i/R_i$); the next theorem provides the asymptotic behavior under local alternatives for the RSRM model.
\begin{thm}\label{thm_rsrmpower}
Suppose Assumptions \ref{ass_power} and \ref{ass_RSRM} hold, then under the local alternative such that $ n\|\Sigma_Y\|_F^{-1}\|\delta\|^2\to c_n\in (0,\infty)$,
(i) let $\mathcal{R}_n=\{R_i\}_{i=1}^n$; for any fixed $n\geq 8$, if $\mathbb{E}(R_i^2)<\infty$ and $\mathbb{E}(R_i^{-2})<\infty$, then as $p\to\infty$, there exist two random variables $\mathcal{T}_n^{(\mathcal{R}_n,s)}(c_n,\Delta_n^{(\mathcal{R}_n,s)})$ and $\mathcal{T}_n^{(\mathcal{R}_n)}(c_n,\Delta_n)$ dependent on $\mathcal{R}_n$ such that $$T_n^{(s)}\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n,s)}(c_n,\Delta_n^{(\mathcal{R}_n,s)}),\quad T_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n)}(c_n,\Delta_n).$$
(ii)
Furthermore, if we assume $\mathbb{E}(R_i^4)<\infty$ and $\mathbb{E}(R_i^{-4})<\infty$, and $\lim_{n\to\infty}c_n=c\in(0,\infty)$, then as $n\to\infty$, we have $$\mathcal{T}_n^{(\mathcal{R}_n,s)}(c_n,\Delta_n^{(\mathcal{R}_n,s)}) \overset{\mathcal{D}}{\rightarrow} \mathcal{T}(Kc, \Delta),\quad \mathcal{T}_n^{(\mathcal{R}_n)}(c_n,\Delta_n^{(\mathcal{R}_n)}) \overset{\mathcal{D}}{\rightarrow} \mathcal{T}(c, \Delta),$$
where $\mathcal{T}(c,\Delta)$ is defined in (\ref{TND}), and $$K=\mathbb{E}^{-1}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\mathbb{E}(R_1^{-2})\mathbb{E}^2\Big[\frac{R_1R_2}{\sqrt{R_1^{2}+R_2^2}}\Big]>1 $$
is a constant.
\end{thm}
For the RSRM model, similar to Theorem \ref{thm_rsrm} (i), the fixed-$n$ limiting distributions of $T_n^{(s)}$ and $T_n$ are non-pivotal under local alternatives. However, under the sequential limit the distribution of $T_n^{(s)}$ is the pivotal $\mathcal{T}(Kc, \Delta)$, while that of $T_n$ is $\mathcal{T}(c,\Delta)$. The multiplicative constant $K>1$
suggests that for the RSRM model, using $T_n^{(s)}$ could be more powerful, as $\mathcal{T}(c,\Delta)$ is expected to be monotone in $c$; see Figure \ref{fig:local} above. This finding coincides with \cite{chakraborty2017tests}, where it was shown that spatial sign based U-statistics for testing the equality of two high-dimensional means can be more powerful than the conventional mean-based ones in \cite{chen2010two}. Thus, when strong coordinate-wise dependence is exhibited in the data, $T_n^{(s)}$ is preferable.
\section{Multiple Change-point Estimation}\label{sec:est}
In real applications, in addition to change-point testing, another important task is to estimate the number and locations of these change-points.
In this section, we assume there are $m\geq 1$ change-points, denoted by ${\bm{k}}=(k_1,k_2,\cdots,k_m)\subset \{1,2,\cdots,n\}$.
A commonly used algorithm among practitioners is binary segmentation (BS), where the data segments are recursively split at the maximizers of the test statistics until the null of no change-point is no longer rejected on any segment. However, as criticized by many researchers, BS tends to miss potential change-points when non-monotonic change patterns are present. Hence, many algorithms have been proposed to overcome this drawback.
Among them, wild binary segmentation (WBS) by \cite{fryzlewicz2014wild} and its variants have become increasingly popular because of their easy-to-implement procedures. The main idea of WBS is to perform BS on randomly generated sub-intervals so that (with high probability) some sub-intervals contain at most one change-point.
As pointed out by \cite{kovacs2020seeded}, WBS relies on randomly generated sub-intervals and different researchers may obtain different estimates.
Hence, \cite{kovacs2020seeded} proposed the seeded binary segmentation (SBS) algorithm, which is based on a deterministic construction of these sub-intervals with lower computational cost, so that results are replicable.
Motivated by this, we combine the spatial signed SN test with SBS to achieve the task of multiple change-point estimation, and we call the resulting procedure SBS-SN$^{(s)}$. We first introduce the concept of seeded sub-intervals.
\begin{defn}[Seeded Sub-Intervals, \cite{kovacs2020seeded}]\label{def:seed}
Let $\alpha\in[1/2,1)$ denote a given decay parameter. For $1\leq k \leq \lfloor\log_{1/\alpha}(n)\rfloor$ (i.e. logarithm with base $1/\alpha$) define the $k$-th layer as the collection of $n_k$ intervals of initial length $l_k$ that are
evenly shifted by the deterministic shift $s_k$ as follows:
$$
\mathcal{I}_{k}=\bigcup_{i=1}^{n_{k}}\left\{\left(\left\lfloor(i-1) s_{k}\right\rfloor,\left\lceil(i-1) s_{k}+l_{k}\right\rceil\right)\right\}
$$
where $n_{k}=2\left\lceil(1 / \alpha)^{k-1}\right\rceil-1, l_{k}=10\lceil n \alpha^{k-1}/10\rceil$ and $s_{k}=\left(n-l_{k}\right) /\left(n_{k}-1\right) .$ The overall collection of seeded intervals is
$$
\mathcal{I}_{\alpha}(n)=\bigcup_{k=1}^{\left\lceil\log _{1 / \alpha}(n)\right\rceil} \mathcal{I}_{k}.
$$
\end{defn}
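Definition \ref{def:seed} translates directly into code. The sketch below is ours, not the authors' implementation; we set $s_k=0$ when a layer contains a single interval (the formula $(n-l_k)/(n_k-1)$ is then $0/0$) and clip right endpoints at $n$, both of which are our assumptions:

```python
import math

def seeded_intervals(n, alpha=0.5, min_len=8):
    # Seeded sub-intervals of Definition 4.1: layer k holds n_k intervals of
    # length l_k (a multiple of 10, as in the paper), evenly shifted by s_k.
    intervals = set()
    depth = math.ceil(math.log(n, 1 / alpha))
    for k in range(1, depth + 1):
        n_k = 2 * math.ceil((1 / alpha) ** (k - 1)) - 1
        l_k = 10 * math.ceil(n * alpha ** (k - 1) / 10)
        # assumption: a single-interval layer needs no shift
        s_k = (n - l_k) / (n_k - 1) if n_k > 1 else 0.0
        for i in range(1, n_k + 1):
            a = math.floor((i - 1) * s_k)
            b = math.ceil((i - 1) * s_k + l_k)
            b = min(b, n)  # clip to the observation range (assumption)
            if b - a >= min_len:  # keep intervals long enough for the SN test
                intervals.add((a, b))
    return sorted(intervals)
```

Each tuple $(a,b)$ plays the role of a seeded interval in $\mathcal{I}_{\alpha}(n)$; the `min_len` filter mirrors the length requirement of Algorithm \ref{alg}, up to the endpoint convention.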
Let $\alpha\in[1/2,1)$ be a decay parameter, and denote by $\mathcal{I}_{\alpha}(n)$ the set of seeded intervals from Definition \ref{def:seed}.
For each sub-interval $(a,b) \in \mathcal{I}_{\alpha}(n)$, we calculate the spatial signed SN test
$$
T^{(s)}(a,b)=\max_{k\in\{ a+3,\cdots,b-4\}}\frac{(D^{(s)}(k;a,b))^2}{W_{b-a+1}^{(s)}(k;a,b)},\quad b-a\geq 7,
$$
where $D^{(s)}(k;a,b)$ and $W_{b-a+1}^{(s)}(k;a,b)$ are defined in (\ref{DW}) and (\ref{W}). We obtain the p-value of the sub-interval test statistic $T^{(s)}(a,b)$ based on the fixed-$n$ asymptotic distribution $\mathcal{T}_{b-a+1}$. SBS-SN$^{(s)}$ then finds the smallest p-value over all sub-intervals and compares it with a predetermined threshold level $\zeta_p$. If the smallest p-value is below $\zeta_p$, we denote the sub-interval where it is achieved by $(a^*,b^*)$ and estimate the change-point by $\hat{k}=\arg\max_{k\in\{ a^*+3,\cdots,b^*-4\}}\frac{(D^{(s)}(k;a^*,b^*))^2}{W_{b^*-a^*+1}^{(s)}(k;a^*,b^*)}$. Once a change-point is identified, SBS-SN$^{(s)}$ divides the data sample into two subsamples accordingly and applies the same procedure to each of them. The process is implemented recursively until no change-point is detected. Details are provided in Algorithm \ref{alg}.
\begin{algorithm}[!h]
\caption{SBS-SN$^{(s)}$}\label{alg}
\KwIn{Data $\{Y_t\}_{t=1}^{n}$, threshold p-value $\zeta_p\in(0,1)$, SBS intervals $\mathcal{I}_{\alpha}(n)$.}
\KwOut{Estimated number of change-points $\widehat{m}$ and estimated change-points set $\hat{\bm{k}}$}
\KwIni{SBS-SN$^{(s)}$ $(1,n,\zeta_p)$}
\KwPro{SBS-SN$^{(s)}$ $(a,b,\zeta_p)$}
\eIf{$b-a+1<8$}{Stop}{$\mathcal{M}_{(a,b)}:=\{i:[a_i,b_i]\in \mathcal{I}_{\alpha}(n),[a_i,b_i]\subset[a,b], b_i-a_i+1\geq 8\}$ \;
for each $i\in \mathcal{M}_{(a,b)}$, find the p-value $p_i$ of $T^{(s)}(a_i,b_i)$ based on $\mathcal{T}_{b_i-a_i+1}$\;
$i^*=\arg\min_{i\in\mathcal{M}_{(a,b)}}p_i$\;
\eIf{$p_{i^*}<\zeta_p$}{
$k^*=\arg\max_{k\in\{ a_{i^*}+3,\cdots,b_{i^*}-4\}}\frac{(D^{(s)}(k;a_{i^*},b_{i^*}))^2}{W_{b_{i^*}-a_{i^*}+1}^{(s)}(k;a_{i^*},b_{i^*})}$ \;
$\widehat{\bm{k}}=\widehat{\bm{k}}\cup k^*$, $\widehat{m}=\widehat{m}+1$\;
SBS-SN$^{(s)}$ $(a,k^*,\zeta_p)$\;
SBS-SN$^{(s)}$ $(k^*+1,b,\zeta_p)$\;
}{Stop}}
\end{algorithm}
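The recursion in Algorithm \ref{alg} can be separated from the statistic itself. Below is a minimal Python skeleton of that recursion (ours, not the authors' code); \texttt{pvalue\_fn} and \texttt{argmax\_fn} are user-supplied placeholders standing in for the p-value of $T^{(s)}(a_i,b_i)$ under $\mathcal{T}_{b_i-a_i+1}$ and the maximizing $k$, respectively:

```python
def sbs_sn(a, b, seeds, pvalue_fn, argmax_fn, zeta=1e-4, found=None):
    # Recursive skeleton of SBS-SN^{(s)}.  seeds is a list of candidate
    # intervals (a_i, b_i); pvalue_fn(a_i, b_i) and argmax_fn(a_i, b_i) are
    # placeholders for the p-value and the maximizing change-point location.
    found = [] if found is None else found
    if b - a + 1 < 8:  # segment too short for the SN statistic
        return found
    cand = [(ai, bi) for (ai, bi) in seeds
            if a <= ai and bi <= b and bi - ai + 1 >= 8]
    if not cand:
        return found
    # smallest p-value over all admissible sub-intervals
    p_star, a_star, b_star = min((pvalue_fn(ai, bi), ai, bi)
                                 for (ai, bi) in cand)
    if p_star < zeta:
        k_star = argmax_fn(a_star, b_star)
        found.append(k_star)
        # split at the estimated change-point and recurse on both halves
        sbs_sn(a, k_star, seeds, pvalue_fn, argmax_fn, zeta, found)
        sbs_sn(k_star + 1, b, seeds, pvalue_fn, argmax_fn, zeta, found)
    return found
```

Because the threshold is a p-value rather than a critical value, the same driver works for intervals of any length once $\{\mathcal{T}_n\}$ has been tabulated.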
Our SBS-SN$^{(s)}$ algorithm differs from the WBS-SN algorithm in \cite{wang2021inference} and \cite{zhang2021adaptive} in two aspects. First, WBS-SN is built on WBS, which relies on randomly generated intervals, while SBS relies on deterministic intervals. As documented in \cite{kovacs2020seeded}, WBS is computationally more demanding than SBS. Second, the threshold used in WBS-SN is universal across sub-intervals, depends on the sample size $n$ and dimension $p$, and needs to be simulated via extensive Monte Carlo experiments. Generally speaking, WBS-SN requires simulating a new threshold for each new dataset. By contrast, our estimation procedure is based on p-values under the fixed-$n$ asymptotics, which take into account the interval length $b-a+1$ of each sub-interval $(a,b)$. When implementing either WBS or SBS, there will inevitably be intervals of small lengths; hence, a universal threshold may not be suitable, as it ignores the effect of different interval lengths. To alleviate the problem of multiple testing, we may set a small threshold $\zeta_p$, such as 0.0001 or 0.0005. Furthermore, WBS-SN requires specifying a minimal interval length, which can affect its finite sample performance. In this work, when generating seeded sub-intervals as in Definition \ref{def:seed}, the interval lengths are set as integer multiples of 10 to reduce the computational cost of simulating the fixed-$n$ asymptotic distribution $\mathcal{T}_n$. Therefore, SBS-SN$^{(s)}$ only requires knowledge of $\{\mathcal{T}_n\}_{n=10,20,\cdots}$, which can be simulated once and for all and does not change with a new dataset.
\section{Numerical Experiments}\label{sec:num}
This section examines the finite sample behavior of the proposed tests and multiple change-point estimation algorithm SBS-SN$^{(s)}$ via simulation studies.
\subsection{Size and Power}\label{sec:size}
We first assess the performance of $T_n^{(s)}$ under various covariance structures of the data. Consider the following data generating process with $p=100$ and $n\in\{10,20,50,100,200\}$:
$$
Y_i=\delta\mathbf{1}(i>0.5n)+X_i,
$$
where $\delta$ represents the mean shift vector, and $\{X_i\}_{i=1}^n$ are i.i.d. copies of $X$ based on the following specifications:
\begin{enumerate}[(i)]
\item $X\sim \mathcal{N}(\mathbf{0},I_p)$;
\item $X\sim t_5(I_p)$;
\item $X\sim t_3(I_p)$;
\item $X=(X^{(1)},\cdots,X^{(p)})'$, $X^{(t)}=\rho X^{(t-1)}+\varepsilon_t$, $t=1,\cdots,p$, where $\varepsilon_t\sim \mathcal{N}(0,1)/2$ are i.i.d. random variables;
\item $X=(X^{(1)},\cdots,X^{(p)})'$, $X^{(t)}=\rho X^{(t-1)}+\varepsilon_t$, $t=1,\cdots,p$, where $\varepsilon_t\sim t_5/2$ are i.i.d. random variables;
\item $X=R/U$, $R=(R^{(1)},\cdots,R^{(p)})'$, $R^{(t)}=\rho R^{(t-1)}+\varepsilon_t$, $t=1,\cdots,p$, where $\varepsilon_t\sim \mathcal{N}(0,1)/2$ are i.i.d. random variables, and $U\sim \text{Exp}(1)$ is independently generated;
\item $X=R/U$, $R=(R^{(1)},\cdots,R^{(p)})'$, $R^{(t)}=\rho R^{(t-1)}+\varepsilon_t$, $t=1,\cdots,p$, where $\varepsilon_t\sim t_5/2$ are i.i.d. random variables, and $U\sim \text{Exp}(1)$ is independently generated;
\end{enumerate}
where $t_{\nu}(I_p)$ is the multivariate $t$ distribution with $\nu$ degrees of freedom and covariance $I_p$, and $\text{Exp}(1)$ is the exponential distribution with mean $1$.
Case (i) assumes that coordinates of $X$ are independent and light-tailed; Cases (ii) and (iii) consider the scenario of heavy-tailedness of $X$; Cases (iv) and (v) assume the coordinates of $X$ are consecutive random observations from a stationary AR(1) model with autoregressive coefficient $\rho=0.7$; and Cases (vi) and (vii) assume the coordinates of $X$ are generated from an RSRM with $\rho=0.7$.
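The designs above are straightforward to simulate. A sketch covering cases (i), (iv) and (vi) only (the $t$-innovation variants are analogous); the initialization of the AR(1) recursion at its first innovation is our choice, as the text leaves it unspecified:

```python
import numpy as np

def generate_X(n, p, case, rho=0.7, seed=None):
    """Draw n i.i.d. rows X_i for cases (i), (iv) and (vi) above (a sketch)."""
    rng = np.random.default_rng(seed)
    if case == "i":
        return rng.standard_normal((n, p))       # X ~ N(0, I_p)
    eps = rng.standard_normal((n, p)) / 2.0      # eps_t ~ N(0,1)/2
    R = np.empty((n, p))
    R[:, 0] = eps[:, 0]                          # initialization (our choice)
    for t in range(1, p):                        # R^(t) = rho R^(t-1) + eps_t
        R[:, t] = rho * R[:, t - 1] + eps[:, t]
    if case == "iv":
        return R
    if case == "vi":
        U = rng.exponential(1.0, size=(n, 1))    # U ~ Exp(1), one per row
        return R / U
    raise ValueError(f"unsupported case: {case}")
```

The observed data are then obtained by adding the mean shift $\delta$ to the rows with $i>0.5n$, e.g.\ \texttt{Y = X.copy(); Y[n // 2:] += delta}.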
Table \ref{tab_simu} shows the empirical rejection rates of $T_n$ and $T_n^{(s)}$ (in percentage) based on 1000 replications under the null $H_0: \delta=\mathbf{0}$; the dense alternative $H_a^{1}: \delta=1/\sqrt{p}\,\mathbf{1}_p$; and the sparse alternative $H_a^{2}:\delta=(\mathbf{1}_2^{\top},\mathbf{0}_{p-2}^{\top})^{\top}$. We compare the approximations using the limiting null distribution under fixed-$n$ asymptotics, $\mathcal{T}_n$, and under sequential asymptotics, $\mathcal{T}$, at the $5\%$ level.
\begin{table}[H]
\centering
\caption{Size and power comparison of $T_n$ and $T_n^{(s)}$}
\resizebox{0.9\textwidth}{!}{
\label{tab_simu}
\renewcommand\tabcolsep{2pt}
\centering
\begin{tabular}{ccccccccccccccccccccc}
\hline
& & & & \multicolumn{5}{c}{$H_0$} & & \multicolumn{5}{c}{$H_a^1$} & & \multicolumn{5}{c}{$H_a^2$} \\ \cline{5-9} \cline{11-15} \cline{17-21}
Case & Test & Limit & $n$ & 10 & 20 & 50 & 100 & 200 & & 10 & 20 & 50 & 100 & 200 & & 10 & 20 & 50 & 100 & 200 \\ \cline{1-4} \cline{5-9} \cline{11-15} \cline{17-21}
\multirow{4}{*}{(i)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 5.6 & 4.8 & 6.9 & 4.0 & 6.4 & & 6.3 & 6.5 & 15.2 & 34.7 & 77.1 & & 7.3 & 11.0 & 34.1 & 78.0 & 99.9 \\
& & $\mathcal{T}$ & & 27.4 & 9.0 & 7.4 & 4.1 & 6.7 & & 29.9 & 11.2 & 16.5 & 35.2 & 77.8 & & 33.7 & 18.1 & 34.9 & 78.1 & 99.9 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 5.5 & 4.8 & 6.2 & 4.3 & 6.6 & & 6.2 & 5.9 & 15.0 & 33.4 & 76.7 & & 7.2 & 10.4 & 33.3 & 77.7 & 99.8 \\
& & $\mathcal{T}$ & & 28.5 & 8.7 & 6.8 & 4.4 & 7.1 & & 29.8 & 10.8 & 15.7 & 34.6 & 77.6 & & 33.5 & 17.3 & 34.6 & 78.6 & 99.8 \\ \hline
\multirow{4}{*}{(ii)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 6.9 & 6.4 & 6.8 & 4.3 & 6.0 & & 7.2 & 7.2 & 11.7 & 18.5 & 41.4 & & 8.0 & 8.8 & 22.0 & 47.2 & 87.3 \\
& & $\mathcal{T}$ & & 31.8 & 12.6 & 7.6 & 4.3 & 6.2 & & 31.8 & 12.4 & 12.8 & 19.0 & 42.5 & & 33.9 & 15.3 & 22.9 & 47.5 & 87.4 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 5.3 & 5.3 & 6.2 & 4.1 & 5.6 & & 5.7 & 5.5 & 11.8 & 26.1 & 59.6 & & 6.4 & 8.1 & 26.7 & 62.9 & 96.8 \\
& & $\mathcal{T}$ & & 28.2 & 9.7 & 6.7 & 4.2 & 5.7 & & 28.5 & 10.0 & 12.6 & 27.0 & 60.4 & & 30.8 & 14.4 & 28.0 & 63.2 & 96.8 \\ \hline
\multirow{4}{*}{(iii)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 9.0 & 9.5 & 9.2 & 6.7 & 7.9 & & 10.0 & 10.4 & 11.8 & 14.2 & 25.7 & & 9.8 & 12.3 & 17.9 & 27.9 & 57.5 \\
& & $\mathcal{T}$ & & 35.8 & 16.1 & 9.6 & 6.9 & 8.5 & & 35.6 & 16.1 & 12.6 & 14.7 & 26.0 & & 36.8 & 18.7 & 18.8 & 28.7 & 58.2 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 5.6 & 5.0 & 6.4 & 4.8 & 6.4 & & 5.7 & 4.9 & 10.5 & 21.2 & 50.2 & & 6.2 & 6.9 & 21.9 & 53.0 & 93.4 \\
& & $\mathcal{T}$ & & 27.5 & 9.6 & 7.0 & 4.9 & 6.8 & & 29.2 & 9.3 & 11.4 & 21.7 & 50.8 & & 28.6 & 12.8 & 23.0 & 53.9 & 93.7 \\ \hline
\multirow{4}{*}{(iv)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 5.9 & 4.8 & 6.1 & 6.8 & 5.4 & & 6.5 & 8.6 & 23.5 & 46.4 & 78.8 & & 9.3 & 14.7 & 43.2 & 83.8 & 99.6 \\
& & $\mathcal{T}$ & & 28.1 & 9.2 & 6.7 & 6.9 & 5.7 & & 30.9 & 13.8 & 24.6 & 47.4 & 79.1 & & 33.9 & 20.1 & 44.4 & 84.0 & 99.6 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 4.8 & 3.9 & 6.0 & 6.3 & 5.4 & & 5.5 & 7.1 & 22.5 & 44.2 & 77.9 & & 6.9 & 13.0 & 41.5 & 84.1 & 99.8 \\
& & $\mathcal{T}$ & & 27.6 & 8.2 & 6.5 & 6.6 & 5.4 & & 30.6 & 12.2 & 23.5 & 45.3 & 78.1 & & 33.1 & 18.8 & 43.0 & 84.6 & 99.8 \\ \hline
\multirow{4}{*}{(v)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 7.0 & 7.6 & 6.0 & 6.8 & 6.1 & & 8.6 & 11.1 & 17.5 & 30.1 & 54.2 & & 9.4 & 11.3 & 26.7 & 56.7 & 94.0 \\
& & $\mathcal{T}$ & & 33.5 & 12.9 & 6.6 & 7.2 & 6.1 & & 33.6 & 16.9 & 17.9 & 30.4 & 54.6 & & 34.4 & 18.3 & 27.9 & 57.1 & 94.2 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 5.3 & 4.4 & 5.0 & 6.9 & 5.2 & & 6.1 & 7.5 & 18.5 & 37.1 & 65.2 & & 5.6 & 9.4 & 35.2 & 73.7 & 98.7 \\
& & $\mathcal{T}$ & & 29.3 & 8.5 & 5.3 & 7.5 & 5.5 & & 30.5 & 11.8 & 19.0 & 37.7 & 65.6 & & 30.6 & 14.1 & 35.8 & 74.2 & 98.8 \\ \hline
\multirow{4}{*}{(vi)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 34.7 & 39.7 & 39.2 & 34.6 & 33.6 & & 34.6 & 40.7 & 39.4 & 35.6 & 34.2 & & 35.0 & 39.6 & 40.1 & 34.3 & 33.8 \\
& & $\mathcal{T}$ & & 60.2 & 46.7 & 40.5 & 34.9 & 34.1 & & 62.5 & 47.5 & 40.3 & 36.1 & 34.8 & & 60.6 & 46.9 & 41.0 & 34.4 & 34.1 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 5.0 & 4.2 & 5.3 & 5.9 & 5.9 & & 6.0 & 4.8 & 11.3 & 20.1 & 35.3 & & 5.6 & 7.1 & 16.8 & 37.2 & 73.5 \\
& & $\mathcal{T}$ & & 27.9 & 8.6 & 5.7 & 6.2 & 6.1 & & 28.1 & 10.0 & 12.0 & 20.3 & 35.4 & & 28.2 & 11.9 & 17.6 & 38.0 & 74.0 \\ \hline
\multirow{4}{*}{(vii)} & \multirow{2}{*}{$T_n$} & $\mathcal{T}_n$ & & 33.7 & 40.6 & 37.9 & 36.5 & 36.6 & & 34.3 & 40.3 & 37.9 & 37.0 & 36.9 & & 33.5 & 40.6 & 38.3 & 36.9 & 36.8 \\
& & $\mathcal{T}$ & & 61.9 & 47.3 & 38.6 & 37.2 & 36.8 & & 62.2 & 46.5 & 39.1 & 37.4 & 37.1 & & 61.5 & 47.7 & 39.8 & 37.7 & 36.9 \\
& \multirow{2}{*}{$T_n^{(s)}$} & $\mathcal{T}_n$ & & 4.3 & 4.4 & 5.2 & 6.4 & 6.0 & & 5.1 & 6.2 & 9.5 & 17.5 & 32.9 & & 5.1 & 5.8 & 14.1 & 28.5 & 62.5 \\
& & $\mathcal{T}$ & & 30.2 & 8.4 & 5.5 & 6.7 & 6.5 & & 30.6 & 10.1 & 10.2 & 17.7 & 33.5 & & 30.4 & 9.2 & 15.3 & 29.1 & 63.0 \\ \hline
\end{tabular}}
\end{table}
We summarize the findings of Table \ref{tab_simu} as follows: (1) both $T_n$ and $T_n^{(s)}$ suffer from severe size distortion under the sequential asymptotics $\mathcal{T}$ if $n$ is small (i.e., $n=10,20,50$); (2) both the fixed-$n$ asymptotics $\mathcal{T}_n$ and the large-$n$ asymptotics $\mathcal{T}$ work well for $T_n$ and $T_n^{(s)}$ when $n$ is large under weak dependence in the coordinates (cases (i)-(v)); (3) $T_n$ and $T_n^{(s)}$ are both accurate in size and comparable in power when the $X_i$'s are light-tailed (cases (i), (ii), (iv) and (v)), provided the appropriate limiting distributions are used; (4) $T_n$ is slightly oversized compared with $T_n^{(s)}$ under heavy-tailed distributions (case (iii)); (5) under strong dependence in the coordinates (cases (vi) and (vii)), $(T_n^{(s)},\mathcal{T}_n)$ still works for small $n$ while the other combinations of tests and asymptotics generally fail; (6) increasing the data length $n$ enhances power in all settings, while increasing the dependence in the coordinates generally reduces power. Overall, the spatial sign based SN test with the fixed-$n$ asymptotic critical value outperforms all other tests and should be preferred due to its robustness and size accuracy.
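For concreteness, the spatial sign based statistic compared in Table \ref{tab_simu} can be computed by brute force from its definition (as recalled in the proofs): $D^{(s)}(k;l,m)$ sums inner products of the unit vectors $(Y_{j_1}-Y_{j_2})/\|Y_{j_1}-Y_{j_2}\|$ and $(Y_{j_3}-Y_{j_4})/\|Y_{j_3}-Y_{j_4}\|$ over pairs straddling the candidate split $k$, and $T_n^{(s)}$ self-normalizes the squared contrast by its within-segment counterparts. The $O(n^4)$-per-split sketch below (naming ours) is for illustration only, not an efficient implementation:

```python
import numpy as np

def D_sign(Y, k, l, m):
    """Brute-force D^{(s)}(k; l, m): sum over j1 != j3 in [l, k] and
    j2 != j4 in [k+1, m] of the inner products of the spatial signs of
    Y_{j1}-Y_{j2} and Y_{j3}-Y_{j4}.  Indices are 1-based as in the paper."""
    total = 0.0
    for j1 in range(l, k + 1):
        for j3 in range(l, k + 1):
            if j1 == j3:
                continue
            for j2 in range(k + 1, m + 1):
                for j4 in range(k + 1, m + 1):
                    if j2 == j4:
                        continue
                    u = Y[j1 - 1] - Y[j2 - 1]
                    v = Y[j3 - 1] - Y[j4 - 1]
                    total += u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return total

def T_sign(Y):
    """Self-normalized sup statistic T_n^{(s)} = sup_k D^2 / W (a sketch)."""
    n = Y.shape[0]
    best = -np.inf
    for k in range(4, n - 3):                      # k = 4, ..., n-4
        W = (sum(D_sign(Y, t, 1, k) ** 2 for t in range(2, k - 1))
             + sum(D_sign(Y, t, k + 1, n) ** 2 for t in range(k + 2, n - 1))) / n
        best = max(best, D_sign(Y, k, 1, n) ** 2 / W)
    return best
```

A practical implementation would reuse the pairwise spatial signs across splits; the quadruple loop is kept here to mirror the displayed definition term by term.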
\subsection{Segmentation}
We next examine the numerical performance of SBS-SN$^{(s)}$ by considering the multiple change-point models in \cite{wang2021inference}. For $p=50$ and $n=120$, we generate i.i.d. samples $\{X_i\}_{i=1}^{n}$, where the $X_i$'s are either normally distributed (case (i)) or RSRM sequences (case (vii)) with autoregressive coefficient $\rho=0.3$. We assume there are three change-points, at $k=30$, $60$ and $90$. Denote the changes in mean by $\bm{\theta}_1$, $\bm{\theta}_2$ and $\bm{\theta}_3$, where $\bm{\theta}_1=-\bm{\theta}_2=\bm{\theta}_3=\sqrt{h/d}(\bm{1}_{d}^{\top},\bm{0}_{p-d}^{\top})^{\top}$, $d\in\{5,p\}$ and $h\in\{2.5,4\}$. Here, $d$ controls the sparse or dense alternatives while $h$ controls the signal strength. For example, we refer to the choice $d=5$, $h=2.5$ as Sparse(2.5) and $d=p$, $h=4$ as Dense(4).
To assess the estimation accuracy, we consider (1) the difference between the estimated number of change-points $\hat{m}$ and the truth $m=3$; (2) the mean squared error (MSE) between $\hat{m}$ and $m=3$; and (3) the adjusted Rand index (ARI), which measures the similarity between two partitions of the same observations. Generally speaking, a higher ARI (with a maximum value of 1) indicates more accurate change-point estimation; see \cite{hubert1985comparing} for more details.
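The ARI can be computed from the contingency table of the two partitions induced by the true and estimated change-points; a self-contained sketch of the Hubert--Arabie formula (function names ours):

```python
from collections import Counter
from math import comb

def labels_from_cps(cps, n):
    """Segment labels for observations 1..n given sorted change-points,
    with the convention that a change-point at k ends a segment at k."""
    labels, seg, cps = [], 0, sorted(cps)
    for i in range(1, n + 1):
        labels.append(seg)
        if seg < len(cps) and i == cps[seg]:
            seg += 1
    return labels

def adjusted_rand_index(lab1, lab2):
    """Adjusted Rand index of two partitions (Hubert & Arabie, 1985)."""
    n = len(lab1)
    cont = Counter(zip(lab1, lab2))                 # contingency counts n_ij
    sum_ij = sum(comb(v, 2) for v in cont.values())
    sum_a = sum(comb(v, 2) for v in Counter(lab1).values())
    sum_b = sum(comb(v, 2) for v in Counter(lab2).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2.0
    return (sum_ij - expected) / (max_index - expected)
```

For example, comparing the true change-points $(30,60,90)$ with an estimate $(25,60,95)$ at $n=120$ gives an ARI strictly between 0 and 1, while a perfect estimate gives exactly 1.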
We also report the estimation results using WBS-SN in \cite{wang2021inference} and SBS-SN (obtained by replacing WBS with SBS in \cite{wang2021inference}). The superiority of WBS-SN over other competing methods under similar simulation settings is documented in \cite{wang2021inference}. For the SBS based results, we vary the decay rate $\alpha\in\{2^{-1/2}, 2^{-1/4}\}$, denoted by SBS$^{1}$ and SBS$^{2}$, respectively. The thresholds in the above algorithms are either a Monte Carlo simulated constant $\zeta_n$ applied to all sub-interval test statistics, or a prespecified quantile level $\zeta_p\in(0,1)$ applied to the p-values reported by these sub-interval statistics. We fix $\zeta_n$ as the 95\% quantile of the maximal test statistics based on 5000 simulated Gaussian datasets with no change-points, as in \cite{wang2021inference}, while $\zeta_p$ is set as 0.001 (the results using 0.005 are similar and hence omitted). We replicate all experiments 500 times, and the results for dense and sparse changes are reported in Tables \ref{tab_dense} and \ref{tab_sparse}, respectively.
From Table \ref{tab_dense}, we find that (1) WBS-SN and SBS-SN produce similar estimation accuracy, holding the distribution of $X$ and the signal strength fixed; (2) the decay rate has some impact on the performance of the SBS-SN methods, which is noticeable when the signal is weak; (3) increasing the signal strength of the change-points boosts the detection power of all methods; (4) using $\zeta_p$ gives more accurate estimation than using $\zeta_n$ in SBS-SN when the data is normally distributed with a weak signal level, and the two work comparably in the other settings; (5) when $X$ is normally distributed, SBS-SN$^{(s)}$ works comparably with the other estimation algorithms, while for RSRM sequences our SBS-SN$^{(s)}$ greatly outperforms the other methods. The results in Table \ref{tab_sparse} are similar. These findings together suggest a substantial gain in detection performance using SBS-SN$^{(s)}$, owing to its robustness to heavy-tailedness and to stronger dependence in the coordinates.
\begin{table}[H]
\centering
\caption{Estimation results with dense changes.}
\label{tab_dense}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{cccccccccccc}
\hline
& & Test & Threshold & $X$ & \multicolumn{5}{c}{$\hat{m}-m$} & MSE & ARI \\ \cline{6-10}
& & & & & \textless{}-1 & -1 & 0 & 1 & \textgreater{}1 & & \\ \hline
\multirow{14}{*}{Dense(2.5)} & WBS & $T_n$ & $\zeta_n$ & Normal & 95 & 156 & 246 & 3 & 0 & 1.278 & 0.729 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & Normal & 76 & 206 & 214 & 4 & 0 & 1.068 & 0.742 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & Normal & 32 & 195 & 266 & 7 & 0 & 0.660 & 0.809 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & Normal & 0 & 0 & $\bm{464}$ & 35 & 1 & 0.078 & $\bm{0.929}$ \\
& SBS$^2$ & $T_n$ & $\zeta_p$ & Normal & 0 & 10 & $\bm{466}$ & 23 & 1 & $\bm{0.074}$ & $\bm{0.931}$ \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 1 & $\bm{469}$ & 29 & 1 & $\bm{0.068}$ & $\bm{0.928}$ \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 19 & 463 & 18 & 0 & $\bm{0.074}$ & 0.926 \\
& WBS & $T_n$ & $\zeta_n$ & RSRM &64 &89 &133 &104 & 110 &2.668 &0.461 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & RSRM & 35 & 58 & 116 & 107 & 184 & 3.612 & 0.462 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & RSRM & 38 & 72 & 114 & 130 & 146 & 3.032 & 0.470 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & RSRM & 8 & 6 & 32 & 84 & 376 & 8.792 & 0.653 \\
& SBS$^2$ & $T_n$ & $\zeta_p$& RSRM & 57 & 43 & 104 & 106 & 233 & 4.508 & 0.532 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$& RSRM & 3 & 65 & 412 & 20 & 0 & 0.194 & 0.880 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 11 & 121 & 355 & 13 & 0 & 0.356 & 0.850 \\ \hline
\multirow{14}{*}{Dense(4)} & WBS & $T_n$ & $\zeta_n$ & Normal &0 & 7 & 486 & 7 & 0 & 0.028 & 0.948 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & Normal & 0 & 21 & 467 & 12 & 0 & 0.066 & 0.936 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & Normal & 0 & 23 & 464 & 13 & 0 & 0.072 & 0.945 \\
& SBS$^1$ & $T_n$ & $\zeta_p$& Normal & 0 & 0 & 464 & 34 & 2 & 0.084 & 0.937 \\
& SBS$^2$ & $T_n$ & $\zeta_p$& Normal & 0 & 0 & 476 & 23 & 1 & 0.054 & 0.943 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 0 & 468 & 31 & 1 & 0.070 & 0.936 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 0 & 482 & 18 & 0 & 0.036 & 0.942 \\
& WBS & $T_n$ & $\zeta_n$ & RSRM & 64 & 89 & 133 & 104 & 110 & 2.668 & 0.461 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & RSRM & 26 & 67 & 107 & 115 & 185 & 3.732 & 0.484 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & RSRM & 33 & 74 & 115 & 125 & 153 & 3.146 & 0.489 \\
& SBS$^1$ & $T_n$ & $\zeta_p$& RSRM & 27 & 25 & 71 & 93 & 309 & 6.604 & 0.579 \\
& SBS$^2$ & $T_n$ & $\zeta_p$ & RSRM & 39 & 33 & 103 & 114 & 244 & 4.740 & 0.559 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$& RSRM & 0 & 10 & 469 & 20 & 1 & 0.068 & 0.918 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 0 & 35 & 451 & 14 & 0 & 0.098 & 0.911 \\ \hline
\end{tabular}}
{\raggedright\small ~~ ~~Note: Top 3 methods are in bold format. \par}
\end{table}
\begin{table}[H]
\centering
\caption{Estimation results with sparse changes.}
\label{tab_sparse}
\resizebox{0.9\textwidth}{!}{\begin{tabular}{cccccccccccc}
\hline
& & Test & Threshold & $X$ & \multicolumn{5}{c}{$\hat{m}-m$} & MSE & ARI \\ \cline{6-10}
& & & & & \textless{}-1 & -1 & 0 & 1 & \textgreater{}1 & & \\ \hline
\multirow{14}{*}{Sparse(2.5)} & WBS & $T_n$ & $\zeta_n$ & Normal & 78 & 147 & 274 & 1 & 0 & 1.050 & 0.759 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & Normal & 59 & 214 & 223 & 4 & 0 & 0.928 & 0.760 \\
& SBS$^2$ & $T_n$ &$\zeta_n$ & Normal & 20 & 185 & 287 & 8 & 0 & 0.546 & 0.829 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & Normal & 0 & 1 & 464 & 34 & 1 & 0.078 & 0.929 \\
& SBS$^2$ & $T_n$ & $\zeta_p$& Normal & 0 & 13 & 465 & 21 & 1 & 0.076 & 0.930 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 2 & 470 & 27 & 1 & 0.066 & 0.927 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 23 & 460 & 17 & 0 & 0.080 & 0.924 \\
& WBS & $T_n$ & $\zeta_n$ & RSRM & 70 & 97 & 121 & 102 & 110 & 2.682 & 0.449 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & RSRM & 38 & 51 & 110 & 124 & 177 & 3.572 & 0.460 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & RSRM & 39 & 66 & 122 & 122 & 151 & 3.100 & 0.474 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & RSRM & 38 & 35 & 75 & 109 & 278 & 5.800 & 0.447 \\
& SBS$^2$ & $T_n$ & $\zeta_p$ & RSRM & 84 & 69 & 129 & 108 & 179 & 3.258 & 0.528 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 5 & 64 & 414 & 17 & 0 & 0.212 & 0.872 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 8 & 117 & 364 & 11 & 0 & 0.330 & 0.851 \\ \hline
\multirow{14}{*}{Sparse(4)} & WBS & $T_n$ & $\zeta_n$ & Normal & 0 & 7 & 486 & 7 & 0 & 0.028 & 0.958 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & Normal & 0 & 19 & 468 & 13 & 0 & 0.064 & 0.938 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & Normal & 0 & 27 & 458 & 15 & 0 & 0.084 & 0.945 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & Normal & 0 & 0 & 465 & 34 & 1 & 0.076 & 0.938 \\
& SBS$^2$ & $T_n$ & $\zeta_p$ & Normal & 0 & 0 & 477 & 22 & 1 & 0.052 & 0.944 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 0 & 472 & 27 & 1 & 0.062 & 0.937 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & Normal & 0 & 0 & 481 & 19 & 0 & 0.038 & 0.943 \\
& WBS & $T_n$ & $\zeta_n$ & RSRM & 58 & 93 & 125 & 106 & 118 & 2.716 & 0.476 \\
& SBS$^1$ & $T_n$ & $\zeta_n$ & RSRM & 32 & 51 & 96 & 133 & 188 & 3.840 & 0.486 \\
& SBS$^2$ & $T_n$ & $\zeta_n$ & RSRM & 32 & 62 & 121 & 117 & 168 & 3.256 & 0.503 \\
& SBS$^1$ & $T_n$ & $\zeta_p$ & RSRM & 27 & 25 & 76 & 97 & 300 & 6.250 & 0.574 \\
& SBS$^2$ & $T_n$ & $\zeta_p$ & RSRM & 70 & 55 & 119 & 117 & 194 & 3.460 & 0.557 \\
& SBS$^1$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 0 & 8 & 472 & 20 & 0 & 0.056 & 0.914 \\
& SBS$^2$ & $T_n^{(s)}$ & $\zeta_p$ & RSRM & 0 & 38 & 448 & 14 & 0 & 0.104 & 0.908 \\ \hline
\end{tabular}}
\end{table}
\section{Real Data Application}\label{sec:real}
In this section, we analyze a genomic micro-array (ACGH) dataset for 43 individuals with bladder tumors. The ACGH data contain log intensity ratios for these individuals measured at 2215 different loci on their genomes, and copy number variations at the loci can be viewed as change-points in the genome. Hence change-point estimation can be helpful in determining regions of abnormality, as analyzed by \cite{wang2018high} and \cite{zhang2021adaptive}. The data is denoted by $\{Y_i\}_{i=1}^{2215}$.
To illustrate the necessity of the robust estimation method proposed in this paper, we use Hill's estimator to estimate the tail index of each sequence; see \cite{hill1975simple}. Specifically, let $Y_{(i),j}$ denote the ascending order statistics of the $j$th individual (coordinate) across the 2215 observations. For $j=1,2,\cdots,43$, the left-tail and right-tail Hill estimators are given by
$$
H_{1 k,j}=\left\{\frac{1}{k} \sum_{i=1}^{k} \log \left(\frac{Y_{(i),j}}{Y_{(k+1),j}}\right)\right\}^{-1} \quad \text { and } \quad H_{2 k,j}=\left\{\frac{1}{k} \sum_{i=1}^{k} \log \left(\frac{Y_{(n-i+1),j}}{Y_{(n-k),j}}\right)\right\}^{-1},
$$
respectively, and they are plotted in Figure \ref{fig:hill}. From the plot, we see that most of the right-tail and left-tail indices are below 3, suggesting that the data are very likely heavy-tailed.
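The displayed estimators translate directly into code; a sketch for a single coordinate series (function name ours; it assumes the $k+1$ extreme values on each side share the same sign, so that the logarithms are well defined):

```python
import numpy as np

def hill_estimators(y, k):
    """Left- and right-tail Hill estimators of one coordinate series,
    following the displayed formulas with ascending order statistics
    Y_(1) <= ... <= Y_(n)."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    left = 1.0 / np.mean(np.log(y[:k] / y[k]))               # Y_(1..k) vs Y_(k+1)
    right = 1.0 / np.mean(np.log(y[n - k:] / y[n - k - 1]))  # Y_(n-k+1..n) vs Y_(n-k)
    return left, right
```

In the analysis above, this would be applied per individual $j$ over a range of $k$ to produce the curves in Figure \ref{fig:hill}.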
\begin{figure}[H]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{hill_l.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{hill_r.eps}
\end{subfigure}
\caption{Hill's estimator for 43 individuals. }
\label{fig:hill}
\end{figure}
We take the first 200 loci for our SBS-SN$^{(s)}$ change-point estimation, following the practice in \cite{zhang2021adaptive}, where the decay rate for the generation of seeded intervals in SBS is $2^{-1/4}$. We also compare with the results obtained by Adaptive WBS-SN in \cite{zhang2021adaptive} and the 20 most significant points detected by INSPECT in \cite{wang2018high}. For this dataset, INSPECT acts more like a screening method, as it delivers a total of 67 change-points. In contrast to Adaptive WBS-SN and INSPECT, where the thresholds for change-point estimation are simulated, the threshold used in SBS-SN$^{(s)}$ can be pre-specified, and it reflects a researcher's confidence in detecting the change-points. We set the p-value threshold $\zeta_p$ to 0.001, 0.005 and 0.01, and the results are as follows:
$$
\begin{array}{ll}
\text{Adaptive WBS-SN} & 15,32,38,44,59,74,91,97,102,116,134,158,173,186,191 \\
\text {INSPECT} & 15,26,28,33,36,40,56,73,91,97,102,119,131,134,135,146,155,\\&174,180,191\\
\text{SBS-SN$^{(s)}$, $\zeta_p=0.001$}& 30, 41, 72, 89, 130, 136, 174\\
\text{SBS-SN$^{(s)}$, $\zeta_p=0.005$}& 30, 41, 56, 72, 89, 97, 116, 130, 136, 155, 174, 191\\
\text{SBS-SN$^{(s)}$, $\zeta_p=0.01$}& 30, 41, 56, 72, 89, 97, 111, 116, 130, 136, 155, 174, 191
\end{array}
$$
As we can see, increasing the p-value threshold $\zeta_p$ leads to more estimated change-points, and the set of change-points estimated with a larger $\zeta_p$ contains those estimated with a smaller $\zeta_p$ as a subset. In addition, increasing $\zeta_p$ from 0.005 to 0.01 only brings in one more estimated change-point, suggesting that $\zeta_p=0.005$ may be a reasonable choice for the ACGH dataset.
All of the change-points we detect at $\zeta_p=0.005$ are also detected by INSPECT, i.e., 30 (28), 41 (40), 56, 72 (73), 89 (91), 97, 116, 130 (131), 136 (134, 135), 155, 174, 191. Although most of these points also coincide with Adaptive WBS-SN, there are non-overlapping ones. For example, 41, 56 and 130 found by SBS-SN$^{(s)}$ appear to be missed by Adaptive WBS-SN, while 102, detected by both Adaptive WBS-SN and INSPECT, is missed by our SBS-SN$^{(s)}$. These results are not really in conflict, as Adaptive WBS-SN targets both sparse and dense alternatives, whereas our procedure aims to detect dense changes with robustness properties.
\section{Conclusion}\label{sec:con}
In this paper, we propose a new method for testing and estimation of change-points in high dimensional independent data. Our test statistic builds on two recent advances in high-dimensional testing: the spatial sign used in two-sample testing in \cite{chakraborty2017tests} and the self-normalized U-statistics in \cite{wang2021inference}, and inherits many of their advantages, such as robustness to heavy-tailedness and being tuning-free.
The test is theoretically justified under both fixed-$n$ asymptotics and sequential asymptotics, and under both null and alternatives. When data exhibits stronger dependence in coordinates, we further enhance the analysis by focusing on RSRM models, and discover that using spatial sign leads to power improvement compared with mean based tests in \cite{wang2021inference}. As for multiple change-point estimation, we propose to combine p-values under the fixed-$n$ asymptotics with the SBS algorithm. Numerical simulations demonstrate that our fixed-$n$ asymptotics for spatial sign based test provides a better approximation to the finite sample distribution, and the estimation algorithm outperforms the mean-based ones when data is heavy-tailed and when coordinates are strongly dependent.
To conclude, we mention a few interesting topics for future research. Our method builds on the spatial sign and targets dense signals by constructing unbiased estimators for $\|\mathbb{E} S(Y_1-Y_n)\|$. As pointed out by \cite{liu2020unified}, many real datasets exhibit both sparse and dense changes, and it would be interesting to combine our approach with the adaptive SN based test in \cite{zhang2021adaptive} to achieve both robustness and adaptiveness. In addition, the independence assumption imposed in this paper may limit its applicability to high dimensional time series, where temporal dependence cannot be neglected. It is desirable to relax the independence assumption via U-statistics based on trimmed observations, as adopted in \cite{wang2021inference}. It would also be interesting to develop robust methods for detecting change-points in quantities beyond the mean, such as quantiles, covariance matrices and parameter vectors in high dimensional linear models.
\setcounter{section}{0}
\begin{center}
\Large Supplement to ``Robust Inference for Change Points in High Dimension''
\end{center}
\baselineskip=1.25\baselineskip
\renewcommand{\thetable}{S.\arabic{table}}
\renewcommand{\thefigure}{S.\arabic{figure}}
\renewcommand{\thesection}{S.\arabic{section}}
\renewcommand{\theass}{S.\arabic{ass}}
\renewcommand{\thethm}{S.\arabic{thm}}
\renewcommand{\theequation}{S.\arabic{equation}}
\spacingset{1.25}
This supplementary material contains all the technical proofs for the main paper. Section \ref{sec:proof} contains all the proofs of main theorems and Section \ref{sec:lemma} contains auxiliary lemmas and their proofs.
In what follows, let $a_{i,k}$ denote the $k$th coordinate of a vector $a_i$. We write $a_n\lesssim b_n$ if there exist $M,C>0$ such that $a_n\leq C b_n$ for all $n>M$.
\section{Proofs of Theorems}\label{sec:proof}
\subsection{Proof of Theorem \ref{thm_main}}
First, we have that
\begin{flalign}\label{decomp}
\|Y_i-Y_j\|^2=\sum_{\ell=1}^{p}(X_{i,\ell}-X_{j,\ell})^2+2(\mu_j-\mu_i)'(X_{j}-X_i)+\|\mu_i-\mu_j\|^2.
\end{flalign}
(i) Under $H_0$, by Theorem 8.2.2 in \cite{zhengyan1997limit}, as $p\to\infty$, we have almost surely,
\begin{flalign}\label{consis}
\frac{1}{p} \|Y_i-Y_j\|^2\to 2\sigma^2.
\end{flalign}
Then, for any fixed $k,l,m$, we have that
\begin{flalign}\label{D}
\begin{split}
&D^{(s)}(k;l,m)\\=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}{2p\sigma^2}
\\&+\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}{2p\sigma^2}\Big\{\frac{2p\sigma^2}{\|Y_{j_1}-Y_{j_2}\|\|Y_{j_3}-Y_{j_4}\|}-1\Big\}
\\=:&(2p\sigma^2)^{-1}[D_1(k;l,m)+D_2(k;l,m)],
\end{split}
\end{flalign}
where clearly $D_1(k;l,m)=D(k;l,m)$, and
\begin{flalign*}
D_2(k;l,m)=\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}\Big\{\frac{2p\sigma^2}{\|Y_{j_1}-Y_{j_2}\|\|Y_{j_3}-Y_{j_4}\|}-1\Big\}.
\end{flalign*}
Then, Theorem 4.0.1 in \cite{zhengyan1997limit} implies that
$$
\frac{\Gamma^{-1/2}(k;l,m)\Big\{D_1(k;l,m)\Big\}}{(m-k)(m-k-1)(k-l+1)(k-l)}\overset{\mathcal{D}}{\rightarrow} \mathcal{N}(0,1),
$$
where $\Gamma(k;l,m)=\frac{2[(m-l)(m-l-1)]}{(m-k)(m-k-1)(k-l+1)(k-l)}\mathrm{tr}(\Sigma^2)$, or equivalently
\begin{flalign}\label{D1}
\frac{1}{n^3\|\Sigma\|_F}\Big\{D_1(k;l,m)\Big\}\overset{\mathcal{D}}{\rightarrow} \mathcal{N}\Big(0,\frac{16{m-l\choose2}{k-l+1\choose2}{m-k\choose 2}}{n^6}\Big).
\end{flalign}
Next, since we view $n$ as fixed, for all $j_1\neq j_3$ and $j_2\neq j_4$, Theorem 4.0.1 in \cite{zhengyan1997limit} implies that $\|\Sigma\|_F^{-1}(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})=O_p(1)$. In addition, in view of (\ref{consis}) we have
$\frac{2p\sigma^2}{\|Y_{j_1}-Y_{j_2}\|\|Y_{j_3}-Y_{j_4}\|}-1=o_p(1)$, which implies that $n^{-3}\|\Sigma\|_F^{-1}D_2(k;l,m)=o_p(1)$.
Hence, combined with (\ref{D1}), we have
\begin{flalign}\label{equiv}
\begin{split}
T_n^{(s)}
=&\sup_{k=4,\cdots,n-4}\frac{\Big(2p\sigma^2 n^{-3}\|\Sigma\|_F^{-1}D^{(s)}(k;1,n)\Big)^2}{4p^2\sigma^4n^{-6} \|\Sigma\|_F^{-2}W_n^{(s)}(k;1,n)}
\\=&\sup_{k=4,\cdots,n-4}\frac{ n^{-6}\|\Sigma\|_F^{-2}[D_1(k;1,n)+D_2(k;1,n)]^2}{n^{-6} \|\Sigma\|_F^{-2}W_{n}(k;1,n)}+o_p(1)
\\=&T_n+o_p(1),
\end{split}
\end{flalign}
where the last equality holds since $n^{-3}\|\Sigma\|_F^{-1}D_2(k;l,m)=o_p(1)$ for each triplet $(k,l,m)$.
For $0\leq k<m\leq n$, we let
$$
Z(k,m)=\sum_{i=k+1}^{m}\sum_{j=k}^{i-1}X_i'X_j,
$$
then it follows that
\begin{flalign}\label{decompD}
\begin{split}
D(k;l,m)=&2(m-k)(m-k-1)Z(l,k)+2(k-l+1)(k-l)Z(k+1,m)\\&-2(k-l)(m-k-1)[Z(l,m)-Z(l,k)-Z(k+1,m)].
\end{split}
\end{flalign}
Then, by Lemma \ref{lem_fix} and the continuous mapping theorem, we have
$$
T_n\overset{\mathcal{D}}{\rightarrow} \sup_{k=4,\cdots,n-4}\frac{nG_n^2(\frac{k}{n};\frac{1}{n},1)}{\sum_{t=2}^{k-1}G_n^2(\frac{t}{n};\frac{1}{n},\frac{k}{n})+\sum_{t=k+2}^{n-2}G_n^2(\frac{t}{n};\frac{k+1}{n},1)}.
$$
(ii) The proof is a simplified version of the proof of Theorem \ref{thm_rsrm} (ii), hence omitted here.
\qed
\subsection{Proof of Theorem \ref{thm_fix}}
Clearly,
$$
T_n^{(s)}=\sup_{k=4,\cdots,n-4}\frac{(D^{(s)}(k;1,n))^2}{W_n^{(s)}(k;1,n)}\geq \frac{(D^{(s)}(k^*;1,n))^2}{W_n^{(s)}(k^*;1,n)},
$$
and
$$
T_n=\sup_{k=4,\cdots,n-4}\frac{(D(k;1,n))^2}{W_n(k;1,n)}\geq \frac{(D(k^*;1,n))^2}{W_n(k^*;1,n)}.
$$
Note that
$W_n^{(s)}(k^*;1,n)=\frac{1}{n}\sum_{t=2}^{k^*-2}D^{(s)}(t;1,k^*)^2+\frac{1}{n}\sum_{t=k^*+2}^{n-2}D^{(s)}(t;k^*+1,n)^2.$ The construction of $D^{(s)}(t;1,k^*)^2$ (or $D^{(s)}(t;k^*+1,n)^2$) only uses samples before (or after) the change point, so the change point has no influence on this part. The proof of Theorem \ref{thm_main} indicates that $4p^2n^{-6}\|\Sigma\|_F^{-2}W_n^{(s)}(k;1,n)=O_p(1)$ and similarly $4n^{-6}\|\Sigma\|_F^{-2}W_n(k;1,n)=O_p(1)$. Hence, it suffices to show $pn^{-3}\|\Sigma\|_F^{-1}D^{(s)}(k^*;1,n)\to_p\infty$ and $n^{-3}\|\Sigma\|_F^{-1}D(k^*;1,n)\to_p\infty$.
Denote $\delta_i$ as the $i$th element of $\delta$. By (\ref{decomp}), for $1\leq j_1\neq j_3\leq k^*$ and $k^*+1\leq j_2\neq j_4\leq n$,
\begin{flalign*}
p^{-1}\|Y_{j_1}-Y_{j_2}\|^2=& p^{-1}\|\delta\|^2+p^{-1}\sum_{i=1}^{p}(X_{j_1,i}-X_{j_2,i})^2-p^{-1}\sum_{i=1}^{p}2\delta_i(X_{j_1,i}-X_{j_2,i}),\\
p^{-1}\|Y_{j_3}-Y_{j_4}\|^2=& p^{-1}\|\delta\|^2+p^{-1}\sum_{i=1}^{p}(X_{j_3,i}-X_{j_4,i})^2-p^{-1}\sum_{i=1}^{p}2\delta_i(X_{j_3,i}-X_{j_4,i}),
\end{flalign*}
and
\begin{flalign*}
p^{-1}(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})=& p^{-1}\|\delta\|^2+p^{-1}\sum_{i=1}^{p}(X_{j_1,i}-X_{j_2,i})(X_{j_3,i}-X_{j_4,i})\\&-p^{-1}\sum_{i=1}^{p}\delta_i(X_{j_1,i}-X_{j_2,i})-p^{-1}\sum_{i=1}^{p}\delta_i(X_{j_3,i}-X_{j_4,i}).
\end{flalign*}
Using Theorem 8.2.2 in \cite{zhengyan1997limit}, and the independence of $X_i$'s, we have
\begin{flalign*}
p^{-1}\|Y_{j_1}-Y_{j_2}\|^2\to_p\iota^2+2\sigma^2,\\
p^{-1}\|Y_{j_3}-Y_{j_4}\|^2\to_p\iota^2+2\sigma^2,
\end{flalign*}
and
\begin{flalign*}
p^{-1}(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})\to_p \iota^2.
\end{flalign*}
If $\iota>0$, then
$$
n^{-4}D^{(s)}(k^*;1,n)\to_p n^{-4}k^*(k^*-1)(n-k^*)(n-k^*-1) \frac{\iota^2}{\iota^2+2\sigma^2}>0,
$$
and
$$
p^{-1}n^{-4}D(k^*;1,n)\to_p n^{-4}(k^*)(k^*-1)(n-k^*)(n-k^*-1)\iota^2>0.
$$
Hence, $$pn^{-3}\|\Sigma\|_F^{-1}D^{(s)}(k^*;1,n)=(pn\|\Sigma\|_F^{-1})n^{-4}D^{(s)}(k^*;1,n)\to_p\infty,$$ and $$n^{-3}\|\Sigma\|_F^{-1}D(k^*;1,n)=(pn\|\Sigma\|_F^{-1})p^{-1}n^{-4}D(k^*;1,n)\to_p\infty.$$
\qed
\subsection{Proof of Theorem \ref{thm_power}}
By symmetry, we only consider the case $l<k\leq k^*<m$. Since (\ref{consis}) still holds under Assumption \ref{ass_power} by the Cauchy-Schwarz inequality, using similar arguments as in the proof of Theorem \ref{thm_main}, we have
\begin{flalign}\label{D1_decomp}
\begin{split}
&2p\sigma^2D^{(s)}(k;l,m)\\=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}(X_{j_1}-X_{j_2})'(X_{j_3}-X_{j_4})(1+o(1))
\\&+(k-l+1)(k-l)(m-k^*)(m-k^*-1)\|\delta\|^2(1+o(1))\\&-\Big(2(k-l)(m-k^*)(m-k-2)\sum_{j=l}^{k}X_j'\delta+4(k-l)(k-l-1)(m-k^*)\sum_{j=k+1}^{k^*}X_j'\delta\Big)(1+o(1))
\\=:&D^{(s)}_{(1)}(k;l,m)+D^{(s)}_{(2)}(k;l,m)-D^{(s)}_{(3)}(k;l,m).
\end{split}
\end{flalign}
That is, $2p\sigma^2D^{(s)}(k;l,m)=D(k;l,m)(1+o(1))$ for any triplet $(k,l,m)$, hence it suffices to consider $T_n^{(s)}$, as the results for $T_n$ are similar.
We first note that
$$
\mathrm{Var}(X_i'\delta)=\delta'\Sigma\delta=o(\|\Sigma\|_F^2),
$$
hence by Chebyshev inequality, for any triplet $(k,l,m)$, we have
\begin{equation}\label{D3}
n^{-3}\|\Sigma\|_F^{-1}D^{(s)}_{(3)}(k;l,m)=o_p(1).
\end{equation}
(i)
By similar arguments in the proof of Theorem \ref{thm_fix}, it suffices to show $$2p\sigma^2n^{-3}\|\Sigma\|_F^{-1}D^{(s)}(k^*;1,n)\to_p\infty.$$
In fact, by similar arguments used in the proof of Theorem \ref{thm_main}, we can show that $$ n^{-3}\|\Sigma\|_F^{-1}D^{(s)}_{(1)}(k;l,m)=O_p(1).$$ Then, recalling (\ref{D1_decomp}), the result follows by noting $$n^{-3}\|\Sigma\|_{F}^{-1} D^{(s)}_{(2)}\left(k^{*} ; 1, n\right)=n^{-3}\|\Sigma\|_{F}^{-1}k^{*}(k^{*}-1)\left(n-k^{*}\right)\left(n-k^{*}-1\right)\|\delta\|^{2}(1+o(1)) \rightarrow \infty.$$
(ii) As $n\|\Sigma\|_F^{-1}\|\delta\|^2\to0$, it follows from the same argument as (\ref{equiv}).
(iii) As $n\|\Sigma\|_F^{-1}\|\delta\|^2\to c_n\in (0,\infty)$, then we have
\begin{flalign*}
&n^{-3}\|\Sigma\|_F^{-1}D^{(s)}_{(2)}(k;l,m)\\=&n^{-3}\|\Sigma\|_F^{-1}(k-l+1)(k-l)(m-k^*)(m-k^*-1)\|\delta\|^2(1+o(1))\\\to& c_n\frac{4{k-l+1\choose 2}{m-k^*\choose 2}}{n^4}.
\end{flalign*}
Therefore, continuous mapping theorem together with Lemma \ref{lem_fix} indicate that
$$
T_n^{(s)}\overset{\mathcal{D}}{\rightarrow} \sup_{k=4,\cdots,n-4}\frac{n[\sqrt{2}G_n(\frac{k}{n};\frac{1}{n},1)+c_n\Delta_n(\frac{k}{n};\frac{1}{n},1)]^2}{\sum_{t=2}^{k-2}[\sqrt{2}G_n(\frac{t}{n};\frac{1}{n},\frac{k}{n})+c_n\Delta_n(\frac{t}{n};\frac{1}{n},\frac{k}{n})]^2+\sum_{t=k+2}^{n-2}[\sqrt{2}G_n(\frac{t}{n};\frac{k+1}{n},1)+c_n\Delta_n(\frac{t}{n};\frac{k+1}{n},1)]^2}.
$$
The last part of the proof is similar to the proof of Theorem \ref{thm_rsrmpower} (ii) below, and is simpler, hence omitted.
\qed
\subsection{Proof of Theorem \ref{thm_rsrm}}
(i)
Note that
\begin{equation}
\frac{1}{p}\|Y_i-Y_j\|^2=\frac{1}{p}\sum_{\ell=1}^{p}\Big(\frac{X_{i,\ell}}{R_i}-\frac{X_{j,\ell}}{R_j}\Big)^2,
\end{equation}
hence given $\mathcal{R}_n$, as $p\to\infty$, we have almost surely
\begin{equation}\label{consis2}
\fracrac{1}{p}\|Y_i-Y_j\|^2\to \sigma^2(R_{i}^{-2}+R_{j}^{-2}).
\end{equation}
Note that
\begin{flalign}\label{D34}
\begin{split}
&p\sigma^2D^{(s)}(k;l,m)\\=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}
\\&+\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}\Big\{\frac{p\sigma^2(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}{\|Y_{j_1}-Y_{j_2}\|\|Y_{j_3}-Y_{j_4}\|}-1\Big\}
\\=:&[D_3(k;l,m)+D_4(k;l,m)].
\end{split}
\end{flalign}
{Let
\begin{flalign*}
A_{j_1,j_3}(k;l,m)=&\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2},\\
B_{j_2,j_4}(k;l,m)=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2},
\end{flalign*}
and
\begin{flalign*}
C_{j_1,j_2}(k;l,m)=&-2\sum_{\substack{l\leq j_3\leq k\\j_3\neq j_1}}\sum_{\substack{k+1\leq j_4\leq m\\j_4\neq j_2}}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2}.
\end{flalign*}
Then under $H_0$,
\begin{align*}
&D_3(k;l,m)\\
=& \sum_{\substack{l \leq j_1, j_3 \leq k\\j_1 \neq j_3}}X_{j_1}^TX_{j_3}(R_{j_1}R_{j_3})^{-1}A_{j_1,j_3}(k;l,m)+\sum_{\substack{k + 1 \leq j_2, j_4 \leq m\\ j_2 \neq j_4}}X_{j_2}^TX_{j_4}(R_{j_2}R_{j_4})^{-1}B_{j_2,j_4}(k;l,m)\\
&+\sum_{l \leq j_1 \leq k}\sum_{k+1 \leq j_2 \leq m}X_{j_1}^TX_{j_2}(R_{j_1}R_{j_2})^{-1}C_{j_1,j_2}(k;l,m).
\end{align*}
Denote $U_1 = (X_1^TX_2,...,X_1^TX_n, X_2^TX_3,...,X_2^TX_n,...,X_{n-1}^TX_n)^T$, which contains the inner products of $X_i$ and $X_j$ for all $i\neq j$, and $U_2 = (R_1,...,R_n)^T$. By definition, $\sigma(U_1) \perp \!\!\! \perp \sigma(U_2)$, where $\sigma(U)$ is the $\sigma$-field generated by $U$, and we further observe that $2p\sigma^2D_3(k;l,m)$ is a continuous functional of $U_1$ and $U_2$. Hence to derive the limiting distribution of $2p\sigma^2D_3(k;l,m)$ as $p \rightarrow \infty$, it suffices to derive the limiting distribution of $(U_1,U_2)^T$.
For any $\alpha \in \mathbb{R}^{n(n-1)/2}$, similar to the proof of Theorem \ref{thm_main}, by Theorem 4.0.1 in \cite{zhengyan1997limit} we have
$$\|\Sigma\|_F^{-1}\alpha^TU_1 \overset{\mathcal{D}}{\rightarrow} \alpha^T\mathcal{Z} := \alpha^T(\mathcal{Z}_{1,2},\mathcal{Z}_{1,3},...,\mathcal{Z}_{1,n},\mathcal{Z}_{2,3},...,\mathcal{Z}_{2,n},...,\mathcal{Z}_{n-1,n})^T,$$
where $\mathcal{Z}_{1,2},...,\mathcal{Z}_{n-1,n}$ are i.i.d. standard normal random variables, and we can assume $\mathcal{Z}$ is independent of $U_2$. For ease of notation, we let $\mathcal{Z}_{i,j} = \mathcal{Z}_{j,i}$ for all $i > j$. Furthermore, since $\sigma(U_1) \perp \!\!\! \perp \sigma(U_2)$, for any $\alpha \in \mathbb{R}^{n(n-1)/2}$ and $\beta \in \mathbb{R}^{n}$, the characteristic function of $\alpha^TU_1 + \beta^TU_2$ is the product of the characteristic function of $\alpha^TU_1$ and that of $\beta^TU_2$. By applying the Cram\'er-Wold device, $(\|\Sigma\|_F^{-1}U_1,U_2) \overset{\mathcal{D}}{\rightarrow} (\mathcal{Z},U_2)$.
Therefore, by continuous mapping theorem, as $p \rightarrow \infty$,
\begin{equation}\label{D3_con}
n^{-3}\|\Sigma\|_F^{-1}D_3(k;l,m) \overset{\mathcal{D}}{\rightarrow} G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n),
\end{equation}
where
\begin{flalign}\label{GnR}
\begin{split}
&G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n) \\:=& n^{-3}\sum_{\substack{l \leq j_1, j_3 \leq k\\j_1 \neq j_3}}\mathcal{Z}_{j_1,j_3}(R_{j_1}R_{j_3})^{-1}A_{j_1,j_3}(k,l,m)+n^{-3}\sum_{\substack{k + 1 \leq j_2, j_4 \leq m\\ j_2 \neq j_4}}\mathcal{Z}_{j_2,j_4}(R_{j_2}R_{j_4})^{-1}B_{j_2,j_4}(k,l,m)\\
& +n^{-3}\sum_{l \leq j_1 \leq k}\sum_{k+1 \leq j_2 \leq m}\mathcal{Z}_{j_1,j_2}(R_{j_1}R_{j_2})^{-1}C_{j_1,j_2}(k,l,m).
\end{split}
\end{flalign}
It is clear that the conditional distribution of $G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n)$ given $\mathcal{R}_n$ is Gaussian, and for any $l_1 < k_1 < m_1, l_2 < k_2 < m_2$, $k_1,k_2,l_1,l_2,m_1,m_2 = 1,2,...,n$, the covariance structure is given by
\begin{flalign}\label{cov}
&\mathrm{Cov}(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n),G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n)|\mathcal{R}_n)
\\=&\notag2n^{-6}\Big\{\sum_{\substack{(l_1\lor l_2)\leq j_1,j_2\leq (k_1\land k_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}A_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+\sum_{\substack{(l_1\lor k_2+1)\leq j_1,j_2\leq (k_1\land m_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}A_{j_1,j_2}(k_1;l_1,m_1)B_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+2\sum_{j_1=(l_1\lor l_2)}^{k_2}\sum_{j_2=k_2+1}^{(m_2\land k_1)}\mathbf{1}(k_1>k_2)R_{j_1}^{-2}R_{j_2}^{-2}A_{j_1,j_2}(k_1;l_1,m_1)C_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+\sum_{\substack{(k_1+1\lor l_2)\leq j_1,j_2\leq (m_1\land k_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+\sum_{\substack{(k_1+1\lor k_2+1)\leq j_1,j_2\leq (m_1\land m_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)B_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+2\sum_{j_1=(k_1+1\lor l_2)}^{k_2}\sum_{j_2=k_2+1}^{(m_1\land m_2)}\mathbf{1}(m_1> k_2)R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)C_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+2\sum_{j_1=(l_1\lor l_2)}^{k_1}\sum_{j_2=k_1+1}^{(m_1\land k_2)}\mathbf{1}(k_2>k_1)R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+2\sum_{j_1=(k_2+1\lor l_1)}^{k_1}\sum_{j_2=k_1+1}^{(m_1\land m_2)}\mathbf{1}(m_2> k_1)R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)B_{j_1,j_2}(k_2;l_2,m_2)
\\&\notag+\sum_{j_1=(l_1\lor l_2)}^{(k_1\land k_2)}\sum_{j_2=(k_1+1\lor k_2+1)}^{(m_1\land m_2)}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)C_{j_1,j_2}(k_2;l_2,m_2)\Big\}.
\end{flalign}
Clearly, when $R_i\varepsilonquiv1$, we have $2D_3(k;l,m)=D_1(k;l,m)$ where $D_1(k;l,m)$ is defined in (\ref{D}), and the result reduces to (\ref{D1}).
Using (\ref{consis2}), we can see that given $\mathcal{R}_n$, $\fracrac{D_4(k;l,m)}{n^3\|\Sigma\|_F}=o_p(1)$. Hence, given $\mathcal{R}_n$, we have
$$
T_n^{(s)}=\sup_{k=4,\cdots,n-4}\frac{[D_3(k;1,n)]^2}{\frac{1}{n}\sum_{t=2}^{k-2}D_3(t;1,k)^2+\frac{1}{n}\sum_{t=k+2}^{n-2}D_3(t;k+1,n)^2}+o_p(1).
$$
Then, by (\ref{D3_con}), we have that as $p\to\infty$,
$$
T_n^{(s)}|\mathcal{R}_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n,s)}:= \sup_{k=4,\cdots,n-4}\frac{n[G_n^{(\mathcal{R}_n,s)}(k/n;1/n,1)]^2}{\sum_{t=2}^{k-1}[G_n^{(\mathcal{R}_n,s)}(t/n;1/n,k/n)]^2+\sum_{t=k+2}^{n-2}[G_n^{(\mathcal{R}_n,s)}(t/n;(k+1)/n,1)]^2}.$$
As for $T_n$, note that \begin{align*}
D(k;l,m)
=& (m-k)(m-k-1)\sum_{\substack{l \leq j_1, j_3 \leq k\\j_1 \neq j_3}}X_{j_1}^TX_{j_3}(R_{j_1}R_{j_3})^{-1}\\&+(k-l+1)(k-l)\sum_{\substack{k + 1 \leq j_2, j_4 \leq m\\ j_2 \neq j_4}}X_{j_2}^TX_{j_4}(R_{j_2}R_{j_4})^{-1}\\
&-2(k-l)(m-k-1)\sum_{l \leq j_1 \leq k}\sum_{k+1 \leq j_2 \leq m}X_{j_1}^TX_{j_2}(R_{j_1}R_{j_2})^{-1}.
\end{align*}
Using similar arguments as in (\ref{D3_con}), we have
\begin{equation}\label{D_con}
n^{-3}\|\Sigma\|_F^{-1}D(k;l,m) \overset{\mathcal{D}}{\rightarrow} G_n^{(\mathcal{R}_n)}(k/n;l/n,m/n),
\end{equation}
where
\begin{flalign}\label{GnR2}
\begin{split}
G_n^{(\mathcal{R}_n)}(k/n;l/n,m/n)=& (m-k)(m-k-1)n^{-3}\sum_{\substack{l \leq j_1, j_3 \leq k\\j_1 \neq j_3}}\mathcal{Z}_{j_1,j_3}(R_{j_1}R_{j_3})^{-1}\\&+(k-l+1)(k-l)n^{-3}\sum_{\substack{k + 1 \leq j_2, j_4 \leq m\\ j_2 \neq j_4}}\mathcal{Z}_{j_2,j_4}(R_{j_2}R_{j_4})^{-1}\\
& -2(k-l)(m-k-1)n^{-3}\sum_{l \leq j_1 \leq k}\sum_{k+1 \leq j_2 \leq m}\mathcal{Z}_{j_1,j_2}(R_{j_1}R_{j_2})^{-1}.
\end{split}
\end{flalign}
Similar to $G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n)$, the conditional distribution of $G_n^{(\mathcal{R}_n)}(k/n;l/n,m/n)$ given $\mathcal{R}_n$ is Gaussian, and for any $l_1 < k_1 < m_1, l_2 < k_2 < m_2$, $k_1,k_2,l_1,l_2,m_1,m_2 = 1,2,...,n$, the covariance structure is given by
\begin{flalign}\label{cov2}
&\mathrm{Cov}(G_n^{(\mathcal{R}_n)}(k_1/n;l_1/n,m_1/n),G_n^{(\mathcal{R}_n)}(k_2/n;l_2/n,m_2/n)|\mathcal{R}_n)
\\=&\notag8n^{-6}\Big\{\sum_{\substack{(l_1\lor l_2)\leq j_1,j_2\leq (k_1\land k_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}{m_1-k_1\choose 2}{m_2-k_2\choose 2}
\\&\notag+\sum_{\substack{(l_1\lor k_2+1)\leq j_1,j_2\leq (k_1\land m_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}{m_1-k_1\choose 2}{k_2-l_2+1\choose 2}
\\&\notag-2\sum_{j_1=(l_1\lor l_2)}^{k_2}\sum_{j_2=k_2+1}^{(m_2\land k_1)}\mathbf{1}(k_1>k_2)R_{j_1}^{-2}R_{j_2}^{-2}{m_1-k_1\choose 2}(k_2-l_2)(m_2-k_2-1)
\\&\notag+\sum_{\substack{(k_1+1\lor l_2)\leq j_1,j_2\leq (m_1\land k_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}{k_1-l_1+1\choose2}{m_2-k_2\choose 2}
\\&\notag+\sum_{\substack{(k_1+1\lor k_2+1)\leq j_1,j_2\leq (m_1\land m_2)\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}{k_1-l_1+1\choose2}{k_2-l_2+1\choose2}
\\&\notag-2\sum_{j_1=(k_1+1\lor l_2)}^{k_2}\sum_{j_2=k_2+1}^{(m_1\land m_2)}\mathbf{1}(m_1> k_2)R_{j_1}^{-2}R_{j_2}^{-2}{k_1-l_1+1\choose2}(k_2-l_2)(m_2-k_2-1)
\\&\notag-2\sum_{j_1=(l_1\lor l_2)}^{k_1}\sum_{j_2=k_1+1}^{(m_1\land k_2)}\mathbf{1}(k_2>k_1)R_{j_1}^{-2}R_{j_2}^{-2}(k_1-l_1)(m_1-k_1-1){m_2-k_2\choose 2}
\\&\notag-2\sum_{j_1=(k_2+1\lor l_1)}^{k_1}\sum_{j_2=k_1+1}^{(m_1\land m_2)}\mathbf{1}(m_2> k_1)R_{j_1}^{-2}R_{j_2}^{-2}(k_1-l_1)(m_1-k_1-1){k_2-l_2+1\choose2}
\\&\notag+\sum_{j_1=(l_1\lor l_2)}^{(k_1\land k_2)}\sum_{j_2=(k_1+1\lor k_2+1)}^{(m_1\land m_2)}R_{j_1}^{-2}R_{j_2}^{-2}(k_1-l_1)(m_1-k_1-1)(k_2-l_2)(m_2-k_2-1)\Big\}.
\end{flalign}
Hence, as $p\to\infty$,
$$
T_n|\mathcal{R}_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n)}:= \sup_{k=4,\cdots,n-4}\frac{n[G_n^{(\mathcal{R}_n)}(k/n;1/n,1)]^2}{\sum_{t=2}^{k-1}[G_n^{(\mathcal{R}_n)}(t/n;1/n,k/n)]^2+\sum_{t=k+2}^{n-2}[G_n^{(\mathcal{R}_n)}(t/n;(k+1)/n,1)]^2}.$$
(ii)
{We shall only show the process convergence $G_n^{(\mathcal{R}_n,s)}(\cdot) \rightsquigarrow \mathbb{E}\Big[\fracrac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}G(\cdot)$. That $G_n^{(\mathcal{R}_n)}(\cdot) \rightsquigarrow \mathbb{E}(R_1^{-2})\sqrt{2}G(\cdot)$ is similar and simpler.
Once the process convergence is obtained, the limiting distributions of $\mathcal{T}_n^{(\mathcal{R}_n,s)}$ and $\mathcal{T}_n^{(\mathcal{R}_n)}$ can be easily obtained by the continuous mapping theorem.
The proof for the process convergence contains two parts: the finite dimensional convergence and the tightness.
To show the finite dimensional convergence, we need to show that for any positive integer $N$, any fixed $u_1,u_2,...,u_N \in [0,1]^3$ and any $\alpha_1,...,\alpha_N \in \mathbb{R}$,
$$\alpha_1G_n^{(\mathcal{R}_n,s)}(u_1) + \cdots + \alpha_NG_n^{(\mathcal{R}_n,s)}(u_N)\overset{\mathcal{D}}{\rightarrow}\mathbb{E}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}[\alpha_1G(u_1) + \cdots + \alpha_NG(u_N)],$$
where for $u = (u^{(1)},u^{(2)},u^{(3)})^T$, $G_n(u) = G_n(u^{(1)};u^{(2)},u^{(3)})$. Since both $G_n^{(\mathcal{R}_n,s)}(\cdot)|\mathcal{R}_n$ and $G(\cdot)$ are Gaussian processes, by Lemma \ref{lem_equiv} we have
\begin{flalign*}
&P(\alpha_1G_n^{(\mathcal{R}_n,s)}(u_1) + \cdots + \alpha_NG_n^{(\mathcal{R}_n,s)}(u_N) < x|\mathcal{R}_n)\\\to_p
&P(\mathbb{E}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}[\alpha_1G(u_1) + \cdots + \alpha_NG(u_N)] < x).
\end{flalign*}
Then by bounded convergence theorem we have
\begin{align*}
&\lim_{n\rightarrow\infty}P(\alpha_1G_n^{(\mathcal{R}_n,s)}(u_1) + \cdots + \alpha_NG_n^{(\mathcal{R}_n,s)}(u_N) < x)\\ =& \lim_{n\rightarrow\infty}\mathbb{E}[P(\alpha_1G_n^{(\mathcal{R}_n,s)}(u_1) + \cdots + \alpha_NG_n^{(\mathcal{R}_n,s)}(u_N) < x|\mathcal{R}_n)]\\
=&\mathbb{E}[\lim_{n\rightarrow\infty}P(\alpha_1G_n^{(\mathcal{R}_n,s)}(u_1) + \cdots + \alpha_NG_n^{(\mathcal{R}_n,s)}(u_N) < x|\mathcal{R}_n)]\\
=&P(\mathbb{E}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}[\alpha_1G(u_1) + \cdots + \alpha_NG(u_N)] < x).
\end{align*}
This completes the proof of the finite dimensional convergence.
To show the tightness, it suffices to show that there exists $C > 0$ such that $$\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))^8] \leq C(\|u - v\|^4 + 1/n^4),$$ for any $u,v \in [0,1]^3$ (see the proof of equation S8.12 in \cite{wang2021inference}).
Since given $\mathcal{R}_n$, $G_n^{(\mathcal{R}_n,s)}(\cdot)$ is a Gaussian process, we have
\begin{align*}
&\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))^8] = \mathbb{E}[\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))^8|\mathcal{R}_n]]\\
&= C\mathbb{E}[\mathrm{Var}((G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))|\mathcal{R}_n)^4],
\end{align*}
where $C=105$ is the eighth moment of a standard normal random variable.
By (\ref{cov}), for $u = (k_1/n,l_1/n,m_1/n)$ (and similar for $v = (k_2/n, l_2/n, m_2/n)$) this reduces to
\begin{align*}
&\mathrm{Var}(G_n^{(\mathcal{R}_n,s)}(u)|\mathcal{R}_n)\\
=& 2n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1, j_3 \leq k_1\\j_1 \neq j_3}}(R_{j_1}R_{j_3})^{-2}A_{j_1,j_3}(k_1,l_1,m_1)^2 +\sum_{\substack{k_1 + 1 \leq j_2, j_4 \leq m_1\\ j_2 \neq j_4}}(R_{j_2}R_{j_4})^{-2}B_{j_2,j_4}(k_1,l_1,m_1)^2\\
&+\sum_{l_1 \leq j_1 \leq k_1}\sum_{k_1+1 \leq j_2 \leq m_1}(R_{j_1}R_{j_2})^{-2}C_{j_1,j_2}(k_1,l_1,m_1)^2\Big\}.
\end{align*}
Note that
\begin{align*}
&\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n) - G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n))^8] \\
\lesssim& \mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n) - G_n^{(\mathcal{R}_n,s)}(k_2/n;l_1/n,m_1/n))^8] \\
&+ \mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(k_2/n;l_1/n,m_1/n) - G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_1/n))^8]\\
&+ \mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_1/n) - G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n))^8]\\
=& I_1 + I_2 + I_3.
\end{align*}
We shall analyze $I_1$ first, and WLOG we let $k_1 < k_2$. Then we have (with $l_1=l_2,m_1=m_2$)
\begin{flalign*}
&\mathrm{Cov}(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n),G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n)|\mathcal{R}_n)
\\=&2n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1,j_3\leq k_1 \\j_1\neq j_3}}R_{j_1}^{-2}R_{j_3}^{-2}A_{j_1,j_3}(k_1;l_1,m_1)A_{j_1,j_3}(k_2;l_1,m_1)
\\&+\sum_{\substack{k_1+1\leq j_1,j_2\leq k_2\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&+\sum_{\substack{k_2+1\leq j_2,j_4\leq m_1 \\j_2\neq j_4}}R_{j_2}^{-2}R_{j_4}^{-2}B_{j_2,j_4}(k_1;l_1,m_1)B_{j_2,j_4}(k_2;l_2,m_2)
\\&+2\sum_{j_1=k_1+1}^{k_2}\sum_{j_2=k_2+1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_2;l_2,m_2)B_{j_1,j_2}(k_1;l_1,m_1)
\\&+2\sum_{j_1=l_1}^{k_1}\sum_{j_2=k_1+1}^{k_2}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&+\sum_{j_1=l_1}^{k_1}\sum_{j_2=k_2+1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)C_{j_1,j_2}(k_2;l_2,m_2)\Big\}.
\end{flalign*}
Hence,
\begin{align*}
&\mathrm{Var}(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n)-G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n)|\mathcal{R}_n)\\
=& 2n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1, j_3 \leq k_1\\j_1 \neq j_3}}(R_{j_1}R_{j_3})^{-2}A_{j_1,j_3}(k_1,l_1,m_1)^2 +\sum_{\substack{k_1 + 1 \leq j_2, j_4 \leq m_1\\ j_2 \neq j_4}}(R_{j_2}R_{j_4})^{-2}B_{j_2,j_4}(k_1,l_1,m_1)^2\\
&+\sum_{l_1 \leq j_1 \leq k_1}\sum_{k_1+1 \leq j_2 \leq m_1}(R_{j_1}R_{j_2})^{-2}C_{j_1,j_2}(k_1,l_1,m_1)^2\Big\}\\
+& 2n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1, j_3 \leq k_2\\j_1 \neq j_3}}(R_{j_1}R_{j_3})^{-2}A_{j_1,j_3}(k_2,l_1,m_1)^2 +\sum_{\substack{k_2 + 1 \leq j_2, j_4 \leq m_1\\ j_2 \neq j_4}}(R_{j_2}R_{j_4})^{-2}B_{j_2,j_4}(k_2,l_1,m_1)^2\\
&+\sum_{l_1 \leq j_1 \leq k_2}\sum_{k_2+1 \leq j_2 \leq m_1}(R_{j_1}R_{j_2})^{-2}C_{j_1,j_2}(k_2,l_1,m_1)^2\Big\}\\
-&4n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1,j_3\leq k_1 \\j_1\neq j_3}}R_{j_1}^{-2}R_{j_3}^{-2}A_{j_1,j_3}(k_1;l_1,m_1)A_{j_1,j_3}(k_2;l_1,m_1)
\\&+\sum_{\substack{k_1+1\leq j_1,j_2\leq k_2\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&+\sum_{\substack{k_2+1\leq j_2,j_4\leq m_1 \\j_2\neq j_4}}R_{j_2}^{-2}R_{j_4}^{-2}B_{j_2,j_4}(k_1;l_1,m_1)B_{j_2,j_4}(k_2;l_2,m_2)
\\&+2\sum_{j_1=k_1+1}^{k_2}\sum_{j_2=k_2+1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_2;l_2,m_2)B_{j_1,j_2}(k_1;l_1,m_1)
\\&+2\sum_{j_1=l_1}^{k_1}\sum_{j_2=k_1+1}^{k_2}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_2,m_2)
\\&+\sum_{j_1=l_1}^{k_1}\sum_{j_2=k_2+1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)C_{j_1,j_2}(k_2;l_2,m_2)\Big\}.
\end{align*}
By rearranging terms we have
\begin{align*}
&\mathrm{Var}(G_n^{(\mathcal{R}_n,s)}(k_1/n;l_1/n,m_1/n)-G_n^{(\mathcal{R}_n,s)}(k_2/n;l_2/n,m_2/n)|\mathcal{R}_n)\\
=& 2n^{-6}\Big\{\sum_{\substack{l_1 \leq j_1, j_3 \leq k_1\\j_1 \neq j_3}}(R_{j_1}R_{j_3})^{-2}(A_{j_1,j_3}(k_1,l_1,m_1) - A_{j_1,j_3}(k_2,l_1,m_1))^2\\
&+\sum_{j_1 = k_1 + 1}^{k_2}\sum_{j_3 = l_1 }^{j_1-1}(R_{j_1}R_{j_3})^{-2}(A_{j_1,j_3}(k_2,l_1,m_1)^2 +A_{j_3,j_1}(k_2,l_1,m_1)^2)\\
&+\sum_{\substack{k_2+1\leq j_2,j_4\leq m_1 \\j_2\neq j_4}}R_{j_2}^{-2}R_{j_4}^{-2}(B_{j_2,j_4}(k_1;l_1,m_1) - B_{j_2,j_4}(k_2;l_1,m_1))^2\\
&+\sum_{j_2 = k_1 + 1}^{k_2-1}\sum_{j_4 = j_2 + 1}^{m_1}R_{j_2}^{-2}R_{j_4}^{-2}(B_{j_2,j_4}(k_1;l_1,m_1)^2 + B_{j_4,j_2}(k_1;l_1,m_1)^2)\\
&+\sum_{j_1 = l_1}^{k_1}\sum_{j_2 = k_2 + 1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}(C_{j_1,j_2}(k_1,l_1,m_1) - C_{j_1,j_2}(k_2,l_1,m_1))^2\\
&+ \sum_{j_1 = l_1}^{k_1}\sum_{j_2 = k_1 + 1}^{k_2}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1,l_1,m_1)^2 + \sum_{j_1 = k_1 + 1}^{k_2}\sum_{j_2 = k_2 + 1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_2,l_1,m_1)^2\\
&-2\sum_{\substack{k_1+1\leq j_1,j_2\leq k_2\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}B_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_1,m_1)\\
&-4\sum_{j_1=k_1+1}^{k_2}\sum_{j_2=k_2+1}^{m_1}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_2;l_1,m_1)B_{j_1,j_2}(k_1;l_1,m_1)
\\&-4\sum_{j_1=l_1}^{k_1}\sum_{j_2=k_1+1}^{k_2}R_{j_1}^{-2}R_{j_2}^{-2}C_{j_1,j_2}(k_1;l_1,m_1)A_{j_1,j_2}(k_2;l_1,m_1)\\
=& \sum_{l = 1}^{10}J_l.
\end{align*}
Thus, by the $c_r$-inequality, we have $I_1 = \mathbb{E}[(\sum_{l = 1}^{10}J_l)^4] \lesssim \sum_{l = 1}^{10}\mathbb{E}[J_l^4]$. We shall analyze $\mathbb{E}[J_1^4]$ first. Note that
\begin{align*}
&A_{j_1,j_3}(k_1,l_1,m_1) - A_{j_1,j_3}(k_2,l_1,m_1)\\
= &\sum_{\substack{k_1+1\leq j_2,j_4\leq m_1\\j_2\neq j_4}}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2} -\sum_{\substack{k_2+1\leq j_2,j_4\leq m_1\\j_2\neq j_4}}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2}\\
=&\sum_{j_2 = k_1 + 1}^{k_2}\sum_{j_4 = j_2 + 1}^{m_1}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2} + \sum_{j_4 = k_1 + 1}^{k_2}\sum_{j_2 = j_4 + 1}^{m_1}(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{-1/2}\\
\leq& 2\sum_{j_2 = k_1 + 1}^{k_2}\sum_{j_4 = j_2 + 1}^{m_1}(2|R_{j_1}|^{-1}|R_{j_2}|^{-1})^{-1/2}(2|R_{j_3}|^{-1}|R_{j_4}|^{-1})^{-1/2}\\
\leq&|R_{j_1}R_{j_3}|^{1/2}\sum_{j_2 = k_1 + 1}^{k_2}\sum_{j_4 = j_2 + 1}^{m_1}|R_{j_2}R_{j_4}|^{1/2}.
\end{align*}
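The arithmetic-geometric-mean step in the last two inequalities above, namely $R_{j_1}^{-2}+R_{j_2}^{-2}\geq 2|R_{j_1}|^{-1}|R_{j_2}|^{-1}$ and hence $(R_{j_1}^{-2}+R_{j_2}^{-2})^{-1/2}\leq |R_{j_1}R_{j_2}|^{1/2}/\sqrt{2}$, can be spot-checked numerically; the sampled range below is an arbitrary illustration.

```python
import numpy as np

# Spot check of the AM-GM bound used above:
# (a^{-2} + b^{-2})^{-1/2} <= (2 |a|^{-1} |b|^{-1})^{-1/2} = sqrt(|ab|) / sqrt(2).
rng = np.random.default_rng(1)
a = rng.uniform(0.1, 5.0, size=10_000)    # arbitrary positive sample range
b = rng.uniform(0.1, 5.0, size=10_000)
lhs = (a ** -2 + b ** -2) ** -0.5
rhs = np.sqrt(a * b) / np.sqrt(2.0)
assert np.all(lhs <= rhs + 1e-12)
print("AM-GM bound verified on all samples")
```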
Since $A_{j_1,j_3}(k_1,l_1,m_1) - A_{j_1,j_3}(k_2,l_1,m_1) > 0$ and $J_1 > 0$ almost surely, we have
$$\mathbb{E}[J_1^4] \leq 2^4n^{-24}\mathbb{E}[\prod_{i = 1}^4(\sum_{\substack{l_1 \leq j_{1,i}, j_{3,i} \leq k_1\\j_{1,i} \neq j_{3,i}}}\sum_{j_{2,i} = k_1 + 1}^{k_2}\sum_{j_{4,i} = j_{2,i} + 1}^{m_1}\sum_{j_{2,i}' = k_1 + 1}^{k_2}\sum_{j_{4,i}' = j_{2,i}' + 1}^{m_1}
|R_{j_{1,i}}R_{j_{3,i}}|^{-1}|R_{j_{2,i}}R_{j_{4,i}}|^{1/2}|R_{j_{2,i}'}R_{j_{4,i}'}|^{1/2})].
$$
By the H\"older's inequalilty, and the fact that $j_{1,s} \neq j_{3,s}$, $j_{2,s} \neq j_{4,s}$, $j_{2,s}' \neq j_{4,s}'$ and $j_{1,s},j_{3,s}$ are not identical to any of $\{j_{2,s}, j_{4,s}, j_{2,s}, j_{4,s}\}$ for any $s = 1,2,3,4$, we have
\begin{align*}
&\mathbb{E}[|R_{j_{1,1}}R_{j_{3,1}}|^{-1}|R_{j_{2,1}}R_{j_{4,1}}|^{1/2}|R_{j_{2,1}'}R_{j_{4,1}'}|^{1/2}\cdots|R_{j_{1,4}}R_{j_{3,4}}|^{-1}|R_{j_{2,4}}R_{j_{4,4}}|^{1/2}|R_{j_{2,4}'}R_{j_{4,4}'}|^{1/2}]\\
\leq&\prod_{s = 1}^4\mathbb{E}[((|R_{j_{1,s}}||R_{j_{3,s}}|)^{-1}|R_{j_{2,s}}R_{j_{4,s}}R_{j_{2,s}'}R_{j_{4,s}'}|^{1/2})^4]^{1/4}\\
=&\prod_{s = 1}^4\mathbb{E}[(R_{j_{1,s}}R_{j_{3,s}})^{-4}R_{j_{2,s}}^{2}R_{j_{4,s}}^{2}R_{j_{2,s}'}^{2}R_{j_{4,s}'}^{2}]^{1/4} = \prod_{s = 1}^4\Big\{\mathbb{E}[R_{j_{1,s}}^{-4}]\mathbb{E}[R_{j_{3,s}}^{-4}]\mathbb{E}[R_{j_{2,s}}^{2}R_{j_{4,s}}^{2}R_{j_{2,s}'}^{2}R_{j_{4,s}'}^{2}]\Big\}^{1/4}\\
\leq & \prod_{s = 1}^4\Big\{\mathbb{E}[R_{j_{1,s}}^{-4}]\mathbb{E}[R_{j_{3,s}}^{-4}]\mathbb{E}[R_{j_{2,s}}^{4}R_{j_{4,s}}^{4}]^{1/2}\mathbb{E}[R_{j_{2,s}'}^{4}R_{j_{4,s}'}^{4}]^{1/2}\Big\}^{1/4} = \mathbb{E}[R_{1}^{-4}]^2\mathbb{E}[R_{2}^{4}]^2.
\end{align*}
Therefore,
\begin{align*}
\mathbb{E}[J_1^4] \lesssim 2^4n^{-24}(k_2-k_1)^8n^{16}\mathbb{E}[R_{1}^{-4}]^2\mathbb{E}[R_{2}^{4}]^2 = 2^4\mathbb{E}[R_{1}^{-4}]^2\mathbb{E}[R_{2}^{4}]^2(k_2/n - k_1/n)^8.
\end{align*}
Repeatedly applying H\"older's inequality and the above bound for the expectation, we have $\mathbb{E}[J_s^4] \lesssim (k_2/n - k_1/n)^8$ for $s = 1,3,5,8$, since there are 8 summations in each such $\mathbb{E}[J_s^4]$ that run from $k_1 + 1$ to $k_2$, and $\mathbb{E}[J_s^4] \lesssim (k_2/n - k_1/n)^4$ for $s = 2,4,6,7,9,10$, since there are only 4 summations in each such $\mathbb{E}[J_s^4]$ that run from $k_1 + 1$ to $k_2$. Combining these results we have $I_1 \lesssim (k_2/n - k_1/n)^4.$
We can also show $I_2 \lesssim (l_2/n - l_1/n)^4$ and $I_3 \lesssim (m_2/n - m_1/n)^4$. Since the steps are very similar to the arguments for $I_1$, we omit the details here. Thus, for any $u = (u_1,u_2,u_3), v = (v_1,v_2,v_3) \in [0,1]^3$, we have
$$\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))^8] \leq C'((\lfloor nu_1\rfloor/n - \lfloor n v_1\rfloor/n)^4 + (\lfloor nu_2\rfloor/n - \lfloor n v_2\rfloor/n)^4 + (\lfloor nu_3\rfloor/n - \lfloor n v_3\rfloor/n)^4),$$
for some positive constant $C' > 0$. It is easy to see that
\begin{align*}
(\lfloor nu_1\rfloor/n - \lfloor n v_1\rfloor/n)^4 &= ((u_1 - v_1) - (\{nu_1\} - \{nv_1\})/n)^4 \lesssim (u_1 - v_1)^4 + (\{nu_1\} - \{nv_1\})^4/n^4\\
&\lesssim (u_1 - v_1)^4 + 1/n^4.
\end{align*}
So
\begin{align*}
\mathbb{E}[(G_n^{(\mathcal{R}_n,s)}(u) - G_n^{(\mathcal{R}_n,s)}(v))^8] &\leq C((u_1 - v_1)^4 + (u_2 - v_2)^4 + (u_3 - v_3)^4 + 1/n^4) \\
&= C(\|u-v\|_4^4 + 1/n^4) \leq C(\|u-v\|^4 + 1/n^4),
\end{align*}
since $\|u-v\|_4^4 = \sum_{i = 1}^3(u_i - v_i)^4 \leq \sum_{i,j = 1}^3(u_i - v_i)^2(u_j - v_j)^2 = (\sum_{i=1}^3(u_i - v_i)^2)^2 = \|u-v\|^4.$ This completes the proof of tightness.
}
\qed
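The tightness argument above ends with the elementary comparison $\|u-v\|_4^4\leq\|u-v\|^4$; the following sketch spot-checks it on randomly drawn points of $[0,1]^3$.

```python
import numpy as np

# Spot check of ||u - v||_4^4 <= ||u - v||^4 for u, v in [0, 1]^3,
# the norm comparison concluding the tightness argument.
rng = np.random.default_rng(2)
u = rng.uniform(size=(10_000, 3))
v = rng.uniform(size=(10_000, 3))
l4_fourth = np.sum((u - v) ** 4, axis=1)        # ||u - v||_4^4
l2_fourth = np.sum((u - v) ** 2, axis=1) ** 2   # ||u - v||^4
assert np.all(l4_fourth <= l2_fourth + 1e-12)
print("norm comparison verified on all samples")
```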
\subsection{Proof of Theorem \ref{thm_rsrmpower}}
(i)
Under Assumption \ref{ass_power}, conditional on $\mathcal{R}_n$, we still have almost surely
$$
\frac{1}{p}\|Y_i-Y_j\|^2=\frac{1}{p}\sum_{k=1}^p \left(\frac{X_{i,k}}{R_i}-\frac{X_{j,k}}{R_j}\right)^2+\frac{2}{p}\sum_{k=1}^p(\mu_{i,k}-\mu_{j,k}) \left(\frac{X_{i,k}}{R_i}-\frac{X_{j,k}}{R_j}\right)+\frac{1}{p}\|\mu_i-\mu_j\|^2\to \sigma^2(R_i^{-2}+R_j^{-2})
$$
as conditional on $\mathcal{R}_n$, $\{R_i^{-1}X_{i,k}\}_{k=1}^p$ is still a $\rho$-mixing sequence.
Recalling (\ref{D34}), conditional on $\mathcal{R}_n$, we mainly work on $D_3(k;l,m)$, since $D_4(k;l,m)$ is of a smaller order, where
$$
D_3(k;l,m)=\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(Y_{j_1}-Y_{j_2})'(Y_{j_3}-Y_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}.$$
By symmetry, we only consider the case $l<k\leq k^*<m$, and the summation in $D_3(k;l,m)$ can be decomposed into
$$
\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}=\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\Big\{\sum_{\substack{k+1\leq j_2,j_4\leq k^*\\j_2\neq j_4}} +\sum_{\substack{k^*+1\leq j_2,j_4\leq m\\j_2\neq j_4}}+\sum_{j_2=k+1}^{k^*}\sum_{j_4=k^*+1}^{m}+\sum_{j_4=k+1}^{k^*}\sum_{j_2=k^*+1}^{m}\Big\},
$$
according to the relative location of $j_2, j_4$ and $k^*$.
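The decomposition of the index set can be verified mechanically: the four blocks are pairwise disjoint and their union is the full set $\{(j_2,j_4): k+1\leq j_2,j_4\leq m,\ j_2\neq j_4\}$. The small values of $k$, $k^*$, $m$ below are purely illustrative.

```python
# Verify that the four index blocks above partition
# {(j2, j4) : k+1 <= j2, j4 <= m, j2 != j4}, split by position relative to k*.
k, kstar, m = 3, 6, 10    # illustrative values with k < kstar < m

full = {(j2, j4) for j2 in range(k + 1, m + 1)
        for j4 in range(k + 1, m + 1) if j2 != j4}
both_low = {(j2, j4) for j2 in range(k + 1, kstar + 1)
            for j4 in range(k + 1, kstar + 1) if j2 != j4}
both_high = {(j2, j4) for j2 in range(kstar + 1, m + 1)
             for j4 in range(kstar + 1, m + 1) if j2 != j4}
j2_low = {(j2, j4) for j2 in range(k + 1, kstar + 1)
          for j4 in range(kstar + 1, m + 1)}
j4_low = {(j2, j4) for j4 in range(k + 1, kstar + 1)
          for j2 in range(kstar + 1, m + 1)}

parts = [both_low, both_high, j2_low, j4_low]
assert sum(len(s) for s in parts) == len(full)   # pairwise disjoint
assert set.union(*parts) == full                 # and exhaustive
print("partition verified:", len(full), "index pairs")
```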
Then, it is not hard to see that
\begin{flalign*}
D_3(k;l,m)=&\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(X_{j_1}/R_{j_1}-X_{j_2}/R_{j_2})'(X_{j_3}/R_{j_3}-X_{j_4}/R_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}
\\&+\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k^*+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{\|\delta\|^2}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}\\&-
\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{j_2=k^*+1}^{m}\sum_{j_4=k+1, j_4\neq j_2}^{m} \frac{\delta'(X_{j_1}/R_{j_1}-X_{j_2}/R_{j_2})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}\\&-\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{j_4=k^*+1}^{m}\sum_{j_2=k+1, j_4\neq j_2}^{m} \frac{\delta'(X_{j_3}/R_{j_3}-X_{j_4}/R_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}\\
:=&\sum_{i=1}^4D_{3,i}(k;l,m).
\end{flalign*}
Under Assumption \ref{ass_power} and conditional on $\mathcal{R}_n$, similar to (\ref{D3}), we can show that, for $i=3,4$,
$$
n^{-3}\|\Sigma\|_F^{-1}D_{3,i}(k;l,m)=o_p(1),
$$
while by (\ref{D3_con}), $n^{-3}\|\Sigma\|_F^{-1}D_{3,1}(k;l,m)\overset{\mathcal{D}}{\rightarrow} G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n)$.
Hence, if $n\mathbb{E}(R^{-2})^{-1}\|\Sigma\|_F^{-1}\|\delta\|^2\to c_n$ as $p\to\infty$, then conditional on $\mathcal{R}_n$, we obtain that
\begin{flalign*}
&n^{-3}\|\Sigma\|_F^{-1}p\sigma^2D^{(s)}(k;l,m)=n^{-3}\|\Sigma\|_F^{-1}D_3(k;l,m)+o_p(1)\\=&n^{-3}\|\Sigma\|_F^{-1}[D_{3,1}(k;l,m)+D_{3,2}(k;l,m)]+o_p(1)\\=&n^{-3}\|\Sigma\|_F^{-1}\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{(X_{j_1}/R_{j_1}-X_{j_2}/R_{j_2})'(X_{j_3}/R_{j_3}-X_{j_4}/R_{j_4})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}
\\&+n^{-4}\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k^*+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{n\|\Sigma\|_F^{-1}\|\delta\|^2}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}+o_p(1)\\
\overset{\mathcal{D}}{\rightarrow}&G_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n)+c_n\Delta_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n),
\end{flalign*}
where
\begin{flalign}\label{DRSRM}
\begin{split}
&\Delta_n^{(\mathcal{R}_n,s)}(k/n;l/n,m/n)\\=&\begin{cases}
n^{-4}\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k^*+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{\mathbb{E}(R^{-2})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}, \quad & l<k\leq k^*<m, \\
n^{-4}\sum_{\substack{l\leq j_1,j_3\leq k^*\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}\frac{\mathbb{E}(R^{-2})}{(R_{j_1}^{-2}+R_{j_2}^{-2})^{1/2}(R_{j_3}^{-2}+R_{j_4}^{-2})^{1/2}}, \quad & l<k^*<k<m,\\
0, \quad & \text{otherwise}.
\end{cases}
\end{split}
\end{flalign}
Hence, we have
\begin{flalign*}
&T_n^{(s)}|\mathcal{R}_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n,s)}(c_n,\Delta_n^{(\mathcal{R}_n,s)}):=\sup_{k=4,\cdots,n-4}\\&
\frac{n[G_n^{(\mathcal{R}_n,s)}(\frac{k}{n};\frac{1}{n},1)+c_n\Delta_n^{(\mathcal{R}_n,s)}(\frac{k}{n};\frac{1}{n},1)]^2}{\sum_{t=2}^{k-1}[G_n^{(\mathcal{R}_n,s)}(\frac{t}{n};\frac{1}{n},\frac{k}{n})+c_n\Delta_n^{(\mathcal{R}_n,s)}(\frac{t}{n};\frac{1}{n},\frac{k}{n})]^2+\sum_{t=k+2}^{n-2}[G_n^{(\mathcal{R}_n,s)}(\frac{t}{n};\frac{(k+1)}{n},1)+c_n\Delta_n^{(\mathcal{R}_n,s)}(\frac{t}{n};\frac{(k+1)}{n},1)]^2}.
\end{flalign*}
For $T_n$, by similar arguments as above, we have
\begin{flalign*}
n^{-3}\|\Sigma\|_F^{-1}D(k;l,m)=&n^{-3}\|\Sigma\|_F^{-1}\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k+1\leq j_2,j_4\leq m\\j_2\neq j_4}}(X_{j_1}/R_{j_1}-X_{j_2}/R_{j_2})'(X_{j_3}/R_{j_3}-X_{j_4}/R_{j_4})
\\&+n^{-4}\sum_{\substack{l\leq j_1,j_3\leq k\\j_1\neq j_3}}\sum_{\substack{k^*+1\leq j_2,j_4\leq m\\j_2\neq j_4}}n\|\Sigma\|_F^{-1}\|\delta\|^2+o_p(1)\\
\overset{\mathcal{D}}{\rightarrow}&G_n^{(\mathcal{R}_n)}(k/n;l/n,m/n)+c_n\mathbb{E}(R^{-2})\Delta_n(k/n;l/n,m/n).
\end{flalign*}
Hence, we have
\begin{flalign*}
&T_n|\mathcal{R}_n\overset{\mathcal{D}}{\rightarrow}\mathcal{T}_n^{(\mathcal{R}_n)}(c_n,\Delta_n):=\sup_{k=4,\cdots,n-4}\\&
\frac{n[G_n^{(\mathcal{R}_n)}(\frac{k}{n};\frac{1}{n},1)+c_n\mathbb{E}(R^{-2})\Delta_n(\frac{k}{n};\frac{1}{n},1)]^2}{\sum_{t=2}^{k-1}[G_n^{(\mathcal{R}_n)}(\frac{t}{n};\frac{1}{n},\frac{k}{n})+c_n\mathbb{E}(R^{-2})\Delta_n(\frac{t}{n};\frac{1}{n},\frac{k}{n})]^2+\sum_{t=k+2}^{n-2}[G_n^{(\mathcal{R}_n)}(\frac{t}{n};\frac{(k+1)}{n},1)+c_n\mathbb{E}(R^{-2})\Delta_n(\frac{t}{n};\frac{(k+1)}{n},1)]^2}.
\end{flalign*}
(ii)
Note that for any $u=(u_1,u_2,u_3)^{\top}\in[0,1]^3$ such that $u_2\leq u_1\leq u_3$, as $n\to\infty$,
$$
\Delta_n^{(\mathcal{R}_n,s)}(\lfloor nu_1\rfloor/n;\lfloor nu_2\rfloor/n,\lfloor nu_3\rfloor/n)\to_p\mathbb{E}(R_1^{-2})\mathbb{E}^2\Big[\frac{R_1R_2}{\sqrt{R_1^{2}+R_2^2}}\Big]\Delta(u_1;u_2,u_3)
$$
by the law of large numbers for $U$-statistics (since $\Delta_n^{(\mathcal{R}_n,s)}$ can be viewed as a two-sample $U$-statistic). Then, using arguments similar to those in the proof of Theorem \ref{thm_rsrm} (ii), we have
$$
\Delta_n^{(\mathcal{R}_n,s)}(\cdot)\rightsquigarrow \mathbb{E}(R_1^{-2})\mathbb{E}^2\Big[\frac{R_1R_2}{\sqrt{R_1^{2}+R_2^2}}\Big]\Delta(\cdot).
$$
Note that $\Delta(\cdot)$ is deterministic, and recall $G_n^{(\mathcal{R}_n,s)}(\cdot)\rightsquigarrow \mathbb{E} \Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}G(\cdot)$ from the proof of Theorem \ref{thm_rsrm} (ii); by arguments similar to those in the proof of Theorem 3.6 in \cite{wang2021inference}, we have
$$
G_n^{(\mathcal{R}_n,s)}(\cdot)+c_n\Delta_n^{(\mathcal{R}_n,s)}(\cdot)\rightsquigarrow \mathbb{E}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\sqrt{2}G(\cdot)+c\mathbb{E}(R_1^{-2})\mathbb{E}^2\Big[\frac{R_1R_2}{\sqrt{R_1^{2}+R_2^2}}\Big]\Delta(\cdot).
$$
Similarly,
$$
G_n^{(\mathcal{R}_n)}(\cdot)+c_n\mathbb{E}(R^{-2})\Delta_n^{(\mathcal{R}_n,s)}(\cdot)\rightsquigarrow \mathbb{E}(R^{-2})\sqrt{2}G(\cdot)+c\mathbb{E}(R_1^{-2})\Delta(\cdot).
$$
The result follows by the continuous mapping theorem; the multiplicative constant $K>1$ follows from the proof of Theorem 3.2 in \cite{chakraborty2017tests}.
\qed
\section{Auxiliary Lemmas}\label{sec:lemma}
\begin{lemma}\label{lem_fix}
Under Assumptions \ref{ass_model} and \ref{ass_mixing}, let $n\geq 8$ be a fixed number, and for any $0\leq k<m\leq n$, let $Z(k,m)=\sum_{i=k+1}^{m}\sum_{j=k}^{i-1}X_i'X_j.$ Then, as $p\to \infty$,
$$\frac{\sqrt{2}}{n\|\Sigma\|_F}Z(k,m)\overset{\mathcal{D}}{\rightarrow} Q_n(\frac{k}{n},\frac{m}{n}),$$
where
$Q_n(a,b)$ is a centered Gaussian process defined on $[0,1]^2$ with covariance structure given by:
\begin{flalign*}
&\mathrm{Cov}(Q_n(a_1,b_1),Q_n(a_2,b_2))\\=&n^{-2}( \lfloor nb_1\rfloor\land \lfloor nb_2\rfloor-\lfloor na_1\rfloor\lor \lfloor na_2\rfloor)(\lfloor nb_1\rfloor\land \lfloor nb_2\rfloor-\lfloor na_1\rfloor\lor\lfloor na_2\rfloor+1)\mathbf{1}(b_1\land b_2>a_1\lor a_2).
\end{flalign*}
\end{lemma}
\textsc{Proof of Lemma \ref{lem_fix}}
By the Cram\'er--Wold device, it suffices to show that, for fixed $n$ and $N$ and any sequence $\{\alpha_i\}_{i=1}^N$ with $\alpha_i\in\mathbb{R}$,
$$
\sum_{i=1}^{N}\alpha_i \frac{\sqrt{2}}{n\|\Sigma\|_F}Z(k_i,m_i)\overset{\mathcal{D}}{\rightarrow} \sum_{i=1}^{N}\alpha_iQ_n(\frac{k_i}{n},\frac{m_i}{n}),
$$
where $1\leq k_i\leq m_i\leq n$ are integers.
For simplicity, we consider the case of $N=2$, and by symmetry there are basically three types of enumerations of $(k_1,m_1,k_2,m_2)$: (1) $k_1\leq m_1\leq k_2\leq m_2$; (2) $k_1\leq k_2\leq m_1\leq m_2$; (3) $k_1\leq k_2\leq m_2\leq m_1$.
Define $\xi^{(1)}_{i,t}=X_{i,t}\sum_{j=k_1}^{i-1}X_{j,t}$, and $\xi^{(2)}_{i,t}=X_{i,t}\sum_{j=k_2}^{i-1}X_{j,t}$. Then, we can show
\begin{flalign*}
&\frac{\sqrt{2}}{n\|\Sigma\|_F}[\alpha_1Z(k_1,m_1)+\alpha_2Z(k_2,m_2)]
\\=&\frac{\sqrt{2}}{n\|\Sigma\|_F}\Big(\alpha_1\sum_{i=k_1+1}^{m_1}\sum_{j=k_1}^{i-1}X_i'X_j+\alpha_2\sum_{i=k_2+1}^{m_2}\sum_{j=k_2}^{i-1}X_i'X_j\Big)
\\=&\left\{\begin{aligned}
&\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\Big(\sum_{i=k_1+1}^{m_1}\alpha_1\xi^{(1)}_{i,t}+\sum_{i=k_2+1}^{m_2}\alpha_2\xi^{(2)}_{i,t}\Big),\quad &\text{Case (1)}\\
&\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\Big(\sum_{i=k_1+1}^{k_2}\alpha_1\xi^{(1)}_{i,t}+\sum_{i=k_2+1}^{m_1}[\alpha_1\xi^{(1)}_{i,t}+\alpha_2\xi^{(2)}_{i,t}]+\sum_{i=m_1+1}^{m_2}\alpha_2\xi^{(2)}_{i,t}\Big),\quad &\text{Case (2)}\\
&\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\Big(\sum_{i=k_1+1}^{k_2}\alpha_1\xi^{(1)}_{i,t}+\sum_{i=k_2+1}^{m_2}[\alpha_1\xi^{(1)}_{i,t}+\alpha_2\xi^{(2)}_{i,t}]+\sum_{i=m_2+1}^{m_1}\alpha_1\xi^{(1)}_{i,t}\Big),\quad &\text{Case (3)}\\
\end{aligned}\right.
\end{flalign*}
For simplicity, we consider Case (2). Using the independence of the $X_i$, one can show that $S_1=\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\sum_{i=k_1+1}^{k_2}\alpha_1\xi^{(1)}_{i,t}$, $S_2=\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\sum_{i=k_2+1}^{m_1}[\alpha_1\xi^{(1)}_{i,t}+\alpha_2\xi^{(2)}_{i,t}]$ and $S_3=\frac{\sqrt{2}}{n\|\Sigma\|_F}\sum_{t=1}^{p}\sum_{i=m_1+1}^{m_2}\alpha_2\xi^{(2)}_{i,t}$ are independent. Then, by Theorem 4.0.1 in Lin and Lu (2010), they are asymptotically normal with variances given by
\begin{flalign*}
\mathrm{Var}(S_1)=&n^{-2}\alpha_1^2(k_2-k_1)(k_2-k_1+1),\\
\mathrm{Var}(S_2)=&n^{-2}[\alpha_1^2(m_1-k_2)(k_2-k_1+1+m_1-k_1)+2\alpha_1\alpha_2(m_1-k_2)(m_1-k_2+1)\\&+\alpha_2^2(m_1-k_2)(m_1-k_2+1)],\\
\mathrm{Var}(S_3)=&n^{-2}\alpha_2^2(m_2-m_1)(m_2-k_2+m_1-k_2+1).
\end{flalign*}
Similarly, we can obtain the asymptotic normality for Case (1) and Case (3).
Hence,
$$
\frac{\sqrt{2}}{n\|\Sigma\|_F}[\alpha_1Z(k_1,m_1)+\alpha_2Z(k_2,m_2)]\overset{\mathcal{D}}{\rightarrow} N(0,\frac{\tau^2}{n^2 }),
$$
where $$
\tau^2=\begin{cases}
\alpha_1^2{(m_1-k_1)(m_1-k_1+1)}+\alpha_2^2{(m_2-k_2)(m_2-k_2+1)},\quad& \text{Case (1)}\\
\alpha_1^2{(m_1-k_1)(m_1-k_1+1)}+\alpha_2^2{(m_2-k_2)(m_2-k_2+1)}+2\alpha_1\alpha_2(m_1-k_2)(m_1-k_2+1),\quad &\text{Case (2)}\\
\alpha_1^2{(m_1-k_1)(m_1-k_1+1)}+\alpha_2^2{(m_2-k_2)(m_2-k_2+1)}+2\alpha_1\alpha_2(m_2-k_2)(m_2-k_2+1),\quad &\text{Case (3)}.
\end{cases}
$$
Hence, the case of $N=2$ is proved by examining the covariance structure of $Q_n$ defined in Theorem \ref{thm_main}. The cases $N>2$ are similar.
\qed
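As a numerical sanity check of the lemma's limiting variance, the following Monte Carlo sketch (our own illustration, taking iid $N(0,I_p)$ rows so that $\Sigma=I_p$ and $\|\Sigma\|_F=\sqrt{p}$) compares the sample variance of $\frac{\sqrt{2}}{n\|\Sigma\|_F}Z(k,m)$ with the diagonal value $n^{-2}(m-k)(m-k+1)$ of the covariance of $Q_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, m, nsim = 8, 100, 2, 6, 5000

# nsim independent replications of n iid N(0, I_p) rows X_1, ..., X_n
X = rng.standard_normal((nsim, n, p))

# Z(k, m) = sum_{i=k+1}^{m} sum_{j=k}^{i-1} X_i' X_j, one value per replication
Z = np.zeros(nsim)
for i in range(k + 1, m + 1):
    partial = X[:, k:i, :].sum(axis=1)               # X_k + ... + X_{i-1}
    Z += np.einsum("sp,sp->s", X[:, i, :], partial)  # X_i' (X_k + ... + X_{i-1})

# For Sigma = I_p, ||Sigma||_F = sqrt(p)
T = np.sqrt(2.0) / (n * np.sqrt(p)) * Z

# Limiting variance from the covariance structure of Q_n (a = k/n, b = m/n)
target = (m - k) * (m - k + 1) / n**2
print(round(T.var(), 3), target)
```

With $n=8$, $k=2$, $m=6$ the target is $4\cdot 5/64 = 0.3125$, and the simulated variance lands within Monte Carlo error of it.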
\begin{lemma}\label{lem_equiv}
For any $0\leq a_1<r_1<b_1\leq 1$ and $0\leq a_2<r_2<b_2\leq 1$, as $n \rightarrow \infty$,
\begin{flalign*}
&\mathrm{Cov}(G_n^{(\mathcal{R}_n,s)}(r_1;a_1,b_1),G_n^{(\mathcal{R}_n,s)}(r_2;a_2,b_2))\\
\rightarrow_p&2\mathbb{E}^{2}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\mathrm{Cov}(G(r_1;a_1,b_1),G(r_2;a_2,b_2)).
\end{flalign*}
\end{lemma}
\textsc{Proof of Lemma \ref{lem_equiv}}
There are 9 terms in the covariance structure given in (\ref{cov}); for the first one, we have
\begin{flalign*}
&2n^{-6}\sum_{\substack{\lfloor n(a_1\lor a_2)\rfloor\leq j_1,j_2\leq \lfloor n(r_1\land r_2)\rfloor\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}A_{j_1,j_2}(\lfloor nr_1\rfloor;\lfloor na_1\rfloor,\lfloor nb_1\rfloor)A_{j_1,j_2}(\lfloor nr_2\rfloor;\lfloor na_2\rfloor,\lfloor nb_2\rfloor)
\\=&2n^{-6}\sum_{\substack{\lfloor n(a_1\lor a_2)\rfloor\leq j_1,j_2\leq \lfloor n(r_1\land r_2)\rfloor\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}\sum_{\substack{\lfloor nr_1\rfloor+1\leq j_3,j_4\leq \lfloor nb_1\rfloor\\j_3\neq j_4}}\frac{R_{j_1}R_{j_3}}{\sqrt{(R_{j_1}^2+R_{j_3}^2)}}\frac{R_{j_2}R_{j_4}}{\sqrt{(R_{j_2}^2+R_{j_4}^2)}}\\&\times\sum_{\substack{\lfloor nr_2\rfloor+1\leq j_5,j_6\leq \lfloor nb_2\rfloor\\j_5\neq j_6}}\frac{R_{j_1}R_{j_5}}{\sqrt{(R_{j_1}^2+R_{j_5}^2)}}\frac{R_{j_2}R_{j_6}}{\sqrt{(R_{j_2}^2+R_{j_6}^2)}}
\\=&2n^{-2}\sum_{\substack{\lfloor n(a_1\lor a_2)\rfloor\leq j_1,j_2\leq \lfloor n(r_1\land r_2)\rfloor\\j_1\neq j_2}}R_{j_1}^{-2}R_{j_2}^{-2}(b_1-r_1)^2\Big\{\mathbb{E}\Big[\frac{R_{j_1}R_{j_3}}{\sqrt{(R_{j_1}^2+R_{j_3}^2)}}\frac{R_{j_2}R_{j_4}}{\sqrt{(R_{j_2}^2+R_{j_4}^2)}}|R_{j_1},R_{j_2}\Big]+o_p(1)\Big\}\\&\times(b_2-r_2)^2\Big\{\mathbb{E}\Big[\frac{R_{j_1}R_{j_5}}{\sqrt{(R_{j_1}^2+R_{j_5}^2)}}\frac{R_{j_2}R_{j_6}}{\sqrt{(R_{j_2}^2+R_{j_6}^2)}}|R_{j_1},R_{j_2}\Big]+o_p(1)\Big\}
\\\to_p& 2[(r_1\land r_2)-(a_1\lor a_2)]^2(b_1-r_1)^2(b_2-r_2)^2 \mathbb{E}^2\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big],
\end{flalign*}
where the second equality holds by applying the law of large numbers for $U$-statistics to $R_{j_3}, R_{j_4}$ and $R_{j_5},R_{j_6}$, and the convergence in probability holds by applying the law of large numbers for $U$-statistics to $R_{j_1}, R_{j_2}$.
Therefore, similar arguments for other terms indicate that
\begin{flalign*}
&2^{-1}\mathbb{E}^{-2}\Big[\frac{R_1R_2}{\sqrt{(R_1^2+R_3^2)(R_2^2+R_3^2)}}\Big]\lim\limits_{n\to\infty}\mathrm{Cov}(G_n^{(\mathcal{R}_n,s)}(\lfloor nr_1\rfloor;\lfloor na_1\rfloor,\lfloor nb_1\rfloor),G_n^{(\mathcal{R}_n,s)}(\lfloor nr_2\rfloor;\lfloor na_2\rfloor,\lfloor nb_2\rfloor))
\\=&[(r_1\land r_2)-(a_1\lor a_2)]^2(b_1-r_1)^2(b_2-r_2)^2\mathbf{1}((r_1\land r_2)>(a_1\lor a_2))\\&+[(r_1\land b_2)-(a_1\lor r_2)]^2(b_1-r_1)^2(r_2-a_2)^2\mathbf{1}((r_1\land b_2)>(a_1\lor r_2))\\&-4[r_2-(a_1\lor a_2)][(b_2\land r_1)-r_2](b_1-r_1)^2(b_2-r_2)(r_2-a_2)\mathbf{1}(r_1>r_2,r_2>(a_1\lor a_2),(b_2\land r_1)>r_2)
\\&+[(b_1\land r_2)-(r_1\lor a_2)]^2(r_1-a_1)^2(b_2-r_2)^2\mathbf{1}((b_1\land r_2)>(r_1\lor a_2))\\&+[(b_1\land b_2)-(r_1\lor r_2)]^2(r_1-a_1)^2(r_2-a_2)^2\mathbf{1}((b_1\land b_2)>(r_1\lor r_2))\\
&-4[r_2-(r_1\lor a_2)][(b_1\land b_2)-r_2](r_2-a_2)(b_2-r_2)(r_1-a_1)^2\mathbf{1}(b_1>r_2,r_2>(r_1\lor a_2),(b_1\land b_2)>r_2)
\\&-4[r_1-(a_1\lor a_2)][(b_1\land r_2)-r_1](r_1-a_1)(b_1-r_1)(b_2-r_2)^2\mathbf{1}(r_2>r_1,r_1>(a_1\lor a_2),(b_1\land r_2)>r_1)
\\&-4[r_1-(r_2\lor a_1)][(b_1\land b_2)-r_1](r_1-a_1)(b_1-r_1)(r_2-a_2)^2\mathbf{1}(b_2>r_1,r_1>(r_2\lor a_1),(b_1\land b_2)>r_1)
\\&+4[(r_1\land r_2)-(a_1\lor a_2)][(b_1\land b_2)-(r_1\land r_2)](r_1-a_1)(b_1-r_1)(r_2-a_2)(b_2-r_2)\\&\times\mathbf{1}((r_1\land r_2)>(a_1\lor a_2),(b_1\land b_2)>(r_1\land r_2)).
\end{flalign*}
After tedious algebra, one verifies that this is indeed the covariance structure of $G(\cdot)$.
\qed
\end{document}
\begin{document}
\title{Characterization of the Distribution of Twin Primes}
\author{P.F.~Kelly\footnote{patrick\[email protected]}
and Terry~Pilling\footnote{[email protected]} \\
Department of Physics \\
North Dakota State University \\
Fargo, ND, 58105-5566 \\
U.S.A.}
\maketitle
\begin{abstract}
We adopt an empirical approach to the characterization of
the distribution of twin primes within the set of primes,
rather than in the set of all natural numbers.
The occurrences of twin primes in any finite sequence of primes
behave like fixed-probability random events.
As the sequence of primes grows, the probability decreases
as the reciprocal of the count of primes to that point.
The manner of the decrease is consistent with the Hardy--Littlewood
Conjecture, the Prime Number Theorem, and the Twin Prime Conjecture.
Furthermore, our {\it probabilistic model} is simply parameterized.
We discuss a simple test which indicates the consistency of the
model extrapolated outside of the range in which it was constructed.
\end{abstract}
\noindent
Key words: Twin primes
\noindent
MSC: 11A41 (Primary), 11Y11 (Secondary)
\section{Introduction}
Prime numbers~\cite{cohen}, with
their many wonderful properties, have been an intriguing subject
of mathematical investigation since ancient times.
The ``twin primes,'' pairs of prime numbers $\{ p, p+2 \}$ are a
subset of the primes and themselves possess remarkable properties.
In particular, we note that the {\it Twin Prime Conjecture},
that there exists an infinite number of these prime number pairs
which differ by 2, is not yet proven~\cite{hardy,marzinske}.
In recent years much human labor and computational effort have been
expended on the subject of twin primes.
The general aims of these researches have been three-fold:
the task of enumerating the twin primes~\cite{halberstam}
({\it i.e.,} identifying the members of this particular subset of
the natural numbers, and its higher-order variants ``$k$-tuples'' of primes),
the attempt to elucidate how twin primes are distributed among the
natural numbers~\cite{huxley,brent1,brent2,wolf1} ({\it especially} searches
for long gaps in the sequence~\cite{nicely1,wolf2,odlyzko}),
and finally, the precise estimation of the value of Brun's
Constant~\cite{wolf3}.
Many authors have observed that the twin primes, along with the primes
themselves, generally become more sparse or diffuse as their magnitude
increases.
In fact the {\it Prime Number Theorem} may be rephrased to state that
the number of prime numbers less than or equal to some large
(not necessarily prime) number $N$ is
approximately\footnote{In fact, more accurate
approximations are known, but the formulae we quote suffice for
our purposes.}
\begin{equation}
\pi_{1}(N) \sim \int_2^N \frac{1}{\ln(x)} dx\, .
\label{pi1}
\end{equation}
A similar result is believed to hold for the number of twin primes
where each element of the pair is less than or equal to large $N$
\begin{equation}
\pi_{2}(N) \sim 2 c_2 \int_2^N \frac{1}{\left(\ln(x) \right)^2} dx\, ,
\label{pi2}
\end{equation}
where the ``twin prime'' constant~\cite{tpconst1} $c_2$ has the
numerical value $c_2 = 0.661618\ldots$, and is currently known to
many decimal places~\cite{tpconst2,tpconst3}.
The expression~(\ref{pi2}) is the first instance of the Hardy--Littlewood
conjectures which estimate the multiplicity of $k$-tuples of primes
smaller than natural number $N$~\cite{hardy}.
In our investigation, we sought to account for the effect of the
primes themselves becoming more rarefied by examining the distribution
of twin primes within the prime numbers.
We have done this for the set of prime numbers less than
approximately $4 \times 10^9$, and have observed that within this range
the occurrence of twin primes may be characterized as
{\it slowly-varying-probability random} events.
\section{Method and Results}
To ensure that there is no confusion, let us first make very clear
our methodology.
We generated prime numbers in sequence, {\it viz,}
$\ P_1 = 2, P_2 = 3, P_3 = 5 \ldots $,
and within this sequence identified twin primes and their
{\it prime separations} as illustrated
somewhat schematically below.
\[
\cdots \ P_i \ \left( P_{i+1} \ P_{i+2} \right) \ P_{i+3} \ P_{i+4}
\ \left( P_{i+5} \ P_{i+6} \right) \ P_{i+7} \ P_{i+8} \ P_{i+9} \ P_{i+10}
\ \left( P_{i+11} \ P_{i+12} \right) \ P_{i+13} \ \cdots
\]
We say that the first pair of twins in the above sequence has a
prime separation of 2.
There are two non-twin prime numbers, {\it i.e., singletons}, which
occur between the second prime element, $P_{i+2}$, of the first
twin and the first prime element, $P_{i+5}$, of the subsequent twin.
Similarly, the second pair of twins has prime separation 4.
Note that there are many twins with prime separation equal to zero:
for example
$(5\ 7) (11\ 13)$, or $(137\ 139) (149\ 151)$.
Actually, every prime 4-tuple $\left( P, P+2, P+6, P+8 \right)$
consists of a pair of twins with zero separation in primes.
There is an irregularity with our definition of separation
for the first few primes: $2\ (3\ 5) (5\ 7)$,
where the pairs in fact overlap, yielding a prime separation of $-1$.
Fortunately, for very well-known reasons such overlapping twins
do not ever recur and we choose to begin our analysis with the twin
$(5\ 7)$.
For instance, in the set of seven twins between 5 and 100,
\[
\left\{ (5\ 7) (11\ 13) (17\ 19) (29\ 31) (41\ 43) (59\ 61)
(71\ 73) \right\},
\]
there are 6 separations.
Two of these happen to be 0, three are 1, and one is 2, so the
relative frequencies for separations $s = 0 , 1 , 2$ are
$\frac{1}{3} , \frac{1}{2}$, and $\frac{1}{6}$, respectively.
From the set of primes, all separations between pairs of twins
up to a fixed number $N$ were computed and tabulated.
[We chose certain values of $N$ in the range $79561$ to $4020634603$.
These particular numbers are the second prime elements of the
thousandth and twelve-millionth twins respectively.
Many of our $N$'s were chosen such that our analysis started and
ended on twins.
However the behavior that we observe holds for all $N$, sufficiently
large, with the understanding that the singleton primes between the
last twin and the upper bound $N$ are ignored.]
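The counting just described is easy to mechanize. The following sketch (our own, assuming numpy; the function names are ours) sieves the primes up to a bound and returns the prime separations between consecutive twin pairs:

```python
import numpy as np

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = False
    return np.flatnonzero(sieve)

def twin_separations(n):
    """Prime separations between consecutive twin pairs below n,
    beginning with (5, 7) as in the text."""
    p = primes_up_to(n)
    # index i marks a twin pair (p[i], p[i+1])
    twin_idx = np.flatnonzero(np.diff(p) == 2)
    twin_idx = twin_idx[p[twin_idx] >= 5]  # drop the overlapping (3, 5)
    # singletons between the second element of one twin and the
    # first element of the next: (difference of twin indices) - 2
    return np.diff(twin_idx) - 2

print(sorted(twin_separations(100).tolist()))
```

Tabulating the relative frequency of each separation for a large bound, and fitting a line to its logarithm, is then a matter of a few more lines (e.g., `np.bincount`).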
The logarithm of the relative frequency of occurrence of each
separation in each of our
analyses appears to obey a surprisingly simple relation
as illustrated {\it schematically} in Figure~\ref{graph1}.
\begin{figure}
\caption{Data and Linear Fits for log(frequency) {\it vs.} separation.}
\label{graph1}
\end{figure}
Two comments must be made.
The first is that this remarkable behavior is perfectly characteristic
of a completely {\it random} system.
We infer that as one approaches {\it each} prime member in the
sequence of primes following a twin, the likelihood of it being the first
member of the next twin prime is {\it constant}!
By way of analogy, we can consider a radioactive substance.
The likelihood of one of its atoms decaying in any short time
interval is fixed, with the effect that the probability that
the next decay occurs at time $t$ is just
${\cal N} e^{-\gamma t}$, where $\gamma$ is the decay rate, and ${\cal N}$
is a constant to ensure appropriate normalization.
The measured slope of the line fit to our data provides a decay constant
which is particular to the twin primes.
Again, recourse to our analogy is warranted.
The decay rate of a radioactive element is a {\it defining characteristic}
of that element.
We appear to have the occurrences of twin primes governed by
a similar sort of ``prime constant.''
We qualify this statement, however, since the curve in Figure~\ref{graph1}
is only illustrative because the slope varies inversely with $N$.
Figure~\ref{graph2} displays curves (with associated best-fit straight lines)
for the twin prime separation data for ranges $[ 5 , 10^6]$, $[ 5 , 10^8]$
and $[ 5 , 10^9]$, illustrating the variation with $N$.
\begin{figure}
\caption{Data and Linear Fits for log(frequency) {\it vs.} separation.}
\label{graph2}
\end{figure}
\noindent
Were it not the case that the magnitude of the slope diminished for
larger values of $N$ then the Hardy--Littlewood Conjecture for twins
would certainly fail to hold.
If the slopes of our lines were indeed the {\it same} value for
all $N$, meaning that the probability of a given prime being a
member of a twin pair is a universal constant, then the number
of twin primes $\pi_2(N)$ would just be a fixed fraction of $\pi_1(N)$
in disagreement with Hardy--Littlewood and the empirical data.
The linear fits that we employed were constrained to ensure that the
relative frequencies are properly normalized.
That is, if the relative frequency with which separation $s$ occurs
obeys the exponential relation consistent with our data, then for
the sum of frequencies to be normalized to $1$, we must have
\begin{equation}
\mbox{(intercept)} \equiv \ln( -\mbox{(slope)})\, ,
\end{equation}
so the fit that we performed was constrained by the one-parameter
{\it Ansatz}
\begin{equation}
f(s) = - m s + \ln(m)\, .
\end{equation}
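The constraint is a one-line normalization check: writing the probability of separation $s$ as $P(s) = e^{f(s)}$ and approximating the sum over $s$ by an integral,
\begin{equation*}
\int_0^{\infty} e^{-ms + c}\, ds = \frac{e^{c}}{m} = 1
\quad \Longrightarrow \quad c = \ln(m)\, ,
\end{equation*}
so fixing the intercept $c$ leaves the slope $-m$ as the single free parameter.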
Table~\ref{table1} gives the best-fit values of this probability-conserving
slope ($m$) for various $N$, along with simple statistical estimates of
the uncertainty.
The quoted error estimates measure only the quality of the estimate
of $m$ and do not account for effects resulting
from our arbitrary choices of $N$.
It may very well be the case that a more realistic assessment of
error would double the values quoted in the table.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
$\pi_2(N)$ & slope & stat.~error ($\pm$) & $\pi_1(N)$ & $N$ \\
\hline
\hline
$1 \times 10^3$ & 0.141667 & 0.00599 & 7793 & 79561 \\
$5 \times 10^3$ & 0.122415 & 0.00315 & 45886 & 557521 \\
$1 \times 10^4$ & 0.114097 & 0.00325 & 97255 & 1260991 \\
$5 \times 10^4$ & 0.104126 & 0.00105 & 556396 & 8264959 \\
$1 \times 10^5$ & 0.096421 & 0.00095 & 1175775 & 18409201 \\
$5 \times 10^5$ & 0.086700 & 0.00056 & 6596231 & 115438669 \\
$1 \times 10^6$ & 0.081143 & 0.00041 & 13804822 & 252427603 \\
$3 \times 10^6$ & 0.075491 & 0.00035 & 44214960 & 863029303 \\
$5 \times 10^6$ & 0.073150 & 0.00031 & 75860671 & 1523975911 \\
$8 \times 10^6$ & 0.070965 & 0.00032 & 124538861 & 2566997821 \\
$1 \times 10^7$ & 0.070154 & 0.00029 & 157523559 & 3285916171 \\
$1.2 \times 10^7$ & 0.069814 & 0.00024 & 190894477 & 4020634603 \\
\hline
\end{tabular}
\caption{Values of slope, statistical error, $\pi_{1}(N)$, and $\pi_{2}(N)$
for certain $N$ from $79561$ to $4020634603$.}
\label{table1}
\end{center}
\end{table}
Two comments must be made.
The first is that all of the separations which appeared in the
data received equal (frequency-weighted) consideration in
our computation of best-fit slopes.
This has a consequence insofar as the large-separation, low-frequency
events constituting the tail of the distribution reduce the magnitude
of the slope, as is readily seen in Figures~\ref{graph1} and \ref{graph2}.
One might well be inclined to truncate the data by excising the
tails and fixing the slopes by the (more-strongly-linear) low-separation
data for each $N$.
We did not do this because it would have entailed a systematic
discarding of data from pairs appearing near the upper limit of the
range, and thus would roughly reproduce the slope of greater magnitude
that one would expect to be associated with an {\it effective} upper
limit $N_{{\hbox{\small eff}}} < N$.
Viewed from this perspective, it is better to consider all points
rather than submit to this degree of uncertainty.
The second comment is that we have thus far adhered to the convention
of expressing all of our results in terms of the natural number $N$.
We shall now pass over to a characterization in terms of $\pi_{1}$
-- itself a function of $N$ -- which better suits our viewpoint
that the analysis of the distribution of twins is most meaningful
when considered in terms of the primes themselves.
\begin{figure}
\caption{Slope Data and our empirical fit~(\ref{cool-eh}).}
\label{graph3}
\end{figure}
In light of the above comments, we sketch in Figure~\ref{graph3} a plot of
the slopes (computed in the manner described) {\it versus}
$\log( \pi_{1} )$.
The trend seen on the graph may be well-described by
the function (remember that the error bars are understated)
\begin{equation}
- m(x) \simeq - (1.321 \pm 0.010) \big/ x \ , \quad
\mbox{for } x = \log( \pi_{1}(N) ) \, .
\label{cool-eh}
\end{equation}
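The coefficient in~(\ref{cool-eh}) can be re-derived from Table~\ref{table1} in a few lines (our own script; it uses a plain unweighted average of $m \cdot \log(\pi_1)$ rather than the error-weighted fit of the text, so the value comes out slightly different):

```python
import math

# (slope m, pi_1(N)) pairs from Table 1
rows = [
    (0.141667, 7793),       (0.122415, 45886),
    (0.114097, 97255),      (0.104126, 556396),
    (0.096421, 1175775),    (0.086700, 6596231),
    (0.081143, 13804822),   (0.075491, 44214960),
    (0.073150, 75860671),   (0.070965, 124538861),
    (0.070154, 157523559),  (0.069814, 190894477),
]

# Model m(x) ~ a / x with x = log(pi_1); estimate a by averaging m * x
a = sum(m * math.log(pi1) for m, pi1 in rows) / len(rows)
print(round(a, 3))
```

The resulting constant sits close to both the fitted $1.321$ and to $2 c_2 = 1.3232\ldots$.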
There are two amazing features of this functional form for the dependence
of the slope on $\pi_{1}$.
The first is that the factor which appears looks suspiciously like
$-2 c_2$, the twin primes constant!
This result will be confirmed in the next section.
The second feature is that
\[
\lim_{x \longrightarrow \infty} - m(x) = 0^{-} \ ,
\]
{\it i.e.,} as one progresses through the
infinite set of primes, the slope which governs the distribution
of twins does not crash through zero.
This is consistent with the Hardy--Littlewood conjecture
insofar as the twins become progressively more sparse within the
set of primes.
In addition, it is consistent with the Twin Prime Conjecture
in that the reciprocal of the slope admits the interpretation of
being the ``expected number of primes interspersed between a given twin
and the next twin in order.''
\begin{equation}
\bar{s} = \frac{1}{m} \ , \quad \mbox{for random fixed probability events.}
\end{equation}
Since $\bar{s}$ remains finite for all $\pi_{1}$, then we can
conjecture that wherever one happens to be in the infinite set of
primes it is possible to characterize the expected number of
primes that will be encountered on the way to the next twin.
This is also consistent with the Twin Prime Conjecture, although
unfortunately it is empirical and does not constitute a proof.
\section{Interpretive Framework}
Recall that, up to this point, all of our results have been
purely empirical.
Now, we will argue for their essential truth and consistency
beyond the range of our data.
Consider our approximation~(\ref{pi1}) for $\pi_{1}(N)$.
Making the additional draconian approximation that the integrand
is constant at its minimum value, and discarding a small term, we get
the oft-quoted estimate
\begin{equation}
\widetilde{\pi}_{1}(N) \sim \frac{N}{\log(N)}\, .
\label{blah1}
\end{equation}
In precisely the same manner we get
\begin{equation}
\widetilde{\pi}_2 (N) \sim 2 c_2 \, \frac{N}{\left(\log(N) \right)^2}\, .
\label{blah2}
\end{equation}
The entire set of prime numbers less than $N$ consists of the
$2 \times \pi_{2}(N)$ elements which occur together in twins and
$\pi_{1}(N) - 2 \times \pi_{2}(N)$ singletons.
Now, let us suppose that the set of twins is randomly interspersed
among the singletons.
This would imply that between each twin pair there will appear,
on average, $s_{0}(N)$ singletons, where
\begin{equation}
s_{0}(N) = \frac{ \pi_{1} - 2 \pi_{2} }{\pi_{2}} \ .
\label{blah3}
\end{equation}
Note that there is an essential distinction between $\bar{s}$ which
arises from the actual distribution of separations and $s_0$
which, in effect, assumes that the twins are evenly spaced.
That is, from a value of $\bar{s}( \pi_{1}(N) )$ one can infer
$m$ and thus the probability distribution of twin prime separations
characteristic of the set of primes less than $N$.
On the other hand, $s_{0}(N)$ is an average value in which no account
is taken of the details of the distribution and thus no more information
is contained in it.
In the approximation scheme developed in (\ref{blah1}) and (\ref{blah2}),
\begin{equation}
\widetilde{s}_{0} = \frac{\log(N) - 4 c_2}{2 c_2}
\simeq \frac{\log(N)}{2 c_2} \, .
\label{10}
\end{equation}
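The first equality in~(\ref{10}) is the direct substitution of (\ref{blah1}) and (\ref{blah2}) into (\ref{blah3}):
\begin{equation*}
\widetilde{s}_{0}
= \frac{\widetilde{\pi}_{1} - 2\,\widetilde{\pi}_{2}}{\widetilde{\pi}_{2}}
= \frac{\dfrac{N}{\log(N)} - \dfrac{4 c_2 N}{(\log(N))^2}}{\dfrac{2 c_2 N}{(\log(N))^2}}
= \frac{\log(N) - 4 c_2}{2 c_2}\, ,
\end{equation*}
with the common factor $N/(\log(N))^2$ cancelling.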
Furthermore, to very lowest-order
\begin{equation}
\log(N) \simeq \log( \widetilde{\pi}_{1} )
+ \log( \log( \widetilde{\pi}_{1} )) \, ,
\label{11}
\end{equation}
and hence
\begin{equation}
\widetilde{s}_{0} \approx \frac{ \log( \widetilde{\pi}_{1} )}{2 c_2} \, .
\label{12}
\end{equation}
Finally, taking this as the expected number of singleton primes
occurring between twin prime pairs for numbers less than or equal to $N$
we see immediately that
\begin{equation}
\widetilde{m} = \frac{1}{\widetilde{s}_{0}} \approx
\frac{2 c_2}{\log(\widetilde{\pi}_{1})}
\label{13}
\end{equation}
is completely consistent with the Prime Number Theorem,
the Hardy--Littlewood Conjecture, and with our empirical results.
As an aside, one might consider the effect of attempting to improve
upon the draconian approximation.
It turns out that any reasonable improvement merely results in the addition
of (small) constant terms which may be neglected in the limit of large $N$.
We are quite surprised that our empirical results yield the large $N$
limit with such accuracy.
Another test of the general consistency of our model is by comparison
with $m_0$, where
\begin{equation}
m_{0}(N) = \frac{1}{s_{0}(N)} =
\frac{\pi_{2}(N)}{\pi_{1}(N) - 2 \pi_{2}(N)} \, .
\label{14}
\end{equation}
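As a rough numerical illustration of this comparison (our own few lines of Python; the inputs are the last row of Table~\ref{table1} and the twin prime constant):

```python
import math

c2 = 0.661618          # twin prime constant
pi1 = 190894477        # pi_1(N), last row of Table 1 (N = 4020634603)
pi2 = 12_000_000       # pi_2(N), same row

m0 = pi2 / (pi1 - 2 * pi2)       # evenly-spaced estimate, Eq. (14)
m_fit = 2 * c2 / math.log(pi1)   # empirical fit, Eq. (13)

print(round(m0, 4), round(m_fit, 4))
```

Both values land near the directly fitted slope $0.069814$ from Table~\ref{table1}; $m_0$ overshoots by a few percent, as expected, since it ignores the shape of the separation distribution.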
Bearing in mind that we are modeling more accurately the actual
distribution of prime separations with $\bar{s}$ than with $s_0$,
we do not expect perfect agreement, but rather that the
general trend exposed by $s_0$ will be followed by $\bar{s}$ if
investigated beyond the range thus far examined.
We sketch below a plot of $m_{0}$ {\it vs.} $\log{\pi_{1}}$
using precise values for $\pi_1$ and $\pi_2$ computed by
T.R.~Nicely~\cite{nicely2}.
Note that we have made the small adjustments of decrementing
the published $\pi_{2}$'s by one
and the $\pi_{1}$'s by two
to take into account our skipping of the anomalous prime $2$
and the twin $(3\ 5)$.
We are quite encouraged by the correspondence of the data on the
graph, and believe that the distributional model does extend
well beyond the range of our present data.
\begin{figure}
\caption{$m_{0}$ {\it vs.} $\log( \pi_{1} )$.}
\label{graph4}
\end{figure}
\section{Conclusion}
We believe that we have constructed a novel characterization of the
distribution of twin primes.
The most essential feature of our approach is that we consider the
spacings of twins among the {\it primes} themselves, rather than
among the natural numbers.
Secondly, we modeled the distribution empirically --
without preconceptions -- and argued that for any given $N$
(larger than $10^4$, say) the twin primes appear amongst the sequence
of primes in a manner characteristic
of a completely random, fixed probability system.
Again working empirically, we noted that the ``fixed'' probability
varied with $N$, in a manner consistent with Theorems and with
Conjectures that are believed to hold.
We have parameterized the variation of the ``separation constant''
in terms of $\pi_1$, as suggested by our outlook, and have discovered
that it has a particularly simple functional form and is also
consistent with the established Theorems and Conjectures.
With this model for the distribution now in hand and assumed viable,
we are beginning to investigate other consequences.
These will be reported upon in a forthcoming paper~\cite{pktpinprep}.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Deep Time Series Models for Scarce Data}
\author{Qiyao Wang, Ahmed Farahat, Chetan Gupta, Shuai Zheng}
\address{Industrial AI Laboratory, Hitachi America, Ltd. R$\&$D \\
Santa Clara, CA, USA \\
$\{$Qiyao.Wang, Ahmed.Farahat, Chetan.Gupta, Shuai.Zheng$\}[email protected]
}
\begin{abstract}
Time series data have grown at an explosive rate in numerous domains and have
stimulated a surge of time series modeling research. A comprehensive comparison of different time series models, for a considered data analytics task, provides useful guidance on model selection for data analytics practitioners. Data scarcity is a universal issue that occurs in a vast range of data analytics problems, due to the high costs associated with collecting, generating, and labeling data as well as some data quality issues such as missing data. In this paper, we focus on the temporal classification/regression problem that attempts to build a mathematical mapping from multivariate time series inputs to a discrete class label or a real-valued response variable. For this specific problem, we identify two types of scarce data: scarce data with small samples and scarce data with sparsely and irregularly observed time series covariates. Observing that all existing works are incapable of utilizing the sparse time series inputs for proper modeling building, we propose a model called sparse functional multilayer perceptron (SFMLP) for handling the sparsity in the time series covariates. The effectiveness of the proposed SFMLP under each of the two types of data scarcity, in comparison with the conventional deep sequential learning models (e.g., Recurrent Neural Network, and Long Short-Term Memory), is investigated through mathematical arguments and numerical experiments \footnote{This paper is an extended version of the following papers written by the authors \cite{wang2019multilayer, wang2019remaining}.}.
\end{abstract}
\begin{keyword}
Time series analysis \sep Scarce data \sep Deep learning models \sep Functional data analysis
\end{keyword}
\end{frontmatter}
\section{Introduction}
Nowadays, time series data that consist of repeated data measurements over a bounded time range have become ubiquitous in numerous domains, such as meteorology, epidemiology, transportation, agriculture, industry, bioinformatics, and the world wide web. Time series data often have intrinsic temporal structures such as auto-correlation, trend, and seasonality. For example, for industrial equipment, the sensor measurements over the lifespan are correlated. Also, the sensor time series often gradually increase/decrease due to performance degradation. When modeling time series data, it is of paramount importance to leverage the internal temporal information to achieve good data analytical performance.
In the literature, three types of time series approaches have been widely explored, including (1) the classical time series models (e.g., the Auto-Regressive Integrated Moving Average and the autoregressive exogenous models) \cite{hamilton1994time}, (2) the sequential deep learning models such as the Recurrent Neural Network (RNN), the Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU) in the machine learning community \cite{mikolov2010recurrent, connor1994recurrent, chung2014empirical}, and (3) the emerging functional data analysis in the statistical field \cite{ramsay2006functional}. Fundamentally, the classical time series models exploit the autoregressive (AR) and moving average (MA) techniques to encode the dependency of later observations on the prior data and the regression error at previous timestamps. The deep sequential learning models use hidden states to hold the up-to-present memory and recurrently conduct the same transformation on the internal memory and input data along the timestamps to sequentially process the temporal information. Rather than considering a time series as a sequence of scalar-valued observations, functional data analysis (FDA) models treat the observed time series data as discrete realizations of a continuous underlying random function of time (i.e., a random curve) \cite{ramsay2006functional} and directly analyze a sample of such finitely evaluated random functions. This functional setting naturally accounts for temporal information.
There is substantial literature on modeling and estimation for functional data, including functional principal component analysis \citep{castro1986principal, silverman1996smoothed}, regression with functional responses, functional predictors or both \citep{cardot2003spline, cai2006prediction, muller2005time, yao2005functional}, functional classification and clustering \citep{james2003clustering,muller2005functional}, and functional quantile analysis \citep{ cardot2005quantile,ferraty2005conditional, chen2012conditional}. Ramsay \cite{ramsay2006functional} offers a comprehensive perspective of FDA methods.
The aforementioned time series analytic modeling techniques handle temporal information from different perspectives. Correspondingly, they impose varying requirements on data and achieve different performances under diverse scenarios. A comprehensive comparison of time series models, for a considered data analytics task, provides useful guidance on model selection for data analytics practitioners. Some papers \cite{adebiyi2014comparison, ho2002comparative} conducted comparative studies of RNN and ARIMA for the time series forecasting task, but the literature still lacks a comprehensive comparison between FDA and these approaches.
In this paper, we focus on the crucial problem of building supervised learning models in time series analysis. In particular, the goal is to build a mathematical mapping from one or several real-valued time series within a bounded period to a discrete class label or a real-valued response variable. Note that we consider the general case where the response variable is different from the temporal covariates. The classical time series models such as ARIMA and the autoregressive exogenous models are infeasible for the considered temporal classification/regression problem, as they are incapable of building mappings from time series to a heterogeneous response. Among the other two categories of time series models, several methods are feasible for the considered problem, the two most important being the sequential learning models \cite{mikolov2010recurrent, hochreiter1997long, chung2014empirical} and the Functional Multilayer Perceptron (FMLP), a counterpart of the classical MLP for continuous random functions over a continuum, such as time series \cite{rossi2005functional, rossi2005representation, wang2019remaining}.
Due to the nature of deep learning models, the sequential and functional deep learning models can both be used to train an end-to-end temporal classification/regression model with reasonable generalizability if we have access to a large number of training samples that cover the diverse variability in the data. However, due to the high cost associated with collecting, labeling, storing, processing, and modeling a large amount of training data, building effective deep learning models with a limited amount of samples (i.e., scarce data) is an appealing and meaningful topic in the time series analysis field. For the considered problem, we identify two types of scarce data. The ideal case is that the number of samples $N$ is sufficiently large and each time series input is densely and regularly observed, as shown in Figure \ref{e1}. Scarce data occur when either of these requirements is not satisfied: scarce data with a limited number of samples, as illustrated in Figure \ref{e2}, and scarce data with sparsely and irregularly evaluated time series covariates, as illustrated in Figure \ref{e3}.
This paper first presents a review of the existing sequential and functional temporal predictive models. Observing the existing methods' limitations in dealing with sparse and irregularly observed covariates, we introduce two sparse functional MLPs based on the univariate and multivariate sparse functional principal component analysis \cite{yao2005functional, happ2018multivariate, chiou2014multivariate}\footnote{A preliminary version of this method appears as \cite{wang2019multilayer}.}. The contributions of this paper are summarized as follows:
\begin{enumerate}[leftmargin=*]
\item We discuss the different types of data scarcity for the temporal classification/regression problem.
\item We introduce a new temporal predictive model specially designed to handle scenarios with sparsely and irregularly observed time series inputs.
\item We use mathematical arguments and numerical experiments to investigate each model's feasibility and efficiency in building temporal predictive models under various types of data scarcity.
\end{enumerate}
\begin{figure*}
\caption{Left: An example of large samples. Middle: Scarce data with a limited number of data pairs. Right: Scarce data with sparsely evaluated time series covariates.}
\label{lstm12}
\end{figure*}
The rest of the paper is organized as follows. Preliminaries, including the problem formulation and a review of the sequential learning models and of the conventional FMLP, which only works for dense and regular time series \cite{rossi2005functional, wang2019remaining}, are presented in Section \ref{sec2}. The proposed sparse FMLP for sparse time series inputs is described in Section \ref{sec2.4}. The performance of the candidate models under the two types of scarce data scenarios is investigated in Sections \ref{sec3} and \ref{sec4}, respectively. The paper is concluded in Section \ref{sec5}.
\section{Preliminaries}\label{sec2}
\subsection{Problem formulation}\label{sec2.1}
The goal of temporal predictive models is to build a mapping from multivariate time series covariates to a scalar response variable with good accuracy and generalizability by leveraging the temporal patterns and dependencies.
Suppose that we have access to $N$ independent training samples. For each subject $i\in \{1,...,N\}$, $R$ features are repeatedly measured within a bounded time range $\mathcal{T}\subseteq \mathbb{R}$. In practice, the measuring timestamps can vary across different covariates and different subjects, so the subject and feature indexes need to be included in the mathematical notation. In particular, the observed data for the $r$-th covariate of subject $i$ are represented by an $M_{i,r}$-dimensional vector $\mathbf{Z}^{(i,r)} = [Z^{(i,r)}_{1}, ...,Z^{(i,r)}_{j},...,Z^{(i,r)}_{M_{i,r}}]^T$, which corresponds to observations at timestamps $T^{(i,r)}_{1},...,T^{(i,r)}_{j},...,T^{(i,r)}_{M_{i,r}}$, with $T^{(i,r)}_{j} \in \mathcal{T}$ for $j=1,...,M_{i,r}$. The response variable is $Y_i$, which is binary for temporal classification and real-valued for temporal regression. In summary, the observed data are $\{\mathbf{Z}^{(i,1)}, ..., \mathbf{Z}^{(i,R)}, Y_i\}_{i=1}^N$, based on which the temporal predictive models aim at constructing the mapping in Eq.\eqref{setting1}.
\begin{equation}
Y_i = F(\mathbf{Z}^{(i,1)}, ..., \mathbf{Z}^{(i,R)}).\label{setting1}
\end{equation}
The sample size $N$, the number of observations per curve $M_{i,r}$, and the measuring timestamps $T^{(i,r)}_{1},...,T^{(i,r)}_{j},...,T^{(i,r)}_{M_{i,r}}$ jointly determine the level of data availability. A large data scenario, depicted in Figure \ref{e1}, means that not only are $N$ and $M_{i,r}$ sufficiently large but also the measuring timestamps uniformly cover the time domain $\mathcal{T}$ for all subjects and covariates. Scarce data occur when at least one of these requirements is not satisfied. In particular, scarce data with a limited number of data points occur if the sample size $N$ is small, whereas scarce data with sparsely evaluated time series features correspond to situations where $M_{i,r}$ is small and/or there exist large gaps among the measuring times $T^{(i,r)}_{1},...,T^{(i,r)}_{j},...,T^{(i,r)}_{M_{i,r}}$, for at least one subject and one covariate. This paper focuses on examining the advantages and disadvantages of several time series models in solving the problem in Eq.\eqref{setting1} given scarce data. The candidate models are presented in the remainder of this section.
\subsection{Sequential learning models}\label{sec2.2}
Sequential learning models such as the Recurrent Neural Network (RNN), the Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU) are generalizations of the fully connected MLP that have internal hidden states to process the sequences of inputs \cite{hochreiter1997long}. The key idea is that they employ a series of MLP-based computational cells with the same architecture and parameters to build a directed neural network structure. Any individual cell in the network takes the actual observations at the current index and the hidden states obtained at the previous step (i.e., memory) to produce the updated hidden states that serve as the input for the next computational cell. When the goal is to predict the scalar response associated with the time series, the achieved hidden states at the last index are fed into a nonlinear function to compute the output state. In RNN, each computational cell has one MLP, while there are multiple interacting MLPs in each recurrent unit in LSTM \cite{mikolov2010recurrent, hochreiter1997long}.
The sequential learning models, originally designed for text data mining, are capable of capturing the order information and the interactions among observations in sequential inputs \cite{mikolov2010recurrent, hochreiter1997long}. Recently, these models have also been frequently adopted to model time series data \cite{zheng2017long, adebiyi2014comparison, ho2002comparative}. However, when using them to handle time series data, it is explicitly required that the multivariate inputs are evaluated at an equally-spaced time grid shared by all subjects, because the sequential learning models cannot encode the concrete measuring timestamps associated with the individual observations in the time series inputs. Mathematically, in the observed data samples $\{\mathbf{Z}^{(i,1)}, ..., \mathbf{Z}^{(i,R)}, Y_i\}_{i=1}^N$, the covariate vectors $\mathbf{Z}^{(i,r)}$ must be of the same length $M$ across all subjects and all covariates, and evaluated at an equally-spaced time grid within the time range $\mathcal{T}$, denoted as $T_{1},...,T_{j},...,T_{M}$. In practice, data pre-processing procedures such as interpolation are often implemented to obtain the required regular time series when the raw data is sparse and irregular. It is noteworthy that these conventional pre-processing techniques significantly contaminate the training data when the individual time series are highly sparse and irregular.
We use RNN as an example to present the mathematics behind the sequential learning models. Suppose that $\mathbf{Z}^{(i)}_j=[Z^{(i,1)}_{j},...,Z^{(i,r)}_{j},...,Z^{(i,R)}_{j}]^T$ is the $R$-dimensional vector containing the $R$ features at time $T_{j}$. Let the number of hidden units in the MLP be $L_{\text{RNN}}$ and the $L_{\text{RNN}}$-dimensional hidden state at time $T_{j}$ be $\mathbf{h^{(i)}_j}$. Starting from an initial hidden state $\mathbf{h^{(i)}_0}$ (e.g., the zero vector), the following calculation is recursively conducted for $j=1,...,M$
\begin{equation}\label{SeqDL}
\mathbf{h^{(i)}_j}=U_{\text{act1}} (\mathbf{W_{hh}}\mathbf{h^{(i)}_{j-1}} + \mathbf{W_{hz}}\mathbf{Z}^{(i)}_j),
\end{equation}
where $\mathbf{W_{hh}}$ is an $L_{\text{RNN}} \times L_{\text{RNN}}$ parameter matrix, $\mathbf{W_{hz}}$ is an $L_{\text{RNN}} \times R$ parameter matrix, and $U_{\text{act1}}(\cdot)$ is a non-linear activation function. Let $\mathbf{W_{yh}}$ denote the parameter matrix that associates the last hidden state $\mathbf{h^{(i)}_{M}}$ with the response variable $Y_i$. In the output layer, the output is computed by
\begin{equation}\label{SeqDL2}
Y_i= U_{\text{act2}}(\mathbf{W_{yh}}\mathbf{h^{(i)}_{M}}),
\end{equation}
where $U_{\text{act2}}(\cdot)$ is a non-linear activation function.
\subsection{Functional data analysis and dense functional MLP}\label{sec2.3}
Functional data analysis (FDA) is an emerging branch of statistics that specializes in the analysis and theory of data that evolve dynamically over a continuum. In general, FDA deals with data subjects that can be viewed in functional form $X_i(t)$ over a continuous index $t$. Frequently encountered FDA-type data include time series data, tracing data such as handwriting, and image data \cite{ramsay2006functional}.
From the FDA modeling perspective, the $M_{i,r}$ observations of the $r$-th time series feature of subject $i$ (i.e., $\mathbf{Z}^{(i,r)}$) are treated as discretized realizations of a continuous underlying curve $X^{(i,r)}(t)$, contaminated with zero-mean random errors:
\begin{equation}
Z^{(i,r)}_{j}=X^{(i,r)}(T^{(i,r)}_{j}) + \epsilon_{i,r,j}.\label{setting1.5}
\end{equation}
FDA predictive models directly handle the continuous time series features and solve the problem defined in Eq.\eqref{setting1} by learning
\begin{equation}
Y_i = F(X^{(i,1)}(t), ...., X^{(i,R)}(t)).\label{setting2}
\end{equation}
Under certain assumptions on the smoothness of the underlying random functions $X^{(i,r)}(t)$ (e.g., continuous second derivatives exist), the conventional functional linear classification or regression models \cite{ramsay2006functional, yao2010functional} assume and learn the unknown real-valued parameters in a linear-formed mapping
\begin{equation} \label{flm}
Y_i = b + \sum_{r=1}^{R} \int_{t\in \mathcal{T}}W_{r}(\ve{\beta}_{r},t)X^{(i,r)}(t)dt,
\end{equation}
where $b\in \mathbb{R}$ is the unknown intercept, $\ve{\beta}_{r}$ is a finite-dimensional vector that quantifies the parameter function $W_{r}(\ve{\beta}_{r},t)$, and $\int_{t\in \mathcal{T}}W_{r}(\ve{\beta}_{r},t)X^{(i,r)}(t)dt$ is a generalization of the vector inner product to the $L^2(\mathcal{T})$ space that aggregates the time-varying impact of the time series input on the response. Given Eq.\eqref{flm}, it can be seen that, unlike the sequential learning models, the functional models do not require equally-spaced time series observations and can be effectively trained end-to-end as long as the integral can be consistently approximated based on the actual observations $\mathbf{Z}^{(i,r)}$.
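As a concrete illustration of Eq.\eqref{flm}, the sketch below parameterizes each weight function $W_r(\ve{\beta}_r, t)$ as a polynomial in $t$ (one possible choice; the text leaves the parameterization generic) and approximates the $L^2$ inner product by trapezoidal quadrature on a dense regular grid; all function names are illustrative:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal quadrature of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def flm_predict(b, betas, X_curves, t_grid):
    """Functional linear model of Eq. (flm): b + sum_r <W_r, X_r>.

    b        : scalar intercept.
    betas    : list of R coefficient vectors (lowest order first), one per
               covariate; an illustrative polynomial parameterization of W_r.
    X_curves : list of R arrays, the r-th curve evaluated on t_grid.
    t_grid   : shared dense, regular evaluation grid on the domain T.
    """
    y = b
    for beta_r, x_r in zip(betas, X_curves):
        W_r = np.polyval(beta_r[::-1], t_grid)   # W_r(beta_r, t) on the grid
        y += trapezoid(W_r * x_r, t_grid)        # int W_r(t) X(t) dt
    return y

t = np.linspace(0.0, 1.0, 201)
# One covariate X(t) = t with constant weight W(t) = 1: the integral is 1/2.
y_hat = flm_predict(0.0, [np.array([1.0])], [t.copy()], t)
```

The only requirement the quadrature places on the data is that the integral be well approximated, which is why irregular (but still dense) grids also work.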
To capture more complex relationships, a functional MLP that embeds the linear calculation in Eq.\eqref{flm} into the network structure of the conventional MLP \cite{pal1992multilayer} was introduced by \cite{rossi2002functional, rossi2005representation} and later explored further by \cite{wang2019remaining}. In particular, the functional MLP introduces a new functional neuron that consists of the linear transformation in Eq.\eqref{flm} followed by a non-linear activation step, as shown in Figure \ref{FNN_exp}. To build a functional MLP, multiple functional neurons that take functional inputs and calculate a numerical output are placed on the first layer. The outputs from the functional layer are supplied to subsequent numerical neuron layers, whose inputs and outputs are both scalar values, for further manipulation until the output layer that holds the response variable. An example FMLP with three functional neurons on the first layer and two numerical neurons on the second layer is illustrated in Figure \ref{FNN_exp} \cite{wang2019remaining}.
For simplicity of mathematical notation, let's consider the case with $K$ functional neurons in the first layer and one numerical neuron in the second layer. Let $U_k(\cdot)$ be an activation function from $\mathbb{R}$ to $\mathbb{R}$ and $a_k, b_k \in \mathbb{R}$ for $k=1,...,K$. The weight function for the $r$-th functional feature in the $k$-th functional neuron is assumed to be quantified by a finite-dimensional unknown vector $\ve{\beta}_{k,r}$ and is denoted as $W_{k,r}(\ve{\beta}_{k,r}, t)$ for $k=1,...,K$ and $r=1,...,R$. Let $\ve{\beta}=[\ve{\beta}_{1,1},...,\ve{\beta}_{1,R},...,\ve{\beta}_{K,1},...,\ve{\beta}_{K,R}]^T$ and $\mathbf{X^{(i)}}=[X^{(i,1)}(t), \ldots, X^{(i,R)}(t)]^T$. Then the scalar output of the first layer is
\begin{equation}
H(\mathbf{X^{(i)}}, \ve{\beta})=\sum_{k=1}^{K}a_k U_k(b_k + \sum_{r=1}^{R} \int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r}, t)X^{(i,r)}(t)dt).\label{FN1}
\end{equation}
\begin{figure}
\caption{The architecture of a functional MLP with three functional neurons on the first layer and two numerical neurons on the second layer \cite{wang2019remaining}.}
\label{FNN_exp}
\end{figure}
For more details about the functional MLP, including the specifications of the weight functions $W_{k,r}(\ve{\beta}_{k,r}, t)$ in the functional neurons and the assumptions and implementation of the gradient descent based training procedure, we refer the reader to \cite{rossi2002functional, wang2019remaining}. It is noteworthy that this conventional functional MLP requires that the number of observations per curve is large enough and that the individual observations are relatively regularly spaced (i.e., Figures \ref{e1} and \ref{e2}), as stated in Theorem 1 in \cite{rossi2002functional}. In the next section, we propose an effective way of generalizing the functional MLP to sparsely and irregularly observed time series inputs.
\section{Proposed sparse functional MLP}\label{sec2.4}
When there is a limited amount of irregularly-spaced data available per feature curve, as shown in Figure \ref{e3}, the existing models described in the previous section are no longer feasible solutions. The deep sequential models need dense and regular observations to model the temporal information through the recurrent network structure. A common practice is to conduct data interpolation to obtain the required dense regular data; however, conventional interpolation techniques such as the cubic B-spline and Gaussian process regression often produce biased curve estimates in this regime. Analogously, the numerical integration in the conventional FMLP is problematic under sparse data scenarios. In this section, we propose a novel algorithm for generalizing the functional MLP to handle sparse functional data, wherein for a given subject there are multiple observations available over time and these observations are sparsely and irregularly distributed within the considered time range.
\subsection{Sparse functional MLP based on univariate PACE}\label{sec2.4.1}
To derive and define a functional neuron that is calculable for sparse functional data, we propose to go one step further than the existing functional neuron in Eq.\eqref{FN1} with the help of the functional principal component analysis \cite{hall2006properties, silverman1996smoothed, yao2005functional}. Let the $r$-th feature be a random process with an unknown mean function $\mu_r(t)$ and an unknown covariance function $G_r(t,t^\prime)$, $t, t^\prime \in \mathcal{T}$. Mathematically, the non-increasing eigenvalues $\{\lambda_{r,p}\}_{p=1}^\infty$ and the corresponding eigenfunctions $\{\phi_{r,p}(t)\}_{p=1}^\infty$ are solutions of
\begin{equation}
\lambda \phi(t) = \int_{t^\prime \in \mathcal{T}} G_r(t,t^\prime)\phi(t^\prime)dt^\prime.\label{sparse1}
\end{equation}
The eigenfunctions are orthonormal in the sense that $\int_{t\in \mathcal{T}} \phi_{r,p}(t)\phi_{r,p^\prime}(t)dt=0$ for $p\neq p^\prime$ and $\int_{t\in \mathcal{T}} \phi_{r,p}^2(t)dt=1$. Based on the orthonormality of the eigenfunctions, the $r$-th feature of subject $i$ can be represented as
\begin{equation}
X^{(i,r)}(t) = \sum_{p=1}^\infty \eta_{i,r,p} \phi_{r,p}(t),\label{sparse2}
\end{equation}
where $\eta_{i,r,p}= \int_{t\in \mathcal{T}}X^{(i,r)}(t)\phi_{r,p}(t) dt$. A common practice for basis expansion-based methods in FDA \cite{pomann2015two,muller2005functional} is to truncate the expansion at the first several directions. This practice is supported by the fact that the core information regarding $X^{(i,r)}(t)$ is mostly captured by the first several basis functions when the curve is smooth to a certain degree. Rigorous theoretical justifications can be found in \cite{yao2005functional}. For the $r$-th feature, truncating at the first $P_r$ dimensions and plugging the approximated representation of $X^{(i,r)}(t)$ into the FMLP in Eq.\eqref{FN1}, we have
\begin{multline}
\label{sparse33}
H(\mathbf{X^{(i)}}, \ve{\beta}) \approx \sum_{k=1}^{K}a_k U_k([b_k + \sum_{r=1}^{R} \int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r}, t) \sum_{p=1}^{P_r} \eta_{i,r,p} \phi_{r,p}(t)dt]).
\end{multline}
Practically, $P_r$ can be selected using the fraction of variance explained approach, AIC or BIC criterion based approach, or the leave-one-curve-out cross-validation method \cite{peng2009geometric}.
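For densely observed curves, the eigenproblem in Eq.\eqref{sparse1} can be discretized on the observation grid and the truncation level chosen by the fraction-of-variance-explained rule. The sketch below, with illustrative function names, follows this route:

```python
import numpy as np

def fpca_fve(X, t_grid, fve=0.95):
    """Discretized univariate FPCA for densely observed curves.

    X : (N, M) array of N curves on the shared regular grid t_grid.
    Returns eigenvalues (non-increasing), eigenfunctions (columns, on the
    grid), and the truncation level P from the fraction-of-variance rule.
    """
    dt = t_grid[1] - t_grid[0]
    Xc = X - X.mean(axis=0)                       # center at the sample mean curve
    G = Xc.T @ Xc / (X.shape[0] - 1)              # sample covariance surface G(t, t')
    w, v = np.linalg.eigh(G * dt)                 # discretized eigenproblem, Eq. (sparse1)
    w, v = np.clip(w[::-1], 0.0, None), v[:, ::-1]
    phi = v / np.sqrt(dt)                         # rescale so int phi_p(t)^2 dt = 1
    P = int(np.searchsorted(np.cumsum(w) / np.sum(w), fve) + 1)
    return w, phi, P

# Two-component example: X_i(t) = a_i sqrt(2) sin(2 pi t) + b_i sqrt(2) cos(2 pi t).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 201)
a = rng.normal(0.0, 2.0, size=(200, 1))           # score variance lambda_1 = 4
b = rng.normal(0.0, 1.0, size=(200, 1))           # score variance lambda_2 = 1
X = a * np.sqrt(2) * np.sin(2 * np.pi * t) + b * np.sqrt(2) * np.cos(2 * np.pi * t)
lam, phi, P = fpca_fve(X, t)
```

With two orthonormal components carrying variances 4 and 1, the 95\% rule keeps both, and the remaining eigenvalues are numerically zero.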
Given Eq.\eqref{sparse33}, it can be seen that the model still cannot be applied directly, as we cannot consistently estimate $\eta_{i,r,p}=\int_{t\in \mathcal{T}}X^{(i,r)}(t)\phi_{r,p}(t) dt$ from the sparse observations $\mathbf{Z}^{(i,r)}$. Borrowing the idea from sparse data Principal Components Analysis through Conditional Expectation (PACE) \cite{yao2005functional}, we propose to estimate $\eta_{i,r,p}$ by its best linear unbiased predictor, $E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$. This is a reasonable choice, because if we take the randomness of the measuring time $t$ into account, then
\begin{equation}
E_t[E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]] = E[\eta_{i,r,p}]. \label{sparse4}
\end{equation}
That is to say, the random quantities $E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$ and $\eta_{i,r,p}$ share the same expectation. Motivated by the special case where the random scores $\eta_{i,r,p}$ and the random errors $\epsilon_{i,r,j}$ are jointly Gaussian distributed, given Eq.\eqref{setting1.5} and \eqref{sparse2}, we can obtain the explicit formula for $E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$,
\begin{multline}
\label{sparse5}
E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}] =\gamma_{r,p} +
\lambda_{r,p}\ve{\phi}_{i,r,p}^T{[\ve{\Phi}_{i,r}\text{diag}(\ve{\lambda}_r)\ve{\Phi}_{i,r}^T+\sigma_r^2 I]}^{-1}(\mathbf{Z}^{(i,r)}-\ve{\mu}_{i,r}),
\end{multline}
where $\gamma_{r,p}=\int \phi_{r,p}(t)\mu_{r}(t)\,dt$, $\ve{\lambda}_r=[\lambda_{r,1},...,\lambda_{r,P_r}]^T$, and $\ve{\phi}_{i,r,p}$ is the eigenfunction $\phi_{r,p}(t)$ evaluated at the $M_{i,r}$ observation time points, i.e., $\ve{\phi}_{i,r,p}=[\phi_{r,p}(T^{(i,r)}_{1}),...,\phi_{r,p}(T^{(i,r)}_{M_{i,r}})]^T$. $\ve{\Phi}_{i,r}$ is an $M_{i,r} \times {P_r}$ matrix with the $p$-th column being $\ve{\phi}_{i,r,p}$, $\ve{\mu}_{i,r}$ is the mean function $\mu_r(t)$ evaluated at the same time points, and $\sigma_r$ is the standard deviation of the random noise $\epsilon_{i,r,j}$. By plugging Eq.\eqref{sparse5} back into Eq.\eqref{sparse33}, we achieve the output of the first layer of our \textbf{\textit{proposed sparse functional MLP}} with $K$ neurons,
\begin{multline}
\label{sparse55}
\tilde{H}(\mathbf{X^{(i)}}, \ve{\beta})=\sum_{k=1}^{K}a_k U_k([b_k + \sum_{r=1}^{R} \int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r}, t)\sum_{p=1}^{P_r} E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]\phi_{r,p}(t)dt]).
\end{multline}
The sparse functional neuron in the above equation plays the same role as the functional neuron in Eq.\eqref{FN1} for the conventional functional MLP \cite{rossi2002functional}.
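The conditional expectation in Eq.\eqref{sparse5} is a small linear-algebra computation once the eigencomponents are available. A sketch, assuming the population quantities $\mu_r$, $\phi_{r,p}$, $\lambda_{r,p}$, $\sigma_r$, and $\gamma_{r,p}$ are known (in practice their estimates are plugged in, as described below):

```python
import numpy as np

def pace_scores(Z_i, mu_i, Phi_i, lam, sigma, gamma):
    """BLUP E[eta_{i,r,p} | Z^{(i,r)}] of Eq. (sparse5) for one sparse curve.

    Z_i   : (M_i,) sparse observations; mu_i : mean function at the same
            timestamps.
    Phi_i : (M_i, P) eigenfunctions at the same timestamps (matrix Phi_{i,r}).
    lam   : (P,) eigenvalues; sigma : noise s.d.; gamma : (P,) mean scores.
    """
    Sigma_Z = Phi_i @ np.diag(lam) @ Phi_i.T + sigma**2 * np.eye(len(Z_i))
    return gamma + lam * (Phi_i.T @ np.linalg.solve(Sigma_Z, Z_i - mu_i))

# Four irregular timestamps, two known orthonormal eigenfunctions.
t_obs = np.array([0.1, 0.3, 0.6, 0.9])
Phi = np.column_stack([np.sqrt(2.0) * np.sin(2 * np.pi * t_obs),
                       np.sqrt(2.0) * np.cos(2 * np.pi * t_obs)])
lam = np.array([4.0, 1.0])
eta_true = np.array([1.5, -0.5])
Z = Phi @ eta_true                          # noiseless sparse observations
eta_hat = pace_scores(Z, np.zeros(4), Phi, lam, 0.1, np.zeros(2))
```

As the noise level grows, the predictor shrinks toward the mean scores $\gamma_{r,p}$, which is the usual BLUP behavior.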
Before training the proposed sparse functional MLP model, we need to first estimate the unknown values in Eq.\eqref{sparse33} and Eq.\eqref{sparse5}, including the eigenfunctions $\phi_{r,p}(t)$, the eigenvalues $\lambda_{r,p}$, and the standard deviation of the random error $\sigma_r$. They can be estimated using the restricted maximum likelihood estimation in \cite{peng2009geometric}, the local linear smoothing based method in \cite{yao2005functional}, or the EM algorithm in \cite{james2000principal}. Since this is not the main focus of this paper, we skip the details. The point is that our algorithm needs consistent estimates of the functional components; all three methods have been shown to produce consistent estimates of these eigencomponents for sparse functional data and can be used to perform the estimation step. In our numerical experiments in Section \ref{sec4}, we used the local linear smoothing based method in \cite{yao2005functional}. Let's denote the estimated values as $\hat{\phi}_{r,p}(t)$, $\hat{\lambda}_{r,p}$ and $\hat{\sigma}_r$. Then we have the empirical counterparts of Eq.\eqref{sparse33} and Eq.\eqref{sparse5}, denoted as $\hat{\tilde{H}}(\mathbf{X^{(i)}}, \ve{\beta})$ and $\hat{E}[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$.
After the estimation step, similar to the dense functional MLP in \cite{rossi2002functional}, we propose to use gradient based algorithms to train our sparse functional MLP model. The forward propagation step proceeds as follows. First, in the functional neurons, the integral $\int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r}, t)\sum_{p=1}^{P_r}\hat{E}[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}] \hat{\phi}_{r,p}(t)dt$ is approximated by numerical integration techniques. The first layer's output $\hat{\tilde{H}}(\mathbf{X^{(i)}}, \ve{\beta})$ can then be calculated by the formula in Eq.\eqref{sparse33}. The forward propagation calculation in subsequent numerical layers is straightforward. In the backward propagation step, the partial derivatives from the output layer up to the second hidden layer (i.e., the numerical layer after the functional neuron layer) can be easily calculated as before. However, it is essential to ensure that the partial derivatives of the values at the second layer (i.e., $\hat{\tilde{H}}(\mathbf{X^{(i)}}, \ve{\beta})$) with respect to the parameters $\ve{\beta}$ exist. This requires that $\partial W_{k,r}(\ve{\beta}_{k,r},t)/\partial \beta_{k,r,q}$ exists almost everywhere for $t\in \mathcal{T}$. Under this assumption, $\partial \hat{\tilde{H}}(\mathbf{X^{(i)}}, \ve{\beta})/\partial \beta_{k,r,q}$ for any $k=1,...,K;r=1,...,R; q=1,...,Q_r$ can be estimated using numerical approximations of the following quantity
\begin{multline}
\label{sparse6}
\frac{\partial\hat{\tilde{H}}(\mathbf{X^{(i)}}, \ve{\beta})}{\partial \beta_{k,r,q}} \approx
a_k U_k^\prime([b_k + \sum_{r=1}^{R} \int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r}, t)\sum_{p=1}^{P_r} \hat{E}[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]\hat{\phi}_{r,p}(t)dt])\\
\times \int_{t\in \mathcal{T}}\frac{\partial W_{k,r}(\ve{\beta}_{k,r}, t)}{\partial \beta_{k,r,q}}\sum_{p=1}^{P_r} \hat{E}[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]\hat{\phi}_{r,p}(t)dt.
\end{multline}
To justify the validity of our proposal, we have provided brief arguments regarding the consistency of using the estimated values as well as the equivalence between our sparse FMLP and the dense FMLP under dense and regular data scenarios.
\subsection{Sparse functional MLP based on multivariate FPCA}\label{sec2.4.2}
The functional neurons in Eq.\eqref{sparse33} (i.e., an equivalent of the dense functional neuron in Section \ref{sec2.3}) and Eq.\eqref{sparse55} (i.e., the proposed sparse functional neuron) are based on the univariate functional principal component analysis. When FPCA is conducted separately, the joint variations among the $R$ variables $\{X^{(i,1)}(t), \ldots, X^{(i,R)}(t)\}$ are not captured, which leaves the random scores from different variables (i.e., $\eta_{i,r,p}$ and $E[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$) correlated and causes multicollinearity issues during modeling. An example of correlated random scores is illustrated in Figure \ref{corr_socre} in Section \ref{sec4}. To overcome this issue, we propose functional neural networks based on the multivariate FPCA \citep{chiou2014multivariate, happ2018multivariate}, described as follows.
Let $\mathbf{W}_{k}(\ve{\beta}_{k}, t) = [W_{k,1}(\ve{\beta}_{k,1}, t),\ldots,W_{k,R}(\ve{\beta}_{k,R}, t)]^T$; then Eq.\eqref{FN1} can be written in the following vector format
\begin{equation}
\label{FN1_m}
H(\mathbf{X^{(i)}}, \ve{\beta}) = \sum_{k=1}^{K}a_k U_k([b_k + \int_{t\in \mathcal{T}}\mathbf{W}_{k}(\ve{\beta}_{k}, t)^T \mathbf{X}^{(i)}(t)dt]).
\end{equation}
Let's denote the $R\times R$ matrix that quantifies the covariance of each variable and the joint variation between variables as $\mathbf{G}(t, t^\prime)$, with the $(r,r^\prime)$-th element being $G_{r,r^\prime}(t, t^\prime)=\text{Cov}(X^{(i,r)}(t), X^{(i,r^\prime)}(t^\prime))$. According to the multivariate FPCA, there exists a set of $R$ dimensional orthonormal eigenfunction vectors $\tilde{\ve{\phi}}_p(t)=[\tilde{\phi}_{1,p}(t),...,\tilde{\phi}_{R,p}(t)]^T$, for $p=1,...,\infty$, such that
\begin{equation} \label{mfpca1}
\int \mathbf{G}(t, t^\prime) \tilde{\ve{\phi}}_p(t^\prime) d t^\prime = \tilde{\lambda}_p \tilde{\ve{\phi}}_p(t), \text{with } \lim_{p\rightarrow \infty}\tilde{\lambda}_p = 0,
\end{equation}
where $\tilde{\lambda}_p \in \mathbb{R}$ is the eigenvalue corresponding to the $p$-th eigenfunction vector $\tilde{\ve{\phi}}_p(t)$. Accordingly, it has been shown that the $R$ dimensional data $\mathbf{X}^{(i)}(t)$ can be represented by
\begin{equation} \label{mfpca2}
\mathbf{X}^{(i)}(t) = \sum_{p=1}^{\infty} \tilde{\eta}_{i,p}\tilde{\ve{\phi}}_p(t) \approx \sum_{p=1}^{P} \tilde{\eta}_{i,p}\tilde{\ve{\phi}}_p(t),
\end{equation}
where $\tilde{\eta}_{i,p}=\sum_{r=1}^R \int X^{(i,r)}(t)\tilde{\phi}_{r,p}(t)dt$. Comparing the separate FPCA in Section \ref{sec2.4.1} with the multivariate FPCA in Eq.\eqref{mfpca1}~\eqref{mfpca2}, it can be seen that univariate FPCA is a special case of the multivariate FPCA that assumes zero joint variation between variables. Theoretically, we expect the functional regression models in this section to outperform those in Section \ref{sec2.4.1}. The magnitude of improvement is affected by the size of joint variation between variables and the complexity of the underlying mapping.
Given Eqs.~\eqref{FN1_m}, \eqref{mfpca1}, and \eqref{mfpca2}, when all the variables $\{X^{(i,r)}(t), i=1,...,N; r=1,...,R\}$ are densely and regularly evaluated, the multivariate functional neuron is defined as follows
\begin{multline}
\label{sparse33_m}
H_{M}(\mathbf{X^{(i)}}, \ve{\beta}) \approx \sum_{k=1}^{K}a_k U_k([b_k + \int_{t\in \mathcal{T}} \mathbf{W}_{k}(\ve{\beta}_{k}, t)^T \sum_{p=1}^{P} \tilde{\eta}_{i,p} \tilde{\ve{\phi}}_p(t)dt]).
\end{multline}
For sparsely evaluated data, analogous to Eq.\eqref{sparse33}, the sparse multivariate function neuron is
\begin{equation*}
\resizebox{1.0\hsize}{!}{
\label{sparse33_ms}
$H_{M}(\mathbf{X^{(i)}}, \ve{\beta}) \approx \sum_{k=1}^{K}a_k U_k([b_k + \int \mathbf{W}_{k}(\ve{\beta}_{k}, t)^T \sum_{p=1}^{P} E[\tilde{\eta}_{i,p}|\mathbf{Z}^{(i,1)},...,\mathbf{Z}^{(i,R)}] \tilde{\ve{\phi}}_p(t)dt])$
}
\end{equation*}
where the conditional score is computed as
\begin{equation}
\label{sparse33_ms2}
E[\tilde{\eta}_{i,p}|\mathbf{Z}^{(i,1)},...,\mathbf{Z}^{(i,R)}]=\sum_{r=1}^R E[\tilde{\eta}_{i,p}|\mathbf{Z}^{(i,r)}].
\end{equation}
Note that $E[\tilde{\eta}_{i,p}|\mathbf{Z}^{(i,r)}]$ is calculated by replacing $\phi_{r,p}(t)$ and $\lambda_{r,p}$ in Eq.\eqref{sparse5} with $\tilde{\phi}_{r,p}(t)$ and $\tilde{\lambda}_{p}$, respectively. Based on the multivariate functional neurons discussed above, we propose the multivariate FMLP by embedding these neurons in the architecture in Figure \ref{FNN_exp}.
Next, we describe how to numerically implement the multivariate FMLP. The core theoretical result in \cite{happ2018multivariate} is that there is an analytical relationship between the univariate FPCA and the multivariate FPCA, which implies that $\tilde{\phi}_{r,p}(t)$ and $\tilde{\lambda}_{p}$ can be calculated by estimating $\phi_{r,p}(t)$ and $\lambda_{r,p}$ for all $r=1,...,R$. In particular, let the estimated score from the univariate FPCA be $\hat{s}_{i,r,p}=\hat{\eta}_{i,r,p}$ or $\hat{s}_{i,r,p}=\hat{E}[\eta_{i,r,p}|\mathbf{Z}^{(i,r)}]$, for $i=1,...,N; r=1,...,R; p=1,...,P_r$. Let $P_+=\sum_{r=1}^R P_r$ and let $\ve{\Xi}$ be a $P_+ \times P_+$ matrix consisting of blocks $\ve{\Xi^{(rr^\prime)}} \in \mathbb{R} ^{P_{r}\times P_{r^\prime}}$ with the $(p, p^\prime)$-th entry being
\begin{equation}
\begin{split}
\Xi_{pp^\prime}^{(rr^\prime)} & = \text{Cov}(\hat{s}_{i,r,p}, \hat{s}_{i,r^\prime,p^\prime})\\
& = \frac{1}{N-1}\sum_{i=1}^N (\hat{s}_{i,r,p} - \bar{\hat{s}}_{r,p} )(\hat{s}_{i,r^\prime,p^\prime} - \bar{\hat{s}}_{r^\prime,p^\prime} ).
\end{split}
\end{equation}
Let's conduct eigen decomposition on matrix $\ve{\Xi}$ and denote the $p$-th eigenvector as $\mathbf{c}_p$. Note that $\mathbf{c}_p$ can be considered as a vector consisting of $R$ blocks, with the $r$-th block being denoted as $[\mathbf{c}_p]^{(r)} \in \mathbb{R}^{P_r}$. According to the proposition in \cite{happ2018multivariate}, we can estimate $\tilde{\phi}_{r,p}(t)$ by
\begin{equation}
\hat{\tilde{\phi}}_{r,p}(t) = \sum_{m=1}^{P_r}[\mathbf{c}_p]_m^{(r)} \hat{\phi}_{r,m}(t),
\end{equation}
where $\hat{\phi}_{r,m}(t)$ is the eigenfunction obtained from conducting the univariate FPCA on the $r$-th variable. The joint eigenvalue $\tilde{\lambda}_{p}$ equals the $p$-th eigenvalue of the matrix $\ve{\Xi}$.
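The score-covariance construction above reduces to an eigendecomposition of the covariance of the stacked score matrix. A sketch with an illustrative helper name:

```python
import numpy as np

def mfpca_from_scores(scores):
    """Joint eigenvalues and eigenvector blocks from univariate FPCA scores.

    scores : list of R arrays, the r-th of shape (N, P_r), holding the
             estimated scores hat{s}_{i,r,p}.
    Returns the eigenvalues of Xi (non-increasing) and the eigenvectors c_p
    split into their R per-variable blocks [c_p]^{(r)}.
    """
    S = np.hstack(scores)                         # (N, P_+) stacked score matrix
    Xi = np.cov(S, rowvar=False)                  # the P_+ x P_+ matrix Xi
    w, C = np.linalg.eigh(Xi)
    order = np.argsort(w)[::-1]                   # non-increasing eigenvalues
    w, C = w[order], C[:, order]
    sizes = [s.shape[1] for s in scores]
    blocks = np.split(C, np.cumsum(sizes)[:-1], axis=0)
    return w, blocks

# R = 2 variables with one score each; perfectly correlated scores yield a
# single joint component carrying all of the variation.
w, blocks = mfpca_from_scores([np.array([[1.0], [-1.0]]),
                               np.array([[1.0], [-1.0]])])
```

The joint eigenfunctions are then assembled as $\hat{\tilde{\phi}}_{r,p}(t) = \sum_m [\mathbf{c}_p]_m^{(r)} \hat{\phi}_{r,m}(t)$, exactly as in the displayed formula above.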
In summary, we propose two types of extensions of the FMLP in \cite{rossi2005functional, wang2019remaining}. The model in Section \ref{sec2.4.1}, equipped with sparse functional principal component analysis, is proposed to handle sparse data. The models in Section \ref{sec2.4.2} generalize the FMLP to explicitly account for the correlations among variables. The FMLPs using the conventional FPCA are more straightforward to implement, while the multivariate FMLPs are expected to be more accurate, especially when the correlations among the variables are strong.
\section{Scarce data with limited number of samples}\label{sec3}
In this section, we present a comparative study of the sequential learning models and the regular FMLP (equivalently, the proposed sparse FMLP) under scenarios where the sample size is small (see Figure \ref{e2}). We first theoretically compare the minimum sample size required by each model by counting their parameters. We also discuss their feasibility and efficiency in dealing with two different types of time series inputs. Finally, we conduct numerical experiments to demonstrate their performance on the challenging remaining useful life prediction task in the Predictive Maintenance domain, given a limited amount of training data.
\subsection{Theoretical comparison} \label{sec3.1}
\subsubsection{Comparing the number of parameters}\label{sec3.1.1}
To understand the minimum sample size required to train each candidate model, we first present formulas for the number of unknown parameters in the considered deep learning models. For the simple RNN described in Section \ref{sec2.2}, supposing that there are $L_{\text{RNN}}$ hidden states, the total number of unknown parameters $\text{Count}_{\text{RNN}}$ is given in Eq.\eqref{count1}. The first term $L_{\text{RNN}}(L_{\text{RNN}}+R)$ corresponds to the sequential processing by memory cells in Eq.\eqref{SeqDL}, while the second term is related to the output state in Eq.\eqref{SeqDL2}. The number of unknown parameters for LSTM is four times that of RNN, as there are an input gate, an output gate, and a forget gate in addition to the RNN-like memory cell. Gated recurrent units (GRUs) are similar to LSTMs but lack an output gate, and therefore carry a factor of three rather than four. For the functional MLP in Sections \ref{sec2.3} and \ref{sec2.4}, let the number of unknown parameters in the parameter function $W_{k,r}(\ve{\beta}_{k,r}, t)$ be $Q_{k,r}$ and the number of hidden functional neurons be $L_{\text{FMLP}}$. Then the number of unknown parameters in FMLP, $\text{Count}_{\text{FMLP}}$, is given by the formula in Eq.\eqref{count1}, where the first term represents the connections between layers and the second term corresponds to the biases in each layer.
\begin{equation}\label{count1}
\begin{split}
& \text{Count}_{\text{RNN}} = L_{\text{RNN}}(L_{\text{RNN}}+R) + L_{\text{RNN}} = O(L^2_{\text{RNN}})\\
& \text{Count}_{\text{LSTM}} = 4(L_{\text{LSTM}}(L_{\text{LSTM}}+R) + L_{\text{LSTM}})= O(L^2_{\text{LSTM}})\\
& \text{Count}_{\text{GRU}} = 3(L_{\text{GRU}}(L_{\text{GRU}}+R) + L_{\text{GRU}})= O(L^2_{\text{GRU}})\\
& \text{Count}_{\text{FMLP}} = (\sum_{k=1}^{L_{\text{FMLP}}}\sum_{r=1}^R Q_{k,r} + L_{\text{FMLP}}) +(L_{\text{FMLP}}+1)= O(L_{\text{FMLP}})
\end{split}
\end{equation}
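The formulas in Eq.\eqref{count1} translate directly into code; a small sketch (function names are ours) that can be used to compare model sizes for given layer widths:

```python
def count_rnn(L, R):
    # L*(L+R) hidden/input weights, plus L hidden biases
    return L * (L + R) + L

def count_lstm(L, R):
    # four gates (input, forget, output, cell), each RNN-sized
    return 4 * count_rnn(L, R)

def count_gru(L, R):
    # three gates (update, reset, candidate)
    return 3 * count_rnn(L, R)

def count_fmlp(Q):
    # Q[k][r]: number of basis coefficients in W_{k,r}; L_FMLP = len(Q)
    L = len(Q)
    return (sum(sum(row) for row in Q) + L) + (L + 1)
```

For example, with $R=21$ inputs, an LSTM with $L=64$ hidden units already has $4(64\cdot 85 + 64) = 22016$ parameters, while an FMLP with $L_{\text{FMLP}}=4$ functional neurons and $Q_{k,r}=5$ basis coefficients has only $(4\cdot 21\cdot 5 + 4) + 5 = 429$.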
Based on Eq.\eqref{count1}, the total numbers of unknown parameters of RNN, LSTM, and GRU are quadratic in the number of hidden units, while the number of parameters of FMLP is linear in $L_{\text{FMLP}}$, the number of functional neurons. This indicates that the minimal sample size required by the sequential learning models is theoretically much larger than that of FMLP when the underlying mapping $F(\cdot)$ in Eq.\eqref{setting1} and \eqref{setting2} is complex and needs a comparably large number of hidden units in both model families to be well approximated. On the other hand, when $L_{\text{RNN}}$ and $L_{\text{FMLP}}$ are significantly different (i.e., $L_{\text{RNN}} \ll L_{\text{FMLP}}$ or $L_{\text{RNN}} \gg L_{\text{FMLP}}$), the model that more efficiently captures the temporal information in the covariates requires fewer training samples. As a next step, we study the candidate models' feasibility and efficiency in capturing temporal information under different circumstances. Since LSTM is the most widely used in applications, in the rest of the paper we use LSTM as the representative sequential learning model.
\subsubsection{Comparison under different scenarios}\label{sec3.1.2}
In practice, the observed time series data often contain zero-mean noise. That is, there is an additive relationship among the actual observation $Z^{(i,r)}_{j}$, the underlying continuous process $X^{(i,r)}(t)$ that gives rise to the time-specific observation, and the random noise $\epsilon_{i,r,j}$, as indicated by Eq.\eqref{setting1.5}. Depending on its mathematical properties, the individual function $X^{(i,r)}(t)$ falls into one of two categories: smooth functions with continuous second derivatives, and non-smooth functions otherwise. Both smooth and non-smooth processes are frequently encountered in real-world applications. Examples of time series data from smooth underlying functions include child growth over time, traffic flow during the day, and the cumulative number of positive cases over time during a pandemic. Continuous processes of the non-smooth kind include vibration and acoustic data, whose rapidly changing dynamics contain the key information to distinguish a given function $X^{(i,r)}(t)$ from other random samples $\{X^{(i^{\prime},r)}(t)\}_{i^{\prime}\neq i}$.
According to Section \ref{sec2}, the sequential learning models are more appropriate for building predictive models with non-smooth time series covariates. This is because they perform computations on the individual observations at each timestamp and thus capture well the non-linear dependencies within highly dynamic time series, as well as the complex relationship between the time series and the response. In contrast, the functional predictive models attempt to model the infinite-dimensional continuous curve and therefore essentially rely on certain smoothness assumptions on each underlying process $X^{(i,r)}(t)$ to overcome the curse of dimensionality in the formulations in Eq.\eqref{flm} and \eqref{FN1} \cite{ramsay2006functional}. More specifically, the assumption that the parameter function $W_{k,r}(\ve{\beta}_{k,r},t)$ can be determined by a finite-dimensional vector $\ve{\beta}_{k,r}$, and hence that the assumed model can be trained end-to-end, holds only if the underlying process is smooth.
When it comes to smooth time series inputs, the FMLP generally has advantages over the sequential learning models, thanks to the basis expansion technique in FDA. The key idea of basis expansion is to express the weight function $W_{k,r}(\ve{\beta}_{k,r},t)$ as a linear combination of a set of fixed or data-driven basis functions $\{\phi_{k,r,p}(t)\}_{p=1}^{Q_{k,r}}$:
\begin{equation}
W_{k,r}(\ve{\beta}_{k,r},t) = \sum_{p=1}^{Q_{k,r}}\beta_{k,r,p}\phi_{k,r,p}(t). \label{basis1}
\end{equation}
Then the core temporal information processing unit $\int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r},t)X^{(i,r)}(t)dt$ in FMLP becomes
\begin{equation}
\int_{t\in \mathcal{T}}W_{k,r}(\ve{\beta}_{k,r},t)X^{(i,r)}(t)dt = \sum_{p=1}^{Q_{k,r}}\beta_{k,r,p}\int_{t\in \mathcal{T}}\phi_{k,r,p}(t)X^{(i,r)}(t)dt. \label{basis2}
\end{equation}
This means that the infinite-dimensional variation in $X^{(i,r)}(t)$ is transferred to the $Q_{k,r}$ scalar random variables $\{\int_{t\in \mathcal{T}}\phi_{k,r,p}(t)X^{(i,r)}(t)dt\}_{p=1}^{Q_{k,r}}$, the projection scores of $X^{(i,r)}(t)$ onto the pre-defined basis functions. The basis-function parameter specification in Eq.\eqref{basis1} is supported by theoretical arguments that the infinite-dimensional continuous stochastic process $X^{(i,r)}(t)$ can be consistently represented by a finite set of basis functions given a certain degree of smoothness \cite{ramsay2006functional,hall2006properties}. The benefit of the formulation in Eq.\eqref{basis2} is twofold. First, the temporal information in $X^{(i,r)}(t)$ can be succinctly captured by the integrals, which take less time to compute than an RNN forward pass, especially for long time series. Second, it allows us to embed prior domain knowledge about the characteristics of the time series features into the temporal predictive model. For instance, if we know there are sparse jumps, spikes, or peaks in the time series covariates, the wavelet basis is a good choice for extracting meaningful projection scores. The sequential learning models, on the other hand, need more complex architectures to capture such knowledge. As a result, the number of hidden units, and accordingly the minimal sample size, required by FMLP is in general much smaller than that of the sequential learning models for a target level of model performance.
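As an illustration of Eq.\eqref{basis2}, the projection scores can be approximated on a dense grid by numerical quadrature. A minimal sketch (the trapezoid rule and the sine basis are our illustrative choices):

```python
import numpy as np

def trapezoid(y, t):
    # trapezoid rule, written out to avoid numpy version-specific names
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def projection_scores(x, t, basis_fns):
    """Approximate the scores int_T phi_p(t) X(t) dt on a dense grid.
    x: (M,) curve values on the grid t; basis_fns: callables phi_p."""
    return np.array([trapezoid(f(t) * x, t) for f in basis_fns])

# Example: X(t) = sqrt(2) sin(pi t) projected onto the orthonormal basis
# phi_p(t) = sqrt(2) sin(p pi t); the curve lies on the first basis function.
t = np.linspace(0.0, 1.0, 2001)
basis = [lambda s, p=p: np.sqrt(2) * np.sin(p * np.pi * s) for p in (1, 2, 3)]
scores = projection_scores(np.sqrt(2) * np.sin(np.pi * t), t, basis)
```

Here `scores` is close to $(1, 0, 0)$, reflecting that the example curve has unit projection on the first basis function and is orthogonal to the others.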
\subsection{Comparison of performance on equipment remaining useful life prediction}\label{sec3.2}
The Remaining Useful Life (RUL) of equipment or one of its components is defined as the time left until the equipment or component reaches the end of its useful life. Accurate RUL prediction is exceptionally beneficial to Predictive Maintenance, and Prognostics and Health Management (PHM). Recently, data-driven solutions that utilize historical sensor and operational data to estimate RUL have been gaining popularity. In particular, the RUL prediction problem is usually formulated as a temporal regression problem as defined in Eq.\eqref{setting1} and \eqref{setting2}. In this section, we compare the performance of functional MLP (`FMLP'), LSTM, and the traditional multivariate regression models that treat the measurements in time series as features (`MLP' and `SVR') in solving the RUL prediction task with a limited number of training samples on a widely used benchmark data set, the NASA C-MAPSS (Commercial Modular Aero-Propulsion System Simulation) data \cite{saxena2008phm08}.
\subsubsection{Background and data pre-processing}
\textit{Background:} The C-MAPSS data set consists of 21 simulated sensor readings and 3 operating-condition variables for a group of turbofan engines as they run until critical failures occur. There are four data subsets in C-MAPSS that correspond to scenarios with different numbers of operating conditions and fault modes \cite{saxena2008phm08}. Each subset is divided into training and testing sets. The training sets contain run-to-failure data where engines are fully observed from an initial healthy state to a failure state. The testing sets consist of prior-to-failure data where engines are observed until a certain time before failure. Table~\ref{bg} summarizes each subset in C-MAPSS, including the number of operating conditions and fault modes, and the number of subjects in the training and testing phases.
\begin{table}[htbp]
\caption{Summary of the subsets in C-MAPSS data set}
\begin{center}
\begin{tabular}{c|cccc}
\hline
\hline
\textbf{}& \textbf{FD001}& \textbf{FD002}& \textbf{FD003}& \textbf{FD004}\\
\hline
$\#$ of engines in training& 100 & 260 & 100 & 249 \\
$\#$ of engines in testing& 100 & 259 & 100 & 248 \\
$\#$ of operating conditions& 1 & 6 & 1 & 6 \\
$\#$ of fault modes & 1 & 1 & 2 & 2 \\
\hline
\hline
\end{tabular}
\label{bg}
\end{center}
\end{table}
\textit{Removing the effect of operating conditions:} For the second and fourth sub-datasets, there are six operating conditions reflected by the three operating-condition variables. To remove the effects of the operating conditions, for each of the 21 sensors we train an MLP model to learn a mapping from the operating-condition variables to the sensor variable. In particular, for a given sensor in `FD002', we use data from all 260 engines in the training set to train the MLP model; the sample size is the total number of observations across the 260 engines. The inputs are the three operating-condition variables and the output is the considered sensor variable. We then normalize the sensor variable by subtracting the fitted value of the MLP model from the raw sensor readings. The raw and normalized trajectories of the second sensor for a randomly selected engine in the training set of FD002 are visualized in Figure~\ref{normalization}.
\begin{figure*}
\caption{Raw sensor data for one randomly selected engine in the training set of FD002.}
\label{raw}
\caption{Normalized sensor data after removing the effect of operating conditions.}
\label{normalized}
\caption{Removing the effect of operating conditions on sensor data.}
\label{normalization}
\end{figure*}
\textit{Window-sliding and RUL labeling:} In the C-MAPSS data set, the engines in the training and testing sets are observed for different numbers of time cycles. Moreover, the full sensor trajectories in the testing sets are truncated at various times before failure, so the true RUL labels vary widely. To handle this, we adopt the window-sliding technique used in \cite{wu2007neural, tian2012artificial}. Let the smallest number of sensor measurements among the individual engines in data subset $d$ be $\mathcal{M}_d$, for $d=1,...,4$. The values of
$\mathcal{M}_1, \mathcal{M}_2, \mathcal{M}_3, \mathcal{M}_4$ are $31, 21, 38, 19$, respectively. The functional inputs and RUL labels are generated as follows. For the $d$-th subset, the trajectories of each engine in the training and testing sets are cut into multiple data instances of length $\mathcal{M}_d$. For instance, the first engine in the training set of FD001 fails at the 144th cycle. A total of 114 training data instances are generated from this engine, with the $c$-th data instance being the sensor measurements between time cycles $c$ and $c+\mathcal{M}_d-1$. To specify the RUL labels for the 114 data instances of this engine, we adopt the widely used piece-wise labeling approach from the relevant literature \cite{babu2016deep,zheng2017long}. Based on the observation that performance degradation is negligible during the initial period and becomes approximately linear after some point, the RUL label is defined as
\begin{equation}
\text{RUL}_{c, \text{piecewise}} = \min\{T, \text{RUL}_{c,\text{linear}}\}.\label{rul_ps}
\end{equation}
Note that in our experiment, we set $T=130$, following the specifications in the prior art \cite{babu2016deep,zheng2017long}.
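The window-sliding and labeling procedure can be sketched as follows (a minimal sketch; the convention that the label is the RUL measured from the window's last cycle is our assumption, consistent with the 114 instances obtained from a 144-cycle engine with $\mathcal{M}_1=31$):

```python
def make_instances(trajectory, window, T_cap=130):
    """Cut one run-to-failure trajectory into sliding windows with
    piece-wise RUL labels, RUL = min(T_cap, cycles left after window end).
    trajectory: per-cycle measurements, first cycle through failure cycle."""
    n = len(trajectory)
    instances, labels = [], []
    for c in range(n - window + 1):       # window covers cycles c+1 .. c+window
        instances.append(trajectory[c:c + window])
        labels.append(min(T_cap, n - (c + window)))
    return instances, labels
```

For the first FD001 engine (144 cycles, window 31), this yields 114 instances with labels decreasing from 113 down to 0.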
\begin{figure}
\caption{Remaining useful life label for a given engine: the red line represents the piece-wise RUL label capped at $T=130$.}
\label{fig:my_label}
\end{figure}
\subsubsection{Implementations and results}
\textit{Implementation of FMLP:} Following the implementation of LSTM in \cite{zheng2017long}, we use Min-Max normalization to scale the individual sensor sequences to the range $[0,1]$. The specific mathematical formula can be found in \cite{zheng2017long, wang2019remaining}. An FMLP with a two-layer architecture is deployed to learn the mapping from the 21 sensors to the RUL label. There are four functional neurons (i.e., $K=4$) on the first layer and two numerical neurons on the second layer. The activation function on both layers is the standard logistic function, i.e., $U_{k}(u) = \frac{1}{1+e^{-u}}$. To better deal with the complex sensor data, we specify data-driven weight functions by calculating the eigenfunctions from the data. Let the estimated eigenfunction from the $N$ samples of the $r$-th sensor be $\hat{\phi}_{r,p}(t)$. The weight functions are then specified as
\begin{equation}
\label{exp1_d}
W_{k,r}(\ve{\beta}_{k,r},t) = \sum_{p=1}^{P_{k,r}} \beta_{k,r,p} \hat{\phi}_{r,p}(t),
\end{equation}
where $P_{k,r}$ is selected by the commonly used fraction of variance explained (FVE) criterion with an $80\%$ cutoff. In practice, there are four commonly used cutoff values: $80\%$, $90\%$, $95\%$, and $99\%$. In this experiment, we choose the smallest rule-of-thumb value, because a smaller FVE helps retain the key smooth patterns in the sensor data while removing the random noise shown in Figure \ref{normalization}.
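FVE-based truncation simply keeps the smallest number of leading eigenvalues whose cumulative share of the total variance reaches the cutoff; a sketch:

```python
def select_fve(eigenvalues, cutoff=0.80):
    """Smallest P such that the first P eigenvalues explain at least
    `cutoff` of the total variance (fraction of variance explained)."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for p, lam in enumerate(eigenvalues, start=1):
        cumulative += lam
        if cumulative / total >= cutoff:
            return p
    return len(eigenvalues)
```

For example, with eigenvalues $(0.1, 0.045, 0.01, 0.001)$ the $80\%$ cutoff keeps two components, while a $95\%$ cutoff would keep three.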
\textit{Evaluation metrics:} We evaluate the performance of functional MLP with the same evaluation strategy used in \cite{babu2016deep,zheng2017long}. Suppose that there are $N$ subjects in the testing set, and the true RUL since the last observation of engine $i$ is $\text{RUL}_{i, true}$ and the estimated RUL is $\text{RUL}_{i, est}$. The root mean squared error (RMSE) calculated from the $N$ engines is defined as
\begin{equation}
\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^N (\text{RUL}_{i, est}-\text{RUL}_{i, true})^2}.
\end{equation}
\textit{Results:}
The RMSE of FMLP, together with the results of LSTM and the multivariate regression models, including support vector regression (`SVR') and the multilayer perceptron (`MLP'), from previous literature, are summarized in Table \ref{tab1d}. For all four subsets, functional MLP significantly outperforms the baseline methods in terms of RMSE. The average improvement over LSTM \cite{zheng2017long} is $26.89\%$. For industrial equipment like turbofan engines, the sensor signals over time are often correlated with the smooth degradation process and thus can be assumed to be generated by smooth continuous functions. The experimental results in Table \ref{tab1d} numerically justify our discussion in Section \ref{sec3.1} of the advantage of FMLP over the state-of-the-art models in handling smooth time series covariates.
\begin{table}[h]
\caption{RMSE comparison on C-MAPSS data and improvement (`IMP') of functional MLP over LSTM \cite{zheng2017long}.}
\begin{center}
\begin{tabular}{c|cccc}
\hline
\hline
\textbf{Model}& \textbf{FD001}& \textbf{FD002}& \textbf{FD003}& \textbf{FD004}\\
\hline
MLP\cite{babu2016deep}& 37.56 & 80.03 & 37.39 & 77.37 \\
SVR\cite{babu2016deep} & 20.96 & 42.00 & 21.05 & 45.35 \\
LSTM\cite{zheng2017long} & 16.14 & 24.49& 16.18 & 28.17 \\
FMLP& \textbf{13.36} & \textbf{16.62} & \textbf{12.74} & \textbf{17.76} \\
\hline
\hline
IMP& $17.22\%$ & $32.14\%$ & $21.26\%$ & $36.95\%$ \\
\hline
\hline
\multicolumn{5}{l}{\footnotesize * IMP w.r.t LSTM is $(\text{RMSE}_{\text{LSTM}}-\text{RMSE}_{\text{FMLP}})/\text{RMSE}_{\text{LSTM}}$}\\
\end{tabular}
\label{tab1d}
\end{center}
\end{table}
\section{Scarce data with sparse time series features}\label{sec4}
In this section, we consider circumstances where the time series inputs are sparsely and irregularly observed over the time domain. The sparse functional MLP (SFMLP) in Section \ref{sec2.4.1} and the SFMLP equipped with multivariate FPCA in Section \ref{sec2.4.2} are temporal predictive models specially designed for this type of scenario. The sequential learning models are incapable of directly utilizing the sparsely observed temporal information for model building; they require the gaps in the raw sparse data to be filled in first using certain interpolation techniques, and their performance heavily relies on the accuracy of the interpolation. A description of this interpolation-plus-sequential-learning approach is provided in the first part of this section. Next, we conduct three numerical experiments to demonstrate the superior performance of the proposed sparse FMLPs for sparse data scenarios.
\subsection{Sequential learning models under sparse data}
As discussed in Section \ref{sec2.2}, the sequential deep learning models essentially require the time series covariates to be densely observed on an equally spaced time grid. In sparse data cases, interpolation techniques such as cubic spline interpolation are often deployed to obtain data readings at equal intervals. The interpolated data are then fed into the sequential learning models to build temporal predictive models, whose performance heavily relies on the accuracy of the interpolation.
The conventional way of performing interpolation is to separately apply techniques such as cubic B-splines and Gaussian process regression to the sparse observations from each subject. This common practice is illustrated in Figure~\ref{lstm}. However, the curves recovered using data from individual subjects alone are often not consistent estimates of the true underlying curves, due to the limited amount of information available per curve. For instance, as shown in the left panel of Figure~\ref{int}, the interpolated curves deviate significantly from the actual $\sin$-shaped random functions with two full cycles. As a consequence, sequential learning models built upon the biased input data are not reliable.
Observing the benefit in functional data analysis of jointly using the time series across samples to estimate or model the temporal variations within the data (i.e., the right panel of Figure~\ref{int}), in this paper we also consider the functional data approach PACE in Eq.\eqref{sparse2} and \eqref{sparse5} as an interpolation method for the sequential learning models. As shown in the numerical experiments in Section \ref{sec5.2}, the performance of LSTM when the recovered data are generated by PACE is significantly better than with the other interpolations for the considered problems. In general, FDA-type models are valid when the time series of different subjects can be considered as random samples from an underlying random process, which is a reasonable assumption in many real-world use cases. Therefore, we recommend FDA-type models for interpolation when dealing with sparse time series.
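The PACE scores used both as SFMLP inputs and for interpolation are conditional expectations under a Gaussian assumption, i.e., best linear predictors. A minimal sketch, assuming the mean function, eigenfunctions, eigenvalues, and noise variance have already been estimated from the pooled data (Eq.\eqref{sparse2} and \eqref{sparse5} give the formal definitions):

```python
import numpy as np

def pace_scores(z, mu, Phi, lam, sigma2):
    """Conditional scores E[eta_p | Z] for one sparsely observed curve.
    z: (m,) observations; mu: (m,) mean at the observation times;
    Phi: (m, P) eigenfunctions at the observation times;
    lam: (P,) eigenvalues; sigma2: measurement-error variance."""
    Sigma = Phi @ np.diag(lam) @ Phi.T + sigma2 * np.eye(len(z))
    return np.diag(lam) @ Phi.T @ np.linalg.solve(Sigma, z - mu)
```

The curve is then recovered at any $t$ as $\hat{\mu}(t) + \sum_p \hat{E}[\eta_p|\mathbf{Z}]\hat{\phi}_p(t)$, borrowing strength across subjects through the shared mean, eigenfunctions, and eigenvalues.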
\begin{figure}
\caption{Data pre-processing in sequential learning models.}
\label{lstm}
\end{figure}
\begin{figure}
\caption{Left: An example where interpolation based on a single subject is problematic. The brown line is the underlying true curve, the black dots are the observations to be used, and the purple line is the biased pre-smoothing result. Right: Visualization of the benefit of performing interpolation using the combined observations from different subjects.}
\label{int}
\end{figure}
\subsection{Performance comparison in numerical experiments}\label{sec5.2}
In this section, we conduct three numerical studies: classification of synthetic curves, prediction of a patient's survival beyond a given period, and prediction of an engine's remaining time to failure. We compare our sparse functional MLP (`sparse FMLP' or `SFMLP') with LSTM as the baseline method. Our proposed method outperforms LSTM in all three numerical studies, where the input variables are observed multiple times for each subject and the observation times are irregularly spaced and not shared across subjects.
\subsubsection{Curve classification on simulated data}
\label{sec4.1}
In this subsection, we conduct a numerical experiment on synthetic data, with the objective of showing the benefit of modeling repeated data over time in the functional fashion as well as the validity of the proposed sparse functional MLP. Specifically, we consider a curve classification problem. For each subject, we have a variable of interest measured at multiple random times within time range $\mathcal{T}$. Each subject has an associated group label. The problem is to build a model to predict the group label using the repeatedly observed feature within time window $\mathcal{T}$.
The synthetic curves and labels are generated as follows. There are two distinct groups, i.e., $g=1,2$. The $j$-th observation of the $i$-th subject in group $g$ is denoted as $Z^{(g,i)}_{j}$, which is generated from Eq.\eqref{setting1.5}, $Z^{(g,i)}_{j}=X^{(g,i)}(T^{(g,i)}_{j}) + \epsilon_{g,i,j}$, and the Karhunen-Lo{\`e}ve expansion $X^{(g,i)}(T^{(g,i)}_{j})=\mu_g(T^{(g,i)}_{j})+\sum_{p=1}^{\infty}\xi_{g,i,p}\phi_{p}(T^{(g,i)}_{j})$ \cite{ramsay2006functional}, with $\xi_{g,i,p}\sim N(0,\lambda_p)$, for $i=1,...,N_g$, $j=1,...,M_{g,i}$. Without loss of generality, we set the time window $\mathcal{T}=[0,1]$. The number of subjects in each group is $N_1=N_2=300$. The number of observations on each curve is 10, i.e., $M_{g,i}=M=10$. Given $M_{g,i}$, the observation times are i.i.d. samples from the uniform distribution on $[0,1]$. The $p$-th eigenfunction is $\phi_{p}(t)=\sqrt{2}\sin{(p\pi t)}$ for $p=1,...,\infty$. The first four eigenvalues are $(0.1, 0.045, 0.01, 0.001)$ and $\lambda_p=0$ for $p>4$. The mean functions are $\mu_1(t)=\sin{(4\pi t)}$ and $\mu_2(t)=-\sin{(4\pi t)}$. The standard deviation of the i.i.d. random errors is 0.3. Two randomly selected subjects from each of the two groups are visualized in Figures~\ref{exp1_1} and \ref{exp1_2}. In each plot, the true curve $X^{(g,i)}(t)$ is the brown dashed line and the observations to be modeled are the black dots. We can see that the data on each curve are sparse, and the big gaps between observations prevent the pre-smoothing step in LSTM and the dense functional MLP from successfully recovering the two-cycle sine functions with the limited amount of data available. Our proposed sparse functional MLP is specifically designed to handle this kind of scenario.
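The generating mechanism above can be reproduced directly (a sketch; variable names are ours, and we truncate the Karhunen-Lo{\`e}ve expansion at the four non-zero eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(0)
N_g, M = 300, 10
lams = np.array([0.1, 0.045, 0.01, 0.001])   # lambda_p = 0 for p > 4

def simulate_group(g):
    sign = 1.0 if g == 1 else -1.0
    T = rng.uniform(0.0, 1.0, size=(N_g, M))            # irregular observation times
    mu = sign * np.sin(4 * np.pi * T)                   # mu_1 = -mu_2 = sin(4 pi t)
    xi = rng.normal(0.0, np.sqrt(lams), size=(N_g, 4))  # K-L scores, Var(xi_p) = lambda_p
    p = np.arange(1, 5)
    phi = np.sqrt(2) * np.sin(np.pi * p[None, None, :] * T[:, :, None])
    X = mu + np.einsum('ip,ijp->ij', xi, phi)           # truncated K-L expansion
    Z = X + rng.normal(0.0, 0.3, size=(N_g, M))         # sparse noisy observations
    return T, Z

T1, Z1 = simulate_group(1)
T2, Z2 = simulate_group(2)
```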
\begin{figure}
\caption{Two randomly selected subjects in group 1.}
\label{exp1_1}
\caption{Two randomly selected subjects in group 2.}
\label{exp1_2}
\caption{Visualizations of simulated data in Section \ref{sec4.1}.}
\label{exp111}
\end{figure}
Cubic spline, Gaussian process regression, and the functional-data-analysis-based PACE interpolation are used in this experiment to obtain data readings at equal intervals. We then feed these interpolated readings into an LSTM network with two LSTM layers (32 and 64 units) and one dense layer (8 units); grid search is used to tune the hyperparameters. Figure~\ref{exp1_44} shows the achieved interpolation at 100 points ($M=100$) within the period for one randomly selected subject in group 1. It can be seen that the cubic spline and the Gaussian process regression produce biased estimates of the true curve, while the result of PACE is consistent. The same observation is reflected in the RMSE over all 300 curves in group 1 in the first table of Table~\ref{exp_tab1}. Given this simulation result, in real practice we recommend trying PACE as an alternative curve-fitting method when the measurements in each time series are sparse and irregular. The leave-one-out cross-validation results are given in the first three rows of the second table in Table~\ref{exp_tab1}.
To implement our proposed sparse FMLP, we first estimate all of its required components. The first three dimensions of the estimated eigen projection scores, i.e., $E[\eta_{g,i,1}|\mathbf{Z}_{g,i}]$, $E[\eta_{g,i,2}|\mathbf{Z}_{g,i}]$, and $E[\eta_{g,i,3}|\mathbf{Z}_{g,i}]$ for $g=1,2$; $i=1,...,N_g$, are visualized in Figure~\ref{exp1_4}, where the two types of dots correspond to the two groups. As shown by the plots, the extracted eigen projection scores clearly distinguish the two mean curves $\sin(4\pi t)$ and $-\sin(4\pi t)$. We specify the weight function of the $k$-th functional neuron as $W_{k}(\ve{\beta}_{k}, t) = \sum_{p=1}^{P} \beta_{k, p} \hat{\phi}_p(t)$, with $\hat{\phi}_p(t)$ being the $p$-th estimated eigenfunction. The sparse functional MLP has 4 functional neurons in the first layer, followed by a layer with two numerical neurons; the activation functions in both layers are the logistic function. The leave-one-out cross-validation results are given in the last two rows of Table~\ref{exp_tab1}.
Based on the two tables in Table~\ref{exp_tab1}, we make the following observations. First, the functional-data-based interpolation, which jointly considers the data from all samples, significantly outperforms the alternative techniques. Second, for this example, the performance of our sparse functional MLP with $P=2$ and $P=3$ is comparable, with the result for $P=3$ being slightly worse. This makes sense because the first two and first three components contain comparable amounts of information about the curves, according to our setting (the eigenvalues are (0.1, 0.045, 0.01, 0.001)). The reason $P=3$ is slightly worse is that the third component has a very small eigenvalue and can be considered additional noise introduced on top of the first two components. Third, the performances of LSTM with the different interpolations are similar, even though the fitting performance of PACE is significantly better than that of the other approaches. We believe this is because the interpolation within each group may be biased in a similar way for all subjects, so that the two groups of curves remain distinguishable. Last but not least, the performance of LSTM with PACE and that of the proposed sparse FMLP are comparable. We believe this is because the distinction between the two groups of data is large, so different models tend to perform similarly.
\begin{figure}
\caption{Extracted features from sparse FMLP($P=3$). Black dots represent group 1 and red dots represent group 2.}
\label{exp1_4}
\end{figure}
\begin{figure}
\caption{Comparison of interpolation results for a randomly selected sample. The blue dots are the sparse observations available in the data set. The black line is the ground truth. The red, green, and purple lines correspond to Gaussian process regression, cubic B-spline, and PACE, respectively.}
\label{exp1_44}
\end{figure}
\begin{table}[h]
\caption{The interpolation accuracy of different methods for time series in group 1 and leave-one-out cross-validation results for synthetic curve classification task in Section \ref{sec4.1}.}
\begin{center}
\begin{tabular}{cc}
\hline
\hline
\textbf{Interpolation method}& \textbf{RMSE}\\
\hline
Cubic spline& 0.562 \\
Gaussian process regression& 0.440 \\
PACE & \textbf{0.169}\\
\hline
\hline
\end{tabular}
\begin{tabular}{c|c|c|c|c}
\hline
\hline
\textbf{Model}& \textbf{Spec}& \textbf{$\#$ of samples}& \textbf{Architecture}& \textbf{Accuracy}\\
\hline
$\text{LSTM}_{\text{Spline}}$& $M=100$ & 600 & L(32,64)N(8) & $99.4\%$ \\
$\text{LSTM}_{\text{GP}}$& $M=100$ & 600 & L(32,64)N(8) & \textbf{99.5}$\%$ \\
$\text{LSTM}_{\text{PACE}}$ & $M=100$ & 600 & L(32,64)N(8) & $99.3\%$ \\
Sparse FMLP& $P=2$& 600& F(4)N(2) & \textbf{99.5}$\%$ \\
Sparse FMLP& $P=3$& 600& F(4)N(2) & $99.3\%$ \\
\hline
\hline
\end{tabular}
\label{exp_tab1}
\end{center}
\end{table}
\subsubsection{Prediction of PBC patient's long-term survival}
In this section, we consider the problem of predicting the long-term survival of patients with primary biliary cirrhosis (PBC) using their serum bilirubin measurements (in mg/dl) during the beginning period of the study. This enables doctors to receive early warnings and take corresponding actions to increase a patient's survival probability. The PBC data set we use is the result of a Mayo Clinic trial from 1974 to 1984. It is publicly available in an R package called `survival' and has been investigated by numerous researchers, including \cite{muller2005functional}.
We consider the following problem setting. We use a patient's bilirubin measurements, known to be an important indicator of the presence of chronic liver cirrhosis, within the first 910 days of the study to predict whether the patient survives beyond 10 years after entering the study. There are 260 patients included in the analysis, with 84 dying between 910 and 3650 days and 176 alive after 10 years. Following \cite{muller2005functional}, the bilirubin measurements are log-transformed. The number of bilirubin measurements per patient within the 910 days ranges from 1 to 5, with the histogram given in Figure~\ref{fig1}. The sparse observations of two randomly selected patients are plotted as the black dots in Figure~\ref{fig4}. From these two plots, it can be seen that the functional data are sparse, and the number of observations and the observation times differ across patients.
\begin{figure}
\caption{Histogram of the number of observations per patient within the first 910 days in the PBC study.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{Estimated eigenfunctions with the two largest eigenvalues from sparse FMLP.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Some results for the PBC long-term survival prediction study.}
\end{figure}
Our proposed sparse functional MLP is implemented in the same fashion as described in the previous numerical experiment. The number of projections is chosen through the leave-one-curve-out cross-validation approach described in \cite{yao2005functional}; the selected value is $\hat{P}=2$. The eigenfunctions estimated using the restricted maximum likelihood method in \cite{peng2009geometric}, denoted $\hat{\phi}_{p}(t)$ for $p=1,2$, are given in Figure~\ref{fig3}. The best predicted curves using $\tilde{X}_i(t)=\sum_{p=1}^{\hat{P}} \hat{E}[\eta_{i,p}|\mathbf{Z}_{i}]\hat{\phi}_{p}(t)$ for two randomly selected patients are given in Figure~\ref{fig4}. In each of the two plots in Figure~\ref{fig4}, the observed dots are closely distributed around the predicted log(bilirubin) curve $\tilde{X}_{i}(t)$ with some random errors, which visually justifies the validity of our curve approximator in Eqs.~\eqref{sparse55} and \eqref{sparse6}. The architecture of the sparse functional MLP is four functional neurons in the first layer, i.e., $K=4$ in Eqs.~\eqref{sparse55} and \eqref{sparse6}, followed by a second layer of two numerical neurons. The activation functions in both layers are logistic. The weight function $W_{k}(\ve{\beta}_{k}, t)$ for the $k$-th functional neuron in the first layer is
\begin{equation}
\label{exp1}
W_{k}(\ve{\beta}_{k}, t) = \sum_{p=1}^{\hat{P}} \beta_{k, p} \hat{\phi}_p(t),
\end{equation}
with $\hat{\phi}_p(t)$ being the $p$-th estimated eigenfunction. The leave-one-out cross-validation results using the specifications above are given in Table~\ref{tab1}. The overall accuracy is 73.08$\%$. We also implemented the cubic B-spline interpolation plus LSTM strategy described in the previous subsection. The accuracy of LSTM with $M=910$ is 67$\%$, which is lower than that of our proposed method.
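To make the construction concrete, a functional neuron with this eigenfunction-based weight parameterization can be sketched as follows. The time grid, basis functions, coefficients, and scores below are illustrative placeholders, not the REML estimates from the study:

```python
import numpy as np

# Placeholder time grid and basis: stand-ins for the estimated
# eigenfunctions phi_hat_p(t), NOT the REML estimates from the paper.
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
phi_hat = np.stack([np.ones_like(t),                    # placeholder phi_1(t)
                    np.sqrt(2.0) * np.cos(np.pi * t)])  # placeholder phi_2(t)

def weight_function(beta, grid_phi):
    """W_k(beta_k, t) = sum_p beta_{k,p} * phi_hat_p(t), as in Eq. (exp1)."""
    return beta @ grid_phi

def functional_neuron(beta, b, x_curve, grid_phi):
    """Logistic functional neuron: sigma( integral of W_k(t) X(t) dt + b )."""
    integrand = weight_function(beta, grid_phi) * x_curve
    z = np.sum(integrand) * dt        # simple Riemann approximation of the integral
    return 1.0 / (1.0 + np.exp(-z))   # logistic activation

# A recovered curve X_tilde(t) built from the same basis, using
# illustrative conditional-expectation scores E[eta_p | Z].
scores = np.array([0.8, -0.3])
x_tilde = scores @ phi_hat
out = functional_neuron(np.array([0.5, 1.2]), 0.1, x_tilde, phi_hat)
```

Stacking $K$ such neurons and feeding their outputs to a layer of numerical neurons gives the two-layer architecture described above.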
\begin{figure}
\caption{The raw data and the recovered log(bilirubin) curves using sparse FMLP for two randomly selected patients.}
\label{fig4}
\end{figure}
\begin{table}[h]
\caption{Leave-one-out cross-validation results for PBC long-term survival prediction using sparse FMLP}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{}&\multicolumn{2}{|c|}{\textbf{True}} \\
\cline{2-3}
\textbf{Classified} & \textbf{\textit{Survived}}& \textbf{\textit{Died}}\\
\hline
\textbf{\textit{Survived}}& 151& 25 \\
\hline
\textbf{\textit{Died}}& 45& 39\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsubsection{Predicting remaining useful life for aircraft engines}
In this subsection, we reconsider the RUL prediction problem in Section~\ref{sec3}. The C-MAPSS data set is a simulated data set without any irregularity or missing values. To mimic real scenarios, where there is usually a certain level of irregularity in time series trajectories, we sparsify the data set by randomly keeping a certain percentage of the raw data ($30\%$, $50\%$, or $100\%$) for each of the engines in the training data. Note that the sampled timestamps differ across variables and subjects. The last observation of each engine is always kept to indicate the failure time.
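The sparsification step can be sketched as follows; the array shapes and the helper name are illustrative stand-ins, not the actual C-MAPSS preprocessing code:

```python
import numpy as np

def sparsify(times, values, keep_frac, rng):
    """Randomly keep a fraction of (time, value) pairs for one engine/sensor,
    always retaining the final observation so the failure time is preserved."""
    keep = rng.random(len(times)) < keep_frac
    keep[-1] = True                # the last observation marks the failure time
    return times[keep], values[keep]

# Toy stand-in for one engine's sensor trajectory (100 cycles).
rng = np.random.default_rng(0)
times = np.arange(100)
values = rng.normal(size=100)
# Independent draws per variable/subject yield different sampled timestamps.
t30, v30 = sparsify(times, values, 0.30, rng)
```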
For each of the 21 sensor variables, the correlations among the projection scores from separately conducting FPCA on each sensor are visualized in Figure~\ref{corr_socre}. It can be seen that there are strong correlations among some of the sensors, so there may be some benefit in considering the multivariate FPCA of Section~\ref{sec2.4.1}. The architecture of the sparse functional MLP (`Sparse FMLP') and the multivariate sparse functional MLP (`Sparse MFMLP') is four functional neurons in the first layer, i.e., $K=4$ in \eqref{sparse55} and \eqref{sparse6}, followed by a second layer of two numerical neurons. The activation functions in both layers are logistic, and the weight function is the same as in Eq.~\eqref{exp1}. The basis functions in Eq.~\eqref{exp1} are the eigenfunctions from separate FPCA for `Sparse FMLP' and the eigenfunctions from multivariate FPCA for `Sparse MFMLP'. The baselines respectively utilize cubic splines, Gaussian process regression, PACE, and multivariate PACE to prepare data for LSTM (`$\text{LSTM}_\text{Spline}$', `$\text{LSTM}_\text{GP}$', `$\text{LSTM}_\text{PACE}$', `$\text{LSTM}_\text{MPACE}$'). The RMSEs are summarized in Table~\ref{exp3_tab1}. The RMSE over different levels of data sparsity for the considered approaches is plotted in Figure~\ref{Trend_rmse}; the plot is produced with the results for data set `FD001'.
Here is a summary of the major observations. First, the performance of LSTM varies across interpolation techniques; LSTM with FDA-type interpolation significantly outperforms the conventional interpolation techniques, because the fitting error of the FDA-type interpolations is smaller. Second, the proposed models outperform all the LSTM-based approaches; specifically, they are more accurate than `$\text{LSTM}_\text{PACE}$' and `$\text{LSTM}_\text{MPACE}$'. This indicates that the functional way of modeling is better suited to this problem than the sequential computation in LSTM. Third, the multivariate FMLP performs slightly better than the FMLP based on separate PACE. This is consistent with our intuition, as there are correlations among the projection scores and `Sparse MFMLP' is designed to handle them. The magnitude of the improvement is not large, which is reasonable, as the correlation exists only among small clusters of projections. Fourth, under the dense data scenario, i.e., $100\%$ for both training and testing, our proposed sparse FMLP and the dense FMLP \cite{rossi2002functional} have comparable performance, which experimentally supports their equivalence under dense data, as discussed in Section~\ref{sec2.4.1}. Last, as shown in Figure~\ref{Trend_rmse}, the performance of the models that utilize functional modeling is less affected as the number of data points per curve decreases, compared to models such as `$\text{LSTM}_\text{Spline}$' and `$\text{LSTM}_\text{GP}$'.
\begin{figure*}
\caption{Correlation matrix for the projection scores of sensors calculated from the univariate PACE. }
\label{corr_socre}
\end{figure*}
\begin{figure*}
\caption{RMSE under different percentages of data per subject.}
\label{Trend_rmse}
\end{figure*}
\begin{table}[htbp]
\caption{RMSE comparison on C-MAPSS data.}
\begin{center}
\begin{tabular}{c|c|c|c c c c}
\hline
\hline
\textbf{Train} & \textbf{Test} & \textbf{Model}& \textbf{FD001}& \textbf{FD002}& \textbf{FD003}& \textbf{FD004}\\
\hline
$10\%$& $100\%$& $\text{LSTM}_\text{Spline}$& 40.54 & 43.76 & 39.81 & 49.02 \\
& & $\text{LSTM}_\text{GP}$ & 34.22 & 36.78 & 35.89 & 34.33 \\
& & $\text{LSTM}_\text{PACE}$ & 22.03 & 23.69 & 21.89 & 23.75 \\
& & $\text{LSTM}_\text{MPACE}$& 21.85 & 23.10 & 21.10 & 23.54 \\
& & Sparse FMLP& 20.81 & 19.53 & 19.83 & 19.39 \\
& & Sparse MFMLP& \textbf{20.27} & \textbf{18.67} & \textbf{19.63} & \textbf{19.03} \\
\hline
$30\%$& $100\%$& $\text{LSTM}_\text{Spline}$& 40.54 & 43.76 & 39.81 & 49.02 \\
& & $\text{LSTM}_\text{GP}$& 29.98 & 30.28 & 30.42 & 28.91 \\
& & $\text{LSTM}_\text{PACE}$& 17.79 & 18.94 & 20.26 & 21.64 \\
& & $\text{LSTM}_\text{MPACE}$& 17.48 & 18.36 & 20.09 & 21.56 \\
& & Sparse FMLP& 17.12 & 17.45 & 16.41 & 18.34 \\
& & Sparse MFMLP& \textbf{16.85} & \textbf{17.08} & \textbf{16.06} & \textbf{18.18} \\
\hline
$50\%$ & $100\%$& $\text{LSTM}_\text{Spline}$& 40.57 & 49.56 & 39.82 & 44.35 \\
& & $\text{LSTM}_\text{GP}$& 26.45 & 24.76 & 26.43 & 25.26 \\
& & $\text{LSTM}_\text{PACE}$& 18.85 & 18.84 & 20.44 & 20.77 \\
& & $\text{LSTM}_\text{MPACE}$& 17.13 & 18.52 & 19.49 & 20.14 \\
& &Sparse FMLP& 15.47 & 17.24 & 14.63 & 17.44 \\
& & Sparse MFMLP& \textbf{15.02} & \textbf{16.98} & \textbf{14.41} & \textbf{17.19} \\
\hline
\hline
\end{tabular}
\begin{tabular}{c|c|c|c c c c}
\hline
\hline
\textbf{Train} & \textbf{Test} & \textbf{Model}& \textbf{FD001}& \textbf{FD002}& \textbf{FD003}& \textbf{FD004}\\
\hline
$100\%$ & $100\%$ &LSTM \cite{zheng2017long} & 16.14 & 24.49& 16.18 & 28.17 \\
& &FMLP& 13.36 & 16.62 & 12.74 & 17.76 \\
& &Sparse FMLP& 13.73 & 17.04 & 12.75 & 16.92 \\
& &Sparse MFMLP& \textbf{13.11} & \textbf{16.03} & \textbf{11.97} & \textbf{16.33} \\
\hline
\hline
\end{tabular}
\label{exp3_tab1}
\end{center}
\end{table}
\section{Conclusion and Discussion}
\label{sec5}
In this paper, we focused on the temporal classification/regression problem, the purpose of which is to learn a mathematical mapping from time series inputs to a scalar response, leveraging temporal dependencies and patterns. In real-world applications, two types of data scarcity are frequently encountered: scarcity in terms of small sample sizes and scarcity introduced by sparsely and irregularly observed time series covariates. Noticing the lack of feasible temporal predictive models for sparse time series data in the literature, we proposed two sparse functional MLPs (`Sparse FMLP' and `Sparse MFMLP') to specifically handle this problem. The proposed SFMLP is an extension of the conventional FMLP for densely observed time series data, employing univariate and multivariate sparse functional principal component analysis. We used mathematical arguments and numerical experiments to evaluate the performance of each candidate model under different types of data scarcity and reached the following conclusions:
\begin{itemize}[leftmargin=*]
\item When the sample size is large and the underlying function that gives rise to the time series observations is non-smooth over time, sequential learning models are more appropriate for the supervised learning task.
\item When the sample size is small and the underlying function is smooth, we expect FMLP, or equivalently the proposed SFMLP, to outperform sequential learning in general, since
\begin{enumerate}
\item FMLP requires fewer training samples due to its feed-forward network structure, succinct temporal pattern capturing technique, and its capability of encoding domain expert knowledge.
\item FMLP is easier to train as the time-varying impact of the covariates on the response is aggregated through integrals of the continuous processes, rather than recursive computations on the individual observations.
\item FMLP is less restrictive on the data format. In particular, the time series covariates can be regular or irregular. Also, the number of observations as well as the measuring timestamps can be different across features and subjects.
\end{enumerate}
\item The proposed sparse FMLPs are feasible solutions when the individual time series features are sparsely and irregularly observed.
\end{itemize}
\end{document}
\begin{document}
\twocolumn[
\icmltitle{Online hyperparameter optimization by real-time recurrent learning}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Daniel Jiwoong Im}{nyu}
\icmlauthor{Cristina Savin}{nyu}
\icmlauthor{Kyunghyun Cho}{nyu}
\end{icmlauthorlist}
\icmlaffiliation{nyu}{New York University}
\icmlcorrespondingauthor{Daniel Jiwoong Im}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{\icmlEqualContribution}
\begin{abstract}
Conventional hyperparameter optimization methods
are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as lifelong learning.
Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework
takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wall-clock time.
\end{abstract}
\section{Introduction}
The success of training complex machine learning models critically depends on good choices for hyperparameters that control the learning process. These hyperparameters can specify the speed of learning as well as model complexity, for instance, by setting learning rates, momentum, and weight decay coefficients. Oftentimes, finding the best hyperparameters requires not only extensive resources, but also human supervision.
Well-known procedures, such as random search and Bayesian hyperparameter optimization, require fully training many models to identify the setting that leads to the best validation loss~\citep{Bergstra2012, Snoek2012}. While popular when the number of hyperparameters is small, these methods easily break down as the number of hyperparameters increases. Alternative gradient-based hyperparameter optimization methods take a two-level approach (often referred to as the `inner' and the `outer' loop). The outer loop finds potential candidates for the hyperparameters, while the inner loop is used for model training. These methods also require unrolling the full learning dynamics to compute the long-term consequences that perturbations of the hyperparameters have on the parameters~\citep{luketina2016scalable, Lorraine2018, Metz2019}. This inner loop of fully training the model's parameters for each outer-loop hyperparameter update makes existing methods computationally expensive and inherently offline.
It is unclear how to adapt these ideas to nonstationary environments, as required by advanced ML applications, e.g.\ lifelong learning~\cite{German2019,Kurle2020}.
Moreover, while these types of approaches can be scaled to high-dimensional hyperparameter spaces~\citep{Lorraine2018}, they can also suffer from stability issues~\citep{Metz2019}, making them nontrivial to apply in practice.
In short, we are still missing robust and scalable procedures for online hyperparameter optimization.
We propose an alternative class of online hyperparameter optimization (OHO) algorithms that update hyperparameters in parallel with training the parameters of the model. At the core of our framework is the observation that hyperparameter optimization entails a form of \emph{temporal credit assignment}, akin to the computational challenge of training recurrent neural networks (RNNs). Drawing on this similarity allows us to adapt real-time recurrent learning (RTRL)~\citep{Williams1989}, an online alternative to backpropagation through time (BPTT), for the purpose of online hyperparameter optimization.
We empirically show that our joint optimization of parameters and hyperparameters yields systematically better generalization performance compared to standard methods. These improvements are sometimes accompanied by a substantial reduction in overall computational cost. We characterize the behavior of our algorithm in response to various meta-parameter choices and to dynamic changes in hyperparameters; OHO can rapidly recover from adversarial perturbations to the hyperparameters. We also find that OHO with layerwise hyperparameters provides a natural trade-off between a flexible hyperparameter configuration and computational cost. In general, our framework opens the door to a new line of research: real-time hyperparameter learning.
Since posting the initial version of this preprint, it has been brought to our attention that early work by \citet{Franceschi2017} presented a closely related method from a different perspective. \citet{Franceschi2017} studied both forward and reverse gradient-based hyperparameter optimization; they derived RTRL \citep{Williams1989} by computing the gradient of a response function from a Lagrangian perspective. In contrast, here we directly map an optimizer to a recurrent network, taking the process of fully training a model as propagating a sequence of minibatches through an RNN, which allows us to use RTRL for online hyperparameter optimization. Moreover, while \citet{Franceschi2017} included a single small experiment comparing their approach to random search on phonetic recognition, here we empirically evaluate our algorithm's strengths and properties extensively and compare them to several state-of-the-art methods.
In doing so, we demonstrate OHO's i) effectiveness: performance with respect to wall-clock time improves on existing approaches; ii) robustness: low variance of generalization performance across parameter and hyperparameter initializations; iii) scalability: a sensible trade-off between performance and computational cost with respect to the number of hyperparameters; and iv) stability of hyperparameter dynamics during training for different choices of meta-hyperparameters.
\section{Background}
\subsection{Iterative optimization}
A standard technique for training large-scale machine learning models, such as deep neural networks (DNNs), is stochastic optimization. In general, this takes the form of parameters changing as a function of the data and the previous parameters, iteratively updating until convergence:
\begin{align}
\boldsymbol{\theta}_{\tau+1} = g(\boldsymbol{\theta}_{\tau}, \mathbf{B}_{\tau}; \boldsymbol{\varphi}),
\label{eq:update_rule}
\end{align}
where $\boldsymbol{\theta}$ denotes the parameters of the model, $g$ is the update rule reflecting the learning objectives, ${\boldsymbol \varphi}$ denotes the hyperparameters, and $\mathbf{B}_{\tau}$ is the random subset of training data (the minibatch) at iteration $\tau$.
As a representative example, when training a DNN $f$ using stochastic gradient descent (SGD), learning follows the loss gradient approximated using a random subset of the training data at each time step
\begin{align}
\label{eq:sgd}
\boldsymbol{\theta}_{\tau+1} \leftarrow \boldsymbol{\theta}_{\tau} - \frac{\alpha}{|\mathbf{B}_\tau|} \sum_{(x,y) \in \mathbf{B}_\tau} \nabla_{\theta} l(y, f(x; \boldsymbol{\theta}_{\tau})),
\end{align}
where $l(\cdot)$ is the per-example loss function and $\alpha$ is the learning rate.
In this case, $g(\cdot)$ is a simple linear update based on the last parameter and the scaled gradient. Other well-known optimizers, such as RMSprop~\citep{Tieleman2012} or Adam~\citep{Kingma2015}, can also be expressed in this general form.
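As a toy illustration of this abstraction (not code from the paper), the following sketch instantiates the generic update rule $g(\boldsymbol{\theta}, \mathbf{B}; \boldsymbol{\varphi})$ with the SGD step above on a small least-squares problem, with $\boldsymbol{\varphi}$ reduced to a single learning rate:

```python
import numpy as np

def grad_example_loss(theta, x, y):
    """Per-example gradient of l(y, f(x; theta)) for the linear model
    f(x; theta) = theta . x with squared loss l(y, yhat) = (yhat - y)^2."""
    return 2.0 * (theta @ x - y) * x

def sgd_step(theta, batch, alpha):
    """One application of g(theta, B; phi), here with phi = {alpha}."""
    g = np.mean([grad_example_loss(theta, x, y) for x, y in batch], axis=0)
    return theta - alpha * g

# Iterate the update rule to convergence on exactly realizable data.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
theta = np.zeros(2)
for _ in range(500):
    xs = rng.normal(size=(8, 2))                 # minibatch B_tau of size 8
    batch = [(x, x @ theta_true) for x in xs]
    theta = sgd_step(theta, batch, alpha=0.05)
```

Swapping `sgd_step` for an RMSprop- or Adam-style update changes only $g$, leaving the recurrent form of Eq.~\ref{eq:update_rule} intact.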
\subsection{Recurrent neural networks}
\label{sec:rtrl}
An RNN is a time series model. At each time step $t \in \{ 1, \dots, T\}$, it updates the memory state $\mathbf{h}_t$ and generates the output $\mathbf{o}_{t+1}$ given the input $\mathbf{x}_t$:
\begin{align}
\label{eq:rnn}
\left( \mathbf{h}_{t+1}, \mathbf{o}_{t+1} \right) = r(\mathbf{h}_{t}, \mathbf{x}_t; \boldsymbol{\phi}),
\end{align}
where the recurrent function $r$ is parametrized by $\boldsymbol{\phi}$.
Learning this model involves optimizing the total loss over $T$ steps with respect to $\boldsymbol{\phi}$ by gradient descent. BPTT is typically used to calculate the gradient by unrolling the network dynamics and backward differentiating through the chain rule:
\begin{align*}
\nabla_\phi \mathcal{L}
&= \sum^T_{t=1} \Big(\sum^T_{s\geq t+1}\frac{\partial \mathcal{L}_{s}}{\partial \mathbf{h}_{t+1}} \frac{\partial \mathbf{h}_{t+1}}{\partial \mathbf{h}_{t}} + \frac{\partial \mathcal{L}_t}{\partial \mathbf{h}_{t}} \Big)
\frac{\partial \mathbf{h}_t}{\partial \boldsymbol{\phi}},
\end{align*}
where $\mathcal{L}_t$ is the instantaneous loss at time $t$, and the double summation indexes all losses and all the applications of the parameters.
Because the time complexity of BPTT grows with $T$, computing the gradient can become challenging when the temporal horizon is long. In practice, this is mitigated by truncating the computational graph (`truncated BPTT').
\subsection{Real-time recurrent learning}
RTRL \citep{Williams1989} is an online alternative to BPTT, which --instead of rolling out the network dynamics-- stores a set of summary statistics of the network dynamics that are themselves updated online as new inputs come in. To keep updates causal, it uses a forward view of the derivatives, instead of backward differentiation, with the gradient computed as
\begin{align}
\nabla_{\boldsymbol{\phi}} \mathcal{L} &
= \sum^T_{t=1} \frac{\partial \mathcal{L}_{t+1}}{\partial \mathbf{h}_{t+1}} \Big(\frac{\partial \mathbf{h}_{t+1}}{\partial \mathbf{h}_{t}}\frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\phi}} +\frac{\partial \mathbf{h}_{t+1}}{\partial \boldsymbol{\phi}_t} \Big).
\label{eq:rtrl}
\end{align}
The Jacobian $\Gamma_{t}=\frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\phi}}$ (also referred to as the {\em influence matrix}) is updated dynamically as
\begin{align*}
\Gamma_{t} = D_t \Gamma_{t-1} + G_{t},
\end{align*}
where $D_t = \frac{\partial \mathbf{h}_{t}}{\partial \mathbf{h}_{t-1}}$ and $G_t = \frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\phi}_{t-1}}$.
In this way, the complexity of learning no longer depends on the temporal horizon. However, this comes at the cost of memory requirements $\mathcal{O}(|\boldsymbol{\phi}||\mathbf{h}_t|)$, which makes RTRL rarely used in ML practice.
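For intuition, here is a minimal scalar sketch of the recursion (a one-unit RNN with a single recurrent weight, not an example from the paper), accumulating the influence term $\Gamma_t = \partial h_t / \partial w$ forward in time:

```python
import numpy as np

def rtrl_run(w, xs, h0=0.0):
    """Run h_{t+1} = tanh(w * h_t + x_t), carrying the influence term
    gamma = dh/dw forward via gamma <- D * gamma + G (RTRL)."""
    h, gamma = h0, 0.0
    for x in xs:
        z = w * h + x
        sech2 = 1.0 - np.tanh(z) ** 2
        d = sech2 * w            # D_t = dh_{t+1} / dh_t
        g = sech2 * h            # G_t = dh_{t+1} / dw  (direct effect)
        gamma = d * gamma + g    # forward-mode accumulation, no unrolling
        h = np.tanh(z)
    return h, gamma

xs = [0.5, -0.2, 0.8, 0.1]
h_final, dh_dw = rtrl_run(0.7, xs)
```

The derivative matches backward differentiation through the unrolled sequence, but is available online after every step.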
\section{Learning as a recurrent neural network}
Both RNN parameter learning and hyperparameter optimization need to take into account long-term consequences of (hyper-)parameter changes on a loss. This justifies considering an analogy between the recurrent form of parameter learning (Eq.~\ref{eq:update_rule}) and RNN dynamics (Eq.~\ref{eq:rnn}).
\paragraph{Mapping SGD to a recurrent network.}
To establish the analogy, consider the following mapping:
\begin{align*}
\text{parameters } \boldsymbol{\theta}_\tau &\rightarrow \text{state } \mathbf{h}_t \\
\text{batch } \mathbf{B}_\tau &\rightarrow \text{input } \mathbf{x}_t\\
\text{update rule } g(\cdot) &\rightarrow \text{recurrent function } r(\cdot)\\
\text{hyper-parameters } \boldsymbol{\varphi} &\rightarrow \text{parameters } \boldsymbol{\phi}.
\end{align*}
Parameters ${\boldsymbol \theta}_\tau$ and recurrent network state $\mathbf{h}_t$ both evolve via recurrent dynamics, $g(\cdot)$ and $r(\cdot)$, respectively.
These dynamics are each parametrized by hyper-parameters ${\boldsymbol \varphi}$ and parameters ${\boldsymbol \phi}$.
The inputs $\mathbf{x}_t$ in the RNN correspond to the minibatches of data $\mathbf{B}_\tau$ in the optimizer, and the outputs $\mathbf{o}_t$ to
$\left\{ f(x; {\boldsymbol \theta}_\tau) \,\Big|\, x \in \mathbf{B}_\tau \right\}$, respectively.
In short, we interpret a full model training process as a single forward propagation of a sequence of minibatches in an RNN.
\paragraph{Mapping the validation loss to the RNN training loss.}
The same analogy can be made for the training objectives. We associate the per-step loss of the recurrent network to the hyperparameter optimization objective:
\begin{align*}
\mathcal{L}(\boldsymbol{\theta}_\tau, \boldsymbol{\varphi}_\tau; \mathcal{D}_\text{val}) = \frac{1}{|\mathcal{D}_{\text{val}}|} \sum_{(\mathbf{x},\mathbf{y}) \in \mathcal{D}_{\text{val}}} l\big(\mathbf{y}, f(\mathbf{x}; \boldsymbol{\theta}_\tau(\boldsymbol{\varphi}_\tau))\big),
\end{align*}
where $l$ is the per-example loss function, with $\boldsymbol{\varphi}_\tau = \lbrace \alpha, \lambda \rbrace$ the set of tunable hyperparameters, which in our concrete example correspond to the learning rate $\alpha$ and the regularization coefficient $\lambda$.
The hyperparameter optimization objective is calculated over the validation dataset $\mathcal{D}_{\text{val}}$.
The total loss over a full training run is $L_{\text{Total}} = \sum_{\tau=1}^{\infty} \mathcal{L}(\boldsymbol{\theta}_\tau, \boldsymbol{\varphi}_\tau; \mathcal{D}_{\text{val}}),$
where there are multiple parameter updates within a run. We use $\infty$ to emphasize that the number of epochs is often not determined {\it a priori}.
Both the hyperparameter optimization objective and the update rule, Eq.~\ref{eq:sgd}, contain the per-example loss $l$.
The former updates the parameters of the RNN using $\mathcal{D}_{\text{val}}$, while the latter updates the state of the RNN using the training batch $\mathbf{B}_{\tau}$.
This meta-optimization loss function is commonly used for hyperparameter optimization in practice~\citep{Bengio2000, Pedregosa2016}. Nevertheless, the mapping from the optimizer to an RNN and from the validation loss to the RNN training objective is novel, and enables us to treat the whole hyperparameter optimization process as an instance of RNN training.
\section{Online hyperparameter optimization}
\label{sec:oho}
With our analogies in place, we are ready to construct our online hyperparameter optimization (OHO) algorithm.
We adapt the hyperparameters at each time step, such that both the parameters and hyperparameters are jointly optimized in a single training process (Fig.~\ref{fig:my_label}).
To achieve this, we update the parameters using training data and update the hyperparameters using validation data.
Then, the update rules for $\boldsymbol{\theta}$ and $\boldsymbol{\varphi}$ are:
\begin{align}
\boldsymbol{\theta}_{\tau+1} &= \boldsymbol{\theta}_{\tau} - \alpha_{\tau} \Delta_{\mathcal{D}_{\text{tr}}} (\boldsymbol{\theta}_{\tau}) + \alpha_{\tau} w(\boldsymbol{\theta}_{\tau},\lambda_{\tau})
\label{eq:param_updates}
\\
\boldsymbol{\varphi}_{\tau+1} &= \boldsymbol{\varphi}_{\tau} - \eta \Delta_{ \mathcal{D}_{\text{val}}} (\boldsymbol{\varphi}_{\tau}),
\label{eq:meta_update}
\end{align}
where $\Delta_{ \mathcal{D}_{\text{tr/val}}}$ is a descent step with respect to the training or validation data, respectively, and $w(\boldsymbol{\theta}_{\tau},\lambda_\tau)$ is a regularization function.
We can use any differentiable stochastic optimizer, such as RMSprop or Adam, to compute the descent direction $\Delta$.
Without loss of generality, for the rest of the paper we use SGD to compute $\Delta$ and a weight decay penalty as the regularizer. Expanding $\Delta_{\mathcal{D}_{\text{val}}}(\boldsymbol{\varphi}_{\tau})$ in Eq.~\ref{eq:meta_update}, we have
\begin{align}
&\Delta_{\mathcal{D}_{\text{val}}} (\boldsymbol{\varphi}_{\tau}) = \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}}(\boldsymbol{\theta}_{\tau+1}, \boldsymbol{\varphi}_{\tau}) \nonumber \\
&= \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} \big(\boldsymbol{\theta}_{\tau} -\alpha_{\tau} \big( \Delta_{\mathcal{D}_{\text{tr}}}(\boldsymbol{\theta}_{\tau})- w(\boldsymbol{\theta}_{\tau},\lambda_{\tau})\big), \boldsymbol{\varphi}_{\tau}\big) \label{eq:delta_metaopt}.
\end{align}
At iteration $\tau$, the gradient of the hyperparameters depends on $\boldsymbol{\varphi}_0, \dots, \boldsymbol{\varphi}_\tau$.
We apply RTRL to compute this gradient in an online fashion.
Let us re-write the gradient expression of RTRL in Eq.~\ref{eq:rtrl} taking into account our mappings,
\begin{align}
\nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} &= \sum^T_{\tau=1} \frac{\partial \mathcal{L}_{\mathcal{D}_{\text{val}}}}{\partial \boldsymbol{\theta}_{\tau+1}} \Bigg( \underbrace{ \frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}}\frac{\partial \boldsymbol{\theta}_{\tau}}{\partial \boldsymbol{\varphi}} +\frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\varphi}_{\tau}} }_{\Gamma_{\tau+1}} \Bigg).
\label{eq:oho_gradient}
\end{align}
Generally, meta-optimization involves computing the gradient through a gradient, as shown in Eq.~\ref{eq:delta_metaopt}.
This causes the temporal derivative to contain the Hessian matrix,
\begin{align*}
\frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}} = I - \alpha_\tau H_\tau - 2 \alpha_\tau\lambda_\tau I,
\end{align*}
where $H_\tau=\mathbb{E}_{\mathcal{B}}[\nabla^2_{\boldsymbol{\theta}} \mathcal{L}]$ is the Hessian of the minibatch loss with respect to the parameters.
Then, we plug $\frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}}$ into the influence matrix's recursive formula,
\begin{align}
\Gamma_{\tau+1} = \big(I-\alpha_\tau H_{\tau} -2\alpha_\tau\lambda_\tau I \big) \Gamma_\tau + G_{\tau}.
\label{eq:influence_formula}
\end{align}
By approximating the Hessian-vector product
using the finite difference method,\footnote{
See \url{https://justindomke.wordpress.com/2009/01/17/hessian-vector-products/} for details.
}
we compute the gradient $\frac{d l(\boldsymbol{\theta}_{\tau+1})}{d\alpha_\tau}$ in linear time.
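Assembled into code, one OHO step for a single hyperparameter (the learning rate $\alpha$, with the weight decay coefficient held fixed) might look like the following sketch on a toy quadratic training loss. The finite-difference Hessian-vector product stands in for $H_\tau \Gamma_\tau$; all names are illustrative, not from the paper's implementation:

```python
import numpy as np

def train_grad(theta, A, b, lam):
    """Gradient of 0.5*||A theta - b||^2 + lam*||theta||^2 (toy training loss;
    weight decay is included, so its Hessian already contains the 2*lam term)."""
    return A.T @ (A @ theta - b) + 2.0 * lam * theta

def hvp(theta, v, A, b, lam, eps=1e-5):
    """Central finite-difference Hessian-vector product of the training loss."""
    return (train_grad(theta + eps * v, A, b, lam)
            - train_grad(theta - eps * v, A, b, lam)) / (2.0 * eps)

def oho_step(theta, gamma, alpha, lam, A, b, Aval, bval, eta):
    g = train_grad(theta, A, b, lam)
    # Influence update: Gamma <- (I - alpha*H) Gamma + G, where H includes the
    # decay term and G = d theta_{tau+1} / d alpha = -g for the SGD rule.
    gamma = gamma - alpha * hvp(theta, gamma, A, b, lam) - g
    theta = theta - alpha * g                   # parameter update
    val_grad = Aval.T @ (Aval @ theta - bval)   # dL_val / d theta_{tau+1}
    alpha = alpha - eta * (val_grad @ gamma)    # hypergradient step
    return theta, gamma, alpha
```

Jointly iterating `oho_step` trains $\boldsymbol{\theta}$ on the training data while $\alpha$ follows the validation hypergradient, all in a single pass.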
The formulation shows that the influence matrix is composed of all the relevant parts of the learning history and is updated online as new data come in.
In other words, it accumulates the long-term influence of all the previous applications of the hyperparameters $\boldsymbol{\varphi}$ on the parameters $\boldsymbol{\theta}$.
Overall, RTRL is more efficient than truncated BPTT for hyperparameter optimization, in both computation and memory complexity.
First, RTRL greatly reduces the computational cost since the influence matrix formulation circumvents the explicit unrolling of temporal dependencies. Our approach is intrinsically online, while BPTT is strictly an offline algorithm (although it can sometimes be loosely approximated to mimic online methods, see~\citep{Lorraine2018}). Moreover, the standard memory challenge of training RNNs with RTRL does not apply in the case of hyperparameter optimization, as the number of hyperparameters is in general much smaller than the number of parameters. For example, one popular choice of $\varphi$ is to just include a global learning rate and some regularization coefficients.
\subsection{Gradient computation}
\label{sec:gradient_computation}
Since we frame hyperparameter optimization as RNN training, one might reasonably wonder about the fundamental issues of vanishing and exploding gradients~\citep{hochreiter2001gradient} in this setup. Gradients do not vanish, because the SGD update in Eq.~\ref{eq:sgd} is additive. This resembles RNN architectures such as long short-term memory~\citep[LSTM;][]{hochreiter1997long} and gated recurrent units~\citep[GRU;][]{chung2014empirical}, both of which were explicitly designed to have additive gradients as a way of avoiding the vanishing gradient issue. Nonetheless, exploding gradients may still be an issue and are worth further discussion.
\begin{figure}
\caption{Overview of the OHO joint optimization procedure. The parameters and hyperparameters are updated in parallel, based on training and validation loss gradients computed using the current minibatch.}
\label{fig:my_label}
\end{figure}
Because the influence matrix $\Gamma_{\tau+1}$ is defined recursively, the gradient of the validation loss with respect to ${\boldsymbol \varphi}$ contains the product of Hessians:
\begin{align*}
{\small
\nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} = \Bigg\langle G_{\tau}, -\sum^{\tau}_{i=0} \Bigg(\prod^{\tau}_{j=i+1} (I-\alpha_j H_j-2\alpha_j\lambda_j I)\Bigg) G_i\Bigg\rangle ,
}
\end{align*}
where $\langle \cdot, \cdot \rangle$ denotes an inner product. $G_i$ and $H_j$ are the gradient and Hessian of the loss $\mathcal{L}(\boldsymbol{\theta}_\tau)$ at iteration $i$ and $j$, respectively. The product of the Hessians can lead to gradient explosion, especially when consecutive gradients are correlated~\citep{Metz2019}.
It remains unclear whether gradient explosion is an issue for RTRL (or, more generally, for forward-differentiation-based optimization). We study this in our experiments.
\iffalse
In order to analyze the exploding gradient issue, we consider the gradient of the validation loss after $\delta$ steps w.r.t. the learning rate at time $t$:
\begin{align*}
\frac{\partial L(\boldsymbol{\theta}_{t+\delta})}{\partial \eta^{(t)}}
&=
\frac{\partial L(\boldsymbol{\theta}_{t+\delta})}{\partial \boldsymbol{\theta}_{t+\delta}}
\frac{\partial \boldsymbol{\theta}_{t+\delta}}{\partial \boldsymbol{\theta}_t}
\frac{\partial \boldsymbol{\theta}_t}{\partial \eta^{(t)}}
\end{align*}
where $\eta^{(t)}$ is the use of the learning rate $\eta$ at time $t$, and $g(\theta) = \frac{\eta}{B} \sum_{(x,y) \in B} \nabla_\theta l(y, f(x; \theta))$. More specifically, we look at the temporal derivative:
\begin{align*}
\frac{\partial \theta_{t+\delta}}{\partial \theta_t}
=
\frac{\partial \theta_{t+\delta}}{\partial \theta_{t+\delta-1}}
\cdots
\frac{\partial \theta_{t+1}}{\partial \theta_{t}},
\end{align*}
where the one-step temporal derivative is written as
\begin{align}
\label{eq:1-step-temporal-derivative}
\frac{\partial \theta_{t+1}}{\partial \theta_{t}}
=
I - \eta^{(t+1)} H(\theta_t).
\end{align}
We use $H(\theta_t)$ as a shorthand for the Hessian of the mini-batch loss w.r.t. the parameters:
\begin{align}
\label{eq:hessian}
H(\theta_t) = \frac{1}{B} \sum_{(x,y) \in B} \text{Hess}_{\theta} l(y, f(x; \theta_t)).
\end{align}
\fi
\section{Related work}
Optimizing hyperparameters based on a validation loss has been tackled from various stances. Here, we provide a brief overview of state-of-the-art approaches to hyperparameter optimization.
\paragraph{Random search.}
For small hyperparameter spaces, grid or manual search methods are often used to tune hyperparameters.
For a moderate number of hyperparameters, random search can find hyperparameters that are as good or better than those obtained via
grid search, at a fraction of its computation time \citep{Bengio2000, Bergstra2012, Bergstra2011, Jamieson2015, Li2016}.
\paragraph{Bayesian optimization approaches.}
Bayesian optimization (BO) is a smarter way to search for the next hyperparameter candidate \citep{Snoek2012, Swersky2014, Snoek2015, Eriksson2019, Kandasamy2020} by explicitly keeping track of uncertainty.
BO iteratively updates the posterior distribution of the hyperparameters and assigns a score to each hyperparameter based on it. Because evaluation of hyperparameter candidates must largely be done sequentially, the overall computational benefit from BO's smarter candidate proposal is only moderate.
\paragraph{Gradient-based approaches.}
Hyperparameter optimization with approximate gradient (HOAG) is an alternative technique that iteratively updates the hyperparameters following the gradient of the validation loss.
HOAG approximates the gradient using an implicit equation with respect to the hyperparameters \cite{Domke2012, Pedregosa2016}.
This work was extended to DNNs with stochastic optimization settings by \citet{Maclaurin2015, Lorraine2020}. In contrast to our approach, which uses the chain rule to compute the exact gradient online, this approach exploits the implicit function theorem and inverse Hessian approximations.
While the hyperparameter updates appear online in form, the method requires the network parameters to be nearly at a stable point. In other words, the network needs to be fully trained at each step.
\citet{Metz2019} attempt to overcome the difficulties of backpropagation through an unrolled optimization process by using a surrogate loss, based on variational and evolutionary strategies. Although their method addresses the exploding gradient problem, it still truncates
backpropagation \citep{Shaban2019},
which introduces a bias in the gradient estimate of the per step loss. In contrast, our method naturally addresses both issues: It is unbiased by algorithm design, and the gradients are empirically stable, as will be shown in Experiment~\ref{exp:gradient_stability}.
\paragraph{Forward differentiation approaches.}
Our work is closely related to \citep{Franceschi2017, Baydin2018, Donini2020}, which share the goal of optimizing hyperparameters in real-time.
\citet{Franceschi2017} first introduced a forward gradient-based hyperparameter optimization scheme (RTHO), where they compute the gradient of the response function from a Lagrangian perspective. Their solution ends up being equivalent to computing the hypergradient using RTRL \citep{Williams1989}, as in OHO. Our conceptual framework is however different, as we derive the hyperparameter gradient by directly mapping an optimizer to a recurrent network and then applying RTRL. Unlike \citet{Franceschi2017}, whose only quantitative evaluation was performed on TIMIT phonetic recognition, we evaluate OHO extensively in a large experimental setup and rigorously analyze its properties. \citet{Donini2020} recently proposed adding a discount factor $\gamma \in [0,1]$ to the influence function, as a heuristic designed to compensate for inner-loop non-stationarities by controlling the range of dependencies of the past influence function.
One potential disadvantage of our approach is that the memory complexity grows with the number of hyperparameters. There are however several approximate variants of RTRL that can be used to address this issue.
As an example, unbiased online recurrent optimization (UORO) reduces the memory complexity of RTRL from quadratic to linear \cite{Tallec2017}.
UORO uses rank-1 approximation of the influence matrix and can be extended to rank-$k$ approximation~\citep{Mujika2018}.
Although UORO gives an unbiased estimate of the gradient, it is known to have high variance, which may slow down or impair training \cite{Cooijmans2019, Marschall2020}. It remains an open question whether these more scalable alternatives to RTRL are also applicable for online hyperparameter optimization.
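The core of UORO's memory saving is a rank-1 sign trick, which we can sketch as follows (our illustration of the principle, not the full UORO algorithm): for a random sign vector $\nu$ with $\mathbb{E}[\nu\nu^{\top}]=I$, the outer product $(A\nu)(\nu^{\top}B)$ is an unbiased rank-1 estimate of the matrix product $AB$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # e.g. an accumulated Jacobian (toy)
B = rng.standard_normal((4, 2))   # e.g. immediate hyperparameter influence (toy)

est = np.zeros((4, 2))
n_samples = 20000
for _ in range(n_samples):
    nu = rng.choice([-1.0, 1.0], size=4)         # Rademacher signs
    est += np.outer(A @ nu, nu @ B) / n_samples  # rank-1, unbiased in expectation

print(np.abs(est - A @ B).max())  # small on average; single samples are high-variance
```

A single rank-1 sample needs only $O(n+p)$ memory instead of $O(np)$ for the full influence matrix, at the price of the per-sample variance discussed above.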
\begin{figure}
\caption{The number of search trials and the time required for a model to reach $0.3$ or lower cross-entropy loss on CIFAR10, for various inner-loop learning procedures. The corresponding hyperparameters are selected by
(A) uniform sampling or (B) Bayesian optimization (using the off-the-shelf package
scikit-optimize).
}
\label{fig:performance}
\end{figure}
\begin{figure}
\caption{
Test performance distribution across different hyper-parameters settings (initial learning rates and L2 weight decay coefficients) using different optimization methods.}
\label{fig:perm_stability}
\end{figure}
\section{Experiments}
We conduct empirical studies to assess the proposed algorithm's performance and computational cost. We compare them to widely used hyperparameter optimization methods, on standard learning problems such as MNIST and CIFAR10 classification. To better understand the properties of OHO, we examine hyperparameter dynamics during training and how they are affected by various meta-hyperparameter settings. We explore layerwise hyperparameter sharing as a potential way to trade-off between computational costs and the flexibility of the learning dynamics.
Lastly, we analyze the stability of the optimization procedure with respect to meta-hyperparameters.
For the MNIST~\citep{LeCun1998MNIST} experiments, we use 10,000 of the 60,000 training examples as a validation set for evaluating the OHO outer-loop gradients. We assess two architectures: a 4-layer fully connected network with 128 units per layer and a 4-layer convolutional network with 128 kernels of size $5\times5$ per layer, both with ReLU activations.
For the CIFAR10 dataset~\citep{Krizhevsky2009CIFAR}, we split the images into 45,000 training, 5,000 validation, and 10,000 test images and normalize
them such that each pixel value ranges between [0, 1]. We use the ResNet18 architecture~\citep{He2016ResNet} and apply random cropping and random horizontal flipping for data augmentation \citep{Shorten2019}. For both MNIST and CIFAR10, the meta optimizer is SGD, with meta learning rate $0.000005$ and initial weight decay coefficient $0$. We set initial learning rates to $0.001$ and $0.01$, and validation batch size to $100$ and $1000$ for MNIST and CIFAR10, respectively.
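A sketch of this data preparation (split sizes from the text; the helper name and toy arrays are illustrative):

```python
import numpy as np

def prepare(images, n_train=45_000, n_val=5_000):
    """Scale pixel values to [0, 1] and split into train/val/test."""
    x = images.astype(np.float32) / 255.0
    return x[:n_train], x[n_train:n_train + n_val], x[n_train + n_val:]

toy = np.arange(24, dtype=np.uint8).reshape(6, 2, 2)  # six tiny "images"
tr, va, te = prepare(toy, n_train=3, n_val=2)
print(len(tr), len(va), len(te))  # 3 2 1
```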
\subsection{Performance}
We compare OHO against hyperparameter optimization by uniform random search and Bayesian optimization, in terms of
the number of training runs and the computation time required to achieve a predefined level of generalization performance (test loss $\leq0.3$, see Fig.~\ref{fig:performance}).
We vary the optimizer $g$ (Eq.~\ref{eq:update_rule}) by considering five commonly used learning rate schedulers: 1) SGD with fixed learning rate (`Fixed'), 2) Adam, 3) SGD with step-wise learning rate annealing (`Step'), 4) exponential decay (`Exp'), and 5) cosine (`Cosine') schedulers \citep{Loshchilov2017}.
We optimize the learning rate and weight decay coefficient, and all models are trained for 100 and 300 epochs for MNIST and CIFAR10, respectively.
Additionally, we tune step size, decay rate, and momentum coefficients for Step, Exp, and Adam, respectively.
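For reference, the baseline schedulers take standard textbook forms such as the following (the exact parameter settings used in the experiments may differ):

```python
import math

def step_lr(a0, t, step=30, gamma=0.1):
    """Step-wise annealing: multiply by gamma every `step` epochs."""
    return a0 * gamma ** (t // step)

def exp_lr(a0, t, gamma=0.97):
    """Exponential decay."""
    return a0 * gamma ** t

def cosine_lr(a0, t, T=300):
    """Cosine annealing from a0 down to 0 over T epochs."""
    return 0.5 * a0 * (1.0 + math.cos(math.pi * t / T))

print(cosine_lr(0.1, 0), cosine_lr(0.1, 300))  # starts at a0, decays to 0
```

In contrast to these fixed schedules, OHO adapts the learning rate online from validation gradients rather than following a predetermined curve.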
In our experiments, we find that OHO usually takes a single run to find a good solution.
A single OHO training run takes approximately $12\times$ longer than training the network once without OHO, yet the other hyperparameter algorithms require training the network multiple times to reach the same level of performance.
Overall OHO ends up significantly faster at finding a good solution in terms of total wall-clock time (Fig.~\ref{fig:performance}, bottom row).
For each method, we measure the final test loss distribution obtained for various initial learning rates and weight decay coefficients.
The initial learning rates and weight decay coefficients were randomly chosen from the ranges $[0.0001,0.2]$ and $[0,0.0001]$, respectively.
For both MNIST and CIFAR10, our procedure, which dynamically tunes the learning rate and degree of regularization during model learning, results in better generalization performance and much smaller variance compared to other optimization methods (Fig.~\ref{fig:perm_stability}, `global OHO').
This demonstrates the robustness of OHO to the initialization of hyperparameters and makes it unnecessary to train multiple models in order to find good hyperparameters.
These results also suggest that there may be a systematic advantage in jointly optimizing parameters and hyperparameters.
\begin{figure}
\caption{(A) The learning rate dynamics for each layer during the early stage of the training for different initializations.
(B) Test loss comparison for layerwise vs.\ global OHO, for several initial learning rates.}
\label{fig:layewise_lr}
\end{figure}
\begin{figure}
\caption{Test loss and wallclock time statistics when using OHO to optimize different numbers of hyper-parameter sets, where each set contains three hyper-parameters:
two learning rates for weights and bias, and one L2 weight decay coefficient. The hyper-parameter sets are allocated to neighboring layers to evenly partition the full network. Layerwise OHO has 6 sets for MNIST, 62 for CIFAR10. }
\label{fig:tradeoff_perm_speed}
\end{figure}
\begin{figure}
\caption{
(A) Learning rate dynamics, after being manually re-initialized to 0.2 at epoch 30, 50, 90, 180, and 270 (vertical dashed lines).
(B) Test loss of dynamic learning rate (solid line) and non-corrected learning rate (dashed line) after the reset.
(C) Weight decay dynamics, after being manually re-initialized, details as in (a).
(D) Corresponding test loss on CIFAR10.}
\label{fig:perm_corruption}
\end{figure}
\subsection{Layerwise OHO}
In the earlier experiment (Fig.~\ref{fig:perm_stability}, `layerwise OHO'), we saw that {\em layer-specific} learning rates and weight decay coefficients can lead to even better test loss on CIFAR10. Here we further study richer hyperparametrizations.
To do so, we examine the effects of layerwise OHO on final test performance and quantify the trade-offs between performance and computational time, as we vary the number of hyperparameters.
First, we experiment with having a separate learning rate and weight decay coefficient per layer. We train both global OHO and layerwise OHO on CIFAR10 with several initial learning rates, $\alpha_0 = [0.1,0.15,0.2]$, and no initial regularization, $\lambda_0 = 0$, and compare their test losses (Fig.~\ref{fig:layewise_lr}). We find that the layerwise OHO test loss (green curves) is generally lower than that of global OHO (purple curves). Moreover, when analyzing the early stage of training across runs with different initial learning rates, we observe that learning rates tend to cluster by layer regardless of their starting points, and that their values are layer-specific (Fig.~\ref{fig:layewise_lr}). This suggests that layerwise grouping of the hyperparameters may be appropriate in such models.
We next analyze the performance and computational-speed trade-off of layerwise OHO under coarser hyperparameter grouping schemes.
If we define one hyperparameter set as the learning rates for the weight and bias, together with the L2 regularization weights, then global OHO has one set, while layerwise OHO has as many sets as there are layers in the network (6 for MNIST and 62 for CIFAR10). We can then interpolate between these extremes. Specifically, for $k$ hyperparameter sets, we partition the network layers into $k$ neighboring groups, each with their own hyperparameter set (Fig.~\ref{fig:tradeoff_perm_speed}). For example, for $k=2$, the 6-layer network trained on MNIST would be partitioned into top 3 and bottom 3 layers, with 2 separate hyperparameter sets. For each configuration, we ran OHO optimization 10 times with different random initializations.
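The grouping scheme can be sketched as follows (a hypothetical helper, not the paper's code): for $k$ hyperparameter sets, the layer indices are partitioned into $k$ contiguous groups of (nearly) equal size.

```python
def partition_layers(n_layers, k):
    """Split layer indices 0..n_layers-1 into k contiguous, balanced groups."""
    base, rem = divmod(n_layers, k)
    sizes = [base + (1 if i < rem else 0) for i in range(k)]
    groups, start = [], 0
    for s in sizes:
        groups.append(list(range(start, start + s)))
        start += s
    return groups

print(partition_layers(6, 2))  # [[0, 1, 2], [3, 4, 5]]  (MNIST network, k = 2)
```

Setting $k=1$ recovers global OHO; setting $k$ equal to the number of layers recovers layerwise OHO.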
As shown in Fig.~\ref{fig:tradeoff_perm_speed}, the average performance improves as the number of hyperparameter sets increases, while the variance shrinks. In contrast, the wallclock time increases with the number of hyperparameter sets. This demonstrates a clear trade-off between the flexibility of the learning dynamics (and subsequently generalization performance) and computational cost, which argues for selecting the richest hyperparameter grouping scheme possible given a certain computational budget.
\subsection{Response to hyperparameter perturbations}
Due to its online nature, the OHO algorithm lends itself to being used for non-stationary problems, for instance when input statistics or task objectives change over time. Here we emulate such a scenario by perturbing the hyperparameters directly under a static learning scenario, and track the evolution of (meta-)learning in response to these perturbations.
In particular, we initialize the learning rates to a fixed value (0.1) and then reset them to a higher value (0.2) at various points during CIFAR10 training (Fig.~\ref{fig:perm_corruption}A).
We observe that the learning rates decrease rapidly early on and that, after being reset, they drop almost immediately back to their pre-perturbation values. This illustrates OHO's resilience and rapid response to learning rate corruption, which mimics a sudden change in the environment. We further compare the performance of OHO to a setting where the learning rate is held fixed after re-initialization. Figure~\ref{fig:perm_corruption}B shows the test losses for fixed (dashed line) and OHO (solid line). We find that the dynamic optimization of hyperparameters leads to systematically better performance.
In the next set of experiments we reset the weight decay coefficient to zero at various epochs: 30, 50, 90, 180, and 270.
The results are similar, with the weight decay coefficients quickly adapting back to their previous values (Fig.~\ref{fig:perm_corruption}C and D). The test loss fluctuates less for continually optimized OHO, relative to the fixed hyperparameters setting.
Altogether, these results suggest that joint optimization of parameters and hyperparameters is more robust to fluctuations in a learning environment.
\begin{figure}
\caption{Sensitivity analysis on CIFAR10. (A) Comparison of learning when the outer gradient $\frac{\partial \mathcal{L}_{\mathcal{D}_{\text{val}}}}{\partial {\boldsymbol \theta}}$ is computed on the training versus the validation dataset. (B) Resulting generalization performance. (C) Effect of the validation minibatch size.}
\label{fig:sensitivity_analysis}
\end{figure}
\begin{SCfigure}
\includegraphics[width=0.5\linewidth,page=10]{figs/figsOHO.pdf}
\caption{Test loss as a function of learning time, when the influence matrix is reset to zero every 1, 100, and 1000 steps; `no reset' corresponds to standard infinite-horizon global OHO.}
\label{fig:sensitivity_analysisC}
\end{SCfigure}
\subsection{Long-term dependencies in learning and their effects on hyperparameters }
Past work has suggested that the temporal horizon for the outer-loop learning influences both performance and the stability of hyperparameter optimization \citep{Metz2019}. In our case, the influence matrix is computed recursively over the entire learning episode and there is no strict notion of a horizon. We can, however, control the extent of temporal dependencies by resetting the influence matrix at a predefined interval (effectively forgetting all previous experience). We compare such resetting against our regular infinite-horizon procedure (global OHO; see Fig.~\ref{fig:sensitivity_analysisC}). We find that both the average test loss and its variance decrease as we reset less frequently; ultimately global OHO performs the best.
Thus, we conclude it is important to take into account past learning dynamics for improving generalization performance.
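The resetting scheme can be sketched as follows (illustrative code, not the paper's implementation; the recursion takes the generic form $\Gamma_{t+1} = J_t \Gamma_t + X_t$, here with toy Jacobians):

```python
import numpy as np

def run_influence(jacobians, immediate, reset_every=None):
    """Accumulate Gamma_{t+1} = J_t @ Gamma_t + X_t, optionally resetting."""
    gamma = np.zeros_like(immediate[0])
    for t, (J, X) in enumerate(zip(jacobians, immediate)):
        if reset_every is not None and t % reset_every == 0:
            gamma = np.zeros_like(gamma)   # forget all past experience
        gamma = J @ gamma + X
    return gamma

Js = [np.eye(3) * 0.9 for _ in range(10)]  # toy contracting Jacobians
Xs = [np.ones((3, 1)) for _ in range(10)]  # toy immediate influence
full = run_influence(Js, Xs)               # infinite horizon (no reset)
trunc = run_influence(Js, Xs, reset_every=5)
print(np.linalg.norm(full) > np.linalg.norm(trunc))  # True: resets shrink the influence
```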
\subsection{Sensitivity analysis for meta-hyperparameters}
In order to better understand how OHO works in practice, we explore its sensitivity to meta-hyperparameter choices: the validation dataset, initial learning rates, and meta learning rates.
\paragraph{Validation dataset.}
We explore the gradient of the meta-optimization objective $\nabla_{\boldsymbol \varphi} \mathcal{L}_{\mathcal{D}_{\text{val}}}$ in Eq.~\ref{eq:oho_gradient}, where the outer gradient $\frac{\partial \mathcal{L}_{\mathcal{D}_{\text{val}}}}{\partial {\boldsymbol \theta}_{\tau+1}}$ is computed with respect to the validation dataset $\mathcal{D}_{\text{val}}$. To show the importance of having a separate validation loss for the outer-loop optimization, we replace the validation dataset with the training dataset and find that this yields systematically worse generalization performance (Fig.~\ref{fig:sensitivity_analysis}B).
As expected, the hyperparameters overfit to the training loss when using the training dataset, but not when using the validation dataset. With the validation dataset, OHO prevents overfitting by shrinking the learning rates and increasing the weight regularization over time; in other words, OHO early-stops on its own.
In some practical applications it may prove computationally cumbersome to use the full validation dataset. We therefore experiment with using a random subset of the validation set to compute the outer gradient, varying the validation minibatch size over 100, 500, and 1,000. Unsurprisingly, we find that bigger is better (Fig.~\ref{fig:sensitivity_analysis}C): the test loss is lowest for the largest size considered, while the test loss for the smallest minibatch size (100) is on par with the step-wise learning rate scheduler. The performance changes gradually with the size, which allows us to trade off computational cost against final performance.
\begin{figure}
\caption{Norm of the influence matrix for different meta learning rates and initial learning rates. (A) Large meta learning rates lead to a discontinuous landscape (red curve). (B) The initial learning rates $10^{-3}$ and $10^{-2}$ are sensitive when combined with large meta learning rates.}
\label{fig:normJ_fix_mlr}
\end{figure}
\begin{figure*}
\caption{(A, B) The influence matrix norms w.r.t.\ the learning rate and the weight decay coefficient plateau rather than exploding. (C) The gradients within a single epoch (100 updates) do not correlate with each other; the average and standard deviation of the correlation decrease over time.}
\label{fig:stability}
\end{figure*}
\paragraph{Meta learning rates and initial hyperparameters.}
\label{exp:gradient_stability}
To deploy OHO in practice, one needs to choose a meta learning rate and the initial hyperparameters. We investigate the sensitivity of OHO to these choices in order to inform users about a sensible range for each meta-hyperparameter. We define the sensible range as the region where training is stable, and we consider learning stable when the gradients are well-defined along the entire learning trajectory; computing the gradient hinges on the product of Hessians induced by the recursion of the influence matrix. In short, sensible meta-hyperparameters should keep the norm of the influence matrix bounded throughout training.
We visualize the evolution of the norms of the influence matrices for the learning rates, $\|\frac{d{\boldsymbol \theta}^{(T)}}{d\alpha}\|^2_F$, and weight decay coefficients, $\|\frac{d{\boldsymbol \theta}^{(T)}}{d\lambda}\|^2_F$, for different meta learning rates and initial learning rates (Fig.~\ref{fig:normJ_fix_mlr}A, B). The norm is numerically ill-conditioned (blows up) with large meta learning rates ($>10^{-4}$) at the end stage of training, but remains smooth with smaller meta learning rates.
Recall that the OHO-optimized learning rates decrease during training to avoid overfitting. Later in training this eventually leads to a regime where the meta learning rate is larger than the learning rate. Some elements of the scaled gradient can then become greater than the corresponding learning rate $\alpha_{t-1}^i < \epsilon \left[\nabla_\alpha L({\boldsymbol \theta})\right]^i$, causing instability. The ideal meta learning rate should thus be small.
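A one-line toy illustration of this failure mode (all numbers hypothetical):

```python
alpha, eps = 0.01, 0.05   # the learning rate has decayed below the meta learning rate
hypergrad = 0.5           # hypothetical d(validation loss)/d(alpha)
alpha_next = alpha - eps * hypergrad
print(alpha_next < 0)     # True: the update overshoots and flips alpha's sign
```

A negative learning rate ascends the training loss, which destabilizes the inner loop; keeping the meta learning rate small avoids this regime.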
We further investigate the norm of influence matrix during training. We train multiple models with different initialization of hyperparameters while fixing the meta learning rate to $5 \times 10^{-5}$. We find that the norm rarely explodes in practice (Fig.~\ref{fig:stability} A, B). The norms plateau as training converges, but they do not explode.
We observe similar results for the norm of the Hessian matrix $\|\frac{\partial {\boldsymbol \theta}^{(T)}}{\partial {\boldsymbol \theta}^{(T-1)}}\|^2_2$ as well.
In order to understand why the norm of influence matrix is stable along the trajectory, we look into exploding gradient phenomena.
As we discussed in Section~\ref{sec:gradient_computation}, exploding gradients typically arise when a series of gradients are correlated. We compute the moving average and standard deviation of gradient correlations with a window size of 100 updates (a single epoch).
The mean correlation very rapidly approaches zero, with the standard deviation also decreasing as training progresses (Fig.~\ref{fig:stability}C). The gradients only correlate at the beginning of training but quickly decorrelate as learning continues. This causes the norm of the influence matrix to plateau and prevents the gradients from exploding.
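This diagnostic can be sketched as follows (our reconstruction; we use cosine similarity between consecutive minibatch gradients, which may differ in detail from the measure used for the figure):

```python
import numpy as np

def grad_correlations(grads):
    """Cosine similarity between consecutive gradient vectors."""
    g = np.asarray(grads, dtype=float)
    norms = np.linalg.norm(g, axis=1)
    dots = np.sum(g[:-1] * g[1:], axis=1)
    return dots / (norms[:-1] * norms[1:] + 1e-12)

def moving_stats(x, window=100):
    """Moving average and standard deviation over a trailing window."""
    x = np.asarray(x, dtype=float)
    means = np.array([x[max(0, i - window + 1): i + 1].mean() for i in range(len(x))])
    stds = np.array([x[max(0, i - window + 1): i + 1].std() for i in range(len(x))])
    return means, stds

rng = np.random.default_rng(0)
grads = rng.standard_normal((500, 10))   # uncorrelated toy "gradients"
corr = grad_correlations(grads)
mean, std = moving_stats(corr, window=100)
print(round(float(mean[-1]), 3))          # close to zero for uncorrelated gradients
```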
\section{Discussion}
Truly online hyperparameter optimization remains an open problem in machine learning. In this paper, we presented a novel hyperparameter optimization algorithm, OHO, which takes advantage of online temporal credit assignment in RNNs to jointly optimize parameters and hyperparameters based on minibatches of training and validation data. This procedure leads to robust learning, with better generalization performance than competing offline hyperparameter optimization procedures. It is also competitive in terms of total wallclock time.
The dynamic interaction between parameter and hyperparameter optimization was found not only to improve test performance but also to reduce variability across runs. Beyond automatically shrinking learning rates to avoid overfitting, OHO quickly adapts the hyperparameters to compensate for sudden changes, such as direct perturbations of the hyperparameters, and allows the learning process to reliably find better models.
The online nature of OHO updates makes it widely applicable to both stationary and nonstationary learning problems. We expect the same set of benefits to apply to problems such as life-long learning \citep{German2019,Kurle2020}, or policy learning in deep reinforcement learning \cite{Padakandla2019,Xie2020,Igl2020}, in which the statistics of the data and the learning objectives change over time. Until now, it has been virtually impossible to automate hyperparameter tuning in such challenging learning settings. OHO, on the other hand, may prove to be a key stepping stone for achieving robust automated hyperparameter optimization in these domains.
In machine learning, an enormous amount of time and energy is spent on hyperparameter tuning. Out of the box, RTRL-based OHO is already quite efficient, with a memory requirement that is linear in the number of parameters and an outer-loop gradient computation whose cost is similar to that of computing the inner gradients. This covers many practical use cases of meta-optimization. We believe that OHO and related ideas can dramatically reduce the burden of tuning hyperparameters and become a core component of future off-the-shelf deep learning packages.
\end{document}
\begin{document}
\title
{Local rates of Poincar{\'e} recurrence for rotations and weak mixing}
\author{J.-R. Chazottes}
\address{
Centre de Physique Th{\'e}orique,
CNRS-UMR 7644,
{\'E}cole Polytechnique,
91128 Palaiseau Cedex, France.
}
\email{[email protected]}
\author{F. Durand}
\address{Laboratoire Ami{\'e}nois
de Math{\'e}matiques Fondamentales et
Appliqu{\'e}es, CNRS-UMR 6140, Universit{\'e} de Picardie
Jules Verne, 33 rue Saint Leu, 80039 Amiens Cedex 1, France.}
\email{[email protected]}
\begin{abstract}
We study the lower and upper local rates of Poincar{\'e} recurrence
of rotations on the circle by means of symbolic dynamics.
As a consequence, we show that
if the lower rate of Poincar{\'e} recurrence of an ergodic dynamical
system $(X,\EuScript{F} , \mu, T)$ is greater than or equal to 1 $\mu$-almost everywhere, then
it is weakly mixing.
\end{abstract}
\subjclass{Primary: 37B20 ; Secondary: 37B10}
\keywords{Poincar{\'e} recurrence, Sturmian subshift, linearly recurrent subshift, rotation, weak mixing.}
\maketitle
\section{Introduction and main result}\label{intro}
Let $T$, acting on the Lebesgue probability space $(X,\EuScript{F},\mu)$, be a
(not necessarily invertible) measure-preserving ergodic map.
Let $U$ be a measurable subset of $X$ and $x\in U$. Define the first return time of $x$ to $U$ by
$\tau_{U}(x)\eqdef\inf\{k\geq 1:T^{k}x\in U\}$. If $\mu(U)>0$ then $\tau_U(x)$ is finite
for $\mu$-almost every $x$ by the Poincar{\'e} recurrence theorem.
Now define the Poincar{\'e} recurrence of the set $U$ by
$$
\tau(U)\eqdef \inf\{\tau_{U}(x):x\in U\}\;.
$$
It is easy to check that
$$
\tau(U)=\inf\{k>0: T^{k}U\cap U\neq \emptyset \}=
\inf\{k>0: T^{-k}U\cap U\neq \emptyset \}\,.
$$
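For example, for the shift map on $\{0,1\}^{\mathbb N}$ and $U$ the set of sequences
beginning with $n$ zeros, one has $\tau(U)=1$ for every $n\geq 1$, since the fixed
point $000\cdots$ belongs to $U$ together with its shift.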
The notion of Poincar{\'e} recurrence of a set is used in
\cite{valya} to define a notion of dimension similar to Hausdorff
dimension where diameters of sets are replaced by their Poincar{\'e}
recurrence (see also
\cite{acs,asuu,bruin,fuu,KPV,KM,psv}).
Suppose now that $\zeta$ is a finite measurable partition of $X$ and
denote, as usual, by $\zeta_n$ the partition
$\zeta\vee T^{-1}\zeta\vee\cdots\vee T^{-n+1}\zeta$
($n\geq 1$, $\zeta_1\eqdef\zeta$) and by $\zeta_n(x)$
the atom of this partition containing point $x$.
We define the (lower and upper) local rate of Poincar{\'e} recurrence
for the partition $\zeta$ respectively as follows:
$$
\underline{\mathbb Ret}_{\zeta}(x)\eqdef \liminf_{n\to\infty}
\frac{\tau(\zeta_{n}(x))}{n}\, ,\quad
\overline{\mathbb Ret}_{\zeta}(x)\eqdef \limsup_{n\to\infty}
\frac{\tau(\zeta_{n}(x))}{n}\, .
$$
(Of course, these quantities depend on the map $T$ but we omit this
dependence in the notation.)
If $x\in X$ is a periodic point, then obviously $\underline{\mathbb Ret}_{\zeta}(x)=\overline{\mathbb Ret}_{\zeta}(x)=0$.
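(To see this, note that if $x$ has period $p$ then $x\in\zeta_n(x)$ and
$T^p x=x$, so $T^p\zeta_n(x)\cap\zeta_n(x)\neq\emptyset$ and thus
$\tau(\zeta_n(x))\leq p$ for every $n$, whence $\tau(\zeta_n(x))/n\to 0$.)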
A useful fact to be used later is that
both $\underline{\mathbb Ret}_{\zeta}$ and $\overline{\mathbb Ret}_{\zeta}$ are sub-invariant
functions, namely $\underline{\mathbb Ret}_{\zeta}\circ T\leq\underline{\mathbb Ret}_{\zeta}$ and
$\overline{\mathbb Ret}_{\zeta}\circ T\leq\overline{\mathbb Ret}_{\zeta}$.
This is because
for any $n\geq 1$ and any $x\in X$, $\tau(\zeta_{n-1}(Tx))\leq\tau(\zeta_{n}(x))$.
If we assume $\mu$ is an ergodic probability measure, then it follows
by basic arguments that
$\underline{\mathbb Ret}_{\zeta}$ and $\overline{\mathbb Ret}_{\zeta}$ are $\mu$-a.e. constant
functions (see \cite{acs} for details).
The definition of the (lower and upper) local rate of Poincar{\'e}
recurrence first appeared in \cite{hsv} in connection with the error
term in the approximation by the exponential law of the distribution
of rescaled return times to cylinder sets.
In \cite{acs-era} the authors prove the following result, where
$h_\mu(T,\zeta)$ denotes the measure-theoretic entropy of
$(X,\EuScript{F},\mu,T)$ with respect to the partition $\zeta$.
\begin{theorem}
\label{Rinf->h>0}
Let $(X,\EuScript{F},\mu,T)$ be as above and $\zeta$ a finite partition of $X$.
If $h_{\mu}(T,\zeta)>0$ then $\underline{\mathbb Ret}_{\zeta}(x)\geq 1$ $\mu$-almost everywhere.
\end{theorem}
Equivalently, if $\underline{\mathbb Ret}_{\zeta}(x)< 1$ on a set of positive
$\mu$-measure, then $h_{\mu}(T,\zeta)=0$.
In the case when $(X,T)$ has the specification property (think of a
topologically mixing subshift of finite type over a finite alphabet as
a typical example), one has
$
\overline{\mathbb Ret}_{\zeta}(x)\leq 1
$
for all $x\in X$, where $\zeta$ is the canonical partition labelled by
the alphabet (see \cite{acs}). Therefore, in this case $\underline{\mathbb Ret}_\zeta(x)=\overline{\mathbb Ret}_\zeta(x)=1$
$\mu$-almost everywhere where $\mu$ is any ergodic probability measure
such that $h_{\mu}(T,\zeta)>0$.
Outside such sets of full measure,
the specification property allows one to construct some points $x$ such that
$0<\underline{\mathbb Ret}_\zeta(x)=\overline{\mathbb Ret}_\zeta(x)<1$.
On the other hand,
the positivity of entropy is an unavoidable assumption in Theorem
\ref{Rinf->h>0}.
Indeed,
it is shown in \cite{acs} that $\underline{\mathbb Ret}_{\zeta}(x)=0$ for Lebesgue
almost every $x$ for a special class of rotations where $\zeta$ is the canonical partition into two
atoms given by the rotation angle.
A natural question is thus to know
whether or not the converse to Theorem \ref{Rinf->h>0} holds.
In the course of the present note, we will provide an example,
namely the Morse system, showing that this is not the case.
The motivation of the present work was to figure out which
property of an ergodic dynamical system $(X,\EuScript{F},\mu,T)$,
equipped with a partition $\zeta$, the property
``$\underline{\mathbb Ret}_\zeta(x)\geq 1$ $\mu$-almost surely'' is related to.
In this direction, we have the following result.
\begin{theorem}\label{main}
Let $T$, acting on the Lebesgue probability space $(X,\EuScript{F},\mu)$, be a measure-preserving ergodic map. If
$\underline{\mathbb Ret}_{\zeta}(x)\geq 1$ for $\mu$-almost every $x\in X$ and
every non-trivial measurable partition $\zeta$ then $(X,\EuScript{F},\mu , T)$ is weakly
mixing.
\end{theorem}
(By `non-trivial' we mean that no atom of $\zeta$ has measure $0$ or $1$.)
For definitions and properties of the classical notions of mixing in
ergodic theory, we refer the reader to e.g. \cite{petersen}.
Our approach to prove this theorem is as follows. The point is that
a non-weakly mixing system has a non-trivial eigenvalue, hence a
measure-theoretical factor which is a rotation. Moreover,
every measurable partition of the factor induces a
measurable partition of the original system. Therefore we are led
to show that the lower local rate of Poincar{\'e} recurrence for rotations
is strictly less than 1 for some partition. But rotations are
measure-theoretically isomorphic to Sturmian subshifts. Hence, it suffices to
prove that the lower rate of
Poincar{\'e} recurrence for Sturmian subshifts is strictly less than 1
for some partitions. In fact we prove:
\begin{theorem}\label{bounded-case}
Let $(\Omega_{\alpha} , S)$ be the Sturmian subshift generated by
$\alpha \in \mathbb R \setminus \mathbb Q$ and $\mu$ be its unique ergodic measure.
Let $\zeta$ be the partition $\{ [0] , [1] \}$.
\begin{enumerate}
\item
\label{equivalence}
The following
statements are equivalent.
\begin{enumerate}
\item
The coefficients of the continued fraction of
$\alpha $ are bounded;
\item
$\underline{\mathbb Ret}_{\zeta }(x) > 0$ for $\mu$-almost every $x$;
\item
$\overline{\mathbb Ret}_{\zeta }(x) < \infty$ for $\mu$-almost every $x$.
\end{enumerate}
\item
\label{inferieura1}
Moreover, if the coefficients of the continued fraction of $\alpha $ are
bounded, then $\underline{\mathbb Ret}_{\zeta }(x)< 1$ and $\overline{\mathbb Ret}_{\zeta }(x) > 1$
for $\mu$-almost every $x$.
\item
\label{rotation}
The same results hold for $([0,1[ , x\mapsto x+ \alpha \mod 1)$ and
$\zeta = \{ [0,1-\alpha [ , [ 1-\alpha , 1[ \}$.
\end{enumerate}
\end{theorem}
After completing our work, we discovered the preprint
\cite{Ku} in which the author computes the precise values of $\underline{\mathbb Ret}$ and
$\overline{\mathbb Ret}$ for rotations. See also \cite{K} for substitutive subshifts.
\section{Local rates of recurrence for some zero entropy systems}\label{0-entropy}
In this section we establish some results about the local rate of recurrence for some zero entropy
systems that will be useful in the proof of Theorem \ref{main}.
Let $A$ be a finite alphabet. We endow $A^\mathbb Z$ and $A^\mathbb N$ with the product topology. Let $S:A^\mathbb Z \to A^\mathbb Z$ be the shift map:
$(S x)_n = x_{n+1}$ where $x=(x_n)\in A^\mathbb Z$. If $X$ is a closed and $S$-invariant set of $A^\mathbb Z$ then $(X,S)$ is
called a subshift. The shift map $S$ is continuous. For every word $u$ over the alphabet $A$, the
cylinder generated by $u$ is the set $[u]=\{(x_n)\in X : x_0 x_1 \cdots x_{|u|-1} = u \}$.
\subsection{Linearly recurrent subshifts.}
Let $x=(x_n)$ be a sequence in $A^\mathbb Z$ where $A$ is a finite alphabet. We call
$L(x)$ the set of all finite words appearing in $x$. The length $|u|$ of a
word $u\in L(x)$ is the number of symbols in $u$.
Let $u,w\in L(x)$.
For a subshift $(X,S)$ we set $L(X)=\bigcup_{x\in X} L(x)$. We say an
element $x$ of $A^\mathbb N$ or $A^\mathbb Z$ generates $(X,S)$ if $L(x) = L(X)$.
We say that $w$ is a return word
to $u$ of $x$ if
$wu$ belongs to $L (x)$, $u$ is a prefix of $wu$ and $u$ has exactly two occurrences in $wu$.
(There is a more general definition of return words in \cite{DHS}.)
We say that $x$ is uniformly recurrent if every $u\in L(x)$ appears infinitely many times in $x$ and,
for every $u\in L(x)$, there exists $K_u$ such that every return word $w$
to $u$ satisfies $|w|\leq K_u |u|$.
Let us recall that if the subshift $(X,S)$ is minimal then all its points are uniformly recurrent.
We say that $x$ is {\rm linearly recurrent} (LR) (with constant $K>0$) if it is uniformly
recurrent and if for every $u\in L(x)$
and every return word $w$ to $u$ we have $|w| \leq K|u|$.
We say that a subshift $(X,S)$ is LR (with constant $K$) if it is minimal and contains a LR sequence (with constant $K$).
Notice that a minimal subshift is LR if and only if all its elements
are linearly recurrent.
Let $u$ be a word and $\alpha \in \mathbb R_+$. The prefix of length
$\lfloor |u| \alpha \rfloor $ of the sequence $uuu \dots $ is denoted by
$u^\alpha$, where $\lfloor . \rfloor$ is the integer part map.
\begin{proposition}[\cite{DHS}]
\label{proplr}
Let $(X,S)$ be an aperiodic LR subshift with constant $K$. Then
$X$ is $(K+1)$-power free
(i.e. $u^{K+1}\in L(X)$ if and only if
$u=\emptyset$)
and
for all $u\in L(X)$ and every return word $w$ to $u$ we have $(1/K)|u| < |w|$.
\end{proposition}
Let $(X,S)$ be a subshift on the alphabet $A$.
Let $\zeta $ be the partition into $1$-cylinders, i.e. $\zeta = \{ [a] ;
a \in A \}$.
There is a bijective correspondence between the
partition $\zeta_n$ and the set of the words of length $n$ of
$X$. Moreover, for all $x=(x_n)\in X$ and $n\in \mathbb N$ we have
$$
\tau (\zeta_n (x))
=
\min \{ |w| : w\,\textup{is a return word to}\, x_0 x_1 \dots x_{n-1} \}\, .
$$
Hence by Proposition \ref{proplr} we have the following result.
\begin{proposition}\label{ratelinrec}
Let $(X,S)$ be an aperiodic LR subshift with constant $K$ and $\zeta $ the
partition into $1$-cylinders.
Then for all $x\in X$:
$$
\frac{1}{K}
\leq
\underline{\mathbb Ret}_{\zeta} (x)
\leq
\overline{\mathbb Ret}_{\zeta} (x)
\leq
K .
$$
\end{proposition}
Let $(X,S)$ be a subshift and $\zeta $ be the partition into $1$-cylinders.
If a subshift is $(K+1)$-power free then
$1/K \leq \underline{\mathbb Ret}_{\zeta} (x)$ for all $x\in X$. Hence if we define
$$
\delta = \inf \{ K>0 : (X,S) \hbox{ is } (K+1)\hbox{-power free} \}
$$
then it follows that $1 / \delta \leq \underline{\mathbb Ret}_{\zeta} (x)$ for
all $x\in X$.
It is known \cite{Th} that $\delta = 1$ for the Morse sequence
(this is the fixed point of the substitution $\sigma$ defined by $\sigma (0) = 01$ and $\sigma (1) = 10$).
For the subshift $(X,S)$ of $\{0,1\}^{\mathbb N}$ generated by the $S$-orbit closure of the Morse sequence, one
has $\underline{\mathbb Ret}_{\zeta} (x)\geq 1$ for all $x\in X$, where $\zeta=\{[0],[1]\}$.
This gives an example of a zero-entropy dynamical system whose local rate
of Poincar{\'e} recurrence is greater than or equal to one. Therefore the converse to
Theorem \ref{Rinf->h>0} is false.
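For concreteness, the first iterates of the Morse substitution read
\[
\sigma(0)=01,\quad \sigma^{2}(0)=0110,\quad \sigma^{3}(0)=01101001,\quad
\sigma^{4}(0)=0110100110010110,
\]
so the Morse sequence begins $0110100110010110\cdots$.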
\subsection{Sturmian subshifts}
\label{sturmsubshift}
Let \( 0<\alpha <1 \) be an
irrational number. We define the map
\( R_{\alpha }:\left[ 0,1\right[ \rightarrow \left[ 0,1\right[ \)
by
$R_{\alpha }\left( t\right) =t+\alpha$
(mod 1) and the map
$I_{\alpha }:\left[ 0,1\right[ \rightarrow \left\{ 0,1\right\}$
by
$I_{\alpha }\left( t\right) =0 $
if
$t\in \left[ 0,1-\alpha \right[ $
and
$ I_{\alpha }\left( t\right) =1 $
otherwise.
Let $\Omega _{\alpha }$ be the closure of the set $\left\{ \left. \left( I_{\alpha }
\left( R^{n}_{\alpha }\left( t\right) \right) \right) _{n\in \mathbb Z }\textrm{ }\right| \textrm{ }t\in
\left[ 0,1\right[ \textrm{ }\right\}$.
The subshift
$ \left( \Omega _{\alpha },S \right) $
is the {\it Sturmian subshift} generated by $\alpha$ and its elements are called {\it Sturmian
sequences}.
There exists a factor map (see \cite{HM})
$\phi :\left( \Omega _{\alpha },S \right) \rightarrow \left( \left[ 0,1\right[ ,R_{\alpha }\right) $
such that
$
\left| \phi ^{-1}\left( \left\{ \beta \right\} \right) \right| =2$
if
$\beta \in \left\{ \left. n\alpha \textrm{ }\right| \textrm{ }n\in \mathbb Z \textrm{ }\right\}$
and
$\left| \phi ^{-1}\left( \left\{ \beta \right\} \right) \right| =1$
otherwise.
Consequently $\phi$ is a measure-theoretic isomorphism.
It is well-known that $\left( \Omega _{\alpha },S \right) $
is a non-periodic uniquely ergodic minimal subshift.
\begin{proposition}
[\cite{Du2,Du3}]
\label{lrsturm}
A Sturmian subshift $( \Omega _{\alpha },S) $ is LR
if and only if
the coefficients of the continued fraction expansion of $\alpha$ are bounded.
\end{proposition}
In the sequel we will make use of the following
morphisms
\( \rho _{n} \)
and
\( \gamma _{n} \),
\( n\in \mathbb N \setminus \{0\} \),
from
\( \left\{ 0,1\right\} \)
to
\( \left\{ 0,1\right\} ^{*} \)
defined
by
\[
\begin{array}{l}
\rho _{n}\left( 0\right) =01^{n+1}\\
\rho _{n}\left( 1\right) =01^{n}
\end{array}\textrm{ and }\begin{array}{l}
\gamma _{n}\left( 0\right) =10^{n+1}\\
\gamma _{n}\left( 1\right) =10^{n}
\end{array}\, .\]
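For instance, $\rho _{1}(0)=011$ and $\rho _{1}(1)=01$, so that
\[
\rho _{1}\rho _{1}(0)=\rho _{1}(011)=\rho _{1}(0)\,\rho _{1}(1)\,\rho _{1}(1)=0110101 ,
\]
a word that reappears in the proof of Theorem \ref{bounded-case}.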
\begin{proposition}\label{kappa}
\label{sturmdb}
Let
\( \left( \Omega_{\alpha},S\right) \)
be a Sturmian
subshift.
There exists a sequence
\( \left( \kappa_{n}\right) _{n\in \mathbb N } \)
taking values in
\( \left\{ \rho _{1},\gamma _{1},\rho _{2},\gamma _{2},\ldots \right\} \)
such that
\begin{enumerate}
\item
\( \displaystyle y=\lim _{n\rightarrow +\infty }\kappa _{1}\cdots \kappa _{n}\left( 00\cdots \right) \)
exists and generates
\( \left( \Omega_{\alpha},S \right) \);
\item
$ \left( \Omega_{\alpha},S\right) $ is uniquely ergodic;
\item
$1 \leq \frac{| \kappa _{1}\cdots \kappa _{n} (0)|}{ |\kappa _{1}\cdots \kappa _{n} (1)| } \leq \frac{3}{2}$;
\item
Let \( P_{0}=\{[0],[1]\}, \) and for \( n\geq 1, \) let
$$
P_{n}=\left\{ \left. S ^{k}\kappa _{1}\cdots \kappa _{n}\left( \left[
a\right] \right) \textrm{ }\right|
\textrm{ }0\leq k<\left| \kappa _{1}\cdots \kappa _{n}\left( a\right) \right| ,
\textrm{ }a\in \left\{ 0,1\right\} \textrm{ }\right\}
$$
is a partition of \( \Omega_{\alpha} \) with the following properties:
\begin{enumerate}
\item
\( \kappa_{1}\cdots \kappa_{n+1}\left( \left[ 0\right] \right) \cup \kappa
_{1}\cdots \kappa _{n+1}\left( \left[ 1\right] \right)
\subseteq \kappa _{1}\cdots \kappa _{n}\left( \left[ 0\right] \right)
\cup \kappa _{1}\cdots \kappa _{n}\left( \left[ 1\right] \right) \),
\item \( P_{n}\prec P_{n+1} \)
as partitions.
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
Statements 1, 2 and 4 are mainly due to Hedlund and Morse \cite{HM} (see \cite{DDM}). Statement 3 follows from the definition
of $\rho_n$ and $\gamma_n$ given above.
\end{proof}
\section{Proof of Theorem \ref{main} and Theorem \ref{bounded-case}}
\subsection{Proof of Theorem \ref{bounded-case}}
In this subsection $\alpha \in [0,1[$ is an irrational number,
$({\cal O}mega_\alpha , S)$ the Sturmian subshift it defines, $\mu$ its unique
ergodic measure and
$(\kappa_n)_{n\in \mathbb N}$ the sequence given by Proposition \ref{sturmdb}.
For all $n\in \mathbb N$ and all $a \in \{ 0,1 \}$ we set
\begin{align}
\label{mu}
\mu_n (a) = \sum_{0\leq k<\left| \kappa _{1}\cdots \kappa _{n}(a) \right| }
\mu \left( S^{k}\kappa _{1}\cdots \kappa _{n} ([a]) \right)
=
\left| \kappa _{1}\cdots \kappa _{n}(a) \right|
\mu \left( \kappa _{1}\cdots \kappa _{n} ([a]) \right) .
\end{align}
We remark that $\mu_n(0) + \mu_n (1) = 1$.
\begin{lemma}
\label{sturmrate}
Let $n\in \mathbb N$ and $m\in \mathbb R$ with
$1\leq m \leq |\kappa_n (0)| -2$. We set $\kappa_n (0) = ab^{|\kappa_n (0)| - 1}$. Let $U_n$ be the union of the sets
$S^k \kappa_{1}\cdots \kappa_{n} ([0])$, where
\begin{equation}
\label{ineq1}
|\kappa _{1}\cdots \kappa_{n-1} (a)| \leq k \leq | \kappa _{1}\cdots \kappa _{n} (0)| -\lfloor | \kappa _{1}\cdots \kappa_{n-1} (b)| (m+1) \rfloor ,
\end{equation}
and $V_n$ be the union of the sets
$S^l \kappa_{1}\cdots \kappa_{n} ([1])$, where
\begin{equation}
\label{ineq2}
|\kappa _{1}\cdots \kappa_{n-1} (a)| \leq l \leq | \kappa _{1}\cdots \kappa _{n} (1)| -\lfloor | \kappa _{1}\cdots \kappa_{n-1} (b)| m \rfloor .
\end{equation}
If $x\in U_n$ then there exists a word $v$ such that
$x\in [v^{m+1}]$. If $x\in V_n$ then there exists a word $v$ such that
$
x\in [v^m] .
$
In both cases $|v| = |\kappa_1 \cdots \kappa_{n-1} (b)|$.
Moreover for all $n\in \mathbb N$
$$
\mu (U_n ) \geq \mu_n(0) \left(\frac{|\kappa_n (0)| - m - 2}{3/2 + |\kappa_n (0)| - 1 } \right) \hbox{ and } \mu (V_n ) \geq \mu_n (1) \left( \frac{ |\kappa_n (0)| - m -2}{3/2 + |\kappa_n (0)| -2 } \right) .
$$
\end{lemma}
\begin{proof}
Let $n\in \mathbb N$ and $m\in [1 , |\kappa_n (0)| -2 ]$. Let $x\in S^k
\kappa_{1}\cdots \kappa_{n} [0] $ where $k\in
\mathbb N$ satisfies ($\ref{ineq1}$).
Let $w$ be the prefix of length $\lfloor |\kappa_1 \cdots \kappa_{n-1}
(b)|(m+1) \rfloor $ of $x$.
We have $x\in [w]$ and
$$
\lfloor |\kappa_1 \cdots \kappa_{n-1}
(b)|(m+1) \rfloor
=
|\kappa_1 \cdots \kappa_{n-1}
(b)|(\lfloor m \rfloor +1)
+
\lfloor |\kappa_1 \cdots \kappa_{n-1}
(b)|(m - \lfloor m \rfloor ) \rfloor .
$$
The sequence $x$ belongs to
$S^k[ \kappa _{1}\cdots \kappa_{n-1} (a b^{|\kappa_n (0)| -1})]$. Consequently, the hypotheses on $k$ imply
that there exist two words $s$ and $p$ such that
$$
w = s (\kappa_1 \cdots \kappa_{n-1} (b))^{\lfloor m \rfloor } p (sp)^{ m - \lfloor m \rfloor }
=
(sp)^{m+1},
$$
where
$ps = \kappa_1 \cdots\kappa_{n-1} (b)$.
The other case can be treated in the same way.
From Proposition \ref{sturmdb} we deduce that
\begin{align*}
\mu (U_n) &
=
\mu_n (0) \frac{|\kappa _{1}\cdots \kappa _{n}(0)| - |\kappa _{1}\cdots \kappa _{n-1}(a)| - \lfloor |\kappa _{1}\cdots
\kappa _{n-1}(b)|(m+1) \rfloor +1 }{|\kappa _{1}\cdots \kappa _{n}(0)|} \\
&
\geq
\mu_n (0)
\left(
\frac{
|\kappa _{1}\cdots \kappa _{n-1}(b)|(|\kappa_n (0)| - m - 2)
}
{
|\kappa _{1}\cdots \kappa _{n-1}(a)|+ |\kappa _{1}\cdots \kappa _{n-1}(b)|
\left(
|\kappa_n (0)|-1
\right)
}
\right) \\
& \geq
\mu_n (0) \left(\frac{|\kappa_n (0)| - m - 2}{3/2 + |\kappa_n (0)| - 1
} \right) .
\end{align*}
The same computations can be done for $V_n$.
\end{proof}
\begin{proof}[Proof of the statement \eqref{equivalence} of Theorem \ref{bounded-case}]
To prove that (a) is equivalent to (b) it suffices to prove that if the coefficients of the continued fraction of $\alpha $ are not bounded then
$\underline{\mathbb Ret}_{\zeta } (x) = 0$ for $\mu$-almost every $x$. The
other part of the proof follows from Propositions \ref{ratelinrec} and
\ref{lrsturm}. Since $\underline{\mathbb Ret}_\zeta$ is $\mu$-almost everywhere constant
(by sub-invariance and ergodicity of $\mu$), it is enough to prove that $\mu \left( \{ x : \underline{\mathbb Ret}_{\zeta } (x) = 0 \} \right) > 0$.
Let $(U_n)$ and $(V_n)$ be the sequences of open sets given by Lemma \ref{sturmrate}. From
Proposition 1.1 in \cite{Du3} and Proposition \ref{lrsturm} there exists a strictly increasing sequence $(n_i)$ such that
$\lim_{i\rightarrow +\infty} |\kappa_{n_i} (0)| = +\infty $.
For all $i\in \mathbb N$ we set $m_i = \lfloor\sqrt{|\kappa_{n_i} (0)|
-2}\rfloor$.
For all $i\in \mathbb N$ and all $x\in U_{n_i} \cup V_{n_i}$, by Lemma \ref{sturmrate},
there exists $v_i$ such that $x$ belongs to the cylinder $[v_i^{m_i}]$ and
consequently
$$
\frac{\tau \left(\zeta_{(m_i-1)|v_i|} (x)\right)}{(m_i-1)|v_i|} =
\frac{1}{m_i -1} .
$$
Hence, if $x\in \cap_{j\in \mathbb N} \cup_{i\geq j} (U_{n_i}\cup V_{n_i} )$ then
$\underline{\mathbb Ret}_{\zeta} (x) = 0$. But, from Lemma \ref{sturmrate}, we
also have $\mu (\cap_{j\in \mathbb N} \cup_{i\geq j}(U_{n_i} \cup V_{n_i} ) ) \geq 2/3$. Thus, (a) is equivalent to (b).
To prove that (a) is equivalent to (c) it suffices to prove that if the coefficients of the continued fraction of $\alpha $ are not bounded then
$\overline{\mathbb Ret}_{\zeta } (x) = \infty$ for $\mu$-almost every $x$. The
other part of the proof follows from Propositions \ref{ratelinrec} and
\ref{lrsturm}. Since $\overline{\mathbb Ret}_\zeta$ is $\mu$-almost everywhere constant
(by sub-invariance and ergodicity of $\mu$), it is enough to prove that $\mu \left( \{ x : \overline{\mathbb Ret}_{\zeta } (x) \geq h \} \right) > 0$ for all $h\geq 0$.
Let $h\geq 2$. Let $n\in \mathbb N$ be such that $\kappa_{n+1} (0) =ab^{i+1}$
with $i\geq 2$. We set $l_n = |\kappa _{1}\cdots \kappa_{n}(a)|$, $k_n = |\kappa _{1}\cdots \kappa_{n}(b^i)|/h$ and $W_n = \cup_{1\leq k\leq k_n}
S^{-k} \kappa _{1}\cdots \kappa_{n}([abb])$.
Take $x \in S^{-k} \kappa _{1}\cdots
\kappa_{n}([abb])$ with $1\leq k \leq k_n$. We remark $\kappa _{1}\cdots
\kappa_{n}([abb])$ is contained in $ \kappa _{1}\cdots
\kappa_{n+1}([a]) \cup \kappa _{1}\cdots
\kappa_{n+1}([b])$ (the proof is left to the reader). The words $\kappa_{n+1} (a)$
and $\kappa_{n+1} (b)$ end with the word $b^i$. Thus, we can write
$\kappa_1 \cdots \kappa_n (b) = uv$ in such a way that $v (\kappa_1
\cdots \kappa_n (b))^p \kappa_1
\cdots \kappa_n (ab^j) $ is a prefix of $x$ for some $p \leq i$, where $j=\max (i ,2 )$. It can be seen that
$\tau \left( \zeta_{k_n + l_n-1} (x) \right)$ is greater than $| \kappa_1 \cdots \kappa_n
(ab^j)|$. Consequently
\begin{align}
\label{retinfini}
\frac{\tau \left( \zeta_{k_n + l_n-1} (x) \right)}{k_n + l_n }
\geq
\frac{ j |\kappa_1 \cdots \kappa_n (b)| + |\kappa_1 \cdots
\kappa_n (a)| }{|\kappa_1 \cdots \kappa_n (b^i)|/h + |\kappa_1 \cdots
\kappa_n (a)|}
\geq \frac{j + 3/2}{i/h+ 3/2} =f(h,i,j) .
\end{align}
Hence if $x \in {\mathcal W} (h) = \cap_{j\in \mathbb N} \cup_{i\geq j} W_i$ then
$\overline{\mathbb Ret} (x) \geq f(h,i,j)$. But for all $n\in \mathbb N$ we have
\begin{align*}
\mu (W_n) & = k_n \mu (\kappa_1 \cdots \kappa_n ([abb]))
\geq
k_n \mu \left( \kappa_1 \cdots \kappa_{n+1} ([0]) \cup \kappa_1 \cdots
\kappa_{n+1} ([1]) \right) \\
& =
\frac{i|\kappa_1 \cdots \kappa_{n} (b)|}{h}
\left(
\frac{\mu_{n+1}(0)}{|\kappa_1 \cdots \kappa_{n} (ab^{i+1}) |}
+
\frac{\mu_{n+1}(1)}{|\kappa_1 \cdots \kappa_{n} (ab^{i}) |}
\right)
\\
&
\geq
\frac{i}{h}
\frac{|\kappa_1 \cdots \kappa_{n} (b)|}{|\kappa_1 \cdots \kappa_{n}
(ab^{i+1}) |}
=
\frac{i}{h}
\frac{|\kappa_1 \cdots \kappa_{n} (b)|}{|\kappa_1 \cdots \kappa_{n} (a)|
+ (i+1) |\kappa_1 \cdots \kappa_{n} (b) |}
\\
&
\geq
\frac{i}{h}
\frac{1}{3/2 + (i+1)}
\geq
\frac{2}{h}
\frac{1}{3/2 + (2+1)}
\geq
\frac{4}{9h} .
\end{align*}
Hence $\mu ({\mathcal W} (h)) \geq 4/(9h)$. From
Proposition 1.1 in \cite{Du3} and Proposition \ref{lrsturm} there exists a strictly increasing sequence $(n_i)$ such that
$\lim_{i\rightarrow +\infty} |\kappa_{n_i} (0)| = +\infty $. Thus, using
\eqref{retinfini} it comes that $\overline{\mathbb Ret} (x) \geq h$ for all
$x\in {\mathcal W} (h)$. This proves that (a) is equivalent to (c).
\end{proof}
\begin{proof}[Proof of the statement \eqref{inferieura1} of Theorem \ref{bounded-case}]
Using \eqref{retinfini} we
conclude that $\overline{\mathbb Ret} (x) > \frac{7}{5}$ for $\mu$-almost every $x\in
X$.
Now we prove the other part of the statement. It suffices to prove that
for some $\theta < 1$ we have $\mu ( \{ x : \underline{\mathbb Ret}_\zeta(x) \leq \theta \}) > 0$.
From the hypotheses the sequence $(|\kappa_n (0)| ; n\in \mathbb N)$ is
bounded by some constant $K$. We will need the following lemma.
\begin{lemma}
\label{muzero}
For all $a\in \{ 0,1 \}$ and all $n\in \mathbb N$ we have $\mu_n (a) \geq 2/(3K+1)$.
\end{lemma}
\begin{proof}
Suppose $\kappa_{n+1} (0) = ab^{i+1}$. Then one can check that
\begin{align*}
\mu ( \kappa_1 \dots \kappa_{n} ([a]))
= &
\mu ( \kappa_1 \dots \kappa_{n+1} ([0]))
+
\mu ( \kappa_1 \dots \kappa_{n+1} ([1])) \hbox{ and } \\
\mu ( \kappa_1 \dots \kappa_{n} ([b]))
= &
(i+1)\mu ( \kappa_1 \dots \kappa_{n+1} ([0]))
+
i\mu ( \kappa_1 \dots \kappa_{n+1} ([1])) . \\
\end{align*}
Consequently using \eqref{mu} we obtain
$$
\frac{3}{2i}
\geq
\frac{\mu_{n} (a)}{\mu_{n} (b)}
\geq
\frac{2}{3}
\frac{\mu ( \kappa_1 \dots \kappa_{n} ([a]))}{ \mu ( \kappa_1 \dots \kappa_{n} ([b])) }
\geq
\frac{2}{3(i+1)} .
$$
We conclude using the facts that $i\leq K-2$ (since $|\kappa_{n+1}(0)| = i+2 \leq K$) and $\mu_n (a) + \mu_n (b) = 1$.
\end{proof}
We consider several cases. Suppose there exists a strictly increasing sequence $(n_i)$
such that $|\kappa_{n_i} (0)|\geq 4$ for all $i\in \mathbb N$.
We set $m_i = 3$, $i\in \mathbb N$.
Let $(U_{n_i})_{i\in \mathbb N}$ and $(V_{n_i})_{i\in \mathbb N}$ be the sequences of open sets given by
Lemma \ref{sturmrate} and associated to the sequence $(m_i)_{i\in \mathbb N}$.
For all $i\in \mathbb N$ and all $x\in U_{n_i}$, by Lemma \ref{sturmrate},
there exists $v_i$ such that $x$ belongs to the cylinder $[v_i^{5/2}]$
and $\lim_{i\to \infty} |v_i| = + \infty$. Consequently
$$
\frac{\tau \left(\zeta_{(3/2)|v_i|-1} (x)\right)}{ \lfloor (3/2)|v_i| \rfloor } \leq
\frac{|v_i|}{(3/2)|v_i|} = \frac{2}{3} ,
$$
and, if $x\in \cap_{j\in \mathbb N} \cup_{i\geq j}U_{n_i}$ then
$\underline{\mathbb Ret}_{\zeta} (x) \leq 2/3$. Lemma \ref{muzero} and Lemma \ref{sturmrate} imply
$$
\mu (U_{n_i})
\geq
\mu_n (0) \frac{|\kappa_{n_i} (0)| - 7/2}{1/2 + |\kappa_{n_i} (0)| }
\geq
\frac{2(K - 7/2)}{(3K+1)(1/2 + K)} > 0.
$$
Hence $\mu (\cap_{j\in \mathbb N} \cup_{i\geq j}U_{n_i}) > 0$ and $\mu ( \{ x : \underline{\mathbb Ret}_\zeta(x) \leq 2/3\}) > 0$.
It remains to treat the following case: There exists $i_0$ such that for all $i\geq i_0$, $|\kappa_i (0)| = 3$.
For all $n\in \mathbb N$ we define $W_n$ to be the union of the sets
$S^k \kappa_{1}\cdots \kappa_{2n} [0]$, where
\begin{equation}
\label{ineq3}
|\kappa _{1}\cdots \kappa_{2n-2} (01)| \leq k \leq | \kappa _{1}\cdots \kappa_{2n-2} (01)|
+\lfloor | \kappa _{1}\cdots \kappa_{2n-2} (1)|/2 \rfloor .
\end{equation}
Let $x\in W_n$. We consider four cases.
First case: Suppose $\kappa_{2n-1} \kappa_{2n} = \rho_1^2$. We have
$\kappa_{2n-1} \kappa_{2n} (0) = 0110101$. Hence $x$ belongs to $T^k [
\kappa_1 \kappa_2 \cdots \kappa_{2n-2} (0110101)]$ for some $k$
satisfying (\ref{ineq3}).
We can write $\kappa_1
\kappa_2 \cdots \kappa_{2n-2} (1) = uv$, with $|v| \geq \lfloor | \kappa _{1}\cdots \kappa_{2n-2} (1)|/2 \rfloor $, in such a way that the word
$$
v \ \kappa_1 \kappa_2 \cdots \kappa_{2n-2} (0) \ uv \ \kappa_1 \kappa_2
\cdots \kappa_{2n-2} (0) \ uv
$$
is a prefix of $x$.
We set $k_n = | \kappa _{1}\cdots \kappa_{2n-2} (01)| + |v|$. We have
\begin{align*}
\frac{
\tau
\left(
\zeta_{k_n-1} (x)
\right)
}{k_n}
= &
\frac{| \kappa _{1}\cdots \kappa_{2n-2} (01)| }{k_n}
\leq
\frac{| \kappa _{1}\cdots \kappa_{2n-2} (01)| }{| \kappa _{1}\cdots \kappa_{2n-2} (01)| +
| \kappa _{1}\cdots \kappa_{2n-2} (1)|/2 -1 } \\
\leq &
\theta_1 =
\frac{2}{2+1/5} < 1.
\end{align*}
Second case: $\kappa_{2n-1} \kappa_{2n} = \gamma_1 \rho_1$. We have
$\kappa_{2n-1} \kappa_{2n} (0) = 1001010$. As previously we obtain that there exists $\theta_2 < 1$ such that
$$
\frac{
\tau
\left(
\zeta_{k_n-1} (x)
\right)
}
{k_n}
\leq
\theta_2 < 1 .
$$
Third case: $\kappa_{2n-1} \kappa_{2n} = \gamma_1 ^2$.
We have
$\kappa_{2n-1} \kappa_{2n} (0) = 10100100$.
We write $\kappa_1 \kappa_2 \cdots \kappa_{2n-2} (1) = uv$ such that
$v\kappa_1 \kappa_2 \cdots \kappa_{2n-2} (00) uv \kappa_1 \kappa_2
\cdots \kappa_{2n-2} (00) $ is a prefix of $x$ and
$|v| \geq \lfloor | \kappa _{1}\cdots \kappa_{2n-2} (1)|/2 \rfloor $.
But the images of 0 and 1 by $\kappa_{2n-1} \kappa_{2n} $ begin with the letter $1$. Furthermore the word
$$
v \ \kappa_1 \kappa_2 \cdots \kappa_{2n-2} (00) \ uv \ \kappa_1 \kappa_2 \cdots
\kappa_{2n-2} (00) \ uv
$$
is a prefix of $x$.
We set $l_n = | \kappa _{1}\cdots \kappa_{2n-2} (001)| + |v|$.
We have
\begin{align*}
\frac{
\tau
\left(
\zeta_{l_n-1} (x)
\right)}
{l_n} = &
\frac{| \kappa _{1}\cdots \kappa_{2n-2} (001)| }{l_n} \\
\leq &
\frac{| \kappa _{1}\cdots \kappa_{2n-2} (001)| }{| \kappa _{1}\cdots \kappa_{2n-2} (001)| +
| \kappa _{1}\cdots \kappa_{2n-2} (1)|/2 -1 }
\leq \theta_3 = \frac{2}{2+1/8} < 1.
\end{align*}
Fourth case: $\kappa_{2n-1} \kappa_{2n} = \rho_1 \gamma_1$. We have
$\kappa_{2n-1} \kappa_{2n} (0) = 01011011$. Proceeding as in the third case we obtain that there exists $\theta_4 < 1$ such that
$$
\frac{
\tau
\left(
\zeta_{m_n-1} (x)
\right)}
{m_n}
\leq
\theta_4<1 ,
$$
where $m_n = | \kappa _{1}\cdots \kappa_{2n-2} (011)| + |v|$.
To conclude we set $\theta = \max \{ \theta_1, \theta_2 , \theta_3 ,\theta_4 \}$ and
we remark we have
\begin{align*}
\mu (W_n)
\geq & \mu_n (0)
\frac{
|\kappa _{1}\cdots \kappa_{2n-2}(1)|
}
{
2|\kappa _{1}\cdots \kappa _{2n}(0)|
} \\
\geq &
\mu_n (0)
\frac{
|\kappa _{1}\cdots \kappa_{2n-2}(1)|}
{
5|\kappa _{1}\cdots \kappa _{2n-2}(0)|
+ 5|\kappa _{1}\cdots \kappa _{2n-2}(1)|
}
\geq
\frac{4}{25(3K+1)}.
\end{align*}
Hence $\mu (\cap_{j\in \mathbb N} \cup_{i\geq j}W_i) > 0$ and $\mu ( \{ x : \underline{\mathbb Ret}_\zeta(x) \leq \theta\}) > 0$.
\end{proof}
\begin{proof}[Proof of the statement \eqref{rotation}]
It suffices to remark that $[0] = \phi^{-1} ([0, 1-\alpha[ )$ and
$[1] = \phi^{-1} ([1-\alpha,1[ )$ (where $\phi$ is the
measure-theoretical isomorphism given in Subsection \ref{sturmsubshift}).
\end{proof}
\subsection{Proof of Theorem \protect{\ref{main}}}
If the system is not weakly mixing then it has a nontrivial
eigenvalue $\exp(2i\pi \alpha )$, therefore a measure-theoretical factor which is a
rotation: $([0,1[ , R_\alpha)$.
Furthermore, every measurable partition of the factor induces a
measurable partition of the original system.
If $\alpha$ is a rational number then clearly there exists
a non-trivial partition $\zeta$ of $[0,1[$ so that $\underline{\mathbb Ret}_\zeta(x)=0$
for $\mu$-almost every $x$.
If $\alpha$ is irrational then by Theorem \ref{bounded-case} there
is a non-trivial partition $\zeta$ of $[0,1 [$ such that
$\underline{\mathbb Ret}_\zeta(x) = 0$ for $\mu$-almost every $x$ if the coefficients
of the continued fraction of $\alpha$ are not bounded and
$0<\underline{\mathbb Ret}_\zeta(x)< 1$ for $\mu$-almost every $x$ otherwise. This ends the
proof.
$\qed$
{\bf Acknowledgments.}
We kindly acknowledge the CMM in Santiago,
Chile, for its support. We also thank V. Afraimovich for stimulating
suggestions during the workshop on Dynamics and Randomness
held in Santiago in 2000.
\end{document}
|
\begin{document}
\begin{abstract}
Density of Lipschitz functions in Newtonian spaces based on quasi-Banach function lattices is discussed. Newtonian spaces are first-order Sobolev-type spaces on abstract metric measure spaces defined via (weak) upper gradients. Our main focus lies on metric spaces with a doubling measure that support a $p$-Poincar\'e inequality. Absolute continuity of the function lattice quasi-norm is shown to be crucial for approximability by (locally) Lipschitz functions. The proof of the density result uses, among others, that a suitable maximal operator is locally weakly bounded. In particular, various sufficient conditions for such boundedness on rearrangement-invariant spaces are established and applied.
\end{abstract}
\title[Regularization of Newtonian functions via weak boundedness of maximal operators]{Regularization of Newtonian functions on metric spaces\\via weak boundedness of maximal operators}
\author{Luk\'{a}\v{s} Mal\'{y}}
\date{April 28, 2014}
\subjclass[2010]{Primary 46E35; Secondary 30L99, 42B25, 46E30.}
\keywords{Newtonian space, Sobolev-type space, metric measure space, upper gradient, Banach function lattice, rearrangement-invariant space, maximal operator, Lipschitz function, regularization, weak boundedness, density of Lipschitz functions}
\address{Department of Mathematics\\Link\"{o}ping University\\SE-581~83~Link\"{o}ping\\Sweden}
\address{Department of Mathematical Analysis\\Faculty of Mathematics and Physics\\Charles University in Prague\\Sokolovsk\'a 83\\CZ-186~75~Praha 8\\Czech Republic}
\email{[email protected]}
\thanks{The author was partly supported by The Matts Ess\'en Memorial Fund.}
\maketitle
\section{Introduction}
\label{sec:intro}
{\setlength{\parskip}{0pt plus 0.5ex minus 0.2ex}
Newtonian functions represent an analogue and a generalization of first-order Sobolev functions in metric measure spaces. The notion of a distributional gradient relies heavily on the linear structure of $\Rbb^n$, which is missing in the setting of metric spaces. In the Newtonian theory, the distributional gradients are replaced by the so-called upper gradients or weak upper gradients, which were originally introduced by Heinonen and Koskela~\cite{HeiKos0} and Koskela and MacManus~\cite{KosMac}, respectively. The foundations for the Newtonian spaces $N^{1,p}$, based on the $L^p$ norm of a function and its (weak) upper gradient (i.e., corresponding to the classical Sobolev spaces $W^{1,p}$) were laid by Shanmugalingam~\cite{Sha}. In the past two decades, various authors have developed the elements of the Newtonian theory based on other function norms, see e.g.\@~\cite{CosMir,HarHasPer,Pod,Tuo}. Most recently, general complete quasi-normed lattices of measurable functions were considered as the base function space in Mal\'y~\cite{Mal1,Mal2}.
For many applications of the classical Sobolev spaces, it is of utmost importance that smooth functions are dense and provide good approximations of Sobolev functions. On metric spaces, the notion of a derivative, and hence of a smooth function, is unavailable; nevertheless, we may consider regularity in terms of (local) Lipschitz continuity. Such a regularity condition has turned out to suffice in many cases, e.g., within non-linear potential theory, see Bj\"{o}rn and Bj\"{o}rn~\cite{BjoBjo}. It has been shown already in Shanmugalingam's work~\cite{Sha} that Lipschitz functions are dense in $N^{1,p}(\Pcal)$ provided that $\Pcal$ is endowed with a doubling measure and supports a $p$-Poincar\'e inequality (see Definition~\ref{df:pPI} below). Tuominen~\cite{Tuo} has proven a similar result for Orlicz--Newtonian spaces with doubling Young function, while replacing the $p$-Poincar\'e inequality by an Orlicz-type Poincar\'e inequality.
Costea and Miranda~\cite{CosMir} studied the density of Lipschitz functions in Newtonian spaces based on the Lorentz $L^{p,q}$ spaces, assuming that $\Pcal$ carries an $L^{p,q}$-Poincar\'e inequality. They managed to prove the density for $1 \le q \le p<\infty$ using the fact that a Lorentz-type maximal operator is bounded from $L^{p,q}$ to $L^{p, \infty}$. They also found a counterexample for $1<p<q=\infty$. The case when $1\le p<q<\infty$ was however left open. Similar results were obtained earlier by Podbrdsk\'{y}~\cite{Pod} considering a more general setting of Banach space valued Lorentz functions, where the case $1\le p<q<\infty$ was not solved either.
It is known that Poincar\'e inequality is not a necessary condition to obtain the desired density. Using tools from optimal transportation theory, Ambrosio, Gigli and Savar\'e~\cite{AmbGigSav} argued that Lipschitz functions are dense in $N^{1,p}(\Pcal)$ for $p\in (1, \infty)$ if $\Pcal$ is compact and endowed with a doubling metric. Norm convergence of the sequence of approximating Lipschitz functions follows from reflexivity of $N^{1,p}(\Pcal)$, which was in that setting shown by Ambrosio, Colombo and Di~Marino~\cite{AmbColDiM}.
The current paper studies the question of density in situations when the base function space is a quasi-Banach function lattice with absolutely continuous quasi-norm. First, we provide a general theorem, where Newtonian and Haj\l asz's theory of Sobolev-type spaces on metric measure spaces are intertwined. There, we do not need to assume that $\Pcal$ carries any Poincar\'e inequality and the measure need not be doubling. The Haj\l asz gradient is however required to satisfy a weak type norm estimate. The connection between (weak) upper and Haj\l asz gradients is then established via the fractional sharp maximal operator and a $p$-Poincar\'e inequality. This leads to the assumption that a maximal operator of Hardy--Littlewood type (corresponding to the right-hand side of the $p$-Poincar\'e inequality supported by $\Pcal$) is weakly bounded on the function lattice. In particular, the open case in Lorentz--Newtonian spaces is settled with an affirmative answer. The presented results also extend the theory of Lipschitz truncations in variable exponent Newtonian spaces by Harjulehto, H\"ast\"o and Pere~\cite{HarHasPer} since we allow the infimum of the exponent to~be~$1$.
To determine whether Newtonian functions may be approximated by bounded functions is one of the steps towards the desired results. We will see that the absolute continuity of the function norm on sets of finite measure plays a vital role, which will help us with construction of examples where bounded functions are not dense in the quasi-Banach function lattice, whence neither are (locally) Lipschitz continuous functions in the corresponding Newtonian space.
One of the aims of the paper is to provide rather general theorems on the density of Lipschitz functions in Newtonian spaces with tangible hypotheses. Therefore, we also study when the suitable maximal operators are weakly bounded on sets of finite measure. We are particularly interested in their boundedness on rearrangement-invariant spaces and in its characterization in terms of the properties of the fundamental function.
We will prove that Lipschitz functions are dense in every Newtonian space based on a rearrangement-invariant space with absolutely continuous norm provided that $\Pcal$ supports a $1$-Poincar\'e inequality. If $\Pcal$ carries merely a $p$-Poincar\'e inequality with $p>1$, then it suffices, besides absolute continuity of the norm, that the upper fundamental (Zippin) or the upper Boyd index is less than $1/p$. Moreover, if $\Pcal$ is complete, then the indices may be equal to $1/p$. More generally, one can instead assume that $t \mapsto \phi(t)^p \fint_0^t \phi(s)^{-p}\,ds$ is bounded in a small neighborhood of $0$, where $\phi$ is the fundamental function of $X$.
If the Newtonian space is trivial, i.e., equal to the base function lattice, then the situation is much simpler. Regardless of the doubling condition of $\mu$, we give a general characterization of this triviality in terms of properties of the Sobolev capacity and of the $X$-modulus of a family of curves. In particular, we will see that the Newtonian space coincides with the base function lattice as sets if and only if their quasi-norms are equal. Such a characterization seems to be new even in the setting of the well-studied spaces $N^{1,p}$ that are built upon $L^p$. If a trivial Newtonian space is based on a Banach function space, then Lipschitz functions are dense whenever the norm is absolutely continuous.
The structure of the paper is the following. Section~\ref{sec:prelim} provides an overview of the used notation and preliminaries in the area of quasi-Banach function lattices and Newtonian spaces. Moreover, the characterization of triviality of a Newtonian space is given here. In Section~\ref{sec:trunc}, we study the density of truncated functions. Then, we obtain the general form of the main theorem about density of Lipschitz functions in Newtonian spaces in Section~\ref{sec:LipDensGeneral}, using the connection between Haj\l asz gradients, fractional sharp maximal operators and (weak) upper gradients. Rearrangement-invariant spaces lie in the focus of Section~\ref{sec:rispaces}. There, we also present a certain type of function spaces that will serve as counterexamples, where Newtonian functions cannot be approximated by Lipschitz functions. In Section~\ref{sec:weaktype}, we study maximal operators, with particular attention aimed at the weak boundedness in the setting of rearrangement-invariant spaces. Finally, Section~\ref{sec:LipDensSpec} contains various concretizations of the main result of the paper, giving sufficient conditions for Lipschitz functions to be dense in the Newtonian space.
}
\section{Preliminaries}
\label{sec:prelim}
We assume throughout the paper that $\Pcal = (\Pcal, \dd, \mu)$ is a metric measure space equipped with a metric $\dd$ and a $\sigma$-finite Borel regular measure $\mu$ such that every ball in $\Pcal$ has finite positive measure. In our context, Borel regularity means that all Borel sets in $\Pcal$ are $\mu$-measurable and for each $\mu$-measurable set $A$ there is a Borel set $D\supset A$ such that $\meas{D} = \meas{A}$. Since $\mu$ is Borel regular and $\Pcal$ can be decomposed into countably many (possibly overlapping) open sets of finite measure, it is outer regular, see Mattila~\cite[Theorem 1.10]{Mat}.
The open ball centered at $x\in \Pcal$ with radius $r>0$ will be denoted by $B(x,r)$. Given a ball $B=B(x,r)$ and a scalar $\lambda > 0$, we let $\lambda B = B(x,\lambda r)$. We say that $\mu$ is a \emph{doubling} measure, if there is a constant $c_{\dbl}\ge1$ such that $\meas{2B} \le c_{\dbl} \meas{B}$ for every ball $B$. In Sections~\ref{sec:prelim} and~\ref{sec:trunc}, unlike in the rest of the paper, we will not assume that $\mu$ is doubling or non-atomic.
Let $\Mcal(\Pcal, \mu)$ denote the set of all extended real-valued $\mu$-measurable functions on $\Pcal$. The set of extended real numbers, $\Rbb \cup \{\pm \infty\}$, will be denoted by $\overline{\Rbb}$. We will also use $\Rbb^+$, which denotes the set of positive real numbers, i.e., the interval $(0, \infty)$. The symbol $\Nbb$ will denote the set of positive integers, i.e., $\{1,2, \ldots\}$. We define the \emph{integral mean} of a measurable function $u$ over a set $E$ of finite positive measure as
\[
u_E \coloneq \fint_E u\,d\mu = \frac{1}{\mu(E)} \int_E u\,d\mu,
\]
whenever the integral on the right-hand side exists, not necessarily finite though. The characteristic function of a set $E$ will be denoted by $\chi_E$. Given an extended real-valued function $u: \Pcal \to \overline{\Rbb}$ and a real number $\sigma\ge 0$, we define $\suplev{u}{\sigma}$ as the superlevel set $\{x\in \Pcal: |u(x)| > \sigma\}$.
The notation $L \lesssim R$ will be used to express that there exists a constant $c>0$, perhaps dependent on other constants within the context, such that $ L \le cR$. If $L \lesssim R$ and simultaneously $R \lesssim L$, then we will simply write $L \approx R$ and say that the quantities $L$ and $R$ are \emph{comparable}. The words \emph{increasing} and \emph{decreasing} will be used in their non-strict sense.
A linear space $X = X(\Pcal, \mu)$ of equivalence classes of functions in $\Mcal(\Pcal, \mu)$ is said to be a \emph{quasi-Banach function lattice} over $(\Pcal, \mu)$ equipped with the quasi-norm $\|\cdot\|_X$ if the following axioms hold:
\begin{enumerate}
\renewcommand{(\roman{enumi})}{(P\arabic{enumi})}
\setcounter{enumi}{-1}
\item \label{df:qBFL.initial} $\|\cdot\|_X$ determines the set $X$, i.e., $X = \{u\in \Mcal(\Pcal, \mu)\colon \|u\|_X < \infty\}$;
\item \label{df:qBFL.quasinorm} $\|\cdot\|_X$ is a \emph{quasi-norm}, i.e.,
\begin{itemize}
\item $\|u\|_X = 0$ if and only if $u=0$ a.e.,
\item $\|au\|_X = |a|\,\|u\|_X$ for every $a\in\Rbb$ and $u\in\Mcal(\Pcal, \mu)$,
\item there is a constant $\cconc \ge 1$, the so-called \emph{modulus of concavity}, such that $\|u+v\|_X \le \cconc(\|u\|_X+\|v\|_X)$ for all $u,v \in \Mcal(\Pcal, \mu)$;
\end{itemize}
\item \label{df:BFL.latticeprop} $\|\cdot\|_X$ satisfies the \emph{lattice property}, i.e., if $|u|\le|v|$ a.e., then $\|u\|_X\le\|v\|_X$;
\renewcommand{(\roman{enumi})}{(RF)}
\item \label{df:qBFL.RF} $\|\cdot\|_X$ satisfies the \emph{Riesz--Fischer property}, i.e., if $u_n\ge 0$ a.e.\@ for all $n\in\Nbb$, then $\bigl\|\sum_{n=1}^\infty u_n \bigr\|_X \le \sum_{n=1}^\infty \cconc^n \|u_n\|_X$, where $\cconc\ge 1$ is the modulus of concavity. Note that the function $\sum_{n=1}^\infty u_n$ needs to be understood as a pointwise (a.e.\@) sum.
\end{enumerate}
Observe that $X$ contains only functions that are finite a.e., which follows from~\ref{df:qBFL.initial}--\ref{df:BFL.latticeprop}. In other words, if $\|u\|_X<\infty$, then $|u|<\infty$ a.e.
Throughout the paper, we will also assume that the quasi-norm $\|\cdot\|_X$ is \emph{continuous}, i.e., if $\|u_n - u\|_X \to 0$ as $n\to\infty$, then $\|u_n\|_X \to \|u\|_X$. We do not lose any generality by this assumption as the Aoki--Rolewicz theorem (see Benyamini and Lindenstrauss~\cite[Proposition H.2]{BenLin} or Maligranda~\cite[Theorem~1.2]{Mali}) implies that there is always an equivalent quasi-norm that is an $r$-norm, i.e., it satisfies
\[
\|u + v\|^r \le \|u\|^r + \|v\|^r,
\]
where $r = 1/(1+ \log_2 \cconc) \in (0, 1]$, which implies the continuity. The theorem's proof shows that such an equivalent quasi-norm retains the lattice property~\ref{df:BFL.latticeprop}.
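A concrete instance of the Aoki--Rolewicz exponent, recorded here for orientation (this example is ours, not part of the original text): for $X = L^p$ with $0<p<1$, the modulus of concavity is $\cconc = 2^{1/p-1}$, so $r = 1/(1+\log_2 \cconc) = p$, and the quasi-norm is itself already a $p$-norm:

```latex
\[
  \|u + v\|_{L^p}^p \le \|u\|_{L^p}^p + \|v\|_{L^p}^p,
  \qquad 0 < p < 1,
\]
```

which follows by integrating the pointwise inequality $|a+b|^p \le |a|^p + |b|^p$.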
It is worth noting that the Riesz--Fischer property is actually equivalent to the completeness of the quasi-normed space $X$, given that the conditions \ref{df:qBFL.initial}--\ref{df:BFL.latticeprop} are satisfied and that the quasi-norm is continuous, see Maligranda~\cite[Theorem 1.1]{Mali}.
If $\cconc = 1$, then the functional $\| \cdot \|_X$ is a norm. We then drop the prefix \emph{quasi} and hence call $X$ a \emph{Banach function lattice}.
A (quasi)Banach function lattice $X = X(\Pcal, \mu)$ is called a \emph{(quasi)Banach function space} over $(\Pcal, \mu)$ if the following axioms are satisfied as well:
\begin{enumerate}
\renewcommand{(\roman{enumi})}{(P\arabic{enumi})}
\setcounter{enumi}{2}
\item $\|\cdot\|_X$ satisfies the \emph{Fatou property}, i.e., if $0\le u_n \upto u$ a.e., then $\|u_n\|_X\upto\|u\|_X$;
\item \label{df:BFS.finmeasfinnorm} if a measurable set $E \subset \Pcal$ has finite measure, then $\|\chi_E\|_X < \infty$;
{
\item for every measurable set $E\subset \Pcal$ with $\meas{E}<\infty$ there is $C_E>0$ such that $\int_E |u|\,d\mu \le C_E \|u\|_X$ for every measurable function $u$.
\label{df:BFL.locL1}
}
\end{enumerate}
Note that the Fatou property implies the Riesz--Fischer property. Axiom~\ref{df:BFS.finmeasfinnorm} is equivalent to the condition that $X$ contains all simple functions (with support of finite measure). Due to the lattice property~\ref{df:BFL.latticeprop}, we can equivalently characterize~\ref{df:BFS.finmeasfinnorm} as the embedding of $L^\infty(\Pcal, \mu)$ into $X$ on sets of finite measure. Finally, condition~\ref{df:BFL.locL1} states that $X$ is embedded into $L^1(\Pcal, \mu)$ on sets of finite measure.
In the further text, we will slightly deviate from this rather usual definition of (quasi)\allowbreak{}Banach function lattices and spaces. Namely, we will consider $X$ to be a linear space of functions defined everywhere instead of equivalence classes defined a.e. Then, the functional $\|\cdot\|_X$ is really only a (quasi)seminorm. Unless explicitly stated otherwise, we will always assume that $X$ is a quasi-Banach function lattice.
The quasi-norm $\| \cdot \|_X$ in a quasi-Banach function lattice $X$ is \emph{absolutely continuous} if every $u \in X$ satisfies the condition
\begin{enumerate}
\renewcommand{(\roman{enumi})}{(AC)}
\item \label{df:AC}
$\| u \chi_{E_n} \|_X \to 0$ as $n\to\infty$ whenever $\{E_n\}_{n=1}^\infty$ is a decreasing sequence of measurable sets with $\meas{\bigcap_{n=1}^\infty E_n} = 0$.
\end{enumerate}
It follows from the dominated convergence theorem that the $L^p$ norm is absolutely continuous for $p\in (0, \infty)$. On the other hand, $L^\infty$ lacks this property apart from a few exceptional cases. For example, if $\mu$ is atomic, $0<\delta\le \meas{A}$ for every atom $A\subset \Pcal$, and $\meas{\Pcal}<\infty$, then every quasi-Banach function lattice has absolutely continuous quasi-norm since the condition $\meas{\bigcap_{n=1}^\infty E_n} = 0$ implies that there is $n_0 \in \Nbb$ such that $E_n=\emptyset$ for all $n\ge n_0$. However, atomic measures lie outside of the main scope of our interest.
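The failure of~\ref{df:AC} for $L^\infty$ is visible already in the simplest non-atomic setting; the following one-line illustration is ours, not the author's:

```latex
\[
  u = \chi_{(0,1)} \in L^\infty(\Rbb), \qquad
  E_n = \Bigl(0, \frac{1}{n}\Bigr), \qquad
  \Bigl|\bigcap_{n=1}^\infty E_n\Bigr| = 0,
  \quad\text{yet}\quad
  \|u \chi_{E_n}\|_{L^\infty} = 1 \not\to 0.
\]
```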
\begin{df}
For $p \in [1, \infty)$, the \emph{non-centered maximal operator} $M_p$ is defined by
\[
M_p u(x) = \sup_{B\ni x} \biggl(\fint_B |u|^p\,d\nu\biggr)^{1/p}, \quad x\in \Rcal,
\]
where $(\Rcal, \nu)$ is a given metric measure space and $u \in \Mcal(\Rcal, \nu)$.
We also define the superlevel set
\[
\suplevp{p}{u}{\sigma} \coloneq \suplev{M_pu}{\sigma} = \{ x\in \Rcal: M_p u(x) > \sigma \} \quad \mbox{for }\sigma \ge 0.
\]
\end{df}
\begin{rem}
We will use either $(\Rcal, \nu) = (\Rbb^+, \lambda^1)$ or $(\Rcal, \nu) = (\Pcal, \mu)$ depending on the context, yet without any explicit indication of which of the cases applies at the moment. Since $M_p u= (M_1 |u|^p)^{1/p}$, we obtain that $M_p: L^p \to L^{p,\infty}$ is bounded due to the weak-$L^1$ boundedness of $M_1$ on doubling spaces (see e.g.\@ Coifman and Weiss~\cite[Theorem III.2.1]{CoiWei}). Obviously, $M_p: L^\infty \to L^\infty$ is also bounded.
\end{rem}
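Spelled out, the weak boundedness $M_p: L^p \to L^{p,\infty}$ invoked in the remark above amounts to the estimate (standard, stated here for later reference):

```latex
\[
  \nu\bigl(\suplevp{p}{u}{\sigma}\bigr)
  = \nu\bigl(\{x \in \Rcal : M_p u(x) > \sigma\}\bigr)
  \le \frac{C}{\sigma^p}\, \|u\|_{L^p(\Rcal,\nu)}^p
  \qquad\text{for all } \sigma > 0,
\]
```

where $C$ depends only on the doubling constant of $\nu$.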
Given a function lattice $X$, we also define a ``local'' space $X_\fm$ that consists of measurable functions whose restrictions to sets of finite measure belong to $X$, i.e., $u\in X_\fm$ if $u \chi_E \in X$ for every measurable set $E$ with $\meas{E}<\infty$. If $\meas{\Pcal} < \infty$, then obviously $X_\fm = X$.
We say that a (sub)linear mapping $T: X(\Pcal, \mu) \to Y_\fm(\Rcal, \nu)$ is bounded, if for every $E\subset \Rcal$ with $\nu(E)<\infty$ there is $c_E>0$ such that $\|(Tu) \chi_E\|_Y \le c_E \|u\|_X$ whenever $u\in X$. It might actually happen that $Tu \notin Y$ even though $u\in X$. If $\nu(\Rcal)<\infty$, then $T: X \to Y_\fm$ is bounded if and only if $T: X\to Y$ is bounded. We will also say that $X$ is \emph{continuously embedded} in $Y_\fm$, which will be denoted by $X \emb Y_\fm$, if the identity mapping $\Id: X(\Pcal, \mu) \to Y_\fm(\Pcal, \mu)$ is bounded.
By a \emph{curve} in $\Pcal$ we will mean a non-constant continuous mapping $\gamma: I\to \Pcal$ with finite total variation (i.e., length of $\gamma(I)$), where $I \subset \Rbb$ is a compact interval. Thus, a curve can be (and we will always assume that all curves are) parametrized by arc length $ds$, see e.g.\@ Heinonen~\cite[Section~7.1]{Hei}. Note that every curve is Lipschitz continuous with respect to its arc length parametrization. The family of all non-constant rectifiable curves in $\Pcal$ will be denoted by $\Gamma(\Pcal)$. By abuse of notation, the image of a curve $\gamma$ will also be denoted by $\gamma$.
A statement holds for \emph{$\Mod_X$-a.e.\@} curve $\gamma$ if the family of exceptional curves $\Gamma_e$, for which the statement fails, has \emph{zero $X$-modulus}, i.e., if there is a Borel function $\rho \in X$ such that $\int_\gamma \rho\,ds = \infty$ for every curve $\gamma \in \Gamma_e$ (see Mal\'{y}~\cite[Proposition 4.8]{Mal1}).
\begin{df}
\label{df:ug}
Let $u: \Pcal \to \overline{\Rbb}$. Then, a Borel function $g: \Pcal \to [0, \infty]$ is an \emph{upper gradient} of $u$ if
\begin{equation}
\label{eq:ug_def}
|u(\gamma(0)) - u(\gamma(l_\gamma))| \le \int_\gamma g\,ds
\end{equation}
for every curve $\gamma: [0, l_\gamma]\to\Pcal$. To make the notation easier, we are using the convention that $|(\pm\infty)-(\pm\infty)|=\infty$. If we allow $g$ to be a measurable function and \eqref{eq:ug_def} to hold only for $\Mod_X$-a.e.\@ curve $\gamma: [0, l_\gamma]\to\Pcal$, then $g$ is an \emph{$X$-weak upper gradient}.
\end{df}
Observe that the ($X$-weak) upper gradients are by no means given uniquely. Indeed, if we have a function $u$ with an ($X$-weak) upper gradient $g$, then $g+h$ is another ($X$-weak) upper gradient of $u$ whenever $h\ge0$ is a Borel (measurable) function.
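A one-dimensional illustration of both the definition and the non-uniqueness, supplied by us for concreteness: let $\Pcal = \Rbb$ with the Euclidean metric and $u(x) = x$. For every curve $\gamma: [0, l_\gamma] \to \Rbb$ parametrized by arc length,

```latex
\[
  |u(\gamma(0)) - u(\gamma(l_\gamma))|
  = |\gamma(0) - \gamma(l_\gamma)|
  \le l_\gamma
  = \int_\gamma 1\,ds,
\]
```

so $g \equiv 1$ is an upper gradient of $u$, and by the observation above so is $1 + h$ for any Borel function $h \ge 0$.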
\begin{df}
Whenever $u\in \Mcal(\Pcal, \mu)$, let
\begin{equation}
\label{eq:def-N1X-norm}
\|u\|_{\NX} = \| u \|_X + \inf_g \|g\|_X,
\end{equation}
where the infimum is taken over all upper gradients $g$ of $u$. The \emph{Newtonian space} based on $X$ is the space
\[
\NX = \NX (\Pcal, \mu)= \{u\in\Mcal(\Pcal, \mu): \|u\|_{\NX} <\infty \}.
\]
\end{df}
Let us point out that we assume that functions are defined everywhere, and not just up to equivalence classes $\mu$-almost everywhere. This is essential for the notion of upper gradients since they are defined by a pointwise inequality.
The functional $\| \cdot \|_\NX$ is a quasi-seminorm on $\NX$ and its modulus of concavity equals the modulus $\cconc$ of the base function space $X$. We may very well take the infimum over all $X$-weak upper gradients $g$ of $u$ in \eqref{eq:def-N1X-norm} without changing the value of the Newtonian quasi-seminorm. Moreover, $\NX$ is complete (see~\cite[Theorem~7.1]{Mal1}).
The \emph{(Sobolev) capacity}, defined as $C_X(E) = \inf\{\|u\|_\NX: u\ge \chi_E\}$ for $E\subset \Pcal$, is a set function that distinguishes which sets do not carry any information about a Newtonian function and thus are negligible. The natural equivalence classes in $\NX$ are given by equality outside of sets of zero capacity. These as well as other basic properties of Newtonian functions have been established in~\cite{Mal1}.
It has been shown in~\cite{Mal2} that the infimum in \eqref{eq:def-N1X-norm} is attained for functions in $\NX$ by a \emph{minimal $X$-weak upper gradient}. Such an $X$-weak upper gradient is minimal both normwise and pointwise (a.e.\@) among all ($X$-weak) upper gradients in $X$, whence it is given uniquely up to equality a.e.
The following lemma provides us with several equivalent conditions that describe triviality of the Newtonian space in the sense that $\NX = X$. Such a characterization seems to be new even for the well-studied spaces $N^{1,p}\coloneq N^1L^p$.
\begin{lem}
\label{lem:NX=X}
The following are equivalent:
\begin{enumerate}
\item \label{it:NX=X-sets} $\NX = X$ as sets of functions;
\item \label{it:NX=X-cap} $C_X(E) = 0$ if and only if $\mu(E) = 0$, where $E\subset \Pcal$;
\item \label{it:NX=X-mod} $\Mod_X(\Gamma(\Pcal)) = 0$;
\item \label{it:NX=X-norms} $\|u\|_\NX = \|u\|_X$ for every $u\in\Mcal(\Pcal, \mu)$.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{it:NX=X-sets} $\Rightarrow$~\ref{it:NX=X-cap} Let $E\subset \Pcal$ satisfy $\mu(E) = 0$. Then, $u = \infty \chi_E$ belongs to $X$ since $\|u\|_X = \|0\|_X = 0$. Hence, $u\in \NX$. Thus, $C_X(E) = C_X(\{x\in \Pcal: |u(x)|=\infty\}) = 0$ by~\cite[Proposition 3.6]{Mal1}.
\ref{it:NX=X-cap} $\Rightarrow$~\ref{it:NX=X-mod} Let $\{x_n \in \Pcal: n\in\Nbb\}$ be a dense subset of $\Pcal$. For each $n\in\Nbb$, we can find a set of radii $\{r_{n,k}>0: k\in\Nbb\}$, dense in $(0, \infty)$, such that the spheres $S(x_n, r_{n,k}) = \{z\in \Pcal: \dd(x_n, z) = r_{n,k}\}$ satisfy $\meas{S(x_n, r_{n,k})} = 0$ for every $k\in\Nbb$. Such a set of radii indeed exists since at most countably many spheres centered at $x_n$ can have positive measure; otherwise, some ball centered at $x_n$ would fail to have finite measure.
Let $E_n = \bigcup_{k=1}^\infty S(x_n, r_{n,k})$. Then, $\meas{E_n} = 0 = C_X(E_n)$. Therefore, $\Mod_X(\Gamma_{E_n}) = 0$ by~\cite[Proposition~5.10]{Mal1}, where $\Gamma_{E_n} = \{\gamma\in\Gamma(\Pcal): \gamma^{-1}(E_n) \neq \emptyset\}$. Let
\[
\Gamma_n = \{ \gamma \in \Gamma(\Pcal): \dd(x_n, \gamma(t_1)) \neq \dd(x_n, \gamma(t_2))\mbox{ for some }0\le t_1 < t_2 \le l_\gamma \}.
\]
Then, $\Gamma_n \subset \Gamma_{E_n}$ and hence $\Mod_X(\Gamma_n) \le \Mod_X(\Gamma_{E_n}) = 0$. As there are no (non-constant) curves that have a constant distance from all points $x_n$, $n\in\Nbb$, we obtain that
\[
\Mod_X(\Gamma(\Pcal)) = \Mod_X \biggl(\bigcup_{n=1}^\infty \Gamma_n\biggr) = 0.
\]
\ref{it:NX=X-mod} $\Rightarrow$~\ref{it:NX=X-norms} Since \eqref{eq:ug_def} is allowed to fail for every curve $\gamma \in \Gamma(\Pcal)$, $g \equiv 0$ is an \mbox{$X$-}weak upper gradient of every measurable function $u\in\Mcal(\Pcal, \mu)$, whence $\|u\|_X \le \|u\|_\NX \le \|u\|_X + \|0\|_X = \|u\|_X$.
\ref{it:NX=X-norms} $\Rightarrow$~\ref{it:NX=X-sets} If the quasi-norms are equal, then $X = \{ u\in \Mcal(\Pcal, \mu): \|u\|_X < \infty\} = \{ u\in X: \|u\|_\NX < \infty\} = \NX$.
\end{proof}
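A standard situation in which condition~\ref{it:NX=X-mod} holds vacuously, added here as an illustration (not from the original text):

```latex
\[
  \Gamma(\Pcal) = \emptyset
  \;\Longrightarrow\;
  \Mod_X(\Gamma(\Pcal)) = 0
  \;\Longrightarrow\;
  \NX = X.
\]
```

This occurs, e.g., for the snowflaked line $(\Rbb, |x-y|^{1/2})$, where every non-constant curve has infinite length, whence there are no rectifiable curves at all.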
In the next proposition, we demonstrate that the density of Lipschitz functions relies only on the properties of $X$ whenever the Newtonian space is trivial.
\begin{pro}
\label{pro:NX=X-dens}
Let $X$ be a Banach function space with absolutely continuous norm, i.e., it satisfies~\ref{df:qBFL.initial}--\ref{df:BFL.locL1} and~\ref{df:AC}. Suppose that $\NX = X$. Then, Lipschitz functions are dense in $\NX$.
\end{pro}
\begin{proof}
Simple functions are dense in $X$ by~\cite[Theorem~I.3.11]{BenSha}.
Let $E\subset \Pcal$ be a measurable set of finite measure and $\eps>0$ be arbitrary. Then, there exists a bounded set $E_b \subset E$ such that $\meas{E\setminus E_b} < \eps$. Let $\itoverline{B}\subset \Pcal$ be a closed ball that contains $E_b$. By outer regularity of $\mu$, there is an open set $G \supset \itoverline{B} \setminus E_b$ such that $\meas{G \cap E_b} \le \meas{G \setminus (\itoverline{B} \setminus E_b)} < \eps$. Let $F = \itoverline{B} \setminus G$. Then, $F$ is closed in $\itoverline{B}$ and hence in $\Pcal$, and $\meas{E_b \setminus F} = \meas{E_b \cap G}< \eps$. Thus, $\meas{E \setminus F} < 2\eps$.
Therefore, for every measurable $E \subset \Pcal$ of finite measure, there is a bounded closed set $F \subset E$ such that $\|\chi_{E\setminus F}\|_X$ is arbitrarily small by the absolute continuity of the norm. For such a set $F$, we define $\eta_k(x) = (1- k \dist(x, F))^+$, $x\in \Pcal$, $k\in\Nbb$. Then, $\eta_k$ has bounded support, whence $\eta_k \in X$. The function $\eta_k$ is $k$-Lipschitz, and $\eta_k \to \chi_F$ a.e.\@ in $\Pcal$ as $k\to\infty$. By the dominated convergence theorem (which follows from the absolute continuity, see~\cite[Proposition I.3.6]{BenSha}), we obtain $\eta_k \to \chi_F$ in $X$ as $k\to\infty$. Therefore, every simple function can be approximated in the norm of $X$ by Lipschitz functions.
Consequently, every $u\in X=\NX$ can be approximated in the norm of $\NX$ by Lipschitz functions since the norms of $X$ and $\NX$ are equal by Lemma~\ref{lem:NX=X}.
\end{proof}
\begin{df}
\label{df:pPI}
We say that $\Pcal$ supports a \emph{$p$-Poincar\'{e} inequality} or, for the sake of brevity, that $\Pcal$ is a \emph{$p$-Poincar\'{e} space} if there exist constants $c_{\PI} > 0$ and $\lambda \ge 1$ such that for all balls $B\subset \Pcal$, for all $u\in L^1_\loc(\Pcal)$ and all upper gradients $g$ of $u$,
\begin{equation}
\label{eq:pPI}
\fint_B |u-u_B|\,d\mu \le c_{\PI} \diam (B) \Biggl( \fint_{\lambda B} g^p\,d\mu\Biggr)^{1/p},
\end{equation}
where $u_B = \fint_B u\,d\mu$.
\end{df}
This form of the inequality is sometimes called a \emph{weak $p$-Poincar\'{e} inequality}. The word ``weak'' indicates that the dilation factor $\lambda$ is allowed to be greater than $1$. Note also that it follows by~\cite[Lemma 5.6]{Mal1} that we may equivalently require that the inequality holds for all $p$-weak upper gradients $g$ of $u$ and, in particular, for all $X$-weak upper gradients $g$ of $u$ if $X \emb L^p_\loc$, i.e., if $\|f \chi_B\|_{L^p} \le c_B \|f\chi_B\|_X$ for all balls $B\subset \Pcal$. There are several other characterizations in~\cite[Proposition 4.13]{BjoBjo}, e.g., we may require that \eqref{eq:pPI} holds only for $u \in L^\infty$, or conversely that it holds for all measurable functions $u$ if the left-hand side is interpreted as $\infty$ whenever $u\chi_B \notin L^1$.
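For orientation (our remark, not the author's): the model case of Definition~\ref{df:pPI} is $\Rbb^n$ with the Lebesgue measure, which supports a $1$-Poincar\'e inequality with $\lambda = 1$: for every locally Lipschitz function $u$,

```latex
\[
  \fint_B |u - u_B|\,dx \le c_n \diam(B) \fint_B |\nabla u|\,dx,
\]
```

and $|\nabla u|$ serves as an upper gradient of $u$.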
\section{Approximation by bounded functions}
\label{sec:trunc}
In this section, we will determine a set of sufficient conditions ensuring that truncated functions provide a good approximation of Newtonian functions. This is an important step towards studying the density of Lipschitz functions, as these are bounded on bounded sets. In Section~\ref{sec:rispaces}, we will find a certain type of function spaces in which the truncations are not dense, which will later lead us to constructing examples where (locally) Lipschitz functions are not dense in the Newtonian space.
\begin{lem}
\label{lem:trunc_dense_AC}
Let $X$ be a quasi-Banach function lattice with absolutely continuous quasi-norm. Then every function $u \in X$ can be approximated by its truncations with arbitrary precision in the norm of $X$, i.e., if we define $u_\sigma \coloneq \max \{ \min \{u, \sigma\}, -\sigma\}$ for $\sigma\in\Rbb^+$, then $u_\sigma \to u$ in $X$ as $\sigma\to\infty$.
\end{lem}
Recall that $\suplev{u}{\sigma}$ denotes the superlevel set of $|u|$ with level $\sigma \ge 0$, i.e., $\suplev{u}{\sigma} = \{x\in \Pcal: |u(x)| > \sigma\}$.
\begin{proof}
Let $u\in X$ and let $u_\sigma$ be its truncations at the levels $\pm \sigma$ for every $\sigma \in \Rbb^+$. Then, $u-u_\sigma = 0$ on $\Pcal \setminus \suplev{u}{\sigma}$. Since $|u|<\infty$ a.e.\@ in $\Pcal$, we have that $\meas{\bigcap_{\sigma>0} \suplev{u}{\sigma}} = 0$. The absolute continuity of the quasi-norm of $X$ implies that
\[
\| u - u_\sigma\|_X =\| (|u|-\sigma) \chi_{\suplev{u}{\sigma}}\|_X \le \| u \chi_{\suplev{u}{\sigma}} \|_X \to 0\quad\mbox{as }\sigma\to\infty.
\qedhere
\]
\end{proof}
The following lemma shows that the measure of the superlevel sets of an $L^p_\fm$ function is finite if the level is chosen sufficiently large. In fact, it tends to zero as the level approaches infinity.
\begin{lem}
\label{lem:superlevelsets-to-zero}
Let $u \in L^p_\fm$ for some $p>0$. Suppose further that $\mu$ is non-atomic. Then, $\mu(\suplev{u}{\sigma}) \to 0$ as $\sigma \to \infty$.
\end{lem}
\begin{proof}
Since $|u|<\infty$ a.e., we obtain that $\meas{\bigcap_{\sigma>0} \suplev{u}{\sigma}} = 0$. If we show that $\meas{\suplev{u}{\sigma}} < \infty$ for some $\sigma>0$, then $\meas{\bigcap_{\sigma>0} \suplev{u}{\sigma}} = \lim_{\sigma \to \infty} \meas{\suplev{u}{\sigma}}$.
Suppose on the contrary that $\meas{\suplev{u}{\sigma}} = \infty$ for every $\sigma>0$. Then, we can construct a set $F$ of finite measure such that $u\chi_F\notin L^p$ as follows. Let us choose a sequence of pairwise disjoint sets $F_k$, where $\meas{F_k} = 1/k^2$ and $F_k \subset \suplev{u}{k^{1/p}}$. Let now $F = \bigcup_{k=1}^\infty F_k$. Then, $\meas{F}<\infty$, but
\[
\| u \chi_F \|_{L^p}^p \ge \sum_{k=1}^\infty k \meas{F_k} = \sum_{k=1}^\infty \frac{1}{k} = \infty,
\]
whence $u\chi_F \notin L^p$, which contradicts the assumption that $u \in L^p_\fm$.
\end{proof}
In order to investigate whether truncated functions are good approximations in Newtonian spaces, we need to check how truncation affects weak upper gradients. The following auxiliary lemma will help us settle this problem as the gradient may be modified so that it vanishes on a given level set of a Newtonian function.
\begin{lem}
\label{lem:glueing_lemma}
Let $X$ be a quasi-Banach function lattice. Suppose that $u\in\NX$ with an $X$-weak upper gradient $g\in X$. Given a constant $k\in\Rbb$, define $E=\{x\in\Pcal: u(x) = k\}$. Then, $g \chi_{\Pcal \setminus E}$ is an $X$-weak upper gradient of $u$ as well.
\end{lem}
\begin{proof}
Let $\tilde{g}$ be a Borel representative of $g$. We will show that $\tilde{g} \chi_{\Pcal\setminus E}$ is an $X$-weak upper gradient of $u$ and hence so is $g \chi_{\Pcal\setminus E}$ by~\cite[Lemma~4.10]{Mal1}. For $\Mod_X$-a.e.\@ curve~$\gamma$ we have that $u$ is absolutely continuous on $\gamma$ by~\cite[Theorem 6.7]{Mal1} and $\tilde{g}$ satisfies \eqref{eq:ug_def} for every subcurve $\gamma' = \gamma|_I$ by~\cite[Corollary 5.9]{Mal1}, where $I\subset [0, l_\gamma]$ is a closed interval. Let $\gamma: [0, l_\gamma] \to \Pcal$ be such a curve. If $\gamma \cap E = \emptyset$, then $\tilde{g}=\tilde{g} \chi_{\Pcal \setminus E}$ everywhere on $\gamma$. Suppose now that the curve $\gamma$ intersects with the set $E$. Let
\[
\alpha = \inf\{ t\in[0, l_\gamma]: \gamma(t) \in E\}, \quad\mbox{and}\quad \beta = \sup\{ t\in[0, l_\gamma]: \gamma(t) \in E\}.
\]
Hence, $\gamma([0, \alpha)) \cap E = \emptyset = \gamma((\beta, l_\gamma]) \cap E $ and $\tilde{g}\circ \gamma = (\tilde{g}\chi_{\Pcal \setminus E})\circ \gamma$ on $[0, \alpha)\cup(\beta, l_\gamma]$. Furthermore, $u(\gamma(\alpha)) = u(\gamma(\beta)) = k$ since $u \circ \gamma \in \Ccal([0, l_\gamma])$. Consequently,
\begin{align*}
|u(\gamma(0)) - u(\gamma(\alpha))| &\le \int_0^\alpha \tilde{g}(\gamma(t))\,dt = \int_0^\alpha (\tilde{g}\chi_{\Pcal\setminus E})(\gamma(t))\,dt,\\
|u(\gamma(\alpha)) - u(\gamma(\beta))| & = 0 \le \int_\alpha^\beta (\tilde{g}\chi_{\Pcal\setminus E})(\gamma(t))\,dt,\\
|u(\gamma(\beta)) - u(\gamma(l_\gamma))| &\le \int_\beta^{l_\gamma} \tilde{g}(\gamma(t))\,dt = \int_\beta^{l_\gamma} (\tilde{g}\chi_{\Pcal\setminus E})(\gamma(t))\,dt.
\end{align*}
These estimates together give that $|u(\gamma(0)) - u(\gamma(l_\gamma))| \le \int_\gamma \tilde{g}\chi_{\Pcal\setminus E}\,ds$ holds for $\Mod_X$-a.e.\@ curve $\gamma$ whence $\tilde{g}\chi_{\Pcal\setminus E}$ is an $X$-weak upper gradient of $u$ and so is $g\chi_{\Pcal\setminus E}$.
\end{proof}
Now, we are ready to prove that truncated functions are dense in $\NX$ as well, provided that $X$ has absolutely continuous quasi-norm. In Example~\ref{exa:trunc_not_dense} below, the absolute continuity is shown to be crucial for the density of truncations in $\NX$.
\begin{cor}
\label{cor:bdd-dense-in-N1X}
Let $X$ be a quasi-Banach function lattice with absolutely continuous quasi-norm. Then, every function $u \in \NX$ can be approximated by its truncations with arbitrary precision in $\NX$, i.e., if $u_\sigma \coloneq \max \{ \min \{u, \sigma\}, -\sigma\}$ for $\sigma\in\Rbb^+$, then $u_\sigma \to u$ in $\NX$ as $\sigma\to\infty$.
\end{cor}
\begin{proof}
Let $u\in\NX$ be given and suppose that $g_u \in X$ is its minimal $X$-weak upper gradient. Then, $g_u$ is an $X$-weak upper gradient of $u-u_\sigma$ as well. The previous lemma implies that $g_u\chi_{\suplev{u}{\sigma}}$ is also an $X$-weak upper gradient of $u-u_\sigma$ as
\[
\suplev{u}{\sigma} = \{x\in\Pcal: |u(x)|>\sigma \} = \{x\in\Pcal: (u-u_\sigma)(x) \neq 0\}.
\]
Since $\meas{\bigcap_{\sigma>0} \suplev{u}{\sigma}} = 0$, the absolute continuity of the norm of $X$ leads to
\[
\|u-u_\sigma\|_\NX \le \|u \chi_{\suplev{u}{\sigma}}\|_X + \|g_u \chi_{\suplev{u}{\sigma}}\|_X \to 0\quad \mbox{as }\sigma\to\infty.
\qedhere
\]
\end{proof}
\section{Main results in their general form}
\label{sec:LipDensGeneral}
The main results on density of Lipschitz functions in Newtonian spaces are stated and proven in this section. Here, we show general theorems and provide examples of Newtonian spaces where they can be readily applied. Various special cases of the main theorems, whose hypotheses are easier to verify, will be discussed in Section~\ref{sec:LipDensSpec}. Recall that in this as well as in all subsequent sections, we will assume that $\mu$ is a non-atomic doubling measure (unless explicitly stated otherwise).
A different approach to the study of Sobolev-type functions on metric measure spaces was proposed by Haj\l{}asz in~\cite{Haj96}. Instead of (weak) upper gradients, another type of gradient was used, which allows a simple construction of Lipschitz approximations.
\begin{df}
\label{df:haj-grad}
Let $u: \Pcal \to \overline{\Rbb}$. Then, a measurable function $h: \Pcal \to [0, \infty]$ is a \emph{Haj\l{}asz gradient} of $u$ if there is a set $E \subset \Pcal$ with $\meas{E} = 0$ such that
\begin{equation}
\label{eq:def-haj}
|u(x) - u(y) | \le \dd(x,y) (h(x) + h(y)) \quad \mbox{for every $x,y \in \Pcal \setminus E$.}
\end{equation}
\end{df}
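As a simple illustration, every $L$-Lipschitz function $u: \Pcal \to \Rbb$ has the constant function $h \equiv L/2$ as a Haj\l{}asz gradient (with the exceptional set $E = \emptyset$), since
\[
|u(x) - u(y)| \le L \dd(x,y) = \dd(x,y) \biggl( \frac{L}{2} + \frac{L}{2} \biggr) \quad \mbox{for every } x, y \in \Pcal.
\]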
Jiang, Shanmugalingam, Yang, and Yuan~\cite[Theorem~1.3]{JiaShaYanYua} have shown that $4h$ is an $X$-weak upper gradient of a suitable representative of a function $u\in X \subset L^1_\loc$ with a Haj\l{}asz gradient $h \in X$, provided that $\mu$ is doubling. The main idea of this claim can be traced back to J.~Mal\'y, cf.~Haj\l{}asz~\cite[Proposition 1]{Haj94}. A slightly improved version can be found in Heinonen, Koskela, Shanmugalingam and Tyson~\cite[Lemma~9.2.5]{HeiKosShaTys}, where $3h$ is shown to be an upper gradient of $u\in \Ccal\cap L^1_\loc$ with a Haj\l{}asz gradient $h\in L^1_\loc$. This result is further refined in~\cite{Mal5}, where $2h$ is proven to be an $X$-weak upper gradient of a measurable function $u$ that is absolutely continuous on $\Mod_X$-a.e.\@ curve, regardless of the doubling condition of $\mu$ and regardless of the summability of $u$ or $h$. Moreover, the factor $2$ is shown to be optimal.
On the other hand, without any additional assumptions on the metric measure space, it is in general impossible to find a Haj\l{}asz gradient using a (weak) upper gradient of a function. If $\Pcal$ supports a Poincar\'e inequality, then a certain maximal function of an upper gradient is a Haj\l{}asz gradient, see the proof of Theorem~\ref{thm:general_Lip-dens} below or Haj\l{}asz~\cite{Haj}, cf.\@ also Shanmugalingam~\cite[Theorem~4.9]{Sha}.
\begin{thm}
\label{thm:Haj_Lip-dens}
Let $X$ be a quasi-Banach function lattice with absolutely continuous quasi-norm. Suppose that $u\in \NX$ has a Haj\l{}asz gradient $h$ that satisfies the weak estimate $\|\sigma \chi_{\suplev{h}{\sigma}} \|_X \to 0$ as $\sigma \to \infty$. (In particular, it suffices to suppose that $h\in X$.) Then, for every $\eps>0$ there is a Lipschitz function $u_\eps \in \NX$ such that $\|u-u_\eps\|_\NX < \eps$.
Moreover, we can find measurable sets $E_\eps \subset \Pcal$ such that $u=u_\eps$ in $\Pcal \setminus E_\eps$ and $\meas{\bigcap_{\eps>0} E_\eps} = 0$. If both $\suplev{h}{\sigma}$ and $\suplev{u}{\sigma}$ are of finite measure for some $\sigma>0$, then we can require $\meas{E_\eps}<\eps$. (In particular, it suffices to assume that $u, h\in L^q_\fm$ or that $X \subset L^q_\fm\fcrim$ for some $q>0$.)
\end{thm}
Note that we will not use the doubling condition of $\mu$ in the proof, and indeed Theorem~\ref{thm:Haj_Lip-dens} holds even if the measure violates this condition. On the other hand, $\mu$ needs to be assumed non-atomic.
\begin{proof}
Since $\sigma \chi_{\suplev{h}{\sigma}} \le h \chi_{\suplev{h}{\sigma}}$ for every $\sigma \ge 0$, the absolute continuity of the quasi-norm of $X$ yields that $\|\sigma \chi_{\suplev{h}{\sigma}}\|_X \le \|h \chi_{\suplev{h}{\sigma}}\|_X \to 0$ as $\sigma \to \infty$ if $h \in X$.
Let $\eps>0$ and set $\eta = \eps / (6\cconc^2)$, where $\cconc \ge 1$ is the modulus of concavity of the quasi-norm of $X$. Using Corollary~\ref{cor:bdd-dense-in-N1X}, we find $\sigma_0>1/\eps$ such that $\|u-v\|_\NX < \eta$, where $v$ is the truncation of $u$ at the levels $\pm\sigma_0$. Evidently, $u=v$ in $\Pcal \setminus \suplev{u}{\sigma_0}$. Moreover, if $\meas{\suplev{u}{\sigma}}<\infty$ for some $\sigma>0$, then $0 = \meas{\bigcap_{\sigma>0} \suplev{u}{\sigma}} = \lim_{\sigma \to \infty} \meas{\suplev{u}{\sigma}}$. Therefore, we can choose $\sigma_0>0$ sufficiently large to obtain $\meas{\suplev{u}{\sigma_0}}<\eta$ in this case. Note that $h$ is a Haj\l{}asz gradient of $v$ as well.
Now, we will show that the weak estimate $\|\sigma \chi_{\suplev{h}{\sigma}}\|_X \to 0$ as $\sigma \to \infty$ yields that $h<\infty$ a.e.
Let $Q=\{x\in \Pcal: h(x) = \infty\}$. Then, $\|\sigma \chi_Q\|_X \le \|\sigma \chi_{\suplev{h}{\sigma}}\|_X \to 0$ as $\sigma \to \infty$. Thus, $\|\chi_Q\|_X = 0$, whence $\meas{Q} = 0$.
For an upper gradient $g\in X$ of $u$, we can find $\sigma_1\ge\sigma_0$ such that $\|g \chi_{\suplev{h}{\sigma_1}} \|_X < \eta$ by the absolute continuity of the quasi-norm of $X$. If $\meas{\suplev{h}{\sigma}} < \infty$ for some $\sigma>0$, then $0 = \meas{\bigcap_{\sigma>0} \suplev{h}{\sigma}} = \lim_{\sigma \to \infty} \meas{\suplev{h}{\sigma}}$. Therefore, we can choose $\sigma_1$ sufficiently large so that $\meas{\suplev{h}{\sigma_1}}<\eta$ in this case.
Now, fix $\sigma \ge \sigma_1$ such that $\| \sigma \chi_{\suplev{h}{\sigma}} \|_X < \eta$.
Let $E\subset \Pcal$ be the exceptional set, where \eqref{eq:def-haj} fails, and let $A_\eta = E \cup \suplev{h}{\sigma}$. Thus, we obtain that $v|_{\Pcal \setminus A_\eta}$ is $2\sigma$-Lipschitz continuous, since $|v(x) - v(y)| \le \dd(x,y) (h(x) + h(y)) \le 2\sigma \dd(x,y)$. We define $u_\eps$ as the truncation of the upper McShane extension of $v|_{\Pcal \setminus A_\eta}$ at levels $\pm \sigma$, i.e.,
\[
u_\eps (x) = \max\{-\sigma, \min\{ \sigma, \inf\{v(y) + 2\sigma \dd(x,y): y\in \Pcal \setminus A_\eta\} \} \} \quad \mbox{for }x\in\Pcal.
\]
As $\sup_{x\in\Pcal} |v(x)| \le \sigma_0 \le \sigma_1 \le \sigma$, we have
\[
\|v-u_\eps\|_X \le \| (|v| + \sigma) \chi_{A_\eta}\|_X \le 2\|\sigma \chi_{\suplev{h}{\sigma}}\|_X < 2\eta.
\]
Since $g\in X$ is an upper gradient of $u$, it is an upper gradient of $v$ as well. Then, $g+ 2\sigma$ is an upper gradient of $v-u_\eps$. Furthermore, it follows by Lemma~\ref{lem:glueing_lemma} that $(g+ 2\sigma) \chi_{A_\eta}$ is an $X$-weak upper gradient of $v-u_\eps$, whose minimal $X$-weak upper gradient can be estimated by
\[
\| g_{v-u_\eps} \|_X \le \| (g+ 2\sigma) \chi_{A_\eta}\|_X \le \cconc( \|g \chi_{\suplev{h}{\sigma}}\|_X + 2 \|\sigma \chi_{\suplev{h}{\sigma}}\|_X) < 3\cconc \eta.
\]
Therefore,
\begin{align*}
\|u-u_\eps\|_\NX & \le \cconc (\|u- v\|_\NX + \|v- u_\eps\|_\NX) \\
& = \cconc (\|u- v\|_\NX + \|v- u_\eps\|_X + \| g_{v-u_\eps} \|_X)
< \cconc (\eta + 2\eta + 3\cconc\eta) \le \eps.
\end{align*}
We see that $u_\eps = v$ outside of $A_\eta$ and $v = u$ outside of $\suplev{u}{\sigma_0}$, whence $u_\eps = u$ in $\Pcal \setminus E_\eps$, where $E_\eps = A_\eta \cup \suplev{u}{\sigma_0} = E \cup \suplev{h}{\sigma} \cup \suplev{u}{\sigma_0}$.
Both $\sigma$ and $\sigma_0$ depend on $\eps$ and $\sigma \ge \sigma_0 \to \infty$ as $\eps \to 0$. Thus, $\bigcap_{\eps>0} E_\eps = E\cup \bigcap_{\tau>0} (\suplev{h}{\tau} \cup \suplev{u}{\tau})$, which yields that $\meas{\bigcap_{\eps>0} E_\eps}=0$ since both $h$ and $u$ are finite a.e.
If the superlevel sets are of finite measure, then $\meas{E_\eps} \le \meas{\suplev{h}{\sigma}} + \meas{\suplev{u}{\sigma_0}} < 2 \eta < \eps$.
If $u,h \in L^p_\fm\fcrim$ for some $p>0$, then $\meas{\suplev{u}{\sigma} \cup \suplev{h}{\sigma}} \to 0$ as $\sigma \to \infty$ by Lemma~\ref{lem:superlevelsets-to-zero}.
Suppose now that $X\subset L^p_\fm$ for some $p>0$. Since $u\in X$, we have $u\in L^p_\fm$ and hence $\meas{\suplev{u}{\sigma}} \to 0$ as $\sigma \to \infty$ by Lemma~\ref{lem:superlevelsets-to-zero}. It remains to prove that $\suplev{h}{\sigma}$ is of finite measure for some $\sigma > 0$. Suppose on the contrary that $\meas{\suplev{h}{\sigma}} = \infty$ for all $\sigma>0$. Since $\Bigl\|\sigma^{1/p} \chi_{\suplev{h}{\sigma^{1/p}}}\Bigr\|_X \to 0$ as $\sigma\to \infty$, there is a sequence $\{\sigma_n\}_{n=1}^\infty\subset\Rbb^+$ such that $\sigma_n \ge n$ and $\Bigl\|\sigma_n^{1/p} \chi_{\suplev{h}{\sigma_n^{1/p}}}\Bigr\|_X \le (2\cconc)^{-n}$ for every $n\in\Nbb$. We choose a sequence of pairwise disjoint sets $F_n$ such that $\smash{F_n \subset \suplev{h}{\sigma_n^{1/p}}}$ and $\meas{F_n} = 1/n^2$. Let $f = \sum_{n=1}^\infty \sigma_n^{1/p} \chi_{F_n}$ and $F = \bigcup_{n=1}^\infty F_n$. Then,
\[
\|f\|_X \le \sum_{n=1}^\infty \cconc^n \sigma_n^{1/p} \| \chi_{F_n} \|_X \le \sum_{n=1}^\infty \cconc^n \sigma_n^{1/p} \Bigl\| \chi_{\suplev{h}{\sigma_n^{1/p}}} \Bigr\|_X \le \sum_{n=1}^\infty \frac{1}{2^n} = 1.
\]
Hence, $f \in X$ but $f \notin L^p_\fm$ since $\meas{F} < \infty$ and
\[
\| f \chi_F \|_{L^p}^p = \sum_{n=1}^\infty \sigma_n \meas{F_n} = \sum_{n=1}^\infty \frac{\sigma_n}{n^2} \ge \sum_{n=1}^\infty \frac{1}{n}= \infty,
\]
which contradicts the inclusion $X\subset L^p_\fm$. We have thus shown that $\meas{\suplev{h}{\sigma}} < \infty$ for some $\sigma>0$. Consequently, $\lim_{\sigma \to \infty} \meas{\suplev{h}{\sigma}} = \meas{\bigcap_{\sigma>0} \suplev{h}{\sigma}} = 0$.
\end{proof}
Note that the hypotheses in Theorem~\ref{thm:Haj_Lip-dens} are sufficient but by no means necessary. We saw in Proposition~\ref{pro:NX=X-dens} that the density of Lipschitz functions in $\NX$ relies only on the properties of $X$ if the Newtonian space is trivial (i.e., if $\NX = X$).
Another tool to study Lipschitz and H\"older continuity of Sobolev (thus Newtonian) functions was introduced by Calder\'on and Scott~\cite{CalSco} in 1978, cf.~Calder\'on~\cite{Cal}.
\begin{df}
Let $u \in L^1_\loc(\Pcal)$. Then, for $\alpha \in (0,1]$, we define the \emph{fractional sharp maximal function} by
\[
u^\sharp_\alpha(x) = \sup_{r>0} \frac{1}{r^\alpha} \fint_{B(x,r)} |u - u_{B(x,r)}|\,d\mu, \quad x\in\Pcal.
\]
\end{df}
Roughly speaking, $u^\sharp_\alpha$ measures the $\alpha$-H\"older continuity of a function. Since we are interested in Lipschitz continuity, we will only work with $u^\sharp_1$.
\begin{rem}
If $u$ is $L$-Lipschitz continuous, then obviously $u^\sharp_1 \le 2L$. The converse also holds true. Namely, if a function $u\in L^1_\loc$ has $u^\sharp_1 \in L^\infty$, then there is a Lipschitz continuous function $\tilde{u}$ such that $u=\tilde{u}$ a.e. Boundedness of $u^\sharp_1$ guarantees that $u$ has a Haj\l asz gradient $h\in L^\infty$, which was shown by Haj\l asz and Kinnunen~\cite[Lemma~3.6]{HajKin}. Let $L = \|h\|_{L^\infty}$ and $E_L = \suplev{h}{L} \cup E$, where $E$ is the set where \eqref{eq:def-haj} fails. Then, $u|_{\Pcal\setminus E_L}$ is $2L$-Lipschitz and it has a unique continuous extension to $\Pcal$ since $\Pcal\setminus E_L$ is dense in $\Pcal$. Such an extension retains the $2L$-Lipschitz continuity.
\end{rem}
\begin{thm}
\label{thm:general_sharp_Lip-dens}
Assume that $X$ is a quasi-Banach function lattice with absolutely continuous quasi-norm. Let $u \in \NX$ and suppose that the fractional sharp maximal function $v^\sharp_1$ satisfies the weak estimate $\|\sigma \chi_{\suplevshp{v}{\sigma}} \|_X \to 0$ as $\sigma \to \infty$ for every truncation $v$ of $u$, where
\[
\suplevshp{v}{\sigma} \coloneq \suplev{v^\sharp_1}{\sigma} = \{ x\in \Pcal: {v}^\sharp_1(x) > \sigma \} \quad \mbox{for }\sigma \ge 0.
\]
(In particular, it suffices that $\|\sigma \chi_{\suplevshp{u}{\sigma}} \|_X \to 0$ as $\sigma \to \infty$.)
Then, for every $\eps>0$ there is a Lipschitz function $u_\eps \in \NX$ such that $\|u-u_\eps\|_\NX < \eps$.
\end{thm}
As before, we can arrange that $u_\eps=u$ outside of a set of arbitrarily small measure, provided that some superlevel sets of $u$ and of $u^\sharp_1$ (or $v^\sharp_1$) have finite measure.
\begin{proof}
Whenever $v$ is a truncation of $u$, we have $|v(x) - v(y)| \le |u(x) - u(y)|$ for every $x,y \in \Pcal$, whence
\begin{multline*}
\fint_B |v(x)-v_B|\,d\mu(x) \le \fint_B \fint_B |v(x) - v(y)| \,d\mu(x)\,d\mu(y) \\
\le \fint_B \fint_B |u(x) - u(y) + u_B - u_B| \,d\mu(x)\,d\mu(y) \le 2 \fint_B |u(x) - u_B|\,d\mu(x).
\end{multline*}
Therefore, $v^\sharp_1 \le 2 u^\sharp_1$ and if $u^\sharp_1$ satisfies the weak estimate, then so does $v^\sharp_1$.
Let $\eps > 0$. Then, there is a truncation $v\in\NX$ of $u$ such that $\|u-v\|_\NX < \eps/2\cconc$ by Corollary~\ref{cor:bdd-dense-in-N1X}.
Applying~\cite[Lemma~3.6]{HajKin}, we see that $c v^\sharp_1$ is a Haj\l asz gradient of~$v$ for some $c = c(c_\dbl) > 0$. Thus, there is a Lipschitz function $u_\eps \in \NX$ such that $\|u_\eps - v\|_\NX < \eps/2\cconc$ by Theorem~\ref{thm:Haj_Lip-dens}. Finally, the triangle inequality yields
\[
\| u-u_\eps\|_\NX \le \cconc (\|u-v\|_\NX + \|v-u_\eps\|_\NX) < \eps.
\qedhere
\]
\end{proof}
In the previous proof, we have used that a multiple of the fractional sharp maximal function $u^\sharp_1$ is a Haj\l asz gradient of a function $u\in L^1_\loc$. On the other hand, if $u \in L^1_\loc$ has a Haj\l asz gradient $h$, then it is easy to show that $u^\sharp_1 \le 4 M_1^c h$, where $M_1^c$ is the centered Hardy--Littlewood maximal operator. Note that this estimate holds true even if $\mu$ is not doubling.
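The short computation behind this estimate runs as follows. Let $h$ be a Haj\l{}asz gradient of $u$ with exceptional set $E$ as in \eqref{eq:def-haj}. Since $\meas{E} = 0$ and $\dd(y,z) \le 2r$ for all $y, z \in B \coloneq B(x,r)$, we can estimate
\begin{multline*}
\fint_{B} |u - u_B| \,d\mu \le \fint_{B} \fint_{B} |u(y) - u(z)| \,d\mu(y)\,d\mu(z) \\
\le \fint_{B} \fint_{B} \dd(y,z) \bigl(h(y) + h(z)\bigr) \,d\mu(y)\,d\mu(z) \le 4r \fint_{B} h \,d\mu \le 4 r M_1^c h(x).
\end{multline*}
Dividing by $r$ and taking the supremum over $r>0$ yields $u^\sharp_1(x) \le 4 M_1^c h(x)$.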
As with the Haj\l{}asz gradients, without any additional assumptions on the metric measure space it is in general impossible to find (or at least provide an estimate for) the fractional sharp maximal function $u^\sharp_1$ using an ($X$-weak) upper gradient of $u\in\NX$. A clear connection, perhaps not optimal, is however obtained if $\Pcal$ supports a $p$-Poincar\'e inequality (see Definition~\ref{df:pPI} above).
\begin{thm}
\label{thm:general_Lip-dens}
Assume that $\Pcal$ is a $p$-Poincar\'e space for some $p \in [1, \infty)$. Suppose further that $X$ is a quasi-Banach function lattice with absolutely continuous quasi-norm and that $\|\sigma \chi_{\suplevp{p}{v}{\sigma}} \|_X \to 0$ as $\sigma \to \infty$ whenever $v\in X$, where $\suplevp{p}{v}{\sigma}$ is the superlevel set of $M_pv$. Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
As before, the approximating Lipschitz functions coincide with the approximated Newtonian functions outside of sets of arbitrarily small measure, provided that $X \subset L^q_\fm$ for some $q>0$.
\begin{proof}
Since $\Pcal$ is a $p$-Poincar\'e space, we obtain that $u^\sharp_1(x) \le 2 c_{\PI} M_p g(x)$ whenever $g\in X$ is an upper gradient of $u\in \NX$. Since $\bigl\|\sigma \smash{\chi_{\suplevp{p}{g}{\sigma}}} \bigr\|_X \to 0$ as $\sigma \to \infty$, the fractional sharp maximal function $u^\sharp_1$ satisfies the weak estimate of Theorem~\ref{thm:general_sharp_Lip-dens}, which then yields the desired conclusion.
\end{proof}
The following example shows that the hypotheses that $\Pcal$ supports a $p$-Poincar\'{e} inequality and that $M_p$ obeys the weak estimate are in fact more restrictive than posing an analogous assumption that $u^\sharp_1$ satisfies the weak estimate of Theorem~\ref{thm:general_sharp_Lip-dens} for every $u\in \NX$.
\begin{exa}
Consider the bow-tie in $(\Rbb^n, dx)$, i.e., let
\[
\Pcal = \bigl\{(x_1, x_2, \ldots, x_n) \in \Rbb^n: x_i x_j\ge 0 \mbox{ for all } i,j = 1,\ldots,n\bigr\}.
\]
Let $X=L^q(\Pcal)$ for some $q\in[1, \infty)$. In fact, we are revisiting Bj\"{o}rn, Bj\"{o}rn, and Shanmugalingam~\cite[Example 5.2]{BjoBjoSha}, where other methods were used to show that Lipschitz functions are dense in $\NX$ even though $\Pcal$ is a $p$-Poincar\'{e} space if and only if $p>n$ (see also~\cite[Example A.23]{BjoBjo}).
Theorem~\ref{thm:general_Lip-dens} yields merely that Lipschitz functions are dense in $N^{1,q}\coloneq N^1L^q$ for $q>n$. We will show that the hypotheses of Theorem~\ref{thm:general_sharp_Lip-dens} are fulfilled for every $u\in N^{1,q}$ with $q\in[1, n)$ as well, yielding density of Lipschitz functions in $N^{1,q}$ for every $q\in[1, \infty)\setminus\{ n \}$.
Let $q\in [1, n)$. We can split $\Pcal = \Pcal^+ \cup \Pcal^-$, where
\[
\Pcal^+ = \bigl\{ x\in\Rbb^n: x_j \ge 0,\ j=1,\ldots,n\bigr\} \quad \mbox{and} \quad \Pcal^- = \bigl\{ x\in\Rbb^n: x_j \le 0,\ j=1,\ldots,n \bigr\}.
\]
Both $\Pcal^+$ and $\Pcal^-$ support a $1$-Poincar\'e inequality, e.g., by~\cite[Example 5.6]{BjoBjo}. Let $v$ be a truncation of $u \in N^{1,q}$ and let $g\in L^q$ be an upper gradient of $u$ and thus of $v$. Let $x \in \Pcal$ and $B = B(x,r)$. Then,
\[
\fint_{B} |v - v_B| \,d\mu \lesssim
r \fint_{B} g\,d\mu \le r M_1 g(x)\quad\mbox{if }r\le |x|.
\]
Suppose now that $r > |x|$ and $x\in \Pcal^+$. By the triangle inequality, we obtain that
\begin{multline*}
\fint_{B} |v - v_B| \,d\mu \lesssim \fint_B |v-v_{B\cap \Pcal^+}|\,d\mu \\
\le \fint_{B\cap \Pcal^+} |v-v_{B\cap \Pcal^+}|\,d\mu + \fint_{B\cap \Pcal^-}|v-v_{B\cap \Pcal^+}|\,d\mu
\lesssim r \fint_{B\cap \Pcal^+} g\,d\mu + \|v\|_{L^\infty} \,.
\end{multline*}
Hence, $v^\sharp_1(x) \lesssim M_1 g(x) + \|v\|_{L^\infty} / |x|$ whenever $x\in \Pcal^+$. An analogous argument shows that the inequality holds for $x\in\Pcal^-$ as well. Therefore, there is $c>0$ such that
\begin{multline*}
\suplevshp{v}{c \sigma} \subset \biggl\{ x \in \Pcal: M_1 g(x) + \frac{\|v\|_{L^\infty}}{|x|} > \sigma \biggr\} \\
\subset
\biggl\{ x \in \Pcal: M_1 g(x) > \frac{\sigma}{2} \biggr\} \cup
\biggl\{ x \in \Pcal: \frac{\|v\|_{L^\infty}}{|x|} > \frac{\sigma}{2} \biggr\} = \Suplevp{1}{g}{\frac{\sigma}{2\vphantom{\|_L}}} \cup \Suplev{h}{\frac{\sigma}{2\|v\|_{L^\infty}}}\,,
\end{multline*}
where $h(x) = 1/|x|$ for $x\in \Pcal$.
The function $M_1 g$ fulfills the needed weak estimate by~\cite[Lemma~3.12 and Theorem~3.13]{BjoBjo} (see also Section~\ref{sec:weaktype} below). The superlevel sets $\suplev{h}{\tilde\sigma}$ are balls of radius $1/\tilde\sigma$, centered at the origin.
Therefore, $\| \tilde\sigma \chi_{\suplev{h}{\tilde\sigma}} \|_{L^q} \approx \tilde\sigma^{1-n/q} \to 0$ as $\tilde\sigma \to \infty$. Consequently, $\| \sigma \chi_{\suplevshp{v}{\sigma}}\|_{L^q} \to 0$ as $\sigma \to \infty$. Note that the rate of convergence depends on $\|v\|_{L^\infty}$, i.e., on the chosen truncation of $u$. Theorem~\ref{thm:general_sharp_Lip-dens} now gives that $u$ can be approximated in $N^{1,q}$ by Lipschitz functions.
The case $q=n$ is more delicate. In general, we obtain merely that $\bigl\| \sigma \chi_{\suplevshp{v}{\sigma}}\bigr\|_X$ is bounded but does not tend to zero as $\sigma \to \infty$. For example, such a behavior is exhibited by $v(x)\coloneq(\chi_{\Pcal^+}(x)-\dist(B(0,1),x))^+ \in N^{1,n}$. Nevertheless, Lipschitz functions are dense even in $N^{1,n}$, which was shown in~\cite[Example~5.2]{BjoBjoSha}.
\end{exa}
The following proposition extends known density results in the variable exponent Sobolev and Newtonian spaces on $\Rbb^n$, cf.\@ Diening, Harjulehto, H\"ast\"o and R\r{u}\v{z}i\v{c}ka~\cite[Theorem~9.5.2]{DieHarHasRuz} and Harjulehto, H\"ast\"o and Pere~\cite[Theorem 3.5]{HarHasPer}, respectively. The main difference, when using our approach via the weak type estimate for the maximal operator, is that we allow for $p^- = \essinf_{x\in\Rbb^n} p(x) = 1$.
\begin{pro}
Suppose that $(\Pcal, \mu) = (\Rbb^n, dx)$. Let $X$ be the variable exponent Lebesgue space $L^{p(\cdot)}$ whose norm is given by
\[
\| u\|_{p(\cdot)} = \inf\biggl\{ \lambda > 0 : \int_{\Rbb^n} \biggl( \frac{|u(x)|}{\lambda}\biggr)^{p(x)}\,dx \le 1\biggr\},
\]
where $p: \Rbb^n \to [1, \infty)$ is measurable. Assume that $p$ is essentially bounded and that $p$ is of class $\Acal$, i.e.,
\[
\biggl\|\sum_{Q \in \Qcal} \Bigl(\chi_Q \fint_Q |f(x)|\,dx\Bigr) \biggr\|_{p(\cdot)} \lesssim \|f\|_{p(\cdot)}
\]
holds uniformly for all $f\in L^{p(\cdot)}$ and all systems of pairwise disjoint cubes $\Qcal$, cf.\@~\cite[Definition 4.4.6]{DieHarHasRuz}.
Then, the \emph{Lipschitz truncations}, i.e., bounded Lipschitz functions that coincide with a given function outside of sets of small measure, are dense in $N^{1,p(\cdot)}(\Rbb^n)$.
\end{pro}
Theorem 4.4.8 of~\cite{DieHarHasRuz} yields that $p$ is in particular of class $\Acal$ if $p$ is globally log-H\"older continuous, i.e., if $|p(x) - p(y)| \lesssim -1/\log(|x-y|)$ whenever $|x-y|<1/2$ and if there is $p_\infty \in [1,\infty)$ such that $|p(x) - p_\infty| \lesssim 1/\log (e+|x|)$ for all $x\in\Rbb^n$.
\begin{proof}
The space $\Rbb^n$ with the Lebesgue $n$-dimensional measure supports a $1$-Poincar\'{e} inequality. By~\cite[Theorem 3.4.1]{DieHarHasRuz}, the $L^{p(\cdot)}$ norm is absolutely continuous if and only if $p$ is essentially bounded. It is also shown in~\cite[Theorem 4.4.10]{DieHarHasRuz}, that if $p$ is of class $\Acal$, then the maximal operator $M_1$ is of weak type $(p(\cdot),p(\cdot))$, i.e.,
\[
\sup_{\sigma>0} \Bigl\| \sigma \chi_{\suplevp{1}{f}{\sigma}}\Bigr\|_{p(\cdot)} \lesssim \|f\|_{p(\cdot)},
\]
where $\suplevp{1}{f}{\sigma} = \{x\in\Rbb^n: M_1f(x) > \sigma\}$ as before. It remains to show that in fact $\Bigl\| \sigma \chi_{\suplevp{1}{f}{\sigma}}\Bigr\|_{p(\cdot)} \to 0$ as $\sigma \to \infty$. Let $f\in L^{p(\cdot)}$ be fixed and then define $f_\sigma = f \chi_{\suplev{f}{\sigma/2}}$ for $\sigma > 0$. Then, $M_1f \le M_1f_\sigma + \sigma/2$ whence $\suplevp{1}{f}{\sigma} \subset \suplevp{1}{f_\sigma}{\sigma/2}$. Consequently,
\[
\Bigl\| \sigma \chi_{\suplevp{1}{f}{\sigma}}\Bigr\|_{p(\cdot)} \le \Bigl\| \sigma \chi_{\suplevp{1}{f_\sigma}{\sigma/2}}\Bigr\|_{p(\cdot)} \lesssim \| f_\sigma \|_{p(\cdot)} = \bigl\| f \chi_{\suplev{f}{\sigma/2}} \bigr\|_{p(\cdot)} \to 0\quad\mbox{as }\sigma \to \infty.
\]
In this estimate, we have used that $\measl{\bigcap_{\sigma>0} \suplev{f}{\sigma/2}}=0$ so that the absolute continuity of the norm yields zero as the limit.
Theorem~\ref{thm:general_Lip-dens} and its proof give the desired conclusion of density of Lipschitz functions in $N^{1,p(\cdot)}(\Rbb^n)$.
\end{proof}
\section{Rearrangement-invariant spaces}
\label{sec:rispaces}
In order to be able to study boundedness of the maximal operators $M_p$, some structure of the function space $X$ needs to be known. In the current paper, we discuss a rather wide class of function spaces where the function norm is, roughly speaking, invariant under measure-preserving transformations. The setting of these so-called \ri spaces includes among others the Lebesgue $L^p$ spaces, the Orlicz $L^\Psi$ spaces, and the Lorentz $L^{p,q}$ spaces.
A quasi-normed function lattice $X = X(\Pcal, \mu)$ is \emph{rearrangement-invariant} if it satisfies the condition
\begin{enumerate}
\renewcommand{\theenumi}{(RI)}\renewcommand{\labelenumi}{\theenumi}
\item \label{df:RI}
if $u$ and $v$ are \emph{equimeasurable}, i.e.,
\[
\meas{\{x\in \Pcal: u(x) > t\}} = \meas{\{x\in \Pcal: v(x) > t\}} \quad\mbox{for all } t\ge0,
\]
then $\|u\|_X = \|v\|_X$.
\end{enumerate}
We say that $X$ is an \emph{\ri space} if it is a rearrangement-invariant Banach function space. In other words, $X$ satisfies not only~\ref{df:qBFL.initial}--\ref{df:BFL.locL1} with the modulus of concavity $\cconc = 1$, but also~\ref{df:RI}.
It is easy to show that if $X \emb Y_\fm$, where both $X$ and $Y$ are \ri spaces over $(\Pcal, \mu)$, then the constant $c_E\ge 0$ in the embedding inequality $\| u \chi_E\|_Y \le c_E \|u\|_X$ actually depends only on $\meas{E}$ for all measurable sets $E \subset \Pcal$ of finite measure.
For $f \in \Mcal(\Pcal, \mu)$, we define its \emph{distribution function $\mu_f$} by
\[
\mu_f(t) = \meas{\{x\in \Pcal: |f(x)| > t\}}, \quad t\in[0, \infty).
\]
Furthermore, we define the \emph{decreasing rearrangement $f^*$} of $f$ as the right-continuous generalized inverse function of $\mu_f$, i.e.,
\[
f^*(t) = \inf\{ s\ge 0: \mu_f(s) \le t\}, \quad t\in[0, \infty).
\]
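As a simple illustration of these definitions, let $f = 3\chi_{(0,1)} + \chi_{(1,3)}$ on $(\Rbb^+, \lambda^1)$. Then
\[
\mu_f(t) =
\begin{cases}
3, & t\in[0,1),\\
1, & t\in[1,3),\\
0, & t\ge 3,
\end{cases}
\quad\mbox{and}\quad
f^*(t) =
\begin{cases}
3, & t\in[0,1),\\
1, & t\in[1,3),\\
0, & t\ge 3,
\end{cases}
\]
i.e., $f^*$ rearranges the values of $f$ into a decreasing right-continuous function that is equimeasurable with $f$.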
The Cavalieri principle implies that $\|f\|_{L^1(\Pcal, \mu)} = \|\mu_f \|_{L^1(\Rbb^+, \lambda^1)} = \|f^*\|_{L^1(\Rbb^+, \lambda^1)}$. The \emph{elementary maximal function} $f^{**}$ of $f$ is given by
\[
f^{**}(t) = \fint_0^t f^*(s)\,ds, \quad t\in \Rbb^+.
\]
For a measure space $(\Rcal, \nu)$, a function $u \in \Mcal(\Rcal, \nu)$, and $t \in [0, \nu(\Rcal))$, we can find a measurable ``superlevel'' set $A \subset \Rcal$ such that $\nu(A) = t$ and $|u(x)| \ge |u(y)|$ whenever $x\in A$ and $y\in \Rcal \setminus A$.
In general, such a set $A$ is not defined uniquely by these conditions. Hence, we define $\suplevr{u}{t}$ as the family of all measurable sets $A$ with $\nu(A) = t$ that obey
\begin{equation}
\label{eq:df-suplevr}
\{x \in \Rcal: |u(x)| > u^*(t)\} \subset A \subset \{x \in \Rcal: |u(x)| \ge u^*(t)\}.
\end{equation}
Depending on the context, we will use either $(\Rcal, \nu) = (\Rbb^+, \lambda^1)$ or $(\Rcal, \nu) = (\Pcal, \mu)$.
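For instance, consider $u = \chi_{[0,2]}$ on $(\Rbb^+, \lambda^1)$ and $t = 1$. Then $u^*(1) = 1$ and condition \eqref{eq:df-suplevr} becomes
\[
\emptyset = \{x\in\Rbb^+: |u(x)| > 1\} \subset A \subset \{x\in\Rbb^+: |u(x)| \ge 1\} = [0,2],
\]
so $\suplevr{u}{1}$ consists of all measurable sets $A \subset [0,2]$ with $\lambda^1(A) = 1$; the family indeed contains many distinct sets.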
\begin{df}
Given a quasi-normed rearrangement-invariant function lattice $X$, we define the \emph{fundamental function} of $X$ as
\[
\phi_X(t) =
\begin{cases}
\|\chi_{E_t}\|_X, & t\in [0, \meas{\Pcal}),\\
\|1\|_X, & t\in[\meas{\Pcal}, \infty),
\end{cases}
\]
where $E_t \subset \Pcal$ is an arbitrary measurable set with $\meas{E_t} = t$.
\end{df}
We define $\phi_X$ beyond $\meas{\Pcal}$ merely for convenience: it allows us to avoid distinguishing the exact (possibly infinite) value of $\meas{\Pcal}$ in the coming claims and proofs. We will simply write $\phi$ instead of $\phi_X$ whenever any confusion of function spaces is unlikely to arise. Different spaces may very well have the same fundamental function, as seen in the example below.
\begin{exa}
(a) For the Lebesgue $L^p$ spaces, $1\le p<\infty$, we obtain that $\phi(t) = t^{1/p}$ for $t<\meas{\Pcal}$. It is also easy to see that $\phi_{L^\infty} = \chi_{(0, \infty)}$.
(b) The \emph{Lorentz spaces} $L^{p,q}(\Pcal)$ and $L^{p,\infty}(\Pcal)$ for $1\le p, q < \infty$, whose respective (quasi)norms are defined by
\[
\| u \|_{L^{p,q}} = \biggl( \frac{q}{p} \int_0^\infty (u^*(t) t^{1/p})^q \frac{dt}{t} \biggr)^{1/q}\quad\mbox{and}\quad \| u \|_{L^{p,\infty}} = \sup_{t > 0} u^*(t) t^{1/p},
\]
have the fundamental function $\phi(t) = t^{1/p}$ for $t<\meas{\Pcal}$.
(c) The fundamental functions of the grand and small Lebesgue spaces, which arise in the extrapolation theory, are for $t$ near zero estimated by
\[
\phi_{L^{p)}}(t) \approx \frac{t^{1/p}}{|{\log t}|} \quad\mbox{and}\quad \phi_{L^{(p}}(t) \approx t^{1/p} |{\log t}|,
\]
which was established by Lang and Pick, see Capone and Fiorenza~\cite{CapFio}.
(d) The Orlicz spaces based on an $N$-function $\Psi$ with the Luxemburg norm
\[
\| u \|_{L^\Psi} = \inf \biggl\{\lambda>0: \int_\Pcal \Psi\biggl(\frac{|u(x)|}{\lambda}\biggr)\,d\mu(x) \le 1\biggr\}
\]
have the fundamental function $\phi(t) = 1/\Psi^{-1}(1/t)$ for $0<t< \meas{\Pcal}$.
\end{exa}
A function $f: [0, \infty) \to [0, \infty)$ is called \emph{quasi-concave on $[0, R)$} for some $R>0$, if it satisfies:
\begin{itemize}
\item $f(0) = 0 < f(t)$ for $t\in (0, R)$,
\item $f(t)$ is increasing for $t\in[0, R)$,
\item $f(t) / t$ is decreasing for $t\in(0, R)$.
\end{itemize}
If $R=\infty$, we say simply that $f$ is \emph{quasi-concave}.
Note that a function $f$ that is quasi-concave on $[0,R)$ for some $R>0$ is Lipschitz (and hence absolutely continuous) on $[\delta, R)$ for every $\delta > 0$, with Lipschitz constant at most $f(\delta)/\delta$. Furthermore, there exists a concave function $\tilde{f}$ such that $\tilde{f}/2 \le f \le \tilde{f}$ on $[0,R)$, cf.\@~\cite[Proposition II.5.10]{BenSha}.
If a function $f: [0, \infty) \to [0, \infty)$ is increasing and concave on $[0, R)$ and if $f(0) = 0 < f(t)$ for all $t\in (0,R)$, then $f$ is quasi-concave on $[0, R)$ since
\begin{equation}
\label{eq:concave-quasiconcave}
\frac{f(t)}{t} = \frac{f(t) - f(0)}{t-0} \le \frac{f(s) - f(0)}{s-0} = \frac{f(s)}{s}\quad\mbox{for } 0<s<t<R.
\end{equation}
It is shown in~\cite[Corollary II.5.3]{BenSha} that the fundamental function $\phi$ of an \ri space $X$ is quasi-concave. By~\cite[Proposition II.5.11]{BenSha}, every \ri space can be equivalently renormed so that the fundamental function is concave. If $X$ has an absolutely continuous norm, then $\phi(0\limplus) \coloneq \lim_{t\to 0\limplus} \phi(t) = 0$. Note however that the converse does not hold true in general. For example, the weak-$L^p$ spaces (i.e., $L^{p,\infty}$) satisfy $\phi(0\limplus) = 0$ if $p<\infty$ even though their quasi-norm lacks the~\ref{df:AC} property.
\begin{df}
For a quasi-concave function $\phi$, we define the \emph{(classical) Lorentz space} $\Lambda_\phi^q$, where $q\in [1, \infty)$, the \emph{Marcinkiewicz space} $M_\phi$ and the \emph{weak Marcinkiewicz space} $M^*_\phi$ by their respective (quasi)norms:
\begin{align*}
\|u\|_{\Lambda^q_\phi} & = \biggl( \int_0^\infty (u^*(t) \phi(t))^q \frac{dt}{t} \biggr)^{1/q}, \\
\|u\|_{M_\phi} &= \sup_{t>0} u^{**}(t) \phi(t),\\
\|u\|_{M^*_\phi} &= \sup_{t>0} u^{*}(t) \phi(t).
\end{align*}
If $\phi$ is an increasing concave function, we define the \emph{Lorentz space} $\Lambda_\phi$ via its norm
\[
\|u\|_{\Lambda_\phi} = \int_{[0, \infty)} u^*(t)\,d\phi(t) = \phi(0\limplus) \|u\|_{L^\infty} + \int_0^\infty u^*(t) \phi'(t) \,dt,
\]
where $d\phi(t) = \phi'(t)\,dt$ a.e.\@ on $\Rbb^+$ due to the absolute continuity of $\phi$.
Given an \ri space $X$ with fundamental function $\phi$ (as long as $X$ is considered renormed so that $\phi$ is concave if needed), we write $\Lambda(X)$, $\Lambda^q(X)$, $M(X)$, and $M^*(X)$ instead of $\Lambda_\phi$, $\Lambda^q_\phi$, $M_\phi$, and $M^*_\phi$, respectively.
\end{df}
Neither the notation nor the naming of these spaces is unified in the literature. Both $\Lambda_\phi$ and $M_\phi$ are sometimes called Lorentz spaces, while both $M_\phi$ and $M^*_\phi$ may also be called weak Lorentz or simply Marcinkiewicz spaces.
For instance, our $\Lambda_\phi$ is denoted by
\[
\Lambda(w, 1), \quad \Lambda_1(w), \quad \Lambda^1(w), \quad \Lambda_\phi, \quad \mbox{and}\quad L(\widetilde{w}, 1)
\]
in Lorentz~\cite{Lor}, Sawyer~\cite{Saw}, Cwikel, Kami\'{n}ska, Maligranda, and Pick~\cite{CwiKamMalPic}, Bennett and Sharpley~\cite{BenSha}, and Sparr~\cite{Spa}, respectively, where $w(t) = \phi'(t)$ and $\widetilde{w}(t) = t \phi'(t)$ for $t>0$.
Furthermore, our $M_\phi$ is denoted by $\Lambda^*(\psi', 1)$, $\Gamma^{1, \infty}(w)$, and $M_\phi$ in~\cite{Lor},~\cite{CwiKamMalPic}, and~\cite{BenSha}, respectively, where $\psi$ is the associated fundamental function of $\phi$, i.e., $\psi(t) = t/\phi(t)$.
The space $\Lambda^q_\phi$ has been studied in~\cite{Lor},~\cite{CwiKamMalPic}, and~\cite{Spa} using the notations $\Lambda(w_q, q)$, $\Lambda^q(w_q)$, and $L(\phi, q)$, respectively, where $w_q(t) = \phi(t)^q/t$.
Finally, our $M^*_\phi$ appears in~\cite{Spa} and~\cite{CwiKamMalPic} as $L(\phi, \infty)$ and $\Lambda^{1, \infty}(w)$, respectively. Besides, the notation $M^*(X)$ can be found in~\cite{BenSha}. The interested reader may consult~\cite{CwiKamMalPic}, where various references on Lorentz and Lorentz-type spaces are provided.
\begin{exa}
Focusing on the Lebesgue spaces, we can see that
\begin{enumerate}
\item $\Lambda(L^1) = M(L^1) = L^1$, $\Lambda^q(L^1) = L^{1,q}$, and $M^*(L^1) = L^{1,\infty}$;
\item $\Lambda(L^p) = L^{p,1}$, $\Lambda^q(L^p) = L^{p,q}$, and $M(L^p) = M^*(L^p) = L^{p,\infty}$, whenever $p\in (1, \infty)$;
\item $\Lambda(L^\infty) = M(L^\infty) = M^*(L^\infty) = L^\infty$, whereas $\Lambda^q(L^\infty) = \{0\}$.
\end{enumerate}
\end{exa}
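For illustration, the identification $\Lambda^q(L^p) = L^{p,q}$ can be verified directly from the definitions: the fundamental function of $L^p$ is $\phi(t) = t^{1/p}$, whence
\[
\|u\|_{\Lambda^q_\phi} = \biggl( \int_0^\infty \bigl(u^*(t)\, t^{1/p}\bigr)^q \frac{dt}{t} \biggr)^{1/q} = \biggl( \int_0^\infty u^*(t)^q\, t^{q/p - 1} \,dt \biggr)^{1/q},
\]
which is precisely the usual (quasi)norm of the Lorentz space $L^{p,q}$. The remaining identifications follow by similar computations.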
If $X$ is an \ri space, then $\Lambda(X)$ and $M(X)$ are \ri spaces by~\cite[Proposition II.5.8, Theorem II.5.13]{BenSha}. In general, $M^*(X)$ is merely a rearrangement-invariant quasi-Banach function lattice. They all have the same fundamental function as $X$ and
\begin{equation}
\label{eq:LambdaX-X-MX-embed}
\Lambda(X) \emb X \emb M(X) \emb M^*(X)
\end{equation}
with the embedding norms equal to $1$.
If $\phi^q$ is quasi-concave, then $\Lambda^q_\phi$ is an \ri space. The triangle inequality in this case follows from Lorentz~\cite[Theorem 1]{Lor}. Otherwise, the space may be merely quasi-normed (see Sparr~\cite[Theorem 1.2]{Spa}). The fundamental function of $\Lambda^q_\phi$ is different from $\phi$ unless $\phi(t)=t^{1/q}$ for $0\le t<\meas{\Pcal}$, in which case $\Lambda^q_\phi = L^q$. It is however comparable to $\phi$ provided that $\phi'(t) \approx \phi(t)/t$ for $0<t<\meas{\Pcal}$, which occurs, e.g., if $\phi^\alpha$ is a convex function or, more generally, if $\phi(t)^\alpha/t$ is increasing on $(0, \meas{\Pcal})$ for some $\alpha \ge 1$.
The classical Lorentz spaces associated with the same quasi-concave function $\phi$ are embedded into each other relative to the exponent. The precise statement is given in the following lemma, which in fact follows from Stepanov~\cite[Proposition~1]{Ste}. In order to check the hypotheses of that proposition, a calculation similar to the one in the proof below is needed (with $v^*=\chi_{(0,a)}$ for arbitrary $a>0$). Therefore, we present a simple direct proof of the embedding, which is an elementary modification of the proofs available in the literature, where only $\phi(t) = t^\alpha$ with some $\alpha\in[0,\infty)$ is considered, cf.\@~\cite[Proposition IV.4.2]{BenSha}.
\begin{lem}
\label{lem:lorentz-embedding}
Let $\phi$ be a quasi-concave function and suppose that $1\le q < p < \infty$. Then, $\Lambda^q_\phi \emb \Lambda^p_\phi$ and the norm of the embedding can be estimated independently of $\phi$.
\end{lem}
\begin{proof}
Let $v \in \Lambda^q_\phi$. First, we show that $v \in M^*_{\phi}$ using the relation $\phi'(s) \le \phi(s)/s$ for $s>0$, which follows from the quasi-concavity of $\phi$:
\begin{align*}
&\|v\|_{M^*_\phi} = \biggl(\sup_{t>0} v^*(t)^q \phi(t)^q\biggr)^{1/q} = \biggl(\sup_{t>0} v^*(t)^q \int_0^t q \phi(s)^{q} \frac{\phi'(s)}{\phi(s)}\,ds\biggr)^{1/q} \\
&\quad \le c_q \biggl(\sup_{t>0} v^*(t)^q \int_0^t \phi(s)^q \frac{ds}{s}\biggr)^{1/q} \le c_q \biggl(\sup_{t>0} \int_0^t (v^*(s)\phi(s))^q \frac{ds}{s}\biggr)^{1/q} = c_q\|v\|_{\Lambda_\phi^q},
\end{align*}
where $c_q = q^{1/q}$. Now, we can estimate
\begin{align*}
\|v\|_{\Lambda^p_\phi}^p & = \int_0^\infty (v^*(t) \phi(t))^p \frac{dt}{t} \le \sup_{t>0} (v^*(t) \phi(t))^{p-q} \int_0^\infty (v^*(t) \phi(t))^q \frac{dt}{t} \\
& = \|v\|_{M^*_\phi}^{p-q} \|v\|_{\Lambda^q_\phi}^q \le c_q^{p-q} \|v\|_{\Lambda^q_\phi}^p.
\end{align*}
Thus, we obtain the desired inequality $\|v\|_{\Lambda^p_\phi} \le c_{p,q} \|v\|_{\Lambda^q_\phi}$, where $c_{p,q} = q^{1/q - 1/p}$.
\end{proof}
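In the model case $\phi(t) = t^{1/\alpha}$ with $\alpha \in (1, \infty)$, Lemma~\ref{lem:lorentz-embedding} recovers the classical embedding of Lorentz spaces with increasing second index, namely
\[
\|v\|_{L^{\alpha, p}} = \|v\|_{\Lambda^p_\phi} \le q^{1/q - 1/p} \|v\|_{\Lambda^q_\phi} = q^{1/q - 1/p} \|v\|_{L^{\alpha, q}} \quad\mbox{for } 1 \le q < p < \infty,
\]
where the constant is indeed independent of $\alpha$, cf.\@~\cite[Proposition IV.4.2]{BenSha}.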
Given an \ri space $X$ over $(\Pcal, \mu)$, there is an \ri space $\reps{X}$ over $(\Rbb^+, \lambda^1)$, the so-called \emph{representation space} of $X$, such that $\|u\|_X = \|u^*\|_{\reps X}$ for all $u\in\Mcal(\Pcal, \mu)$. Existence of such a space $\reps X$ is established by the \emph{Luxemburg representation theorem} (see~\cite[Theorem II.4.10]{BenSha}). For the sake of uniqueness, $\reps X$ may be chosen such that $\|f\|_{\reps X} = \|f^* \chi_{(0, \meas{\Pcal})}\|_{\reps X}$ for all $f\in \reps X$. Furthermore, $\phi_X = \phi_{\reps X}$, whence $\Lambda(\reps X) = \reps{\Lambda(X)}$, $M(\reps X) = \reps{M(X)}$, and $M^*(\reps X) = \reps{M^*(X)}$.
The next lemma shows another rather unsurprising fact, namely, that the norm of the representation space retains the absolute continuity.
\begin{lem}
If an \ri space $X$ has an absolutely continuous norm, then so does $\reps X$.
\end{lem}
\begin{proof}
Let $f \in \reps X$. Then, there is a non-negative function $u\in X$ such that $u^* = f^*$ by~\cite[Corollary~II.7.8]{BenSha}. Let $\{E_n\}_{n=1}^\infty$ be a decreasing sequence of sets in $(0, \meas{\Pcal})$ such that $\measl{\bigcap_{n=1}^\infty E_n} = 0$. Let $\eps > 0$ be arbitrary. Since $\bigcap_{R=1}^\infty (\Pcal \setminus B(z, R)) = \emptyset$ for any fixed $z\in\Pcal$, we can find a ball $B\subset\Pcal$ such that $\|u\chi_{\Pcal \setminus B}\|_X < \eps/2$. Choose $F \in \suplevr{f}{\meas{B}}$ arbitrarily (recall that $\suplevr{f}{t}$ was defined in \eqref{eq:df-suplevr} as the collection of all measurable ``superlevel'' sets of $f$ whose measure is equal to $t\ge0$).
Let $E_n' = E_n \cap F$. For every $n\ge 1$, choose $G_n \in\suplevr{u}{\measl{E'_n}}$ such that $G_n \supset G_{n+1}$. Hence, $\meas{G_n}\to 0$ as $n\to\infty$. Thus, there is $n_0 \ge 1$ such that $\|u \chi_{G_n}\|_X < \eps /2$ for every $n\ge n_0$. For such $n$, we can estimate
\begin{align*}
\| f \chi_{E_n}\|_{\reps X} &
\le \| f \chi_{E'_n}\|_{\reps X} + \| f \chi_{E_n\setminus F}\|_{\reps X}
\le \| f \chi_{E'_n}\|_{\reps X} + \| f \chi_{(0, \meas{\Pcal}) \setminus F}\|_{\reps X} \\
& \le \|f^* \chi_{(0, \measl{E'_n})}\|_{\reps X} + \| f^* \chi_{[\meas{B}, \meas{\Pcal})}\|_{\reps X}
= \|u \chi_{G_n}\|_X + \| u \chi_{\Pcal \setminus B}\|_{X} < \eps.
\qedhere
\end{align*}
\end{proof}
\begin{df}
Given $p\in[1, \infty)$ and a quasi-concave function $\phi$ that is constant on $(\meas{\Pcal}, \infty)$, we define the \emph{Marcinkiewicz-type spaces} $M^p_\phi$ and $M^p_{\phi,\loc}$ by their norms
\begin{align*}
\|u\|_{M^p_\phi} = \sup_{t>0} M_pu^*(t) \phi(t) \quad \mbox{and}\quad
\|u\|_{M^p_{\phi, \loc}} = \sup_{0<t< 1} M_pu^*(t) \phi(t).
\end{align*}
If $X$ is an \ri space whose fundamental function is $\phi$, then we write $M^p(X)$ and $M^p_\loc(X)$ instead of $M^p_\phi$ and $M^p_{\phi, \loc}$, respectively.
\end{df}
It is easy to verify that $M^p_\phi \emb M^p_{\phi, \loc} \emb L^p_\fm \emb L^p_\loc$. If $\meas{\Pcal}<\infty$, then $M^p_\phi$ and $M^p_{\phi, \loc}$ coincide. On the other hand, if $\meas{\Pcal}=\infty$, then $M^p_\phi$ is non-trivial if and only if $\phi(t)/t^{1/p}$ is bounded for $t>1$. Observe also that $M^1(X) = M(X)$ by definition. The fundamental function of $M^p_\phi$ dominates $\phi$, whereas it is equal to $\phi$ if and only if $\phi^p$ is quasi-concave. The function $\psi(t)$ defined by \eqref{eq:quasiconv_dom} below equals the fundamental function of $M^p_{\phi, \loc}$ for $t\le 1$.
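As a quick consistency check of the definitions, let $\phi(t) = t^{1/p}$, so that $\phi^p$ is quasi-concave. Then,
\[
\|u\|_{M^p_\phi} = \sup_{t>0} \biggl( \fint_0^t u^*(s)^p \,ds \biggr)^{1/p} t^{1/p} = \sup_{t>0} \biggl( \int_0^t u^*(s)^p \,ds \biggr)^{1/p} = \|u\|_{L^p},
\]
i.e., $M^p(L^p) = L^p$ with equal norms, in accordance with $M^1(L^1) = M(L^1) = L^1$ observed above.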
Having introduced various rather wide classes of function spaces, we can revisit the question of density of bounded functions, providing several examples where the density fails since the function norm is not absolutely continuous.
The following example shows that bounded functions are not dense in the Marcinkiewicz spaces that lie locally strictly between $L^\infty$ and $L^1$.
\begin{exa}
Let $X=M_\phi$, where $\phi$ is a quasi-concave function that satisfies
\[
\lim_{t\to 0\limplus} \phi(t) = 0, \quad \mbox{and} \quad \lim_{t\to 0\limplus} \frac{\phi(t)}{t} = \infty.
\]
Roughly speaking, these conditions on $\phi$ say that $L^\infty \subsetneq X_\fm$ and $X \subsetneq L^1_\fm$. By~\cite[Proposition II.5.10]{BenSha}, there exists a concave function $\psi(t) \approx t/\phi(t)$, $t>0$, since the latter is quasi-concave, which follows from quasi-concavity of $\phi$. Moreover, $\psi$ is increasing and absolutely continuous. Let now $u\ge 0$ be chosen such that $u^*(t)$ is the right-continuous representative of the right derivative $\psi'_+(t)$. Since $\psi(t)/t \approx 1/\phi(t) \to \infty$ as $t\to 0\limplus$, we have that $u^*(0\limplus) = \infty$, whence $u$ is not bounded. Now, we will show that $u\in X$, but no sequence of bounded functions converges to $u$ in $X$. Indeed,
\[
\| u \|_{X} = \sup_{t>0} u^{**}(t) \phi(t) = \sup_{t>0} \frac{\phi(t)}{t} \int_0^t u^*(s) \,ds = \sup_{t>0} \frac{\phi(t) \psi(t)}{t} \approx 1.
\]
Let $f\in X \cap L^\infty$ be non-negative and let $b\coloneq \|f\|_{L^\infty}$. Now, we choose $a>0$ such that $u^*(t) \ge 2b$ for $t\in(0, a)$. Then,
\begin{align*}
\| u-f \|_{X} & = \sup_{t>0} (u-f)^{**}(t) \phi(t) \ge \sup_{0<t<a} \frac{\phi(t)}{t} \int_0^t (u-f)^*(s)\,ds \\
& \ge \sup_{0<t<a} \frac{\phi(t)}{t} \int_0^t (u^*(s) - b)\,ds \ge \sup_{0<t<a} \frac{\phi(t)}{t} \frac{\psi(t)}{2} \approx 1.
\end{align*}
\end{exa}
In the setting of \ri spaces that contain unbounded functions, the absolute continuity (on sets of finite measure) is actually indispensable for the density of bounded functions.
\begin{lem}
\label{lem:trunc_dense_locAC}
Let $X$ be an \ri space such that $X\setminus L^\infty \neq \emptyset$. Then, the truncations are dense in $X$ if and only if $\| u \chi_{E_k}\|_X \to 0$ as $k\to\infty$ for every $u\in X$ and every decreasing sequence of sets $\{E_k\}_{k=1}^\infty$ such that $\meas{E_k} \to 0$.
\end{lem}
The latter condition can be understood as \emph{absolute continuity of the norm on sets of finite measure}. If $\meas{\Pcal} < \infty$, then it is equivalent to the absolute continuity of the norm. Otherwise, the absolute continuity is more restrictive since it requires that $\| u \chi_{E_k}\|_X \to 0$ even in the case when $\meas{E_k} = \infty$ for all $k\in\Nbb$ but $\meas{\bigcap_{k=1}^\infty E_k} = 0$, see Example~\ref{exa:AC-vs-ACloc} below.
In order to prove the density of truncations in $X$ under the condition of absolute continuity of the norm on sets of finite measure, we do not really need all the axioms of an \ri space. It would suffice to assume that $X$ is a quasi-Banach function lattice and $X \subset L^p_\fm$ for some $p>0$. Recall that this inclusion with $p=1$ follows by the axiom~\ref{df:BFL.locL1} in the definition of quasi-Banach function spaces (and hence \ri spaces). It is only for the converse that we make use of the remaining axioms of \ri spaces.
\begin{proof}
Suppose first that the norm of $X$ is absolutely continuous on sets of finite measure. Let $u\in X$. Recall the notation $\suplev{u}{k} = \{x\in\Pcal: |u(x)|>k\}$. Then, $\meas{\suplev{u}{k}} \to 0$ as $k\to \infty$ by Lemma~\ref{lem:superlevelsets-to-zero} since $X \emb L^1_\fm$ by~\ref{df:BFL.locL1} in the definition of \ri spaces. Let $u_k$ be the truncation of $u$ at the levels $\pm k$. Then, the absolute continuity of the norm of $X$ on sets of finite measure implies that
\[
\| u - u_k\|_X =\| (|u|-k) \chi_{\suplev{u}{k}}\|_X \le \| u \chi_{\suplev{u}{k}} \|_X \to 0\quad\mbox{as }k\to\infty,
\]
which finishes the proof of the sufficiency.
Suppose next that the norm of $X$ is not absolutely continuous on sets of finite measure, i.e., there exist $v\in X$ and a decreasing sequence of sets $\{E_k\}_{k=1}^\infty$ with $\meas{E_k} \to 0$ such that $\| v \chi_{E_k}\|_X \to a > 0$ as $k\to \infty$. We also have that $\phi(0\limplus) = 0$ since $X$ contains unbounded functions. By passing to a subsequence if needed, we may assume that $\phi(\meas{E_k}) < k^{-2}$. Let $A_k = E_k \setminus E_{k+1}$ for $k\in \Nbb$. Set
\[
\tilde{u} = \sum_{k=1}^\infty \frac{\chi_{A_k}}{k^{3/2} \phi(\meas{E_k})} \quad\mbox{and}\quad u=\tilde{u}+|v|.
\]
Thus, $u\in X$ while $\tilde{u} > \sqrt{k}$ on $E_k$. Let now $u_n = \min\{u, n\}$ for $n\in\Nbb$. Then, $u - u_n \ge |v|$ on $E_k$ whenever $k \ge n^2$, which yields $\|u-u_n\|_X \ge \| v \chi_{E_{n^2}} \|_X \ge a > 0$ for every $n\in\Nbb$.
\end{proof}
The next example illustrates that absolute continuity on sets of finite measure really is a more general notion than absolute continuity of the quasi-norm.
\begin{exa}
\label{exa:AC-vs-ACloc}
The norm of the space $X = (L^1 + L^\infty)(\Pcal)$, where $\meas{\Pcal}=\infty$, is given as $\|u\|_X = \|u^* \chi_{(0,1)}\|_{L^1(\Rbb^+)}$ (cf.\@~\cite[Theorem II.6.4]{BenSha}). It is not absolutely continuous, but merely absolutely continuous on sets of finite measure.
Indeed, let $u \equiv 1 \in X$ and let $E_k= \Pcal \setminus kB$ for $k\in\Nbb$, where $B \subset \Pcal$ is a ball. Then, $\bigcap_{k=1}^\infty E_k = \emptyset$, but $(u\chi_{E_k})^* \equiv 1$, whence $\|u\chi_{E_k}\|_X = 1$ for all $k$. If we however have a decreasing sequence of sets $F_k$ with $\meas{F_k} \to 0$, then $(u \chi_{F_k})^* \chi_{(0,1)} = (u \chi_{F_k})^*$ whenever $\meas{F_k} < 1$, which yields that $\|u \chi_{F_k}\|_X = \|u \chi_{F_k}\|_{L^1} \to 0$ as $k\to\infty$ by the dominated convergence theorem.
Consequently, the truncations are dense in $X$ by Lemma~\ref{lem:trunc_dense_locAC}, whereas this conclusion cannot be drawn from Lemma~\ref{lem:trunc_dense_AC}.
\end{exa}
We may modify the proof of Corollary~\ref{cor:bdd-dense-in-N1X} similarly as in Lemma~\ref{lem:trunc_dense_locAC} to see that the absolute continuity on sets of finite measure suffices for the density of truncations in $\NX$ provided that $X \subset L^p_\fm$ for some $p>0$.
The Marcinkiewicz spaces, whose norms in general lack the absolute continuity (also on sets of finite measure), will provide us with a setting where we can construct an unbounded Newtonian function whose truncations lie far away from the function. The situation here is somewhat more involved since not only $X$, but also $\NX$ has to contain unbounded functions.
\begin{exa}
\label{exa:trunc_not_dense}
Let $\phi \in \Ccal^1(\Rbb^+)$ be a quasi-concave function that satisfies
\[
\lim_{t\to 0\limplus} \phi(t) = 0 \quad \mbox{and} \quad \lim_{t\to 0\limplus} \frac{\phi(t)}{t} = \infty.
\]
Suppose that there are $q>p>1$ such that $\phi(t)^p/t$ is a decreasing function for $t> 0$ whereas $\phi(t)^q/t$ is increasing. In particular, these conditions are satisfied if $\phi^p$ is (quasi)concave whereas $\phi^q$ is convex. The maximal operator $M_1: M_\phi^* \to M_\phi^*$ is bounded under these assumptions, which is shown in Lemma~\ref{lem:fi-quasiconc_weak2weak-glob} below. In view of the Herz--Riesz inequality (Proposition~\ref{pro:herz} below), the Marcinkiewicz spaces $M_\phi$ and $M_\phi^*$ coincide with equivalent (quasi)norms. If, for example, $\phi(t) = t^{1/\alpha}$ with $\alpha > 1$, then we may choose any $p\in (1, \alpha]$ and $q \in (\alpha, \infty)$, and $M_\phi = M^*_\phi = L^{\alpha, \infty}$.
Let $X=M_\phi$ over $\Rbb^n$ endowed with the Euclidean metric and the $n$-dimensional Lebesgue measure, where $n>q$. Then, the truncations are not dense in $\NX$, which is seen by the following argument:
Let $f(t) = t/\phi(t^n)$ for $t>0$. Then, $f$ is decreasing and $f(0) \coloneq f(0\limplus) = \infty$ since
\[
\lim_{t\to 0\limplus} \frac{t}{\phi(t^n)} = \lim_{t\to 0\limplus} \frac{t^{1/n}}{\phi(t)} = \biggl( \lim_{t\to 0\limplus} \frac{t}{\phi(t)^n} \biggr)^{1/n} = \biggl( \lim_{t\to 0\limplus} \frac{t}{\phi(t)^q} \cdot \frac{1}{\phi(t)^{n-q}}\biggr)^{1/n} = \infty.
\]
Furthermore, $|f'(t)| \approx 1/\phi(t^n)$ since the inequality
\begin{equation}
\label{eq:phi'-phit/t}
\frac{\phi(t)}{qt} \le \phi'(t) \le \frac{\phi(t)}{t}\quad\mbox{ for all $t>0$,}
\end{equation}
which holds due to the monotonicity of $\phi(t)/t$ and $\phi(t)^q/t$, leads to
\[
\frac{-1}{\phi(t^n)} \approx \frac{1-n}{\phi(t^n)} \le \frac{1}{\phi(t^n)} - \frac{n t^n \phi'(t^n)}{\phi(t^n)^2} \le \frac{q-n}{q} \cdot \frac{1}{\phi(t^n)} \approx \frac{-1}{\phi(t^n)} \,.
\]
Let $u(x) = (f(|x|) - f(1))^+$ for $x\in \Rbb^n$. Then, $|\nabla u(x)| = |f'(|x|)|$ for $|x| < 1$. Similarly as in~\cite[Proposition 1.14]{BjoBjo}, we see that $g\coloneq|\nabla u| \chi_{B(0,1)}$ is an upper gradient of $u$. Hence, we can estimate $\|u\|_\NX = \|u\|_X + \|g\|_X \approx \|u\|_{M_\phi^*} + \|g\|_{M_\phi^*}$ while
\begin{align*}
\|u\|_{M_\phi^*} & = \sup_{t>0} u^*(t) \phi(t) = \sup_{0<r<1} (f(r) - f(1)) \phi(\omega_n r^n) \lesssim \sup_{0<r<1} f(r)\phi(r^n) = 1, \\
\|g\|_{M_\phi^*} & = \sup_{0<t<\omega_n} (\nabla u)^*(t) \phi(t) = \sup_{0<r<1} |f'(r)| \phi(\omega_n r^n) \approx \sup_{0<r<1} |f'(r)|\phi(r^n) \approx 1,
\end{align*}
where $\omega_n$ is the measure of the $n$-dimensional unit ball. Therefore, $u\in\NX$. We can see that $|\nabla u(x)|$ is also a minimal $X$-weak upper gradient of $u$ by following the argument of~\cite[Proposition A.3]{BjoBjo}, where we replace the representation formula~\cite[Theorem 2.51]{BjoBjo} by~\cite[Theorem~4.10]{Mal2} with $\varphi(t)=t$. Note that the function $\varphi$ in~\cite[Theorem~4.10]{Mal2} is unrelated to $\phi$.
Let now $u_k(x) = \min\{ u(x), k\}$ for $x\in\Rbb^n$, where $k\in \Nbb$. Then, $g_k \coloneq |\nabla u| \chi_{B(0,r_k)}$, with $r_k = f^{-1}(k + f(1))$, is a minimal $X$-weak upper gradient of $u-u_k$. This provides us with the estimate
\begin{align*}
\|u-u_k\|_\NX & = \|u-u_k\|_X + \|g_k\|_X \gtrsim \|g_k\|_{M_\phi^*} = \sup_{0<t<\omega_n r_k^n} (\nabla u)^*(t) \phi(t) \\
& = \sup_{0<r<r_k} |f'(r)| \phi(\omega_n r^n)
\approx \sup_{0<r<r_k} |f'(r)|\phi(r^n) \approx 1,
\end{align*}
whence $u \in \NX$ cannot be approximated in $\NX$ by its truncations.
\end{exa}
\section{Weak type boundedness of the maximal operator}
\label{sec:weaktype}
The general main result for $p$-Poincar\'e spaces, Theorem~\ref{thm:general_Lip-dens}, relies on the fact that the maximal function $M_pg$ fulfills the weak estimate $\smash{\bigl\|\sigma \chi_{\suplevp{p}{g}{\sigma}}\bigr\|_X} \to 0$ as $\sigma\to\infty$ whenever $g\in X$. Recall that $\suplevp{p}{g}{\sigma}$ denotes the superlevel set of $M_pg$ with level~$\sigma$, i.e., $\suplevp{p}{g}{\sigma} = \{x\in\Pcal: M_p g(x) > \sigma\}$. In this section, we will show that this condition is satisfied in the setting of \ri spaces if $M_p: X \to M^*(X)_\fm$ is bounded. Furthermore, we will establish various sufficient conditions on $X$, and in particular on its fundamental function, that guarantee such boundedness of $M_p$.
The Herz--Riesz inequality is a crucial tool for studying the maximal operators on \ri spaces. It allows us to compare the elementary maximal function, i.e., the maximal function of the rearrangement, with the rearrangement of the maximal function.
\begin{pro}[Herz--Riesz inequality]
\label{pro:herz}
There are constants $c,c'>0$ such that
\[
c (M_1 u)^*(t) \le u^{**}(t) \le c' (M_1 u)^*(t),\quad t\in (0, \meas{\Pcal}),
\]
whenever $u\in \Mcal(\Pcal, \mu)$.
\end{pro}
F.~Riesz~\cite{Rie} used the rising sun lemma to prove the estimate $(M_1u)^* \lesssim u^{**}$ for functions defined on the interval $[0,1]$ in 1932. The inequality in (unweighted) $\Rbb^n$ follows from Wiener~\cite{Wie}. The converse estimate was established much later (1968) and is attributable to Herz~\cite{Her} in one dimension, and to Bennett and Sharpley~\cite{BenSha} in $n$ dimensions. See also Asekritova, Kruglyak, Maligranda, and Persson~\cite{AseKruMalPer}.
\begin{proof}
If $u \notin L^1_\loc(\Pcal)$, then trivially $(M_1 u)^* = u^{**}\equiv \infty$. We can therefore suppose that $u \in L^1_\loc(\Pcal)$.
The proof of Theorem III.3.8 in Bennett and Sharpley~\cite{BenSha} works verbatim for the left-hand inequality even in the setting of metric measure spaces.
The proof of the right-hand inequality is however somewhat more involved. Let $u \in L^1_\loc(\Pcal)$ and $t>0$ be given. We may suppose that $(M_1 u)^*(t) < \infty$ since the inequality holds trivially otherwise. Let $E = \suplevp{1}{u}{(M_1u)^*(t)}$, i.e., $E=\{x\in\Pcal: M_1 u(x) > (M_1 u)^*(t)\}$. Then, $\meas{E} \le t$, and $E$ is open since $M_1 u$ is lower semicontinuous by~\cite[Lemma 3.12]{BjoBjo}.
Let $\{B_\alpha\}_{\alpha \in I}$ be a Whitney-type covering of $E$ by open balls, i.e., it satisfies:
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item if $\alpha, \beta \in I$ and $\alpha \neq \beta$, then $B_\alpha \cap B_\beta = \emptyset$,
\item $E = \bigcup_{\alpha\in I} c_W B_\alpha$,
\item $4c_W B_\alpha \setminus E \neq \emptyset$ whenever $\alpha \in I$,
\end{enumerate}
where $c_W\ge 1$ is an absolute constant. Existence of such a covering is established, e.g., in Auscher and Bandara~\cite[Theorem 2.3.4]{AusBan}. Since $\Pcal = \spt \mu$ is a Lindel\"{o}f space by~\cite[Proposition 1.6]{BjoBjo}, the index set $I$ may be assumed at most countable. Let now $G = \bigcup_{\alpha\in I} 4c_W B_\alpha$, $v=u \chi_G$, and $w = u-v = u \chi_{\Pcal \setminus G}$. Then, we can estimate
\[
\|w\|_{L^\infty} \le \| u \chi_{\Pcal \setminus E}\|_{L^\infty} \le \| (M_1u) \chi_{\Pcal \setminus E}\|_{L^\infty} \le (M_1u)^*(t).
\]
For every $\alpha\in I$, there is $x_\alpha \in 4c_W B_\alpha \setminus E$. Hence,
\[
(M_1u)^*(t) \ge M_1u (x_\alpha) \ge \fint_{4c_WB_\alpha} |u|\,d\mu.
\]
This allows us to estimate
\begin{align*}
\|v\|_{L^1} & = \int_G |u|\,d\mu \le \sum_{\alpha \in I} \int_{4c_WB_\alpha} |u|\,d\mu \le \sum_{\alpha \in I} (M_1u)^*(t) \meas{4c_WB_\alpha} \\
& \le \tilde{c} (M_1u)^*(t) \sum_{\alpha \in I} \meas{B_\alpha} \le \tilde{c} (M_1u)^*(t) \meas{E} \le \tilde{c} t (M_1u)^*(t),
\end{align*}
where $\tilde{c}\ge1$ depends only on the doubling constant of $\mu$ and on $c_W$. Due to the subadditivity of the elementary maximal operator, we obtain that
\begin{align*}
u^{**}(t) & \le v^{**}(t) + w^{**}(t) = \fint_0^t (v^*(s) + w^*(s))\,ds \\
& \le \frac{\|v\|_{L^1}}{t} + \|w\|_{L^\infty} \le (\tilde{c}+1) (M_1u)^*(t).
\qedhere
\end{align*}
\end{proof}
As a direct consequence of the Herz--Riesz inequality for $M_1$, we can also estimate the rearrangement of $M_p u$ for any $p\in[1, \infty)$.
\begin{cor}[Herz--Riesz inequality]
\label{cor:herz}
For every $p\in[1,\infty)$, there are $c,c'>0$ such that
\[
c (M_p u)^*(t) \le M_pu^*(t) \chi_{(0, \meas{\Pcal})}(t) \le c' (M_p u)^*(t),\quad t\in \Rbb^+,
\]
whenever $u\in \Mcal(\Pcal, \mu)$.
\end{cor}
\begin{proof}
By the definition of the decreasing rearrangement, we have $(M_p u)^*(t) = 0$ whenever $t\ge \meas{\Pcal}$. In view of Proposition~\ref{pro:herz}, we can estimate for $t < \meas{\Pcal}$ that
\begin{align*}
(M_p u)^*(t) & = ((M_1 |u|^p)^{1/p})^*(t) = (M_1 |u|^p)^*(t)^{1/p} \approx (|u|^p)^{**}(t)^{1/p} \\
& = M_1 ((|u|^p)^*)(t)^{1/p} = M_1((u^*)^p)(t)^{1/p} = M_p u^* (t).
\qedhere
\end{align*}
\end{proof}
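As a quick illustration of the corollary, consider $u = \chi_E$ for a measurable set $E\subset\Pcal$ with $\meas{E} = a \in (0, \infty)$. Then, $u^* = \chi_{(0,a)}$ and
\[
M_p u^*(t) = \biggl( \fint_0^t \chi_{(0,a)}(s)\,ds \biggr)^{1/p} = \min\biggl\{1, \Bigl(\frac{a}{t}\Bigr)^{1/p}\biggr\}, \quad t\in\Rbb^+,
\]
whence $(M_p \chi_E)^*(t) \approx \min\{1, (a/t)^{1/p}\}$ for $t\in(0, \meas{\Pcal})$. In particular, $M_p\chi_E$ always lies in weak-$L^p$, while it fails to lie in $L^p$ whenever $\meas{\Pcal}=\infty$.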
The following lemma shows that if $M_p$ is a bounded operator from $X$ to $X_\fm$, then it obeys the desired weak type estimate. In this case, we allow $X$ to be a more general function lattice with absolutely continuous norm. Recall that a (sub)linear operator~$T$ is bounded from $X$ to $X_\fm$ if for every $E\subset \Pcal$ of finite measure there is $c_E \ge 0$ such that $\|(T u)\chi_E\|_X \le c_E \|u\|_X$ whenever $u\in X$.
\begin{lem}
\label{lem:key_est-M_bdd}
Let $X$ be a quasi-Banach function lattice with absolutely continuous quasi-norm. Suppose that $X \subset L^p_\fm$ and that $M_p: X \to X_\fm$ is bounded for some $p\in[1, \infty)$. If $v\in X$, then $\bigl\|\sigma \chi_{\suplevp{p}{v}{\sigma}}\bigr\|_X \to 0$ as $\sigma \to \infty$.
\end{lem}
\begin{proof}
Since $M_p v\in X_\fm \subset L^p_\fm$, we may use Lemma~\ref{lem:superlevelsets-to-zero} to prove that there is $\sigma_0>0$ such that $\meas{\suplevp{p}{v}{\sigma_0}} < \infty$ and that $\meas{\suplevp{p}{v}{\sigma}} \to 0$ as $\sigma \to \infty$.
Then, using a Chebyshev-type estimate for $\sigma \ge \sigma_0$ and the boundedness of $M_p$, we obtain that
\[
\| \sigma\chi_{\suplevp{p}{v}{\sigma}}\|_X \le \| (M_pv) \chi_{\suplevp{p}{v}{\sigma}}\|_X \le \| (M_pv) \chi_{\suplevp{p}{v}{\sigma_0}}\|_X \le c_{\sigma_0} \|v\|_X < \infty.
\]
The absolute continuity of the norm gives that $\|(M_pv) \chi_{\suplevp{p}{v}{\sigma}}\|_X \to 0$ as $\sigma\to\infty$ since $(M_pv) \chi_{\suplevp{p}{v}{\sigma_0}} \in X$ and $\meas{\suplevp{p}{v}{\sigma}} \to 0$. Hence, $\| \sigma \chi_{\suplevp{p}{v}{\sigma}}\|_X \to 0$ as $\sigma\to\infty$.
\end{proof}
The following example shows that the assumption on absolute continuity of the quasi-norm of $X$ is crucial in the previous lemma and without it the weak type estimate may fail even though $M_p$ is bounded.
\begin{exa}
\label{exa:Mp-bdd_notAC}
The Herz--Riesz inequality (or the Marcinkiewicz interpolation theorem) yields that $M_p: L^{q,s} \to L^{q,s}$ is bounded for all $q\in (p, \infty)$ and $s\in[1, \infty]$. Let us consider $X = L^{q, \infty}(\Rbb^+)$ for arbitrary $q\in (p, \infty)$ and $g(t) = t^{-1/q}$ for $t \in \Rbb^+$. Since $g$ is decreasing, we obtain that $M_p g(t) = \bigl(\fint_0^t s^{-p/q}\,ds\bigr)^{1/p} = c_{p,q} t^{-1/q}$ for every $t\in\Rbb^+$, where $c_{p,q} = (1-p/q)^{-1/p}$. Hence, $\suplevp{p}{g}{\sigma} = (0, c_{p,q}^q / \sigma^q)$, which gives
\[
\Bigl\| \sigma \chi_{\suplevp{p}{g}{\sigma}}\Bigr\|_X = \sigma \sup_{t>0} \chi_{\suplevp{p}{g}{\sigma}}^*(t) \phi_X(t) = \sigma \sup_{0<t<c^q_{p,q}/\sigma^q} t^{1/q} = c_{p,q} > 0
\]
regardless of the value of $\sigma>0$.
\end{exa}
In the rest of this section, we describe weak boundedness (on sets of finite measure) of $M_p$ on \ri spaces. This task is made considerably easier by the Herz--Riesz inequality, which reduces the problem to investigating the behavior of the maximal functions on $\Rbb^+$ with the $1$-dimensional Lebesgue measure instead.
\begin{rem}
It follows from the Herz--Riesz inequality that $M_p: X \to M^*(Y)$ is bounded if and only if $M_p: \reps{X} \to M^*(\reps{Y})$ is bounded whenever $X$ and $Y$ are \ri spaces over $(\Pcal, \mu)$ and $p\in[1, \infty)$.
In fact, it can be also shown that $M_p: X \to M^*(Y)_\fm$ is bounded if and only if $M_p: \reps{X} \to M^*(\reps{Y})_\fm$ is bounded. The proof of this statement is however more involved since a uniform correspondence between sets of finite measure in $\Rbb^+$ and in $\Pcal$ needs to be established. In other words, given $f\in \reps{X}$, an equimeasurable $u\in X$ needs to be found so that its level sets have a certain structure, independently of $f$.
Neither of these claims will be used in this paper, whence their proof is omitted.
\end{rem}
Next, we will see that the mere weak boundedness of $M_p$ on sets of finite measure for \ri spaces is sufficient for the desired weak type estimate. Considering the simple example $X=L^p$, we see that weak boundedness of $M_p$ is indeed more general than boundedness of $M_p$, which was required in Lemma~\ref{lem:key_est-M_bdd}. Note that we cannot omit the hypothesis that the norm of $X$ is absolutely continuous; without it, the claim fails, as we have already observed in Example~\ref{exa:Mp-bdd_notAC}.
\begin{lem}
\label{lem:key-Mp-wbdd}
Let $X$ be an \ri space with absolutely continuous norm. Suppose that $M_p: X \to M^*(X)_\fm$ is bounded for some $p\in[1, \infty)$. If $v\in X$, then $\bigl\|\sigma \chi_{\suplevp{p}{v}{\sigma}}\bigr\|_X \to 0$ as $\sigma \to \infty$.
\end{lem}
\begin{proof}
First, we shall show that $X \emb L^p_\fm$. Let $u\in X$. Since $(M_p u)\chi_A \in M^*(X)$ for every measurable set $A\subset \Pcal$ of finite measure, we have $M_p u <\infty$ a.e.\@ in $\Pcal$. Consequently, $u\chi_B \in L^p$ for every ball $B\subset \Pcal$. Let $E \subset \Pcal$ with $\meas{E}<\infty$. Then, there exists a ball $B_E \subset \Pcal$ such that $\meas{E} \le \meas{B_E} < \infty$. By~\cite[Corollary~II.7.8]{BenSha}, there is a measurable function $\tilde{u} = \tilde{u} \chi_{B_E}$ such that $\tilde{u}^* = (u\chi_E)^*$. By the lattice property~\ref{df:BFL.latticeprop} of $\reps{X}$, we obtain that $\tilde{u} \in X$, whence $\tilde{u} \chi_{B_E} \in L^p$. Now, $\|u\chi_E\|_{L^p} =\|\tilde{u}\chi_{B_E}\|_{L^p} < \infty$, which shows that $u\in L^p_\fm$. Thus, $X \subset L^p_\fm$. In particular, the spaces of functions restricted to $E$ satisfy $X(E) \subset L^p(E)$. These spaces are \ri spaces as well and hence the embedding is continuous by~\cite[Theorem~I.1.8]{BenSha}, i.e., $\|u\chi_E\|_{L^p(\Pcal)} = \|u\|_{L^p(E)} \le c_E \|u\|_{X(E)} = c_E \|u\chi_E\|_{X}$. Therefore, $X \emb L^p_\fm$.
Next, we will show that $\meas{\suplevp{p}{v}{\sigma}} \to 0$ as $\sigma\to\infty$. Suppose on the contrary that $\meas{\suplevp{p}{v}{\sigma}} > a >0$ for every $\sigma>0$. Then, there exist pairwise disjoint sets $F_k \subset \suplevp{p}{v}{k\log k}$ that satisfy $\meas{F_k} = a /(k^2 + k)$, $k\in\Nbb$. Therefore, $M_p v \ge \sum_{k=1}^\infty (k \log k) \chi_{F_k}$. We also have $M^*(X) \emb L^{p,\infty}_\fm$, which follows from the inequality $\phi_{L^p} (t) \le c_b \phi_X(t)$ for $t\in (0, b)$ with arbitrary $b>0$, which in turn follows from the embedding $X \emb L^p_\fm$. Let $F = \bigcup_{k=1}^\infty F_k$. Then, $\meas{F} = a < \infty$ and
\begin{align*}
\infty &= \sup_{k\ge1} \biggl(\frac{a}{k}\biggr)^{1/p} k\log k = \sup_{t>0} t^{1/p} \sum_{k=1}^\infty (k\log k) \chi_{[a/(k+1), a/k)}(t) \\
& = \biggl\| \sum_{k=1}^\infty (k\log k)\chi_{F_k} \biggr\|_{L^{p,\infty}} \le c_F \|(M_pv) \chi_F\|_{M^*(X)} \le c_F' \|v\|_X < \infty,
\end{align*}
which is a contradiction and hence $\meas{\suplevp{p}{v}{\sigma}} \to 0$ as $\sigma \to \infty$.
Let $\sigma_0>0$ be chosen such that $\meas{\suplevp{p}{v}{\sigma_0}} < \infty$. For $\sigma > \sigma_0$, we define $v_\sigma = v \chi_{\suplev{v}{\sigma/2}}$. Then, $M_p v \le M_p v_\sigma + \sigma/2$. In particular, $M_p v_\sigma > \sigma/2$ on $\suplevp{p}{v}{\sigma}$. Using a Chebyshev-type estimate and the boundedness of $M_p: X \to M^*(X)_\fm$, we see that
\begin{align*}
\bigl\|\sigma \chi_{\suplevp{p}{v}{\sigma}}\bigr\|_{M^*(X)} \lesssim \bigl\|(M_p v_\sigma) \chi_{\suplevp{p}{v}{\sigma_0}}\bigr\|_{M^*(X)}
\le c_{\sigma_0} \|v_\sigma\|_{X} = c_{\sigma_0} \|v \chi_{\suplev{v}{\sigma/2}}\|_{X} \to 0
\end{align*}
as $\sigma\to\infty$ since the norm of $X$ is absolutely continuous and $\meas{\bigcap_{\sigma>0} \suplev{v}{\sigma}}=0$. Since $X$ and $M^*(X)$ share the same fundamental function $\phi$, we have $\bigl\|\sigma \chi_{\suplevp{p}{v}{\sigma}}\bigr\|_X = \sigma \phi\bigl(\meas{\suplevp{p}{v}{\sigma}}\bigr) = \bigl\|\sigma \chi_{\suplevp{p}{v}{\sigma}}\bigr\|_{M^*(X)}$, whence the claim follows.
\end{proof}
Due to the definitions of the Marcinkiewicz-type spaces $M^p(X)$, $M^p_\loc(X)$, and $M^*(X)$, we will obtain that $M_p$ is weakly bounded on $M^p(X)$, and on $M^p_\loc(X)$ on sets of finite measure. Consequently, $M_1$ is weakly bounded on all \ri spaces. In view of Lemma~\ref{lem:key-Mp-wbdd}, we obtain the desired weak estimate, which is needed to conclude the density of Lipschitz functions in $\NX$ on $1$-Poincar\'e spaces using Theorem~\ref{thm:general_Lip-dens}, whenever $X$ is an \ri space.
\begin{pro}
\label{pro:Mp-wbdd}
Let $X$ be an \ri space. Then, $M_p: M^p(X) \to M^*(X)$ is bounded for all $p\in[1, \infty)$. In particular, $M_1: X\to M^*(X)$ is bounded.
Furthermore, $M_p: M^p_\loc(X)\to M^*(X)_\fm$ is bounded. If $X \emb M^p_\loc(X)$, then in particular $M_p: X\to M^*(X)_\fm$ is bounded.
\end{pro}
\begin{proof}
Let $u\in M^p(X)$. Then, the Herz--Riesz inequality yields
\[
\|M_p u\|_{M^*(X)} = \sup_{t>0} (M_pu)^*(t) \phi(t) \approx \sup_{t>0} M_p u^*(t) \phi(t) = \|u\|_{M^p(X)}.
\]
The restriction $M_1: X \to M^*(X)$ is bounded since $X \emb M(X) = M^1(X)$ by \eqref{eq:LambdaX-X-MX-embed}.
Let now $u\in M^p_\loc(X)$. Let $E \subset \Pcal$ with $\meas{E}<\infty$. With appeal to the Herz--Riesz inequality, we obtain
\begin{align*}
\|(M_p u) \chi_E\|_{M^*(X)} &= \sup_{t>0} ((M_pu) \chi_E)^*(t) \phi(t) \le \sup_{0<t<\meas{E}} (M_pu)^*(t) \phi(t)\\
& \approx \sup_{0<t<\meas{E}} M_pu^*(t) \phi(t) \le \sup_{0<t<1+\meas{E}} M_pu^*(t) \phi(t).
\end{align*}
By the quasi-concavity of $\phi$, we have that $\phi(t)/(1+\meas{E}) \le \phi\bigl(t/(1+\meas{E})\bigr)$. Monotonicity of $M_pu^*$ then gives us that
\begin{align*}
\sup_{0<t<1+\meas{E}} M_pu^*(t) \phi(t) & \le (1+\meas{E}) \sup_{0<t<1+\meas{E}} M_pu^*\Biggl(\frac{t}{1+\meas{E}}\Biggr) \phi\Biggl(\frac{t}{ 1+\meas{E}}\Biggr) \\
& = (1+\meas{E}) \|u\|_{M^p_\loc(X)}\,.
\end{align*}
We have thus shown that $\|(M_p u) \chi_E\|_{M^*(X)} \le c_E \|u\|_{M^p_\loc(X)}$. The boundedness of $M_p: X \to M^*(X)_\fm$ immediately follows provided that $X \emb M^p_\loc(X)$.
\end{proof}
Let us now take a look at an example that illustrates the difference in strength of the claims for $M^p(X)$ and $M^p_\loc(X)$ in the previous proposition.
\begin{exa}
\label{exa:Mp-vs-Mploc}
Suppose that $1 \le q < p \le s < \infty$ and let $X = (L^q \cap L^s)(\Pcal)$ with a norm given by $\|u\|_X = \max \{ \|u\|_{L^q(\Pcal)}, \|u\|_{L^s(\Pcal)}\}$ for $u\in\Mcal(\Pcal, \mu)$. Then, $X$ has fundamental function $\phi(t) = \max\{t^{1/q}, t^{1/s}\}$ for $t\in(0, \meas{\Pcal})$. The H\"older inequality yields that
\begin{align*}
\|u\|_{M_\loc^p(X)} & = \sup_{0<t<1} M_pu^*(t)\phi(t) = \sup_{0<t<1} \biggl( \fint_0^t u^*(\tau)^p\,d\tau\biggr)^{1/p} t^{1/s} \\
& \le \sup_{0<t<1} \biggl( \fint_0^t u^*(\tau)^s\,d\tau\biggr)^{1/s} t^{1/s} = \|u^* \chi_{(0,1)}\|_{L^s(\Rbb^+)} \le \|u\|_{L^s(\Pcal)} \le \|u\|_X.
\end{align*}
Hence, $X \emb M^p_\loc(X)$. If $\meas{\Pcal}<\infty$, then $M^p(X) = M^p_\loc(X)$, which gives us that $X \emb M^p(X)$ in this case. Suppose instead that $\meas{\Pcal}=\infty$. Then,
\begin{align*}
\sup_{t>1} M_pu^*(t)\phi(t) = \sup_{t>1} \biggl( \fint_0^t u^*(\tau)^p\,d\tau\biggr)^{1/p} t^{1/q} = \sup_{t>1} t^{1/q-1/p} \|u^* \chi_{(0,t)}\|_{L^p(\Rbb^+)} = \infty
\end{align*}
unless $\|u\|_{L^p} = 0$. Thus, $M^p(X) = \{ u \in \Mcal(\Pcal, \mu): u=0\mbox{ a.e.}\}$.
Therefore, boundedness of the maximal operator $M_p$ on $M^p(X)$ does not provide us with any useful information on boundedness of $M_p$ on $X$ if $\meas\Pcal = \infty$.
On the other hand, the previous proposition yields that $M_p: X \to M^*(X)_\fm$ (regardless of the measure of $\Pcal$), which suffices in Lemma~\ref{lem:key-Mp-wbdd} to obtain the weak type estimate that is used in Theorem~\ref{thm:general_Lip-dens} to prove density of Lipschitz functions in $\NX$ on $p$-Poincar\'e spaces. See also Example~\ref{exa:Lip-dens-LqcapLs} below, where the case when $0<q<1$ is discussed as well.
\end{exa}
The following technical lemma helps us find a function $\psi$ that dominates a given quasi-concave function $\phi$ and whose $p$th power $\psi^p$ is quasi-concave on $[0, 1)$. Moreover, $M^p_{\psi} = M^p_{\phi}$ and $M^p_{\psi, \loc} = M^p_{\phi, \loc}$. In fact, $\psi$ coincides with the fundamental function of $M^p_{\phi, \loc}$ on $[0,1]$.
\begin{lem}
\label{lem:quasiconc_maj}
Let $\phi$ be a quasi-concave function that is constant on $(\meas{\Pcal}, \infty)$ and let $p\in[1, \infty)$. Define $\psi$ by
\begin{equation}
\label{eq:quasiconv_dom}
\psi(t) =
\begin{cases}
0&\mbox{for }t=0,\\
\displaystyle{t^{1/p} \sup_{t \le s \le 1} \frac{\phi(s)}{s^{1/p}}}& \mbox{for }0< t\le1,\\
\phi(t)&\mbox{for }t \ge 1.
\end{cases}
\end{equation}
Then, $\psi^p$ is quasi-concave on $[0,1)$. Moreover, $M^p_\psi = M^p_\phi$ and $M^p_{\psi, \loc} = M^p_{\phi, \loc}$ with equality of the respective (quasi)norms.
\end{lem}
Observe that if $\phi^p$ is quasi-concave on $[0,1)$, then the supremum is attained for $s=t$ whence $\phi(t) = \psi(t)$ for every $t\ge0$.
\begin{proof}
Let us first show that $\psi^p$ is indeed quasi-concave on $[0,1)$. It follows directly from its definition that $\psi(t)^p/t$ is decreasing for $t\in(0,1)$. Let now $0 < t_1 < t_2 \le 1$. Due to the continuity of $\phi(s)^p/s$ on $[t_1, 1]$, the suprema defining $\psi(t_1)$ and $\psi(t_2)$ are attained at $s_1\in[t_1, 1]$ and $s_2\in[t_2, 1]$, respectively. We distinguish two cases. If $s_1\ge t_2$, then we may choose $s_2=s_1$, whence $\psi(t_2)^p/\psi(t_1)^p = t_2/t_1 > 1$. Therefore, $\psi(t_2)^p>\psi(t_1)^p$. Now, suppose instead that $s_1\in[t_1, t_2)$. Then,
\[
0<\psi(t_1)^p = \frac{t_1 \phi(s_1)^p}{s_1} \le \phi(t_1)\phi(s_1)^{p-1} \le \phi(t_2)^p = \frac{t_2 \phi(t_2)^p}{t_2} \le \frac{t_2 \phi(s_2)^p}{s_2} = \psi(t_2)^p.
\]
The local norms are equal since
\begin{align*}
\| u \|_{M^p_{\psi,\loc}} & = \sup_{0<t< 1} M_p u^*(t)\psi(t)
= \sup_{0<t< 1} \biggl( \sup_{t < x < 1} \frac{\phi(x)^p}{x} \int_0^t u^*(s)^p\,ds \biggr)^{1/p} \\
& = \biggl( \sup_{0 < x < 1} \frac{\phi(x)^p}{x} \int_0^x u^*(s)^p\,ds \biggr)^{1/p}
= \sup_{0<x< 1} M_p u^*(x)\phi(x) = \| u \|_{M^p_{\phi,\loc}}.
\end{align*}
For the global norms, we have
\begin{align*}
\| u \|_{M^p_\psi} & = \sup_{t>0} M_p u^*(t)\psi(t) = \max \biggl\{ \sup_{0<t< 1} M_p u^*(t)\psi(t), \sup_{t \ge 1} M_p u^*(t)\psi(t)\biggr\} \\
& = \max \biggl\{ \sup_{0<t< 1} M_p u^*(t)\phi(t), \sup_{t\ge 1} M_p u^*(t)\phi(t)\biggr\} = \| u \|_{M^p_\phi}\,.
\qedhere
\end{align*}
\end{proof}
Since the space $M^p_\loc(X)$ is defined using $M_p$, the question of when $X \emb M^p_\loc(X)$ is in principle equivalent to determining whether $M_p$ is weakly bounded on $X$ on sets of finite measure. We can however find classical Lorentz spaces that are embedded in $M^p_\loc(X)$. Hence, we may use Lemma~\ref{lem:key-Mp-wbdd} to obtain the desired weak estimate whenever we show that $X$ is embedded in such a Lorentz space.
\begin{lem}
\label{lem:Lambdap_MpX}
Let $X$ be an \ri space with fundamental function $\phi$ and let $\psi$ be defined by \eqref{eq:quasiconv_dom}. Then, $\Lambda^p_\psi \emb M^p_\loc(X)$.
In particular, if $X \emb \Lambda^q_{\psi, \fm}$ for some $q\le p$, then $X \emb M^p_\loc(X)$.
\end{lem}
\begin{proof}
Lemma~\ref{lem:quasiconc_maj} and the Herz--Riesz inequality yield
\begin{align*}
\| u \|_{M^p_{\loc}(X)} & = \| u \|_{M^p_{\phi,\loc}} = \| u \|_{M^p_{\psi,\loc}}
= \sup_{0<t<1} M_p u^*(t) \psi(t) \\
&= \sup_{0<t<1} \frac{\psi(t)}{t^{1/p}} \biggl(\int_0^t u^*(s)^p\,ds\biggr)^{1/p} \le \biggl(\int_0^1 u^*(s)^p\psi(s)^p \frac{ds}{s}\biggr)^{1/p} = \|u \chi_G\|_{\Lambda^p_\psi},
\end{align*}
where $G \in \suplevr{u}{\min\{1, \meas{\Pcal}\}}$.
Finally, we obtain that
\[
\|u \chi_G\|_{\Lambda^p_\psi} \le c_{p,q} \|u \chi_G\|_{\Lambda^q_\psi} \le c_{p,q}c_{\meas{G}} \|u\|_X
\]
by the embedding of the classical Lorentz spaces (Lemma~\ref{lem:lorentz-embedding}).
\end{proof}
\begin{rem}
If $\psi^p$ is the highest power of $\psi$ that is quasi-concave on $(0,1)$, then the inclusion $\smash{\Lambda^p_{\psi,\fm}} \emb M^p_\loc(X)$ is rather sharp, in particular when comparing the classical Lorentz spaces with $M^p_\loc(X)$. For example, let $X=L^{q,s}$ with $q \in (1, p)$ and $s\in[1, \infty]$ and suppose that $\meas\Pcal = \infty$. Then, $\psi(t) = \max\{t^{1/p}, t^{1/q}\}$. Similarly as in Example~\ref{exa:Mp-vs-Mploc}, we obtain that $M^p(X) = M^p_\psi = \{u\in\Mcal(\Pcal, \mu): u=0\mbox{ a.e.}\}$. On the other hand, $M^p_\loc(X) = L^p_\fm(\Pcal) = \Lambda^p_{\psi,\fm}(\Pcal)$ since
\begin{align}
\notag
\|u \|_{M^p_\loc(X)} & = \|u \|_{M^p_{\psi,\loc}} = \sup_{0<t<1} \biggl(\fint_0^t u^*(\tau)^p\,d\tau\biggr)^{1/p}\psi(t) \\ & = \sup_{0<t<1} \biggl(\int_0^t u^*(\tau)^p\,d\tau\biggr)^{1/p}
= \|u^* \chi_{(0,1)}\|_{L^p(\Rbb^+)} = \|u^* \chi_{(0,1)}\|_{\Lambda^p_\psi(\Rbb^+)}\,.
\label{eq:Mploc-vs-Lambdapsi}
\end{align}
If $X=L^{p, s}$ with $s\in[1, \infty]$, then $\psi(t) = \phi(t) = t^{1/p}$ and a calculation analogous to~\eqref{eq:Mploc-vs-Lambdapsi} yields that $M^p(X) = L^p = \Lambda^p_\psi$, while $M^p_\loc(X) = L^p_\fm = \Lambda^p_{\psi,\fm}$.
If $\phi^q$ is quasi-concave for some $q>p$, then $\psi = \phi$ and Lemma~\ref{lem:fi-quasiconc_weak2weak-glob} below yields that $M_p: M^*(\reps X) \to M^*(\reps X)$ is bounded. Then, $X\emb M^p(X)$ as the boundedness of $M_p$ and \eqref{eq:LambdaX-X-MX-embed} give that $\|u\|_{M^p(X)} = \|M_p u^*\|_{M^*(\reps{X})} \lesssim \|u^*\|_{M^*(\reps{X})} \le \|u^*\|_\reps{X} = \|u\|_X$.
\end{rem}
Without any additional information on the structure of the norm of an \ri space $X$, it is nearly impossible to describe when $X \emb M^p_\loc(X)$. We can however establish rather general characterizations of the boundedness of $M_p$ on sets of finite measure when both the source space and the target space are weak Marcinkiewicz spaces. In other words, we will study when $M^*(X) \emb M^p_\loc(X)$ using the properties of the fundamental function, which proves helpful since $X \emb M^*(X)$ as seen in \eqref{eq:LambdaX-X-MX-embed}.
\begin{pro}
\label{pro:Mp-weak2weak}
For every $p\in[1, \infty)$, the mapping $M_p: M^*(X) \to M^*(X)_\fm$ is bounded if and only if
\begin{equation}
\label{eq:phi-p-avg_bdd}
\sup_{0<t<1} \phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} < \infty.
\end{equation}
\end{pro}
\begin{proof}
Suppose first that $M_p: M^*(X) \to M^*(X)_\fm$ is bounded. By~\cite[Corollary~II.7.8]{BenSha}, there exists $v \in M^*(X)$ such that $v^* = 1/\phi$. Let $A \in \suplevr{M_p v}{1}$. Then, the Herz--Riesz inequality yields that
\begin{multline*}
\sup_{0<t<1} \biggl( \phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p}\biggr)^{1/p}
= \sup_{0<t<1} M_p \frac{1}{\phi} (t) \phi(t)
= \sup_{0<t<1} M_p v^*(t) \phi(t) \\
\approx \sup_{0<t<1} (M_p v)^*(t) \phi(t)
= \| (M_p v) \chi_{A}\|_{M^*(X)}
\lesssim \| v \|_{M^*(X)} = \biggl\| \frac{1}{\phi} \biggr\|_{M^*(X)}
= 1.
\end{multline*}
For the proof of the converse, suppose that the expression $\phi(t)^p \fint_0^t \phi(s)^{-p}ds$ is bounded for $t\in(0,1)$. Due to the continuity and monotonicity of $\phi$, it follows that it is bounded for $t\in(0, b)$ for every $b \in \Rbb^+$. Let $u\in M^*(X)$ and $E\subset \Pcal$ with $b \coloneq \meas{E} < \infty$. Using the Herz--Riesz inequality, we obtain that
\[
\|(M_p u) \chi_E\|_{M^*(X)}
\le \sup_{0<t<b} (M_pu)^*(t) \phi(t) \approx \sup_{0<t<b} M_pu^*(t) \phi(t).
\]
We can also estimate
\begin{align*}
\sup_{0<t<b} M_pu^*(t) \phi(t) & = \sup_{0<t<b} \phi(t) \biggl(\fint_0^t u^*(s)^p \phi(s)^p \frac{ds}{\phi(s)^p}\biggr)^{1/p}
\\
&\le \sup_{0<t<b} \biggl( \phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p}\biggr)^{1/p} \sup_{s>0} u^*(s) \phi(s) = c_E \|u\|_{M^*(X)}.
\end{align*}
Hence, $\|(M_p u) \chi_E\|_{M^*(X)} \le c_{E} \|u\|_{M^*(X)}$ as desired.
\end{proof}
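As a quick illustration, if $\phi(t) = t^{1/p}$, which is the fundamental function of $L^{p,q}$ for every $q\in[1,\infty]$, then
\[
\phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} = \fint_0^t \frac{t}{s}\,ds = \infty \quad\mbox{for every }t\in(0,1),
\]
so \eqref{eq:phi-p-avg_bdd} fails and hence $M_p: M^*(X) \to M^*(X)_\fm$ is not bounded for any \ri space $X$ with this fundamental function.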
The following characterization of the global boundedness of $M_p$ on the weak space $M^*(X)$ can be proven along the same lines. Thus, the proof is omitted.
\begin{pro}
\label{pro:Mp-weak2weak-glob}
For every $p\in[1, \infty)$, the mapping $M_p: M^*(X) \to M^*(X)$ is bounded if and only if
\begin{equation}
\label{eq:phi-p-avg_bdd-glob}
\sup_{t>0} \phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} < \infty.
\end{equation}
\end{pro}
We will see in the next lemma that \eqref{eq:phi-p-avg_bdd} is satisfied provided that a certain power of $\phi$ is quasi-concave near zero. In particular, it follows that $M_p: X \to M^*(X)_\fm$ is bounded and Lemma~\ref{lem:key-Mp-wbdd} can be applied to verify the hypotheses of Theorem~\ref{thm:general_Lip-dens}, the general result on density of Lipschitz functions in $\NX$.
\begin{lem}
\label{lem:fi-quasiconc_weak2weak}
Let $\phi: [0,\infty) \to [0, \infty)$ be an increasing function with $\phi(t) = 0$ if and only if $t=0$. Let $q>p\ge1$. If $\phi^q$ is concave or quasi-concave on $[0, \delta)$ for some $\delta>0$, then the condition \eqref{eq:phi-p-avg_bdd} is fulfilled.
\end{lem}
\begin{proof}
If $\phi^q$ is concave on $[0, \delta)$, then it is quasi-concave there by \eqref{eq:concave-quasiconcave}.
Suppose now that $\phi^q$ is quasi-concave on $[0, \delta)$. For $t\in (0, \delta)$, we obtain
\[
\phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} = \biggl( \frac{\phi(t)^q}{t}\biggr)^{p/q} t^{p/q} \fint_0^t \frac{ds}{\phi(s)^p} \le t^{p/q} \fint_0^t \frac{ds}{s^{p/q}} = \frac{q}{q-p}\,.
\]
If $\delta<1$, then $\phi^p, \phi^{-p} \in L^\infty([\delta, 1])$, giving the claimed boundedness on $(0,1)$.
\end{proof}
We can also obtain a global result by a minor tweak of the previous argument.
\begin{lem}
\label{lem:fi-quasiconc_weak2weak-glob}
Let $\phi$ be a quasi-concave function. If $\phi^q$ is concave or quasi-concave for some $q>p\ge1$, then $M_p: M^*(X) \to M^*(X)$ is bounded.
\end{lem}
\begin{proof}[Sketch of proof]
Analogously as in the proof of Lemma~\ref{lem:fi-quasiconc_weak2weak}, we show that \eqref{eq:phi-p-avg_bdd-glob} holds true, i.e., $\sup_{t>0} \phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} < \infty$ since the expression can be estimated from above by $q/(q-p)$ for all $t\in\Rbb^+$. By Proposition~\ref{pro:Mp-weak2weak-glob}, we can conclude that $M_p: M^*(X) \to M^*(X)$ is bounded.
\end{proof}
In order to show that \eqref{eq:phi-p-avg_bdd} holds, we may, roughly speaking, measure the ``modulus of quasi-concavity'' of the fundamental function. This gives us a slightly finer condition that generalizes the one established in the previous lemma.
\begin{lem}
\label{lem:m_phi-in-Lp}
Given a quasi-concave function $\phi$, let $m_\phi (s) = \sup_{0<t<1} \phi(t) / \phi(st)$ for $s\in(0,1)$. If $m_\phi \in L^p(0,1)$, then \eqref{eq:phi-p-avg_bdd} is satisfied.
\end{lem}
\begin{proof}
For $0<t<1$, a change of variables yields
\[
\phi(t)^p \fint_0^t \frac{ds}{\phi(s)^p} = \phi(t)^p \int_0^1 \frac{ds}{\phi(st)^p} \le \|m_\phi\|_{L^p(0,1)}^p < \infty.
\qedhere
\]
\end{proof}
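For instance, for a power-type fundamental function $\phi(t) = t^{1/q}$ with $q>p$, we have
\[
m_\phi(s) = \sup_{0<t<1} \frac{t^{1/q}}{(st)^{1/q}} = s^{-1/q}, \quad s\in(0,1),
\]
whence $\|m_\phi\|_{L^p(0,1)}^p = \int_0^1 s^{-p/q}\,ds = q/(q-p) < \infty$, in accordance with Lemma~\ref{lem:fi-quasiconc_weak2weak}.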
Diverse properties of \ri spaces can be captured and described by various indices, among which the Boyd indices and the fundamental (Zippin) indices are the best studied. We will see that the upper Boyd index of an \ri space $X$ can be used to determine whether $M_p: X\to X$ is bounded, whereas the upper fundamental index determines whether $M_p: M^*(X) \to M^*(X)$ is bounded.
\begin{df}
\label{df:indices}
Let $X$ be an \ri space with fundamental function $\phi$. For $s>0$, we define the \emph{dilation operator} $E_s$ acting on $\Mcal(\Rbb^+, \lambda^1)$ as $E_sf(t) = f(st)$, $t>0$.
For $s>0$, let us define
\[
h_X(s) = \sup_{0\neq f\in\reps X}\frac{\|E_{1/s} f\|_{\reps X}}{\|f\|_{\reps X}} \quad \mbox{and}\quad k_X(s) = \sup_{t>0} \frac{\phi(st)}{\phi(t)} =\sup_{t>0}\frac{\|E_{1/s} \chi_{(0, t)}\|_{\reps X}}{\|\chi_{(0, t)}\|_{\reps X}}\,.
\]
Then, we define the \emph{upper Boyd index} $\itoverline{\alpha}_X$ of $X$, and the \emph{upper fundamental index} $\itoverline{\beta}_X$ of $X$ (also called the \emph{upper Zippin index}) by
\[
\itoverline{\alpha}_X = \inf_{s>1} \frac{\log h_X(s)}{\log s} \quad \mbox{and} \quad
\itoverline{\beta}_X = \inf_{s>1} \frac{\log k_X(s)}{\log s}\,.
\]
\end{df}
\begin{rem}
It is shown in Bennett and Sharpley~\cite[Section III.5]{BenSha} that $1\le k_X(s) \le h_X(s) \le s$ for $s\ge 1$. Hence, the indices satisfy $\itoverline{\beta}_X \le \itoverline{\alpha}_X$ and both lie in $[0,1]$. Moreover, the infima in the definition of the indices can be determined as limits as $s\to \infty$. For more details, see also Boyd~\cite{Boy}, Zippin~\cite{Zip}, or Maligranda~\cite{Mali84}.
\end{rem}
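To illustrate the definitions, let $X = L^r$ for some $r\in[1,\infty)$, so that $\reps{X} = L^r(\Rbb^+)$ and $\phi(t) = t^{1/r}$. A change of variables gives $\|E_{1/s} f\|_{L^r} = s^{1/r} \|f\|_{L^r}$, whence $h_X(s) = s^{1/r}$, while $k_X(s) = \sup_{t>0} (st)^{1/r}/t^{1/r} = s^{1/r}$. Therefore, $\itoverline{\alpha}_{L^r} = \itoverline{\beta}_{L^r} = 1/r$.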
\begin{lem}
\label{lem:beta_m}
If $\itoverline{\beta}_X < 1/p$, then the function $m_\phi$ of Lemma~\ref{lem:m_phi-in-Lp} lies in $L^p(0,1)$ and hence $M_p: M^*(X) \to M^*(X)_\fm$ is bounded.
\end{lem}
\begin{proof}
There exist $q \in (p, 1/\itoverline{\beta}_X)$ and $s_0 > 1$ such that $k_X(s) \le s^{1/q}$ for all $s>s_0$. We can estimate
\[
m_\phi(s) \le k_X(s^{-1}) \le
\begin{cases}
s^{-1/q}& \mbox{for }s\in(0, s_0^{-1}), \\
s^{-1}& \mbox{for }s\in[s_0^{-1}, 1).
\end{cases}
\]
Consequently, $m_\phi \in L^p(0,1)$. The boundedness of $M_p$ then follows from Lemma~\ref{lem:m_phi-in-Lp} combined with Proposition~\ref{pro:Mp-weak2weak}.
\end{proof}
In fact, the inequality for $\itoverline{\beta}_X$ in Lemma~\ref{lem:beta_m} leads to a stronger result.
\begin{lem}
\label{lem:beta_m_glob}
If $\itoverline{\beta}_X < 1/p$, then $M_p: M^*(X) \to M^*(X)$ is bounded.
\end{lem}
\begin{proof}[Sketch of proof]
The result can be obtained similarly as in Lemma~\ref{lem:beta_m}, using Proposition~\ref{pro:Mp-weak2weak-glob} and a simple modification of Lemma~\ref{lem:m_phi-in-Lp}. If we define $\widetilde{m}_\phi$ as a global version of $m_\phi$, i.e., $\widetilde{m}_\phi(s) = \sup_{t>0} \phi(t)/\phi(st)$, then $\widetilde{m}_\phi(s) = k_X(s^{-1}) \in L^p(0,1)$. That provides us with inequality \eqref{eq:phi-p-avg_bdd-glob}, which in turn is equivalent to the boundedness of $M_p: M^*(X) \to M^*(X)$.
\end{proof}
\begin{pro}
\label{pro:alpha_M-bdd}
If $\itoverline{\alpha}_X < 1/p$, then $M_p: X \to X$ is bounded. On the other hand, if $\itoverline{\alpha}_X > 1/p$, then $M_p$ is not a bounded mapping from $X$ to $X$.
\end{pro}
\begin{proof}
Let $u\in X$. Using the embedding $L^{p,1}\emb L^p$, which follows by Lemma~\ref{lem:lorentz-embedding} since $ L^{p,1} = \Lambda^1(L^p)$ and $L^p = L^{p,p} = \Lambda^p(L^p)$, we may estimate for $t\in\Rbb^+$ that
\begin{align*}
M_pu^*(t) &= \biggl( \fint_0^t u^*(s)^p\,ds \biggr)^{1/p} = t^{-1/p} \| u^* \chi_{(0,t)}\|_{L^p} \\ &
\lesssim t^{-1/p} \| u^* \chi_{(0,t)}\|_{L^{p,1}} \approx t^{-1/p} \int_0^t u^*(s) s^{1/p}\,\frac{ds}{s} \eqcolon P_{1/p}u^*(t).
\end{align*}
Similarly, for arbitrary $q<p$, the embedding $L^{p} \emb L^{p,\infty} = M^*(L^p)$ yields the converse estimate
\begin{align*}
M_p u^*(t) & = \frac{\|u^* \chi_{(0,t)}\|_{L^p}}{t^{1/p}} \gtrsim \frac{\|u^* \chi_{(0,t)}\|_{L^{p,\infty}}}{t^{1/p}} = \frac{\sup_{0<s<t} u^*(s) s^{1/p}}{t^{1/q}} t^{1/q - 1/p}\\
& \approx \frac{\sup_{0<s<t} u^*(s) s^{1/p}}{t^{1/q}} \int_0^t s^{1/q-1/p} \frac{ds}{s} \ge t^{-1/q} \int_0^t u^*(s) s^{1/q} \frac{ds}{s} = P_{1/q} u^*(t).
\end{align*}
According to~\cite[Theorem III.5.15]{BenSha}, the Hardy-type operator $P_{a}$ is a bounded mapping from $\reps X$ to $\reps X$ if and only if $\itoverline{\alpha}_X < a$. Suppose now that $\itoverline{\alpha}_X < 1/p$. Applying the Herz--Riesz inequality, we obtain that
\[
\|M_pu\|_X = \|(M_pu)^*\|_{\reps X} \approx \|M_pu^*\|_{\reps X} \lesssim \|P_{1/p}u^*\|_{\reps X} \lesssim \|u^*\|_{\reps X} = \|u\|_X\,.
\]
Suppose instead that $\itoverline{\alpha}_X > 1/p$. Then, there is $q<p$ such that $\itoverline{\alpha}_X > 1/q > 1/p$. The Herz--Riesz inequality yields that
\[
\sup_{\|u\|_X\le1} \|M_pu\|_{X} = \sup_{\|u\|_X\le1} \|(M_pu)^*\|_{\reps X} \approx \sup_{\|u^*\|_{\reps X}\le1} \|M_pu^*\|_{\reps X} \gtrsim \sup_{\|u^*\|_{\reps X}\le1} \|P_{1/q}u^*\|_{\reps X} = \infty.
\qedhere
\]
\end{proof}
\begin{rem}
Shimogaki~\cite{Shi} and Montgomery-Smith~\cite{MonSmi} have given examples of \ri spaces with $\itoverline{\beta}_X < \itoverline{\alpha}_X$. If we choose $p$ such that $\itoverline{\beta}_X < 1/p < \itoverline{\alpha}_X$, then $M_p: X \not\to X$, but $M_p: M^*(X) \to M^*(X)$ is bounded. Then, $M_p: X \to M^*(X)$ is bounded, which is a key hypothesis in Lemma~\ref{lem:key-Mp-wbdd}.
If $\itoverline{\alpha}_X = \itoverline{\beta}_X = 1/p$, then we cannot draw any satisfactory conclusions about the boundedness of $M_p$. For example, let $X=L^p$. Then, $M^*(X) = L^{p,\infty}$, whence neither $M_p: X \to X$ nor $M_p: M^*(X) \to M^*(X)$ is bounded. On the other hand, $M_p: X \to M^*(X)$ is bounded. If we instead consider $X=L^{p,q}$ for some $q>p$, then $L^p\subsetneq X \nsubset L^p_\loc$ and we can find a function $u\in X$ such that $M_pu \equiv \infty$.
\end{rem}
\section{Main results}
\label{sec:LipDensSpec}
The main theorem for $p$-Poincar\'e spaces as stated in its general form (Theorem~\ref{thm:general_Lip-dens}) depends on a weak type estimate for the maximal operator $M_p$, which may be deemed rather obscure. Using the results of Section~\ref{sec:weaktype}, we can replace this estimate by somewhat more tangible hypotheses. These will also allow us to find examples of base function spaces, for which the Newtonian functions can be approximated by Lipschitz continuous functions.
\begin{thm}
\label{thm:Mp-bdd_Lip-dens}
Assume that $\Pcal$ is a $p$-Poincar\'e space for some $p\in [1, \infty)$ and that $X \subset L^p_\fm$ is a quasi-Banach function lattice with absolutely continuous norm. Suppose that $M_p: X \to X_\fm$ is bounded. Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
\begin{proof}
By Lemma~\ref{lem:key_est-M_bdd}, the boundedness of the maximal operator implies that $\Bigl\| \sigma \chi_{\suplevp{p}{f}{\sigma}} \Bigr\|_X \to 0$ as $\sigma \to \infty$ whenever $f\in X$, where $\suplevp{p}{f}{\sigma}$ is the superlevel set of $M_pf$ at level $\sigma$. The conclusion then follows from Theorem~\ref{thm:general_Lip-dens}.
\end{proof}
\begin{exa}
\label{exa:Lip-dens-LqcapLs}
Let $\Pcal$ be a $p$-Poincar\'e space for some $p\in [1, \infty)$. Let $X=L^q \cap L^s$, with a (quasi)norm given by $\|u\|_X = \max\{\|u\|_{L^q}, \|u\|_{L^s}\}$, where $0<q\le p<s<\infty$. We shall show that Lipschitz functions are dense in $\NX$. Note that if $\meas{\Pcal} = \infty$, then $X\subsetneq L^s$. If in addition $q<1$, then $X$ is not normable.
Both spaces $L^q$ and $L^s$ have absolutely continuous (quasi)norms. Thus, so has $X$. We also need to show that $M_p: X \to X_\fm$ is bounded. Let $u\in X$ and let $E\subset \Pcal$ be of finite measure. The H\"older inequality implies that
\begin{align*}
\|(M_p u) \chi_E\|_X & = \max\{\|(M_p u) \chi_E\|_{L^q}, \|(M_p u) \chi_E\|_{L^s}\} \\
& \le c_{\meas{E}} \|(M_p u) \chi_E\|_{L^s}
\le c_{\meas{E}} \|M_p u\|_{L^s} \le \tilde{c}_{\meas{E}} \|u\|_{L^s} \le \tilde{c}_{\meas{E}} \|u\|_X.
\end{align*}
The density result now follows from Theorem~\ref{thm:Mp-bdd_Lip-dens}. It is also worth noting that if $\meas{\Pcal} = \infty$, then $M_p u \notin X$ (unless $u=0$ a.e.), but merely $M_p u \in L^s \cap L^{p,\infty}$.
If $q\ge 1$, then it is possible to draw the same conclusion on density of Lipschitz functions in $\NX$ using Example~\ref{exa:Mp-vs-Mploc} and Theorem~\ref{thm:Mp-wbdd-dtl_Lip-dens}~\ref{it:Lip-dens_X=Mploc} below.
\end{exa}
If the base function space is in fact an \ri space, then we obtain the required weak type estimate for the maximal operator $M_p$ even when $M_p$ is merely weakly bounded (on sets of finite measure).
\begin{thm}
\label{thm:Mp-wbdd_Lip-dens}
Assume that $\Pcal$ is a $p$-Poincar\'e space for some $p\in [1, \infty)$ and that $X$ is an \ri space with absolutely continuous norm. Suppose that $M_p: X \to \wMX_\fm$ is bounded. Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
\begin{proof}
By Lemma~\ref{lem:key-Mp-wbdd}, the boundedness of the maximal operator implies that $\Bigl\| \sigma \chi_{\suplevp{p}{f}{\sigma}} \Bigr\|_X \to 0$ as $\sigma \to \infty$ whenever $f\in X$. The conclusion then follows from Theorem~\ref{thm:general_Lip-dens}.
\end{proof}
As a special case, we obtain that if $\Pcal$ supports a $1$-Poincar\'e inequality, then the Lipschitz truncations are dense in all Newtonian spaces based on arbitrary \ri spaces with absolutely continuous norm.
\begin{thm}
\label{thm:1-PI_Lip-dens}
Assume that $\Pcal$ is a $1$-Poincar\'e space and that $X$ is an \ri space with absolutely continuous norm. Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
\begin{proof}
By Proposition~\ref{pro:Mp-wbdd}, the maximal operator $M_1: X \to M^*(X)$ is bounded whenever $X$ is an \ri space. The conclusion then follows from Theorem~\ref{thm:Mp-wbdd_Lip-dens}.
\end{proof}
The absolute continuity of the norm is crucial for the density results. It is the only hypothesis that is violated in the following example, where we find a Newtonian function that cannot be approximated by (locally) Lipschitz functions.
\begin{exa}
Locally Lipschitz functions are not dense in $\NX$ in the setting of Example~\ref{exa:trunc_not_dense}. There, $X = M_\phi = M_\phi^*$ over $\Rbb^n$ and we considered a compactly supported radially decreasing function $u(x) = (f(|x|) - f(1))^+$, where $f(t) = t/\phi(t^n)$ for $t>0$ and $f(0) = f(0\limplus) = \infty$ due to the assumed properties of the fundamental function $\phi$. We also obtained that $g(x) = |f'(|x|)| \chi_{B(0,1)}(x)$, $x\in \Rbb^n$, was a minimal ($X$-weak) upper gradient of $u$. Moreover, we estimated that $|f'(t)| \approx 1/\phi(t^n)$.
Let now $v \in \NX$ be a locally Lipschitz function. The restriction $v|_{\overline{B(0,1)}}$ is a bounded $L$-Lipschitz function for some $L>0$. Let $h\in X$ be an upper gradient of $u-v$. Then, $h(x) \ge g(x) - L$ for a.e.\@ $x\in B(0,1)$. Hence
\begin{align*}
\|u-v\|_\NX & = \|u-v\|_{X} + \|h\|_X \ge \|h \chi_{B(0,1)}\|_X \gtrsim \|(g(x) - L)^+\|_{M_\phi^*} \\
& = \sup_{0<t<1} (|f'(t)| - L)^+ \phi(\omega_n t^n) \gtrsim \sup_{0<t<r_L} |f'(t)| \phi(t^n) \approx 1,
\end{align*}
where $\omega_n$ is the measure of the unit ball and $r_L = \inf\{r>0: |f'(r)| \le 2L\} > 0$. Therefore, $u\in \NX$ cannot be approximated by locally Lipschitz functions.
\end{exa}
In Section~\ref{sec:weaktype}, we established various conditions that guarantee weak boundedness of the maximal operator $M_p$ on sets of finite measure. These allow us to make the assumptions of Theorem~\ref{thm:Mp-wbdd_Lip-dens} more concrete. It also becomes apparent that the well-known results on density of Lipschitz functions in $N^{1,p}$ on doubling $p$-Poincar\'e spaces, cf.\@ Shanmugalingam~\cite[Theorem 4.1]{Sha}, are recovered by our approach.
\begin{thm}
\label{thm:Mp-wbdd-dtl_Lip-dens}
Assume that $\Pcal$ is a $p$-Poincar\'e space for some $p\in [1, \infty)$ and that $X$ is an \ri space with absolutely continuous norm and fundamental function $\phi$. Suppose that any of the following conditions is satisfied:
\begin{enumerate}
\item \label{it:Lip-dens_X=Mploc} $X \emb M^p_\loc(X)$;
\item \label{it:Lip-dens_X=Lambdap} $X \emb \Lambda^p_{\psi,\fm}$, where $\psi$ is defined by \eqref{eq:quasiconv_dom};
\item \label{it:Lip-dens_Lp=X} $L^{p, 1}(\Pcal) \emb X_\fm$ and $X \emb L^p_\fm(\Pcal)$;
\item \label{it:Lip-dens_wbdd} $\phi(t)^p \fint_0^t \phi(s)^{-p}\,ds$ is bounded on $(0, \delta)$ for some $\delta > 0$;
\item \label{it:Lip-dens_conc} $\phi^q$ is concave on $[0, \delta)$ for some $q>p$ and some $\delta>0$;
\item \label{it:Lip-dens_quasiconc} $\phi(t)^q/t$ is decreasing for $t\in(0, \delta)$ for some $q>p$ and some $\delta>0$;
\item \label{it:Lip-dens_m-Lp} $m_\phi \in L^p(0,1)$, where $m_\phi(s) = \sup_{0<t<1} \phi(t)/\phi(st)$;
\item \label{it:Lip-dens_beta} the upper fundamental index $\itoverline{\beta}_X < 1/p$ (see Definition~\ref{df:indices});
\item \label{it:Lip-dens_alpha} the upper Boyd index $\itoverline{\alpha}_X < 1/p$ (see Definition~\ref{df:indices}).
\end{enumerate}
Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
\begin{proof}
If $X \emb M^p_\loc(X)$, then $M_p: X\to M^*(X)_\fm$ is bounded by Proposition~\ref{pro:Mp-wbdd}. The conclusion in the case~\ref{it:Lip-dens_X=Mploc} then follows from Theorem~\ref{thm:Mp-wbdd_Lip-dens}.
If~\ref{it:Lip-dens_X=Lambdap} holds, then so does~\ref{it:Lip-dens_X=Mploc} by Lemma~\ref{lem:Lambdap_MpX}.
If~\ref{it:Lip-dens_Lp=X} holds, then $\phi_X(t) \approx t^{1/p}$ for $t$ near zero. Hence, $M^*(X)_\fm = L^{p,\infty}_\fm$ with equivalent quasi-seminorms. Let $E \subset \Pcal$ be of finite measure. In view of the Herz--Riesz inequality (Corollary~\ref{cor:herz}), we have that
\begin{multline*}
\|(M_pu) \chi_E \|_{M^*(X)} \le c_E \|(M_pu) \chi_E \|_{L^{p,\infty}} \le c_E \|(M_pu)^* \chi_{(0, \meas{E})} \|_{L^{p,\infty}} \\
\approx c_E \|M_p u^* \chi_{(0, \meas{E})} \|_{L^{p,\infty}} = c_E \|u^* \chi_{(0, \meas{E})} \|_{L^p} = c_E \|u \chi_G\|_{L^p} \le c_E' \|u\|_X,
\end{multline*}
where $G \in \suplevr{u}{\meas{E}}$ and $\suplevr{u}{\cdot}$ is the family of ``superlevel sets'' defined by \eqref{eq:df-suplevr}. Thus, $M_p: X \to M^*(X)_\fm$ is bounded. Theorem~\ref{thm:Mp-wbdd_Lip-dens} now finishes the argument.
If~\ref{it:Lip-dens_wbdd} holds, then $M_p: M^*(X) \to M^*(X)_\fm$ is bounded by Proposition~\ref{pro:Mp-weak2weak}. The conclusion then follows from Theorem~\ref{thm:Mp-wbdd_Lip-dens} since $X \emb M^*(X)$.
If~\ref{it:Lip-dens_conc} or~\ref{it:Lip-dens_quasiconc} is satisfied, then so is~\ref{it:Lip-dens_wbdd} by Lemma~\ref{lem:fi-quasiconc_weak2weak}.
If~\ref{it:Lip-dens_m-Lp} holds, then so does~\ref{it:Lip-dens_wbdd} by Lemma~\ref{lem:m_phi-in-Lp}.
If~\ref{it:Lip-dens_beta} is satisfied, then so is~\ref{it:Lip-dens_m-Lp} by Lemma~\ref{lem:beta_m}.
If~\ref{it:Lip-dens_alpha} holds, then $M_p: X\to X$ is bounded by Proposition~\ref{pro:alpha_M-bdd}. Besides, $X \subset L^1_\fm$ by~\ref{df:BFL.locL1}. The desired result follows from Theorem~\ref{thm:Mp-bdd_Lip-dens}.
\end{proof}
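To illustrate the theorem, let $\Pcal$ be a $p$-Poincar\'e space and $X = L^r$ with $r>p$. Then $\phi(t) = t^{1/r}$, so $\phi^r$ is concave and condition~\ref{it:Lip-dens_conc} applies with $q=r$; alternatively, $\itoverline{\alpha}_{L^r} = 1/r < 1/p$, so~\ref{it:Lip-dens_alpha} applies as well. Hence, Lipschitz functions are dense in $N^1L^r$. The borderline case $r = p$ is covered by~\ref{it:Lip-dens_Lp=X}, since $L^{p,1} \emb L^p$ and trivially $L^p \emb L^p_\fm$, which recovers the classical density result for $N^{1,p}$.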
In complete metric spaces, the conditions on the function space can be weakened.
\begin{thm}
\label{thm:Mp-wbdd-dtl_Lip-dens-comp}
Assume that $\Pcal$ is a complete $p$-Poincar\'e space for some $p\in [1, \infty)$ and that $X$ is an \ri space with absolutely continuous norm and fundamental function $\phi$. Suppose that any of the following conditions is satisfied:
\begin{enumerate}
\item \label{it:Lip-dens_conc-comp} $\phi^p$ is concave on $[0, \delta)$ for some $\delta>0$;
\item $\phi^p(t)/t$ is decreasing for $t\in (0, \delta)$ for some $\delta>0$;
\item the upper fundamental index $\itoverline{\beta}_X \le 1/p$;
\item the upper Boyd index $\itoverline{\alpha}_X \le 1/p$.
\end{enumerate}
Then, the set of Lipschitz functions is dense in $\NX$.
\end{thm}
\begin{proof}
If $p=1$, then all of the conditions are trivially satisfied (recall that $\phi$ is concave for a well-chosen equivalent norm on $X$) and the claim follows by Theorem~\ref{thm:1-PI_Lip-dens}.
Suppose instead that $p>1$. Keith and Zhong~\cite{KeiZho} have proven that $\Pcal$, being complete, supports an $r$-Poincar\'{e} inequality for some $r<p$. The claim then follows by Theorem~\ref{thm:Mp-wbdd-dtl_Lip-dens}, where $p$ and $q$ are to be replaced by $r$ and $p$, respectively.
\end{proof}
Costea and Miranda discussed the density of Lipschitz functions in Newtonian spaces based on the Lorentz spaces $L^{p,q}$ in~\cite{CosMir}. There, they proved density whenever $1 \le q \le p < \infty$ under the assumptions that the underlying metric measure space is complete and supports an \emph{$L^{p,q}$-Poincar\'e inequality}, i.e., there are $c_\PI>0$ and $\lambda \ge 1$ such that
\[
\fint_B |u - u_B| \le c_\PI \diam (B) \frac{\|g \chi_{\lambda B}\|_{L^{p,q}}}{\meas{\lambda B}^{1/p}}
\]
for every ball $B\subset \Pcal$, every function $u\in L^1_\loc(\Pcal)$ and every upper gradient $g$ of $u$. They gave an example showing that the desired result fails for $L^{p,\infty}$ spaces. The case of $L^{p,q}$ with $1\le p<q<\infty$, however, remained open. In their proof, the Lorentz-type maximal operator
\[
M_{p,q} u (x) = \sup_{B \ni x} \frac{\|u \chi_B\|_{L^{p,q}}}{\meas{B}^{1/p}}, \quad x\in\Pcal,
\]
and its boundedness as a mapping from $L^{p,q}$ to $L^{p, \infty}$ for $q \le p$ were used; this is exactly where the proof would have failed for $q>p$. The boundedness of $M_{p,q}: L^{p,q}\to L^{p, \infty}$ for $q> p$ was, however, left open in~\cite{CosMir}. Chung, Hunt and Kurtz~\cite[pp.\@ 119--120]{ChuHunKur} gave an example showing that $M_{p,q}$ is not bounded from $L^{p,q}$ to $L^{p, \infty}_\fm$ if $q>p$.
Nevertheless, the following propositions give an affirmative answer to the question of density of Lipschitz continuous functions in $N^1L^{p,q}(\Pcal)$ even for $1<p<q<\infty$. First, we will assume a stronger Poincar\'{e} inequality.
\begin{pro}
\label{pro:cosmir-gen}
If $\Pcal$ supports an $L^{r,s}$-Poincar\'{e} inequality for some $r,s\in[1, \infty]$, then Lipschitz functions are dense in $N^1 L^{p,q}$ whenever $p\in(r, \infty)$ and $q\in[1, \infty)$.
\end{pro}
\begin{proof}
Due to the embedding between Lorentz spaces $L^{\textit{\v{r}}} \emb L^{r,s}_\fm$, we see that $\Pcal$ is actually an $\textit{\v{r}}$-Poincar\'{e} space whenever $\textit{\v{r}} \in (r,p)$. Since $q<\infty$, the space $L^{p,q}$ has absolutely continuous norm. The fundamental function of $L^{p,q}$ satisfies $\phi(t)^p = t$, which is concave. Therefore, the condition~\ref{it:Lip-dens_conc} of Theorem~\ref{thm:Mp-wbdd-dtl_Lip-dens} is fulfilled for $X=L^{p,q}$ and the $\textit{\v{r}}$-Poincar\'{e} space $\Pcal$, whence Lipschitz functions are dense in $N^1L^{p,q}$.
\end{proof}
Finally, we are prepared to show the density result for the case $1<p<q<\infty$ in the setting of~\cite{CosMir}, cf.\@ Theorem 6.9 therein.
\begin{pro}
\label{pro:cosmir}
Let $(\Pcal, \dd, \mu)$ be a complete metric measure space with a doubling measure. Suppose that $\Pcal$ admits an $L^{p,q}$-Poincar\'{e} inequality with $1<p<q<\infty$. Then, Lipschitz functions are dense in $N^1 L^{p,q}$.
\end{pro}
\begin{proof}
Due to the embedding between Lorentz spaces, $\Pcal$ is actually a $p$-Poincar\'{e} space. Lipschitz functions are dense in $N^1L^{p,q}$ by Theorem~\ref{thm:Mp-wbdd-dtl_Lip-dens-comp}\,\ref{it:Lip-dens_conc-comp} as $\phi_{L^{p,q}}(t)^p = t$ is concave on $\Rbb^+$.
\end{proof}
Note that the previous proposition does not discuss the case $1=p<q < \infty$ as $L^{1,q}$ are not \ri spaces for $q>1$, but mere rearrangement-invariant quasi-Banach function lattices. It was shown in~\cite[Example~2.6]{Mal1} that the Newtonian space may be trivial then, i.e., $N^1 L^{1,q} = L^{1,q}$, even though there are many curves in the metric measure space. In this case, the density can be established using arguments similar to those in Proposition~\ref{pro:NX=X-dens} above.
\end{document}
\begin{document}
\title{Affine factorable surfaces in isotropic spaces}
\author{Muhittin Evren Aydin$^{1},$ Ayla Erdur$^{2}$, Mahmut Ergut$^{3}$}
\address{$^{1}$ Department of Mathematics, Faculty of Science, Firat
University, Elazig, 23200, Turkey}
\address{$^{2,3}$ Department of Mathematics, Faculty of Science and Art,
Namik Kemal University, Tekirdag 59100, Turkey}
\email{[email protected], [email protected], [email protected], }
\thanks{}
\subjclass[2000]{ 53A35, 53A40, 53B25.}
\keywords{Isotropic space, affine factorable surface, mean curvature,
Gaussian curvature.}
\begin{abstract}
In this paper, we study the problem of finding the affine factorable
surfaces in a $3-$dimensional isotropic space with prescribed Gaussian $
\left( K\right) $ and mean $\left( H\right) $ curvature. Because of the
absolute figure, two different types of these surfaces appear by permutation
of the coordinates. We first classify the affine factorable surfaces of type 1
with $K,H$ constant. Afterwards, we provide the affine factorable surfaces
of type 2 with $K=const.$ and $H=0.$ In addition, in a particular case, the
affine factorable surfaces of type 2 with $H=const.$ are obtained.
\end{abstract}
\maketitle
\section{Introduction}
Let $\mathbb{R}^{3}$ be a 3-dimensional Euclidean space with usual
coordinates $\left( x,y,z\right) $ and $w\left( x,y\right) :\mathbb{R}
^{2}\rightarrow \mathbb{R}$ be a smooth real-valued function of 2 variables.
Then its graph given by $z=w\left( x,y\right) $ is a smooth surface with an
atlas that consists of only the following patch
\begin{equation*}
\mathbf{r}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{3},\left( x,y\right)
\mapsto \left( x,y,w\left( x,y\right) \right) .
\end{equation*}
Notice also that every surface in $\mathbb{R}^{3}$ is locally a part of the
graph $z=w\left( x,y\right) $ if its normal is not parallel to the $xy-$
plane.\ Otherwise, the regularity assures that it is a part of the graph $
x=w\left( y,z\right) $\ or $y=w\left( x,z\right) .$\ See \cite[p. 119]{Pr}.
These graphs are also called \textit{Monge surfaces} \cite[p. 302]{Gr}.
Because our target is to solve prescribed Gaussian $\left( K\right) $ and
mean $\left( H\right) $ curvature type equations in a 3-dimensional
isotropic space $\mathbb{I}^{3}$, it is naturally reasonable to focus on
graph surfaces. By separation of variables, we study the graphs $z=w\left(
x,y\right) =f_{1}\left( x\right) f_{2}\left( y\right) ,$ so-called \textit{
factorable} or \textit{homothetical} \textit{surface}, for smooth functions $
f_{1},f_{2}$ of a single variable. Many results on factorable surfaces in
other 3-dimensional spaces have been obtained so far; see \cite
{AO,BS,GV,JS,LM,ML,W,YLi}.
This kind of surface also appears as an invariant surface in the 3-dimensional
space $\mathbb{H}^{2}\times \mathbb{R}$, which is one of the eight homogeneous
geometries of Thurston. More precisely, a certain type of translation surface
in $\mathbb{H}^{2}\times \mathbb{R}$ is the graph of $z=f_{1}\left( x\right)
f_{2}\left( y\right) ,$ see \cite[p. 1547]{Yo}. For further details, we
refer to \cite{ILM,LY,LMu,Lo,Lo1,T,YL,YLK}.
Recently, Zong, Xiao, Liu \cite{ZXL} defined \textit{affine factorable
surfaces} in $\mathbb{R}^{3}$ as the graphs $z=f_{1}\left( x\right)
f_{2}\left( y+ax\right) ,$ $a\in \mathbb{R},$ $a\neq 0.$ They obtained these
surfaces with $K=0$ and $H=const.$ It is clear that this class of surfaces
is more general than the factorable surfaces.
In this paper, the problem of determining the affine factorable surfaces in $
\mathbb{I}^{3}$ with $K,H$ constant is considered. Because of the absolute
figure of $\mathbb{I}^{3},$ two different types of these surfaces appear by
permutation of the coordinates, namely the graphs of $z=f_{1}\left( x\right)
f_{2}\left( y+ax\right) $ and $x=f_{1}\left( y+az\right) f_{2}\left(
z\right) ,$ called \textit{affine factorable surfaces} of \textit{type 1 }and
\textit{2}, respectively. Note also that such surfaces reduce to the
factorable surfaces in $\mathbb{I}^{3}$ when $a=0.$
In this manner, our first concern is to obtain the affine factorable
surfaces of type 1 with $K,H$ constant. We then present some results
relating to the affine factorable surfaces of type 2 with $K=const.$ and $
H=0.$ Furthermore, in a particular case, the affine factorable surfaces of
type 2 with $H=const.$ are found.
\section{Preliminaries}
In this section, we provide some fundamental properties of isotropic
geometry from \cite{CDV}--\cite{EDH}, \cite{MS}--\cite{OGR}, \cite{PGM,PO,S1,St}.
For the basics of Cayley-Klein geometries, see also \cite{K,OS,Y}.
Let $\left( x_{0}:x_{1}:x_{2}:x_{3}\right) $ denote the homogeneous
coordinates in a real 3-dimensional projective space $P\left( \mathbb{R}
^{3}\right) .$ A \textit{3-dimensional} \textit{isotropic space} $\mathbb{I}
^{3}$ is a Cayley-Klein space defined in $P\left( \mathbb{R}^{3}\right) $ in
which the absolute figure consists of an \textit{absolute plane} $\omega $
and two \textit{absolute lines} $l_{1},l_{2}$ in $\omega $. These are
respectively given by $x_{0}=0$ and $x_{0}=x_{1}\pm ix_{2}=0.$ The
intersection point $\left( 0:0:0:1\right) $ of these complex-conjugate lines
is called the \textit{absolute point}.
The group of motions of $\mathbb{I}^{3}$ is given by the $6-$parameter group
\begin{equation}
\left( x,y,z\right) \longmapsto \left( \tilde{x},\tilde{y},\tilde{z}\right)
:\left\{
\begin{array}{l}
\tilde{x}=\theta _{1}+x\cos \theta -y\sin \theta , \\
\tilde{y}=\theta _{2}+x\sin \theta +y\cos \theta , \\
\tilde{z}=\theta _{3}+\theta _{4}x+\theta _{5}y+z,
\end{array}
\right. \tag{2.1}
\end{equation}
where $\left( x,y,z\right) $ denote the affine coordinates and $\theta
,\theta _{1},...,\theta _{5}\in \mathbb{R}.$ The \textit{isotropic metric}
induced by the absolute figure is given by $ds^{2}=dx^{2}+dy^{2}.$
Due to the absolute figure, there are two types of lines and planes:
the \textit{isotropic lines} and \textit{planes}, which are parallel to the $z-$
axis, and the others, called \textit{non-isotropic lines and planes.} For
example, the equation $ax+by+cz=d$ determines a non-isotropic (isotropic)
plane if $c\neq 0$ ($c=0$), $a,b,c,d\in \mathbb{R}.$
Let us consider an admissible surface (i.e., one without isotropic tangent
planes). Then it can be parameterized as
\begin{equation*}
\mathbf{r}\left( u,v\right) =\left( x\left( u,v\right) ,y\left( u,v\right)
,z\left( u,v\right) \right) ,
\end{equation*}
where $x_{u}y_{v}-x_{v}y_{u}\neq 0,$ $x_{u}=\frac{\partial x}{\partial u},$
by the admissibility. Notice that admissible surfaces are regular
as well.
Denote by $g$ and $h$ the first and the second fundamental forms,
respectively. The components of $g$ are calculated via the metric induced
from $\mathbb{I}^{3}.$ The unit normal vector is $\left( 0,0,1\right) $ because it
is orthogonal to all non-isotropic vectors. The components of $h$ are given
by
\begin{equation*}
h_{11}=\frac{\det \left( \mathbf{r}_{uu},\mathbf{r}_{u},\mathbf{r}
_{v}\right) }{\sqrt{\det g}},\text{ }h_{12}=\frac{\det \left( \mathbf{r}
_{uv},\mathbf{r}_{u},\mathbf{r}_{v}\right) }{\sqrt{\det g}},\text{ }h_{22}=
\frac{\det \left( \mathbf{r}_{vv},\mathbf{r}_{u},\mathbf{r}_{v}\right) }{
\sqrt{\det g}},
\end{equation*}
where $\mathbf{r}_{uu}=\frac{\partial ^{2}\mathbf{r}}{\partial u\partial u}$
, etc. Therefore, the \textit{isotropic Gaussian} (or \textit{relative}) and
\textit{mean curvature} are respectively defined by
\begin{equation*}
K=\frac{\det h}{\det g},\text{ }H=\frac{
g_{11}h_{22}-2g_{12}h_{12}+g_{22}h_{11}}{2\det g},
\end{equation*}
where $g_{ij}$ $\left( i,j=1,2\right) $ denote the components of $g.$ For
convenience, we call these the \textit{Gaussian} and \textit{mean curvatures}.
By a\textit{\ flat} (\textit{minimal}) \textit{surface} we mean a surface
with vanishing Gaussian (mean) curvature.
In the particular case that the surface is the graph $z=w\left( x,y\right) $
parameterized by $\mathbf{r}\left( x,y\right) =\left( x,y,w\left( x,y\right)
\right) ,$ the Gaussian and mean curvatures become
\begin{equation}
K=w_{xx}w_{yy}-w_{xy}^{2},\text{ }H=\frac{w_{xx}+w_{yy}}{2}. \tag{2.2}
\end{equation}
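For the reader's convenience, we note how (2.2) follows from the fundamental forms: for $\mathbf{r}\left( x,y\right) =\left( x,y,w\left( x,y\right) \right) $ the induced metric gives $g_{11}=g_{22}=1,$ $g_{12}=0,$ so that $\det g=1,$ while
\begin{equation*}
h_{11}=\det \left( \mathbf{r}_{xx},\mathbf{r}_{x},\mathbf{r}_{y}\right) =w_{xx},\text{ }h_{12}=w_{xy},\text{ }h_{22}=w_{yy},
\end{equation*}
from which (2.2) is immediate.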
Accordingly, if the surface is the graph $x=w\left( y,z\right) $ parameterized by $
\mathbf{r}\left( y,z\right) =\left( w\left( y,z\right) ,y,z\right) ,$ then
these curvatures are given by
\begin{equation}
K=\frac{w_{yy}w_{zz}-w_{yz}^{2}}{w_{z}^{4}},\text{ }H=\frac{
w_{z}^{2}w_{yy}-2w_{y}w_{z}w_{yz}+\left( 1+w_{y}^{2}\right) w_{zz}}{
2w_{z}^{3}}, \tag{2.3}
\end{equation}
where the admissibility assures $w_{z}\neq 0.$
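These formulas follow from the fundamental forms as well: for $\mathbf{r}\left( y,z\right) =\left( w\left( y,z\right) ,y,z\right) $ one computes
\begin{equation*}
g_{11}=1+w_{y}^{2},\text{ }g_{12}=w_{y}w_{z},\text{ }g_{22}=w_{z}^{2},\text{ }\det g=w_{z}^{2},
\end{equation*}
and $h_{11}=\frac{w_{yy}}{\left\vert w_{z}\right\vert },$ $h_{12}=\frac{w_{yz}}{\left\vert w_{z}\right\vert },$ $h_{22}=\frac{w_{zz}}{\left\vert w_{z}\right\vert },$ so that $K=\det h/\det g$ and $H$ take the forms in (2.3), up to the sign of $w_{z}$.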
\section{Affine factorable surfaces of type 1}
An \textit{affine factorable surface} \textit{of type 1 }in $\mathbb{I}^{3}$
is a graph surface given by
\begin{equation*}
z=w\left( x,y\right) =f_{1}\left( x\right) f_{2}\left( y+ax\right) ,a\neq 0.
\end{equation*}
Let us put $u_{1}=x$ and $u_{2}=y+ax$ in order to avoid confusion while
solving prescribed curvature type equations. By (2.2), we get the Gaussian
curvature as
\begin{equation}
K=f_{1}f_{2}f_{1}^{\prime \prime }f_{2}^{\prime \prime }-\left(
f_{1}^{\prime }f_{2}^{\prime }\right) ^{2}, \tag{3.1}
\end{equation}
where $f_{1}^{\prime }=\frac{df_{1}}{du_{1}}$ and $f_{2}^{\prime }=\frac{
df_{2}}{du_{2}}$ and so on.
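To verify (3.1), note that $w_{x}=f_{1}^{\prime }f_{2}+af_{1}f_{2}^{\prime },$ $w_{y}=f_{1}f_{2}^{\prime },$ and hence
\begin{equation*}
w_{xx}=f_{1}^{\prime \prime }f_{2}+2af_{1}^{\prime }f_{2}^{\prime }+a^{2}f_{1}f_{2}^{\prime \prime },\text{ }w_{xy}=f_{1}^{\prime }f_{2}^{\prime }+af_{1}f_{2}^{\prime \prime },\text{ }w_{yy}=f_{1}f_{2}^{\prime \prime }.
\end{equation*}
Substituting into (2.2), all terms containing $a$ cancel and $K=w_{xx}w_{yy}-w_{xy}^{2}$ reduces to (3.1).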
Notice that the roles of $f_{1}$ and $f_{2}$ in (3.1) are symmetric and thus
it is sufficient to treat the cases depending on $f_{1}.$
\begin{theorem}
Let an affine factorable surface of type 1 in $\mathbb{I}^{3}$ have constant
Gaussian curvature $K_{0}.$ Then, for $c_{1},c_{2},c_{3}\in \mathbb{R}$, one
of the following happens:
\begin{enumerate}
\item[(i)] $w\left( x,y\right) =c_{1}f_{2}\left( y+ax\right) ;$
\item[(ii)] $w\left( x,y\right) =c_{1}e^{c_{2}x+c_{3}\left( y+ax\right) };$
\item[(iii)] $w\left( x,y\right) =c_{1}x^{\frac{1}{1-c_{2}}}\left(
y+ax\right) ^{\frac{c_{2}}{c_{2}-1}},$ $c_{2}\neq 1;$
\item[(iv)] $\left( K_{0}\neq 0\right) $ $w\left( x,y\right) =\sqrt{
\left\vert K_{0}\right\vert }x\left( y+ax\right) .$
\end{enumerate}
\end{theorem}
\begin{proof}
We have two cases:
\begin{enumerate}
\item Case $K_{0}=0.$ By (3.1), the item (i) of the theorem is obvious. If $
f_{1},f_{2}\neq const.,$ (3.1) implies $f_{1}^{\prime \prime }f_{2}^{\prime
\prime }\neq 0.$ Thereby, (3.1) can be rewritten as
\begin{equation}
\frac{f_{1}f_{1}^{\prime \prime }}{\left( f_{1}^{\prime }\right) ^{2}}
=c_{1}=\frac{\left( f_{2}^{\prime }\right) ^{2}}{f_{2}f_{2}^{\prime \prime }
}, \tag{3.2}
\end{equation}
where $c_{1}\in \mathbb{R},$ $c_{1}\neq 0.$ If $c_{1}=1$, after solving
(3.2), we obtain
\begin{equation*}
f_{1}\left( u_{1}\right) =c_{2}\exp \left( c_{3}u_{1}\right) ,\text{ }
f_{2}\left( u_{2}\right) =c_{4}\exp \left( c_{5}u_{2}\right) ,\text{ }
c_{2},...,c_{5}\in \mathbb{R}.
\end{equation*}
This proves the item (ii) of the theorem. Otherwise, i.e. $c_{1}\neq 1,$ by
solving (3.2), we derive
\begin{equation*}
f_{1}\left( u_{1}\right) =\left[ \left( 1-c_{1}\right) \left(
c_{6}u_{1}+c_{7}\right) \right] ^{\frac{1}{1-c_{1}}},\text{ }f_{2}\left(
u_{2}\right) =\left[ \left( \frac{c_{1}}{c_{1}-1}\right) \left(
c_{8}u_{2}+c_{9}\right) \right] ^{\frac{c_{1}}{c_{1}-1}}
\end{equation*}
for $c_{6},...,c_{9}\in \mathbb{R}.$ This is the proof of the item (iii) of
the theorem.
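The integrations behind these solutions are elementary. For instance, when $c_{1}=1,$ the equation for $f_{1}$ in (3.2) reads $f_{1}f_{1}^{\prime \prime }=\left( f_{1}^{\prime }\right) ^{2},$ that is,
\begin{equation*}
\frac{f_{1}^{\prime \prime }}{f_{1}^{\prime }}=\frac{f_{1}^{\prime }}{f_{1}},
\end{equation*}
and one integration gives $f_{1}^{\prime }=c_{3}f_{1},$ whose solution is the exponential above; $f_{2}$ is treated likewise.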
\item Case $K_{0}\neq 0.$ (3.1) yields $f_{1},f_{2}\neq const.$ We have two
cases:
\begin{enumerate}
\item Case $f_{1}=c_{1}u_{1}+c_{2},$ $c_{1},c_{2}\in \mathbb{R},$ $c_{1}\neq
0.$ By (3.1), we get $K_{0}=-c_{1}^{2}\left( f_{2}^{\prime }\right) ^{2}$ or
\begin{equation*}
f_{2}\left( u_{2}\right) =\frac{\sqrt{\left\vert K_{0}\right\vert }}{
\left\vert c_{1}\right\vert }u_{2}+c_{3},\text{ }c_{3}\in \mathbb{R},
\end{equation*}
which proves the item (iv) of the theorem.
\item Case $f_{1}^{\prime \prime }\neq 0.$ By symmetry, we have $
f_{2}^{\prime \prime }\neq 0.$ Dividing (3.1) by $f_{1}f_{1}^{\prime
\prime }\left( f_{2}^{\prime }\right) ^{2}$ gives
\begin{equation}
K_{0}\left( \frac{1}{f_{1}f_{1}^{\prime \prime }}\right) \left( \frac{1}{
f_{2}^{\prime }}\right) ^{2}=\frac{f_{2}f_{2}^{\prime \prime }}{\left(
f_{2}^{\prime }\right) ^{2}}-\frac{\left( f_{1}^{\prime }\right) ^{2}}{
f_{1}f_{1}^{\prime \prime }}. \tag{3.3}
\end{equation}
The partial derivative of (3.3) with respect to $u_{1}$ gives
\begin{equation}
K_{0}\underset{\omega _{1}}{\underbrace{\left( \frac{1}{f_{1}f_{1}^{\prime
\prime }}\right) ^{\prime }}}\left( \frac{1}{f_{2}^{\prime }}\right) ^{2}+
\underset{\omega _{2}}{\underbrace{\left( \frac{\left( f_{1}^{\prime
}\right) ^{2}}{f_{1}f_{1}^{\prime \prime }}\right) ^{\prime }}}=0. \tag{3.4}
\end{equation}
Because $f_{2}^{\prime }\neq const.,$ (3.4) implies $\omega _{1}=\omega
_{2}=0,$ namely
\begin{equation*}
f_{1}f_{1}^{\prime \prime }=c_{1},
\end{equation*}
and
\begin{equation*}
f_{1}f_{1}^{\prime \prime }=c_{2}\left( f_{1}^{\prime }\right) ^{2},\text{ }
c_{1},c_{2}\in \mathbb{R},\text{ }c_{1}c_{2}\neq 0.
\end{equation*}
Comparing the last two equations gives $\left( f_{1}^{\prime }\right)
^{2}=c_{1}/c_{2}=const.$ and hence $f_{1}^{\prime \prime }=0,$ which is
not our case.
\end{enumerate}
\end{enumerate}
\end{proof}
From (2.2), the mean curvature is given by
\begin{equation}
2H=\left( 1+a^{2}\right) f_{1}f_{2}^{\prime \prime }+2af_{1}^{\prime
}f_{2}^{\prime }+f_{1}^{\prime \prime }f_{2}. \tag{3.5}
\end{equation}
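Indeed, by (2.2) and the second derivatives of $w=f_{1}\left( x\right) f_{2}\left( y+ax\right) $,
\begin{equation*}
2H=w_{xx}+w_{yy}=\left( f_{1}^{\prime \prime }f_{2}+2af_{1}^{\prime }f_{2}^{\prime }+a^{2}f_{1}f_{2}^{\prime \prime }\right) +f_{1}f_{2}^{\prime \prime }=\left( 1+a^{2}\right) f_{1}f_{2}^{\prime \prime }+2af_{1}^{\prime }f_{2}^{\prime }+f_{1}^{\prime \prime }f_{2}.
\end{equation*}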
\begin{theorem}
Let an affine factorable surface of type 1 in $\mathbb{I}^{3}$ be minimal.
Then, for $c_{1},c_{2},c_{3}\in \mathbb{R}$, either
\begin{enumerate}
\item[(i)] it is a non-isotropic plane; or
\item[(ii)] $w\left( x,y\right) =e^{\frac{c_{1}}{1+a^{2}}\left( x-ay\right) }\left[ c_{2}\sin \left( \frac{
c_{1}}{1+a^{2}}\left( y+ax\right) \right) +c_{3}\cos \left( \frac{c_{1}}{
1+a^{2}}\left( y+ax\right) \right) \right] .$
\end{enumerate}
\end{theorem}
\begin{proof}
If $f_{1}$ is a constant function, then (3.5) immediately yields the item
(i) of the theorem. Suppose now that $f_{1}^{\prime }f_{2}^{\prime }\neq 0.$ If $
f_{1}=c_{1}u_{1}+c_{2},$ $c_{1},c_{2}\in \mathbb{R}$, $c_{1}\neq 0,$ (3.5)
yields the following polynomial equation in $f_{1}$
\begin{equation*}
2ac_{1}f_{2}^{\prime }+\left[ \left( 1+a^{2}\right) f_{2}^{\prime \prime }
\right] f_{1}=0,
\end{equation*}
which implies $f_{2}^{\prime }=0.$ This is not our case. Then we deduce $
f_{1}^{\prime \prime }f_{2}^{\prime \prime }\neq 0$ and (3.5) can be divided
by $f_{1}^{\prime }f_{2}^{\prime }$ as follows:
\begin{equation}
-2a=\left( 1+a^{2}\right) \left( \frac{f_{1}}{f_{1}^{\prime }}\right) \left(
\frac{f_{2}^{\prime \prime }}{f_{2}^{\prime }}\right) +\left( \frac{
f_{1}^{\prime \prime }}{f_{1}^{\prime }}\right) \left( \frac{f_{2}}{
f_{2}^{\prime }}\right) , \tag{3.6}
\end{equation}
The partial derivative of (3.6) with respect to $u_{1}$ gives
\begin{equation}
\left( 1+a^{2}\right) \left( \frac{f_{1}}{f_{1}^{\prime }}\right) ^{\prime
}f_{2}^{\prime \prime }+\left( \frac{f_{1}^{\prime \prime }}{f_{1}^{\prime }}
\right) ^{\prime }f_{2}=0. \tag{3.7}
\end{equation}
We have two cases:
\begin{enumerate}
\item Case $f_{1}^{\prime }=c_{1}f_{1},$ $c_{1}\in \mathbb{R},$ $c_{1}\neq
0. $ That is a solution of (3.7) and thus (3.6) becomes
\begin{equation}
-2a=\frac{1+a^{2}}{c_{1}}\left( \frac{f_{2}^{\prime \prime }}{f_{2}^{\prime }
}\right) +c_{1}\left( \frac{f_{2}}{f_{2}^{\prime }}\right) , \tag{3.8}
\end{equation}
which is a homogeneous linear second-order ODE with constant coefficients.
The characteristic equation of (3.8) has the complex roots $\frac{-c_{1}}{1+a^{2}
}\left( a\pm i\right) $, so the solution of (3.8) is
\begin{equation}
f_{2}\left( u_{2}\right) =e^{\frac{-ac_{1}}{1+a^{2}}u_{2}}\left[ c_{2}\cos \left( \frac{c_{1}}{1+a^{2}}u_{2}\right)
+c_{3}\sin \left( \frac{c_{1}}{1+a^{2}}u_{2}\right) \right] . \tag{3.9}
\end{equation}
Considering (3.9) with the assumption of Case 1 gives the item (ii) of the
theorem.
\item Case $\left( f_{1}/f_{1}^{\prime }\right) ^{\prime }\neq 0.$ The
symmetry yields $\left( f_{2}/f_{2}^{\prime }\right) ^{\prime }\neq 0.$
Furthermore, $\left( 3.7\right) $ leads to $f_{2}^{\prime \prime
}=c_{1}f_{2},$ $c_{1}\in \mathbb{R}$, $c_{1}\neq 0,$ and substituting it
into (3.6) gives
\begin{equation}
-2a\frac{f_{2}^{\prime }}{f_{2}}=c_{1}\left( 1+a^{2}\right) \frac{f_{1}}{
f_{1}^{\prime }}+\frac{f_{1}^{\prime \prime }}{f_{1}^{\prime }}. \tag{3.10}
\end{equation}
The left-hand side of (3.10) is a non-constant function of $u_{2}$ (since $
\left( f_{2}/f_{2}^{\prime }\right) ^{\prime }\neq 0$) while the right-hand
side depends only on $u_{1},$ which gives a contradiction.
\end{enumerate}
\end{proof}
\begin{theorem}
Let an affine factorable surface of type 1 in $\mathbb{I}^{3}$ have nonzero
constant mean curvature $H_{0}.$ Then we have either
\begin{enumerate}
\item[(i)] $w\left( x,y\right) =\frac{H_{0}}{1+a^{2}}\left( y+ax\right) ^{2}$
or
\item[(ii)] $w\left( x,y\right) =\frac{H_{0}}{a}x\left( y+ax\right) .$
\end{enumerate}
\end{theorem}
\begin{proof}
We get two cases:
\begin{enumerate}
\item Case $f_{1}^{\prime }=0.$ Then (3.5) proves the item (i) of the
theorem. If $f_{1}=c_{1}u_{1}+c_{2},$ $c_{1},c_{2}\in \mathbb{R},$ $
c_{1}\neq 0,$ then by (3.5) we get a polynomial equation in $f_{1}$
\begin{equation*}
-2H_{0}+2ac_{1}f_{2}^{\prime }+\left[ \left( 1+a^{2}\right) f_{2}^{\prime
\prime }\right] f_{1}=0,
\end{equation*}
which implies $f_{2}^{\prime \prime }=0$ and thus reduces to
\begin{equation*}
f_{2}^{\prime }=\frac{H_{0}}{ac_{1}}.
\end{equation*}
This proves the item (ii) of the theorem.
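More explicitly, in this case
\begin{equation*}
w=f_{1}f_{2}=\left( c_{1}u_{1}+c_{2}\right) \left( \frac{H_{0}}{ac_{1}}u_{2}+c_{3}\right) =\frac{H_{0}}{a}x\left( y+ax\right) +\text{linear terms in }x,y,
\end{equation*}
and the linear terms can be removed by an isotropic motion of the form (2.1).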
\item Case $f_{1}^{\prime \prime }\neq 0.$ By symmetry, we have $
f_{2}^{\prime \prime }\neq 0.$ Then (3.5) can be rearranged as
\begin{equation}
2H_{0}=\left( 1+a^{2}\right) f_{1}p_{2}\dot{p}_{2}+2ap_{1}p_{2}+f_{2}p_{1}
\dot{p}_{1}, \tag{3.11}
\end{equation}
where $p_{i}=\frac{df_{i}}{du_{i}}$ and $\dot{p}_{i}=\frac{dp_{i}}{df_{i}}=
\frac{f_{i}^{\prime \prime }}{f_{i}^{\prime }}.$ (3.11) can be divided by $
f_{2}p_{1}$ as
\begin{equation}
\frac{2H_{0}}{f_{2}p_{1}}=\left( 1+a^{2}\right) \left( \frac{f_{1}}{p_{1}}
\right) \left( \frac{p_{2}\dot{p}_{2}}{f_{2}}\right) +2a\frac{p_{2}}{f_{2}}+
\dot{p}_{1}. \tag{3.12}
\end{equation}
The partial derivative of (3.12) with respect to $f_{1}$ leads to
\begin{equation}
\left( \frac{2H_{0}}{f_{2}}\right) \frac{d}{df_{1}}\left( \frac{1}{p_{1}}
\right) =\left( 1+a^{2}\right) \frac{d}{df_{1}}\left( \frac{f_{1}}{p_{1}}
\right) \left( \frac{p_{2}\dot{p}_{2}}{f_{2}}\right) +\ddot{p}_{1}.
\tag{3.13}
\end{equation}
If $p_{1}=c_{3}f_{1}$, $c_{3}\in \mathbb{R},$ $c_{3}\neq 0,$ then the
right-hand side of (3.13) becomes zero while the left-hand side does not,
which is not possible. Thereby, (3.13) can be rewritten, after dividing by $
\frac{d}{df_{1}}\left( \frac{f_{1}}{p_{1}}\right) ,$ as
\begin{equation*}
2H_{0}\underset{\omega _{1}}{\underbrace{\left( \frac{1}{f_{2}}\right) }}
\underset{\omega _{2}}{\underbrace{\frac{\frac{d}{df_{1}}\left( \frac{1}{
p_{1}}\right) }{\frac{d}{df_{1}}\left( \frac{f_{1}}{p_{1}}\right) }}}=\left(
1+a^{2}\right) \underset{\omega _{3}}{\underbrace{\frac{p_{2}\dot{p}_{2}}{
f_{2}}}}+\underset{\omega _{4}}{\underbrace{\frac{\ddot{p}_{1}}{\frac{d}{
df_{1}}\left( \frac{f_{1}}{p_{1}}\right) }}},
\end{equation*}
which implies $\omega _{i}=const.,$ $i=1,...,4,$ for every pair $\left(
f_{1},f_{2}\right) .$ However, this is a contradiction since $\omega _{1}=
\dfrac{1}{f_{2}}\neq const.$
\end{enumerate}
\end{proof}
\section{Affine factorable surfaces of type 2}
An \textit{affine factorable surface} \textit{of type 2 }in $\mathbb{I}^{3}$
is a graph surface given by
\begin{equation*}
z=w\left( x,y\right) =f_{1}\left( y+az\right) f_{2}\left( z\right) ,\text{ }
a\neq 0.
\end{equation*}
Put $u_{1}=y+az$ and $u_{2}=z.$ From (2.3), the Gaussian curvature is given by
\begin{equation}
K=\frac{f_{1}f_{2}f_{1}^{\prime \prime }f_{2}^{\prime \prime }-\left(
f_{1}^{\prime }f_{2}^{\prime }\right) ^{2}}{\left( af_{1}^{\prime
}f_{2}+f_{1}f_{2}^{\prime }\right) ^{4}}, \tag{4.1}
\end{equation}
where $f_{1}^{\prime }=\frac{df_{1}}{du_{1}}$ and $f_{2}^{\prime }=\frac{
df_{2}}{du_{2}}.$ Notice that the regularity requires $af_{1}^{\prime
}f_{2}+f_{1}f_{2}^{\prime }\neq 0$.
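To verify (4.1), one computes from $w=f_{1}\left( y+az\right) f_{2}\left( z\right) $ that $w_{y}=f_{1}^{\prime }f_{2},$ $w_{z}=af_{1}^{\prime }f_{2}+f_{1}f_{2}^{\prime },$ and
\begin{equation*}
w_{yy}=f_{1}^{\prime \prime }f_{2},\text{ }w_{yz}=af_{1}^{\prime \prime }f_{2}+f_{1}^{\prime }f_{2}^{\prime },\text{ }w_{zz}=a^{2}f_{1}^{\prime \prime }f_{2}+2af_{1}^{\prime }f_{2}^{\prime }+f_{1}f_{2}^{\prime \prime }.
\end{equation*}
Then $w_{yy}w_{zz}-w_{yz}^{2}=f_{1}f_{2}f_{1}^{\prime \prime }f_{2}^{\prime \prime }-\left( f_{1}^{\prime }f_{2}^{\prime }\right) ^{2},$ the terms containing $a$ cancelling, and (4.1) follows from (2.3).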
\begin{theorem}
Let an affine factorable surface of type 2 in $\mathbb{I}^{3}$ have constant
Gaussian curvature $K_{0}$. Then it is flat, i.e. $K_{0}=0,$ and for $
c_{1},c_{2},c_{3}\in \mathbb{R},$ one of the following occurs:
\begin{enumerate}
\item[(i)] $w\left( y,z\right) =c_{1}f_{1}\left( y+az\right) $, $\frac{
\partial f_{1}}{\partial z}\neq 0;$
\item[(ii)] $w\left( y,z\right) =c_{1}e^{c_{2}\left( y+az\right) +c_{3}z};$
\item[(iii)] $w\left( y,z\right) =c_{1}\left( y+az\right) ^{\frac{1}{1-c_{2}}
}z^{\frac{c_{2}}{c_{2}-1}},$ $c_{2}\neq 1.$
\end{enumerate}
\end{theorem}
\begin{proof}
If $K_{0}=0$ in (4.1), then the proofs of the items (i), (ii), (iii) are
similar to those of Theorem 3.1. The rest of the proof is by
contradiction. Suppose that $K_{0}\neq 0$ and hence $f_{1},f_{2}$ must be
non-constant. In what follows, we use the fact that the roles of $
f_{1},f_{2} $ are symmetric in (4.1). If $f_{1}=c_{1}u_{1}+c_{2},$ $
c_{1},c_{2}\in \mathbb{R},$ $c_{1}\neq 0,$ then (4.1) turns into a polynomial
equation in $f_{1}$
\begin{equation*}
\xi _{1}\left( u_{2}\right) +\xi _{2}\left( u_{2}\right) f_{1}+\xi
_{3}\left( u_{2}\right) f_{1}^{2}+\xi _{4}\left( u_{2}\right) f_{1}^{3}+\xi
_{5}\left( u_{2}\right) f_{1}^{4}=0,
\end{equation*}
where
\begin{equation*}
\left.
\begin{array}{l}
\xi _{1}\left( u_{2}\right) =K_{0}a^{4}c_{1}^{4}f_{2}^{4}+c_{1}^{2}\left(
f_{2}^{\prime }\right) ^{2}, \\
\xi _{2}\left( u_{2}\right) =4K_{0}a^{3}c_{1}^{3}f_{2}^{3}f_{2}^{\prime },
\\
\xi _{3}\left( u_{2}\right) =6K_{0}a^{2}c_{1}^{2}f_{2}^{2}\left(
f_{2}^{\prime }\right) ^{2}, \\
\xi _{4}\left( u_{2}\right) =4K_{0}ac_{1}f_{2}\left( f_{2}^{\prime }\right)
^{3}, \\
\xi _{5}\left( u_{2}\right) =K_{0}\left( f_{2}^{\prime }\right) ^{4}.
\end{array}
\right.
\end{equation*}
The fact that each coefficient must vanish contradicts $f_{2}\neq
const. $ Thereby, we conclude $f_{1}^{\prime \prime }f_{2}^{\prime \prime
}\neq 0.$ Next, put $\omega _{1}=f_{1}f_{1}^{\prime \prime },$ $\omega
_{2}=\left( f_{1}^{\prime }\right) ^{2},$ $\omega _{3}=f_{1}^{\prime },$ $
\omega _{4}=f_{1}$ in (4.1). After taking the partial derivative of (4.1) with
respect to $u_{1},$ it can be rewritten as
\begin{equation}
\mu _{1}f_{2}^{2}f_{2}^{\prime \prime }+\mu _{2}f_{2}f_{2}^{\prime
}f_{2}^{\prime \prime }-\mu _{3}f_{2}\left( f_{2}^{\prime }\right) ^{2}-\mu
_{4}\left( f_{2}^{\prime }\right) ^{3}=0, \tag{4.2}
\end{equation}
where
\begin{equation}
\left.
\begin{array}{l}
\mu _{1}=a\left( \omega _{1}^{\prime }\omega _{3}-4\omega _{1}\omega
_{3}^{\prime }\right) , \\
\mu _{2}=\omega _{1}^{\prime }\omega _{4}-4\omega _{1}\omega _{4}^{\prime },
\\
\mu _{3}=a\left( \omega _{2}^{\prime }\omega _{3}+4\omega _{2}\omega
_{3}^{\prime }\right) , \\
\mu _{4}=-\omega _{2}^{\prime }\omega _{4}+4\omega _{2}\omega _{4}^{\prime },
\text{ }
\end{array}
\right. \tag{4.3}
\end{equation}
for $\omega _{i}^{\prime }=\frac{d\omega _{i}}{du_{1}},i=1,...,4.$ By
dividing (4.2) by $f_{2}^{2}f_{2}^{\prime },$ we deduce
\begin{equation}
\frac{f_{2}^{\prime \prime }}{f_{2}^{\prime }}\left( \mu _{1}+\mu _{2}\frac{
f_{2}^{\prime }}{f_{2}}\right) =\left( \mu _{3}\frac{f_{2}^{\prime }}{f_{2}}
+\mu _{4}\left( \frac{f_{2}^{\prime }}{f_{2}}\right) ^{2}\right) . \tag{4.4}
\end{equation}
To solve (4.4) we have to distinguish several cases:
\begin{enumerate}
\item Case $\frac{f_{2}^{\prime }}{f_{2}}=c_{1}\neq 0,$ $c_{1}\in \mathbb{R}
. $ Substituting it into (4.1) leads to the polynomial equation in $f_{2}$
\begin{equation}
c_{1}^{2}\left( f_{1}f_{1}^{\prime \prime }-\left( f_{1}^{\prime }\right)
^{2}\right) -K_{0}\left( af_{1}^{\prime }+c_{1}f_{1}\right) ^{4}f_{2}^{2}=0,
\tag{4.5}
\end{equation}
where the coefficient $af_{1}^{\prime }+c_{1}f_{1}$ must vanish because $
K_{0}\neq 0$. This however contradicts the regularity.
\item Case $\mu _{i}=0,$ $i=1,...,4.$ Because $\mu _{3}=0,\ $we conclude $
6\left( f_{1}^{\prime }\right) ^{2}f_{1}^{\prime \prime }=0$, which is not
our case.
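Here, by (4.3) together with $\omega _{2}=\left( f_{1}^{\prime }\right) ^{2}$ and $\omega _{3}=f_{1}^{\prime },$ so that $\omega _{2}^{\prime }=2f_{1}^{\prime }f_{1}^{\prime \prime }$ and $\omega _{3}^{\prime }=f_{1}^{\prime \prime },$
\begin{equation*}
\mu _{3}=a\left( 2f_{1}^{\prime }f_{1}^{\prime \prime }\cdot f_{1}^{\prime }+4\left( f_{1}^{\prime }\right) ^{2}f_{1}^{\prime \prime }\right) =6a\left( f_{1}^{\prime }\right) ^{2}f_{1}^{\prime \prime }.
\end{equation*}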
\item Case $\mu _{1}+\mu _{2}\dfrac{f_{2}^{\prime }}{f_{2}}\neq 0.$ Then (4.4)
gives
\begin{equation}
\frac{f_{2}^{\prime \prime }}{f_{2}^{\prime }}=\frac{\mu _{3}\frac{
f_{2}^{\prime }}{f_{2}}+\mu _{4}\left( \frac{f_{2}^{\prime }}{f_{2}}\right)
^{2}}{\mu _{1}+\mu _{2}\frac{f_{2}^{\prime }}{f_{2}}}. \tag{4.6}
\end{equation}
The partial derivative of (4.6) with respect to $u_{1}$ gives a polynomial
equation in $\frac{f_{2}^{\prime }}{f_{2}}$ and the fact that each
coefficient must vanish yields the following system:
\begin{equation}
\left\{
\begin{array}{l}
\mu _{2}^{\prime }\mu _{4}-\mu _{2}\mu _{4}^{\prime }=0, \\
\mu _{2}^{\prime }\mu _{3}-\mu _{2}\mu _{3}^{\prime }+\mu _{1}^{\prime }\mu
_{4}-\mu _{1}\mu _{4}^{\prime }=0, \\
\mu _{1}^{\prime }\mu _{3}-\mu _{1}\mu _{3}^{\prime }=0.
\end{array}
\right. \tag{4.7}
\end{equation}
By (4.7), we deduce that $\mu _{3}=c_{1}\mu _{1},$ $\mu _{4}=c_{2}\mu _{2},$
$c_{1},c_{2}\in \mathbb{R},$ and
\begin{equation}
\left( c_{2}-c_{1}\right) \left( \mu _{1}^{\prime }\mu _{2}-\mu _{1}\mu
_{2}^{\prime }\right) =0. \tag{4.8}
\end{equation}
We have to consider two cases:
\begin{enumerate}
\item $c_{1}=c_{2}.$ Put $c_{1}=c_{2}=c$ and thus $c$ must be nonzero due to
the assumption of Case 3. Then (4.4) leads to
\begin{equation*}
\frac{f_{2}^{\prime \prime }}{f_{2}^{\prime }}=c\frac{f_{2}^{\prime }}{f_{2}}
,
\end{equation*}
which implies $f_{2}^{\prime }=c_{3}f_{2}^{c},$ $c_{3}\in \mathbb{R}$, $
c_{3}\neq 0.$ Note that $c\neq 1$ due to Case 1. Hence, (4.1) turns to
\begin{equation}
\frac{K_{0}}{c_{3}^{2}\left( cf_{1}f_{1}^{\prime \prime }-\left(
f_{1}^{\prime }\right) ^{2}\right) }=\frac{f_{2}^{2c}}{\left( af_{1}^{\prime
}f_{2}+c_{3}f_{1}f_{2}^{c}\right) ^{4}}. \tag{4.9}
\end{equation}
The partial derivative of (4.9) with respect to $f_{2}$ gives
\begin{equation*}
a\left( c-2\right) f_{1}^{\prime }-cc_{3}f_{1}f_{2}^{c-1}=0,
\end{equation*}
which yields $c=2$ and $2c_{3}f_{1}f_{2}=0.$ This however is not possible.
\item $c_{1}\neq c_{2}.$ It follows from (4.8) that $\mu _{1}=c_{4}\mu _{2},$
$c_{4}\in \mathbb{R}.$ On the other hand, plugging $\omega _{1}=\omega
_{3}^{\prime }\omega _{4}$ and $\omega _{3}=\omega _{4}^{\prime }$ into the
equation $\mu _{3}-c_{1}\mu _{1}=0$ yields
\begin{equation}
\left( 6-c_{1}\right) \omega _{3}^{2}\omega _{3}^{\prime }-c_{1}\omega
_{3}\omega _{3}^{\prime \prime }\omega _{4}+4c_{1}\left( \omega _{3}^{\prime
}\right) ^{2}\omega _{4}=0. \tag{4.10}
\end{equation}
Dividing (4.10) with $\omega _{3}\omega _{4}\omega _{3}^{\prime }$ gives
\begin{equation}
\left( 6-c_{1}\right) \frac{\omega _{3}}{\omega _{4}}-c_{1}\frac{\omega
_{3}^{\prime \prime }}{\omega _{3}^{\prime }}+4c_{1}\frac{\omega
_{3}^{\prime }}{\omega _{3}}=0. \tag{4.11}
\end{equation}
Integrating (4.11) leads to
\begin{equation}
\omega _{3}^{\prime }=c_{5}\omega _{3}^{4}\omega _{4}^{\frac{6-c_{1}}{c_{1}}
},\text{ }c_{5}\in \mathbb{R},\text{ }c_{5}\neq 0, \tag{4.12}
\end{equation}
or
\begin{equation}
\omega _{1}=c_{5}\omega _{3}^{4}\omega _{4}^{\frac{6}{c_{1}}}. \tag{4.13}
\end{equation}
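This integration uses $\omega _{3}=\omega _{4}^{\prime },$ so that each term of (4.11) is a logarithmic derivative (up to the signs of the arguments):
\begin{equation*}
\left( 6-c_{1}\right) \left( \ln \omega _{4}\right) ^{\prime }-c_{1}\left( \ln \omega _{3}^{\prime }\right) ^{\prime }+4c_{1}\left( \ln \omega _{3}\right) ^{\prime }=0.
\end{equation*}
Integrating and exponentiating yields (4.12); multiplying (4.12) by $\omega _{4}$ and recalling $\omega _{1}=\omega _{3}^{\prime }\omega _{4}$ gives (4.13).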
Moreover, $\mu _{1}-c_{4}\mu _{2}=0$ implies
\begin{equation*}
\frac{\omega _{1}^{\prime }}{\omega _{1}}-4\frac{a\omega _{3}^{\prime
}-c_{4}\omega _{4}^{\prime }}{a\omega _{3}-c_{4}\omega _{4}}=0
\end{equation*}
or
\begin{equation}
\omega _{1}=c_{6}\left( a\omega _{3}-c_{4}\omega _{4}\right) ^{4},\text{ }
c_{6}\in \mathbb{R},\text{ }c_{6}\neq 0. \tag{4.14}
\end{equation}
Comparing (4.13) and (4.14) leads to
\begin{equation}
\omega _{3}=\frac{-c_{4}\omega _{4}}{\left( \frac{c_{5}}{c_{6}}\right) ^{
\frac{1}{4}}\omega _{4}^{\frac{3}{2c_{1}}}-a}. \tag{4.15}
\end{equation}
Integrating (4.12) once more gives
\begin{equation}
\omega _{3}^{2}=\frac{1}{c_{7}\omega _{4}^{\frac{6}{c_{1}}}+c_{8}},\text{ }
c_{7},c_{8}\in \mathbb{R},\text{ }c_{7}\neq 0. \tag{4.16}
\end{equation}
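The step from (4.12) to (4.16) again uses $\omega _{3}=\omega _{4}^{\prime }$: writing $\omega _{3}^{\prime }=\omega _{3}\frac{d\omega _{3}}{d\omega _{4}},$ the equation (4.12) separates as
\begin{equation*}
\frac{d\omega _{3}}{\omega _{3}^{3}}=c_{5}\omega _{4}^{\frac{6-c_{1}}{c_{1}}}d\omega _{4},
\end{equation*}
and an integration gives $-\frac{1}{2}\omega _{3}^{-2}=\frac{c_{1}c_{5}}{6}\omega _{4}^{\frac{6}{c_{1}}}+const.,$ which is (4.16).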
Equating the square of (4.15) with (4.16), we obtain an equation of the form
\begin{equation*}
c_{4}^{2}\omega _{4}^{\frac{6+2c_{1}}{c_{1}}}-\left( \frac{c_{5}}{c_{6}}
\right) ^{\frac{1}{2}}\omega _{4}^{\frac{3}{c_{1}}}-2a\left( \frac{c_{5}}{
c_{6}}\right) ^{\frac{1}{4}}\omega _{4}^{\frac{3}{2c_{1}}}+c_{8}c_{4}^{2}
\omega _{4}^{2}-a^{2}=0,
\end{equation*}
which gives a contradiction because $\omega _{4}=f_{1}$ is an arbitrary
non-constant function.
\end{enumerate}
\end{enumerate}
\end{proof}
By (2.3) the mean curvature is
\begin{equation}
2H=\frac{\left( f_{1}^{\prime }f_{2}\right) ^{2}f_{1}f_{2}^{\prime \prime
}-2\left( f_{1}^{\prime }f_{2}^{\prime }\right) ^{2}f_{1}f_{2}+\left(
f_{1}f_{2}^{\prime }\right) ^{2}f_{2}f_{1}^{\prime \prime
}+f_{1}f_{2}^{\prime \prime }+2af_{1}^{\prime }f_{2}^{\prime
}+a^{2}f_{1}^{\prime \prime }f_{2}}{\left( af_{1}^{\prime
}f_{2}+f_{1}f_{2}^{\prime }\right) ^{3}}. \tag{4.17}
\end{equation}
\begin{theorem}
There does not exist a minimal affine factorable surface of type 2 in $
\mathbb{I}^{3}$, except non-isotropic planes.
\end{theorem}
\begin{proof}
The proof is by contradiction. Setting $H=0$ in (4.17) yields
\begin{equation}
\left( f_{1}^{\prime }f_{2}\right) ^{2}f_{1}f_{2}^{\prime \prime }-2\left(
f_{1}^{\prime }f_{2}^{\prime }\right) ^{2}f_{1}f_{2}+\left(
f_{1}f_{2}^{\prime }\right) ^{2}f_{2}f_{1}^{\prime \prime
}+f_{1}f_{2}^{\prime \prime }+2af_{1}^{\prime }f_{2}^{\prime
}+a^{2}f_{1}^{\prime \prime }f_{2}=0. \tag{4.18}
\end{equation}
If $f_{1}$ or $f_{2}$ is a constant, then (4.18) implies that the surface is
a non-isotropic plane. Assume that $f_{1},f_{2}$ are non-constant. If $
f_{1}^{\prime \prime }=0,$ then (4.18) gives a polynomial equation in $
f_{1}: $
\begin{equation*}
2af_{1}^{\prime }f_{2}^{\prime }+\left[ \left( f_{1}^{\prime }f_{2}\right)
^{2}f_{2}^{\prime \prime }-2\left( f_{1}^{\prime }f_{2}^{\prime }\right)
^{2}f_{2}+f_{2}^{\prime \prime }\right] f_{1}=0,
\end{equation*}
which is not possible because $af_{1}^{\prime }f_{2}^{\prime }\neq 0.$ The
symmetry implies $f_{2}^{\prime \prime }\neq 0.$ Henceforth we deal with the
case $f_{1}^{\prime \prime }f_{2}^{\prime \prime }\neq 0.$ Dividing (4.18)
by $\left( f_{1}^{\prime }f_{2}^{\prime }\right) ^{2}f_{1}f_{2}$ leads to
\begin{equation}
\left.
\begin{array}{l}
\frac{f_{2}f_{2}^{\prime \prime }}{\left( f_{2}^{\prime }\right) ^{2}}+\frac{
f_{1}f_{1}^{\prime \prime }}{\left( f_{1}^{\prime }\right) ^{2}}+\left(
\frac{1}{\left( f_{1}^{\prime }\right) ^{2}}\right) \left( \frac{
f_{2}^{\prime \prime }}{f_{2}\left( f_{2}^{\prime }\right) ^{2}}\right) + \\
+2a\left( \frac{1}{f_{1}f_{1}^{\prime }}\right) \left( \frac{1}{
f_{2}f_{2}^{\prime }}\right) +a^{2}\left( \frac{f_{1}^{\prime \prime }}{
f_{1}\left( f_{1}^{\prime }\right) ^{2}}\right) \left( \frac{1}{\left(
f_{2}^{\prime }\right) ^{2}}\right) =2.
\end{array}
\right. \tag{4.19}
\end{equation}
Taking the mixed partial derivative of (4.19) with respect to $u_{1}$ and $u_{2}$ yields
\begin{equation}
\left.
\begin{array}{l}
\underset{\omega _{1}}{\underbrace{\left( \frac{1}{\left( f_{1}^{\prime
}\right) ^{2}}\right) ^{\prime }}}\underset{\omega _{2}}{\underbrace{\left(
\frac{f_{2}^{\prime \prime }}{f_{2}\left( f_{2}^{\prime }\right) ^{2}}
\right) ^{\prime }}}+2a\underset{\omega _{3}}{\underbrace{\left( \frac{1}{
f_{1}f_{1}^{\prime }}\right) ^{\prime }}}\underset{\omega _{4}}{\underbrace{
\left( \frac{1}{f_{2}f_{2}^{\prime }}\right) ^{\prime }}} \\
+\underset{\omega _{5}}{\underbrace{a^{2}\left( \frac{f_{1}^{\prime \prime }
}{f_{1}\left( f_{1}^{\prime }\right) ^{2}}\right) ^{\prime }}}\underset{
\omega _{6}}{\underbrace{\left( \frac{1}{\left( f_{2}^{\prime }\right) ^{2}}
\right) ^{\prime }}}=0,
\end{array}
\right. \tag{4.20}
\end{equation}
where $\omega _{1}\omega _{6}\neq 0$ because $f_{1}^{\prime \prime
}f_{2}^{\prime \prime }\neq 0.$ To solve (4.20), we consider two cases:
\begin{enumerate}
\item Case $\omega _{3}=0.$ It follows that $f_{1}f_{1}^{\prime }=c_{1},$ $
c_{1}\in \mathbb{R},$ $c_{1}\neq 0.$ Then $\omega _{1}=\frac{2}{c_{1}},$ $
\omega _{5}=\frac{2a^{2}c_{1}}{f_{1}^{4}}$ and hence (4.20) reduces to the
following polynomial equation in $f_{1}$
\begin{equation*}
f_{1}^{4}\omega _{2}+a^{2}c_{1}^{2}\omega _{6}=0,
\end{equation*}
which is not possible because $\omega _{6}\neq 0.$
\item Case $\omega _{3}\neq 0.$ After dividing (4.20) by $\omega
_{1}\omega _{6},$ we write
\begin{equation}
\mu _{1}\left( u_{2}\right) +2a\mu _{2}\left( u_{1}\right) \mu _{3}\left(
u_{2}\right) +\mu _{4}\left( u_{1}\right) =0, \tag{4.21}
\end{equation}
where
\begin{equation*}
\mu _{1}=\frac{\omega _{2}}{\omega _{6}},\text{ }\mu _{2}=\frac{\omega _{3}}{
\omega _{1}},\text{ }\mu _{3}=\frac{\omega _{4}}{\omega _{6}},\text{ }\mu
_{4}=\frac{\omega _{5}}{\omega _{1}}.
\end{equation*}
Notice also that each $\mu _{i},$ $i=1,...,4,$ in (4.21) must be constant for
every pair $\left( u_{1},u_{2}\right) .$ The constancy of $\mu _{2}$ and $
\mu _{4}$ leads, respectively, to
\begin{equation}
f_{1}=\frac{f_{1}^{\prime }}{c_{1}+c_{2}\left( f_{1}^{\prime }\right) ^{2}}
\tag{4.22}
\end{equation}
and
\begin{equation}
\frac{f_{1}^{\prime \prime }}{f_{1}^{\prime }}=f_{1}\left( \frac{c_{3}}{
f_{1}^{\prime }}+c_{4}f_{1}^{\prime }\right) , \tag{4.23}
\end{equation}
where $c_{1},...,c_{4}\in \mathbb{R}$, $c_{1}\neq 0$ because $\omega
_{3}\neq 0.$ Put $p_{1}=\frac{df_{1}}{du_{1}}$ and $\dot{p}_{1}=\frac{dp_{1}
}{df_{1}}=\frac{f_{1}^{\prime \prime }}{f_{1}^{\prime }}$ in (4.22) and
(4.23). The derivative of (4.22) with respect to $f_{1}$ gives
\begin{equation}
\dot{p}_{1}=\frac{\left( c_{1}+c_{2}p_{1}^{2}\right) ^{2}}{c_{1}-c_{2}\left(
p_{1}\right) ^{2}}. \tag{4.24}
\end{equation}
On the other hand, substituting (4.22) into (4.23) leads to
\begin{equation}
\dot{p}_{1}=\frac{c_{3}+c_{4}p_{1}^{2}}{c_{1}+c_{2}p_{1}^{2}}. \tag{4.25}
\end{equation}
Equating (4.24) and (4.25) yields the polynomial equation in $p_{1}$
\begin{equation*}
\xi _{1}+\xi _{2}p_{1}^{2}+\xi _{3}p_{1}^{4}+\xi _{4}p_{1}^{6}=0,
\end{equation*}
in which the following coefficients
\begin{equation*}
\left.
\begin{array}{l}
\xi _{1}=c_{1}^{3}-c_{1}c_{3}, \\
\xi _{2}=3c_{1}^{2}c_{2}-c_{1}c_{4}+c_{2}c_{3}, \\
\xi _{3}=3c_{1}c_{2}^{2}+c_{2}c_{4}, \\
\xi _{4}=c_{2}^{3}
\end{array}
\right.
\end{equation*}
must vanish. From $\xi _{4}=0$ we get $c_{2}=0$, and then $\xi _{2}=0$ gives $c_{4}=0$; thus
from (4.22) and (4.24) we get $f_{1}^{\prime \prime }=c_{1}f_{1}^{\prime
}=c_{1}^{2}f_{1}.$ Substituting these into (4.18) leads to the
polynomial equation in $f_{1}$
\begin{equation}
c_{1}^{2}\left[ f_{2}^{2}f_{2}^{\prime \prime }-f_{2}\left( f_{2}^{\prime
}\right) ^{2}\right] f_{1}^{3}+\left[ f_{2}^{\prime \prime
}+2ac_{1}f_{2}^{\prime }+a^{2}c_{1}^{2}f_{2}\right] f_{1}=0, \tag{4.26}
\end{equation}
in which the fact that the coefficients must vanish yields $f_{2}^{\prime
}+ac_{1}f_{2}=0.$ This, however, contradicts the regularity condition $af_{1}^{\prime }f_{2}+f_{1}f_{2}^{\prime }\neq 0.$
\end{enumerate}
\end{proof}
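Let us record, for the reader's convenience, how the coefficients $\xi _{1},...,\xi _{4}$ in the above proof arise. Equating (4.24) and (4.25) and clearing denominators gives
\begin{equation*}
\left( c_{1}+c_{2}p_{1}^{2}\right) ^{3}=\left( c_{3}+c_{4}p_{1}^{2}\right)
\left( c_{1}-c_{2}p_{1}^{2}\right) ,
\end{equation*}
and expanding both sides and collecting the powers of $p_{1}$ yields
\begin{equation*}
\left( c_{1}^{3}-c_{1}c_{3}\right) +\left(
3c_{1}^{2}c_{2}-c_{1}c_{4}+c_{2}c_{3}\right) p_{1}^{2}+\left(
3c_{1}c_{2}^{2}+c_{2}c_{4}\right) p_{1}^{4}+c_{2}^{3}p_{1}^{6}=0.
\end{equation*}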
\begin{lemma}
Let an affine factorable surface of type 2 in $\mathbb{I}^{3}$ have nonzero
constant mean curvature $H_{0}.$ If $f_{1}$ or $f_{2}$ is a linear function,
then we have
\begin{equation}
w\left( y,z\right) =\frac{c_{1}}{\sqrt{\left\vert H_{0}\right\vert }}\sqrt{
y+az}\text{, }c_{1}\in \mathbb{R}. \tag{4.27}
\end{equation}
\end{lemma}
\begin{proof}
If $f_{1}=c_{1},$ $c_{1}\in \mathbb{R},$ then (4.17) reduces to
\begin{equation}
2H_{0}c_{1}^{2}=\frac{f_{2}^{\prime \prime }}{\left( f_{2}^{\prime }\right)
^{3}}. \tag{4.28}
\end{equation}
Solving (4.28) gives
\begin{equation*}
f_{2}\left( u_{2}\right) =\frac{-1}{2H_{0}c_{1}^{2}}\sqrt{
-4H_{0}c_{1}^{2}u_{2}+c_{2}}+c_{3},
\end{equation*}
for $c_{2},c_{3}\in \mathbb{R}.$ This proves the assertion of the lemma in this case. If
$f_{1}^{\prime }=c_{4}\neq 0,$ $c_{4}\in \mathbb{R},$ then (4.17) reduces to
\begin{equation*}
2H_{0}\left( ac_{4}f_{2}+f_{1}f_{2}^{\prime }\right)
^{3}=2ac_{4}f_{2}^{\prime }+\left[ c_{4}^{2}f_{2}^{2}f_{2}^{\prime \prime
}-2c_{4}^{2}\left( f_{2}^{\prime }\right) ^{2}f_{2}+f_{2}^{\prime \prime }
\right] f_{1},
\end{equation*}
which is a polynomial equation in $f_{1}.$ It is easy to see that the
coefficient of the term of degree 3 is $2H_{0}\left( f_{2}^{\prime }\right) ^{3},$
which cannot vanish. This completes the proof.
\end{proof}
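We also note how (4.28) is integrated in the above proof: putting $g=f_{2}^{\prime }$, the equation $g^{\prime }g^{-3}=2H_{0}c_{1}^{2}$ is equivalent to $\left( g^{-2}\right) ^{\prime }=-4H_{0}c_{1}^{2}$, whence $g^{-2}=-4H_{0}c_{1}^{2}u_{2}+c_{2}$; integrating $f_{2}^{\prime }=\left( -4H_{0}c_{1}^{2}u_{2}+c_{2}\right) ^{-1/2}$ once more gives the expression for $f_{2}$ stated in the proof.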
\section{Conclusions}
The results of the present paper and \cite{AE, Ay} relating to the (affine)
factorable surfaces in $\mathbb{I}^{3}$ with $K,H$ constants are summed up
in Table 1 which categorizes those surfaces. Notice also that, without
imposing extra conditions, finding the affine factorable surfaces of type 2 with $
H=const.\neq 0$ is still an open problem.
\begin{sidewaystable}\centering
$
\begin{tabular}{lllll}
\hline
\textbf{Properties} & \textbf{FS of type 1} & \textbf{FS of type 2} &
\textbf{AFS of type 1} & \textbf{AFS of type 2} \\ \hline
${\small K=0}$ & $\left.
\begin{array}{l}
{\small z=c}_{1}{\small f}_{2}\left( y\right) {\small ;} \\
{\small z=c}_{1}{\small e}^{{\small c}_{2}{\small x+c}_{3}{\small y}}{\small
;} \\
{\small z=c}_{1}{\small x}^{\frac{{\small 1}}{{\small 1-c}_{2}}}{\small y}^{
\frac{{\small c}_{2}}{{\small c}_{2}{\small -1}}}{\small .}
\end{array}
\right. $ & $\left.
\begin{array}{l}
{\small x=c}_{1}{\small f}_{1}\left( {\small z}\right) {\small ;} \\
{\small x=c}_{1}{\small e}^{c_{2}y+c_{3}z}{\small ;} \\
{\small x=c}_{1}{\small y}^{\frac{{\small 1}}{{\small 1-c}_{2}}}{\small z}^{
\frac{{\small c}_{2}}{{\small c}_{2}{\small -1}}}{\small .}
\end{array}
\right. $ & $\left.
\begin{array}{l}
{\small z=c}_{1}{\small f}_{2}\left( {\small u}_{2}\right) {\small ;} \\
{\small z=c}_{1}{\small e}^{c_{2}x+c_{3}{\small u}_{2}}{\small ;} \\
{\small z=c}_{1}{\small x}^{\frac{{\small 1}}{{\small 1-c}_{2}}}{\small u}
_{2}^{\frac{{\small c}_{2}}{{\small c}_{2}{\small -1}}}{\small ,} \\
{\small u}_{2}{\small =y+ax.}
\end{array}
\right. $ & $\left.
\begin{array}{l}
{\small x=c}_{1}{\small f}_{1}\left( {\small u}_{1}\right) {\small ;} \\
{\small x=c}_{1}{\small e}^{c_{2}{\small u}_{1}+c_{3}z}{\small ;} \\
{\small x=c}_{1}{\small u}_{1}^{\frac{{\small 1}}{{\small 1-c}_{2}}}{\small z
}^{\frac{{\small c}_{2}}{{\small c}_{2}{\small -1}}}{\small ,} \\
{\small u}_{1}{\small =y+az.}
\end{array}
\right. $ \\ \hline
${\small H=0}$ & $\left.
\begin{array}{l}
\text{{\small Non-isotropic planes;}} \\
{\small z=c}_{1}{\small xy;} \\
{\small z=}\left( c_{1}e^{c_{2}x}+c_{3}e^{-c_{2}x}\right) \\
{\small \times }\left( c_{4}\cos \left( \text{{\small $c_{2}$$y$}}\right)
+c_{5}\sin \left( \text{{\small $c_{2}$$y$}}\right) \right) {\small .}
\end{array}
\right. $ & $\left.
\begin{array}{l}
\text{{\small Non-isotropic planes;}} \\
{\small x=y}\tan \left( c_{1}z\right) {\small ;} \\
{\small x=}\frac{c_{1}y}{z}{\small .}
\end{array}
\right. $ & $\left.
\begin{array}{l}
\text{{\small Non-isotropic planes;}} \\
{\small z=}e^{c_{1}x}\left[ c_{2}\sin \left( c_{3}{\small u}_{2}\right)
\right. \\
\left. +c_{4}\cos \left( c_{5}{\small u}_{2}\right) \right] , \\
{\small u}_{2}{\small =y+ax.}
\end{array}
\right. $ & {\small Non-isotropic planes.} \\ \hline
${\small K=K}_{0}{\small \neq 0}$ & ${\small z=}\sqrt{\left\vert
K_{0}\right\vert }{\small xy.}$ & $\left.
\begin{array}{l}
{\small x=}\frac{\pm z}{\sqrt{\left\vert K_{0}\right\vert }y}{\small ;} \\
{\small x=}\frac{c_{1}f_{2}\left( z\right) }{y}{\small ,}\text{ } \\
{\small z=}\int \sqrt{{\small c}_{2}{\small f}_{2}^{-1}{\small -}\frac{K_{0}
}{c_{1}^{2}}}{\small df}_{2}\text{{\small .}}
\end{array}
\right. $ & ${\small z=}\sqrt{\left\vert {\small K}_{0}\right\vert }{\small x
}\left( {\small y+ax}\right) {\small .}$ & {\small Non-existence.} \\ \hline
${\small H=H}_{0}{\small \neq 0}$ & ${\small z}=\frac{{\small H}_{0}}{
{\small c}_{1}}{\small y}^{2},{\small \ }$ & ${\small x=}\pm \sqrt{\frac{-z}{
H_{0}}}$ & $\left.
\begin{array}{l}
{\small z=}\frac{{\small H}_{0}}{{\small 1+a}^{2}}\left( {\small y+ax}
\right) ^{2}{\small ;} \\
{\small z=}\frac{{\small H}_{0}}{{\small a}}{\small x}\left( {\small y+ax}
\right) {\small .}
\end{array}
\right. $ & $\left.
\begin{array}{l}
\text{{\small In the particular case}} \\
f_{1}\text{ or }f_{2}\text{ is linear,} \\
{\small x=\frac{c_{1}}{\sqrt{\left\vert H_{0}\right\vert }}\sqrt{y+az}.} \\
\text{{\small In general case, not }} \\
\text{{\small yet known.}}
\end{array}
\right. $ \\ \hline
\end{tabular}
$
\caption{Factorable surfaces (FS) and affine factorable surfaces (AFS) in
$\mathbb{I}^3$ with $K,H$ constants.}
\end{sidewaystable}
\end{document}
\begin{document}
\title{On finite groups with certain complemented $p$-subgroups}
\date{}
\vskip 1cm
\begin{center}\textbf{Abstract}\end{center}
Given a prime power $p^d$ with $p$ a prime and $d$ a positive integer,
we classify the finite
groups $G$ with $p^{2d}$ dividing $|G|$ in which all subgroups of order $p^d$
are complemented and
the finite groups $G$ having a normal elementary abelian Sylow $p$-subgroup $P$ such that $p^d<|P|$ in which all subgroups of order $p^d$ are complemented.
\vskip 5cm
\textbf{Keywords}\,\, finite group; complemented subgroup.
\textbf{2020 AMS Subject Classification}: 20D10
\pagebreak
\section{Introduction}
A \emph{complement} of a subgroup $H$ in a group $G$ is a subgroup $K$ of $G$ such that
$G=HK$ and $H\cap K=1$ (here $H$ is called a \emph{complemented subgroup} of $G$).
A classical result by P. Hall states that if every Sylow subgroup of a finite group $G$ has
a complement in $G$, then $G$ is solvable.
In \cite{hall1937}, P. Hall also characterized finite groups $G$ in which every subgroup of $G$ has a complement.
Motivated by Hall's work, a number of authors studied the structure of finite groups $G$ in which certain class of subgroups are complemented.
For instance, Y.M. Gorchakov showed that Hall’s requirement of the
complementability of all subgroups can be reduced to the complementability of all minimal subgroups (see \cite{gorchakov1960}).
In this paper, we focus our attention on the complementability of certain $p$-subgroups.
Let $p$ always denote a prime, $d$ always denote a positive integer and $G$ always denote a finite group.
We say that a finite group $G$ is a \emph{$\mathfrak{C}(p)_{d}$-group} if each subgroup of $G$ of order $p^d$ is complemented in $G$;
if $G$ is a $\mathfrak{C}(p)_{d}$-group such that $p^d\leq |G|_p$, then we say $G$ is a \emph{nontrivial $\mathfrak{C}(p)_{d}$-group}.
There has been a lot of interest in the problem of characterizing nontrivial $\mathfrak{C}(p)_{d}$-groups.
V.S. Monakhov and V.N. Kniahina
investigated the $pd$-composition factors (the composition factors
of order divisible by $p$) of nontrivial $\mathfrak{C}(p)$-groups (see \cite{monakhov2015}).
In \cite{qian2015}, G. Qian and F. Tang described $p$-solvable nontrivial $\mathfrak{C}(p)_{d}$-groups $G$ for a given prime power $p^d\leq \sqrt{|G|_p}$,
and also characterized nontrivial $\mathfrak{C}(p)_{d}$-groups $G$ which have a normal elementary abelian Sylow $p$-subgroup for a given prime power $p^d< |G|_p$.
Having classified nontrivial $\mathfrak{C}(p)$-groups and $\mathfrak{C}(p)_{2}$-groups in \cite{zeng2019,zeng2020},
in this paper we
classify nontrivial $\mathfrak{C}(p)_{d}$-groups $G$ for a fixed prime power $p^d\leq \sqrt{|G|_p}$ and obtain the classification of nontrivial $\mathfrak{C}(p)_{d}$-groups $G$ which have a normal elementary abelian Sylow
$p$-subgroup for a fixed prime power $p^d< |G|_p$.
\begin{thmA}\label{Thm1}
Let $p^d$ be a power of a prime $p$ such that $p^d>1$ and $G$ a finite group
such that $|G|_p\geq p^{2d}$.
Assume $\oh{p'}{G}=1$.
Then $G$ is a nontrivial $\mathfrak{C}(p)_{d}$-group if and only if one of the following is true.
{\rm (1)} $G$ is a nontrivial $\mathfrak{C}(p)$-group.
{\rm (2)} $G=H\ltimes P$ where $H$ is a cyclic Hall $p'$-subgroup and $P\in\syl{p}{G}$ is a faithful homogeneous $\mathbb{F}_p[H]$-module with all its
irreducible submodules having dimension $e\,(>1)$ such that $e\mid (d,\log_p|P|)$.
\end{thmA}
Observe that $G$ is a $\mathfrak{C}(p)_{d}$-group if and only if
$G/\oh{p'}{G}$ is a $\mathfrak{C}(p)_{d}$-group (see part (3) of Lemma \ref{element31}).
Hence, for a fixed prime power $p^d$,
Theorem A and \cite[Theorem 1.1]{zeng2019} give a complete
classification of finite $\mathfrak{C}(p)_{d}$-groups $G$ such that $p^d\leq \sqrt{|G|_p}$.
The next theorem is a key step in the proof of Theorem A.
\begin{thmB}
Assume that a finite $p'$-group $H$ acts faithfully on a finite elementary abelian $p$-group $V$.
Let $G=H\ltimes V$ where $|V|=p^n$.
Then $G$ is a nontrivial $\mathfrak{C}(p)_{d}$-group for a fixed positive integer $d<n$ if and only if
one of the following statements holds.
{\rm (1)} $G$ is supersolvable.
{\rm (2)} $H$ is cyclic and
$V$ is a homogeneous $\mathbb{F}_p[H]$-module with all its irreducible $\mathbb{F}_p[H]$-submodules having dimension $e\,(>1)$ such that $e\mid (d,n)$.
\end{thmB}
If $G$ is a finite group with $\oh{p'}{G}=1$ in which $P\in\syl{p}{G}$ is normal,
then by \cite[Lemma 2.4]{qian2020} $\Phi(P)=P\cap \Phi(G)=\Phi(G)$.
So by part (3) of Lemma \ref{element31} the next corollary, which is an improvement of \cite[Proposition E]{qian2015}, follows immediately.
\begin{corC}
Assume that a finite $p'$-group $H$ acts faithfully on a finite $p$-group $P$.
Let $G=H\ltimes P$ where $|P|=p^n$ and $\o G=G/\Phi(G)$ where $|\o P|=p^{n-s}$.
If $G$ is a nontrivial $\mathfrak{C}(p)_{d}$-group for a fixed positive integer $d<n$,
then one of the following statements holds.
{\rm (1)} $\o G$ is supersolvable.
{\rm (2)} $H$ is cyclic and
$\o P$ is a homogeneous $\mathbb{F}_p[H]$-module with all its irreducible $\mathbb{F}_p[H]$-submodules having dimension $e\,(>1)$ such that $e\mid (d-s,n-s)$.
\end{corC}
The paper is organized as follows: in Section 2 we collect useful results
and also give a detailed description of simple nontrivial $\mathfrak{C}(p)_{d}$-groups;
at the end of Section 3 we prove Theorems A and B; in Section 4 we improve \cite[Theorem A$'$]{qian2020} by applying Theorem B.
Our notation is standard, for group theory it follows \cite{huppert} and for character theory of finite groups it follows \cite{isaacs1976}.
All groups considered in the paper are finite.
\section{Preliminaries}
We begin with a couple of lemmas for later use.
\begin{lem}[\cite{qian2004}]\label{qian2004}
Let $G$ be an almost simple group with socle $S$.
If $p$ divides both $|S|$ and $|G:S|$,
then $G$ has non-abelian Sylow $p$-subgroups.
\end{lem}
\begin{lem}\label{ea}
Let $G$ be a group such that $\oh{p'}{G}=1$ and $P\in\syl{p}{G}$. Then the following hold.
{\rm (1)} If $P$ is elementary abelian, then $\Phi(G)=1$.
{\rm (2)} If $P$ is abelian, then $P\leq\gfitt{G}$.
\end{lem}
\begin{proof}
(1) Let $G$ be a minimal counterexample.
Let $N$ be a minimal normal subgroup of $G$ and $K/N=\oh{p'}{G/N}$.
By the minimality of $G$, $\Phi(G/K)=1$.
Therefore $\Phi(G)\leq N$ as $\oh{p'}{G}=1$, so $\Phi(G)$ is the unique minimal normal subgroup of $G$.
We now show that $\E G>1$.
Otherwise, $\gfitt{G}=\fitt{G}=P$.
So by \cite[Lemma 2.4]{qian2020} $\Phi(G)=\Phi(G)\cap P=\Phi(P)=1$, a contradiction.
Let $E$ be a component of $G$.
If $E<G$, then by the minimality of $G$ we have $\Phi(E)=1$.
Further, by the minimality of $G$ and the uniqueness of minimal normal subgroup of $G$,
$G$ is quasisimple with $\z G=\Phi(G)\cong C_p$.
Let $\lambda\in\irr{\z G}$ be non-principal.
By \cite[Theorem 6.26]{isaacs1976}, there exists a linear character $\chi\in\irr{G}$ such that $\chi_{\z{G}}=\lambda$, so $\lambda=1_{\z G}$ as $\chi=1_G$,
a contradiction.
(2) Let $G$ be a counterexample.
Note that $\fitt{G}$ is a $p$-group as $\oh{p'}{G}=1$.
If $\E G=1$, then $P=\gfitt{G}$ as $P$ is abelian, a contradiction.
So $\E{G}>1$.
Observing that $P\nleq \gfitt{G}$ and $P$ is abelian,
we take a $p$-element $x\in G-\gfitt{G}$ such that $x^p\in \gfitt{G}$.
Write $\overline{G}=G/\Phi(\E{G})$.
Then $\langle\overline{x}\rangle$ acts on $\overline{\E{G}}=S_1\times \cdots \times S_t$,
where $S_i$ are nonabelian simple groups.
Also since $\oh{p'}{\overline{G}}=1$, $p\mid |S_i|$ for $1\leq i\leq t$.
Since there exists a Sylow $p$-subgroup of $S_i$ which is centralized by $\langle \overline{x}\rangle$, $\overline{x}$ fixes each $S_i$.
We now show that there exists $1\leq i_0\leq t$ such that $[\overline{x},S_{i_0}]\neq 1$.
Otherwise, $[\langle x\rangle,\E{G}]\leq \Phi(\E{G})$.
Notice that $\Phi(\E{G})=\z{\E{G}}$, and so $[\langle x\rangle,\E{G},\E{G}]=[\E{G},\langle x\rangle,\E{G}]=1$.
By the three subgroups lemma, $[\E{G},\langle x\rangle]=[\E{G},\E{G},\langle x\rangle]=1$;
observing that $[\langle x\rangle, \gfitt{G}]=1$ as $P$ is abelian,
it follows from $\cent{G}{\gfitt{G}}\leq \gfitt{G}$ that $x\in \gfitt{G}$, a contradiction.
Therefore $\langle\overline{x}\rangle S_{i_0}/\cent{\langle \overline{x}\rangle}{S_{i_0}}$ is an almost simple group with socle $S_{i_0}\cent{\langle \overline{x}\rangle}{S_{i_0}}/\cent{\langle \overline{x}\rangle}{S_{i_0}}$ and an abelian Sylow $p$-subgroup, which contradicts Lemma \ref{qian2004} as $\langle\overline{x}\rangle S_{i_0}/\cent{\langle \overline{x}\rangle}{S_{i_0}}>S_{i_0}\cent{\langle \overline{x}\rangle}{S_{i_0}}/\cent{\langle \overline{x}\rangle}{S_{i_0}}$.
Thus $P\leq\gfitt{G}$.
\end{proof}
We now collect some useful results about $\mathfrak{C}(p)_{d}$-groups.
In the following, we denote the class of $\mathfrak{C}(p)_{d}$-groups by $\mathfrak{C}(p)_{d}$ and the class of nontrivial $\mathfrak{C}(p)_{d}$-groups by $\mathfrak{C}^{\sharp}(p)_{d}$.
\begin{prop}\label{simpleCpd}
Let $S$ be a nonabelian simple group.
Then $S\in\mathfrak{C}^{\sharp}(p)_{d}$ if and only if
$|S|_p=p^d$ and one of the following holds.
{\rm (i)} $S=A_p$ and $p^d=p\geq 7$.
{\rm (ii)} $S={\operatorname{PSL}}(2,11)$ and $p^d=11$.
{\rm (iii)} $S=\mathrm{M}_{11}$ and $p^d=11$.
{\rm (iv)} $S=\mathrm{M}_{23}$ and $p^d=23$.
{\rm (v)} $S={\operatorname{PSL}}(n,q)$ where $p^d=p=\frac{q^n-1}{q-1}$ and $n$, $p$ are distinct primes.
{\rm (vi)} $S={\operatorname{PSL}}(n,q)$ where $p^d=\frac{q^n-1}{q-1}>p>2$ and $n$, $p$ are distinct primes.
In particular, the Sylow $p$-subgroups of $S$ are isomorphic to $C_{p^d}$.
{\rm (vii)} $S={\operatorname{PSL}}(2,q)$ where $p^d=2^d=q+1\geq 8$ and $q$ is a Mersenne prime. In particular,
the Sylow $2$-subgroups of $S$ are isomorphic to $D_{2^d}$.
\end{prop}
\begin{proof}
By \cite[Lemma 2.8]{zeng2019}, it suffices to prove that $S\in\mathfrak{C}^{\sharp}(p)_{d}$ for $d>1$ if and only if either (vi) or (vii) holds.
Assume either (vi) or (vii) holds. Then by \cite[Theorem 1]{guralnick1983}
a Hall $p'$-subgroup of $S$, which has index $p^d$ in $S$, exists.
That is, $S\in\mathfrak{C}^{\sharp}(p)_{d}$ for $d>1$.
Assume now $S\in\mathfrak{C}^{\sharp}(p)_{d}$ for $d>1$.
Then $S$ has a subgroup of index $p^d$, and hence, by \cite[Theorem 1]{guralnick1983},
$S\cong A_{p^d}$; or $S\cong {\operatorname{PSU}}(4,2)$ and $p^d=3^3$; or $S\cong {\operatorname{PSL}}(n,q)$ with $p^d=\frac{q^n-1}{q-1}$ and $n$ a prime.
Suppose $S\cong A_{p^d}$ with $d>1$. Then $|S|_p=p^n\geq p^{2d}$,
and hence $S$ has an elementary abelian Sylow $p$-subgroup by \cite[Proposition F]{qian2015},
a contradiction.
Suppose $S\cong {\operatorname{PSU}}(4,2)$ and $p^d=3^3$.
Let $D$ be a subgroup of order $3^3$ in $S$.
Then $S=D\cdot H$ for some $H\leq S$ with $D\cap H=1$.
Therefore $S=D\cdot H^x$ with $D\cap H^x=1$ for all $x\in S$.
By checking \cite{atlas}, we know that all subgroups of index $3^3$ in $S$ are conjugate in $S$.
This yields a contradiction, as $|H|_3=3$.
Thus $S\cong {\operatorname{PSL}}(n,q)$ with $p^d=\frac{q^n-1}{q-1}$ and $n$ a prime.
Then it follows from \cite[Theorem 1]{guralnick1983} that $|S|_p=p^d$.
Also, by \cite[II, Theorem 7.3]{huppert} $S$ has a cyclic subgroup of order $\frac{q^n-1}{c(q-1)}=\frac{p^d}{(q-1,n)}$ where $c=(q-1, n)$ (such a cyclic subgroup is called a Singer cycle).
Suppose $n\neq p$.
Then $S$ has a cyclic subgroup of order $p^d$, and hence (vi) follows.
Suppose $n=p$.
Note that $p^d=\frac{q^n-1}{q-1}$,
and so $q^p\equiv 1\pmod{p}$; also $q^{p-1}\equiv 1\pmod{p}$ by Fermat's little theorem.
Hence $q=q^{(p,p-1)}\equiv 1\pmod{p}$.
If $p>2$, then it follows by \cite[Lemma 10.11]{isaacs1976} that $p$ is the exact power of $p$ dividing $\frac{q^p-1}{q-1}=p^d$, contradicting $d>1$.
So $n=p=2$.
Thus $S\cong {\operatorname{PSL}}(2,q)$ with $q=2^d-1$.
By \cite[Theorem 5]{mihailescu}, $q=2^d-1$ is a Mersenne prime.
In this case, Sylow $2$-subgroups of $S$ are isomorphic to $D_{2^d}$ where $d\geq 3$.
\end{proof}
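For illustration (this example is not needed in the sequel), consider case {\rm (vii)} with $q=7$: the group $S={\operatorname{PSL}}(2,7)$ has order $168=2^{3}\cdot 3\cdot 7$, its Sylow $2$-subgroups are dihedral of order $8$, and the normalizer of a Sylow $7$-subgroup is a Frobenius group of order $21$ and index $2^{3}$. Since this normalizer has odd order, it intersects every subgroup of order $8$ trivially and hence complements it, so that $S\in\mathfrak{C}^{\sharp}(2)_{3}$.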
\begin{lem}\label{element31}
For a group $G\in\mathfrak{C}(p)_{d}$, the following hold.
{\rm (1)} If $H\leq G$, then $H\in\mathfrak{C}(p)_{d}$.
{\rm (2)} If $N\leq G$ is a direct product of $n$ simple groups for $n\leq d$, then $|N|_p\leq p^d$.
In particular, if $N$ is minimal normal in $G$, then $|N|_p\leq p^d$.
{\rm (3)} If $N$ is a normal subgroup of $G$ with $|N|_p=p^e\leq p^d$, then $G/N\in\mathfrak{C}(p)_{d-e}$.
Furthermore, if $e=0$, then $G/N\in\mathfrak{C}(p)_{d}$ implies that $G\in\mathfrak{C}(p)_{d}$.
{\rm (4)} If $|G|_p\geq p^d$ and $N$ is subnormal in $G$ with $|N|_p=p^e$,
then $N\in\mathfrak{C}(p)_{m}$ where $m=\min\{d,e\}$.
{\rm (5)} $G\in\mathfrak{C}(p)_{md}$ for each nonnegative integer $m$.
\end{lem}
\begin{proof}
For (1), (3), (4) and (5), we refer to \cite[Lemma 3.1]{zeng2020}. We only prove (2) here.
By Proposition \ref{simpleCpd}, we may assume that $n>1$.
Let $G$ be a counterexample with smallest possible sum $|G|+n+d$.
Then $|N|_p>p^d$, $G=N$ and each direct factor of $N$ has order divisible by $p$.
Write $N=S\times T$ where $S$ is simple and $T$ is a direct product of $n-1$ simple groups.
We claim that $|T|_p<p^d$.
Otherwise, $|T|_p\geq p^d$.
Let $P\leq S$ be such that $|P|=p$ and let $H=P\times T$.
Then by (1) $H\in\mathfrak{C}(p)_{d}$ and hence by (3) $T\cong H/P\in\mathfrak{C}(p)_{d-1}$.
Notice that $T\leq G$ is a direct product of $n-1$ simple groups,
so by the minimality of $G$ we have $|T|_p\leq p^{d-1}$, a contradiction.
Now $|T|_p=p^a$ for some positive integer $a<d$.
By (3), $S\cong N/T=G/T\in\mathfrak{C}(p)_{d-a}$, so $|S|_p\leq p^{d-a}$.
As a consequence, $|N|_p=|S|_p|T|_p\leq p^d$, a contradiction.
In particular, if $N$ is minimal normal in $G$, then by \cite[Lemma 3.1(2)]{zeng2020} $N$ is a direct product of at most $d$ simple groups, and so $|N|_p\leq p^d$.
\end{proof}
\section{Proof of the main theorem}
We denote by $\ohphi{G}$ the subgroup of a group $G$ such that $\ohphi{G}/\oh{p'}{G}=\Phi(G/\oh{p'}{G})$.
Let $\o G=G/\ohphi{G}$.
Then $\oh{p'}{\o G}=\Phi(\o G)$.
\begin{lem}\label{equalppart}
Let $G\in\mathfrak{C}^{\sharp}(p)_{d}$ be such that $\oh{p'}{G}=\Phi(G)=1$ and $|G|_p=|\gfitt{G}|_p>p^d$.
Assume that $|N|_p$ is a constant for each minimal normal subgroup $N$ of $G$.
Then $\E G=1$.
\end{lem}
\begin{proof}
As $\Phi(G)=1$, $\gfitt{G}=\soc{G}$.
Also since $\oh{p'}{G}=1$, by part (2) of Lemma \ref{element31}
the $p$-part $p^e$ of the order of every minimal normal subgroup satisfies $1\leq e\leq d$.
Let $G$ be a counterexample with smallest possible sum $|G|+d$.
Then there exists a nonabelian minimal normal subgroup $E$ of $G$.
Note that $|\soc{G}|_p=|G|_p>p^d$ and also $|E|_p=p^e\leq p^d$,
and so there exists another minimal normal subgroup $D$ of $G$.
Let now $P\in\syl{p}{D}$. We show that $D=P$.
Write $\gfitt{G}=D\times E\times C$ where $C$ is a direct product of some minimal normal subgroups of $G$, and let $L=\mathrm{N}_{G}(P)$ and $\widetilde{L}=L/\ohphi{L}$.
Observe that $\widetilde{P}\times\widetilde{E}\times\widetilde{C}\leq \gfitt{\widetilde{L}}$ and $|\widetilde{L}|_p=|\widetilde{P}\times\widetilde{E}\times\widetilde{C}|_p$,
and so $\widetilde{L}$ satisfies the hypothesis of this lemma.
Thus, by the minimality of $G$, $G=L=\mathrm{N}_{G}(P)$; that is, $D=P$ is an abelian minimal normal subgroup of $G$.
We next show that $e=d$.
Assume $e<d$.
Let $N/D=\oh{p'}{G/D}$ and $M/N=\Phi(G/N)$ with $|M/N|_p=p^s$, and write $\overline{G}=G/M$.
Then by part (3) of Lemma \ref{element31} $\overline{G}\in\mathfrak{C}^{\sharp}(p)_{d-e-s}$.
Since $E\cap M=1$, $\overline{E}$ is minimal normal in $\overline{G}$.
It is routine to check that $\overline{G}$ satisfies the hypothesis of this lemma.
So, by the minimality of $G$, $E\cong \overline{E}\leq \E{\overline{G}}=1$, a contradiction.
Therefore $e=d$.
Recall that $|D|=p^d$, and hence $G=D\rtimes H$ for some $H\leq G$, as $G\in\mathfrak{C}^{\sharp}(p)_{d}$.
We then show that $\gfitt{G}=D\times E$.
To see this, as $\oh{p'}{G}=1$, it suffices to show $|\gfitt{G}|_p=p^{2d}$.
Assume $|\gfitt{G}|_p>p^{2d}$.
Then $|H|_p>p^d$, and $H\in\mathfrak{C}^{\sharp}(p)_{d}$ by part (1) of Lemma \ref{element31}.
Let $\widehat{H}=H/\ohphi{H}$.
Clearly, $\widehat{H}$ satisfies the hypothesis of this lemma,
and hence $\E{\widehat{H}}=1$ by the minimality of $G$.
However, $\widehat{H}$ contains a nonabelian minimal normal
subgroup which is isomorphic to $E$ as $H\cong G/D$,
a contradiction.
Therefore $\gfitt{G}=D\times E$ where $D$ and $E$ ($\leq \E{G}$) are distinct minimal normal subgroups of $G$ so that $|D|=p^d$ and $|E|_p=p^d$.
Let $X\leq D$ be of order $p^{a}$ $(0<a<d)$ and $Y\leq E$ be of order $p^{d-a}$.
Hence $G=XY\cdot M$ where $M$ is a complement of $XY$ in $G$, as $G\in\mathfrak{C}^{\sharp}(p)_{d}$.
Then $|G:M|=p^d$ and $G=(D\times E)M$.
Since $XY\leq \gfitt{G}$, $\gfitt{G}\cap M$ is a complement of $XY$ in $\gfitt{G}$.
Next we derive a contradiction by proving that $E$ is abelian.
To see that, it suffices to show $E\cap M=1$.
We know that $E\cap M\unlhd DM$ and $E$ is minimal normal in $G$,
and so it suffices to show that $\mathrm{N}_{E}(E\cap M)=E$.
First, since $D$ is abelian, $D\cap M\unlhd G$.
It follows from $D\cap M<D$ that $D\cap M=1$.
Also, $|\gfitt{G}:\gfitt{G}\cap M|=|\gfitt{G}M:M|=p^d$.
We conclude that
$$\gfitt{G}=(M\cap \gfitt{G})D\leq \mathrm{N}_{\gfitt{G}}(E\cap M)D=\mathrm{N}_{E}(E\cap M) \times D\leq \gfitt{G}.$$
Thus $\mathrm{N}_{E}(E\cap M)=E$, as desired.
\end{proof}
\begin{prop}\label{F*=Op'}
Let $G\in\mathfrak{C}^{\sharp}(p)_{d}$ be such that $\oh{p'}{G}=1$ and $|G|_p=p^n>p^d$.
Write $\overline{G}=G/\Phi(G)$ and $|\Phi(G)|_p=p^s$.
Assume that $G/\gfitt{G}$ is a $p'$-group. Then one of the following is true.
{\rm (1)} $\overline{G}\in\mathfrak{C}^{\sharp}(p)$. In particular, if $\gfitt{\overline{G}}=\fitt{\overline{G}}$, then $\overline{G}=\overline{H}\ltimes \fitt{\overline{G}}$ is supersolvable where $\overline{H}$ is abelian with exponent dividing $p-1$.
{\rm (2)}
$\overline{G}=\overline{H}\ltimes \overline{P}$
where $\overline{P}$ is a direct product of some minimal normal subgroups
$E_1,~E_2,~\dots,~E_t$ with the same order, say $p^e$ where $e>1$, and $e\mid (n-s,d-s)$.
Moreover,
all $E_i$ are isomorphic to an irreducible $\mathbb{F}_p[H]$-module $E$ which is not absolutely irreducible.
\end{prop}
\begin{proof}
Note that $\oh{p'}{G}=1$, and so $|\Phi(G)|=p^s$ and also $\oh{p'}{\overline{G}}=1$.
Since no nontrivial subgroup of $\Phi(G)$ has a complement in $G$, it follows from $G\in\mathfrak{C}^{\sharp}(p)_{d}$ that $p^s<p^d$.
Applying part (3) of Lemma \ref{element31} to $\overline{G}$, we have $\overline{G}\in\mathfrak{C}^{\sharp}(p)_{d-s}$.
Also as $\overline{\gfitt{G}}\leq \gfitt{\overline{G}}$, $\overline{G}/\gfitt{\overline{G}}$ is a $p'$-group.
Thus, without loss of generality, we may assume that $\Phi(G)=1$.
Hence $\gfitt{G}=\soc{G}$, and $|\gfitt{G}|_p=|G|_p>p^d$.
Write
$\gfitt{G}=E_1\times \cdots\times E_t$ where $E_i$ are minimal normal subgroups of $G$.
Since $\oh{p'}{G}=1$, $p\mid |E_i|$ for $1\leq i\leq t$.
Also by part (2) of Lemma \ref{element31}, $|E_i|_p\leq p^d$ for $1\leq i\leq t$.
We claim first that the orders of all minimal normal subgroups of $G$ have equal $p$-part, say $p^e$.
Let $G$ be a counterexample with minimal possible sum $|G|+d$.
Since there exists at least one minimal normal subgroup of $G$ whose order has $p$-part less than $p^d$,
by part (3) of Lemma \ref{element31} $\gfitt{G}=D\times E$ where $D$ and $E$ are both minimal normal in $G$.
Without loss of generality, we may assume that $|E|_p=p^e<p^d$.
Then by part (3) of Lemma \ref{element31} $G/E\in\mathfrak{C}^{\sharp}(p)_{d-e}$.
It follows by part (2) of Lemma \ref{element31} that $|D|_p=|DE/E|_p\leq p^{d-e}$, as $DE/E$ is minimal normal in $G/E$.
We conclude that $p^d<|\gfitt{G}|_p=|D|_p|E|_p\leq p^d$, a contradiction.
Thus the orders of all minimal normal subgroups of $G$ have the same $p$-part.
As a consequence, $e\mid n$.
Write $d=ke+r$ where $0\leq r<e$, and let $N=E_1\times\cdots\times E_k$ where $k\leq t=\frac{n}{e}$.
If $r>0$, then $G/N\in\mathfrak{C}^{\sharp}(p)_{r}$ by part (3) of Lemma \ref{element31}.
Now $E_{t}N/N$ is minimal normal in $G/N$, and it follows from part (2) of Lemma \ref{element31} that $p^e=|E_{t}|_p=|E_{t}N/N|_p\leq p^r$, which contradicts $r<e$.
Consequently, $e\mid (d,n)$.
We claim next that if $e=1$, then (1) holds.
Since $|G|_p>p^d\geq p$, by \cite[Proposition E]{qian2015}, it suffices to prove $G\in\mathfrak{C}^{\sharp}(p)$.
Since $e=1$, we have $t=n>d$.
Let $N=E_1\times E_2\times \cdots\times E_d$ and $N_i$ the direct product of groups in
$\{E_j\mid 1\leq j\leq d~\text{and}~j\neq i\}$.
As $|N_i|_p=p^{d-1}$, by part (3) of Lemma \ref{element31} $G/N_i\in\mathfrak{C}^{\sharp}(p)$.
Also, observing that $\bigcap_{i=1}^d N_i=1$, we have that
\[
G=G/\bigcap_{i=1}^d N_i\lesssim \bigtimes_{i=1}^{d} G/N_i,
\]
so we conclude that $G\in\mathfrak{C}^{\sharp}(p)$ since the direct product of two nontrivial $\mathfrak{C}(p)$-groups is still a nontrivial $\mathfrak{C}(p)$-group.
Finally, we show that if $e>1$, then (2) holds.
Let $P\in\syl{p}{G}$.
By Lemma \ref{equalppart}, $\E G=1$, and hence $P=\gfitt{G}=\fitt{G}=\soc{G}$.
Since $\Phi(G)=1$, it follows from \cite[Lemma 2.4]{qian2020} that $\Phi(P)=\Phi(G)\cap P=1$, so $P$ is elementary abelian.
Applying \cite[Proposition E]{qian2015} to $G$, we have that each $E_i$, $1\leq i\leq t$, is isomorphic to a common irreducible $\mathbb{F}_p[H]$-module $E$.
Finally, one readily checks by \cite[Corollary 2.10, Lemma 2.9(5)]{qian2020} that $E$ is not absolutely irreducible.
\end{proof}
\begin{lem}\label{end}
Let $H$ be a group
and $V$ a faithful irreducible $\mathbb{F}_p[H]$-module.
Then $\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V))\mid \dim_{\mathbb{F}_p}(V)$. Moreover, $\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V))=\dim_{\mathbb{F}_p}(V)$ if and only if $H$ is cyclic.
\end{lem}
\begin{proof}
Let $K$ be an algebraic closure of $\mathbb{F}_p$.
By \cite[V, Theorem 14.12]{huppert},
\begin{center}
$K\otimes_{\mathbb{F}_p} V\cong_{K[H]} V_1\oplus\cdots \oplus V_t$
\end{center}
where $V_i$ are non-isomorphic
faithful absolutely irreducible $K[H]$-modules with the same dimension.
Note that by \cite[Chapter B, Lemma 5.4]{doerk1992}
\begin{center}
$K\otimes_{\mathbb{F}_p} \mathrm{End}_{\mathbb{F}_p[H]}(V)\cong_K \mathrm{End}_{K[H]}(K\otimes_{\mathbb{F}_p}V)$
\end{center}
as $K$-linear spaces. Hence
\begin{align*}
\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V)) &=\dim_K(K\otimes_{\mathbb{F}_p}\mathrm{End}_{\mathbb{F}_p[H]}(V)) \\
&=\dim_K\mathrm{End}_{K[H]}(K\otimes_{\mathbb{F}_p}V)\\
&=\dim_K(\mathrm{End}_{K[H]}(V_1\oplus\cdots \oplus V_t))\\
&=\sum_{i=1}^t \dim_K(\mathrm{End}_{K[H]}(V_i))=t.
\end{align*}
Since $\dim_{\mathbb{F}_p}(V)=\dim_K(K\otimes_{\mathbb{F}_p}V)=\dim_K(V_1\oplus\cdots\oplus V_t)=t\dim_K(V_1)$,
it follows that $\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V))\mid \dim_{\mathbb{F}_p}(V)$.
Note that $\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V))=\dim_{\mathbb{F}_p}(V)$ if and only if $\dim_K(V_1)=1$,
and since $V_1$ is a faithful absolutely irreducible $K[H]$-module, we conclude that $\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(V))=\dim_{\mathbb{F}_p}(V)$ if and only if $H$ is cyclic.
\end{proof}
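For a concrete illustration of the lemma (not needed in the sequel), take $H=\langle h\rangle\cong C_3$ and $V=\mathbb{F}_4$, viewed as a two-dimensional $\mathbb{F}_2$-space on which $h$ acts as multiplication by a primitive cube root of unity $\omega\in\mathbb{F}_4^{\times}$. Then $V$ is a faithful irreducible $\mathbb{F}_2[H]$-module with
\begin{center}
$\mathrm{End}_{\mathbb{F}_2[H]}(V)\cong \mathbb{F}_4$, so $\dim_{\mathbb{F}_2}(\mathrm{End}_{\mathbb{F}_2[H]}(V))=2=\dim_{\mathbb{F}_2}(V)$,
\end{center}
in accordance with the lemma, since $H$ is cyclic; over $K=\overline{\mathbb{F}_2}$ the module splits into the two one-dimensional eigenspaces of $h$, so $t=2$ in the notation of the proof.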
Now we are ready to prove Theorem B.
\begin{thm}\label{hv}
Assume that a $p'$-group $H$ acts faithfully on an elementary abelian $p$-group $V$.
Suppose $G=H\ltimes V$ where $|V|=p^n$.
Then $G\in\mathfrak{C}^{\sharp}(p)m{d}$ for a fixed positive integer $d<n$ if and only if
one of the following statements holds.
{\rm (1)} $G$ is supersolvable.
{\rm (2)} $H$ is cyclic and
$V$ is a homogeneous $\mathbb{F}_p[H]$-module whose irreducible $\mathbb{F}_p[H]$-submodules all have dimension $e>1$, with $e\mid (d,n)$.
\end{thm}
\begin{proof}
($\Leftarrow$) Write $V=V_1\times \cdots \times V_t$ where the $V_i$ are minimal normal subgroups of $G$,
and denote $|V_i|=p^e$.
We claim that $G\in\mathfrak{C}^{\sharp}(p)m{e}$.
Suppose $e=1$.
Let $N_i$ be the direct product
of groups in $\{V_j\mid 1\leq j\leq d~\text{and}~j\neq i\}$.
Hence $|N_i|_p=p^{d-1}$ and $\bigcap_{i=1}^d N_i=1$.
Note that $G/N_i\in\mathfrak{C}^{\sharp}(p)$ for $1\leq i\leq d$ by part (3) of Lemma \ref{element31} and
\[
G=G/\bigcap_{i=1}^d N_i\lesssim \bigtimes_{i=1}^d G/N_i,
\]
we conclude by part (1) of Lemma \ref{element31} that $G\in\mathfrak{C}^{\sharp}(p)$ since the direct product of two nontrivial $\mathfrak{C}(p)$-groups is still a nontrivial $\mathfrak{C}(p)$-group.
Suppose $e>1$.
Then $H$ is cyclic and $V_i$ are isomorphic irreducible $\mathbb{F}_p[H]$-submodules for $1\leq i\leq t$.
We proceed by induction on $|G|$ to prove that $G\in\mathfrak{C}^{\sharp}(p)m{e}$.
Let $\Omega$ be the set of all minimal normal subgroups of $G$.
Note that by Lemma \ref{end}, $|\mathrm{End}_{\mathbb{F}_p[H]}(V_i)|=|V_i|=p^e$ since $H$ is cyclic,
and hence \cite[Chapter B, Proposition 8.2]{doerk1992} implies that
\[
|\Omega|=\frac{p^{et}-1}{p^e-1}=p^{e(t-1)}+\cdots+p^e+1.
\]
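For example, when $p=2$, $e=2$ and $t=2$, the formula gives $|\Omega|=\frac{2^{4}-1}{2^{2}-1}=5$ minimal normal subgroups inside a group $V$ of order $2^{4}$, which already exceeds $|X|=2^{e}=4$; in general $|\Omega|\geq p^{e}+1>|X|$ whenever $t\geq 2$.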
Let $X\leq G$ be of order $p^e$. Then $X\leq V$ as $V\in\mathrm{Syl}_p(G)$.
Since $|\Omega|>|X|$ and distinct minimal normal subgroups of $G$ intersect trivially, there exists $D\in\Omega$ such that $X\cap D=1$, that is, $XD=X\times D$.
Applying Maschke's theorem to $V$, we have $V=D\times E$ where $E$ is normal in $G$.
Write $L=H\ltimes E$; by induction, $G/D\cong L\in\mathfrak{C}^{\sharp}(p)m{e}$.
Note that $XD/D\leq G/D$ has order $p^e$, and so $G/D=HD/D\ltimes (XD/D\times B/D)$ where $B/D\unlhd G/D$.
Therefore $B\unlhd G$ and $X\cap B=1$.
As a consequence, $HB$ is a complement for $X$ in $G$.
Thus $G\in\mathfrak{C}^{\sharp}(p)m{e}$.
By part (5) of Lemma \ref{element31}, $G\in\mathfrak{C}^{\sharp}(p)m{d}$ as $e\mid (d,n)$.
($\Rightarrow$) Suppose that $G\in\mathfrak{C}^{\sharp}(p)m{d}$.
As $\oh{p'}{G}=1$, every nilpotent normal subgroup of $G$ is a $p$-group.
Also, since $V$ is a normal elementary abelian Sylow $p$-subgroup of $G$, by \cite[Lemma 2.4]{qian2020} we have that $\Phi(G)=\Phi(G)\cap V=\Phi(V)=1$.
Applying Proposition \ref{F*=Op'} to $G$, we know that either $G$ is supersolvable, in which case $H$ is abelian with exponent dividing $p-1$, or $V$ is a faithful homogeneous $\mathbb{F}_p[H]$-module with none of its irreducible submodules absolutely irreducible.
Now we claim that $H$ is abelian.
Let $G$ be a minimal counterexample.
Then $V$ is a faithful homogeneous $\mathbb{F}_p[H]$-module with all its irreducible $\mathbb{F}_p[H]$-submodules not
absolutely irreducible,
and since $A\ltimes V\in\mathfrak{C}^{\sharp}(p)m{d}$ for every $A<H$ by part (1) of Lemma \ref{element31}, the minimality of $G$ implies that
every proper subgroup of $H$ is abelian.
Let $W$ be an irreducible $\mathbb{F}_p[H]$-submodule of $V$.
Since $V$ is a faithful $\mathbb{F}_p[H]$-module which is
the direct product of $t$ copies of $W$ with $t>1$,
we have that $W$ is a faithful irreducible $\mathbb{F}_p[H]$-module which is also not absolutely irreducible.
Write $e=\dim_{\mathbb{F}_p}W$; then $G\in\mathfrak{C}^{\sharp}(p)m{e}$ by \cite[Corollary 2.10, Lemma 2.9(4)]{qian2020}.
By the minimality of $G$, we may assume $t=2$ and write $V=W\times U$ where $U$ is also a faithful irreducible $\mathbb{F}_p[H]$-submodule of $V$ which is isomorphic to $W$ as an $\mathbb{F}_p[H]$-module.
Since every proper subgroup of $H$ is abelian, $H$ is solvable.
We claim that $H$ possesses an abelian normal subgroup $A$ with a prime index.
To see this, it suffices to show that such an $A$ exists when $H$ is not nilpotent.
Now $\fitt{H}<H$ is abelian.
Write $A=\fitt{H}$.
Since $\cent{H}{A}=A$, we conclude that $|H:A|=r$ where $r$ is a prime, as claimed.
By Clifford's theorem,
$W_A$ is either an irreducible $\mathbb{F}_p[A]$-module or
the direct product of $r$ non-isomorphic irreducible $\mathbb{F}_p[A]$-submodules of the same dimension (see \cite[Chapter B, Theorem 7.3]{doerk1992}).
Suppose that $W_A$ is not an irreducible $\mathbb{F}_p[A]$-module.
Since $A\ltimes V\in\mathfrak{C}^{\sharp}(p)m{d}$ where $V_A$ is not homogeneous as an $\mathbb{F}_p[A]$-module, by the minimality of $G$, $A$ is abelian with exponent dividing $p-1$.
As a consequence, every irreducible $\mathbb{F}_p[A]$-submodule of $W_A$ has dimension 1.
Since $\dim_{\mathbb{F}_p}W=r$,
we know from Lemma \ref{end} that
\begin{center}
$\dim_{\mathbb{F}_p}(\mathrm{End}_{\mathbb{F}_p[H]}(W))=\dim_{\mathbb{F}_p}W=r$
\end{center}
since $W$ is not absolutely irreducible as an $\mathbb{F}_p[H]$-module.
Consequently, by Lemma \ref{end} $H$ is cyclic, a contradiction.
Suppose that $W_A$ is an irreducible $\mathbb{F}_p[A]$-module.
Since $V_A$ is a faithful homogeneous $\mathbb{F}_p[A]$-module, $W_A$ is also a faithful irreducible $\mathbb{F}_p[A]$-module.
It follows that $A$ is cyclic, so by Schur's lemma and Wedderburn's little theorem $\mathrm{End}_{\mathbb{F}_p[A]}(W_A)$ is a finite field.
Recall that $e=\dim_{\mathbb{F}_p}(W)$;
then by Lemma \ref{end},
\begin{center}
$\dim_{\mathbb{F}_p}\mathrm{End}_{\mathbb{F}_p[A]}(W_A)=\dim_{\mathbb{F}_p}(W_A)=e$,
\end{center}
and hence $\mathrm{End}_{\mathbb{F}_p[A]}(W_A)\cong \mathbb{F}_{p^e}$.
Since $W$ and $U$ are isomorphic faithful $\mathbb{F}_p[H]$-modules,
we may identify $H$ as a subgroup of ${\operatorname{GL}}(e,p)$,
and also we may assume that $W$ and $U$ are isomorphic faithful $\mathbb{F}_p[{\operatorname{GL}}(e,p)]$-modules.
Then $G\leq {\operatorname{GL}}(e,p)\ltimes V$.
Write $T=\cent{{\operatorname{GL}}(e,p)}{A}\ltimes V$.
Note that $\mathrm{End}_{\mathbb{F}_p[A]}(W_A)=\cent{{\operatorname{GL}}(e,p)}{A}\cup \{ 0\}$, and so $\cent{{\operatorname{GL}}(e,p)}{A}\cong \mathbb{F}_{p^e}^{\times}$ is cyclic.
Since $A\leq \cent{{\operatorname{GL}}(e,p)}{A}\leq {\operatorname{GL}}(e,p)$, $W$ and $U$ are isomorphic faithful irreducible $\mathbb{F}_p[\cent{{\operatorname{GL}}(e,p)}{A}]$-modules,
and hence $V$ is a faithful homogeneous $\mathbb{F}_p[\cent{{\operatorname{GL}}(e,p)}{A}]$-module with its irreducible components having dimension $e$.
Then, by the sufficiency part ($\Leftarrow$) of this theorem, $T\in\mathfrak{C}^{\sharp}(p)m{e}$,
and hence by part (1) of Lemma \ref{element31} $\cent{{\operatorname{GL}}(e,p)}{H}\ltimes V$, as a subgroup of $T$, is also a nontrivial $\mathfrak{C}(p)m{e}$-group.
Write $\Gamma_0=\cent{{\operatorname{GL}}(e,p)}{H}$ and $a=\dim_{\mathbb{F}_p}\mathrm{End}_{\mathbb{F}_p[H]}(W)$.
Since $H$ is nonabelian and $W$ is not absolutely irreducible,
it follows from Lemma \ref{end} that $1<a<e$.
Note that $\Gamma_0\leq \cent{{\operatorname{GL}}(e,p)}{A}$,
and so $W$ is an $\mathbb{F}_p[\Gamma_0]$-module.
Let $Z$ be an irreducible $\mathbb{F}_p[\Gamma_0]$-submodule of $W$.
Since $\Gamma_0=\cent{{\operatorname{GL}}(e,p)}{H}=\mathrm{End}_{\mathbb{F}_p[H]}(W)^{\times}\cong \mathbb{F}_{p^a}^{\times}$,
it follows from Lemma \ref{end} that $\dim_{\mathbb{F}_p}Z=\dim_{\mathbb{F}_p}\mathrm{End}_{\mathbb{F}_p[\Gamma_0]}(Z)=\dim_{\mathbb{F}_p}\mathbb{F}_{p^a}=a$.
Observe that $\Gamma_0=\mathrm{End}_{\mathbb{F}_p[H]}(W)^{\times}$,
and so $\mathbb{F}_p[\Gamma_0]=\mathrm{End}_{\mathbb{F}_p[H]}(W)$.
Let $0\neq w_0\in Z$, write $\mathbb{K}=\mathrm{End}_{\mathbb{F}_p[H]}(W)$, and let $X=\langle w_0^k, w_0^{k}f(w_0)\mid k\in \mathbb{K}\rangle$ where $f:W\rightarrow U$
is an $\mathbb{F}_p[H]$-isomorphism, and hence $X=\langle w_0^{k}\mid k\in \mathbb{K}\rangle\times \langle f(w_0)\rangle$.
Since $w_0\in Z$ and $Z$ is an irreducible $\mathbb{F}_p[\Gamma_0]$-module where $\mathbb{K}=\mathbb{F}_p[\Gamma_0]$,
we have $X=Z\times \langle f(w_0)\rangle$.
Notice that by \cite[Chapter B, Proposition 8.2]{doerk1992}, $X$ intersects every minimal normal subgroup of $G$ nontrivially.
However, $|X|=p^{a+1}\leq p^e$ which contradicts $G\in\mathfrak{C}^{\sharp}(p)m{e}$.
\end{proof}
Finally, we prove Theorem~A, which we state again.
\begin{thm}
Let $G$ be a finite group such that $|G|_p\geq p^{2d}\geq p^2$.
Assume $\oh{p'}{G}=1$.
Then $G\in\mathfrak{C}^{\sharp}(p)m{d}$ if and only if one of the following is true.
{\rm (1)} $G\in\mathfrak{C}^{\sharp}(p)$.
{\rm (2)} $G=H\ltimes P$ where $H$ is a cyclic Hall $p'$-subgroup and $P\in\syl{p}{G}$ is a faithful homogeneous $\mathbb{F}_p[H]$-module with all its
irreducible submodules having dimension $e>1$ such that $e\mid (d,\log_p|P|)$.
\end{thm}
\begin{proof}
($\Rightarrow$)
Let $P\in\mathrm{Syl}_p(G)$ where $|P|=p^n$. Since $2d\leq n$, by \cite[Proposition F]{qian2015} $P$ is elementary abelian, and therefore by Lemma \ref{ea} $\Phi(G)=1$ and $P\leq\gfitt{G}$.
Application of Proposition \ref{F*=Op'} and Theorem \ref{hv} to $G$ yields either (1) or (2).
($\Leftarrow$) If $G\in\mathfrak{C}^{\sharp}(p)$,
then by part (5) of Lemma \ref{element31} $G\in\mathfrak{C}^{\sharp}(p)m{d}$ as $p^d\mid |G|$.
If (2) holds, then $G\in\mathfrak{C}^{\sharp}(p)m{d}$ by Theorem \ref{hv}.
\end{proof}
\section{Applications}
Let $G$ be a finite group, and let $A \leq G$ and $K \leq H \leq G$ with $K, H$
normal in $G$.
Following \cite[VI, Definition 11.5]{huppert}, we say that $A$ \emph{covers} $H/K$ if
$AH = AK$, and $A$ \emph{avoids} $H/K$ if $A \cap H = A \cap K$.
Following \cite{qian2020}, we say that
a subgroup $A$ of $G$ has the \emph{partial cover-avoidance property} in $G$ and call $A$ a \emph{partial
CAP-subgroup} of $G$, if there exists a chief series of $G$ such that $A$ either covers or avoids each chief factor of the chief series.
\noindent\textbf{HY($p^d$)} \emph{Let $P \in \mathrm{Syl}_p(G)$ and $p^d$ be a given prime power such that $1 < p^d < |P|$. Assume
that all subgroups of $G$ of order $p^d$ are partial CAP-subgroups, and assume further that
all cyclic subgroups of $G$ of order $4$ are partial CAP-subgroups when $p^d = 2$ and $P$ is
nonabelian.}
\begin{cor}
Assume that $G$ satisfies $\mathrm{HY}(p^d)$ with $\oh{p'}{G} = 1$ and $p$-rank larger than $1$.
Then $G = H\ltimes P$ where $H \in \mathrm{Hall}_{p'}(G)$ and $P \in \mathrm{Syl}_p(G)$; furthermore, let $V$ be an irreducible $\mathbb{F}_p[H]$-submodule of $P/\Phi(P)$ and write
\[
\dim_{\mathbb{F}_p} V = e,\quad d' = d-\log_p |\Phi(P)|,\quad n' = \log_p |P/\Phi(P)|,
\]
then the following statements hold.
{\rm (1)} $P/\Phi(P)$ is a homogeneous $\mathbb{F}_p[H]$-module, while $V$ is not absolutely irreducible.
{\rm (2)} $d' \geq e \geq 2$, $e \mid (d', n')$, also $G/\Phi(P)$ satisfies $\mathrm{HY}(p^e )$.
{\rm (3)} $H$ is cyclic.
\end{cor}
\begin{proof}
By \cite[Theorem A]{qian2020}, $G = H\ltimes P$ where $H\in\mathrm{Hall}_{p'}(G)$ and $P\in\syl{p}{G}$.
Observe that $d'\geq 2$ by \cite[Lemma 2.8]{qian2020}, so $G/\Phi(P)$ satisfies HY$(p^{d'})$.
Also since $P/\Phi(P)\in\syl{p}{G/\Phi(P)}$ is elementary abelian,
\cite[Corollary 2.10]{qian2020} implies that $G/\Phi(P)$ satisfies HY$(p^{d'})$ if and only if $G/\Phi(P)\in\mathfrak{C}^{\sharp}(p)m{d'}$.
So the results follow directly by Theorem B.
\end{proof}
The above corollary improves \cite[Theorem A$'$]{qian2020}, in which $H$ was shown to be supersolvable with all its Sylow subgroups abelian and with cyclic Fitting
subgroup.
\end{document}
|
\begin{document}
\title{The bound-state solutions of the one-dimensional pseudoharmonic oscillator}
\author{Rufus Boyack}
\affiliation{D\'epartement de physique, Universit\'e de Montr\'eal, Montr\'eal, Qu\'ebec H3C 3J7, Canada}
\author{Asadullah Bhuiyan}
\affiliation{Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1, Canada}
\author{Aneca Su}
\affiliation{Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1, Canada}
\author{Frank Marsiglio}
\affiliation{Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1, Canada}
\begin{abstract}
We study the bound states of a quantum mechanical system consisting of a simple harmonic oscillator with an inverse square interaction, whose interaction strength is governed by a constant $\alpha$.
The singular form of this potential has doubly-degenerate bound states for $-1/4\leq\alpha<0$ and $\alpha>0$; since the potential is symmetric, these consist of even and odd-parity states.
In addition we consider a regularized form of this potential with a constant cutoff near the origin.
For this regularized potential, there are also even and odd-parity eigenfunctions for $\alpha\geq -1/4$.
For attractive potentials within the range $-1/4\leq\alpha<0$, there is an even-parity ground state with increasingly negative energy and a probability density that approaches a Dirac delta function as the cutoff parameter becomes zero.
These properties are analogous to a similar ground state present in the regularized one-dimensional hydrogen atom.
We solve this problem both analytically and numerically, and show how
the regularized excited states approach their unregularized counterparts.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:Intro}
The one-dimensional (1D) potential $V\sim-a/x^{2}$ is a fascinating quantum mechanical system with several theoretical perplexities~\citep{Case1950,Gupta1993,Essin2006,Nguyen2020}, including the absence of any bound-state solutions.
As noted in Ref.~\citep{Essin2006}, for a particle of mass $m$ in this potential there is no quantity with the dimensions of energy that can be constructed from only the
available parameters $m, \hbar$, and $a$, and thus no quantized bound-state solutions are expected to exist.
A familiar system with a natural energy scale is the simple harmonic oscillator, where
the oscillator frequency $\omega$ determines $\hbar\omega$ as the pertinent energy scale.
In addition, the oscillator length $\sqrt{\hbar/\left(m\omega\right)}$ is the natural length scale.
Thus, one expects that, if a simple harmonic oscillator interaction is added to the $1/x^{2}$ potential,
then bound-state solutions will exist for this combined system.
A physical context where this situation might arise is if a harmonic oscillator is considered in
the presence of an external dipole-like interaction.
Indeed, in Refs.~\citep{Palma2003,Palma2003b} the authors studied the 1D potential
\begin{equation}
\label{eq:Potential1}
V\left(x\right)=\frac{1}{2}m\omega^{2}x^{2}+\frac{\hbar^{2}}{2m}\frac{\alpha}{x^{2}}
\end{equation}
in such a context, and they computed the bound-state energy eigenvalues and eigenfunctions for $\alpha>0$.
The energy eigenfunctions were found to be doubly degenerate,
which is in contrast to the well-known theorem~\cite{LandauLifshitz} that finite 1D potentials do not have degenerate spectra.
For $\alpha<0$, Ref.~\cite{Palma2003} stated that the attractive potential has no lower energy bound.
As we will find, finite-energy solutions do indeed exist when $\alpha<0$.
Prior to the studies of Refs.~\cite{Palma2003,Palma2003b}, Ref.~\cite{Ballhausen1988} studied the potential in Eq.~\eqref{eq:Potential1},
and for $\alpha>0$ the same eigenfunctions and eigenvalues as given in Refs.~\cite{Palma2003,Palma2003b} were obtained.
Interestingly, for $-1/4\leq\alpha<0$, two sets of bound-state solutions were also obtained; that is, for a fixed $\alpha$,
two distinct bound-state eigenfunctions with distinct energy eigenvalues were found.
This counterintuitive behaviour, namely two distinct solutions for the same value of $\alpha$, was criticized~\cite{Senn1989}, and it was argued that only one of the proposed solutions was in fact the correct one.
This argument was validated using an alternative explanation~\cite{Ballhausen1989}, and as a result a well-defined set of bound-state solutions for $-1/4\leq\alpha<0$ was obtained.
As we will show below, for this range of $\alpha$ a degenerate set of even and odd-parity solutions also exists.
This is most readily seen in the regularized calculations.
The potential in Eq.~\eqref{eq:Potential1} has been studied using a variety of different methods, including raising and lowering operators~\cite{Ballhausen1988b,Singh2006,DongBook},
$su(1,1)$ spectrum generating algebra~\cite{Brajamani1990,Buyukilic1992,Levai1994,Oyewumi2012}, supersymmetric quantum mechanics~\citep{Pena2005}, Laplace transform~\cite{Arda2012},
and by explicitly solving the differential equation~\cite{Goldman,TerHaar,Weissman1979}; the latter approach was for fixed $\alpha=1$.
The 2D~\cite{Dong2005} and 3D~\cite{Constantinescu,Sage1984,Sage1985,Dong2003,Oyewumi2008,Tezcan2009} versions of this potential have also been studied.
In molecular physics Eq.~\eqref{eq:Potential1} is known as a pseudoharmonic oscillator potential~\cite{Ballhausen1988}
(strictly speaking the term pseudoharmonic oscillator usually refers to the 3D version with a specific value of $\alpha$~\cite{Oyewumi2012}).
More detailed references on applications of pseudoharmonic-oscillator-type potentials can be found in Refs.~\citep{Oyewumi2012,Nogueira2016},
including the application of the 3D version to describe a diatomic molecule~\cite{Oyewumi2012}.
Singular potentials~\cite{Andrews1976} in 1D quantum mechanics are of great interest, with the most notable example being the 1D hydrogen atom.
An essential aspect of the hydrogen problem concerns the existence of even-parity solutions, which for the singular, ``unregularized'' potential have been argued
by different researchers to be present~\cite{Andrews1981,Andrews1981b,Home1982,Andrews1988,Hammer1988} or absent~\cite{Haines1969,Gomes1980,Gomes1981,Palma2006},
whereas for a regularized version of this potential~\cite{Loudon1959,Boyack2021} even-parity solutions are indisputably present.
An interesting phenomenon in the regularized 1D hydrogen atom is the presence of an even-parity ground state whose energy becomes increasingly negative as the cutoff parameter goes to zero~\cite{Loudon1959,Boyack2021}.
Moreover, the probability density for the corresponding wave function of this state limits to a Dirac delta function.
This ground state acts like a pseudopotential~\cite{Ibrahim2018}.
In this paper we will study a regularized form of Eq.~\eqref{eq:Potential1} and show that, for $-1/4\leq\alpha<0$, the pseudoharmonic oscillator also has an even-parity ground state
with the exact same aforementioned properties as the ground state of the 1D hydrogen atom, namely, increasingly negative energy and a probability density limiting to a Dirac delta function.
We will obtain the analytical form of this solution as a function of the interaction strength $\alpha$ and numerically confirm this result in the limit of small cutoff.
Thus, for both $-1/4\leq\alpha<0$ and $\alpha>0$, there are even and odd-parity states, and the energies of these solutions become degenerate with one another as the cutoff parameter limits to zero.
Since there are many theoretical applications of the pseudoharmonic oscillator, as discussed previously, this analysis will be of interest in several pertinent contexts.
The structure of the paper is as follows. In Sec.~\ref{sec:UnregSE} we review the analysis of the unregularized potential.
We are in agreement with previous researchers, except that we argue that the eigenfunctions are also doubly degenerate for negative $\alpha$.
Following this, in Sec.~\ref{sec:RegSE1} we analyze the regularized potential for the case $-1/4\leq\alpha<0$ and demonstrate
that our results for the excited states reproduce those for the unregularized potential as the cutoff limits to zero.
Then, in Sec.~\ref{sec:GS}, we analyze the properties of the even-parity ground state that has an increasingly negative energy as the cutoff approaches zero.
In Sec.~\ref{sec:RegSE2}, we study the regularized potential for the case $\alpha>0$.
In Sec.~\ref{sec:matrix_mechanics}, we describe a simple numerical method based on matrix mechanics that we have used
to confirm the analytical results obtained with the less familiar confluent hypergeometric functions.
Finally, we present our conclusions in Sec.~\ref{sec:Conclusion}. Technical details are presented in the Appendices.
In Appendix~\ref{App:OddError}, we derive the ``correction'' term for the difference between the energy eigenvalues of the regularized potential and those of the unregularized potential.
In Appendix~\ref{App:CCoeffs}, we derive an expansion of the ground-state energy in small values of the cutoff parameter.
In Appendix~\ref{App:C0Sol}, we obtain a closed-form expression for the approximate ground-state energy as a function of $\alpha$.
\section{Unregularized potential}
\label{sec:UnregSE}
\subsection{Eigenstates and eigenvalues}
The potential energy can be written in a compact form by introducing the length scale $x_{0}$ and the energy scale $V_{0}$
defined by
\begin{align}
\label{eq:X0} x_{0}&=\sqrt{\frac{\hbar}{m\omega}},\\
\label{eq:V0} V_{0}&=\frac{1}{2}\hbar\omega.
\end{align}
The potential is then given by
\begin{equation}
\label{eq:Potential2}
V(x)=V_{0}\left[\left(\frac{x}{x_{0}}\right)^2+\alpha\left(\frac{x_{0}}{x}\right)^2\right].
\end{equation}
A plot of the potential is shown in Fig.~\ref{fig:Potential} for various values of $\alpha$.
\begin{figure}
\caption{The potential energy function in Eq.~\eqref{eq:Potential2} for various values of $\alpha$.}
\label{fig:Potential}
\end{figure}
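For $\alpha>0$, setting the derivative of Eq.~\eqref{eq:Potential2} to zero shows that the potential has its minimum at $x=x_{0}\alpha^{1/4}$ with value $2V_{0}\sqrt{\alpha}$. As an illustrative sketch (in units with $x_{0}=V_{0}=1$; not part of the original analysis), a short numerical minimization confirms this:

```python
import math
from scipy.optimize import minimize_scalar

alpha = 3.0   # any alpha > 0, in units with x0 = V0 = 1

def V(x):
    """Dimensionless form of Eq. (Potential2): V/V0 as a function of x/x0."""
    return x**2 + alpha / x**2

# Numerical minimum over a bracket that safely contains the stationary point
res = minimize_scalar(V, bounds=(0.1, 10.0), method='bounded')
x_min = alpha ** 0.25               # analytic stationary point
V_min = 2.0 * math.sqrt(alpha)      # analytic minimum value V(x_min)
```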
The time-independent Schr\"{o}dinger equation for the
potential in Eq.~\eqref{eq:Potential1} is
\begin{equation}
-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi\left(x\right)}{dx^{2}}+\left(\frac{1}{2}m\omega^{2}x^{2}+\frac{\alpha\hbar^{2}}{2m}\frac{1}{x^{2}}\right)\psi\left(x\right)=E\psi\left(x\right).\label{eq:SE1}
\end{equation}
Due to the $1/x^2$ term in the potential, we require the solutions to satisfy the boundary condition $\psi(0)=0$.
Since the Hamiltonian has inversion symmetry, the solutions of Eq.~\eqref{eq:SE1}
have definite parity and are either even or odd functions of position.
Define the dimensionless variables $y$ and $\kappa$ via
\begin{equation}
\label{eq:EandyDefs}
y = \frac{x}{x_{0}};\quad E=\left(\kappa+\frac{1}{2}\right)\hbar\omega.
\end{equation}
The Schr\"{o}dinger equation then becomes
\begin{equation}
-\psi^{\prime\prime}\left(y\right)+\left(y^{2}+\frac{\alpha}{y^{2}}\right)\psi\left(y\right)=2\left(\kappa+\frac{1}{2}\right)\psi\left(y\right).\label{eq:SE}
\end{equation}
In the limit that $y\rightarrow\infty$, the asymptotic behaviour of $\psi(y)$ is $\psi\left(y\right)\sim e^{-\frac{1}{2}y^{2}}$.
Therefore, we consider the ansatz $\psi\left(y\right)=e^{-\frac{1}{2}y^{2}}y^{\nu}g\left(y\right)$.
The indicial equation for $g$ motivates introducing the variable $\nu$ defined by $\alpha=\nu\left(\nu-1\right)$.
Solving this equation gives
\begin{equation}
\nu=\frac{1}{2}\pm\sqrt{\frac{1}{4}+\alpha}.
\end{equation}
Real solutions thus require $\frac{1}{4}+\alpha\geq0$. Let the two solutions be denoted by $\nu_{\pm}$.
For $\alpha>0$, $\nu_{+}>1$ and $\nu_{-}<0$.
In this case we will choose the positive root; in reality both should be considered, but the outcome will be the same~\cite{Mathews2021}.
Suffice it to say that, for $-1/4\leq\alpha<0$ and for $\alpha>0$, $\nu=\nu_{+}$ is the only physically acceptable solution.
This requirement is consistent with the conclusions of Refs. \cite{Senn1989,Ballhausen1989} and dispels
the unphysical behaviour found in Fig.~1 of Ref.~\cite{Ballhausen1988}, which exhibited two possible energy eigenvalues for a given $\alpha$ when $-1/4\leq\alpha<0$.
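The two indicial roots are trivial to tabulate; the following sketch (an added check, not from the original text) verifies that each root satisfies $\alpha=\nu(\nu-1)$ and that both roots are positive in the weakly attractive regime $-1/4\leq\alpha<0$:

```python
import math

def nu_roots(alpha):
    """Both roots nu_+ and nu_- of the indicial equation alpha = nu*(nu - 1)."""
    disc = math.sqrt(0.25 + alpha)   # real only for alpha >= -1/4
    return 0.5 + disc, 0.5 - disc

# Weakly attractive case: both roots lie in (0, 1)
nu_plus, nu_minus = nu_roots(-0.2)
```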
After using the indicial equation, the differential equation for $g$ is
\begin{equation}
-\left[g^{\prime\prime}\left(y\right)+\frac{2\nu}{y}g^{\prime}\left(y\right)\right]+2\nu g\left(y\right)+2yg^{\prime}\left(y\right)=2\kappa g\left(y\right).
\end{equation}
Now let $w=y^{2}$. This substitution then leads to
\begin{equation}
wg^{\prime\prime}\left(w\right)+\left(\nu+\frac{1}{2}-w\right)g^{\prime}\left(w\right)-\left(\frac{\nu-\kappa}{2}\right)g\left(w\right)=0.
\end{equation}
The confluent hypergeometric differential equation (also known as Kummer's equation) has the form
\begin{equation}
wg^{\prime\prime}\left(w\right)+\left(b-w\right)g^{\prime}\left(w\right)-ag\left(w\right)=0
\end{equation}
and the solution is a linear combination of two independent solutions of this equation, typically taken to be $M(a,b,w)$ (known as
the Kummer function), and $U(a,b,w)$ (known as the Tricomi function). Several other possibilities exist, as recently catalogued
in Ref.~\cite{Mathews2021}. See also Refs.~\cite{AbramowitzStegun,NIST2020}.
The result is that the solution is given by
\begin{equation}
g\left(w\right)=U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},w\right),
\end{equation}
where $\frac{1}{2}\left(\nu-\kappa\right)$ must be a non-positive integer, in which case the Tricomi function truncates to a polynomial.
Therefore, $\kappa-\nu=2n$, where $n\in\mathbb{Z}_{\geq0}$.
The relation between the generalized Laguerre polynomial and the confluent hypergeometric function is given in Eq.~(13.6.27) of Ref.~\cite{AbramowitzStegun} and Eq.~(13.6.19) of Ref.~\cite{NIST2020}:
$U\left(-n,\beta+1,w\right)=\left(-1\right)^{n}n!L_{n}^{\left(\beta\right)}\left(w\right)$, where we use the Laguerre polynomials as defined in Refs.~\cite{AbramowitzStegun,NIST2020}.
Thus, up to a normalization constant, the solution is
\begin{equation}
g\left(w\right)=L_{n}^{\left(\nu-\frac{1}{2}\right)}\left(w\right) =
\sum_{s=0}^n \frac{\Gamma\left(n+\nu + \frac{1}{2}\right)}{\Gamma\left(s+\nu + \frac{1}{2}\right)}\frac{\left(-w\right)^s}{\left(n-s\right)! s!}.
\end{equation}
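The explicit series above can be checked directly against SciPy's generalized Laguerre polynomials; this is a verification sketch added here, with arbitrarily chosen sample values of $n$, $\nu$, and $w$:

```python
import math
from scipy.special import eval_genlaguerre

def laguerre_series(n, nu, w):
    """The explicit sum for L_n^{(nu - 1/2)}(w) quoted in the text."""
    return sum(
        math.gamma(n + nu + 0.5) / math.gamma(s + nu + 0.5)
        * (-w) ** s / (math.factorial(n - s) * math.factorial(s))
        for s in range(n + 1)
    )

# Sample point: the series should agree with the library polynomial
n, nu, w = 3, 1.7, 0.9
series_val = laguerre_series(n, nu, w)
library_val = eval_genlaguerre(n, nu - 0.5, w)
```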
In summary, the complete solution for the eigenfunctions, when $y\geq0$, is
\begin{equation}
\label{eq:PsiR}
\psi_{+}\left(y\right)=A\left(-1\right)^{n}e^{-\frac{1}{2}y^{2}}y^{\nu}L_{n}^{\left(\nu-\frac{1}{2}\right)}\left(y^{2}\right),\ n\in\mathbb{Z}_{\geq0}.
\end{equation}
Similarly, the solution for $y\leq0$ is
\begin{equation}
\label{eq:PsiL}
\psi_{-}\left(y\right)=B\left(-1\right)^{n}e^{-\frac{1}{2}y^{2}}\left(-y\right)^{\nu}L_{n}^{\left(\nu-\frac{1}{2}\right)}\left(y^{2}\right),\ n\in\mathbb{Z}_{\geq0}.
\end{equation}
\subsection{Continuity and normalization conditions}
Continuity of the wave function at the origin requires that $\psi_{+}\left(0^{+}\right)=\psi_{-}\left(0^{-}\right)$.
Since $\nu>0$, the wave function vanishes at the origin and so this condition does not impose a constraint.
For potentials with a finite jump discontinuity, the derivative of the wave function is continuous~\cite{Branson1979,Andrews1981}.
However, since the potential in Eq.~\eqref{eq:Potential2} is singular at the origin, i.e., it has an infinite jump discontinuity, the behaviour of the derivative of the wave function is more subtle.
As pointed out in Ref.~\cite{Home1982}, excluding the case of the Dirac delta potential, the requirement of Hermiticity of the momentum operator leads to the result that a
wave function can have a discontinuous first derivative only at a point where the wave function itself vanishes.
Both of the functions in Eqs.~\eqref{eq:PsiR}-\eqref{eq:PsiL} vanish at the origin, and as such the derivative of the wave function can be discontinuous.
We will not impose a condition on $\psi^{\prime}$ and consider both even and odd-parity solutions.
We now determine the normalization constant.
Let $N$ denote the normalization constant of the wave function. The normalization condition is given by
\begin{align}\label{eq:Norm}
1 & = \int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2}\nonumber \\
& = N^{2}\left(\frac{\hbar}{m\omega}\right)^{\frac{1}{2}}\int_{0}^{\infty}dx\,x^{\nu-\frac{1}{2}}e^{-x}\left[L_{n}^{\left(\nu-\frac{1}{2}\right)}\left(x\right)\right]^{2}.
\end{align}
To evaluate the remaining integral we use the orthogonality relation for the generalized Laguerre polynomials (Eq.~(19), pg.~479 of Ref.~\cite{Prudnikov}).
For $\lambda>-1$, we have
\begin{equation}
\int_{0}^{\infty}dx\,x^{\lambda}e^{-x}L_{n}^{\left(\lambda\right)}\left(x\right)L_{m}^{\left(\lambda\right)}\left(x\right)=\frac{1}{n!}\Gamma\left(n+\lambda+1\right)\delta_{n,m}.
\label{int_norm}
\end{equation}
The solution to Eq.~\eqref{eq:Norm} is thus
\begin{equation}
N=\left(\frac{m\omega}{\hbar}\right)^{\frac{1}{4}}\sqrt{\frac{n!}{\Gamma\left(n+\nu+\frac{1}{2}\right)}}.
\end{equation}
This result agrees with Eq.~(16) of Ref.~\citep{Ballhausen1988b}.
Let $\psi_{n}\left(x\right)$ be defined by
\begin{align}
\psi_{n}\left(x\right)&=\frac{1}{\sqrt{x_{0}}}\sqrt{\frac{n!}{\Gamma\left(n+\nu+\frac{1}{2}\right)}}e^{-x^{2}/\left(2x_{0}^2\right)}\nonumber\\
&\quad\times
\left(\frac{x}{x_{0}}\right)^{\nu}L_{n}^{\left(\nu-\frac{1}{2}\right)}\left(\frac{x^2}{x_{0}^2}\right),\ x\geq0.
\end{align}
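The normalization can also be confirmed numerically. The following sketch (an added check in units with $x_{0}=1$, for an arbitrarily chosen $\alpha$ and $n$) integrates $|\psi|^{2}$ over the full line, i.e., twice the integral of $\psi_{n}^{2}$ over $x\geq0$:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def psi_n(x, n, nu):
    """Normalized psi_n(x) for x >= 0, in oscillator units (x0 = 1)."""
    N = math.sqrt(math.factorial(n) / math.gamma(n + nu + 0.5))
    return N * math.exp(-0.5 * x**2) * x**nu * eval_genlaguerre(n, nu - 0.5, x**2)

alpha = 0.5
nu = 0.5 + math.sqrt(0.25 + alpha)
# Full-line norm of the even/odd combinations: twice the half-line integral
total, _ = quad(lambda x: 2.0 * psi_n(x, 2, nu) ** 2, 0.0, np.inf)
```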
The complete solution of the problem is then given as follows. For all of the permissible (and nontrivial) values of $\alpha$ for which bound-state solutions exist, $-1/4\leq\alpha<0$ and $\alpha>0$, the parameter $\nu$ and the energy eigenvalues are
\begin{align}
\label{eq:Nu} \nu & = \frac{1}{2}+\sqrt{\frac{1}{4}+\alpha}, \\
\label{eq:En}E_{n} & = \left(2n+1+\sqrt{\frac{1}{4}+\alpha}\right)\hbar\omega,\ n\in\mathbb{Z}_{\geq0}.
\end{align}
Equations~\eqref{eq:Nu}-\eqref{eq:En} are also valid for $\alpha=0$.
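As an independent check of the spectrum (an added numerical sketch, not part of the original derivation), one can discretize the dimensionless equation~\eqref{eq:SE} on $(0,L)$ with Dirichlet boundary conditions $\psi(0)=\psi(L)=0$ and compare the lowest eigenvalues $2\left(\kappa+\frac{1}{2}\right)$ with $2\left(2n+\nu\right)+1$; the box size $L$ and grid resolution below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

alpha = 2.0                          # then nu = 2 exactly
L, m = 12.0, 4000                    # box size and number of interior points
y = np.linspace(0.0, L, m + 2)[1:-1] # interior grid; psi(0) = psi(L) = 0
h = y[1] - y[0]

# Central-difference matrix for -psi'' + (y^2 + alpha/y^2) psi
diag = 2.0 / h**2 + y**2 + alpha / y**2
off = -np.ones(m - 1) / h**2
evals = eigh_tridiagonal(diag, off, eigvals_only=True,
                         select='i', select_range=(0, 2))

# Analytic spectrum of Eq. (SE): 2*kappa + 1 with kappa = 2n + nu
nu = 0.5 + np.sqrt(0.25 + alpha)
analytic = 2.0 * (2.0 * np.arange(3) + nu) + 1.0   # 5, 9, 13 for alpha = 2
```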
When $\alpha=0$, Eq.~\eqref{eq:Potential2} reduces to the simple harmonic oscillator potential, and another set of solutions are given by the even-parity Hermite polynomial solutions.
Thus, for $-1/4\leq\alpha<0$ and $\alpha>0$, the energy eigenfunctions are doubly degenerate, and for $\alpha=0$ there is a ``disconnected'' set of even solutions.
To account for this discontinuous behaviour in the energy structure of the even solutions, we label these solutions as follows.
For $\alpha>0$, we let $n_{even}=n_{odd}=n\in\mathbb{Z}_{\geq0}$. However, for $-1/4\leq\alpha<0$, we let $n_{even}-1=n_{odd}=n\in\mathbb{Z}_{\geq0}$,
where $n_{even}$ and $n_{odd}$ label the respective even and odd-parity solutions.
A plot of these energy eigenvalues, as functions of $\alpha$, is shown in Fig.~\ref{fig:UnregEigs}. The solid (dashed) lines correspond to the even (odd) solutions.
As illustrated, the energy eigenvalues are doubly degenerate for $-1/4\leq\alpha<0$ and $\alpha>0$.
A possibly counter-intuitive feature of this plot is our choice of labelling for the even solutions.
As $\alpha$ changes sign, for a fixed $n_{odd}$, the even solution that is degenerate with the odd solution has a different $n_{even}$ label.
The actual mathematical expression for the even-parity wave functions does not change as $\alpha$ changes sign; that is, the $n$ in Eqs.~\eqref{eq:PsiR} and \eqref{eq:PsiL}
is always the same as $\alpha$ changes sign. It is merely that we define an $n_{even}$ that differs between positive and negative $\alpha$.
The motivation for this choice will be clearer in the next section when we study the regularized potential.
Finally, note that all of the eigenvalues are positive, i.e., there is no negative-energy state that takes advantage of the negative potential.
The presence of such a state will be shown for the case of the regularized potential.
The eigenfunctions are degenerate, and we select the even and odd-parity combinations:
\begin{equation}
\alpha\geq-1/4:\quad \psi\left(x\right) =
\begin{dcases}
\ \ \psi_{n}\left(x\right),\ \ \ x\geq0\\
\pm\psi_{n}\left(-x\right),\ x\leq0.
\end{dcases}
\label{eq:Wavefn}
\end{equation}
In Figs.~\ref{fig:UnregPsiOdd} and \ref{fig:UnregPsiEven} we show the first few odd and even-parity wave functions, respectively, for positive and negative values of $\alpha$.
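As a numerical aside, the normalization implied by Eq.~\eqref{eq:Wavefn} can be checked directly: with $x_{0}=1$, each half-line integral $\int_{0}^{\infty}\psi_{n}^{2}\,dx$ equals $1/2$, so the even and odd extensions carry unit norm. The sketch below (assuming NumPy/SciPy are available; the helper \texttt{psi} is ours, not from the text) also verifies orthogonality of distinct eigenfunctions:

```python
# Sketch (assuming NumPy/SciPy) checking normalization and orthogonality of the
# half-line eigenfunctions psi_n; "psi" is our helper, in units with x0 = 1.
import numpy as np
from scipy.special import eval_genlaguerre, gamma
from scipy.integrate import quad

def psi(n, x, alpha):
    """Half-line eigenfunction psi_n(x) for x >= 0, with x0 = 1."""
    nu = 0.5 + np.sqrt(0.25 + alpha)
    norm = np.sqrt(gamma(n + 1) / gamma(n + nu + 0.5))
    return norm * np.exp(-x**2 / 2) * x**nu * eval_genlaguerre(n, nu - 0.5, x**2)

alpha = -0.2  # any permissible value: -1/4 <= alpha < 0 or alpha > 0
norms = [quad(lambda x: psi(n, x, alpha)**2, 0, np.inf)[0] for n in range(3)]
overlap = quad(lambda x: psi(0, x, alpha) * psi(1, x, alpha), 0, np.inf)[0]
print(norms, overlap)  # each half-line norm is 1/2; the overlap vanishes
```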
\begin{figure}
\caption{The first few energy levels as functions of the parameter $\alpha$. We have artificially shifted the even-state eigenvalues in the positive (negative) $y$-direction for $\alpha<0\ (\alpha>0)$ for clarity.
In reality, the even and odd-state eigenvalues lie exactly on top of each other.
Note that all the odd-state energies (denoted by dashed coloured curves) are continuous and pass through the well-known result for the harmonic oscillator at $\alpha = 0$.
In contrast, the even states (denoted by coloured solid curves) have a discontinuity at $\alpha = 0$;
the actual energy values for $\alpha = 0$ lie midway between the two branches of this discontinuity.}
\label{fig:UnregEigs}
\end{figure}
\begin{figure}
\caption{The first few odd-parity wave functions for the unregularized potential, for positive and negative values of $\alpha$.}
\label{fig:UnregPsiOdd}
\end{figure}
\begin{figure}
\caption{The even-parity wave functions. The dashed curves correspond to $\alpha=-0.1$ while the solid curves are for $\alpha=0.1$.}
\label{fig:UnregPsiEven}
\end{figure}
\subsection{Discussion}
The energy eigenvalues obtained here agree with those derived in Ref.~\citep{Ballhausen1988}.
Importantly, there are bound-state solutions for negative values of $\alpha$ in the range $-1/4\leq\alpha<0$.
The existence of bound-state solutions in this regime is not too surprising, since a harmonic oscillator potential encloses the singular potential;
moreover, these bound states all have positive energy, consistent with the fact that there
are no bound states for the $1/x^{2}$ potential in this regime~\citep{Essin2006}, as this would require a negative-energy solution.
For $-1/4\leq\alpha<0$, we have argued that there are two degenerate solutions (as there are for $\alpha >0$).
The critical value $\alpha=-1/4$ is analogous to the critical coupling at which the fall of a particle to the centre of the potential becomes possible: see Sec.~35 of Ref.~\cite{LandauLifshitz}.
Note that a remarkable discontinuity occurs at $\alpha=0$, where the states are no longer degenerate and the eigenvalues form the familiar ladder series (see Fig.~\ref{fig:UnregEigs}).
An interesting aspect of this problem is the double degeneracy of the bound-state solutions.
A standard theorem~\cite{LandauLifshitz} in one-dimensional quantum mechanics asserts that for finite potentials the bound-state wave functions are non-degenerate.
However, for singular potentials, this theorem is modified~\cite{Andrews1976}.
Here we have derived the exact energy eigenvalues and eigenfunctions.
If these solutions were not already known, it would be natural to consider the $\alpha/x^2$ potential as a perturbation to a simple harmonic oscillator system
and use non-degenerate perturbation theory to obtain the corrected eigenvalues and eigenfunctions in powers of $\alpha$.
The energy obtained~\cite{Aguilera1991} to second order in perturbation theory agrees with the expansion of the exact energy.
However, the perturbed wave functions disagree with the expansion of the exact wave functions.
Indeed, the expansion of $x^{\nu}$ for small $\alpha$ produces a logarithmic term in $x$, which cannot arise from the sum of unperturbed eigenfunctions consisting of Hermite polynomials.
Thus, perturbation theory for this potential is singular; see Ref.~\cite{Aguilera1991} for further discussion of these points.
Note that both the odd-parity (Fig.~\ref{fig:UnregPsiOdd}) and the even-parity (Fig.~\ref{fig:UnregPsiEven}) wave functions
are essentially identical for positive $\alpha$ and negative $\alpha$.
Of course, given Eq.~(\ref{eq:Wavefn}), the even and odd wave functions are identical to one another for a given value of $\alpha>0$ as well.
The significance of the first statement, however, is profound.
This equivalence means that the singular, attractive well ($\alpha < 0$) acts as a barrier in very much the same way as the repulsive barrier ($\alpha > 0$) does.
For a simpler model this was readily understood as a consequence of the so-called pseudopotential effect \cite{Ibrahim2018}.
This effect can be summarized as follows: a bound state with very negative energy acts as a
pseudopotential for other, higher-energy states, because these states must be orthogonal to it.
Since the wave function of such a deeply bound state is strongly peaked near the origin, it serves as an effective
barrier with respect to tunnelling in the positive-energy states.
In our case, however, we have been unable
to identify such a negative-energy bound state.
We speculate that nonetheless it is present, but outside of the Hilbert space that we have explored.
Additional evidence comes from the cusp that is clearly present at the origin in the even-parity wave functions depicted in
Fig.~\ref{fig:UnregPsiEven}. It is easy to show that the second derivative of this cusp-like feature produces a (repulsive) Dirac delta function.
As there is no Dirac delta function in the potential we are studying, we understand this inferred $\delta$-function to be the result
of a bound state not contained within our Hilbert space. This interpretation of our results for the unregularized potential is further
supported by results of the regularized potential.
We now investigate a regularized version of Eq.~\eqref{eq:Potential2} and study the interesting properties that arise in the limit that the cutoff is taken to zero.
We will recover the bound states of the unregularized potential, but, in addition, a new, negative energy bound state arises, and
plays a role in causing the degeneracy in the positive-energy solutions of the regularized potential. We believe this ground state is
the one inferred above in the unregularized theory.
\section{Regularized potential case i: $-1/4\leq\alpha<0$ }
\label{sec:RegSE1}
We consider the potential
\begin{equation}
V\left(x\right)=\left\{ \begin{array}{c}
V_{0}\left[\left(\frac{x}{x_{0}}\right)^{2}+\alpha\left(\frac{x_{0}}{x}\right)^{2}\right],\ x\geq\delta x_{0}\\ \\
V_{0}\left(\delta^{2}+\frac{\alpha}{\delta^{2}}\right), \hspace{15mm}\ x\leq\delta x_{0}.
\end{array}\right.
\label{regularized_potential}
\end{equation}
The parameter $x_{0}$ is the oscillator length defined in Eq.~\eqref{eq:X0}, $V_{0}$ is the energy scale defined in Eq.~\eqref{eq:V0},
and $0<\delta\ll1$ is a fixed cutoff parameter used to ``regularize'' the singularity at the origin.
We define $\widetilde{V}_{0}=V_{0}\left(\delta^{2}+\frac{\alpha}{\delta^{2}}\right)$.
In this section we consider the case $-1/4\leq\alpha<0$.
For bound-state solutions of energy $E$ we require $E<V\left(\pm\infty\right)=\infty$.
In addition, a theorem~\cite{LandauLifshitz} of one-dimensional quantum mechanics is that $E>V_{\text{min}}=\widetilde{V}_{0}$.
Since $\widetilde{V}_{0}\rightarrow-\infty$ as $\delta\rightarrow0$, for the case $-1/4\leq\alpha<0$, the allowed bound-state energies in this limit are $-\infty<E<\infty$.
In Sec.~\ref{sec:GS}, we shall show that there is indeed a ground-state solution with increasingly negative energy; that is, $E\rightarrow-\infty$ as $\delta\rightarrow0$.
All of the other bound states correspond to excited states that have positive energy.
\subsection{Odd-parity solutions}
To derive the even and odd-parity eigenfunctions, it suffices to consider only $x\geq0$.
We then divide space into region I: $x\leq\delta x_{0}$ and region II: $x\geq\delta x_{0}$.
We define $q^{2}$ by $q^{2}=\frac{2m}{\hbar^{2}}\left(E-\widetilde{V}_{0}\right)$, where $E$ is defined in Eq.~\eqref{eq:EandyDefs}.
Using the definitions of $q$ and $E$, the quantity $q\delta x_{0}$ can be expressed in terms of $\kappa$ as follows:
\begin{equation}
\label{eq:QX0}
\left(q\delta x_{0}\right)^{2}=\left(2\kappa+1\right)\delta^{2}-\left(\delta^{4}+\alpha\right).
\end{equation}
In region I, the Schr\"odinger equation is
\begin{equation}
\label{eq:SERegionI_Eq}
\psi^{\prime\prime}\left(x\right)+q^{2}\psi\left(x\right)=0.
\end{equation}
The solution is
\begin{equation}
\label{eq:SERegionI_Sol}
\psi_{\text{I}}\left(x\right)=A_{\text{I}}\cos\left(qx\right)+B_{\text{I}}\sin\left(qx\right).
\end{equation}
The solutions have either even or odd parity. Let us first consider the odd-parity solutions: $A_{\text{I}}=0$.
In region II the Schr\"odinger equation is
\begin{equation}
\label{eq:SERegionII_Eq}
-\frac{\hbar^{2}}{2m}\psi^{\prime\prime}\left(x\right)+V_{0}\left[\left(\frac{x}{x_{0}}\right)^{2}+\alpha\left(\frac{x_{0}}{x}\right)^{2}\right]\psi\left(x\right)=E\psi\left(x\right).
\end{equation}
Following the analysis performed in Sec.~\ref{sec:UnregSE}, where $y$ and $E$ are as given in Eq.~\eqref{eq:EandyDefs}, and $\nu$ is defined as in Eq.~\eqref{eq:Nu}, the solution to this differential equation is
\begin{align}
\label{eq:SERegionII_Sol}
\psi_{\text{II}}\left(x\right)&=A_{\text{II}}M\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^{2}\right)\nonumber\\
&\quad+B_{\text{II}}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^{2}\right).
\end{align}
For non-singular behaviour as $x\rightarrow\infty$, we require $A_{\text{II}}=0$.
The general, odd-parity solution (for $x\geq0$) is then
\begin{equation}
\psi\left(x\right)=\left\{ \begin{array}{c}
\qquad \qquad B_{\text{I}}\sin\left(qx\right),\hspace{18mm} x\leq\delta x_{0}\\
B_{\text{II}}y^{\nu}e^{-\frac{1}{2}y^2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^2\right),\ \ \ x\geq\delta x_{0}.
\end{array}\right.\label{eq:PsiOdd}
\end{equation}
The energy eigenvalues are determined from the continuity of $\psi^{\prime}/\psi$ at $x=\delta x_{0}$.
In region I, we have
\begin{equation}
\label{eq:DPsiRegI}
\frac{\psi_{\text{I}}^{\prime}}{\psi_{\text{I}}}=q\cot\left(q\delta x_{0}\right).
\end{equation}
In region II we have
\begin{align}
\frac{\psi_{\text{II}}^{\prime}}{\psi_{\text{II}}} & = \frac{1}{x_{0}}\left[\frac{\nu}{\delta}-\delta+2\delta\frac{U^{\prime}\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right]\nonumber \\
& = \frac{1}{x_{0}}\left[\frac{\nu}{\delta}-\delta-\delta\left(\nu-\kappa\right)\frac{U\left(\frac{\nu-\kappa}{2}+1,\nu+\frac{3}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right].
\end{align}
In the last step we used Eq.~(13.4.21) of Ref.~\cite{AbramowitzStegun}: $U^{\prime}\left(a,b,z\right)=-aU\left(a+1,b+1,z\right)$.
The Tricomi function $U\left(a,b,z\right)$ obeys the following recurrence relations (see Eqs.~(13.4.17)-(13.4.18) of Ref.~\cite{AbramowitzStegun}):
\begin{align}
\label{eq:ASIdentity1}0&=U\left(a,b,z\right)-aU\left(a+1,b,z\right)-U\left(a,b-1,z\right).\\
\label{eq:ASIdentity2}0&=\left(b-a\right)U\left(a,b,z\right)+U\left(a-1,b,z\right)-zU\left(a,b+1,z\right).
\end{align}
Using these identities, we then have
\begin{equation}
\frac{\psi_{\text{II}}^{\prime}}{\psi_{\text{II}}}=\frac{1}{\delta x_{0}}\left[\delta^{2}-\kappa-1-2\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right].
\end{equation}
Equating this expression with Eq.~\eqref{eq:DPsiRegI}, we then obtain the eigenvalue condition for odd-parity solutions:
\begin{equation}
\label{eq:EVCOdd}
q\delta x_{0}\cot\left(q\delta x_{0}\right)=\delta^{2}-\kappa-1-2\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}.
\end{equation}
The quantity $q\delta x_{0}$ is given in Eq.~\eqref{eq:QX0}, therefore Eq.~\eqref{eq:EVCOdd} can be used to determine $\kappa$, and thus $E$, for given values of $\alpha$ and $\delta$.
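As an illustrative sketch (assuming SciPy; \texttt{hyperu} is SciPy's Tricomi function, and the residual function and bracketing window are our choices, not from the text), Eq.~\eqref{eq:EVCOdd} can be solved numerically for $\kappa$, confirming that for small $\delta$ the roots sit close to the unregularized values $\kappa=2n+\nu$:

```python
# Sketch (assuming SciPy) solving the odd-parity eigenvalue condition numerically;
# "residual" and the bracketing window are our choices, not from the text.
import numpy as np
from scipy.special import hyperu
from scipy.optimize import brentq

alpha, delta = -0.1, 1e-2
nu = 0.5 + np.sqrt(0.25 + alpha)

def residual(kappa):
    # (q delta x0)^2 from Eq. (QX0); it is positive here since alpha < 0
    qdx = np.sqrt((2 * kappa + 1) * delta**2 - (delta**4 + alpha))
    lhs = qdx / np.tan(qdx)  # q delta x0 cot(q delta x0)
    a, b = (nu - kappa) / 2, nu + 0.5
    rhs = delta**2 - kappa - 1 - 2 * hyperu(a - 1, b, delta**2) / hyperu(a, b, delta**2)
    return lhs - rhs

shifts = []
for n in range(3):
    k0 = 2 * n + nu  # unregularized value of kappa
    kappa = brentq(residual, k0 - 0.01, k0 + 0.01)
    shifts.append(kappa - k0)  # the small correction 2*eps_n
print(shifts)  # corrections vanish as delta -> 0
```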
A plot of the energy eigenvalues, for negative and positive values of $\alpha$ and for even and odd-parity states, is shown in Fig.~\ref{fig:RegEigs}.
The even-parity solutions for negative $\alpha$, as well as the even and odd-parity solutions for positive $\alpha$, will be treated in the subsequent sections,
but here we present the complete results for convenience.
\begin{figure}
\caption{Energy eigenvalues for the regularized pseudoharmonic oscillator potential. The odd-parity solutions correspond to the dashed lines while the even-parity solutions appear as solid lines. Three values of $\delta$ are used: $\delta=0.01, 0.001,0.0001$.
The thick line has $\delta=0.01$, the medium-sized line has $\delta=0.001$, and the thin line has $\delta=0.0001$. All of the odd-parity solutions appear approximately on the same curve, and are therefore insensitive to both the value of $\alpha$ and the value of
the regularization parameter $\delta$ over the ranges shown.}
\label{fig:RegEigs}
\end{figure}
One striking feature observable in Fig.~\ref{fig:RegEigs}, in the case of negative $\alpha$, is the presence of an even solution with energy that is becoming increasingly negative as $\delta\rightarrow0$. The three curves for each energy level correspond to $\delta = 0.01,
0.001, 0.0001$, with the smallest value given by the thinnest curve; these have the steepest slopes near $\alpha = 0$.
For the lowest-energy solution (blue curves) these very small $\delta$ results are practically vertical near $\alpha = 0$.
This solution is absent in Fig.~\ref{fig:UnregEigs} for the eigenvalues of the unregularized problem.
The analytical properties of this state will be analyzed in more detail in Sec.~\ref{sec:GS}.
In particular, in Sec.~\ref{sec:GSEnergy} it will be shown that the energy for this state goes as $E\sim-1/\delta^2+O\left(\delta^2\right)$, as $\delta\rightarrow0$.
In addition, in Sec.~\ref{sec:GSPsi} it will be shown that the probability density for this state limits to a Dirac delta function.
A similar bound-state solution is also present in the regularized 1D hydrogen atom~\cite{Boyack2021}.
Another interesting feature in Fig.~\ref{fig:RegEigs} is that the curves are continuous functions of $\alpha$. That is, for both even and odd solutions, as $\alpha$ changes sign the energy levels vary smoothly.
This figure should be contrasted with Fig.~\ref{fig:UnregEigs}, where the even solutions have a seemingly discontinuous behaviour as $\alpha$ changes sign.
For example, in Fig.~\ref{fig:UnregEigs}, when $\alpha<0$ the $n_{even}=1$ solution is degenerate with the $n_{odd}=0$ solution, whereas when $\alpha>0$ the $n_{even}=1$ solution is degenerate with the $n_{odd}=1$ solution.
This behaviour can now be understood as the $\delta\rightarrow0$ limit of Fig.~\ref{fig:RegEigs}, where this crossover feature emerges naturally.
The next step is to take the limit $\delta\rightarrow0$.
From Eq.~\eqref{eq:QX0}, we obtain $\left(q\delta x_{0}\right)^{2}\rightarrow-\alpha=\left|\alpha\right|$ as $\delta\rightarrow0$.
Thus, $q\delta x_{0}\cot\left(q\delta x_{0}\right)\rightarrow\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}.$
In the previous section we found that for the unregularized potential the parameter $\kappa$ is given by $\kappa=2n+\nu$.
Based on this result, for the regularized potential we then define
\begin{equation}
\label{eq:KEqn}
\kappa=2n+\nu+2\epsilon_{n},\ n\in\mathbb{Z}_{\geq0}.
\end{equation}
The ``correction term'' $\epsilon_{n}$ characterizes the difference between the energy eigenvalues for the unregularized and regularized potentials.
For the unregularized potential, $\epsilon_{n}=0$. The next goal is to determine $\epsilon_{n}$ as a function of $\alpha$ in the limit $\delta\ll1$.
As shown in Appendix~\ref{App:OddError}, the expression for $\epsilon_{n}$ in the limit $\delta\ll1$ is
\begin{align}
\label{eq:OddError1}
\epsilon_{n}&=\left(\frac{\nu-\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}}{\nu-1+\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}}\right)\nonumber\\
&\quad\times\frac{\left(-1\right)^{n}}{\Gamma\left(\frac{1}{2}-\nu-n\right)n!}\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{align}
Since $\nu>1/2$, $\epsilon_{n}\rightarrow0$ as $\delta\rightarrow0$.
Let us now turn to the wave function for odd-parity states, given in Eq.~\eqref{eq:PsiOdd}.
Continuity of $\psi$ at $x=\delta x_{0}$ imposes the condition
\begin{equation}
\label{eq:BIIOdd}
B_{\text{I}}\sin\left(q\delta x_{0}\right)=B_{\text{II}}\delta^{\nu}e^{-\delta^{2}/2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right).
\end{equation}
Normalization of $\psi$ requires $\int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2}=1$.
After inserting Eq.~\eqref{eq:BIIOdd} into Eq.~\eqref{eq:PsiOdd}, then performing the normalization integral and solving for $B^{2}_{\text{I}}$, we obtain
\begin{align}
\label{eq:BIOdd}
B^{2}_{\text{I}}&=\frac{1}{2\delta x_{0}}\Biggl\{ \int_{0}^{1}dy\sin^{2}\left(q\delta x_{0}y\right)+\sin^{2}\left(q\delta x_{0}\right)\int_{1}^{\infty}dyy^{2\nu}\nonumber\\
&\quad\times e^{-\delta^{2}\left(y^{2}-1\right)}\left[\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}y^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right]^{2}\Biggr\} ^{-1}.
\end{align}
The first few odd-parity wave functions are shown in Fig.~\ref{fig:RegPsiOdd}. For completeness, we present the results for positive and negative $\alpha$. The analysis for the positive $\alpha$ case is deferred to Sec.~\ref{sec:RegSE2OddPosAlpha}.
These wave functions strongly resemble the results for the unregularized potential shown in Fig.~\ref{fig:UnregPsiOdd}.
In fact, for even smaller values of $\delta$ (not shown), these curves become indistinguishable from those of Fig.~\ref{fig:UnregPsiOdd}.
In the next section we investigate the even-parity solutions.
\begin{figure}
\caption{The first few odd-parity wave functions for the regularized potential, for positive and negative values of $\alpha$.}
\label{fig:RegPsiOdd}
\end{figure}
\subsection{Even-parity solutions}
\begin{figure}
\caption{Exact energy eigenvalues for the even and odd solutions (blue) versus the approximate energy eigenvalues (red).
We have made the red curve thicker so that it is visible, since the energies are extremely close, particularly in the case of the odd solutions.
Here $\alpha=-0.05$.}
\label{fig:EnergyCorrNegAlpha}
\end{figure}
The Schr\"odinger equation in region I is given in Eq.~\eqref{eq:SERegionI_Eq}, with the general solution given in Eq.~\eqref{eq:SERegionI_Sol}. For even-parity solutions we set $B_{\text{I}}=0$.
The general solution for the Schr\"odinger equation in region II is given in Eq.~\eqref{eq:SERegionII_Sol}, where we set $A_{\text{II}}=0$.
For even-parity solutions, the wave function is given by
\begin{equation}
\psi\left(x\right)=\left\{ \begin{array}{c}
\qquad \qquad A_{\text{I}}\cos\left(qx\right),\hspace{15mm} x\leq\delta x_{0}\\
B_{\text{II}}y^{\nu}e^{-\frac{1}{2}y^2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^2\right),\ x\geq\delta x_{0}.
\end{array}\right.\label{eq:PsiEven}
\end{equation}
The eigenvalue condition is again determined by requiring continuity of $\psi^{\prime}/\psi$ at $x=\delta x_{0}$. In contrast to Eq.~\eqref{eq:EVCOdd} for odd-parity solutions,
the result for even-parity solutions is given by
\begin{equation}
\label{eq:EVCEven}
q\delta x_{0}\tan\left(q\delta x_{0}\right)=\kappa+1-\delta^{2}+2\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}.
\end{equation}
Following the analysis in Appendix~\ref{App:OddError}, we can determine the correction term in a manner similar
to that used for the odd states. The only difference in this case is the replacement of $\cot$ by $-\tan$ in Eq.~\eqref{eq:OddError1}.
Thus, the expression for $\epsilon_{n}$ for the even-parity states is
\begin{align}
\label{eq:EvenError1}
\epsilon_{n}&=\left(\frac{\nu+\sqrt{\left|\alpha\right|}\tan\sqrt{\left|\alpha\right|}}{\nu-1-\sqrt{\left|\alpha\right|}\tan\sqrt{\left|\alpha\right|}}\right)\nonumber\\
&\quad\times\frac{\left(-1\right)^{n}}{\Gamma\left(\frac{1}{2}-\nu-n\right)n!}\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{align}
Note that, for negative $\alpha$, in the above formula we set $n=n_{even}-1$, as illustrated in Fig.~\ref{fig:RegEigs}, where $n_{even}$ starts from $1,2,3,\dots$.
This redefinition merely amounts to a relabelling.
In Fig.~\ref{fig:EnergyCorrNegAlpha}, we compare the exact energy eigenvalues (shown in blue), computed using Eqs.~\eqref{eq:EVCOdd} and \eqref{eq:EVCEven} for odd and even solutions, respectively,
against those determined using Eqs.~\eqref{eq:KEqn}, \eqref{eq:OddError1}, and \eqref{eq:EvenError1} (shown in red) for small values of $\delta$ and $\alpha=-0.05$.
The results are in very good agreement for small values of $\delta$. The relative agreement for the odd corrections is even better, as
a zoom of Fig.~\ref{fig:EnergyCorrNegAlpha} focussing only on the odd correction indicates (not shown).
Let us now turn to the wave function for even-parity states, given in Eq.~\eqref{eq:PsiEven}.
Continuity of $\psi$ at $x=\delta x_{0}$ imposes the condition
\begin{equation}
\label{eq:AIEven}
A_{\text{I}}\cos\left(q\delta x_{0}\right)=B_{\text{II}}\delta^{\nu}e^{-\delta^{2}/2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right).
\end{equation}
Normalization of $\psi$ requires $\int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2}=1$.
After inserting Eq.~\eqref{eq:AIEven} into Eq.~\eqref{eq:PsiEven}, then performing the normalization integral and solving for $A^{2}_{\text{I}}$, we obtain
\begin{align}
A^{2}_{\text{I}}&=\frac{1}{2\delta x_{0}}\Biggl\{ \int_{0}^{1}dy\cos^{2}(q\delta x_{0}y)+\cos^{2}(q\delta x_{0})\int_{1}^{\infty}dyy^{2\nu}\nonumber\\
&\quad\times e^{-\delta^{2}(y^{2}-1)}\left[\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}y^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right]^{2}\Biggr\} ^{-1}.
\end{align}
The first few even-parity wave functions are shown in Fig.~\ref{fig:RegPsiEven}.
For completeness, we present the results for negative and positive $\alpha$. The analysis for the $\alpha>0$ case is presented in Sec.~\ref{sec:RegSE2EvenPosAlpha}.
As in the previous section, the wave functions strongly resemble those obtained for the
unregularized potential shown in Fig.~\ref{fig:UnregPsiEven}. The agreement improves with smaller values of $\delta$ (not shown)
but the convergence towards the unregularized result is slower than for the odd-parity wave functions.
We now have a complete description of the energy eigenvalues and the wave functions for the even and odd-parity solutions with positive energy.
\begin{figure}
\caption{The even-parity wave functions for the regularized potential with $\delta=0.01$. The dashed curves correspond to $\alpha=-0.1$ while the solid curves are for $\alpha=0.1$.}
\label{fig:RegPsiEven}
\end{figure}
\section{Ground-state solution with infinite negative energy}
\label{sec:GS}
\subsection{Energy eigenvalue}
\label{sec:GSEnergy}
For the even-parity solutions, Fig.~\ref{fig:RegEigs} shows that there is a state whose energy becomes increasingly negative as $\delta\rightarrow0$. Let us now determine the analytical properties of this solution.
The eigenvalue condition for even-parity states is given in Eq.~\eqref{eq:EVCEven}.
Using the identity in Eq.~\eqref{eq:ASIdentity2}, this condition can be expressed as
\begin{equation}
\label{eq:EVCEven2}
q\delta x_{0}\tan\left(q\delta x_{0}\right) = -\delta^{2}-\nu+2\delta^{2}\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{3}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}.
\end{equation}
From Eq.~(13.8.11) of Ref.~\cite{NIST2020}, we have
\begin{align}
\label{eq:UAsymptote}
& \lim_{a\rightarrow\infty}U\left(a,b,z\right) = 2\left(\frac{z}{a}\right)^{\frac{1}{2}\left(1-b\right)}\frac{e^{z/2}}{\Gamma\left(a\right)}\biggl\{K_{b-1}\left(2\sqrt{az}\right)\nonumber\\
& \times\sum_{s=0}^{\infty}\frac{p_{s}\left(b,z\right)}{a^{s}}+\sqrt{\frac{z}{a}}K_{b}\left(2\sqrt{az}\right)\sum_{s=0}^{\infty}\frac{q_{s}\left(b,z\right)}{a^{s}}\biggr\}.
\end{align}
The $p$ and $q$ coefficients are defined in Eqs.~(13.8.15)-(13.8.16) of Ref.~\cite{NIST2020}. Here we define $a=\frac{1}{2}\left(\nu-\kappa\right)$, $b=\nu+\frac{1}{2}$, and $z=\delta^{2}$.
Since we are interested in the limit $\delta\rightarrow0$, we need to consider only the $z=0$ values of the first few $p$ and $q$ coefficients, which are given by
\begin{align}
\label{eq:P0Coeff} p_{0}\left(b,z\right) & = 1,\\
p_{1}\left(b,0\right) & = -\frac{b}{2}\left(b-1\right),\\
\label{eq:Q0Coeff} q_{0}\left(b,0\right) & = \frac{b}{2}.
\end{align}
Numerical results indicate that, as $\delta\rightarrow0$, the quantity $\kappa\delta^2$ approaches a constant. This motivates the following series expansion for $\kappa$ in powers of $\delta^{2}$:
\begin{equation}
\label{eq:KAnsatz}
\kappa=-\frac{2c_{0}}{\delta^{2}}+c_{1}-\frac{1}{2}+c_{2}\delta^2+\dots.
\end{equation}
By inserting this ansatz for $\kappa$ in Eq.~\eqref{eq:EVCEven2}, and solving order by order in powers of $\delta^2$, the coefficients $c_{0}$, $c_{1}$, etc., can be deduced.
The most important coefficients are $c_{0}$ and $c_{1}$, because they appear in expressions that do not vanish as $\delta\rightarrow0$.
The derivation is lengthy, thus we defer the technical details to Appendix~\ref{App:CCoeffs} and here we present just the final results.
The coefficient $c_{0}$ is the solution of the following transcendental equation:
\begin{align}
\label{eq:C0Eqn}
c_{0}&=\frac{1}{4}\biggl\{ \left[\sqrt{\left|\alpha\right|-4c_{0}}\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\nu\right]\nonumber\\
&\quad\times\frac{K_{\nu-\frac{1}{2}}\left(2\sqrt{c_{0}}\right)}{K_{\nu+\frac{1}{2}}\left(2\sqrt{c_{0}}\right)}\biggr\} ^{2}.
\end{align}
The coefficient $c_{1}$ is determined in closed form to be:
\begin{equation}
\label{eq:C1Eqn}
c_{1}=0.
\end{equation}
The energy is $E=\left(\kappa+\frac{1}{2}\right)\hbar\omega$. Thus, using the previous results, the expansion of the ground-state energy $E_{0}$ in powers of $\delta^{2}$ is
\begin{equation}
\label{eq:GSE}
\frac{E_{0}}{\hbar\omega}=-\frac{2c_{0}}{\delta^{2}}+O\left(\delta^{2}\right).
\end{equation}
Interestingly, there is no constant term in the energy as $\delta\rightarrow0$.
\begin{figure}
\caption{The coefficient $c_{0}$ as a function of $\alpha$, determined from the self-consistent equation, Eq.~\eqref{eq:C0Eqn} (blue curve), and from the approximate formula, Eq.~\eqref{eq:C0Eqn2} (red curve).}
\label{fig:C0}
\end{figure}
In principle, Eq.~\eqref{eq:C0Eqn} can be numerically solved to determine $c_{0}$ for all $-1/4\leq\alpha<0$ and arbitrary $\delta\ll1$.
Once $c_{0}$ is deduced, the ground-state energy is then determined from Eq.~\eqref{eq:GSE}.
Nevertheless, it is preferable to determine a closed-form expression for $c_{0}$ as a function of $\alpha$, applicable in the limit $\delta\ll1$.
In Appendix~\ref{App:C0Sol} we perform such an analysis. The final result is
\begin{align}
\label{eq:C0Eqn2}
c_{0} & \approx \Biggl\{\left[1-\frac{2\sqrt{\frac{1}{4}+\alpha}}{\sqrt{\left|\alpha\right|}\tan\sqrt{\left|\alpha\right|}+\frac{1}{2}+\sqrt{\frac{1}{4}+\alpha}}\right] \nonumber\\
&\quad\times \frac{\Gamma\left(1+\sqrt{\frac{1}{4}+\alpha}\right)}{\Gamma\left(1-\sqrt{\frac{1}{4}+\alpha}\right)}
\Biggr\}^{\frac{1}{\sqrt{\frac{1}{4}+\alpha}}}.
\end{align}
This expression is valid provided $4c_{0}\ll1+\sqrt{\frac{1}{4}+\alpha}$.
In Fig.~\ref{fig:C0}, we plot $c_{0}$ as a function of $\alpha$ using both the self-consistent equation in Eq.~\eqref{eq:C0Eqn} (blue curve)
and the approximate formula in Eq.~\eqref{eq:C0Eqn2} (red curve).
As can be observed in this figure, the analytical result gives a very good approximation for nearly the entire range of values of $\alpha$.
It is only in the limiting case $\alpha\rightarrow-1/4$ that the approximate result deviates noticeably from the exact result.
As $\alpha\rightarrow-1/4$, Eq.~\eqref{eq:C0Eqn} gives $4c_{0}\approx0.0904$, whereas Eq.~\eqref{eq:C0Eqn2} gives $4c_{0}\approx0.0949$;
these values are not extremely small compared to unity, which is what is required for the assumption $4c_{0}\ll1+\sqrt{\frac{1}{4}+\alpha}$ to be valid.
For very small and negative $\alpha$ ($\alpha<0$ and $\left|\alpha\right|\ll1$), Eq.~\eqref{eq:C0Eqn2} reduces to
\begin{equation}
c_{0}\approx\left|\alpha\right|^{\frac{2}{\sqrt{1 - 4|\alpha|}}}.
\end{equation}
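The two determinations of $c_{0}$ can be compared numerically. The sketch below (assuming SciPy; \texttt{kv} is the modified Bessel function $K_{\lambda}$, and the helper \texttt{F} and the bracketing interval are ours) solves the self-consistent equation near $\alpha=-1/4$ and evaluates the closed-form approximation, reproducing the values $4c_{0}\approx0.0904$ and $4c_{0}\approx0.0949$ quoted above:

```python
# Sketch (assuming SciPy) comparing the self-consistent and approximate
# determinations of c0 near alpha = -1/4; "F" is our helper for the RHS.
import numpy as np
from scipy.special import kv, gamma
from scipy.optimize import brentq

alpha = -0.2499            # just above the critical value alpha = -1/4
s = np.sqrt(0.25 + alpha)
nu = 0.5 + s
r = np.sqrt(abs(alpha))

def F(c0):
    # right-hand side of the transcendental equation for c0
    w = np.sqrt(abs(alpha) - 4 * c0)
    ratio = kv(nu - 0.5, 2 * np.sqrt(c0)) / kv(nu + 0.5, 2 * np.sqrt(c0))
    return 0.25 * ((w * np.tan(w) + nu) * ratio) ** 2

c0_exact = brentq(lambda c: F(c) - c, 1e-4, 0.06)

# closed-form approximation, valid when 4*c0 << 1 + sqrt(1/4 + alpha)
bracket = (1 - 2 * s / (r * np.tan(r) + 0.5 + s)) * gamma(1 + s) / gamma(1 - s)
c0_approx = bracket ** (1 / s)

print(4 * c0_exact, 4 * c0_approx)  # approximately 0.0904 and 0.0949
```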
In Fig.~\ref{fig:GSEigs}, we provide a plot of the ground-state energy on a logarithmic scale as a function of $\alpha$ for a small,
negative range near zero, and for three different values of the regularization parameter $\delta$.
The ground-state energy decreases significantly with increasing $|\alpha|$ and with decreasing $\delta$.
The approximate result from Eqs.~\eqref{eq:GSE} and \eqref{eq:C0Eqn2} is also shown (in red), and it is indistinguishable from the exact result for most of the range shown.
\begin{figure}
\caption{The (even-parity) ground-state energy as a function of $\alpha$ for negative values of $\alpha$. The blue curves correspond to the exact energy determined by solving Eq.~\eqref{eq:EVCEven} for three values of $\delta$; the red curve shows the approximate result from Eqs.~\eqref{eq:GSE} and \eqref{eq:C0Eqn2}.}
\label{fig:GSEigs}
\end{figure}
\subsection{Ground-state wave function}
\label{sec:GSPsi}
The wave function for even-parity solutions is given in Eq.~\eqref{eq:PsiEven}.
To determine the form of $\psi$ in the limit $\kappa \rightarrow -\infty$ ($\left|\kappa\right|\rightarrow\infty$), we use the identity in Eq.~\eqref{eq:UAsymptote}, keeping only the term with $p_{0}$ as its coefficient.
Applying this identity to Eq.~\eqref{eq:PsiEven}, the wave function becomes
\begin{equation}
\psi\left(x\right) = N\lim_{\left|\kappa\right|\rightarrow\infty}\left|\frac{x}{x_{0}}\right|^{\frac{1}{2}}K_{\nu-\frac{1}{2}}\left(\sqrt{2\left|\kappa\right|}\left|\frac{x}{x_{0}}\right|\right).
\end{equation}
Here, $N$ denotes the normalization constant.
The asymptotic behaviour of the modified Bessel function of the second kind of order $\lambda$ is (see Eq.~(9.7.2) of Ref.~\cite{AbramowitzStegun}):
\begin{equation}
K_{\lambda}\left(z\right)\sim\sqrt{\frac{\pi}{2z}}e^{-z},\quad z\rightarrow\infty.
\end{equation}
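As a quick numerical check of this asymptotic form (a sketch assuming SciPy's \texttt{kv}): for half-integer orders it is exact up to an elementary polynomial factor; in particular $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$ exactly, while $K_{3/2}(z)$ carries a $(1+1/z)$ correction:

```python
# Quick check (assuming SciPy's kv) of the large-z asymptotic form of K_lambda;
# lambda = 1/2 is exact, and lambda = 3/2 carries an exact (1 + 1/z) factor.
import numpy as np
from scipy.special import kv

def k_asym(z):
    # the asymptotic form sqrt(pi/(2z)) * exp(-z)
    return np.sqrt(np.pi / (2 * z)) * np.exp(-z)

z = 5.0
print(kv(0.5, z) / k_asym(z))  # exactly 1
print(kv(1.5, z) / k_asym(z))  # 1 + 1/z = 1.2
```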
Thus, we have
\begin{equation}
\psi\left(x\right)=\lim_{\left|\kappa\right|\rightarrow\infty}Ne^{-\sqrt{2\left|\kappa\right|}\left|x\right|/x_{0}}.
\end{equation}
The normalization constant is determined as usual:
\begin{equation}
1 = \int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2} = \frac{N^{2}x_{0}}{\sqrt{2\left|\kappa\right|}}.
\end{equation}
\begin{figure}
\caption{The ground-state wave function for $\alpha=-0.1$. The blue curves are obtained using the exact result in Eq.~\eqref{eq:PsiEven} for several values of $\delta$; the red curve shows the limiting wave function in Eq.~\eqref{eq:GSWavefunction}.}
\label{fig:PsiGS}
\end{figure}
Thus, the normalized ground-state wave function (in the limit $\delta\ll1$) is
\begin{equation}
\label{eq:GSWavefunction}
\psi\left(x\right)=\lim_{\left|\kappa\right|\rightarrow\infty}\left(\frac{2\left|\kappa\right|}{x_{0}^{2}}\right)^{\frac{1}{4}}e^{-\sqrt{2\left|\kappa\right|}\left|x\right|/x_{0}}.
\end{equation}
Interestingly, this ground-state wave function has the same functional form as that of the 1D hydrogen atom~\cite{Boyack2021}.
Indeed, if we replace the length scale $x_{0}$ by the Bohr radius $a_{0}$, and replace $\sqrt{2|\kappa|}$ by the parameter $1/\beta$, where the condition $|\kappa|\rightarrow\infty$ now becomes $\beta\rightarrow0$, then we recover the ground-state wave function for the 1D hydrogen atom~\cite{Boyack2021}.
Notice that the probability density limits to a Dirac delta function:
\begin{equation}
\label{eq:GSProbDensity}
\left|\psi\left(x\right)\right|^{2}=\lim_{\left|\kappa\right|\rightarrow\infty}\sqrt{\frac{2\left|\kappa\right|}{x_{0}^{2}}}e^{-2\sqrt{2\left|\kappa\right|}\left|x\right|/x_{0}}=\delta\left(x\right).
\end{equation}
Here we used the definition
\begin{equation}
\delta\left(x\right)=\lim_{\epsilon\rightarrow0}\frac{1}{\sqrt{\epsilon}}e^{-2\left|x\right|/\sqrt{\epsilon}}.
\end{equation}
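The normalization and the delta-function limit above are easy to probe numerically. The following Python snippet (NumPy/SciPy; a quick sanity check, not part of the original analysis) verifies that $|\psi|^{2}$ integrates to unity and that its weight concentrates near the origin as $|\kappa|$ grows:

```python
import numpy as np
from scipy.integrate import quad

def prob_density(x, kappa_abs, x0=1.0):
    """|psi(x)|^2 = sqrt(2|kappa|)/x0 * exp(-2 sqrt(2|kappa|)|x|/x0)."""
    s = np.sqrt(2.0 * kappa_abs)
    return (s / x0) * np.exp(-2.0 * s * np.abs(x) / x0)

for kappa_abs in (1e2, 1e4):
    # total weight (use symmetry; the tail beyond x = x0 is negligible here)
    norm = 2.0 * quad(prob_density, 0.0, 1.0, args=(kappa_abs,))[0]
    # weight inside a small fixed window |x| < 0.01*x0 grows towards 1
    win = 2.0 * quad(prob_density, 0.0, 0.01, args=(kappa_abs,))[0]
    print(kappa_abs, norm, win)
```

The integrated weight inside any fixed window approaches one as $|\kappa|\rightarrow\infty$, consistent with the delta-function limit.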
In Fig.~\ref{fig:PsiGS}, we plot the exact ground-state wave function obtained using Eq.~\eqref{eq:PsiEven} for various values of $\delta$, and we also plot the limiting wave function Eq.~\eqref{eq:GSWavefunction}.
There is good agreement between the exact result (blue) and the approximate wave function (red) for $\delta$ very small.
\section{Regularized potential case ii: $\alpha>0$}
\label{sec:RegSE2}
For $\alpha>0$, the minimum of the potential is now $V_{\text{min}}>0$.
Thus, the range of permissible energies is $0<E<\infty$.
As a result, the state that has infinite negative energy in the case $-1/4\leq\alpha<0$ will now have a finite and positive energy for $\alpha>0$.
We now investigate the odd and even-parity eigenfunctions as in the previous sections.
\subsection{Odd-parity solutions}
\label{sec:RegSE2OddPosAlpha}
Since $\alpha>0$, the potential $\widetilde{V}_{0}\rightarrow+\infty$ as $\delta\rightarrow0$.
Thus, we define $k^{2}=\frac{2m}{\hbar^{2}}\left(\widetilde{V}_{0}-E\right)$, where $E$ is defined in Eq.~\eqref{eq:EandyDefs}.
Using the definitions of $k$ and $E$, the quantity $k\delta x_{0}$ can be expressed in terms of $\kappa$ as follows:
\begin{equation}
\left(k\delta x_{0}\right)^{2}=\left(\delta^{4}+\alpha\right)-\left(2\kappa+1\right)\delta^{2}.
\end{equation}
In region I, we have
\begin{equation}
\label{eq:SERegionI_Eq_PosAlpha}
\psi^{\prime\prime}\left(x\right)-k^{2}\psi\left(x\right)=0.
\end{equation}
The solutions are
\begin{equation}
\label{eq:SERegionI_Sol_PosAlpha}
\psi_{\text{I}}\left(x\right)=A_{\text{I}}\cosh\left(kx\right)+B_{\text{I}}\sinh\left(kx\right).
\end{equation}
Let us first consider the odd-parity solutions: $A_{\text{I}}=0$.
In region II the Schr\"{o}dinger equation is the same as in the previous section.
The wave function is then
\begin{equation}
\psi\left(x\right)=\left\{ \begin{array}{c}
\qquad \qquad B_{\text{I}}\sinh\left(kx\right),\hspace{14mm} x\leq\delta x_{0}\\
B_{\text{II}}y^{\nu}e^{-\frac{1}{2}y^{2}}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^{2}\right),\ x\geq\delta x_{0}.
\end{array}\right.\label{eq:PsiOdd_PosAlpha}
\end{equation}
For $\alpha>0$, $\nu$ is again taken as in Eq.~\eqref{eq:Nu}, which means that $\nu>1$. We again define $\kappa$ as in Eq.~\eqref{eq:KEqn}.
The energy eigenvalues are determined from the continuity of $\psi$ at $x=\delta x_{0}$, which gives
\begin{equation}
\label{eq:EVCOdd_PosAlpha}
k\delta x_{0}\coth\left(k\delta x_{0}\right)=\delta^{2}-\kappa-1-2\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}.
\end{equation}
As $\delta\rightarrow0$, $\left(k\delta x_{0}\right)^{2}\rightarrow\alpha$.
Thus, for the odd-parity solutions, we now have
$k\delta x_{0}\coth\left(k\delta x_{0}\right)\rightarrow\sqrt{\alpha}\coth\sqrt{\alpha}$.
The correction term $\epsilon_{n}$ can be determined by following the analogous derivation given in Appendix~\ref{App:OddError} for the case $-1/4\leq\alpha<0$.
The only difference is the replacement of the $\cot$ function by the $\coth$ function. Thus, the final result is
\begin{align}
\label{eq:OddError2}
\epsilon_{n}&=\left(\frac{\nu-\sqrt{\alpha}\coth\sqrt{\alpha}}{\nu-1+\sqrt{\alpha}\coth\sqrt{\alpha}}\right)\nonumber\\
&\quad\times \frac{\left(-1\right)^{n}}{\Gamma\left(\frac{1}{2}-\nu-n\right)n!}\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{align}
Since $\nu>1/2$, $\epsilon_{n}\rightarrow0$ as $\delta\rightarrow0$.
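The eigenvalue condition Eq.~\eqref{eq:EVCOdd_PosAlpha} and the correction Eq.~\eqref{eq:OddError2} can be checked against each other numerically. The sketch below (Python with mpmath; an illustration, not the authors' code) solves the exact condition for $\kappa$ and compares the extracted $\epsilon_{n}=\frac{1}{2}\left(\kappa-\nu\right)-n$ with the analytic formula. It assumes that Eq.~\eqref{eq:Nu} corresponds to the standard choice $\nu=\frac{1}{2}\left(1+\sqrt{1+4\alpha}\right)$ for the $1/x^{2}$ interaction; the parameter values are arbitrary.

```python
import mpmath as mp

mp.mp.dps = 30

def nu_of(alpha):
    # assumed form of Eq. (eq:Nu): nu = (1 + sqrt(1 + 4*alpha))/2
    return (1 + mp.sqrt(1 + 4*alpha)) / 2

def eps_exact(n, alpha, delta):
    """Extract eps_n by solving the exact odd-parity eigenvalue condition."""
    nu = nu_of(alpha)
    b = nu + mp.mpf(1) / 2

    def F(kappa):
        # (k*delta*x0)^2 = (delta^4 + alpha) - (2*kappa + 1)*delta^2
        kdx = mp.sqrt((delta**4 + alpha) - (2*kappa + 1) * delta**2)
        a = (nu - kappa) / 2
        rhs = (delta**2 - kappa - 1
               - 2 * mp.hyperu(a - 1, b, delta**2) / mp.hyperu(a, b, delta**2))
        return kdx * mp.coth(kdx) - rhs

    k0 = 2 * n + nu  # kappa in the delta -> 0 limit
    kappa = mp.findroot(F, (k0 - mp.mpf('0.05'), k0 + mp.mpf('0.05')),
                        solver='anderson')
    return (kappa - nu) / 2 - n

def eps_approx(n, alpha, delta):
    """Leading correction, Eq. (eq:OddError2)."""
    nu = nu_of(alpha)
    sa = mp.sqrt(alpha)
    pref = (nu - sa * mp.coth(sa)) / (nu - 1 + sa * mp.coth(sa))
    return (pref * (-1)**n / (mp.gamma(mp.mpf(1)/2 - nu - n) * mp.factorial(n))
            * mp.gamma(mp.mpf(3)/2 - nu) / mp.gamma(nu + mp.mpf(1)/2)
            * delta**(2 * nu - 1))

for n in (0, 1):
    print(n, mp.nstr(eps_exact(n, mp.mpf('0.05'), mp.mpf('0.02')), 6),
             mp.nstr(eps_approx(n, mp.mpf('0.05'), mp.mpf('0.02')), 6))
```

For small $\delta$ the two values agree to within a few percent, as expected from the neglected $O(\delta^{2})$ terms.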
Let us now turn to the wave function for odd-parity states, given in Eq.~\eqref{eq:PsiOdd_PosAlpha}.
Continuity of $\psi$ at $x=\delta x_{0}$ imposes the condition
\begin{equation}
\label{eq:BIIOdd_PosAlpha}
B_{\text{I}}\sinh\left(k\delta x_{0}\right)=B_{\text{II}}\delta^{\nu}e^{-\delta^{2}/2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right).
\end{equation}
Normalization of $\psi$ requires $\int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2}=1$.
After inserting Eq.~\eqref{eq:BIIOdd_PosAlpha} into Eq.~\eqref{eq:PsiOdd_PosAlpha}, then performing the normalization integral and solving for $B^{2}_{\text{I}}$, we obtain
\begin{align}
B^{2}_{\text{I}}&=\frac{1}{2\delta x_{0}}\Biggl\{ \int_{0}^{1}dy\sinh^{2}\left(k\delta x_{0}y\right)+\sinh^{2}\left(k\delta x_{0}\right)\int_{1}^{\infty}dyy^{2\nu}\nonumber\\
&\quad\times e^{-\delta^{2}\left(y^{2}-1\right)}\left[\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}y^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right]^{2}\Biggr\}^{-1}.
\end{align}
The first few odd-parity wave functions are shown in Fig.~\ref{fig:RegPsiOdd}. In the next section we investigate the even-parity solutions.
\subsection{Even-parity solutions}
\label{sec:RegSE2EvenPosAlpha}
As mentioned at the start of this section, there is no infinite negative energy state for $\alpha>0$.
The Schr\"{o}dinger equation in region I is given in Eq.~\eqref{eq:SERegionI_Eq_PosAlpha}, with the general solution given in Eq.~\eqref{eq:SERegionI_Sol_PosAlpha}.
For even-parity solutions we set $B_{\text{I}}=0$. The wave function is then
\begin{equation}
\label{eq:PsiEven_PosAlpha}
\psi\left(x\right)=\left\{ \begin{array}{c}
\qquad \qquad A_{\text{I}}\cosh\left(kx\right),\hspace{13mm} x\leq\delta x_{0}\\
B_{\text{II}}y^{\nu}e^{-\frac{1}{2}y^{2}}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},y^{2}\right),\ x\geq\delta x_{0}.
\end{array}\right.
\end{equation}
The eigenvalue condition is determined by requiring continuity of $\psi^{\prime}/\psi$ at $x=\delta x_{0}$.
The final result, in contrast to Eq.~\eqref{eq:EVCOdd_PosAlpha} for the odd-parity solutions, is given by
\begin{equation}
k\delta x_{0}\tanh\left(k\delta x_{0}\right)=\delta^{2}-\kappa-1-2\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}.
\label{eq:EVCEven_PosAlpha}
\end{equation}
The correction term $\epsilon_{n}$ can be determined by following the analogous derivation given in Appendix~\ref{App:OddError} for the case $-1/4\leq\alpha<0$.
The only difference is the replacement of the $\cot$ function by the $\tanh$ function. Thus, the final result is
\begin{align}
\label{eq:EvenError2}
\epsilon_{n}&=\left(\frac{\nu-\sqrt{\alpha}\tanh\sqrt{\alpha}}{\nu-1+\sqrt{\alpha}\tanh\sqrt{\alpha}}\right)\nonumber\\
&\quad\times\frac{\left(-1\right)^{n}}{\Gamma\left(\frac{1}{2}-\nu-n\right)n!}\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{align}
In Fig.~\ref{fig:EnergyCorrPosAlpha}, we compare the exact energy eigenvalues (shown in blue) computed using Eqs.~\eqref{eq:EVCOdd_PosAlpha} and \eqref{eq:EVCEven_PosAlpha}
for odd and even solutions respectively, against those determined using Eqs.~\eqref{eq:KEqn}, \eqref{eq:OddError2}, and \eqref{eq:EvenError2} (shown in red) for small values of $\delta$ and $\alpha=0.05$.
The results are in very good agreement, as they were for $\alpha<0$.
\begin{figure}
\caption{Exact energy eigenvalues for the even and odd solutions (blue) versus the approximate energy eigenvalues (red).
We have made the red curve thicker so that it is visible, since the energies are extremely close, particularly in the case of the odd solutions. Here $\alpha=0.05$.}
\label{fig:EnergyCorrPosAlpha}
\end{figure}
Let us now turn to the wave function for even-parity states, given in Eq.~\eqref{eq:PsiEven_PosAlpha}.
Continuity of $\psi$ at $x=\delta x_{0}$ imposes the condition
\begin{equation}
\label{eq:AIEven_PosAlpha}
A_{\text{I}}\cosh\left(k\delta x_{0}\right)=B_{\text{II}}\delta^{\nu}e^{-\delta^{2}/2}U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right).
\end{equation}
Normalization of $\psi$ requires $\int_{-\infty}^{\infty}dx\left|\psi\left(x\right)\right|^{2}=1$.
After inserting Eq.~\eqref{eq:AIEven_PosAlpha} into Eq.~\eqref{eq:PsiEven_PosAlpha}, then performing the normalization integral and solving for $A^{2}_{\text{I}}$, we obtain
\begin{align}
A^{2}_{\text{I}}&=\frac{1}{2\delta x_{0}}\Biggl\{ \int_{0}^{1}dy\cosh^{2}(k\delta x_{0}y)+\cosh^{2}(k\delta x_{0})\int_{1}^{\infty}dyy^{2\nu}\nonumber\\
&\quad\times e^{-\delta^{2}\left(y^{2}-1\right)}\left[\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\left(\delta y\right)^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\right]^{2}\Biggr\} ^{-1}.
\end{align}
The wave functions are shown in Fig.~\ref{fig:RegPsiEven}.
Note that these results again look very much like their counterparts with $\alpha = -0.1$ (also shown in Fig.~\ref{fig:RegPsiEven}).
Indeed, both odd and even-parity wave functions will converge towards the unregularized solutions shown in
Figs.~\ref{fig:UnregPsiOdd} and \ref{fig:UnregPsiEven}, as $\delta$ is taken smaller and smaller.
The sign of $\alpha$ becomes immaterial.
For $\alpha > 0$ these states are excluded from the barrier region by the barrier itself. For $\alpha < 0$ the same
set of states are excluded from this same region by the pseudopotential barrier~\cite{Ibrahim2018} created by the ground-state wave function.
The remarkable result here is that even the unregularized potential with a negative value of $\alpha$, for which {\it no negative-energy ground state exists}, displays the same behaviour. The higher-energy solutions in the unregularized case appear to know of the presence
of a state with negative (and infinite!) energy.
\section{Matrix mechanics method for the regularized potential}
\label{sec:matrix_mechanics}
As an additional check of our analytical work on the regularized potential, it is possible to formulate a solution in terms of matrix mechanics~\cite{Marsiglio2009,Nguyen2020}.
We proceed by embedding the potential given by Eq.~(\ref{eq:Potential1})
in an infinite square well potential (ISW) of width $a$, with $V_{\rm ISW} = 0$ for $0< x < a$ and infinite otherwise.
This domain is chosen so that we can use a convenient basis set given by
\begin{equation}
\phi_{n}(x) = \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi x}{a}\right), \quad n = 1,2,3,\dots .
\label{basis}
\end{equation}
To make the potential symmetric, we need to shift the potential as well, so that Eq.~(\ref{eq:Potential1}), when regularized, becomes
\begin{equation}
V\left(x\right)=\left\{ \begin{array}{c}
\frac{1}{2}m\omega^{2}\left(x-{a \over 2}\right)^{2}+\frac{\hbar^{2}}{2m}\frac{\alpha}{\left(x-{a\over 2}\right)^{2}},\quad \left|x - {a \over 2}\right| > \epsilon {a \over 2},\\
\frac{1}{2}m\omega^{2}\left(\epsilon{a \over 2}\right)^{2}+\frac{\hbar^{2}}{2m}\frac{\alpha}{(\epsilon{a\over 2})^{2}}, \quad |x - {a \over 2}| < \epsilon {a \over 2}.
\end{array}\right.
\label{eq:regularized_potential_shifted}
\end{equation}
The dimensionless constant $\epsilon$ provides the cutoff; below this cutoff, the potential is replaced by a constant,
$V_\epsilon \equiv \frac{1}{2}m\omega^{2}(\epsilon{a \over 2})^{2}+\frac{\hbar^{2}}{2m}\frac{\alpha}{\left(\epsilon{a\over 2}\right)^{2}}$,
as given in the second line in Eq.~(\ref{eq:regularized_potential_shifted}).
This dimensionless cutoff is related to the cutoff $\delta$, first introduced in Sec.~\ref{sec:RegSE1}, by
\begin{equation}
\epsilon = {2 \over \pi}\sqrt{2 \over \rho}\delta.
\label{eq:fact1}
\end{equation}
The two length scales and the two energy scales are related by
\begin{equation}
{a \over x_0}= \pi \sqrt{\rho \over 2} \ \ \ {\rm and} \ \ \rho \equiv {\hbar \omega \over E_1^{(0)}} \ \ \ {\rm with} \ \ E_1^{(0)} = {\hbar^2 \pi^2 \over 2 m a^2}.
\label{eq:fact2}
\end{equation}
Using the wave function expansion
\begin{equation}
\psi(x) = \sum_{m=1}^{N_{\rm max}} c_m \phi_m(x),
\label{eq:basis_expansion}
\end{equation}
the usual matrix formulation \cite{Marsiglio2009} results in the matrix equation for the unknown eigenvalue $E$ and eigenvector coefficients $c_n$:
\begin{equation}
\sum_{m=1}^{N_{\rm max}} H_{nm} c_{m} = E c_{n}.
\label{eq:mat_eq}
\end{equation}
Note that care is required to make $N_{\rm max}$ sufficiently large to ensure that errors from the truncated expansion are negligible,
and to make the infinite square well width $a$ large enough that none of our results are affected by its presence.
In practice, the results must be examined as a function of both parameters, $a$ and $N_{\rm max}$, until convergence is achieved.
The Hamiltonian matrix is divided into three pieces, $H_{nm} = H^{\rm kin}_{nm} + V^{\rm ext}_{nm} + V^{\rm con}_{nm}$: the kinetic term, the $x$-dependent potential [first line of Eq.~(\ref{eq:regularized_potential_shifted})],
and the constant potential [second line of Eq.~(\ref{eq:regularized_potential_shifted})], respectively.
Note that, since the potential is even, only matrix elements for which $n \pm m$ is even are nonzero. We define $p \equiv \pi (n \pm m)$ and ${\rm sinc}(x) \equiv \sin{(x)}/x$.
We quote the results in units of $E_1^{(0)}$ and use $v_\epsilon \equiv V_\epsilon/E_1^{(0)}$:
\begin{align}
\frac{H^{\text{kin}}_{nm}}{E_{1}^{(0)}} &= \delta_{nm} n^2, \\
\frac{V^{\text{con}}_{nm}}{E_1^{(0)}} &= \epsilon v_{\epsilon} \Biggl\{\delta_{nm} \left[1 - \left(-1\right)^n \text{sinc}(\pi n \epsilon)\right] \nonumber\\
&\quad + \left(1-\delta_{nm}\right) \left[g_{\epsilon}\left(n-m\right) - g_{\epsilon}\left(n+m\right) \right] \Biggr\}, \\
\frac{V^{\text{ext}}_{nm}}{E_1^{(0)}} &= \left(1 + (-1)^{n+m} \right) \Biggl\{\biggl[\delta_{nm} \left(\frac{\left(1 - \epsilon^3\right)}{24} - h_{\epsilon}\left(2n\right)\right) \nonumber\\
&\quad + \left(1-\delta_{nm}\right)\left[h_\epsilon(n-m) - h_\epsilon(n+m) \right] \biggr]\frac{\pi^2 \rho^2}{4} \nonumber \\
&\quad + {\alpha \over \pi^2} \left[k_{\epsilon}\left(n-m\right) - k_{\epsilon}\left(n+m\right)\right] \Biggr\}.
\end{align}
The quantities $g_{\epsilon}\left(n\pm m\right), h_{\epsilon}\left(n\pm m\right), k_{\epsilon}\left(n\pm m\right),$ and $\ell_{\epsilon}\left(n\pm m\right)$ are defined by
\begin{align}
g_{\epsilon}\left(n\pm m\right) & \equiv \cos\left(\frac{p}{2}\right){\rm sinc}\left(\frac{p \epsilon}{2}\right), \\
h_{\epsilon} \left(n\pm m\right) & \equiv \cos\left(\frac{p}{2}\right)\biggl[{2 \over p^3} \sin\left(\frac{p\epsilon}{2}\right) + \frac{1}{p^2}\biggl(\cos{\left(\frac{p}{2}\right)} \nonumber\\
&\quad - \epsilon \cos\left(\frac{p\epsilon }{2}\right) \biggr) - \frac{\epsilon^2}{4p} \sin\left(\frac{p\epsilon}{2}\right) \biggr], \\
k_{\epsilon} \left(n\pm m\right) & \equiv \cos\left(\frac{p}{2}\right) \left[{2 \over \epsilon}\left(1-\epsilon\right) - \ell_\epsilon \left(n\pm m\right) \right], \\
\ell_{\epsilon} \left(n\pm m\right) & \equiv \int_{\epsilon/2}^{1/2} dx {1 - \cos{\left[\pi \left(n \pm m\right)x\right]} \over x^2} \nonumber \\
&= {2 \over \epsilon}\left[ 1 - \cos{\left(\frac{p\epsilon}{2}\right)} \right] - 2 \left[ 1 - \cos{\left(\frac{p}{2}\right)} \right] \nonumber \\
&\quad + p\left[ {\rm Si}\left(\frac{p}{2} \right) - {\rm Si}\left(\frac{p \epsilon }{2} \right) \right]. \label{eq:elleps}
\end{align}
Here, the Sine Integral~\cite{AbramowitzStegun} is defined by
\begin{equation}
{\rm Si}(z) \equiv \int_0^z dt {\sin{t} \over t}.
\label{sine}
\end{equation}
Note that all these quantities are well-defined, but as $\epsilon \rightarrow 0$ [i.e., $\delta \rightarrow 0$ -- see Eq.~(\ref{eq:fact1})] the matrix elements become singular.
The matrix equation Eq.~(\ref{eq:mat_eq}) can be solved by computer for the eigenvalues and eigenvectors.
The latter can then be used in Eq.~(\ref{eq:basis_expansion}) to compute the wave functions in real space.
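The assembly just described is straightforward to implement. The sketch below (Python with NumPy/SciPy; a minimal illustration, not the authors' code) builds $H_{nm}/E_{1}^{(0)}$ from the matrix elements above and diagonalizes it; the function names and the $\alpha=0$ sanity check are ours. In that limit, with a small cutoff and a box wide compared to the oscillator length, the low-lying levels should approach $\left(n+\frac{1}{2}\right)\hbar\omega$.

```python
import numpy as np
from scipy.special import sici

def Si(z):
    """Sine integral Si(z)."""
    return sici(z)[0]

def sinc(x):
    """sin(x)/x with sinc(0) = 1 (np.sinc is the normalized version)."""
    return np.sinc(x / np.pi)

def ell(p, eps):
    if p == 0.0:
        return 0.0
    return ((2/eps)*(1 - np.cos(p*eps/2)) - 2*(1 - np.cos(p/2))
            + p*(Si(p/2) - Si(p*eps/2)))

def g(p, eps):
    return np.cos(p/2) * sinc(p*eps/2)

def h(p, eps):
    return np.cos(p/2) * ((2/p**3)*np.sin(p*eps/2)
                          + (np.cos(p/2) - eps*np.cos(p*eps/2))/p**2
                          - (eps**2/(4*p))*np.sin(p*eps/2))

def k(p, eps):
    return np.cos(p/2) * ((2/eps)*(1 - eps) - ell(p, eps))

def hamiltonian(Nmax, eps, rho, alpha):
    """H_{nm}/E_1^{(0)} for the shifted, regularized potential."""
    # constant potential V_eps in the cutoff region, in units of E_1^{(0)}
    veps = (np.pi**2 * rho**2 / 16)*eps**2 + 4*alpha/(np.pi**2 * eps**2)
    H = np.zeros((Nmax, Nmax))
    for n in range(1, Nmax + 1):
        H[n-1, n-1] = (n**2
            + eps*veps*(1 - (-1)**n * sinc(np.pi*n*eps))
            + 2*((np.pi**2*rho**2/4)*((1 - eps**3)/24 - h(2*np.pi*n, eps))
                 + (alpha/np.pi**2)*(k(0.0, eps) - k(2*np.pi*n, eps))))
        for m in range(n + 2, Nmax + 1, 2):   # only even n +- m contributes
            pm, pp = np.pi*(n - m), np.pi*(n + m)
            V = (eps*veps*(g(pm, eps) - g(pp, eps))
                 + 2*((np.pi**2*rho**2/4)*(h(pm, eps) - h(pp, eps))
                      + (alpha/np.pi**2)*(k(pm, eps) - k(pp, eps))))
            H[n-1, m-1] = H[m-1, n-1] = V
    return H

# example: alpha = 0, small cutoff; E/(hbar*omega) = (E/E_1)/rho
rho = 50.0
E = np.linalg.eigvalsh(hamiltonian(300, 0.01, rho, 0.0)) / rho
print(E[:3])  # lowest levels, in units of hbar*omega
```

Reproducing the ground-state energies in Table~\ref{tab:GSEnergy} requires the much larger $N_{\rm max}=10000$ quoted there.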
We can typically use $100 \times 100$ matrices, but as $\delta$ decreases and/or the magnitude of $\alpha$ increases, matrices of size $10000 \times 10000$ or larger are required to achieve convergence.
Moreover, with decreasing $\delta$ or increasing $|\alpha|$, Eq.~(\ref{eq:elleps}) becomes more difficult to evaluate accurately.
We should also emphasize that the length scale $a$ was introduced purely for convenience, and the results should not depend on this quantity.
The numerical procedure is also more accessible to readers unfamiliar with the properties of the confluent hypergeometric functions.
As a comparison of the numerical approach of this section versus the approach of the previous sections based on hypergeometric functions,
we compute the ground-state energy for fixed $\delta=0.002$ and a range of negative values of $\alpha$.
The results are shown in Table~\ref{tab:GSEnergy}, for fixed $N_{\rm max} = 10000$ and a length $a$ given by $\rho = 50$ [see
Eq.~(\ref{eq:fact2})]. In practice, this value of $a$ is much larger than required for convergence of just the ground-state energy,
since the ground state is so confined near the origin. For example, for $\alpha = -0.05$ (final row of Table~\ref{tab:GSEnergy}),
with $\rho = 5$ and the same $N_{\rm max} = 10000$, we
obtain $E/(\hbar \omega) = -828.489881$. Comparison with the result attained by the use of Tricomi functions (third column)
shows that we can approach the analytical result with arbitrary precision, given sufficient computer power and memory.
\begin{widetext}
\begin{table}[h]
\caption{Comparison between numerical results for the ground-state energy as a function of $\alpha$. Here $\delta=0.002$.
\label{tab:GSEnergy}}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$\alpha$ & Matrix mechanics: \eqref{eq:mat_eq} & Tricomi functions: \eqref{eq:EVCEven} & $-\frac{2c_{0}}{\delta^{2}}$ \eqref{eq:C0Eqn}, \eqref{eq:GSE} & $-\frac{2c_{0}}{\delta^{2}}$ \eqref{eq:GSE}, \eqref{eq:C0Eqn2} \tabularnewline
\hline
$-0.25$ & $-11294.85744903$ & $-11295.301683$ & $-11295.30170$ & $-11862.24636$ \tabularnewline
$-0.20$ & $-8056.57184873$ & $-8056.826663$ & $-8056.826665$ & $-8353.544755$ \tabularnewline
$-0.15$ & $-5149.73001990$ & $-5149.852852$ & $-5149.852880$ & $-5274.934465$ \tabularnewline
$-0.10$ & $-2679.68183517$ & $-2679.724708$ & $-2679.724730$ & $-2714.801773$ \tabularnewline
$-0.05$ & $-828.48321740$ & $-828.4898894$ & $-828.4900235$ & $-831.9802335$ \tabularnewline
\end{tabular}
\end{ruledtabular}
\end{table}
\end{widetext}
\section{Conclusion}
\label{sec:Conclusion}
In this paper we have performed a thorough analysis of the spectrum of the one-dimensional pseudoharmonic oscillator -- a simple harmonic oscillator in the presence of a $1/x^2$ interaction.
For the case where the potential is unregularized, we have shown that there are doubly-degenerate eigenfunctions when the interaction parameter $\alpha$ is positive, as was already known.
We have also shown that there are doubly-degenerate bound states in the region $-1/4\leq\alpha<0$.
We have also studied a regularized version of the pseudoharmonic oscillator, where the interaction is cut off near the origin.
For this regularized problem, we have again found even and odd-parity eigenfunctions, for $-1/4\leq\alpha<0$ and $\alpha>0$.
In contrast to the unregularized potential, we have shown that, for $-1/4\leq\alpha<0$, the regularized potential admits a ground-state solution whose energy becomes increasingly negative as the cutoff is reduced.
We have derived the analytical properties of this ground state and shown that its energy diverges as the inverse square of the cutoff, and that its probability density limits to a Dirac delta function.
The mathematical features of this solution are analogous to a similar ground state in the regularized one-dimensional hydrogen atom.
The similarity between the regularized and unregularized problems as the regularization parameter $\delta \rightarrow 0$ is, on the one hand, not surprising.
On the other hand, we do not find a negative energy solution for the unregularized problem with
$-1/4\leq\alpha<0$. Since it is this negative energy ground state that is responsible (through the pseudopotential effect) for
the properties of the positive-energy solutions, it is in many ways remarkable that the unregularized solutions are so similar to
the regularized solutions. The unregularized problem somehow appears to know about the infinite negative energy ground state.
One of the remaining areas that requires further investigation is the study of the regularized potential in the regime $\alpha<-1/4$.
Indeed, bound-state solutions for the regularized potential can be found for $\alpha<-1/4$, since the potential is always finite near the origin.
It would be desirable to study the behaviour of the ground-state solution with infinite negative energy as $\alpha$ becomes increasingly negative.
Since it is believed that the unregularized problem does not have finite negative energy bound-state solutions in the
regime $\alpha<-1/4$, it would be of interest to determine the properties of the eigenfunctions in the regularized problem as the cutoff approaches zero.
\section{Acknowledgments}
We thank J. Lekner for providing insightful comments on this problem. In addition, we also thank E. Dupuis, P. L. S. Lopes, and M. Protter for beneficial discussions.
R.B. was supported by D\'epartement de physique, Universit\'e de Montr\'eal. F.M., A.S., and A.B. were supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
\appendix
\numberwithin{equation}{section}
\numberwithin{figure}{section}
\section{Derivation of the correction term $\epsilon_{n}$ for odd-parity states where $-1/4\leq\alpha<0$}
\label{App:OddError}
\begin{widetext}
In this appendix we present the derivation of $\epsilon_{n}$ for the odd-parity states in the case where $-1/4\leq\alpha<0$.
The analysis is similar for $\alpha>0$ and for even-parity states, the only difference being the particular trigonometric or hyperbolic trigonometric functions used.
The Tricomi function $U(a,b,z)$ is defined in Eq.~(13.1.3) of Ref.~\cite{AbramowitzStegun}:
\begin{equation}
U\left(a,b,z\right)=\frac{\pi}{\sin\left(\pi b\right)}\biggl[\frac{M\left(a,b,z\right)}{\Gamma\left(1+a-b\right)\Gamma\left(b\right)}-z^{1-b}\frac{M\left(1+a-b,2-b,z\right)}{\Gamma\left(a\right)\Gamma\left(2-b\right)}\biggr].
\end{equation}
Here, $M(a,b,z)$ is the Kummer function (also known as the confluent hypergeometric function).
Therefore, we have
\begin{eqnarray}
\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)} & = &
\frac{\frac{M\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{\Gamma\left(\frac{-\nu-\kappa-1}{2}\right)\Gamma\left(\nu+\frac{1}{2}\right)}-\delta^{1-2\nu}\frac{M\left(\frac{-\nu-\kappa-1}{2},\frac{3}{2}-\nu,\delta^{2}\right)}{\Gamma\left(\frac{\nu-\kappa}{2}-1\right)\Gamma\left(\frac{3}{2}-\nu\right)}}{\frac{M\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}{\Gamma\left(\frac{1-\nu-\kappa}{2}\right)\Gamma\left(\nu+\frac{1}{2}\right)}-\delta^{1-2\nu}\frac{M\left(\frac{1-\nu-\kappa}{2},\frac{3}{2}-\nu,\delta^{2}\right)}{\Gamma\left(\frac{\nu-\kappa}{2}\right)\Gamma\left(\frac{3}{2}-\nu\right)}}.
\end{eqnarray}
We are interested in the limit $\delta\ll1$; thus, we use the series for the hypergeometric function, $M(a,b,z)=1+O(z)$, to approximate this expression as
\begin{equation}
\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\approx\frac{\frac{1}{\Gamma\left(-\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}-\frac{\delta^{1-2\nu}}{\Gamma\left(-\left(n+\epsilon_{n}\right)-1\right)\Gamma\left(\frac{3}{2}-\nu\right)}}{\frac{1}{\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}-\frac{\delta^{1-2\nu}}{\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)}}.
\end{equation}
By using the identities $\Gamma(-x)=-(1+x)\Gamma(-1-x)$ and $\Gamma\left(\frac{1}{2}-x\right)=-\left(\frac{1}{2}+x\right)\Gamma\left(-\frac{1}{2}-x\right)$, we can simplify the expression above to
\begin{align}
\frac{U\left(\frac{\nu-\kappa}{2}-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)} = -\frac{\left(\frac{\kappa+\nu+1}{2}\right)\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\left(\frac{\kappa-\nu+2}{2}\right)\Gamma\left(\frac{1-\nu-\kappa}{2}\right)\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1-\nu-\kappa}{2}\right)\Gamma\left(\nu+\frac{1}{2}\right)}.
\end{align}
Here we used $n+\nu+\epsilon_{n}=\frac{1}{2}\left(\kappa+\nu\right)$ and $n+\epsilon_{n}=\frac{1}{2}\left(\kappa-\nu\right)$.
Combining this result with Eqs.~\eqref{eq:EVCOdd}-\eqref{eq:KEqn}, the eigenvalue condition is then given by
\begin{align}
\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}&=\delta^{2}-\kappa-1-2\frac{U\left(-\left(n+\epsilon_{n}\right)-1,\nu+\frac{1}{2},\delta^{2}\right)}{U\left(-\left(n+\epsilon_{n}\right),\nu+\frac{1}{2},\delta^{2}\right)} \nonumber\\
& \approx -\kappa-1\nonumber \\
& \quad +2\left\{ \frac{\left(\frac{\kappa+\nu+1}{2}\right)\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\left(\frac{\kappa-\nu+2}{2}\right)\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}\right\} \nonumber \\
& = -\nu+\frac{2\nu\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)}.
\end{align}
Cross-multiplying these expressions, we obtain
\begin{align}
& \left(\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}+\nu\right)\left[\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right)\right]\nonumber\\
& = 2\nu\Gamma\left(-\left(n+\epsilon_{n}\right)\right)\Gamma\left(\frac{3}{2}-\nu\right)-\delta^{1-2\nu}\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)\Gamma\left(\nu+\frac{1}{2}\right).
\end{align}
Rearranging this equation gives
\begin{equation}
\frac{\Gamma\left(\frac{1}{2}-\left(n+\nu+\epsilon_{n}\right)\right)}{\Gamma\left(-\left(n+\epsilon_{n}\right)\right)}=\left(\frac{\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}-\nu}{\nu-1+\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}}\right)\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{equation}
The denominator on the left-hand side of this equation is $\Gamma(-n-\epsilon_{n})=-\pi\left(-1\right)^n/\left[\Gamma\left(n+\epsilon_{n}+1\right)\sin\left(\pi\epsilon_{n}\right)\right]$. In the limit $\epsilon_{n}\ll1$, this becomes $\Gamma(-n-\epsilon_{n})\rightarrow-\left(-1\right)^{n}/\left(n!\epsilon_{n}\right)$.
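This small-$\epsilon_{n}$ limit of the gamma function is easy to verify numerically; the following snippet (mpmath; a quick check rather than part of the derivation) compares $\Gamma(-n-\epsilon_{n})$ against $-\left(-1\right)^{n}/\left(n!\,\epsilon_{n}\right)$:

```python
import mpmath as mp

mp.mp.dps = 30

def gamma_refl(n, eps):
    """Gamma(-n - eps) and its small-eps limit -(-1)^n / (n! * eps)."""
    exact = mp.gamma(-n - eps)
    limit = -(-1)**n / (mp.factorial(n) * eps)
    return exact, limit

for n in (0, 1, 2, 5):
    exact, limit = gamma_refl(n, mp.mpf('1e-8'))
    print(n, mp.nstr(exact / limit, 10))
```

The ratio tends to one, with a relative error of order $\epsilon_{n}$.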
In the limit $\delta\ll1$, $\epsilon_{n}\ll1$ and thus we can use the previous result to simplify the equation above to
\begin{equation}
-\Gamma\left(\frac{1}{2}-\nu-n\right)\left(-1\right)^{n}n!\epsilon_{n}=\left(\frac{\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}-\nu}{\nu-1+\sqrt{\left|\alpha\right|}\cot\sqrt{\left|\alpha\right|}}\right)\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\nu+\frac{1}{2}\right)}\delta^{2\nu-1}.
\end{equation}
Solving this equation for $\epsilon_{n}$ gives the result in Eq.~\eqref{eq:OddError1} of the main text.
\section{Derivation of the coefficients $c_{0}$ and $c_{1}$}
\label{App:CCoeffs}
The energy eigenvalue equation in Eq.~\eqref{eq:EVCEven2} can be written as
\begin{equation}
\label{eq:EVCEven3}
q\delta x_{0}\tan\left(q\delta x_{0}\right) = -\delta^{2}-\nu+I,
\end{equation}
where $I$ is the ratio of the two Kummer functions. By using Eq.~\eqref{eq:UAsymptote}, the definitions $a=\frac{1}{2}\left(\nu-\kappa\right), b=\nu+\frac{1}{2}$, and $z=\delta^2$, along with the $p$ and $q$ coefficients in Eqs.~\eqref{eq:P0Coeff}-\eqref{eq:Q0Coeff}, we have
\begin{align}
I & = 2\delta^{2}\frac{U\left(\frac{\nu-\kappa}{2},\nu+\frac{3}{2},\delta^{2}\right)}{U\left(\frac{\nu-\kappa}{2},\nu+\frac{1}{2},\delta^{2}\right)}\nonumber \\
& = 2\delta^{2}\left(\frac{\delta^{2}}{a}\right)^{-\frac{1}{2}}\frac{\left\{K_{b}\left(2\sqrt{a\delta^{2}}\right)\left[p_{0}\left(b+1,\delta^{2}\right)+\frac{p_{1}\left(b+1,\delta^{2}\right)}{a}\right]+\sqrt{\frac{\delta^{2}}{a}}K_{b+1}\left(2\sqrt{a\delta^{2}}\right)q_{0}\left(b+1,\delta^{2}\right)\right\} }{\left\{ K_{b-1}\left(2\sqrt{a\delta^{2}}\right)\left[p_{0}\left(b,\delta^{2}\right)+\frac{p_{1}\left(b,\delta^{2}\right)}{a}\right]+\sqrt{\frac{\delta^{2}}{a}}K_{b}\left(2\sqrt{a\delta^{2}}\right)q_{0}\left(b,\delta^{2}\right)\right\} }\nonumber \\
& = 2\delta^{2}\left(\frac{\delta^{2}}{a}\right)^{-\frac{1}{2}}\frac{\left\{ K_{b}\left(2\sqrt{a\delta^{2}}\right)\left[1-\frac{b\left(b+1\right)}{2a}\right]+\sqrt{\frac{\delta^{2}}{a}}K_{b+1}\left(2\sqrt{a\delta^{2}}\right)\frac{b+1}{2}\right\} }{\left\{ K_{b-1}\left(2\sqrt{a\delta^{2}}\right)\left[1-\frac{b\left(b-1\right)}{2a}\right]+\sqrt{\frac{\delta^{2}}{a}}K_{b}\left(2\sqrt{a\delta^{2}}\right)\frac{b}{2}\right\} }.
\end{align}
The ansatz for $\kappa$ is given by
\begin{equation}
\kappa=-\frac{2c_{0}}{\delta^{2}}+c_{1}-\frac{1}{2}+O\left(\delta^2\right).
\end{equation}
Therefore,
\begin{align}
\left(\frac{\delta^{2}}{a}\right)^{-\frac{1}{2}} & = \sqrt{\frac{\nu-\kappa}{2\delta^{2}}}\nonumber \\
& = \sqrt{\frac{1}{\delta^{2}}\left(\frac{c_{0}}{\delta^{2}}+\frac{b-c_{1}}{2}\right)}\nonumber \\
& \approx \frac{\sqrt{c_{0}}}{\delta^{2}}+\frac{b-c_{1}}{4\sqrt{c_{0}}}.
\end{align}
Similarly, $\sqrt{\frac{\delta^{2}}{a}}\approx\frac{\delta^{2}}{\sqrt{c_{0}}}$.
The argument of the modified Bessel functions is
\begin{align}
\sqrt{a\delta^{2}} & = \sqrt{\left(\frac{\nu-\kappa}{2}\right)\delta^{2}}\nonumber \\
& = \sqrt{c_{0}+\frac{1}{2}\left(b-c_{1}\right)\delta^{2}}\nonumber \\
& \approx \sqrt{c_{0}}+\frac{b-c_{1}}{4\sqrt{c_{0}}}\delta^{2}.
\end{align}
The expansion of the modified Bessel functions, in powers of $\delta^{2}$, is given by
\begin{equation}
K_{\lambda}\left(2\sqrt{a\delta^{2}}\right)=K_{\lambda}\left(2\sqrt{c_{0}}+\frac{b-c_{1}}{2\sqrt{c_{0}}}\delta^{2}\right)=K_{\lambda}\left(2\sqrt{c_{0}}\right)+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}K_{\lambda}^{\prime}\left(2\sqrt{c_{0}}\right).
\end{equation}
Here, $\lambda$ denotes the (arbitrary) order of the modified Bessel function of the second kind. Finally, note that
\begin{align}
\frac{1}{a} & = \frac{2}{\nu-\kappa}\nonumber \\
& \approx \frac{\delta^{2}}{c_{0}}.
\end{align}
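The expansions above can be checked numerically. The snippet below (mpmath; the parameter values are arbitrary illustrative choices) confirms that the error of the first-order expansion of $K_{\lambda}\left(2\sqrt{a\delta^{2}}\right)$ scales as $\delta^{4}$, i.e., it drops by roughly a factor of $16$ when $\delta$ is halved:

```python
import mpmath as mp

mp.mp.dps = 30

def expansion_error(lam, c0, c1, b, delta):
    """Error of K_lam(2*sqrt(a*delta^2)) versus its O(delta^2) expansion,
    K_lam(2*sqrt(c0)) + (b - c1)/(2*sqrt(c0)) * delta^2 * K'_lam(2*sqrt(c0)),
    with a = c0/delta^2 + (b - c1)/2 from the ansatz for kappa."""
    a = c0/delta**2 + (b - c1)/2
    exact = mp.besselk(lam, 2*mp.sqrt(a)*delta)
    z0 = 2*mp.sqrt(c0)
    deriv = mp.diff(lambda t: mp.besselk(lam, t), z0)
    approx = mp.besselk(lam, z0) + (b - c1)/(2*mp.sqrt(c0))*delta**2*deriv
    return abs(exact - approx)

e1 = expansion_error(1.5, mp.mpf('0.3'), mp.mpf('0.2'), mp.mpf('1.4'), mp.mpf('0.05'))
e2 = expansion_error(1.5, mp.mpf('0.3'), mp.mpf('0.2'), mp.mpf('1.4'), mp.mpf('0.025'))
print(mp.nstr(e1, 5), mp.nstr(e2, 5), mp.nstr(e1/e2, 5))
```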
For convenience, we omit the arguments of the modified Bessel functions, which are all $2\sqrt{c_{0}}$.
Thus, the quantity $I$ is given by
\begin{align}
I & = 2\left[\sqrt{c_{0}}+\left(\frac{b-c_{1}}{4\sqrt{c_{0}}}\right)\delta^{2}\right]\frac{\left\{ \left[K_{b}+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}K_{b}^{\prime}\right]\left[1-\frac{b\left(b+1\right)}{2}\frac{\delta^{2}}{c_{0}}\right]+\frac{\delta^{2}}{\sqrt{c_{0}}}\left(\frac{b+1}{2}\right)K_{b+1}\right\} }{\left\{ \left[K_{b-1}+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}K_{b-1}^{\prime}\right]\left[1-\frac{b\left(b-1\right)}{2}\frac{\delta^{2}}{c_{0}}\right]+\frac{\delta^{2}}{\sqrt{c_{0}}}\frac{b}{2}K_{b}\right\} }\nonumber \\
& = 2\left[\sqrt{c_{0}}+\left(\frac{b-c_{1}}{4\sqrt{c_{0}}}\right)\delta^{2}\right]\frac{K_{b}}{K_{b-1}}\frac{\left\{ 1+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}\frac{K_{b}^{\prime}}{K_{b}}-\frac{b\left(b+1\right)}{2}\frac{\delta^{2}}{c_{0}}+\frac{\delta^{2}}{\sqrt{c_{0}}}\left(\frac{b+1}{2}\right)\frac{K_{b+1}}{K_{b}}\right\} }{\left\{ 1+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}\frac{K_{b-1}^{\prime}}{K_{b-1}}-\frac{b\left(b-1\right)}{2}\frac{\delta^{2}}{c_{0}}+\frac{\delta^{2}}{\sqrt{c_{0}}}\frac{b}{2}\frac{K_{b}}{K_{b-1}}\right\} }\nonumber \\
& = 2\left[\sqrt{c_{0}}+\left(\frac{b-c_{1}}{4\sqrt{c_{0}}}\right)\delta^{2}\right]\frac{K_{b}}{K_{b-1}}\left[1+\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}\frac{K_{b}^{\prime}}{K_{b}}-\frac{b\left(b+1\right)}{2}\frac{\delta^{2}}{c_{0}}+\frac{\delta^{2}}{\sqrt{c_{0}}}\left(\frac{b+1}{2}\right)\frac{K_{b+1}}{K_{b}}\right]\nonumber \\
& \quad\times\left[1-\left(\frac{b-c_{1}}{2\sqrt{c_{0}}}\right)\delta^{2}\frac{K_{b-1}^{\prime}}{K_{b-1}}+\frac{b\left(b-1\right)}{2}\frac{\delta^{2}}{c_{0}}-\frac{\delta^{2}}{\sqrt{c_{0}}}\frac{b}{2}\frac{K_{b}}{K_{b-1}}\right].
\end{align}
Thus, to $O\left(\delta^{2}\right)$, we have
\begin{align}
I & = 2\sqrt{c_{0}}\frac{K_{b}}{K_{b-1}}+\left(\frac{b-c_{1}}{2}\right)\delta^{2}\frac{K_{b}}{K_{b-1}}\left(\frac{1}{\sqrt{c_{0}}}+2\frac{K_{b}^{\prime}}{K_{b}}-2\frac{K_{b-1}^{\prime}}{K_{b-1}}\right)\nonumber \\
& \quad+2\delta^{2}\frac{K_{b}}{K_{b-1}}\left[-\frac{b\left(b+1\right)}{2\sqrt{c_{0}}}+\left(\frac{b+1}{2}\right)\frac{K_{b+1}}{K_{b}}+\frac{b\left(b-1\right)}{2\sqrt{c_{0}}}-\frac{b}{2}\frac{K_{b}}{K_{b-1}}\right].
\end{align}
The self-consistent equation in Eq.~\eqref{eq:EVCEven3} now becomes
\begin{align}
\label{eq:EVCEven4}
q\delta x_{0}\tan\left(q\delta x_{0}\right)+\nu & = -\delta^{2}+I\nonumber \\
& = 2\sqrt{c_{0}}\frac{K_{b}}{K_{b-1}} -\delta^{2}+\left(\frac{b-c_{1}}{2}\right)\delta^{2}\frac{K_{b}}{K_{b-1}}\left(\frac{1}{\sqrt{c_{0}}}+2\frac{K_{b}^{\prime}}{K_{b}}-2\frac{K_{b-1}^{\prime}}{K_{b-1}}\right)\nonumber \\
& \quad +2\delta^{2}\frac{K_{b}}{K_{b-1}}\left[-\frac{b\left(b+1\right)}{2\sqrt{c_{0}}}+\left(\frac{b+1}{2}\right)\frac{K_{b+1}}{K_{b}}+\frac{b\left(b-1\right)}{2\sqrt{c_{0}}}-\frac{b}{2}\frac{K_{b}}{K_{b-1}}\right].
\end{align}
The argument of the tangent function is determined from
\begin{align}
q\delta x_{0} &= \sqrt{\left(2\kappa+1\right)\delta^{2}-\left(\delta^{4}+\alpha\right)} \nonumber\\
& = \sqrt{\left|\alpha\right|-4c_{0}+2c_{1}\delta^{2}} \nonumber\\
& \approx\sqrt{\left|\alpha\right|-4c_{0}}+\frac{c_{1}\delta^{2}}{\sqrt{\left|\alpha\right|-4c_{0}}}.
\end{align}
Therefore, the left-hand side of Eq.~\eqref{eq:EVCEven4} is
\begin{align}
q\delta x_{0}\tan\left(q\delta x_{0}\right)+\nu &= \sqrt{\left|\alpha\right|-4c_{0}}\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\nu\nonumber\\
&\quad+\frac{c_{1}\delta^{2}}{\sqrt{\left|\alpha\right|-4c_{0}}}\left[\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\sqrt{\left|\alpha\right|-4c_{0}}\sec^{2}\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)\right].
\end{align}
Solving Eq.~\eqref{eq:EVCEven4} to $O\left(\delta^{0}\right)$ gives the following
equation
\begin{equation}
2\sqrt{c_{0}}\frac{K_{b}}{K_{b-1}}=\sqrt{\left|\alpha\right|-4c_{0}}\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\nu.
\end{equation}
Rearranging this expression gives the result in Eq.~\eqref{eq:C0Eqn} of the main text.
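As a cross-check, the $O(\delta^{0})$ condition can be solved numerically. The sketch below uses illustrative values $\nu=0.8$, $\left|\alpha\right|=1$ (our own choice, not taken from the text) with $b=\nu+\frac{1}{2}$, evaluates $K_{\lambda}$ through its integral representation, and locates $c_{0}$ by bisection:

```python
import math

# Illustrative values (our own assumption, not from the text).
NU, ABS_ALPHA = 0.8, 1.0
B = NU + 0.5  # order b = nu + 1/2

def bessel_k(lam, z, tmax=30.0, n=8000):
    # K_lam(z) = \int_0^infty exp(-z cosh t) cosh(lam t) dt, trapezoid rule.
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(lam * t)
    return s * h

def residual(c0):
    # 2 sqrt(c0) K_b/K_{b-1} - [sqrt(|alpha|-4c0) tan(sqrt(|alpha|-4c0)) + nu]
    z = 2.0 * math.sqrt(c0)
    u = math.sqrt(ABS_ALPHA - 4.0 * c0)
    return z * bessel_k(B, z) / bessel_k(B - 1.0, z) - (u * math.tan(u) + NU)

# For these parameter values the residual changes sign on (0, |alpha|/4).
lo, hi = 1e-6, ABS_ALPHA / 4.0 - 1e-6
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
c0_root = 0.5 * (lo + hi)
```

This is only a sanity check of the transcendental condition, not of the subsequent small-$c_{0}$ approximations.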
Solving Eq.~\eqref{eq:EVCEven4} to $O\left(\delta^{2}\right)$ gives
\begin{align}
\label{eq:Xc1}
& c_{1}\left\{ \frac{1}{\sqrt{\left|\alpha\right|-4c_{0}}}\left[\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\sqrt{\left|\alpha\right|-4c_{0}}\sec^{2}\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)\right]+\frac{1}{2}\frac{K_{b}}{K_{b-1}}\left(\frac{1}{\sqrt{c_{0}}}+2\frac{K_{b}^{\prime}}{K_{b}}-2\frac{K_{b-1}^{\prime}}{K_{b-1}}\right)\right\} \nonumber\\
& = -1+\frac{b}{2}\frac{K_{b}}{K_{b-1}}\left(\frac{1}{\sqrt{c_{0}}}+2\frac{K_{b}^{\prime}}{K_{b}}-2\frac{K_{b-1}^{\prime}}{K_{b-1}}\right)\nonumber \\
& \quad +2\frac{K_{b}}{K_{b-1}}\left[-\frac{b\left(b+1\right)}{2\sqrt{c_{0}}}+\left(\frac{b+1}{2}\right)\frac{K_{b+1}}{K_{b}}+\frac{b\left(b-1\right)}{2\sqrt{c_{0}}}-\frac{b}{2}\frac{K_{b}}{K_{b-1}}\right].
\end{align}
Let the expression in curly braces multiplying $c_{1}$ on the left-hand side be denoted by $X$.
Now we simplify the expression on the right-hand side. To do this,
we use the second and fourth relations in Eqs.~(9.6.26) of Ref.~\cite{AbramowitzStegun}, with $z=2\sqrt{c_{0}}$:
\begin{align}
K_{\lambda}^{\prime}\left(z\right) & = -K_{\lambda+1}\left(z\right)+\frac{\lambda}{z}K_{\lambda}\left(z\right),\\
K_{\lambda}^{\prime}\left(z\right) & = -K_{\lambda-1}\left(z\right)-\frac{\lambda}{z}K_{\lambda}\left(z\right).
\end{align}
Thus, we have
\begin{align}
\frac{K_{b}^{\prime}}{K_{b}} & = -\frac{K_{b-1}}{K_{b}}-\frac{b}{z},\\
\frac{K_{b-1}^{\prime}}{K_{b-1}} & = -\frac{K_{b}}{K_{b-1}}+\frac{b-1}{z},\\
\frac{K_{b+1}}{K_{b}} & = \frac{2b}{z}+\frac{K_{b-1}}{K_{b}}.
\end{align}
Using these identities, Eq.~\eqref{eq:Xc1} becomes
\begin{align}
Xc_{1} & = -1+\frac{K_{b}}{K_{b-1}}\left(\frac{b}{2\sqrt{c_{0}}}+b\frac{K_{b}^{\prime}}{K_{b}}-b\frac{K_{b-1}^{\prime}}{K_{b-1}}\right)\nonumber \\
& +\frac{K_{b}}{K_{b-1}}\left[-\frac{b\left(b+1\right)}{\sqrt{c_{0}}}+\left(b+1\right)\frac{K_{b+1}}{K_{b}}+\frac{b\left(b-1\right)}{\sqrt{c_{0}}}-b\frac{K_{b}}{K_{b-1}}\right]\nonumber \\
& = -1+\frac{K_{b}}{K_{b-1}}\left[\frac{b}{2\sqrt{c_{0}}}+b\left(-\frac{K_{b-1}}{K_{b}}-\frac{b}{2\sqrt{c_{0}}}\right)-b\left(-\frac{K_{b}}{K_{b-1}}+\frac{b-1}{2\sqrt{c_{0}}}\right)\right.\nonumber \\
& \left.-\frac{b\left(b+1\right)}{\sqrt{c_{0}}}+\left(b+1\right)\left(\frac{b}{\sqrt{c_{0}}}+\frac{K_{b-1}}{K_{b}}\right)+\frac{b\left(b-1\right)}{\sqrt{c_{0}}}-b\frac{K_{b}}{K_{b-1}}\right]\nonumber \\
& = 0.
\end{align}
This gives the result in Eq.~\eqref{eq:C1Eqn} of the main text.
\section{Derivation of the closed-form expression for $c_{0}$}
\label{App:C0Sol}
The order $\lambda$ modified Bessel function of the second kind is defined in Eq.~(9.6.2) of Ref.~\cite{AbramowitzStegun}:
\begin{equation}
K_{\lambda}\left(z\right)=\frac{\pi}{2\sin\left(\lambda\pi\right)}\left[I_{-\lambda}\left(z\right)-I_{\lambda}\left(z\right)\right],
\label{eq:KBessel}
\end{equation}
where the order $\lambda$ modified Bessel function of the first kind is defined in Eq.~(9.6.10) of Ref.~\cite{AbramowitzStegun}:
\begin{equation}
I_{\lambda}\left(z\right)=\left(\frac{1}{2}z\right)^{\lambda}\sum_{n=0}^{\infty}\left(\frac{1}{4}z^{2}\right)^{n}\frac{1}{n!\Gamma\left(1+\lambda+n\right)}.
\end{equation}
For small arguments $0<\left|z\right|\ll\sqrt{\lambda+1}$ the asymptotic form of $I_{\lambda}(z)$ is (see Eq.~(4), pg.~16 of Ref.~\cite{Watson}):
\begin{equation}
I_{\lambda}\left(z\right)\sim\frac{1}{\Gamma\left(1+\lambda\right)}\left(\frac{z}{2}\right)^{\lambda}.
\label{eq:IBesselSeries}
\end{equation}
Using Eqs.~\eqref{eq:KBessel} and \eqref{eq:IBesselSeries}, the limiting form of $K_{\lambda}\left(z\right)$ is
\begin{align}
K_{\lambda}\left(z\right) & = \frac{\pi}{2\sin\left(\lambda\pi\right)}\left[\frac{1}{\Gamma\left(1-\lambda\right)}\left(\frac{z}{2}\right)^{-\lambda}-\frac{1}{\Gamma\left(1+\lambda\right)}\left(\frac{z}{2}\right)^{\lambda}\right]\nonumber \\
& = \frac{1}{2}\left[\Gamma\left(\lambda\right)\left(\frac{2}{z}\right)^{\lambda}-\frac{\Gamma\left(1-\lambda\right)}{\lambda}\left(\frac{z}{2}\right)^{\lambda}\right].
\end{align}
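This limiting form is easy to verify numerically. The sketch below (the order $\lambda=0.7$ and argument $z=0.05$ are illustrative choices of ours) compares it with the integral representation $K_{\lambda}(z)=\int_{0}^{\infty}e^{-z\cosh t}\cosh(\lambda t)\,dt$:

```python
import math

def bessel_k(lam, z, tmax=30.0, n=20000):
    # K_lam(z) = \int_0^infty exp(-z cosh t) cosh(lam t) dt, trapezoid rule.
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(lam * t)
    return s * h

def k_small(lam, z):
    # Two-term limiting form (non-integer lam, small |z|):
    # K_lam(z) ~ (1/2)[Gamma(lam)(2/z)^lam - (Gamma(1-lam)/lam)(z/2)^lam]
    return 0.5 * (math.gamma(lam) * (2.0 / z) ** lam
                  - math.gamma(1.0 - lam) / lam * (z / 2.0) ** lam)

lam, z = 0.7, 0.05  # illustrative values
rel_err = abs(bessel_k(lam, z) - k_small(lam, z)) / bessel_k(lam, z)
```

The relative error is at the sub-percent level, consistent with the neglected $O(z^{2})$ corrections in the series for $I_{\pm\lambda}$.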
In order to apply this result to Eq.~\eqref{eq:C0Eqn}, we require $2\sqrt{c_{0}}\ll\sqrt{\nu+\frac{1}{2}}$. In terms of $\alpha$, this is equivalent to the condition $4c_{0}\ll1+\sqrt{\frac{1}{4}+\alpha}$.
Assuming this condition is satisfied, Eq.~\eqref{eq:C0Eqn} can be simplified to
\begin{equation}
c_{0}\approx\frac{1}{4}\left\{ \left[\sqrt{\left|\alpha\right|-4c_{0}}\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\nu\right]\frac{\left[\Gamma\left(\nu-\frac{1}{2}\right)\left(\frac{1}{\sqrt{c_{0}}}\right)^{\nu-\frac{1}{2}}-\frac{\Gamma\left(1-\left(\nu-\frac{1}{2}\right)\right)}{\nu-\frac{1}{2}}\left(\sqrt{c_{0}}\right)^{\nu-\frac{1}{2}}\right]}{\left[\Gamma\left(\nu+\frac{1}{2}\right)\left(\frac{1}{\sqrt{c_{0}}}\right)^{\nu+\frac{1}{2}}-\frac{\Gamma\left(1-\left(\nu+\frac{1}{2}\right)\right)}{\nu+\frac{1}{2}}\left(\sqrt{c_{0}}\right)^{\nu+\frac{1}{2}}\right]}\right\} ^{2}.
\end{equation}
Taking the square root of this equation and simplifying then gives
\begin{equation}
2\frac{\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(\nu-\frac{1}{2}\right)}\approx\left[\sqrt{\left|\alpha\right|-4c_{0}}\tan\left(\sqrt{\left|\alpha\right|-4c_{0}}\right)+\nu\right]\left[1-\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\frac{1}{2}+\nu\right)}c_{0}^{\nu-\frac{1}{2}}\right].
\end{equation}
We suppose that $c_{0}\ll|\alpha|$ and drop the $c_{0}$ terms appearing in the square roots in the equation above:
\begin{equation}
2\frac{\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(\nu-\frac{1}{2}\right)}\approx\left(\sqrt{\left|\alpha\right|}\tan\sqrt{\left|\alpha\right|}+\nu\right)\left[1-\frac{\Gamma\left(\frac{3}{2}-\nu\right)}{\Gamma\left(\frac{1}{2}+\nu\right)}c_{0}^{\nu-\frac{1}{2}}\right].
\end{equation}
Now solve this equation for $c_{0}$ to obtain
\begin{equation}
c_{0}\approx\left\{ \frac{\Gamma\left(\frac{1}{2}+\nu\right)}{\Gamma\left(\frac{3}{2}-\nu\right)}\left[1-\frac{2}{\left(\sqrt{\left|\alpha\right|}\tan\sqrt{\left|\alpha\right|}+\nu\right)}\frac{\Gamma\left(\nu+\frac{1}{2}\right)}{\Gamma\left(\nu-\frac{1}{2}\right)}\right]\right\} ^{\frac{1}{\nu-\frac{1}{2}}}.
\end{equation}
After inserting the expression for $\nu$ from Eq.~\eqref{eq:Nu}, the result in Eq.~\eqref{eq:C0Eqn2} of the main text is obtained.
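For concreteness, the closed-form estimate can be evaluated directly. The sketch below uses illustrative values $\nu=0.8$, $\left|\alpha\right|=1$ (our own choice, chosen so that $\nu<\frac{3}{2}$ and both Gamma functions are finite):

```python
import math

# Illustrative values (assumptions, not from the text); require nu < 3/2.
NU, ABS_ALPHA = 0.8, 1.0

denom = math.sqrt(ABS_ALPHA) * math.tan(math.sqrt(ABS_ALPHA)) + NU
bracket = 1.0 - 2.0 * (math.gamma(NU + 0.5) / math.gamma(NU - 0.5)) / denom
c0 = (math.gamma(0.5 + NU) / math.gamma(1.5 - NU) * bracket) ** (1.0 / (NU - 0.5))
# For these values c0 is positive and smaller than |alpha|/4, as required
# for the fillings used in the derivation.
```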
\end{widetext}
\end{document}
|
\begin{document}
\title{Strongly n-trivial Knots}
\newtheorem{definition}{Definition}[section]
\newtheorem{corollary}[definition]{Corollary}
\newtheorem{theorem}[definition]{Theorem}
\newtheorem{lemma}[definition]{Lemma}
\newtheorem{claim}[definition]{Claim}
\newtheorem{ex}[definition]{Example}
\newtheorem{question}[definition]{Question}
\newtheorem{exer}[definition]{Exercise}
\newtheorem{conjecture}[definition]{Conjecture}
\begin{abstract}
We prove that any non-trivial knot $k$ of genus $g$
fails to be strongly $n$-trivial for all $n$,
$n \geq 3g-1 $.
\noindent Keywords: Crossing Changes, Strongly n-trivial, n-trivial,
n-adjacent, Thurston Norm,
Sutured Manifolds.
\noindent AMS classification: 57M99.
\end{abstract}
\section{Introduction}
We start with a little background.
\begin{definition}
A knot $k$ is called ``$(n$-$1)$-trivial''
if there exists a projection of $k$,
such that one can choose $n$ disjoint sets of
crossings of the projection with the property
that making the
crossing changes corresponding to any of the $2^{n}-1$
nontrivial combination of the sets of crossings
turns the original knot into the
unknot. The collection of sets of
crossing changes is said to be an ``$(n$-$1)$-trivializer for $k$''.
\end{definition}
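The number $2^{n}-1$ in the definition simply counts the nonempty subsets of the $n$ chosen sets of crossings, as the following throwaway enumeration (illustrative only) makes explicit:

```python
from itertools import combinations

def nontrivial_combinations(crossing_sets):
    """All nonempty subsets of the chosen sets of crossings; for an
    (n-1)-trivializer, switching the crossings in any one of these
    subsets must produce the unknot."""
    s = list(crossing_sets)
    return [c for r in range(1, len(s) + 1) for c in combinations(s, r)]

combos = nontrivial_combinations(["A", "B", "C"])  # n = 3 sets of crossings
```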
\begin{conjecture}
The unknot is the only knot that is $n$-trivial for all $n$.
\label{conj:setsarcs}
\end{conjecture}
{\em Note:} \hspace{.5mm} A knot that is $n$-trivial is
automatically $m$-trivial for all $m \leq n$.
Work of Gusarov
[Gu] and of Ng and Stanford [NS] shows that this conjecture is equivalent to
the statement that the only knot with vanishing Vassiliev invariants of all orders is the
unknot. Thus, Conjecture~\ref{conj:setsarcs} is at the heart of the study of
Vassiliev invariants.
One reason this question is interesting is that
it takes a geometric
approach to Vassiliev invariants, instead of the traditional
algebraic approach, and is therefore relatively unexplored.
Vassiliev invariants measure
properties of knots, which are geometric objects,
so it is reasonable to hope that the geometry
might play an integral role
in their study.
The following definition derives
its motivation from $n$-triviality.
\begin{definition}
A knot $k$ is called ``strongly $(n$-$1)$-trivial''
if there exists a projection of $k$,
such that one can choose $n$ crossings of the projection with the property
that making the
crossing changes corresponding to any of the $2^{n}-1$
nontrivial combination of the selected crossings
turns the original knot into the
unknot. The collection of
crossing changes is said to be a ``strong $(n$-$1)$-trivializer for $k$''.
\end{definition}
{\em Note:} \hspace{.5mm} The expression ``$n$-adjacent
to the unknot'' is used interchangeably with ``strongly $(n$-$1)$-trivial.''
We will stick with the latter throughout this
paper.
In Section~\ref{sect:bigns} we show that for any $n$ there is a
non-trivial knot that is strongly $n$-trivial. On the other hand in
Section~\ref{sect:onearc} we prove the main result of this paper:
\begin{theorem}
Any non-trivial knot $k$ of genus $g$
fails to be strongly $n$-trivial for all $n$,
$n \geq 3g-1$.
\label{thm:onearc}
\end{theorem}
{\em Note:} \hspace{.5mm} A knot that is strongly $n$-trivial is
automatically strongly $m$-trivial for all $m \leq n$. Also any knot
that is strongly $n$-trivial is clearly $n$-trivial, too.
In analogy with Conjecture~\ref{conj:setsarcs} we have
\begin{corollary}
The unknot is the only knot that is strongly $n$-trivial for all $n$.
\label{cor:onearc}
\end{corollary}
Theorem~\ref{thm:onearc}
is proven by repeated use of the following theorem of Gabai:
\begin{theorem} (Corollary 2.4 [G]) Let $M$ be a Haken manifold
whose boundary is a nonempty union of tori. Let $F$ be a Thurston
norm minimizing surface representing an element of $H_2(M, \partial M)$
and let $P$ be a component of $\partial M$ such that
$P \cap F = \emptyset$. Then, with at most one exception (up to
isotopy), $F$ remains norm minimizing in each manifold $M(\alpha)$
obtained by filling $M$ along an essential simple closed curve $\alpha$
in $P$. In particular, $F$ remains incompressible in all but at most one
manifold obtained by filling $P$.
\label{thm:gabai}
\end{theorem}
\section{Notation}
\label{section:notation}
Let $k$ be a knot that is strongly $(n$-$1)$-trivial. Let
$p:k \rightarrow R^2$ be a projection with crossings $\{a_1, \dots a_n \}$
demonstrating the strong $(n$-$1)$-triviality. For each $a_i$
let $c_i$ be the small vertical circle that bounds a disk
$D_i$
that intersects $k$ geometrically
twice, but algebraically 0 times. We call the $c_i$ linking
circles of $k$ and call $D_i$ a crossing disk after
[ST]. Let $M$ be the link exterior of $k \cup c_1 \cup \dots \cup c_n$
and $P_i$ be the torus boundary component in $M$ corresponding to
$c_i$. Either $+1$ or $-1$ filling of $P_i$
will result in the desired crossing change depending on
orientation. We adopt the convention that each $P_i$ will be oriented
so that $+1$ filling of $P_i$ corresponds to
the appropriate crossing change dictated by $a_i$.
\section{Irreducibility}
\begin{lemma}
Let $k$ be a nontrivial knot. Let $\{c_1, \dots c_n \}$
be linking circles for $k$ that show $k$ is strongly $(n$-$1)$-trivial.
Then $M$, the exterior of $k \cup c_1 \cup \dots \cup c_n$, is
irreducible, and therefore Haken.
\label{lemma:irreducible}
\end{lemma}
\begin{proof} Assume $M$ is reducible. Let $S$ be a sphere
that does not bound a ball on either side. $S$ cannot be
disjoint from $D_1 \cup \dots D_n$ or else it would bound a
ball on the side that does not contain $k$. Assume $S$ intersects
$D_1 \cup \dots \cup D_n$ minimally and
transversally. The intersection will consist
of a union of circles. Let $r$ be one of these circles that is
innermost on $S$ (any circle that bounds a disk on $S$ disjoint from
all the other circles of intersection). Without loss of
generality assume $r \subset D_1$. $r$ cannot
be trivial on $D_1 - (D_1 \cap k)$ since $S \cap D_1$ is minimal.
$r$, however, must be trivial in $M$, so it must divide $D_1$ into
two pieces: one containing both points of $D_1 \cap k$, and the
other an annulus running from $r$ to $c_1$.
The disk on $S$ bounded by $r$, together with this annulus,
shows that $c_1$
bounds a disk in the exterior of $k$.
This, however, means that $+1$ surgery on $c_1$
leaves $k$ unchanged instead of turning it into an unknot, yielding
the desired contradiction.
\end{proof}
\section{Minimal genus Seifert surfaces}
This section is dedicated to proving the following theorem.
\begin{theorem} If $k$ has a strong $(n$-$1)$-trivializer
$\{ c_1, \dots c_n \}$ and $F$ is a Seifert surface for $k$
disjoint from
$\{ c_1, \dots c_n \}$ which is minimal
genus among all such surfaces,
then $F$ is a
minimal genus Seifert surface for $k$.
\label{thm:surface}
\end{theorem}
\begin{proof}
Because the $c_i$ have linking number 0 with $k$ we can find a
Seifert surface for $k$ disjoint from the $c_i$.
Let $F$ be a minimal genus Seifert surface for $k$
in the link complement.
We supplement the notation introduced in Section~\ref{section:notation}.
Recall $M$ is
the link exterior of $k \cup c_1 \cup \dots \cup c_n$.
Let $L$ be the corresponding link of $n+1$ components in
$S^3$.
$P_i$ is the torus boundary component in $M$ corresponding to
$c_i$.
Let $M(\alpha)$ refer to the manifold
obtained by filling $M$ along an essential simple closed curve
of slope $\alpha$
in $P_n$. When $\alpha = 1/m, m \in Z$, $M(\alpha)$ is a link exterior.
Let $L_{\alpha}$ be the corresponding link in $S^3$. Let $k_{\alpha}$
be the image of $k$ in $L_{\alpha}$ and $F_{\alpha}$ be the image
of $F$ in $L_{\alpha}$.
We now prove Theorem~\ref{thm:surface} by induction on $n$.
If $F$ is ever a disk then Theorem~\ref{thm:surface}
is clearly true, so \underline{we will assume that $F$ is not a disk throughout the proof}.
{\bf The base case:}
Let $k$ be a strongly
0-trivial knot. This means that $k$ has unknotting number
1 and there is one linking circle $c_1$ that dictates a crossing change
that unknots $k$.
By Lemma~\ref{lemma:irreducible} if $M$ is
reducible, then $k$ is the unknot. As in the proof of
Lemma~\ref{lemma:irreducible} $c_1$ bounds a disk in the complement of
$k$, so $k \cup c_1$ is the unlink on two components. Therefore,
$F$ being least genus must be a disk, which is a contradiction,
verifying the claim for $M$ reducible and $n=1$.
We may assume $M$ is irreducible to
complete the base case. $k_1$ is the unknot.
Since $F_1$ is not a disk, it is no longer
norm minimizing after the filling. Thus by Theorem~\ref{thm:gabai}
$F$ is norm minimizing under any other filling of $P_1$. In particular
$F_{\infty}$ is Thurston norm minimizing for $L_{\infty}$,
which is just $k$.
Thus, $F$ is a least genus Seifert surface for $k$.
{\bf The inductive step:} Now we assume that if $k$ has a
strong $(n$-$2)$-trivializer
$\{ c_1, \dots c_{(n-1)} \}$ and $F$ is a Seifert surface for $k$
disjoint from
$\{ c_1, \dots c_{n-1} \}$, which is minimal
genus among all such surfaces,
then $F$ is also a
minimal genus Seifert surface for $k$ and show that the same
must be true for any strong $(n$-$1)$-trivializer for $k$.
Again by Lemma~\ref{lemma:irreducible} if $M$
is reducible, $k$ must be the unknot. As in previous arguments,
the separating
sphere $S$ must intersect at least one $D_i$
in a curve that is essential on $D_i - k$.
Without loss of generality, we may assume that $D_n$
is such a disk. Then $c_n$ bounds a disk in the complement of $k \cup \{c_1, \dots
c_{n-1}\}$.
Since $\{ c_1, \dots c_{n-1} \}$ forms a strong $(n$-$2)$-trivializer
for $k$, the induction assumption implies $k$ bounds
a disk $\Delta$ disjoint from
$c_1 \cup \dots c_{n-1}$. Since $c_n$ bounds a disk disjoint from
$k \cup c_1 \cup \dots c_{n-1}$,
$\Delta$ can clearly be chosen to be
disjoint from $c_n$, too, but this contradicts the assumption
that $F$ was minimal genus, but not a disk.
We now may finish the proof of Theorem~\ref{thm:surface}
knowing that $M$ is irreducible.
$k_1$ is an unknot in the link $L_1$.
$\{ c_1, \dots c_{n-1} \}$ is a strong $(n$-$2)$-trivializer
for $k$ in $L_1$. The
inductive assumption means that $k_1$ bounds a disk in
the exterior of $L_1$.
This disk is in the same
class as $F_1$ in $H_2(M(1), \partial M(1))$, thereby
showing that $F_1$ is not Thurston
norm minimizing. Thus, by Theorem~\ref{thm:gabai}
$F$ remains
norm minimizing under any other filling of $P_n$. In particular
$F_{\infty}$ is Thurston norm minimizing in $L_{\infty}$.
Thus, $F$ is a least genus Seifert surface for $k$
in the complement of $\{ c_1, \dots c_{n-1} \}$.
$\{ c_1, \dots c_{n-1} \}$, however, forms a strong $(n$-$2)$-trivializer
for $k$ in $L_{\infty}$.
By the inductive assumption, $F$ must be Thurston norm
minimizing for $k$ in the knot complement as well
as the link complement.
\end{proof}
\section{Arcs on a Seifert surface}
We now prove Theorem~\ref{thm:onearc}:
Any non-trivial knot $k$ of genus $g$
fails to be strongly $n$-trivial for all $n$,
$n \geq 3g-1$.
\begin{proof}
Let $k$ be strongly $n$-trivial with strong $n$-trivializer $\{c_1, \dots c_{n+1} \}$.
Let $F$ be a minimal genus Seifert surface for $k$ disjoint from
$\{c_1, \dots c_{n+1} \}$ as in Theorem~\ref{thm:surface}.
$F$ has genus $g$.
\label{sect:onearc}
Each linking circle $c_i$ bounds a disk $D_i$ that intersects $F$
in an arc running between the two points of $k \cap D_i$
and perhaps some simple closed curves. Simple closed curves
inessential in $D_i - k$
can be eliminated since $F$ is incompressible. Any essential curves
$s_j$ must
be parallel to $c_i$ in $D_i - k$. These curves can be removed
one at a time using the annulus running from $c_i$ to the
outermost $s_j$ to reroute $F$, decreasing the number of intersections.
Thus, if $F$ is assumed to have minimal intersection with
each of the $D_i$ then it intersects each one in an arc which we shall
call $a_i$ as in Figure~\ref{fig:linkingcircle}. Each $a_i$ is essential in $F$.
Otherwise $c_i$ would bound a disk disjoint from $F$ and the crossing change along
$c_i$ would fail to unknot $k$.
\begin{figure}
\caption{A Seifert surface passing disjointly through a linking circle}
\label{fig:linkingcircle}
\end{figure}
\begin{lemma}
$a_i$ is never parallel on $F$ to $a_j$ for $i \neq j$.
\end{lemma}
\begin{proof}If $a_i$ is parallel on $F$ to $a_j$
there must be an annulus running from $P_i$ to $P_j$ in the
link exterior. Recall that we adopted the convention that
$P_i$ and $P_j$ are each oriented so that $+1$ surgery
results in the appropriate crossing changes.
The two tori cannot have opposite
orientations, or else $+1$ fillings on both $P_i$ and $P_j$
would be the same as $\infty$ fillings on both; thus,
instead of unknotting $k$, changing
both crossings would leave $k$ knotted.
If the two tori have the same orientation
we could replace $P_i \cup P_j$ with a single
torus $T$ obtained by cutting and pasting
of the two tori along the annulus.
Now $+1$ filling for $P_i$ and $\infty$ filling for $P_j$
is equivalent to $+1$ filling on $T$, while $+1$ filling on
both $P_i$ and $P_j$ is equivalent to $\frac{1}{2}$ filling on
$T$. This implies that $F$ fails to be norm-minimizing after
both $+1$ and $\frac{1}{2}$
filling of $T$. This contradicts Theorem~\ref{thm:gabai}
completing the proof of the Lemma.
\end{proof}
Then $\{a_1, \dots a_{n+1} \}$ is a collection of
disjoint embedded arcs on $F$, no two of
which are parallel. An Euler characteristic argument shows that
such a collection contains at most $3g-1$ arcs. Since the arcs are in one to one correspondence
with the linking circles,
a strong $n$-trivializer can produce
at most $3g-1$ linking circles for $k$; as this one has $n+1 \geq 3g$ of them,
we have a contradiction, finally proving Theorem~\ref{thm:onearc}.
\end{proof}
We note that
Theorem~\ref{thm:onearc} predicts that a genus one knot can be at most
strongly $1$-trivial.
Given standard projections of the trefoil and the figure eight knot
it is easy to find a pair of crossing changes that show the knots are
strongly $1$-trivial. The theorem is therefore sharp
at least for genus one knots. It is possible, but unlikely, that the
theorem remains as precise for higher genus knots.
Finally as noted in the introduction, Theorem~\ref{thm:onearc} implies
Corollary~\ref{cor:onearc}:
The unknot is the only knot that is strongly
$n$-trivial for all $n$.
\section{Constructing strongly $n$-trivial knots}
\label{sect:bigns}
One might think that
there exists a bound $n$
such that no nontrivial knot is strongly $n$-trivial.
This section shows otherwise: given any $n$, it gives one way
to construct nontrivial strongly $n$-trivial knots.
In Figure~\ref{fig:sntriv} we will give
projections of graphs that show how to turn an unknot into a strongly
$n$-trivial knot. The circle running around the outside of the
graph should be viewed as an unknot $k'$. Each arc $a_i$ suggests a linking
circle $c_i$ and a crossing disk $D_i$. If we alter the link $k' \cup
c_1 \cup \dots \cup c_n$ in $S^3$ by twisting $-1$ times
along each of the disks $D_1, D_2, \dots D_n$,
then $k'$ becomes a new knot $k$;
see Figure~\ref{fig:push}. The linking circles remain fixed,
so we get a new link in $S^3$, $k \cup c_1 \cup \dots \cup c_n$.
\begin{figure}
\caption{A graph contains instructions for turning the unknot
into a knot (or perhaps another unknot).}
\label{fig:push}
\end{figure}
Figure~\ref{fig:sntriv} gives graphs that generate
examples of strongly $1$-trivial
and strongly $2$-trivial knots. Note that the figure on the right
is obtained from the figure on the left
by replacing one arc by two new arcs that follow along the original
arc, clasp, return along the original arc, and then, close to the boundary,
clasp once again. This process could be iterated indefinitely by choosing
an arc of the new graph and repeating the construction.
It is modeled on doubling one component of a
link. Given a Brunnian link of $n$ components
(a nontrivial link for which any $n-1$ components is the unlink),
doubling one of the components yields a Brunnian link of $n+1$ components.
The graph on the left in Figure~\ref{fig:sntriv}
has a Brunnian link of 2 components as a subgraph and
the one on the right has the double of that link as a subgraph.
Let $\Gamma_{n}$ be the graph obtained after
$n$ iterations of this construction ($n \geq 0$), with $\Gamma_0$ the graph on the left.
\begin{figure}
\caption{Examples of crossing changes for the unknot that create
nontrivial knots that are strongly $1$-trivial (left) and
strongly $2$-trivial (right). Note that each contains
a subgraph that is a Brunnian link of $n+1$ components.}
\label{fig:sntriv}
\end{figure}
\begin{theorem}
$\Gamma_{n}$ contains a Brunnian link, $l_{n+2}$, of $n+2$
components and yields $k$ a non-trivial, strongly $(n+1)$-trivial knot.
\label{theorem:brunnian}
\end{theorem}
\begin{proof} The link consists of the arcs $\{a_1, \dots a_{n+2}\}$,
together
with short segments from $k'$ connecting the end points
of the arcs (and disjoint from the end points of the other arcs).
The base case is trivial:
$\Gamma_{0}$ contains a Brunnian link of $2$ components, the Hopf link.
$\Gamma_{n}$ is obtained from $\Gamma_{n-1}$ by doubling one of the components
of a Brunnian link of $n+1$ components. This yields a Brunnian link of $n+2$
components.
As a result of the Brunnian structure in $\Gamma_{n}$
any $n+1$ edges from $\{a_1, \dots a_{n+2}\}$ can be
disjointly embedded on a disk bounded by $k'$.
So $k'$ forms an unlink with any proper subset of
$\{c_1, \dots c_{n+2} \}$.
We can use this fact to
show that $\{c_1, \dots c_{n+2} \}$
is a strong $(n+1)$-trivializer for $k$.
Let $S$ be any nontrivial subset of $\{c_1, \dots c_{n+2} \}$.
Let $S^c$ be the complement of $S$.
If we take $k$ together with $S$, and do $+1$ surgery on each component
of $S$ the resulting knot is an unknot. This is because it
is exactly the same as if we took $k'$
and did $-1$ surgery on each of the components of $S^c$.
Since $S$ is a nontrivial subset, $S^c$ is a proper subset.
$k'$ together with the linking circles in $S^c$ therefore forms
an unlink, so each of the components of $S^c$ bounds a disk disjoint
from $k'$, and doing $-1$ surgery on these linking circles
leaves $k'$ unchanged.
Now that we know that $k$ is strongly $(n+1)$-trivial,
we need only show $k$ is a non-trivial knot.
Assume $k$ is trivial. By
Theorem~\ref{thm:surface}, $k$ bounds a disk $\Delta$
in the complement of $c_1 \cup \dots \cup c_{n+2}$.
Since $k \cup c_1 \cup \dots \cup
c_{n+2}$ was obtained from $k' \cup c_1 \cup \dots \cup
c_{n+2}$ by twisting along the $D_i$'s, the exteriors of
the two links are homeomorphic, and therefore
$k'$ must bound a disk $\Delta'$ also disjoint from
$c_1 \cup \dots \cup c_{n+2}$ (note that one could
even prove that both $k \cup c_1 \cup \dots \cup c_{n+2}$
and $k' \cup c_1 \cup \dots \cup c_{n+2}$ are unlinks).
$k'$ intersects
each $D_i$ in 2 points, so as before we may assume $\Delta' \cap D_i$
is an arc for each $i$. These arcs must, of course,
be isotopic to the $a_i$'s, which in turn shows that
the $a_i$'s can be disjointly embedded on $\Delta'$,
proving that
$l_{n+2}$ is planar
and not Brunnian, the desired contradiction.
Thus, $k$ is a strongly $(n+1)$-trivial knot, but not the unknot.
\end{proof}
\hspace{.2in}
[G] Gabai, David, {\it Foliations and the topology of 3-manifolds. II},
Journal of Differential Geometry 26 (1987), pp. 461--478.
[Gu] Gusarov, M., {\it On $n$-equivalence of knots and invariants of finite degree}.
Topology of manifolds and varieties, pp. 173--192, Adv. Soviet Math., 18, Amer. Math. Soc.,
Providence, RI, 1994.
[NS] Ng, Ka Yi; Stanford, Ted, {\it On Gusarov's groups of knots}, to appear in Math.
Proc. Camb. Phil. Soc.
[S] Scharlemann, Martin, {\it Unknotting number one
knots are prime}. Invent. Math. 82 (1985), no. 1, 37--55.
[ST] Scharlemann, Martin; Thompson, Abigail, {\it
Link genus and the Conway moves}. Comment. Math.
Helv. 64 (1989), no. 4, 527--535.
\end{document}
|
\begin{document}
\title{Discriminating Strength: a bona fide measure of non-classical correlations}
\author{A. Farace}
\affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR,
I-56126 Pisa, Italy}
\author{A. De Pasquale}
\affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR,
I-56126 Pisa, Italy}
\author{L. Rigovacca}
\affiliation{Scuola Normale Superiore,
I-56126 Pisa, Italy}
\author{V. Giovannetti}
\affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy}
\begin{abstract}
A new measure of non-classical correlations is introduced and characterized. It tests the ability to use a state $\rho$ of a composite system $AB$ as a probe for a {\it quantum illumination} task [e.g. see S. Lloyd, Science {\bf 321}, 1463 (2008)],
in which one is asked to remotely discriminate between the following two scenarios: i) either nothing happens to the probe, or ii) the subsystem $A$ is transformed via a local unitary $R_A$ whose properties are partially unspecified when producing $\rho$. This new measure can be seen as the discrete version of the recently introduced Interferometric Power measure [D. Girolami et al., e-print arXiv:1309.1472 (2013)] and, at least for the case in which $A$ is a qubit, it is shown to coincide (up to an irrelevant scaling factor) with the Local Quantum Uncertainty
measure of D. Girolami, T. Tufarelli, and G. Adesso,
Phys. Rev. Lett. {\bf 110}, 240402 (2013). Analytical expressions are derived which allow us to formally prove that, within the set of separable configurations, the maximum value of
our non-classicality measure is achieved over the set of quantum-classical states (i.e. states $\rho$ which admit a statistical unravelling where each element of the associated ensemble is distinguishable via local measurements on $B$).
\end{abstract}
\maketitle
\section{Introduction}
In recent years strong evidence has been collected in support of the fact that
composite quantum systems
can exhibit correlations which, while not accountable for by a purely classical statistical theory, still go beyond the notion of quantum entanglement~\cite{MODIREV}.
In the seminal papers by Henderson and Vedral~\cite{vedralzurek}, and Ollivier and Zurek~\cite{OLLZU}, this new form of non-classicality
was gauged in terms of a difference of two entropic quantities -- specifically the quantum mutual information~\cite{ENTROPYBOOK} (which accounts for {\it all} correlations in a bipartite system), and the Shannon mutual information~\cite{COVER} extractable by performing a generic local measurement on one of the subsystems.
The resulting functional, known as {\it quantum discord}~\cite{vedralzurek}, highlights the impossibility of recovering the information contained in a composite quantum system
by performing local detections only. It turns out that this intriguing feature of quantum mechanics is not directly related to entanglement~\cite{ENTREV}. Indeed, even though all entangled states are bound to exhibit non-zero
value of quantum discord, examples of separable (i.e. non-entangled) configurations which share the same property can easily be found
-- zero value of discord identifies only a tiny (zero-measure) subset of all separable configurations~\cite{FERRA}.
In spite of the enormous effort spent in characterizing this emerging new aspect of quantum mechanics, a question which is still open is
whether and to what extent the new form of quantum correlations identified by quantum discord can be considered as a {\it resource} and exploited to give some kind of advantage over purely classical means. Owing to the variety of contexts in which quantum theory has proved to be a useful tool for developing new
technological ideas
(such as information theory, thermodynamics, computation and communication), this question has given rise to a number of alternative definitions and quantifiers of discord-like correlations, see e.g.~\cite{MODIREV} and references therein. This proliferation stems also from the difficulty of identifying a measure which is at the same time well defined, easily computable (even for the case of a two-qubit system), and endowed with an operational meaning. As a paradigmatic example, let us recall the geometric discord~\cite{gd}, which can be effortlessly computed at the price of possibly increasing under local operations~\cite{pianiGD}. Some geometric alternatives have been proposed in order to overcome this hindrance. For example, one can take the Hilbert-Schmidt distance between the square roots of the density operators, rather than the density operators themselves~\cite{changGD}, or use different distances such as the trace distance~\cite{Ciccarello2014a} and the Bures distance~\cite{Spehner2013a}. There are also several non-geometric approaches to quantum correlations, both on a fundamental and on an applied level. Among them, let us briefly recall the measurement-induced disturbance~\cite{disturbance} and non-locality~\cite{nonlocality}, which consider the perturbation induced by local von Neumann measurements on non-classically correlated states. On the other hand, the quantum deficit~\cite{Oppenheim2002a} investigates the role of quantum discord in work extraction from a heat bath, while the so-called quantum advantage~\cite{Gu2012a} focuses on quantum discord as the resource allowing quantum communication to be more efficient than classical communication.
Dealing with this complex scenario, here we introduce a new measure of quantum correlations, the \textit{Discriminating Strength} (DS), which turns out to be a valid tradeoff between computability and the fulfillment of the criteria that every good discord quantifier should satisfy~\cite{criteria}. Most importantly, it also possesses a clear operational meaning, being directly connected with the quantum illumination procedures introduced in Refs.~\cite{LLOYD,TAN,JEFF,GU}.
Being the counterpart of the recently introduced Interferometric Power (IP) for continuous-variable estimation theory~\cite{blindmetr}, the DS highlights the benefit gained by quantum state discrimination protocols when general quantum correlations, not necessarily in the form of entanglement, are employed. Finally,
we provide a formal connection between our new measure and the Local Quantum Uncertainty Measure (LQU) introduced in Ref.~\cite{LQU}
whose operational meaning was not yet
completely understood. Specifically, we show that the LQU is a special case of the DS when the state is used as a probe to detect the application of a local unitary which is close to the identity. Furthermore, for qubit-qudit systems
one can verify that LQU and DS always coincide up to a proportionality factor.
The DS, together with the aforementioned IP and LQU, witnesses a recent burst of attention to the crucial role played by quantum correlations in the realm of quantum metrology.
The manuscript is organized as follows. In Sec. \ref{sec:DS} we introduce a paradigmatic state discrimination scheme and quantify how well a generic state $\rho$ performs in the discrimination. In Sec. \ref{Sec:Prop} we show that the same quantifier satisfies all the properties required of a bona fide measure of discord. Moreover, we present the connection between our measure and the LQU measure and provide some simple analytical formulas for special cases (specifically pure states and qubit-qudit systems). In Sec.~\ref{sec:sep} we focus on the set of separable states and determine the maximum value of the DS on this set in the qubit-qudit case. Conclusions are left to Sec. \ref{Sec:conc}.
\section{Discriminating Strength} \label{sec:DS}
In order to formally introduce our new measure of non-classicality it is useful to
recall the Quantum Chernoff Bound (QCB)~\cite{QCB}.
This is an inequality which characterizes the
asymptotic scaling of the minimum error probability
$P_{err,min}^{(n)}(\rho_0,\rho_1)$
attainable when discriminating between $n$ copies of two density matrices $\rho_0$ and
$\rho_1$~\cite{QCB}.
By optimizing with respect to all possible Positive-Operator Valued Measures (POVMs) aimed at distinguishing between the two possible configurations,
and assuming a $50\%$ prior probability of getting $\rho_0^{\otimes n}$ or $\rho_1^{\otimes n}$,
one can write~\cite{HELSTROM}
\begin{eqnarray}\label{eq:pminerr}
P_{err,min}^{(n)}:=\frac{1}{2} \Big(1 -
\frac{1}{2}\| \rho_0^{\otimes n} - \rho_1^{\otimes n}\|_1\Big)
\;,
\end{eqnarray}
the optimal detection strategy being the one which discriminates between the negative and non-negative eigenspaces of the operator $\rho_0^{\otimes n} - \rho_1^{\otimes n}$.
For large enough $n$, the dependence of the error probability on the number of copies can be approximated by an exponential
decay
\begin{equation}
P_{err,min}^{(n)}(\rho_0,\rho_1)\simeq e^{-n\;\xi(\rho_0,\rho_1)} =: Q(\rho_0,\rho_1)^n\;,
\end{equation}
characterized by the decay constant
\begin{eqnarray}\label{eq:limit}
\xi(\rho_0,\rho_1) : =- \lim_{n\rightarrow\infty}
\frac{\ln P_{err,min}^{(n)}(\rho_0,\rho_1)}{n}\;.
\end{eqnarray}
Accordingly, the larger $Q(\rho_0,\rho_1)$ is,
the less distinguishable the states $\rho_0$ and $\rho_1$ are.
The limit in \eqref{eq:limit} corresponds to the QCB~\cite{QCB} and reads
\begin{eqnarray}\label{eq:QCB}
e^{-\xi(\rho_0,\rho_1)} = Q(\rho_0,\rho_1) = \min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \big[ \rho_0^s \rho_1^{1-s} \big]\;,
\end{eqnarray}
which implies
\begin{eqnarray}\label{bound1}
0 \leq Q(\rho_0,\rho_1) \leq \mbox{Tr}[ \rho_0^{1/2} \rho_1^{1/2}] \leq 1\;.
\end{eqnarray}
Furthermore, if at least one of the two quantum states $\rho_0$ or $\rho_1$ is pure, the QCB reduces to the Uhlmann fidelity~\cite{UHL}, i.e.
\begin{eqnarray}
Q(\rho_0,\rho_1)=\mathcal{F}(\rho_0,\rho_1):=\left(\mathrm{Tr}\left[\sqrt{\sqrt{\rho_0} \; \rho_1\; \sqrt{\rho_0}}\right]\right)^2\;. \label{pure}
\end{eqnarray}
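The minimization in Eq.~(\ref{eq:QCB}) is easily approximated numerically. The following Python sketch (our own illustration; the function names and the finite grid over $s$ are choices made here, not part of the original analysis) evaluates the QCB for a pair of density matrices and can be used to check the bounds~(\ref{bound1}):

```python
import numpy as np

def mat_pow(rho, s):
    # Fractional power of a Hermitian positive-semidefinite matrix
    # via its eigendecomposition (tiny negative eigenvalues are clipped).
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0.0, None) ** s) @ v.conj().T

def qcb(rho0, rho1, n_s=201):
    # Q(rho0, rho1) = min_{0 <= s <= 1} Tr[rho0^s rho1^(1-s)],
    # approximated by a brute-force grid search over s.
    return min(np.trace(mat_pow(rho0, s) @ mat_pow(rho1, 1.0 - s)).real
               for s in np.linspace(0.0, 1.0, n_s))
```

Since $s\mapsto\mathrm{Tr}[\rho_0^s\rho_1^{1-s}]$ is smooth and convex, a grid of a few hundred points already locates the minimum accurately.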
\begin{figure}
\caption{(Color online): sketch of the discrimination problem discussed in the text.
(1) A first party (say Alice) prepares $n$ copies of a bipartite state $\rho$ of a composite system $AB$ and (2) sends the probing subsystems $A$ to a second party (say Robert) while keeping the reference subsystems $B$ in her laboratory.
(3) Robert can now decide whether or not to apply a certain unitary rotation $R_A$ he has previously selected from a set ${\cal S}$ of allowed transformations; Alice is then asked to discriminate between the two alternatives.}
\label{fig:uno}
\end{figure}
Let us now consider the following {\it quantum illumination} scenario~\cite{LLOYD,TAN,JEFF,GU}.
A first party (Alice) prepares $n$ copies of a density matrix $\rho$ of a bipartite system $AB$ composed of a probing component $A$ and a reference component $B$, while
a second party (the non-cooperative target Robert) selects an undisclosed unitary transformation $R_A$ from a set ${\cal S}$ of allowed transformations.
Next Alice sends her $n$ subsystems $A$ to Robert, who is allowed to perform one of the following actions: apply the same rotation $R_A$
to each of the $n$ subsystems $A$, or leave them unmodified -- see Fig.~\ref{fig:uno}. Only after this step does Robert reveal the chosen rotation $R_A$ and send back the $A$ subsystems. Alice is now
requested to guess whether the rotation $R_A$ has been implemented or not,
i.e. to \textit{discriminate} between $\rho_0^{\otimes n}=\rho^{\otimes n}$ (no rotation) and
$\rho_1^{\otimes n} = (R_A \rho R_A^\dag)^{\otimes n}$ (rotation applied). For this purpose she is of course allowed to perform the most general POVM
on the $n$ copies of the transformed states. In particular, as in a conventional interferometric experiment, she might find it useful to exploit the correlations present between the probes $A$ and their corresponding reference counterparts $B$
[it is important to stress, however, that, due to the lack of prior information on $R_A$, Alice cannot optimize
the choice of her initial state $\rho$].
In this scenario we define
the ``discriminating strength'' of the state $\rho$ by quantifying Alice's worst possible performance through the quantity
\begin{equation}
\label{eq:7}
D_{A\rightarrow B}(\rho) : = 1 - \max_{R_A \in {\cal S} } Q(\rho,R_A \rho R_A^\dagger)\;,
\end{equation}
where the maximization is performed over the set ${\cal S}$ of allowed~$R_A$, and where the symbol $A\rightarrow B$ highlights the different roles played by the two subsystems in the problem -- an asymmetry which is a common trait of the majority of the non-classical correlation measures introduced so far~\cite{MODIREV}.
From Eqs.~(\ref{eq:QCB}) and \eqref{eq:7} it is clear that the higher $D_{A\rightarrow B}(\rho)$ is, the better Alice will be able to
determine whether a generic element of ${\cal S}$ has been applied to $A$ or not.
It is natural to expect that the capability of the input state $\rho$ to record the action of an arbitrary local rotation should increase with the amount of correlations shared between the probe $A$ (which has been affected by the rotation) and the reference $B$ (which has not). This behavior would be analogous to the one displayed by the Interferometric Power measure discussed in~\cite{blindmetr}, which quantifies the worst-case precision in determining the value of a continuous parameter.
Clearly the choice of ${\cal S}$ plays a fundamental role in our construction: for instance allowing ${\cal S}$ to coincide with the group $\mathbb{U}_A$ of all possible unitary transformations on $A$, including the identity, would give $D_{A\rightarrow B}(\rho)=0$ for all states $\rho$.
To avoid these pathological results we find it convenient to identify ${\cal S}$ with the special family of $R_A$ parametrized as
$R_A^\Lambda = \exp[i H_A^\Lambda ]$,
where
$H_A^\Lambda$ is a Hamiltonian
of assigned non-degenerate spectrum represented by the elements of the diagonal matrix
\begin{eqnarray} \label{defLAMBDA}
\Lambda := \mbox{Diag} \{\lambda_1, \lambda_2, \ldots ,\lambda_{d_A}\}
\;,
\end{eqnarray}
with $\lambda_1 > \lambda_2 > ... > \lambda_{d_A}$
($d_A$ being the dimension of the system $A$) and $\lambda_{1} - \lambda_{d_A} < 2 \pi$ (the latter condition can always be enforced, exploiting the $2\pi$ periodicity of the phases, by properly relabeling the entries of $\Lambda$). Accordingly we have
\begin{eqnarray} H_A^\Lambda &=& U_A \;
\Lambda \; U_A^\dagger\;,\label{diago} \\
R_A^{\Lambda} &=& U_A \exp[ i \Lambda ] U_A^\dag \;, \label{diago1}
\end{eqnarray}
where now $U_A$ spans the whole set $\mathbb{U}(d_A)$. For each given choice of $\Lambda$~(\ref{defLAMBDA}) we thus define the quantity
\begin{equation} \label{defofDS}
D_{A\rightarrow B}^\Lambda(\rho) : = 1 - \max_{\{ H^\Lambda_A\}} Q(\rho,e^{i H^\Lambda_A} \rho e^{-i H^\Lambda_A})\;,
\end{equation}
the maximization being performed over the set $\{ H_A^\Lambda\}$ of the Hamiltonians of the form~(\ref{diago}). This measure of discord can be interpreted as an extension to generic non-classical correlations of the entanglement of response, which quantifies the change induced on the state of a composite quantum system by local unitary transformations~\cite{entResp}. In this respect another measure of discord has been recently introduced, the Discord of Response (DR)~\cite{discResp}. The DR is defined in terms of a maximization of the Bures distance between the considered state and its evolution under local unitary transformations, the maximization being taken over the set of unitary operators whose fully non-degenerate spectrum consists of roots of unity. Similarly to the DS, the DR accounts for the degree of distinguishability between an assigned quantum state and its evolution under local unitary operators. However, in the case of the DS introduced in this paper, no further limitations, apart from the non-degeneracy, are imposed on the spectrum of the unitary operators.
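As a numerical illustration of definition~(\ref{defofDS}) (a sketch under our own conventions, not part of the original analysis), one can sample Haar-random unitaries $U_A$ in the parametrization~(\ref{diago1}); since random sampling can only underestimate the maximization over $\{H_A^\Lambda\}$, the routine returns an upper bound on the DS. For a maximally entangled two-qubit probe the bound is tight for every sample, since all rotations with the given spectrum are equally detectable:

```python
import numpy as np

rng = np.random.default_rng(7)

def mat_pow(rho, s):
    # Fractional power of a Hermitian PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0.0, None) ** s) @ v.conj().T

def qcb(rho0, rho1, n_s=101):
    # Quantum Chernoff bound Q = min_s Tr[rho0^s rho1^(1-s)], grid over s.
    return min(np.trace(mat_pow(rho0, s) @ mat_pow(rho1, 1.0 - s)).real
               for s in np.linspace(0.0, 1.0, n_s))

def haar_unitary(d):
    # Haar-random unitary from the QR decomposition of a Ginibre matrix.
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def ds_sampled(rho, spectrum, dB, n_samples=200):
    # Sample U_A, build R_A = U_A exp(i Lambda) U_A^dag, keep the worst
    # case (largest Q); 1 - max(Q) upper-bounds the discriminating strength.
    dA = len(spectrum)
    exp_lambda = np.diag(np.exp(1j * np.asarray(spectrum, dtype=float)))
    worst_q = 0.0
    for _ in range(n_samples):
        U = haar_unitary(dA)
        R = np.kron(U @ exp_lambda @ U.conj().T, np.eye(dB))
        worst_q = max(worst_q, qcb(rho, R @ rho @ R.conj().T))
    return 1.0 - worst_q
```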
In the next section we will show that, for all choices of the spectrum $\Lambda$, the functional~(\ref{defofDS}) fulfills all the requirements of a proper measure of non-classical correlations~\cite{MODIREV}.
\section{Properties}\label{sec:DS as a measure}
\label{Sec:Prop}
In this section we show that the discriminating strength (\ref{defofDS}) is a {\it bona fide} measure of non-classicality. We also clarify the connection between our measure and the
LQU measure introduced by Girolami et al. in Ref.~\cite{LQU}. Finally we
provide closed analytical expressions that, in some special cases, allow one to avoid going through the cumbersome optimization over the set $\{ H_A^\Lambda\}$ of the Hamiltonians~(\ref{diago}).
\subsection{DS as a measure of non-classical correlations} \label{subsectionIIIA}
\textbf{Theorem 1:} $D_{A\rightarrow B}^\Lambda(\rho)$ satisfies the following properties:
\begin{enumerate}
\item it vanishes if and only if $\rho$ is a {\it classical-quantum} (CQ) state of the form~(\ref{CQ})
\begin{eqnarray}\label{CQ}
\rho = \sum_i p_i \; |i\rangle_A \langle i| \otimes \rho_B^{(i)}\;,
\end{eqnarray}
with $p_i$ being probabilities, $\{|i\rangle_A\}$
being an orthonormal basis of $A$ and $\{ \rho_B^{(i)}\}$ being a collection of density matrices of $B$ (these are the only configurations for which it is possible to recover partial information on the system by measuring $A$, without introducing any perturbation~\cite{MODIREV});
\item it is invariant under the action of arbitrary local unitary maps, $W_A$ and $V_B$ on $A$ and $B$ respectively, i.e.
\begin{equation}
D_{A\rightarrow B}^\Lambda(\rho)=D_{A\rightarrow B}^\Lambda(W_A \otimes V_B \rho W_A^\dagger \otimes V_B^\dagger)\,;
\end{equation}
\item it is non-increasing under any completely positive, trace-preserving (CPT)~\cite{NIELSEN} map $\Phi_B$ on $B$;
\item it is an entanglement monotone when $\rho$ is pure.
\end{enumerate}
\textit{Proof:} \\
1) $D_{A\rightarrow B}^\Lambda(\rho)=0$ iff there exists at least one element of the set~(\ref{diago}) such that
$Q(\rho,R_A^{\Lambda} \rho R_A^{\Lambda\dag} )=1$.
The latter condition is satisfied iff~\cite{QCB} $\rho=R_A^{\Lambda} \rho R_A^{\Lambda \dag} $.
Since $R_A^{\Lambda}$ is endowed with a non-degenerate spectrum, this is equivalent to stating that $\rho$ and $H_A^\Lambda$ are diagonal in the same basis $\{|i\rangle_{A}\}$ of $\mathcal{H}_A$, so that $\rho$ reduces to a CQ state of the form~(\ref{CQ}).
2)
First note that for every unitary operator $U$ it holds that $(U \rho U^\dagger)^s = U \rho^s U^\dagger$. Then, due to the cyclic property of the trace, $V_B$ cancels out with $V_B^\dagger$ in the computation of $Q$. Finally, $W_A^\dagger H_A^\Lambda W_A$ has the same spectrum as $H^\Lambda_A$, so that the maximization domain in \eqref{defofDS} remains unchanged, along with the maximum value.
3) This follows from the very definition of the QCB. Indeed, the minimum error probability in \eqref{eq:pminerr} is achieved by optimizing over all possible POVM measurements on $(AB)^{\otimes n}$. Any local map $\Phi_B$ on $B$ commutes with the phase transformation determined by $H_A^\Lambda$, and thus can be reabsorbed in the measurement process. This modified measurement is at most as good as the optimal one, implying that the asymptotic error probability, and hence $Q$, cannot decrease. This gives
$D_{A\rightarrow B}^\Lambda(\Phi_B[\rho])\leq D_{A\rightarrow B}^\Lambda(\rho)$.
4) We will prove that if a pure state $|\psi\rangle$ is transformed into another pure state $|\phi\rangle$ by LOCC (Local Operations and Classical Communication), then $D_{A\rightarrow B}^\Lambda(|\phi\rangle)\leq D_{A\rightarrow B}^\Lambda(|\psi\rangle)$. We recall that, due to the purity of the input and output states, a generic LOCC transformation which maps the vector
$|\psi\rangle$ into $|\phi\rangle$
can always be
realized via a single POVM on $A$ followed by a unitary rotation on $B$ conditioned
by the measurement outcome, see e.g.~\cite{NIELSEN}.
In other words, we can write
\begin{eqnarray}\label{eq:phi_psi}
|\phi\rangle \langle \phi|=\sum_{j} ({M_j}_A {V_j}_B)|\psi\rangle \langle \psi|({M_j}^\dagger_A {V_j}^\dagger_B)\;,
\end{eqnarray}
where $\{{M_j}_A\}$ is a set of Kraus operators on $A$ ($\sum_{j}{M_j}^\dagger_A {M_j}_A=\mathbb{I}_A$),
and $\{{V_j}_B\}$ is a set of unitary operators on $B$.
Introducing the set of probabilities $\{p_j\}=\{\brackets{\psi|{M_j}^\dagger_A {M_j}_A}{\psi}{}\}$, from \eqref{eq:phi_psi}
it follows
that for all $j$ corresponding to $p_j \neq 0$ we must have
\begin{eqnarray} \label{NEWEQ}
{M_j}_A {V_j}_B|\psi\rangle = \sqrt{p_j}|\phi\rangle \,\quad \forall \, j \; \, \mbox{s.t.} \; \, p_j \neq 0\;.
\end{eqnarray}
Observe also that for each $H_A^\Lambda$, there exists an $ H_B^\Lambda$ which has the same components in the Schmidt basis of $|\psi\rangle$, that is
\begin{equation}
\brackets{\psi|e^{i H_A^\Lambda } \otimes\mathbb{I}_B}{\psi}{} = \brackets{\psi| \mathbb{I}_A \otimes e^{i H_B^\Lambda } }{\psi}{}\,.
\end{equation}
From Eq.~(\ref{pure}) it then follows that for pure input states the maximization over all $H_A^\Lambda$ is equivalent to a maximization over all $H_B^\Lambda$. This allows us to write
\begin{eqnarray}\label{eq:pure_ds}
D_{A\rightarrow B}^\Lambda(|\psi\rangle_{})
&=& 1 - \max_{\{H_A^\Lambda\}}\big| \brackets{\psi| e^{i H_A^\Lambda }}{\psi}{}\big|^2 \\
&=&1 - \max_{\{H_B^\Lambda\}}\big| \brackets{\psi| e^{i H_B^\Lambda }}{\psi}{}\big|^2 \nonumber\\
&=&1 - \big| \brackets{\psi| e^{i \tilde H_B^\Lambda }}{\psi}{}\big|^2, \label{NEWEQ3}
\end{eqnarray}
where $\tilde H_B^\Lambda$ labels the Hamiltonian for which the maximum is attained.
Along the same lines, we have
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda (|\phi\rangle_{})
&=&
1 - \max_{\{H_B^\Lambda\}} \big| \brackets{\phi| e^{i H_B^\Lambda}}{\phi}{} \big|^2\\ \nonumber
&=&1-
\sum_j \frac{1}{p_j} \max_{\{H_B^\Lambda\}} \big| \brackets{\psi| {M_j}_A^\dagger {M_j}_A e^{i H_B^\Lambda} }{\psi}{} \big|^2 \;,
\end{eqnarray}
where the second identity follows from
Eq.~(\ref{NEWEQ}) by absorbing
the unitary operator $V_{jB}$ into the maximization
over $H_B^\Lambda$. The rhs of the latter expression can be bounded from above by noticing that
the maximum of a given function is never smaller than the function evaluated at any given point.
In particular we have
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda (|\phi\rangle )
\leq 1- \sum_j \frac{1}{p_j} \big| \brackets{\psi| {M_j}_A^\dagger {M_j}_A e^{i \tilde H_B^\Lambda}}{\psi}{} \big|^2 \;,
\end{eqnarray}
where $\tilde H_B^\Lambda$ has been introduced in Eq.~(\ref{NEWEQ3}). Finally, applying the Cauchy-Schwarz inequality we get
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda (|\phi\rangle ) &\leq& 1- \big| \brackets{\psi| \sum_j {M_j}_A^\dagger {M_j}_A e^{i \tilde H_B^\Lambda} }{\psi}{ } \big|^2
\\ \nonumber
&=& 1- \big| \brackets{\psi|e^{i \tilde H_B^\Lambda} }{\psi}{} \big|^2 = D_{A\rightarrow B}^\Lambda (|\psi\rangle),
\end{eqnarray}
hence concluding the proof. $\blacksquare$
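The ``if'' direction of property 1 can also be checked directly: a CQ state~(\ref{CQ}) which is diagonal, on $A$, in the eigenbasis of $H_A^\Lambda$ commutes with $R_A^\Lambda\otimes\mathbb{I}_B$ and is therefore left invariant, so that $Q=1$ and the DS vanishes. A minimal numerical sketch (the probabilities, the states $\rho_B^{(i)}$ and the phases below are arbitrary illustrative choices of ours):

```python
import numpy as np

# A CQ state rho = sum_i p_i |i><i|_A (x) rho_B^(i), with {|i>_A} the
# computational basis of a qubit A and illustrative p_i, rho_B^(i).
p = [0.6, 0.4]
rho_B = [np.diag([1.0, 0.0]),
         np.array([[0.5, 0.5], [0.5, 0.5]])]
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = sum(pi * np.kron(Pi, Bi) for pi, Pi, Bi in zip(p, proj, rho_B))

# An R_A^Lambda diagonal in the same basis (non-degenerate phases):
# it commutes with rho, so the state is left invariant.
R = np.kron(np.diag(np.exp(1j * np.array([-0.7, 0.7]))), np.eye(2))
rho_rotated = R @ rho @ R.conj().T
```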
\subsection{A formal connection between DS and LQU measures} \label{sec:conn}
The LQU measure of non-classical correlations was introduced in Ref.~\cite{LQU}. Given a state $\rho$ of the bipartite system $AB$ it can be computed as
\begin{eqnarray} \label{FF}
{\cal U}^{\Lambda}_{A\rightarrow B} (\rho) &=& \min_{\{H^\Lambda_A\}} {\cal I}(\rho, H^\Lambda_A) \;,
\end{eqnarray}
where
\begin{eqnarray} \label{WYAN}
{\cal I}(\rho, H^\Lambda_A) &:=&
\mbox{Tr}[H_A^\Lambda \rho H_A^\Lambda - \sqrt{\rho} H_A^\Lambda \sqrt{\rho} H_A^\Lambda ]\;,
\end{eqnarray}
is the Wigner-Yanase skew information~\cite{WYS} and where, in analogy with Eq.~(\ref{defofDS}), the minimization is performed over the set $\{ H_A^\Lambda\}$ of the Hamiltonians~(\ref{diago}).
A connection between~(\ref{FF}) and our DS measure follows by taking a formal expansion of Eq.~(\ref{defofDS}) with respect to $\Lambda$, i.e.
\begin{eqnarray}
&&D_{A\rightarrow B}^\Lambda(\rho) = 1 - \max_{\{ H_A^\Lambda\}} \min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \big[ \rho^s e^{i H_A^\Lambda} \rho^{1-s} e^{-i H_A^\Lambda} \big]\nonumber \\
&&= - \max_{\{H_A^\Lambda\}} \min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \big[ \rho^s H_A^\Lambda \rho^{1-s} H_A^\Lambda- H_A^\Lambda \rho H_A^\Lambda \big] +O(\Lambda^3)\nonumber \\
&&= - \max_{\{H_A^\Lambda\}} \mathrm{\text{Tr}} \big[ \sqrt{\rho} H_A^\Lambda \sqrt{\rho} H_A^\Lambda- H_A^\Lambda \rho H_A^\Lambda \big] +O(\Lambda^3) \nonumber \\
&&= \min_{\{H_A^\Lambda\}} \mathrm{\text{Tr}} \big[ H_A^\Lambda \rho H_A^\Lambda - \sqrt{\rho} H_A^\Lambda \sqrt{\rho} H_A^\Lambda \big] +O(\Lambda^3)
\nonumber \\
&&\qquad \qquad \; =\;\; {\cal U}^{\Lambda}_{A\rightarrow B} (\rho) +O(\Lambda^3) \;, \label{formalcon}
\end{eqnarray}
where in the third identity we used the following property.\\
{\bf Lemma 1:}
{\it Given a density matrix $\rho$ and a Hermitian operator $\Theta=\Theta^\dag$, we have }
\begin{eqnarray}
&&\min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \big[ \rho^s \Theta \rho^{1-s} \Theta \big] = \mathrm{\text{Tr}} \big[ \rho^{1/2} \Theta \rho^{1/2} \Theta \big] \;.
\end{eqnarray}
\\
\textit{Proof:} Expressing $\rho$ in terms of its eigenvectors $\{|\psi_\ell\rangle\}$ we can write
\begin{eqnarray}
&& \min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \big[ \rho^s \Theta \rho^{1-s} \Theta \big] = \sum_{\ell} c_\ell |\langle \psi_\ell| \Theta | \psi_\ell\rangle|^2 \nonumber \\
&&\qquad \qquad\quad +\min_{0 \leq s \leq 1} \sum_{\ell<\ell'} (c_\ell^s c_{\ell'}^{1-s} +c_{\ell'}^s c_{\ell}^{1-s} ) \nonumber
|\langle \psi_\ell| \Theta | \psi_{\ell'}\rangle|^2\;,
\end{eqnarray}
where $\{c_\ell\}$ are the eigenvalues of $\rho$, which have been organized in decreasing order (i.e. $c_\ell \geq c_{\ell'}$ for $\ell\leq \ell'$).
The claim then follows by simply noticing that, for all pairs $\ell<\ell'$, the functions $f(s) = c_\ell^s c_{\ell'}^{1-s} +c_{\ell'}^s c_{\ell}^{1-s}$ reach their minimum
at $s=1/2$ (indeed the first derivative $f'(s) = (c_\ell^s c_{\ell'}^{1-s} - c_{\ell'}^s c_{\ell}^{1-s}) \ln(c_\ell/c_{\ell'})$ is non-negative for $s\geq 1/2$ and non-positive for $s\leq 1/2$). $\blacksquare$
\\
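Lemma 1 can also be verified numerically for a randomly drawn density matrix and Hermitian operator (a sketch with our own helper names; the grid over $s$ is a finite approximation of the continuous minimization):

```python
import numpy as np

def mat_pow(rho, s):
    # Fractional power of a Hermitian PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0.0, None) ** s) @ v.conj().T

def f(rho, theta, s):
    # f(s) = Tr[rho^s Theta rho^(1-s) Theta] for Hermitian Theta.
    return np.trace(mat_pow(rho, s) @ theta @
                    mat_pow(rho, 1.0 - s) @ theta).real

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = A @ A.conj().T           # random full-rank density matrix
rho /= np.trace(rho).real
theta = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
theta = theta + theta.conj().T  # make it Hermitian
```

The lemma predicts that the minimum of $f(s)$ on $[0,1]$ sits at $s=1/2$, which is what a grid scan reproduces.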
Equation~(\ref{formalcon}) establishes a formal connection between our DS measure and the LQU measure, providing hence a clear operational interpretation for the latter.
Specifically, the LQU can be seen as the DS measure of a discrimination process where $\Lambda$ is a small quantity, i.e. where the allowed
rotations $R_A^{\Lambda}$ of Eq.~(\ref{diago1}) are small perturbations of the identity operator.
As we shall see in Sec.~\ref{sec:DS_qubit_qudit}, the relation between the DS and the LQU becomes even tighter when $A$ is a qubit system: indeed, in this special case,
independently of the dimensionality of $B$, the two measures are proportional.
\subsection{Dependence upon $\Lambda$}
According to Sec.~\ref{subsectionIIIA} all choices of the matrices $\Lambda$ as in Eq.~(\ref{defLAMBDA}) provide a proper measure of non-classicality for the states $\rho$.
Even though one is tempted to conjecture that the case where $\Lambda$ has a harmonic spectrum (i.e. $\lambda_k-\lambda_{k-1} = \mbox{const}$ for all $k=2, 3, \cdots, d_A$)
should be somehow optimal (i.e. yield a more accurate measure of non-classical correlations),
the relations among these different DSs are at present not clear, and indeed it might be possible that no absolute ordering can be established among them (this is
very similar to what happens for the LQU of Ref.~\cite{LQU}).
Here we simply notice that, since the QCB is invariant under constant shifts of the local Hamiltonian spectrum, i.e.
$Q(\rho,e^{i H^\Lambda_A} \rho e^{-i H^\Lambda_A})=Q(\rho,e^{i (H^\Lambda_A + b \mathbb{I}_A)} \rho e^{-i (H^\Lambda_A + b \mathbb{I}_A)})$,
for all input states $\rho$ and for all $b\in \mathbb{R}$, we can always add a constant to $\Lambda$ at convenience without
affecting the corresponding DS measure, i.e.
\begin{equation} \label{defofDS111}
D_{A\rightarrow B}^\Lambda(\rho) = D_{A\rightarrow B}^{\Lambda+b}(\rho) \;, \quad \forall \rho\;.
\end{equation}
\subsection{Discriminating strength for pure states}
Let $\ket{\psi}$ be a pure state of $AB$ with Schmidt decomposition~\cite{NIELSEN} given by
\begin{equation}\label{eq:Schmidt}
\ket{\psi}=\sum_{j=1}^{\min\{d_A, d_B\}} \sqrt{q_j}\ket{j}_A\ket{j}_B\,,
\end{equation}
where $\{ |j\rangle_A\}$ and $\{ |j\rangle_B\}$ are orthonormal
sets of $A$ and $B$, respectively ($d_{A,B}$ being the dimensionality of $A,B$). From Eq.~(\ref{eq:pure_ds}) it follows that in this case
the discriminating strength can be written as
\begin{eqnarray}\label{eq:Dpure1}
D_{A\rightarrow B}^{\Lambda}(\ket{\psi})&=&1-\max_{\{H_A^\Lambda\}} \left|\sum_{j}q_j\bras{j}{A}e^{i H_A^\Lambda}\kets{j}{A}\right|^2\nonumber\\&=&1-\max_{\{H_A^\Lambda\}} \left|\mathrm{\text{Tr}}[\rho_A e^{i H_A^\Lambda}]\right|^2\;,
\end{eqnarray}
where $\rho_A=\mathrm{\text{Tr}}_B[\ketbras{\psi}{\psi}{}]$ is the reduced state of $|{\psi}\rangle$ on $\mathcal{H}_A$.
From the spectral decomposition~(\ref{diago}) of $H_A^\Lambda$,
one can perform the trace in~\eqref{eq:Dpure1} over the eigenbasis of $\Lambda$
and get
\begin{eqnarray} \label{eqnew1}
D_{A\rightarrow B}^\Lambda (\ket{\psi}) =
1-\max_{\{M\}} \left|\sum_{k}\left(\sum_{j}M^{(k|j)} q_j\right) e^{i \lambda_k}\right|^2,
\end{eqnarray}
where now the maximization is performed over the set of doubly stochastic matrices $M$ with elements $M^{(k|j)}=\bras{\lambda_k}{A}U_A^\dagger\kets{j}{A}\bra{j}U_A\kets{\lambda_k}{A}$. We recall that, according to Birkhoff's theorem~\cite{Birkhoff}, $M$ can be written
as a convex combination of permutation matrices $\Pi_\alpha$ (corresponding to the permutations $\pi_\alpha$), i.e.
\begin{equation}
M = \sum_{\alpha} p_\alpha \Pi_\alpha \quad \mbox{with} \quad \sum_\alpha p_\alpha=1\,.
\end{equation}
Therefore, we can rewrite Eq.~(\ref{eqnew1}) as
\begin{eqnarray}
\label{eq:permutations}
D_{A\rightarrow B}^\Lambda(\ket{\psi})&=&1-\max_{\{p_\alpha\},\{\Pi_\alpha\} } \left|\sum_{\alpha,k }p_\alpha \sum_{j} \Pi_\alpha^{(k|j)}q_{j} \; e^{i \lambda_k }\right|^2 \nonumber \\
&=&1-\max_{\{p_\alpha\},\{\pi_\alpha\} } \left|\sum_{\alpha}p_\alpha \sum_{k} q_{\pi_\alpha[k]} e^{i \lambda_k }\right|^2\,. \nonumber\\
\end{eqnarray}
Note that if $d_B < d_A$, the number of Schmidt coefficients is smaller than the number of eigenvalues $\lambda_k$. In this case, the expressions above hold as long as one considers the state \eqref{eq:Schmidt} as having $d_A - d_B$ Schmidt coefficients equal to zero, i.e. one must apply the permutations to the set $\{ q_1, ..., q_{d_B}, q_{d_B+1}=0, ..., q_{d_A}=0 \}$.
By convexity it follows that the optimization over
the set $\{p_\alpha\}$ in \eqref{eq:permutations} can be explicitly carried out by
choosing probability sets $\{p_\alpha\}$ with only a single non-zero element (thus equal to $1$), from which we finally obtain
\begin{equation} \label{NEWNEW1}
D_{A\rightarrow B}^\Lambda (|\psi\rangle )=1-\max_{\pi_\alpha} \left|\sum_{k}q_{\pi_\alpha[k]}e^{i\lambda_{k}}\right|^2\;,
\end{equation}
where the maximization over the infinite set of Hamiltonians $H_A^\Lambda$ required by its definition (see Eq.~\eqref{defofDS}) has been replaced by a maximization over the group of permutations $\{\pi_\alpha\}$ on the set of the Schmidt coefficients $q_j$.
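Eq.~(\ref{NEWNEW1}) is straightforward to evaluate by brute force when $d_A$ is small, since the permutation group has only $d_A!$ elements. A Python sketch (our own illustration, not part of the original derivation):

```python
import numpy as np
from itertools import permutations

def ds_pure(q, lambdas):
    # D = 1 - max_pi |sum_k q_{pi[k]} exp(i lambda_k)|^2, the maximum being
    # taken over all permutations pi of the (zero-padded) Schmidt coefficients.
    phases = np.exp(1j * np.asarray(lambdas, dtype=float))
    return 1.0 - max(abs(np.dot(perm, phases)) ** 2
                     for perm in permutations(np.asarray(q, dtype=float)))
```

For a qubit probe with $\Lambda=\{-\lambda,\lambda\}$ this reproduces $D=4q_0q_1\sin^2\lambda=[1-(q_1-q_0)^2]\sin^2\lambda$, in agreement with the qubit-qudit expression of Sec.~\ref{sec:DS_qubit_qudit}, and it vanishes for separable pure states.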
\subsubsection{Hamiltonians with harmonic spectrum}
If the spectrum of the Hamiltonian $H_A^\Lambda$ is harmonic with fundamental frequency $\omega=|\lambda_{i}-\lambda_{i+1}| \leq 2\pi/d_A$, Eq.~\eqref{NEWNEW1} can be further simplified. More precisely, let us relabel the set of eigenvalues $\{\lambda_i\}$ as
\begin{eqnarray}
&\left\{\lambda_{1-[(d_A+1)/2]}, \lambda_{2-[(d_A+1)/2]}, \ldots, \lambda_{d_A-[(d_A+1)/2]} \right \}=\nonumber\\\nonumber\\ &\left \{\left(1\!-\! \left[\frac{d_A+1}{2}\right]\right) \! \omega, \!\left(2 \!-\! \left[\frac{d_A+1}{2}\right]\right)\! \omega, \ldots, \!\left(d_A \!-\! \left[\frac{d_A+1}{2}\right]\right)\! \omega \right \}, \quad\;
\end{eqnarray}
where $[x]$ stands for the integer part of the real parameter $x$.
Let us also reorder the Schmidt coefficients of $\ket{\psi}$ as $q_1 \geq q_2 \geq \ldots \geq q_{d_A}$ (where again some of them must be set to zero if $d_B < d_A$). By representing the phases $e^{i \lambda_k}$ as unit vectors in the complex plane, one finds that the permutation $\pi$ maximizing the sum
in \eqref{NEWNEW1}
is the one which associates $q_1$ to $\lambda_0=0$, $q_2$ to $\lambda_{1}=\omega$, $q_3$ to $\lambda_{-1}=-\omega$, $q_4$ to $\lambda_2=2\omega$, $q_5$ to $\lambda_{-2}=-2\omega$, etc., yielding
\begin{eqnarray}
&&D_{A\rightarrow B}^\Lambda(\ket{\psi}) \\ \nonumber
&&= 1-\left| \sum_{n=0}^{[(d_A+1)/2]-1} \!\!\!\!\!\!\!\!\!\! q_{2n+1} e^{-i n \omega}+ \!\!\!\!\! \sum_{n=1}^{d_A-[(d_A+1)/2]} \!\!\!\!\!\!\!\!\!\! q_{2n} e^{i n \omega}\right|^2\,.
\end{eqnarray}
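For $d_A=3$ the zig-zag assignment above can be checked against a brute-force maximization over all $3!$ permutations (the numerical values of $\omega$ and of the Schmidt coefficients below are arbitrary illustrative choices of ours, subject to $\omega\leq 2\pi/3$):

```python
import numpy as np
from itertools import permutations

# Harmonic spectrum for d_A = 3: relabeled eigenvalues {-w, 0, w}.
w = 1.5
lambdas = np.array([-w, 0.0, w])
q = np.array([0.5, 0.3, 0.2])  # Schmidt coefficients, q1 >= q2 >= q3

# Zig-zag assignment: q1 -> 0, q2 -> +w, q3 -> -w.
d_closed = 1.0 - abs(q[0] + q[1] * np.exp(1j * w)
                     + q[2] * np.exp(-1j * w)) ** 2

# Brute-force maximization over all permutations of the q's.
d_brute = 1.0 - max(abs(np.dot(p, np.exp(1j * lambdas))) ** 2
                    for p in permutations(q))
```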
\subsection{Discriminating strength for qubit-qudit systems}\label{sec:DS_qubit_qudit}
We conclude this section by considering the case in which subsystem $A$ is a single qubit, and determine a closed expression for the discriminating strength.
Exploiting the gauge invariance~(\ref{defofDS111}) we set,
without loss of generality, $\Lambda=\mbox{Diag}\{-\lambda,\lambda\}$ and parameterize the set of local Hamiltonians acting on $A$ as
$H_A^\Lambda = \lambda\; \hat n \cdot \vec \sigma_A$, where
$\hat n$ is a unit vector on the Bloch sphere and $\vec{\sigma}_A=(\sigma_{A,1},\sigma_{A,2}, \sigma_{A,3})$ is the vector formed by the Pauli operators.
In what follows we will set $\sigma_A^{(\hat{n})}=\hat n \cdot \vec \sigma_A$.
Under these hypotheses, the QCB can be written as
\begin{eqnarray}\label{eq:QCB_qubitTrace}
Q(\rho_0,\rho_1) &=& \min_{s\in[0,1]}
\mathrm{\text{Tr}} \big[ \rho^s e^{i \lambda \sigma_A^{(\hat{n})} } \rho^{1-s} e^{-i \lambda \sigma_A^{(\hat{n})}} \big]\nonumber \\
& =& \cos^2\lambda + \min_{s\in[0,1]} \mbox{Tr} [\rho^{s} \sigma_A^{(\hat{n})} \rho^{1-s} \sigma_A^{(\hat{n})}] \; \sin^2\lambda \nonumber \\
& =& \cos^2\lambda + \mbox{Tr} [\rho^{1/2} \sigma_A^{(\hat{n})} \rho^{1/2} \sigma_A^{(\hat{n})}] \; \sin^2\lambda\nonumber \;,
\end{eqnarray}
where in the last step we have used the fact that $\sigma_A^{(\hat{n})}$ is Hermitian, together with Lemma 1, to conclude that the minimization over $s$ is solved by $s=1/2$
(see also Ref.~\cite{QCBqubitAndGauss}, footnote 5 on page 11).
Substituting this into Eq.~(\ref{defofDS}) we finally obtain
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda(\rho) &=& \min_{\hat{n}} \big( 1 - \; \mbox{Tr} [\rho^{1/2}
\sigma_A^{(\hat{n})} \rho^{1/2} \sigma_A^{(\hat{n})}] \big) \; \sin^2 \lambda \nonumber \\
&=& {\cal U}_{A\rightarrow B}^\Lambda (\rho)\; \frac{\sin^2 \lambda}{\lambda^2} \;, \label{eq:DS_LQU}\end{eqnarray}
where
\begin{eqnarray}
{\cal U}_{A\rightarrow B}^\Lambda(\rho) =
\lambda^2 \min_{\hat{n}} \big( 1 - \; \mbox{Tr} [\rho^{1/2}
\sigma_A^{(\hat{n})} \rho^{1/2} \sigma_A^{(\hat{n})}] \big) \;,
\end{eqnarray}
is the LQU measure for a qubit-qudit system~\cite{LQU} -- see Eqs.~(\ref{FF}) and (\ref{WYAN}). The identity~(\ref{eq:DS_LQU}) strengthens the formal connection between DS and LQU detailed in Sec.~\ref{sec:conn} and
provides a simple way to compute the DS for qubit-qudit systems. Indeed using the results of Ref.~\cite{LQU} it follows that
\begin{eqnarray}\label{EQNUOVA}
D_{A\rightarrow B}^\Lambda(\rho) &=& [1 - \xi_{\max}(W)]\; \sin^2\lambda \;,
\end{eqnarray}
with $\xi_{\max}(W)$ being the maximum eigenvalue of the $3\times 3$ matrix $W$ whose elements are given by
\begin{eqnarray}\label{eq:W}
W_{\alpha\beta} = \mbox{Tr}[ \sqrt{\rho}\; \sigma_{A,\alpha}
\sqrt{\rho} \; \sigma_{A,\beta}]\;.
\end{eqnarray}
If $\rho$ is pure, $\rho=\ketbras{\psi}{\psi}{}$, the discriminating strength reduces to
\begin{eqnarray}\label{eq:qubitquditpure}
D_{A\rightarrow B}^\Lambda(\left| \psi \right\rangle_{ }) = [1 - (q_1 - q_0)^2]\; \sin^2\lambda \,,
\end{eqnarray}
where $q_0$ and $q_1$ are the Schmidt coefficients of $|\psi\rangle$. In particular, notice that for separable pure states we have $|q_1-q_0| =1$ and the DS vanishes (see property 1 in Sec.~\ref{sec:DS as a measure}). On the other hand, for maximally entangled qubit-qudit states we have $q_0=q_1=1/2$ and the DS reaches its
maximum value $\sin^2\lambda$ (see property 4).\\
\section{Maximization of the discriminating strength over the set of separable states}\label{sec:sep}
The main role played by the discord in the realm of quantum mechanics is to reveal the presence of those quantum correlations which cannot be classified as quantum entanglement. Here, we investigate the behavior of the discriminating strength when computed on the set of separable states $\rho^{(\mathrm{sep})}$ (states with zero
entanglement).
We will prove that for all qubit-qudit systems ($d_A=2$ and $d_B \geq 2$), the maximum of the DS over the set of separable states is reached on the subset of pure {\it Quantum-Classical} (pQC) states, given by convex combinations of pure (not necessarily orthogonal) states $\{ \kets{\psi_k}{A}\}$ on $A$ correlated with an orthonormal basis $\{\kets{k}{B}\}$ on $B$, i.e.
\begin{equation}\label{eq:pQCstates}
\rho^{(\mathrm{pQC})}=\sum_k p_k\; \ketbras{\psi_{k}}{\psi_{k}}{A}\otimes \ketbras{k}{k}{B}\;,
\end{equation}
the $\{ p_k\}$ being probabilities.
For the case $d_B\geq 3$ we have an analytical proof of this fact, which allows us to solve the maximization and show that the following
identity holds
\begin{eqnarray} \max_{\rho^{\mathrm{(sep)}}} D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(sep)}}) &=&
\max_{\rho^{\mathrm{(pQC)}}} D_{A\rightarrow B}^\Lambda (\rho^{\mathrm{(pQC)}}) \nonumber \\
&=& \frac{2}{3} \sin^2\lambda \;,
\label{ALLDIM}
\end{eqnarray}
(see Sec.~\ref{ssec:optimal} for the case $d_B=\infty$ and Sec.~\ref{ssec:boundf} for the case $d_B\geq 3$).
For $d_B=2$ (i.e. the qubit-qubit case), the optimality of the pure-QC states can instead only be verified numerically, showing that
\begin{eqnarray} \max_{\rho^{\mathrm{(sep)}}} D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(sep)}}) &=&
\max_{\rho^{\mathrm{(pQC)}}} D_{A\rightarrow B}^\Lambda (\rho^{\mathrm{(pQC)}})
\nonumber \\&=& \frac{1}{2} \sin^2\lambda \;, \label{2DIM}
\end{eqnarray}
(see Sec.~\ref{ssec:qubqub}).
\subsection{pure-QC states maximize the DS over the set of separable states: case $d_B=\infty$}\label{ssec:optimal}
A generic
separable state can always be written as
\begin{eqnarray}
\rho^{(\mathrm{sep})} \label{sepstate}
&=& \sum_{k} p_k \ketbras{\psi_{k}}{\psi_{k}}{A} \otimes \rho_B^{(k)}, \end{eqnarray}
where $\{|\psi_k\rangle_A\}$ are (possibly non-orthogonal)
pure states on $\mathcal{H}_A$ and $\{ \rho_B^{(k)}\}$ is a set of density matrices on $\mathcal{H}_B$, while $\{ p_k\}$ are probabilities.
From the joint concavity of the QCB~(\ref{eq:QCB})~\cite{QCB} and from the cyclic property of the trace, we have
\begin{widetext}
\begin{eqnarray}
Q(\rho^{(\mathrm{sep})} ,e^{i H_A^\Lambda} \rho^{(\mathrm{sep})} e^{-i H_A^\Lambda})
& \geq &\sum_k p_k Q( \ketbras{\psi_{k}}{\psi_{k}}{A}\otimes \rho_B^{(k)} ,e^{i H_A^\Lambda} \ketbras{\psi_{k}}{\psi_{k}}{A} e^{-i H_A^\Lambda}\otimes \rho_B^{(k)}) \nonumber
\\ &=& \sum_k p_k Q(\ketbras{\psi_{k}}{\psi_{k}}{A},e^{i H_A^\Lambda} \ketbras{\psi_{k}}{\psi_{k}}{A}e^{-i H_A^\Lambda})=\sum_k p_k |\bras{\psi_k}{A}e^{i H_A^\Lambda}\ket{\psi_k}_A|^2 \;.
\end{eqnarray}
\end{widetext}
By direct calculation, one can easily verify that the above inequality is saturated by the pure-QC state $\rho^{\mathrm{(pQC)}}$ of Eq.~(\ref{eq:pQCstates}) obtained by replacing the
density matrices $\rho_B^{(k)}$ of~(\ref{sepstate}) with orthogonal projectors $|k\rangle_B\langle k|$
(notice that this is always possible because $B$ is infinite dimensional).
Indeed in this case we have
\begin{widetext}
\begin{align}\label{eq:DSpqc}
Q(\rho^{\mathrm{(pQC)}}, e^{i H_A^\Lambda}& \rho^{\mathrm{(pQC)}} e^{-i H_A^\Lambda})\nonumber \\
& = \min_{0 \leq s \leq 1} \mathrm{\text{Tr}} \left[ \left(\sum_k p_k \ketbras{\psi_{k}}{\psi_{k}}{A}\otimes \ketbras{k}{k}{B}\right)^s \left(\sum_{k'} p_{k'} e^{i H_A^\Lambda} \ketbras{\psi_{k'}}{\psi_{k'}}{A}e^{-i H_A^\Lambda}\otimes \ketbras{k'}{k'}{B}\right)^{1-s} \right] \nonumber \\
& = \sum_k p_k |\bras{\psi_k}{A}e^{i H_A^\Lambda}\ket{\psi_k}_A|^2 \;.
\end{align}
\end{widetext}
Since $Q(\rho^{(\mathrm{sep})} ,e^{i H_A^\Lambda} \rho^{(\mathrm{sep})} e^{-i H_A^\Lambda})$ is no smaller than $Q(\rho^{\mathrm{(pQC)}}, e^{i H_A^\Lambda} \rho^{\mathrm{(pQC)}} e^{-i H_A^\Lambda})$ for each choice of $H_A^\Lambda$, we conclude that
\begin{equation}\label{eq:sep_pQC}
D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(sep)}})\leq D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(pQC)}})\,.
\end{equation}
Next we show that
the maximum DS attainable over the set of pQC states (and hence over the set of separable states) cannot be larger than $\frac{2}{3}\sin^2\lambda$.
To do so let us first consider the \textit{uniform} pQC state $\rho_{u,d}^{\mathrm{(pQC)}}$,
\begin{eqnarray} \label{OPTM}
\rho_{u,d}^{\mathrm{(pQC)}} = \frac{1}{d} \sum_{j=0}^{d-1} \; |\psi_{j}\rangle_A\langle \psi_{j} |
\otimes |j \rangle \langle j | \;,
\end{eqnarray}
characterized by $d$ pure states $\{ |\psi_j\rangle_A\}$ whose
corresponding vectors $\{ \hat{r}_j\}$ in the Bloch sphere are assumed to be
uniformly distributed (i.e. they identify the $d$ vertices of
a regular polyhedron). From Eq.~\eqref{eq:DSpqc} we have
\begin{eqnarray} D_{A\rightarrow B}^\Lambda(\rho_{u,d}^{(\mathrm{pQC})})\!\!\!&=& \!\!\!\min_{\{H_A^\Lambda\}}\sum_{j=0}^{d-1}\frac{1}{d} \left(1- \left | \bras{\psi_j} {A} e^{i H_A^\Lambda}\ket{\psi_j}_A\right|^2\right)\nonumber \\
&=& \!\!\! \min_{\{\hat{n}\}}\sum_{j=0}^{d-1}\frac{1}{d} \left[1 \!-\! \cos^2\lambda \!- \! \sin^2\lambda \; (\hat{r}_j\cdot\hat{n})^2\right]\nonumber \\
&=& \!\!\! \big( 1 - \max_{\{\hat{n}\}} \frac{1}{d} \sum_{j=0}^{d-1} \cos^2\theta_j \big)\;\sin^2\lambda \label{eq:pQC_QCB}\;,
\end{eqnarray}
where we set $H_A^\Lambda=\lambda \sigma_A^{(\hat{n})}$ (see Sec.~\ref{sec:DS_qubit_qudit}) and introduced $\cos\theta_j = \hat{n}\cdot \hat{r}_j$. In the limit $d\rightarrow \infty$ the average $\frac{1}{d}\sum_{j=0}^{d-1} \cos^2\theta_j$ converges to an integral over the solid angle, which does not depend on the
orientation of $\hat{n}$, i.e.
\begin{eqnarray}
\lim_{d\rightarrow \infty} \frac{1}{d}\sum_{j=0}^{d-1} \cos^2\theta_j
&=&
\frac{1}{4\pi} \int d\Omega \cos^2\theta\\ \nonumber
& =& \frac{1}{4\pi}
\int_0^{2\pi} \!\!\!\! d\phi \int_0^\pi \!\!\! d\theta \sin \theta \cos^2\theta =\frac{1}{3} \;.
\end{eqnarray}
Therefore we have
\begin{eqnarray}\label{dueduetrefffd}
D_{A\rightarrow B}^\Lambda(\rho_{u,\infty}^{\mathrm{(pQC)}}) = \frac{2}{3}\sin^2\lambda.
\end{eqnarray}
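The $d\rightarrow\infty$ limit above can be illustrated numerically. In this sketch (ours; it samples random directions on the sphere instead of polyhedron vertices, and the sample size is illustrative), the maximum of $\frac{1}{d}\sum_j(\hat r_j\cdot\hat n)^2$ over unit vectors $\hat n$ is obtained as the largest eigenvalue of the second-moment matrix $M=\frac{1}{d}\sum_j \hat r_j\hat r_j^{\,T}$, which tends to $1/3$:

```python
import numpy as np

rng = np.random.default_rng(2)

# d Bloch vectors sampled uniformly on the sphere (a stand-in for the
# vertices of a large regular polyhedron)
d = 200_000
r = rng.normal(size=(d, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)

# (1/d) sum_j (r_j . n)^2 = n^T M n with M = (1/d) sum_j r_j r_j^T, so its
# maximum over unit vectors n is the largest eigenvalue of M
M = r.T @ r / d
max_avg = np.linalg.eigvalsh(M).max()

lam = 0.8
ds_uniform = (1 - max_avg) * np.sin(lam)**2
assert abs(max_avg - 1/3) < 5e-3                     # solid-angle average
assert abs(ds_uniform - (2/3)*np.sin(lam)**2) < 5e-3  # DS of rho_{u,infty}
```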
To prove that the above quantity is also the maximum value of DS over
the whole set of pure-QC states~\eqref{eq:pQCstates} we notice that, proceeding as in Eq.~(\ref{eq:pQC_QCB}),
we can write
\begin{eqnarray}\nonumber
D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(pQC)}}) \! &=& \! \big(1-\max_{\{\hat{n}\}}\sum_{j=0}^{d-1} p_j(\hat{r}_j\cdot\hat{n})^2\big)\! \sin^2\lambda \\
&=& \big(1-\sum_{j=0}^{d-1} p_j(\hat{r}_j\cdot\hat{n}_*)^2\big)\; \sin^2\lambda \;, \label{dueduetrefdfdf2}
\end{eqnarray}
where $\hat{n}_*$ indicates the direction that saturates the maximization.
This vector is clearly a function of the state $\rho^{\mathrm{(pQC)}}$, i.e. it depends on the
probabilities $p_j$ and on the vectors $\hat{r}_j$. If we define the state $\rho_R^{\mathrm{(pQC)}}$, obtained from
$\rho^{\mathrm{(pQC)}}$ by applying to the vectors $\hat{r}_j$
a rotation matrix $R \in SO(3)$, we have
\begin{equation}\label{eq:rotatedDS}
D_{A\rightarrow B}^\Lambda(\rho_R^{\mathrm{(pQC)}}) = D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(pQC)}}),
\end{equation}
where the vector saturating the maximization in Eq.~\eqref{dueduetrefdfdf2} now corresponds to $R \hat{n}_*$.
By introducing an ancillary system $C$, associated with the Hilbert space $\mathcal{H}_C$, and a set of $N$ three-dimensional rotations $\{ R_k \}$ mapping each vertex of a regular polyhedron with $N$ vertices onto every vertex (including itself), one can define the density matrix
\begin{eqnarray}\label{expansion1}
&&\bar{\rho}^{\mathrm{(pQC)}}_{ABC}
\nonumber\\ &&\quad:=\frac{1}{N}\!\sum_{k=0}^{N-1} \!\sum_{j=0}^{d-1} p_j |\psi_{j}(R_k)\rangle_A\langle \psi_{j}(R_k) | \!\otimes\! |j \rangle_B \langle j | \!\otimes\! |k \rangle_C \langle k | \nonumber\\
&&\quad\;=\!\frac{1}{N}\sum_{k=0}^{N-1} \! \rho_{R_k}^{\mathrm{(pQC)}} \!\otimes\! |k \rangle_C \langle k |
\end{eqnarray}
where
\begin{equation}
\rho_{R_k}^{\mathrm{(pQC)}}:=\sum_{j=0}^{d-1} p_j |\psi_{j}(R_k)\rangle_A\langle \psi_{j}(R_k) | \!\otimes\! |j \rangle_B \langle j |\,.
\end{equation}
On the other hand $\bar{\rho}^{\mathrm{(pQC)}}_{ABC}$ can be also arranged as
\begin{equation}\label{expansion2}
\bar{\rho}^{\mathrm{(pQC)}}_{ABC}=\!\sum_{j=0}^{d-1} p_j \rho_{u,N,j}^{\mathrm{(pQC)}} \!\otimes\! |j \rangle_B \langle j |,
\end{equation}
where the density matrices $\rho_{u,N,j}^{\mathrm{(pQC)}}$, on $\mathcal{H}_{A} \otimes \mathcal{H}_{C}$, are defined as
\begin{equation}
\rho_{u,N,j}^{\mathrm{(pQC)}}:=\frac{1}{N}\!\sum_{k=0}^{N-1} |\psi_{j}(R_k)\rangle_A\langle \psi_{j}(R_k) | \!\otimes\! |k \rangle_C \langle k|\,.
\end{equation}
It is important to observe that since $B$ is infinite dimensional, there always exists a state $\bar{\rho}^{\mathrm{(pQC)}}$ of $AB$ which is fully isomorphic to $\bar{\rho}^{\mathrm{(pQC)}}_{ABC}$, from which it follows
\begin{equation}
Q(\bar{\rho}^{\mathrm{(pQC)}},\hat n) = Q(\bar{\rho}^{\mathrm{(pQC)}}_{ABC},\hat n)\,,
\end{equation}
where
\begin{equation}
Q(\rho,\hat n) := Q\left(\rho, e^{i \lambda \sigma_A^{(\hat{n})} } \rho e^{-i \lambda \sigma_A^{(\hat{n})}}\right)\,.
\end{equation}
Thanks to expansion \eqref{expansion1}, we get
\begin{eqnarray}
Q(\bar{\rho}^{\mathrm{(pQC)}},\hat n)=\frac{1}{N} \sum_{k=0}^{N-1} Q({\rho}_{R_k}^{\mathrm{(pQC)}},\hat n),
\end{eqnarray}
from which, taking the maximum over $\hat n$, we obtain
\begin{eqnarray}\label{dueffeee}
\max_{\{\hat n\}} \sum_{k=0}^{N-1} Q({\rho}_{R_k}^{\mathrm{(pQC)}},\hat n) \leq \sum_{k=0}^{N-1} \max_{ \{\hat n\} }Q({\rho}_{R_k}^{\mathrm{(pQC)}},\hat n).
\end{eqnarray}
Finally, since for all $k$, ${\rho}_{R_k}^{\mathrm{(pQC)}}$ and ${\rho}^{\mathrm{(pQC)}}$ share the same DS (see Eq.~\eqref{eq:rotatedDS}), we get
\begin{equation}\label{res1}
D_{A\rightarrow B}^\Lambda(\bar{\rho}^{\mathrm{(pQC)}}) \geq D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(pQC)}}).
\end{equation}
On the other hand, thanks to expansion \eqref{expansion2} we have
\begin{eqnarray}
Q(\bar{\rho}^{\mathrm{(pQC)}},\hat n) =\sum_{j=0}^{d-1} p_j Q \left( \rho_{u,N,j}^{\mathrm{(pQC)}},\hat n \right)\,,
\end{eqnarray}
and therefore
\begin{eqnarray}\label{dueffeee2}
\max_{\{\hat n\}} Q(\bar{\rho}^{\mathrm{(pQC)}},\hat n) \leq \sum_{j=0}^{d-1} p_j \max_{\{\hat n\}} Q\!\! \left( \rho_{u,N,j}^{\mathrm{(pQC)}},\hat n \right)\!.
\end{eqnarray}
The above inequality is saturated in the limit $N \rightarrow \infty$, where each $\rho_{u,N,j}^{\mathrm{(pQC)}}$ approaches the state $\rho_{u,\infty}^{\mathrm{(pQC)}}$ characterized by
\begin{equation}
Q \big(\rho_{u,\infty}^{\mathrm{(pQC)}}, \hat n\big) = \cos^2\lambda + \frac{1}{3}\sin^2\lambda, \quad \forall \hat{n}\,
\end{equation}
(see Eq.~(\ref{dueduetrefffd})).
We therefore have
\begin{equation}\label{res2}
D_{A\rightarrow B}^\Lambda(\bar{\rho}^{\mathrm{(pQC)}}) \stackrel{N \rightarrow \infty}{=} \sum_{j=0}^{d-1} p_j \frac{2}{3}\sin^2\lambda= \frac{2}{3}\sin^2\lambda.
\end{equation}
The identity~(\ref{ALLDIM}) finally follows by
combining Eqs.~\eqref{eq:sep_pQC}, \eqref{res1} and \eqref{res2}.
\subsection{pure-QC states maximize the DS over the set of separable states: case $d_B\geq 3$}\label{ssec:boundf}
If $\mathcal{H}_B$ is finite dimensional we are no longer guaranteed that a generic separable state can be mapped into a pure-QC state; relation~\eqref{eq:sep_pQC} could thus in principle be violated. However, by embedding $\mathcal{H}_B$ into a larger, infinite-dimensional system one can still invoke the result of the previous subsection to conclude that
\begin{eqnarray} \max_{\rho^{\mathrm{(sep)}}} D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(sep)}})
\leq \frac{2}{3} \sin^2\lambda \;.
\label{ALLDIMineq}
\end{eqnarray}
To prove Eq.~(\ref{ALLDIM}) it is hence sufficient to produce an example of a pure-QC state (\ref{eq:pQCstates})
that achieves such upper bound. Of course the sequence of uniform states (\ref{OPTM})
cannot be used for this purpose because now $d_B$ is explicitly assumed to be finite. Instead
we take
\begin{eqnarray} \label{ttff}
\rho^{(p\mathrm{QC})} = \sum_{j=0,1,2} p_j\; |\psi_j\rangle_A \langle \psi_j|
\otimes |j\rangle_B\langle j|\;,
\end{eqnarray}
with $|0\rangle_B$, $|1\rangle_B$, $|2\rangle_B$ being orthonormal elements of $\mathcal{H}_B$,
which is a properly defined pQC state whenever the dimension $d_B$ is at least 3.
As in the first line of Eq.~(\ref{dueduetrefdfdf2}), its associated
discriminating strength can then be computed as
\begin{eqnarray}\label{eq:dueduetre}
D_{A\rightarrow B}^\Lambda(\rho^{(\mathrm{pQC})})
= \big(1-\max_{\hat{n}}\sum_{j=0}^2 p_j(\hat{r}_j\cdot\hat{n})^2\big)\; \sin^2\lambda \,,
\end{eqnarray}
where $\hat{r}_j$ is the vector in the Bloch sphere of the state $\ket{\psi_j}$ while $H_A^\Lambda=\lambda \sigma_A^{(\hat{n})}$.
We are interested in the case where $\{\hat{r}_j\}$
is an orthonormal triplet (i.e. the three vectors identify three Cartesian axes in 3D space). Notice that this does not mean that the corresponding states are orthogonal: rather, they are mutually
unbiased states (e.g. $|\psi_0\rangle_A=|0\rangle_A$,
$|\psi_1\rangle_A=|+\rangle_A=(\ket{0}_A + \ket{1}_A)/\sqrt{2}$, $|\psi_2\rangle_A=|\times \rangle_A = (|0\rangle_A + i |1\rangle_A)/\sqrt{2}$), so that (\ref{ttff}) corresponds to an (unbalanced) Generalized B92 (GB92) state~\cite{B92}. From the normalization of the vector $\hat{n}$ it follows that the squared scalar products $(\hat{n}\cdot \hat{r}_j)^2$
define a set of probabilities, since
\begin{eqnarray}
\sum_{j=0,1,2} (\hat{n}\cdot \hat{r}_j)^2 = |\hat{n}|^2 =1
\;.
\end{eqnarray}
Thus, the maximization involved in (\ref{eq:dueduetre}) can be trivially performed by choosing $\hat n$ parallel to the $\hat r_j$ associated to the maximum weight $p_j$. This gives
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda(\rho^{(\mathrm{GB92})}) = \left(1 - \max\{ p_0, p_1,p_2\}\right)\; \sin^2 \lambda\;.
\end{eqnarray}
By observing that for a three-event process the maximum probability can never be smaller than $1/3$, we conclude that the maximum DS over the set of GB92 states is achieved by the Equally Weighted (EW) one
\begin{eqnarray}\label{eq:GB92sat}
\rho_{\mathrm{EW}}^{(\mathrm{GB92})} &=&\frac{1}{3} \Big( \ketbras{0}{0}{A} \otimes \ketbras{0}{0}{B}+ \ketbras{+}{+}{A} \otimes \ketbras{1}{1}{B}\nonumber \\
&& \quad+ \ketbras{\times}{\times}{A} \otimes \ketbras{2}{2}{B} \Big)\;.
\end{eqnarray}
With this choice we get
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda(\rho_{\mathrm{EW}}^{(\mathrm{GB92})})=\frac{2}{3} \sin^2 \lambda \;, \label{GB92}
\end{eqnarray}
which shows that, also for finite $d_B \geq 3$, the upper bound~(\ref{ALLDIMineq}) is achievable with a pure-QC state, hence proving~(\ref{ALLDIM}).
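Eq.~(\ref{GB92}) can be verified numerically through the $W$-matrix formula~(\ref{EQNUOVA}). The sketch below (ours, not from the text; NumPy assumed, helper names hypothetical) builds the equally weighted GB92 state of Eq.~(\ref{eq:GB92sat}) on a qubit-qutrit system and recovers $D=\frac{2}{3}\sin^2\lambda$:

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]], complex),
          np.array([[1, 0], [0, -1]], complex)]

def sqrtm_h(rho):
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def ds(rho, lam, dB):
    """DS of a qubit-qudit state: D = (1 - xi_max(W)) sin^2(lam)."""
    sq = sqrtm_h(rho)
    ops = [np.kron(p, np.eye(dB)) for p in paulis]
    W = np.array([[np.trace(sq @ a @ sq @ b).real for b in ops]
                  for a in ops])
    return (1 - np.linalg.eigvalsh(W).max()) * np.sin(lam)**2

# EW GB92 state: |0>, |+>, |x> on A, each flagged by an orthonormal ket on B
states_A = [np.array([1, 0], complex),
            np.array([1, 1], complex)/np.sqrt(2),
            np.array([1, 1j], complex)/np.sqrt(2)]
rho = np.zeros((6, 6), complex)
for j, psi in enumerate(states_A):
    flag = np.zeros(3, complex); flag[j] = 1
    v = np.kron(psi, flag)
    rho += np.outer(v, v.conj())/3

lam = 1.1
assert abs(ds(rho, lam, 3) - (2/3)*np.sin(lam)**2) < 1e-9
```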
\subsection{p-QC states maximize the DS over the set of separable states: case $d_B=2$ (qubit-qubit) }
\label{ssec:qubqub}
The argument used in the previous section cannot be directly applied to the qubit-qubit case (i.e. $d_A=d_B=2$), because for those systems the states~(\ref{ttff})
and (\ref{eq:GB92sat}) cannot be defined. Furthermore, we shall see that the upper bound~(\ref{ALLDIMineq}) is no longer tight.
To deal with this case we first consider the class of QC states and show that the maximum of the DS, equal to $(1/2) \sin^2\lambda$, is achieved on the set of pure-QC states. Then we
resort to numerical optimization to show that no other separable qubit-qubit state can do better, hence verifying the identity~(\ref{2DIM}).
\subsubsection{Maximum DS over QC states}
A generic QC state for the qubit-qubit case can be expressed as
\begin{equation}
\rho^{(\mathrm{QC})} \!=\! p \; \tau_0 \!\otimes\! |0\rangle_B\langle 0| + (1-p) \; \tau_1 \!\otimes\! |1\rangle_B\langle 1|,
\end{equation}
where $p\in [0,1]$, $\tau_{0}$ and $\tau_1$ are generic mixed states of $A$, and $\{ \ket{0}_B,\ket{1}_{B}\}$ is an orthonormal basis of $\mathcal{H}_B$.
To compute the associated value of the DS we invoke Eq.~(\ref{EQNUOVA}) and determine
the maximum eigenvalue of the matrix
$W_{\alpha\beta}$ of Eq.~\eqref{eq:W}.
Recalling the invariance of DS under local unitary operations we then set
\begin{equation}
\tau_0 \!=\! \frac{I + s_0 \sigma_3}{2}, \quad \tau_1 \!=\! \frac{I + s_1 ( \sin\phi \; \sigma_1 + \cos\phi \; \sigma_3)}{2},
\end{equation}
with $0\leq \phi \leq \pi$ and $0\leq s_i \leq 1$, which yields
\begin{equation}
\sqrt{\tau_i} = R(\phi_i)\frac{A(s_i) + B(s_i)\sigma_3}{\sqrt{2}} R^\dagger(\phi_i),
\end{equation}
where
$\phi_0=0$, $\phi_1=\phi$, $R(\theta)=\exp\left[-i\frac{\theta}{2} \sigma_2\right]$ and
\begin{equation}
A(s_i)\!=\!\frac{\sqrt{1+s_i} + \sqrt{1-s_i}}{2}, \; \;B(s_i)\!=\!\frac{\sqrt{1+s_i} - \sqrt{1-s_i}}{2}.
\end{equation}
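The factorization of $\sqrt{\tau_i}$ above can be checked directly; the following sketch (ours, with illustrative values of $s$ and $\phi$; NumPy assumed) verifies that the stated expression is Hermitian and squares back to $\tau_1$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def A(s): return (np.sqrt(1 + s) + np.sqrt(1 - s))/2
def B(s): return (np.sqrt(1 + s) - np.sqrt(1 - s))/2

def R(theta):
    # R(theta) = exp(-i theta/2 sigma_2)
    return np.cos(theta/2)*I2 - 1j*np.sin(theta/2)*s2

s, phi = 0.6, 1.1                                   # illustrative values
tau1 = (I2 + s*(np.sin(phi)*s1 + np.cos(phi)*s3))/2
root = R(phi) @ ((A(s)*I2 + B(s)*s3)/np.sqrt(2)) @ R(phi).conj().T
assert np.allclose(root @ root, tau1)    # root squares back to tau_1
assert np.allclose(root, root.conj().T)  # root is the Hermitian PSD root
```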
We now have all the ingredients necessary for the computation of the matrix elements $W_{\alpha\beta}$. Thanks to the orthogonality of $|0\rangle_B$ and $|1\rangle_B$,
this gives
\begin{widetext}
\begin{eqnarray}
W_{2\beta}
&=& W_{\beta2} = \left[p\sqrt{1-s_0^2}+(1-p)\sqrt{1-s_1^2}\right]\delta_{2\beta}\, >0 \nonumber\\
W_{11}&=&p\sqrt{1-s_0^2} + \frac{(1-p)}{2}\left[ 1-\cos(2\phi) + \sqrt{1-s_1^2}\,(1+\cos(2\phi)) \right] >0\nonumber \\
W_{13}&=&W_{31} = (1-p)\left(1-\sqrt{1-s_1^2}\right)\sin\phi \;\cos\phi \nonumber \\
W_{33}&=&\frac{1+p}{2}+ \frac{1-p}{2}\left[\cos(2\phi)+ \sqrt{1-s_1^2} \; (1-\cos(2\phi))\right].\nonumber\\
\end{eqnarray}
\end{widetext}
It follows that the eigenvalues of $W$ reduce to
\begin{eqnarray}
\xi_0 &=& W_{22} \nonumber\\
\xi_{\pm} &=& \frac{W_{11}+W_{33}}{2} \pm \frac{1}{2}\sqrt{\left(W_{11}-W_{33}\right)^2 + 4 W_{13}^2}. \nonumber
\end{eqnarray}
Since $W_{22}<1$ and $W_{11}+W_{33}=1+W_{22}$,
$\xi_+$ is the maximum eigenvalue. Therefore Eq.~\eqref{EQNUOVA} yields
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda(\rho^{(\mathrm{QC})} )= f_W \;\frac{\sin^2\lambda}{2}\;,
\end{eqnarray}
where
\begin{equation}
f_W:= 1- W_{22}-\sqrt{\left(W_{11}-W_{33}\right)^2 + 4 W_{13}^2}\,.
\end{equation}
It follows that
\begin{eqnarray}
D_{A\rightarrow B}^\Lambda(\rho^{(\mathrm{QC})} )\leq \;\frac{\sin^2\lambda}{2}\,,
\end{eqnarray}
the equality being saturated when $W_{22}=0$, $W_{13}=0$ and $W_{11}-W_{33}=0$. The first condition requires $\tau_0$ and $\tau_1$ to be pure ($s_0^2=s_1^2=1$), while the second and third conditions imply $\phi=(2n+1)\pi/2$, with $n \in \mathbb{Z}$, and $p=1/2$.
We conclude that the maximum of the DS on the set of QC states is achieved on B92-like states, which are pure-QC, that is
\begin{equation}
\max_{\rho^{\mathrm{(QC)}}} D_{A\rightarrow B}^\Lambda( \rho^{\mathrm{(QC)}}) = D_{A\rightarrow B}^\Lambda(\rho^{\mathrm{(B92)}})=\frac{\sin^2\lambda }{2}\,,
\end{equation}
where
\begin{equation}\label{eq:b92state}
\rho^{{(\mathrm{B92})}}\!=\! \frac{1}{2} \left( |0\rangle_A \langle 0| \!\otimes\! |0\rangle_B
\langle 0|+ |\pm\rangle_A\langle \pm| \!\otimes\! |1\rangle_B
\langle 1|\right)\,,
\end{equation}
with $\ket{\pm}_A=(\ket{0}_A\pm \ket{1}_A)/\sqrt{2}$ corresponding to $\sin\phi=\pm1$.
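As a check of the $(1/2)\sin^2\lambda$ value (ours, not part of the original derivation; NumPy assumed, helper names hypothetical), the sketch below evaluates the DS of the B92 state~(\ref{eq:b92state}) through the $W$-matrix formula~(\ref{EQNUOVA}):

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]], complex),
          np.array([[1, 0], [0, -1]], complex)]

def sqrtm_h(rho):
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def ds(rho, lam, dB):
    """DS of a qubit-qudit state: D = (1 - xi_max(W)) sin^2(lam)."""
    sq = sqrtm_h(rho)
    ops = [np.kron(p, np.eye(dB)) for p in paulis]
    W = np.array([[np.trace(sq @ a @ sq @ b).real for b in ops]
                  for a in ops])
    return (1 - np.linalg.eigvalsh(W).max()) * np.sin(lam)**2

# B92 state: (|0><0| x |0><0| + |+><+| x |1><1|)/2
k0 = np.array([1, 0], complex)
kp = np.array([1, 1], complex)/np.sqrt(2)
v0 = np.kron(k0, np.array([1, 0], complex))
v1 = np.kron(kp, np.array([0, 1], complex))
rho_b92 = 0.5*(np.outer(v0, v0.conj()) + np.outer(v1, v1.conj()))

lam = 0.6
assert abs(ds(rho_b92, lam, 2) - 0.5*np.sin(lam)**2) < 1e-9
```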
\subsubsection{Separable qubit-qubit states: numerical results}\label{sec:num_qubitqubit}
We conclude our analysis by providing numerical evidence that $(1/2) \sin^2\lambda$ is the maximum value reached by the discriminating strength over the whole set of separable states, as
anticipated in Eq.~(\ref{2DIM}).
We recall that a generic separable state of two qubit systems can always be written as a finite convex sum of direct products of pure states for $A$ and $B$ \cite{Sanpera}, i.e.
\begin{eqnarray}\label{eq:sep}
\rho^{(\mathrm{sep})} \!=\! \sum_{j=1}^{N} p_j \ketbras{\psi_j}{\psi_j}{A} \otimes \ketbras{\chi_j}{\chi_j}{B}, \quad p_j > 0 \;\; \forall j,
\end{eqnarray}
with $1\leq N \leq 4$. We remark that here no orthogonality constraint has to be imposed on either set of pure states, $\{\left|\psi_j\right>_A\}$ on $\mathcal{H}_A$ and $\{\left|\chi_j\right>_B\}$ on $\mathcal{H}_B$. The Bloch sphere formalism allows us to define, for all $j$,
\begin{eqnarray}
\ketbras{\psi_j}{\psi_j}{A}\!:=\! \frac{\mathbb{I} + \hat{u}_j \cdot \vec{\sigma}_A}{2} \quad \!\mbox{and}\! \quad \ketbras{\chi_j}{\chi_j}{B}\!:=\!\frac{\mathbb{I} + \hat{v}_j \cdot \vec{\sigma}_B}{2}\,. \nonumber\\
\end{eqnarray}
Summarizing, all qubit-qubit separable states are characterized by a set of $N$ probabilities and $2N$ unit vectors.
The case $N=1$ is trivial (such states are completely uncorrelated) and the DS is always zero. Therefore, we have numerically analyzed the cases $N=2$, $N=3$ and $N=4$, and plot our results in Fig.~\ref{fig:data}. The reported results are in agreement with Eq.~(\ref{2DIM}).
\begin{figure}
\caption{Histogram of the data referring to the numerical computation of $D_{A\rightarrow B}^\Lambda$ for separable qubit-qubit states.}
\label{fig:data}
\end{figure}
The details of this numerical analysis are presented in Appendix~\ref{app:numerics}.
\section{Conclusions}\label{Sec:conc}
In this paper we have introduced, under the name of \textit{discriminating strength}, a novel measure of discord-like correlations, i.e. correlations that, even though they cannot be identified with quantum entanglement, are still non-classical. In the \textit{mare magnum} of definitions and measures~\cite{MODIREV}, each stemming from a different way in which quantum correlations can be used to outperform purely classical systems, the discriminating strength finds its natural place in the context of state discrimination. More precisely, it quantifies the ability of a given bipartite probing state to discriminate whether or not a unitary map has been applied to one of its two subsystems, when a large number of copies of the probing state is available. We note that in a similar context, that of noisy quantum illumination~\cite{TAN}, a recent paper~\cite{WEED} has put forward a connection between the advantage yielded by quantum illumination over the best conceivable classical approach and the amount of quantum discord (in the sense of Ollivier and Zurek~\cite{OLLZU}) surviving in a maximally entangled state after the interaction with a noisy environment. Here, however, our goal was to define a quantity which has a clear operational meaning (characterizing quantitatively each bipartite state as a resource for a specific task) and is also easy to compute, at least in some simple cases.
Specifically, we have proved that the discriminating strength satisfies all the requirements for a proper measure of quantum correlations~\cite{MODIREV}. We have also provided a closed expression of this measure for some special cases, such as pure states and qubit-qudit systems. For the latter case we have also shown an explicit connection with another measure of quantum correlations, the local quantum uncertainty \cite{LQU}, which, in the most general case, can be seen to approximate the discriminating strength in the limit where the unitary map is close to the identity. Next, we have focused on the class of separable states and proved, by means of both analytical and numerical methods, that for all qubit-qudit systems the discriminating strength reaches its maximum on the set of pure quantum-classical states. Finally, we have explicitly determined this maximum value.
We recall that by definition the discriminating strength depends on the spectral properties of the encoding Hamiltonian $H_A^\Lambda$. In other words, for each specific choice of $\Lambda$ one can in principle define a different measure of quantum correlations (a similar issue also affects the local quantum uncertainty). It would therefore be interesting to investigate whether there exists a criterion for comparing the different measures arising from different spectral properties of $H_A^\Lambda$.
To conclude, we remark that the discriminating strength can be related to other discord-like measures that have been recently introduced, including the interferometric power~\cite{blindmetr}, the local quantum uncertainty~\cite{LQU} and the discord of response~\cite{discResp}. Ultimately, all these measures share a common message: discord-like correlations are the fundamental resource for many quantum metrology tasks. Moreover, the functionals on which they are based (Chernoff bound, Fisher information, Bures distance) are all interconnected, so that each measure can be used to bound the others~\cite{QCB,distances,QCBqubitAndGauss}. Most interestingly, even the Bures geometric quantum discord, which stems from a different perspective, has been recently shown to be related to an ambiguous state discrimination problem~\cite{Spehner2013a}. In this perspective, we believe that our analysis marks a further step towards a novel classification of a vast set of non-classicality measures.
\acknowledgements
We thank G. Adesso, D. Girolami, F. Illuminati and T. Tufarelli for useful comments and discussions.
ADP acknowledges support from Progetto Giovani Ricercatori 2013 of SNS.
\appendix
\section{Pedagogical remark}\label{AppA}
In this appendix, we provide an explicit proof that an arbitrary qubit-qutrit pQC state~\eqref{ttff} cannot achieve a DS greater than $(2/3)\sin^2\lambda$. Note that this result follows directly from the findings of Secs.~\ref{ssec:optimal} and \ref{ssec:boundf}. Nonetheless, we report the following proof as a pedagogical remark for the interested reader.
Consider an arbitrary qubit-qutrit pQC state~(\ref{ttff}) with
strictly positive probabilities $\{ p_j\}$ and with vectors $\{\hat{r}_j\}$ on the Bloch sphere. Without loss of generality we assume that $p_2\geq p_1 \geq p_0$ and introduce a Cartesian coordinate set formed by the 3D orthonormal vectors
$\{\hat{s}_j\}$ such that
\begin{eqnarray}
\hat{r}_2&=& \hat{s}_2\;, \nonumber \\
\hat{r}_1 &=& \cos \theta \hat{s}_2 +\sin\theta \hat {s}_1 \;, \nonumber \\
\hat{r}_0 &=& \cos \theta' \hat{s}_2 +\sin\theta' \cos\phi' \hat {s}_0 +
\sin\theta' \sin\phi' \hat{s}_1\;,
\end{eqnarray}
See Fig.~\ref{s_space}.
\begin{figure}
\caption{Bloch sphere representation of the qubit pure states $\{\left| \psi_j \right>_A\}$.}
\label{s_space}
\end{figure}
With this choice we can write
\begin{eqnarray} \label{rre}
\sum_{j=0,1,2} \! p_j (\hat{n}\cdot \hat{r}_j)^2 = \!\!\! \sum_{j=0,1,2} \! \tilde{p}_j \cos^2\phi_j +
\Delta(\phi_0,\phi_1,\phi_2) \;,
\end{eqnarray}
where $\phi_j$ is the angle between $\hat{n}$
and the Cartesian $j$-th axis $\hat{s}_j$,
\begin{eqnarray} \label{coord}
\cos\phi_j = \hat{n} \cdot \hat{s}_j \;,
\end{eqnarray}
$\{\tilde{p}_j\}$ is again a set of probabilities, with elements
\begin{eqnarray}
\tilde{p}_2 &=& p_2 + p_1 \; \cos^2\theta + p_0 \; \cos^2\theta' \;,\nonumber \\
\tilde{p}_1 &=& p_1 \; \sin^2\theta + p_0 \; \sin^2\theta' \sin^2 \phi' \;, \nonumber \\
\tilde{p}_0 &=& p_0 \; \sin^2\theta' \cos^2\phi'\;, \nonumber
\end{eqnarray}
and $\Delta(\phi_0,\phi_1,\phi_2)$ is the function
\begin{eqnarray}
\Delta(\phi_0,\phi_1,\phi_2) \!\!&=& \!\!A \cos \phi_2 \cos\phi_1 + B \cos\phi_2 \cos\phi_0 \nonumber \\&+& \!\!C \cos \phi_0 \cos\phi_1\;, \label{defDELTA} \\
A&=& p_1 \sin 2\theta + p_0 \sin 2\theta'\sin\phi' \;,\nonumber \\
B&=& p_0 \sin 2\theta' \cos\phi' \;,\nonumber \\
C&=& p_0 \sin^2\theta' \sin 2\phi' \;.
\end{eqnarray}
Observe that all the dependence of~(\ref{rre})
upon $\hat{n}$ is contained in the angles $\{\phi_j\}$: in particular,
the probabilities $\{ \tilde{p}_j\}$ and the quantities $A$, $B$, and $C$
of Eq.~(\ref{defDELTA}) do not depend on the choice of the Hamiltonian;
they only depend on the initial state~(\ref{ttff}).
According to~\eqref{eq:dueduetre}, in order to compute the discriminating strength of the state we need to find the maximum value of~(\ref{rre}) over all
possible choices of $\hat{n}$, i.e. for all possible coordinates components~(\ref{coord}).
To do so we first use the following facts to show that
it is always possible to make $\Delta$ non-negative while
leaving the first contribution of~(\ref{rre}),
$\sum_{j=0,1,2} \tilde{p}_j \cos^2\phi_j$, unchanged:
\begin{itemize}
\item[{\bf F1:}] given three real numbers $a$, $b$ and $c$,
at least one of the four combinations
$a+b+c$, $a-b-c$, $-a+b-c$, $-a-b+c$ must be non-negative
(observe that their sum is zero);
\item[{\bf F2:}] the vectors whose coordinates with respect to $\{ \hat{s}_j\}$ are
\begin{eqnarray}
\hat{n}_1&:=& (\cos\phi_0,\cos\phi_1,\cos\phi_2)\;,\nonumber \\
\hat{n}_2&:=&(-\cos\phi_0,\cos\phi_1,\cos\phi_2)\;,\nonumber \\
\hat{n}_3&:=&(\cos\phi_0,-\cos\phi_1,\cos\phi_2)\;, \nonumber \\
\hat{n}_4&:=&(\cos\phi_0,\cos\phi_1,-\cos\phi_2)\;,\nonumber
\end{eqnarray}
have the same value of $\sum_{j=0,1,2} \tilde{p}_j \cos^2\phi_j$ but
are associated with the following values of
$\Delta(\phi_0,\phi_1,\phi_2)$,
\begin{eqnarray}
\hat{n}_1& \mapsto& \Delta= a+b+c\;,\nonumber \\
\hat{n}_2& \mapsto& \Delta= a-b-c\;,\nonumber \\
\hat{n}_3& \mapsto& \Delta=-a+b-c\;,\nonumber \\
\hat{n}_4& \mapsto& \Delta=-a-b+c\;,
\end{eqnarray}
with $a=A\cos \phi_2 \cos\phi_1$,
$b=B \cos\phi_2 \cos\phi_0$ and $c=C \cos \phi_0\cos\phi_1$.
From {\bf F1} it follows that at least one of the vectors $\hat{n}_{1,2,3,4}$
has $\Delta$ non-negative.
\end{itemize}
We therefore conclude that
\begin{eqnarray} \label{rre11}
\max_{\hat{n}} \sum_{j=0,1,2} p_j (\hat{n}\cdot \hat{r}_j)^2 &\geq& \max_{\hat{n}} \sum_{j=0,1,2} \tilde{p}_j \cos^2\phi_j \nonumber \\
& =&
\max \{ \tilde{p}_0, \tilde{p}_1, \tilde{p}_2\}\;,
\end{eqnarray}
where the last identity follows from the fact that
$\{\cos^2\phi_j\}$ is a probability set, since it
fulfills the normalization condition~$\sum_{j=0,1,2} \cos^2\phi_j=1$, see Eq.~(\ref{coord}).
Replacing this into Eq.~\eqref{eq:dueduetre} finally yields
\begin{eqnarray}\label{dueduetretre}
D_{A\rightarrow B}^\Lambda(\rho^{(\mathrm{pQC})}) \!\! &\leq& \!\! (1 - \max \{ \tilde{p}_0, \tilde{p}_1, \tilde{p}_2\} ) \sin^2\lambda \nonumber \\
&\leq& \!\! \frac{2}{3}\sin^2\lambda \;,
\end{eqnarray}
where the last inequality holds because the largest of three positive quantities summing to $1$ cannot be smaller than $1/3$.
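The bound~(\ref{dueduetretre}) can also be probed numerically: the sketch below (ours, not part of the proof; NumPy assumed, sample sizes illustrative) draws random qubit-qutrit pQC states and verifies that their DS, computed via Eq.~(\ref{EQNUOVA}), never exceeds $\frac{2}{3}\sin^2\lambda$:

```python
import numpy as np

rng = np.random.default_rng(5)

paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]], complex),
          np.array([[1, 0], [0, -1]], complex)]

def sqrtm_h(rho):
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def ds(rho, lam, dB):
    """DS via Eq. (EQNUOVA): D = (1 - xi_max(W)) sin^2(lam)."""
    sq = sqrtm_h(rho)
    ops = [np.kron(p, np.eye(dB)) for p in paulis]
    W = np.array([[np.trace(sq @ a @ sq @ b).real for b in ops]
                  for a in ops])
    return (1 - np.linalg.eigvalsh(W).max()) * np.sin(lam)**2

def proj(n):
    """Projector onto the qubit pure state with Bloch vector n."""
    return (np.eye(2) + sum(c*p for c, p in zip(n, paulis)))/2

lam, worst = 1.0, 0.0
for _ in range(500):
    p = rng.dirichlet(np.ones(3))           # random weights p_0, p_1, p_2
    rho = np.zeros((6, 6), complex)
    for j in range(3):
        n = rng.normal(size=3); n /= np.linalg.norm(n)
        flag = np.zeros((3, 3), complex); flag[j, j] = 1
        rho += p[j]*np.kron(proj(n), flag)
    worst = max(worst, ds(rho, lam, 3))
assert worst <= (2/3)*np.sin(lam)**2 + 1e-9  # bound of the appendix holds
```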
\section{Numerical analysis for qubit-qubit separable states}\label{app:numerics}
This appendix discusses in greater detail the numerical analysis presented in Sec.~\ref{sec:num_qubitqubit}.
We have computed the discriminating strength of a two-qubit system in an arbitrary separable state, which, without loss of generality, can be written as
\begin{eqnarray}\label{eq:22SEPBloch}
\rho^{\mathrm{(sep)}} \!=\! \sum_{j=1}^{N} p_j \frac{\mathbb{I} + \hat{u}_j \cdot \vec{\sigma}_A}{2} \otimes \frac{\mathbb{I} + \hat{v}_j \cdot \vec{\sigma}_B}{2}, \quad p_j > 0 \;\; \forall j\,, \nonumber\\
\end{eqnarray}
with $1\leq N \leq 4$, and $\hat{u}_j,\hat{v}_j$ unit vectors on the Bloch sphere~\cite{Sanpera}.
Let us start with the case $N=2$. The set of probabilities $\{p_i\}$ can be labelled as
\begin{eqnarray}
\{p_1,p_2\}&=&C_2\{\sin\alpha,\cos\alpha\}\nonumber\\
C_2&=&\frac{1}{\sin\alpha+\cos\alpha}\,,
\end{eqnarray}
with $0<\alpha\leq \pi/4$. The latter constraint implies $0<p_1\leq p_2$. Similarly, we have parametrized the unit vectors $\hat{u}_j$ and $\hat{v}_j$ by means of the polar and azimuthal angles, $0 \leq \theta^{u,v}_j \leq \pi$ and $0 \leq \phi^{u,v}_j < 2\pi$, respectively. For each angle, we have taken a set of uniformly distributed values within the corresponding range and performed all possible combinations. Finally, we have set some additional constraints in the numerical code in order to get rid of those states which are equivalent under local unitary transformations.
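The state-generation step just described can be sketched as follows; the function names and the particular angle values are illustrative, and the discriminating-strength evaluation itself is not reproduced here:

```python
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def bloch_state(theta, phi):
    """Qubit density matrix (I + n.sigma)/2 for polar/azimuthal angles."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return 0.5 * (np.eye(2) + np.einsum('i,ijk->jk', n, SIGMA))

def separable_state_N2(alpha, angles_u, angles_v):
    """Two-qubit separable state of Eq. (22SEPBloch) for N = 2;
    angles_u, angles_v are lists of two (theta, phi) pairs."""
    C2 = 1.0 / (np.sin(alpha) + np.cos(alpha))
    p = C2 * np.array([np.sin(alpha), np.cos(alpha)])   # 0 < p1 <= p2
    return sum(p[j] * np.kron(bloch_state(*angles_u[j]),
                              bloch_state(*angles_v[j])) for j in range(2))

rho = separable_state_N2(np.pi / 6,
                         [(0.0, 0.0), (np.pi / 2, 0.0)],
                         [(0.0, 0.0), (np.pi, 0.0)])
# Sanity checks: unit trace, Hermitian, positive semidefinite.
assert abs(np.trace(rho) - 1) < 1e-12
assert np.allclose(rho, rho.conj().T)
assert np.linalg.eigvalsh(rho).min() > -1e-12
```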
With this procedure, we have generated a set of $\sim 7 \times 10^8$ separable states and found that the state with maximum DS corresponds to the B92 state \eqref{eq:b92state} with $D_{A\rightarrow B}^\Lambda=1/2 \sin^2(\lambda\varphi)$, thus confirming what was shown in Sec.~\ref{ssec:qubqub}.
We have repeated the same analysis for the case $N=3$ by setting
\begin{eqnarray}
\{p_1,p_2,p_3\}&=&C_3\{\sin\alpha\sin\beta,\sin\alpha\cos\beta,\cos\alpha\} \nonumber\\
C_3&=&\frac{1}{\sin\alpha(\sin\beta+\cos\beta)+\cos\alpha}
\end{eqnarray}
with $0<\alpha,\beta \leq \pi/4$ to ensure that $0<p_1\leq p_2\leq p_3$.
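The $N=3$ parametrization can be checked numerically; `probs_N3` is an illustrative helper confirming the normalization and the stated ordering $0<p_1\leq p_2\leq p_3$:

```python
import numpy as np

def probs_N3(alpha, beta):
    """Probabilities of the N = 3 parametrization, 0 < alpha, beta <= pi/4;
    the overall factor C_3 is exactly 1 / raw.sum()."""
    raw = np.array([np.sin(alpha) * np.sin(beta),
                    np.sin(alpha) * np.cos(beta),
                    np.cos(alpha)])
    return raw / raw.sum()

for alpha in np.linspace(0.05, np.pi / 4, 7):
    for beta in np.linspace(0.05, np.pi / 4, 7):
        p = probs_N3(alpha, beta)
        assert abs(p.sum() - 1) < 1e-12        # normalization
        assert 0 < p[0] <= p[1] <= p[2]        # claimed ordering
```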
We thus generated a set of $\sim 2 \times 10^6$ separable states. The maximum DS detected within this ensemble is $\sim 0.485 \sin^2(\lambda\varphi)$, and corresponds to
\begin{eqnarray}
&& \alpha=3\pi/16, \beta=\pi/4, \nonumber \\
&&\theta^{u,v}_j=\phi^{u,v}_j=0, \; \mbox{for} \; j=1,2 \nonumber \\
&&\theta^{u}_3=\phi^{u}_3=\pi/2, \; \theta^{v}_3=\pi, \phi^{v}_3=0\,.
\end{eqnarray}
Up to local unitary transformations, this set of parameters describes the state
\begin{eqnarray}
\rho^{\mathrm{(sep)}}\simeq0.486 \ketbras{0}{0}{A} \otimes \ketbras{0}{0}{B} + 0.514 \ketbras{+}{+}{A} \otimes \ketbras{1}{1}{B},\nonumber\\
\end{eqnarray}
which is almost equivalent to the B92 state~\eqref{eq:b92state} found for $N=2$.
We expect that a finer graining of the parameter space would allow the ensemble generated by this procedure to include the B92 state itself, so that the highest DS value would reach $1/2 \sin^2(\lambda\varphi)$.\\
Finally we considered the case $N=4$, which corresponds to setting in Eq.~\eqref{eq:22SEPBloch}
\begin{eqnarray}
\{p_1,p_2,p_3,p_4\}&=&C_4\{\sin\alpha\sin\beta\sin\gamma,\sin\alpha\sin\beta\cos\gamma,
\nonumber\\ &&\qquad\qquad\sin\alpha\cos\beta,\cos\alpha\} \nonumber\\ \nonumber\\
C_4&=&\frac{1}{\sin\alpha(\sin\beta(\sin\gamma+\cos\gamma)+\cos\beta)+\cos\alpha}
\end{eqnarray}
with $0<\alpha,\beta,\gamma \leq \pi/4$ ensuring $0<p_1\leq p_2\leq p_3 \leq p_4$. We have thus generated a set of $\sim 10^6$ separable states. The maximum value we have found for the discriminating strength is $\sim 0.484 \sin^2(\lambda\varphi)$, achieved when
\begin{eqnarray}
&&\alpha=\pi/4, \; \beta=\pi/8, \; \gamma=\pi/4 \nonumber\\
&&\theta^{u,v}_j=0, \phi^{u,v}_j=0, \quad \mbox{for} \; j=1,4 \nonumber \\
&&\theta^{u}_k=\pi/2, \; \theta^{v}_k=\pi, \; \phi^{u,v}_k=0, \; \mbox{for} \; k=2,3\,.\nonumber\\
\end{eqnarray}
This set of parameters defines the state
\begin{eqnarray}
\rho^{\mathrm{(sep)}} &\simeq& 0.515\ketbras{0}{0}{A} \otimes \ketbras{0}{0}{B} + 0.485 \ketbras{+}{+}{A} \otimes \ketbras{1}{1}{B}\,,\nonumber\\
\end{eqnarray}
which again, up to numerical errors, is quite close to the aforementioned B92 state.
\end{document}
\begin{document}
\title{Hamiltonian quantum computer in one dimension}
\author{Tzu-Chieh Wei}
\affiliation{C. N. Yang Institute for Theoretical Physics and
Department of Physics and Astronomy, State University of New York at
Stony Brook, Stony Brook, NY 11794-3840, USA}
\author{John C. Liang}
\affiliation{Rumson-Fair Haven Regional High School, 74 Ridge Rd, Rumson, NJ 07760, USA}
\date{\today}
\begin{abstract}
Quantum computation can be achieved by preparing an appropriate initial product state of qudits and then letting it evolve under a fixed Hamiltonian. The readout is made by measurement on individual qudits at some later time. This approach is called Hamiltonian quantum computation and it includes, for example, the continuous-time quantum cellular automata and the universal quantum walk. We consider one spatial dimension and study the compromise between the locality $k$ and the local Hilbert-space dimension $d$. For geometrically 2-local Hamiltonians (i.e., $k=2$), it is known that $d=8$ is already sufficient for universal quantum computation, although the Hamiltonian is not translationally invariant. As the locality $k$ increases, the minimum required $d$ is expected to decrease. We provide a construction of a Hamiltonian quantum computer for $k=3$ with $d=5$. One implication is that simulating 1D chains of spin-2 particles is BQP-complete. Imposing translation invariance increases the required $d$. For this case we construct another 3-local ($k=3$) Hamiltonian that is invariant under translation of a unit cell of two sites but requires $d=8$.
\end{abstract}
\pacs{03.67.Lx, 03.67.-a,
75.10.Jm}
\maketitle
\section{Introduction}
There are several approaches to quantum computation (QC), such as the standard circuit model~\cite{NielsenChuang}, topological QC~\cite{Kitaev,TQC}, adiabatic QC~\cite{Farhi}, measurement-based QC~\cite{Oneway,Oneway2}, etc. In addition, quantum computation can be achieved by preparing an appropriate initial product state of qudits and then letting it evolve under a fixed Hamiltonian. The readout is made by measurement on individual qudits at some later time. Such an idea dates back to Benioff~\cite{Benioff} and Feynman~\cite{Feynman}. This is called a Hamiltonian quantum computer~\cite{NagajWocjan}. For example, Feynman provided a Hamiltonian that is able to execute universal quantum computation, even though the interaction involves four particles residing on sites that are not geometrically local,
\begin{equation}
H_{\rm Feynman}= \sum_{j=0}^{k-1} \sigma_{j+1}^+\sigma_j^- A_{j+1} + {\rm h.c.}
\end{equation}
There are two important ingredients here; see Fig.~\ref{fig:HQC}. The first is the lowering and raising operators $\sigma^-$ and $\sigma^+$ that act on a set of spin-1/2 particles, representing a discrete clock register. The clock state is initialized as $|00\dots001\rangle$ and, via the action of the Hamiltonian, can appear as $|00\dots 0 1_j 0\dots 0\rangle$ with one single excitation, giving rise to a unary representation of discrete time. The second ingredient is the unitary gates $A_j$ that represent all the gates (which can be one-qubit or two-qubit) that a quantum computer applies to qubits or qubit pairs in the computational register. If we denote the initial state of these qubits in the computational register as $|\psi_0\rangle$, then under the evolution $e^{-i t H_{\rm Feynman}}$, the clock and computational registers will be in a superposition of states of the form
\begin{equation}
|\Psi(t)\rangle= \sum_j c_j(t) |00\dots 0 1_j 0\dots0\rangle\otimes A_{j} A_{j-1}\dots A_1 |\psi_0\rangle,
\end{equation}
where the coefficients $c_j(t)$'s depend on the actual time $t$. The time evolved state $|\Psi(t)\rangle$ thus contains states that represent any stage of quantum computation as gates are being applied to the initial state: $A_{j} A_{j-1}\dots A_1 |\psi_0\rangle$. The component of the computational register corresponding to the clock being $|10\dots 0\rangle$ gives the completion of the computation, i.e., all the gates have been applied: $A_{k} A_{k-1}\dots A_1 |\psi_0\rangle$.
One can append many identity gates to the original gate sequence in order to boost the probability of ending up at a state where the desired quantum computation has been carried out. This gives a general explanation why a Hamiltonian quantum computer can execute universal quantum computation.
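This mechanism is easy to verify numerically. The sketch below is a simplification, not Feynman's full construction: it works directly in the single-excitation sector, where the clock is a $(k+1)$-level register and $\sigma^+_{j+1}\sigma^-_j$ acts as $|j{+}1\rangle\langle j|$; the gate choice is an arbitrary assumption for illustration.

```python
import numpy as np

H_GATE = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X_GATE = np.array([[0, 1], [1, 0]])
GATES = [H_GATE, X_GATE, H_GATE]     # A_1, A_2, A_3: an arbitrary gate choice
k = len(GATES)

def hop(j):
    """|j+1><j| on the (k+1)-dimensional unary clock."""
    m = np.zeros((k + 1, k + 1))
    m[j + 1, j] = 1.0
    return m

H = sum(np.kron(hop(j), GATES[j]) for j in range(k))
H = H + H.conj().T                   # + h.c.

# Evolve |clock=0> (x) |0> under exp(-iHt) via the spectral decomposition.
w, V = np.linalg.eigh(H)
psi0 = np.zeros((k + 1) * 2, dtype=complex)
psi0[0] = 1.0
t = 1.7                              # an arbitrary time
psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

# The clock = k component carries the fully computed state A_3 A_2 A_1 |0>.
block = psi_t.reshape(k + 1, 2)[k]
target = H_GATE @ X_GATE @ H_GATE @ np.array([1.0, 0.0])
amp = np.linalg.norm(block)
if amp > 1e-12:
    # The block is proportional to the computed state (up to a phase).
    assert abs(abs(np.vdot(target, block)) / amp - 1) < 1e-9
```

The evolution never leaves the span of the history states $|j\rangle\otimes A_j\cdots A_1|\psi_0\rangle$, which is why the clock-$k$ block is always proportional to the fully computed state.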
\begin{figure}
\centering\includegraphics[width=0.48\textwidth]{FeynmanHQC.png}
\caption{(color online) A schematic diagram for Feynman's Hamiltonian quantum computer. The top row of dots are qubits that constitute the clock. The bottom row of dots are qubits that constitute the computational part, i.e., those to which one- and two-qubit gates are applied. For example, the gate $A_j$ applies to qubits $u$ and $v$, indicated by two thin lines, when the two corresponding clock qubits have a change in their configuration $01\rightarrow 10$ (not shown). The interaction, given by the clock transition and gate operation, is highly non-local in general.
\label{fig:HQC} }
\end{figure}
Feynman's idea was used and generalized by Kitaev to construct the so-called Local Hamiltonian Problems (LHP)~\cite{KitaevShenVyalyi}, which are concerned with the complexity of finding the ground-state energy. It turns out that the 5-local LHP, which involves interaction terms of 5 particles that are not necessarily geometrically local, is complete for the complexity class QMA and hence believed to be hard even for quantum computers~\cite{KitaevShenVyalyi}. The locality $k$ for QMA-complete LHP was, in a series of works, reduced to 2~\cite{KempeRegev,KempeKitaevRegev}, even with nearest-neighbor interactions in two spatial dimensions~\cite{OliveiraTerhal}.
In one spatial dimension, it was shown by Aharonov et al. that 2-local 13-state Hamiltonians are QMA-complete~\cite{1DQMA}, and the local dimension $d$ was recently reduced to 8~\cite{Hallgren}. A key novelty in the one-dimensional case is that the use of qubits to represent the discrete clock was replaced by patterns of qudit configurations. This enabled the reduction of the interaction range to 2-local.
In terms of one-dimensional Hamiltonian quantum computers, there have been various constructions, for example, the continuous-time quantum cellular automata by Vollbrecht and Cirac~\cite{VollbrechtCirac}, by Kay~\cite{Kay}, and by Nagaj and Wocjan~\cite{NagajWocjan}, as well as the universal quantum walk by Chase and Landahl~\cite{ChaseLandahl}. The 1D Hamiltonians in these constructions are nearest-neighbor two-body (or geometrically 2-local), but involve local Hilbert-space dimensions ranging from $d=8$~\cite{ChaseLandahl} upward~\cite{VollbrechtCirac,Kay,NagajWocjan}. Here we study the compromise between the locality $k$ and the local dimension $d$ in one spatial dimension. As the locality $k$ increases, it is expected that the minimum required $d$ should decrease. For example, as a corollary of the results by Chase and Landahl, 6-local ($k=6$) qubit ($d=2$) Hamiltonians are universal, in the sense of quantum computation. For $k=4$, at most $d=3$ is needed for universality. But for $k=3$, how much lower than 8 can $d$ be?
The different constructions mentioned in the previous paragraph share some common features: (i) the actual state of the computational register is represented by qubits in a consecutive region of a larger array of qubits; and (ii) there are parallel arrays of qudits that represent the program of the quantum computer, encoding instructions for gate movement and operation~\cite{VollbrechtCirac,Kay,NagajWocjan,ChaseLandahl}. All of them give rise to translation invariant Hamiltonians, except Ref.~\cite{ChaseLandahl}. In this paper, we provide two constructions: (i) one that uses a 5-state 3-local Hamiltonian but is not translation invariant, and (ii) one that uses an 8-state 3-local Hamiltonian invariant under translation of a unit cell of two sites. The former is inspired by the design used in 1D QMA local Hamiltonian problems~\cite{1DQMA,Hallgren}, where the previous focus was on 2-locality whereas ours is on 3-locality instead. With the 2-locality relaxed to 3-locality, it is conceivable that the local Hilbert-space dimension can be reduced (e.g. from $d=9$ in Ref.~\cite{1DQMA}). Ours is an explicit demonstration of this. Our second construction is inspired by the translation invariant constructions in Refs.~\cite{VollbrechtCirac,Kay,NagajWocjan} and in particular the work by Nagaj and Wocjan~\cite{NagajWocjan}. We explicitly modify a particular scheme with $d=20$ in Ref.~\cite{NagajWocjan} and reduce $d$ to $8$. There is, however, one scheme with $d=10$ in Ref.~\cite{NagajWocjan} whose $d$ we cannot reduce further.
The reason that our 5-state 3-local Hamiltonian is not translationally invariant (not even under translation by a finite number of lattice sites) is partly the site-dependence of the gate operations (see Sec.~\ref{sec:5state}), and hence it is not regarded as a quantum cellular automaton.
One of the 2-local Hamiltonian automata of Nagaj and Wocjan uses local dimension $d=10$~\cite{NagajWocjan}. This implies the existence of a 4-local Hamiltonian automaton that requires $d=4$ but is invariant under translation of 2 sites, as well as a 6-local Hamiltonian automaton with $d=3$ that is invariant under translation of 3 sites. But this leaves open the question of 3-locality. Our previously mentioned $d=8$ construction thus gives an upper bound on the lowest $d$ in this case. However, it does not seem to be that much of an improvement, compared to the 10-state 2-local construction of Nagaj and Wocjan~\cite{NagajWocjan}.
As summarized schematically in Fig.~\ref{fig:kvsd} and Fig.~\ref{fig:kvsdTI}, we consider one spatial dimension, focusing on continuous-time Hamiltonian evolution and on the compromise between the locality $k$ and the local dimension $d$.
We mention that there are also constructions using discrete-time quantum cellular automata, see e.g.~\cite{Lloyd,Watrous,Raussendorf,Shepherd}, as well as Hamiltonian quantum computers and quantum walks in two dimensions or higher, see e.g.~\cite{JanzingWocjan,Child}.
The remainder of the paper is organized as follows. In Sec.~\ref{sec:5state} we provide the 5-state 3-local construction and explain how the Hamiltonian is obtained. In Sec.~\ref{sec:8state} we consider translational invariance and construct 8-state 3-local transition rules that lead to an 8-state Hamiltonian invariant under translation of a unit cell of two sites. This construction can be regarded as a continuous-time quantum cellular automaton. In Sec.~\ref{sec:Prob} we analyze the probability of success. We conclude in Sec.~\ref{sec:con}.
\section{5-state non-translationally invariant construction}
\label{sec:5state}
\begin{figure}
\includegraphics[width=0.5\textwidth]{kvsd.png}
\caption{(color online) The status of locality $k$ vs. local Hilbert-space dimension (level) $d$ for universal quantum computation (BQP) in one spatial dimension.
\label{fig:kvsd} }
\end{figure}
We are motivated to explore the compromise between the locality $k$ and the local dimension $d$ for 1D Hamiltonians capable of performing universal quantum computation. As explained in the Introduction, what remains to be answered is the 3-local case. Our construction here borrows the idea from 1D QMA local Hamiltonian problems~\cite{1DQMA,Hallgren} but is otherwise new, and this adds a piece to the `phase diagram' illustrated in Fig.~\ref{fig:kvsd}. It turns out that a 2-local 9-state construction by Aharonov et al. for adiabatic QC can be used in the context of Hamiltonian QC~\cite{1DQMA}. The 8-state 1D QMA LHP by Hallgren et al.~\cite{Hallgren} actually uses effective transition rules involving 4 sites that are made from 2-local instructions. Our task here is not to find Hamiltonians that are QMA-complete, but to construct one that is universal for Hamiltonian quantum computer and that uses a small local dimension.
Our concern is 3-locality whereas that of the 1D QMA constructions was 2-locality~\cite{1DQMA}. Our transition rules are completely different.
It turns out that the local dimension we need is $d=5$ and that we have two different types of sites.
The basis states on odd and even sites are, respectively,
\begin{eqnarray*}
\{\,\vartriangleright,\,\vartrianglerightle,\circlearrowleft,\,\bullet\,,\,+\,\}, \ \ \{\qubit^{[0]},\qubit^{[1]},\,\blacktrianglerighte^{[0]},\,\blacktrianglerighte^{[1]},\blank\}.
\end{eqnarray*}
(We can regard the system as consisting of the same kind of particles on all sites, but their interactions have two different preferred bases, depending on whether the site is even or odd.) There are two kinds of qubits: $\qubit$ and $\,\blacktrianglerighte$, and the superscripts are used to indicate the logical qubit values (which will not be shown during the transitions below). The symbol $\blank$ is usually referred to as the unborn/dead symbol. The bullet $\,\bullet\,$ and plus $\,+\,$ are used to space qubits as well as unborn/dead symbols. The empty left and right triangles indicate the direction of movement. In addition, $\,\vartrianglerightle$ also serves to swap $\qubit$ and $\blank$. The complete transition rules are listed as follows, and the type of symbol implies the corresponding even or odd site.
\begin{eqnarray}\mbox{1:}\qquad
\begin{array}{lcr}
\,\blacktrianglerighte\,+\,\qubit&\longrightarrow&U_m(\qubit\,+\,\,\blacktrianglerighte)
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{2:}\qquad
\begin{array}{lcr}
\,\blacktrianglerighte\,\bullet\,\bb\blank&\longrightarrow&\qubit\circlearrowleft\bb\blank
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{3:}\qquad
\begin{array}{lcr}
\circlearrowleft\blank\,\bullet\,&\longrightarrow&\,\vartrianglerightle\blank\,\bullet\,
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{4:}\qquad
\begin{array}{lcr}
\qubit\,\vartrianglerightle\blank&\longrightarrow&\blank\,\vartrianglerightle\qubit
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{5a:}\qquad
\begin{array}{lcr}
\,+\,\blank\,\vartrianglerightle &\longrightarrow&\,\vartrianglerightle\blank\,+\,
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{5b:}\qquad
\begin{array}{lcr}
\,\bullet\,\blank\,\vartrianglerightle&\longrightarrow&\,\bullet\,\blank\circlearrowleft
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{6a:}\qquad
\begin{array}{lcr}
\circlearrowleft\bb\qubit\,+\,&\longrightarrow&\,\bullet\,\bb\,\blacktrianglerighte\,+\,
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{6b:}\qquad
\begin{array}{lcr}
\circlearrowleft:\!\qubit\,+\,&\longrightarrow&\,\bullet\,\!:\!\qubit\,\vartriangleright
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{7a:}\qquad
\begin{array}{lcr}
\,\vartriangleright\qubit\,+\,&\longrightarrow&\,+\,\qubit\,\vartriangleright
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{7b:}\qquad
\begin{array}{lcr}
\,\vartriangleright\qubit\,\bullet\,\!:&\longrightarrow&\,+\,\qubit\circlearrowleft:
\end{array}
\end{eqnarray}
The reverse rules are obvious except that for Rule 1:
\begin{eqnarray}\mbox{1}^\dagger:\qquad
\begin{array}{lcr}
\qubit\,+\,\,\blacktrianglerighte&\longrightarrow&U_m^\dagger(\,\blacktrianglerighte\,+\,\qubit)
\end{array}
\end{eqnarray}
The gates $U_m$'s depend on the location and are exactly the gates used in a universal circuit model (see Fig.~\ref{fig:circuit}), where there are $R$ rounds of gate application, in each of which gates are applied sequentially between $i$-th and $(i+1)$-th qubits, for $i=1,\dots,n-1$. In terms of the order that they are applied, the gates are (from left to right)
\begin{equation}
\underbrace{U_{1,1}, \dots, U_{1,n-1}}_{\textrm{1st round}},
\underbrace{U_{2,1}, \dots, U_{2,n-1}}_{\textrm{2nd round}},
\dots,
\underbrace{U_{R,1}, \dots, U_{R,n-1}}_{\textrm{last round}}.
\label{eqn:prog}
\end{equation}
We note that the choice of the set of universal gates is arbitrary, for example, it can include the one-qubit gates such as the Hadamard gate and the $\pi/4$-gate, as well as the two-qubit CNOT gate~\cite{NielsenChuang}, or even the $W$ and $S$ gates to be used in Sec.~\ref{sec:8state}. A one-qubit gate is a trivial special case of a two-qubit gate, where one of the two qubits is acted by an identity gate, and this is the reason that only two-qubit gates are drawn in Fig.~\ref{fig:circuit}.
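For concreteness, the gate ordering of the program above can be generated programmatically; `gate_program` is a hypothetical helper name:

```python
def gate_program(n, R):
    """Order of gate application: R rounds, and within each round
    gates act on qubit pairs (i, i+1) for i = 1, ..., n-1."""
    return [(r, i) for r in range(1, R + 1) for i in range(1, n)]

# n = 3 qubits, R = 2 rounds: U_{1,1}, U_{1,2}, U_{2,1}, U_{2,2}.
assert gate_program(3, 2) == [(1, 1), (1, 2), (2, 1), (2, 2)]
```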
Furthermore, our transition rules are inspired by those of Ref.~\cite{1DQMA} and use the same idea of data movement, i.e., since each gate is applied at a specific location, the particles need to be moved to the next round before the gate sequence for that round can take place.
\begin{figure}
\includegraphics[width=0.48\textwidth]{circuit.png}
\caption{(color online) Universal circuit for quantum computation. This shows the gate sequence that will be simulated by our Hamiltonian quantum computers.
\label{fig:circuit} }
\end{figure}
Let us use `$\big[$' and `$\big]$' to mark the geometric boundaries on the left and right sides, respectively. The symbol `$\bb$' marks the boundary between blocks and `$:$' marks a location that is {\it not\/} at the boundary. Except for the first block, which has only one site, all blocks have $2n$ sites. There are $R+1$ blocks, where $R$ is the number of rounds of gate application.
Let us illustrate for the case of $n=3$ and $R=2$. The initial state is
\begin{eqnarray*}
[0]\qquad \big[\circlearrowleft\bb\qubit\,+\,\qubit\,+\,\qubit\,\bullet\,\bb\blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
where the qubits $\qubit$'s are properly initialized. We will not show explicitly the logical values of the qubits and will refer to the list of symbols such as above as the {\it configuration\/}.
Then the rule 6a takes the configuration to
\begin{eqnarray*}
[1]\qquad \big[\,\bullet\,\bb \,\blacktrianglerighte\,+\,\qubit\,+\,\qubit\,\bullet\, \bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The gate $U_{1,1}$ will be applied and the configuration makes a transition via the rule 1 to
\begin{eqnarray*}
[2]\qquad \big[\,\bullet\,\bb \qubit\,+\,\,\blacktrianglerighte\,+\,\qubit\,\bullet\, \bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
Similarly, the gate $U_{1,2}$ will be applied and the configuration makes a transition via the rule 1 to
\begin{eqnarray*}
[3]\qquad \big[\,\bullet\,\bb \qubit\,+\,\qubit\,+\,\,\blacktrianglerighte\,\bullet\, \bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
Then via the rule 2, a turn-around symbol $\circlearrowleft$ is generated:
\begin{eqnarray*}
[4]\qquad \big[\,\bullet\,\bb \qubit\,+\,\qubit\,+\,\qubit\circlearrowleft \bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
and followed by a generation of a left-moving symbol $\,\vartrianglerightle$ via the rule 3:
\begin{eqnarray*}
[5]\qquad \big[\,\bullet\,\bb \qubit\,+\,\qubit\,+\,\qubit\,\vartrianglerightle \bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The qubit $\qubit$ and the unborn symbol $\blank$ swap via the rule 4:
\begin{eqnarray*}
[6]\qquad \big[\,\bullet\,\bb \qubit\,+\,\qubit\,+\,\blank\,\vartrianglerightle \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The left-moving symbol $\,\vartrianglerightle$ then swaps with the plus symbol $\,+\,$ via the rule 5a:
\begin{eqnarray*}
[7]\qquad \big[\,\bullet\,\bb \qubit\,+\,\qubit\,\vartrianglerightle\blank\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The previous two steps repeat again:
\begin{eqnarray*}
[8]\qquad \big[\,\bullet\,\bb \qubit\,+\,\blank\,\vartrianglerightle\qubit\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[9]\qquad \big[\,\bullet\,\bb \qubit\,\vartrianglerightle\blank\,+\,\qubit\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The qubit $\qubit$ and the unborn symbol $\blank$ swap via the rule 4:
\begin{eqnarray*}
[10]\qquad \big[\,\bullet\,\bb \blank\,\vartrianglerightle\qubit\,+\,\qubit\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The left-moving symbol cannot move but generates a turn-around symbol $\circlearrowleft$ via the rule 5b:
\begin{eqnarray*}
[11]\qquad \big[\,\bullet\,\bb \blank\circlearrowleft\qubit\,+\,\qubit\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
This then creates a bullet $\,\bullet\,$ and the right-moving symbol $\,\vartriangleright$ via the rule 6b:
\begin{eqnarray*}
[12]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,\vartriangleright\qubit\,+\, \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The right-moving symbol $\,\vartriangleright$ then swaps with the plus $\,+\,$ in front of it via the rule 7a:
\begin{eqnarray*}
[13]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\qubit\,\vartriangleright \bb \qubit\,\bullet\,\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
Then the right-moving symbol $\,\vartriangleright$ cannot move forward but causes a turn-around symbol $\circlearrowleft$ to be generated via the rule 7b (leaving behind a $\,+\,$ symbol):
\begin{eqnarray*}
[14]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\qubit\,+\, \bb \qubit\circlearrowleft\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
The process of moving the unborn symbol $\blank$ leftward across the block of qubits then repeats, as was shown previously in $[5]$ to $[13]$:
\begin{eqnarray*}
[15]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\qubit\,+\, \bb \qubit\,\vartrianglerightle\blank\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[16]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\qubit\,+\, \bb \blank\,\vartrianglerightle\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[17]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\qubit\,\vartrianglerightle \bb \blank\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[18]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,+\,\blank\,\vartrianglerightle \bb \qubit\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[19]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\qubit\,\vartrianglerightle\blank\,+\, \bb \qubit\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[20]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\vartrianglerightle\qubit\,+\, \bb \qubit\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
A turn-around symbol is then generated and there is a motion to the right:
\begin{eqnarray*}
[21]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\circlearrowleft\qubit\,+\, \bb \qubit\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[22]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,\vartriangleright \bb \qubit\,+\,\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[23]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \qubit\,\vartriangleright\qubit\,\bullet\,\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[24]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \qubit\,+\,\qubit\circlearrowleft\blank\,\bullet\,\big]
\end{eqnarray*}
After the turn-around symbol is generated, the whole process of transporting the unborn symbol $\blank$ repeats again:
\begin{eqnarray*}
[25]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \qubit\,+\,\qubit\,\vartrianglerightle\blank\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[26]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \qubit\,+\,\blank\,\vartrianglerightle\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[27]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \qubit\,\vartrianglerightle\blank\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[28]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,+\, \bb \blank\,\vartrianglerightle\qubit\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[29]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\qubit\,\vartrianglerightle \bb \blank\,+\,\qubit\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[30]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\vartrianglerightle \bb \qubit\,+\,\qubit\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
Finally, we arrive at a similar state to the initial one, except that the whole qubit block has moved one block to the right and the gates $U_{2,1}$ and $U_{2,2}$ will be applied subsequently.
\begin{eqnarray*}
[31]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\blank\circlearrowleft \bb \qubit\,+\,\qubit\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[32]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\, \bb \,\blacktrianglerighte\,+\,\qubit\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
\begin{eqnarray*}
[33]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\, \bb \qubit\,+\,\,\blacktrianglerighte\,+\,\qubit\,\bullet\,\big]
\end{eqnarray*}
After all the gates have been applied, the final state in the history of computation arrives:
\begin{eqnarray*}
[34]\qquad \big[\,\bullet\,\bb \blank\,\bullet\,\blank\,\bullet\,\blank\,\bullet\, \bb \qubit\,+\,\qubit\,+\,\,\blacktrianglerighte\,\bullet\,\big]
\end{eqnarray*}
The step number (counting from zero) is indicated in the square brackets $[\,\,\,]$ on the left in the above configurations. In general, for $n$ qubits with $R$ rounds, the total number of steps is $T=(R-1)(3n^2+n)+n+1$.
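The step count can be checked against the $n=3$, $R=2$ walkthrough, which runs from $[0]$ to $[34]$:

```python
def total_steps(n, R):
    """Total number of transitions T = (R-1)(3n^2 + n) + n + 1 for n qubits
    and R rounds of gates (configurations run from [0] to [T])."""
    return (R - 1) * (3 * n ** 2 + n) + n + 1

# The n = 3, R = 2 walkthrough above runs from [0] to [34].
assert total_steps(3, 2) == 34
```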
What we have described above is the history of the computation via the transition rules (if the computation were run in discrete time). We will refer to the corresponding quantum states $|\psi_t\rangle$ as the history states. However, the quantum computation will be executed not by discrete transitions, but via continuous-time evolution under the Hamiltonian: $\exp(-iHt)$. The procedure for running such a Hamiltonian quantum computer is (1) prepare a proper initial state, e.g., $[0]$ above, (2) let it evolve under the Hamiltonian $H$, and (3) perform a measurement in the computational basis at some time $\tau$. From the measurement outcome of all sites, we will be able to determine at what stage of the computation the projected state is, and what the values of the qubits are. What remains to be shown is that the probability of arriving at a desired computation is high, which will be analyzed in Sec.~\ref{sec:Prob}. There, it will also be clear at what time the measurement should be taken (in fact, at a random moment).
\noindent {\bf Construction for the Hamiltonian}. We remark that these transition rules give rise to a Hamiltonian. For example, the rule 3 (including both the forward and backward directions) contributes the following terms:
\begin{equation}
- \left|\,\vartriangleleft\blank\,\bullet\,\right\rangle \left\langle \circlearrowleft\blank\,\bullet\,\right|-\left|\circlearrowleft\blank\,\bullet\,\right\rangle\left\langle\,\vartriangleleft\blank\,\bullet\,\right|,
\end{equation}
applicable at appropriate locations.
For another example, the rule 6a will contribute the following terms,
\begin{eqnarray}
- \sum_{s=0,1}& \left(\left|\,\bullet\,\bb\,\blacktriangleright^{[s]}\,+\,\right\rangle \left\langle \circlearrowleft\bb\qubit^{[s]}\,+\,\right| \right.\nonumber\\
\,+\,&\left. \left|\circlearrowleft\bb\qubit^{[s]}\,+\,\right\rangle\left\langle\,\bullet\,\bb\,\blacktriangleright^{[s]}\,+\,\right|\right),
\end{eqnarray}
where we have accounted for all possible qubit states.
As another example, the rule 1 contributes the following terms,
\begin{eqnarray}
&\!\!\!\!\!\!\!- \sum_{s_1,s_2,s_1',s_2'} \left([{U_m}]^{{s_1'},{s_2'}}_{{s_1},{s_2}}\left|\qubit^{[s_1']}\,+\,\,\blacktriangleright^{[s_2']}\right\rangle \left\langle \,\blacktriangleright^{[s_1]}\,+\,\qubit^{[s_2]}\right| \right.\nonumber\\
&\!\!\!\!+\,\left. [{U_m^\dagger}]^{{s_1'},{s_2'}}_{{s_1},{s_2}} \left|\,\blacktriangleright^{[s_1']}\,+\,\qubit^{[s_2']}\right\rangle\left\langle\qubit^{[s_1]}\,+\,\,\blacktriangleright^{[s_2]}\right|\right).
\end{eqnarray}
As the construction for the Hamiltonian is clear, we will not explicitly write down all the terms. Moreover, it is the effective Hamiltonian in the basis of the history states that matters, which we discuss in Sec.~\ref{sec:Prob}.
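Concretely, granting the standard clock-Hamiltonian argument (an assumption we do not re-derive here), the restriction of $H$ to the span of the orthonormal history states $|\psi_0\rangle,\dots,|\psi_T\rangle$ is a nearest-neighbour hopping matrix, so the dynamics reduces to a continuous-time quantum walk on a line; a minimal numerical sketch:

```python
import numpy as np

T = 34                      # number of transitions in the example above
# Effective Hamiltonian in the history-state basis:
# H_eff = -sum_t ( |t><t+1| + |t+1><t| ), a hopping matrix on a path.
H = np.zeros((T + 1, T + 1))
for t in range(T):
    H[t, t + 1] = H[t + 1, t] = -1.0

psi0 = np.zeros(T + 1)
psi0[0] = 1.0               # start in the initial history state |psi_0>

tau = 10.0                  # some evolution time
evals, evecs = np.linalg.eigh(H)
psi_tau = evecs @ (np.exp(-1j * evals * tau) * (evecs.T @ psi0))
prob = np.abs(psi_tau) ** 2 # probability of finding the computer at step t
assert np.isclose(prob.sum(), 1.0)
```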
\section{8-state translationally invariant construction}
\label{sec:8state}
\begin{figure}
\includegraphics[width=0.5\textwidth]{kvsdTI.png}
\caption{(color online) The status of locality $k$ vs. local Hilbert-space dimension (level) $d$ for universal quantum computation (BQP) in one spatial dimension. Here the Hamiltonian is restricted to translationally invariant ones.
\label{fig:kvsdTI} }
\end{figure}
The 5-state construction in the last section yields a Hamiltonian that is not translationally invariant. If we impose the requirement that the resulting Hamiltonian be translationally invariant (at least with respect to translations across unit cells containing a fixed finite number of sites), then the minimum local dimension $d$ required to achieve a universal Hamiltonian quantum computer becomes larger. In Fig.~\ref{fig:kvsdTI} we present the status of such 1D Hamiltonians on the $k$ vs. $d$ diagram. It turns out that we can find a construction using a local dimension $d=8$. This can be regarded as a 3-local version of a continuous-time quantum cellular automaton, and it has the advantage of serving as a programmable quantum computer. This quantum computer is also operated in three steps: (i) prepare an appropriate initial product state, (ii) let the system evolve under the Hamiltonian, and (iii) perform a measurement on all sites at a later time.
The transition rules used here are translationally invariant, as shown below. Our construction is a modification of the 2-local 20-state quantum cellular automaton by Nagaj and Wocjan~\cite{NagajWocjan}, and it also has a unique sequence of the history states via the transition rules on a properly initialized state. Similar to the construction in Sec.~\ref{sec:5state}, the dynamics of the transition rules (or the program) is such that
``particles'', mediating gate instructions, move above the data so that gates are executed at the right location. This data-movement technique comes
from the original construction in Ref.~\cite{1DQMA}.
On one sub-lattice (sites labeled by, say, half-integers), the local Hilbert space is composed of the following states
\begin{eqnarray}
\label{eqn:cursor}
\{\,\blacktriangleright,\,\vartriangleright,\,\blacktriangleleft,\,\vartriangleleft,\Rightarrow,\rightarrow,\circlearrowleft,*\},
\end{eqnarray}
which can be regarded as the possible states of a cursor,
whereas on the other sub-lattice (sites labeled by, say, integers), it is composed of the following states
\begin{eqnarray}
\label{eqn:programdata}
\{\:I\,,\:S\,,W,\,\bullet\,\}\otimes\{0,1\} ,
\end{eqnarray}
a tensor product of program and data registers.
Hence $d=8$ at every site. The one-dimensional physical lattice thus has two sites in a unit cell.
We note that the symbols $S$ and $W$ are associated with the swap gate $S$ and the $W$ gate, respectively, where
\begin{equation}
\label{eqn:S}
S=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{array}\right)
\end{equation}
and
\begin{equation}
\label{eqn:W}
W=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\
0 & 0 &\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}\right),
\end{equation}
where it is chosen for convenience that the control qubit sits geometrically to the left of the target qubit in our one-dimensional geometry.
One can show that the gates $S$ and $W$ can simulate a universal set of gates (in fact $W$ alone suffices, if it can be applied to any pair of qubits), and therefore $\{S,W\}$ constitutes a universal set of gates; see Appendix~\ref{app:proof} for a proof.
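The basic algebra of these two gates is easy to verify numerically; the sketch below checks Eqs.~(\ref{eqn:S}) and (\ref{eqn:W}). (The observation that $W^8=I$, since the lower block of $W$ is a rotation by $\pi/4$, is ours and plays no role in the universality proof of Appendix~\ref{app:proof}.)

```python
import numpy as np

s2 = 1.0 / np.sqrt(2.0)
S = np.array([[1, 0, 0, 0],          # swap gate, Eq. (S)
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
W = np.array([[1, 0, 0, 0],          # controlled pi/4 rotation, Eq. (W)
              [0, 1, 0, 0],
              [0, 0, s2, -s2],
              [0, 0, s2,  s2]])

I4 = np.eye(4)
assert np.allclose(S @ S, I4)                          # S is an involution
assert np.allclose(W @ W.T, I4)                        # W is (real) unitary
assert np.allclose(np.linalg.matrix_power(W, 8), I4)   # the rotation has order 8
assert not np.allclose(S @ W, W @ S)                   # S and W do not commute
```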
In the original construction of Nagaj and Wocjan~\cite{NagajWocjan} on every site there are $d=20$ basis states given by
\begin{eqnarray*}
\{\:I\,,\:S\,,W,\,\bullet\,,\ici,\sci,\wci,\,\blacktriangleright,\,\vartriangleright,\circlearrowleft\}\otimes\{0,1\}.
\end{eqnarray*}
To reduce the local dimension $d$ to $8$, we remove six of the symbols in the first bracket, $\{\ici,\sci,\wci,\,\blacktriangleright,\,\vartriangleright,\circlearrowleft\}$, leaving those in Eq.~(\ref{eqn:programdata}), composed of program and data registers. To maintain the same computational capability, however, we insert one additional site (referred to as a cursor site), with the possible states shown in Eq.~(\ref{eqn:cursor}), between every two original sites, and modify the transition rules accordingly. Our scheme is thus constructed on the basis of theirs.
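The dimension counting behind this reduction can be made explicit; in the sketch below the string labels are our own shorthand for the ten Nagaj--Wocjan symbols and for our cursor alphabet of Eq.~(\ref{eqn:cursor}):

```python
# Our 8-state construction: one sub-lattice carries the cursor alphabet,
# the other a (program symbol) x (data qubit) pair.
cursor_states = ['btr', 'vtr', 'btl', 'vtl', 'Ra', 'ra', 'turn', 'star']
program_symbols = ['I', 'S', 'W', 'dot']
prog_data_states = [(p, q) for p in program_symbols for q in (0, 1)]

# Original Nagaj-Wocjan scheme: ten symbols tensored with a qubit.
original_symbols = ['I', 'S', 'W', 'dot', 'Ic', 'Sc', 'Wc',
                    'btr', 'vtr', 'turn']
original_states = [(p, q) for p in original_symbols for q in (0, 1)]

assert len(cursor_states) == len(prog_data_states) == 8   # d = 8 per site
assert len(original_states) == 20                         # d = 20 per site
```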
The transition rules of our Hamiltonian computer are listed as follows.
\begin{eqnarray}
\mbox{1a:} \qquad
\begin{array}{lcr}
\TypeARule{\,*\,}{\,\blacktriangleleft}{\,\bullet\,}&\longrightarrow&\TypeARule{\,\vartriangleleft}{\,*\,}{\,\bullet\,}
\end{array}
\end{eqnarray}
\begin{eqnarray}
\mbox{1b:}\qquad
\begin{array}{lcr}
\TypeARule{\,*\,}{\,\vartriangleleft}{\,\bullet\,}&\longrightarrow&\TypeARule{\circlearrowleft}{\,*\,}{\,\bullet\,}
\end{array}
\end{eqnarray}
\begin{eqnarray}
\mbox{1c:}\qquad
\begin{array}{lcr}
\TypeARule{\,*\,}{\,\vartriangleleft}{\,A\,}&\longrightarrow&\TypeARule{\,\vartriangleleft}{\,*\,}{\,A\,}
\end{array}
\end{eqnarray}
We note that the gate symbol $\,A\,$ can be any one of the three: $\{\:I\,,\:S\,,W\}$, where $I$ is the identity gate.
\begin{eqnarray}\mbox{2a:}\qquad
\begin{array}{lcr}
\TypeARule{\Rightarrow}{\,*\,}{\,\bullet\,}&\longrightarrow&\TypeARule{\,*\,}{\,\blacktriangleright}{\,\bullet\,}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{2b:}\qquad
\begin{array}{lcr}
\TypeARule{\rightarrow}{\,*\,}{\,\bullet\,}&\longrightarrow&\TypeARule{\,*\,}{\,\vartriangleright}{\,\bullet\,}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{3a:}\qquad
\begin{array}{lcr}
\triURB{\circlearrowleft}{\,\bullet\,}{\,\bullet\,}{\,1\,}&\longrightarrow&\triURB{\Rightarrow}{\,\bullet\,}{\,\bullet\,}{1}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{3b:}\qquad
\begin{array}{lcr}
\triURB{\circlearrowleft}{\,\bullet\,}{\,\bullet\,}{\,0\,}&\longrightarrow&\triURB{\rightarrow}{\,\bullet\,}{\,\bullet\,}{0}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{4a:}\qquad
\begin{array}{lcr}
\TypeBRuleBB{\,\blacktriangleright}{\,\bullet\,}{\,A\,}{x}{y}&\longrightarrow&\TypeBRuleCC{\Rightarrow}{\,A\,}{\,\bullet\,}{x}{y}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{4b:}\qquad
\begin{array}{lcr}
\TypeBRuleR{\,\vartriangleright}{\,\bullet\,}{\,A\,}&\longrightarrow&\TypeBRuleR{\rightarrow}{\,A\,}{\,\bullet\,}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{5a:}\qquad
\begin{array}{lcr}
\triULB{\,\blacktriangleright}{\,\bullet\,}{\,\bullet\,}{\,1\,}&\longrightarrow&\triULB{\,\blacktriangleleft}{\,\bullet\,}{\,\bullet\,}{1}
\end{array}
\end{eqnarray}
\begin{eqnarray}\mbox{5b:}\qquad
\begin{array}{lcr}
\triULB{\,\vartriangleright}{\,\bullet\,}{\,\bullet\,}{\,0\,}&\longrightarrow&\triULB{\,\blacktriangleleft}{\,\bullet\,}{\,\bullet\,}{0}
\end{array}
\end{eqnarray}
The reverse rules are obvious, except for that of the rule 4a:
\begin{eqnarray}\mbox{4a}^\dagger:\qquad
\begin{array}{lcr}
\TypeBRuleBB{\Rightarrow}{\,A\,}{\,\bullet\,}{x}{y}&\longrightarrow&\TypeBRuleCD{\,\blacktriangleright}{\,\bullet\,}{\,A\,}{x}{y}
\end{array}
\end{eqnarray}
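In the forward direction exactly one of the rules above applies to any properly initialized configuration, so the discrete history can be reproduced by a short classical simulation. The sketch below is our own: cursor symbols are encoded by ad hoc string labels (\texttt{btl}/\texttt{vtl} for the solid/empty left-triangles, \texttt{btr}/\texttt{vtr} for the right-triangles, \texttt{Ra}/\texttt{ra} for the double/single right-arrows, \texttt{turn} for $\circlearrowleft$), and the data register is kept classical by choosing $w_i=0$, for which every $S$ and $W$ in the example acts trivially on basis states (a $W$ firing on control $1$ would require tracking amplitudes).

```python
GATES = {'I', 'S', 'W'}

def apply_gate(A, d, j):
    # Gate A acts on the data qubits (d[j], d[j+1]); classical basis
    # states only, so W is allowed only when its control qubit d[j] is 0.
    if A == 'S':
        d[j], d[j + 1] = d[j + 1], d[j]
    elif A == 'W' and d[j] == 1:
        raise NotImplementedError('W on control 1 creates superposition')

def step(b, p, d):
    # Apply the unique applicable forward rule; b[j] is cursor site b_{j+1/2}.
    j = next(k for k in range(1, len(b)) if b[k] != '*')
    s = b[j]
    if s == 'btl' and p[j] == '.':                       # rule 1a
        b[j], b[j - 1] = '*', 'vtl'
    elif s == 'vtl' and p[j] in GATES:                   # rule 1c
        b[j], b[j - 1] = '*', 'vtl'
    elif s == 'vtl' and p[j] == '.':                     # rule 1b
        b[j], b[j - 1] = '*', 'turn'
    elif s == 'turn':                                    # rules 3a / 3b
        b[j] = 'Ra' if d[j + 1] == 1 else 'ra'
    elif s == 'Ra' and p[j + 1] == '.':                  # rule 2a
        b[j], b[j + 1] = '*', 'btr'
    elif s == 'ra' and p[j + 1] == '.':                  # rule 2b
        b[j], b[j + 1] = '*', 'vtr'
    elif s == 'btr' and p[j] == '.' and p[j + 1] in GATES:      # rule 4a
        apply_gate(p[j + 1], d, j)                       # the gate fires here
        p[j], p[j + 1] = p[j + 1], p[j]
        b[j] = 'Ra'
    elif s == 'btr' and p[j] == '.' == p[j + 1] and d[j] == 1:  # rule 5a
        b[j] = 'btl'
    elif s == 'vtr' and p[j] == '.' and p[j + 1] in GATES:      # rule 4b
        p[j], p[j + 1] = p[j + 1], p[j]                  # gate moves, no effect
        b[j] = 'ra'
    elif s == 'vtr' and p[j] == '.' == p[j + 1] and d[j] == 0:  # rule 5b
        b[j] = 'btl'
    else:
        raise RuntimeError('no rule applies')
    return next(k for k in range(1, len(b)) if b[k] != '*')

# The n = 3, R = 2 example below (index 0 unused; qubits w_i = 0).
b = [None] + ['*'] * 14 + ['btl']
p = [None] + ['.'] * 6 + ['I', 'W', 'S', 'I', 'I', 'S', 'W', 'I'] + ['.']
d = [None, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0]

hist = {}
for t in range(1, 42):
    j = step(b, p, d)
    hist[t] = (b[j], j)       # cursor symbol and position after step t
```

Stepping $41$ times from the initial configuration $[0]$ below reproduces the cursor motion of the displayed configurations, e.g.\ \texttt{hist[12] == ('btr', 6)} and \texttt{hist[41] == ('vtr', 5)}, with the data register left unchanged.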
\begin{widetext}
The total Hilbert space is composed of states of the following form:
\begin{eqnarray}
\ket{\varphi} = \bigotimes_{j=1}^{L} \left(\ket{b_{j+\frac{1}{2}}}\otimes\ket{p_j} \otimes \ket{d_j}\right)_j,
\label{start10product}
\end{eqnarray}
with $b_{j+\frac{1}{2}}$ denoting the state of the cursor register, $p_j$ the program register and $d_j$ the data register.
The initial state $|\psi_0\rangle$ is
\begin{eqnarray*}
[0]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
{j} & 1 &\phantom{*} & 2 &\phantom{*} & 3 &\phantom{*}& 4 &\phantom{*}& 5 &\phantom{*}& 6&\phantom{*} &
7 &\phantom{*}& 8 &\phantom{*}& 9 &\phantom{*}& 10 &\phantom{*}&
11 &\phantom{*}& 12 &\phantom{*} & 13 &\phantom{*} & 14&\phantom{*}& 15 &\phantom{*}& \\
\hline
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{\blacktriangleleft} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
where the $w_i$'s correspond to the actual qubits in the circuit model (see Fig.~\ref{fig:circuit}) and need to be properly initialized. We also note that the gates in the program register $p_j$ are arranged in the order
\begin{eqnarray}
I , \underbrace{U_{1,1}, \dots, U_{1,n-1}}_{\textrm{1st round}},
I, I, \underbrace{U_{2,1}, \dots, U_{2,n-1}}_{\textrm{2nd round}},
I, \dots,
I, \underbrace{U_{R,1}, \dots U_{R,n-1}}_{\textrm{last round}},\,I ,
\label{prog}
\end{eqnarray}
with $U_{k,j}$ being one of the three possible gates in the set $\{W,S,I\}$; in comparison with the gate sequence in Eq.~(\ref{eqn:prog}), here each round of gates is both preceded and followed by an identity gate. It is important to add the extra identity gates $I$ so that there is no undesired gate operation between the qubit $w_1$ and the qubit to its left, nor between the qubit $w_n$ and the qubit to its right~\cite{NagajWocjan}. Moreover, the qubit pattern in the data register, as illustrated in the example above, is
\begin{equation}
0\underbrace{100\dots0}_{ (R\!-\!1)\, \textrm{blocks}} 1\,w_1w_2w_3\dots w_n\underbrace{100\dots0}_{(R\!-\!1)\, \textrm{ blocks}}10,
\end{equation} with the spacing between the $1$'s being the same as the number of gates (including the identity gates) in a round. We note that there need to be at least $(R-1)$ blocks of space to the left of all the gates, so that $R$ rounds of gates can be completely executed. This particular pattern was designed by Nagaj and Wocjan~\cite{NagajWocjan} to appropriately activate the gate operations. Via the rule 1a, the initial state makes a transition to the following:
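This layout is mechanical enough to generate programmatically; the helper below is our own sketch (gates as string labels) and reproduces the $n=3$, $R=2$ example above.

```python
def program_register(rounds):
    """Gate sequence of Eq. (prog): each round of n-1 gates is preceded
    and followed by identity gates; `rounds` is a list of R lists of
    labels from {'W', 'S', 'I'}."""
    prog = ['I']
    for rnd in rounds[:-1]:
        prog += list(rnd) + ['I', 'I']
    return prog + list(rounds[-1]) + ['I']

def data_pattern(n, R, w):
    """Qubit pattern of the data register: 0, (R-1) blocks '10...0',
    1, the n qubits w, (R-1) blocks again, then 1, 0.  Each block has
    length n+1, the number of gates (identities included) in a round."""
    block = [1] + [0] * n
    return [0] + block * (R - 1) + [1] + list(w) + block * (R - 1) + [1, 0]

print(program_register([['W', 'S'], ['S', 'W']]))
# -> ['I', 'W', 'S', 'I', 'I', 'S', 'W', 'I']
print(data_pattern(3, 2, ['w1', 'w2', 'w3']))
# -> [0, 1, 0, 0, 0, 1, 'w1', 'w2', 'w3', 1, 0, 0, 0, 1, 0]
```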
\begin{eqnarray*}
[1] \qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{\,\vartriangleleft}& \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
where the solid left-triangle moves one step forward and turns into an empty left-triangle. Via the rule 1c, the empty left-triangle moves one step forward:
\begin{eqnarray*}
[2] \qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{\,\vartriangleleft}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
and continues until the configuration becomes the following one:
\begin{eqnarray*}
[9]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{\,\vartriangleleft} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The rule 1b then kicks in, generating a turn-around symbol $\circlearrowleft$:
\begin{eqnarray*}
[10]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\circlearrowleft}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
Via the rule 3a, the turn-around symbol becomes a double right-arrow:
\begin{eqnarray*}
[11]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\Rightarrow}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The double right-arrow moves and makes a transition via the rule 2a into a solid right-triangle:
\begin{eqnarray*}
[12]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{\,\blacktriangleright} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} &
\:I\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &\mybox{1} &\phantom{*} &
\mybox{w_1} &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
This is where the gate $\:I\,$ is applied and a double right-arrow is generated:
\begin{eqnarray*}
[13]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{\Rightarrow} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} &
\,\bullet\, &\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
Note that, for convenience, we will not change the symbols $w_i$ even when a nontrivial gate acts on them. The two boxes indicate the two qubits affected by the gate operation.
The previous two steps repeat a few times (note that the gates have no effect outside the qubit block $w_i$), and we arrive at the following configuration:
\begin{eqnarray*}
[26]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{\,\blacktriangleright}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\,\bullet\, &\phantom{*}&\:I\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & \mybox{0} &\phantom{*} & \mybox{1} &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
It then makes a transition (via the rule 4a) into
\begin{eqnarray*}
[27]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{\Rightarrow}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The double right-arrow moves and becomes the solid right-triangle (via the rule 2a):
\begin{eqnarray*}
[28]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{\,\blacktriangleright} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
At this moment the solid right-triangle makes a transition (via the rule 5a) to a solid left-triangle:
\begin{eqnarray*}
[29]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{\,\blacktriangleleft} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The solid left-triangle moves one step forward and turns into an empty left-triangle (via the rule 1a):
\begin{eqnarray*}
[30]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{\,\vartriangleleft}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The empty left-triangle can move to the left step by step via the rule 1c, until the configuration becomes:
\begin{eqnarray*}
[38]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\,\vartriangleleft}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The empty left-triangle moves to the left and turns into a turn-around symbol (via the rule 1b):
\begin{eqnarray*}
[39]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\circlearrowleft}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
Because the two nearby data qubits are $00$, the turn-around symbol becomes a right arrow (via the rule 3b):
\begin{eqnarray*}
[40]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\rightarrow}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The right arrow moves one step to the right and becomes an empty right-triangle (via the rule 2b):
\begin{eqnarray*}
[41]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\,\vartriangleright}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
Unlike the solid right-triangle, the empty right-triangle does not induce a gate operation; it simply moves one step forward and turns into a right arrow:
\begin{eqnarray*}
[42]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\rightarrow}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\, &\phantom{*}& \:I\,&\phantom{*}& \,\bullet\,&\phantom{*} & W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The previous two steps repeat a few times and we arrive at
\begin{eqnarray*}
[55]\qquad\begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{\,\vartriangleright} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} & \,\bullet\, &\phantom{*}&
\:I\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The empty right-triangle moves and becomes a right arrow:
\begin{eqnarray*}
[56]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{\rightarrow} & \phantom{\,\bullet\,}&{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
The right arrow moves and becomes an empty right-triangle:
\begin{eqnarray*}
[57]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{\,\vartriangleright}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
Via the rule 5b, the empty right-triangle turns into a solid left-triangle:
\begin{eqnarray*}
[58]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{\,\blacktriangleleft} & \phantom{\,\bullet\,}&{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
It then moves one step to the left and turns into an empty left-triangle:
\begin{eqnarray*}
[59]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{\,\vartriangleleft}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}& \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
After a few rounds, we finally arrive at the final state $|\psi_T\rangle$ via the transition rules and all the gates have been applied:
\begin{eqnarray*}
[154]\qquad \begin{array}{c|cccccccccccccccccccccccccccccccccc}
b_{j+\frac{1}{2}} & \phantom{\,\bullet\,} &{\,\vartriangleleft} & \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,}&{*} &
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*}&
\phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,} &{*}& \phantom{\,\bullet\,} &{*} & \phantom{\,\bullet\,}&{*} & \phantom{\,\bullet\,} & {*} \\
\hline
p_{j} & \,\bullet\,&\phantom{*} & \:I\,&\phantom{*}& W &\phantom{*}& \:S\, &\phantom{*}& \:I\, &\phantom{*}&
\:I\, &\phantom{*}& \:S\, &\phantom{*} & W &\phantom{*} &
\:I\, &\phantom{*}&\,\bullet\, &\phantom{*}&\,\bullet\,&\phantom{*}& \,\bullet\, &\phantom{*}&\,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*} & \,\bullet\, &\phantom{*}&
\\
d_{j} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} &1 &\phantom{*} &
w_1 &\phantom{*} & w_2 &\phantom{*} & w_3 &\phantom{*} & 1 &\phantom{*}&
0 &\phantom{*} & 0 &\phantom{*} & 0 &\phantom{*} & 1 &\phantom{*} & 0 &\phantom{*}
\end{array}
\end{eqnarray*}
There is no further forward transition. From the above examples and the transition rules, we can count the total number of transitions made in reaching the last configuration to be $T=6+ (n+1)\Big(3R(R-1)(n+1)+9R-5\Big)$, so there are in total $T+1$ history states. The step number (counting from zero) is indicated in the square brackets $[\,]$ in the above configurations.
As remarked earlier, the quantum computation is executed not by discrete transitions but by the continuous time evolution generated by the Hamiltonian, $\exp(-iHt)$. The construction of the Hamiltonian is similar to that in Sec.~\ref{sec:5state}, so we will not explicitly write down all the terms. All that is needed is the effective Hamiltonian in the subspace of the history states. We analyze the probability of arriving at a desired computation in the history-state basis in Sec.~\ref{sec:Prob}.
\end{widetext}
\noindent {\bf Periodic boundary condition}. We have been using an open boundary condition, namely, the first site is not connected to the last site. What if our geometry is a circle? In this case the transitions would not terminate but would continue forever, with gates being applied multiple times. To prevent this, we simply add an additional column at the last site with $*$ replaced by a state $X$. There is no transition rule involving $X$, so the transitions terminate once the symbol $\,\vartriangleleft$ reaches the site adjacent to $X$ from the other end. This, however, raises the dimension $d$ on the half-integer sites to 9. We note that for the periodic boundary condition the local dimension of the 2-local 10-state construction by Nagaj and Wocjan~\cite{NagajWocjan} would need to increase to 12, and their 20-state construction would need to be modified to have 22 states.
\section{Probability analysis}
\label{sec:Prob}
One common feature of both the above constructions is that with the proper initial condition, there is only one forward transition rule that applies, except at the last configuration. At any stage, there is only one backward transition rule that applies, except at the initial configuration. We will refer to the quantum states associated with these configurations linked via transition rules as the history states.
In the basis of the valid history states $\ket{\psi_t}$ ($t=0\dots T$ via the transition rules), the effective Hamiltonian is
\begin{equation}
H_{\rm eff} =-\sum_{t=0}^{T-1}\left(|\psi_{t+1}\rangle\langle \psi_{t}|+|\psi_{t}\rangle\langle \psi_{t+1}|\right).
\end{equation}
For simplicity, we will also denote $|\psi_{t}\rangle$ simply by $|t\rangle$, and the transition probability to arrive at the state $|m\rangle$ starting from $|0\rangle$ after evolving for time duration $\tau$ is
\begin{equation}
p_{\tau}(m|0) = \left|
\bra{m} e^{-iH_{\rm eff} \tau} \ket{0}
\right|^2.
\end{equation}
As shown by Nagaj and Wocjan~\cite{NagajWocjan}, for such a one-dimensional quantum walk Hamiltonian, the time-averaged probability of arriving at states $m>T/q$ (where $q$ is a positive integer greater than one) satisfies
\begin{eqnarray*}
P_{m> T/q}= \sum_{m> T/q} \frac{1}{\tau_0} \int_{0}^{\tau_0}d\tau\, p_{\tau}(m|0) \ge \frac{q-1}{q}
-{\cal O}(T/\tau_0).\end{eqnarray*}
This shows that the averaged total probability of ending up at a history state with label $m> T/q$ is very high. As mentioned in the Introduction, the trick to boost the success probability of completing the desired computation is to pad the circuit with sufficiently many identity gates so that for any $m> T/q$ the desired circuit has already been executed. Nagaj and Wocjan simply took $q=6$. Note that $\tau_0$ only needs to be chosen so that $T/\tau_0$ is small, for example, $\tau_0={\cal O}(T\log T)$.
\noindent {\bf Measurement}. As implied by the above averaged probability, the measurement scheme is to fix an appropriate $\tau_0$, evolve for a time $\tau$ chosen uniformly at random between $0$ and $\tau_0$, and then measure all sites in the computational basis, checking whether the resulting configuration corresponds to a history state with $m>T/q$. If so, the desired part of the computation has been executed and the remaining gates are all identity, so the output of the computational qubits is accepted. If not, the measurement outcome is discarded and the whole computation is restarted. The analysis above shows that, with sufficient padding of identity gates and an appropriately chosen $\tau_0$, the probability of completing the quantum computation is high.
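The averaged probability can also be checked directly. Because the path-graph spectrum is nondegenerate, the infinite-time average of $p_\tau(m|0)$ reduces to $\pi(m)=\sum_k |\langle m|\phi_k\rangle|^2 |\langle\phi_k|0\rangle|^2$ over the eigenvectors $\phi_k$ of $H_{\rm eff}$. The sketch below evaluates this exactly for a small instance ($T=60$, $q=6$), consistent with the $(q-1)/q=5/6$ bound:

```python
import numpy as np

T, q = 60, 6
H = np.zeros((T + 1, T + 1))
for t in range(T):
    H[t + 1, t] = H[t, t + 1] = -1.0
evals, V = np.linalg.eigh(H)   # path-graph eigenvalues are nondegenerate

# Infinite-time average of p_tau(m|0): pi(m) = sum_k V[m,k]^2 V[0,k]^2
pi = (V ** 2) @ (V[0, :] ** 2)
P_good = pi[np.arange(T + 1) > T / q].sum()
print(P_good)   # about 0.81, consistent with the (q-1)/q = 5/6 bound
```

For this walk the time-averaged distribution is in fact exactly uniform on the interior sites, so the loss relative to $(q-1)/q$ comes only from the discarded initial segment $m\le T/q$.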
\section{Concluding remarks}
\label{sec:con}
We address the compromise between the locality $k$ and the local Hilbert space dimension $d$ for one-dimensional Hamiltonian quantum computation.
Specifically, we provide a construction of a Hamiltonian quantum computer for $k=3$ with $d=5$. One implication is that simulating the dynamics of 1D chains of spin-2 particles is BQP-complete, as it would allow us to simulate a quantum computer. This construction, together with the previous ones, delineates the border between easy and hard one-dimensional Hamiltonians in terms of the complexity class BQP. It is possible that further improvement of the boundary can be made. Imposing translation invariance increases the required local dimension $d$. We thus also construct another 3-local ($k=3$) Hamiltonian that is invariant under translation by two sites but that requires $d$ to be 8. Simulating the dynamics of such translationally invariant Hamiltonians is also a BQP-complete task.
Correspondingly, there is also an easy-hard boundary on the locality $k$ vs local dimension $d$ plane for 1D translationally invariant Hamiltonians.
We do not know whether our constructions are optimal, namely, whether the local Hilbert space dimension $d$ is as small as it can be while maintaining universality. For the 5-state construction, we believe this is likely the case if one insists that the local dimension on every site be the same. On each site of one sub-lattice, there are two different kinds of qubits ($\,\blacktriangleright$ and $\qubit$), and the gate application exploits the two kinds of qubits. The additional state $\blank$ (unborn/dead) is also necessary. If we try to use only one kind of qubit (hence reducing $d_A$ to $3$), we have to increase the local dimension on the other sub-lattice to $d_B=7$ by adding two other symbols to enact the gate operations (the construction is not shown here).
For the continuous-time quantum cellular automata, our 3-local 8-state construction can be regarded as an improvement from the 2-local 20-state construction by Nagaj and Wocjan. These two constructions both have unique forward and backward transitions and the effective Hamiltonians are the same as the 1D quantum walk. The 2-local 10-state construction of Nagaj and Wocjan does not have unique forward and backward transitions and its Hamiltonian does not correspond to a 1D quantum walk. But $d=10$ is the lowest known so far for translationally invariant 2-local Hamiltonians. It may be possible for $d=8$ in our 3-local case to be further reduced if one does not use Hamiltonians constructed from unique forward and backward transitions for the appropriate history states. But we have not found one that has a lower local dimension. Regarding the local dimensions of 3-local quantum cellular automata, we also have other constructions with mixed local dimensions. For example we have one construction with mixed dimensions $d_A=2$ and $d_B=14$, another construction with $d_A=5$ and $d_B=12$, and yet another one with $d_A=6$ and $d_B=10$. We do not list these other constructions here.
One can similarly ask about the compromise between $k$ and $d$ for one-dimensional QMA-complete local Hamiltonian problems. The lowest local dimension known for 2-local Hamiltonians is $d=8$, due to work by Hallgren, Nagaj and Narayanaswami~\cite{Hallgren}. This means that 6-local ($k=6$) qubit ($d=2$) Hamiltonian problems are already QMA-complete. For $k=4$, at most $d=3$ is needed for QMA-completeness. But for $k=3$, how much lower than 8 can $d$ be? Our 3-local 5-state construction does not give rise to a Hamiltonian that is QMA-complete, as there are illegal configurations that retain zero energy even if we impose 3-local penalty terms.
\noindent {\bf Acknowledgment.} This work was supported by the
National Science Foundation under Grant No. PHY 1314748. J.C.L. acknowledges support from the Simons Summer Research Program 2015 at the Stony Brook University, where part of the work was carried out.
\appendix
\section{Elementary proof of the universality of the $W$ gate}
\label{app:proof}
In this Appendix we provide a proof that the $W$ gate by itself is universal. In the process of building up to the proof, we also review a proof by Aharonov that the Hadamard and Toffoli gates constitute a universal set of gates~\cite{Aharonov}. The fact that the $W$ gate~(\ref{eqn:W}) alone is universal has been known; see, e.g., Ref.~\cite{Shepherd}. We will not be concerned with efficiency, and we add the swap gate $S$ to our repertoire of gates, i.e., $\{S,W\}$. If one can apply the $W$ gate between any pair of qubits (including the choice of which qubit is the control and which is the target), then the swap gate is not necessary, as it corresponds merely to a re-wiring of the circuit. In our 1D continuous-time quantum cellular automaton, we fix the order of the control and target qubits, so we need the $S$ gate there. When necessary, we will use subscripts to denote the control $s$ and target $t$ in the $W$ gate, $W_{s\rightarrow t}$, or equivalently denote them inside brackets, $W(s,t)$.
Let us begin by making some simple observations. First, if we fix the control qubit to be in $\ket{1}$, then the action of the $W$ gate on the target is the following Hadamard-like gate, denoted by $H_y$,
\begin{equation}
H_y=\frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
1 & -1 \\
1 & 1
\end{array}\right).
\end{equation}
Thus with an ancilla, we can use the $W$ gate to simulate the $H_y$ gate and include it in our repertoire. Furthermore, by a direct calculation, we have
\begin{equation}
W_{1\rightarrow2}^4 S\, W_{1\rightarrow2}^4 S\, W_{1\rightarrow2}^4=I_1\otimes Z_2,
\end{equation}
and thus we can generate a Pauli Z gate
\begin{equation}
Z=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right).
\end{equation}
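The identity $W^4 S\, W^4 S\, W^4=I\otimes Z$ can be checked numerically. The sketch below assumes, as the observation above suggests, that $W$ is the controlled-$H_y$ gate; since $H_y$ is a rotation by $\pi/4$, we have $H_y^4=-I$ and hence $W^4=Z\otimes I$:

```python
import numpy as np

# Assumption: W = controlled-H_y (H_y applied to the target when the
# control qubit is |1>), with qubit 1 as the control.
Hy = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
I2, O2 = np.eye(2), np.zeros((2, 2))
W = np.block([[I2, O2], [O2, Hy]])
S = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
              [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)  # SWAP
Z = np.diag([1.0, -1.0])

W4 = np.linalg.matrix_power(W, 4)   # H_y^4 = -I, so W^4 = Z (x) I
lhs = W4 @ S @ W4 @ S @ W4
print(np.allclose(lhs, np.kron(I2, Z)))   # True
```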
The product of $H_y$ and $Z$ gives rise to the usual Hadamard gate $H$,
\begin{equation}
H\equiv H_y Z =\frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
1 & 1 \\
1 & -1
\end{array}\right).
\end{equation}
With the Hadamard gate and the $Z$ gate, we obtain the Pauli $X$ gate
\begin{equation}
X=H\,Z\,H=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}\right).
\end{equation}
One can also obtain the Control-NOT gate ($C_X$) via
\begin{equation}
C_X=W_{1\rightarrow2}^2S\,W_{1\rightarrow2}^6 S\, W_{1\rightarrow2}^2 S\, W_{1\rightarrow2}^6=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1\\
0 & 0 &1 & 0
\end{array}\right),
\end{equation}
which also allows us to obtain the $X$ gate. From the Hadamard gate $H$ and the $C_X$ gate, we can obtain the Control-Z gate $C_Z$. With the $X$ and $Z$ gates, we can obtain the $Y$ gate (the Pauli Y gate up to a factor of $i$) and its inverse:
\begin{equation}
Y\equiv XZ=\left(\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}\right), \quad Y^{-1} =\left(\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}\right).
\end{equation}
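The product formula for $C_X$ can likewise be verified numerically, under the same assumption that $W$ is the controlled-$H_y$ gate with qubit 1 as control:

```python
import numpy as np

Hy = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
I2, O2 = np.eye(2), np.zeros((2, 2))
W = np.block([[I2, O2], [O2, Hy]])      # assumed form of the W gate
S = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
              [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)  # SWAP

W2 = np.linalg.matrix_power(W, 2)
W6 = np.linalg.matrix_power(W, 6)
CX = W2 @ S @ W6 @ S @ W2 @ S @ W6      # the claimed product
CX_target = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                      [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
print(np.allclose(CX, CX_target))       # True
```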
\begin{figure}
\includegraphics[width=0.3\textwidth]{LambdaY0.png}
\caption{ The circuit to simulate the $\Lambda^2(Y)$ gate using the Toffoli and Hadamard gates. This construction relies on the identity $Y=XHXH$.
\label{fig:LY0} }
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.45\textwidth]{LambdaY.png}
\caption{The circuit to simulate the $\Lambda^2(Y)$ gate via gates generated by the $W$ gate, or equivalently the $C_{H_y}$ gate. Note that we have simplified the notation by replacing $W^3$ by a $C_{H_y^3}$ gate, and that the Control-NOT, $X$ and $Y$ gates can all be generated by the $W$ gate, as explained in the text.
\label{fig:LY} }
\end{figure}
With the above observation, we are almost ready to prove the universality of the $W$ gate. But before we do that it is instructive to review the proof by Aharonov that the Toffoli gate $T$ and the Hadamard gate $H$ constitute a universal set of quantum gates~\cite{Aharonov}. In the proof she used a result by Kitaev~\cite{Kitaev97} that the Hadamard gate and the Control-Phase gate $\Lambda(P(i))$,
\begin{equation}
\Lambda(P(i))=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 &0 & i
\end{array}\right),
\end{equation}
are also a universal set of gates. She then applied the notion of encoded universality to turn the (two-qubit) Control-Phase gate into an equivalent three-qubit gate (with only real numbers) which is in fact the Control-Control-$Y$ gate, which we write explicitly below,
\begin{equation}
\Lambda^2(Y)\equiv\left(
\begin{array}{cccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0& 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0& 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0& 0\\
0 & 0 &0 & 1 & 0 & 0 & 0& 0\\
0 & 0 &0 & 0 & 1 & 0 & 0& 0\\
0 & 0 &0 & 0 & 0 & 1 & 0& 0\\
0 & 0 &0 & 0 & 0 & 0 & 0& -1\\
0 & 0 &0 & 0 & 0 & 0 & 1 & 0
\end{array}\right).
\end{equation}
As shown in Fig.~\ref{fig:LY0}, this gate can be simulated by the Toffoli and the Hadamard gates, as
\begin{equation}
\Lambda^2(Y)=T(1,2,3)\,H(3)\,T(1,2,3)\,H(3),
\end{equation}
where the numbers inside the brackets indicate which qubits the gate acts on, and the explicit expression of the Toffoli gate (a.k.a.\ the Control-Control-NOT gate) is
\begin{equation}
T\equiv\left(
\begin{array}{cccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0& 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0& 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0& 0\\
0 & 0 &0 & 1 & 0 & 0 & 0& 0\\
0 & 0 &0 & 0 & 1 & 0 & 0& 0\\
0 & 0 &0 & 0 & 0 & 1 & 0& 0\\
0 & 0 &0 & 0 & 0 & 0 & 0& 1\\
0 & 0 &0 & 0 & 0 & 0 & 1 & 0
\end{array}\right).
\end{equation}
Therefore, the Hadamard and Toffoli gates are universal.
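The decomposition $\Lambda^2(Y)=T(1,2,3)H(3)T(1,2,3)H(3)$ can be confirmed by direct matrix multiplication: on the control subspace $\ket{11}$ the target experiences $XHXH=Y$, while elsewhere $H^2=I$. A short numerical check:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
Toff = np.eye(8)
Toff[6:8, 6:8] = [[0, 1], [1, 0]]              # Toffoli = CCX
H3 = np.kron(np.eye(4), H)                     # Hadamard on qubit 3
LY = np.eye(8)
LY[6:8, 6:8] = [[0, -1], [1, 0]]               # Lambda^2(Y), with Y = XZ
print(np.allclose(Toff @ H3 @ Toff @ H3, LY))  # True
```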
Finally, we now return to finish the proof that the $W$ gate is itself universal. Since we already have the Hadamard gate in our repertoire, we prove the universality of the $W$ gate by showing that the gate $\Lambda^2(Y)$ can be simulated by the gates in the repertoire set generated from $W$, i.e.,
\begin{eqnarray}
\Lambda^2(Y)&=&X(1)X(2) Y(3) W(1,3)^3\,C_X(1,2)W(2,3)^3\nonumber\\
& &\,C_X(1,2)W(2,3)^3 \,X(2)X(1),
\end{eqnarray}
the circuit for which is shown in Fig.~\ref{fig:LY}.
This equality can be verified directly by matrix multiplication, or by using elementary gate multiplications and identities (see chapter 4 of Ref.~\cite{NielsenChuang}). More specifically, by checking all possible control bits ($00$, $01$, $10$, $11$) one can verify that the combination of the five controlled gates in the middle of the circuit (Fig.~\ref{fig:LY}) gives rise to a Control-Control-$Y^{-1}$ gate, conditioned on the first two qubits not both being $0$. Including the $Y$ gate turns this block of gates into a Control-Control-$Y$ gate conditioned on the first two qubits both being $0$. The $X$ gates on both qubits before and after this block then make the whole circuit a Control-Control-$Y$ gate conditioned on the first two qubits both being $1$, i.e., $\Lambda^2(Y)$, completing the proof.
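For completeness, the full decomposition of $\Lambda^2(Y)$ into $W$-generated gates can also be checked numerically. The sketch below again assumes $W(s,t)$ is the controlled-$H_y$ gate with control $s$ and target $t$, and composes the gates as operator products (rightmost applied first); the helper names are our own:

```python
import numpy as np

Hy = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1.], [1., 0.]])    # Y = XZ, as in the text
I2 = np.eye(2)

def on(ops):
    """Tensor product of single-qubit operators over the 3 qubits."""
    m = np.eye(1)
    for o in ops:
        m = np.kron(m, o)
    return m

def ctrl(c, t, U):
    """Controlled-U on 3 qubits: control c, target t (1-indexed)."""
    proj0, proj1 = np.diag([1., 0.]), np.diag([0., 1.])
    a, b = [I2] * 3, [I2] * 3
    a[c - 1], b[c - 1], b[t - 1] = proj0, proj1, U
    return on(a) + on(b)

W13 = np.linalg.matrix_power(ctrl(1, 3, Hy), 3)   # W(1,3)^3
W23 = np.linalg.matrix_power(ctrl(2, 3, Hy), 3)   # W(2,3)^3
CX12 = ctrl(1, 2, X)

lhs = (on([X, I2, I2]) @ on([I2, X, I2]) @ on([I2, I2, Y]) @ W13
       @ CX12 @ W23 @ CX12 @ W23 @ on([I2, X, I2]) @ on([X, I2, I2]))
LY = np.eye(8)
LY[6:8, 6:8] = [[0, -1], [1, 0]]                  # Lambda^2(Y)
print(np.allclose(lhs, LY))                       # True
```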
\begin{thebibliography}{99}
\bibitem{NielsenChuang}
M. Nielsen and I. Chuang, {\sl Quantum Computation and Quantum Information\/}
(Cambridge Univ. Press, 2000).
\bibitem{Kitaev}
A. Yu. Kitaev, Ann. Phys. (N.Y.) {\bf 303}, 2 (2003).
\bibitem{TQC}
C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. {\bf 80}, 1083 (2008).
\bibitem{Farhi}
E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science {\bf 292}, 472 (2001).
\bibitem{Averin}
D. Averin, Solid State Comm. {\bf 105}, 659 (1998).
\bibitem{Oneway}
R. Raussendorf and H. J. Briegel, Phys. Rev.
Lett. {\bf 86}, 5188 (2001).
\bibitem{Oneway2}
H. J. Briegel, D. E. Browne, W. D\"ur, R. Raussendorf, and M. Van den Nest,
Nature Phys. {\bf 5}, 19-26 (2009).
\bibitem{Benioff}
P. Benioff, J. Stat. Phys. {\bf 6}, 563 (1980).
\bibitem{Feynman}
R.~Feynman,
{Opt. News}, {\bf 11}, 11 (1985).
\bibitem{NagajWocjan}
D. Nagaj and P. Wocjan, Phys. Rev. A {\bf 78}, 032311 (2008).
\bibitem{KitaevShenVyalyi}
A. Yu. Kitaev, A. H. Shen and M. N. Vyalyi, {\it Classical and Quantum Computation\/} (AMS, Providence, 2002).
\bibitem{KempeRegev}
J. Kempe and O. Regev, Quantum Inf. Comput. {\bf 3}, 258 (2003).
\bibitem{KempeKitaevRegev}
J. Kempe, A. Kitaev, and O. Regev, SIAM J. Comput. {\bf 35}, 1070 (2006).
\bibitem{OliveiraTerhal}
R. Oliveira and B. Terhal, Quantum Inf. Comput. {\bf 8}, 0900 (2008).
\bibitem{1DQMA}
D. Aharonov, D. Gottesman, S. Irani, and J. Kempe,
Commun. Math. Phys. {\bf 287}, 41 (2009).
\bibitem{Hallgren}
S. Hallgren, D. Nagaj, and S. Narayanaswami,
Quantum Inf. Comput. {\bf 13}, 0721 (2013).
\bibitem{VollbrechtCirac}
K. G. H. Vollbrecht and J. I. Cirac, Phys. Rev. Lett. {\bf 100},
010501 (2008).
\bibitem{Kay}
A. Kay, Phys. Rev. A {\bf 78}, 012346 (2008).
\bibitem{ChaseLandahl}
B. A. Chase and A. J. Landahl, e-print arXiv:0802.1207.
\bibitem{Lloyd}
S. Lloyd, Science {\bf 261}, 1569 (1993).
\bibitem{Watrous}
J. Watrous, in Proceedings of the 36th IEEE Symposium on Foundations of Computer Science (IEEE Computer Society Press, Los Alamitos, California, 1995), p. 528.
\bibitem{Raussendorf}
R. Raussendorf, Phys. Rev. A {\bf 72}, 022301 (2005).
\bibitem{Shepherd}
D. J. Shepherd, T. Franz, and R. F. Werner,
Phys. Rev. Lett. {\bf 97}, 020502 (2006).
\bibitem{JanzingWocjan}
D. Janzing and P. Wocjan, Quantum Inf. Process. {\bf 4}, 129 (2005).
\bibitem{Child}
A. M. Childs,
Phys. Rev. Lett. {\bf 102}, 180501 (2009).
\bibitem{Aharonov}
D. Aharonov,
e-print arXiv:quant-ph/0301040v1.
\bibitem{Kitaev97}
A. Yu. Kitaev, Russian Math.
Surveys {\bf 52}, 1191 (1997).
\end{thebibliography}
\end{document}
\begin{document}
\title[Oscillations: self improvement]{BMO: Oscillations, self improvement, Gagliardo coordinate spaces and reverse
Hardy inequalities}
\author{Mario Milman}
\address{Instituto Argentino de Matematica}
\email{[email protected]}
\urladdr{https://sites.google.com/site/mariomilman}
\thanks{The author was partially supported by a grant from the Simons Foundation
(\#207929 to Mario Milman)}
\begin{abstract}
A new approach to classical self improving results for $BMO$ functions is
presented. \textquotedblleft Coordinate Gagliardo spaces\textquotedblright\ are introduced and a
generalized version of the John-Nirenberg Lemma is proved. Applications are provided.
\end{abstract}
\dedicatory{Para Corita\footnote{See last Section: \emph{a brief personal note on Cora
Sadosky}.}}\maketitle
\tableofcontents
\section{Introduction and Background}
Interpolation theory provides a framework, as well as an arsenal of tools,
that can help in our understanding of the properties of function spaces and
the operators acting on them. Conversely, the interaction of the abstract
theory of interpolation with concrete function spaces can lead to new general
methods and results. In this note we consider some aspects of the interaction
between interpolation theory and $BMO,$ focussing on the self improving
properties of $BMO$ functions.
To fix the notation, in this section we shall consider functions defined on a
fixed cube, $Q_{0}\subset R^{n}.$ A prototypical example of the
self-improvement exhibited by $BMO$ functions is the statement that a function
in $BMO$ automatically belongs to all $L^{p}$ spaces, $p<\infty,$
\begin{equation}
BMO\subset
{\displaystyle\bigcap\limits_{p\geq1}}
L^{p}. \label{rates0}
\end{equation}
In fact, $BMO$ is contained in the Orlicz space $e^{L}.$ This is one of the
themes underlying the John-Nirenberg Lemma \cite{johnN}. One way to obtain
this refinement is to make explicit the rates of decay of the family of
embeddings implied by (\ref{rates0}).
We consider in detail an inequality apparently first shown in \cite{chenzu},
\begin{equation}
\left\Vert f\right\Vert _{L^{q}}\leq C_{n}q\left\Vert f\right\Vert _{L^{p}
}^{p/q}\left\Vert f\right\Vert _{BMO}^{1-p/q},1\leq p<q<\infty. \label{intro1}
\end{equation}
With (\ref{intro1}) at hand we can, for example, extrapolate by the $\Delta
-$method of \cite{jm}, and the exponential integrability of $BMO$ functions
follows (cf. (\ref{expo}) below)
\begin{equation}
\left\Vert f\right\Vert _{e^{L}}\sim\sup_{q>1}\frac{\left\Vert f\right\Vert
_{L^{q}}}{q}\leq c_{n}\left\Vert f\right\Vert _{BMO}. \label{linfty0}
\end{equation}
More generally, for compatible Banach spaces, interpolation inequalities of
the form
\begin{equation}
\left\Vert f\right\Vert _{X}\leq c(\theta)\left\Vert f\right\Vert _{X_{1}
}^{1-\theta}\left\Vert f\right\Vert _{X_{2}}^{\theta},\text{ }\theta\in(0,1),
\label{intro2}
\end{equation}
where $c(\theta)$ are constants that depend only on $\theta,$ play an
important role in analysis. What is needed to extract information at the end
points (e.g. by \textquotedblleft extrapolation" \cite{jm}) is to have good
estimates of the rate of decay $c(\theta),$ as $\theta$ tends to $0$ or to
$1.$ We give a brief summary of inequalities of the form (\ref{intro2}) for
the classic methods of interpolation in Section \ref{secc:inter} below. For
example, a typical interpolation inequality of the form (\ref{intro2}) for the
Lions-Peetre real interpolation spaces can be formulated as follows. Given a
compatible pair\footnote{We refer to Section \ref{secc:inter} for more
details.} of Banach spaces $\vec{X}=(X_{1},X_{2})$, the \textquotedblleft
$K-$functional\textquotedblright\ is defined for $f\in\Sigma(\vec{X}
)=X_{1}+X_{2},t>0,$ by
\begin{equation}
K(t,f;\vec{X}):=K(t,f;X_{1},X_{2})=\inf_{f=f_{1}+f_{2},f_{i}\in X_{i}
}\{\left\Vert f_{1}\right\Vert _{X_{1}}+t\left\Vert f_{2}\right\Vert _{X_{2}
}\}. \label{k1}
\end{equation}
The real interpolation spaces $\vec{X}_{\theta,q}$ can be defined through the
use of the $K-$functional. For $\theta\in(0,1),0<q\leq\infty,$ we let
\begin{equation}
\vec{X}_{\theta,q}=\{f\in\Sigma(\vec{X}):\left\Vert f\right\Vert _{\vec
{X}_{\theta,q}}<\infty\}, \label{arriba}
\end{equation}
where\footnote{with the usual modification when $q=\infty.$}
\[
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}}=\left\{ \int_{0}^{\infty
}\left[ t^{-\theta}K(t,f;\vec{X})\right] ^{q}\frac{dt}{t}\right\} ^{1/q}.
\]
We have (cf. Lemma \ref{marcaada} below) that, for $f\in X_{1}\cap
X_{2},0<\theta<1,1\leq q\leq\infty$,
\begin{equation}
\lbrack(1-\theta)\theta q]^{1/q}\left\Vert f\right\Vert _{(X_{1}
,X_{2})_{\theta,q}}\leq\left\Vert f\right\Vert _{X_{1}}^{1-\theta}\left\Vert
f\right\Vert _{X_{2}}^{\theta}. \label{rates}
\end{equation}
We combine (\ref{rates}), the known real interpolation theory of $BMO$ (cf.
\cite[Theorem 6.1]{besa}, \cite{bs} and the references therein), sharp reverse
Hardy inequalities (cf. \cite{ren} and \cite{mil}), and the re-scaling of
inequalities via the reiteration method (cf. \cite{bl}, \cite{holm0}) to give
a new \emph{interpolation} proof of (\ref{intro1}) in Lemma \ref{marcao} below.
Let us now recall how the study of $BMO$ led to new theoretical developments
in interpolation theory\footnote{Paradoxically, except for Section
\ref{secc:bilinear}, in this paper we do not discuss interpolation theorems
per se. For interpolation theorems involving $BMO$ type of spaces there is a
large literature. For articles that are related to the developments in this
note I refer, for example, to \cite{herz}, \cite{bedesa}, \cite{sag},
\cite{misa}, \cite{jm1}, \cite{kru}.}.
A natural follow-up question to (\ref{linfty0}) was to obtain the best
possible integrability condition satisfied by $BMO$ functions. The answer was
found by Bennett-DeVore-Sharpley \cite{bedesa}. They showed the
inequality\footnote{where $f^{\ast}$ denotes the non-increasing rearrangement
of $f$ and $f^{\ast\ast}(t)=\frac{1}{t}\int_{0}^{t}f^{\ast}(s)ds.$}
\begin{equation}
\left\Vert f\right\Vert _{L(\infty,\infty)}:=\sup_{t}\left\{ f^{\ast\ast
}(t)-f^{\ast}(t)\right\} \leq c_{n}\left\Vert f\right\Vert _{BMO}.
\label{linfty}
\end{equation}
The refinement here is that the (non-linear) function space $L(\infty
,\infty),$ defined by the condition
\[
\left\Vert f\right\Vert _{L(\infty,\infty)}<\infty,
\]
is strictly contained\footnote{The smallest rearrangement invariant space that
contains $BMO$ is $e^{L},$ as was shown by Pustylnik \cite{pus}.} in $e^{L}.$
In their celebrated work, Bennett-DeVore-Sharpley \cite{bedesa} proposed the
following connection between real interpolation, weak interpolation, and $BMO$
(cf. \cite[page 384]{bs}). The $K-$functional for the pair $(L^{1},L^{\infty
})$ is given by (cf. \cite{bs} and Section \ref{secc:bgextra} below)
\[
K(t,f;L^{1},L^{\infty})=\int_{0}^{t}f^{\ast}(s)ds.
\]
Therefore, $\frac{dK(t,f;L^{1},L^{\infty})}{dt}=K^{\prime}(t,f;L^{1}
,L^{\infty})=f^{\ast}(t);$ consequently, we can compute the \textquotedblleft
norm" of weak $L^{1}:=L(1,\infty)$, as follows
\begin{align*}
\left\Vert f\right\Vert _{L(1,\infty)} & =\sup_{t>0}tf^{\ast}(t)\\
& =\sup_{t>0}tK^{\prime}(t,f;L^{1},L^{\infty}).
\end{align*}
Then, in analogy with the definition of weak $L^{1},$ Bennett-DeVore-Sharpley
proceeded to define $L(\infty,\infty)$ using the functional
\begin{equation}
\left\Vert f\right\Vert _{L(\infty,\infty)}:=\sup_{t>0}tK^{\prime
}(t,f;L^{\infty},L^{1}). \label{berta4}
\end{equation}
Note that in (\ref{berta4}) the order of the spaces is reversed in the
computation of the $K-$functional. These two different $K-$functionals are
connected by the equation
\begin{equation}
K(t,f;L^{\infty},L^{1})=tK(\frac{1}{t},f;L^{1},L^{\infty}). \label{ladada}
\end{equation}
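For completeness, we note that (\ref{ladada}) is a special case of a general
symmetry of the $K-$functional: for any Banach pair $(X_{1},X_{2}),$ $f\in
X_{1}+X_{2},$ and $t>0,$
\begin{align*}
K(t,f;X_{2},X_{1})  & =\inf_{f=f_{1}+f_{2},f_{i}\in X_{i}}\{\left\Vert
f_{2}\right\Vert _{X_{2}}+t\left\Vert f_{1}\right\Vert _{X_{1}}\}\\
& =t\inf_{f=f_{1}+f_{2},f_{i}\in X_{i}}\{\left\Vert f_{1}\right\Vert _{X_{1}
}+\frac{1}{t}\left\Vert f_{2}\right\Vert _{X_{2}}\}\\
& =tK(\frac{1}{t},f;X_{1},X_{2}).
\end{align*}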
Inserting (\ref{ladada}) in (\ref{berta4}) we readily see that
\begin{align*}
\left\Vert f\right\Vert _{L(\infty,\infty)} & =\sup_{t>0}\{tK(\frac{1}
{t},f;L^{1},L^{\infty})-K^{\prime}(\frac{1}{t},f;L^{1},L^{\infty})\}\\
& =\sup_{t>0}\{\frac{K(t,f;L^{1},L^{\infty})}{t}-K^{\prime}(t,f;L^{1}
,L^{\infty})\}\\
& =\sup_{t}\{f^{\ast\ast}(t)-f^{\ast}(t)\}.
\end{align*}
The oscillation operator, $f\rightarrow f^{\ast\ast}(t)-f^{\ast}(t),$ turns
out to play an important role in other fundamental inequalities in analysis. A
recent remarkable application of the oscillation operator provides the sharp
form of the Hardy-Littlewood-Sobolev-O'Neil inequality up to the borderline end
point $p=n.$ Indeed, if we let
\begin{equation}
\left\Vert f\right\Vert _{L(p,q)}=\left\{
\begin{array}
[c]{cc}
\left\{ \int_{0}^{\infty}\left( f^{\ast}(t)t^{1/p}\right) ^{q}\frac{dt}
{t}\right\} ^{1/q} & 1\leq p<\infty,1\leq q\leq\infty\\
\left\Vert f\right\Vert _{L(\infty,q)} & 1\leq q\leq\infty,
\end{array}
\right. \label{berta}
\end{equation}
where\footnote{Apparently the $L(\infty,q)$ spaces for $q<\infty$ were first
introduced and their usefulness shown in \cite{bmr}. Note that with the usual
definition $L(\infty,\infty)$ would be $L^{\infty},$ and $L(\infty,q)=\{0\},$
for $q<\infty.$ The key point here is that the use of the oscillation operator
introduces cancellations that make the spaces defined in this fashion
non-trivial (cf. Section \ref{secc:coordenadas}, Example \ref{ejemplomarkao}
).}
\begin{equation}
\left\Vert f\right\Vert _{L(\infty,q)}:=\left\{ \int_{0}^{\infty}(f^{\ast
\ast}(t)-f^{\ast}(t))^{q}\frac{dt}{t}\right\} ^{1/q}, \label{berta1}
\end{equation}
then it was shown in \cite{bmr} that
\begin{equation}
\left\Vert f\right\Vert _{L(\bar{p},q)}\leq c_{n}\left\Vert \nabla
f\right\Vert _{L(p,q)},1\leq p\leq n,\frac{1}{\bar{p}}=\frac{1}{p}-\frac{1}
{n},\text{ }f\in C_{0}^{\infty}(R^{n}). \label{sobolev}
\end{equation}
The Sobolev inequality (\ref{sobolev}) is best possible, and for $p=q=n$ it
improves on the end point result of
Brezis-Wainger-Hanson-Maz'ya\footnote{which in turn improves upon the
classical exponential integrability result by Trudinger \cite{tru}.} (cf.
\cite{brez}, \cite{ha}, \cite{maz}) much as the Bennett-DeVore-Sharpley
inequality (\ref{linfty}) improves upon (\ref{linfty0}). The improvement over
\emph{best possible results} is feasible because, once again, the spaces that
correspond to $p=n,$ i.e.
\[
L(\infty,q)=\{f:\left\Vert f\right\Vert _{L(\infty,q)}<\infty\},
\]
are not necessarily linear\footnote{Let $X$ be a rearrangement invariant
space, Pustylnik \cite{pus} has given necessary and sufficient conditions for
spaces of functions defined by conditions of the form
\[
\left\Vert \left( f^{\ast\ast}-f^{\ast}\right) t^{-\gamma}\right\Vert
_{X}<\infty
\]
to be linear and normable.}!
Moreover, the Sobolev inequality (\ref{sobolev}) extends to higher order
derivatives\footnote{The improvement is also valid for Besov space
inequalities as well (cf. \cite{mamipams}).}, as was shown in \cite{milpu},
\[
\left\Vert f\right\Vert _{L(\bar{p},q)}\leq c_{n}\left\Vert \nabla
^{k}f\right\Vert _{L(p,q)},1\leq p\leq\frac{n}{k},\frac{1}{\bar{p}}=\frac
{1}{p}-\frac{k}{n},\text{ }f\in C_{0}^{\infty}(R^{n}).
\]
In particular, when $p=\frac{n}{k}$ and $q=\infty,$ we have the $BMO$ type
result\footnote{The spaces $L(\infty,q)$ allow one to interpolate between
$L^{\infty}=L(\infty,1)$ and $L(\infty,\infty)\subset e^{L}.$}
\[
\left\Vert f\right\Vert _{L(\infty,\infty)}\leq c\left\Vert \nabla
^{k}f\right\Vert _{L(\frac{n}{k},\infty)},f\in C_{0}^{\infty}(R^{n}).
\]
Using the space $L(\infty,\infty)$ one can improve (\ref{intro1}) as follows
(cf. \cite{kowa})
\begin{equation}
\left\Vert f\right\Vert _{L^{q}}\leq C_{n}q\left\Vert f\right\Vert _{L^{p}
}^{p/q}\left\Vert f\right\Vert _{L(\infty,\infty)}^{1-p/q},1\leq p<q<\infty.
\label{lageneral}
\end{equation}
In my work with Jawerth\footnote{The earlier work of Herz \cite{herz} and
Holmstedt \cite{holm}, which precedes \cite{bedesa}, should also be mentioned
here.} (cf. \cite{jm1}) we give a somewhat different interpretation of the
$L(\infty,q)$ spaces using Gagliardo diagrams (cf. \cite{bl}, \cite{cwbjmm0});
this point of view turns out to be useful to explain other applications of the
oscillation operator $f^{\ast\ast}(t)-f^{\ast}(t)$ (cf. \cite{cwbjmm},
\cite{mamicoc} and Section \ref{secc:uses}). The idea behind the approach in
\cite{jm1} is that of an \textquotedblleft optimal decomposition", which also
makes it possible to incorporate the $L(\infty,q)$ spaces into the abstract
theory of real interpolation, as we shall show below.
Let $t>0,$ and let $f\in\Sigma(\vec{X})=X_{1}+X_{2}.$ Out of all the competing
decompositions for the computation of $K(t,f;\vec{X}),$ an
optimal\footnote{Optimal decompositions exist for $a.e.$ $t>0.$}
decomposition
\[
f=D_{1}(t)f+D_{2}(t)f,\text{ with }D_{i}(t)f\in X_{i},i=1,2,
\]
satisfies
\begin{equation}
K(t,f;\vec{X})=\left\Vert D_{1}(t)f\right\Vert _{X_{1}}+t\left\Vert
D_{2}(t)f\right\Vert _{X_{2}}, \label{k3}
\end{equation}
(resp. a nearly optimal decomposition is obtained if in (\ref{k3}) we replace
$=$ by $\approx$). For an optimal decomposition of $f$ we
have\footnote{Interpreting $(\left\Vert D_{1}(t)f\right\Vert _{X_{1}
},\left\Vert D_{2}(t)f\right\Vert _{X_{2}})$ as coordinates on the boundary of
a Gagliardo diagram (cf. \cite{bl}) it follows readily that, for all
$\varepsilon>0,t>0,$ we can find nearly optimal decompositions
$f=x_{\varepsilon}(t)+y_{\varepsilon}(t),$ such that
\begin{equation}
(1-\varepsilon)[K(t,f;\vec{X})-t\frac{d}{dt}K(t,f;\vec{X})]\leq\left\Vert
x_{\varepsilon}(t)\right\Vert _{X_{1}}\leq(1+\varepsilon)[K(t,f;\vec
{X})-t\frac{d}{dt}K(t,f;\vec{X})] \label{a1'}
\end{equation}
\begin{equation}
(1-\varepsilon)\frac{d}{dt}K(t,f;\vec{X})\leq\left\Vert y_{\varepsilon
}(t)\right\Vert _{X_{2}}\leq(1+\varepsilon)\frac{d}{dt}K(t,f;\vec{X}).
\label{a2'}
\end{equation}
} (cf. \cite{holm}, \cite{jm1})
\begin{equation}
\left\Vert D_{1}(t)f\right\Vert _{X_{1}}=K(t,f;\vec{X})-t\frac{d}
{dt}K(t,f;\vec{X}); \label{a1}
\end{equation}
\begin{equation}
\left\Vert D_{2}(t)f\right\Vert _{X_{2}}=\frac{d}{dt}K(t,f;\vec{X}).
\label{a2}
\end{equation}
In particular, for the pair $(L^{1},L^{\infty}),$ we have (cf. Section
\ref{secc:bgextra}),
\begin{align*}
\left\Vert D_{1}(t)f\right\Vert _{L^{1}} & =K(t,f;L^{1},L^{\infty}
)-t\frac{d}{dt}K(t,f;L^{1},L^{\infty})\\
& =tf^{\ast\ast}(t)-tf^{\ast}(t),
\end{align*}
and
\[
\left\Vert D_{2}(t)f\right\Vert _{L^{\infty}}=\frac{d}{dt}K(t,f;L^{1}
,L^{\infty})=f^{\ast}(t).
\]
Thus, reinterpreting optimal decompositions using Gagliardo diagrams (cf.
\cite{holm}, \cite{jm1}), one is led to consider spaces, which could be
referred to as \textquotedblleft Gagliardo coordinate spaces". These spaces
coincide with the Lions-Peetre real interpolation spaces, for the usual range
of the parameters (cf. \cite{holm}), but they also make sense at the end
points (cf. \cite{jm1}), and in this fashion they can be used to complete the
Lions-Peetre scale much as the generalized $L(p,q)$ spaces, defined by
(\ref{berta}), complete the classical scale of Lorentz spaces. The
\textquotedblleft Gagliardo coordinate spaces" $\vec{X}_{\theta,q}
^{(i)},i=1,2,$ are formally obtained replacing $K(t,f;\vec{X}),$ in the
definition of the $\vec{X}_{\theta,q}$ norm of $f$ (cf. (\ref{arriba})
above)$,$ by $\left\Vert D_{1}(t)f\right\Vert _{X_{1}}$ (resp. $\left\Vert
D_{2}(t)f\right\Vert _{X_{2}})$ (cf. \cite{holm})$.$ In particular, we note
that the $\vec{X}_{\theta,q}^{(2)}$ spaces correspond to the $k$-spaces
studied by Bennett \cite{beka}.
This point of view led us to formulate and prove the following general version
of (\ref{rates}) (cf. Theorem \ref{teomarkao} below)
\begin{equation}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q(\theta)}^{(2)}}\leq cq\left\Vert
f\right\Vert _{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty
}^{(1)}}^{\theta},\theta\in(0,1),1-\theta=\frac{1}{q(\theta)}, \label{berta3}
\end{equation}
which can be easily seen to imply an abstract extrapolation theorem connected
with the John-Nirenberg inequality.
Underlying these developments is the following computation of the
$K-$functional for the pair $(L^{1},BMO)$ given in \cite{besa} (cf. Section
\ref{secc:self} below)
\begin{equation}
K(t,f,L^{1}(R^{n}),BMO(R^{n}))\approx tf^{\#\ast}(t), \label{obama}
\end{equation}
where $f^{\#}$ is the sharp maximal function of Fefferman-Stein \cite{fest}
(cf. (\ref{veaabajo}) below). In this calculation $BMO(R^{n})$ is provided
with the seminorm\footnote{$BMO(R^{n})$ can be normed by $\left\vert
f\right\vert _{BMO}$ if we identify functions that differ by a constant.} (cf.
Section \ref{secc:abajo})
\[
\left\vert f\right\vert _{BMO}=\left\Vert f_{R^{n}}^{\#}\right\Vert
_{L^{\infty}}.
\]
In \cite{chenzu}, the authors show that (\ref{intro1}) can be used to give a
strikingly \textquotedblleft easy" proof of an inequality first proved in
\cite{kowa1} using paraproducts,
\begin{equation}
\left\Vert fg\right\Vert _{L^{p}}\leq c(\left\Vert f\right\Vert _{L^{p}
}\left\Vert g\right\Vert _{BMO}+\left\Vert g\right\Vert _{L^{p}}\left\Vert
f\right\Vert _{BMO}),1<p<\infty. \label{intro3}
\end{equation}
The argument to prove (\ref{intro3}) given in \cite{chenzu} has a general
character and with suitable modifications (in particular, using the
\emph{reiteration theorem} of interpolation theory) the idea\footnote{We cannot
resist but to offer here our slight twist to the argument
\begin{align*}
\left\Vert fg\right\Vert _{L^{p}} & \leq\left\Vert f\right\Vert _{L^{2p}
}\left\Vert g\right\Vert _{L^{2p}}\\
& =\left\Vert f\right\Vert _{[L^{p},BMO]_{1/2,2p}}\left\Vert g\right\Vert
_{[L^{p},BMO]_{1/2,2p}}\\
& \preceq\left\Vert f\right\Vert _{L^{p}}^{1/2}\left\Vert f\right\Vert
_{BMO}^{1/2}\left\Vert g\right\Vert _{L^{p}}^{1/2}\left\Vert g\right\Vert
_{BMO}^{1/2}\\
& \leq\left\Vert f\right\Vert _{L^{p}}\left\Vert g\right\Vert _{BMO}
+\left\Vert g\right\Vert _{L^{p}}\left\Vert f\right\Vert _{BMO}.
\end{align*}
} can be combined with (\ref{berta3}) to yield a new end point result for
bilinear interpolation for generalized product and convolution operators of
O'Neil type acting on interpolation scales (see Section \ref{secc:bilinear}
below). Further, in Section \ref{secc:uses} we collect applications of the
methods discussed in the paper, and also offer some
suggestions\footnote{However, keep in mind the epigraph of \cite{restaurant},
originally due to Douglas Adams, The Restaurant at the End of the Universe, Tor
Books, 1988: \textquotedblleft For seven and a half million years, Deep
Thought computed and calculated, and in the end announced that the answer was
in fact Forty-two---and so another, even bigger, computer had to be built to
find out what the actual question was."} for further research. In particular,
in Section \ref{secc:singular} we give a new approach to well known results by
Bagby-Kurtz \cite{bagby}, Kurtz \cite{kurtz}, on the linear (in $p$) rate of
growth of $L^{p}$ estimates for certain singular integrals; in Section
\ref{secc:lambda} we discuss the connection between the classical good lambda
inequalities (cf. \cite{bg}, \cite{cof}) and the strong good lambda
inequalities of Bagby-Kurtz (cf. \cite{kurtz}), with inequalities for the
oscillation operator $f^{\ast\ast}-f^{\ast}$; in Section \ref{secc:bgextra} we
show how oscillation inequalities for Sobolev functions are connected with the
Gagliardo coordinate spaces and the property of commutation of the gradient
with optimal $(L^{1},L^{\infty})$ decompositions (cf. \cite{cwbjmm}), we also
discuss briefly the characterization of the isoperimetric inequality in terms
of rearrangement inequalities for Sobolev functions, in a very general context.
The intended audience for this note includes, on the one hand, classical
analysts who may be curious about what abstract interpolation constructions
could bring to the table, and on the other hand, functional analysts
specializing in interpolation theory who may want to see applications of the
abstract theories. To balance these objectives I have tried to give a
presentation full of details where interpolation theory is concerned, and to
provide full references to the background material needed for the applications to classical
analysis. In this respect, I have compiled a large set of references but the
reader should be warned that this paper is not intended to be a survey, and
that the list is intended only to document the material that is mentioned in the
text and simply reflects my own research interests, point of view, and
limitations. In fact, many important topics dear to me had to be left out,
including \emph{Garsia inequalities} (cf. \cite{garsiagrenoble}).
I close the note with some personal reminiscences of my friendship with Cora Sadosky.
\section{The John-Nirenberg Lemma and rearrangements\label{secc:abajo}}
In this section we recall a few basic definitions and results associated with
the self improving properties of $BMO$ functions\footnote{For more background
information we refer to \cite{bs} and \cite{cosa}.}. In particular, we discuss
the John-Nirenberg inequality (cf. \cite{johnN}). In what follows we always
let $Q$ denote a cube.
Let $Q_{0}$ be a fixed cube in $R^{n}$. For $x\in Q_{0},$ let
\begin{equation}
f_{Q_{0}}^{\#}(x)=\sup_{Q\ni x,Q\subset Q_{0}}\frac{1}{\left\vert
Q\right\vert }\int_{Q}\left\vert f-f_{Q}\right\vert dx,\text{ where }
f_{Q}=\frac{1}{\left\vert Q\right\vert }\int_{Q}fdx. \label{veaabajo}
\end{equation}
The space of functions of bounded mean oscillation, $BMO(Q_{0}),$ consists of
all the functions $f\in L^{1}(Q_{0})$ such that $f_{Q_{0}}^{\#}\in L^{\infty
}(Q_{0}).$ Generally, we use the seminorm
\begin{equation}
\left\vert f\right\vert _{BMO(Q_{0})}=\left\Vert f_{Q_{0}}^{\#}\right\Vert
_{L^{\infty}}. \label{llamada}
\end{equation}
The space $BMO(Q_{0})$ becomes a Banach space if we identify functions that
differ by a constant. Sometimes it is preferable for us to use
\[
\left\Vert f\right\Vert _{BMO(Q_{0})}=\left\vert f\right\vert _{BMO(Q_{0}
)}+\left\Vert f\right\Vert _{L^{1}(Q_{0})}.
\]
The classical John-Nirenberg Lemma is reformulated in \cite[Corollary 7.7,
page 381]{bs} as follows\footnote{For a recent new approach to the
John-Nirenberg Lemma we refer to \cite{css}.}: Given a fixed cube
$Q_{0}\subset R^{n},$ there exists a constant $c>0,$ such that for all $f\in
BMO(Q_{0}),$ and for all subcubes $Q\subset Q_{0},$
\begin{equation}
\lbrack\left( f-f_{Q}\right) \chi_{Q}]^{\ast}(t)\leq c\left\vert
f\right\vert _{BMO(Q_{0})}\log^{+}(\frac{6\left\vert Q\right\vert }{t}),t>0.
\label{jn1}
\end{equation}
In particular, $BMO$ has the following self improving property (cf.
\cite[Corollary 7.8, page 381]{bs}). Let $1\leq p<\infty,$ and
let\footnote{Note that $f_{Q_{0},1}^{\#}=f_{Q_{0}}^{\#}.$}
\[
f_{Q_{0},p}^{\#}(x)=\sup_{Q\ni x,Q\subset Q_{0}}\left\{  \frac
{1}{\left\vert Q\right\vert }\int_{Q}\left\vert f-f_{Q}\right\vert
^{p}dx\right\} ^{1/p},
\]
and
\[
\left\Vert f\right\Vert _{BMO^{p}(Q_{0})}=\left\Vert f_{Q_{0},p}
^{\#}\right\Vert _{L^{\infty}}+\left\Vert f\right\Vert _{L^{p}(Q_{0})}.
\]
Then, with constants independent of $f,$
\[
\left\Vert f\right\Vert _{BMO^{p}(Q_{0})}\approx\left\Vert f\right\Vert
_{BMO(Q_{0})}.
\]
It follows that, for all $p<\infty,$
\[
BMO(Q_{0})\subset L^{p}(Q_{0}).
\]
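For the reader's convenience, we note that integrating (\ref{jn1}) makes the
rate of growth of the embedding constants explicit, anticipating
(\ref{intro1}). Indeed, with the change of variables $t=6\left\vert
Q\right\vert e^{-u},$
\[
\int_{0}^{\left\vert Q\right\vert }\left[  \log^{+}(\frac{6\left\vert
Q\right\vert }{t})\right]  ^{p}dt\leq6\left\vert Q\right\vert \int_{0}
^{\infty}u^{p}e^{-u}du=6\left\vert Q\right\vert \Gamma(p+1),
\]
whence, by (\ref{jn1}) and Stirling's formula,
\[
\left\Vert \left(  f-f_{Q}\right)  \chi_{Q}\right\Vert _{L^{p}}\leq
c\left\vert f\right\vert _{BMO(Q_{0})}\left(  6\left\vert Q\right\vert
\right)  ^{1/p}\Gamma(p+1)^{1/p}\leq c^{\prime}p\left\vert Q\right\vert
^{1/p}\left\vert f\right\vert _{BMO(Q_{0})},
\]
exhibiting the linear growth in $p$ of the embedding constants.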
Actually, from \cite{jm} we have
\[
\left\Vert \left( f-f_{Q}\right) \chi_{Q}\right\Vert _{\Delta(\frac
{L^{p}(Q)}{p})}\approx\sup_{t}\frac{[\left( f-f_{Q}\right) \chi_{Q}]^{\ast
}(t)}{\log^{+}(\frac{6\left\vert Q\right\vert }{t})}\approx\left\Vert
f-f_{Q}\right\Vert _{e^{L(Q)}},
\]
which combined with (\ref{jn1}) gives
\[
\left\Vert f-f_{Q}\right\Vert _{e^{L(Q)}}\leq c\left\vert f\chi_{Q}\right\vert
_{BMO(Q)},
\]
and therefore (cf. \cite{bs})
\[
BMO(Q_{0})\subset e^{L(Q_{0})}.
\]
In other words, the functions in $BMO(Q_{0})$ are exponentially integrable.
The previous results admit suitable generalizations to $R^{n}$ and more
general measure spaces.
\section{Interpolation theory: some basic inequalities\label{secc:inter}}
In this section we review basic definitions, and discuss inequalities of the
form (\ref{intro2}) that are associated with the classical methods of interpolation.
The starting objects of interpolation theory are pairs $\vec{X}=(X_{1},X_{2})$
of Banach spaces that are \textquotedblleft compatible\textquotedblright, in
the sense that both spaces are continuously embedded in a common Hausdorff
topological vector space $V$\footnote{We shall then call $\vec{X}=(X_{1}
,X_{2})$ a \textquotedblleft Banach pair\textquotedblright. In general, the
space $V$ plays an auxiliary role, since once we know that $\vec{X}$ is a
Banach pair we can use $\Sigma(\vec{X})$ as the ambient space. In particular,
the functional $K(t,f;\vec{X})$ is in principle only defined on $\Sigma
(\vec{X}).$ On the other hand, the functional $f\rightarrow\frac{d}
{dt}K(t,f;\vec{X}),$ can make sense for a larger class of elements than
$\Sigma(\vec{X}).$ This occurs in significant examples: for instance, on the
interval $[0,1],$
\[
L(1,\infty)=\{f:\sup_{t}tf^{\ast}(t)<\infty\}\nsubseteq L^{1}+L^{\infty}
=L^{1}.
\]
}. In real interpolation we consider two basic functionals, the $K-$
functional, already introduced in (\ref{k1}), associated with the construction
of the sum space $\Sigma(\vec{X})=X_{1}+X_{2},$ and its counterpart, the
$J-$functional, defined on the intersection space $\Delta(\vec{X})=X_{1}\cap
X_{2},$ by
\begin{equation}
J(t,f;\vec{X}):=J(t,f;X_{1},X_{2})=\max\left\{ \left\Vert f\right\Vert
_{X_{1}},t\left\Vert f\right\Vert _{X_{2}}\right\} ,t>0. \label{j1}
\end{equation}
The $K-$functional is used to construct the interpolation spaces $(X_{1}
,X_{2})_{\theta,q}$ (cf. (\ref{arriba}) above). Likewise, associated with the
$J-$functional we have the $(X_{1},X_{2})_{\theta,q;J},$ spaces. Let
$\theta\in(0,1),1\leq q\leq\infty;$ and let $U_{\theta,q}$ be the class of
functions $u:(0,\infty)\rightarrow\Delta(\vec{X}),$ such that\footnote{with
the usual modification when $q=\infty.$} $\left\Vert u\right\Vert
_{U_{\theta,q}}=\left\{ \int_{0}^{\infty}(t^{-\theta}J(t,u(t);X_{1}
,X_{2}))^{q}\frac{dt}{t}\right\} ^{1/q}<\infty$. The space $(X_{1}
,X_{2})_{\theta,q;J}$ consists of elements $f\in$ $X_{1}+X_{2},$ such that
there exists $u\in U_{\theta,q},$ with
\[
f=\int_{0}^{\infty}u(s)\frac{ds}{s}\text{ (in }X_{1}+X_{2}),
\]
provided with the norm
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta,q;J}}=\inf_{f=\int_{0}
^{\infty}u(s)\frac{ds}{s}}\{\left\Vert u\right\Vert _{U_{\theta,q}}\}.
\]
A basic result in this context is that the two constructions give the same
spaces (\emph{the equivalence theorem}) (cf. \cite{bl})
\[
(X_{1},X_{2})_{\theta,q}=(X_{1},X_{2})_{\theta,q;J},
\]
where the constants of norm equivalence depend only on $\theta$ and $q.$
In practice the $J-$method is harder to compute, but nevertheless plays an
important theoretical role. In particular, the following interpolation
property holds for the $J-$method. If $X$ is a Banach space intermediate
between $X_{1}$ and $X_{2},$ in the sense that $\Delta(\vec{X})\subset
X\subset\Sigma(\vec{X}),$ then an inequality of the form
\begin{equation}
\left\Vert f\right\Vert _{X}\leq\left\Vert f\right\Vert _{X_{1}}^{1-\theta
}\left\Vert f\right\Vert _{X_{2}}^{\theta},\text{ for some fixed }\theta
\in(0,1),\text{ and for all }f\in\Delta(\vec{X}), \label{propiedadj}
\end{equation}
is equivalent to
\begin{equation}
\left\Vert f\right\Vert _{X}\leq\left\Vert f\right\Vert _{(X_{1}
,X_{2})_{\theta,1;J}},\text{ for all }f\in(X_{1},X_{2})_{\theta,1;J}.
\label{propiedaj1}
\end{equation}
One way to see this equivalence is to observe that (cf. \cite{bs})
\begin{lemma}
\label{lemaviva}Let $\theta\in(0,1).$ Then, for all $f\in X_{1}\cap X_{2},$
\begin{equation}
\left\Vert f\right\Vert _{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{X_{2}
}^{\theta}=\inf_{t>0}\{t^{-\theta}J(t,f;X_{1},X_{2})\}. \label{lect14}
\end{equation}
\end{lemma}
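For the reader's convenience we sketch the elementary computation behind
(\ref{lect14}) (for $f\neq0$). Since $J(t,f;X_{1},X_{2})=\max\{\left\Vert
f\right\Vert _{X_{1}},t\left\Vert f\right\Vert _{X_{2}}\},$ the function
$t\rightarrow t^{-\theta}J(t,f;X_{1},X_{2})$ is decreasing for $t\leq
t_{0}:=\left\Vert f\right\Vert _{X_{1}}/\left\Vert f\right\Vert _{X_{2}}$ and
increasing for $t\geq t_{0};$ therefore the infimum is attained at $t_{0},$
and
\[
t_{0}^{-\theta}J(t_{0},f;X_{1},X_{2})=\left(  \frac{\left\Vert f\right\Vert
_{X_{1}}}{\left\Vert f\right\Vert _{X_{2}}}\right)  ^{-\theta}\left\Vert
f\right\Vert _{X_{1}}=\left\Vert f\right\Vert _{X_{1}}^{1-\theta}\left\Vert
f\right\Vert _{X_{2}}^{\theta}.
\]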
The preceding discussion shows that, in particular,
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta,1;J}}\leq\left\Vert
f\right\Vert _{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{X_{2}}^{\theta
},0<\theta<1,\text{ for }f\in X_{1}\cap X_{2}.
\]
More generally (cf. \cite{jm}, \cite{milta}) we have\footnote{Here and in what
follows we use the convention $\infty^{0}=1.$}: for all $f\in X_{1}\cap
X_{2},$
\begin{equation}
\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta,q;J}}\leq\left(
(1-\theta)\theta q^{\prime}\right) ^{-1/q^{\prime}}\left\Vert f\right\Vert
_{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{X_{2}}^{\theta},0<\theta<1,\text{
1}\leq q\leq\infty, \label{constantesj}
\end{equation}
where $\frac{1}{q}+\frac{1}{q^{\prime}}=1.$
Likewise, for the complex method of interpolation of Calder\'{o}n,
$[.,.]_{\theta},$ we have (cf. \cite{ca}),
\[
\left\Vert f\right\Vert _{[X_{1},X_{2}]_{\theta}}\leq\left\Vert f\right\Vert
_{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{X_{2}}^{\theta},0<\theta<1,\text{
for }f\in X_{1}\cap X_{2}.
\]
For the \textquotedblleft$K$" method we also have the following result
implicit\footnote{See also \cite{cwja}.} in \cite{jm}, which we prove for the
sake of completeness.
\begin{lemma}
\label{marcaada}
\begin{equation}
\lbrack(1-\theta)\theta q]^{1/q}\left\Vert f\right\Vert _{(X_{1}
,X_{2})_{\theta,q}}\leq\left\Vert f\right\Vert _{X_{1}}^{1-\theta}\left\Vert
f\right\Vert _{X_{2}}^{\theta},f\in X_{1}\cap X_{2},0<\theta<1. \label{jam1}
\end{equation}
\end{lemma}
\begin{proof}
Let $f\in X_{1}\cap X_{2}$. Using decompositions of the form $f=f+0,$ or
$f=0+f$, we readily see that
\[
K(t,f;X_{1},X_{2})\leq\min\{\Vert f\Vert_{X_{1}},t\Vert f\Vert_{X_{2}}\}.
\]
Therefore,
\begin{align*}
\Vert f\Vert_{(X_{1},X_{2})_{\theta,q}} & =\left(  \int_{0}^{\infty
}[t^{-\theta}K(t,f;X_{1},X_{2})]^{q}\frac{dt}{t}\right)  ^{1/q}\\
& \leq\left( \int_{0}^{\Vert f\Vert_{X_{1}}/\Vert f\Vert_{X_{2}}}
[t^{-\theta+1}\Vert f\Vert_{X_{2}}]^{q}\frac{dt}{t}+\int_{\Vert f\Vert_{X_{1}
}/\Vert f\Vert_{X_{2}}}^{\infty}[t^{-\theta}\Vert f\Vert_{X_{1}}]^{q}\frac
{dt}{t}\right) ^{1/q}\\
& =\left( \Vert f\Vert_{X_{2}}^{q}\left( \frac{\Vert f\Vert_{X_{1}}}{\Vert
f\Vert_{X_{2}}}\right) ^{q(1-\theta)}\frac{1}{(1-\theta)q}+\Vert
f\Vert_{X_{1}}^{q}\left( \frac{\Vert f\Vert_{X_{1}}}{\Vert f\Vert_{X_{2}}
}\right) ^{-\theta q}\frac{1}{\theta q}\right) ^{1/q}\\
& =\Vert f\Vert_{X_{1}}^{1-\theta}\Vert f\Vert_{X_{2}}^{\theta}\left(
\frac{1}{(1-\theta)q}+\frac{1}{\theta q}\right) ^{1/q}\\
& =[(1-\theta)\theta q]^{-1/q}\Vert f\Vert_{X_{1}}^{1-\theta}\Vert
f\Vert_{X_{2}}^{\theta}.
\end{align*}
\end{proof}
For more results related to this section we refer to \cite{jm}, \cite{milta}
and \cite{kamixi}.
\section{Self Improving properties of $BMO$ and interpolation\label{secc:self}
}
The purpose of this section is to provide a new proof of (\ref{intro1}) using
interpolation tools.
One of the first results obtained concerning interpolation properties of $BMO$ is
the following (cf. \cite{fest}, \cite{bs} and the references therein)
\begin{equation}
\left[ L^{1},BMO\right] _{\theta}=L^{q},\text{ with }\frac{1}{1-\theta}=q.
\label{bmo2}
\end{equation}
In particular, it follows that
\[
L^{1}\cap BMO\subset L^{q}.
\]
Therefore, if we work on a cube $Q_{0}$, we have
\[
L^{1}(Q_{0})\cap BMO(Q_{0})=BMO(Q_{0})\subset L^{q}(Q_{0}).
\]
In other words, the following self-improvement holds
\[
f\in BMO(Q_{0})\Rightarrow f\in
{\displaystyle\bigcap\limits_{q\geq1}}
L^{q}(Q_{0}).
\]
While it is not true that $f\in BMO(Q_{0})\Rightarrow f\in L^{\infty}(Q_{0}),$
we can quantify precisely the deterioration of the $L^{q}$ norms of a function
in $BMO(Q_{0}),$ to be able to conclude by extrapolation that
\begin{equation}
f\in BMO(Q_{0})\Rightarrow f\in e^{L(Q_{0})}. \label{expo}
\end{equation}
Let us go over the details. First, consider the following inequality
attributed to Chen-Zhu \cite{chenzu}.
\begin{lemma}
\label{marcao}Let $f\in BMO(Q_{0}),$ and let $1\leq p<\infty.$ Then there
exists a constant $C_{n},$ depending only on $n$ and $p,$ such that for
all $q>p,$
\begin{equation}
\left\Vert f\right\Vert _{L^{q}}\leq C_{n}q\left\Vert f\right\Vert _{L^{p}
}^{p/q}\left\Vert f\right\Vert _{BMO}^{1-p/q}. \label{bmo1}
\end{equation}
\end{lemma}
The point of the result, of course, is the precise dependency of the constants
in terms of $q.$ Before going to the proof let us show how (\ref{expo})
follows from (\ref{bmo1}).
\begin{proof}
\textbf{(of (\ref{expo}))}. From (\ref{bmo1}), applied to the case $p=1,$ we
find that, for all $q>1,$
\begin{align*}
\left\Vert f\right\Vert _{L^{q}} & \leq C_{n}q\left\Vert f\right\Vert
_{L^{1}}^{1/q}\left\Vert f\right\Vert _{BMO}^{1-1/q}\\
& \leq C_{n}q\left\Vert f\right\Vert _{BMO}.
\end{align*}
Hence, using for example the \textquotedblleft$\Delta"$ method of
extrapolation\footnote{For more recent developments in extrapolation theory
cf. \cite{ash}.} of \cite{jm}, we get
\[
\left\Vert f\right\Vert _{\Delta(\frac{L^{q}}{q})}=\sup_{q>1}\frac{\left\Vert
f\right\Vert _{q}}{q}\approx\left\Vert f\right\Vert _{e^{L}}\leq
c_{n}\left\Vert f\right\Vert _{BMO}.
\]
\end{proof}
We now give a proof of Lemma \ref{marcao} using interpolation.
\begin{proof}
It will be convenient for us to work on $R^{n}$ (the same results hold for
cubes: See more details in Remark \ref{remarkao} below). We start by
considering the case $p=1$; the general case will follow by a re-scaling
argument, which we provide below.
The first step is to make explicit the way we obtain the real interpolation
spaces between $L^{1}$ and $BMO$. It is well known (cf. \cite{besa},
\cite{bs}, and the references therein) that
\begin{equation}
(L^{1},BMO)_{1-1/q,q}=L^{q},q>1. \label{interpolabmo}
\end{equation}
Here the equality of the norms of the indicated spaces is within constants of
equivalence that depend only on $q$ and $n.$ In particular, we have
\[
\left\Vert f\right\Vert _{L^{q}}\leq c(q,n)\left\Vert f\right\Vert
_{(L^{1},BMO)_{1-1/q,q}}.
\]
The program now is to give a precise estimate of $c(q,n)$ in terms of $q,$ and
then apply Lemma \ref{marcaada}. We shall work with $BMO$ provided by the
seminorm $\left\vert \cdot\right\vert _{BMO}$ (cf. (\ref{llamada}) above).
The following result was proved in \cite[Theorem 6.1]{besa},
\begin{equation}
K(t,f,L^{1}(R^{n}),BMO(R^{n}))\approx tf^{\#\ast}(t), \label{bmo4}
\end{equation}
with absolute constants of equivalence, and where $f^{\#}$ denotes the sharp
maximal operator\footnote{For computations of related $K-$functionals and
further references cf. \cite{ja}, \cite{alvamil}.} (cf. \cite{fest},
\cite{cosa}, \cite{bs})
\[
f^{\#}(x):=f_{R^{n}}^{\#}(x)=\sup_{Q\ni x}\frac{1}{\left\vert Q\right\vert
}\int_{Q}\left\vert f(y)-f_{Q}\right\vert dy,\text{ and }f_{Q}=\frac
{1}{\left\vert Q\right\vert }\int_{Q}f(y)dy.
\]
Let $f\in L^{1}\cap BMO,$ $q>1,$ and define $\theta$ by the equation $\frac
{1}{1-\theta}=q.$ Combining (\ref{bmo4}) and (\ref{jam1}), we have, with
absolute constants that do not depend on $q,\theta$ or $f,$
\begin{align*}
\lbrack(1-\theta)\theta q]^{1/q}\left\{ \int_{0}^{\infty}[t^{-1+1/q}
tf^{\#\ast}(t)]^{q}\frac{dt}{t}\right\} ^{1/q} & \approx\lbrack
(1-\theta)\theta q]^{1/q}\left\Vert f\right\Vert _{(L^{1}(R^{n}),BMO(R^{n}
))_{1-1/q,q}}\\
& \preceq\Vert f\Vert_{L^{1}}^{1/q}\Vert f\Vert_{BMO}^{1-1/q}\text{.}
\end{align*}
Thus,
\begin{equation}
\left\{ \int_{0}^{\infty}f^{\#\ast}(t)^{q}dt\right\} ^{1/q}\leq c(\frac
{q}{q-1})^{1/q}\Vert f\Vert_{L^{1}}^{1/q}\Vert f\Vert_{BMO}^{1-1/q}.
\label{bmo3}
\end{equation}
Now, we recall that by \cite{bedesa}, as complemented in \cite[(3.8), page
228]{sasch}, we have
\begin{equation}
f^{\ast\ast}(t)-f^{\ast}(t)\leq cf^{\#\ast}(t),t>0. \label{ufa}
\end{equation}
Combining (\ref{ufa}) with (\ref{bmo3}), yields
\begin{equation}
\left\{ \int_{0}^{\infty}[f^{\ast\ast}(t)-f^{\ast}(t)]^{q}dt\right\}
^{1/q}\leq c(\frac{q}{q-1})^{1/q}\Vert f\Vert_{L^{1}}^{1/q}\Vert f\Vert
_{BMO}^{1-1/q}. \label{bailada}
\end{equation}
Observe that, since $[tf^{\ast\ast}(t)]^{\prime}=(\int_{0}^{t}f^{\ast
}(s)ds)^{\prime}=f^{\ast}(t),$ we have
\[
\lbrack-f^{\ast\ast}(t)]^{\prime}=\frac{f^{\ast\ast}(t)-f^{\ast}(t)}{t}.
\]
Moreover, since $f\in L^{1},$ then $f^{\ast\ast}(\infty)=0,$ and it follows
from the fundamental theorem of calculus that we can write
\[
f^{\ast\ast}(t)=\int_{t}^{\infty}\left[ f^{\ast\ast}(s)-f^{\ast}(s)\right]
\frac{ds}{s}.
\]
Consequently, by Hardy's inequality,
\[
\left\{ \int_{0}^{\infty}f^{\ast\ast}(t)^{q}dt\right\} ^{1/q}\leq q\left\{
\int_{0}^{\infty}[f^{\ast\ast}(t)-f^{\ast}(t)]^{q}dt\right\} ^{1/q}.
\]
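The version of Hardy's inequality used here is the dual form for averages over $(t,\infty)$, applied with $g=f^{\ast\ast}-f^{\ast}$; we record it for the reader's convenience (a standard statement, cf. \cite{bs}):

```latex
\[
\left\{ \int_{0}^{\infty}\left( \int_{t}^{\infty}g(s)\frac{ds}{s}\right)
^{q}dt\right\} ^{1/q}\leq q\left\{ \int_{0}^{\infty}g(t)^{q}dt\right\}
^{1/q},\qquad g\geq0,\ q\geq1.
\]
```

The constant $q$ follows, e.g., from Minkowski's integral inequality after the change of variables $s=tu$.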
Inserting this information in (\ref{bailada}) we arrive at
\begin{equation}
\left\{ \int_{0}^{\infty}f^{\ast\ast}(t)^{q}dt\right\} ^{1/q}\leq
cq(\frac{q}{q-1})^{1/q}\Vert f\Vert_{L^{1}}^{1/q}\Vert f\Vert_{BMO}^{1-1/q}.
\label{verla}
\end{equation}
We now estimate the left hand side of (\ref{verla}) from below. By the sharp
reverse Hardy inequality for decreasing functions (cf. \cite{ren}, \cite[Lemma
2.1]{mil}, see also \cite{xiao}) we can write
\begin{align}
\left\Vert f\right\Vert _{q} & =\left\{ \int_{0}^{\infty}f^{\ast}
(t)^{q}dt\right\} ^{1/q}\nonumber\\
& \leq(\frac{q-1}{q})^{1/q}\left\{ \int_{0}^{\infty}f^{\ast\ast}
(t)^{q}dt\right\} ^{1/q}. \label{gurkha}
\end{align}
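As an illustration of the sharpness of the constant in (\ref{gurkha}), consider $f^{\ast}=\chi_{(0,1)}$, for which (\ref{gurkha}) holds with equality:

```latex
\[
f^{\ast\ast}(t)=\min\{1,1/t\},\qquad\int_{0}^{\infty}f^{\ast\ast}(t)^{q}
dt=1+\frac{1}{q-1}=\frac{q}{q-1},
\]
so that
\[
\left( \frac{q-1}{q}\right) ^{1/q}\left\{ \int_{0}^{\infty}f^{\ast\ast
}(t)^{q}dt\right\} ^{1/q}=1=\left\{ \int_{0}^{\infty}f^{\ast}(t)^{q}
dt\right\} ^{1/q}.
\]
```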
Combining the last inequality with (\ref{verla}) we obtain
\begin{align*}
\left\Vert f\right\Vert _{q} & =\left\{ \int_{0}^{\infty}f^{\ast}
(t)^{q}dt\right\} ^{1/q}\\
& \leq(\frac{q-1}{q})^{1/q}\left\{ \int_{0}^{\infty}f^{\ast\ast}
(t)^{q}dt\right\} ^{1/q}\\
& \leq(\frac{q-1}{q})^{1/q}(\frac{q}{q-1})^{1/q}cq\left\{ \int_{0}^{\infty
}[f^{\ast\ast}(t)-f^{\ast}(t)]^{q}dt\right\} ^{1/q}\\
& \leq cq\Vert f\Vert_{L^{1}}^{1/q}\Vert f\Vert_{BMO}^{1-1/q},
\end{align*}
as we wanted to show.
Let us now consider the case $p>1.$ Let $q>p.$ By Holmstedt's reiteration
theorem (cf. \cite{bl}, \cite{holm}) we have
\[
(L^{p},BMO)_{1-p/q,q}=((L^{1},BMO)_{1-1/p,p},BMO)_{1-p/q,q},
\]
and, moreover, with absolute constants that depend only on $p,$
\[
K(t,f;L^{p},BMO)\approx\left\{ \int_{0}^{t^{p}}(s^{\frac{1}{p}-1}sf^{\#\ast
}(s))^{p}\frac{ds}{s}\right\} ^{1/p}=\left\{ \int_{0}^{t^{p}}f^{\#\ast
}(s)^{p}ds\right\} ^{1/p}.
\]
By Lemma \ref{marcaada} it follows that, with constants independent of $q,f,$
we have
\begin{align*}
\left\{ \int_{0}^{\infty}[t^{-(1-p/q)}\left\{ \int_{0}^{t^{p}}f^{\#\ast
}(s)^{p}ds\right\} ^{1/p}]^{q}\frac{dt}{t}\right\} ^{1/q} & \approx
\left\Vert f\right\Vert _{(L^{p}(R^{n}),BMO(R^{n}))_{1-p/q,q}}\\
& \preceq p^{-1/q}[q-p]^{-1/q}q^{1/q}\Vert f\Vert_{L^{p}}^{p/q}\Vert
f\Vert_{BMO}^{1-p/q}.
\end{align*}
Now,
\begin{align*}
\left\{ \int_{0}^{\infty}[t^{-(1-p/q)q}\left\{ \int_{0}^{t^{p}}f^{\#\ast
}(s)^{p}ds\right\} ^{q/p}\frac{dt}{t}\right\} ^{1/q} & =\left\{ \int
_{0}^{\infty}[t^{-(1-p/q)q}t^{q}\left\{ \frac{1}{t^{p}}\int_{0}^{t^{p}
}f^{\#\ast}(s)^{p}ds\right\} ^{q/p}\frac{dt}{t}\right\} ^{1/q}\\
& =\left\{ \int_{0}^{\infty}t^{p}\left\{ \frac{1}{t^{p}}\int_{0}^{t^{p}
}f^{\#\ast}(s)^{p}ds\right\} ^{q/p}\frac{dt}{t}\right\} ^{1/q}\\
& =\left( \frac{1}{p}\right) ^{1/q}\left\{ \int_{0}^{\infty}u\left\{
\frac{1}{u}\int_{0}^{u}f^{\#\ast}(s)^{p}ds\right\} ^{q/p}\frac{du}
{u}\right\} ^{1/q}\\
& =\left( \frac{1}{p}\right) ^{1/q}\left[ \left\{ \int_{0}^{\infty
}\left\{ \frac{1}{u}\int_{0}^{u}f^{\#\ast}(s)^{p}ds\right\} ^{q/p}
du\right\} ^{p/q}\right] ^{1/p}\\
& \geq\left( \frac{1}{p}\right) ^{1/q}\left[ \frac{\frac{q}{p}}
{q/p-1}\right] ^{1/q}\left\{ \int_{0}^{\infty}f^{\#\ast}(u)^{q}du\right\}
^{1/q},
\end{align*}
where in the last step we have used the sharp reverse Hardy inequality (cf.
\cite[Lemma 2.1]{mil}). Consequently,
\begin{align*}
\left\{ \int_{0}^{\infty}f^{\#\ast}(u)^{q}du\right\} ^{1/q} & \preceq
p^{1/q}\left[ \frac{q/p-1}{q/p}\right] ^{1/q}p^{-1/q}[q-p]^{-1/q}
q^{1/q}\Vert f\Vert_{L^{p}}^{p/q}\Vert f\Vert_{BMO}^{1-p/q}\\
& \sim\Vert f\Vert_{L^{p}}^{p/q}\Vert f\Vert_{BMO}^{1-p/q}.
\end{align*}
Hence, by the analysis we already did for the case $p=1,$ we see that
\begin{align*}
\left\{ \int_{0}^{\infty}f^{\ast}(t)^{q}dt\right\} ^{1/q} & \preceq
(\frac{q-1}{q})^{1/q}q\Vert f\Vert_{L^{p}}^{p/q}\Vert f\Vert_{BMO}^{1-p/q}\\
& \preceq q\Vert f\Vert_{L^{p}}^{p/q}\Vert f\Vert_{BMO}^{1-p/q},
\end{align*}
as we wished to show.
\end{proof}
\begin{remark}
\label{remarkao}If we work on a cube $Q_{0}$, the replacement of (\ref{ufa})
is (cf. \cite{bs})
\[
f^{\ast\ast}(t)-f^{\ast}(t)\leq cf^{\#\ast}(t),0<t<\left\vert Q_{0}\right\vert
/3.
\]
In this situation, we have $BMO(Q_{0})\subset L^{1}(Q_{0}),$ and we readily
see that $\left\{ \int_{0}^{\left\vert Q_{0}\right\vert /3}[t^{-1+1/q}
tf^{\#\ast}(t)]^{q}\frac{dt}{t}\right\} ^{1/q}$ is an equivalent seminorm for
$(L^{1}(Q_{0}),BMO(Q_{0}))_{1-1/q,q}.$ The rest of the proof now follows
mutatis mutandis.
\end{remark}
\begin{remark}
For related Hardy inequalities for one dimensional oscillation operators of
the form $f_{\#}(t)=\frac{1}{t}\int_{0}^{t}f(s)ds-f(t),$ cf. \cite{misa}
and \cite{krular}.
\end{remark}
\section{The rearrangement invariant hull of $BMO$ and Gagliardo coordinate
spaces\label{secc:coordenadas}}
In this section we introduce the \textquotedblleft Gagliardo coordinate
spaces" (cf. \cite{holm}, \cite{jm1}, \cite{mami}) and we use them to extend
(\ref{intro1}) to the setting of real interpolation.
Let $\theta\in\lbrack0,1],q\in(0,\infty].$ Following the discussion given in
the Introduction, we define the \textquotedblleft Gagliardo coordinate spaces"
as follows\footnote{Think in terms of a Gagliardo diagram; see for example
\cite[page 39]{bl}, \cite{jamig}, \cite{jm1}.}
\[
(X_{1},X_{2})_{\theta,q}^{(1)}=\left\{ f\in X_{1}+X_{2}:\left\Vert
f\right\Vert _{(X_{1},X_{2})_{\theta,q}^{(1)}}<\infty\right\} ,
\]
where
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta,q}^{(1)}}=\left\{ \int
_{0}^{\infty}\left( t^{1-\theta}\left[ \frac{K(t,f;X_{1},X_{2})}
{t}-K^{\prime}(t,f;X_{1},X_{2})\right] \right) ^{q}\frac{dt}{t}\right\}
^{1/q},
\]
and
\[
(X_{1},X_{2})_{\theta,q}^{(2)}=\left\{ f\in X_{1}+X_{2}:\left\Vert
f\right\Vert _{(X_{1},X_{2})_{\theta,q}^{(2)}}<\infty\right\} ,
\]
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta,q}^{(2)}}=\left\{ \int
_{0}^{\infty}(t^{-\theta}tK^{\prime}(t,f;X_{1},X_{2}))^{q}\frac{dt}
{t}\right\} ^{1/q},
\]
and we compare them to the classical Lions-Peetre spaces $(X_{1}
,X_{2})_{\theta,q}.$
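For orientation, consider the pair $\vec{X}=(L^{1},L^{\infty})$. Since $K(t,f;L^{1},L^{\infty})=\int_{0}^{t}f^{\ast}(s)ds$, the two functionals underlying the coordinate spaces are familiar objects:

```latex
\[
tK^{\prime}(t,f;L^{1},L^{\infty})=tf^{\ast}(t),\qquad\frac{K(t,f;L^{1}
,L^{\infty})}{t}-K^{\prime}(t,f;L^{1},L^{\infty})=f^{\ast\ast}(t)-f^{\ast}(t).
\]
```

Thus $(X_{1},X_{2})_{\theta,q}^{(1)}$ is built from the oscillation operator $f^{\ast\ast}-f^{\ast}$, while $(X_{1},X_{2})_{\theta,q}^{(2)}$ is built from $f^{\ast}$ itself; this is made explicit in Example \ref{ejemplomarkao} below.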
The Gagliardo coordinate spaces in principle are not linear, and the
corresponding functionals, $\left\Vert f\right\Vert _{(X_{1},X_{2})_{\theta
,q}^{(i)}},i=1,2,$ are not norms. However, it turns out that, when $\theta
\in(0,1),q\in(0,\infty],$ we have, with \textquotedblleft norm\textquotedblright\ equivalence (cf. \cite{holm},
\cite{jm1}),
\begin{equation}
(X_{1},X_{2})_{\theta,q}^{(1)}=(X_{1},X_{2})_{\theta,q}^{(2)}=(X_{1}
,X_{2})_{\theta,q}. \label{equiva}
\end{equation}
More precisely, the \textquotedblleft norm\textquotedblright\ equivalence
depends only on $\theta,$ and $q.$ On the other hand, at the end points,
$\theta=0$ or $\theta=1,$ the resulting spaces can be very different.
\begin{example}
\label{ejemplomarkao}Let $(X_{1},X_{2})=(L^{1},L^{\infty}).$ Then, if
$\theta=1,q=\infty,$ we have
\begin{equation}
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,\infty}^{(1)}}=\left\Vert
f\right\Vert _{L(\infty,\infty)}, \label{wehaveseen}
\end{equation}
while
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,\infty}}=\left\Vert f\right\Vert
_{(X_{1},X_{2})_{1,\infty}^{(2)}}=\left\Vert f\right\Vert _{L^{\infty}}.
\]
For $\theta=1,q<\infty,$
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,q}^{(1)}}=\left\Vert f\right\Vert
_{L(\infty,q)}.
\]
On the other hand,
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,q}^{(2)}}=\left\{ \int_{0}
^{\infty}f^{\ast}(t)^{q}\frac{dt}{t}\right\} ^{1/q}\leq\left\{ \int
_{0}^{\infty}f^{\ast\ast}(t)^{q}\frac{dt}{t}\right\} ^{1/q}=\left\Vert
f\right\Vert _{(X_{1},X_{2})_{1,q}},
\]
and
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,q}^{(2)}}<\infty\Leftrightarrow
f=0.
\]
For $\theta=0,q=\infty,$
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{0,\infty}}=\sup_{t}tf^{\ast\ast
}(t)=\left\Vert f\right\Vert _{L^{1}},
\]
while
\[
\left\Vert f\right\Vert _{(X_{1},X_{2})_{0,\infty}^{(2)}}=\sup_{t}tf^{\ast
}(t)=\left\Vert f\right\Vert _{L(1,\infty)}.
\]
Moreover,
\begin{align*}
\left\Vert f\right\Vert _{(X_{1},X_{2})_{0,\infty}^{(1)}} & =\sup
_{t}t(f^{\ast\ast}(t)-f^{\ast}(t))\\
& =\sup_{t}\int_{f^{\ast}(t)}^{\infty}\lambda_{f}(s)ds.
\end{align*}
Therefore, if $f^{\ast}(\infty)=0,$ then
\begin{align*}
\left\Vert f\right\Vert _{(X_{1},X_{2})_{0,\infty}^{(1)}} & =\int
_{0}^{\infty}\lambda_{f}(s)ds\\
& =\left\Vert f\right\Vert _{L^{1}}.
\end{align*}
Also, if $f^{\ast\ast}(\infty)=0,$
\begin{align*}
\left\Vert f\right\Vert _{(X_{1},X_{2})_{1,1}^{(1)}} & =\int_{0}^{\infty
}\left[ f^{\ast\ast}(t)-f^{\ast}(t)\right] \frac{dt}{t}\\
& =\lim_{r\rightarrow0}\int_{r}^{\infty}\left[ f^{\ast\ast}(t)-f^{\ast
}(t)\right] \frac{dt}{t}\\
& =\lim_{r\rightarrow0}\left( f^{\ast\ast}(r)-f^{\ast\ast}(\infty)\right)
\text{ (since }\frac{d}{dt}(-f^{\ast\ast}(t))=\frac{f^{\ast\ast}(t)-f^{\ast
}(t)}{t}\text{)}\\
& =\left\Vert f\right\Vert _{L^{\infty}}.
\end{align*}
\end{example}
\begin{theorem}
\label{teomarkao}Let $\theta\in\lbrack0,1),$ and let $1\leq q<\infty.$ Then,
there exists an absolute constant $c$ such that
\begin{equation}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}}\leq cq\left(
1+[(1-\theta)q]^{1/q}\right) ^{1-\theta}[(1-\theta)q]^{-\theta}\left\Vert
f\right\Vert _{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty
}^{(1)}}^{\theta}. \label{verde}
\end{equation}
In particular, if $(1-\theta)q=1,$
\begin{equation}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}}\leq cq\left\Vert
f\right\Vert _{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty
}^{(1)}}^{\theta}. \label{bmo5}
\end{equation}
\end{theorem}
Before going to the proof, let us argue why such a result could be termed a
generalized John-Nirenberg inequality. Indeed, let $Q$ be a cube in $R^{n},$
and consider the pair $\vec{X}=(L^{1}(Q),L^{\infty}(Q)).$ As we have seen (cf.
(\ref{wehaveseen}))
\[
\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}=\left\Vert f\right\Vert
_{L(\infty,\infty)}.
\]
By definition, if $f\in L(\infty,\infty)(Q),$ then $f\in L^{1}(Q).$ We now
show that, moreover, $\left\Vert f\right\Vert _{L^{1}}\leq\left\vert
Q\right\vert \left\Vert f\right\Vert _{L(\infty,\infty)}.$ Indeed, for all
$t>0$ we have (cf. \cite[Theorem 2.1 (ii)]{aalto})
\begin{align}
\int_{\{\left\vert f\right\vert >t\}}\left\vert f(x)\right\vert dx &
\leq\int_{f^{\ast}(\lambda_{f}(t))}^{\infty}\lambda_{f}(r)dr+t\lambda
_{f}(t)\nonumber\\
& =\lambda_{f}(t)[f^{\ast\ast}(\lambda_{f}(t))-f^{\ast}(\lambda
_{f}(t))]+t\lambda_{f}(t)\nonumber\\
& \leq\lambda_{f}(t)\left( \left\Vert f\right\Vert _{L(\infty,\infty
)}+t\right) \nonumber\\
& \leq\left\vert Q\right\vert \left( \left\Vert f\right\Vert _{L(\infty
,\infty)}+t\right) , \label{nuevo}
\end{align}
where in the second step we have used the formula (draw a graph!)
\[
s(f^{\ast\ast}(s)-f^{\ast}(s))=\int_{f^{\ast}(s)}^{\infty}\lambda_{f}(u)du.
\]
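As a quick check of this formula, take $f$ with $f^{\ast}(t)=(1-t)_{+}$, so that $\lambda_{f}(u)=(1-u)_{+}$ and $f^{\ast\ast}(s)=1-s/2$ for $0<s<1$. Both sides then equal $s^{2}/2$:

```latex
\[
s(f^{\ast\ast}(s)-f^{\ast}(s))=s\left( 1-\frac{s}{2}-(1-s)\right)
=\frac{s^{2}}{2},\qquad\int_{f^{\ast}(s)}^{\infty}\lambda_{f}(u)du=\int
_{1-s}^{1}(1-u)du=\frac{s^{2}}{2}.
\]
```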
Letting $t\rightarrow0$ in (\ref{nuevo}), by Fatou's Lemma we see that
\begin{equation}
\left\Vert f\right\Vert _{L^{1}}\leq\left\vert Q\right\vert \left\Vert
f\right\Vert _{L(\infty,\infty)}. \label{nuevo1}
\end{equation}
Now, let $\theta\in(0,1),$ and $\frac{1}{q}=1-\theta.$ Then,
\begin{align*}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}} & =\left\{ \int
_{0}^{\infty}(t^{-\theta}tK^{\prime}(t,f;L^{1},L^{\infty}))^{q}\frac{dt}
{t}\right\} ^{1/q}\\
& =\left\{ \int_{0}^{\infty}(t^{-(1-1/q)}tf^{\ast}(t))^{q}\frac{dt}
{t}\right\} ^{1/q}\\
& =\left\Vert f\right\Vert _{L^{q}}.
\end{align*}
Therefore, by (\ref{bmo5}) and (\ref{nuevo1}),
\begin{align*}
\left\Vert f\right\Vert _{L^{q}} & \leq cq\left\Vert f\right\Vert _{L^{1}
}^{1/q}\left\Vert f\right\Vert _{L(\infty,\infty)}^{1/q^{\prime}}\\
& \leq cq\left\vert Q\right\vert \left\Vert f\right\Vert _{L(\infty,\infty
)}\\
& \leq cq\left\Vert f\right\Vert _{L(\infty,\infty)}.
\end{align*}
Consequently,
\[
\left\Vert f\right\Vert _{e^{L}}\approx\left\Vert f\right\Vert _{\Delta
(\frac{L^{q}}{q})}=\sup_{q}\frac{\left\Vert f\right\Vert _{L^{q}}}{q}\leq
c\left\Vert f\right\Vert _{L(\infty,\infty)}.
\]
\begin{proof}
\textbf{(of Theorem \ref{teomarkao}).} Let us write
\begin{align*}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}} & =\left( \int
_{0}^{\infty}(u^{1-\theta}\frac{d}{du}K(u,f;\vec{X}))^{q}\frac{du}{u}\right)
^{1/q}\\
& \leq\left( \int_{0}^{t}(u^{1-\theta}\frac{d}{du}K(u,f;\vec{X}))^{q}\frac
{du}{u}\right) ^{1/q}+\left( \int_{t}^{\infty}(u^{1-\theta}\frac{d}
{du}K(u,f;\vec{X}))^{q}\frac{du}{u}\right) ^{1/q}\\
& =(I)+(II).
\end{align*}
We estimate these two terms as follows,
\begin{align*}
(I) & =\left( \int_{0}^{t}u^{(1-\theta)q}(\frac{d}{du}K(u,f;\vec{X}
))^{q}\frac{du}{u}\right) ^{1/q}\\
& \leq\left( \int_{0}^{t}u^{(1-\theta)q}(\frac{K(u,f;\vec{X})}{u})^{q}
\frac{du}{u}\right) ^{1/q}\text{ (since }\frac{d}{du}K(u,f;\vec{X})\leq
\frac{K(u,f;\vec{X})}{u}).
\end{align*}
On the other hand, since $\left( \frac{K(u,f;\vec{X})}{u}\right) ^{\prime
}=\frac{K^{\prime}(u,f;\vec{X})u-K(u,f;\vec{X})}{u^{2}},$ we have that, for
$0<u<t,$
\begin{align*}
\frac{K(u,f;\vec{X})}{u} & =\frac{K(t,f;\vec{X})}{t}+\left( -\left.
\frac{K(\cdot,f;\vec{X})}{\cdot}\right\vert _{u}^{t}\right) \\
& =\frac{K(t,f;\vec{X})}{t}+\int_{u}^{t}\left( \frac{K(r,f;\vec{X})}
{r}-K^{\prime}(r,f;\vec{X})\right) \frac{dr}{r}\\
& \leq\frac{K(t,f;\vec{X})}{t}+\left( \log\frac{t}{u}\right) \sup_{r\leq
t}\left( \frac{K(r,f;\vec{X})}{r}-K^{\prime}(r,f;\vec{X})\right) \\
& \leq\frac{K(t,f;\vec{X})}{t}+\log\frac{t}{u}\left\Vert f\right\Vert
_{\vec{X}_{1,\infty}^{(1)}}.
\end{align*}
Therefore, by the triangle inequality,
\begin{align*}
(I) & \leq\left( \int_{0}^{t}u^{(1-\theta)q}\left\{ \frac{K(t,f;\vec{X})}
{t}+\log\frac{t}{u}\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}
}\right\} ^{q}\frac{du}{u}\right) ^{1/q}\\
& \leq\frac{K(t,f;\vec{X})}{t}\left( \int_{0}^{t}u^{(1-\theta)q}\frac{du}
{u}\right) ^{1/q}+\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}\left(
\int_{0}^{t}u^{(1-\theta)q}\left( \log\frac{t}{u}\right) ^{q}\frac{du}
{u}\right) ^{1/q}\\
& =\frac{K(t,f;\vec{X})}{t}\frac{t^{(1-\theta)}}{[(1-\theta)q]^{1/q}
}+\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}\left( \int_{0}
^{t}u^{(1-\theta)q}\left( \log\frac{t}{u}\right) ^{q}\frac{du}{u}\right)
^{1/q}\\
& =(a)+(b).
\end{align*}
For the term $(a)$ we have
\begin{align*}
(a) & \leq\frac{t^{-\theta}}{[(1-\theta)q]^{1/q}}\lim_{u\rightarrow\infty
}K(u,f;\vec{X})\text{ (since }K(\cdot,f;\vec{X})\text{ increases)}\\
& \leq\frac{t^{-\theta}}{[(1-\theta)q]^{1/q}}\left\Vert f\right\Vert _{X_{1}
}.
\end{align*}
To deal with $(b)$ we use the asymptotics of the gamma function as follows:
let $s=\log\frac{t}{u},$ then $u=te^{-s},$ $du=-te^{-s}ds,$ $\frac{du}
{u}=-ds,u^{(1-\theta)q}=t^{(1-\theta)q}e^{-s(1-\theta)q},$ and we have
\begin{align*}
(b) & =\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}\left( \int
_{0}^{t}u^{(1-\theta)q}s^{q}\frac{du}{u}\right) ^{1/q}\\
& =\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}t^{1-\theta}\left(
\int_{0}^{\infty}e^{-s(1-\theta)q}s^{q}ds\right) ^{1/q}\\
& =\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}t^{1-\theta}\left(
\int_{0}^{\infty}e^{-\tau}\frac{\tau^{q}}{[(1-\theta)q]^{q}}\frac{d\tau
}{[(1-\theta)q]}\right) ^{1/q},\text{ (let }\tau=s(1-\theta)q)\\
& =\frac{\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}}{[(1-\theta
)q]}\frac{1}{[(1-\theta)q]^{1/q}}t^{1-\theta}\left( \Gamma(q+1)\right)
^{1/q}\\
& \leq\frac{\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}}
{[(1-\theta)q]}\frac{1}{[(1-\theta)q]^{1/q}}t^{1-\theta}q\text{.}
\end{align*}
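The last inequality uses the elementary bound $\left( \Gamma(q+1)\right)^{1/q}\leq q$ for $q\geq1$, which can be verified directly: substituting $s=qu$ in the integral defining $\Gamma(q+1)$, and using $ue^{-u}\leq e^{-1}$ together with $qe^{1-q}\leq1$ for $q\geq1$,

```latex
\[
\Gamma(q+1)=q^{q+1}\int_{0}^{\infty}(ue^{-u})^{q}du\leq q^{q+1}e^{-(q-1)}
\int_{0}^{\infty}ue^{-u}du=q^{q}\left( qe^{1-q}\right) \leq q^{q}.
\]
```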
Combining inequalities for $(a)$ and $(b)$ we have
\[
(I)\leq\frac{t^{-\theta}}{[(1-\theta)q]^{1/q}}\left\Vert f\right\Vert _{X_{1}
}+\frac{\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}}{[(1-\theta
)q]}\frac{1}{[(1-\theta)q]^{1/q}}t^{1-\theta}q.
\]
We now estimate $(II):$
\begin{align*}
(II) & =\left( \int_{t}^{\infty}u^{(1-\theta)q}u^{-1}\left( \frac{d}
{du}K(u,f;\vec{X})\right) ^{q-1}\left( u\frac{d}{du}K(u,f;\vec{X})\right)
\frac{du}{u}\right) ^{1/q}\\
& \leq\left\{ \sup_{u\geq t}(u^{\frac{(1-\theta)q-1}{q}}\left( \frac{d}
{du}K(u,f;\vec{X})\right) ^{\frac{q-1}{q}})\right\} \left\{ \int
_{t}^{\infty}\frac{d}{du}K(u,f;\vec{X})du\right\} ^{1/q}\\
& =(c)(d).
\end{align*}
The factors on the right hand side can be estimated as follows,
\begin{align*}
(d) & =\left( \lim_{u\rightarrow\infty}K(u,f;\vec{X})-K(t,f;\vec{X})\right)
^{1/q}\\
& \leq\left( \lim_{u\rightarrow\infty}K(u,f;\vec{X})\right) ^{1/q}\\
& =\left\Vert f\right\Vert _{X_{1}}^{1/q}.
\end{align*}
Also, since $K(\cdot,f;\vec{X})$ is concave, $\frac{d}{du}K(u,f;\vec{X}
)\leq\frac{K(u,f;\vec{X})}{u},$ consequently,
\begin{align*}
(c) & \leq\left\Vert f\right\Vert _{X_{1}}^{1-1/q}\sup_{u\geq t}
\{u^{\frac{(1-\theta)q-1}{q}-\frac{q-1}{q}}\}\\
& \leq\left\Vert f\right\Vert _{X_{1}}^{1-1/q}\{\sup_{u\geq t}u^{-\theta}\}\\
& =\left\Vert f\right\Vert _{X_{1}}^{1-1/q}t^{-\theta}.
\end{align*}
Thus,
\begin{align*}
(II) & \leq\left\Vert f\right\Vert _{X_{1}}^{1/q}\left\Vert f\right\Vert
_{X_{1}}^{1-1/q}t^{-\theta}\\
& =\left\Vert f\right\Vert _{X_{1}}t^{-\theta}.
\end{align*}
Combining the estimates for $(I)$ and $(II)$ yields,
\begin{align}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}} & \leq\left(
\frac{1+[(1-\theta)q]^{1/q}}{[(1-\theta)q]^{1/q}}\right) t^{-\theta
}\left\Vert f\right\Vert _{X_{1}}+\frac{1}{[(1-\theta)q]}\frac{1}
{[(1-\theta)q]^{1/q}}qt^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty
}^{(1)}}\nonumber\\
& \leq cq\frac{1}{[(1-\theta)q]^{1/q}}\left\{ \left( 1+[(1-\theta
)q]^{1/q}\right) t^{-\theta}\left\Vert f\right\Vert _{X_{1}}+\frac
{1}{[(1-\theta)q]}t^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty
}^{(1)}}\right\} . \label{dolores}
\end{align}
We balance the terms on the right hand side by choosing $t$ such that
\[
\left( 1+[(1-\theta)q]^{1/q}\right) t^{-\theta}\left\Vert f\right\Vert
_{X_{1}}=\frac{1}{[(1-\theta)q]}t^{1-\theta}\left\Vert f\right\Vert _{\vec
{X}_{1,\infty}^{(1)}},
\]
whence,
\[
t=\left( 1+[(1-\theta)q]^{1/q}\right) [(1-\theta)q]\frac{\left\Vert
f\right\Vert _{X_{1}}}{\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}}.
\]
Inserting this value of $t$ in (\ref{dolores}) we find
\begin{align*}
\left\Vert f\right\Vert _{\vec{X}_{\theta,q}^{(2)}} & \leq cq\frac
{1}{[(1-\theta)q]^{1/q}}\left\{ \left( 1+[(1-\theta)q]^{1/q}\right)
^{-\theta}[(1-\theta)q]^{-\theta}\left\Vert f\right\Vert _{X_{1}}^{1-\theta
}\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}^{\theta}\right\} \\
& \leq cq\left( \frac{1}{[(1-\theta)q]^{1/q}}\right) \left( 1+[(1-\theta
)q]^{1/q}\right) ^{-\theta}[(1-\theta)q]^{-\theta}\left\Vert f\right\Vert
_{X_{1}}^{1-\theta}\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}
^{\theta},
\end{align*}
as we wished to show.
\end{proof}
\begin{remark}
As an easy application of (\ref{bmo5}), we note that, if $X_{2}\subset X_{1},$
we can write
\[
\left\Vert f\right\Vert _{\Delta\left( (1-\theta)\vec{X}_{\theta,\frac
{1}{1-\theta}}^{(2)}\right) }=\sup_{\theta}(1-\theta)\left\Vert f\right\Vert
_{\vec{X}_{\theta,\frac{1}{1-\theta}}^{(2)}}\leq c\left\Vert f\right\Vert
_{\vec{X}_{1,\infty}^{(1)}}.
\]
We believe that similar arguments would lead to computations with the
$\Delta_{p}$ method of extrapolation (cf. \cite{jm}, \cite{kami}), but this
lies outside the scope of this paper, so we leave the issue for another occasion.
\end{remark}
\section{Recent uses of the oscillation operator and $L(\infty,q)$ spaces in
Analysis\label{secc:uses}}
We present several different applications connected with the material
developed in this note. The material is but a sample: the results presented
are either new or provide a new treatment of known
results\footnote{See also \cite{rota}, Lesson \#3.}. This section differs from
previous ones in that we proceed formally and, whenever possible, we refer the
reader to the literature for background material and complete details. Further
development of the material in this section will appear elsewhere (e.g. in
\cite{cwbjmm}, \cite{mil1}, \cite{mil2}).
\subsection{On some inequalities for classical operators by
Bennett-DeVore-Sharpley and Bagby and Kurtz\label{secc:singular}}
In this section we show how the methods developed in this paper can be applied
to give a new approach to results on singular integrals and maximal operators
that appeared first in \cite{bedesa}, \cite{bagby}, and \cite{kurtz} (cf. also
the references therein).
Let $T$ and $U$ be operators acting on a sufficiently large class of
testing functions, say the space $S$ of Schwartz testing functions on $R^{n}$.
Furthermore, suppose that there exists $C>0,$ such that for all $f\in S,$ the
following pointwise inequality holds
\[
\left( Tf\right) ^{\#}(x)\leq CUf(x).
\]
Then, taking rearrangements we have
\[
\left( Tf\right) ^{\#\ast}(t)\leq C(Uf)^{\ast}(t),t>0.
\]
Therefore,
\[
\left\{ \int_{0}^{\infty}\left( Tf\right) ^{\#\ast}(t)^{p}dt\right\}
^{1/p}\leq C\left\{ \int_{0}^{\infty}(Uf)^{\ast}(t)^{p}dt\right\} ^{1/p}.
\]
Now, by (\ref{ufa}) above, and the analysis that follows it, we see that
\begin{align*}
\left\{ \int_{0}^{\infty}\left( Tf\right) ^{\ast}(t)^{p}dt\right\} ^{1/p}
& \leq\left( \frac{p-1}{p}\right) ^{1/p}pC\left\{ \int_{0}^{\infty
}(Uf)^{\ast}(t)^{p}dt\right\} ^{1/p}\\
& \leq Cp\left\{ \int_{0}^{\infty}(Uf)^{\ast}(t)^{p}dt\right\}
^{1/p}\text{.}
\end{align*}
In other words,
\begin{equation}
\left\Vert Tf\right\Vert _{p}\leq cp\left\Vert Uf\right\Vert _{p},
\label{ladej}
\end{equation}
and we recover the main result of \cite{kurtz}.
We should also point out that the method of proof can also be implemented to
deal with the corresponding more general inequalities for doubling measures on
$R^{n}$ (cf. \cite{cof}, \cite{kurtz}, and the references therein).
\begin{remark}
It may be appropriate to mention that once one knows (\ref{ladej}) then one
could use the extrapolation theory of \cite{jm} to show (cf. \cite[Lemma
5]{kurtz}) that there exist absolute constants $C,\gamma>0,$ such that,
\begin{equation}
(Tf)^{\ast}(t)\leq C\int_{\gamma t}^{\infty}\left( Uf\right) ^{\ast}
(s)\frac{ds}{s},\text{ for all }f\in S. \label{ladej1}
\end{equation}
\end{remark}
\subsection{Good-Lambda Inequalities\label{secc:lambda}}
These inequalities apparently originate in the celebrated work of
Burkholder-Gundy \cite{bg} (cf. also \cite{bu}) on extrapolation of martingale
inequalities. They have been used since then to great effect in probability,
and also in classical harmonic analysis, probably beginning with \cite{bg1}
and Coifman-Fefferman \cite{cof}. Inequalities on the oscillation operator
$f^{\ast\ast}-f^{\ast}$ are closely connected with good-lambda inequalities.
This connection was pointed out long ago by Neveu \cite{nev}, Herz
\cite{herz}, Bagby and Kurtz (cf. \cite{bagby}, \cite{kurtz}), among others.
In this section we formalize some of their ideas.
To fix matters, let $\mu$ be a measure on $R^{n},$ and let $T$ and $H$ be
operators acting on a sufficiently rich class of functions. A prototypical
good lambda inequality has the following form: for all $\lambda>0,\varepsilon
>0,$ there exists $c(\varepsilon)>0,$ with $c(\varepsilon)\rightarrow0,$ as
$\varepsilon\rightarrow0,$ such that
\begin{equation}
\mu\{\left\vert Tf\right\vert >2\lambda,\left\vert Hf\right\vert
\leq\varepsilon\lambda\}\leq c(\varepsilon)\mu\{\left\vert Tf\right\vert
>\lambda\}. \label{vale0}
\end{equation}
The idea here is that if the behavior of $H$ is known on r.i. spaces, say on
$L^{p}$ spaces, then we can also control the behavior of $T.$ Indeed, the
distribution function of $Tf$ can be controlled by the following elementary
argument:
\begin{align*}
\mu\{\left\vert Tf\right\vert & >2\lambda\}\leq\mu\{\left\vert Tf\right\vert
>2\lambda,\left\vert Hf\right\vert \leq\varepsilon\lambda\}+\mu\{\left\vert
Tf\right\vert >2\lambda,\left\vert Hf\right\vert >\varepsilon\lambda\}\\
& \leq c(\varepsilon)\mu\{\left\vert Tf\right\vert >\lambda\}+\mu\{\left\vert
Hf\right\vert >\varepsilon\lambda\}.
\end{align*}
Then, since
\[
\left\Vert f\right\Vert _{p}^{p}=p\int_{0}^{\infty}\lambda^{p-1}
\mu\{\left\vert f\right\vert >\lambda\}d\lambda,
\]
we readily see that we can estimate $\left\Vert Tf\right\Vert _{p}^{p}$ in
terms of $\left\Vert Hf\right\Vert _{p}^{p}$ by making $\varepsilon$
sufficiently small so that the two $\left\Vert Tf\right\Vert _{p}^{p}$ terms
can be collected on the left hand side of the inequality.
In \cite{kurtz}, the author shows the following stronger good lambda
inequality for $f^{\#}$: There exists $B>0,$ such that for all $\varepsilon
>0,\lambda>0,$ and all locally integrable $f,$ we have
\[
\mu\{\left\vert f\right\vert >Bf^{\#}+\lambda\}\leq\varepsilon\mu\{\left\vert
f\right\vert >\lambda\}.
\]
This inequality is used to show the following oscillation inequality (cf.
\cite[p 270]{kurtz})
\[
f^{\ast}(t)-f^{\ast}(2t)\leq Cf^{\#\ast}(\frac{t}{2}),t>0,
\]
where $C$ is an absolute constant.
More generally, the argument in \cite{kurtz} can be formalized as follows
\begin{theorem}
\label{hercules}Suppose that $T$ and $H$ are operators acting on the Schwartz
class $S,$ such that, moreover, for all $\varepsilon>0,$ there exists $B>0,$
such that for all $\lambda>0,$
\begin{equation}
\mu\{\left\vert Tf\right\vert >B\left\vert Hf\right\vert +\lambda
\}\leq\varepsilon\mu\{\left\vert Tf\right\vert >\lambda\}. \label{vale}
\end{equation}
Then, there exists a constant $C>0$ such that for all $t>0,$ and for all $f\in
S,$
\begin{equation}
\left( Tf\right) ^{\ast}(t)-\left( Tf\right) ^{\ast}(2t)\leq C\left(
Hf\right) ^{\ast}(\frac{t}{2}). \label{valetodo}
\end{equation}
\end{theorem}
\begin{proof}
Let $\varepsilon=\frac{1}{4},$ and fix $B:=B(\frac{1}{4})$ such that
(\ref{vale}) holds for all $\lambda>0.$ Let $f\in S,$ and select
$\lambda=\left( Tf\right) ^{\ast}(2t).$ Then,
\[
\mu\{\left\vert Tf\right\vert >B\left\vert Hf\right\vert +\left( Tf\right)
^{\ast}(2t)\}\leq\frac{1}{4}\mu\{\left\vert Tf\right\vert >\left( Tf\right)
^{\ast}(2t)\}\leq\frac{t}{2}.
\]
By definition we have,
\[
\mu\{\left\vert Hf\right\vert >\left( Hf\right) ^{\ast}(\frac{t}{2}
)\}\leq\frac{t}{2}.
\]
Consider the set $A=\{\left\vert Tf\right\vert >B(Hf)^{\ast}(\frac{t}
{2})+\left( Tf\right) ^{\ast}(2t)\}.$ Then, it is easy to see, by
contradiction, that
\[
A\subset\{\left\vert Tf\right\vert >B\left\vert Hf\right\vert +\left(
Tf\right) ^{\ast}(2t)\}
{\displaystyle\bigcup}
\{\left\vert Hf\right\vert >\left( Hf\right) ^{\ast}(t/2)\}.
\]
Consequently,
\[
\mu(A)\leq\frac{t}{2}+\frac{t}{2}.
\]
Now, since
\[
\left( Tf\right) ^{\ast}(t)=\inf\{s:\mu\{\left\vert Tf\right\vert >s\}\leq
t\},
\]
it follows that
\[
\left( Tf\right) ^{\ast}(t)\leq B(Hf)^{\ast}(\frac{t}{2})+\left( Tf\right)
^{\ast}(2t),
\]
as we wished to show.
\end{proof}
\begin{remark}
It is easy to compare the oscillation operators $\left( Tf\right) ^{\ast
}(t)-\left( Tf\right) ^{\ast}(2t)$ and $\left( Tf\right) ^{\ast\ast
}(t)-\left( Tf\right) ^{\ast}(t).$ For example, it is shown in \cite[Theorem
4.1 p 1223]{bmr} that
\[
\left( Tf\right) ^{\ast}(\frac{t}{2})-\left( Tf\right) ^{\ast}
(t)\leq2\left( \left( Tf\right) ^{\ast\ast}(t)-\left( Tf\right) ^{\ast
}(t)\right) ,
\]
and
\begin{align}
\left( \left( Tf\right) ^{\ast\ast}(t)-\left( Tf\right) ^{\ast
}(t)\right) & \leq\frac{1}{t}\int_{0}^{t}\left( \left( Tf\right) ^{\ast
}(\frac{s}{2})-\left( Tf\right) ^{\ast}(s)\right) ds\nonumber\\
& +\left( Tf\right) ^{\ast}(\frac{t}{2})-\left( Tf\right) ^{\ast}(t).
\label{combinaos}
\end{align}
\end{remark}
Combining Theorem \ref{hercules} and the previous remark we have the following
\begin{theorem}
Suppose that $T$ and $H$ satisfy the strong good-lambda inequality
(\ref{vale}). Then,
\[
\left( \left( Tf\right) ^{\ast\ast}(t)-\left( Tf\right) ^{\ast
}(t)\right) \leq2B(Hf)^{\ast\ast}(\frac{t}{4}).
\]
\end{theorem}
\begin{proof}
The desired result follows combining (\ref{valetodo}) with (\ref{combinaos}).
\end{proof}
\begin{remark}
It is easy to convince oneself that the good-lambda inequalities of the form
(\ref{vale}) are, in fact, stronger than the usual good-lambda inequalities,
e.g. of the form (\ref{vale0}) (cf. \cite{kurtz}).
\end{remark}
\begin{remark}
Clearly there are many nice results lurking in the background of this section.
For example, a topic that comes to mind is to explore the use of good-lambda
inequalities in the interpolation theory of operator ideals and its
applications (cf. \cite{cobos}, \cite{mastylo}, and the references therein).
On the classical analysis side it would be of interest to explore the
connections of the interpolation methods with the maximal inequalities due to
Muckenhoupt-Wheeden and Hedberg-Wolff (cf. \cite{arz}, \cite{hon} and the
references therein).
\end{remark}
\subsection{Extrapolation of inequalities: Burkholder-Gundy-Herz meet
Calder\'{o}n-Maz'ya and Cwikel et al.\label{secc:bgextra}}
The leitmotif of \cite{bg} is the extrapolation of inequalities for the
classical operators acting on martingales (e.g. martingale transforms, maximal
operators, square functions, etc.). There are two main ingredients to the method.
First, the authors, modeling on the classical operators acting on martingales,
single out properties that the operators under consideration will be required
to satisfy. Then they usually assume that an $L^{p}$ or weak type $L^{p}$
estimate holds, and from this information they deduce a full family of $L^{p}$
or even Orlicz inequalities. The main technical step of the extrapolation
procedure consists of using the assumptions we have just described in order to
prove suitable good-lambda inequalities. The method is thus different from the
usual interpolation theory, which works for \textbf{all} operators that
satisfy a \textbf{pair} of given estimates.
In \cite{cwbjmm} we have shown how to formulate some of the assumptions of
\cite{bg} in terms of optimal decompositions to compute $K-$functionals. Then,
assuming that the operators to be extrapolated act on interpolation spaces,
one can extract oscillation inequalities using interpolation theory. In
particular, the developments in \cite{cwbjmm} allow the extrapolation of
operators that do not necessarily act on martingales, but also on function
spaces, e.g. gradients, square functions, Littlewood-Paley functions, etc. The
basic technique involved to achieve the extrapolation is to use the
assumptions to prove an oscillation rearrangement inequality\footnote{Herz
\cite{he} also developed a different technique to extrapolate oscillation
rearrangement inequalities for martingale operators.}.
Unfortunately, \cite{cwbjmm} is still unpublished, although some of the
results have been discussed elsewhere (cf. \cite{mamicoc}) or will appear soon
(cf. \cite{mil1}). In keeping with the theme of this note, in this section we
want to present some more details on how one can extrapolate Sobolev
inequalities and encode the information using the oscillation operator
$f^{\ast\ast}-f^{\ast}.$
Let us take as a starting point the weak type Gagliardo-Nirenberg Sobolev
inequality in $R^{n}$ (cf. \cite{leo}):
\begin{equation}
\left\Vert f\right\Vert _{L(n^{\prime},\infty)}=\left\Vert f\right\Vert
_{(L^{1}(R^{n}),L^{\infty}(R^{n}))_{1/n,\infty}}\leq c_{n}\left\Vert \nabla
f\right\Vert _{L^{1}},f\in Lip_{0}(R^{n}). \label{abarca}
\end{equation}
Let $f\in Lip_{0}(R^{n}),$ and assume without loss of generality that $f$ is positive. Let
$t>0,$ then an optimal decomposition for the computation of
\[
K(t,f):=K(t,f;L^{1}(R^{n}),L^{\infty}(R^{n}))=\int_{0}^{t}f^{\ast}(s)ds,
\]
is given by
\begin{equation}
f=f_{f^{\ast}(t)}+(f-f_{f^{\ast}(t)}), \label{shows}
\end{equation}
where
\begin{equation}
f_{f^{\ast}(t)}(x)=\left\{
\begin{array}
[c]{ll}
f(x)-f^{\ast}(t) & \text{if }f^{\ast}(t)<f(x)\\
0 & \text{if }f(x)\leq f^{\ast}(t)
\end{array}
\right. . \label{rf3}
\end{equation}
By direct computation we have
\begin{align*}
K(t,f) & \leq\left\Vert f_{f^{\ast}(t)}\right\Vert _{L^{1}}+t\left\Vert
f-f_{f^{\ast}(t)}\right\Vert _{L^{\infty}}\\
& =\left( \int_{0}^{t}f^{\ast}(s)ds-tf^{\ast}(t)\right) +tf^{\ast}(t)\\
& =\int_{0}^{t}f^{\ast}(s)ds.
\end{align*}
On the other hand, if $f=f_{0}+f_{1},$ with $f_{0}\in L^{1},f_{1}\in
L^{\infty},$ then
\begin{align*}
\int_{0}^{t}f^{\ast}(s)ds & \leq\int_{0}^{t}f_{0}^{\ast}(s)ds+\int_{0}
^{t}f_{1}^{\ast}(s)ds\\
& \leq\left\Vert f_{0}\right\Vert _{L^{1}}+t\left\Vert f_{1}\right\Vert
_{L^{\infty}}.
\end{align*}
Therefore,
\[
K(t,f)=\left\Vert f_{f^{\ast}(t)}\right\Vert _{L^{1}}+t\left\Vert
f-f_{f^{\ast}(t)}\right\Vert _{L^{\infty}}.
\]
Also note that, confirming (\ref{a1}) and (\ref{a2}) above, by direct
computation we have,
\begin{align*}
\left\Vert f_{f^{\ast}(t)}\right\Vert _{L^{1}} & =\int_{0}^{t}f^{\ast
}(s)ds-tf^{\ast}(t)\\
& =t\left( f^{\ast\ast}(t)-f^{\ast}(t)\right) ,
\end{align*}
\[
\left\Vert f-f_{f^{\ast}(t)}\right\Vert _{L^{\infty}}=f^{\ast}(t).
\]
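For the reader's convenience: the first of these identities follows from the (readily verified) formula for the rearrangement of the truncation, $\left( f_{f^{\ast}(t)}\right) ^{\ast}(s)=\left( f^{\ast}(s)-f^{\ast}(t)\right) _{+},$ whence
\begin{align*}
\left\Vert f_{f^{\ast}(t)}\right\Vert _{L^{1}} & =\int_{0}^{\infty}\left(
f^{\ast}(s)-f^{\ast}(t)\right) _{+}ds=\int_{0}^{t}\left( f^{\ast}
(s)-f^{\ast}(t)\right) ds\\
& =\int_{0}^{t}f^{\ast}(s)ds-tf^{\ast}(t).
\end{align*}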
The commutation of the gradient with truncations\footnote{(cf. \cite{maz},
\cite{bakr}, \cite{haj}, \cite{mmp})} implies
\[
\left\Vert \nabla f_{f^{\ast}(t)}\right\Vert _{L^{1}}\leq\int_{\{f>f^{\ast
}(t)\}}\left\vert \nabla f\right\vert dx.
\]
Therefore
\[
\left\Vert \nabla f_{f^{\ast}(t)}\right\Vert _{L^{1}}\leq\int_{0}
^{t}\left\vert \nabla f\right\vert ^{\ast}(s)ds.
\]
We apply the inequality (\ref{abarca}) to $f_{f^{\ast}(t)}$. We find
\begin{align*}
\left\Vert f_{f^{\ast}(t)}\right\Vert _{(L^{1}(R^{n}),L^{\infty}
(R^{n}))_{1/n,\infty}} & \leq c_{n}\left\Vert \nabla f_{f^{\ast}
(t)}\right\Vert _{L^{1}}\\
& \leq c_{n}\int_{0}^{t}\left\vert \nabla f\right\vert ^{\ast}(s)ds.
\end{align*}
We estimate the left hand side:
\begin{align*}
\left\Vert f_{f^{\ast}(t)}\right\Vert _{(L^{1}(R^{n}),L^{\infty}
(R^{n}))_{1/n,\infty}} & =\sup_{s>0}s^{-1/n}K(s,f_{f^{\ast}(t)})\\
& =\sup_{s>0}s^{-1/n}K(s,f-(f-f_{f^{\ast}(t)}))\\
& \geq t^{-1/n}K(t,f-(f-f_{f^{\ast}(t)}))\\
& \geq t^{-1/n}\{K(t,f)-K(t,f-f_{f^{\ast}(t)})\},
\end{align*}
where the last inequality follows by the triangle inequality, since
$K(t,\cdot)$ is a norm. Now,
\[
K(t,f)=tf^{\ast\ast}(t),
\]
and
\begin{align*}
K(t,f-f_{f^{\ast}(t)}) & \leq t\left\Vert f_{f^{\ast}(t)}-f\right\Vert
_{L^{\infty}}\\
& =tf^{\ast}(t).
\end{align*}
Thus,
\[
\left\Vert f_{f^{\ast}(t)}\right\Vert _{(L^{1}(R^{n}),L^{\infty}
(R^{n}))_{1/n,\infty}}\geq t^{-1/n}\left( tf^{\ast\ast}(t)-tf^{\ast
}(t)\right) .
\]
Combining estimates we find
\[
\left( tf^{\ast\ast}(t)-tf^{\ast}(t)\right) t^{-1/n}\leq c_{n}\int_{0}
^{t}\left\vert \nabla f\right\vert ^{\ast}(s)ds,
\]
which can be written as
\begin{equation}
f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}t^{1/n}\left\vert \nabla f\right\vert
^{\ast\ast}(t). \label{venta}
\end{equation}
This inequality had been essentially obtained by Kolyada \cite{kol} and is
equivalent (cf. \cite{mmp}) to earlier inequalities by Talenti \cite{talenti},
but its role in the study of limiting Sobolev inequalities, and the
introduction of the $L(\infty,q)$ spaces, was only pointed out in \cite{bmr}.
In \cite{mmp} it was shown that (\ref{venta}) is equivalent to the
isoperimetric inequality. More generally, Martin-Milman extended (\ref{venta})
to Gaussian measures (\cite{mamijfa}), and later\footnote{See also
\cite{kalis} for a maximal function approach to oscillation inequalities for
the gradient.} (cf. \cite{mamica}) to metric measure spaces, where the
inequality takes the following form
\begin{equation}
f^{\ast\ast}(t)-f^{\ast}(t)\leq\frac{t}{I(t)}\left\vert \nabla f\right\vert
^{\ast\ast}(t), \label{venta1}
\end{equation}
where $I$ is the isoperimetric profile associated with the underlying
geometry. In fact they show the equivalence of (\ref{venta1}) with the
corresponding isoperimetric inequality (for a recent survey cf. \cite{mamica}).
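For orientation: on $R^{n}$ (for sets of small measure) one has $I(t)\simeq t^{1-1/n},$ so that $\frac{t}{I(t)}\simeq t^{1/n}$ and (\ref{venta1}) recovers (\ref{venta}); in the Gaussian case, where $I(t)\simeq t\left( \log\frac{1}{t}\right) ^{1/2}$ as $t\rightarrow0,$ the factor $\frac{t}{I(t)}\simeq\left( \log\frac{1}{t}\right) ^{-1/2}$ is bounded near zero, reflecting the dimension free character of the Gaussian inequalities.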
One can prove the Gaussian Sobolev version of (\ref{venta1}) using the same
extrapolation procedure as above, but taking as a starting point Ledoux's
inequality \cite{le} in place of the Gagliardo-Nirenberg inequality
(cf. \cite{mamicoc}). Further recent extensions of the Martin-Milman
inequality on Gaussian measure can be found in \cite{xiao1}.
Let us now show another Sobolev rearrangement inequality for oscillations,
apparently first recorded in \cite{mmp}. We now take as our starting
inequality for the extrapolation procedure the sharp form of the
Gagliardo-Nirenberg inequality, which can be formulated as
\[
\left\Vert f\right\Vert _{L(n^{\prime},1)}=\left\Vert f\right\Vert
_{(L^{1}(R^{n}),L^{\infty}(R^{n}))_{1/n,1}}\leq c_{n}\left\Vert \nabla
f\right\Vert _{L^{1}},f\in Lip_{0}(R^{n}).
\]
We apply the inequality to $f_{f^{\ast}(t)}.$ The right hand side we have
already estimated,
\[
\left\Vert f_{f^{\ast}(t)}\right\Vert _{(L^{1}(R^{n}),L^{\infty}
(R^{n}))_{1/n,1}}\leq c_{n}\int_{0}^{t}\left\vert \nabla f\right\vert ^{\ast
}(s)ds.
\]
Now,
\begin{align*}
\left\Vert f_{f^{\ast}(t)}\right\Vert _{(L^{1}(R^{n}),L^{\infty}
(R^{n}))_{1/n,1}} & =\int_{0}^{\infty}s^{-1/n}K(s,f_{f^{\ast}(t)})\frac
{ds}{s}\\
& \geq\int_{0}^{t}s^{-1/n}K(s,f-(f-f_{f^{\ast}(t)}))\frac{ds}{s}\\
& \geq\int_{0}^{t}s^{-1/n}\{K(s,f)-K(s,f-f_{f^{\ast}(t)})\}\frac{ds}{s}\\
& =\int_{0}^{t}s^{-1/n}\{sf^{\ast\ast}(s)-K(s,f-f_{f^{\ast}(t)})\}\frac
{ds}{s}.
\end{align*}
Note that since $f^{\ast}$ decreases, for $s\leq t,$ we have
\begin{align*}
K(s,f-f_{f^{\ast}(t)}) & \leq s\left\Vert f-f_{f^{\ast}(t)}\right\Vert
_{L^{\infty}}\\
& =sf^{\ast}(t)\\
& \leq sf^{\ast}(s).
\end{align*}
Therefore,
\begin{align*}
\left\Vert f\right\Vert _{(L^{1}(R^{n}),L^{\infty}(R^{n}))_{1/n,1}} &
\geq\int_{0}^{t}s^{-1/n}\{sf^{\ast\ast}(s)-sf^{\ast}(s)\}\frac{ds}{s}\\
& =\int_{0}^{t}s^{1-1/n}\{f^{\ast\ast}(s)-f^{\ast}(s)\}\frac{ds}{s}.
\end{align*}
Consequently,
\[
\int_{0}^{t}s^{1-1/n}\{f^{\ast\ast}(s)-f^{\ast}(s)\}\frac{ds}{s}\leq c_{n}
\int_{0}^{t}\left\vert \nabla f\right\vert ^{\ast}(s)ds,
\]
an inequality first shown in \cite{mmp}.
\subsection{Bilinear Interpolation\label{secc:bilinear}}
In this section we show an extension of (\ref{intro3}) to a class of bilinear
operators that have a product or convolution like structure. These operators
were first introduced by O'Neil (cf. \cite[Exercise 5, page 76]{bl}).
Let $\vec{A},\vec{B},\vec{C},$ be Banach pairs, and let $\Pi$ be a bilinear
bounded operator such that
\[
\Pi:\left\{
\begin{array}
[c]{cc}
A_{0}\times B_{0} & \rightarrow C_{0}\\
A_{0}\times B_{1} & \rightarrow C_{1}\\
A_{1}\times B_{0} & \rightarrow C_{1}
\end{array}
\right. .
\]
For example, the choice $A_{0}=B_{0}=C_{0}=L^{\infty},$ and $A_{1}=B_{1}
=C_{1}=L^{1},$ corresponds to a regular product operator $\Pi_{1}(f,g)=fg,$
while the choice $A_{0}=B_{0}=C_{0}=L^{1},$ and $A_{1}=B_{1}=C_{1}=L^{\infty}$
corresponds to a convolution operator $\Pi_{2}(f,g)=f\ast g$ on $R^{n},$ say.
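For instance, for the product operator $\Pi_{1}$ with the choice $A_{0}=B_{0}=C_{0}=L^{\infty},$ $A_{1}=B_{1}=C_{1}=L^{1},$ the three required boundedness properties are immediate consequences of H\"{o}lder's inequality:
\[
\left\Vert fg\right\Vert _{L^{\infty}}\leq\left\Vert f\right\Vert _{L^{\infty
}}\left\Vert g\right\Vert _{L^{\infty}},\quad\left\Vert fg\right\Vert _{L^{1}
}\leq\left\Vert f\right\Vert _{L^{\infty}}\left\Vert g\right\Vert _{L^{1}
},\quad\left\Vert fg\right\Vert _{L^{1}}\leq\left\Vert f\right\Vert _{L^{1}
}\left\Vert g\right\Vert _{L^{\infty}}.
\]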
The main boundedness result is given by (cf. \cite[Exercise 5, page 76]{bl}):
\begin{equation}
\left\Vert \Pi(f,g)\right\Vert _{\vec{C}_{\theta,r}}\leq\left\Vert
f\right\Vert _{\vec{A}_{\theta_{1},q_{1}r}}\left\Vert g\right\Vert _{\vec
{B}_{\theta_{2},q_{2}r}}, \label{bilineal}
\end{equation}
where $\theta,\theta_{i}\in(0,1),$ $\theta=\theta_{1}+\theta_{2},$
$q_{i},r\in\lbrack1,\infty],$ $i=1,2,$ and $\frac{1}{r}\leq\frac{1}{q_{1}
r}+\frac{1}{q_{2}r}.$
\begin{example}
To illustrate the ideas, make it easy to compare results, and to avoid the tax
of lengthy computations with indices, we shall only model an extension of
(\ref{intro3}) for $\Pi_{1}$. Let us thus take $\vec{A}=\vec{B}=\vec{C}
=(A_{0},A_{1}),$ and let $\vec{X}=(X_{0},X_{1})=(A_{1},A_{0}).$ Further, in
order to be able to use the quadratic form argument outlined in the
Introduction, we choose $q_{1}=q_{2}=2,$ $r=p,$ $\theta_{1}=\theta_{2}
=\frac{\theta}{2}.$ Then, from (\ref{bilineal}) we get
\begin{equation}
\left\Vert \Pi_{1}(f,g)\right\Vert _{\vec{X}_{1-\theta,p}}\preceq\left\Vert
f\right\Vert _{\vec{X}_{1-\frac{\theta}{2},2p}}\left\Vert g\right\Vert
_{\vec{X}_{1-\frac{\theta}{2},2p}}. \label{laqueprecisa}
\end{equation}
Since $(1-\frac{1}{2})(1-\theta)+\frac{1}{2}1=1-\frac{\theta}{2},$ we can use
the reiteration theorem to write
\[
\vec{X}_{1-\frac{\theta}{2},2p}=(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2}
,2p},
\]
and find that
\begin{equation}
\left\Vert \Pi_{1}(f,g)\right\Vert _{\vec{X}_{1-\theta,p}}\preceq\left\Vert
f\right\Vert _{(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2},2p}}\left\Vert
g\right\Vert _{(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2},2p}}.
\label{retornari}
\end{equation}
By the equivalence theorem\footnote{In this example we are not interested in
the precise dependence of the constants of equivalence.} (cf. (\ref{equiva})
above),
\[
(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2},2p}=(\vec{X}_{1-\theta,p}
,X_{1})_{\frac{1}{2},2p}^{(2)}.
\]
Therefore,
\begin{align*}
\left\Vert \Pi_{1}(f,g)\right\Vert _{\vec{X}_{1-\theta,p}} & \preceq
\left\Vert f\right\Vert _{(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2},2p}^{(2)}
}\left\Vert g\right\Vert _{(\vec{X}_{1-\theta,p},X_{1})_{\frac{1}{2},2p}
^{(2)}}\\
& \preceq\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\left\Vert
f\right\Vert _{\left( \vec{X}_{1-\theta,p},X_{1}\right) _{1,\infty}^{(1)}
}^{1/2}\left\Vert g\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\left\Vert
g\right\Vert _{\left( \vec{X}_{1-\theta,p},X_{1}\right) _{1,\infty}^{(1)}
}^{1/2},
\end{align*}
where in the last step we have used (\ref{bmo5}) applied to the pair $(\vec
{X}_{1-\theta,p},X_{1}).$ Consequently, by the quadratic form argument, we
finally obtain
\[
\left\Vert \Pi_{1}(f,g)\right\Vert _{\vec{X}_{1-\theta,p}}\preceq\left\Vert
f\right\Vert _{\vec{X}_{1-\theta,p}}\left\Vert g\right\Vert _{\left( \vec
{X}_{1-\theta,p},X_{1}\right) _{1,\infty}^{(1)}}+\left\Vert f\right\Vert
_{\left( \vec{X}_{1-\theta,p},X_{1}\right) _{1,\infty}^{(1)}}\left\Vert
g\right\Vert _{\vec{X}_{1-\theta,p}}.
\]
\end{example}
\begin{remark}
The method above uses one of the more powerful tools of the real method: the
re-scaling of inequalities. However, in this case there is a substantial
difficulty in implementing the result, since techniques for the actual
computation of the space $\left( \vec{X}_{1-\theta,p},X_{1}\right)
_{1,\infty}^{(1)}$ are not well developed at the present time\footnote{It is
an interesting open problem to modify Holmstedt's method so as to keep track,
in a nearly optimal way, of both coordinates in the Gagliardo diagram when
doing reiteration. For more on the computation of Gagliardo coordinate spaces
see the forthcoming \cite{mil3}.}. Therefore we avoid the use of
(\ref{retornari}) and instead prove directly that if\footnote{To simplify the
computations we model the $L^{p}$ case here. Another simplification is that in
this argument we don't need to be fussy about constants.} $\theta=\frac{1}
{p},$ we have
\begin{equation}
\left\Vert f\right\Vert _{\vec{X}_{1-\frac{\theta}{2},2p}}\preceq\left\Vert
f\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\left\Vert f\right\Vert _{\vec
{X}_{1,\infty}^{(1)}}^{1/2}.\label{berkovich}
\end{equation}
Granting this, applying (\ref{berkovich}) to both terms on the right hand side of
(\ref{laqueprecisa}), and then using the quadratic form argument, we find
\begin{equation}
\left\Vert \Pi_{1}(f,g)\right\Vert _{\vec{X}_{1-\theta,p}}\preceq\left\Vert
f\right\Vert _{\vec{X}_{1-\theta,p}}\left\Vert g\right\Vert _{\vec
{X}_{1,\infty}^{(1)}}+\left\Vert g\right\Vert _{\vec{X}_{1-\theta,p}
}\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}.\label{elcaso}
\end{equation}
In the particular case of product operators and $L^{p}$ spaces, $1<p<\infty,$
(\ref{elcaso}) reads
\[
\left\Vert fg\right\Vert _{L^{p}}\preceq\left\Vert f\right\Vert _{L^{p}
}\left\Vert g\right\Vert _{L(\infty,\infty)}+\left\Vert g\right\Vert _{L^{p}
}\left\Vert f\right\Vert _{L(\infty,\infty)},
\]
which should be compared with (\ref{intro3}) recalling that
\[
\left\Vert f\right\Vert _{L(\infty,\infty)}\leq c\left\Vert f\right\Vert
_{BMO}.
\]
\end{remark}
\begin{proof}
(of (\ref{berkovich})). To simplify the notation we let $K(t,f;\vec{X})=K(t).$
Then,
\begin{align*}
\left\Vert f\right\Vert _{\vec{X}_{1-\frac{\theta}{2},2p}} & =\left\{
\int_{0}^{\infty}\left( K(s)s^{-(1-\frac{\theta}{2})}\right) ^{2p}\frac
{ds}{s}\right\} ^{1/2p}\\
& \leq\left\{ \int_{0}^{t}\left( K(s)s^{-(1-\frac{\theta}{2})}\right)
^{2p}\frac{ds}{s}\right\} ^{1/2p}+\left\{ \int_{t}^{\infty}\left(
K(s)s^{-(1-\frac{\theta}{2})}\right) ^{2p}\frac{ds}{s}\right\} ^{1/2p}\\
& =(1)+(2).
\end{align*}
We proceed to estimate each of these two terms starting with $(2):$
\begin{align*}
(2) & =\left\{ \int_{t}^{\infty}\left( K(s)s^{-(1-\theta)}\right)
^{p}s^{(1-\theta)p}s^{-(1-\frac{\theta}{2})p}\left( K(s)s^{-(1-\frac{\theta
}{2})}\right) ^{p}\frac{ds}{s}\right\} ^{1/2p}\\
& \leq\left\{ \sup_{s\geq t}s^{(1-\theta)}s^{-(1-\frac{\theta}{2})}\left(
K(s)s^{-(1-\frac{\theta}{2})}\right) \right\} ^{1/2}\left\{ \int
_{t}^{\infty}\left( K(s)s^{-(1-\theta)}\right) ^{p}\frac{ds}{s}\right\}
^{1/2p}\\
& \leq\left\{ \sup_{s\geq t}\frac{K(s)}{s}\right\} ^{1/2}\left\Vert
f\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\\
& =\left\{ \frac{K(t)}{t}\right\} ^{1/2}\left\Vert f\right\Vert _{\vec
{X}_{1-\theta,p}}^{1/2}\\
& =t^{-1/2}t^{(1-\theta)/2}\left\{ K(t)t^{-(1-\theta)}\right\}
^{1/2}\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\\
& \leq t^{-1/2p}\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}.
\end{align*}
Moreover,
\begin{align*}
(1) & =\left\{ \int_{0}^{t}\left( \frac{K(s)}{s}s^{-(1-\frac{\theta}
{2})+1}\right) ^{2p}\frac{ds}{s}\right\} ^{1/2p}\\
& \leq\left\{ \int_{0}^{t}\left( \left[ \frac{K(s)}{s}-\frac{K(t)}
{t}\right] s^{-(1-\frac{\theta}{2})+1}\right) ^{2p}\frac{ds}{s}\right\}
^{1/2p}+\left\{ \int_{0}^{t}\left( \frac{K(t)}{t}s^{-(1-\frac{\theta}{2}
)+1}\right) ^{2p}\frac{ds}{s}\right\} ^{1/2p}\\
& =(a)+(b).
\end{align*}
The term $(b)$ is readily estimated:
\begin{align*}
(b) & =\frac{K(t)}{t}\left\{ \int_{0}^{t}s^{\frac{\theta}{2}2p}\frac{ds}
{s}\right\} ^{1/2p}\\
& \sim\frac{K(t)}{t}t^{1/2p}\\
& =K(t)t^{-(1-\theta)}t^{(1-\theta)}t^{1/2p-1}\\
& \preceq\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}t^{-1/2p}.
\end{align*}
Next we use the familiar estimate
\begin{align*}
\frac{K(s)}{s}-\frac{K(t)}{t} & =\int_{s}^{t}\left[ \frac{K(u)}{u}
-K^{\prime}(u)\right] \frac{du}{u}\\
& \leq\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}\log\frac{t}{s},
\end{align*}
to see that
\begin{align*}
(a) & \leq\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}\left\{
\int_{0}^{t}s\left( \log\frac{t}{s}\right) ^{2p}\frac{ds}{s}\right\}
^{1/2p}\\
& \preceq\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}t^{1/2p}.
\end{align*}
Collecting estimates, we have
\[
\left\Vert f\right\Vert _{\vec{X}_{1-\frac{\theta}{2},2p}}\preceq
t^{-1/2p}\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}+\left\Vert
f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}t^{1/2p}.
\]
Balancing the two terms on the right hand side (i.e. choosing $t$ so that $t^{1/p}=\left\Vert f\right\Vert _{\vec{X}_{1-\theta,p}}/\left\Vert f\right\Vert _{\vec{X}_{1,\infty}^{(1)}}$) we find
\[
\left\Vert f\right\Vert _{\vec{X}_{1-\frac{\theta}{2},2p}}\preceq\left\Vert
f\right\Vert _{\vec{X}_{1-\theta,p}}^{1/2}\left\Vert f\right\Vert _{\vec
{X}_{1,\infty}^{(1)}}^{1/2},
\]
as we wished to show.
\end{proof}
\begin{remark}
It would be of interest to implement a similar model analysis for $\Pi
_{2}(f,g)=f\ast g.$ We expect it to be straightforward; however, we must leave
the task to the interested reader.
\end{remark}
\begin{remark}
The results of this section are obviously connected with Leibniz rules in
function spaces. This brings to mind the celebrated commutator theorem of
Coifman-Rochberg-Weiss \cite{cofro} which states that if $T$ is a
Calder\'{o}n-Zygmund operator (cf. \cite{cosa}), and $b$ is a $BMO$ function,
then $[T,b]f=bTf-T(bf),$ defines a bounded linear operator\footnote{In fact,
the boundedness of $[T,b]$ for all CZ operators implies that $b\in BMO$ (cf.
\cite{janson})}
\[
\lbrack T,b]:L^{p}(R^{n})\rightarrow L^{p}(R^{n}),1<p<\infty.
\]
The result has been extended in many different directions. In particular, an
abstract theory of interpolation of commutators has evolved from it (cf.
\cite{cwkamiro}, \cite{rr}, for recent surveys). In this theory a crucial role
is played by a class of operators $\Omega$ such that, for bounded operators
$T$ on a given interpolation scale, the commutator $[T,\Omega]$ is also
bounded. The operators $\Omega$ often satisfy some of the functional equations
associated with derivation operators. At the present time we know very little
about how this connection comes about or how to exploit it in concrete
applications (cf. \cite{cwmiro}, \cite{konmil}). More in keeping with the
topic of this paper, and in view of many possible interesting applications, it
would also be of interest to study oscillation inequalities for commutators
$[T,\Omega]$ in the context of interpolation theory. In this connection we
should mention that in \cite{mamiwa}, the authors formulate a generalized
version of the Coifman-Rochberg-Weiss commutator theorem, valid in the context
of real interpolation, where in a suitable fashion the function space $W,$
introduced in \cite{misa}, plays the role of $BMO:$
\[
W=\{f\in L_{loc}^{1}(0,\infty):\sup_{t}\left\vert \frac{1}{t}\int_{0}
^{t}f(s)ds-f(t)\right\vert <\infty\}.
\]
Obviously, $f\in L(\infty,\infty)$ iff $f^{\ast}\in W.$
\end{remark}
\section{A brief personal note on Cora Sadosky}
I still remember well the day (circa March 1977) that I met Mischa Cotlar and
Cora Sadosky at Mischa's apartment in Caracas (cf. \cite{milcot}). I was
coming from Sydney, en route to take my new job as a Visiting Professor at
Universidad de Los Andes, in Merida. And while, of course, I had heard a lot
about them, I did not know them personally. Mischa and I had exchanged some
correspondence (no e-mail in those days!). I had planned to spend an
afternoon with the Cotlars, taking advantage of a 24-hour stopover while en
route to Merida, a colonial city in the Andes region of Venezuela. As it turns
out, my flight for the next day had been cancelled, and Mischa and Yani
invited me to stay over at their place.
When I arrived at the Cotlars' apartment, Corita and Mischa were working in the
dining room. Mischa gave me a brief explanation of what they were doing
mathematically. In fact, he dismissed the whole enterprise\footnote{I would
learn quickly that Mischa's modesty was legendary.}. I would learn much later
that what they were doing then would turn out to be very innovative and
influential research\footnote{From this period I can mention \cite{cotlar},
\cite{cotlar1}, \cite{cotlar2}.}.
Corita, who also knew about my impending arrival, greeted me with something
equivalent to \emph{Oh, so you really do exist!}\footnote{Existence here is to be
taken in a non mathematical sense. Of course, at that point in time I did not
\emph{exist} mathematically!} Being a \emph{porte\~{n}o} myself, I quickly found
Corita's style quite congenial and the conversation took off. We formed a
friendship that lasted for as long as she lived.
As I later learned, most people called her Cora, but the Cotlars, and a few
other old friends who knew her from childhood, called her Corita\footnote{In
Spanish \emph{Corita} means little Cora.}. Having been introduced to her at the
Cotlars' home, I proceeded to call her Corita too\ldots{} and that was the way it
would always be\footnote{Many, many years later she told me that by then
everyone called her Cora, except Mischa and myself, and that she would prefer
for me to call her Cora. I said: of course, Corita!}.
By early 1979 I moved for one semester to Maracaibo and afterwards to
Brasilia. Mischa was very helpful connecting me first with Jorge Lebowitz
(Maracaibo), and then with Djairo Figueiredo (Brasilia). In the meantime
Corita herself had moved to the US, where we met again, at an AMS Special
Session in Harmonic Analysis.
At the time I had a visiting position at UI at Chicago, and was trying to find
a tenure track job. She took an interest in my situation, and gave me very
useful suggestions on the job hunting process. She would remain very helpful
throughout my career. In particular, when she learned\footnote{She had a
membership at IAS herself that year.}, on her own, about my application for a
membership at the Institute for Advanced Study, in 1984, she supported my
case\ldots{} I did not know this until she called me to let me know that my
application had been accepted!
Corita and I met again many times over the course of the years. There were
conferences on Interpolation Theory\footnote{Harmonic analysts trained in the
sixties had a special place in their hearts for Interpolation theory. It was
after all a theory to which the great masters of the Chicago school (e.g.
Calder\'{o}n, Stein, Zygmund) had made fundamental contributions.} in Lund and
Haifa, on Harmonic Analysis in Washington D.C., Madrid and Boca Raton, there
were Chicago meetings to celebrate various birthdays of Alberto Calder\'{o}n, a
special session on Harmonic Analysis in Montreal, etc. Vanda and I went
to have dinner with Corita and her family, when we all coincided during a
visit to Buenos Aires in 1985. She was instrumental in my participation at
Mischa's 80th birthday conference in Caracas, 1994. We wrote papers for books
that each of us edited. At some point in time she and her family came to visit
us in Florida.
A few months before she passed, I was helping her, via e-mail, with the
texting\footnote{I mean using TeX!} of a paper of hers about Mischa Cotlar.
Probably the best way I have to describe Corita is to say that she was a force
of nature. She was a brilliant mathematician, with an intense but charming
personality. Having had to endure exile, discrimination, and very
difficult working conditions herself, she was very sensitive to the plight of others.
I am sure many of the testimonies in this book will describe how much she
helped to provide opportunities for younger mathematicians to develop their careers.
Some things are hard to change. As much as I could not train myself to call
her \emph{Cora}, in our relationship she was always \textquotedblleft big sister\textquotedblright.
I will always be grateful to her, for all her help and her friendship.
I know that the space $BMO$ had a very special place in her mathematical
interests and, indeed, $BMO$ spaces appear in many considerations throughout
her works. For this very reason, and whatever the merits of my small
contribution, I have chosen to dedicate this note on $BMO$ inequalities to her memory.
\textbf{Acknowledgement}. I am grateful to Sergey Astashkin and Michael Cwikel
for useful comments that helped improve the quality of the paper. Needless to
say, I remain fully responsible for the remaining shortcomings.
\end{document}
\begin{document}
\operatorname{o}peratorname{t}itle{Finite transitive permutation groups with \\operatorname{o}nly small normaliser orbits}
\operatorname{o}peratorname{aut}hor{Alexander Bors and Michael Giudici\operatorname{o}peratorname{t}hanks{First author's address: Johann Radon Institute for Computational and Applied Mathematics (RICAM), Altenberger Stra{\ss}e 69, 4040 Linz, Austria. E-mail: \mathfrak{h}ref{mailto:[email protected]}{[email protected]} \newline Second author's address: The University of Western Australia, Centre for the Mathematics of Symmetry and Computation, 35 Stirling Highway, Crawley 6009, WA, Australia. E-mail: \mathfrak{h}ref{mailto:[email protected]}{[email protected]} \newline The first author is supported by the Austrian Science Fund (FWF), project J4072-N32 \mathrm{e}nquote{Affine maps on finite groups}. The second author was supported by the Australian Research Council Discovery Project DP200101951. This research arose out of the annual CMSC research retreat and the authors thank the other participants for making it such an enjoyable event. \newline 2020 \mathrm{e}mph{Mathematics Subject Classification}: Primary 20B10. Secondary 20D45, 20F28. \newline \mathrm{e}mph{Key words and phrases:} Transitive permutation group, Normaliser, Normaliser orbit.}}
\date{\operatorname{o}peratorname{t}oday}
\title{Finite transitive permutation groups with \only small normaliser orbits}
\operatorname{o}peratorname{ab}stract{We study finite transitive permutation groups $G\operatorname{o}peratorname{l}eqslant\operatorname{o}peratorname{Sym}(\mathcal{O}mega)$ such that all orbits of the conjugation action on $G$ of the normaliser of $G$ in $\operatorname{o}peratorname{Sym}(\mathcal{O}mega)$ have size bounded by some constant. Our results extend recent results, due to the first author, on finite abstract groups $G$ such that all orbits of the natural action of the automorphism group $\operatorname{o}peratorname{Aut}(G)$ on $G$ have size bounded by some constant.}
\section{Introduction and main results}\operatorname{o}peratorname{l}abel{sec1}
One of the fundamental concepts in the study of \mathrm{e}mph{structures} (i.e., sets endowed with additional structure in the form of operations and relations) is that of an automorphism, the formalisation of the intuitive notion of a \mathrm{e}nquote{symmetry}. A significant portion of research across various disciplines is concerned with studying \mathrm{e}nquote{highly symmetrical} structures $X$, a condition usually expressed in terms of certain transitivity assumptions on natural actions of the automorphism group $\operatorname{o}peratorname{Aut}(X)$. For example, the well-studied notions of vertex-transitive graphs \cite[Definition 4.2.2, p.~85]{BW79a}, block- or flag-transitive designs \cite{CP93a,CP93b,Hub09a} and finite flag-transitive projective planes \cite{Tha03a} fall into this general framework.
In the special case where $X$ is a group $G$, the situation is more complicated, as the most straightforward transitivity assumption, that $\operatorname{o}peratorname{Aut}(G)$ shall act transitively on $G$, is only satisfied by the trivial group. Hence, in order to obtain an interesting theory, weaker conditions have been studied by various authors. As examples, we mention
\begin{itemize}
\item the papers \cite{BD16a,DGB17a,LM86a,Stro02a} by various authors on finite groups $G$ such that $\operatorname{o}peratorname{Aut}(G)$ has \mathrm{e}nquote{few} orbits on $G$ (and the recent paper \cite{BDM20a} studying such a condition for infinite groups),
\item the first author's paper \cite{Bor19a} dealing with finite groups $G$ such that $\operatorname{o}peratorname{Aut}(G)$ has at least one \mathrm{e}nquote{large} orbit on $G$, and
\item Zhang's paper \cite{Zha92a} investigating finite groups $G$ where all elements of the same order are conjugate under $\operatorname{o}peratorname{Aut}(G)$ (which were later discovered to have a connection with the celebrated CI-problem from algebraic graph theory, see \cite[Introduction]{LP97a}).
\mathrm{e}nd{itemize}
In his recent paper \cite{Bor20a}, the first author studied finite groups that are at the opposite end of the spectrum, i.e., they are \mathrm{e}nquote{highly unsymmetrical}. Formally, the paper \cite{Bor20a} is concerned with finite groups $G$ such that all orbits of $\operatorname{o}peratorname{Aut}(G)$ on $G$ have size bounded by a constant. It should be noted that Robinson and Wiegold already studied such conditions for general groups in their 1984 paper \cite{RW84a}, obtaining \mathrm{e}mph{inter alia} a nice general structural characterisation \cite[Theorem 1]{RW84a} in the spirit of Neumann's celebrated characterisation of BFC-groups \cite[Theorem 3.1]{Neu54a}. However, the methods and results from \cite{Bor20a} are tailored to the finite case, where more specific statements can be made.
There are many instances where the full automorphism group $\operatorname{o}peratorname{Aut}(G)$ is not \mathrm{e}nquote{accessible}. One notable case is where $G\operatorname{o}peratorname{l}eqslant\operatorname{o}peratorname{Sym}(\mathcal{O}mega)$ is a permutation group. Here it is natural to only view those automorphisms of $G$ that arise from conjugation by an element from the normaliser of $G$ in $\operatorname{o}peratorname{Sym}(\mathcal{O}mega)$. Formally, let us denote by $\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)$ the image of the conjugation action $\operatorname{o}peratorname{N}_{\operatorname{o}peratorname{Sym}(\mathcal{O}mega)}(G)\rightarrow\operatorname{o}peratorname{Aut}(G)$. It is then natural to ask which results about $\operatorname{o}peratorname{Aut}(G)$ can be extended to $\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)$. This group of automorphisms has been previously considered as a means for computing the normaliser of $G$ in $\operatorname{o}peratorname{Sym}(\mathcal{O}mega)$ \cite{Hul08}.
The goal of this paper is to extend the main results of \cite{Bor20a} to $\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)$. By a classical idea, dating back to Cayley, abstract groups are in a bijective correspondence with regular permutation groups via their (right) multiplication actions on themselves. Moreover, if $G$ is a regular permutation group, then $\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)=\operatorname{o}peratorname{Aut}(G)$ (see Lemma \ref{regularAutLem} below), and so the statements of \cite[Theorem 1.1]{Bor20a} may equivalently be viewed as results on finite regular permutation groups $G$ such that all $\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)$-orbits on $G$ have size bounded by a constant. In this paper, we will extend \cite[Theorem 1.1]{Bor20a} from finite regular to finite transitive permutation groups. Our main results are as follows:
\begin{theorem}\label{mainTheo}
Let $G\leqslant\operatorname{Sym}(\Omega)$ be a finite transitive permutation group.
\begin{enumerate}
\item The following are equivalent:
\begin{enumerate}
\item All $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbits on $G$ are of length at most $3$.
\item Up to isomorphism of permutation groups, $G$ is one of the following:
\begin{itemize}
\item $\mathbb{Z}/m\mathbb{Z}$ for some $m\in\{1,2,3,4,6\}$, $(\mathbb{Z}/2\mathbb{Z})^2$ or $\operatorname{Sym}(3)$, each in its regular action on itself.
\item $\operatorname{D}_{2n}\leqslant\operatorname{Sym}(n)$, the symmetry group of a regular $n$-gon, for some $n\in\{3,4,6\}$.
\end{itemize}
\end{enumerate}
\item The order of $G$ cannot be bounded under the assumption that the maximum $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit length on $G$ is $4$.
\item Let $c$ be a positive and $d$ a non-negative integer. Assume that $G$ is $d$-generated and that all $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbits on $G$ are of length at most $c$. Then $|G|\leqslant\mathfrak{f}(d,c^d)$, where $\mathfrak{f}$ is as in Notation \ref{mainNot} below.
\item If all $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbits on $G$ are of length at most $23$, then $G$ is soluble.
\end{enumerate}
\end{theorem}
\begin{notation}\label{mainNot}
We denote by $\mathfrak{f}$ the function $\mathbb{N}\times\mathbb{N}^+\rightarrow\mathbb{N}^+$ mapping
\[
(d,n)\mapsto 16^{(n+1)d}\cdot n^{2n^3d^2(5+d+4n^3\log_2{n})+2n^3+4nd+4d}.
\]
\end{notation}
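To get a feel for how quickly this bound grows, the following Python sketch (purely illustrative; the helper name \texttt{log10\_f} is ours, not part of the paper) computes the base-10 logarithm of $\mathfrak{f}(d,n)$:

```python
from math import log10, log2

def log10_f(d, n):
    """Base-10 logarithm of f(d, n) = 16^((n+1)d) * n^E, where
    E = 2 n^3 d^2 (5 + d + 4 n^3 log2(n)) + 2 n^3 + 4 n d + 4 d."""
    E = 2 * n**3 * d**2 * (5 + d + 4 * n**3 * log2(n)) + 2 * n**3 + 4 * n * d + 4 * d
    return (n + 1) * d * log10(16) + E * log10(n)

# f(1, 1) = 16^2 = 256, while f(2, 2) already has several hundred decimal digits.
print(log10_f(1, 1))   # log10(256) ≈ 2.408
print(log10_f(2, 2))
```

For instance, $\mathfrak{f}(2,2)$ has roughly $770$ decimal digits, which illustrates why Theorem \ref{improvedBoundTheo} below is a substantial improvement in the abstract-group case.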
The combination of statements (1) and (2) of Theorem \ref{mainTheo} is particularly interesting when compared to their counterparts in \cite[Theorem 1.1]{Bor20a}: at the moment, for finite abstract groups $G$, the precise value of the largest integer $c$ such that there are only finitely many $G$ with all $\operatorname{Aut}(G)$-orbits of length at most $c$ is unknown (we only know that $c\in\{3,4,5,6,7\}$). We also note that the constant $23$ in part (4) is sharp, as $G=\operatorname{Alt}(5)$ in its usual action on five points has an $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit of length $24$.
We note that the proof of Theorem \ref{mainTheo}(3) uses a different, much simpler main idea than that of \cite[Theorem 1.1(3)]{Bor20a}: under the assumptions of Theorem \ref{mainTheo}(3), one has $|\operatorname{Aut}_{\mathrm{perm}}(G)|\leqslant c^d$, which implies the asserted upper bound on $|G|$ by a partial generalisation of a classical theorem of Ledermann and Neumann, \cite[Theorem 6.6]{LN56a}, from abstract to transitive permutation groups; see Lemma \ref{ledNeuLem}. The same idea works in the case of abstract groups, where a direct application of \cite[Theorem 6.6]{LN56a} yields the following simpler and stronger bound compared to \cite[Theorem 1.1(3)]{Bor20a}:
\begin{theorem}\label{improvedBoundTheo}
Let $G$ be a finite abstract group. Assume that $G$ is $d$-generated and that all $\operatorname{Aut}(G)$-orbits on $G$ are of length at most $c$. Then
\[
|G|\leqslant c^{d\cdot c^d\cdot(1+\lfloor d\log_2{c}\rfloor)}+1.
\]
\end{theorem}
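For illustration, in the smallest nontrivial case $d=1$ and $c=2$, the bound above evaluates to
\[
|G|\leqslant 2^{1\cdot 2^1\cdot(1+\lfloor 1\cdot\log_2{2}\rfloor)}+1=2^{2\cdot 2}+1=17.
\]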
Unfortunately, as we will explain in Section \ref{sec5}, it seems that Ledermann and Neumann's proof of \cite[Theorem 6.6]{LN56a} cannot be adapted to transitive permutation groups, requiring us to use Sambale's recent proof from \cite{Sam19a} instead. Sambale's proof is conceptually simpler than Ledermann and Neumann's argument, but it produces worse explicit bounds, which explains the stark contrast between the bounds in Theorems \ref{mainTheo}(3) and \ref{improvedBoundTheo}.
\section{Preliminaries}\label{sec2}
\subsection{Notation and terminology}\label{subsec2P1}
We denote by $\mathbb{N}$ the set of natural numbers (including $0$) and by $\mathbb{N}^+$ the set of positive integers. The symbol $\phi$ denotes Euler's totient function.
As in \cite{Bor20a}, for a finite abstract group $G$, we denote by $\operatorname{maol}(G)$ the maximum length of an orbit of the natural action of the automorphism group $\operatorname{Aut}(G)$ on $G$. On the other hand, if $G\leqslant\operatorname{Sym}(\Omega)$ is a permutation group, then as in Section \ref{sec1}, we denote by $\operatorname{Aut}_{\mathrm{perm}}(G)$ the group of all automorphisms of $G$ that are induced through conjugation by some element from the normaliser $\operatorname{N}_{\operatorname{Sym}(\Omega)}(G)$, and if $G$ is finite, we denote by $\operatorname{maol}_{\mathrm{perm}}(G)$ the maximum length of an orbit of the natural action of $\operatorname{Aut}_{\mathrm{perm}}(G)$ on $G$. Note that we may also view $G$ as an abstract group, so that the notation $\operatorname{Aut}(G)$ is well-defined (as is $\operatorname{maol}(G)$ if $G$ is finite), and we have $\operatorname{Aut}_{\mathrm{perm}}(G)\leqslant\operatorname{Aut}(G)$, as well as $\operatorname{maol}_{\mathrm{perm}}(G)\leqslant\operatorname{maol}(G)$ if $G$ is finite.
If $G$ is an abstract group, we denote by $G_{\mathrm{reg}}\leqslant\operatorname{Sym}(G)$ the image of the \mbox{(right-)}regular permutation representation of $G$ on itself. The minimum number of generators of a finitely generated group $G$ will be denoted by $d(G)$ and called the \emph{rank of $G$}. The exponent (least common multiple of the element orders) of a finite group $G$ will be denoted by $\operatorname{Exp}(G)$. The soluble radical (largest soluble normal subgroup) of a finite group $G$ will be denoted by $\operatorname{Rad}(G)$, and the centre of $G$ by $\zeta G$.
\subsection{Some basic results}\label{subsec2P2}
In this subsection, we collect some auxiliary results (most if not all of which are well-known) that will be used in the proofs of the main results. The following three lemmas deal with transitive permutation groups. We include the first without proof.
\begin{lemma}\label{transAbLem}
Let $G$ be a transitive permutation group, and let $S$ be a point stabiliser in $G$. The following hold:
\begin{enumerate}
\item $S$ is \emph{core-free in $G$}, i.e., $S$ does not contain any nontrivial normal subgroup of $G$.
\item If $G$ is abelian, then $G$ is regular.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{autPermLem}
Let $G$ be a transitive permutation group, and let $\alpha\in\operatorname{Aut}(G)$. The following are equivalent:
\begin{enumerate}
\item $\alpha\in\operatorname{Aut}_{\mathrm{perm}}(G)$.
\item For some (equivalently, any) point stabiliser $S$ in $G$, we have that $S^{\alpha}$ is also a point stabiliser in $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
See \cite[Theorem 4.2B, p.~110]{DM96a}.
\end{proof}
\begin{lemma}\label{regularAutLem}
Let $G$ be a regular permutation group. Then $\operatorname{Aut}_{\mathrm{perm}}(G)=\operatorname{Aut}(G)$. In particular, if $G$ is finite, then $\operatorname{maol}_{\mathrm{perm}}(G)=\operatorname{maol}(G)$.
\end{lemma}
\begin{proof}
This is immediate by Lemma \ref{autPermLem}.
\end{proof}
The next two lemmas are concerned with finite abelian groups:
\begin{lemma}\label{centAbelianLem}
Let $A$ be a finite abelian group, and let $B$ be a proper subgroup of $A$. The following are equivalent:
\begin{enumerate}
\item The centraliser of $B$ in $\operatorname{Aut}(A)$ is trivial.
\item $|B|$ is odd, and $|A|=2|B|$ (or, equivalently, $A=B\times\mathbb{Z}/2\mathbb{Z}$).
\end{enumerate}
\end{lemma}
\begin{proof}
The implication \enquote{(2)$\Rightarrow$(1)} is clear by the fact that $\operatorname{Aut}(\mathbb{Z}/2\mathbb{Z})$ is trivial, so we focus on the proof of \enquote{(1)$\Rightarrow$(2)}, for which we will first show the following claim: \enquote{If $G$ is a finite abelian $p$-group for some prime $p$, and $H$ is a proper subgroup of $G$, then $\operatorname{C}_{\operatorname{Aut}(G)}(H)$ is trivial if and only if $|G|=2$ and $|H|=1$.}
In order to prove the claim, note first that it is clear if $H$ is trivial, as $\mathbb{Z}/2\mathbb{Z}$ is the only nontrivial (finite) group with trivial automorphism group. We may thus assume that $H$ is nontrivial, and under this assumption, we need to show that $\operatorname{C}_{\operatorname{Aut}(G)}(H)\not=\{\operatorname{id}_G\}$. For this, we may and will assume w.l.o.g.~that $H$ is of index $p$ in $G$. Fix elements $g\in G\setminus H$ and $h_0\in H$ with $\operatorname{ord}(h_0)=p$. Then every element of $G$ has a unique representation as $ig+h$ for some $i\in\{0,1,\ldots,p-1\}$ and some $h\in H$. We define a function $\alpha:G\rightarrow G$ via $(ig+h)^{\alpha}:=ig+ih_0+h$. It is easy to check that this function is a nontrivial element of $\operatorname{C}_{\operatorname{Aut}(G)}(H)$, which concludes the proof of the claim.
Now that the claim has been proved, we will show the contraposition of \enquote{(1)$\Rightarrow$(2)}: assume that $|B|$ is even or $|A|>2|B|$; we need to show that $\operatorname{C}_{\operatorname{Aut}(A)}(B)$ is nontrivial. For each prime $p$, denote by $A_p$ and $B_p$ the Sylow $p$-subgroups of $A$ and $B$ respectively, and note that $B_p\leqslant A_p$ for all primes $p$. If $B_p<A_p$ for some prime $p>2$, then by the claim, $\operatorname{C}_{\operatorname{Aut}(A_p)}(B_p)$ is nontrivial, whence $\operatorname{C}_{\operatorname{Aut}(A)}(B)$ is nontrivial, as required. We may thus assume that $B_p=A_p$ for all $p>2$; since $B$ is a proper subgroup of $A$, this forces $B_2<A_2$. Then under either of the assumptions \enquote{$|B|$ is even} or \enquote{$|A|>2|B|$}, we find that $(|A_2|,|B_2|)\not=(2,1)$, whence $\operatorname{C}_{\operatorname{Aut}(A_2)}(B_2)$ is nontrivial by the claim, and thus $\operatorname{C}_{\operatorname{Aut}(A)}(B)$ is nontrivial, as required.
\end{proof}
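The automorphism constructed in the claim can be checked computationally in a small case. The following Python sketch (illustrative only) takes $G=\mathbb{Z}/p^2\mathbb{Z}$ written additively, $H=p\mathbb{Z}/p^2\mathbb{Z}$ of index $p$, $g=1$ and $h_0=p$, and verifies that $\alpha$ is a nontrivial automorphism centralising $H$:

```python
p = 3
n = p * p                      # G = Z/p^2 under addition
H = set(range(0, n, p))        # H = pZ/p^2, the index-p subgroup

def alpha(x):
    i = x % p                  # unique representation x = i*g + h with g = 1, h in H
    return (x + i * p) % n     # (i*g + h) -> i*g + i*h0 + h, where h0 = p

# alpha centralises H, is a homomorphism, is bijective, and is nontrivial:
assert all(alpha(h) == h for h in H)
assert all(alpha((x + y) % n) == (alpha(x) + alpha(y)) % n
           for x in range(n) for y in range(n))
assert sorted(alpha(x) for x in range(n)) == list(range(n))
assert any(alpha(x) != x for x in range(n))
print("alpha is a nontrivial element of C_Aut(G)(H) for p =", p)
```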
For the second lemma on finite abelian groups, we require a simple definition:
\begin{definition}\label{abelianBasisDef}
Let $p$ be a prime, and let $A=\mathbb{Z}/p^{e_1}\mathbb{Z}\times\cdots\times\mathbb{Z}/p^{e_r}\mathbb{Z}$ with $e_1\geqslant e_2\geqslant\cdots\geqslant e_r\geqslant1$ be a finite abelian $p$-group of rank $r$. A \emph{basis of $A$} is an $r$-tuple $(a_1,\ldots,a_r)\in A^r$ such that $\operatorname{ord}(a_i)=p^{e_i}$ for $i=1,\ldots,r$ and $A=\langle a_1\rangle\times\cdots\times\langle a_r\rangle$.
\end{definition}
\begin{lemma}\label{elAbBasisLem}
Let $p$ be a prime, and let $A=\mathbb{Z}/p^{e_1}\mathbb{Z}\times\cdots\times\mathbb{Z}/p^{e_r}\mathbb{Z}$ with $e_1\geqslant e_2\geqslant\cdots\geqslant e_r\geqslant1$ be a finite abelian $p$-group of rank $r$. Moreover, let $B$ be an elementary abelian subgroup of $A$ of rank $s$. Then there is a basis $(a_1,\ldots,a_r)$ of $A$ as well as indices $1\leqslant i_1<i_2<\cdots<i_s\leqslant r$ such that $(p^{e_{i_1}-1}a_{i_1},p^{e_{i_2}-1}a_{i_2},\ldots,p^{e_{i_s}-1}a_{i_s})$ is a basis of $B$.
\end{lemma}
\begin{proof}
We proceed by induction on $s$. The induction base, $s=0$, is vacuously true. Assume thus that $s\geqslant1$ and that the assertion is true for elementary abelian subgroups of rank at most $s-1$. Fix a nontrivial element $b\in B$. By \cite[Lemma 5]{Sam19a}, there are subgroups $A_1,A_2\leqslant A$ such that $b\in A_1$, $A_1$ is cyclic, and $A=A_1\times A_2$. Fix a direct complement $B_2$ of $B_1:=\langle b\rangle$ in $B$. Through subtracting suitable multiples of $b$ from the entries of any given basis of $B_2$, we may assume without loss of generality that $B_2\leqslant A_2$. The assertion now follows through applying the induction hypothesis to $B_2$ and $A_2$.
\end{proof}
\begin{remark}
Consider the following assertion, which generalises the statement of Lemma \ref{elAbBasisLem}:
\enquote{For every prime $p$, every finite abelian $p$-group $A$ and every subgroup $B$ of $A$, there are bases $\vec{b}$ and $\vec{a}$ of $B$ and $A$ respectively such that for each entry $b$ of $\vec{b}$, some entry of $\vec{a}$ is a root of $b$.}
This assertion is \emph{not} true. For example, for an arbitrary prime $p$, consider $A=\mathbb{Z}/p^3\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$, with basis $(a_1,a_2)$, and set $b:=pa_1+a_2$ and $B:=\langle b\rangle\cong\mathbb{Z}/p^2\mathbb{Z}$. Then if the above assertion were true, the generator $b$ of $B$ would need to have a $p$-th root in $A$, which it does not.
\end{remark}
Finally, as in \cite{Bor20a}, we will be using the concept of a \enquote{central automorphism} at several points in our arguments, and we briefly state the most important facts concerning this concept (which are well-known and easy to check). For each group $G$ and each group homomorphism $f:G\rightarrow\zeta G$, the function $\alpha_f:G\rightarrow G$, $g\mapsto g\cdot f(g)$, is an endomorphism of $G$ called the \emph{central endomorphism of $G$ associated with $f$}. The kernel of $\alpha_f$ consists of those $g\in\zeta G$ such that $f(g)=g^{-1}$. In particular, if $G$ is finite, then $\alpha_f$ is an automorphism of $G$ if and only if the only element of $\zeta G$ that is inverted by $f$ is $1_G$. Such automorphisms are called \emph{central automorphisms}, and they form a subgroup of $\operatorname{Aut}(G)$ denoted by $\operatorname{Aut}_{\mathrm{cent}}(G)$.
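As a concrete illustration (not needed in the sequel): take $G=\mathbb{Z}/4\mathbb{Z}$, written additively, so that $\zeta G=G$, and let $f:g\mapsto 2g$. The only $g$ with $f(g)=-g$ is $g=0$ (as $2g=-g$ forces $3g=0$), so
\[
\alpha_f:g\mapsto g+f(g)=3g=-g
\]
is a central automorphism of $G$, namely inversion.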
\section{Proof of Theorem \ref{mainTheo}(1)}\label{sec3}
We split the proof of Theorem \ref{mainTheo}(1) into the three cases $\operatorname{maol}_{\mathrm{perm}}(G)=1,2,3$, each dealt with in one of the following three subsections.
\subsection{Finite transitive permutation groups with maximum normaliser orbit length 1}\label{subsec3P1}
The following is a simple consequence of known results:
\begin{proposition}\label{maol1Prop}
Let $G$ be a finite transitive permutation group. The following are equivalent:
\begin{enumerate}
\item $\operatorname{maol}_{\mathrm{perm}}(G)=1$.
\item Up to permutation group isomorphism, $G$ is one of $(\mathbb{Z}/m\mathbb{Z})_{\mathrm{reg}}$ for $m\in\{1,2\}$.
\end{enumerate}
\end{proposition}
\begin{proof}
The implication \enquote{(2)$\Rightarrow$(1)} is easy, so we focus on the implication \enquote{(1)$\Rightarrow$(2)}. Since $\operatorname{Inn}(G)\leqslant\operatorname{Aut}_{\mathrm{perm}}(G)$, all conjugacy classes of $G$ are of length $1$, i.e., $G$ is abelian. Hence, by Lemma \ref{transAbLem}(2), $G$ is regular, and by Lemma \ref{regularAutLem}, $\operatorname{maol}(G)=1$. The result now follows from \cite[Proposition 3.1.1]{Bor20a}.
\end{proof}
\subsection{Finite transitive permutation groups with maximum normaliser orbit length 2}\label{subsec3P2}
In this subsection, we will prove the following result:
\begin{proposition}\label{maol2Prop}
Let $G\leqslant\operatorname{Sym}(\Omega)$ be a finite transitive permutation group. The following are equivalent:
\begin{enumerate}
\item $\operatorname{maol}_{\mathrm{perm}}(G)=2$.
\item Up to permutation group isomorphism, $G$ is one of the following:
\begin{itemize}
\item $(\mathbb{Z}/m\mathbb{Z})_{\mathrm{reg}}$ for some $m\in\{3,4,6\}$, or
\item $\operatorname{D}_8\leqslant\operatorname{Sym}(4)$.
\end{itemize}
\end{enumerate}
\end{proposition}
\begin{proof}
The implication \enquote{(2)$\Rightarrow$(1)} is easy, so we will be concerned with the implication \enquote{(1)$\Rightarrow$(2)}. If $G$ is regular, then by Lemma \ref{regularAutLem} and \cite[Proposition 3.2.4]{Bor20a}, it follows that $G\cong(\mathbb{Z}/m\mathbb{Z})_{\mathrm{reg}}$ for some $m\in\{3,4,6\}$. We will thus henceforth assume that $G$ is nonregular (hence nonabelian by Lemma \ref{transAbLem}(2)), and under this assumption, we will show that $G\cong\operatorname{D}_8\leqslant\operatorname{Sym}(4)$.
By assumption, all conjugacy classes of $G$ are of length at most $2$, and so the exponent of $\operatorname{Inn}(G)$ is $2$, whence $\operatorname{Inn}(G)$ is an elementary abelian $2$-group. In particular, $G$ is nilpotent of class $2$, and all Sylow $p$-subgroups of $G$ for $p>2$ are abelian. It follows that $G=G_2\times A$, where $G_2$ is a nonabelian $2$-group of class $2$ and $A$ is a finite abelian group of odd order. Fix a point $\omega\in\Omega$, and consider the associated point stabiliser $G_{\omega}\leqslant G$. Since $G_{\omega}$ is core-free in $G$, we find that $G_{\omega}\cap A=\{1_G\}$, or equivalently (using the coprimality of $|G_2|$ and $|A|$), $G_{\omega}\leqslant G_2$.
We claim that $A$ is trivial. Assume otherwise. Then by \cite[Proposition 3.1.1]{Bor20a}, there is an $a\in A$ such that $|a^{\operatorname{Aut}(A)}|\geqslant2$. Let $g\in G_2\setminus\zeta G_2$, and consider the element $h:=ga\in G$. Note that $\operatorname{Aut}(G)=\operatorname{Aut}(G_2)\times\operatorname{Aut}(A)$, and since $G_{\omega}\leqslant G_2$, it follows by Lemma \ref{autPermLem} that $\operatorname{Inn}(G_2)\times\operatorname{Aut}(A)\leqslant\operatorname{Aut}_{\mathrm{perm}}(G)$. Hence
\[
|h^{\operatorname{Aut}_{\mathrm{perm}}(G)}|\geqslant|h^{\operatorname{Inn}(G_2)\times\operatorname{Aut}(A)}|=|g^{G_2}|\cdot|a^{\operatorname{Aut}(A)}|\geqslant 2\cdot2=4>2,
\]
a contradiction. Therefore, $A$ is trivial, and so $G=G_2$ is a nonabelian $2$-group of class $2$.
Observing once more that $G_{\omega}$ is core-free in $G$, we find that $G_{\omega}\cap\zeta G=\{1_G\}$. As $G/\zeta G=\operatorname{Inn}(G)$ is (as noted above) an elementary abelian $2$-group, it follows that $G_{\omega}$ is also an elementary abelian $2$-group, embedded into $G/\zeta G$ via the canonical projection $G\rightarrow G/\zeta G$. Note that $|G_{\omega}|<|G/\zeta G|$, since otherwise, $G=G_{\omega}\times\zeta G$, which is impossible, since $G_{\omega}$ is both nontrivial and core-free in $G$.
We claim that $\zeta G$ is cyclic. Assume otherwise. Then, fixing an embedding $(\mathbb{Z}/2\mathbb{Z})^2\overset{\iota}{\hookrightarrow}\zeta G$ and a projection $(G/\zeta G)/(G_{\omega}\zeta G/\zeta G)\overset{\pi}{\twoheadrightarrow}\mathbb{Z}/2\mathbb{Z}$, we have four distinct group homomorphisms
\[
f:G\overset{\text{can.}}{\twoheadrightarrow} G/\zeta G\overset{\text{can.}}{\twoheadrightarrow}(G/\zeta G)/(G_{\omega}\zeta G/\zeta G)\overset{\pi}{\twoheadrightarrow}\mathbb{Z}/2\mathbb{Z}\hookrightarrow(\mathbb{Z}/2\mathbb{Z})^2\overset{\iota}{\hookrightarrow}\zeta G.
\]
These homomorphisms $f$ satisfy $\zeta G\leqslant\ker(f)$, the associated central automorphisms $\alpha_f$ centralise $G_{\omega}$ (in particular, $\alpha_f\in\operatorname{Aut}_{\mathrm{perm}}(G)$ by Lemma \ref{autPermLem}), and a suitable element of $G$ has four distinct images under those automorphisms $\alpha_f$. It follows that $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 4>2$, a contradiction. This concludes the proof that $\zeta G$ is cyclic.
Note that $G'\leqslant\zeta G$ is also cyclic. But since $G/\zeta G$ has exponent $2$, it follows that for all $x,y\in G$,
\[
1_G=[x^2,y]=[x,y]^x[x,y]=[x,y]^2,
\]
whence $G'\cong\mathbb{Z}/2\mathbb{Z}$. Next, we claim that $G/G'$ is an elementary abelian $2$-group. Assume otherwise. Then $|\zeta G|\geqslant 4$ (otherwise, $G$ is extraspecial). Denote the image of $G_{\omega}$ under the canonical projection $G\rightarrow G/G'$ by $P$. Since $P$ is elementary abelian, we may apply Lemma \ref{elAbBasisLem} to find a basis $\vec{b}=(b_1,\ldots,b_n)$ of $G/G'$ such that a suitable subsequence of $\vec{b}$ \enquote{powers up} to a basis of $P$. Now, the first basis entry $b_1$ has order $2^k$ for some $k\geqslant2$, and using the facts that $G'\leqslant\zeta G$, that $G_{\omega}\cap\zeta G=\{1_G\}$ and that every square in $G$ lies in $\zeta G$, we conclude that $b_1^{2^{k-1}}\notin P$. It follows that $P\leqslant\langle b_2,\ldots,b_n\rangle$, whence we may fix a projection $\pi_1:G/(G_{\omega}G')\twoheadrightarrow\mathbb{Z}/2^k\mathbb{Z}$. We also fix a projection $\pi_2:\mathbb{Z}/2^k\mathbb{Z}\twoheadrightarrow\mathbb{Z}/4\mathbb{Z}$. Through composition, we obtain four distinct group homomorphisms
\[
f:G\overset{\text{can.}}{\twoheadrightarrow} G/(G_{\omega}G')\overset{\pi_1}{\twoheadrightarrow}\mathbb{Z}/2^k\mathbb{Z}\overset{\pi_2}{\twoheadrightarrow}\mathbb{Z}/4\mathbb{Z}\rightarrow\zeta G,
\]
each of which has the property that $1_G$ is the only element of $\zeta G$ that is inverted by $f$. Hence we have four distinct associated central automorphisms $\alpha_f\in\operatorname{Aut}_{\mathrm{perm}}(G)$, and a suitable element of $G$ has four distinct images under these automorphisms, whence $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 4>2$, a contradiction. This concludes the proof that $G/G'$ is elementary abelian.
Now, setting $d:=\log_2{|G/G'|}$ and $t:=\log_2{|G_{\omega}|}$, we define a \emph{standard tuple in $G$} as a tuple $(g_1,\ldots,g_d)\in G^d$ such that
\begin{itemize}
\item $(g_1,\ldots,g_t)$ is a basis of $G_{\omega}$, and
\item the entry-wise image of $(g_1,\ldots,g_d)$ under the canonical projection $G\rightarrow G/G'$ is a basis of $G/G'$.
\end{itemize}
If $\vec{g}=(g_1,\ldots,g_d)\in G^d$ is a standard tuple in $G$, then the \emph{power-commutator tuple associated with $\vec{g}$} is the $(d+{d \choose 2})$-tuple
\[
(g_1^2,g_2^2,\ldots,g_d^2,[g_1,g_2],[g_1,g_3],\ldots,[g_1,g_d],[g_2,g_3],[g_2,g_4],\ldots,[g_2,g_d],\ldots,[g_{d-1},g_d])
\]
with entries in $G'$. Two standard tuples in $G$ are called \emph{equivalent} if and only if they have the same power-commutator tuple.
As in \cite[proof of Proposition 3.2.4]{Bor20a}, two standard tuples in $G$ are conjugate under the component-wise action of $\operatorname{Aut}(G)$ if and only if they are equivalent. However, the above definition of \enquote{standard tuple} differs from the one in \cite[proof of Proposition 3.2.4]{Bor20a}; it was chosen in such a way that any automorphism $\alpha$ of $G$ which maps any given standard tuple in $G$ to any other given standard tuple in $G$ has the property that $G_{\omega}^{\alpha}=G_{\omega}$, whence $\alpha\in\operatorname{Aut}_{\mathrm{perm}}(G)$ by Lemma \ref{autPermLem}. It follows that each equivalence class of standard tuples in $G$ is contained in an $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit on $G^d$. Considering that the number of equivalence classes of standard tuples in $G$ is at most $2^{d+{d\choose 2}}$ (this uses that $|G'|=2$) and that the number of standard tuples in $G$ is at least
\[
\prod_{i=0}^{t-1}{(2^t-2^i)}\cdot\prod_{j=t}^{d-1}{(2^d-2^j)}\cdot 2^{d-t}=2^{d+{d\choose 2}-t}\cdot\prod_{i=1}^t{(2^i-1)}\cdot\prod_{j=1}^{d-t}{(2^j-1)},
\]
it follows that there is an equivalence class of standard tuples in $G$ (and thus an $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit on $G^d$) of size at least
\begin{align*}
&\frac{2^{d+{d\choose 2}-t}\cdot\prod_{i=1}^t{(2^i-1)}\cdot\prod_{j=1}^{d-t}{(2^j-1)}}{2^{d+{d\choose 2}}}=\frac{\prod_{i=1}^t{(2^i-1)}\cdot\prod_{j=1}^{d-t}{(2^j-1)}}{2^t}= \\
&(1-\frac{1}{2^t})\cdot 2^{{t\choose 2}}\cdot\prod_{i=1}^{t-1}{(1-\frac{1}{2^i})}\cdot 2^{{{d-t+1}\choose 2}}\cdot\prod_{j=1}^{d-t}{(1-\frac{1}{2^j})}\geqslant\frac{1}{2}\cdot 2^{{t\choose 2}+{{d-t+1}\choose 2}}\cdot(\prod_{i=1}^{\infty}{(1-\frac{1}{2^i})})^2 \\
&\geqslant\frac{1}{2}\cdot 2^{\frac{d}{4}\cdot(\frac{d}{2}-1)}\cdot 0.28^2=0.0392\cdot 2^{\frac{d^2}{8}-\frac{d}{4}},
\end{align*}
where the first inequality uses that $t\geqslant1$ (since $G$ is nonregular), and the second inequality uses that
\[
{t\choose 2}+{{d-t+1}\choose 2}\geqslant\frac{d}{4}\cdot(\frac{d}{2}-1).
\]
However, since all $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbits on $G$ are of length at most $2$, it follows that the length of an $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit on $G^d$ cannot exceed $2^d$. Therefore,
\[
2^d \geqslant 0.0392\cdot 2^{\frac{d^2}{8}-\frac{d}{4}},
\]
which implies that $d\leqslant12$. Moreover, for $d=2,3,\ldots,12$ and $t\in\{1,2,\ldots,d-1\}$, one can check that
\[
(1-\frac{1}{2^t})\cdot 2^{{t\choose 2}}\cdot\prod_{i=1}^{t-1}{(1-\frac{1}{2^i})}\cdot 2^{{{d-t+1}\choose 2}}\cdot\prod_{j=1}^{d-t}{(1-\frac{1}{2^j})} > 2^d
\]
unless
\[
(d,t)\in\{(2,1),(3,1),(3,2),(4,1),(4,2),(4,3),(5,2),(5,3),(5,4),(6,3),(6,4)\},
\]
so these are the only possibilities for $(d,t)$. In particular, the degree of $G$, which equals $2^{d-t+1}$, is at most $16$. To conclude this proof, we will make use of the library of finite transitive permutation groups of small degree in GAP \cite{GAP4}, which was implemented by Hulpke \cite{TransGrp}, goes up to degree $32$ and is based on the papers \cite{But93a,BK83a,CH08a,Hul05a,Roy87a} as well as an unpublished classification, due to Sims, of the primitive permutation groups up to degree $50$ (to cover the transitive groups of degree $31$); see also \cite{HR19a} for the latest record concerning the classification of transitive groups of small degree, which goes up to degree $48$. In any case, this classification allows us to conclude that the only transitive permutation group $G$ of degree at most $16$ such that
\begin{itemize}
\item $G$ is a nonabelian $2$-group of class $2$,
\item $\zeta G$ is cyclic,
\item $G/G'$ is elementary abelian, and
\item $\operatorname{maol}_{\mathrm{perm}}(G)=2$
\end{itemize}
is $\operatorname{D}_8\leqslant\operatorname{Sym}(4)$.
\end{proof}
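The implication \enquote{(2)$\Rightarrow$(1)} for $\operatorname{D}_8\leqslant\operatorname{Sym}(4)$ can also be confirmed by brute force, since the $\operatorname{Aut}_{\mathrm{perm}}$-orbits on $G$ are precisely the orbits of the conjugation action of $\operatorname{N}_{\operatorname{Sym}(\Omega)}(G)$ on $G$. The following Python sketch (ours, purely illustrative) computes this normaliser and its orbits:

```python
from itertools import permutations

def compose(p, q):                     # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def generated(gens, deg):              # closure of the generators in Sym(deg)
    elems, frontier = {tuple(range(deg))}, list(gens)
    while frontier:
        g = frontier.pop()
        if g not in elems:
            elems.add(g)
            frontier.extend(compose(g, h) for h in gens)
    return elems

sym4 = list(permutations(range(4)))
# D8 as the symmetry group of the square with vertices 0,1,2,3 in cyclic order,
# generated by the rotation (0 1 2 3) and the diagonal reflection (0 2):
d8 = generated([(1, 2, 3, 0), (2, 1, 0, 3)], 4)
assert len(d8) == 8

normaliser = [x for x in sym4
              if {compose(compose(x, g), inverse(x)) for g in d8} == d8]

orbit_sizes, remaining = [], set(d8)
while remaining:                       # orbits of N_{Sym(4)}(D8) conjugating D8
    g = remaining.pop()
    orbit = {compose(compose(x, g), inverse(x)) for x in normaliser}
    remaining -= orbit
    orbit_sizes.append(len(orbit))

print(len(normaliser), sorted(orbit_sizes))   # maximal orbit length is 2
```

Here $\operatorname{D}_8$ is a Sylow $2$-subgroup of $\operatorname{Sym}(4)$ and hence self-normalising, so $\operatorname{Aut}_{\mathrm{perm}}(G)=\operatorname{Inn}(G)$ and the orbits are the conjugacy classes, of sizes $1,1,2,2,2$.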
\subsection{Finite transitive permutation groups with maximum normaliser orbit length 3}\label{subsec3P3}
In this subsection, we will be concerned with the proof of the following part of Theorem \ref{mainTheo}(1):
\begin{proposition}\label{maol3Prop}
Let $G\leqslant\operatorname{Sym}(\Omega)$ be a finite transitive permutation group. The following are equivalent:
\begin{enumerate}
\item $\operatorname{maol}_{\mathrm{perm}}(G)=3$.
\item Up to permutation group isomorphism, $G$ is one of the following:
\begin{itemize}
\item $((\mathbb{Z}/2\mathbb{Z})^2)_{\mathrm{reg}}$, $\operatorname{Sym}(3)_{\mathrm{reg}}$, or
\item $\operatorname{D}_{2n}\leqslant\operatorname{Sym}(n)$ for some $n\in\{3,6\}$.
\end{itemize}
\end{enumerate}
\end{proposition}
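Since $\operatorname{Sym}(3)_{\mathrm{reg}}$ is regular, Lemma \ref{regularAutLem} reduces its maximum normaliser orbit length to $\operatorname{maol}(\operatorname{Sym}(3))$, which can be verified to equal $3$ by brute force. The following Python sketch (ours, purely illustrative) enumerates all automorphisms of $\operatorname{Sym}(3)$ as product-preserving bijections of the element set:

```python
from itertools import permutations

def compose(p, q):                     # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

s3 = list(permutations(range(3)))      # the six elements of Sym(3)

# Automorphisms = bijections of the element set preserving all products.
autos = []
for images in permutations(s3):
    phi = dict(zip(s3, images))
    if all(phi[compose(a, b)] == compose(phi[a], phi[b]) for a in s3 for b in s3):
        autos.append(phi)

orbit_sizes = sorted(len({phi[g] for phi in autos}) for g in s3)
print(len(autos), orbit_sizes)         # 6 automorphisms; maximal orbit length 3
```

The orbits are the identity, the two $3$-cycles, and the three transpositions (each transposition orbit counted once per element), so the orbit lengths are $1,2,2,3,3,3$ and $\operatorname{maol}(\operatorname{Sym}(3))=3$.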
We will first show two auxiliary results, which are extensions of \cite[Lemmas 3.3.1(2) and 3.3.2]{Bor20a}.
\begin{lemma}\label{auxLem1}
Let $G\leqslant\operatorname{Sym}(\Omega)$ be a finite, transitive, nonregular permutation group such that $\operatorname{maol}_{\mathrm{perm}}(G)=3$. Then the following hold:
\begin{enumerate}
\item The set of element orders of $\operatorname{Aut}_{\mathrm{perm}}(G)$ is contained in $\{1,2,3\}$.
\item $G$ is a $\{2,3\}$-group.
\end{enumerate}
\end{lemma}
\begin{proof}
For (1): As in \cite[proof of Lemma 3.3.1(2,a)]{Bor20a}, for all $\alpha\in\operatorname{Aut}_{\mathrm{perm}}(G)$, one has that $G=\operatorname{C}_G(\alpha^2)\cup\operatorname{C}_G(\alpha^3)$, and thus $G=\operatorname{C}_G(\alpha^2)$ or $G=\operatorname{C}_G(\alpha^3)$, as $G$ is not the union of two proper subgroups.
For (2): As in \cite[proof of Lemma 3.3.1(2,b)]{Bor20a}, one can show that $G=G_{\{2,3\}}\times G_{\{2,3\}'}$, where $G_{\{2,3\}}$ is the unique Hall-$\{2,3\}$-subgroup of $G$, and $G_{\{2,3\}'}$ is the unique, central Hall-$\{2,3\}'$-subgroup of $G$. Fix a point $\omega\in\Omega$, and consider the point stabiliser $G_{\omega}\leqslant G$. Since $G_{\omega}$ is core-free in $G$, it follows that $G_{\omega}\cap G_{\{2,3\}'}=\{1_G\}$, and thus $G_{\omega}\leqslant G_{\{2,3\}}$, whence $\operatorname{Inn}(G_{\{2,3\}})\times\operatorname{Aut}(G_{\{2,3\}'})$ embeds naturally into $\operatorname{Aut}_{\mathrm{perm}}(G)$ by Lemma \ref{autPermLem}. Hence, if $|G_{\{2,3\}'}|>1$, then it follows by \cite[Lemma 3.1(2)]{Bor20a} that $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 4>3$, a contradiction.
\end{proof}
\begin{lemma}\label{auxLem2}
Let $G\leqslant\operatorname{Sym}(\Omega)$ be a finite, transitive, nonregular permutation group such that $\operatorname{maol}_{\mathrm{perm}}(G)=3$. Then the set of element orders of $\operatorname{Inn}(G)$ is exactly $\{1,2,3\}$.
\end{lemma}
\begin{proof}
Assume otherwise. By Lemma \ref{transAbLem}(2), $G$ is nonabelian, and so by Lemma \ref{auxLem1}, $\operatorname{Exp}(\operatorname{Inn}(G))\in\{2,3\}$; in any case, $\operatorname{Inn}(G)$, and thus $G$ itself, is nilpotent. By Lemma \ref{auxLem1}(2), we can write $G=G_2\times G_3$, where $G_p$ denotes the unique Sylow $p$-subgroup of $G$ for $p\in\{2,3\}$. Fix a point $\omega\in\Omega$ and consider the point stabiliser $G_{\omega}\leqslant G$. We make a case distinction:
\begin{enumerate}
\item Case: $\operatorname{Exp}(\operatorname{Inn}(G))=2$. Then $G_2$ is nonabelian and $G_3$ is abelian (and thus central in $G$). Since $G_{\omega}$ is core-free in $G$, it follows that $G_{\omega}\cap G_3=\{1_G\}$, and thus $G_{\omega}\leqslant G_2$. Therefore, $\operatorname{Inn}(G_2)\times\operatorname{Aut}(G_3)$ embeds naturally into $\operatorname{Aut}_{\mathrm{perm}}(G)$, and so if $|G_3|>1$, we could conclude that $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 2\cdot 2=4>3$, a contradiction. Consequently, $G_3$ is trivial, and so $G=G_2$ is a nonabelian $2$-group with $\operatorname{maol}_{\mathrm{perm}}(G)=3$. An argument analogous to the one in \cite[proof of Lemma 3.3.2, Case (1)]{Bor20a} yields the final contradiction for this case.
\item Case: $\operatorname{Exp}(\operatorname{Inn}(G))=3$. Then $G_2$ is abelian (hence central in $G$), whereas $G_3$ is nonabelian. We have $G_{\omega}\leqslant G_3$, and so, viewing $H:=G_3$ as a permutation group via the inclusions $H\leqslant G\leqslant\operatorname{Sym}(\Omega)$, we find that $H_{\omega}=G_{\omega}$ and $\operatorname{maol}_{\mathrm{perm}}(H)=3$. As in \cite[proof of Lemma 3.3.2, Case (2)]{Bor20a}, we conclude that $\operatorname{Inn}(H)$ is abelian, i.e., that the nilpotency class of $H$ is $2$. Moreover, in view of $\operatorname{Exp}(\operatorname{Inn}(H))=3$, we conclude that $\operatorname{Inn}(H)$ is an elementary abelian $3$-group. One can now show the following facts, in the listed order and analogously to the proof of Proposition \ref{maol2Prop}:
\begin{itemize}
\item $H_{\omega}$ is an elementary abelian $3$-group, embedded into $H/\zeta H\cong\operatorname{Inn}(H)$ via the canonical projection $H\rightarrow H/\zeta H$.
\item $|H_{\omega}|<|H/\zeta H|$.
\item $\zeta H$ is cyclic.
\item $|H'|=3$.
\item $H/H'$ is an elementary abelian $3$-group.
\end{itemize}
We can use these restrictions on $H$ to carry out an analogue of the \enquote{standard tuples} argument from the proof of Proposition \ref{maol2Prop} (only needing to replace the prime $2$ by $3$), which allows us to conclude that with $d:=\log_3{|H/H'|}$ and $t:=\log_3{|H_{\omega}|}$, we have
\begin{equation}\label{3tupleEq}
(1-\frac{1}{3^t})\cdot 3^{{t \choose 2}+{{d-t+1} \choose 2}}\cdot\prod_{i=1}^{t-1}{(1-\frac{1}{3^i})}\cdot\prod_{j=1}^{d-t}{(1-\frac{1}{3^j})} \leqslant 3^d,
\end{equation}
in particular
\[
3^d \geqslant \frac{2}{3}\cdot 3^{\frac{d}{4}(\frac{d}{2}-1)}\cdot(\prod_{i=1}^{\infty}{(1-\frac{1}{3^i})})^2 \geqslant \frac{2}{3}\cdot 3^{\frac{d^2}{8}-\frac{d}{4}}\cdot 0.56^2,
\]
which only holds for $d\leqslant11$. Moreover, among all pairs $(d,t)$ with $d\in\{2,3,\ldots,11\}$ and $t\in\{1,2,\ldots,d-1\}$, the stronger inequality from Formula (\ref{3tupleEq}) only holds for
\[
(d,t)\in\{(2,1),(3,1),(3,2),(4,2),(4,3)\},
\]
so that $\deg(H)=3^{d-t+1}\leqslant 3^3=27$. However, according to the library of finite transitive permutation groups of small degree \cite{GAP4,TransGrp}, there are no transitive permutation groups $H$ of degree at most $27$ such that
\begin{itemize}
\item $H$ is a nonabelian $3$-group of class $2$,
\item $\zeta H$ is cyclic,
\item $H/H'$ is elementary abelian, and
\item $\operatorname{maol}_{\mathrm{perm}}(H)=3$,
\end{itemize}
a contradiction to the case assumption.\qedhere
\end{enumerate}
\end{proof}
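The enumeration of admissible pairs $(d,t)$ for inequality (\ref{3tupleEq}), as well as the numeric constant $0.56$, can be double-checked with a short script. This is our own sanity check in exact rational arithmetic, not part of the original argument; the range $d\leqslant 11$ is taken from the analytic bound established above.

```python
from fractions import Fraction
from math import comb

def lhs(d, t):
    """Exact value of the left-hand side of inequality (3tupleEq)."""
    val = (1 - Fraction(1, 3**t)) * Fraction(3)**(comb(t, 2) + comb(d - t + 1, 2))
    for i in range(1, t):
        val *= 1 - Fraction(1, 3**i)
    for j in range(1, d - t + 1):
        val *= 1 - Fraction(1, 3**j)
    return val

# d <= 11 was established analytically; enumerate all remaining pairs (d, t).
admissible = [(d, t) for d in range(2, 12) for t in range(1, d)
              if lhs(d, t) <= 3**d]
print(admissible)  # [(2, 1), (3, 1), (3, 2), (4, 2), (4, 3)]

# Constant check: the infinite product prod_{i>=1} (1 - 3^{-i}) exceeds 0.56.
# By the Weierstrass bound, its tail beyond i = 30 is > 1 - 3^{-30}/2.
p = Fraction(1)
for i in range(1, 31):
    p *= 1 - Fraction(1, 3**i)
assert p * (1 - Fraction(1, 2 * 3**30)) > Fraction(56, 100)
```

The case $(d,t)=(5,3)$ fails only barely ($6656/27\approx246.5$ against $3^5=243$), which is why exact fractions rather than floating point are used here.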
We are now ready to prove Proposition \ref{maol3Prop}.
\begin{proof}[Proof of Proposition \ref{maol3Prop}]
The implication \enquote{(2)$\Rightarrow$(1)} is easy, so we focus on the proof of \enquote{(1)$\Rightarrow$(2)}. The regular case is dealt with in \cite[Proposition 3.3.3]{Bor20a}, so we assume that $G$ is nonregular. By Lemma \ref{auxLem2}, the set of element orders of $\operatorname{Inn}(G)$ is $\{1,2,3\}$, and as in \cite[proof of Proposition 3.3.3]{Bor20a}, this allows us to conclude that $\operatorname{Inn}(G)\cong\operatorname{Sym}(3)$. Fix a point $\omega\in\Omega$, and consider the point stabiliser $G_{\omega}\leqslant G$. Since $\operatorname{Inn}(G)\cong G/\zeta G$ and $G_{\omega}\cap\zeta G=\{1_G\}$, we find that $G_{\omega}$ is embedded into $\operatorname{Sym}(3)$ via the canonical projection $G\rightarrow G/\zeta G$. We make a case distinction:
\begin{enumerate}
\item Case: $G_{\omega}\cong\operatorname{Sym}(3)$. Then $G=G_{\omega}\times\zeta G$, which is impossible, since $G_{\omega}$ is core-free in $G$.
\item Case: $G_{\omega}\cong\mathbb{Z}/3\mathbb{Z}$. We claim that $\zeta G$ is a $3$-group. Indeed, assuming that $2$ divides $|\zeta G|$, there is a group homomorphism chain of the form
\[
f:G\overset{\text{can.}}\twoheadrightarrow G/\zeta G\overset{\sim}\rightarrow\operatorname{Sym}(3)\twoheadrightarrow\mathbb{Z}/2\mathbb{Z}\hookrightarrow\zeta G.
\]
The corresponding central automorphism $\alpha_f$ of $G$ lies in $\operatorname{Aut}_{\mathrm{perm}}(G)$ and maps any fixed element $g\in G$ that projects onto an order $2$ element in $\operatorname{Sym}(3)$ to a different element in the same central coset. Therefore, and since the image of $g$ in $G/\zeta G$ has conjugacy class length $3$, it follows that $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 2\cdot 3>3$, a contradiction. This concludes the proof that $\zeta G$ is a $3$-group, and as in \cite[proof of Proposition 3.3.3]{Bor20a}, we can infer from this that $G=\zeta G\times\operatorname{Sym}(3)$.
We next claim that $|\zeta G|\leqslant 3$. Assume otherwise. With respect to a fixed direct decomposition of $G$ of the form $\zeta G\times\operatorname{Sym}(3)$, denote by $P$ the projection of $G_{\omega}$ to $\zeta G$. Note that $|P|=3$, because if $P$ is trivial, then $G_{\omega}$ is normal and hence not core-free in $G$. Moreover, denoting by $\tau$ a fixed element of order $2$ in $\operatorname{Sym}(3)$, observe that $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)$ embeds naturally into $\operatorname{Aut}_{\mathrm{perm}}(G)$, via the injective group homomorphism
\begin{align*}
&\alpha \mapsto \begin{cases}((z,\sigma) \mapsto (z^{\alpha},\sigma)), & \text{if }\alpha\in\operatorname{C}_{\operatorname{Aut}(\zeta G)}(P), \\ ((z,\sigma)\mapsto (z^{\alpha},\sigma^{\tau})), & \text{otherwise},\end{cases} \\
&\text{for all }\alpha\in\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P),\ z\in\zeta G,\ \sigma\in\operatorname{Sym}(3).
\end{align*}
Therefore, $\operatorname{maol}_{\mathrm{perm}}(G)$ is at least the maximum orbit length of $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)$ on $\zeta G$, which we will show to be strictly larger than $3$ in the following subcase distinction:
\begin{enumerate}
\item Subcase: $\zeta G$ is cyclic. Then $P$ is characteristic in $\zeta G$, so $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)=\operatorname{Aut}(\zeta G)$. Therefore, writing $|\zeta G|=3^k$ (with $k\geqslant2$), we conclude that the maximum orbit length of $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)$ on $\zeta G$ is $\phi(3^k)=3^{k-1}\cdot 2\geqslant 6>3$.
\item Subcase: $\zeta G$ is not cyclic. By Lemma \ref{elAbBasisLem}, there is a basis $(z_1,\ldots,z_r)$ of $\zeta G$ and an $i\in\{1,\ldots,r\}$ such that $P\leqslant\langle z_i\rangle$. Therefore, if $\operatorname{ord}(z_j)=3^k>3$ for some $j\in\{1,\ldots,r\}$, then $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)$ has an orbit of length $\phi(3^k)=3^{k-1}\cdot 2\geqslant6>3$, a contradiction. Hence $\zeta G\cong(\mathbb{Z}/3\mathbb{Z})^r$, so that the maximum orbit length of $\operatorname{Stab}_{\operatorname{Aut}(\zeta G)}(P)$ on $\zeta G$ is $3^r-3\geqslant6>3$, another contradiction.
\end{enumerate}
This concludes the proof that $|\zeta G|\leqslant 3$. However, note that if $|\zeta G|=1$, then $G_{\omega}$ is normal and thus not core-free in $G$. It follows that $G$ is of order $18$ and of degree $6$, and by the library of finite transitive permutation groups of small degree \cite{GAP4,TransGrp}, there are no transitive permutation groups with this combination of order and degree and with maximum normaliser orbit length $3$, a contradiction.
\item Case: $G_{\omega}\cong\mathbb{Z}/2\mathbb{Z}$. In what follows, for a finite group $H$ and a prime $p$, if $H$ has a unique Sylow $p$-subgroup, we denote that subgroup by $H_p$. Let $A\leqslant G$ be the preimage of the unique index $2$ subgroup of $\operatorname{Sym}(3)$ under the canonical projection $G\rightarrow G/\zeta G\cong\operatorname{Sym}(3)$. Then $A$ is an abelian subgroup of index $2$ in $G$, and we have $G=A\rtimes G_{\omega}$ as well as
\[
A=(\zeta G)_2\times G_3=(\zeta G)_2\times(\zeta G)_3\times{\mathbb{Z}/3\mathbb{Z}}=\zeta G\times{\mathbb{Z}/3\mathbb{Z}};
\]
to see that $G_3=(\zeta G)_3\times{\mathbb{Z}/3\mathbb{Z}}$, follow the argument in \cite[proof of Proposition 3.3.3]{Bor20a}. Consequently, every automorphism of $\zeta G$ extends to an element of $\operatorname{Aut}_{\mathrm{perm}}(G)$, whence $3\geqslant\operatorname{maol}_{\mathrm{perm}}(G)\geqslant\operatorname{maol}(\zeta G)$, which in view of \cite[Theorem 1.1(1)]{Bor20a} implies that $|\zeta G|\in\{1,2,3,4,6\}$. It follows that
\[
(\deg(G),|G|) \in \{(3,6),(6,12),(9,18),(12,24),(18,36)\}.
\]
We conclude this proof by noting that according to the library of finite transitive permutation groups of small degree \cite{GAP4,TransGrp}, the only nonregular finite transitive permutation groups having one of these degree-size combinations as well as maximum normaliser orbit length $3$ are the groups $\operatorname{D}_{2n}\leqslant\operatorname{Sym}(n)$ for $n\in\{3,6\}$. \qedhere
\end{enumerate}
\end{proof}
\section{Proof of Theorem \ref{mainTheo}(2)}\label{sec4}
This is easy modulo the work from \cite[Section 4]{Bor20a}. As in \cite[Definition 4.1]{Bor20a}, denote by $G_n$ the finite $2$-group given by the following (power-commutator) presentation:
\begin{align*}
\langle x_1,\ldots,x_{2^n+1},a,b \mid \,\,\,&[a,b]=[x_i,a]=[x_i,b]=1,[x_{2i-1},x_{2i}]=a,[x_{2i},x_{2i+1}]=b, \\
&[x_i,x_j]=1\text{ if }|i-j|>1,x_1^2=x_{2^n+1}^2=b, \\
&a^2=b^2=x_i^2=1\text{ if }1<i<2^n+1\rangle.
\end{align*}
We note the following known facts about the group $G_n$:
\begin{enumerate}
\item The order of $G_n$ is $2^{2^n+3}$, see \cite[Remark 4.2(1)]{Bor20a}.
\item The centre of $G_n$ is $\langle a,b\rangle$ and is of order $4$, see \cite[Remark 4.2(1)]{Bor20a}.
\item The group $\operatorname{Aut}_{\mathrm{cent}}(G_n)$ of central automorphisms of $G_n$ acts transitively on each nontrivial coset of $\zeta G_n$ in $G_n$, see \cite[beginning of the proof of Proposition 4.3]{Bor20a}.
\item $\operatorname{Aut}(G_n)=\operatorname{Aut}_{\mathrm{cent}}(G_n)\cup\operatorname{Aut}_{\mathrm{cent}}(G_n)\alpha_n$, where $\alpha_n$ is the automorphism of $G_n$ given by $a\mapsto a$, $b\mapsto b$, $x_i\mapsto x_i$ for $i\not=2^n$ and $x_{2^n}\mapsto x_{2^n}x_{2^n+1}$, see \cite[Remark 4.2(3) and Proposition 4.3]{Bor20a}.
\end{enumerate}
Now, consider the subgroup $H_n:=\langle x_{2^n}\rangle\cong\mathbb{Z}/2\mathbb{Z}$ of $G_n$, and view $G_n$ as a transitive permutation group via its action by right multiplication on the right cosets of $H_n$. Observe that
\[
H_n^{G_n}=\{H_n,\langle x_{2^n}a\rangle,\langle x_{2^n}b\rangle,\langle x_{2^n}ab\rangle\}=H_n^{\operatorname{Aut}_{\mathrm{cent}}(G_n)}.
\]
Therefore, using Lemma \ref{autPermLem} and fact (4) above, we find that $\operatorname{Aut}_{\mathrm{perm}}(G_n)=\operatorname{Aut}_{\mathrm{cent}}(G_n)$. In view of this and facts (2) and (3) from above, we conclude that $\operatorname{maol}_{\mathrm{perm}}(G_n)=4$. Since this holds for all $n\in\mathbb{N}^+$, the statement of Theorem \ref{mainTheo}(2) now follows from fact (1).
\section{Proofs of Theorems \ref{mainTheo}(3) and \ref{improvedBoundTheo}}\label{sec5}
We start with the proof of Theorem \ref{improvedBoundTheo}, which is easy modulo the following slightly weaker and reformulated version of Ledermann and Neumann's result \cite[Theorem 6.6]{LN56a}:
\begin{lemma}\label{lnLem}
Let $G$ be a finite group, and assume that $|\operatorname{Aut}(G)|\leqslant n$. Then $|G|\leqslant n^{n(1+\lfloor\log_2{n}\rfloor)}+1$.
\end{lemma}
\begin{proof}
Otherwise, we have
\[
|G|\geqslant n^{n(1+\lfloor\log_2{n}\rfloor)}+2\geqslant f(n+1),
\]
where $f$ is as in \cite[Theorem 6.6]{LN56a}. Hence by \cite[Theorem 6.6]{LN56a}, we have $|\operatorname{Aut}(G)|\geqslant n+1>n\geqslant|\operatorname{Aut}(G)|$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{improvedBoundTheo}]
Let $\vec{g}$ be an arbitrary but fixed generating $d$-tuple of $G$. Consider the orbit $\mathcal{O}:=\vec{g}^{\operatorname{Aut}(G)}$ of $\vec{g}$ under the (entry-wise) action of $\operatorname{Aut}(G)$. Combining the facts that the action of $\operatorname{Aut}(G)$ on the set of generating $d$-tuples of $G$ is semiregular and that all orbits of $\operatorname{Aut}(G)$ on $G$ are of length at most $c$, we find that
\[
|\operatorname{Aut}(G)|=|\mathcal{O}|\leqslant c^d.
\]
The result now follows from Lemma \ref{lnLem}, applied with $n:=c^d$.
\end{proof}
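The two facts combined in this proof, namely semiregularity of the $\operatorname{Aut}(G)$-action on generating $d$-tuples and the resulting bound $|\operatorname{Aut}(G)|\leqslant c^d$, can be illustrated on a small concrete example. The following sketch is our own illustration (with $G=\operatorname{Sym}(3)$ built by hand, so $d=2$), not taken from the source; it brute-forces $\operatorname{Aut}(G)$ and checks both facts:

```python
from itertools import permutations

# G = Sym(3), elements as permutations of {0,1,2}; (p*q)(x) = p[q[x]]
G = [tuple(p) for p in permutations(range(3))]
e = (0, 1, 2)

def mul(p, q):
    return tuple(p[q[x]] for x in range(3))

# Brute-force Aut(G): bijections G -> G preserving the multiplication table
auts = []
for img in permutations(G):
    f = dict(zip(G, img))
    if all(f[mul(a, b)] == mul(f[a], f[b]) for a in G for b in G):
        auts.append(f)

def generates(a, b):
    S = {e, a, b}
    while True:
        T = S | {mul(x, y) for x in S for y in S}
        if T == S:
            return len(S) == len(G)
        S = T

# Semiregularity: only the identity automorphism fixes a generating pair
for a in G:
    for b in G:
        if generates(a, b):
            assert sum(1 for f in auts if f[a] == a and f[b] == b) == 1

# Bound: |Aut(G)| <= c^d with d = 2 and c = max Aut-orbit length on G
c = max(len({f[g] for f in auts}) for g in G)
print(len(auts), c)  # 6 automorphisms, c = 3
assert len(auts) <= c**2
```

Here the maximum orbit length is attained on the three transpositions, so $c=3$ and $|\operatorname{Aut}(\operatorname{Sym}(3))|=6\leqslant 3^2$, as the proof predicts.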
Now we turn to the proof of Theorem \ref{mainTheo}(3). The meat of this proof lies in the following lemma:
\begin{lemma}\label{ledNeuLem}
Let $G$ be a finite transitive permutation group. Then, recalling the definition of the function $\mathfrak{f}$ from Notation \ref{mainNot}, we have
\[
|G|\leqslant\mathfrak{f}(d(G),|\operatorname{Aut}_{\mathrm{perm}}(G)|).
\]
\end{lemma}
As mentioned at the end of Section \ref{sec1}, our proof of Lemma \ref{ledNeuLem} is a modification of the short and self-contained proof due to Sambale \cite{Sam19a} that the order of an abstract finite group $G$ is bounded in terms of $|\operatorname{Aut}(G)|$. In contrast to Ledermann and Neumann's proof of their explicit result \cite[Theorem 6.6]{LN56a}, Sambale's argument first focuses on the commutator subgroup $G'$ and the abelianisation $G/G'$, and only turns to structural considerations concerning $\zeta G$ at its end. Unfortunately, it seems unclear whether Ledermann and Neumann's argument, which provides better explicit bounds than Sambale's proof, can be adapted to transitive permutation groups. Note that the main idea of Ledermann and Neumann's proof is to construct central automorphisms of $G$ by extending suitable automorphisms of $\zeta G$. More precisely, these automorphisms of $\zeta G$ must centralise $X\cap\zeta G$, where $X$ is a subgroup of $G$ mapping onto $G/\zeta G$ under the projection $G\rightarrow G/\zeta G$ and chosen such that $X\cap\zeta G$ is of bounded order. The additional condition that the constructed central automorphisms shall map the (core-free) point stabiliser $G_{\omega}$ to a conjugate translates into additional mapping conditions on their restrictions to $\zeta G$, which do not seem to be well-controlled.
For our proof of Lemma \ref{ledNeuLem}, we will need the following concepts, which are from \cite[Definitions 5.6 and 5.8]{Bor20a}:
\begin{definition}\label{standardDef}
Consider the following concepts.
\begin{enumerate}
\item Let $p$ be a prime, and let $P$ be a finite abelian $p$-group. For $n\in\mathbb{N}^+$ with $n\geqslant d(P)$, a \emph{length $n$ standard generating tuple of $P$} is an $n$-tuple $(x_1,\ldots,x_n)\in P^n$ such that $(x_1,\ldots,x_{d(P)})$ is a basis of $P$ (in the sense of Definition \ref{abelianBasisDef}) and $x_i=1_P$ for $i=d(P)+1,\ldots,n$.
\item Let $H$ be a finite abelian group. A \emph{standard generating tuple of $H$} is a $d(H)$-tuple $\vec{h}\in H^{d(H)}$ such that for each prime divisor $p$ of $|H|$, the entry-wise projection of $\vec{h}$ to the Sylow $p$-subgroup $H_p$ of $H$ is a length $d(H)$ standard generating tuple of $H_p$.
\item Let $G$ be a finite group. A \emph{standard tuple in $G$} is a $d(G/G')$-tuple with entries in $G$ and whose entry-wise image under the canonical projection $G\rightarrow G/G'$ is a standard generating tuple of $G/G'$.
\end{enumerate}
\end{definition}
\begin{definition}\label{pacDef}
Let $G$ be a finite group, let $n:=d(G/G')$, and let $\vec{g}=(g_1,\ldots,g_n)$ be a standard tuple in $G$.
\begin{enumerate}
\item The \emph{power-automorphism-commutator tuple associated with $\vec{g}$} is the $(2n+{n\choose 2})$-tuple
\[
(\pi_1,\ldots,\pi_n,\alpha_1,\ldots,\alpha_n,\gamma_{1,2},\gamma_{1,3},\ldots,\gamma_{1,n},\gamma_{2,3},\gamma_{2,4},\ldots,\gamma_{2,n},\ldots,\gamma_{n-1,n})
\]
with entries in $G'\cup\operatorname{Aut}(G')$ such that
\begin{itemize}
\item $\pi_i=g_i^{\operatorname{ord}_{G/G'}(g_iG')}\in G'$ for $i=1,\ldots,n$,
\item $\alpha_i\in\operatorname{Aut}(G')$ is the automorphism induced through conjugation by $g_i$ for $i=1,\ldots,n$, and
\item $\gamma_{i,j}=[g_i,g_j]\in G'$ for $1\leqslant i<j\leqslant n$.
\end{itemize}
\item Two standard tuples in $G$ are called \emph{equivalent} if and only if they have the same associated power-automorphism-commutator tuple.
\end{enumerate}
\end{definition}
We note that if $G$ is a finite group and $\vec{g},\vec{h}$ are equivalent standard tuples in $G$, then there is an $\alpha\in\operatorname{C}_{\operatorname{Aut}(G)}(G')$ such that $\vec{h}=(\vec{g})^{\alpha}$, see \cite[Remark 5.9]{Bor20a}. Another concept that will appear in the proof of Lemma \ref{ledNeuLem} is the following:
\begin{definition}\label{powerAutDef}
Let $G$ be a group. A \emph{power map automorphism of $G$} is an automorphism of $G$ of the form $g\mapsto g^e$ for all $g\in G$ and a fixed $e\in\mathbb{Z}$. The power map automorphisms of $G$ form a subgroup of $\operatorname{Aut}(G)$, denoted by $\operatorname{Aut}_{\mathrm{pow}}(G)$.
\end{definition}
Note that this concept is distinct from the more general one of a \emph{power automorphism of $G$}, i.e., an automorphism of $G$ that stabilises every subgroup of $G$. Moreover, observe that if $G$ is a finite abelian group, then since $G$ has a cyclic direct factor of order $\operatorname{Exp}(G)$, we have
\[
\operatorname{Aut}_{\mathrm{pow}}(G)\cong\operatorname{Aut}_{\mathrm{pow}}(\mathbb{Z}/\operatorname{Exp}(G)\mathbb{Z})=\operatorname{Aut}(\mathbb{Z}/\operatorname{Exp}(G)\mathbb{Z}).
\]
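The displayed isomorphism can be made concrete on a toy example. In this sketch (our own check, not from the source), we take $G=\mathbb{Z}/2\times\mathbb{Z}/4$ written additively, so $\operatorname{Exp}(G)=4$, and count the distinct bijective power maps $g\mapsto eg$; there are exactly $\phi(4)=2$ of them, matching $|\operatorname{Aut}(\mathbb{Z}/4\mathbb{Z})|$:

```python
from itertools import product

orders = (2, 4)                      # G = Z/2 x Z/4 (additive), Exp(G) = 4
G = list(product(range(orders[0]), range(orders[1])))

def power(g, e):
    """The e-th power map, written additively: g -> e*g."""
    return tuple((e * gi) % m for gi, m in zip(g, orders))

exp_G = 4
power_maps = set()
for e in range(1, 2 * exp_G + 1):    # exponents beyond Exp(G) repeat maps
    f = tuple(power(g, e) for g in G)
    if len(set(f)) == len(G):        # bijective, hence an automorphism
        power_maps.add(f)

print(len(power_maps))               # phi(Exp(G)) = phi(4) = 2
```

Only the exponents coprime to $\operatorname{Exp}(G)$ yield bijections, and two exponents give the same map exactly when they agree modulo $\operatorname{Exp}(G)$, which is the content of the displayed isomorphism.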
\begin{proof}[Proof of Lemma \ref{ledNeuLem}]
For notational simplicity, we will set $d:=d(G)\in\mathbb{N}$ and $n:=|\operatorname{Aut}_{\mathrm{perm}}(G)|\in\mathbb{N}^+$. Moreover, we fix a point $\omega\in\Omega$ and consider the associated point stabiliser $G_{\omega}\leqslant G$. We proceed in several steps.
First, we note that
\begin{equation}\label{gPrimeEq}
|G'|\leqslant n^{2n^3},
\end{equation}
which is the conclusion of \cite[Lemma 2]{Sam19a}. Reading through the proof of \cite[Lemma 2]{Sam19a}, we find that it only uses the assumption that $|\operatorname{Inn}(G)|\leqslant n$, which also holds in our case.
Next, we will show that each prime divisor $p$ of $|G|$ is at most $n+1$. We follow the proof of \cite[Lemma 3]{Sam19a}. If $|G/\zeta G|_p\not=1$, then $p\leqslant n$, since $\operatorname{Inn}(G)\leqslant\operatorname{Aut}_{\mathrm{perm}}(G)$. Otherwise, $|\zeta G|_p=|G|_p$ and $G=(\zeta G)_p\times Q$ by \cite[Theorem 3.3.1]{KS04a}, where $Q$ is a $p'$-group. Since $G_{\omega}$ is core-free in $G$, we have $G_{\omega}\cap(\zeta G)_p=\{1_G\}$, and thus $G_{\omega}\leqslant Q$ by the coprimality of $|(\zeta G)_p|$ and $|Q|$. Hence every automorphism of $(\zeta G)_p$ extends to an automorphism of $G$ in $\operatorname{Aut}_{\mathrm{perm}}(G)$, whence $n\geqslant|\operatorname{Aut}((\zeta G)_p)|\geqslant p-1$, as required.
Our next goal is to derive a certain upper bound on $\operatorname{Exp}(G/G')$. Observe that $\operatorname{Exp}(G/G')=\prod_p{\operatorname{Exp}((G/G')_p)}$, where $p$ ranges over the prime divisors of $|G:G'|$ and $(G/G')_p$ denotes the Sylow $p$-subgroup of $G/G'$. Since the number of factors in this product is at most $n+1$ by the previous paragraph, we find that for a suitable prime divisor $p$ of $|G:G'|$, we have $\operatorname{Exp}((G/G')_p)\geqslant\operatorname{Exp}(G/G')^{1/(n+1)}$. By the remark preceding this proof and the structure of automorphism groups of cyclic groups, this implies that $\operatorname{Aut}_{\mathrm{pow}}(G/G')$ contains an element $\beta$ of order at least
\[
\frac{1}{2}\phi(\operatorname{Exp}((G/G')_p)) \geqslant \frac{1}{4}\operatorname{Exp}((G/G')_p)^{1/2} \geqslant \frac{1}{4}\operatorname{Exp}(G/G')^{1/(2n+2)};
\]
for the lower bound on $\phi(\operatorname{Exp}((G/G')_p))$, see \cite[Lemma 2.4]{FS89a}. Fix a standard generating tuple $\vec{a}$ of $G/G'$ and consider the orbit
\[
\mathcal{O}:=(\vec{a})^{\langle \beta\rangle}.
\]
Since the action of $\operatorname{Aut}(G/G')$ on generating tuples is semiregular, we find that
\[
|\mathcal{O}|=\operatorname{ord}(\beta)\geqslant\frac{1}{4}\operatorname{Exp}(G/G')^{1/(2n+2)}.
\]
For each $k=0,1,\ldots,\operatorname{ord}(\beta)-1$, fix a lift $\vec{g}_k$ of $(\vec{a})^{\beta^k}$ in $G$ and collect those lifts in a set
\[
\mathcal{O}':=\{\vec{g}_k \mid k=0,1,\ldots,\operatorname{ord}(\beta)-1\}.
\]
By definition, $\mathcal{O}'$ is a set of standard tuples in $G$ of size
\[
|\mathcal{O}'|=|\mathcal{O}|=\operatorname{ord}(\beta)\geqslant\frac{1}{4}\operatorname{Exp}(G/G')^{1/(2n+2)}.
\]
By Formula (\ref{gPrimeEq}) and since a finite group of order $o$ has at most $o^{\log_2{o}}$ automorphisms (see e.g.~\cite[Lemma 5.5]{Bor20a}), the number of equivalence classes of standard tuples in $G$ is at most
\[
|G'|^d\cdot|\operatorname{Aut}(G')|^d\cdot|G'|^{{d\choose 2}} \leqslant n^{n^3d(1+d+4n^3\log_2{n})}.
\]
It follows that there is a nonempty subset $\mathcal{O}_{\ast}\subseteq\mathcal{O}'$ consisting of pairwise equivalent standard tuples in $G$ and such that
\[
|\mathcal{O}_{\ast}|\geqslant\frac{\operatorname{Exp}(G/G')^{1/(2n+2)}}{4n^{n^3d(1+d+4n^3\log_2{n})}}.
\]
Fix $\vec{g}\in\mathcal{O}_{\ast}$, and denote by $A$ the set of all $\alpha\in\operatorname{C}_{\operatorname{Aut}(G)}(G')$ such that $\vec{g}^{\alpha}\in\mathcal{O}_{\ast}$. Then
\[
|A|=|\mathcal{O}_{\ast}|\geqslant\frac{\operatorname{Exp}(G/G')^{1/(2n+2)}}{4n^{n^3d(1+d+4n^3\log_2{n})}},
\]
and the automorphisms in $A$ induce pairwise distinct automorphisms from $\operatorname{o}peratorname{l}angle\beta\rangle$ on $G/G'$. We claim that there is an $\alpha\in A$ such that the automorphism $\operatorname{o}peratorname{t}ilde{\alpha}$ of $G/G'$ induced by $\alpha$ satisfies
\[
\operatorname{o}peratorname{ord}(\operatorname{o}peratorname{t}ilde{\alpha})\geqslant|\mathcal{O}_{\ast}|^{1/2}\geqslant\operatorname{o}peratorname{f}rac{\operatorname{o}peratorname{E}xp(G/G')^{1/(4n+4)}}{2n^{\operatorname{o}peratorname{f}rac{1}{2}n^3d(1+d+4n^3\operatorname{o}peratorname{l}og_2{n})}}.
\]
Indeed, this is clear if $|\mathcal{O}_{\ast}|=1$ (where we may choose $\alpha=\operatorname{o}peratorname{id}_G$), so assume that $|\mathcal{O}_{\ast}|>1$. If $\operatorname{o}peratorname{ord}(\operatorname{o}peratorname{t}ilde{\alpha})<|\mathcal{O}_{\ast}|^{1/2}$ for all $\alpha\in A$, then since the cyclic group $\operatorname{o}peratorname{l}angle\beta\rangle$ has at most $k$ elements of order $k$ for each $k\in\mathbb{N}^+$, we find that
\[
|\mathcal{O}_{\ast}|>1+2+\cdots+\operatorname{o}peratorname{l}floor|\mathcal{O}_{\ast}|^{1/2}\rfloor\geqslant|A|=|\mathcal{O}_{\ast}|,
\]
a contradiction. We now know that there is an automorphism $\alpha\in\operatorname{o}peratorname{Aut}(G)$ with the following properties:
\begin{itemize}
\item $\alpha$ centralises $G'$.
\item The automorphism $\operatorname{o}peratorname{t}ilde{\alpha}$ of $G/G'$ induced by $\alpha$ is a power map automorphism of $G/G'$.
\item $\operatorname{o}peratorname{ord}(\alpha)\geqslant\operatorname{o}peratorname{ord}(\operatorname{o}peratorname{t}ilde{\alpha})\geqslant\operatorname{o}peratorname{f}rac{\operatorname{o}peratorname{E}xp(G/G')^{1/(4n+4)}}{2n^{\operatorname{o}peratorname{f}rac{1}{2}n^3d(1+d+4n^3\operatorname{o}peratorname{l}og_2{n})}}$.
\mathrm{e}nd{itemize}
Note that $\alpha$ does not necessarily stabilise $G_{\operatorname{o}mega}$. However, $\operatorname{o}peratorname{t}ilde{\alpha}$, being a power map automorphism of $G/G'$, stabilises the projection of $G_{\operatorname{o}mega}$ to $G/G'$. Therefore, each image of $G_{\operatorname{o}mega}$ under an iterate of $\alpha$ has a generating tuple of the form $(g_1c_1,\operatorname{o}peratorname{l}dots,g_tc_t,z_1,\operatorname{o}peratorname{l}dots,z_u)$ where
\begin{itemize}
\item $t$ is the minimum number of generators of the projection of $G_{\operatorname{o}mega}$ to $G/G'$,
\item $(g_1,\operatorname{o}peratorname{l}dots,g_t)$ is a fixed lift in $G$ of a standard generating tuple of the projection of $G_{\operatorname{o}mega}$ to $G/G'$,
\item $(z_1,\operatorname{o}peratorname{l}dots,z_u)$ is a fixed generating tuple of $G_{\operatorname{o}mega}\cap G'$, and
\item $(c_1,\operatorname{o}peratorname{l}dots,c_t)$ is a variable $t$-tuple of elements of $G'$.
\mathrm{e}nd{itemize}
It follows that the length $\operatorname{o}peratorname{el}l$ of the orbit of $G_{\operatorname{o}mega}$ under $\operatorname{o}peratorname{l}angle\alpha\rangle$ satisfies
\[
\operatorname{o}peratorname{el}l\operatorname{o}peratorname{l}eqslant|G'|^t\operatorname{o}peratorname{l}eqslant|G'|^d\operatorname{o}peratorname{l}eqslant n^{2n^3d}.
\]
Set $\gamma:=\alpha^{\operatorname{o}peratorname{el}l}$. Then $\gamma$ stabilises $G_{\operatorname{o}mega}$, whence $\gamma\in\operatorname{o}peratorname{Aut}_{\mathfrak{p}erm}(G)$, and
\[
n=|\operatorname{Aut}_{\mathrm{perm}}(G)|\geqslant\operatorname{ord}(\alpha^{\ell})\geqslant\frac{\operatorname{ord}(\alpha)}{\ell}\geqslant\frac{\operatorname{Exp}(G/G')^{1/(4n+4)}}{2n^{\frac{1}{2}n^3d(5+d+4n^3\log_2{n})}}.
\]
This implies that
\[
\operatorname{Exp}(G/G')\leqslant 16^{n+1}n^{2n^3d(5+d+4n^3\log_2{n})+4n+4},
\]
which is the desired upper bound on $\operatorname{Exp}(G/G')$.
We can now conclude the proof as follows: Note that $G/G'$ is abelian, and since $G$ is $d$-generated, so is $G/G'$. It follows that
\[
|G:G'|\leqslant\operatorname{Exp}(G/G')^d\leqslant 16^{(n+1)d}n^{2n^3d^2(5+d+4n^3\log_2{n})+4nd+4d},
\]
and thus, using Formula (\ref{gPrimeEq}),
\[
|G|=|G'|\cdot|G:G'|\leqslant n^{2n^3}\cdot 16^{(n+1)d}n^{2n^3d^2(5+d+4n^3\log_2{n})+4nd+4d}=\mathfrak{f}(d,n),
\]
as required.
\end{proof}
Now that we have verified Lemma \ref{ledNeuLem}, we are ready to prove Theorem \ref{mainTheo}(3).
\begin{proof}[Proof of Theorem \ref{mainTheo}(3)]
As in the proof of Theorem \ref{improvedBoundTheo}, we have $|\operatorname{Aut}_{\mathrm{perm}}(G)|\leqslant c^d$, and the result follows from Lemma \ref{ledNeuLem}, using the monotonicity of the function $\mathfrak{f}$ in its second variable.
\end{proof}
\section{Proof of Theorem \ref{mainTheo}(4)}\label{sec6}
The proof is by contradiction -- let $G\leqslant\operatorname{Sym}(\Omega)$ be an insoluble finite transitive permutation group, and assume that $\operatorname{maol}_{\mathrm{perm}}(G)\leqslant23$. In particular, the maximum conjugacy class length in $G$ is at most $23$. Therefore, the arguments from \cite[Section 6]{Bor20a} show that $\zeta G=\operatorname{Rad}(G)$, that $G'\cong\operatorname{Alt}(5)$ and that $G=\zeta G\times G'$. Fix a point $\omega\in\Omega$, and consider the point stabiliser $G_{\omega}\leqslant G$. Since $G_{\omega}$ is core-free in $G$, we have $G_{\omega}\cap\zeta G=\{1_G\}$, so that $G_{\omega}$ embeds into $G'\cong\operatorname{Alt}(5)$ via the canonical projection $G\rightarrow G/\zeta G$. Denote by $P$ the image of $G_{\omega}$ under the canonical projection $G\rightarrow G/G'$, which we may and will view as a subgroup of $\zeta G$. We now note three important facts:
\begin{enumerate}
\item $\operatorname{C}_{\operatorname{Aut}(\zeta G)}(P)$ is trivial. Assume otherwise. Note that $\operatorname{C}_{\operatorname{Aut}(\zeta G)}(P)\times\operatorname{Inn}(G')$ embeds naturally into $\operatorname{Aut}_{\mathrm{perm}}(G)$ via the injective group homomorphism
\begin{align*}
&\iota: (\alpha,\beta) \mapsto ((z,c) \mapsto (z^{\alpha},c^{\beta})) \\
&\text{ for all }\alpha\in\operatorname{C}_{\operatorname{Aut}(\zeta G)}(P),\beta\in\operatorname{Inn}(G'),z\in\zeta G,c\in G',
\end{align*}
which has the property that if $\beta$ is the conjugation by $c_0\in G'$, then $G_{\omega}^{\iota(\alpha,\beta)}=G_{\omega}^{c_0}$, a $G$-conjugate of $G_{\omega}$. Therefore, and since $G'\cong\operatorname{Alt}(5)$ has a conjugacy class of length $20$, it follows that $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant 2\cdot 20=40>23$, a contradiction.
\item $|P|>1$. Otherwise, we have $G_{\omega}\leqslant G'\cong\operatorname{Alt}(5)$, and since there is exactly one conjugacy class of subgroups of $G'$ that are isomorphic to $G_{\omega}$, we find that $\operatorname{Aut}(G')$ embeds naturally into $\operatorname{Aut}_{\mathrm{perm}}(G)$ by Lemma \ref{autPermLem}, whence $\operatorname{maol}_{\mathrm{perm}}(G)\geqslant\operatorname{maol}(G')=\operatorname{maol}(\operatorname{Alt}(5))=24>23$, a contradiction.
\item $P$ is a quotient of $G_{\omega}/G_{\omega}'$. This is clear since $P$ is by definition an abelian quotient of $G_{\omega}$.
\end{enumerate}
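The two numerical inputs about $\operatorname{Alt}(5)$ used in facts (1) and (2) -- a conjugacy class of length $20$, and a largest automorphism orbit of length $24$ -- can be sanity-checked computationally. The following SymPy sketch is illustrative only and not part of the proof; the value $12+12=24$ uses the standard fact that an outer automorphism of $\operatorname{Alt}(5)$ fuses its two classes of $5$-cycles.

```python
from sympy.combinatorics.named_groups import AlternatingGroup

# Conjugacy classes of Alt(5): identity (1), double transpositions (15),
# 3-cycles (20), and two classes of 5-cycles (12 each).
A5 = AlternatingGroup(5)
class_sizes = sorted(len(c) for c in A5.conjugacy_classes())

# Fact (1) uses the class of 3-cycles, of length 20.
longest_class = max(class_sizes)

# Fact (2) uses maol(Alt(5)) = 24: conjugation by a transposition of Sym(5)
# acts as an outer automorphism of Alt(5) and fuses the two classes of
# 5-cycles, so the largest Aut(Alt(5))-orbit has length 12 + 12 = 24.
maol_A5 = 12 + 12
```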
We will now go through the finitely many possible (abstract group) isomorphism types of $G_{\omega}$ and reduce the proof of Theorem \ref{mainTheo}(4) to checking a finite list of possibilities for $(G,G_{\omega})$, which are listed in Table \ref{23table} below.
\begin{enumerate}
\item Case: $G_{\omega}\cong\operatorname{Alt}(5)$ or $|G_{\omega}|=1$. Then $G_{\omega}/G_{\omega}'$ is trivial, whence $P$ is trivial by fact (3) above. However, this contradicts fact (2).
\item Case: $G_{\omega}\cong\operatorname{Alt}(4)$ or $G_{\omega}\cong\mathbb{Z}/3\mathbb{Z}$. Then $G_{\omega}/G_{\omega}'\cong\mathbb{Z}/3\mathbb{Z}$, whence $P\cong\mathbb{Z}/3\mathbb{Z}$ by facts (2) and (3) above. Using fact (1) and Lemma \ref{centAbelianLem}, it follows that $\zeta G$ is cyclic of order $3$ or $6$. Hence, up to abstract group isomorphism, the pair $(G,G_{\omega})$ is one of the possibilities listed in rows 1--4 of Table \ref{23table}.
\item Case: $G_{\omega}\cong\mathbb{Z}/5\mathbb{Z}$. Then $G_{\omega}/G_{\omega}'\cong\mathbb{Z}/5\mathbb{Z}$, whence $P\cong\mathbb{Z}/5\mathbb{Z}$ by facts (2) and (3) above. Using fact (1) and Lemma \ref{centAbelianLem}, it follows that $\zeta G$ is cyclic of order $5$ or $10$. Hence, up to abstract group isomorphism, the pair $(G,G_{\omega})$ is one of the possibilities listed in rows 5 and 6 of Table \ref{23table}.
\item Case: $G_\omega$ is isomorphic to one of $\operatorname{D}_{10}$, $\operatorname{Sym}(3)$ or $\mathbb{Z}/2\mathbb{Z}$. Then $G_{\omega}/G_{\omega}'\cong\mathbb{Z}/2\mathbb{Z}$, whence $P\cong\mathbb{Z}/2\mathbb{Z}$ by facts (2) and (3) above. Using fact (1) and Lemma \ref{centAbelianLem}, it follows that $\zeta G=P\cong\mathbb{Z}/2\mathbb{Z}$. Hence, up to abstract group isomorphism, the pair $(G,G_{\omega})$ is one of the possibilities listed in rows 7--9 of Table \ref{23table}.
\item Case: $G_\omega\cong(\mathbb{Z}/2\mathbb{Z})^2$. Then $G_{\omega}/G_{\omega}'\cong(\mathbb{Z}/2\mathbb{Z})^2$, whence by facts (2) and (3) above, either $P\cong\mathbb{Z}/2\mathbb{Z}$ or $P\cong(\mathbb{Z}/2\mathbb{Z})^2$. In either case, we have $\zeta G=P$ by fact (1) and Lemma \ref{centAbelianLem}. Therefore, up to abstract group isomorphism, the pair $(G,G_{\omega})$ is one of the possibilities listed in rows 10 and 11 of Table \ref{23table}.
\end{enumerate}
We now give Table \ref{23table}, which not only lists the $11$ remaining possibilities for $(G,G_{\omega})$ extracted from the above arguments, but also specifies the value of $\operatorname{maol}_{\mathrm{perm}}(G)$ in each case, which was computed using GAP \cite{GAP4}. Since $\operatorname{maol}_{\mathrm{perm}}(G)>23$ throughout, the proof of Theorem \ref{mainTheo}(4) is complete.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
row no. & $G$ & $G_{\omega}$ & $\operatorname{maol}_{\mathrm{perm}}(G)$ \\ \hline
1 & $\mathbb{Z}/3\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{1},(1,2,3))\rangle$ & $48$ \\ \hline
2 & $\mathbb{Z}/6\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{2},(1,2,3))\rangle$ & $48$ \\ \hline
3 & $\mathbb{Z}/3\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{0},(1,2)(3,4)),(\overline{1},(1,2,3))\rangle$ & $40$ \\ \hline
4 & $\mathbb{Z}/6\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{0},(1,2)(3,4)),(\overline{2},(1,2,3))\rangle$ & $40$ \\ \hline
5 & $\mathbb{Z}/5\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{1},(1,2,3,4,5))\rangle$ & $80$ \\ \hline
6 & $\mathbb{Z}/10\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{2},(1,2,3,4,5))\rangle$ & $80$ \\ \hline
7 & $\mathbb{Z}/2\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{1},(1,2)(3,4))\rangle$ & $24$ \\ \hline
8 & $\mathbb{Z}/2\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{0},(1,2,3)),(\overline{1},(2,3)(4,5))\rangle$ & $24$ \\ \hline
9 & $\mathbb{Z}/2\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{0},(1,2,3,4,5)),(\overline{1},(2,5)(3,4))\rangle$ & $24$ \\ \hline
10 & $\mathbb{Z}/2\mathbb{Z}\times\operatorname{Alt}(5)$ & $\langle(\overline{0},(1,2)(3,4)),(\overline{1},(1,3)(2,4))\rangle$ & $24$ \\ \hline
11 & $(\mathbb{Z}/2\mathbb{Z})^2\times\operatorname{Alt}(5)$ & $\langle((\overline{1},\overline{0}),(1,2)(3,4)),((\overline{0},\overline{1}),(1,3)(2,4))\rangle$ & $72$ \\ \hline
\end{tabular}
\end{center}
\caption{The remaining possibilities for $(G,G_{\omega})$}
\label{23table}
\end{table}
\section{Concluding remarks}\label{sec7}
We conclude this paper with some related open questions for further research. Firstly, as mentioned in Section \ref{sec5}, our Lemma \ref{ledNeuLem} is a partial generalisation (from finite abstract groups to finite transitive permutation groups) of a celebrated theorem of Ledermann and Neumann, \cite[Theorem 6.6]{LN56a}. The following question asks whether Ledermann and Neumann's theorem can be extended to transitive permutation groups in its full strength:
\begin{question}\label{ledNeuQues}
Is there an (explicit) function $f:\mathbb{N}^+\rightarrow\mathbb{N}^+$ such that for every finite transitive permutation group $G$, one has $|G|\leqslant f(|\operatorname{Aut}_{\mathrm{perm}}(G)|)$?
\end{question}
Another interesting question is whether Robinson and Wiegold's structural characterisation \cite[Theorem 1]{RW84a} can be extended to transitive permutation groups:
\begin{question}\label{robWieQues}
Let $G$ be a (not necessarily finite) transitive permutation group. Is it true that the following are equivalent?
\begin{enumerate}
\item The supremum of the $\operatorname{Aut}_{\mathrm{perm}}(G)$-orbit sizes on $G$ is finite.
\item The torsion subgroup $T$ of $\zeta G$ is finite and $\operatorname{Aut}_{\mathrm{perm}}(G)$ induces a finite group of automorphisms in $G/T$.
\end{enumerate}
\end{question}
It would also be interesting to investigate to what extent Theorem \ref{mainTheo} can be generalised to arbitrary (not necessarily transitive) permutation groups $G$ of finite degree. For example, even under the assumption that $\operatorname{maol}_{\mathrm{perm}}(G)=1$, there are infinitely many such $G$ up to permutation group isomorphism (any finite-degree permutation group $G$ of order $2$ is an example), but the following is still an interesting question:
\begin{question}\label{intransQues}
Can the finite-degree permutation groups $G$ with $\operatorname{maol}_{\mathrm{perm}}(G)\leqslant 3$ be classified, and are they of bounded order?
\end{question}
We finish with the following question about extending the results of \cite{Bor20a} to another natural setting.
\begin{question}\label{GLQues}
Let $G\leqslant \operatorname{GL}(d,q)$ and let $\operatorname{Aut}_{\mathrm{linear}}(G)$ be the subgroup of $\operatorname{Aut}(G)$ induced by the conjugation action of $\operatorname{N}_{\operatorname{GL}(d,q)}(G)$ on $G$. Is it possible to classify all groups $G$ for which all orbits of $\operatorname{Aut}_{\mathrm{linear}}(G)$ have length at most three?
\end{question}
\begin{thebibliography}{99}
\bibitem{BD16a}
R.~Bastos and A.C.~Dantas,
On Finite Groups with Few Automorphism Orbits,
\emph{Comm.~Algebra} \textbf{44}(7):2953--2958, 2016.
\bibitem{BDM20a}
R.~Bastos, A.C.~Dantas and E.~de Melo,
Soluble groups with few orbits under automorphisms,
to appear in \emph{Geom.~Dedicata}, digital version available under \url{https://link.springer.com/article/10.1007/s10711-020-00525-7}.
\bibitem{BW79a}
N.L.~Biggs and A.T.~White,
\emph{Permutation Groups and Combinatorial Structures},
Cambridge University Press (London Mathematical Society Lecture Note Series, 33), Cambridge, 1979 (reprinted 2008).
\bibitem{Bor19a}
A.~Bors,
Finite groups with a large automorphism orbit,
\emph{J.~Algebra} \textbf{521}:331--364, 2019.
\bibitem{Bor20a}
A.~Bors,
Finite groups with only small automorphism orbits,
to appear in \emph{J.~Group Theory}, digital version available under \url{https://doi.org/10.1515/jgth-2019-0152}.
\bibitem{But93a}
G.~Butler,
The transitive groups of degree fourteen and fifteen,
\emph{J.~Symbolic Comput.} \textbf{16}:413--422, 1993.
\bibitem{BK83a}
G.~Butler and J.~McKay,
The transitive groups of degree up to $11$,
\emph{Comm.~Algebra} \textbf{11}:863--911, 1983.
\bibitem{CP93a}
P.J.~Cameron and C.E.~Praeger,
Block-transitive $t$-designs. I. Point-imprimitive designs,
\emph{Discrete Math.} \textbf{118}:33--43, 1993.
\bibitem{CP93b}
P.J.~Cameron and C.E.~Praeger,
Block-transitive $t$-designs. II. Large $t$,
in: F.~De Clerck et al.~(eds.), \emph{Finite geometry and combinatorics},
Cambridge University Press (London Mathematical Society Lecture Note Series, 191), Cambridge, 1993.
\bibitem{CH08a}
J.J.~Cannon and D.F.~Holt,
The transitive permutation groups of degree $32$,
\emph{Experiment.~Math.} \textbf{17}:307--314, 2008.
\bibitem{DGB17a}
A.C.~Dantas, M.~Garonzi and R.~Bastos,
Finite groups with six or seven automorphism orbits,
\emph{J.~Group Theory} \textbf{20}(5):945--954, 2017.
\bibitem{DM96a}
J.D.~Dixon and B.~Mortimer,
\emph{Permutation Groups},
Springer (Graduate Texts in Mathematics, 163), New York, 1996.
\bibitem{FS89a}
W.~Feit and G.M.~Seitz,
On finite rational groups and related topics,
\emph{Illinois J.~Math.} \textbf{33}(1):103--131, 1989.
\bibitem{GAP4}
The GAP~Group,
\emph{GAP. Groups, Algorithms, and Programming},
version 4.11.0, 2020, \url{https://www.gap-system.org}.
\bibitem{HR19a}
D.~Holt and G.~Royle,
A census of small transitive groups and vertex-transitive graphs,
to appear in \emph{J.~Symbolic Comput.}, a digital version is available under \url{https://www.sciencedirect.com/science/article/abs/pii/S0747717119300598}.
\bibitem{Hub09a}
M.~Huber,
\emph{Flag-transitive Steiner designs},
Birkh{\"a}user (Frontiers in Mathematics), Basel, 2009.
\bibitem{Hul05a}
A.~Hulpke,
Constructing transitive permutation groups,
\emph{J.~Symbolic Comput.} \textbf{39}:1--30, 2005.
\bibitem{Hul08}
A.~Hulpke, Normalizer calculation using automorphisms. In: L.-C.~Kappe et al.~(eds.), \emph{Computational group theory and the theory of groups}, Amer.~Math.~Soc. (Contemp.~Math., 470), Providence, 2008, pp.~105--114.
\bibitem{TransGrp}
A.~Hulpke,
\emph{GAP package TransGrp. Transitive Groups Library},
version 2.0.5, 2020, \url{https://www.gap-system.org/Packages/transgrp.html}.
\bibitem{KS04a}
H.~Kurzweil and B.~Stellmacher,
\emph{The Theory of Finite Groups. An Introduction},
Springer (Universitext), New York, 2004.
\bibitem{LM86a}
T.J.~Laffey and D.~MacHale,
Automorphism orbits of finite groups,
\emph{J.~Austral.~Math.~Soc.~(Ser.~A)} \textbf{40}:253--260, 1986.
\bibitem{LN56a}
W.~Ledermann and B.H.~Neumann,
On the order of the automorphism group of a finite group. I,
\emph{Proc.~Roy.~Soc.~London Ser.~A} \textbf{233}:494--506, 1956.
\bibitem{LP97a}
C.H.~Li and C.E.~Praeger,
Finite groups in which any two elements of the same order are either fused or inverse-fused,
\emph{Comm.~Algebra} \textbf{25}(10):3081--3118, 1997.
\bibitem{Neu54a}
B.H.~Neumann,
Groups covered by permutable subsets,
\emph{J.~London Math.~Soc.} \textbf{29}:235--248, 1954.
\bibitem{RW84a}
D.J.S.~Robinson and J.~Wiegold,
Groups with boundedly finite automorphism classes,
\emph{Rend.~Sem.~Mat.~Univ.~Padova} \textbf{71}:273--286, 1984.
\bibitem{Roy87a}
G.F.~Royle,
The transitive groups of degree twelve,
\emph{J.~Symbolic Comput.} \textbf{4}:255--268, 1987.
\bibitem{Sam19a}
B.~Sambale,
On a theorem of Ledermann and Neumann,
preprint (2019), \url{https://arxiv.org/abs/1909.13220}.
\bibitem{Stro02a}
M.~Stroppel,
Locally compact groups with few orbits under automorphisms,
\emph{Top.~Proc.} \textbf{26}(2):819--842, 2002.
\bibitem{Tha03a}
K.~Thas,
Finite flag-transitive projective planes: a survey and some remarks,
\emph{Discrete Math.} \textbf{266}:417--429, 2003.
\bibitem{Zha92a}
J.~Zhang,
On Finite Groups All of Whose Elements of the Same Order Are Conjugate in Their Automorphism Groups,
\emph{J.~Algebra} \textbf{153}:22--36, 1992.
\end{thebibliography}
\end{document}
\begin{document}
\title{Lyapunov spectrum of a relativistic stochastic flow in the Poincar\'e group.}
\author{Camille Tardif \footnote{\texttt{[email protected]} Universit\'e du Luxembourg.} \\ }
\maketitle
\abstract{ We determine the Lyapunov spectrum and stable manifolds of some stochastic flows on the Poincar\'e group associated to Dudley's relativistic processes.}
{ \bf Key words: } Relativistic processes. L\'evy processes in Lie groups. Poincar\'e group. Lyapunov spectrum. Hyperbolic dynamics.
{\bf AMS Subject Classification:} 37H15, 60G51, 83A05
\tableofcontents
\section{Introduction}
In 1966 Dudley \cite{Dud66} defined a class of relativistic processes with Lorentzian-covariant dynamics in the framework of special relativity. Such a process $\xi_t$ with values in Minkowski space-time $\ensuremath{\mathbb{R}}^{1,d}$ is differentiable and has velocity smaller than the speed of light. It can therefore be parametrized by its proper time, which amounts to requiring the velocity $\dot{\xi}_t$ to be an element of the unit pseudo sphere $\ensuremath{\mathbb{H}}^d$ of $\ensuremath{\mathbb{R}}^{1,d}$. The restriction to the tangent space of $\ensuremath{\mathbb{H}}^d$ of the ambient Minkowski pseudo-metric turns $\ensuremath{\mathbb{H}}^d$ into a Riemannian manifold of constant negative curvature. The invariance of the process $(\dot{\xi}_t, \xi_t)$ under the natural action of the Lorentz transforms on $\ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$ forces the law of $\dot{\xi}_t$ to be invariant under the action of the isometries of $\ensuremath{\mathbb{H}}^d$. Among this class of relativistic processes, there is essentially only one which is continuous. It corresponds to the case where $\dot{\xi}_t$ is a Riemannian Brownian motion in the hyperbolic space, and in this case $(\dot{\xi}_t, \xi_t)$ is called \emph{Dudley diffusion}. Forty years after this seminal work, Franchi and Le Jan \cite{FLJ07} extended Dudley diffusion to the framework of an arbitrary Lorentz manifold. They defined relativistic processes with Lorentzian-covariant dynamics on generic Lorentzian manifolds by rolling a Dudley diffusion without slipping on the unit tangent space, and studied the asymptotic behavior of such diffusions in the Schwarzschild space-time. Bailleul \cite{Bail08} succeeded in computing the Poisson boundary of Dudley diffusion in Minkowski space-time and showed that it coincides with the causal boundary of $\ensuremath{\mathbb{R}}^{1,d}$.
The asymptotic behavior of relativistic diffusions was investigated in other non-flat Lorentzian manifolds (\cite{Angst09}, \cite{Franchi09}), with the aim of describing how the asymptotic behavior of the diffusion reflects the asymptotic geometry of the manifold.
In this work we ask a new question about these processes, concerning the asymptotic behaviour of an associated stochastic flow. As for Brownian motion on a Riemannian manifold, the relativistic diffusion \cite{FLJ07} is obtained by projecting a diffusion process with values in the orthonormal frame bundle, the solution of a stochastic differential equation. This SDE generates a stochastic flow which, in our Lorentzian framework, consists of a stochastic perturbation of the geodesic flow. Existence and computation, for example, of the Lyapunov spectrum and stable manifolds of these flows may be investigated in the same way as was done by Carverhill and Elworthy \cite{Carv/Elw} for the canonical stochastic flow in the Riemannian framework. The main difficulty in studying the flow of relativistic processes comes from the fact that the orthonormal frame bundle of a Lorentz manifold is never compact. Nevertheless, in this article we provide a study of the asymptotic dynamics of the stochastic flow generated by Dudley processes in Minkowski space-time (without restricting ourselves to the diffusion case). Precisely, in this framework the orthonormal frame bundle is identified with the Poincar\'e group $\widetilde{G}:= PSO(1,d) \ltimes \ensuremath{\mathbb{R}}^{1,d}$, and denoting by $\varphi_t$ the left invariant stochastic flow associated to one of Dudley's processes in $\widetilde{G}$, we describe the Lyapunov spectrum and the stable manifolds of $\varphi_t$. We obtain the following two results.
\begin{thms}[Lyapunov spectrum]\label{Lyap}
There exist a constant $\alpha >0 $ and two asymptotic random Lie sub-algebras $V_\infty^- \subset V_\infty^0$ of $\mathrm{Lie}(\widetilde{G})$ such that, for some norm $\Vert \cdot \Vert$ on $\mathrm{Lie}( \widetilde{G} )$, for every $\widetilde{X} \in \mathrm{Lie}( \widetilde{G} )$ and almost every trajectory,
\[
\frac{1}{t} \log \Vert d \varphi_t (\mathrm{Id})( \widetilde{X}) \Vert_{\varphi_t(\mathrm{Id})} \underset{t \to +\infty}{\longrightarrow} \left \{ \begin{matrix} \alpha & \mathrm{if} & \widetilde{X} \in \mathrm{Lie}( \widetilde{G} ) \setminus V_\infty^0 \\ 0 & \mathrm{if} & \widetilde{X} \in V_\infty^0 \setminus V_\infty^- \\ -\alpha & \mathrm{if} & \widetilde{X} \in V_\infty^- \setminus \{ 0 \} \end{matrix}\right.
\]
\end{thms}
\begin{thms}[Stable manifolds]\label{stable}
Denote by $\mathcal{V}_{\infty}^{-}:=\exp ( V_\infty^-)$, and by $d$ the distance associated to a left invariant and $\mathrm{Ad}(SO(d))$-invariant Riemannian metric on $\widetilde{G}$. Then for any two distinct points $\tilde{g}'$ and $\tilde{g}$ in $\widetilde{G}$ we have
\begin{itemize}
\item If $\tilde{g}' \in \tilde{g} \mathcal{V}_{\infty}^{-}$ then
\[
\frac{1}{t} \log d \left ( \varphi_t (\tilde{g}) , \varphi_t (\tilde{g}') \right ) \underset{ t \to + \infty }{\longrightarrow} -\alpha.
\]
\item If $\tilde{g}' \notin \tilde{g} \mathcal{V}_{\infty}^{-}$ then
\[
\liminf_{t \to \infty} d\left ( \varphi_t (\tilde{g}) , \varphi_t (\tilde{g}') \right ) >0.
\]
\end{itemize}
\end{thms}
We begin by constructing, in Section \ref{Dudproc}, Dudley processes as projections of left L\'evy processes on the Poincar\'e group $\widetilde{G}$, identified with the orthonormal frame bundle of Minkowski space-time. These L\'evy processes are solutions of stochastic integral equations and induce a left invariant stochastic flow $\varphi_t$ in $\widetilde{G}$. In Section \ref{asympt} we determine the asymptotic behavior of Dudley processes and exhibit the asymptotic random variables $(\theta_\infty, \lambda_\infty) \in \mathbb{S}^{d-1} \times \ensuremath{\mathbb{R}}^{*}_{+}$. Finally, in Section \ref{LyapSpec} we prove Theorems \ref{Lyap} and \ref{stable} and make explicit the projection of the stable manifolds in $\ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$, by showing that it corresponds to a skew product of a horosphere by a line.
Note that stochastic flows generated by L\'evy processes on semi-simple Lie groups were studied intensively by Liao (\cite{Liao02}, \cite{Liao01}, \cite{Liao04}). However, his results cannot be used directly here, since our L\'evy processes live in the Poincar\'e group, which is not semi-simple. Moreover, in our work we only suppose that the L\'evy measure is integrable at infinity, whereas Liao \cite{Liao04} requires the whole measure to be integrable.
Our work is also strongly inspired by the work of Bailleul and Raugi \cite{Bail/Raug}, where the authors used Raugi's methods \cite{Raugi77} to find the Poisson boundary of Dudley diffusion.
\section{Dudley processes and their lift in the Poincar\'e group}\label{Dudproc}
We present in this section the geometrical framework of special relativity and define a natural class of relativistic Markov processes with Lorentzian-covariant dynamics introduced by Dudley in \cite{Dud66}. They are obtained by projecting left L\'evy processes with values in the Poincar\'e group and are described by two parameters: a diffusion coefficient $\sigma \in \ensuremath{\mathbb{R}}$ and a jump intensity L\'evy measure $\nu$ on $\ensuremath{\mathbb{R}}^*_+$.
\subsection{Minkowski space-time and Poincar\'e group} The Minkowski space-time $\ensuremath{\mathbb{R}}^{1,d}$ is $\ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}^d$ endowed with the Lorentz quadratic form $q$ defined by
\[
\forall \xi= (\xi^0, \xi^1, \dots, \xi^d) \in \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}^d , \quad \quad q(\xi) = \left (\xi^{0} \right )^{2} - \left (\xi^{1} \right )^{2} - \cdots - \left (\xi^{d} \right )^{2}.
\]
We denote by $\vec{\xi}:= ( \xi^1, \dots, \xi^d )^t$ the space component of $\xi$.
Set
\[
Q = \mathrm{Diag} (1, -1, \dots, -1 )
\]
the matrix of $q$ in the canonical basis $(e_0, e_1, \dots, e_d)$. Time orientation is given by the constant vector field $e_0$ and some $\xi \in \ensuremath{\mathbb{R}}^{1,d}$ is said to be future oriented when $q( \xi , e_0) >0 $. A path $\gamma_s$ in $\ensuremath{\mathbb{R}}^{1,d}$ is said to be \emph{time-like} when it is differentiable almost everywhere and $q(\dot{\gamma}_s) >0 $ and $q(\dot{\gamma}_s, e_0) >0$.
The Poincar\'e group is the group of affine $q$-isometries which preserve orientation and time-orientation. It is the semi-direct product connected group \[ \widetilde{G} := PSO(1,d) \ltimes \ensuremath{\mathbb{R}}^{1,d} \] where $G:=PSO(1,d)$ denotes the group of linear $q$-isometries which preserve orientation and time-orientation. An element $\tilde{g}= (g, \xi) \in \widetilde{G}$ is made up of its linear part $g \in G$ and its translation part $\xi$. We identify $G$ with the sub-group of $\widetilde{G}$ which fixes $0$. In this way, we identify $\ensuremath{\mathbb{R}}^{1,d}$ with the homogeneous space $\widetilde{G}/G$. The identity element of $\widetilde{G}$ and $G$ is denoted by $\mathrm{Id}$ (thus for us $\mathrm{Id}= (\mathrm{Id},0)$). To $\tilde{g} =( g, \xi ) \in \widetilde{G}$ we associate the affine frame $ \left ( (g(e_0), g(e_1), \dots, g(e_d)) ; \xi \right ) $ of $\ensuremath{\mathbb{R}}^{1,d}$, and $\widetilde{G}$ is identified with the bundle of $q$-orthonormal, oriented and time-oriented, frames over $\ensuremath{\mathbb{R}}^{1,d}$. We denote by
\[
\begin{matrix}
\tilde{\pi}: & \widetilde{G} & \longrightarrow & \ensuremath{\mathbb{R}}^{1,d} \\
& \tilde{g}=(g, \xi) & \longmapsto & \xi
\end{matrix}
\]
the projection associated to this trivial fibration, so that $G =\tilde{\pi}^{-1} \{ 0 \}$. The canonical basis being fixed, we identify $G$ with the matrix group
\[
G= \left \{ g \in \mathrm{SL}(\ensuremath{\mathbb{R}}^{d+1} ), \ g Q g^t = Q , \ q(g(e_0), e_0 ) >0 \right \},
\]
and its Lie algebra is
\begin{align*}
\mathrm{Lie}(G) &= \left \{ X \in \mathcal{M}_{d+1}(\ensuremath{\mathbb{R}}), \quad X Q + Q X^t = 0 \right \} \\
&= \left \{ \left ( \begin{matrix} 0 & b^t \\ b & C \end{matrix} \right ), \ \ b \in \ensuremath{\mathbb{R}}^{d} , \ C \in \mathcal{M}_{d}(\ensuremath{\mathbb{R}}) \ \text{s.t.} \ C=-C^t \right \}.
\end{align*}
We have $\mathrm{Lie} (\widetilde{G}) = \mathrm{Lie}(G) \times \ensuremath{\mathbb{R}}^{1,d}$ and for $\widetilde{X}= (X, x) , \widetilde{Y}=(Y,y) \in \mathrm{Lie} (\widetilde{G}) $
\[
[\widetilde{X}, \widetilde{Y}] = ( [X,Y], Xy-Yx).
\]
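The displayed bracket can be checked numerically by realising $(X,x)\in\mathrm{Lie}(\widetilde{G})$ as the affine block matrix with top-left block $X$ and last column $x$. The following NumPy sketch is illustrative only; since the identity is purely linear-algebraic, generic random matrices are used rather than genuine elements of $\mathrm{Lie}(G)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def affine(X, x):
    """Embed (X, x) as the (d+2)x(d+2) block matrix [[X, x], [0, 0]]."""
    M = np.zeros((d + 2, d + 2))
    M[:d + 1, :d + 1] = X
    M[:d + 1, d + 1] = x
    return M

# Generic matrices suffice: the bracket identity is linear-algebraic.
X, Y = rng.standard_normal((2, d + 1, d + 1))
x, y = rng.standard_normal((2, d + 1))

# Matrix commutator of the embeddings ...
lhs = affine(X, x) @ affine(Y, y) - affine(Y, y) @ affine(X, x)
# ... versus the stated bracket ([X, Y], Xy - Yx).
rhs = affine(X @ Y - Y @ X, X @ y - Y @ x)
err = np.max(np.abs(lhs - rhs))
```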
We identify $\mathrm{Lie}(G)$ with $\mathrm{Ker}(d_{\mathrm{Id}} \tilde{\pi} )$ and its elements are vertical for the fibration $\tilde{\pi}$.
We set
\begin{align*}
V_i & := e_0 e_i^{t} + e_i e_0^{t} \quad i=1, \dots, d \\
V_{ij} & := [V_i, V_j]= e_i e_j^{t} -e_j e_i^{t} \quad j>i.
\end{align*}
Moreover we set
\[
H_0:= (0, e_0 ) \in \mathrm{Lie}(\widetilde{G})
\]
which is horizontal for the fibration $\tilde{\pi}$.
{\bf Notation} For $\widetilde{X} \in \mathrm{Lie}(\widetilde{G})$ we denote by $\widetilde{X}^{l}$ the associated left invariant vector field on $\widetilde{G}$.
Denote by $K$ the subgroup of $G$ made of the rotations of $\ensuremath{\mathbb{R}}^d$. We have
\[
K := \left \{ \left ( \begin{matrix} 1 & 0 \\ 0 & k \end{matrix} \right ), k \in SO(d) \right \},
\]
and $K$ is also the stabilizer of $e_0$ under the action of $G$ on $\ensuremath{\mathbb{R}}^{1,d}$. The homogeneous space $G/K$ can be identified with the orbit of $e_0$ under the action of $G$ which is the unit pseudo sphere $\ensuremath{\mathbb{H}}^d := \{ \xi \in \ensuremath{\mathbb{R}}^{1,d}, \ q(\xi)=1, \xi^0 >0 \}$ and is a Riemannian manifold of constant negative curvature when its tangent space is endowed with the restriction of $q$ on it.
For $r \in \ensuremath{\mathbb{R}}^{+}$ and $ \theta \in \mathbb{S}^{d-1} \subset \ensuremath{\mathbb{R}}^{d}$, define
\[
S(r, \theta):= \exp \left ( r \sum_{i=1}^d \theta^i V_i \right ) = \left ( \begin{matrix} \cosh(r) & \sinh(r) \theta^{t} \\ \sinh(r) \theta & \mathrm{Id} + ( \cosh(r) -1 ) \theta \theta^{t} \end{matrix} \right ).
\]
Each $g \in G$ can be decomposed in \emph{polar form} $g= S(r(g), \theta(g) )R$ where $R\in K$.
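As a quick numerical sanity check of the polar form, one can verify that $S(r,\theta)$ indeed lies in $G$: it preserves $Q$ and maps $e_0$ into the future unit pseudo-sphere $\ensuremath{\mathbb{H}}^d$. The following sketch (the helper name \texttt{S\_boost} and the test values of $d$, $r$, $\theta$ are ours, purely illustrative) checks this in floating-point arithmetic.

```python
import numpy as np

def S_boost(r, theta):
    """Block form of S(r, theta) = exp(r * sum_i theta^i V_i), theta in S^{d-1}."""
    d = len(theta)
    theta = np.asarray(theta, dtype=float)
    M = np.zeros((d + 1, d + 1))
    M[0, 0] = np.cosh(r)
    M[0, 1:] = np.sinh(r) * theta
    M[1:, 0] = np.sinh(r) * theta
    M[1:, 1:] = np.eye(d) + (np.cosh(r) - 1.0) * np.outer(theta, theta)
    return M

d = 3
Q = np.diag([1.0] + [-1.0] * d)      # matrix of the quadratic form q
theta = np.array([0.6, 0.8, 0.0])    # a unit vector of S^{d-1}
g = S_boost(1.2, theta)
e0 = np.eye(d + 1)[0]
xi = g @ e0                          # image of e_0: a point of H^d
```

The assertions $gQg^t=Q$, $q(\xi)=1$ and $\xi^0>0$ hold up to rounding error.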
\subsection{Dudley processes}
In this paragraph we define the relativistic processes introduced by Dudley in \cite{Dud66}. These processes enjoy two natural properties:
\begin{itemize}
\item they are $\widetilde{G}$-invariant, i.e.\ their dynamics are invariant under a change of $q$-orthonormal frame;
\item their trajectories in $\ensuremath{\mathbb{R}}^{1,d}$ are time-like: they are almost everywhere differentiable, and the tangent vectors are time-like and time-oriented.
\end{itemize}
First remark that no Markov process with values in $\ensuremath{\mathbb{R}}^{1,d}$ is $\tilde{G}$-invariant. Indeed, the law at some time $t>0$ of such a process starting at $0$ would be a $G$-invariant probability measure on $\ensuremath{\mathbb{R}}^{1,d}$, which is necessarily trivial by the following lemma.
\begin{lem}
The only $G$-invariant probability measure on $\ensuremath{\mathbb{R}}^{1,d}$ is the Dirac measure at $0$.
\end{lem}
\begin{proof}
Let $\mu$ be a $G$-invariant probability measure on $\ensuremath{\mathbb{R}}^{1,d}$. First suppose that the support of $\mu$ is not contained in the $q$-orthogonal hyperplane of some light-like line $\{ u (e_0 + \hat{\theta}), u \in \ensuremath{\mathbb{R}} \}$ (where $\hat{\theta}:= \sum_{i=1}^d \hat{\theta}^i e_i \in \mathbb{S}^{d-1} $). Then there exists a compact set $C$ in the complement of this hyperplane such that $\mu( C) >0$. For $g= S(r, \hat{\theta})$, $r>0$ and $\xi= (\xi^0, \vec{\xi} ) \in \ensuremath{\mathbb{R}}^{1,d}$, denoting by $\Vert \cdot \Vert$ the Euclidean norm in $\ensuremath{\mathbb{R}}^{1,d}$, we have
\begin{align*}
\Vert g(\xi) \Vert^2&= 2q( g(\xi), e_0)^2 -q(g(\xi) )= 2 q(\xi, g^{-1}(e_0) )^2 -q(\xi)
= 2 \left ( \cosh(r) \xi^0 - \sinh(r) \hat{\theta} \cdot \vec{\xi} \right )^2 - q(\xi) \\
&= 2 \left ( \cosh(r) ( \xi^0 - \hat{\theta} \cdot \vec{\xi} ) + e^{-r} \hat{\theta} \cdot \vec{\xi} \right )^2 -q(\xi) \\
&= 2 \left ( \cosh(r) q( \xi, e_0 + \hat{\theta} ) + e^{-r} \hat{\theta} \cdot \vec{\xi} \right )^2 -q(\xi).
\end{align*}
Since $C$ is a compact set in the complement of the hyperplane $q$-orthogonal to $\{ u (e_0 + \hat{\theta}), u \in \ensuremath{\mathbb{R}} \}$, we have $\inf_ {\xi \in C} \vert q( \xi, e_0 + \hat{\theta} ) \vert > 0$ and thus $r$ can be chosen such that $\inf_{\xi \in C } \Vert g(\xi) \Vert $ is arbitrarily large. Thus $g$ can be chosen such that $C$ and $g(C)$ are disjoint. Furthermore $q( g(\xi), e_0 + \hat{\theta} )= q (\xi , g^{-1}(e_0 + \hat{\theta} ) )= e^r q(\xi, e_0 + \hat{\theta})$, so $g(C)$ belongs to the complement of the hyperplane $q$-orthogonal to $e_0 + \hat{\theta}$. By iteration we can find a sequence $g_k = S(r_k, \hat{\theta})$, $k \in \ensuremath{\mathbb{N}}$, such that the compact sets $g_k (C)$ are pairwise disjoint. We then obtain a contradiction by writing $1 \geq \mu ( \cup_k g_k(C)) = \sum_k \mu ( g_k (C) ) = \sum_k \mu(C) = + \infty$.
Now if the support of $\mu$ is contained in some hyperplane tangent to the light-cone and is not reduced to $\{0\}$, we can find a compact set $C$ in this hyperplane with $0 \notin C$ and $\mu(C) >0$, and we can choose a rotation $R \in K$ such that $R(C)$ is not contained in the hyperplane. Thus $\mu(C)= \mu(R(C) )= 0$, a contradiction. So we have proved that $\mu$ is necessarily the Dirac measure at $0$.
\end{proof}
Thus $\ensuremath{\mathbb{R}}^{1,d}= \widetilde{G}/G$ cannot be the state space of a non-trivial $\widetilde{G}$-invariant Markov process. But $\widetilde{G}$-homogeneous spaces of the form $\widetilde{G}/ \widetilde{K}$, where $\widetilde{K}$ is a compact subgroup of $\widetilde{G}$, carry a $\widetilde{K}$-invariant probability measure and may be endowed with $\widetilde{G}$-invariant Markov processes (see \cite{Liao04} or \cite{Hunt56}). The smallest state spaces we can consider correspond to the maximal compact subgroups of $\widetilde{G}$. Thus it is natural to consider the state space $\widetilde{G}/K \simeq \ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$. Here $K$ is seen as the subgroup of $\widetilde{G}$ which stabilizes $0$ and $e_0$ under the action of $\widetilde{G}$ on $\ensuremath{\mathbb{R}}^{1,d}$.
We denote by $\pi : \widetilde{G} \longrightarrow \ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d} \simeq \widetilde{G}/K$ the canonical projection \[\forall \tilde{g}= (g, \xi) \in \widetilde{G} \quad \pi(\tilde{g})=(g(e_0), \xi ). \]
The following proposition exhibits all the relativistic processes in $\ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$. It is essentially an application of a result of Liao (Theorems 2.1 and 2.2, p.~42 in \cite{Liao04}).
\begin{prop}\label{defiDudley}
The Markov processes on $\ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$, starting at $(\zeta_0, \xi_0)$, which are $\widetilde{G}$-invariant and whose trajectories are time-like are of the form $(\zeta_s, \xi_s)$ where $ \zeta_s$ is a $G$-invariant Markov process on $\ensuremath{\mathbb{H}}^{d}$ and $\xi_t = \xi_0 + a\int_{0}^{t} \zeta_s ds $, $a$ being some positive constant. For such a process there exist $\sigma \geq 0$ and a measure $\nu$ on $\ensuremath{\mathbb{R}}^+$ satisfying
\[
\int_0^{+ \infty} \min(1,r^2) \nu(dr) < + \infty,
\]
such that $(\zeta_t, \xi_t) = \pi (\tilde{g}_t )$ in law, where $\tilde{g}_t$ is a left L\'evy process on $\widetilde{G}$ starting at some $\tilde{g}_0$ with $(\zeta_0, \xi_0) = \pi (\tilde{g}_0 )$, whose generator $\tilde{\ensuremath{\mathcal{L}}}$ is defined by
\begin{align*}
\forall f \in C^2( \widetilde{G} ) \quad \widetilde{\ensuremath{\mathcal{L}}} f (\tilde{g}) = a H_0^{l}f(\tilde{g}) &+ \frac{\sigma^2}{2} \sum_{i=1}^{d} (V_{i}^l)^2 f (\tilde{g}) \\
&+ \int_0^{+\infty } \int_{\mathbb{S}^{d-1}} \left ( f(\tilde{g}S(r, \theta) ) -f(\tilde{g}) - r \mathbf{1}_{r \in [0,1] }\sum_{i=1}^d \theta^{i} V_i^l f(\tilde{g}) \right ) \nu(dr) d \theta.
\end{align*}
\end{prop}
\begin{defi}[Dudley processes]
When $a=1$ then $\dot{\xi}_t = \zeta_t$ and $\xi_t$ is parametrized by its \emph{proper time}, i.e.\
$
q( \dot{\xi}_t )=1.
$
When moreover $\sigma$ or $\nu$ is non-trivial we call $(\dot{\xi}_t , \xi_t)$ a Dudley process, and we consider exclusively these processes in the sequel. When $\nu =0$, $(\dot{\xi}_t, \xi_t)$ is continuous and is called a \emph{Dudley diffusion}.
\end{defi}
\begin{rmq}
The process $\xi_t$ is differentiable and $\dot{\xi}_t$ is c\`adl\`ag.
\end{rmq}
\begin{proof}[Proof of Proposition \ref{defiDudley}]
Let $(\zeta_t, \xi_t)$ be a Markov (Feller) process on $\ensuremath{\mathbb{H}}^{d} \times \ensuremath{\mathbb{R}}^{1,d}$ starting at $(\zeta_0, \xi_0)$, which is $\widetilde{G}$-invariant and whose trajectories are time-like. By choosing $\tilde{g}_0$ such that $(\zeta_0, \xi_0) = \pi (\tilde{g}_0 )$ and considering the Markov process $\tilde{g}_0^{-1} (\zeta_t, \xi_t)$, it suffices to prove the proposition in the case where $(\zeta_0, \xi_0)= (e_0, 0)$.
Set \[ H_i := ( 0, e_i ) \in \mathrm{Lie}(\widetilde{G}), \quad i=1, \dots, d.\] The family $\{ H_0, H_i, V_i, V_{ij} \}_{i<j \in \{1, \dots, d \}}$ forms an orthonormal basis of $\mathrm{Lie}(\widetilde{G})$ for an $Ad(K)$-invariant inner product on $\mathrm{Lie}( \widetilde{G} )$, and $\mathrm{Ker} \ d_{\mathrm{Id}} \pi = \mathrm{Vect}\{ V_{ij} \}_{i<j \in \{1, \dots, d\} }$.
We set \[ X_0 := H_0 \quad X_i := H_i \quad X_{d+i}:= V_i \quad i=1, \dots, d. \] Any $\tilde{h}=(h, \xi) \in \tilde{G}$ can be decomposed as $ \tilde{h}= \exp \left ( \sum_{i=0}^d x^{i}(\tilde{h}) X_i \right ) \exp \left ( \sum_{i= d+1}^{2d} x^{i}(\tilde{h}) X_i \right )R$ where $R \in K$, $x^{0}(\tilde{h})= \xi^{0}$ and, for $i=1, \dots, d$, $x^{i}(\tilde{h}) = \xi^{i}$ and $x^{d+i}(\tilde{h})= r(h)\theta^{i}(h)$. By Theorems 2.1 and 2.2, p.~42 of \cite{Liao04}, $(\zeta_t, \xi_t)$ coincides in law with $\pi(\tilde{g}_t)$ where $\tilde{g}_t$ is a left L\'evy process in $\tilde{G}$, starting at $\mathrm{Id}$, which is right $K$-invariant and generated by
\[
\widetilde{\ensuremath{\mathcal{L}}} f(\tilde{g}) = \frac{1}{2}\sum_{i, j =0}^{2d } a_{ij} X_i^{l} X_j^{l}f(\tilde{g}) + \sum_{i=0}^{2d} b^{i} X_i^{l}f(\tilde{g}) + \int_{\widetilde{G}} \left ( f(\tilde{g} \tilde{h}) -f(\tilde{g}) - \sum_{i=0}^{2d} \mathbf{1}_{\{ r(h) \leq 1 ,\ \xi^{i}(h) \leq 1 \}}x^{i}(\tilde{h}) X_{i}^{l}f(\tilde{g}) \right ) \widetilde{\Pi}(d \tilde{h}).
\]
The matrix $A:=(a_{ij})$ is symmetric positive, $(b^{i})_i \in \ensuremath{\mathbb{R}}^{2d+1}$, and $\widetilde{\Pi}$ is a L\'evy measure invariant by $K$-conjugation in $\widetilde{G}$. The right $K$-invariance of $\tilde{g}_t$ ensures that
for all $k \in K \simeq SO(d)$, $\mathrm{diag}(1,k, k) A \mathrm{diag}(1,k, k)^{-1} = A$. Thus $A$ is necessarily of the form
\[
\left (
\begin{matrix}
\hat{\sigma} & 0 & 0 \\
0 & \tilde{\sigma} \mathrm{Id}& \sigma' \mathrm{Id} \\
0 & \sigma' \mathrm{Id}& \sigma \mathrm{Id}
\end{matrix} \right ),
\]
where $\hat{\sigma}, \tilde{\sigma} \geq 0$ and $\tilde{\sigma} \sigma \geq (\sigma ')^2$. Moreover, using again the $K$-invariance, it follows that $b^i =0$ for $i=1, \dots, 2d$, and we set $a:= b^{0}$. The trajectories of $\tilde{g}_t$ projected in $\ensuremath{\mathbb{R}}^{1,d}$ need to be differentiable, so the jump measure $\Pi$ is supported on $G$ and $\hat{\sigma}$ and $\tilde{\sigma}$ are necessarily null (hence $\sigma'=0$). Since the trajectories are time-oriented we have $a >0$. The push-forward of $\widetilde{\Pi}$ by $\pi$ is supported on $G/K \simeq \ensuremath{\mathbb{H}}^{d}$ and is $K$-invariant. Thus $\Pi$ can be chosen of the form
\[
\forall f \in C_0(G) \quad \Pi f = \int_0^{+\infty} \int_{\mathbb{S}^{d-1}} f(S(r, \theta)) \nu(dr) d\theta
\]
where $\nu$ is a L\'evy measure on $\ensuremath{\mathbb{R}}^{*}_{+}$ (i.e.\ satisfying $\int \min (1,r^2) \nu(dr) <+\infty$).
\end{proof}
Denote by $g_t$ the $G$-component of $\tilde{g}_t$. Then $g_t (e_0) = \dot{\xi}_t$ and $ \xi_t = \int_0^t g_s (e_0) ds $. By definition $g_t$ is a $G$-valued left L\'evy process, right $K$-invariant, generated by $\ensuremath{\mathcal{L}}$ defined by
\[
\forall f \in C^2(G) \quad \ensuremath{\mathcal{L}} f(g) = \frac{\sigma^2}{2} \sum_{i=1}^{d} (V_{i}^l)^2 f (g) + \int_0^{+\infty } \int_{\mathbb{S}^{d-1}} \left ( f(gS(r, \theta) ) -f(g) - r \mathbf{1}_{r \in [0,1] }\sum_{i=1}^d \theta^{i} V_i^l f(g) \right ) \nu(dr) d \theta.
\]
Denote by $\Pi$ the L\'evy measure supported on $G$ defined by
\[
\Pi f = \int_0^{+\infty} \int_{\mathbb{S}^{d-1}} f(S(r, \theta) ) \nu(dr) d\theta.
\]
Define $U_0 := \{ g \in G, r(g) \leq 1\}$, which is a $K$-invariant neighborhood of $\mathrm{Id}$ in $G$. For $f \in C^2(G)$ we have the following It\^o formula (see \cite{App/Kun}) for $g_t$
\begin{align} \label{Ito} \notag
f(g_t)& = f(\mathrm{Id}) + \sigma \sum_{i=1}^d\int_0^t V_i^{l}f(g_{s^{-}}) dB^{i}_s + \frac{\sigma^2}{2} \int_0^t \sum_{i=1}^{d} (V_i^{l})^2f(g_{s^{-}}) ds +\int_0^t \int_{U_0} \left ( f(g_{s^{-}}h) -f(g_{s^{-}}) \right ) \tilde{N}(ds,dh) \\
&+ \int_0^t \int_{U_0} \left ( f(g_{s^{-}}h) -f(g_{s^{-}}) -r(h) \sum_{i=1}^{d} \theta^{i}(h) V_i^l f(g_{s^{-}}) \right ) ds \Pi(dh) \\
& + \int_0^t \int_{(U_0)^c} \left ( f(g_{s^{-}}h) -f(g_{s^{-}}) \right ) N(ds,dh), \notag
\end{align}
where $B_t$ is a Brownian motion in $\ensuremath{\mathbb{R}}^d$, $N$ is a Poisson random measure on $\ensuremath{\mathbb{R}} \times G$ with intensity measure $dt \otimes \Pi$, and $\tilde{N}(ds, dh) := N(ds, dh) - ds \Pi(dh) $ is the associated compensated random measure.
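To see the dynamics at work, one can simulate the Dudley diffusion ($\nu = 0$) by composing exact left group increments $g \mapsto g\exp\big(\sigma \sum_i \Delta B^i V_i\big)$, the exponential being given in closed form by $S(r,\theta)$. This is only an illustrative Euler-type sketch under our own choices of helper names, seed and step size (none of them from the text); since each increment lies exactly in $G$, the simulated velocity $\zeta_t = g_t(e_0)$ stays on $\ensuremath{\mathbb{H}}^d$ up to rounding, and $\xi_t = \int_0^t \zeta_s\,ds$ is future time-like.

```python
import numpy as np

rng = np.random.default_rng(0)

def boost(x):
    """Closed-form exp(sum_i x^i V_i) = S(||x||, x/||x||) for x in R^d."""
    d = len(x)
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.eye(d + 1)
    th = np.asarray(x, dtype=float) / r
    M = np.zeros((d + 1, d + 1))
    M[0, 0] = np.cosh(r)
    M[0, 1:] = np.sinh(r) * th
    M[1:, 0] = np.sinh(r) * th
    M[1:, 1:] = np.eye(d) + (np.cosh(r) - 1.0) * np.outer(th, th)
    return M

def simulate(d=2, sigma=1.0, T=1.0, n=2000):
    """Euler scheme with exact group increments for the Dudley diffusion (nu = 0)."""
    dt = T / n
    g = np.eye(d + 1)
    e0 = np.eye(d + 1)[0]
    xi = np.zeros(d + 1)
    for _ in range(n):
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        g = g @ boost(sigma * dB)   # left increment g_{t+dt} = g_t exp(sigma dB.V)
        xi = xi + (g @ e0) * dt     # proper-time parametrization: xi' = zeta
    return g @ e0, xi

zeta_T, xi_T = simulate()
Q3 = np.diag([1.0, -1.0, -1.0])     # quadratic form q for d = 2
```

One can then check numerically that $q(\zeta_T)=1$, $\zeta_T^0>0$ and $q(\xi_T)>0$.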
\section{Asymptotic random variables}\label{asympt}
In this section we determine the asymptotic behavior of $\pi(\tilde{g}_t)=(g_{t} (e_0), \xi_t)$ under an integrability condition on the jump intensity measure $\nu$ (Assumption \ref{integrability}). Writing $g_t=n_t a_t k_t$ in some Iwasawa decomposition of $G$, we first prove (Proposition \ref{alpha}), applying the It\^o formula \eqref{Ito} and the law of large numbers, that the abelian term $a_t = \exp( \alpha_t V_1) $ is \emph{positively contracting}: $\frac{\alpha_t}{t}$ converges almost surely to a positive constant $\alpha$ depending explicitly on $\sigma$ and $\nu$. Next we prove (Proposition \ref{convht}) that the nilpotent term $n_t$ converges almost surely to an asymptotic random variable $n_\infty$, and that this convergence is exponentially fast with rate $\alpha$. Then we investigate (Proposition \ref{xit}) the asymptotic behavior of $\xi_t$ in $\ensuremath{\mathbb{R}}^{1,d}$. Geometrically, seen in the projective space, the $\ensuremath{\mathbb{H}}^d$-valued process $g_t(e_0)$ converges to a limit angle $\theta_\infty \in \partial \ensuremath{\mathbb{H}}^d \simeq \mathbb{S}^{d-1}$, of which $n_\infty$ is a stereographic projection. Moreover, the process $\xi_t$ is asymptotic to some affine hyperplane $q$-orthogonal to $\theta_\infty$, whose position is fixed by another asymptotic random variable $\lambda_\infty \in \ensuremath{\mathbb{R}}^{*}_{+}$. Figure \ref{fig} sums up the asymptotic results.
\begin{figure}
\caption{Asymptotic behavior of a Dudley diffusion }
\label{fig}
\end{figure}
\subsection{Iwasawa decomposition in $G$}
Although a polar decomposition of $G$ was used to introduce $g_t$ (defining the $K$-invariant measure $\Pi$), the Iwasawa decomposition is better adapted to describe its asymptotic dynamics. Let us briefly introduce this decomposition.
The maximal abelian subalgebras contained in $\mathrm{Vect}\{ V_1, \dots, V_d \}$, the orthogonal complement of $\mathcal{K}$ in $\mathrm{Lie}(G)$ for the Killing form, are of dimension one. Let us choose one of them, $\mathcal{A} := \mathrm{Vect} \{ V_1 \}$. The linear endomorphism $\mathrm{ad}(V_1)$ of $\mathrm{Lie}(G)$ is diagonalisable with eigenvalues $-1, 0$ and $1$. Set
\[
\mathcal{N}= \left \{ X \in \mathrm{Lie}(G), \ \mathrm{ad}(V_1) X= -X \right \} \quad \widebar{\mathcal{N}}=\left \{ X \in \mathrm{Lie}(G), \ \mathrm{ad}(V_1) X= X \right \}
\]
the eigenspaces corresponding respectively to the eigenvalues $-1$ and $1$. Explicitly
\[
\mathcal{N}=\mathrm{Vect}\{ V_i - V_{1i} , \ \ i=2, \dots, d \} \quad \mathrm{and} \ \ \widebar{\mathcal{N}}= \mathrm{Vect}\{ V_i + V_{1i} , \ \ i=2, \dots, d \}.
\]
The eigenspace corresponding to $0$ is $\mathcal{A}\oplus \mathcal{M}$ where $\mathcal{M}$ is the subalgebra of elements of $\mathcal{K}$ which commute with the elements of $\mathcal{A}$. Explicitly
\[
\mathcal{M}= \mathrm{Vect}\{ V_{ij} ,\ \ i, j =2\dots d \} =\left \{ \left ( \begin{matrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & C \end{matrix} \right ), \ \ C \in \mathrm{so}(d-1) \right \}.
\]
The subspace $\mathcal{N}$ is a nilpotent Lie algebra (even abelian since $[ \mathcal{N}, \mathcal{N}]=0$). The corresponding Iwasawa decomposition of $\mathrm{Lie}(G)$ is
\[
\mathrm{Lie}(G)= \mathcal{N}\oplus \mathcal{A} \oplus \mathcal{K}.
\]
For $X \in \mathrm{Lie}(G)$ we denote by $\{ X \}_{\mathcal{N}}$ (resp. $\{ X \}_{\mathcal{A}}$ and $\{ X \}_{\mathcal{K}}$) its projection in $\mathcal{N}$ (resp. $\mathcal{A}$ and $\mathcal{K}$ ) thus $X= \{ X \}_{\mathcal{N}}+\{ X \}_{\mathcal{A}}+\{ X \}_{\mathcal{K}}$.
Denoting by $A:=\exp ( \mathcal{A})$ and $N:= \exp(\mathcal{N})$ the corresponding subgroups, we obtain the corresponding Iwasawa decomposition of $G$
\[
G=NAK.
\]
Moreover, the mapping from $N \times A \times K$ to $G$ which maps $(n,a,k)$ to $nak$ is an analytic diffeomorphism. For $g\in G$ we denote by $g=(g)_{N} (g)_A (g)_K$ its decomposition in Iwasawa coordinates. To simplify notation set $n_t:=(g_t)_{N}$, $a_t:=(g_t)_A$ and $k_t:=(g_t)_K$, so that $g_t=n_t a_t k_t$.
Note that we have other Iwasawa decompositions, such as $\mathrm{Lie}(G) = \widebar{\mathcal{N}}\oplus \mathcal{A} \oplus \mathcal{K}$ (with $G= \widebar{N}AK$ where $\widebar{N}:= \exp(\widebar{\mathcal{N}})$).
\noindent\textbf{Iwasawa and polar coordinates}. \\
\noindent For $g \in G$ written in polar form $g = \exp \left (r \sum_{i=1}^{d} \theta^{i} V_i \right )R$ where $R\in K $, we have $ q( e_0 , g(e_0) )= \cosh(r)$.
Now denoting by $b \in \ensuremath{\mathbb{R}}^{d-1}$ and $u \in \ensuremath{\mathbb{R}}$ the coordinates such that $(g)_N = \exp\left ( \sum_{i=2}^{d} b^i (V_i -V_{1i} ) \right )$ and $(g)_A= \exp \left ( u V_1\right )$, we can compute explicitly $q( e_0 , g(e_0) )$ in terms of $b$ and $u$ and we obtain
\begin{align}
\cosh(r) = \left ( 1+ \frac{\Vert b \Vert^2}{2} \right ) \cosh (u ) + \frac{\Vert b \Vert^2}{2} \sinh( u ). \label{link}
\end{align}
Moreover $u$ can be expressed in term of $\theta$ and $r$ via
\begin{align}
e^{u} =\cosh(r) + \theta^1 \sinh(r). \label{link2}
\end{align}
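Relations \eqref{link} and \eqref{link2} can be checked numerically. Given $(r,\theta)$, one computes $u$ from \eqref{link2}; taking $\Vert b\Vert^2 = \sinh^2(r)\,(1-(\theta^1)^2)\,e^{-2u}$ (a closed form for the nilpotent coordinate that we derived ourselves and which should be treated as an assumption, not a formula from the text), relation \eqref{link} is then recovered to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 1.3
theta = rng.normal(size=4)
theta /= np.linalg.norm(theta)      # a random point of S^{d-1}, here d = 4

# (link2): abelian Iwasawa coordinate u of S(r, theta)
u = np.log(np.cosh(r) + theta[0] * np.sinh(r))
# assumed closed form for ||b||^2 (our derivation, not stated in the text):
b2 = np.sinh(r) ** 2 * (1.0 - theta[0] ** 2) * np.exp(-2.0 * u)
# (link): cosh(r) = (1 + ||b||^2/2) cosh(u) + (||b||^2/2) sinh(u)
lhs = np.cosh(r)
rhs = (1.0 + b2 / 2.0) * np.cosh(u) + (b2 / 2.0) * np.sinh(u)
```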
\subsection{Asymptotic behavior of the $G$-component}
The aim of this section is to show that $n_t$ converges almost surely to an asymptotic $N$-valued random variable $n_{\infty}$ and that the convergence is exponentially fast. This result, stated in Proposition \ref{convht}, appears to be a consequence of the contracting property of $a_t$. The group $G$ is semisimple and the tools used in this section are very close to those of Liao \cite{Liao04}. Nevertheless we present a self-contained proof in our specific framework, and our results are established under a weaker assumption than the ones of \cite{Liao04} (see Remark \ref{rmqinteg}). Namely, we suppose that the following integrability condition is satisfied.
\begin{ass}\label{integrability}
\[
\int_1^{+\infty} r \nu (dr) < +\infty.
\]
\end{ass}
The following proposition computes explicitly the linear drift of $a_t$, which turns out to be positive. The proof is essentially a consequence of the law of large numbers.
\begin{prop}\label{alpha}
Denote by $\alpha_t$ the $\ensuremath{\mathbb{R}}$-valued process such that $a_t= \exp ( \alpha_t V_1)$. Then the following convergence holds almost surely
\[
\frac{\alpha_t}{t} \underset{t \to \infty}{\longrightarrow} \alpha >0.
\]
The positive constant $\alpha$ is
\[
\alpha := \frac{d-1}{2} \sigma^2 + \int_0^{+\infty} \frac{r \cosh(r) -\sinh(r)}{\sinh(r)} \nu(dr).
\]
\end{prop}
\begin{proof}
First define $\log : A \to \mathcal{A} \simeq \ensuremath{\mathbb{R}}$, $\exp(u V_1) \mapsto u V_1$, and apply the It\^o formula \eqref{Ito} to the smooth map $f: g \mapsto \log (g)_A$. Remark that for $g, h \in G$, $(gh)_A = (g)_A \left ( (g)_K h \right )_A$ and
\[
f(g_{s-}h)-f(g_{s-})=\log \left ( k_{s-}h \right )_{A}.
\]
Moreover
\begin{align*}
V_{i}^{l}f(g_{s-})&= \left. \frac{d}{dt} \log (g_{s-} e^{tV_{i}})_{A} \right \vert_{t=0} = \left. \frac{d}{dt} \log \left (e^{tAd(k_{s-})V_{i}} \right )_{A} \right \vert_{t=0} = \{ Ad(k_{s-})V_{i} \}_{\mathcal{A}} \\
(V_{i}^{l})^{2}f(g_{s-})&= \left. \frac{d}{dt} \{ Ad\left (g_{s-}e^{tV_{i}} \right )_{K}V_{i} \}_{\mathcal{A}} \right \vert_{t=0}= \left. \frac{d}{dt} \{ Ad\left (e^{tAd(k_{s-})V_{i}} \right )_{K}Ad(k_{s-})V_{i} \}_{\mathcal{A}} \right \vert_{t=0} \\&= \left \{\left [ \left\{ Ad(k_{s-})V_{i} \right\}_{\mathcal{K}}, Ad(k_{s-})V_{i} \right ] \right \}_{\mathcal{A}}.
\end{align*}
For $k\in K$ we have $Ad(k)V_{i}= \sum_{j=1}^{d} k_{ij}V_{j}$ where $k=(k_{ij}) \in K \simeq SO(d)$. Then we compute
\begin{align*}
\sum_{i=1}^{d} \left \{\left [ \left\{ Ad(k_{s-})V_{i} \right\}_{\mathcal{K}}, Ad(k_{s-})V_{i} \right ] \right \}_{\mathcal{A}} &=\sum_{i=1}^{d} \sum_{j=1}^{d} \sum_{l=1}^{d} (k_{s-})_{ij}(k_{s-})_{il} \left \{ \left [ \{ V_{j} \}_{\mathcal{K}}, V_{l} \right ] \right \}_{\mathcal{A}},
\end{align*}
since $\{V_{j} \}_{\mathcal{K}}= V_{1j}$ (and 0 for $j=1$ ) and $[V_{1j},V_{l}]=V_{1}\delta_{jl}$ we get
\begin{align*}
\sum_{i=1}^{d} \left \{\left [ \left\{ Ad(k_{s-})V_{i} \right\}_{\mathcal{K}}, Ad(k_{s-})V_{i} \right ] \right \}_{\mathcal{A}} = \sum_{j=2}^{d} \sum_{i=1}^{d} ((k_{s-})_{ij})^{2} V_{1}= (d-1)V_{1}.
\end{align*}
Thus the It\^o formula \eqref{Ito} can be written
\begin{align*}
\log(a_{t})&= M_{t} + \frac{d-1}{2}\sigma^{2}t + \int_{0}^{t} \int_{U_{0}^{c}} \log (k_{s-}h)_{A} N(ds,dh) \\& + \int_{0}^{t} \int_{U_{0}} \left ( \log(k_{s-}h)_{A} -r(h)\sum_{i=1}^{d} \theta^{i}(h) \{ Ad(k_{s-})V_{i} \}_{\mathcal{A}} \right ) ds \Pi(dh),
\end{align*}
where $M_{t}:= \sigma \sum_{i=1}^{d}\int_{0}^{t} \{ Ad(k_{s-})V_{i} \}_{\mathcal{A}}dB^{i}_{s} + \int_{0}^{t}\int_{U_{0}} \log(k_{s-}h)_{A} \tilde{N}(ds,dh) $ is a martingale. Its bracket is $ \sigma^{2} \int_{0}^{t} \sum_{i=1}^{d}\Vert \{ Ad(k_{s-})V_{i} \}_{\mathcal{A}} \Vert^{2} ds + \int_{0}^{t} \int_{U_{0}} \Vert \log(k_{s-}h)_{A}\Vert^{2}ds\Pi(dh)= t\left (\sigma^{2} + \int_{U_{0}} \Vert \log (h)_{A} \Vert^{2}\Pi(dh) \right )$, and thus we obtain that almost surely
\[
\frac{M_{t}}{t} \underset{t \to +\infty}{\longrightarrow} 0.
\]
Moreover, making the change of variable $h'= k_{s-} h k_{s-}^{-1}$, we obtain using the invariance of $\Pi$ under $K$-conjugation
\begin{align*}
\int_{0}^{t} \int_{U_{0}} & \left ( \log(k_{s-}h)_{A} -r(h)\sum_{i=1}^{d} \theta^{i}(h) \{ Ad(k_{s-})V_{i} \}_{\mathcal{A}} \right ) ds \Pi(dh)\\ & = \int_{0}^{t} \int_{U_{0}} \log (h')_{A} -r(h') \sum_{i=1}^{d} \theta^{i}(k_{s-}^{-1}h') \{Ad(k_{s-})V_{i}\}_{\mathcal{A}} \ \Pi(dh') ds \\
&= \int_{0}^{t} \int_{U_{0}} \log (h')_{A} -r(h') \sum_{i=1}^{d} \sum_{j=1}^{d} (k_{s-})_{ij} \theta^j(h') (k_{s-})_{i1}V_{1} \ \Pi(dh') ds\\
&= \int_{0}^{t} \int_{U_{0}} \log (h')_{A} -r(h') \theta^{1}(h') V_{1} \ \Pi(dh') ds \\
&=t \int_{\mathbb{S}^{d-1}} \int_{0}^{1} \log (S(r,\theta))_{A} -r\theta^{1}V_1 \ \nu(dr) d\theta
\end{align*}
By \eqref{link2} we have $\log (S(r,\theta))_{A} = \log ( \cosh(r) + \theta^{1} \sinh(r) ) V_{1} $ and the previous term equals
\begin{align*}
t \int_{\mathbb{S}^{d-1}} \int_{0}^{1} \log ( \cosh(r) + \theta^{1} \sinh(r) ) -r\theta^{1}\ \nu(dr) d\theta V_1 &= t \int_0^1 \left ( \int_{-1}^{1} \log \left ( \cosh(r) + u \sinh(r) \right ) -r u \frac{du}{2} \right ) \nu(dr) V_1 \\
&= t \int_0^1 \frac{r \cosh(r) - \sinh(r) }{\sinh(r)} \nu(dr) V_1.
\end{align*}
It remains to consider the asymptotic behavior of the stochastic integral $\int_{0}^{t}\int_{U_{0}^{c}} \log(k_{s-}h)_{A} N(ds,dh)$. We know that there exist jump times $(T_n)_n$ of a Poisson process with intensity $\Pi(U_{0}^{c})$ and i.i.d.\ random variables $h_n$ with law $\Pi \vert_{U_{0}^c}/\Pi(U_{0}^{c}) $, independent of $(T_n)_n$, such that
\[
\int_{0}^{t}\int_{U_{0}^{c}} \log(k_{s-}h)_{A} N(ds,dh) = \sum_{n, T_{n }\leq t} \log(k_{T_{n}^{-}} h_{n})_{A}.
\]
Moreover, by invariance of $\Pi$ under conjugation by $K$, one checks easily that the random variables $h'_{n}:=Ad(k_{T_{n}^{-}}) h_{n}$ are i.i.d.\ with common law $\Pi \vert_{U_{0}^c}/\Pi(U_{0}^{c}) $, and we have
\[
\int_{0}^{t}\int_{U_{0}^{c}} \log(k_{s-}h)_{A} N(ds,dh) = \sum_{n=1}^{N_{t}} \log (h'_{n})_{A},
\]
where $N_{t}$ is a Poisson process with intensity $\Pi(U_{0}^{c})$, independent of $(h'_n)_n$. Moreover, since $\int_{1}^{+\infty} r\nu(dr) < +\infty$, $\log(h'_{n})_{A}$ is integrable and the law of large numbers ensures that
\[
\frac{1}{t} \int_{0}^{t} \int_{U_{0}^{c}} \log (k_{s-}h)_{A} N(ds,dh) \underset{t\to+\infty}{\longrightarrow} \Pi(U_{0}^{c}) \mathbb{E}[ \log(h'_{n})_{A} ] .
\]
The proof is completed by checking that
\[
\mathbb{E}[ \log(h'_{n})_{A} ] = \frac{1}{\Pi(U_{0}^{c})}\int_{1}^{+\infty}\frac{r \cosh(r) -\sinh(r)}{\sinh(r)} \nu(dr) V_{1}.
\]
\end{proof}
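The key integral identity used in the proof above, $\int_{-1}^{1}\big(\log(\cosh r + u\sinh r) - ru\big)\frac{du}{2} = \frac{r\cosh r - \sinh r}{\sinh r}$, can be verified numerically. The sketch below uses a simple trapezoidal quadrature with an arbitrary test value of $r$ (our choice, for illustration only):

```python
import numpy as np

r = 0.8
u = np.linspace(-1.0, 1.0, 200001)
f = (np.log(np.cosh(r) + u * np.sinh(r)) - r * u) / 2.0
dx = u[1] - u[0]
val = np.sum((f[1:] + f[:-1]) / 2.0) * dx   # trapezoidal rule
expected = (r * np.cosh(r) - np.sinh(r)) / np.sinh(r)
```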
\begin{rmq}\label{rmqinteg}
When $\int_1^{+\infty} r \nu(dr) = + \infty $ we obtain $\mathbb{E} [ \left \vert \log (h'_n)_A \right \vert ] = + \infty$. Nevertheless
\begin{align*}
\mathbb{E} \left [ - \min \left (\log (h'_n)_A, 0 \right ) \right ]& = \int_1^{+ \infty} \int_{-1}^1 - \min \left( \log (\cosh(r) + \theta^1 \sinh(r) ), 0 \right ) \frac{d\theta^1}{2} \nu(dr) \\
&= \int_1^{+\infty} \int_{e^{-r}}^1 -\log (v) \frac{dv}{2 \sinh(r)} \nu(dr) \\
&= \int_1^{+\infty} \frac{1}{2 \sinh(r)} (1- e^{-r} (r+1) ) \nu(dr) < + \infty.
\end{align*}
Now, applying a generalized law of large numbers, we deduce that almost surely
\[
\frac{1}{t} \int_0^t \int_{U_0^c} \log (k_{s^{-}} h)_A N(ds, dh) \underset{t \to + \infty}{\longrightarrow} + \infty,
\]
and so
\[
\frac{\alpha_t}{t} \underset{t \to + \infty}{\longrightarrow} + \infty.
\]
\end{rmq}
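The elementary computation behind the remark, $\int_{e^{-r}}^{1}(-\log v)\,dv = 1 - e^{-r}(1+r)$, can likewise be checked numerically (again with an arbitrary test value of $r$ of our choosing):

```python
import numpy as np

r = 1.5
v = np.linspace(np.exp(-r), 1.0, 200001)
f = -np.log(v)
val = np.sum((f[1:] + f[:-1]) / 2.0) * (v[1] - v[0])   # trapezoidal rule
expected = 1.0 - np.exp(-r) * (1.0 + r)
```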
The following proposition establishes that $g_t$ is bounded in expectation on a finite time interval. This result is used to prove the convergence of $n_t$ in the next Proposition.
\begin{prop}\label{supremum}
Fix $T>0$. Then
\[\ensuremath{\mathbb{E}}\left [ \sup_{t \in [0,T]} r\left ( g_t \right) \right ] < + \infty. \]
\end{prop}
\begin{proof}
We cannot directly apply the It\^o formula to $g \mapsto r(g)$ since it is not regular at $\mathrm{Id}$. But we can find a smooth function $\tilde{r}$ such that $\tilde{r} \geq r$ on $U_0$ and $\tilde{r}=r$ on $U_0^c$. For such a function we have
\begin{align} \label{Ito2}
\tilde{r}(g_t)&= \tilde{r}(\mathrm{Id}) + \hat{m}_t + \tilde{m}_t + I_t + J_t + \int_0^t \int_{U_0^c}\left ( \tilde{r}(g_{s^{-}}h)-\tilde{r}(g_{s^{-}} ) \right ) N(ds,dh).
\end{align}
Here $\hat{m}_t:= \sigma \sum_{i=1}^{d} \int_0^t V_i^l \tilde{r}(g_{s^{-}} )dB_s^i$ and $\tilde{m}_t:= \int_0^t \int_{U_0} \left ( \tilde{r}(g_{s^{-}}h)- \tilde{r}(g_{s^{-}})\right ) \tilde{N}(ds,dh)$ are martingales, while $I_t:=\frac{\sigma^{2}}{2} \int_0^t \sum_{i=1}^{d} \left ( V_i^l \right )^2\tilde{r}(g_{s^{-}} ) ds $ and
\[
J_t:=\int_0^t \int_{r \in [0,1]} \int_{\theta \in \mathbb{S}^{d-1}} \left ( \tilde{r} \left (g_{s^{-}} S(r, \theta) \right) -\tilde{r}(g_{s^{-}}) - r\sum_{i=1}^{d} \theta^i V_i^l \tilde{r}(g_{s^{-}})\right ) d \theta \nu(dr) ds
\]
are processes with finite variation. \\
To prove the proposition we will bound the supremum on $[0,T]$ of each of these five terms by means of $\Vert X^l \tilde{r} \Vert_\infty$ and $\Vert (X^l)^2 \tilde{r} \Vert_\infty$ for some $X \in \mathrm{Vect} \{ V_i, i=1,\dots,d \}$. Thus we need the following lemma.
\begin{lem}
For $X \in \mathcal{P}=\mathrm{Vect} \{ V_i, i=1,\dots,d \}$ we have $\Vert X^l \tilde{r} \Vert_\infty < + \infty$ and $\Vert (X^l)^2 \tilde{r} \Vert_\infty < +\infty$.
\end{lem}
\begin{proof}[Proof of the lemma]
Since $U_0$ is a compact set, it suffices to show that the supremum is finite on $U_0^c$. Since $\tilde{r}= r$ on $U_0^c$, it remains to prove that $\sup_{g\in U_0^c} \vert X^l r(g) \vert < +\infty$ and $ \sup_{g\in U_0^c} \vert (X^l)^2 r(g) \vert < +\infty$. The polar decomposition of $g$ can be written $g=\tilde{k} \exp(r(g) V_1) k$ for some $\tilde{k}, k \in K$. Setting $x \in \ensuremath{\mathbb{R}}^d$ such that $X= \sum_i x^i V_i $ we have
\[
r(g \exp(sX) ) = r\left (\exp(r(g) V_1) \exp \left (s \sum_{i=1}^d (kx)^i V_i \right ) \right ).
\]
Let $\theta \in \mathbb{S}^{d-1}$ be such that $(kx)^i = \Vert x \Vert \theta^i$. Then we compute explicitly, for $g \in U_0^c$:
\begin{align*}
r\left (\exp(r(g) V_1) \exp \left (s \sum_{i=1}^d (kx)^i V_i \right ) \right ) &= \cosh^{-1} \left ( \cosh(r(g)) \cosh(s \Vert x \Vert ) + \theta^1 \sinh(r(g)) \sinh(s \Vert x \Vert) \right ) \\
&= r(g) + s \Vert x \Vert \theta^1 + \frac{s^2}{2} \Vert x \Vert^2 \frac{\cosh(r(g))}{\sinh(r(g))} (1- (\theta^1)^2) + O(s^3).
\end{align*}
It follows that
\[
X^l r(g)=\left. \frac{d}{ds} r(g \exp(sX) ) \right \vert_{s=0} = \Vert x \Vert \theta^1,
\]
and thus $\sup_{g \in U_0^c} \left \vert X^l r(g) \right \vert \leq \Vert x \Vert$. \\
Moreover
\[
(X^l)^2 r(g) = \left. \frac{d^2}{ds^2} r(g \exp(sX) ) \right \vert_{s=0} =\Vert x \Vert^2 \frac{\cosh(r(g))}{\sinh(r(g))} (1- (\theta^1)^2)
\]
and so $\sup_{g \in U_0^c} \left \vert (X^l)^2 r(g) \right \vert \leq 2 \Vert x \Vert^2.$
\end{proof}
We return to the proof of Proposition \ref{supremum}. By Doob's norm inequalities (see \cite{Kal}) we obtain
\begin{align*}
\ensuremath{\mathbb{E}} \left [\sup_{t \in [0,T]} \vert \hat{m}_t\vert^2 \right ]\leq 4\ensuremath{\mathbb{E}} \left [ (\hat{m}_T)^2 \right ] \leq 4\sigma^2 \ensuremath{\mathbb{E}} \left [ \int_0^T \sum_{i=1}^{d} \left \vert V_i^l \tilde{r}(g_{s^{-}} ) \right \vert^2 ds \right ] \leq 4\sigma^2 T d \max_i \Vert V_i^l \tilde{r} \Vert^2_\infty < +\infty.
\end{align*}
and
\begin{align*}
\ensuremath{\mathbb{E}} \left [ \sup_{t \in [0,T]} \vert \tilde{m}_t\vert^2 \right ] &\leq 4\ensuremath{\mathbb{E}} \left [ (\tilde{m}_T)^2 \right ] \leq 4 \ensuremath{\mathbb{E}} \left [ \int_0^T \int_{r\in [0,1]} \int_{\theta \in \mathbb{S}^{d-1}} \left \vert \tilde{r}\left (g_{s^{-}}S(r, \theta) \right)-\tilde{r}(g_{s^{-}} ) \right \vert^2 \nu(dr) d\theta ds \right ] \\ &\leq 8 d T \int_0^1 r^2 \nu(dr) \max_i \Vert V_i^l \tilde{r} \Vert^2_\infty < +\infty.
\end{align*}
We have also
\begin{align*}
\ensuremath{\mathbb{E}} \left [ \sup_{t \in [0,T]} \vert I_t \vert \right ] \leq \sigma^2 T d \max_i \Vert (V_i^l)^2 \tilde{r} \Vert_\infty < +\infty .
\end{align*}
Moreover, applying a Taylor inequality to $u \in [0,1] \mapsto \tilde{r}(g_{s^{-}}S(ur,\theta) )$, it follows that
\begin{align*}
\left \vert \tilde{r} \left (g_{s^{-}} S(r, \theta) \right) -\tilde{r}(g_{s^{-}}) - r\sum_{i=1}^{d} \theta^i V_i^l \tilde{r}(g_{s^{-}}) \right \vert \leq \frac{1}{2} r^2 \sup_{\theta \in \mathbb{S}^{d-1}} \left \Vert \left (\sum_i \theta^i V_i^l \right)^{2} \tilde{r} \right \Vert_\infty < +\infty
\end{align*}
and thus
\begin{align*}
\ensuremath{\mathbb{E}} \left [ \sup_{t \in [0,T]} \vert J_t \vert \right ] \leq \frac{T}{2} \left ( \int_0^1 r^2 \nu(dr) \right ) \sup_{\theta \in \mathbb{S}^{d-1}} \left \Vert \left (\sum_i \theta^i V_i^l \right)^{2}\tilde{r} \right \Vert_\infty < +\infty.
\end{align*}
Finally, to bound the supremum of the last term of \eqref{Ito2} we need Assumption \ref{integrability}. Indeed, since $\left \vert \tilde{r}(g_{s^{-}}S(r,\theta)) -\tilde{r}(g_{s^{-}}) \right \vert \leq r \sup_{\theta \in \mathbb{S}^{d-1}} \left \Vert \left (\sum_i \theta^i V_i^l \right)\tilde{r} \right \Vert_\infty$ we obtain
\begin{align*}
\ensuremath{\mathbb{E}} \left [\sup_{t\in [0,T] }\left \vert \int_0^t \int_{U_0^c} \left ( \tilde{r}(g_{s^{-}}h) -\tilde{r}(g_{s^{-}}) \right ) \Pi(dh) ds \right \vert \right ]\leq T d \left (\int_{1}^{+\infty} r \nu(dr) \right )\max_i \left \Vert V_i^l \tilde{r} \right \Vert_\infty < +\infty.
\end{align*}
\end{proof}
We can now state the main result of this section, the convergence of $n_t$ to some asymptotic random variable $n_\infty$ and the speed of convergence.
\begin{prop}\label{convht}
Denote by $b_t =( b_t^{i} )_{i=2,\dots, d}$ the $\ensuremath{\mathbb{R}}^{d-1}$-valued process such that \[n_t =\exp \left ( \sum_{i=2}^{d} b_{t}^{i-1} (V_i-V_{1i} ) \right ). \] Then $b_t$ converges almost surely to a limit $b_\infty$ exponentially fast with rate $\alpha$, i.e.
\begin{align}
\limsup_{t \to +\infty}\frac{1}{t} \log \Vert b_t -b_{\infty} \Vert \leq - \alpha. \label{ht}
\end{align}
As a consequence $n_t$ converges almost surely to $n_\infty := \exp \left ( \sum_{i=2}^{d} b_{\infty}^{i-1}(V_i-V_{1i} ) \right )$ and defining $h_t := e^{- t\alpha V_1} n_{\infty}^{-1} g_t$ we obtain
\begin{align}
\lim_{t \to +\infty} \frac{1}{t} r\left ( h_t\right ) = 0 \quad \mathrm{a.s}. \label{rform}
\end{align}
\end{prop}
\begin{proof}
Denoting by $[t]$ the integer part of $t$, we decompose $b_t = \sum_{j=1}^{[t]} ( b_j -b_{j-1} ) + (b_t -b_{[t]}) $. To prove that $b_t$ converges and that \eqref{ht} holds, it is sufficient to verify that
\begin{align}
\limsup_{j \to \infty } \frac{1}{j} \log \sup_{s\in [0,1]} \Vert b_j -b_{j +s} \Vert \leq -\alpha \quad \mathrm{a.s.} \label{dec}
\end{align}
For $j \in \ensuremath{\mathbb{N}}$ and $s\in [0,1]$ we have $g_{j}^{-1} g_{j+s} = k_{j}^{-1} a_{j}^{-1} n_{j}^{-1} n_{j+s} a_{j+s} k_{j+s}$ and thus
\begin{align} n_{j}^{-1} n_{j+s} = a_{j} \left ( g_{j}^{-1} g_{j+s} k_j \right )_N a_{j}^{-1}. \label{eq1} \end{align}
Define $\tilde{b}_{j,s}= (\tilde{b}_{j,s}^{i})_{i=2, \dots, d} \in \ensuremath{\mathbb{R}}^{d-1}$ by $\left ( g_{j}^{-1} g_{j+s} k_j \right )_N = \exp \left ( \sum_{i=2}^{d} \tilde{b}_{j,s}^{i} (V_i -V_{1i}) \right )$. Then, \eqref{eq1} implies that
\begin{align}
b_{j+s} -b_j = e^{- \alpha_j} \tilde{b}_{j,s}. \label{hjs}
\end{align}
Since $g_t$ is a L\'evy process, $ \left (\sup_{s\in [0,1] } r(g_{j}^{-1} g_{j+s}) \right )_{j\in \ensuremath{\mathbb{N}}}$ are i.i.d. random variables. Moreover, by Proposition \ref{supremum} their common expectation $ \ensuremath{\mathbb{E}} [ \sup_{s \in [0,1]} r(g_{s})]$ is finite. Thus, applying the law of large numbers we obtain
\begin{align}
\frac{1}{j} \sup_{s\in [0,1] } r(g_{j}^{-1} g_{j+s} ) \underset{j \to \infty}{\longrightarrow} 0 \quad \mathrm{a.s.} \label{LDGN}
\end{align}
Since $r( (g_{j}^{-1}g_{j+s} )_N ) \leq 2 r (g_{j}^{-1} g_{j+s})$ and, by \eqref{link}, $ \Vert \tilde{b}_{j,s} \Vert^2 = 2 \left ( \cosh \left( r \left( \left ( g_{j}^{-1} g_{j+s} k_{j} \right )_N \right ) \right ) -1 \right )$ we deduce from \eqref{LDGN} that for $\ensuremath{\varepsilon} >0$ there exists $j_0$ such that for $j>j_0$
\begin{align}
\Vert \tilde{b}_{j,s} \Vert \leq e^{\ensuremath{\varepsilon} j}. \label{hjs2}
\end{align}
Since, by Proposition \ref{alpha}, $\alpha_j = j\alpha + o(j)$, \eqref{dec} follows from \eqref{hjs} and \eqref{hjs2}. To finish the proof of the proposition we need to check \eqref{rform}. We have
\[
r(h_t)=r( e^{-\alpha t V_1} n_{\infty}^{-1} n_t a_t ) \leq r( e^{-\alpha t V_1}a_t) + r( a_{t}^{-1} n_{\infty}^{-1}n_{t} a_{t}),
\]
and $r( e^{-\alpha t V_1}a_t) = \vert \alpha_t -\alpha t \vert = o(t)$, while from \eqref{link} (since $ a_{t}^{-1} n_{\infty}^{-1}n_{t} a_{t} \in N$) we obtain
\[
r (a_{t}^{-1} n_{\infty}^{-1}n_{t} a_{t})= \cosh^{-1} \left ( 1 + e^{2 \alpha_t} \frac{\Vert b_t -b_\infty \Vert^2}{2} \right ).
\]
So by \eqref{ht} we also have $r (a_{t}^{-1} n_{\infty}^{-1}n_{t} a_{t}) = o(t)$ and \eqref{rform} holds.
\end{proof}
\begin{rmq}\label{rmq}
\begin{itemize}
\item Without the integrability condition, we can nevertheless show that $g_t$ satisfies the \emph{irreducibility} and \emph{contraction} conditions of \cite{Liao04} and deduce that $\alpha_t$ converges almost surely to $+\infty$ and $n_t$ converges to $n_\infty$. Moreover, by Remark \ref{rmqinteg}, in the case where $\int_1^{+ \infty} r \nu(dr) = +\infty$ we obtain that almost surely $\frac{\alpha_t}{t}$ converges to $+\infty$.
\item In \cite{Liao04} the author uses a stronger hypothesis to prove the rate of convergence of a L\'evy process in a semi-simple group. In our case it corresponds to assuming that $\int_0^{+\infty} r \nu(dr) < + \infty $.
\end{itemize}
\end{rmq}
\subsection{Asymptotic behavior of the $\ensuremath{\mathbb{R}}^{1,d}$-component}
The linear endomorphism $\xi \mapsto V_1 \xi $ is diagonalisable with eigenvalues $-1$, $0$, $+1$. Denote by $U^{-}$, $U^{0}$ and $U^{+}$ the respective eigenspaces. Explicitly $U^{-}= \mathrm{Vect} \{ e_0 - e_1 \} $, $U^{0}= \mathrm{Vect} \{ e_2, \dots, e_d \}$ and $U^{+}= \mathrm{Vect} \{ e_0 + e_1 \} $. For $\xi \in \ensuremath{\mathbb{R}}^{1,d}$ we denote by $(\xi)^{-}$, $(\xi)^{0}$ and $(\xi)^{+}$ its projection on each eigenspace. Explicitly we obtain
\begin{align*}
(\xi )^{-}= -\frac{1}{2} q( \xi , e_0 + e_1 )& (e_0 -e_1), \quad \quad(\xi)^{+}= \frac{1}{2} q(\xi, e_0 -e_1 ) (e_0 +e_1) \\
&(\xi)^{0}= \sum_{i=2}^{d} q(\xi, e_i ) e_i.
\end{align*}
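As a quick numerical sanity check of this spectral decomposition (a sketch, not part of the argument), one can verify for $d=2$ that $V_1$ — with the standard $so(1,2)$ convention $V_1 e_0 = e_1$, $V_1 e_1 = e_0$, $V_1 e_2 = 0$, assumed here — has eigenvalues $-1$, $0$, $+1$ with eigenvectors $e_0-e_1$, $e_2$ and $e_0+e_1$:

```python
import numpy as np

# Boost generator V_1 on R^{1,2} (assumed convention: V_1 e_0 = e_1,
# V_1 e_1 = e_0, V_1 e_2 = 0).
V1 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])

e0, e1, e2 = np.eye(3)

# Eigenvalue -1 on U^- = Vect{e_0 - e_1}
assert np.allclose(V1 @ (e0 - e1), -(e0 - e1))
# Eigenvalue 0 on U^0 = Vect{e_2}
assert np.allclose(V1 @ e2, 0.0)
# Eigenvalue +1 on U^+ = Vect{e_0 + e_1}
assert np.allclose(V1 @ (e0 + e1), e0 + e1)

spectrum = np.sort(np.linalg.eigvalsh(V1))
print(spectrum)  # [-1.  0.  1.]
```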
Recall that by definition, $\xi_t = \int_0^t g_s (e_0) ds$. The following proposition gives the asymptotic behavior of $\xi_t$.
\begin{prop}\label{xit}
There exists an asymptotic random variable $\lambda_{\infty}>0$ such that
\[
(n_\infty^{-1} \xi_t )^{-} \underset{t \to + \infty}{\longrightarrow} \lambda_\infty (e_0 -e_1),
\]
and moreover
\begin{align}
\limsup_{t \to + \infty} \frac{1}{t} \log \Vert (n_\infty^{-1} \xi_t )^{-} - \lambda_\infty (e_0 -e_1) \Vert \leq -\alpha. \label{convxit}
\end{align}
We also have
\begin{align}
\limsup_{t \to +\infty} \frac{1}{t} \log \Vert (n_\infty^{-1} \xi_t )^{0} \Vert \leq 0, \quad \text{and} \quad
\limsup_{t \to +\infty} \frac{1}{t} \log \Vert (n_\infty^{-1} \xi_t )^{+} \Vert \leq \alpha.
\end{align}
\end{prop}
\begin{proof}
We have
\begin{align}
&- \frac{1}{2} q( n_\infty^{-1} \xi_t , e_0 + e_1 ) = \int_0^t -\frac{1}{2} q(n_\infty^{-1} n_s a_s (e_0), e_0 + e_1 ) ds \label{integral}
\end{align}
and the integrand can be written
\begin{align*}
-\frac{1}{2} q(n_\infty^{-1} n_s a_s (e_0), e_0 + e_1 ) &= -\frac{1}{2} q(e^{-\alpha s V_1}n_\infty^{-1} n_s a_s (e_0),e^{-\alpha s V_1}(e_0 + e_1) ) \\ &= -\frac{1}{2} e^{-\alpha s}q(h_s (e_0),(e_0 + e_1) ),
\end{align*}
where $h_s =e^{- \alpha s } n_{\infty}^{-1} g_s$ as defined in Proposition \ref{convht}.
Denote by $(\tilde{r}_s , \tilde{\theta}_s) \in \ensuremath{\mathbb{R}}^{+} \times \mathbb{S}^{d-1}$ the polar decomposition of $h_s (e_0) \in \ensuremath{\mathbb{H}}^{d}$. So $\tilde{r}_s= r(h_s)$ and
\[
- \frac{1}{2}e^{-\alpha s}q(h_s (e_0),(e_0 + e_1) ) = \frac{1}{2}e^{-\alpha s} \left ( \cosh(\tilde{r}_s ) -\tilde{\theta}_s^{1} \sinh(\tilde{r}_s) \right ) \in \frac{1}{2}[e^{ -(\alpha s +\tilde{r}_s)} , e^{-(\alpha s-\tilde{r}_s)} ].
\]
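The interval bracketing above rests only on the following elementary bound, valid for $r \geq 0$ and $\vert u \vert \leq 1$ (here $r = \tilde{r}_s$ and $u = \tilde{\theta}_s^{1}$):

```latex
e^{-r} \;=\; \cosh(r) - \sinh(r)
\;\leq\; \cosh(r) - u\,\sinh(r)
\;\leq\; \cosh(r) + \sinh(r) \;=\; e^{r},
```

and multiplying by $\frac{1}{2}e^{-\alpha s}$ gives the stated interval.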
Proposition \ref{convht} ensures that $\tilde{r}_s= o(s)$ a.s., so fixing $\ensuremath{\varepsilon} >0$ arbitrarily small we can find $s_0 >0$ such that for all $s >s_0$ the integrand of \eqref{integral} is positive and bounded by $e^{-(\alpha - \ensuremath{\varepsilon})s}$. This ensures the convergence of $(n_\infty^{-1} \xi_t )^{-}$ to $\lambda_\infty (e_0 -e_1)$ with $\lambda_{\infty} >0$. Moreover for $t>s_0$
\begin{align*}
\left \vert -\frac{1}{2}q( n_\infty^{-1} \xi_t , e_0 + e_1 ) - \lambda_\infty \right \vert \leq \int_t^{+ \infty} e^{-(\alpha - \ensuremath{\varepsilon}) s} ds = \frac{1}{\alpha -\ensuremath{\varepsilon} } e^{-(\alpha -\ensuremath{\varepsilon})t},
\end{align*}
which proves \eqref{convxit}.
Now
\begin{align*}
(n_{\infty}^{-1} \xi_t )^0 = \sum_{i=2}^{d} \int_0^t q( n_{\infty}^{-1} n_s a_s (e_0),e_i )\, ds \, e_i,
\end{align*}
and for $i=2, \dots, d$ we have $ q( n_{\infty}^{-1} n_s a_s (e_0),e_i )= q( e^{-\alpha s V_1} n_{\infty}^{-1} n_s a_s (e_0) , e_i ) = \tilde{\theta}_s^{i} \sinh(\tilde{r}_s)$ so $\vert q( n_{\infty}^{-1} n_s a_s (e_0),e_i ) \vert \leq e^{\tilde{r}_s}$ and this ensures that $\limsup_{t \to +\infty} \frac{1}{t} \log \Vert (n_\infty^{-1} \xi_t )^{0} \Vert \leq 0$.
Moreover,
\begin{align*}
(n_{\infty}^{-1} \xi_t )^{+} = \left ( \int_0^t \frac{1}{2} q\left ( n_{\infty}^{-1} n_s a_s (e_0), e_0 -e_1 \right )ds \right ) (e_0 -e_1 )
\end{align*}
and
\begin{align*}
\frac{1}{2} q( n_{\infty}^{-1} n_s a_s (e_0), e_0 -e_1 )&= \frac{1}{2} e^{\alpha s} q(h_s (e_0), e_0 -e_1 ) \\
&= \frac{1}{2} e^{\alpha s} \left ( \cosh(\tilde{r}_s ) - \tilde{\theta}_s^{1} \sinh(\tilde{r}_s) \right ) \in \frac{1}{2} [ e^{\alpha s - \tilde{r}_s } , e^{\alpha s + \tilde{r}_s } ].
\end{align*}
So
\[
\Vert (n_{\infty}^{-1} \xi_t )^{+} \Vert \leq \Vert (n_{\infty}^{-1} \xi_{s_0} )^{+} \Vert + \int_{s_0}^{t} e^{(\alpha + \ensuremath{\varepsilon}) s} ds,
\]
and thus $\limsup_{t \to +\infty} \frac{1}{t} \log \Vert (n_\infty^{-1} \xi_t )^{+} \Vert \leq \alpha.$
\end{proof}
\subsection{Geometric description of the convergence}
Denote by $\mathbf{p}: \ensuremath{\mathbb{R}}^{1,d} \setminus \{ 0 \} \to \ensuremath{\mathbb{P}}^d \ensuremath{\mathbb{R}}$ the projection onto the projective space of dimension $d$. The hyperboloid $\ensuremath{\mathbb{H}}^d$ is mapped onto the interior of a projective ball and its boundary ($\partial \ensuremath{\mathbb{H}}^d \simeq \mathbb{S}^{d-1}$) is the image of the $q$-isotropy cone
\[
\partial \ensuremath{\mathbb{H}}^d := \mathbf{p} \left ( \{ \xi , q(\xi) =0 \} \setminus \{ 0 \} \right ).
\]
From the relation
\begin{align*}
q( \dot{\xi}_t, n_t (e_0 +e_1) ) = q( g_t (e_0) , n_t (e_0 +e_1) ) = q( e_0 , a_t^{-1} (e_0 +e_1) ) = e^{-\alpha_t} \underset{t \to \infty}{\longrightarrow} 0,
\end{align*}
we deduce that all limit points of $\mathbf{p}(\dot{\xi}_t)$ are $q$-orthogonal to $\theta_\infty:= \mathbf{p}( n_{\infty} (e_0 +e_1) )$. Since the only point of $\widebar{\mathbf{p}(\ensuremath{\mathbb{H}}^d)}$
which is $q$-orthogonal to $\theta_\infty$ is $\theta_\infty$ itself, it follows that $ \mathbf{p}(\dot{\xi}_t)$ converges to $\theta_{\infty}$ in $\ensuremath{\mathbb{P}}^d \ensuremath{\mathbb{R}}$. Now, identifying $\ensuremath{\mathbb{P}}^d \ensuremath{\mathbb{R}}$ with its affine chart $\{ \xi, \ \xi^0 =1 \}$ we can consider that $\theta_\infty \in \mathbb{S}^{d-1}$. From \eqref{link} we deduce that $r_t \to +\infty$ and since $\mathbf{p}(\dot{\xi}_t) = \mathbf{p}( e_0 + \theta_t \frac{\sinh(r_t)}{\cosh(r_t)})$ it follows that $\theta_t$ converges to $\theta_{\infty}$ in $\mathbb{S}^{d-1}$. The two asymptotic random variables $\theta_\infty$ and $n_\infty$ are linked by
\[
\mathbf{p}( e_0 + \theta_\infty ) = \mathbf{p}( n_\infty (e_0 + e_1) )
\]
or more explicitly, $b_\infty \in \ensuremath{\mathbb{R}}^{d-1}$ (defined by $n_\infty = \exp \left ( \sum_{i =2}^{d} b_\infty^{i-1} (V_i -V_{1i} )\right )$ ) is the stereographic projection of $\theta_\infty$:
\[
\theta_\infty = \frac{1}{1+ \Vert b_\infty \Vert^2} \left ( \begin{matrix} 1- \Vert b_\infty \Vert^2 \\ 2b_\infty \end{matrix} \right ).
\]
Concerning the asymptotic behavior of $\xi_t$, Proposition \ref{xit} ensures that $-\frac{1}{2} q( \xi_t, n_{\infty} (e_0 + e_1 ) )$ converges to $\lambda_\infty$. Thus geometrically $\xi_t$ is asymptotic to an affine hyperplane which is $q$-orthogonal to $ n_{\infty} (e_0 + e_1 )$ (or $e_0 + \theta_\infty$) and passing through $\lambda_\infty (e_0 -e_1)$.
\section{Lyapunov spectrum and stable manifolds} \label{LyapSpec}
\subsection{Lyapunov spectrum}
The L\'evy process $\tilde{g}_t$, with values in $\widetilde{G}$ and starting at some $\tilde{g}$, can be obtained by solving the following left invariant stochastic integro-differential equation in $\widetilde{G}$:
\begin{align}
\forall f \in C^2( \widetilde{G}), \quad f(\tilde{g}_t)& = f(\tilde{g}) + \sigma \sum_{i=1}^d\int_0^t V_i^{l}f(\tilde{g}_{s^{-}})\circ dB^{i}_s + \int_0^t H_0^{l}f ( \tilde{g}_{s^{-}})ds +\int_0^t \int_{U_0} \left ( f(\tilde{g}_{s^{-}}h) -f(\tilde{g}_{s^{-}}) \right ) \tilde{N}(ds,dh) \notag \\
&+ \int_0^t \int_{U_0} \left ( f(\tilde{g}_{s^{-}}h) -f(\tilde{g}_{s^{-}}) -r(h) \sum_{i=1}^{d} \theta^{i}(h) V_i^l f(\tilde{g}_{s^{-}}) \right ) ds \Pi(dh) \label{SDEtilde} \\
& + \int_0^t \int_{(U_0)^c} \left ( f(\tilde{g}_{s^{-}}h) -f(\tilde{g}_{s^{-}}) \right ) N(ds,dh). \notag
\end{align}
This stochastic differential equation induces a stochastic flow $\varphi_t$ in $\widetilde{G}$ which maps $\tilde{g}$ to the solution at time $t$, starting at $\tilde{g}$, of \eqref{SDEtilde}. By left invariance $\varphi_t$ is also defined by
\[
\begin{matrix}
\varphi_t : & \widetilde{G} & \longrightarrow & \widetilde{G} \\
& \tilde{g} & \longmapsto & \tilde{g} \tilde{g}_t,
\end{matrix}
\]
where $\tilde{g}_t$ is starting at $\mathrm{Id}$.
Denote by $\Vert \cdot \Vert $ any norm on $\mathrm{Lie}( \widetilde{G} )$ and by $\Vert \cdot \Vert_{\tilde{g}}$ the associated left invariant (Finsler) metric on $T_{\tilde{g}} \widetilde{G}$. For $v \in T_{\tilde{g}} \widetilde{G}$ we aim to investigate the asymptotic exponential rate of growth or decay of $\Vert d \varphi_t (\tilde{g})(v) \Vert_{\varphi_t (\tilde{g})}$. Denote by $L_{\tilde{g}}$ the left translation by $\tilde{g}$ in $ \widetilde{G}$. By left invariance of the flow, $\Vert d \varphi_t (\tilde{g})(v) \Vert_{\varphi_t(\tilde{g})}= \Vert d \varphi_t(\mathrm{Id})( \widetilde{X}) \Vert_{\varphi_t(\mathrm{Id}) }$ where $ \widetilde{X}:= (dL_{\tilde{g}})^{-1}(v) \in T_{\mathrm{Id}} \widetilde{G}=\mathrm{Lie}( \widetilde{G})$. For $\tilde{g}= (g, \xi) \in \widetilde{G}$ and $ \widetilde{X}, \widetilde{Y}\in \mathrm{Lie} ( \widetilde{G} )$ we have
\begin{align}
\mathrm{Ad}(\tilde{g}) ( \widetilde{X}) &= \left ( \mathrm{Ad}(g)(X), gx - \mathrm{Ad}(g)(X)\xi \right) \label{Ad} \\
\mathrm{ad}(\widetilde{Y})(\widetilde{X}) &= \left ( \mathrm{ad}(Y)(X), Yx -Xy\right ).
\end{align}
The endomorphism $\widetilde{X} \mapsto \mathrm{ad}(V_1,0)( \widetilde{X})$ is diagonalisable on $\mathrm{Lie}(\widetilde{G})$. Its eigenvalues are $-1,0 , 1$ and we denote by $ \widetilde{U}^{-}$, $ \widetilde{U}^{0}$ and $ \widetilde{U}^{+}$ the associated eigenspaces.
We can check that
\begin{align*}
\widetilde{X} \in \widetilde{U}^{+} &\Longleftrightarrow X \in \widebar{\mathcal{N}} \ \mathrm{and} \ x\in U^{+} \\
\widetilde{X} \in \widetilde{U}^{0} &\Longleftrightarrow X \in \mathcal{A}\oplus \mathcal{M} \ \mathrm{and} \ x\in U^{0} \\
\widetilde{X} \in \widetilde{U}^{-} &\Longleftrightarrow X \in \mathcal{N} \ \mathrm{and} \ x \in U^{-}.
\end{align*}
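A quick numerical check of this eigenvalue structure (a sketch, not part of the paper) for $d=2$: with the assumed explicit conventions $J=\mathrm{diag}(1,-1,-1)$ and basis $V_1, V_2, V_{12}$ of $so(1,2)$, the matrix of $\mathrm{ad}(V_1,0)$ on the six-dimensional algebra $so(1,2)\ltimes\ensuremath{\mathbb{R}}^{1,2}$ has spectrum $\{-1,0,+1\}$, each eigenvalue with multiplicity two:

```python
import numpy as np

# Assumed explicit matrices for d = 2: so(1,2) = {X : X^T J + J X = 0},
# J = diag(1,-1,-1); basis V_1, V_2 (boosts) and V_{12} (rotation).
V1  = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
V2  = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
V12 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
so_basis = [V1, V2, V12]

# Semidirect-product formula from the text: ad((V_1,0))(X, x) = ([V_1, X], V_1 x)
def ad_V1(X, x):
    return V1 @ X - X @ V1, V1 @ x

# Coordinates in the basis {(V_i, 0)} and {(0, e_j)}, using trace-orthogonality
def coords(X, x):
    return np.array([np.trace(B.T @ X) / np.trace(B.T @ B) for B in so_basis]
                    + list(x))

M = np.column_stack([coords(*ad_V1(B, np.zeros(3))) for B in so_basis] +
                    [coords(*ad_V1(np.zeros((3, 3)), e)) for e in np.eye(3)])

eigs = np.sort(np.linalg.eigvals(M).real)
print(eigs)  # eigenvalues -1, 0, +1, each with multiplicity two
```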
Set \[ \tilde{g}_\infty := \left (n_\infty , \lambda_\infty (e_0 -e_1) \right) \in \widetilde{G} \] and $V_\infty^{-}:= \mathrm{Ad}(\tilde{g}_\infty)\left ( \widetilde{U}^{+} \right ) $, $V_\infty^{0}:= \mathrm{Ad}(\tilde{g}_\infty) \left ( \widetilde{U}^{0} + \widetilde{U}^{+}\right )$.
We denote by \[ \tilde{h}_t := \left ( e^{-t \alpha V_1}, \ 0 \right ) \tilde{g}_{\infty}^{-1} \tilde{g}_t,\] where we recall that $\tilde{g}_t:= \varphi_t(\mathrm{Id})$ starts at $\mathrm{Id}$.
\begin{thm}\label{Lyap}
Let $ \widetilde{X} \in \mathrm{Lie}( \widetilde{G} )$. For almost every trajectory
\[
\frac{1}{t} \log \Vert d \varphi_t (\mathrm{Id})( \widetilde{X}) \Vert_{\varphi_t(\mathrm{Id})} \underset{t \to +\infty}{\longrightarrow} \left \{ \begin{matrix} \alpha & \mathrm{if} & \widetilde{X} \in \mathrm{Lie}( \widetilde{G} ) \setminus V_\infty^0 \\ 0 & \mathrm{if} & \widetilde{X} \in V_\infty^0 \setminus V_\infty^- \\ -\alpha & \mathrm{if} & \widetilde{X} \in V_\infty^- \setminus \{ 0 \} \end{matrix}\right.
\]
\end{thm}
\begin{proof}
By left invariance of $\Vert \cdot \Vert$, $\Vert d \varphi_t (\mathrm{Id})( \widetilde{X}) \Vert_{\varphi_t(\mathrm{Id})} = \Vert \mathrm{Ad}(\tilde{g}_t^{-1}) ( \widetilde{X} )\Vert$. We set, for $\tilde{g} \in \widetilde{G}$,
\[
\Vert \mathrm{Ad}(\tilde{g}) \Vert := \sup_{ \widetilde{X} \neq 0} \frac{\Vert \mathrm{Ad}(\tilde{g})( \widetilde{X})\Vert}{\Vert \widetilde{X} \Vert}.
\]
Let $ \widetilde{X} \in \mathrm{Lie}( \widetilde{G})$. Writing $\tilde{g}_t^{-1}= \left ( \tilde{h}_t \right )^{-1} \left ( e^{-t \alpha V_1} ,0 \right ) \tilde{g}_{\infty}^{-1}$ we deduce that
\begin{align}
\frac{\Vert \mathrm{Ad} \left ( \left ( e^{-t \alpha V_1} ,0 \right ) \tilde{g}_{\infty}^{-1} \right )( \widetilde{X}) \Vert }{\Vert \mathrm{Ad} ( \tilde{h}_t ) \Vert} \leq \Vert &\mathrm{Ad}(\tilde{g}_t^{-1})( \widetilde{X})\Vert \notag \\
&\leq \Vert \mathrm{Ad} (\tilde{h}_t)^{-1} \Vert \Vert \mathrm{Ad} \left ( \left ( e^{-t \alpha V_1} ,0 \right ) \tilde{g}_{\infty}^{-1} \right )( \widetilde{X}) \Vert. \label{ineg}
\end{align}
Suppose for the moment that
\begin{align}
&\limsup_{t \to + \infty} \frac{1}{t} \log \left \Vert \mathrm{Ad} ( \tilde{h}_t )^{-1} \right \Vert \leq 0 \label{limsup1} \\
\mathrm{and} \quad &\limsup_{t \to + \infty} \frac{1}{t} \log \Vert \mathrm{Ad} ( \tilde{h}_t ) \Vert \leq 0. \label{limsup2}
\end{align}
Then we deduce from \eqref{ineg} that $\frac{1}{t} \log \Vert \mathrm{Ad}(\tilde{g}_t^{-1})( \widetilde{X})\Vert $ and $\frac{1}{t} \log \Vert \mathrm{Ad} \left ( \left ( e^{-t \alpha V_1} ,0 \right ) \tilde{g}_{\infty}^{-1} \right )( \widetilde{X}) \Vert $ have the same limit when $t$ goes to $\infty$. The linear isomorphism $\mathrm{Ad} \left ( e^{-t \alpha V_1} ,0 \right )$ is diagonalisable with eigenvalues $e^{-\alpha t}$, $1$ and $e^{\alpha t}$ associated respectively to the eigenspaces $ \widetilde{U}^{+}$, $ \widetilde{U}^{0}$ and $ \widetilde{U}^{-}$. Decomposing $\mathrm{Ad}(\tilde{g}_\infty)^{-1} ( \widetilde{X})$ in the direct sum $ \widetilde{U}^{-} \oplus \widetilde{U}^{0} \oplus \widetilde{U}^{+}$ and using a Euclidean norm $\Vert \cdot \Vert$ on $\mathrm{Lie}({ \widetilde{G}})$ for which this decomposition is orthogonal, we easily deduce the theorem (note that the limit is independent of the chosen norm).
Thus it remains to prove \eqref{limsup1} and \eqref{limsup2}. We have
\begin{align*}
\tilde{h}_t=\left ( e^{-t \alpha V_1}, \ 0 \right ) \tilde{g}_{\infty}^{-1}\tilde{g}_t &= \left ( e^{-t \alpha V_1}, \ 0 \right ) \left (n_\infty^{-1} , \ -\lambda_{\infty} (e_0 -e_1) \right ) \left (n_t a_t k_t , \ \xi_t \right ) \\ &= \left ( h_t, \ \ e^{-t \alpha V_1 } \left ( n_{\infty}^{-1}\xi_t \right ) - \lambda_\infty e^{t \alpha} (e_0 -e_1) \right ) \\ &= \left (\mathrm{Id} , \ e^{-t \alpha V_1 } \left (n_{\infty}^{-1}\xi_t \right ) - \lambda_\infty e^{t \alpha} (e_0 -e_1) \right ) \ \left ( h_t , \ 0 \right ).
\end{align*}
Let $\ensuremath{\varepsilon} >0$. By Proposition \ref{convht} we can find $t_0 >0$ such that $r(h_t) \leq \ensuremath{\varepsilon} t$ for all $t >t_0$, and by Proposition \ref{xit} we have
\begin{align*}
\Vert e^{-t \alpha V_1 } \left ( n_{\infty}^{-1}\xi_t \right )- \lambda_\infty e^{t \alpha} (e_0 -e_1) \Vert & \leq e^{\alpha t} \Vert (n_\infty^{-1}\xi_t )^{-} -\lambda_{\infty} (e_0-e_1) \Vert + \Vert (n_{\infty}^{-1} \xi_t )^{0}\Vert + e^{-\alpha t}\Vert (n_{\infty}^{-1}\xi_t)^{+} \Vert \\
& \leq e^{\ensuremath{\varepsilon} t } \quad \text{for } t \text{ large enough}.
\end{align*}
Using Lemma \ref{norms} below, we easily deduce \eqref{limsup1} and \eqref{limsup2}.
\end{proof}
\begin{lem}\label{norms}
There exist positive constants $\alpha, \beta, \gamma$ such that for $g \in G$ and $\xi \in \ensuremath{\mathbb{R}}^{1,d}$
\begin{align*}
\Vert \mathrm{Ad} (g, 0)\Vert \leq \alpha e^{r(g)} \\
\Vert \mathrm{Ad} (\mathrm{Id}, \xi ) \Vert \leq \beta \Vert \xi \Vert + \gamma.
\end{align*}
\end{lem}
\begin{proof}
All norms are equivalent and it suffices to check the inequalities for some particular norms. Let us choose the following $SO(d)$-invariant Euclidean norm on $\mathrm{Lie}( \widetilde{G})$:
\[
\Vert (X,x) \Vert:= \sqrt{ \mathrm{Tr}(X^{t}X) + x^{t}x }.
\]
We obtain easily
\[
\Vert \mathrm{Ad}(g, \ 0)\Vert = e^{r(g)}.
\]
Taking now $\Vert (X,x) \Vert:= \sqrt{\mathrm{Tr}(X^{t}X)} + \sqrt{x^{t}x} $ we get
\begin{align*}
\Vert \mathrm{Ad}(\mathrm{Id}, \ \xi ) (X,x) \Vert &= \sqrt{\mathrm{Tr}(X^{t}X)} + \Vert x - X\xi \Vert
\leq \Vert (X, x )\Vert + \Vert X\xi \Vert \\ &\leq \Vert (X, x )\Vert + \sqrt{\mathrm{Tr}(X^{t}X)} \, \Vert \xi \Vert \leq \Vert (X, x )\Vert \left (1 + \Vert \xi \Vert \right ).
\end{align*}
Thus $\Vert \mathrm{Ad}(\mathrm{Id}, \ \xi ) \Vert \leq 1 + \alpha \Vert \xi \Vert$ for a constant $\alpha >0$ independent of $\xi$.
\end{proof}
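The equality $\Vert \mathrm{Ad}(g,0)\Vert = e^{r(g)}$ obtained in this proof can also be checked numerically. The following sketch (with assumed explicit conventions for $d=2$: $K=SO(2)$ acting on the spatial block and Cartan decomposition $g = k_1 S(r) k_2$) computes the operator norm of $\mathrm{Ad}(g,0)$ with respect to the norm $\sqrt{\mathrm{Tr}(X^tX)+x^tx}$ and compares it with $e^{r}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def boost(r):
    # S(r) = exp(r V_1): hyperbolic rotation in the (e_0, e_1) plane
    B = np.eye(3)
    B[0, 0] = B[1, 1] = np.cosh(r)
    B[0, 1] = B[1, 0] = np.sinh(r)
    return B

def rotation(phi):
    # element of K = SO(2) embedded in the spatial (e_1, e_2) block
    R = np.eye(3)
    R[1, 1] = R[2, 2] = np.cos(phi)
    R[1, 2], R[2, 1] = -np.sin(phi), np.sin(phi)
    return R

# Orthonormal basis of Lie(G~) for <(X,x),(Y,y)> = Tr(X^T Y) + x^T y
V1  = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
V2  = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
V12 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
basis = [(V / np.sqrt(2), np.zeros(3)) for V in (V1, V2, V12)]
basis += [(np.zeros((3, 3)), e) for e in np.eye(3)]

def ad_matrix(g):
    # Ad(g, 0)(X, x) = (g X g^{-1}, g x), written in the orthonormal basis
    cols = []
    for X, x in basis:
        AX, ax = g @ X @ np.linalg.inv(g), g @ x
        cols.append([np.trace(Y.T @ AX) + y @ ax for Y, y in basis])
    return np.array(cols).T

r = 1.3
g = rotation(rng.uniform(0, 2*np.pi)) @ boost(r) @ rotation(rng.uniform(0, 2*np.pi))
op_norm = np.linalg.norm(ad_matrix(g), 2)  # largest singular value
print(op_norm, np.exp(r))  # the two values agree up to rounding
```

Since the inner product is $\mathrm{Ad}(K)$-invariant, the rotations do not change the operator norm, which is attained on the eigenvector of $\mathrm{ad}(V_1)$ with eigenvalue $+1$.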
\subsection{Stable manifolds}
First, remark that $V_{\infty}^{-}$ and $V_{\infty}^{0}$ are Lie sub-algebras of $\mathrm{Lie}( \widetilde{G} )$. Denote by
\[ \mathcal{V}^{-}_{\infty}:= \exp (V_{\infty}^{-} ), \quad \mathrm{and} \quad \mathcal{V}^{0}_{\infty}:= \exp (V_{\infty}^{0}) \]
the associated closed subgroups of $ \widetilde{G}$.
Fix now a Euclidean norm $\Vert \cdot \Vert$ on $\mathrm{Lie}( \widetilde{G})$ which is $\mathrm{Ad}(K)$-invariant. Such a norm is of the form
\[
\left \Vert \left ( \left ( \begin{matrix} 0 & b^{t} \\ b & C \end{matrix} \right ), \ x \right ) \right \Vert := \sqrt{ \kappa^2 b^{t}b + \beta^2 \mathrm{Tr} \left ( C^{t}C \right )+ \gamma^2 \vec{x}^{t}\vec{x} + \delta^2 (x^0)^2 },
\]
for some positive constants $\kappa$, $\beta$, $\gamma$ and $\delta$. We denote by $d$ the distance in $\widetilde{G}$ associated to the left invariant Riemannian metric induced by $\Vert \cdot \Vert$. To simplify notations, we denote by $d(g,h)$ the distance between $(g,0)$ and $(h,0)$ for $g, h \in G$.
The following result shows that the stable manifold of $\varphi_t$ through a point $\tilde{g}$ is $\tilde{g}\, \mathcal{V}_{\infty}^{-}$.
\begin{thm} \label{stable}
Let $\tilde{g}$ and $\tilde{g}'$ be two distinct points in $ \widetilde{G}$.
\begin{itemize}
\item If $\tilde{g}' \in \tilde{g} \mathcal{V}_{\infty}^{-}$ then
\[
\frac{1}{t} \log d \left ( \varphi_t (\tilde{g}) , \varphi_t (\tilde{g}') \right ) \underset{ t \to + \infty }{\longrightarrow} -\alpha.
\]
\item If $\tilde{g}' \notin \tilde{g} \mathcal{V}_{\infty}^{-}$ then
\[
\liminf_{t \to \infty} d\left ( \varphi_t (\tilde{g}) , \varphi_t (\tilde{g}') \right ) >0.
\]
\end{itemize}
\end{thm}
The properties of $d$ that we need in the proof of Theorem \ref{stable} are summed up in the following proposition.
\begin{prop}\label{dprop}
\begin{enumerate}[i)]
\item Left invariance \[\forall \tilde{g}, \tilde{h} \in \widetilde{G}, \quad d( \tilde{g} , \tilde{g} \tilde{h} ) = d( \mathrm{Id} , \tilde{h} ). \]
Thus $ d( \mathrm{Id}, \ \tilde{g}^{-1} )= d( \mathrm{Id}, \ \tilde{g} )$ and the triangle inequality reads:
\[
\forall \tilde{g}, \tilde{h} \in \widetilde{G}, \quad d( \mathrm{Id} , \tilde{g} \tilde{h} ) \leq d( \mathrm{Id} , \tilde{g} ) +d( \mathrm{Id}, \ \tilde{h} )
\]
\item K-right invariance \[ \forall \tilde{g} \in \widetilde{G} \ \mathrm{and} \ k \in K, \quad d \left ( (k, 0) , \tilde{g} (k, 0) \right ) = d( \mathrm{Id} , \tilde{g} ) \]
\item For $ \widetilde{X} \in \mathrm{Lie}( \widetilde{G} ) $
\[
d \left ( \mathrm{Id} , \exp( \widetilde{X} ) \right ) \leq \Vert \widetilde{X} \Vert.
\]
\item There exists a neighborhood $\mathcal{O}$ of $0$ in $\mathrm{Lie}( \widetilde{G})$ and a constant $C>0$ such that
\[
\forall \widetilde{X} \in \mathcal{O}, \quad C \Vert \widetilde{X} \Vert \leq d\left (\mathrm{Id}, \exp( \widetilde{X}) \right )
\]
\item $\forall (g, \xi ) \in \widetilde{G}$
\[
d\left ( \mathrm{Id}, g \right ) \leq d \left ( \mathrm{Id},\ (g, \xi ) \right )
\]
\item For $g=S(r, \theta) R$ and $g'= S(r', \theta')R' $ we have
\[
\frac{\kappa}{\ensuremath{\mathbb{S}^{d-1}}qrt{\kappa^2 +2\beta^2}}d\left ( S(r, \theta) , S(r', \theta') \right ) \leq d(g, g') \]
\item For all $r \geq 0$ and $\theta \in \mathbb{S}^{d-1}$ \[d \left ( \mathrm{Id}, S(r,\theta ) \right )= \kappa r\]
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Proposition \ref{dprop}]
$i)$ and $ii)$. The left and $K$-right invariance follow from the definition of the metric as a left invariant Riemannian metric on $\widetilde{G}$ defined from an $\mathrm{Ad}(K)$-invariant inner product on $\mathrm{Lie}(\widetilde{G})$. Inequality $iii)$ is obtained by remarking that the length of the path $t \in [0,1] \mapsto \exp \left ( t \widetilde{X} \right )$ is equal to $\Vert \widetilde{X} \Vert$.
$iv)$. Denote by $\widehat{\exp}: \mathrm{Lie}(\widetilde{G})\to \widetilde{G}$ the exponential map at $\mathrm{Id}$ induced by the metric $\Vert \cdot \Vert$ in $ \widetilde{G}$: for $\tilde{X}\in \mathrm{Lie}(\widetilde{G})$, $\widehat{\exp}(\tilde{X})= \gamma_{\tilde{X}}(1)$ where $t \in [0,1] \mapsto \gamma_{\tilde{X}}(t)$ is the geodesic starting from $\mathrm{Id}$ in the direction $\tilde{X}$. The differential of $\widehat{\exp}$ at $0$ is known to be the identity and there exists a sufficiently small neighborhood $\mathcal{O}'$ of $0\in \mathrm{Lie}(\widetilde{G})$ such that: \begin{equation} \forall \tilde{X} \in \mathcal{O}', \quad \Vert \tilde{X} \Vert = d\left (I, \widehat{\exp}(\tilde{X}) \right). \label{truc} \tag{$*$}\end{equation} Furthermore, the map $\widehat{\exp}^{-1}\circ \exp $ can be defined in a neighborhood of $0$ and its differential at $0$ is the identity: $\widehat{\exp}^{-1}\circ \exp(\tilde{X})= \tilde{X}+o(\Vert \tilde{X} \Vert)$. So we can find a neighborhood $\mathcal{O}$ of $0$ and $C>0$ such that for all $\tilde{X}\in \mathcal{O}$, $C \Vert \tilde{X} \Vert \leq \Vert \widehat{\exp}^{-1}\circ \exp(\tilde{X}) \Vert \leq \frac{1}{C} \Vert \tilde{X} \Vert$. Taking $\mathcal{O}$ small enough so that $\widehat{\exp}^{-1}\circ \exp(\mathcal{O}) \subset \mathcal{O}'$, we can apply \eqref{truc} to $\widehat{\exp}^{-1}\circ \exp(\tilde{X})$, thus yielding $\Vert \widehat{\exp}^{-1}\circ \exp(\tilde{X}) \Vert= d\left(I, \exp(\tilde{X})\right)$ for every $\tilde{X}\in \mathcal{O}$.
$v)$. Each path $s \in [0,1] \mapsto (g_s, \xi_s)$ joining $\mathrm{Id}$ to $(g,\xi)$ is of length $\int_{0}^1 \Vert (g_s^{-1} \dot{g}_s , g_s^{-1} \xi_s ) \Vert ds $ which is greater than $\int_{0}^1 \Vert (g_s^{-1} \dot{g}_s, 0 ) \Vert ds $ corresponding to the path $ s \in [0,1 ] \mapsto (g_s, 0)$ joining $\mathrm{Id}$ to $(g, 0)$.
$vi)$. Consider a path $ s\in [0,1] \mapsto S(r_s, \theta_s)R_s$ joining $g$ to $ g'$. Using dot notation for $\frac{d}{ds}$, we compute
\[
R_s^{-1} S(r_s, \theta_s)^{-1} \frac{d}{ds} (S(r_s, \theta_s) R_s )= \left ( \begin{matrix} 0 & \dot{r}_s \theta_s^t + \sinh(r_s) \dot{\theta_s}^t \\ \dot{r}_s \theta_s+ \sinh(r_s) \dot{\theta}_s & (\cosh(r_s) -1) \left ( \dot{\theta}_s \theta_s^{t} -\theta_s \dot{\theta}_s^t \right ) + R_s^{-1}\dot{R}_s
\end{matrix} \right ).
\]
Its length $ l:= \int_0^1 \Vert R_s^{-1} S(r_s, \theta_s)^{-1} \frac{d}{ds} \left ( S(r_s, \theta_s) R_s \right ) \Vert ds $ is larger than
\[ \int_0^{1} \kappa \Vert \dot{r}_s \theta_s+ \sinh(r_s) \dot{\theta}_s \Vert ds = \int_0^{1} \kappa \sqrt{ (\dot{r}_s)^2 + \sinh(r_s)^2 \Vert \dot{\theta}_s \Vert^2 }ds. \] Moreover, the path $s \mapsto S(r_s, \theta_s) $ joining $S(r, \theta)$ to $S(r', \theta')$ has length
\begin{align*}
\int_0^1 \sqrt{\kappa^2 \left ((\dot{r}_s)^2 + \sinh(r_s)^2 \Vert \dot{\theta}_s \Vert^2 \right )+ 2\beta^2 (\cosh(r_s) -1)^2 \Vert \dot{\theta}_s \Vert^2 }\, ds \leq \frac{\sqrt{\kappa^2 +2 \beta^2}}{\kappa} \int_0^{1} \kappa \sqrt{ (\dot{r}_s)^2 + \sinh(r_s)^2 \Vert \dot{\theta}_s \Vert^2 }\, ds.
\end{align*}
Thus
\[
d( S(r, \theta), S(r', \theta') ) \leq \frac{\sqrt{\kappa^2 +2 \beta^2}}{\kappa} l
\]
and taking the infimum over all paths joining $g$ to $g'$ we obtain $vi)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{stable}]
Let $ \widetilde{Y} \in \mathrm{Lie}( \widetilde{G} ) \setminus \{ 0 \}$ be such that $\exp( \widetilde{Y} ) =\tilde{g}^{-1} \tilde{g}' $. Then
\[
d \left ( \varphi_t (\tilde{g}) , \varphi_t (\tilde{g}') \right ) = d\left ( \mathrm{Id}, \ \tilde{g}_t^{-1} \exp\left ( \widetilde{Y} \right ) \tilde{g}_t \right )= d \left ( \mathrm{Id}, \ \exp \left ( \mathrm{Ad}(\tilde{g}_t^{-1}) ( \widetilde{Y} ) \right ) \right ).
\]
By Theorem \ref{Lyap}, if $ \widetilde{Y} \in \mathrm{Ad}(\tilde{g}_{\infty})( \widetilde{U}^{+})$ (i.e. if $\tilde{g}' \in \tilde{g} \mathcal{V}_{\infty}^{-}$) then $\Vert \mathrm{Ad}(\tilde{g}_t^{-1}) ( \widetilde{Y} ) \Vert$ converges to $0$ exponentially fast with rate $\alpha$, and so for large $t$ it lies in $\mathcal{O}$. Thus, using $iii)$ and $iv)$ of Proposition \ref{dprop} we obtain the first point of the theorem as a direct consequence of Theorem \ref{Lyap}.
Set $\widetilde{X}:= \mathrm{Ad}(\tilde{g}_{\infty}^{-1}) ( \widetilde{Y} )$, thus
\[ \widetilde{Y} = \mathrm{Ad}(\tilde{g}_{\infty})\left ( \left ( \widetilde{X} \right )^{+} + \left ( \widetilde{X} \right )^{0} + \left ( \widetilde{X} \right )^{-} \right ),\]
and write
\begin{align*}
\left ( \widetilde{X} \right )^{+} = ( X^{+}, \ x^{+} ), &\ \mathrm{where} \ \ X^{+} \in \widebar{\mathcal{N}} \ \mathrm{and} \ \ x^{+} \in U^{+} \\
\left ( \widetilde{X} \right )^{0} = ( X^{0}, \ x^{0} ), &\ \mathrm{where} \ \ X^{0} \in \mathcal{A} \oplus \mathcal{M} \ \mathrm{and} \ \ x^{0} \in U^{0} \\
\left ( \widetilde{X} \right )^{-} = ( X^{-}, \ x^{-} ), &\ \mathrm{where} \ \ X^{-} \in \mathcal{N} \ \mathrm{and} \ \ x^{-} \in U^{-}.
\end{align*}
Now suppose that $\tilde{g}' \notin \tilde{g} \mathcal{V}^{-}_{\infty}$ which is equivalent to $\left ( \widetilde{X} \right )^{0} \neq 0$ or $\left ( \widetilde{X}\right )^{-} \neq 0$.
Suppose first that $\left ( \widetilde{X}\right )^{-} \neq 0$. Thus $\widetilde{Y} \in \mathrm{Lie}(\widetilde{G}) \setminus V_\infty^0$ and by Theorem \ref{Lyap} $\Vert \mathrm{Ad}(\tilde{g}_t^{-1}) \widetilde{Y} \Vert $ converges to $+\infty$ exponentially fast. Now suppose by contradiction that ${ \displaystyle \liminf_{t \to + \infty }d \left ( \mathrm{Id},\exp \left ( \mathrm{Ad}(\tilde{g}_t^{-1} ) \widetilde{Y} \right ) \right ) =0 }$. Then we can find $s_t \to +\infty$ such that $\displaystyle d \left ( \mathrm{Id}, \ \exp \left ( \mathrm{Ad}(\tilde{g}_{s_t}^{-1})( \widetilde{Y} ) \right ) \right )$ converges to $0$ and, for large $t$, $ \mathrm{Ad}(\tilde{g}_{s_t}^{-1})( \widetilde{Y} )$ lies in $\mathcal{O}$. Inequality $iv)$ of Proposition \ref{dprop} then gives a contradiction, and we have proved the second point of the theorem when $\left ( \widetilde{X}\right )^{-} \neq 0$.
So we can suppose $\left ( \widetilde{X}\right )^{-} = 0$ and $\left ( \widetilde{X}\right )^{0} \neq 0$.
\noindent {\bf First case: } $\mathbf{X^{0} \neq 0}$. By $v)$ of Proposition \ref{dprop}, $ {\displaystyle d( \mathrm{Id} , g_t^{-1} \exp( Y ) g_t ) \leq d\left ( \mathrm{Id}, \ \tilde{g}_t^{-1} \exp\left ( \widetilde{Y} \right ) \tilde{g}_t \right ) }$, so it remains to prove that ${\displaystyle \liminf_{t \to \infty} } d( \mathrm{Id} , g_t^{-1} \exp( Y ) g_t ) $ is positive. But $ Y = \mathrm{Ad}(n_{\infty})(X)$ and $X= X^{+} + X^{0} \in \widebar{\mathcal{N}} \oplus \mathcal{A} \oplus \mathcal{M} \setminus \widebar{\mathcal{N}}$. Consider an Iwasawa decomposition of $\exp(X)$ in $G$,
\[
\exp(X) = \bar{n} a m , \quad \bar{n} \in \widebar{N}, \ a \in A, \ \text{and} \ \ m \in M,
\]
with $am \neq \mathrm{Id} $. Since $d(\mathrm{Id}, gh) \leq d(\mathrm{Id}, g) +d(\mathrm{Id}, h)$ for all $g,h$, we get
\begin{align*}
d( \mathrm{Id} , g_t^{-1} \exp( Y ) g_t ) &= d( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \bar{n} a m e^{t \alpha V_1} h_t ) \\
&\geq d( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} a m e^{t \alpha V_1} h_t) -d( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \bar{n} e^{t \alpha V_1} h_t) .
\end{align*}
Writing $\bar{n}= \exp( Z)$ with $Z \in \widebar{\mathcal{N}}$, the distance $d( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \bar{n} e^{t \alpha V_1} h_t )$ is dominated by $ e^{-t \alpha + r(h_t) } \Vert Z \Vert $ (by Lemma \ref{norms} and $iii)$ of Proposition \ref{dprop}) and converges exponentially fast to zero (recall that by Proposition \ref{convht}, $r(h_t) =o(t) $ a.s.). Thus it remains to prove that $\liminf_t d( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} a m e^{t \alpha V_1} h_t) >0$ to finish the proof in the first case. This is ensured by the following lemma.
\begin{lem}\label{lem3}
Let $a \in A$ and $m \in M$ be such that $am\neq \mathrm{Id}$. Then there exists $C>0$ such that $d( \mathrm{Id}, g^{-1} am g ) > C$ for all $g \in G$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem3}]
Consider the polar decomposition $g = S(r, \theta ) R$.
Suppose first that $ a= \mathrm{Id}$ and $m \neq \mathrm{Id}$. Then we get
\begin{align*}
d( \mathrm{Id}, g^{-1} m g ) &= d( \mathrm{Id}, S(r, -\theta ) m S(r, \theta ) ) = d( S(r, \theta) , \ m S(r, \theta ) ) =d( S(r, \theta) , S(r, m\theta ) m ) \\
& \geq \frac{\kappa}{ \sqrt{\kappa^{2} +2 \beta^{2}}} d(S(r, \theta), S(r, m\theta) ) \quad \text{by} \ vi) \ \text{of Proposition \ref{dprop}} \\
& \geq \frac{\kappa}{ \sqrt{\kappa^{2} +2 \beta^{2}}} \left ( d(S(r, \theta), S(r,\theta)m^{-1} ) -d(S(r, \theta)m^{-1} , S(r, m\theta) )\right ) \\
& = \frac{\kappa}{ \sqrt{\kappa^{2} +2 \beta^{2}}} \left ( d( \mathrm{Id}, m^{-1} ) - d(S(r, \theta), \ m S(r, \theta ) ) \right) \\
&= \frac{\kappa}{ \sqrt{\kappa^{2} +2 \beta^{2}}} \left ( d( \mathrm{Id}, m ) -d( \mathrm{Id}, g^{-1} m g ) \right).
\end{align*}
Thus $d( \mathrm{Id}, g^{-1} m g ) \geq \frac{\kappa}{\kappa +\sqrt{\kappa^{2} +2 \beta^{2}}} d( \mathrm{Id}, m ) >0$.
Suppose now $a \neq \mathrm{Id}$. Let $u \neq 0 $ be such that $a= \exp ( u V_1)$; then an explicit computation gives:
\begin{align}
\cosh r( g^{-1}a m g) &= \cosh(u) \left ( \cosh(r)^2 - ((m\theta)^1)^2 \sinh(r)^2 \right ) -\sinh(r)^2 \sum_{i=2}^{d} \theta^i (m\theta)^{i} \\
& = \cosh(u) + \left ( \cosh(u) (1- ((m\theta)^1)^2) - \sum_{i=2}^{d} \theta^i (m\theta)^{i} \right ) \sinh(r)^2 \\
& \geq \cosh(u) + ( 1 - \theta^t (m\theta) ) \sinh(r)^2 \quad \text{(using $\cosh(u)\ge 1$ and $(m\theta)^1 = \theta^1$)} \\
& \geq \cosh(u).
\end{align}
Then by $vi)$ and $vii)$ of Proposition \ref{dprop} it follows that
\[
d( \mathrm{Id}, g^{-1}a m g ) \geq \frac{\kappa}{\sqrt{\kappa^2 + 2 \beta^2 }} \kappa \vert u \vert >0.
\]
\end{proof}
We return to the proof of Theorem \ref{stable}.
\noindent {\bf Second case: $\mathbf{X^0=0}$ but $ \mathbf{x^0 \neq 0}$.} So $\widetilde{X} = ( X^{+}, x^{+} +x^{0})$ and explicitly
\[
\exp (\widetilde{X}) = ( \exp (X^{+}) , \ x^{0} + x^{+} + \frac{X^{+} x^{0} }{2} ) = (\mathrm{Id}, \xi ) ( \exp (X^{+}), 0 ),
\]
where we have set $\xi:= x^{0} + x^{+} + \frac{X^{+} x^{0} }{2}$.
Thus
\begin{align*}
\tilde{g}_t^{-1} \exp( \widetilde{Y}) \tilde{g}_t &= \tilde{h}_t^{-1} (e^{-t\alpha V_1}, 0) \exp (\widetilde{X} ) (e^{t\alpha V_1}, 0) \tilde{h}_t \\
&= ( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \xi ) (\exp( \mathrm{Ad}(h_{t}^{-1} e^{-t\alpha V_1}) X), \ 0 ),
\end{align*}
and
\begin{align*}
d( \mathrm{Id}, &\tilde{g}_t^{-1} \exp( \widetilde{Y}) \tilde{g}_t) \geq d\left( \mathrm{Id}, \ ( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \xi) \right ) - d( \mathrm{Id}, \ (\exp( \mathrm{Ad}(h_{t}^{-1} e^{-t\alpha V_1}) X), \ 0 ) ).
\end{align*}
As in the first case, $d( \mathrm{Id}, \ (\exp( \mathrm{Ad}(h_{t}^{-1} e^{-t\alpha V_1}) X), \ 0 ) )$ converges exponentially fast to $0$, so it remains to prove that
\begin{align}
\liminf_{t \to \infty} d\left( \mathrm{Id}, \ ( \mathrm{Id}, h_t^{-1} e^{-t \alpha V_1} \xi) \right ) > 0 \label{liminf2}.
\end{align}
Suppose by contradiction that we can find a sequence $s_t$ such that $d\left ( \mathrm{Id} , \left ( \mathrm{Id} , \ h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi) \right )\right )$ converges to $0$. By $iv)$ of Proposition \ref{dprop}, for large $t$,
\[
d\left ( \mathrm{Id} , ( \mathrm{Id} , \ h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) )\right ) \geq \Vert h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) \Vert.
\]
Since $\frac{X^{+} x^{0} }{2} \in U^{+}$, we obtain directly that $q( \xi ) = q( x^{0})$, which is negative since $x^{0}$ is assumed to be nonzero. But
\begin{align*}
\Vert h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) \Vert^{2} &= \gamma^{2} \left ( \sum_{i =1}^{d} q \left ( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) , e_i \right )^2 \right ) + \delta^2 q \left ( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) , e_0 \right )^2 \\
& \geq \min(\gamma, \delta)^2 \left ( \sum_{i =0}^{d} q \left ( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) , e_i \right )^2 \right ) \\
&= \min(\gamma, \delta)^2 \left ( 2 q( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) , e_0)^2 + q \left ( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) \right ) \right )\geq - \min(\gamma, \delta)^2 q( x^{0}) >0,
\end{align*}
where the last inequality uses $q(v,e_0)^2 \geq -q(v)$ (valid for any $v$) together with the invariance of $q$, which gives $q \left ( h_{s_t}^{-1} e^{-s_{t} \alpha V_1} (\xi ) \right ) = q(\xi) = q(x^{0})$.
\end{proof}
\subsection{Projection on $\ensuremath{\mathbb{H}}^d \times \ensuremath{\mathbb{R}}^{1,d}$ of the stable manifolds}
We make explicit here the projection of $\mathcal{V}_{\infty}^{-}$ on $\ensuremath{\mathbb{H}}^{d} \times \ensuremath{\mathbb{R}}^{1,d}$. Recall that by definition an element of $\mathcal{V}_{\infty}^{-}$ is of the form $\tilde{g}_\infty \exp (X, x) \tilde{g}_\infty ^{-1}$ where $(X, x) \in \widebar{\mathcal{N}} \times U^{+}$. We deduce, since in this case $ \exp (X, x) = ( \exp(X) , x) $, that an element of $\pi ( \mathcal{V}_{\infty}^{-} ) $ is of the form
\begin{align} \label{skew}
\left ( n_\infty \exp(X) n_\infty^{-1} (e_0 ), \ u n_{\infty}(e_0 +e_1) + \lambda_\infty \left ( \mathrm{Id} - n_\infty \exp(X) n_\infty^{-1} \right ) (e_0 -e_1) \right ),
\end{align}
where $X$ lies in $\widebar{\mathcal{N} }$ and $u \in \ensuremath{\mathbb{R}} $.
Since $\exp(X)(e_0 + e_1) =e_0 + e_1$ for $X \in \widebar{\mathcal{N}}$ we obtain
\[
q( n_\infty \exp(X) n_\infty^{-1} (e_0), n_\infty (e_0 + e_1) ) = q(n_\infty^{-1} e_0, e_0 +e_1 ) = q( e_0 , n_\infty (e_0 +e_1) )
\]
and thus, as $X$ describes $\widebar{\mathcal{N}}$, the point $ n_\infty \exp(X) n_\infty^{-1} (e_0)$ draws the intersection of $\ensuremath{\mathbb{H}}^d$ with the affine hyperplane passing through $e_0$ and $q$-orthogonal to $n_\infty (e_0 +e_1)$. This submanifold of $\ensuremath{\mathbb{R}}^{1,d}$ is a paraboloid of codimension 2 and is mapped by $\mathbf{p}$ (the projection onto the projective space) onto a sphere tangent to $\partial \ensuremath{\mathbb{H}}^d$ at $\theta_\infty$ and passing through $\mathbf{p}(e_0)$. It is called the \emph{horosphere} tangent at $\theta_\infty$ passing through $e_0$ and is denoted by $\mathcal{H}_\infty$.
Moreover, since
\[
q( n_\infty \exp(X) n_\infty^{-1} (e_0 -e_1), n_\infty (e_0 + e_1) ) = q( e_0 -e_1, e_0 +e_1) = 0,
\]
we get that, as $X$ describes $\widebar{\mathcal{N}}$, the point $n_\infty \exp(X) n_\infty^{-1} (e_0 -e_1)$ describes the intersection of the light cone $\{ \xi , q(\xi) =0 \}$ with the hyperplane passing through $e_0 -e_1$ and $q$-orthogonal to $n_\infty (e_0 +e_1)$. Thus, as $X$ describes $\widebar{\mathcal{N}}$, the point $\left ( \mathrm{Id} - n_\infty \exp(X) n_\infty^{-1} \right ) (e_0 -e_1)$ draws a paraboloid $\mathcal{P}_\infty$ in the hyperplane $q$-orthogonal to $n_\infty (e_0 +e_1)$. To each $\dot{\xi}$ in the horosphere $\mathcal{H}_\infty$ corresponds a unique $X_{\dot{\xi}} \in \widebar{\mathcal{N}}$ such that $\dot{\xi}= n_\infty \exp(X_{\dot{\xi}}) n_\infty^{-1} (e_0)$, and the one-to-one function $ \psi: \dot{\xi} \mapsto \left ( \mathrm{Id} - n_\infty \exp(X_{\dot{\xi}}) n_\infty^{-1} \right ) (e_0 -e_1)$ maps $\mathcal{H}_{\infty}$ onto $\mathcal{P}_{\infty}$.
Then by \eqref{skew}, we obtain the following one-to-one map
\[
\begin{matrix}
\mathcal{H}_\infty \times \langle n_{\infty}(e_0 +e_1) \rangle & \longrightarrow & \pi( \mathcal{V}_{\infty}^{-}) \\
(\dot{\xi}, \xi ) & \longmapsto & ( \dot{\xi}, \xi + \lambda_\infty \psi ( \dot{\xi}) )
\end{matrix}
\]
and $\pi( \mathcal{V}_{\infty}^{-})$ is a skew product of the line $\langle n_{\infty}(e_0 +e_1) \rangle$ with the horosphere $\mathcal{H}_\infty$.
\begin{figure}
\caption{ $\pi( \mathcal{V}_{\infty}^{-})$ }
\end{figure}
\end{document}
\begin{document}
\title[On equivalent conjectures on smooth threefolds]{On equivalent conjectures for minimal log discrepancies on smooth threefolds}
\author{Masayuki Kawakita}
\address{Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan}
\email{[email protected]}
\thanks{Partially supported by JSPS Grant-in-Aid for Scientific Research (C) 16K05099.}
\begin{abstract}
On smooth threefolds, the ACC for minimal log discrepancies is equivalent to the boundedness of the log discrepancy of some divisor which computes the minimal log discrepancy. We reduce it to the case when the boundary is the product of a canonical part and the maximal ideal to some power. We prove the reduced assertion when the log canonical threshold of the maximal ideal is either at most one-half or at least one.
\end{abstract}
\maketitle
\section{Introduction}
Let $P\in X$ be the germ of a smooth variety and $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$. We write $\mld_P(X,\mathfrak{a})$ for the minimal log discrepancy of the pair $(X,\mathfrak{a})$ at $P$. For a subset $I$ of the positive real numbers, we mean by $\mathfrak{a}\in I$ that the exponents $r_j$ in $\mathfrak{a}$ belong to $I$. ACC stands for the ascending chain condition while DCC stands for the descending chain condition. This paper discusses the ACC conjecture for minimal log discrepancies on smooth threefolds, which was conjectured by Shokurov \cite{BS10}, \cite{Sh88} for arbitrary lc pairs.
\begin{conjectureAlph}\label{cnj:acc}
Fix a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{\mld_P(X,\mathfrak{a})\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I\}
\end{align*}
satisfies the ACC.
\end{conjectureAlph}
We approach it with the theory of the generic limit of ideals introduced by de Fernex and Musta\c{t}\u{a} \cite{dFM09}. Our earlier work \cite{K14} shows the finiteness of the set of $\mld_P(X,\mathfrak{a})$ in which the germ $P\in X$ of a klt variety and the exponents in $\mathfrak{a}$ are fixed. Instead, if the exponents in $\mathfrak{a}$ move in an infinite set satisfying the DCC, then we require the stability of minimal log discrepancies for generic limits. This stability shows Conjecture \ref{cnj:acc} to be equivalent to the following important conjectures, as was indicated essentially by Musta\c{t}\u{a} and Nakamura \cite{MN16}.
The first is the ACC for $a$-lc thresholds, a generalisation of lc thresholds.
\begin{conjectureAlph}\label{cnj:alc}
Fix a non-negative real number $a$ and a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{t\in\mathbf{R}_{\ge0}\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$, $\mathfrak{b}$ $\mathbf{R}$-ideals},\ \mld_P(X,\mathfrak{a}\mathfrak{b}^t)=a,\ \mathfrak{a}\mathfrak{b}\in I\}
\end{align*}
satisfies the ACC.
\end{conjectureAlph}
The second is a uniform version of the $\mathfrak{m}$-adic semi-continuity, which was proposed originally by Musta\c{t}\u{a}.
\begin{conjectureAlph}\label{cnj:madic}
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $I$ such that if $P\in X$ is the germ of a smooth threefold and if $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ and $\mathfrak{b}=\prod_{j=1}^e\mathfrak{b}_j^{r_j}$ are $\mathbf{R}$-ideals on $X$ satisfying that $r_j\in I$ and $\mathfrak{a}_j+\mathfrak{m}^l=\mathfrak{b}_j+\mathfrak{m}^l$ for any $j$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, then $\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$.
\end{conjectureAlph}
The last is the boundedness of the log discrepancy of some divisor which computes the minimal log discrepancy, proposed by Nakamura.
\begin{conjectureAlph}\label{cnj:nakamura}
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $I$ such that if $P\in X$ is the germ of a smooth threefold and $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\end{conjectureAlph}
The first main result of this paper is to reduce these conjectures to the case when the boundary is the product of a canonical part and the maximal ideal to some power.
\begin{theorem}\label{thm:first}
Conjectures \textup{\ref{cnj:acc}}, \textup{\ref{cnj:alc}}, \textup{\ref{cnj:madic}} and \textup{\ref{cnj:nakamura}} are equivalent to Conjecture \textup{\ref{cnj:product}}.
\end{theorem}
\begin{conjecture}\label{cnj:product}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$ and a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{conjecture}
Our earlier work derives this boundedness when $(X,\mathfrak{a}^q)$ is terminal or $s$ is zero.
\begin{theorem}\label{thm:terminal}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$.
\begin{enumerate}
\item\label{itm:terminal}
Fix a positive rational number $q$ and a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is terminal, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:zero}
Fix a positive rational number $q$. Then there exists a positive integer $l$ depending only on $q$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
The second main result is to prove Conjecture \ref{cnj:product} when the lc threshold of the maximal ideal is either at most one-half or at least one.
\begin{theorem}\label{thm:second}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$ and a non-negative rational number $s$.
\begin{enumerate}
\item\label{itm:half}
There exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is not positive, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:one}
There exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and that $(X,\mathfrak{a}^q\mathfrak{m})$ is lc, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
Once Theorem \ref{thm:second}(\ref{itm:half}) is established, it is relatively simple to obtain Conjecture \ref{cnj:product} when $s$ is close to zero in terms of a scale determined by $q$.
\begin{corollary}\label{crl:main}
Conjecture \textup{\ref{cnj:product}} holds when $s$ is at most $1/n$ for some integer $n$ greater than one such that $nq$ is integral.
\end{corollary}
We shall explain the outline of our research. Fix the germ $P\in X$ of a smooth threefold and a positive rational number $q$. For a sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of ideals on $X$, its generic limit $\mathsf{a}$ is defined on the spectrum $\hat P\in \hat X$ of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$, where $K$ is an extension of the ground field $k$. The stability of minimal log discrepancies means the equality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a}^q)=\mld_P(X,\mathfrak{a}_i^q)
\end{align*}
for infinitely many $i$, to which any of Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} is equivalent (Theorem \ref{thm:equiv}). The strategy employed in this paper is to pursue Conjecture \ref{cnj:nakamura}.
Our previous work \cite{K15} derived the above stability except for the case when $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, which implies that $\mld_{\hat P}(\hat X,\mathsf{a}^q)$ is at most one. By this result, in order to prove Conjecture \ref{cnj:nakamura}, one has only to consider those ideals $\mathfrak{a}$ which have $\mld_P(X,\mathfrak{a}^q)$ less than one. We begin with the ACC for $1$-lc thresholds \cite{St11}. Using it together with the classification of divisorial contractions \cite{K01}, \cite{Km96}, we construct a birational morphism $Y\to X$ with bounded log discrepancies by which $(X,\mathfrak{a})$ can be replaced with a pair $(Y,(\mathfrak{a}')^q\mathfrak{b}^q)$ satisfying that $(Y,(\mathfrak{a}')^q)$ is canonical and that $\mathfrak{b}$ has bounded colength (Theorem \ref{thm:canonical}).
We study the generic limit $\mathsf{a}$ of a sequence of ideals $\mathfrak{a}_i$ on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical. We may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre $\hat C$ of dimension one. Then the $\mld_{\hat P}(\hat X,\mathsf{a}^q)$ equals one by the canonicity of $(X,\mathfrak{a}_i^q)$, and so does the $\mld_P(X,\mathfrak{a}_i^q)$. By our result \cite{K17} in dimension two, there exists a divisor $\hat E$ over $\hat X$ computing $\mld_{\eta_{\hat C}}(\hat X,\mathsf{a}^q)=0$ which is obtained at the generic point $\eta_{\hat C}$ of $\hat C$ by a weighted blow-up. Under our extra condition that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$, we find $\hat E$ for which the weighted blow-up at $\eta_{\hat C}$ is extended to the closed point $\hat P$ (Theorem \ref{thm:wbu}).
We associate the minimal log discrepancy on $\hat X$ with that on $\hat E$ by precise inversion of adjunction (Section \ref{sct:reduction}). The generic limit $\mathsf{b}$ of a sequence of ideals of bounded colength satisfies that $\mathsf{b}\mathscr{O}_{\hat C}=\hat\mathfrak{m}^b\mathscr{O}_{\hat C}$ for some integer $b$, where $\hat\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_{\hat X}$. Then Conjecture \ref{cnj:nakamura} is reduced to the case when $\mathfrak{b}$ is the maximal ideal $\mathfrak{m}$ to the power of $b$, which completes Theorem \ref{thm:first}.
Suppose that the lc threshold of $\mathfrak{m}$ with respect to $(X,\mathfrak{a}^q)$ is at most one-half. Under our assumptions on the generic limit $\mathsf{a}$ involved, we derive a special conclusion that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)=1-2s$, which includes the boundedness stated in Theorem \ref{thm:second}(\ref{itm:half}). A similar argument is applied to the case of lc threshold at least one, Theorem \ref{thm:second}(\ref{itm:one}).
Conjecture \ref{cnj:product} remains open when $\mld_P(X,\mathfrak{a}^q)$ equals one and $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. In this case, every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^q)$ satisfies that $\ord_E\mathfrak{m}$ equals one. We supply a classification of the centre of $E$ on a certain weighted blow-up of $X$ (Theorem \ref{thm:crepant}).
\section{Preliminaries}
We shall fix the notation and review the basics of singularities in birational geometry. Refer to \cite{Ko13} for details.
We work over an algebraically closed field $k$ of characteristic zero. We omit to write the bases of tensor products over $k$ and of products over $\Spec k$ when it is clear. A \textit{variety} is an integral separated scheme of finite type over $\Spec k$. The \textit{dimension} of a scheme means the Krull dimension. A variety of dimension one (resp.\ two) is called a \textit{curve} (resp.\ a \textit{surface}).
The \textit{germ} is considered at a closed point algebraically. When we work on the spectrum of a noetherian ring, we identify an ideal in the ring with its coherent ideal sheaf. For an irreducible closed subset $Z$ of a scheme, we write $\eta_Z$ for the generic point of $Z$.
The \textit{round-down} $\rd{r}$ of a real number $r$ is the greatest integer at most $r$. The \textit{natural numbers} start from zero.
\textit{Orders}.
Let $X$ be a noetherian scheme and $Z$ be an irreducible closed subset of $X$. The \textit{order} of a coherent ideal sheaf $\mathfrak{a}$ on $X$ along $Z$ is the maximal $\nu\in\mathbf{N}\cup\{+\infty\}$ satisfying that $\mathfrak{a}\mathscr{O}_{X,\eta_Z}\subset\mathscr{I}^\nu\mathscr{O}_{X,\eta_Z}$ for the ideal sheaf $\mathscr{I}$ of $Z$, and it is denoted by $\ord_Z\mathfrak{a}$. If $Y\to X$ is a birational morphism from a noetherian normal scheme, then we set $\ord_E\mathfrak{a}=\ord_E\mathfrak{a}\mathscr{O}_Y$ for a prime divisor $E$ on $Y$. The $\ord_Zf$ for a function $f$ in $\mathscr{O}_X$ stands for $\ord_Z(f\mathscr{O}_X)$.
Suppose that $X$ is normal. For an effective $\mathbf{Q}$-Cartier divisor $D$ on $X$, we set $\ord_ZD=r^{-1}\ord_Z\mathscr{O}_X(-rD)$ for a positive integer $r$ such that $rD$ is Cartier, which is independent of the choice of $r$. The notion of $\ord_ZD$ is extended to $\mathbf{R}$-Cartier $\mathbf{R}$-divisors by linearity.
\textit{$\mathbf{R}$-ideals}.
An $\mathbf{R}$-\textit{ideal} on a noetherian scheme $X$ is a formal product $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ of finitely many coherent ideal sheaves $\mathfrak{a}_j$ on $X$ with positive real exponents $r_j$. For a positive real number $t$, the $\mathfrak{a}$ to the \textit{power} of $t$ is $\mathfrak{a}^t=\prod_j\mathfrak{a}_j^{tr_j}$. The \textit{cosupport} of $\mathfrak{a}$ is the union of the supports of $\mathscr{O}_X/\mathfrak{a}_j$ for all $j$. The \textit{order} of $\mathfrak{a}$ along an irreducible closed subset $Z$ of $X$ is $\ord_Z\mathfrak{a}=\sum_jr_j\ord_Z\mathfrak{a}_j$. The $\mathfrak{a}$ is said to be \textit{invertible} if all $\mathfrak{a}_j$ are invertible. The \textit{pull-back} of $\mathfrak{a}$ by a morphism $Y\to X$ is $\mathfrak{a}\mathscr{O}_Y=\prod_j(\mathfrak{a}_j\mathscr{O}_Y)^{r_j}$. For a subset $I$ of the positive real numbers, we mean by $\mathfrak{a}\in I$ that all exponents $r_j$ belong to $I$.
If $\mathfrak{a}$ is invertible, then the $\mathbf{R}$-divisor $A=\sum_jr_jA_j$ for which $\mathfrak{a}_j=\mathscr{O}_X(-A_j)$ is called the $\mathbf{R}$-divisor \textit{defined by} $\mathfrak{a}$. When we work on the germ $P\in X$, the $\mathbf{R}$-divisor \textit{defined by a general member in} $\mathfrak{a}$ means an $\mathbf{R}$-divisor $\sum_jr_j(f_j)$ on $X$ with a general member $f_j$ in $\mathfrak{a}_j$. The $\mathfrak{a}$ is said to be \textit{$\mathfrak{m}$-primary} if all $\mathfrak{a}_j$ are $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$.
\begin{convention}
It is sometimes convenient to allow an exponent in an $\mathbf{R}$-ideal to be zero. We define a coherent ideal sheaf to the power of zero as the structure sheaf.
\end{convention}
\textit{The minimal log discrepancy}.
A \textit{subtriple} $(X,\Delta,\mathfrak{a})$ consists of a normal variety $X$, an $\mathbf{R}$-divisor $\Delta$ on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier, and an $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$. The $(X,\Delta,\mathfrak{a})$ is called a \textit{triple} if $\Delta$ is effective. We omit to write $\mathfrak{a}$ or $\Delta$ and call $(X,\Delta)$ or $(X,\mathfrak{a})$ a (\textit{sub})\textit{pair} when $\mathfrak{a}=\mathscr{O}_X$ or $\Delta=0$. The $\Delta$ or $\mathfrak{a}$ is called the \textit{boundary} when $(X,\Delta)$ or $(X,\mathfrak{a})$ is a pair.
A prime divisor $E$ on a normal variety $Y$ equipped with a birational morphism $\pi\colon Y\to X$ is called a divisor \textit{over} $X$, and the closure of the image $\pi(E)$ is called the \textit{centre} of $E$ on $X$ and denoted by $c_X(E)$. We write $\mathcal{D}_X$ for the set of all divisors over $X$. Two elements in $\mathcal{D}_X$ are usually identified when they define the same valuation on the function field of $X$. The \textit{log discrepancy} of $E$ with respect to $(X,\Delta,\mathfrak{a})$ is
\begin{align*}
a_E(X,\Delta,\mathfrak{a})=1+\ord_EK_{Y/(X,\Delta)}-\ord_E\mathfrak{a},
\end{align*}
where $K_{Y/(X,\Delta)}=K_Y-\pi^*(K_X+\Delta)$.
Let $Z$ be a closed subvariety of $X$. The \textit{minimal log discrepancy} of $(X,\Delta,\mathfrak{a})$ at the generic point $\eta_Z$ is
\begin{align*}
\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=\inf\{a_E(X,\Delta,\mathfrak{a})\mid E\in\mathcal{D}_X,\ c_X(E)=Z\}.
\end{align*}
It is either a non-negative real number or minus infinity. We say that $E\in\mathcal{D}_X$ \textit{computes} $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})$ if $c_X(E)=Z$ and $a_E(X,\Delta,\mathfrak{a})=\mld_{\eta_Z}(X,\Delta,\mathfrak{a})$ (or is negative when $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=-\infty$). It is often enough to study the case when $Z$ is a closed point by the relation $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=\mld_P(X,\Delta,\mathfrak{a})-\dim Z$ for a general closed point $P$ in $Z$. It is sometimes convenient to use the \textit{minimal log discrepancy} of $(X,\Delta,\mathfrak{a})$ in a closed subset $W$ of $X$ which is defined by
\begin{align*}
\mld_W(X,\Delta,\mathfrak{a})=\inf\{a_E(X,\Delta,\mathfrak{a})\mid E\in\mathcal{D}_X,\ c_X(E)\subset W\}.
\end{align*}
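As a standard illustration of these definitions (not part of the original development), consider the germ $o\in\mathbf{A}^d$ of the affine space and the exceptional divisor $E\simeq\mathbf{P}^{d-1}$ of the ordinary blow-up $\pi\colon Y\to\mathbf{A}^d$ at $o$. Since $K_{Y/\mathbf{A}^d}=(d-1)E$, one has
\begin{align*}
a_E(\mathbf{A}^d)=1+\ord_EK_{Y/\mathbf{A}^d}=d,
\end{align*}
and in fact $E$ computes $\mld_o(\mathbf{A}^d)=d$.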
\textit{Singularities}.
The subtriple $(X,\Delta,\mathfrak{a})$ is said to be \textit{log canonical} (\textit{lc}) (resp.\ \textit{Kawamata log terminal} (\textit{klt})) if $a_E(X,\Delta,\mathfrak{a})\ge0$ (resp.\ $>0$) for all $E\in\mathcal{D}_X$. It is said to be \textit{purely log terminal} (\textit{plt}) (resp.\ \textit{canonical}, \textit{terminal}) if $a_E(X,\Delta,\mathfrak{a})>0$ (resp.\ $\ge1$, $>1$) for all $E\in\mathcal{D}_X$ exceptional over $X$. For a closed point $P$ in $X$, $(X,\Delta,\mathfrak{a})$ is lc about $P$ iff $\mld_P(X,\Delta,\mathfrak{a})$ is not minus infinity. When $(X,\Delta,\mathfrak{a})$ is lc, the \textit{lc threshold} with respect to $(X,\Delta,\mathfrak{a})$ of a non-trivial $\mathbf{R}$-ideal $\mathfrak{b}$ on $X$ is the maximal real number $t$ such that $(X,\Delta,\mathfrak{a}\mathfrak{b}^t)$ is lc.
Let $Y$ be a normal variety birational to $X$. A centre $c_Y(E)$ on $Y$ of $E\in\mathcal{D}_Y$ such that $a_E(X,\Delta,\mathfrak{a})\le0$ is called a \textit{non-klt centre} on $Y$ of $(X,\Delta,\mathfrak{a})$. The union of all non-klt centres on $Y$ is called the \textit{non-klt locus} on $Y$ of $(X,\Delta,\mathfrak{a})$. When we just say a non-klt centre or the non-klt locus, we mean that it is on $X$.
When $(X,\Delta,\mathfrak{a})$ is lc, a non-klt centre of $(X,\Delta,\mathfrak{a})$ is often called an \textit{lc centre}. When we work on the germ of a variety, an lc centre contained in every lc centre is called the \textit{smallest lc centre}. The smallest lc centre exists and it is normal \cite[Theorem 9.1]{F11}.
The \textit{index} of a normal $\mathbf{Q}$-Gorenstein singularity $P\in X$ is the least positive integer $r$ such that $rK_X$ is Cartier at $P$.
\textit{Birational transformations}.
A reduced divisor $D$ on a smooth variety $X$ is said to be \textit{simple normal crossing} (\textit{snc}) if $D$ is defined at every closed point $P$ in $X$ by the product of a part of a regular system of parameters in $\mathscr{O}_{X,P}$. A \textit{stratum} of $D=\sum_{i\in I}D_i$ is an irreducible component of $\bigcap_{i\in I'}D_i$ for a subset $I'$ of $I$. For a smooth morphism $X\to S$, the $D$ is said to be snc \textit{relative to} $S$ if every stratum of $D$ is smooth over $S$.
A \textit{log resolution} of a subtriple $(X,\Delta,\mathfrak{a})$ is a projective birational morphism from a smooth variety $Y$ to $X$ such that
\begin{itemize}
\item
the exceptional locus is a divisor and $\mathfrak{a}\mathscr{O}_Y$ is invertible,
\item
the union of the exceptional locus, the support of the strict transform of $\Delta$, and the cosupport of $\mathfrak{a}\mathscr{O}_Y$ is snc, and
\item
it is isomorphic on the maximal open locus $U$ in $X$ such that $U$ is smooth, $\mathfrak{a}\mathscr{O}_U$ is invertible, and the union of the support of $\Delta|_U$ and the cosupport of $\mathfrak{a}\mathscr{O}_U$ is snc.
\end{itemize}
Let $(X,\Delta,\mathfrak{a})$ be a subtriple, where $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$, and $Y$ be a normal variety birational to $X$. A subtriple $(Y,\Gamma,\mathfrak{b})$ is said to be \textit{crepant} to $(X,\Delta,\mathfrak{a})$ if $a_E(X,\Delta,\mathfrak{a})=a_E(Y,\Gamma,\mathfrak{b})$ for any divisor $E$ over $X$ and $Y$. Suppose that $Y$ is smooth and has a birational morphism to $X$ whose exceptional locus is a divisor $\sum_iE_i$. The \textit{weak transform} on $Y$ of $\mathfrak{a}$ is the $\mathbf{R}$-ideal $\mathfrak{a}_Y=\prod_j(\mathfrak{a}_{jY})^{r_j}$ defined by
\begin{align*}
\mathfrak{a}_{jY}=\mathfrak{a}_j\mathscr{O}_Y(\textstyle\sum_i(\ord_{E_i}\mathfrak{a}_j)E_i).
\end{align*}
Remark that this notion is different from that of the strict transform, the $j$-th ideal of which is $\sum_{f\in\mathfrak{a}_j}f\mathscr{O}_Y(\sum_i(\ord_{E_i}f)E_i)$ (see \cite[III Definition 5]{H64}). The definition of the weak transform $\mathfrak{a}_Y$ is extended to the case when $Y$ is normal as long as $\sum_i(\ord_{E_i}\mathfrak{a}_j)E_i$ is Cartier for any $j$. We introduce
\begin{definition}
The \textit{pull-back} of $(X,\Delta,\mathfrak{a})$ by $Y\to X$ is the subtriple $(Y,\Delta_Y,\mathfrak{a}_Y)$ in which $\Delta_Y=-K_{Y/(X,\Delta)}+\sum_{ij}(r_j\ord_{E_i}\mathfrak{a}_j)E_i$.
\end{definition}
The pull-back $(Y,\Delta_Y,\mathfrak{a}_Y)$ is crepant to $(X,\Delta,\mathfrak{a})$.
\textit{Weighted blow-ups}.
Let $P\in X$ be the germ of a smooth variety. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_{X,P}$ and $w_1,\ldots,w_c$ be positive integers. For $w\in\mathbf{N}$, let $\mathscr{I}_w$ be the ideal in $\mathscr{O}_X$ generated by all monomials $x_1^{s_1}\cdots x_c^{s_c}$ such that $\sum_{i=1}^cs_iw_i\ge w$. The \textit{weighted blow-up} of $X$ with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$ is $\Proj_X(\bigoplus_{w\in\mathbf{N}}\mathscr{I}_w)$.
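For instance (a standard illustration), for $d=c=2$ and $\wt(x_1,x_2)=(1,2)$ on the germ $o\in\mathbf{A}^2$, one has
\begin{align*}
\mathscr{I}_1=(x_1,x_2),\quad\mathscr{I}_2=(x_1^2,x_2),\quad\mathscr{I}_3=(x_1^3,x_1x_2,x_2^2),
\end{align*}
and the exceptional divisor $E$ of the weighted blow-up satisfies $\ord_Ex_1=1$, $\ord_Ex_2=2$ and $a_E(\mathbf{A}^2)=w_1+w_2=3$.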
\begin{remark}\label{rmk:wbu}
If $x'_1,\ldots,x'_c$ is a part of another regular system of parameters such that $x'_i\in\mathscr{I}_{w_i}\setminus\mathscr{I}_{w_i+1}$ for any $i$, then the weighted blow-up of $X$ with $\wt(x'_1,\ldots,x'_c)=(w_1,\ldots,w_c)$ is the same as that obtained by $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$.
\end{remark}
Its explicit description is reduced to the case of the affine space by an \'etale morphism. Let $o\in\mathbf{A}^d$ be the germ at the origin of the affine space with coordinates $x_1,\ldots,x_d$ and $Y$ be the weighted blow-up of $\mathbf{A}^d$ with $\wt(x_1,\ldots,x_d)=(w_1,\ldots,w_d)$. One may assume that $w_1,\ldots,w_d$ have no common factor. Then $Y$ is covered by the affine charts $U_i=\mathbf{A}^d/\mathbf{Z}_{w_i}(w_1,\ldots,w_{i-1},-1,w_{i+1},\ldots,w_d)$ for $1\le i\le d$, and the exceptional divisor is isomorphic to the weighted projective space $\mathbf{P}(w_1,\ldots,w_d)$ (see \cite[6.38]{KSC04} for details).
Here the notation $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ stands for the quotient of $\mathbf{A}^d$ by the cyclic group $\mathbf{Z}_r$ of order $r$ whose generator sends the $i$-th coordinate $x_i$ of $\mathbf{A}^d$ to $\zeta^{a_i}x_i$, where $\zeta$ is a primitive $r$-th root of unity. The images of $x_1,\ldots,x_d$ on this quotient are called \textit{orbifold coordinates}. An isolated \textit{cyclic quotient singularity} is a germ such that the spectrum of the completion of its local ring coincides with the regular base change of some $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$; in this case, it is said to be of \textit{type} $\frac{1}{r}(a_1,\ldots,a_d)$.
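For instance, the simplest non-trivial case is the ordinary double point; the following computation is standard and serves only as an illustration.
\begin{example}
The quotient $\mathbf{A}^2/\mathbf{Z}_2(1,1)$ has invariant ring $k[x_1^2,x_1x_2,x_2^2]$, so it is isomorphic to the quadric cone $(uv=w^2)\subset\mathbf{A}^3$. Its germ at the origin is an isolated cyclic quotient singularity of type $\frac{1}{2}(1,1)$.
\end{example}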
In terms of toric geometry following the notation in \cite{I14}, by setting $N=\mathbf{Z}^d+\mathbf{Z} v$ where $v=\frac{1}{r}(a_1,\ldots,a_d)$, the quotient $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ is the toric variety $T_N(\Delta)$ which corresponds to the cone $\Delta$ spanned by the standard basis $e_1,\ldots,e_d$ of $\mathbf{Z}^d$. For $e=\frac{1}{r}(w_1,\ldots,w_d)\in N\cap\Delta$, the \textit{weighted blow-up} of $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ with respect to $\wt(x_1,\ldots,x_d)=\frac{1}{r}(w_1,\ldots,w_d)$ is defined by adding the ray generated by $e$.
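For orientation, we record the standard toric computation of the log discrepancy of the exceptional divisor of this weighted blow-up; it is included only as an illustration, under the assumption that $e$ is primitive in $N$. The divisor $E$ corresponding to the ray generated by $e$ satisfies
\begin{align*}
\ord_Ex_i=\frac{w_i}{r},\qquad a_E(\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d))=\frac{1}{r}\sum_{i=1}^dw_i.
\end{align*}
In particular, when $r=1$, the weighted blow-up of $\mathbf{A}^d$ with $\wt(x_1,\ldots,x_d)=(w_1,\ldots,w_d)$ has $a_E(\mathbf{A}^d)=\sum_{i=1}^dw_i$.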
\textit{Adjunction}.
Let $X$ be a normal variety and $S+B$ be an effective $\mathbf{R}$-divisor on $X$ such that $S$ is reduced and has no common components with the support of $B$. Suppose that they form a pair $(X,S+B)$. Then one has the \textit{adjunction}
\begin{align*}
\nu^*((K_X+S+B)|_S)=K_{S^\nu}+B_{S^\nu}
\end{align*}
on the normalisation $\nu\colon S^\nu\to S$ of $S$, in which $B_{S^\nu}$ is an effective $\mathbf{R}$-divisor called the \textit{different} on $S^\nu$ of $B$ (see \cite[Chapter 16]{Ko92} or \cite[Section 3]{Sh93}).
\begin{example}\label{exl:different}
Let $X=\mathbf{A}^2/\mathbf{Z}_r(1,w)$ with orbifold coordinates $x_1,x_2$ such that $w$ is coprime to $r$. Let $S$ be the curve on $X$ defined by $x_1$ and $P$ be the origin of $X$. Then $(K_X+S)|_S=K_S+(1-r^{-1})P$.
\end{example}
The singularity on $X$ is associated with that on $S^\nu$ by
\begin{theorem}[Inversion of adjunction]\label{thm:ia}
Notation as above.
\begin{enumerate}
\item
\textup{(\cite[Theorem 17.6]{Ko92})}\;
$(X,S+B)$ is plt about $S$ iff $(S^\nu,B_{S^\nu})$ is klt. In this case, $S$ is normal.
\item
\textup{(\cite{K07})}\;
$(X,S+B)$ is lc about $S$ iff $(S^\nu,B_{S^\nu})$ is lc.
\end{enumerate}
\end{theorem}
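To sketch how Theorem \ref{thm:ia} is used, we revisit Example \ref{exl:different}; this illustration is standard and is not needed later. In that example, one has $S^\nu\simeq S\simeq\mathbf{A}^1$ and the different $B_{S^\nu}=(1-r^{-1})P$, whose coefficient is less than one. Hence $(S^\nu,B_{S^\nu})$ is klt, and the first part of Theorem \ref{thm:ia} shows that $(X,S)$ is plt about $S$.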
\textit{$R$-varieties}.
The notions explained above make sense over the ring $R$ of formal power series over a field of characteristic zero, which has been discussed by de Fernex, Ein and Musta\c{t}\u{a} \cite{dFEM11}, \cite{dFM09}. We mean by an \textit{$R$-variety} an integral separated scheme of finite type over $\Spec R$. We consider regular $R$-varieties instead of smooth $R$-varieties.
The canonical divisor $K_X$ on a normal $R$-variety $X$ is defined by the \textit{sheaf of special differentials} in \cite{dFEM11}. Let $Y\to X$ be a birational morphism between regular $R$-varieties. The relative canonical divisor $K_{Y/X}$ is the effective divisor defined by the zeroth Fitting ideal of $\Omega_{Y/X}$ \cite[Remark A.12]{dFEM11}. In particular, $K_{Y/X}$ is independent of the structure of $X$ as an $R$-variety.
\begin{remark}\label{rmk:regular}
Let $P\in X$ be the germ of a normal $\mathbf{Q}$-Gorenstein $R$-variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Let $X'$ be either
\begin{itemize}
\item
the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$, or
\item
$X\times_{\Spec R}\Spec R'$, where $R'$ is the completion of $R\otimes_KK'$ for a field extension $K'$ of $K$,
\end{itemize}
which has a regular morphism $\pi\colon X'\to X$. Then $K_{X'}=\pi^*K_X$, by which one has that $a_{E'}(X',\mathfrak{a}\mathscr{O}_{X'})=a_E(X,\mathfrak{a})$ and $\mld_{P'}(X',\mathfrak{a}\mathscr{O}_{X'})=\mld_P(X,\mathfrak{a})$ for any components $E'$ of $E\times_XX'$ and $P'$ of $P\times_XX'$.
\end{remark}
\begin{lemma}\label{lem:regular}
Let $P\in X$ be the germ of an $R$-variety. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$ and $\hat P$ be its closed point.
\begin{enumerate}
\item\label{itm:idealbij}
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$ and $\hat\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_{\hat X}$. Then the pull-back defines a bijective map from the set of $\mathfrak{m}$-primary $\mathbf{R}$-ideals on $X$ to the set of $\hat\mathfrak{m}$-primary $\mathbf{R}$-ideals on $\hat X$.
\item\label{itm:divisorbij}
Suppose that $X$ is normal. Then the base change defines a bijective map from the set of divisors over $X$ with centre $P$ to the set of divisors over $\hat X$ with centre $\hat P$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (\ref{itm:idealbij}) follows from the isomorphisms $\mathscr{O}_X/\mathfrak{m}^l\simeq\mathscr{O}_{\hat X}/\hat\mathfrak{m}^l$, while (\ref{itm:divisorbij}) follows from the property that blow-ups commute with flat base change.
\end{proof}
By Lemma \ref{lem:regular}, in order to study the minimal log discrepancy at the closed point of the germ $P\in X$, one may often replace $P\in X$ with a germ whose local ring has completion isomorphic to that of $\mathscr{O}_{X,P}$.
\section{The generic limit of ideals}\label{sct:limit}
We recall the generic limit of ideals on a fixed germ. It was introduced by de Fernex and Musta\c{t}\u{a} \cite{dFM09} and simplified by Koll\'ar \cite{Ko08}. We follow the style of the definition in \cite{K15}.
Let $P\in X$ be the germ of a scheme of finite type over $k$ and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $\mathcal{S}=\{(\mathfrak{a}_{i1},\ldots,\mathfrak{a}_{ie})\}_{i\in\mathbf{N}}$ be an infinite sequence of $e$-tuples of ideals in $\mathscr{O}_X$. For every positive integer $l$, the ideal $(\mathfrak{a}_{ij}+\mathfrak{m}^l)/\mathfrak{m}^l$ in $\mathscr{O}_X/\mathfrak{m}^l$ for $i\in\mathbf{N}$ and $1\le j\le e$ corresponds to a closed point $P_{ij}(l)$ in the Hilbert scheme $H_l$ parametrising ideals in $\mathscr{O}_X/\mathfrak{m}^l$. Each $H_l$ is a scheme of finite type over $k$ and there exists a natural rational map $H_{l+1}\to H_l$. Take the Zariski closure $Z_j(l)$ of the subset $\{P_{ij}(l)\}_{i\in\mathbf{N}}$ in $H_l$. By finding a locally closed irreducible subset $Z_l$ of $Z_1(l)\times\cdots\times Z_e(l)$ inductively, one obtains a family of approximations of $\mathcal{S}$ defined below.
\begin{definition}\label{dfn:approx}
A \textit{family} $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ \textit{of approximations} of $\mathcal{S}$ consists of a fixed positive integer $l_0$ and for every $l\ge l_0$,
\begin{itemize}
\item
a variety $Z_l$,
\item
an ideal sheaf $\mathfrak{a}_j(l)$ on $X\times Z_l$ for every $1\le j\le e$ which is flat over $Z_l$ and contains $\mathfrak{m}^l\mathscr{O}_{X\times Z_l}$,
\item
an infinite subset $N_l$ of $\mathbf{N}$ and a map $s_l\colon N_l\to Z_l(k)$, where $Z_l(k)$ denotes the set of the $k$-points in $Z_l$, and
\item
a dominant morphism $t_l\colon Z_{l+1}\to Z_l$,
\end{itemize}
such that
\begin{itemize}
\item
$\mathfrak{a}_j(l)\mathscr{O}_{X\times Z_{l+1}}=\mathfrak{a}_j(l+1)+\mathfrak{m}^l\mathscr{O}_{X\times Z_{l+1}}$ by $\id_X\times t_l$,
\item
$\mathfrak{a}_j(l)_i=\mathfrak{a}_{ij}+\mathfrak{m}^l$ for $i\in N_l$, where $\mathfrak{a}_j(l)_i=\mathfrak{a}_j(l)\otimes_{\mathscr{O}_{Z_l}}k$ is the ideal in $\mathscr{O}_X$ given by the closed point $s_l(i)\in Z_l$,
\item
the image of $N_l$ by $s_l$ is dense in $Z_l$, and
\item
$N_{l+1}$ is contained in $N_l$ and $t_l\circ s_{l+1}=s_l|_{N_{l+1}}$.
\end{itemize}
\end{definition}
For the above $\mathcal{F}$, let $K=\varinjlim_lK(Z_l)$ be the union of the function fields $K(Z_l)$ of $Z_l$ by the inclusions $t_l^*\colon K(Z_l)\to K(Z_{l+1})$. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$. Let $\hat P$ be the closed point of $\hat X$ and $\hat\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_{\hat X}$.
\begin{definition}
The \textit{generic limit} of $\mathcal{S}$ with respect to $\mathcal{F}$ is the $e$-tuple $(\mathsf{a}_1,\ldots,\mathsf{a}_e)$ of ideals in $\mathscr{O}_{\hat X}$ defined by
\begin{align*}
\mathsf{a}_j=\varprojlim_l\mathfrak{a}_j(l)_K,
\end{align*}
where $\mathfrak{a}_j(l)_K=\mathfrak{a}_j(l)\otimes_{\mathscr{O}_{Z_l}}K$ is the ideal in $\mathscr{O}_X\otimes_kK$ given by the natural $K$-point $\Spec K\to Z_l$.
\end{definition}
\begin{remark}
Let $R$ be the completion of the local ring $\mathscr{O}_{X,P}$. In the literature, the generic limit is defined for a sequence of $e$-tuples of ideals in $R$. When $\mathfrak{a}_{ij}$ are ideals in $R$, the generic limit is defined in the same way just by replacing the condition $\mathfrak{a}_j(l)_i=\mathfrak{a}_{ij}+\mathfrak{m}^l$ in Definition \ref{dfn:approx} with $\mathfrak{a}_j(l)_i=(\mathfrak{a}_{ij}+\mathfrak{m}^lR)\cap\mathscr{O}_X$.
\end{remark}
By the very definition, one has
\begin{lemma}
Let $(\mathsf{a},\mathsf{b})$ be a generic limit of a sequence $\{(\mathfrak{a}_i,\mathfrak{b}_i)\}_{i\in\mathbf{N}}$ of pairs of ideals in $\mathscr{O}_X$.
\begin{enumerate}
\item
If $\mathfrak{a}_i\subset\mathfrak{b}_i$ for any $i$, then $\mathsf{a}\subset\mathsf{b}$.
\item
The ideals $\mathsf{a}+\mathsf{b}$ and $\mathsf{a}\mathsf{b}$ are the generic limits of $\{\mathfrak{a}_i+\mathfrak{b}_i\}_{i\in\mathbf{N}}$ and $\{\mathfrak{a}_i\mathfrak{b}_i\}_{i\in\mathbf{N}}$ respectively.
\end{enumerate}
\end{lemma}
The generic limit depends on the choice of $\mathcal{F}$ but remains the same after the replacement of $\mathcal{F}$ with a subfamily.
\begin{definition}
A family $\mathcal{F}'=(Z'_l,(\mathfrak{a}'_j(l))_j,N'_l,s'_l,t'_l)_{l\ge l'_0}$ of approximations of $\mathcal{S}$ is called a \textit{subfamily} of $\mathcal{F}$ if $l'_0$ is at least $l_0$ and if there exists an open immersion $i_l\colon Z'_l\to Z_l$ for every $l\ge l'_0$ such that
\begin{itemize}
\item
$t_l\circ i_{l+1}=i_l\circ t'_l$,
\item
$\mathfrak{a}_j(l)\mathscr{O}_{X\times Z'_l}=\mathfrak{a}'_j(l)$ by $\id_X\times i_l$, and
\item
$N'_l$ is a subset of $N_l$ and $i_l\circ s'_l=s_l|_{N'_l}$.
\end{itemize}
\end{definition}
\begin{convention}\label{cnv:retain}
Later we shall often replace $\mathcal{F}$ with a subfamily, but we retain the same notation $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ to avoid intricacy.
\end{convention}
The theory of the generic limit of ideals was developed for the study of the singularities on the germ $P\in X$. When $X$ is klt, the singularities on $\hat X$ are associated with those on $X$ (see \cite{dFEM11}). The existence of log resolutions supplies
\begin{lemma}\label{lem:resolution}
Notation as above and assume that $X$ is klt. Then $\hat X$ is klt, and after replacing $\mathcal{F}$ with a subfamily but using the same notation,
\begin{align*}
\mld_{\hat P}(\hat X,{\textstyle\prod}_{j=1}^e(\mathsf{a}_j+\hat\mathfrak{m}^l)^{r_j})=\mld_P(X,{\textstyle\prod}_{j=1}^e\mathfrak{a}_j(l)_i^{r_j})
\end{align*}
for any positive real numbers $r_1,\ldots,r_e$ and for any $i\in N_l$ and $l\ge l_0$.
\end{lemma}
\begin{remark}\label{rmk:descend}
\begin{enumerate}
{\first\item\label{itm:descendE}}
Let $\hat E$ be a divisor over $\hat X$ with centre $\hat P$. Then replacing $\mathcal{F}$ with a subfamily (but using the same notation as in Convention \ref{cnv:retain}), one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$, that is, $E_{l'}=E_l\times_{Z_l}Z_{l'}$ when $l\le l'$, and $\hat E=E_l\times_{X\times Z_l}\hat X$. Let $E_i$ be any connected component of the fibre of $E_l$ at $s_l(i)\in Z_l$, which is independent of $l$ as long as $i\in N_l$. Replacing $\mathcal{F}$ with a subfamily again, for any $i\in N_l$ and $1\le j\le e$, $E_i$ is a divisor over $X$ and satisfies that
\begin{align*}
\ord_{\hat E}\mathsf{a}_j=\ord_{\hat E}(\mathsf{a}_j+\hat\mathfrak{m}^l)&=\ord_{E_i}(\mathfrak{a}_{ij}+\mathfrak{m}^l)=\ord_{E_i}\mathfrak{a}_{ij}<l,\\
a_{\hat E}(\hat X,\mathsf{a})&=a_{E_i}(X,\mathfrak{a}_i).
\end{align*}
\item
Let $\hat\pi\colon\hat Y\to\hat X$ be a projective birational morphism isomorphic outside $\hat P$. Then $\hat\pi$ is descendible as stated in \cite[Proposition A.7]{K15}, that is, after replacing $\mathcal{F}$ with a subfamily, there exist projective morphisms $\pi_l\colon Y_l\to X\times Z_l$ such that $\pi_{l'}=\pi_l\times_{Z_l}Z_{l'}$ when $l\le l'$ and such that $\hat\pi=\pi_l\times_{X\times Z_l}\hat X$.
\end{enumerate}
\end{remark}
\begin{remark}
In \cite{K15}, $E_l$ is treated as if it had connected fibres, which should be corrected accordingly.
\end{remark}
Now we fix positive real numbers $r_1,\ldots,r_e$ and consider the $\mathbf{R}$-ideals $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$ and $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$. This $\mathsf{a}$ is called the \textit{generic limit} of the sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of $\mathbf{R}$-ideals on $X$ with respect to $\mathcal{F}$. The most important achievement at present is the following theorem due to de Fernex, Ein and Musta\c{t}\u{a}. Indeed, as an application, they first proved the ACC for lc thresholds restricted to smooth varieties.
\begin{theorem}[\cite{dFEM10}, \cite{dFEM11}]\label{thm:lct}
Notation as above and assume that $X$ is klt. If $(\hat X,\mathsf{a})$ is lc, then so is $(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily which depends on $r_1,\ldots,r_e$.
\end{theorem}
Theorem \ref{thm:lct} is a corollary to the effective $\mathfrak{m}$-adic semi-continuity of lc thresholds, which was globalised in \cite[Theorem 4.11]{K15}. We prove its relative version.
\begin{theorem}\label{thm:relative}
Let $X$ be a klt variety and $X\to T$ be a morphism to a variety. Suppose that every closed fibre of $X\to T$ is klt. Let $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$ and $Z$ be an irreducible closed subset of $X$ which dominates $T$. Suppose that $\mld_{\eta_Z}(X,\mathfrak{a})=0$ and it is computed by a divisor $E$ over $X$. Then after replacing $X$ and $T$ with their dense open subsets, the following hold for any $t\in T$.
\begin{itemize}
\item
The fibre of $E$ at $t$ is non-empty, and its arbitrary connected component $E_t$ is a divisor over a component $X_t$ of the fibre of $X$ at $t$.
\item
The centre $Z_t$ on $X_t$ of $E_t$ is smooth.
\item
If an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X_t}+\mathfrak{p}_j=\mathfrak{b}_j+\mathfrak{p}_j$ for any $j$, where $\mathfrak{p}_j=\{f\in\mathscr{O}_{X_t}\mid\ord_{E_t}f>\ord_E\mathfrak{a}_j\}$, then $(X_t,\mathfrak{b})$ is lc about $Z_t$ and $\mld_{\eta_{Z_t}}(X_t,\mathfrak{b})=0$.
\end{itemize}
\end{theorem}
\begin{proof}
Take a log resolution $\pi\colon Y\to X$ of $(X,\mathfrak{a}\mathscr{I}_Z)$, where $\mathscr{I}_Z$ is the ideal sheaf of $Z$, such that $E$ is realised as a divisor on $Y$. We may shrink $T$ so that $T$ and $Y\to T$ are smooth and so that the union $F$ of the exceptional locus of $\pi$ and the cosupport of $\mathfrak{a}\mathscr{I}_Z\mathscr{O}_Y$ is an snc divisor relative to $T$. Replace $X$ with an open subset $X'$ containing $\eta_Z$ such that $Z'=Z|_{X'}$ is smooth over $T$ and such that if the restriction $S'=S|_{\pi^{-1}(X')}$ of a stratum $S$ of $F$ satisfies that $S'\neq\emptyset$ and $\pi(S')\subset Z'$, then $S'\to Z'$ is smooth.
Set $n=\dim Z-\dim T$. Then for any $t\in T$ and $z\in Z_t$,
\begin{align*}
\mld_z(X_t,\mathfrak{m}_z^n\cdot\mathfrak{a}\mathscr{O}_{X_t})=0
\end{align*}
for the maximal ideal sheaf $\mathfrak{m}_z$ on $X_t$ defining $z$, and it is computed by the divisor $G_z$ obtained by the blow-up of $Y_t=Y\times_XX_t$ along a component of $E_t\cap\pi^{-1}(z)$. This is verified from the local description at each closed point $y$ in $\pi^{-1}(z)$. Indeed, let $v_1,\ldots,v_s$ be a part of a regular system of parameters in $\mathscr{O}_{Y,y}$ such that $F$ is defined at $y$ by $\prod_{l=1}^sv_l$. Since every stratum of $F$ mapped into $Z$ is smooth over $Z$, they are extended to a part $v_1,\ldots,v_s,w_1,\ldots,w_n$ of a regular system of parameters in $\mathscr{O}_{Y,y}$ such that their images form a part of a regular system of parameters in $\mathscr{O}_{Y_t,y}$ and such that
\begin{align*}
\mathfrak{m}_z\mathscr{O}_{Y_t,y}=(w_1,\ldots,w_n,\textstyle\prod_{l=1}^sv_l^{m_l})\mathscr{O}_{Y_t,y},
\end{align*}
where $m_l$ is the order of $\mathscr{I}_Z$ along the divisor defined by $v_l$. (Note that the corresponding expression in the proof of \cite[Theorem 4.11]{K15} is incorrect.)
Since $\ord_{G_z}\mathfrak{a}_j\mathscr{O}_{X_t}=\ord_E\mathfrak{a}_j$ and $\ord_{G_z}f\ge\ord_{E_t}f$ for any $f\in\mathscr{O}_{X_t}$, by \cite[Theorem 1.4]{dFEM10} we conclude that $\mld_z(X_t,\mathfrak{m}_z^n\mathfrak{b})=0$ for the $\mathfrak{b}$ in the statement. Hence $(X_t,\mathfrak{b})$ is lc about $Z_t$, and $\mld_{\eta_{Z_t}}(X_t,\mathfrak{b})=0$ by $a_{E_t}(X_t,\mathfrak{b})=0$.
\end{proof}
\begin{corollary}\label{crl:relative}
Let $X$ be a klt variety and $X\to T$ be a morphism to a variety. Suppose that the fibre $X_t$ at every closed point $t$ in $T$ is klt. Let $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$ and $Z$ be a closed subset of $X$ such that $(X,\mathfrak{a})$ is lc about $Z$. Set $Z_t=Z\times_XX_t$ and let $\mathscr{I}_t$ denote the ideal sheaf of $Z_t$ on $X_t$. Then there exists a positive integer $l$ such that after replacing $T$ with its dense open subset, for any $t\in T$ if an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X_t}+\mathscr{I}_t^l=\mathfrak{b}_j+\mathscr{I}_t^l$ for any $j$, then $(X_t,\mathfrak{b})$ is lc about $Z_t$.
\end{corollary}
\begin{proof}
We shall prove it by noetherian induction on $Z$. Let $Z_0$ be an irreducible component of $Z$, which may be assumed to dominate $T$. Let $\mathscr{I}_{Z_0}$ be the ideal sheaf of $Z_0$ and $r$ be the non-negative real number such that $\mld_{\eta_{Z_0}}(X,\mathfrak{a}\mathscr{I}_{Z_0}^r)$ equals zero. Applying Theorem \ref{thm:relative}, after shrinking $T$ there exist an open subset $X'$ of $X$ containing $\eta_{Z_0}$ and a positive integer $l_0$ such that for any $t\in T$, if an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X'_t}+\mathscr{I}_{Z_0}^{l_0}\mathscr{O}_{X'_t}=\mathfrak{b}_j\mathscr{O}_{X'_t}+\mathscr{I}_{Z_0}^{l_0}\mathscr{O}_{X'_t}$ for any $j$ on $X'_t=X'\times_XX_t$, then $(X'_t,\mathfrak{b}\mathscr{O}_{X'_t})$ is lc about $Z_0\times_XX'_t$. Thus the assertion is reduced to that for the closure of $Z\setminus(Z_0\cap X')$, which follows from the inductive hypothesis.
\end{proof}
\section{Singularities on a fixed variety}
In this section, we fix the germ $P\in X$ of a klt variety and review an approach to the study of $\mld_P(X,\mathfrak{a})$ for $\mathbf{R}$-ideals $\mathfrak{a}$ which uses the generic limit of ideals on $X$. Our earlier work shows the discreteness of log discrepancies $a_E(X,\mathfrak{a})$.
\begin{theorem}[\cite{K14}]\label{thm:discrete}
Let $P\in X$ be the germ of a klt variety. Fix a finite subset $I$ of the positive real numbers. Then the set
\begin{align*}
\{a_E(X,\mathfrak{a})\mid\textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I,\ E\in\mathcal{D}_X,\ \textrm{$(X,\mathfrak{a})$ lc about $\eta_{c_X(E)}$}\}
\end{align*}
is discrete in $\mathbf{R}$.
\end{theorem}
We shall explain the equivalence of several important conjectures on a fixed germ with the help of Theorems \ref{thm:lct} and \ref{thm:discrete}.
\begin{conjecture}\label{cnj:equiv}
Let $P\in X$ be the germ of a klt variety.
\begin{enumerate}[series=equiv]
\item\label{itm:acc}
\textup{(ACC for minimal log discrepancies)}\;
Fix a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{\mld_P(X,\mathfrak{a})\mid\textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I\}
\end{align*}
satisfies the ACC.
\item\label{itm:alc}
\textup{(ACC for $a$-lc thresholds)}\;
Fix a non-negative real number $a$ and a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{t\in\mathbf{R}_{\ge0}\mid\textup{$\mathfrak{a}$, $\mathfrak{b}$ $\mathbf{R}$-ideals},\ \mld_P(X,\mathfrak{a}\mathfrak{b}^t)=a,\ \mathfrak{a}\mathfrak{b}\in I\}
\end{align*}
satisfies the ACC.
\item\label{itm:madic}
\textup{(uniform $\mathfrak{m}$-adic semi-continuity)}\;
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ and $\mathfrak{b}=\prod_{j=1}^e\mathfrak{b}_j^{r_j}$ are $\mathbf{R}$-ideals on $X$ satisfying that $r_j\in I$ and $\mathfrak{a}_j+\mathfrak{m}^l=\mathfrak{b}_j+\mathfrak{m}^l$ for any $j$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, then $\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$.
\item\label{itm:nakamura}
\textup{(boundedness)}\;
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:limit}
\textup{(generic limit)}\;
Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, and set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. Then $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily but using the same notation.
\end{enumerate}
\end{conjecture}
\begin{remark}\label{rmk:limit}
We provide a few remarks on Conjecture \ref{cnj:equiv}(\ref{itm:limit}).
\begin{enumerate}
\item\label{itm:limitineq}
Lemma \ref{lem:resolution} means the equality
\begin{align*}
\mld_{\hat P}(\hat X,{\textstyle\prod}_{j=1}^e(\mathsf{a}_j+\hat\mathfrak{m}^l)^{r_j})=\mld_P(X,{\textstyle\prod}_{j=1}^e(\mathfrak{a}_{ij}+\mathfrak{m}^l)^{r_j})
\end{align*}
for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Take a divisor $\hat E$ over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a})$ and choose an integer $l_1\ge l_0$ such that $\ord_{\hat E}\mathsf{a}_j\le l_1\ord_{\hat E}\hat\mathfrak{m}$ for any $j$. Then for $l\ge l_1$, the left-hand side equals $\mld_{\hat P}(\hat X,\mathsf{a})$ while the right-hand side is at least $\mld_P(X,\mathfrak{a}_i)$. Thus after replacing $l_0$ with $l_1$, one has the inequality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a})\ge\mld_P(X,\mathfrak{a}_i)
\end{align*}
for any $i\in N_{l_0}$. The intrinsic part in Conjecture \ref{cnj:equiv}(\ref{itm:limit}) is the opposite inequality.
\item\label{itm:limitresult}
In particular, Conjecture \ref{cnj:equiv}(\ref{itm:limit}) holds when $\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive by Theorem \ref{thm:lct}. The conjecture also holds when $(\hat X,\mathsf{a})$ is klt \cite[Theorem 5.1]{K15}. Thus, Conjecture \ref{cnj:equiv}(\ref{itm:limit}) is known to hold except possibly in the case when $(\hat X,\mathsf{a})$ is not klt but $\mld_{\hat P}(\hat X,\mathsf{a})$ is positive.
\end{enumerate}
\end{remark}
We prepare basic lemmata.
\begin{lemma}\label{lem:DCC}
Let $I$ and $J$ be subsets of the positive real numbers both of which satisfy the DCC. Then the set $\{rs\mid r\in I,\ s\in J\}$ satisfies the DCC.
\end{lemma}
\begin{proof}
Let $\{r_is_i\}_{i\in\mathbf{N}}$ be an arbitrary non-increasing sequence where $r_i\in I$ and $s_i\in J$. It is enough to show that $r_is_i$ is constant after passing to a subsequence. We claim that there exists a strictly increasing sequence $\{i_j\}_{j\in\mathbf{N}}$ such that $\{r_{i_j}\}_{j\in\mathbf{N}}$ is a non-decreasing sequence. Indeed, let $i_1$ be a number such that $r_{i_1}$ attains the minimum of the set $\{r_i\mid i\in\mathbf{N}\}$, which exists since this set satisfies the DCC. Once $i_1,\ldots,i_j$ have been constructed, take $i_{j+1}$ to be a number such that $r_{i_{j+1}}$ attains the minimum of the set $\{r_i\mid i>i_j\}$.
By replacing $\{r_is_i\}_{i\in\mathbf{N}}$ with $\{r_{i_j}s_{i_j}\}_{j\in\mathbf{N}}$, we may assume that $r_i$ is non-decreasing. Applying the same argument to $\{s_i\}_{i\in\mathbf{N}}$, we may also assume that $s_i$ is non-decreasing. Then the sequence $\{r_is_i\}_{i\in\mathbf{N}}$ becomes both non-increasing and non-decreasing, so $r_is_i$ must be constant.
\end{proof}
\begin{lemma}\label{lem:mld}
Let $P\in X$ be the germ of a normal $\mathbf{Q}$-Gorenstein variety and $\mathfrak{a}_1,\ldots,\mathfrak{a}_e$ be $\mathbf{R}$-ideals on $X$. Let $t_1,\ldots,t_e$ be non-negative real numbers such that $\sum_{i=1}^et_i=1$.
\begin{enumerate}
\item\label{itm:mldconvex}
$\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})\ge\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i)$.
\item\label{itm:mldequal}
If a divisor $E$ over $X$ computes all $\mld_P(X,\mathfrak{a}_i)$, then $\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})=\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i)$ and it is computed by $E$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $F$ be a divisor over $X$ which computes $\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})$. Then,
\begin{align*}
\mld_P(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=a_F(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=\sum_{i=1}^et_i\cdot a_F(X,\mathfrak{a}_i)\ge\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i),
\end{align*}
which is (\ref{itm:mldconvex}). On the other hand, if $E$ computes all $\mld_P(X,\mathfrak{a}_i)$, then
\begin{align*}
\mld_P(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})\le a_E(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=\sum_{i=1}^et_i\cdot a_E(X,\mathfrak{a}_i)=\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i),
\end{align*}
which with (\ref{itm:mldconvex}) shows the assertion (\ref{itm:mldequal}).
\end{proof}
\begin{theorem}\label{thm:equiv}
Let $P\in X$ be the germ of a klt variety. Then the five statements in Conjecture \textup{\ref{cnj:equiv}} are equivalent.
\end{theorem}
\begin{proof}
\textit{Step} 1.
The generic limit of ideals was invented with the implication from (\ref{itm:limit}) to (\ref{itm:acc}) in mind. Musta\c{t}\u{a} informed us of the proof of this implication and we wrote it in \cite[Proposition 4.8]{K15}. Note that the proof in \cite{K15} works even if $X$ has klt singularities. We also note that though the statement in \cite{K15} assumes the assertion in (\ref{itm:limit}) for ideals $\mathfrak{a}_{ij}$ in the completion of the local ring $\mathscr{O}_{X,P}$, its proof uses only the assertion for ideals in $\mathscr{O}_X$, which is exactly (\ref{itm:limit}). In fact, we derived from (\ref{itm:limit}) the following ACC, which was formulated by Cascini and McKernan \cite{M13}.
\begin{enumerate}[resume=equiv]
\item\label{itm:CM}
\textit{Fix subsets $I$ of the positive real numbers and $J$ of the non-negative real numbers both of which satisfy the DCC. Then there exist finite subsets $I_0$ of $I$ and $J_0$ of $J$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and $\mld_P(X,\mathfrak{a})\in J$, then $\mathfrak{a}\in I_0$ and $\mld_P(X,\mathfrak{a})\in J_0$.}
\end{enumerate}
The assertion (\ref{itm:acc}) follows from (\ref{itm:CM}) immediately. We shall derive (\ref{itm:alc}) from (\ref{itm:CM}). Let $\{t_i\}_{i\in\mathbf{N}}$ be a non-decreasing sequence of positive real numbers such that there exist $\mathbf{R}$-ideals $\mathfrak{a}_i$ and $\mathfrak{b}_i$ on $X$ satisfying that $\mld_P(X,\mathfrak{a}_i\mathfrak{b}_i^{t_i})=a$ and $\mathfrak{a}_i\mathfrak{b}_i\in I$. It is enough to show that $T=\{t_i\mid i\in\mathbf{N}\}$ satisfies the ACC. By Lemma \ref{lem:DCC}, the set $IT=\{rt\mid r\in I,\ t\in T\}$ satisfies the DCC. Applying (\ref{itm:CM}) to $I\cup IT$ and $\{a\}$, one obtains a finite subset $I_0$ of $I\cup IT$ such that $\mathfrak{a}_i\mathfrak{b}_i^{t_i}\in I_0$ for any $i$. In particular, $T$ is contained in the set $I^{-1}I_0=\{r^{-1}s\mid r\in I,\ s\in I_0\}$ which satisfies the ACC.
\textit{Step} 2.
The conjecture (\ref{itm:nakamura}) was proposed by Nakamura. His joint work \cite{MN16} with Musta\c{t}\u{a} shows the equivalence of (\ref{itm:madic}), (\ref{itm:nakamura}) and (\ref{itm:limit}). They treated the assertion in (\ref{itm:limit}) for ideals in the completion, but their proof works for our (\ref{itm:limit}). They also provided a direct proof of the implication from (\ref{itm:nakamura}) to (\ref{itm:acc}) which uses the ACC for lc thresholds on $X$ and Theorem \ref{thm:discrete}. We write the argument from (\ref{itm:limit}) to (\ref{itm:nakamura}) as Lemma \ref{lem:limtonak} since it will be used later.
\textit{Step} 3.
Hence it is enough to show the implications from (\ref{itm:acc}) to (\ref{itm:nakamura}) and from (\ref{itm:alc}) to (\ref{itm:nakamura}). If (\ref{itm:nakamura}) were false, then there would exist a strictly increasing sequence $\{l_i\}_{i\in\mathbf{N}}$ and a sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of $\mathbf{R}$-ideals on $X$ such that $\mathfrak{a}_i\in I$ and such that every divisor $E_i$ over $X$ computing $\mld_P(X,\mathfrak{a}_i)$ satisfies the inequality $a_{E_i}(X)\ge l_i$. The assertion (\ref{itm:nakamura}) for those $\mathfrak{a}$ whose $\mld_P(X,\mathfrak{a})$ is not positive will be proved in Theorem \ref{thm:nonpos} independently. We assume that $\mld_P(X,\mathfrak{a}_i)$ is positive for any $i$ here.
By Theorem \ref{thm:discrete}, the set
\begin{align*}
M=\{a_E(X,\mathfrak{a})\mid\mathfrak{a}\in I,\ E\in\mathcal{D}_X,\ \textrm{$(X,\mathfrak{a})$ lc}\}
\end{align*}
is discrete in $\mathbf{R}$. In particular, all $\mld_P(X,\mathfrak{a}_i)$ belong to a finite set since they are bounded from above by $\mld_PX$. Thus we may assume that $\mld_P(X,\mathfrak{a}_i)$ is constant, say $m$, which is positive by our assumption. We may assume that $\mathfrak{a}_i$ is non-trivial; then $m$ is less than $\mld_PX$. By the discreteness of $M$, there exists a real number $m'$ greater than $m$ such that $r\not\in M$ for any real number $m<r\le m'$.
Let $t_i$ be the positive real number such that $\mld_P(X,\mathfrak{a}_i^{1-t_i})=m'$, which exists and satisfies that $0<t_i<1$ by $m<m'<\mld_PX$. Take a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^{1-t_i})$. Then $a_{E_i}(X,\mathfrak{a}_i)<m'$, so $E_i$ also computes $\mld_P(X,\mathfrak{a}_i)=m$ by the property of $m'$, and thus $\ord_{E_i}\mathfrak{a}_i=a_{E_i}(X)-m\ge l_i-m$. Since $t_i\ord_{E_i}\mathfrak{a}_i=a_{E_i}(X,\mathfrak{a}_i^{1-t_i})-a_{E_i}(X,\mathfrak{a}_i)=m'-m$, one has the estimate $t_i\le(m'-m)/(l_i-m)$ when $l_i>m$, showing that $t_i$ tends to zero as $i$ increases.
This contradicts the ACC for $m'$-lc thresholds in (\ref{itm:alc}). It remains to verify that our situation also contradicts the ACC for minimal log discrepancies in (\ref{itm:acc}). By passing to a subsequence, we may assume that $t_i$ are less than one-half and form a strictly decreasing sequence whose limit is zero. Then $\{1-(1-t_i)t_i\}_{i\in \mathbf{N}}$ is a strictly increasing sequence. We set
\begin{align*}
T=\{1-(1-t_i)t_i\mid i\in\mathbf{N}\},
\end{align*}
which satisfies the DCC.
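Indeed, the strict monotonicity of this sequence is a one-line check. Setting $f(t)=1-(1-t)t=1-t+t^2$, one computes
\begin{align*}
f'(t)=2t-1<0\quad\textrm{for }0<t<\tfrac{1}{2},
\end{align*}
so $f$ is strictly decreasing on the interval $(0,\tfrac{1}{2})$ containing all $t_i$, and the strictly decreasing sequence $\{t_i\}_{i\in\mathbf{N}}$ produces the strictly increasing sequence $\{f(t_i)\}_{i\in\mathbf{N}}$; in particular every non-empty subset of $T$ has a minimum, so $T$ satisfies the DCC.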
Note that $1-(1-t_i)t_i=(1-t_i)(1-t_i)+t_i$. Because $E_i$ computes both $\mld_P(X,\mathfrak{a}_i^{1-t_i})$ and $\mld_P(X,\mathfrak{a}_i)$, by Lemma \ref{lem:mld}(\ref{itm:mldequal}) one has that
\begin{align*}
\mld_P(X,\mathfrak{a}_i^{1-(1-t_i)t_i})=(1-t_i)\mld_P(X,\mathfrak{a}_i^{1-t_i})+t_i\mld_P(X,\mathfrak{a}_i)=m'-t_i(m'-m),
\end{align*}
which is computed by $E_i$. But then the values $\mld_P(X,\mathfrak{a}_i^{1-(1-t_i)t_i})=m'-t_i(m'-m)$ form a strictly increasing sequence. This contradicts (\ref{itm:acc}) for $IT=\{rt\mid r\in I,\ t\in T\}$ since $IT$ satisfies the DCC by Lemma \ref{lem:DCC}.
\end{proof}
\begin{lemma}\label{lem:limtonak}
Let $P\in X$ be the germ of a klt variety. Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, so set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. If $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$, then there exists a positive rational number $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i)$ and satisfies the equality $a_{E_i}(X)=l$.
\end{lemma}
\begin{proof}
Take a divisor $\hat E$ over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a})$. As in Remark \ref{rmk:descend}(\ref{itm:descendE}), replacing $\mathcal{F}$ with a subfamily, one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$. For a component $E_i$ of the fibre of $E_l$ at $s_l(i)\in Z_l$, one may assume that $a_{\hat E}(\hat X,\mathsf{a})=a_{E_i}(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$. Then $E_i$ computes $\mld_P(X,\mathfrak{a}_i)$ and $a_{E_i}(X)$ equals the constant $a_{\hat E}(\hat X)$.
\end{proof}
\begin{theorem}\label{thm:nonpos}
Let $P\in X$ be the germ of a klt variety. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is not positive, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\end{theorem}
\begin{proof}
Let $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of $\mathbf{R}$-ideals on $X$ such that $\mathfrak{a}_i\in I$ and such that $\mld_P(X,\mathfrak{a}_i)$ is not positive. It is sufficient to show the existence of a positive rational number $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i)$ and satisfies the equality $a_{E_i}(X)=l$.
Write $\mathfrak{a}_i=\prod_{j=1}^{e_i}\mathfrak{a}_{ij}^{r_{ij}}$ so $r_{ij}\in I$. We may assume that every $\mathfrak{a}_{ij}$ is non-trivial. Let $r$ be the minimum of the elements of $I$. Then $\mld_P(X,\mathfrak{a}_i)\le\mld_PX-\sum_{j=1}^{e_i}r_{ij}\le\mld_PX-re_i$. Let $e'$ denote the greatest integer such that $re'\le\mld_PX$. If $(X,\mathfrak{a}_i)$ is lc, then $e_i\le e'$. If $(X,\mathfrak{a}_i)$ is not lc and $e_i>e'$, then we may replace $\mathfrak{a}_i$ with $\mathfrak{a}'_i=\prod_{j=1}^{e'+1}\mathfrak{a}_{ij}^{r_{ij}}$ because every divisor computing $\mld_P(X,\mathfrak{a}'_i)=-\infty$ also computes $\mld_P(X,\mathfrak{a}_i)$. Hence by passing to a subsequence, we may assume that $e_i$ is constant, say $e$, and that $r_{ij}$ is constant, say $r_j$, for each $1\le j\le e$. That is, $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$.
Following Section \ref{sct:limit}, we construct a generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ of $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is an $\mathbf{R}$-ideal on $\hat P\in\hat X$. If $\mld_{\hat P}(\hat X,\mathsf{a})$ were positive, then there would exist a positive real number $t$ such that $(\hat X,\mathsf{a}\hat\mathfrak{m}^t)$ is lc. By Theorem \ref{thm:lct}, $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for infinitely many $i$, which contradicts that $\mld_P(X,\mathfrak{a}_i)$ is not positive. Thus $\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive. Then by Remark \ref{rmk:limit}(\ref{itm:limitresult}), $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily, and the existence of $l$ follows from Lemma \ref{lem:limtonak}.
\end{proof}
The conjectures hold in dimension two.
\begin{theorem}\label{thm:surface}
Conjecture \textup{\ref{cnj:equiv}} holds when $X$ is a klt surface.
\end{theorem}
\begin{proof}
By Theorem \ref{thm:equiv}, it is enough to verify one of the statements. Assertion (\ref{itm:nakamura}) is stated in \cite[Theorem 1.3]{MN16}. Alternatively, one may derive (\ref{itm:acc}) from \cite[Theorem 3.8]{Al94}, or derive (\ref{itm:limit}) from \cite{K13} by replacing $X$ with its minimal resolution.
\end{proof}
Roughly speaking, our former work \cite{K15} asserts a part of the conjectures in dimension three in the case when the minimal log discrepancy is greater than one.
\begin{theorem}\label{thm:grthan1}
Let $P\in X$ be the germ of a smooth threefold. Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, so set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. Then the pair $(\hat X,\mathsf{a})$ satisfies one of the following cases.
\begin{enumerate}[label=\textup{\arabic*.},ref=\arabic*]
\item\label{cas:case1}
$\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive.
\item\label{cas:case2}
$(\hat X,\mathsf{a})$ is klt.
\item\label{cas:case3}
$(\hat X,\mathsf{a})$ is lc and has the smallest lc centre which is normal and of dimension two.
\item\label{cas:case4}
$(\hat X,\mathsf{a})$ is lc and has the smallest lc centre which is regular and of dimension one.
\end{enumerate}
Moreover, the following hold.
\begin{enumerate}
\item\label{itm:cases123}
In the cases \textup{\ref{cas:case1}}, \textup{\ref{cas:case2}} and \textup{\ref{cas:case3}}, $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily.
\item\label{itm:case4}
In the case \textup{\ref{cas:case4}}, $\mld_{\hat P}(\hat X,\mathsf{a})$ is at most one.
\end{enumerate}
\end{theorem}
\begin{proof}
The case division follows from the existence of the smallest lc centre \cite[Theorem 1.2]{K15}. The equality in (\ref{itm:cases123}) holds in the cases \textup{\ref{cas:case1}} and \textup{\ref{cas:case2}} by Remark \ref{rmk:limit}(\ref{itm:limitresult}), and in the case \textup{\ref{cas:case3}} by \cite[Theorem 5.3]{K15}. The assertion (\ref{itm:case4}) is \cite[Proposition 6.1]{K15}.
\end{proof}
We reduce Conjecture \ref{cnj:equiv}(\ref{itm:nakamura}) to the case of $\mathbf{Q}$-ideals.
\begin{lemma}\label{lem:rational}
Let $P\in X$ be the germ of a klt variety. Suppose that for any positive integer $n$, there exists a positive integer $l$ depending only on $X$ and $n$ such that if $\mathfrak{a}$ is an ideal on $X$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^{1/n})$ and satisfies the inequality $a_E(X)\le l$. Then Conjecture \textup{\ref{cnj:equiv}} holds for $P\in X$.
\end{lemma}
\begin{proof}
In Conjecture \ref{cnj:equiv}, the assertion (\ref{itm:limit}) follows from (\ref{itm:nakamura}) for $I=\{r_1,\ldots,r_e\}$ by \cite{MN16}. Thus we may assume Conjecture \ref{cnj:equiv}(\ref{itm:limit}) in the case when $e=1$ and $r_1=1/n$ for some positive integer $n$. By Theorem \ref{thm:equiv}, it is enough to derive the full statement of (\ref{itm:limit}) from this special case.
We want to show the equality $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$, where $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$ and $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$. We write $m=\mld_{\hat P}(\hat X,\mathsf{a})$ for simplicity. By Remark \ref{rmk:limit}(\ref{itm:limitresult}), we may assume that $m$ is positive. By Theorem \ref{thm:discrete}, the set
\begin{align*}
M=\{\mld_P(X,\mathfrak{a})\mid\mathfrak{a}\in\{r_1,\ldots,r_e\},\ \textrm{$(X,\mathfrak{a})$ lc}\}
\end{align*}
is discrete in $\mathbf{R}$. Thus there exists a real number $m'$ less than $m$ such that $r\not\in M$ for any real number $m'<r<m$.
Since the set
\begin{align*}
Q=\{(q_1,\ldots,q_e)\in(\mathbf{R}_{\ge0})^e\mid\textrm{$(\hat X,{\textstyle\prod_{j=1}^e}\mathsf{a}_j^{q_j})$ lc}\}
\end{align*}
is a rational polytope, the vector $r=(r_1,\ldots,r_e)$ in $Q$ is expressed as $r=\sum_{s\in S}t_sq_s$, where $S$ is a finite set, all $q_s=(q_{1s},\ldots,q_{es})$ belong to $Q\cap\mathbf{Q}^e$, and $t_s$ are positive real numbers such that $\sum_{s\in S}t_s=1$. By choosing $q_s$ close to $r$, we may assume that
\begin{align*}
m'=\mld_{\hat P}(\hat X,\mathsf{a})-(m-m')<\sum_{s\in S}t_s\mld_{\hat P}(\hat X,{\textstyle\prod_{j=1}^e}\mathsf{a}_j^{q_{js}}).
\end{align*}
Write $q_{js}=m_{js}/n$ with positive integers $n$ and $m_{1s},\ldots,m_{es}$ for $s\in S$. Then $\mld_{\hat P}(\hat X,\prod_{j=1}^e\mathsf{a}_j^{q_{js}})=\mld_{\hat P}(\hat X,(\prod_{j=1}^e\mathsf{a}_j^{m_{js}})^{1/n})$ and the ideal $\prod_{j=1}^e\mathsf{a}_j^{m_{js}}$ is the generic limit of the sequence $\{\prod_{j=1}^e\mathfrak{a}_{ij}^{m_{js}}\}_{i\in\mathbf{N}}$ of ideals on $X$. By our assumption, the equality $\mld_{\hat P}(\hat X,(\prod_{j=1}^e\mathsf{a}_j^{m_{js}})^{1/n})=\mld_P(X,(\prod_{j=1}^e\mathfrak{a}_{ij}^{m_{js}})^{1/n})$ holds for any $i\in N_{l_0}$ and $s\in S$ after replacing $\mathcal{F}$ with a subfamily. Hence with Lemma \ref{lem:mld}(\ref{itm:mldconvex}), one has that
\begin{align*}
m'<\sum_{s\in S}t_s\mld_P(X,{\textstyle\prod_{j=1}^e}\mathfrak{a}_{ij}^{q_{js}})\le\mld_P(X,{\textstyle\prod_{j=1}^e}\mathfrak{a}_{ij}^{r_j})=\mld_P(X,\mathfrak{a}_i)\in M,
\end{align*}
which implies that $\mld_P(X,\mathfrak{a}_i)\ge m$ by the property of $m'$. Together with Remark \ref{rmk:limit}(\ref{itm:limitineq}), we obtain the required equality $m=\mld_P(X,\mathfrak{a}_i)$.
\end{proof}
\begin{proposition}\label{prp:mult}
Let $P\in X$ be the germ of a klt variety and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive real number $t$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is positive, then $(X,\mathfrak{a}\mathfrak{m}^t)$ is lc.
\end{proposition}
\begin{proof}
Fix $r_1,\ldots,r_e\in I$ and let $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$ such that $\mld_P(X,\mathfrak{a}_i)$ is positive. It is enough to show the existence of a positive real number $t$ such that $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for infinitely many indices $i$.
Following Section \ref{sct:limit}, we construct a generic limit $\mathsf{a}$ of $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ on $\hat P\in\hat X$. Then $\mld_{\hat P}(\hat X,\mathsf{a})$ is positive by Remark \ref{rmk:limit}(\ref{itm:limitineq}), so there exists a positive real number $t$ such that $(\hat X,\mathsf{a}\hat\mathfrak{m}^t)$ is lc for the maximal ideal $\hat\mathfrak{m}$ in $\mathscr{O}_{\hat X}$. By Theorem \ref{thm:lct}, there exists an infinite subset $N_{l_0}$ of $\mathbf{N}$ such that $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for any $i\in N_{l_0}$.
\end{proof}
\begin{corollary}[\cite{MN16}]\label{crl:mult}
Let $P\in X$ be the germ of a klt variety and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $b$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is positive, then $\ord_E\mathfrak{m}$ is at most $b$ for every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$.
\end{corollary}
\begin{proof}
Take $t$ in Proposition \ref{prp:mult}. Let $E$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a})$. The log canonicity of $(X,\mathfrak{a}\mathfrak{m}^t)$ implies that
\begin{align*}
\ord_E\mathfrak{m}^t\le a_E(X,\mathfrak{a})=\mld_P(X,\mathfrak{a})\le\mld_PX,
\end{align*}
that is, $\ord_E\mathfrak{m}\le t^{-1}\mld_PX$. Thus $b=\rd{t^{-1}\mld_PX}$ is a required integer.
\end{proof}
\section{Construction of canonical pairs}
The objective of this section is to prove the following theorem.
\begin{theorem}\label{thm:canonical}
Let $P\in X$ be the germ of a smooth threefold. Fix a positive rational number $q$. Then there exist positive integers $l$ and $c$ both of which depend only on $q$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $\mld_P(X,\mathfrak{a}^q)$ is positive, then at least one of the following holds.
\begin{enumerate}
\item\label{itm:bounded}
There exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:reduced}
There exists a birational morphism from the germ $Q\in Y$ of a smooth threefold to the germ $P\in X$ such that
\begin{itemize}
\item
every exceptional prime divisor $F$ on $Y$ satisfies the inequalities $a_F(X)\le c$ and $a_F(X,\mathfrak{a}^q)<1$, by which the pull-back $(Y,\Delta,\mathfrak{a}_Y^q)$ of $(X,\mathfrak{a}^q)$ is defined with an effective $\mathbf{Q}$-divisor $\Delta$,
\item
$\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)=\mld_P(X,\mathfrak{a}^q)$, and
\item
$\mld_Q(Y,\mathfrak{a}_Y^q)$ is at least one.
\end{itemize}
\end{enumerate}
\end{theorem}
We recall a part of the classification of threefold divisorial contractions which will play an important role in our argument.
\begin{definition}
A projective birational morphism $Y\to X$ between $\mathbf{Q}$-factorial terminal varieties is called a \textit{divisorial contraction} if its exceptional locus is a prime divisor.
\end{definition}
\begin{theorem}\label{thm:divisorial}
Let $\pi\colon Y\to X$ be a threefold divisorial contraction which contracts its exceptional divisor to a closed point $P$ in $X$.
\begin{enumerate}
\item\label{itm:kawamata}
\textup{(\cite{Km96})}\;
Suppose that $P$ is a quotient singularity of $X$. The spectrum of the completion of $\mathscr{O}_{X,P}$ is the regular base change of $\mathbf{A}^3/\mathbf{Z}_r(w,-w,1)$ with orbifold coordinates $x_1,x_2,x_3$, where $w$ is a positive integer less than $r$ and coprime to $r$. Then $\pi$ is base-changed to the weighted blow-up with $\wt(x_1,x_2,x_3)=(w/r,(r-w)/r,1/r)$.
\item\label{itm:kawakita}
\textup{(\cite{K01})}\;
Suppose that $P$ is a smooth point of $X$. Then there exists a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{X,P}$ and coprime positive integers $w_1,w_2$ such that $\pi$ is the weighted blow-up with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$.
\end{enumerate}
\end{theorem}
Stepanov proved the ACC for canonical thresholds on smooth threefolds as an application of Theorem \ref{thm:divisorial}(\ref{itm:kawakita}).
\begin{theorem}[\cite{St11}]\label{thm:stepanov}
The set
\begin{align*}
\{t\in\mathbf{Q}_{\ge0}\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$ an ideal},\ \mld_P(X,\mathfrak{a}^t)=1\}
\end{align*}
satisfies the ACC.
\end{theorem}
\begin{proof}
Let $S$ denote the set in the theorem. The original statement \cite[Theorem 1.7]{St11} asserts that the set
\begin{align*}
T=\biggl\{t\in\mathbf{Q}_{\ge0}\;\biggm|
\begin{array}{l}
\textrm{$P\in X$ a smooth threefold},\ \textrm{$D$ an effective divisor},\\
\textrm{$(X,tD)$ canonical but not terminal}
\end{array}
\biggr\}
\end{align*}
satisfies the ACC. It is enough to show that if $t$ is an arbitrary element of $S$, then $t/3$ belongs to $T$. For such $t$, there exists an ideal $\mathfrak{a}$ on the germ $P\in X$ of a smooth threefold such that $\mld_P(X,\mathfrak{a}^t)=1$. Then $t\le\mld_PX=3$ and $t/3$ is an element of $S$ since $\mld_P(X,(\mathfrak{a}^3)^{t/3})=1$. Thus it is sufficient to show that if $t\in S$ is at most one, then $t$ belongs to $T$.
Take a germ $P\in X$ on which $t$ is realised by $\mld_P(X,\mathfrak{a}^t)=1$. Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. By replacing $\mathfrak{a}$ with $\mathfrak{a}+\mathfrak{m}^l$ for a large integer $l$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. We take a log resolution $Y$ of $(X,\mathfrak{a})$. Then $\mathfrak{a}\mathscr{O}_Y=\mathscr{O}_Y(-A)$ for an effective divisor $A$ such that $-A$ is free over $X$. Thus there exists a reduced divisor $H$ linearly equivalent to $-A$ such that $Y$ is also a log resolution of $(X,H_X,\mathfrak{a})$, where $H_X$ is the push-forward of $H$. Then $\mld_P(X,tH_X)=1$ and $(X,tH_X)$ is canonical, meaning that $t\in T$.
\end{proof}
We shall use a consequence of the minimal model program.
\begin{definition}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial terminal variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. A divisorial contraction to $X$ is said to be \textit{crepant} with respect to $(P,X,\mathfrak{a})$ if its exceptional divisor computes $\mld_P(X,\mathfrak{a})$.
\end{definition}
\begin{lemma}\label{lem:crepant}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial terminal variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. Then there exists a divisorial contraction crepant with respect to $(P,X,\mathfrak{a})$.
\end{lemma}
\begin{proof}
By replacing $\mathfrak{a}$ with $\mathfrak{b}$ in Lemma \ref{lem:perturb}, we may assume that $\mathfrak{a}$ is an $\mathfrak{m}$-primary $\mathbf{R}$-ideal, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, such that there exists a unique divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$. Then by \cite[Corollary 1.4.3]{BCHM10}, there exists a projective birational morphism $Y\to X$ from a $\mathbf{Q}$-factorial normal variety whose exceptional locus is a prime divisor which coincides with $E$. We may assume that the weak transform $\mathfrak{a}_Y$ on $Y$ of $\mathfrak{a}$ is defined. Then $(Y,\mathfrak{a}_Y)$ is the pull-back of $(X,\mathfrak{a})$, and it is terminal by the uniqueness of $E$. In particular $Y$ itself is terminal, so $Y\to X$ is a required contraction.
\end{proof}
\begin{lemma}\label{lem:perturb}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial klt variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $(X,\mathfrak{a})$ is lc. Then there exists an $\mathbf{R}$-ideal $\mathfrak{b}$ such that
\begin{itemize}
\item
$\mathfrak{b}$ is $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$,
\item
$\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$,
\item
there exists a unique divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{b})$, and
\item
$E$ also computes $\mld_P(X,\mathfrak{a})$.
\end{itemize}
\end{lemma}
\begin{proof}
Writing $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$, if we take a large integer $l$, then $\mathfrak{a}'=\prod_j(\mathfrak{a}_j+\mathfrak{m}^l)^{r_j}$ satisfies that $\mld_P(X,\mathfrak{a}')=\mld_P(X,\mathfrak{a})$ and any divisor computing $\mld_P(X,\mathfrak{a}')$ also computes $\mld_P(X,\mathfrak{a})$. By replacing $\mathfrak{a}$ with $\mathfrak{a}'$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary.
Let $Y$ be a log resolution of $(X,\mathfrak{a})$ and $\{E_i\}_{i\in I}$ be the set of the exceptional prime divisors on $Y$ contracting to the point $P$. Let $A$ be the $\mathbf{R}$-divisor on $Y$ defined by $\mathfrak{a}\mathscr{O}_Y$ and $I'$ be the subset of $I$ consisting of the indices $i$ such that $E_i$ computes $\mld_P(X,\mathfrak{a})$. There exists an effective exceptional divisor $F$ such that $-F$ is very ample and such that the minimum $m$ of $\{\ord_{E_i}A/\ord_{E_i}F\}_{i\in I'}$ is attained by only one index, say $i_0\in I'$. One can take a small positive real number $\epsilon$ such that $b_i=a_{E_i}(X,\mathfrak{a})+\epsilon(\ord_{E_i}A-m\ord_{E_i}F)$ is greater than $\mld_P(X,\mathfrak{a})$ for any $i\in I\setminus\{i_0\}$. Note that $b_{i_0}$ remains equal to $\mld_P(X,\mathfrak{a})$.
Let $\mathfrak{c}$ be the ideal on $X$ given by the push-forward of $\mathscr{O}_Y(-F)$ and set the $\mathbf{R}$-ideal $\mathfrak{b}=\mathfrak{a}^{1-\epsilon}\mathfrak{c}^{\epsilon m}$. Possibly replacing $\epsilon$ with a smaller real number, we may assume that $(X,\mathfrak{b})$ is lc. Then $Y$ is also a log resolution of $(X,\mathfrak{b})$ and $a_{E_i}(X,\mathfrak{b})=b_i$ for any $i\in I$. Thus $\mathfrak{b}$ satisfies all the required properties except being $\mathfrak{m}$-primary. However, one can replace $\mathfrak{b}$ with an $\mathfrak{m}$-primary $\mathbf{R}$-ideal just by the argument of constructing $\mathfrak{a}'$ from $\mathfrak{a}$.
\end{proof}
We consider the following algorithm in order to prove Theorem \ref{thm:canonical}.
\begin{algorithm}\label{alg:canonical}
Let $q$ be a positive rational number. Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{a}$ be an ideal on $X$ such that $(X,\mathfrak{a}^q)$ is lc. Let $E$ be a divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$.
\begin{enumerate}[indented,label=\texttt{\arabic*}.,ref=\texttt{\arabic*}]
\item
Start with $X_0=X$.
\item\label{prc:initial}
Suppose that $X_i$ is given and that it has only terminal quotient singularities.
\item\label{prc:notpoint}
If the centre $c_{X_i}(E)$ on $X_i$ of $E$ is of positive dimension, then output $X_i$.
\item\label{prc:point}
Suppose that $c_{X_i}(E)$ is a closed point, which will be denoted by $P_i$. Let $r_i$ be the index of the germ $P_i\in X_i$. One can define the weak transform $\mathfrak{b}_i$ on $P_i\in X_i$ of $\mathfrak{a}^{r_i}$ and let $\mathfrak{a}_i$ be the $\mathbf{Q}$-ideal $\mathfrak{b}_i^{1/r_i}$. The pair $(X_i,\mathfrak{a}_i^q)$ is lc at $P_i$.
\item\label{prc:smoothout}
If $P_i$ is a smooth point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)\ge1$, then output $X_i$.
\item
If $P_i$ is a smooth point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)<1$, then go to \ref{prc:iplus1}.
\item\label{prc:singout}
If $P_i$ is a singular point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)>1$, then output $X_i$.
\item
If $P_i$ is a singular point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)\le1$, then go to \ref{prc:iplus1}.
\item\label{prc:iplus1}
Let $q_i$ be the positive rational number such that $\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$. Fix a divisorial contraction $X_{i+1}\to X_i$ crepant with respect to $(P_i,X_i,\mathfrak{a}_i^{q_i})$ by Lemma \ref{lem:crepant}. Go back to \ref{prc:initial} and proceed with $X_{i+1}$ instead of $X_i$.
\end{enumerate}
In this algorithm, we fix the notation that $F_i$ is the exceptional divisor of $X_{i+1}\to X_i$ and that $F_j^i$ is the strict transform on $X_i$ of $F_j$ for $j<i$, and we set $\Delta_i=\sum_{j=0}^{i-1}(1-a_{F_j}(X,\mathfrak{a}^q))F_j^i$ and $S_i=\sum_{j=0}^{i-1}F_j^i$.
\end{algorithm}
\begin{remark}
By the very definition,
\begin{enumerate}
\item
$q_i\le q$, and $q_i<q$ when $q_i$ is defined at a smooth point $P_i$ of $X_i$, and
\item
$(X_i,\Delta_i,\mathfrak{a}_i^q)$ is crepant to $(X,\mathfrak{a}^q)$.
\end{enumerate}
\end{remark}
In order to run Algorithm \ref{alg:canonical}, we need to verify that
\begin{itemize}
\item
$X_i$ has at worst terminal quotient singularities in the process \ref{prc:initial}, and
\item
$(X_i,\mathfrak{a}_i^q)$ is lc at $P_i$ in the process \ref{prc:point},
\end{itemize}
besides its termination. The first claim follows from Theorem \ref{thm:divisorial}. For the second claim, let $b_i=1-a_{F_i}(X_i,\mathfrak{a}_i^q)$. Then $(X_{i+1},b_iF_i,\mathfrak{a}_{i+1}^q)$ is crepant to $(X_i,\mathfrak{a}_i^q)$ and $b_iF_i$ is effective since $a_{F_i}(X_i,\mathfrak{a}_i^q)\le a_{F_i}(X_i,\mathfrak{a}_i^{q_i})=1$ by $q_i\le q$. Thus the log canonicity of $(X_i,\mathfrak{a}_i^q)$ follows from that of $(X,\mathfrak{a}^q)$ inductively. Hence Algorithm \ref{alg:canonical} runs until it terminates.
We prepare several basic properties of the algorithm before establishing its termination in Proposition \ref{prp:termination}.
\begin{lemma}\label{lem:algorithm}
The following hold in Algorithm \textup{\ref{alg:canonical}}.
\begin{enumerate}
\item\label{itm:nondecr}
The $q_i$ form a non-decreasing sequence.
\item\label{itm:atmost1}
$a_{F_i}(X,\mathfrak{a}^{q_i})\le1$.
\item\label{itm:lessthan1}
$a_{F_i}(X,\mathfrak{a}^q)<1$.
\item\label{itm:easybound}
If $q_i\neq q$, then $a_{F_i}(X)\le q(q-q_i)^{-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $(X_{i+1},\mathfrak{a}_{i+1}^{q_i})$ is crepant to $(X_i,\mathfrak{a}_i^{q_i})$, one has that $\mld_{P_{i+1}}(X_{i+1},\mathfrak{a}_{i+1}^{q_i})\ge\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$. Thus $q_i\le q_{i+1}$, which shows (\ref{itm:nondecr}).
For $j<i$, let $b_{ij}=1-a_{F_j}(X_j,\mathfrak{a}_j^{q_i})$. Then $(X_{j+1},b_{ij}F_j,\mathfrak{a}_{j+1}^{q_i})$ is crepant to $(X_j,\mathfrak{a}_j^{q_i})$ and $b_{ij}F_j$ is effective by $q_j\le q_i$ in (\ref{itm:nondecr}). Thus one has the inequality $a_{F_i}(X_j,\mathfrak{a}_j^{q_i})\le a_{F_i}(X_{j+1},\mathfrak{a}_{j+1}^{q_i})$ and inductively $a_{F_i}(X,\mathfrak{a}^{q_i})\le a_{F_i}(X_i,\mathfrak{a}_i^{q_i})=1$, which is (\ref{itm:atmost1}).
Assertion (\ref{itm:lessthan1}) follows from (\ref{itm:atmost1}) unless $q_i=q$. If $q_i=q$, then $q_i$ is defined at a singular point $P_i$ of $X_i$; on the other hand, $q_0<q$ since $X_0=X$ is smooth. Let $j$ be the greatest integer such that $q_j<q$. Then $(X_i,\mathfrak{a}_i^q)$ is crepant to $(X_{j+1},\mathfrak{a}_{j+1}^q)$. On the other hand, $(X_{j+1},\Delta_{j+1},\mathfrak{a}_{j+1}^q)$ is crepant to $(X,\mathfrak{a}^q)$. By (\ref{itm:atmost1}), $\Delta_{j+1}$ is effective and its support coincides with $S_{j+1}$. Thus one has that $a_{F_i}(X,\mathfrak{a}^q)=a_{F_i}(X_{j+1},\Delta_{j+1},\mathfrak{a}_{j+1}^q)<a_{F_i}(X_{j+1},\mathfrak{a}_{j+1}^q)=a_{F_i}(X_i,\mathfrak{a}_i^q)=1$.
To see (\ref{itm:easybound}), suppose that $q_i<q$. By (\ref{itm:atmost1}) and $a_{F_i}(X,\mathfrak{a}^q)\ge0$, one computes that
\begin{align*}
a_{F_i}(X)=a_{F_i}(X,\mathfrak{a}^{q_i})+q_i\ord_{F_i}\mathfrak{a}&\le1+q_i\ord_{F_i}\mathfrak{a}\\
&=1+q_i(q-q_i)^{-1}(a_{F_i}(X,\mathfrak{a}^{q_i})-a_{F_i}(X,\mathfrak{a}^q))\\
&\le1+q_i(q-q_i)^{-1}=q(q-q_i)^{-1}.
\end{align*}
\end{proof}
\begin{lemma}\label{lem:e}
Fix a positive rational number $q$. Then there exists a positive rational number $\epsilon$ depending only on $q$ such that every $q_i$ defined at a smooth point $P_i$ of $X_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $q_i\le q-\epsilon$.
\end{lemma}
\begin{proof}
Each such $q_i$ satisfies $\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$ on the germ of a smooth threefold, so it belongs to the set in Theorem \ref{thm:stepanov}, and $q_i<q$ by the remark after Algorithm \ref{alg:canonical}. If no such $\epsilon$ existed, then one could find a strictly increasing sequence of these thresholds converging to $q$, which would contradict the ACC in Theorem \ref{thm:stepanov}.
\end{proof}
\begin{proposition}\label{prp:termination}
Algorithm \textup{\ref{alg:canonical}} terminates.
\end{proposition}
\begin{proof}
Take $\epsilon$ in Lemma \ref{lem:e}. Then every divisor $F_i$ defined over a smooth point $P_i$ of $X_i$ satisfies that $a_{F_i}(X,\mathfrak{a}^{q-\epsilon})\le a_{F_i}(X,\mathfrak{a}^{q_i})\le1$ by Lemma \ref{lem:algorithm}(\ref{itm:atmost1}). The number of such $F_i$ is finite because $(X,\mathfrak{a}^{q-\epsilon})$ is klt. In particular, there exists an integer $e$, depending on $(X,\mathfrak{a}^q)$ and $E$, such that $P_i$ is a singular point of $X_i$ for any $i>e$. By Theorem \ref{thm:divisorial}(\ref{itm:kawamata}), the $r_i$ for $i>e$ form a strictly decreasing sequence. Hence the algorithm must terminate.
\end{proof}
One can also bound $r_i$ and $a_{F_i}(X)$.
\begin{lemma}\label{lem:r}
Fix a positive rational number $q$. Then there exists a positive integer $r$ depending only on $q$ such that every $r_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $r_i\le r$.
\end{lemma}
\begin{proof}
Take $\epsilon$ in Lemma \ref{lem:e}. We shall show that any positive integer $r$ at least $q\epsilon^{-1}-2$ satisfies the required property. Note that $r_0=1$. By Theorem \ref{thm:divisorial}, $r_{i+1}$ satisfies that $r_{i+1}<r_i$ when $P_i$ is a singular point of $X_i$ and that $r_{i+1}\le a_{F_i}(X_i)-2$ when $P_i$ is a smooth point of $X_i$. Thus it is enough to show that $a_{F_i}(X_i)\le q\epsilon^{-1}$ when $P_i$ is a smooth point of $X_i$. Since
\begin{align*}
a_{F_i}(X_i)\le a_{F_i}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{F_i}F_j^i=a_{F_i}(X),
\end{align*}
the required inequality follows from Lemma \ref{lem:algorithm}(\ref{itm:easybound}).
\end{proof}
\begin{lemma}\label{lem:c}
Fix a positive rational number $q$. Then there exists a positive integer $c$ depending only on $q$ such that every $F_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $a_{F_i}(X)\le c$.
\end{lemma}
\begin{proof}
Take a positive integer $n$ such that $nq$ is integral, and take $\epsilon$ in Lemma \ref{lem:e} and $r$ in Lemma \ref{lem:r}. Fix a positive integer $c_0$ at least $q\epsilon^{-1}$ and define positive integers $c_1,\ldots,c_r$ inductively by the recurrence relation
\begin{align*}
c_{j+1}=2+(c_j-1)n
\end{align*}
for $0\le j<r$. We shall prove that $c_r$ is a required constant.
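Although the explicit value of $c_r$ is not needed in what follows, a direct induction solves the recurrence and makes the dependence on $q$ transparent: for $n\ge2$,
\begin{align*}
c_j=n^j\Bigl(c_0-\frac{n-2}{n-1}\Bigr)+\frac{n-2}{n-1},
\end{align*}
while $c_j=c_0+j$ for $n=1$. In either case $c_r$ depends only on $c_0$, $n$ and $r$, hence only on $q$.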
Let $e$ be the greatest integer such that $q_e$ is defined at a smooth point $P_e$ of $X_e$. Then by Lemmata \ref{lem:algorithm}(\ref{itm:nondecr}), (\ref{itm:easybound}) and \ref{lem:e}, the estimate $a_{F_i}(X)\le c_0$ holds for any $i\le e$, and by Lemma \ref{lem:r}, if the algorithm defines $P_{e+1}\in X_{e+1}$, then $r_{e+1}\le r$. In particular, the algorithm terminates with the output $X_{e+r'}$ for some $r'\le r$ by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}). Thus it is enough to show that $a_{F_{e+j}}(X)\le c_j$ for any $j\le r$ as long as $F_{e+j}$ is defined. This is reduced to proving that if $F_i$ is defined at a singular point $P_i$ of $X_i$ and if $a_{F_j}(X)$ is bounded from above by a positive integer $c'$ for all $j<i$, then $a_{F_i}(X)$ is at most $2+(c'-1)n$.
Suppose that $F_i$ and $c'$ are given as above. By Lemma \ref{lem:algorithm}(\ref{itm:lessthan1}), the $\mathbf{Q}$-divisor $\Delta_i$ satisfies that $S_i\le n\Delta_i$. Since $(X_i,\Delta_i,\mathfrak{a}_i^q)$ is crepant to $(X,\mathfrak{a}^q)$, one has that
\begin{align*}
\ord_{F_i}S_i\le n\ord_{F_i}\Delta_i&=n(a_{F_i}(X_i,\mathfrak{a}_i^q)-a_{F_i}(X_i,\Delta_i,\mathfrak{a}_i^q))\\
&\le n(a_{F_i}(X_i,\mathfrak{a}_i^{q_i})-a_{F_i}(X,\mathfrak{a}^q))=n(1-a_{F_i}(X,\mathfrak{a}^q))\le n,
\end{align*}
where the second inequality follows from $q_i\le q$. Together with $a_{F_i}(X_i)=1+1/r_i$ by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}), one computes that
\begin{align*}
a_{F_i}(X)&=a_{F_i}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{F_i}F_j^i\\
&\le1+\frac{1}{r_i}+(c'-1)\ord_{F_i}S_i<2+(c'-1)n.
\end{align*}
\end{proof}
In order to control the log discrepancy of a divisor computing $\mld_P(X,\mathfrak{a}^q)$, we need an extra assumption that $\mld_P(X,\mathfrak{a}^q)$ is positive.
\begin{lemma}\label{lem:process37}
Fix a positive rational number $q$. Then there exists a positive integer $l$ depending only on $q$ such that in Algorithm \textup{\ref{alg:canonical}} if $\mld_P(X,\mathfrak{a}^q)$ is positive and if the algorithm terminates at the process \textup{\ref{prc:notpoint}} or \textup{\ref{prc:singout}}, then there exists a divisor $E'$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_{E'}(X)\le l$.
\end{lemma}
\begin{proof}
\textit{Step} 1.
We take $c$ in Lemma \ref{lem:c}. Let $\eta$ be a positive rational number such that there exist no integers $a$ satisfying that $q<1/a<q+\eta$. Let $n$ be a positive integer such that $nq$ is integral. Since Conjecture \ref{cnj:equiv}(\ref{itm:nakamura}) holds in dimension two by Theorem \ref{thm:surface}, there exists a positive integer $l'$ depending only on $n$ such that if $Q\in H$ is the germ of a smooth surface and $\mathfrak{a}_H$ is an ideal on $H$, then there exists a divisor $E_H$ over $H$ which computes $\mld_Q(H,\mathfrak{a}_H^{1/n})$ and satisfies the inequality $a_{E_H}(H)\le l'$.
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $E'$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$. By Corollary \ref{crl:mult}, there exists a positive integer $b$ depending only on $q$ such that $\ord_{E'}\mathfrak{m}\le b$. Note that $b$ can be taken independent of the germ $P\in X$ of a smooth threefold. Indeed, $E'$ also computes $\mld_P(X,\mathfrak{a}'^q)$ for the $\mathfrak{m}$-primary ideal $\mathfrak{a}'=\mathfrak{a}+\mathfrak{m}^e$ as long as a positive integer $e$ satisfies that $\ord_{E'}\mathfrak{a}\le e\ord_{E'}\mathfrak{m}$. Thus by Lemma \ref{lem:regular}, one can take the $b$ on the germ at the origin of the affine space $\mathbf{A}^3$.
For any $i$, one has the estimate $\ord_{E'}S_i\le\ord_{E'}\mathfrak{m}$ because $\mathfrak{m}^r\mathscr{O}_{X_i}$ is contained in $\mathscr{O}_{X_i}(-rS_i)$ for a positive integer $r$ such that $rS_i$ is Cartier. Hence,
\begin{align*}
\ord_{E'}S_i\le b.
\end{align*}
Supposing that the algorithm terminates at the process \ref{prc:notpoint} or \ref{prc:singout}, we shall bound the log discrepancy of some divisor which computes $\mld_P(X,\mathfrak{a}^q)$ in terms of $q$, $c$, $\eta$, $l'$ and $b$.
\textit{Step} 2.
Suppose that the algorithm terminates at the process \ref{prc:notpoint} and outputs $X_i$. Then the centre $c_E(X_i)$ on $X_i$ of $E$ is either a divisor or a curve. If it is a divisor, then $E=F_{i-1}$ and it computes $\mld_P(X,\mathfrak{a}^q)$. By Lemma \ref{lem:c}, $F_{i-1}$ satisfies that
\begin{align*}
a_{F_{i-1}}(X)\le c.
\end{align*}
Suppose that $c_E(X_i)$ is a curve $C$. Let $H$ be a general hyperplane section of $X_i$ and $Q$ be a closed point in $H\cap C$. Considering a log resolution, one has that
\begin{align*}
\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)=\mld_{\eta_C}(X_i,\Delta_i,\mathfrak{a}_i^q)=\mld_P(X,\mathfrak{a}^q),
\end{align*}
where the second equality holds since $E$ computes $\mld_P(X,\mathfrak{a}^q)$. Moreover by the expression $\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)=\mld_Q(H,(\mathfrak{a}_i^{nq}\mathscr{O}_H(-n\Delta_i|_H))^{1/n})$, there exists a divisor $E'$ over $X_i$ with $c_{X_i}(E')=C$ such that an irreducible component $E'_H$ of $E'\times_{X_i}H$ mapped to $Q$ computes $\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)$ and satisfies the inequality $a_{E'_H}(H)\le l'$. The $E'$ computes $\mld_P(X,\mathfrak{a}^q)$ as well as $\mld_{\eta_C}(X_i,\Delta_i,\mathfrak{a}_i^q)$, and $a_{E'}(X_i)=a_{E'_H}(H)\le l'$. Therefore,
\begin{align*}
a_{E'}(X)=a_{E'}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{E'}F_j^i\le l'+(c-1)\ord_{E'}S_i\le l'+(c-1)b,
\end{align*}
where the last inequality follows from Step 1.
\textit{Step} 3.
Suppose that the algorithm terminates at the process \ref{prc:singout} and outputs $X_i$. Let $\mathfrak{n}$ be the maximal ideal in $\mathscr{O}_{X_i}$ defining $P_i$. Recall that $\mathfrak{a}_i=\mathfrak{b}_i^{1/r_i}$. Set $\mathfrak{b}'_i=\mathfrak{b}_i+\mathfrak{n}^e$ for a positive integer $e$ such that $\ord_E\mathfrak{b}_i\le e\ord_E\mathfrak{n}$. Since $\mld_{P_i}(X_i,\mathfrak{a}_i^q)$ is greater than one, there exists a rational number $q'$ greater than $q$ such that $\mld_{P_i}(X_i,(\mathfrak{b}'_i)^{q'/r_i})=1$. There exists a divisorial contraction to $X_i$ crepant to $(P_i,X_i,(\mathfrak{b}'_i)^{q'/r_i})$ by Lemma \ref{lem:crepant} and it is uniquely determined by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}). Its exceptional divisor $F$ satisfies that $a_F(X_i)=1+1/r_i$. Thus
\begin{align*}
q'\ord_F\mathfrak{b}'_i=r_i(a_F(X_i)-a_F(X_i,(\mathfrak{b}'_i)^{q'/r_i}))=r_i(a_F(X_i)-1)=1,
\end{align*}
which implies that $q'$ is at least $q+\eta$ by the definition of $\eta$. In particular, the pair $(X_i,(\mathfrak{b}'_i)^{(q+\eta)/r_i})$ is canonical, so $a_E(X_i,\mathfrak{a}_i^{q+\eta})=a_E(X_i,(\mathfrak{b}'_i)^{(q+\eta)/r_i})\ge1$. Hence one computes that
\begin{align*}
a_E(X_i)&=a_E(X_i,\mathfrak{a}_i^q)+q\ord_E\mathfrak{a}_i\\
&=a_E(X_i,\mathfrak{a}_i^q)+q\eta^{-1}(a_E(X_i,\mathfrak{a}_i^q)-a_E(X_i,\mathfrak{a}_i^{q+\eta}))\\
&\le(1+q\eta^{-1})a_E(X_i,\mathfrak{a}_i^q)-q\eta^{-1}\\
&=(1+q\eta^{-1})(a_E(X_i,\Delta_i,\mathfrak{a}_i^q)+\ord_E\Delta_i)-q\eta^{-1}\\
&\le(1+q\eta^{-1})(a_E(X,\mathfrak{a}^q)+\ord_ES_i)-q\eta^{-1}
\end{align*}
and
\begin{align*}
a_E(X)&=a_E(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_EF_j^i\\
&\le(1+q\eta^{-1})(a_E(X,\mathfrak{a}^q)+\ord_ES_i)-q\eta^{-1}+(c-1)\ord_ES_i.
\end{align*}
Together with $a_E(X,\mathfrak{a}^q)=\mld_P(X,\mathfrak{a}^q)\le3$ and $\ord_ES_i\le b$ from Step 1, one concludes that
\begin{align*}
a_E(X)\le(1+q\eta^{-1})(3+b)-q\eta^{-1}+(c-1)b.
\end{align*}
\textit{Step} 4.
By Steps 2 and 3, any integer $l$ which is at least the maximum of $c$, $l'+(c-1)b$ and $(1+q\eta^{-1})(3+b)-q\eta^{-1}+(c-1)b$ satisfies the required property.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:canonical}}]
We shall verify that the $l$ in Lemma \ref{lem:process37} and $c$ in Lemma \ref{lem:c} satisfy the assertion. Let $\mathfrak{a}$ be an ideal on $X$ such that $\mld_P(X,\mathfrak{a}^q)$ is positive. Run Algorithm \ref{alg:canonical}, which terminates by Proposition \ref{prp:termination}. If the algorithm terminates at the process \ref{prc:notpoint} or \ref{prc:singout}, then the property (\ref{itm:bounded}) holds by Lemma \ref{lem:process37}. If it terminates at the process \ref{prc:smoothout}, then let $Q\in Y$ be the output $P_i\in X_i$. The $Q\in Y$ satisfies the property (\ref{itm:reduced}) by Lemmata \ref{lem:algorithm}(\ref{itm:lessthan1}) and \ref{lem:c}.
\end{proof}
\section{Extraction by weighted blow-ups}
Recall the classification of divisors over a smooth surface computing the minimal log discrepancy.
\begin{theorem}[\cite{K17}]\label{thm:mldwbu}
Let $P\in X$ be the germ of a smooth surface and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$.
\begin{enumerate}
\item
If $(X,\mathfrak{a})$ is lc, then every divisor computing $\mld_P(X,\mathfrak{a})$ is obtained by a weighted blow-up.
\item
If $(X,\mathfrak{a})$ is not lc, then some divisor computing $\mld_P(X,\mathfrak{a})$ is obtained by a weighted blow-up.
\end{enumerate}
\end{theorem}
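To illustrate the theorem in a standard example, let $P\in X$ be the germ at the origin of the affine plane with coordinates $x_1,x_2$ and consider the $\mathbf{R}$-ideal $\mathfrak{a}=((x_1^2+x_2^3)\mathscr{O}_X)^{5/6}$. The divisor $E$ obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=(3,2)$ satisfies $a_E(X)=3+2=5$ and $\ord_E(x_1^2+x_2^3)=\min\{2\cdot3,3\cdot2\}=6$, whence
\begin{align*}
a_E(X,\mathfrak{a})=5-\frac{5}{6}\cdot6=0.
\end{align*}
Since $5/6$ is the lc threshold of the cusp, the pair $(X,\mathfrak{a})$ is lc with $\mld_P(X,\mathfrak{a})=0$, and this minimal log discrepancy is computed by the weighted blow-up $E$ in accordance with the theorem.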
We want to apply this theorem in order to extract, by a weighted blow-up, a divisor over a smooth threefold whose centre is a curve and which computes the lc threshold. In order to use such an extraction in the study of the generic limit of ideals, we need to formulate it for $R$-varieties. We let $K$ be a field of characteristic zero throughout this section. The purpose of this section is to prove
\begin{theorem}\label{thm:wbu}
Let $X$ be the spectrum of the ring of formal power series in three variables over $K$ and $P$ be its closed point. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that
\begin{itemize}
\item
$\mld_P(X,\mathfrak{a})$ equals one, and
\item
$(X,\mathfrak{a})$ has an lc centre $C$ of dimension one.
\end{itemize}
Then there exist a divisor $E$ over $X$ computing $\mld_{\eta_C}(X,\mathfrak{a})$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $E$ is obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$.
\end{theorem}
When a divisor over a smooth variety is given, we often realise it by a finite sequence of blow-ups.
\begin{definition}\label{dfn:tower}
Let $X$ be a smooth variety and $E$ be a divisor over $X$ whose centre $Z$ on $X$ has codimension at least two in $X$. A \textit{tower} on $X$ with respect to $E$ is a finite sequence of projective birational morphisms $X_{i+1}\to X_i$ of smooth varieties for $0\le i<l$ such that
\begin{itemize}
\item
$X_0=X$ and $Z_0=Z$,
\item
$X_{i+1}$ is, about $\eta_{Z_i}$, the blow-up of $X_i$ along $Z_i$,
\item
$E_i$ is the exceptional prime divisor on $X_{i+1}$ contracting onto $Z_i$,
\item
$Z_{i+1}$ is the centre on $X_{i+1}$ of $E$, and
\item
$E_{l-1}=E$.
\end{itemize}
A tower is called the \textit{regular tower} if for any $i<l$, the centre $Z_i$ is smooth and $X_{i+1}$ is globally the blow-up of $X_i$ along $Z_i$. Note that the regular tower is uniquely determined by $E$ if it exists.
\end{definition}
\begin{remark}\label{rmk:toric}
Let $P\in X$ be the germ of a smooth variety. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_X$ and $E$ be the divisor obtained by the weighted blow-up of $X$ with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$, where $c$ is at least two. Then one can see that the regular tower on $X$ with respect to $E$ exists in terms of toric geometry. Following the notation in \cite{I14}, set $N=\mathbf{Z}^d$ with the standard basis $e_1,\ldots,e_d$ for $d=\dim X$. One may assume that $w=(w_1,\ldots,w_c,0,\ldots,0)$ is primitive in $N$. Construct a finite sequence of fans $(N,\Delta_i)$ for $0\le i\le l$ such that
\begin{itemize}
\item
$I_i=\{e_1,\ldots,e_d\}\cup\{v_1,\ldots,v_i\}$,
\item
$\Delta_i$ is the set of all cones spanned by a subset of $I_i$,
\item
$J_i$ is the smallest subset of $I_i$ such that $w$ belongs to the cone spanned by $J_i$,
\item
$v_{i+1}=\sum_{v\in J_i}v$, and
\item
$J_i\neq\{w\}$ for $i<l$ and $J_l=\{w\}$.
\end{itemize}
Set the toric variety $T_i=T_N(\Delta_i)$ and let $E_i^T$ be the exceptional divisor of $T_{i+1}\to T_i$. Then $X$ has an \'etale morphism to $T_0$ by the correspondence of $e_i$ to $x_i$. The base changes of $T_i$ to $X$ form the regular tower on $X$ with respect to $E$, and every $E_i=E_i^T\times_{T_0}X$ is obtained by a weighted blow-up of $X$.
\end{remark}
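For instance, when $d=c=2$ and $w=(3,2)$, the construction in Remark \ref{rmk:toric} runs as follows:
\begin{align*}
J_0=\{e_1,e_2\},\ v_1=(1,1);\quad J_1=\{e_1,v_1\},\ v_2=(2,1);\quad J_2=\{v_1,v_2\},\ v_3=(3,2)=w,
\end{align*}
since $w=e_1+2v_1=v_1+v_2$, whence $J_3=\{w\}$ and $l=3$. The divisors $E_0$, $E_1$ and $E_2=E$ are obtained by the weighted blow-ups of $X$ with $\wt(x_1,x_2)=(1,1)$, $(2,1)$ and $(3,2)$ respectively.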
We collect basic properties of the log discrepancies in a tower, which were essentially given in \cite[Proposition 6]{K17}.
\begin{lemma}\label{lem:tower}
Notation as in Definition \textup{\ref{dfn:tower}}. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ and $\mathfrak{a}_i$ be its weak transform on $X_i$. Set $a_i=a_{E_i}(X,\mathfrak{a})$.
\begin{enumerate}
\item\label{itm:order}
The $\ord_{Z_i}\mathfrak{a}_i$ form a non-increasing sequence.
\item\label{itm:order1}
If $\ord_Z\mathfrak{a}\le1$, then $a_i\ge1$ and the $a_i$ form a non-decreasing sequence.
\item\label{itm:ordlthan1}
If $\ord_Z\mathfrak{a}<1$, then $a_i>1$ and the $a_i$ form a strictly increasing sequence.
\end{enumerate}
\end{lemma}
\begin{proof}
Take a subvariety $V_{i+1}$ of $Z_{i+1}$ such that $V_{i+1}\to Z_i$ is finite and dominant. Then $\ord_{Z_{i+1}}\mathfrak{a}_{i+1}\le\ord_{V_{i+1}}\mathfrak{a}_{i+1}\le\ord_{Z_i}\mathfrak{a}_i$ by \cite[III Lemmata 7 and 8]{H64}, which is (\ref{itm:order}).
The assertion (\ref{itm:order1}) is reduced to (\ref{itm:ordlthan1}) because $a_i$ is the limit of $a_{E_i}(X,\mathfrak{a}^{1-\epsilon})$ when $\epsilon$ decreases to zero. Suppose that $\ord_Z\mathfrak{a}<1$ in order to see (\ref{itm:ordlthan1}). Then $\ord_{Z_i}\mathfrak{a}_i<1$ for all $i$ by (\ref{itm:order}). In particular, $a_{E_i}(X_i,\mathfrak{a}_i)=a_{E_i}(X_i)-\ord_{Z_i}\mathfrak{a}_i>1$. Since $(X_i,\sum_{j=0}^{i-1}(1-a_j)E_j^i,\mathfrak{a}_i)$ is crepant to $(X,\mathfrak{a})$, where $E_j^i$ is the strict transform of $E_j$, one computes that
\begin{align*}
a_i=a_{E_i}(X_i,\mathfrak{a}_i)+\sum_{j=0}^{i-1}(a_j-1)\ord_{E_i}E_j^i>1+\sum_{j=0}^{i-1}(a_j-1)\ord_{E_i}E_j^i.
\end{align*}
This yields $a_i>1$ by induction, and then $a_i>a_{i-1}$ again by induction.
\end{proof}
\begin{proposition}\label{prp:tower}
Let $P\in X$ be the germ of a smooth variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Let $E$ be the divisor obtained by the blow-up of $X$ at $P$.
\begin{enumerate}
\item\label{itm:Ecomp}
If $\ord_P\mathfrak{a}\le1$, then $E$ computes $\mld_P(X,\mathfrak{a})$.
\item
If $\ord_P\mathfrak{a}<1$, then $E$ is the unique divisor computing $\mld_P(X,\mathfrak{a})$.
\end{enumerate}
\end{proposition}
\begin{proof}
This is exactly \cite[Proposition 6]{K17}. Just apply Lemma \ref{lem:tower}(\ref{itm:order1}) and (\ref{itm:ordlthan1}) to the tower on $X$ with respect to a divisor which computes $\mld_P(X,\mathfrak{a})$.
\end{proof}
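For instance, take $\mathfrak{a}=\mathfrak{m}^t$ for the maximal ideal $\mathfrak{m}$ in $\mathscr{O}_X$ defining $P$ and a positive real number $t\le1$. Then $\ord_P\mathfrak{a}=t\le1$, so the proposition shows that
\begin{align*}
\mld_P(X,\mathfrak{a})=a_E(X,\mathfrak{a})=\dim X-t,
\end{align*}
and for $t<1$ the blow-up of $X$ at $P$ provides the unique divisor computing this minimal log discrepancy.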
We shall study divisors computing the minimal log discrepancy on a $K$-variety of dimension two.
\begin{lemma}\label{lem:Kpoint}
Let $P\in X$ be the germ at a $K$-point of a regular $K$-variety of dimension two and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Then there exists a divisor over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a sequence of finitely many blow-ups at a $K$-point.
\end{lemma}
\begin{proof}
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. By adding a high multiple of $\mathfrak{m}$ to each component of $\mathfrak{a}$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. Suppose that $(X,\mathfrak{a})$ is not lc. Then there exists a positive real number $t$ less than one such that $\mld_P(X,\mathfrak{a}^t)$ is zero. Replacing $\mathfrak{a}$ with $\mathfrak{a}^t$, we may assume that $(X,\mathfrak{a})$ is lc.
We write $m=\mld_P(X,\mathfrak{a})$ for simplicity. Let $Y$ be the blow-up of $X$ at $P$ and $E$ be its exceptional divisor. There is nothing to prove if $E$ computes $\mld_P(X,\mathfrak{a})$. Thus we may assume that $a_E=a_E(X,\mathfrak{a})$ is greater than $m$. Then $\ord_P\mathfrak{a}>1$ by Proposition \ref{prp:tower}(\ref{itm:Ecomp}) (which also holds for regular $K$-varieties by Remark \ref{rmk:regular}), so $a_E=2-\ord_P\mathfrak{a}<1$. That is, $m<a_E<1$. Let $\mathfrak{a}_Y$ be the weak transform on $Y$ of $\mathfrak{a}$. Then $(Y,(1-a_E)E,\mathfrak{a}_Y)$ is crepant to $(X,\mathfrak{a})$. We claim that there exists a unique point $Q$ in $Y$ such that $\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)=m$, and claim that $Q$ is a $K$-point.
Let $Q$ be an arbitrary closed point in $Y$ such that $\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)=m$. Such $Q$ exists since $a_E\neq m$. Set the base change $\bar X=X\times_{\Spec K}\Spec\bar K$ of $X$ to the algebraic closure $\bar K$ of $K$. Let $\bar P$, $\bar\mathfrak{a}$, $\bar Y$, $\bar E$ and $\bar\mathfrak{a}_Y$ be the base changes of $P$, $\mathfrak{a}$, $Y$, $E$ and $\mathfrak{a}_Y$ to $\bar K$ as well. Then every closed point $\bar Q$ in $Q\times_X\bar X$ satisfies that $\mld_{\bar Q}(\bar Y,(1-a_E)\bar E,\bar\mathfrak{a}_Y)=\mld_{\bar P}(\bar X,\bar\mathfrak{a})$. Thus our claims on $Q$ come from those on $\bar Q$, so we may assume that $K$ is algebraically closed.
One has that $\mld_Q(Y,E,\mathfrak{a}_Y)\le\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)-a_E=m-a_E<0$, which means that $(Y,E,\mathfrak{a}_Y)$ is not lc at $Q$. By inversion of adjunction, $(E,\mathfrak{a}_Y\mathscr{O}_E)$ is not lc at $Q$, that is, $\ord_Q(\mathfrak{a}_Y\mathscr{O}_E)>1$. Hence the number of such points $Q$ is less than the degree of the divisor on $E\simeq\mathbf{P}_K^1$ defined by $\mathfrak{a}_Y\mathscr{O}_E$, which equals $\ord_E\mathfrak{a}=2-a_E$. Thus, the uniqueness of $Q$ follows.
While $a_E$ is greater than $m$, we replace $P\in(X,\mathfrak{a})$ with $Q\in(Y,\mathscr{O}_Y(-E)^{1-a_E}\cdot\mathfrak{a}_Y)$ and repeat the same argument. This procedure terminates after finitely many steps. Indeed, let $l$ be the minimum of $a_F(X)$ for all divisors $F$ over $X$ computing $\mld_P(X,\mathfrak{a})$. Then after at most $(l-1)$ blow-ups, one attains a divisor which computes $\mld_P(X,\mathfrak{a})$.
\end{proof}
\begin{example}
There may exist a divisor computing $\mld_P(X,\mathfrak{a})$ which is not obtained by a sequence of blow-ups at a $K$-point. For example, let $P\in\mathbf{A}_\mathbf{R}^2$ be the germ at the origin of the affine plane over $\mathbf{R}$ with coordinates $x_1$, $x_2$, and $H$ be the divisor on $\mathbf{A}_\mathbf{R}^2$ defined by $x_1^2+x_2^2$. Then $\mld_P(\mathbf{A}_\mathbf{R}^2,H)=0$. Let $Y$ be the blow-up of $\mathbf{A}_\mathbf{R}^2$ at $P$ and $E$ be its exceptional divisor. Then $(Y,H_Y+E)$ is crepant to $(\mathbf{A}_\mathbf{R}^2,H)$, where $H_Y$ is the strict transform. The intersection $Q$ of $H_Y$ and $E$ is a $\mathbf{C}$-point such that $\mld_Q(Y,H_Y+E)=0$.
\end{example}
Now we apply Theorem \ref{thm:mldwbu} to $K$-varieties of dimension two.
\begin{proposition}\label{prp:wbu}
Let $P\in X$ be the germ at a $K$-point of a regular $K$-variety of dimension two and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Then there exists a divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a weighted blow-up.
\end{proposition}
\begin{proof}
We may assume the log canonicity of $(X,\mathfrak{a})$ by the argument in the first paragraph of the proof of Lemma \ref{lem:Kpoint}. By Lemma \ref{lem:Kpoint}, there exists a divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a sequence of finitely many blow-ups at a $K$-point. Set the base change $\bar X=X\times_{\Spec K}\Spec\bar K$ of $X$ to the algebraic closure $\bar K$ of $K$ and let $\bar P$ and $\bar\mathfrak{a}$ be the base changes of $P$ and $\mathfrak{a}$ to $\bar X$. Since $E$ is obtained by finitely many blow-ups at a $K$-point, its base change $\bar E=E\times_X\bar X$ is irreducible, so $\bar E$ is a divisor over $\bar X$. Thus by Theorem \ref{thm:mldwbu}, there exists a regular system $x_1,x_2$ of parameters in $\mathscr{O}_{\bar X,\bar P}$ such that $\bar E$ is obtained by the weighted blow-up of $\bar X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$.
We shall show that one can take $x_1$ and $x_2$ from $\mathscr{O}_X$. This is obvious when $w_1=w_2=1$ because the weighted blow-up in this case is nothing but the blow-up at the point. Suppose that $w_1>w_2$. Let $L$ be a finite Galois extension of $K$ such that $x_1$ and $x_2$ belong to $\mathscr{O}_X\otimes_KL$. Then for any element $\sigma$ of the Galois group $G$ of $L/K$, the $\bar E=\bar E^\sigma$ is obtained by the weighted blow-up with $\wt(x_1^\sigma,x_2^\sigma)=(w_1,w_2)$. Thus one can replace $x_i$ with its trace $\sum_{\sigma\in G}x_i^\sigma$ by Remark \ref{rmk:wbu}. Here one can assume that $\sum_{\sigma\in G}x_i^\sigma\in\mathfrak{m}\setminus\mathfrak{m}^2$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, by replacing $x_i$ with $\lambda_ix_i$ for a general member $\lambda_i$ in $L$.
Now we may assume that $x_1$ and $x_2$ belong to $\mathscr{O}_X$. Then $E$ is obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:wbu}}]
\textit{Step} 1.
First of all, remark that $C$ is the smallest lc centre of $(X,\mathfrak{a})$. The $C$ is regular by \cite[Theorem 1.2]{K15}, and it is geometrically irreducible because its base change to any field is again the smallest lc centre of the base change of $(X,\mathfrak{a})$. Thus, there exists a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_X$ such that the ideal $\mathscr{I}_C$ in $\mathscr{O}_X$ defining $C$ is generated by $x_1$ and $x_2$. If we consider, instead of $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$, the $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j(\mathfrak{a}_j+(x_1,x_2)^l\mathscr{O}_X)^{r_j}$ for a large integer $l$, then $C$ is still the smallest lc centre of $(X,\mathfrak{b})$ and $\mld_P(X,\mathfrak{b})\ge\mld_P(X,\mathfrak{a})=1$. On the other hand, $\mld_P(X,\mathfrak{b})$ is at most one by \cite[Proposition 6.1]{K15}. Thus $\mld_P(X,\mathfrak{b})$ must equal one.
Hence by replacing $\mathfrak{a}$ with $\mathfrak{b}$, we may assume that $\mathfrak{a}$ is the pull-back of an $\mathbf{R}$-ideal $\mathfrak{a}'$ on $X'=\Spec K[[x_3]][x_1,x_2]$. Set $X''=\Spec K((x_3))[x_1,x_2]$, where $K((x_3))$ is the quotient field of $K[[x_3]]$. There exist natural morphisms
\begin{align*}
X\to X'\leftarrow X''.
\end{align*}
Let $P'$ be the point of $X'$ defined by $(x_1,x_2,x_3)\mathscr{O}_{X'}$ and $P''$ be the point of $X''$ defined by $(x_1,x_2)\mathscr{O}_{X''}$.
One has that $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})=\mld_{\eta_C}(X,\mathfrak{a})=0$. By Proposition \ref{prp:wbu}, there exists a divisor $E''$ over $X''$ computing $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})$ which is obtained by a weighted blow-up of $X''$. Let $E'$ be the unique divisor over $X'$ such that $E''=E'\times_{X'}X''$ and let $E=E'\times_{X'}X$. Note that $C$ is the centre of $E$ on $X$.
\textit{Step} 2.
There exists a regular tower $\mathcal{T}''$ on $X''$ with respect to $E''$ in the sense of Definition \ref{dfn:tower} (which can be extended to $K((x_3))$-varieties). As seen in Remark \ref{rmk:toric}, $\mathcal{T}''$ is a finite sequence $X''_l\to\cdots\to X''_0=X''$ of blow-ups at a $K((x_3))$-point and the exceptional divisor $F''_i$ of $X''_{i+1}\to X''_i$ is obtained by a weighted blow-up of $X''$. Note that $E''=F''_{l-1}$. Possibly by replacing $E''$ with some $F''_i$, we may assume that $F_i''$ does not compute $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})$ unless $i=l-1$.
The $\mathcal{T}''$ is compactified over $X'$, that is, $X''_{i+1}\to X''_i$ is the base change of a projective birational morphism $X'_{i+1}\to X'_i$ of regular schemes. Then the base changes $X_i=X'_i\times_{X'}X$ to $X$ form a tower $\mathcal{T}$ on $X$ with respect to $E$. Let $C_i$ be the centre on $X_i$ of $E$. Since $\mathcal{T}''$ consists of blow-ups at a $K((x_3))$-point, $C_i$ is birational to $C$ for any $i<l$. Hence $C_i$ must be isomorphic to the regular scheme $C$. Therefore one can replace $X_i$ and $X'_i$ inductively so that $\mathcal{T}$ is the regular tower on $X$ with respect to $E$.
Let $F_i$ denote the exceptional divisor of $X_{i+1}\to X_i$, and set $a_i=a_{F_i}(X,\mathfrak{a})$. By our construction, every $a_i$ is positive except for $i=l-1$ while $a_{l-1}$ is zero. Let $\mathfrak{a}_i$ be the weak transform on $X_i$ of $\mathfrak{a}$ and set the $\mathbf{R}$-divisor $\Delta_i=\sum_{j=0}^{i-1}(1-a_j)F_j^i$ on $X_i$, where $F_j^i$ is the strict transform of $F_j$. Then $(X_i,\Delta_i,\mathfrak{a}_i)$ is crepant to $(X,\mathfrak{a})$. We claim that $a_i<1$ for any $i$. This is obvious for $i=l-1$ since $a_{l-1}=0$. In order to see the inequality $a_i<1$ for the fixed index $i<l-1$ by induction, assume that $a_j<1$ for any $j$ less than $i$. Then $\Delta_i$ is effective. Since $F_i$ does not compute $\mld_{\eta_C}(X,\mathfrak{a})$, one has that $\ord_{F_i}\Delta_i+\ord_{F_i}\mathfrak{a}_i>1$ by Proposition \ref{prp:tower}. Hence one obtains that $a_i=a_{F_i}(X_i,\Delta_i,\mathfrak{a}_i)=2-(\ord_{F_i}\Delta_i+\ord_{F_i}\mathfrak{a}_i)<1$.
\textit{Step} 3.
We have that $0<a_i<1$ for any $i<l-1$ while $a_{l-1}=0$. For $i<l$, let $P_i$ be the $K$-point in $C_i$ mapped to $P$. We let $f_i$ denote the fibre of $F_i\to C_i$ over $P_i$, which is isomorphic to $\mathbf{P}_K^1$. We claim that for any indices $i$ and $j$ such that $j<i<l$, the centre $C_i$ is either disjoint from $F_j^i$ or contained in $F_j^i$.
Indeed, if $C_i$ intersected $F_j^i$ properly at $P_i$, then the morphism $F_j^{i+1}\to F_j^i$ would not be an isomorphism. Thus $F_j^{i+1}$ must contain the fibre $f_i$ of $F_i\to C_i$. In particular, $F_j^{i+1}$ intersects $C_{i+1}$. On the other hand, $C_{i+1}$ is not contained in $F_j^{i+1}$ as $C_i$ is not in $F_j^i$. Thus one obtains that $C_{i+1}$ must also intersect $F_j^{i+1}$ properly at $P_{i+1}$, unless $i+1=l$. Repeating this argument, one would have that $F_j^l$ contains $f_{l-1}$, just as $F_{l-1}$ does. Now let $G$ be the divisor obtained by the blow-up of $X_l$ along $f_{l-1}$. One computes that
\begin{align*}
a_G(X,\mathfrak{a})\le a_G(X_l,\Delta_l)\le2-(1-a_j)-(1-a_{l-1})=a_j<1,
\end{align*}
which contradicts that $\mld_P(X,\mathfrak{a})=1$.
\textit{Step} 4.
Let $i$ be any index such that $\ord_{F_i}\mathscr{I}_C=1$. We shall show that there exists a part $y_1$ of a regular system of parameters in $\mathscr{O}_X$ such that $C_i$ is contained in the strict transform $H_i$ on $X_i$ of the divisor on $X$ defined by $y_1$. This is obvious for $i=0$. The condition $\ord_{F_i}\mathscr{I}_C=1$ for the fixed $i\ge1$ implies that $\ord_{F_{i-1}}\mathscr{I}_C=1$ since
\begin{align*}
\ord_{F_i}\mathscr{I}_C=\ord_{F_i}\mathscr{I}_i+\sum_{j=0}^{i-1}\ord_{F_j}\mathscr{I}_C\cdot\ord_{F_i}{F_j^i}\ge\ord_{F_{i-1}}\mathscr{I}_C
\end{align*}
for the weak transform $\mathscr{I}_i$ on $X_i$ of $\mathscr{I}_C$. Hence by induction on $i$, we may assume the existence of $y_1$ such that $C_{i-1}$ is contained in $H_{i-1}$.
We extend $y_1$ to a regular system $y_1,y_2,x_3$ of parameters in $\mathscr{O}_X$ in which $y_2$ is a general member in $\mathscr{I}_C$. Then for any $j\le i-1$, $F_j$ is, as a divisor over $X$, obtained by the weighted blow-up of $X$ with $\wt(y_1,y_2)=(j+1,1)$, and the $y_1/y_2^j,y_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X_j,P_j}$. In particular, $f_{i-1}\simeq\mathbf{P}_K^1$ has homogeneous coordinates $y_1/y_2^{i-1}$, $y_2$. Moreover, the $K$-point $P_i\in f_{i-1}$ is not defined by $[y_1/y_2^{i-1}:y_2]=[1:0]$. This follows when $i=1$ from the general choice of $y_2$, and when $i\ge2$ from the property in Step 3 that $C_i$ does not intersect $F_{i-2}^i$. Take $c\in K$ such that $P_i\in f_{i-1}$ is defined by $[y_1/y_2^{i-1}:y_2]=[c:1]$. Replacing $y_1$ with $y_1-cy_2^i$, we may assume that $c=0$.
Then $y_1/y_2^i,y_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X_i,P_i}$. The $H_i$, $F_{i-1}$ and $f_{i-1}$ are defined at $P_i$ by $y_1/y_2^i$, $y_2$ and $(y_2,x_3)\mathscr{O}_{X_i}$. Because the fibration $F_{i-1}\to C_{i-1}$ is isomorphic to the projection of the product $\mathbf{P}_K^1\times_{\Spec K}C_{i-1}$, its section $C_i$ is defined at $P_i$ by $(y_1/y_2^i+x_3v(x_3),y_2)\mathscr{O}_{X_i}$ for some $v(x_3)\in K[[x_3]]$. After replacing $y_1$ with $y_1+y_2^ix_3v(x_3)$, one has that $C_i$ is contained in $H_i$.
\textit{Step} 5.
Let $e$ be the maximal index such that $\ord_{F_e}\mathscr{I}_C=1$ and choose a regular system $y_1,y_2,x_3$ of parameters in $\mathscr{O}_X$ such that $y_1$ satisfies the condition in Step 4 for $i=e$ and $y_2$ is a general member in $\mathscr{I}_C$. Now repeating the process in Step 1 for $y_1,y_2,x_3$ instead of $x_1,x_2,x_3$, we may assume that $x_1=y_1$ and $x_2=y_2$. Then by Remark \ref{rmk:toric}, one can obtain $E''=F''_{l-1}$ by a weighted blow-up with respect to the coordinates $x_1,x_2$. More precisely, there exist a non-negative integer $p$ and a positive integer $q$ such that $E''$ is obtained by the weighted blow-up of $X''$ with $\wt(x_1,x_2)=p(e,1)+q(e+1,1)$. Note that $p$ is positive if and only if $e+1<l$.
Therefore, we conclude that $E$ is also obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=p(e,1)+q(e+1,1)$.
\end{proof}
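Note that the form of the weights obtained in the proof of Theorem \ref{thm:wbu} is compatible with an arbitrary coprime pair: for coprime positive integers $w_1\ge w_2$ with $w_2\nmid w_1$, one recovers $(w_1,w_2)=p(e,1)+q(e+1,1)$ by taking $e=\lfloor w_1/w_2\rfloor$, $q=w_1-ew_2$ and $p=(e+1)w_2-w_1$, while for $w_2=1$ one takes $e=w_1-1$, $p=0$ and $q=1$. For example,
\begin{align*}
(7,3)=2(2,1)+1(3,1)
\end{align*}
with $e=2$, $p=2$ and $q=1$.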
\section{Reduction to the case of decomposed boundaries}\label{sct:reduction}
The objective of this section is to complete the reduction to Conjecture \ref{cnj:product}.
\begin{remark}\label{rmk:independent}
In order to prove Conjecture \ref{cnj:product} or a statement of the same kind, it is sufficient to find an integer $l$ which satisfies the required property but may depend on the germ $P\in X$ of a smooth threefold, for the reason that one has only to consider those $\mathfrak{a}$ which are $\mathfrak{m}$-primary. Indeed, there exists an \'etale morphism from $P\in X$ to the germ $o\in\mathbf{A}^3$ at origin of the affine space. Then as it is seen in Lemma \ref{lem:regular}, any $\mathfrak{m}$-primary ideal $\mathfrak{a}$ on $X$ is the pull-back of some ideal $\mathfrak{b}$ on $\mathbf{A}^3$, and $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ coincides with $\mld_o(\mathbf{A}^3,\mathfrak{b}^q\mathfrak{n}^s)$, where $\mathfrak{n}$ is the maximal ideal in $\mathscr{O}_{\mathbf{A}^3}$ defining $o$. Thus, the bound $l$ on the germ $o\in\mathbf{A}^3$ can be applied to an arbitrary germ $P\in X$.
\end{remark}
We shall make the reduction by using the generic limit of ideals. For a moment, we work in the general setting that $P\in X$ is the germ of a klt variety. Let $r_1,\ldots,r_e$ be positive real numbers and $\mathcal{S}=\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Let $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ be a generic limit of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}. The $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$ where $\hat X$ is the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$.
Let $\hat f\colon\hat Y\to\hat X$ be a projective birational morphism isomorphic outside $\hat P$. Suppose that $\hat Y$ is klt and the exceptional locus of $\hat f$ is a $\mathbf{Q}$-Cartier prime divisor $\hat F$. Let $\hat C$ be a closed proper subset of $\hat F$. As in Remark \ref{rmk:descend}, after replacing $\mathcal{F}$ with a subfamily, for any $l\ge l_0$ the $\hat f$ is descended to a projective morphism $f_l\colon Y_l\to X\times Z_l$ from a klt variety whose exceptional locus is a $\mathbf{Q}$-Cartier prime divisor $F_l$. One may assume that for any $i\in N_l$, the fibre $f_i\colon Y_i\to X$ at $s_l(i)\in Z_l$ is a morphism from a klt variety whose exceptional locus is a $\mathbf{Q}$-Cartier prime divisor $F_i$. Refer to \cite[Section B]{dFEM11} for the properties of a family of normal $\mathbf{Q}$-Gorenstein rational singularities. We may assume that $\hat C$ is descended to a closed subset $C_l$ in $F_l$. The $f_i$, $F_i$ and $C_i=C_l\times_{Y_l}Y_i$ are independent of $l$ because they are compatible with $t_l$.
\begin{lemma}\label{lem:relative}
Notation and assumptions as above. Suppose that $a_{\hat F}(\hat X,\mathsf{a})$ is at most one and that the intersection of $\hat F$ and the non-klt locus on $\hat Y$ of $(\hat X,\mathsf{a})$ is contained in $\hat C$. Then there exists a positive integer $l$ depending only on $\mathsf{a}$ and $\hat f$ such that after replacing $\mathcal{F}$ with a subfamily, for any $i\in N_{l_0}$, if a divisor $E$ over $X$ computes $\mld_P(X,\mathfrak{a}_i)$ and has centre $c_{Y_i}(E)$ not contained in $C_i$, then $a_E(X)\le l$.
\end{lemma}
\begin{proof}
Let $r$ be a positive integer such that $r\hat F$ is Cartier. We may assume that $rF_l$ is Cartier. By replacing $\mathfrak{a}_{ij}$ with $(\mathfrak{a}_{ij})^r$ and $r_j$ with $r_j/r$, we may assume that $\mathfrak{a}_{ij}$ is the $r$-th power of an ideal and so is $\mathsf{a}_j$. Thus one can define the weak transform $\mathfrak{a}_{iY}=\prod_j(\mathfrak{a}_{ijY})^{r_j}$ on $Y_i$ of $\mathfrak{a}_i$, as well as the weak transform $\mathsf{a}_Y$ on $\hat Y$ of $\mathsf{a}$. We may assume that $\ord_{\hat F}\mathsf{a}_j=\ord_{F_l}\mathfrak{a}_j(l)=\ord_{F_i}\mathfrak{a}_{ij}<l$ for any $i\in N_l$ and $j$. Set $\mathfrak{a}_{jY}(l)=\mathfrak{a}_j(l)\mathscr{O}_{Y_l}(a_jF_l)$ and $\mathfrak{a}_Y(l)=\prod_j(\mathfrak{a}_{jY}(l))^{r_j}$ for $a_j=\ord_{\hat F}\mathsf{a}_j$, which is divisible by $r$.
One can fix a positive real number $t$ such that the intersection of $\hat F$ and the non-klt locus on $\hat Y$ of $(\hat X,\mathsf{a}^{1+t})$ is still contained in $\hat C$. Set the real number $b$ so that $(\hat Y,b\hat F,\mathsf{a}_Y^{1+t})$ is crepant to $(\hat X,\mathsf{a}^{1+t})$, then $0\le b<1$ by $a_{\hat F}(\hat X,\mathsf{a})\le1$. Then $(Y_l,bF_l,\mathfrak{a}_Y(l)^{1+t})$ is crepant to $(X\times Z_l,\mathfrak{a}(l)^{1+t})$ while $(Y_i,bF_i,(\mathfrak{a}_{iY})^{1+t})$ is crepant to $(X,\mathfrak{a}_i^{1+t})$. One may assume that $(Y_l,bF_l,\mathfrak{a}_Y(l)^{1+t})$ is klt about $F_l\setminus C_l$.
Apply Corollary \ref{crl:relative} to the family $Y_l\setminus C_l\to Z_l$. Since $\mathfrak{a}_{ijY}+\mathfrak{m}^l\mathscr{O}_{Y_i}(a_jF_i)=\mathfrak{a}_{jY}(l)\mathscr{O}_{Y_i}$, one has that $(Y_i,bF_i,(\mathfrak{a}_{iY})^{1+t})$ is lc about $F_i\setminus C_i$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Thus if a divisor $E$ over $X$ satisfies that $c_{Y_i}(E)\not\subset C_i$, then $a_E(X,\mathfrak{a}_i^{1+t})\ge0$, that is, $t\ord_E\mathfrak{a}_i\le a_E(X,\mathfrak{a}_i)$. Hence,
\begin{align*}
a_E(X)=a_E(X,\mathfrak{a}_i)+\ord_E\mathfrak{a}_i\le(1+t^{-1})a_E(X,\mathfrak{a}_i).
\end{align*}
In addition if $E$ computes $\mld_P(X,\mathfrak{a}_i)$, then
\begin{align*}
a_E(X)\le(1+t^{-1})\mld_P(X,\mathfrak{a}_i)\le(1+t^{-1})\mld_PX.
\end{align*}
Hence any integer $l$ at least $(1+t^{-1})\mld_PX$ satisfies the required property.
\end{proof}
We provide a meta theorem which connects statements involving the maximal ideal $\mathfrak{m}$ to those involving $\mathfrak{m}$-primary ideals. For the property $\mathscr{P}$ in the theorem, one can take, for example, the empty property or the property of being terminal.
\begin{theorem}\label{thm:meta}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$. Let $\mathscr{P}$ be a property of canonical pairs $(X,\mathfrak{a}^q)$ for ideals $\mathfrak{a}$ on $X$. Then the following statements are equivalent.
\begin{enumerate}
\item\label{itm:maximal}
Fix a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and has the property $\mathscr{P}$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:mprimary}
Fix a non-negative rational number $s$ and a positive integer $b$. Then there exists a positive integer $l$ depending only on $q$, $s$ and $b$ such that if $\mathfrak{a}$ and $\mathfrak{b}$ are ideals on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and has the property $\mathscr{P}$ and that $\mathfrak{b}$ contains $\mathfrak{m}^b$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{b}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textit{Step} 1.
The statement (\ref{itm:maximal}) follows from the special case of (\ref{itm:mprimary}) when $b=1$. It remains to derive (\ref{itm:mprimary}) from (\ref{itm:maximal}). Let $\mathcal{S}=\{(\mathfrak{a}_i,\mathfrak{b}_i)\}_{i\in\mathbf{N}}$ be an arbitrary sequence of pairs of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and has the property $\mathscr{P}$ and such that $\mathfrak{b}_i$ contains $\mathfrak{m}^b$. Assuming (\ref{itm:maximal}), it is sufficient to find an integer $l$ such that for infinitely many $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ and satisfies the inequality $a_{E_i}(X)\le l$. Note Remark \ref{rmk:independent}. By Theorem \ref{thm:nonpos}, we may assume that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ is positive. Then by Corollary \ref{crl:mult}, there exists a positive integer $b_0$ depending only on $q$ and $s$ such that $\ord_{G_i}\mathfrak{m}\le b_0$ for every divisor $G_i$ over $X$ computing $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$.
\textit{Step} 2.
We construct a generic limit $(\mathsf{a},\mathsf{b})$ of $\mathcal{S}$ using the notation in Section \ref{sct:limit}. The $(\mathsf{a},\mathsf{b})$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,(\mathfrak{a}(l),\mathfrak{b}(l)),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$. The $\mathsf{a}$ and $\mathsf{b}$ are ideals on $\hat P\in\hat X$ where $\hat X$ is the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. Note that $\hat\mathfrak{m}^b\subset\mathsf{b}$ by $\mathfrak{m}^b\subset\mathfrak{b}_i$. By Lemma \ref{lem:limtonak} and Remark \ref{rmk:limit}(\ref{itm:limitineq}), the existence of $l$ is reduced to the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. By Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q\mathsf{b}^s)$ has the smallest lc centre $\hat C$ which is regular and of dimension one.
Since $\mathsf{b}$ is $\hat\mathfrak{m}$-primary, $\hat C$ is also the smallest lc centre of $(\hat X,\mathsf{a}^q)$. In particular, $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$ by Theorem \ref{thm:grthan1}(\ref{itm:case4}), while $\mld_{\hat P}(\hat X,\mathsf{a}^q)\ge1$ by Remark \ref{rmk:limit}(\ref{itm:limitineq}). Thus $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$.
\textit{Step} 3.
We apply Theorem \ref{thm:wbu} to $(\hat X,\mathsf{a}^q)$. There exist a divisor $\hat E$ over $\hat X$ computing $\mld_{\eta_{\hat C}}(\hat X,\mathsf{a}^q)$ and a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{\hat X}$ such that $\hat E$ is obtained by the weighted blow-up of $\hat X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$. We take $x_3$ generally from $\hat\mathfrak{m}$ so that $\ord_{x_3}\mathsf{a}$ is zero, where $\ord_{x_3}$ stands for the order along the divisor on $\hat X$ defined by $x_3$. Note that $\hat C$ is geometrically irreducible.
We fix a positive integer $j$ such that
\begin{align*}
j>bb_0.
\end{align*}
Let $\hat f\colon\hat Y\to\hat X$ be the weighted blow-up with $\wt(x_1,x_2,x_3)=(jw_1,jw_2,1)$ and $\hat F$ be its exceptional divisor. By Remark \ref{rmk:wbu}, there exists a regular system $y_1,y_2,y_3$ of parameters in $\mathscr{O}_{X,P}\otimes_kK$ such that $\hat f$ is also the weighted blow-up with $\wt(y_1,y_2,y_3)=(jw_1,jw_2,1)$ (after regarding $y_1,y_2,y_3$ as elements in $\mathscr{O}_{\hat X}$).
As discussed in the paragraph prior to Lemma \ref{lem:relative}, after replacing $\mathcal{F}$ with a subfamily, $\hat f$ descends to a projective morphism $f_l\colon Y_l\to X\times Z_l$ for any $l\ge l_0$. One can assume that $y_1,y_2,y_3$ come from $\mathscr{O}_{X,P}\otimes_k\mathscr{O}_{Z_l}$ and that for any $i\in N_l$, their fibres $y_{1i},y_{2i},y_{3i}$ at $s_l(i)\in Z_l$ form a regular system of parameters in $\mathscr{O}_{X,P}$. The $Y_l$ is klt and the exceptional locus of $f_l$ is a $\mathbf{Q}$-Cartier prime divisor $F_l$. The fibre $f_i\colon Y_i\to X$ of $f_l$ at $s_l(i)$ is the weighted blow-up of $X$ with $\wt(y_{1i},y_{2i},y_{3i})=(jw_1,jw_2,1)$ whose exceptional divisor is $F_i=F_l\times_{Y_l}Y_i$.
Since $(jw_1,jw_2,1)=j(w_1,w_2,0)+(0,0,1)$, one has the inequality
\begin{align*}
\ord_{\hat F}\mathsf{a}^q\ge j\ord_{\hat E}\mathsf{a}^q+\ord_{x_3}\mathsf{a}^q=j(w_1+w_2)
\end{align*}
using $\ord_{\hat E}\mathsf{a}^q=a_{\hat E}(\hat X)-a_{\hat E}(\hat X,\mathsf{a}^q)=w_1+w_2$. Equivalently, $a_{\hat F}(\hat X,\mathsf{a}^q)=a_{\hat F}(\hat X)-\ord_{\hat F}\mathsf{a}^q\le 1$. Hence $a_{\hat F}(\hat X,\mathsf{a}^q)=1$ by $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$ in Step 2.
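For convenience, this step can be spelled out with the standard formula $a_{\hat F}(\hat X)=jw_1+jw_2+1$ for the weighted blow-up of a smooth threefold point with $\wt(y_1,y_2,y_3)=(jw_1,jw_2,1)$:
\begin{align*}
a_{\hat F}(\hat X,\mathsf{a}^q)=jw_1+jw_2+1-\ord_{\hat F}\mathsf{a}^q\le j(w_1+w_2)+1-j(w_1+w_2)=1.
\end{align*}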
\textit{Step} 4.
Let $\hat Q$ be the closed point in $\hat F$ which lies on the strict transform of $\hat C$. For $i\in N_l$, let $Q_i$ be the closed point in $F_i$ which lies on the strict transform of the curve on $X$ defined by $(y_{1i},y_{2i})\mathscr{O}_X$. Applying Lemma \ref{lem:relative} to $(\hat X,\mathsf{a}^q\mathsf{b}^s)$, $\hat f$, and $\hat Q$, one has only to treat the case when $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ is computed by a divisor $G_i$ such that $c_{Y_i}(G_i)=Q_i$.
For such $G_i$, the inequality $\ord_{G_i}\mathfrak{m}\le b_0$ holds by Step 1. Thus,
\begin{align*}
\ord_{G_i}\mathfrak{b}_i\le\ord_{G_i}\mathfrak{m}^b\le bb_0<j\le jw_2&=\ord_{F_i}(y_{1i},y_{2i})\mathscr{O}_X\\
&\le\ord_{F_i}(y_{1i},y_{2i})\mathscr{O}_X\cdot\ord_{G_i}F_i\\
&\le\ord_{G_i}(y_{1i},y_{2i})\mathscr{O}_X,
\end{align*}
whence $\ord_{G_i}\mathfrak{b}_i=\ord_{G_i}(\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X)$.
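Indeed, the order along $G_i$ of a sum of ideals is the minimum of the orders, so the strict inequality above gives
\begin{align*}
\ord_{G_i}(\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X)=\min\{\ord_{G_i}\mathfrak{b}_i,\ord_{G_i}(y_{1i},y_{2i})\mathscr{O}_X\}=\ord_{G_i}\mathfrak{b}_i.
\end{align*}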
By $\hat\mathfrak{m}^b\subset\mathsf{b}$, there exists a non-negative integer $b'$ at most $b$ satisfying that
\begin{align*}
\mathsf{b}+(y_1,y_2)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2)\mathscr{O}_{\hat X}.
\end{align*}
Then one can assume that $\mathfrak{b}(l)+(y_1,y_2)\mathscr{O}_{X\times Z_l}=(\mathfrak{m}^{b'}+\mathfrak{m}^l)\mathscr{O}_{X\times Z_l}+(y_1,y_2)\mathscr{O}_{X\times Z_l}$ for any $l\ge l_0$, which derives the inclusion $\mathfrak{m}^{b'}\subset\mathfrak{b}_i+\mathfrak{m}^l+(y_{1i},y_{2i})\mathscr{O}_X$. One may assume that $l_0\ge b$; then $\mathfrak{m}^{b'}\subset\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X$. Thus, $\ord_{G_i}\mathfrak{b}_i=\ord_{G_i}(\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X)\le\ord_{G_i}\mathfrak{m}^{b'}$. In particular,
\begin{align*}
\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})\le a_{G_i}(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})\le a_{G_i}(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)=\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s).
\end{align*}
\textit{Step} 5.
We want the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$, as seen in Step 2. Applying our assumption (\ref{itm:maximal}) in the case when the exponent of $\mathfrak{m}$ is one of $0,s,2s,\ldots,bs$, there exists a positive integer $l'$ depending only on $q$, $s$ and $b$ such that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$ is computed by a divisor $E_i$ satisfying the inequality $a_{E_i}(X)\le l'$. One has that $\ord_{E_i}\mathfrak{a}_i=q^{-1}(a_{E_i}(X)-a_{E_i}(X,\mathfrak{a}_i^q))\le q^{-1}l'$, so $E_i$ computes $\mld_P(X,(\mathfrak{a}_i+\mathfrak{m}^e)^q\mathfrak{m}^{sb'})$ for any integer $e$ at least $q^{-1}l'$, which equals $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$. Together with Lemma \ref{lem:resolution}, one obtains that $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^{sb'})=\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Hence by Step 4, the problem is reduced to showing the equality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)=\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^{sb'}).
\end{align*}
The equality $\mathsf{b}+(y_1,y_2)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2)\mathscr{O}_{\hat X}$ shows that
\begin{align*}
\mathsf{b}+(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}.
\end{align*}
Since $(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}=\hat f_*\mathscr{O}_{\hat Y}(-j\hat F)=\mathscr{I}_{\hat C}+\hat\mathfrak{m}^j$, where $\mathscr{I}_{\hat C}$ is the ideal sheaf of $\hat C$, one concludes that $\mathsf{b}+\mathscr{I}_{\hat C}=\hat\mathfrak{m}^{b'}+\mathscr{I}_{\hat C}$, using $\hat\mathfrak{m}^j\subset\hat\mathfrak{m}^b\subset\mathsf{b}$ and $\hat\mathfrak{m}^j\subset\hat\mathfrak{m}^{b'}$. Therefore, the required equality follows from the precise inversion of adjunction, Corollary \ref{crl:pia}.
\end{proof}
Precise inversion of adjunction compares the minimal log discrepancy of a pair and that of its restricted pair by adjunction. Let $P\in X$ be the germ of a normal variety and $S+B$ be an effective $\mathbf{R}$-divisor on $X$ such that $S$ is a normal prime divisor which does not appear in $B$. Suppose that they form a pair $(X,S+B)$. Then one has the adjunction $K_X+S+B|_S=K_S+B_S$ in which $B_S$ is the different on $S$ of $B$.
\begin{conjecture}[Precise inversion of adjunction]\label{cnj:pia}
Notation as above. Then one has that $\mld_P(X,S+B)=\mld_P(S,B_S)$.
\end{conjecture}
This conjecture is regarded as the more precise version of Theorem \ref{thm:ia}. At present we know two cases when it holds. One is when $X$ is smooth \cite{EMY03} or more generally has lci singularities \cite{EM04}. The other is when the minimal log discrepancy is at most one \cite{BCHM10}, that is,
\begin{theorem}\label{thm:pia}
Conjecture \textup{\ref{cnj:pia}} holds when $(X,\Delta)$ is klt for some boundary $\Delta$ and $\mld_P(X,S+B)$ is at most one.
\end{theorem}
\begin{proof}
It is enough to show the inequality $\mld_P(X,S+B)\ge\mld_P(S,B_S)$. By inversion of adjunction, we may assume that $0<\mld_P(X,S+B)\le1$. Then by \cite[Corollary 1.4.3]{BCHM10}, there exists a projective birational morphism $\pi\colon Y\to X$ from a $\mathbf{Q}$-factorial normal variety such that the divisorial part of its exceptional locus is a prime divisor $E$ computing $\mld_P(X,S+B)$. Let $S_Y$ and $B_Y$ denote the strict transforms of $S$ and $B$. Then the pull-back of $(X,S+B)$ is $(Y,S_Y+B_Y+bE)$ where $b=1-\mld_P(X,S+B)\ge0$. Let $C$ be an arbitrary irreducible component of $E\cap S_Y$. By $\mld_P(X,S+B)>0$, the $(Y,S_Y+B_Y+bE)$ is plt about the generic point $\eta_C$ of $C$. By adjunction, one can write
\begin{align*}
K_Y+S_Y+B_Y+bE|_{S_Y}=K_{S_Y}+B_{S_Y}
\end{align*}
about $\eta_C$. By \cite[Corollary 3.10]{Sh93}, $C$ has coefficient at least $b$ in $B_{S_Y}$. Hence, one obtains that $\mld_P(S,B_S)\le a_C(S_Y,B_{S_Y})\le1-b=\mld_P(X,S+B)$.
\end{proof}
\begin{lemma}\label{lem:pia}
Let $P\in X$ be the germ of a klt variety and $S$ be a prime divisor on $X$ such that $(X,S)$ is plt. Let $\Delta$ be the different on $S$ defined by $K_X+S|_S=K_S+\Delta$. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$ and $\hat P$ be its closed point. Set $\hat S=S\times_X\hat X$ and $\hat\Delta=\Delta\times_X\hat X$. Let $\mathsf{a}$ be an $\mathbf{R}$-ideal on $\hat X$ such that $\mld_{\hat P}(\hat X,\hat S,\mathsf{a})\le1$. Then $\mld_{\hat P}(\hat X,\hat S,\mathsf{a})=\mld_{\hat P}(\hat S,\hat\Delta,\mathsf{a}\mathscr{O}_{\hat S})$.
\end{lemma}
\begin{proof}
Adding a high multiple of the maximal ideal $\hat\mathfrak{m}$ in $\mathscr{O}_{\hat X}$ to each component of $\mathsf{a}$, we may assume that $\mathsf{a}$ is $\hat\mathfrak{m}$-primary. Then $\mathsf{a}$ is the pull-back of an $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$. By Remark \ref{rmk:regular}, the assertion is reduced to the precise inversion of adjunction $\mld_P(X,S,\mathfrak{a})=\mld_P(S,\Delta,\mathfrak{a}\mathscr{O}_S)$ for varieties, which follows from Theorem \ref{thm:pia}.
\end{proof}
\begin{proposition}\label{prp:piaE}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $x_1,x_2$ be a part of a regular system of parameters in $\mathscr{O}_X$ and $w_1,w_2$ be coprime positive integers. Let $Y\to X$ be the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$, $E$ be its exceptional divisor, and $f$ be the fibre of $E\to X$ at $P$. Let $\Delta$ be the different on $E$ defined by $K_Y+E|_E=K_E+\Delta$. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ whose weak transform $\mathfrak{a}_Y$ on $Y$ is defined. Suppose that $a_E(X,\mathfrak{a})$ is zero. Then $\mld_P(X,\mathfrak{a})=\mld_f(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$.
\end{proposition}
\begin{proof}
By the regular base change, we may assume that $K$ is algebraically closed. Since $(Y,E,\mathfrak{a}_Y)$ is crepant to $(X,\mathfrak{a})$, it is enough to prove that $\mld_f(Y,E,\mathfrak{a}_Y)=\mld_f(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$.
Extend the $x_1,x_2$ to a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_X$ and set $X'=\Spec K[x_1,x_2,x_3]$. Then $Y$ is the base change of the weighted blow-up of $X'$ with $\wt(x_1,x_2)=(w_1,w_2)$. Thus, the equality $\mld_{\eta_f}(Y,E,\mathfrak{a}_Y)=\mld_{\eta_f}(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$ follows from Lemma \ref{lem:pia} by cutting by the strict transform of the divisor on $X$ defined by $x_1^{w_2}+\lambda x_2^{w_1}$ for a general member $\lambda$ in $K$. Together with $\mld_f(Y,E,\mathfrak{a}_Y)\le\mld_{\eta_f}(Y,E)=1$, it is sufficient to verify that $\mld_Q(Y,E,\mathfrak{a}_Y)=\mld_Q(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$ for any closed point $Q$ in $f$ such that $\mld_Q(Y,E,\mathfrak{a}_Y)\le1$, which follows from Lemma \ref{lem:pia} again.
\end{proof}
\begin{corollary}\label{crl:pia}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ be $\mathbf{R}$-ideals on $X$. Suppose that $\mld_P(X,\mathfrak{a})$ equals one and that $(X,\mathfrak{a})$ has an lc centre $C$ of dimension one on which $\mathfrak{b}\mathscr{O}_C=\mathfrak{c}\mathscr{O}_C$. Then $\mld_P(X,\mathfrak{a}\mathfrak{b})=\mld_P(X,\mathfrak{a}\mathfrak{c})$.
\end{corollary}
\begin{proof}
We may assume that $C$ is not contained in the cosupport of $\mathfrak{b}\mathfrak{c}$, because otherwise $\mld_P(X,\mathfrak{a}\mathfrak{b})=\mld_P(X,\mathfrak{a}\mathfrak{c})=-\infty$. By Theorem \ref{thm:wbu}, there exist a divisor $E$ over $X$ computing $\mld_{\eta_C}(X,\mathfrak{a})=0$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $E$ is obtained by the weighted blow-up $Y$ of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some $w_1,w_2$. We may assume that the weak transform $\mathfrak{a}_Y$ on $Y$ of $\mathfrak{a}$ is defined. Then, the assertion follows from Proposition \ref{prp:piaE} by $\mathfrak{b}\mathscr{O}_E=\mathfrak{c}\mathscr{O}_E$.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:first}}]
It is sufficient to derive Conjectures \ref{cnj:acc}, \ref{cnj:alc}, \ref{cnj:madic} and \ref{cnj:nakamura} from Conjecture \ref{cnj:product}. All $\mathbf{R}$-ideals on the germ $P\in X$ of a smooth threefold in Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} may be assumed to be $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$. There exists an \'etale morphism from $P\in X$ to the germ $o\in\mathbf{A}^3$ at the origin of the affine space, by which any $\mathfrak{m}$-primary $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$ is the pull-back of some $\mathbf{R}$-ideal $\mathfrak{b}$ on $\mathbf{A}^3$ by Lemma \ref{lem:regular}. Thus for Conjectures \ref{cnj:acc} to \ref{cnj:nakamura}, one has only to consider $\mathbf{R}$-ideals on the fixed germ $P\in X$ of a smooth threefold.
By Lemma \ref{lem:rational}, these conjectures are reduced to the case $I=\{1/n\}$ of Conjecture \ref{cnj:nakamura}, that is, for a fixed positive integer $n$, it is enough to find an integer $l$ such that if $\mathfrak{a}$ is an ideal on $X$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^{1/n})$ and satisfies the inequality $a_E(X)\le l$. By Theorem \ref{thm:nonpos}, we have only to consider those $\mathfrak{a}$ for which $\mld_P(X,\mathfrak{a}^{1/n})$ is positive. Then by Corollary \ref{crl:mult}, there exists a positive integer $b$ depending only on $n$ such that $\ord_E\mathfrak{m}\le b$ for every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^{1/n})$.
Set $q=1/n$ and apply Theorem \ref{thm:canonical}. It is enough to bound $a_E(X)$ for those $\mathfrak{a}$ in the case (\ref{itm:reduced}) of Theorem \ref{thm:canonical}. Suppose that we are in this case and use the notation in Theorem \ref{thm:canonical}. Let $E$ be an arbitrary divisor over $Y$ which computes $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$, which equals $\mld_P(X,\mathfrak{a}^q)$. Then,
\begin{align*}
a_E(X)=a_E(Y)+\sum_F(a_F(X)-1)\ord_EF\le a_E(Y)+(c-1)\sum_F\ord_EF,
\end{align*}
in which the summation runs over all exceptional prime divisors $F$ on $Y$, and
\begin{align*}
\sum_F\ord_EF\le\ord_E\mathfrak{m}\le b.
\end{align*}
Thus the boundedness of $a_E(X)$ is reduced to that of $a_E(Y)$. In other words, it is sufficient to treat the divisors computing $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$. The ideal $\mathfrak{b}=\mathscr{O}_Y(-n\Delta)$ in $\mathscr{O}_Y$ is defined since $n\Delta$ is integral, for which $(Y,\mathfrak{a}_Y^q\mathfrak{b}^q)$ is crepant to $(Y,\Delta,\mathfrak{a}_Y^q)$. The $\mathfrak{b}$ satisfies that $\ord_E\mathfrak{b}\le n\sum_F\ord_EF\le nb$ by $\mathscr{O}_Y(-n\sum F)\subset\mathfrak{b}$. In particular, $E$ computes $\mld_Q(Y,\mathfrak{a}_Y^q(\mathfrak{b}+\mathfrak{n}^{nb})^q)$ as well as $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$ for the maximal ideal $\mathfrak{n}$ in $\mathscr{O}_Y$ defining $Q$.
Replacing the notation $(Y,\mathfrak{a}_Y^q(\mathfrak{b}+\mathfrak{n}^{nb})^q)$ with $(X,\mathfrak{a}^q\mathfrak{b}^q)$ and $\mathfrak{n}$ with $\mathfrak{m}$, Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} follow from the boundedness of $a_E(X)$ for some divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{b}^q)$ such that $\mld_P(X,\mathfrak{a}^q)\ge1$ and such that $\mathfrak{m}^{nb}\subset\mathfrak{b}$. One may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. Then one can apply Theorem \ref{thm:meta} with the property $\mathscr{P}$ being empty, which reduces the boundedness of $a_E(X)$ to Conjecture \ref{cnj:product}.
\end{proof}
\section{Boundedness results}
In this section, we shall prove Conjecture \ref{cnj:product} in several cases. First we treat the case when either $(X,\mathfrak{a}^q)$ is terminal or $s$ is zero.
\begin{proof}[Proof of Theorem \textup{\ref{thm:terminal}}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is terminal. We construct a generic limit $\mathsf{a}$ of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. To see the assertion (\ref{itm:terminal}), by Lemma \ref{lem:limtonak} and Remark \ref{rmk:independent}, it is enough to show the equality $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)=\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. One has that $\mld_{\hat P}(\hat X,\mathsf{a}^q)>1$ by Remark \ref{rmk:limit}(\ref{itm:limitineq}). Thus $(\hat X,\mathsf{a}^q)$ satisfies the case \ref{cas:case1}, \ref{cas:case2} or \ref{cas:case3} in Theorem \ref{thm:grthan1}, which derives that $(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)$ does not have the smallest lc centre of dimension one. Hence the required equality holds by Theorem \ref{thm:grthan1}.
For (\ref{itm:zero}), starting instead with $\mathcal{S}$ such that $(X,\mathfrak{a}_i^q)$ is canonical, we need to show the equality $\mld_{\hat P}(\hat X,\mathsf{a}^q)=\mld_P(X,\mathfrak{a}_i^q)$. This holds in the cases other than the case \ref{cas:case4} in Theorem \ref{thm:grthan1}, so we may assume the case \ref{cas:case4}, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By $\mld_P(X,\mathfrak{a}_i^q)\ge1$, the required equality follows from Remark \ref{rmk:limit}(\ref{itm:limitineq}).
\end{proof}
We shall study the case in Theorem \ref{thm:second}(\ref{itm:half}) when the lc threshold of the maximal ideal is at most one-half. We prepare a useful criterion for identifying a divisor over a variety.
\begin{lemma}\label{lem:parallel}
Let $P\in X$ be the germ of a smooth variety and $E$ be a divisor over $X$. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_{X,P}$ and $w_1,\ldots,w_c$ be positive integers. Let $Y\to X$ be the weighted blow-up with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$, $F$ be its exceptional divisor, and $H_i$ be the strict transform of the divisor on $X$ defined by $x_i$. Suppose that
\begin{itemize}
\item
$c_X(E)$ coincides with $c_X(F)$, and
\item
the vector $(w_1,\ldots,w_c)$ is parallel to $(\ord_Ex_1,\ldots,\ord_Ex_c)$.
\end{itemize}
Then the centre on $Y$ of $E$ is not contained in the union $\bigcup_{i=1}^cH_i$.
\end{lemma}
\begin{proof}
The idea has appeared already in \cite[Lemma 6.1]{K03}. We may assume that $w_1,\ldots,w_c$ have no common divisors. One computes that
\begin{align*}
\ord_Ex_i=\ord_EH_i+\ord_Fx_i\cdot\ord_EF=\ord_EH_i+w_i\ord_EF.
\end{align*}
The $\ord_EH_i$ is positive if and only if the centre $c_Y(E)$ lies on $H_i$. Since the intersection $\bigcap_{i=1}^cH_i$ is empty, at least one of the $\ord_EH_i$ is zero. Because $(\ord_Ex_1,\ldots,\ord_Ex_c)$ is parallel to $(w_1,\ldots,w_c)$, one concludes that $\ord_EH_i$ is zero for every $i$, which proves the assertion.
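In more detail, write $\ord_Ex_i=\lambda w_i$ for a positive rational number $\lambda$. The displayed formula then yields
\begin{align*}
\ord_EH_i=\ord_Ex_i-w_i\ord_EF=(\lambda-\ord_EF)w_i.
\end{align*}
Since some $\ord_EH_i$ vanishes and every $w_i$ is positive, one has $\lambda=\ord_EF$, whence $\ord_EH_i=0$ for all $i$.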
\end{proof}
The next lemma plays a central role in the proof of Theorem \ref{thm:second}(\ref{itm:half}).
\begin{lemma}\label{lem:half}
Let $C$ be the spectrum of the ring of formal power series in one variable over $k$ and $P$ be its closed point. Let $X\to C$ be a smooth projective morphism of relative dimension one and $f$ be its fibre at $P$. Let $\Delta$ be an effective $\mathbf{R}$-divisor on $X$, $Q$ be a closed point in $f$, and $t$ be a positive real number. Suppose that
\begin{itemize}
\item
$(K_X+\Delta)\cdot f=0$,
\item
$\mld_f(X,\Delta)=1$, and
\item
$\mld_Q(X,\Delta+tf)=0$.
\end{itemize}
Then $t$ is at least one-half. Moreover if $t$ equals one-half, then $\mld_Q(X,\Delta+sf)=1-2s$ for any non-negative real number $s$ at most one-half.
\end{lemma}
\begin{proof}
\textit{Step} 1.
Let $E$ be a divisor over $X$ which computes $\mld_Q(X,\Delta+tf)=0$. We define the coprime positive integers $w_1$ and $w_2$ so that the vector $(w_1,w_2)$ is parallel to $(\ord_Ef,\ord_E\mathfrak{n})$, where $\mathfrak{n}$ is the maximal ideal in $\mathscr{O}_X$ defining $Q$. Take a regular system $x_1,x_2$ of parameters in $\mathscr{O}_{X,Q}$ such that $x_1$ defines $f$ and $x_2$ is a general member in $\mathfrak{n}$. We claim that the divisor $F$ obtained by the weighted blow-up $Y\to X$ with $\wt(x_1,x_2)=(w_1,w_2)$ computes $\mld_Q(X,\Delta+tf)$.
This claim can be verified in the same way as in \cite{K17}. Assuming that $a_F(X,\Delta+tf)$ is positive, we shall derive a contradiction. For $i=1,2$, let $H_i$ be the strict transform of the divisor defined on $X$ by $x_i$. By Lemma \ref{lem:parallel}, the centre on $Y$ of $E$ would be a closed point $R$ in $F\setminus(H_1+H_2)$. The pull-back of $(X,\Delta+tf)$ is $(Y,bF+\Delta_Y+tH_1)$ in which $\Delta_Y$ is the strict transform of $\Delta$ and $b=1-a_F(X,\Delta+tf)<1$. Thus $(Y,F+\Delta_Y)$ is not lc about $R$, so $(F,\Delta_Y|_F)$ is not lc about $R$ by inversion of adjunction. Remark that this inversion of adjunction on $R\in Y$ holds by Lemma \ref{lem:pia} because $Y\to X$ is the base change of the weighted blow-up of $\Spec k[x_1,x_2]$ with $\wt(x_1,x_2)=(w_1,w_2)$. This means that $\ord_R(\Delta_Y|_F)$ is greater than one.
One computes that
\begin{align*}
1&=-(K_Y+bF+\Delta_Y+tH_1)\cdot F+1\\
&\le-(K_Y+bF+tH_1)\cdot F-\ord_R(\Delta_Y|_F)+1\\
&<((w_1+w_2-1)+b-tw_1)(-F^2)=\frac{1}{w_1}+\frac{1-t}{w_2}-\frac{1-b}{w_1w_2}.
\end{align*}
Together with $w_1\ge w_2$ and $b<1$, one would obtain that $w_2=1$ and $tw_1<b$. But then $a_F(X,\Delta)=a_F(X,\Delta+tf)+t\ord_Ff=(1-b)+tw_1<1$, which contradicts that $\mld_f(X,\Delta)=1$.
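The intersection numbers used in the display above are the standard ones for the weighted blow-up $\pi\colon Y\to X$ of the smooth point $Q$ with $\wt(x_1,x_2)=(w_1,w_2)$:
\begin{align*}
K_Y=\pi^*K_X+(w_1+w_2-1)F,\quad F^2=-\frac{1}{w_1w_2},\quad H_1\cdot F=\frac{1}{w_2},\quad H_2\cdot F=\frac{1}{w_1},
\end{align*}
where the latter two follow from $\pi^*(\operatorname{div}x_i)\cdot F=0$ and $\pi^*(\operatorname{div}x_i)=H_i+w_iF$.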
\textit{Step} 2.
We have seen that $F$ computes $\mld_Q(X,\Delta+tf)=0$. Then $a_F(X,\Delta)=t\ord_Ff=tw_1$. Since $\ord_Q\Delta\le1$ by $\mld_f(X,\Delta)=1$, one has that $\ord_F\Delta\le w_1$. Thus,
\begin{align*}
w_2\le w_1+w_2-\ord_F\Delta=a_F(X,\Delta)=tw_1.
\end{align*}
By $(K_X+\Delta)\cdot f=0$, one has that $(\Delta\cdot f)=2$. Hence
\begin{align*}
w_2^{-1}\ord_F\Delta=(\ord_F\Delta)(F\cdot H_1)\le(\Delta_Y+(\ord_F\Delta)F)\cdot H_1=(\Delta\cdot f)=2,
\end{align*}
where the inequality follows from the fact that $f$ does not appear in $\Delta$ by $\mld_f(X,\Delta)=1$. Thus $\ord_F\Delta\le2w_2$ and
\begin{align*}
w_1-w_2\le w_1+w_2-\ord_F\Delta=a_F(X,\Delta)=tw_1,
\end{align*}
that is, $(1-t)w_1\le w_2$.
We have obtained that $(1-t)w_1\le w_2\le tw_1$. Therefore $t\ge1/2$, and moreover if $t=1/2$, then $w_1=2w_2$ so $(w_1,w_2)=(2,1)$.
\textit{Step} 3.
Suppose that $t=1/2$. Let $s$ be a non-negative real number at most one-half. We need to show that $\mld_Q(X,\Delta+sf)=1-2s$. One has that $\mld_Q(X,\Delta+(1/2)f)=0$ and it is computed by $F$. In particular, $a_F(X,\Delta)=2^{-1}\ord_Ff=1$. By $\mld_f(X,\Delta)=1$, one obtains that $\mld_Q(X,\Delta)=1$ and it is also computed by $F$. Then Lemma \ref{lem:mld}(\ref{itm:mldequal}) provides that
\begin{align*}
\mld_Q(X,\Delta+sf)=(1-2s)\mld_Q(X,\Delta)+2s\mld_Q(X,\Delta+(1/2)f)=1-2s.
\end{align*}
\end{proof}
\begin{proposition}\label{prp:half}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal such that $\mld_P(X,\mathfrak{a})$ equals one and such that $(X,\mathfrak{a})$ has an lc centre of dimension one. Then one of the following holds for the maximal ideal $\mathfrak{m}$ in $\mathscr{O}_X$.
\begin{enumerate}
\item\label{itm:eqhalf}
The $\mld_P(X,\mathfrak{a}\mathfrak{m}^s)$ equals $1-2s$ for any non-negative real number $s$ at most one-half.
\item\label{itm:grthanhalf}
The $\mld_P(X,\mathfrak{a}\mathfrak{m}^{1/2})$ is positive.
\end{enumerate}
\end{proposition}
\begin{proof}
We may assume that $K$ is algebraically closed. By Theorem \ref{thm:wbu}, there exist a divisor $E$ over $X$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $a_E(X,\mathfrak{a})=0$ and such that $E$ is obtained by the weighted blow-up $Y\to X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some $w_1,w_2$. We may assume that the weak transform $\mathfrak{a}_Y$ of $\mathfrak{a}$ is defined. Let $f$ be the fibre of $E\to X$ at $P$ and $\Delta$ be the different on $E$ defined by $K_Y+E|_E=K_E+\Delta$. Take an $\mathbf{R}$-divisor $A_X=e^{-1}\sum_{i=1}^eA_i$ on $X$ for large $e$ in which $A_i$ are defined by general members in $\mathfrak{a}$. Let $A$ be the strict transform on $Y$ of $A_X$ and set $A_E=A|_E$. By Proposition \ref{prp:piaE}, one obtains that
\begin{align*}
\mld_P(X,\mathfrak{a}\mathfrak{m}^s)=\mld_f(E,\Delta+A_E+sf)
\end{align*}
for any non-negative real number $s$. This equality for $s=0$ supplies that $\mld_f(E,\Delta+A_E)=1$. In particular, $f$ does not appear in $\Delta+A_E$.
Let $t$ be the positive real number such that $\mld_P(X,\mathfrak{a}\mathfrak{m}^t)=0$. If $t>1/2$, then the case (\ref{itm:grthanhalf}) holds. Suppose that $t\le1/2$. Then $\mld_f(E,\Delta+A_E+tf)=0$ but $\mld_{\eta_f}(E,\Delta+A_E+tf)=1-t>0$, so there exists a closed point $Q$ in $f$ such that $\mld_Q(E,\Delta+A_E+tf)=0$. With $(K_E+\Delta+A_E)\cdot f=0$, one can apply Lemma \ref{lem:half} to $(E,\Delta+A_E)$, which derives that $t=1/2$ and $\mld_f(E,\Delta+A_E+sf)=1-2s$. Hence the case (\ref{itm:eqhalf}) holds.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:second}(\ref{itm:half})}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and such that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})$ is not positive. We construct a generic limit $\mathsf{a}$ of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. By Lemma \ref{lem:limtonak} and Remarks \ref{rmk:limit}(\ref{itm:limitineq}) and \ref{rmk:independent}, it is enough to show the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. By Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By Remark \ref{rmk:limit}(\ref{itm:limitineq}) again, one obtains that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$ from the canonicity of $(X,\mathfrak{a}_i^q)$.
Let $t$ be the positive rational number such that $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^t)=0$. By Theorem \ref{thm:nonpos}, we may assume that $s<t$. By Remark \ref{rmk:limit}(\ref{itm:limitresult}), one has that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^t)=0$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. In particular, $t\le1/2$ by $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})\le0$. Applying Proposition \ref{prp:half} to $(\hat X,\mathsf{a}^q)$, one obtains that $t=1/2$ and $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)=1-2s$. Thus $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})=0$. By Lemma \ref{lem:mld}(\ref{itm:mldconvex}), one obtains that
\begin{align*}
\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)&\ge(1-2s)\mld_P(X,\mathfrak{a}_i^q)+2s\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})\\
&\ge1-2s=\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s).
\end{align*}
\end{proof}
\begin{proof}[Proof of Corollary \textup{\ref{crl:main}}]
We may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. By Theorems \ref{thm:terminal}(\ref{itm:terminal}) and \ref{thm:second}(\ref{itm:half}), we have only to consider ideals $\mathfrak{a}$ such that $\mld_P(X,\mathfrak{a}^q)$ equals one and such that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. By Lemma \ref{lem:mld}(\ref{itm:mldconvex}), such $\mathfrak{a}$ satisfies that
\begin{align*}
\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\ge\Bigl(1-\frac{2}{n}\Bigr)\mld_P(X,\mathfrak{a}^q)+\frac{2}{n}\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})>1-\frac{2}{n},
\end{align*}
whence $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\ge1-1/n$ since $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})$ belongs to $n^{-1}\mathbf{Z}$.
Let $E$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$. Then
\begin{align*}
1-\frac{1}{n}\le\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\le a_E(X,\mathfrak{a}^q\mathfrak{m}^{1/n})=a_E(X,\mathfrak{a}^q)-\frac{1}{n}\ord_E\mathfrak{m}\le1-\frac{1}{n},
\end{align*}
so $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})=1-1/n$ and it is computed by $E$. By Lemma \ref{lem:mld}(\ref{itm:mldequal}), $E$ also computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ for any non-negative real number $s$ at most $1/n$. Thus Corollary \ref{crl:main} follows from Theorem \ref{thm:terminal}(\ref{itm:zero}).
\end{proof}
By a similar argument, one can prove Conjecture \ref{cnj:product} in the opposite case when the lc threshold of the maximal ideal is at least one.
\begin{proof}[Proof of Theorem \textup{\ref{thm:second}(\ref{itm:one})}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and such that $(X,\mathfrak{a}_i^q\mathfrak{m})$ is lc. It is enough to show the existence of a positive integer $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ and satisfies the equality $a_{E_i}(X)=l$. Note Remark \ref{rmk:independent}. As in the proof of Theorem \ref{thm:second}(\ref{itm:half}), we construct a generic limit $\mathsf{a}$ on $\hat P\in\hat X$ of $\mathcal{S}$ with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$. By Lemma \ref{lem:limtonak} and Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By Remark \ref{rmk:limit}(\ref{itm:limitineq}), one has that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$.
Let $\hat E$ be a divisor over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a}^q)$. As in Remark \ref{rmk:descend}(\ref{itm:descendE}), replacing $\mathcal{F}$ with a subfamily, one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$. Writing $E_i$ for a component of the fibre of $E_l$ at $s_l(i)\in Z_l$, one may assume that $a_{E_i}(X)=a_{\hat E}(\hat X)$ and $a_{E_i}(X,\mathfrak{a}_i^q)=1$ for any $i\in N_l$. Then for any $i\in N_{l_0}$, $\mld_P(X,\mathfrak{a}_i^q)=1$ by the canonicity of $(X,\mathfrak{a}_i^q)$ and it is computed by $E_i$. By the log canonicity of $(X,\mathfrak{a}_i^q\mathfrak{m})$, the $\ord_{E_i}\mathfrak{m}$ must equal one and $E_i$ also computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m})=0$. Therefore, $E_i$ computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ by Lemma \ref{lem:mld}(\ref{itm:mldequal}).
\end{proof}
\section{Rough classification of crepant divisors}
By Theorems \ref{thm:terminal}(\ref{itm:terminal}) and \ref{thm:second}(\ref{itm:half}), for Conjecture \ref{cnj:product} one has only to consider ideals $\mathfrak{a}$ such that $\mld_P(X,\mathfrak{a}^q)$ equals one and such that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. Then every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^q)$ satisfies that $\ord_E\mathfrak{m}$ equals one. We close this paper by providing a rough classification of $E$.
\begin{theorem}\label{thm:crepant}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. Let $E$ be a divisor over $X$ which computes $\mld_P(X,\mathfrak{a})$ such that $\ord_E\mathfrak{m}$ equals one. Then there exist a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{X,P}$ and positive integers $w_1,w_2$ with $w_1\ge w_2$ such that for the weighted blow-up $Y$ of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$, one of the following cases holds by identifying the exceptional divisor $F$ with $\mathbf{P}(w_1,w_2,1)$ with weighted homogeneous coordinates $x_1,x_2,x_3$.
\begin{enumerate}[label=\textup{\arabic*.},ref=\arabic*]
\item\label{cas:toric}
$E$ equals $F$ as a divisor over $X$.
\item\label{cas:hypers}
The centre $c_Y(E)$ is the curve on $F$ defined by $x_1x_3^p+x_2^q$ for some positive integers $p$ and $q$ satisfying that $w_1+p=qw_2\le w_1+w_2$.
\item\label{cas:saturated}
The centre $c_Y(E)$ is the curve on $F$ defined by $x_1x_2+x_3^{w_1+w_2}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textit{Step} 1.
Let $w_1$ be the maximum of $\ord_Ex_1$ for all elements $x_1$ in $\mathfrak{m}\setminus\mathfrak{m}^2$. To see the existence of this maximum, let $Z\to X$ be the birational morphism from a smooth threefold $Z$ on which $E$ appears as a divisor. Applying Zariski's subspace theorem \cite[(10.6)]{Ab98} to $\mathscr{O}_{X,P}\subset\mathscr{O}_{Z,Q}$ for a closed point $Q$ in $E$, one has an integer $w$ such that $\mathscr{O}_Z(-wE)_Q\cap\mathscr{O}_{X,P}\subset\mathfrak{m}^2$. Then $\ord_Ex_1$ is less than $w$ for any $x_1\in\mathfrak{m}\setminus\mathfrak{m}^2$, so $w_1$ exists. Fix $x_1$ for which $\ord_Ex_1$ attains the maximum $w_1$.
Then let $w_2$ be the maximum of $\ord_Ex_2$ for those $x_2$ such that $x_1,x_2$ form a part of a regular system of parameters in $\mathscr{O}_{X,P}$. Note that $w_1\ge w_2$. Fix $x_2$ for which $\ord_Ex_2$ attains the maximum $w_2$, and take a general member $x_3$ in $\mathfrak{m}$. Note that $\ord_Ex_3=\ord_E\mathfrak{m}=1$. The $x_1,x_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X,P}$. Let $Y$ be the weighted blow-up of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$ and $F$ be its exceptional divisor. We identify $F$ with $\mathbf{P}(w_1,w_2,1)$ with weighted homogeneous coordinates $x_1,x_2,x_3$.
\textit{Step} 2.
Let $L$ be an arbitrary locus in $F$ defined by a weighted homogeneous polynomial of form either $u_1$, $u_2$ or $x_3$ where
\begin{itemize}
\item
$u_1=x_1+\sum_{i=0}^{\rd{w_1/w_2}}\lambda_ix_2^ix_3^{w_1-iw_2}$ for some $\lambda_i\in k$,
\item
$u_2=x_2+\lambda x_3^{w_2}$ for some $\lambda\in k$.
\end{itemize}
We claim that the centre $c_Y(E)$ is not contained in any such $L$. Indeed by Lemma \ref{lem:parallel}, $c_Y(E)$ is not contained in the locus defined by $x_1x_2x_3$. In particular, $\ord_EF=\ord_E\mathfrak{m}=1$. If $L$ is defined by $u_i$ for $i=1$ or $2$, then the $u_i$, as an element in $\mathscr{O}_X$, satisfies that
\begin{align*}
w_i\ge\ord_Eu_i\ge\ord_EL+\ord_Fu_i\cdot\ord_EF=\ord_EL+w_i.
\end{align*}
Thus $\ord_EL=0$, meaning that $c_Y(E)\not\subset L$. One also has that $\ord_Ex_i=\ord_Eu_i$. By Remark \ref{rmk:wbu}, we are free to replace $x_1$ with $u_1$ as well as $x_2$ with $u_2$ for the regular system $x_1,x_2,x_3$ of parameters constructed in Step 1.
\textit{Step} 3.
Since an arbitrary closed point in $F$ lies on some $L$ in Step 2, the $c_Y(E)$ is either a curve or $F$ itself. The case $c_Y(E)=F$ is nothing but the case \ref{cas:toric}. We shall investigate the case when $c_Y(E)$ is an irreducible curve $C$ other than any $L$.
Let $d$ be the weighted degree of $C$ in $F\simeq\mathbf{P}(w_1,w_2,1)$. We may assume that the weak transform $\mathfrak{a}_Y$ of $\mathfrak{a}$ is defined, so $(Y,bF,\mathfrak{a}_Y)$ is the pull-back of $(X,\mathfrak{a})$ where $b=1-a_F(X,\mathfrak{a})\le0$. Since
\begin{align*}
a_E(Y,F,\mathfrak{a}_Y)=a_E(Y,bF,\mathfrak{a}_Y)-(1-b)\ord_EF\le a_E(X,\mathfrak{a})-(1-b)=b\le0,
\end{align*}
the $(Y,F,\mathfrak{a}_Y)$ is not plt about $\eta_C$. Thus $(F,\mathfrak{a}_Y\mathscr{O}_F)$ is not klt about $\eta_C$ by inversion of adjunction, that is, $\ord_C(\mathfrak{a}_Y\mathscr{O}_F)\ge1$. Thus the strict transform $A_Y$ of the $\mathbf{R}$-divisor on $X$ defined by a general member in $\mathfrak{a}$ satisfies the inequality $A_Y|_F\ge C$. One computes that
\begin{align*}
d=w_1w_2C\cdot(-F)&\le w_1w_2A_Y|_F\cdot(-F)=w_1w_2(\ord_F\mathfrak{a})F^3\\
&=\ord_F\mathfrak{a}=w_1+w_2+1-a_F(X,\mathfrak{a})\le w_1+w_2.
\end{align*}
\textit{Step} 4.
Let $f$ be the weighted homogeneous polynomial in $x_1,x_2,x_3$ defining $C$, which has weighted degree $d\le w_1+w_2$. Since any weighted homogeneous polynomial in $x_2,x_3$ is decomposed as the product of polynomials of form $x_3$ or $x_2+\lambda x_3^{w_2}$ for some $\lambda\in k$, by Step 2 and the irreducibility of $C$, there exists a monomial which involves $x_1$ and appears in $f$.
Suppose that $d<w_1+w_2$. Then $x_1x_3^p$ is the only monomial of weighted degree $d$ involving $x_1$, in which $p=d-w_1\ge0$. The $p$ must be positive by Step 2, so $1\le p<w_2$. The $f$ is, up to constant, written as
\begin{align*}
f=x_1x_3^p+\sum_{i=0}^q\lambda_ix_2^ix_3^{w_1+p-iw_2}=\Bigl(x_1+\sum_{i=0}^{q-1}\lambda_ix_2^ix_3^{w_1-iw_2}\Bigr)x_3^p+\lambda_qx_2^qx_3^{w_1+p-qw_2}
\end{align*}
for some $\lambda_i\in k$, where $q=\rd{(w_1+p)/w_2}$. Since $f$ is irreducible, one has that $\lambda_q\neq0$ and $w_1+p=qw_2$. Replacing $f$ with $\lambda_q^{-1}f$ and $x_1$ with $\lambda_q^{-1}(x_1+\sum_{i=0}^{q-1}\lambda_ix_2^ix_3^{w_1-iw_2})$, $f$ is expressed as $x_1x_3^p+x_2^q$, which is the case \ref{cas:hypers}.
Suppose that $d=w_1+w_2$ and $w_1>w_2$. Then $x_1x_2$ and $x_1x_3^{w_2}$ are the only monomials of weighted degree $d$ involving $x_1$. If only $x_1x_3^{w_2}$ appears in $f$, then the case \ref{cas:hypers} holds by the same discussion as in the case $d<w_1+w_2$. If $x_1x_2$ appears in $f$, then the part in $f$ involving $x_1$ is, up to constant, written as $x_1(x_2+\lambda x_3^{w_2})$ for some $\lambda\in k$. Replacing $x_2$ with $x_2+\lambda x_3^{w_2}$, one may write $f$ as
\begin{align*}
f=x_1x_2+\sum_{i=0}^q\lambda_ix_2^ix_3^{w_1+w_2-iw_2}=\Bigl(x_1+\sum_{i=1}^q\lambda_ix_2^{i-1}x_3^{w_1+w_2-iw_2}\Bigr)x_2+\lambda_0x_3^{w_1+w_2}
\end{align*}
for some $\lambda_i\in k$, where $q=\rd{w_1/w_2}+1$. One has that $\lambda_0\neq0$ by the irreducibility of $f$. Replacing $f$ with $\lambda_0^{-1}f$ and $x_1$ with $\lambda_0^{-1}(x_1+\sum_{i=1}^q\lambda_ix_2^{i-1}x_3^{w_1+w_2-iw_2})$, $f$ is expressed as $x_1x_2+x_3^{w_1+w_2}$, which is the case \ref{cas:saturated}.
Finally, suppose that $d=2w$ and $w_1=w_2=w$ for some $w$. If $w=1$, then $C$ must be a conic in $F\simeq\mathbf{P}^2$, so the case \ref{cas:saturated} holds after replacing $x_1,x_2,x_3$. If $w\ge2$, then after replacing $x_1,x_2$ with their suitable linear combinations, we may assume that the part in $f$ not involving $x_3$ is either $x_1x_2$ or $x_2^2$. In the first case, $f$ is written as $f=(x_1+\lambda_1x_3^w)(x_2+\lambda_2x_3^w)+\lambda_3x_3^{2w}$ for some $\lambda_1,\lambda_2,\lambda_3\in k$. Then $\lambda_3\neq0$ by the irreducibility. Replacing $f$ with $\lambda_3^{-1}f$, $x_1$ with $\lambda_3^{-1}(x_1+\lambda_1x_3^w)$, and $x_2$ with $x_2+\lambda_2x_3^w$, $f$ is expressed as $x_1x_2+x_3^{2w}$, which is the case \ref{cas:saturated}. In the second case, $f$ is written as $f=(\lambda_1x_1+\lambda_2x_2+\lambda_3x_3^w)x_3^w+x_2^2$ for some $\lambda_1,\lambda_2,\lambda_3\in k$. Then $\lambda_1\neq0$ by the irreducibility. Replacing $x_1$ with $\lambda_1x_1+\lambda_2x_2+\lambda_3x_3^w$, $f$ is expressed as $x_1x_3^w+x_2^2$, which is the case \ref{cas:hypers}.
\end{proof}
One can compute the lc threshold of the maximal ideal in the case \ref{cas:saturated}.
\begin{proposition}
Suppose the case \textup{\ref{cas:saturated}} in Theorem \textup{\ref{thm:crepant}}. Then $(X,\mathfrak{a}\mathfrak{m})$ is lc.
\end{proposition}
\begin{proof}
We keep the notation in Theorem \ref{thm:crepant}. $Y$ is the weighted blow-up of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$ and $F$ is its exceptional divisor. For $i=1,2,3$, let $H_i$ be the strict transform of the divisor defined by $x_i$. Let $Q_i$ be the closed point in $F$ which lies on $H_j\cap H_k$ for a permutation $\{i,j,k\}$ of $\{1,2,3\}$. Let $C$ denote the centre on $Y$ of $E$, which is the curve defined in $F\simeq\mathbf{P}(w_1,w_2,1)$ by $x_1x_2+x_3^{w_1+w_2}$ of weighted degree $d=w_1+w_2$ in our case \ref{cas:saturated}. Let $A$ be the $\mathbf{R}$-divisor defined by a general member in $\mathfrak{a}$ and $A_Y$ be its strict transform on $Y$.
We have seen in Step 3 of the proof of Theorem \ref{thm:crepant} that
\begin{align*}
d=w_1w_2C\cdot(-F)&\le w_1w_2A_Y|_F\cdot(-F)=w_1+w_2+1-a_F(X,\mathfrak{a})\le w_1+w_2.
\end{align*}
Since $d=w_1+w_2$, the above inequalities are all equalities, whence $a_F(X,\mathfrak{a})=1$ and $A_Y|_F=C$.
The triple $(Y,F+A_Y+H_3)$ is crepant to $(X,A+H_{3X})$, where $H_{3X}$ is the divisor defined by $x_3$. Thus it is enough to show the log canonicity of $(Y,F+A_Y+H_3)$. Let $\Delta$ be the different on $F$ defined by $K_Y+F|_F=K_F+\Delta$. By inversion of adjunction, the log canonicity of $(Y,F+A_Y+H_3)$ is equivalent to that of $(F,\Delta+A_Y|_F+H_3|_F)$.
Let $g$ be the greatest common divisor of $w_1$ and $w_2$. Let $L\simeq\mathbf{P}^1$ be the line in $F$ defined by $x_3$. Since $Y$ has a quotient singularity of type $\frac{1}{g}(1,-1)$ at $\eta_L$, the different $\Delta$ equals $(1-g^{-1})L$ as in Example \ref{exl:different}. Together with $A_Y|_F=C$ and $H_3|_F=g^{-1}L$, one has that $\Delta+A_Y|_F+H_3|_F=C+L$.
The assertion is reduced to the log canonicity of $(F,C+L)$, which can be checked directly by using the explicit expression $(x_1x_2+x_3^{w_1+w_2})x_3$ of the defining weighted polynomial of $C+L$. Along $L$, one can use inversion of adjunction again, which shows that $(K_F+C+L)|_L=K_L+Q_1+Q_2$.
\end{proof}
\end{document}
\begin{document}
\begin{CJK*}{UTF8}{}
\title{Repetitive Readout Enhanced by Machine Learning}
\author{Genyue Liu \CJKfamily{gbsn}(刘亘越)}
\thanks{These authors contributed equally to this work.}
\affiliation{
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\author{Mo Chen \CJKfamily{gbsn}(陈墨) }
\thanks{These authors contributed equally to this work.}
\affiliation{
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\affiliation{
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\author{Yi-Xiang Liu \CJKfamily{gbsn}(刘仪襄)}
\affiliation{
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\affiliation{
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\author{David Layden}
\affiliation{
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\affiliation{
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\author{Paola Cappellaro }
\thanks{[email protected]}
\affiliation{
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\affiliation{
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
}
\date{\today}
\begin{abstract}
Single-shot readout is a key component for scalable quantum information processing. However, many solid-state qubits with favorable properties lack the single-shot readout capability. One solution is to use the repetitive quantum-non-demolition readout technique, where the qubit is correlated with an ancilla, which is subsequently read out. The readout fidelity is then limited by the measurement back-action on the qubit. Traditionally, a threshold method is used, where only the total photon count is used to discriminate the qubit state, discarding all the information about the back-action hidden in the time trace of the repetitive readout measurement. Here we show that, by using machine learning (ML), one obtains a higher readout fidelity by taking advantage of the time trace data. ML is able to identify when back-action has occurred and to correctly read out the original state. Since this information is already recorded (but usually discarded), the improvement in fidelity does not consume additional experimental time, and it can be directly applied to preparation-by-measurement and to quantum metrology applications involving repetitive readout.
\end{abstract}
\maketitle
\end{CJK*}
\section{Introduction}\label{sec:intro}
\begin{figure*}
\caption{(a) Quantum circuit for repetitive quantum-non-demolition readout of the nuclear spin state $\ket{\psi_n}$. (b) Photon count distributions of the bright and dark states, discriminated by a threshold. (c) Time trace of photon clicks from the repetitive readouts, used as input to the machine learning classifier.}
\label{fig: introduction}
\end{figure*}
Single-shot readout is a key component for scalable quantum information processing~\cite{Divincenzo00,Raussendorf03}, for its close connection to state initialization and fault-tolerant quantum error correction~\cite{Nielsen00b}. Indeed, it is one of the main deciding factors in the selection of potential qubits.
Single-shot readout has been achieved in various physical qubit systems, ranging from neutral atoms~\cite{Bakr09,Endres16,Cooper18} to trapped ions~\cite{Myerson08}, superconducting qubits~\cite{Jeffrey14}, and solid-state defect centers~\cite{Morello10,Elzerman04,Hanson05,Neumann10b,Maurer12,Dreau13,Waldherr14,Liu17}.
There are however situations where a candidate qubit has favorable coherence properties, but does not naturally come with single-shot readout capabilities. Examples include Al$^+$ ions~\cite{Schmidt05,Hume07}
and room-temperature nitrogen-vacancy (NV) centers in diamond~\cite{Neumann10b,Maurer12,Dreau13,Waldherr14,Liu17}, where a closed optical cycle for readout is either lacking, or experimentally challenging.
A solution to this problem is through repetitive quantum-non-demolition (QND) measurements~\cite{Hume07}.
In the repetitive QND protocol, a Controlled-NOT (CNOT) gate is applied to correlate the qubit state to an ancilla, which is subsequently read out (Fig.~\ref{fig: introduction} (a)). If the readout operator commutes with the qubit's intrinsic Hamiltonian, in other words, if the readout is QND, one can repeat the above process multiple times to increase signal-to-noise ratio, until the desired fidelity is reached.
This protocol is also known as the repetitive readout technique widely adopted in room-temperature NV research, where the nuclear spin state (here of the $^{14}$N or a $^{13}$C) is repetitively read out with the help of the NV electronic spin~\cite{Jiang09,Neumann10b}.
In its implementations so far, the spin state was determined by comparing the total photon number collected over all the repetitive readouts with a previously established \textit{threshold} (Fig.~\ref{fig: introduction} (b)). The detected photon counts are thus divided into two classes, referred to as the bright (dark) state of the qubit.
In this threshold method (TM), the readout infidelity can be evaluated from the overlap between the photon count distributions of bright and dark states. Two factors contribute to this overlap: inefficient optical readout, including photon shot noise and limited photon collection efficiency; and deviation from the QND condition.
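As an illustration, the threshold calibration and the residual infidelity from overlapping count distributions can be sketched in a toy simulation; the photon numbers below are hypothetical placeholders, not the simulated NV parameters used in this work:

```python
import random
from math import sqrt

random.seed(1)

# Hypothetical mean photon counts per full repetitive-readout trace.
MU_BRIGHT, MU_DARK = 120.0, 90.0

def sample_counts(mu, n):
    # Gaussian approximation to the Poisson photon-count distribution.
    return [max(0.0, random.gauss(mu, sqrt(mu))) for _ in range(n)]

bright = sample_counts(MU_BRIGHT, 4000)
dark = sample_counts(MU_DARK, 4000)

def fidelity_at(th):
    # Mean of the fractions of bright/dark traces classified correctly.
    f_b = sum(c > th for c in bright) / len(bright)
    f_d = sum(c <= th for c in dark) / len(dark)
    return (f_b + f_d) / 2.0

# Calibrate the threshold on reference data, as done before an experiment;
# the infidelity left at the optimum reflects the overlap of the two
# distributions (here purely shot-noise-limited, with no back-action).
n_th = max(range(int(MU_DARK), int(MU_BRIGHT)), key=fidelity_at)
```

In this no-back-action toy model the only error source is the distribution overlap; adding a flip mechanism, as in the simulations below, caps the achievable fidelity.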
The first factor can be improved by embedding the emitter into photonic structures and by using better single photon detectors.
The second factor imposes a more fundamental constraint.
Indeed, if the readout operator does not fully commute with the system Hamiltonian, back-action from the measurement will eventually limit the number of photons that can be collected before quantum information is destroyed~\cite{Cujia19,Pfender19}.
To mitigate this effect, we propose to use the additional information carried by the measurement-induced state perturbation itself. Information about the perturbation is already recorded during typical experiments, in the form of the time trace of photon clicks from the repetitive readouts (Fig.~\ref{fig: introduction} (c)), but is usually discarded in the TM after extracting the total photon number. Identifying the perturbation and tracing back to the unperturbed original state using this information is the key to improving the fidelity of readout.
Unfortunately, finding an elegant analytical approach proves difficult: the photodynamics exhibits intrinsic randomness, and the inefficient photon collection process yields noisy data, precluding a clean analytical treatment that would take advantage of the additional information.
On the other hand, machine learning (ML) is designed to discover hidden data correlations, and it is widely used in classification problems~\cite{Krizhevsky17}. It has been recently introduced in quantum information tasks to mitigate crosstalks in multi-qubit readout~\cite{Seif18}, to enhance quantum metrology~\cite{Santagati19,Dinani19}, and to identify quantum phases~\cite{Lian19}.
In this work, we apply ML to state discrimination for the repetitive readout of the NV center. To design and evaluate the ML method, we use the full information from time trace data generated by quantum Monte-Carlo simulation. We tried different supervised ML methods and mainly focused on a shallow neural network realized using the MATLAB\textsuperscript{\textregistered} Neural Net Pattern Recognition tool (\textsf{nprtool}).
We observed a consistent increase in readout fidelity using ML over TM. The improvement in readout fidelity, albeit small, is robust over a parameter space that covers individual NV differences. One application of our results is in preparation-by-measurement: when one discards less trustworthy measurements, ML yields a more efficient initialization process than TM.
Since in our method the training labels are readily available in experiments with very high fidelity~\cite{Neumann10b,Maurer12,Waldherr14,Dreau13,Liu17}, it can be readily applied to current experiments. Together with the robustness of our method over NV photodynamic parameters, we expect that the improved readout fidelity can be achieved in experiments.
\begin{figure*}
\caption{(a) Readout fidelity as a function of repetition number $N$ in the repetitive readout. The fidelity from TM (grey) declines after $N_\mathrm{opt}$, while the fidelity from ML keeps improving with $N$. (b) Comparison of TM and ML readout fidelities for different values of $A_\perp$ and $k_\mathrm{ion}$.}
\label{fig:mainresults}
\end{figure*}
\section{Repetitive Readout Model and Simulation}
We consider reading out the native $^{14}$N nuclear spin state through the electronic spin of the NV center at room temperature as an example.
The NV center's ground state is an electronic spin triplet ($S=1$), and can be optically polarized to the $\ket{m_s=0}$ state.
The other two sublevels $\ket{m_s=\pm 1}$ have additional non-radiative decay channels under optical illumination, allowing optical readout of spin state by fluorescence intensity.
The native $^{14}$N nuclear spin is a spin-1 nucleus ($I=1$), and couples to the NV center through the hyperfine interaction.
$^{14}$N does not have optical readout, but it supports a C$_\textrm{n}$NOT$_\textrm{e}$ operation (control on nuclear spin and NOT gate on electronic spin): $\ket{m_s=0,m_I=+1}\leftrightarrow\ket{m_s=+1,m_I=+1}$, and $\ket{m_s, m_I=0,-1}\leftrightarrow \ket{m_s, m_I=0,-1}$, which correlates the $^{14}$N to the NV state.
In the repetitive readout protocol, the NV starts in $\ket{m_s=0}$, and a CNOT gate correlates the nuclear spin state to NV. A green laser then reads out the NV state, while also repolarizing it back to $\ket{m_s=0}$.
Under high magnetic field, where the NV and $^{14}$N energies are well separated, this process is approximately QND and can be repeated a few thousand times to accumulate signal, discriminating the bright $\ket{m_I=0,-1}$ (dark $\ket{m_I=+1}$) state of $^{14}$N in a single shot~(Fig.~\ref{fig: introduction}). Still, the high magnetic field cannot fully eliminate back-action of the measurement on $^{14}$N, which is caused by the relatively strong excited-state transverse hyperfine interaction $A_\perp(S_+I_-+S_-I_+)$.
This perturbation causes flip-flops between the NV and the $^{14}$N, destroying the quantum information. In the TM, this perturbation prevents us from continuing to accumulate useful signal and reduces the fidelity of state discrimination. ML, instead, as we find, can identify the majority of such flips and therefore improve the readout fidelity. Ultimately, the readout fidelity is limited by flips that occur very early during the repetitive readout.
We used simulated data to explore the effectiveness of ML in repetitive readout and to better analyze the source of improvement.
To fully capture the photodynamics involved in the repetitive readout process, we employed a 33-level model, considering the NV$^-$~electronic and $^{14}$N~nuclear spins and the neutral charge state NV$^0$. The model is described in more detail in the Appendix. Most transition rates in the model were accurately measured in independent experiments~\cite{Robledo11b,Tetienne12,Gupta16,Manson06}, and we use the values from Gupta et al.~\cite{Gupta16}.
The excited-state NV-$^{14}$N~transverse hyperfine interaction strength and the NV$^-$ to NV$^0$~(de)ionization rate at strong laser power have not been precisely determined before, and we therefore explore a reasonable range to cover possible variations among individual NVs, based on the results from~\cite{Maurer12,Neumann10b,Poggiali17,PhysRevB.79.235210}.
In the simulation, we assumed an intermediate magnetic field of $7500$~G, typical for repetitive readout experiments, and a photon collection efficiency of $30\%$, standard with photonic structures such as solid immersion lenses or parabolic mirrors on the diamond~\cite{Marseglia11,Robledo11,Wan18}. A perfect CNOT gate connecting $\ket{m_s=0,m_I=+1}\leftrightarrow\ket{m_s=+1,m_I=+1}$ was assumed. Correspondingly, the dark state is $\ket{m_I=+1}$, and the bright state is $\ket{m_I=0,-1}$.
We remark that it is possible to use the same protocol to read out $^{13}$C rather than $^{14}$N ~\cite{Dreau13,Waldherr14,Maurer12,Liu17}, given well-characterized hyperfine interaction strengths~\cite{Shim13x,Rao16,Smeltzer11,Dreau12}.
\section{Neural Network Architecture}
The network in \textsf{nprtool} is a two-layer feed-forward neural network (Fig.~\ref{fig: introduction} (c)). In all trainings, we used a training set of size $10,000$~with a random portion of $15\%$ for validation. The input data is the time trace of single photon detector clicks through the repetitive readout process (Fig.~\ref{fig: introduction} (c)). Because the total photon count is a good metric for state discrimination, we take the cumulative sum of the time trace before feeding it to the neural network.
Out of the $10,000$~traces, half are the dark state $\ket{m_I=+1}$, while the other half are bright, with a $1:1$ ratio between $\ket{m_I=0}$~and $\ket{m_I=-1}$. After training, we used a test set of size $4,000$, which was generated in the same way as the training set but not used in training, to independently test the network.
This process was typically repeated $10$ times, and the average accuracy was used throughout this work. Error bars represent the standard deviation of the $10$ results.
We found that approximately $12.5$ neurons per 1000 repetitions was a good balance between the increase in fidelity and avoidance of overfitting.
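The preprocessing and the two-layer architecture can be sketched as follows; the weights shown are untrained placeholders with toy dimensions, since the actual training in this work is performed by \textsf{nprtool} in MATLAB:

```python
import math
import random
from itertools import accumulate

random.seed(0)

def preprocess(clicks):
    # Cumulative photon count: the feature vector fed to the network.
    return list(accumulate(clicks))

def forward(x, w1, b1, w2, b2):
    # Two-layer feed-forward pass: tanh hidden layer, logistic output
    # returning the probability that the trace is a bright state.
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))

n_reps, n_hidden = 8, 3  # toy sizes; the experiments use thousands of reps
w1 = [[random.gauss(0.0, 0.1) for _ in range(n_reps)]
      for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [random.gauss(0.0, 0.1) for _ in range(n_hidden)]
p_bright = forward(preprocess([0, 1, 0, 0, 1, 1, 0, 1]), w1, b1, w2, 0.0)
```

With trained weights, `p_bright` would play the role of the class probability used for state assignment (and for the post-selection discussed in Sec.~\ref{sec:discard}).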
\begin{figure*}
\caption{Cumulative number of photons as a function of readout repetitions. Each trace corresponds to one input to the neural network. All traces shown here experienced at least one $^{14}$N flip. (a) Traces that ML assigns correctly to their original states. (b) Traces misclassified by ML, where the flip occurred at the very beginning of the readout.}
\label{fig:flipflops}
\end{figure*}
\section{Results}
We first investigate the influence of the repetition number on readout fidelity. The fidelity $F$ across this manuscript is defined as
\begin{equation} \label{eqn:fidelity}
F=\frac{F_{\mathrm{bright}}+F_{\mathrm{dark}}}{2}
\end{equation}
where $F_{\mathrm{bright}}$ and $F_{\mathrm{dark}}$ are the percentages of bright and dark states that are correctly read out, respectively.
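Eq.~(\ref{eqn:fidelity}) is simply a balanced accuracy over the two classes:

```python
def readout_fidelity(f_bright, f_dark):
    # F = (F_bright + F_dark) / 2, with each argument the fraction of
    # traces of that class that is read out correctly.
    return (f_bright + f_dark) / 2.0
```

Averaging the per-class rates (rather than pooling all traces) keeps the metric meaningful even when the test set is not balanced between bright and dark states.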
The number of repetitions influences the readout fidelity in two ways: 1. A larger repetition number means more photons detected and a better separation between the photon count distributions of the bright (dark) states (Fig.~\ref{fig: introduction} (b)). 2. A larger repetition number, however, also implies a longer illumination time and a higher probability for the $^{14}$N~nuclear spin to flip, due to the large transverse hyperfine interaction in the excited state, which mixes the photon count distributions of the two initially different states. As a result of these competing effects, there is an optimal repetition number $N_\mathrm{opt}$ for the TM.
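The competition between these two effects can be caricatured in a toy model; the per-repetition emission rates and flip probability below are invented for illustration and are not the simulated NV parameters:

```python
import math

def toy_tm_fidelity(n, mu_b=0.02, mu_d=0.005, p_flip=3e-4):
    # Shot-noise-limited separation of the count distributions grows with n...
    mean_gap = (mu_b - mu_d) * n
    sigma = math.sqrt((mu_b + mu_d) * n)  # Gaussian approx. to Poisson noise
    f_shot = 0.5 * (1.0 + math.erf(mean_gap / (2.0 * math.sqrt(2.0) * sigma)))
    # ...while the chance that the nuclear spin survives n readouts decays.
    p_survive = (1.0 - p_flip) ** n
    # A trace with a flipped spin is read out at chance level (1/2).
    return p_survive * f_shot + (1.0 - p_survive) * 0.5
```

Scanning `n` in this model shows the TM fidelity rising, peaking at some $N_\mathrm{opt}$, and then falling back toward $1/2$ as flips dominate.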
On the other hand, the readout fidelity from ML keeps improving as we increase the repetition number, even though the rate of increase slows down (Fig.~\ref{fig:mainresults} (a)).
At $N_\mathrm{opt}$, we observed a $0.34\%$ increase in fidelity with ML.
Since the time trace input for ML is recorded in all experiments, even those intended for TM, this improved fidelity does not consume additional experimental time. One can add more repetitions in the experiment and harness a further increase of as much as $0.57\%$ in readout fidelity (compared to TM at $N_\mathrm{opt}$). The improvement at $N>N_\mathrm{opt}$ suggests that ML is not only more robust against $^{14}$N~flips, but actually extracts useful information from the flips. This is investigated in more detail later.
As mentioned earlier, the excited-state transverse hyperfine interaction strength $A_\perp$ between the NV and $^{14}$N, and the (de)ionization rate $k_\mathrm{ion}$ ($k_\textrm{deion}$) between NV$^-$ and NV$^0$~under strong illumination, have not yet been determined to satisfactory precision.
We therefore explored a parameter range to cover realistic values one might encounter in experiment: $A_\perp=\{-30,-40,-50\}$~MHz and $k_\mathrm{ion}=\{70,90,100\}\times\beta$~MHz, where $\beta$ is a unit-less value proportional to laser power. In the simulation, we choose $\beta$
such that for any combination of parameters the NV would emit the same total number of photons in the bright state during repetitive readout.
Comparisons of TM at $N_\mathrm{opt}$, ML at $N_\mathrm{opt}$ and ML at $N=8000$ are shown in Fig.~\ref{fig:mainresults} (b) under different $A_\perp, k_\mathrm{ion}$. The trend matches Fig.~\ref{fig:mainresults} (a). ML consistently outperforms TM with both repetition numbers chosen.
To better understand how ML achieves higher fidelity, we take a closer look at cases where $^{14}$N ~experienced flip-flops in the excited state, which is a major limit to the TM fidelity.
We find the neural network is able to extract information from the time trace input to recognize if a flip has occurred, and recover the original state. Such flips could bring the photon count across the threshold, yielding misclassification when using TM. This is shown in Fig.~\ref{fig:flipflops}, where we plot the cumulative sum of the time traces in cases where flip(s) occurred.
In Fig.~\ref{fig:flipflops} (a), ML correctly assigns all these time traces to their original states, while TM looks only at the total photon count at the end and compares it to the threshold (dashed line), making $\sim 25\%$ wrong decisions. In Fig.~\ref{fig:flipflops} (b), we show instances when ML gave the wrong classification.
We notice that in those cases, the $^{14}$N ~flip-flops happen at the very beginning, making the time traces indistinguishable from those of the opposite initial state with no flips. There is little hope in correctly reading out these states, posing an ultimate limit to the readout fidelity.
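The qualitative difference between the two decision rules can be sketched in a toy example (all counts, rates and the threshold below are invented for illustration; the actual classifier is a trained neural network, not this hand-written slope test):

```python
import numpy as np

# Toy per-repetition photon counts: a "bright" trace that flips to the dark
# manifold after 20 repetitions, versus a genuinely dark trace.
bright_with_flip = np.array([5] * 20 + [1] * 80)  # flip after repetition 20
dark = np.array([1] * 100)

def total_count(trace):
    """The threshold method (TM) only ever sees this single number."""
    return trace.sum()

def slope_change(trace, window=10):
    """Crude flip indicator: early-window minus late-window mean rate.

    A network acting on the full (cumulative) time trace can pick up such
    rate changes; TM, by construction, cannot.
    """
    return trace[:window].mean() - trace[-window:].mean()

threshold = 200  # hypothetical TM threshold on the total count
print(total_count(bright_with_flip))  # 180 -> below threshold, TM says "dark"
print(total_count(dark))              # 100 -> "dark"
print(slope_change(bright_with_flip)) # 4.0 -> strong early/late asymmetry
print(slope_change(dark))             # 0.0 -> steady rate, no flip
```

TM misreads the flipped trace because only its endpoint matters, while the early/late asymmetry that survives in the full trace reveals the original bright state.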
\begin{figure}
\caption{More efficient state preparation-by-measurement. The state readout fidelity increases after discarding less trustworthy measurements and this improves the state preparation. ML always outperforms TM and scales more favorably with the ratio of discarded data. The solid curves are a guide to the eye. Error bars are the standard deviation of 10 training results, and are smaller than the marker.}
\label{fig:discard}
\end{figure}
\section{Application to initialization by readout}
\label{sec:discard}
One scenario where even a modest increase in the fidelity can be beneficial is state preparation-by-measurement~\cite{Neumann10b,Maurer12,Dreau13,Waldherr14,Liu17}. In this widely adopted technique, to achieve a higher state preparation fidelity with TM, two distinct thresholds are set, $N_\mathrm{dark}<N_\mathrm{th}$ and $N_\mathrm{bright}>N_\mathrm{th}$, where $N_\mathrm{th}$ is the readout threshold.
Measurements in between the two thresholds are discarded, as they cannot be assigned to either bright or dark state with enough confidence. This leads to a lengthier state preparation routine.
In ML, the neural network assigns to each input a probability $p_\mathrm{bright}$ ($p_\mathrm{dark}$) of the state being bright (dark). A final step compares $p_\mathrm{bright}$ and $p_\mathrm{dark}$ and classifies accordingly.
To achieve a higher fidelity, we discard cases where $0.5-t<p_\mathrm{dark/bright}<0.5+t$, with an adjustable threshold $t$.
We compare the state preparation fidelity from TM and ML, when discarding the same amount of data, and observe that ML maintains its advantage over TM, and scales more favorably than TM with the ratio of discarded measurements (Fig.~\ref{fig:discard}). This enables preparing a high fidelity initial state more efficiently. We observed similar improvement from unsupervised learning (see Appendix), agreeing with~\cite{Magesan15}.
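The two discard rules can be sketched as follows (the numerical thresholds are hypothetical, and the probability $p_\mathrm{bright}$ would come from the trained network):

```python
def tm_assign(total_count, n_dark, n_bright):
    """Two-threshold TM rule: return a state only for confident counts."""
    if total_count < n_dark:
        return "dark"
    if total_count > n_bright:
        return "bright"
    return None  # between the thresholds: discarded, preparation is retried

def ml_assign(p_bright, t):
    """ML rule: discard when the network probability is within t of 0.5."""
    if abs(p_bright - 0.5) < t:
        return None  # not trustworthy enough
    return "bright" if p_bright > 0.5 else "dark"

# Hypothetical numbers for illustration
print(tm_assign(150, n_dark=100, n_bright=200))  # None -> discarded
print(ml_assign(0.93, t=0.2))                    # 'bright'
print(ml_assign(0.55, t=0.2))                    # None -> discarded
```

Increasing $t$ (or widening the gap between $N_\mathrm{dark}$ and $N_\mathrm{bright}$) trades a longer preparation routine for higher confidence in the kept outcomes.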
\section{Conclusion and Outlook}
In conclusion, we have shown that ML techniques can exploit the hidden structure in the repetitive readout data of an NV center at room temperature to improve the state measurement fidelity. We used quantum Monte-Carlo simulation based on a 33-level NV model to generate data for machine learning, and found improved single-shot readout fidelity over the traditional threshold method, which can be attributed to the ability of ML to correctly classify a larger number of readout trajectories that are perturbed by the measurement process itself.
While we used simulations, generally the training process does not depend on knowledge of the model. In fact, the only information required is the label for the state ($\ket{m_I=+1}$~or $\ket{m_I=0,-1}$), which is readily available in experiments by discarding less trustworthy data~\cite{Neumann10b,Maurer12,Dreau13,Waldherr14,Liu17}.
One can then use this data to train a network specific to the NV of interest, and expect an increase in readout fidelity in all subsequent repetitive readout experiments, free of any additional experimental time.
Although individual NVs may have slightly different photodynamic parameters, they should be covered by the range we explored in this work, and therefore the improvement in fidelity is expected to be ubiquitous.
In addition, the off-the-shelf MATLAB\textsuperscript{\textregistered} deep learning toolbox we employed greatly reduces the complexity of the neural network architecture, making this improvement easily reproducible and more accessible to experimentalists.
Though small, the increase in fidelity does not require any additional experimental time, and is readily compatible with experiments using repetitive readout of nuclear spins, including in quantum metrology~\cite{Aslam17,Lovchinsky16,Degen17} to improve sensitivity.
To further shed light on the bright/dark decisions that affect the ML readout fidelity, one could use decision tree learning instead of a neural network. This could potentially inform optimized readout protocols, with varying illumination times, or help further improve the neural network architecture.
More broadly, ML could be applied to more complex systems, for example to help mitigate crosstalk of fluorescence signals in a solid-state register consisting of a few nearby NV or other color centers~\cite{Seif18}.
\appendix
\section{Appendix I: NV model and Quantum Monte-Carlo Simulation}\label{appdix:model}
We used a 33-level model to fully describe the dynamics of NV-$^{14}$N ~in the repetitive readout process. This model includes the spin-1 triplet ground and excited states, and singlet metastable state for NV$^-$, the spin-$1/2$ ground and excited states for NV$^0$, and the nuclear spin-1 of $^{14}$N , as illustrated in Fig.~\ref{fig:model}. The transition rates directly related to the NV photoluminescence have been precisely determined and reported in various works~\cite{Robledo11b,Tetienne12,Gupta16,Manson06}, although with some significant variations.
For the simulation we took the values from Gupta \textit{et al.}~\cite{Gupta16} listed in Table~\ref{tab:transitionrates}.
\begin{table}[h]
\begin{tabular}{l|l|l|l|l|l}
transition rates& $k_r$ & $k_{47}$ & $k_{57}$ & $k_{71}$ & $k_{72}$ \\
\hline
(MHz)& 65.9 & 92.1 & 11.4 & 1.18 & 4.84 \\
\hline
\end{tabular}
\caption{Transition rates used in the 33-level model.}\label{tab:transitionrates}
\end{table}
The exact (de)ionization mechanisms under $532$~nm laser illumination have not yet been determined experimentally, nor have the (de)ionization rates at laser powers comparable to the saturation power (measurements at weak power can be found in~\cite{Aslam13,Chen13b,Hacquebard18}). Here we assume that (de)ionization at rate $k_\textrm{ion}$ ($k_\textrm{deion}$) occurs only in the excited states, and obeys the selection rules illustrated in Fig.~\ref{fig:model}. To maintain the experimentally determined $70/30$ ratio~\cite{Aslam13} between the charge states, we set $k_\textrm{deion}=2k_\textrm{ion}$. The ionization rate is proportional to the laser intensity and is swept around $k_\textrm{ion}\approx 90\beta$~MHz, in accordance with \cite{Maurer12}.
When the magnetic field is applied along the NV-axis, the ground state NV-$^{14}$N ~Hamiltonian has negligible effect on the repetitive readout, thus it is not considered in the numerical simulation. The NV$^-$~excited state Hamiltonian reads:
\begin{equation}\label{eqn:NV-ham}
H_{-}=\Delta_{es} S_z^2+Q I_z^2+\gamma_e B S_z+\gamma_n B I_z+\mathbf{S}\cdot\mathbf{A}\cdot\mathbf{I}
\end{equation}
where $\mathbf{S}$ and $\mathbf{I}$ are the electronic and nuclear spin operators, $\Delta_{es}=1.42$~GHz is the zero-field splitting of the electronic spin, $Q=-4.945$~MHz the nuclear quadrupole interaction~\cite{Smeltzer09}, and $\gamma_e=2.802$~MHz/G and $\gamma_n=-0.308$~kHz/G the electronic and nuclear gyromagnetic ratios. The hyperfine interaction term is diagonal due to symmetry:
\begin{equation}\label{eqn:NV-hfgs}
\mathbf{S}\cdot\mathbf{A}\cdot\mathbf{I}=A_\parallel S_zI_z+A_\perp(S_xI_x+S_yI_y)
\end{equation}
where $A_\parallel=-40$~MHz was determined via ODMR experiments~\cite{Neumann09}. $A_\perp$ was believed to be similar to $A_\parallel$ and has recently been measured to lie between $-40$ and $-50$~MHz~\cite{Poggiali17}.
The NV$^0$~excited state Hamiltonian takes the form:
\begin{equation}\label{eqn:NV0ham}
H_{0}=Q I_z^2+\gamma_e B S_z+\gamma_n B I_z+\mathbf{S}\cdot\mathbf{C}\cdot\mathbf{I}
\end{equation}
with the hyperfine interaction term:
\begin{equation}\label{eqn:NV-hf}
\mathbf{S}\cdot\mathbf{C}\cdot\mathbf{I}=C_\parallel S_zI_z+C_\perp(S_xI_x+S_yI_y)
\end{equation}
The hyperfine interaction strengths are considered similar to those in the NV$^-$~excited state~\cite{PhysRevB.79.235210}, and we set $C_\parallel=C_\perp=-40$~MHz.
\begin{figure*}
\caption{The 33-level NV model used in our simulation, consisting of 11 electronic spin levels times 3 nuclear spin levels (level spacings not to scale). $k_r$, $k_{47}$, $k_{57}$, $k_{71}$ and $k_{72}$ denote the transition rates listed in Table~\ref{tab:transitionrates}.}
\label{fig:model}
\end{figure*}
To simulate repetitive readout experiments for both the training and testing data, we used the quantum Monte-Carlo method based on the aforementioned 33-level model. One challenge lies in the various time scales involved in the numerical simulation, from the electronic spin's fast oscillation $\omega\sim (2\pi)\cdot 10$~GHz, to the optical transition rates $k_{ij}\sim 100$~MHz, to the flip-flop rate of the $^{14}$N ~nuclear spin $1/T_1^n \sim $~kHz. We mitigate this issue by employing the Born-Oppenheimer approximation~\cite{doi:10.1002/andp.19273892002} in our numerical simulation, and average out the fast oscillation at $\omega$ as follows.
We define $\delta p_{mn}$ as the transition probability from the state $\ket{m}$ to $\ket{n}$ in the time step $\delta t$.
Starting from $\ket{\psi(t=0)}=\ket{m}$, we have
\begin{equation}\label{eqn:boApprox}
\begin{aligned}
\delta p_{mn} &=\int^{\delta t}_0 \left( \sum^{33}_{i=1}|\braket{n}{i}|^2~|\braket{i}{\psi(t)} |^2 \right)dt \\ & = \sum^{33}_{i=1} \left( k_{in}\int^{\delta t}_0 |\braket{i}{\psi(t)}|^2 dt \right)
\end{aligned}
\end{equation}
Notice that $\left|\braket{i}{\psi(t)}\right|^2$ is periodic with period $2\pi/\omega$, which is much smaller than the time step $\delta t \sim 1/k_{ij}$. Thus, we assume that only the average effect of this oscillation is seen in each time step, and numerically find $\left\langle \frac{\delta p_{mn}}{\delta t}\right\rangle$. This allows us to perform the quantum Monte-Carlo simulation efficiently.
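The jump-sampling step that uses these averaged rates can be sketched on a toy system (a 3-level stand-in, not the full 33-level model; all rates below are invented for illustration):

```python
import random

# Toy 3-level rate matrix (MHz); stands in for the time-averaged
# <delta p_mn / delta t> of the full model. All numbers are invented.
rates = {0: {1: 50.0, 2: 10.0},
         1: {0: 60.0},
         2: {0: 5.0}}

def mc_step(state, dt, rng):
    """One Monte-Carlo step: jump m -> n with probability k_mn * dt.

    Valid when the total jump probability per step is << 1, i.e. the time
    step resolves the optical rates but averages over the fast oscillation.
    """
    r = rng.random()
    acc = 0.0
    for target, rate in rates[state].items():
        acc += rate * dt  # delta p_mn = <dp_mn/dt> * dt
        if r < acc:
            return target
    return state

rng = random.Random(0)
dt = 1e-3  # microseconds, so the max total jump probability per step is 0.06
state, trajectory = 0, [0]
for _ in range(5000):
    state = mc_step(state, dt, rng)
    trajectory.append(state)
```

Repeating such trajectories while recording emitted photons per repetition yields the simulated time traces used for training and testing.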
\section{Appendix II: Machine Learning Discussions}\label{appdix:MLdiscussion}
\subsection{Recurrent Neural Network}
A recurrent neural network (RNN) is a commonly used architecture specializing in time-series data, with the capability to capture correlations within the series. In the main text, we showed results obtained using a shallow neural network. To see whether we gain by exploiting the correlations within the time series, we also tested the performance of an advanced recurrent architecture: long short-term memory (LSTM).
Due to the nature of recurrent neural networks, the training process is very time-consuming and therefore not suitable for exploring multiple parameters in our model. To speed up training, we averaged the input time trace data over 100 realizations, greatly reducing the training set dimension. Admittedly, this may have caused some loss of information. The result nevertheless still consistently outperforms TM and is comparable to the shallow neural network shown in the main text (see Table~\ref{tab:LSTM}). One remark is that we did not take the cumulative sum of the input data, because LSTM specializes in time-series data and is able to recognize quasi-periodic patterns.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$A_\perp$ (MHz) & $k_\mathrm{ion}$ (MHz) & TM fidelity & ML fidelity & LSTM fidelity \\
\hline
\multicolumn{1}{|c|}{\multirow{3}[0]{*}{-50}} & \multicolumn{1}{c|}{$70\beta$} & $97.56(4)$\% & 97.86$(7)$\% & 97.61$(5)$\% \\
& \multicolumn{1}{c|}{$90\beta$} & 96.98$(4)$\% & 97.32$(5)$\% & 97.40$(2)$\% \\
& \multicolumn{1}{c|}{$110\beta$} & 96.31$(4)$\% & 96.71$(5)$\% & 96.77$(7)$\% \\
\hline
$-30$ & \multirow{3}[0]{*}{$90\beta$} & 98.67$(2)$\% & 98.76$(3)$\% & 98.44$(3)$\% \\
$-40$ & & 97.94$(2)$\% & 98.20$(4)$\% & 98.29$(3)$\% \\
$-50$ & & 96.98$(4)$\% & 97.32$(5)$\% & 97.40$(2)$\% \\
\hline
\end{tabular}
\caption{Comparison between the fidelities obtained through TM, ML and LSTM under different parameters. All training and testing were conducted at the $N_\mathrm{opt}$ of the corresponding set of parameters. Overall, the LSTM algorithm has performance similar to the shallow neural network.}
\label{tab:LSTM}
\end{table}
\subsection{Unsupervised learning}
In the main text we compared the enhanced fidelities of TM and supervised learning after discarding less trustworthy data. Another possibility is to use unsupervised learning~\cite{Magesan15}. This method is of interest because unsupervised learning does not require any well-labelled data. We implemented the \textsf{k-means} algorithm that classifies a given data set into $k$ different groups.
We first use the TM readout to obtain a bright (dark) group of measurement trajectories. We then perform \textsf{k-means} on the bright (dark) group to further classify it into $k$ subgroups.
The fidelity increases when we discard the smallest subgroup. Compared to the TM, \textsf{k-means} gives better fidelity as shown in Fig.~\ref{fig:unsupervised_discard}, because the unsupervised learning extracts some information about $^{14}$N flips through the hidden structures in time trace data, in agreement with~\cite{Magesan15}.
Note that unlike TM or supervised learning, we cannot control the ratio of discarded data. Therefore, the fidelity defined by Eq.~\ref{eqn:fidelity} is not available, and only the fidelity of dark state is shown. We also remark that in rare cases, \textsf{k-means} gives outlier results with fidelity much worse than TM.
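A minimal version of the subgroup-discarding step can be written down directly, here run on scalar total counts rather than full time traces (the data are toy numbers, and both the input representation and the cluster count $k$ are tuning choices):

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50):
    """Plain k-means on scalar features (e.g. total photon counts).

    A stand-in for the k-means step described above; the real input is the
    full time trace, not just its sum.
    """
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        # assign each point to the nearest center, then recompute means
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Toy "dark group" counts with a small suspect subgroup at high counts
counts = np.array([10.0, 12, 11, 9, 13, 40, 42])
labels, centers = kmeans_1d(counts, k=2)
small = int(np.argmin(np.bincount(labels, minlength=2)))
# Discarding the smaller cluster removes the two suspect trajectories
kept = counts[labels != small]
```

Discarding the smallest subgroup then plays the role of dropping the least trustworthy measurements.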
\begin{figure}
\caption{More efficient state preparation-by-measurement.
Improved dark state readout accuracy after discarding less trustworthy readouts. Each diamond-shaped point represents an individual k-means test.}
\label{fig:unsupervised_discard}
\end{figure}
\subsection{Robustness of trained network}
In addition to the universal improvement of ML over TM across the photodynamic parameters explored in the main text, we here explore the robustness of a trained network to a change in parameters. In particular, we use the network R trained on \{$k_\mathrm{ion}=90\beta$~MHz, $A_{\perp}=-50$~MHz\} data to classify data generated with different parameters.
First, we test the network R on a different (de)ionization rate, \{$k_\mathrm{ion}=100\beta$~MHz, $A_{\perp}=-50$~MHz\}, obtaining a fidelity of $97.93(5)\%$ from the network R, compared to $98.07(5)\%$ from TM.
We attribute this deteriorated ML performance to the change in the photodynamics.
Under otherwise identical conditions, a different $k_\mathrm{ion}$ changes the relative photon-count distributions of the bright and dark states. This change cannot be compensated by adjusting the laser intensity, and renders the network R obsolete.
We then tested the network R robustness to different transverse hyperfine strengths, $A_\perp=-40, -30$~MHz.
Intuitively, a small change in $A_\perp$ does not change the photoluminescence pattern, but rather slightly modifies the $^{14}$N ~flip-flop rate, which could be captured by the network, given its ability to recognize the occurrence of flip-flops. Indeed, we observed better fidelity from the network R than from TM on $A_\perp=-40$~MHz data, and fidelity comparable to TM on $A_\perp=-30$~MHz data, where the parameter has changed by $40\%$ (Table~\ref{tab:train_A_test_B}). Here we used $N_\mathrm{opt}$ for the test data for both ML and the network R. These results indicate that, provided variations in the NV parameters are small, it is possible to use a fixed network R to directly read out any NV, without the need to run experiments to generate the training data.
\begin{table}[htbp]
\begin{tabular}{c|c|c|c}
$A_\perp$ (MHz)& TM fidelity & ML fidelity & network R fidelity\\
\hline
$-40$ & $97.94(2)\%$ & $98.20(4)\%$ & $98.24(4)\%$ \\
\hline
$-30$ & $98.67(2)\%$ & $98.76(3)\%$ & $98.66(4)\%$ \\
\hline
\end{tabular}
\caption{Robustness test of the network R trained with \{$k_\mathrm{ion}=90\beta$~MHz, $A_{\perp}=-50$~MHz\}. We compare the readout fidelities of test data with different $A_\perp$ from TM, ML, and the network R. The result from the network R is better than TM when $A_\perp$ is not changed too much.}\label{tab:train_A_test_B}
\end{table}
\end{document}
\begin{document}
\title{The approximation property and Lipschitz mappings on Banach spaces}
\begin{abstract}
We present an overview of the approximation property, paying special attention to recent results relating the approximation property to ideals of linear operators and Lipschitz ideals.
We complete the paper with some new results on approximation of Lipschitz mappings and their relation to linear operator ideals.
\end{abstract}
\section{The approximation property}
The approximation property is a fundamental property in Functional Analysis. Although the widest context in which it has been studied is that of locally convex spaces, it has been especially fruitful in Banach space theory. A locally convex space (l.c.s.) $E$ has the {\it approximation property} (AP for short) if the identity on $E$ can be approximated uniformly on precompact subsets of $E$ by finite rank operators. Denoting by $Id_E$ the identity of $E$, by $E'$ the topological dual of $E$, by $E'\otimes E$ the tensor product of $E'$ and $E$, and by ${\mathcal L}(E)$ the space of all continuous operators on $E$, the AP can be written as $Id_E\in \overline {E'\otimes E}^{\tau_c}$ in ${\mathcal L}(E)$, where $\tau_c$ is the topology of uniform convergence on precompact subsets of $E$.
In \cite[p. 108]{Sc}
one can read:
\begin{quote}
But it is not known whether in every l.c.s. $E$ the identity map can be approximated, uniformly on every precompact set in $E$, by continuous linear maps of finite rank. The question whether or not this is true constitutes the {\it approximation problem.}
\end{quote}
Around 1970 every known Banach space (indeed, every known l.c.s.) had the AP. Grothendieck proved in 1955 that if every Banach space has the AP then every l.c.s. has the AP. This result made the study of the AP for Banach spaces of special interest. At that time it was known that every Banach space with a basis had the AP. So the basis problem posed by Banach in 1932 hit the scene: {\it Does every separable Banach space have a Schauder basis?} Karlin proved in 1948 that there exist separable Banach spaces without an unconditional basis, e.g. $C([0,1])$. However, a complete answer to the basis problem was unknown, so the relation between the two problems, the approximation problem and the basis problem, was very strong. To make things more interesting, Grothendieck also proved that if every Banach space had the AP then every separable Banach space would have a basis. These results spread interest in the AP for Banach spaces, which became the natural setting where the theory was mainly developed. Of course all this was before 1973, when Enflo solved the approximation problem by giving a celebrated example of a separable Banach space without a basis and without the AP.
After Enflo's example, many new ones appeared in the literature. For instance, the interested reader can find details of the following ones in \cite{Ca}:
\begin{itemize}
\item Figiel (unpublished) and Davie (1973/75) found an example of a subspace $S$ of $\ell_p$, $p>2$, without the AP.
\item Szankowski (1978) gave an example of a subspace $S$ of $\ell_p$, $1\leq p<2$, without the (compact) AP.
\item Johnson (1979/80) gave a Banach space whose subspaces have all the AP but some of them have no basis.
\item Szankowski (1981) proved that $\mathcal L(\ell_2;\ell_2)$ does not have the AP.
\item Pisier (1983/85) found an infinite dimensional Banach space such that $X\hat \otimes_\pi X=X\hat\otimes_\varepsilon X$ algebraically and topologically. Such $X$ does not have the AP.
\end{itemize}
Although the definition of the AP is stated just for approximating the identity, it is well-known that not only the identity can be approximated. If $E$ is a l.c.s. with topological dual $E'$, then the following are equivalent:
\begin{enumerate}
\item $Id_E\in \overline {E'\otimes E}^{\tau_c}$ in ${\mathcal L}(E)$ (i.e. $E$ has the AP)
\item ${\mathcal L}(E)=\overline {E'\otimes E}^{\tau_c}$.
\item For all l.c.s. $F$, $\mathcal L(E;F)=\overline {E'\otimes F}^{\tau_c}$.
\item For all l.c.s. $F$, $\mathcal L(F;E)=\overline {F'\otimes E}^{\tau_c}$.
\end{enumerate}
Compact operators can also be approximated uniformly, not only on compact sets but also on bounded sets. This was proved by Grothendieck in 1955.
Let $E$ be a Banach space with strong dual $E'$ and let $\tau_b$ be the topology of uniform convergence on bounded sets of $E$.
\begin{itemize}
\item $E$ has the AP $\iff$ for every Banach space $F$, $\mathcal K(F;E)=\overline {F'\otimes E}^{\tau_b}$ in $\mathcal L(F;E)$.
\item $E'$ has the AP $\iff$ for every Banach space $F$, $\mathcal K(E;F)=\overline {E'\otimes F}^{\tau_b}$ in $\mathcal L(E;F).$
\end{itemize}
where $\mathcal K(E;F)$ denotes the space of all compact operators from $E$ into $F$.
However, it is still an open problem whether a Banach space $E$ has the AP whenever
$\mathcal K(E;E)=\overline {E'\otimes E}^{\tau_b}$ in $\mathcal L(E;E)$.
\section{The approximation property and ideals of operators}
In the last decade, linear operator ideals have entered the study of the approximation property. The main idea is to consider variants of the approximation property involving not only bounded or compact operators, but any linear operator belonging to some given operator ideal. In these variants a suitable operator topology related to the ideal is required. In \cite{BeBo}, an approximation property is considered in which every linear operator $T \in \mathcal L(E,E)$ can be approximated uniformly on compact subsets of $E$ by operators in $\mathcal I(E,E)$. In \cite{c,DOPS,oja12,r} the operators from the ideal are approximated by finite rank operators. Moreover, replacing compact sets by another class of sets with some kind of compactness related to the ideal $\mathcal I$ has also been considered. The new class of sets is formed by the $\mathcal I$-compact sets. This notion was introduced by Carl and Stephani in \cite{CaSt}, and the related approximation property has been studied in \cite{DOPS, LaTu,LaTu3}. All these ideas have been unified in \cite{BeBo1}, where the concept of ideal topology was introduced. Given two ideals ${\mathcal I,J}$ of linear operators, the unifying approximation property of \cite{BeBo1} reads as follows: a Banach space $E$ is said to have the $({\mathcal I,J}, \tau)$-approximation property if $E$-valued operators belonging
to the operator ideal ${\mathcal I}$ can be approximated, with respect to the ideal topology $\tau$, by operators
belonging to the operator ideal ${\mathcal J}$.
Let us recall some concepts related to the tandem approximation property/operator ideals.
Let $\mathcal I$ be an operator ideal, and let $E$ and $F$ be Banach spaces.
The space of all finite rank linear operators between two Banach spaces $E$ and $F$ is denoted by ${\mathcal F}(E,F)$.
A subset $ B\subset E$ is {\it relatively $\mathcal I$-compact} if $B \subseteq S(M)$, where $M$ is a compact subset of some Banach space $G$ and $S \in \mathcal I(G,E)$.
A linear operator $T:E\to F$ is {\it $\mathcal I$-compact} if $T(B_E)$ is a relatively $\mathcal I$-compact subset of $F$.
The operator ideal formed by all linear $\mathcal I$-compact operators is denoted by ${\mathcal K}_{\mathcal I}$.
Let $(\mathcal I,\|\cdot\|_{\mathcal I})$ be a Banach operator ideal.
Following \cite{oja12} a Banach space $E$ has the {\it ${\mathcal I}$-approximation property}
if for every Banach space $F$, $\overline{F'\otimes E}^{\|\cdot\|_{\mathcal I}}={\mathcal I}(F,E)$.
In \cite{LaTu} the norm $\|\cdot\|_{\mathcal I}$ is replaced by the operator norm $\|\cdot \|$ in ${\mathcal L}$, that is, a Banach space
$E$ has the {\it ${\mathcal I}$-uniform approximation property}
if for every Banach space $F$, $\overline{F'\otimes E}^{\|\cdot\|}={\mathcal I}(F,E)$.
Note that, whenever the ideal of compact operators $\mathcal K$ is considered, the ${\mathcal K}$-(uniform) approximation property is just the approximation property.
In particular, $E$ has the ${\mathcal K}_{\mathcal I}$-uniform approximation property if for every Banach space $F$, $\overline{F'\otimes E}^{\|\cdot\|}={\mathcal K}_{\mathcal I}(F,E)$,
that is, every ${\mathcal I}$-compact operator from $F$ into $E$ can be uniformly approximated by finite rank operators.
It has been proved \cite{LaTu} that a Banach space $E$ has the ${\mathcal K}_{\mathcal I}$-uniform approximation property if, and only if, the identity $Id_E\in \overline{E'\otimes E}^{\tau_{\mathcal I}} $, where $\tau_{\mathcal I}$ is the topology of uniform convergence on relatively ${\mathcal I}$-compact sets.
\section{The approximation property for Lipschitz operators.}
The above point of view of considering ideals of operators to obtain variants of the approximation property has motivated the study of the approximation property for metric spaces. Since the publication of \cite{FJ09}, several ideals of Lipschitz mappings have appeared in recent years; a general approach can be found in \cite{AcRuSaYa}. Lipschitz operator ideals are the basis for considering approximation properties on metric spaces in an intrinsic way, and not only on Banach spaces associated to them.
Let $X$ be a pointed metric space (with distinguished point $0$).
A map $T:X\to E$ is {\it Lipschitz} if there is $C>0$ such that $\|T(x)-T(y)\|\leq Cd(x,y)$ for all $x,y\in X$.
The {\it Lipschitz norm} ${\rm Lip}$ is given by the infimum of all $C>0$ as above. The space of all Lipschitz maps $T$ with $T(0)=0$ is denoted by $Lip_0(X,E)$ and forms a Banach space when endowed with the ${\rm Lip}$ norm.
Let $\chi_A$ be the characteristic function of a set $A$.
For $x,x^{\prime }\in X$, define $ m_{xx^{\prime }}:=\chi_{\left\{ x\right\} }-\chi_{\left\{ x^{\prime }\right\} }$. We write $ \mathcal{M}(X)$ for the set of all functions $m=\sum\limits_{j=1}^{n}\lambda_{j}m_{x_{j}x_{j}^{\prime }}$, called {\it molecules}, for any scalars $\lambda_j$ and any $x_j,x_j^{\prime}$ in $X$, endowed with the norm
\begin{equation}
\left\Vert m\right\Vert _{\mathcal{M}(X)}=\inf \left\{
\sum\limits_{j=1}^{n}\left\vert \lambda _{j}\right\vert
d(x_{j},x_{j}^{\prime }),\ \ m=\sum\limits_{j=1}^{n}\lambda
_{j}m_{x_{j}x_{j}^{\prime }}\right\} ,
\end{equation}
where the infimum is taken over all finite representations of the molecule $m$.
The {\it Arens--Eells space} \AE $\left( X\right) $ is the completion of the normed space $\mathcal{M}(X)$ (see \cite{A.E56}).
It is well-known that the map $\delta_{X} :X\rightarrow $\AE $(X)$ defined by $\delta_{X}(x)=m_{x0}$
isometrically embeds $X$ in \AE $\left( X\right) $ and that each Lipschitz operator $T:X\to E$ with values in a Banach space $E$, factors as
$$
\xymatrix{
X \ar[rr]^{T} \ar@{->}[dr]_{\delta_X} & & E, \\
& \AE(X) \, , \ar[ur]_{T_L} & }
$$
where $T_L$ is the unique continuous linear operator such that $T=T_L\circ \delta_X$.
The correspondence $T\longleftrightarrow T_{L}$ establishes an isomorphism
between the normed spaces $Lip_{0}(X,E)$ and $\mathcal{L}(\AE \left(
X\right) ,E)$.
We will consider a {\it composition Lipschitz ideal}, i.e.,
$$
\mathcal I_{Lip}=\mathcal I\circ Lip_0
$$
for some linear operator ideal $\mathcal I$. This equality means that $T\in \mathcal I_{Lip}(X,E)$ if and only if $T=u\circ S$ for some $u\in {\mathcal I}$ and $S\in Lip_0$. In \cite{AcRuSaYa} it is proved that $T\in \mathcal I_{Lip}(X,E)=\mathcal I\circ Lip_0(X,E)$ if and only if $T_L \in \mathcal I(\AE(X),E)$.
In \cite{AcRuSaYa} the approximation property for metric spaces has been studied by means of ideals of Lipschitz operators. Some of the definitions involved are natural extensions to the Lipschitz setting of the corresponding linear concepts. However, most of the proofs concerning the approximation property strongly use linear tools and cannot be adapted to the Lipschitz case. This produces several technical difficulties in the new metric approximation theory.
Let $Y$ and $X$ be pointed metric spaces and let ${\mathcal I}$ be a linear operator ideal.
A set $K \subseteq X$ is {\it (relatively) $\mathcal I$-Lipschitz compact} if $\delta_X(K)$ is (resp. relatively) $\mathcal I$-compact in $\AE (X)$.
If $ E$ is a Banach space, every relatively $\mathcal I$-Lipschitz compact subset of a Banach space is relatively $\mathcal I$-compact (\cite[Proposition 4.2]{AcRuSaYa}).
We say that a Lipschitz operator $\phi:Y \to X$ is {\it $\mathcal I$-Lipschitz compact} if $\phi(B_Y) =\phi(\{x \in Y: d(x,0) \le 1 \})$ is relatively $\mathcal I$-Lipschitz compact.
It is known that a linear map $T:F \to E$ between Banach spaces is $\mathcal I$-Lipschitz compact if, and only if, it is linear $\mathcal I$-compact.
\section{The $\mathcal I$-approximation property for Lipschitz operators.}
The standard approximation property for the free spaces $\AE(X)$ has been studied by several authors (e.g. \cite{dal,dal2, GoOz}). In \cite{AcRuSaYa} an approximation property has been introduced explicitly for metric spaces: the $\mathcal I$-approximation property.
Consider a linear operator ideal $\mathcal I$ and let ${\mathcal I}_{Lip}={\mathcal I}\circ Lip_0$ be the associated composition Lipschitz operator ideal.
On $Lip_0(X,E)$, we consider the topology {\it Lipschitz-$\tau_{\mathcal I}$} of uniform convergence on $\mathcal I$-Lipschitz compact sets, generated by the seminorms
$$
q_K(T):= \sup_{x \in K} \|T(x)\| = \sup_{m \in \delta_X(K)} \|T_L(m)\|,
$$
where $K$ is a relatively $\mathcal I$-Lipschitz compact set of $X$.
It is easy to see that this topology induces on the space $\mathcal L(F,E)$, of linear operators between Banach spaces $F$ and $E$, the topology $\tau_{\mathcal I}$ of uniform convergence on $\mathcal I$-compact sets.
Consider a set of operators $\mathcal O(X,\AE(X)) \subseteq Lip_0(X, \AE(X))$.
A metric space $X$ has the {\it $\mathcal I$-Lipschitz approximation property with respect to $\mathcal O(X,\AE(X))$} if $\delta_X:X \to \AE(X)$ belongs to the Lipschitz-$\tau_{\mathcal I}$-closure of $\mathcal O(X,\AE(X))$.
It has been proved in \cite{AcRuSaYa} that the new concepts and results in the Lipschitz setting recover those that come from the linear theory related to the ${\mathcal K}_{\mathcal I}$-uniform approximation property. Several connections are obtained when we consider the $\mathcal I$-Lipschitz approximation property with respect to concrete sets $\mathcal O(X,\AE(X))$. For instance, the approximation property on a Banach space $E$ is recovered whenever $\mathcal O(E,\AE(E))={\mathcal F}(E,\AE(E))$, as the next result shows.
Let $E$ be a Banach space. The linearization $\beta_E:\AE(E)\to E$ of the identity map in $E$ is known as the barycentric map. It is well-known that $\beta_E(B_{\AE(E)})=B_E$.
\begin{proposition}
Let $E$ be a Banach space. If $E$ has the ${\mathcal K}$-Lipschitz approximation property with respect to ${\mathcal F}(E,\AE(E))$ then $E$ has the approximation property.
\end{proposition}
\begin{proof}
Fix $\varepsilon>0$ and a compact set $K\subset E$. By assumption, there is a finite rank operator $T\in {\mathcal F}(E,\AE(E))$ such that $\|\delta_E(x)-T(x)\|_{\AE(E)}<\varepsilon$ for all $x\in K$. Using the continuity of the barycentric map we get $\|x-\beta_E\circ T(x)\|_E<\varepsilon$ for all $x\in K$. Since $\beta_E\circ T\in {\mathcal F}(E,E)$, we conclude that $E$ has the approximation property.
\end{proof}
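For clarity, the estimate in the proof unfolds as follows, using $\beta_E\circ\delta_E=Id_E$ and $\|\beta_E\|\le 1$ (which follows from $\beta_E(B_{\AE(E)})=B_E$):
\[
\|x-\beta_E\circ T(x)\|_E
=\|\beta_E\left(\delta_E(x)-T(x)\right)\|_E
\le\|\beta_E\|\,\|\delta_E(x)-T(x)\|_{\AE(E)}
<\varepsilon .
\]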
Let $E$ be a Banach space and let ${\mathcal I}$ be an ideal of linear operators. A subset $ B\subset E$ is {\it $\mathcal I$-bounded} if $B \subseteq S(B_G)$, where $G$ is a Banach space and $S \in \mathcal I(G,E)$. Any relatively $\mathcal I$-compact set is $\mathcal I$-bounded. If $\mathcal I=\mathcal L$ is the ideal of all continuous bounded operators, then $\mathcal I$-bounded sets are just all bounded sets.
Note that if, in the definition of the ${\mathcal I}$-Lipschitz approximation property, we replace the topology $\tau_{\mathcal I}$ with the topology of uniform convergence on ${\mathcal I}$-bounded sets, then we get the following general result in a similar way:
\begin{proposition}
Let $E$ be a Banach space and let ${\mathcal I}$ be a linear ideal. If $E$ has the ${\mathcal L}$-Lipschitz approximation property with respect to ${\mathcal I}(E,\AE(E))$ then $E$ has the ${\mathcal I}$-uniform approximation property.
\end{proposition}
The set $\mathcal O_0(X,\AE(X))= Lip_{0\mathcal F}(X,\AE(X))$ allows one to obtain the approximation of vector-valued Lipschitz mappings by finite rank Lipschitz mappings.
\begin{proposition}\cite[Proposition 4.8]{AcRuSaYa}
Let $X$ be a pointed metric space. The following assertions are equivalent:
\begin{itemize}
\item $X$ has the ${\mathcal I}$-Lipschitz approximation property with respect to $ Lip_{0{\mathcal F}}(X,\AE(X))$.
\item For every Banach space $E$, $Lip_{0{\mathcal F}}(X,E)$ is Lipschitz-$\tau_{\mathcal I}$ dense in $Lip_0(X,E)$.
\end{itemize}
\end{proposition}
It also permits the approximation of ${\mathcal I}$-Lipschitz compact operators by finite rank Lipschitz mappings, as the following proposition shows. However, the converse is still an open problem.
\begin{proposition}\cite[Proposition 4.9]{AcRuSaYa}\label{aaaa}
Let $X$ be a pointed metric space and $\mathcal I$ be an operator ideal. If $X$ has the $\mathcal I$-Lipschitz approximation property with respect to $Lip_{0{\mathcal F}} (X,\AE(X))$ then, for any pointed metric space $Z$ and any $\mathcal I$-Lipschitz compact mapping $\phi:Z \to X$, the mapping $\delta_X \circ \phi$ can be
approximated by finite rank operators of
$Lip_{0{\mathcal F}} (Z,\AE(X))$ uniformly on $B_Z$.
\end{proposition}
When we consider the set $\mathcal O_1(E,\AE(E))=\delta_E \circ Lip_{0\mathcal F}(E, \AE(E))$ for a Banach space $E$, then the $\mathcal I$-Lipschitz approximation property is weaker than the ${\mathcal K}_{\mathcal I}$-uniform approximation property.
\begin{proposition}\cite[Proposition 4.10]{AcRuSaYa}\label{bbbb}
Let $\mathcal I$ be an operator ideal.
Let $E$ be a Banach space with the ${\mathcal K}_{\mathcal I}$-uniform approximation property. Then $E$ has the $\mathcal I$-Lipschitz approximation property as a metric space with respect to the set $\,\,\,\delta_E \circ Lip_{0\mathcal F}(E,\AE(E))$.
\end{proposition}
The following result completes Proposition \ref{aaaa} and gives a partial converse to Proposition \ref{bbbb}.
\begin{proposition}
Let $E$ be a Banach space and let $\mathcal I$ be an operator ideal. If $E$ has the $\mathcal I$-Lipschitz approximation property with respect to $Lip_{0{\mathcal F}} (E,\AE(E))$ then any $\mathcal I$-Lipschitz compact mapping $\phi:Z \to E$, defined on any pointed metric space $Z$, can be
approximated by finite rank mappings in
$\beta_E\circ Lip_{0{\mathcal F}} (Z,\AE(E))$ uniformly on $B_Z$.
\end{proposition}
\begin{proof}
Let $Z$ be a pointed metric space and let $\phi:Z \to E$ be an $\mathcal I$-Lipschitz compact mapping. Fix $\varepsilon>0$. By Proposition \ref{aaaa}, there is $f\in Lip_{0{\mathcal F}} (Z,\AE(E))$ such that $\|f(x)-\delta_E\circ \phi(x)\|_{\AE (E)}<\varepsilon$ for all $x\in B_Z$. Using the continuity of the linear barycentric map $\beta_E$, which has norm one, we get that $\|\beta_E\circ f(x)- \phi(x)\|_{E}<\varepsilon$ for all $x\in B_Z$.
\end{proof}
If $\mathcal O_2(X,\AE(X))= \mathcal F \circ \delta_X(X, \AE(X))$ then the connection is summarized in the following proposition.
\begin{proposition}\cite[Proposition 4.11]{AcRuSaYa}
Let $\mathcal I$ be an operator ideal and let $X$ be a pointed metric space.
If $\AE(X)$ has the ${\mathcal K}_{\mathcal I}$-uniform approximation property, then
$X$ has the $\mathcal I$-Lipschitz approximation property with respect to the class $\mathcal F \circ \delta_X(X,\AE(X))$.
\end{proposition}
\end{document}
\begin{document}
\title{Einstein-Podolsky-Rosen-steering using quantum correlations in non-Gaussian entangled states}
\author{Priyanka Chowdhury}
\affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700098, India}
\author{Tanumoy Pramanik}
\affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700098, India}
\author{A. S. Majumdar}
\affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700098, India}
\author{G. S. Agarwal}
\affiliation{Department of Physics, Oklahoma State University, Stillwater, Oklahoma 74078, USA}
\begin{abstract}
In view of the increasing importance of non-Gaussian entangled states in
quantum information protocols like teleportation and violations of Bell
inequalities, the steering of continuous variable non-Gaussian
entangled states is investigated. The EPR steering for Gaussian states may be
demonstrated through
the violation of the Reid inequality involving products of the inferred
variances of
non-commuting observables. However, for arbitrary states violation of the Reid
inequality is not necessary for steering, because of the higher order
correlations present in such states. One then needs to use the entropic steering
inequality.
We examine several classes of currently important
non-Gaussian entangled states, such as the
two-dimensional harmonic
oscillator, the photon subtracted two mode squeezed vacuum, and the NOON state,
in order to demonstrate the steering property of such states. A comparative
study of the violation of the Bell-inequality
for these states shows that the entanglement present is more easily revealed
through steering compared to Bell-violation for several such states.
\pacs{03.65.Ud, 42.50.Xa}
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
The pioneering work of Einstein, Podolsky and Rosen (EPR) \cite{epr} has over
the years led to the unfolding of several rich and arguably paradoxical
features of quantum mechanics \cite{Reid_09}. Considering a position-momentum correlated state
of two particles, and assuming the notions of spatial separability, locality,
and reality to hold true at the level of quantum particles, EPR argued that
the quantum mechanical description of the state of a particle is not
complete. An immediate consequence of correlations between spatially separated
particles was then noted by Schrodinger \cite{schrod} in that it allowed
for the steering of the state on one side merely by the choice
of the measurement basis on the other side without in any way having direct
access to the affected particle. The word `entanglement' was first coined
by Schrodinger to describe the property of such spatially separated but
correlated particles.
Inspired by the early works of EPR and Schrodinger, a formalism for
quantifying the correlations in terms of joint measurements of observables
corresponding to two spatially separated particles was first proposed by
Bell \cite{bell} for the case of any general theory obeying the tenets of
locality and realism. Bell's inequality was shown to be violated in
quantum mechanics, a fact that has since been empirically validated in several
subsequent experiments \cite{aspect}. From the practical point of view,
quantum correlations have been used as resource in
performing tasks that cannot be achieved by classical means,
leading to many interesting and important information theoretic applications,
such as dense coding, teleportation, and secret key generation.
Developments in quantum information theory for both discrete \cite{QI_Disc_09} as well as continuous
variables \cite{QI_Conti_05} have brought about the realization
of subtle differences in various categories of correlations, for example,
the distinction of quantum entanglement from more general quantum correlations,
{\it viz.}, quantum discord \cite{discord} found in classes of
separable states.
On the other hand, the understanding of the precise nature of correlations
that lead to the EPR paradox had to wait for a number of years beyond
Bell's derivation of his inequality, and further advances in quantum
information theory. In this direction, a testable formulation of the EPR
paradox was proposed by Reid \cite{reid} in the realm of continuous variable
systems using the position-momentum uncertainty relation, in terms of an
inequality involving products of inferred variances of incompatible
observables. This led to the
experimental realization of the EPR paradox by Ou et al. \cite{ou} for
the case of two spatially separated and correlated light modes. Similar
demonstrations of the EPR paradox using quadrature amplitudes of other
radiation fields were performed \cite{tara}. Moreover, a
much stronger violation of the Reid inequality for two mode squeezed vacuum
states has been experimentally demonstrated recently \cite{stein}.
The EPR criterion has been used to demonstrate entanglement in Bose-Einstein
condensates, as well \cite{he}.
Other works have shown that
the Reid inequality is effective in demonstrating the EPR paradox for
systems in which correlations appear at the level of variances.
However, in systems
with correlations manifesting in higher than the second moment, the Reid
formulation generally fails to show occurrence of the EPR paradox, even though
Bell nonlocality may be exhibited \cite{walborn,lg}.
A more direct manifestation of EPR-type correlations has been proposed by the
work of Wiseman et al. \cite{wiseman1,wiseman2}, where steering is formulated
in terms
of an information theoretic task. Using similar formulations for entanglement
as well as Bell nonlocality, a clear distinction between these three types
of correlations is possible using joint probability distributions. Wiseman
et al. \cite{wiseman1,wiseman2} have further shown a hierarchy
between the three types of correlations, with entanglement being the
weakest, steering the intermediate, and Bell violation the strongest
of the three. Bell nonlocal states constitute a strict subset of steerable
states which, in turn, are a strict subset of entangled states. For the case
of pure entangled
states of two qubits the three classes overlap. An experimental
demonstration of these differences has been performed for mixed
entangled states of two qubits \cite{saunders}. A loophole free EPR-steering
experiment has also been performed \cite{witt}.
The case of continuous
variable states however poses an additional difficulty, since there exist
several pure entangled states which do not display steering through the
Reid criterion based on variances of observables \cite{reid}. In order to
exploit higher order correlation
in such states, Walborn et al. \cite{walborn} proposed a new steering
condition which is derived using the entropic uncertainty principle
\cite{bialynicki}. Entropic functions by definition incorporate correlations
up to all orders, and the Reid criterion can be seen to follow as a
limiting case of the entropic steering relation \cite{walborn}. Generalizations
of entropic steering inequalities to the case of symmetric
steering \cite{schn}, loss-tolerant steering \cite{Loss_toler}, as
well as to the case of steering with quantum memories \cite{schn2} have
also been proposed recently.
EPR steering for Gaussian states has been studied extensively both
theoretically and experimentally. It is realized though that Gaussian states
are a rather special class of states, and there exist very common examples
of states, such as the superposition of two oscillators in Fock states that
are far from Gaussian in nature. The non-Gaussian states are usually generated
by the process of photon subtraction and addition \cite{book}, and these
states generally have a higher degree of entanglement than the Gaussian states.
Hence, non-Gaussian states have applications in tests of Bell inequalities,
quantum teleportation and other quantum information protocols \cite{qinfo}.
Extensions of the entanglement criteria for non-Gaussian
states have been proposed recently \cite{ENT_NG}.
Since the steering of correlated systems has begun to be studied only
recently, it is important to understand the steering of systems with
non-Gaussian correlations. A particular example of a non-Gaussian state
was considered by Walborn et al. \cite{walborn} revealing steering through
the entropic inequality.
Non-Gaussian entanglement and steering has also been
recently studied in the context of Kerr-squeezed optical
beams \cite{Olsen_2013}.
In the present work we consider several categories of non-Gaussian states
with the motivation to investigate EPR-steering of such states. This
should stimulate steering experiments using non-Gaussian
states.
The plan of the paper is as follows. In the next section we present a
brief review of the basic concepts involved in EPR steering. Here we
first discuss the Reid criterion for demonstrating the EPR paradox,
and recall its applicability for the case of the two mode squeezed vacuum
state. We then discuss steering as an information processing task,
and the entropic steering inequality for conjugate variable pairs.
In Section III, several examples of non-Gaussian states are studied
for their steering and nonlocality properties. Here we first consider
entangled eigenstates of the two dimensional harmonic oscillator given
by Laguerre-Gaussian wave functions that have been experimentally
realized \cite{review,fickler}, and may be capable of useful information
processing due to their
high available degrees of freedom. We show the inadequacy of
the Reid criterion in revealing steering for such states. We
then demonstrate steering using the entropic steering relation.
Photon subtraction from light beams is useful for generating a variety
of non-Gaussian states, and
is thought to be of much practical use in quantum state engineering
\cite{zavatta}. We next study the steering properties of photon
subtracted squeezed vacuum states using the entropic steering inequality.
Lastly, we study steering by N00N states \cite{dow}, which are regarded as
being of high utility in quantum metrology. In all the examples considered,
we present a comparison of the magnitude of Bell violation with the
strength of steering. Such an analysis also brings out the
comparative efficiency of the steering framework in revealing quantum
correlations in a given state compared to the Bell framework,
that may be of practical relevance. A summary of our main results is
presented in Section IV.
\section{The EPR paradox and steering}
The EPR paradox may be understood by considering a bipartite entangled
state which may be expressed in two different ways, as
\begin{eqnarray}
\vert\Psi\rangle = \sum_{n=1}^{\infty}c_n\vert\psi_n\rangle\vert u_n\rangle =
\sum_{n=1}^{\infty}d_n\vert\phi_n\rangle\vert v_n\rangle
\label{ensemb}
\end{eqnarray}
where $\{\vert u_n\rangle\}$ and $\{\vert v_n\rangle\}$ are two orthonormal
bases for one of the parties (say, Alice). If Alice chooses to measure in the
$\{\vert u_n\rangle\}$ ($\{\vert v_n\rangle\}$) basis, then she
instantaneously projects
Bob's system into one of the states $\vert\psi_n\rangle$ ($\vert\phi_n\rangle$).
This ability of Alice to affect Bob's state through her choice of the
measurement basis was dubbed ``steering'' by Schrodinger \cite{schrod}.
Since there is no physical interaction between Alice and Bob, it is paradoxical
that the ensemble of $\vert\psi_n\rangle$s is different from the ensemble
of $\vert\phi_n\rangle$s.
The
EPR paradox stems from the correlations between two non-commuting
observables of a sub-system with those of the other sub-system, i.e.,
$\langle x\, p_y\rangle \neq 0$, with $\langle x\rangle=0=\langle p_y\rangle$ individually.
In the original formulation of the paradox correlations between the measurement outcomes
of positions and momenta for two separated particles was considered. Due to the presence of correlations,
the measurement of the position of, say, the first particle leads one to infer the correlated value of
the position for the second particle (say, $x_{\mathrm{inf}}$). Now, if the momentum of the second particle is measured
giving the outcome, say $p$, the value of the product of uncertainties $(\Delta x_{\mathrm{inf}})^2 (\Delta p_{\mathrm{inf}})^2$ may
turn out to be smaller than that allowed by the uncertainty principle, {\it viz.} $(\Delta x)^2 (\Delta p )^2 \ge 1/4$, thus leading to the paradox.
The following material in this section is primarily to fix the setting for
our work on non-Gaussian entangled states.
\subsection{The Reid inequality and its violation for the two mode squeezed vacuum state}
The possibility of demonstrating the EPR paradox in the context
of continuous variable correlations was first proposed by Reid \cite{reid}. Such an idea has been experimentally realized \cite{ou} through quadrature phase measurements performed on the two output
beams of a nondegenerate parametric amplifier. This technique of demonstrating the product of variances of the inferred values of
correlated observables to be less than that allowed by the uncertainty principle, has since gained popularity \cite{tara}, and has been employed recently for variables other than position and momentum, e.g., for correlations between
optical and orbital angular momentum of light emitted through
spontaneous parametric down-conversion \cite{leach}.
Let us now consider the situation where the quadrature phase components of two correlated and spatially
separated light fields are measured. The quadrature amplitudes associated with the fields $E_{\gamma}=C[\hat{\gamma} e^{-i\omega_{\gamma} t} + \hat{\gamma}^{\dagger} e^{i\omega_{\gamma} t}]$ (where $\hat{\gamma}$, $\gamma\in\{a,b\}$, are the bosonic operators for the two modes, $\omega_{\gamma}$ is the frequency, and
$C$ is a constant incorporating spatial factors taken to be equal for each mode) are given by
\begin{eqnarray}
\hat{X}_{\theta}=\frac{\hat{a}e^{- i \theta} + \hat{a}^{\dagger} e^{i \theta}}{\sqrt{2}},
\hspace{0.5cm}
\hat{Y}_{\phi}=\frac{\hat{b}e^{- i \phi} + \hat{b}^{\dagger} e^{i \phi}}{\sqrt{2}},
\label{Quard}
\end{eqnarray}
where,
\begin{eqnarray}
\hat{a} &=& \frac{X + i P_x}{\sqrt{2}},\hspace{0.5cm} \hat{a}^\dagger = \frac{X -i P_x}{\sqrt{2}},\nonumber\\
\hat{b}&=& \frac{Y+i P_y}{\sqrt{2}}, \hspace{0.5cm} \hat{b}^\dagger = \frac{Y- i P_y}{\sqrt{2}},
\label{boson_op}
\end{eqnarray}
and the commutation relations of the bosonic operators are given by $[\hat{a},\hat{a}^{\dagger}]=1=[\hat{b},\hat{b}^{\dagger}]$.
Now, using Eq.(\ref{boson_op}) the expression for the quadratures can be rewritten as
\begin{eqnarray}
\hat{X}_{\theta} = \cos[\theta] ~\hat{X} + \sin[\theta] ~\hat{P}_x, \hspace{0.3cm}
\hat{Y}_{\phi} = \cos[\phi]~ \hat{Y} +\sin[\phi]~ \hat{P}_y.
\label{Dless}
\end{eqnarray}
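For completeness, Eq.~(\ref{Dless}) follows by substituting Eq.~(\ref{boson_op}) into Eq.~(\ref{Quard}); for the first quadrature,

```latex
\begin{eqnarray}
\hat{X}_{\theta} &=& \frac{1}{\sqrt{2}}\left(\frac{\hat{X}+i\hat{P}_x}{\sqrt{2}}\,e^{-i\theta}
 + \frac{\hat{X}-i\hat{P}_x}{\sqrt{2}}\,e^{i\theta}\right) \nonumber \\
 &=& \hat{X}\,\frac{e^{i\theta}+e^{-i\theta}}{2}
 + i\hat{P}_x\,\frac{e^{-i\theta}-e^{i\theta}}{2}
 = \cos[\theta]\,\hat{X}+\sin[\theta]\,\hat{P}_x, \nonumber
\end{eqnarray}
```

and similarly for $\hat{Y}_{\phi}$.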
The correlations between the quadrature amplitudes $\hat{X}_{\theta}$ and $\hat{Y}_{\phi}$ are captured by the correlation coefficient, $ C_{\theta,\phi} $ defined as \cite{reid,ou,tara}
\begin{eqnarray}
C_{\theta,\phi}=\frac{\langle \hat{X}_{\theta} \hat{Y}_{\phi} \rangle}{\sqrt{\langle \hat{X}^2_{\theta} \rangle \langle \hat{Y}^2_{\phi} \rangle}},
\label{Cr_f}
\end{eqnarray}
where $\langle \hat{X}_{\theta} \rangle=0=\langle \hat{Y}_{\phi} \rangle$. The correlation is perfect for those values of $\theta$ and $\phi$ for which $|C_{\theta,\phi}|=1$, while $C_{\theta,\phi}=0$ for uncorrelated variables.
Due to the presence of correlations, the quadrature amplitude $\hat{X}_{\theta}$ can be inferred by measuring the corresponding amplitude $\hat{Y}_{\phi}$. The
EPR paradox arises due to the ability to infer an observable of one system from the result of measurement
performed on a spatially separated second system. In realistic situations the correlations are not perfect because of the interaction with the environment as well as finite detector efficiency. Hence, the estimated amplitudes $\hat{X}_{\theta 1}$ and $\hat{X}_{\theta 2}$ with the help of $\hat{Y}_{\phi 1}$ and $\hat{Y}_{\phi 2}$, respectively, are subject to inference errors, and given by \cite{reid}
\begin{eqnarray}
\hat{X}_{\theta_1}^{e}=g_1 \hat{Y}_{\phi_1},
\hspace{0.5cm}
\hat{X}_{\theta_2}^{e}=g_2 \hat{Y}_{\phi_2},
\label{Est}
\end{eqnarray}
where $g_1$ and $g_2$ are scaling parameters.
Now, one may choose $g_1$, $g_2$, $\phi_1$, and $\phi_2$ in such a way that $\hat{X}_{\theta_1}$ and $\hat{X}_{\theta_2}$ are inferred with the highest possible accuracy. The errors given by the deviation of the estimated amplitudes from the true amplitudes $\hat{X}_{\theta_1}$ and $\hat{X}_{\theta_2}$ are captured by $(\hat{X}_{\theta_1}- \hat{X}_{\theta_1}^{e})$ and $(\hat{X}_{\theta_2}- \hat{X}_{\theta_2}^{e})$, respectively. The average errors of the inferences are given by
\begin{eqnarray}
(\Delta_{\inf} \hat{X}_{\theta_1})^2 &=& \langle (\hat{X}_{\theta_1}- \hat{X}_{\theta_1}^{e})^2\rangle = \langle (\hat{X}_{\theta_1}- g_1 \hat{Y}_{\phi_1})^2\rangle, \nonumber \\
(\Delta_{\inf} \hat{X}_{\theta_2})^2 &=& \langle (\hat{X}_{\theta_2}- \hat{X}_{\theta_2}^{e})^2\rangle = \langle (\hat{X}_{\theta_2}- g_2 \hat{Y}_{\phi_2})^2\rangle.
\label{Erro}
\end{eqnarray}
The values of the scaling parameters $g_1$ and $g_2$ are chosen such that
$\frac{\partial (\Delta_{\inf} \hat{X}_{\theta_1})^2}{\partial g_1} =0 = \frac{\partial (\Delta_{\inf} \hat{X}_{\theta_2})^2}{\partial g_2}$, from which it follows that
\begin{eqnarray}
g_1 = \frac{\langle \hat{X}_{\theta_1} \hat{Y}_{\phi_1} \rangle}{\langle \hat{Y}_{\phi_1}^2 \rangle},
\hspace{0.5cm}
g_2 = \frac{\langle \hat{X}_{\theta_2} \hat{Y}_{\phi_2} \rangle}{\langle \hat{Y}_{\phi_2}^2 \rangle}.
\label{g's}
\end{eqnarray}
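Substituting the optimal gains (\ref{g's}) back into Eq.~(\ref{Erro}) expresses the inference errors directly in terms of the correlation coefficient (\ref{Cr_f}):

```latex
\begin{eqnarray}
(\Delta_{\inf} \hat{X}_{\theta})^2
 = \langle \hat{X}_{\theta}^2\rangle
  - \frac{\langle \hat{X}_{\theta}\hat{Y}_{\phi}\rangle^2}{\langle \hat{Y}_{\phi}^2\rangle}
 = \langle \hat{X}_{\theta}^2\rangle\left(1-C_{\theta,\phi}^2\right), \nonumber
\end{eqnarray}
```

so that perfect correlation ($|C_{\theta,\phi}|=1$) makes the inference error vanish.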
The values of $\phi_1$ ($\phi_2$) are obtained by maximizing $C_{\theta_1,\phi_1}$ ($C_{\theta_2,\phi_2}$).
Now, due to the commutation relations $[\hat{X},\hat{P}_x]=i$ and $[\hat{Y},\hat{P}_y]=i$, it is required
that the product
of the variances of the above inferences satisfies $(\Delta_{\inf} \hat{X}_{\theta_1})^2 (\Delta_{\inf} \hat{X}_{\theta_2})^2 \ge 1/4$. Hence, the EPR paradox occurs if the correlations in the field quadratures lead to
the condition
\begin{eqnarray}
EPR \equiv (\Delta_{\inf} \hat{X}_{\theta_1})^2 (\Delta_{\inf} \hat{X}_{\theta_2})^2 < \frac{1}{4}.
\label{P_Uncer}
\end{eqnarray}
Let us consider a two mode squeezed vacuum (TMSV) state, the expression of which is given by \cite{NOPA}
\begin{eqnarray}
|NOPA\rangle = |\xi\rangle &=& S(\xi) |0,0\rangle \nonumber\\
& = & \sqrt{1-\lambda^2}~~\displaystyle\sum_{n=0}^{\infty}\lambda^n~|n,n\rangle
\label{NOPA}
\end{eqnarray}
where $ \lambda=\tanh(r)\in[0,1] $ with squeezing parameter $ r>0 $, and $ |m,n\rangle=|m\rangle_A\otimes |n\rangle_B $ ($|m\rangle$ and $|n\rangle$ being the usual Fock states). $S(\xi)=e^{\xi\hat{a}_1^{\dagger}\hat{a}_2^{\dagger}-\xi^\ast \hat{a}_1 \hat{a}_2}$, with $\xi=r e^{i\phi}$, is the (unitary) two-mode squeezing operator, and $ A $ and $ B $ label the modes held by Alice and Bob, respectively.
For the NOPA state given by Eq.(\ref{NOPA}), the inferred uncertainties are given by
\begin{eqnarray}
(\Delta_{\inf} X_{\theta})^2&=&\frac{1}{2} \cosh[2r] \nonumber \\
&& - \frac{1}{2} \tanh[2r] \sinh[2r] \cos^2[\theta+\phi],
\end{eqnarray}
where the quadrature amplitude $X_\theta$ is inferred by measuring the corresponding amplitude $Y_\phi$.
The minimum values of $(\Delta_{\inf} X_{\theta})^2$ for two different values of $\theta$ (i.e., $\theta_1=0$ and $\theta_2=\pi/2$) are
\begin{eqnarray}
(\Delta_{\inf} X_{\theta_1})^2= (\Delta_{\inf} X_{\theta_2})^2= \frac{1}{2 \cosh[2r]} ,
\label{NOPA_Reid}
\end{eqnarray}
which occur for $\phi_1=0$ and $\phi_2=\pi/2$, respectively. The product of
uncertainties is thus $\frac{1}{4 \cosh^2[2r]}$ which asymptotically reaches the value $0$ for $ r\rightarrow \infty $, and this shows that the Reid
condition (\ref{P_Uncer}) for occurrence of the EPR paradox holds. Hence,
the two mode squeezed vacuum state shows EPR steering for all values of $ r $ except at $ r=0$. However, the Reid condition fails to demonstrate
steering by more general non-Gaussian states, for example, the two-dimensional
harmonic oscillator, as we will show in Section III.
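The closed form (\ref{NOPA_Reid}) can also be checked numerically in a truncated Fock basis. The sketch below (assuming NumPy; the function name and the truncation parameter `nmax` are ours) builds the TMSV amplitudes, computes $\langle \hat{X}_A^2\rangle$ and $\langle \hat{X}_A \hat{X}_B\rangle$, and evaluates the optimally inferred variance $\langle \hat{X}_A^2\rangle-\langle \hat{X}_A\hat{X}_B\rangle^2/\langle \hat{X}_B^2\rangle$:

```python
import numpy as np

def tmsv_inferred_variance(r, nmax=60):
    """Inferred variance (Delta_inf X)^2 for the TMSV state,
    computed in a Fock basis truncated at nmax photons."""
    lam = np.tanh(r)
    # |xi> = sqrt(1-lam^2) sum_n lam^n |n,n>  ->  psi[m, n] = c_n delta_{mn}
    c = np.sqrt(1.0 - lam**2) * lam**np.arange(nmax)
    psi = np.diag(c)
    # annihilation operator and quadrature X = (a + a^dag)/sqrt(2)
    a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
    X = (a + a.T) / np.sqrt(2.0)
    # <X_A X_B> and <X_A^2> by contracting the two-mode amplitudes
    XAXB = np.einsum('mn,mp,pq,nq->', psi, X, psi, X)
    XA2 = np.einsum('mn,mp,pn->', psi, X @ X, psi)
    XB2 = XA2  # the state is symmetric under exchange of the modes
    return XA2 - XAXB**2 / XB2  # optimal gain g = <X_A X_B>/<X_B^2>
```

For moderate squeezing the result agrees with Eq.~(\ref{NOPA_Reid}), $(\Delta_{\inf}X)^2=1/(2\cosh 2r)$, to the accuracy of the truncation.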
\subsection{Steering and entropic inequalities}
A modern formulation of EPR steering was presented by Wiseman et
al. \cite{wiseman1,wiseman2} as an information processing task. They considered
that one of two parties (say, Alice) prepares a bipartite quantum state
and sends one of the particles to Bob. The procedure is repeated as many
times as required. Bob's particle is assumed to possess a definite
state, even if it is unknown to him (local hidden state). No such
assumption is made for Alice, and hence, this
formulation of steering is an asymmetric task.
Alice and Bob make measurements on their respective
particles, and communicate classically. Alice's task is to convince Bob
that the state they share is entangled. If correlations between Bob's
measurement results and Alice's declared results can be explained by
a local hidden state (LHS) model for Bob, he is not convinced. This is
because Alice could have drawn a pure state at random from some ensemble and
sent it to Bob, and then chosen her result based on her knowledge of this LHS.
Conversely, if the correlations cannot be so explained, then the state must
be entangled. Alice will be successful in her task of steering if she can
create genuinely different ensembles for Bob by steering Bob's state. It may
be noted that a similar formulation of Bell nonlocality as an information
theoretic task is also possible \cite{wiseman1}, where the correlations
between Alice and Bob may be described in terms of a local hidden variable
model.
In the above situation, an EPR-steering inequality \cite{caval} may be
derived involving an experimental situation for qubits with $n$
measurement settings
for each side. Bob's $k$-th measurement
setting is taken to correspond with the observable $\hat{\sigma}_k$,
and Alice's declared result is denoted by the random variable
$A_k \in \{-1,1\}$. Violation of the inequality
\begin{eqnarray}
\frac{1}{n}\sum_{k=1}^n\langle A_k\hat{\sigma}_k\rangle \le C_n
\label{steer1}
\end{eqnarray}
reveals the occurrence of steering, where $C_n \equiv \max_{\{A_k\}}\lambda_{\mathrm{max}}\bigl(\frac{1}{n}\sum_{k=1}^nA_k\hat{\sigma}_k\bigr)$ is the maximum
value of the l.h.s. of (\ref{steer1}) attainable if Bob has a pre-existing state known
to Alice, with $\lambda_{\mathrm{max}}(\hat{O})$ denoting the largest eigenvalue of
the operator $\hat{O}$. Experimental
demonstration of steering for mixed entangled states \cite{saunders}
that are Bell local has confirmed that steering is a weaker form of
correlations compared to nonlocality.
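The bound $C_n$ in (\ref{steer1}) can be evaluated by brute force over Alice's $2^n$ sign assignments. A minimal sketch (assuming NumPy; the function name is ours) for Pauli measurement settings:

```python
import itertools
import numpy as np

def steering_bound(settings):
    """C_n: maximum over sign assignments {A_k} of the largest
    eigenvalue of (1/n) sum_k A_k sigma_k (the LHS-model bound)."""
    n = len(settings)
    best = -np.inf
    for signs in itertools.product((-1, 1), repeat=n):
        op = sum(s * p for s, p in zip(signs, settings)) / n
        best = max(best, np.linalg.eigvalsh(op)[-1])
    return best

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
# two settings give C_2 = 1/sqrt(2); three settings give C_3 = 1/sqrt(3)
```

These values follow from $\lambda_{\mathrm{max}}(\vec{a}\cdot\vec{\sigma})=|\vec{a}|$ for any real vector $\vec{a}$.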
For the case of continuous variable systems, the Reid criterion is an
indicator for steering, as discussed above. However, there exist several
pure entangled continuous variable states which do not reveal steering
through the Reid criterion. An example of such a state is provided in
Ref.\cite{walborn}, which we also discuss briefly below. Since entanglement
is a weaker form of correlations
compared to steering \cite{wiseman1,wiseman2}, it is clear that for such
states the steering correlations do not appear up to second order (variances)
that may be checked by the Reid criterion. The Reid criterion itself is
derived using the Heisenberg uncertainty relation involving product of
variances of non-commuting observables. On the other hand, a more general
form of the uncertainty relation containing correlations in all orders
of, for example, the position and momentum distribution of a quantum system is
provided by the entropic uncertainty relation \cite{bialynicki} given by
\begin{eqnarray}
h_Q(X)+h_Q(P)\geq \ln \pi e.
\label{entropy_uncertainty}
\end{eqnarray}
Using the entropic uncertainty relation, Walborn et al. \cite{walborn} have
derived an entropic steering inequality. They considered a joint probability
distribution of two parties corresponding to a non-steerable state for
which there exists a local hidden state (LHS) description, given by
\begin{eqnarray}
\mathcal{P}(r_A,r_B)=\sum_\lambda \mathcal{P}(\lambda)\mathcal{P}(r_A|\lambda)\mathcal{P}_Q(r_B|\lambda),
\label{steer2}
\end{eqnarray}
where, $ r_A $ and $ r_B $ are the outcomes of measurements $ R_A $ and $ R_B $ respectively; $ \lambda $ are hidden variables that specify an ensemble of
states; $ \mathcal{P} $ are general probability distributions; and $ \mathcal{P}_Q $ are probability distributions corresponding to the quantum state specified by $ \lambda $. Now, using a rule for conditional probabilities
$P(a,b|c) = P(b|c)P(a|b)$ which holds when $\{b\} \in \{c\}$, i.e., there
exists a local hidden state of Bob predetermined by Alice, it follows that
the conditional probability $\mathcal{P}(r_B| r_A)$ is given by
\begin{eqnarray}
\mathcal{P}(r_B|r_A)=\sum_\lambda \mathcal{P}(r_B,\lambda|r_A)
\label{steer3}
\end{eqnarray}
with $P(r_B,\lambda | r_A) = P(\lambda |r_A)P_Q(r_B|\lambda)$. Note that
(\ref{steer2}) and (\ref{steer3}) are equivalent conditions for
non-steerability. Next, considering the relative entropy (defined for two
distributions $p(X)$ and $q(X)$ as $\mathcal{H}(p(X)||q(X))= \sum_xp_x\ln(p_x/q_x)$) between the
probability distributions $ \mathcal{P}(r_B,\lambda|r_A) $ and $ \mathcal{P}(\lambda|r_A)\mathcal{P}(r_B|r_A) $ , it follows from the positivity of
relative entropy that
\begin{eqnarray}
\sum_\lambda \int dr_B \mathcal{P}(r_B,\lambda|r_A) \ln \frac{\mathcal{P}(r_B,\lambda|r_A)}{\mathcal{P}(\lambda|r_A)\mathcal{P}(r_B|r_A)}\geq 0
\end{eqnarray}
Using the non-steering condition (\ref{steer3}), the definition of the
conditional entropy ($h(X|Y) = -\sum_{x,y} p(x,y)\ln p(x|y)$), and averaging over all measurement
outcomes $r_A$, it
follows that the conditional entropy $h(R_B|R_A)$ satisfies
\begin{eqnarray}
h(R_B|R_A) \ge \sum_{\lambda} \mathcal{P}(\lambda) h_Q(R_B|\lambda)
\label{cond1}
\end{eqnarray}
Considering a pair of variables $S_A,S_B$ conjugate to $R_A,R_B$, a similar
bound on the conditional entropy may be written as
\begin{eqnarray}
h(S_B|S_A) \ge \sum_{\lambda} \mathcal{P}(\lambda) h_Q(S_B|\lambda)
\label{cond2}
\end{eqnarray}
For the LHS model for Bob, note that the entropic uncertainty relation
(\ref{entropy_uncertainty}) holds for each state marked by $\lambda$.
Averaging over all hidden variables, it follows that
\begin{eqnarray}
\sum_{\lambda} \mathcal{P}(\lambda)\biggl(h_Q(R_B|\lambda)
+ h_Q(S_B|\lambda)\biggr) \ge \ln \pi e
\label{cond3}
\end{eqnarray}
Now, using the bounds (\ref{cond1}) and (\ref{cond2}) in the relation
(\ref{cond3}) one gets the entropic steering inequality given by
\begin{eqnarray}
h(R_B|R_A)+h(S_B|S_A)\geq \ln \pi e.
\label{entropy_steering}
\end{eqnarray}
Walborn et al. \cite{walborn} presented an example of the state given by
(up to a suitable normalization)
\begin{eqnarray}
\phi_n(x_A,x_B) = \mathcal{H}_n(\frac{x_A+x_B}{\sqrt{2}\sigma_{+}})e^{-\frac{(x_A+x_B)^2}{4\sigma_{+}^2}}e^{-\frac{(x_A-x_B)^2}{4\sigma_{-}^2}}
\label{walex}
\end{eqnarray}
where $\mathcal{H}_n$ is the $n$-th order Hermite polynomial, which does not
reveal steering using the Reid criterion when $\sigma_{\pm}/\sigma_{\mp} < 1 + 1.5\sqrt{n}$, whereas the entropic steering criterion (\ref{entropy_steering})
is able to show steering except when the state is separable, i.e., for
$n=0$, and $\sigma_{+} = \sigma_{-}$.
Using the relation between information entropy and variance, it was further
shown by Walborn et al. \cite{walborn} that the Reid criterion follows
in the limiting case of the entropic steering relation (\ref{entropy_steering}).
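The limiting argument may be sketched as follows (a standard estimate). Since the differential entropy of any distribution with variance $\sigma^2$ is at most $\frac{1}{2}\ln(2\pi e \sigma^2)$, one has $h(R_B|R_A)\le \frac{1}{2}\ln[2\pi e\, (\Delta_{\inf}R_B)^2]$ and similarly for $S_B$, so that (\ref{entropy_steering}) implies

```latex
\begin{eqnarray}
\frac{1}{2}\ln\left[(2\pi e)^2\, (\Delta_{\inf}R_B)^2(\Delta_{\inf}S_B)^2\right]
\geq \ln \pi e, \nonumber
\end{eqnarray}
```

that is, $(\Delta_{\inf}R_B)^2(\Delta_{\inf}S_B)^2 \geq 1/4$. A violation of the Reid condition (\ref{P_Uncer}) therefore implies a violation of the entropic steering inequality (\ref{entropy_steering}), but not conversely.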
In the following section we will use the entropic steering inequality
for demonstrating steering by several continuous variable states.
\section{Steering and nonlocality for non-Gaussian states}
In this section we study steering and nonlocality by several non-Gaussian
states. Considering first entangled states constructed using the
eigenstates of the two-dimensional harmonic oscillator, we study the steering
and nonlocal properties of LG beams. We show that the Reid criterion is
unable to reveal the steerability of LG modes. The entropic steering
inequality shows that the strength of steering increases with angular
momentum of the LG beams. We then discuss non-Gaussian states
obtained by subtracting single and two photons from two-mode squeezed
vacuum states. We show that the violation of Bell's inequality for
such states behaves differently with the increase of the squeezing parameter
compared to the strength of steering. Finally, we investigate the nonlocal
and steering properties of another class of non-Gaussian states, {\it viz.},
N00N states.
\subsection{Non-Gaussian entangled states of a two dimensional harmonic oscillator}
The importance of the two-dimensional harmonic oscillator cannot be
overemphasized in the context of quantum mechanics. The historical development
of radiation theory started with the correspondence with the two modes of
the radiation field. The classic problem of the charged particle in
the electromagnetic field leading to the existence of Landau levels
was developed using the same machinery. The energy eigenfunctions of
the two-dimensional harmonic oscillator may be expressed in terms of
Hermite-Gaussian (HG) functions given by
\begin{eqnarray}
u_{nm}(x,y) &&= \sqrt{\frac{2}{\pi}} \left(\frac{1}{2^{n+m} w^2 n!m!}\right)^{1/2} \nonumber\\
&& \times H_n \left(\frac{\sqrt{2}x}{w}\right) H_m \left(\frac{\sqrt{2}y}{w}\right) e^{-\frac{(x^2+y^2)}{w^2}},
\nonumber \\
\int |u_{nm}(x,y)|^2 dx dy &&=1
\label{hermite}
\end{eqnarray}
Entangled states may be constructed from superpositions of
HG wave functions \cite{danakas}
\begin{eqnarray}
\Phi_{nm}(\rho,\theta) = \sum_{k=0}^{n+m} u_{n+m-k,k}(x,y)\frac{f_k^{(n,m)}}{k!}(\sqrt{-1})^k \nonumber \\
\times \sqrt{\frac{k! (n+m-k)!}{n! m! 2^{n+m}}}
\label{legherm}
\end{eqnarray}
\begin{eqnarray}
f_k^{(n,m)} = \frac{d^k}{dt^k} ((1-t)^n(1+t)^m)|_{t=0},
\label{def11}
\end{eqnarray}
where $\Phi_{nm}(\rho,\theta)$ are the well-known Laguerre-Gaussian functions
that are physically realizable field configurations \cite{review,fickler}
with interesting topological \cite{berry} and coherence \cite{simon,banerji}
properties, given by
\cite{book2}
\begin{eqnarray}
\Phi_{nm}(\rho,\theta) = e^{i(n-m)\theta}e^{-\rho^2/w^2}(-1)^{\mathrm{min}(n,m)}
\left(\frac{\rho \sqrt{2}}{w}\right)^{|n-m|}
\label{waveLG} \\
\times \sqrt{\frac{2}{\pi n! m ! w^2}}
L^{|n-m|}_{\mathrm{min}(n,m)} \left(\frac{2\rho^2}{w^2}\right) (\mathrm{min}(n,m)) ! \nonumber
\end{eqnarray}
with
$\int |\Phi_{nm}(\rho,\theta)|^2 dx dy =1$,
where $w$ is the beam waist, and $L_p^l(x)$ is the generalized Laguerre polynomial.
The superposition (\ref{legherm}) has the form of a Schmidt decomposition,
signifying the entanglement of the LG wave functions.
In the special case of the lowest-order modes ($n+m=1$), one has
\begin{eqnarray}
\Phi_{10} = \frac{2}{\sqrt{\pi} w^2} (x + iy)e^{-(x^2+y^2)/w^2} \nonumber\\
\Phi_{01} = \frac{2}{\sqrt{\pi} w^2} (x - iy)e^{-(x^2+y^2)/w^2}
\label{zerolg}
\end{eqnarray}
In the following analysis we will study the quantum correlations present
in the LG wave functions, first for the purpose of demonstrating steering.
We will next compare the strength of steering with the degree of nonlocality
of such modes through the violation of Bell's inequality \cite{bell}.
It is henceforth convenient to work with the pair of dimensionless quadratures
$\{X,~P_X\}$ and $\{Y,~P_Y\}$, given by
\begin{eqnarray}
x (y) \rightarrow \frac{w}{\sqrt{2}} ~~X (Y),
\hspace{0.5cm}
p_x (p_y) \rightarrow \frac{\sqrt{2} \hbar}{w} ~~P_X (P_Y),
\label{DL_T}
\end{eqnarray}
The canonical commutation relations are $[\hat{X},\hat{P}_X]=i$ and $[\hat{Y},\hat{P}_Y]=i$, and the operators $\hat{P}_X$ and $\hat{P}_Y$ are given by
$\hat{P}_X = - i \frac{\partial}{\partial X}$ and $\hat{P}_Y = - i \frac{\partial}{\partial Y}$, respectively. The Wigner function corresponding to the LG wave function
in terms of the scaled variables is given by
\begin{eqnarray}
W_{nm}(X,P_X;Y,P_Y)&=&\frac{(-1)^{n+m}}{\pi^{2}} L_{n}[4(Q_0+Q_2)]
\label{WF_LG_n1_n2} \\
&& L_{m}[4(Q_0-Q_2)]~\exp(-4Q_0) \nonumber
\end{eqnarray}
where
\begin{eqnarray}
Q_0 & = & \frac{1}{4}\left[ X^2 + Y^2 + P_X^2+P_Y^2\right],\\
Q_2 & = & \frac{XP_Y-YP_X}{2}.
\label{wig}
\end{eqnarray}
Let us now check how the Reid criterion applies to the case of LG wave functions.
In order to do so we estimate the product of uncertainties of the values
of inferred observables $(\Delta_{\inf} X_{\theta_1})^2(\Delta_{\inf} X_{\theta_2})^2$.
This is performed by maximizing the correlation function
$C_{\theta_1,\phi_1}$ ($C_{\theta_2,\phi_2}$). Using Eqs.~(\ref{Erro}) and (\ref{g's})
it follows that \begin{eqnarray}
(\Delta_{\inf} X_{\theta})^2 = \langle X_\theta^2 \rangle \left(1 - (C^{\max}_{\theta,\phi})^2\right)
\label{X_Unc}
\end{eqnarray}
The maximum correlation strength $|C^{\max}_{\theta,\phi}| = \frac{1}{2}$ occurs for $\phi-\theta=\frac{k \pi}{2}$, where $k$ is an odd integer. For arbitrary values of $n,m$ it can be shown that the
expression of the maximum correlation function is given by
\begin{eqnarray}
C^{\max}_{0,\pi/2} = \frac{\langle XP_Y \rangle}{\sqrt{\langle X^2 \rangle \langle P^2_Y\rangle}},~~~ C^{\max}_{\pi/2,\pi} =
- \frac{\langle P_X Y \rangle}{\sqrt{\langle P_X^2 \rangle \langle Y^2\rangle}}
\label{maxcor}
\end{eqnarray}
In Fig.~\ref{LG_Steer_Reid} we plot the product of uncertainties
$(\Delta_{\inf} X_{\theta_1})^2(\Delta_{\inf} X_{\theta_2})^2$ versus the
angular momentum $n$. It is seen that the Reid criterion given by
Eq.(\ref{P_Uncer}) is not satisfied, since
$(\Delta_{\inf} X_{\theta_1})^2(\Delta_{\inf} X_{\theta_2})^2 \ge 1/4$ for any value of $n$. Hence, it
is not possible to demonstrate steering by entangled LG modes using the
Reid criterion.
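This conclusion can be verified numerically. The following sketch (ours) integrates the $n=1$, $m=0$ Wigner function (\ref{WF_LG_n1_n2}) on a grid, reproduces $|C^{\max}| = 1/2$, and gives the inferred-variance product $(3/4)^2 = 0.5625 \ge 1/4$, consistent with the $n=1$ entry $4 \times 0.5625 = 2.25$ in the comparison table below.

```python
import numpy as np

# grid over the scaled quadratures (X, Y, P_X, P_Y)
g = np.linspace(-5.0, 5.0, 41)
d = g[1] - g[0]
X, Y, PX, PY = np.meshgrid(g, g, g, g, indexing="ij")

# Wigner function W_10 of the n=1, m=0 LG mode, Eq. (WF_LG_n1_n2)
W10 = ((PX - Y) ** 2 + (PY + X) ** 2 - 1.0) \
    * np.exp(-(X ** 2 + Y ** 2 + PX ** 2 + PY ** 2)) / np.pi ** 2

def avg(f):
    # phase-space average of f against W10
    return float(np.sum(f * W10) * d ** 4)

norm = avg(np.ones_like(W10))            # normalization, -> 1
x2, py2 = avg(X ** 2), avg(PY ** 2)      # second moments, -> 1
px2, y2 = avg(PX ** 2), avg(Y ** 2)
c1 = avg(X * PY) / np.sqrt(x2 * py2)     # C^max_{0,pi/2},  -> 1/2
c2 = -avg(PX * Y) / np.sqrt(px2 * y2)    # C^max_{pi/2,pi}, -> 1/2
# product of inferred variances, Eq. (X_Unc): (3/4)^2 = 0.5625 >= 1/4
prod = x2 * (1.0 - c1 ** 2) * px2 * (1.0 - c2 ** 2)
```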
\begin{figure}
\caption{\footnotesize (Color online)
The product of uncertainties $(\Delta_{\inf} X_{\theta_1})^2(\Delta_{\inf} X_{\theta_2})^2$ versus the angular momentum $n$.}
\label{LG_Steer_Reid}
\end{figure}
We now apply the entropic steering criterion to the case of the LG wave functions.
In the entropic steering inequality given by Eq.(\ref{entropy_steering})
the observables have to be chosen such that there exist correlations between
$R_A$ and $R_B$ ($S_A$ and $S_B$). For the case of the LG wave functions, we use
the nonvanishing $\langle X P_Y\rangle$ correlations, as evident from
the Wigner function (\ref{wig}). Thus, in terms of the conjugate pairs of dimensionless quadratures, (\ref{entropy_steering}) becomes
\begin{eqnarray}
h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X|}\mathcal{Y})\geq \ln \pi e,
\label{LG_entropy_steering}
\end{eqnarray}
where $ X,~Y,~P_X $ and $ P_Y $ are the outcomes of measurements $ \mathcal{X},~\mathcal{Y},~\mathcal{P_X} $ and $ \mathcal{P_Y} $ respectively.
Here, the conditional entropies $ h(\mathcal{X}|\mathcal{P_Y}) $ and $ h(\mathcal{P_X|}\mathcal{Y}) $ are given by
\begin{eqnarray}
h(\mathcal{X}|\mathcal{P_Y})&=& h(\mathcal{X},\mathcal{P_Y})-h(\mathcal{P_Y}),\nonumber\\
h(\mathcal{P_X|}\mathcal{Y})&=&h(\mathcal{P_X},\mathcal{Y})-h(\mathcal{Y}),
\end{eqnarray}
with
$h(\mathcal{X},\mathcal{P_Y})=-\int_{-\infty}^{\infty} \mathcal{P}(X,P_Y) \ln \mathcal{P}(X,P_Y)~ dX dP_Y$,
$h(\mathcal{P_Y})=-\int_{-\infty}^{\infty} \mathcal{P}(P_Y) \ln \mathcal{P}(P_Y) ~ dP_Y$, and similarly for
$ h(\mathcal{P_X},\mathcal{Y}) $ and $ h(\mathcal{Y}) $. The marginal probability
distributions are obtained using the Wigner function (\ref{wig}) for the
LG wave function.
For $ n=0 $ and $ m=0 $, the LG wave function factorizes into a product
state with the corresponding Wigner function given by
\begin{eqnarray}
W_{00}(X,P_X;Y,P_Y)=\frac{e^{-X^2-Y^2-P_X^2-P_Y^2}}{\pi^2}.
\end{eqnarray}
In this case the relevant entropies turn out to be
$h(\mathcal{X},\mathcal{P_Y})=h(\mathcal{P_X},\mathcal{Y})=\ln \pi e$ and
$h(\mathcal{Y})=h(\mathcal{P_Y})=\frac{1}{2}\ln \pi e$, and hence, the
entropic steering inequality becomes saturated, i.e.,
\begin{eqnarray}
h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X|}\mathcal{Y}) = \ln \pi e.
\end{eqnarray}
For $ n=1 $ and $ m=0 $, the Wigner function has the form
\begin{eqnarray}
W_{10}(X,P_X;Y,P_Y)
&=& e^{-X^2-Y^2-P_X^2-P_Y^2} \\
&& \times \frac{(P_X - Y)^2 +(P_Y+X)^2 -1}{\pi^2} \nonumber
\end{eqnarray}
and the relevant entropies are given by
$h(\mathcal{X},\mathcal{P_Y})=h(\mathcal{P_X},\mathcal{Y}) \approx 2.41509$,
and $h(\mathcal{Y})=h(\mathcal{P_Y}) \approx1.38774$. Hence, the entropic
steering relation in this case becomes
\begin{eqnarray}
h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X|}\mathcal{Y}) \approx 2.05471 < \ln \pi e
\end{eqnarray}
We thus see that steering is demonstrated. Note the non-Gaussian nature of
the Wigner function for $n \ge 1$ which enables demonstration of steering
through the entropic criterion. For higher values of angular
momentum, we plot the l.h.s. of the entropic steering relation in
Fig.~\ref{LG_Steer_Fig}. We see that violation of the inequality
becomes stronger for higher values of $n$.
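The entropy values quoted above for $n=1$ can be reproduced from the Wigner function itself; the following numerical sketch (ours, on a modest grid, so agreement is only to a few decimal places) computes the marginals and the conditional-entropy sum.

```python
import numpy as np

g = np.linspace(-5.0, 5.0, 51)
d = g[1] - g[0]
X = g[:, None, None, None]
Y = g[None, :, None, None]
PX = g[None, None, :, None]
PY = g[None, None, None, :]

# Wigner function of the n=1, m=0 LG mode, broadcast over a 4D grid
W10 = ((PX - Y) ** 2 + (PY + X) ** 2 - 1.0) \
    * np.exp(-(X ** 2 + Y ** 2 + PX ** 2 + PY ** 2)) / np.pi ** 2

def entropy(p, vol):
    # differential entropy of a discretized probability density
    p = p[p > 1e-300]
    return float(-np.sum(p * np.log(p)) * vol)

P_x_py = W10.sum(axis=(1, 2)) * d ** 2     # marginal P(X, P_Y)
P_px_y = W10.sum(axis=(0, 3)) * d ** 2     # marginal P(Y, P_X)
h_x_py = entropy(P_x_py, d ** 2)           # -> approx 2.41509
h_px_y = entropy(P_px_y, d ** 2)
h_py = entropy(P_x_py.sum(axis=0) * d, d)  # -> approx 1.38774
h_y = entropy(P_px_y.sum(axis=1) * d, d)
# h(X|P_Y) + h(P_X|Y) -> approx 2.05471 < ln(pi e) = 2.1447
lhs = (h_x_py - h_py) + (h_px_y - h_y)
```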
\begin{figure}
\caption{\footnotesize (Color online) The violation of the entropic steering inequality (\ref{LG_entropy_steering}) becomes stronger for higher values of the angular momentum $n$.}
\label{LG_Steer_Fig}
\end{figure}
Now, for making a comparison between the strength of steering and the
degree of nonlocality, we next study Bell violation by the LG wave function.
In order to do so, we consider the Wigner transform $\Pi_{nm}(X,P_X;Y,P_Y)$ ($=(\pi)^2 \; W_{nm}(X,P_X;Y,P_Y)$, where $W_{nm}(X,P_X;Y,P_Y)$ is given by Eq.(\ref{WF_LG_n1_n2})) \cite{zhang2}. The Bell-CHSH inequality using Wigner transform is given by \cite{paris}
\begin{eqnarray}
|BI| &=& |\Pi_{n,m}(X_1,P_{X_1};Y_1,P_{Y_1})\nonumber \\
&& +\Pi_{n,m}(X_2,P_{X_2};Y_1,P_{Y_1}) \nonumber \\
&& +\Pi_{n,m}(X_1,P_{X_1};Y_2,P_{Y_2})\nonumber \\
&& -\Pi_{n,m}(X_2,P_{X_2};Y_2,P_{Y_2})| \leq 2,
\label{BInequality}
\end{eqnarray}
In the following table, we compare Bell violation with entropic EPR steering for different values of $n$ with $m=0$. \\
\begin{tabular}{|c|c|c|c|}
\hline
& & & \\
$n$ & $\frac{|BI_{\max}|}{2}$ & $\frac{(\ln\pi e)}{h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X|}\mathcal{Y})} $ & $4 ~ (\Delta_{\inf} \hat{X}_{\theta 1})^2 (\Delta_{\inf} \hat{X}_{\theta 2})^2$ \\
& & & \\
\hline
0 & 1 & 1 & 1 \\
\hline
1 & 1.11934 & 1.04381 & 2.25 \\
\hline
2 & 1.17437 & 1.0567 & 2.77778 \\
\hline
3 & 1.20128 & 1.06256 & 3.0625 \\
\hline
4 & 1.21738 & 1.06572 & 3.24 \\
\hline
5 & 1.22813 & 1.06758 & 3.36111 \\
\hline
6 & 1.23584 & 1.0687 & 3.44898 \\
\hline
7 & 1.24165 & 1.06939 & 3.51563 \\
\hline
8 & 1.24618 & 1.0698 & 3.5679 \\
\hline
9 & 1.24982 & 1.07002 & 3.61 \\
\hline
10 & 1.25281 & 1.07011 & 3.64463 \\
\hline
\end{tabular} \\
Note here that $\frac{|BI_{\max}|}{2}>1$ signifies Bell violation, and
$\frac{(\ln\pi e)}{h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X|}\mathcal{Y})} >1$ signifies steering by the entropic steering inequality. On the other hand
the last column provides values of the products of inferred variances,
showing that the Reid criterion is unable to identify steering for any
value of $n$ in this case.
\subsection{Photon subtracted squeezed vacuum}
Let us now consider non-Gaussian states derived from Gaussian states by
the subtraction of photons. Consider the two mode squeezed vacuum state given by
Eq.(\ref{NOPA}). The Wigner function associated with the state (\ref{NOPA}) is given by \cite{book}
\begin{eqnarray}
W_{|\xi\rangle}(\alpha,\beta)&&=\frac{4}{\pi^2} \exp[-2|\alpha \cosh (r) -\beta^{\ast} \sinh(r) \exp[i \phi] |^2 \nonumber \\
&& -2|- \alpha^{\ast} \sinh(r) \exp[i \phi] +\beta \cosh(r) |^2],
\label{WF}
\end{eqnarray}
where $\alpha$ and $\beta$ represent complex phase space displacements, $\int \int W_{|\xi\rangle}(\alpha,\beta) ~d^2\alpha ~d^2\beta=1$, and $\{x,k_x\}$, $\{y,k_y\}$ are the conjugate quadrature observables of the two modes. In terms of the
variables $ X, P_X, Y $ and $ P_Y $, the Wigner function (with the replacements
$\alpha = \frac{X + i P_X}{\sqrt{2}}$, $\beta = \frac{Y + i P_Y}{\sqrt{2}}$, and $\phi=0$) becomes
\begin{eqnarray}
W_{\xi}(X,P_X;Y,P_Y) &=& \frac{1}{\pi^2} \exp[ -2(P_X P_Y-X Y)\sinh 2r \nonumber \\
&& -(X^2+Y^2+P_X^2 \nonumber \\
&&+P_Y^2)\cosh 2r].
\end{eqnarray}
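As a consistency check (ours, not part of the original derivation), note that the exponent couples the pairs $(X,Y)$ and $(P_X,P_Y)$ separately, so each block is a bivariate Gaussian with $\langle X^2\rangle = \cosh(2r)/2$ and $\langle XY\rangle = \sinh(2r)/2$. These moments can be verified numerically:

```python
import numpy as np

r = 0.3
g = np.linspace(-6.0, 6.0, 241)
d = g[1] - g[0]
X, Y = np.meshgrid(g, g, indexing="ij")

# (X, Y) block of the scaled TMSV Wigner function; the (P_X, P_Y)
# block is identical with sinh(2r) -> -sinh(2r)
w = np.exp(2.0 * np.sinh(2 * r) * X * Y
           - np.cosh(2 * r) * (X ** 2 + Y ** 2)) / np.pi

norm = np.sum(w) * d * d          # -> 1
x2 = np.sum(X ** 2 * w) * d * d   # -> cosh(2r)/2
xy = np.sum(X * Y * w) * d * d    # -> sinh(2r)/2
```

The nonvanishing $\langle XY\rangle$ (and, with opposite sign, $\langle P_X P_Y\rangle$) correlations are exactly the ones exploited in the entropic steering analysis below.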
Bell violation by the NOPA state has been
studied earlier \cite{PRA'98}. In terms of the Wigner transform $\Pi[\alpha,\beta]$ ($=\frac{\pi^2}{4} W_{|\xi\rangle}[\alpha,\beta]$) the Bell sum is given by \cite{PRA'98}
\begin{eqnarray}
BI&=&\Pi[\alpha=0,\beta=0]+\Pi[\alpha=\sqrt{J},\beta=0]\nonumber \\
&&+\Pi[\alpha=0,\beta=-\sqrt{J}]-\Pi[\alpha=\sqrt{J},\beta=-\sqrt{J}] \nonumber \\
&=& 1+2\exp[-2J \cosh(2r)] \nonumber\\
&& - \exp[-4 J ( \cosh^2(r) + 2 \cos(\phi) \cosh(r) \sinh(r)\nonumber\\
&&+ \sinh^2(r) ) ], \label{BI_squeezed}
\end{eqnarray}
where $J$ represents amount of displacement in the phase space. By choosing $\phi=0$ and considering $r\rightarrow \infty$ \cite{PRA'98}, the above expression becomes
\begin{eqnarray}
BI(J,r)=1-\exp[-4 J e^{2r}]+2\exp[-J e^{2r}]
\label{BI_squeezed_2}
\end{eqnarray}
The maximum value of $BI$ is $2.19055$ \cite{PRA'98} (for the above choice of settings) which occurs for the constraints
\begin{eqnarray}
J \exp[2 r]=\frac{1}{3} \ln 2,
\label{Res}
\end{eqnarray}
where $J \ll 1$.
For example, $BI_{\max}$ ($=2.19055$) occurs for the choice of parameters $J=0.00009467$ and $r=3.9$.
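These numbers follow directly from the limiting form (\ref{BI_squeezed_2}), which depends on $J$ and $r$ only through $y = J e^{2r}$; a quick numerical scan (ours) recovers both the optimum $y = \frac{1}{3}\ln 2$ and the maximum $BI \approx 2.19055$:

```python
import numpy as np

def bell_sum(y):
    # Eq. (BI_squeezed_2) written in terms of y = J * exp(2r)
    return 1.0 - np.exp(-4.0 * y) + 2.0 * np.exp(-y)

y = np.linspace(1e-4, 1.0, 200001)
vals = bell_sum(y)
bi_max = float(vals.max())        # -> approx 2.19055
y_opt = float(y[vals.argmax()])   # -> ln(2)/3 approx 0.23105
```

The quoted parameters satisfy the same constraint: $J e^{2r} = 0.00009467 \times e^{7.8} \approx 0.2311 \approx \frac{1}{3}\ln 2$.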
However, a more general choice of settings \cite{PRA'03,PRA'04}
\begin{eqnarray}
BI&=&\Pi[\alpha_1,\beta_1]+\Pi[\alpha_1,\beta_2]+\Pi[\alpha_2,\beta_1]\nonumber \\
&& -\Pi[\alpha_2,\beta_2],
\label{BI_squeezed_M}
\end{eqnarray}
leads to the maximum Bell violation $BI_{\max}=2.32449$ for the choice of
parameters $\alpha_1=0.0036990,~\alpha_2=-0.0115244,~\beta_1=-0.0039127,~\beta_2=0.0113108,~r = 3.8853675$.
The subtraction of $n$ photons from the state $|\xi\rangle$ (\ref{NOPA}) may
be represented as
\begin{eqnarray}
|\xi_{n}\rangle= (a\otimes I + (-1)^k I\otimes b)^n ~ |\xi\rangle,
\label{xi-}
\end{eqnarray}
where $k\in\{0,1\}$, and it is assumed that one does not know from which
mode the photon is subtracted. After normalization the state becomes $\sqrt{N_n} |\xi_{n}\rangle$, where the normalization constant $N_n$ is given by
$(N_n)^{-1} =\langle\xi_{n}|\xi_{n}\rangle$.
The Wigner function of the state $ |\xi_{n}\rangle $ is related to the Wigner
function of the state $ |\xi_{(n-1)}\rangle $ by
\begin{eqnarray}
W_{n}(\alpha,\beta) = \hat{\Lambda}(\alpha,\beta) ~ W_{(n-1)}(\alpha,\beta),
\end{eqnarray}
where the operator $\hat{\Lambda}(\alpha,\beta)$ is given by
\begin{eqnarray}
\hat{\Lambda}(\alpha,\beta) = && [\left(\alpha^\ast + \frac{1}{2} \frac{\partial}{\partial \alpha}\right) \left(\alpha + \frac{1}{2} \frac{\partial}{\partial \alpha^\ast}\right) \nonumber \\
&&+ \left(\alpha^\ast + \frac{1}{2} \frac{\partial}{\partial \alpha}\right) \left(\beta + \frac{1}{2} \frac{\partial}{\partial \beta^\ast}\right) \nonumber \\
&& + \left(\alpha + \frac{1}{2} \frac{\partial}{\partial \alpha^\ast}\right) \left(\beta^\ast + \frac{1}{2} \frac{\partial}{\partial \beta}\right) \nonumber \\
&&+ \left(\beta^\ast + \frac{1}{2} \frac{\partial}{\partial \beta}\right) \left(\beta + \frac{1}{2} \frac{\partial}{\partial \beta^\ast}\right)].
\end{eqnarray}
The Wigner function $W_{n}(\alpha,\beta)$ is obtained from $W(\alpha,\beta)$ given by Eq.(\ref{WF}) by applying $\hat{\Lambda}(\alpha,\beta)$ $n$ times, i.e.,
$W_{n}(\alpha,\beta) = \hat{\Lambda}^n(\alpha,\beta) ~ W(\alpha,\beta)$, and normalizing suitably ($\int W_{n}(\alpha,\beta)~ \mathrm{d}^2\alpha ~ \mathrm{d}^2\beta =1$).
In terms of $X$, $P_X$, $Y$ and $P_Y$, the Wigner function for the single
photon subtracted squeezed vacuum state becomes
\begin{eqnarray}
W_{1}(X,Y,P_X,P_Y) &=& \frac{1}{\pi ^2} \exp (2 \sinh (2 r) (X Y-P_X P_Y)\nonumber \\
&&- \cosh (2 r) (X^2+Y^2+P_X^2+P_Y^2) ) \nonumber \\
&& (- \sinh (2 r) (P_X^2-2 P_X P_Y+P_Y^2 \nonumber \\
&&-(X-Y)^2 )+ \cosh (2 r) (P_X^2
\label{wigsing} \\
&& -2 P_X P_Y+P_Y^2+(X-Y)^2 )-1 ) \nonumber
\end{eqnarray}
To evaluate the Bell violation, we use the Wigner transform $\Pi_{n}(\alpha,\beta)~(= \frac{\pi^2}{4} W_{n}(\alpha,\beta))$. The Bell sum using the above Wigner transform may be expressed as
\begin{eqnarray}
BI_{n} &=& \Pi_{n}(\alpha_1,\beta_1) + \Pi_{n}(\alpha_1,\beta_2) \nonumber \\
&&+\Pi_{n}(\alpha_2,\beta_1) - \Pi_{n}(\alpha_2,\beta_2)
\label{BI_n-}
\end{eqnarray}
Now, to obtain the maximum Bell violation, one maximizes $BI_{n}$ over $\alpha_1, ~ \alpha_2,~\beta_1,~\beta_2,~r$ for a given value of $n$.
Considering single photon reduction from each mode, i.e., $a\otimes I + (-1)^k I\otimes b$, the state (\ref{xi-}) becomes
\begin{eqnarray}
|\xi_{1}\rangle &=& \sqrt{1-\lambda^2} \sum_{n=1}^{\infty} \lambda^n \sqrt{n} [|n-1,n\rangle \nonumber \\
&& + (-1)^k |n,n-1\rangle ]
\label{SPSub}
\end{eqnarray}
with normalization constant $N_1 = \frac{1}{2\sinh^2(r)}$. The Wigner transform for the above state is given by
\begin{eqnarray}
\Pi_{1}(\alpha,\beta) = && \exp[2 (\alpha \beta +\alpha^\ast \beta^\ast) \sinh(2r) - 2 (|\alpha|^2 \nonumber \\
&& + |\beta|^2) \cosh(2r)]~(- (2 \alpha \beta + 2 \alpha^\ast\beta^\ast \nonumber \\
&& + (-1)^k \alpha^2 +(-1)^k (\alpha^\ast)^2 +(-1)^k (\beta^2 \nonumber \\
&& +(\beta^\ast)^2 )) ~ \sinh(2r) +2 (\alpha (\alpha^\ast + (-1)^k \beta^\ast) \nonumber \\
&& + \beta (\beta^\ast + (-1)^k \alpha^\ast ) ) \cosh(2r)-1)
\end{eqnarray}
The maximum Bell violation, i.e., $(BI_{1})_{\max} =-2.5444$ occurs for the choices $\alpha_1=- 0.0067$, $\alpha_2=0.0201$, $\beta_1=0.0067$, $\beta_2=- 0.0201$, $r=3.0$ and $k=1$. Now, comparing with the two-mode
squeezed state where the Bell violation is $-2.3245$ \cite{paris}, it is seen that by
photon annihilation, the maximum Bell violation increases.
For the case of two photon subtraction from each mode ($(a\otimes I + (-1)^k I\otimes b)^2$), we can similarly obtain the maximum Bell violation which turns
out to be $(BI_{2})_{\max} =2.6305$ for the choices $\alpha_1=-0.1338$, $\alpha_2=-0.1392$, $\beta_1=-0.1365$, $\beta_2=-0.1311$, $r=4.4015$ and $k=1$. We thus see that the maximum Bell violation increases further.
We have seen in the last section that the Reid criterion is able to bring out
the steering property of the two-mode squeezed vacuum state.
Let us now see whether it is possible to demonstrate steering for single
photon annihilated state (\ref{SPSub}) using the Reid criterion.
The uncertainty for the inferred observables is in this case given by
\begin{eqnarray}
(\Delta_{\inf} X_{\theta})^2 &=& \cosh(2 r) -\sinh(r) \cosh(r) \cos(2\theta ) \\
&& -\frac{(\cosh(2 r) \cos(\theta -\phi )-2 \sinh(2 r) \cos(\theta +\phi ))^2}{4 (\cosh(2 r)-\sinh(r) \cosh(r) \cos(2 \phi ))}.\nonumber
\end{eqnarray}
Calculating the minimum value of $(\Delta_{\inf} X_{\theta})^2$ for two different values of $\theta$ (i.e., $\theta_1=0$ and $\theta_2=\pi/2$), the product of uncertainties turns out to be
\begin{eqnarray}
(\Delta_{\inf} X_{\theta_1})^2 (\Delta_{\inf} X_{\theta_2})^2 = \frac{9}{2 (3 \cosh (4 r)+5)},
\label{SP_NOPA_Reid}
\end{eqnarray}
which goes to $0$ for $r\rightarrow \infty$. In Fig.~\ref{Fig_Reid}(a) we compare the amount of violation of the Reid inequality by the NOPA and
the single photon annihilated NOPA states. We see that the Reid criterion fails
in the latter case for smaller values of the squeezing parameter $r$.
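The crossover is easy to make explicit from Eq. (\ref{SP_NOPA_Reid}): the product equals $9/16 > 1/4$ at $r=0$ and drops below the bound $1/4$ only for $r > \frac{1}{4}\,\mathrm{arccosh}(13/3) \approx 0.536$ (our sketch):

```python
import numpy as np

def reid_product(r):
    # Eq. (SP_NOPA_Reid): inferred-variance product for the single
    # photon subtracted two-mode squeezed vacuum
    return 9.0 / (2.0 * (3.0 * np.cosh(4.0 * r) + 5.0))

# squeezing at which the product crosses the Reid bound 1/4
r_star = np.arccosh(13.0 / 3.0) / 4.0   # approx 0.536
```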
\begin{figure}
\caption{\footnotesize (Color online)
(a) Violation of the Reid inequality by the NOPA state and the single photon annihilated NOPA state versus the squeezing parameter $r$. (b) The l.h.s. of the entropic steering inequality versus $r$ for the same two states.}
\label{Fig_Reid}
\end{figure}
We next demonstrate steering for the photon subtracted squeezed vacuum state
through the entropic steering inequality. Considering measurements
of either position ($r=x$) or momentum ($s=p$), for
the single photon subtracted squeezed vacuum state correlations exist between
$X$ and $Y$, and between $P_X$ and $P_Y$, where $\{X,~P_X\}$ and $\{Y,~P_Y\}$ are conjugate pairs of dimensionless quadratures. In terms of these variables,
the steering inequality (\ref{entropy_steering}) becomes
\begin{eqnarray}
h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X})\geq \ln \pi e,
\label{NOPA_entropy_steering}
\end{eqnarray}
where $ X,~Y,~P_X, $ and $ P_Y $ are the outcomes of measurements $ \mathcal{X},~\mathcal{Y},~\mathcal{P_X}, $ and $ \mathcal{P_Y} $ respectively.
Here, the conditional entropies $ h(\mathcal{Y}|\mathcal{X}) $ and $ h(\mathcal{P_Y}|\mathcal{P_X}) $ are given by
\begin{eqnarray}
h(\mathcal{Y}|\mathcal{X})&=& h(\mathcal{X},\mathcal{Y})-h(\mathcal{X}),\nonumber\\
h(\mathcal{P_Y}|\mathcal{P_X})&=&h(\mathcal{P_X},\mathcal{P_Y})-h(\mathcal{P_X}),
\label{condt_prob}
\end{eqnarray}
and calculated using the marginal probability distributions obtained
from the Wigner function (\ref{wigsing}).
One can thus calculate the L.H.S. of the inequality (\ref{NOPA_entropy_steering})
for the single photon subtracted state for any value of the squeezing
parameter $r$. In Fig.~\ref{Fig_Reid}(b) we plot the l.h.s. of
the entropic steering inequality versus $ r $ for the squeezed vacuum state
as well as the single photon subtracted state. The figure shows that the
violation of the steering inequality increases with $r$ for each of these two states.
In the following table we show the comparison of Bell violation with entropic EPR steering for the NOPA state and the single photon annihilated NOPA state.
Note here that $\frac{|BI_{\max}|}{2} > 1$ signifies Bell violation, and
$\frac{(\ln\pi e)}{h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y}|\mathcal{P_X})} >1$ identifies steering. One sees that though the magnitude of Bell violation reaches a maximum
for a certain value of the squeezing parameter $r$, and subsequently decreases
gradually, the strength of steering increases monotonically
with $r$. Hence, it would be much easier to observe steering compared to
Bell violation for higher values of $r$.
\begin{tabular}{|c|c|c|c|}
\hline
State & $r$ & Bell violation & Entropic EPR steering criterion\\
& & ($=\frac{|BI_{\max}|}{2}$) & $(=\frac{\ln(\pi e)}{h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X})})$\\
\hline
$|\xi\rangle$ & 0 & 1.0 & 1.0 \\
\hline
$|\xi\rangle$ & 0.2 & 1.040 & 1.038 \\
\hline
$|\xi\rangle$ & 0.4 & 1.091 & 1.157 \\
\hline
$|\xi\rangle$ & 0.6 & 1.125 & 1.383 \\
\hline
$|\xi\rangle$ & 0.8 & 1.144 & 1.790 \\
\hline
$|\xi\rangle$ & 1 & 1.153 & 2.616 \\
\hline
$|\xi\rangle$ & 1.2 & 1.159 & 4.991 \\
\hline
$|\xi\rangle$ & 1.4 & 1.160 & 62.737 \\
\hline
\hline
$|\xi_{1}\rangle$ & 0 & 1.120 & 1.044 \\
\hline
$|\xi_{1}\rangle$ & 0.2 & 1.189 & 1.061 \\
\hline
$|\xi_{1}\rangle$ & 0.4 & 1.229 & 1.124 \\
\hline
$|\xi_{1}\rangle$ & 0.6 & 1.252 & 1.264 \\
\hline
$|\xi_{1}\rangle$ & 0.8 & 1.263 & 1.529 \\
\hline
$|\xi_{1}\rangle$ & 1 & 1.267 & 2.027 \\
\hline
$|\xi_{1}\rangle$ & 1.2 & 1.271 & 3.132 \\
\hline
$|\xi_{1}\rangle$ & 1.4 & 1.271 & 7.531 \\
\hline
\end{tabular}
\subsection{N00N state}
The maximally path-entangled number states have the form given by
\begin{eqnarray}
|\psi\rangle = \frac{1}{\sqrt{2}}(|N\rangle_{a}|0\rangle_{b}+e^{i \phi}|0\rangle_{a}|N\rangle_{b}).
\label{N00N state}
\end{eqnarray}
This is an example of a two-mode state in which all $N$ photons are found either in mode $a$ or in mode $b$; such states are referred to as `$N00N$' states
\cite{dow}.
The utility of N00N states in making precise interferometric measurements
makes them important in quantum metrology. Such states have been recently
experimentally realized up to $N=5$ \cite{noonexpt}. The entanglement
of N00N states is obtained in terms of the logarithmic negativity, {\it viz.}
$E_N = 1$ \cite{book}, a value that is independent of $N$.
The Wigner distribution function for the $ N00N $ state is given by \cite{Bell_N00N}
\begin{eqnarray}
W(\alpha,\beta) &=& \frac{2}{\pi^2} e^{-2|\alpha|^2-2|\beta|^2}[(-1)^N(L_N(4|\alpha|^2)+L_N(4|\beta|^2)) \nonumber \\
&& -\frac{2^{2N}}{N!}(\alpha^{*N} \beta^N+\alpha^N \beta^{*N})],
\end{eqnarray}
where for simplicity we choose $\phi=\pi$ and $ L_N(x) $ is the Laguerre polynomial.
In terms of the dimensionless quadratures $ \{X, P_X\} $ and $ \{Y, P_Y\} $
the Wigner function becomes
\begin{eqnarray}
W(X, P_X, Y, P_Y) =&& \frac{1}{2\pi ^2 N!} e^{-(X^2+Y^2+P_X^2+P_Y^2)}\nonumber\\
&& [ -2^N \{(X+i P_X)^N (Y-i P_Y)^N \nonumber \\
&& +(X-i P_X)^N (Y+i P_Y)^N\}+\nonumber\\
&&(-1)^N N! \{L_N\left(2(X^2+P_X^2)\right)\nonumber \\
&& +L_N\left(2(Y^2+P_Y^2)\right)\}] .
\label{Wigner_X_Y}
\end{eqnarray}
The Bell-CHSH inequality
\begin{eqnarray}
|BI| = |\Pi(\alpha, \beta)+\Pi(\alpha^{\prime}, \beta)+\Pi(\alpha, \beta^{\prime})-\Pi(\alpha^{\prime}, \beta^{\prime})| \leq 2
\label{BCHSH}
\end{eqnarray}
is maximally violated with $|BI|_{\max} = 2.2387$ (attained with $BI = -2.2387$), which occurs for $N=1$ and the corresponding settings $\alpha=- \beta=0.0610285$, $\alpha^{\prime}=-\beta^{\prime}=-0.339053$. States with larger $ N $ do not violate the inequality.
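This maximal violation is straightforward to reproduce. For $N=1$ the Wigner transform $\Pi = \frac{\pi^2}{4} W(\alpha,\beta)$ (we assume here the same normalization used above for the squeezed states) reduces to $\Pi = e^{-2(|\alpha|^2+|\beta|^2)}\bigl[2(|\alpha|^2+|\beta|^2) - 1 - 4\,\mathrm{Re}(\alpha^*\beta)\bigr]$, and the quoted settings give $BI \approx -2.2387$ (our check):

```python
import numpy as np

def Pi(alpha, beta):
    # Wigner transform Pi = (pi^2/4) W(alpha, beta) for the N = 1 N00N
    # state with phi = pi (normalization assumed as in the NOPA case)
    s = abs(alpha) ** 2 + abs(beta) ** 2
    return np.exp(-2.0 * s) * (2.0 * s - 1.0
                               - 4.0 * (np.conj(alpha) * beta).real)

alpha, alpha_p = 0.0610285, -0.339053
beta, beta_p = -alpha, -alpha_p
BI = Pi(alpha, beta) + Pi(alpha_p, beta) + Pi(alpha, beta_p) \
     - Pi(alpha_p, beta_p)
```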
However, there are some other Bell-type inequalities \cite{Bell_N00N} for
six correlated events for which $ N00N $ states show the violation for
any $ N $.
From the expression of the Wigner function (\ref{Wigner_X_Y}) for the
N$00$N states the presence of correlations of the type $\langle X,Y\rangle$ is
clear. Using such correlations
the entropic steering inequality for the N00N state may be written as
\begin{eqnarray}
h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X})\geq \ln \pi e,
\label{N00N_entropy_steering}
\end{eqnarray}
The conditional entropies $ h(\mathcal{Y}|\mathcal{X}) $ and $ h(\mathcal{P_Y|}\mathcal{P_X}) $ can be calculated through the marginal probabilities
obtained through the Wigner function (\ref{Wigner_X_Y}), using which the
L.H.S. of the inequality (\ref{N00N_entropy_steering}) may be obtained for
different values
of $ N $. It turns out that for $N=1$, one gets $h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X}) \approx 2.05 < \ln \pi e$, thus violating the
steering inequality. However, for $N=2$, one gets $h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X}) \approx 2.25 > \ln \pi e$. Larger values of $N$
lead to further higher values of $h(\mathcal{Y}|\mathcal{X})+h(\mathcal{P_Y|}\mathcal{P_X})$, and hence, no steering
is possible for $N>1$.
In Fig.~\ref{PXY_N_N00N_F}, we plot the joint probability $P(X,Y)$ for
two different values of $N$, {\it viz}., $N=1$ and $N=4$, respectively.
The higher peak of the $N=1$ curve indicates stronger $\langle X,Y\rangle$ correlations
responsible for steering in this case. The correlations weaken for larger
values of $N$ as is indicated by the lower peak value of the $N=4$ curve,
and are not sufficient for revealing steering through the entropic inequality.
Thus, $N00N$ states with $N=1$ violate the entropic steering inequality,
but for $N > 1$ these states are not steerable by this criterion.
This feature is similar to Bell violation for $N00N$ states,
which is revealed for $N=1$, whereas violation of the standard Bell-CHSH
inequality does not occur for $N > 1$.
\begin{figure}
\caption{\footnotesize (Coloronline)
Correlations of the type $\langle X,Y\rangle$ responsible for steering using the
entropic steering inequality are revealed through the joint probability
distributions $P(X,Y)$. The figure shows that such correlations are
sufficiently strong to admit steering
for $N=1$, but are significantly weakened for larger $N$.
}
\label{PXY_N_N00N_F}
\end{figure}
\section{Summary}
In the present paper we have studied EPR steering by non-Gaussian
continuous variable entangled states. Here we have considered several
examples of such systems, i.e., the two-dimensional harmonic oscillator,
the photon subtracted squeezed vacuum state, and the N00N state.
Though such states are entangled
pure states, we have shown that they fail to reveal steering through
the Reid criterion for wide ranges of parameters. Steering with
such states is demonstrated using the entropic steering inequality.
We have computed the relevant conditional entropies using the
Wigner function whose non-Gaussian nature plays an important role
in demonstrating steering.
For all the above examples we perform a quantitative study of the
strength of steering (determined by the magnitude of violation of
the entropic steering inequality) as a function of the state parameters.
This leads to some interesting observations, especially in comparison
with the magnitude of Bell nonlocality demonstrated by these states.
For the LG modes one sees that the steering strength increases with
the increase of the angular momentum $n$, a feature that is also common
to the Bell violation. However, for both the two-mode squeezed vacuum
state as well as the single photon subtracted state derived from it,
we show that the behavior of the maximum Bell violation and steering
strength versus the squeezing parameter are not similar. This is evident
from the fact that though the maximum Bell violation peaks for a certain
value of $r$, the steering strength rises monotonically with increasing
$r$. This feature clearly establishes the fact that though Bell violation
guarantees steerability, the two types of quantum correlations are distinct
from each other. Moreover, for higher values of squeezing, the presence of
quantum correlations in such classes of states may be more easily detected
through the violation of the entropic steering inequality than through the
violation of the Bell inequality.
Finally, we study steering by N00N states. Here, steering
through the entropic steering condition is revealed only for $N=1$, though
the entanglement of such states remains constant with $N$. This shows
that entanglement is a correlation distinct from steering,
just as it is distinct from Bell
nonlocality. The above results should be useful for detecting and
manipulating correlations in non-Gaussian states for practical purposes
in different arenas such as information processing, quantum metrology, and
Bose condensates. Further work on the issue of the recently proposed
symmetric steering framework \cite{schn} may be of interest using
non-Gaussian resources.
{\it Acknowledgements}: GSA thanks Tata Institute of Fundamental Research
where a part of this work was done.
\end{document}
\begin{document}
\title {Links in overtwisted contact manifolds}
\author[Rima Chatterjee]{Rima Chatterjee}
\address{Mathematisches Institut\\ Universit\"at zu K\" oln\\ Weyertal 86-90,
50931 K\"oln, Germany}
\email{\href{mailto:[email protected]}{[email protected]}}
\urladdr{\url{http://www.rimachatterjee.com/}}
\begin{abstract}
We prove that Legendrian and transverse links in overtwisted contact structures having overtwisted complements can be coarsely classified by their classical invariants. We further prove that any coarse equivalence class of loose links has support genus zero, and construct examples to show that the converse does not hold.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
Knot theory associated to contact 3-manifolds has been a very interesting field of study. We say a knot in a contact 3-manifold is \emph{Legendrian} if it is everywhere tangent to the contact planes and \emph{transverse} if it is everywhere transverse to them. The classification of Legendrian and transverse knots has always been an interesting and difficult problem in contact geometry. Two Legendrian knots are said to be \emph{Legendrian isotopic} if they are isotopic through Legendrian knots. A knot or link type is said to be \emph{Legendrian simple} if it can be classified by its classical invariants up to Legendrian isotopy. Only a few knot types are known to be Legendrian simple in $(\Sp^3,\xistd)$: for example, the topologically trivial knots \cite{ef}, as well as the torus knots and the figure eight knot \cite{eth}. While there is no reason to believe all knots should be Legendrian simple, it has historically been difficult to prove otherwise. Chekanov \cite{chekanov} and, independently, Eliashberg \cite{eli} developed invariants of Legendrian knots showing that $m(5_2)$ has Legendrian representatives that are not distinguished by their classical invariants.
Since Eliashberg's classification of overtwisted contact structures \cite{eliashbergovertwist}, the study of overtwisted contact structures and of the knots and links in them has been minimal. However, in recent years overtwisted contact structures have played central roles in many interesting applications, such as building achiral Lefschetz fibrations \cite{ef}, near-symplectic structures on 4-manifolds \cite{gay}, and many more. Thus overtwisted manifolds and the knot theory associated to them have generated significant interest. There are two types of knots/links in overtwisted contact structures, namely loose and non-loose (also known as non-exceptional and exceptional, respectively).
A link in an overtwisted contact manifold is loose if its complement is overtwisted, and non-loose otherwise. The first explicit example of a non-loose knot was given by Dymara in \cite{dy}. In general, non-loose knots appear to be rare; it is still not known whether every knot type has a non-loose representative. There is another notion of classification of knots and links in contact manifolds known as \emph{coarse equivalence}: we say knots/links are coarsely classified if they are classified up to orientation preserving contactomorphism, smoothly isotopic to the identity. Observe that, though classification by Legendrian isotopy and coarse equivalence agree in $(\Sp^3,\xistd)$, they are not the same in general. Eliashberg and Fraser gave a coarse classification of Legendrian unknots in overtwisted contact structures on $\Sp^3$ \cite{ef}. Later, Geiges and Onaran gave a partial coarse classification of the non-loose left handed trefoil knots in \cite{geiona2} and of non-loose Legendrian Hopf links in \cite{geiona}. Recently, Matkovi\v{c} extended their results in \cite{matko}. Note that this is still not a complete classification, and all of these classification results have been proved in overtwisted $\Sp^3$.
This paper studies links in all overtwisted contact manifolds. In \cite{et}, Etnyre proved that loose, null-homologous Legendrian and transverse knots can be coarsely classified by their classical invariants. In \cite{geiona2}, the authors proved that this classification result remains true for the loose Hopf link. It turns out that Etnyre's work extends very naturally to all null-homologous loose links (by null-homologous here we mean that every link component bounds a Seifert surface), which gives the first theorem of this paper:
\begin{theorem}
\label{thm:Legmain}
Suppose $\Li_1$ and $\Li_2$ are two loose null-homologous Legendrian links with the same classical invariants. Then $\Li_1$ and $\Li_2$ are coarsely equivalent.
\end{theorem}
\begin{remark}
Here by a null-homologous link, we assume that every link component is null-homologous.
\end{remark}
The above theorem tells us that, up to contactomorphism, there is a unique loose link with any fixed classical invariants in any overtwisted contact structure.
\begin{remark}
In an overtwisted contact manifold $(\M,\xi)$, classification up to contactomorphism and classification up to Legendrian isotopy are not equivalent. Our result does not say anything about the Legendrian simplicity of a loose link. Dymara in \cite{dy} proved that two Legendrian knots having the same classical invariants in any contact 3-manifold $(\M,\xi)$ are Legendrian isotopic if there exists an overtwisted disk disjoint from both of them. Later this result was strengthened by Ding--Geiges in \cite{Ding-Geiges} and further by Cahn--Chernov in \cite{Cahn-Chernov}. In spite of being a stronger notion of classification, unfortunately this does not apply to all loose knots.
\end{remark}
As a corollary we prove the following result for loose transverse links.
\begin{corollary}
\label{cor:transverse}
Suppose $\T$ and $\T'$ are two transverse loose null-homologous links with the same classical invariants. Then $\T$ and $\T'$ are coarsely equivalent.
\end{corollary}
In other words, up to contactomorphism there is a unique loose null-homologous transverse link with every component having a fixed self-linking number.
\begin{remark}
In \cite{et}, the theorem was proved for null-homologous knots and it was hinted that it might be extended to non-null-homologous knots using Tchernov's definition of the relative rotation number and relative Thurston--Bennequin number \cite{tchernov}, with some extra conditions on the underlying manifold. It seems plausible that the same idea can be extended to links as well.
\end{remark}
After classifying the Legendrian and transverse loose links, we associate a Legendrian link with a compatible open book decomposition of the manifold. First, we extend the definition of the support genus of a Legendrian knot defined in \cite{ona} to the support genus of a Legendrian link (this extension comes naturally) and prove the following theorem about coarse equivalence classes of loose Legendrian links.
\begin{theorem}
\label{thm:supportgenus}
Suppose $[\Li]$ denotes the coarse equivalence class of a loose, null-homologous Legendrian link in any contact $3$-manifold. Then $\sg([\Li])=0$.
\end{theorem}
The above result gives a weak generalization of Onaran's result.
Like non-loose knots, non-loose links appear to be rare. The above theorem implies that if we can find a Legendrian link $\Li$ with $\sg(\Li)>0$, then $\Li$ is immediately non-loose.
We also show that the converse of the theorem is not true by constructing planar open books for non-loose links.
\begin{theorem}
\label{thm:examples}
There are examples of non-loose links with support genus zero.
\end{theorem}
Also, as a corollary we have a similar result for the coarse equivalence class of loose transverse links.
\begin{corollary}
Suppose $[\T]$ is the coarse equivalence class of a loose, null-homologous transverse link. Then $\sg([\T])=0$.
\end{corollary}
\subsection{Organization of the paper}
The paper is organized in the following way: In \fullref{sec:basics}, we discuss preliminaries of contact geometry and Legendrian knots, followed by a discussion of the Pontryagin--Thom construction for manifolds with boundary in \fullref{sec:homotopy}. In \fullref{sec:Legtheorem}, we prove \fullref{thm:Legmain} and \fullref{cor:transverse}. We conclude with proofs of \fullref{thm:supportgenus} and \fullref{thm:examples} in \fullref{sec:openbook}.
\section{Basics on contact geometry}
\label{sec:basics}
In this section, we briefly mention the preliminaries of contact geometry and Legendrian knots. For more details the reader should check \cite{etknot}, \cite{etcontact} and \cite{etnyrelectures}.
\subsection{Contact structures}
A contact structure $\xi$ on an oriented $3$-manifold $\M$ is a nowhere integrable $2$-plane field and we call ($\M,\xi$) a contact manifold. We assume that the plane fields are co-oriented, so $\xi$ can be expressed as the kernel of some global one form $\alpha$. In this case, the non-integrability condition is equivalent to $\alpha\wedge d\alpha > 0$.
There are two types of contact structures--tight and overtwisted.
An overtwisted disk is a disk embedded in a contact manifold $(\M,\xi$) such that $\xi$ is tangent to the boundary of the disk. We call a contact manifold overtwisted, if it contains an overtwisted disk. Otherwise we call it tight.
Though only a few results are known about the classification of tight contact structures on manifolds, overtwisted contact structures are completely classified by Eliashberg.
\begin{theorem}[Eliashberg, \cite{eliashbergovertwist}] Two overtwisted contact structures are isotopic if and only if they are homotopic as plane fields. Moreover, every homotopy class of oriented 2-plane fields contains an overtwisted contact structure.
\end{theorem}
\subsection{Legendrian links}
A link $\Li$ smoothly embedded in $(\M,\xi)$ is said to be Legendrian if it is everywhere tangent to $\xi$.
For the purpose of this paper, by classical invariants of a link we refer to the classical invariants of its components.
The classical invariants of a Legendrian knot are the topological knot type, the \emph{Thurston--Bennequin invariant} $\tb(\Li)$ and the \emph{rotation number} $\rot(\Li)$.
$\tb(\Li)$ measures the twisting of the contact framing relative to the framing given by the Seifert surface of $\Li$.
The other classical invariant, $\rot(\Li)$, is defined to be the winding of the tangent vector field of $\Li$ after trivializing $\xi$ along the Seifert surface.
One can classify a Legendrian link up to Legendrian isotopy: two Legendrian links $\Li$ and $\Li'$ are said to be \emph{Legendrian isotopic} if they are isotopic through Legendrian links.
There is another type of classification of Legendrian links known as \emph{coarse equivalence}. We say two Legendrian links are \emph{coarsely classified} if they are classified up to orientation preserving contactomorphism, isotopic to the identity. In $(\Sp^3, \xistd)$ these two types of classification are equivalent, but in general coarse equivalence does not imply Legendrian isotopy.
\begin{figure}
\caption{Stabilizations of a Legendrian knot.}
\label{fig:stabilization}
\end{figure}
Stabilization of a link is done by stabilizing any of its components.
By the standard neighborhood theorem for Legendrian knots, one can identify any Legendrian link component $\Li$ locally with the $x$-axis. Stabilization is a local operation, as shown in \fullref{fig:stabilization}. The modification on the top right is called the positive stabilization and is denoted $\Li_+$; the modification on the bottom right is the negative stabilization, denoted $\Li_-$. The order in which the stabilizations are performed does not matter; only where they are performed does. The effect of the stabilizations on the classical invariants is as follows:
\[\tb(\Li_\pm)=\tb(\Li)-1 \quad \text{and}\quad \rot(\Li_\pm)=\rot(\Li)\pm 1.\]
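As a quick illustration of these formulas (a routine computation, not specific to this paper), writing $\Li_{+-}$ for the result of one positive and one negative stabilization (in either order, the notation is ours), we get
\[
\tb(\Li_{+-})=\tb(\Li)-2, \qquad \rot(\Li_{+-})=\rot(\Li)+1-1=\rot(\Li),
\]
so a pair of opposite stabilizations lowers $\tb$ by two while leaving $\rot$ unchanged.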
\subsection{Transverse link and its relationship with a Legendrian link}
A link $\T$ in ($\M,\xi$) is called (positively) transverse if it intersects the contact planes transversely with each intersection positive. By classical invariants of a transverse link, we refer to the classical invariants of its components. There are two classical invariants of a transverse knot, the topological knot type and the \emph{self-linking number} $\slk(\T)$, which is defined for null-homologous knots. Let $\Sigma$ be a Seifert surface of a transverse knot $\T$. As $\xi|_\Sigma$ is trivial, we can find a non-zero vector field $v$ over $\Sigma$ in $\xi$. Let $\T'$ be a copy of $\T$ obtained by pushing $\T$ slightly in the direction of $v$. The self-linking number $\slk(\T)$ is defined to be the linking number of $\T$ with $\T'$.
Legendrian and transverse links are related by the operations known as transverse push off and Legendrian approximation. The classical invariants of a Legendrian link component and its transverse push off are related as follows:
\[ \slk(\Li_\pm)=\tb(\Li)\mp\rot(\Li)\] where $\Li_\pm$ denotes the positive and negative transverse push offs.
In this paper, if we mention transverse push-off it is always the positive transverse pushoff unless explicitly stated otherwise. Note that, while a transverse push off is well defined, a Legendrian approximation is only well defined up to negative stabilizations.
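As a concrete check of the push-off formula (a standard example, not taken from this paper): the maximal Thurston--Bennequin Legendrian unknot in $(\Sp^3,\xistd)$ has $\tb=-1$ and $\rot=0$, so its positive transverse push-off satisfies
\[
\slk(\Li_+)=\tb(\Li)-\rot(\Li)=-1-0=-1,
\]
recovering the well-known maximal self-linking number of the transverse unknot. The formula also explains the asymmetry just mentioned: a negative stabilization changes $(\tb,\rot)$ to $(\tb-1,\rot-1)$ and hence leaves $\tb-\rot$, and so the positive push-off, unchanged, while a positive stabilization changes $\tb-\rot$ by $-2$.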
\subsection{Open book decomposition and supporting contact structures}
Recall an \emph{open book decomposition} of a $3$-manifold $\M$ is a triple $(\B,\Sigma,\phi)$ where $\B$ is a link in $\M$ such that $\M\setminus\B$ fibers over the circle with fiber $\Sigma$ and monodromy $\phi$, so that $\phi$ is the identity near the boundary and each fiber of the fibration is a Seifert surface for $\B$. By saying $\phi$ is the monodromy of the fibration we mean that $\M\setminus\B= \Sigma\times[0,1]/\sim$ where $(1,x)\sim(0,\phi(x))$. The fibers of the fibration are called \emph{pages} of the open book and $\B$ is called the \emph{binding}. Given an open book $(\B,\Sigma,\phi)$ for $\M$, let $\Sigma'$ be $\Sigma$ with a $1$-handle attached. Suppose $c$ is a simple closed curve that intersects the cocore of the attached $1$-handle exactly once. Set $\phi'=\phi\circ D_c^+$, where $D_c^+$ is a right handed Dehn twist along $c$. The new open book $(\B',\Sigma',\phi')$ is known as a \emph{positive stabilization} of $(\B,\Sigma,\phi)$. If we use $D_c^-$ instead, it is called a \emph{negative stabilization}. For details see \cite{etplanar}.
We say a contact structure $\xi=\ker\alpha$ on $\M$ is supported by an open book decomposition ($\B, \Sigma,\phi)$ of $\M$ if
\begin{enumerate}
\item $d\alpha$ is a positive area form on the page of the open book.
\item $\alpha(v)>0$, for each oriented tangent vector to $\B$.
\end{enumerate}
Given an open book decomposition of a 3-manifold $\M$, Thurston and Winkelnkemper \cite{thurston} showed how one can produce a compatible contact structure. Giroux proved that two contact structures which are compatible with the same open book are isotopic as contact structures \cite{giroux}. Giroux also proved that two contact structures are isotopic if and only if they are compatible with open books which are related by positive stabilizations.
It is well known that every closed oriented 3-manifold has an open book decomposition. We can perform an operation called \emph{Murasugi sum} to connect sum two open books and produce a new open book. An interested reader should check \cite{etnyrelectures} for details.
\section{Homotopy classes of 2-plane fields}
\label{sec:homotopy}
In this section, we review the homotopy theory of plane fields in the complement of a link. Specifically, we will study homotopy classes of 2-plane fields on manifolds with boundary. We start by recalling the Pontryagin--Thom construction for manifolds with boundary (for the Pontryagin--Thom construction for closed manifolds see \cite{milnor}).
\subsection{Pontryagin--Thom construction for manifolds with boundary}
Let $\M$ be an oriented manifold with boundary. The space of oriented plane fields on $\M$ will be denoted by $\mathcal{P}(\M)$. On the other hand, if $\eta$ is a plane field defined on the boundary of $\M$, then the set of all plane fields on $\M$ that extend $\eta$ will be denoted by $\mathcal{P}(\M,\eta)$. Similarly, $\mathcal{V}(\M)$ will denote the set of all unit vector fields on $\M$, and, for a unit vector field $v$ defined along $\partial\M$, $\mathcal{V}(\M,v)$ will denote the set of all unit vector fields on $\M$ that extend $v$. Observe that the sets $\mathcal{P}(\M,\eta)$ and $\mathcal{V}(\M,v)$ can be empty, depending on $\eta$ and $v$.
After choosing a Riemannian metric on $\M$, we can associate a unit vector field to an oriented plane field in the following way: we send a unit vector field $v$ to the plane field $\eta$ such that $v$ followed by an oriented basis of $\eta$ orients $\TM$. Thus there is a one-to-one correspondence between $\mathcal{P}(\M)$ and $\mathcal{V}(\M)$, and similarly between $\mathcal{P}(\M,\eta)$ and $\mathcal{V}(\M,v)$, where $v$ is the unit vector field along the boundary associated to $\eta$ by the choice of metric and orientation. Notice that both correspondences depend only on the choice of metric.
We know that any oriented 3-manifold has trivial tangent bundle. Thus, fixing some trivialization, we can write $\TM\simeq\M\times\R^3$, so the unit tangent bundle $\mathrm{UTM}$ can be identified with $\M\times\Sp^2$. Any unit vector field on $\M$ is a section of this bundle and can be associated to a map $\M\to\Sp^2$. We can therefore identify $\mathcal{V}(\M)$ with $[\M, \Sp^2]$. Similarly, if $v$ is a unit vector field on $\partial\M$, we can associate to it a map $f_v\colon\partial\M\to{\Sp}^2$. Thus $\mathcal{V}(\M,v)$ can be identified with the set of maps from $\M$ to ${\Sp}^2$ which coincide with $f_v$ on the boundary, denoted by $[\M,\Sp^2;f_v]$.
Now suppose $f_v\colon\partial\M\to\Sp^2$ misses the north pole $p$. Given any $f\in[\M, \Sp^2;f_v]$, we can homotope it so that it is transverse to the north pole (thus $p$ will be a regular value of $f$). Then $f^{-1}(p)=\Li_f$ is a link in the interior of $\M$ with framing $\mathbf{f}_f$ given by $f^*(T\Sp^2|_p)$. As $f$ is homotoped through maps in $[\M,\Sp^2;f_v]$, the framed link $(\Li_f,\mathbf{f}_f)$ changes by framed cobordism. Thus any $v$ defined on $\partial \M$ which extends to $\M$ can be associated to a framed cobordism class of links. This gives us the relative version of the Pontryagin--Thom construction.
\begin{remark}
Notice that this construction works equally well if $\M$ has multiple boundary components.
\end{remark}
\begin{lemma}
\label{lemma:pont}
Assume that $\eta$ is a plane field defined along the boundary of $\M$ that, in some trivialization of $\TM$, corresponds to a function that misses the north pole of $\Sp^2$. Then there is a one-to-one correspondence between homotopy classes of plane fields on $\M$ that extend $\eta$ and the set of framed links in the interior of $\M$ up to framed cobordism.
\end{lemma}
For the closed case, the following proposition was proved in \cite{gompf}.
\begin{proposition}
Let $\M$ be a closed, connected 3-manifold. Then any trivialization $\tau$ of the tangent bundle of $\M$ determines a function $\Gamma_\tau$ sending homotopy classes of oriented $2$-plane fields $\xi$ on $\M$ into $\Ho_1(\M,\Z)$, and for any $\xi$, $2\Gamma_\tau(\xi)$ is Poincar\'{e} dual to $c_1(\xi)\in\Ho^2(\M,\Z)$. For any fixed $x\in \Ho_1(\M,\Z)$, the set $\Gamma_\tau^{-1}(x)$ of classes of $2$-plane fields $\xi$ mapping to $x$ has a canonical $\Z$ action and is isomorphic to $\Z/d(\xi)$, where $d(\xi)$ is the divisibility of the Chern class $c_1(\xi)$.
\end{proposition}
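To illustrate the proposition in a familiar case (a standard computation, not from this paper): for $\M=\Sp^3$ we have $\Ho_1(\Sp^3,\Z)=0$, so the only choice is $x=0$, and every plane field has $c_1(\xi)=0$, whence the divisibility is $d(\xi)=0$. The proposition then gives
\[
\Gamma_\tau^{-1}(0)\cong \Z/0 \cong \Z,
\]
recovering the classical fact that homotopy classes of plane fields on $\Sp^3$ are indexed by $\pi_3(\Sp^2)\cong\Z$ via the Hopf invariant.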
Now suppose $\M$ is a manifold with boundary and $\mathcal{F}(\M)$ denotes the set of all framed cobordism classes of framed links in the interior of $\M$. Then there is a map
\[\phi\colon\mathcal{F}(\M)\to\Ho_1(\M,\Z)\]
defined by
\[ (\Li_f,\mathbf{f}_f)\mapsto [\Li_f].\] This map is clearly surjective. We want to compute its preimages. First notice that there is a natural intersection pairing between $\Ho_1(\M)$ and $\Ho_2(\M,\partial \M)$. Let $i\colon(\M,\emptyset)\to(\M,\partial\M)$ be the inclusion, inducing the map $i_*\colon\Ho_2(\M,\Z)\to\Ho_2(\M,\partial \M,\Z)$. For $[\Li]\in\Ho_1(\M,\Z)$, set
\[D_\Li=\{[\Li]\cdot[\Sigma]\ :\ [\Sigma]\in i_*(\Ho_2(\M,\Z))\}\]
where $[\Li]\cdot[\Sigma]$ denotes the intersection pairing. Clearly $D_\Li$ is a subgroup of $\Z$; let $d(\Li)$ be the smallest non-negative integer generating it.
\begin{lemma}
With the notation above, \[\phi^{-1}([\Li])\cong\Z/2d(\Li).\]
\end{lemma}
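As a simple illustration of the lemma (a standard example, not from this paper): take $\M=\Sp^3\setminus\N(\K)$, the exterior of a knot $\K$ in $\Sp^3$. Here $\Ho_2(\M,\Z)=0$, so $i_*(\Ho_2(\M,\Z))=0$, hence $D_\Li=\{0\}$ and $d(\Li)=0$ for every class $[\Li]$. The lemma then gives
\[
\phi^{-1}([\Li])\cong\Z/0\cong\Z,
\]
that is, framed links in a fixed homology class of the knot exterior are classified up to framed cobordism by an integer framing invariant.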
\section{Classification of loose Legendrian links}
\label{sec:Legtheorem}
There are two types of links in an overtwisted contact manifold, namely loose (also known as non-exceptional) and non-loose (also known as exceptional). A Legendrian link $\Li$ is called loose if the contact structure restricted to its complement is overtwisted; otherwise, it is called non-loose. In other words, a loose link must have an overtwisted disk disjoint from it.
\begin{remark}
Note that for a loose Legendrian link, all of its components must be loose, while a non-loose link can have loose components; in fact, a non-loose link can have all of its components loose.
\end{remark}
The following is our main theorem in this section.
\begin{theorem}
\label{thm:main}
Suppose $\Li$ and $\Li'$ are two Legendrian $n$-component links in $(\M,\xi)$ with all of their components null-homologous. We fix their Seifert surfaces. If $\Li$ and $\Li'$ are topologically isotopic, $\tb(\Li_i)=\tb(\Li_i')$ and $\rot(\Li_i)=\rot(\Li_i')$ for $i=1,\dots,n$ (where the classical invariants are defined using the fixed Seifert surfaces), then $\Li$ and $\Li'$ are coarsely equivalent.
\end{theorem}
In other words, up to contactomorphism there is a unique loose Legendrian link whose components have fixed $\tb$ and $\rot$.
Before we begin proving this, we need the following lemma:
\begin{lemma}
\label{lemma:homotopic}
Suppose $\Li$ and $\Li'$ are two Legendrian $n$-component links in $(\M,\xi)$ with each of their components null-homologous. If they are topologically isotopic, $\tb(\Li_i)=\tb(\Li_i')$ and $\rot(\Li_i)=\rot(\Li_i')$ for $i=1,\dots,n$, then $\xi|_{\M\setminus \N(\Li)}$ is homotopic to $\xi|_{\M\setminus \N(\Li')}$ rel boundary as plane fields.
\end{lemma}
\begin{proof}
We will use techniques similar to \cite{et}.
As $\Li$ and $\Li'$ are topologically isotopic, there is an ambient isotopy $\phi_t$ of $\M$ with $\phi_0=\mathrm{id}$ and $\phi_1(\Li)=\Li'$. We will assume that the Seifert surfaces of the link components are also related by this ambient isotopy (so after applying the ambient isotopy the Seifert surfaces of the components agree).
Using this isotopy we push forward the underlying contact structure $\xi$ and obtain a new contact structure $\xi'=(\phi_1)_*\xi$. Observe that $\xi$ and $\xi'$ are homotopic as plane fields on $\M$. After applying the isotopy we may assume $\Li=\Li'$; let $\N$ be their standard neighborhood. Note that $\tb$ measures the twisting of the contact framing with respect to the surface framing; as the components have the same $\tb$, this allows us to identify the neighborhoods, and by the standard neighborhood theorem for Legendrian links we may assume $\xi$ and $\xi'$ agree on $\N$. We need to show that $\xi|_{\M\setminus\N}$ is homotopic to $\xi'|_{\M\setminus \N}$ rel boundary as plane fields. We know that homotopy classes of plane fields are in one-to-one correspondence with framed links up to framed cobordism. Using the Pontryagin--Thom construction for manifolds with boundary, we associate to these plane fields the framed links $(\Li_\xi,\mathbf{f}_\xi)$ and $(\Li_{\xi'},\mathbf{f}_{\xi'})$. We need to show that these links are homologous in $\M\setminus \N$ and that their framings differ by a multiple of $2d([\Li_\xi])$, where $d$ denotes divisibility as above.
To do this, we first fix a trivialization of $\TM$. Note that the Pontryagin--Thom construction works for any trivialization, but we would like to use a convenient one. Let $\V_1$ be the Reeb vector field of $\xi$, and choose a Riemannian metric such that $\V_1$ is positively orthogonal to $\xi$ with respect to this metric. Thus $\V_1$ determines $\xi$ on $\M$. To avoid ambiguity, from now on we will denote the contact structure $\xi$ by $\xi_{\V_1}$ and start making alterations to $\xi_{\V_1}$ which do not affect $\xi$ or $\xi'$. Next choose $\V_2$ in the following way:
\begin{enumerate}
\item Choose $\V_2$ to be the tangent vector field along $\Li_i$, if $\rot(\Li_i)$ is even.
\item Choose $\V_2$ to be the tangent vector field along $\Li_i$ with an extra negative twist with respect to the fixed Seifert surface of the component, if $\rot(\Li_i)$ is odd.
\end{enumerate}
Observe that the tangent vector fields $\V_2$ along $\Li=\Li'$ agree, as all the components have the same $\rot$ ($\rot$ measures the winding of the tangent vector field along the component).
Notice that, as we know $\xi$ on $\N$, we can extend $\V_2$ to all of $\N$. Now we need to extend $\V_2$ to all of $\M$. In general, this might not be possible: the relative Euler class $e(\xi_{\V_1}, \V_2)$ is the obstruction to this extension. So our goal is to make this obstruction vanish.
By Lefschetz duality and the Mayer--Vietoris sequence, we have
\begin{equation*}
\label{eq:hom}
\Ho^2(\X,\partial \X;\Z)\simeq \Ho_1(\X;\Z)\simeq \Ho_1(\M)\oplus \Z^n
\tag{1}
\end{equation*}
where $\X=\M\setminus\N$ and each of the $\Z$ factors is generated by the meridian of a link component.
Now the relative Euler class $e(\xi_{\V_1},\V_2)$ lives in $\Ho^2(\X,\partial \X;\Z)$, so by \fullref{eq:hom} it has $n+1$ components. As the splitting suggests, one can check that the relative Euler class of $\xi_{\V_1}$ relative to $\V_2$ on $\partial \X$ is computed by its evaluation on absolute chains in $\X\subset \M$ and its evaluation on the Seifert surfaces of the $\Li_i$. For the first part, the evaluation is determined by the evaluation of $e(\xi_{\V_1})$ on closed surfaces in $\M$; as $\xi_{\V_1}$ is a contact structure, this is an even class. On the other hand, by our choice of $\V_2$,
\[ \langle e(\xi_{\V_1},\V_2),[\Sigma_i]\rangle= \rot(\Li_i) \quad \text{or}\quad \rot(\Li_i)+1.\]
In both cases, this is even for each $i$. So the relative Euler class is a vector with $n+1$ coordinates, each of which is even; call it $\alpha$. Next we will apply a half Lutz twist to alter the relative Euler class. Choose a transverse knot $\K$ in $\X$ (that is, $[\K] \in \Ho_1(\X,\Z)$) such that $PD[\K]=\frac{1}{2}\alpha$ (we can always find such a knot). If we apply a half Lutz twist in $\X$ along $\K$, we get a new contact structure $\xi'_{\V_1}$ such that
\[ e(\xi'_{\V_1},\V_2)- e(\xi_{\V_1},\V_2)=-2PD[\K]. \]
By our choice of $\K$, $e(\xi'_{\V_1},\V_2)$ vanishes. Thus we can extend $\V_2$ as a section of $\xi'_{\V_1}$ over all of $\X$. Now choose an almost contact structure $J$ on $\M$ and set $\V_3=J\V_2$. We use the vector fields $-\V_1, \V_2, \V_3$ to trivialize $\TM$ and $\mathrm{TX}$; notice that here $\V_1$ is mapped to the south pole $p^*$. We will call this trivialization $\tau$.
Using this trivialization, we find framed links $(\Li_\xi, \mathbf{f}_\xi)$ and $(\Li_{\xi'}, \mathbf{f}_{\xi'})$ associated to $\xi$ and $\xi'$ by the Pontryagin--Thom construction on $\X$. As $\TM$ is trivialized by $\tau$, both $\Li_\xi$ and $\Li_{\xi'}$ are oriented cycles. Next we need to show that $\Li_\xi$ and $\Li_{\xi'}$ are homologous in $\X$. As $\Ho_1(\X,\Z)$ splits into $n+1$ factors, we need to check that they agree in each of them. First we show they agree in $\Ho_1(\M,\Z)$. Notice that $\V_1$ is the vector field that defines $\xi$ on $\N$, and it is mapped to the south pole. So we can extend $f_\xi$ to a map from $\M$ to $\Sp^2$ by collapsing $\N$ to the south pole $p^*$:
\[F_\xi(x)=
\begin{cases}
f_\xi(x) & \text{if}\ x\in \X\\
p^* & \text{if}\ x\in \N
\end{cases}
\]
Now $F_\xi^{-1}(p)=f_\xi^{-1}(p)=\Li_\xi$, and similarly for $\Li_{\xi'}$. Thus $\Li_\xi$ and $\Li_{\xi'}$ are also the links associated to $\xi$ and $\xi'$ in $\M$. Now, as $\xi$ and $\xi'$ are homotopic as plane fields on $\M$, the classes $[\Li_\xi]$ and $[\Li_{\xi'}]$ must agree in $\Ho_1(\M,\Z)$.
Next we need to verify that $\Li_\xi\cap\Sigma_i= \Li_{\xi'}\cap\Sigma_i$ for each $i$. Note that here we can take the same Seifert surface for each pair of link components $\Li_i$ and $\Li_i'$, as they are related by the ambient isotopy. As the tangent vector field $\V_2$ gives the framing of the link $\Li_\xi$ (the framing of $\Li_\xi$ is given by the pull back of $T_p\Sp^2$, and this is exactly equal to $\xi$ along $\Li_\xi$), we have
\[ \langle e(\xi,\V_2), \Sigma_i\rangle=\Li_\xi\cap\Sigma_i.\]
The same argument works for $\Li_{\xi'}$.
Now if $\rot(\Li_i)$ is even, the definition of $\V_2$ gives us $\rot(\Li_i)=\langle e(\xi,\V_2),[\Sigma_i]\rangle$.
Thus if $\rot(\Li_i)$ is even, we have
\[ \Li_\xi\cap\Sigma_i=\langle e(\xi,\V_2),[\Sigma_i]\rangle=\rot(\Li_i)=\rot(\Li_i')=\langle e(\xi',\V_2),[\Sigma_i]\rangle=\Li_{\xi'}\cap\Sigma_i.\]
Similarly, for $\rot(\Li_j)$ odd,
\[ \Li_\xi\cap\Sigma_j=\langle e(\xi,\V_2),[\Sigma_j]\rangle=\rot(\Li_j)+1=\rot(\Li_j')+1=\langle e(\xi',\V_2),[\Sigma_j]\rangle=\Li_{\xi'}\cap\Sigma_j.\] Thus $\Li_\xi$ and $\Li_{\xi'}$ are homologous in $\Ho_1(\X,\Z)$.
Next we want to show that the framings differ by a multiple of $2d([\Li_\xi])$. Notice that $\xi$ and $\xi'$ are homotopic as plane fields on $\M$. Thus the framings of $\Li_\xi$ and $\Li_{\xi'}$ associated to $\xi$ and $\xi'$ must differ by a multiple of $d(\xi)$, where $d(\xi)$ is the divisibility of the Euler class $e(\xi)$ \cite{gompf}; in other words, the divisibility of the Poincar\'{e} dual of $e(\xi)$. We will show this is exactly $2d([\Li_\xi])$.
We know $\xi=f_\xi^*(T\Sp^2)$, so
\[e(\xi)=e(f_\xi^*(T\Sp^2))= f_\xi^*(e(T\Sp^2))= f_\xi^*(2\,PD[p]),\] since $e(T\Sp^2)=2\,PD[p]$ for the regular value $p$. Moreover, \[f_\xi^*(2\,PD[p])=2\,PD(f_\xi^{-1}(p))=2\,PD[\Li_\xi];\] for the identity $f_\xi^*(PD[p])=PD(f_\xi^{-1}(p))$ see \cite{geiges}. Hence the Poincar\'{e} dual of $e(\xi)$ is $2[\Li_\xi]$, and the framings differ by a multiple of $2d([\Li_\xi])$.
Thus by \fullref{lemma:pont}, $\xi|_{\M\setminus \N}$ and $\xi'|_{\M\setminus \N}$ are homotopic rel boundary.
\end{proof}
\begin{proof}[Proof of \fullref{thm:main}]
As $\Li$ and $\Li'$ are loose, they have overtwisted complements. By Eliashberg's classification of overtwisted contact structures, isotopy classes of overtwisted contact structures are in one-to-one correspondence with homotopy classes of plane fields \cite{eliashbergovertwist}. Thus, if each pair of components of $\Li$ and $\Li'$ has the same Thurston--Bennequin invariant and rotation number, then by \fullref{lemma:homotopic} they have contactomorphic complements rel boundary. As we can extend this contactomorphism over the standard neighborhood of $\Li$ (a disjoint union of solid tori), this proves that $\Li$ and $\Li'$ are coarsely equivalent.
\end{proof}
\begin{corollary}
\label{cor:transverselink}
Suppose $\T$ and $\T'$ are two topologically isotopic loose $n$-component transverse links with each of their components null-homologous (i.e.\ each component bounds a Seifert surface). Fix these Seifert surfaces and suppose that, with respect to these surfaces, $\slk(\T_i)=\slk(\T'_i)$ for each $i$. Then $\T$ and $\T'$ are coarsely equivalent.
\end{corollary}
\begin{proof}
Suppose $\T$ and $\T'$ are two loose transverse links with each of their components null-homologous and $\slk(\T_i)=\slk(\T_i')$ for each $i$. We take Legendrian approximations of $\T$ and $\T'$ component by component and call them $\Li$ and $\Li'$. We can do the Legendrian approximation in a small enough neighborhood so that the Legendrian links remain loose. After this step, we have the following two cases:
\subsubsection*{\bf Case 1}Suppose $\tb(\Li_i)=\tb(\Li_i')$ and $\rot(\Li_i)=\rot(\Li_i')$ for all $i$. Then we have two loose Legendrian links with each component null-homologous and with the same classical invariants. Thus by \fullref{thm:main}, they have contactomorphic complements. Now we take the transverse push-offs of $\Li$ and $\Li'$. As the transverse push-off is well defined, we get back $\T$ and $\T'$. This proves that $\T$ and $\T'$ are coarsely equivalent.
\subsubsection*{\bf Case 2}Suppose $\tb(\Li_j)\neq\tb(\Li_j')$ for some $j$. We may assume $\tb(\Li_j)>\tb(\Li_j')$, so we start by negatively stabilizing $\Li_j$. As we can do a negative stabilization in a small enough Darboux ball, this does not affect any other link component, and it does not change the transverse link type. So we can negatively stabilize the link components locally one by one until $\tb(\Li_i)=\tb(\Li_i')$ for each $i$. As $\slk(\T_i)=\slk(\T_i')$, we must have $\rot(\Li_i)=\rot(\Li_i')$ for each $i$ as well. So we are back in Case 1.
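The bookkeeping in Case 2 can be made explicit (a routine check; the notation $S_-^k$ for $k$-fold negative stabilization is ours): set $k=\tb(\Li_j)-\tb(\Li_j')>0$. After $k$ negative stabilizations of $\Li_j$ we have
\[ \tb(S_-^k(\Li_j))=\tb(\Li_j)-k=\tb(\Li_j'), \qquad \rot(S_-^k(\Li_j))=\rot(\Li_j)-k, \]
and since $\slk(\T_j)=\tb(\Li_j)-\rot(\Li_j)=\tb(\Li_j')-\rot(\Li_j')=\slk(\T_j')$, we get $\rot(\Li_j)-k=\rot(\Li_j')$, so the rotation numbers match automatically.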
\end{proof}
\section{Links and open book decomposition}
\label{sec:openbook}
In this section, we extend the idea of the support genus of a Legendrian knot \cite{ona} to the support genus of a link and prove that every coarse equivalence class of loose null-homologous Legendrian links has support genus zero.
We can always associate a Legendrian link in $(\M,\xi)$ with an open book supporting the underlying manifold by including the link in the $1$-skeleton of the contact cell decomposition of the contact manifold. Thus we define the support genus of a Legendrian link in $(\M,\xi)$ as follows:
\begin{definition}
The support genus $\sg(\Li)$ of a Legendrian link $\Li$ in a contact $3$-manifold $(\M,\xi)$ is the minimal genus of a page of the open book decomposition of $\M$ supporting $\xi$ such that $\Li$ lies on the page of the open book and the framings given by $\xi$ and the page agree.
\end{definition}
In \cite{ona}, Onaran proved the following theorem.
\begin{theorem}
Any link in a 3-manifold $\M$ is planar.
\end{theorem}
The above theorem tells us that any link in $\M$ can be put on a planar open book $(\B,\Sigma, \phi)$ for $\M$. For details of the proof see \cite{ona}.
\begin{figure}
\caption{Page of a planar open book on which the link lies. The blue outline shows the outer boundary component of the punctured disk. The box depicts the boundary area where we want to do the stabilization or destabilization of $\Li_k$.}
\label{fig:braid}
\end{figure}
Now before we proceed to the main theorem of this section, we will need the following lemmas.
\begin{lemma}
\label{lemma:Positive_stab_Leg}
Suppose $\Li$ is a Legendrian link sitting on a planar open book as shown in \fullref{fig:braid}. Then a positive or negative stabilization of any link component $\Li_i$ can be done while fixing the Legendrian isotopy type of the other link components.
\end{lemma}
\begin{figure}
\caption{Positive and Negative stabilization of the link sitting on the page of an open book.}
\label{fig:Positive_stab_Leg}
\end{figure}
\begin{proof}
Suppose $\Li$ is a Legendrian link sitting on the page of a planar open book. Fix an orientation of the link. Suppose $\B_i$ is the outermost binding component. Now choose a particular region of $\B_i$ which is closest to $\Li_i$ and far from the other components. The shaded region in \fullref{fig:braid} shows where we will do the stabilizations. We do a positive stabilization along $\B_i$ and push the link component $\Li_i$ along the attaching 1-handle as shown in \fullref{fig:Positive_stab_Leg}. We call it $\Li'_i$. By our choice of attaching region, this operation is local and thus does not affect any other link component sitting on the page of the open book. Clearly $\D$ is a disk with $\tb=-1$ and a single dividing curve, so we may assume it to be convex. Therefore, $\Li'_i$ is the stabilization with $\D$ being the stabilizing disk. Also $\D$ can be thought of as a bypass disk along $\Li'$.
\begin{figure}
\caption{The positive and negative stabilization of $\Li_i$ and the signs of bypass disks.}
\label{fig:bypass_sign}
\end{figure}
The sign of the stabilization depends on the orientation of the boundary of the disk. The orientation of the boundary of the disk is inherited from the Legendrian knot $\Li'$. The sign of the singularity of $\D_\xi$ is determined by the contact planes. We call a singularity along $\partial\D$ positive or negative according to whether the contact plane takes a right-handed or a left-handed turn along $\partial\D$. See \fullref{fig:bypass_sign}. Clearly we have chosen to do this operation away from the other link components. Thus all other link components remain unaltered during the operation, and so do their Legendrian knot types. Observe that $\Li_i$ has a fixed orientation, so we can perform any number of positive or negative stabilizations of any link component away from the other components.
\end{proof}
The next lemma tells us that de-stabilization of any component of a loose link can be done in the complement of other components.
\begin{lemma}
\label{lemma:neg_stab}
Suppose $\Li$ is a link sitting on the page of a planar open book $(\B, \Sigma,\phi)$ as shown in \fullref{fig:braid}, and suppose $\B_i$ is the outermost boundary component. Now suppose we do a negative stabilization of $(\B,\Sigma,\phi)$ along $\B_i$. The new open book does not support $(\M,\xi)$, and we get a new link $\Li_{new}$ in the new contact structure. If we push $\Li_{new}$ along the attaching handle, this destabilizes the link component, and it can be performed in a way that does not affect the Legendrian type of any other link component.
\end{lemma}
\begin{figure}
\caption{Negative stabilization of the open book and the de-stabilized link component sitting on the page}
\label{fig:neg_stab_openbook}
\end{figure}
\begin{proof}
In \cite{ona}, a similar version of this lemma was proved for knots. We give a slightly different proof. Our proof relies on the fact that null-homologous Legendrian knots having the same classical invariants are Legendrian isotopic in $\Sp^3$ if there is an overtwisted disk disjoint from them \cite{dy}.
Suppose $\Li$ is a Legendrian link sitting on the page of a planar open book $(\B,\Sigma,\phi)$. Fix an orientation of $\Li$. Pick a link component $\Li_i$ that we want to destabilize. Now we choose a particular region of the outermost boundary component near $\Li_i$ and away from all the other $\Li_j$'s. This can be done as shown in \fullref{fig:braid}.
Now do a negative stabilization along that region and push the link component $\Li_i$ along the attaching 1-handle. By our choice of attaching region, this operation is away from the other link components. The new open book $(\B',\Sigma', \phi')$ does not support the underlying contact structure anymore. We call the link $\Li_{new}$ in the new contact structure and show that $(\Li'_{new})_i$ is a destabilization of $(\Li_{new})_i$ as shown in \fullref{fig:neg_stab_openbook}.
\begin{figure}
\caption{Negative stabilization followed by a positive stabilization of the open book near $\Li_k$ and away from other components.}
\label{fig:destab_link}
\end{figure}
Here the disk $\D$ has $\tb=1$ and thus cannot be made convex. So we stabilize the open book along the same boundary component as shown in \fullref{fig:destab_link}. Now positive and negative stabilizations of $(\Sigma,\phi)$ can also be thought of as Murasugi summing with $(\Ho^\pm,\pi^\pm)$. Also notice that $(\Ho^+,\pi^+)\connsum (\Ho^-,\pi^-)$ is an open book for $(\Sp^3,\xi_{-1})$. As the link components are identical outside the neighborhood of the boundary, we can assume the local operation takes place entirely in the overtwisted $\Sp^3$. Now we push $(\Li'_{new})_i$ along the new attaching handle, and by \fullref{lemma:Positive_stab_Leg}, we get $(\Li'_{new})_i^\pm$ according to the orientation of the link component. We have also found an overtwisted disk $\D$ in the complement of $(\Li_{new})_i$ and $(\Li'_{new})_i^\pm$. Now by \cite{dy}, $(\Li_{new})_i$ and $(\Li'_{new})_i^\pm$ must be Legendrian isotopic. As $(\Li'_{new})_i^\pm$ is a stabilization of $(\Li'_{new})_i$, clearly $(\Li'_{new})_i$ is the destabilization of $(\Li_{new})_i$. Nothing changes outside the overtwisted $\Sp^3$. Thus all other link components remain unaltered, and so do their Legendrian isotopy classes.
\end{proof}
Thus \fullref{lemma:neg_stab} together with \fullref{lemma:Positive_stab_Leg} proves that if a link lies on an open book as shown in \fullref{fig:braid}, any number of positive (resp. negative) stabilizations and de-stabilizations of a particular link component can be done in the complement of the other link components. We will use these lemmas in the proof of our main theorem in this section.
\begin{definition}
Suppose $[\Li]_n$ denotes the class of all $n$-component links with each component having fixed $\tb$ and $\rot$. For any two links in this class there exists a contactomorphism taking one to the other. We call this \emph{the coarse equivalence class} of a link.
\end{definition}
\begin{theorem}
\label{thm:sg}
Suppose $[\Li]_n$ is the coarse equivalence class of a null-homologous, loose Legendrian link in $(\M,\xi)$. Then $\sg([\Li]_n)=0$.
\end{theorem}
\begin{proof}
As every link is planar, we can put $\Li$ on a planar open book $(\B,\Sigma,\phi)$ for $\M$. Now $(\B,\Sigma,\phi)$ does not necessarily support the underlying contact structure. But we can always negatively stabilize the open book and assume the contact structure it supports is overtwisted; call it $\xi'$. As overtwisted contact structures can be identified using their $d_2$ and $d_3$ invariants, we start making alterations to the open book so that the invariants match those of $\xi$. By Lutz twists and Murasugi summing in an appropriate way we can make the $d_2$ and $d_3$ invariants agree. Note that the $d_3$ invariant is additive under the connected sum operation. Also, none of these operations change the genus of the open book. For details of these operations see \cite{etplanar}. Now we have a planar open book which supports a contact structure whose $d_2$ and $d_3$ invariants agree with those of $\xi$. By Eliashberg's classification of overtwisted contact structures, these contact structures are isotopic. Next we can Legendrian realize the link on the page and call it $\Li'$. Suppose we want to realize the classical invariants $\tb=(t_1, t_2,\dots, t_n)$ and $\rot=(r_1, r_2,\dots, r_n)$. If the classical invariants of $\Li'$ agree with those of $\Li$, we are done. Suppose not. Then we have the following cases:
\subsection*{Case 1}Suppose $\tb$ agrees but $\rot$ does not. Let $\Li_j$ be a link component with $\tb(\Li_j)=t_j$ and $\rot(\Li_j)=r'_j\neq r_j$. We will negatively or positively stabilize the link component $\Li_j$ to increase or decrease $r'_j$. By \fullref{lemma:Positive_stab_Leg}, this operation can be done while fixing the other link components. Notice that this changes $t_j$ to $t_j-1$. So we need to destabilize the link component in an appropriate way so that we do not reverse the change in $r_j$. This can be done in the following way: if we positively stabilize the link component, we negatively destabilize it. This can be done fixing all other link components as stated in \fullref{lemma:neg_stab}. This keeps $\tb$ fixed and increases $\rot$ by 2. Similarly, a negative stabilization followed by a positive destabilization keeps $\tb$ fixed and decreases $\rot$ by 2. As $\tb+\rot$ is always odd for a Legendrian knot, we can achieve any possible rotation number for a link component. We can do this any number of times to achieve $r_j$ while fixing the Legendrian type of all other link components. Note that we might end up in a contact structure different from the one we started with, as negative stabilization alters the contact structure. But then we can always alter it back by Murasugi summing with appropriate open books of $\Sp^3$. In this way, we find a link sitting on the page of an open book supporting the contact structure $\xi$ with $\tb=(t_1, t_2,\dots, t_n)$ and $\rot=(r_1, r_2,\dots, r_n)$. By \fullref{thm:main}, $\Li$ must be in the same coarse equivalence class. This proves the theorem.
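To keep track of how these moves change the classical invariants, recall that a positive stabilization $S_+$ sends $(\tb,\rot)$ to $(\tb-1,\rot+1)$ and a negative stabilization $S_-$ sends it to $(\tb-1,\rot-1)$; the two combinations used in Case 1 therefore act as
\[
S_{-}^{-1}\circ S_{+}:(\tb,\rot)\longmapsto(\tb,\rot+2), \qquad
S_{+}^{-1}\circ S_{-}:(\tb,\rot)\longmapsto(\tb,\rot-2).
\]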
\subsection*{Case 2}Suppose $\tb(\Li_j)=t'_j\neq t_j$. In this case we need to stabilize or destabilize the link component $\Li_j$ to decrease or increase the $\tb$ until it agrees with $t_j$, and this can be done keeping the other components fixed by \fullref{lemma:Positive_stab_Leg} and \fullref{lemma:neg_stab}. We can perform this local operation for all the link components one by one until we get the $\tb$ we desire. So we are in Case 1.
\end{proof}
The next theorem tells us that the converse of the above theorem is not true.
\begin{theorem}
There are examples of non-loose links with support genus zero.
\end{theorem}
\begin{figure}
\caption{Example 1: (a) Non-loose Hopf link in $(\Sp^3,\xi_{-1})$ and (b) a compatible planar open book where it sits.}
\label{fig:Hopf}
\end{figure}
\begin{proof}
\fullref{fig:Hopf}(a) shows a non-loose positive Hopf link in $(\Sp^3,\xi_{-1})$. To see this, we do a $-1$-surgery along $\Li_1$, which cancels one of the $+1$-surgeries, and we are left with one $+1$-surgery on a $\tb=-1$ unknot in $(\Sp^3,\xistd)$, which produces the unique tight $\Sp^1\times\Sp^2$. For details, see \cite{geiona}. Next, we construct a planar open book compatible with $(\Sp^3,\xi_{-1})$ where the non-loose Hopf link sits. We start with the annular open book that supports $(\Sp^3,\xistd)$ and use the well-known stabilization method from \fullref{lemma:Positive_stab_Leg}. The monodromy of this open book can be computed from the Dehn twists coming from the stabilizations and the Dehn twists defined by the surgery curves. One of the left-handed Dehn twists coming from the $+1$-surgery cancels the right-handed Dehn twist of the annular open book we started with. We perform right-handed Dehn twists along the solid green curves and the left-handed Dehn twist along the dashed curve. This clearly shows $\sg(\Li_0\sqcup\Li_1)=0$.
\begin{figure}
\caption{Example 2: (a) Non-loose Hopf link in $(\Sp^3,\xi_{-2})$ and (b) a compatible planar open book where it sits.}
\label{fig:Hopf2}
\end{figure}
Example 2 shows a non-loose Hopf link in $(\Sp^3,\xi_{-2})$. Here a $-1$-surgery on $L_1$ and a $-2$-surgery on $L_2$ give us $(\Sp^3,\xistd)$. See \cite{geiona} for details. We produce a compatible open book for this contact structure as follows: as before, we start with an annular open book supporting $(\Sp^3,\xistd)$ and use the well-known stabilization method. Notice that the right-handed Dehn twist coming from the annular open book again gets cancelled by the negative Dehn twist coming from one of the surgery curves. We perform right-handed Dehn twists along the green curves and one left-handed Dehn twist along the black dashed curves.
\end{proof}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A Weighted U Statistic for Association Analyses Considering Genetic
Heterogeneity}
\runtitle{Heterogeneity Weighted U}
\begin{aug}
\author{\fnms{Changshuai} \snm{Wei}\thanksref{t1,m1}\ead[label=e1]{[email protected]}},
\author{\fnms{Robert C.} \snm{Elston}\thanksref{t2,m2}\ead[label=e2]{[email protected]}}
\and
\author{\fnms{Qing} \snm{Lu}\thanksref{t3,m3}\ead[label=e3]{[email protected]}}
\thankstext{t1}{Assistant Professor of Biostatistics, Department of
Biostatistics and Epidemiology, University of North Texas Health Science Center, (Email: \href{mailto:[email protected]}{[email protected]}) }
\thankstext{t2}{Professor of Biostatistics, Department of Epidemiology
and Biostatistics, Case Western Reserve University. (Email: \href{mailto:[email protected]}{[email protected] })}
\thankstext{t3}{Corresponding Author, Associate Professor of Biostatistics,
Department of Epidemiology and Biostatistics, Michigan State University. (Email:
\href{mailto:[email protected]}{[email protected]})}
\runauthor{C. Wei et al.}
\affiliation{University of North Texas Health Science Center \thanksmark{m1}, Case Western Reserve University \thanksmark{m2} and Michigan State University\thanksmark{m3}}
\end{aug}
\begin{abstract}
Converging evidence suggests that common complex diseases with the
same or similar clinical manifestations could have different underlying
genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease
under investigation has a homogeneous genetic effect and could, therefore,
have low power if the disease undergoes heterogeneous pathophysiological
and etiological processes. In this paper, we propose a heterogeneity
weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of
phenotypes (e.g., binary and continuous) and is computationally efficient
for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology
of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide
analysis of nicotine dependence from the Study of Addiction: Genetics
and Environments (SAGE) dataset. The genome-wide analysis of nearly
one million genetic markers took 7 hours, identifying heterogeneous
effects of two new genes (i.e., \textit{CYP3A5} and \textit{IKBKB}) on nicotine dependence.
\end{abstract}
\begin{keyword}
\kwd{High-dimensional Data}
\kwd{Non-parametric Statistic}
\kwd{Nicotine Dependence}
\end{keyword}
\end{frontmatter}
\section{Introduction}
Benefiting from high-throughput technology and ever-decreasing genotyping
cost, large-scale genome-wide and sequencing studies have become commonplace
in biomedical research. From these large-scale studies, thousands
of genetic variants have been identified as associated with complex
human diseases, some with compelling biological plausibility for a
role in the disease pathophysiology and etiology. Despite such success,
for most complex diseases the identified genetic variants account
for only a small proportion of the heritability. While substantial
efforts have shifted toward finding rare variants, gene-gene/gene-environment
interactions, structural variations, and other genetic variants accounting
for the missing heritability [\cite{Eichler2010}], there is a considerable
lack of attention being paid to genetic heterogeneity in the analysis
of complex human diseases. We define genetic heterogeneity as a genetic
variant having different effects on individuals or on subgroups of
a population (e.g., gender and ethnic groups). For instance, the effect
size and the effect direction of the genetic variant can be different
according to the individuals' genetic background, personal/demographic
characteristics and/or the sub-phenotype groups they belong to.
Substantial evidence from a wide range of diseases suggests that complex
diseases are characterized by remarkable genetic heterogeneity [\cite{Thornton-Wells2004,McClellan2010,Galvan2010}].
Despite the strong evidence of genetic heterogeneity in human disease
etiology, investigating genetic variants with heterogeneous effects
remains a great challenge, primarily because: i) the commonly used
study designs (e.g. the case-control design) may not be optimal for
studying heterogeneous effects; ii) there is a lack of prior knowledge
that can be used to infer the latent population structure (i.e., heterogeneous
subgroups in the population); iii) replication studies are more challenging
and need to be carefully designed; and iv) computationally efficient
and flexible statistical methods for high-dimensional data analysis,
taking into account genetic heterogeneity, have not been well developed.
Most of the existing methods assume that the disease under investigation
is a unified phenotype with homogeneous genetic causes. When genetic
heterogeneity is present, the current methods will likely yield attenuated
estimates for the effects of genetic variants, leading to low power
of the study.
To account for genetic heterogeneity in association analyses, we propose
a heterogeneity weighted U, referred to as HWU. Because the new method
is based on a weighted U statistic, it assumes no specific distribution
of phenotypes; it can be applied to both qualitative and quantitative
phenotypes with various types of distributions. Moreover, HWU is computationally
efficient and has been implemented in a C++ package for high-dimensional
data analyses (\href{https://www.msu.edu/~changs18/software.html\#HWU}{https://www.msu.edu/$\sim$changs18/software.html\#HWU}).
\section{Method}
\subsection{Motivation from a Gaussian random effect model}
To motivate the idea of the heterogeneity weighted U, we first introduce
a Gaussian random effect model to test genetic association when considering
genetic heterogeneity. Assume the following random effect model,
\[
Y_{i}=\mu+g_{i}\beta_{i}+\varepsilon_{i},\varepsilon_{i}\sim N(0,\sigma^{2}),
\]
where $Y_{i}$ and $g_{i}$ represent the phenotype and the single-locus
genotype of individual $i$, respectively. $g_{i}$ can be coded as
0, 1, and 2 (i.e., the additive model), or 0 and 1 (e.g., the dominant/recessive
model); $\beta_{i}$ is normally distributed, $\beta_{i}\sim N(0,\sigma_{b}^{2})$,
and $\varepsilon_{i}$ is the iid random error. Let $\kappa_{i,j}$
represent the background similarity or the latent population structure
for individuals $i$ and $j$. We assume that the more similar two
individuals are, the more similar are their genetic effects, i.e.,
$cov(\beta_{i},\beta_{j})=\kappa_{i,j}\sigma_{b}^{2}$.
We define $\beta=(\beta_{1},\cdots,\beta_{n})^{T}$, $Y=(Y_{1},\cdots,Y_{n})^{T}$,
$\varepsilon=(\varepsilon_{1},\cdots,\varepsilon_{n})^{T}$, $G=\{diag(g_{1},\cdots,g_{n})\}_{n\times n}$,
and $K=\{\kappa_{i,j}\}_{n\times n}$. The model can then be written
as: $Y=\mu+G\beta+\varepsilon,\varepsilon\sim N(0,\sigma^{2}I),\beta\sim N(0,\sigma_{b}^{2}K)$.
We denote $\delta=G\beta$, and rewrite the model as: $Y=\mu+\delta+\varepsilon,\varepsilon\sim N(0,\sigma^{2}I),\delta\sim N(0,\sigma_{b}^{2}GKG)$.
A score test statistic can be formed to test the variance component
$\sigma_{b}^{2}=0$,
\[
T=\tilde{Y}^{T}GKG\tilde{Y},
\]
where $\tilde{Y}_i=(Y_{i}-\mu)/\sigma$ is the standardized residual
under the null. We can partition the test statistic $T$ into two
parts, $T=\sum_{i\neq j}\kappa_{i,j}g_{i}g_{j}\tilde{Y}_{i}\tilde{Y}_{j}+\sum_{i=1}^{n}g_{i}^{2}\tilde{Y}_{i}^{2}$,
where the first summation is closely related to the weighted U statistic
introduced below.
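As a quick numerical check of this partition (a sketch with simulated data; all variable names here are ours, not from the paper), the matrix form of the score statistic equals the off-diagonal weighted-U part plus the diagonal part:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
g = rng.integers(0, 3, size=n).astype(float)   # additive genotype coding 0/1/2
x = rng.normal(size=n)                          # covariate driving the latent structure
K = np.exp(-(x[:, None] - x[None, :]) ** 2)     # background similarity kappa_ij (kappa_ii = 1)
y = rng.normal(size=n)
y_t = (y - y.mean()) / y.std()                  # standardized residuals under the null

G = np.diag(g)
T = y_t @ G @ K @ G @ y_t                       # score statistic T = Y~' G K G Y~

# Partition: off-diagonal weighted-U part plus diagonal part
W = K * np.outer(g, g)
off_diag = W.copy()
np.fill_diagonal(off_diag, 0.0)
part1 = y_t @ off_diag @ y_t                    # sum_{i != j} kappa_ij g_i g_j Y~_i Y~_j
part2 = np.sum(g ** 2 * y_t ** 2)               # sum_i g_i^2 Y~_i^2  (uses kappa_ii = 1)
assert np.isclose(T, part1 + part2)
```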
\subsection{Heterogeneity weighted U}
The Gaussian random effect model assumes a normal distribution. In
order to consider phenotypes with various distributions and modes
of inheritance, we develop a heterogeneity weighted U with rank-based
U kernels and flexible weight functions. We first order the subjects
according to their phenotypic values $Y_{i}$ and assign subject scores
based on their ranks, denoted by $R_{i}$, $i=1,\cdots,n$. When there
are ties in the sample, we assign the averaged rank. For example,
in a case-control study with $N_{0}$ controls ($Y_{i}=0$) and $N_{1}$
cases ($Y_{i}=1$), all the controls are assigned a score $(N_{0}+1)/2$.
The phenotypic similarity between subjects $i$ and $j$ can be defined
as,
\[
S_{i,j}=h(R_{i},R_{j}),
\]
where $h(\cdot,\cdot)$ is a degree-two, mean-zero, symmetric kernel
function (i.e., $h(R_{i},R_{j})=h(R_{j},R_{i})$ and $E_{F}(h(R_{i},R_{j}))=0$)
that satisfies the finite second moment condition, $E_{F}(h^{2}(R_{i},R_{j}))<\infty$,
and the degenerate kernel condition, $var(E(h(R_{i},R_{j})|R_{j}))=0$.
In this paper, we choose $h(R_{i},R_{j})=\sigma_{R}^{-2}(R_{i}-\mu_{R})(R_{j}-\mu_{R})$,
where $\mu_{R}=E(R)$ and $\sigma_{R}^{2}=var(R)$. Let $G_{i}=(g_{i,1},\cdots,g_{i,Q})$
denote the multiple genetic variants for individual $i$. We further
define a weight function to measure the genetic similarity under the
latent population structure $\kappa_{i,j}$,
\[
w_{i,j}=\kappa_{i,j}f(G_{i},G_{j}),
\]
where $f(G_{i},G_{j})$ represents the genetic similarity calculated
based on the genetic variants of interest. We can then form the heterogeneity
weighted U, referred to as HWU,
\[
U=2\sum_{1\leq i<j\leq n}w_{i,j}S_{i,j},
\]
to evaluate the association between
the phenotype and the genetic variants, considering the latent population
structure.
Thus HWU is a summation, over all pairs of individuals, of their phenotypic
similarities weighted by their genetic similarities. Under the null
hypothesis of no association, the phenotypic similarity is unrelated
to the genetic similarity. Because the phenotypic similarity has mean
0 (i.e., $E_{F}(h(R_{i},R_{j}))=0$), the expectation of HWU is 0. Under
the alternative, the phenotypic similarities should increase as the
genetic similarities increase. The positive phenotypic similarities
are more heavily weighted and the negative phenotypic similarities
are more lightly weighted, leading to a positive value of HWU under
the alternative.
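The statistic can be computed directly from the definitions above. The following sketch (our notation; it assumes a Gaussian-kernel $\kappa$ and the additive cross-product $f$) checks the matrix form against the brute-force double sum:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n = 80
g = rng.integers(0, 3, size=n).astype(float)        # genotypes
x = rng.normal(size=n)
kappa = np.exp(-(x[:, None] - x[None, :]) ** 2)     # latent-structure similarity
y = rng.normal(size=n)                              # phenotype

R = rankdata(y)                                     # averaged ranks for ties
phi = (R - R.mean()) / R.std()                      # so S_ij = phi_i * phi_j

W = kappa * np.outer(g, g)                          # w_ij = kappa_ij * f(g_i, g_j)
np.fill_diagonal(W, 0.0)
U = float(phi @ W @ phi)                            # matrix form of 2 * sum_{i<j} w_ij S_ij

# brute-force double sum agrees with the matrix form
U_loop = 2 * sum(W[i, j] * phi[i] * phi[j]
                 for i in range(n) for j in range(i + 1, n))
assert np.isclose(U, U_loop)
```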
\subsection{Asymptotic distribution of heterogeneity weighted U}
To assess the significance of the association, a permutation test
can be used to calculate a p-value for HWU. However, for high-dimensional
data, the permutation test could be computationally intensive. Therefore,
we derive the asymptotic distribution of HWU under the null hypothesis.
The asymptotic properties of the un-weighted U statistic (i.e., $w_{i,j}\equiv1$)
are well established [\cite{Hoeffding1948,Serfling1981}]. When the kernel
is non-degenerate ( $var(E(h(R_{i},R_{j})|R_{j}))>0$ ), the limiting
distribution is normal. When the kernel is degenerate ( $var(E(h(R_{i},R_{j})|R_{j}))=0$
), the limiting distribution is a sum of independent chi-square variables.
However, the limiting distribution of the weighted U statistic depends
on both the weight function and the kernel function [\cite{ONeil1993}].
Because non-normality also occurs for a non-degenerate kernel with
certain weight functions, we use the degenerate kernel for HWU to
obtain a unified form of limiting distribution, as shown in the following
derivation.
We first expand the kernel function $h(\cdot,\cdot)$ as the sum of
products of its eigenfunctions. Let $\{\alpha_{t}\}$ and $\{\varphi_{t}(\cdot)\}$
denote the eigenvalues and the corresponding ortho-normal eigenfunctions
of the kernel. We can write $h(\cdot,\cdot)$ as $h(R_{i},R_{j})=\sum_{t=1}^{\infty}\alpha_{t}\varphi_{t}(R_{i})\varphi_{t}(R_{j})$,
and the weighted U as,
\[
U=\sum_{i\neq j}w_{i,j}\sum_{t=1}^{\infty}\alpha_{t}\varphi_{t}(R_{i})\varphi_{t}(R_{j}),
\]
where
\[
E(\varphi_{t}(R_{i})\varphi_s(R_{j}))=\begin{cases}
1, & t=s\text{ and }i=j\\
0, & \text{otherwise. }
\end{cases}
\]
By exchanging the two summations ($\sum_{t=1}^{\infty}\alpha_{t}\sum_{i\neq j}w_{i,j}\varphi_{t}(R_{i})\varphi_{t}(R_{j})$),
the weighted U statistic is an infinite sum of quadratic forms and
can be approximated by a linear combination of chi-square random variables [\cite{Dewet1973,Shieh1994}].
Letting $W=\{w_{i,j}\}_{n\times n}$ be the weight matrix with all
diagonal elements equal to 0, the limiting distribution can be written
as
\[
U\sim\sum_{t=1}^{\infty}\alpha_{t}\sum_{s=1}^{n}\lambda_{s}(\chi_{1,ts}^{2}-1),
\]
where $\{\lambda_{s}\}$ are the eigenvalues of the weight matrix
and $\{\chi_{1,ts}^{2}\}$ are iid chi-square random variables with
1 df. In this paper, we use a cross product kernel, $h(R_{i},R_{j})=\sigma_{R}^{-2}(R_{i}-\mu_{R})(R_{j}-\mu_{R})$.
In this case, the expansion of $h(\cdot,\cdot)$ can be simplified
to $h(R_{i},R_{j})=\alpha_{1}\varphi_{1}(R_{i})\varphi_{1}(R_{j})$,
where $\alpha_{1}=1$ and $\varphi_{1}(R)=\sigma_{R}^{-1}(R-\mu_{R})$.
Using this representation and the fact that $\sum_{s=1}^{n}\lambda_{s}=0$,
the limiting distribution can be simplified to $U\sim\sum_{s=1}^{n}\lambda_{s}\chi_{1,s}^{2}$.
We also note that the parameters $\mu_{R}$ and $\sigma_{R}^{2}$
are unknown and need to be estimated from the data, which influences
the limiting distribution of HWU [\cite{Dewet1987,Shieh1997}]. Taking
the parameter estimation into account, the limiting distribution can
be expressed as a weighted sum of independent chi-squared variables,
$U\sim\sum_{s=1}^{n}\lambda_{1,s}\chi_{1,s}^{2}$ (Appendix \ref{app}), where $\{\lambda_{1,s}\}$ are the eigenvalues of the matrix $(I-J)W(I-J)$,
in which $I$ is the identity matrix and $J$ is a matrix with all
elements equal to $1/n$.
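The eigenvalue computation above can be sketched numerically. The paper evaluates the tail probability of the chi-square mixture with Davies' exact method; as a simpler illustration, the sketch below (function and variable names are ours) approximates it by Monte Carlo:

```python
import numpy as np

def hwu_null_pvalue(U, W, n_draws=20_000, seed=2):
    """Approximate P(sum_s lambda_{1,s} * chi2_1 >= U) by Monte Carlo,
    where lambda_{1,s} are the eigenvalues of (I - J) W (I - J)."""
    n = W.shape[0]
    I = np.eye(n)
    J = np.full((n, n), 1.0 / n)
    M = (I - J) @ W @ (I - J)                  # W symmetric => M symmetric
    lam = np.linalg.eigvalsh(M)                # eigenvalues lambda_{1,s}
    rng = np.random.default_rng(seed)
    draws = rng.chisquare(1, size=(n_draws, n)) @ lam
    return float(np.mean(draws >= U))

# toy usage with a random symmetric zero-diagonal weight matrix
rng = np.random.default_rng(3)
n = 60
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
p = hwu_null_pvalue(5.0, W)
assert 0.0 <= p <= 1.0
```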
The HWU described above can also be modified to allow for covariate
adjustment. Suppose $Z_{n\times p}=(1,z_{1},\cdots,z_{p})$ is the
covariate matrix. In the cross product kernel of HWU, we can calculate
the estimators of $\mu_{R}$ and $\sigma_{R}^{2}$ as $\hat{\mu}_{R}=PR$
and $\hat{\sigma}_{R}^{2}=(R-\hat{\mu}_{R})^{T}(R-\hat{\mu}_{R})/(n-p-1)$,
where $P=Z(Z^{T}Z)^{-1}Z^{T}$. The limiting distribution can then
be written as $U\sim\sum_{s=1}^{n}\lambda_{1,s}^{*}\chi_{1,s}^{2}$,
where $\{\lambda_{1,s}^{*}\}$ are the eigenvalues of the matrix
$(I-P)W(I-P)$.
Davies\textquoteright{} method [\cite{Davies1980}] can be used to calculate
the p-value for the association test. When the calculation involves
large matrix eigen-decomposition, we use the state-of-the-art algorithm
nu-TRLan [\cite{Wu2000}] to improve the computational efficiency.
\subsection{Weighting schemes}
The weight function comprises two components, $\kappa_{i,j}$ and
$f(G_{i},G_{j})$. $\kappa_{i,j}$ measures the latent population
structure, which could be inferred from related covariates. Depending
on the type of data, different functions can be used to calculate
$\kappa_{i,j}$. For instance, we can apply the genome-wide averaged
IBS function on GWAS data and the genome-wide weighted average IBS
(WIBS) function on sequencing data to calculate $\kappa_{i,j}$ [\cite{Astle2009}].
For environmental covariates, we can calculate $\kappa_{i,j}$ based
on Euclidean distance [\cite{Jiang2011}]. Given environmental covariates,
we first standardize each covariate according to its mean and standard
deviation, denoted by $x_{d}$ ($x_{d}=(x_{d,1},\cdots,x_{d,n})^{T}$
, $d=1,2,\cdots,D$), and then calculate $\kappa_{i,j}=\exp(-(x_{i}-x_{j})R(x_{i}-x_{j})^{T})$,
where $R$ is used to reflect the relative importance (e.g., $R=\{diag(\omega_{d})\}_{D\times D}$
in which $\omega_{d}$ measures the importance) or inner correlation
(e.g., $R=(\frac{1}{n}\sum_{i=1}^{n}x_{i}^{T}x_{i})^{-1}$) of the
covariates.
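A sketch of this similarity computation (the function name is ours; with the default $R=I$ the exponent reduces to the squared Euclidean distance of the standardized covariates):

```python
import numpy as np

def kappa_euclidean(X, Rmat=None):
    """kappa_ij = exp(-(x_i - x_j) R (x_i - x_j)^T) for covariates X (n x D)."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each covariate
    if Rmat is None:
        Rmat = np.eye(X.shape[1])                 # R = I: plain squared Euclidean distance
    diff = X[:, None, :] - X[None, :, :]          # (n, n, D) pairwise differences
    quad = np.einsum('ijd,de,ije->ij', diff, Rmat, diff)
    return np.exp(-quad)

X = np.random.default_rng(4).normal(size=(30, 3))
K = kappa_euclidean(X)
assert np.allclose(K, K.T)                        # symmetric
assert np.allclose(np.diag(K), 1.0)               # self-similarity is 1
```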
$f(G_{i},G_{j})$ measures the genetic similarity. For a single-locus
model, we can use the cross product $f(G_{i},G_{j})=f(g_{i},g_{j})=g_{i}g_{j}$
when the effect is additive. Otherwise, we can use $f(G_{i},G_{j})=f(g_{i},g_{j})=1(g_{i}=g_{j})$
for an unspecified mode of inheritance, where $1(\cdot)$ is the indicator
function. The above measurements can be easily extended to handle
$Q$ multiple markers by using $f(G_{i},G_{j})=\sum_{q=1}^{Q}g_{q,i}g_{q,j}$.
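These genetic-similarity choices can be sketched as follows (function names are ours); note that the multi-marker cross product is just the Gram matrix of the genotype matrix:

```python
import numpy as np

def f_additive(g):          # cross product, single locus, additive effect
    return np.outer(g, g)

def f_indicator(g):         # 1(g_i == g_j), unspecified mode of inheritance
    return (g[:, None] == g[None, :]).astype(float)

def f_multilocus(G):        # sum over Q markers: sum_q g_{q,i} g_{q,j}
    return G @ G.T          # rows of G are individuals, columns are markers

g = np.array([0.0, 1.0, 2.0, 1.0])
G = np.column_stack([g, 2 - g])
# Gram matrix decomposes into per-marker cross products
assert np.allclose(f_multilocus(G), f_additive(g) + f_additive(2 - g))
assert f_indicator(g)[1, 3] == 1.0 and f_indicator(g)[0, 2] == 0.0
```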
The weight function $w_{i,j}$ can also be specified for different
purposes. For instance, if we choose $w_{i,j}=f(G_{i},G_{j})$ (i.e.,
$\kappa_{i,j}\equiv1$), then the weighted U tests the association
without consideration of genetic heterogeneity. We refer to this statistic
as the non-heterogeneity weighted U (NHWU). Furthermore, we can construct
a statistic to test the presence of the heterogeneity effect, referred
to as the pure-heterogeneity weighted U (PHWU), by setting $w_{i,j}^{*}=(\kappa_{i,j}-\bar{\kappa})f(G_{i},G_{j})$,
where $\bar{\kappa}=\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\kappa_{i,j}$.
\section{Results}
\subsection{Simulations}
In simulation I and simulation II, we simulated various cases of genetic
heterogeneity and compared the proposed HWU test with two other tests,
NHWU and the likelihood ratio test using the conventional generalized
linear model (GLM). In simulation III, we investigated the robustness
of HWU to non-normal distributions and mis-specified weight functions.
In all sets of simulations, unless otherwise specified we used Euclidean-distance-based
$\kappa_{i,j}$ by setting $R=I$ and cross-product-based $f(g_{i},g_{j})$
to form the weight function. For each simulation setting, we simulated
1000 replicate datasets, each having a sample size of 1000. The power
and type I error of the methods were calculated as the proportion
of p-values in the 1000 replicates smaller than or equal to 0.05.
\subsubsection{Simulation I}
In this simulation, we assumed two sub-populations, and considered
both continuous and binary phenotypes. We simulated binary phenotypes
using the logistic model,
\[
logit(P(y_{i,j(i)}=1))=\mu+g_{i,j(i)}\beta_{i},
\]
where $i$ and $j(i)$ denote the $i$-th sub-population and the
$j$-th individual in the $i$-th sub-population, respectively.
Additionally, we introduced a covariate $x_{i,j(i)}=a_{i}+\delta_{i,j(i)}$,
$\delta_{i,j(i)}\sim N(0,\sigma_{c}^{2})$, from which we infer the
latent sub-populations. Continuous phenotypes were simulated similarly
by using a linear regression model. The values of the regression coefficient
$\beta_{i}$ for the different models are listed in Table \ref{Table1}, while the
details of the simulation are described in Supplementary Appendix A.
No substantial inflation of type I error was detected for any of the
three methods (Table \ref{Table1}). In the presence of genetic heterogeneity
(i.e., T1, T3, and T4 in Table \ref{Table1}), HWU outperformed NHWU
and GLM, especially when the genetic effects for the two sub-populations
were in the opposite direction (i.e., T1). In such a case, NHWU and
GLM could barely detect any genetic effect, while HWU had high statistical
power to detect the association. In the absence of genetic heterogeneity
(i.e., T2 in Table \ref{Table1}), HWU remained comparable in performance
to NHWU and GLM. We also noted that, in the absence of genetic heterogeneity
(i.e., T2), the non-parametric NHWU had almost identical power to
GLM.
\begin{table}
\caption{Type I error and power comparison of three methods when there are
two heterogeneous sub-populations}
\label{Table1}
\begin{center}
\begin{tabular}{ccccccccccc}
\hline
\multirow{3}{*}{Model\textsuperscript{1}} & \multicolumn{5}{c}{Binary Phenotype} & \multicolumn{5}{c}{Continuous Phenotype}\tabularnewline
\cline{2-11}
& \multicolumn{2}{c}{Effect\textsuperscript{2}} & \multicolumn{3}{c}{Type I error/Power} & \multicolumn{2}{c}{Effect} & \multicolumn{3}{c}{Type I error/Power}\tabularnewline
\cline{2-11}
& $\beta_{1}$ & $\beta_{2}$ & HWU & NHWU & GLM & $\beta_{1}$ & $\beta_{2}$ & HWU & NHWU & GLM\tabularnewline
\hline
\hline
Null & 0 & 0 & 0.046 & 0.051 & 0.051 & 0 & 0 & 0.052 & 0.049 & 0.049\tabularnewline
\hline
T1 & -0.1 & 0.1 & 0.085 & 0.07 & 0.07 & -0.1 & 0.1 & 0.241 & 0.062 & 0.07\tabularnewline
& -0.3 & 0.3 & 0.455 & 0.069 & 0.068 & -0.3 & 0.3 & 0.972 & 0.157 & 0.173\tabularnewline
& -0.5 & 0.5 & 0.899 & 0.084 & 0.084 & -0.5 & 0.5 & 1 & 0.376 & 0.417\tabularnewline
\hline
T2 & 0.1 & 0.1 & 0.122 & 0.149 & 0.15 & 0.1 & 0.1 & 0.284 & 0.368 & 0.384\tabularnewline
& 0.3 & 0.3 & 0.601 & 0.743 & 0.75 & 0.3 & 0.3 & 0.999 & 1 & 1\tabularnewline
& 0.5 & 0.5 & 0.978 & 0.993 & 0.993 & 0.5 & 0.5 & 1 & 1 & 1\tabularnewline
\hline
T3 & 0 & 0.2 & 0.107 & 0.102 & 0.102 & 0 & 0.2 & 0.32 & 0.273 & 0.282\tabularnewline
& 0 & 0.4 & 0.378 & 0.307 & 0.31 & 0 & 0.4 & 0.902 & 0.767 & 0.803\tabularnewline
& 0 & 0.6 & 0.718 & 0.582 & 0.586 & 0 & 0.6 & 0.999 & 0.978 & 0.984\tabularnewline
\hline
T4 & -0.1 & 0.3 & 0.239 & 0.091 & 0.092 & -0.1 & 0.3 & 0.718 & 0.161 & 0.18\tabularnewline
& 0.1 & 0.3 & 0.304 & 0.377 & 0.381 & 0.1 & 0.3 & 0.822 & 0.89 & 0.906\tabularnewline
& -0.3 & 0.5 & 0.72 & 0.069 & 0.069 & -0.3 & 0.5 & 0.997 & 0.059 & 0.076\tabularnewline
\hline
\end{tabular}
\end{center}
\textsuperscript{1}Various scenarios of heterogeneity were considered
in the simulation: no genetic effect in either sub-population
(Null), the same effect size in opposite directions (T1), the
same effect size in the same direction (T2), a genetic effect in
one sub-population but not the other (T3),
and different effect sizes in the same or opposite directions (T4).
\textsuperscript{2}Single-locus effects for the two sub-populations,
where $\beta_{1}$ denotes the effect for sub-population 1
and $\beta_{2}$ the effect for sub-population 2.
\end{table}
We also investigated the performance of the three methods when the
underlying phenotype distribution and the modes of inheritance were
unknown (Supplementary Simulation I). Overall, HWU outperformed the
other two methods. In particular, when the phenotype was non-normal,
both HWU and NHWU had higher power than GLM (Supplementary Table S1).
By using $f(g_{i},g_{j})=1(g_{i}=g_{j})$, HWU was robust to the disease
model when the mode of inheritance was unknown, e.g., a heterozygote
effect (Supplementary Tables S2 and S3).
\begin{figure}
\caption{Power comparison of three methods for a binary phenotype with 20 heterogeneous
sub-populations}
\label{Fig1}
\end{figure}
\begin{figure}
\caption{Power comparison of three methods for a continuous phenotype with
20 heterogeneous sub-populations }
\label{Fig2}
\end{figure}
\subsubsection{Simulation II}
In simulation II, we used the same simulation model as in simulation
I, but considered a more complicated latent population structure by
increasing the number of sub-populations to 20, and sampling $\beta_{i}$
($i=1,2,\cdots,20$) from a uniform distribution with mean $\mu_{\beta}$
and variance $\sigma_{\beta}^{2}$. We simulated 25 covariates, $x_{i,j(i)}^{d}=a_{i}^{d}+\delta_{i,j(i)}$,
$\delta_{i,j(i)}\sim N(0,\sigma_{c}^{2})$ ($d=1,2,\cdots,25$),
to generate the latent population structure (Supplementary Appendix
B). No substantial inflation of type I error was detected for any
of the three methods at the 0.05 level (Supplementary Table S4). Through
simulation, we demonstrated that HWU outperformed NHWU and GLM for
both binary (Figure \ref{Fig1}) and continuous (Figure \ref{Fig2})
phenotypes. In the presence of genetic heterogeneity (i.e., when $\sigma_{\beta}/\mu_{\beta}$
is large), HWU attained higher power than NHWU and GLM. When the genetic
heterogeneity was negligible (i.e., when $\sigma_{\beta}/\mu_{\beta}$
is small), HWU had comparable performance to NHWU and GLM. When the
average genetic effect ($\mu_{\beta}$) increased, all three methods
gained power. Nevertheless, when the variance of the genetic effect
($\sigma_{\beta}$) increased, only HWU gained a substantial increase
in power. We also investigated the performance of HWU when the covariates
could not accurately infer the latent population structure. For this
purpose, we examined the power of HWU as the noise parameter $\sigma_{c}^{2}$
varied. The results showed that the power of HWU decreased as the
\textquotedblleft{}noise\textquotedblright{} increased
(Supplementary Table S5).
In practice, the nature of the latent population structure may not
be \textquotedblleft{}categorical\textquotedblright{}. Therefore,
we also simulated genetic effects using a random effect model, where
effects were different for each subject (Supplementary Simulation
II). The three methods had comparable power when the genetic heterogeneity
was negligible. Nevertheless, as the genetic heterogeneity increased,
there was a clear advantage of HWU over NHWU and GLM (Supplementary
Figures S1 and S2).
\subsubsection{Simulation III}
In simulation III, we first investigated the robustness of HWU against
different non-normal phenotype distributions. In order to separate the influence of heterogeneity and phenotype distribution, we compared HWU with its \textquotedblleft{}parametric alternative\textquotedblright{},
the variance component score test (VCscore), instead of GLM. We simulated the phenotype
using a random effect model,
\[
y_{i}=\mu+Z_{i}\alpha+g_{i}\beta_{i}+\varepsilon_{i},\varepsilon_{i}\sim F,
\]
where $Z_{i}$ denotes covariates for subject $i$, $\alpha$ denotes
covariate effects and $F$ followed a non-normal distribution (Supplementary Appendix C). We
simulated three types of non-normal distribution for $F$: 1) the t distribution
with $df=2$, 2) the Cauchy distribution, and 3) a mixture of normal and
chi-squared distributions. For each distribution, we simulated models
with and without confounding effects, where a confounding
effect was induced by generating $Z_{i}$ correlated with $g_{i}$; $Z_{i}$ was also correlated with $y_i$ since $\alpha \neq 0$.
We included $Z$ in the analysis for both HWU and VCscore, and summarize
the Type I errors in Table \ref{Table2}. No substantial inflation
of type I error was detected for HWU under any of the three non-normal distributions,
regardless of whether there were confounding effects. VCscore was robust
against the mixture of normal and chi-squared distributions, but had inflated
type I error for heavy-tailed distributions (e.g., the Cauchy distribution).
If we did not include $Z$ in the analysis, both methods showed inflated
type I error when there were confounding effects (Supplementary Table
S6). Further investigation of power showed a slight
advantage of HWU over VCscore for non-normal distributions (Supplementary
Table S7).
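The three error distributions used in this simulation can be generated as follows (a sketch; the mixture parameters follow the note under Table \ref{Table2}, and the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_errors(dist, size):
    """Draw errors from the three non-normal distributions of simulation III:
    t with df=2, Cauchy, and the mixture a*chi2_1 + (1-a)*N(5,1), a ~ Bernoulli(0.6)."""
    if dist == "t2":
        return rng.standard_t(df=2, size=size)
    if dist == "cauchy":
        return rng.standard_cauchy(size=size)
    if dist == "mixture":
        a = rng.binomial(1, 0.6, size=size)
        return np.where(a == 1,
                        rng.chisquare(1, size=size),
                        rng.normal(5.0, 1.0, size=size))
    raise ValueError(f"unknown distribution: {dist}")

eps = draw_errors("mixture", 10_000)
```

The Cauchy draws have no finite mean, which is exactly the regime where a parametric score test can misbehave while a rank-like weighted U remains stable.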
\begin{table}
\caption{Type I error comparisons of HWU and VCscore under non-normal distributions}
\label{Table2}
\begin{center}
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{Confounding Effect} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Distribution}\tabularnewline
\cline{3-5}
& & Mixture & $t_{df=2}$ & Cauchy\tabularnewline
\hline
\hline
\multirow{2}{*}{No} & HWU & 0.041 & 0.057 & 0.057\tabularnewline
\cline{2-5}
& VCscore & 0.039 & 0.070 & 0.095\tabularnewline
\hline
\multirow{2}{*}{Yes} & HWU & 0.053 & 0.060 & 0.062\tabularnewline
\cline{2-5}
& VCscore & 0.059 & 0.146 & 0.250\tabularnewline
\hline
\end{tabular}
\end{center}
{*}the mixture distribution follows $a\chi_{df=1}^{2}+(1-a)N(5,1)$,
where $a\sim Bernoulli(0.6)$.
\end{table}
\begin{table}
\caption{Performance of HWU with a mis-specified weight function}
\label{Table2_1}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{Component} & \multicolumn{2}{c}{Mis-specification\textsuperscript{1}} & \multirow{2}{*}{Model\textsuperscript{2}} & \multicolumn{2}{c}{Method\textsuperscript{3}}\tabularnewline
\cline{2-3} \cline{5-6}
& Mis & True & & HWU(mis) & HWU(true)\tabularnewline
\hline
\hline
\multirow{4}{*}{$f(g_{i},g_{j})$ } & \multirow{2}{*}{$\mathfrak{D}(g_{i},g_{j})$ } & \multirow{2}{*}{$g_{i}g_{j}$} & Null & 0.047 & 0.049\tabularnewline
& & & Alt & 0.459 & 0.481\tabularnewline
\cline{2-6}
& \multirow{2}{*}{$1(g_{i}=g_{j})$ } & \multirow{2}{*}{$g_{i}g_{j}$} & Null & 0.048 & 0.052\tabularnewline
& & & Alt & 0.451 & 0.51\tabularnewline
\hline
\multirow{4}{*}{$\kappa_{i,j}$} & \multirow{2}{*}{$\frac{1}{D}x_{i}x_{j}^{T}$} & \multirow{2}{*}{$\mathfrak{D}(x_{i},x_{j})$} & Null & 0.05 & 0.058\tabularnewline
& & & Alt & 0.174 & 0.505\tabularnewline
\cline{2-6}
& \multirow{2}{*}{$\mathfrak{D}(x_{i},x_{j})$} & \multirow{2}{*}{$\frac{1}{D}x_{i}x_{j}^{T}$} & Null & 0.046 & 0.053\tabularnewline
& & & Alt & 0.072 & 0.515\tabularnewline
\hline
\end{tabular}
\textsuperscript{1}\textquotedblleft{}Mis\textquotedblright{} represents
the mis-specified $f(g_{i},g_{j})$ or $\kappa_{i,j}$ used when analyzing
the simulated data, while \textquotedblleft{}True\textquotedblright{}
represents the true $f(g_{i},g_{j})$ or $\kappa_{i,j}$ in the corresponding
simulation setting. Here, $\mathfrak{D}(\cdot,\cdot)$ represents
the Euclidean-distance-based weight, i.e., $\mathfrak{D}(g_{i},g_{j})=\exp(-(g_{i}-g_{j})^{2})$
and $\mathfrak{D}(x_{i},x_{j})=\exp(-\frac{1}{D}\sum_{d=1}^{D}(x_{d,i}-x_{d,j})^{2})$.
\textsuperscript{2}The error distribution was set as $t$ distribution with $df=2$. \textquotedblleft{}Null\textquotedblright{}
represents the null model with $\mu_{\beta}=0$ and $\sigma_{\beta}^{2}=0$;
\textquotedblleft{}Alt\textquotedblright{} represents the heterogeneous
effect model with $\mu_{\beta}=0$ and $\sigma_{\beta}^{2}=0.5$.
\textsuperscript{3}HWU(mis) represents the HWU model with a mis-specified
weight function, while HWU(true) represents the HWU model with the true
weight function.
\end{table}
We also investigated the performance of HWU when the weight function
was mis-specified (Table \ref{Table2_1}). In this simulation,
we considered 4 different scenarios, either with mis-specified $f(g_{i},g_{j})$
or mis-specified $\kappa_{i,j}$. Type I error rates were well controlled
when the weight function was mis-specified. However, we found the
power of HWU with a mis-specified weight function was lower than that
with a correct weight function, especially when $\kappa_{i,j}$ was
mis-specified (Table \ref{Table2_1}).
\subsection{Genome-wide association analysis of Nicotine Dependence}
We applied our methods to the Genome-wide association study (GWAS)
dataset from the Study of Addiction: Genetics and Environments (SAGE).
The SAGE is one of the largest and most comprehensive case-control
studies conducted to date aimed at discovering new genetic variants
contributing to addiction. We analyzed the number of cigarettes smoked
per day, categorized into 4 classes (0 for less than 10 cigarettes,
1 for 11-to-20 cigarettes, 2 for 21-to-30 cigarettes, and 3 for more
than 31 cigarettes). Prior to the statistical analysis, we reassessed
the quality of the genotype data. After undertaking a careful quality
control process (i.e., removing samples with missing phenotype data
and low-quality genetic markers), 2845 subjects and 949,658 single-nucleotide
polymorphisms (SNPs) remained for the analysis. The SAGE comprises
samples from both Caucasian and African-American populations. To make
the association analysis robust against confounding effects, we adjusted
for the first 20 principal components from the available genome-wide
genetic markers, as well as gender and race, in the analysis.
\begin{table}
\caption{Top 10 nicotine dependence associated SNPs from the GWAS analysis
considering gender heterogeneity }
\label{Table3}
\begin{center}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{Name} & \multirow{2}{*}{Chr} & \multirow{2}{*}{Position} & \multirow{2}{*}{Gene} & \multicolumn{2}{c}{p-value}\tabularnewline
\cline{5-6}
& & & & HWU & NHWU\tabularnewline
\hline
\hline
rs17078660 & 3 & 46160432 & NA, near \textit{FLT1P1} & $9.98\times10^{-9}$ & 0.017\tabularnewline
MitoA15302G & 26 & 15302 & NA & $1.69\times10^{-8}$ & 0.688\tabularnewline
rs7753843 & 6 & 67055504 & NA & $1.86\times10^{-8}$ & 0.571\tabularnewline
rs10493279 & 1 & 60368804 & NA, near \textit{C1orf87} & $1.88\times10^{-8}$ & 0.014\tabularnewline
rs4560769 & 8 & 42259961 & \textit{IKBKB} & $1.93\times10^{-8}$ & 0.062\tabularnewline
rs9694958 & 8 & 42275203 & \textit{IKBKB} & $2.82\times10^{-8}$ & 0.122\tabularnewline
rs776746 & 7 & 99108475 & \textit{CYP3A5} & $2.91\times10^{-8}$ & 0.241\tabularnewline
rs4646437 & 7 & 99203019 & \textit{CYP3A4} & $4.03\times10^{-8}$ & 0.512\tabularnewline
rs9694574 & 8 & 42279609 & \textit{IKBKB} & $4.63\times10^{-8}$ & 0.144\tabularnewline
rs4646457 & 7 & 99083016 & \textit{ZSCAN25} & $4.74\times10^{-8}$ & 0.138\tabularnewline
\hline
\end{tabular}
\end{center}
\end{table}
Considering that the etiology of nicotine dependence has been shown
to be heterogeneous with respect to gender [\cite{Li2003}], we used gender to infer
the latent population structure and assumed an additive effect when
computing the genetic similarity. Using HWU, the genome-wide scan of 949,658 SNPs on the
SAGE dataset was completed in about 7 hours by parallel computation
on 19 cores. The top 10 SNPs having the strongest association with
nicotine dependence are listed in Table \ref{Table3}. Among the 10
SNPs, 3 SNPs (i.e., rs4560769, rs9694574, and rs9694958) are located
within the gene \textit{IKBKB}, while another 3 SNPs (i.e., rs4646437, rs4646457,
rs776746) are located within or near the gene \textit{CYP3A5}. The 3 SNPs related
to gene \textit{IKBKB} are in high linkage disequilibrium (LD), with the estimated
correlation ranging from 0.736 to 0.853. The highest association signal
was from rs4560769 (p-value$=1.93\times10^{-8}$). The 3 SNPs related
to gene \textit{CYP3A5} were also in high LD (correlation from 0.781 to 0.913),
among which rs776746 had the strongest association with nicotine
dependence (p-value$=2.91\times10^{-8}$). To evaluate the sensitivity
of the results, we performed association tests using other weight
functions (Table \ref{Table3}). Using a homogeneity weight $w_{i,j}=g_ig_j$ (NHWU), none
of the 10 SNPs had a p-value smaller than 0.01. The difference between
HWU and NHWU indicated heterogeneous effects of the two genes on nicotine
dependence in males and females. Additional stratified analysis by
analyzing males and females separately also suggested this heterogeneous
effect of the two genes in males and females (Supplementary Real Data
Analysis). In addition to gender, we also investigated potential genetic
heterogeneity due to different ethnic and genetic backgrounds. In
these analyses, we considered the same covariates as those used in
the gender heterogeneity analysis. However, the results suggested
there was no strong evidence of genetic heterogeneity due to different
ethnic and genetic backgrounds (Supplementary Real Data Analysis).
\section{Discussion}
In recent years, U-statistic based methods have been gaining popularity
in genetic association studies due to their robustness and flexibility
[\cite{Schaid2005,Zhang2010}]. Yet, few methods have been developed
to model genetic heterogeneity, especially under the weighted U framework.
In this paper, we have proposed a flexible and computationally efficient
method, HWU, for high-dimensional genetic association analyses allowing
for genetic heterogeneity. With HWU, we were able to integrate the
latent population structure (inferred from genetic background or environmental
covariates) into a weight function and test heterogeneous effects
without stratifying the sample. Simulation studies were conducted
to compare the power of the proposed HWU method with methods that
do not model genetic heterogeneity (i.e., NHWU and GLM). In the presence
of genetic heterogeneity, HWU attained higher power than NHWU and
GLM. In the absence of genetic heterogeneity, HWU still had comparable
performance to NHWU and GLM. Unlike conventional methods, such as
GLM, our method was developed based on a nonparametric U statistic,
and therefore offers robust performance when the underlying phenotype
distribution and mode of inheritance are unknown.
In HWU, we use genome profiles or environmental covariates to build
the background similarity (i.e., the latent population structure $\kappa_{i,j}$
) and combine it with the genetic similarity to form the weight function
$f(G_{i},G_{j})$. We then evaluate its relationship with a phenotype
by using a weighted U statistic. Our method is different from testing
an interaction effect. The key difference is that, for HWU, we assume
there is a latent population structure that acts in some joint fashion
with the genetic variants, while in the usual interaction effect model
the genetic variants are assumed to interact with known variables.
Furthermore, our test has fewer degrees of freedom than usual interaction
tests. HWU is based on the idea that the more similar two subjects
are, the more similar are their genetic effects. The idea of relating
phenotype similarity to genotype similarity is not new. For example,
Tzeng et al. proposed a gene-trait similarity regression for multi-locus
association analysis [\cite{Tzeng2007}]. However, their method is based
on the usual regression framework and does not consider genetic heterogeneity.
In this paper, we focus on a single-locus test with consideration
of genetic heterogeneity and assume an additive model. By modifying
the weight function, HWU can easily be extended to model a multi-locus
effect and other modes of inheritance (e.g., dominant/recessive effects).
The weight function also offers flexibility for constructing latent
population structure. Various similarity-based or distance-based functions
can be applied to informative environmental and genetic covariates
to infer the latent population structure. Although type I error is
generally controlled for a variety of weight functions, the choice
of an appropriate function to construct the latent population structure
could impact the power of HWU. In this article, we suggest a Euclidean-distance-based
function, $\kappa_{i,j}=\exp(-(x_{i}-x_{j})R(x_{i}-x_{j})^{T})$,
in which prior knowledge can be incorporated for potential power improvement.
Nevertheless, a cross product kernel (i.e., $\kappa_{i,j}=\frac{1}{D}x_{i}x_{j}^{T}$)
can also be used if the underlying model favors linearity. In the
scenario where multiple functions might be used to construct the latent
population structure, the optimal function could be chosen by using
a similar approach to that proposed by \cite{Lee2012}.
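As an illustration, the two choices of $\kappa_{i,j}$ discussed above can be sketched as follows (assuming $R=\frac{1}{D}I$ as a default when no prior knowledge is available; the function names are illustrative):

```python
import numpy as np

def distance_kernel(X, R=None):
    """Euclidean-distance-based similarity
    kappa_{i,j} = exp(-(x_i - x_j) R (x_i - x_j)^T) for the rows of X (n x D)."""
    n, D = X.shape
    if R is None:
        R = np.eye(D) / D                        # unweighted default (assumption)
    diff = X[:, None, :] - X[None, :, :]         # pairwise differences, shape (n, n, D)
    return np.exp(-np.einsum('ijd,de,ije->ij', diff, R, diff))

def cross_product_kernel(X):
    """Linear alternative kappa_{i,j} = (1/D) x_i x_j^T."""
    return X @ X.T / X.shape[1]

X = np.random.default_rng(2).standard_normal((5, 3))
K = distance_kernel(X)
L = cross_product_kernel(X)
```

Both constructions return a symmetric $n\times n$ similarity matrix; the distance kernel additionally puts each subject's self-similarity at 1.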
Another advantage of our method is its computational efficiency. For
the analysis of high-dimensional data, we derived the asymptotic distribution
of the weighted U statistic and optimized the computational algorithm
(e.g. using efficient eigen-decomposition). The genome-wide analysis
of 949,658 SNPs took 7 hours and identified two genes, \textit{IKBKB} and \textit{CYP3A5}.
Although our analysis suggests that these two genes are associated
with nicotine dependence and have heterogeneous effects according
to gender, further study and biological experiments are needed to
confirm the association and to further investigate the potential function
of these two genes in nicotine dependence.
\appendix
\section{}
\subsection{Asymptotic distribution of HWU with parameter estimation}\label{app}
As shown in the main text, the limiting distribution of the weighted
U statistic with a cross-product kernel can be simplified to $U\sim\sum_{s=1}^{n}\lambda_{s}\chi_{1,s}^{2}$.
Taking the parameter estimation into account {[}\cite{Dewet1987,Shieh1997}{]},
the limiting distribution becomes:
\[
U\sim\sum_{s=1}^{n}\lambda_{s}(\phi_{s}+c_{s}\phi_{0})^{2},
\]
where $\{\lambda_{s}\}$ are the eigenvalues from the eigen-decomposition
of $W=B\Lambda B^{T}$, in which $\Lambda=\{diag(\lambda_{s})\}_{n\times n}$
and $B=\{b_{i,j}\}_{n\times n}$. $\{\phi_{s}\}$ are i.i.d. standard
normal random variables, and $\phi_{0}$ is also a standard normal random
variable with $cov(\phi_{s},\phi_{0})=c_{s}$, where
$c_{s}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}b_{i,s}$.
Let $\gamma=(\phi_{1}+c_{1}\phi_{0},\cdots,\phi_{n}+c_{n}\phi_{0})$
be a random vector with $\gamma\sim MVN(0,\Sigma_{n\times n})$. Letting
$I$ be the $n\times n$ identity matrix and $J$ be the $n\times n$
matrix with all elements equal to $1/n$, we can easily show that
$\Sigma=I-B^{T}JB$ and $\Sigma\Sigma=\Sigma$. Letting $\xi$ be
a random vector, $\xi\sim MVN(0,I_{n\times n})$, we have $\Sigma\xi\sim MVN(0,\Sigma)$
and
\begin{align*}
\sum_{s=1}^{n}\lambda_{s}(\phi_{s}+c_{s}\phi_{0})^{2} & =\gamma^{T}\Lambda\gamma\\
& =\xi^{T}\Sigma\Lambda\Sigma\xi\\
& =\xi^{T}B^{T}(B\Sigma B^{T})B\Lambda B^{T}(B\Sigma B^{T})B\xi\\
& =\xi^{T}B^{T}(I-J)W(I-J)B\xi.
\end{align*}
Because $B\xi\sim MVN(0,I_{n\times n})$, the limiting distribution
of the weighted U is the weighted sum of independent chi-squares,
$U\sim\sum_{s=1}^{n}\lambda_{1,s}\chi_{1,s}^{2}$, where $\{\lambda_{1,s}\}$
are the eigenvalues of the matrix $(I-J)W(I-J)$.
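Numerically, the eigenvalues $\{\lambda_{1,s}\}$ and a p-value for the weighted chi-square mixture can be obtained as sketched below (a Monte Carlo step is used here for simplicity; exact methods such as Davies' algorithm could be substituted):

```python
import numpy as np

def null_eigenvalues(W):
    """Eigenvalues of (I - J) W (I - J), which weight the chi-square mixture
    in the limiting null distribution of the weighted U statistic."""
    n = W.shape[0]
    P = np.eye(n) - np.full((n, n), 1.0 / n)     # I - J, a centering projector
    M = P @ W @ P
    return np.linalg.eigvalsh((M + M.T) / 2)     # symmetrize for numerical stability

def mc_pvalue(u_obs, lam, n_sim=20_000, seed=0):
    """Monte Carlo p-value for U ~ sum_s lambda_s * chi^2_{1,s}."""
    rng = np.random.default_rng(seed)
    sims = rng.chisquare(1, size=(n_sim, lam.size)) @ lam
    return float((sims >= u_obs).mean())
```

Since $(I-J)$ is idempotent, taking $W=I$ gives one zero eigenvalue and $n-1$ unit eigenvalues, a convenient sanity check.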
\section*{Disclosure Declaration}
We declare no conflict of interest.
\begin{supplement}
\sname{Supplementary Material}\label{suppA}
\stitle{Supplementary Material to A Weighted U Statistic for Association Analyses Considering Genetic Heterogeneity}
\slink[url]{http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)1097-0258}
\sdescription{Materials include Supplementary Appendix A to C, Supplementary Simulation I to II, Supplementary Real Data Analysis, Supplementary Table S1 to S9, and Supplementary Figure S1 to S4.}
\end{supplement}
\end{document}
\begin{document}
\begin{flushright}
\begin{tabular}{l}
{\sf Uzbek Mathematical}\\
{\sf Journal, 2018, 1, pp.\pageref{Jur1}-\pageref{Jur2}}\\
\end{tabular}
\end{flushright}
UDC 517.946.22; AMS Subject Classifications: Primary 35A08
\begin{center}
\textbf{ Fundamental solutions of generalized bi-axially \\ symmetric multivariable Helmholtz equation}\\
\textbf{Ergashev T.G., Hasanov A. }
\end{center}
{\small
\begin{center}
\begin{tabular}{p{9cm}}
The main result of this paper is the construction in explicit form of
four fundamental solutions of the generalized bi-axially symmetric
multivariable Helmholtz equation; these are expressed in terms of a
recently introduced confluent hypergeometric function of three
variables. In addition, the order of the singularity of the found
fundamental solutions is determined, and several of their properties
needed for solving boundary value problems for degenerate second-order
elliptic equations are established.
\end{tabular}\end{center} }
\makeatletter
\renewcommand{\@evenhead}{\vbox{\thepage \hfil {\it Ergashev T.G., Hasanov A.} \hrule}}
\renewcommand{\@oddhead}{\vbox{{\it Fundamental solutions of generalized bi-axially symmetric ...} \hfil \thepage \hrule}}
\makeatother
\label{Jur1}
\textbf{1. Introduction}
It is known that fundamental solutions have an essential role in
studying partial differential equations. Formulation and solving
of many local and non-local boundary value problems are based on
these solutions. Moreover, fundamental solutions appear as
potentials, for instance, as simple-layer and double-layer
potentials in the theory of potentials.
The explicit form of fundamental solutions gives a possibility to
study the considered equation in detail. For example, in the works
of Barros-Neto and Gelfand [1], fundamental solutions for the Tricomi
operator, relative to an arbitrary point in the plane, were
explicitly calculated. We also mention Leray's work [2], in which
a general method, based upon the theory of
analytic functions of several complex variables, was described for finding
fundamental solutions for a class of hyperbolic linear
differential operators with analytic coefficients. Among other
results in this direction, we note a work by Itagaki [3], where 3D
high-order fundamental solutions for a modified Helmholtz equation
were found. These solutions can be applied with the boundary
particle method to some 2D inhomogeneous problems, for example,
see [4].
Singular partial differential equations appear in the study of various
problems of aerodynamics and gas dynamics [5] and irrigation
problems [6]. For instance, the famous Chaplygin equation [7]
describes subsonic, sonic and supersonic flows of gas. The theory
of singular partial differential equations has many applications
and possibilities of various theoretical generalizations. It is,
in fact, one of the rapidly developing branches of the theory of
partial differential equations.
In most cases boundary value problems for singular partial
differential equations are based on fundamental solutions for
these equations, for instance, see [8].
Let us consider the generalized bi-axially symmetric Helmholtz
equation with $p$ variables
$$ H_{\alpha,\beta}^{p,\lambda}(u)\equiv\sum\limits_{i=1}^p \frac{\partial^2u}{\partial x_i^2}+\frac{2\alpha}{x_1}\frac{\partial u}{\partial x_1}+\frac{2\beta}{x_2}\frac{\partial u}{\partial x_2}-\lambda^2u=0 \eqno(1.1) $$
in the domain $R_p^+\equiv
\left\{(x_1,...,x_p):x_1>0,x_2>0\right\}$ , where $p$ is a
dimension of a Euclidean space $(p\geq2)$, $\alpha,\beta$ and
$\lambda$ are constants and $0<2\alpha,2\beta<1$.
In the article [9], equation (1.1) was considered in two
cases: (1) $p=2, \alpha=0,\beta>0;$ and (2)
$p=2,\lambda=0,\beta>0.$ In the work [10], in order to find
fundamental solutions, two new confluent hypergeometric
functions were first introduced. Then, by means of the
introduced hypergeometric functions, fundamental solutions of
equation (1.1) were constructed in explicit form. To study
the properties of these fundamental solutions, the introduced
confluent hypergeometric functions were expanded in products of
Gauss's hypergeometric functions. The logarithmic singularity of
the constructed fundamental solutions of equation (1.1) was
explored with the help of the obtained expansion. Fundamental
solutions of equation (1.1) with $p=3$ and $\lambda=0$ were used
in the investigation of the Dirichlet problem for a
three-dimensional elliptic equation with two singular coefficients
[11].
In the present article, four fundamental solutions of equation (1.1)
in the domain $R_p^+$ with $p>2$ are constructed in explicit form.
Furthermore, some properties of these solutions, which will be used
in solving boundary value problems for the aforementioned equation,
are established.
\textbf{2. On a confluent hypergeometric function of three variables}
The confluent hypergeometric function of three variables which we
will use in the present work is defined as [10]
$$A_2(a;b_1,b_2;c_1,c_2;x,y,z)=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}x^my^nz^k, \eqno(2.1)$$
where $a,\,b_1,\,b_2,\,c_1,\,c_2$ are complex constants,\,\,
$c_1,\,c_2\neq 0,-1,-2,...$ and \\ $(a)_n=\Gamma(a+n)/\Gamma(a)$
is the Pochhammer symbol.
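For numerical work, the triple series (2.1) can be evaluated by direct truncation for small $|x|,|y|,|z|$ (a sketch; note that $(a)_{m+n-k}$ requires the Pochhammer symbol for negative integer arguments as well):

```python
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n = Gamma(a+n)/Gamma(a) for integer n,
    including negative n via (a)_{-n} = 1/((a-n)_n)."""
    p = 1.0
    if n >= 0:
        for k in range(n):
            p *= a + k
    else:
        for k in range(1, -n + 1):
            p /= a - k
    return p

def A2(a, b1, b2, c1, c2, x, y, z, N=20):
    """Truncated triple series (2.1) for the confluent function A_2."""
    s = 0.0
    for m in range(N):
        for n in range(N):
            for k in range(N):
                s += (poch(a, m + n - k) * poch(b1, m) * poch(b2, n)
                      / (poch(c1, m) * poch(c2, n)
                         * factorial(m) * factorial(n) * factorial(k))
                      * x**m * y**n * z**k)
    return s
```

At $z=0$ only the $k=0$ terms survive, so the truncated $A_2$ reduces to the double series for Appell's $F_2$, which provides a convenient consistency check.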
Using the formula of derivation
$$\frac{\partial^{i+j+k}}{\partial x^i\partial y^j \partial z^k}A_2(a;b_1,b_2;c_1,c_2;x,y,z)=$$
$$=\frac{(a)_{i+j-k}(b_1)_i(b_2)_j}{(c_1)_i(c_2)_j}A_2(a+i+j-k;b_1+i,b_2+j;c_1+i,c_2+j;x,y,z),\eqno (2.2)$$
it is easy to show that the hypergeometric function
$A_2(a;b_1,b_2;c_1,c_2;x,y,z)$ satisfies the system of
hypergeometric equations [10]
$$\left\{ \begin{matrix}
x(1-x)\omega_{xx}-xy\omega_{xy}+xz\omega_{xz}+[c_1-(a+b_1+1)x]\omega_x-\\-b_1y\omega_y+b_1z\omega_z-ab_1\omega=0, \\
y(1-y)\omega_{yy}-xy\omega_{xy}+yz\omega_{yz}+[c_2-(a+b_2+1)y]\omega_y-\\-b_2x\omega_x+b_2z\omega_z-ab_2\omega=0, \\
z\omega_{zz}-x\omega_{xz}-y\omega_{yz}+(1-a)\omega_z+\omega=0, \\
\end{matrix} \right.\eqno(2.3)$$
where $$\omega(x,y,z)=A_2(a;b_1,b_2;c_1,c_2;x,y,z).\eqno (2.4)$$
Indeed, by virtue of the derivation formula (2.2), it is easy to
calculate the following expressions:
$$\omega_x=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\frac{(a+m+n-k)(b_1+m)}{(c_1+m)}x^my^nz^k, \eqno(2.5)$$
$$x\omega_x=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,m\,x^my^nz^k, \eqno(2.6)$$
$$y\omega_y=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,n\,x^my^nz^k, \eqno(2.7)$$
$$z\omega_z=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,k\,x^my^nz^k, \eqno(2.8)$$
$$xy\omega_{xy}=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,mn\,x^my^nz^k, \eqno(2.9)$$
$$xz\omega_{xz}=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,mk\,x^my^nz^k, \eqno(2.10)$$
$$x^2\omega_{xx}=\sum\limits_{m,n,k=0}^\infty\frac{(a)_{m+n-k}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!k!}\,m(m-1)\,x^my^nz^k. \eqno(2.11)$$
Substituting (2.5)--(2.11) into the first equation of the system
(2.3), we verify that the function $\omega(x,y,z)$ satisfies
this equation; the second and third equations of the
system (2.3) are verified similarly.
Having substituted $\omega=x^\tau y^\nu z^\mu \psi(x,y,z)$ into the
system (2.3), one can verify that for the values
$$(\tau,\nu,\mu)\in\left\{(0,0,0),\,(1-c_1,0,0),\,(0,1-c_2,0),\,(1-c_1,1-c_2,0)\right\}$$
the system has four linearly independent solutions
$$\omega_1(x,y,z)=A_2(a;b_1,b_2;c_1,c_2;x,y,z),\eqno (2.12)$$
$$\omega_2(x,y,z)=x^{1-c_1}A_2(a+1-c_1;b_1+1-c_1,b_2;2-c_1,c_2;x,y,z),\eqno (2.13)$$
$$\omega_3(x,y,z)=y^{1-c_2}A_2(a+1-c_2;b_1,b_2+1-c_2;c_1,2-c_2;x,y,z),\eqno (2.14)$$
\,\,\,\,\,\,\,\,\,$\omega_4(x,y,z)=x^{1-c_1}y^{1-c_2}\times$
$$\times
A_2(a+2-c_1-c_2;b_1+1-c_1,b_2+1-c_2;2-c_1,2-c_2;x,y,z).\eqno
(2.15)$$
In order to study the properties of the fundamental solutions
below, we need expansions of the function
$A_2(a;b_1,b_2;c_1,c_2;x,y,z)$ in products of Gauss's
hypergeometric functions. For this purpose we consider the expression
$$A_2(a;b_1,b_2;c_1,c_2;x,y,z)=$$ $$=\sum\limits_{k=0}^\infty\frac{(-z)^k}{(1-a)_kk!}F_2(a-k;b_1,b_2;c_1,c_2;x,y), \eqno (2.16)$$
where
$$ F_2(a;b_1,b_2;c_1,c_2;x,y)=
\sum\limits_{m,n=0}^\infty\frac{(a)_{m+n}(b_1)_m(b_2)_n}{(c_1)_m(c_2)_nm!n!}x^my^n.$$
In [12], the following expansion was found for Appell's hypergeometric
function $F_2(a;b_1,b_2;c_1,c_2;x,y)$:
$$F_2(a;b_1,b_2;c_1,c_2;x,y)=\sum\limits_{i=0}^\infty\frac{(a)_i(b_1)_i(b_2)_i}{(c_1)_i(c_2)_ii!}x^iy^i\times$$
$$\times F(a+i;b_1+i;c_1+i;x)F(a+i;b_2+i;c_2+i;y),
\eqno (2.17)$$ where
$F(a,b;c;z)=\sum\limits_{n=0}^\infty\frac{(a)_n(b)_n}{(c)_nn!}z^n$
is a hypergeometric function of Gauss.
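The expansion (2.17) can be checked numerically by truncating both sides (a sketch; the parameter values in the test are arbitrary points with $|x|,|y|<1$):

```python
from math import factorial

def poch(a, n):
    # Pochhammer symbol (a)_n for non-negative integer n
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def gauss_F(a, b, c, x, terms=60):
    # Truncated Gauss hypergeometric series F(a,b;c;x), valid for |x| < 1
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

def F2(a, b1, b2, c1, c2, x, y, N=40):
    # Truncated double series for Appell's hypergeometric function F_2
    return sum(poch(a, m + n) * poch(b1, m) * poch(b2, n)
               / (poch(c1, m) * poch(c2, n) * factorial(m) * factorial(n))
               * x**m * y**n
               for m in range(N) for n in range(N))

def F2_via_expansion(a, b1, b2, c1, c2, x, y, N=40):
    # Right-hand side of (2.17): a series of products of Gauss functions
    return sum(poch(a, i) * poch(b1, i) * poch(b2, i)
               / (poch(c1, i) * poch(c2, i) * factorial(i))
               * x**i * y**i
               * gauss_F(a + i, b1 + i, c1 + i, x)
               * gauss_F(a + i, b2 + i, c2 + i, y)
               for i in range(N))
```

Both truncations agree to high accuracy for small $x,y$, consistent with (2.17).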
Considering expansion (2.17), from the identity (2.16) we find
[10]
$$A_2(a;b_1,b_2;c_1,c_2;x,y,z)=
\sum\limits_{i,j=0}^\infty\frac{(a)_{i-j}(b_1)_i(b_2)_i}{(c_1)_i(c_2)_ii!j!}x^iy^iz^j
\times$$
$$\times F(a+i-j,b_1+i;c_1+i;x)F(a+i-j,b_2+i;c_2+i;y).\eqno (2.18)$$
By virtue of the formula
$$F(a,b;c;x)=(1-x)^{-b}F\left(c-a,b;c;\frac{x}{x-1}\right),$$
we get from expansion (2.18)
$$A_2(a;b_1,b_2;c_1,c_2;x,y,z)=
(1-x)^{-b_1}(1-y)^{-b_2}\times$$
$$\times\sum\limits_{i,j=0}^\infty\frac{(a)_{i-j}(b_1)_i(b_2)_i}{(c_1)_i(c_2)_ii!j!}\left(\frac{x}{1-x}\right)^i\left(\frac{y}{1-y}\right)^iz^j
\times$$
$$\times F\left(c_1-a+j,b_1+i;c_1+i;\frac{x}{x-1}\right)\times$$ $$\times F\left(c_2-a+j,b_2+i;c_2+i;\frac{y}{y-1}\right).\eqno (2.19)$$
Expansion (2.19) will be used for studying properties of the
fundamental solutions.
We note that analogous expansions for the Lauricella hypergeometric
function $F_A^{(s)}$ were found in [13].
\textbf{3. Fundamental solutions}
We consider the generalized bi-axially symmetric multivariable
Helmholtz equation in the domain $R_p^+$. Equation (1.1) satisfies
the following constructive formulas
$$H^{p,\lambda}_{\alpha,\beta}\left(x_1^{1-2\alpha} u \right)\equiv x_1^{1-2\alpha}H^{p,\lambda}_{1-\alpha,\beta}(u), \eqno(3.1) $$
$$H^{p,\lambda}_{\alpha,\beta}\left(x_2^{1-2\beta} u \right)\equiv x_2^{1-2\beta}H^{p,\lambda}_{\alpha,1-\beta}(u). \eqno(3.2) $$
The constructive formulas (3.1) and (3.2) make it possible to solve
boundary value problems for equation (1.1) for various values of
the parameters $\alpha,\beta$.
We seek a solution of equation (1.1) in the form
$$u(x)=P(r)\omega(\xi,\eta,\zeta), \eqno (3.3)$$
where
$$r^2=\sum \limits_{i=1}^p \left(x_i-x_{0i}\right)^2,\,\,\,\,r^2_1=\left(x_1+x_{01}\right)^2+\sum \limits_{i=2}^p\left(x_i-x_{0i}\right)^2,$$
$$r^2_2=\left(x_1-x_{01}\right)^2+\left(x_2+x_{02}\right)^2+\sum \limits_{i=3}^p\left(x_i-x_{0i}\right)^2,$$
$$\xi=\frac{r^2-r^2_1}{r^2}=-\frac{4x_1x_{01}}{r^2},\,\,\,\,
\eta=\frac{r^2-r^2_2}{r^2}=-\frac{4x_2x_{02}}{r^2},$$
$$\zeta=-\frac{\lambda^2}{4}r^2,\,\,\,\, P(r)=\left(r^2\right)^{1-\alpha-\beta-\frac{p}{2}}.$$
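The closed forms of $\xi$ and $\eta$ above follow from elementary algebra ($r_1^2-r^2=4x_1x_{01}$ and $r_2^2-r^2=4x_2x_{02}$); a quick numerical check (a Python sketch, not part of the paper, with an arbitrary point and pole in the illustrative dimension $p=4$):

```python
import random

random.seed(0)
p = 4  # illustrative dimension
xs  = [random.uniform(0.1, 2.0) for _ in range(p)]   # point x (positive coordinates)
x0s = [random.uniform(0.1, 2.0) for _ in range(p)]   # pole x_0

r2  = sum((xs[i] - x0s[i])**2 for i in range(p))
r12 = (xs[0] + x0s[0])**2 + sum((xs[i] - x0s[i])**2 for i in range(1, p))
r22 = (xs[0] - x0s[0])**2 + (xs[1] + x0s[1])**2 + sum((xs[i] - x0s[i])**2 for i in range(2, p))

# the difference-of-radii forms and the closed forms of xi, eta must agree
assert abs((r2 - r12) / r2 - (-4 * xs[0] * x0s[0] / r2)) < 1e-12
assert abs((r2 - r22) / r2 - (-4 * xs[1] * x0s[1] / r2)) < 1e-12
```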
Substituting (3.3) into equation (1.1), we have
$$A_1\omega_{\xi\xi}+A_2\omega_{\eta\eta}+A_3\omega_{\zeta\zeta}+B_1\omega_{\xi\eta}+B_2\omega_{\xi\zeta}+B_3\omega_{\eta\zeta}+$$ $$+C_1\omega_\xi+C_2\omega_\eta+C_3\omega_\zeta+D\omega=0,\eqno(3.5)$$
where
$$A_1=P\sum\limits_{i=1}^p \left(\frac{\partial\xi}{\partial x_i}\right)^2,\,\,\,\,\,\,A_2=P\sum\limits_{i=1}^p \left(\frac{\partial\eta}{\partial x_i}\right)^2,\,\,\,\,\,A_3=P\sum\limits_{i=1}^p \left(\frac{\partial\zeta}{\partial x_i}\right)^2,$$
$$B_1=2P\sum\limits_{i=1}^p \frac{\partial\xi}{\partial x_i}\frac{\partial\eta}{\partial x_i},\,\,\,\,B_2=2P\sum\limits_{i=1}^p \frac{\partial\xi}{\partial x_i}\frac{\partial\zeta}{\partial x_i},\,\,\,\,B_3=2P\sum\limits_{i=1}^p \frac{\partial\eta}{\partial x_i}\frac{\partial \zeta}{\partial x_i},$$
$$C_1=2\sum\limits_{i=1}^p \frac{\partial P}{\partial x_i}\frac{\partial \xi}{\partial x_i}+P\sum\limits_{i=1}^p \frac{\partial^2\xi}{\partial x_i^2}+P\left(\frac{2\alpha}{x_1}\frac{\partial \xi}{\partial x_1}+\frac{2\beta}{x_2}\frac{\partial \xi}{\partial x_2} \right),$$
$$C_2=2\sum\limits_{i=1}^p \frac{\partial P}{\partial x_i}\frac{\partial \eta}{\partial x_i}+P\sum\limits_{i=1}^p \frac{\partial^2\eta}{\partial x_i^2}+P\left(\frac{2\alpha}{x_1}\frac{\partial \eta}{\partial x_1}+\frac{2\beta}{x_2}\frac{\partial \eta}{\partial x_2} \right),$$
$$C_3=2\sum\limits_{i=1}^p \frac{\partial P}{\partial x_i}\frac{\partial \zeta}{\partial x_i}+P\sum\limits_{i=1}^p \frac{\partial^2\zeta}{\partial x_i^2}+P\left(\frac{2\alpha}{x_1}\frac{\partial \zeta}{\partial x_1}+\frac{2\beta}{x_2}\frac{\partial \zeta}{\partial x_2} \right),$$
$$D=\sum\limits_{i=1}^p \frac{\partial^2 P}{\partial x_i^2}+\frac{2\alpha}{x_1}\frac{\partial P}{\partial x_1}+\frac{2\beta}{x_2}\frac{\partial P}{\partial x_2} -\lambda^2P.$$
After elementary calculations we find
$$A_1=-\frac{4P}{r^2}\frac{x_{01}}{x_1}\xi(1-\xi),\,\,\,\,\, A_2=-\frac{4P}{r^2}\frac{x_{02}}{x_2}\eta(1-\eta),\eqno(3.6)$$
$$A_3=-\lambda^2P\zeta, \,\,\,\,\,\,B_1=\frac{4P}{r^2}\frac{x_{01}}{x_1}\xi\eta+\frac{4P}{r^2}\frac{x_{02}}{x_2}\xi\eta,\eqno(3.7)$$
$$ B_2=-\frac{4P}{r^2}\frac{x_{01}}{x_1}\xi\zeta+\lambda^2P\xi,\,\,\,\,\,\, B_3=-\frac{4P}{r^2}\frac{x_{02}}{x_2}\eta\zeta+\lambda^2 P\eta,\eqno(3.8)$$
$$C_1=-\frac{4P}{r^2}\frac{x_{01}}{x_1}\left[2\alpha-\left(2\alpha+\beta+\frac{p}{2}\right)\xi\right]+\frac{4P}{r^2}\frac{x_{02}}{x_2}\beta\xi,\eqno(3.9)$$
$$C_2= \frac{4P}{r^2}\frac{x_{01}}{x_1}\alpha\eta-\frac{4P}{r^2}\frac{x_{02}}{x_2}\left[2\beta-\left(\alpha+2\beta+\frac{p}{2}\right)\eta\right],\eqno(3.10)$$
$$C_3=-\frac{4P}{r^2}\frac{x_{01}}{x_1}\alpha\zeta-\frac{4P}{r^2}\frac{x_{02}}{x_2}\beta\zeta-\lambda^2P\left(\frac{p}{2}-\alpha-\beta\right),\eqno(3.11)$$
$$D=\frac{4P}{r^2}\left[\frac{x_{01}}{x_1}\alpha+\frac{x_{02}}{x_2}\beta \right]\left(\alpha+\beta-1+\frac{p}{2}\right)-\lambda^2P.\eqno(3.12)$$
Substituting equalities (3.6)-(3.12) into equation (3.5), we get
the system of hypergeometric equations
$$\left\{ \begin{matrix}
\xi(1-\xi)\omega_{\xi\xi}-\xi\eta\omega_{\xi\eta}+\xi\zeta\omega_{\xi\zeta}+\left[2\alpha-\left(2\alpha+\beta+\frac{p}{2}\right)\xi\right]\omega_\xi-\\-\alpha\eta\omega_\eta+\alpha\zeta\omega_\zeta-\alpha\left(\alpha+\beta-1+\frac{p}{2}\right)\omega=0, \\
\eta(1-\eta)\omega_{\eta\eta}-\xi\eta\omega_{\xi\eta}+\eta\zeta\omega_{\eta\zeta}+\left[2\beta-\left(\alpha+2\beta+\frac{p}{2}\right)\eta\right]\omega_\eta-\\-\beta\xi\omega_\xi+\beta\zeta\omega_\zeta-\beta\left(\alpha+\beta-1+\frac{p}{2}\right)\omega=0, \\
\zeta\omega_{\zeta\zeta}-\xi\omega_{\xi\zeta}-\eta\omega_{\eta\zeta}+\left(2-\alpha-\beta-\frac{p}{2}\right)\omega_\zeta+\omega=0. \\
\end{matrix} \right.\eqno(3.13)$$
Considering the solutions of the system of hypergeometric
equations (2.12)-(2.15), we define
$$\omega_1(\xi,\eta,\zeta)=A_2\left(\alpha+\beta-1+\frac{p}{2}; \alpha,\beta;2\alpha,2\beta;\xi,\eta,\zeta\right),\eqno (3.14)$$
$$\omega_2(\xi,\eta,\zeta)=\xi^{1-2\alpha}A_2\left(-\alpha+\beta+\frac{p}{2}; 1-\alpha,\beta;2-2\alpha,2\beta;\xi,\eta,\zeta\right),\eqno (3.15)$$
$$\omega_3(\xi,\eta,\zeta)=\eta^{1-2\beta}A_2\left(\alpha-\beta+\frac{p}{2}; \alpha,1-\beta;2\alpha,2-2\beta;\xi,\eta,\zeta\right),\eqno (3.16)$$
\,\,\,\,\,\,\,\,\,$\omega_4(\xi,\eta,\zeta)=\xi^{1-2\alpha}\eta^{1-2\beta}\times$
$$\times
A_2\left(1-\alpha-\beta+\frac{p}{2};
1-\alpha,1-\beta;2-2\alpha,2-2\beta;\xi,\eta,\zeta\right).\eqno
(3.17)$$
Substituting the equalities (3.14)-(3.17) into the expression
(3.3), we get some solutions of the equation (1.1)
$q_1(x,x_0)=k_1\left(r^2\right)^{1-\alpha-\beta-\frac{p}{2}}\times$
$$\times A_2\left(\alpha+\beta-1+\frac{p}{2};
\alpha,\beta;2\alpha,2\beta;\xi,\eta,\zeta\right),\eqno (3.18)$$
\,\,\,\,\,\,\,\,\,$q_2(x,x_0)=k_2\left(r^2\right)^{\alpha-\beta-\frac{p}{2}}x_1^{1-2\alpha}x_{01}^{1-2\alpha}\times$
$$\times A_2\left(-\alpha+\beta+\frac{p}{2};
1-\alpha,\beta;2-2\alpha,2\beta;\xi,\eta,\zeta\right),\eqno
(3.19)$$
\,\,\,\,\,\,\,\,\,$q_3(x,x_0)=k_3\left(r^2\right)^{-\alpha+\beta-\frac{p}{2}}x_2^{1-2\beta}x_{02}^{1-2\beta}\times$
$$\times A_2\left(\alpha-\beta+\frac{p}{2};
\alpha,1-\beta;2\alpha,2-2\beta;\xi,\eta,\zeta\right),\eqno
(3.20)$$
\,\,\,\,\,\,\,\,\,$q_4(x,x_0)=k_4\left(r^2\right)^{-1+\alpha+\beta-\frac{p}{2}}x_1^{1-2\alpha}x_{01}^{1-2\alpha}x_2^{1-2\beta}x_{02}^{1-2\beta}\times$
$$\times
A_2\left(1-\alpha-\beta+\frac{p}{2};
1-\alpha,1-\beta;2-2\alpha,2-2\beta;\xi,\eta,\zeta\right),\eqno
(3.21)$$ where $k_1,\dots,k_4$ are constants to be determined when
solving boundary value problems for equation (1.1). It is easy to
see that the functions (3.18)-(3.21) possess the following
properties
$$ \frac{\partial q_1(x,x_0)}{\partial x_1}\mid_{x_1=0}=0,\,\,\,\,\,\,\,\,\frac{\partial q_1(x,x_0)}{\partial x_2}\mid_{x_2=0}=0, \eqno (3.22)$$
$$ q_2(x,x_0)\mid_{x_1=0}=0,\,\,\,\,\,\,\,\,\,\,\,\frac{\partial q_2(x,x_0)}{\partial x_2}\mid_{x_2=0}=0, \eqno (3.23)$$
$$ \frac{\partial q_3(x,x_0)}{\partial x_1}\mid_{x_1=0}=0,\,\,\,\,\,\,\,\,q_3(x,x_0)\mid_{x_2=0}=0, \eqno (3.24)$$
$$ q_4(x,x_0)\mid_{x_1=0}=0,\,\,\,\,\,\,\,\,q_4(x,x_0)\mid_{x_2=0}=0. \eqno (3.25)$$
From expansion (2.19) it follows that the fundamental solutions
(3.18)-(3.21) have a singularity of order
$\frac{1}{r^{p-2}}$ as $r \rightarrow 0$, where $p>2$.
\textbf{References}
\begin{enumerate}
\item Barros-Neto J.J., Gelfand I.M. Fundamental solutions for the
Tricomi operator I, II, III. Duke Math. J. 98(3), 1999. P. 465-483;
111(3), 2001. P. 561-584; 128(1), 2005. P. 119-140.
\item Leray J. Un prolongement de la transformation de Laplace qui
transforme la solution unitaire d'un op\'erateur hyperbolique en sa
solution \'el\'ementaire (Probl\`eme de Cauchy, IV).
Bull. Soc. Math. France 90, 1962. P. 39-156.
\item Itagaki M. Higher order three-dimensional fundamental
solutions to the Helmholtz and the modified Helmholtz equations.
Eng.\,Anal.\,Bound.\,Elem.\,15,1995. P.289-293.
\item Golberg M.A., Chen C.S. The method of fundamental solutions
for potential, Helmholtz and diffusion problems, in: Golberg
M.A.(Ed.), Boundary Integral Methods-Numerical and Mathematical
Aspects, Comput.Mech.Publ.,1998. P.103-176.
\item Bers L. Mathematical aspects of subsonic and transonic gas
dynamics, New York,London. 1958.
\item Serbina L.I. A problem for the linearized Boussinesq equation with a nonlocal Samarskii condition, Differ.Equ. 38(8),2002. P. 1187-1194.
\item Chaplygin S.A. On gas streams. Dissertation, Moscow, 1902 (in
Russian).
\item Salakhitdinov M.S., Hasanov A. A solution of the
Neumann-Dirichlet boundary-value problem for generalized
bi-axially symmetric Helmholtz equation. Complex Variables and
Elliptic Equations. 53 (4), 2008. P.355-364.
\item Marichev O.I. Integral representation of solutions of the
generalized double axial-symmetric Helmholtz equation (in
Russian). Differencial'nye Uravneniya, Minsk,
14(10),1978.P.1824-1831.
\item Hasanov A. Fundamental solutions of the generalized bi-axially symmetric Helmholtz
equation. Complex Variables and Elliptic Equations. Vol.52, No 8,
2007. P. 673-683.
\item Karimov E.T., Nieto J.J. The Dirichlet problem for a 3D
elliptic equation with two singular coefficients. Computers and
Mathematics with Applications. 62, 2011. P.214-224.
\item Burchnall J.L., Chaundy T.W. Expansions of Appell's double
hypergeometric functions. The Quarterly Journal of Mathematics,
Oxford, Ser.12,1941. P.112-128.
\item Hasanov A., Srivastava H.M. Some decomposition formulas
associated with the Lauricella function $F_A^{(r)}$ and other
multiple hypergeometric functions. Applied Mathematics Letters,
19(2), 2006. P. 113-121.
\end{enumerate}
\begin{tabular}{p{7cm}l}
Institute of Mathematics, Tashkent, Uzbekistan \\
\end{tabular}
\textbf{Fundamental solutions of the generalized bi-axially symmetric \\
multidimensional Helmholtz equation} \\
\textbf{Ergashev T.G., Hasanov A.}
\label{Jur2}
\end{document}
\begin{document}
\title{\large Extendability of continuous quasiconvex functions from subspaces}
\author{Carlo Alberto De Bernardi and Libor Vesel\'y}
\address{Dipartimento di Matematica per le Scienze economiche, finanziarie ed attuariali, Universit\`{a} Cattolica del Sacro Cuore, Via Necchi 9, 20123 Milano, Italy}
\address{Dipartimento di Matematica\\
Universit\`a degli Studi\\
Via C.~Saldini 50\\
20133 Milano\\
Italy}
\email{[email protected]}
\email{[email protected]}
\subjclass[2010]{Primary 26B25, 46A55; Secondary 52A41, 52A99}
\keywords{quasiconvex function, extension, topological vector space}
\thanks{}
\begin{abstract}
Let $Y$ be a subspace of a topological vector space $X$, and $A\subset X$ an open convex set that intersects $Y$.
We say that the property $(QE)$ [property $(CE)$] holds if every continuous quasiconvex [continuous convex] function on $A\cap Y$ admits a
continuous quasiconvex [continuous convex] extension defined on $A$.
We study relations between $(QE)$ and $(CE)$ properties, proving that $(QE)$ always implies $(CE)$ and that, under suitable hypotheses (satisfied for example if $X$ is a normed space and $Y$ is a closed subspace of $X$), the two properties are equivalent.
By combining the previous implications between $(QE)$ and $(CE)$ properties with known results about the property $(CE)$, we obtain some new positive results about the extension of quasiconvex continuous functions. In particular, we generalize the results contained in \cite{DEQEX} to the infinite-dimensional separable case. Moreover, we also immediately obtain existence of examples in which $(QE)$ does not hold.
\end{abstract}
\maketitle
\markboth{C.A.\ De Bernardi and L.\ Vesel\'y}{Extendability of continuous quasiconvex functions}
\section{Introduction}\label{S:intro}
A real-valued function $f$, defined on a convex set (in a vector space), is said to be {\em quasiconvex} if all sub-level sets
of $f$ are convex (see Section~\ref{sec: notation}). Quasiconvex functions represent a natural generalization
of convex functions, and
play an important role in mathematical programming, in mathematical
economics, and in many other areas of mathematical analysis
(see \cite{AH,BFitzV,gencon,CR,CRdiff, pierskalla,penot} and the references therein for results and applications concerning quasiconvex functions).
The present paper deals with an extension problem for continuous quasiconvex functions. For simplicity, let us introduce the
following {\em temporary terminology.}
\begin{quoting}
\noindent
Let $Y$ be a subspace of a topological vector space $X$, and $A\subset X$ an open convex set that intersects $Y$.
Let us say that $(QE)$ [$(CE)$] holds if every continuous quasiconvex [continuous convex] function on $A\cap Y$ admits a
continuous quasiconvex [continuous convex] extension defined on $A$.
\end{quoting}
Notice that the extension properties $(QE)$ and $(CE)$ seem a priori to be two independent problems.
As we shall see, this is not completely the case.
In the last two decades, the property $(CE)$ has been studied in several
papers, mainly for the particular case of $A=X$ (see e.g. \cite{BV2002,BMV,VZ2010,DEVEX1,DEVEX2,DEX} and the references therein).
The known positive results require separability-type assumptions.
As for the quasiconvex case, in the recent paper \cite{DEQEX}, the first-named author
proved that $(QE)$ holds in some particular, essentially finite-dimensional cases
(namely, for either $X$ a finite-dimensional normed space, or $X$ an arbitrary normed space,
$Y$ finite-dimensional and $A\cap Y$ a polytope in $Y$).
However, the proof of the basic result in
\cite{DEQEX} relies on a compactness argument, which cannot be used in the infinite-dimensional case.
In the present paper, we study the property $(QE)$ for general, possibly infinite-dimensional topological vector spaces, obtaining
some new positive results. With respect to \cite{DEQEX}, our approach here is different, consisting in studying
implication relations between $(QE)$ and $(CE)$.
After some preparatory results in Section~\ref{sec: notation}, the following Section~\ref{S:2ext_prop} provides
the main tool of the present paper:
a characterization of the property $(QE)$,
in terms of ``extendability'' of certain increasing families of open (in $Y$) convex
subsets of $A\cap Y$ to increasing families of open convex subsets of $A$, preserving an appropriate topological property.
On the other hand, to characterize the property $(CE)$, it suffices to consider a similar property for increasing {\em sequences}
of sets, as proved in \cite{DEVEX1} (see also Theorem~\ref{all}). The main results of the paper are contained in Section~\ref{S:relationship}, in which we use the two characterizations cited above
to study the relationship between
$(QE)$ and $(CE)$. First we show that if $(QE)$ holds then $(CE)$ holds as well and, under a weak additional assumption on $X$,
the subspace $Y$ is closed (see Theorem~\ref{QEimpliesCE}).
Then we prove that the converse holds
whenever there exists a nonnegative continuous convex function on $A$ which is null exactly
on $A\cap Y$ (see Theorem~\ref{CEimpliesQE}). Notice that this condition is automatically satisfied in the case when
$Y$ is a closed subspace of a normed space $X$.
Section~\ref{S:appl} starts with a simple proposition showing that, under an additional assumption, if $(QE)$ holds then it
is possible to extend
a non-constant $f$ so that it preserves the range and the set of minimizers of $f$. In the rest of the section,
results from previous sections and \cite{DEVEX1, DEVEX2} are applied to obtain
sufficient conditions on $X,Y$ assuring that $(QE)$ holds for $A=X$ (Theorem~\ref{applCE}), as well as for an arbitrary open
convex set $A\subset X$ (Theorem~\ref{applSCE}). An example of our results for Banach spaces is the following.
\noindent
{\em Let $Y$ be a closed subspace of a Banach space $X$. If $X/Y$ is separable then $(QE)$ holds for $A=X$. If $Y$ is separable
and either
$Y$ is complemented or $X$ is WCG (weakly compactly generated; e.g., reflexive or separable) then $(QE)$ holds
for an arbitrary open convex set $A\subset X$. Moreover, in both cases, the extension preserves the set of minimizers.}
\noindent
Moreover, at the end of Section~\ref{S:appl} we present some examples in which $(QE)$ does not hold.
Also in this case we apply some results concerning $(CE)$ from \cite{DEVEX1}.
Finally, let us remark that some positive results on extendability of quasiconvex functions
from convex sets (instead of subspaces) are contained in our forthcoming paper
\cite{DEVEX_UC} while examples of non-extendability are given in \cite{DEVEqc_examples}. In both cases, an
important role is played by geometric properties of the convex set in question.
\section{Preliminaries}\label{sec: notation}
We consider only nontrivial real topological vector spaces ({\em t.v.s.}, for short). Unless specified otherwise,
such spaces are not assumed to be Hausdorff.
If $X$ is a t.v.s.\ and $E\subset Z\subset X$, we denote by $\overline{E}^Z$, $\partial_Z E$, and $\mathrm{int}_Z E$
the relative closure, boundary and interior of $E$ in $Z$.
All functions we consider are real-valued. Let us recall that, if $A$ is a convex subset of $X$, a function $f:A\to\mathbb{R}$ is called {\em quasiconvex} if
$f((1-t)x+t y)\le \max\{f(x),f(y)\}$ whenever $x,y\in A$ and $t\in[0,1]$. Thus $f$ is quasiconvex if and only if
all its strict sub-level sets
$$
[f< \beta]:=\{x\in A:\, f(x)< \beta\}\qquad(\beta\in\mathbb{R})
$$
are convex, if and only if all its sub-level sets
$$
[f\le\beta]:=\{x\in A:\, f(x)\le\beta\}
\qquad(\beta\in\mathbb{R})
$$
are convex. The corresponding super-level sets $[f>\beta]$ and $[f\ge\beta]$ are defined analogously.
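The defining inequality of quasiconvexity can be tested numerically. The following Python sketch (not part of the paper; the sample functions are illustrative choices) checks it by random sampling and also exhibits an explicit violation for a non-quasiconvex function:

```python
import math
import random

def is_quasiconvex_on_samples(f, lo, hi, trials=2000, seed=1):
    # Sampled check of f((1-t)x + t y) <= max(f(x), f(y)) on [lo, hi]
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        t = rng.random()
        if f((1 - t) * x + t * y) > max(f(x), f(y)) + 1e-12:
            return False
    return True

# sqrt(|x|) is quasiconvex (all sub-level sets are intervals) though not convex
assert is_quasiconvex_on_samples(lambda u: math.sqrt(abs(u)), -5.0, 5.0)

# x*sin(x) is not quasiconvex on [0, 10]: the midpoint of [1, 3] violates the inequality
g = lambda u: u * math.sin(u)
assert g(2.0) > max(g(1.0), g(3.0))
```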
Given sets $A_n$ ($n\in\mathbb{N}$) and $A$, the notation $A_n\nearrow A$ means that $A_n\subset A_{n+1}$ ($n\in\mathbb{N}$) and $\bigcup_{n\in\mathbb{N}}A_n=A$;
and $A_n\searrow A$ means that $A_n\supset A_{n+1}$ ($n\in\mathbb{N}$) and $\bigcap_{n\in\mathbb{N}}A_n=A$.
For $x,y\in X$, $[x,y]$ denotes the closed segment in $X$ with endpoints $x$ and $y$, and $[x,y):=[x,y]\setminus\{y\}$; meanings of
$(x,y]$ and $(x,y)$ are analogous.
For a convex set $C\subset X$,
it is well known (cf.\ \cite{Klee}) that: if
$x\in\mathrm{int}_X C$ and $y\in \overline{C}^X$ then $[x,y)\subset\mathrm{int}_X C$; and if $C$ has nonempty interior then
$\mathrm{int}_X(\overline{C}^X)=\mathrm{int}_X C$.
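The first of these classical facts can be illustrated numerically for the open unit disk in $\mathbb{R}^2$ (a Python sketch, not part of the paper; the interior and boundary points are arbitrary choices):

```python
import math

def in_open_unit_disk(pt):
    return pt[0]**2 + pt[1]**2 < 1.0

x = (0.2, -0.1)                        # interior point of the disk C
y = (math.cos(1.0), math.sin(1.0))     # boundary point of C
for k in range(1000):                  # t ranges over [0, 1), sampling [x, y)
    t = k / 1000.0
    pnt = ((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1])
    assert in_open_unit_disk(pnt)      # every point of [x, y) is interior
```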
\begin{lemma}\label{closure}
Let $Y$ be a subspace of a t.v.s.\ $X$, and $A\subset X$ an open convex set intersecting $Y$. Then:
\begin{enumerate}[(a)]
\item $A\cap\overline{Y}^X\subset\overline{A\cap Y}^X$;
\item $\overline{A\cap Y}^X=\overline{A}^X\cap\overline{Y}^X$;
\item $A\cap Y$ is relatively closed in $A$ if and only if $Y$ is closed (in $X$).
\end{enumerate}
\end{lemma}
\begin{proof}
(a) follows easily since $A$ is open.
(b) The inclusion $\subset$ is obvious. To show the other inclusion, take any
$x\in \overline{A}^X\cap\overline{Y}^X$, and fix some $y\in A\cap Y$.
Then $[y,x)\subset \mathrm{int}_X(\overline{A}^X)\cap\overline{Y}^X=A\cap\overline{Y}^X
\subset\overline{A\cap Y}^X$ by (a). It follows that $x\in\overline{A\cap Y}^X$.
(c) We can (and do) assume that $0\in A$. Assume that $A\cap Y$ is relatively closed in $A$. Then by (b),
$A\cap Y=\overline{A\cap Y}^X\cap A=\overline{A}^X\cap\overline{Y}^X\cap A=A\cap \overline{Y}^X$.
Consequently, $Y=\bigcup_{t>0}t(A\cap Y)=\bigcup_{t>0}t(A\cap \overline{Y}^X)=\overline{Y}^X$,
and so $Y$ is closed. The other implication is trivial.
\end{proof}
The following lemma coincides with \cite[Lemma~1.1]{DEVEX1}.
\begin{lemma}\label{conv}
Let $Y$ be a subspace of a t.v.s.\ $X$,
$C\subset Y$ and $A\subset X$ convex sets. Then:
\begin{enumerate}
\item[(a)] $\mathrm{conv}(A\cup C)\cap Y=\mathrm{conv}[(Y\cap A)\cup C]$;
\item[(b)] if $C$ is open in $Y$, $A$ is open in $X$ and
$A\cap C\ne\emptyset$, then $\mathrm{conv}(A\cup C)$ is open in $X$.
\end{enumerate}
\end{lemma}
One of the main tools of the present paper will be a simple proposition,
Proposition~\ref{P:lowerlevelsets}, on constructing
quasiconvex functions from increasing families of open convex sets. For brevity of formulations, we shall
use the following terminology.
\begin{definition}\label{D:Omega}
Let $Z$ be a topological space, and $E\subset Z$. By an {\em $\Omega(E)$-family in $Z$} we mean a family
$\{D_\alpha\}_{\alpha\in \mathbb{R}}$ of sets in $Z$ such that
\begin{align*}
\bigcap_{\alpha\in\mathbb{R}}D_\alpha=\emptyset,\
\bigcup_{\alpha\in\mathbb{R}}D_\alpha=E,\
\text{and}\
\overline{D}^E_\alpha\subset D_\beta \text{ whenever $\alpha,\beta\in\mathbb{R}$, $\alpha<\beta$.}
\end{align*}
\end{definition}
The second part of the following proposition, that is, reconstructing a quasiconvex function from its sublevel sets,
is part of the mathematical folklore; the first part, too, appears implicitly or in a slightly different form elsewhere (e.g., in \cite{DEQEX,CR}).
\begin{proposition}\label{P:lowerlevelsets}
Let $Z$ be a convex set in a t.v.s.\ $X$.
\begin{enumerate}[(a)]
\item If $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ is an $\Omega(Z)$-family of relatively open convex subsets of $Z$,
then the function
$$
f(x):=\sup\{\alpha\in\mathbb{R}:\,x\notin D_\alpha\} \qquad (x\in Z)
$$
is a continuous quasiconvex function on $Z$ such that for each $\alpha\in\mathbb{R}$
\begin{equation}\label{lls}
[f<\alpha]=\bigcup_{\beta<\alpha}D_\beta
\quad\text{and}\quad
[f\le\alpha]=\bigcap_{\gamma>\alpha}D_\gamma\,.
\end{equation}
\item If $g$ is a continuous quasiconvex function on $Z$, then the sets
$$
D_\alpha:=[g<\alpha]\qquad(\alpha\in\mathbb{R})
$$
form an $\Omega(Z)$-family of relatively open convex subsets of $Z$. Moreover, the function
$f$ associated to $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ by (a) coincides with $g$.
\end{enumerate}
\end{proposition}
\begin{proof}
\
(a) It is clear that $f$ is well-defined, and it is an elementary exercise to show that it satisfies \eqref{lls}.
Since all the sets $[f<\alpha]$ are convex and relatively open in $Z$, $f$ is quasiconvex and upper semicontinuous on $Z$.
Moreover, the definition of an $\Omega(Z)$-family implies that
$[f\le\alpha]=\bigcap_{\gamma>\alpha}\overline{D}^Z_\gamma$ for each $\alpha\in\mathbb{R}$, and hence
$f$ is also lower semicontinuous on $Z$.
(b) The first part is clear. For the second part, if $x\in Z$ then we have $f(x)=\sup\{\alpha\in\mathbb{R}:\, g(x)\ge\alpha\}=g(x)$.
\end{proof}
\begin{example}
Consider the family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ of open convex sets in $Z=\mathbb{R}$ given by: $D_\alpha=\emptyset$ for $\alpha<0$,
$D_\alpha=(-\infty,0)$ for $0\le\alpha<1$, and $D_\alpha=\mathbb{R}$ for $\alpha\ge1$. Then the family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$
is nondecreasing but it is not an $\Omega(\mathbb{R})$-family. The corresponding function $f$ defined in Proposition~\ref{P:lowerlevelsets}(a)
is the characteristic function of the interval $[0,\infty)$, and hence it is quasiconvex but not lower semicontinuous.
\end{example}
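By contrast, Proposition~\ref{P:lowerlevelsets}(a) can be illustrated with the genuine $\Omega(\mathbb{R})$-family $D_\alpha=(-\alpha,\alpha)$ for $\alpha>0$ (and $D_\alpha=\emptyset$ otherwise), whose associated function is $f(x)=|x|$. The Python sketch below (not part of the paper) verifies this on a discretized grid of $\alpha$ values:

```python
# grid of alpha values with step 1e-4
alphas = [-5.0 + i * 1e-4 for i in range(100001)]

def in_D(x, alpha):
    # D_alpha = (-alpha, alpha) for alpha > 0, empty otherwise: an Omega(R)-family
    return alpha > 0 and -alpha < x < alpha

def f(x):
    # discretized version of f(x) = sup { alpha : x not in D_alpha }
    return max(a for a in alphas if not in_D(x, a))

for x in [-2.0, -0.5, 0.0, 1.3]:
    assert abs(f(x) - abs(x)) < 1e-3   # the reconstructed function is |x|
```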
Let $Y$ be a closed subspace of a t.v.s. $X$. Let us recall that the {\em quotient-space topology} on $X/Y$ is the vector
topology whose open sets are the sets of the form $q(A)$, $A\subset X$ open,
where $q\colon X\to X/Y$ is the quotient map. Moreover, $X/Y$ is locally convex whenever $X$ is (see, e.g., \cite{KN}). We shall also say
that a set $E\subset X$ is a {\em convexly $G_\delta$-set} if $E$ is the intersection of a sequence of open convex sets.
Also recall that a convex set $C$ in a t.v.s.\ is called {\em algebraically bounded} if it contains no half-lines.
Since the proof of the following technical lemma is a bit long, we postpone it to the Appendix.
\begin{lemma}\label{L:G-delta}
Let $X$ be a t.v.s., $Y\subset X$ a subspace, and $A\subset X$ an open convex set intersecting $Y$.
Consider the following three groups of conditions.
\begin{enumerate}
\item[A.]
\begin{enumerate}
\item[($a_1$)] There exists a continuous quasiconvex function $\delta\colon A\to[0,\infty)$ such that
$\delta^{-1}(0)=A\cap Y$;
\item[($a_2$)] $A\cap Y$ is a convexly $G_\delta$-set in $X$.
\end{enumerate}
\item[B.]
\begin{enumerate}
\item[($b_1$)] There exists a continuous quasiconvex function $f\colon X/Y\to[0,\infty)$ such that
$f^{-1}(0)=\{0\}$;
\item[($b_2$)] the origin of $X/Y$ is a convexly $G_\delta$-set in $X/Y$.
\end{enumerate}
\item[C.]
\begin{enumerate}
\item[($c_1$)] There exists a continuous convex function $d\colon X\to[0,\infty)$ such that $d^{-1}(0)=Y$;
\item[($c_2$)] there exists a continuous seminorm $p$ on $X$ such that $p^{-1}(0)=Y$;
\item[($c_3$)] there exists an open convex set $V\subset X$ such that $\bigcap_{\varepsilon>0}\varepsilon V=Y$;
\item[($c_4$)] $X/Y$ contains an algebraically bounded convex neighborhood of $0\in X/Y$;
\item[($c_5$)] there exists a continuous norm on $X/Y$.
\end{enumerate}
\end{enumerate}
\noindent
Then the conditions of any single group are mutually equivalent and, for
$i,j\in\{1,2\}$ and $k\in\{1,\dots,5\}$, one has the implications
$$
(a_i)\Leftarrow(b_j)\Leftarrow(c_k).
$$
Moreover, if $A=X$ then also $(a_i) \Rightarrow (b_j)$.
\end{lemma}
\begin{proof}
See Appendix.
\end{proof}
\section{Two extension properties and their characterizations}\label{S:2ext_prop}
\begin{definition}
Let $Y$ be a subspace of a t.v.s.\ $X$, and $A\subset X$ an open convex set that intersects $Y$.
We shall say that the couple $(X,Y)$ has:
\begin{enumerate}[(a)]
\item the {\em $CE(A)$-property} if each continuous convex function $f\colon A\cap Y\to\mathbb{R}$ can be extended to a continuous convex
function $F\colon A\to\mathbb{R}$;
\item the {\em $QE(A)$-property} if each continuous quasiconvex function $f\colon A\cap Y\to\mathbb{R}$ can be extended to
a continuous quasiconvex function $F\colon A\to\mathbb{R}$.
\end{enumerate}
\end{definition}
The following characterization of the $CE(A)$-property was proved in \cite[Theorem~2.6]{DEVEX1}.
\begin{theorem}\label{all}
Let $X$ be a t.v.s., $Y\subset X$ a subspace, and $A\subset X$ an open convex set intersecting $Y$. Then the following assertions are equivalent.
\begin{enumerate}
\item[(i)] Every continuous convex function on $A\cap Y$ admits a continuous convex extension to $A$.
\item[(ii)] For every sequence $\{C_n\}$ of open convex sets in $Y$ such that $C_n\nearrow A\cap Y$, there exists a sequence $\{D_n\}$ of open convex sets in $X$ such that $D_n\nearrow A$ and $D_n\cap Y=C_n$, for each $n\in\mathbb{N}$.
\item[(ii')] For every sequence $\{C_n\}$ of open convex sets in $Y$ such that $C_n\nearrow A\cap Y$, there exists a sequence $\{D_n\}$ of open convex sets in $X$ such that $D_n\nearrow A$ and $D_n\cap Y\subset C_n$, for each $n\in\mathbb{N}$.
\end{enumerate}
\end{theorem}
The aim of the present short section is to provide a characterization of the $QE(A)$-property in a similar spirit. In this case, sequences are not enough; we have to consider
monotone families of convex sets defined
in Definition~\ref{D:Omega}. The main tool is Proposition~\ref{P:lowerlevelsets}.
\begin{theorem}\label{teo:CharExQuasiconv}
Let $X$ be a t.v.s., $Y\subset X$ a subspace, and $A\subset X$ an open convex set intersecting $Y$. Then the following assertions are equivalent.
\begin{enumerate}
\item $(X,Y)$ has the
$\mathrm{QE}(A)$-property.
\item For each $\Omega(A\cap Y)$-family $\{C_\alpha\}_{\alpha\in\mathbb{R}}$ of open convex sets in $Y$
there exists an $\Omega(A)$-family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ of open convex sets in $X$ such that
\begin{equation}\label{eqA}
\bigcup_{\gamma<\alpha}C_\gamma \subset D_\alpha\cap Y\subset C_\alpha \qquad(\alpha\in\mathbb{R}).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof} $(\mathrm{i})\Rightarrow(\mathrm{ii})$.
Let $\{C_\alpha\}_{\alpha\in\mathbb{R}}$ be an $\Omega(A\cap Y)$-family of open convex sets in $Y$.
By Proposition~\ref{P:lowerlevelsets}(a), the function $f:A\cap Y\to \mathbb{R}$, $f(y):=\sup\{\alpha\in\mathbb{R}: y\not\in C_\alpha\}$, is a continuous quasiconvex function
on $A\cap Y$ such that $\bigcup_{\gamma<\alpha}C_\gamma=[f<\alpha]$.
Let $F\colon A\to\mathbb{R}$ be a continuous quasiconvex extension of $f$. For $\alpha\in\mathbb{R}$, define $D_\alpha:=[F<\alpha]$, and observe that
$\overline{D}^A_\alpha\subset D_\beta$ whenever $\alpha<\beta$ are real numbers. Thus $\{D_\alpha\}_{\alpha\in\mathbb{R}}$
is an $\Omega(A)$-family of open convex sets in $X$. Moreover,
$\bigcup_{\gamma<\alpha}C_\gamma=[f<\alpha]=D_\alpha\cap Y\subset C_\alpha$ for each $\alpha\in\mathbb{R}$.
$(\mathrm{ii})\Rightarrow(\mathrm{i})$. Let $f\colon A\cap Y\to \mathbb{R}$ be a continuous quasiconvex function. For each $\alpha\in\mathbb{R}$ define
$C_\alpha:=[f<\alpha]$, and notice that $\{C_\alpha\}_{\alpha\in\mathbb{R}}$
is an $\Omega(A\cap Y)$-family of open convex sets in $Y$.
Let $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ be an $\Omega(A)$-family of open convex sets in $X$ that satisfies \eqref{eqA}.
By Proposition~\ref{P:lowerlevelsets}, the function $F\colon A\to \mathbb{R}$, given by $F(x):=\sup\{\alpha\in\mathbb{R} : x\not\in D_\alpha \}$,
is continuous and quasiconvex on $A$. Moreover, Proposition~\ref{P:lowerlevelsets}(b) gives that
for every $y\in A\cap Y$ we have $F(y)=\sup\{\alpha\in\mathbb{R} : y\notin C_\alpha\}=f(y)$.
Hence $F$ extends $f$, and we are done.
\end{proof}
\section{Relationship between the properties $QE(A)$ and $CE(A)$}\label{S:relationship}
The aim of the present section is to show that the quasiconvex extension property $QE(A)$ implies the
convex extension property $CE(A)$, and that the two properties are equivalent under some additional
assumptions. We shall also see that the $QE(A)$-property is possible only for closed subspaces $Y$. Notice that
this is not the same for the $CE(A)$-property; indeed, by \cite[Lemma~3.2]{DEVEX1} and its proof,
{\em $(X,Y)$ has the $CE(A)$-property if and only if $(X,\overline{Y}^X)$ has the $CE(A)$-property.}
\begin{theorem}\label{QEimpliesCE}
Let $X$ be a t.v.s., $Y\subset X$ a subspace, and $A\subset X$ an open convex set that intersects $Y$.
Assume that the couple $(X,Y)$ has the $QE(A)$-property. Then $(X,Y)$ has the $CE(A)$-property.
If, moreover, the singleton $\{0\}\subset X$ is a convexly $G_\delta$-set (e.g., if $X$ is locally convex and metrizable)
then $Y$ is closed.
\end{theorem}
\begin{proof}
Assume that $(X,Y)$ has the $QE(A)$-property.
To show that $(X,Y)$ has the $CE(A)$-property, it suffices to verify condition (ii') in Theorem~\ref{all}.
Let $\{C_n\}_{n\in\mathbb{N}}$ be a sequence of open convex sets in $Y$ such that $C_n\nearrow A\cap Y$. We can (and do)
assume that $0\in C_1$. Let us define a family $\{B_\alpha\}_{\alpha\in\mathbb{R}}$ as follows.
For $n\in\mathbb{N}$ put $C_n':=\frac{n}{n+1} C_n$, and notice that these sets satisfy:
\begin{itemize}
\item $C_n'\nearrow A\cap Y$;
\item for each $n\in\mathbb{N}$, there exists $\lambda_{n+1}\in(0,1)$ such that $C'_n\subset\lambda_{n+1}C'_{n+1}$.
\end{itemize}
Put $\lambda_1:=1/2$ and, for each $n\in\mathbb{N}$, consider the increasing affine function $\varphi_n:\mathbb{R}\to\mathbb{R}$ defined by $$\varphi_n(t)=t-n+(n+1-t)\lambda_{n},\qquad\qquad t\in\mathbb{R}.$$
Notice that $\varphi_n(n)=\lambda_{n}$ and $\varphi_n(n+1)=1$.
Now, given a real number $\alpha\ge1$, take the unique $n\in\mathbb{N}$ with $\alpha\in[n,n+1)$, and define
$
B_\alpha:=\varphi_n(\alpha)\,C'_{n}$. For $\alpha<1$, define $B_\alpha:=\emptyset$. Observe that $B_\alpha\subset C_n'\subset\lambda_{n+1}C_{n+1}'=B_{n+1}$ for $\alpha\in[n,n+1)$ and $n\in\mathbb{N}$.
It is easy to verify that $\{B_\alpha\}_{\alpha\in\mathbb{R}}$ is an $\Omega(A\cap Y)$-family of open convex sets in $Y$.
By Theorem~\ref{teo:CharExQuasiconv}, there exists an $\Omega(A)$-family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ of open convex sets in $X$
that satisfies \eqref{eqA}. In particular, $D_n\nearrow A$, and
$D_n\cap Y\subset B_{n}\subset C_n$ for each $n\in\mathbb{N}$.
By Theorem~\ref{all}, $(X,Y)$ has the $CE(A)$-property.
Now, let the origin of $X$ be a convexly $G_\delta$-set.
If $Y$ is not closed, by Lemma~\ref{closure}(c) there exists $\bar{x}\in\overline{A\cap Y}^A\setminus Y$. Clearly
$\bar{x}\in A$. By Lemma~\ref{L:G-delta} there exists a continuous quasiconvex function $\delta\ge0$ on $X$ such that $\delta^{-1}(0)=\{0\}$.
Then the formula $f(y):=\log\delta(y-\bar{x})$ defines a continuous quasiconvex function on $A\cap Y$ that admits no
continuous extension defined at $\bar{x}$. But this contradicts the $QE(A)$-property.
\end{proof}
The remaining part of the present section is devoted to the implication ``$\,CE(A)\Rightarrow QE(A)\,$''.
Let us start with the following technical lemma.
\begin{lemma}\label{L:cl}
Let $X$ be a t.v.s., $Y\subset X$ a closed subspace, and
$A\subset X$ an open convex set intersecting $Y$. Let $d\colon A\to[0,\infty)$
be a continuous convex function such that $d^{-1}(0)=A\cap Y$.
Let $U, D_2$ be open convex subsets of $A$, and $C$ an open convex set in $Y$.
Assume also that:
\begin{enumerate}[(a)]
\item $U\cap Y\subset C$ and $\overline{C}^{A\cap Y}\subset D_2$;
\item there exists a neighborhood $W\subset X$ of the origin such that $U+W\subset D_2$;
\item $d(U)$ is bounded.
\end{enumerate}
Then the set $D_1:=\mathrm{conv}(U\cup C)$ satisfies
$\overline{D}_1^A\subset D_2$.
\end{lemma}
\begin{proof}
Fix an arbitrary $x\in \overline{D}_1^A$. Then $x$ is the limit of a net $(z_i)_{i\in I}$ in $D_1$,
and hence we can write
$$
x=\lim_{i\in I} \bigl[ t_i u_i +(1-t_i)c_i \bigr]
\quad\text{with}\quad t_i\in[0,1],\ u_i\in U,\ c_i\in C.
$$
Passing to a subnet if necessary,
we can (and do) assume that $t_i\to\bar{t}\in[0,1]$.
If $\bar{t}=0$ then by the properties of $d$ we have
$$
d(x)\le\liminf_{i}[t_i d(u_i)+(1-t_i) d(c_i)]=\liminf_{i}t_i d(u_i)=0,
$$
and hence $x\in A\cap Y$.
By Lemma~\ref{conv}(a) and Lemma~\ref{closure},
$x\in \overline{D}_1^X\cap Y\cap A=\overline{D_1\cap Y}^X\cap A=\overline{C}^Y\cap A\subset D_2$.
If $\bar{t}>0$, fix $\eta>0$ such that $\eta/\bar{t}<1$, and then choose
$i\in I$ so that $x\in t_i u_i +(1-t_i)c_i+\eta W$ and $\eta/t_i<1$. Then
\begin{align*}
x &\textstyle \in
t_i U +\eta W +(1-t_i)C = t_i\bigl(U+\frac{\eta}{t_i}W\bigr)+(1-t_i)C \\
&\subset t_i(U+W)+(1-t_i)C\subset D_2.
\end{align*}
This completes the proof.
\end{proof}
\begin{observation}\label{O:LC-like}
Let $Y$ be a subspace of a t.v.s.\ $X$, and $A\subset X$ an open convex set intersecting $Y$.
Let $(X,Y)$ have the $CE(A)$-property. Then for each open convex set $C\subset A\cap Y$ in $Y$
there exists an open convex set $H\subset A$ in $X$ such that $H\cap Y=C$.
\noindent
\rm
(Indeed, it suffices to apply Theorem~\ref{all}(ii) with $C_1:=C$ and $C_n:=A\cap Y$ for any integer $n\ge2$.)
\end{observation}
\begin{theorem}\label{CEimpliesQE}
Let $Y$ be a closed subspace of a t.v.s.\ $X$, and $A\subset X$ an open convex set that intersects $Y$.
Assume there exists a continuous convex function $d\colon A\to[0,\infty)$ such that $d^{-1}(0)=A\cap Y$.
If the couple $(X,Y)$ has the $CE(A)$-property then it has the $QE(A)$-property as well.
\end{theorem}
\begin{proof}
Let $(X,Y)$ have the $CE(A)$-property, and let $\{C_\alpha\}_{\alpha\in\mathbb{R}}$ be an $\Omega(A\cap Y)$-family of
open convex sets in $Y$.
The set $J:=\{\alpha\in\mathbb{R} : C_\alpha\ne\emptyset\}$ is clearly an upper-unbounded interval. If $J=[\alpha_0,\infty)$,
we can (and do) assume that $\alpha_0=1$. Otherwise, $J$ is an open interval and we can (and do) assume that $J=\mathbb{R}$.
In both cases, $C_1$ is nonempty, and hence we can (and do) assume that $0\in C_1$.
Since $C_n\nearrow A\cap Y$ as $n\in\mathbb{N}$ tends to infinity, by Theorem~\ref{all} there exists a sequence
$\{E_n\}_{n\in\mathbb{N}}$ of open convex sets in $X$ such that $E_n\nearrow A$, and $E_n\cap Y=C_n$ for each $n\in\mathbb{N}$.
Define
$K_n:=\frac{n}{n+1}\bigl(E_n\cap[d<n]\bigr)$ ($n\in\mathbb{N}$), and notice that these sets satisfy:
$$
K_n\nearrow A,\quad K_n\subset [d<n],\quad\text{and}\quad
K_n\subset\lambda_{n+1}K_{n+1}\quad\text{with}\quad \lambda_{n+1}\in(0,1).
$$
For every $\alpha\ge1$ take the unique $n\in\mathbb{N}$ with $\alpha\in[n,n+1)$, and define
$\varphi_n(\alpha):=\alpha-n+(n+1-\alpha)\lambda_{n}$,
$$
U_\alpha:=\varphi_n(\alpha) K_{n}
\quad\text{and}\quad
D_\alpha:=\mathrm{conv}(U_\alpha\cup C_\alpha).
$$
Notice that $\varphi_n(n)=\lambda_n$, $\varphi_n(n+1)=1$, and $\varphi_n$ is
an increasing affine function.
Since $U_\alpha\cap Y=\varphi_n(\alpha)\frac{n}{n+1}C_n\subset C_n\subset C_\alpha$, we have $D_\alpha\cap Y= C_\alpha$
by Lemma~\ref{conv}. Moreover, since $U_\alpha\subset K_n$, $d$ is bounded on $U_\alpha$.
It is easy to see that for real numbers $1\le\alpha<\beta$ we always have
$0\in U_\alpha \subset\lambda_{\alpha,\beta}U_\beta$
for some $\lambda_{\alpha,\beta}\in(0,1)$.
In the case when $J=[1,\infty)$, we easily complete the proof by defining $D_\alpha:=\emptyset$ for $\alpha<1$;
indeed, an application of Lemma~\ref{L:cl} shows that then the family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$
has the properties from Theorem~\ref{teo:CharExQuasiconv}(ii).
Now consider the case $J=\mathbb{R}$. Let us construct open convex sets $D_\alpha$ ($\alpha<1$) proceeding by induction
with respect to the (unique) integer $n\le 0$ such that $\alpha\in[n,n+1)$.
Assume that, for some integer $n\le0$, we have already constructed $D_\beta$ for all $\beta\ge n+1$.
(For $n=0$, this is the case.) By Observation~\ref{O:LC-like}, there exists an open convex set $H_n\subset A$
such that $H_n\cap Y=C_n$. Define $E_n:=H_n\cap D_{n+1}\cap[d<\frac1{|n|+2}]$, and notice that
$E_n\cap Y\subset C_n$.
Fix an arbitrary $y_n\in E_n\cap Y$, and consider the open convex set $W_n:=(1/2)(E_n-y_n)$ containing $0$.
Then we have $y_n+2W_n=E_n$. For $\alpha\in[n,n+1)$ we define
$$
U_\alpha:=(y_n+W_n) + (\alpha-n)W_n
\quad\text{and}\quad
D_\alpha:=\mathrm{conv}(U_\alpha\cup C_\alpha).
$$
Since $U_\alpha\cap Y\subset E_n\cap Y\subset C_n\subset C_\alpha$, we have
$D_\alpha\cap Y=C_\alpha$ by Lemma~\ref{conv}. Notice also that
$U_\alpha+(n+1-\alpha)W_n=E_n\subset D_{n+1}\cap[d<\frac1{|n|+2}]$ for each such $\alpha$.
So we have defined a family $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ of open convex sets in $X$.
An application of Lemma~\ref{L:cl} allows us to conclude that
$\overline{D}_\alpha^A\subset D_{\beta}$ for any couple of real numbers $\alpha<\beta$.
Moreover, $\bigcap_{\alpha\in\mathbb{R}}D_\alpha=\bigcap_{n\in\mathbb{N}}D_{-n}\subset d^{-1}(0)=A\cap Y$ and hence
$\bigcap_{\alpha\in\mathbb{R}}D_\alpha=\bigcap_{\alpha\in\mathbb{R}}D_\alpha\cap Y=\bigcap_{\alpha\in\mathbb{R}}C_\alpha=\emptyset$.
Now, it easily follows that
$\{D_\alpha\}_{\alpha\in\mathbb{R}}$ is an $\Omega(A)$-family such that $D_\alpha\cap Y=C_\alpha$ for each $\alpha\in\mathbb{R}$.
In particular, it satisfies the condition (ii) in Theorem~\ref{teo:CharExQuasiconv}, as needed.
\end{proof}
\section{Some applications}\label{S:appl}
Let us start with a simple result on extensions of continuous quasiconvex functions that preserve the range and the
set of minimizers.
\begin{proposition}\label{better}
Let $X$ be a t.v.s., $Y\subset X$ a closed subspace, and $A\subset X$ an open convex set that intersects $Y$.
Assume also that:
\begin{enumerate}[(a)]
\item the couple $(X,Y)$ has the $QE(A)$-property;
\item there exists a continuous quasiconvex function $\delta\colon A\to[0,+\infty)$ such that $\delta^{-1}(0)=A\cap Y$.
\end{enumerate}
Then each non-constant continuous quasiconvex function $f\colon A\cap Y\to\mathbb{R}$ admits a continuous quasiconvex extension
$F\colon A\to \mathbb{R}$ such that $F(A)=f(A\cap Y)$ and,
for $a:=\inf F(A)$,
$$
\{ x\in A : F(x)=a\}=\{ y\in A\cap Y : f(y)=a\}.
$$
\end{proposition}
\begin{proof}
Given $f$, denote $a:=\inf f(A\cap Y)$ and $b:=\sup f(A \cap Y)$, and notice that $a<b$ since $f$ is not constant.
If $f<b$ on $A\cap Y$, fix an increasing homeomorphism $\varphi$ of $(-\infty,b)$ onto $\mathbb{R}$. Then $g:=\varphi\circ f$ is a
(non-constant) continuous quasiconvex function on $A\cap Y$. By (a), $g$ admits a continuous quasiconvex extension
$G\colon A\to\mathbb{R}$. Then $F_1:=\varphi^{-1}\circ G$ is a continuous quasiconvex extension of $f$ such that $F_1<b$.
If $b\in f(A\cap Y)$, take an arbitrary continuous quasiconvex extension $G\colon A\to \mathbb{R}$ of $f$, and define
$F_1(x):=\min\{G(x),b\}$ ($x\in A$). Then $F_1$ is a continuous quasiconvex extension of $f$ such that $F_1\le b$.
If $a=-\infty$ we simply put $F:=F_1$, and we are done. Now, assume that $a\in\mathbb{R}$. Fix some $\varepsilon>0$ such that $a+\varepsilon(\pi/2)<b$,
and define $F(x):=\max\{a+\varepsilon\arctan{\delta(x)}, F_1(x)\}$ ($x\in A$). Then $F$ is clearly continuous and quasiconvex,
and $F(x)=F_1(x)$ whenever $F_1(x)>a+\varepsilon(\pi/2)$. Moreover,
for $x\in A$, one has
$F(x)=a$ if and only if $x\in Y$ and $f(x)=F_1(x)=a$. It follows that $F$ has the desired properties.
\end{proof}
\begin{remark}\label{constant}
Notice that, under the assumptions of the above theorem, if $f\equiv c$ is constant on $A\cap Y$ then
$F(x):=c+\delta(x)$ ($x\in A$) is a quasiconvex extension of $f$ with the same set of minimizers (but, of course, not the same
range).
\end{remark}
Thanks to Theorem~\ref{CEimpliesQE}, we can now obtain some sufficient conditions for extendability of continuous quasiconvex functions.
Let us state some examples. A t.v.s.\ is called {\em conditionally separable} (see \cite{DEVEX1}) if for each neighborhood $V$ of $0$
there exists a countable set $E$ such that $E+V=X$. For metrizable spaces, conditional separability is equivalent to separability.
In general, separability implies conditional separability but not vice-versa.
\begin{theorem}[Extension to the whole space]\label{applCE}
Let $Y$ be a closed subspace of a t.v.s.\ $X$.
Assume that at least one of the following conditions holds.
\begin{enumerate}[(a)]
\item $Y$ is complemented in $X$.
\item $X$ is locally convex, and $X/Y$ is conditionally separable and admits a continuous norm.
\end{enumerate}
Then each continuous quasiconvex function $f$ on $Y$ admits a continuous quasiconvex extension $F$ defined on
the whole $X$.
Moreover, if $f$ is non-constant and, in the case (a) also the origin of $X/Y$ is a countable intersection of open convex sets,
then the extension $F$ can be chosen so that\ \
$F(X)=f(Y)$\ \ and\ \ $\mathrm{arg\,min}_X F= \mathrm{arg\,min}_{Y} f\,$.
\end{theorem}
\begin{proof}
(a) Let $P\colon X\to Y$ be a continuous linear projection onto $Y$. Then clearly $F:=f\circ P$ is
a continuous quasiconvex extension of $f$. So $(X,Y)$ has the $QE(X)$-property. The second part
follows from Lemma~\ref{L:G-delta} and Proposition~\ref{better}.
(b) By \cite[Theorem~4.5]{DEVEX1}, $(X,Y)$ has the $CE(X)$-property. By Lemma~\ref{L:G-delta} and
Theorem~\ref{CEimpliesQE}, $(X,Y)$ has the $QE(X)$-property. The last part follows again from Proposition~\ref{better}.
\end{proof}
Let us recall that a Banach space $X$ is said to have the {\em separable complementation property}
if every separable subspace of $X$ is contained in a separable complemented subspace of $X$.
It is known (see e.g.\ \cite[pp.\ 481--482]{PY}) that this property holds in any of the following cases:
(i) $X$ is weakly compactly generated;
(ii) $X$ is dual and has the Radon-Nikod\'ym property;
(iii) $X$ has a countably norming M-basis;
(iv) $X$ is a Banach lattice not containing $c_0$.
\begin{theorem}[Extension to an open convex set]\label{applSCE}
Let $A$ be an open convex set in a locally convex t.v.s.\ $X$, and $Y\subset X$
a closed subspace such that $X/Y$ admits a continuous norm.
Assume that $A\cap Y\ne\emptyset$ and at least one of the following conditions is satisfied.
\begin{enumerate}[(a)]
\item $Y$ is finite-dimensional.
\item $X$ is conditionally separable.
\item $Y$ is conditionally separable and complemented.
\item $X$ is a Banach space with the separable complementation property and $Y$ is separable.
\item $A\cap Y$ is weakly open in $Y$, and either $Y$ is complemented or $X/Y$ is conditionally separable.
\item $Y$ is a Banach space, $A\cap Y$ is a countable intersection of open half-spaces in $Y$, and
either $Y$ is complemented or $X/Y$ is conditionally separable.
\end{enumerate}
Then each
continuous quasiconvex function $f$ on $A\cap Y$ can be extended to a continuous quasiconvex function $F$
on $A$.
Moreover, if $f$ is non-constant then the extension $F$ can be chosen so that\ \
$F(A)=f(A\cap Y)$\ \ and\ \ $\mathrm{arg\,min}_A F= \mathrm{arg\,min}_{A\cap Y} f\,$.
\end{theorem}
\begin{proof} Since $X/Y$ admits a continuous norm, the equivalence of the conditions in part~C of Lemma~\ref{L:G-delta} ensures the existence of a continuous seminorm on $X$ whose kernel is $Y$.
By Theorem~\ref{CEimpliesQE} it suffices to show that $(X,Y)$ has the $CE(A)$-property. This is true
for (a) by \cite[Proposition~3.7(b)]{DEVEX1}. The cases (b),(c) and (d)
were proved in \cite[Corollary~2.3]{DEVEX2}.
If either $Y$ is complemented or $X/Y$ is conditionally separable then $(X,Y)$ has the $CE(X)$-property
by Proposition~3.3(a) and Theorem~4.5 in \cite{DEVEX1}. Now, (e) and (f) follow from
Corollary~2.4(a) and Corollary~2.6(a) in \cite{DEVEX2}, respectively.
Finally, the last part follows from Proposition~\ref{better}.
\end{proof}
Let us finally present some examples in which $QE(X)$ does not hold.
As a consequence of Theorem~\ref{QEimpliesCE}, \cite[Proposition~4.4]{DEVEX1}, \cite[Proposition~4.5]{DEVEX1}, \cite[Corollary~4.6]{DEVEX1}, and \cite[Remark~5.6]{DEVEX1}, we immediately obtain the following corollary.
\begin{corollary}
The couple $(X,Y)$ fails the ${QE}(X)$-property in any of
the following cases.
\begin{enumerate}
\item $Y$ is an infinite-dimensional
closed subspace of $X=\ell_\infty(\Gamma)$ isomorphic to some $c_0(\Lambda)$ or
$\ell_p(\Lambda)$ with $1<p<\infty$.
\item $Y$ is an infinite-dimensional
closed subspace of $X=\ell_\infty(\Gamma)$ not containing any isomorphic copy of $\ell_1$.
\item $X$ is a Grothendieck Banach space but this is not the case for $Y$.
\item $Y$ is a separable nonreflexive Banach space, considered as a subspace of $X=\ell_\infty$.
\item $X=L^p([0,1])$ with $0<p<1$ and $Y$ is a finite-dimensional subspace of $X$.
\end{enumerate}
In particular, $(\ell_\infty,c_0)$ fails the ${QE}(X)$-property.
\end{corollary}
\section*{Appendix: proof of Lemma~\ref{L:G-delta}}
\begin{proof}[Proof of Lemma~\ref{L:G-delta}]
\
A. The implication
$(a_1)\Rightarrow(a_2)$ follows immediately by considering the open convex sets $U_n:=[\delta<1/n]$ ($n\in\mathbb{N}$).
Let us show that
$(a_2)\Rightarrow(a_1)$. We can (and do) assume that $0\in A$.
Let $\{U_k\}_{k\in\mathbb{N}}$ be a decreasing sequence of open
convex sets in $A$ such that $\bigcap_{k\in\mathbb{N}}U_k=A\cap Y$.
Define $V_k:=\frac{k+1}{k}U_k$ ($k\in\mathbb{N}$), and notice that
$$
\text{$\textstyle V_{k+1}\subset\frac{k(k+2)}{(k+1)^2}V_k$
($k\in\mathbb{N}$), and $\textstyle A\cap Y\subset \bigcap_{k\in\mathbb{N}}V_k\subset 2\bigcap_{k\in\mathbb{N}}U_k=2A\cap Y$.}
$$
Now let us define
a family $\{W_n\}_{n\in\mathbb{Z}}$ of open convex sets in $X$ by
$$
W_n:=V_{|n|}\ \text{for $n<0$ integer,}\ \text{and}\
W_n:=(n+2)V_1\ \text{for $n\ge0$ integer.}
$$
It is easy to see that, for each $n\in\mathbb{Z}$,
$$
W_n\subset\lambda_{n+1}W_{n+1}\quad\text{with some $\lambda_{n+1}\in(0,1)$.}
$$
Let us extend $\{W_n\}_{n\in\mathbb{Z}}$ to a family $\{W_\alpha\}_{\alpha\in\mathbb{R}}$ by defining
$$
W_\alpha:=\varphi_n(\alpha)W_n\quad\text{whenever $\alpha\in[n,n+1)$, $n\in\mathbb{Z}$,}
$$
where $\varphi_n(\alpha):=(n+1-\alpha)\lambda_n+(\alpha-n)$.
Since each $\varphi_n$ is an increasing affine function such that $\varphi_n(n)=\lambda_n$ and $\varphi_n(n+1)=1$,
it is easy to see that for each couple of reals $\alpha<\beta$ we have
$W_\alpha\subset\lambda_{\alpha,\beta}W_\beta$ with $\lambda_{\alpha,\beta}\in(0,1)$, and hence
$\overline{W}^{X}_\alpha\subset W_\beta$. Moreover,
$A\cap Y\subset\bigcap_{\alpha\in\mathbb{R}}W_\alpha\subset 2A\cap Y$ and $\bigcup_{\alpha\in\mathbb{R}}W_\alpha=X$.
Fix an increasing homeomorphism $h$ of $(0,\infty)$ onto $\mathbb{R}$, and define
$$
D_\alpha:=W_{h(\alpha)}\cap A\ \text{for $\alpha>0$,}
\quad\text{and}\quad
D_\alpha:=\emptyset\ \text{for $\alpha\le0$.}
$$
Then $\{D_\alpha\}_{\alpha\in\mathbb{R}}$ is an $\Omega(A)$-family of open convex sets in $X$.
Proposition~\ref{P:lowerlevelsets} assures that the formula
$$
\delta(x):=\sup\{\alpha\in\mathbb{R}; \, x\notin D_\alpha\}\quad(x\in A)
$$
defines a continuous quasiconvex real-valued function on $A$. Clearly, $\delta\ge0$ and by Proposition~\ref{P:lowerlevelsets},
$$
\delta^{-1}(0)=[\delta\le 0]=\bigcap_{\alpha>0}D_\alpha=\bigcap_{n\in\mathbb{N}}V_n\cap A=A\cap Y.
$$
B. The equivalence $(b_1)\Leftrightarrow(b_2)$ follows by the first part of the proof
(applied with $Y=\{0\}\subset X/Y$).
C. Conditions $(c_k)$.
$(c_4)\Rightarrow(c_5)$. If ($c_4$) holds, there exists $U\subset X/Y$,
an algebraically bounded, symmetric, open convex neighborhood of the origin in $X/Y$.
The Minkowski gauge of $U$ is a continuous seminorm on $X/Y$, which is even a norm since
$U$ is algebraically bounded.
$(c_5)\Rightarrow(c_2)$. If $\nu$ is a continuous norm on $X/Y$, then $p:= \nu\circ q$ (where $q$ is the quotient map)
satisfies ($c_2$).
$(c_2)\Rightarrow(c_1)$ is obvious.
$(c_1)\Rightarrow(c_3)$. If $d$ is as in ($c_1$), let us define $V:=[d<1]$. If $y\in Y$ then
$y/\varepsilon\in V$ ($\varepsilon>0$), and hence $y\in\bigcap_{\varepsilon>0}\varepsilon V$. On the other hand,
if $x\in\bigcap_{\varepsilon>0}\varepsilon V$ then $d(x/\varepsilon)<1$ for each $\varepsilon>0$. By convexity,
for each $\varepsilon\in(0,1)$ we obtain $d(x)=d\bigl(\varepsilon(x/\varepsilon)+(1-\varepsilon)0\bigr)\le \varepsilon d(x/\varepsilon)<\varepsilon$.
Then $d(x)=0$, that is, $x\in Y$. Thus $V$ satisfies ($c_3$).
$(c_3)\Rightarrow(c_4)$. Let $V$ be as in ($c_3$). It is easy to see that $V+Y=V$ since $V$ is convex and open.
It follows that
the open convex neighborhood $U:=q(V)$ of $0\in X/Y$ satisfies $\bigcap_{\varepsilon>0}\varepsilon U=\{0\}$.
This implies that $U$ is algebraically bounded. We are done.
D. The implication $(c_5)\Rightarrow(b_2)$ is obvious. If $(b_2)$ holds then the function $\delta:=f\circ q$
(where $q\colon X\to X/Y$ is the quotient map) satisfies the condition contained in $(a_1)$.
E. Now, for $A=X$, let us show that $(a_2)\Rightarrow(b_1)$.
Let $U_n\subset X$ ($n\in\mathbb{N}$) be open convex sets such that $\bigcap_{n\in\mathbb{N}}U_n=Y$.
For each $n\in\mathbb{N}$, $U_n+Y=U_n$ since $U_n$ is convex and open. It follows easily that the intersection of the open convex sets
$V_n:=q(U_n)\subset X/Y$ ($n\in\mathbb{N}$) is just the origin of $X/Y$.
\end{proof}
\end{document}
\begin{document}
\title{Theoretical Analysis of a Polarized Two-Photon Michelson Interferometer with Broadband Chaotic Light}
\author{Yu Zhou}
\affiliation{MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Department of Applied Physics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Sheng Luo}
\affiliation{MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Department of Applied Physics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Jianbin Liu}
\email{[email protected]}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Huaibin Zheng}
\email{[email protected]}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Hui Chen}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Yuchen He}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\author{Yanyan Liu}
\affiliation{Science and Technology on Electro-Optical Information Security Control Laboratory, Tianjin 300308, China}
\author{Fuli Li}
\affiliation{MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Department of Applied Physics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
}
\author{Zhuo Xu}
\affiliation{Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education $\&$ International Center for Dielectric Research, School of Electronic Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\date{\today}
\begin{abstract}
In this paper, we study two-photon interference of broadband chaotic light in a Michelson interferometer with a two-photon-absorption detector. The theoretical analysis is based on two-photon interference and Feynman path integral theory. The two-photon coherence matrix is introduced to calculate the second-order interference pattern with polarizations taken into account. Our study shows that polarization, like time and space, is another dimension with which to tune the pattern in the two-photon interference process. It can act as a switch to manipulate the interference process and opens the door to many new experimental schemes.
\end{abstract}
\maketitle
\section{\label{sec:level1}Introduction}
The Michelson interferometer (MI), an important instrument for studying the temporal coherence of electromagnetic (EM) fields, has been applied to many important scientific projects, including the well-known Laser Interferometer Gravitational-wave Observatory (LIGO) \cite{Harry2010Advanced}. A two-photon absorption (TPA) detector can be triggered by a pair of photons when the difference of their arrival times is on the order of a few femtoseconds \cite{2002Ultrasensitive,1998Generation,1998Ultrahigh}. The combination of an MI with a TPA detector is used to study the Hanbury Brown
and Twiss (HBT) effect of chaotic thermal light: the coherence time of chaotic light, on the order of femtoseconds, is too short for ordinary detectors, whereas the MI provides the interference paths and the TPA detector responds within this ultra-short coherence time. Much state-of-the-art research has been done with such setups, for example measuring the photon bunching effect of real chaotic light from a black body \cite{boitier2009measuring}, observing the interference between photon pairs from independent chaotic sources \cite{2011Indistinguishable}, and determining the polarization time of unpolarized light \cite{shevchenko2017polarization}. Similar setups have also been used to recover hidden polarization \cite{2018Recovering} and to perform ultra-broadband ghost imaging \cite{2015Ultrabroadband}. Besides chaotic sources, quantum light sources such as entangled photon pairs and ultra-bright twin beams have also been studied with this kind of setup \cite{2012Coherence,boitier2011photon}.
In Ref.~\cite{tang2018measuring} the super-bunching effect of photons of true chaotic light was experimentally demonstrated in a similar setup by cascading interferometers. Moreover, we proposed to exploit the super-bunching effect to enhance the sensitivity of weak-signal (such as gravitational-wave) detection. To do so, it is critical to manipulate the two-photon interference in the setup so as to increase the interference effect. From previous studies we realized that, in the two-photon interference phenomenon, polarization is a parameter on the same footing as space and time, and it could help us manipulate the two-photon interference in an MI. A theory based on two-photon interference and the Feynman path integral that also takes polarization into consideration is therefore necessary for future research. However, the two-photon interference theory reported in previous publications does not take the polarizations of EM fields into consideration \cite{2012Coherence,tang2018measuring}, and some of the previous studies on polarization in an MI approach it from the angle of classical coherence theory \cite{2008Polarization,2019Interference}.
Therefore, in this paper we analyze with quantum theory a polarized MI with broadband chaotic light detected by a TPA detector. The theoretical model is based on quantum two-photon interference and Feynman path integral theory. In the analysis we extend the scalar model \cite{tang2018measuring} to a vector model by taking polarizations into consideration and introduce a two-photon coherence matrix to describe the transformation of two-photon coherence in the MI. We analyze the four components of the TPA detection in the scalar model and connect them with interference between different two-photon probability amplitudes. It is found that in the vector model the polarizations work as a switch controlling the coefficients of the four components of the TPA detection output, whereas in the scalar model these coefficients are all equal. By adjusting the polarizers in the MI we can make some components vanish or dominate. For example, we can choose to observe only the HBT effect (with a constant background), observe the sub-wavelength effect by removing the $\omega$ oscillation component, or make the $\omega$ oscillation component dominate over the $2\omega$ oscillation component. The model suggests new experimental schemes, and it can also help us to further study the manipulation of two-photon interference so as to exploit the super-bunching effect in weak-signal detection \cite{tang2018measuring}. The model can likewise be applied to an MI with polarized quantum sources such as entangled photon pairs or squeezed light.
\section{\label{sec:theory}Theory}
The HBT effect can be described as the result of interference between two different but indistinguishable two-photon probability amplitudes \cite{fano1961quantum}. The interference phenomenon in an MI with broadband chaotic light detected by a TPA detector can also be understood in the same way. The detection scheme is shown in Fig.~\ref{fig:setup}.
Continuous-wave incoherent light from amplified spontaneous emission (ASE) is used in the configuration. The ASE light is completely unpolarized, just like natural light \cite{boitier2009measuring}. Its spectrum is centered at $1550~\mathrm{nm}$ with a $30~\mathrm{nm}$ bandwidth. The ASE light is coupled into the MI, which consists of two mirrors ($M_1$ and $M_2$) and a beam splitter (BS). Four polarizers $P_0$, $P_1$, $P_2$ and $P_3$ can be inserted into or removed from the MI depending on the experiment: $P_0$ can be placed at the input of the interferometer, while $P_1$ and $P_2$ can be placed in the two arms of the MI in front of mirrors $M_1$ and $M_2$, respectively. The output beam of the interferometer goes into a semiconductor photomultiplier tube (PMT) operated in the two-photon absorption (TPA) regime.
\begin{figure}
\caption{\label{fig:setup}Detection scheme of the polarized two-photon Michelson interferometer.}
\end{figure}
The TPA detector measures the second-order correlation function of the light field,
\begin{equation} \label{eq:G2}
G^{(2)} \equiv \langle E^{(-)}(t) E^{(-)}(t+\tau) E^{(+)}(t+\tau) E^{(+)}(t) \rangle ,
\end{equation}
where $E^{(-)}(t)$ is the negative-frequency part of the quantized EM field reaching the TPA detector at time $t$, and $E^{(-)}(t+\tau)$ is that reaching the TPA detector at time $t+\tau$ \cite{2001Optical}. $E^{(-)}(t)=E_1^{(-)}(t)+E_2^{(-)}(t)$ signifies that each $E$ field in Eq.~(\ref{eq:G2}) comes from both arms $1$ and $2$ of the MI.
From the quantum mechanical point of view, the signal of the TPA detector in Eq.~(\ref{eq:G2}) can be calculated as the coherent superposition of four different and indistinguishable probability amplitudes.
Assuming the light is at the single-photon level, Eq.~(\ref{eq:G2}) can be written as \cite{2011An},
\begin{equation} \label{eq:G2-1}
G^{(2)} =|\langle 0| E_2^{(+)}(t+\tau) E_1^{(+)}(t) |1_a 1_b\rangle |^2,
\end{equation}
where $|1_a 1_b\rangle$ stands for the state of the two photons $a$ and $b$; $E_1^{(+)}(t)$ and $E_2^{(+)}(t+\tau)$ signify that the $E$ fields come from arms $1$ and $2$, respectively. As shown in Fig.~\ref{fig:4paths}, there are four probability amplitudes involved in Eq.~(\ref{eq:G2-1}), namely $A_{I}=A\substack{a\to1 \\ b\to1}$, $A_{II}=A\substack{a\to1 \\ b\to2}$, $A_{III}=A\substack{a\to2 \\ b\to1}$ and $A_{IV}=A\substack{a\to2 \\ b\to2}$, from which we have,
\begin{equation} \label{eq:1}
G^{(2)}=|A_I+A_{II}+A_{III}+A_{IV}|^2,
\end{equation}
where $A_I$ to $A_{IV}$ are the four probability amplitudes shown in Fig.~\ref{fig:4paths} \cite{tang2018measuring}.
The expansion of Eq.~(\ref{eq:G2}) has $16$ terms without taking polarizations into consideration. In general, each term has the form $\langle E_{ai}^{(-)} E_{bj}^{(-)} E_{bl}^{(+)} E_{ak}^{(+)} \rangle$, where $i,j,k,l=1,2$ indicate through which arms the photons pass. For example,
\begin{equation} \label{eq:G2-2}
A_{III}^* A_{II}=\langle E_{a2}^{(-)} (t+\tau)E_{b1}^{(-)}(t) E_{b2}^{(+)} (t+\tau) E_{a1}^{(+)}(t) \rangle.
\end{equation}
\begin{figure}
\caption{\label{fig:4paths}The four two-photon probability amplitudes $A_I$, $A_{II}$, $A_{III}$ and $A_{IV}$.}
\end{figure}
In Ref.~\cite{tang2018measuring} a theoretical model based on the Feynman path integral and two-photon interference theory was developed to describe the Hanbury Brown and Twiss (HBT) effect of multi-spatial-mode thermal light at ultrashort timescales via two-photon absorption. The theory was applied to interpret experimental results and shows that the output of the TPA detector is composed of four components which come from interference between different two-photon probability amplitudes. In brief, the expansion of Eq.~(\ref{eq:1}) comprises four parts: the constant background, which comes from $|A_I|^2+|A_{IV}|^2$; the HBT term, which comes from $|A_{II}+A_{III}|^2$; the oscillating part with frequency $\omega$; and the oscillating part with frequency $2\omega$.
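As a rough numerical illustration of the scalar-model picture, chaotic light can be modeled by circular complex Gaussian field samples; comparing the TPA signal at zero path difference with that at a delay far beyond the coherence time then exposes the ratio between the fringe maximum and the constant background. This is only a statistical sketch of ours, not the model of Ref.~\cite{tang2018measuring}: it averages over samples instead of resolving the $\omega$ and $2\omega$ fringes, and all names in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000

def chaotic_field(n):
    # Circular complex Gaussian samples: a standard statistical model of a
    # single polarization mode of chaotic (thermal) light.
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

E = chaotic_field(N)

# Zero path difference: both MI arms deliver identical copies of the field,
# which add in phase at the TPA detector.
S_peak = np.mean(np.abs(E + E) ** 4)

# Delay far beyond the coherence time: the arms carry statistically
# independent fields, all interference terms average out and only the
# constant background survives.
S_background = np.mean(np.abs(E + chaotic_field(N)) ** 4)

ratio = S_peak / S_background
print(ratio)  # close to the analytic value 4 for chaotic light
```

For a chaotic field $\langle I^2\rangle=2\langle I\rangle^2$, so the peak is $16\langle I^2\rangle=32\langle I\rangle^2$ while the large-delay value is $8\langle I\rangle^2$, giving the expected ratio $4$.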
If polarizations are taken into consideration, we have
\begin{eqnarray} \label{eq:polar-1}
E_1^{(+)}(t)& =& E_{1x}^{(+)}(t)+E_{1y}^{(+)}(t)\nonumber \\
E_2^{(+)}(t+\tau)& =& E_{2x}^{(+)}(t+\tau)+E_{2y}^{(+)}(t+\tau) ,
\end{eqnarray}
where $E_{1x}^{(+)}(t)$ stands for the positive-frequency part of the $x$-polarized $E$ field from arm $1$ which arrives at the detector at time $t$; the other terms have similar meanings. Combining Eq.~(\ref{eq:G2-1}) and Eq.~(\ref{eq:polar-1}), we can see that each of the $16$ terms in Eq.~(\ref{eq:G2}) has the form
\begin{equation} \label{eq:G2-3}
\langle (E_{aix}^{(-)}+E_{aiy}^{(-)}) (E_{bjx}^{(-)}+E_{bjy}^{(-)}) (E_{blx}^{(+)}+E_{bly}^{(+)}) (E_{akx}^{(+)}+E_{aky}^{(+)}) \rangle
\end{equation}
where $i,j,k,l=1,2$ indicate through which arms the photons pass and $x,y$ denote the polarizations. For example, Eq.~(\ref{eq:G2-2}) changes into,
\begin{eqnarray} \label{eq:G2-4}
A_{III}^* A_{II}=\langle (E_{a2x}^{(-)} +E_{a2y}^{(-)})(E_{b1x}^{(-)}+E_{b1y}^{(-)})\nonumber \\(E_{b2x}^{(+)}+E_{b2y}^{(+)}) (E_{a1x}^{(+)}+E_{a1y}^{(+)}) \rangle,
\end{eqnarray}
which contains $16$ terms after expansion. In total, there are $256$ terms in Eq.~(\ref{eq:G2}) after expansion.
However, not all the terms survive the expectation valuation $\langle \ldots \rangle$ in all $256$ terms because in general photon $a$ and $b$ from different polarization mode have different initial phases for chaotic light. The two-photon state of photons $a$ and $b$ can be written as,
\begin{equation} \label{eq:initial-phase}
|1_a 1_b\rangle=|1_a\rangle e^{i\delta_a}\otimes |1_b\rangle e^{i\delta_b},
\end{equation}
where $\delta_a$ and $\delta_b$ are the random phases of photons $a$ and $b$ due to the random excitation times of atoms~\cite{1997Quantum}. For photons from the same polarization mode, for example when both photons come from the $x$ polarization, we have $\langle e^{i (\delta_a -\delta_b)} \rangle=1$, which means that the initial phases of photons from the same polarization mode are completely correlated. If the two photons are from orthogonal polarizations, we have $\langle e^{i (\delta_a -\delta_b)} \rangle=0$, which means that the initial phases of photons from orthogonal polarizations are completely uncorrelated.
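These two phase averages are easy to visualize with a quick Monte Carlo estimate (an illustrative sketch only, assuming uniformly distributed initial phases):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
delta_a = rng.uniform(0.0, 2.0 * np.pi, n)

# Same polarization mode: delta_b = delta_a (fully correlated phases)
same_mode = np.mean(np.exp(1j * (delta_a - delta_a)))

# Orthogonal polarizations: delta_b independent of delta_a
delta_b = rng.uniform(0.0, 2.0 * np.pi, n)
orthogonal = np.mean(np.exp(1j * (delta_a - delta_b)))
```

The sample mean in the correlated case is exactly $1$, while for independent phases it fluctuates around $0$ at the $1/\sqrt{n}$ level.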
Under this assumption, only $6$ out of the $16$ terms survive in each expansion of the form of Eq.~(\ref{eq:G2-3}); for example, the expansion of Eq.~(\ref{eq:G2-4}) is,
\begin{eqnarray} \label{eq:G2-5}
&&A_{III}^* A_{II} \nonumber \\ &= &\langle E_{a2x}^{(-)} E_{b1x}^{(-)} E_{b2x}^{(+)} E_{a1x}^{(+)}\rangle + \langle E_{a2y}^{(-)} E_{b1y}^{(-)} E_{b2y}^{(+)} E_{a1y}^{(+)}\rangle\nonumber \\
&+&\langle E_{a2x}^{(-)} E_{b1y}^{(-)} E_{b2x}^{(+)} E_{a1y}^{(+)}\rangle+\langle E_{a2y}^{(-)} E_{b1x}^{(-)} E_{b2y}^{(+)} E_{a1x}^{(+)}\rangle\nonumber \\
&+&\langle E_{a2x}^{(-)} E_{b1y}^{(-)} E_{b2y}^{(+)} E_{a1x}^{(+)}\rangle+\langle E_{a2y}^{(-)} E_{b1x}^{(-)} E_{b2x}^{(+)} E_{a1y}^{(+)}\rangle,\nonumber \\
\end{eqnarray}
where only in these $6$ terms do the initial phases cancel to give non-zero values; the other $10$ terms vanish. Since polarization is an independent dimension for describing the EM field, just as time and space are, Eq.~(\ref{eq:G2-3}) can be factorized into the product of a polarization part and a temporal part (all calculations are assumed to be done in the same spatial mode) and written as,
\begin{eqnarray} \label{eq:G2-6}
&&A_{III}^* A_{II} \nonumber \\ & = &[\langle E_{2x}^{(-)} E_{1x}^{(-)} E_{2x}^{(+)} E_{1x}^{(+)}\rangle + \langle E_{2y}^{(-)} E_{1y}^{(-)} E_{2y}^{(+)} E_{1y}^{(+)}\rangle\nonumber \\
&+&\langle E_{2x}^{(-)} E_{1y}^{(-)} E_{2x}^{(+)} E_{1y}^{(+)}\rangle+\langle E_{2y}^{(-)} E_{1x}^{(-)} E_{2y}^{(+)} E_{1x}^{(+)}\rangle\nonumber \\
&+&\langle E_{2x}^{(-)} E_{1y}^{(-)} E_{2y}^{(+)} E_{1x}^{(+)}\rangle+\langle E_{2y}^{(-)} E_{1x}^{(-)} E_{2x}^{(+)} E_{1y}^{(+)}\rangle] \nonumber \\
& \times & \langle E_{a2}^{(-)} E_{b1}^{(-)} E_{b2}^{(+)} E_{a1}^{(+)}\rangle ,\nonumber \\
\end{eqnarray}
where $\langle E_{a2}^{(-)} E_{b1}^{(-)} E_{b2}^{(+)} E_{a1}^{(+)}\rangle$ corresponds to the temporal interference term in the scalar model \cite{tang2018measuring}, and the sum of the $6$ terms in $[\ldots]$ corresponds to the polarization interference found only in the vector model. From Eq.~(\ref{eq:G2-6}) we can see that in the vector model the polarizations determine the coefficients of the interference terms of the scalar model. Since the other terms in the expansion of Eq.~(\ref{eq:G2-1}) have a similar form to Eq.~(\ref{eq:G2-6}), in the vector model we have,
\begin{equation} \label{eq:VSmodel}
V_{TPA}=C \otimes S_{TPA},
\end{equation}
where $V_{TPA}$ is the probability density matrix in the vector model, $C$ is the coefficient matrix defined below, $\otimes$ stands for the Hadamard (element-wise) product, and $S_{TPA}$ is the probability density matrix derived in the scalar model, defined as \cite{tang2018measuring},
\begin{small}
\begin{equation} \label{eq:Smode}
\centering
S_{TPA} =
\left[ \begin{array}{ccccc}
A_{I}^* A_{I}&A_{I}^* A_{II}&A_{I}^* A_{III} &A_{I}^* A_{IV}& \\
A_{II}^* A_{I}&A_{II}^* A_{II}&A_{II}^* A_{III}&A_{II}^* A_{IV}& \\
A_{III}^* A_{I}&A_{III}^* A_{II}&A_{III}^* A_{III} &A_{III}^* A_{IV}& \\
A_{IV}^* A_{I}&A_{IV}^* A_{II}&A_{IV}^* A_{III} &A_{IV}^* A_{IV} & \\
\end{array} \right],
\end{equation}
\end{small}
where terms like $A_{II}^* A_{III}$ now stand for the interference terms of the scalar model, in which only temporal interference is taken into consideration. The coefficient matrix $C$ is defined as,
\begin{equation} \label{eq:c-matrix}
\centering
C =
\left[ \begin{array}{ccccc}
c_{1111}&c_{1112}&c_{1121} &c_{1122}& \\
c_{1211}&c_{1212}&c_{1221}&c_{1222}& \\
c_{2111}&c_{2112}&c_{2121} &c_{2122}& \\
c_{2211}&c_{2212}&c_{2221} &c_{2222} & \\
\end{array} \right],
\end{equation}
where
\begin{eqnarray} \label{eq:c-matrix-1}
\centering
&& c_{ijkl} \nonumber \\ &=& \langle E_{ix}^{(-)} E_{jx}^{(-)} E_{lx}^{(+)} E_{kx}^{(+)}\rangle + \langle E_{iy}^{(-)} E_{jy}^{(-)} E_{ly}^{(+)} E_{ky}^{(+)}\rangle\nonumber \\
&+&\langle E_{ix}^{(-)} E_{jy}^{(-)} E_{lx}^{(+)} E_{ky}^{(+)}\rangle+\langle E_{iy}^{(-)} E_{jx}^{(-)} E_{ly}^{(+)} E_{kx}^{(+)}\rangle\nonumber \\
&+&\langle E_{ix}^{(-)} E_{jy}^{(-)} E_{ly}^{(+)} E_{kx}^{(+)}\rangle+\langle E_{iy}^{(-)} E_{jx}^{(-)} E_{lx}^{(+)} E_{ky}^{(+)}\rangle,\nonumber \\
\end{eqnarray}
where $i,j,k,l=1,2$ indicate through which arms the photons pass and $x,y$ stand for the polarizations.
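For concreteness, the element-wise structure of Eq.~(\ref{eq:VSmodel}) can be sketched numerically; the two $4\times4$ matrices below are random placeholders rather than physical values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder 4x4 matrices standing in for C and S_TPA (random values,
# not derived from any physical setup)
C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S_TPA = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Eq. (VSmodel): the Hadamard (element-wise) product, not a matrix product
V_TPA = C * S_TPA

# The detector output is proportional to the sum of all elements of V_TPA
output = V_TPA.sum().real
```

The essential point is that `C` rescales each interference term of the scalar model individually, rather than mixing them as a matrix product would.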
To make the calculation easier we define a \emph{second-order covariance matrix} or \emph{two-photon covariance matrix} (TCM) $J^{(2)}$, since it describes the annihilation of two photons with polarizations \cite{2001Optical},
\begin{equation} \label{eq:connection}
\centering
J^{(2)}(i,j,k,l) =
\left[ \begin{array}{ccccc}
{\color{red} J_{xxxx}} &J_{xxxy} &J_{xxyx} &J_{xxyy} & \\
J_{xyxx} &{\color{red} J_{xyxy}} &{\color{red} J_{xyyx}} &J_{xyyy} & \\
J_{yxxx} &{\color{red} J_{yxxy}} &{\color{red} J_{yxyx}} &J_{yxyy} & \\
J_{yyxx} &J_{yyxy} &J_{yyyx} &{\color{red} J_{yyyy}} & \\
\end{array} \right],
\end{equation}
where $i,j,k,l=1,2$ have the same meanings as in Eq.~(\ref{eq:c-matrix}), $x,y$ stand for the polarizations, and the positions of the subindexes of $J$ are defined as follows: the first and fourth indexes correspond to the EM field of photon $a$, and the second and third indexes correspond to the EM field of photon $b$. For example, the element $J_{xyyx}(i,j,k,l)\equiv
\langle E_{aix}^{(-)} E_{bjy}^{(-)} E_{bly}^{(+)} E_{akx}^{(+)} \rangle$ stands for the second-order correlation function of the $x$-polarized field of photon $a$ through path $i$, the $y$-polarized field of photon $b$ through path $j$, the $y$-polarized field of photon $b$ through path $l$, and the $x$-polarized field of photon $a$ through path $k$. The connection between Eq.~(\ref{eq:c-matrix}) and Eq.~(\ref{eq:connection}) is,
\begin{eqnarray} \label{eq:connection-1}
c_{ijkl} & = &J^{(2)}[1,1]+J^{(2)}[2,2]+J^{(2)}[2,3]\nonumber\\
& + & J^{(2)}[3,2]+J^{(2)}[3,3]+J^{(2)}[4,4],
\end{eqnarray}
where only the $6$ red terms (color online) in Eq.~(\ref{eq:connection}) are non-zero and contribute to $G^{(2)}$.
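In code, Eq.~(\ref{eq:connection-1}) simply reads off the six surviving entries of the $4\times4$ TCM; the matrix below is a placeholder and the indices are $0$-based:

```python
import numpy as np

def c_from_tcm(J2):
    # The six non-vanishing (red) entries of the TCM, in 0-based indices:
    # xxxx, xyxy, xyyx, yxxy, yxyx, yyyy
    surviving = [(0, 0), (1, 1), (1, 2), (2, 1), (2, 2), (3, 3)]
    return sum(J2[i, j] for i, j in surviving)

J2 = np.arange(16.0).reshape(4, 4)   # placeholder TCM, not physical values
c = c_from_tcm(J2)
```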
One of the advantages of defining the two-photon covariance matrix is that the setup shown in Fig.~\ref{fig:setup} is a linear system: the EM field operators and the two-photon covariance matrix at the TPA detector are related to those at the input of the MI by a linear transformation matrix determined by the experimental setup \cite{2001Optical}. The polarized MI we study is comprised of polarizers, a non-polarizing beam splitter and mirrors. The connection between the TCM $J^{(2)}$ at the TPA detector and the TCM $J_0^{(2)}$ at the input of the MI is \cite{2001Optical}
\begin{equation} \label{eq:tm}
J^{(2)}= (T_1 T_2 \ldots T_n)^ \dagger J_0^{(2)} (T_1 T_2 \ldots T_n),
\end{equation}
where $T_1 T_2 \ldots T_n$ stands for the cascaded transformation matrix of the MI. Once the two-photon covariance matrix is determined, the $G^{(2)}$ function of the polarized MI can be calculated using Eq.~(\ref{eq:VSmodel}), in which the coefficient matrix $C$ is calculated using Eq.~(\ref{eq:connection-1}).
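Since Eq.~(\ref{eq:tm}) is just a matrix congruence, the propagation is straightforward to sketch. The rotation elements below are illustrative stand-ins for the cascade matrices (a real polarizer would instead be a non-unitary projector), and the input TCM is a random Hermitian placeholder:

```python
import numpy as np

def propagate_tcm(J0, transforms):
    # Eq. (tm): J = (T1 T2 ... Tn)^dagger J0 (T1 T2 ... Tn)
    T = np.eye(J0.shape[0], dtype=complex)
    for Tk in transforms:
        T = T @ Tk
    return T.conj().T @ J0 @ T

def rot(theta):
    # 2x2 polarization rotation; kron with itself acts on both photons
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Illustrative cascade of two rotation elements (stand-ins, not the real MI)
T1 = np.kron(rot(0.3), rot(0.3))
T2 = np.kron(rot(-0.1), rot(-0.1))

# Random Hermitian placeholder for the input TCM J0
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
J0 = A @ A.conj().T

J = propagate_tcm(J0, [T1, T2])
```

For a unitary cascade, the trace of the TCM is preserved; inserting a projector for a polarizer would reduce it.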
\section{\label{sec:simul}Simulations}
In this section, we employ the method above to study two-photon interference in different schemes and show how to manipulate the interference. In the simulations, all figures plot the normalized second-order correlation function $g^{(2)}=\frac{G^{(2)}}{\langle E_1^{(-)}E_1^{(+)} \rangle \langle E_2^{(-)}E_2^{(+)} \rangle}$~\cite{2001Optical}.
\subsection{\label{subsec:unpolar}Unpolarized chaotic light as input}
We start with unpolarized chaotic light. In this case, the polarizers are absent from the MI shown in Fig.~\ref{fig:setup}. Without polarizers, there are four kinds of interference patterns in the outcome of the TPA detection, as mentioned in Sec.~\ref{sec:theory}: the constant background, the HBT effect, the oscillation pattern with frequency $\omega$ and the oscillation pattern with frequency $2 \omega$ \cite{tang2018measuring}. The output of the TPA detector is the sum of all elements of the probability density matrix $S_{TPA}$, as shown in Eq.~(\ref{eq:G2-1}). All four components are mixed together and shown in Fig.~\ref{fig:no-polarizers}(a).
To better understand the structure of the interference patterns, the probability density matrix $S_{TPA}$ is visualized using a $3D$ barchart in which the height of each bar is proportional to the relative probability of the corresponding element. Many terms in $S_{TPA}$ are complex numbers, and their real parts are taken as their relative probabilities. In the barchart, the constant background part, which corresponds to the two-photon probability that both photons come from either arm $1$ or arm $2$, is visualized by two magenta bars in Fig.~\ref{fig:no-polarizers}(b). This component does not change with the relative arrival time difference $\tau$ between the two photons. The second component corresponds to the well known HBT effect. It describes photons $a$ and $b$ triggering the TPA detector in two different ways: photon $a$ comes from arm $1$ and photon $b$ comes from arm $2$, corresponding to the two-photon amplitude $A_{II}$; or photon $a$ comes from arm $2$ and photon $b$ comes from arm $1$, corresponding to the two-photon amplitude $A_{III}$, as shown in Fig.~\ref{fig:4paths}. The probability of the HBT effect is $|A_{II}+A_{III}|^2$. This component is visualized by four red bars in Fig.~\ref{fig:no-polarizers}(b). The third component of the TPA detection can be factorized into the product of intensity and first-order interference; it is visualized by eight blue bars in Fig.~\ref{fig:no-polarizers}(b). The fourth part is interesting because it indicates that photons $a$ and $b$ interfere with themselves as a single entity. In the expansion of Eq.~(\ref{eq:G2-1}) it is signified by the term $A^*_IA_{IV}+A_IA^*_{IV}$: the two photons can come from either arm $1$ or arm $2$ as one entity, and the two probability amplitudes interfere with each other, which leads to the sub-wavelength effect. The fourth component is visualized by two black bars in Fig.~\ref{fig:no-polarizers}(b).
In an ordinary HBT interferometer, only the HBT effect is measured because the other parts are ruled out by its detection scheme \cite{brown1956correlation}. However, in a MI with a point TPA detector all four kinds of TPA events exist and mix together. Previous studies usually concentrated on the HBT part plus the inevitable constant background, signified by the four red bars and two magenta bars in the probability matrix, and filtered out the third and fourth parts, signified by the eight blue bars and two black bars \cite{boitier2009measuring,shevchenko2017polarization,2018Recovering,2015Ultrabroadband}. In this paper, we take every part into consideration and use two-photon interference theory to find methods of manipulating the interference process, which leads to interesting results.
When the input of the MI is unpolarized chaotic light, the outcome of the TPA detector is as shown in Fig.~\ref{fig:no-polarizers}(a), which is what was measured in almost all previous studies using similar detection schemes \cite{boitier2009measuring,shevchenko2017polarization,2018Recovering,2015Ultrabroadband}. It is proportional to the sum of $16$ different probabilities of triggering a TPA event, which are shown in Fig.~\ref{fig:no-polarizers}(b); we can see that all $16$ probabilities are equal. The sum of all these probabilities leads to the $g^{(2)}$ function shown in Eq.~(\ref{eq:1}).
\begin{figure}
\caption{(a) The output of the TPA detector for unpolarized chaotic light without polarizers in the MI; (b) the corresponding two-photon probability density matrix.}
\label{fig:no-polarizers}
\end{figure}
Next we simulate the two-photon interference of photons from orthogonal polarizations in two different cases. Two polarizers, $P_1$ and $P_2$, orthogonal to each other, are inserted into arms $1$ and $2$. In the first case, polarizer $P_1$ is set to $0$ in arm $1$ and polarizer $P_2$ is set to $\frac{\pi}{2}$ in arm $2$, as shown in Fig.~\ref{fig:setup}; the output of the TPA detector is shown in Fig.~\ref{fig:0-90vs45-45}(a).
In the second case, $P_1$ is set to $\frac{\pi}{4}$ in arm $1$ and $P_2$ to $-\frac{\pi}{4}$ in arm $2$; the output of the TPA detector is shown in Fig.~\ref{fig:0-90vs45-45}(b).
Comparing (a) with (b) in Fig.~\ref{fig:0-90vs45-45}, we can see that both $g^{(2)}$ functions are flat at the center, with $g^{(2)}(0)=1$. This means that two-photon interference generates no bunching effect if photons $a$ and $b$ come from orthogonal polarization modes (in (a) they are set to $0^{\circ}$/$90^{\circ}$ and in (b) to $45^{\circ}$/$135^{\circ}$). This can be explained by the fact that photons from orthogonal polarization modes have uncorrelated initial phases, so the terms that lead to the bunching effect vanish: $A_{III}^* A_{II} +A_{III} A_{II}^*=0$. However, no bunching effect does not mean no two-photon interference: the two-photon interference makes the probability distributions of triggering a TPA event different in the two cases. There are four probabilities of triggering a TPA event in both cases: photons $a$ and $b$ can both come from arm $1$ or arm $2$, which are $|A_{a1b1}|^2$ and $|A_{a2b2}|^2$ respectively and correspond to the two magenta columns in Fig.~\ref{fig:0-90vs45-45}; photon $a$ from arm $1$ and photon $b$ from arm $2$, which corresponds to $|A_{a1b2}|^2$; and photon $a$ from arm $2$ and photon $b$ from arm $1$, which corresponds to $|A_{a2b1}|^2$; the last two probabilities correspond to the two red columns in Fig.~\ref{fig:0-90vs45-45}.
\begin{figure}
\caption{Comparison between the $0^{\circ}/90^{\circ}$ scheme (a) and the $45^{\circ}/135^{\circ}$ scheme (b).}
\label{fig:0-90vs45-45}
\end{figure}
We notice that the probability distributions for a TPA detection differ between the two schemes. In the $0^{\circ}/90^{\circ}$ scheme, all four probabilities are the same and equal to $\frac{1}{4}$. However, in the $45^{\circ}/135^{\circ}$ scheme, the probability for both photons to come from the same arm (either arm $1$ or $2$) is $\frac{3}{8}$ each, while the probability for the photons to come from different arms is $\frac{1}{8}$ each. This result is non-intuitive: even though the $g^{(2)}$ function is the same, the contributions from the four probabilities are different.
As shown in Eq.~(\ref{eq:VSmodel}), polarizations can be used to manipulate the two-photon interference in the MI. In the $45^{\circ}/135^{\circ}$ scheme, if a polarizer $P_3$ set to $0^{\circ}$ is added in front of the TPA detector and one of the polarizers, say $P_1$, is made to deviate from $45^{\circ}$ by a few degrees, the $\omega$ oscillation part of the two-photon interference will dominate over the $2\omega$ oscillation part. This is shown in Fig.~\ref{fig:omega-dominating}
\begin{figure}
\caption{Comparison between the $\omega$ and $2\omega$ oscillation components when $P_3$ is set to $0^{\circ}$ and $P_1$ deviates slightly from $45^{\circ}$.}
\label{fig:omega-dominating}
\end{figure}
in which the probability terms $|A_{I}|^2$, $|A_{II}|^2$, $|A_{III}|^2$ and $|A_{IV}|^2$ are removed to make a comparison between only the $\omega$ and $2\omega$ terms. In the next subsection we present a detection scheme in which the $\omega$ terms are removed and only the $2\omega$ terms are detected, which leads to the sub-wavelength effect.
\subsection{\label{subsec:polar}Polarized chaotic light and its sub-wavelength effect}
Now we put a linear polarizer $P_0$ in front of the beamsplitter, as shown in Fig.~\ref{fig:setup}; it turns the unpolarized chaotic light into linearly polarized light before it enters the MI. When there is no polarizer in either arm, the $g^{(2)}$ function and the two-photon detection probability matrix are the same as those in the unpolarized case, as shown in Fig.~\ref{fig:no-polarizers}.
If we set polarizer $P_0$ to $45^{\circ}$, $P_1$ to $0^{\circ}$ and $P_2$ to $90^{\circ}$, the $g^{(2)}$ function and its two-photon detection probability matrix are as shown in Fig.~\ref{fig:45-0-90}. We can see that there is a bunching effect but there are no $\omega$ and $2\omega$ oscillation terms. The reason is that, from the point of view of quantum interference, the TPA detector can \emph{in principle} identify from which arms (paths) the photons come because of the two polarizers in arms $1$ and $2$. Since the \textit{which path} information is known, there is no corresponding two-photon interference. However, the probability amplitudes $A_{II}$ and $A_{III}$ are still indistinguishable, and the interference between them leads to the HBT effect, as shown by the four red columns in Fig.~\ref{fig:45-0-90}.
\begin{figure}
\caption{The $g^{(2)}$ function and two-photon detection probability matrix for $P_0$ at $45^{\circ}$, $P_1$ at $0^{\circ}$ and $P_2$ at $90^{\circ}$.}
\label{fig:45-0-90}
\end{figure}
If we set polarizer $P_0$ to $0^{\circ}$, $P_1$ to $45^{\circ}$ and $P_2$ to $135^{\circ}$, the situation is more interesting. The $g^{(2)}$ function and its two-photon detection probability matrix are shown in Fig.~\ref{fig:0-45-135}. We can see that there is a bunching effect and no $\omega$ oscillation term; surprisingly, there is a $2\omega$ oscillation term, as shown by the two black columns in the figure. From the point of view of quantum optics, photons $a$ and $b$ form one entity which interferes with itself, and the interference pattern oscillates at frequency $2\omega$. This is a sub-wavelength effect. The corresponding experimental phenomenon has been observed and the details are reported in another paper \cite{luo2021observing}.
\begin{figure}
\caption{The $g^{(2)}$ function and two-photon detection probability matrix for $P_0$ at $0^{\circ}$, $P_1$ at $45^{\circ}$ and $P_2$ at $135^{\circ}$.}
\label{fig:0-45-135}
\end{figure}
When another polarizer $P_3$ is put before the detector, it can act as a \textit{which path} information eraser. For example, when polarizer $P_0$ is set to $45^{\circ}$, $P_1$ to $0^{\circ}$ and $P_2$ to $90^{\circ}$, the output is as shown in Fig.~\ref{fig:45-0-90}. With polarizer $P_3$ set to $45^{\circ}$ before the TPA detector, the output of the detector and the probability matrix return to those shown in Fig.~\ref{fig:no-polarizers}, because the \textit{which path} information is erased by polarizer $P_3$ and the interference terms that lead to the oscillations are no longer zero. If $P_3$ is set to $135^{\circ}$ instead of $45^{\circ}$, the situation is slightly different: the probability matrix is the same but $g^{(2)}(0)$ changes from maximum to minimum, as shown in Fig.~\ref{fig:45vs135}.
\begin{figure}
\caption{The polarizer $P_3$ in front of the TPA detector can erase the \textit{which path} information. By controlling the relative angle between $P_0$ and $P_3$ ($0^{\circ}$ or $90^{\circ}$), $g^{(2)}(0)$ can be switched between its maximum and minimum values.}
\label{fig:45vs135}
\end{figure}
\section{Discussion}
First, we notice that the outputs of the TPA detection are slightly different under the two schemes shown in Fig.~\ref{fig:0-90vs45-45}. Both the $0^{\circ}/90^{\circ}$ and $45^{\circ}/135^{\circ}$ schemes have flat $g^{(2)}$ functions comprised of the four probabilities $|A_{I}|^2$, $|A_{II}|^2$, $|A_{III}|^2$ and $|A_{IV}|^2$; they differ in the percentage contributed by each. In the $0^{\circ}/90^{\circ}$ scheme, each of the four probabilities contributes $\frac{1}{4}$ to $g^{(2)}$. However, in the $45^{\circ}/135^{\circ}$ scheme, each of $|A_{I}|^2$ and $|A_{IV}|^2$ contributes $\frac{3}{8}$ to $g^{(2)}$, while each of $|A_{II}|^2$ and $|A_{III}|^2$ contributes $\frac{1}{8}$. The reason for the difference lies in the mirror reflection at the BS. In the $0^{\circ}/90^{\circ}$ scheme, reflection at the BS does not change the polarizations of the light. In the $45^{\circ}/135^{\circ}$ scheme, however, the mirror reflection at the BS exchanges left and right, switching the $45^{\circ}/135^{\circ}$ polarizations to $135^{\circ}/45^{\circ}$. So, in order to realize the $45^{\circ}/135^{\circ}$ detection scheme shown in Fig.~\ref{fig:0-90vs45-45}(b), we need to set both $P_1$ and $P_2$ to $135^{\circ}$, because the $45^{\circ}$ polarization of the chaotic light enters arm $1$ at an angle of $135^{\circ}$ after reflection at the BS. The reflection at the BS leads to the difference between the two-photon covariance matrix (TCM) of the $0^{\circ}/90^{\circ}$ scheme and that of the $45^{\circ}/135^{\circ}$ scheme and, ultimately, to the differences between their TPA detection probability density matrices.
The simulated results above can be verified experimentally. In both schemes, we can measure the total TPA detection rate and normalize it to $1$. Then we can block one arm, say arm $2$, so that only the two-photon probability $|A_{I}|^2$ survives. In the $0^{\circ}/90^{\circ}$ scheme, the TPA detection rate should drop to $\frac{1}{4}$; in the $45^{\circ}/135^{\circ}$ scheme it should be $\frac{3}{8}$, slightly higher than in the $0^{\circ}/90^{\circ}$ scheme.
The reflection at the BS is also the reason for the difference observed with polarized chaotic light in Sec.~\ref{subsec:polar}. In the $45^{\circ}/0^{\circ}/90^{\circ}$ scheme, all the $\omega$ and $2\omega$ oscillation parts are removed, and only the HBT effect and the constant background are left. On the other hand, in the $0^{\circ}/45^{\circ}/135^{\circ}$ scheme only the $\omega$ oscillation part is removed: besides the HBT effect and the constant background, the $2\omega$ part also exists. From the point of view of quantum mechanics, the $2\omega$ oscillation part is a sub-wavelength effect in which an entity comprised of photons $a$ and $b$ interferes with itself \cite{1995Photonic,1999Measurement,2001Two}. The momentum of the entity is twice that of a single photon, and its de Broglie wavelength is half that of a single photon. The sub-wavelength effect can be used to increase the resolution of imaging or quantum lithography \cite{2001Two}. The sub-wavelength effect predicted by the vector model has been observed in our subsequent experiments and is reported in another paper \cite{luo2021observing}.
In Sec.~\ref{subsec:polar} it was found that, by controlling the relative angle between polarizers $P_0$ and $P_3$, the value of $g^{(2)}(0)$ can be manipulated, as shown in Fig.~\ref{fig:45vs135}. When $P_0$ is parallel to $P_3$, $g^{(2)}(0)$ reaches its maximum value; when $P_0$ is orthogonal to $P_3$, $g^{(2)}(0)$ reaches its minimum value. This scheme could be applied in our previously proposed weak-signal detection MI, which exploits the super-bunching effect of chaotic light \cite{2020Two}. In a LIGO-like weak-signal detection interferometer, the detector observes the dark fringe to achieve higher sensitivity and save energy \cite{Harry2010Advanced}. In our proposed weak-signal detection scheme, the dark fringe can be manipulated by adjusting the relative angle between polarizers $P_0$ and $P_3$.
\section{Conclusion}
In this paper, a vector model is developed to theoretically describe the two-photon interference of chaotic light in a MI with polarizers. The model is built on two-photon interference and Feynman path-integral theory. It shows that polarization, as an independent dimension in phase space, can act as a switch to manipulate the two-photon interference in the MI. The components of the two-photon interference patterns, which were mixed together in previous studies, can now be picked out one by one by adjusting the polarizers in the MI. The vector model could help in the further study of a cascaded MI that exploits the super-bunching effect of chaotic light to increase the sensitivity of weak-signal detection \cite{2020Two}. It may help us design a new type of weak-signal (including gravitational-wave) detection setup with higher sensitivity.
\begin{acknowledgments}
This work was supported by Shaanxi Key Research and Development Project (Grant No. 2019ZDLGY09-09); National Natural Science Foundation of China (Grant No. 61901353); National Nature Science Foundation of China (Grant No. 12074307); Key Innovation Team of Shaanxi Province (Grant No. 2018TD-024) and 111 Project of China (Grant No.B14040).
\end{acknowledgments}
\end{document}
\begin{document}
\title{\bf Generalizations of Han's Hook Length Identities}
\author{Laura L.M. Yang\\
\small Department of Mathematical and Statistical Sciences\\
\small University of Alberta, Edmonton, Alberta, Canada T6G 2G1\\
\small \texttt{[email protected]} }
\date{ }
\maketitle
\begin{abstract}
\noindent Han recently discovered new hook length identities for
binary trees. In this paper, we extend Han's identities to binomial
families of trees. Moreover, we present a bijective proof of one of
the identities for the family of ordered trees.
\end{abstract}
\section{Introduction}
The {\it hook length} of a vertex $v$ of a rooted tree $T$ is the
number $h_v$ of descendants of $v$ in $T$ (including $v$ itself).
Several identities involving this parameter have been discovered,
especially since the appearance of Postnikov's identity in 2004
\cite{Postnikov_y2004}; see, e.g.,
\cite{Chen_y2008,Du_y2005,Gessel_y2005,Moon_y2007,Seo_y2005} and the
references contained therein. Han \cite{Han_1,Han_2} recently found
two more such identities, namely,
\begin{eqnarray}
\sum_{T}\prod_{v\in T}\frac{1}{h_v2^{h_v-1}} &=&\frac{1}{n!}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{T}\prod_{v\in T}\frac{(z+h_v)^{h_v-1}}{h_v(2z+h_v-1)^{h_v-2}}
&=&\frac{2^nz}{n!}(z+n)^{n-1},
\end{eqnarray}
where each sum is over the (incomplete) binary trees $T$ with $n$
vertices (in which each vertex has at most one left-child and at
most one right-child). Our main object here is to extend Han's
identities to more general {\it binomial} families of trees. The
definition of these families and our main results will be stated in
Section 2. The proofs will be given in Section 3. Finally, in
Section 4, we give a bijective proof of one of the identities for
the family of ordered trees.
\section{Definitions and Main Results}
We recall that {\it ordered} trees are (finite) rooted trees with an
ordering specified for the children of each vertex (see, e.g., Knuth
\cite[p. 306]{Knuth_y1973}). Let $s$ and $m$ be given constants such
that $sm>0$ and $m$ is a positive integer if $s>0$. Let $d_v$
denote the number of children of vertex $v$ in any given rooted tree
$T$. If each ordered tree $T$ is assigned the {\it weight}
$$
w(T)=\prod_{v\in T} \binom{m}{d_v}s^{d_v},
$$
then the resulting family of weighted ordered trees is called a {\it
binomial} family or an $(s,m)$-family $F$. Let $F_n$ denote the
subset of binomial trees that have $n$ vertices and let
$y_n=\sum_{T\in F_n}w(T)$ denote the (weighted) number of trees in
$F_n$. It follows readily from these definitions that the generating
function $y=y(x)=\sum_1^\infty y_nx^n$ satisfies the relation
$$y=x(1+sy)^m.$$
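For a positive integer $m$, this relation determines the $y_n$ by a simple fixed-point iteration on truncated power series; for the $(1,2)$-family of incomplete binary trees it recovers the Catalan numbers (an illustrative sketch, not part of the proofs):

```python
from fractions import Fraction

def series_mul(a, b, N):
    # product of truncated power series (list index = power of x)
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def y_coeffs(N, s, m):
    # iterate y <- x(1 + s*y)^m; after N rounds the coefficients
    # y_1,...,y_N are exact (positive integer m assumed)
    y = [Fraction(0)] * (N + 1)
    for _ in range(N):
        base = [Fraction(1)] + [Fraction(s) * c for c in y[1:]]
        p = [Fraction(1)] + [Fraction(0)] * N
        for _ in range(m):
            p = series_mul(p, base, N)
        y = [Fraction(0)] + p[:N]   # multiply by x
    return y

y = y_coeffs(5, 1, 2)   # (1,2)-family: incomplete binary trees
```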
Additional remarks on these families, especially in the context of
simply generated families of trees, may be found in \cite{Moon_y2007}.
Notice, for example, that the binomial families include the
incomplete $k$-ary trees and the ordered trees; but they do not include the
complete binary trees in which every vertex has zero or two
children. We now state our main results.
\begin{theo}
Let $F_n$ denote the subset of the $(s,m)$-family of binomial trees
that have $n$ vertices. Then
\begin{eqnarray}\label{Identity-1}
\sum_{T\in F_n}w(T)\prod_{v\in T}
\frac{1}{h_vm^{h_v-1}}&=&\frac{s^{n-1}}{n!}
\end{eqnarray}
and
\begin{eqnarray}\label{Identity-2}
\sum_{T\in F_n}w(T)\prod_{v\in T}
\frac{(z+h_v)^{h_v-1}}{h_v(mz+h_v-1)^{h_v-2}}
&=&\frac{s^{n-1}m^nz}{n!}(z+n)^{n-1},
\end{eqnarray}
for $n=1,2,\ldots$.
\end{theo}
\section{Proof of Theorem}
Let $p_n$ and $q_n$ denote the lefthand sides of identities
\eqref{Identity-1} and \eqref{Identity-2} for $n=1,2,\ldots$. The
proof will be by induction on $n$. It is easy to check that $p_1=1$
and $q_1=mz$ so \eqref{Identity-1} and \eqref{Identity-2} hold when
$n=1$. Any non-trivial binomial tree $T$ with $n$ vertices in which
the root has $d$ children may be constructed from an ordered
collection of $d$ smaller binomial trees with $n-1$ vertices
altogether by attaching a new (root) vertex to the roots of the $d$
smaller trees and then introducing the appropriate weight factors.
It follows readily from this observation and the definition of
$p_n$, that if $n>1$ then
\begin{eqnarray}\label{p_n}
p_n=\frac{1}{nm^{n-1}}\sum_{d\geq 1}\binom{m}{d}s^d\sum
p_{j_1}\ldots p_{j_d}
\end{eqnarray}
where the inner sum is over all compositions $(j)=(j_1,\ldots,j_d)$
of $n-1$ into $d$ positive integers. If we apply the induction
hypothesis that $p_j=s^{j-1}/j!$ for $j<n$, simplify, and rewrite
the righthand side of relation \eqref{p_n} in terms of generating
functions, we find that
\begin{eqnarray*}
p_n &=&\frac{s^{n-1}}{nm^{n-1}}\sum_{d\geq 1}\binom{m}{d}
[x^{n-1}]{(e^x-1)^d}\\
&=&\frac{s^{n-1}}{nm^{n-1}}[x^{n-1}](e^{xm}-1) = \frac{s^{n-1}}{n!}.
\end{eqnarray*}
This suffices to prove identity \eqref{Identity-1}.
Before proceeding to the proof of identity \eqref{Identity-2} we
recall that if $u=u(x)$ is a power series such that $u=e^{xu}$, then
it follows readily from Lagrange's inversion formula that
\begin{eqnarray}\label{u}
u^z=1+\sum_{n\geq 1}\frac{z(z+n)^{n-1}}{n!}x^n
\end{eqnarray}
for any $z$.
We now consider identity \eqref{Identity-2} for the quantity $q_n$.
In this case the reasoning that led to relation \eqref{p_n} leads to
the conclusion that if $n>1$, then
\begin{eqnarray}\label{q_n}
q_n=\frac{(z+n)^{n-1}}{n(mz+n-1)^{n-2}}\sum_{d\geq 1}
\binom{m}{d}s^d \sum q_{j_1}\ldots q_{j_d}
\end{eqnarray}
where the inner sum is over the same compositions $(j)$ as before.
If we apply the induction hypothesis that
$q_j=s^{j-1}m^jz(z+j)^{j-1}/j!$ for $j<n$, simplify, rewrite the
righthand side of relation \eqref{q_n} in terms of generating
functions, and appeal to relation \eqref{u}, we find that
\begin{eqnarray*}
q_n
&=&\frac{(sm(z+n))^{n-1}}{n(mz+n-1)^{n-2}}\sum_{d\geq1}\binom{m}{d}[x^{n-1}]{(u^z-1)^d}\\
&=&\frac{(sm(z+n))^{n-1}}{n(mz+n-1)^{n-2}}[x^{n-1}](u^{zm}-1) =
\frac{s^{n-1}m^n z(z+n)^{n-1}}{n!}.
\end{eqnarray*}
This suffices to complete the proof of the theorem.
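As a sanity check on the theorem, the recursions \eqref{p_n} and \eqref{q_n} can be iterated directly in exact rational arithmetic and compared with the closed forms \eqref{Identity-1} and \eqref{Identity-2} (an illustrative sketch; $q_n$ is evaluated at a fixed rational value of $z$):

```python
from fractions import Fraction
from math import comb, factorial

def comp_sum(vals, total, d):
    # Inner sum of the recursions: sum over compositions (j_1,...,j_d)
    # of `total` into d positive parts of vals[j_1]*...*vals[j_d].
    conv = [Fraction(1)] + [Fraction(0)] * total   # d = 0 base series
    for _ in range(d):
        conv = [sum(conv[k - j] * vals[j] for j in range(1, k + 1))
                for k in range(total + 1)]
    return conv[total]

def pq_values(N, s, m, z):
    # p_n and q_n for n = 1..N from the recursions, computed exactly
    p = [Fraction(0)] * (N + 1)
    q = [Fraction(0)] * (N + 1)
    p[1], q[1] = Fraction(1), Fraction(m) * z
    for n in range(2, N + 1):
        tp = sum(comb(m, d) * Fraction(s) ** d * comp_sum(p, n - 1, d)
                 for d in range(1, n))
        tq = sum(comb(m, d) * Fraction(s) ** d * comp_sum(q, n - 1, d)
                 for d in range(1, n))
        p[n] = tp / (n * m ** (n - 1))
        q[n] = tq * (z + n) ** (n - 1) / (n * (m * z + n - 1) ** (n - 2))
    return p, q

z = Fraction(1, 3)
p, q = pq_values(6, 1, 2, z)   # s = 1, m = 2: incomplete binary trees
```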
\begin{exam}
The five ordered trees with $n=4$ vertices are illustrated in Figure
\ref{eg-ordered}.
\begin{figure}
\caption{Ordered trees with $4$ vertices.}
\label{eg-ordered}
\end{figure}
If $F$ is the $(1,k)$-family (of incomplete $k$-ary trees), then it
follows from the Theorem that
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{1}{h_vk^{h_v-1}}&=&\frac{1}{n!}\label{(1,k)-1}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{(z+h_v)^{h_v-1}}{h_v(kz+h_v-1)^{h_v-2}}
&=&\frac{k^nz}{n!}(z+n)^{n-1},\label{(1,k)-2}
\end{eqnarray}
where the sums, here and elsewhere, are over the trees $T$ in $F_n$.
In this case, the weights of $T_1, T_2, T_3, T_4$ and $T_5$ are
$\binom{k}{3}, \binom{k}{2}\binom{k}{1}, \binom{k}{2}\binom{k}{1},
\binom{k}{1}\binom{k}{2}$ and $\left(\binom{k}{1}\right)^3$,
respectively. Hence,
\begin{eqnarray*}
p_4=\frac{\binom{k}{3}}{4\cdot
k^3}+\frac{\binom{k}{2}\binom{k}{1}}{4\cdot k^3 \cdot 2\cdot
k}+\frac{\binom{k}{2}\binom{k}{1}}{4\cdot k^3 \cdot 2\cdot
k}+\frac{\binom{k}{1}\binom{k}{2}}{4\cdot k^3\cdot 3\cdot
k^2}+\frac{\left(\binom{k}{1}\right)^3}{4\cdot k^3 \cdot 3\cdot
k^2\cdot 2\cdot k}=\frac{1}{4!}
\end{eqnarray*}
and
\begin{eqnarray*}
q_4&=&\frac{\binom{k}{3}(z+4)^3}{4(kz+3)^2(kz)^{-3}}
+\frac{\binom{k}{2}\binom{k}{1}(z+4)^3(z+2)}{4(kz+3)^2\cdot
2(kz)^{-2}}
+\frac{\binom{k}{2}\binom{k}{1}(z+4)^3(z+2)}{4(kz+3)^2\cdot 2(kz)^{-2}}\\
&&+\frac{\binom{k}{1}\binom{k}{2}(z+4)^3(z+3)^2}{4(kz+3)^2\cdot
3(kz+2)(kz)^{-2}}
+\frac{\left(\binom{k}{1}\right)^3(z+4)^3(z+3)^2(z+2)}{4(kz+3)^2\cdot3(kz+2)\cdot2(kz)^{-1}}
=\frac{k^4z(z+4)^3}{4!}.
\end{eqnarray*}
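The two worked computations above can be spot-checked with exact rational arithmetic. The following sketch evaluates both sums at the (arbitrary) test values $k=3$, $z=2$; each tree $T_1,\ldots,T_5$ is recorded by its weight and the multiset of its hook lengths $h_v$, read off from the displayed terms.

```python
from fractions import Fraction
from math import comb, factorial, prod

k, z = 3, Fraction(2)  # arbitrary test values

def vertex(h):
    # Per-vertex factor (z + h)^(h-1) / (h (kz + h - 1)^(h-2)); for h = 1 this is kz.
    return (z + h) ** (h - 1) / (h * (k * z + h - 1) ** (h - 2))

# Weights and hook-length multisets of the trees T_1, ..., T_5.
trees = [
    (comb(k, 3),              [4, 1, 1, 1]),
    (comb(k, 2) * comb(k, 1), [4, 2, 1, 1]),
    (comb(k, 2) * comb(k, 1), [4, 2, 1, 1]),
    (comb(k, 1) * comb(k, 2), [4, 3, 1, 1]),
    (comb(k, 1) ** 3,         [4, 3, 2, 1]),
]

p4 = sum(w * prod(Fraction(1, h * k ** (h - 1)) for h in hooks)
         for w, hooks in trees)
q4 = sum(w * prod(vertex(h) for h in hooks) for w, hooks in trees)

assert p4 == Fraction(1, factorial(4))
assert q4 == k ** 4 * z * (z + 4) ** 3 / factorial(4)
```

Both assertions hold, matching \eqref{(1,k)-1} and \eqref{(1,k)-2} for $n=4$.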
Notice that \eqref{(1,k)-1} and \eqref{(1,k)-2} reduce to Han's
identities when $k=2$.
If $F$ is a $(-1,-k)$-family, then
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{1}{h_v(-k)^{h_v-1}}&=&\frac{(-1)^{n-1}}{n!}\label{(-1,-k)-family}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{(z+h_v)^{h_v-1}}{h_v(h_v-kz-1)^{h_v-2}}
&=&\frac{-k^nz}{n!}(z+n)^{n-1}.
\end{eqnarray}
In particular, if $F$ is the $(-1,-1)$-family, i.e., the family of
ordered trees, then
\begin{eqnarray}
\sum_{T}\prod_{v\in T}
\frac{1}{h_v(-1)^{h_v-1}}&=&\frac{(-1)^{n-1}}{n!}\label{ordered}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{T}\prod_{v\in T} \frac{(z+h_v)^{h_v-1}}{h_v(h_v-z-1)^{h_v-2}}
&=&\frac{-z}{n!}(z+n)^{n-1},
\end{eqnarray}
where we have omitted the weight factors since they all equal one.
In this case,
\begin{eqnarray*} p_4&=&\frac{1}{4\cdot
(-1)^3}+\frac{1}{4\cdot (-1)^3 \cdot 2\cdot
(-1)}+\frac{1}{4\cdot (-1)^3 \cdot 2\cdot (-1)}\\
&&+\frac{1}{4\cdot (-1)^3\cdot 3\cdot (-1)^2}+\frac{1}{4\cdot (-1)^3
\cdot 3\cdot (-1)^2\cdot 2\cdot (-1)}=-\frac{1}{4!}
\end{eqnarray*}
and
\begin{eqnarray*}
q_4&=&\frac{(z+4)^3}{4(3-z)^2(-z)^{-3}}
+\frac{(z+4)^3(z+2)}{4(3-z)^2\cdot 2(-z)^{-2}}
+\frac{(z+4)^3(z+2)}{4(3-z)^2\cdot 2(-z)^{-2}}\\
&&+\frac{(z+4)^3(z+3)^2}{4(3-z)^2\cdot 3(2-z)(-z)^{-2}}
+\frac{(z+4)^3(z+3)^2(z+2)}{4(3-z)^2\cdot3(2-z)\cdot2(-z)^{-1}}
=-\frac{z(z+4)^3}{4!}.
\end{eqnarray*}
If $F$ is a $(1/m,m)$-family, then
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{1}{h_vm^{h_v-1}}&=&\frac{1}{m^{n-1}n!}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{T}w(T)\prod_{v\in T}
\frac{(z+h_v)^{h_v-1}}{h_v(mz+h_v-1)^{h_v-2}}
&=&\frac{mz}{n!}(z+n)^{n-1} \label{rooted tree}.
\end{eqnarray}
If we let $z=1/m$ in \eqref{rooted tree} and take the limit as $m$
tends to infinity, we obtain the identity
\begin{eqnarray}
\sum_{T} \prod_{v\in T}\frac{1}{d_v!}=\frac{n^{n-1}}{n!},
\end{eqnarray}
where the sum is over all ordered trees $T$ with $n$ vertices. This
relation, which expresses the number $n^{n-1}$ of rooted labelled
trees with $n$ vertices as a sum over the ordered trees with $n$
vertices, with suitable weights taken into account, is equivalent to
a relation given by Mohanty \cite[p. 163]{Mohanty_y1979}.
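The $n=4$ instance of this limit identity can be verified directly; below, each of the five ordered trees of Figure~\ref{eg-ordered} is recorded (in an assumed order) by the out-degrees $d_v$ of its vertices.

```python
from fractions import Fraction
from math import factorial, prod

n = 4
# Out-degree multisets of the five ordered trees with 4 vertices.
trees = [
    [3, 0, 0, 0],  # root with three leaf children
    [2, 1, 0, 0],  # root with two children, one of which has a child (two shapes)
    [2, 1, 0, 0],
    [1, 2, 0, 0],  # root whose only child has two children
    [1, 1, 1, 0],  # the path
]

total = sum(Fraction(1, prod(factorial(d) for d in degs)) for degs in trees)
assert total == Fraction(n ** (n - 1), factorial(n))  # both sides equal 8/3
```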
\end{exam}
\section{An Involution on Increasing Ordered Trees}
We conclude by giving a sign-reversing involution that establishes
an alternate form of identity \eqref{ordered}, namely,
\begin{eqnarray}
\sum_{T}n!\prod_{v\in T} \frac{1}{h_v(-1)^{h_v}}&=&-1,
\end{eqnarray}
where the sum is over all ordered trees with $n$ vertices.
It is well known that $n!/\prod_{v\in T}h_v$ counts the number of
ways to label the vertices of $T$ with $\{1,2,\ldots, n\}$ such that
the label of each vertex is less than that of its descendants
\cite[p.67, exer. 20]{Knuth_y1997}. Such a labelled tree is called
{\it increasing}. We define the sign of a tree $T$ to be
$\prod_{v\in T} (-1)^{h_v}$.
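The hook-length count of increasing labellings can be checked by brute force on a small tree; the sketch below uses the four-vertex tree whose root has two children, one of which has a child (the vertex numbering is ours).

```python
from itertools import permutations
from math import factorial, prod

# parent[v] for vertices 0..3: root 0 with children 1, 2; vertex 1 has child 3.
parent = [None, 0, 0, 1]
n = len(parent)

def subtree_size(v):
    # h_v = number of vertices in the subtree rooted at v.
    return 1 + sum(subtree_size(u) for u in range(n) if parent[u] == v)

hooks = [subtree_size(v) for v in range(n)]  # h-values 4, 2, 1, 1

# Count labellings with {1, ..., n} in which each parent's label is smaller.
count = sum(
    1
    for lab in permutations(range(1, n + 1))
    if all(lab[parent[u]] < lab[u] for u in range(n) if parent[u] is not None)
)
assert count == factorial(n) // prod(hooks)  # 24 / 8 = 3 increasing labellings
```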
An increasing ordered tree $T$ with $n$ vertices is {\it proper} if
the root of $T$ has $n-1$ children and their labels are increasing
from left to right. It is easy to check that the sign of any proper
tree is $-1$.
The involution is based on the non-proper increasing ordered trees.
For any leaf $v$ of a non-proper increasing ordered tree, suppose
that $v$ is the $i$-th child of $u$ and that $w$ is the $(i+1)$-th
child of $u$, if it exists. We say that $v$ is {\it illegal} if
either $v$ is the rightmost child of $u$ and the subtree rooted at
$u$ is proper, or $v$ is bigger than every vertex of the subtree
rooted at $w$ and the subtree rooted at $w$ is proper.
Now the involution can be described as follows: given any
non-proper increasing ordered tree $T$, let $v$ be the first
illegal leaf encountered when traversing $T$ in preorder.
There are two cases: (1) $v$ is bigger than every vertex of the
subtree rooted at $w$ and the subtree rooted at $w$ is proper; (2)
$v$ is the rightmost child of $u$ and the subtree rooted at $u$ is
proper. In case (2), $u$ is not the root of $T$.
For case (1), let $u$ be the parent of $v$. We delete the edge
between $u$ and $v$ and attach $v$ as the rightmost child of $w$. Let
$T'$ be the resulting tree. Note that in $T'$ the leaf $v$ is still
the first illegal leaf encountered in preorder.
For case (2), we may reverse the construction for case (1). Hence
we obtain a sign-reversing involution. Figure \ref{eg-involution}
illustrates this involution on increasing ordered trees.
\begin{figure}
\caption{An involution on increasing ordered trees.}
\label{eg-involution}
\end{figure}
\end{document}
\begin{document}
\title{Coalgebra Learning via Duality}
\begin{abstract}
Automata learning is a popular technique for inferring minimal automata through membership and equivalence queries. In this paper, we generalise learning to the theory of coalgebras. The approach relies on the use of logical formulas as tests, based on a dual adjunction between states and logical theories. This allows us to learn, e.g., labelled transition systems, using Hennessy-Milner logic. Our main contribution is an abstract learning algorithm, together with a proof of correctness and termination.
\end{abstract}
\section{Introduction}
\label{sec:intro}
In recent years, automata learning has been applied with considerable
success to infer models of systems and to analyse and verify them.
Most current approaches to active automata learning are ultimately based
on the original algorithm due to Angluin~\cite{Angluin87},
although numerous improvements have been made,
both in practical performance and in extending the techniques
to different models~\cite{Vaandrager17}.
Our aim is to move from automata to \emph{coalgebras}~\cite{Rutten00,jacobs-coalg},
providing a generalisation of learning to a wide
range of state-based systems.
The key insight underlying our work is that dual adjunctions connecting coalgebras and tailor-made logical languages~\cite{KupkeKP04,BonsangueK05,Klin07,PavlovicMW06,KupkeP11} allow us to devise a generic learning algorithm for coalgebras that is parametric in the type of system under consideration.
Our approach gives rise to a fundamental distinction between \emph{states}
of the learned system and \emph{tests}, modelled as logical formulas. This distinction is blurred in the classical DFA algorithm,
where tests are also used to specify the (reachable) states. It is precisely the distinction between
tests and states which allows us to
move beyond classical automata, and use, for instance, Hennessy-Milner logic to
learn bisimilarity quotients of labelled transition systems.
To present learning via duality we need to introduce new notions and refine existing ones.
First, in the setting of coalgebraic modal logic, we introduce the new notion of \emph{sub-formula closed}
collections of formulas, generalising suffix-closed sets of words in Angluin's algorithm (Section~\ref{sec:subform-closed}).
Second, we import the abstract notion of \emph{base} of a functor from~\cite{alwin}, which allows
us to speak about `successor states' (Section~\ref{sec:base}). In particular, the base allows us to
characterise \emph{reachability} of coalgebras in a clear and concise way. This yields
a canonical procedure for computing the reachable part from a given initial state in a coalgebra,
thus generalising the notion of a generated subframe from modal logic.
We then rephrase \emph{coalgebra learning} as the problem of inferring a coalgebra which
is reachable, minimal and which cannot be distinguished from the original coalgebra
held by the teacher using tests. This requires suitably adapting the computation of the reachable part
to incorporate tests, and only learn `up to logical equivalence'.
We formulate the notion of \emph{closed table}, and an associated procedure to close tables.
With all these notions in place, we can finally define our abstract algorithm for coalgebra learning,
together with a proof of correctness and termination (Section~\ref{sec:learning}).
Overall, we consider this correctness and termination proof as the main contribution of the paper;
other contributions are the computation of reachability via the base and the
notion of sub-formula closedness. At a more conceptual level, our paper
shows how states and tests interact in automata learning, by rephrasing it in the context of a dual adjunction
connecting coalgebra (systems) and algebra (logical theories).
As such, we provide a new foundation of learning state-based systems.
\paragraph{Related work.}
The idea that tests in the learning algorithm should be formulas of a distinct logical language
was proposed first in~\cite{baku18:angl}. However, the
work in {\em loc.cit.} is quite ad-hoc, confined to Boolean-valued modal logics,
and did not explicitly use duality.
This paper is a significant improvement: the dual adjunction
framework and the definition
of the base~\cite{alwin} enables us to present a description of Angluin's algorithm in purely categorical terms, including
a proof of correctness and, crucially, termination.
Our abstract notion of logic also enables us to recover {\em exactly} the standard DFA algorithm
(where tests are words) and the algorithm for learning Mealy machines (where tests are many-valued), something that is not possible in \cite{baku18:angl}
where tests are modal formulas.
Closely related to our work is also the line of research initiated by~\cite{jasi:auto14} and followed up within the CALF project~\cite{heer:anab16,HeerdtS017,heer17:lear}
which applies ideas from category theory to automata learning.
Our approach is orthogonal to CALF: the latter
focuses on learning a general version of {\em automata},
whereas our work is geared towards learning bisimilarity quotients of state-based transition systems.
While CALF lends itself to studying automata in a large variety
of base categories, our work thus far is concerned with varying the type of transition structures.
\section{Learning by Example}
\label{sec:examples}
The aim of this section is twofold: (i) to remind the reader of the key elements of Angluin's L$^*$ algorithm~\cite{Angluin87} and
(ii) to motivate and outline our generalisation.
In the classical L$^*$ algorithm, the learner tries to learn
a regular language $\mathcal{L}$ over some alphabet $A$ or, equivalently, a DFA $\mathcal{A}$ accepting that language.
Learning proceeds by asking queries to a teacher who has access to this automaton.
{\em Membership queries} allow the learner to test whether a given word is in the language,
and {\em equivalence queries} to test whether the correct DFA has been learned already.
The algorithm constructs so-called tables $(S,E)$ where $S, E \subseteq A^*$
are the rows and columns of the table, respectively. The value at position $(s,e)$ of the table is the answer to the membership query ``$se \in \mathcal{L}$?''.
Words play a double role: On the one hand, a word $w \in S$ represents the
state which is reached when reading $w$ at the initial state.
On the other hand, the set $E$ represents the set of membership queries that the learner is asking about the states in $S$.
A table is {\em closed} if for all $w \in S$ and all $a \in A$ either $wa \in S$
or there is a state $v \in S$ such that $wa$ is equivalent to $v$ w.r.t.\ membership queries of words in $E$.
If a table is not closed we extend $S$ by adding words of the form $w a$ for $w \in S$ and $a \in A$.
Once it is closed, one can define a {\em conjecture},\footnote{The algorithm additionally
requires \emph{consistency}, but this is not needed if counterexamples
are added to $E$. This idea goes back to~\cite{mapn95:onth}.}
i.e., a DFA with states in $S$. The learner now asks
the teacher whether the conjecture is correct. If it is, the algorithm terminates. Otherwise
the teacher provides a {\em counterexample}: a word on which the conjecture is incorrect. The table
is now extended using the counterexample. As a result, the table
is not closed anymore and the algorithm continues again by closing the table.
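The table-closing loop just described can be sketched as follows (a minimal illustration only, omitting consistency, conjectures and counterexample handling; `member` stands for the teacher's membership oracle):

```python
def row(member, s, E):
    """The row of word s in the table: answers to the queries "s.e in L?" for e in E."""
    return tuple(member(s + e) for e in E)

def close_table(member, S, E, alphabet):
    """Add one-letter extensions wa of rows until the table (S, E) is closed."""
    S = list(S)
    seen = {row(member, s, E) for s in S}
    changed = True
    while changed:
        changed = False
        for s in list(S):
            for a in alphabet:
                r = row(member, s + a, E)
                if r not in seen:       # wa is inequivalent to every v in S
                    S.append(s + a)
                    seen.add(r)
                    changed = True
    return S

# Hypothetical teacher: number of a's divisible by 3 (the example of the next section).
member = lambda w: w.count("a") % 3 == 0
S = close_table(member, [""], ["", "a", "aa"], "ab")
```

Run with $E=\{\varepsilon,a,aa\}$, the loop stops at the three pairwise distinguishable rows $\varepsilon$, $a$, $aa$.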
Our version of L$^*$ introduces some key conceptual differences: tables are
pairs
$(S,\Psi)$ such that $S$ (set of rows) is a selection of states of $\mathcal{A}$ and $\Psi$ (set of columns) is a collection of tests/formulas.
Membership queries become checks of tests in $\Psi$ at states in $S$ and equivalence queries
verify whether or not the learned structure is logically equivalent to the original one.
A table $(S,\Psi)$ is closed if for all successors $x'$ of elements of $S$ there exists an $x \in S$ such that $x$ and $x'$
are equivalent w.r.t.\ formulas in $\Psi$.
The clear distinction
between states and tests in our algorithm means that counterexamples are formulas that have to be added to $\Psi$.
Crucially, the move from words to formulas
allows us to use the rich theory of coalgebra and coalgebraic logic to devise a generic algorithm.
We consider two examples within our generic framework: classical DFAs, yielding essentially the L$^*$ algorithm,
and labelled transition systems,
a case which, to the best of our knowledge, is not covered by standard automata learning algorithms.
For the DFA case, let $L = \{u \in \{a,b\}^{*} \mid \mbox{number of } a\mbox{'s} \mbox{ mod } 3 = 0\}$
and assume that the teacher uses the following (infinite) automaton describing $L$:
\begin{center}
\begin{tikzpicture}[->,node distance=1.3cm, semithick, auto]
\tikzstyle{every state}=[text=black]
\node[initial,state, initial text=, accepting] (A) {$q_0$};
\node[state] (B) [right of=A] {$q_1$};
\node[state] (C) [right of = B]{$q_2$};
\node[state] (D) [right of = C, accepting] {$q_3$};
\node[state] (E) [right of = D] {$q_4$};
\node[state] (F) [right of = E] {$q_5$};
\node[state] (G) [right of = F, accepting] {$q_6$};
\node[state] (H) [right of = G] {$q_7$};
\node[state] (I) [right of = H, draw=none]{$\cdots$};
\path (A) edge node {\tiny $a$} (B)
edge [loop above] node {\tiny $b$} (C)
(B) edge node {\tiny $a$} (C)
edge [loop above] node {\tiny $b$} (C)
(C) edge node {\tiny $a$} (D)
edge [loop above] node {\tiny $b$} (C)
(D) edge node {\tiny $a$} (E)
edge [loop above] node {\tiny $b$} (C)
(E) edge node {\tiny $a$} (F)
edge [loop above] node {\tiny $b$} (C)
(F) edge node {\tiny $a$} (G)
edge [loop above] node {\tiny $b$} (C)
(G) edge node {\tiny $a$} (H)
edge [loop above] node {\tiny $b$} (C)
(H) edge node {\tiny $a$} (I)
edge [loop above] node {\tiny $b$} (H);
\end{tikzpicture}
\end{center}
As outlined above, the learner starts to construct tables $(S,\Psi)$ where $S$ is a selection of states
of the automaton and $\Psi$ are formulas. For DFAs we will see (Ex.~\ref{ex:dfa}) that
our formulas are just words in $\{a,b\}^*$.
Our starting table is $(\{q_0\}, \emptyset)$, i.e., we select the initial state and do not check any logical properties.
This table is trivially closed, as all states are equivalent w.r.t.\ $\emptyset$.
The first conjecture is the automaton consisting of one accepting state $q_0$ with $a$- and $b$-loops,
whose language is $\{a,b\}^*$.
This is incorrect and the teacher provides, e.g., $aa$ as counterexample.
The resulting table is $(\{q_0\},\{\varepsilon,a,aa\})$ where the second component was generated by closing $\{aa\}$
under suffixes. Suffix closedness features both in the original L$^*$ algorithm and in our framework (Section~\ref{sec:subform-closed}).
The table $(\{q_0\},\{\varepsilon,a,aa\})$ is not closed as $q_1$, the $a$-successor of $q_0$,
does not accept $\varepsilon$ whereas $q_0$ does. Therefore we extend the table to $(\{q_0,q_1\},\{\varepsilon,a,aa\})$.
Note that, unlike in the classical setting, exploring
successors of already selected states cannot be achieved by appending letters to words, but we need to
{\em locally} employ the transition structure on the automaton $\mathcal{A}$ instead.
A similar argument shows that we need to extend the table further to $(\{q_0,q_1,q_2\},\{\varepsilon,a,aa\})$ which is closed.
This leads to the (correct) conjecture depicted on the right below.
The acceptance condition and transition structure have been
read off from the original automaton, where the transition from $q_2$ to $q_0$ is obtained by realising that $q_2$'s successor $q_3$
is represented by the equivalent state $q_0 \in S$.
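The run just described can be mimicked on a lazy encoding of the teacher's automaton: state $i$ steps to $i+1$ on $a$ and to itself on $b$, and we assume the acceptance pattern suggested by the picture continues ($q_i$ accepting iff $i \bmod 3 = 0$). A sketch:

```python
TESTS = ["", "a", "aa"]

def accepts(i, w):
    # Run word w from state i of the (lazily represented) infinite automaton.
    for c in w:
        i += 1 if c == "a" else 0
    return i % 3 == 0

def theory(i):
    return tuple(accepts(i, t) for t in TESTS)

# Close the table (S, TESTS): locally add a/b-successors with a new theory.
S = [0]
frontier = [0]
while frontier:
    i = frontier.pop()
    for j in (i + 1, i):                      # a-successor, b-successor
        if theory(j) not in {theory(s) for s in S}:
            S.append(j)
            frontier.append(j)

# Read off the conjecture: each transition goes to the representative of the
# successor's logical equivalence class (q2's a-successor q3 is represented by q0).
rep = {theory(s): s for s in S}
delta = {(s, c): rep[theory(s + 1 if c == "a" else s)] for s in S for c in "ab"}
accepting = [s for s in S if accepts(s, "")]
```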
\begin{wrapfigure}[5]{r}{0cm}
\begin{minipage}{4cm}
\centering
\begin{tikzpicture}[->,auto,node distance=1.2cm]
\tikzstyle{every state}=[text=black]
\node[initial, initial text=, state, accepting] (A) {$q_0$};
\node[state] (B) [right of=A] {$q_1$};
\node[state] (C) [right of =B] {$q_2$};
\path (A) edge [bend left] node {\tiny $a$} (B)
edge [loop above] node {\tiny $b$} (C)
(B) edge [loop above] node {\tiny $b$} (C)
edge [bend left] node {\tiny $a$} (C)
(C) edge [bend left] node {\tiny $a$} (A)
edge [loop above] node {\tiny $b$} (C);
\end{tikzpicture}
\end{minipage}
\end{wrapfigure}
A key feature of our work is that the L$^*$ algorithm can be systematically generalised to new settings, in particular,
to the learning of bisimulation quotients of transition systems.
Consider the following labelled transition system (LTS).
We would like to learn its minimal representation, i.e., its quotient modulo bisimulation.
\begin{wrapfigure}[7]{r}{0cm}
\begin{minipage}{8cm}
\centering
\begin{tikzpicture}[->,node distance=1.3cm, semithick, auto]
\tikzstyle{every state}=[text=black]
\node[state, initial, initial text=] (A) {$x_0$};
\node[state] (B) [below left of=A] {$x_1$};
\node[state] (C) [below right of = A]{$x_2$};
\node[state]
(D) [below left of = B] {$x_3$};
\node[state]
(E) [below right of = B] {$x_4$};
\node[state]
(F) [right of = E] {$x_5$};
\node[state]
(G) [right of = F] {$x_6$};
\node[state]
(H) [right of = G] {$x_7$};
\node[state] (I) [right of = H, draw=none]{$\cdots$};
\path (A) edge [bend left = 20] node {\tiny $a$} (B)
edge [bend left = 20] node {\tiny $a$} (C)
(B) edge [bend left = 20] node {\tiny $a$} (A)
edge node ['] {\tiny $b$} (D)
edge node {\tiny $a$} (E)
(C) edge [bend left = 20] node {\tiny $a$} (A)
edge node [', near start] {\tiny $b$} (F)
edge node {\tiny $a$} (G)
(E) edge [loop left] node {\tiny $b$} (E)
(G) edge node {\tiny $b$} (H)
(H) edge node {\tiny $b$} (I);
\end{tikzpicture}
\end{minipage}
\end{wrapfigure}
Our setting allows us to choose a suitable logical language.
For LTSs, the language consists of the formulas of standard
multi-modal logic (cf. Ex.~\ref{ex:Pow}).
The semantics is as usual where $\diam{a}\phi$ holds at a state if it has an $a$-successor that
makes $\phi$ true.
As above, the algorithm constructs tables, starting with $(S = \{x_0\}, \Psi = \emptyset)$.
The table is closed, so the first conjecture is a single state with an $a$-loop with no proposition letter true (note that $x_0$ has no $b$ or $c$ successor and
no proposition is true at $x_0$).
It is, however, easy for the teacher to find a counterexample. For example, the formula
$\diam{a} \diam{b} \top$ is true at the root of the original LTS but false in the conjecture.
We add the counterexample and all its subformulas to $\Psi$ and obtain a new table
$(\{x_0\},\Psi')$ with $\Psi' = \{ \diam{a} \diam{b} \top, \diam{b} \top, \top\}$.
Now, the table is not closed, as $x_0$ has successor $x_1$ that satisfies $\diam{b} \top$
whereas $x_0$ does not satisfy $\diam{b}\top$.
Therefore we add $x_1$ to the table to obtain $(\{x_0,x_1\},\Psi')$.
Similar arguments will lead to the closed table
$(\{x_0,x_1,x_3,x_4\},\Psi')$ which also yields the correct conjecture.
Note that the state $x_2$ does not get added to the table as it is equivalent to $x_1$
and thus already represented. This demonstrates a remarkable fact: we computed the bisimulation
quotient of the LTS without inspecting the (infinite) right-hand side of the LTS.
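This run, too, can be replayed on a finite fragment of the LTS (we truncate the infinite right-hand chain after $x_7$, which suffices for evaluating the formulas in $\Psi'$ at the states actually visited); formulas are encoded as nested `('dia', action, subformula)` tuples.

```python
# (state, action) -> successors, read off from the picture; x3 and x5 have none.
succ = {
    ("x0", "a"): {"x1", "x2"},
    ("x1", "a"): {"x0", "x4"}, ("x1", "b"): {"x3"},
    ("x2", "a"): {"x0", "x6"}, ("x2", "b"): {"x5"},
    ("x4", "b"): {"x4"},
    ("x6", "b"): {"x7"},
    ("x7", "b"): {"x8"},  # truncated tail of the infinite chain
}

def sat(x, phi):
    if phi == "T":        # the formula "top" holds everywhere
        return True
    _, act, sub = phi     # a diamond <act> sub
    return any(sat(y, sub) for y in succ.get((x, act), set()))

psi = ["T", ("dia", "b", "T"), ("dia", "a", ("dia", "b", "T"))]

def theory(x):
    return tuple(sat(x, p) for p in psi)

# x2 is logically equivalent to x1, so the closed table needs only x0, x1, x3, x4.
assert theory("x1") == theory("x2")
assert len({theory(x) for x in ("x0", "x1", "x3", "x4")}) == 4
```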
Another important example that fits smoothly into our framework is the well-known variant of Angluin's algorithm to
learn Mealy machines~(Ex.~\ref{ex:mealy}). Thanks to our general notion of logic, our framework allows us to
use an intuitive language, where a formula is simply an input word $w$ whose truth value at a state $x$ is the observed output
after entering $w$ at $x$. This is in contrast to~\cite{baku18:angl} where formulas had to be Boolean valued.
Multi-valued logics fit naturally in our setting; this is expected to be useful to deal with systems
with quantitative information.
\section{Preliminaries}
\label{sec:prelims}
The general learning algorithm in this paper is based on
the theory of \emph{coalgebras},
which provides an abstract framework for representing
state-based transition systems.
In what follows we assume that the reader is familiar with basic notions of category theory
and coalgebras~\cite{jacobs-coalg,Rutten00}.
We briefly recall the notion of pointed coalgebra, modelling
a coalgebra with an initial state.
Let $\Cat{C}$ be a category with a terminal object $1$ and let $B \colon \Cat{C} \to \Cat{C}$ be a functor.
A pointed $B$-coalgebra is a triple $(X,\gamma,x_0)$ where $X \in \Cat{C}$ and $\gamma\colon X \to B X$
and $x_0\colon 1 \to X$, specifying the coalgebra structure and the point (``initial state'') of the coalgebra, respectively.
\paragraph{Coalgebraic modal logic.}
Modal logics are used to describe properties of state-based systems, modelled here as coalgebras.
The close relationship between coalgebras and their logics is described
elegantly via dual adjunctions~\cite{KupkeKP04,Klin07,PavlovicMW06,KupkeP11}.
Our basic setting consists of two categories $\Cat{C},
\Cat{D}$ connected by functors $P, Q$ forming a dual adjunction $P \dashv Q \colon \Cat{C} \leftrightarrows \Cat{D}^{\mathsf{op}}$.
In other words, we
have a natural bijection
$\Cat{C}(X, Q \Delta) \cong \Cat{D}(\Delta, P X) \mbox{ for } X \in
\Cat{C}, \Delta \in \Cat{D}$.
Moreover, we assume
\begin{wrapfigure}[4]{r}{0pt}
\begin{minipage}{16em}
\begin{equation}\label{eq:dual-adjunction}
\xymatrix@C=0.5cm{
\Cat{C}\ar@/^2ex/[rr]^-{P} \save !L(.5) \ar@(dl,ul)^{B} \restore & \bot & \Cat{D}^{\mathsf{op}} \ar@/^2ex/[ll]^-{Q} \save !R(.5) \ar@(ur,dr)^{L} \restore
}
\end{equation}
\end{minipage}
\end{wrapfigure}
two functors, $B \colon \Cat{C} \rightarrow \Cat{C}, L \colon
\Cat{D} \rightarrow \Cat{D}$, see~\eqref{eq:dual-adjunction}.
The functor $L$ represents the syntax of the (modalities in the) logic:
assuming that $L$ has an initial
algebra $\alpha \colon L \Phi \rightarrow \Phi$ we think of
$\Phi$ as the collection of formulas, or tests.
In this logical perspective, the functor $P$ maps
an object $X$ of $\Cat{C}$ to the collection of predicates
and the functor $Q$ maps an object $\Delta$ of $\Cat{D}$
to the collection $Q \Delta$ of $\Delta$-theories.
The connection between coalgebras and their logics is specified
via a natural transformation $\delta \colon L P \Rightarrow P B$, sometimes
referred to as the one-step semantics
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{18em}
\begin{equation}\label{eq:semantics-logic}
\vcenter{
\xymatrix@R=0.5cm{ L \Phi \ar@{-->}[r]^{L \Sem{\_}} \ar[d]_\alpha & L P X \ar[r]^{\delta_X} &
P B X \ar[d]^{P \gamma} \\
\Phi \ar@{-->}[rr]^{\exists ! \Sem{\_}} & & P X }
}
\end{equation}
\end{minipage}
\end{wrapfigure}
of the logic. The natural transformation $\delta$ is used to define the semantics of
the logic on a $B$-coalgebra $(X,\gamma)$ by initiality,
as in~\eqref{eq:semantics-logic}.
Furthermore, using the bijective correspondence of the dual adjunction between $P$ and $Q$,
the map $\Sem{\_}$ corresponds to a map
$\theory{}{\gamma}\colon X \rightarrow Q \Phi$ that we will refer to as the theory
map of $(X,\gamma)$.
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{18em}
\begin{equation}\label{eq:theory-map-logic}
\vcenter{
\xymatrix@R=0.5cm{B X \ar@{-->}[r]^{B \theory{}{\gamma}} & B Q \Phi \ar[r]^{\delta^\flat_\Phi}
& Q L \Phi \\
X \ar[u]^\gamma \ar@{-->}[rr]^{\exists ! \theory{}{\gamma}} & & Q \Phi \ar[u]_{Q \alpha}}
}
\end{equation}
\end{minipage}
\end{wrapfigure}
The theory map can be expressed directly via a universal property,
by making use of the so-called \emph{mate} $\delta^\flat\colon BQ \Rightarrow Q L$
of the one-step semantics $\delta$ (cf.~\cite{Klin07,PavlovicMW06}).
More precisely, we have $\delta^\flat = Q L \varepsilon \circ Q \delta Q \circ \eta B Q$,
where $\eta, \varepsilon$ are the unit and counit of the adjunction.
Then $\theory{}{\gamma}\colon X \to Q \Phi$ is the unique morphism making~\eqref{eq:theory-map-logic} commute.
\begin{example}
\label{ex:dfa} Let $\Cat{C}= \Cat{D} = \mathsf{Set}, P=Q = 2^{-}$ the contravariant
power set functor,
$B = 2 \times -^{A}$ and $L = 1 + A \times
-$. In this case $B$-coalgebras can be thought of as
deterministic automata with input alphabet $A$ (e.g.,~\cite{Rutten98}).
It is well-known that the initial $L$-algebra
is $\Phi = A^{*}$ with structure $\alpha = [\varepsilon, \mathrm{cons}]\colon 1 + A \times A^* \to A^*$
where $\varepsilon$ selects the empty word and $\mathrm{cons}$ maps a pair $(a,w) \in A \times A^*$
to the word $aw \in A^*$, i.e., in this example our tests are words with the intuitive meaning that
a test succeeds if the word is accepted by the given automaton.
For $X \in \Cat{C}$, the $X$-component of the (one-step) semantics $\delta\colon L P \Rightarrow P B$ is defined as follows:
$ \delta_X (\ast) = \lbrace (i, f) \in 2 \times X^A \mid i = 1 \rbrace$,
and $\delta_X (a, U) = \lbrace (i, f) \in 2 \times X^A \mid f(a) \in U \rbrace$.
It is a matter of routine checking that the semantics of tests in $\Phi$
on a $B$-coalgebra $(X,\gamma)$ is as follows: we have
$\Sem{\varepsilon} = \{ x \in X \mid \pi_1 (\gamma(x)) = 1 \}$
and $\Sem{a w} = \{ x \in X \mid \pi_2 (\gamma(x))(a)\in \Sem{w}\}$,
where $\pi_1$ and $\pi_2$ are the projection maps.
The theory map $\theory{}{\gamma}$ sends a state to the language accepted by that state in the usual way.
\end{example}
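This instance can be made concrete in a few lines; the sketch below fixes a hypothetical three-state coalgebra $\gamma\colon X \to 2 \times X^A$ (counting $a$'s modulo $3$) and computes $\Sem{w}$ by the two clauses above.

```python
# gamma(x) = (pi_1, pi_2): acceptance bit and transition function, for X = {0, 1, 2}.
gamma = {x: (x % 3 == 0, {"a": (x + 1) % 3, "b": x}) for x in range(3)}

def sem(w):
    """[[w]] as a subset of X, by induction on the word w."""
    if w == "":
        return {x for x in gamma if gamma[x][0]}              # pi_1(gamma(x)) = 1
    a, rest = w[0], w[1:]
    return {x for x in gamma if gamma[x][1][a] in sem(rest)}  # pi_2(gamma(x))(a) in [[rest]]

def theory(x, tests):
    """A finite approximation of the theory map: the tests accepted at x."""
    return {w for w in tests if x in sem(w)}

assert sem("") == {0} and sem("a") == {2} and sem("aa") == {1}
```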
\begin{example}
\label{ex:mealy} Again let $\Cat{C} = \Cat{D} = \mathsf{Set}$ and consider the
functors $P = Q = O^{-}$, $B = (O \times -)^A$ and $L = A \times (1 + -)$, where $A$ and $O$ are fixed sets,
thought of as input and output alphabet, respectively. Then $B$-coalgebras
are Mealy machines and the initial $L$-algebra is given by the set $A^+$ of finite non-empty words over $A$.
For $X \in \Cat{C}$, the one-step semantics $\delta_X\colon A \times (1 + O^X) \to O^{B X}$ is defined by
$\delta_X(a,\mathrm{inl}(*)) = \lambda f . \pi_1 (f(a))$ and
$\delta_X(a,\mathrm{inr}(g)) = \lambda f. g(\pi_2(f(a)))$.
Concretely, formulas are words in $A^+$; the ($O$-valued) semantics of $w \in A^+$ at state $x$
is the output $o \in O$ that is produced after processing the input $w$ from state $x$.
\end{example}
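Concretely, for a hypothetical two-state Mealy machine over $A=\{a,b\}$ and $O=\{0,1\}$ that outputs the parity of the $a$'s read so far, the $O$-valued semantics of a word $w \in A^+$ at a state is computed by following $w$ and reporting the last output:

```python
# gamma(x)(c) = (output, next state); states 0, 1 track the parity of a's.
gamma = {x: {"a": (1 - x, 1 - x), "b": (x, x)} for x in (0, 1)}

def output(x, w):
    """Output observed after entering the nonempty word w at state x."""
    for c in w[:-1]:
        _, x = gamma[x][c]   # process all but the last letter
    return gamma[x][w[-1]][0]

assert output(0, "a") == 1 and output(0, "aa") == 0 and output(0, "ab") == 1
```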
\begin{example}\label{ex:Pow}
Let $\Cat{C} = \mathsf{Set}$ and $\Cat{D} = \mathsf{BA}$, where the latter denotes the
category of Boolean algebras.
Again $P = 2^{-}$, but this time $2^X$ is interpreted as a Boolean algebra.
The functor $Q$ maps a Boolean algebra to
the collection of ultrafilters over it~\cite{blrive01:moda}. Furthermore $B=(\mathcal{P} -)^A$ where
$\mathcal{P}$ denotes covariant power set
and $A$ a set of actions.
Coalgebras for this functor correspond to
labelled transition systems, where a state has a set of successors that depends on the action/input from $A$.
The dual functor $L\colon \mathsf{BA} \to \mathsf{BA}$ is defined as
$L Y \mathrel{:=} F_{\mathsf{BA}} (\{ \diam{a} y \mid a \in A, y \in Y \})
/ \!\equiv$
where $F_\mathsf{BA} \colon \mathsf{Set} \to \mathsf{BA}$ denotes the free Boolean algebra functor and where,
roughly speaking, $\equiv$ is the congruence generated from the axioms
${\diam{a} \perp} \equiv {\perp}$ and $\diam{a}(y_1 \vee y_2) \mathrel{\equiv} \diam{a}(y_1) \vee \diam{a}(y_2)$ for each $a \in A$.
This is explained in more detail in~\cite{KupkeP11}. The initial algebra for this functor is the so-called Lindenbaum-Tarski
algebra~\cite{blrive01:moda} of modal formulas
$\left( \phi \mathrel{::=} \perp \mid \phi \vee \phi \mid \neg \phi \mid \diam{a} \phi \right)$
quotiented by logical equivalence. The definition of an appropriate $\delta$
can be found in, e.g.,~\cite{KupkeP11}---the semantics $\Sem{\_}$ of a formula then amounts to the standard one~\cite{blrive01:moda}.
\end{example}
Different types of probabilistic transition systems also fit into the dual adjunction framework, see, e.g.,~\cite{JacobsS09}.
\paragraph{Subobjects and intersection-preserving functors.}
We denote by $\mathsf{Sub}(X)$ the collection of subobjects of an object $X \in \Cat{C}$.
Let $\leq$ be the order\label{order} on subobjects $s \colon S \rightarrowtail X, s' \colon S' \rightarrowtail X$ given by
$s \leq s'$ iff there is $m \colon S \rightarrow S'$ s.t.\ $s = s' \circ m$.
The \emph{intersection} $\bigwedge J \rightarrowtail X$ of a family $J = \{s_i \colon S_i \rightarrowtail X\}_{i \in I}$ is
defined as the greatest lower bound w.r.t.\ the order $\leq$. In a complete category, it
can be computed by (wide) pullback. We denote the maps in the limiting cone by $x_i \colon \bigwedge J \rightarrowtail S_i$.
For a functor $B \colon \Cat{C} \rightarrow \Cat{D}$, we say $B$ \emph{preserves (wide)
intersections}
if it preserves these wide pullbacks, i.e., if $(B(\bigwedge J), \{B x_i\}_{i
\in I})$ is the pullback of $\{B s_i \colon B S_i \rightarrow B X\}_{i \in I}$.
By \cite[Lemma 3.53]{AMMS13} (building on~\cite{trnkova1971descriptive}), \emph{finitary} functors on $\mathsf{Set}$ `almost' preserve wide intersections:
for every such functor $B$ there is a functor $B'$ which preserves wide intersections and agrees
with $B$ on all non-empty sets.
Finally, if $B$ preserves intersections, then it preserves monos.
\paragraph{Minimality notions.}
The algorithm that we will describe in this paper learns a minimal and reachable representation of an object.
The intuitive notions
of minimality and reachability are formalised as follows.
\begin{definition}
We call a $B$-coalgebra $(X,\gamma)$ \emph{minimal w.r.t.\ logical equivalence} if
the theory map $\theory{}{\gamma}\colon X \to Q \Phi$ is a monomorphism.
\end{definition}
\begin{definition}\label{def:reachable}
We call a pointed $B$-coalgebra $(X,\gamma,x_0)$ \emph{reachable}
if for any subobject $s\colon S \to X$ and $s_0\colon 1 \to S$ with $x_0 = s \circ s_0$:
if $S$ is a subcoalgebra of $(X,\gamma)$ then
$s$ is an isomorphism.
\end{definition}
For expressive logics~\cite{schr08:expr}, behavioural equivalence coincides with
logical equivalence. Hence, in that case, our algorithm
learns a ``well-pointed coalgebra'' in the terminology of~\cite{AMMS13}, i.e., a
pointed coalgebra that is reachable and minimal w.r.t.~behavioural
equivalence. All logics appearing in this paper are expressive.
\paragraph{Assumption on $\Cat{C}$ and Factorisation System.}
Throughout the paper we will assume that $\Cat{C}$ is a complete
and well-powered category. Well-powered means that for each $X \in \Cat{C}$ the collection
$\mathsf{Sub}(X)$ of subobjects of a given object forms a set.
Our assumptions imply~\cite[Proposition 4.4.3]{borceux1994} that every morphism $f$ in $\Cat{C}$
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{10em}
\begin{equation}\label{eq:fill-in}
\begin{tikzcd}
X \ar[d, "h"'] \ar[r, "e", twoheadrightarrow] &
Y \ar[d, "g"] \ar[dl, "d", dashed] \\
U \ar[r, "m"', tail] & Z
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
factors uniquely (up to isomorphism) as $f = m \circ e$ with $m$
a mono and $e$ a strong epi.
Recall that an epimorphism
$e \colon X \rightarrow Y$ is strong if for every commutative
square in~\eqref{eq:fill-in} where the bottom arrow is a monomorphism, there
exists a unique diagonal morphism $d$ such that the entire diagram commutes.
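In $\mathsf{Set}$ this factorisation is simply image factorisation: corestrict $f$ to its image (a surjection, hence strong epi) and include the image into the codomain (an injection, hence mono). A minimal sketch, with `f` a hypothetical map:

```python
def factorise(f, X):
    """Return (e, m) with f = m . e: e maps X onto the image U, m includes U."""
    image = sorted({f(x) for x in X})
    e = {x: f(x) for x in X}      # strong epi X ->> U (surjective by construction)
    m = {u: u for u in image}     # mono U >-> Z (the inclusion)
    return e, m

f = lambda x: x % 3               # hypothetical map on X = {0, ..., 9}
e, m = factorise(f, range(10))
assert all(m[e[x]] == f(x) for x in range(10))   # f = m . e
```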
\section{Subformula Closed Collections of Formulas}
\label{sec:subform-closed}
Our learning algorithm will construct conjectures that are ``partially'' correct, i.e.,
correct with respect to a subobject of the collection of all formulas/tests.
Recall that this collection of all tests is formalised in our setting as the initial $L$-algebra $(\Phi, \alpha\colon L \Phi \to \Phi)$.
To define a notion of partial correctness we need to consider
subobjects of $\Phi$ to which we can restrict the theory map. This is formalised via the notion
of ``subformula closed'' subobject of $\Phi$.
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{10em}
\begin{equation}\label{eq:coalg-to-alg}
\begin{tikzcd}
LX \ar[r, "Lg^{\dagger}"] & LY \ar[d, "g"] \\
X \ar[u, "f"] \ar[r, "g^{\dagger}"] & Y
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
The definition of such subobjects is based on the notion
of \emph{recursive coalgebra}.
For $L \colon \Cat{D} \rightarrow \Cat{D}$ an endofunctor,
a coalgebra $f \colon X \rightarrow LX$ is called
\textit{recursive} if for every $L$-algebra $g \colon
LY \rightarrow Y$ there is a unique `coalgebra-to-algebra' map
$g^{\dagger}$ making~\eqref{eq:coalg-to-alg} commute.
\begin{definition}\label{def:subclosed}
A subobject $j \colon \Psi \to \Phi$ is called a {\em subformula closed collection} (of formulas)
if there is a unique
$L$-coalgebra structure $\sigma \colon \Psi \to L \Psi$ such that
$(\Psi, \sigma)$ is a recursive $L$-coalgebra and
$j$ is the (necessarily unique) coalgebra-to-algebra map
from $(\Psi,\sigma)$ to the initial algebra $(\Phi,\alpha)$.
\end{definition}
\begin{remark}
The uniqueness of $\sigma$ in Definition~\ref{def:subclosed} is implied if $L$ preserves
monomorphisms. This is the case in our examples. The notion of recursive coalgebra
goes back to~\cite{tayl99:prac,OSIUS197479}.
The paper~\cite{AdamekLM07} contains a claim that
the first condition of our definition of subformula closed collection (recursiveness of $(\Psi,\sigma)$) is implied by the second one if $L$ preserves preimages.
In our examples both properties of $(\Psi,\sigma)$ are verified directly, rather than by relying on general
categorical results.
\end{remark}
\begin{example}
In the setting of Example~\ref{ex:dfa}, where the initial
$L$-algebra is based on the set $A^*$ of words over the set (of inputs) $A$, a subset $\Psi \subseteq A^*$ is subformula-closed
if it is suffix-closed, i.e., if for all $a w \in \Psi$ we have $w \in \Psi$ as well.
\end{example}
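In $\mathsf{Set}$, suffix-closedness of a finite $\Psi \subseteq A^*$ is directly checkable. A minimal sketch (the function name and the encoding of words as Python strings are our own illustration):

```python
def is_suffix_closed(psi):
    """Check that a finite set of words is suffix-closed:
    whenever a non-empty word aw is in psi, its suffix w is too."""
    return all(w[1:] in psi for w in psi if w)

# Words over A = {a, b}, encoded as strings.
assert is_suffix_closed({"", "b", "ab"})   # suffixes of "ab": "b", ""
assert not is_suffix_closed({"", "ab"})    # the suffix "b" is missing
```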
\begin{example}
In the setting that $B = (\mathcal{P} -)^A$ for some set of actions $A$, $\Cat{C} = \mathsf{Set}$
and $\Cat{D}=\mathsf{BA}$, the logic is given as a functor $L$ on Boolean algebras as discussed in Example~\ref{ex:Pow}.
As a subformula closed collection $\Psi$ is an object in $\Cat{D} = \mathsf{BA}$, we are not simply dealing with a set of formulas, but with a Boolean algebra.
The connection to the standard notion of being closed under taking subformulas in modal logic~\cite{blrive01:moda} can be sketched as follows:
given a set $\Delta$ of modal formulas that is closed under taking subformulas, we define a Boolean algebra $\Psi_\Delta \subseteq \Phi$
as the smallest Boolean subalgebra of $\Phi$ that is generated by the set $\hat{\Delta} = \{ [\phi]_{\Phi} \mid \phi \in \Delta \}$ where for a formula
$\phi$ we let $[\phi]_\Phi \in \Phi$ denote its equivalence class in $\Phi$.
It is then not difficult to define a suitable $\sigma\colon \Psi_\Delta \to L \Psi_\Delta$.
As $\Psi_\Delta$ is generated by closing $\hat{\Delta}$ under Boolean operations, any
two states $x_1,x_2$ in a given coalgebra $(X,\gamma)$ satisfy
$ \left( \forall b \in \Psi_\Delta. x_1 \in \Sem{b} \Leftrightarrow x_2 \in \Sem{b} \right) \mbox{ iff }
\left( \forall b \in \hat{\Delta}. x_1 \in \Sem{b} \Leftrightarrow x_2 \in \Sem{b} \right).
$
In other words, equivalence w.r.t.\ $\Psi_\Delta$ coincides with equivalence w.r.t.\
the {\em set} of formulas $\Delta$. This explains why in the concrete algorithm,
we do not deal with Boolean algebras explicitly, but with
subformula closed sets of formulas instead.
\end{example}
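The coincidence of the two equivalences can be checked concretely on small finite examples. The following is an illustrative Python sketch (all names are ours), representing each formula by its extension, i.e.\ a subset of the state set, and computing the generated Boolean subalgebra by brute force:

```python
from itertools import combinations

def boolean_closure(generators, universe):
    """Close a family of subsets of `universe` under complement,
    union and intersection: the Boolean subalgebra they generate."""
    algebra = {frozenset(), frozenset(universe)} | set(generators)
    changed = True
    while changed:
        new = {frozenset(universe) - s for s in algebra}
        for s, t in combinations(algebra, 2):
            new.add(s | t)
            new.add(s & t)
        changed = not new <= algebra
        algebra |= new
    return algebra

def theory(x, family):
    """The formulas (members of `family`) satisfied by state x."""
    return frozenset(s for s in family if x in s)

universe = frozenset(range(4))
gens = {frozenset({0, 1}), frozenset({1, 2})}
alg = boolean_closure(gens, universe)

# Equivalence w.r.t. the generators coincides with equivalence
# w.r.t. the whole generated Boolean algebra.
for x in universe:
    for y in universe:
        assert (theory(x, gens) == theory(y, gens)) \
            == (theory(x, alg) == theory(y, alg))
```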
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{18em}
\begin{equation}\label{eq:thmap-psi}
\vcenter{
\xymatrix@C=1cm{
X \ar[d]_{\gamma} \ar[rr]^{\theory{\Psi}{\gamma}}
& & Q \Psi \\
B X \ar[r]^{B \theory{\Psi}{\gamma}}
& B Q \Psi \ar[r]^{\delta^{\flat}_{\Psi}}
& Q L\Psi \ar[u]_{Q \sigma}
} }
\end{equation}
\end{minipage}
\end{wrapfigure}
The key property of subformula closed collections $\Psi$ is that we can restrict
our attention to the so-called $\Psi$-theory map. Intuitively, subformula closedness
is what allows us to define this theory map inductively.
\begin{lemma}\label{lm:subf}
Let $\Psi \stackrel{j}{\rightarrowtail} \Phi$ be a sub-formula closed collection, with coalgebra structure $\sigma \colon \Psi \rightarrow L\Psi$.
Then $\theory{\Psi}{\gamma} = Q j \circ \theory{\Phi}{\gamma}$ is the unique
map making~\eqref{eq:thmap-psi} commute.
We call $\theory{\Psi}{\gamma}$ the $\Psi$-theory map, and omit the $\Psi$
if it is clear from the context.
\end{lemma}
\section{Reachability and the Base}
\label{sec:base}
In this section, we define the notion of \emph{base} of an endofunctor, taken from~\cite{alwin}.
This allows us to speak about the (direct) successors of states in a coalgebra,
and about reachability, which
are essential ingredients of the learning algorithm.
\begin{definition}
Let $B\colon \Cat{C} \rightarrow \Cat{C}$ be an
endofunctor.
We say $B$ \emph{has a base} if for every arrow $f
\colon X \rightarrow B Y$ there exist $g \colon
X \rightarrow B Z$ and $m \colon Z \rightarrowtail Y$
with $m$ a monomorphism such that $f = B m \circ g$, and for any
pair $g' \colon X \rightarrow B Z', m' \colon Z' \rightarrowtail Y$
with $B m' \circ g' = f$ and $m'$ a monomorphism there is a unique
arrow $h \colon Z \rightarrow Z'$ such that $B h \circ g
= g'$ and $m' \circ h = m$, see Diagram~\eqref{eq:base-diagram}.
We call $(Z,g,m)$ the \emph{($B$)-base} of the morphism $f$.
\end{definition}
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{17em}
\begin{equation}\label{eq:base-diagram}
\begin{tikzcd}
X \ar[rr, "f", bend left] \ar[r, "g"']
\ar[dr, "g'"', bend right]
& B Z \ar[d, "B h"] \ar[r, "B m"'] & B Y \\
& B Z' \ar[ur, "B m'"', bend right] &
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
We sometimes refer to $m \colon Z \rightarrowtail Y$ as the base of $f$, omitting the $g$
when it is irrelevant, or clear from the context.
Note that the terminology `the' base is justified, as it is easily
seen to be unique up to isomorphism.
For example,
let $B \colon \mathsf{Set} \rightarrow \mathsf{Set}$, $BX = 2 \times X^A$.
The base of a map $f \colon X \rightarrow BY$ is given by
$m \colon Z \rightarrowtail Y$, where
$Z = \{(\pi_2 \circ f)(x)(a) \mid x \in X, a \in A \}$,
and $m$ is the inclusion. The associated $g \colon X \rightarrow BZ$
is the corestriction of $f$ to $BZ$.
For $B = (\mathcal{P} -)^A \colon \mathsf{Set} \rightarrow \mathsf{Set}$,
the $B$-base of $f \colon X \rightarrow BY$ is given by the inclusion $m \colon Z \rightarrowtail Y$,
where $Z=\{ y \in Y \mid \exists x \in X, \exists a \in A \mbox{ s.t. } y \in f(x)(a)\}$.
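For this $\mathsf{Set}$ functor the base is a straightforward computation. A small illustrative sketch, encoding $f$ as a nested dictionary (names and encoding are ours):

```python
def powerset_base(f, Y):
    """B-base of f : X -> P(Y)^A for B = (P -)^A on Set.
    Returns Z, the carrier of the inclusion m : Z >-> Y, collecting
    every state that occurs in some f(x)(a); the accompanying
    g : X -> P(Z)^A is f itself, corestricted to Z."""
    Z = {y for succ in f.values() for ys in succ.values() for y in ys}
    assert Z <= Y          # m is the inclusion of Z into Y
    return Z

# f : X -> P(Y)^A with X = {0}, Y = {0, 1, 2}, A = {'a', 'b'}.
f = {0: {'a': {1}, 'b': {0, 1}}}
Z = powerset_base(f, {0, 1, 2})   # state 2 never occurs, so Z == {0, 1}
```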
\begin{proposition}\label{prop:existence-base}
Suppose $\Cat{C}$ is complete and well-powered, and $B \colon \Cat{C} \rightarrow
\Cat{C}$ preserves (wide) intersections. Then $B$ has a base.
\end{proposition}
If $\Cat{C}$ is a locally presentable category,
then it is complete and well-powered~\cite[Remark 1.56]{AR94}. Hence, in that case,
any functor $B \colon \Cat{C} \rightarrow \Cat{C}$ which preserves intersections has a base.
The following lemma will be useful in proofs.
\begin{lemma}\label{lm:nat-base}
Let $B \colon \Cat{C} \rightarrow \Cat{C}$ be a functor that has a base and that preserves pre-images.
Let $f \colon S \rightarrow B X$ and
$h \colon X \rightarrow Y$ be morphisms, let
$(Z,g,m)$
be the base of $f$
and let $e \colon Z \rightarrow W, m' \colon W \rightarrow
Y$ be the (strong epi, mono)-factorisation of $h \circ m$.
Then $(W,Be \circ g, m')$
is the base of $B h \circ f$.
\end{lemma}
The $B$-base provides an elegant way to relate reachability within a coalgebra
to a monotone operator on the (complete) lattice of subobjects of the carrier of the coalgebra.
Moreover, we will see that the least subcoalgebra that contains a given subobject of the
carrier can be obtained via a standard least fixpoint construction. Finally, we will introduce
the notion of prefix closed subobject of a coalgebra, generalising the prefix closedness
condition from Angluin's algorithm.
By our assumption on $\Cat{C}$ at the end of Section~\ref{sec:prelims},
the collection of subobjects $(\mathsf{Sub}(X),\leq)$
ordered as usual (cf.~page~\ref{order}) forms a complete lattice.
Recall that the meet on $\mathsf{Sub}(X)$ (intersection) is defined via pullbacks.
In categories
with coproducts, the join $s_1 \vee s_2$ of subobjects
$s_1,s_2 \in \mathsf{Sub}(X)$ is defined as the mono part of the factorisation of the map
$[s_1,s_2]\colon S_1 + S_2 \to X$, i.e., $[s_1,s_2] = (s_1 \vee s_2) \circ e$ for a strong epi
$e$. In $\mathsf{Set}$, this amounts to taking the union of subsets.
\begin{wrapfigure}[4]{r}{0pt}
\begin{minipage}{13.5em}
\begin{equation}\label{eq:operator-diagram}
\vcenter{
\xymatrix@R=0.8cm{S \ar[d]_g \ar[r]^{s} & X \ar[d]^\gamma \\
B \Gamma(S) \ar[r]^-{B \Gamma_\gamma^B(s)} & B X }
}
\end{equation}
\end{minipage}
\end{wrapfigure}
For a binary join $s_1 \vee s_2$ we denote by $\mathit{inl}\colon S_1 \to (S_1 \vee S_2)$ and $\mathit{inr}\colon S_2 \to (S_1 \vee S_2)$
the embeddings that exist by $s_i \leq s_1 \vee s_2$ for $i \in \{1,2\}$.
Let us now define the key operator of this section.
\begin{definition}\label{def:gamma}
Let $B$ be a functor that has a base, $s \colon S \rightarrowtail X$ a subobject of some $X \in \Cat{C}$
and let $(X,\gamma)$ be a $B$-coalgebra. Let $(\Gamma(S), g, \Gamma_\gamma^B(s))$ be the $B$-base of
$\gamma \circ s$, see Diagram~\eqref{eq:operator-diagram}.
Whenever $B$ and $\gamma$ are clear from the context, we write
$\Gamma(s)$ instead of $\Gamma_\gamma^B(s)$.
\end{definition}
\begin{lemma}\label{lm:gamma-monotone}
Let $B \colon \Cat{C} \to \Cat{C}$ be a functor with a base and let $(X,\gamma)$ be a $B$-coalgebra. The operator $\Gamma\colon \mathsf{Sub}(X) \to \mathsf{Sub}(X)$ defined by
$s \mapsto \Gamma(s)$ is monotone.
\end{lemma}
Intuitively, $\Gamma$ computes for a given set of states $S$ the set of ``immediate successors'', i.e.,
the set of states that can be reached by applying $\gamma$ to an element of $S$.
We will see that pre-fixpoints of $\Gamma$ correspond to subcoalgebras. Furthermore,
$\Gamma$ is the key to formulate our notion of closed table in the learning algorithm.
\begin{proposition}\label{prop:subcoalg-base}
Let $s \colon S \rightarrowtail X$ be a subobject and $(X, \gamma) \in \mathsf{Coalg}(B)$ for $X \in \Cat{C}$
and $B\colon\Cat{C} \to \Cat{C}$ a functor that has a base. Then $s$ is a subcoalgebra
of $(X, \gamma)$ if and only if $\Gamma(s) \leq s$. Consequently, the collection of
subcoalgebras of a given $B$-coalgebra forms a complete lattice.
\end{proposition}
Using this connection,
reachability of a pointed coalgebra (Definition~\ref{def:reachable}) can be expressed
in terms of the least fixpoint $\mathsf{lfp}$ of an operator defined in terms of $\Gamma$.
\begin{theorem}\label{thm:reach-base}
Let $B\colon\Cat{C} \to \Cat{C}$ be a functor that has a base.
A pointed $B$-coalgebra $(X,\gamma,x_0)$ is reachable
iff $X \cong \mathsf{lfp}(\Gamma \vee x_0)$ (isomorphic as subobjects of $X$, i.e., equal).
\end{theorem}
This justifies defining the reachable part from an initial state $x_0 \colon 1 \rightarrowtail X$ as the least fixpoint
of the monotone operator $\Gamma \vee x_0$. Standard means of computing the least fixpoint
by iterating this operator then give us a way to compute this subcoalgebra.
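Instantiated to $B = (\mathcal{P}-)^A$ on $\mathsf{Set}$, this fixpoint iteration is the usual reachability computation. A minimal sketch (encoding and names ours):

```python
def reachable_part(gamma, x0):
    """Least fixpoint of S |-> {x0} v Gamma(S) for a coalgebra
    gamma : X -> P(X)^A, by standard fixpoint iteration; Gamma(S)
    is the set of one-step successors of states in S."""
    S, step = set(), {x0}
    while not step <= S:
        S |= step
        step = {y for x in S for ys in gamma[x].values() for y in ys}
    return S

gamma = {0: {'a': {1}}, 1: {'a': {0, 1}}, 2: {'a': {2}}}
assert reachable_part(gamma, 0) == {0, 1}   # state 2 is unreachable
```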
Further, $\Gamma$ provides a way to generalise the notion of ``prefixed closedness'' from Angluin's L$^*$ algorithm
to our categorical setting.
\begin{definition}
Let $s_0,s \in \mathsf{Sub}(X)$ for some $X\in \Cat{C}$ and let $(X,\gamma)$ be a $B$-coalgebra.
We call $s$ {\em $s_0$-prefix closed w.r.t.\ $\gamma$} if
$s = \bigvee_{i=0}^n s_i$ for some $n \geq 0$ and a collection $\{s_i \mid i = 1,\ldots,n\}$ with
$s_{j + 1} \leq \Gamma(\bigvee_{i=0}^j s_i)$ for all $j$ with $0 \leq j < n$.
\end{definition}
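In $\mathsf{Set}$, with $s$ decomposed into singletons, $s_0$-prefix closedness says that $S$ can be generated from $\{s_0\}$ by repeatedly adjoining $\Gamma$-successors of what has been built so far. A hedged sketch of this check (names ours; `successors[y]` plays the role of $\Gamma$ on the singleton $y$):

```python
def is_prefix_closed(S, s0, successors):
    """Check that S can be generated from {s0} by repeatedly adding
    states that are successors of states already present (singleton
    decomposition of the prefix-closedness condition)."""
    built, grown = {s0}, True
    while grown:
        grown = False
        for x in S - built:
            if any(x in successors[y] for y in built):
                built.add(x)
                grown = True
    return built == S

successors = {0: {1}, 1: {2}, 2: set()}
assert is_prefix_closed({0, 1, 2}, 0, successors)
assert not is_prefix_closed({0, 2}, 0, successors)  # 2 needs 1 first
```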
\section{Learning Algorithm}
\label{sec:learning}
We define a general learning algorithm
for $B$-coalgebras.
First, we describe the setting, in general and slightly informal terms.
The teacher has
a pointed $B$-coalgebra $(X, \gamma, s_0)$.
Our task is to `learn'
a pointed $B$-coalgebra $(S, \hat{\gamma}, \hat{s}_0)$ s.t.:
\begin{itemize}
\item $(S,\hat{\gamma}, \hat{s}_0)$ is \emph{correct} w.r.t.\ the
collection $\Phi$ of all tests, i.e., the theory of $(X,\gamma)$
and $(S,\hat{\gamma})$ coincide on the initial states $s_0$ and $\hat{s}_0$,
(Definition~\ref{def:correct-psi});
\item $(S,\hat{\gamma},\hat{s}_0)$ is minimal w.r.t.\ logical equivalence;
\item $(S,\hat{\gamma},\hat{s}_0)$ is reachable.
\end{itemize}
The first point means that the learned coalgebra is `correct', that is,
it agrees with the coalgebra of the teacher on all possible tests from the initial state.
For instance, in case of deterministic automata and their logic in Example~\ref{ex:dfa},
this just means that the language of the learned automaton is the correct one.
In the learning game, we are only provided limited access to the coalgebra
$\gamma \colon X \rightarrow B{X}$. Concretely, the teacher gives us:
\begin{itemize}
\item
for any subobject $S \rightarrowtail X$ and sub-formula closed subobject $\Psi$ of $\Phi$,
the composite theory map
$
\begin{tikzcd}
S \ar[r, tail]
& X \ar[r, "{\theory{\Psi}{\gamma}}"]
& Q \Psi
\end{tikzcd}
$;
\item for $(S, \hat{\gamma}, \hat{s}_0)$ a pointed coalgebra, whether
or not it is correct w.r.t.\ the collection $\Phi$ of all tests;
\item in case of a negative answer to the previous question,
a \emph{counterexample}, which essentially
is a subobject $\Psi'$ of $\Phi$ representing some tests on which
the learned coalgebra is wrong (defined more precisely below);
\item for a given subobject $S$ of $X$, the `next states'; formally, the computation of the $B$-base of
the composite arrow
$
\begin{tikzcd}
S \ar[r,tail]
& X \ar[r,"\gamma"]
& B{X}
\end{tikzcd}
$.
\end{itemize}
The first three points correspond respectively to the standard notions of membership query (`filling
in' the table with rows $S$ and columns $\Psi$),
equivalence query and counterexample generation. The last point, about the base,
is more unusual: it does not occur in the standard algorithm, since there a canonical
choice of $(X,\gamma)$ is used, which allows next states to be represented in a fixed manner.
It is required in our setting of an arbitrary coalgebra $(X,\gamma)$.
In the remainder of this section, we describe the abstract learning algorithm
and its correctness. First, we describe the basic ingredients needed for the algorithm: tables, closedness,
counterexamples and a procedure to close a given table (Section~\ref{sec:tables}).
Based on these notions, the actual algorithm is presented (Section~\ref{sec:alg}),
followed by proofs of correctness and termination (Section~\ref{sec:correctness-and-termination}).
\begin{assumption}
Throughout this section, we assume
\begin{itemize}
\item that we deal with coalgebras over the base category $\Cat{C} = \mathsf{Set}$;
\item a functor $B \colon \Cat{C} \rightarrow \Cat{C}$ that preserves pre-images and wide intersections;
\item a category $\Cat{D}$ with an initial object $0$ s.t.\ arrows with domain $0$ are monic;
\item a functor $L \colon \Cat{D} \rightarrow \Cat{D}$
with an initial algebra $L \Phi \stackrel{\cong}{\rightarrow} \Phi$;
\item an adjunction $P \dashv Q \colon \Cat{C} \leftrightarrows \Cat{D}^\mathsf{op}$,
and a logic $\delta \colon L P \Rightarrow P B$.
\end{itemize}
Moreover, we assume
a pointed $B$-coalgebra $(X, \gamma, s_0)$.
\end{assumption}
\begin{remark}\label{rem:assumption-disc}
We restrict to $\Cat{C} = \mathsf{Set}$, but see it as a key contribution
to state the algorithm in categorical terms: the assumptions
cover a wide class of functors on $\mathsf{Set}$, which is the main direction of generalisation.
Further, the categorical approach will enable future generalisations.
The assumptions on the category $\Cat{C}$ are: it is complete,
well-powered and satisfies that for all (strong) epis $q\colon S \to \overline{S} \in \Cat{C}$
and all monos $i \colon S' \to S$ such that $q \circ i$ is mono there
is a morphism $q^{-1} \colon \overline{S} \to S$ such that $q \circ q^{-1} = \mathsf{id}$ and $q^{-1} \circ q \circ i = i$.
\end{remark}
\subsection{Tables and counterexamples}\label{sec:tables}
\begin{definition}
A \emph{table} is a pair $(S \stackrel{s}{\rightarrowtail} X, \Psi \stackrel{i}{\rightarrowtail} \Phi)$
consisting of a subobject $s$ of $X$ and a subformula-closed subobject $i$ of $\Phi$.
\end{definition}
To make the notation a bit lighter, we sometimes refer to a table by $(S,\Psi)$, using $s$
and $i$ respectively to refer to the actual subobjects.
The pair $(S,\Psi)$ represents `rows' and `columns' respectively, in the table;
the `elements' of the table are given abstractly by the map $\theory{\Psi}{\gamma} \circ s$.
In particular, if $\Cat{C} = \Cat{D} = \mathsf{Set}$ and $Q = 2^{-}$, then this is a map
$S \rightarrow 2^\Psi$, assigning a Boolean value to every pair of a row (state) and a column (formula).
\begin{wrapfigure}[4]{r}{0pt}
\begin{minipage}{18em}
\begin{equation}\label{eq:closed-diagram}
\begin{tikzcd}
S \ar[r, "s", tail] &
X \ar[r, "\theory{}{\gamma}"] &
Q \Psi \\
\Gamma(S) \ar[r, "{\Gamma(s)}"'] \ar[u, "k"]
& X \ar[ur,"\theory{}{\gamma}"']
&
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
For the definition of closedness, we use the operator $\Gamma(S)$ from Definition~\ref{def:gamma},
which characterises the successors of a subobject $S \rightarrowtail X$.
\begin{definition}\label{def:closed}
A table $(S,\Psi)$ is \emph{closed} if there exists a map
$k \colon \Gamma(S) \rightarrow S$ such that Diagram~\eqref{eq:closed-diagram} commutes.
A table $(S,\Psi)$ is \emph{sharp} if the composite map
$
\begin{tikzcd}
S \ar[r, "s"] &
X \ar[r, "{\theory{}{\gamma}}"] &
Q \Psi
\end{tikzcd}
$
is monic.
\end{definition}
Thus, a table $(S,\Psi)$ is closed if all the successors of states (elements of $\Gamma(S)$) are already
represented in $S$, up to equivalence w.r.t.\ the tests in $\Psi$. In other terms,
the rows corresponding to successors of existing rows are already in the table.
Sharpness amounts to minimality w.r.t.\ logical equivalence: no two rows of the table are equal.
The latter will be an invariant of the algorithm (Theorem~\ref{thm:invariant}).
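For deterministic automata ($BX = 2 \times X^A$, $Q = 2^{-}$), closedness and sharpness become concrete checks on rows. A minimal sketch, with the teacher's automaton given by `accepts` and `delta` (our own encoding, not part of the formal development):

```python
def row(x, psi, accepts, delta):
    """Theory of teacher state x w.r.t. the test words in psi."""
    def run(x, w):
        for a in w:
            x = delta[(x, a)]
        return accepts[x]
    return frozenset(w for w in psi if run(x, w))

def is_sharp(S, psi, accepts, delta):
    """No two rows coincide: the theory map is monic on S."""
    rows = [row(x, psi, accepts, delta) for x in S]
    return len(rows) == len(set(rows))

def is_closed(S, psi, accepts, delta, alphabet):
    """Every row of a successor state already occurs as a row of S."""
    rows = {row(x, psi, accepts, delta) for x in S}
    gamma_S = {delta[(x, a)] for x in S for a in alphabet}  # Gamma(S)
    return all(row(y, psi, accepts, delta) in rows for y in gamma_S)

# Teacher: DFA over {a, b} accepting the words that end in 'a'.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
accepts = {0: False, 1: True}
```

For instance, the table $(\{0\}, \{\varepsilon\})$ is not closed, since the successor $1$ has a new row, while $(\{0,1\}, \{\varepsilon\})$ is both closed and sharp.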
\begin{wrapfigure}[4]{r}{0pt}
\begin{minipage}{17em}
\begin{equation}\label{eq:conj}
\begin{tikzcd}
S \ar[r, "s", tail] \ar[d, "\hat{\gamma}"'] &
X \ar[r, "\gamma"] &
B X \ar[d, "B\theory{}{\gamma}"] \\
B S \ar[r, "Bs"'] &
B X \ar[r, "B\theory{}{\gamma}"'] &
B Q \Psi
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
A \emph{conjecture} is a coalgebra on $S$, which is not quite a subcoalgebra of $X$: instead,
it is a subcoalgebra `up to equivalence w.r.t.\ $\Psi$', that is, the successors agree
up to logical equivalence.
\begin{definition}\label{def:conjecture}
Let $(S,\Psi)$ be a table.
A coalgebra structure $\hat{\gamma} \colon S \rightarrow B S$ is called a \emph{conjecture} (for $(S,\Psi)$) if
Diagram~\eqref{eq:conj} commutes.
\end{definition}
It is essential to be able to construct a conjecture from a closed table. The following, stronger
result is a variation of Proposition~\ref{prop:subcoalg-base}.
\begin{theorem}\label{thm:conjecture}
A sharp table is closed iff there
exists a conjecture for it.
Moreover, if the table is sharp and $B$ preserves monos, then this conjecture is unique.
\end{theorem}
\begin{wrapfigure}[5]{r}{0pt}
\begin{minipage}{17em}
\begin{equation}\label{eq:correct-diagram}
\begin{tikzcd}
& X \ar[dr, "{\theory{}{\gamma}}"] & \\
1 \ar[r, "{\hat{s}_0}"', tail] \ar[ur, "s_0", tail] &
S \ar[r, "{\theory{}{\hat{\gamma}}}"']
& Q \Psi
\end{tikzcd}
\end{equation}
\end{minipage}
\end{wrapfigure}
\noindent Our goal is to learn a pointed coalgebra which is correct w.r.t.\
all formulas. To this aim we ensure correctness w.r.t.\ an in\-crea\-sing sequence
of subformula closed collections $\Psi$.
\begin{definition}\label{def:correct-psi}
Let $(S,\Psi)$ be a table, and let
$(S,\hat{\gamma},\hat{s}_0)$ be a pointed $B$-coalgebra on $S$.
We say $(S,\hat{\gamma}, \hat{s}_0)$ is \emph{correct} w.r.t.\ $\Psi$ if Diagram~\eqref{eq:correct-diagram}
commutes.
\end{definition}
All conjectures constructed during the learning algorithm will be correct w.r.t.\ the
subformula closed collection $\Psi$ of formulas under consideration.
\begin{lemma}\label{lm:truth-lemma-tables}
Suppose $(S,\Psi)$ is closed, and $\hat{\gamma}$ is a conjecture.
Then $\theory{\Psi}{\gamma} \circ s = \theory{\Psi}{\hat{\gamma}} \colon S \rightarrow Q\Psi$.
If $\hat{s}_0 \colon 1 \rightarrow S$ satisfies
$s \circ \hat{s}_0 = s_0$ then $(S,\hat{\gamma},\hat{s}_0)$ is correct w.r.t.~$\Psi$.
\end{lemma}
We next define the crucial notion of \emph{counterexample}
to a pointed coalgebra: a subobject $\Psi'$ of $\Psi$ on which it is `incorrect'.
\begin{definition}
Let $(S,\Psi)$ be a table, and let $(S,\hat{\gamma},\hat{s}_0)$ be a pointed $B$-coalgebra on $S$.
Let $\Psi'$ be a subformula closed subobject of $\Phi$, such that
$\Psi$ is a subcoalgebra of $\Psi'$.
We say $\Psi'$ is a \emph{counterexample (for $(S,\hat{\gamma},\hat{s}_0)$, extending $\Psi$)} if
$(S,\hat{\gamma},\hat{s}_0)$ is \emph{not} correct w.r.t.\ $\Psi'$.
\end{definition}
The following elementary lemma states that if there are no more counterexamples
for a coalgebra, then it is correct w.r.t.\ the object $\Phi$ of all formulas.
\begin{lemma}\label{lm:no-more-counter}
Let $(S,\Psi)$ be a table, and
let $(S,\hat{\gamma},\hat{s}_0)$ be a pointed $B$-coalgebra on $S$.
Suppose that there are no counterexamples for
$(S,\hat{\gamma},\hat{s}_0)$ extending $\Psi$. Then
$(S,\hat{\gamma},\hat{s}_0)$ is correct w.r.t.\ $\Phi$.
\end{lemma}
The following describes, for a given table, how to extend it with the successors (in $X$)
of all states in $S$. As we will see below, by repeatedly applying this construction,
one eventually obtains a closed table.
\begin{definition}\label{def:closing}
Let $(S,\Psi)$ be a sharp table.
Let $(\overline{S},q,r)$ be the (strong epi, mono)-factorisation
of the map $\theory{}{\gamma} \circ (s \vee \Gamma(s))$, as in
the diagram:
$$
\begin{tikzcd}
S \vee \Gamma(S) \ar[r,"{s \vee \Gamma(s)}"] \ar[dr,"q"',twoheadrightarrow]
& X \ar[r,"{\theory{}{\gamma}}"]
& Q \Psi \\
& \overline{S} \ar[ur,"r"',tail]
&
\end{tikzcd}
$$
We define
$\mathsf{close}(S,\Psi) ~{:=}~ \{ \overline{s} \colon \overline{S} \rightarrowtail X \mid
\theory{}{\gamma} \circ \overline{s} = r, s \leq \overline{s} \leq s \vee \Gamma(s) \}
$.
For each $\overline{s} \in \mathsf{close}(S,\Psi)$ we have $s \leq \overline{s}$
and thus $s = \overline{s} \circ \kappa$ for some $\kappa\colon S \to \overline{S}$.
\end{definition}
\begin{lemma}\label{lem:connecting}
In Definition~\ref{def:closing}, for each $\overline{s} \in \mathsf{close}(S,\Psi)$,
we have $\kappa = q \circ \mathit{inl}$.
\end{lemma}
We will refer to $\kappa=q \circ \mathit{inl}$ as the connecting map
from $s$ to $\overline{s}$.
\begin{lemma}\label{lem:uglycondition}
In Definition~\ref{def:closing}, if there exists $q^{-1} \colon \overline{S} \rightarrow S \vee \Gamma(S)$
such that $q \circ q^{-1} = \mathsf{id}$ and $q^{-1} \circ q \circ \mathit{inl} = \mathit{inl}$, then
$\mathsf{close}(S,\Psi)$ is non-empty.
\end{lemma}
By our assumptions, the hypothesis of Lemma~\ref{lem:uglycondition} is satisfied (Remark~\ref{rem:assumption-disc}),
hence $\mathsf{close}(S,\Psi)$ is non-empty. It is precisely (and only) at this point that we need
the strong condition about existence of right inverses to epimorphisms.
\subsection{The algorithm}\label{sec:alg}
Having defined closedness, counterexamples and
a procedure for closing a table, we are ready to define the abstract algorithm.
In the algorithm, the teacher has access to
a function $\mathsf{counter}((S,\hat{\gamma},\hat{s}_0),\Psi)$,
which returns the set of all counterexamples (extending $\Psi$) for the conjecture $(S,\hat{\gamma},\hat{s}_0)$.
If this set is empty, the coalgebra $(S,\hat{\gamma},\hat{s}_0)$ is correct (see Lemma~\ref{lm:no-more-counter}),
otherwise the teacher picks one of its elements $\Psi'$.
We also make use of $\mathsf{close}(S,\Psi)$, as given in Definition~\ref{def:closing}.
\begin{algorithm}[H]
\caption{Abstract learning algorithm}\label{alg:main}
\begin{algorithmic}[1]
\State $(S \stackrel{s}{\rightarrowtail} X) \gets (1 \stackrel{s_0}{\rightarrowtail} X)$
\State $\hat{s}_0 \gets \mathsf{id}_1$
\State $\Psi \gets 0$
\While {\texttt{true}}
\While {$(S \stackrel{s}{\rightarrowtail} X,\Psi)$ is not closed} \label{ln:closing}
\State let $(\overline{S} \stackrel{\overline{s}}{\rightarrowtail} X)
\in \mathsf{close}(S,\Psi)$, with connecting map $\kappa \colon S \rightarrowtail \overline{S}$
\State $(S \stackrel{s}{\rightarrowtail} X) \gets
(\overline{S} \stackrel{\overline{s}}{\rightarrowtail} X)$
\State $\hat{s}_0 \gets \kappa \circ \hat{s}_0$
\EndWhile
\State let $(S,\hat{\gamma})$ be a conjecture for $(S,\Psi)$ \label{ln:conj}
\If {$\mathsf{counter}((S,\hat{\gamma},\hat{s}_0),\Psi) = \emptyset$}
\State \textbf{return} $(S,\hat{\gamma}, \hat{s}_0)$ \label{ln:ret}
\Else
\State $\Psi \gets \Psi'$ for some $\Psi' \in \mathsf{counter}((S,\hat{\gamma},\hat{s}_0),\Psi)$ \label{ln:new-ctr}
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
The algorithm takes as input the coalgebra $(X,\gamma,s_0)$ (which we fixed throughout this section).
In every iteration of the outside loop, the table is first closed by repeatedly applying the procedure in Definition~\ref{def:closing}. Then, if the conjecture corresponding to the closed table is correct,
the algorithm returns it (Line~\ref{ln:ret}). Otherwise, a counterexample is
chosen (Line~\ref{ln:new-ctr}), and the algorithm continues.
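For illustration only, the following Python sketch instantiates the abstract algorithm to deterministic automata, with the teacher's equivalence query approximated by a bounded brute-force search; all names and encodings are ours, and the counterexample handling (adding all suffixes of a failing word, keeping $\Psi$ suffix-closed) is one concrete choice among those the abstract algorithm permits:

```python
from itertools import product

def learn(accepts, delta, x0, alphabet, max_len=6):
    """Sketch of the abstract algorithm for B X = 2 x X^A on Set.
    S is a set of teacher states, psi a suffix-closed set of words."""
    def run(x, w):
        for a in w:
            x = delta[(x, a)]
        return x

    def row(x, psi):
        # theory map of x restricted to psi
        return frozenset(w for w in psi if accepts[run(x, w)])

    S, psi = {x0}, {""}
    while True:
        # inner loop: close the table by adding successors with new rows
        while True:
            rows = {row(x, psi) for x in S}
            new = [y for x in S for a in alphabet
                   if (y := delta[(x, a)]) not in S
                   and row(y, psi) not in rows]
            if not new:
                break
            S.add(new[0])
        # conjecture: one state per distinct row (sharpness)
        rep = {row(x, psi): x for x in S}

        def conj_accepts(w):
            x = rep[row(x0, psi)]
            for a in w:
                x = rep[row(delta[(x, a)], psi)]
            return "" in row(x, psi)

        # equivalence query, approximated by bounded search
        cex = next((''.join(w)
                    for n in range(max_len + 1)
                    for w in product(sorted(alphabet), repeat=n)
                    if conj_accepts(''.join(w))
                    != accepts[run(x0, ''.join(w))]),
                   None)
        if cex is None:
            return rep, psi
        psi = psi | {cex[i:] for i in range(len(cex) + 1)}

# Teacher: a non-minimal DFA for "words ending in a" over {a, b}.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 1, (2, 'b'): 0}
accepts = {0: False, 1: True, 2: True}
rep, psi = learn(accepts, delta, 0, {'a', 'b'})
```

Note that although the teacher has three states, the learned conjecture has only two rows: the output is minimal w.r.t.\ logical equivalence, as Theorem~\ref{thm:correctness} predicts.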
\subsection{Correctness and Termination}\label{sec:correctness-and-termination}
Correctness is stated in Theorem~\ref{thm:correctness}.
It relies on establishing loop invariants:
\begin{theorem}\label{thm:invariant}
The following is an invariant of both loops in Algorithm~\ref{alg:main}:
\begin{enumerate*}
\item $(S,\Psi)$ is sharp,
\item $s \circ \hat{s}_0 = s_0$, and
\item $s$ is $s_0$-prefix closed w.r.t.\ $\gamma$.
\end{enumerate*}
\end{theorem}
\begin{theorem}\label{thm:correctness}
If Algorithm~\ref{alg:main} terminates, then it returns a pointed coalgebra
$(S,\hat{\gamma},\hat{s}_0)$ which is minimal w.r.t.\ logical equivalence,
reachable and correct w.r.t.~$\Phi$.
\end{theorem}
In our termination arguments, we have to make an assumption about the coalgebra which
is to be learned. It does not need to be finite itself, but it should be finite
up to logical equivalence---in the case of deterministic automata, for instance,
this means the teacher has a (possibly infinite) automaton representing
a regular language. To speak about this precisely,
let $\Psi$ be a subobject of $\Phi$. We take a (strong epi, mono)-factorisation of the theory map, i.e.,
$
\theory{\Psi}{\gamma} = \left(
\xymatrix{X \ar@{->>}[r]^-{e_\Psi}
& \xvert{X}{\Psi}~ \ar@{>->}[r]^-{m_\Psi}
& Q\Psi
}
\right)
$ for some strong epi $e_\Psi$ and mono $m_\Psi$.
We call the object $\xvert{X}{\Psi}$ in the middle the \emph{$\Psi$-logical quotient}.
For the termination result (Theorem~\ref{thm:termination}), $\xvert{X}{\Phi}$
is assumed to have finitely many quotients and subobjects, which
just amounts to finiteness, in $\mathsf{Set}$.
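In $\mathsf{Set}$, the $\Psi$-logical quotient is simply the set of realised theories. A small sketch (names ours):

```python
def logical_quotient(X, theory):
    """Factorise the theory map X -> Q(psi) in Set: a surjection onto
    the set of realised theories, followed by an inclusion.  Returned
    as {theory value: block of states}, i.e. X up to logical
    equivalence."""
    blocks = {}
    for x in X:
        blocks.setdefault(theory(x), set()).add(x)
    return blocks

# Six states but only two realised theories: the quotient stays small
# even when X itself is large (or infinite, in principle).
q = logical_quotient(range(6), lambda x: x % 2 == 0)
```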
We start with termination of the inner while loop (Corollary~\ref{cor:term-inner}). This
relies on two results:
first, that once the connecting map $\kappa$ is an iso, the table is closed,
and second, that---under a suitable assumption on the coalgebra $(X,\gamma)$---during execution
of the inner while loop, the map $\kappa$ will eventually be an iso.
\begin{theorem}\label{thm:kappa-iso-closed}
Let $(S, \Psi)$ be a sharp table, let $\overline{S} \in \mathsf{close}(S,\Psi)$
and let $\kappa \colon S \rightarrow \overline{S}$ be the connecting map.
If $\kappa$ is an isomorphism, then $(S,\Psi)$ is closed.
\end{theorem}
\begin{lemma}\label{lm:some-kappa-iso}
Consider a sequence of sharp tables $(S_i \stackrel{s_i}{\rightarrowtail} X,\Psi)_{i \in \mathbb{N}}$
such that $s_{i+1} \in \mathsf{close}(S_i, \Psi)$ for all $i$. Moreover,
let $(\kappa_i \colon S_i \rightarrow S_{i+1})_{i \in \mathbb{N}}$ be the connecting maps (Definition~\ref{def:closing}).
If the logical quotient $\xvert{X}{\Phi}$ of $X$
has finitely many subobjects, then $\kappa_i$ is an isomorphism for some $i \in \mathbb{N}$.
\end{lemma}
\begin{corollary}\label{cor:term-inner}
If the $\Phi$-logical quotient $\xvert{X}{\Phi}$ has finitely many subobjects, then
the inner while loop of Algorithm~\ref{alg:main} terminates.
\end{corollary}
For the outer loop,
we assume that $\xvert{X}{\Phi}$ has finitely many quotients,
ensuring that every sequence of
counterexamples proposed by the teacher is finite.
\begin{theorem}\label{thm:termination}
If the $\Phi$-logical quotient $\xvert{X}{\Phi}$ has finitely many quotients
and finitely many subobjects,
then Algorithm~\ref{alg:main} terminates.
\end{theorem}
\section{Future Work}
\label{sec:fw}
We showed how duality plays a natural role in automata learning,
through the central connection between states and tests. Based on this foundation,
we proved correctness and termination of an abstract algorithm for coalgebra
learning. The generality is not so much in the base category (which, for the algorithm, we take to be $\mathsf{Set}$)
but rather in the functor used; we only require a few mild conditions on the functor,
and make no assumptions about its shape. The approach
is thus best described as \emph{coalgebra learning} rather
than automata learning.
Returning to automata, an interesting direction is to extend the present
work to cover learning of, e.g., non-deterministic or alternating automata~\cite{BolligHKL09,AngluinEF15}
for a regular language. This would require explicitly handling
branching in the type of coalgebra. One promising direction would be to incorporate
the forgetful logics of~\cite{KlinR16}, which are defined within the same
framework of coalgebraic logic as the current work.
It is not difficult to define in this setting what it means for a table
to be closed `up to the branching part', stating, e.g., that even though
the table is not closed, all the successors of rows are present as combinations of other rows.
Another approach would be to integrate monads into our framework, which are also
used to handle branching within the theory of coalgebras~\cite{JacobsSS15}. It is an intriguing question
whether the current approach, which allows moving beyond automata-like examples,
can be combined with the CALF framework~\cite{heer17:lear}, which is considerably more advanced in handling the branching occurring
in various kinds of automata.
\paragraph{Acknowledgments.} We are grateful to Joshua Moerman, Nick Bezhanishvili, Gerco van Heerdt, Aleks Kissinger and Stefan Milius for valuable
discussions and suggestions.
\begin{appendix}
\section{Proofs of Section~\ref{sec:base}}
\begin{proof}[Proof of Proposition~\ref{prop:existence-base}]
Let $f \colon X \rightarrow B(Y)$.
Consider the collection of all pairs of maps $g_k \colon X \rightarrow B(U_k)$,
$m_k \colon U_k \rightarrow Y$ such that $B(m_k) \circ g_k = f$ and $m_k$ is a monomorphism,
indexed by $k \in K$. Let
$
m \colon \bigwedge \{m_k\}_{k \in K} \rightarrow Y
$
be the intersection of all the $m_k$ -- this is a (small) set since $\Cat{C}$ is well-powered.
We abbreviate $\bigwedge \{m_k\}_{k \in K}$ by $I$.
Since $B$ preserves intersections, $B(m) \colon B(I) \rightarrow B(Y)$ is the intersection
of all the subobjects $B(m_k)$.
Now the $g_k$'s form a cone over the $B(m_k)$'s,
so we get a unique $g\colon X \rightarrow B(I)$ from the universal property of the pullback $B(I)$.
We claim that $(I,g,m)$ is the base of $f$. To see this,
first of all, note that $m$ is mono, and $B(m) \circ g = f$ by definition of $m$ and $g$.
Further, if there is any $g' \colon X \rightarrow B(U), m' \colon U \rightarrow Y$ with
$B(m') \circ g' = f$ and $m'$ monic
then it is (up to isomorphism) one of the $g_k$,$m_k$ pairs.
Hence, there is the map $x_k \colon I \rightarrow U_k$ in the limiting cone, i.e., $m_k \circ x_k = m$,
and we have $B(x_k) \circ g = g_k$.
Finally $x_k$ is unique among such maps,
since $B$ preserves monos (as it preserves intersections).
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lm:nat-base}]
By our assumption on $Z$ there exists a morphism
$g: S \to B Z$ such that $B m \circ g = f$.
Therefore we have $ B m' \circ B e \circ g = B h \circ f$
which shows that $m'$ is a candidate for the base of $B h \circ f$.
We still need to check the universal property of the base.
To this aim let $g':S \to B U$ and $n:U \to Y$ be the base
of $B h \circ f$:
\begin{equation*}
\begin{tikzcd}
S \ar[rr, "f"] \ar[dr, "g"] \ar[ddrr, "g'"', bend right = 20] & &
B X \ar[rr, "B h"] & & B Y \\
& B Z \ar[ur, "B m"] \ar[rr, "B e"] & & B W \ar[ur, "B m'"] \\
& & B U \ar[ur, "B j"] \ar[uurr, "B n"', bend right = 20] & &
\end{tikzcd}
\end{equation*}
By the universal property of the base there is a morphism
$j \colon U \rightarrow W$ making the lower right diagram commute.
Now, consider the following pullback:
\begin{equation*}
\begin{tikzcd}
P \ar[r, "p_n"] \ar[d, "p_h"'] \ar[dr, phantom, very near start,
"\lrcorner"] &
X \ar[d, "h"] \\
U \ar[r, "n"', tail] & Y
\end{tikzcd}
\end{equation*}
This is a preimage because $n$ is mono and by assumption on $B$
we have that this pullback is preserved under application of $B$.
The object $S$ forms a cone over this pullback diagram, via $f \colon S \rightarrow B X$ and
$g' \colon S \rightarrow B U$. So there exists a map $w$ from $S$ to the pullback $B P$.
\begin{equation*}
\begin{tikzcd}
S \ar[rr, "f"] \ar[dr, "g"] \ar[dddrr, "g'"', bend right = 35]
\ar[ddr, "w"', dashed] & &
B X \ar[rr, "B h"] & & B Y \\
& B Z \ar[ur, "B m"] \ar[rr, "B e"] \ar[d, "B k"', near start] & &
B W \ar[ur, "B m'"] \ar[ddl, "B d"', dashed, bend right = 15] \\
& B P \ar[uur, "B p_n"', near start] \ar[dr, "B p_h"] &
& & \\
& & B U \ar[uur, "B j"', bend right = 15]
\ar[uuurr, "B n"', bend right = 35] & &
\end{tikzcd}
\end{equation*}
Since $p_n$ is mono, $B p_n \circ w$ is a factorization of $f$ through a mono. Hence, by the universal
property of the base of $f$, we get an arrow from $B Z$ to $B P$, i.e., $k \colon Z \rightarrow P$ such that:
\begin{itemize}
\item[(i)] $B k \circ g = w$
\item[(ii)] $p_n \circ k = m$
\end{itemize}
Consider the following diagram. By the diagonal
filling property, there exists a unique $d \colon W \rightarrow U$
such that (i) $n \circ d = m'$ and (ii) $d \circ e = p_h \circ k$.
\begin{equation*}
\begin{tikzcd}
Z \ar[r, "e", twoheadrightarrow] \ar[d, "k"'] &
W \ar[dd, "m'"] \ar[ddl, "\exists!d", dashed] \\
P \ar[d, "p_h"'] & \\
U \ar[r, "n"', tail] & Y
\end{tikzcd}
\end{equation*}
By the universal property of the base we have $d \circ j =
\mathit{id}_U$. Moreover, $m' \circ j \circ d = n \circ d = m'$, and
because $m'$ is monic we have $j \circ d = \mathit{id}_W$.
\end{proof}
\begin{lemma}\label{lattice1}
If $\Cat{C}$ is complete and well-powered, then for each $X \in \Cat{C}$, the poset $\mathsf{Sub}(X)$ has arbitrary meets.
Consequently, $\mathsf{Sub}(X)$ is a complete lattice.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lattice1}]
Consider some $X \in \Cat{C}$ and an arbitrary family of subobjects $\{m_i \colon S_i \rightarrow X\}_{i \in I}$. Let $P$
be the pullback with pullback maps $\{p_i \colon P \rightarrow S_i\}_{i \in I}$. As the $m_i$ are mono, the
$p_i$ are mono as well, so we define $p := m_i \circ p_i \colon P \rightarrow X \in \mathsf{Sub}(X)$. Obviously, we have
$p \leq m_i$ for all $i \in I$, so $P$ is a lower bound. To see that $P$ is the greatest lower bound, consider an arbitrary
$P'$ with $p' \colon P' \rightarrow X$ that is a lower bound of the same family of subobjects. By definition of lower bound,
we have for each $i \in I$ a map $p_i' \colon P' \rightarrow S_i$ s.t. $m_i \circ p_i' = p'$. By the universal property of
the (wide) pullback, there exists a unique map $c \colon P' \rightarrow P$ s.t. $p \circ c = p'$, i.e.,
$p' \leq p$. As $P'$ was an arbitrary lower bound, this shows that $P$ is the greatest lower bound.
This finishes the proof of the fact that $\mathsf{Sub}(X)$ has arbitrary meets. Completeness of the lattice can now be proven
in a standard way by defining the join of an arbitrary collection of subobjects as the meet of all upper bounds of this collection.
\end{proof}
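To make the last step of the proof explicit, here is the standard definition of the join, written out in the notation above (it uses only the meets constructed in the proof):
\begin{equation*}
\bigvee_{i \in I} m_i \;:=\; \bigwedge \bigl\{\, u \in \mathsf{Sub}(X) \;\bigm|\; m_i \leq u \text{ for all } i \in I \,\bigr\},
\end{equation*}
which is indeed the least upper bound: every upper bound $u$ of the family occurs in the meet on the right, so the join lies below $u$.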
\begin{proof}[Proof of Lemma~\ref{lm:gamma-monotone}]
It is obvious that $\Gamma$ is well-defined. To check monotonicity, we consider the following diagram for subobjects $s:S \to X$ and $s':S' \to X$
such that $s \leq s'$.
\[
\xymatrix{S \ar@{-->}[dddd]_{j} \ar[rd] \ar[rr]^{s} & & X \ar[dd]^\gamma \\
& B \Gamma(S) \ar[rd]^{B \Gamma(s)} \ar@{-->}[dd]_{\exists h} & \\
& & B X \\
& B \Gamma(S') \ar[ru]_{B \Gamma(s')} & \\
S' \ar[ru] \ar[rr]_{s'} & & X \ar[uu]^\gamma }
\]
Here $j$ exists by the definition of $\leq$ and $h$ exists by the universal property of the base of $\gamma \circ s$.
Therefore we have $\Gamma(s) \leq \Gamma(s')$ as required.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:subcoalg-base}]
\begin{itemize}
\item[$\Rightarrow$] Consider the following diagram
\begin{equation*}
\begin{tikzcd}
S \ar[rr, "s"] \ar[dd, "\sigma"] \ar[dr, "e"] & & X \ar[d, "\gamma"] \\
& B \Gamma(S) \ar[r, "B \Gamma(s)"] \ar[dl,"B j"] & B X \\
B S \ar[urr, "B s", bend right] & & \\
\end{tikzcd}
\end{equation*}
As $s$ is a subcoalgebra there exists $\sigma \colon S \rightarrow B S$ s.t. the outer square commutes. By the universal
property of the base there exists $j \colon \Gamma(S) \rightarrow S$ s.t. $s \circ j = \Gamma(s)$. In other words,
$\Gamma(s) \leq s$ as required.
\item[$\Leftarrow$] By assumption there exists a $j: \Gamma(S) \to S$ such that
$s \circ j = \Gamma(s)$. We define a $B$-coalgebra structure on $S$ by putting $\sigma := j \circ e$. We have to show that the outer square in the above diagram
commutes, but this is easy
to show because the inner square commutes by definition of the base, the left triangle commutes by definition of
$\sigma$ and the right one by assumption on $j$.
\end{itemize}
Finally, that the collection of subcoalgebras of $(X,\gamma)$ forms a complete lattice is now a direct consequence of
the fact that the collection of pre-fixpoints of the monotone operator $\Gamma$ forms a complete lattice (Knaster-Tarski theorem).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:reach-base}]
Suppose $X \cong \mathsf{lfp}(\Gamma \vee x_0)$, and let $s \colon S \rightarrowtail X$
be a subcoalgebra together with an arrow $s_0 \colon 1 \to S$ with $x_0 = s \circ s_0$.
The latter implies that $x_0 \leq s$. Further, since $S$ is a subcoalgebra,
by Proposition~\ref{prop:subcoalg-base} we get $\Gamma(s) \leq s$.
Hence $\Gamma(s) \vee x_0 \leq s$, i.e., $s$ is a pre-fixed point of $\Gamma \vee x_0$.
By the Knaster-Tarski theorem (using that $\mathsf{Sub}(X)$ is a complete lattice),
$\mathsf{lfp}(\Gamma \vee x_0)$ is the least pre-fixed point, so it now suffices to prove that $s \leq \mathsf{lfp}(\Gamma \vee x_0)$.
But this follows easily, since $s$ is a subobject of $X$ and $X \cong \mathsf{lfp}(\Gamma \vee x_0)$.
Conversely, suppose $(X,\gamma,x_0)$ is reachable.
We have that $\Gamma(\mathsf{lfp}(\Gamma \vee x_0)) \leq
\Gamma(\mathsf{lfp}(\Gamma \vee x_0)) \vee x_0 = \mathsf{lfp}(\Gamma \vee x_0)$, so by Proposition~\ref{prop:subcoalg-base},
$\mathsf{lfp}(\Gamma \vee x_0)$ is a subcoalgebra of $(X, \gamma)$.
Moreover, we have
$x_0 \leq \Gamma(\mathsf{lfp}(\Gamma \vee x_0)) \vee x_0 = \mathsf{lfp}(\Gamma \vee x_0)$, so there
exists a map $s_0 \colon 1 \rightarrow \mathsf{lfp}(\Gamma \vee x_0)$ such that $x_0 = s \circ s_0$,
where $s \colon \mathsf{lfp}(\Gamma \vee x_0) \rightarrowtail X$ is the inclusion. Hence,
by definition of reachability, we get that $s$ is an isomorphism.
\qed
\end{proof}
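For reference, the instance of the Knaster-Tarski theorem invoked in the proof above is the standard statement for a monotone operator $F$ on a complete lattice (here $F = \Gamma \vee x_0$ on $\mathsf{Sub}(X)$):
\begin{equation*}
\mathsf{lfp}(F) \;=\; \bigwedge \{\, s \mid F(s) \leq s \,\},
\end{equation*}
i.e., the least fixed point exists and is the meet of all pre-fixed points; in particular it is below every pre-fixed point $s$.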
\section{Proofs of Section~\ref{sec:learning}}
\begin{proof}[Proof of Theorem~\ref{thm:conjecture}]
Given a table $(S,\Psi)$ that is closed, it is straightforward to construct
a conjecture $\hat{\gamma}$ as the composite of $g:S \to B \Gamma(S)$
and the arrow $B k: B\Gamma(S) \to B S$, where $g$ is part of the
base $(\Gamma(S),g,\Gamma(s))$ of $\gamma \circ s$ and $k: \Gamma(S) \to S$
is the morphism that exists by closedness of $(S,\Psi)$.
For the converse, consider a conjecture $(S, \hat{\gamma})$ for a sharp table $(S,\Psi)$,
let $(\Gamma(S),g,\Gamma(s))$ be the base of $\gamma \circ s$ and let $(h: \Gamma(S) \to Y, m:Y \to Q \Psi)$ be the factorisation of
$\theory{}{\gamma} \circ \Gamma(s)$.
By Lemma~\ref{lm:nat-base}, as $h$ is epi, we have that $(Y, Bh \circ g, m)$ is the base of $B\theory{}{\gamma} \circ \gamma \circ s$. The situation
is depicted in the (commuting) upper square of the diagram below.
\begin{equation}\label{eq:extended-base}
\begin{tikzcd}
S \ar[r,"s",tail] \ar[dr,"g"] \ar[ddr,"\hat{\gamma}"',bend right = 20]
& X \ar[r,"\gamma"]
& B{X} \ar[r,"{B\theory{}{\gamma}}"]
& BQ(\Psi)\\
& B(\Gamma(S)) \ar[r,"Bh"]
& B(Y) \ar[ur,"Bm"] \ar[dl,"Bj"]
& \\
& B{S} \ar[r,"Bs"']
& B{X} \ar[uur,"{B\theory{}{\gamma}}"',bend right = 20]
&
\end{tikzcd}
\end{equation}
The bigger outer square commutes by the fact that $\hat \gamma$ is assumed to be a conjecture.
As the table is sharp, the map $\theory{}{\gamma} \circ s$ is mono. Therefore the universal property of the base
yields existence of a morphism $j: Y \to S$ such that $\theory{}{\gamma} \circ s \circ j = m$.
We define $k \mathrel{:=} j \circ h$ and claim that this $k$ is a witness for $(S,\Psi)$, i.e., that $k$ makes the
relevant diagram from Definition~\ref{def:closed} commute. To see this, we calculate:
\[
\theory{}{\gamma} \circ \Gamma(s) = m \circ h = \theory{}{\gamma} \circ s \circ j \circ h = \theory{}{\gamma} \circ s \circ k .
\]
This finishes the proof.
\qed
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lm:truth-lemma-tables}]
The map $\theory{}{\hat{\gamma}}$ is, by definition, the unique map making the following diagram commute.
$$
\begin{tikzcd}
S \ar[rr, "{\theory{}{\hat{\gamma}}}"] \ar[d,"{\hat{\gamma}}"']
& & Q \Psi \\
B{S} \ar[r, "{B\theory{}{\hat{\gamma}}}"']
& B Q \Psi \ar[r, "{\rho^\flat_{\Psi}}"']
& Q L \Psi \ar[u,"d"']
\end{tikzcd}
$$
where $d \colon \Psi \rightarrow L\Psi$ is the coalgebra structure from subformula closedness
of $\Psi$.
Consider the following diagram:
\begin{equation*}
\begin{tikzcd}
S \ar[r, "s", tail] \ar[d, "\hat{\gamma}"'] &
X \ar[d, "\gamma"] \ar[rr,"{\theory{}{\gamma}}"]
& & Q \Psi \\
B{S} \ar[r, "Bs"] &
B{X} \ar[r, "{B\theory{}{\hat{\gamma}}}"'] &
B Q \Psi \ar[r, "{\rho^\flat_{\Psi}}"'] &
Q L \Psi \ar[u,"d"']
\end{tikzcd}
\end{equation*}
The rectangle on the right commutes by definition of $\theory{\Psi}{\gamma}$.
Together with $\hat{\gamma}$
being a conjecture, it follows that the outside of the diagram commutes.
Since $\theory{\Psi}{\hat{\gamma}}$ is the unique such map,
we have $\theory{\Psi}{\hat{\gamma}} = \theory{\Psi}{\gamma} \circ s$.
\qed
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lm:no-more-counter}]
If there is no counterexample,
then in particular $\Phi$ is not a counterexample.
The object $\Phi$ is a subformula-closed subobject of itself,
and $\Psi$ is a subcoalgebra of $\Phi$. Hence, by the definition of
counterexamples, it must be the case that $(S,\hat{\gamma},\hat{s}_0)$ is correct w.r.t.\ $\Phi$.
\qed
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:connecting}]
Let $\overline{s} \in \mathsf{close}(S,\Psi)$. We calculate:
\begin{eqnarray*}
\theory{}{\gamma} \circ \overline{s} \circ \kappa & = & \theory{}{\gamma} \circ s = \theory{}{\gamma} \circ (s \vee \Gamma(s)) \circ \mathit{inl}v = r \circ q \circ \mathit{inl}v \\
& = & \theory{}{\gamma} \circ \overline{s} \circ q \circ \mathit{inl}v
\end{eqnarray*}
which implies $\kappa = q \circ \mathit{inl}v$ as $r = \theory{}{\gamma} \circ \overline{s}$ is a mono.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:uglycondition}]
Given the assumption of the lemma we are able to define a morphism $\overline{s}:\overline{S} \to X$
by putting $\overline{s} \mathrel{:=} (s \vee \Gamma(s)) \circ q^{-1}$. Obviously $\overline{s}$ is a mono,
as it is a composition of monos. Furthermore, by definition, we have $\overline{s} \leq s \vee \Gamma(s)$.
To see that $s \leq \overline{s}$, we calculate
\[
\overline{s} \circ q \circ \mathit{inl}v = (s \vee \Gamma(s)) \circ q^{-1} \circ q \circ \mathit{inl}v = (s \vee \Gamma(s)) \circ \mathit{inl}v = s .
\]
Finally, the condition concerning the theory map also follows easily:
\[
\theory{}{\gamma} \circ \overline{s} = \theory{}{\gamma} \circ (s \vee \Gamma(s)) \circ q^{-1} = r \circ q \circ q^{-1} = r .
\]
This finishes the proof of the lemma.
\end{proof}
We need a few auxiliary lemmas in the proofs below.
\begin{lemma}\label{lm:close-pres-sharp}
If $(S,\Psi)$ is sharp, then $(\overline{s} \colon
\overline{S} \rightarrow X,\Psi)$ is sharp
for any $\overline{s} \in \mathsf{close}(S,\Psi)$.
\end{lemma}
\begin{proof}
This follows immediately from Definition~\ref{def:closing},
since $\theory{}{\gamma} \circ \overline{s} = r$, where $r$ is monic.
\qed
\end{proof}
\begin{lemma}\label{lm:counter-theory}
Let $\Psi$ and $\Psi'$ be subformula closed, with $\Psi$ a subcoalgebra
of $\Psi'$, witnessed by a mono $i \colon \Psi \rightarrowtail \Psi'$.
Then we have $Q i \circ \theory{\Psi'}{\gamma} = \theory{\Psi}{\gamma}$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lm:counter-theory}]
Let $\sigma' \colon \Psi' \rightarrow L\Psi'$ and $\sigma \colon \Psi \rightarrow L\Psi$ be
the coalgebra structures from subformula closedness of $\Psi'$ and $\Psi$ respectively.
Consider the following diagram.
$$
\begin{tikzcd}
X \ar[rr, "{\theory{\Psi'}{\gamma}}"] \ar[d,"\gamma"']
& & Q(\Psi') \ar[r, "{Q i}"]
& Q \Psi \\
B{X} \ar[r, "B{\theory{\Psi'}{\gamma}}"]
& B Q(\Psi') \ar[r, "{\rho^\flat_{\Psi'}}"] \ar[dr, "{BQ i}"']
& Q L(\Psi') \ar[u,"{\sigma'}"'] \ar[r, "{Q Li}"]
& Q L \Psi \ar[u,"\sigma"'] \\
& & B Q \Psi \ar[ur, "{\rho^\flat_{\Psi}}"']
\end{tikzcd}
$$
By definition, $\theory{\Psi'}{\gamma}$ is the unique map making the left rectangle
commute. The (right) square commutes by assumption that $i$ is a coalgebra
homomorphism, and the (lower) triangle by naturality.
Since $\theory{\Psi}{\gamma}$ is the unique map
such that $\theory{\Psi}{\gamma} = Q \sigma \circ \rho^\flat_{\Psi} \circ B\theory{\Psi}{\gamma} \circ \gamma$,
we have $\theory{\Psi}{\gamma} = Q i \circ \theory{\Psi'}{\gamma}$.
\qed
\end{proof}
\begin{lemma}\label{lm:counter-sharp}
Suppose $(S,\Psi)$ is sharp, and $\Psi'$ is a counterexample. Then
$(S,\Psi')$ is again sharp.
\end{lemma}
\begin{proof}
Let $i \colon \Psi \rightarrowtail \Psi'$ be the inclusion of the coalgebra $(\Psi,\sigma)$ into $(\Psi',\sigma')$.
By Lemma~\ref{lm:counter-theory}, we have
$Q i \circ \theory{\Psi'}{\gamma} = \theory{\Psi}{\gamma}$.
Hence $Q i \circ \theory{\Psi'}{\gamma} \circ s = \theory{\Psi}{\gamma} \circ s$,
and since $\theory{\Psi}{\gamma} \circ s$ is monic, it follows that
$\theory{\Psi'}{\gamma} \circ s$ is monic.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:invariant}]
To show that each of these is an invariant of both loops, it suffices
to prove that they hold once we enter the first iteration of the outer loop,
and that both loops preserve them (that they hold at the start of the
first iteration of each inner loop then follows).
\begin{enumerate}
\item
(Holds at entry of the outer loop.) At this point, $(S,\Psi) = (S_0, 0)$.
Since $Q$ is a right adjoint, it maps $0$ to the terminal object $Q 0 = 1$ of $\Cat{C}$.
Hence, the map from $S_0=1$ to $Q \Psi$ is of the form
\begin{tikzcd}
1 \ar[r, "s_0"] &
X \ar[r, "{\theory{}{\gamma}}"] &
Q \Psi = 1 \end{tikzcd}
which is an iso, so in particular monic.
(Preserved by the inner loop.) This follows from Lemma~\ref{lm:close-pres-sharp}.
(Preserved by the loop body.)
If $(S,\Psi)$ is sharp on entry of the body of the outer loop,
and the inner loop terminates, then $(S,\Psi)$
is again sharp at Line~\ref{ln:conj}. It only remains to show that if $\Psi'$ is
a counterexample (extending $\Psi$) for a conjecture for $(S,\Psi)$, then $(S,\Psi')$ is
sharp. This follows, in turn, from Lemma~\ref{lm:counter-sharp}.
\item (Holds at entry of the outer loop.) Follows immediately
from the first two lines of the algorithm.
(Preserved by the inner loop.) Suppose $\hat{s}_0 \circ s = s_0$,
and let $\overline{s} \in \mathsf{close}(S,\Psi)$. We need to prove that
$\overline{s}_0 \circ \kappa \circ \hat{s}_0 = s_0$ where $\kappa:S \to \overline{S}$
is the connecting map.
Indeed, we have $\overline{s}_0 \circ \kappa \circ \hat{s}_0 = s \circ \hat{s}_0 = s_0$,
by definition of $\kappa$ and assumption, respectively.
(Preserved by the outer loop.) This follows immediately from preservation by the inner loop.
\item Clearly the initial configuration $(S_0,0)$ is $s_0$-prefix closed.
Suppose now that $(S,\Psi)$ is a table with $s$ being $s_0$-prefix closed.
We need to check that any $\overline{s} \in \mathsf{close}(S,\Psi)$ is $s_0$-prefix closed as well.
By assumption on $(S,\Psi)$ we have
$s = \bigvee_{i=0}^n s_i$ for a suitable family of subobjects $s_0,\ldots,s_n$.
Let $\overline{s} \in \mathsf{close}(S,\Psi)$. Then by definition we have
$\overline{s} \leq s \vee \Gamma(s)$, so we put $s_{n+1} \mathrel{:=} \Gamma(s) \wedge \overline{s}$.
It is then easy to check that $\overline{s}$ is $s_0$-prefix closed:
\[ \bigvee_{i=0}^{n+1} s_i = \bigvee_{i=0}^n s_i \vee s_{n+1} = s \vee ( \Gamma(s) \wedge \overline{s}) = (s \vee \Gamma(s))\wedge (s\vee \overline{s}) = \overline{s}\]
where the last equality follows from $s \leq \overline{s} \leq s \vee \Gamma(s)$. By definition
we have $s_{n+1} \leq \Gamma(s) = \Gamma(\bigvee_{i=0}^n s_i)$ as required.
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:correctness}]
Minimality w.r.t.\ logical equivalence follows from the fact that sharpness of the table is
maintained throughout. As the algorithm has terminated,
there is no counterexample, which means by Lemma~\ref{lm:no-more-counter} that the coalgebra is correct w.r.t.\ $\Phi$.
For reachability we show that the pointed coalgebra
that is returned by the algorithm is reachable by showing that
{\em any} conjecture that is constructed during the run of the algorithm is reachable.
While running the algorithm we will only encounter conjectures that are built from
tables that are both sharp and closed.
Therefore we consider an arbitrary sharp and closed table $(S,\Psi)$ together with the conjecture $(S,\hat{\gamma})$ that exists
according to Theorem~\ref{thm:conjecture}. We are going to prove that $(S,\hat{\gamma},\hat{s_0})$ is reachable.
By Theorem~\ref{thm:invariant} we
know that $(S,\Psi)$ is $s_0$-prefix closed. This means that
$s = \bigvee_{i=0}^n s_i$ for suitable subobjects $s_0, \ldots, s_n \in \mathsf{Sub}(X)$.
Suppose now that $(\overline{S},\overline{\gamma},\overline{s_0})$ is
a subcoalgebra of $(S,\hat{\gamma},\hat{s_0})$ with inclusion $j:\overline{S} \to S$
such that $j \circ \overline{s_0} = \hat{s_0}$.
We prove by induction on $i$ that $s_i \leq \overline{s}$ for all $i \in \{0,\ldots,n\}$
and thus $s \leq \overline{s}$; this will imply $s = \overline{s}$ and thus, as $\overline{s}$ was
assumed to be an arbitrary (pointed) subcoalgebra, reachability of $(S,\hat{\gamma},\hat{s_0})$.
\\
{\noindent \em Case} $i=0$. Then $\overline{s} \circ \overline{s_0} = s \circ j \circ \overline{s_0} = s \circ \hat{s_0} = s_0$
and thus $s_0 \leq \overline{s}$ as required. \\
{\noindent \em Case} $i=j+1$. Then
\[
\theory{}{\gamma} \circ s_{j+1} \leq \theory{}{\gamma} \circ \Gamma(\bigvee_{i=0}^j s_i)
\stackrel{\mbox{\tiny I.H.}}{\leq} \theory{}{\gamma} \circ \Gamma(\overline{s})
\stackrel{\mbox{\tiny Thm.~\ref{thm:conjecture}}}{\leq}
\theory{}{\gamma} \circ \overline{s}
\]
where we slightly abuse notation by writing $f \leq g$ for arbitrary morphisms $f:X_1 \to Y$ and $g:X_2 \to Y$ if
there exists a morphism $m:X_1 \to X_2$ such that $g \circ m = f$.
The inequality implies that there is a map $k_{j+1}: S_{j+1} \to \overline{S}$ such that
$ \theory{}{\gamma} \circ \overline{s} \circ k_{j+1} = \theory{}{\gamma} \circ s_{j+1}$.
This implies
\begin{equation}\label{eq:smallermodtheory}
\theory{}{\gamma} \circ s \circ j \circ k_{j+1} = \theory{}{\gamma} \circ s_{j+1}
\end{equation}
On the other hand, we have $s \circ \mathrm{in}_{j+1} = s_{j+1}$, where $\mathrm{in}_{j+1}$
denotes the inclusion of $s_{j+1}$ into $s$.
Therefore we have $ \theory{}{\gamma} \circ s \circ \mathrm{in}_{j+1} = \theory{}{\gamma} \circ s_{j+1}$.
Together with~(\ref{eq:smallermodtheory}) this implies $ \theory{}{\gamma} \circ s \circ \mathrm{in}_{j+1} = \theory{}{\gamma} \circ s \circ j \circ k_{j+1}$.
By sharpness of the table $s$ we obtain $ \mathrm{in}_{j+1} = j \circ k_{j+1}$ and finally
$s_{j+1} = s \circ \mathrm{in}_{j+1} = s \circ j \circ k_{j+1} = \overline{s} \circ k_{j+1}$, which shows that
$s_{j+1} \leq \overline{s}$. This finishes the induction proof.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:kappa-iso-closed}]
Suppose $\kappa$ is an isomorphism. Consider the following diagram (using
the notation from Definition~\ref{def:closing}),
where $g \colon S \rightarrow B{X}$ is the map which forms
the base of $\gamma \circ s$ together with $\Gamma(s) \colon \Gamma(S) \rightarrow X$.
\begin{equation*}
\begin{tikzcd}
S \ar[d, "g"'] \ar[r,"s"] &
X \ar[r,"\gamma"] &
B{X} \ar[dddd, "B\theory{}{\gamma}"] \\
B(\Gamma(S)) \ar[d,"{B\mathit{inr}v}"'] \ar[urr,"B(\Gamma(s))"] \\
B(S \vee \Gamma(S)) \ar[d,"{Bq}"'] \ar[uurr,"{B (s \vee \Gamma(s))}"']\\
B(\overline{S}) \ar[d,"{B\kappa^{-1}}"'] \ar[drr,"Br"] \\
B{S} \ar[r, "Bs"'] &
B{X} \ar[r, "B\theory{}{\gamma}"'] &
BQ(\Psi)
\end{tikzcd}
\end{equation*}
The inner shapes commute, from top to bottom: (1) by definition of the base,
(2) by definition of $\mathit{inr}v$, (3) by definition of $(q,r)$; for the bottom triangle (4),
we have
$$
\theory{}{\gamma} \circ s \stackrel{\mbox{\tiny Def. of $\kappa$}}{=} \theory{}{\gamma} \circ \overline{s} \circ \kappa \stackrel{\mbox{\tiny $\overline{s} \in \mathsf{close}(S,\Psi)$}}{=}
r \circ \kappa
$$
which suffices since $\kappa$ is an iso. Since the entire diagram commutes,
the coalgebra structure on $S$ gives a conjecture for $(S,\Psi)$. Hence, by Theorem~\ref{thm:conjecture},
the table is closed. \qed
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lm:some-kappa-iso}]
First, observe that the $S_i$'s form an increasing chain of subobjects of $X$.
Since all these tables $(S_i,\Psi)$ are sharp, they give rise to an increasing chain of subobjects of
$Q(\Psi)$, by composition with $\theory{\Psi}{\gamma}$,
given by $\theory{\Psi}{\gamma} \circ s_i \colon S_i \rightarrow Q(\Psi)$.
By Lemma~\ref{lm:counter-theory},
it follows that each $\theory{\Phi}{\gamma} \circ s_i \colon S_i \rightarrow Q \Phi$
is monic, and we obtain a sequence of subobjects of $Q \Phi$:
$$
\begin{tikzcd}
S_0 \ar[r,"\kappa_0"] \ar[d,"s_0"]
& S_1 \ar[r,"\kappa_1"] \ar[d,"s_1"]
& S_2 \ar[r,"\kappa_2"] \ar[d,"s_2"]
& \ldots \\
X \ar[dr,"{\theory{\Phi}{\gamma}}"']
& X \ar[d,"{\theory{\Phi}{\gamma}}"]
& X \ar[dl,"{\theory{\Phi}{\gamma}}"]
& \ldots \\
& Q \Phi & &
\end{tikzcd}
$$
It follows that this induces a chain of subobjects of $\xvert{X}{\Phi}$:
$$
\begin{tikzcd}
S_0 \ar[r,"\kappa_0"] \ar[d,"s_0"]
& S_1 \ar[r,"\kappa_1"] \ar[d,"s_1"]
& S_2 \ar[r,"\kappa_2"] \ar[d,"s_2"]
& \ldots \\
X \ar[dr,"{e_\Phi}"',twoheadrightarrow]
& X \ar[d,"{e_\Phi}",twoheadrightarrow]
& X \ar[dl,"{e_\Phi}",twoheadrightarrow]
& \ldots \\
& \xvert{X}{\Phi} & &
\end{tikzcd}
$$
By assumption, $\xvert{X}{\Phi}$ has finitely many subobjects, so $\kappa_i$
must be an isomorphism for some $i$.
\qed
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:term-inner}]
The while loop computes a chain of subobjects of $X$ as in Lemma~\ref{lm:some-kappa-iso};
in particular, each of these forms a sharp table (with $\Psi$),
since sharpness is an invariant (Theorem~\ref{thm:invariant}).
Hence, after a finite number of iterations, $\kappa$ is an iso.
By Theorem~\ref{thm:kappa-iso-closed} this implies that $(S,\Psi)$ is closed,
which means the guard of the while loop is false.
\qed
\end{proof}
For termination of the outer loop, we need several auxiliary lemmas.
\begin{lemma}\label{lm:subform-coalg}
Let $(S,\Psi)$ be a table, and let $\Psi'$
be a subformula-closed subobject of $\Phi$, such that
$\Psi$ is a subcoalgebra of $\Psi'$.
Then there is a unique map $q$ making the following diagram commute:
\begin{equation}\label{eq:psi-quotient}
\begin{tikzcd}
X \ar[r, "{e_{\Psi'}}", twoheadrightarrow] \ar[dr, "{e_{\Psi}}"', bend right=20, twoheadrightarrow]
& \xvert{X}{\Psi'} \ar[r, "{m_{\Psi'}}", tail] \ar[d, "q", dashed, twoheadrightarrow]
& Q \Psi' \ar[d, "Q i"] \\
& \xvert{X}{\Psi} \ar[r, "{m_{\Psi}}"', tail]
& Q \Psi
\end{tikzcd}
\end{equation}
Moreover, this map $q$ is an epimorphism.
\end{lemma}
\begin{proof}
The outside of the diagram commutes by Lemma~\ref{lm:counter-theory}. The map
$q$ arises by the unique fill-in property. That $q$ is an epi follows
since $e_{\Psi}$ is an epi, and $e_{\Psi} = q \circ e_{\Psi'}$.
\qed
\end{proof}
\begin{lemma}\label{lm:counterexample-iso}
Let $(S \stackrel{s}{\rightarrowtail} X,\Psi)$ be a closed table, and $(S,\hat{\gamma}, \hat{s}_0)$
a pointed coalgebra, such that $(S,\hat{\gamma})$ is a conjecture
and $s \circ \hat{s}_0 = s_0$.
If $\Psi'$ is a counterexample for $(S,\hat{\gamma},\hat{s}_0)$, then
the map $q \colon \xvert{X}{\Psi'} \rightarrow \xvert{X}{\Psi}$
from Lemma~\ref{lm:subform-coalg} is \emph{not} an isomorphism.
\end{lemma}
\begin{proof}
Suppose that $q$ is an iso; we prove that, in that case,
$(S,\hat{\gamma},\hat{s}_0)$ is correct w.r.t.\
$\Psi'$.
Let $q^{-1}$ be the inverse of $q$. Since
$q \circ e_{\Psi'} = e_\Psi$ we also have $e_{\Psi'} = q^{-1} \circ e_\Psi$. Hence,
the two shapes on the lower right in the following diagram commute:
\begin{equation*}
\begin{tikzcd}
S \ar[r, "s", tail] \ar[d, "\hat{\gamma}"'] &
X \ar[r, "\gamma"] &
B{X} \ar[d, "Be_{\Psi}"'] \ar[ddr, "{Be_{\Psi'}}", bend left=20]
&
\\
B{S} \ar[r, "Bs"'] &
B{X} \ar[r, "Be_{\Psi}"] \ar[drr, "{Be_{\Psi'}}"', bend right=20] &
B(\xvert{X}{\Psi}) \ar[dr, "{Bq^{-1}}"'] &
\\
& & & B(\xvert{X}{\Psi'})
\end{tikzcd}
\end{equation*}
The rectangle commutes since $\hat{\gamma}$ is a conjecture for the
closed table $(S,\Psi)$. Since the entire diagram commutes,
it shows that $(S,\hat{\gamma})$ is a conjecture for the closed table
$(S,\Psi')$ as well.
Together with $s \circ \hat{s}_0 = s_0$, by Lemma~\ref{lm:truth-lemma-tables},
we obtain that $(S,\hat{\gamma},\hat{s}_0)$ is correct w.r.t.\ $\Psi'$.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:termination}]
The inner while loop terminates in each iteration
of the outer loop by Corollary~\ref{cor:term-inner}.
The outer loop generates a sequence
$\Psi_0, \Psi_1, \Psi_2, \ldots$ of subobjects, such that for each $i$,
there is a pointed coalgebra $(S_i, \hat{\gamma}, \hat{s}_0)$ such that
\begin{itemize}
\item $(S_i \stackrel{s_i}{\rightarrowtail} X, \Psi_i)$ is a closed table,
\item $(S_i, \hat{\gamma})$ is a conjecture for this table,
\item $s_i \circ \hat{s}_0 = s_0$, and
\item $\Psi_{i+1}$ is a counterexample for $(S_i,\hat{\gamma},\hat{s}_0)$.
\end{itemize}
We will show that such a sequence is necessarily finite.
By the last point and Lemma~\ref{lm:subform-coalg}, for each $i$,
there exists a map $q_{i+1,i}$ making the diagram on the left-hand side commute:
$$
\begin{tikzcd}
& X \ar[dl,"{e_{\Psi_i}}"'] \ar[dr,"{e_{\Psi_{i+1}}}"]
& \\
\xvert{X}{\Psi_i}
& & \xvert{X}{\Psi_{i+1}} \ar[ll,"q_{i+1,i}"']
\end{tikzcd}
\qquad
\begin{tikzcd}
& X \ar[dl,"{e_{\Psi_i}}"'] \ar[dr,"{e_{\Phi}}"]
& \\
\xvert{X}{\Psi_i}
& & \xvert{X}{\Phi} \ar[ll,"q_{i}"']
\end{tikzcd}
$$
Moreover, again by Lemma~\ref{lm:subform-coalg}, for each $i$,
there is a map $q_i$ making the diagram on the right-hand side above commute.
For each $i$, we have
$$
q_{i+1,i} \circ q_{i+1} \circ e_{\Phi}
= q_{i+1,i} \circ e_{\Psi_{i+1}}
= e_{\Psi_i}
= q_i \circ e_{\Phi}
$$
and since $e_{\Phi}$ is epic, we obtain $q_{i+1, i} \circ q_{i+1} = q_i$. Hence, we get
the following sequence of quotients:
$$
\begin{tikzcd}
& & \xvert{X}{\Phi} \ar[dll, "q_0"', twoheadrightarrow]
\ar[dl, "q_1", twoheadrightarrow]
\ar[d, "q_2", twoheadrightarrow]
\ar[dr, "{\ldots}", twoheadrightarrow]
\\
\xvert{X}{\Psi_0}
& \xvert{X}{\Psi_1} \ar[l, "q_{1,0}", twoheadrightarrow]
& \xvert{X}{\Psi_2} \ar[l, "q_{2,1}", twoheadrightarrow]
& \ldots \ar[l, "q_{3,2}", twoheadrightarrow]
\end{tikzcd}
$$
It follows from Lemma~\ref{lm:counterexample-iso} and the previous assumptions that none of the
quotients $q_{i+1,i}$ can be an iso.
But since for each $i$, $\xvert{X}{\Psi_i}$ is a quotient of $\xvert{X}{\Phi}$,
and the latter has only finitely many quotients, the sequence of
counterexamples must be finite.
\qed
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\begin{abstract}
Let $X$ be a projective variety of dimension $n$ and $L$ be a nef divisor on
$X$. Denote by $\epsilon_d(r;X,L)$ the $d$-dimensional Seshadri constant of
$r$ very general points in $X$. We prove that
$$\epsilon_d(rs;X,L)\ge \epsilon_d(r;X,L)\cdot
\epsilon_d(s;\PP^n,\OO_{\PP^n}(1)) \quad \text{ for } r,s\ge 1.$$
\end{abstract}
\title{An inequality between multipoint Seshadri constants}
\section{Introduction}
Let $L$ be a nef divisor on a projective variety $X$ of dimension $n$.
The Seshadri constant of dimension $d\le n$ at $r$ points $p_1,\ldots,p_r$ in
the smooth locus of $X$
is defined (see \cite{szemberg(01):global_local_pos_lin_bdl}) to be the real number
\begin{equation*}
\epsilon_d(p_1,\ldots,p_r;X,L) =\inf\left\{ \left( \dfrac{L^d\cdot Z}{\sum \operatorname{mult}_{p_i}Z} \right) ^{1/d}:
\begin{matrix}
Z \subset X \text{ effective cycle} \\
\text{of dimension }d
\end{matrix}
\right\}.
\end{equation*}
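As a sanity check for this definition (this example is not used in the sequel): for a single point $p \in \PP^n$ and $L = \OO_{\PP^n}(1)$, every irreducible $d$-dimensional subvariety $Z \subset \PP^n$ satisfies $\deg Z \ge \operatorname{mult}_p Z$, while a $d$-plane through $p$ attains equality, so
\begin{equation*}
\epsilon_d(1;\PP^n,\OO_{\PP^n}(1)) \;=\; \inf_Z \left( \frac{\deg Z}{\operatorname{mult}_p Z} \right)^{1/d} \;=\; 1 .
\end{equation*}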
By semicontinuity of multiplicities, if the points are in very general position then
the Seshadri constant does not depend on the actual points chosen
(Lemma~\ref{lem:semicontinuous}). Thus one can define
$$\epsilon_d(r;X,L) = \epsilon_d(p_1,\ldots,p_r;X,L)$$
where $p_1,\ldots,p_r$ is any collection of $r$ very general points in
$X$. We shall prove the following inequality comparing Seshadri
constants of very general points in $X$ and those in projective space.
\begin{thm}\label{thm:main}
Let $X$ be an $n$-dimensional projective variety, $L$ be a nef
divisor on $X$ and $r,s,d\ge 1$ be integers, with $d\le n$. Then
$$\epsilon_d(rs;X,L) \ge \epsilon_d(r;X,L) \cdot \epsilon_d(s;\PP^n,\OO_{\PP^n}(1)).$$
\end{thm}
\begin{rmk} When $d=1$ the Seshadri constant $\epsilon_1$ reduces to the usual Seshadri constant as introduced by Demailly \cite{demailly(92):sing_herm_metrics}, and in this case Theorem \ref{thm:main} is due to Biran \cite{biran(99):const_new_ample_divis_out_old_ones}. When $r=1$, $d=n-1$ the Theorem is due to Ro\'e \cite{roe(04):relat_between_one_point_multi}.
\end{rmk}
It is well known and not hard to see that
$\epsilon_d(r;X,L)\le\sqrt[n]{L^n/r}$
for $d=1$ and $d=n-1$ (see Remark~\ref{rmk:grow}). On the other hand,
explicit values or even lower bounds are in general
hard to compute. Thus Theorem \ref{thm:main} should be seen as a lower
bound on $\epsilon_d(rs;X,L)$, which will be useful if Seshadri
constants on projective space are known. As an example, let us recall
the most general form of a famous conjecture by Nagata:
\begin{conj}[Nagata-Biran-Szemberg, \cite{szemberg(01):global_local_pos_lin_bdl}]
Let $X$ be an $n$-dimensional projective variety, $L$ be a nef
divisor on $X$ and $1 \le d\le n$ be an integer. Then there is a
positive integer $r_0$ such that for every $r\ge r_0$,
$\epsilon_d(r;X,L) = \sqrt[n]{L^n/r}.$
\end{conj}
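For orientation, the classical case behind this conjecture is $X = \PP^2$, $L = \OO_{\PP^2}(1)$, $d=1$: in the notation above, Nagata's original conjecture (proved by Nagata himself when $r$ is a perfect square, and open in general) asserts
\begin{equation*}
\epsilon_1(r;\PP^2,\OO_{\PP^2}(1)) \;=\; \frac{1}{\sqrt{r}} \qquad \text{for all } r \ge 10 .
\end{equation*}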
\begin{rmk}\label{rmk:grow}
Already Demailly observed that $\epsilon_1(r;X,L)\le\epsilon_d(r;X,L)$ for all
$d=2, \dots, n$ (which combined with the obvious equality
$\epsilon_n(r;X,L)= \sqrt[n]{L^n/r}$ gives the upper bound
$\epsilon_1(r;X,L)\le\sqrt[n]{L^n/r}$). Standard arguments also show
that $\epsilon_{n-1}(r;X,L)\le\sqrt[n]{L^n/r}=\epsilon_n(r;X,L)$ and
that $\epsilon_1(r;X,L) = \sqrt[n]{L^n/r}$ implies
$\epsilon_d(r;X,L) = \sqrt[n]{L^n/r}$ for all $d=2, \dots, n$, so if
the conjecture above holds for $d=1$ then it holds for all $d$. In
view of these facts, it is tempting to ask if the inequalities
$\epsilon_{d_1}(r;X,L)\le\epsilon_{d_2}(r;X,L)$ hold for all $d_1\le d_2$.
\end{rmk}
In this context Theorem \ref{thm:main} implies, for instance,
that if (1) the Nagata-Biran-Szemberg conjecture is true for
$d$-dimensional Seshadri constants on $\PP^n$,
and (2) $\epsilon_d(1;X,L)=\sqrt[n]{L^n}$, then the
Nagata-Biran-Szemberg conjecture is true for the $d$-dimensional Seshadri
constants of $(X,L)$. See \cite{roe(04):relat_between_one_point_multi}
for more applications along this line.
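As a sanity check, taking $X=\PP^n$ and $L=\OO_{\PP^n}(1)$ in Theorem \ref{thm:main} gives a submultiplicativity-type bound which can be iterated; this kind of bootstrapping is closely related to Biran's argument in the case $d=1$, and is recorded here only as an illustration.

```latex
% X = P^n, L = O(1) in Theorem \ref{thm:main}:
\epsilon_d\bigl(rs;\PP^n,\OO_{\PP^n}(1)\bigr)\;\ge\;
\epsilon_d\bigl(r;\PP^n,\OO_{\PP^n}(1)\bigr)\cdot
\epsilon_d\bigl(s;\PP^n,\OO_{\PP^n}(1)\bigr);
% iterating k times:
\epsilon_d\bigl(r^{k};\PP^n,\OO_{\PP^n}(1)\bigr)\;\ge\;
\epsilon_d\bigl(r;\PP^n,\OO_{\PP^n}(1)\bigr)^{k}.
```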
The proof uses the degeneration to the normal cone of $r$ very general
points of $X$. That is, let $p_1,\ldots,p_r$ be very general
points in $X$, let $\X\to X\times\AA^1$ be the blowup of $X\times
\AA^1$ at the points $p_i\times \{0\}$ for $1\le i\le r$, and let $\pi\colon\X\to\AA^1$ be the induced morphism. The central
fibre of $\X$ over $0\in \AA^1$ is reducible, having one component
which is the blowup of $X$ at $p_1,\ldots,p_r$ (with exceptional
divisor $E_i\simeq \PP^{n-1}$ over $p_i$) and $r$ exceptional
components $F_i\simeq \PP^n$, with each $E_i$ glued to $F_i$ along a
hyperplane.
Regard a collection of $rs$ points in $X$ as coming from $r$
sub-collections, each consisting of $s$ points. Shrinking $\AA^1$ if
necessary, for each $1\le i\le r$ we can find $s$ sections of $\pi$
that at $0$ pass through very general points of $F_i\setminus E_i$. The inequality in
Theorem \ref{thm:main} then follows by showing that the existence of highly
singular cycles at $rs$ very general points of $X$ implies the existence of equally
singular cycles at the $rs$ chosen points of the central fiber,
that is, cycles with the same multiplicities at the $s$ chosen points of each $F_i$.
It is clear that this argument actually produces something stronger.
For instance we do not have to divide the collection of $rs$ points
evenly:
\begin{thm}\label{thm:main2}
Let $X$ be an $n$ dimensional projective variety and $L$ be a nef divisor on
$X$. Suppose $1\le s_1\le s_2\le \cdots \le s_r$ are integers and
set $s= \sum_{i=1}^r s_i$. Then
$$\epsilon_d\left(s;X,L\right) \ge \epsilon_d(r;X,L) \cdot \epsilon_d(s_r;\PP^n,\OO_{\PP^n}(1)).$$
\end{thm}
We can also consider weighted Seshadri constants as in
\cite{harbourne-roe(08):disc_beh_sesh_consts}, defined as follows.
In addition to the nef divisor $L$ on $X$ and $r$ points $p_1,\ldots,p_r$ in
the smooth locus of $X$, let a nonzero real vector
${\bf \ell}=(l_1,\ldots,l_r)\in \mathbb R_{+}^{r}$ be given. The Seshadri constant of dimension $d\le n$ at these points with weights ${\bf \ell}$
is defined to be the real number
$$\epsilon_d(l_1p_1, \dots, l_rp_r;X,L)
=\inf\left\{ \left( \dfrac{L^d\cdot Z}{\sum l_i\operatorname{mult}_{p_i}Z} \right) ^{1/d}:
\begin{matrix}
Z \subset X \text{ effective cycle} \\
\text{of dimension }d
\end{matrix}
\right\}.
$$
Again by semicontinuity, if the points are in very general position then
the weighted Seshadri constant does not depend on the actual points chosen
\eqref{lem:semicontinuous}. Thus one can define
$$\epsilon_d(r,{\bf \ell};X,L) = \epsilon_d(l_1p_1,\ldots,l_rp_r;X,L)$$
where $p_1,\ldots,p_r$ is any collection of $r$ very general points in
$X$.
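Two immediate consequences of the definition, recorded only as a consistency check: constant weights recover the unweighted constant, and rescaling all weights by $\lambda>0$ multiplies the denominator $\sum l_i\operatorname{mult}_{p_i}Z$ by $\lambda$, so the weighted constant is homogeneous of degree $-1/d$ in the weights.

```latex
% All weights equal to 1 recovers the unweighted constant:
\epsilon_d\bigl(r,(1,\ldots,1);X,L\bigr)=\epsilon_d(r;X,L);
% homogeneity in the weights, directly from the definition:
\epsilon_d\bigl(r,\lambda{\bf \ell};X,L\bigr)
  =\lambda^{-1/d}\,\epsilon_d\bigl(r,{\bf \ell};X,L\bigr)
\qquad(\lambda>0).
```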
\begin{thm}\label{thm:glueing}
Let $X$ be an $n$ dimensional projective variety, let $L$ be a nef
divisor on $X$, and let $r, d \ge 1$
be integers with $d \le n$. Suppose $1\le s_1\le s_2\le \cdots \le s_r$ are integers and
set $s= \sum_{i=1}^r s_i$. For each $i$ let ${\bf \ell}_i$ be a vector in $\mathbb R_{+}^{s_i}$ and set ${\bf \ell} = ({\bf \ell}_1, {\bf \ell}_2, \ldots,{\bf \ell}_r)\in \mathbb R_{+}^{s}$.
Then
$$\epsilon_d(s,{\bf \ell};X,L) \ge \epsilon_d(r;X,L) \cdot
\inf_{i=1,\ldots,r}\epsilon_d(s_i,{\bf \ell}_i;\PP^n,\OO_{\PP^n}(1)).$$
\end{thm}
\begin{rmk}
Since the $L$-degree and multiplicity at a point of a $d$-dimensional
scheme coincide with those of the associated $d$-dimensional cycle, the
Seshadri constant can be equivalently defined as
\begin{equation*}
\epsilon_d(p_1,\ldots,p_r;X,L) =\inf\left\{ \left( \dfrac{L^d\cdot Z}{\sum \operatorname{mult}_{p_i}Z} \right) ^{1/d}:
\begin{matrix}
Z \subset X \text{ closed subscheme} \\
\text{of dimension }d
\end{matrix}
\right\}.
\end{equation*}
For convenience, we shall work with this definition.
\end{rmk}
{\bf Conventions: }
By a variety
$X$ we mean a possibly reducible reduced scheme of finite type
over an algebraically closed uncountable field $K$.
If $X$ is a variety then a very general point $p\in X$ is a
point that lies outside a countable collection of proper subvarieties
of $X$. We say that a collection of $r$ points $p_1,\ldots,p_r$ is
very general if the point $(p_1,\ldots,p_r)\in X^r$ in the $r$-th
power of $X$ is very general. The uncountability hypothesis on the base
field guarantees that the set of very general points is nonempty and
dense; over a countable or finite field claims about very general points
are void and $\epsilon_d(r;X,L)$ is not well defined.
\section{Preliminaries}
\subsection{Semicontinuity of Seshadri constants}
Our proof of Theorem \ref{thm:main} will rely on the
semicontinuity property according to which Seshadri constants
can only ``jump down'' in families. When $d=1$ this semicontinuity
is well-known and comes from interpreting Seshadri constants in terms of
ampleness of certain line bundles on a blowup of $X$
\cite[Thm. 5.1.1]{lazarsfeld(04):posit_in_algeb_geomet}. To deal
with the general case $d\ge 1$, let $p\colon \X\to B$ be a projective
flat morphism, and for $b\in B$, denote $\X_b=p^{-1}(b)$.
We assume that $\X$ is a variety, $B$ is a reduced and
irreducible scheme, and all the fibres $\X_b$ are
(possibly reducible)
varieties. The following result, well known in singularity theory,
follows from the Hilbert-Samuel stratification
\cite{lejeune-jalabert-teissier(74):normal_cones_sheav_rel_jets}
(in particular, from the finiteness proved in
\cite[Theorem 4.15]{lejeune-jalabert-teissier(74):normal_cones_sheav_rel_jets}).
\begin{thm}\label{thm:multiplicity}
Let $\Y \subset \X$ be a closed subscheme, and let $m$ be a nonnegative integer.
The set of $y \in \Y$ such that $\Y_{p(y)}$ has multiplicity at least $m$ at $y$
is Zariski-closed in $\Y$.
\end{thm}
For our purposes, $\Y\rightarrow B$ will be a family of subschemes of dimension
$d$ in the family of (possibly reducible) varieties $\X\rightarrow B$. Since we are
interested in the multiplicities at several points at a time for the computation
of Seshadri constants, we need a multipoint analogue of Theorem \ref{thm:multiplicity}.
Denote by $\X_B^r:= \X \times_B \overset{r}{\cdots} \times_B \X \overset{p^r}{\rightarrow} B$
(respectively $\Y_B^r \rightarrow B$) the family whose fiber over $b$ is $(\X_b)^r$
(respectively $(\Y_b)^r$). The next Corollary is an immediate consequence of Theorem \eqref{thm:multiplicity}.
\begin{cor}\label{cor:multiplicities}
Let $\Y \subset \X$ be a closed subscheme, and let $m_1, \ldots, m_r$ be nonnegative integers.
The set of tuples
$(y_1, \ldots y_r) \in (\Y_b)^r$, $b \in B$,
such that $\Y_b$ has multiplicity at least $m_i$ at each $y_i$
is Zariski-closed in $\Y^r_B$.
\end{cor}
Given a closed subscheme $\Y \subset \X$ and a sequence $\m=(m_1, \ldots, m_r)$, denote by
$\mathcal{I}_\m(\Y)\subset \Y^r_B$ the closed set given by Corollary \ref{cor:multiplicities}.
Fix a relatively nef divisor $\LL$ on $\X$, and denote $\LL_b=\LL|_{\X_b}$.
Since the definition of multipoint Seshadri constants only makes sense
for distinct smooth points,
we will usually restrict to the complement of the diagonals and the
singularities of fibers in $\X^r_B$, which is Zariski open.
\begin{lem}\label{lem:semicontinuous}
Let $r$ be a positive integer, let $\oover{\X}_B^r$ be the complement of the
diagonals and the singularities in $\X^r_B$, and let a real number $\epsilon >0$
and a nonzero real vector ${\bf \ell}=(l_1,\cdots,l_r)$ be given with each $l_i\ge0$.
Then the set of tuples
$(p_1,\ldots,p_r)\in \oover{\X}_B^r$ such that
$$\epsilon_d(l_1p_1,\ldots,l_rp_r;\X_b,\LL_b)\le \epsilon$$ is the union of at most countably many
Zariski closed sets of $\oover{\X}_B^r$.
\end{lem}
\begin{proof}
Let $\operatorname{Hilb}_d(\X/B)$ denote the relative Hilbert scheme of subschemes of $\X$ of
(relative) dimension $d$.
It has countably many irreducible components, which are irreducible projective schemes over $B$
(see \cite{nitsure(05):constr_hilb_quot_sch}). Let $H$ be one of these components, and
let $\HH \subset \X\times_B H \rightarrow H$ be the corresponding universal family.
By the standard properties
of the Hilbert scheme, the intersection with $\LL_b$ is constant, i.e., $\LL_b^d\cdot \HH_{b,h}$
does not depend on $b\in B$, $h\in H$. Denote this number by $\LL^d \cdot H$.
For each choice of a sequence $\m=(m_1, \ldots, m_r)$ of $r$ nonnegative integers,
we apply Corollary \ref{cor:multiplicities} above to the universal family
$\HH \rightarrow H$ and get a Zariski closed set
$$\mathcal{I}_\m(\HH)\subset\HH_H^r \subset (\X\times_B H)_H^r=\X_B^r \times_B H.$$
Let $\mathcal{P}_\m(\HH)\subset \X^r_B$ be the image of $\mathcal{I}_\m(\HH)$ by projection
on the first factor.
Since $H$ is projective, $\mathcal{P}_\m(\HH)$ is Zariski closed, and
$\oover{\mathcal{P}}_\m(\HH):=\mathcal{P}_\m(\HH)\cap\oover{\X}_B^r$ is Zariski closed in
$\oover{\X}_B^r$. By the definition
of Seshadri constants, for all $(p_1,\dots, p_r)\in \oover{\mathcal{P}}_\m(\HH)$
with $p_i \in \X_b$, $\epsilon_d(l_1p_1,\dots, l_rp_r;\X_b,\LL_b)\le \left(\LL^d \cdot H/\sum l_im_i\right)^{1/d}$.
Since every $d$-dimensional subscheme is represented by a point in some component of the Hilbert scheme, it follows that
\begin{equation}
\label{eq:seshhilb}
\epsilon_d(l_1p_1,\dots, l_rp_r;\X_b,\LL_b)=\inf_{\{(\m,\HH)\,:\,(p_1,\dots,p_r) \in \oover{\mathcal{P}}_\m(\HH)\}}
\left\lbrace \left( \frac{\LL^d \cdot H}{\sum l_im_i} \right) ^{1/d}
\right\rbrace.
\end{equation}
Thus, for each $\epsilon \in \R$, the set of tuples $(p_1,\ldots,p_r)$ such that
$\epsilon_d(l_1p_1,\ldots,l_rp_r;\X_b,\LL_b)\le \epsilon$ is exactly
$$\bigcup_{\{(\m,\HH)\,:\, \LL^d \cdot H \le \epsilon^d \sum l_im_i\}} \oover{\mathcal{P}}_\m(\HH),$$
hence the claim.
\end{proof}
\begin{rmk}
\label{rmk:vgeneral}
From the previous Lemma \eqref{lem:semicontinuous} applied to $\X=X$, $B=\operatorname{Spec}\, K$,
for very general points $p_1,\ldots,p_r$ the Seshadri constant
$\epsilon_d(p_1,\ldots,p_r;X,L)$ and its weighted counterparts are independent of the
points chosen,
and thus $\epsilon_d(r;X,L)$ is well defined. Moreover, (\ref{eq:seshhilb})
shows that
$$\epsilon_d(r;X,L)=\inf_{\mathcal{P}_\m(\HH)=X^r}
\left\lbrace \left( \frac{L^d \cdot H}{\sum m_i} \right)
^{1/d}\right\rbrace.$$
\end{rmk}
\begin{rmk}
If $d=1$, then the infimum in (\ref{eq:seshhilb}) can be taken over
all $\m$ and all components $\HH$ of the Hilbert scheme of $X$
\emph{in all dimensions} (including, e.g., the isolated point
corresponding to the whole variety $X$). Doing so,
the Nakai-Moishezon criterion for $\R$-divisors
\cite{campana-peternell(90):algeb_ample_cone_projec_variet} implies
that the infimum is attained by some $\HH$ and $\m$ (see the proof
of \cite[Proposition 4]{steffens(98):rem_sesh_consts}, and also
\cite{debarre(04):seshad_const_abelian_variet},
\cite{bauer-szemberg(01):local_pos_prin_pol_ab_3folds}) hence
the set of values effectively taken by the Seshadri constant as
points vary is either finite or countable.
\end{rmk}
\begin{rmk}
For surfaces, a stronger version of Lemma \eqref{lem:semicontinuous} is known to hold: a finite number of Zariski closed sets (hence a single one) suffices, and moreover the set of values
effectively taken by the Seshadri constant is either finite or has exactly one accumulation
point which is $\sqrt{L^2/r}$. This has been proved by Oguiso \cite{oguiso(02):sesh_const_family_surf}
for $r=1$ and follows from Harbourne-Roé \cite{harbourne-roe(08):disc_beh_sesh_consts} for $r>1$. Unfortunately, the methods used to
prove such finiteness do not seem to extend to varieties of
higher dimension.
\end{rmk}
\begin{rmk}
For Seshadri constants at higher dimensional centers (i.e. measuring
the multiplicities at non-closed points) as defined by Paoletti in
\cite{paoletti(94):sesh_consts_gonality_spc_curv_stable_lin_bdl}
similar semicontinuity results hold; in this case, the Hilbert
scheme has to be used as parameter space in the place of the $r$th
product of the variety.
\end{rmk}
\section{Proofs of Theorems}
\subsection{Proof of Theorems \ref{thm:main} and \ref{thm:main2}}\label{sec:proofofmaintheorems}
We start with Theorem \ref{thm:main} and consider the degeneration to
the normal cone of very general points $p_1,\ldots, p_r$ in $X$. In
detail, set $B=\AA^1$ and let $\pi\colon \X\to X\times B$ be the blowup
at the points $\bar{p}_i=(p_i,0)\in X\times B$. The exceptional
divisor $F$ is a disjoint union $F=\bigcup_i F_i$, where
$F_i=\PP(T_{p_i}X\oplus K)\simeq\PP^n$ is the projective
completion of the tangent space $T_{p_i}X$ of $X$ at $p_i$. We denote
by $q_1,q_2$ the projections from $X\times B$ to the factors and let
$q=q_2\circ \pi\colon \X\to B$.
Set $\delta=\epsilon_d(r;X,L)$ and fix $1\le i\le r$. Replacing $B$ by an open neighbourhood of $0$ if
necessary, we can find sections $\sigma'_{i,j}\colon B\to X\times B$ of $q_2$, for $j=1,\ldots,s$,
which pass through $\bar{p}_i$ and have very general tangent direction
there. Denote by $\sigma_{i,j}$ the section obtained from the proper
transform of $\sigma'_{i,j}$ in $\X$. Then the collection
$\S_i=\{\sigma_{i,1}(0),\ldots,\sigma_{i,s}(0)\}$ is a set of $s$
very general points in $F_i\simeq \PP^n$.
Denote $\sigma':=(\sigma'_{1,1}, \dots, \sigma'_{r,s}):B \rightarrow X^{rs}$
and $\sigma:=(\sigma_{1,1}, \dots, \sigma_{r,s}):B \rightarrow \X_B^{rs}$.
Each component of the relative Hilbert scheme of $X\times B$ is of the
form $H'=Z \times B$ where $Z$ is a component of the Hilbert scheme of
$X$; for each such $H'$ there is a unique
component $H$ of the Hilbert scheme of $\X$ with
$H' \cap q_2^{-1}(B\setminus\{0\})=H \cap q^{-1}(B\setminus\{0\})$.
We retain the notation of the previous section, with
$\Z \rightarrow Z$, $\HH' \rightarrow H'$, $\HH \rightarrow H$ the universal Hilbert families on
$X$, $X \times B$ and $\X$ respectively.
Recycling notation from above, let $\mathcal{P}_\m(\Z)$ be the projection of
$\mathcal{I}_{\m}(\Z)$ to $X^{rs}$, and fix a sequence
$\m=(m_1, \ldots, m_{rs})$ of $rs$
nonnegative integers such that $\mathcal{P}_\m(\Z)=X^{rs}$. For
convenience, set also $m_{i,j}=m_{(i-1)s+j}$ for $i=1, \ldots, r$,
$j=1, \ldots, s$. Then
$\mathcal{P}_\m(\HH')=(X\times B)_B^{rs}=X^{rs} \times B$ and $\mathcal{P}_\m(\HH)=\X_B^{rs}$.
Consider the following pullback diagram:
$$
\xymatrix{
\mathcal{I}_\m(\HH')\times_{\mathcal{P}_\m(\HH')} B \ar[d]^{\tau_1} \ar[r]^{\tau_2}
& B\ar[d]^{\sigma'\times i}\\
\mathcal{I}_\m(\HH') \ar@{->>}[r] & \mathcal{P}_\m(\HH')=X^{rs} \times B}
$$
The map $\tau_2$ is onto and proper, so (restricting to a smaller neighbourhood of $0$
if needed) we may choose a section of it; denote by $\zeta:B \rightarrow Z$ the
composition of this section with the natural maps
$$\mathcal{I}_\m(\HH')\times_{\mathcal{P}_\m(\HH')} B
\overset{\tau_1}{\rightarrow} \mathcal{I}_\m(\HH')\hookrightarrow {\HH'}_{H'}^{rs} \hookrightarrow X^{rs}\times H'
\rightarrow H'=Z \times B \rightarrow Z.$$
Let $\Y'\rightarrow B$ be the family obtained from the universal family $\Z \rightarrow Z$
by base change through $\zeta$. By construction, each fiber $\Y_b'$ with $b\ne 0$
is a subscheme of $X$ of dimension $d$ with
a point of multiplicity $\ge m_{i,j}$ at $\sigma_{i,j}(b)$.
Consider the strict transform $\Y$ of $\Y'$ in $\X$. By flatness, $\Y_0$ has dimension $d$, and by
semicontinuity of multiplicities (\ref{cor:multiplicities}) it has
a point of multiplicity $\ge m_{i,j}$ at $\sigma_{i,j}(0)$. Therefore for every
$i$ such that some $m_{i,j}>0$, $\Y_0 \cap F_i$ is a $d$-dimensional subscheme of
$F_i \cong \PP^n$ of
degree at least $m_i:=(\epsilon_d(s;\PP^n,\OO_{\PP^n}(1)))^d \sum_j m_{i,j}$. Since this degree
is exactly the multiplicity of $\Y'_0$ at $p_i$, it follows that
$$L^d \cdot Z = L^d \cdot \Y'_0 \ge \delta^d\sum_{i=1}^r m_i =
\delta^d (\epsilon_d(s;\PP^n,\OO_{\PP^n}(1)))^d \sum_{k=1}^{rs} m_{k}.$$
Since this is true whenever $\mathcal{P}_\m(\Z)=X^{rs}$, in view of Remark \ref{rmk:vgeneral}
the claimed bound on $\epsilon_d(rs;X,L)$ follows.
The proof of Theorem \ref{thm:main2} follows easily: if $s_1\le s_2\le \cdots \le s_r$ then $s=\sum_{i=1}^r s_i \le rs_r$, and
$$\epsilon_d(s;X,L) \ge \epsilon_d(rs_r;X,L)\ge \epsilon_d(r;X,L)\cdot
\epsilon_d(s_r;\PP^n,\OO_{\PP^n}(1)).\qed $$
\subsection{Proof of Theorem
\ref{thm:glueing}}\label{sec:proofofthemglueing}
Essentially, the proof of Theorem \ref{thm:main} works also in this
more general setting. We proved the particular case first for clarity,
and next sketch the changes needed for Theorem \ref{thm:glueing}.
Just as above let $\X\to B$ be the degeneration to the normal cone of
very general points $p_1,\ldots, p_r$ in $X$, with exceptional divisors $\PP^n$.
Also just as above, by shrinking $B$ if necessary, for every component $Z$ of
$\operatorname{Hilb}_d(X)$ and every $\m=(m_1,\dots,m_{s})$ with $\mathcal{P}_\m(\Z)=X^{s}$
there exist schemes in $\operatorname{Hilb}_d(X)$ with multiplicities at least
$$m_i:=(\epsilon_d(s_i,{\bf \ell}_i;\PP^n,\OO_{\PP^n}(1)))^d
\sum_{j=s_1+\cdots+s_{i-1}+1}^{s_1+\cdots+s_{i}} m_j$$
at general points $p_i$, $i=1, \ldots, r$. From this the result
follows exactly as in the previous case. \qed
{\small \noindent {\tt [email protected]}} \newline
\noindent DPMMS, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA. UNITED KINGDOM.
{\small \noindent {\tt [email protected]}} \newline
\noindent Departament de Matemàtiques, Universitat Autònoma de
Barcelona. 08193 Bellaterra (Barcelona). SPAIN. \\
\end{document}
\begin{document}
\title{The Chow Motive of a Locally Trivial Fibration and Murre's conjectures}
\author{Carlos Pompeyo-Guti\'errez \\ [email protected] \\ Divisi\'on Acad\'emica de Ciencias B\'asicas \\ Universidad Ju\'arez Aut\'onoma de Tabasco}
\date{February 2015}
\maketitle
\begin{abstract}
Gillet and Soul\'e have shown in \cite{GIL96} that, for a fibration $\pi: Y \to X$ with fibre $Z$, locally trivial in the Zariski topology, we have a decomposition
\[ [Y] = [X] \cdot [Z], \]
where $[\cdot]$ denotes a class in the Grothendieck group $K_{0}(\cl{M}_{Rat}(k))$ associated to the category of (pure effective) Chow motives $\cl{M}_{Rat}(k)$ for a field $k$.
By assuming some additional properties for the fibre $Z$, we construct an explicit isomorphism $h(Y) \cong h(X) \otimes h(Z)$ in the category $\cl{M}_{Rat}(k)$, and we use it to prove, for this type of fibration, some conjectures discussed by Murre in \cite{MUR93}.
\end{abstract}
\section{Introduction.}
In \cite{GIL96} Gillet and Soul\'e define, for any quasi-projective variety $X$ defined over a field $k$ of characteristic zero, a class $[X] \in K_{0}(\cl{M}_{Rat}(k))$ characterized by the following properties:
\begin{enumerate}
\item If $X$ is a smooth projective variety, then $[X]=[h(X)]$, where $h(X)=(X, \Delta_{X})$ denotes the Grothendieck motive under rational equivalence associated to $X$.
\item If $W \subset X$ is a closed subvariety in $X$,
\[ [X]=[W]+[X-W] \ . \]
\end{enumerate}
As a consequence, for a fibration $\pi: Y \to X$ with fiber $Z$, locally trivial for the Zariski topology of $X$, we have that
\[ [Y]=[X]\cdot[Z] \]
in $K_{0}(\cl{M}_{Rat}(k))$. Since $[X \times Z]=[X] \cdot [Z]$ in $K_{0}(\cl{M}_{Rat}(k))$, this implies that
\[ h(Y) \oplus M \cong \left( h(X) \otimes h(Z) \right) \oplus M \]
for some $M \in \ob(\cl{M}_{Rat}(k))$. Unfortunately, the cancellation law is not valid in general for the category $\cl{M}_{Rat}(k)$, as there are examples of fields $k$ for which it fails; see, for example, Remark 2.8 in \cite{CPSZ06}.
The main result of this work is the following:
\begin{theorem} \label{teo12}
Let $\pi : Y \to X$ be a locally trivial fibration with $\pi$ being a proper morphism and with fibres isomorphic to a fixed variety $Z$ having a Chow stratification. Then we have
\[ h(Y) \cong h(X) \otimes h(Z). \]
\end{theorem}
In Theorem \ref{teo12} we give an explicit isomorphism, closely related to the geometry of the fibre. After that, we discuss some conjectures proposed by Murre in \cite{MUR93}, namely we prove:
\begin{theorem} \label{thmCK}
Let $\pi:Y \to X$ be a locally trivial fibration as in Theorem \ref{teo12}. Furthermore, suppose $X$ has a Chow-K\"unneth decomposition $\pi_{0}(X), \dots,$ $\pi_{2 \dim(X)}(X)$ satisfying that
\[ \pi_{0}(X), \dots, \pi_{j-1}(X) , \pi_{2j+1}(X) , \dots , \pi_{2 \dim(X)}(X) \]
act as zero on $CH^{j}(X)$ ($0 \leq j \leq 2\dim(X)$). Then $Y$ has a Chow-K\"unneth decomposition $\pi_{0}(Y), \dots, \pi_{2 \dim(Y)}(Y)$ and for each $j$,
\[ \pi_{0}(Y), \dots, \pi_{j-1}(Y), \pi_{2j+1}(Y), \dots, \pi_{2 \dim(Y)}(Y) \]
act as zero on $CH^{j}(Y)$. $_{\square}$
\end{theorem}
As examples of the previously mentioned fibrations we have the flag bundles associated to a given vector bundle $E$ over a variety $X$ and, more generally, any locally trivial fibration with fibres isomorphic to a quotient of a linear algebraic group by a parabolic subgroup. Our results hold even for varieties defined over a field of positive characteristic or over a field that is not algebraically closed.
\section{The Chow ring of a locally trivial fibration.}
In this section we calculate the Chow ring of certain locally trivial fibrations. The results given here are based on Proposition 1 in \cite{EDI97} and Lemma 2.8 in \cite{ELL89}, and we include them for the sake of completeness.
We will consider fibrations with fibres isomorphic to a given variety $Z$ satisfying the following properties.
\begin{definition} \label{chowpair}
Let $Z$ be a smooth projective variety with dimension $n$. We say that $Z$ satisfies the \textbf{Chow pairing conditions} if for each $p$ such that $0 \le p \le n$ we can find cycle classes $\tau_{p,1},...,\tau_{p,m_{p}} \in CH^{p}(Z)$ such that
\begin{enumerate}
\item $CH^{n}(Z) \cong \db{Z} \tau_{n,1}$ (where $\tau_{n,1}$ denotes the class of a point in $Z$).
\item For $p < n$, $CH^{p}(Z)$ is a free $\db{Z}$-module with finite rank
\[ CH^{p}(Z) \cong \bigoplus_{i=1}^{m_{p}} \db{Z} \tau_{p,i}. \]
\item For each $p < n$, we can give a perfect pairing
\[ CH^{p}(Z) \times CH^{n-p}(Z) \to CH^{n}(Z) \cong \db{Z}\tau_{n,1} \]
satisfying
\[ \tau_{p,i} \cap \tau_{n-p,j} = \left\lbrace \begin{array}{ccc} \tau_{n,1} & if & i=j \\ 0 & if & i \neq j. \end{array} \right. \]
\end{enumerate}
\end{definition}
\begin{remark} Observe that both $CH^{p}(Z)$ and $CH^{n-p}(Z)$ have the same rank, in particular, $m_{0}=m_{n}=1$.
\end{remark}
\begin{definition} \label{chowstrat}
Let $Z$ be a smooth projective variety. We say that $Z$ has a \textbf{Chow stratification} if
\begin{enumerate}
\item $Z$ has a cellular decomposition
\[ Z=Z_{d} \supset Z_{d-1} \supset \cdots \supset Z_{0} \supset Z_{-1}= \emptyset \]
by closed subvarieties such that each $Z_{i}-Z_{i-1}$ is a disjoint union of schemes $U_{i,j}$ isomorphic to affine spaces $\db{A}^{d_{i,j}}$.
\item $Z$ satisfies the Chow pairing conditions by taking the cycles appearing in the previous definition as $\tau_{i,j}:=\overline{U}_{i,j}$.
\end{enumerate}
\end{definition}
\begin{remark} (See Example 1.9.1 in \cite[p. 23]{FUL98}.) If a variety $Z$ has a cellular decomposition then $CH^{\ast}(Z)$ is finitely generated as a $\db{Z}$-module by the cycle classes $\overline{U}_{i,j}$; therefore the variety $Z$ has a Chow stratification if it satisfies conditions (1) and (3) in the definition of the Chow pairing conditions.
\end{remark}
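The basic example to keep in mind is $Z=\db{P}^{n}$; the verification below against the two definitions is standard and is included only for illustration.

```latex
% Z = P^n.  Its Chow ring is CH^*(P^n) = Z[h]/(h^{n+1}), where h is
% the hyperplane class, so one may take m_p = 1 and tau_{p,1} = h^p:
CH^{p}(\db{P}^{n})\cong \db{Z}\,h^{p}, \qquad
h^{p}\cap h^{\,n-p}=h^{n}=\tau_{n,1}\ \text{(the class of a point)},
% hence the pairing of Definition \ref{chowpair} is perfect.  The
% flag of linear subspaces P^0 \subset P^1 \subset ... \subset P^n
% gives a cellular decomposition with affine open strata, so P^n has
% a Chow stratification in the sense of Definition \ref{chowstrat}.
```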
It will prove to be useful to establish the following convention.
\textbf{Convention.} For the rest of the paper, each time we say that $\pi: Y \to X$ is a locally trivial fibration, we will be assuming that $\pi$ is a locally trivial fibration with $\pi$ being a proper, flat, smooth morphism and with fibres isomorphic to a fixed variety $Z$ satisfying the Chow pairing conditions, unless otherwise stated.
Now, consider a locally trivial fibration $\pi:Y \to X$ and let $U \subset X$ be an open subset for which $\pi$ becomes trivial, and set $W:=X \setminus U$. Let $i:U \to X$, $j:W \to X$, $\imath : Y|_{U} \to Y$ and $\jmath : Y|_{W} \to Y$ be the inclusions, and denote by $\pi_{U}$ (resp. $\pi_{W}$) the restriction of $\pi$ to $Y|_{U}$ (resp. $Y|_{W}$). Let $\eta:Y|_{U} \to Z$ be the morphism induced by the projection of $U \times Z$ on the second factor.
Then for each $p$ we have the following diagram
\begin{align} \label{dia0301}
\begin{array}{c}
\xymatrix{ & & CH^{p}(Z) \ar@{>}[d]^{\eta^{\ast}} & \\
CH^{p-m}(Y|_{W}) \ar@{>}[r]^{\jmath_{\ast}} \ar@{>}[d]^{\pi_{W \ast}} & CH^{p}(Y) \ar@{>}[r]^{\imath^{\ast}} \ar@{>}[d]^{\pi_{\ast}} & CH^{p}(Y|_{U}) \ar@{>}[r] \ar@{>}[d]^{\pi_{U \ast}} & 0 \\
CH^{p-n-m}(W) \ar@{>}[r]_{j_{\ast}} & CH^{p-n}(X) \ar@{>}[r]_{i^{\ast}} & CH^{p-n}(U) \ar@{>}[r] & 0 }
\end{array}
\end{align}
where $m$ denotes the codimension of $W$ in $X$; the rows are exact by Proposition $1.8$ in \cite{FUL98}, the left square commutes by the functoriality of the push-forward and the right square commutes by Proposition $1.7$ in \cite{FUL98}.
Using this diagram, define elements $T_{p,i} \in CH^{p}(Y)$ such that $\imath^{\ast} T_{p,i} = \eta^{\ast} \tau_{p,i}$; such elements exist since $\imath^{\ast}$ is surjective by the exactness of the top row. We have the following Theorem.
\begin{theorem} (Duality Theorem). \label{teo0301}
Let $\pi:Y \to X$ be a fibration with fibre $Z$ satisfying the Chow pairing conditions. Then for any $p$, $q$ satisfying $p+q \leq n$ and any $\alpha \in CH^{\ast}(X)$ we have:
\[ \pi_{\ast}(\pi^{\ast}(\alpha) \cap T_{p,i} \cap T_{q,j}) = \left\lbrace \begin{array}{cl} \alpha & if \ (q,j)=(n-p,i) \\ 0 & otherwise \end{array} \right. \]
\end{theorem}
\proof By the projection formula
\[ \pi_{\ast}(\pi^{\ast}(\alpha) \cap T_{p,i} \cap T_{q,j})= \alpha \cap \pi_{\ast}(T_{p,i} \cap T_{q,j}), \]
so it will be enough to calculate $\pi_{\ast}(T_{p,i} \cap T_{q,j})$. Observe that
\[ \pi_{\ast}(T_{p,i} \cap T_{q,j}) \in CH^{p+q-n}(X), \]
and therefore $\pi_{\ast}(T_{p,i} \cap T_{q,j})=0$ if $p+q<n$. From now on we will suppose $q=n-p$. In this case, by looking at (\ref{dia0301}) we see that $CH^{0}(X) \cong CH^{0}(U)$.
Since the right square in (\ref{dia0301}) commutes then
\[ i^{\ast}\pi_{\ast}(T_{p,i} \cap T_{q,j})= \pi_{U \ast} \imath^{\ast}(T_{p,i} \cap T_{q,j}) \ . \]
But then
\[ \imath^{\ast}(T_{p,i} \cap T_{q,j})=\imath^{\ast}(T_{p,i}) \cap \imath^{\ast}(T_{q,j})=\eta^{\ast}(\tau_{p,i}) \cap \eta^{\ast}(\tau_{q,j}) = \eta^{\ast} (\tau_{p,i} \cap \tau_{q,j}) \ . \]
Now, if $j=i$ then $\tau_{p,i} \cap \tau_{q,j}=e$ and then
\[ \imath^{\ast}(T_{p,i} \cap T_{q,j})= \eta^{\ast}(e)=1_{U} \times e , \]
where $e$ is the class of a point in $Z$ and therefore we have
\[ i^{\ast}\pi_{\ast}(T_{p,i} \cap T_{q,j})=\pi_{U \ast}(1_{U} \times e)=1_{U} \ . \]
So, since $i^{\ast}$ is injective on $CH^{0}(X)$, we have $\pi_{\ast}(T_{p,i} \cap T_{q,j})=1_{X}$ if $j=i$, and therefore in this case
\[ \pi_{\ast}(\pi^{\ast}(\alpha) \cap T_{p,i} \cap T_{q,j})= \alpha \ . \]
Now suppose $j \neq i$, so we have $\tau_{p,i} \cap \tau_{q,j}=0$ and then
\[ \imath^{\ast}(T_{p,i} \cap T_{q,j})= \eta^{\ast}(0)=0 , \]
consequently,
\[ i^{\ast}\pi_{\ast}(T_{p,i} \cap T_{q,j})= \pi_{U \ast} \imath^{\ast}(T_{p,i} \cap T_{q,j})=0 \]
and since $i^{\ast}$ is injective on $CH^{0}(X)$
\[ \pi_{\ast}(T_{p,i} \cap T_{q,j})=0 \]
from which we conclude
\[ \pi_{\ast}(\pi^{\ast}(\alpha) \cap T_{p,i} \cap T_{q,j})=0 \]
for $j\neq i$. $_{\square}$
\textbf{Convention.} Since any element $\alpha_{i,j} \otimes n_{i,j} \tau_{i,j} \in \displaystyle CH^{p-i}(X) \otimes \db{Z} \tau_{i,j}$ can be rewritten as
\[ \alpha_{i,j} \otimes n_{i,j} \tau_{i,j} = n_{i,j}\alpha_{i,j} \otimes \tau_{i,j} = \beta_{i,j} \otimes \tau_{i,j} \ , \]
from now on we will simply denote such an element as $\alpha_{i,j} \otimes \tau_{i,j}$. $_{\square}$
As a consequence of the Duality Theorem we have the following.
\begin{corollary} \label{cor0301}
Let $\pi: Y \to X$ be a locally trivial fibration as before. Then
\[ \begin{array}{rrcl}
\varphi: & \displaystyle \bigoplus_{i=0}^{p} \bigoplus_{j=1}^{m_{i}} \left( CH^{p-i}(X) \otimes \db{Z} \tau_{i,j} \right) & \to & CH^{p}(Y) \\
& \displaystyle (\alpha_{i,j} \otimes \tau_{i,j})_{i,j} & \mapsto & \displaystyle \sum_{i=0}^{p} \sum_{j=1}^{m_{i}} \pi^{\ast}( \alpha_{i,j}) \cap T_{i,j}
\end{array} \]
is injective.
\end{corollary}
\proof Let $(\alpha_{i,j} \otimes \tau_{i,j})_{i,j} \in \ker \varphi$, so it satisfies the equation
\[ \sum \pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} = 0. \]
Suppose we have $(\alpha_{i,j} \otimes \tau_{i,j})_{i,j} \neq 0$ and let $(k,l)$ be the (lexicographically) greatest index such that $\alpha_{k,l} \otimes \tau_{k,l} \neq 0$. Multiplying the last equality by $T_{n-k,l}$ and then applying $\pi_{\ast}$ we obtain
\[ 0= \pi_{\ast} \left( \sum_{i,j} \pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} \cap T_{n-k,l} \right) = \sum_{i,j} \pi_{\ast} (\pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} \cap T_{n-k,l})=\alpha_{k,l} \]
which is absurd. Therefore $\ker \varphi =0$ and $\varphi$ is injective. $_{\square}$
It is worth noticing that the group:
\[ \bigoplus_{i=0}^{p} \bigoplus_{j=1}^{m_{i}} \left( CH^{p-i}(X) \otimes \db{Z} \tau_{i,j} \right) \]
is isomorphic to the $p$-graded part of the graded ring $CH^{\ast}(X) \otimes CH^{\ast}(Z)$. In this way, Corollary \ref{cor0301} can be restated as follows.
\begin{corollary} \label{cor0302}
Let $\pi : Y \to X$ be a fibration as in Corollary \ref{cor0301}. Then $CH^{\ast}(X) \otimes CH^{\ast}(Z)$ is a $CH^{\ast}(X)$-submodule of $CH^{\ast}(Y)$. $_{\square}$
\end{corollary}
Now we center our attention on deciding when the morphism defined in Corollary \ref{cor0301} is surjective. In order to answer this question we require the following.
\begin{lemma} \label{lem0301}
If $Z$ has a Chow stratification, then for any variety $X$ we have
\[ CH^{\ast}(X \times Z) \cong CH^{\ast}(X) \otimes CH^{\ast}(Z). \]
\end{lemma}
\proof Notice that, in this case, the morphism from Corollary \ref{cor0301} can be written as
\[ \begin{array}{rrcl}
\varphi: & \displaystyle \bigoplus_{i=0}^{p} CH^{p-i}(X) \otimes CH^{i}(Z) & \to & CH^{p}(X \times Z) \\
& \displaystyle (\alpha_{i} \otimes \beta_{i})_{i} & \mapsto & \displaystyle \sum_{i=0}^{p} \pi_{X}^{\ast}(\alpha_{i}) \cap \pi_{Z}^{\ast}(\beta_{i})
\end{array} \]
where $\pi_{X}, \ \pi_{Z}$ are the projections from $X \times Z$ to $X$ and $Z$ respectively, and therefore this morphism is injective. Moreover we have the equality
\[ \sum_{i} \pi_{X}^{\ast}(\alpha_{i}) \cap \pi_{Z}^{\ast}(\beta_{i}) = \sum_{i} \alpha_{i} \times \beta_{i} \ . \]
So, by comparing with Example 1.10.2 from \cite[p. 25]{FUL98} we obtain the surjectivity.
$_{\square}$
To conclude this Section, we provide the following Theorem.
\begin{theorem} \label{teo11}
Let $\pi : Y \to X$ be a locally trivial fibration with $\pi$ being a proper morphism and with fibres isomorphic to a fixed variety $Z$ having a Chow stratification. Then we have an isomorphism of $CH^{\ast}(X)$-modules
\[ CH^{\ast}(Y) \cong CH^{\ast}(X) \otimes CH^{\ast}(Z). \]
\end{theorem}
\textbf{Proof.} At this point we only have to show that the morphism defined in Corollary \ref{cor0301} is surjective. In order to do this, we proceed by induction on the dimension of the base space $X$.
For $\dim X=0$ the result is trivial.
Now, suppose $\dim X >0$, let $U \subset X$ be an open set such that $Y$ becomes trivial on $U$, and set $W=X \setminus U$, $m=\codim_{X}(W)$. We have a diagram
\begin{align} \label{dia0302}
\begin{array}{c}
\xymatrix{ \displaystyle \bigoplus_{i=0}^{p-m} \bigoplus_{j=1}^{m_{i}} CH^{p-m-i}(W) \otimes \db{Z} \tau_{i,j} \ar@{>}[r]^<<<<{g'} \ar@{>}[d]_{h'} & CH^{p-m}(Y|_{W}) \ar@{>}[r] \ar@{>}[d]^{f'} & 0 \\ \displaystyle \bigoplus_{i=0}^{p} \bigoplus_{j=1}^{m_{i}} CH^{p-i}(X) \otimes \db{Z} \tau_{i,j} \ar@{>}[r]^<<<<<<{\varphi} \ar@{>}[d]_{h} & CH^{p}(Y) \ar@{>}[d]^{f} & \\ \displaystyle \bigoplus_{i=0}^{p} CH^{p-i}(U) \otimes CH^{i}(Z) \ar@{>}[r]^<<<<<<{g} \ar@{>}[d] & CH^{p}(U \times Z ) \ar@{>}[d] \ar@{>}[r] & 0 \\ 0 & 0 & }
\end{array}
\end{align}
where the first row is exact by the induction hypothesis since $\dim W < \dim X$, the last row is given by Lemma \ref{lem0301}, and the left column is obtained factor by factor by tensoring the corresponding exact sequences obtained from Proposition $1.8$ in \cite{FUL98}.
Pick an element $\beta \in CH^{p}(Y)$ and set $\alpha_{1}:=f(\beta)$. Choose elements $\alpha_{2}, \alpha_{3}$ satisfying $g(\alpha_{2})=\alpha_{1}$ and $h(\alpha_{3})=\alpha_{2}$. Then
\[ f(\varphi(\alpha_{3}))=g(h(\alpha_{3}))=g(\alpha_{2})=\alpha_{1}=f(\beta) \]
and therefore $\beta - \varphi(\alpha_{3}) \in \Ker f = \Img f'$, so we can write $\beta- \varphi(\alpha_{3})=f'(\alpha_{4})$ for some $\alpha_{4}$. Finally, choose $\alpha_{5}$ satisfying $g'(\alpha_{5})=\alpha_{4}$. Then
\[ \varphi(\alpha_{3}+h'(\alpha_{5}))=\varphi(\alpha_{3})+f'(g'(\alpha_{5}))=\beta \]
and we have that $\varphi$ is surjective. $_{\square}$
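A classical instance of Theorem \ref{teo11}, going back to \cite{MAN68}, is the projective bundle: if $E$ is a vector bundle of rank $n+1$ on $X$, then $\pi: \mathbb{P}(E) \to X$ is a proper locally trivial fibration whose fibres are isomorphic to $\mathbb{P}^{n}$, and $\mathbb{P}^{n}$ has a Chow stratification by affine spaces; the theorem then recovers the additive projective bundle formula
\[ CH^{\ast}(\mathbb{P}(E)) \cong CH^{\ast}(X) \otimes CH^{\ast}(\mathbb{P}^{n}) \cong \bigoplus_{i=0}^{n} CH^{\ast-i}(X). \]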
\section{Defining the projectors.}
Throughout this section we use the notations and conventions established by Manin in \cite{MAN68}. We define pairwise orthogonal projectors for a locally trivial fibration as the one described in Section 2. In order to do this, we follow the ideas of Manin \cite{MAN68}, who calculated the Chow motive of the projective bundle associated to a given vector bundle, and of K\"ock \cite{KCK91}, who generalized Manin's construction to Grassmannian bundles.
We will define correspondences $p_{i,j} \in \ho_{CV(k)}(Y,Y)$ using the isomorphism described in Corollary \ref{cor0301}. In order to do this, consider the auxiliary sets
\[ W_{i,j}= \{ (i,l) | j < l \leq m_{i} \} \cup \{ (k,l) | k>i, \ 1 \leq l \leq m_{k} \}, \]
and define the correspondences $p_{i,j}$ by a downward induction, starting with
\[ p_{n,m_{n}}=c_{T_{n,m_{n}}} \circ c(\pi) \circ c(\pi)^{t} \circ c_{T_{0,m_{n}}} \]
and in the general case by writing
\begin{equation} \label{ecp}
p_{i,j}=c_{T_{i,j}} \circ c(\pi) \circ c(\pi)^{t} \circ c_{T_{n-i,j}} \circ \left( \Delta_{Y}-\sum_{(k,l) \in W_{i,j}} p_{k,l} \right) \ .
\end{equation}
The following lemma will be used to prove some properties of the correspondences just defined.
\begin{lemma} \label{le0401}
$\displaystyle (p_{i,j})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right)= \left\{ \begin{array}{ccl} \pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} & if & i \leq p \\ 0 & if & i>p \ . \end{array}\right.$
\end{lemma}
\proof We will use a downward induction. First, observe that
$\begin{array}{cl}
& \displaystyle (p_{n,m_{n}})_{e} \left(\sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right) \\
= & \displaystyle m_{T_{n,m_{n}}} \left(\pi^{\ast} \left(\pi_{\ast} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \cap T_{0,m_{n}} \right) \right) \right) \\
= & \displaystyle \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} m_{T_{n,m_{n}}} \left(\pi^{\ast} \left(\pi_{\ast} \left( \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \cap T_{0,m_{n}} \right) \right) \right)
\end{array}$
Now, by Theorem \ref{teo0301}, the last expression becomes
\[ m_{T_{n,m_{n}}} (\pi^{\ast}(\alpha_{n,m_{n}}) )= \pi^{\ast}(\alpha_{n,m_{n}})\cap T_{n,m_{n}}, \]
so we have verified the Lemma in this case.
Now, in order to clarify the general case, consider the sets
\[ M_{i,j}:=\{ (k,l) \ | \ k<i \} \cup \{ (i,l) \ | \ l \leq j \}. \]
Then, by applying the induction hypothesis:
$ \begin{array}{cl} & \displaystyle \left((\Delta_{Y})_{e}-\sum_{(k,l) \in W_{i,j}} (p_{k,l})_{e} \right) \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right) \\ \ \\
= & \displaystyle \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s}-\sum_{(k,l) \in W_{i,j}} \pi^{\ast}(\alpha_{k,l}) \cap T_{k,l} \\ \ \\
= & \displaystyle \sum_{(r,s) \in M_{i,j}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \\ \end{array} $
and so
$ \begin{array}{cl} & \displaystyle (p_{i,j})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right) \\
\ \\
= & \displaystyle m_{T_{i,j}} \left( \pi^{\ast} \left( \pi_{\ast} \left( \sum_{(r,s) \in M_{i,j}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \cap T_{n-i,j} \right) \right) \right) \\
\ \\
= & \displaystyle \sum_{(r,s) \in M_{i,j}} m_{T_{i,j}} \left( \pi^{\ast} \left( \pi_{\ast} \left( \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \cap T_{n-i,j} \right) \right) \right) . \\ \end{array} $
If $(r,s) \in M_{i,j}$ then $r \leq i$, and therefore $r+n-i \leq n$; applying Theorem \ref{teo0301} we obtain
\[ \pi_{\ast} \left( \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \cap T_{n-i,j} \right)=\left\lbrace \begin{array}{cl} \alpha_{i,j} & if \ (r,s)=(i,j) \\ 0 & otherwise \end{array} \right. \]
In this way we can write
\[ \displaystyle (p_{i,j})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right)=m_{T_{i,j}}(\pi^{\ast}(\alpha_{i,j}))=\pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} \]
as desired. $_{\square}$
We will need the following two lemmas, whose proofs can be found in \cite{MAN68}.
\begin{lemma} \label{funcident}
For any morphism of varieties $\varphi:X \to Y$, any $T \in \ob(V(k))$ and any element $\alpha \in CH^{\ast}(X)$ we have
\begin{description}
\item[a)] $c(\varphi)_{T}=(id_{T} \times \varphi)^{\ast}$,
\item[b)] $c(\varphi)_{T}^{t}=(id_{T} \times \varphi)_{\ast}$,
\item[c)] $(c_{\alpha})_{T}=m_{1_{T} \times \alpha}$. $_{\square}$
\end{description}
\end{lemma}
\begin{lemma} \label{Yonconseq}
Let $\mathcal{D}$ be a diagram of objects and morphisms from the category $CV(k)$. Furthermore, let $I$ be
\[ I = \sum_{i=1}^{r} a_{i} f_{i}, \]
where $a_{i} \in \db{Z}$ and $f_{i}$ are some correspondences between the objects of the diagram $\mathcal{D}$. For $T \in \ob(V(k))$, let $I_{T}$ be
\[ I_{T} = \sum_{i=1}^{r} a_{i} (f_{i})_{T}. \]
Then $I=0$ if and only if $I_{T}=0$ for all $T \in \ob(V(k))$. $_{\square}$
\end{lemma}
An immediate consequence of Lemmas \ref{funcident} and \ref{Yonconseq} is Manin's Identity Principle, which we state in what follows.
Suppose we have a diagram $\mathcal{D}$ of objects and morphisms of the category $V(k)$, and let $J$ be
\[ J = \sum_{i=1}^{r} a_{i}F_{i}, \]
where $a_{i} \in \db{Z}$ and every homomorphism $F_{i}$ is a composition of a finite number of homomorphisms of the form $\varphi^{\ast}$, $\varphi_{\ast}$, $m_{\alpha}$ for $\alpha \in C(X)$, $X \in \ob(\mathcal{D})$, $\varphi \in \mor(\mathcal{D})$.
For any $T \in \ob(V(k))$ we denote by $T \times J$ the identity obtained from $J$ by changing all the objects $X$ by $T \times X$, all the morphisms $\varphi$ by $id_{T} \times \varphi$ and all the morphisms $m_{\alpha}$ by $m_{1_{T} \times \alpha}$.
In a similar way, denote by $c(J)$ the identity obtained from $J$ by changing all the morphisms $\varphi^{\ast}$ by $c(\varphi)$, all the morphisms $\varphi_{\ast}$ by $c(\varphi)^t$ and all the morphisms $m_{\alpha}$ by $c_{\alpha}$.
\begin{theorem} \label{Manin} \textbf{Manin's Identity Principle} (\cite[p.450]{MAN68}).
Let $J$ be as before. The following two assertions are equivalent.
\begin{description}
\item[a)] $T \times J=0$ for all $T \in \ob(V(k))$.
\item[b)] $c(J)=0$. $_{\square}$
\end{description}
\end{theorem}
The correspondences $p_{i,j}$ have the following properties.
\begin{theorem} \label{teo0401}
Let $p_{i,j}$ be the correspondences defined before. Then we have the following:
\begin{enumerate}
\item The correspondences $p_{i,j}$ are of degree zero.
\item $ \displaystyle \sum_{i,j} p_{i,j} = \Delta_{Y} $
\item $ \displaystyle p_{i,j} \circ p_{k,l}= \delta_{(i,j)}^{(k,l)} p_{i,j} $
\end{enumerate}
\end{theorem}
\proof The first assertion is clear from the definition of the correspondences $p_{i,j}$. In order to prove the remaining assertions we will use Manin's Identity Principle.
Let us make precise what has to be proved. Define morphisms $\rho_{i,j}:CH^{\ast}(Y) \to CH^{\ast}(Y)$ inductively by:
\[ \rho_{i,j}:=m_{T_{i,j}} \circ \pi^{\ast} \circ \pi_{\ast} \circ m_{T_{n-i,j}} \circ \left( id_{CH^{\ast}(Y)} - \sum_{(k,l) \in W_{i,j}} \rho_{k,l} \right), \]
and for a variety $T$, denote by $\rho_{i,j}^{T}$ the morphism:
\[ \rho_{i,j}^{T}:=m_{1_{T} \times T_{i,j}} \circ (id_{T} \times \pi)^{\ast} \circ (id_{T} \times \pi)_{\ast} \circ m_{1_{T} \times T_{n-i,j}} \circ \left( id_{CH^{\ast}(T \times Y)} - \sum_{(k,l) \in W_{i,j}} \rho_{k,l}^{T} \right) \]
Manin's Identity Principle asserts that identities $2$ and $3$ of this theorem hold if and only if the identities
\begin{equation} \label{ec0403}
\sum_{i,j} \rho_{i,j}^{T} = id_{CH^{\ast}(T \times Y)}
\end{equation}
and
\begin{equation} \label{ec0404}
\rho_{i,j}^{T} \circ \rho_{k,l}^{T}= \delta_{(i,j)}^{(k,l)} \rho_{i,j}^{T}
\end{equation}
hold for every variety $T$.
Now if
\[ p_{i,j}^{T}=c_{1_{T} \times T_{i,j}} \circ c(id_{T} \times \pi) \circ c(id_{T} \times \pi)^{t} \circ c_{1_{T} \times T_{n-i,j}} \circ \left( \Delta_{T \times Y}-\sum_{(k,l) \in W_{i,j}} p_{k,l}^{T} \right) \]
then $(p_{i,j}^{T})_{e}=\rho_{i,j}^{T}$ and we can rewrite (\ref{ec0403}) and (\ref{ec0404}) as
\begin{equation} \label{ec0405}
\begin{array}{c}
\displaystyle \sum_{i,j} (p_{i,j}^{T})_{e} = (\Delta_{T \times Y})_{e} \ , \\
\ \\
(p_{i,j}^{T})_{e} \circ (p_{k,l}^{T})_{e}= \delta_{(i,j)}^{(k,l)} (p_{i,j}^{T})_{e}
\end{array}
\end{equation}
Now, by Lemma \ref{le0401}
\[ \displaystyle \sum_{i=0}^{n} \sum_{j=1}^{m_{i}} (p_{i,j})_{e}\left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right) = \sum_{i=0}^{p} \sum_{j=1}^{m_{i}} \pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} \]
so we have that $\displaystyle \sum_{i,j}(p_{i,j})_{e}|_{CH^{p}(Y)}=id_{CH^{p}(Y)}$, and therefore
\begin{equation} \label{ec0406}
\sum_{i,j}(p_{i,j})_{e}=id_{CH^{\ast}(Y)}=(\Delta_{Y})_{e} \ .
\end{equation}
On the other hand,
\[ (p_{i,j})_{e} \circ (p_{k,l})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right) = (p_{i,j})_{e} \left( \pi^{\ast}(\alpha_{k,l}) \cap T_{k,l} \right) \]
but
\[ (p_{i,j})_{e} \left( \pi^{\ast}(\alpha_{k,l}) \cap T_{k,l} \right) = \left\lbrace \begin{array}{ccl} \pi^{\ast}(\alpha_{i,j}) \cap T_{i,j} & if & (i,j)=(k,l) \\ 0 & if & (i,j) \ne (k,l) \end{array} \right. \]
therefore
\begin{equation} \label{ec0407}
(p_{i,j})_{e} \circ (p_{k,l})_{e}= \delta_{(i,j)}^{(k,l)} (p_{i,j})_{e} \ .
\end{equation}
But both (\ref{ec0406}) and (\ref{ec0407}) hold for any locally trivial fibration $\pi:Y \to X$ satisfying the hypotheses of Theorem \ref{teo11}, and the locally trivial fibration
\[ id_{T} \times \pi: T \times Y \to T \times X \]
satisfies these hypotheses, provided we can show that the elements $1_{T} \times T_{i,j}$ generate the Chow ring $CH^{\ast}(T \times Y)$ as a $CH^{\ast}(T \times X)$-module. But this follows from the definition of the mentioned generators after making suitable changes to diagram (\ref{dia0301}). Therefore, the identities (\ref{ec0405}) also hold. $_{\square}$
Now we establish some lemmas needed to prove Theorem \ref{teo12}. From now on, each time we say we have an isomorphism of motives, we are referring to an isomorphism between the additive structures of the motives involved. We start by calculating some factors appearing in the decomposition that we will later give for the motive $h(Y)$.
\begin{lemma} \label{le0402}
Let $\pi : Y \to X$ and $\pi': Y' \to X$ be two locally trivial fibrations satisfying the hypotheses of Theorem \ref{teo11}. Then we have an isomorphism of motives
\[ h(Y) \cong h(Y'). \]
\end{lemma}
\proof Denote by $T_{i,j}$ (resp. $T_{i,j}'$) the generators of $CH^{\ast}(Y)$ (resp. $CH^{\ast}(Y')$) as a $CH^{\ast}(X)$-module.
Theorem \ref{teo0401} lets us conclude that
\[ h(Y)=(Y, \Delta_{Y}) = \left( Y, \sum_{i,j} p_{i,j} \right) \cong \bigoplus_{i,j} (Y,p_{i,j}) \]
and
\[ h(Y') \cong \bigoplus_{i,j} (Y',p_{i,j}'), \]
where $p_{i,j}, \ p_{i,j}'$ are defined as in (\ref{ecp}), so it will be enough to show that the factors appearing in these decompositions are isomorphic.
In order to do this, define morphisms of motives
\[h_{i,j} \in \ho_{\cl{M}_{Rat}(k)}((Y,p_{i,j}),(Y',p_{i,j}'))\]
by the formula:
\[ h_{i,j}:=c_{T_{i,j}'} \circ c(\pi') \circ c(\pi)^{t} \circ c_{T_{n-i,j}} \circ \left( \Delta_{Y} - \sum_{(k,l) \in W_{i,j}} p_{k,l} \right) \ ; \]
analogously define the morphisms $h_{i,j}' \in \ho_{\cl{M}_{Rat}(k)}((Y',p_{i,j}'),(Y,p_{i,j}))$ by:
\[ h_{i,j}':=c_{T_{i,j}} \circ c(\pi) \circ c(\pi')^{t} \circ c_{T_{n-i,j}'} \circ \left( \Delta_{Y'} - \sum_{(k,l) \in W_{i,j}} p_{k,l}' \right) \ . \]
At this point we would have to show the commutativity of the diagrams
\[ \xymatrix{ Y \ar@{>}[d]_{p_{i,j}} \ar@{>}[r]^{h_{i,j}} & Y' \ar@{>}[d]^{p_{i,j}'} & & Y' \ar@{>}[d]_{p_{i,j}'} \ar@{>}[r]^{h_{i,j}'} & Y \ar@{>}[d]^{p_{i,j}} \\ Y \ar@{>}[r]_{h_{i,j}} & Y' & & Y' \ar@{>}[r]_{h_{i,j}'} & Y } \]
but this follows by Manin's Identity Principle since we have both
\[ (h_{i,j})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \right)= \pi'^{\ast}(\alpha_{i,j}) \cap T_{i,j}' \]
and a similar equation holding for $(h_{i,j}')_{e}$.
By following this procedure we can also obtain the identities
\[ h_{i,j}' \circ h_{i,j} = \Delta_{Y} \mod p_{i,j}, \quad h_{i,j} \circ h_{i,j}' = \Delta_{Y'} \mod p_{i,j}' \]
which expose both $h_{i,j}$ and $h'_{i,j}$ as the desired isomorphisms. $_{\square}$
\begin{lemma} \label{le0403}
Let $\pi: Y \to X$ be a locally trivial fibration satisfying the hypotheses of Theorem \ref{teo11}. Then we have an isomorphism of motives
\[ (Y, p_{i,j}) \cong h(X) \otimes ( Z, p_{i,j,Z}), \]
where the projectors $p_{i,j,Z} \in \ho_{\cl{M}_{Rat}(k)}(Z,Z)$ are defined by the formula
\[ p_{i,j,Z}:=\tau_{n-i,j} \times \tau_{i,j}. \]
\end{lemma}
\proof By Lemma \ref{le0402} we have that
\[ (Y, p_{i,j}) \cong (X \times Z, q_{i,j} ) \]
where the projectors $q_{i,j}$ are the ones defined for the trivial fibration
\[ \xymatrix{X \times Z \ar@{>}[r]^<<<<<{\rho} & X} \]
by using the formula (\ref{ecp}).
We will show that
\[ (X \times Z, q_{i,j} ) = ( X \times Z, \Delta_{X} \otimes p_{i,j,Z}) \]
We start by observing that the correspondences $p_{i,j,Z}$ are, in fact, projectors in the category of motives. Clearly, the correspondences $p_{i,j,Z}$ have degree zero. Now,
\begin{eqnarray}
p_{i,j,Z} \circ p_{k,l,Z} & = & \pi_{13 \ast} \left(\pi_{12}^{\ast}(\tau_{n-i,j} \times \tau_{i,j}) \cap \pi_{23}^{\ast}(\tau_{n-k,l} \times \tau_{k,l}) \right) \nonumber \\
& = & \pi_{13 \ast}(\tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l}) \ . \nonumber
\end{eqnarray}
Suppose $\tau_{i,j} \cap \tau_{n-k,l} \ne 0$. Then
\[ \pi_{13}(\tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l})=\tau_{n-i,j} \times \tau_{k,l} \ . \]
Since
\[ \dim(\tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l})=n, \]
\[ \dim(\tau_{n-i,j} \times \tau_{k,l})=n+i-k \]
we see that they have the same dimension if and only if $k=i$. Therefore
\[ \pi_{13 \ast}(\tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l})=\left \{ \begin{array}{cll} \tau_{n-i,j} \times \tau_{i,l} & if & k=i \\ 0 & if & k \ne i \end{array} \right. \]
but since we are assuming $\tau_{i,j} \cap \tau_{n-i,l} \ne 0$, we have that $l=j$, so
\[ \pi_{13 \ast}(\tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l})=\delta_{(i,j)}^{(k,l)} \tau_{n-i,j} \times \tau_{i,j} \]
and therefore
\[ p_{i,j,Z} \circ p_{k,l,Z}= \delta_{(i,j)}^{(k,l)} p_{i,j,Z} \ . \]
Now consider the case when $\tau_{i,j} \cap \tau_{n-k,l}=0$. Then $(i,j) \ne (k,l)$, otherwise we would have $\tau_{i,j} \cap \tau_{n-k,l}=e$; therefore $\delta_{(i,j)}^{(k,l)}=0$. Another consequence of $\tau_{i,j} \cap \tau_{n-k,l}=0$ is that, by Proposition $1.10$ in \cite[p. 24]{FUL98} we have that
\[ \tau_{n-i,j} \times (\tau_{i,j} \cap \tau_{n-k,l}) \times \tau_{k,l}=0. \]
So in this case we also have the equality
\[ p_{i,j,Z} \circ p_{k,l,Z}= \delta_{(i,j)}^{(k,l)} p_{i,j,Z} \ . \]
Therefore the correspondences $p_{i,j,Z}$ define mutually orthogonal projectors on $Z$. Now we proceed to verify that the projectors $q_{i,j}$ and $\Delta_{X} \otimes p_{i,j,Z}$ coincide.
By Lemma \ref{le0401} we have that
\[ (q_{i,j})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \rho^{\ast}(\alpha_{r,s}) \cap 1_{X} \times \tau_{r,s} \right) = \rho^{\ast}(\alpha_{i,j}) \cap 1_{X} \times \tau_{i,j} \ . \]
On the other hand
\begin{eqnarray}
(\Delta_{X} \otimes p_{i,j,Z})_{e}(\rho^{\ast}(\alpha_{r,s}) \cap 1_{X} \times \tau_{r,s}) & = & (p_{i,j,Z})_{X}(\rho^{\ast}(\alpha_{r,s}) \cap 1_{X} \times \tau_{r,s}) \nonumber \\
& = & p_{i,j,Z} \circ (\rho^{\ast}(\alpha_{r,s}) \cap 1_{X} \times \tau_{r,s}) \nonumber \\
& = & p_{13 \ast}( \alpha_{r,s} \times (\tau_{r,s} \cap \tau_{n-i,j}) \times \tau_{i,j}) \nonumber \\
& = & \left\lbrace \begin{array}{ccc} \alpha_{i,j} \times \tau_{i,j} & if & (r,s)=(i,j) \\ 0 & if & (r,s) \ne (i,j) \end{array} \right. \nonumber \\
& = & \delta_{(i,j)}^{(r,s)} \rho^{\ast}(\alpha_{i,j}) \cap 1_{X} \times \tau_{i,j} \ . \nonumber
\end{eqnarray}
Therefore
\begin{eqnarray}
& & (\Delta_{X} \otimes p_{i,j,Z})_{e} \left( \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \rho^{\ast}(\alpha_{r,s}) \cap 1_{X} \times \tau_{r,s} \right) \nonumber \\
& = & \rho^{\ast}(\alpha_{i,j}) \cap 1_{X} \times \tau_{i,j} \ . \nonumber
\end{eqnarray}
At this point we have proved the equality
\[ (q_{i,j})_{e}=(\Delta_{X} \otimes p_{i,j,Z})_{e} \]
and by applying Manin's Identity Principle we obtain
\[ q_{i,j}=\Delta_{X} \otimes p_{i,j,Z} \ . \]
We conclude the proof of this Lemma by noticing that
\[ (Y,p_{i,j}) \cong (X \times Z, q_{i,j})=(X \times Z , \Delta_{X} \otimes p_{i,j,Z}) \cong (X, \Delta_{X}) \otimes (Z, p_{i,j,Z}) \]
as desired. $_{\square}$
\begin{lemma} \label{le0404}
Under the hypothesis of Lemma \ref{le0403} we have that
\[ h(Z) \cong \bigoplus_{i,j} (Z,p_{i,j,Z}) \]
\end{lemma}
\proof We have already shown that the correspondences $p_{i,j,Z}$ induce mutually orthogonal projectors. So, our proof will be finished if we can show that
\[ \sum_{i,j} p_{i,j,Z} = \Delta_{Z} \ . \]
The elements of $CH^{\ast}(Z)$ can be written as
\[ \sum_{r=0}^{n} \sum_{s=1}^{m_{r}} n_{r,s} \tau_{r,s} \]
for some $n_{r,s} \in \db{Z}$. Observe that
\[ (p_{i,j,Z})_{e}\left(\sum_{r=0}^{n} \sum_{s=1}^{m_{r}} n_{r,s} \tau_{r,s} \right)= p_{i,j,Z} \circ \left(\sum_{r=0}^{n} \sum_{s=1}^{m_{r}} n_{r,s} \tau_{r,s} \right)= \sum_{r=0}^{n} \sum_{s=1}^{m_{r}} n_{r,s} p_{i,j,Z} \circ \tau_{r,s} \]
besides
\begin{eqnarray}
p_{i,j,Z} \circ \tau_{r,s} & = & p_{2 \ast}( (\tau_{n-i,j} \times \tau_{i,j}) \cap (\tau_{r,s} \times 1_{Z}) ) \nonumber \\
& = & p_{2 \ast}( (\tau_{n-i,j} \cap \tau_{r,s}) \times \tau_{i,j} ) \nonumber \\
& = & \left\lbrace \begin{array}{ccc} \tau_{i,j} & if & (r,s)=(i,j) \\ 0 & if & (r,s) \ne (i,j) \end{array} \right. \nonumber \\
& = & \delta_{(i,j)}^{(r,s)} \tau_{i,j} \nonumber
\end{eqnarray}
in this way we obtain that
\[(p_{i,j,Z})_{e}\left( \sum_{r=0}^{n} \sum_{s=1}^{m_{r}} n_{r,s} \tau_{r,s} \right)=n_{i,j} \tau_{i,j} \]
and therefore
\[ \sum_{i,j} (p_{i,j,Z})_{e}=id_{CH^{\ast}(Z)}=(\Delta_{Z})_{e} \]
and by using Manin's Identity Principle we obtain the desired result. $_{\square}$
Now we have at our disposal all the tools needed to prove Theorem \ref{teo12}.
\textbf{Proof of Theorem \ref{teo12}.} We have that
\[ \begin{array}{rcl} h(Y) = (Y, \Delta_{Y}) & \cong & \displaystyle \bigoplus_{i,j} (Y, p_{i,j}) \\
\ \\
& \cong & \displaystyle \bigoplus_{i,j} \left( h(X) \otimes (Z,p_{i,j,Z}) \right) \\
\ \\
& \cong & h(X) \otimes \left( \displaystyle \bigoplus_{i,j} (Z,p_{i,j,Z}) \right) \\
\ \\
& \cong & h(X) \otimes h(Z). \end{array} \]
$_{\square}$
\section{Murre's conjectures.}
We begin this section by recalling some definitions.
\begin{definition}
Let $X$ be a smooth $d$-dimensional projective variety over a field $k$. We say that $X$ has a Chow-K\"unneth decomposition if we can find cycle classes
\[ \pi_{0}(X), \dots , \pi_{2d}(X) \in CH^{d}(X \times X, \db{Q}) \]
such that
\begin{description}
\item[a)] $\pi_{i}(X) \circ \pi_{j}(X) = \delta_{i,j} \pi_{i}(X)$.
\item[b)] $\displaystyle \Delta_{X}= \sum_{i=0}^{2d} \pi_{i}(X)$.
\item[c)] (over $\bar{k}$) $\pi_{i}$ modulo (co)homological equivalence (for example, in \'etale cohomology) is the usual K\"unneth component $\Delta_{X}(2d-i,i)$.
\end{description}
If we define $h^{i}(X):=(X, \pi_{i}(X))$, then we will say that
\[ h(X)= \bigoplus_{i=0}^{2d} h^{i}(X) \]
(or equivalently, the collection $\pi_{0}(X), \dots, \pi_{2d}(X)$) is a Chow-K\"unneth (CK) decomposition for $X$.
\end{definition}
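The model example is $X=\mathbb{P}^{n}$: if $h \in CH^{1}(\mathbb{P}^{n}, \db{Q})$ denotes the hyperplane class, then the diagonal decomposes as
\[ \Delta_{\mathbb{P}^{n}} = \sum_{i=0}^{n} h^{n-i} \times h^{i}, \]
and the classes $\pi_{2i}(\mathbb{P}^{n})=h^{n-i} \times h^{i}$, $\pi_{2i+1}(\mathbb{P}^{n})=0$ form a CK decomposition; note that these projectors have the same shape as the projectors $p_{i,j,Z}=\tau_{n-i,j} \times \tau_{i,j}$ of Lemma \ref{le0403}.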
Some examples of varieties having a CK decomposition are curves \cite{KLE70}, surfaces \cite{MUR90}, products of curves and surfaces \cite{MUR93II}, abelian varieties \cite{SHE74} and uniruled complex 3-folds \cite{ANG98}. The following conjectures (among others) were proposed by Murre in \cite{MUR93}, and they are related to a conjectural filtration on the Chow groups of an algebraic variety.
\textbf{Murre's conjectures.}
\begin{description}
\item[A)] Every smooth projective $d$-dimensional variety $X$ has a Chow-K\"unneth decomposition:
\[ h(X) \cong \bigoplus_{i=0}^{2d} (X, \pi_{i}(X)) \]
\item[B)] For each $j$, $\pi_{0}(X), \dots, \pi_{j-1}(X), \pi_{2j+1}(X), \dots, \pi_{2d}(X)$ act as zero on $CH^{j}(X, \db{Q})$.
\end{description}
In order to say something about these conjectures in the case of the fibrations studied in this work, we need to establish some identities, whose proofs are straightforward.
\begin{lemma}
Let $X, Y, Z \in \ob(V(k))$, $\alpha \in CH^{\ast}(Y)$, $\varphi \in CH^{\ast}(X \times Y)$, $\psi \in CH^{\ast}(Y \times Z)$, $\tau \in CH^{\ast}(X \times Z)$. Let $f:X \to Y$ and $g:Y \to Z$ be morphisms in $V(k)$. Then we have the following identities.
\begin{enumerate}
\item $c_{\alpha} \circ \varphi = (1_{X} \times \alpha) \cdot \varphi$,
\item $\psi \circ c_{\alpha}= (\alpha \times 1_{Z}) \cdot \psi$,
\item $c(f) \circ \varphi = (id_{X} \times f)^{\ast}(\varphi)$,
\item $c(g)^{t} \circ \varphi = (id_{X} \times g)_{\ast}(\varphi)$,
\item $\tau \circ c(f) = (f \times id_{Z})_{\ast}(\tau)$,
\item $\psi \circ c(f)^{t} = (f \times id_{Z})^{\ast}(\psi)$. $_{\square}$
\end{enumerate}
\end{lemma}
In particular, we can rewrite some parts of the correspondences $p_{i,j}$ given in the last section, namely
\[ c_{T_{i,j}} \circ c(\pi) \circ c(\pi)^{t} \circ c_{T_{n-i,j}}= (\pi \times \pi)^{\ast}(\Delta_{X}) \cdot (T_{n-i,j} \times T_{i,j}) \ . \]
From this new expression we see that, provided we have a CK decomposition for the base space of the fibration as
\[ \Delta_{X} = \sum_{i=0}^{2 \dim(X)} \pi_{i}(X) \ , \]
then a CK decomposition for the fibration $Y$ must involve correspondences with terms of the form
\[ (\pi \times \pi)^{\ast}(\pi_{r}(X)) \cdot (T_{n-i,j} \times T_{i,j})=c_{T_{i,j}} \circ c(\pi) \circ \pi_{r}(X) \circ c(\pi)^{t} \circ c_{T_{n-i,j}} \ . \]
That will be the case, as we will see in the proof of Theorem \ref{thmCK}.
\textbf{Proof of Theorem \ref{thmCK}}. Let $\pi_{0}(X), \cdots, \pi_{2d}(X)$ be a CK decomposition for $X$, where $d$ is the dimension of $X$. For a cycle $\varphi \in CH^{\ast}(X \times X)$, define a cycle $\fr{p}_{j}(\varphi) \in CH^{\ast}(Y \times Y)$ as (here $n=\dim(Z)$)
\[ \displaystyle \fr{p}_{j}(\varphi)= \sum_{\lambda=1}^{m_{j/2}} c_{T_{j/2, \lambda}} \circ c(\pi) \circ \varphi \circ c(\pi)^{t} \circ c_{T_{n-j/2, \lambda}} \circ \left( \Delta_{Y} - \sum_{(k,l) \in W_{j/2, \lambda}} p_{k,l} \right) \]
for $j$ even and $\fr{p}_{j}(\varphi)=0$ for $j$ odd.
Since
\[ c_{T_{j/2, \lambda}} \circ c(\pi) \circ \varphi \circ c(\pi)^{t} \circ c_{T_{n-j/2, \lambda}} = (\pi \times \pi)^{\ast}(\varphi) \cdot (T_{n-j/2, \lambda} \times T_{j/2, \lambda}) \ , \]
it follows that $\fr{p}_{j}(\cdot)$ is additive.
Now, for each integer $k$ between $0$ and $2 \dim(Y)$ define the set
\[ I_{k}:=\{ (i,j) \ | \ 0 \leq i \leq 2d, \ 0 \leq j \leq 2n, \ i+j=k \} \]
and the correspondence
\[ \pi_{k}(Y):= \sum_{(i,j) \in I_{k}} \fr{p}_{j}(\pi_{i}(X)). \]
We will show that $\pi_{0}(Y), \dots, \pi_{2 \dim(Y)}(Y)$ give a CK decomposition for $Y$ and satisfy the conjecture \textbf{B)} established before.
Clearly $\pi_{k}(Y)$ is a correspondence of degree zero. Besides, since $I_{k} \cap I_{k'} = \emptyset$ for $k \ne k'$ and
\[ \bigcup_{k=0}^{2 \dim(Y)} I_{k} = \{0, 1, \dots, 2d \} \times \{0,1, \dots, 2n \} \]
it follows that
\begin{eqnarray}
\sum_{k=0}^{2 \dim(Y)} \pi_{k}(Y) & = & \sum_{k=0}^{2 \dim(Y)} \sum_{(i,j) \in I_{k}} \fr{p}_{j}(\pi_{i}(X)) \nonumber \\ & = & \sum_{i=0}^{2d} \sum_{j=0}^{2n} \fr{p}_{j}(\pi_{i}(X)) \nonumber \\ & = & \sum_{j=0}^{2n} \fr{p}_{j} \left( \sum_{i=0}^{2d} \pi_{i}(X) \right) \nonumber \\ & = & \sum_{j=0}^{n} \sum_{\lambda=1}^{m_{j}} p_{j, \lambda} = \Delta_{Y} \ . \nonumber
\end{eqnarray}
Now, we will verify that we have the identities
\[ \pi_{k}(Y) \circ \pi_{k'}(Y) = \delta_{k,k'} \pi_{k}(Y) \ . \]
By Lemma \ref{Yonconseq}, we have to show that for any variety $T \in \ob(V(k))$,
\[ \pi_{k}(Y)_{T} \circ \pi_{k'}(Y)_{T} = \delta_{k,k'} \pi_{k}(Y)_{T} \ . \]
As we observed in the proof of Theorem \ref{teo0401}, $id_{T} \times \pi: T \times Y \to T \times X$ is a locally trivial fibration with fibres isomorphic to $Z$ and having a Chow stratification, the generators of $CH^{\ast}(T \times Y)$ as a $CH^{\ast}(T \times X)$-module being the elements $1_{T} \times T_{i,j}$. Therefore, by Theorem \ref{teo11} any element $\beta \in CH^{p}(T \times Y)$ can be written as
\[ \beta = \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} (id_{T} \times \pi)^{\ast}(\alpha_{r,s}) \cap (1_{T} \times T_{r,s}) \ , \]
for some $\alpha_{r,s} \in CH^{p-r}(T \times X)$.
By Lemma \ref{funcident}, each non zero summand of $\fr{p}_{j}(\pi_{i}(X))_{T}$ is of the form
\[ m_{1_{T} \times T_{j/2, \lambda}} \circ (id_{T} \times \pi)^{\ast} \circ \pi_{i}(X)_{T} \circ (id_{T} \times \pi)_{\ast} \circ m_{1_{T} \times T_{n-j/2, \lambda}} \circ \left( id_{T \times Y} - \sum_{(k,l) \in W_{j/2, \lambda}} (p_{k,l})_{T} \right) \]
and by doing calculations similar to the ones given in the proof of Lemma \ref{le0401}, we obtain
\[ \left( (id_{T} \times \pi)_{\ast} \circ m_{1_{T} \times T_{n-j/2, \lambda}} \circ \left( id_{T \times Y} - \sum_{(k,l) \in W_{j/2, \lambda}} (p_{k,l})_{T} \right) \right)(\beta)= \alpha_{j/2, \lambda} \ ; \]
in consequence
\[ \fr{p}_{j}(\pi_{i}(X))_{T}(\beta)= \displaystyle \sum_{\lambda=1}^{m_{j/2}} (id_{T} \times \pi)^{\ast}(\pi_{i}(X)_{T}(\alpha_{j/2, \lambda})) \cap (1_{T} \times T_{j/2, \lambda}) \ ; \]
observe that in this last expression, applying $\fr{p}_{j}(\pi_{i}(X))_{T}$ to an element yields a result that only involves terms whose generator is of the form $1_{T} \times T_{j/2, \lambda}$. Therefore, should we apply $\fr{p}_{j'}(\pi_{i'}(X))_{T}$ to $\fr{p}_{j}(\pi_{i}(X))_{T}(\beta)$ for $j' \ne j$ (no matter what value of $i'$ we choose), we would obtain zero. On the other hand, if $j=j'$ then
\[ \fr{p}_{j}(\pi_{i'}(X))_{T} \left(\fr{p}_{j}(\pi_{i}(X))_{T}(\beta) \right)= \displaystyle \sum_{\lambda=1}^{m_{j/2}} (id_{T} \times \pi)^{\ast}( (\pi_{i'}(X) \circ \pi_{i}(X))_{T}(\alpha_{j/2, \lambda})) \cap (1_{T} \times T_{j/2, \lambda}) \ ; \]
but we have that $\pi_{i'}(X) \circ \pi_{i}(X) = \delta_{i',i} \pi_{i}(X)$, so it follows that
\[ \fr{p}_{j}(\pi_{i'}(X))_{T} \left(\fr{p}_{j}(\pi_{i}(X))_{T}(\beta) \right)= \delta_{i',i} \fr{p}_{j}(\pi_{i}(X))_{T}(\beta) \ . \]
To summarize, we have that
\[ \fr{p}_{j'}(\pi_{i'}(X))_{T} \left(\fr{p}_{j}(\pi_{i}(X))_{T}(\beta) \right) = \delta_{(i,j)}^{(i',j')} \fr{p}_{j}(\pi_{i}(X))_{T}(\beta) \ . \]
Now, when $k \ne k'$, $I_{k} \cap I_{k'} = \emptyset$, and therefore $\delta_{(i,j)}^{(i',j')}=0$ for any $(i,j) \in I_{k}$ and any $(i',j') \in I_{k'}$. Therefore, for $k \ne k'$
\[ \pi_{k'}(Y)_{T} \circ \pi_{k}(Y)_{T}(\beta)= 0 \ . \]
In a similar fashion, $\pi_{k}(Y)_{T} \circ \pi_{k}(Y)_{T}(\beta)= \pi_{k}(Y)_{T}(\beta) $. Therefore the projectors
\[ \pi_{0}(Y), \dots, \pi_{2 \dim(Y)}(Y) \]
provide a CK decomposition for $Y$.
Now we will prove the statement about the action. Recall that the action of $\pi_{k}(Y)$ on $CH^{j}(Y)$ is given by the values of $\pi_{k}(Y)_{e}$, where $e= \Spec (k)$. We have to show that, for a given value of $p$,
\[ \pi_{0}(Y)_{e}(\beta)= \cdots = \pi_{p-1}(Y)_{e}(\beta) = \pi_{2p+1}(Y)_{e}(\beta) = \dots = \pi_{2 \dim(Y)}(Y)_{e}(\beta)=0 \]
for any $\beta \in CH^{p}(Y)$.
As before, $\beta$ can be written as
\[ \sum_{r=0}^{p} \sum_{s=1}^{m_{r}} \pi^{\ast}(\alpha_{r,s}) \cap T_{r,s} \]
for some $\alpha_{r,s} \in CH^{p-r}(X)$.
From the calculations done before we see that
\[ \pi_{k}(Y)_{e}(\beta)= \sum_{\substack{(i,j) \in I_{k} \\ j \ even}} \displaystyle \sum_{\lambda=1}^{m_{j/2}} \pi^{\ast}(\pi_{i}(X)_{e}(\alpha_{j/2, \lambda})) \cap T_{j/2, \lambda} \ , \]
and we have to consider two cases.
Suppose $0 \leq k \leq p-1$. Then for $(i,j) \in I_{k}$ we have
\[ 0 \leq 2i+j \leq 2i +2j \leq 2(p-1) \]
and therefore
\begin{equation} \label{ineq01}
0 \leq i \leq p-\frac{j}{2}-1 \ .
\end{equation}
But $\alpha_{j/2, \lambda} \in CH^{p-\frac{j}{2}}(X)$ and $\pi_{i}(X)$ acts as zero there because of (\ref{ineq01}) and the hypothesis on $\pi_{i}(X)$. In this way we have that $\pi_{k}(Y)_{e}(\beta)=0$ for $0 \leq k \leq p-1$.
Now, suppose $2p+1 \leq k \leq 2\dim(Y)$. For $(i,j) \in I_{k}$,
\[ 2p+1 \leq k= i+j \ ; \quad i \leq 2 \dim(X) \ . \]
Putting together these two inequalities we obtain $2(p-j/2)+1 \leq i \leq 2 \dim(X)$ and again, by the hypothesis on $\pi_{i}(X)$, it acts as zero on $CH^{p-j/2}(X)$. Therefore
$\pi_{k}(Y)_{e}(\beta)=0$ for $2p+1 \leq k \leq 2\dim(Y)$. $_{\square}$
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{claim}{Claim}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{fact}{Fact}
\title{Tur$\rm{\acute{a}}$n problem for $\mathcal{K}_4^-$-free signed graphs}
\maketitle
\noindent\rule[0pt]{16.5cm}{0.09em}
\noindent{\bf Abstract}
\noindent Suppose that $\dot{G}$ is an unbalanced signed graph of order $n$ with $e(\dot{G})$ edges. Let $\rho(\dot{G})$ be the spectral radius of $\dot{G}$, and let $\mathcal{K}_4^-$ be the set of unbalanced $K_4$'s. In this paper, we prove that if $\dot{G}$ is a $\mathcal{K}_4^-$-free unbalanced signed graph of order $n$, then $e(\dot{G})\leqslant \frac{n(n-1)}{2}-(n-3)$ and $\rho(\dot{G})\leqslant n-2$. Moreover, the extremal graphs are completely characterized.
\noindent{\bf Keywords:} Signed graph, Adjacency matrix, Spectral radius, Tur$\rm{\acute{a}}$n problem.
\noindent\rule[0pt]{16.5cm}{0.05em}
\section{Introduction}
Throughout this paper the graph $G$ is assumed to be simple and undirected. The vertex set and the edge set of a graph $G$ will be denoted by $V(G)$ and $E(G)$. A signed graph $\dot{G}=(G,\sigma)$ consists of a graph $G$, called the underlying graph, and a sign function $\sigma:E(G) \rightarrow \left\{-1,+1\right\}$. Signed graphs first appeared in the work of Harary \cite{H}. If all edges get sign $+1$ (resp. $-1$), then $\dot{G}$ is called all positive (resp. all negative) and denoted by $(G,+)$ (resp. $(G,-)$). The sign of a cycle $C$ of $\dot{G}$ is $\sigma(C)=\prod_{e\in E(C)}\sigma(e)$; a cycle whose sign is $+1$ (resp. $-1$) is called positive (resp. negative). A signed graph $\dot{G}$ is called balanced if all its cycles are positive; otherwise it is called unbalanced. For more details about signed graphs, we refer to \cite{Z}.
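The balance condition can be checked directly from the definition; the following sketch (illustrative code with ad hoc names, not part of the paper) computes the sign of a cycle as the product of its edge signs and confirms that a triangle with exactly one negative edge is negative, so any signed graph containing it is unbalanced.

```python
# Sign of a cycle in a signed graph: the product of its edge signs.
def cycle_sign(cycle, sigma):
    """cycle: list of vertices [v0, v1, ..., vk] with v0 == vk;
    sigma: dict mapping frozenset({u, v}) -> +1 or -1."""
    sign = 1
    for u, v in zip(cycle, cycle[1:]):
        sign *= sigma[frozenset((u, v))]
    return sign

# Triangle with exactly one negative edge: a negative cycle,
# so a signed graph containing it is unbalanced.
sigma = {frozenset((0, 1)): -1, frozenset((1, 2)): +1, frozenset((0, 2)): +1}
print(cycle_sign([0, 1, 2, 0], sigma))  # -> -1
```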
Let $U$ be a subset of the vertex set $V(\dot{G})$ and let $\dot{G}_U$ be the signed graph obtained from $\dot{G}$ by reversing the sign of each edge between a vertex in $U$ and a vertex in $V(\dot{G})\setminus U$. We say that the signed graph $\dot{G}_U$ is switching equivalent to $\dot{G}$, and write $\dot{G}\sim \dot{G}_U$. The switching operation preserves the signs of cycles, so if $\dot{G}$ is unbalanced, then $\dot{G}_U$ is also unbalanced.
For an $n\times n$ real symmetric matrix $M$, all its eigenvalues will be denoted by $\lambda_1(M)\geqslant\lambda_2(M)\geqslant\cdots\geqslant\lambda_n(M)$, and we write Spec($M$)=$\{\lambda_1(M),\lambda_2(M),\cdots,\lambda_n(M)\}$ for the spectra of $M$. The adjacency matrix of a signed graph $\dot{G}$ of order $n$ is an $n\times n$ matrix $A(\dot{G})=(a_{ij})$. If $\sigma(uv)=+1$ (resp. $\sigma(uv)=-1$), then $a_{uv}=1$ (resp. $a_{uv}=-1$) and if $u$ is not adjacent to $v$, then $a_{uv}=0$. The eigenvalues of $A(\dot{G})$ are called the eigenvalues of $\dot{G}$, denoted by $\lambda_1(\dot{G})\geqslant\lambda_2(\dot{G})\geqslant\cdots\geqslant\lambda_n(\dot{G})$. In particular, the largest eigenvalue $\lambda_1(\dot{G})$
is called the index of $\dot{G}$. The spectral radius of $\dot{G}$ is defined by $\rho(\dot{G})=\max\big\{|\lambda_i(\dot{G})|:1\leqslant i\leqslant n\big\}.$
Since, in general, $A(\dot{G})$ is not similar to a non-negative matrix, it may happen that $-\lambda_n(\dot{G})>\lambda_1(\dot{G})$. Thus,
$\rho(\dot{G})=\max\big\{\lambda_1(\dot{G}),-\lambda_n(\dot{G})\big\}.$ For the diagonal matrix $S_U=\text{diag}(s_1,s_2,\cdots,s_n)$ with $s_i=1$ if $i\in U$ and $s_i=-1$ otherwise, we have $A(\dot{G})=S_U^{-1}A(\dot{G}_U)S_U$. Therefore, the signed graphs $\dot{G}$ and $\dot{G}_U$ share the same spectrum.
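As a small illustration (an example of ours, not from the original text): switching the unbalanced triangle $\dot{H}$, whose only negative edge is $uv$, at $U=\{u\}$ flips the signs of $uv$ and $uw$, and conjugation by $S_U$ recovers the original adjacency matrix:

```latex
% Vertices ordered u, v, w; switching at U = {u} flips the edges uv and uw.
A(\dot{H})=\begin{pmatrix}0&-1&1\\-1&0&1\\1&1&0\end{pmatrix},\qquad
A(\dot{H}_U)=\begin{pmatrix}0&1&-1\\1&0&1\\-1&1&0\end{pmatrix},\qquad
S_U=\operatorname{diag}(1,-1,-1),
% so A(\dot{H}) = S_U^{-1} A(\dot{H}_U) S_U, and both signed graphs have
% characteristic polynomial x^3 - 3x + 2 = (x-1)^2(x+2),
% hence the common spectrum {1, 1, -2}.
```

Both signed graphs remain unbalanced (each has exactly one negative triangle), as the switching operation guarantees.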
A graph may be regarded as a signed graph with all positive edges. Hence, properties of graphs can naturally be considered in terms of signed graphs. Moreover, signed graphs have some special properties of their own; for example, Huang \cite{H1} solved the Sensitivity Conjecture via the spectral properties of signed hypercubes. For the spectral theory of signed graphs, see \cite{BCKW, KP, KS, ABH} for details, where \cite{BCKW} is an excellent survey of open problems in the spectral theory of signed graphs.
Let $\mathcal{F}$ be a family of graphs. A graph $G$ is $\mathcal{F}$-free if $G$ does not contain any graph in $\mathcal{F}$ as a subgraph. The classical Tur$\rm{\acute{a}}$n-type problem asks for the maximum number of edges of an $n$-vertex $\mathcal{F}$-free graph, called the Tur$\rm{\acute{a}}$n number.
Let $T_r(n)$ be the complete $r$-partite graph of order $n$ whose partition sets
have sizes as equal as possible. Tur$\rm{\acute{a}}$n \cite{T} proved that $T_r(n)$ is the unique extremal graph among $K_{r+1}$-free graphs of order $n$, which is regarded as the beginning of extremal graph theory. We refer the reader to \cite{BS, FG, YZ} for more results about Tur$\rm{\acute{a}}$n numbers.
\begin{theorem}\cite{T}
If $G$ is a $K_{r+1}$-free graph of order $n$, then
$$e(G)\leqslant e(T_r(n)),$$
with equality holding if and only if $G=T_r(n)$.
\end{theorem}
In 2007, Nikiforov \cite{N1} gave a spectral version of the Tur$\rm{\acute{a}}$n theorem for the complete graph $K_{r+1}$. In
the past few decades, much attention has been paid to the search for spectral Tur$\rm{\acute{a}}$n theorems; see, e.g., \cite{CDT, DKL, WKX}.
\begin{theorem}\cite{N1}
If $G$ is a $K_{r+1}$-free graph of order $n$, then
$$\rho(G)\leqslant\rho(T_r(n)),$$
with equality holding if and only if $G=T_r(n)$.
\end{theorem}
What about the Tur$\rm{\acute{a}}$n problem for signed graphs? The discussion began with complete signed graphs. Let $\mathcal{K}_3^-$ be the set of the unbalanced $K_3$. Up to switching equivalence, we have $\mathcal{K}_3^-=\{\dot{H}\}$, where $\dot{H}$ is the signed triangle with exactly one negative edge. Wang, Hou, and Li \cite{WHL} determined the Tur$\rm{\acute{a}}$n number and the spectral Tur$\rm{\acute{a}}$n number of $\mathcal{K}_3^-$. In Figs. 1 and 2, dashed lines indicate negative edges, and ellipses indicate cliques with all positive edges.
\begin{figure}
\caption{The signed graphs $\dot{G}(s,t)$, $\dot{G_1}(a,b)$ and $\dot{G_1'}(1,n-4)$.}
\label{F1}
\end{figure}
\begin{theorem}\label{c3edge}\cite{WHL}
Let $\dot{G}=(G,\sigma)$ be a connected $\mathcal{K}_3^-$-free unbalanced signed graph of order $n$. Then
$$e(\dot{G})\leqslant \frac{n(n-1)}{2}-(n-2),$$
with equality holding if and only if $\dot{G}\sim \dot{G}(s,t)$, where $s+t=n-2$ and $s$, $t\geqslant1$ (see Fig. 1).
\end{theorem}
\begin{theorem}\label{c3spectrum}\cite{WHL}
Let $\dot{G}=(G,\sigma)$ be a connected $\mathcal{K}_3^-$-free unbalanced signed graph of order $n$. Then
$$\rho(\dot{G})\leqslant\frac{1}{2}(\sqrt{n^2-8}+n-4),$$
with equality holding if and only if $\dot{G}\sim\dot{G}(1,n-3)$.
\end{theorem}
Let $\mathcal{K}_4^-$ be the set of the unbalanced $K_4$. Up to switching equivalence, we have $\mathcal{K}_4^-=\{\dot{H_1},\dot{H_2}\}$, where $\dot{H_1}$ is the signed $K_4$ with exactly one negative edge and $\dot{H_2}$ is the signed $K_4$ with two independent negative edges. If $\dot{G}$ is a $\mathcal{K}_4^-$-free signed graph of order $n$ with the maximum number of edges, then $e(\dot{G})=\frac{n(n-1)}{2}$ and $\dot{G}\sim(K_n,+)$. Therefore, we focus our attention on $\mathcal{K}_4^-$-free unbalanced signed graphs. Let
$$\mathcal{G}=\big\{\dot{G_1}(a,b),\, \dot{G_1'}(1,n-4), \, \dot{G_2}(c,d), \, \dot{G_3}(1,n-5), \, \dot{G_4}(1,n-5), \, \dot{G_5}(1,n-5)\big\},$$
where $a+b=n-3$, $a$, $b\geqslant 0$, and $c+d=n-4$, $c$, $d\geqslant 1$ (see Fig. 1 and 2). For any signed graph $\dot{G}\in\mathcal{G}$, we have $\dot{G}$ is $\mathcal{K}_4^-$-free unbalanced, and
\begin{equation}\label{n-3}
e(\dot{G})=\frac{n(n-1)}{2}-(n-3).
\end{equation}
If $\dot{G}$ is a $\mathcal{K}_4^-$-free signed graph of order $n$ with maximum spectral radius, then $\rho(\dot{G})=n-1$ and $\dot{G}\sim(K_n,+)$. Therefore, we focus our attention on $\mathcal{K}_4^-$-free unbalanced signed graphs. In Section 2, for any signed graph $\dot{G}\in\mathcal{G}$, we will prove that
$\lambda_1(\dot{G})\leqslant n-2,$
with equality holding if and only if $\dot{G}\sim \dot{G_1}(0,n-3)$.
Among all unbalanced signed graphs, the Tur$\rm\acute{a}$n number of $\mathcal{K}_4^-$ will be determined in Theorem \ref{edge}, and the spectral Tur$\rm\acute{a}$n number of $\mathcal{K}_4^-$ in Theorem \ref{spectrum}. Their proofs are presented in Sections 3 and 4, respectively.
\begin{theorem}\label{edge}
Let $\dot{G}=(G,\sigma)$ be a $\mathcal{K}_4^-$-free unbalanced signed graph of order $n$ ($n\geqslant7$). Then
$$e(\dot{G})\leqslant \frac{n(n-1)}{2}-(n-3),$$
with equality holding if and only if $\dot{G}$ is switching equivalent to a signed graph in $\mathcal{G}$.
\end{theorem}
\begin{figure}
\caption{The signed graphs $\dot{G_2}(c,d)$, $\dot{G_3}(1,n-5)$, $\dot{G_4}(1,n-5)$ and $\dot{G_5}(1,n-5)$.}
\label{F2}
\end{figure}
\begin{theorem}\label{spectrum}
Let $\dot{G}=(G,\sigma)$ be a $\mathcal{K}_4^-$-free unbalanced signed graph of order $n$. Then
$$\rho(\dot{G})\leqslant n-2,$$
with equality holding if and only if $\dot{G}\sim \dot{G_1}(0,n-3)$.
\end{theorem}
\section{The indices of signed graphs in $\mathcal{G}$}
For any signed graph $\dot{G}$ in $\mathcal{G}$, we will show that $\lambda_1(\dot{G})\leqslant n-2$, with equality holding if and only if $\dot{G}\sim\dot{G_1}(0,n-3)$. The equitable quotient matrix technique and the Cauchy Interlacing Theorem are the two main tools in our proof.
Let $M$ be a real symmetric matrix with the following block form
$$M=\left(
\begin{array}{ccc}
M_{11} & \cdots & M_{1m} \\
\vdots & \ddots & \vdots \\
M_{m1} & \cdots & M_{mm} \\
\end{array}
\right).
$$
For $1\leqslant i,j\leqslant m$, let $q_{ij}$ denote the average row sum of $M_{ij}$. The matrix $Q=(q_{ij})$ is called the quotient matrix of $M$. Moreover, if for each pair $i,j$, $M_{ij}$ has a constant row sum, then $Q$ is called an equitable quotient matrix of $M$.
\begin{lemma}\label{equitable}\cite{BH}
Let $Q$ be an equitable quotient matrix of a matrix $M$. Then the eigenvalues of $M$ are of the following two kinds.
(1) Eigenvalues that coincide with the eigenvalues of $Q$.
(2) Eigenvalues of $M$ not in {\rm Spec}($Q$), which remain unchanged if some scalar multiple of the all-one block $J$ is added to the block $M_{ij}$ for each $1\leqslant i,j\leqslant m$.
Furthermore, if $M$ is nonnegative and irreducible, then $\lambda_1(M)=\lambda_1(Q).$
\end{lemma}
\begin{lemma}\label{interlacing}\cite{BH}
Let $A$ be a symmetric matrix of order $n$ with eigenvalues $\lambda_1\geqslant \lambda_2\geqslant\cdots\geqslant\lambda_n$ and $B$ be a principal submatrix of $A$ of order $m$ with eigenvalues $\mu_1\geqslant\mu_2\geqslant\cdots\geqslant\mu_m$. Then the eigenvalues of $B$ interlace the eigenvalues of $A$, that is, $\lambda_i\geqslant\mu_i\geqslant\lambda_{n-m+i}$ for $i=1,\cdots,m$.
\end{lemma}
The clique number of a graph $G$, denoted by $\omega(G)$, is the maximum order of a clique in $G$. The balanced clique number of a signed graph $\dot{G}$, denoted by $\omega_b(\dot{G})$, is the maximum order of a balanced clique in $\dot{G}$.
\begin{lemma}\label{lower}\cite{W} Let $G$ be a graph of order $n$. Then
$$\lambda_1(G)\leqslant\left(1-\frac{1}{\omega(G)}\right)n.$$
\end{lemma}
\begin{lemma}\label{balanced}\cite{WYQ}
Let $\dot{G}$ be a signed graph of order $n$. Then
$$\lambda_1(\dot{G})\leqslant\left(1-\frac{1}{\omega_b(\dot{G})}\right)n.$$
\end{lemma}
\begin{lemma}\cite{S}\label{spanning}
Every signed graph $\dot{G}$ contains a balanced spanning subgraph, say $\dot{H}$, which satisfies $\lambda_1(\dot{G})\leqslant\lambda_1(\dot{H})$.
\end{lemma}
\begin{remark}\cite{WHL}\label{spanning2}
There is a switching equivalent signed graph $\dot{G}_U$ such that an eigenvector $\bm{x}$ of $A(\dot{G}_{U})$ corresponding to $\lambda_1(\dot{G})$ is non-negative. By the proof of \cite[Theorem 3.1]{S}, the balanced spanning subgraph $\dot{H}$ in Lemma \ref{spanning} may be obtained from $\dot{G}_U$ by removing all negative edges.
\end{remark}
\begin{lemma}\label{unbalanced}
If $\dot{G}$ is a connected signed graph, then $\lambda_1(\dot{G})\leqslant\lambda_1(G)$. Moreover, the equality holds if and only if $\dot{G}$ is balanced.
\end{lemma}
\begin{proof}
Let $\dot{G}_U$ be the signed graph defined in Remark \ref{spanning2}, and let $\dot{H}$ be the spanning subgraph of $\dot{G}_{U}$ obtained by removing all negative edges, so that $\lambda_1(\dot{G})\leqslant\lambda_1(\dot{H})$. Then $A(\dot{H})$ is a nonnegative matrix, $A(G)$ is a nonnegative irreducible matrix, and $\dot{H}$ is a subgraph of $G$. By the Perron-Frobenius Theorem, we know that $\lambda_1(\dot{H})\leqslant \lambda_1(G)$. So, $\lambda_1(\dot{G})\leqslant\lambda_1(G)$.
If $\lambda_1(\dot{G})=\lambda_1(G)$, then $\lambda_1(\dot{H})= \lambda_1(G)$. By Perron-Frobenius Theorem, we know that $A(\dot{H})=A(G)$ and then $\dot{H}=G$ and $\dot{G}_U=(G,+)$, so $\dot{G}$ is balanced. If $\dot{G}$ is balanced, then $\dot{G}\sim(G,+)$, and then $\lambda_1(\dot{G})=\lambda_1(G)$.
\end{proof}
Recall that
$$\mathcal{G}=\big\{\dot{G_1}(a,b),\, \dot{G_1'}(1,n-4), \, \dot{G_2}(c,d), \, \dot{G_3}(1,n-5), \, \dot{G_4}(1,n-5), \, \dot{G_5}(1,n-5)\big\},$$
where $a+b=n-3$, $a$, $b\geqslant 0$, and $c+d=n-4$, $c$, $d\geqslant 1$.
\begin{lemma}\label{largest}
For any signed graph $\dot{G}$ in $\mathcal{G}$, we have $\lambda_1(\dot{G})\leqslant n-2$, with equality holding if and only if $\dot{G}\sim \dot{G_1}(0,n-3)$.
\end{lemma}
\begin{proof}[\rm{\textbf{Proof.}}]
We will complete the proof by showing the following five claims. Firstly, we claim that $\lambda_1(\dot{G_1}(0,n-3))=n-2$.
\begin{claim}
$\lambda_1(\dot{G_1}(0,n-3))=n-2$.
\end{claim}
\begin{proof}[\rm{\textbf{Proof of Claim 1.}}]
For the signed graph $\dot{G_1}(0,n-3)$, we give a vertex partition with $V_1=\{u\}$, $V_2=\{v\}$, $V_3=\{w\}$ and $V_4=V(\dot{G})\setminus\{u,v,w\}$. Then the adjacency matrix $A(\dot{G_1}(0,n-3))$ and its corresponding equitable quotient matrix $Q_1$ are as follows:
$$A(\dot{G_1}(0,n-3))={\footnotesize\begin{bmatrix}
0 & -1 & 1 & \bm{0^T} \\
-1 & 0 & 1 & \bm{j^T_{n-3}} \\
1 & 1 & 0 & \bm{j^T_{n-3}} \\
\bm{0} & \bm{j_{n-3}} & \bm{j_{n-3}} & (J-I)_{n-3}\\
\end{bmatrix}}\ \text{and}\
Q_1={\footnotesize\begin{bmatrix}
0 & -1 & 1 & 0 \\
-1 & 0 & 1 & n-3 \\
1 & 1 & 0 & n-3 \\
0 & 1 & 1 & n-4 \\
\end{bmatrix}.}
$$
By Lemma \ref{equitable} (1), the eigenvalues of $Q_1$ are also the eigenvalues of $A(\dot{G_1}(0,n-3))$. The characteristic polynomial of $Q_1$ is
\begin{align*}
P_{Q_1}(x)=(x-n+2)(x-1)(x+1)(x+2).
\end{align*}
Hence, $\lambda_1(Q_1)=n-2$. Adding suitable scalar multiples of the all-one block $J$ to the blocks of $A(\dot{G_1}(0,n-3))$, the matrix becomes
$$A_1={\footnotesize\begin{bmatrix}
0 & 0 & 0 & \bm{0^T} \\
0 & 0 & 0 & \bm{0^T} \\
0 & 0 & 0 & \bm{0^T} \\
\bm{0} & \bm{0} & \bm{0} & -I_{n-3} \\
\end{bmatrix}.}\
$$
By Lemma \ref{equitable} (2), $n-4$ eigenvalues of $\dot{G_1}(0,n-3)$ are contained in the spectrum of $A_1$. Since Spec($A_1$)=$\{-1^{[n-3]},0^{[3]}\}$, we have $\lambda_1(\dot{G_1}(0,n-3))=\lambda_1(Q_1)=n-2$.
\end{proof}
Next, we claim that $\lambda_1(\dot{G})< n-2$ for any signed graph $\dot{G}\in \mathcal{G}\setminus\{ \dot{G_1}(0,n-3)\}$.
\begin{claim}
$\lambda_1(\dot{G_1}(a,b))<n-2,\ 1\leqslant a\leqslant b$.
\end{claim}
\begin{proof}[\rm{\textbf{Proof of Claim 2.}}]
We prove this claim by showing
$$\lambda_1(\dot{G_1}(\lfloor\frac{n-3}{2}\rfloor,\lceil\frac{n-3}{2}\rceil))<\cdots<\lambda_1(\dot{G_1}(1,n-4))<n-2.$$
Partition the vertex set of $\dot{G_1}(a,b)$ as $V_1=\{u\}$, $V_2=\{v\}$, $V_3=\{w\}$, $V_4=N(u)\setminus\{v,w\}$ and $V_5=N(v)\setminus\{u,w\}$. Then the adjacency matrix $A(\dot{G_1}(a,b))$ and its corresponding equitable quotient matrix $Q_2(a,b)$ are as follows:
$$A(\dot{G_1}(a,b))={\footnotesize\begin{bmatrix}
0 & -1 & 1 & \bm{j^T_{a}} & \bm{0^T} \\
-1 & 0 & 1 & \bm{0^T} & \bm{j^T_{b}} \\
1 & 1 & 0 & \bm{j^T_{a}} & \bm{j^T_{b}} \\
\bm{j_{a}} & \bm{0} & \bm{j_{a}} & (J-I)_{a} & J_{b} \\
\bm{0} & \bm{j_{b}} & \bm{j_{b}} & J_{a} & (J-I)_{b} \\
\end{bmatrix}}\ \text{and}\
Q_2(a,b)={\footnotesize\begin{bmatrix}
0 & -1 & 1 & a & 0 \\
-1 & 0 & 1 & 0 & b \\
1 & 1 & 0 & a & b \\
1 & 0 & 1 & a-1 & b \\
0 & 1 & 1 & a & b-1 \\
\end{bmatrix}.}
$$
By Lemma \ref{equitable} (1), the eigenvalues of $Q_2(a,b)$ are also the eigenvalues of $A(\dot{G_1}(a,b))$. The characteristic polynomial of $Q_2(a,b)$ is
\begin{align}\label{Q2}
P_{Q_2(a,b)}(x,a,b)=&x^5-(a+b-2)x^4-(3a+3b+2)x^3+(2ab-a-b-4)x^2\notag\\
&+(5ab+3a+3b)x+2ab+2a+2b+2.
\end{align}
Adding suitable scalar multiples of the all-one block $J$ to the blocks of $A(\dot{G_1}(a,b))$, the matrix becomes
$$A_2={\footnotesize\begin{bmatrix}
0 & 0 & 0 & \bm{0^T} & \bm{0^T} \\
0 & 0 & 0 & \bm{0^T} & \bm{0^T} \\
0 & 0 & 0 & \bm{0^T} & \bm{0^T} \\
\bm{0} & \bm{0} & \bm{0} & -I_{a} & 0 \\
\bm{0} & \bm{0} & \bm{0} & 0 & -I_{b} \\
\end{bmatrix}.}\
$$
By Lemma \ref{equitable} (2), $n-5$ eigenvalues of $\dot{G_1}(a,b)$ are contained in the spectrum of $A_2$. Since $\lambda_1(Q_2(a,b))>0$ and Spec($A_2$)=$\{-1^{[n-3]},0^{[3]}\}$, we have $\lambda_1(\dot{G_1}(a,b))=\lambda_1(Q_2(a,b))$.
Noting that
$$P_{Q_2(a,b)}(x,a,b)-P_{Q_2(a-1,b+1)}(x,a-1,b+1)=(b-a+1)(2x+1)(x+2),$$
then $P_{Q_2(a,b)}(x,a,b)>P_{Q_2(a-1,b+1)}(x,a-1,b+1)$ when $x>-\frac{1}{2}$. Hence, $\lambda_1(Q_2(a,b))<\lambda_1(Q_2(a-1,b+1))$. Thus, $\lambda_1(\dot{G_1}(a,b))<\lambda_1(\dot{G_1}(a-1,b+1))$, and then
$$\lambda_1(\dot{G_1}(\lfloor\frac{n-3}{2}\rfloor,\lceil\frac{n-3}{2}\rceil))<\cdots<\lambda_1(\dot{G_1}(1,n-4)).$$
Now we will prove that $\lambda_1(\dot{G_1}(1,n-4))<n-2$. By (\ref{Q2}), we have
\begin{align*}
P_{Q_2(1,n-4)}(x,1,n-4)&=(x+2)g_2(x),
\end{align*}
where $g_2(x)=x^4-(n-3)x^3-(n-1)x^2+(3n-11)x+2n-6$. Note that, for $n\geqslant 7$,
$$g_2(n-2)=2n^2-11n+12>0.$$
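For completeness, the evaluation of $g_2(n-2)$ can be carried out by grouping consecutive terms (a routine check, spelled out here):

```latex
\begin{aligned}
g_2(n-2)&=(n-2)^4-(n-3)(n-2)^3-(n-1)(n-2)^2+(3n-11)(n-2)+2n-6\\
        &=(n-2)^3\bigl[(n-2)-(n-3)\bigr]-(n-1)(n-2)^2+(3n-11)(n-2)+2n-6\\
        &=(n-2)^2\bigl[(n-2)-(n-1)\bigr]+(3n-11)(n-2)+2n-6\\
        &=-(n^2-4n+4)+(3n^2-17n+22)+2n-6\\
        &=2n^2-11n+12,
\end{aligned}
```

which is positive for all $n\geqslant 7$, since the larger root of $2n^2-11n+12$ is $4$.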
To complete the proof, it suffices to prove that $\lambda_2(\dot{G_1}(1,n-4))<n-2$. In fact, by Lemmas \ref{interlacing} and \ref{balanced}, we have
\begin{align*}
\lambda_2(\dot{G_1}(1,n-4))&\leqslant \lambda_1(\dot{G_1}(1,n-4)-v)\\
&\leqslant\left(1-\frac{1}{\omega_b(\dot{G_1}(1,n-4)-v)}\right)(n-1)\\
&=\frac{(n-1)(n-3)}{n-2}<n-2.
\end{align*}
Hence, $\lambda_1(\dot{G_1}(1,n-4))<n-2$.
\end{proof}
\begin{claim}
$\lambda_1(\dot{G_1'}(1,n-4))<n-2.$
\end{claim}
\begin{proof}[\rm{\textbf{Proof of Claim 3.}}]
Partition the vertex set of $\dot{G_1'}(1,n-4)$ as $V_1=\{u\}$, $V_2=\{v\}$, $V_3=\{w\}$, $V_4=\{u_1\}$ and $V_5=N(v)\setminus\{u,w\}$. The corresponding equitable quotient matrix $Q_3$ of $A(\dot{G_1'}(1,n-4))$ is as follows:
$$Q_3={\footnotesize\begin{bmatrix}
0 & -1 & 1 & -1 & 0 \\
-1 & 0 & 1 & 0 & n-4 \\
1 & 1 & 0 & 1 & n-4 \\
-1 & 0 & 1 & 0 & n-4 \\
0 & 1 & 1 & 1 & n-5 \\
\end{bmatrix}.}
$$
Furthermore, by Lemma \ref{equitable}, we know that $\lambda_1(\dot{G_1'}(1,n-4))=\lambda_1(Q_3)$. The characteristic polynomial of $Q_3$ is $P_{Q_3}(x)=xg_3(x),$
where $g_3(x)=x^4 + (5 - n)x^3 + (7 - 3n)x^2 + (n - 5)x + (4n - 12)$.
Note that, for $n\geqslant 7$,
$$g_3(n-2)=2n^2-7n+2>0.$$
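As with $g_2$, the evaluation of $g_3(n-2)$ follows by grouping consecutive terms (a routine check, spelled out here):

```latex
\begin{aligned}
g_3(n-2)&=(n-2)^4+(5-n)(n-2)^3+(7-3n)(n-2)^2+(n-5)(n-2)+4n-12\\
        &=(n-2)^3\bigl[(n-2)+(5-n)\bigr]+(7-3n)(n-2)^2+(n-5)(n-2)+4n-12\\
        &=(n-2)^2\bigl[3(n-2)+(7-3n)\bigr]+(n-5)(n-2)+4n-12\\
        &=(n^2-4n+4)+(n^2-7n+10)+4n-12\\
        &=2n^2-7n+2,
\end{aligned}
```

which is positive for all $n\geqslant 7$.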
To complete the proof, it suffices to prove that $\lambda_2(\dot{G_1'}(1,n-4))<n-2$. In fact, by Lemmas \ref{interlacing} and \ref{balanced}, we have
\begin{align*}
\lambda_2(\dot{G_1'}(1,n-4))&\leqslant \lambda_1(\dot{G_1'}(1,n-4)-v)\\
&\leqslant\left(1-\frac{1}{\omega_b(\dot{G_1'}(1,n-4)-v)}\right)(n-1)\\
&=\frac{(n-1)(n-3)}{n-2}<n-2.
\end{align*}
Hence, $\lambda_1(\dot{G_1'}(1,n-4))<n-2$.
\end{proof}
\begin{claim}
$\lambda_1(\dot{G_2}(c,d))<n-2,\ 1\leqslant c\leqslant d$.
\end{claim}
\begin{proof}[\rm{\textbf{Proof of Claim 4.}}]
We consider the underlying graph $G_2(c,d)$ of $\dot{G_2}(c,d)$ and prove this claim by showing
$$\lambda_1(G_2(\lfloor\frac{n-4}{2}\rfloor,\lceil\frac{n-4}{2}\rceil))<\cdots<\lambda_1(G_2(1,n-5))<n-2.$$
Partition the vertex set of $G_2(c,d)$ as $V_1=\{u\}$, $V_2=\{v\}$, $V_3=\{w,w_1\}$, $V_4=N(u)\setminus\{v,w,w_1\}$ and $V_5=N(v)\setminus\{u,w,w_1\}$. Then the corresponding equitable quotient matrix $Q_4(c,d)$ of $A(G_2(c,d))$ is
$$Q_4(c,d)={\footnotesize\begin{bmatrix}
0 & 1 & 2 & c & 0 \\
1 & 0 & 2 & 0 & d \\
1 & 1 & 0 & c & d \\
1 & 0 & 2 & c-1 & d \\
0 & 1 & 2 & c & d-1 \\
\end{bmatrix}.}$$
Since $A(G_2(c,d))$ is nonnegative and irreducible, we have $\lambda_1(A(G_2(c,d)))=\lambda_1(Q_4(c,d))$ by Lemma \ref{equitable}. The characteristic polynomial of $Q_4(c,d)$ is
\begin{align}\label{Q3}
P_{Q_4(c,d)}(x,c,d)=&x^5+(2-c-d)x^4-(4c+4d+4)x^3+(2cd-2c-2d-14)x^2\notag\\
&+(3cd+5c+5d-13)x+4c+4d-4cd-4.
\end{align}
Noting that
$$P_{Q_4(c,d)}(x,c,d)-P_{Q_4(c-1,d+1)}(x,c-1,d+1)=(d-c+1)(2x^2+3x-4),$$
then $P_{Q_4(c-1,d+1)}(x,c-1,d+1)<P_{Q_4(c,d)}(x,c,d)$ when $x> 1$. Hence, $\lambda_1(Q_4(c,d))<\lambda_1(Q_4(c-1,d+1))$. Thus, $\lambda_1(G_2(c,d))<\lambda_1(G_2(c-1,d+1))$ and,
$$\lambda_1(G_2(\lfloor\frac{n-4}{2}\rfloor,\lceil\frac{n-4}{2}\rceil))<\cdots<\lambda_1(G_2(1,n-5)).$$
Noting that, by ($\ref{Q3}$)
\begin{align*}
P_{Q_4(1,n-5)}(x,1,n-5)=x^5+(6-n)x^4+(12-4n)x^3-16x^2+(8n-48)x,
\end{align*}
then, for $n\geqslant 7$, we have
$$P_{Q_4(1,n-5)}(n-2,1,n-5)=4n(n-2)(n-6)>0.$$
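The factorization at $x=n-2$ can be checked step by step, collecting one power of $x$ at a time (a routine verification, spelled out here):

```latex
% Substituting x = n-2, so that x+6-n = 4, 4x+12-4n = 4, and x-4 = n-6:
\begin{aligned}
x^5+(6-n)x^4        &= x^4(x+6-n)       = 4x^4,\\
4x^4+(12-4n)x^3     &= x^3(4x+12-4n)    = 4x^3,\\
4x^3-16x^2          &= 4x^2(x-4)        = 4(n-6)x^2,\\
4(n-6)x^2+(8n-48)x  &= 4(n-6)x(x+2)     = 4n(n-2)(n-6).
\end{aligned}
```

This is positive precisely when $n>6$, which is why the hypothesis $n\geqslant 7$ is invoked.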
To complete the proof, it suffices to prove that $\lambda_2(G_2(1,n-5))<n-2$. In fact, by Lemmas \ref{interlacing} and \ref{lower}, we have
\begin{align*}
\lambda_2(G_2(1,n-5))&\leqslant \lambda_1(G_2(1,n-5)-v)\\
&\leqslant\left(1-\frac{1}{\omega(G_2(1,n-5)-v)}\right)(n-1)\\
&=\frac{(n-1)(n-4)}{n-3}<n-2.
\end{align*}
Therefore, we have
$\lambda_1(G_2(1,n-5))<n-2.$ By Lemma \ref{unbalanced}, we have $\lambda_1(\dot{G_2}(c,d))<\lambda_1(G_2(c,d))<n-2$.
\end{proof}
\begin{claim}
$\lambda_1(\dot{G_i}(1,n-5))<n-2, \ i=3,4,5.$
\end{claim}
\begin{proof}[\rm{\textbf{Proof of Claim 5.}}]
Write $G_i(1,n-5)$ for the underlying graph of $\dot{G_i}(1,n-5)$, $i=3,4,5$. Note that $G_2(1,n-5)\cong G_i(1,n-5)$ for $i=3,4,5$. Hence, by Lemma \ref{unbalanced}, we have $\lambda_1(\dot{G_i}(1,n-5))<\lambda_1(G_i(1,n-5))=\lambda_1(G_2(1,n-5))<n-2$.
\end{proof}
\end{proof}
\section{A proof of Tur$\mathbf{\acute{a}}$n number of $\mathcal{K}_4^-$}
Let $\dot{G}$ be an unbalanced signed graph of order $n$ ($n\geqslant7$). For any vertex $v$ in $V(\dot{G})$, $N_{\dot{G}}(v)$ (or simply $N(v)$) denotes the set of neighbors of $v$, and $N_{\dot{G}}[v]=N_{\dot{G}}(v)\cup\{v\}$ (or $N[v]$). For $U\subseteq V(G)$, let $\dot{G}[U]$ denote the subgraph induced by $U$.
Let $\dot{G}$ be a $\mathcal{K}_4^-$-free unbalanced signed graph with the maximum number of edges. In fact, $\dot{G}$ is connected: otherwise, for two vertices $u$ and $v$ in distinct components, adding the edge $uv$ yields $\dot{G}+uv$, a $\mathcal{K}_4^-$-free unbalanced signed graph with more edges than $\dot{G}$, a contradiction.
\begin{proof}[\rm{\textbf{Proof of Theorem \ref{edge}.}}]
Let $\dot{G}$ be a $\mathcal{K}_4^-$-free unbalanced signed graph with the maximum number of edges. Then $\dot{G}$ contains at least one negative cycle; assume the smallest length of a negative cycle is $\ell$. Since each signed graph in $\mathcal{G}$ is $\mathcal{K}_4^-$-free and unbalanced, from (\ref{n-3}) we have
\begin{equation}\label{medge}
e(\dot{G})\geqslant \frac{n(n-1)}{2}-(n-3).
\end{equation}
If $\ell\geqslant 4$, then $\dot{G}$ is $\mathcal{K}_3^-$-free. Noting that $\dot{G}$ is connected, by Theorem \ref{c3edge}, we have
$$e(\dot{G})\leqslant \frac{n(n-1)}{2}-(n-2)<\frac{n(n-1)}{2}-(n-3),$$
which is a contradiction to (\ref{medge}).
Hence, assume that $uvwu$ is an unbalanced $K_3$ with $\sigma(uv)=-1$ and $\sigma(uw)=\sigma(vw)=+1$. Suppose that $e(\dot{G})=\frac{n(n-1)}{2}-q$. Since $e(\dot{G})\geqslant\frac{n(n-1)}{2}-(n-3)$, we have $q\leqslant n-3$. Now the proof will be divided into two cases.
\textbf{Case 1.} $|N(u)\cap N(v)|=1$.
Let $N(u)\setminus\{v,w\}=\{u_1,\cdots,u_{a}\}$ and $N(v)\setminus\{u,w\}=\{v_1,\cdots,v_{b}\}$. Then, $a+b\leqslant n-3$. In this case, we have
\begin{align*}
e(\dot{G})&=e(\dot{G}[V(\dot{G})\setminus\{u,v\}])+e(\dot{G}[\{u,v\},V(\dot{G})\setminus\{u,v\}])+1\\
&\leqslant \frac{(n-2)(n-3)}{2}+(a+1)+(b+1)+1\\
&\leqslant \frac{n(n-1)}{2}-(n-3).
\end{align*}
Hence, by (\ref{medge}) we have $e(\dot{G})=\frac{n(n-1)}{2}-(n-3).$
Furthermore, $\dot{G}[V(\dot{G})\setminus\{u,v\}]$ is a clique and $a+b=n-3$, namely each vertex in $V(\dot{G})\setminus\{u,v,w\}$ is adjacent to $u$ or $v$. Since the switching operation preserves the signs of cycles, any $\dot{G}_U$ is still $\mathcal{K}^-_4$-free and unbalanced. We may suppose $\sigma(uu_i)=+1$ for all $1\leqslant i\leqslant a$; otherwise, we perform a switching operation at such a $u_i$. Similarly, suppose $\sigma(vv_j)=+1$ for all $1\leqslant j\leqslant b$.
Without loss of generality, assume that $a\leqslant b$. The assumption $n\geqslant 7$ ensures that $b\geqslant 2$. For any two vertices, say $v_i$ and $v_j$, in $N(v)\setminus\{u,w\}$, we have $\dot{G}[\{v,w,v_i,v_j\}]\sim (K_4,+)$. Hence, $\sigma(wv_i)=\sigma(v_iv_j)=+1$ for $1\leqslant i,j\leqslant b$, and then $\dot{G}[N(v)\setminus\{u\}]$ is a clique with all positive edges.
If $a=0$, then $\dot{G}\sim \dot{G_1}(0,n-3)$. If $a=1$, then $\dot{G}[\{w,u_1,v_{i},v_{j}\}]\sim (K_4,+)$ and $\sigma(u_1w)=\sigma(u_1v_{i})$ for $1\leqslant i\leqslant b$. If $\sigma(u_1w)=\sigma(u_1v_{i})=+1$, then $\dot{G}\sim \dot{G_1}(1,n-4)$. If $\sigma(u_1w)=\sigma(u_1v_{i})=-1$, then $\dot{G}\sim \dot{G_1'}(1,n-4)$.
If $a\geqslant 2$, for any two vertices, say $u_{i}$ and $u_{j}$, in $N(u)\setminus\{v,w\}$, we have $\dot{G}[\{u,w,u_{i},u_{j}\}]\sim (K_4,+)$. Hence, $\dot{G}[N(u)\setminus\{v\}]$ is a clique with all positive edges. Noting that $\dot{G}[\{w,u_i,v_j,v_t\}]$ $\sim (K_4,+)$, we have $\sigma(u_i v_j)=+1$ for $1\leqslant i\leqslant a$ and $1\leqslant j\leqslant b$. Therefore, $\dot{G}\sim \dot{G}_1(a, b)$ with $a\geqslant 2$.
Hence, in this case, $\dot{G}=\dot{G_1}(a,b)$ with $a$, $b\geqslant 0$, $a+b=n-3$, or $\dot{G}=\dot{G_1'}(1,n-4)$.
\textbf{Case 2.} $|N(u)\cap N(v)|\geqslant 2$.
Let $N(u)\cap N(v)=\{w,w_1,\cdots,w_k\}$, $N[u]\setminus N[v]=\{u_1,\cdots,u_{c}\}$, and $N[v]\setminus N[u]=\{v_1,\cdots,v_{d}\}$. For any $1\leqslant i\leqslant k$, we claim that $w$ is not adjacent to $w_i$. Otherwise, suppose that $w$ is adjacent to $w_1$. Since $\dot{G}[\{u,v,w\}]$ is an unbalanced $K_3$, we have $\dot{G}[\{u,v,w,w_1\}]$ is an unbalanced $K_4$, which is a contradiction to the assumption that $\dot{G}$ is $\mathcal{K}_4^-$-free and unbalanced.
We further claim that each vertex of $\dot{G}$ is adjacent to the vertex $u$ or $v$. Otherwise suppose there are $r$ vertices in $V(\dot{G})\setminus(N(u)\cup N(v))$. Then $3+c+d+k+r=n$ holds. From the inequality
$$n-3\geqslant q\geqslant c+d+k+2r=n-3+r,$$
we have $r=0$. Furthermore $q=c+d+k=n-3$. Then $\dot{G}[V(\dot{G})\setminus\{u,v,w\}]$ is a clique and $w$ is adjacent to $u_i$ and $v_j$ for $1\leqslant i\leqslant c$ and $1\leqslant j\leqslant d$.
We may suppose $\sigma(uu_i)=\sigma(vv_j)=\sigma(vw_t)=+1$ for $1\leqslant i\leqslant c$, $1\leqslant j\leqslant d$, and $1\leqslant t\leqslant k$. Without loss of generality, assume that $0\leqslant c\leqslant d$. If $c=0$, then we may set $\dot{G}^*=\dot{G}_{\{u\}}$, namely, $\dot{G}^*$ is obtained from $\dot{G}$ by a switching operation at the vertex $u$. Then in $\dot{G}^*$, we have $|N(u)\cap N(w)|=1$ and $\sigma(u w)=-1$. Thus $\dot{G}^*$ satisfies the condition of Case 1. Then we have
$$e(\dot{G})=e(\dot{G}^*)=\frac{n(n-1)}{2}-(n-3),$$
and $\dot{G}\sim\dot{G}^*\sim \dot{G_1}(a,b)$ or $ \dot{G_1'}(1,n-4)$. Now suppose $c\geqslant 1$, and we will distinguish two subcases.
\textbf{Subcase 2.1.} $|N(u)\cap N(v)|=2$.
Since $k=1$ and $n\geqslant 7$, we have $d\geqslant 2$. For any two vertices, say $v_i$ and $v_j$, in $N[v]\setminus N[u]$, we have $\dot{G}[\{v,w,v_i,v_j\}]\sim (K_4,+)$. Hence, $\sigma(wv_i)=+1$ for $1\leqslant i\leqslant d$ and $\dot{G}[N[v]\setminus N[u]]$ is a clique with all positive edges. Similarly, we have $\sigma(w_1v_i)=+1$ for $1\leqslant i\leqslant d$.
If $c=1$, then $d=n-5$. Since the clique $\dot{G}[\{w,u_1,v_i,v_j\}]\sim (K_4,+)$, we have $\sigma(u_1w)=\sigma(u_1v_i)$, and similarly, we get $\sigma(u_1w_1)=\sigma(u_1v_i)$. Thus $\sigma(u_1w)=\sigma(u_1w_1)=\sigma(u_1v_i)$ for $1\leqslant i\leqslant n-5$. Suppose $\sigma(uw_1)=+1$. If $\sigma(u_1w)=\sigma(u_1w_1)=\sigma(u_1v_i)=+1$, then $\dot{G}\sim\dot{G_2}(1,n-5)$. If $\sigma(u_1w)=\sigma(u_1w_1)=\sigma(u_1v_i)=-1$, then we do switching operation at $\{u_1\}$ and $\dot{G}\sim\dot{G_3}(1,n-5)$. Suppose $\sigma(uw_1)=-1$. If $\sigma(u_1w)=\sigma(u_1w_1)=\sigma(u_1v_i)=+1$, then $\dot{G}\sim\dot{G_4}(1,n-5)$. If $\sigma(u_1w)=\sigma(u_1w_1)=\sigma(u_1v_i)=-1$, then we do switching operation at $\{u_1\}$ and $\dot{G}\sim\dot{G_5}(1,n-5)$.
Now suppose $c\geqslant 2$. For any two vertices, say $u_i$ and $u_j$, in $N[u]\setminus N[v]$, then $\dot{G}[\{u,w,u_i,u_j\}]$ $\sim (K_4,+)$. Hence, $\sigma(wu_i)=+1$ for $1\leqslant i\leqslant c$, and then $\dot{G}[N[u]\setminus N[v]]$ is a clique with all positive edges. Noting that $\dot{G}[\{w,u_i,v_j,v_t\}]\sim(K_4,+)$, then $\sigma(u_iv_j)=+1$ for $1\leqslant i\leqslant c$ and $1\leqslant j\leqslant d$. By the fact that the cliques $\dot{G}[\{w_1,u_i,v_j,v_t\}]\sim (K_4,+)$ and $\dot{G}[\{u,w_1,u_i,u_j\}]\sim (K_4,+)$, we have $\sigma(w_1u_i)=+1$ for $1\leqslant i\leqslant c$ and $\sigma(uw_1)=+1$, respectively. Therefore, $\dot{G}\sim \dot{G_2}(c,d)$ in this subcase.
\textbf{Subcase 2.2.} $|N(u)\cap N(v)|\geqslant3$.
If $c=1$, then we may set $\dot{G}^*=\dot{G}_{\{u\}}$, namely, $\dot{G}^*$ is obtained from $\dot{G}$ by a switching operation at the vertex $u$. Then in $\dot{G}^*$, we have $|N(u)\cap N(w)|=2$, $\sigma(u w)=-1$, $\{w_1,w_2,\cdots,w_k\}=N[u]\setminus N[w]$, and $\{v_1,\cdots,v_d\}=N[w]\setminus N[u]$. Thus $\dot{G}^*$ satisfies the conditions of Subcase 2.1. Hence, $\dot{G}\sim\dot{G^*}\sim \dot{G_2}(k,d)$.
Now suppose $c\geqslant 2$. The fact $q=n-3$ ensures that $\dot{G}[\{w_1,\cdots,w_k\}]$ is a clique. Noting that $\sigma(vw_t)=+1$, we have $\sigma(uw_t)=-1$ for $1\leqslant t\leqslant k$. Otherwise, if $\sigma(uw_1)=+1$, then $\dot{G}[\{u,v,w_1,w_2\}]$ is an unbalanced $K_4$, which is a contradiction. Furthermore, $\dot{G}[\{w_1,\cdots,w_k\}]$ is a clique with all positive edges.
Since $\dot{G}[\{v,w,v_i,v_j\}]\sim (K_4,+)$, we have $\sigma(w v_i)=+1$ and $\sigma(v_jv_t)=+1$ for $1\leqslant i,j,t\leqslant d$. Similarly, we have $\sigma(wu_i)=+1$ and $\sigma(u_ju_t)=+1$ for $1\leqslant i,j,t\leqslant c$. Since $\dot{G}[\{w,u_i,v_j,v_t\}]\sim(K_4,+)$, we have $\sigma(u_i v_j)=+1$ for $1\leqslant i\leqslant c$ and $1\leqslant j\leqslant d$. Considering the cliques $\dot{G}[\{u,u_i,w_j,w_t\}]$ and $\dot{G}[\{v,v_i,w_j,w_t\}]$, we have $\sigma(v_i w_j)=+1$ for $1\leqslant i\leqslant d$ and $1\leqslant j\leqslant k$, and $\sigma(u_i w_j)=-1$ for $1\leqslant i\leqslant c$ and $1\leqslant j\leqslant k$. However, $\dot{G}[\{w_1,w_2,u_1,v_1\}]$ is then an unbalanced $K_4$, a contradiction.
\end{proof}
\section{A proof of spectral Tur$\mathbf{\acute{a}}$n number of $\mathcal{K}_4^-$}
Let $\dot{G}$ be a signed graph of order $n$. By the table of the spectra of signed graphs with at most six vertices \cite{BCST}, we can check that Theorem \ref{spectrum} is true for $n\leqslant 6$. Therefore, assume that $n\geqslant 7$. The following celebrated upper bound on $\rho(G)$ is crucial for our proof.
\begin{theorem}\cite{HSF, N}\label{delta}
Let $G$ be a graph of order $n$ with the minimum degree $\delta=\delta(G)$ and $e=e(G)$. Then
$$\rho(G)\leqslant\frac{\delta-1+\sqrt{8e-4\delta n+(\delta+1)^2}}{2}.$$
\end{theorem}
The negation of $\dot{G}$ (denoted by $-\dot{G}$) is obtained by reversing the sign of every edge in $\dot{G}$. Obviously, the eigenvalues of $-\dot{G}$ are the negatives of the eigenvalues of $\dot{G}$.
\begin{proof}[\rm{\textbf{Proof of Theorem \ref{spectrum}.}}]
Let $\dot{G}=(G,\sigma)$ be a $\mathcal{K}_4^-$-free unbalanced signed graph with maximum spectral radius. Since $\dot{G_1}(0,n-3)$ is a $\mathcal{K}_4^-$-free unbalanced signed graph, by Lemma \ref{largest},
\begin{equation}\label{n-2}
\rho(\dot{G})\geqslant \rho(\dot{G_1}(0,n-3))=n-2.
\end{equation}
We claim that $\rho(\dot{G})=\lambda_1(\dot{G})$. Otherwise $\rho(\dot{G})=\max\{\lambda_1(\dot{G}),-\lambda_n(\dot{G})\}=-\lambda_n(\dot{G})$. Assume that $\dot{G_1}=-\dot{G}$. Hence, $\lambda_1(\dot{G_1})=-\lambda_n(\dot{G})$. Since $\dot{G}$ is $\mathcal{K}_4^-$-free, we have $\omega_b(\dot{G_1})\leqslant 3$. By Lemma \ref{balanced}, for $n\geqslant 7$, we have
$$\rho(\dot{G})=-\lambda_n(\dot{G})=\lambda_1(\dot{G_1})\leqslant\left(1-\frac{1}{\omega_b(\dot{G_1})}\right)n\leqslant\frac{2}{3}n<n-2,$$
which is a contradiction to (\ref{n-2}).
We claim that $\dot{G}$ is connected. Let $\bm{x}=(x_1,x_2,\cdots,x_n)^T$ be a unit eigenvector of $A(\dot{G})$ corresponding to $\lambda_1(\dot{G})$. Let $u$ and $v$ be two vertices belonging to distinct components. We construct a signed graph $\dot{G_2}=(G+uv,\sigma_2)$ with $\sigma_2(e)=\sigma(e)$ for $e\in E(\dot{G})$. If $x_ux_v\geqslant 0$ (resp. $x_ux_v< 0$), then we take $\sigma_2(uv)=+1$ (resp. $\sigma_2(uv)=-1$). Hence, by the Rayleigh-Ritz Theorem we have
\begin{equation}\label{xu}
\lambda_1(\dot{G}_2)-\lambda_1(\dot{G})\geqslant\bm{x}^TA(\dot{G}_2)\bm{x}-\bm{x}^TA(\dot{G})\bm{x}= 2\sigma_2(uv)x_ux_v\geqslant0.
\end{equation}
Since $\dot{G}_2$ is also $\mathcal{K}_4^-$-free and unbalanced, we have $\lambda_1(\dot{G}_2)\leqslant\lambda_1(\dot{G})$. So, $\lambda_1(\dot{G}_2)=\lambda_1(\dot{G})$. Furthermore, $\lambda_1(\dot{G}_2)=\bm{x}^TA(\dot{G}_2)\bm{x}$ holds, and then $A(\dot{G}_2)\bm{x}=\lambda_1(\dot{G}_2)\bm{x}$. From (\ref{xu}), $x_ux_v=0$ holds. Without loss of generality, suppose $x_u=0$. By $A(\dot{G})\bm{x}=\lambda_1(\dot{G})\bm{x}$ and $A(\dot{G}_2)\bm{x}=\lambda_1(\dot{G}_2)\bm{x}$, we have
$$\lambda_1(\dot{G})x_u=\sum_{w\in N_{\dot{G}}(u)}\sigma(uw)x_w=0,$$
and then
$$\lambda_1(\dot{G}_2)x_u=\sum_{w\in N_{\dot{G_2}}(u)}\sigma(uw)x_w+\sigma_2(uv)x_v=\sigma_2(uv)x_v=0.$$
Hence, $x_v=0$. Applying the same argument to every pair of vertices from distinct components shows that $\bm{x}$ is the zero vector, which contradicts $\|\bm{x}\|=1$.
We claim that $\delta(\dot{G})\geqslant 2$. Otherwise there exists a vertex $u$ with $d_{\dot{G}}(u)=1$, and hence $uv\notin E(\dot{G})$ for some vertex $v$. Then we construct a signed graph $\dot{G_3}=(G+uv,\sigma_3)$ with the sign of $uv$ chosen as above. Noting that $d_{\dot{G_3}}(u)=2$, $\dot{G_3}$ is still $\mathcal{K}_4^-$-free and unbalanced, and we may obtain a contradiction as above.
If $e(\dot{G})\leqslant\frac{n(n-1)}{2}-(n-2)$, for the underlying graph $G$, by Theorem \ref{delta} we have
\begin{align*}
\rho(G)&\leqslant\frac{\delta-1+\sqrt{8e(G)-4\delta n+(\delta+1)^2}}{2}\\
&\leqslant\frac{\delta-1+\sqrt{8(\frac{n(n-1)}{2}-(n-2))-4\delta n+(\delta+1)^2}}{2}\\
&=\frac{\delta-1+\sqrt{4n^2-4(\delta+3)n+\delta^2+2\delta+17}}{2}\\
&\leqslant\frac{\delta-1+\sqrt{4n^2-4(\delta+3)n+\delta^2+6\delta+9}}{2}\qquad\mbox{(using $\delta\geqslant 2$)}\\
&=n-2.
\end{align*}
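The algebraic steps in this display can be checked mechanically. The following sketch (an editorial aside in Python, not part of the proof; the function name \texttt{theorem\_delta\_bound} is ours) verifies the expansion under the square root and confirms that the bound never exceeds $n-2$ once $\delta\geqslant 2$:

```python
import math

def theorem_delta_bound(e, delta, n):
    # The bound of Theorem \ref{delta}: (delta-1 + sqrt(8e - 4*delta*n + (delta+1)^2)) / 2.
    return (delta - 1 + math.sqrt(8*e - 4*delta*n + (delta + 1)**2)) / 2

for n in range(7, 60):
    e = n*(n - 1)//2 - (n - 2)                 # the assumed upper bound on e(G)
    for delta in range(2, n):
        arg = 8*e - 4*delta*n + (delta + 1)**2
        # the expansion used in the second line of the display:
        assert arg == 4*n**2 - 4*(delta + 3)*n + delta**2 + 2*delta + 17
        # enlarging delta^2 + 2*delta + 17 to (delta+3)^2 requires delta >= 2:
        assert delta**2 + 2*delta + 17 <= (delta + 3)**2
        assert arg >= 0
        assert theorem_delta_bound(e, delta, n) <= n - 2 + 1e-9
```

For $\delta=2$ the square root collapses to $2n-5$ and the bound equals $n-2$ exactly, so the estimate is tight there.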
By Lemma \ref{unbalanced}, we have $\rho(\dot{G})=\lambda_1(\dot{G})<\rho(G)\leqslant n-2$, which contradicts (\ref{n-2}). Hence, $e(\dot{G})\geqslant\frac{n(n-1)}{2}-(n-3)$. By Theorem \ref{edge}, we have $e(\dot{G})=\frac{n(n-1)}{2}-(n-3)$ and $\dot{G}$ is switching equivalent to a signed graph in $\mathcal{G}$. Noting that $\rho(\dot{G})\geqslant n-2$, by Lemma \ref{largest} we have $\dot{G}\sim \dot{G_1}(0,n-3)$ and $\rho(\dot{G})= n-2$.
\end{proof}
\textbf{Conflicts of Interest:} The authors declare no conflict of interest.
\end{document}
\begin{document}
\centerline{\huge \bf Multiple Lattice Tilings in Euclidean Spaces}
\centerline{Dedicated to Professor Dr. Christian Buchta on the occasion of his 60th birthday}
\centerline{\large\bf Qi Yang and Chuanming Zong}
\centerline{\begin{minipage}{12.8cm}
{\bf Abstract.} In 1885, Fedorov discovered that a convex domain can form a lattice tiling of the Euclidean plane if and only if it is a parallelogram or a centrally symmetric hexagon. This paper proves the following results: {\it Besides parallelograms and centrally symmetric hexagons, there is no other convex domain which can form a two-, three- or four-fold lattice tiling in the Euclidean plane. However, there are both octagons and decagons which can form five-fold lattice tilings. Whenever $n\ge 3$, there are non-parallelohedral polytopes which can form five-fold lattice tilings in the $n$-dimensional Euclidean space.}
\end{minipage}}
\noindent
{2010 Mathematics Subject Classification: 52C20, 52C22, 05B45, 52C17, 51M20}
\noindent
{\Large\bf 1. Introduction}
\noindent
Planar tiling is an ancient subject in our civilization; it has been considered in the arts by craftsmen since antiquity. Up to now, it is still an active research field in mathematics, and some basic problems remain unsolved. In 1885, Fedorov \cite{fedo} discovered that there are only two types of two-dimensional lattice tiles: {\it parallelograms and centrally symmetric hexagons}. In 1917, Bieberbach suggested that Reinhardt (see \cite{rein}) determine all the two-dimensional convex congruent tiles. However, completing the list turned out to be challenging and dramatic. Over the years, the list has been successively extended by Reinhardt, Kershner, James, Rice, Stein, Mann, McLoud-Mann and Von Derau (see \cite{zong14,mann}), and its completeness has been mistakenly announced several times! In 2017, M. Rao \cite{rao} announced a completeness proof based on computer checks.
The three-dimensional case was also studied in ancient times. More than 2,300 years ago, Aristotle claimed that both identical regular tetrahedra and identical cubes can fill the whole space without gaps. The cube case is obvious! However, the tetrahedron claim is wrong and such a tiling is impossible (see \cite{lazo}).
Let $K$ be a convex body with (relative) interior ${\rm int}(K)$, (relative) boundary $\partial (K)$ and volume ${\rm vol}(K)$, and let $X$ be a discrete set, both in $\mathbb{E}^n$. We call $K+X$ a {\it translative tiling} of $\mathbb{E}^n$ and call $K$ a {\it translative tile} if $K+X=\mathbb{E}^n$ and the translates ${\rm int}(K)+{\bf x}_i$ are pairwise disjoint; in other words, $K+X$ is both a packing and a covering in $\mathbb{E}^n$. In particular, we call $K+\Lambda$ a {\it lattice tiling} of $\mathbb{E}^n$ and call $K$ a {\it lattice tile} if $\Lambda $ is an $n$-dimensional lattice. Apparently, a translative tile must be a convex polytope. Usually, a lattice tile is called a {\it parallelohedron}.
In 1885, Fedorov \cite{fedo} also characterized the three-dimensional lattice tiles: {\it A three-dimensional lattice tile must be a parallelotope, a hexagonal prism, a rhombic dodecahedron, an elongated dodecahedron, or a truncated octahedron.} The situation in higher dimensions turns out to be very complicated. Through the works of Delone \cite{delo}, \v{S}togrin \cite{stog} and Engel \cite{enge}, we know that there are exactly $52$ combinatorially different types of parallelohedra in $\mathbb{E}^4$. A computer classification of the five-dimensional parallelohedra was announced by Dutour Sikiri\'{c}, Garber, Sch\"{u}rmann and Waldmann \cite{dgsw} only in 2015.
Let $\Lambda $ be an $n$-dimensional lattice. The {\it Dirichlet-Voronoi cell} of $\Lambda $ is defined by
$$C=\left\{ {\bf x}: {\bf x}\in \mathbb{E}^n,\ \| {\bf x}, {\bf o}\|\le \| {\bf x}, \Lambda \|\right\},$$
where $\| X, Y\|$ denotes the Euclidean distance between $X$ and $Y$. Clearly, $C+\Lambda $ is a lattice tiling and the Dirichlet-Voronoi cell $C$ is a parallelohedron. In 1908,
Voronoi \cite{voro} made a conjecture that {\it every parallelohedron is a linear transformation image of the Dirichlet-Voronoi cell of a suitable lattice.} In $\mathbb{E}^2$, $\mathbb{E}^3$ and $\mathbb{E}^4$, this conjecture was confirmed by Delone \cite{delo} in 1929. In higher dimensions, it is still open.
To characterize the translative tiles is another fascinating problem. First it was shown by Minkowski \cite{mink} in 1897 that {\it every translative tile must be centrally symmetric}. In 1954, Venkov \cite{venk} proved that {\it every translative tile must be a lattice tile $($parallelohedron$)$} (see \cite{alek} for generalizations). Later, a new proof for this beautiful result was independently discovered by McMullen \cite{mcmu}.
Let $X$ be a discrete multiset in $\mathbb{E}^n$ and let $k$ be a positive integer. We call $K+X$ a {\it $k$-fold translative tiling} of $\mathbb{E}^n$ and call $K$ a {\it $k$-fold translative tile} if every point ${\bf x}\in \mathbb{E}^n$ belongs to at least $k$ translates of $K$ in $K+X$ and every point ${\bf x}\in \mathbb{E}^n$ belongs to at most $k$ translates of ${\rm int}(K)$ in ${\rm int}(K)+X$. In other words, $K+X$ is both a $k$-fold packing and a $k$-fold covering in $\mathbb{E}^n$. In particular, we call $K+\Lambda$ a {\it $k$-fold lattice tiling} of $\mathbb{E}^n$ and call $K$ a {\it $k$-fold lattice tile} if $\Lambda $ is an $n$-dimensional lattice. Apparently, a $k$-fold translative tile must be a convex polytope. In fact, as it was shown by Gravin, Robins and Shiryaev \cite{grs}, {\it a $k$-fold translative tile must be a centrally symmetric polytope with centrally symmetric facets.}
Multiple tilings were first investigated by Furtw\"angler \cite{furt} in 1936 as a generalization of Minkowski's conjecture on cube tilings. Let $C$ denote the $n$-dimensional unit cube. Furtw\"angler made a conjecture that {\it every $k$-fold lattice tiling $C+\Lambda$ has twin cubes. In other words, every multiple lattice tiling $C+\Lambda$ has two cubes sharing a whole facet.} In the same paper, he proved the two- and three-dimensional cases. Unfortunately, when $n\ge 4$, this beautiful conjecture was disproved by Haj\'os \cite{hajo} in 1941. In 1979, Robinson \cite{robi} determined all the integer pairs $\{ n,k\}$ for which Furtw\"angler's conjecture is false. We refer to Zong \cite{zong05,zong06} for an introductory account and a detailed account on this fascinating problem, respectively, and to pages 82--84 of Gruber and Lekkerkerker \cite{grub} for some generalizations.
In 1994, Bolle \cite{boll} proved that {\it every centrally symmetric lattice polygon is a multiple lattice tile}. Let $\Lambda $ denote the two-dimensional integer lattice, and let $D_8$ denote the octagon with vertices $(1,0)$, $(2,0)$, $(3,1)$, $(3,2)$, $(2,3)$, $(1,3)$, $(0,2)$ and $(0,1)$. As a particular example of Bolle's theorem, it was discovered by Gravin, Robins and Shiryaev \cite{grs} that {\it $D_8+\Lambda$ is a seven-fold lattice tiling of $\mathbb{E}^2$.} Apparently, the octagon $D_8$ is not a lattice tile. Based on this example and McMullen's criterion on parallelohedra (see Lemma 3 in Section 3), one can easily deduce that, whenever $n\ge 2$, there is a non-parallelohedral polytope which can form a seven-fold lattice tiling in $\mathbb{E}^n$.
In 2000, Kolountzakis \cite{kolo} proved that, if $D$ is a two-dimensional convex domain which is not a parallelogram and $D+X$ is a multiple tiling in $\mathbb{E}^2$, then $X$ must be a finite union of translated two-dimensional lattices. In 2013, a similar result in $\mathbb{E}^3$ was discovered by Gravin, Kolountzakis, Robins and Shiryaev \cite{gkrs}.
Let $P$ denote an $n$-dimensional centrally symmetric convex polytope, let $\tau (P)$ denote the smallest integer $k$ such that $P$ is a $k$-fold translative tile, and let $\tau^* (P)$ denote the smallest integer $k$ such that $P$ is a $k$-fold lattice tile. For convenience, we define $\tau (P)=\infty$ (or $\tau^*(P)=\infty$) if $P$ cannot form a translative tiling (or a lattice tiling) of any multiplicity. Clearly, for every centrally symmetric convex polytope we have
$$\tau (P)\le \tau^*(P).$$
It is a basic and natural problem to study the distribution of the integers $\tau (P)$ or $\tau^*(P)$, when $P$ runs over all $n$-dimensional polytopes. In particular, is there an $n$-dimensional polytope $P$ satisfying $\tau (P)=2$ or $3$? Is there an $n$-dimensional polytope $P$ satisfying $\tau (P)\not= \tau^*(P)$? Is there a convex domain $D$ satisfying $2\le \tau^*(D)\le 6$?
In this paper, we will prove the following results.
\noindent
{\bf Theorem 1.} {\it If $D$ is a two-dimensional centrally symmetric convex domain which is neither a parallelogram nor a centrally symmetric hexagon, then we have
$$\tau^*(D)\ge 5,$$
where the equality holds when $D$ is a suitable octagon or a suitable decagon.}
\noindent
{\bf Theorem 2.} {\it Whenever $n\ge 3$, there is a non-parallelohedral polytope $P$ such that $P+\mathbb{Z}^n$ is a five-fold lattice tiling of the $n$-dimensional Euclidean space.}
\noindent
{\Large\bf 2. A General Lower Bound and Two Particular Examples}
\noindent
In this section, we will prove Theorem 1. First, let us recall some basic results which will be useful in this paper.
Let $D$ denote a two-dimensional centrally symmetric convex domain and let $\sigma $ be a non-singular affine linear transformation from $\mathbb{E}^2$ to $\mathbb{E}^2$. If $D+X$ is a $k$-fold translative tiling of $\mathbb{E}^2$, then $\sigma (D)+\sigma (X)$ is a $k$-fold translative tiling of $\mathbb{E}^2$ as well. Therefore, we have
$$\tau (D)=\tau (\sigma (D))$$
and
$$\tau^* (D)=\tau^* (\sigma (D)).$$
In particular, without loss of generality, we may assume $\Lambda =\mathbb{Z}^2$ when we study multiple lattice tilings $D+\Lambda $.
Let $\delta_k(D)$ denote the density of the densest $k$-fold lattice packings of $D$ and let $\theta_k(D)$ denote the density of the thinnest $k$-fold lattice coverings of $D$. Clearly, $\delta_1(D)$ is the density $\delta (D)$ of the densest lattice packings of $D$ and $\theta_1(D)$ is the density $\theta (D)$ of the thinnest lattice coverings of $D$. Dumir and Hans-Gill \cite{dumi} and G. Fejes T\'oth \cite{feje} proved the following result.
\noindent
{\bf Lemma 1.} {\it If $k=2,$ $3$ or $4$, then
$$\delta_k(D)=k\cdot \delta (D)$$
holds for every two-dimensional centrally symmetric convex domain $D$.}
In 1994, Bolle \cite{boll} proved the following criterion for the two-dimensional multiple lattice tilings.
\noindent
{\bf Lemma 2.} {\it A convex polygon is a $k$-fold lattice tile for a lattice $\Lambda$ and some positive integer $k$ if and only if the following conditions are satisfied:
\noindent
{\bf 1.} It is centrally symmetric.
\noindent
{\bf 2.} When it is centered at the origin, in the relative interior of each edge $G$ there is a point of ${1\over 2}\Lambda $.
\noindent
{\bf 3.} If the midpoint of $G$ is not in ${1\over 2}\Lambda $, then the edge vector of $G$ is a lattice vector of $\Lambda $.}
Let $P_{2m}$ denote a centrally symmetric convex $2m$-gon centered at the origin, let $\mathcal{P}_{2m}$ denote the set of all such $2m$-gons, and let $\mathcal{D}$ denote the family of all two-dimensional centrally symmetric convex domains. It follows by Fedorov and Venkov's results that
$$\tau (D)=\tau^*(D)=1$$
if and only if $D\in \mathcal{P}_4\cup \mathcal{P}_6$. Then, it would be both important and interesting to determine the values of
$$\min_{D\in \mathcal{D}\setminus \{\mathcal{P}_4\cup \mathcal{P}_6\}}\tau (D)$$
and
$$\min_{D\in \mathcal{D}\setminus \{\mathcal{P}_4\cup \mathcal{P}_6\}}\tau^* (D).\eqno (1)$$
Theorem 1 determines the value of (1).
\noindent
{\bf Proof of Theorem 1.} Let $k$ be a positive integer satisfying $k\le 4$. If $D$ is a two-dimensional centrally symmetric convex domain which can form a $k$-fold lattice tiling in the Euclidean plane, then we have
$$\delta_k(D)=k.$$
By Lemma 1 it follows that
$$\delta (D)={\delta_k(D)\over k}=1$$
and therefore $D$ must be a parallelogram or a centrally symmetric hexagon. In other words, if $D$ is neither a parallelogram nor a centrally symmetric hexagon, then we have
$$\tau^*(D)\ge 5.\eqno(2)$$
We take $\Lambda =\mathbb{Z}^2$. As shown in Figure 1, let $D_8$ denote the octagon with vertices
$$\begin{array}{ll}
{\bf v}_1=\left(-\mbox{${3\over {10}}$}, -2\right),\hspace{1cm} & {\bf v}_2=\left(\mbox{${3\over {10}}$}, -1\right),\\
& \\
{\bf v}_3=\left(\mbox{${7\over {10}}$}, 0\right), & {\bf v}_4=\left(\mbox{${{13}\over {10}}$}, 2\right),\\
&\\
{\bf v}_5=\left(\mbox{${3\over {10}}$}, 2\right), & {\bf v}_6=\left(-\mbox{${3\over {10}}$}, 1\right),\\
&\\
{\bf v}_7=\left(-\mbox{${7\over {10}}$}, 0\right), & {\bf v}_8=\left(-\mbox{${{13}\over {10}}$}, -2\right).
\end{array}$$
It can be easily verified that, with ${\bf v}_9={\bf v}_1$, the midpoints
$${\bf u}_i=\mbox{${1\over 2}$}({\bf v}_i+{\bf v}_{i+1})$$
belong to $\mbox{${1\over 2}$}\Lambda $ for all $i\not= 4, 8$, while the two horizontal edges ${\bf v}_4{\bf v}_5$ and ${\bf v}_8{\bf v}_1$ determine the lattice vector $(1,0)\in \Lambda $ and contain points of $\mbox{${1\over 2}$}\Lambda $ in their relative interiors, so that the conditions of Lemma 2 are satisfied, and
$${\rm vol}(D_8)=5.$$
It follows from Lemma 2 that $D_8+\Lambda$ is a five-fold lattice tiling. Combined with (2), it can be deduced that
$$\tau^*(D_8)=5.$$
Similarly, as shown in Figure 2, let $D_{10}$ denote the decagon with vertices
$$\begin{array}{ll}
{\bf v}_1=\left(-\mbox{${3\over 5}$}, -\mbox{${5\over 4}$}\right),\hspace{1cm} & {\bf v}_2=\left(\mbox{${3\over 5}$}, -\mbox{${3\over 4}$}\right),\\
& \\
{\bf v}_3=\left(\mbox{${7\over 5}$}, -\mbox{${1\over 4}$}\right), & {\bf v}_4=\left(\mbox{${8\over 5}$}, \mbox{${1\over 4}$}\right),\\
&\\
{\bf v}_5=\left(\mbox{${7\over 5}$}, \mbox{${3\over 4}$}\right), & {\bf v}_6=\left(\mbox{${3\over 5}$}, \mbox{${5\over 4}$}\right),\\
&\\
{\bf v}_7=\left(-\mbox{${3\over 5}$}, \mbox{${3\over 4}$}\right), & {\bf v}_8=\left(-\mbox{${7\over 5}$}, \mbox{${1\over 4}$}\right),\\
&\\
{\bf v}_9=\left(-\mbox{${8\over 5}$}, -\mbox{${1\over 4}$}\right), & {\bf v}_{10}=\left(-\mbox{${7\over 5}$}, -\mbox{${3\over 4}$}\right).
\end{array}$$
It can be easily verified that
$${\bf u}_i=\mbox{${1\over 2}$}({\bf v}_i+{\bf v}_{i+1})\in \mbox{${1\over 2}$}\Lambda, \quad i=1, 2, \ldots, 10, $$
where ${\bf v}_{11}={\bf v}_1$, and
$${\rm vol}(D_{10})=5.$$
It follows from Lemma 2 that $D_{10}+\Lambda$ is a five-fold lattice tiling. Combined with (2), it can be deduced that
$$\tau^*(D_{10})=5.$$
Theorem 1 is proved.
{$\Box$}
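Both constructions above can be verified in exact arithmetic. The following sketch (an editorial aside in Python, not part of the proof; the vertex lists are copied from the text) computes the areas by the shoelace formula and checks, edge by edge, that either the midpoint lies in ${1\over 2}\mathbb{Z}^2$ or the edge vector lies in $\mathbb{Z}^2$, as Lemma 2 requires:

```python
from fractions import Fraction as F

def area(P):
    # Exact polygon area via the shoelace formula.
    s = sum(P[i][0]*P[(i+1) % len(P)][1] - P[(i+1) % len(P)][0]*P[i][1]
            for i in range(len(P)))
    return abs(s) / 2

def half_integer(p):
    # Membership in (1/2)Z^2: both coordinates are half-integers.
    return all((2*c).denominator == 1 for c in p)

def bolle_ok(P):
    # Conditions 2 and 3 of Lemma 2 for Lambda = Z^2, edge by edge:
    # the midpoint lies in (1/2)Z^2, or the edge vector lies in Z^2
    # (in which case the relative interior of the edge meets (1/2)Z^2).
    for i in range(len(P)):
        v, w = P[i], P[(i+1) % len(P)]
        mid = ((v[0] + w[0]) / 2, (v[1] + w[1]) / 2)
        vec = (w[0] - v[0], w[1] - v[1])
        if not (half_integer(mid) or all(c.denominator == 1 for c in vec)):
            return False
    return True

D8 = [(F(-3, 10), F(-2)), (F(3, 10), F(-1)), (F(7, 10), F(0)), (F(13, 10), F(2)),
      (F(3, 10), F(2)), (F(-3, 10), F(1)), (F(-7, 10), F(0)), (F(-13, 10), F(-2))]
D10 = [(F(-3, 5), F(-5, 4)), (F(3, 5), F(-3, 4)), (F(7, 5), F(-1, 4)),
       (F(8, 5), F(1, 4)), (F(7, 5), F(3, 4)), (F(3, 5), F(5, 4)),
       (F(-3, 5), F(3, 4)), (F(-7, 5), F(1, 4)), (F(-8, 5), F(-1, 4)),
       (F(-7, 5), F(-3, 4))]

assert area(D8) == 5 and area(D10) == 5
assert bolle_ok(D8) and bolle_ok(D10)
```

Since $\det\mathbb{Z}^2=1$, the area $5$ matches the multiplicity of the two five-fold tilings.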
\noindent
{\Large\bf 3. Some Comparisons and Generalizations}
\noindent
It is interesting to make comparisons with multiple packings and multiple coverings. Let $O$ denote the unit circular disk. It was discovered by Blunden \cite{blun57, blun63} that
$$\delta_k(O)=k\cdot \delta (O)$$
is no longer true when $k\ge 5$, and
$$\theta_k(O)=k\cdot \theta (O)$$
is no longer true when $k\ge 3$. So, the packing case is rather similar to the tiling case, while the covering case is quite different!
On the other hand, for every two-dimensional convex domain $D$ it was proved by Cohn \cite{cohn76}, Bolle \cite{bolle89} and Groemer \cite{groem86} that
$$\lim_{k\to\infty }{{\delta_k(D)}\over k}=\lim_{k\to\infty }{{\theta_k(D)}\over
k}=1.$$
In other words, from the density point of view, when the multiplicity is big there is not much difference among packing, covering and tiling!
Let $P$ denote an $n$-dimensional centrally symmetric convex polytope with centrally symmetric facets and let $V$ denote an $(n-2)$-dimensional face of $P$. We call the collection of all those facets of $P$ which contain a translate of $V$ as a subface a {\it belt} of $P$.
In 1980, P. McMullen \cite{mcmu} proved the following criterion for parallelohedra.
\noindent
{\bf Lemma 3.} {\it A convex body $K$ is a parallelohedron if and only if it is a centrally symmetric polytope with centrally symmetric facets and each belt contains four or six facets.}
\noindent
{\bf Proof of Theorem 2.} For convenience, we write $\mathbb{E}^n=\mathbb{E}^2\times \mathbb{E}^{n-2}$. Let $P_{2m}$ be a centrally symmetric $2m$-gon ($m\ge 4$) such that $P_{2m}+\mathbb{Z}^2$ is a $k$-fold lattice tiling of $\mathbb{E}^2$, let $I^{n-2}$ denote the unit cube $\{ (x_3, x_4, \ldots, x_n):\ |x_i|\le {1\over 2}\}$ in $\mathbb{E}^{n-2}$, and define
$$P=P_{2m}\times I^{n-2}.$$
It is easy to see that $P+\mathbb{Z}^n$ is a $k$-fold lattice tiling of $\mathbb{E}^n$.
Let ${\bf v}_1$, ${\bf v}_2$, $\ldots$, ${\bf v}_{2m}$ be the $2m$ vertices of $P_{2m}$ and let $G_1$, $G_2$, $\ldots $, $G_{2m}$ denote the $2m$ edges of $P_{2m}$, and define
$$V={\bf v}_1\times I^{n-2}$$
and
$$F_i=G_i\times I^{n-2}.$$
Clearly, $\{ F_1, F_2, \ldots , F_{2m}\}$ is a belt of $P$ with $2m$ facets. Therefore, by McMullen's criterion it follows that $P$ is not a parallelohedron in $\mathbb{E}^n$. In particular, the octagon $D_8$ and the decagon $D_{10}$ defined in the proof of Theorem 1 produce non-parallelohedral five-fold lattice tiles $D_8\times I^{n-2}$ and $D_{10}\times I^{n-2}$, respectively.
Theorem 2 is proved.
{$\Box$}
\noindent
{\bf Acknowledgements.} The authors are grateful to Professor Weiping Zhang for calling their attention to Rao's announcement, to Professors Mihalis Kolountzakis, Sinai Robins and G\"unter M. Ziegler for their comments, and to the referee for his revision suggestions. This work is supported by 973 Program 2013CB834201.
\noindent
Qi Yang, School of Mathematical Sciences, Peking University, Beijing 100871, China
\noindent
Corresponding author:
\noindent
Chuanming Zong, Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
\noindent
Email: [email protected]
\end{document}
\begin{document}
\title{Coloring cross-intersecting families}
\begin{abstract}
Intersecting and cross-intersecting families usually appear in extremal combinatorics in the vein of the Erd{\H o}s--Ko--Rado theorem~\cite{erdos1961intersection}.
On the other hand, P.~Erd{\H o}s and L.~Lov{\'a}sz in the noted paper~\cite{EL} posed problems on coloring intersecting families
as a restriction of classical hypergraph coloring problems to a special class of hypergraphs.
This note deals with these coloring problems restated for cross-intersecting families.
\end{abstract}
\section{Introduction}
Intersecting families in extremal combinatorics appeared in~\cite{erdos1961intersection}, and a large branch of extremal combinatorics starts from this paper.
\begin{definition}
An intersecting family is a hypergraph $H = (V, E)$ such that $e \cap f \neq \emptyset$ for every $e, f \in E$.
\end{definition}
Then P.~Erd\H{o}s and L.~Lov{\'a}sz in~\cite{EL} introduced several problems on coloring intersecting families (\textit{cliques} in the original notation), i.\,e.~hypergraphs without a pair of disjoint edges.
Obviously, an intersecting family can have chromatic number 2 or 3 only; the main interest lies in chromatic number 3.
Unfortunately, there is no ``random'' example of such family, so the set of known intersecting families with chromatic number 3 is very poor.
Cross-intersecting families were introduced to study maximal and almost-maximal intersecting families
(the notation appears in~\cite{matsumoto1989exact}).
\begin{definition}
A cross-intersecting family is a hypergraph $H = (V, E = A \cup B)$ such that every $a \in A$ intersects every $b \in B$, and $A$, $B$ are nonempty.
\end{definition}
Also, the Hilton--Milner theorem~\cite{hilton1967some} and the Frankl theorem~\cite{frankl1987erdos} should be noted.
Recently a general approach to mentioned problems was introduced by A. Kupavskii and D. Zakharov~\cite{kupavskii2016regular} (the reader can also see this paper for a survey).
\subsection{The chromatic number}
We are interested in vertex colorings of cross-intersecting families.
Coloring is \textit{proper} if there are no monochromatic edges.
\textit{Chromatic number} is the minimal number of colors that admits a proper coloring.
First, note that a cross-intersecting family could have an arbitrarily large chromatic number.
\begin{example}
Consider an arbitrary integer $r > 1$.
Consider a hypergraph $H_0 = (V_0, E_0)$ with chromatic number $r$.
Put $A := E_0$, $B := \{V_0\}$. Obviously, $H := (V_0, A, B)$ is a cross-intersecting family with chromatic number $r$.
\end{example}
\noindent However, under a natural assumption (note that it holds for any $n$-uniform hypergraph) the chromatic number of a cross-intersecting family is bounded.
\begin{proposition}
Let $H = (V, A, B)$ be a cross-intersecting family.
Suppose that $A$ and $B$ both contain minimal edges of $E$, i.\,e.~there are $a \in A$ and $b \in B$ such that neither $a$ nor $b$ has a proper subedge in $H$.
Then $\chi(H) \leq 4$.
\end{proposition}
\begin{proof}
Let us color $a \cap b$ in color 1, $a \setminus b$ in color 2, $b \setminus a$ in color 3 and all other vertices in color 4.
One can see that the coloring is proper because both $a$ and $b$ have no subedge.
\end{proof}
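The four-coloring in this proof is easy to sanity-check computationally. The sketch below (an editorial aside; the small family $A$, $B$ is a hypothetical illustration, not taken from the text) builds the coloring and verifies that no edge is monochromatic:

```python
from itertools import product

def proper(edges, coloring):
    # A coloring is proper if no edge is monochromatic.
    return all(len({coloring[v] for v in e}) > 1 for e in edges)

# A small cross-intersecting family: every edge of A meets every edge of B,
# and a = A[0], b = B[0] are minimal (no other edge is contained in them).
A = [frozenset({1, 2, 3}), frozenset({1, 4, 5})]
B = [frozenset({1, 2, 4}), frozenset({2, 3, 4, 5})]
assert all(x & y for x, y in product(A, B))

a, b = A[0], B[0]
coloring = {}
for v in set().union(*A, *B):
    if v in a & b:
        coloring[v] = 1      # a ∩ b -> color 1
    elif v in a - b:
        coloring[v] = 2      # a \ b -> color 2
    elif v in b - a:
        coloring[v] = 3      # b \ a -> color 3
    else:
        coloring[v] = 4      # remaining vertices -> color 4

assert proper(A + B, coloring)
```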
It turns out that if there is no pair $e_1, e_2 \in E$ such that $e_1 \subset e_2$, and every edge has size at least 3,
then the cross-intersecting family can have chromatic number 2 or 3 only. Moreover, the following theorem holds.
\begin{theorem}
Let $H = (V, A, B)$ be a cross-intersecting family such that there is no pair $e_1, e_2 \in A \cup B$ with $e_1 \subset e_2$ (i.\,e.~$(V, E)$ is a Sperner system).
Then $\chi(H) \leq 3$ or $V := \{v_1, \dots, v_m, u_1, \dots, u_l\}$; $B := \{ \{v_1, \dots, v_m\}, \{u_1, \dots, u_l\} \}$; $A := \{ \{v_i, u_j\} \mbox{ for all } i, j \}$ (modulo $A$--$B$ symmetry), where $m$, $l \geq 2$.
\label{chi23}
\end{theorem}
\begin{corollary}
Let $H = (V, A, B)$ be an $n$-uniform cross-intersecting family. Then $\chi (H) \leq 3$ or $n=2$ and $H = K_4$.
\label{corchi23}
\end{corollary}
\begin{corollary}
Let $H = (V, A, B)$ be an $n$-uniform cross-intersecting family and $\min (|A|, |B|) \geq 3$. Then $\chi (H) \leq 3$.
\label{cor2}
\end{corollary}
\subsection{Maximal number of edges}
It turns out that the maximal number of edges in a ``nontrivial'' $n$-uniform intersecting family is bounded.
There are two ways to formalize the notion ``nontrivial''. The first one is to say that $\chi(H) \geq 3$ (denote the corresponding maximum by $M(n)$). The second one says that $H$ is nontrivial if and only if $\tau (H) = n$ (denote the corresponding maximum by $r(n)$), where $\tau (H)$ is defined below.
\begin{definition}
Let $H = (V, E)$ be a hypergraph. The \textit{covering number} (also known as \textit{transversal number} or \textit{blocking number}) of $H$ is the smallest integer $\tau (H)$ such that
there is a set $A \subset V$ with $|A| = \tau (H)$ that intersects every $e \in E$.
\end{definition}
\subsubsection{Upper bounds.} Obviously, $M(n) \leq r(n)$. P.~Erd\H{o}s and L.~Lov{\'a}sz proved in~\cite{EL} that $r(n) \leq n^{n}$
(one can find a slightly better bound in~\cite{cherkashin2011maximal}).
The best current upper bound is $r(n) \leq cn^{n-1}$ (see~\cite{arman2017upper}).
Surprisingly, we can prove a very similar statement for cross-intersecting families. Let us introduce a ``nontriviality'' notion for cross-intersecting families.
\begin{definition}
Let us call a cross-intersecting family $H = (V, A, B)$ \textit{critical} if
\begin{itemize}
\item for any edge $a \in A$ and any $v \in a$ there is $b \in B$ such that $a \cap b = \{v\}$;
\item for any edge $b \in B$ and any $v \in b$ there is $a \in A$ such that $a \cap b = \{v\}$.
\end{itemize}
\end{definition}
\noindent Note that if an $n$-uniform intersecting family $H = (V,E)$ has $\tau (H) = n$ then $(V, E, E)$ is a critical cross-intersecting family.
\begin{theorem}
\label{max}
Let $H = (V, A, B)$ be a critical cross-intersecting family. Denote
$$n: = \max_{e \in A \cup B} |e|.$$
Then
$$\max(|A|, |B|) \leq n^n.$$
\end{theorem}
\subsubsection{Lower bounds.} L.~Lov{\'a}sz conjectured that $M(n) = [(e - 1)n!]$ (an example was constructed in~\cite{EL}).
This was disproved by P. Frankl, K. Ota and N. Tokushige~\cite{frankl1996covers}.
They have provided an explicit example of an $n$-uniform hypergraph $H$ with $\tau (H) = n$ and
\begin{equation}
c \left (\frac{n}{2} + 1 \right)^{n-1}
\label{2}
\end{equation}
edges. For cross-intersecting families Example~\ref{nn} shows that Theorem~\ref{max} is tight.
\subsection{The set of the pairwise edge intersection sizes}
\begin{definition}
For a hypergraph $H = (V,E)$ let us consider the set of the sizes of pairwise edge intersections:
$$Q(H) := \{|e_1 \cap e_2|, e_1,e_2 \in E\}.$$
\end{definition}
\noindent Again, P.~Erd\H{o}s and L.~Lov{\'a}sz showed that for an $n$-uniform intersecting family $H$ one has
$3 \leq |Q(H)|$ for sufficiently large $n$, but no example with $|Q(H)| < \frac{n-1}{2}$ is known. For cross-intersecting families there is a simple example with $|Q(H)| = 4$.
\begin{theorem}
\label{edgint}
There is an $n$-uniform cross-intersecting family $H$ with $Q(H) = \{0, 1, 2, n-1\}$ and $\chi (H) = 3$.
\end{theorem}
\noindent See Example~\ref{exex} for the proof.
\subsection{Examples}
Unlike the case of intersecting families, there is a percolation-based method of constructing a large set of (critical) cross-intersecting families with chromatic number 3.
This method makes it possible to construct a cross-intersecting family from a random planar triangulation.
\begin{example}
Consider an arbitrary planar triangulation whose external face $F$ has at least 4 vertices.
Split $F$ into 4 disjoint connected parts $F_1$, $F_2$, $F_3$, $F_4$.
Let $A_0$ be the set of collections of vertices that form a simple path from $F_1$ to $F_3$; $B_0$ be the set of collections of vertices that form a simple path from $F_2$ to $F_4$.
Finally, let $A \subset A_0$, $B \subset B_0$ be the sets of all minimal (with respect to inclusion) elements; put $H = (V, A, B)$.
Obviously, $\chi (H) = 3$ (one may see that no example with chromatic number 4 could be obtained from a planar triangulation).
\end{example}
For a given $n > 2$ there exists an $n$-uniform cross-intersecting family (not critical) with chromatic number 3 and an arbitrarily large number of edges.
\begin{example}
Let $m$ be an arbitrary positive integer.
Put $V_1 := \{v_1, \dots, v_{2n-1} \}$ and $V(H) := V_1 \cup \{u_1, \dots, u_m \}$;
$E(H) := A_1 \cup A_2 \cup B_1 \cup B_2$, where $A_1 \cup B_1$ is the set of all $n$-subsets of $V_1$,
$A_1$ contains the edges intersecting $\{v_1, \dots, v_{n-1}\}$, $B_1$ contains the edges intersecting $\{v_1, v_n, \dots, v_{2n-3} \}$
(so $A_1 \cap B_1 \neq \emptyset$),
$$A_2 := \{ \{v_1, \dots, v_{n-1}, u_i \} \ \text {for every} \ i\},$$
$$B_2 := \{ \{v_1, v_n, \dots, v_{2n-3}, u_i \} \ \text{for every} \ i\}.$$
Note that $H_1 := (V_1, A_1 \cup B_1)$ has chromatic number 3, so $\chi (H) \geq 3$, hence by Corollary~\ref{corchi23} we have $\chi (H) = 3$.
Let us show that $H$ is a cross-intersecting family.
Clearly, the edges of $A_1$ and $B_1$ are $n$-subsets of the $(2n-1)$-element set $V_1$, so every edge from $A_1$ intersects every edge from $B_1$.
By the definition, every edge of $A_2$ contains $\{v_1, \dots, v_{n-1} \}$, so it intersects every edge from $B_1$; by symmetry the same holds for $B_2$ and $A_1$.
Also, every edge from $A_2$ intersects every edge from $B_2$ at the vertex $v_1$.
\end{example}
\begin{example}
\label{nn}
Consider an arbitrary $n > 1$.
Let $V := \{v_{ij}\ | \ 1 \leq i, j \leq n \}$, $A := \{ \{v_{i1}, \dots v_{in} \} \ | \ 1 \leq i \leq n \}$,
$B := \{ \{v_{1i_1}, v_{2i_2}, \dots, \ v_{ni_n} \} \ | \ 1 \leq i_1, i_2, \dots, i_n \leq n \}$.
Note that $|A| = n$, $|B| = n^n$.
Obviously, $H := (V, A, B)$ is a cross-intersecting family and $\chi (H) = 3$.
\end{example}
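For $n=3$, Example~\ref{nn} is small enough to verify exhaustively. The sketch below (an editorial aside in Python; indices start at $0$ for convenience) checks the edge counts, the cross-intersecting property, and that the chromatic number is exactly $3$ by brute force over all colorings:

```python
from itertools import product

n = 3
rows = [frozenset((i, j) for j in range(n)) for i in range(n)]          # the family A
transversals = [frozenset((i, c[i]) for i in range(n))                  # the family B
                for c in product(range(n), repeat=n)]
assert len(rows) == n and len(transversals) == n**n
assert all(a & b for a in rows for b in transversals)                   # cross-intersecting

cells = [(i, j) for i in range(n) for j in range(n)]
edges = rows + transversals

def proper(colors):
    # A coloring (given as a tuple over `cells`) is proper
    # if no edge of rows + transversals is monochromatic.
    col = dict(zip(cells, colors))
    return all(len({col[v] for v in e}) > 1 for e in edges)

# No proper 2-coloring exists, but a proper 3-coloring does, so chi(H) = 3:
assert not any(proper(c) for c in product(range(2), repeat=len(cells)))
assert any(proper(c) for c in product(range(3), repeat=len(cells)))
```

The 2-color case fails for the reason in the text: each row must receive both colors, so one can pick a cell of the first color in every row, producing a monochromatic transversal.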
\begin{example}[Proof of Theorem~\ref{edgint}]
\label{exex}
Our construction is based on the following object.
\begin{definition}
A hypergraph is called simple if every two edges share at most one vertex.
\end{definition}
Let us take an $(n-1)$-uniform simple hypergraph $H_0 = (V_0, E_0)$ such that $\chi (H_0) = 3$ (see~\cite{EL, kostochka2010constructions} for constructions).
Denote $V := V_0 \sqcup \{u_1, \dots, u_n\}$, $B := \{\{u_1, \dots, u_n \}\}$, $A := \{e \cup \{u_i\} | e \in E_0, 1 \leq i \leq n\}$. By the construction, $H$ is an $n$-uniform cross-intersecting family.
Let us show that $\chi (H) = 3$. Suppose the contrary, i.e.~there is a 2-coloring of $V$ without monochromatic edges of $A \cup B$.
By the definition of $H_0$ every 2-coloring of $V_0$ gives a monochromatic (say, blue) edge $e \in E_0$.
Then every $u_i$ is red, otherwise the edge $e \cup \{u_i\} \in A$ is monochromatic. So $\{u_1, \dots, u_n\}$ is red, a contradiction.
Note that $Q(H_0) = \{0,1\}$, so $Q(H) = \{0, 1, 2, n-1\}$.
\end{example}
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{chi23}]
First, suppose that there is no edge of size 2.
Consider a pair $a \in A$, $b \in B$ for which $|a \cup b|$ is smallest.
Pick arbitrary vertices $v_a \in a \setminus b$ and $v_b \in b \setminus a$. Let us color $v_a$ and $v_b$ in color 1, $a \cup b \setminus \{v_a, v_b\}$ in color 2 and the remaining vertices in color 3.
Let us show that this coloring is proper. Since there is no edge of size 2, there is no edge of color 1.
Every edge intersects $a$ or $b$, so there is no edge of color 3.
Suppose that there is an edge $e$ of color 2. Without loss of generality $e \in A$.
Then $e \subseteq (a \cup b) \setminus \{v_a, v_b\}$, so $|e \cup b| < |a \cup b|$, a contradiction.
Now let us consider the case $\{u, v\} \in E(H)$.
\begin{lemma}
Let $a = \{u, v\} \in A$, $u \in b \in B$. Then for every $w \in b$ the edge $\{v, w\}$ belongs to $E(H)$, or $\chi (H) \leq 3$.
\end{lemma}
\begin{proof}
Suppose that $\chi (H) > 3$. Then for every $w \in b$ the edge $\{w, v\}$ belongs to $E (H)$:
otherwise one can color $v$, $w$ in color 1, $b \setminus \{w\}$ in color 2 and all other vertices in color 3, producing a proper 3-coloring.
\end{proof}
Without loss of generality $\{u, v\} \in A$.
Consider any edge $b \in B$ (without loss of generality $u \in b$). By the lemma, the edge $\{v, w\}$ belongs to $E(H)$ for every $w \in b$ (or $\chi(H) \leq 3$ and we are done).
Suppose that for some $w \in b$ the edge $\{v, w\}$ lies in $B$.
Then, by the lemma (for $a = \{u, v\}$ and $b = \{v, w\}$) we have $\{u, w\} \in E(H)$, so $b = \{u, w\}$.
So $H$ contains a triangle on $\{u, v, w\}$ with edges both in $A$ and $B$ ($\star$).
If $H$ coincides with the triangle on $\{u, v, w\}$, then $\chi (H) = 3$. Otherwise, $H$ contains $e$ which does not
intersect one of the edges $\{u, v\}$, $\{u, w\}$, $\{v, w\}$.
So, we can relabel $\{u, v, w\}$ as $\{q, r, s\}$ in such a way that $e$, $\{q, r\} \in B$ and $e \cap \{q,r\} = \emptyset$.
Note that one of the edges $\{q, s\}$, $\{r, s\}$ lies in $A$ (without loss of generality it is $\{q, s\}$).
By the Lemma (for $a = \{q, s\}$ and $b = e$) there is an edge $\{q, t\}$ for every $t \in e$. If $\{r, s\} \in B$, then $\{q, t\} \in A$ for every $t \in e \setminus \{s\}$.
So by the Lemma (for $a = \{q, t\}$ and $b = \{q, r\}$) there is an edge $\{r, t\}$ for every $t \in e$. If $\{r, s\} \in A$, then by the Lemma again (for $a = \{r, s\}$ and $b = e$) there is an edge $\{r, t\}$ for every $t \in e$.
Summing up, we have edges $\{q, r\}$, $e \in B$, $\{q, s\} \in A$ and $\{x, t\} \in E(H)$ for every choice $x \in \{q, r\}$ and $t \in e$.
Suppose that $|e| > 2$. This means that there are distinct $s, t_1, t_2 \in e$.
Note that $\{r, t_1\} \in A$ since $\{q, s\} \in A$, so $\{q, t_2\} \in A$. Thus every edge $\{x, t\} \in A$ for every choice $x \in \{q, r\}$ and $t \in e$.
Obviously, we have listed all the edges of the hypergraph, which proves the claim in this case.
Note also that in any proper coloring the set of colors used on $\{q, r\}$ is disjoint from the set of colors used on $e$, so $\chi (H) = 4$.
If $|e| = 2$, then $H = K_4$, and again $\chi (H) = 4$.
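As a sanity check, the two small configurations just treated (the triangle, with $\chi = 3$, and $K_4$, with $\chi = 4$) can be verified by exhaustive search. The following is a brute-force sketch (the vertex encoding and function name are ours, for illustration only); here a coloring is proper when no edge is monochromatic:

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest k admitting a k-coloring with no monochromatic edge."""
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            colors = dict(zip(vertices, coloring))
            # proper: every edge sees at least two colors
            if all(len({colors[v] for v in e}) > 1 for e in edges):
                return k
    return len(vertices)

triangle = [{0, 1}, {1, 2}, {0, 2}]
k4 = [{a, b} for a in range(4) for b in range(a + 1, 4)]
```

Exhaustive search over $k^{|V|}$ colorings is of course only feasible for the tiny configurations appearing in the proof.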
In the remaining case all edges $\{v, w\}$ lie in $A$. If $|B| = 1$, then $\chi(H) \leq 3$, so we may assume there is an edge
$b' \in B$ that does not contain $u$.
Suppose that $b \cap b' = \emptyset$.
Then by the Lemma (for $a = \{v, w\}$ and $b'$) we have the edges $\{w, t'\}$ for every $w \in b$ and $t' \in b'$.
Obviously, all these edges lie in $A$, otherwise we are done by the first case
(if some $\{w, t'\} \in B$, then we have $\{w, v\} \in A$, $\{w, t'\}$, $b' \in B$).
If $b \cap b' \neq \emptyset$, then by the Lemma (for $a = \{u, v\}$ and $b$) we have an edge $\{v, t\}$ for some $t \in b \cap b'$. Then $b' = \{v, t\}$.
Analogously, $b = \{u, t\}$. So condition ($\star$) holds, and we are done.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{max}]
First, we need the following definition.
\begin{definition}
Let $H = (V,E)$ be a hypergraph and $W$ be a subset of $V$. Define
$$H_W := (V \setminus W, \{e \setminus W \ |\ e \in E \}).$$
Then $H$ is a \emph{flower} with $k$ \emph{petals} and core $W$ if $\tau (H_W) \geq k$.
\end{definition}
\noindent The following Lemma was proved by J. H\r{a}stad, S. Jukna and P. Pudl{\'a}k~\cite{haastad1995top}.
We provide its proof for completeness of presentation.
\begin{lemma}
Let $H = (V, E)$ be a hypergraph; $n := \max_{e \in E} |e|$.
If $|E| > (k - 1)^n$, then $H$ contains a flower with $k$ petals.
\end{lemma}
\begin{proof} Induction on $n$. The basis $n = 1$ is trivial.
Now suppose that the lemma is true for
$n - 1$ and prove it for $n$.
If $\tau (H) \geq k$ then $H$ itself is a flower
with at least $k$ petals (and an empty core).
Otherwise, some set of size $k - 1$ intersects all the edges of $H$, and hence, at least
$|E|/(k - 1)$
of the edges must contain some vertex $x$.
The hypergraph
$H_{\{x\}} = (V_{\{x\}}, E_{\{x\}})$
has
$$|E_{\{x\}}| \geq \frac{|E|}{k - 1} > (k - 1)^{n-1}$$
edges, each of cardinality at most $n - 1$.
By the induction hypothesis,
$H_{\{x\}}$ contains a flower with $k$ petals and some core $Y$.
Adding the element $x$ back to the sets in this flower, we obtain a flower in $H$ with the
same number of petals and the core $Y \cup \{x\}$.
\end{proof}
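The induction above translates directly into an extraction procedure: while some $(k-1)$-set covers all edges, a vertex of maximum degree lies in at least $|E|/(k-1)$ of them; move it into the core and recurse on the edges containing it. Below is a brute-force Python sketch of this procedure (function names are ours; it is feasible only for small hypergraphs, since testing for a cover is itself exhaustive):

```python
from itertools import combinations

def has_cover(edges, size):
    """Is there a vertex set of the given size meeting every edge?"""
    verts = set().union(*edges)
    return any(all(set(c) & e for e in edges)
               for c in combinations(verts, size))

def flower_core(edges, k):
    """Core W with tau(H_W) >= k, following the induction
    (the hypothesis |E| > (k-1)^n guarantees termination with enough petals)."""
    edges = [frozenset(e) for e in edges]
    core = set()
    while edges and has_cover(edges, k - 1):
        # a (k-1)-cover exists, so some vertex lies in >= |E|/(k-1) edges
        x = max(set().union(*edges), key=lambda v: sum(v in e for e in edges))
        core.add(x)
        edges = [e - {x} for e in edges if x in e]
    return core
```

For the star $\{\{0,1\},\{0,2\},\{0,3\},\{0,4\}\}$ and $k = 2$ the core is $\{0\}$, with the four singleton petals; $K_4$ is itself a flower with empty core, since $\tau(K_4) = 3$.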
Now let us prove Theorem~\ref{max}. Suppose the contrary, i.e., without loss of generality, that $|A| \geq n^n + 1$.
Then by the Lemma the hypergraph $(V, A)$ contains a flower with $n + 1$ petals.
Every $b \in B$ must intersect the core of the flower: otherwise $b$, having at most $n$ vertices, would cover the $n + 1$ petals, contradicting $\tau (H_W) \geq n + 1$. Hence $H$ is not critical, a contradiction.
\end{proof}
\section{Open questions}
The most famous problem in hypergraph coloring is to determine the minimal number of edges in
an $n$-uniform hypergraph with $\chi (H) = 3$ (it is usually denoted by $m(n)$). The best known bounds (\cite{Erdos2, radhakrishnan1998improved, cherkashin2015note}) are
\begin{equation}
c \sqrt {\frac{n}{\ln n}} 2^n \leq m(n) \leq \frac{e \cdot \ln 2 }{4} n^2 2^{n} (1 + o(1)).
\label{1}
\end{equation}
P.~Erd\H{o}s and L.~Lov{\'a}sz in~\cite{EL} posed the same question for the class of intersecting families.
Even though the intersecting condition is very strong, it does not provide a better lower bound.
On the other hand, the upper bound in (\ref{1}) is probabilistic, so it does not work for intersecting families.
So the asymptotically best upper bound is $7^{\frac{n-1}{2}}$ for $n = 3^k$, which is given by the iterated Fano plane.
Another question is to determine the minimal size $a(n)$ of the largest intersection in an $n$-uniform intersecting family.
The best bounds at this time are
$$\frac{n}{\log_2 n} \leq a(n) \leq n-2.$$
\noindent Studying the mentioned problems for cross-intersecting families is also of interest.
Recall that Example~\ref{nn} shows that Theorem~\ref{max} is tight.
On the other hand, $\max \min (|A|, |B|)$ over all cross-intersecting families with chromatic number 3 is unknown.
Obviously, one may take the example $(V, E)$ by P. Frankl, K. Ota and N. Tokushige and put $A = B = E$ to get lower bound (\ref{2}).
\subsubsection{Acknowledgements.}
The work was supported by the Russian Science Foundation grant 16-11-10014.
The author is grateful to A. Raigorodskii and F. Petrov for constant inspiration, to A. Kupavskii for historical review and
for directing his attention to the paper~\cite{haastad1995top} and to N. Rastegaev for very careful reading of the draft of the paper.
\end{document}
\begin{document}
\title{Extremal Quantum Correlations: Experimental Study with Two-qubit States}
\author{A. Chiuri}
\affiliation{Dipartimento di Fisica, Sapienza {Universit\`a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{G. Vallone}
\affiliation{Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Via Panisperna 89/A, Compendio del Viminale, I-00184 Roma, Italy}
\affiliation{Dipartimento di Fisica, Sapienza {Universit\`a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{M. Paternostro}
\affiliation{School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom}
\author{P. Mataloni}
\affiliation{Dipartimento di Fisica, Sapienza {Universit\`a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\affiliation{Istituto Nazionale di Ottica (INO-CNR), L.go E. Fermi 6, I-50125 Firenze, Italy}
\date{\today}
\begin{abstract}
We explore experimentally the space of two-qubit quantum correlated mixed states, including
frontier ones as defined by the use of quantum discord and von Neumann entropy. Our
experimental setup is flexible enough to allow for the high-quality generation of a vast variety of states.
We address quantitatively the relation between quantum discord and a recently suggested alternative measure of quantum correlations.
\end{abstract}
\pacs{
42.50.Dv,
03.67.Bg,
42.50.Ex
}
\maketitle
Entanglement, {\it ``the characteristic trait of quantum mechanics''} in the words of E. Schr\"odinger~\cite{Sch}, is universally recognized as the key resource in the processing of quantum information and an important tool for the implementation of quantum communication and quantum-empowered metrology~\cite{hpower4}. Yet, entanglement does not embody the {\it unique} way in which non-classical correlations can be set among the elements of a composite system. When generic mixed states are considered, quantum correlations (QCs) are no longer synonymous with entanglement: {\it Other} forms of stronger-than-classical correlations exist and can indeed be enforced in the state of a multipartite mixed system. However, a general consensus on {\it the} measure of quantum correlations is still far from being reached. Among the quantifiers proposed so far, quantum discord~\cite{QD} (${\cal D}$) occupies a prominent position and enjoys a growing popularity within the community working on quantum information science, due to its alleged relevance in the model for deterministic quantum computation with one qubit~\cite{datta,barbieri}, its extendibility to some important classes of infinite-dimensional systems~\cite{parisadesso} and its peculiar role in open-system dynamics~\cite{laura}. Recently, some attempts at providing an operational interpretation of discord have been reported~\cite{opInt}.
Yet, interesting alternatives to discord exist, each striving to capture different facets of QCs~\cite{others}. In Ref.~\cite{giro10qph}, in particular, a measure based on the concept of the perturbation induced on a bipartite quantum state by joint local measurements (see also Luo in Ref.~\cite{others}) has been put forward and extensively analyzed. Such an indicator, dubbed {\it ameliorated measurement-induced disturbance} (AMID), has been shown to faithfully signal fully classical states ({\it i.e.} states endowed with only classical correlations). AMID embodies an interesting upper bound to the non-classicality content quantified by ${\cal D}$ and, at variance with the latter, is naturally symmetric.
\begin{figure}
\caption{(color online) {\bf a)}
\end{figure}
A landmark in the study of quantum entanglement has been set by the identification of states maximizing the degree of two-qubit entanglement at set values of the global state mixedness~\cite{mems}. This has spurred an extensive investigation, at all levels, of the interplay between entanglement and mixedness, which has culminated in the experimental exploration of the two-qubit entropic plane, including maximally entangled mixed states (MEMS), by a number of groups worldwide~\cite{pete04prl,barb04prl,woerdman}. Needless to say, given the strong interplay between non-classical correlations and mixedness, an experimental characterization analogous to the one performed for MEMS is not only highly desirable but extremely interesting. This is precisely the aim of this work: Building on the framework provided by the theoretical studies in Refs.~\cite{giro10qph,jamesmaiorca}, here we experimentally navigate the space of two-qubit discorded states, focusing our attention, in particular, on the class of two-qubit maximally non-classical mixed states (MNCMS), {\it i.e.} those states maximizing the degree of quantum discord at assigned values of their global von Neumann entropy. We show very good agreement between theoretical predictions and experimental evidence across the whole range of values of the global entropy for two-qubit states. The extensive nature of our investigation comprises the generation and analysis of a variety of quantum correlated two-qubit states, from Werner states to the MEMS associated with the use of relative entropy of entanglement and von Neumann entropy~\cite{mems}.
Technically, this has been possible due to the {high} flexibility of the experimental setup used for our demonstration, which makes clever and effective use of the possibilities offered by well-tested sources for hyperentangled polarization-path photonic states. We engineer mixedness in the joint polarization state of two photonic qubits by tracing out the path degree of freedom (DOF). The properties of such residual states are then analyzed by means of the quantum state tomography (QST) toolbox~\cite{jame01pra} and a quantitative comparison between their quantum-correlation contents and the predictions on MNCMS is performed. The quality of the generated states is such that we have been able to experimentally verify the predictions given in Ref.~\cite{giro10qph} relating discord and {AMID}: We have generated the states embodying both the lower and upper bound to {AMID} at set values of discord. Our study should be regarded as the counterpart, dealing with the much broader context of general quantum correlations, of the seminal experimental investigations on the relation between entanglement and mixedness performed in Refs.~\cite{pete04prl,barb04prl,woerdman}. As such, it encompasses an important step in the characterization of non-classicality in general two-qubit states.
\noindent
{\it Resource-state generation.--} Before exploring the entropic two-qubit space, it is convenient to introduce the experimental techniques used in order to achieve the ample variety of states necessary for our investigation. The key element for the state engineering in our setup is embodied by state
\begin{equation}\label{xi}
\ket{\xi}_{AB}{=}\sqrt{1{-}\epsilon}\ket{r\ell}_{AB}\ket{\phi^+(p)}_{AB}+\sqrt{\epsilon} \ket{\ell r}_{AB}\ket{HV}_{AB}
\end{equation}
with {$\ket{\phi^\pm(p)}_{AB}{=}\sqrt{p}\ket{HH}_{AB}{\pm}\sqrt{1-p}\ket{VV}_{AB}$}.
In Eq.~(\ref{xi}), four qubits are encoded in the polarization and path DOFs
of optical modes $A$ and $B$. In particular, $H$ ($V$) represents horizontal (vertical) polarization of a photon, while $r$ ($\ell$)
is the right (left) mode in which each photon can be emitted from our source of entangled photon states, which we now describe.
State $\ket{\xi}_{AB}$ is produced by suitably adapting the polarization-momentum source of hyperentangled states that has been recently used as basic building block in experimental test-beds on multipartite entanglement~\cite{barb05pra, chiu10prl}.
{To generate $\ket{\phi^{+}(p)}$,
a UV laser impinges back and forth on a nonlinear crystal [cf. Fig.~\ref{setup} {\bf a)}].
The forward emission generates the $\ket{HH}$ contribution.
A quarter waveplate (QWP) transforms the $\ket{HH}$ backward
emission into $\ket{VV}$ after reflection at the spherical mirror $M$.}
The relative phase between the $\ket{VV}$ and $\ket{HH}$ contributions is changed by
translating $M$. The weight $\sqrt p$ in the unbalanced Bell state
{$\ket{\phi^{+}(p)}_{AB}$},
can be varied by rotating the
half waveplate HWP$_{1}[p]$ near $M$ [see Fig.~\ref{setup} {\bf a)}], which intercepts twice the UV pump beam. For more
details on the generation of non-maximally entangled states of polarization, see Ref.~\cite{vall07pra}.
A four-hole mask allows us to select four longitudinal spatial modes (two per photon),
namely $\ket{r}_{A,B}$, $\ket{\ell}_{A,B}$,
within the emission cone of the crystal.
The state thus produced finally reads $\ket{\text{HE}(p)}{=}(\ket{r\ell}_{AB}+e^{i\gamma}\ket{\ell r}_{AB}) \otimes \ket{\phi^{+}(p)}_{AB}/{\sqrt{2}}$.
State $\ket{\xi}_{AB}$ has been obtained by making three further changes to
$\ket{\text{HE}(p)}$ [cf. Fig.~\ref{setup} {\bf a)}]. First, the contribution of the modes $\ket{\ell r}$ corresponding to the V-cone is intercepted by inserting two beam stops. An attenuator is then
placed on mode $\ket{r}_{B}$ so as to vary
the relative weight between $\ket{\ell r}_{AB}$ and $\ket{r \ell}_{AB}$. This effectively corresponds to changing $\epsilon$. Finally, a HWP [labelled HWP$_{2}$ in Fig.~\ref{setup} {\bf a)}], oriented at 45$^{\circ}$ and intercepting mode $\ket{r}_{B}$,
allows us to transform $\ket{\ell r}\ket{HH}$ into $\ket{\ell r}\ket{HV}$. This gives us the second term in \eq{xi}, with which we have been able to span the entire set of states relevant to our study.
\begin{figure*}
\caption{(color online) {\bf a)}
\end{figure*}
\noindent
{\it Experimental navigation.--} We now introduce the measures of QCs considered in our work and discuss the results of our experimental investigation. We start by recalling that discord is associated with the discrepancy between two classically
equivalent versions of mutual information~\cite{QD}. For a bipartite state $\rho_{AB}$
the latter is defined as ${\cal I}(\rho_{AB}){=}{\cal S}(\rho_A){+}{\cal
S}(\rho_B){-}{\cal S}(\rho_{AB})$. Here, ${\cal
S}(\rho){=}{-}\text{Tr}[\rho\log_2\rho]$ is the von Neumann entropy (VNE)
of the arbitrary two-qubit state $\rho$ and $\rho_{j}$ is the reduced
density matrix of party $j{=}A,B$. One can also consider the expression
${\cal J}^\leftarrow(\rho_{AB}){=}{\cal S}(\rho_A){-}{\cal
H}_{\{\hat\Pi_i\}}(A|B)$ (the one-way classical correlation~\cite{QD}) with ${\cal
H}_{\{\hat\Pi_i\}}(A|B){\equiv}\sum_{i}p_i{\cal S}(\rho^i_{A|B})$ the quantum
conditional entropy associated with the post-measurement density matrix
$\rho^i_{A|B}{=}\text{Tr}_B[\hat\Pi_i\rho_{AB}]/p_i$ obtained upon
performing the complete projective measurement $\{\hat\Pi_i\}$ on system $B$
($p_i{=}\text{Tr}[\hat\Pi_i\rho_{AB}]$). We define discord as
${\cal D}^\leftarrow{=}\inf_{\{\Pi_i\}}[{\cal
I}(\rho_{AB}){-}{\cal J}^\leftarrow(\rho_{AB})]$,
where the
infimum is calculated over the set of projectors $\{\hat\Pi_i\}$.
Discord is in general asymmetric (${\cal D}^\leftarrow{\neq}{\cal
D}^\rightarrow$) with ${\cal D}^\rightarrow$ obtained by swapping the roles of A
and $B$. This opens the possibility to distinguish between {\it quantum-quantum states}, having $({\cal D}^\leftarrow,{\cal D}^\rightarrow){\neq}{0}$; {\it quantum-classical} and {\it classical-quantum} states, for which exactly one of the two values of discord is strictly null; and finally {\it classical-classical} states, for which ${\cal D}^\leftarrow{=}{\cal D}^\rightarrow{=}{0}$ and which simply embed a classical probability distribution in a two-qubit state~\cite{pianista}. Clearly, the asymmetry inherent in discord could lead us to mistake a quantum-classical state
for a classical state. This makes such a measure not strongly faithful. In order to bypass this ambiguity we will consider the symmetrized discord ${\cal D}^\leftrightarrow{=}\max[{\cal D}^\leftarrow,{\cal D}^\rightarrow]$, which is zero only for classical-classical states.
In Ref.~\cite{giro10qph}, AMID has been introduced as an alternative indicator of non-classical correlations for bipartite systems of any dimension as
${\cal A}
{=}{\cal I}(\varrho_{AB}){-}{\cal I}_c(\varrho_{AB})$,
where ${\cal I}_c(\varrho_{AB}) \equiv {\sup}_{\{\hat\Omega\}}{\cal
I}(\varrho^{\hat{\Omega}}_{AB})$ and $\varrho^{\hat\Omega}_{AB}$ is the state resulting from the application of an arbitrary complete (bi-local) projective measurement $\hat{\Omega}_{kl}=\hat\Pi_{A,k}\otimes\hat\Pi_{B,l}$ over the composite system.
Our definition is motivated by the analysis in~\cite{terhal}, where ${\cal I}_c$ is defined as the {\it classical mutual information} (optimized over projective measurements), a proper
symmetric measure of classical correlations in bipartite states.
AMID is thus recast as the difference between total and classical mutual
information, which has all the prerequisites to be a {\it bona fide} measure
of QCs \cite{pianista}.
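For concreteness, the entropic quantities entering these definitions are easy to evaluate numerically from a reconstructed density matrix. A minimal sketch (the helper names are ours, not part of any toolbox used in the experiment) computing ${\cal S}$ and ${\cal I}$ for a two-qubit state:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho log2 rho]."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information(rho):
    """I(rho_AB) = S(rho_A) + S(rho_B) - S(rho_AB) for a two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)           # indices (i, j; k, l): A -> i, k; B -> j, l
    rho_a = np.trace(r, axis1=1, axis2=3)  # partial trace over B
    rho_b = np.trace(r, axis1=0, axis2=2)  # partial trace over A
    return vn_entropy(rho_a) + vn_entropy(rho_b) - vn_entropy(rho)

# Bell state |Phi+>: S(rho_A) = S(rho_B) = 1, S(rho_AB) = 0, so I = 2
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

Evaluating ${\cal D}^\leftrightarrow$ or ${\cal A}$ additionally requires an optimization over projective measurements, which we do not sketch here.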
Having presented our quantitative tools, we are in a position to discuss the results of our experimental endeavors by first addressing the ${\cal D}^\leftrightarrow\ vs.\ {\cal S}$ plane. As shown in Refs.~\cite{giro10qph} (see also~\cite{jamesmaiorca}), when ${\cal D}^\leftrightarrow$ and ${\cal S}$ are taken as quantitative figures of merit for QCs and global mixedness respectively, the class of MNCMS consists of four families of states, all of the form
\begin{equation}
\rho_{AB}^{X} = \left[
\begin{matrix}
\rho_{11} & 0 & 0 & \rho_{14}\\
0 & \rho_{22} & \rho_{23} & 0\\
0 & \rho_{23}^{*} & \rho_{33} & 0\\
\rho_{14}^{*} & 0 & 0 & \rho_{44}
\end{matrix}
\right]\!,~~\text{with}~~\sum_{j}\rho_{jj}{=}1.
\end{equation}
The low-entropy region ${\cal S}{\in}[0, 0.9231)$ pertains to the rank-3 states $\rho^R_{AB}$ embodying MEMS for the relative entropy of entanglement~\cite{mems}
\begin{equation}
\label{cc}
{\rho^R_{AB}=\frac{1-a+r}{2}\ket{\Phi^+}\bra {\Phi^+}+
\frac{1-a-r}{2}\ket{\Phi^-}\bra {\Phi^-}+a\ket{01}\bra{01}}
\end{equation}
with $0{\le}a{\le}1/3$ and $r$ a proper function of $a$~\cite{giro10qph}.
{In Eq.~(\ref{cc}) we have used the Bell state $\ket{\Phi^\pm}\equiv\ket{\phi^{\pm}(1/2)}_{AB}$}.
States $\rho^R_{AB}$ span the black-colored trait in Fig.~\ref{disc-vne&amid-disc} {\bf a)}. Next comes the family of Werner states
\begin{equation}
{\rho^{W}_{AB}(\epsilon){=}(1{-}\epsilon)
\ket{\Phi^{+}}_{AB} \bra{\Phi^{+}}{+} \epsilon \frac{\leavevmode\hbox{\small1\normalsize\kern-.33em1}_{4}}{4}},
\end{equation}
which occupy the entropic sector ${\cal S}{\in}[0.9231, 1.410)$ for {$0.225{\le}\epsilon{<}{0.426}$}
and the high-entropy region of ${\cal S}{\in}[1.585, 2]$ for {${0.519}{\le}{\epsilon}{\le}{1}$}.
Such disjoint boundaries are both represented by red curves in Fig.~\ref{disc-vne&amid-disc} {\bf a)}.
There, it is visible that two more families belong to the MNCMS boundary [cf. the blue and green traits corresponding to the range $1.410{\le}{\cal S}{<}1.585$]. Such states are currently out of our grasp due to the rather small entropy window they belong to, which poses some challenge to the tunability of the state mixedness achievable by our method. For conciseness, we omit any discussion of them.
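The entropy windows quoted above for the Werner family can be checked directly from its spectrum $\{1-3\epsilon/4,\, \epsilon/4,\, \epsilon/4,\, \epsilon/4\}$. A short numerical sketch (helper names ours):

```python
import numpy as np

def werner(eps):
    """rho_W(eps) = (1 - eps)|Phi+><Phi+| + eps * I/4."""
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    return (1 - eps) * np.outer(phi, phi) + eps * np.eye(4) / 4

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))
```

One finds ${\cal S} \approx 0.92$ at $\epsilon = 0.225$ and ${\cal S} = 2$ at $\epsilon = 1$, consistent with the boundaries of the two entropic sectors.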
It is worth noticing that quantum discord and AMID share the very same structure of MNCMS, which can thus be rightfully regarded as the two-qubit states whose QCs are maximally robust against state mixedness. This class of states is thus set to play a key role in realistic (noisy) implementations of quantum information schemes based on the non-classicality of correlations as a resource~\cite{datta,barbieri}. There is, currently, enormous interest in designing practical schemes for the exploitation of such features. The sharing of such an interesting class of states by the two QC measures addressed here enforces the establishment of a hierarchy between AMID and quantum discord, a point precisely along the lines of the quantitative comparisons between different measures of entanglement applied to mixed two-qubit states~\cite{grudka}, in an attempt to establish a mutual order.
Such a relationship is elucidated in Fig.~\ref{disc-vne&amid-disc} {\bf b)}, where the solid lines show that AMID embodies an upper bound to ${\cal D}^\leftrightarrow$ and is in agreement with the latter in identifying genuinely classical-classical states having no QCs. Any physically allowed two-qubit state lives in between the straight lower bound such that ${\cal A}{=}{\cal D}^\leftrightarrow$ and the upper one. A full analytic characterization of such boundary curves is possible and can be thoroughly checked by means of a numerical exploration of the ${\cal A}\ vs.\ {\cal D}^\leftrightarrow$ plane~\cite{giro10qph}. Quite obviously, the lower bound in the AMID-discord plane is spanned by pure states of variable entanglement (for pure states ${\cal A}{=}{\cal D}^\leftrightarrow$).
However, such a lower frontier also accommodates both the Werner states
and the family
\begin{equation}\label{low}
{\rho^{\downarrow}_{AB}(q)=(1-q) \ket{\Phi^{+}}_{AB}\bra{\Phi^{+}}+ q \ket{\Phi^{-}}_{AB}\bra{\Phi^{-}}},
\end{equation}
where $q{\in}[0,0.5]$, while the upper bound is spanned by
\begin{equation}\label{up}
{\rho^{\uparrow}(\epsilon,p)_{AB}{=}(1{-}\epsilon)\ket{\phi^{+}(p)}_{AB}\bra{\phi^{+}(p)}{+}\epsilon \ket{01}_{AB} \bra{01}}
\end{equation}
for values of $(\epsilon,p)$ satisfying a transcendental equation~\cite{giro10qph}.
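To make the parametrization of the two frontier families concrete, both can be written down as $4\times4$ matrices in the computational basis $\{\ket{HH},\ket{HV},\ket{VH},\ket{VV}\}$ (the basis ordering is our own convention). A sketch:

```python
import numpy as np

def ket_phi(p, sign=+1):
    """|phi^{+/-}(p)> = sqrt(p)|HH> +/- sqrt(1-p)|VV>."""
    return np.array([np.sqrt(p), 0, 0, sign * np.sqrt(1 - p)])

def rho_up(eps, p):
    """(1-eps)|phi^+(p)><phi^+(p)| + eps |HV><HV|  (|01> = |HV>)."""
    psi = ket_phi(p)
    hv = np.zeros((4, 4))
    hv[1, 1] = 1.0
    return (1 - eps) * np.outer(psi, psi) + eps * hv

def rho_down(q):
    """(1-q)|Phi+><Phi+| + q|Phi-><Phi-|, with |Phi+-> = |phi^{+-}(1/2)>."""
    plus, minus = ket_phi(0.5, +1), ket_phi(0.5, -1)
    return (1 - q) * np.outer(plus, plus) + q * np.outer(minus, minus)
```

$\rho^{\uparrow}(0,1/2)$ is the pure Bell state, while $\rho^{\downarrow}(1/2)$ retains no coherence between $\ket{HH}$ and $\ket{VV}$.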
Starting from the {four-qubit} resource $\ket{\xi}_{AB}$, we have generated the states spanning the MNCMS boundary in Fig.~\ref{disc-vne&amid-disc} {\bf a)} and the upper/lower frontier states $\rho^{\updownarrow}_{AB}(\epsilon,p)$ in the ${\cal A}\ vs\ \mathcal{D}^{\leftrightarrow}$ plane.
\noindent
{{\it Generation of $\rho^{\uparrow}_{AB}$.--} This class} serves as an ideal platform for the description of the experimental method pursued to achieve the remaining states addressed in our study. By tracing out the path DOF {in $\ket{\xi}_{AB}$}
and using the correspondence between physical states and logical qubits $\ket{H}{\rightarrow}\ket{0}$, $\ket{V}{\rightarrow}\ket{1}$,
the density matrix for the state $\rho^{\uparrow}(\epsilon,p)$ is obtained.
The trace over the path DOF is performed by matching the left and right
modes coming from the four-hole mask in Fig.~\ref{setup} {\bf a)} on a beam splitter [indicated as BS in panel {\bf b)} of the same figure].
When the optical path difference between the left and right paths
exceeds the photon coherence length,
an incoherent superposition of {$\ket{\phi^{+}(p)}_{AB}$} and $\ket{HV}_{AB}$
is achieved. The values of the pairs $(\epsilon,p)$ determining the experimental
states [shown as blue dots in Fig.~\ref{disc-vne&amid-disc} {\bf b)}] are given in Table~\ref{value}, together with their uncertainties.
\begin{table}[t]
\begin{tabular}{ c| c c c c c c }
\hline
\hline
 & \multicolumn{6}{c}{Value and uncertainty} \\
\hline\hline
$\epsilon$&0.00$\pm$0.01\!&0.05$\pm$0.01\!& 0.10$\pm$0.01\!& 0.15$\pm$0.01\!& 0.18$\pm$0.01\!&0.20$\pm$0.01 \\
$p$&0.50$\pm$0.02\!& 0.70$\pm$0.01\!& 0.80$\pm$0.01\!& 0.90$\pm$0.01\!&0.95$\pm$0.02\!&0.99$\pm$0.02 \\
\hline
\hline
\end{tabular}
\caption{{\bf Parameters in $\rho^{\uparrow}(\epsilon,p)$}: The table reports the values of the parameters entering the states $\rho^{\uparrow}(\epsilon,p)$ produced in our experiment, together with their uncertainties.}
\label{value}
\end{table}
The values $(\epsilon,p){=}(0,0.5)$ and $(\epsilon,p){=}(0.2,1)$ correspond to the case of a pure state (having $\mathcal{A}{=}\mathcal{D}^{\leftrightarrow}{=}1$)
and a fully classical state (with $\mathcal{A}{=}\mathcal{D}^{\leftrightarrow}{=}0$), respectively.
\noindent
{\it Generation of $\rho^{\downarrow}_{AB}$.--}
The family embodied by $\rho^{\downarrow}_{AB}(q)$ can also be generated starting from the resource state $\ket{\xi}_{AB}$. By selecting only the correlated modes $\ket{r\ell}_{AB}$ from the four-hole mask and setting the HWP$_{1}$ at $0^\circ$ (so that $p=1/2$ is fixed), we have generated the Bell state
{$\ket{\Phi^{+}}_{AB}\bra{\Phi^{+}}{\equiv}\rho^{\downarrow}(q{=}0)$.}
By inserting a birefringent quartz plate of proper thickness on the path of one of the two correlated modes, we controllably affect
{the coherence between the $\ket{HH}$ and $\ket{VV}$} states of polarization.
Several quartz plates of different thickness $\ell_{q}$ have been used
{to transform $\ket{\Phi^+}$ into $\rho^{\downarrow}(q)$.
The} value of $q$ is related to the dimensionless parameter $C=\frac{(\Delta n)\ell_{q}}{c\tau_{coh}}$,
where $\tau_{coh}$ is the coherence time of the emitted photons and $\Delta n$ is the difference between ordinary and extraordinary
refractive indices of the quartz. The details of this dependence are inessential; it is enough to state that $q{=}1/2$ ($q{\rightarrow}0$) for $C{\gg}1$ ($C{\rightarrow}0$).
\noindent
{{\it Generation of $\rho^{R,W}$.--}
Our source of $\rho^R$ and of Werner states makes use of the setup previously described for the
states $\rho^{\uparrow}_{AB}(\epsilon,p)$.
By setting $p=1/2$ and adding decoherence between $\ket{HH}$
and $\ket{VV}$ (related to the parameter $r$) as previously explained, we can obtain $\rho^R_{AB}$ from $\rho^{\uparrow}_{AB}$. }
As for $\rho^W_{AB}$, while we have already addressed the method used to generate the {$\ket{\Phi^{+}}_{AB}\bra{\Phi^{+}}$} component
of the state, it is worth mentioning how to get the $\leavevmode\hbox{\small1\normalsize\kern-.33em1}_{4}$ contribution.
This has been obtained by inserting a further HWP [HWP$_{3}$ in Fig.\ref{setup} {\bf a)}]
on the $\ket{\ell}_{A}$ mode and rotating both HWP$_{2}$ and HWP$_{3}$ to $22.5^{\circ}$ so as to generate $\ket{\ell r}_{AB} \ket{++}_{AB}$. By using two quartz plates of different thickness, each introducing a delay longer than $\tau_{coh}$, we obtained a fully mixed state on the correlated modes $\ket{\ell r}_{AB}$. Each quartz plate introduces decoherence on the state of each photon. By matching the two correlated-mode pairs on a BS, state $\rho^W_{AB}$ is achieved.
As anticipated, in order to ascertain the properties of all the states discussed above, we have used QST~\cite{jame01pra} to obtain the corresponding physical density matrices and quantify $\mathcal{D}^{\leftrightarrow}$, $\mathcal{S}$ and $\mathcal{A}$. The Pauli operators needed to implement the QST have been measured by using a standard polarization-analysis setup and two detectors [see the inset in Fig.~\ref{setup} {\bf b)}]. Integrated systems comprising GRIN lenses and single-mode fibres~\cite{ross09prl} have been used to optimally collect the radiation after the QST setup and send it to the detectors $\text{D}_{A,B}$.
\noindent
{\it Discussion and conclusions.--}
{Excellent agreement between the theoretical expectations and experimental results has been found}
for both the navigation in the space of MNCMS and the quantitative confirmation of the predicted relation between AMID and discord. As seen in Fig.~\ref{disc-vne&amid-disc}, almost the whole class of maximally non-classical states has been explored, with the exception of a technically demanding (yet interesting) region, whose exploration is currently under study. Quite remarkably, on the other hand, the whole upper bound in the ${\cal A}\ vs. \ {\cal D}^\leftrightarrow$ has been scanned in an experimental endeavor that has originated an ample wealth of physically interesting states.
{Technically, this has been achieved by cleverly engineering a four-qubit hyperentangled state.
In particular, we exploit the path degree of freedom as an ancillary resource to obtain the desired states
encoded in the polarization. } Our analysis remarkably embodies the first navigation of the space of general quantum correlations at set values of global entropy, thus moving along the lines of the analogous seminal investigations performed on entanglement~\cite{pete04prl,barb04prl,woerdman}. We {hope} that our efforts will spur further interest in the study, at all levels, of the interplay between mixedness and non-classicality.
\noindent
{\it Acknowledgments.--}
{We thank Valentina Rosati for her contributions to the realization of the experiment. MP is grateful to G. Adesso and D. Girolami for fruitful discussions. }
{This work was partially supported by the FARI project 2010 of Sapienza Universit\`a di Roma,}
and the UK EPSRC (EP/G004579/1).
\end{document}
\begin{document}
\begin{frontmatter}
\vspace*{6pt}
\title{Discussion of ``Statistical Modeling of Spatial Extremes'' by
A.~C. Davison, S.~A.~Padoan and M. Ribatet}
\runtitle{Discussion}
\begin{aug}
\author[a]{\fnms{D.} \snm{Cooley}\corref{}\ead[label=e1]{[email protected]}}
\and
\author[b]{\fnms{S. R.} \snm{Sain}}
\runauthor{D. Cooley and S. R. Sain}
\address[a]{D. Cooley is Assistant Professor, Department of Statistics, Colorado State University, Fort Collins, Colorado 80523-1877, USA
\printead{e1}.}
\address[b]{Stephan R. Sain is Scientist,
Institute for Mathematics Applied to Geosciences,
National Center for Atmospheric Research,
Boulder, Colorado 80307-3000, USA.}
\end{aug}
\vspace*{-2pt}
\end{frontmatter}
We congratulate the authors for their overview paper discussing
modeling techniques for spatial extremes.
There is great interest in spatial extreme data in the atmospheric
science community, as the data is inherently spatial and it is
recognized that extreme weather events often have the largest economic
and human impacts.
In order to adequately assess the risk of potential future extreme
events, there is a need to know how the characteristics of phenomena
such as precipitation or temperature could be altered due to climate change.
Because of the high interest level in the atmospheric science and (more
broadly) the geoscience communities, it is imperative for the
statistics community to develop methodologies which appropriately
answer the questions associated with spatial extreme data.
\citet{davison2011} provide a comprehensive overview of existing
techniques that can serve as a useful starting point for statisticians
entering the field.
That the paper is written as a case study helps to illustrate the
advantages and disadvantages of the various methods.
We hope that this Swiss rainfall data will serve as a~test set by which
future methodologies can be evaluated.
The authors analyze data which are annual maxima.
This is natural from the classical extreme value theory point of view,
whose fundamental result establishes the limiting distribution of $\mathbf
Y = ( \bigvee_{i = 1}^n X_{1i}, \ldots, \bigvee_{i = 1}^n X_{Di}
)^T$ to be in the family of the multivariate max-stable distributions.
In practice, modeling vectors of annual maxima seems less than ideal,
and it is not clear how much dependence information is lost by
discarding the coincident data.
Scientists in other disciplines can be uncomfortable with the idea of
constructing data vectors of events which most often occur on different days.
We are aware that there is current work to extend spatial extremes work
to deal with threshold exceedances, and we look forward to that work
appearing in the literature.\looseness=1
\citet{davison2011} divide the spatial approaches into three categories:
latent variable models, copulas, and max-stable process models.
In Section 7 they do a very nice job of detailing the strengths and
weaknesses of the three approaches.
However, it seems that the article does not make clear enough that the
aim of the latent variable approach is fundamentally different from the
aim of a copula or max-stable process model.
As the authors state in Sections 2.2 and 2.3, current modeling of
multivariate (or spatial) extremes requires two tasks: (1) the
marginals must be estimated and transformed to something standard
(e.g., unit Fr\'echet) so that (2) the tail dependence in the data can
be modeled.
The latent variable model is a method for characterizing how the
marginal distribution varies over space, that is, task 1.
In contrast, both copula models and existing max-stable process models
explicitly model the tail dependence in the data once the marginals are
known, that is, task 2.
We refer to the dependence remaining after the marginals have been
accounted for as ``residual dependence,'' as \citet{Sang10} described the
random variables after marginal transformation as ``standardized residuals.''
\citet{davison2011} are correct to point out (Figure 4) that using a
latent variable model is inappropriate for applications where the joint
behavior of the random vector is required.
However, there are applications which aim only to model the marginal behavior.
There is a\vadjust{\goodbreak} long history of producing return level maps such as those
shown in Figure~3 of the manuscript.
For instance, the recent effort to update the precipitation frequency
atlases for the US (\citeauthor{bonnin04a}, \citeyear{bonnin04a,bonnin04b}) aimed only to
characterize the marginal distribution's tail over the study region.
\citeauthor{bonnin04a} (\citeyear{bonnin04a,bonnin04b}) employed a regional frequency analysis
approach (\cite{dalrymple60}; \cite{hosking97}) which, like the latent variable
model approach, aims to borrow strength across sites when estimating
marginal parameters.
As \citet{davison2011} clearly show, explicitly modeling residual
depend\-ence requires considerable effort, and when only the mar\-ginal
effects need to be described, we feel it can be appropriate to ignore
the residual dependence so long as one recognizes the limited scope of
the questions that such an analysis can answer.
In situations where the joint behavior of multiple locations must be
described, then one must explicitly model the residual dependence.
As \citet{davison2011} show, dependence models not specifically designed
for extremes may be \mbox{inadequate} to capture tail dependence.
However, models such as the extremal copulas or max-stable processes do
not easily lend themselves to current atmospheric science applications
with hundreds, thousands, or tens of thousands of locations.
There are obvious avenues to explore toward adapting the pairwise
likelihood methods for large spatial data sets, but, to date, pairwise
likelihood methods have only been applied to applications similar to
the one in \citet{davison2011} with roughly 50 locations.
We imagine scaling the methods to the size of current applications will
be nontrivial, and perhaps new inference procedures or more
computationally feasible extremes dependence models will need to be developed.
Until appropriate extremes techniques are available, people will
continue to be tempted to apply high-dimensional models developed to
describe nonextreme data (e.g., a~Gaussian copula) to model tail dependence.
Most of the spatial extremes work to date has been primarily
descriptive in nature.
Such analyses are useful in assessing risk (i.e., the probability of an
extreme event), but do not help to explain the underlying causes of
extreme events.
There is a~desire in the atmospheric sciences to move beyond
descriptive analyses and toward analyses which enhance understanding of
the processes which lead to extreme events.
For example, \citet{sillmann2011} establish a link between extreme cold
temperatures in Europe and\vadjust{\goodbreak} a blocking phenomenon in the North Atlantic,
\citet{maraun2011} link extreme precipitation in Europe to large-scale
airflow covariates, and \citet{weller2011} link extreme precipitation on
the Pacific coast of North America to surface pressure patterns.
Since it is generally believed that climate models are better at
representing processes at large-scales, establishing links between
extreme events and large-scale phenomena enable one to better
conjecture how the nature of extreme events will change with the climate.
While none of the analyses cited above involved extensive spatial
modeling of extremes, it is foreseeable that science will move in this
direction.
Finally, undertaking a pairwise likelihood fitting of a max-stable
process model is challenging and would be beyond the capabilities of
most geoscientists.
The authors are to be commended for developing the \mbox{{\tt
SpatialExtremes}} (\cite{SpatialExtremes}) package in {\tt R} which
enables the general scientific community to utilize these methods.
\begin{thebibliography}{11}
\bibitem[\protect\citeauthoryear{Bonnin et~al.}{2004a}]{bonnin04a}
\begin{bbook}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Bonnin},~\bfnm{G.}\binits{G.}},
\bauthor{\bsnm{Todd},~\bfnm{D.}\binits{D.}},
\bauthor{\bsnm{Lin},~\bfnm{B.}\binits{B.}},
\bauthor{\bsnm{Parzybok},~\bfnm{T.}\binits{T.}},
\bauthor{\bsnm{Yekta},~\bfnm{M.}\binits{M.}} \AND
\bauthor{\bsnm{Riley},~\bfnm{D.}\binits{D.}}
(\byear{2004}a).
\btitle{NOAA Atlas 14, Precipitation Frequency Atlas of the United States}
\bvolume{1}. \bpublisher{U.S. Dept. Commerce, National Oceanic and Atmospheric
Administration, National Weather Service}, \baddress{Silver
Spring, MD}.
\bptok{imsref}
\end{bbook}
\endbibitem
\bibitem[\protect\citeauthoryear{Bonnin et~al.}{2004b}]{bonnin04b}
\begin{bbook}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Bonnin},~\bfnm{G.}\binits{G.}},
\bauthor{\bsnm{Todd},~\bfnm{D.}\binits{D.}},
\bauthor{\bsnm{Lin},~\bfnm{B.}\binits{B.}},
\bauthor{\bsnm{Parzybok},~\bfnm{T.}\binits{T.}},
\bauthor{\bsnm{Yekta},~\bfnm{M.}\binits{M.}} \AND
\bauthor{\bsnm{Riley},~\bfnm{D.}\binits{D.}}
(\byear{2004}b).
\btitle{NOAA Atlas 14, Precipitation Frequency Atlas of the United States}
\bvolume{2}. \bpublisher{U.S. Dept. Commerce, National Oceanic and Atmospheric
Administration, National Weather Service}, \baddress{Silver
Spring, MD}.
\bptok{imsref}
\end{bbook}
\endbibitem
\bibitem[\protect\citeauthoryear{Dalrymple}{1960}]{dalrymple60}
\begin{bmisc}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Dalrymple},~\bfnm{T.}\binits{T.}}
(\byear{1960}).
\bhowpublished{Flood frequency analyses. Water Supply Paper 1543-a, U.S. Geological
Survey,
{Reston},
{VA}}.
\bptok{imsref}
\end{bmisc}
\endbibitem
\bibitem[\protect\citeauthoryear{Davison, Padoan and
Ribatet}{2012}]{davison2011}
\begin{barticle}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Davison},~\bfnm{A.}\binits{A.}},
\bauthor{\bsnm{Padoan},~\bfnm{S.}\binits{S.}} \AND
\bauthor{\bsnm{Ribatet},~\bfnm{M.}\binits{M.}}
(\byear{2012}).
\btitle{Statistical modeling of spatial extremes}.
\bjournal{Statist. Sci.}
\bvolume{27}
\bpages{161--186}.
\bptok{imsref}
\end{barticle}
\endbibitem
\bibitem[\protect\citeauthoryear{Hosking and Wallis}{1997}]{hosking97}
\begin{bbook}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Hosking},~\bfnm{J.~R.~M.}\binits{J.~R.~M.}} \AND
\bauthor{\bsnm{Wallis},~\bfnm{J.~R.}\binits{J.~R.}}
(\byear{1997}).
\btitle{Regional Frequency Analysis: An Approach Based on L-Moments}.
\bpublisher{Cambridge Univ. Press},
\baddress{Cambridge}.
\bptok{imsref}
\end{bbook}
\endbibitem
\bibitem[\protect\citeauthoryear{Maraun, Osborn and Rust}{2011}]{maraun2011}
\begin{barticle}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Maraun},~\bfnm{D.}\binits{D.}},
\bauthor{\bsnm{Osborn},~\bfnm{T.}\binits{T.}} \AND
\bauthor{\bsnm{Rust},~\bfnm{H.}\binits{H.}}
(\byear{2011}).
\btitle{The influence of synoptic airflow on UK daily precipitation extremes.
Part I: Observed spatio-temporal relationships}.
\bjournal{Climate Dynamics}
\bvolume{36}
\bpages{261--275}.
\bptok{imsref}
\end{barticle}
\endbibitem
\bibitem[\protect\citeauthoryear{Ribatet}{2011}]{SpatialExtremes}
\begin{bmisc}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Ribatet},~\bfnm{M.}\binits{M.}}
(\byear{2011}).
\bhowpublished{SpatialExtremes: Modelling spatial extremes. R package version 1.8-1}.
\bptok{imsref}
\end{bmisc}
\endbibitem
\bibitem[\protect\citeauthoryear{Sang and Gelfand}{2010}]{Sang10}
\begin{barticle}[mr]
\bauthor{\bsnm{Sang},~\bfnm{Huiyan}\binits{H.}} \AND
\bauthor{\bsnm{Gelfand},~\bfnm{Alan~E.}\binits{A.~E.}}
(\byear{2010}).
\btitle{Continuous spatial process models for spatial extreme values}.
\bjournal{J. Agric. Biol. Environ. Stat.}
\bvolume{15}
\bpages{49--65}.
\bid{doi={10.1007/s13253-009-0010-1}, issn={1085-7117}, mr={2755384}}
\bptok{imsref}
\end{barticle}
\endbibitem
\bibitem[\protect\citeauthoryear{Sillmann et~al.}{2011}]{sillmann2011}
\begin{barticle}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Sillmann},~\bfnm{J.}\binits{J.}},
\bauthor{\bsnm{Croci-Maspoli},~\bfnm{M.}\binits{M.}},
\bauthor{\bsnm{Kallache},~\bfnm{M.}\binits{M.}} \AND
\bauthor{\bsnm{Katz},~\bfnm{R.~W.}\binits{R.~W.}}
(\byear{2011}).
\btitle{Extreme cold winter temperatures in Europe under the influence
of North
Atlantic atmospheric blocking}.
\bjournal{Journal of Climate} \bvolume{24} \bpages{5899--5913}.
\bid{doi={10.1175/2011JCLI4075.1}}
\bptok{imsref}
\end{barticle}
\endbibitem
\bibitem[\protect\citeauthoryear{Weller, Cooley and Sain}{2012}]{weller2011}
\begin{bmisc}[auto:STB|2012/03/21|07:41:58]
\bauthor{\bsnm{Weller},~\bfnm{G.}\binits{G.}},
\bauthor{\bsnm{Cooley},~\bfnm{D.}\binits{D.}} \AND
\bauthor{\bsnm{Sain},~\bfnm{S.}\binits{S.}}
(\byear{2012}).
\bhowpublished{An investigation of the pineapple express phenomenon via bivariate
extreme value theory. \textit{Environmetrics}.
To appear. DOI:\doiurl{10.1002/env.2143}.}
\bptok{imsref}
\end{bmisc}
\endbibitem
\end{thebibliography}
\end{document}
|
\begin{document}
\title[Continuous crystal and Coxeter groups]{Continuous crystal and Duistermaat-Heckman measure for Coxeter
groups.}
\author{Philippe Biane}
\address{CNRS, IGM, Universit\'e Paris-Est,
77454 Marne-la-Vall\'ee Cedex 2
}
\email{[email protected]}
\author{ Philippe Bougerol}
\address{Laboratoire de Probabilit\'es et mod\`eles al\'eatoires,
Universit\'e Pierre et Marie Curie, 4, Place Jussieu, 75005 Paris,
FRANCE}
\email{[email protected]}
\author{ Neil O'Connell}
\address{Mathematics Institute,
University of Warwick,
Coventry CV4 7AL, UK}
\email{[email protected]}
\subjclass{Primary 20F55, 14M25; Secondary 60J65}
\date{\today}
\thanks{ Research of the third author supported in part by Science Foundation
Ireland
Grant No. SFI04/RP1/I512.}
\begin{abstract}We introduce a notion of continuous crystal analogous, for general
Coxeter groups, to the combinatorial crystals introduced by Kashiwara in
representation theory of Lie algebras.
We explore their main properties in the case of finite Coxeter
groups, where we use a generalization of the Littelmann path model to show the
existence of the
crystals. We introduce a remarkable measure, analogous to the
Duistermaat-Heckman measure, which we interpret in terms of Brownian motion.
We also show that the Littelmann path operators can be derived from simple
considerations on Sturm-Liouville equations.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{}
The aim of this paper is to introduce a notion of continuous crystals for Coxeter
groups, which are not necessarily Weyl groups. Crystals are
combinatorial objects,
which have been associated by Kashiwara to Kac-Moody algebras, in order to
provide a combinatorial model for the representation theory of these algebras, see e.g.
\cite{H-K}, \cite{Joseph-book}, \cite{Joseph-notes}, \cite{kashbook} for an
introduction to this theory. The crystal graphs defined by Kashiwara turn out to be
equivalent to certain other graphs, constructed independently by Littelmann, using his
path model. The approach of Kashiwara to the crystals is through representations of
quantum groups and their ``crystallization'', which is the process of letting the
parameter $q$ in the quantum group go to zero. This requires representation theory and
therefore does not make sense for realizations of arbitrary Coxeter groups.
On the other hand, as it was realized in a previous paper \cite{bbo},
Littelmann's model can be adapted to fit with non-crystallographic
Coxeter groups, but the price to pay is that, since there is no lattice invariant under
the action of the group, one can only define a continuous version of the path model,
namely of the Littelmann path operators (see however the recent preprint
\cite{Joseph-penta}, which has appeared when this paper was under revision). In this continuous model, instead of
the Littelmann path
operators $e_i,f_i$ we have continuous semigroups $e^t_i,f^t_i$ indexed by
nonnegative real numbers $t\geq 0$. In the crystallographic case it is possible to
think of these continuous crystals as ``semi-classical limits'' of the combinatorial
crystals, in much the same way as the coadjoint orbits arise as
semi-classical limits of the
representations of a compact semi-simple Lie group.
These continuous path operators, and the
closely related Pitman transforms, were used in \cite{bbo} to investigate symmetry
properties of Brownian
motion in a space where a finite Coxeter group acts, with applications in particular
to the motion of eigenvalues of matrix-valued Brownian motions.
In this paper, which is a sequel to \cite{bbo}, but can for the most part be read
independently, we define continuous crystals and start investigating their main
properties. As for now the theory works well for finite Coxeter groups, but there are
still several difficulties to extend it to infinite groups. This
theory allows us to define objects which are analogues to simplified versions of
the Schubert varieties (or Demazure-Littelmann modules)
associated with semi-simple Lie groups. We hope these objects
might help in certain questions concerning Coxeter groups, such as, for example,
the Kazhdan-Lusztig polynomials.
\subsection{}
This paper is organized as follows.
The next section contains the main definition, that
of a continuous crystal associated with
a realization of a Coxeter group. We establish the main properties of these objects,
following closely the exposition of Joseph in \cite{Joseph-notes}. It would have been
possible to simply refer to \cite{Joseph-notes} for most of this section,
however, for the convenience of the reader, and also for convincing ourselves that
everything from the crystallographic situation goes smoothly to the continuous
context, we have preferred to write everything down. The main body of the proof
is relegated to an appendix in order to ease the reading of the paper.
The main result of this section is theorem \ref{thmuniq}, a uniqueness result for
continuous crystals, analogous to the one in \cite{Joseph-notes}.
In section 3 we introduce the
path operators and establish their most important properties.
Our approach to the path model is different from that in Littelmann
\cite{littel},
in that we base our exposition on the Pitman transforms,
which are defined from scratch. These transforms satisfy
braid relations, which were proved in \cite{bbo}, and which play a prominent role.
Using these operators, the set of continuous paths is endowed with a crystal structure and the
continuous analogues of the Littelmann modules are introduced as ``connected components'' of this crystal (see the discussion following proposition \ref{prop:exis}, definition \ref{dem-littel} and
theorem \ref{thm:exis}).
Our definition makes sense for arbitrary Coxeter groups, but we are able to prove significant properties of these only in the case of finite Coxeter groups. It
remains an interesting and challenging problem to extend these properties to the
general case.
Continuous Littelmann modules can be parameterized in several ways by polytopes, corresponding to different reduced decompositions of an element in the Coxeter group.
In the case of Weyl groups, these are the
Berenstein-Zelevinsky polytopes (see \cite{beze2})
which contain the Kashiwara coordinates on the crystals. In section 4
we state some properties of these parametrizations. In theorem \ref{piecewise} we prove that two such parametrizations are related by a piecewise linear transformation, and in theorem
\ref{theo_poly} we show that the polytopes can be obtained by the intersection of a cone
depending only on the element of the Coxeter group, and a set of inequalities which depend on the dominant path. Furthermore, we provide explicit equations for the cone in the dihedral case (in proposition \ref{dihedralcone}).
In theorem \ref{uniq_iso} we prove that the crystal associated with a Littelmann module depends only on the end point of the dominant path, then in theorem \ref{uniq_closed}
we obtain the existence and uniqueness of a family of highest weight normal continuous crystals. We show that the Coxeter group acts on each Littelmann module (theorem \ref{theo:action}).
We introduce the Sch\"utzenberger involution in section \ref{schutz} and use it to give a direct combinatorial proof of the commutativity of the tensor product of continuous crystals (theorem \ref{crysiso}).
We think that
even in the crystallographic case our treatment sheds some light on these topics.
In section 5, we introduce an analogue of the Duistermaat-Heckman measure, motivated by a
result of Alexeev and Brion \cite{ab}. We prove several interesting properties of
this measure, in particular, in theorem \ref{p-2}, an analogue of the Harish-Chandra formula.
The Laplace transform appearing in this formula is a generalized Bessel function. It is shown in theorem \ref{prod_J} to satisfy a product formula, giving a positive answer to a question of R\"osler.
The Duistermaat-Heckman measure is intimately linked with Brownian motion,
and in corollary \ref{cor_cone} we give a Brownian proof of the fact that the
crystal defined by the path model depends only on the final position of the
path.
The final section is of a quite different nature, and somewhat independent of the rest
of the paper. The Littelmann path operators have been introduced as a generalization,
for arbitrary root systems, of combinatorial operations on Young tableaux. Here we
show how, using some simple considerations on Sturm-Liouville equations, the Littelmann
path operators appear naturally. In particular this gives a concrete geometric basis
to the theory of geometric lifting which has been introduced by Berenstein and
Zelevinsky in \cite{beze2} in a purely formal way.
\section{Continuous crystal}
This section is devoted to introducing the main definition and first
properties of continuous crystals.
\subsection{Basic definition}
We use the standard references \cite{bo2}, \cite{humphreys} on Coxeter groups and their realizations.
A Coxeter system $(W, S)$ is a group $W$ generated by a finite set of involutions $S$ such that, if $m(s,s')$ is the order of $ss'$ then the relations
$$(ss')^{m(s,s')}=1$$
for $m(s,s')$ finite, give a presentation of $W$.
A realization of $(W,S)$ is given by a real vector space $V$ with dual
$V^\vee$, an action of $W$ on $V$,
and a subset $\{(\alpha_s,\alpha^\vee_s), s \in S\}$ of
$V \times V^\vee$ such that each $s\in S$ acts on $V$ by
the reflection given by
$$s(x)=x-\alpha_s^\vee(x)\alpha_s,\;\; x \in V,$$ so $\alpha_s^\vee(\alpha_s)=2$.
One calls $\alpha_s$ the simple root associated with $s\in S$ and
$\alpha_s^\vee$ its coroot.
We consider a realization of a Coxeter system $(W,S)$ in a real vector space $V$,
and the associated simple roots $\Sigma=\{\alpha_s, s \in S\}$ in $V$ and coroots $\{\alpha^\vee_s, s \in S\}$ in $V^\vee$.
The closed Weyl chamber is the convex cone $$\overline
C=\{v \in V; \alpha_s^\vee(v) \geq 0, \mbox{ for all } s\in S\}$$
thus the simple roots are positive on $\overline C$.
There is an order relation on $V$ induced by this cone, namely
$\lambda\leq \mu$ if and only if $\mu-\lambda\in \overline C$.
We adapt the definition of crystals due to Kashiwara (see, e.g., Kashiwara \cite{kash93}, \cite{kashbook}, Joseph \cite{Joseph-book}) to a continuous setting.
\begin{definition}
A continuous crystal is a set $B$ equipped with maps
\begin{eqnarray*}wt&:&B\to V,\\
\varepsilon_\alpha, \varphi_\alpha&:&B \to \mathbb R\cup\{-\infty\},\, \alpha \in \Sigma,\\
e_\alpha^r&:& B\cup\{{\bf 0}\}\to B\cup\{{\bf 0}\}, \, \alpha \in \Sigma, r \in \mathbb R,
\end{eqnarray*}
where $\bf 0$ is a ghost element, such that the following properties
hold, for all $\alpha \in \Sigma$, and $b\in B$
(C1) $\varphi_\alpha(b)=\varepsilon_\alpha(b)+\alpha^\vee(wt(b)),$
(C2) If $ e_\alpha^r(b)\not={\bf 0}$ then
\begin{eqnarray*}
\varepsilon_\alpha( e_\alpha^rb)&=&\varepsilon_\alpha(b)-r,\\
\varphi_\alpha( e_\alpha^rb)&=&\varphi_\alpha(b)+r,\\
wt( e_\alpha^rb)&=&wt(b)+r\alpha,
\end{eqnarray*}
(C3) For all $r \in \mathbb R, b\in B$ one has $e^r_\alpha({\bf 0})={\bf 0},e^0_\alpha(b)=b.$
If $ e_\alpha^r(b)\not={\bf 0}$ then, for all $s\in \mathbb R$,
$$ e_\alpha^{s+r}(b)= e_\alpha^s (e_\alpha^r(b)),$$
(C4) If $\varphi_\alpha(b)=-\infty$ then $e_\alpha^r(b)={\bf 0}$, for all $r \in \mathbb R, r\neq 0.$
\end{definition}
The point is that, in this definition, $r$ takes any real value, and not only discrete ones. Sometimes we write, for $r \geq 0$,
$$f_\alpha^r=e_\alpha^{-r}.$$
\begin{example}[The crystal $B_\alpha$] \label{exba}For each $\alpha \in \Sigma$, we define the crystal $B_\alpha$ as the set $\{b_\alpha(t), t \mbox{ is a nonpositive real number}\}$, with the maps given by
$$wt(b_\alpha(t))=t\alpha,\quad \varepsilon_\alpha(b_\alpha(t))=-t, \quad \varphi_\alpha(b_\alpha(t))=t,$$
$$e_\alpha^r(b_\alpha(t))=b_\alpha(t+r) \mbox{ if } r \leq -t \mbox{ and }e_\alpha^r(b_\alpha(t)) = {\bf 0} \mbox{ otherwise,}$$
and, if $\alpha ' \neq \alpha$,
$\varepsilon_{\alpha'}(b_\alpha(t))=-\infty, \,\varphi_{\alpha'}(b_\alpha(t))=-\infty,\,e^r_{\alpha'}(b_\alpha(t))={\bf 0},$ when $r \neq 0 $.
\end{example}
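Since every map in this example is explicit, it can be checked by machine. The following sketch is our own illustration (not part of the paper), assuming the rank-one realization $V=\mathbb R$ with $\alpha=1$ and $\alpha^\vee=2$ (so that $\alpha^\vee(\alpha)=2$); an element $b_\alpha(t)$ is encoded by the real number $t\leq 0$ and the ghost element ${\bf 0}$ by \texttt{None}.

```python
# Minimal numerical sketch of the crystal B_alpha, rank-one case:
# V = R, alpha = 1, alpha^vee = 2; b_alpha(t) encoded by t <= 0, ghost by None.

ALPHA, ALPHA_VEE = 1.0, 2.0

def wt(t):  return t * ALPHA     # wt(b_alpha(t)) = t alpha
def eps(t): return -t            # varepsilon_alpha(b_alpha(t)) = -t
def phi(t): return t             # varphi_alpha(b_alpha(t)) = t

def e(r, t):
    """e_alpha^r b_alpha(t) = b_alpha(t + r) if r <= -t, the ghost 0 otherwise."""
    if t is None:
        return None
    return t + r if r <= -t else None

t = -1.5
assert phi(t) == eps(t) + ALPHA_VEE * wt(t)                      # (C1)
assert eps(e(0.5, t)) == eps(t) - 0.5                            # (C2)
assert e(0.0, t) == t and e(0.25, e(0.5, t)) == e(0.75, t)       # (C3)
assert e(eps(t), t) is not None and e(eps(t) + 0.01, t) is None  # upper normality
```

The last line checks that $\varepsilon_\alpha(b)=\max\{r\geq 0;\, e_\alpha^r(b)\neq{\bf 0}\}$, i.e. that $B_\alpha$ is upper normal.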
\subsection{Morphisms}
\begin{definition}
Let $B_1$ and $B_2$ be continuous crystals.

1. A morphism of crystals $\Psi: B_1 \to B_2$ is a map $\Psi: B_1\cup \{{\bf 0}\} \to B_2 \cup \{\bf 0\}$ such that $\Psi({\bf 0})={\bf 0}$ and for all $\alpha \in \Sigma$ and $b \in B_1$,
$$
wt(\Psi(b))=wt(b),\, \varepsilon_\alpha(\Psi(b))=\varepsilon_\alpha(b),\,
\varphi_\alpha(\Psi(b))=\varphi_\alpha(b)
$$
and
$e_\alpha^r(\Psi(b))=\Psi(e_\alpha^r(b))$ when $e_\alpha^r(b)\in B_1$.

2. A strict morphism is a morphism $\Psi: B_1\to B_2$ such that $e_\alpha^r(\Psi(b))=\Psi(e_\alpha^r(b))$ for all $b\in B_1$.

3. A crystal embedding is an injective strict morphism.
\end{definition}
The morphism
$\Psi$ is called a {\it crystal isomorphism}
if there exists a crystal morphism $\Phi: B_2\to B_1$ such that $\Phi\circ \Psi=id_{B_1\cup\{\bf 0\}}$, and $\Psi\circ
\Phi=id_{B_2\cup\{\bf 0\}}$. It is then an embedding.
\subsection{Tensor product}\label{subtens}
Consider two continuous crystals $B_1$ and $B_2$ associated with $(W,S,\Sigma)$. We define the tensor product $B_1\otimes B_2$ as the continuous crystal with set $B= B_1\times B_2$, whose elements are denoted
$b_1\otimes b_2,\text{for}\, b_1\in B_1,b_2\in B_2$.
Let $\sigma=\varphi_\alpha(b_1)-\varepsilon_\alpha(b_2)$ where $(-\infty)-(-\infty)=0$, let $\sigma^+=\max(0,\sigma)$ and $\sigma^-=\max(0,-\sigma)$, then the maps defining the tensor product are given by the following formulas:
\begin{eqnarray*}wt(b_1\otimes b_2)&=&wt(b_1)+wt(b_2)\\\varepsilon_\alpha(b_1\otimes b_2)&=&\varepsilon_\alpha(b_1)+\sigma^-\\
\varphi_\alpha(b_1\otimes b_2)&=&\varphi_\alpha(b_2)+\sigma^+\\
e_\alpha^{r}(b_1\otimes b_2)&=&
e_\alpha^{\max(r,-\sigma)-\sigma^-}b_1\otimes e_\alpha^{\min(r,-\sigma)+\sigma^+} b_2.
\end{eqnarray*}
Here $b_1\otimes {\bf 0}$ and ${\bf 0}\otimes b_2$ are understood to be ${\bf 0}$. Notice that when $\sigma \geq 0$, one has $\varepsilon_\alpha(b_1\otimes b_2)=\varepsilon_\alpha(b_1)$ and
\begin{equation}\label{sigmapos}
e_\alpha^r(b_1\otimes b_2)=
e_\alpha^{r}b_1\otimes b_2, \mbox{ for all } r \in [-\sigma,+\infty[.
\end{equation}
As in the discrete case, one can check that the tensor product is associative (but not commutative) so we can define without ambiguity the tensor product of several crystals.
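The tensor product formulas above can also be exercised numerically. The sketch below is ours, again assuming the rank-one realization $V=\mathbb R$, $\alpha=1$, $\alpha^\vee=2$, applied to $B_\alpha\otimes B_\alpha$; a product element $b_\alpha(t_1)\otimes b_\alpha(t_2)$ is encoded by the pair $(t_1,t_2)$ and the ghost ${\bf 0}$ by \texttt{None}.

```python
# Numerical sketch of the tensor product rules on B_alpha (x) B_alpha,
# rank-one case V = R, alpha = 1, alpha^vee = 2.

def e1(r, t):
    """e_alpha^r on B_alpha: b_alpha(t + r) if r <= -t, ghost otherwise."""
    if t is None:
        return None
    return t + r if r <= -t else None

def wt(b):  return b[0] + b[1]                       # wt(b1) + wt(b2)
def eps(b): return -b[0] + max(0.0, -(b[0] + b[1]))  # eps(b1) + sigma^-

def tensor_e(r, b):
    """e_alpha^r (b1 (x) b2) via the tensor product formula of this subsection."""
    if b is None:
        return None
    t1, t2 = b
    sigma = t1 + t2        # sigma = varphi_alpha(b1) - varepsilon_alpha(b2) = t1 - (-t2)
    sp, sm = max(0.0, sigma), max(0.0, -sigma)
    c1 = e1(max(r, -sigma) - sm, t1)
    c2 = e1(min(r, -sigma) + sp, t2)
    return None if c1 is None or c2 is None else (c1, c2)

b = (-1.0, -0.5)
assert tensor_e(-3.0, b) == (-1.0, -3.5)                       # f^3 acts on the right factor
assert tensor_e(-1.0, tensor_e(-2.0, b)) == tensor_e(-3.0, b)  # e^r e^s = e^{r+s}
c = tensor_e(0.25, b)
assert wt(c) == wt(b) + 0.25 and eps(c) == eps(b) - 0.25       # (C2) for the product
```

Here $\sigma=t_1+t_2\leq 0$, so lowering operators act on the right factor first, as in the discrete Kashiwara rule.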
\subsection{Highest weight crystal}
A crystal $B$ is called upper normal when, for all $b\in B$,
$$\varepsilon_\alpha(b) = \max\{r \geq 0 ; e_\alpha^r(b)\not={\bf 0}\}$$ and is called lower normal if
$$\varphi_\alpha(b)=\max\{r \geq 0 ; e_\alpha^{-r}(b)\not={\bf 0}\}.$$
We call it normal (this is sometimes called seminormal by Kashiwara) when it is lower and upper normal. Notice that this implies that $\varepsilon_\alpha(b) \geq 0$ and $ \varphi_\alpha(b) \geq 0$.
We introduce the semigroup $\mathcal F$ generated by the $\{f_\alpha^r, \alpha \mbox { simple root}, r \geq 0\}$:
$$\mathcal F=\{f_{\alpha_1}^{r_1}\cdots f_{\alpha_k}^{r_k} , k \in \mathbb N^*, r_1,\cdots,r_k \geq 0, \alpha_1,\cdots , \alpha_k\in \Sigma\},$$
and, if $b$ is an element of a continuous crystal $B$, the subset
$\mathcal F(b)=\{f (b), f\in \mathcal F\}$ of $B$.
\begin{definition}
Let $\lambda \in V$, a continuous crystal $B(\lambda)$ is said to be of highest weight $\lambda$ if there exists $b_\lambda\in B(\lambda)$ such that $wt(b_\lambda)=\lambda$,
$e_\alpha^r(b_\lambda)={\bf 0}$, for all $r > 0$ and $\alpha \in \Sigma$ and such that $B(\lambda) =\mathcal F(b_\lambda).$
\end{definition}
For a continuous crystal with highest weight $\lambda$,
such an element $b_\lambda$ is unique,
and called the primitive element of $B(\lambda)$.
If the crystal is normal then $\lambda$ must be
in the Weyl chamber $\bar C$.
The vector $\lambda$ is a highest weight in the sense that,
for all $b \in B(\lambda)$, $wt(b) \leq \lambda$.
\subsection{Uniqueness.}
Following Joseph \cite{Joseph-book}, \cite{Joseph-notes} we introduce the following definition.
\begin{definition} Let $(B(\lambda), \lambda \in \bar C),$ be a family
of highest weight continuous crystals.
The family is closed if, for each $\lambda, \mu \in \bar C$, the subset $\mathcal F(b_\lambda\otimes b_\mu)$ of $B(\lambda)\otimes B(\mu)$ is a crystal isomorphic to
$B(\lambda+\mu)$.
\end{definition}
Joseph (\cite{Joseph-book}, 6.4.21) has shown in the Weyl group case,
for discrete crystals, that a closed family of highest weight
normal crystals is unique.
The analogue holds in our situation.
\begin{theorem}\label{thmuniq} For a realization of a Coxeter system
$(W,S)$, if a closed family $B(\lambda), \lambda \in \bar C,$
of highest weight continuous normal
crystals exists, then it is unique.\end{theorem}
The proof of the theorem, which follows closely
Joseph \cite{Joseph-notes}, is in the appendix \ref{prfuniq}.
\section{Pitman transforms and Littelmann path operators
for Coxeter groups}
In this section we recall the definition and properties of the Pitman transforms introduced in our previous paper
\cite{bbo}. We deduce from these properties the existence of the Littelmann operators; we then define continuous Littelmann modules, prove that they are continuous crystals, and make a first study of their parametrization.
\subsection{The Pitman transform}
Let $V$ be a real vector space, with dual space $V^{\vee}$.
Let $\alpha\in V$
and $\alpha^{\vee}\in V^{\vee}$ be such that $\alpha^{\vee}(\alpha)=2$.
The reflection $s_\alpha:V \to V$ associated to $(\alpha,\alpha^\vee)$ is the
linear map
defined, for $x \in V$, by
$$s_\alpha(x)=x- \alpha^{\vee}(x)\alpha.$$
For $T > 0$, let $C_T^0(V)$ be
the set of continuous paths $\eta:[0,T]\to V$ such that
$\eta(0)=0$, with the topology of uniform convergence. We have introduced
and studied in \cite{bbo} the following path transformation, similar to the
one defined by Pitman in \cite{pitman}.
\begin{definition}\label{pitman-transform}The Pitman
transform $\mathcal P_{\alpha}$ associated with $(\alpha,\alpha^\vee)$
is defined on $C_T^0(V)$ by the formula:
$$\mathcal P_{\alpha} \eta(t)=\eta(t)-\inf_{t\geq s\geq 0}\alpha^{\vee}(
\eta(s))\alpha,\qquad
T\geq t\geq 0.$$
\end{definition}
A path $\eta \in C_T^0(V)$ is called $\alpha$-dominant
when $\alpha^\vee(\eta (t))\geq 0$ for all $t \in [0,T]$.
The following properties of the Pitman transform are easily established.
\begin{prop}\label{pit}
({\it i}) The transformation $\mathcal P_{\alpha}:C_T^0(V) \to C_T^0(V)$ is continuous.
({\it ii}) For all $\eta \in C_T^0(V)$, the path
$\mathcal P_{\alpha} \eta $ is $\alpha$-dominant and
$\mathcal P_{\alpha} \eta =\eta $ if and only if $\eta $ is $\alpha$-dominant.
({\it iii})
The transformation $\mathcal P_{\alpha}$ is an idempotent, i.e. $\mathcal P_{\alpha}\mathcal P_{\alpha} \eta =\mathcal P_{\alpha} \eta $
for all
$\eta \in C_T^0(V)$.
({\it iv})
Let $\pi\in C_T^0(V)$ be $\alpha$-dominant, and let $ x\in[0, \alpha^\vee(\pi(T))]$, then
there exists a unique path $\eta$ in $C^0_T(V)$ such that
$\mathcal P_{\alpha} \eta =\pi$
and $\eta(T)=\pi(T)-x \alpha.$
Moreover for $0\leq t\leq
T,$
$$\eta(t)=\pi(t)-\min[x,\inf_{T\geq s\geq t}\alpha^\vee(\pi(s))]\alpha.$$
\end{prop}
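On a discretized path the Pitman transform is a one-line computation. The following sketch is our own illustration, assuming the rank-one realization $V=\mathbb R$ with $\alpha=1$, $\alpha^\vee=2$; it checks $\alpha$-dominance and idempotence numerically (both hold exactly for paths sampled on a grid, since the infimum of a piecewise-linear function is attained at a grid point).

```python
import numpy as np

def pitman(eta, alpha, alpha_vee):
    """Discretized Pitman transform:
    P_alpha eta(t) = eta(t) - inf_{0<=s<=t} alpha^vee(eta(s)) alpha,
    for a path eta sampled on a grid (shape (n, dim V))."""
    z = eta @ alpha_vee                      # alpha^vee(eta(t)) at each grid point
    return eta - np.outer(np.minimum.accumulate(z), alpha)

# rank-one realization: V = R, alpha = 1, alpha^vee = 2, so alpha^vee(alpha) = 2
alpha, alpha_vee = np.array([1.0]), np.array([2.0])
t = np.linspace(0.0, 1.0, 1001)
eta = np.column_stack([np.sin(6 * t)])       # a continuous path with eta(0) = 0

p = pitman(eta, alpha, alpha_vee)
assert np.all(p @ alpha_vee >= -1e-12)               # (ii): P_alpha eta is alpha-dominant
assert np.allclose(pitman(p, alpha, alpha_vee), p)   # (iii): P_alpha is an idempotent
assert np.allclose(p[0], 0.0)                        # the transformed path starts at 0
```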
\subsection{ Littelmann path operators }\label{littelpit}
Let $V,V^{\vee},\alpha,\alpha^{\vee}$ be as above.
Using proposition \ref{pit}, as in \cite{bbo}, we can define generalized
Littelmann path
operators (see \cite{littel}).
\begin{definition}\label{littelmanntransform}
Let $\eta \in C_T^0(V)$, and
$x\in \mathbb R$, then we define
$\mathcal E_{\alpha}^x\eta$ as the unique path such that $$\mathcal P_{\alpha} \mathcal E_{\alpha}^x\eta=\mathcal P_{\alpha}
\eta\quad
\text{and}\quad \mathcal E_{\alpha}^x\eta(T)=\eta(T)+x\alpha
$$
if $
-\alpha^{\vee}(\eta(T))+\inf_{0\leq t\leq T}\alpha^{\vee}(
\eta(t))\leq x\leq -\inf_{0\leq t\leq T}\alpha^{\vee}(
\eta(t))$ and $\mathcal E_{\alpha}^x\eta=\bf 0$ otherwise. The following formula holds
$$\mathcal E_{\alpha}^x\eta(t)=
\eta(t)-\min(-x,\inf_{t\leq s\leq T}\alpha^\vee(\eta(s))
-\inf_{0\leq s\leq T}\alpha^\vee(\eta(s)))\alpha$$
if $-\alpha^{\vee}(\eta(T))+\inf_{0\leq t\leq T}
\alpha^{\vee}(
\eta(t))\leq x\leq 0$, and
$$\mathcal E_{\alpha}^x\eta(t)=
\eta(t)-
\min(0,-x-\inf_{0\leq s\leq T}\alpha^\vee(\eta(s))+
\inf_{0\leq s\leq t}\alpha^\vee(\eta(s)))\alpha$$
if $0\leq x\leq -\inf_{0\leq t\leq T}
\alpha^{\vee}(
\eta(t))$.
\end{definition}
Here, as in the definition of crystals, $\bf 0$ is a ghost element.
The following result is immediate from the definition of the Littelmann
operators.
\begin{prop}
$\mathcal E_{\alpha}^0\eta=\eta$ and
$\mathcal E_{\alpha}^x \mathcal E_{\alpha}^y\eta=\mathcal E_{\alpha}^{x+y}\eta$ as long as
$\mathcal E_{\alpha}^y\eta\ne \bf 0$.
\end{prop}
We shall also use the notation $\mathcal F_{\alpha}^x=\mathcal E_{\alpha}^{-x}$ for $x\geq 0$, and denote by
$\mathcal H_{\alpha}^x$ the restriction of the operator $\mathcal F_{\alpha}^x$ to
$\alpha$-dominant paths.
Let $\pi$ be
an $\alpha$-dominant path in $C_T^0(V)$
and $0 \leq x \leq \alpha^\vee(\pi(T))$, then
$\mathcal H^x_\alpha\pi$ is the unique path in $C^0_T(V)$ such that
$$\mathcal P_{\alpha} \mathcal H^x_\alpha \pi =\pi$$
and $$\mathcal H^x_\alpha \pi(T)=\pi(T)-x \alpha.$$
Observe that in this equality
$$x=-\inf_{0\leq t\leq T}\alpha^{\vee}(\mathcal H^x_\alpha \pi(t)).$$
\subsection{Product of Pitman transforms}
Let $\alpha,\beta\in V$ and $\alpha^{\vee},\beta^{\vee}\in V^{\vee}$
be such
that $\alpha^{\vee}(\beta)< 0$ and $\beta^{\vee}(\alpha)< 0$. Replacing if necessary
$(\alpha,\alpha^\vee,\beta,\beta^\vee)$ by
$(t\alpha,\alpha^\vee/t,\beta/t,t\beta^\vee)$, which does not change $\mathcal P_{\alpha}$ and $\mathcal P_{\beta}$,
we will assume that
$\alpha^{\vee}(\beta)=\beta^{\vee}(\alpha)$.
We use the notations
$
\rho=-\frac{1}{2}\alpha^{\vee}(\beta)=
-\frac{1}{2}\beta^{\vee}(\alpha)$.
The following result is proved in \cite{bbo}.
\begin{theorem}\label{formula} Let $n$ be a positive integer, then if
$\rho\geq\cos\frac{\pi}{n}$,
\begin{eqnarray}\label{forpapb}
(\underbrace{\mathcal P_{\alpha}\mathcal P_{\beta}\mathcal P_{\alpha}\ldots}_{ \text{$n$ terms}} )
\pi(t)&=&\pi(t)-\inf_{t\geq s_0\geq s_1\geq \ldots\geq s_{n-1}\geq
0}\bigl(\sum_{i=0}^{n-1}T_i(\rho)Z^{(i)}(s_i)\bigr)\alpha\nonumber\\&&
-\inf_{t\geq s_0\geq s_1\geq \ldots\geq s_{n-2}\geq
0}\bigl(\sum_{i=0}^{n-2}T_i(\rho)Z^{(i+1)}(s_i)\bigr)\beta
\end{eqnarray}
where $Z^{(k)}(t)=\alpha^{\vee}( \pi(t))$ if $k$ is even and $Z^{(k)}(t)=\beta^{\vee}( \pi(t))$ if $k$ is odd. The
$T_k(x)$ are the Tchebycheff polynomials defined by
\begin{equation}\label{Tcheb}
T_0(x)=1,\, T_1(x)=2x, \,
2xT_k(x)=T_{k-1}(x)+T_{k+1}(x) \text{ for}\ k\geq 1.
\end{equation}
\end{theorem}
The Tchebycheff polynomials satisfy
$T_k(\cos\theta)=\frac{\sin(k+1)\theta}{\sin\theta}$ and, in particular,
under the assumptions on $\rho$ and $n$,
$T_k(\rho)\geq 0$ for all $k\leq n-1$.
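The polynomials of (\ref{Tcheb}) are the Chebyshev polynomials of the second kind, and the product formula and positivity claim above lend themselves to a quick numerical check; a minimal sketch (the function name \texttt{tcheb} is ours):

```python
import math

def tcheb(k, x):
    """T_k(x) with T_0 = 1, T_1 = 2x and 2x T_k = T_{k-1} + T_{k+1},
    i.e. the normalization used in (\\ref{Tcheb})."""
    vals = [1.0, 2.0 * x]
    while len(vals) <= k:
        vals.append(2.0 * x * vals[-1] - vals[-2])
    return vals[k]
```

For instance $T_2(x)=4x^2-1$ and $T_k(\cos\theta)\sin\theta=\sin(k+1)\theta$.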
An important property of the Pitman transforms is the following corollary
(see \cite{bbo}).
\begin{theorem}\label{braid}({\it Generalized braid relations for the Pitman
transforms.})
Let $\alpha,\beta\in V$ and $\alpha^{\vee},\beta^{\vee}\in V^{\vee}$
be such that
$\alpha^{\vee}(\alpha)=\beta^{\vee}(\beta)=2$, and
$\alpha^{\vee}(\beta)< 0,
\beta^{\vee}(\alpha)< 0$ and
$\alpha^{\vee}(\beta)
\beta^{\vee}(\alpha)=4\cos^2\frac{\pi}{n}$, where $n\geq 2$
is some integer.
Then
$$\mathcal P_{\alpha}\mathcal P_{\beta}\mathcal P_{\alpha} \ldots =\mathcal P_{\beta}\mathcal P_{\alpha}\mathcal P_{\beta}\ldots$$ where there are $n$
factors in each product.
\end{theorem}
\subsection{Pitman transforms for Coxeter groups}\label{sec_cox}
Let $(W,S)$ be a Coxeter system, with a realization in the space $V$.
For a simple reflection $s$,
denote by
$\mathcal P_{\alpha_s}$ or $\mathcal P_{s}$ the Pitman transform associated with the pair
$(\alpha_s,\alpha^\vee_s)$.
From theorem \ref{braid} and Matsumoto's lemma (\cite{bo2}, Ch.~IV, no.~1.5,
Prop.~5) we deduce (\cite{bbo}):
\begin{theorem}\label{braidP} Let $w=s_{1}\cdots s_{r}$ be a reduced decomposition of $w \in W$, with $s_1,\cdots,s_r \in S$. Then
$$\mathcal P_w:=\mathcal P_{s_{1}}\cdots\mathcal P_{s_{r}}$$
depends only on $w$ and not on the chosen decomposition.
\end{theorem}
When $W$ is finite, it has a unique longest element, denoted by $w_0$.
The transformation $\mathcal P_{w_0}$ plays a fundamental role in the sequel.
The following result is proved in \cite{bbo}.
\begin{prop} When $W$ is finite, for any path $\eta \in C_T^0(V)$, the path
$\mathcal P_{w_0}\eta$ takes values in the closed Weyl chamber $\overline
C$. Furthermore
$\mathcal P_{w_0}$ is an idempotent and
$\mathcal P_w\mathcal P_{w_0}=\mathcal P_{w_0}\mathcal P_w=\mathcal P_{w_0}$ for all $w \in W$.
\end{prop}
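The braid relation and the two properties of $\mathcal P_{w_0}$ stated above can be checked numerically in the $A_2$ case ($n=3$). The sketch below assumes a concrete realization, $\alpha=(1,0)$ and $\beta=(-1/2,\sqrt3/2)$ in $\mathbb R^2$ with $\alpha^\vee(v)=2\langle\alpha,v\rangle$, and dense sampling of a smooth path; the identities then hold up to discretization error.

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3.0) / 2.0])   # alpha^vee(beta) = beta^vee(alpha) = -1

def pitman(eta, root):
    """Discretized Pitman transform: eta(t) - inf_{s<=t} root^vee(eta(s)) * root,
    with root^vee(v) = 2 <root, v> for a unit-length root."""
    run_inf = np.minimum.accumulate(2.0 * eta @ root)
    return eta - np.outer(run_inf, root)

t = np.linspace(0.0, 1.0, 20001)
eta = np.column_stack([np.sin(3.0 * t) - 2.0 * t, np.cos(2.0 * t) - 1.0 + t])

lhs = pitman(pitman(pitman(eta, alpha), beta), alpha)   # P_a P_b P_a eta
rhs = pitman(pitman(pitman(eta, beta), alpha), beta)    # P_b P_a P_b eta
```

The path `lhs` is (up to the mesh) $\mathcal P_{w_0}\eta$; it stays in the closed Weyl chamber and is fixed by both Pitman transforms.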
\subsection{The continuous crystal $C_T^0(V)$}
For any path $\eta$ in $C^0_T(V)$,
let
$wt(\eta)=\eta(T)$. Let $e_\alpha^r$ be the generalized Littelmann operator
$\mathcal E_{\alpha}^r$ defined in Definition \ref{littelmanntransform}, and
$$\varepsilon_\alpha(\eta) = \max\{r \geq 0 ;\, \mathcal E_\alpha^r(\eta)\neq{\bf 0}\}
=-\inf_{0 \leq t \leq T} \alpha^\vee(\eta(t)),$$
$$\varphi_\alpha(\eta)=\max\{r \geq 0 ;\, \mathcal E_\alpha^{-r}(\eta)\neq{\bf 0}\}
=\alpha^\vee(\eta(T))-\inf_{0 \leq t \leq T} \alpha^\vee(\eta(t)).$$
It is then easy to check the following.
\begin{prop} \label{prop:exis}
With the above definitions,
$C_T^0(V)$ is a normal continuous crystal. \end{prop}
We say that a path is dominant
if it takes its values in the closed
Weyl chamber $\overline C$.
\begin{definition} \label{dem-littel} Let $\pi \in C_T^0(V)$ be a dominant path, and $w\in W$.
We define $$L_\pi^w=\{\eta \in C_T^0(V);\ \mathcal P_{w}\eta=\pi\}.$$
\end{definition}
These sets are defined for arbitrary
Coxeter groups. We shall establish their main properties in the case of finite
Coxeter groups, where they are analogues of Demazure-Littelmann modules.
It remains an interesting problem to establish similar
properties in the general case.
From now on we assume that $W$ is finite, with longest element
$w_0$, and we denote
$L_\pi=L^{w_0}_\pi$, which we call the Littelmann module associated with $\pi$.
The set $L_\pi\cup\{{\bf 0}\}$ is a subset of $C_T^0(V)\cup\{{\bf 0}\}$ invariant under the Littelmann operators, thus:
\begin{theorem} \label{thm:exis}
For any dominant path $\pi$,
$L_\pi$ is a normal continuous crystal with highest weight $\pi(T)$.
\end{theorem}
{\it Proof.}
This follows from the results of \ref{sec_cox}, except the highest
weight property, which follows from the fact that, see (\ref{xy}),
any $\eta\in L_\pi$ can be written as
$$\eta=
\mathcal H_{s_{q}}^{x_q}\mathcal H_{s_{q-1}}^{x_{q-1}}\cdots\mathcal H_{s_{1}}^{x_1}\pi.\qquad\square$$
Two paths $\eta_1$ and $\eta_2$ are said to be connected if there exist simple roots $\alpha_1,\cdots, \alpha_k$ and real numbers $r_1,\cdots, r_k$ such that
$$ \eta_1=\mathcal E_{\alpha_1}^{r_1}\cdots \mathcal E_{\alpha_k}^{r_k}\eta_2.$$
This is equivalent to the relation $\mathcal P_{w_0}\eta_1=\mathcal P_{w_0}\eta_2$. A connected set in $C_T^0(V)$ is a subset in which any two elements are connected.
We see that the sets $\{L_\pi,\ \pi \mbox{ dominant}\}$ are the connected components of
$C_T^0(V)$. Moreover,
we will show in theorem \ref{uniq_iso} that the continuous crystals $L_{\pi_1}$ and $L_{\pi_2}$ are isomorphic if and only if $\pi_1(T)=\pi_2(T)$.
\subsection{Braid relations for the $\mathcal H$ operators}
Let $w \in W$ and fix a reduced decomposition $w=s_{ 1}\ldots s_{p}$.
For any path $\eta$ in $C_T^0(V)$, denote $\eta_p=\eta$ and for
$k=1,\ldots,p$,
$$\eta_{k-1}=\mathcal P_{s_{{k}}}\ldots \mathcal P_{s_{p}}\eta.$$
Then $\eta_{k-1}=\mathcal P_{s_{{k}}}\eta_{k}$ is $\alpha_{s_{k}}$-dominant, by
proposition \ref{pit} ({\it ii}) and
$$ \eta_{k}=\mathcal F_{s_{k}}^{x_k}\eta_{k-1}=\mathcal H_{s_{k}}^{x_k}\eta_{k-1}$$
where
\begin{equation}\label{kash}x_k=-
\inf_{0\leq t\leq T}\alpha_{s_{k}}^{\vee}( \eta_k(t)).
\end{equation}
Observe that \begin{equation}\label{inegx}x_k\in
[0,\alpha_{s_{k}}^{\vee}( \eta_{k-1}(T))] \end{equation}
and
$$ \eta_{k}(T)= \eta_{k-1}(T)-x_k\alpha_{s_{k}};$$
thus,
$$ \eta_{k}(T)=\eta_0(T)-\sum_{i=1}^k x_i\alpha_{s_{i}}.$$
Furthermore,
\begin{equation}\label{xy}
\eta_k=\mathcal H_{s_{k}}^{x_k}\mathcal H_{s_{{k-1}}}^{x_{k-1}}
\cdots \mathcal H_{s_{1}}^{x_1}\mathcal P_{w}\eta,
\end{equation}
and the numbers $(x_1,\ldots, x_k)$ are uniquely determined by this equation.
We consider two reduced decompositions
$$w=s_{1}\cdots s_{p}, w=s'_{1}\cdots s'_{p}$$
of $w$. Let
${\bf i}=(s_1,\cdots,s_p)$ and ${\bf j}=(s'_1,\cdots,s'_p)$.
Let $\eta:[0,T]\to V$ be a continuous path such that $\eta(0)=0$,
and let $(x_1,\ldots, x_p)$, respectively $(y_1,\ldots, y_p)$, be
the numbers determined by equation (\ref{xy}) for the two decompositions
${\bf i}$ and ${\bf j}$.
The following
theorem states that the correspondence between
the $x_n$'s and the $y_n$'s actually does not depend on the path $\eta$.
In other words, we have the following braid relation for the operators $\mathcal H$.
\begin{equation}
\mathcal H_{s_{p}}^{x_p}\cdots \mathcal H_{s_{2}}^{x_2}\mathcal H_{s_{1}}^{x_1}=\mathcal H_{s'_{p}}^{y_p}\cdots
\mathcal H_{s'_{2}}^{y_2}\mathcal H_{s'_{1}}^{y_1}.
\end{equation}
\begin{theorem}\label{piecewise}
There exists a piecewise linear continuous map $\phi_{\bf i}^{\bf j}:\mathbb R^p\to \mathbb R^p$ such that for all
paths $\eta \in C_T^0(V)$,
$$(y_1,\cdots, y_p)=\phi_{\bf i}^{\bf j}(x_1,\cdots,x_p).$$
\end{theorem}
{\it Proof.}
First step:
If the roots $\alpha,\beta$ generate a system of type $A_1\times A_1$
and $w=s_\alpha s_\beta=s_\beta s_\alpha$, then $\mathcal P_{\alpha}$ and $\mathcal P_{\beta}$ commute,
and
it is immediate that $x_1=y_2$, $x_2=y_1$.
Let $\alpha,\alpha^\vee$ and $\beta,\beta^\vee$ be such that
$$\alpha^\vee(\alpha)=\beta^\vee(\beta)=2,\quad
\alpha^\vee(\beta)=\beta^\vee(\alpha)=-1,$$
then
$\alpha$ and $\beta$ generate a root system of type $A_2$
and the braid relation is
$$w_0=s_\alpha s_\beta s_\alpha=s_\beta s_\alpha s_\beta.$$
Define
$$a\wedge b=\min(a,b),\quad a\vee b=\max(a,b).$$
We prove that the following map
\begin{equation}\label{transition}
\begin{array}{ll}
x_1=(y_2-y_1)\wedge y_3&\qquad y_1=(x_2-x_1)\wedge x_3\\
x_2=y_1+y_3&\qquad y_2=x_1+x_3\\
x_3=y_1\vee (y_2-y_3)&\qquad y_3=x_1\vee (x_2-x_3)\\
\end{array}
\end{equation}
satisfies the required properties.
Assume that, for $\pi=\mathcal P_{w_0}\eta$,
$$\eta=\mathcal H^{x_3}_\alpha\mathcal H^{x_2}_\beta\mathcal H^{x_1}_\alpha \pi.$$
Then define $\eta_2=\mathcal P_{\alpha}\eta,\ \eta_1=\mathcal P_{\beta}\mathcal P_{\alpha}\eta,
\ \eta_0=\pi=\mathcal P_{\alpha}\mathcal P_{\beta}\mathcal P_{\alpha}\eta$.
Using theorem \ref{formula} for computing the paths $\eta_i$ one gets
the explicit formulas.
$$
\begin{array}{rcl}
x_3&=&-\inf_{0\leq s\leq T}\alpha^{\vee}(\eta(s))\\
x_2&=&-\inf_{0\leq s_2\leq s_1\leq T}
\left(\beta^{\vee}(\eta(s_1))+\alpha^{\vee}(\eta(s_2))\right)\\
x_1&=&-\inf_{0\leq s_2\leq s_1\leq T}
\left(\alpha^{\vee}(\eta(s_1))+\beta^{\vee}(\eta(s_2))\right)-x_3.
\end{array}
$$
Similar formulas are obtained for the $y_i$ coming from the other reduced
decomposition, by exchanging the roles of $\alpha$ and $\beta$. The formula
(\ref{transition}) follows by inspection.
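The transition map (\ref{transition}) can also be checked numerically from the explicit double-infimum formulas just derived. The sketch below assumes the realization $\alpha=(1,0)$, $\beta=(-1/2,\sqrt3/2)$ in $\mathbb R^2$ with covectors $v\mapsto 2\langle\cdot,v\rangle$, and dense samples of a continuous path.

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3.0) / 2.0])

t = np.linspace(0.0, 1.0, 20001)
eta = np.column_stack([np.sin(3.0 * t) - 2.0 * t, np.cos(2.0 * t) - 1.0 + t])

def string_coords(f, g):
    """(x_1, x_2, x_3) for the reduced word s_f s_g s_f, from the explicit
    formulas: x_3 = -inf f, x_2 = -inf_t [g(t) + inf_{s<=t} f(s)],
    x_1 = -inf_t [f(t) + inf_{s<=t} g(s)] - x_3."""
    x3 = -f.min()
    x2 = -(g + np.minimum.accumulate(f)).min()
    x1 = -(f + np.minimum.accumulate(g)).min() - x3
    return x1, x2, x3

a = 2.0 * eta @ alpha          # alpha^vee(eta(t))
b = 2.0 * eta @ beta           # beta^vee(eta(t))
x1, x2, x3 = string_coords(a, b)
y1, y2, y3 = string_coords(b, a)
```

Up to discretization error, $(y_1,y_2,y_3)$ is the image of $(x_1,x_2,x_3)$ under (\ref{transition}).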
In the context of crystals, this result is well known and first
appeared in Lusztig \cite{luszt} and Kashiwara \cite{kash93}.
We observe that it can also be obtained from
the considerations of section 6,
see e.g. \ref{redbru}.
Second step: When the roots generate
a root system of type $A_n$,
using Matsumoto's lemma, one can pass from one reduced
decomposition to another by a sequence of
braid relations corresponding to the two cases of the first step.
Third step: We consider now the case where the
roots generate the dihedral group $I(m)$, and
$w=s_{\alpha}s_\beta...=s_{\beta} s_{\alpha}...$ is the longest element in
$W$.
We will use an embedding of the dihedral group $I(m)$
in the Weyl group of the system $A_{m-1}$,
see e.g. Bourbaki \cite{bo2}, ch.\ V, 6, Lemme 2.
Recall the Tchebycheff polynomials $T_k$
defined in (\ref{Tcheb}).
Let $\lambda=\cos(2\pi/m)$,
$a_1=a_2=1$ and, for $k \geq 1$, $$a_{2k}=T_{k-1}(\lambda),\quad
a_{2k+1}=T_{k}(\lambda)+T_{k-1}(\lambda);$$
then
\begin{equation}\label{rec_Ceby}a_{2k}+a_{2k+2}=a_{2k+1}, \,\,\,a_{2k-1}+a_{2k+1}=(1+a_3)a_{2k}.\end{equation}
Moreover $a_k >0$ when $k <m$ and $a_{m}=0$.
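The sequence $a_k$ and the relations (\ref{rec_Ceby}) are easy to verify numerically; a minimal sketch (the function names are ours):

```python
import math

def tcheb(k, x):
    """Normalized Tchebycheff polynomials: T_0 = 1, T_1 = 2x, 2x T_k = T_{k-1} + T_{k+1}."""
    vals = [1.0, 2.0 * x]
    while len(vals) <= k:
        vals.append(2.0 * x * vals[-1] - vals[-2])
    return vals[k]

def a_seq(m):
    """a_1 = a_2 = 1, a_{2k} = T_{k-1}(lambda), a_{2k+1} = T_k(lambda) + T_{k-1}(lambda),
    with lambda = cos(2 pi / m); computed for indices up to m + 1."""
    lam = math.cos(2.0 * math.pi / m)
    a = {1: 1.0}
    k = 1
    while 2 * k <= m + 1:
        a[2 * k] = tcheb(k - 1, lam)
        if 2 * k + 1 <= m + 1:
            a[2 * k + 1] = tcheb(k, lam) + tcheb(k - 1, lam)
        k += 1
    return a
```

For $m=5$ this gives the golden-ratio values $a_3=(1+\sqrt5)/2+\ldots$, and $a_5=0$ as claimed.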
In the Euclidean space $V=\mathbb R^{m-1}$ we choose
simple roots $\alpha_1,\cdots,\alpha_{m-1}$ which satisfy
$\langle \alpha_i, \alpha_{j} \rangle = a_{ij}$
where $a_{ij}=2$ if $i=j$, $a_{ij}=-1$ if $|i-j|=1$, $a_{ij}=0$ otherwise.
Let $\alpha_i^\vee=\alpha_i$ and $s_i=s_{\alpha_i}$. These generate a root
system of type $A_{m-1}$.
Let $\Pi$ be the two dimensional plane
defined as the set of $x \in V$ such that for all $n <m$,
$$\langle \alpha_n, x \rangle =a_n \langle \alpha_1, x \rangle$$ if $n$ is odd, and $$ \langle \alpha_n, x \rangle =a_n \langle \alpha_2, x \rangle$$
if $n$ is even. It follows from the relation (\ref{rec_Ceby})
that the vectors $$\alpha=\sum_{n \mbox{ \small odd}, n < m}a_n\alpha_n,
\,\,\, \beta=\sum_{n \mbox{ \small even}, n <m}a_n\alpha_n$$ are in $\Pi$.
Let $\alpha^\vee=2\alpha/||\alpha||^2,\,\beta^\vee=2\beta/||\beta||^2$ and
$$\tau_1=s_1s_3s_5\cdots s_{2p-1} ,$$
$$ \tau_2=s_2s_4s_6\cdots s_{2r},$$
where $2p=m-1, r=p$ when $m$ is odd and $2p=m,r=p-1$ when $m$ is even.
Let $w_0$ be the longest element in the Weyl group of $A_{m-1}$. Its length is $q=(m-1)m/2$. We first consider the case where $m$ is odd, $m=2p+1, q= pm$. Then
$$w_0=(\tau_1\tau_2)^p\tau_1, \mbox{ and } w_0=\tau_2(\tau_1\tau_2)^p$$
are two reduced decompositions of $w_0$. Since $(\tau_1\tau_2)^m=Id$, the
angle between $\alpha$ and $-\beta$ is $\pi/m$ and these vectors are
the simple roots of the dihedral system $I(m)$.
Let
$\gamma$ be a continuous path in $\Pi$, let $\gamma_p=\gamma$ and, for
$1 \leq k \leq p$, $\gamma_{k-1}=\mathcal P_{\alpha_{2k-1}}\gamma_{k}$ and
$$z_k(t)=-
\inf_{0\leq s\leq t}\alpha_{2k-1}^{\vee}(\gamma_{k}(s)).$$
\begin{lemma}\label{lemzx}
Let $\gamma$ be a continuous path with values in $\Pi$ and let
$$x(t)= -\inf_{0\leq s\leq t}\alpha^\vee(\gamma(s)).$$
Then, for all $k$, $z_k(t)=a_{2k-1}x(t)$ and
$$ {\mathcal P}_{\tau_1}\gamma(t)=
{\mathcal P}_{\alpha_1}{\mathcal P}_{\alpha_3}{\mathcal P}_{\alpha_5}
\cdots{\mathcal P}_{\alpha_{2p-1}}\gamma(t)
=\gamma(t)-\inf_{s \leq t}\alpha^{\vee}(\gamma(s))
\alpha={\mathcal P}_\alpha\gamma(t).$$
\end{lemma}
{\it Proof.} First, notice that $\alpha^\vee(\gamma(t))=\alpha_1^\vee(\gamma(t))$.
Since $\gamma$ is in $\Pi$, one has
$$z_p(t)=-\inf_{0\leq s\leq t}\alpha_{2p-1}^\vee(\gamma(s))=-\inf_{0\leq s\leq t}a_{2p-1}\alpha_{1}^\vee(\gamma(s))=a_{2p-1}x(t)$$
where we use the positivity of $a_{2p-1}$. Therefore
$$\gamma_{p-1}(t)=\mathcal P_{\alpha_{2p-1}}\gamma(t)=\gamma(t)+z_p(t)\alpha_{2p-1}=\gamma(t)+a_{2p-1}x(t)\alpha_{2p-1}.$$
Now, since the $\alpha_{2i+1}$ are orthogonal,
$$z_{p-1}(t)=-\inf_{0\leq s\leq t}\alpha_{2p-3}^\vee(\gamma_{p-1}(s))=-\inf_{0\leq s\leq t}\alpha_{2p-3}^\vee(\gamma(s))=a_{2p-3}x(t),$$
and
$$\gamma_{p-2}(t)=\mathcal P_{\alpha_{2p-3}}\gamma_{p-1}(t)=\gamma_{p-1}(t)+z_{p-1}(t)\alpha_{2p-3}$$
$$=\gamma(t)+x(t)(a_{2p-3}\alpha_{2p-3}+a_{2p-1}\alpha_{2p-1}).$$
Continuing, we obtain that
$$z_{k}(t)=a_{2k-1}x(t),$$
$$\gamma_k(t)=\gamma(t)+x(t)(a_{2k-1}\alpha_{2k-1}+\cdots+a_{2p-1}\alpha_{2p-1}).$$
Since $\alpha=\alpha_1+a_3\alpha_3+a_5\alpha_5+\cdots+a_{2p-1}\alpha_{2p-1}$ we obtain the lemma. $\square$
We have similarly, if $\gamma$ is a path in $\Pi$,
$$ {\mathcal P}_{\tau_2}\gamma(t)={\mathcal P}_{\alpha_2}
{\mathcal P}_{\alpha_4}{\mathcal P}_{\alpha_6}\cdots{\mathcal P}_{\alpha_{2r}}
\gamma(t)=\gamma(t)-\inf_{s \leq t}\beta^{\vee}(\gamma(s))\beta=
{\mathcal P}_\beta\gamma(t).$$
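Lemma \ref{lemzx} and its $\beta$-analogue above can be verified numerically for $m=5$. The sketch below assumes the standard realization $\alpha_i=e_i-e_{i+1}$ of $A_4$ in $\mathbb R^5$ (the text uses an abstract realization in $\mathbb R^{m-1}$), and computes an orthonormal basis of the plane $\Pi$ by a singular value decomposition.

```python
import numpy as np

m = 5
lam = np.cos(2.0 * np.pi / m)
a1, a2, a3, a4 = 1.0, 1.0, 2.0 * lam + 1.0, 2.0 * lam
al = {i: np.eye(5)[i - 1] - np.eye(5)[i] for i in range(1, 5)}   # A_4 simple roots

# the plane Pi: sum-zero x with <alpha_3, x> = a_3 <alpha_1, x> and
# <alpha_4, x> = a_4 <alpha_2, x>
constraints = np.vstack([np.ones(5), al[3] - a3 * al[1], al[4] - a4 * al[2]])
u, v = np.linalg.svd(constraints)[2][-2:]   # orthonormal basis of the null space

t = np.linspace(0.0, 1.0, 2001)
gamma = np.outer(np.sin(4.0 * t) - t, u) + np.outer(np.cos(3.0 * t) - 1.0, v)

alpha = a1 * al[1] + a3 * al[3]             # simple roots of I(5) inside Pi
beta = a2 * al[2] + a4 * al[4]

def pitman(eta, root):
    """Discretized Pitman transform with root^vee = 2 root / ||root||^2."""
    run_inf = np.minimum.accumulate(2.0 * eta @ root / (root @ root))
    return eta - np.outer(run_inf, root)
```

Since $\alpha_1\perp\alpha_3$ and $\alpha_2\perp\alpha_4$, the composed transforms collapse exactly as in the lemma, so the check holds to machine precision.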
Let ${\bf i}=(s_{i_1},\cdots,s_{i_q})=({\bf i_1},{\bf i_2},\cdots,{\bf i_{m}})$ and ${\bf j}=(s_{j_1},\cdots,s_{j_q})=({\bf j_1},{\bf j_2},\cdots,{\bf j_{m}})$ where
${\bf i_k}={\bf j_{k+1}}=(s_1,s_3,\cdots,s_{2p-1})$ when $k$ is odd and ${\bf i_k}={\bf j_{k+1}}=(s_2,s_4,\cdots,s_{2p})$ when $k$ is even.
We write explicitly $$w_0=(\tau_1\tau_2)^p\tau_1=s_{i_1}\cdots s_{i_q},\quad w_0=\tau_2(\tau_1\tau_2)^p=s_{j_1}\cdots s_{j_q}.$$ Let us denote by $\phi_{\bf i}^{\bf j}:\mathbb R^q\to \mathbb R^q$ the mapping given by the second step corresponding to these two reduced decompositions of $w_0$ in the Weyl group of $A_{m-1}$.
Let $\gamma$ be a path with values in $\Pi$.
If we consider it as a path in $V$ we can set
$\eta_q=\tilde \eta_q=\gamma$ and, for $n=1,2,\ldots ,q$,
$$\eta_{n-1}=\mathcal P_{\alpha_{i_{n}}}\eta_{n},\quad z_n=-
\inf_{0\leq t\leq T}\alpha_{i_n}^{\vee}( \eta_n(t)),$$
$$\tilde\eta_{n-1}=\mathcal P_{\alpha_{j_{n}}}\tilde\eta_{n}, \quad\tilde z_n=-
\inf_{0\leq t\leq T}\alpha_{j_n}^{\vee}( \tilde\eta_n(t)).$$
Then, by definition,
$$(\tilde z_1,\cdots, \tilde z_q)= \phi_{\bf i}^{\bf j}(z_1,\cdots,z_q).$$
We now consider $\gamma$ as a path in $\Pi$. We let
$$(u_1,u_2,\cdots,u_m)=(\alpha,\beta,\alpha,\beta,\cdots,\alpha)$$
and
$$(v_1,v_2,\cdots,v_m)=(\beta,\alpha,\beta,\alpha,\cdots,\beta).$$
In $I(m)$ the two reduced decompositions of the longest element are
$$s_{u_1}\cdots s_{u_m}=s_{v_1}\cdots s_{v_m}.$$We introduce
$\gamma_m=\tilde \gamma_m=\gamma$, and, for $n=1,2,\ldots ,m,$
$$\gamma_{n-1}=\mathcal P_{u_{n}}\ldots \mathcal P_{u_{m}}\gamma_m, \quad \tilde\gamma_{n-1}=\mathcal P_{v_{n}}\ldots \mathcal P_{v_{m}}\tilde\gamma_m,$$
$$x_n=-
\inf_{0\leq t\leq T}u_n^{\vee}( \gamma_n(t)), \quad \tilde x_n=-
\inf_{0\leq t\leq T}v_n^{\vee}(\tilde \gamma_n(t)).$$
It follows from lemma \ref{lemzx} and from its analogue with $\alpha$ replaced by $\beta$
that
$$z_1=a_1x_1,z_2=a_3x_1,\cdots,z_p=a_{2p-1}x_1$$ $$z_{p+1}=a_2x_2,z_{p+2}=a_4x_2,\cdots,z_{2p}=a_{2p}x_2$$
and more generally, for $k\geq 0$,
$$a_1^{-1}z_{2kp+1}=a_3^{-1}z_{2kp+2}=\cdots=a_{2p-1}^{-1}z_{2kp+p}=x_{2k+1},$$
$$a_2^{-1}z_{(2k+1)p+1}=a_4^{-1}z_{(2k+1)p+2}=\cdots=a_{2p}^{-1}z_{(2k+2)p}=x_{2k+2}.$$ This defines a linear map $$(x_1,\cdots,x_m)=g(z_1,z_2,\cdots,z_q).$$
Analogously, exchanging the roles of $\alpha$ and $\beta$,
we define a similar map $$ (\tilde x_1,\cdots,\tilde x_m)
=\tilde g(\tilde z_1,\tilde z_2,\cdots,\tilde z_q)$$
(for instance $\tilde z_1=a_2\tilde x_1,\tilde z_2=
a_4 \tilde x_1,\cdots$). Then we see that
$$(\tilde x_1,\cdots,\tilde x_m)=\phi (x_1,\cdots,x_m)$$
where
$\phi=\tilde g \circ \phi_{\bf i}^{\bf j} \circ g^{-1}.$
The proof when $m$ is even is similar (when $m=2p$,
$w_0=(\tau_1\tau_2)^p$ and $w_0=(\tau_2\tau_1)^p$
are two reduced decompositions of $w_0$).
This proves the theorem in the dihedral case.
Fourth step. We use Matsumoto's lemma to reduce the general case to
the dihedral case.
This ends the proof of theorem \ref{piecewise}.
$\square$
\begin{rem} Although the given proof is constructive,
it gives a complicated expression for $\phi_{\bf i}^{\bf j}$,
which can sometimes be simplified. In the dihedral case $I(m)$, for the Weyl
group case, i.e. $m=3,4,6$,
these expressions are given in Littelmann \cite{littel2}.
For $m=5$ it can be shown, by a tedious verification, that when $\alpha,\beta$ have the same length it is given by a similar formula. Thus,
for $m=2,3,4,5,6$, let $c_0=1,c_1=2\cos(\pi/m), c_{n+1}+c_{n-1}=c_1c_n$ for $n \geq 0,$
and
$$u=\max(c_kx_{k+1}-c_{k-1}x_{k+2},\ 0 \leq k \leq m-3),$$ $$ v=\min(c_kx_{k+2}-c_{k+1}x_{k+1},\ 1 \leq k \leq m-2).$$
Then the expressions are given by
\begin{eqnarray*}
y_m&=&\max(x_{m-1}-c_1x_m,u)\\
y_{m-1}&=&x_m+\max(x_{m-2}-c_2x_m,c_1u)\\
y_2&=&x_1+\min(x_3-c_2x_1,c_1v)\\
y_1&=&\min(x_2-c_1x_1,v)
\end{eqnarray*}
and
$$y_1+y_3+\cdots= x_2+x_4+\cdots$$
$$y_2+y_4+\cdots=x_1+x_3+\cdots$$
This determines completely $(y_1,\cdots,y_m)$ as a function of $(x_1,\cdots,x_m)$ when $m \leq 6$. For $m=7$ we believe
(and have checked by computer) that one must add
\begin{eqnarray*}y_7+y_5&=&x_6+\max(c_2x_1,x_4-c_3x_7,w)\\
w&=&\min(c_2u,x_4-c_2v,\max(x_6-c_1x_5+x_4+c_2u,c_1x_3-x_2-c_2v)).
\end{eqnarray*}
\end{rem}
We do not know of similar formulas for $m\geq 8$.
\begin{rem} The map given by theorem \ref{piecewise} is unique on the set of
all possible coordinates of paths. We will see in the next section that this set is a
convex cone. Since the value of the map $\phi_{\bf i}^{\bf j}$ is irrelevant outside this cone, we may say
that there exists a unique such map for each pair of reduced decompositions
${\bf i},{\bf j}$.
\end{rem}
\section{Parametrization of the continuous Littelmann module}
In this section we make a more in-depth study of the parametrization of the Littelmann modules.
We prove the analogue of Littelmann's independence theorem (the crystal structure depends only on the endpoint of the dominant path), then study the concatenation of paths, using it to prove existence and uniqueness of families of crystals. Finally, we define the action of the Coxeter group on the crystal, and the
Sch\"utzenberger involution.
\subsection{String parametrization of $C_T^0(V)$}
Let $(W,S,V,V^\vee)$ be a realization of the Coxeter system
$(W,S)$.
From now on we assume that $W$ is finite, with longest element
$w_0$.
For notational convenience, we
sometimes write $\alpha^\vee \eta$ instead of $\alpha^\vee (\eta)$.
Let $\eta \in L_\pi$, where $\pi$ is dominant, and let $w_0=s_1\ldots s_q$ be a reduced decomposition; then we have seen that
$$\eta=\mathcal H_{s_{q}}^{x_q}\mathcal H_{s_{q-1}}^{x_{q-1}}\cdots
\mathcal H_{s_{1}}^{x_1}\pi
$$
for a unique sequence $$\varrho_{\bf i}(\eta)=(x_1,\ldots, x_q).$$
Following Berenstein and Zelevinsky \cite{beze2}, we call
$\varrho_{\bf i}(\eta)$ the {\bf i}-string parametrization of $\eta$,
or the string parametrization if no confusion is possible.
We let $$C_{\bf i}^\pi=\varrho_{\bf i}(L_\pi);$$
this is the set of all the $(x_1,\cdots,x_q)\in \mathbb R^q$
which occur in the string parametrizations of the elements of $L_\pi$.
\begin{prop}\label{homeo}
The set $L_\pi$ is compact and the map $\varrho_{\bf i}$ is a bicontinuous bijection from $L_\pi$ onto its image $C_{\bf i}^\pi$.
\end{prop}
{\it Proof.}
The map $\varrho_{\bf i}$ has an inverse
$$\varrho_{\bf i}^{-1}(x_1,\cdots,x_q)=\mathcal H_{s_{q}}^{x_q}
\mathcal H_{s_{q-1}}^{x_{q-1}}\cdots\mathcal H_{s_{1}}^{x_1}\pi,
$$
hence it is bijective.
It is clear that $\varrho_{\bf i}$ and $\varrho_{\bf i}^{-1}$ are continuous. Since $\mathcal P_{w_0}$ is continuous, $L_\pi=\{\eta;\ \mathcal P_{w_0}(\eta)=\pi\}$ is closed. Using $\varrho_{\bf i}^{-1}$ we easily see that $L_\pi$ is equicontinuous; it is thus compact by Ascoli's theorem.
$\square$
We will study $C_{\bf i}^\pi$ in detail in the following sections.
\subsection{The crystallographic case}
In this subsection we consider the case of a Weyl group $W$ with a
crystallographic root system.
When $\alpha$ is a root and $\alpha^{\vee}$
its coroot,
then $\mathcal E_{\alpha}^1$ and $\mathcal E_{\alpha}^{-1}$ from definition \ref{littelmanntransform}
coincide with the Littelmann operators $e_{\alpha}$ and $f_{\alpha}$,
defined in
\cite{littel}.
Recall that a path $\eta$ is called integral in \cite{littel} if its endpoint $\eta(T)$
is in the weight
lattice and if, for each simple root $\alpha$,
the
minimum of the function $\alpha^{\vee}(\eta(t))$ over $[0,T]$ is an integer.
The class of
integral paths
is invariant under the Littelmann operators.
Let $\pi$ be a dominant integral path.
The discrete Littelmann module $D_\pi$ is defined
as the orbit of $\pi$ under the semigroup generated by all the transformations
$e_\alpha, f_\alpha$, for all simple roots $ \alpha $, so it is the set of integral paths in $L_\pi$.
Let ${\bf i}=(s_1,\cdots,s_q)$ where $w_0=s_1\cdots s_q$
is a reduced decomposition, then it follows from Littelmann's theory that
$$D_\pi=\{\eta \in L_\pi;\ x_1,\cdots, x_q \in \mathbb N\}=\varrho_{\bf i}^{-1}(\{(x_1,\cdots,x_q)
\in C_{\bf i}^\pi;\ x_1\in \mathbb N,\cdots, x_q \in \mathbb N\}).$$
Furthermore, the set $D_\pi$
has a crystal structure isomorphic to the Kashiwara
crystal associated with the highest weight $\pi(T)$.
On $ D_\pi$ the coordinates $(x_1,\cdots,x_q)$ are called the string or Kashiwara parametrization of the dual canonical basis. They are
described in Littelmann \cite{littel2} and Berenstein and Zelevinsky \cite{beze2}.
When restricted to $D_\pi$, the Pitman operator $\mathcal P_{\alpha}$ coincides with $e_\alpha^{max}$, i.e.
the operator sending $\eta$ to $e_\alpha^n\eta$, where $n=\max\{k;\ e_\alpha^k\eta\ne{\bf 0}\}$.
For any path $\eta:[0,T]\to V$
and $\lambda>0$ let $\lambda\eta$ be the path defined by $(\lambda\eta)(t)=\lambda\eta(t)$ for $0 \leq t \leq T$.
The following results are immediate.
\begin{prop}[Scaling property] \label{scal}
\begin{enumerate}[(i)]
\item For any $\lambda >0$, $\lambda L_\pi=L_{\lambda\pi}$.
\item Let $\eta\in C^0_T(V)$, $r\in\mathbb R,u>0$, then $\mathcal E_\alpha^{ru}(u\eta)=u\mathcal E_\alpha^{r}(\eta).$
\item Let $\pi$ be a dominant path and $a>0$, then
$C_{\bf i}^{a\pi}=aC_{\bf i}^\pi$.
\end{enumerate}
\end{prop}
\begin{prop} \label{dense}If $\pi$ is a dominant integral path, then
the set $$D_\pi(\mathbb Q)=\cup_{n \in \mathbb N}\frac{1}{n}D_{n\pi}$$
is dense in $L_\pi$.
\end{prop}
Actually, a good interpretation of $L_\pi$ in the Weyl group case is as the ``limit'' of $ \frac{1}{n}B_{n\pi}$ as $n \to \infty$. In the general Coxeter case only the limiting object is defined.
\subsection{Polyhedral nature of the continuous crystal for a Weyl group}
Let $W$ be a finite Weyl group, associated to a crystallographic
root system.
Let $D_\pi$ be the discrete Littelmann module associated
with an integral dominant path $\pi$. We fix a reduced decomposition
$w_0=s_{1}\cdots s_{q}$ of the longest element and let
${\bf i}=(s_1,\cdots,s_q)$. We have seen that if $\varrho_{\bf i}:L_\pi\to C_{\bf i}^\pi$ is the string parametrization of the continuous module $L_\pi$, then
$$D_\pi=\{\eta \in L_\pi;\ x_1,\cdots, x_q \in \mathbb N\}=
\varrho_{\bf i}^{-1}(\{(x_1,\cdots,x_q)\in C_{\bf i}^\pi;\ x_1\in \mathbb N,\cdots, x_q \in \mathbb N\}).$$
Therefore the set
$$\tilde C_{\bf i}^\pi=C_{\bf i}^\pi\cap \mathbb N^q$$
is the image of the discrete Littelmann module $D_\pi$, or equivalently,
the image of the Kashiwara crystal with highest
weight $\pi(T)$, under the string parametrization
of Littelmann \cite{littel2} and Berenstein and Zelevinsky \cite{beze2}.
Let
$$K_\pi=\{(x_1,\cdots,x_q)\in \mathbb R^q;\ 0 \leq x_r \leq \alpha_{i_r}^{\vee}(\pi(T)-\sum_{n=1}^{r-1}x_n\alpha_{i_n}),\ r=1,\cdots, q\}.$$
It is shown in Littelmann \cite{littel2} that there exists a
convex rational polyhedral cone $C_{\bf i}$ in $\mathbb R^q$, depending only on
$\bf i$, such that, for all dominant integral paths $\pi$,
$$ \tilde C_{\bf i}^\pi= C_{\bf i}\cap \mathbb N^q\cap K_\pi.$$
This cone is described explicitly in Berenstein and Zelevinsky \cite{beze2}.
Recall that $C_{\bf i}^\pi=\varrho_{\bf i}(L_\pi)$.
Using propositions \ref{scal} and \ref{dense} it is easy
to see that the following holds.
\begin{prop} For all dominant paths $\pi$,
$ C_{\bf i}^\pi= C_{\bf i}\cap K_\pi.$
\end{prop}
\subsection{The cone in the general case}
We now consider a general Coxeter system $(W,S)$, with $W$ finite, realized in
$V$.
\begin{theorem}\label{theo_poly}Let
$\bf i$ be a reduced decomposition of $w_0$, then there exists a unique
polyhedral
cone $C_{\bf i}$ in $\mathbb R^q$ such that, for any dominant path $\pi$,
$$ C_{\bf i}^\pi= C_{\bf i}\cap K_\pi.$$
In particular $C_{\bf i}^\pi$ depends only on $\lambda=\pi(T)$.
\end{theorem}
\end{theorem}
{\it Proof.} It remains to consider the non-crystallographic Coxeter systems. It is clearly
enough to consider irreducible systems.
We use their classification: $W$ is either a dihedral group $I(m)$ or
$H_3$ or $H_4$ (see Humphreys \cite{humphreys}), and the same trick as the one used
in the proof of theorem \ref{piecewise}.
We first consider the case $I(m)$ where $m=2p+1$, and we use the notation of the proof of
theorem \ref{piecewise}. Let
${\bf i}=(i_1,\cdots,i_q)$ be as in that proof, and
write $$w_0=(\tau_1\tau_2)^p\tau_1=s_{i_1}\cdots s_{i_q}$$ for the longest word in $A_{m-1}$.
Let $\gamma$ be a path with values in the plane $\Pi$.
If we consider $\gamma$ as a path in $V=\mathbb R^{m-1}$ we can set, for $q=(m-1)m/2$,
$\eta_q=\gamma$ and, for $n=1,2,\ldots ,q,$
$$\eta_{n-1}=\mathcal P_{\alpha_{i_{n}}}\eta_{n},\quad z_n=-
\inf_{0\leq t\leq T}\alpha_{i_n}^{\vee}( \eta_n(t)).$$
We can also consider $\gamma$ as a path in $\Pi$, with the realization of $I(m)$. Let
$${\bf u}=(u_1,u_2,\cdots,u_m)=(\alpha,\beta,\alpha,\beta,\cdots,\alpha).$$
Let
$\tilde\eta_m=\gamma$ and, for $n=1,2,\ldots ,m,$
$$\tilde\eta_{n-1}=\mathcal P_{u_{n}}\ldots \mathcal P_{u_{m}}\tilde\eta_m,\quad x_n=-
\inf_{0\leq t\leq T}u_n^{\vee}( \tilde\eta_n(t)).$$
We have seen that the map $$(x_1,\cdots,x_m)=g(z_1,z_2,\cdots,z_q),$$ is linear.
Let $C_{\bf i}$ be the cone associated with ${\bf i}$ in $A_{m-1}$,
then
$C_{\bf u}=g( C_{\bf i})$
is the cone in $\mathbb R^m$ associated with the reduced decomposition $ \alpha\beta\cdots\alpha$ of the longest word in $I(m)$.
Furthermore, for any dominant path $\pi$ in $\Pi$,
$ C_{\bf u}^\pi= C_{\bf u}\cap K_\pi.$
The proof when $m$ is even is similar.
In order to deal with the cases $H_3$ and $H_4$ it is enough, using an analogous proof, to embed these systems in some Weyl groups.
Let us first consider the case of $H_4$. We use the embedding of $H_4$ in $E_8$ (see \cite{moopat}).
Consider the following indexation of the simple roots of the system $E_8$:
$${\tt \setlength{\unitlength}{0.60pt}
\begin{picture}(320,150)
\thinlines \put(120,10){ System $E_8$}
\put(310,45){{$8$}}
\put(260,45){{$7$}}
\put(210,45){{$6$}}
\put(160,45){{$4$}}
\put(110,45){{$3$}}
\put(220,100){{$5$}}
\put(211,66){\line(0,1){33}}
\put(60,45){{$2$}}
\put(10,45){{$1$}}
\put(65,63){\line(1,0){40}}
\put(115,63){\line(1,0){40}}
\put(165,63){\line(1,0){40}}
\put(265,63){\line(1,0){40}}
\put(215,63){\line(1,0){40}}
\put(211,103){\circle{8}}
\put(211,63){\circle{8}}
\put(161,63){\circle{8}}
\put(261,63){\circle{8}}
\put(311,63){\circle{8}}
\put(11,63){\circle{8}}
\put(111,63){\circle{8}}
\put(61,63){\circle{8}}
\put(16,63){\line(1,0){40}}
\end{picture}}
$$
In the Euclidean space $V=\mathbb R^8$ the roots $\alpha_1,...,\alpha_8$
satisfy $\langle \alpha_i, \alpha_j \rangle= -1\mbox{ or } 0$ depending on whether they are linked or not. Let $ \Phi=(1+\sqrt{5})/2$. We consider the 4-dimensional subspace $\Pi$ of $V$ defined as the set of $x \in V$ orthogonal to $ \alpha_8-\Phi\alpha_1, \alpha_7-\Phi\alpha_2, \alpha_6-\Phi\alpha_3$ and $ \Phi\alpha_5- \alpha_4$.
Let $s_i$ be the reflection which corresponds to $\alpha_i$ and $$\tau_1=s_1s_8,\,\, \tau_2=s_2s_7, \,\, \tau_3= s_3s_6,\,\, \tau_4= s_4s_5.$$
One checks easily that $\tau_1,\tau_2,\tau_3,\tau_4$ generate $H_4$ and that the vectors $$\tilde \alpha_1=\alpha_1+\Phi\alpha_8,\tilde \alpha_2=\alpha_2+\Phi \alpha_7,\tilde \alpha_3=\alpha_3+\Phi \alpha_6,\tilde \alpha_4=\alpha_4+\Phi^{-1}\alpha_5$$ are in $\Pi$. If $\pi$ is a continuous path in $\Pi$, then, for $i=1,\cdots,4$, with $\tilde \alpha_i^\vee=2\tilde \alpha_i/||\tilde \alpha_i||^2$,
$$ {\mathcal P}_{\tau_i}\pi(t)=\pi(t)-\inf_{0 \leq s \leq t}\tilde\alpha_i^{\vee}(\pi(s))\tilde\alpha_i.$$
The case of $H_3$ is similar by using $D_6$:
$${\tt \setlength{\unitlength}{0.60pt}
\begin{picture}(263,150)
\thinlines \put(100,0){ System $D_6$}
\put(209,5){{$6$}}
\put(110,45){{$3$}}
\put(209,110){{$5$}}
\put(155,45){{$4$}}
\put(59,45){{$2$}}
\put(65,63){\line(1,0){40}}
\put(9,45){{$1$}}
\put(211,103){\circle{8}}
\put(211,23){\circle{8}}
\put(166,63){\line(1,-1){40}}
\put(166,63){\line(1,1){40}}
\put(161,63){\circle{8}}
\put(116,63){\line(1,0){40}}
\put(111,63){\circle{8}}
\put(61,63){\circle{8}}
\put(16,63){\line(1,0){40}}
\put(11,63){\circle{8}}
\end{picture}}
$$
In $V=\mathbb R^6$ we choose the roots $\alpha_1,...,\alpha_6$ with $\langle \alpha_i, \alpha_{j} \rangle =-1$ if they are linked. We define a 3-dimensional subspace $\Pi$ defined as the set of $x \in V$ orthogonal to $ \alpha_5-\mathcal Phi\alpha_1, \alpha_4-\mathcal Phi\alpha_2$ and $ \mathcal Phi\alpha_6- \alpha_3$.
Then the reflections
\begin{equation}\label{tau_sig}\tau_1=s_1s_5,\,\, \tau_2=s_2s_4, \,\, \tau_3= s_3s_6,
\end{equation}
generate $H_3$ and
$$\tilde \alpha_1=\alpha_1+\phi\alpha_5,\,\tilde \alpha_2=\alpha_2+\phi\alpha_4,\,\tilde \alpha_3=\alpha_3+\phi^{-1}\alpha_6$$
are in $\Pi$. $\square$
We will prove in corollary \ref{cor_cone}
that the cones $C_{\bf i}$ have the following description:
for any simple root $\alpha$, let ${\bf j}(\alpha)$ be a reduced decomposition of $w_0$ which begins with $s_\alpha$.
Then $$C_{\bf i}=\{x \in \mathbb R^q;\ \phi_{\bf i}^{{\bf j}(\alpha)}
(x)_1 \geq 0 \mbox{ for all simple roots } \alpha\}.$$
\subsection{The cone in the dihedral case}
In this section we provide explicit equations for the cone in the dihedral case,
following the approach of Littelmann \cite{littel2} in the Weyl group case.
\begin{lemma}\label{lemme_di} Let $\alpha, \beta\in V$,
$\alpha^\vee, \beta^\vee \in V^\vee$ and
$c=-\beta^\vee(\alpha)$. Consider a continuous path
$\eta \in C^0_T(V)$ and let
$\pi=\mathcal P_{\alpha}\eta$. Let
\begin{eqnarray*}
U&=&\min_{T \geq t \geq 0}[a \beta^\vee (\eta(t)) +b \min_{t\geq s\geq 0}
\alpha^\vee(\eta(s))],\\
V&=&\min_{T\geq t\geq 0}[a \min_{t\geq s\geq 0}\beta^\vee(\pi(s))+
(ac-b) \alpha^\vee (\pi(t))],\\ W&=&a
\min_{T\geq t\geq 0}\beta^\vee (\pi(t))-(ac-b)\min_{T\geq t\geq 0} \alpha^\vee(\eta(t)),
\end{eqnarray*}
where $a,b$ are real numbers such that $a \geq 0$ and $ac-b \geq 0$. Then $U=\min(V,W)$.
\end{lemma}
\proof Since
$\pi=\mathcal P_{\alpha}\eta$,
$$\beta^\vee(\eta(t))=\beta^\vee(\pi(t))-c\min_{t\geq s\geq 0} \alpha^\vee(\eta(s)),$$ thus
\begin{eqnarray*}
U
&=&\min_{T \geq t \geq 0}[a
\beta^\vee (\pi(t))+(b-ac)\min_{t\geq s\geq 0}\alpha^\vee(\eta(s))]\\
&=&\min_{T \geq t \geq 0}[
\min_{t\geq s\geq 0}a\beta^\vee (\pi(s))+(b-ac)\min_{t\geq s\geq 0} \alpha^\vee(\eta(s))],
\end{eqnarray*}
where we have used the fact that if $f,g:[0,T]\to {\mathbb R}$ are two continuous functions and $g$ is
non-decreasing, then
$$\min_{T\geq t\geq 0}[f(t)+g(t)]=\min_{T\geq t\geq 0}[\min_{t\geq s\geq 0}f(s)+g(t)].$$
Since
$\alpha^\vee(\pi(t))
\geq -\min_{t\geq s\geq 0}
\alpha^\vee(\eta(s)),$
$$
\min_{t\geq s\geq 0}a\beta^\vee (\pi(s))+(ac-b) \alpha^\vee(\pi(t))\geq
\min_{t\geq s\geq 0}a\beta^\vee (\pi(s))-(ac-b)\min_{t\geq s\geq 0}
\alpha^\vee(\eta(s)).$$ Let $t_0$ be the largest $t \leq T$ at which the minimum of the right hand side
is achieved. Suppose that $t_0 < T$.
If $\alpha^\vee(\pi(t_0)) > -\min_{t_0\geq s\geq
0}
\alpha^\vee(\eta(s))$ then $\min_{t\geq s\geq 0}
\alpha^\vee(\eta(s))$ is locally constant to the right of $t_0$. Since
$\min_{t\geq s\geq 0}a\beta^\vee (\pi(s))$ is non-increasing, it follows that
$t_0$ is not maximal. Therefore, when $t_0<T$,
$$\alpha^\vee(\pi(t_0)) = -\min_{t_0\geq s\geq
0}
\alpha^\vee(\eta(s))$$ and
$$U=\min_{T \geq t \geq 0}[
\min_{t\geq s\geq 0}a\beta^\vee
(\pi(s))-(ac-b)\min_{t\geq s\geq 0}\alpha^\vee(\eta(s))]=V\leq W.$$ When
$t_0=T,$ then $U=W\leq V$. Thus $U=\min(V,W)$.
$\square$
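The exchange-of-minima identity used in the proof above, $\min_{t}[f(t)+g(t)]=\min_t[\min_{s\leq t}f(s)+g(t)]$ for non-decreasing $g$, is easy to sanity-check on random samples; this is a numerical illustration only, with data of our own choosing:

```python
import numpy as np

# Exchange-of-minima check: for nondecreasing g,
# min_t [f(t) + g(t)] = min_t [ (min_{s<=t} f(s)) + g(t) ].
rng = np.random.default_rng(1)
for _ in range(100):
    f = rng.normal(size=50)
    g = np.sort(rng.normal(size=50))        # a nondecreasing sequence
    lhs = np.min(f + g)
    rhs = np.min(np.minimum.accumulate(f) + g)
    assert np.isclose(lhs, rhs)
```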
We consider a realization of the dihedral system $I(m)$ with two simple roots $\alpha, \beta$ and
$c:=-\alpha ^{\vee}(\beta) = -\beta^{\vee} (\alpha) =2\cos\frac{\pi}{m}.$
Let $$a_n=\frac{\sin(n\pi/m)}{
\sin (\pi/m)}.$$ Then $a_0=0$, $a_1=1$, $a_{n+1}+a_{n-1}=ca_n$, $a_n > 0$ if $1 \leq n \leq m-1$, and
$a_m=0$.
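The stated properties of $a_n$ (the three-term recursion, positivity, and the symmetry $a_{m-k}=a_k$ used later in this subsection) are quickly confirmed numerically; the value $m=7$ below is an arbitrary choice for illustration:

```python
import math

m = 7                                   # an arbitrary illustrative choice
c = 2 * math.cos(math.pi / m)
a = [math.sin(n * math.pi / m) / math.sin(math.pi / m) for n in range(m + 1)]

assert abs(a[0]) < 1e-12 and abs(a[1] - 1) < 1e-12 and abs(a[m]) < 1e-12
for n in range(1, m):
    assert abs(a[n + 1] + a[n - 1] - c * a[n]) < 1e-12   # a_{n+1} + a_{n-1} = c a_n
    assert a[n] > 0                                      # a_n > 0 for 1 <= n <= m-1
for k in range(m + 1):
    assert abs(a[m - k] - a[k]) < 1e-12                  # symmetry a_{m-k} = a_k
```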
Let
$w_0=s_{1}\ldots s_{m}$ be a reduced decomposition of the longest element $w_0\in W$, ${\bf i} =(s_1,\cdots, s_m)$ and $\alpha_1,\cdots,\alpha_m$ be the simple
roots associated with $s_1,\cdots,s_m$. This sequence is either $(\alpha,\beta,\alpha,\cdots)$ or
$(\beta,\alpha,\beta,\cdots)$. Clearly the two roots play a symmetric role, and the cones associated with these two decompositions are the same. We define $\alpha_0$ as the simple root not equal to $\alpha_1$.
As before, when $\eta \in C^0_T(V)$, we define $\eta_m=\eta$ and for $k=0,\cdots,m-1$,
$\eta_{k}=\mathcal P_{s_{{k+1}}}\ldots \mathcal P_{s_{m}}\eta,$
and
$$x_k=-
\min_{0\leq t\leq T}\alpha_{{k}}^{\vee}( \eta_k(t))\quad\text{for }k=1,\ldots,m.$$
\begin{prop} \label{dihedralcone}The cone for the dihedral system $I(m)$ is given by
$$C_{\bf i}=\{(x_1,\cdots,x_m)\in {\mathbb R}_+^m;
\frac{x_{m-1}}{a_{m-1}}\geq \frac{x_{m-2}}{a_{m-2}} \geq \cdots \geq
\frac{x_{1}}{a_{1}}
\}.$$
\end{prop}
\proof For any $p,k$ such that $0 \leq p \leq m$, $0 \leq k \leq p$, let
\begin{eqnarray*}V_k&=&\min_{T \geq t
\geq 0}[a_{k+1} \alpha_{p+1-k}^\vee (\eta_{p-k}(t))+a_{k}
\min_{t\geq s\geq 0}
\alpha_{{p-k}}^\vee(\eta_{p-k}(s))],\\W_k&=&a_{k} \min_{T\geq t\geq 0}\alpha_{p-k}^\vee (\eta_{p-k}(t))-a_{k+1} \min_{T\geq t\geq 0}\alpha_{p+1-k}^\vee (\eta_{p+1-k}(t))
.\end{eqnarray*}
Since $a_{k-1}+a_{k+1}=ca_k$, the lemma above gives that $V_k=\min(W_{k+1},V_{k+1})$. Therefore
$$V_0=\min(W_1,W_2, \cdots,W_p,V_p).$$
Notice that
$$V_p=\min_{T \geq t
\geq 0}[a_{p+1} \alpha_{1}^\vee (\eta_{0}(t))+a_{p}
\min_{t\geq s\geq 0}
\alpha_{{0}}^\vee(\eta_{0}(s))]=0$$ and $W_p=a_{p+1}x_1$
since $\eta_0=\mathcal P_{w_0}\eta$ is dominant. Furthermore
$$V_0=\min_{0\leq t\leq T}\alpha_{{p+1}}^{\vee}( \eta_p(t))$$
since $a_0=0$ and $a_1=1$. Hence,
\begin{equation}\label{eq_min}
\min_{0\leq t\leq T}\alpha_{{p+1}}^{\vee}( \eta_p(t))=\min(a_2x_{p}-a_1x_{p-1},\cdots,a_px_{2}-a_{p-1}x_1,a_{p+1}x_{1},0 ).
\end{equation}
The path $\eta_{ m-1}=\mathcal P_{\alpha_m}\eta$ is $\alpha_m$-dominant,
therefore $\alpha_{{m}}^{\vee}( \eta_{m-1}(t))\geq 0$
and it follows from $(\ref{eq_min})$ applied with $p=m-1$ that for $k=1,\cdots,m-2$
$$a_{m-k}x_{k+1}-a_{m-k-1}x_{k} \geq 0,$$
which is equivalent, since $a_{m-k}=a_k$ to
$$\frac{x_{m-1}}{a_{m-1}}\geq \frac{x_{m-2}}{a_{m-2}}
\geq \cdots \geq
\frac{x_{1}}{a_{1}}\geq 0.$$
Conversely, suppose that these inequalities hold, i.e.\ that for $k=1, \cdots,m-2$,
\begin{equation}\label{eqnumer}a_{k+1}x_{m-k}-a_kx_{m-k-1} \geq 0,\end{equation}
or equivalently
$$a_{m-k}x_{k+1}-a_{m-k-1}x_{k} \geq 0,$$
and that $(x_1,\cdots, x_m)\in K_\pi$ for some dominant path $\pi$.
Let us show that
$$\eta=\mathcal H_{\alpha_m}^{x_m}\cdots \mathcal H_{\alpha_1}^{x_1}\pi$$
is well defined. Since the string parametrization of $\eta$ is $x$, this will prove the proposition. It is enough to show, by induction on $p=0,\cdots,m$, that
$$\eta_p:=\mathcal H_{\alpha_p}^{x_p}\mathcal H_{\alpha_{p-1}}^{x_{p-1}}\cdots \mathcal H_{\alpha_1}^{x_1}\pi$$
is $\alpha_{p+1}$-dominant. This is clear for $p=0$ since $\eta_0=\pi$ is dominant. If we suppose that this holds up to $p-1$, we can apply (\ref{eq_min}) and write
$$ \min_{0\leq t\leq T}\alpha_{{p+1}}^{\vee}( \eta_p(t))=\min(a_2x_{p}-a_1x_{p-1},\cdots,a_px_{2}-a_{p-1}x_1,a_{p+1}x_{1},0 ).$$
Since $c \leq 2$, it is easy to see that
$$\frac{a_{n-1}}{a_n} \geq \frac{a_{n-2}}{a_{n-1}}$$
for $n \leq m-1$. Therefore, $$\frac{x_{k+1}}{x_{k}} \geq \frac{a_{m-k-1}}{a_{m-k}}\geq \frac{a_{p-k}}{a_{p-k+1}}$$ and $\alpha_{{p+1}}^{\vee}( \eta_p(t)) \geq 0$ for all $0 \leq t \leq T$. $\square$
If, in the definition of $V_k$ and $W_k$ in the proof above, we replace the sequence $(a_k)$ by the sequence $(a_{k+1})$, we obtain the following formula.
\begin{prop}
If $y_m= -\min_{T \geq t \geq 0} \alpha_{m-1}^\vee
(\eta_m(t))$, then
$$y_m=\max\{0,a_{m-1}x_{m-1}-a_{m-2}x_m,a_{m-2}x_{m-2}-a_{m-3}x_{m-1},\cdots,a_{2}x_{2}-a_{1}x_{3},a_{1}x_1\}.$$
\end{prop}
\subsection{Remark on Gelfand-Tsetlin cones}
In the Weyl group case, the continuous cone $C_{\bf i}$ appears in the description of toric degenerations (see Caldero \cite{cald}, Alexeev and Brion \cite{ab}). The polytopes $C_{\bf i}^{\pi}$ are called string polytopes in Alexeev and Brion \cite{ab}.
Notice that they have shown that the classical Duistermaat-Heckman
measure coincides with the one given below in Definition \ref{defDH}.
Explicit inequalities for the string cone $C_{\bf i}$ (and therefore for
the string polytopes) in the Weyl group case are given in full generality by
Berenstein and Zelevinsky in \cite[Thm.3.12]{beze2}. Earlier,
Littelmann \cite[Thm.4.2]{littel2} had described it
for the so-called ``nice decompositions'' of $w_0$. As explained in that paper, they were introduced to generalize the Gelfand-Tsetlin cones.
For the convenience of the reader, let us reproduce the description of $C_{\bf i}$ in the $A_n$ case, considered explicitly in Alexeev and Brion \cite{ab}, for
the standard reduced decomposition of the longest element in the symmetric group
$W=S_{n+1}$. This decomposition $ {\bf i}$ is
\begin{displaymath}
w_0=(s_1)(s_2s_1)(s_3s_2s_1)\dots(s_ns_{n-1}\dots
s_1),
\end{displaymath}
where $s_i$ denotes the transposition exchanging $i$ with $i+1$.
Let us use on $V$ the coordinates $x_{i,j}$ with $i,j\ge 1$,
$i+j\le n+1$. The string cone
is defined by
$$
x_{n,1}\ge0; \quad x_{n-1,2}\ge x_{n-1,1}\ge 0; \quad \dots \quad
x_{1,n} \ge \dots \ge x_{1,1} \ge 0,
$$
and to define the polyhedron $C_{\bf i}^\pi$ one has to add the
inequalities
\begin{displaymath}
x_{i,j} \le \alpha_j^{\vee}(\lambda) - x_{i,j-1} +
\sum_{k=1}^{i-1} (-x_{k,j-1} +2x_{k,j} - x_{k,j+1}),
\end{displaymath}
where $\lambda=\pi(T)$. A more familiar description of this cone
is in terms of Gelfand-Tsetlin patterns:
$$
g_{i,j} \ge g_{i+1,j} \ge g_{i,j+1}
$$
where $g_{0,j}=\lambda_j$ and $g_{i,j}= \lambda_j + \sum_{k=1}^i
(x_{k,j-1} - x_{k,j} )$ for $i,j\ge 1$, $i+j\le n+1$.
\subsection{Crystal structure of the Littelmann module}
We now return to the general case of a finite Coxeter group.
Let $\pi$ be a dominant path in $C^0_T(V)$.
The geometry of the crystal $L_\pi$ is easy to describe, using the sets $C_{\bf i}^\pi$ which parametrize $L_\pi$. We have seen
(theorem \ref{theo_poly}) that $C_{\bf i}^\pi$ depends on the path $\pi$ only through $\pi(T)$. We endow $C_{\bf i}^\pi$ with a continuous crystal structure in the following way.
Let ${\bf i}=(s_1,\cdots,s_q)$ where $w_0=s_{1}\cdots s_{q}$ is a reduced decomposition. If $x=(x_1,\cdots,x_q) \in C_{\bf i}^{\pi}$ we set
$$wt(x)=\pi(T)-\sum_{k=1}^q x_k \alpha_{s_k}.$$
If the simple root $\alpha$ is $\alpha_{s_1}$, we first define $e_{\alpha,{\bf i}}^r$ for $r \in \mathbb R$ by
$$e_{\alpha,{\bf i}}^r(x_1,x_2,\cdots,x_q)=(x_1+r,x_2,\cdots,x_q) \mbox { or } {\bf 0}$$
depending on whether $(x_1+r,\cdots,x_q)$ is in $C_{\bf i}^{\pi}$ or not. We let, for $b\in C_{\bf i}^{\pi}$, $$\varepsilon_\alpha(b) = \max\{r \geq 0 ; e_{\alpha,{\bf i}}^r(b)\not={\bf 0}\} $$ and
$$\varphi_\alpha(b)=\max\{r \geq 0 ; e_{\alpha,{\bf i}}^{-r}(b)\not={\bf 0}\}.$$
We now consider the case where $\alpha$ is not $\alpha_{s_1}$.
We choose a reduced decomposition
$w_0=s'_1s'_2 \cdots s'_q$ with $\alpha_{s'_1}=\alpha$ and let ${\bf j}=(s'_1,s'_2,\cdots, s'_q)$.
We can define $e_{\alpha,{\bf j}}^r$ on
$C_{\bf j}^{\pi}$, as well as $\varepsilon_\alpha, \varphi_\alpha$, as
above and transport this action to $C_{\bf i}^{\pi}$ by
the piecewise linear map $\phi_{\bf i}^{\bf j}$
introduced in theorem \ref{piecewise}. In other words
$$e_{\alpha,{\bf i}}^r=\phi_{\bf i}^{\bf j}\circ e_{\alpha,{\bf j}}^r \circ \phi_{\bf j}^{\bf i}.$$
Finally we define the crystal operators by $e_{\alpha}^r=e_{\alpha,{\bf i}}^r$. Then $\rho_{\bf i}:L_\pi \to C_{\bf i}^{\pi}$
is a crystal isomorphism. This shows first that our construction does not depend on the chosen decompositions $w_0=s'_1s'_2 \cdots s'_q$, and
second that the crystal structure on $L_\pi$ depends only on the endpoint $\pi(T)$ of the path $\pi$:
\begin{theorem}\label{uniq_iso}
If $\pi$ and $\bar \pi$ are
two dominant paths such that $\pi(T)=\bar \pi(T)$,
then the crystals $L_\pi$ and $L_{\bar\pi}$ are isomorphic.
\end{theorem}
This is the analogue of Littelmann's independence theorem (see \cite{littel}).
\begin{definition}
When $W$ is finite, for $\lambda \in \bar C$, we denote by $B(\lambda)$ the class of the continuous crystals isomorphic to $L_{\pi}$, where $\pi$ is a dominant path such that $\pi(T)=\lambda$.
\end{definition}
\subsection{Concatenation and closed crystals}
The concatenation $\pi\star\eta$ of two paths $\pi: [0,T] \to V$, $\eta:
[0,T] \to V$ is defined in Littelmann \cite{littel} as the
path
$\pi\star\eta:[0,T]\to V$ given by
$(\pi\star\eta)(t)=\pi(2t)$ and $(\pi\star\eta)(t+T/2)=\pi(T)+\eta(2t)$ for
$0
\leq t
\leq T/2$. The following theorem is instrumental in proving uniqueness.
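On uniformly sampled paths starting at $0$, the concatenation $\star$ amounts to juxtaposition with an affine shift; a minimal sketch, with a discrete representation of our own choosing:

```python
import numpy as np

def star(p1, p2):
    """Concatenation of two sampled paths starting at 0: run p1, then p1(T) + p2."""
    assert np.allclose(p1[0], 0) and np.allclose(p2[0], 0)
    return np.concatenate([p1, p1[-1] + p2[1:]])

p1 = np.array([0.0, 1.0, 0.5])
p2 = np.array([0.0, -1.0, 2.0])
joined = star(p1, p2)
assert np.isclose(joined[-1], p1[-1] + p2[-1])  # endpoints add: (p1*p2)(T) = p1(T)+p2(T)
```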
\begin{theorem}\label{closed}
The map $$\Theta:C^0_T(V)\otimes C^0_T(V)\to C^0_T(V)$$ defined by
$\Theta(\eta_1\otimes \eta_2)=\eta_1\star \eta_2$ is a crystal isomorphism.\end{theorem}
\proof We have to show that, for simple roots $\alpha$, for
$\eta_1\in L_{\pi_1}$,
$\eta_2\in L_{\pi_2}$, and for all $s \in \mathbb R$,
$$\Theta[e_{\alpha}^s(\eta_1\otimes \eta_2)]=
\mathcal E_{\alpha}^s(\eta_1\star \eta_2).$$
This is a purely one-dimensional statement, which uses only
one root, hence it follows from the similar fact for Littelmann
and Kashiwara crystals. For the convenience of the reader we
provide a proof.
For any $x \geq0$,
let $$\mathcal P_\alpha^x\eta(t)=\eta(t)-\min(0,x+ \inf_{0\leq s\leq t}
\alpha^\vee\eta(s))\alpha.$$
Thus, for $y=(-\inf_{0\leq s\leq T}
\alpha^\vee\eta(s)-x)\vee 0$,
\begin{equation}\label{eq:px}
\mathcal P_\alpha^x\eta=\mathcal E_\alpha^{y}\eta.
\end{equation}
\begin{lemma}\label{lem:Px} Let $\eta_1,\eta_2\in C^0_T(V)$,
then
\begin{enumerate}[(i)]
\item
$\mathcal P_\alpha(\eta_1 \star \eta_2)=\mathcal P_\alpha\eta_1\star \mathcal P_\alpha^x\eta_2$,
where $x=\alpha^\vee\eta_1(T)-\inf_{0\leq t\leq T}
\alpha^\vee\eta_1(t)$;
\item if $x\geq 0$, then $\mathcal P_\alpha\mathcal P_\alpha^x=\mathcal P_\alpha$;
\item if $x\geq 0$,
$y\in[0,\alpha^\vee\pi(T)]$, and
$\pi$ is an $\alpha$-dominant path, then $\mathcal P_\alpha^x\mathcal H_\alpha^y\pi=\mathcal H_\alpha^{x\wedge y}\pi$.
\end{enumerate}
\end{lemma}
\proof For all $t\in [0, T/2]$,
$\mathcal P_\alpha(\eta_1 \star \eta_2)(t)=\mathcal P_\alpha\eta_1(t).$
Furthermore,
$$\begin{array}{l}
\mathcal P_\alpha(\eta_1 \star \eta_2)((T+t)/{2})
\\ =(\eta_1 \star \eta_2)((T+t)/{2})-\min[\inf_{0\leq s\leq T}
\alpha^\vee\eta_1(s),\alpha^\vee\eta_1(T)+\inf_{0\leq s \leq t}
\alpha^\vee \eta_2(s)]\alpha\\
=\eta_1(T) -\inf_{0\leq s\leq T}
\alpha^\vee\eta_1(s)\alpha +\\ \qquad\eta_2(t)-
\min[0,\inf_{0\leq s \leq t} \alpha^\vee\eta_2(s)+
\alpha^\vee\eta_1(T)-\inf_{0\leq s\leq T}
\alpha^\vee\eta_1(s)]\alpha\\
=\mathcal P_\alpha\eta_1(T)+\mathcal P_\alpha^x \eta_2(t).
\end{array}
$$
This proves $(i)$, and
$(ii)$ follows from (\ref{eq:px}).
Furthermore, $\inf_{0\leq s\leq T}\alpha^{\vee}(\mathcal H_\alpha^y\mathcal Pi(s))=-y$,
therefore $(iii)$ follows also from (\ref{eq:px}).
$\square$
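Point $(i)$ of the lemma can be confirmed numerically on random discretized paths; here we work in rank one, with the illustrative choice $\alpha^\vee(x)=x$ and $\alpha=2$ (so that $\alpha^\vee(\alpha)=2$), both of our own choosing:

```python
import numpy as np

def P(eta):
    """Rank-one Pitman transform with alpha_vee = id, alpha = 2."""
    return eta - 2 * np.minimum.accumulate(eta)

def Px(eta, x):
    """P_alpha^x eta(t) = eta(t) - min(0, x + inf_{s<=t} eta(s)) * alpha, alpha = 2."""
    return eta - 2 * np.minimum(0.0, x + np.minimum.accumulate(eta))

def star(p1, p2):
    """Concatenation of sampled paths starting at 0."""
    return np.concatenate([p1, p1[-1] + p2[1:]])

rng = np.random.default_rng(2)
for _ in range(50):
    e1 = np.concatenate([[0.0], np.cumsum(rng.normal(size=30))])
    e2 = np.concatenate([[0.0], np.cumsum(rng.normal(size=30))])
    x = e1[-1] - e1.min()          # alpha_vee eta_1(T) - inf_t alpha_vee eta_1(t)
    # (i): P(eta_1 * eta_2) = P(eta_1) * P^x(eta_2)
    assert np.allclose(P(star(e1, e2)), star(P(e1), Px(e2, x)))
```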
\begin{prop}\label{prop_FF_star}
Let $\pi_1, \pi_2$ be $\alpha$-dominant paths,
$ x\in[0, \alpha^\vee\pi_1(T)]$,
$ y \in[0, \alpha^\vee\pi_2(T)]$,
$z=\min(y,\alpha^\vee\pi_1(T)-x)$ and $r=x+y-z$. Then
$$\mathcal H_{\alpha}^{x} \pi_1 \star \mathcal H_{\alpha}^{y}\pi_2=\mathcal H_{\alpha}^{r}
(\pi_1 \star \mathcal H_{\alpha}^{z} \pi_2).$$
\end{prop}
\proof Let $s= \alpha^\vee(\mathcal H_{\alpha}^{x} \pi_1(T))-
\inf_{0\leq t\leq T}
\alpha^\vee (\mathcal H_{\alpha}^{x} \pi_1)(t)$.
By lemma \ref{lem:Px}:
$$\mathcal P_\alpha(\mathcal H_{\alpha}^{x} \pi_1 \star \mathcal H_{\alpha}^{y}\pi_2)=
\mathcal P_\alpha(\mathcal H_{\alpha}^{x} \pi_1) \star \mathcal P_\alpha^s
( \mathcal H_{\alpha}^{y}\pi_2)$$
and $\mathcal P_\alpha^s\mathcal H_\alpha^y\pi_2=
\mathcal H_\alpha^{s\wedge y}\pi_2$. Since $ \mathcal P_\alpha
\mathcal H_{\alpha}^{x}\pi_1=\pi_1 $ one has
$$\mathcal P_\alpha(\mathcal H_{\alpha}^{x}
\pi_1 \star \mathcal H_{\alpha}^{y}\pi_2)=\pi_1 \star
\mathcal H_{\alpha}^{s\wedge y}\pi_2.$$
Notice that $s= \alpha^\vee( \pi_1(T))-x$.
On the other hand,
$$(\mathcal H_{\alpha}^{x} \pi_1 \star \mathcal H_{\alpha}^{y}\pi_2)(T)=
\mathcal H_{\alpha}^{x} \pi_1(T)+ \mathcal H_{\alpha}^{y}\pi_2(T)=
\pi_1(T)+\pi_2(T)-(x+y)\alpha,$$
$$(\pi_1\star \mathcal H_{\alpha}^{s\wedge y}\pi_2)(T)=\pi_1(T)+\pi_2(T)-(s\wedge y)\alpha,$$
and we know that $\eta=\mathcal H_\alpha^r\pi$ is characterized by the properties $\mathcal P_\alpha\eta=\pi$ and $\eta(T)=\pi(T)-r\alpha$. Therefore
the proposition holds, since $r+s\wedge y=x+y$. $\square$
We now prove that, for $\alpha\in \Sigma$,
$\eta_1\in L_{\pi_1}$,
$\eta_2\in L_{\pi_2}$, and all $s \in \mathbb R$,
$$\Theta[e_{\alpha}^s(\eta_1\otimes \eta_2)]=
\mathcal E_{\alpha}^s(\eta_1\star \eta_2).$$
Since $e_{\alpha}^se_{\alpha}^t=e_{\alpha}^{s+t}$ and
$\mathcal E_{\alpha}^s\mathcal E_{\alpha}^t=\mathcal E_{\alpha}^{s+t}$, it is sufficient to check
this for $s$ near $0$.
We write $\eta_1=\mathcal H_\alpha^x \pi_1$ and $\eta_2=\mathcal H_\alpha^y \pi_2$,
where $\pi_1=\mathcal P_\alpha(\eta_1), \pi_2=\mathcal P_\alpha(\eta_2)$ are
$\alpha$-dominant. By proposition \ref{prop_FF_star}, if
$z=\min(y,\alpha^\vee\pi_1(T)-x)$ and $r=x+y-z$, then
$$\mathcal E_{\alpha}^s(\eta_1\star \eta_2)=\mathcal E_{\alpha}^s(\mathcal H_{\alpha}^{x}
\pi_1 \star \mathcal H_{\alpha}^{y}\pi_2)=\mathcal E_{\alpha}^s\mathcal H_{\alpha}^{r}
(\pi_1 \star \mathcal H_{\alpha}^{z} \pi_2).$$
We first show that if
\begin{equation}\label{eq:bf0}\mathcal E_{\alpha}^s(\eta_1\star \eta_2)=
{\bf 0}\end{equation}
then $e_{\alpha}^s(\eta_1\otimes \eta_2)={\bf 0}$. For $|s|$
small enough, (\ref{eq:bf0}) holds only when $r=0$ and $s>0$, or when
$s<0$ and \begin{equation}\label{eq:rr}r=\alpha^\vee((\pi_1 \star
\mathcal H_{\alpha}^{z} \pi_2)(T))=\alpha^\vee\pi_1(T)+ \alpha^\vee\pi_2(T)-2z.
\end{equation}
If $r=0$, then $z=\min(y,\alpha^\vee\pi_1(T)-x)=x+y$, hence $x=0$ and
$y \leq \alpha^\vee\pi_1(T)$.
But
$$\varepsilon_\alpha(\eta_1\otimes \eta_2)=
\varepsilon_\alpha(\eta_1)-\min(\varphi_\alpha(\eta_1)-
\varepsilon_\alpha(\eta_2),0)=\max(2x+y-\alpha^\vee\pi_1(T), x)$$
(notice that, in general, when $\pi$ is $\alpha$-dominant,
$\varepsilon_\alpha(\mathcal H_\alpha^x\pi)=x$ and $\varphi_\alpha
(\mathcal H_\alpha^x\pi)=\alpha^\vee\pi(T)-x$).
Therefore $\varepsilon_\alpha(\eta_1\otimes \eta_2)=0$
and $e_{\alpha}^s(\eta_1\otimes \eta_2)={\bf 0}$.
Now, if $r$ is given by (\ref{eq:rr}), then
$$z=\alpha^\vee\pi_1(T)-x+ \alpha^\vee\pi_2(T)-y$$
since $r=x+y-z$. We know that $\alpha^\vee\pi_2(T)-y\geq 0$,
hence $z=\min(y,\alpha^\vee\pi_1(T)-x)$ only if
$$z=\alpha^\vee\pi_1(T)-x, \quad \alpha^\vee\pi_2(T)=y, \quad y\geq \alpha^\vee\pi_1(T)-x.$$
Then
$$\varepsilon_\alpha(\eta_1\otimes \eta_2)=2x+y-\alpha^\vee\pi_1(T).$$
On the other hand,
$$wt(\eta_1\otimes \eta_2)=wt(\eta_1)+wt( \eta_2)=\pi_1(T)-x\alpha+\pi_2(T)-y\alpha,$$
thus, using $y=\alpha^\vee\pi_2(T)$,
$$\varphi_\alpha(\eta_1\otimes \eta_2)=\varepsilon_\alpha(\eta_1\otimes \eta_2)+\alpha^\vee(wt(\eta_1\otimes \eta_2))=0$$
and $e_{\alpha}^s(\eta_1\otimes \eta_2)={\bf 0}$ when $s<0$.
We now consider the case where (\ref{eq:bf0}) does not hold. Then,
for $s$ small enough,
$$\mathcal E_{\alpha}^s(\eta_1\star \eta_2)=\mathcal E_{\alpha}^s\mathcal H_{\alpha}^{r}
(\pi_1 \star \mathcal H_{\alpha}^{z} \pi_2)
=\mathcal H_{\alpha}^{r-s} (\pi_1 \star \mathcal H_{\alpha}^{z} \pi_2).$$
Using proposition \ref{prop_FF_star},
if $s$ is small enough and $y>\alpha^\vee\pi_1(T)-x,$
then
$$\mathcal H_{\alpha}^{r-s} (\pi_1 \star \mathcal H_{\alpha}^{z}
\pi_2)=\mathcal H_{\alpha}^{x-s} \pi_1 \star \mathcal H_{\alpha}^{y}\pi_2=
\Theta(e_\alpha^s (\mathcal H_{\alpha}^{x} \pi_1 \otimes
\mathcal H_{\alpha}^{y}\pi_2)),$$
and if $y<\alpha^\vee\pi_1(T)-x$, then
$$\mathcal H_{\alpha}^{r-s} (\pi_1 \star \mathcal H_{\alpha}^{z}
\pi_2)=\mathcal H_{\alpha}^{x} \pi_1 \star \mathcal H_{\alpha}^{y-s}\pi_2=
\Theta(e_\alpha^s (\mathcal H_{\alpha}^{x} \pi_1 \otimes \mathcal H_{\alpha}^{y}\pi_2)).$$
The end of the proof is straightforward. $\square$
By theorem \ref{uniq_iso}, this proves that the family of crystals
$B(\lambda), \lambda\in \bar C$, is closed.
From theorem \ref{thm:exis} and theorem \ref{thmuniq}, we get
\begin{theorem}\label{uniq_closed}
When $W$ is a finite Coxeter group,
there exists one and only one closed family of highest
weight normal continuous crystals $B(\lambda), \lambda \in \bar C$.
\end{theorem}
\subsection{Action of $W$ on the Littelmann crystal}
Following Kashiwara \cite{kash93}, \cite{kashbook} and Littelmann \cite{littel}, we show that one can define an action of the Coxeter group on each crystal $L_\pi$.
We first notice that for each simple root $\alpha$,
we can define an involution $S_\alpha$ on the set of
paths by
$$S_\alpha \eta= \mathcal E_\alpha^x \eta\quad\text{for}
\quad x = -\alpha^\vee(\eta(T)). $$
In particular,
\begin{equation}\label{eq:Ss}
S_{\alpha}\eta(T)=s_\alpha(\eta(T)).
\end{equation}
\begin{lemma}
Let $\eta\in C^0_T(V)$ and $ \alpha \in \Sigma$
such that $\alpha^\vee(\eta(T)) <0$.
For each $\gamma \in C^0_T(V)$ there exists
$m \in \mathbb N$ such that, for all $ n\geq 0$,
$$\mathcal P_\alpha(\gamma\star \eta^{\star (m+n)})
=\mathcal P_\alpha(\gamma\star \eta^{\star m})\star
S_\alpha(\eta)^{\star n}.$$
\end{lemma}
\proof By lemma \ref{lem:Px},
$$\mathcal P_\alpha(\gamma\star \eta^{\star {(n+1)}})
=\mathcal P_\alpha(\gamma\star \eta^{\star n})\star \mathcal P_\alpha^x(\eta)$$
where $$x=\alpha^\vee(\gamma\star \eta^{\star n})(T)-\min_{0 \leq s \leq T}\alpha^\vee(\gamma\star \eta^{\star n})(s).$$
Let $\gamma_{\min}=\min_{0 \leq s \leq T}
\alpha^\vee\gamma(s)$ and $\eta_{\min}=\min_{0 \leq s \leq T}\alpha^\vee\eta(s)$. Since $\alpha^\vee\eta(T)<0$, there exists $m > 0$ such that for $n \geq m$ one has
\begin{eqnarray*}
\min_{0 \leq s \leq T}\alpha^\vee(\gamma\star \eta^{\star n})(s)&=&\min(\gamma_{\min}, \alpha^\vee(\gamma(T)+k\eta(T))+\eta_{\min}; 0 \leq k \leq n-1)\\
&=& \alpha^\vee(\gamma(T)+(n-1)\eta(T))+\eta_{\min}.
\end{eqnarray*}
Using that $(\gamma\star \eta^{\star m})(T)=\gamma(T)+m \eta(T)$ we have
$x =\alpha^\vee\eta(T)-\eta_{\min}$.
In this case,
$\mathcal P_\alpha^x(\eta)=S_\alpha(\eta)$,
which proves the lemma by induction on $n \geq m$. $\square$
\begin{theorem}\label{theo:action}There is an action $ \{S_w, w \in W\} $ of the Coxeter group $W$ on each $L_\pi$ such that $S_{s_\alpha}=S_\alpha$ when $\alpha$ is a simple root.
\end{theorem}
\proof By Matsumoto's lemma,
it suffices to prove that the transformations
$S_\alpha$ satisfy the braid relations. Therefore we can assume that $W$ is a dihedral group $I(q)$. Consider
two
roots $\alpha,\beta$ generating $W$.
Let $\eta$ be a path; there exists a sequence
there exists a sequence
$(\alpha_i)=\alpha,\beta,\alpha,\ldots $ or
$\beta,\alpha,\beta,\ldots$ such that
$s_{\alpha_1}s_{\alpha_2}\ldots s_{\alpha_r}
\eta(T)\in -\bar C$.
Let $\tilde\eta=S_{\alpha_1}S_{\alpha_2}\ldots S_{\alpha_r}
\eta$.
Let $s_{\alpha_q}\cdots
s_{\alpha_1}$ be a reduced decomposition. We show by
induction on $k\leq q$ that there exists
$m_k \geq0$ and a path $\gamma_k$ such that
\begin{equation}\label{seta}
\mathcal P_{\alpha_k}\cdots
\mathcal P_{\alpha_1}(\tilde\eta^{\star (m_k+ n)})=
\gamma_k \star (S_{\alpha_k}\cdots S_{\alpha_1}\tilde\eta)^{\star n}.
\end{equation}
For $k=1$, this is the preceding
lemma. Suppose that this holds for some $k$. Then
$$\alpha_{k+1}^\vee(S_{\alpha_k}\cdots
S_{\alpha_1}\tilde \eta(T))\leq0$$
(cf. Bourbaki, \cite{bo2}, ch.5, no.4, Thm.\ 1).
Thus, by the lemma, there exists
$m$ such that, for $n \geq 0$,
$$\mathcal P_{\alpha_{k+1}}(\gamma_k \star
(S_{\alpha_k}\cdots S_{\alpha_1}\tilde\eta)^{\star (m+n)})=
\mathcal P_{\alpha_{k+1}}(\gamma_k\star (S_{\alpha_k}\cdots
S_{\alpha_1}\tilde\eta)^{\star m})\star (S_{\alpha_{k+1}}
S_{\alpha_k}\cdots S_{\alpha_1}\tilde\eta)^{\star n}.$$
Hence, by the induction hypothesis,
if $\gamma_{k+1}=\mathcal P_{\alpha_{k+1}}
(\gamma_k\star (S_{\alpha_k}\cdots S_{\alpha_1}\tilde\eta)^{\star m})$, then
$$\mathcal P_{\alpha_{k+1}}\mathcal P_{\alpha_k}\cdots \mathcal P_{\alpha_1}
(\tilde\eta^{\star (m_k+ m+n)})=\gamma_{k+1}\star
(S_{\alpha_{k+1}}S_{\alpha_k}\cdots S_{\alpha_1}\tilde
\eta)^{\star n}.$$
We apply (\ref{seta}) with $k=q$ for the two reduced
decompositions, and we see that
$S_{\alpha_{q}}S_{\alpha_{q-1}}\cdots S_{\alpha_1}\tilde
\eta$ does not depend on the reduced decomposition because
the left hand side does not,
by the braid relations for the $\mathcal P_\alpha$. This easily implies
that $S_{\alpha_{q}}S_{\alpha_{q-1}}
\cdots S_{\alpha_1}
\eta$ also does not depend on the reduced decomposition.
$\square$
Using the crystal isomorphism between
$L_\pi$ and the crystal $B(\pi(T))$, we see that:
\begin{cor}
The Coxeter group $W$ acts on each crystal
$B(\lambda)$, where $\lambda\in \bar C$, in such a
way that, for $s=s_\alpha$ in $S$, and $b \in B(\lambda)$,
$$S_\alpha(b)=e_\alpha^x(b),
\mbox { where } x=-\alpha^\vee (wt(b)).$$
\end{cor}
Notice that these $S_\alpha$ are not crystal morphisms.
\subsection{Sch\"utzenberger involution} \label{schutz}
The classical
Sch\"utzenberger involution
associates to a Young tableau $T$ another Young tableau
$\hat T$ of the same shape. If $(P,Q)$ is the pair associated
by Robinson-Schensted-Knuth (RSK) algorithm to the word $u_1\cdots u_n$ in the letters $1,\cdots,k$,
then $(\hat P, \hat Q)$ is the pair associated with
$u_n^*\cdots u_1^*$ where $i^*=k+1-i$, see e.g.\ Fulton \cite{Fulton}.
It is remarkable that $\hat P$ depends only on $P$,
and that $\hat Q$ depends only on $Q$. We will establish
an analogous property for the analogue of the Sch\"utzenberger
involution defined in \cite{bbo}
for finite Coxeter groups.
The crystallographic case has been recently investigated
by Henriques and Kamnitzer \cite{hen-kamn1},
\cite{hen-kamn2}, and Morier-Genoud \cite{morier}.
For any path $\eta \in C^0_T(V)$, let
$\kappa \eta(t)=\eta(T-t)-\eta(T), \;\; 0 \leq t \leq T,$
and
$$S\eta=-w_0\kappa \eta.$$
Since $w_0^2=id$, $S$ is an involution of $C^0_T(V)$. The following is proved in \cite{bbo}.
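Only $w_0^2=\mathrm{id}$ enters here, so the involutivity of $S$ can be checked numerically on sampled paths started at $0$; the discretization and the illustrative choice of $w_0$ below (a coordinate swap, standing in for the longest element) are ours:

```python
import numpy as np

def kappa(eta):
    """Time reversal: kappa eta(t) = eta(T - t) - eta(T), on a sampled path."""
    return eta[::-1] - eta[-1]

def S(eta, w0):
    """S eta = -w0 kappa eta, applying the linear map w0 pointwise."""
    return -(kappa(eta) @ w0.T)

w0 = np.array([[0.0, 1.0], [1.0, 0.0]])   # an involutive orthogonal map, w0^2 = id
rng = np.random.default_rng(3)
steps = rng.normal(size=(40, 2))
eta = np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])  # path with eta(0) = 0
assert np.allclose(S(S(eta, w0), w0), eta)  # S is an involution
```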
\begin{prop}\label{SQ}
For any $\eta\in C^0_T(V),$
$\mathcal P_{w_0}S\eta(T)=\mathcal P_{w_0}\eta(T).$
\end{prop} As remarked in \cite{bbo}, this implies that the transformation on dominant paths $$\pi \mapsto I\pi=\mathcal P_{w_0} S\pi$$
gives the analogue of the Sch\"utzenberger involution on the $Q$'s. We will consider the action on the crystal itself, i.e.\ the analogue of the Sch\"utzenberger involution on the $P$'s.
For each dominant path $\pi\in C^0_T(V)$ the crystals $L_\pi$ and $L_{I\pi}$ are isomorphic, since $\pi(T)=I\pi(T)$. Therefore there is a unique isomorphism $J_\pi:L_\pi\to L_{I\pi}$; it satisfies $J_\pi(\pi)=I\pi$. For each path $\eta \in C^0_T(V)$, let $J(\eta)=J_{\pi}(\eta)$, where $\pi=\mathcal P_{w_0}\eta$. This defines an involutive crystal isomorphism $J:C^0_T(V)\to C^0_T(V).$
We will see that $$\tilde S=J\circ S$$
is the analogue of the Sch\"utzenberger involution on crystals. Although $\tilde S$ is not a crystal isomorphism, contrary to $S$ it preserves the connected components of the crystal, since $\tilde S(L_\pi)=L_\pi$ for each dominant path $\pi$; this is the main reason for introducing it.
If $\alpha$ is a simple root, then $\tilde \alpha=-w_0\alpha$ is also a simple root and $\tilde \alpha^\vee=-\alpha^\vee w_0$. The following property is straightforward. In the $A_n$ case, it was shown by Lascoux, Leclerc and Thibon \cite{llt} and Henriques and Kamnitzer \cite{hen-kamn1} that it characterizes the Sch\"utzenberger involution.
\begin{lemma}\label{lem_Schu}
For any path $\eta$ in $C^0_T(V)$, any $r\in\mathbb R$, and any simple root $\alpha$, one has
$$ \begin{array}{c}
\mathcal E_\alpha^r\tilde S \eta= \tilde S\mathcal E_{\tilde \alpha}^{-r} \eta,\\
\varepsilon_{\tilde \alpha}(\tilde S\eta)=\varphi_\alpha(\eta),\,
\varphi_{\tilde \alpha}(\tilde S\eta)=\varepsilon_\alpha(\eta),\\
\tilde S\eta(T)=w_0\eta(T).
\end{array}$$
\end{lemma}
An important consequence of this lemma is that $\tilde S:L_\pi\to L_\pi$ depends only on the crystal structure of $L_\pi$. Indeed, if $\eta = \mathcal E_{\alpha_1}^{r_1}\cdots \mathcal E_{\alpha_k}^{r_k}\pi$ then $\tilde S(\eta)=\mathcal E_{\tilde \alpha_1}^{-r_1}\cdots \mathcal E_{\tilde \alpha_k}^{-r_k}\tilde S(\pi)$, and $\tilde S(\pi)$ is the unique element of $L_\pi$ which has the lowest weight $w_0\pi(T)$, namely $S_{w_0}\pi$, where $S_{w_0}$ is given by theorem \ref{theo:action}.
In particular, using the isomorphism between $L_\pi$ and $B(\lambda)$ where $\lambda=\pi(T)$, we can transport the action of $\tilde S$ to each $B(\lambda), \lambda \in \bar C$.
Notice that $S \circ J$ also satisfies this lemma. Therefore, by uniqueness,
$$S \circ J=J \circ S,$$
and thus $\tilde S$ is an involution. Following Henriques and Kamnitzer
\cite{hen-kamn2}, let us show:
\begin{theorem}\label{crysiso}
The map $\tau:C^0_T(V)\to C^0_T(V)$ defined by
$$\tau(\eta_1\star \eta_2)=\tilde S (\tilde S\eta_2\star \tilde S\eta_1)$$
is an involutive crystal isomorphism.
\end{theorem}
\proof Remark first that any path can be written uniquely as the concatenation of two paths, hence $\tau$ is well defined. Furthermore,
$S(\eta_1\star \eta_2)=S(\eta_2)\star S(\eta_1)$, therefore, since $\tilde S=SJ=JS$ and $S$ is involutive,
$$\tau(\eta_1\star \eta_2)=J S ( SJ\eta_2\star SJ\eta_1)=J S^2 ( J\eta_1\star J\eta_2)=J ( J\eta_1\star J\eta_2).
$$
Consider the map
$J^{(2)}: C^0_T(V)\to C^0_T(V)$ defined by
$J^{(2)}(\eta_1\star \eta_2)=J\eta_1\star J\eta_2$.
Remark that
$J^{(2)}=\Theta\circ (J\otimes J)\circ \Theta^{-1}$
where $\Theta:C^0_T(V) \otimes C^0_T(V)\to C^0_T(V)$ is the crystal isomorphism defined in theorem \ref{closed} and $(J\otimes J)(\eta_1\otimes \eta_2)=J(\eta_1)\otimes J(\eta_2)$. Since $J$ is an isomorphism, this implies that $J^{(2)}$ is an isomorphism, thus $\tau =J\circ J^{(2)}$ is an isomorphism.
Let $\tilde S^{(2)}$ be defined by $\tilde S^{(2)}(\eta_1\star \eta_2)=\tilde S(\eta_2)\star \tilde S(\eta_1)$. Then $\tau= \tilde S \circ \tilde S^{(2)}$, and, since $\tilde S$ is an involution, the inverse of $\tau $ is $ \tilde S^{(2)}\circ \tilde S$. So to prove that $\tau$ is an involution we have to show that $ \tilde S \circ \tilde S^{(2)}=\tilde S^{(2)}\circ \tilde S$.
Both these maps are crystal isomorphisms, so it is enough to check that for any $\eta\in C_T^0(V)$, the two paths $(\tilde S \circ \tilde S^{(2)})(\eta)$ and $(\tilde S^{(2)}\circ \tilde S)(\eta)$ are in the same connected crystal component. Since $\tilde S$ preserves each connected component, $\eta$ and $\tilde S(\eta)$ on the one hand, and $\tilde S^{(2)}(\eta)$ and $\tilde S ( \tilde S^{(2)}(\eta))$ on the other hand, are in the same component. Therefore it is sufficient to show that if $\eta$ and $\mu$ are in the same component then $\tilde S^{(2)}(\eta)$ and $\tilde S^{(2)}(\mu)$ are in the same component. Let us write $\eta=\eta_1\star \eta_2$. Then if $\mu=\mathcal E^r_\alpha(\eta)$, $\sigma=\varphi_\alpha(\eta_1)-\varepsilon_\alpha(\eta_2)$ and $\tilde \sigma=-\sigma$,
$$ \tilde S(\mathcal E^{\min(r,-\sigma)+\sigma^+}_\alpha \eta_2) = \mathcal E^{-\min(r,-\sigma)-\sigma^+}_{\tilde \alpha} \tilde S\eta_2=\mathcal E^{\max(-r,-\tilde\sigma)-\tilde\sigma^-}_{\tilde \alpha} \tilde S\eta_2$$
and $$
\tilde S(\mathcal E^{\max(r,-\sigma)-\sigma^-}_\alpha\eta_1)= \mathcal E^{-\max(r,-\sigma)+\sigma^-}_{\tilde \alpha}\tilde S\eta_1=
\mathcal E^{\min(-r,-\tilde\sigma)+\tilde\sigma^+}_{\tilde \alpha}\tilde S\eta_1$$
therefore
\begin{eqnarray*}
\tilde S^{(2)}(\mu)&=& \tilde S^{(2)}(\mathcal E^{\max(r,-\sigma)-\sigma^-}_\alpha\eta_1\star\mathcal E^{\min(r,-\sigma)+\sigma^+}_\alpha \eta_2)\\
&=&\tilde S(\mathcal E^{\min(r,-\sigma)+\sigma^+}_\alpha \eta_2) \star \tilde S(\mathcal E^{\max(r,-\sigma)-\sigma^-}_\alpha\eta_1)\\
&=& \mathcal E^{\max(-r,-\tilde\sigma)-\tilde\sigma^-}_{\tilde \alpha} \tilde S\eta_2 \star \mathcal E^{\min(-r,-\tilde\sigma)+\tilde\sigma^+}_{\tilde \alpha}\tilde S\eta_1\\
&=&\mathcal E^{-r}_{\tilde \alpha}(\tilde S\eta_2 \star \tilde S\eta_1)\\
&=&\mathcal E^{-r}_{\tilde \alpha}\tilde S^{(2)}(\eta).
\end{eqnarray*}
So in this case $\tilde S^{(2)}(\mu)$ and $\tilde S^{(2)}(\eta)$ are in the same component. One concludes easily by induction. $\square$
We can now define an involution $\tilde S_\lambda$ on each continuous crystal of the family $\{B(\lambda), \lambda \in \bar C\}$ by transporting the action of $\tilde S$ on $C^0_T(V)$. Let $\lambda, \mu \in \bar C$. For $b_1\in B(\lambda)$ and $b_2\in B(\mu)$ let
$$\tau_{\lambda,\mu}(b_1\otimes b_2)=\tilde S_\gamma (\tilde S_\mu b_2\otimes \tilde S_\lambda b_1)$$
where $\gamma\in \bar C$ is such that $\tilde S_\mu b_2\otimes \tilde S_\lambda b_1
\in B(\gamma).$
\begin{theorem}
For $\lambda, \mu \in \bar C$, the map $$\tau_{\lambda,\mu}:B(\lambda)\otimes B(\mu)\to B(\mu)\otimes B(\lambda)$$
is a crystal isomorphism.
\end{theorem}
This follows from theorem \ref{crysiso}.
As in the construction of Henriques and Kamnitzer \cite{hen-kamn1}, \cite{hen-kamn2} these isomorphisms do not obey the axioms for a braided monoidal category, but instead we have that:\begin{enumerate}
\item $ \tau_{\mu,\lambda} \circ \tau_{\lambda,\mu} = 1; $
\item The following diagram commutes:
\begin{equation*}
\xymatrix{
B(\lambda) \otimes B(\mu) \otimes B(\sigma)
\ar[d]_{\tau_{(\lambda,\mu)} \otimes 1}
\ar[r]^{1 \otimes \tau_{ (\mu, \sigma)}} &
B(\lambda) \otimes B(\sigma) \otimes B(\mu)
\ar[d]^{\tau_{(\lambda, (\sigma,\mu))}}\\
B(\mu) \otimes B(\lambda) \otimes B(\sigma)
\ar[r]_{\tau_{((\mu,\lambda), \sigma)}} &
B(\sigma) \otimes B(\mu) \otimes B(\lambda) \\
}
\end{equation*}
\end{enumerate}
which makes the family $\{B(\lambda),\ \lambda \in \bar C\}$ into a coboundary category.
\section{The Duistermaat-Heckman measure and Brownian motion}
\subsection{} In this section, we consider a finite Coxeter group, with a
realization in some Euclidean space $V$ identified with its dual
so that, for each root $\alpha$,
$\alpha^{\vee}=\frac{2\alpha}{\Vert \alpha\Vert^2}$.
We will introduce an analogue, for continuous crystals, of the Duistermaat-Heckman measure, compute its Laplace transform (the analogue of the Harish-Chandra formula), and study its connections with Brownian motion.
\subsection{Brownian motion and the Pitman transform}
Fix a reduced decomposition of the longest word
$$w_0=s_1 s_2 \cdots s_q$$
and let ${\bf i}=(s_1,\cdots,s_q)$.
Recall that for any $\eta \in C^0_T(V)$,
its string parameters
$x=(x_1,\cdots,x_q)=\varrho_{\bf i}(\eta)$ satisfy
\begin{equation}\label{p-ineq1}
0\le x_i\le \alpha_{s_i}^{\vee}(\lambda-\sum_{j=1}^{i-1} x_j\alpha_{s_j}), \qquad i\le q;
\end{equation}
where $\lambda=\mathcal P_{w_0}\eta(T)$.
For each simple root $\alpha$ choose a reduced decomposition
${\bf i_\alpha}=(s_1^\alpha,\cdots,s_q^\alpha)$ such that $s_1^\alpha=s_\alpha$ and denote the corresponding string parameters $\varrho_{\bf i_\alpha}(\eta)$ by
$(x_1^\alpha,\cdots,x_q^\alpha)$.
Using the map $\Phi_{\bf i}^{\bf i_\alpha}$
given by theorem \ref{piecewise} we obtain
a continuous piecewise linear function $\Psi^{\bf i}_\alpha:\mathbb R^q \to \mathbb R$ such that
\begin{equation}\label{x1}x_1^\alpha=\Psi^{\bf i}_\alpha(x).\end{equation}
Of course
\begin{equation}\label{p-ineq3}
\Psi^{\bf i}_\alpha(x) \ge 0, \qquad \text{for all} \ \alpha\in\Sigma .
\end{equation}
Denote by $M_{\bf i}$ the set of $(x,\lambda)\in\mathbb R_+^q\times C$ which satisfy the inequalities (\ref{p-ineq1})
and (\ref{p-ineq3}), and set
\begin{equation}
\label{milambda}M_{\bf i}^\lambda=\{x\in\mathbb R_+^q:\ (x,\lambda)\in M_{\bf i}\}.
\end{equation}
Let ${\mathbb P}$ be a probability measure on
$C^0_T(V)$ under which
$\eta$ is a standard Brownian motion in $V$.
We recall the following theorem from \cite{bbo}.
\begin{theorem}\label{P-brown}
The stochastic process $\mathcal P_{w_0}\eta$ is a Brownian motion in $V$
conditioned, in Doob's sense, to stay in the Weyl chamber
$\bar C$.
\end{theorem}
This means that $\mathcal P_{w_0}\eta$ is the $h$-process of
the standard Brownian motion in $V$ killed when it exits $\bar C$, for the
harmonic function
$$h(\lambda)=\prod_{ \alpha \in R_+} \alpha^\vee (\lambda) ,$$
for $\lambda \in V$, where $R_+$ is the set of all positive roots. Let
$c_t=t^{q}\int_V e^{-\|\lambda\|^2/2t}\ d\lambda$ and $$k=c_1^{-1}\int_C h(\lambda)^2 e^{-\|\lambda\|^2/2} \ d\lambda.$$
\begin{theorem}\label{p-unif}
For $(\sigma,\lambda)\in M_{\bf i}$,
\begin{equation}\label{uniform}
{\mathbb P}(\varrho_{\bf i}(\eta) \in d\sigma,
\mathcal P_{w_0}\eta(T) \in d\lambda)
= c_T^{-1} h(\lambda) e^{- \| \lambda \|^2 /2T} \ d\sigma
\ d\lambda .
\end{equation}
The conditional law of $\varrho_{\bf i}(\eta)$, given $(\mathcal P_{w_0} \eta(s), s\le T)$ and
$\mathcal P_{w_0} \eta(T)=\lambda$, is the normalized
Lebesgue measure
on $M_{\bf i}^\lambda$, and the volume
of $M_{\bf i}^\lambda$
is $k^{-1}h(\lambda)$.
\end{theorem}
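As an illustrative sanity check (not part of the original argument, and assuming for simplicity the rank-one normalization $\|\alpha\|^2=2$, so that $\alpha^\vee=\alpha$), the volume statement can be verified directly:

```latex
% Rank one: W = {1, s_alpha} acting on V = R, one positive root alpha
% with \|alpha\|^2 = 2, so alpha^vee = alpha and alpha^vee(lambda) = sqrt{2} lambda.
Here $q=1$, $h(\lambda)=\sqrt{2}\,\lambda$ and
$M_{\bf i}^{\lambda}=[0,\sqrt{2}\,\lambda]$, of volume
$\sqrt{2}\,\lambda=h(\lambda)$. On the other hand,
$c_1=\int_{\mathbb R}e^{-\lambda^2/2}\,d\lambda=\sqrt{2\pi}$ and
$$k=c_1^{-1}\int_0^{\infty}2\lambda^2 e^{-\lambda^2/2}\,d\lambda
   =\frac{1}{\sqrt{2\pi}}\cdot\sqrt{2\pi}=1,$$
so the volume of $M_{\bf i}^{\lambda}$ is indeed $k^{-1}h(\lambda)$.
```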
This theorem has the following interesting corollary,
which gives a new proof of the fact that the set $C_{\bf i}^\pi$
depends only on $\pi(T)$, and is polyhedral.
\begin{cor}\label{cor_cone} For any dominant path $\pi$, let $\lambda=\pi(T)$; then
$C_{\bf i}^\pi=M_{\bf i}^\lambda,$ and
$$C_{\bf i}=\{x \in \mathbb R_+^q;\ \Psi^{\bf i}_\alpha(x) \ge 0, \mbox{ for all } \alpha\in\Sigma\}.$$
\end{cor}
\proof It is clear that $C_{\bf i}^\pi$ is contained in $M_{\bf i}^\lambda$ and the theorem
implies that $C_{\bf i}^\pi$, equal by definition to the set of
$\varrho_{\bf i}(\eta)$ when $\mathcal P_{w_0}\eta=\pi$, contains
$M_{\bf i}^\lambda$. The description of $C_{\bf i}$ follows, since $C_{\bf i}=\cup\{C_{\bf i}^\pi,\ \pi \text{ dominant path}\}. $ $\qed$
Theorem \ref{p-unif} is proved in section \ref{pf-punif}.
\subsection{The Duistermaat-Heckman measure}
Let $G$ be a compact semi-simple Lie group with maximal torus $T$. If
${\mathcal O}_{\lambda}$ is a coadjoint orbit of $G$, corresponding to a dominant
regular weight, endowed with its canonical symplectic structure $\omega$,
then this maximal torus
acts on the symplectic manifold $({\mathcal O}_{\lambda},\omega)$, and the image of the
Liouville measure on ${\mathcal O}_{\lambda}$ by the moment map, which takes values in
the dual of the Lie algebra of $T$, is called the Duistermaat-Heckman measure.
It is proved in \cite{ab} that this measure is the image of the Lebesgue measure on the
Berenstein-Zelevinsky polytope by an affine map. In analogy with this case, we define
for a realization of a finite Coxeter group,
the Duistermaat-Heckman measure, and prove some properties
which generalize the case of crystallographic groups.
\begin{definition}\label{defDH}
For any $\lambda \in C$, the Duistermaat-Heckman measure
$m^\lambda_{DH}$ on $V$ is the image
of the Lebesgue measure on $M_{\bf i}^\lambda$ (defined by (\ref{milambda})) by the map
\begin{equation}
\label{mapp}x=(x_1,\cdots,x_q)\in M_{\bf i}^\lambda
\mapsto \lambda-\sum_{j=1}^qx_j\alpha_j\in V.
\end{equation}
\end{definition}
In the following, $V^*$ denotes the complexification of $V$.
\begin{theorem}\label{p-2}
The Laplace transform of the Duistermaat-Heckman measure is given,
for $z \in V^*$, by
\begin{equation}\label{harish}
\int_V e^{\langle z, v \rangle} m_{DH}^\lambda(dv) =\frac{\sum_{w\in W} \varepsilon(w)e^{\langle z, w\lambda \rangle}}{h(z)},
\end{equation}
where $\varepsilon(w)$ is the signature of $w \in W$.
With the notations of theorem \ref{p-unif},
the conditional law of $\eta(T)$,
given $(\mathcal P_{w_0} \eta(s), 0\leq s\le T)$ and
$\mathcal P_{w_0} \eta(T)=\lambda$, is the probability measure $\mu_{DH}^\lambda={k \,m^\lambda_{DH}/h(\lambda)}$.
\end{theorem}
Formula (\ref{harish}) is the analogue, in our setting, of the famous formula of Harish-Chandra \cite{harish}.
Theorem \ref{p-2} is proved in section \ref{pf_p-2}.
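In rank one the Laplace transform can be checked by a direct computation; the following sketch is only illustrative and again assumes the normalization $\|\alpha\|^2=2$ (so $\alpha^\vee=\alpha$ and $h(z)=\sqrt{2}\,z$):

```latex
% Rank one: M_i^lambda = [0, sqrt{2} lambda], and the map (\ref{mapp})
% is x -> lambda - sqrt{2} x, so m_DH^lambda is (1/sqrt{2}) times
% Lebesgue measure on co(W lambda) = [-lambda, lambda].
$$\int_V e^{zv}\, m_{DH}^{\lambda}(dv)
  =\int_0^{\sqrt{2}\lambda} e^{z(\lambda-\sqrt{2}x)}\,dx
  =\frac{e^{z\lambda}-e^{-z\lambda}}{\sqrt{2}\,z}
  =\frac{\sum_{w\in W}\varepsilon(w)e^{\langle z,w\lambda\rangle}}{h(z)},$$
which is (\ref{harish}); the total mass is
$\sqrt{2}\,\lambda=h(\lambda)$, consistent with
$\mu_{DH}^{\lambda}=k\,m^{\lambda}_{DH}/h(\lambda)$ being a probability
measure ($k=1$ in this normalization).
```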
\begin{prop} The Duistermaat-Heckman measure $m^\lambda_{DH}$ has
a continuous piecewise polynomial density, invariant under
$W$ and with support equal to the convex hull $co(W\lambda)$ of $W\lambda$.
\end{prop}
\proof The measure $m^\lambda_{DH}$ is the image by an affine map of the Lebesgue measure on the convex polytope $C_{\bf i}^\pi$ when $\pi(T)=\lambda$. Therefore it has a piecewise polynomial density and a convex support. Its Laplace transform is invariant under $W$, so $m^\lambda_{DH}$ itself is invariant under $W$. The support $S(\lambda)$ of $m^\lambda_{DH}/h(\lambda)$ is equal to $\{\eta(T);\ \eta \in L_{\pi}\}$. Notice that if $\eta$ is in $L_\pi$, then, with $x=\alpha^\vee(\eta(T))$, $\mathcal E_{\alpha}^x\eta$ is in $L_\pi$ and $\mathcal E_{\alpha}^x\eta(T)=s_\alpha \eta(T)$. Starting from $\pi(T)=\lambda$ we thus see that $W\lambda$ is contained in $S(\lambda)$,
so $co(W\lambda)$ is contained in $S(\lambda)$.
The components of $x \in M_{\bf i}^\lambda$ are nonnegative,
therefore $co(W\lambda)$ contains $S(\lambda) \cap \bar C$ and,
by $W$-invariance, it contains $S(\lambda)$ itself.
$\square$
\subsection{Proof of theorem \ref{p-unif}}\label{pf-punif}
First we recall some further path transformations which were
introduced in~\cite{bbo}.
For any positive root $\beta\in R_+$ (not necessarily simple), define $\mathcal Q_{\beta}=
\mathcal P_{\beta} s_\beta$. Then, for $\psi\in C^0_T(V)$,
$$\mathcal Q_{\beta}\psi(t)=\psi(t)-\inf_{t\ge s\ge 0}\beta^\vee(\psi(t)-\psi(s))
\beta ,\qquad T\ge t\ge 0.$$
Let $w_0=s_1s_2\cdots s_q$ be a reduced
decomposition, and let $\alpha_i=\alpha_{s_i}$. Since
$ s_{\alpha}\mathcal P_{\beta}=\mathcal P_{s_{\alpha} \beta}s_\alpha$, for roots $\alpha\ne
\beta$,
the following holds
$$\mathcal Q_{w_0}:=\mathcal P_{w_0} w_0
=\mathcal Q_{\beta_{1}}\, \ldots \, \mathcal Q_{\beta_{q}},$$
where
$\beta_1=\alpha_1,\ \ \beta_i=s_1\ldots s_{i-1}\alpha_i$, when $i\le
q$.
Set $\psi_q=\psi$ and, for $i\le q$,
\begin{equation}\label{zetadef}
\psi_{i-1}=\mathcal Q_{\beta_{i}}\ldots \mathcal Q_{\beta_{q}}\psi
\qquad
y_i=-\inf_{T\ge t\ge 0}\beta_i^{\vee}(\psi_i(T)-\psi_i(t)).
\end{equation}
Then $\psi_0=\mathcal Q_{w_0}\psi$ and, for each $i\le q$,
$$\mathcal Q_{w_0}\psi(T)=\psi_i(T)+\sum_{j=1}^i y_j \beta_j .$$
Define $\varsigma_{\bf i}(\psi):=(y_1,y_2,\ldots,y_q)$.
Now let $\eta=w_0\psi$, so that $\mathcal Q_{w_0}\psi=\mathcal P_{w_0} \eta$.
Set $\eta_q=\eta$ and, for $i\le q$,
\begin{equation}\label{xdef}
\eta_{i-1}=\mathcal P_{\alpha_{i}}\ldots \mathcal P_{\alpha_{q}}\eta
\mathcal Qquad
x_i=-\inf_{T\ge t\ge 0}\alpha_i^{\vee}(\eta_i(t)).
\end{equation}
Then $\eta_0=\mathcal P_{w_0}\eta$ and, for each $i\le q$,
$$\mathcal P_{w_0}\eta(T)=\eta_i(T)+\sum_{j=1}^i x_j \alpha_j .$$
The parameters $\varrho_{\bf i}(\eta)=(x_1,\ldots,x_q)$ are related
to $\varsigma_{\bf i}(\psi)=(y_1,y_2,\ldots,y_q)$ as follows.
\begin{lemma}\label{x-y}
For each $i\le q$, we have:
\begin{enumerate}[(i)]
\item $\eta_i=s_i\ldots s_1 \psi_i ,$
\item $x_i=y_i+\beta_i^\vee(\psi_i(T))
=\beta_i^\vee (\mathcal Q_{w_0} \psi(T)-\sum_{j=1}^{i-1}y_j\beta_j)-
y_i ,$
\item $
y_i=x_i+\alpha_i^\vee(\eta_i(T))
=\alpha_i^\vee
(\mathcal P_{w_0}\eta(T)-\sum_{j=1}^{i-1}x_j\alpha_j) -x_i.$
\end{enumerate}
\end{lemma}
\begin{proof}
We prove {\it (i)} by induction on $i\le q$. For $i=q$ it holds because
$\eta_q=\eta=w_0\psi=w_0\psi_q$ and $s_q\ldots s_1=w_0$.
Note that, for each $i\le q$, we can write
$$\mathcal Q_{\beta_i} = \mathcal P_{\beta_i} s_{\beta_i}
= s_1\ldots s_{i-1} \mathcal P_{\alpha_i} s_i\ldots s_1 .$$
Therefore, assuming the induction hypothesis $\eta_i=s_i\ldots s_1
\psi_i$,
\begin{eqnarray*}
\eta_{i-1} &=& \mathcal P_{\alpha_i} \eta_i = \mathcal P_{\alpha_i}
s_i\ldots s_1 \psi_i \\
&=& s_{i-1}\ldots s_1 \mathcal Q_{\beta_i} \psi_i \\
&=& s_{i-1}\ldots s_1 \psi_{i-1} ,
\end{eqnarray*}
as required.
This implies {\it (ii)}, using $\eta_{i-1}(T)=\eta_i(T)+x_i\alpha_i$
and $\psi_{i-1}(T)=\psi_i(T)+y_i\beta_i$:
\begin{eqnarray*}
2 x_i &=& \alpha_i^\vee(\eta_{i-1}(T)-\eta_i(T)) \\
&=& \alpha_i^\vee( s_{i-1}\ldots s_1 \psi_{i-1}(T) - s_i\ldots s_1
\psi_i(T) ) \\
&=& \alpha_i^\vee( s_{i-1}\ldots s_1 (\psi_i(T)+y_i\beta_i) - s_i
\ldots s_1 \psi_i(T) ) \\
&=& 2y_i + \alpha_i^\vee( \alpha_i^\vee( s_{i-1}\ldots s_1 \psi_i(T))
\alpha_i) \\
&=& 2y_i + 2\beta_i^\vee(\psi_i(T)) .
\end{eqnarray*}
Finally, {\it (iii)} follows immediately from {\it (ii)} and {\it (i)}.
\end{proof}
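In the simplest case the lemma can be seen directly; the following rank-one computation (with $w_0=s_\alpha$, $q=1$, $\beta_1=\alpha$, writing $\psi$ for the path with $\eta=w_0\psi$) is only illustrative:

```latex
% Rank one: w_0 = s_alpha, eta = s_alpha psi,
% so alpha^vee(eta(t)) = -alpha^vee(psi(t)).
$$x_1=-\inf_{T\ge t\ge 0}\alpha^{\vee}(\eta(t))
     =\sup_{T\ge t\ge 0}\alpha^{\vee}(\psi(t)),\qquad
  y_1=-\inf_{T\ge t\ge 0}\alpha^{\vee}(\psi(T)-\psi(t))
     =\sup_{T\ge t\ge 0}\alpha^{\vee}(\psi(t))-\alpha^{\vee}(\psi(T)),$$
hence $y_1=x_1+\alpha^{\vee}(\eta(T))$, which is {\it (iii)}, and
$x_1=y_1+\alpha^{\vee}(\psi(T))=y_1+\beta_1^{\vee}(\psi_1(T))$,
which is {\it (ii)}.
```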
This lemma shows that, when $W$ is a Weyl group, $(y_1,\cdots,y_q)$ are the Lusztig coordinates, with respect to the decomposition ${\bf i^*}$, of the image under the Sch\"utzenberger involution of the path $\eta$ with string coordinates $(x_1,\cdots, x_q)$ with respect to the decomposition ${\bf i}$, where ${\bf i^*}$ is obtained from ${\bf i}$ by the map $\tilde \alpha= -w_0 \alpha$ (see Morier-Genoud \cite{morier}, Cor.\ 2.17). By {\it (iii)} of the preceding lemma,
we can define a mapping $F:M_{\bf i} \to \mathbb R_+^q\times C$ such that
$$(\varsigma_{\bf i}(\psi),\mathcal Q_{w_0} \psi(T))=
F(\varrho_{\bf i}(\eta),\mathcal P_{w_0}\eta(T)).$$
Let $L_{\bf i}=F(M_{\bf i})$.
It follows from $(ii)$ that $F^{-1}(y,\lambda)=(G(y,\lambda),\lambda)
$, where the $i$th component of $G(y,\lambda)$ is given by
$$G(y,\lambda)_i=\beta_i^\vee (\lambda-\sum_{j=1}^{i-1}y_j\beta_j)-y_i .$$
Thus, $L_{\bf i}$ is the set of $(y,\lambda)\in\mathbb R_+^q\times C$ which
satisfy
\begin{equation}\label{ineq1}
0\le y_i\le \beta_i^{\vee}(\lambda-\sum_{j=1}^{i-1} y_j\beta_j)
\qquad (i\le q)
\end{equation}
and
\begin{equation}\label{ineq3}
\Psi^{\bf i}_\alpha( G(y,\lambda) )\ge 0 \qquad \alpha\in\Sigma .
\end{equation}
The analogue of theorem~\ref{piecewise} also holds for the parameters
$\varsigma_{\bf i}(\psi)=(y_1,y_2,\ldots,y_q)$, and can be proved
similarly.
More precisely, for any two
reduced decompositions ${\bf i}$ and ${\bf j}$, there is a piecewise
linear map
$\theta_{\bf i}^{\bf j}:\mathbb R^q\to\mathbb R^q$ such that
$\varsigma_{\bf j}(\psi) = \theta_{\bf i}^{\bf j}(\varsigma_{\bf i}
(\psi))$.
In particular, for each simple root $\alpha$, we can define a
piecewise linear map
$\Theta^{\bf i}_\alpha:\mathbb R^q\to\mathbb R$ such that, if ${\bf i}_\alpha=(s_1^
\alpha,\ldots,s_q^\alpha)$
is a reduced decomposition with $s_1^\alpha=s_\alpha$, and
$\varsigma_{{\bf i}_\alpha}(\psi)=(y^\alpha_1,y^\alpha_2,\ldots,y^
\alpha_q)$,
then $y^\alpha_1=\Theta^{\bf i}_\alpha(y)$ where $\varsigma_{\bf i}(\psi)=(y_1,y_2,\ldots,y_q)$.
By lemma \ref{x-y}, we have
\begin{equation}
\Theta^{\bf i}_\alpha(y) = \alpha^{\vee} (\lambda) - \Psi^{\bf i}_
\alpha(G(y,\lambda) ),
\end{equation}
and the inequalities (\ref{ineq3}) can be written as
\begin{equation}\label{ineq3a}
\alpha^{\vee} (\lambda) - \Theta^{\bf i}_\alpha( y )\ge 0 \qquad
\alpha\in\Sigma .
\end{equation}
As in~\cite{bbo}, we extend the definition of $\mathcal Q_{\beta}$ to two-sided paths.
Denote by $C^0_{\mathbb R}(V)$ the set of continuous paths
$\pi:{\mathbb R}\to V$ such that $\pi(0)=0$ and $\alpha^\vee(\pi(t))
\to\pm\infty$
as $t\to\pm\infty$ for all simple $\alpha$.
For $\pi\in C^0_{\mathbb R}(V)$ and $\beta$ a positive root, define
$\mathcal Q_{\beta}\pi$ by
$$ \mathcal Q_{\beta} \pi(t) = \pi(t) + [\omega(t)-\omega(0)]\beta ,$$ where
$$\omega(t)=-\inf_{t\ge s>-\infty}\beta^{\vee}(\pi(t)-\pi(s)). $$
It is easy to see that $\mathcal Q_{\beta}\pi\in C^0_{\mathbb R}(V)$.
Thus, we can set $\pi_q=\pi$ and, for $i\le q$,
$$\pi_{i-1}=\mathcal Q_{\beta_{i}}\ldots \mathcal Q_{\beta_{q}}\pi
\qquad
\omega_i(t)=-\inf_{s\le t}\beta_i^{\vee}(\pi_i(t)-\pi_i(s)).$$
Then $$\pi_0=\mathcal Q_{w_0}\pi:=
\mathcal Q_{\beta_{1}}\, \ldots \, \mathcal Q_{\beta_{q}}\pi$$ and,
for each $i\le q$,
$$\mathcal Q_{w_0}\pi(t)=\pi_{i}(t)+\sum_{j=1}^i[\omega_j(t)-\omega_j
(0)]\beta_j .$$
For each $t\in\mathbb R$, write $\omega(t)=(\omega_1(t),\ldots,\omega_q(t))$.
\begin{lemma}\label{future}
If $\mathcal Q_{w_0}\pi(t)= \lambda$ and $\omega(t)=y$ then
$$\inf_{u\ge t} \alpha^{\vee}(\mathcal Q_{w_0}\pi(u)) =
\alpha^{\vee}(\lambda) - \Theta^{\bf i}_\alpha(y).$$
\end{lemma}
\begin{proof}
It is straightforward to see that
$$\inf_{u\ge t} \beta_1^{\vee}(\mathcal Q_{w_0}\pi(u)-\mathcal Q_{w_0}
\pi(t)) =
\omega_1(t).$$
In particular, if ${\bf i}_\alpha=(s_1^\alpha,\ldots,s_q^\alpha)$
is a reduced decomposition with $s_1^\alpha=s_\alpha$ and we denote the
corresponding $\omega(\cdot)$ (defined as above) by $\omega^\alpha(\cdot)$, then
$$\inf_{u\ge t} \alpha^{\vee}(\mathcal Q_{w_0}\pi(u)-\mathcal Q_{w_0}
\pi(t)) = \omega_1^\alpha(t).$$
Now let $\tau_0=\tau^\alpha_0=t$ and, for $0< i \le q$,
$$\tau_i=\max\{ s\le \tau_{i-1}:\ \omega_i(s)=0\},\qquad \tau^\alpha_i=\max\{ s\le \tau^\alpha_{i-1}:\ \omega^\alpha_i(s)=0\}.$$
Set $ \tau=\min\{\tau_q,\tau^\alpha_q\}$.
It is not hard to see that the path $\gamma\in C^0_{t-\tau}(V)$, defined by
$$\gamma(s)=\pi(\tau+s)-\pi(\tau),\qquad t-\tau \ge s\ge 0,$$ satisfies
$ \varsigma_{\bf i} (\gamma) = \omega(t) = y$ and
$\varsigma_{{\bf i}_\alpha} (\gamma) = \omega^\alpha(t) $.
Thus, $\omega_1^\alpha(t)=\Theta^{\bf i}_\alpha(y)$, as required.
\end{proof}
Introduce a probability measure ${\mathbb P}_\mu$ under which $\pi$
is a two-sided
Brownian motion in $V$ with drift $\mu\in C$. Set $\psi=(\pi(t),\ t
\ge 0)$.
\begin{prop} \label{bmfacts}
Under ${\mathbb P}_\mu$, the following statements hold:
\begin{enumerate}
\item $\mathcal Q_{w_0}\pi$ has the same law as $\pi$.
\item For each $t\in\mathbb R$, the random variables $\omega_1(t),\ldots,
\omega_q(t)$
are mutually independent and exponentially distributed with
parameters $\beta^\vee_1(\mu),\ldots,\beta^\vee_q(\mu)$.
\item For each $t\in\mathbb R$, $\omega(t)$
is independent of $(\mathcal Q_{w_0}\pi(s),\ -\infty < s\le t)$.
\item The random variables $\inf_{u\ge 0} \alpha^{\vee}(\mathcal Q_
{w_0}\pi(u))$,
$\alpha$ a simple root, are independent of the $\sigma$-algebra
generated by $(\pi(t),\ t\ge 0)$.
\end{enumerate}
\end{prop}
\proof We see by backward induction on $k=q,\cdots,1$ that
$(\mathcal Q_{\beta_{k}}\cdots \mathcal Q_{\beta_q}\pi(s),\ s \leq t)$
has the
same distribution as $(\mathcal Q_{\beta_{k-1}}\cdots \mathcal Q_
{\beta_q}\pi(s),\ s \leq t)$, is independent of $\omega_k(t)$, and
that $\omega_k(t)$ has an exponential distribution with parameter $
\beta^\vee_k(\mu)$. At each step, this is a one-dimensional statement
which can be checked directly or seen as a consequence of the classical
output theorem for the $M/M/1$ queue (see, for example, \cite{Neil}).
This implies that (1),
(2), and (3) hold. Moreover
$$\inf_{t\geq 0}\beta_1^\vee(\mathcal Q_{w_0}\pi(t))=-\sup_{s \leq 0}
\beta_1^\vee(\mathcal Q_{\beta_{2}}\cdots \mathcal Q_{\beta_{q}}\pi
(s))$$
is independent of $(\pi(t),\ t\geq0)$. Since $\beta_1$ can be chosen to be
any simple root $\alpha$, this proves (4).
\qed
Let $T>0$. For $\xi \in C$, denote by $E_\xi$ the event that
$\mathcal Q_{w_0}\pi(s)\in C-\xi$ for all $s\ge 0$ and by
$E_{\xi,T}$ the event that
$\mathcal Q_{w_0}\pi(s)\in C-\xi$ for all $T\ge s\ge 0$.
By proposition~\ref{bmfacts}, $E_\xi$ is independent of $\psi$.
For $r>0$, define $$B(\lambda,r)=\{\zeta\in V:\ \|\zeta-\lambda\|<r\}$$
and $$R(z,r)=(z_1-r,z_1+r)\times\cdots\times (z_q-r,z_q+r).$$
Fix $(z,\lambda)$ in the interior of $L_{\bf i}$ and choose $
\epsilon>0$ sufficiently small
so that $R(z,\epsilon)\times B(\lambda,\epsilon)$ is contained in $L_{\bf i}$
and
\begin{equation}\label{hyp}
\inf_{\lambda'\in B(\lambda,\epsilon), z'\in R(z,\epsilon) }
\alpha^{\vee}(\lambda') - \Theta^{\bf i}_\alpha(z') \ge 0.
\end{equation}
\begin{lemma}\label{exit}
\begin{eqnarray*}
\lefteqn{
{\mathbb P}_\mu(\mathcal Q_{w_0} \psi(T) \in B(\lambda,\epsilon),
\ \varsigma_{\bf i}(\psi)\in R(z,\epsilon))}\\
&=& \lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu(\mathcal Q_{w_0} \pi(T) \in B(\lambda,\epsilon),
\ \omega(T)\in R(z,\epsilon), E_{\xi,T}).
\end{eqnarray*}
\end{lemma}
\begin{proof} An elementary induction argument on the recursive
construction of $\mathcal Q_{w_0}$
shows that, on the event $E_\xi$, there is a constant $C$ for which
$$\max_{i\le q} |y_i-\omega_i(T)|\vee \|\mathcal Q_{w_0}\psi(T)-
\mathcal Q_{w_0}\pi(T)\|
\le C \|\xi\| .$$
Hence, for $\xi$ sufficiently small,
\begin{eqnarray*}
\lefteqn{
{\mathbb P}_\mu(\mathcal Q_{w_0}\psi(T)\in B(\lambda,\epsilon-C\|\xi
\|),\ \varsigma_{\bf i}(\psi) \in
R(z,\epsilon-C\|\xi\|),\ E_\xi) }\\
&\leq&
{\mathbb P}_\mu(\mathcal Q_{w_0}\pi(T)\in B(\lambda,\epsilon),\ \omega
(T)\in R(z,\epsilon),\ E_\xi) \\
&\leq& {\mathbb P}_\mu(\mathcal Q_{w_0}\psi(T)\in B(\lambda,\epsilon+C
\|\xi\|),\ \varsigma_{\bf i}(\psi) \in
R(z,\epsilon+C\|\xi\|),\ E_\xi) .
\end{eqnarray*}
Now $E_\xi$ is independent of $\psi$, and so
\begin{eqnarray*}
\lefteqn{
{\mathbb P}_\mu(\mathcal Q_{w_0}\psi(T)\in B(\lambda,\epsilon-C\|\xi\|),
\ \varsigma_{\bf i}(\psi) \in R(z,\epsilon-C\|\xi\|))}\\
&\leq& {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu(\mathcal Q_{w_0}\pi(T)\in B(\lambda,\epsilon),\ \omega
(T)\in R(z,\epsilon),\ E_\xi)\\
&\leq& {\mathbb P}_\mu(\mathcal Q_{w_0}\psi(T)\in B(\lambda,\epsilon+C
\|\xi\|),\ \varsigma_{\bf i}(\psi) \in
R(z,\epsilon+C\|\xi\|)) .
\end{eqnarray*}
Letting $\xi\to 0$, we obtain that
\begin{equation}\label{Exi}
\begin{array}{l}
\lefteqn{
{\mathbb P}_\mu(\mathcal Q_{w_0} \psi(T) \in B(\lambda,\epsilon),
\ \varsigma_{\bf i}(\psi)\in R(z,\epsilon))}\\
\ \quad = \lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu(\mathcal Q_{w_0} \pi(T) \in B(\lambda,\epsilon),
\ \omega(T)\in R(z,\epsilon), E_\xi).
\end{array}
\end{equation}
Finally observe that, on the event
$$\{ \mathcal Q_{w_0} \pi(T) \in B(\lambda,\epsilon),
\ \omega(T)\in R(z,\epsilon) \},$$ we have,
by Lemma \ref{future} and (\ref{hyp}),
\begin{eqnarray*}
\inf_{u\ge T} \alpha^{\vee}(\mathcal Q_{w_0}\pi(u))
&=&
\alpha^{\vee}(\mathcal Q_{w_0} \pi(T)) - \Theta^{\bf i}_\alpha\left(\omega(T)
\right) \\
&\ge& \inf_{\lambda'\in B(\lambda,\epsilon), z'\in R(z,\epsilon) }
\alpha^{\vee}(\lambda') - \Theta^{\bf i}_\alpha(z') \ge 0.
\end{eqnarray*}
Thus, we can replace $E_\xi$ by $E_{\xi,T}$ on the right hand side of
(\ref{Exi}),
and this concludes the proof of the lemma.
\end{proof}
For $a,b\in C$, define
$\Phi(a,b)=\sum_{w\in W}\varepsilon(w) e^{\langle wa,b\rangle}$.
\begin{lemma}\label{tech}
Fix $\mu\in C$. The functions $f(a,b)=\Phi(a,b)/[h(a)h(b)]$
and $g_\mu(a,b)=\Phi(a,b)/\Phi(a,\mu)$ have unique analytic
extensions to
$V\times V$. Moreover, $f(0,b)=k^{-1}$ and $g_\mu(0,b)=h(b)/h(\mu)$.
\end{lemma}
\begin{proof} It is clear that the function $\Phi$ is analytic in $
(a,b)$; furthermore,
it vanishes on the hyperplanes $\langle\beta,a\rangle=0, \langle
\beta,b\rangle=0$,
for all roots $\beta$.
The first claim follows from an elementary argument with analytic functions.
In the expansion of $\Phi$ as an entire function,
the term of homogeneous
degree $d$ is a polynomial in $a,b$ which is antisymmetric under $W$,
therefore
a multiple of $h(a)h(b)$. In particular the term of lowest degree is a
constant multiple of $h(a)h(b)$. This constant is nonzero, as can
be seen by taking
derivatives in the definition of $\Phi$.
By l'H\^opital's rule, $\lim_{a\to 0} g_\mu(a,b)=h(b)/h(\mu)$.
It follows that $\lim_{a\to 0} f(a,b)$ is a constant. To evaluate
this constant, note that, since $h$ is harmonic and vanishes at the
boundary
of $C$,
$$\int_C h(\lambda)^2 e^{-\|\lambda\|^2/2} f(a,\lambda) d\lambda =
e^{\|a\|^2/2} \int_V e^{-\|\lambda\|^2/2} d\lambda .
Letting $a\to 0$, we deduce that $f(0,\lambda)=k^{-1}$, as required.
\end{proof}
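For instance, in rank one with the normalization $\|\alpha\|^2=2$ (an assumption made here only for illustration, so that $\alpha^\vee=\alpha$, $h(a)=\sqrt{2}\,a$ and $k=1$), the lemma can be seen directly:

```latex
% Rank one: Phi(a,b) = e^{ab} - e^{-ab} for a, b in R.
$$f(a,b)=\frac{e^{ab}-e^{-ab}}{2ab}\longrightarrow 1=k^{-1},
\qquad
 g_\mu(a,b)=\frac{e^{ab}-e^{-ab}}{e^{a\mu}-e^{-a\mu}}
 \longrightarrow \frac{b}{\mu}=\frac{h(b)}{h(\mu)},$$
as $a\to 0$, in agreement with the lemma.
```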
Denote by $F_\xi$ the event that $\psi(s)\in C-\xi$ for
all $s\ge 0$ and by $F_{\xi,T}$ the event that
$\psi(s)\in C-\xi$ for all $T\ge s\ge 0$.
\begin{lemma} \label{meander}
For $B\subset C$, bounded and measurable,
\begin{eqnarray*}
\lefteqn{
\lim_{C\ni \xi\to 0} {\mathbb P}_\mu(F_\xi)^{-1}
{\mathbb P}_\mu(\psi(T) \in B ,\ F_{\xi,T}) }\\ &=&
c_T^{-1} h(\mu)^{-1} \int_{B} e^{\langle\mu,\lambda\rangle-\|\mu\|^2
T/2 }
e^{- \|\lambda\|^2/2T } h(\lambda)\ d\lambda .\end{eqnarray*}
\end{lemma}
\begin{proof}
Set $z_T=\int_V e^{-\|\lambda\|^2/2T}\ d\lambda$.
By the reflection principle,
$$ {\mathbb P}_\mu(\psi(T) \in d\lambda ,\ F_{\xi,T}) =
e^{\langle\mu,\lambda\rangle-\|\mu\|^2 T/2}
\sum_{w\in W}\varepsilon(w)p_T(w\xi,\xi+ \lambda)d\lambda,$$
where $p_t(a,b)=z_t^{-1}e^{-\|b-a\|^2/2t}$ is the transition density of
a standard Brownian motion in $V$.
Integrating over $\lambda$ and letting $T\to\infty$, we obtain (see~
\cite{bbo})
$${\mathbb P}_\mu(F_\xi)= \sum_{w\in W}\varepsilon(w)
e^{\langle w\xi-\xi,\mu\rangle}.$$
Thus, using lemma~\ref{tech} and the bounded convergence theorem,
\begin{eqnarray*}
\lefteqn{
\lim_{C\ni \xi\to 0} {\mathbb P}_\mu(F_\xi)^{-1}
{\mathbb P}_\mu(\psi(T) \in B ,\ F_{\xi,T}) }\\
&=&
z_T^{-1} \lim_{C\ni \xi\to 0}
\int_{B}
e^{\langle\mu,\lambda\rangle-\|\mu\|^2 T/2}
e^{- (\|\xi\|^2+\|\xi+\lambda\|^2)/2T }
\Phi(\xi,\mu)^{-1} \Phi\left(\xi,\frac{\xi+\lambda}{T}\right) d
\lambda \\
&=&
z_T^{-1} \lim_{C\ni \xi\to 0}
\int_{B}
e^{\langle\mu,\lambda\rangle-\|\mu\|^2 T/2}
e^{- (\|\xi\|^2+\|\xi+\lambda\|^2)/2T }
g_\mu\left(\xi,\frac{\xi+\lambda}{T}\right) d\lambda \\
&=& z_T^{-1} h(\mu)^{-1} \int_{B} e^{\langle\mu,\lambda\rangle-\|\mu
\|^2 T/2 }
e^{- \|\lambda\|^2/2T } h(\lambda/T)\ d\lambda \\
&=& c_T^{-1} h(\mu)^{-1} \int_{B} e^{\langle\mu,\lambda\rangle-\|\mu
\|^2 T/2 }
e^{- \|\lambda\|^2/2T } h(\lambda) \ d\lambda ,
\end{eqnarray*}
as required.
\end{proof}
Applying lemmas \ref{exit}, \ref{meander} and proposition~\ref
{bmfacts}, we obtain
\begin{eqnarray*}
\lefteqn{
{\mathbb P}_\mu(\mathcal Q_{w_0} \psi(T) \in B(\lambda,\epsilon),
\ \varsigma_{\bf i}(\psi)\in R(z,\epsilon))}\\
&=& \lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu(\mathcal Q_{w_0} \pi(T) \in B(\lambda,\epsilon),
\ \omega(T)\in R(z,\epsilon), E_{\xi,T})\quad \text{(lemma \ref{exit})}\\
&=& \lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu( \omega(T)\in R(z,\epsilon) )
{\mathbb P}_\mu(\mathcal Q_{w_0} \pi(T) \in B(\lambda,\epsilon),\ E_
{\xi,T})
\quad\text{(proposition \ref{bmfacts}(3))}\\
&=& \lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu( \omega(T)\in R(z,\epsilon) )
{\mathbb P}_\mu(\psi(T) \in B(\lambda,\epsilon),\ F_{\xi,T})\\
&=& \prod_{i=1}^q e^{-\beta_i^\vee(\mu)z_i}
[e^{\epsilon\beta_i^\vee(\mu)}-e^{-\epsilon\beta_i^\vee(\mu)}]
\lim_{C\ni \xi\to 0} {\mathbb P}_\mu(E_\xi)^{-1}
{\mathbb P}_\mu(\psi(T) \in B(\lambda,\epsilon),\ F_{\xi,T})\\
&&\qquad\qquad \text{(proposition \ref{bmfacts}(2))}\\
&=& \prod_{i=1}^q e^{-\beta_i^\vee(\mu)z_i}
[e^{\epsilon\beta_i^\vee(\mu)}-e^{-\epsilon\beta_i^\vee(\mu)}] \\
& & \qquad \times
c_T^{-1} h(\mu)^{-1} \int_{B(\lambda,\epsilon)} e^{\langle\mu,\lambda'\rangle-\|
\mu\|^2 T/2 }
e^{- \|\lambda'\|^2/2T } h(\lambda') \ d\lambda' .\quad\text{(lemma \ref
{meander})}
\end{eqnarray*}
Now divide by $|B(\lambda,\epsilon)|\,(2\epsilon)^{q}$ and let $\epsilon$
tend to zero
to obtain
\begin{eqnarray*}
\lefteqn{ {\mathbb P}_\mu(\mathcal Q_{w_0} \psi(T) \in d\lambda,\
\varsigma_{\bf i}(\psi)\in dz)}\\
&=& \prod_{i=1}^q e^{-\beta_i^\vee(\mu)z_i} e^{\langle\mu,\lambda
\rangle-\|\mu\|^2 T/2}
c_T^{-1} h(\lambda) e^{-\|\lambda\|^2/2T} \ d\lambda\ dz .
\end{eqnarray*}
Letting $\mu\to 0$ this becomes, writing $\mathbb P=\mathbb P_0$,
\begin{equation}
\mathbb P(\mathcal Q_{w_0} \psi(T) \in d\lambda,\ \varsigma_{\bf i}
(\psi)\in dz)
= c_T^{-1} h(\lambda) e^{-\|\lambda\|^2/2T} \ d\lambda\ dz .
\end{equation}
Using lemma \ref{x-y},
it follows that, for $(w,\lambda)$ in the interior of $M_{\bf i}$,
\begin{equation}\label{p-uniform}
\mathbb P(\varrho_{\bf i}(\eta)\in dw,\ \mathcal P_{w_0} \eta(T) \in d
\lambda)
= c_T^{-1} h(\lambda) e^{-\|\lambda\|^2/2T} \ dw\ d\lambda .
\end{equation}
Under the probability measure $\mathbb P$, $\eta$ is a standard
Brownian motion
in $V$ with transition density given by $p_t(a,b)=z_t^{-1}e^{-\|b-a\|
^2/2t}$.
By theorem \ref{P-brown}, under $\mathbb P$, $\mathcal P_{w_0} \eta$
is a
Brownian motion in $C$. Its transition density is given, for $\xi,
\lambda\in C$,
by $$q_t(\xi,\lambda)
=\frac{h(\lambda)}{h(\xi)}\sum_{w\in W}\varepsilon(w)p_t(w\xi,
\lambda).$$
As remarked in \cite{bbo}, this transition density can be extended by
continuity
to the boundary of $C$. From lemma~\ref{tech} we see that
$q_T(0,\lambda)=k^{-1}c_T^{-1}h(\lambda)^2e^{-\|\lambda\|^2/2T}$. Thus,
\begin{equation}\label{gue}
\mathbb P(\mathcal P_{w_0} \eta(T)\in d\lambda)=k^{-1}c_T^{-1}h(\lambda)^2e^{-
\|\lambda\|^2/2T}d\lambda.
\end{equation}
To complete the proof of the theorem, first note that since $
\varsigma_{\bf i}(\psi)$ is measurable
with respect to the $\sigma$-algebra
generated by $(\mathcal Q_{w_0} \psi(u),\ u\ge T)$, $\varrho_{\bf i}
(\eta)$
is measurable with respect to the $\sigma$-algebra
generated by $(\mathcal P_{w_0} \eta(u),\ u\ge T)$.
Thus, by the Markov property of $\mathcal P_{w_0} \eta$, the conditional
distribution of $\varrho_{\bf i}(\eta)$, given $(\mathcal P_{w_0} \eta
(s), s\le T)$,
is measurable with respect to the $\sigma$-algebra generated by $
\mathcal P_{w_0} \eta(T)$.
Combining this with (\ref{p-uniform}) and (\ref{gue}), we conclude
that the conditional law of
$\varrho_{\bf i}(\eta)$, given $(\mathcal P_{w_0} \eta(s), s\le T)$ and
$\mathcal P_{w_0} \eta(T)=\lambda$, is almost surely uniform on $M^
\lambda_{\bf i}$,
and that the Euclidean volume of $M_{\bf i}^\lambda$ is $k^{-1}h
(\lambda)$,
as required.
\subsection{Proof of theorem \ref{p-2}}\label{pf_p-2}
Let $\psi=w_0\eta$ and $\mathcal Q_{w_0}=\mathcal P_{w_0} w_0$.
Denote by $P_t$ (respectively $Q_t$) the semigroup of Brownian motion in $V$
(respectively $C$). Under $\mathbb P$, by \cite[Theorem 5.6]{bbo},
$\mathcal Q_{w_0} \psi$ is a Brownian motion in $C$.
Let $\delta\in C$. The function $e_\delta(v)=e^{\langle\delta,v\rangle}$
is an eigenfunction of $P_t$ and the $e_\delta$-transform of $P_t$ is a Brownian motion
with drift $\delta$. Setting $\Phi_\delta(v)=\sum_{w\in W}\varepsilon(w)e^{\langle w\delta,v\rangle},$
the function $\Phi_\delta/h$ is an eigenfunction of $Q_t$ and the
$(\Phi_\delta/h)$-transform of $Q_t$ is a Brownian motion with drift $\delta$ conditioned never
to exit $C$ (see \cite[Section 5.2]{bbo} for a definition of this process).
By theorem \ref{p-unif},
the conditional law of $\eta(T)$, given $(\mathcal P_{w_0} \eta(s),\ s\le T)$
and $\mathcal P_{w_0} \eta(T)=\lambda$,
is almost surely given by $\mu_{DH}^\lambda$.
It follows that the conditional law of $\psi(T)$, given $(\mathcal Q_{w_0} \psi(s),\ s\le T)$
and $\mathcal Q_{w_0} \psi(T)=\lambda$, is almost surely given by $\mu_{DH}^\lambda$.
Denote the corresponding Markov operator by $K(\lambda,\cdot)=\mu_{DH}^\lambda(\cdot)$.
By \cite[Theorem 5.6]{bbo} we automatically have the intertwining $K P_t= Q_tK $.
Note that $Ke_\delta$ is an eigenfunction of $Q_t$. By construction, the $Ke_\delta$-transform
of $Q_t$, started from the origin, has the same law as $\mathcal Q_{w_0} \psi^{(\delta)}$,
where $\psi^{(\delta)}$ is a Brownian motion in $V$ with drift $\delta$. Recalling the proof of
\cite[Theorem 5.6]{bbo} we note that $\mathcal Q_{w_0} \psi^{(\delta)}$
has the same law as a Brownian motion with drift $\delta$ conditioned never to exit $C$.
It follows that $Ke_\delta=\Phi_\delta/(c(\delta)h)$, for some $c(\delta)\ne 0$.
Now observe (using lemma \ref{tech} for example) that $\lim_{\xi\to 0} Ke_\delta(\xi)=1$.
Thus, by lemma~\ref{tech}, $c(\delta)=\lim_{\xi\to 0} \Phi_\delta(\xi)/h(\xi)=k^{-1} h(\delta)$.
We conclude that
$$\int_V e^{\langle\delta,v\rangle} \mu_{DH}^\lambda(dv) = k
\frac{\sum_{w\in W}\varepsilon(w)e^{\langle w\delta,\lambda\rangle}}{h(\delta)h(\lambda)}.$$
This formula extends to $\delta\in V^*$ by analytic continuation
(see lemma~\ref{tech} again),
and the proof is complete.
\subsection{A Littlewood-Richardson property}
In the usual Littelmann path theory, concatenation of paths is used to describe tensor products of representations and gives a combinatorial formula for the Littlewood-Richardson coefficients.
In our setting of continuous crystals, the representation theory does not exist in general, and the analogue of the Littlewood-Richardson coefficients is a certain conditional distribution
of the Brownian path. In this section we describe this distribution in theorem \ref{the_cond}.
Let ${\bf i}=(s_1,\ldots, s_q)$ where $w_0=s_1\ldots s_q$ is a reduced
decomposition. For $\eta\in C_T^0(V)$, let $x=\rho_{\bf i}(\eta)$.
For each simple root $\alpha$ choose now
${\bf j_\alpha}=(s_1^\alpha,\cdots,s_q^\alpha)$, a reduced decomposition of
$w_0$, such that $s_q^\alpha=s_\alpha$,
and denote the corresponding string parameters of the path $\eta$ by
$(\tilde x_1^\alpha,\cdots,\tilde x_q^\alpha)=\varrho_{\bf j_\alpha}(\eta)$. As in
(\ref{x1}), there is a continuous function $\Psi'_\alpha:\mathbb R^q \to \mathbb R$ such that
$\tilde x_q^\alpha=\Psi'_\alpha(x).$
Fix $\lambda,\mu\in C$ and suppose
that $\lambda+\eta(s) \in C$ for $ 0 \leq s \leq T$. Then
$\tilde x^\alpha_q=-\inf_{s\le T}\alpha^{\vee}(\eta(s)) \leq \alpha ^{\vee}(\lambda).$ In other words,
\begin{equation}\label{InegLR} \Psi'_\alpha(x) \leq \alpha^\vee(\lambda), \qquad \alpha \in \Sigma.\end{equation}
Let $M_{\bf i}^{\lambda,\mu}$ denote the set of $x \in M_{\bf i}^\mu$ which satisfy the additional constraints
(\ref{InegLR}).
This is a compact convex polytope. Let $\nu^{\lambda,\mu}$
be the uniform probability distribution on $M_{\bf i}^{\lambda,\mu}$ and let
$\nu_{\lambda,\mu}$ be its image on $V$ by the map
$$x=(x_1,\cdots,x_q)\in M_{\bf i}^{\lambda,\mu} \mapsto \lambda+
\mu-\sum_{j=1}^qx_j\alpha_j\in V.$$
Let $\eta$ be the Brownian motion in $V$ starting from $0$.
Observe that, by theorem \ref{piecewise},
the event $\{{\eta (s) \in C-\lambda , 0 \leq s \leq T} \}$ is
measurable with respect to the $\sigma$-algebra generated by $\rho_{\bf i}(\eta)$. Combining
this with theorem \ref{p-unif} we obtain:
\begin{cor}\label{corcondi} The conditional law of $\rho_{\bf i}(\eta)$, given $(\mathcal P_{w_0}\eta(s), s \leq T)$, $\mathcal P_{w_0} \eta(T) = \mu$ and $\lambda+\eta (s) \in C$ for $0 \leq s \leq T$, is $\nu^{\lambda,\mu}$ and the conditional law of $\lambda+\eta(T)$ is $\nu_{\lambda,\mu}$.
\end{cor}
For $s, t \geq 0$ let
$$(\tau_s \eta)(t)=\eta(s+t)-\eta(s), \qquad (\tau_s\mathcal P_{w_0}\eta )(t) = \mathcal P_{w_0}\eta( s+t )- \mathcal P_{w_0}\eta(s).$$
\begin{lemma}
For all $s \geq 0$,
$$\mathcal P_{w_0} (\tau_s\mathcal P_{w_0}\eta) = \mathcal P_{w_0}\tau_s\eta.$$
\end{lemma}
\emph{Proof.} If $\pi_1,\pi_2 : \mathbb R^+\to V$ are continuous paths starting at $0$, let $\pi_1 \star_s \pi_2$
be the path defined by $\pi_1 \star_s \pi_2(r) = \pi_1(r)$ when $0 \leq r \leq s$ and $\pi_1 \star_s \pi_2(r) = \pi_1(s)+\pi_2(r-s)$
when $s\leq r$. By lemma \ref{lem:Px}, $\mathcal P_{w_0} (\pi_1 \star_s \pi_2) = \mathcal P_{w_0} (\pi_1) \star_s \tilde \pi_2 $ where $\tilde \pi_2$ is a path such that
$\mathcal P_{w_0}(\tilde \pi_2) =\mathcal P_{w_0}( \pi_2)$. Since $\tau_s(\pi_1 \star_s \pi_2)=\pi_2$,
this gives the lemma. $\square$
Let $\gamma_{\lambda,\mu}$ be the
measure on $C$ given by
$$\gamma_{\lambda,\mu}(dx) = \frac{h(x)}{
h(\lambda)} \nu_{\lambda,\mu}(dx).$$
It will follow from theorem \ref{the_cond} that this is a probability measure.
Consider
the following $\sigma$-algebra
$${\mathcal G}_{s,t}=\sigma(\mathcal P_{w_0} \eta(a), a\leq s, \mathcal P_{w_0} \tau_s\eta(r), r\leq t).$$
The following result is a continuous analogue of the Littelmann interpretation of
the Littlewood-Richardson decomposition of a tensor product.
\begin{theorem}\label{the_cond} For $s, t > 0,$ $\gamma_{\lambda,\mu}$ is the conditional distribution of $\mathcal P_{w_0} \eta(s + t)$ given
${\mathcal G}_{s,t}$, $\mathcal P_{w_0} \eta(s) = \lambda $ and $\mathcal P_{w_0}\tau_s \eta(t) = \mu.$
\end{theorem}
\emph{Proof.}
When $(X_t,(\theta_t),\mathbb P_x)$ is a Markov process with shift $\theta_t$ (i.e. $X_{s+t}=X_s\circ \theta_t$), for any $\sigma(X_r, r \geq 0)$-measurable random variables $Z, Y \geq0$, one has
$$\mathbb E(Z\circ \theta_t|\sigma(X_s, s \leq t, Y\circ \theta_t))=\mathbb E_{X_0}(Z|\sigma( Y))\circ \theta_t.$$
Let us apply this relation to the Markov process $X=\mathcal P_{w_0} \eta$ (see \cite{bbo}). Notice that since $\mathcal P_{w_0}(\tau_sX)=\mathcal P_{w_0}(\tau_0X)\circ \theta_s$, it follows from the lemma that
$${\mathcal G}_{s,t}=\sigma(X_a, \mathcal P_{w_0}(\tau_0X)(r)\circ \theta_s, a \leq s, r \leq t).$$
Therefore,
for any Borel nonnegative function $f: V\to \mathbb R$,
$$\mathbb E[f(\mathcal P_{w_0} \eta(s+t))| {\mathcal G}_{s,t}]
=\mathbb E_{X_0}[f(X_{t})|\sigma(\mathcal P_{w_0} (\tau_0X)(r), r \leq t)]
\circ \theta_s.$$
One knows (Theorem 5.1 in \cite{bbo}) that $X$ is the $h$-process
of the Brownian motion killed at the boundary of $C$. In other words,
starting from $X_0=\lambda$, $X$ is the $h$-process of $\lambda+\eta(t)$
conditionally on $\lambda+\eta(s)\in C$, for $0 \leq s \leq t$.
It thus follows from corollary \ref{corcondi} that
$$\mathbb E_{\lambda}[f(X_{t})|\sigma(\mathcal P_{w_0} (\tau_0X)(r), r \leq t)]=
\frac{1}{h(\lambda)}\int f(x)h(x)\, d\nu_{\lambda,\mu}(x)$$
when $\mathcal P_{w_0} (\tau_0X)(t)=\mu$. This proves that
$$\mathbb E[f(\mathcal P_{w_0} \eta(s+t))| {\mathcal G}_{s,t}]=\int f(x)\, d\gamma_{\lambda,\mu}(x)$$
when $\mathcal P_{w_0}\eta(s)=\lambda$ and $\mathcal P_{w_0}\tau_s\eta(t)=\mu$. $\square$
\subsection{A product formula}
Consider the Laplace transform of $\mu^\lambda_{DH} $ given,
for $\lambda \in C, z \in V^*$, by
\begin{equation}\label{eqJl1}J_\lambda(z)=
k\frac{\sum_W \varepsilon(w) e^{\langle z, w\lambda \rangle}}{h(z)h(\lambda)}.\end{equation}
This is an example of a generalized Bessel function,
following the terminology of Helgason \cite{Helg}
in the Weyl group case and Opdam \cite{Opdam} in the general Coxeter case.
It was conjectured by Gross and Richards \cite{gross-rich} that these
are Laplace transforms of positive measures
(this also follows from R\"osler \cite{roesler1}).
They are positive eigenfunctions of the Laplacian and of the Dunkl operators on the Weyl chamber $C$, with eigenvalue $\|\lambda\|^2$ and Dirichlet boundary conditions, and they satisfy $J_\lambda(0)=1$.
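In the rank-one case (a specialization we use only for illustration: $W=\{1,s\}$ acting on $V=\mathbb R$, with $h(x)=x$), the formula (\ref{eqJl1}) reduces to $J_\lambda(z)=\sinh(\lambda z)/(\lambda z)$, and the eigenfunction claim can be checked numerically for the $h$-transformed Laplacian $u''+(2/z)u'$:

```python
import math

# Rank-one sanity check (our specialization, not stated in the text):
# for W = {1, s} on V = R with h(x) = x, the generalized Bessel function
# reduces to J_lambda(z) = sinh(lambda*z)/(lambda*z); the h-transformed
# Laplacian u'' + (2/z) u' should return eigenvalue lambda^2, and
# J_lambda(0) = 1.

def J(lam, z):
    return math.sinh(lam * z) / (lam * z)

def laplacian_ratio(lam, z, h=1e-4):
    """Finite-difference value of (u'' + (2/z) u') / u at z."""
    u0, up, um = J(lam, z), J(lam, z + h), J(lam, z - h)
    u2 = (up - 2 * u0 + um) / h ** 2      # second derivative
    u1 = (up - um) / (2 * h)              # first derivative
    return (u2 + (2 / z) * u1) / u0

lam, z = 1.7, 0.9
assert abs(laplacian_ratio(lam, z) - lam ** 2) < 1e-4
assert abs(J(lam, 1e-8) - 1.0) < 1e-8
```

The sample point $(\lambda,z)=(1.7,0.9)$ and the finite-difference step are arbitrary choices.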
Let $f_\lambda$ be the density of the probability measure $\mu^\lambda_{DH}$. One has
\begin{equation}\label{eqJl2}\int_V e^{\langle z, v \rangle} f_\lambda(v)
\,dv=J_\lambda(z).\end{equation}
Let, for $v \in C$,
$$f_{\lambda,\mu}(v)=
\frac{1}{h(\mu)} \sum_{w\in W} h(wv)f_\lambda(wv-\mu).$$
It follows from the next result that $ f_{\lambda,\mu}(v) \geq 0$.
\begin{theorem}\label{prod_J}
(i) For $\lambda, \mu \in C$ and $z \in V^*$,
$$J_\lambda(z)J_\mu(z)=\int_C J_v(z)f_{\lambda,\mu}(v)\, dv.$$
(ii) $$\gamma_{\lambda,\mu}(dx)=f_{\lambda,\mu}(x) dx.$$
\end{theorem}
\emph{Proof.}
The first part is given by the following computation, similar to the one in Dooley et al.\ \cite{Dooley}; we give it for the convenience of the reader.
It follows from (\ref{eqJl1}) and (\ref{eqJl2}) that
$$J_\lambda(z)J_\mu(z)= \int_V e^{\langle z,v\rangle } J_\mu(z)f_\lambda(v)\, dv=k \sum_W \varepsilon(w)\int_V \frac{ e^{\langle z, w\mu+v \rangle}}{h(\mu)h(z)}\, f_\lambda(v)\, dv.$$
Using the invariance of the measure $\mu^\lambda_{DH}$ under $W$,
$f_\lambda(wv)=f_\lambda(v)$ for $w \in W$. One has
\begin{eqnarray*}J_\lambda(z)J_\mu(z)
&=&k \sum_W \varepsilon(w)\int_V \frac{ e^{\langle z, w(\mu+v) \rangle}}{h(\mu)h(z)} f_\lambda(v)\, dv\\
&=&k \sum_W \varepsilon(w)\int_V \frac{ e^{\langle z, wv \rangle}}{h(\mu)h(z)} f_\lambda(v-\mu)\, dv\\
&=&\frac{1}{h(\mu)} \int_V J_v(z) h(v)f_\lambda(v-\mu)\, dv\\
&=&\frac{1}{h(\mu)} \sum_{w\in W}\int_{w^{-1}C} J_v(z) h(v)f_\lambda(v-\mu)\, dv\\
&=&\frac{1}{h(\mu)} \sum_{w\in W}\int_{C} J_v(z) h(wv)f_\lambda(wv-\mu)\, dv\\
&=&\int_{C} J_v(z) f_{\lambda,\mu}(v)\, dv\end{eqnarray*}
where we have used that, up to a set of measure zero,
$V=\cup_{w\in W}w^{-1}C$. This proves {\it (i)}.
Let us now prove {\it (ii)}, using theorem \ref{the_cond}.
Since $\eta$ is a standard Brownian motion in $V$,
$\{\eta(r), r \leq s\}$ and $ \tau_s\eta$ are independent, hence, for $z \in V^*$,
\begin{eqnarray*}\mathbb E(e^{\langle z, \eta(s+t) \rangle}|{\mathcal G}_{s,t})&=&\mathbb E(e^{\langle z, \eta(s) \rangle}e^{\langle z, \tau_s\eta(t)\rangle}|{\mathcal G}_{s,t})\\
&=& \mathbb E(e^{\langle z, \eta(s) \rangle}| \sigma(\mathcal P_{w_0} \eta(a), a\le s))\mathbb E(e^{\langle z, \tau_s\eta(t)\rangle}|\sigma(\mathcal P_{w_0} \tau_s\eta(b), b\le t)).\end{eqnarray*}
By theorem \ref{p-2},
$$J_\lambda(z)= \mathbb E(e^{\langle z, \eta(s) \rangle}| \sigma(\mathcal P_{w_0} \eta(a), a\le s)) $$when
$\mathcal P_{w_0} \eta(s)=\lambda$ and, since $ \tau_s\eta$ and $\eta$ have the same law,
$$J_\mu(z)=\mathbb E(e^{\langle z, \tau_s\eta(t)\rangle}|\sigma(\mathcal P_{w_0} \tau_s\eta(b), b\le t))$$ when $\mathcal P_{w_0} \tau_s\eta(t)=\mu $.
Therefore
$$
\mathbb E(e^{\langle z, \eta(s+t) \rangle}|{\mathcal G}_{s,t})=J_\lambda(z)J_\mu(z).
$$
On the other hand, by lemma \ref{lem:Px},
${\mathcal G}_{s,t}$ is contained in $\sigma(\mathcal P_{w_0} \eta(r), r\le s+t)$, thus
$$ \mathbb E(e^{\langle z, \eta(s+t) \rangle}|{\mathcal G}_{s,t})=
\mathbb E
(\mathbb E(e^{\langle z, \eta(s+t) \rangle}|\sigma(\mathcal P_{w_0} \eta(r), r\le s+t))
|{\mathcal G}_{s,t})$$
$$= \mathbb E(J_z(\mathcal P_{w_0} \eta( s+t))|{\mathcal G}_{s,t}).$$
It thus follows from theorem \ref{the_cond} that
$$J_\lambda(z)J_\mu(z)=\int J_v(z) \, d\gamma_{\lambda,\mu}(v).$$
Therefore, for all $z \in V^*, $
$$\int J_v(z) \, f_{\lambda,\mu}(v)\, dv=\int J_v(z) \, d\gamma_{\lambda,\mu}(v).$$
By injectivity of the Fourier-Laplace transform this implies that
$$d\gamma_{\lambda,\mu}(v)=f_{\lambda,\mu}(v)\, dv. \qquad \square$$
The positive product formula gives a positive answer to a question of
R\"osler \cite{roesler2} for the radial Dunkl kernel.
It shows that one can generalize the Bessel-Kingman hypergroup structure
to any Weyl chamber, for the so-called geometric parameter.
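In rank one, part (i) of theorem \ref{prod_J} is the classical product formula underlying the Bessel-Kingman hypergroup. The following numerical sketch uses our own specialization (not spelled out in the text): $W=\{1,s\}$, $h(x)=x$, so $J_\lambda(z)=\sinh(\lambda z)/(\lambda z)$, $f_\lambda$ is uniform on $[-\lambda,\lambda]$, and for $\lambda\ge\mu$ one computes $f_{\lambda,\mu}(v)=v/(2\lambda\mu)$ on $(\lambda-\mu,\lambda+\mu]$ and $0$ elsewhere on $C$.

```python
import math

# Rank-one check of J_lam(z) J_mu(z) = int_C J_v(z) f_{lam,mu}(v) dv,
# with the closed forms above (our hand computation); the sample values
# lam, mu, z are arbitrary.

def J(lam, z):
    return math.sinh(lam * z) / (lam * z)

lam, mu, z = 1.5, 0.6, 0.8
a, b, n = lam - mu, lam + mu, 100000
h = (b - a) / n
integral = 0.0
for k in range(n):                 # midpoint rule on (lam-mu, lam+mu]
    v = a + (k + 0.5) * h
    integral += J(v, z) * v / (2 * lam * mu) * h

assert abs(J(lam, z) * J(mu, z) - integral) < 1e-6
```

The identity here reduces to $2\sinh(\lambda z)\sinh(\mu z)=\cosh((\lambda+\mu)z)-\cosh((\lambda-\mu)z)$, which the quadrature confirms.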
\section{Littelmann modules and geometric lifting.}
\subsection{}It was observed some time ago by Lusztig that the combinatorics of the canonical basis
is closely related to the geometry of the totally positive varieties. This connection was made precise by
Berenstein and Zelevinsky in \cite{beze2}, in terms of transformations called
``tropicalization'' and ``geometric lifting''. In this section we show how some simple considerations
on Sturm-Liouville equations lead to a natural way of lifting Littelmann paths, which take values in a Cartan algebra,
to the corresponding Borel group. Using this lift, an application of Laplace's method explains the connection between the canonical basis
and the totally positive varieties.
This section is organized as follows.
We first recall the notions of tropicalization and geometric lifting in the next subsection, as well as the connection between the totally positive varieties and the canonical basis. Then we make some observations on Sturm-Liouville equations and their relation to Pitman transformations and the Littelmann path model in type $A_1$.
We extend these observations to higher rank in the next subsections; then we show, in theorem \ref{the:string},
how they explain the link between the string parametrization of the canonical basis and the totally positive varieties.
\subsection{Tropicalization and geometric lifting}
A subtraction-free rational expression is a rational function in several variables, with positive real coefficients and without minus sign, e.g.
$$t_1+2t_2/t_3,\quad (1-t^3)/(1-t)\quad \text{or}\quad 1/(t_1t_2+3t_3t_4)$$ are such expressions, but not $t_1-t_2$.
Any such expression $F(t_1,\ldots,t_n)$ can be tropicalized, which means that
$$F_{trop}(x_1,\ldots,x_n)=\lim_{\varepsilon\to 0_+}\varepsilon\log(F(e^{x_1/\varepsilon},\ldots,
e^{x_n/\varepsilon}))$$
exists as a piecewise linear function of the real variables $(x_1,\ldots,x_n)$, and is given by an expression in the maxplus algebra over the variables $x_1,\ldots,x_n$. More precisely,
the tropicalization $F\to F_{trop}$
replaces each occurrence of $+$ by $\vee$ (the $\max$
sign, $x\vee y=\max(x,y)$), each product by $+$,
each fraction by $-$, and each positive real number by $0$. For example
the three expressions above give
$$(t_1+2t_2/t_3)_{trop}=x_1\vee(x_2 -x_3),\quad ((1-t^3)/(1-t))_{trop}= 0\vee x\vee 2x, $$
and $$
(1/(t_1t_2+3t_3t_4))_{trop}=-\left((x_1+x_2)\vee(x_3+x_4)\right).$$
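The defining limit can be observed numerically. The sketch below does this for the first expression above; the sample point, the value of $\varepsilon$ and the tolerance are arbitrary choices.

```python
import math

# Numerical illustration of the tropicalization limit: for the
# subtraction-free expression F(t1, t2, t3) = t1 + 2*t2/t3, the rescaled
# logarithm eps*log F(exp(x/eps)) approaches x1 v (x2 - x3) as eps -> 0+.

def F(t1, t2, t3):
    return t1 + 2 * t2 / t3

def trop_limit(x1, x2, x3, eps=0.01):
    e = lambda x: math.exp(x / eps)
    return eps * math.log(F(e(x1), e(x2), e(x3)))

x1, x2, x3 = 0.3, 1.4, 0.2
# the residual error is O(eps), coming from the coefficient log 2
assert abs(trop_limit(x1, x2, x3) - max(x1, x2 - x3)) < 0.01
```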
Tropicalization is not a one-to-one transformation, and there exist in general many
subtraction-free rational expressions which have the same tropicalization. Given an expression $G$ in the maxplus algebra, any
subtraction-free rational expression
whose tropicalization is $G$ is called a geometric
lifting of $G$, cf.\ \cite{beze2}.
\subsection{Double Bruhat cells and string coordinates}\label{DBCSC}
We recall some
standard terminology, using the notations of \cite{beze2}. We
consider
a simply connected complex semisimple Lie group $G$, associated with a
root system $R$.
Let $H$ be a maximal torus, and $B,B_-$ be corresponding
opposite Borel subgroups with unipotent
radicals $N,N_-$.
Let $\alpha_i,i\in I,$ and $\alpha_i^{\vee},i\in I,$ be the
simple positive roots and coroots, and $s_i$ the
corresponding reflections
in the Weyl group $W$.
Let $e_i,f_i, h_i,i\in I,$ be Chevalley
generators of the Lie algebra of $G$.
One can choose
representatives $\overline w\in G$ for $w\in W$ by putting
$\overline{s_i}=\exp(-e_i)\exp(f_i)\exp(-e_i)$ and
$\overline{vw}=\overline v\,\overline w$ if
$l(v)+l(w)=l(vw)$ (see \cite{foze} (1.8), (1.9)). The Lie algebra of
$H$, denoted by
$\mathfrak h$, has a Cartan decomposition $\mathfrak
h=\mathfrak a+i\mathfrak a$ such that the roots
$\alpha_i$ take real values on the real vector space $\mathfrak
a$. Thus $\mathfrak a$ is generated by $\alpha_i^{\vee}, i\in I$ and
its dual $\mathfrak a^*$ by $\alpha_i, i\in I$.
A double Bruhat cell is associated with each pair $u,v\in W$ as
$$L^{u,v}=N{\bar u}N\cap B_-\bar vB_-.$$
We will be mainly interested here in the double Bruhat cells $L^{w,e}$.
As shown in \cite{beze2},
given a reduced decomposition $w=s_{i_1}\ldots s_{i_q}$
every element $g\in L^{w,e}$ has a unique decomposition
$g=x_{-i_1}(r_1)\ldots x_{-i_q}(r_q)$ with nonzero complex numbers
$(r_1,\ldots, r_q)$, where $x_{-i}(s)=\varphi_i\begin{pmatrix}s&0\\
1&s^{-1}\end{pmatrix}$ (where $\varphi_i$ is the embedding of $SL_2$ into $G$
given by $e_i,f_i,h_i$). The totally positive part of the double Bruhat cell corresponds to the set of
elements with positive real
coordinates.
For two different reduced decompositions,
the transition map between two sets of coordinates of the form
$(r_1,\ldots, r_q)$ is given by a subtraction free rational map, which is therefore
subject to tropicalization.
As a simple example consider the case of type
$A_2$. Let the coordinates on the double Bruhat cell $L^{w_0,e}$ for the reduced
decompositions $w_0 =s_1s_2s_1$, and
$w_0=s_2s_1s_2$ be $(u_1,u_2,u_3)$ and $(t_1,t_2,t_3)$ respectively,
then
\begin{equation}
\begin{pmatrix}t_2&0&0\\t_1&t_1t_3/t_2&0\\1&t_3/t_2+1/t_1&1/t_1t_3\end{pmatrix}=
\begin{pmatrix}u_1u_3&0&0\\u_3+u_2/u_1&u_2/u_1u_3&0\\1&1/u_3&1/u_2\end{pmatrix}
\end{equation}
which yields transition maps
$$
\begin{array}{rcl}\label{tropi} t_1&=&u_3+u_2/u_1\\ t_2&=&u_1u_3\\ t_3&=&u_1u_2/(u_2+u_1u_3).\end{array}
$$
On the other hand, for each reduced expression $w_0=s_{i_1}\ldots s_{i_q}$ we can consider the
parametrization of the canonical basis by means of string coordinates. For any two such reduced decompositions, the transition maps between the two sets of string coordinates are given by
piecewise linear expressions. As shown by Berenstein and Zelevinsky, these expressions are the tropicalizations of the transition maps between the two parametrizations of the double Bruhat cell $L^{w_0,e}$, associated to the Langlands dual group.
For example, in type $A_2$ (which is its own Langlands dual)
let $(x_1,x_2,x_3)$ be the Kashiwara, or string, coordinates of the canonical basis,
using the reduced
decomposition $w_0 =s_1s_2s_1$, and $(y_1,y_2,y_3)$ the ones corresponding
to $w_0=s_2s_1s_2$.
The transition map between the two is given by
$$\begin{array}{rcl} y_1&=&x_3\vee(x_2-x_1)\\ y_2&=&x_1+x_3\\ y_3&=&x_1\wedge (x_2-x_3)\end{array}$$
which is the tropicalization of (\ref{tropi}).
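This tropicalization statement can be checked numerically; the closed forms below are copied from the two displays above, and the sample point is arbitrary.

```python
import math

# Check (numerically) that the piecewise-linear transition map between
# string coordinates of type A2 is the tropicalization of the rational
# transition map (t1, t2, t3) = (u3+u2/u1, u1*u3, u1*u2/(u2+u1*u3)).

def rational(u1, u2, u3):
    return (u3 + u2 / u1, u1 * u3, u1 * u2 / (u2 + u1 * u3))

def tropical(x1, x2, x3):
    return (max(x3, x2 - x1), x1 + x3, min(x1, x2 - x3))

def trop_limit(x, eps=0.01):
    u = [math.exp(xi / eps) for xi in x]
    return tuple(eps * math.log(t) for t in rational(*u))

x = (0.4, 1.1, 0.25)
assert all(abs(a - b) < 0.02 for a, b in zip(trop_limit(x), tropical(*x)))
```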
We will show how some elementary considerations on the Sturm-Liouville
equation, and the method of variation of constants, together with the
Littelmann path model explain these connections.
\subsection{Sturm-Liouville equations}
We consider the Sturm-Liouville equation
\begin{equation}\label{ Sturm_L}\varphi''+q\varphi=\lambda \varphi
\end{equation}
on some interval of the real line, say $[0,T]$ to fix notation.
In general there exists no closed form for the solution to such an equation.
However, if one solution $\varphi_0$ is known
which does not vanish in the interval, then all the
solutions can be found by quadrature. Indeed
using for example the ``method of variation of constants''
one sees that every
other solution $\varphi$
of this equation in the same interval can be written in the form
$$\varphi(t)=u\varphi_0(t)+v\varphi_0(t)\int_0^t\frac{1}{\varphi_0^2(s)}ds$$
for some constants $u,v$.
If this new solution does not vanish in the
interval, we can use it to generate other solutions of the equation by
the same kind of formula.
This leads us to investigate the composition of two maps of the form
$$E_{u,v}:\varphi\mapsto
u\varphi(t)+v\varphi(t)\int_0^t\frac{1}{\varphi^2(s)}ds\label{action}$$
acting on nonvanishing continuous functions.
It is easy to see, using integration by parts, that whenever
the composition is well defined, one has
$$E_{u,v}\circ E_{u',v'}=E_{uu',uv'+v/u'}$$ therefore these maps define a
partial right
action of the group of unimodular
lower triangular matrices
$$\begin{pmatrix}u&0\\v&u^{-1}\end{pmatrix}$$
on the set of continuous paths which do not vanish on $[0,T]$.
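The composition law can be verified numerically on a discretized path. In the sketch below, $\varphi(t)=e^{\sin t}$ on $[0,1]$ is an arbitrary positive path of our choosing, and the integral $\int_0^t\varphi(s)^{-2}ds$ is approximated by a left Riemann sum.

```python
import math

# Discretized check of E_{u,v} o E_{u',v'} = E_{uu', uv' + v/u'}.
# phi(t) = exp(sin t) on [0, 1] is an illustrative nonvanishing path.

N, T = 20000, 1.0
dt = T / N
phi = [math.exp(math.sin(k * dt)) for k in range(N + 1)]

def E(u, v, f):
    out, acc = [], 0.0
    for x in f:
        out.append(u * x + v * x * acc)
        acc += dt / (x * x)        # running integral of 1/f^2
    return out

u, v, up, vp = 1.3, 0.4, 0.8, -0.1
lhs = E(u, v, E(up, vp, phi))
rhs = E(u * up, u * vp + v / up, phi)
err = max(abs(a - b) for a, b in zip(lhs, rhs))
assert err < 1e-3                  # agreement up to discretization error
```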
Of course this is equivalently a partial left action of the upper triangular
group, but for reasons which will soon appear we choose this formulation.
In particular if we start from $\varphi$ and construct
$$\psi(t)=u\varphi(t)+v\varphi(t)\int_0^t\frac{1}{\varphi^2(s)}ds$$ which does not
vanish on $[0,T]$, then $\varphi$ can be recovered from $\psi$ by the formula
$$\varphi(t)=u^{-1}\psi(t)-v\psi(t)\int_0^t\frac{1}{\psi^2(s)}ds.$$
Coming back to the Sturm-Liouville equation, let $\eta,\rho$ be a fundamental
basis of solutions at 0, namely $\eta(0)=\rho'(0)=1$, $\eta'(0)=\rho(0)=0$.
Then in the two-dimensional space spanned by $\rho,\eta$ the transformation is
given by $$(x,y)\mapsto (ux,uy+v/x)$$ and it is defined on $x\ne 0$.
Again it is easy to check, using
this formula,
that this defines a right action of the lower triangular group.
Let us now investigate the limiting case $u=0$, which gives (assuming $v=1$
for simplicity)
\begin{equation}\label{TPit}{\mathcal T}\varphi(t)=\varphi(t)\int_0^t\frac{ds}{\varphi(s)^2}.\end{equation}
This map provides a ``geometric lifting'' of the one-dimensional
Pitman transformation. Indeed
set $\varphi(t)=e^{a(t)}$, then using Laplace's method
\begin{equation}\lim_{\varepsilon\to 0_+}
\varepsilon\log\left(e^{a(t)/\varepsilon}\int_0^t
e^{-2a(s)/\varepsilon}ds\right)=a(t)-2\inf_{0\leq s\leq t}a(s).\end{equation}
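This Laplace-method limit can be illustrated numerically. In the sketch below, the path $a(t)=\sin(3t)$ on $(0,1]$, the value of $\varepsilon$ and the tolerance are arbitrary choices; the residual discrepancy is of order $\varepsilon\log dt$ from the discretization.

```python
import math

# Compare eps*log( e^{a(t)/eps} int_0^t e^{-2a(s)/eps} ds ) with the
# one-dimensional Pitman transform a(t) - 2 inf_{s<=t} a(s).

N, T, eps = 4000, 1.0, 0.005
dt = T / N
a = [math.sin(3 * k * dt) for k in range(1, N + 1)]

def pitman(vals):
    m, out = float("inf"), []
    for x in vals:
        m = min(m, x)
        out.append(x - 2 * m)
    return out

def lifted(vals):
    acc, out = 0.0, []
    for x in vals:
        acc += dt * math.exp(-2 * x / eps)   # Riemann sum of e^{-2a/eps}
        out.append(eps * math.log(math.exp(x / eps) * acc))
    return out

err = max(abs(p - q) for p, q in zip(pitman(a), lifted(a)))
assert err < 0.1
```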
This time the function $\varphi$ cannot be recovered from ${\mathcal T}\varphi$.
If we
compute the same transformation with
$\varphi_v(t):=\varphi(t)(1+v\int_0^t\frac{1}{\varphi(s)^2}ds)$ we get
$$
\begin{array}{rcl}{\mathcal T}\varphi_v(t)&=&\varphi_v(t)
\int_0^t\frac{1}{\varphi_v(s)^2}ds\\
&=&\varphi(t)(1+v\int_0^t\frac{1}{\varphi(s)^2}ds)\left(\frac{1}{
v}-\frac{1}{
v(1+v\int_0^t\frac{1}{\varphi(s)^2}ds)}\right)\\
&=&\varphi(t)\int_0^t\frac{1}{\varphi(s)^2}ds\\
&=&{\mathcal T}\varphi(t).
\end{array}
$$
This is of course not surprising: since
${\mathcal T}\varphi$ vanishes at 0, it belongs to
a one-dimensional subspace of the space of solutions to the Sturm-Liouville
equation, and ${\mathcal T}$ is not invertible.
In order to recover the function $\varphi$ from $\psi={\mathcal T}\varphi$ we thus need to
specify some real number. A convenient choice is to impose the value of
$$\xi=\int_0^T\frac{1}{\varphi(s)^2}ds=
\frac{\psi(T)}{\varphi(T)}.$$
With this we can compute
$$\int_t^T\frac{1}{\psi(s)^2}ds=
\frac{1}{\int_0^t\frac{1}{\varphi(s)^2}ds}-
\frac{1}{\int_0^T\frac{1}{\varphi(s)^2}ds}=
\frac{\varphi(t)}{\psi(t)}-\frac{1}{\xi}.$$
\begin{prop}Assume that $\psi={\mathcal T}\varphi$ for some nonvanishing
$\varphi$, then the set ${\mathcal T}^{-1}(\psi)$ can be parametrized by
$\xi\in]0,+\infty[$.
For each such $\xi$ there exists a unique $\varphi_\xi\in{\mathcal T}^{-1}(\psi)$
such that $\xi=\int_0^T\frac{1}{\varphi_\xi(s)^2}ds$, given by
$$\varphi_\xi(t)=\psi(t)\left(\frac{1}{\xi}+
\int_t^T\frac{1}{\psi(s)^2}ds\right).$$
\end{prop}
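A worked one-dimensional instance of the proposition, with the illustrative choice $\varphi(t)=e^t$ on $[0,T]$: then ${\mathcal T}\varphi(t)=e^t\int_0^te^{-2s}ds=\sinh t$, $\xi=(1-e^{-2T})/2$, and the inversion formula returns $\varphi$.

```python
import math

# Sanity check of the inversion formula phi_xi(t) = psi(t) (1/xi +
# int_t^T psi(s)^{-2} ds) in the case phi(t) = e^t, psi = sinh
# (our illustrative choice).

T = 1.0

def psi(t):
    return math.sinh(t)            # = (T phi)(t) for phi(t) = e^t

xi = (1.0 - math.exp(-2 * T)) / 2.0

def phi_xi(t, n=50000):
    h = (T - t) / n                # midpoint rule for int_t^T psi^{-2}
    integral = sum(h / math.sinh(t + (k + 0.5) * h) ** 2 for k in range(n))
    return psi(t) * (1.0 / xi + integral)

for t in (0.2, 0.5, 0.9):
    assert abs(phi_xi(t) - math.exp(t)) < 1e-6
```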
Identifying the positive halfline with the Weyl chamber for $SL_2$, we see that
sets of the form ${\mathcal T}^{-1}(\psi)$ are geometric liftings of
the Littelmann modules for $SL_2$.
The formula in the proposition gives a geometric
lifting of the operator ${\mathcal H}^x$ since
$${\mathcal H}^xa(t)=a(t)-x\wedge2\inf_{t\leq s\leq T}a(s)=\lim_{\varepsilon\to0_+}
\varepsilon\log\left(e^{a(t)/\varepsilon}(e^{-x/\varepsilon}+\int_t^Te^{-2a(s)/\varepsilon}ds)\right).
$$
We shall now find the geometric liftings of the
Littelmann operators. For this we have, knowing an element $\varphi_{\xi_1}\in
{\mathcal T}^{-1}(\psi)$, to find the solution corresponding to $\xi_2$.
Since
$$\varphi_{\xi_i}(t)=\psi(t)\left(\frac{1}{\xi_i}+
\int_t^T\frac{1}{\psi(s)^2}ds\right),\qquad i=1,2$$
one has
$$\varphi_{\xi_1}=\varphi_{\xi_2}+\psi(\frac{1}{\xi_1}-\frac{1}{\xi_2})
=\varphi_{\xi_2}\left(1+(\frac{1}{\xi_1}-
\frac{1}{\int_0^T\frac{1}{\varphi_{\xi_2}(s)^2}ds}
)\int_0^t\frac{1}{\varphi_{\xi_2}(s)^2}ds\right).$$
Using Laplace's method again one can
recover the formula for the operators ${\mathcal E}^x_{\alpha}$,
see definition \ref{littelmanntransform}.
\subsection{A $2\times 2$ matrix interpretation}
We shall now recast the above computations using a $2\times 2$ matrix
differential equation of order one,
and the Gauss decomposition of matrices. This will allow us
in the next section to extend these
constructions to higher rank groups.
Let $N_+$ be the
nilpotent group of upper triangular invertible $2\times 2$ matrices, let $N_-$ be the
corresponding group of lower triangular matrices, and $A$ the group of diagonal
matrices, then an
invertible
$2\times 2$ matrix $g$ has a Gauss decomposition if it can be written as
$g=[g]_-[g]_0[g]_+$ with $[g]_-\in N_-, [g]_0\in A$ and $[g]_+\in N_+$.
We will use also the decomposition
$g=[g]_-[g]_{0+}$ with $[g]_{0+}=[g]_0[g]_+\in B=AN_+$.
The condition for such a decomposition to exist
is exactly that the upper left coefficient of the matrix $g$ be nonzero.
Let us consider a smooth path $a:[0,T]\to{\mathbb R}$, such that $a(0)=0$,
and let the matrix $b(t)$ be the solution to
\begin{equation}
\label{eqdiff}\frac{db}{dt}=\begin{pmatrix}\frac{da}{dt}&1
\\0&-\frac{da}{dt}\end{pmatrix}b;\qquad b(0)=Id.
\end{equation}
Then one has
$$b(t)=\begin{pmatrix}e^{a(t)}&e^{a(t)}\int_0^te^{-2a(s)}ds\\
0&e^{-a(t)}\end{pmatrix}.$$
Now let $g=\begin{pmatrix}u&0\\
v&u^{-1}\end{pmatrix}$ and consider the Gauss decomposition of the matrix
$$bg=\begin{pmatrix}ue^{a(t)}+ve^{a(t)}\int_0^te^{-2a(s)}ds
&u^{-1}e^{a(t)}\int_0^te^{-2a(s)}ds\\ ve^{-a(t)}& u^{-1}e^{-a(t)}\end{pmatrix}.$$
One finds that
$$[bg]_-=\begin{pmatrix}1&0\\\frac{ve^{-a(t)}}
{ue^{a(t)}+ve^{a(t)}\int_0^te^{-2a(s)}ds}&
1\end{pmatrix}$$
and
$$[bg]_{0+}=\begin{pmatrix}ue^{a(t)}+ve^{a(t)}\int_0^te^{-2a(s)}ds &u^{-1}e^{a(t)}\int_0^te^{-2a(s)}ds\\0&
(ue^{a(t)}+ve^{a(t)}\int_0^te^{-2a(s)}ds)^{-1}\end{pmatrix}.$$
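As a sanity check, multiplying the two displayed factors back together recovers $bg$. The sketch below does this numerically; the path $a(t)=\sin t$, the time $t$ and the entries $u,v$ are arbitrary illustrative choices.

```python
import math

# Verify [bg]_- [bg]_{0+} = b(t) g for the explicit factors displayed
# above, with a(t) = sin t and the integral computed by midpoint rule.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, u, v = 0.7, 1.2, 0.5
a = math.sin(t)
n = 10000
I = sum(math.exp(-2 * math.sin((k + 0.5) * t / n)) * t / n for k in range(n))

b = [[math.exp(a), math.exp(a) * I], [0.0, math.exp(-a)]]
g = [[u, 0.0], [v, 1.0 / u]]
bg = mat_mul(b, g)

d = u * math.exp(a) + v * math.exp(a) * I        # (1,1) entry of bg
lower = [[1.0, 0.0], [v * math.exp(-a) / d, 1.0]]
upper = [[d, math.exp(a) * I / u], [0.0, 1.0 / d]]

err = max(abs(x - y) for row_p, row_q in zip(mat_mul(lower, upper), bg)
          for x, y in zip(row_p, row_q))
assert err < 1e-12                               # exact up to rounding
```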
One can check the following proposition.
\begin{prop}
The upper triangular matrix $[bg]_{0+}$ satisfies the
differential equation
$$\frac{d}{dt}[bg]_{0+}=\begin{pmatrix}\frac{d}{dt}T_{u,v}a(t)&1\\0&-\frac{d}{
dt}T_{u,v}a(t)\end{pmatrix}[bg]_{0+}$$
where $T_{u,v}a(t)=\log (E_{u,v}e^{a(t)})$.
\end{prop}
This equation is of
the same kind as the equation (\ref{eqdiff})
satisfied by the original matrix $b$, but with a
different initial point. The right action
$E_{u,v}$ is thus obtained by taking the matrix solution to (\ref{eqdiff}),
multiplying it on the right by $g=\begin{pmatrix}u&0\\v&u^{-1}\end{pmatrix}$
and looking at the
diagonal part of the Gauss decomposition of the
resulting matrix.
Actually in this way the partial action $T_{u,v}$
extends to a partial action $T_g$ of the whole group of invertible
real $2\times 2$ matrices. One starts from the path $a$, constructs the matrix
$b$ by the differential equation and then takes the 0-part in the Gauss
decomposition of $bg$. This yields a path $T_ga$. The statement of the
proposition above remains true for $[bg]_{0+}$. The importance of this
statement is that one can iterate the procedure and see that
$T_{g_1g_2}=T_{g_2}\circ T_{g_1}$ when defined.
Consider now the element $s=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$, then
$$T_sa(t)=a(t)+\log\left( \int_0^te^{-2a(s)}ds\right).$$
This is the geometric lifting of the Pitman operator obtained in (\ref{TPit}).
In the next section we shall extend these considerations to groups of
higher rank.
\subsection{Paths in the Cartan algebra}\label{path-cartan}
We work now in the general framework of the beginning of section \ref{DBCSC}.
One has the usual decomposition
${\mathfrak g}={\mathfrak n}_-+{\mathfrak a}+{\mathfrak n}_+$.
Correspondingly there is a Gauss decomposition $g=[g]_{-}[g]_0[g]_+$ with
$[g]_-\in N_-,[g]_0\in A, [g]_+\in N$, defined
on an open dense subset. We denote by $[g]_{0+}=[g]_0[g]_+$ the $B=AN_+$
part of
the decomposition.
The following is easy to check, and provides a useful characterization of
the vector space generated by the $e_i$.
\begin{lemma}\label{nei}Let $n\in {\mathfrak n}_+$, then one has
$[h^{-1}nh]_{+}=n$ for all $h\in N_-$
if and only if $n$ belongs to the vector space generated by the $e_i$.
\end{lemma}
Let $a$ be a path in the Cartan algebra $\mathfrak a$
and let $b$ be a solution to the
equation
$$\frac{d}{dt} b=(\frac{d}{dt}a+n)b$$
where $n\in \oplus_i\mathbb C e_i$.
\begin{prop} Let $g\in G$, and assume that
$bg$ has a Gauss decomposition, then the upper part
$[bg]_{0+}$ in the Gauss
decomposition of $bg$ satisfies the equation
\begin{equation}
\frac{d}{dt}[bg]_{0+}=(\frac{d}{dt}T_g a+n)[bg]_{0+}\label{eqdiff2}
\end{equation}
where $T_ga(t)$ is a path in the Cartan algebra.
\end{prop}
\emph{Proof.} Let us write the equation
$$\frac{d}{dt}([bg]_-[bg]_{0+})=(\frac{d}{dt}a+n)[bg]_-[bg]_{0+}$$
in the form
$$[bg]_{-}^{-1}\frac{d}{dt}[bg]_{-}=[bg]_{-}^{-1}(\frac{d}{dt}a+n)[bg]_{-}
-\frac{d}{dt}[bg]_{0+}[bg]^{-1}_{0+}.$$
Since the left hand side of this equation is lower triangular,
the right hand side has zero upper triangular part. Therefore, by
lemma \ref{nei},
$$n=\left[[bg]_{-}^{-1}(\frac{d}{dt}a+n)[bg]_{-}\right]_{+}=
\left[\frac{d}{dt}[bg]_{0+}[bg]^{-1}_{0+}\right]_{+}$$
and there exists a path $T_g a$ such that equation (\ref{eqdiff2}) holds.
$\square$
We now assume that $$n=\sum_in_ie_i$$ with all $n_i>0$.
When $g=\bar s_i$ is a fundamental reflection, one gets a geometric lifting of
the Pitman
operator
$$T_{s_i}a(t)=a(t)+\log\left(\int_0^te^{-\alpha_i(a(s))}ds
\right)\alpha^{\vee}_i$$
associated with the dual root system, i.e.
$$\lim_{\varepsilon\to 0}\varepsilon T_{s_i}(\frac{1}{\varepsilon}a)={\mathcal
P}_{\alpha_i^{\vee}}a.$$
Thanks to the above proposition, one can prove that these geometric liftings
satisfy the braid relations, and $T_w$ provides a geometric lifting
of the
Pitman operator $\mathcal P_{w}$ for all $w\in W$.
Analogously the Littelmann raising and lowering operators also
have geometric liftings.
\subsection{Reduced double Bruhat cells}\label{redbru}
In this section we show how our considerations on Littelmann's
path model allow us to
make the connection with the work of Berenstein and Zelevinsky \cite{beze2}.
We consider a path $a$ in the Cartan Lie algebra with $a(0)=0$; it belongs to
the Littelmann module $L_{\mathcal P_{w_0}a}$.
Consider the solution $b$ to $\frac{d}{dt} b=(\frac{d}{dt}a+n)b$, $b(0)=I$.
Then $[[b]_+w_0]_{-0}\in L^{w_0,e}$, thus if
\begin{equation}\label{dec}
w_0=s_{i_1}\ldots s_{i_q}
\end{equation}
is a reduced decomposition, then one has
$$[[b]_+w_0]_{-0}=x_{-i_1}(r_1)\ldots
x_{-i_q}(r_q)$$
for some uniquely defined
$r_1(a),\ldots, r_q(a)> 0$ (see \cite{beze2}).
Let $u_k(a)=r_k(a)e^{-\alpha_{i_k}(a(T))}$.
\begin{theorem}\label{the:string}
Let $(x_1,\ldots, x_q)$ be the string parametrization of $a$ in $L_{\mathcal P_{w_0}a}$, associated with the decomposition (\ref{dec}), then
$$(x_1,\ldots, x_q)=\lim_{\varepsilon \to 0 } \varepsilon (\log u_1(a/\varepsilon ),\ldots, \log u_q(a/\varepsilon )).$$
\end{theorem}
\emph{Proof.}
When
we multiply $b$ on the right by $\bar s_{i_1}$, and take its Gauss decomposition
$$[bs_{i_1}]_-[bs_{i_1}]_0[bs_{i_1}]_+=[b]_0[b]_+s_{i_1}$$
then $$[b]_+s_{i_1}[bs_{i_1}]_+^{-1}=[b]_0^{-1}
[bs_{i_1}]_-[bs_{i_1}]_0\in
Ns_{i_1}N\cap B_-L^{s_{i_1},e}$$ and
$$[b]_+s_{i_1}[bs_{i_1}]_+^{-1}=x_{-i_1}(r_1)$$
for some $r_1$. In fact, using our formula for
Littelmann operators,
$$r_1=e^{\alpha_{i_1}(a(T))}\int_0^Te^{-\alpha_{i_1}(a(s))}ds.$$
Comparing with (\ref{kash}) we see that $r_1e^{-\alpha_{i_1}(a(T))}$
gives a geometric lifting of the first string
coordinate for the Littelmann module.
We can continue the process starting from $[bs_{i_1}]_+$, to get
$$[bs_{i_1}]_+s_{i_2}[bs_{i_1}s_{i_2}]_+^{-1}=x_{-i_2}(r_2)$$
(using the fact that $[g_1g_2]_+=[[g_1]_+g_2]_+$ for $g_1,g_2\in G$)
obtaining
successive
decompositions
$$[b]_+s_{i_1}\ldots s_{i_k}[bs_{i_1}\ldots s_{i_k}]_+^{-1}=x_{-i_1}(r_1)\ldots
x_{-i_k}(r_k).$$
This gives the coordinates of $[[b]_+w_0]_{-0}\in L^{w_0,e}$, which
are thus seen to correspond to the string coordinates by a geometric
lifting.
$\square$
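To illustrate the theorem in the simplest case (an added example, not in the original text), take $G=SL_2$, so that $w_0=s_1$, $q=1$, and the unique simple root is $\alpha_1$. The proof gives $r_1=e^{\alpha_1(a(T))}\int_0^T e^{-\alpha_1(a(s))}\,ds$, hence $u_1(a)=r_1(a)e^{-\alpha_1(a(T))}=\int_0^T e^{-\alpha_1(a(s))}\,ds$, and the string coordinate is recovered by a Laplace-type limit:

```latex
% Rank-one illustration: the single string coordinate as a running infimum.
$$
x_1=\lim_{\varepsilon\to 0}\varepsilon\log u_1(a/\varepsilon)
   =\lim_{\varepsilon\to 0}\varepsilon\log\int_0^T
     e^{-\alpha_1(a(s))/\varepsilon}\,ds
   =-\inf_{0\le s\le T}\alpha_1(a(s)).
$$
```

Thus in rank one the string coordinate of the path is the negative of the running infimum of $\alpha_1$ along it, consistent with the tropicalization of the geometric lifting of the Pitman operator.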
\section{Appendix}
This appendix is devoted to the proof of theorem \ref{thmuniq}.
\begin{lemma}\label{clnfcry} If $B(\lambda), \lambda \in \bar C,$
is a closed normal family of highest weight continuous crystals
then for each $\lambda, \mu \in \bar C$ such that $\lambda \leq \mu$ there exists an injective map
$\Psi_{\lambda,\mu}:B(\lambda)\to B(\mu)$ with the following properties
\begin{enumerate}[(i)]
\item
$\Psi_{\lambda,\mu}(b_\lambda)=b_\mu,$
\item $\Psi_{\lambda,\mu}e_\alpha^r(b)=e_\alpha^r\Psi_{\lambda,\mu}(b)$,
for all $b \in B(\lambda),\alpha\in\Sigma,r\geq 0$,
\item $\Psi_{\lambda,\mu}f_\alpha^r(b)=f_\alpha^r\Psi_{\lambda,\mu}(b)$
if $f_\alpha^r(b)\in B(\lambda)$.
\end{enumerate}
\end{lemma}
\noindent{\it Proof.} Let $\nu = \mu-\lambda$. First consider the map
$\Phi_{\lambda,\mu}:B(\lambda)\to B(\lambda)\otimes B(\nu)$ given by
$\Phi_{\lambda,\mu}(b)=b\otimes b_\nu$, for $b \in B(\lambda)$.
Since $b_\nu$ is a highest weight element, $\varepsilon_\alpha(b_\nu)=0$. By normality, for all $b \in B(\lambda),
\varphi_\alpha(b) \geq 0$. Therefore
$\sigma:=\varphi_\alpha(b)-\varepsilon_\alpha(b_\nu)= \varphi_\alpha(b)\geq 0$. By definition, this implies that
$\varepsilon_\alpha(b\otimes b_\nu)=\varepsilon_\alpha(b)$, $
\varphi_\alpha(b\otimes b_\nu)=\varphi_\alpha(b)$,
$wt(b\otimes b_\nu)=wt(b)+\nu.$ Using (\ref{sigmapos}) we see also that, for $r \geq 0$,
$
e_\alpha^r(b\otimes b_\nu)=
e_\alpha^{r}b\otimes b_\nu$ and that, when $f_\alpha^r(b)\in B(\lambda)$, $r \leq \varphi_\alpha(b) = \sigma$ by normality, and therefore $f_\alpha^r(b\otimes b_\nu)=
f_\alpha^{r}b\otimes b_\nu$. Since the family is closed there is an isomorphism $i_{\lambda,\mu}: \mathcal F(b_\lambda\otimes b_\nu) \to B(\mu)$. One has $i_{\lambda,\mu}(b_\lambda\otimes b_\nu)=b_{\mu}.$ One can take $\Psi_{\lambda,\mu}=i_{\lambda,\mu}\circ \Phi_{\lambda,\mu}$. $\square$
The family $\Psi_{\lambda,\mu}$ constructed above satisfies
$\Psi_{\lambda, \lambda} =id$ and, when $\lambda \leq \mu\leq\nu$,
$\Psi_{\mu, \nu} \circ \Psi_{\lambda,\mu}= \Psi_{\lambda,\nu}$,
so that we can consider the direct limit $B(\infty)$ of
the family $B(\lambda), \lambda \in \bar C,$ with the injective maps
$\Psi_{\lambda, \mu}: B(\lambda) \to B(\mu), \lambda \leq\mu$.
Still following Joseph \cite{Joseph-notes},
we define a crystal structure on $B(\infty)$.
\begin{prop}
The direct limit $B(\infty)$ is a highest weight upper normal continuous
crystal with highest weight $0$.
\end{prop}
\noindent{\it Proof.} By definition, the direct limit $B(\infty)$ is the quotient set
$B/\sim$ where $B=\cup_{\lambda\in \bar C}B(\lambda)$ is the disjoint union of
the $B(\lambda)$'s and where $b_1 \sim b_2$ for $b_1\in B(\lambda), b_2
\in B(\mu)$, when there exists a $\nu \in \bar C$ such that
$\nu \geq\lambda, \nu\geq\mu$ and
$\Psi_{\lambda,\nu}(b_1)=\Psi_{\mu,\nu}(b_2)$.
Let $\bar b$ be the image in $B(\infty)$ of $b \in B$.
If $b \in B(\lambda)$, then we define
$wt(\bar b)=wt(b)-\lambda$,
$\varepsilon_\alpha(\bar b)=\varepsilon_\alpha( b),$
$\varphi_\alpha(\bar b)=\varepsilon_\alpha( \bar b)+\alpha^\vee(wt(\bar b))$
and, when $r \geq 0$,
$e^r_{\alpha}(\bar b)=\overline{ e^r_{\alpha}(b)}$.
These do not depend on $\lambda$, since if
$\mu \geq\lambda$ and
$b'=\Psi_{\lambda,\mu}(b)$, then one has
$\bar b'=\bar b$ and $wt(b')=wt(b)+\mu-\lambda$. In order to define
$ f_\alpha^r(\bar b)$ for $r \geq 0$, let us choose $\mu \geq\lambda$ large enough to ensure that
$$\varphi_\alpha( b')=\varepsilon_\alpha( b')+\alpha^{\vee}(wt(b))+
\alpha^\vee(\mu-\lambda)\geq r.$$
Then
$f^r_\alpha b'\neq {\bf 0}$ by normality and we define
$f^r_\alpha\bar b=\overline{ f^r_\alpha b'}$. Again this does not depend on $\mu$.
Using the lemma we check that this defines a crystal structure on $B(\infty)$.
Each $\Psi_{\lambda, \mu}$, $\lambda \leq \mu$,
commutes with the $e_\alpha^r, r \geq 0 $.
This implies that $B(\infty)$ is upper normal.
Since each $B(\lambda)$ is a highest weight crystal,
$B(\infty)$ also has this property. $\square$
We will denote by $b_\infty$ the unique element of $B(\infty)$ of weight 0.
Note that $B(\infty)$ is not lower normal.
For instance, \begin{equation}\label{binfty}\varphi_\alpha(b_\infty)=0, f(b_\infty)\neq {\bf 0}, \mbox{ for all } f\in \mathcal F.\end{equation}
For $\lambda \in \bar C$ we define the crystal $S(\lambda)$
as the set with a unique element $\{s_\lambda\}$ and the maps
$wt(s_\lambda)=\lambda, \varepsilon_\alpha(s_\lambda)=
-\alpha^\vee(\lambda),\varphi_\alpha(s_\lambda)=0$ and
$e^r_\alpha(s_\lambda)=\bf 0$ when $r\neq 0.$
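As a quick consistency check (added here; it is immediate from the definitions), the maps defining $S(\lambda)$ satisfy the weight relation used throughout this appendix:

```latex
% Weight relation for the one-element crystal S(\lambda):
$$
\varphi_\alpha(s_\lambda)-\varepsilon_\alpha(s_\lambda)
 = 0-(-\alpha^\vee(\lambda))
 = \alpha^\vee(\lambda)
 = \alpha^\vee(wt(s_\lambda)),
$$
```

in agreement with the identity $\varphi_\alpha=\varepsilon_\alpha+\alpha^\vee(wt)$ appearing, for instance, in the definition of the crystal structure on $B(\infty)$.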
\begin{lemma} The map
$$\Psi_\lambda:b\in B(\lambda) \mapsto \bar b \otimes
s_\lambda \in B(\infty)\otimes S(\lambda)$$
is a crystal embedding.
\end{lemma}
\noindent{\it Proof.} Let $b \in B(\lambda)$, then
$$wt(\Psi_\lambda(b))=wt(\bar b \otimes s_\lambda)=wt(\bar b)+wt(s_\lambda)=
wt(b)-\lambda+\lambda=wt(b).$$
Let $\sigma=\varphi_\alpha(\bar b)-\varepsilon_\alpha(s_\lambda)$.
Then $\sigma=\varphi_\alpha(b)$ since
$\varepsilon_\alpha(s_\lambda)=-\alpha^\vee(\lambda)$ and
$\varphi_\alpha(\bar b)=\varphi_\alpha(b)-\alpha^\vee(\lambda)$.
Thus $\sigma \geq 0$ by normality of $B(\lambda)$.
By the definition of the tensor product,
this implies that
$$\varepsilon_\alpha(\Psi_\lambda(b))=
\varepsilon_\alpha(\bar b \otimes s_\lambda)=\varepsilon_\alpha(\bar b)=
\varepsilon_\alpha( b),$$
thus $\varphi_\alpha(\Psi_\lambda(b))=\varphi_\alpha( b)$.
Furthermore, since $\sigma \geq 0$,
$$e^r_\alpha(\Psi_\lambda(b))
=e^r_\alpha(\bar b \otimes s_\lambda)=
e^{\max(r,-\sigma)}_\alpha(\bar b)\otimes
e^{\min(r,-\sigma)+\sigma}s_\lambda.$$
When $r \geq -\sigma$, this is equal to
$e^r_\alpha(\bar b)\otimes s_\lambda=\Psi_\lambda(e^r_\alpha(b))$. If
$r < -\sigma$ then $e^r_\alpha(\Psi_\lambda(b))= e^{-\sigma}_\alpha( \bar b) \otimes e^{r+\sigma}_\alpha( s_\lambda)={\bf 0}$, since $e^{s}_\alpha( s_\lambda)={\bf 0}$ when $s\neq 0$,
and on the other hand,
$e^r_\alpha(b)={\bf 0}$ by normality.
Thus $\Psi_\lambda(e^r_\alpha(b))={\bf 0}$.
$\square$
If $f=f_{\alpha_n}^{r_n}\cdots f_{\alpha_1}^{r_1} \in \mathcal F$,
we say that $f'\in \mathcal F$ is extracted from $f$ if
$f'=f_{\alpha_n}^{r_n'}\cdots f_{\alpha_1}^{r_1'}$ with
$0 \leq r_k'\leq r_k, k=1,\cdots,n$. Recall the definition of
$B_\alpha=\{b_\alpha(t), t \leq 0\}$ given in Example \ref{exba}.
\begin{lemma}\label{lem_indm} Let $f \in \mathcal F$ and $\alpha \in \Sigma$,
then there exists $f'$ extracted from $f$ and $t \geq 0$ such that
$$ f(b_\infty \otimes b_\alpha(0)) = f' b_\infty \otimes b_\alpha(-t).$$
Moreover if $\lambda\in \bar C$ is such that $\alpha^\vee(\lambda)=0$ and
$\beta^\vee(\lambda)$
large enough for all $\beta\in \Sigma-\{\alpha\}$, then for
$\mu\in \bar C$, for the same $f'\in \mathcal F$ and $t \geq 0$,
$$ f(b_\lambda \otimes b_\mu) = f' b_\lambda \otimes f_\alpha^t b_\mu.$$
\end{lemma}
\noindent{\it Proof.}
The first part follows easily from the definition of the tensor product.
Let $\lambda\in \bar C$ such that $\alpha^\vee(\lambda)=0$, $\mu \in \bar C, \beta\in \Sigma -\{\alpha\}$ and $r \geq 0$.
If, for some $s >0$, one has $ e_\beta^s(f_\alpha^rb_\mu)\neq {\bf 0}$
then $wt(e_\beta^s(f_\alpha^rb_\mu))=\mu+s\beta-r\alpha$
is in $\mu -\bar C$ (since $\mu$ is a highest weight).
This is not possible because $\beta^\vee(s\beta-r\alpha) \geq s\beta^\vee(\beta) >0$. Therefore, by normality,
$\varepsilon_\beta(f_\alpha^rb_\mu)=0.$
On the other hand, for all
$f=f_{\alpha_n}^{r_n}\cdots f_{\alpha_1}^{r_1} \in \mathcal F$,
$$\varphi_\beta(fb_\lambda)=\beta^\vee(wt(fb_\lambda))
+\varepsilon_\beta(fb_\lambda) \geq
\beta^\vee(wt(fb_\lambda))=\beta^\vee(\lambda)- \sum_{k=1}^n{r_k}
\beta^{\vee}(\alpha_k).$$
Let $\sigma=\varphi_\beta(fb_\lambda)-\varepsilon_\beta(f_\alpha^rb_\mu)=
\varphi_\beta(fb_\lambda)$ and $ s \geq 0$. Then
$$\sigma = \varphi_\beta(fb_\lambda) \geq \beta^\vee(\lambda)-
\sum_{k=1}^n{r_k} \beta^{\vee}(\alpha_k).$$
If $\beta^\vee(\lambda)$ is large enough, then
$\sigma \geq \max(s,0)$ which implies, see (\ref{sigmapos}), that
\begin{equation}\label{flb}f_\beta^{s}(fb_\lambda \otimes f_\alpha^r b_\mu)=(f_\beta^{s}fb_\lambda) \otimes f_\alpha^r b_\mu.\end{equation}
On the other hand,
$\varphi_\alpha(b_\lambda)=\alpha^\vee(\lambda)+\varepsilon_\alpha(b_\lambda)=0$, since
$\varepsilon_\alpha(b_\lambda)=0$ by normality.
We also know that $\varphi_\alpha(b_\infty)=0$, see (\ref{binfty}), hence
$$\varphi_\alpha(fb_\lambda)=
\varphi_\alpha(b_\lambda)-\sum_{k=1}^n{r_k}\alpha^{\vee}(\alpha_k)=
\varphi_\alpha(b_\infty)-\sum_{k=1}^n{r_k}\alpha^{\vee}(\alpha_k)=\varphi_\alpha(fb_\infty).$$
Thus $\sigma=\varphi_\alpha(fb_\infty)$ and does not depend on $\lambda$.
It follows that the following decomposition is independent of $\lambda$:
\begin{equation}\label{fla}f_\alpha^s(fb_\lambda\otimes f_\alpha^r b_\mu)=f_\alpha^{\sigma\wedge s}fb_\lambda\otimes f_\alpha^{r+s-\sigma\wedge s}b_\mu.\end{equation}
Using (\ref{flb}) and (\ref{fla}), it is now easy to prove the lemma by induction on $n$, proving first the second assertion. $\square$
\begin{prop}\label{gammaalpha} For each
simple root $\alpha$, there is a crystal embedding
$\Gamma_\alpha:B(\infty)\to B(\infty)\otimes B_\alpha$ such that
$\Gamma_\alpha(b_\infty)=b_\infty\otimes b_\alpha(0)$.
\end{prop}
\noindent{\it Proof.} Let us show that the expression
\begin{equation}\label{def_Gamma}
\Gamma_\alpha(f b_\infty)=f(b_\infty\otimes b_\alpha(0)),\;\;
f \in \mathcal F,\end{equation}
defines the morphism $\Gamma_\alpha$.
First we check that it is well defined.
By definition, $fb_\infty=\overline{f b}_\nu$ for all $\nu\in \bar C$
such that $\overline{fb}_\nu\neq {\bf 0}$.
Let us choose $\lambda$ as in lemma \ref{lem_indm}.
For $\mu \in \bar C$ large enough, $\overline{f b}_{\lambda+\mu}\neq {\bf 0}$.
Let us write
$$\overline{f b}_{\lambda+\mu}=f(\bar b_\lambda\otimes\bar b_\mu)=
\overline{f' b}_\lambda \otimes\overline{ f_\alpha^t b}_\mu.$$
Then $f'$ and $t$
depend only on $fb_{\lambda+\mu}$, which by definition depends only on
$fb_\infty$. By lemma \ref{lem_indm},
$$f(b_\infty\otimes b_\alpha(0))= f'b_\infty \otimes b_\alpha(-t)$$
which depends only on $fb_\infty$
(and not on $f$ itself), showing that $\Gamma_\alpha$
is well defined on $\mathcal F b_\infty$, and thus on
$B(\infty)$, since $\mathcal F b_\infty=B(\infty)$.
Notice that
$f(b_\infty\otimes b_\alpha(0))\neq {\bf 0}$
since $f'b_\infty\neq {\bf 0}$.
Let us prove that $\Gamma_\alpha$ is injective. Suppose that $f(b_\infty\otimes b_\alpha(0))=\tilde f(b_\infty\otimes b_\alpha(0))$ for some $f,\tilde f \in \mathcal F$.
Using lemma \ref{lem_indm},
$$f(b_\infty\otimes b_\alpha(0))= f'b_\infty \otimes b_\alpha(-t)\mbox{ and }\tilde f(b_\infty\otimes b_\alpha(0))= \tilde f'b_\infty \otimes b_\alpha(-\tilde t).$$
If $\lambda\in \bar C$ is as in this lemma, then
$$f(b_\lambda\otimes b_\mu)= f'b_\lambda \otimes f_\alpha^t (b_\mu)=
\tilde f'b_\lambda \otimes f_\alpha^{\tilde t} b_\mu=
\tilde f(b_\lambda\otimes b_\mu),$$
therefore $fb_{\lambda+\mu}=\tilde fb_{\lambda+\mu}$,
thus $fb_{\infty}=\tilde fb_\infty$.
It is clear that $\Gamma_\alpha$ commutes with
$f_\alpha^r, r \geq 0$.
Since $\varepsilon_\alpha(b_\alpha(0))=\varphi_\alpha(b_\infty)=0$,
$$\varepsilon_\alpha(\Gamma_\alpha(b_\infty))=
\varepsilon_\alpha(b_\infty\otimes b_\alpha(0))=
\varepsilon_\alpha(b_\infty),$$
hence, if $f=f_{\alpha_n}^{r_n}\cdots f_{\alpha_1}^{r_1}
\in \mathcal F$, $$\varepsilon_\alpha(\Gamma_\alpha(f b_\infty))
=\varepsilon_\alpha(f\Gamma_\alpha(b_\infty))=
\varepsilon_\alpha(\Gamma_\alpha(b_\infty))-\sum_{k=1}^n r_k\alpha^{\vee}(\alpha_k)
= \varepsilon_\alpha(fb_\infty).$$ Therefore $\Gamma_\alpha$ commutes with $\varepsilon_\alpha$. It also commutes with $wt$ since $wt(b_\infty)=0$. Let us now consider
$e^r_\alpha,r \geq 0$.
Let $b\in B(\infty)$. If $e^r_\alpha(b)\neq {\bf 0}$,
then $$\Gamma_\alpha(b)=\Gamma_\alpha(f^r_\alpha e^r_\alpha(b))=
f^r_\alpha (\Gamma_\alpha(e^r_\alpha(b)))\neq {\bf 0}$$
hence $\Gamma_\alpha(e^r_\alpha(b))=e^r_\alpha(\Gamma_\alpha(b))$.
Suppose now that $e^r_\alpha(b)= {\bf 0}$. Since $B(\infty)$
is upper normal,
one has
$\varepsilon_\alpha(b)= 0$,
hence $\varepsilon_\alpha(\Gamma_\alpha(b))= 0$.
By the lemma, there is $f'\in \mathcal F$ and $t \geq 0$ such that $\Gamma_\alpha(b)=f'b_{\infty}\otimes b_\alpha(-t).$ Therefore
$$0 = \varepsilon_\alpha(\Gamma_\alpha(b))\geq \varepsilon_\alpha(f'b_\infty) \geq 0.$$
By upper normality this implies that $e^r_\alpha(f'b_\infty)={\bf 0}$, hence $$e^r_\alpha(\Gamma_\alpha(b))= e^r_\alpha(f'b_{\infty}\otimes b_\alpha(-t))=(e^r_\alpha f'b_{\infty})\otimes b_\alpha(-t)={\bf 0}.\;\;\; \square$$
The following lemma is clear.
\begin{lemma}\label{lem_comp}
Let $B_1,B_2$ and $C$ be three continuous crystals and $\Psi: B_1 \to B_2$ be a crystal embedding. Then $ \tilde \Psi: B_1\otimes C\to B_2\otimes C$ defined by
$\tilde\Psi(b\otimes c)= \Psi(b)\otimes c$ is a crystal embedding.
\end{lemma}
\subsection{Uniqueness. Proof of theorem \ref{thmuniq}}\label{prfuniq}
Recall that $\Sigma$ is the set of simple roots.
Fix a sequence $A=(\cdots, \alpha_2,\alpha_1)$
of elements of $\Sigma$ such that each simple
root occurs infinitely many times and $\alpha_n \neq \alpha_{n+1}$
for all $n \geq 1$. Let $ \hat B(A)$ be the subset of
$\cdots B_{\alpha_2} \otimes B_{\alpha_1}$
in which the $k$-th entry differs from $b_{\alpha_k}(0)$
for only finitely many $k$. One checks that the rules
given for multiple tensor products give $\hat B(A)$
the structure of a continuous crystal
(see, e.g., Kashiwara, \cite{kashbook}, 7.2,
Joseph \cite{Joseph-book},\cite{Joseph-notes}).
Let $ b_A$ be the element of $\hat B(A)$ with entries
$b_{\alpha_n}(0)$ for all $n \geq 1$. We denote $B(A)=\mathcal F b_A$.
\begin{prop} There exists a crystal embedding $\Gamma$
from $B(\infty)$ onto $ B(A)$
such that $\Gamma(b_\infty)= b_A$.
\end{prop}
\noindent{\it Proof.} Let $f \in \mathcal F$. We can write
$f=f_{\alpha_k}^{r_k}\cdots f_{\alpha_1}^{r_1}$ where
$(\cdots, \alpha_2,\alpha_1)= A$ and
$ r _n \geq 0$ for all $n\geq 1$. By lemma \ref{lem_indm}
$$\Gamma_{\alpha_1}(f_{\alpha_1}^{r_1}(b_\infty))=f_{\alpha_1}^{r_1}(\Gamma_{\alpha_1} b_\infty)=f_{\alpha_1}^{r_1}(b_\infty\otimes b_{\alpha_1}(0))=b_\infty\otimes b_{\alpha_1}(-r_1)$$
therefore
$$ \Gamma_{\alpha_1}(f_{\alpha_k}^{r_k}\cdots f_{\alpha_1}^{r_1}b_\infty)=
(f_{\alpha_k}^{r'_k}\cdots f_{\alpha_2}^{r'_2}b_\infty)\otimes b_{\alpha_1}(-r'_1)$$
for some $r'_1,\cdots, r'_k\geq0$. Similarly,
$$\Gamma_{\alpha_2}(f_{\alpha_k}^{r'_k}\cdots f_{\alpha_2}^{r'_2}b_\infty)=(f_{\alpha_k}^{r''_k}\cdots f_{\alpha_3}^{r''_3}b_\infty)\otimes b_{\alpha_2}(-r''_2)$$ for some $r''_2,r''_3,\cdots,r''_k$.
If we apply lemma \ref{lem_comp} to $B_1=B(\infty),
B_2=B(\infty)\otimes B_{\alpha_2},
\Psi=\Gamma_{\alpha_2},C=B_{\alpha_1}$,
we obtain a crystal embedding
$$\tilde \Gamma_{\alpha_2}:B(\infty)\otimes B_{\alpha_1}\to B(\infty)\otimes B_{\alpha_2}\otimes B_{\alpha_1}$$
such that, for $b\in B(\infty), b_1\in B_{\alpha_1}$
$$\tilde \Gamma_{\alpha_2}(b\otimes b_1)= \Gamma_{\alpha_2}b\otimes b_1.$$
Let $\Gamma_{\alpha_2,\alpha_1}=\tilde \Gamma_{\alpha_2}\circ
\Gamma_{\alpha_1}:B(\infty)\to B(\infty)\otimes B_{\alpha_2}
\otimes B_{\alpha_1}$, then
\begin{eqnarray*}
\Gamma_{\alpha_2,\alpha_1}(f_{\alpha_k}^{r_k}\cdots f_{\alpha_1}^{r_1}b_\infty)&=&\tilde \Gamma_{\alpha_2}( f_{\alpha_k}^{r'_k}\cdots f_{\alpha_2}^{r'_2}b_\infty\otimes b_{\alpha_1}(-r'_1))\\
&=&\Gamma_{\alpha_2}( f_{\alpha_k}^{r'_k}\cdots f_{\alpha_2}^{r'_2}b_\infty)\otimes b_{\alpha_1}(-r'_1)\\
&=&(f_{\alpha_k}^{r''_k}\cdots f_{\alpha_3}^{r''_3}b_\infty)\otimes b_{\alpha_2}(-r''_2)\otimes b_{\alpha_1}(-r'_1).
\end{eqnarray*}
Again, with $\Gamma_{\alpha_3}$ we build
$\Gamma_{\alpha_3,\alpha_2,\alpha_1}=\tilde
\Gamma_{\alpha_3}\circ \Gamma_{\alpha_2,\alpha_1}$.
Inductively we obtain strict morphisms
$$\Gamma_{\alpha_k,\cdots,\alpha_1}:B(\infty)\to B(\infty)\otimes
B_{\alpha_k}\otimes \cdots \otimes B_{\alpha_2}\otimes B_{\alpha_1}$$
such that for some $s_k,\cdots, s_1$
$$\Gamma_{\alpha_k,\cdots,\alpha_1}
(f_{\alpha_k}^{r_k}\cdots f_{\alpha_1}^{r_1}b_\infty)=b_\infty
\otimes b_{\alpha_k}(-s_k)\otimes \cdots \otimes b_{\alpha_1}(-s_1).$$
Now we can define $\Gamma:B(\infty)\to B(A)$ by the formula
$$\Gamma(f_{\alpha_k}^{r_k}\cdots f_{\alpha_1}^{r_1}b_\infty)=
\cdots \otimes b_{\alpha_{k+n}}(0) \otimes\cdots \otimes b_{\alpha_{k+1}}(0)
\otimes b_{\alpha_k}(-s_k)\otimes \cdots \otimes b_{\alpha_1}(-s_1). $$
One checks that this is a
crystal embedding. $\square$
This shows that $B(\infty)$ is isomorphic to $B(A)$, which does not depend on the chosen closed family of crystals, and thus proves the uniqueness.
It also shows that $B(A)$ does not depend on $A$, as soon
as a closed family exists.
\tableofcontents
\end{document}
\begin{document}
\title{Flows of flowable Reeb homeomorphisms}
\author{Shigenori Matsumoto}
\address{Department of Mathematics, College of
Science and Technology, Nihon University, 1-8-14 Kanda, Surugadai,
Chiyoda-ku, Tokyo, 101-8308 Japan
}
\email{[email protected]
}
\thanks{The author is partially supported by Grant-in-Aid for
Scientific Research (C) No.\ 20540096.}
\subjclass{37E30}
\keywords{Reeb foliations, homeomorphisms, topological conjugacy.}
\date{\today }
\begin{abstract}
We consider a fixed point free homeomorphism $h$ of
the closed band $B={\mathbb R}\times[0,1]$ which leaves each leaf
of a Reeb foliation on $B$ invariant. Assuming $h$ is
the time one of various topological flows, we compare the
restriction of the flows on the boundary.
\end{abstract}
\maketitle
\section{Introduction}
Orientation preserving and fixed point free homeomorphisms
of the plane are called {\em Brouwer homeomorphisms}.
Since the seminal work of L. E. Brouwer nearly 100 years ago,
they have drawn the attention of many mathematicians (\cite{K},\cite{HT},
\cite{F},\cite{Fr},
\cite{G}, \cite{N}). Nowadays there still remain interesting problems
about them.
Besides those Brouwer homeomorphisms which are topologically
conjugate to the translation, the simplest ones are perhaps
those which preserve the leaves of Reeb foliations; the main
theme of the present notes. It is simpler and loses
nothing to consider their restriction to the Reeb component.
Let $B={\mathbb R}\times[0,1]$ be a closed band, and denote
$\partial_iB={\mathbb R}\times\{i\}$ ($i=0,1$) and ${{\rm Int}\,} B={\mathbb R}\times(0,1)$.
An oriented foliation $\mathcal R$ on $B$ is
called a {\em Reeb foliation} if $\partial_0B$ with the positive
orientation and $\partial_1B$ with the negative orientation are
leaves of $\mathcal R$ and the foliation restricted to the interior
${{\rm Int}\,} B$ is a bundle foliation. The leaf space of a Reeb foliation
is homeomorphic to the non Hausdorff space obtained by gluing two
copies of $[0,\infty)$ along $(0,\infty)$. This shows (\cite{HR})
that any two Reeb foliations are mutually topologically equivalent.
A homeomorphism $h:B\to B$ is called a {\em Reeb homeomorphism} if
$h$ preserves each leaf of a Reeb foliation and $h(x)>x$ for any $x\in
B$,
where $>$ is the total order on a leaf given by the orientation.
In this paper we consider {\em flowable} Reeb homeomorphisms $h$,
i.\ e.\ those which are the time one of topological flows.
(F. B\'eguin and F. Le Roux constructed in \cite{BL}
examples of non flowable Reeb homeomorphisms.)
When $h$ is flowable, $h$ can be embedded as the time one
into a great variety of flows. The purpose of this paper is to compare
the restriction of one flow to the boundary of $B$ with that
of another. This problem is motivated by a result of \cite{L2}
which states that if two flows have a common orbit foliation,
then their restrictions to the boundary are the same.
Let
$$P=\{(x,y)\mid x\geq0, y\geq0\}-\{(0,0)\}.$$
Notice that $P$ is homeomorphic to $B$. Consider a homeomorphism
$h_P:P\to P$ defined by
$$h_P(x,y)=(2x,2^{-1}y).$$
A homeomorphism of $B$ is called a {\em standard Reeb homeomorphism}
if it is topologically conjugate to $h_P$, and
{\em nonstandard} otherwise.
It is known (\cite{BL}) that there are nonstandard flowable
Reeb homeomorphisms.
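For concreteness (an added remark, under the identification of $P$ with $B$ mentioned above), the model $h_P$ is indeed a Reeb homeomorphism: the hyperbolas $xy=c$ with $c>0$, together with the two boundary half-axes, form a Reeb foliation of $P$, and a direct computation shows that $h_P$ preserves every leaf and has no fixed point:

```latex
% h_P preserves each leaf xy = c, and is fixed point free on P:
$$
h_P(x,y)=(2x,2^{-1}y)
\;\Longrightarrow\;
(2x)\cdot(2^{-1}y)=xy,
\qquad
h_P(x,y)=(x,y)\iff x=y=0\notin P.
$$
```

Moreover $h_P$ moves every point forward along its leaf ($x$ doubles on each hyperbola and on the $x$-axis, while on the $y$-axis points move toward the deleted origin), so $h_P(\xi)>\xi$ in the leaf order for a suitable orientation of the foliation.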
The main result of this paper is
the following.
\begin{Theorem} \label{main}
(1) Assume $h$ is a standard Reeb homeomorphism of $B$. For $i=0,1$,
let $\{\psi_i^t\}$ be an arbitrary flow on $\partial_iB$
whose time one is the restriction of $h$.
Then there is a flow $\{\varphi^t\}$ on $B$, an extension of
both $\{\psi_0^t\}$ and $\{\psi_1^t\}$, whose time one is $h$.
(2) If $h$ is a nonstandard flowable Reeb homeomorphism,
there is a homeomorphism from $\partial_0B$ to $\partial_1B$ which
commutes
with any flow whose time one is $h$.
\end{Theorem}
Notice that (1) is immediate from the model $h_P$, which is a
product map.
After we prepare some necessary prerequisites in Sect.\ 2, we prove
Theorem \ref{main} (2) in Sect.\ 3. Sect.\ 4 is devoted to two examples
of nonstandard flowable Reeb homeomorphisms, one for which Theorem
\ref{main} (2) is optimal, and the other for which the restriction
of the flow to the boundary is unique.
\section{Preliminaries}
Let $\mathcal R$ be a Reeb foliation on $B$. A topological flow on $B$ is called an
$\mathcal R$-{\em flow} if its oriented orbit foliation is $\mathcal R$.
Let ${\mathcal E}$ be the set of the topological conjugacy classes of $\mathcal R$-flows.
We shall summarize a main result of \cite{L},
a classification of ${\mathcal E}$, which will play
a crucial role in what follows.
For $i=0,1$ let
$\gamma_i:[0,\infty)\to B$ be a continuous path such that
$\gamma_i(0)\in \partial_iB$
and that $\gamma_i$ intersects every interior leaf of $\mathcal R$
at exactly one point. Let us parametrize
$\gamma_i$ so that for any $y>0$ the points $\gamma_0(y)$
and $\gamma_1(y)$ lie on the same leaf of $\mathcal R$.
Let $\{\Phi^t\}$ be an $\mathcal R$-flow.
Then one can define a continuous function
$$f_{\{\Phi^t\},\gamma_0,\gamma_1}:(0,\infty)\to {\mathbb R}$$
by setting that $f_{\{\Phi^t\},\gamma_0,\gamma_1}(y)$
is the time needed for the flow $\{\Phi^t\}$ to
drift from the point $\gamma_0(y)$ to $\gamma_1(y)$.
That is,
$$
\Phi^{t}(\gamma_0(y))=\gamma_1(y) \mbox{ for }t=
f_{\{\Phi^t\},\gamma_0,\gamma_1}(y).$$
Then the function $f_{\{\Phi^t\},\gamma_0,\gamma_1}$ belongs to the
space
$$E=\{f:(0,\infty)\to{\mathbb R}\mid f\ \ \mbox{is continuous and}
\ \ \lim_{y\to 0}f(y)=\infty\}.$$
Of course $f_{\{\Phi^t\},\gamma_0,\gamma_1}$ depends upon the
choice of $\gamma_i$. There are two ambiguities,
one coming from the parametrization of $\gamma_i$, and the other
coming from the positions of $\gamma_i$.
Let $H$ be the space of homeomorphisms of $[0,\infty)$ and $C$
the space of continuous functions on $[0,\infty)$.
Define an equivalence relation $\sim$ on $E$ by
$$
f\sim f'\Longleftrightarrow f'=f\circ h+k,\ \ \exists h\in H,\
\ \exists k\in C.$$
Then clearly the equivalence class $[f_{\{\Phi^t\},\gamma_0,\gamma_1}]$
does not depend on the choice of $\gamma_i$.
Moreover
it is an invariant of the topological conjugacy classes of
$\mathcal R$-flows. Therefore we get a well defined map
$$\iota:{\mathcal E}\to E/\sim.$$
It is easy to see that $\iota$ is injective.
The main result of \cite{L} states that $\iota$ is surjective
as well, i.\ e.\ any $f\in E$ is realized
as $f=f_{\{\Phi^t\},\gamma_0,\gamma_1}$ for some $\mathcal R$-flow $\{\Phi^t\}$
and curves $\gamma_i$.
The equivalence class $[f]$ of $f\in E$ is determined by how
$f(y)$ oscillates while it tends to $\infty$ as $y\to 0$.
For example any monotone function of $E$ belongs to a single
equivalence class, which corresponds to a standard Reeb
flow. By definition a {\em standard Reeb flow} is a flow which
is topologically conjugate to the flow $\{\varphi^t_P\}$
on $P$ given by
$$
\varphi_P^t(x,y)=(2^tx,2^{-t}y).$$
To measure the degree of oscillation of
$f\in E$, define a nonnegative continuous function
$f^*$ on $(0,1]$ by
$$
f^*(y)=\max(f\vert_{[y,1]})-f(y).
$$
Then we have the following easy lemma.
\begin{lemma} \label{easy}
(1) If $h\in H$, then $(f\circ h)^*=f^*\circ h$ in a neighbourhood of $0$.
(2) If $k\in C$ and $y\to 0$, then $(f+k)^*(y)-f^*(y)\to 0$.
(3) There is a sequence $\{y_n\}$ converging to $0$
such that $f^*(y_n)=0.$
\qed
\end{lemma}
For $f$ as above, define an invariant $\sigma(f)\in[0,\infty]$ by
$$\sigma(f)=\limsup_{y\to 0}f^*(y).$$
Lemma \ref{easy} implies that $\sigma(f)$ is an invariant of the class
$[f]$. We also have $\sigma(f)=0$ if and only if the class
$[f]$ is represented by a monotone function, that is, $[f]$
corresponds to a standard Reeb flow.
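As a sanity check (added here, with an assumed choice of transversals), one can compute $f$ for the model flow $\{\varphi_P^t\}$ on $P$. Take $\gamma_0(y)=(y,1)$ and $\gamma_1(y)=(1,y)$; both lie on the leaf $xy=y$ of the model Reeb foliation, and each meets every interior leaf exactly once. Then

```latex
% Transit time of the model flow between the two transversals:
$$
\varphi_P^t(y,1)=(2^t y,\,2^{-t})=(1,y)
\iff t=\log_2(1/y),
\qquad \mbox{so}\quad f(y)=\log_2(1/y).
$$
```

Since this $f$ is monotone, $\max(f\vert_{[y,1]})=f(y)$, hence $f^*\equiv 0$ and $\sigma(f)=0$, as it must be for a standard Reeb flow.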
\section{Proof of Theorem \ref{main}}
Fix once and for all a nonstandard Reeb homeomorphism $h$ of $B$ and assume that
$h$ is the time one of a flow $\{\Phi^t\}$.
Then it can be shown that the orbit foliation $\mathcal R$
of $\{\Phi^t\}$ is a bundle foliation in ${{\rm Int}\,}(B)$,
and therefore $\mathcal R$ is a Reeb foliation. Let $\gamma_i$ ($i=0,1$)
and $f=f_{\{\Phi^t\},\gamma_0,\gamma_1}$ be as in Sect.\ 2.
Notice that $\sigma(f)>0$ since $\{\Phi^t\}$ must be nonstandard.
Our plan is to define ``coordinates'' of $B$ via these data,
and study the behaviour of any other flow $\{\varphi^t\}$
whose time one is $h$ using these coordinates. But it is more
convenient to work with the quotient space by $h$.
So let
$$A=B/\langle h\rangle.$$
$A$ is a non Hausdorff 2-manifold with two boundary circles,
$\partial_iA=\partial_iB/\langle h\rangle$ ($i=0,1$).
Any neighbourhood of any point of $\partial_0A$ intersects
any neighbourhood of any point of $\partial_1A$.
Denote
$${{\rm Int}\,}(A)={{\rm Int}\,}(B)/\langle h\rangle, \ \ A_i={{\rm Int}\,}(A)\cup
\partial_iA, \ \ i=0,1.
$$
$A_i$ is a Hausdorff space homeomorphic to $S^1\times[0,\infty)$.
The flow $\{\Phi^t\}$, as well as the other flow $\{\varphi^t\}$,
induces an $S^1$-action on $A$, still denoted by the same letter.
The curve $\gamma_i$ induces a curve in $A_i$, denoted by the same
letter. One can use the parameter of the curve $\gamma_i$ as a
height function $p$ on $A$.
Recall that by the convention of Sect.2, the points $\gamma_0(y)$ and
$\gamma_1(y)$ lie on the same leaf of $\mathcal R$ if $y>0$.
Let us define a projection $p:A\to[0,\infty)$ as follows.
For any $\xi\in A$,
$$
p(\xi)=y\Longleftrightarrow \xi\mbox{ lies on the leaf of }\mathcal R
\mbox{ passing through }\gamma_0(y)\mbox{ or }\gamma_1(y). $$
Of course $p(\partial_iA)=\{0\}.$ The orbit foliation of the
$S^1$ action $\{\Phi^t\}$ is now {\em horizontal}.
On the other hand we do not know what the orbit foliation
of $\{\varphi^t\}$ looks like.
Next define a
projection $\pi_i:A_i\to S^1$ as follows.
For any point $\xi\in A_i$,
$$\pi_i(\xi)=x\Longleftrightarrow \xi=\Phi^x(\gamma_ip(\xi)).$$
Since
$$\xi=\Phi^{\pi_0(\xi)}\gamma_0p(\xi)
\ \mbox{ and }\
\xi=\Phi^{\pi_1(\xi)}\gamma_1p(\xi)
=\Phi^{\pi_1(\xi)}\Phi^{fp(\xi)}\gamma_0p(\xi),
$$
we have
\begin{equation} \label{important}
\pi_0-\pi_1=f\circ p \mbox{ mod }{\mathbb Z} \ \ \mbox{ \ \ on }{{\rm Int}\,}(A).
\end{equation}
There is a homeomorphism
$$\pi_0\times p:A_0\to S^1\times[0,\infty).
$$
If we use the arguments $x=\pi_0(\xi)$ and $y=p(\xi)$ on $A_0$,
the foliation by $\pi_0$ is {\em vertical} i.\ e.\ given
by the curves $x={\rm const}.$, while
the foliation by $\pi_1$, defined on ${{\rm Int}\,}(A)$, is given by the curves
$$
x=f(y)+\mbox{const.}\mbox{ mod }{\mathbb Z}.$$
Both foliations are invariant by the horizontal rotation
$\{\Phi^t\}$.
Now choose a decreasing sequence of positive numbers
$$
1>y_1>y_1'>y_2>y_2'>\cdots$$
such that
\begin{equation} \label{f^*}
f^*(y_n')>\sigma(f)/2,\ \ y_n=\min\{y>y_n'\mid f^*(y)=0\},
\ \ y_n\to0\ (n\to\infty).
\end{equation}
By Lemma \ref{easy} (3), the value of the function $f^*$ oscillates between
$0$ and around $\sigma(f)$. Therefore it is possible to choose such a
sequence.
Returning to $f$, (\ref{f^*}) implies
\begin{equation} \label{f}
f(y_n)=\max(f\vert_{[y_n',1]}), \ \ f(y_n')<f(y_n)-\sigma(f)/2.
\end{equation}
Thus the foliation by $\pi_1$ has oscillating leaves. It is
(topologically) tangent
to the vertical foliation at the level set $p^{-1}(y_n)$.
To prove Theorem \ref{main}, it suffices to show the existence
of a homeomorphism of $\partial_0A$ to $\partial_1A$ that conjugates
$\varphi^t\vert_{\partial_0A}$ to $\varphi^t\vert_{\partial_1A}$ for
any free $S^1$ action $\{\varphi^t\}$ on $A$. Define an $S^1$ action
$\{\varphi^t_i\}$ on $S^1$ ($i=0,1$) as the conjugate of
$\varphi^t\vert_{\partial_iA}$ by $\pi_i$, i.\ e.\ so as to satisfy
$$
\varphi_i^t\circ\pi_i=\pi_i\circ\varphi^t \mbox{ on } \partial_iA.$$
Then our goal is to show that $\varphi_0^t$ is conjugate to $\varphi_1^t$
by a homeomorphism $g$ of $S^1$ which can be chosen independently
of the $S^1$ action $\{\varphi^t\}$. But since $\{\Phi^t\}$ is one such
$S^1$ action and $\Phi_i^t$ is just a rotation,
the homeomorphism $g$ must be a rotation itself.
Besides (\ref{f}), we may assume
\begin{equation}\label{conv}
f(y_n)\to\alpha\ \mbox{ mod }\ {\mathbb Z}.
\end{equation}
Denote by $R_\alpha:S^1\to S^1$ the rotation by $\alpha$.
Now our goal is to show the following proposition.
\begin{proposition} \label{beta}
For any free $S^1$-action
$\{\varphi^t\}$ on $A$, we have
$$
R_\alpha\circ\varphi_1^t=\varphi_0^t\circ R_\alpha,\ \ \forall t.$$
\end{proposition}
This follows from the following lemma.
\begin{lemma} \label{alpha}
For any free $S^1$ action $\{\varphi^t\}$ on $A$
and for any nonnegative integer $k$
$$
R_\alpha\circ\varphi_1^{1/2^k}=\varphi_0^{1/2^k}\circ R_\alpha.$$
\end{lemma}
The rest of this section is devoted to the proof of Lemma \ref{alpha}.
We shall first prove it for $k=1$.
Define a function
$\delta:{{\rm Int}\,}(A)\to{\mathbb R}$ by
$$
\delta(\xi)=(\pi_0\varphi^{1/2}(\xi)-\pi_1\varphi^{1/2}(\xi))
-(\pi_0(\xi)-\pi_1(\xi)).$$
We shall study the function $\delta$ on the circle $p^{-1}(y_n)$.
For $\xi\in p^{-1}(y_n)$, we have by (\ref{important})
\begin{equation} \label{new}
\delta(\xi)=fp\varphi^{1/2}(\xi)-f(y_n).
\end{equation}
The position of $\varphi^{1/2}(\xi)$ for $\delta(\xi)>0$
is indicated by the dot in the figure. Notice that
it must be below $p^{-1}(y_n')$.
There exists a {\em horizontally going point} $\xi_n(0)$
in $p^{-1}(y_n)$, i.\ e.\ a point such that
$$
p\varphi^{1/2}(\xi_n(0))=p(\xi_n(0))=y_n.
$$
For, otherwise $\varphi^{1/2}$ will
displace the curve $p^{-1}(y_n)$, sending it, say below itself.
But then $\varphi^{1/2}\circ \varphi^{1/2}$ cannot be the identity.
Notice that $\varphi^{1/2}(\xi_n(0))$ is also a horizontally going point.
By (\ref{new}), we have
\begin{equation} \label{delta}
\delta(\xi_n(0))=0\ \mbox{ and }\ \delta\varphi^{1/2}(\xi_n(0))=0.
\end{equation}
Passing to a subsequence if necessary, we may assume that
$$
\pi_i(\xi_n(0))\to \alpha_i,\ \ i=0,1.
$$
Of course we have by (\ref{important})
$$
\alpha=\alpha_0-\alpha_1.$$
For any $x\in S^1$, define $\xi_n(x)$ to be the point
on $p^{-1}(y_n)$ such that
$$\pi_i(\xi_n(x))=\pi_i(\xi_n(0))+x\ \
(i=0,1).$$
Then we have
\begin{equation} \label{x}
\pi_i(\xi_n(x))\to x+\alpha_i,\ \mbox{ and }\
\pi_i\varphi^{1/2}(\xi_n(x))\to\varphi^{1/2}_i(x+\alpha_i).
\end{equation}
To see the second assertion, notice that by the first assertion and
the fact that $p(\xi_n(x))\to 0$,
the point $\xi_n(x)$ converges to a point $\xi_{\infty,i}(x)\in\partial_iA$.
Of course
$$\pi_i(\xi_{\infty,i}(x))=x+\alpha_i\ \mbox{ and } \
\varphi^{1/2}(\xi_n(x))\to\varphi^{1/2}(\xi_{\infty,i}(x)).
$$
Therefore
$$
\pi_i\varphi^{1/2}(\xi_n(x))\to\pi_i\varphi^{1/2}(\xi_{\infty,i}(x))
= \varphi_i^{1/2}\pi_i(\xi_{\infty,i}(x))=\varphi_i^{1/2}(x+\alpha_i),
$$
as is asserted.
Now we have
\begin{equation} \label{J}
\delta(\xi_n(x))=(\pi_0\varphi^{1/2}(\xi_n(x))-\pi_0(\xi_n(x)))
-(\pi_1\varphi^{1/2}(\xi_n(x))-\pi_1(\xi_n(x)))
\end{equation}
$$
\longrightarrow(\varphi_0^{1/2}(x+\alpha_0)-\alpha_0)-
(\varphi_1^{1/2}(x+\alpha_1)-\alpha_1)=J_0(x)-J_1(x),
$$
where
$$J_i=R_{\alpha_i}^{-1}\circ\varphi_i^{1/2}\circ R_{\alpha_i},$$
an involution on $S^1$.
Now (\ref{delta}) and (\ref{J}) imply that
$$
J_1(0)=J_0(0).$$
By some abuse of notation we denote by $<$ the positive
circular order for two nearby points of $S^1$.
Our goal is to show that $J_0=J_1$. Assume for contradiction
that this is not the case.
Since $J_i$ is an involution, there is a point $0<x_0<J_i(0)$
such that $J_0(x_0)\neq J_1(x_0)$.
Choosing the point $x_0$ as near $0$ as we wish, we can assume
\begin{equation} \label{1/4}
\abs{J_1(x)-J_0(x)}<\sigma(f)/4\ \ \mbox{ if }\
0\leq x\leq x_0.
\end{equation}
There are two cases, one $J_0(x_0)>J_1(x_0)$
and the other $J_0(x_0)<J_1(x_0)$. But
the latter case can be
reduced to the former case by replacing $0$ by $J_0(0)=J_1(0)$
and $x_0$ by $J_0(x_0)$.
Notice that the image by $\pi_i$ of the horizontally
going points $\varphi^{1/2}(\xi_n(0))$ converge to
$J_0(0)=J_1(0)$. This is all we need in the argument that follows,
and therefore we can replace $0$ by $J_0(0)=J_1(0)$.
So we assume
\begin{equation} \label{case1}
J_0(x_0)> J_1(x_0).
\end{equation}
By (\ref{J}), (\ref{1/4}) and (\ref{case1}), we have for any large $n$,
\begin{equation} \label{1/3}
\abs{\delta(\xi_n(x))}<\sigma(f)/3 \mbox{ if } 0\leq x\leq x_0\ \mbox{ and }\
\delta(\xi_n(x_0))>0.
\end{equation}
Let
$$W_n=\{\xi\in A\mid fp(\xi)>f(y_n)-\sigma(f)/3\}$$
and let $V_n$ be the connected component of $W_n$ that
contains $p^{-1}(y_n)$. The subset $V_n$ is a horizontal open annulus
disjoint from $p^{-1}(y_n')$. See the figure.
By (\ref{f}),
\begin{equation}\label{V_n}
\xi\in V_n\Longrightarrow fp(\xi)\leq f(y_n).
\end{equation}
Now we have by (\ref{new}) and (\ref{1/3})
$$
fp\varphi^{1/2}(\xi_n(x))-f(y_n)
=\delta(\xi_n(x))>-\sigma(f)/3
$$
if $0\leq x\leq x_0$. This shows that $\varphi^{1/2}(\xi_n(x))$
is contained in $W_n$. But since $\xi_n(0)$
is horizontally going, $\varphi^{1/2}(\xi_n(0))$ lies
in $V_n$. Moreover the assignment
$$x\mapsto\varphi^{1/2}(\xi_n(x))$$
is continuous.
Thus $\varphi^{1/2}(\xi_n(x))$ lies in $V_n$ for any $0\leq x\leq x_0$.
In particular
$$
\varphi^{1/2}(\xi_n(x_0))\in V_n\ \mbox{ and }\
fp\varphi^{1/2}(\xi_n(x_0))\leq f(y_n).
$$
But the assumption $\delta(\xi_n(x_0))>0$ of (\ref{1/3}) implies
$$
fp\varphi^{1/2}(\xi_n(x_0))>f(y_n).$$
The contradiction shows that $J_0=J_1$, i.\ e.\
$$
\varphi_1^{1/2}\circ R_{\alpha}=R_{\alpha}\circ\varphi_0^{1/2}$$
for $\alpha=\alpha_0-\alpha_1$, as is required.
Now $J_0=J_1$ implies that for any large $n$
$$\xi\in p^{-1}(y_n)\Longrightarrow \abs{\delta(\xi)}<\sigma(f)/3,$$
that is, any point in $p^{-1}(y_n)$
is {\em nearly horizontally going}, meaning that it is
mapped by $\varphi^{1/2}$ into $V_n$.
To show Lemma \ref{alpha} for $k=2$,
first choose a horizontally going point $\xi_n'(0)\in
p^{-1}(y_n)$
for $\varphi^{1/4}$. Its image $\varphi^{1/4}(\xi_n'(0))$ is not
horizontally going, but nearly horizontally going for $\varphi^{1/4}$.
Passing to a subsequence, we may assume
$$
\pi_i(\xi_n'(0))\to \alpha_i'.
$$
Clearly we have
$$
\alpha=\alpha_0'-\alpha_1'.$$
The point $\xi'_n(x)$ in $p^{-1}(y_n)$ is defined
just as before by
$$\pi_i(\xi'_n(x))=\pi_i(\xi_n'(0))+x.$$
Define a function
$\delta':{{\rm Int}\,}(A)\to{\mathbb R}$ by
$$
\delta'(\xi)=(\pi_0\varphi^{1/4}(\xi)-\pi_1\varphi^{1/4}(\xi))
-(\pi_0(\xi)-\pi_1(\xi)),$$
and let
$$
J_i'=R_{\alpha_i'}^{-1}\circ\varphi_i^{1/4}\circ R_{\alpha_i'}.$$
Then we have
$$
\delta'(\xi'_n(x))\to J_0'(x)-J_1'(x).$$
By the previous step we have shown
$$
\varphi_1^{1/2}=R_\alpha^{-1}\circ\varphi_0^{1/2}\circ R_\alpha,$$
which implies
$(J_0')^2=(J_1')^2$.
This enables us to find a point $x_0'$ playing the same role as
$x_0$ in the previous argument such that $J_0'(x_0')>J_1'(x_0')$
either near $0$ or near $J_0'(0)=J_1'(0)$.
In the latter case the point $\varphi^{1/4}(\xi_n'(0))$ is
only nearly horizontally going
but this is enough for our purpose.
By the same argument as before, we can show $J_0'=J_1'$.
The proof for general $k$ is by induction.
\section{Examples}
We shall construct two examples of $f\in E$.
We consider the corresponding flow $\{\Phi^t\}$ and construct the
non Hausdorff space $A$
as in Sect.\ 3. Properties of the examples are stated in terms of
the $S^1$ action on $A$.
All the notations of Sect.\ 3
will be used.
\begin{example} \label{e1}
There exists $f\in E$ with $\sigma(f)=1$ satisfying the
following property:
For any $S^1$ action $\{\psi^t\}$ on $S^1$, there
is an $S^1$ action $\{\varphi^t\}$ on $A$ such that
$\varphi_i^t=\psi^t$ ($i=0,1$).
\end{example}
The construction of $f$ goes as follows.
Let
$$
y_1>y_1'>y_2>y_2'\cdots$$
be a sequence converging to 0. Define $f$ such that
$$
f(y_n)=n,\ \mbox{ and }\ f(y_n')=n-1$$
and that $f$ is monotone on the complementary
intervals.
On the circles $p^{-1}(y_n)$ and $p^{-1}(y_n')$,
$\pi_0=\pi_1$ mod ${\mathbb Z}$. The desired flow is to preserve these
circles and to be the conjugate of $\psi^t$
by $\pi_i$ there. The complementary regions are open annuli, and
there the foliations by $\pi_0$ and $\pi_1$ are transverse,
thanks to the monotonicity assumption on $f$. Therefore
one can define $\varphi^t$ so as to satisfy
$
\pi_i\circ\varphi^t=\psi^t\circ\pi_i$ ($i=0,1$).
\begin{example}
There exists $f\in E$ such that any $S^1$ action $\{\varphi^t\}$ on $A$
satisfies $\varphi_i^t=R_t$ ($i=0,1$).
\end{example}
Using the sequence of Example \ref{e1}, define $f\in E$ such that
$$
f(y_n)=n\beta\ \mbox{ and }\ f(y_n')=n\beta-1,$$
for some irrational $\beta>0$
and that $f$ is monotone on the complementary intervals.
Then
$$
\pi_0-\pi_1=n\beta\ \mbox{ on }\ p^{-1}(y_n).$$
Let $\{\varphi^t\}$ be an arbitrary $S^1$ action on $A$.
Then any point $\tau\in S^1$ is an accumulation point of
$\pi_0(\xi_n(0))-\pi_1(\xi_n(0))$. The argument of Sect.\ 3
shows that
$$\varphi_1^t\circ R_{\tau}=R_{\tau}\circ \varphi_0^t,\ \ \forall t,\tau.
$$
This clearly shows that
$$
\varphi_0^t=\varphi_1^t=R_t.$$
\end{document}
\begin{document}
\title{Minimal diagrams of virtual links: II}
\abstract{In the present paper we bring together minimality
conditions proposed in papers \cite{MaArx,MaArx2} and present some
new minimality conditions for classical and virtual knots and
links.}
\section{The main result}
This paper is a sequel of \cite{MaArx,MaArx2}. We deal with
virtual link diagrams and test whether these diagrams are minimal
with respect to the number of classical crossings. All necessary
definitions can be found in \cite{Ma,MaArx,MaArx2}.
Throughout the text, all virtual links are thought to be
non-split, unless otherwise specified. In any minimality theorem
for links we assume that the link in question has no split
diagrams. All virtual links are thought to be orientable in the
sense of atoms, see \cite{Ma,MaArx}.
First, let us formulate the two main theorems from \cite{MaArx}
and \cite{MaArx2}.
The main Theorem from \cite{MaArx} says the following:
\begin{thm} If a virtual link diagram $L'$ is $1$-complete and $2$-complete then
it is minimal. \end{thm}
The main Theorem from \cite{MaArx2} says the following:
\begin{thm}
Suppose the diagram $K$ of a classical knot is good. Then it is
minimal in the classical category. In other words, if the diagram
$K$ has $n$ crossings then for any classical diagram representing
the same knot the number of crossings is at least $n$.
\end{thm}
In the sequel, we shall refer to these two theorems as ``The first
Theorem'' and ``The second Theorem''. The first theorem says that
if the span of the Kauffman polynomial for a virtual link diagram
is ``as large as it should be'' and the genus of the corresponding
atom is ``as large as it should be'' then the diagram is minimal.
The second theorem deals only with classical knots (not virtual
knots, and not links) and it has only one condition. This
condition says that no cell of the corresponding atom touches
itself at a crossing. This condition is much stronger than the first
minimality condition used in the first Theorem, saying that {\em
the leading and the lowest term of the Kauffman state sum
expansion do not vanish}. The condition of the second Theorem says
that each of the two extreme coefficients equals precisely one,
and that only one non-trivial summand contributes to each of
them.
In fact, the condition of the second Theorem allows one to consider
the cablings of the initial diagram $L$, the diagrams $D_{m}(L)$
where $m$ is a positive integer. It turns out that the main
condition of the second Theorem (that the diagram is good) is
hereditary: if it holds for $L$, then it should hold for
$D_{m}(L)$. This allows one to establish minimality by passing to
cablings and some more tricks (see \cite{MaArx2}). Here we do not
worry about the thickness of the Khovanov homology \cite{Shu} of
the corresponding atom.
The first condition of the first Theorem (saying that the leading
and the lowest terms in the Kauffman state-sum expansion do not
vanish) is {\bf
not} hereditary: it may hold for $L$, but not for
$D_{m}(L),m\neq 1$, and usually it {\bf does not}. For instance,
it does not hold in the classical case. Thus, to establish the
minimality of a (virtual) knot or link diagram we have to handle
the genus thus adding one more condition on the Khovanov homology.
In the present paper, we wish to formulate a stronger statement
explaining the connection between the first Theorem and the second
Theorem, namely, we shall deal with the genus of links $D_{m}(L)$,
and $D_{m}(L{\#}{\bar L})$, where ${\bar L}$ is the mirror image
of the diagram $L$.
First, we deal with classical knots (to have the connected sum
operation well defined). At the end of the present paper, we shall
prove a theorem on virtual links.
If some auxiliary theorem or lemma admits a formulation for
virtual links, we formulate it for the general case though we
might need it only for the case of classical knots.
Suppose we have a virtual link diagram $L$ with $n$ classical
crossings such that the corresponding atom has genus $g$ and Euler
characteristic $\chi=2-2g$. Then the maximal possible span for the
Kauffman bracket of $L$ is estimated as (\cite{MaArx}):
\begin{equation}
span\langle L\rangle \le 4n+2 (\chi-2).
\end{equation}
Also, the Khovanov homology has thickness $T(L)$ defined by the
diagonals it lives between, see \cite{MaArx2}.
In \cite{MaArx2} (using a refined version of the result from
\cite{Weh}) we demonstrated that
\begin{equation}
T(L)\le 2+g(L).
\end{equation}
We shall refer to the corresponding equalities as ``the span of
the Kauffman bracket is as large as it should be'' and ``the
Khovanov homology is as thick as it should be''.
Let $K$ be a classical knot diagram with $n$ crossings, $N=2n$.
The usual estimate for $span\langle D_{m}\rangle$ is
$2(m^{2}+m)N+2m\chi(K{\#}{\bar K})-4$. This holds if neither the
leading term nor the lowest term of the Kauffman state-sum
expansion vanishes. The following asymptotic theorem says that if
this length is smaller than expected, but asymptotically the
difference between the real length and the estimate is not very
large, then the initial diagram is minimal. Namely, we have
\begin{thm}[The asymptotic theorem]
Suppose that for some $\varepsilon>0$ there is an infinite sequence
$i_{1}<i_{2}<\dots<i_{m}<\dots$ of positive integers such that for
any positive integer $m$ we have
$span\langle D_{i_m}(K{\#}{\bar K})\rangle\ge
2({i_m}^{2}+{i_m})N+2{i_m}\chi(K{\#}{\bar
K})-4-(4-\varepsilon)({i_{m}}^{2}+i_{m})$.
Then the diagram $K$ is minimal in the classical category.
\label{tmgy}
\end{thm}
We have the following
\begin{lm}
If a virtual link diagram $L$ is good then $T(L)\ge
g(L)$.\label{lmy}
\end{lm}
\begin{proof}
Indeed, one should just take the $A$-state with $v_{-}$ associated
to all circles and the $B$-state with $v_{+}$ associated to all
circles. The property that the diagram is good guarantees that the
first chain is a cycle, whence the second one is not a boundary.
Recalling the definition of the atom genus, we get the required
estimate.
\end{proof}
Now, having a good diagram $L$ of a classical knot, we see that
all diagrams $D_{m}(L{\#}{\bar L})$ are also good. By Lemma
\ref{lmy} we see that the diagram $L$ obviously satisfies the
condition of Theorem \ref{tmgy}. Here we may take a constant for
$\zeta_{m}$. Thus, the diagram $L$ is minimal.
So, the estimate for the thickness of the Khovanov homology for a
good diagram $L$ is that it is in between $2+g(L)$ and $g(L)$.
This immediately results in the following
\begin{thm}
Let $L$ be a good virtual link diagram with $n$ classical
crossings. Then any virtual link diagram $L'$ equivalent to $L$
has at least $n-2$ classical crossings.
\end{thm}
\begin{proof}
Indeed, suppose we have a diagram $L'$ equivalent to $L$ with $n'$
classical crossings. Then the thickness of $L'$ is at least
$g(L)$, so the genus $g(L')$ is at least $g(L)-2$, thus,
$\chi(L')\le \chi(L)+4$. From this we see that
\begin{equation}
4n+2(\chi(L)-2)=span \langle L\rangle= span\langle L'\rangle \le
4n'+2(\chi(L)+2),
\end{equation}
so $4(n-n')\le 8$, which completes the proof.
\end{proof}
We are still unable to use the trick with connected summation with
mirror image as in \cite{MaArx2} and prove the exact result (in
the unframed category). However, the reasoning with Khovanov
homology gives a minimality estimate between $n-2$ and $n$
classical crossings.
\begin{proof}[Proof of Theorem \ref{tmgy}]
The proof goes along the same lines as that of the second Theorem
\cite{MaArx}.
Indeed, fix a positive integer $m$. Suppose there is a classical
diagram $K'$ having $n'$ crossings ($n'<n$) and representing the
same classical knot as $K$.
The diagrams $D_{i_m}(K{\#}{\bar K})$ and $D_{i_m}(K'{\#}{\bar
K}')$ generate isotopic knots. Denote $D_{i_m}(K{\#}{\bar K})$ by
$D_{m}$ and denote $D_{i_m}(K'{\#}{\bar K}')$ by $D'_{m}$. By
definition we have $\langle D_{m}\rangle=\langle D'_{m}\rangle$.
Also, set $\chi=\chi(K{\#}{\bar K}),\chi'=\chi(K'{\#}{\bar K'})$.
We have:
\begin{equation}
span \langle D_{m}\rangle \ge 4 {i_m}^{2}N+2(\chi_{m}-2) -
(4-\varepsilon)(i_{m}^{2}+i_{m}),
\end{equation}
where $\chi_{m}=\chi(D_{m})$. The atom $V(D_{m})$ has
${i_m}^{2}N$ vertices, $2{i_m}^{2}N$ edges and ${i_m}\Gamma$
$2$-cells, where $\Gamma=N+\chi$ is the number of the $2$-cells of
the atom $K{\#}{\bar K}$. Thus,
\begin{equation}
span \langle D_{m}\rangle \ge 2 ({i_m}^2+{i_m})N+2{i_m}
\chi-4-(4-\varepsilon)(i_{m}^{2}+i_{m}).\label{ra1}
\end{equation}
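The passage to inequality (\ref{ra1}) substitutes the Euler characteristic $\chi_{m}={i_m}^{2}N-2{i_m}^{2}N+{i_m}(N+\chi)$, computed from the vertex, edge and cell counts above, into the span bound. This algebra can be verified symbolically; the sketch below uses the sympy library and is only a check, not part of the argument (the common term $(4-\varepsilon)(i_m^2+i_m)$ is omitted since it appears unchanged on both sides).

```python
import sympy as sp

i, N, chi = sp.symbols('i N chi')

# Euler characteristic of the atom V(D_m): vertices - edges + 2-cells,
# with i^2*N vertices, 2*i^2*N edges and i*(N + chi) two-cells.
chi_m = i**2 * N - 2 * i**2 * N + i * (N + chi)

# Span bound 4*i^2*N + 2*(chi_m - 2) before simplification.
lhs = 4 * i**2 * N + 2 * (chi_m - 2)

# The simplified form appearing in (ra1): 2*(i^2 + i)*N + 2*i*chi - 4.
rhs = 2 * (i**2 + i) * N + 2 * i * chi - 4

assert sp.simplify(lhs - rhs) == 0
```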
For $D'_{m}$ we have:
\begin{equation}
span \langle D'_{m}\rangle \le 2 ({i_m}^2+{i_m})N+2{i_m}
\chi'-4.\label{ra2}
\end{equation}
Thus, taking into account $\langle D'_{m}\rangle=\langle
D_{m}\rangle$, we get (where $N'=2n'$)
\begin{equation}2(i_{m}^{2}+ i_{m})(N-N')\le
(4-\varepsilon)(i_{m}^{2}+i_{m})+2{i_m}(\chi'-\chi).
\end{equation}
This leads to a contradiction since $N-N'\ge 2$.
\end{proof}
\begin{re}
If we deal with classical link diagrams then for any link diagram
$L$ which is not good and for any positive integer $m>1$, the
leading coefficient in the Kauffman state-sum expansion for
$\langle D_{m}(L)\rangle$ is equal to zero. Thus, we cannot apply
Theorem \ref{tmgy} to the classical case directly.
In the virtual case, there are some diagrams $L$ which are not
good, but for every $m>1$, the state-sum expansion of the Kauffman
polynomial for $D_{m}(L)$ does not vanish. But here we cannot
apply Theorem \ref{tmgy} directly because the connected summation
for virtual diagrams is not well defined.
\end{re}
Besides the asymptotic theorem (which works in the case of
classical knots only, because it uses the connected sum
construction), we also have the following generalisation of the
First theorem:
\begin{re}
Theorem \ref{tmgy} works also in the case of {\em long} virtual
knots. The proof is literally the same because we have a
well-defined connected summation for such knots.
In the long virtual category, we have a lot of examples where the
first minimality condition is hereditary. Such examples obviously
give us minimal diagrams by Theorem \ref{tmgy}.
Also, analogous theorems remain true for {\em tangles and braids}.
We shall discuss it in separate papers.
\end{re}
\end{document}
\begin{document}
\title
{Building partially entangled states
with Grover's amplitude amplification process}
\author{Hiroo Azuma\thanks{[email protected]}\\
Mathematical Engineering Division,\\
Canon Research Center,\\
5-1, Morinosato-Wakamiya, Atsugi-shi,\\
Kanagawa, 243-0193, Japan}
\date{January 12, 2000}
\maketitle
\begin{abstract}
We discuss how to build some partially entangled states
of $n$ two-state quantum systems (qubits).
The optimal partially entangled state
with a high degree of symmetry is
considered to be
useful for overcoming a shot noise limit
of Ramsey spectroscopy
under some decoherence.
This state is
invariant under permutation of any two qubits
and inversion
between the ground state $|0\rangle$
and an excited state $|1\rangle$
for each qubit.
We show that
using selective phase shifts in certain basis vectors
and Grover's inversion about average operations,
we can construct this high symmetric entangled state
by $(\mbox{polynomial in $n$})\times 2^{n/2}$
successive unitary transformations
that are applied on two or three qubits.
We can apply our method to build more general entangled states.
\end{abstract}
\section{Introduction}
\label{INTRO}
Recently rapid progress
in quantum computation
and quantum information theory
has been made\cite{Feynman}\cite{Deutsch-Jozsa}.
In these fields, properties of quantum mechanics,
namely superposition,
interference,
and entanglement, are handled skillfully.
After Shor's algorithm for factorization
and discrete logarithms
and
Grover's algorithm for search problems
appeared
\cite{Simon}\cite{Shor}\cite{Ekert-Jozsa}\cite{Grover}\cite{Boyer-Brassard},
many researchers have been proposing methods
for the realization of quantum computation and
developing quantum algorithms.
On the other hand, in the fields of quantum information theory,
it is recognized that entangled states play important roles
for robustness against decoherence\cite{Bennett-Fuchs}.
As an application of these results,
it is considered to overcome
the quantum shot noise limit
by using entangled states of
$n$ two-level systems (qubits)
for Ramsey
\\
spectroscopy\cite{Wineland92}\cite{Wineland94}.
(M.~Kitagawa et al.
gave a similar idea,
though an experimental scheme that they discussed
was not Ramsey spectroscopy
of qubits\cite{Kitagawa}.)
When we can neglect decoherence of the system
caused by an environment,
the maximally entangled state provides
an improvement in frequency measurement.
In this case,
the fluctuation of frequency is decreased by $1/\sqrt{n}$.
(In this paper, for example,
we consider
$(1/\sqrt{2})
(|0\cdots 0\rangle +|1\cdots 1\rangle)$
to be one of the maximally entangled states.
Entanglement for $n(\geq 3)$-qubit system
has not been defined clearly\cite{Bennett-DiVincenzo}.)
However,
if the decoherence is considered,
the maximally entangled state provides
the same resolution
that an uncorrelated system provides\cite{Huelga}.
S.~F.~Huelga et al.
proposed
using a partially entangled state
which has a high degree of symmetry.
This state is invariant under permutation of any two qubits
and inversion between the ground state $|0\rangle$
and an excited state $|1\rangle$ for each qubit.
If we prepare the high symmetric partially entangled state
optimized numerically,
it provides high resolution in comparison
with the maximally entangled states and uncorrelated states.
Carrying out an experiment of Ramsey spectroscopy
with the optimal high symmetric partially entangled state,
we have to prepare it as an initial state as quickly as possible,
before the decoherence time limit.
In this paper,
we study how to construct this state efficiently.
We estimate time to prepare it
by the number of elementary quantum gates
that are unitary transformations
applied on two or three qubits
\cite{Shor}\cite{Ekert-Jozsa}\cite{Barenco}.
The number of gates is considered to be
proportional to the amount of time for building the state.
We show it takes
$O((n^{3}\log_{2}n)\times 2^{n/2})$ steps at most
to build it.
(It was shown that any unitary transformation
$U$ $(\in \mbox{\boldmath $U$}(2^{n}))$
can be constructed from $O(n^{3}2^{2n})$ elementary gates
at most\cite{Barenco}.)
Furthermore, our method can be applied to build
more general entangled states.
Before discussing how to build partially entangled states,
we try to construct the maximally entangled state
with $n$ qubits from an initial state $|0\cdots 0\rangle$.
To do it,
we need two unitary transformations for elementary gates.
They are $H^{(j)}$ (the Walsh-Hadamard transformation)
which operates on the $j$-th qubit,
and
$\bigwedge_{1}^{(j,k)}(\sigma_{x})$
which operates on the $j$-th and $k$-th qubits:
\[
H^{(j)}=\frac{1}{\sqrt{2}}
\begin{array}{ccc}
\langle 0| & \langle 1| &\\
\svline{1} & 1 & \svline{|0\rangle}\\
\svline{1} & -1 & \svline{|1\rangle}
\end{array},
\quad\quad
\mbox{$\bigwedge$}_{1}^{(j,k)}(\sigma_{x})=
\begin{array}{ccccc}
\langle 00| & \langle 01| & \langle 10| & \langle 11| &\\
\svline{1} & 0 & 0 & 0 & \svline{|00\rangle}\\
\svline{0} & 1 & 0 & 0 & \svline{|01\rangle}\\
\svline{0} & 0 & 0 & 1 & \svline{|10\rangle}\\
\svline{0} & 0 & 1 & 0 & \svline{|11\rangle}
\end{array}.
\]
Because $\bigwedge_{1}^{(j,k)}(\sigma_{x})$
transforms $|x,y\rangle$ $(x,y\in \{0,1\})$
to $|x,x\oplus y\rangle$
(applying $\sigma_{x}$ on $k$-th qubit
according to the $j$-th qubit),
it is sometimes called the controlled-NOT gate.
Applying
$\bigwedge_{1}^{(1,n)}(\sigma_{x})\cdots
\bigwedge_{1}^{(1,2)}(\sigma_{x})H^{(1)}$
on
$|0\rangle_{1}\otimes\cdots\otimes |0\rangle_{n}$,
we can obtain the maximally entangled state,
$(1/\sqrt{2})(|0\cdots 0\rangle + |1\cdots 1\rangle)$.
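The construction just described is small enough to verify by direct matrix multiplication. The sketch below (using numpy; the gate-embedding helpers and the qubit-ordering convention are ours, introduced only for this check) applies $H^{(1)}$ and then $\bigwedge_{1}^{(1,t)}(\sigma_{x})$ for $t=2,\ldots,n$ to $|0\cdots 0\rangle$:

```python
import numpy as np

def ghz_state(n):
    """Build (|0...0> + |1...1>)/sqrt(2) from |0...0> by applying
    H on qubit 1 followed by CNOTs from qubit 1 to qubits 2..n."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    def on_qubit(j, U):
        # Embed a single-qubit gate U acting on qubit j (1-indexed,
        # qubit 1 is the most significant bit of the basis index).
        M = np.array([[1.0]])
        for k in range(1, n + 1):
            M = np.kron(M, U if k == j else I)
        return M

    def cnot(c, t):
        # Controlled-NOT with control qubit c and target qubit t.
        dim = 2 ** n
        M = np.zeros((dim, dim))
        for x in range(dim):
            bits = [(x >> (n - k)) & 1 for k in range(1, n + 1)]
            if bits[c - 1] == 1:
                bits[t - 1] ^= 1
            y = sum(b << (n - k) for k, b in zip(range(1, n + 1), bits))
            M[y, x] = 1
        return M

    psi = np.zeros(2 ** n)
    psi[0] = 1.0                      # |0...0>
    psi = on_qubit(1, H) @ psi        # H on qubit 1
    for t in range(2, n + 1):         # CNOT(1, t) for t = 2..n
        psi = cnot(1, t) @ psi
    return psi
```

For $n=3$, `ghz_state(3)` has amplitude $1/\sqrt{2}$ on $|000\rangle$ and $|111\rangle$ and zero on all other basis vectors.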
But, building partially entangled states
like
\begin{eqnarray}
|\psi_{4}\rangle
&=&a_{0}|0\rangle_{s}+a_{1}|1\rangle_{s}+a_{2}|2\rangle_{s} \nonumber \\
&\equiv&a_{0}(|0000\rangle +|1111\rangle ) \nonumber \\
&&\quad
+a_{1}(|0001\rangle +|0010\rangle +|0100\rangle+|1000\rangle \nonumber \\
&&\quad\quad\quad\quad
+|1110\rangle +|1101\rangle +|1011\rangle+|0111\rangle ) \nonumber \\
&&\quad
+a_{2}(|0011\rangle +|0101\rangle +|0110\rangle+|1001\rangle
+|1010\rangle +|1100\rangle )
\label{Psi4Form}
\end{eqnarray}
(this is an example of the $4$-qubit high symmetric
partially entangled state),
where $a_{0}$, $a_{1}$, $a_{2}$ are given (real)
coefficients,
and $|k\rangle_{s}$ is an equally weighted superposition of
$k$ or $(4-k)$ excited qubits,
is difficult.
It is hard to decompose a unitary transformation
that transforms $|0000\rangle$ to $|\psi_{4}\rangle$
into local operations
like $H^{(j)}$ or $\bigwedge_{1}^{(j,k)}(\sigma_{x})$.
This is because we don't know a systematic method
for adjusting coefficients of basis vectors.
This matter is a motivation of this paper.
This paper is arranged as follows.
In \S\ref{HiSymParEntSta},
we explicitly describe
the high symmetric partially entangled states.
We make preparations for our method of building them.
In \S\ref{MakMasVecBeWeiEq},
we introduce a unitary transformation
that makes two sets of basis vectors,
classified by their coefficients,
equally weighted.
We derive a sufficient condition
for finding an appropriate parameter
that characterizes this transformation.
In \S\ref{CaseWhSufCondNotSat},
we develop a technique which transforms
the state that doesn't satisfy the sufficient
condition derived in \S\ref{MakMasVecBeWeiEq}
into a state
that satisfies it.
This technique is an application of
Grover's amplitude amplification process\cite{Grover}\cite{Boyer-Brassard}.
In \S\ref{WholeProc},
we show the whole procedure for building
the high symmetric entangled states
and give a sketch of
its implementation.
We estimate the whole number of elementary gates
of our method.
We also show that we can use our procedure
for building more general entangled states.
In \S\ref{DISCUSSION},
we give a brief discussion.
In {\S}Appendix,
we construct networks of quantum gates
for our method concretely,
and derive a variation of coefficients
of the state under the transformation
discussed in \S\ref{CaseWhSufCondNotSat}.
\section{High symmetric partially entangled states}
\label{HiSymParEntSta}
In this section,
we define high symmetric partially entangled states
explicitly.
We also make preparations
for our method of building them,
defining an initial state,
giving some unitary transformations
used frequently,
and so on.
The partially entangled state
which has a high degree of symmetry
is given by
\begin{equation}
|\psi_{n}\rangle=\sum_{k=0}^{\lfloor n/2 \rfloor} a_{k}|k\rangle_{s}
\quad\quad
\mbox{for $n\geq 2$},
\label{PsiNForm}
\end{equation}
where
$\lfloor n/2 \rfloor$ is the maximum integer
that doesn't exceed $n/2$\cite{Huelga}.
$\{a_{k}\}$ are given real coefficients.
We assume $a_{k}\geq 0$ for
$k=0,\cdots,\lfloor n/2 \rfloor$
for a while.
$|k\rangle_{s}$ is an equally weighted superposition of
$k$ or $(n-k)$ excited qubits,
as shown in
$|\psi_{4}\rangle$ of Eq.~(\ref{Psi4Form}).
This state has symmetric properties,
invariance under permutation of any two qubits,
and
invariance under inversion between $|0\rangle$ and $|1\rangle$
for each qubit.
A main aim of this paper is
to show a procedure for building
$|\psi_{n}\rangle$ efficiently.
We emphasize that
$\{a_{k}\}$ of $|\psi_{n}\rangle$ in
Eq.~(\ref{PsiNForm}) are given
and numerically optimized to
realize high precision for Ramsey spectroscopy.
We make some preparations.
To build $|\psi_{n}\rangle$,
we prepare an $n$-qubit register in a uniform superposition
of $2^{n}$ binary states,
$(1/\sqrt{2^{n}})\sum_{x\in\{0,1\}^{n}}|x\rangle$
($\{0,1\}^{n}$ represents
a set of all $n$-bit binary strings),
and apply unitary transformations on the register successively.
(Initializing the register to $|0\cdots 0\rangle$ and
applying $H^{(j)}$ $(1\leq j \leq n)$ on each qubit,
we can obtain the uniform superposition.)
In our method,
we use two kinds of transformations.
One of them is a selective phase shift transformation
in certain basis vectors.
It is given by the
$2^{n}\times 2^{n}$ diagonal matrix form,
\begin{equation}
R_{xy}=
\left\{\begin{array}{ll}
\exp(i\theta_{x}), & \mbox{for $x=y$} \\
0, & \mbox{for $x\neq y$}
\end{array}
\right.,
\label{matrix-phaseshift}
\end{equation}
where subscripts $x$, $y$ represent the basis
vectors $\{|x\rangle|x\in\{0,1\}^{n}\}$
and $0\leq\theta_{x} <2\pi$ for $\forall x$.
(Although a general phase shift
transformation in the form of Eq.~(\ref{matrix-phaseshift})
takes a number of elementary gates
exponential in $n$ at most,
we use only special transformations
that need polynomial steps.
This matter is discussed
in \S\ref{WholeProc}
and {\S}Appendix~A.)
The other is
Grover's inversion about average operation
$D$\cite{Grover}.
The $2^{n}\times 2^{n}$ matrix representation of
$D$ is given by
\begin{equation}
D_{xy}=
\left\{\begin{array}{ll}
-1+ 2^{-n+1}, & \mbox{for $x=y$} \\
2^{-n+1}, & \mbox{for $x\neq y$}
\end{array}
\right..
\label{matrix-GroversD}
\end{equation}
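Two facts about $D$ are used below: it reflects each amplitude about the average of all amplitudes (hence the name "inversion about average"), and it is its own inverse, as noted later in this section. Both are immediate from Eq.~(\ref{matrix-GroversD}) and can be checked numerically; the sketch below uses numpy and is purely illustrative.

```python
import numpy as np

n = 3
N = 2 ** n

# D = -I + 2^{-n+1} J, with J the all-ones matrix, per the matrix form above.
D = -np.eye(N) + 2.0 ** (-n + 1) * np.ones((N, N))

# D is an involution: applying it twice gives the identity.
assert np.allclose(D @ D, np.eye(N))

# "Inversion about average": each amplitude a_x is sent to 2*mean(a) - a_x.
v = np.random.default_rng(0).normal(size=N)
assert np.allclose(D @ v, 2 * v.mean() - v)
```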
Because we use only unitary transformations
and never measure any qubits,
we can regard our procedure
for building $|\psi_{n}\rangle$
as a succession of unitary transformations.
For simplicity,
we consider the chain of transformations in reverse,
as a
transformation from $|\psi_{n}\rangle$ to the uniform superposition
instead of one from the uniform superposition
to $|\psi_{n}\rangle$.
Fortunately, an inverse operation of
the selective phase shift on certain basis vectors
is also the phase shift,
and an inverse operation of $D$ defined in Eq.~(\ref{matrix-GroversD})
is also $D$.
In the rest of this paper,
for simplicity,
we describe the procedure
reversely from $|\psi_{n}\rangle$ to the uniform superposition.
(Building $|\psi_{n}\rangle$ actually,
we carry out the inversion of the procedure.)
\section{Making basis vectors
be weighted equally}
\label{MakMasVecBeWeiEq}
At first, we show how to transform $|\psi_{2}\rangle$
to the uniform superposition as an example.
After that,
we consider a case of $|\psi_{n}\rangle$ for $n\geq 3$.
Writing $|\psi_{2}\rangle$ as
\[
|\psi_{2}\rangle
=
a_{0}(|00\rangle+|11\rangle)
+a_{1}(|01\rangle+|10\rangle)
\quad\quad
\mbox{where $a_{0}\geq0$, $a_{1}\geq0$
and
$a_{0}^{2}+a_{1}^{2}=1/2$},
\]
we apply the following transformations on it.
Shifting the phase of $|01\rangle$ by $\theta$
and
shifting the phase of $|10\rangle$ by $(-\theta)$,
we obtain
\[
a_{0}(|00\rangle+|11\rangle)
+a_{1}e^{i\theta}|01\rangle
+a_{1}e^{-i\theta}|10\rangle.
\]
The value of $\theta$ is considered later.
Then, we apply $D$ on the above state.
$D$ is given as
\[
D=\frac{1}{2}
\begin{array}{ccccc}
\langle 00| & \langle 01| & \langle 10| & \langle 11| &\\
\svline{-1} & 1 & 1 & 1 & \svline{|00\rangle}\\
\svline{1} & -1 & 1 & 1 & \svline{|01\rangle}\\
\svline{1} & 1 & -1 & 1 & \svline{|10\rangle}\\
\svline{1} & 1 & 1 & -1 & \svline{|11\rangle}
\end{array},
\]
and we get
\[
A_{0}(|00\rangle+|11\rangle)
+A_{1}|01\rangle
+A_{1}^{*}|10\rangle
\quad\quad
\mbox{where $A_{0}=a_{1}\cos\theta$,
$A_{1}=a_{0}-ia_{1}\sin\theta$}.
\]
Defining $\phi$ as $e^{i\phi}\equiv A_{1}/|A_{1}|$,
we shift the phase of $|01\rangle$ by $(-\phi)$ and
shift the phase of $|10\rangle$ by $\phi$.
We get
\[
A_{0}(|00\rangle+|11\rangle)
+|A_{1}|(|01\rangle+|10\rangle).
\]
If $A_{0}=|A_{1}|$,
we obtain the uniform superposition.
Here,
we can assume
$0\leq a_{0}<1/2<a_{1}$
without losing generality.
From these considerations,
the value of $\theta$ is given by $\cos\theta=1/(2a_{1})$.
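The two-qubit procedure above can be traced numerically. In the sketch below (using numpy; the value $a_{1}=0.6$ is an arbitrary illustrative choice satisfying $a_{0}<1/2<a_{1}$), the three steps carry $|\psi_{2}\rangle$ to the uniform superposition:

```python
import numpy as np

# Arbitrary illustrative coefficients with a0^2 + a1^2 = 1/2, a0 < 1/2 < a1.
a1 = 0.6
a0 = np.sqrt(0.5 - a1 ** 2)
theta = np.arccos(1.0 / (2.0 * a1))       # cos(theta) = 1/(2 a1)

# |psi_2> in the basis (|00>, |01>, |10>, |11>).
psi = np.array([a0, a1, a1, a0], dtype=complex)

# Step 1: shift the phase of |01> by theta and of |10> by -theta.
R = np.diag([1, np.exp(1j * theta), np.exp(-1j * theta), 1])
psi = R @ psi

# Step 2: inversion about average (D for n = 2).
D = -np.eye(4) + 0.5 * np.ones((4, 4))
psi = D @ psi          # now (A0, A1, A1*, A0) with A0 = a1 cos(theta)

# Step 3: cancel the phase phi of A1 = a0 - i a1 sin(theta).
phi = np.angle(psi[1])
psi = np.diag([1, np.exp(-1j * phi), np.exp(1j * phi), 1]) @ psi

# psi is now the uniform superposition (1/2, 1/2, 1/2, 1/2).
```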
In case of $n\geq 3$,
we take the following method.
Classifying basis vectors
$\{|x\rangle|x\in\{0,1\}^{n}\}$
of $|\psi_{n}\rangle$
by their coefficients,
we obtain $(\lfloor n/2 \rfloor +1)$ sets of them
characterized by $a_{k}$.
We consider the transformation that makes
two sets of basis vectors
(e.g. sets of basis vectors with $a_{0}$ and $a_{1}$)
be weighted equally and reduces the number of sets by one.
If we repeat this operation $\lfloor n/2 \rfloor$ times,
we obtain the uniform superposition.
Here, we consider how to make
a set of basis vectors with $a_{1}$ equally weighted
with the set of basis vectors with $a_{0}$ in $|\psi_{n}\rangle$.
A similar discussion applies to the other sets.
From now,
we write $|\psi_{n}\rangle$ as
\begin{equation}
|\Psi\rangle
=[
\underbrace{a_{0},\cdots,}_{2l}
\underbrace{a_{1},\cdots,}_{2m}
a_{2(l+m)},\cdots,a_{2^{n}-1}
]
\quad\quad
\mbox{for $n\geq 2$}.
\label{procedure-step2}
\end{equation}
As the representation of Eq.~(\ref{procedure-step2}),
we sometimes write a column vector by a row vector.
In Eq.~(\ref{procedure-step2}),
we order the orthonormal basis vectors
$\{|x\rangle|x\in\{0,1\}^{n}\}$
appropriately,
and
coefficients $a_{0}$ and $a_{1}$ are put
in the left side of the row.
Because $|\psi_{n}\rangle$ is
invariant under inversion between $|0\rangle$
and $|1\rangle$ for each qubit,
the number of basis vectors that have a coefficient
$a_{k}$ $(0\leq k\leq \lfloor n/2 \rfloor)$
is even.
Therefore, we can give the number of $a_{0}$
by $2l$ and the number of $a_{1}$
by $2m$,
where $l\geq 1$, $m\geq 1$, and $l+m\leq 2^{n-1}$.
The other $(2^{n}-2l-2m)$ coefficients,
$\{a_{2},\cdots,a_{\lfloor n/2 \rfloor}\}$,
are gathered in the right side of the row
and they are relabeled
$\{a_{j}|2(l+m)\leq j \leq 2^{n}-1\}$.
Reordering the basis vectors does not change
the matrix forms of
$R$ defined in Eq.~(\ref{matrix-phaseshift})
and $D$ defined in Eq.~(\ref{matrix-GroversD}),
except for a permutation of the diagonal elements
of $R$.
We carry out the following transformations.
Firstly,
we shift phases of $m$ basis vectors
with coefficients $a_{1}$
by $\theta$
and
shift phases of the other $m$ basis vectors
with coefficients $a_{1}$
by $(-\theta)$.
How to choose the value of $\theta$ is discussed later.
We obtain
\begin{equation}
R_{\theta}|\Psi\rangle=
[
\underbrace{a_{0},\cdots,}_{\mbox{$2l$}}
\underbrace{e^{i\theta}a_{1},\cdots,}_{\mbox{$m$}}
\underbrace{e^{-i\theta}a_{1},\cdots,}_{\mbox{$m$}}
a_{2(l+m)},\cdots,a_{2^{n}-1}
],
\end{equation}
where $0\leq\theta<2\pi$
($R_{\theta}$ is the $2^{n}\times 2^{n}$ diagonal matrix
whose diagonal elements are
$\{1,\cdots,e^{i\theta},\cdots,e^{-i\theta},\cdots,1,\cdots,1\}$).
Then we apply $D$ on $R_{\theta}|\Psi\rangle$,
\begin{equation}
DR_{\theta}|\Psi\rangle=
[
\underbrace{A_{0},\cdots,}_{\mbox{$2l$}}
\underbrace{A_{1},\cdots,}_{\mbox{$m$}}
\underbrace{A_{1}^{*},\cdots,}_{\mbox{$m$}}
A_{2(l+m)},\cdots,A_{2^{n}-1}
],
\end{equation}
where
\begin{equation}
\left\{
\begin{array}{rcl}
2^{n-1}A_{0}&=&(2l-2^{n-1})a_{0}+2ma_{1}\cos\theta+C, \\
2^{n-1}A_{1}
&=&2la_{0}+(m-2^{n-1})a_{1}e^{i\theta}
+ma_{1}e^{-i\theta}+C, \\
2^{n-1}A_{j}
&=&2la_{0}+2ma_{1}\cos\theta-2^{n-1}a_{j}+C,
\end{array}
\right.
\label{A-Coefficients}
\end{equation}
for $j=2(l+m),\cdots,2^{n}-1$,
and
$C=\sum_{j=2(l+m)}^{2^{n}-1}a_{j}$.
We note that $A_{i}=A_{j}$
if $a_{i}=a_{j}$
for all $i,j$ with $2(l+m)\leq i,j \leq 2^{n}-1$.
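As a concrete check of Eq.~(\ref{A-Coefficients}), the action of $D R_{\theta}$ on a coefficient vector can be simulated numerically. The following Python sketch is our own illustration: the sizes $n=4$, $l=5$, $m=2$ and the coefficient values are hypothetical, and \texttt{grover\_D} is our name for the inversion-about-average map.

```python
import numpy as np

def grover_D(v):
    """Inversion about average: D = 2J/2^n - I, i.e. v -> 2*mean(v) - v."""
    return 2 * v.mean() - v

# Hypothetical sizes and values: n = 4 qubits, 2l copies of a0, 2m of a1.
n, l, m = 4, 5, 2
a0, a1, theta = 0.1, 0.4, 0.7
rest = np.full(2**n - 2*l - 2*m, 0.2)           # remaining coefficients a_j
psi = np.concatenate([np.full(2*l, a0), np.full(2*m, a1), rest])

# R_theta: phase +theta on m of the a1-vectors, -theta on the other m.
phases = np.ones(2**n, dtype=complex)
phases[2*l:2*l + m] = np.exp(1j * theta)
phases[2*l + m:2*l + 2*m] = np.exp(-1j * theta)
out = grover_D(phases * psi)                    # D R_theta |Psi>

# Closed forms of Eq. (A-Coefficients); D is linear, so the (omitted)
# normalization of psi does not affect the comparison.
C = rest.sum()
A0 = ((2*l - 2**(n-1))*a0 + 2*m*a1*np.cos(theta) + C) / 2**(n-1)
A1 = (2*l*a0 + (m - 2**(n-1))*a1*np.exp(1j*theta)
      + m*a1*np.exp(-1j*theta) + C) / 2**(n-1)
assert np.isclose(out[0], A0) and np.isclose(out[2*l], A1)
```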
Finally, we apply the selective phase shift
to cancel the phases of $A_{1}$ and $A_{1}^{*}$.
Defining $\phi$ as
$e^{i\phi}=A_{1}/|A_{1}|$,
we shift the phases of $m$ basis vectors
with coefficients $A_{1}$
by $(-\phi)$
and
shift the phases of $m$ basis vectors
with coefficients $A_{1}^{*}$
by $\phi$.
We obtain
\begin{equation}
\tilde{R}_{\theta}DR_{\theta}|\Psi\rangle=
[
\underbrace{A_{0},\cdots,}_{\mbox{$2l$}}
\underbrace{|A_{1}|,\cdots,}_{\mbox{$2m$}}
A_{2(l+m)},\cdots,A_{2^{n}-1}
].
\label{RDR-column}
\end{equation}
We write the second phase shift operator
as $\tilde{R}_{\theta}$,
because
the phase shift angle $\phi$
depends on $\theta$ and $\{a_{k}\}$.
If we can choose $\theta$ so that $|A_{1}|$ equals $A_{0}$,
we succeed in making the two sets of basis vectors
characterized by $a_{0}$ and $a_{1}$ equally weighted.
From now,
we call this series of operations an $(\tilde{R}DR)$ operation.
If we can carry out the $(\tilde{R}DR)$ operations,
with suitable parameters $\theta$s,
$\lfloor n/2 \rfloor$ times on $|\psi_{n}\rangle$,
we get the uniform superposition.
However, there are two difficulties.
We can't always find
a suitable $\theta$ that makes $|A_{1}|$ equal to $A_{0}$
in the $(\tilde{R}DR)$ operation
on an arbitrary given $|\psi_{n}\rangle$.
We consider the next lemma,
which
gives a sufficient condition
for finding a suitable $\theta$.
It also hints at
which pair of sets of basis vectors we should make equally weighted.
\vspace*{12pt}
\noindent
{\bf Lemma~1:}
We define an $n$-qubit state
$|\Psi\rangle$ as
\begin{equation}
|\Psi\rangle=
[\underbrace{a_{0},\cdots,}_{\mbox{$2l$}}
\underbrace{a_{1},\cdots,}_{\mbox{$2m$}}
a_{2(l+m)},\cdots,a_{2^{n}-1}]
\quad\quad
\mbox{for $n\geq 2$},
\label{ReductionLemma1-Psi}
\end{equation}
where $0\leq a_{j}$ for $j=0,1,2(l+m),\cdots,2^{n}-1$
and
$a_{0}<a_{1}$.
The basis vectors of Eq.~(\ref{ReductionLemma1-Psi})
are $\{|x\rangle|x\in\{0,1\}^{n}\}$.
We assume that
the number of elements $a_{0}$ is equal to $2l$
and
the number of elements $a_{1}$ is equal to $2m$,
where $l\geq 1$, $m\geq 1$ and $l+m\leq 2^{n-1}$.
We write the sum of all coefficients as
\begin{equation}
S
=2la_{0}+2ma_{1}+\sum_{j=2(l+m)}^{2^{n}-1}a_{j}.
\end{equation}
If the following condition is satisfied,
\begin{equation}
S-2^{n-2}(a_{0}+a_{1})\geq 0,
\label{suff-cond-Lemma1}
\end{equation}
we can always make the $2(l+m)$ basis vectors
whose coefficients are $a_{0}$ or $a_{1}$
equally weighted by the $(\tilde{R}DR)$ operation
in which $R$ and $\tilde{R}$ are applied to the
$2m$ basis vectors with $a_{1}$.
\vspace*{12pt}
\noindent
{\bf Proof:}
$\tilde{R}_{\theta}DR_{\theta}|\Psi\rangle$
is given by Eq.~(\ref{A-Coefficients}) and Eq.~(\ref{RDR-column}).
To evaluate a difference between
$A_{0}^{2}$ and $|A_{1}|^{2}$,
we define
\begin{eqnarray}
f(\theta)
&=&2^{n-2}(A_{0}^{2}-|A_{1}|^{2}) \nonumber \\
&=&
(2la_{0}+2ma_{1}\cos\theta+C)(a_{1}\cos\theta-a_{0})
-2^{n-2}(a_{1}^{2}-a_{0}^{2}).
\label{FunctionFTheta}
\end{eqnarray}
If $f(\theta)=0$,
$A_{0}^{2}$ is equal to $|A_{1}|^{2}$.
We evaluate $f(0)$ and $f(\pi/2)$,
\begin{equation}
\left\{
\begin{array}{rcl}
f(0)&=&(a_{1}-a_{0})[S-2^{n-2}(a_{0}+a_{1})], \\
f(\pi/2)&=&-a_{0}(2la_{0}+C)-2^{n-2}(a_{1}^{2}-a_{0}^{2})<0.
\end{array}
\right.
\end{equation}
If $S-2^{n-2}(a_{0}+a_{1})\geq 0$,
there exists $\theta$ with $0\leq \theta <\pi/2$
that satisfies $A_{0}^{2}=|A_{1}|^{2}$.
If the signs of $A_{0}$ and $|A_{1}|$ differ,
the phase shift by $\pi$ on the basis vectors
with negative coefficients is performed.
\qed
To find a suitable sequence of sets of basis vectors
to make equally weighted,
we take the following procedure.
(For $n=2,3$, the condition of Eq.~(\ref{suff-cond-Lemma1})
is always satisfied;
therefore, we consider the case $n\geq 4$.)
We describe a given state $|\psi_{n}\rangle$ by
Eq.~(\ref{PsiNForm}),
where
$n\geq 4$ and
$a_{k}\geq 0$ for $0\leq k\leq \lfloor n/2 \rfloor$.
Let $a_{min}$ be the minimum coefficient
among $\{a_{k}\}$
and
$a_{min+1}$ be the next smallest coefficient
($0\leq a_{min}<a_{min+1}<a_{j}$,
where $a_{j}$ is any coefficient of $|\psi_{n}\rangle$
other than $a_{min}$ and $a_{min+1}$).
Because the number of different coefficients
in $\{a_{k}\}$ is
equal to $(\lfloor n/2 \rfloor+1)$,
it takes $O(n)$ steps to find $a_{min}$ and $a_{min+1}$
by classical computation.
\begin{enumerate}
\item
If $S<2^{n-2}(a_{min}+a_{min+1})$,
then $S<2^{n-2}(a_{i}+a_{j})$ for all $i,j$.
In this case,
finding a good $\theta$ for the $(\tilde{R}DR)$ operation
can't be guaranteed.
We use another technique, explained in the next section.
\item
If $S\geq 2^{n-2}(a_{min}+a_{min+1})$,
we can find a good $\theta$ for
the $(\tilde{R}DR)$ operation
and get a relation,
$A_{min}^{2}=|A_{min+1}|^{2}$.
Because Eq.~(\ref{FunctionFTheta})
is an equation of the second degree
for $\cos\theta$,
we can obtain $\theta$ with some calculations.
In this case,
though this may make
other pairs of sets of basis vectors equally weighted,
we neglect them.
Shifting the phases of the basis vectors with negative coefficients
by $\pi$ after the $(\tilde{R}DR)$ operation,
we obtain a state all of whose coefficients are
nonnegative.
There are
$\lfloor n/2 \rfloor$ kinds of new coefficients in the state after
these operations,
and we can derive them from
Eq.~(\ref{A-Coefficients}) and Eq.~(\ref{RDR-column})
with $poly(n)$ steps by classical computation
($poly(n)$ means polynomial in $n$).
We can check whether the condition of
Lemma~1
is satisfied or not again.
\end{enumerate}
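When the condition of Lemma~1 holds, a suitable $\theta$ can also be found numerically, since $f(0)\geq 0>f(\pi/2)$ guarantees a sign change on $[0,\pi/2]$. The following Python sketch (the parameter values are our illustrative choices, not taken from the text) finds the root of Eq.~(\ref{FunctionFTheta}) by bisection:

```python
import math

def f(theta, n, l, m, a0, a1, C):
    """f(theta) = 2^{n-2} (A_0^2 - |A_1|^2), Eq. (FunctionFTheta)."""
    c = math.cos(theta)
    return (2*l*a0 + 2*m*a1*c + C) * (a1*c - a0) - 2**(n-2) * (a1**2 - a0**2)

def find_theta(n, l, m, a0, a1, C, iters=60):
    """Bisection on [0, pi/2]: Lemma 1 gives f(0) >= 0 > f(pi/2)."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid, n, l, m, a0, a1, C) >= 0 else (lo, mid)
    return (lo + hi) / 2

# Illustrative values satisfying S - 2^{n-2}(a0 + a1) >= 0.
n, l, m, a0, a1, C = 4, 3, 2, 0.2, 0.3, 1.5
S = 2*l*a0 + 2*m*a1 + C
assert S - 2**(n-2) * (a0 + a1) >= 0
theta = find_theta(n, l, m, a0, a1, C)
assert abs(f(theta, n, l, m, a0, a1, C)) < 1e-9
```

Since Eq.~(\ref{FunctionFTheta}) is quadratic in $\cos\theta$, the root could equally be written in closed form; bisection is used here only to keep the sketch short.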
\section{The case where the sufficient condition
isn't satisfied}
\label{CaseWhSufCondNotSat}
In this section,
we consider how to make a couple of sets of
basis vectors be weighted equally
in the case where the state doesn't
satisfy the condition of Lemma~1,
$S\geq 2^{n-2}(a_{min}+a_{min+1})$.
We develop a technique that adjusts
amplitudes of basis vectors
and transforms the state
to a state that satisfies the sufficient condition.
This is an application of Grover's
amplitude amplification process.
For example, we consider the state
which has two kinds of coefficients,
\begin{equation}
|\Psi\rangle
=[\underbrace{a_{0},\cdots,}_{\mbox{$(2^{n}-t)$}}
\underbrace{a_{1},\cdots}_{\mbox{$t$}}]
\quad\quad
\mbox{for $n\geq 4$},
\mbox{where $0\leq a_{0}<a_{1}$}.
\label{SimpleModelForRpiDIteration}
\end{equation}
We assume that
the number of elements $a_{1}$ is equal to $t$,
where $2\leq t\leq2^{n}-2$, and $t$ is even.
If $0<t<2^{n-2}$ and
$[(3\cdot 2^{n-2}-t)/(2^{n-2}-t)]a_{0}<a_{1}$
(i.e., $a_{1}$ is sufficiently larger than $a_{0}$),
we obtain $S<2^{n-2}(a_{0}+a_{1})$
for $|\Psi\rangle$.
\begin{figure}
\caption{A variation of coefficients
under the $(R_{\pi}D)$ operation.}
\label{varco2-epsf}
\end{figure}
In this case,
applying $D$
(using the property of
the inversion about average operation)
and then
shifting the phase by $\pi$ on the basis vectors
with negative coefficients,
we can reduce the difference between the
new coefficients, $B_{0}$ and $B_{1}$,
as shown in
Figure~\ref{varco2-epsf}.
We denote this phase shift operation by $R_{\pi}$.
It can be expected that
$[S-2^{n-2}(a_{min}+a_{min+1})]$ grows
as $(R_{\pi}D)$ is applied successively.
The next lemma shows this precisely.
\vspace*{12pt}
\noindent
{\bf Lemma~2:}
We consider a state,
\begin{equation}
|\Psi\rangle=
[\underbrace{a_{0},\cdots,}_{\mbox{$2l$}}
\underbrace{a_{1},\cdots,}_{\mbox{$2m$}}
a_{2(l+m)},\cdots,a_{2^{n}-1}]
\quad\quad
\mbox{for $n\geq 4$},
\end{equation}
where $0\leq a_{0}<a_{1}<a_{j}$
for $j=2(l+m),\cdots,2^{n}-1$.
We assume that
the number of elements $a_{0}$ is equal to $2l$
and
the number of elements $a_{1}$ is equal to $2m$,
where $l\geq 1$, $m\geq 1$, and $l+m\leq 2^{n-1}$.
We also assume that $S$,
the sum of all coefficients of $|\Psi\rangle$,
satisfies the relation
\begin{equation}
S-2^{n-2}(a_{0}+a_{1})<0.
\label{AssumptionOfLemma2}
\end{equation}
Applying the inversion about average operation $D$ on $|\Psi\rangle$,
and then,
doing the phase shift transformation by $\pi$ on basis vectors
which have negative coefficients,
we obtain
\begin{equation}
R_{\pi}D|\Psi\rangle=
[B_{0},\cdots,
B_{1},\cdots,
B_{2(l+m)},\cdots,B_{2^{n}-1}].
\label{RpiDPsi-form-Lemma2}
\end{equation}
We define $\tilde{S}$ as the sum of all coefficients
of $R_{\pi}D|\Psi\rangle$.
We also define
\begin{equation}
\left\{
\begin{array}{rcl}
\epsilon^{(0)}&=&(2l-2^{n-1})a_{0}+(2^{n}-2l)a_{1}, \\
\epsilon^{(1)}&=&(2l-2^{n-1})B_{0}+(2^{n}-2l)B_{1}.
\end{array}
\right.
\label{def-epsilon0and1}
\end{equation}
\begin{enumerate}
\item
We get
$0<B_{0}<B_{1}<B_{j}$ for $j=2(l+m),\cdots,2^{n}-1$
and
\begin{equation}
[\tilde{S}-2^{n-2}(B_{0}+B_{1})]
-[S-2^{n-2}(a_{0}+a_{1})]
>\epsilon^{(0)}>0.
\end{equation}
\item
We obtain the relation,
\begin{equation}
\epsilon^{(1)}-\epsilon^{(0)}
\geq
\frac{2^{n}-2l}{2^{n-2}}
[2^{n-2}(a_{0}+a_{1})-S]>0.
\end{equation}
\end{enumerate}
\vspace*{12pt}
\noindent
{\bf Proof:}
We can derive
$D|\Psi\rangle=
[a'_{0},\cdots,
a'_{1},\cdots,
a'_{2(l+m)},\cdots,a'_{2^{n}-1}]$,
where
\begin{equation}
\left\{
\begin{array}{rcl}
2^{n-1}a'_{0}&=&S-2^{n-1}a_{0}, \\
2^{n-1}a'_{1}&=&S-2^{n-1}a_{1}, \\
2^{n-1}a'_{j}&=&S-2^{n-1}a_{j},
\quad
\mbox{for $2(l+m)\leq j\leq 2^{n}-1$}.
\end{array}
\right.
\label{a-coefficient}
\end{equation}
It is clear that $S-2^{n-1}a_{0}>0$.
Using the assumption of Eq.~(\ref{AssumptionOfLemma2}),
we obtain
$S-2^{n-1}a_{k}<0$ for $\forall k\neq 0$.
Therefore, we get $R_{\pi}D|\Psi\rangle$
of Eq.~(\ref{RpiDPsi-form-Lemma2}),
where
\begin{equation}
\left\{
\begin{array}{rcl}
2^{n-1}B_{0}&=&S-2^{n-1}a_{0}, \\
2^{n-1}B_{1}&=&-S+2^{n-1}a_{1}, \\
2^{n-1}B_{j}&=&-S+2^{n-1}a_{j},
\quad
\mbox{for $2(l+m)\leq j\leq 2^{n}-1$}.
\end{array}
\right.
\label{CoefficientsB1}
\end{equation}
We can derive the difference between $B_{1}$ and $B_{0}$
using the assumption of Eq.~(\ref{AssumptionOfLemma2}),
\begin{equation}
2^{n-1}(B_{1}-B_{0})=-2[S-2^{n-2}(a_{0}+a_{1})]>0.
\end{equation}
It is clear that
$B_{1}<B_{j}$ for $j=2(l+m),\cdots,2^{n}-1$.
We obtain the relation,
$0<B_{0}<B_{1}<B_{j}$ for $j=2(l+m),\cdots,2^{n}-1$.
Since
\[
\tilde{S}=\frac{4l}{2^{n-1}}(S-2^{n-1}a_{0})-S,
\quad\quad
\mbox{and}
\quad\quad
B_{0}+B_{1}=a_{1}-a_{0},
\]
we can derive $\Delta$,
the variation
of $[S-2^{n-2}(a_{0}+a_{1})]$
caused by the $(R_{\pi}D)$ operation,
\begin{eqnarray}
\Delta
&=&[\tilde{S}-2^{n-2}(B_{0}+B_{1})]
-[S-2^{n-2}(a_{0}+a_{1})] \nonumber \\
&=&2(\frac{2l}{2^{n-1}}-1)S
-(4l-2^{n-1})a_{0}.
\label{DeltaRepresentation1}
\end{eqnarray}
To estimate $\Delta$ precisely,
we prepare some useful relations.
From the definition of $S$,
we get
\begin{equation}
S=2la_{0}+2ma_{1}+\sum_{j=2(l+m)}^{2^{n}-1}a_{j}
\geq 2la_{0}+(2^{n}-2l)a_{1}.
\label{ConditionOfS1}
\end{equation}
Using the assumption of Eq.~(\ref{AssumptionOfLemma2})
and Eq.~(\ref{ConditionOfS1}),
we can derive the relation,
\begin{eqnarray}
0
&>&S-2^{n-2}(a_{0}+a_{1}) \nonumber \\
&\geq&2la_{0}+(2^{n}-2l)a_{1}-2^{n-2}(a_{0}+a_{1}) \nonumber \\
&=&(2l-2^{n-2})a_{0}+(3\cdot 2^{n-2}-2l)a_{1}.
\label{StrictUnequation1}
\end{eqnarray}
We modify the relation of Eq.~(\ref{StrictUnequation1})
and get a rougher relation,
\begin{equation}
0
>2la_{0}-2^{n-2}a_{1}+(3\cdot 2^{n-2}-2l)a_{1}
=2la_{0}+(2^{n-1}-2l)a_{1}.
\end{equation}
Because $0\leq a_{0}<a_{1}$,
we obtain
$2l-2^{n-1}>0$.
Seeing this relation and Eq.~(\ref{StrictUnequation1}) again,
we also obtain
\begin{equation}
2l>3\cdot 2^{n-2}.
\label{2lCondition2}
\end{equation}
Here, we can estimate $\Delta$.
Because of Eq.~(\ref{2lCondition2}),
the coefficient of $S$ in Eq.~(\ref{DeltaRepresentation1}) is positive,
so we can substitute the bound of Eq.~(\ref{ConditionOfS1})
into Eq.~(\ref{DeltaRepresentation1}),
\begin{eqnarray}
\Delta
&\geq&2(\frac{2l}{2^{n-1}}-1)[2la_{0}+(2^{n}-2l)a_{1}]
-(4l-2^{n-1})a_{0} \nonumber \\
&=&
\frac{1}{2^{n-1}}(4l-3\cdot 2^{n-1})
[(2l-2^{n-2})a_{0}
+(3\cdot 2^{n-2}-2l)a_{1}] \nonumber \\
&&\quad\quad\quad\quad\quad\quad
+2^{n-2}(a_{1}-a_{0}).
\label{Delta-ineqality}
\end{eqnarray}
Seeing Eq.~(\ref{2lCondition2}),
we find $3\cdot 2^{n-2}<2l<2^{n}$.
Therefore, we can derive the relation,
$0<(4l-3\cdot 2^{n-1})<2^{n-1}$.
From Eq.~(\ref{StrictUnequation1})
and Eq.~(\ref{Delta-ineqality}),
we can estimate $\Delta$,
\begin{equation}
\Delta
>
[(2l-2^{n-2})a_{0}
+(3\cdot 2^{n-2}-2l)a_{1}]
+2^{n-2}(a_{1}-a_{0})
=\epsilon^{(0)}
>0.
\end{equation}
The first result is derived.
From the definition
in Eq.~(\ref{def-epsilon0and1})
and Eqs.~(\ref{CoefficientsB1}),
(\ref{ConditionOfS1}),
and (\ref{StrictUnequation1}),
we can estimate the difference
between $\epsilon^{(0)}$ and $\epsilon^{(1)}$,
\begin{eqnarray}
\epsilon^{(1)}-\epsilon^{(0)}
&=&
\frac{1}{2^{n-1}}
[(4l-3\cdot 2^{n-1})S-2^{n}a_{0}(2l-2^{n-1})] \nonumber \\
&\geq&
\frac{1}{2^{n-1}}
\{(4l-3\cdot 2^{n-1})[2la_{0}+(2^{n}-2l)a_{1}]
-(2l-2^{n-1})2^{n}a_{0}\} \nonumber \\
&=&
-\frac{2^{n}-2l}{2^{n-2}}
[(3\cdot 2^{n-2}-2l)a_{1}
+(2l-2^{n-2})a_{0}] \nonumber \\
&\geq&
\frac{2^{n}-2l}{2^{n-2}}
[2^{n-2}(a_{0}+a_{1})-S]>0.
\end{eqnarray}
The second result is derived.
\qed
Because of Lemma~2,
by applying the $(R_{\pi}D)$ transformation successively,
we can make
$[S-2^{n-2}(a_{0}+a_{1})]$ nonnegative.
We explain this as follows.
We consider the state $|\Psi^{(0)}\rangle$
specified with coefficients,
$0\leq a_{0}<a_{1}<a_{j}$
for $2(l+m)\leq j\leq 2^{n}-1$,
and assume
$S-2^{n-2}(a_{0}+a_{1})<0$.
We apply $(R_{\pi}D)$ on $|\Psi^{(0)}\rangle$
and obtain $|\Psi^{(1)}\rangle$
described with coefficients,
$0<B_{0}^{(1)}<B_{1}^{(1)}<B_{j}^{(1)}$
for $2(l+m)\leq j\leq 2^{n}-1$.
Because of Lemma~2.1,
we obtain
\begin{equation}
[S^{(1)}-2^{n-2}(B_{0}^{(1)}+B_{1}^{(1)})]
-[S-2^{n-2}(a_{0}+a_{1})]
>\epsilon^{(0)}>0,
\end{equation}
where $S^{(1)}$ is a sum of all coefficients of $|\Psi^{(1)}\rangle$.
Then, we assume
$S^{(1)}-2^{n-2}(B_{0}^{(1)}+B_{1}^{(1)})<0$.
After applying $(R_{\pi}D)$ on $|\Psi^{(1)}\rangle$,
we get $|\Psi^{(2)}\rangle$
specified by
$0<B_{0}^{(2)}<B_{1}^{(2)}<B_{j}^{(2)}$
for $2(l+m)\leq j\leq 2^{n}-1$.
Because of Lemma~2.2,
we get
\begin{equation}
[S^{(2)}-2^{n-2}(B_{0}^{(2)}+B_{1}^{(2)})]
-[S^{(1)}-2^{n-2}(B_{0}^{(1)}+B_{1}^{(1)})]
>\epsilon^{(1)}>\epsilon^{(0)}>0.
\end{equation}
Consequently,
if $S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})<0$,
then $[S^{(k+1)}-2^{n-2}(B_{0}^{(k+1)}+B_{1}^{(k+1)})]$
increases by at least $\epsilon^{(0)}(>0)$.
Here, $k$ stands for the number of $(R_{\pi}D)$ transformations
applied to the state,
and $S^{(0)}=S$, $B_{0}^{(0)}=a_{0}$, $B_{1}^{(0)}=a_{1}$.
Because $\epsilon^{(0)}$ is determined by $\{a_{0},a_{1}\}$ and $l$,
it is a definite, finite, positive value.
Repeating the $(R_{\pi}D)$ transformation a finite number of times,
we can certainly make $[S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})]$
nonnegative.
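This repair step can be simulated directly. In the following Python sketch (our own illustration: a two-valued state of the form of Eq.~(\ref{SimpleModelForRpiDIteration}) with hypothetical values $n=4$, $t=2$, $a_{1}=0.65$), the quantity ${\cal F}^{(k)}=S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})$ becomes nonnegative after a finite number of $(R_{\pi}D)$ steps:

```python
import math
import numpy as np

def R_pi_D(v):
    """One (R_pi D) step: inversion about average, then a pi phase
    shift on the negative coefficients (i.e. take absolute values)."""
    w = 2 * v.mean() - v
    return np.abs(w)

# Two-valued normalized state, Eq. (SimpleModelForRpiDIteration):
# 2^n - t copies of a0 and t copies of a1, with a1 large enough
# that S - 2^{n-2}(a0 + a1) < 0 (hypothetical example values).
n, t, a1 = 4, 2, 0.65
N = 2**n
a0 = math.sqrt((1 - t * a1**2) / (N - t))
v = np.concatenate([np.full(N - t, a0), np.full(t, a1)])

def F(v):
    """F = S - 2^{n-2}(B0 + B1), with B0, B1 the two smallest values."""
    b = np.sort(np.unique(np.round(v, 12)))
    return v.sum() - 2**(n-2) * (b[0] + b[1])

assert F(v) < 0                      # Lemma 1's condition fails initially
k = 0
while F(v) < 0:                      # Lemma 2 guarantees termination
    v = R_pi_D(v)
    k += 1
assert F(v) >= 0 and np.isclose(np.linalg.norm(v), 1.0)
```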
From Eq.~(\ref{a-coefficient}) and Eq.~(\ref{CoefficientsB1}),
we find that during the $(R_{\pi}D)$ iteration
the phase shift is applied to the same basis vectors each time.
Therefore, the $(R_{\pi}D)$ iteration can be understood
as the inverse of Grover's iteration,
which is used to enhance the amplitude of a certain basis vector.
If $[S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})]$
comes to be nonnegative,
we start to do the $(\tilde{R}DR)$ operation again.
Using the $(\tilde{R}DR)$ operation and
the $(R_{\pi}D)$ iteration,
we can always transform $|\psi_{n}\rangle$ to the uniform superposition.
How many times do we need to apply $(R_{\pi}D)$ to a state
to obtain the relation
$S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})\geq 0$?
To estimate this,
we first
introduce the notations
\begin{eqnarray}
\epsilon^{(k)}
&=&
(2l-2^{n-1})B_{0}^{(k)}+(2^{n}-2l)B_{1}^{(k)}, \nonumber \\
{\cal F}^{(k)}
&=&
S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)}).
\end{eqnarray}
Because of Lemma~2,
if ${\cal F}^{(k)}<0$,
we get relations,
\begin{eqnarray}
{\cal F}^{(k+1)}-{\cal F}^{(k)}
&>&\epsilon^{(k)}>0, \nonumber \\
\epsilon^{(k+1)}-\epsilon^{(k)}
&\geq&
-[(2^{n}-2l)/2^{n-2}]{\cal F}^{(k)}
\geq
-{\cal F}^{(k)}/2^{n-3}
>0,
\end{eqnarray}
where we use the fact that the minimum value of $(2^{n}-2l)$ is $2$.
Consequently,
if
${\cal F}^{(0)}<{\cal F}^{(1)}<\cdots
<{\cal F}^{(K-1)}<{\cal F}^{(K)}<0$,
we estimate $\epsilon^{(k)}$
($k=1,2,\cdots$) recurrently,
\begin{eqnarray}
\epsilon^{(1)}
&\geq&
-x{\cal F}^{(0)}+\epsilon^{(0)}
>
-x{\cal F}^{(1)}
(>0), \nonumber \\
\epsilon^{(2)}
&\geq&
-x{\cal F}^{(1)}+\epsilon^{(1)}
>
-2x{\cal F}^{(1)}
>
-2x{\cal F}^{(2)}
(>0), \nonumber \\
\cdots \nonumber \\
\epsilon^{(K)}
&\geq&
-x{\cal F}^{(K-1)}+\epsilon^{(K-1)}
>
-Kx{\cal F}^{(K-1)}
>
-Kx{\cal F}^{(K)}
(>0),
\end{eqnarray}
where $x=1/2^{n-3}$.
From these relations,
assuming $Kx\leq 1$,
we obtain
\begin{eqnarray}
(0>){\cal F}^{(2)}
&>&{\cal F}^{(1)}+\epsilon^{(1)}
>(1-x){\cal F}^{(1)}
>(1-x){\cal F}^{(0)}, \nonumber \\
(0>){\cal F}^{(3)}
&>&{\cal F}^{(2)}+\epsilon^{(2)}
>(1-2x){\cal F}^{(2)}
>(1-x)(1-2x){\cal F}^{(0)}, \nonumber \\
\cdots \nonumber \\
{\cal F}^{(K+1)}
&>&{\cal F}^{(K)}+\epsilon^{(K)}
>(1-Kx){\cal F}^{(K)}
\geq\prod_{k=1}^{K}(1-kx){\cal F}^{(0)}.
\end{eqnarray}
If
${\cal F}^{(K+1)}+\epsilon^{(0)}\geq 0$,
we obtain ${\cal F}^{(K+2)}>0$
and
we can conclude we need to apply
the $(R_{\pi}D)$ transformation $(K+2)$ times at most.
To derive an upper bound on the number of times
we have to apply
the $(R_{\pi}D)$ transformation,
we estimate
$\epsilon^{(0)}$
and ${\cal F}^{(0)}$,
\begin{eqnarray}
\epsilon^{(0)}
&\geq&
(2^{n}-2l)a_{1}\geq 2a_{1}, \nonumber \\
{\cal F}^{(0)}
&>&
2^{n}a_{0}-2^{n-2}(a_{0}+a_{1})
\geq -2^{n-2}a_{1},
\end{eqnarray}
and we obtain
\begin{equation}
{\cal F}^{(K+1)}+\epsilon^{(0)}
\geq
\prod_{k=1}^{K}(1-kx){\cal F}^{(0)}
+\epsilon^{(0)}
\geq
-2^{n-2}a_{1}[\prod_{k=1}^{K}(1-kx)-x].
\end{equation}
Therefore,
to estimate the lower bound on
$K$ for
${\cal F}^{(K+1)}+\epsilon^{(0)}\geq 0$,
we have to derive
the lower bound on $K$
for the large $n$ (small $x$) limit, where
\begin{equation}
\prod_{k=1}^{K}(1-kx)\leq x
\quad\quad
\mbox{for $0<x \ll 1$}.
\end{equation}
Because
$\lim_{m\to +\infty}
[1-(1/m)]^{-m}=e(>2)$,
if $x_{0}$ is small enough,
we obtain
\begin{equation}
\prod_{k=\lceil \sqrt{1/x} \rceil}
^{2\lceil \sqrt{1/x} \rceil -1}
(1-kx)
<
(1-\frac{1}{\lceil \sqrt{1/x} \rceil})
^{\lceil \sqrt{1/x} \rceil}
<
\frac{1}{2},
\end{equation}
for all $x$ with $0< x <x_{0}\ll 1$
($\lceil \sqrt{1/x} \rceil$ is the smallest integer
not less than
$\sqrt{1/x}$).
Remembering $x=1/2^{n-3}$,
we get
\begin{equation}
x>
[
\prod_{k=\lceil \sqrt{1/x} \rceil}
^{2\lceil \sqrt{1/x} \rceil -1}
(1-kx)
]^{n-3}
>
\prod_{k=1}
^{(n-2)\lceil \sqrt{1/x} \rceil -1}
(1-kx).
\end{equation}
Consequently,
the lower bound on $K$ is
$(n-2)\lceil \sqrt{1/x} \rceil -1
\sim
O(n2^{n/2})$.
We have to apply the $(R_{\pi}D)$
transformation $O(n2^{n/2})$
times at most.
(See {\S}Appendix~B.)
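The product estimate behind this bound can be sanity-checked numerically for a moderate $n$ (this is our own check of the asymptotics, with an illustrative $n=20$, not part of the proof):

```python
import math

# Check that K = (n-2)*ceil(sqrt(1/x)) - 1 steps drive the product
# prod_{k=1}^{K} (1 - k x) below x, with x = 1/2^{n-3}.
n = 20
x = 1 / 2**(n - 3)
K = (n - 2) * math.ceil(math.sqrt(1 / x)) - 1
# Work in logs to avoid underflow of the (tiny) product.
log_prod = sum(math.log1p(-k * x) for k in range(1, K + 1))
assert log_prod < math.log(x)        # i.e. the product is below x
```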
Using Eq.~(\ref{CoefficientsB1}),
we can compute $\{B_{k}\}$ with $poly(n)$ steps
by classical computation,
because the number of different coefficients
in $\{B_{k}\}$ is
equal to $(\lfloor n/2 \rfloor +1)$.
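Because only $O(n)$ distinct coefficient values occur, this classical bookkeeping can store the state as (value, multiplicity) pairs instead of $2^{n}$ amplitudes. A minimal Python sketch of one $D$ step in this representation (the function and variable names are ours, for illustration):

```python
def D_on_classes(classes, N):
    """Inversion about average on a state stored as a list of
    (coefficient, multiplicity) pairs: each value a -> 2S/N - a,
    where S is the sum of all N coefficients. O(len(classes)) work."""
    S = sum(a * cnt for a, cnt in classes)
    return [(2 * S / N - a, cnt) for a, cnt in classes]

# Compare with the full 2^n-dimensional computation (illustrative values).
N = 16
classes = [(0.1, 10), (0.4, 4), (0.2, 2)]
full = [a for a, cnt in classes for _ in range(cnt)]
S = sum(full)
expected = [2 * S / N - a for a in full]
got = [a for a, cnt in D_on_classes(classes, N) for _ in range(cnt)]
assert all(abs(x - y) < 1e-12 for x, y in zip(got, expected))
```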
\section{The whole procedure}
\label{WholeProc}
In this section,
we show the whole procedure for building $|\psi_{n}\rangle$
and sketch an implementation
of our procedure.
We also show that it can be used
for building more general entangled states.
As a result of the preceding discussion,
we obtain the whole procedure to build
$|\psi_{n}\rangle$ as follows.
(We describe the procedure in reverse,
from $|\psi_{n}\rangle$ to the uniform superposition.
Throughout our procedure,
we take $\{|x\rangle|x\in\{0,1\}^{n}\}$
as basis vectors.)
\begin{enumerate}
\item
We consider an $n$-qubit register
whose initial state is $|\psi_{n}\rangle$.
(We assume all coefficients of the basis vectors are
nonnegative.)
\item
If the state of the register is equal to the uniform superposition,
stop operations.
If it is not equal to the uniform superposition,
go to step~3.
\item
Let $a_{min}$ be the minimum coefficient for basis vectors
in the state of the register
and $a_{min+1}$ be the coefficient next to $a_{min}$.
Examine whether $a_{min}$ and $a_{min+1}$ satisfy
the sufficient condition
of Lemma~1
or not.
If they satisfy it,
carry out the $(\tilde{R}DR)$ operation,
shift the phases of
basis vectors
which have negative coefficients by $\pi$,
and then go to step~2.
If they do not satisfy it,
go to step~4.
\item
Apply the $(R_{\pi}D)$ transformation on the register
and go to step~3.
\end{enumerate}
Before executing this procedure,
we need to trace the variation of the coefficients of the basis vectors
at each step
by classical computation,
because
we have to know which basis vectors have
the coefficients $a_{min}$ and $a_{min+1}$,
find the phase shift parameter
of the $(\tilde{R}DR)$ operation, and so on.
From these results,
we construct a network of quantum gates.
The amount of classical computation is comparable
to the number of steps of the whole quantum transformation.
We now sketch the main points of the networks
of quantum gates for our procedure.
Because the procedure is a chain
of phase shift transformations
and Grover's operations $D$,
we discuss the networks of quantum gates for these.
First, we discuss the phase shift transformation.
In the $(\tilde{R}DR)$ operation,
we shift the phases by $\theta$ on half of the basis vectors
with coefficient $a_{k}$ (as $a_{min+1}$)
and by $(-\theta)$ on the other half.
To construct networks for $R_{\theta}$,
we prepare two registers and a unitary transformation
$U_{f}$,
\begin{equation}
|x\rangle \otimes |y\rangle
\stackrel{U_{f}}{\longrightarrow}
|x\rangle \otimes |y\oplus f(x)\rangle,
\label{functionF}
\end{equation}
where the first (main) register is made from $n$ qubits,
the second (auxiliary) register is made from
$m=\lceil \log_{2}(n+1)\rceil$ qubits
initialized to $|0\cdots 0\rangle$,
and
\begin{equation}
f(x)=
(\mbox{the number of ``$1$'' in the binary string of $x$}).
\label{definition-fx-number1}
\end{equation}
To obtain $f(x)$ by classical computation,
we need $O(nm)\sim O(n\log_{2} n)$ classical gates
(XOR, and so on)
and $O(m)\sim O(\log_{2} n)$
additional auxiliary classical bits.
Therefore,
we can construct $U_{f}$ with $O(n\log_{2} n)$
elementary quantum gates
(\cite{Feynman}\cite{Barenco}\cite{Bennett73}
and see {\S}Appendix~A).
To execute the selective phase shift efficiently,
we apply it to the second register
instead of the first register.
Because the phase shift matrix defined in
Eq.~(\ref{matrix-phaseshift})
is diagonal,
we can do so.
After shifting the phases,
we apply $U_{f}$ again and
initialize the second register.
Unnecessary entanglement between the first and the second register is removed.
To see these operations precisely,
we apply $U_{f}$ on $|\psi_{n}\rangle$
defined in Eq.~(\ref{PsiNForm}).
We get
\begin{eqnarray}
U_{f}|\psi_{n}\rangle\otimes|0\rangle
&=&U_{f}
\sum_{k=0}^{\lfloor n/2 \rfloor} a_{k}|k\rangle_{s}
\otimes|0\rangle \nonumber \\
&=&
\left\{
\begin{array}{ll}
\sum_{k=0}^{(n-1)/2} a_{k}
[|k)\otimes|k\rangle+|n-k)\otimes|n-k\rangle],
& \mbox{($n$ is odd)} \\
\sum_{k=0}^{(n/2)-1} a_{k}
[|k)\otimes|k\rangle+|n-k)\otimes|n-k\rangle]
+a_{n/2}|n/2)\otimes|n/2\rangle,
& \mbox{($n$ is even)}
\end{array}
\right.
\end{eqnarray}
where $|k)$ is the equally weighted superposition of
the basis vectors with $k$ excited qubits
(so that $|k\rangle_{s}=|k)+|n-k)$,
except for $|n/2\rangle_{s}=|n/2)$
when $n$ is even).
We shift the phases of the basis vectors
$|k\rangle$, $|n-k\rangle$ on the second register,
instead of $|k)$, $|n-k)$
(which contain
$2\binom{n}{k}$
binary basis vectors)
on the first register
(where $k\neq n/2$).
This implementation reduces the number of basis vectors
to which we apply the phase shift operation
from
$2\binom{n}{k}$
to $2$,
and we can save elementary quantum gates.
Using another auxiliary qubit,
we can carry out the phase shift with $O(\log_{2} n)$
elementary quantum gates
(\cite{Barenco}\cite{Cleve-Ekert} and see {\S}Appendix~A).
If $n$ is even and $k=n/2$,
we can't decide
which basis vectors should have their phases shifted
by $\theta$ or by $(-\theta)$.
In this case, we refer not only to
$|n/2\rangle$ on the second register
but also to the first qubit of the first register
(cf. $|\psi_{4}\rangle$ defined in Eq.~(\ref{Psi4Form})).
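The saving from shifting phases on the second register can be made explicit: since the phase depends only on $f(x)$, a diagonal phase on the ancilla value induces the desired diagonal on all $2\binom{n}{k}$ main-register vectors at once. A small numerical illustration of this equivalence (our own sketch, with illustrative $n=3$ and $\theta=0.5$):

```python
import numpy as np

n, theta = 3, 0.5
N = 2**n
weight = [bin(x).count("1") for x in range(N)]   # f(x) = popcount

# Target: phase +theta on weight-1 vectors, -theta on weight-(n-1) vectors.
direct = np.array([np.exp(1j*theta) if weight[x] == 1 else
                   np.exp(-1j*theta) if weight[x] == n - 1 else 1.0
                   for x in range(N)])

# Via U_f: shift only the two ancilla values |1> and |n-1>; because the
# phase is a function of f(x) alone, the induced diagonal is identical.
ancilla_phase = {k: 1.0 for k in range(n + 1)}
ancilla_phase[1] = np.exp(1j*theta)
ancilla_phase[n - 1] = np.exp(-1j*theta)
via_ancilla = np.array([ancilla_phase[weight[x]] for x in range(N)])

assert np.allclose(direct, via_ancilla)
```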
Next, we discuss how to construct the quantum
network of $D$.
It is known that $D$ can be decomposed in the form
$D=-WRW$,
where $W=H^{(1)}\otimes \cdots \otimes H^{(n)}$
(the Walsh-Hadamard transformation on the $n$ qubits
of the main register),
and $R$ is a phase shift by $\pi$
on $|0\cdots 0\rangle$ of the $n$ qubits\cite{Grover}.
$D$ takes $O(n)$ steps.
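The decomposition $D=-WRW$ is easy to verify numerically for small $n$ (a sanity check in matrix form, not a gate-level construction):

```python
import numpy as np

n = 3
N = 2**n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
W = H
for _ in range(n - 1):                          # Walsh-Hadamard on n qubits
    W = np.kron(W, H)
R = np.diag([-1.0] + [1.0] * (N - 1))           # pi phase shift on |0...0>
D = 2 * np.full((N, N), 1.0 / N) - np.eye(N)    # inversion about average
assert np.allclose(D, -W @ R @ W)
```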
We repeat the $(R_{\pi}D)$ transformation
$O(n2^{n/2})$ times at most
before each $(\tilde{R}DR)$ operation.
If we do the $(R_{\pi}D)$ iteration before every $(\tilde{R}DR)$,
we carry it out $\lfloor n/2 \rfloor$ times.
Therefore,
the $(R_{\pi}D)$ iterations form the main part of the whole procedure.
Because $(R_{\pi}D)$ takes $O(n\log_{2} n)$ steps,
we need $O((n^{3}\log_{2} n)\times 2^{n/2})$
steps for the whole procedure in total
at most.
Finally,
we consider the case where the coefficients $\{a_{k}\}$
are not all nonnegative real numbers.
Performing the selective phase shift on the basis vectors
with complex or negative real
coefficients in $|\psi_{n}\rangle$ to cancel their phases,
we obtain
a superposition all of whose coefficients are real
and nonnegative.
After this operation,
we can apply our procedure to the state.
In our method,
we don't fully use the symmetry of $|\psi_{n}\rangle$.
The essential properties we use are as follows.
First,
the number of basis vectors
that have the same coefficient is always even.
Second,
we can efficiently shift
the phase of half the basis vectors
that have the same coefficient.
Third,
the number of different coefficients $\{a_{k}\}$
is $poly(n)$.
Therefore,
we can apply our method to build more general entangled states
that have the above properties.
\begin{figure}
\caption{A function $f$
defined in Eq.~(\ref{def-general-function-f}).}
\label{mapgenf2-epsf}
\end{figure}
Here, we discuss applying our method for building
more general entangled states than $|\psi_{n}\rangle$.
We consider an entangled state
defined by a function $f$,
as follows,
\begin{equation}
f:X=\{0,1\}^{n}\rightarrow Y=\{0,1\}^{m},
\label{GeneralFunction1}
\end{equation}
where $m=\lceil \log_{2}(M+1)\rceil +1$
and $M$ is polynomial in $n$.
We assume that
the elements of the image of $f$ on $X=\{0,1\}^{n}$
can be labeled by
$\{(0,\pm),(1,\pm),\cdots,(M,\pm)\}$.
We also assume that
the number of elements of $X$ mapped to $(k,+)$
and
the number mapped to $(k,-)$
are both equal to $l_{k}$ for $k=0,\cdots,M$,
where
$2\sum_{k=0}^{M}l_{k}=2^{n}$.
We can describe the function $f$ by
\begin{equation}
f(x(k,\epsilon,\zeta))=(k,\epsilon),
\label{def-general-function-f}
\end{equation}
where $k=0,1,\cdots,M$,
and
$\epsilon=\pm$,
and $\zeta=1,\cdots,l_{k}$,
as shown in
Figure~\ref{mapgenf2-epsf}.
Then we consider the following
$n$-qubit partially entangled state,
\begin{equation}
|\Psi_{n}\rangle=
\sum_{k=0}^{M}
\sum_{\epsilon=\pm}
\sum_{\zeta=1}^{l_{k}}
c_{k}|x(k,\epsilon,\zeta)\rangle
\quad\quad
\mbox{for $n\geq 2$},
\label{GeneralPartiallyEntangledStates}
\end{equation}
where $\{c_{k}\}$ are complex.
The number of sets of basis vectors classified by $\{c_{k}\}$
is $(M+1)$,
and the number of basis vectors
that have the coefficient $c_{k}$
is $2l_{k}$.
To execute the selective phase shift efficiently,
we apply $U_{f}$ of Eq.~(\ref{def-general-function-f})
to write $(k,\epsilon)$ on the $m$-qubit
second register
and apply the phase shift transformation
to it.
In the $(\tilde{R}DR)$ operation,
we can shift the phase by $\theta$
or $(-\theta)$ according to $\epsilon$.
To transform $|\Psi_{n}\rangle$ to the uniform superposition,
we have to do the $(\tilde{R}DR)$ operation $M$ times.
Consequently, the $(R_{\pi}D)$ transformation is repeated
$M\times O(n2^{n/2})$
times at most.
It is desirable that $M$ is $poly(n)$.
\section{Discussion}
\label{DISCUSSION}
It is known that any unitary transformation
$U$ $(\in \mbox{\boldmath $U$}(2^{n}))$
can be constructed from $O(n^{3}2^{2n})$ elementary gates
at most\cite{Barenco}.
In comparison with this most general case,
our method is efficient,
although the number of gates increases exponentially in $n$.
C.~H.~Bennett et al.
discuss transmitting
classical information via noisy quantum
channels\cite{Bennett-Fuchs}.
It is shown that when two transmissions of
the two-Pauli channel are used,
the optimal states for transmitting classical information are
partially entangled states of two qubits.
Therefore, we can expect our method to be useful for quantum communication.
Grover's algorithm was proposed
as a solution of the SAT (satisfiability) problem.
It finds a certain combination
among all $2^{n}$ possible combinations of
$n$ binary variables.
From a different point of view,
what Grover's method does is
enhance the amplitude of a certain basis vector,
specified by an oracle,
in a superposition of $2^{n}$ basis vectors.
In our method,
we use Grover's method to
adjust the amplitudes of basis vectors.
We can't show whether our procedure is optimal
in terms of the number of elementary gates.
Because we don't fully use
the symmetry of $|\psi_{n}\rangle$,
our method seems to waste steps.
Recently,
approximately constructing an optimal state
for Ramsey
spectroscopy
by spin squeezing
has been proposed\cite{Ulam-Orgikh}.
This state also has a symmetry like that of
Eq.~(\ref{PsiNForm}),
and
it is characterized by one parameter.
\noindent
{\bf \large Acknowledgements}
We would like to thank Dr.~M.~Okuda
and
Prof.~A.~Hosoya for critical reading
and valuable comments.
\appendix
\section{Networks of quantum gates}
We construct networks of quantum gates
for our method concretely.
For notations of networks
and quantum gates,
we refer to A.~Barenco et al.\cite{Barenco}.
\subsection{The network of $U_{f}$}
$U_{f}$
(defined in
Eqs.~(\ref{functionF})
and (\ref{definition-fx-number1}), or
Eq.~(\ref{def-general-function-f}))
is given by a controlled gate,
which applies a unitary transformation
to the second register
conditioned on the value of the first register.
If we can construct the controlled gate of $U_{f}$
with $poly(n)$ elementary quantum gates,
we can use our method efficiently.
We consider a network
for $U_{f}$ defined in
Eq.~(\ref{functionF}) and
Eq.~(\ref{definition-fx-number1}).
$f(x)$ represents the number of ``$1$'' bits
in the binary string $x$.
Writing the first (main) and second (auxiliary) registers as
$|X_{n},X_{n-1},\cdots,X_{2},X_{1}\rangle\otimes|S\rangle$,
where $|S\rangle$ is
made up of $m=\lceil \log_{2}(n+1)\rceil$
qubits and
initialized to $|0\cdots 0\rangle$,
we can write the quantum network
as the following program.
(For the notation of the program,
we refer to Cleve et al.\cite{Cleve-DiVincenzo}.)
\begin{tabbing}
{\bf Program} adder-1 \\
\quad \={\bf quantum registers}: \\
\>$X_{1},X_{2},\cdots,X_{n}$: qubit registers \\
\>$S$: an m-qubit register \\
{\bf for} $k=1$ {\bf to} $n$ {\bf do} \\
\quad\quad\=
$S\leftarrow (S+X_{k}) \quad\mbox{mod}\;2^{m}$. \\
\end{tabbing}
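Since the adder-1 acts classically on computational basis states (no interference is involved), its effect can be sketched in ordinary code. The following is an illustrative pure-Python sketch of that classical action, not a quantum simulation; the function name is ours:

```python
def adder1(x_bits, m):
    """Classical action of the adder-1 on a basis state: accumulate
    the bits X_1, ..., X_n of the first register into the m-qubit
    second register S, modulo 2^m."""
    s = 0
    for xk in x_bits:            # for k = 1 to n
        s = (s + xk) % (2 ** m)  # S <- (S + X_k) mod 2^m
    return s
```

With $m=\lceil\log_{2}(n+1)\rceil$ the register never overflows, so the final value of $S$ is exactly the number of ``$1$'' bits in $x$.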
\begin{figure}
\caption{The network of the adder-2
for $m=\lceil \log_{2}(n+1)\rceil=4$.}
\label{adder2d-epsf}
\end{figure}
To write a program for the addition
of $X_{k}$ in the adder-1,
we describe the qubits of the second register
by $|S_{m-1},\cdots,S_{1},S_{0}\rangle$,
introduce other auxiliary
qubits $|C_{m-1},\cdots,C_{1}\rangle$,
and
use $C_{j}$ as a carry bit of
addition at the $(j-1)$th bit.
We can write the program
as follows.
\begin{tabbing}
{\bf Program} adder-2 \\
\quad \={\bf quantum registers}: \\
\>$X_{k}$: a qubit register \\
\>$S_{0},S_{1},\cdots,S_{m-1}$: qubit registers \\
\>$C_{1},C_{2},\cdots,C_{m-1}$: auxiliary qubit registers
(initialized and finalized to 0) \\
$C_{1}\leftarrow C_{1}\oplus(S_{0}\land X_{k})$ \\
{\bf for} $j=2$ {\bf to} $m-1$ {\bf do} \\
\>\quad \=$C_{j}\leftarrow C_{j}\oplus(C_{j-1}\land S_{j-1})$ \\
{\bf for} $j=m-1$ {\bf down} {\bf to} $2$ {\bf do} \\
\>\> $S_{j}\leftarrow S_{j}\oplus C_{j}$ \\
\>\> $C_{j}\leftarrow C_{j}\oplus(C_{j-1}\land S_{j-1})$ \\
$S_{1}\leftarrow S_{1}\oplus C_{1}$ \\
$C_{1}\leftarrow C_{1}\oplus(S_{0}\land X_{k})$ \\
$S_{0}\leftarrow S_{0}\oplus X_{k}$. \\
\end{tabbing}
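The bit operations of the adder-2 can be replayed classically to check that they add $X_{k}$ into $S$ and restore every carry qubit to $0$. The sketch below mirrors the program line by line (an illustrative check, assuming $m\ge 2$; names are ours):

```python
def adder2(S, x):
    """Bitwise action of adder-2 on a basis state.
    S: list of m bits [S_0, ..., S_{m-1}] (m >= 2); x: the bit X_k.
    Returns (S', C): S' encodes (S + x) mod 2^m, and the carry
    bits C must all be restored to 0."""
    m = len(S)
    S = list(S)
    C = [0] * m                      # C[1..m-1] are the carry qubits
    C[1] ^= S[0] & x
    for j in range(2, m):            # forward carry propagation
        C[j] ^= C[j - 1] & S[j - 1]
    for j in range(m - 1, 1, -1):    # add carries, then uncompute them
        S[j] ^= C[j]
        C[j] ^= C[j - 1] & S[j - 1]
    S[1] ^= C[1]
    C[1] ^= S[0] & x
    S[0] ^= x
    return S, C
```

An exhaustive check over small registers confirms both the sum and the re-initialization of the carries.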
Because we do not use interference here,
we can describe these operations in a higher-level
language of classical computation.
In this program,
to avoid obtaining unnecessary entanglement,
we initialize and finalize all
auxiliary qubits $\{|C_{j}\rangle\}$
to $|0\rangle$.
Figure~\ref{adder2d-epsf} shows a network of
this program for $m=4$.
Repeating the quantum network
of the adder-2
for each $X_{k}$ $(k=1,\cdots,n)$,
we can construct the adder-1.
We now estimate
the number of
elementary quantum gates
needed to construct the
adder-1.
In Figure~\ref{adder2d-epsf}, the adder-2 uses
$2(\lceil \log_{2}(n+1) \rceil -1)$ Toffoli gates
(which map
$|x,y,z\rangle\rightarrow|x,y,z\oplus(x\wedge y)\rangle$)
and
$\lceil \log_{2}(n+1) \rceil$ controlled-NOT gates.
Because the adder-2 is repeated $n$ times,
the total number of steps for the adder-1
is $n(3\lceil \log_{2}(n+1) \rceil -2)$.
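As a sanity check on this count, one can tally the gates directly from the structure of the adder-2 program (Toffoli gates for the AND-updates, controlled-NOTs for the XOR-updates) and compare with the closed form; the helper names are ours:

```python
from math import ceil, log2

def adder2_gate_count(m):
    """Tally elementary gates in one adder-2 (m >= 2)."""
    toffoli = 1 + (m - 2)        # compute C_1, then C_2 .. C_{m-1}
    cnot = 0
    for _ in range(m - 1, 1, -1):
        cnot += 1                # S_j <- S_j xor C_j
        toffoli += 1             # uncompute C_j
    cnot += 1                    # S_1 <- S_1 xor C_1
    toffoli += 1                 # uncompute C_1
    cnot += 1                    # S_0 <- S_0 xor X_k
    return toffoli + cnot        # = 2(m-1) + m = 3m - 2

def adder1_steps(n):
    """n repetitions of the adder-2 with m = ceil(log2(n+1))."""
    return n * adder2_gate_count(ceil(log2(n + 1)))
```

For every $n\ge 2$ this direct tally agrees with $n(3\lceil\log_{2}(n+1)\rceil-2)$.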
\subsection{Construction of $\bigwedge_{n}(R_{z}(\alpha))$}
From now on,
we often use a
$\bigwedge_{n}(R_{z}(\alpha))$
gate,
where
$R_{z}(\alpha)$
is given in the form,
\begin{equation}
R_{z}(\alpha)
=\exp(i\alpha\sigma_{z}/2)
=
\left[
\begin{array}{cc}
\exp(i\alpha/2) & 0 \\
0 & \exp(-i\alpha/2)
\end{array}
\right].
\end{equation}
(We denote the $\mbox{controlled}^{m}\mbox{-}U$ gate
by $\bigwedge_{m}(U)$,
where $U\in \mbox{\boldmath $U$}(2)$ is arbitrary.
$\bigwedge_{m}(U)$
has an $m$-qubit control subsystem and
a one-qubit target subsystem,
and works as follows:
if all $m$ qubits of the control subsystem are equal to $|1\rangle$,
$\bigwedge_{m}(U)$ applies $U$ to the target qubit;
otherwise
it does nothing.
With this notation, the Toffoli gate is $\bigwedge_{2}(\sigma_{x})$,
the controlled-NOT gate is $\bigwedge_{1}(\sigma_{x})$,
and any one-qubit $\mbox{\boldmath $U$}(2)$ gate
is a $\bigwedge_{0}$ gate.)
Here,
we consider how to construct
this gate
from elementary gates.
\begin{figure}
\caption{Decomposition of a $\bigwedge_{n}(R_{z}(\alpha))$ gate.}
\label{lnrza-epsf}
\end{figure}
\begin{figure}
\caption{Decomposition of a $\bigwedge_{1}(R_{z}(\beta))$ gate.}
\label{l1rzb-epsf}
\end{figure}
First, using the relations
\[
R_{z}(\alpha/2)\sigma_{x}R_{z}(-\alpha/2)\sigma_{x}
=R_{z}(\alpha),
\quad\quad
\mbox{and}
\quad\quad
R_{z}(\alpha/2)R_{z}(-\alpha/2)
=\mbox{\boldmath $I$},
\]
we can decompose a $\bigwedge_{n}(R_{z}(\alpha))$ gate
into a $\bigwedge_{1}(R_{z}(\alpha/2))$ gate,
a $\bigwedge_{1}(R_{z}(-\alpha/2))$ gate,
and two $\bigwedge_{n-1}(\sigma_{x})$ gates,
as shown in Figure~\ref{lnrza-epsf}.
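Both relations above can be verified numerically with plain $2\times 2$ complex matrix arithmetic; an illustrative pure-Python check (helper names are ours):

```python
import cmath

def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def Rz(a):
    """Rz(a) = diag(exp(i a / 2), exp(-i a / 2))."""
    return [[cmath.exp(1j * a / 2), 0], [0, cmath.exp(-1j * a / 2)]]

X = [[0, 1], [1, 0]]          # sigma_x
I = [[1, 0], [0, 1]]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

for a in (0.3, 1.1, 2.5):
    # Rz(a/2) X Rz(-a/2) X = Rz(a)
    assert close(mul(mul(mul(Rz(a / 2), X), Rz(-a / 2)), X), Rz(a))
    # Rz(a/2) Rz(-a/2) = I
    assert close(mul(Rz(a / 2), Rz(-a / 2)), I)
```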
As shown in Figure~\ref{l1rzb-epsf},
we can decompose a $\bigwedge_{1}(R_{z}(\beta))$ gate
into an $R_{z}(\beta/2)$ gate, an $R_{z}(-\beta/2)$ gate,
and two controlled-NOT gates.
It remains only to consider
how to make a $\bigwedge_{n-1}(\sigma_{x})$ gate
from elementary gates on an $(n+1)$-qubit network.
In particular, we note that
one qubit of the network is not used
by the $\bigwedge_{n-1}(\sigma_{x})$ gate.
It is known that,
on an $(n+1)$-qubit network
with $n\geq 6$,
a $\bigwedge_{n-1}(\sigma_{x})$ gate
can be decomposed into $8(n-4)$ Toffoli gates\cite{Barenco}.
Consequently, on the $(n+1)$-qubit network ($n\geq 6$),
a $\bigwedge_{n}(R_{z}(\alpha))$ gate can be decomposed
into $16(n-4)$ Toffoli gates,
four $\bigwedge_{1}(\sigma_{x})$ gates, and
four $\bigwedge_{0}$ gates.
Therefore, $\bigwedge_{n}(R_{z}(\alpha))$
takes $8(2n-7)$ elementary quantum gates in total.
\subsection{The phase shift on certain basis vectors}
Figure~\ref{rot2nd2-epsf} shows a quantum network
for the selective phase shift by $\theta$
on a certain basis vector of the second register
defined in Eq.~(\ref{functionF}).
In Figure~\ref{rot2nd2-epsf}, we use a
$\bigwedge_{m}(R_{z}(2\theta))$
gate.
With the auxiliary qubit set to $|0\rangle$,
$\bigwedge_{m}(R_{z}(2\theta))$ produces a phase factor
$\exp(i\theta)$
if and only if the second register is in the state
$|1\cdots 1\rangle$.
This technique is called ``kick back''\cite{Cleve-Ekert}.
\begin{figure}
\caption{The network of the phase shift on the second register.}
\label{rot2nd2-epsf}
\end{figure}
In Figure~\ref{rot2nd2-epsf},
a shaded box stands for either the NOT gate $\sigma_{x}$
($|0\rangle\rightarrow|1\rangle$,
$|1\rangle\rightarrow|0\rangle$)
or the identity transformation.
By choosing which gate, $\sigma_{x}$ or {\boldmath $I$},
is placed in each shaded box,
we can select the basis vector whose phase is shifted.
For $m\geq 6$,
we have already shown that
a $\bigwedge_{m}(R_{z}(2\theta))$ gate can be constructed
from at most $8(2m-7)$ elementary quantum gates.
From Figure~\ref{rot2nd2-epsf},
we find that the selective phase shift
on the second register takes at most
$2m+8(2m-7)=2(9m-28)=O(m)$ gates.
When building $|\psi_{n}\rangle$,
we can therefore carry out the phase shift
on certain basis vectors
of the second register in $O(\log_{2}n)$ steps.
\subsection{The network of $D$}
Figure~\ref{netDd-epsf} shows a network of $D$.
Since this network consists of
$4n$ $\bigwedge_{0}$ gates
and one $\bigwedge_{n}(R_{z}(2\pi))$ gate,
it takes $4(5n-14)$ elementary gates for $n \geq 6$.
Therefore, $D$ takes $O(n)$ steps.
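The three gate counts derived above ($8(2n-7)$ for $\bigwedge_{n}(R_{z}(\alpha))$, $2(9m-28)$ for the selective phase shift, and $4(5n-14)$ for $D$) all follow from the same decomposition into $16(n-4)$ Toffoli gates plus four controlled-NOTs and four one-qubit gates. A quick arithmetic consistency check (function names are ours):

```python
def cn_rz_gates(n):
    """Controlled^n Rz on an (n+1)-qubit network, n >= 6:
    16(n-4) Toffolis + 4 controlled-NOTs + 4 one-qubit gates."""
    return 16 * (n - 4) + 4 + 4

def phase_shift_gates(m):
    """Selective phase shift on the m-qubit second register."""
    return 2 * m + cn_rz_gates(m)

def d_gates(n):
    """The operator D: 4n one-qubit gates plus one controlled^n Rz(2*pi)."""
    return 4 * n + cn_rz_gates(n)

for k in range(6, 100):
    assert cn_rz_gates(k) == 8 * (2 * k - 7)
    assert phase_shift_gates(k) == 2 * (9 * k - 28)
    assert d_gates(k) == 4 * (5 * k - 14)
```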
\begin{figure}
\caption{The network of $D$.}
\label{netDd-epsf}
\end{figure}
\subsection{Estimation of steps}
How many elementary gates
do we need to construct
$|\psi_{n}\rangle$
defined in Eq.~(\ref{PsiNForm}) or
$|\Psi_{n}\rangle$
defined in Eq.~(\ref{GeneralPartiallyEntangledStates})
from the uniform superposition?
If $M$ (the number of sets of basis vectors
classified by their coefficients) is $poly(n)$,
and if $U_{f}$
defined in Eq.~(\ref{functionF}) can be constructed
from $poly(n)$ elementary gates,
then the $(R_{\pi}D)$ iterations
dominate the total number of steps.
In the $(R_{\pi}D)$ transformation,
we perform the following operations.
We apply $D$ to the $n$-qubit first register,
prepare the initialized $m$-qubit second register,
and apply $U_{f}$ to both registers
as in Eq.~(\ref{functionF}).
Then,
we shift the phases of certain basis vectors
of the second register.
Finally,
we apply $U_{f}$ again to reinitialize the second register.
We have already shown that $D$ takes $4(5n-14)$ steps
for $n \geq 6$.
The number of steps that a network of $U_{f}$ takes
depends on the function $f$.
For instance,
when we build $|\psi_{n}\rangle$,
$U_{f}$ needs $O(n\log_{2} n)$ steps.
\begin{figure}
\caption{The network of the phase shift
by $\pi$
on the second register for $R_{\pi}$.}
\label{netRpi2-epsf}
\end{figure}
Figure~\ref{netRpi2-epsf} shows a network
of the phase shift by $\pi$
on the second register
with negative coefficients for $R_{\pi}$,
in the case of building $|\psi_{n}\rangle$.
To invert the signs of the negative coefficients,
we shift the phases of at most
$\lfloor n/2 \rfloor$
sets of basis vectors
characterized by their coefficients.
Therefore,
we shift the phases of at most
$2\lfloor n/2 \rfloor$
basis vectors of the second register.
As a result,
the network can be constructed from
$(2\lfloor n/2 \rfloor +1)
\cdot
\lceil \log_{2}(n+1) \rceil$
$\bigwedge_{0}$ gates and
$2\lfloor n/2 \rfloor$
$\bigwedge_{\lceil \log_{2}(n+1) \rceil}(R_{z}(2\pi))$ gates.
We can carry out $R_{\pi}$ with $O(n\log_{2}n)$ steps.
Similarly,
in the case of $|\Psi_{n}\rangle$,
we can estimate $R_{\pi}$
takes $O(M\log_{2}M)$ steps.
Building $|\psi_{n}\rangle$,
we repeat the $(R_{\pi}D)$ transformation
at most $O(n2^{n/2})$ times
before the $(\tilde{R}DR)$ operation.
If we perform the $(R_{\pi}D)$ iteration before every $(\tilde{R}DR)$,
we carry it out $\lfloor n/2 \rfloor$ times.
Consequently,
the whole procedure needs at most
$O((n^{3}\log_{2} n)\times 2^{n/2})$
steps in total.
\section{A variation of coefficients
during the $(R_{\pi}D)$ iteration}
We explicitly derive
the variation of the coefficients
during the $(R_{\pi}D)$ iteration
for the case
described in (\ref{SimpleModelForRpiDIteration}),
and estimate how many times we need
to apply $(R_{\pi}D)$
to make $[S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})]$
nonnegative.
We find that it takes $O(2^{n/2})$ applications,
compared with the bound of
$O(n2^{n/2})$ applications
obtained in \S\ref{CaseWhSufCondNotSat}.
Applying $(R_{\pi}D)$ on $|\Psi\rangle$ defined by
(\ref{SimpleModelForRpiDIteration}),
we obtain
$R_{\pi}D|\Psi\rangle=[B_{0}^{(1)},\cdots,B_{1}^{(1)},\cdots]$,
where
\begin{equation}
\left \{
\begin{array}{lll}
2^{n-1}B_{0}^{(1)}&=S-2^{n-1}a_{0}
&=(2^{n-1}-t)a_{0}+ta_{1}, \\
2^{n-1}B_{1}^{(1)}&=-S+2^{n-1}a_{1}
&=-(2^{n}-t)a_{0}+(2^{n-1}-t)a_{1}.
\end{array}
\right.
\label{SimpleModelRpiDCoefficient}
\end{equation}
Referring to \cite{Boyer-Brassard},
we write $t$ as
\begin{equation}
\sin^{2}\theta=\frac{t}{2^{n}},
\quad
(\cos^{2}\theta=\frac{2^{n}-t}{2^{n}}),
\label{SimpleModelRpiDParat}
\end{equation}
where $0<\theta<(\pi/2)$,
and write $\{a_{0},a_{1}\}$ as
\begin{equation}
a_{0}=\frac{\sin\alpha}{\sqrt{2^{n}-t}},
\quad
a_{1}=\frac{\cos\alpha}{\sqrt{t}},
\label{SimpleModelForRpiDIteration2}
\end{equation}
where $0\leq\alpha<(\pi/2)$.
Using
(\ref{SimpleModelRpiDCoefficient}),
(\ref{SimpleModelRpiDParat}),
and (\ref{SimpleModelForRpiDIteration2}),
we can describe $\{B_{0}^{(1)},B_{1}^{(1)}\}$ by
\begin{equation}
\left \{
\begin{array}{lll}
B_{0}^{(1)}
&=(1/\sqrt{2^{n}})
[\cos 2\theta(\sin\alpha/\cos\theta)
+2\sin\theta\cos\alpha]
&=\sin(\alpha+2\theta)/\sqrt{2^{n}-t}, \\
B_{1}^{(1)}
&=(1/\sqrt{2^{n}})
[-2\cos\theta\sin\alpha
+\cos 2\theta(\cos\alpha/\sin\theta)]
&=\cos(\alpha+2\theta)/\sqrt{t}.
\end{array}
\right .
\end{equation}
Writing coefficients of the state
on which $(R_{\pi}D)$ has been applied $k$ times
as $B_{0}^{(k)}$ and $B_{1}^{(k)}$,
we obtain
\begin{equation}
B_{0}^{(k)}=\frac{\sin(\alpha+2k\theta)}{\sqrt{2^{n}-t}},
\quad\quad
B_{1}^{(k)}=\frac{\cos(\alpha+2k\theta)}{\sqrt{t}}
\quad\quad
\mbox{for $k=0,1,2,\cdots,$}
\label{explicit-Bs}
\end{equation}
where $B_{0}^{(0)}=a_{0}$, $B_{1}^{(0)}=a_{1}$.
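The closed form (\ref{explicit-Bs}) can be checked against the one-step recurrence (\ref{SimpleModelRpiDCoefficient}) numerically; the sketch below is illustrative and the parameter values are arbitrary:

```python
from math import sin, cos, asin, sqrt, isclose

n, t, alpha = 12, 2, 0.3
N = 2 ** n
theta = asin(sqrt(t / N))                 # sin^2(theta) = t / 2^n

b0 = sin(alpha) / sqrt(N - t)             # a_0
b1 = cos(alpha) / sqrt(t)                 # a_1

for k in range(1, 40):
    # one application of (R_pi D), Eq. (SimpleModelRpiDCoefficient)
    b0, b1 = (((N // 2 - t) * b0 + t * b1) / (N // 2),
              (-(N - t) * b0 + (N // 2 - t) * b1) / (N // 2))
    # compare with the closed form, Eq. (explicit-Bs)
    assert isclose(b0, sin(alpha + 2 * k * theta) / sqrt(N - t), abs_tol=1e-9)
    assert isclose(b1, cos(alpha + 2 * k * theta) / sqrt(t), abs_tol=1e-9)
```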
Defining $S^{(k)}=(2^{n}-t)B_{0}^{(k)}+tB_{1}^{(k)}$,
we can derive
\begin{eqnarray}
&&S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)}) \nonumber \\
&=&(3\cdot 2^{n-2}-t)B_{0}^{(k)}+(t-2^{n-2})B_{1}^{(k)} \nonumber \\
&=&\sqrt{2^{n}}
\{\sin[\alpha+(2k+1)\theta]
-\frac{1}{2\sin 2\theta}
\cos[\alpha+(2k-1)\theta]\} \nonumber \\
&=&-\frac{\sqrt{2^{n-2}}}{\sin 2\theta}F^{(k)},
\label{explicit-Sdash}
\end{eqnarray}
where
\begin{equation}
F^{(k)}=
\cos[\alpha+(2k+3)\theta].
\end{equation}
Since $0<\theta<(\pi/2)$ and $\sin 2\theta>0$,
whether $[S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})]$
is negative depends on the sign of $F^{(k)}$.
(With some calculations,
we can confirm that
(\ref{explicit-Bs}) and
(\ref{explicit-Sdash})
satisfy Lemma~2.)
Because $0\leq \alpha <(\pi/2)$,
if $(2k+3)\theta=(\pi/2)$,
then $F^{(k)}\leq 0$ and
$S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})\geq 0$ always hold.
Therefore, the number of times we need to apply
$(R_{\pi}D)$ doesn't
exceed $k_{MAX}$,
which is given as
\begin{equation}
k_{MAX}=\frac{1}{2\theta}(\frac{\pi}{2}-3\theta).
\end{equation}
On the other hand,
from (\ref{SimpleModelRpiDParat})
we can write $\theta$ as $\sin\theta=\sqrt{t/2^{n}}$,
and the minimum value of $t$ is $2$.
In the limit where
$t\sim O(1)$ and $n$ is large enough,
we obtain the relation
$\sin\theta\sim\theta\sim\sqrt{t/2^{n}}$
and
\begin{equation}
k_{MAX}\sim\frac{\pi}{4}\sqrt{\frac{2^{n}}{t}}
\sim O(2^{n/2}).
\end{equation}
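The exact $k_{MAX}$ and its asymptotic form can be compared numerically, and one can also confirm that after $\lceil k_{MAX}\rceil$ iterations $F^{(k)}\leq 0$ for any $0\leq\alpha<\pi/2$; an illustrative sketch (names and parameter values are ours):

```python
from math import asin, sqrt, pi, cos, ceil

def k_max(n, t):
    """k_MAX = (pi/2 - 3 theta) / (2 theta), with sin(theta) = sqrt(t/2^n)."""
    theta = asin(sqrt(t / 2 ** n))
    return (pi / 2 - 3 * theta) / (2 * theta)

# asymptotically, k_MAX ~ (pi/4) sqrt(2^n / t) for t = O(1) and large n
for n in range(12, 30):
    exact = k_max(n, 2)
    approx = (pi / 4) * sqrt(2 ** n / 2)
    assert abs(exact / approx - 1) < 0.05

# after ceil(k_MAX) iterations, F^(k) = cos(alpha + (2k+3) theta) <= 0
n, t = 16, 2
theta = asin(sqrt(t / 2 ** n))
k = ceil(k_max(n, t))
for alpha in (0.0, 0.7, 1.5):
    assert cos(alpha + (2 * k + 3) * theta) <= 0
```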
Hence the $(R_{\pi}D)$ transformation is repeated
at most $O(2^{n/2})$ times to make
$[S^{(k)}-2^{n-2}(B_{0}^{(k)}+B_{1}^{(k)})]$
nonnegative.
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{guess}{Conjecture}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{lemma}[theorem]{Lemma}
\makeatletter
\newcommand\figcaption{\def\@captype{figure}\caption}
\newcommand\tabcaption{\def\@captype{table}\caption}
\makeatother
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{question}{Question}
\newenvironment{proof}{\noindent {\bf
Proof.}}{
$\blacksquare$
}
\newcommand{\remark}{\par\noindent {\bf Remark.~~}}
\newcommand{\mad}{{\rm mad}}
\newcommand{\emg}{\deg}
\newcommand{\emlta}{\delta}
\title{The Strongly Antimagic Labelings of Double Spiders}
\author{
Fei-Huang Chang\thanks{Grant number: MOST 104-2115-M-003-008-MY2}\\
Division of Preparatory Programs
for Overseas Chinese Students\\
National Taiwan Normal University\\
New Taipei City, Taiwan\\
{\tt [email protected]}
\and
Pinhui Chin \\
Department of Mathematics\\
Tamkang University\\
New Taipei City, Taiwan\\
{\tt [email protected]}
\and
Wei-Tian Li\thanks{Grant number: MOST-105-2115-M-005-003-MY2}\\
Department of Applied Mathematics\\
National Chung-Hsing University\\
Taichung City, Taiwan\\
{\tt [email protected]}
\and
Zhishi Pan \thanks{Corresponding Author}\\
Department of Mathematics\\
Tamkang University\\
New Taipei City, Taiwan\\
{\tt [email protected]}
}
\maketitle
\begin{abstract}
A graph $G=(V,E)$ is strongly antimagic if there is a bijective mapping $f: E \to \{1,2,\ldots,|E|\}$
such that for any two distinct vertices $u$ and $v$, not only $\sum_{e \in E(u)}f(e) \ne \sum_{e\in E(v)}f(e)$,
but also $\sum_{e \in E(u)}f(e) < \sum_{e\in E(v)}f(e)$ whenever $\emg(u)< \emg(v)$,
where $E(u)$ is the set of edges incident to $u$.
In this paper, we prove that double spiders, the trees containing exactly two vertices of degree at least $3$, are strongly antimagic.
\end{abstract}
\section{Introduction}
Suppose $G=(V,E)$ is a connected, finite, simple graph and $f: E \to \{1,2,\ldots,
|E|\}:=[|E|]$ is a bijection. For each vertex $u\in V$, let $E(u)$ be the set of edges incident to $u$, and define the {\em vertex-sum} of $f$ at $u$ as $\varphi_f(u)=\sum_{e \in E(u)}f(e)$. The degree of $u$, denoted by $\emg(u)$, is the cardinality of $E(u)$, i.e., $\emg(u)=|E(u)|$, and the leaf set is defined by $V_1=\{u\in V \mid \emg(u)=1\}$.
If $\varphi_f(u) \ne \varphi_f(v)$ for any two distinct vertices $u$ and $v$ of $V$, then $f$ is called an {\em antimagic labeling} of $G$.
The problem of finding antimagic labelings of graphs was introduced by Hartsfield and Ringel \cite{HR1990} in $1990$.
They proved that some special families of graphs, such as {\em paths, cycles}, and {\em complete graphs}, are antimagic, and posed two conjectures. The conjectures have received much attention, but both
remain open.
\begin{guess} \cite{HR1990}\label{g1}
Every connected graph with order at least $3$ is antimagic.
\end{guess}
The most significant progress on Conjecture \ref{g1} is a result of Alon, Kaplan, Lev, Roditty, and
Yuster \cite{AKLRY2004}. They proved that a graph $G$ with minimum degree $\emlta(G)\geq c\log |V|$ for some constant $c$, or with maximum degree $\Delta(G)\geq |V|-2$, is antimagic. They also proved that every complete multipartite graph other than $K_2$ is antimagic.
Cranston \cite{Cra2009} proved that for $k \ge 2$, every $k$-regular bipartite graph
is antimagic. For non-bipartite regular graphs, B\'{e}rczi, Bern\'{a}th, Vizer \cite{BBV2015} and
Chang, Liang, Pan, Zhu \cite{CLPZ2016} proved independently that every regular graph is antimagic.
\begin{guess} \cite{HR1990}\label{g2}
Every tree other than $K_2$ is antimagic.
\end{guess}
For Conjecture \ref{g2}, Kaplan, Lev, and Roditty \cite{KLR2009}
and Liang, Wong, and Zhu \cite{LWZ2012} showed that a tree with at most one vertex of degree $2$ is antimagic.
Recently, Shang \cite{S2015} proved that a special family of trees, spiders, is antimagic.
A {\em spider} is a tree formed by taking a set of disjoint paths and identifying one endpoint of each path together.
Huang, in his thesis~\cite{H2015}, also proved that spiders are antimagic.
Moreover, the antimagic labelings $f$ given in~\cite{H2015} have the property that $\emg(u)<\emg(v)$ implies $\varphi_f(u)<\varphi_f(v)$.
Given a graph $G$, if there exists an antimagic labeling $f$ satisfying the above property,
then $f$ is called a {\em strongly antimagic labeling} of $G$.
A graph $G$ is called {\em strongly antimagic} if it has a strongly antimagic labeling.
Finding a strongly antimagic labeling of a graph $G$ enables us to find antimagic labelings
of certain supergraphs of $G$.
We describe such an inductive method in Lemma \ref{lm1},
which is extracted from the ideas in~\cite{H2015}.
For a graph $G$, let $V_k$ be the set of vertices of degree $k$ in $V(G)$.
Assume that $V_1=\{v_1,v_2,\ldots,v_i\}$. Then we define $G\bigoplus V_1'=(V(G)\cup V_1',E(G)\cup E')$,
where $V_1'=\{v_1',v_2',\ldots,v_i'\}$ and $E'=\{v_1v_1',v_2v_2',\ldots,v_iv_i'\}$.
\begin{lemma} \label{lm1}
For any connected graph $G$ with $V_1\neq\varnothing$, if $G$ is strongly antimagic, then $G\bigoplus V_1'$ is strongly antimagic.
\end{lemma}
\begin{proof}
The proof of this lemma is similar to the proof of a corollary in \cite{H2015}. Let $f$ be a strongly antimagic labeling of $G$ and $V_1=\{v_1,v_2,...,v_i\}$ with $\varphi_f(v_1)<\varphi_f(v_2)<...<\varphi_f(v_i)$.
We construct a bijective mapping $f':E(G\bigoplus V_1')\rightarrow [|E(G)|+i]$ as follows.
\[
f'(e)=\left\{
\begin{array}{ll}
j, & \hbox{if $e=v_jv_j'\in E',1\leq j\leq i$;} \\
f(e)+i, & \hbox{if $e\in E$.}
\end{array}
\right.
\]
For any vertices $u\in V(G)-V_1$, $v_j\in V_1$, and $v_j'\in V_1'$, the vertex sums
under $f'$ are $\varphi_{f'}(u)=\varphi_f(u)+i\emg(u)$, $\varphi_{f'}(v_j)=\varphi_f(v_j)+j$,
and $\varphi_{f'}(v_j')=j$.
By straightforward calculations and comparisons,
it is clear that $f'$ is a strongly antimagic labeling of $G\bigoplus V_1'$.
\end{proof}
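The construction in Lemma \ref{lm1} can be sanity-checked on a small example. The sketch below (helper names and the frozenset edge encoding are ours) verifies the property for the star $K_{1,3}$ and for one and two applications of the $G\bigoplus V_1'$ extension:

```python
from itertools import combinations

def is_strongly_antimagic(f):
    """f maps each edge (a frozenset of two vertices) to its label.
    Check distinct vertex sums, and phi(u) < phi(v) whenever
    deg(u) < deg(v)."""
    verts = {v for e in f for v in e}
    deg = {v: sum(v in e for e in f) for v in verts}
    phi = {v: sum(f[e] for e in f if v in e) for v in verts}
    return all(phi[u] != phi[v] and
               (deg[u] >= deg[v] or phi[u] < phi[v]) and
               (deg[v] >= deg[u] or phi[v] < phi[u])
               for u, v in combinations(verts, 2))

def extend(f):
    """The G (+) V_1' construction of Lemma 1, with the labeling f'."""
    verts = {v for e in f for v in e}
    deg = {v: sum(v in e for e in f) for v in verts}
    phi = {v: sum(f[e] for e in f if v in e) for v in verts}
    leaves = sorted((v for v in verts if deg[v] == 1), key=lambda v: phi[v])
    i = len(leaves)
    new_f = {e: lbl + i for e, lbl in f.items()}   # old labels shifted by i
    for j, v in enumerate(leaves, start=1):
        new_f[frozenset({v, (v, "prime")})] = j    # new pendant edge v v_j'
    return new_f

# the star K_{1,3} with edge labels 1, 2, 3 is strongly antimagic
star = {frozenset({"c", k}): k for k in (1, 2, 3)}
assert is_strongly_antimagic(star)
assert is_strongly_antimagic(extend(star))
assert is_strongly_antimagic(extend(extend(star)))
```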
A {\em double spider} is a tree which contains exactly two vertices of degree greater than $2$.
It can also be formed by first taking two sets of disjoint paths and one extra path,
and then identifying an endpoint of each path in the two sets with the two endpoints of the extra path, respectively.
In this paper, we manage to solve Conjecture~\ref{g2} for double spiders.
We have a stronger result:
\begin{theorem} \label{main thm}
Double spiders are strongly antimagic.
\end{theorem}
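For intuition, the smallest double spider (a core path of length one with two pendant edges at each end) can be checked exhaustively; the brute-force search below, with our own ad-hoc encoding, confirms that a strongly antimagic labeling exists:

```python
from itertools import permutations

# smallest double spider: core edge vl-vr plus two pendant edges at each end
edges = [("vl", "vr"), ("vl", "a1"), ("vl", "a2"), ("vr", "b1"), ("vr", "b2")]
verts = sorted({v for e in edges for v in e})
deg = {v: sum(v in e for e in edges) for v in verts}

def strongly_antimagic(labels):
    """labels[i] is the label of edges[i]; check the defining property."""
    phi = {v: sum(labels[i] for i, e in enumerate(edges) if v in e)
           for v in verts}
    if len(set(phi.values())) != len(verts):
        return False                      # vertex sums must be distinct
    return all(phi[u] < phi[v]            # and monotone in the degree
               for u in verts for v in verts if deg[u] < deg[v])

witness = next(p for p in permutations(range(1, 6)) if strongly_antimagic(p))
assert strongly_antimagic(witness)
```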
The rest of the paper is organized as follows.
In Section 2, we give some reduction methods
and classify the double spiders into four types.
For the four types of the double spiders,
we will prove that they are strongly antimagic by giving the labeling rules
in four different lemmas.
Hence, to prove our main theorem, it suffices to prove the lemmas.
The proofs of the lemmas are presented in Section 3.
However, we will only give the labeling rules and show the
strongly antimagic properties for degree one and degree two vertices.
For the comparisons between other vertices, we put all the details in Appendix.
Some concluding remarks and problems will be proposed in Section 4.
\section{Main Results}
Given a double spider, we decompose its edge set into three subsets:
the core path $P^{core}$, $L$, and $R$, where
$P^{core}$ is the unique path connecting the two vertices of degree at least three,
$L$ consists of paths with one endpoint of each path identified to an endpoint of $P^{core}$,
and $R$ consists of paths with one endpoint of each path identified to the other endpoint of $P^{core}$.
We denote the endpoints of $P^{core}$ by $v_l$ and $v_r$, respectively.
By convention, we assume $L$ contains at least as many paths as $R$; hence $\emg(v_l)\ge \emg(v_r)$.
See Figure~\ref{fig1} for an illustration.
Note that two double spiders are isomorphic if their $L$ sets, $R$ sets, and core paths are isomorphic.
From now on, we denote a double spider by $DS(L,P^{core},R)$.
The complexity of finding an antimagic labeling of a double spider depends on the number of the paths and their lengths composing the double spider.
So let us begin with reducing the number of paths of length one in $L$ and $R$.
\begin{figure}
\caption{A double spider $DS(L,P^{core},R)$.}
\label{fig1}
\end{figure}
\begin{lemma} \label{Right minus 1}\label{casebasic}
Suppose $G=DS(L,P^{core},R)$ contains some path $P$ of length one in $L\cup R$. Assume at least one of the following conditions holds.
\begin{description}
\item{(1)} $P\in R$, $\emg_G(v_l)\geq \emg_G(v_r)>3$, and $DS(L,P^{core},R\setminus \{P\})$ is strongly antimagic,
\item{(2)} $P\in L$, $\emg_G(v_l)>\emg_G(v_r)\geq 3$, and $DS(L\setminus \{P\},P^{core},R)$ has a strongly antimagic labeling $f$ with $\varphi_f(v_l)>\varphi_f(v_r)$.
\end{description}
Then $G$ is strongly antimagic.
\end{lemma}
\begin{proof} We only prove (1), since the proof of (2) is analogous.
Let $f$ be a strongly antimagic labeling on $G'=DS(L,P^{core},R\setminus \{P\})$, and $P=v_rv$. We create a bijective mapping $f^*$ from $E(G)$ to $[|E(G)|]$ on $G$ by
\[f^*(e)=\left\{
\begin{array}{ll}
1, & \hbox{if $e=v_rv$;} \\
f(e)+1, & \hbox{if $e\in E(G')$}.
\end{array}
\right.\]
Since $\emg_{G'}(v_r)<\emg_{G'}(v_\ell)$, we have $\varphi_f(v_r)<\varphi_f(v_l)$, and \begin{eqnarray*}
\varphi_{f^*}(v_r) &=& \varphi_f(v_r)+(\emg_G(v_r)-1)+f^*(v_rv)\\
&=& \varphi_f(v_r)+\emg_G(v_r) \\
&<& \varphi_{f}(v_l)+\emg_G(v_l)=\varphi_{f^*}(v_l).
\end{eqnarray*}
It is clear that $f^*$ is a strongly antimagic labeling of $DS(L,P^{core},R).$
\end{proof}
An {\em odd path} (resp.\ {\em even path}) is a path of odd (resp.\ even) length.
Now suppose $R$ consists of $a$ odd paths and $b$ even paths,
and $L$ consists of $c$ odd paths with length greater than one, $d$ even paths,
and $t$ odd paths of length one.
By means of the following lemmas, Theorem~\ref{main thm} will be proved.
\begin{lemma} \label{deqL equals 3}
If $\emg(v_l)=\emg(v_r)=3$ then $DS(L,P^{core},R)$ is strongly antimagic.
\end{lemma}
\begin{lemma} \label{R has two P1}
If $\emg(v_l)>\emg(v_r)\geq 3$, $b=0$, and $R$ has no odd path of length at least $3$, then $DS(L,P^{core},R)$ is strongly antimagic.
\end{lemma}
\begin{lemma} \label{R has odd path}
If $\emg(v_l)>\emg(v_r)\geq 3$, $b=0$, and $R$ has at least one odd path of length at least $3$, then $DS(L,P^{core},R)$ is strongly antimagic.
\end{lemma}
\begin{lemma} \label{R has even path}
If $\emg(v_l)>\emg(v_r)\geq 3$ and $b\geqslant 1$, then $DS(L,P^{core},R)$ is strongly antimagic.
\end{lemma}
\noindent{\bf Proof of Theorem~\ref{main thm}.}
By Lemmas \ref{deqL equals 3}, \ref{R has two P1}, \ref{R has odd path}, and \ref{R has even path},
the remaining case we need to consider is $\emg(v_l)=\emg(v_r)\geq 4$.
For such a double spider $DS(L,P^{core},R)$,
let $h$ be the minimum length of a path in $L\cup R$.
Without loss of generality, we assume there is a $P_h$ in $R$.
Consider the double spider $DS(L',P^{core},R')$ that is obtained by
recursively deleting the leaf sets of $DS(L,P^{core},R)$ and of the resulting graphs $h-1$ times.
According to Lemma \ref{lm1}, we only need to show that $DS(L',P^{core},R')$
is strongly antimagic.
It is clear that $R'$ contains a path $P$ of length one.
By Lemma \ref{Right minus 1}, it is sufficient to show that $G^{*}=DS(L',P^{core},R'\setminus \{P\})$ is strongly antimagic. Now $\emg_{G^{*}}(v_l)>\emg_{G^{*}}(v_r)$; by Lemmas \ref{R has two P1}, \ref{R has odd path}, and \ref{R has even path}, $G^{*}$ is strongly antimagic.
$\blacksquare$
\section{Proofs of the Remaining Lemmas}
In this section, we prove the lemmas stated in the previous section.
In each proof we give the rules for labeling the double spiders.
However, part of the work of checking the strongly antimagic property is moved to the Appendix
because of the tedious and complicated calculations.
To this end, we need to give informative names to all edges and vertices.
Here we use $P_h$ to denote a path of length $h$, i.e., $P_h=u_0u_1u_2\cdots u_h$,
which is not the common convention but helps to simplify the notation in our proofs.
In addition, for paths of the same length in $L$ (or $R$), we can interchange the labelings on the edges of one path with those of another. Thus, only the length of a path matters,
and we use the same notation to represent paths of the same length in $L$ or $R$.
Now, let $DS(L,P^{core},R)$ be a double spider with path
parameters $a$, $b$, $c$, $d$, and $t$ defined in Section 2,
and let $s$ be the length of $P^{core}$. Then we name the vertices and edges on the paths as follows:
$P^{core} = v_1v_2\cdots v_{s+1}$ with $v_1=v_\ell,v_{s+1}=v_r$ and $e_i=v_iv_{i+1}$.
$R= \{P_{2x_1+1},P_{2x_2+1},...,P_{2x_a+1},P_{2y_1},P_{2y_2},...,P_{2y_b}\}$ with $y_1\leq \cdots\leq y_b.$
\begin{itemize}
\item $P_{2x_i+1}=v_rv^{r, odd}_{i,1}v^{r, odd}_{i,2}\cdots v^{r, odd}_{i,2x_i+1}$
with $e^{r, odd}_{i,j}=v^{r, odd}_{i,j-1}v^{r, odd}_{i,j}$ and $e^{r,odd}_{i,1}=v_rv^{r, odd}_{i,1}$.
\item $P_{2y_i} =v_rv^{r,even}_{i,1}v^{r,even}_{i,2}\cdots v^{r,even}_{2y_i}$
with $e^{r, even}_{i,j}=v^{r,even}_{i,j-1}v^{r,even}_{i,j}$ and $e^{r,even}_{i,1}=v_rv^{r, even}_{i,1}$.
\end{itemize}
$L = \{P_{2w_1+1},...,P_{2w_c+1},P_{2z_1},...,P_{2z_d},P^1_1,P^2_1,...,P^t_1\}$ with $w_i\geq 1 $.
\begin{itemize}
\item $P_{2w_i+1}=v_l v^{l, odd}_{i,2w_i+1} v^{l, odd}_{i,2w_i}\cdots v^{l, odd}_{i,1}$
with $e^{l, odd}_{i,j-1}=v^{l, odd}_{i,j} v^{l, odd}_{i,j-1}$ and $e^{l, odd}_{i,2w_i+1}=v_lv^{l,odd}_{i,2w_i+1}$.
\item $P_{2z_i} =v_lv^{l,even}_{i,2z_i}v^{l,even}_{i,2z_i-1}\cdots v^{l,even}_{i,1}$
with $e^{l, even}_{i,j-1}=v^{l,even}_{i,j}v^{l,even}_{i,j-1}$ and $e^{l, even}_{i,2z_i}=v_lv^{l,even}_{i,2z_i}$.
\item $P^i_1 = v_lv^i_1$ with $e^i=v_lv^i_1$ for $1\leq i\leq t$.
\end{itemize}
A vertex (resp. edge) denoted as $v_{i,j}^{r,odd}$ (resp. $e_{i,j}^{l,even}$) means that it is the $j$th vertex (resp. edge) of the $i$th odd (resp. even) path in $R$ (resp. $L$).
Observe that the index $j$ of an edge of a path in $R$ is increasing from $v_r$ to the leaf of the path, but the index of that in $L$ is reverse.
An edge of a path is called an {\em odd} (or {\em even}) {\em edge} if the index $j$ of the edge is odd (or even).
Define the following quantities for the total number of odd (even) edges in some odd (even) paths. (The summation is zero if $i=0$. )
\[
\begin{array}{lll}
A^{odd}_i=\sum_{k=1}^{i}(x_k+1),& A^{even}_i=\sum_{k=1}^{i}x_k,& B^{odd}_i=B^{even}_i=\sum_{k=1}^{i}y_k,\\
C^{odd}_i=\sum_{k=1}^{i}(w_k+1),& C^{even}_i=\sum_{k=1}^{i}w_k,& D^{odd}_i=D^{even}_i=\sum_{k=1}^{i}z_k.
\end{array}
\]
Let $A^{all}=A^{odd}_a+A^{even}_a,\ldots,D^{all}=D^{odd}_d+D^{even}_d$.
Then the total number of edges is $m=A^{all}+B^{all}+s+C^{all}+D^{all}+t$.
We use $[n]_{odd}$ and $[n_1,n_2]_{odd}$ to denote the set of all odd integers in $[n]$ and the set of all odd integers in $\{n_1,n_1+1,\ldots, n_2\}$, respectively.
The definitions of $[n]_{even}$ and $[n_1,n_2]_{even}$ are similar.
Let us begin with Lemma \ref{R has odd path}, which is the simplest one.
\noindent{\bf Proof of Lemma \ref{R has odd path}.}
We construct a bijective mapping $f$ by assigning $1,2,\ldots,m$ to the edges accordingly in the following steps.
Some steps can be skipped if no such edges exist. Without loss of generality, $x_a\ge 1$.
\noindent{\bf Step 1.} Label the odd edges of the odd paths in $R$ by
\begin{center}
$f(e^{r,odd}_{i,j})=
\left\{
\begin{array}{ll}
A^{odd}_{i-1}+\frac{j+1}{2},& \hbox{for } i\in [a-1]\hbox{ and }j\in[2x_i+1]_{odd}\\
A^{odd}_{a-1}+\frac{j-1}{2},& \hbox{for } i=a\hbox{ and }j\in[2,2x_a+1]_{odd}\end{array}
\right.
$
\end{center}
We label the edge $e^{r,odd}_{a,1}$ later in order to ensure that the vertex sum at $v_r$ is large enough.
Recall $A_0^{odd}=0$.
\noindent{\bf Step 2.} If $c\ge 1$, for $i\in[c]$ and $j\in[2w_i]_{odd}$, label the odd edges of the odd paths in $L$ by
\begin{center}
$f(e^{l,odd}_{i,j})=A^{odd}_a-1+C^{odd}_{i-1}-(i-1)+\frac{j+1}{2}$.
\end{center}
We also leave the $c$ edges $e^{l,odd}_{i,2w_i+1}$ for $ 1\leq i\leq c$ to enlarge the vertex sum at $v_l$.
\noindent{\bf Step 3.} If $s\ge 4$, label the edges of $P^{core}$ by,
\begin{equation*}
f(e_j)=A^{odd}_a-1+C^{odd}_c-c+\left\{
\begin{array}{ll}
\frac{s-j}{2}, \hbox{ for }j\in[2, s-2]_{even}, & \hbox{when $s$}\hbox{ is even}. \\
\frac{j-1}{2}, \hbox{ for }j\in[3, s-2]_{odd}, & \hbox{when $s$}\hbox{ is odd}.
\end{array}
\right.
\end{equation*}
In this step, we have labeled $s_1=\lfloor\frac{|s-2|}{2}\rfloor$ edges on the core path $P_s$.
\noindent{\bf Step 4.} If $d\ge 1$, for $i\in[d]$ and $j\in[2z_i]_{odd}$,
label the odd edges of the even paths in $L$ by
\begin{center}
$f(e^{l,even}_{i,j})=A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_{i-1}+\frac{j+1}{2}$.
\end{center}
\noindent{\bf Step 5.} If $t\ge 1$, for $i\in[t]$, label the paths of length one in $L$ by
\begin{center}
$f(e^i)=A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+i$.
\end{center}
\noindent{\bf Step 6.} For $i\in[a]$ and $j\in[2x_i]_{even}$, label the even edges of the odd paths in $R$ by
\begin{center}
$f(e^{r,odd}_{i,j})=A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+t+A^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 7.} If $c\ge 1$, for $i\in[c]$ and $j\in[2w_i]_{even}$, label the even edges of the odd paths in $L$ by
\begin{center}
$f(e^{l,odd}_{i,j})=A^{all}-1+C^{odd}_c-c+s_1+D^{odd}_d+t+C^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 8.} If $s\ge 2$, label the edges in $P^{core}$ by
\begin{center}
$f(e_j)=A^{all}-1+C^{all}-c+s_1+D^{odd}_d+t+\left\{
\begin{array}{lll}
\frac{s+1-j}{2},&\hbox{for } j\in [s]_{odd}, & \hbox{when } s\hbox{ is even}. \\
\frac{j}{2},&\hbox{for } j\in [s]_{even}, & \hbox{when } s\hbox{ is odd}.
\end{array}
\right.
$
\end{center}
We now have $s_2$ unlabeled edges on $P^{core}$, where
$s_2=1$ if $s=1$ or $s$ is even, and $s_2=2$ otherwise.
\noindent{\bf Step 9.} If $d\ge 1$, for $i\in[d]$ and $j\in[2z_i]_{even}$, label the even edges of the even paths in $L$ by
\begin{center}
$f(e^{l,even}_{i,j})=A^{all}-1+C^{all}-c+s-s_2+D^{odd}_d+t+D^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 10.} Label the edge $e^{r,odd}_{a,1}$ by
\begin{center}
$f(e^{r,odd}_{a,1})=A^{all}-1+C^{all}-c+s-s_2+D^{all}+t+1=m-c-s_2.$
\end{center}
\noindent{\bf Step 11.} If $c\ge 1$, for $i\in[c]$, label the edges $e^{l,odd}_{i,2w_i+1}$ by
\begin{center}
$f(e^{l,odd}_{i,2w_i+1})=m-c-s_2+i$.
\end{center}
\noindent{\bf Step 12.} Label the remaining edges in $P^{core}$ by the following rules:
If $s=1$ or $s$ is even, then let $f(e_s)=m$; otherwise, let $f(e_1)=m-1$ and $f(e_s)=m$.
We prove that $f$ is strongly antimagic:
{\bf Claim:} $\varphi_f(u)> \varphi_f(v)$ for any $u\in V_2$ and $v\in V_1$.
Observe that, at Step 5, every pendant edge has been labeled,
and for a vertex $u$ in $V_2$, there is an unlabeled edge in $E(u)$.
This guarantees that $\varphi_f(u)>\varphi_f(v)$ for every vertex $v$ in $V_1$.
{\bf Claim:} $\varphi_f(u)$ are all distinct for $u\in V_2$.
For any two vertices $u'$ and $u''$ in $V_2$, let $E(u')=\{e^1_{u'}, e^2_{u'}\}$ and $E(u'')=\{e^1_{u''}, e^2_{u''}\}$.
Assume $f(e^1_{u'})< f(e^2_{u'})$ and $f(e^1_{u''})< f(e^2_{u''})$.
Our labeling rules give that
if $f(e^1_{u'})\le f(e^1_{u''})$, then $f(e^2_{u'})\le f(e^2_{u''})$, and at least
one of the inequalities is strict. This guarantees that
$\varphi_f(u)$ are distinct for all $u\in V_2$.
For the proof that $\varphi_f(v_l)>\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$, see the Appendix.
$\blacksquare$
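The conditions checked in the claims above are mechanical, so they can also be verified computationally. The following is a minimal Python sketch (the function name and data representation are ours, not part of the paper) that tests whether an edge labeling is strongly antimagic, i.e., all vertex sums are distinct and the vertex sums respect the degree order:

```python
from collections import defaultdict

def is_strongly_antimagic(edges, f):
    """edges: list of vertex pairs; f: dict mapping each edge to its label.

    Checks the two conditions used in the proofs: all vertex sums
    phi_f(v) are distinct, and deg(u) < deg(v) implies phi_f(u) < phi_f(v).
    """
    deg, phi = defaultdict(int), defaultdict(int)
    for e in edges:
        u, v = e
        deg[u] += 1; deg[v] += 1
        phi[u] += f[e]; phi[v] += f[e]
    if len(set(phi.values())) != len(phi):
        return False  # two vertex sums coincide
    return all(phi[u] < phi[v]
               for u in phi for v in phi if deg[u] < deg[v])
```

For instance, labeling the three edges of a path with $1,3,2$ in order is strongly antimagic, while $1,2,3$ is not, since an internal vertex and an end vertex then receive the same sum.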
The next is the proof of Lemma~\ref{R has even path}.
\noindent{\bf Proof of Lemma~\ref{R has even path}.}
We use concepts similar to those in the proof of Lemma \ref{R has odd path} to create a bijection from $E$ to $[m]$.
However, the rules are a little more complicated than those in Lemma \ref{R has odd path}.
Our basic principle is that, for each path in $L$ or $R$,
the edges of the same parity as the pendant edge on that path should in general be labeled first,
except for some edges incident to $v_l$ or $v_r$; the edges whose parity differs from that of the pendant edge on the same path
are always labeled after all pendant edges have been labeled (so that the vertex sum at a vertex of degree two
can be greater than the vertex sum at a pendant vertex).
Thus, when $L$ contains $P_1$'s, the labels of these edges will be less than the
labels of the odd edges incident to $v_r$ on the even paths in $R$.
This could lead to $\varphi_f(v_l)<\varphi_f(v_r)$ if there are too many $P_1$'s in $L$.
When this happens, our solution is to switch the labeling order of some edges in the even paths in $R$.
More precisely, we need to change the labeling orders of the edges for
$\alpha$ even paths, where $\alpha=\max\{0, (b-1)-(c+d)\}$, to construct the desired strongly antimagic labeling.
In addition, we will change the labeling order of the edges on $P_2$'s first.
Let $b_2$ be the number of $P_2$'s in $R$ and $\beta=\min\{\alpha ,b_2\}$.
If $\alpha>0$, then $b-1>c+d$.
Since $\emg(v_l)=c+d+t>a+b=\emg(v_r)$, we have $t>a+1+\alpha> \beta$.
The following are our labeling rules. Again, some steps can be skipped if no such edges exist.
\noindent{\bf Step 1.} If $\beta>0$, we first label the odd edges of $\beta$ $P_2$'s in $R$ and the edges of $\beta-1$ $P_1$'s in $L$ by
\[f(e^{r,even}_{i,1})=2i-1\hbox{ for } i\in[\beta]\hbox{ and } f(e^{i})=2i \hbox{ for } i\in[\beta -1].\]
By our basic principles we would label the even edges of the $P_2$'s in $R$ first; here we instead label their odd edges first and leave the pendant edges to be labeled later.
Observe that the label of each $e^{r,even}_{i,1}$ is an odd integer,
and $f(e^i)>f(e^{r,even}_{i,1})$ for $i\in [\beta-1]$.
We define \[\beta_1=\max\{0,\beta-1\}.\]
\noindent{\bf Step 2.}
If $\alpha>\beta=b_2$, for $i\in [\beta+1, \alpha]$ and $j\in[4, 2y_i]_{even}$, label the even edges of the even paths in $R$ by
\begin{center}
$f(e^{r,even}_{i,j})=\beta_1+B^{even}_{i-1}-(i-(\beta+1))+\frac{j-2}{2}$.
\end{center}
Furthermore, if $b-1>\alpha$, we also label $e^{r,even}_{i,j}$
for $i\in [\alpha+1, b-1]$ and $j\in[2y_i]_{even}$ by
\[f(e^{r,even}_{i,j})=\beta_1+B^{even}_{i-1}-(\alpha-\beta)+\frac{j}{2}.\]
\noindent{\bf Step 3.} If $a\ge 1$, for $i\in [a]$ and $j\in[ 2x_i+1]_{odd}$, label the odd edges of odd paths in $R$ by
\begin{center}
$f(e^{r,odd}_{i,j})=\beta_1+B^{even}_{b-1}-(\alpha-\beta)+A^{odd}_{i-1}+\frac{j+1}{2}$.
\end{center}
\noindent{\bf Step 4.} If $c\ge 1$, for $i\in [c]$ and $j\in[ 2w_i]_{odd}$, label the odd edges of odd paths in $L$ by
\begin{center}
$f(e^{l,odd}_{i,j})=\beta_1+B^{even}_{b-1}-(\alpha-\beta)+A^{odd}_{a}+C^{odd}_{i-1}-(i-1)+\frac{j+1}{2}$.
\end{center}
We leave the edges $e^{l,odd}_{i,2w_i+1}$ of odd paths in $L$ to be labeled later to ensure that
$v_l$ has a large vertex sum.
\noindent{\bf Step 5.} If $s\ge 4$, label the edges of $P^{core}$ by
\begin{align*}
f(e_j)&=\beta_1+B^{even}_{b-1}-(\alpha-\beta)+A^{odd}_a+C^{odd}_c-c\\
& +\left\{
\begin{array}{ll}
\frac{s-j}{2}, \hbox{ for }j\in[2, s-2]_{even}, & \hbox{when $s$}\hbox{ is even}. \\
\frac{j-1}{2}, \hbox{ for }j\in[3, s-2]_{odd}, & \hbox{when $s$}\hbox{ is odd}.
\end{array}
\right.
\end{align*}
Again, we have labeled $s_1=\lfloor\frac{|s-2|}{2}\rfloor$ edges on the core path $P_s$ at this step.
Next, we label the even edges of the $b$-th even path in $R$.
\noindent{\bf Step 6.} For $j\in[2y_b]_{even}$, label the edges $e^{r,even}_{b,j}$ by
\begin{center}
$f(e^{r,even}_{b,j})=\beta_1+B^{even}_{b-1}-(\alpha-\beta)+A^{odd}_a+C^{odd}_c-c+s_1+\frac{j}{2}.$
\end{center}
\noindent{\bf Step 7.} If $d\ge 1$, for $i\in[d]$ and $j\in[2z_i]_{odd}$, label
the odd edges of the even paths in $L$ by
\begin{center}
$f(e^{l,even}_{i,j})=\beta_1+B^{even}_{b}-(\alpha-\beta)+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{i-1}+\frac{j+1}{2}.$
\end{center}
\noindent{\bf Step 8.} If $\alpha>\beta=b_2$, for $i\in[\beta+1, \alpha]$, label the edges $e^{r,even}_{i,1}$ in $R$ by
\begin{center}
$f(e^{r,even}_{i,1})=\beta_1+B^{even}_{b}-(\alpha-\beta)+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+(i-\beta)$.
\end{center}
Note that, for $\beta+1\leq i\leq \alpha$, $e^{r,even}_{i,2}$ and $e^{r,even}_{i,3}$ on $P_{2y_i}$ are two adjacent edges that remain unlabeled.
\noindent{\bf Step 9.} If $\beta\ge 1$, then $t>\beta$.
Recall that we have labeled $\beta-1$ paths of length one in $L$ at Step 1. Now
label the remaining edges $e^i$ in $L$ by
\begin{center}
$f(e^i)=\beta_1+B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+(i-\beta_1)$ for $i\in[\beta, t]$.
\end{center}
\noindent{\bf Step 10.} If $\beta\ge 1$, for $i\in[\beta]$, label the pendant edges of the $P_2$'s in $R$ by
\begin{center}
$f(e^{r,even}_{i,2})=B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t+(\beta+1-i)$.
\end{center}
\noindent{\bf Step 11.} If $\alpha>\beta$, for $i\in[\beta+1, \alpha]$ and $j\in [3, 2y_i]_{odd}$, label
the odd edges of the even paths in $R$ by
\begin{center}
$f(e^{r,even}_{i,j})=B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t+B^{odd}_{i-1}-(i-(\beta+1))+\frac{j-1}{2}$.
\end{center}
Moreover, if $b-1>\alpha$, for $i\in [\alpha+1, b-1]$ and $j\in [2y_i]_{odd}$, label $e^{r,even}_{i,j}$ by
\begin{center}
$f(e^{r,even}_{i,j})=B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t+B^{odd}_{i-1}-(\alpha-\beta)+\frac{j+1}{2}$.
\end{center}
\noindent{\bf Step 12.} If $a\ge 1$, for $i\in [a]$ and $j\in [2x_i]_{even}$, label the even edges of odd paths in $R$ by
\begin{center}
$f(e^{r,odd}_{i,j})=B^{all}-y_b-(\alpha-\beta)+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t+A^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 13.} If $c\ge 1$, for $i\in [c]$ and $j\in [2w_i]_{even}$, label the even edges of odd paths in $L$ by
\begin{center}
$f(e^{l,odd}_{i,j})=B^{all}-y_b-(\alpha-\beta)+A^{all}+C^{odd}_c-c+s_1+D^{odd}_{d}+t+C^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 14.} If $s\ge 2$, label the edges in $P^{core}$ by
\begin{eqnarray*}
f(e_j) &=& B^{all}-y_b-(\alpha-\beta)+A^{all}+C^{all}-c+s_1+D^{odd}_{d}+t \\
&& +\left\{
\begin{array}{lll}
\frac{s+1-j}{2},& \hbox{ for } j\in[s]_{odd},& \hbox{when $s$}\hbox{ is even}. \\
\frac{j}{2},& \hbox{ for } j\in[s]_{even},& \hbox{when $s$}\hbox{ is odd}.
\end{array}
\right.
\end{eqnarray*}
\noindent{\bf Step 15.} For $j\in[2y_b]_{odd}$, label the odd edges of the $b$-th even path in $R$ by
\begin{center}
$f(e^{r,even}_{b,j})=B^{all}-y_b-(\alpha-\beta)+A^{all}+C^{all}-c+s-s_2+D^{odd}_{d}+t+\frac{j+1}{2}$.
\end{center}
\noindent{\bf Step 16.} If $d\ge 1$, for $i\in [d]$ and $j\in [2z_i]_{even}$,
label the even edges of the even paths in $L$ by
\begin{center}
$f(e^{l,even}_{i,j})=B^{all}-(\alpha-\beta)+A^{all}+C^{all}-c+s-s_2+D^{odd}_{d}+t+D^{even}_{i-1}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 17.} If $\alpha>\beta$, for $i\in[\beta+1, \alpha]$, label the unlabeled edges $e^{r,even}_{i,2}$ by
\begin{center}
$f(e^{r,even}_{i,2})=B^{all}-(\alpha-\beta)+A^{all}+C^{all}-c+s-s_2+D^{all}+t+(i-\beta)$.
\end{center}
\noindent{\bf Step 18.} If $c\ge 1$, for $i\in [c]$, label $e^{l,odd}_{i,2w_i+1}$ by
\begin{center}
$f(e^{l,odd}_{i,2w_i+1})=B^{all}+A^{all}+C^{all}-c+s-s_2+D^{all}+t+i$.
\end{center}
\noindent{\bf Step 19.} Label the remaining edges in $P^{core}$ by the following rules:
If $s=1$ or $s$ is even, then let $f(e_s)=m$; otherwise, let $f(e_1)=m-1$ and $f(e_s)=m$.
Next, we prove that $f$ is strongly antimagic.
{\bf Claim:} $\varphi_f(u)> \varphi_f(v)$ for any $u\in V_2$ and $v\in V_1$.
Observe that all pendant edges have been labeled by the end of Step 9 or Step 10. In the former case,
$\beta=0$, and at the end of Step 8 there is an unlabeled edge in $E(u)$ for every $u\in V_2$;
hence the claim holds.
In the latter case, observe that the edge $e^{r,even}_{1,2}$ has the largest label among all pendant edges.
Hence the largest vertex sum over all pendant vertices is $f(e^{r,even}_{1,2})$.
Let us check the vertex sum of a vertex in $V_2$. For $1\le i\le \beta$, the vertex sum $\varphi_f(v^{r,even}_{i,1})=f(e^{r,even}_{i,1})+f(e^{r,even}_{i,2})
=(2i-1)+(B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t+(\beta+1-i))$, which is increasing in $i$.
For any other vertex $u\in V_2$, by our labeling rules,
we can find one edge $e'\in E(u)$ with $f(e')>f(e^{r,even}_{\beta,1})$ and another edge $e''\in E(u)$ with $f(e'')>f(e^{r,even}_{\beta,2})$.
Thus, the smallest vertex sum of a vertex in $V_2$ occurs at $v^{r,even}_{1,1}$, and it is greater than the vertex sum of any pendant vertex.
{\bf Claim:} $\varphi_f(u)$ are all distinct for $u\in V_2$.
We have already shown that the vertex sums satisfy
$\varphi_f(v^{r,even}_{1,1})<\varphi_f(v^{r,even}_{2,1})<
\ldots<\varphi_f(v^{r,even}_{\beta,1})< \varphi_f(u)$
for $u\in V_2-\{v^{r,even}_{1,1},v^{r,even}_{2,1},\ldots, v^{r,even}_{\beta,1}\}$.
For other vertices $u'$ and $u''$ in $V_2$, let $E(u')=\{e^1_{u'}, e^2_{u'}\}$ and $E(u'')=\{e^1_{u''}, e^2_{u''}\}$.
Assume $f(e^1_{u'})< f(e^2_{u'})$ and $f(e^1_{u''})< f(e^2_{u''})$.
Our labeling rules give that
if $f(e^1_{u'})\le f(e^1_{u''})$, then $f(e^2_{u'})\le f(e^2_{u''})$, and at least
one of the inequalities is strict. This guarantees that
$\varphi_f(u)$ are distinct for all $u\in V_2$.
For the proof that $\varphi_f(v_l)>\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$, see the Appendix.
$\blacksquare$
Lemma \ref{deqL equals 3} and Lemma \ref{R has two P1} can be proven
by the same labeling rules.
\noindent{\bf Proof of Lemma \ref{deqL equals 3} and Lemma \ref{R has two P1}.}
First, we use Lemma \ref{lm1} and Lemma \ref{Right minus 1} to perform some reductions.
Given a double spider $DS(L,P^{core}, R)$ described in Lemma \ref{deqL equals 3},
let us first consider $h=\min \{j| P_j \in R\cup L\}$.
Since $\emg(v_l)=\emg(v_r)=3$, without loss of generality,
we assume the number of $P_h$ in $R$ is greater than or equal to that in $L$.
By Lemma \ref{lm1}, it follows that we only need to show that the double spider is strongly antimagic
for $h=1$.
Given a double spider $DS(L,P^{core}, R)$ described in Lemma \ref{R has two P1},
we remove all but two $P_1$'s in $R$.
Moreover, if there are $P_1$'s in $L$ as well, we remove as many of them as possible
until one of three situations occurs: $L$ contains no $P_1$'s,
$L$ consists of exactly two $P_1$'s,
or $L$ consists of exactly one $P_1$ and one path of length at least two.
By Lemma \ref{Right minus 1}, if the resulting double spider is strongly antimagic,
then $DS(L,P^{core}, R)$ is also strongly antimagic.
Every reduced double spider belongs to at least one of the three types:
\begin{description}
\item{(a)} $\emg(v_l)=\emg(v_r)=3$, $t=2$, $a=2$, and $x_1=x_2=0$; or
\item{(b)} $\emg(v_l)=\emg(v_r)=3$, $t\le 1$, $a\geq 1$, and $x_1=0$; or
\item{(c)} $\emg(v_l)\ge \emg(v_r)=3$, $t=0$, $a=2$, and $x_1=x_2=0$.
\end{description}
Now we show that each type of double spider above is strongly antimagic.
If a double spider is of type (a), then the total number of edges $m=s+4$.
When $s$ is odd, we give the labeling $f$ as follows:
$f(e_j)=\frac{(s+1)-j}{2}$ for $j\in [s]_{even}$,
$f(e^{r,odd}_{1,1})=\frac{s-1}{2}+1$, $f(e^{r,odd}_{2,1})=\frac{s-1}{2}+2$,
$f(e^1)=\frac{s-1}{2}+3$, $f(e^2)=\frac{s-1}{2}+4$, and
$f(e_j)=\frac{s-1}{2}+4+\frac{(s+2)-j}{2}$ for $j\in [s]_{odd}$.
For this labeling, we have vertex sums
$\varphi_f(v_l)=2s+10$, $\varphi_f(v_r)=\frac{3s+13}{2}$,
$\varphi_f(v_j)=\frac{3s+11-2j}{2}$ for $2\le j\le s$, and
$\varphi_f(v)\in\{\frac{s+1}{2},\frac{s+3}{2},\frac{s+5}{2},\frac{s+7}{2}\}$ if $\emg(v)=1$.
When $s$ is even, we give the labeling $f$ as follows:
$f(e_j)=\frac{j}{2}$ for $j\in [s]_{even}$,
$f(e^1)=\frac{s}{2}+1$, $f(e^2)=\frac{s}{2}+2$,
$f(e^{r,odd}_{1,1})=\frac{s}{2}+3$, $f(e^{r,odd}_{2,1})=\frac{s}{2}+4$,
and $f(e_j)=\frac{s}{2}+4+\frac{j+1}{2}$ for $j\in [s]_{odd}$.
For this labeling, we have vertex sums
$\varphi_f(v_l)=\frac{3s+16}{2}$, $\varphi_f(v_r)=\frac{3s+14}{2}$,
$\varphi_f(v_j)=\frac{s+8+2j}{2}$ for $j\in[2,s]$, and
$\varphi_f(v)\in\{\frac{s+2}{2},\frac{s+4}{2},\frac{s+6}{2},\frac{s+8}{2}\}$ if $\emg(v)=1$.
It is easy to see that these labelings are strongly antimagic.
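The type (a) labelings above are explicit enough to check by direct computation. The Python sketch below is our own reconstruction (the vertex names $c_0,\dots,c_s$ for the core with $v_l=c_0$, $v_r=c_s$, and pendant neighbours $l_1,l_2,r_1,r_2$ are illustrative assumptions): it builds the labeling for a given $s$ and computes all vertex sums and degrees.

```python
from collections import defaultdict

def type_a_labeling(s):
    """Type (a) double spider: core path c0-c1-...-cs, two pendant edges
    l1, l2 at v_l = c0 and r1, r2 at v_r = cs, so m = s + 4 edges.
    Returns a dict from edges to labels, following the rules in the text."""
    E = {}
    def e(j):  # core edge e_j = (c_{j-1}, c_j)
        return (f'c{j - 1}', f'c{j}')
    if s % 2 == 1:
        h = (s - 1) // 2
        for j in range(2, s + 1, 2):          # even core edges
            E[e(j)] = (s + 1 - j) // 2
        E[(f'c{s}', 'r1')] = h + 1
        E[(f'c{s}', 'r2')] = h + 2
        E[('c0', 'l1')] = h + 3
        E[('c0', 'l2')] = h + 4
        for j in range(1, s + 1, 2):          # odd core edges
            E[e(j)] = h + 4 + (s + 2 - j) // 2
    else:
        h = s // 2
        for j in range(2, s + 1, 2):
            E[e(j)] = j // 2
        E[('c0', 'l1')] = h + 1
        E[('c0', 'l2')] = h + 2
        E[(f'c{s}', 'r1')] = h + 3
        E[(f'c{s}', 'r2')] = h + 4
        for j in range(1, s + 1, 2):
            E[e(j)] = h + 4 + (j + 1) // 2
    return E

def sums_and_degrees(E):
    """Vertex sums and degrees of the labeled graph."""
    phi, deg = defaultdict(int), defaultdict(int)
    for (u, v), lab in E.items():
        phi[u] += lab; phi[v] += lab
        deg[u] += 1; deg[v] += 1
    return phi, deg
```

For $s=3$, for example, this reproduces $\varphi_f(v_l)=2s+10=16$ and $\varphi_f(v_r)=\frac{3s+13}{2}=11$.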
For a double spider of type (b) or (c),
we will give the rules to label the edges by $1,2,\ldots, m$ accordingly.
Our rules will produce a strongly antimagic labeling
except when the double spider is isomorphic to one of the following:
\begin{figure}
\caption{The double spider with $L=\{P_3,P_1\}$.}
\label{fig 2}
\end{figure}
We construct a strongly antimagic labeling separately for the right graph of Figure~\ref{fig 2}.
Note that for the two types of double spiders, $R$ contains only two paths and one of them has length one.
For convenience, we will denote the two paths in $R$ by
$P_1 =v_rv^{r}_{1,1}=e^{r}_{1,1}$ and $P_k=v_rv^{r}_{2,1}v^{r}_{2,2}\cdots v^{r}_{2,k}$ with
$e^{r}_{2,j}=v^{r}_{2,j-1}v^{r}_{2,j}$ and $e^{r}_{2,1}=v_rv^{r}_{2,1}$.
The following are our rules to label the double spiders of type (b) and (c):
\noindent{\bf Step 1.} If $k\ge 2$, label all even edges of $P_k$ in $R$ by
\begin{center}
$f(e^{r}_{2,j})=\lfloor\frac{k+2-j}{2}\rfloor$, for $j\in [k]_{even}$.
\end{center}
\noindent{\bf Step 2.} If $c\ge 1$, label all odd edges of $P_{2w_{i}+1}$ in $L$, except for $e^{l,odd}_{1,2w_{1}+1}$, by
\begin{center}
$f(e^{l,odd}_{1,j})=\lfloor\frac{k}{2}\rfloor+\frac{j+1}{2}, \hbox{ for }j\in [2w_{1}-1]_{odd}$,
\end{center}
and for $i\in[2,c]$ and $j\in[2w_{i}+1]_{odd}$, let
\begin{center}
$f(e^{l,odd}_{i,j})=\lfloor\frac{k}{2}\rfloor+C^{odd}_{i-1}-1+\frac{j+1}{2}$.
\end{center}
Moreover, we define $w'=-1$ when $c\ge 1$, otherwise $w'=0$.
Then we have $\lfloor\frac{k}{2}\rfloor+c+w'+D^{odd}_d+t\ge 1$.
\noindent{\bf Step 3.}
If $s\ge 4$, label the edges of $P^{core}$ by
\begin{center}
$f(e_j)=\lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+\left\{
\begin{array}{ll}
\frac{s-j}{2}, \hbox{ for }j\in[2, s-2]_{even}, & \hbox{when $s$}\hbox{ is even}. \\
\frac{j-1}{2}, \hbox{ for }j\in[3, s-2]_{odd}, & \hbox{when $s$}\hbox{ is odd}.
\end{array}
\right.$
\end{center}
As before, we have labeled $s_1=\lfloor\frac{|s-2|}{2}\rfloor$ edges of the core path $P_s$.
\noindent{\bf Step 4.} If $d\ge 1$, for $i\in[d]$ and $j\in[ 2z_i]_{odd}$,
label the odd edges of $P_{2z_i}$ by
\begin{center}
$f(e^{l,even}_{i,j})=\lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+s_1+D_{i-1}^{odd}+\frac{j+1}{2}$.
\end{center}
Next, we label edges of the paths of length one in $L$ and $R$.
We have to slightly adjust the labeling orders for different cases.
Let \[t'=\left\{
\begin{array}{ll}
1, & \hbox{if $t=1,d=1$ or $t=1,s=2,c=1,w_1=1,k\geq 2$;} \\
0, & \hbox{otherwise;}
\end{array}
\right.\]
Observe that if $t'=1$, then $\lfloor\frac{k}{2}\rfloor+D^{odd}_d\ge 1$.
\noindent{\bf Step 5.} We label $e^{1}$ (it does not exist if $t=0$)
and $e^{r}_{1,1}$ in different orders according to the value of $t'$.
If $t'=1$, we label $e^{r}_{1,1}$ and $e^{1}$ by
\begin{eqnarray}\label{c11 9}
f(e^{r}_{1,1}) &=& \hbox{$\lfloor\frac{k}{2}\rfloor$}+C^{odd}_c+w'+s_1+D^{odd}_d+1. \\
\notag f(e^{1}) &=& \hbox{$\lfloor\frac{k}{2}\rfloor$}+C^{odd}_c+w'+s_1+D^{odd}_d+2.
\end{eqnarray}
Otherwise, $t'=0$, and we label $e^{1}$ and $e^{r}_{1,1}$ by
\begin{eqnarray}\label{c11 8}
\notag f(e^{1}) &=& \hbox{$\lfloor\frac{k}{2}\rfloor$}+C^{odd}_c+w'+s_1+D^{odd}_d+t. \\
f(e^{r}_{1,1})&=& \hbox{$\lfloor\frac{k}{2}\rfloor$}+C^{odd}_c+w'+s_1+D^{odd}_d+t+1.
\end{eqnarray}
In this step, $f(e^1)$ is undefined when $t=0$.
\noindent{\bf Step 6.} Label all the odd edges of $P_k$, $j\in[k]_{odd}$, in $R$ by
\begin{equation}\label{c11 10}
f(e^{r}_{2,j})=\hbox{$\lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+s_1+D^{odd}_d+1+t+\lceil\frac{k+1-j}{2}\rceil$}.
\end{equation}
\noindent{\bf Step 7.} If $c\ge 1$, for $i\in [c]$ and $j\in [2w_{i}]_{even}$, label the even edges of $P_{2w_i+1}$ in $L$ by
\begin{center}
$f(e^{l,odd}_{i,j})=k+1+C^{odd}_c+w'+s_1+D^{odd}_d+t+C_{i-1}^{even}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 8.} If $s\ge 2$, label the edges in $P^{core}$ by
\begin{equation}\label{c11 12}
f(e_j)=k+1+C^{all}+w'+D^{odd}_d+s_1+t+\left\{
\begin{array}{lll}
\frac{s+1-j}{2},& \hbox{for }j\in[s]_{odd}, & \hbox{when }s\hbox{ is even}; \\
\frac{j}{2},& \hbox{for }j\in [s]_{even}, & \hbox{when }s\hbox{ is odd};
\end{array}\right.
\end{equation}
Let $s_2$ be the number of unlabeled edges on $P^{core}$; thus
$s_2=1$ if $s=1$ or $s$ is even, and $s_2=2$ otherwise.
\noindent{\bf Step 9.} If $d\ge 1$, for $j\in [2z_i]_{even}$ and $ i\in[d]$, label the even edges of $P_{2z_i}$ in $L$ by
\begin{center}
$f(e^{l,even}_{i,j})=k+1+C^{all}+w'+s-s_2+D^{odd}_d+t+D_{i-1}^{even}+\frac{j}{2}$.
\end{center}
\noindent{\bf Step 10.} If $c\ge 1$, label the edge $e^{l,odd}_{1,2w_1+1}$ left at Step 2 by
\begin{center}
$f(e^{l,odd}_{1,2w_1+1})=m-s_2$.
\end{center}
\noindent{\bf Step 11.} Label the remaining edges in $P^{core}$ by the following rules:\\
If $s=1$ or $s$ is even, then let $f(e_s)=m$;
otherwise, let $f(e_1)=m-1$ and $f(e_s)=m$.
We prove $f$ is a strongly antimagic labeling.
{\bf Claim:} $\varphi_f(v)> \varphi_f(u)$ for any $v\in V_2$ and $u\in V_1$.
Observe that either all pendant edges were labeled
before Step 6, or there exists exactly one pendant edge labeled at Step 6,
namely when $k$ is odd and $k\ge 3$.
In the former case, for every $v \in V_2$,
there is an edge in $E(v)$ not yet labeled at the
beginning of Step 6. This ensures that $\varphi_f(v)> \varphi_f(u)$ for any
$u\in V_1$.
In the latter case, the pendant edge of $P_k$ receives the label $\lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+s_1+D^{odd}_d+t+2$
at Step 6, and this label equals $\varphi_f(v^{r}_{2,k})$.
Moreover, every vertex $v\in V_2$, except for $v^{r}_{2,k-1}$, is incident
to an edge with label greater than $\lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+s_1+D^{odd}_d+t+2$.
This also implies that $\varphi_f(v)> \varphi_f(u)$ for any vertex $v\in V_2$ and $u\in V_1$.
{\bf Claim:} $\varphi_f(u)$ are all distinct for $u\in V_2$.
For any two vertices $u'$ and $u''$ in $V_2$, let $E(u')=\{e^1_{u'}, e^2_{u'}\}$ and $E(u'')=\{e^1_{u''}, e^2_{u''}\}$.
Assume $f(e^1_{u'})< f(e^2_{u'})$ and $f(e^1_{u''})< f(e^2_{u''})$.
Our labeling rules give that
if $f(e^1_{u'})\le f(e^1_{u''})$, then $f(e^2_{u'})\le f(e^2_{u''})$, and at least
one of the inequalities is strict. This guarantees that
$\varphi_f(u)$ are distinct for all $u\in V_2$.
For the proof that $\varphi_f(v_l)>\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$, see the Appendix.
$\blacksquare$
\section{Conclusion and Future Work}
In general, given an antimagic graph $G$, there exist many antimagic labelings on $G$,
and some of these labelings are not strongly antimagic.
Thus, finding a strongly antimagic labeling of a graph can be more difficult
than finding a general antimagic labeling.
In fact, we do not know whether every antimagic graph admits a strongly antimagic labeling.
However, if a graph is strongly antimagic, then we can use Lemma~\ref{lm1}
to construct a larger graph which is not only antimagic but also strongly antimagic.
More constructive methods of this kind would be helpful in tackling the antimagic labeling problem.
For example, Lemma~\ref{lm1} can be generalized to the following theorem.
\begin{theorem}
Let $G$ be a strongly antimagic graph and $V_k=\{v\in V\mid \emg(v)=k\}$.
If for each vertex in $V_k$, we attach an edge to it,
then the resulting graph is also strongly antimagic.
\end{theorem}
The proof of the above theorem is exactly the same as that of Lemma~\ref{lm1}: first add $|V_k|$ to
the label of each edge in $E$ of the given strongly antimagic labeling,
then label the new edges by $1,\ldots, |V_k|$
according to the order of the vertex sums of the vertices in $V_k$.
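This construction is itself algorithmic. Below is a minimal Python sketch under our own naming conventions (the helper `attach_edges_to_Vk` and the `new_*` vertex names are illustrative assumptions, not from the paper):

```python
from collections import defaultdict

def attach_edges_to_Vk(edges, f, k):
    """Given a labeling f of the graph with the given edges, attach one new
    pendant edge to every vertex of degree k and extend the labeling as in
    the proof: shift every old label up by |V_k|, then give the new edges
    the labels 1..|V_k| in increasing order of the old vertex sums of
    their endpoints in V_k."""
    deg, phi = defaultdict(int), defaultdict(int)
    for e in edges:
        u, v = e
        deg[u] += 1; deg[v] += 1
        phi[u] += f[e]; phi[v] += f[e]
    Vk = sorted((v for v in deg if deg[v] == k), key=lambda v: phi[v])
    g = {e: lab + len(Vk) for e, lab in f.items()}   # shift old labels
    new_edges = [(v, f'new_{v}') for v in Vk]
    for i, e in enumerate(new_edges, start=1):       # smallest labels to
        g[e] = i                                     # smallest old sums
    return edges + new_edges, g
```

Starting from a strongly antimagic labeling, the theorem asserts that the resulting labeling is again strongly antimagic; this can be confirmed on small instances by recomputing the vertex sums.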
For antimagic graphs, we ask the following questions.
\begin{question}\label{q1}
Does every antimagic graph have a strongly antimagic labeling?
\end{question}
In 2008, Wang and Hsiao~\cite{W2008} introduced the {\em $k$-antimagic labeling} on a graph $G$, which
is a bijection $f$ from $E(G)$ to $\{k+1,\ldots,k+|E(G)|\}$ for an integer $k\ge 0$
such that the vertex sums $\varphi_f(v)$ are distinct over all vertices.
We call a graph {\em $k$-antimagic} if it has a $k$-antimagic labeling.
The purpose of studying such labelings is to apply them to
finding antimagic labelings of Cartesian products of graphs.
Wang and Hsiao also pointed out that
if the antimagic labeling $f$ of a graph $G$ has the property
that the order of vertex sums is consistent with the order of degrees,
then $G$ is $k$-antimagic for any $k\ge 0$.
This property of the vertex sums is exactly the definition of a strongly antimagic labeling in our article.
In fact, all the $k$-antimagic labelings studied in~\cite{W2008}
are derived from the strongly antimagic labeling of the graph with a translation on labels.
Hence all those $k$-antimagic labelings have the ``strong property'': $\varphi_f(v)<\varphi_f(u)$ whenever $\emg(u)< \emg(v)$.
\begin{question}\label{q2}
Is there a graph that is $k$-antimagic but not $(k+1)$-antimagic?
\end{question}
Note that if the answer to Question~\ref{q2} is yes for some graph $G$,
then no $k$-antimagic labeling of $G$ has the above strong property relating vertex sums and degrees.
Moreover, if $k=0$, then $G$ provides a negative answer to Question~\ref{q1}.
{\bf Remark.} There is a different version of $k$-antimagic labeling studied in \cite{Dan2005,W2012}.
They consider injections from $E(G)$ to $\{1,2,\ldots,|E(G)|+k\}$ such that all vertex sums are pairwise distinct.
Recall that the set $V_k$ of a graph consists of vertices of degree $k$.
For any graph, let $V_{\ge 3}$ be the set of vertices of degree at least three.
Kaplan, Lev and Roditty~\cite{KLR2009} proved that a tree with $|V_2|\le 1$ is antimagic.
Our strongly antimagic double spiders, together with the known results on spiders and paths,
can be rephrased as follows: a tree with $|V_{\ge 3}|\le 2$ is antimagic.
If a tree has both large $V_2$ and large $V_{\ge 3}$, then the problem becomes more difficult, for the following reason.
Note that $|V_1|$ must be larger than $|V_{\ge 3}|$, by the simple fact that the average degree
of a tree is less than two.
Hence, large $V_2$ and $V_{\ge 3}$ force both $V_2$ and $V_1$ to be large.
If we label the edges at random, then the vertex sum of a vertex in $V_2$ is smaller than $|E|$ about half of the time,
and is then very likely to coincide with the vertex sum of some vertex in $V_1$.
A very recent result~\cite{LMS2017} shows that a caterpillar with $|V_1|\ge \frac{3}{2}(|V_2|+|V_{\ge 3}|+1)$ is antimagic.
At the time of writing, we do not yet have an affirmative answer to Conjecture~\ref{g2} for all caterpillars.
\section*{Acknowledgment}
The first and third authors would like to thank the Alfr\'{e}d R\'{e}nyi Institute of Mathematics in Hungary for their hospitality in August 2017.
\section{Appendix}
\subsection{Rest of the Proof of Lemma \ref{deqL equals 3} and Lemma \ref{R has two P1}. }
{\bf Claim:} $\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$.
Let $u_2$ be the vertex in $V_2$ with the largest vertex sum.
If $s=1$, we have
\[f(e^{r}_{1,1})+f(e^{r}_{2,1}) \geq m-1,\]
by Equalities (\ref{c11 9}) or (\ref{c11 8}), and (\ref{c11 10}).
Moreover, $s=1$ implies that the label $m$ is assigned to the core edge, which is incident to both $v_r$ and $v_l$.
So $\varphi_f(u_2)\le (m-1)+(m-2) <2m-1\le f(e_1)+f(e_{1,1}^{r})+f(e_{2,1}^{r})=\varphi_f(v_r)$.
If $s\ge 2$, then $u_2=v_s$ since it is incident to $e_s$, the last labeled edge.
By Equalities (\ref{c11 10}) and (\ref{c11 12}),
we have
\[f(e_{s-1})=f(e^{r}_{2,1})+C^{even}_c+
\left\{
\begin{array}{ll}
1, & \hbox{if $s$ is even;} \\
\frac{s-1}{2}, & \hbox{if $s$ is odd;}
\end{array}
\right.
\]
Recall that $\lfloor\frac{k}{2}\rfloor+c+w'+D^{odd}_d+t>0$.
In Equation (\ref{c11 8}),
we have
\begin{align*}
f(e^{r}_{1,1})&\ge \lfloor\frac{k}{2}\rfloor+C^{odd}_c+w'+s_1+D^{odd}_d+t+1\\
&=s_1+C^{even}_c+1+(\lfloor\frac{k}{2}\rfloor+c+w'+D^{odd}_d+t)\\
&>s_1+C^{even}_c+1\ge f(e_{s-1})-f(e^r_{2,1}),
\end{align*}
and hence
\begin{center}
$\varphi_f(v_r)=f(e^r_{1,1})+f(e^r_{2,1})+f(e_s)
>f(e_{s-1})+f(e_s)=\varphi_f(u_2).$
\end{center}
\noindent{\bf Claim:} $\varphi_f(v_l)>\varphi_f(v_r)$.
When $t'=1$, we have $f(e^1)=f(e^{r}_{1,1})+1$ by the rules in Step 5.
Thus, $\varphi_f(v_r)=f(e_s)+f(e^r_{1,1})+f(e^r_{2,1})=(m-1)+f(e^1)+f(e^r_{2,1})$.
Note that $m-1$ is assigned to an edge $e$ at Step 9, 10, or 11.
So, $e\in E(v_l)\setminus \{e^1\}$.
If $f(e_1)=m-1$ ($s$ odd, $s\ge 3$), then there exists an edge in $E(v_l)$
labeled at Step 9 or 10,
whose label is $m-2$ and greater than $f(e^r_{2,1})$.
So
\[\varphi_f(v_l)> (m-1)+f(e^1)+f(e^r_{2,1})=\varphi_f(v_r).\]
If $f(e)=m-1$ for some $e\in E(v_l)-\{e_1,e^1\}$,
then $e_1$ is labeled at Step 11 when $s=1$ or it is labeled at Step 8 when $s$ is even.
In the former case, we have $f(e^r_{2,1})< m-1$ and hence
\[\varphi_f(v_l)\ge f(e^1)+2m-1>f(e^r_{2,1})+f(e^r_{1,1})+m=\varphi_f(v_r).\]
In the latter case, we have $f(e_1)>f(e^r_{2,1})$, and hence
\[\varphi_f(v_l)\ge f(e^1)+f(e_1)+m-1>f(e^r_{2,1})+f(e^r_{1,1})+m=\varphi_f(v_r).\]
When $t'=0$, recall that $\emg(v_l)=c+d+t+1\ge 3$.
We classify the possible values of $c$, $d$, and $t$.
\begin{description}
\item{Case 1.} $c+d\ge 2$.
\begin{description}
\item{Subcase 1.1.} $d\ge 1$.
Then we can pick two edges $e',e''\in E(v_l)$ labeled at Step 9 and Step 10, whose labels are both greater than
$f(e^r_{2,1})$ and $f(e^r_{1,1})$. If $f(e_1)\in\{m,m-1\}$, then
\[\varphi_f(v_l)\ge f(e')+f(e'')+f(e_1)
>f(e^r_{2,1})+f(e^r_{1,1})+f(e_s)= \varphi_f(v_r).\]
If $f(e_1)\not\in\{m,m-1\}$, then we use $f(e_1)>f(e^r_{1,1})$, and hence
\[\varphi_f(v_l)\ge (m-1)+(m-2)+f(e_1)>m+f(e^r_{2,1})+f(e^r_{1,1}).\]
\item{Subcase 1.2.} $d=0$.
If $s=1$, we have $f(e^{l,odd}_{c,2w_c+1})+1=f(e^{r}_{1,1})$. Moreover,
$f(e^{l,odd}_{1,2w_1+1})- f(e^{r}_{2,1})\geq 2$ since $C^{even}_c\ge 2$.
Therefore,
\begin{eqnarray*}
\varphi_f(v_l)&\ge& f(e_s)+f(e^{l,odd}_{1,2w_1+1})+f(e^{l,odd}_{c,2w_c+1})\\
&>&f(e_s)+f(e^{r}_{2,1})+f(e^{r}_{1,1})\\
&=&\varphi_f(v_r).
\end{eqnarray*}
If $s$ is odd and greater than 1,
we have $f(e^{l,odd}_{c,2w_c+1})+\frac{s-3}{2}+1=f(e^{r}_{1,1})$ and
$f(e^{l,odd}_{1,2w_1+1}) - f(e^{r,odd}_{2,1})\ge C^{even}_c+\frac{s-1}{2}+1$.
Thus,
\begin{eqnarray*}
\varphi_f(v_l)&\ge& f(e_1)+f(e^{l,odd}_{1,2w_1+1})+f(e^{l,odd}_{c,2w_c+1})\\
&\ge& (m-1)+[f(e^{r}_{1,1})-(\frac{s-3}{2}+1)]+[f(e^{r,odd}_{2,1})+ C^{even}_c+\frac{s-1}{2}+1]\\
&>&f(e_s)+f(e^{r}_{2,1})+f(e^{r}_{1,1})\\
&=&\varphi_f(v_r).
\end{eqnarray*}
If $s$ is even, we have $f(e^{l,odd}_{c,2w_c+1})+\frac{s-2}{2}+1=f(e^{r,odd}_{1,1})$,
and by Equalities (\ref{c11 10}) and (\ref{c11 12}), we have
$f(e_1)-f(e^{r}_{2,1})=\frac{s}{2}+C^{even}_c\geq \frac{s}{2}+2$.
Then
\begin{eqnarray*}
\varphi_f(v_l)&\ge& f(e_1)+f(e^{l,odd}_{1,2w_1+1})+f(e^{l,odd}_{c,2w_c+1})\\
&\ge& (f(e^{r}_{2,1})+\frac{s}{2}+2)+[f(e^{r,odd}_{1,1})-(\frac{s-2}{2}+1)]+(m-1)\\
&>&f(e_s)+f(e^{r}_{2,1})+f(e^{r}_{1,1})\\
&=&\varphi_f(v_r).
\end{eqnarray*}
\end{description}
\item{Case 2.} $c+d=1$.
In this case, $t=1$, by the fact that $c+d+t+1=\emg(v_l)\ge 3$ and the reduction.
\begin{description}
\item{Subcase 2.1.} $c=1$ and $d=0$.
Since the special case of $c=1$, $d=0$, $t=1$, $w_1=1$, $s=2$ and $k=1$ has been handled separately
as illustrated in Figure~\ref{fig 2},
we may assume at least one of the conditions $w_1\ge 2$, $s\neq 2$, and $k\ge 2$ holds.
Because $t=1$, we have $f(e^1)+1=f(e^{r,odd}_{1,1})$.
If $s=1$, by Equality (\ref{c11 10}), $f(e^{l,odd}_{1,2w_1+1})- f(e^{r}_{2,1})=w_1+1\geq 2$.
Then
\[ \varphi_f(v_l)=
f(e^{l,odd}_{1,2w_1+1})+f(e^1)+f(e_1)> f(e^{r}_{2,1})+f(e^{r}_{1,1})+f(e_1)
=\varphi_f(v_r).\]
If $s$ is odd and greater than one, then $f(e_1)=m-1$. By Equalities (\ref{c11 10}) and (\ref{c11 12}), we have
$f(e^{l,odd}_{1,2w_1+1})- f(e^{r}_{2,1}) =w_1+\frac{s+1}{2}\geq 3$. Then
\[ \varphi_f(v_l)=
f(e^{l,odd}_{1,2w_1+1})+f(e^1)+f(e_1)> f(e^{r}_{2,1})+f(e^{r}_{1,1})+f(e_s)
=\varphi_f(v_r).\]
If $s$ is even, then $f(e^{l,odd}_{1,2w_1+1})=m-1$.
By Equalities (\ref{c11 10}) and (\ref{c11 12}), we have
$f(e_1)- f(e^{r}_{2,1})=w_1+\frac{s}{2}\geq 3.$
Then
\[ \varphi_f(v_l)=
f(e^{l,odd}_{1,2w_1+1})+f(e^1)+f(e_1)> f(e^{r}_{2,1})+f(e^{r}_{1,1})+f(e_s)
=\varphi_f(v_r).\]
\item{Subcase 2.2.} $c=0$ and $d=1$.
This cannot happen since $t=1$ and $d=1$ will imply $t'=1$.
\end{description}
\end{description}
\subsection{Rest of the Proof of Lemma~\ref{R has odd path}.}
The conditions $\emg(v_r)\ge 3$ and $b=0$ imply $a\ge 2$.
Without loss of generality, assume the length of the $a$-th odd path in $R$ is at least 3.
Since $\emg(v_l)\geq 4$, $s\geq 1$, $2x_a+1\geq 3$, and $2x_1+1\geq 1$,
we have the total number of edges $m\geq D_d^{even}+z_d+7$.
We make some observations.
\begin{itemize}
\item At Step 5, if $t\geq 2$, we have
\begin{eqnarray}
f(e^t)+f(e^{t-1}) &=& (A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+t)\notag \\
&&+(A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+t-1)\notag \\
&=&(A^{all}+a-2)+(C^{all}-c)+2s_1+D^{all}+(2t-1)\notag \\
&=& m+a-c+t+(2s_1-s-3)\label{lm6 1}
\end{eqnarray}
\item At Step 9, if $d\geq 2$, we have
\begin{equation}\label{lm6 2}
f(e^{l,even}_{d,2z_d})\geq m-c-3\hbox{ and } f(e^{l,even}_{d-1,2z_{d-1}})\geq m-c-z_d-3.
\end{equation}
\item At Step 10, if $s$ is even, then
\begin{equation}\label{lm6 3}
f(e_1)=f(e^{r,odd}_{a,1})-D^{even}_{d}-1,
\end{equation}
and, by the order we labeled the edges $e^{r,odd}_{a,1}$, $e_{s-1}$, and $e^{l,odd}_{c,2w_c}$, we have
\begin{equation}\label{lm6 4}
f(e^{r,odd}_{a,1})>f(e_{s-1})>f(e^{l,odd}_{c,2w_c}).
\end{equation}
Moreover, we have
\begin{equation}\label{lm6 5}
f(e^{r,odd}_{a,1})=\left\{
\begin{array}{ll}
m-c-1, & \hbox{if }s=1 \hbox{ or }s\hbox{ is even}. \\
m-c-2, & \hbox{otherwise }.
\end{array}
\right.
\end{equation}
\item At Step 11, if $c\geq 2$, we have
\begin{equation}\label{lm6 6}
f(e^{l,odd}_{c,2w_c+1})\geq m-2 \hbox{ and } f(e^{l,odd}_{c-1,2w_{c-1}+1})\geq m-3.
\end{equation}
\item At Step 12, if $s$ is odd, $f(e_1)=f(e^{r,odd}_{a,1})+c+1$. With Equality (\ref{lm6 3}), we have
\begin{equation}\label{lm6 7}
f(e_1)=\left\{
\begin{array}{ll}
f(e^{r,odd}_{a,1})+c+1, & \hbox{if $s$ is odd.} \\
f(e^{r,odd}_{a,1})-D^{even}_{d}-1, & \hbox{if $s$ is even.}
\end{array}
\right.
\end{equation}
\end{itemize}
\noindent{\bf Claim:} $\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$.
Let $u_2$ be the vertex in $V_2$ with the largest vertex sum.
Then, we have
\[u_2=\left\{
\begin{array}{ll}
v_{s}, & \hbox{if $s>1$.} \\
v^{l,odd}_{c,2w_c+1}, & \hbox{if $s=1,c>0$.}\\
v^{r,odd}_{a,1}, & \hbox{if $s=1, c=0$.}
\end{array}
\right.\]
By Inequality (\ref{lm6 4}) and $f(e_s)=m>f(e^{l,odd}_{c,2w_c+1})>f(e^{r,odd}_{a,2})$,
the vertex sum at $v_r$ is
\begin{eqnarray*}
\varphi_f(v_r) &= &\sum_{i=1}^{a-1}f(e^{r,odd}_{i,1})+f(e^{r,odd}_{a,1})+f(e_s)>f(e^{r,odd}_{a,1})+f(e_s)\\
&> &\left\{
\begin{array}{ll}
f(e_{s-1})+f(e_s)=\varphi_f(v_{s}), & \hbox{if $s>1$;} \\
f(e^{l,odd}_{c,2w_c})+f(e^{l,odd}_{c,2w_c+1})=\varphi_f(v^{l,odd}_{c,2w_c+1}), & \hbox{if $s=1,c>0$;}\\
f(e^{r,odd}_{a,1})+f(e^{r,odd}_{a,2})=\varphi_f(v^{r,odd}_{a,1}), & \hbox{if $s=1,c=0$;}\end{array}\right. \\
&= &\varphi_f(u_2).
\end{eqnarray*}
\noindent{\bf Claim:} $\varphi_f(v_l)>\varphi_f(v_r)$.
Recall that $\emg(v_l)>\emg (v_r)\ge 3$,
and for any $e\in E(v_l)$, we have $f(e)>f(e^{r,odd}_{i,1})$ for $1\leq i\leq a-1$.
Thus, if we can find three edges in $E(v_l)$ such that the sum of their labels is
not less than the sum of the two maximal labels of the edges in $E(v_r)$, namely $f(e_s)+f(e^{r,odd}_{a,1})$,
then we are done.
Recall that $\emg(v_l)=c+d+t+1$. The choice of the three edges in $E(v_l)$
depends on the values of $c$, $d$, and $t$:
\begin{description}
\item{Case 1.} $c\geq 2$
By Inequality (\ref{lm6 6}) and Equality (\ref{lm6 7}), and $m\geq D^{even}_d+z_d+7$,
\begin{eqnarray*}
f(e^{l,odd}_{c,2w_c+1})+f(e^{l,odd}_{c-1,2w_{c-1}+1})+f(e_1)
& \geq & (m-2)+(m-3)\\
&&+(f(e^{r,odd}_{a,1})-D^{even}_d-1)\\
&=& (m+f(e^{r,odd}_{a,1}))+(m-D^{even}_d-6) \\
&>& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
\item{Case 2.} $c=1$.
\begin{description}
\item{Subcase 2.1.} $d\geq 1$.
By Inequalities (\ref{lm6 2}) and (\ref{lm6 6}), and Equality (\ref{lm6 7}),
\begin{eqnarray*}
f(e^{l,odd}_{c,2w_c+1})+f(e^{l,even}_{d,2z_{d}})+f(e_1)
&\geq& (m-2)+(m-c-3)\\
&&+(f(e^{r,odd}_{a,1})-D^{even}_d-1) \\
&=& (m+f(e^{r,odd}_{a,1}))+(m-D^{even}_d-7) \\
&>& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
\item{Subcase 2.2.} $d=0$.
By Inequality (\ref{lm6 6}) and Equality (\ref{lm6 7}),
\begin{eqnarray*}
f(e^{l,odd}_{c,2w_c+1})+f(e^t)+f(e_1)
&\geq& (m-2)+(A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+t) \\
&& +(f(e^{r,odd}_{a,1})-D^{even}_d-1)\\
&=& (m+f(e^{r,odd}_{a,1}))+(A^{odd}_a+C^{odd}_c+s_1+t-5) \\
&>& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
\end{description}
\item{Case 3.} $c=0$.
\begin{description}
\item{Subcase 3.1.} $d\geq 2$.
By Inequality (\ref{lm6 2}) and Equality (\ref{lm6 7}),
\begin{eqnarray*}
f(e^{l,even}_{d,2z_d})+f(e^{l,even}_{d-1,2z_{d-1}})+f(e_1)
&\geq& (m-3)+(m-z_d-3)\\
&&+(f(e^{r,odd}_{a,1})-D^{even}_d-1) \\
&=& (m+f(e^{r,odd}_{a,1}))+(m-D^{even}_d-z_d-7)\\
&\ge& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
\item{Subcase 3.2.} $d=1$.
Then we have $t\geq 2$. By Inequality (\ref{lm6 2}), Equality (\ref{lm6 7}), and $A^{odd}_a\geq 3$,
\begin{eqnarray*}
f(e^{l,even}_{d,2z_d})+f(e^t)+f(e_1)
&\geq& (m-3)+(A^{odd}_a-1+C^{odd}_c-c+s_1+D^{odd}_d+t)\\
& &+(f(e^{r,odd}_{a,1})-D^{even}_d-1) \\
&\geq& (m+f(e^{r,odd}_{a,1}))+(A^{odd}_a+s_1+t-5) \\
&\geq& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
\item{Subcase 3.3.} $d=0$.
Then $t\geq 3$. If $s$ is odd, by Equalities (\ref{lm6 1}) and (\ref{lm6 7}), and $a\geq 2$,
\begin{eqnarray*}
f(e^t)+f(e^{t-1})+f(e_1)
&\geq& m+t+a-c-6+(f(e^{r,odd}_{a,1})+c+1) \\
&=& (m+f(e^{r,odd}_{a,1}))+(t+a-5) \\
&\geq& f(e_s)+f(e^{r,odd}_{a,1}).
\end{eqnarray*}
If $s$ is even, by Equality (\ref{lm6 1}),
\begin{eqnarray*}
f(e^t)+f(e^{t-1})+f(e_1)
&\geq& m+t+a-c-5+(f(e^{r,odd}_{a,1})-D^{even}_d-1) \\
&=& (m+f(e^{r,odd}_{a,1}))+(t+a-6)\\
&\ge &f(e_s)+f(e^{r,odd}_{a,1})+(t+a-6).
\end{eqnarray*}
The quantity $t+a-6$ in the above inequality is negative only if $t=3$ and $a=2$. However,
we have $f(e^1)\ge (A^{odd}_a-1)+1 \geq 3$ and $f(e^{r,odd}_{1,1})=1$. So
\begin{eqnarray*}
\varphi_f(v_l) &=& f(e^1)+f(e^{t})+f(e^{t-1})+f(e_1) \\
&\geq& 3+(m+f(e^{r,odd}_{a,1}))+(t+a-6) \\
&>&1+f(e_s)+f(e^{r,odd}_{a,1})\\
&=&f(e^{r,odd}_{1,1})+f(e_s)+f(e^{r,odd}_{a,1})\\
&=&\varphi_f(v_r).
\end{eqnarray*}
\end{description}
\end{description}
\subsection{Rest of the Proof of Lemma \ref{R has even path}}
We make some observations.
\begin{itemize}
\item From Step 1, Step 8, and Step 9, we have
\begin{equation}\label{lm9b18}
f(e^i)>f(e^{r,even}_{i,1})\hbox{ for }i\in[\alpha], \hbox{ and } f(e^i)>f(e^{r,odd}_{i',1})\hbox{ for } i\in [\beta, t]\hbox{ and } i'\in[a].
\end{equation}
\item From Step 16 and Step 18, we have
\begin{equation}\label{lm9b17}
f(e)>f(e')
\end{equation}
for $e\in \{e^{l,odd}_{i,2w_i+1},i\in [c]\}\cup \{e^{l,even}_{i,2z_i}, i\in [d]\}$ and $e'\in E(v_r)\setminus \{e_s\}$.
\item At Step 9, we have
\begin{equation}\label{lm9b11}
f(e^t)=B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t,\hbox{ if }t\geq 1,
\end{equation}
and
\begin{equation}\label{lm9b12}
f(e^{t-1})=B^{even}_{b}+A^{odd}_a+C^{odd}_c-c+s_1+D^{odd}_{d}+t-1,\hbox{ if }t\geq 2.
\end{equation}
\item At Step 14, if $s$ is even, then we have
\begin{equation}\label{lm9b13}
f(e_1)=m-y_b-(\alpha-\beta)-c-D^{even}_d-1.
\end{equation}
\item At Step 15, after labeling $e^{r,even}_{b,1}$, we have
\begin{equation}\label{lm9b14}
f(e^{r,even}_{b,1})=m-D^{even}_d-(y_b-1)-(\alpha-\beta)-c-\left\{
\begin{array}{ll}
1, & \hbox{if $s=1$ or $s$ is even}, \\
2, & \hbox{if $s\ge 3$ and $s$ is odd}.
\end{array}
\right.
\end{equation}
Moreover, when $s\ge 2$,
\begin{equation}\label{lm9b15}
f(e^{r,even}_{b,1})>f(e_{s-1}).
\end{equation}
\item By the order we labeled edges on $E$, we have
\begin{equation}\label{lm9b16}
f(e^{r,even}_{b,1})>f(e^{l,odd}_{c,2w_c})>f(e^{r,even}_{\alpha,3})>f(e^{l,even}_{d,2z_d-1})>f(e^{r,even}_{b,2y_b}).
\end{equation}
\end{itemize}
\noindent{\bf Claim:} $\varphi_f(v_r)>\varphi_f(u)$ for any $u\in V_2$.
Let $u_2$ be the vertex in $V_2$ with the largest vertex sum. If $s=1$, then
\[\varphi_f(u_2)=\left\{
\begin{array}{ll}
\varphi_f(v^{l,odd}_{c,2w_c+1})=f(e^{l,odd}_{c,2w_c})+f(e^{l,odd}_{c,2w_c+1}), & \hbox{if $c>0$;} \\
\varphi_f(v^{r,even}_{\alpha,2})=f(e^{r,even}_{\alpha,3})+f(e^{r,even}_{\alpha,2}), & \hbox{if $c=0,(\alpha-\beta)>0$;} \\
\varphi_f(v^{l,even}_{d,2z_d})=f(e^{l,even}_{d,2z_d-1})+f(e^{l,even}_{d,2z_d}), & \hbox{if $c=0,(\alpha-\beta)=0, d>0$;} \\
\varphi_f(v^{r,even}_{b,2y_b-1})=f(e^{r,even}_{b,2y_b})+f(e^{r,even}_{b,2y_b-1}), & \hbox{otherwise.}
\end{array}
\right.\]
By Inequality (\ref{lm9b16}) and $f(e_s)=m$, we have \[\varphi_f(v_r)>f(e^{r,even}_{b,1})+f(e_s)>\varphi_f(u_2).\]
If $s\ge 2$, then $u_2=v_s$. By Inequality (\ref{lm9b15}), we have $\varphi_f(v_r)>\varphi_f(u_2)$.
\noindent{\bf Claim:} $\varphi_f(v_l)>\varphi_f(v_r)$.
The idea is similar to that in the proof of Lemma~\ref{R has odd path}.
We will choose $k+1$ edges in $E(v_l)$ and $k$ edges in $E(v_r)$
such that the sum of the labels of the $k+1$ edges in $E(v_l)$ is not less than
the sum of the labels of the $k$ edges in $E(v_r)$.
Moreover, for the other edges $e'\in E(v_l)$ and $e''\in E(v_r)$ which are not chosen, $f(e')>f(e'')$ holds.
\begin{description}
\item{Case 1.} $s=1.$
If $t\le 1$, by Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), we have $\varphi_f(v_l)>\varphi_f(v_r)$.
If $t\geq 2$,
by Equalities (\ref{lm9b11}), (\ref{lm9b12}), and (\ref{lm9b14}), we have \[f(e^{t})+f(e^{t-1})=m+a-c+t-2> f(e^{r,even}_{b,1}).\]
With Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), $\varphi_f(v_l)>\varphi_f(v_r)$.
\item{Case 2.} $s\ge 2.$
If $t=0$, we have either $f(e^{l,odd}_{c,2w_c+1})+f(e_1)>f(e_s)$ or $f(e^{l,even}_{d,2z_d})+f(e_1)>f(e_s)$; and
if $t=1$, by Equalities (\ref{lm9b11}) and (\ref{lm9b13}), $f(e^t)+f(e_1)>f(e_s)$ holds.
With Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), we have $\varphi_f(v_l)>\varphi_f(v_r)$.
For $t\geq 2$, note that $f(e^t)+f(e^{t-1})>f(e^{r,even}_{b,1})$ and, if $s$ is odd, then $f(e_1)=m-1$.
So $f(e^t)+f(e^{t-1})+f(e_1)\geq f(e_s)+f(e^{r,even}_{b,1})$.
With Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), we have $\varphi_f(v_l)>\varphi_f(v_r)$.
If $s$ is even, we need to compare more edges.
First we have $f(e_1)=f(e^{r,even}_{b,1})+1$ and,
by Equalities (\ref{lm9b11}) and (\ref{lm9b12}), $f(e^t)+f(e^{t-1})=m+a-c+t-3$.
\begin{description}
\item{Subcase 2.1.} $c\ge 1$.
We compare the sum of the labels of the edges
$e^{l,odd}_{c,2w_c+1}$, $e^t$, $e^{t-1}$, and $e_1$ in $E(v_l)$
with the sum of the maximal three labels of edges in $E(v_r)$.
Let
\[r=\max\{f(e)\mid e\in E(v_r)\setminus \{e_s,e^{r,even}_{b,1}\}\}.\]
Then $f(e^{l,odd}_{c,2w_c+1})-r>c+3$, and hence
\[
f(e^{l,odd}_{c,2w_c+1})+f(e^t)+f(e^{t-1})+f(e_1)>m+f(e^{r,even}_{b,1})+r.
\]
With Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), $\varphi_f(v_l)>\varphi_f(v_r)$ holds.
\item{Subcase 2.2.} $c=0$.
If $t>3$ or $a\geq 1$, then $m+a-c+t-3\geq m+1$ and $f(e^t)+f(e^{t-1})+f(e_1)\geq f(e_s)+f(e^{r,even}_{b,1})$.
The remaining cases are $t=2$ and $a=0$, or $t=3$ and $a=0$.
Note that $a=0$ implies $b\ge 2$, because $\emg(v_r)\ge 3$.
If $d>0$, then regardless of whether $t=2$ or $t=3$,
$f(e^{l,even}_{d,2z_d})-f(e^{r,even}_{b-1,1})>2$.
So \[f(e^{l,even}_{d,2z_d})+f(e^t)+f(e^{t-1})+f(e_1)>m+f(e^{r,even}_{b,1})+f(e^{r,even}_{b-1,1}).\]
With Inequalities (\ref{lm9b18}) and (\ref{lm9b17}), $\varphi_f(v_l)>\varphi_f(v_r)$.
If $d=0$, by $\emg(v_l)>\emg(v_r)$, we have $t=3$ and $b=2$. Hence $\beta=1$. Then
$f(e^{t-2})=f(e^1)=3$ and $f(e^{r,even}_{1,1})=1$.
Therefore,
\begin{eqnarray*}
\varphi_f(v_l)&=&f(e^t)+f(e^{t-1})+f(e^{t-2})+f(e_1)\\
&>&m+f(e^{r,even}_{b,1})+f(e^{r,even}_{b-1,1})\\
&=&\varphi_f(v_r).
\end{eqnarray*}
\end{description}
\end{description}
\end{document}
\begin{document}
\begin{abstract}
We study the asymptotic behavior of solutions
to elliptic equations of the second order in a two-dimensional exterior domain.
Under the assumption that the solution belongs to $L^q$ with $q \in [2,\infty)$,
we prove a pointwise asymptotic estimate of the solution at spatial infinity
in terms of the behavior of the coefficients.
As a corollary, we obtain a Liouville-type theorem
in the case when the coefficients may grow at spatial infinity.
We also study a corresponding parabolic problem
in the $n$-dimensional whole space
and discuss the energy identity
for solutions in $L^q$.
As a corollary, we also show a Liouville-type theorem for both forward and ancient solutions.
\end{abstract}
\keywords{elliptic and parabolic equations of second order; asymptotic behavior}
\maketitle
\section{Introduction}
\footnote[0]{2010 Mathematics Subject Classification. 35J15, 35K10, 35B53 }
We consider an elliptic differential equation of the second order in
divergence form,
\begin{align}
\label{ell}
- \sum_{i,j=1}^2 \partial_i (a_{ij}(x) \partial_j u) + \mathbf{b}(x)\cdot \nabla u + c(x) u = 0,
\quad x \in \Omega
\end{align}
where
$\Omega$
is the whole plane $\mathbb{R}^2$
or an exterior domain
$\Omega = \overline{B_{r_0}(0)}^c = \{ x \in \mathbb{R}^2 ; |x| = r > r_0 \}$.
Our aim is to clarify how the asymptotic behavior of the coefficients
$a_{ij}(x)$ and $\mathbf{b}(x)$,
in particular,
their {\it growth} conditions at infinity, influences
the asymptotic behavior of the solution $u(x)$ of \eqref{ell} as $|x|\to \infty$.
Our study is motivated by the investigation of the asymptotic behavior of solutions to
the stationary Navier-Stokes equations
\begin{align}
\label{ns}
\left\{ \begin{array}{l}
-\Delta v + (v \cdot \nabla)v + \nabla p = 0,\\
\diver v = 0,
\end{array} \right.
\quad x \in \Omega.
\end{align}
In the pioneering work of Leray \cite{Le},
the existence of solutions $(v,p)$ of \eqref{ns} with the finite Dirichlet integral
\begin{align}
\label{d_sol}
\int_{\Omega} |\nabla v (x) |^2 \,dx < \infty
\end{align}
was proved.
Then, Gilbarg-Weinberger \cite{GiWe78},
Amick \cite{Am88},
and Korobkov-Pileckas-Russo \cite{KoPiRu17, KoPiRu18}
studied the asymptotic behavior of solutions satisfying \eqref{d_sol}.
They proved that the solution $v$ to \eqref{ns}--\eqref{d_sol}
converges to a constant vector $v_{\infty}$ uniformly at infinity, i.e.,
\begin{align*}
\lim_{r\to \infty} \sup_{\theta \in [0,2\pi]} |v(r,\theta) - v_{\infty}| = 0,
\end{align*}
where $(r,\theta)$ denotes the polar coordinates.
A basic approach to the analysis of \eqref{ns} is to handle
the vorticity
$\omega = \rot v = \partial_{x_1} v_2 - \partial_{x_2} v_1$
which satisfies the equation
\begin{align}
\label{vor_eq}
-\Delta \omega + v\cdot \nabla \omega = 0.
\end{align}
In our previous result \cite{KoTeWapr},
we studied the asymptotic behavior of solutions $\omega$ to \eqref{vor_eq}
with the finite {\it generalized} Dirichlet integral
\begin{align}
\label{g_diri}
\int_{\Omega} |\nabla v(x)|^q \,dx < \infty
\end{align}
for some $q \in (2,\infty)$.
Note that \eqref{g_diri} implies $\omega \in L^q(\Omega)$.
Indeed, it is proved in \cite{KoTeWapr} that the vorticity $\omega$ and the gradient $\nabla v$
of the velocity behave like
\begin{align*}
|\omega (r,\theta)| = o(r^{-(\frac{1}{q} + \frac{1}{q^2})}),\quad
|\nabla v(r,\theta)| = o(r^{-(\frac{1}{q} + \frac{1}{q^2})}\log r)
\quad
\mbox{as $r \to \infty$},
\end{align*}
respectively.
The crucial point is to regard the velocity $v(x)$ as a given coefficient in the equation \eqref{vor_eq}
and to analyze how the asymptotic behavior of $v(x)$ affects
that of $\omega$.
In this respect, the problem \eqref{ell} may be regarded as a generalization of \eqref{vor_eq}.
In this paper, we generalize the result of \cite{KoTeWapr} to
the elliptic equation \eqref{ell},
and prove the asymptotic behavior of solutions
at spatial infinity under the assumption that
$u \in L^q(\Omega)$ with some $q \in [2,\infty)$.
In particular, we are interested in the case when
the coefficients $a_{ij}$, $\mathbf{b}$ may grow at spatial infinity.
\par
Our precise assumptions and results are the following.
\par
\noindent
{\bf Assumptions on the coefficients for the elliptic problem \eqref{ell}}
\begin{itemize}
\item[(e-i)]
$a_{ij} \in C^1(\Omega)$, $a_{ij}= a_{ji}$ for $i, j =1, 2$ and
\begin{align*}
\sum_{i,j=1}^2 a_{ij}(x) \xi_i \xi_j \ge \lambda |\xi|^2 \quad (\xi \in \mathbb{R}^2, x\in \Omega)
\end{align*}
with some $\lambda>0$. The growth condition
\begin{align*}
| a_{ij}(x) | = O(|x|^{\alpha}), \quad
|\partial_i a_{ij}(x) | = O(|x|^{\alpha-1}) \quad (|x|\to \infty)
\end{align*}
is satisfied for $i,j=1,2$
with some $\alpha \in [0,2]$.
\item[(e-ii)]
$\mathbf{b}(x) = (b_1(x), b_2(x)) \in C^1(\Omega)$
satisfies
\begin{align*}
\mathbf{b}(x) = O(|x|^{\beta}) \quad (|x| \to \infty)
\end{align*}
with some $\beta \le 1$.
\item[(e-iii)]
$c(x)$ is measurable and {\it nonnegative}.
\item[(e-iv)]
Either of the following conditions (1) or (2) holds:
\begin{itemize}
\item[(1)]
$\diver \mathbf{b}(x) \le 2 c(x)$,
\item[(2)]
$|\diver \mathbf{b}(x)| = O(|x|^{\beta-1}) \quad (|x| \to \infty)$.
\end{itemize}
\end{itemize}
\par
Our first result on the elliptic equation \eqref{ell} now reads
\begin{theorem}\label{thm_asym}
Let the assumptions {\rm (e-i)--(e-iv)} above hold.
Suppose that $u \in C^2(\Omega)$ satisfies \eqref{ell} in $\Omega$ and
that $u \in L^q (\Omega)$ with some $q \in [2,\infty)$.
Then, we have that
\begin{equation}\label{decay}
\sup_{0\le \theta \le 2\pi} |u(r,\theta)| = o( r^{-\frac{1}{q}(1 + \frac{\gamma}{2})} )
\quad\mbox{as $r \to \infty$},
\end{equation}
where
$\gamma = \min\{1-\beta, 2-\alpha \}$.
\end{theorem}
\begin{remark}
When
$\alpha = 0$ and $\beta \le -1$, we have
\begin{align*}
\sup_{0\le \theta \le 2\pi} |u(r,\theta)| = o( r^{-\frac{2}{q}} )
\quad\mbox{as $r \to \infty$},
\end{align*}
which exhibits the correspondence to the condition
$u \in L^q(\Omega)$.
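Indeed, the exponent can be read off from the definition of $\gamma$: since $\alpha = 0$ and $\beta \le -1$ imply $1-\beta \ge 2$ and $2-\alpha = 2$,
\begin{align*}
\gamma = \min\{1-\beta,\, 2-\alpha\} = 2,
\qquad
\frac{1}{q}\left(1+\frac{\gamma}{2}\right) = \frac{2}{q}.
\end{align*}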
\end{remark}
As a corollary of the above theorem,
we have the following Liouville-type result.
\begin{corollary}\label{cor_liouville}
Assume {\rm (e-i)--(e-iv)}.
Let $\Omega = \mathbb{R}^2$ and let
$u \in C^2(\mathbb{R}^2)$ be a solution to \eqref{ell} satisfying
$u \in L^q (\mathbb{R}^2)$
with some $q \in [2,\infty)$.
Then, it holds that $u \equiv 0$ on $\mathbb R^2$.
\end{corollary}
\begin{remark}
The above corollary is sharp in the sense that
if $q = \infty$, then there exists a solution $u$ of \eqref{ell}
which is not a constant.
Indeed, let
$a_{ij}(x) = \delta_{ij}$,
$\mathbf{b}(x) = (-x_1, x_2)$ (namely, $\beta = 1$),
and
$c(x) \equiv 0$.
Consider
$u = u(x_1, x_2) = f(x_1)$
with
\begin{align*}
f(\tau) = \int_0^{\tau} e^{-s^2/2} \,ds,
\quad \tau \in \mathbb R.
\end{align*}
It is easy to see that $u \in L^{\infty}(\mathbb{R}^2)$ with
$- \Delta u + \mathbf{b}(x)\cdot\nabla u = 0$ in $\mathbb R^2$.
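Indeed, since $u$ depends only on $x_1$ and $f'(\tau) = e^{-\tau^2/2}$, a direct computation gives
\begin{align*}
-\Delta u + \mathbf{b}(x)\cdot\nabla u
= -f''(x_1) - x_1 f'(x_1)
= x_1 e^{-x_1^2/2} - x_1 e^{-x_1^2/2}
= 0,
\end{align*}
while $|u(x)| \le \int_0^{\infty} e^{-s^2/2}\,ds = \sqrt{\pi/2}$ shows the boundedness.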
Obviously, $u$ is not a constant.
\end{remark}
\begin{remark}
{\rm
For the elliptic equation \eqref{ell} with $\Omega = \mathbb{R}^n$ and $n\ge 1$,
the result by
Seregin-Silvestre-\v{S}ver\'{a}k-Zlato\v{s} \cite[Theorem 1.2]{SeSiSvZl12}
implies a Liouville-type theorem under the conditions
that $a_{ij}$ is bounded, $\mathbf{b} \in BMO^{-1}$ with $\diver \mathbf{b} = 0$,
and that $c \equiv 0$,
namely, every bounded solution is a constant.
Compared with their theorem, our result allows
the coefficients $a_{ij}$, $\mathbf{b}$ to grow at spatial infinity.
On the other hand, we impose the stronger assumption on the solution,
namely $u \in L^q(\Omega)$ with
some $q \in [2,\infty)$.
}
\end{remark}
\begin{remark}
{\rm
The above corollary may be regarded as a generalization of \cite[Corollary 1.2]{KoTeWapr},
which states that
every smooth solution $v$ of \eqref{ns} in $\mathbb{R}^2$
satisfying the condition $\nabla v \in L^q(\mathbb{R}^2)$ for some $q \in (2,\infty)$
must be a constant vector.
Recently, Liouville-type theorems for the stationary Navier-Stokes equations
have been studied extensively, and we refer the reader to
\cite{BiFuZh13, Ch14, Ch15, ChWo16, ChJaLepr, KoTeWa17, Se16} and the references therein.
}
\end{remark}
The proofs of Theorem \ref{thm_asym} and Corollary \ref{cor_liouville}
are given in the next section.
Our approach is based on that of Gilbarg and Weinberger \cite{GiWe78}
and its generalization introduced in \cite{KoTeWapr}.
We first show a certain elliptic estimate of the solution $u$ by the energy method
(Lemma \ref{lem_1}).
Then, combining it with
the integral mean value theorem for the radial variable $r$
and the fundamental theorem of calculus for the angular variable $\theta$,
we derive a pointwise decay estimate of the solution along with
a special sequence $\{ r_n \}_{n=1}^{\infty}$ of the radial variable satisfying
$\displaystyle{\lim_{n\to\infty}r_n = \infty}$.
Finally, applying the maximum principle in the annular domains
between $r=r_{n-1}$ and $r=r_n$,
we have the desired uniform decay \eqref{decay} (Lemma \ref{lem_asym}).
Furthermore, our approach is also applicable to parabolic problems.
We discuss energy estimates of solutions to
the corresponding parabolic equation in the
$n$-dimensional whole space $\mathbb{R}^n$:
\begin{align}
\label{para}
\partial_t u - \sum_{i,j=1}^n \partial_i (a_{ij}(x,t) \partial_j u)
+ \mathbf{b}(x,t)\cdot \nabla u + c(x,t) u = 0,
\quad x \in \mathbb{R}^n, t\in I,
\end{align}
where
$I \subset \mathbb{R}$ is an interval.
We impose similar assumptions on the coefficients
$a_{ij}(x,t)$, $\mathbf{b}(x,t)$ and $c(x,t)$, on the premise that they are measurable functions
on $\mathbb{R}^n\times I$:
\par
\noindent
{\bf Assumptions on the coefficients for the parabolic problem \eqref{para}}
\begin{itemize}
\item[(p-i)]
$a_{ij} \in C^{1,0}(\mathbb{R}^n\times I)$, $a_{ij}=a_{ji}$ for $i, j = 1, \ldots, n$ and
\begin{align*}
\sum_{i,j=1}^n a_{ij}(x,t) \xi_i \xi_j \ge \lambda |\xi|^2 \quad (\xi \in \mathbb{R}^n, x\in \mathbb{R}^n, t \in I)
\end{align*}
with some $\lambda>0$. The growth condition
\begin{align*}
| a_{ij}(x,t) | = O(|x|^2),
\quad |\partial_i a_{ij}(x,t) | = O(|x|) \quad (|x| \to \infty)
\end{align*}
holds locally uniformly in $t \in I$.
\item[(p-ii)]
$\mathbf{b}(x,t) = (b_1(x,t), \ldots, b_n(x,t)) \in C^{1,0}(\mathbb{R}^n\times I)$
satisfies
\begin{align*}
\mathbf{b}(x,t) = O(|x|) \quad (|x| \to \infty)
\end{align*}
locally uniformly in $t \in I$.
\item[(p-iii)]
$c(x, t)$ is nonnegative, and
$\diver \mathbf{b}(x,t) = \sum_{j=1}^{n} \partial_{x_j} b_j(x,t) \le 2c(x,t)$
holds for all
$(x,t) \in \mathbb{R}^n \times I$.
\end{itemize}
Under these assumptions,
we show the following energy identity
for solutions belonging to $L^q(\mathbb{R}^n \times I)$.
\begin{theorem}\label{thm_en_est}
Assume {\rm (p-i)} and {\rm (p-ii)}.
Let $u \in C^{2,1}(\mathbb{R}^n \times I)$ be a solution to \eqref{para} satisfying
$u \in L^{q} (\mathbb{R}^n \times I)$
with some $q \in [2,\infty)$.
Then, we have the energy identity
\begin{align}
\label{en_id}
&\int_{\mathbb{R}^n} |u(x,t)|^q \,dx
+ q(q-1) \int_{s}^t \int_{\mathbb{R}^n} |u(x,\tau)|^{q-2}
\sum_{i, j =1}^na_{ij}(x, \tau)\partial_iu(x, \tau)\partial_ju(x, \tau)\,dx d\tau \\
\notag
&\quad + \int_{s}^t \int_{\mathbb{R}^n} (-\diver \mathbf{b}(x,\tau) + qc(x,\tau)) |u(x,\tau)|^q \,dx d\tau \\
\notag
&=
\int_{\mathbb{R}^n} |u(x,s)|^q \,dx
\end{align}
for all $t, s \in I$ such that $s \le t$.
\end{theorem}
\begin{remark}
In Theorem \ref{thm_en_est}, we do not need the assumption {\rm (p-iii)}.
\end{remark}
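As a simple consistency check, with $a_{ij} = \delta_{ij}$, $\mathbf{b} \equiv 0$, $c \equiv 0$, and $q=2$, the identity \eqref{en_id} reduces to the classical energy identity for the heat equation:
\begin{align*}
\| u(t) \|_{L^2(\mathbb{R}^n)}^2
+ 2 \int_s^t \| \nabla u(\tau) \|_{L^2(\mathbb{R}^n)}^2 \,d\tau
= \| u(s) \|_{L^2(\mathbb{R}^n)}^2.
\end{align*}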
By Theorem \ref{thm_en_est},
we have the following Liouville-type results
on solutions of the Cauchy problem of \eqref{para} and on ancient solutions of \eqref{para}.
\begin{corollary}\label{cor_liouville_pr}
In addition to {\rm (p-i)} and {\rm (p-ii)}, assume that {\rm (p-iii)} holds.\\
{\rm (i)} Let $u \in C^2(\mathbb{R}^n \times [0, T))$ be a solution of \eqref{para}
satisfying $u \in L^{q} (\mathbb{R}^n \times [0, T))$
with some $q \in [2,\infty)$.
Moreover, we assume that $u(x,0) \equiv 0$ on $\mathbb{R}^n$.
Then, we have $u \equiv 0$ on $\mathbb{R}^n \times [0, T)$. \\
{\rm (ii)} Let $u \in C^2(\mathbb{R}^n \times (-\infty, 0))$ be
an ancient solution of \eqref{para} satisfying
$u \in L^{q} (\mathbb{R}^n \times (-\infty, 0))$
with some $q \in [2,\infty)$.
Then, we have $u \equiv 0$ on $\mathbb{R}^n \times (-\infty, 0)$.
\end{corollary}
\begin{remark}
{\rm
For the heat equation
$\partial_t v - \Delta v =0$
on a complete noncompact Riemannian manifold with
nonnegative Ricci curvature,
Souplet--Zhang \cite{SoZh} proved that
any positive ancient (or entire) solution $u$ having the bound
\begin{align*}
u(x,t) = O(e^{o(d(x)+\sqrt{t})})
\quad\mbox{as $d(x) \to \infty$}
\end{align*}
must be a constant,
where
$d(x)$ is the distance from a base point.
They also proved that any ancient (or entire) solution $u$ having the bound
\begin{align*}
u(x,t) = o(d(x) + \sqrt{t})
\quad\mbox{as $d(x) \to \infty$}
\end{align*}
must be a constant.
Compared with their result, we are able to treat more
general time-dependent coefficients
which may grow at the spatial infinity.
On the other hand, we impose on solutions $u$ the stronger assumption that
$u \in L^{q} (\mathbb{R}^n \times I)$ with some $q \in [2,\infty)$.
}
\end{remark}
\begin{remark}
{\rm
The Liouville-type theorem for
the non-stationary Navier-Stokes equations
\begin{align*}
\left\{ \begin{array}{l}
\partial_t v -\Delta v + (v \cdot \nabla)v + \nabla p = 0,\\
\diver v = 0,
\end{array} \right.
\quad (x,t) \in \mathbb{R}^n\times I
\end{align*}
has been fully studied, where $I=(0, T)$ or $I=(-\infty, 0)$.
We refer the reader to
\cite{KoNaSeSv09, Ch11, Gi13}
and the references therein.
}
\end{remark}
\section{Proof of Theorem \ref{thm_asym}}
In what follows, we shall denote by
$C$
various constants which may change from line to line.
In particular, we denote by
$C = C(*, . . . , *)$
constants depending only on the quantities appearing in parentheses.
\begin{lemma}\label{lem_1}
Under the assumptions of Theorem \ref{thm_asym},
for every $r_1 > r_0$, we have
\begin{align*}
\int_{r\ge r_1} r^{\gamma} |u|^{q-2} |\nabla u|^2 \,dx
\le C(q,r_1) \int_{\Omega} |u|^q \,dx,
\end{align*}
where
$\gamma = \min \{ 1-\beta, 2-\alpha \}$.
\end{lemma}
\begin{proof}
Let
$\eta = \eta(r) \in C_0^{\infty}(\Omega)$
and let
$h = h(u) \in C^1(\mathbb{R})$
be a piecewise $C^2$ function specified later.
We start with the following identity:
\begin{align*}
& - \sum_{i=1}^2 \partial_i
\left[ \eta(r) \sum_{j=1}^2 a_{ij} \partial_j (h(u))
- \sum_{j=1}^2 a_{ij} (\partial_j \eta) h(u) - \eta(r) h(u) b_i(x) \right] \\
&= - \eta(r) h''(u) \left( \sum_{i,j=1}^2 a_{ij} \partial_i u \partial_j u \right) \\
&\quad
+ h(u) \left[ \sum_{i,j = 1}^2 \partial_j (a_{ij} \partial_i \eta)
+ \mathbf{b}(x) \cdot \nabla \eta(r) + \eta(r)\diver \mathbf{b} \right]\\
&\quad - \eta (r) h'(u)
\left[ \sum_{i,j=1}^2 \partial_j (a_{ij} \partial_i u ) - \mathbf{b} \cdot \nabla u \right]
\end{align*}
Since $u$ satisfies the equation \eqref{ell},
integration of the above identity over $\Omega$ yields
\begin{align*}
\int_{\Omega} \eta(r) h''(u) \left( \sum_{i,j=1}^2 a_{ij} \partial_i u \partial_j u \right) \,dx
&= \int_{\Omega} h(u) \left[ \sum_{i,j = 1}^2 \partial_j (a_{ij} \partial_i \eta)
+ \mathbf{b}(x) \cdot \nabla \eta(r) \right] \,dx \\
&\quad + \int_{\Omega} \eta(r) (h(u) \diver \mathbf{b}(x) - h'(u) c(x) u) \,dx.
\end{align*}
Let
$r_1 > r_0$
and let
$\xi_1 = \xi_1(r) \in C^{\infty}(\Omega)$
be nonnegative, monotone increasing in $r$,
and satisfy $\xi_1(r) = 1$ for $r \ge r_1$ and $\xi_1(r) = 0$ for $r \le (r_0 + r_1)/2$.
Let
$\xi_2 = \xi_2(r) \in C_0^{\infty}(B_1(0))$
be nonnegative, monotone decreasing, and satisfy
$\xi_2(r) = 1$ for $r \le 1/2$.
We choose the cut-off function
$\eta(r)$
as
\begin{align*}
\eta(r) = r^{\gamma} \xi_1(r) \xi_2 \left(\frac{r}{R}\right),
\end{align*}
with the parameter $R\ge 1$,
where
$\gamma = \min\{ 1-\beta, 2-\alpha \}$.
Then, we have
$|\nabla \eta(r)| \le Cr^{\gamma-1}$,
$|\partial_i \partial_j \eta(r) | \le Cr^{\gamma-2}$.
Now, we take
$h(u) = |u|^q$.
Then, it holds that
$h'(u) = q|u|^{q-2}u$
and
$h''(u) = q(q-1) |u|^{q-2}$.
Therefore, we obtain
\begin{align}
\label{eq_energy}
&q(q-1) \int_{\Omega} \eta(r) |u|^{q-2} \left( \sum_{i,j=1}^2 a_{ij}(x) \partial_i u \partial_j u \right) \,dx \\
\nonumber
&= \int_{\Omega} |u|^q
\left[ \sum_{i,j=1}^2 \partial_i ( a_{ij}(x) \partial_j \eta) + \mathbf{b}(x) \cdot \nabla \eta \right] \,dx \\ \nonumber
&\quad + \int_{\Omega} \eta ( \diver \mathbf{b}(x) -q c(x) ) |u|^q \,dx.
\nonumber
\end{align}
By the assumptions (e-i) and (e-ii), the estimates
\begin{align*}
& |\mathbf{b}(x) \cdot \nabla \eta(r) | \le C,\\
& | a_{ij}(x) \partial_i \partial_j \eta(r) | \le C, \quad
| \partial_i a_{ij}(x) \partial_j \eta(r) | \le C,
\quad i, j=1, 2
\end{align*}
hold, and hence the first term on the right-hand side of \eqref{eq_energy} is estimated by
$C \int_{\Omega} |u|^q \,dx$.
Furthermore, since $c(x) \ge 0$ by the assumption (e-iii),
we have by (e-iv) that
$\diver \mathbf{b}(x) -q c(x) \le 0$
or
$| \eta \diver \mathbf{b} | \le C$,
and hence,
\begin{align*}
\int_{\Omega} \eta ( \diver \mathbf{b}(x) -q c(x) ) |u|^q \,dx
\le C \int_{\Omega} |u|^q \,dx
\end{align*}
holds in both cases.
Thus, we obtain from the above estimates and the assumption (e-i) that
\begin{align*}
&\int_{r_1 \le r \le R/2} r^{\gamma} |u|^{q-2} |\nabla u|^2\,dx
\le C \int_{\Omega} |u|^q \,dx.
\end{align*}
Letting $R\to \infty$, we conclude
\begin{align*}
\int_{r\ge r_1} r^{\gamma} |u|^{q-2} |\nabla u|^2 \,dx
\le C \int_{\Omega} |u|^q \,dx.
\end{align*}
This completes the proof of Lemma \ref{lem_1}.
\end{proof}
\begin{lemma}\label{lem_asym}
Under the assumptions of Theorem \ref{thm_asym}, we have
\begin{align*}
\lim_{r\to \infty} r^{1+\frac{\gamma}{2}} \sup_{\theta \in [0,2\pi]} |u(r,\theta)|^q = 0.
\end{align*}
\end{lemma}
\begin{proof}
For each sufficiently large integer $n$, let us introduce the quantity
\begin{align*}
A_n = \int_{2^n}^{2^{n+1}} \frac{dr}{r} \int_0^{2\pi}
|u|^{q-2} \left( r^2 |u|^2 + r^{1+ \frac{\gamma}{2}} |u| |\partial_{\theta} u | \right) d\theta.
\end{align*}
Since $|\partial_{\theta}u|\le r|\nabla u|$,
we have by Lemma \ref{lem_1} and the Schwarz inequality that
\begin{equation*}
A_n \le C \int_{2^n<r< 2^{n+1}} (|u|^q + r^{\gamma} |u|^{q-2} |\nabla u|^2 ) \,dx.
\end{equation*}
On the other hand,
by the mean value theorem for integration, there exists $r_n \in (2^n, 2^{n+1})$ such that
\begin{align*}
A_n &= \log 2 \int_{0}^{2\pi}
|u(r_n, \theta)|^{q-2} ( r_{n}^2 |u(r_n, \theta)|^2 + r_n^{1+ \frac{\gamma}{2}}
|u(r_n, \theta)| |\partial_{\theta} u(r_n, \theta)| )\,d\theta.
\end{align*}
Next, we estimate
\begin{align*}
|u(r_n, \theta)|^q - |u(r_n, \varphi)|^q
&\le \left| \int_{\varphi}^{\theta} \partial_{\psi} | u(r_n, \psi)|^q \,d\psi \right| \\
&\le \int_0^{2\pi} q |u(r_n, \psi)|^{q-1} |\partial_{\theta} u(r_n, \psi) | \,d\psi.
\end{align*}
Integrating the above for $\varphi \in [0,2\pi]$, we infer
\begin{align*}
| u(r_n, \theta)|^q
&\le C \int_0^{2\pi} |u(r_n, \varphi)|^q \,d\varphi
+ C \int_0^{2\pi} q |u(r_n, \psi)|^{q-1} |\partial_{\theta} u(r_n, \psi) | \,d\psi.
\end{align*}
Multiplying both sides of this estimate by $r_n^{1 + \frac{\gamma}{2}}$
and then noting $1 + \frac{\gamma}{2}\le 2$, we have that
\begin{align*}
r_n^{1 + \frac{\gamma}{2}} |u(r_n, \theta)|^q
&\le C r_n^{1 + \frac{\gamma}{2}} \int_0^{2\pi} |u(r_n, \varphi)|^q \,d\varphi \\
&\quad + C r_n^{1 + \frac{\gamma}{2}} \int_0^{2\pi} q |u(r_n, \psi)|^{q-1} |\partial_{\theta} u(r_n, \psi) | \,d\psi \\
&\le C A_n.
\end{align*}
Consequently, we obtain
\begin{align*}
r_n^{1 + \frac{\gamma}{2}} |u(r_n, \theta)|^q
\le C \int_{r>2^n} (|u|^q + r^{\gamma} |u|^{q-2}|\nabla u|^2) \,dx.
\end{align*}
Since the right-hand side of the above inequality tends to zero as $n \to \infty$
by Lemma \ref{lem_1}, we have that
\begin{equation}\label{eqn:2.1}
\lim_{n\to \infty} r_n^{1 + \frac{\gamma}{2}} \sup_{\theta\in [0,2\pi]} |u(r_n, \theta)|^q = 0.
\end{equation}
Finally, since the solution $u$ of \eqref{ell} satisfies the maximum principle
and since $r_{n+1} \le 4 r_n$,
we estimate for $r \in (r_n, r_{n+1})$ that
\begin{align*}
& r^{1 + \frac{\gamma}{2}} \sup_{\theta\in [0,2\pi]} |u(r, \theta)|^q \\
&\le r_{n+1}^{1 + \frac{\gamma}{2}}
\max\{ \sup_{\theta\in [0,2\pi]} |u(r_{n}, \theta)|^q, \sup_{\theta\in [0,2\pi]} |u(r_{n+1}, \theta)|^q \} \\
&\le \max \{ 16 r_n^{1 + \frac{\gamma}{2}} \sup_{\theta\in [0,2\pi]} |u(r_{n}, \theta)|^q,
r_{n+1}^{1 + \frac{\gamma}{2}} \sup_{\theta\in [0,2\pi]} |u(r_{n+1}, \theta)|^q \},
\end{align*}
which yields with the aid of \eqref{eqn:2.1} that
\begin{align*}
\lim_{r\to \infty}r^{1 + \frac{\gamma}{2}} \sup_{\theta\in [0,2\pi]} |u(r, \theta)|^q =0.
\end{align*}
This completes the proof of Lemma \ref{lem_asym}, and hence of Theorem \ref{thm_asym}.
\end{proof}
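For clarity, the decay estimate \eqref{decay} of Theorem \ref{thm_asym} follows from Lemma \ref{lem_asym} by taking $q$-th roots:
\begin{align*}
\sup_{\theta \in [0,2\pi]} |u(r,\theta)|
= \left( r^{1+\frac{\gamma}{2}} \sup_{\theta \in [0,2\pi]} |u(r,\theta)|^q \right)^{\frac{1}{q}}
r^{-\frac{1}{q}(1+\frac{\gamma}{2})}
= o\bigl( r^{-\frac{1}{q}(1 + \frac{\gamma}{2})} \bigr)
\quad\mbox{as $r \to \infty$}.
\end{align*}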
\begin{proof}[Proof of Corollary \ref{cor_liouville}]
By the assumption (e-i), the equation \eqref{ell} has the maximum principle.
Combining this with the asymptotic behavior from Theorem \ref{thm_asym},
we have $u\equiv 0$.
\end{proof}
\section{Proof of Theorem \ref{thm_en_est}}
Let $h(u) = |u|^q$.
We take a nonnegative function
$\psi \in C_0^{\infty}(\mathbb{R}^n)$
such that
\begin{align*}
\psi(x) = \begin{cases}
1 &(|x| \le 1),\\
0 &(|x| \ge 2),
\end{cases}
\end{align*}
and with the parameter $R>0$ we define
\begin{align*}
\psi_R (x) = \psi \left( \frac{x}{R} \right).
\end{align*}
Similarly to the previous section, by a direct computation we have
\begin{align*}
& - \sum_{i=1}^n \partial_i
\left[ \psi_R \sum_{j=1}^n a_{ij} \partial_j (h(u))
- \sum_{j=1}^n a_{ij} (\partial_j \psi_R) h(u) - \psi_R h(u) b_i \right] \\
&= - \psi_R h''(u) \left( \sum_{i,j=1}^n a_{ij} \partial_i u \partial_j u \right) \\
&\quad
+ h(u)\left[\sum_{i,j = 1}^n \partial_i (a_{ij} \partial_j \psi_R)
+ \mathbf{b} \cdot \nabla \psi_R + \psi_R \diver \mathbf{b}\right] \\
&\quad - \psi_R h'(u)
\left[ \sum_{i,j=1}^n \partial_i (a_{ij} \partial_j u ) - \mathbf{b} \cdot \nabla u \right]
\end{align*}
Using the equation \eqref{para} and the identity $h'(u) \partial_t u = \partial_t ( h(u) )$,
we integrate the above identity over $\mathbb{R}^n$ to obtain
\begin{align*}
&\frac{d}{dt} \int_{\mathbb{R}^n} \psi_R h(u) \,dx
+ \int_{\mathbb{R}^n} \psi_R h''(u) \left( \sum_{i,j=1}^n a_{ij} \partial_i u \partial_j u \right)\,dx \\
&= \int_{\mathbb{R}^n} h(u)
\left[ \sum_{i,j=1}^n \partial_i \left( a_{ij} \partial_j \psi_R \right) + \mathbf{b} \cdot \nabla \psi_R \right] \,dx
+ \int_{\mathbb{R}^n} \psi_R \left( h(u) \diver \mathbf{b} - ch'(u) u \right) \,dx.
\end{align*}
Furthermore, since $h'(u) = q |u|^{q-2}u$ and $h''(u) = q(q-1) |u|^{q-2}$,
we integrate the above identity over $[s,t]$ to obtain
\begin{align}
\label{en_est_1}
&\int_{\mathbb{R}^n} \psi_R h(u(t)) \,dx
+ q(q-1) \int_{s}^t \int_{\mathbb{R}^n}
\psi_R |u|^{q-2} \sum_{i, j=1}^na_{ij}\partial_iu\partial_ju \,dx d\tau \\
\notag
&\quad
+ \int_{s}^t \int_{\mathbb{R}^n} \psi_R (-\diver \mathbf{b} + q c) |u|^q \,dx d\tau \\
\notag
&=
\int_{\mathbb{R}^n} \psi_R h(u(s)) \,dx \\
\notag
&\quad
+ \int_{s}^t \int_{\mathbb{R}^n} h(u)
\left[ \sum_{i,j=1}^n \partial_j \left( a_{ij} \partial_i \psi_R \right)
+ \mathbf{b} \cdot \nabla \psi_R \right] \,dxd\tau.
\end{align}
Let us estimate the right-hand side.
Using the assumptions (p-i) and (p-ii), and then applying the Lebesgue dominated convergence theorem,
we have that
\begin{align*}
&\int_{s}^t \int_{\mathbb{R}^n} h(u)
\sum_{i,j=1}^n \partial_j \left( a_{ij} \partial_i \psi_R \right) \,dxd\tau \\
&\le C R^{-2} \int_{s}^t \int_{B_{2R}\setminus B_R} |u|^q |x|^2 \,dxd\tau
+ CR^{-1} \int_{s}^t \int_{B_{2R}\setminus B_R} |u|^q |x| \,dxd\tau \\
&\le C \int_{s}^t \| u (\cdot, \tau) \|_{L^q(B_{2R}\setminus B_R)}^q \,d\tau \\
&\to 0 \quad (R\to \infty)
\end{align*}
and
\begin{align*}
\int_{s}^t \int_{\mathbb{R}^n} h(u) \mathbf{b} \cdot \nabla \psi_R \,dxd\tau
&\le CR^{-1} \int_{s}^t \int_{B_{2R}\setminus B_R} |u|^q |x| \,dxd\tau \\
&\le C \int_{s}^t \| u (\cdot, \tau) \|_{L^q(B_{2R}\setminus B_R)}^q \,d\tau \\
&\to 0 \quad (R\to \infty).
\end{align*}
Consequently, letting $R\to \infty$ in \eqref{en_est_1}, we obtain
\begin{align*}
&\int_{\mathbb{R}^n} |u(x,t)|^q \,dx
+ q(q-1) \int_{s}^t \int_{\mathbb{R}^n} |u(x,\tau)|^{q-2}
\sum_{i, j =1}^na_{ij}(x, \tau)\partial_iu(x, \tau)\partial_ju(x, \tau)
\,dx d\tau \\
\notag
&\quad
+ \int_{s}^t \int_{\mathbb{R}^n} (- \diver \mathbf{b}(x,\tau) + qc(x,\tau)) |u(x,\tau)|^q \,dx d\tau \\
\notag
&=
\int_{\mathbb{R}^n} |u(x,s)|^q \,dx.
\end{align*}
This completes the proof of Theorem \ref{thm_en_est}.
\begin{proof}[Proof of Corollary \ref{cor_liouville_pr}]
(i) Applying Theorem \ref{thm_en_est} with $s = 0$ and using $u(x,0) = 0$,
we have by the assumption (p-i) that
\begin{align*}
&\int_{\mathbb{R}^n} |u(x,t)|^q \,dx
+ q(q-1)\lambda \int_{0}^t \int_{\mathbb{R}^n} |u(x,\tau)|^{q-2} |\nabla u(x,\tau)|^2 \,dx d\tau \\
\notag
&\quad
+ \int_{0}^t \int_{\mathbb{R}^n} (- \diver \mathbf{b}(x,\tau) + qc(x,\tau)) |u(x,\tau)|^q \,dx d\tau \\
\notag
&\le 0
\end{align*}
for all $t \in [0,T)$.
Noting
$- \diver \mathbf{b} + qc \ge 0$
by the assumption (p-iii),
we conclude $u(x,t) = 0$ for $(x,t) \in \mathbb{R}^n \times [0, T)$.
\par
(ii)
Let $I = (-\infty, 0)$.
Similarly to (i) above, applying Theorem \ref{thm_en_est} for $s < t < 0$,
we have by the assumption (p-iii) that
\begin{align}
\label{est_ulq}
\| u(t) \|_{L^q} \le \| u(s) \|_{L^q}.
\end{align}
Since
$u \in L^q(\mathbb{R}^n \times (-\infty, 0))$,
there exists a sequence $\{ s_n \}_{n=1}^{\infty} \subset (-\infty, 0)$
such that
$\lim_{n\to \infty} s_n = - \infty$
and
$\lim_{n\to \infty} \| u(s_n) \|_{L^q} = 0$.
Therefore, taking $s = s_n$ in \eqref{est_ulq} and then letting $n \to \infty$,
we have
$\| u(t) \|_{L^q} = 0$.
Since $t \in (-\infty, 0)$ is arbitrary,
we conclude that $u \equiv 0$ on $\mathbb{R}^n \times (-\infty, 0)$.
This proves Corollary \ref{cor_liouville_pr}.
\end{proof}
\end{document}
\begin{document}
\title{Weak bisimulations for labelled transition systems weighted over semirings}
\begin{abstract}
Weighted labelled transition systems are LTSs whose transitions are
given weights drawn from a commutative monoid. WLTSs subsume a wide
range of LTSs, providing a general notion of strong (weighted)
bisimulation. In this paper we extend this framework towards other
behavioural equivalences, by considering \emph{semirings} of
weights. Taking advantage of this extra structure, we introduce a
general notion of \emph{weak weighted bisimulation}. We show that
weak weighted bisimulation coincides with the usual weak
bisimulations in the cases of non-deterministic and
fully-probabilistic systems; moreover, it naturally provides a
definition of weak bisimulation also for kinds of LTSs where this
notion is currently missing (such as, stochastic systems). Finally,
we provide a categorical account of the coalgebraic construction of
weak weighted bisimulation; this construction points out how to port
our approach to other equivalences based on different notion of
observability.
\end{abstract}
\section{Introduction}
Many extensions of labelled transition systems have been proposed for
dealing with quantitative notions such as execution times, transition
probabilities and stochastic rates; see \eg~\cite{bg98:empa,denicola13:ultras,hhk2002:tcs,hillston:pepabook,klinS08,pc95:cj}
among others. This ever-increasing plethora of variants has naturally
pointed out the need for general mathematical frameworks, covering
uniformly a wide range of cases, and offering general results and
tools. As examples of these theories we mention \textsc{ULTraS}s
\cite{denicola13:ultras} and \emph{weighted labelled transition
systems} (WLTSs) \cite{tofts1990:synchronous,Klin09wts,klins:ic2012}. In particular, in a
WLTS every transition is associated with a \emph{weight} drawn from a
commutative monoid \mfk W; the monoid structure defines how weights of
alternative transitions combine. As we will recall in
Section~\ref{sec:wlts}, by suitably choosing this monoid we can
recover ordinary
non-deterministic LTSs, probabilistic transition systems, and
stochastic transition systems, among others. WLTSs offer
a notion of \emph{(strong) \mfk W-weighted bisimulation}, which can
be readily instantiated to particular cases obtaining precisely the
well-known Milner's strong bisimulation \cite{milner:cc}, Larsen-Skou's
strong probabilistic bisimulation \cite{ls:probbisim}, strong stochastic
bisimulation \cite{hillston:pepabook}, \etc.
However, in many situations strong bisimulations are too fine, and
many coarser relations have been introduced since then. Basically,
these \emph{observational} equivalences do not distinguish systems
differing only in unobservable or irrelevant transitions.
Probably the most widely known of these observational equivalences is
Milner's \emph{weak bisimulation} for non-deterministic LTSs
\cite{milner:cc} (but see
\cite{glabbeek90:spectrum,glabbeek93:spectrum2} for many variations).
Weak bisimulations focus on systems' interactions (communications,
synchronizations, etc.), ignoring transitions associated with
systems' internal operations, hence called \emph{silent} (and denoted
by $\tau$).
\looseness=-1
Unfortunately, weak bisimulations become considerably more problematic in
models for stochastic systems, probabilistic systems, \etc. The
conundrum is that we do not want to observe $\tau$-transitions, but at
the same time their quantitative effects (delays, probability
distributions) are still observable and hence cannot be ignored. In
fact, for quantitative systems there is no general agreement on what a
weak bisimulation should be. As an example, consider the stochastic
system $S_1$ executing an action $a$ at rate $r$, and a system $S_2$
executing $\tau$ at rate $r_1$, followed by an $a$ at rate $r_2$:
should these two systems be considered weakly bisimilar?
\[\begin{tikzpicture}[auto,xscale=2,font=\footnotesize,scale=.9,
dot/.style={circle, minimum size=5pt,inner sep=0pt, outer sep=2pt},
dot stage1/.style={dot,fill=white,draw=black},
dot stage2/.style={dot,fill=black,draw=black},
arr stage1/.style={->,-open triangle 60},
arr stage2/.style={->,-triangle 60},
baseline=(current bounding box.center)]
\begin{scope}
\node[dot stage1,label=180:$S_1$] (n0) at (0,0) {};
\node[dot stage1] (n1) at (1,0) {};
\draw[arr stage1] (n0) to node {\(a,r\)} (n1);
\end{scope}
\begin{scope}[xshift=25mm]
\node[dot stage1,label=180:$S_2$] (n0) at (0,0) {};
\node[dot stage1] (n1) at (1,0) {};
\node[dot stage1] (n2) at (2,0) {};
\draw[arr stage1] (n0) to node {\(\tau,r_1\)} (n1);
\draw[arr stage1] (n1) to node {\(a,r_2\)} (n2);
\end{scope}
\end{tikzpicture}\]
Some
approaches restrict to instantaneous $\tau$-actions (and hence
$r_2=r$) \cite{mb2007:ictcs}; others require that the average times of
$a$'s executions are the same in the two systems, but even then the systems can be
distinguished by looking at the variances \cite{mb2012:qapl}.
Therefore, it is not surprising that many definitions proposed in
literature are rather \emph{ad-hoc}, and that a general mathematical
theory is still missing.
This is the problem we aim to address in this paper. More precisely,
in Section~\ref{sec:weak-bisim} we introduce the uniform notion of
\emph{weak weighted bisimulation} which applies to labelled transition
systems weighted over a \emph{semiring}. The multiplication operation of
semirings allows us to compositionally extend weights to multi-step
transitions and traces. In Section~\ref{sec:weak-instances} we show
that our notion of weak bisimulation coincides with the known ones in
the cases of non-deterministic and fully probabilistic systems, just
by changing the underlying semiring. Moreover it naturally applies to
stochastic systems, providing an effective notion of \emph{weak
stochastic bisimulation}. As a side result we introduce a new
semiring of \emph{stochastic variables} which generalizes that of
rated transition systems \cite{klinS08}.
Then, in Section~\ref{sec:cw-algorithm} we present the general
algorithm for computing weak weighted bisimulation equivalence
classes, parametric in the underlying semiring. This algorithm is a
variant of Kanellakis-Smolka's algorithm for deciding strong
non-deterministic bisimulation \cite{ks1990:ic}. Our solution builds
on the refinement technique used for the \emph{coarsest stable
partition}, but instead of ``strong'' transitions in the original
system we consider ``weakened'' ones. We prove that this algorithm is
correct, provided the semiring satisfies some mild conditions,
\ie\ it is $\omega$-complete. Finally, we discuss also its complexity,
which is comparable with Kanellakis-Smolka's algorithm. Thus, this
algorithm can be used in the verification of many kinds of systems,
just by replacing the underlying semiring (boolean, probabilistic,
stochastic, tropical, arctic, \dots) and taking advantage of existing
software packages for linear algebra over semirings.
In Section~\ref{sec:cat-view} we give a brief categorical account of
weak weighted bisimulations. These will be characterized as
cocongruences between suitably \emph{saturated} systems, akin to the
elegant construction of $\epsilon$-elimination given in
\cite{sw2013:epsilon}.
In Section~\ref{sec:concl} we give some final remarks and directions for further work.
\section{Weighted labelled transition systems}\label{sec:wlts}
In this section we recall the notion of \emph{labelled transition
systems weighted over a commutative monoid}, showing how these
subsume non-deterministic, stochastic and probabilistic systems, among
many others.
Weighted LTSs were originally introduced by Klin in \cite{Klin09wts} as
the prosecution and generalization of the work on stochastic SOS
presented in \cite{klinS08} with Sassone and were further developed in
\cite{klins:ic2012}.
In the following let \mfk W denote a generic commutative (aka \emph{abelian}) monoid
$(W,+,0)$, \ie~a set equipped with a distinguished element $0$ and a
binary operation $+$ which is associative, commutative and has $0$ as
left and right unit.
\begin{definition}[\mfk W-LTS {\cite[Def.~1]{Klin09wts}}]
\label{def:wlts}
Given a commutative monoid $\mfk W = (W,+,0)$, a \emph{\mfk W-weighted labelled
transition system} is a triple $(X,A,\rho)$ where:
\begin{itemize}
\item $X$ is a set of \emph{states} (processes);
\item $A$ is an at most countable set of \emph{labels};
\item $\rho:X\times A \times X \to W$ is a \emph{weight
function}, mapping each triple of $X\times A \times X$ to a
weight.
\end{itemize}
$(X,A,\rho)$ is said to be \emph{image finite}
(resp.~\emph{countable}) iff for each $x\in X$ and $a\in A$, the set
$\{ y \in X \mid \rho(x,a,y)\neq 0\}$ is finite (resp.~countable).
A state $x\in X$ is said to be \emph{terminal} iff for every $a \in A$ and
$y\in X$: $\rho(x,a,y)=0$.
\end{definition}
For adherence to the notation used in \cite{Klin09wts} and to support
the intuitions based on classical labelled transition systems we shall
often write $\rho(x\xrightarrow{a}y)$ for $\rho(x,a,y)$; moreover,
following a common notation for stochastic and probabilistic systems,
we will write also $x\xrightarrow{a,w}y$ to denote $\rho(x,a,y) = w$.
The monoidal structure is not used in Definition~\ref{def:wlts} except
for the existence of a distinguished element required by the image
finiteness (resp.~countability) property.
structure of weights comes into play in the notion of
\emph{bisimulation}, where weights of transitions with the same labels
have to be ``summed''. This operation is commonplace for stochastic
LTSs, but at first it may appear confusing with respect to the notion of
bisimulation of non-deterministic LTSs; we will explain it in
Section~\ref{sec:lts-as-wlts}.
\begin{definition}[Strong \mfk W-bisimulation {\cite[Def.~3]{Klin09wts}}]
\label{def:wlts-strong}
Given a \mfk W-LTS $(X,A,\rho)$, a \emph{(strong) \mfk W-bisimulation} is an
equivalence relation $R$ on $X$ such that for each pair $(x,x')$ of
elements of $X$, $(x,x') \in R$ implies that for each label $a\in A$
and each equivalence class $C$ of $R$:
\[\sum_{y\in C}\rho(x\xrightarrow{a}y) = \sum_{y\in C}\rho(x'\xrightarrow{a}y)\text.\]
Processes $x$ and $x'$ are said to be \emph{\mfk W-bisimilar}
(or just bisimilar when \mfk W is understood) if there exists a
\mfk W-bisimulation $\sim_{\mfk W}$ such that $x\sim_{\mfk W} x'$.
\end{definition}
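As an illustrative aside (ours, not part of the paper's development), the condition in Definition~\ref{def:wlts-strong} can be checked directly on a finite system. The following Python sketch assumes numeric weights, so the monoid sum is ordinary addition; all names are hypothetical.

```python
# Hypothetical sketch: checking the strong W-bisimulation condition on a
# finite weighted LTS whose monoid of weights is (R, +, 0).
def is_strong_w_bisimulation(states, labels, rho, classes):
    """classes: a partition of `states` into equivalence classes;
    rho(x, a, y): weight of the a-labelled transition from x to y."""
    for C in classes:          # related states live in the same class C
        for a in labels:
            for D in classes:  # target class over which weights are summed
                totals = {sum(rho(x, a, y) for y in D) for x in C}
                if len(totals) > 1:  # two related states disagree on D
                    return False
    return True

# Stochastic-flavoured example: x and xp both reach the class {y1, y2}
# with total rate 3.0, so the partition below is a bisimulation.
rho = lambda s, a, t: {("x", "a", "y1"): 1.0, ("x", "a", "y2"): 2.0,
                       ("xp", "a", "y1"): 3.0}.get((s, a, t), 0.0)
classes = [frozenset({"x", "xp"}), frozenset({"y1", "y2"})]
print(is_strong_w_bisimulation({"x", "xp", "y1", "y2"}, {"a"}, rho, classes))  # True
```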
Clearly \mfk W-bisimulations are closed under arbitrary unions,
ensuring that \emph{\mfk W-bisimilarity} on any \mfk W-LTS is the
largest \mfk W-bisimulation over it.\footnote{Actually, strong \mfk
W-bisimulation has been proven to be a strong bisimulation in
coalgebraic sense \cite{Klin09wts}.}
\begin{remark}
\label{rem:wlts-strong}
In order for the above definition to be well-given, summations
need to be well-defined. Intuitively this means that
the \mfk W-LTS $(X,A,\rho)$ does not exceed the expressiveness
of its underlying monoid of weights \mfk W. Reworded, the
system has to be image finite if the monoid admits only finite
summations; image countable if the monoid admits countable
summations, and so on.
\end{remark}
In \cite{Klin09wts,klins:ic2012}, for the sake of simplicity the
authors restrict themselves to image finite systems (which is not
unusual in the coalgebraic setting). In the present paper we extend
their definitions to the case of countable images. This
generalization allows us to capture a wider range of systems and is
crucial for the definition of weak and delay bisimulations.
In practice, Remark \ref{rem:wlts-strong} is not a severe restriction,
since the commutative monoids relevant for most systems of interest
admit summations over countable sets. To support this claim, in the
rest of this Section we illustrate how non-deterministic, stochastic
and probabilistic labelled transition systems can be recovered as
systems weighted over commutative monoids with countable sums. These
kinds of commutative monoids are often called \emph{commutative
$\omega$-monoids}\footnote{Monoids can be readily extended to
$\omega$-monoids adding either colimits freely or an ``$\infty$''
element.}.
\subsection{Non-deterministic systems are WLTS}
\label{sec:lts-as-wlts}
This section illustrates how non-deterministic labelled transition systems
\cite{milner:cc}
can be recovered as systems weighted over the commutative $\omega$-monoid of
logical values equipped with logical disjunction
$\mfk 2 \triangleq (\{\tt t\!t,\tt f\!f\},\lor,\tt f\!f)$.
\begin{definition}[Non-deterministic LTS]
\label{def:lts}
A \emph{non-deterministic labelled transition system}
is a triple $(X,A,\rightarrow)$ where:
\begin{itemize}
\item $X$ is a set of \emph{states} (processes);
\item $A$ is an at most countable set of \emph{labels} (actions);
\item $\mathop{\rightarrow} \subseteq X\times A\times X$ is the \emph{transition
relation}.
\end{itemize}
As usual, we shall denote an $a$-labelled transition from $x$ to $y$
\ie~$(x,a,y) \in {\rightarrow}$ by $x\xrightarrow{a}y$. A state $y$
is called \emph{successor} of a given state $x$ iff
$x\xrightarrow{a}y$. If $x$ has no successors then it is said to be
\emph{terminal}. If every state has a finite set of
successors then the system is said to be \emph{image finite}.
Likewise it is said to be \emph{image countable} if each state has
at most countably many successors.
\end{definition}
Every \mfk 2-valued weight function is a predicate defining a subset
of its domain, making $\rho: X\times A \times X \to \mfk 2$
equivalent to the classical definition of the transition relation
${\rightarrow} \subseteq X\times A \times X$.
\begin{definition}[Strong non-deterministic bisimulation]
\label{def:lts-bisim}
Let $(X,A,{\rightarrow})$ be an LTS. An equivalence relation $R\subseteq X\times X$
is a \emph{(strong non-deterministic) bisimulation on $(X,A,\rightarrow)$} iff
for each pair of states $(x,x') \in R$, for any label
$a \in A$ and each equivalence class $C \in X/R$:
\[
\exists y \in C. x\xrightarrow{a}y \iff \exists y' \in C. x'\xrightarrow{a}y'
\text.\]
Two states $x$ and $x'$ are said to be \emph{bisimilar} iff there exists a bisimulation
relation $\sim_l$ such that $x\sim_l x'$. The greatest bisimulation for
$(X,A,\rightarrow)$ uniquely exists and is called \emph{(strong) bisimilarity}.
\end{definition}
Strong \mfk 2-bisimulation and strong non-deterministic bisimulation
coincide, since logical disjunction over the states in a given class
$C$ encodes the ability to reach $C$ making an $a$-labelled transition.
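To make this coincidence concrete, here is a small Python sketch (ours, with hypothetical names): over $\mfk 2$ the weight function is the characteristic function of the transition relation, and the ``sum'' over a class $C$ is the disjunction expressing reachability of $C$.

```python
# Hypothetical sketch: a 2-weighted system as a transition relation.
trans = {("p", "a", "q"), ("p", "a", "r")}           # the relation ->
rho = lambda x, a, y: (x, a, y) in trans             # boolean weight function
# "Summing" weights over a class C with disjunction = reachability of C:
reaches = lambda x, a, C: any(rho(x, a, y) for y in C)
print(reaches("p", "a", {"q", "r"}), reaches("q", "a", {"p"}))  # True False
```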
\subsection{Stochastic systems are WLTS}
\label{sec:rts-wlts}
Stochastic systems have important applications, especially in the field
of quantitative analysis, and several tools and formalisms to describe
and study them have been proposed (\eg~PEPA
\cite{hillston:pepabook}, EMPA \cite{bg98:empa} and the stochastic
$\pi$-calculus \cite{pc95:cj}). Recently, \emph{rated transition
systems} \cite{klinS08,klins:ic2012,denicola09:rts,denicola13:ultras} emerged
as a convenient presentation of these kinds of systems.
\begin{definition}[Rated LTS {\cite[Sec.~2.2]{klinS08}}]
\label{def:rlts}
A \emph{rated labelled transition system}
is a triple $(X,A,\rho)$ where:
\begin{itemize}
\item $X$ is a set of \emph{states} (processes);
\item $A$ is a countable set of \emph{labels} (actions);
\item $\rho : X\times A\times X \to \mbb R^+_0$ is the \emph{rate function}.
\end{itemize}
\end{definition}
The semantics of stochastic processes is usually given by means of labelled
continuous time Markov chains (CTMC). The real number $\rho(x,a,y)$
is interpreted as the parameter of an exponential probability
distribution governing the duration of the transition from state $x$
to $y$ by means of an $a$-labelled action and hence encodes the
underlying CTMC (for more information about CTMCs and their presentation
by transition rates see \eg~\cite{hhk2002:tcs,hillston:pepabook,prakashbook09,pc95:cj}).
\begin{definition}[Strong stochastic bisimulation]
\label{def:rts-bisim}
Given a rated system $(X,A,\rho)$ an equivalence relation $R\subseteq X\times X$
is a \emph{(strong stochastic) bisimulation on $(X,A,\rho)$}
(or \emph{strong equivalence} \cite{hillston:pepabook}) iff
for each pair of states $(x,x') \in R$, for any label
$a \in A$ and each equivalence class $C \in X/R$:
\[\sum_{y\in C}\rho(x,a,y) = \sum_{y\in C}\rho(x',a,y)\text.\]
Two states $x$ and $x'$ are said to be \emph{bisimilar} iff there exists a bisimulation
relation $\sim_s$ such that $x\sim_s x'$. The greatest bisimulation for
$(X,A,\rho)$ uniquely exists and is called \emph{(strong) bisimilarity}.
\end{definition}
Rated transition systems (hence stochastic systems) are precisely
WLTS weighted over the commutative monoid of nonnegative real
numbers (closed with infinity) under addition
$(\overline{\mbb R}_0^+,+,0)$ and stochastic bisimulations
correspond to $\overline{\mbb R}_0^+$-bisimulations, as shown in \cite{Klin09wts}.
Moreover, $\overline{\mbb R}_0^+$ is an $\omega$-monoid since non-negative real numbers
admit sums over countable families.
In particular, the sum of a given countable family $\{x_i \mid i \in I\}$
is defined as the supremum of the set of sums over its finite subfamilies:
\[\sum_{i\in I}x_i \triangleq \sup\left\{\sum_{i \in J}x_i \mid J \subseteq I, |J| < \omega \right\}\text.\]
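The supremum above can be approximated by increasing finite partial sums; a minimal Python sketch (ours, hypothetical names), using a geometric family whose countable sum is $1$:

```python
# Hypothetical sketch: a countable sum in the monoid of nonnegative reals
# (closed with infinity) as the supremum of finite partial sums.
def countable_sum(terms, n_terms=10_000):
    """terms(i): the i-th nonnegative weight of the countable family."""
    total = 0.0
    for i in range(n_terms):
        total += terms(i)  # partial sums are nondecreasing
    return total           # approximates the supremum over finite subfamilies

# Geometric family 1/2^(i+1); its countable sum (the supremum) is 1.
s = countable_sum(lambda i: 0.5 ** (i + 1))
print(abs(s - 1.0) < 1e-9)  # True
```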
\subsection{Probabilistic systems are (Constrained) WLTS}
\label{ex:plts-as-wlts}
This section illustrates how probabilistic LTSs are captured by
weighted ones. We focus on fully probabilistic systems (also known as
\emph{generative systems}) \cite{gsb90:ic,ls:probbisim, baier97:cav}
but in the end we provide some hints on other types of probabilistic
systems.
Fully probabilistic systems can be regarded as specializations of
non-deterministic transition systems where probabilities are used to
resolve nondeterminism. From a slightly different point of view, they
can also be interpreted as labelled Markov chains with discrete
parameter set \cite{ks76:fmc}.
\begin{definition}[Fully probabilistic LTS]
\label{def:plts}
A \emph{fully probabilistic labelled transition system}
is a triple $(X,A,\mrm P)$ where:
\begin{enumerate}[\em(1)]
\item $X$ is a set of \emph{states} (processes);
\item $A$ is a countable set of \emph{labels} (actions);
\item $\mrm P : X\times A\times X \to [0,1]$ is a function such
that for any $x \in X$, $\mrm P(x,\mbox{\large\bf\_} , \mbox{\large\bf\_} )$ is either a
discrete probability measure on $A \times X$ or the
constantly $0$ function.
\end{enumerate}
\end{definition}
In ``reactive'' probabilistic systems, in contrast to fully probabilistic systems,
transition probability distributions are dependent on the occurrences of actions
\ie~for any $x \in X$ and $a \in A$, $\mrm P(x,a, \mbox{\large\bf\_} )$ is either a discrete
probability measure on $X$ or the constantly $0$ function.
Strong probabilistic bisimulation has been originally introduced by Larsen and
Skou \cite{ls:probbisim} for reactive systems and has been reformulated by
van Glabbeek et al.~\cite{gsb90:ic} for fully probabilistic systems.
\begin{definition}[Strong probabilistic bisimilarity]
\label{def:plts-bisim}
Let $(X,A,\mrm P)$ be a fully probabilistic system. An equivalence relation
$R\subseteq X\times X$ is a \emph{(strong probabilistic) bisimulation on
$(X,A,\mrm P)$} iff for each pair of states $(x,x') \in R$,
for any label $a \in A$ and any equivalence class $C \in X/R$:
\[
\mrm P(x,a,C) = \mrm P(x',a,C)
\]
where $\mrm P(x,a,C) \triangleq \sum_{y \in C}\mrm P(x,a,y)$.
Two states $x$ and $x'$ are said to be \emph{bisimilar} iff there exists a bisimulation
relation $\sim_p$ such that $x\sim_p x'$. The greatest bisimulation for
$(X,A,\mrm P)$ uniquely exists and is called \emph{bisimilarity}.
\end{definition}
It would be tempting to recover fully probabilistic systems as LTSs
weighted over the probability interval $[0,1]$, but unfortunately
addition does not define a monoid on $[0,1]$ since it is not a total
operation when restricted to $[0,1]$. There exist various commutative
monoids over the probability interval, leading to different
interpretations of probabilistic systems (as will be shown in Section
\ref{sec:other-semirings}), but since in
Definition~\ref{def:plts-bisim} we sum probabilities of outgoing
transitions (\eg~to compute the probability of reaching a certain set
of states), the real number addition has to be used.
\begin{remark}[On partial commutative monoids]\label{rem-partial}
The theory of weighted labelled transition systems can be extended to
consider partial commutative monoids (\ie~$a + b$ may be undefined
but when it is defined then so is $b + a$, and commutativity
holds) or commutative $\sigma$-monoids to handle sums over
suitable countable families (thus relaxing the requirement of
weights forming $\omega$-monoids). However, every
$\sigma$-semiring can be turned into an $\omega$-complete one
by adding a distinguished $+\infty$ element and resolving
partiality accordingly.
\end{remark}
Klin \cite{Klin09wts} suggested considering probabilistic systems as
systems weighted over $(\mathbb R_0^+,+,0)$ but subject to suitable
constraints ensuring that the weight function is a state-indexed
probability distribution and thus satisfies Definition~\ref{def:plts}.
These \emph{constrained} WLTSs were proposed
to deal with reactive probabilistic systems.
\begin{definition}[constrained \mfk W-LTS]
\label{def:cwlts}
Let \mfk W be a commutative monoid and \mcl C be a constraint family.
A \emph{$\mcl C$-constrained \mfk W-weighted labelled transition system} is a
\mfk W-LTS $(X,A,\rho)$ such that its weight function $\rho$
satisfies the constraints \mcl C over \mfk W.
\end{definition}
Then, fully probabilistic labelled transition systems
are precisely constrained $\mathbb R_0^+$-LTSs $(X,A,\rho)$ subject to the
constraint family:
\[\sum_{a \in A, y \in X}\rho(x,a,y) \in \{0,1\}\mbox{ for }x\in X\text.\]
Likewise, reactive probabilistic systems are $\mathbb R_0^+$-LTSs subject to the
constraint family:
\[\sum_{y \in X}\rho(x,a,y) \in \{0,1\}\mbox{ for $x\in X$ and $a\in A$}\text.\]
Therefore strong bisimulations for these kinds of systems are exactly strong
$\mathbb R_0^+$-bisimulations.
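The constraint families above are straightforward to check on a finite system; a minimal Python sketch (ours, hypothetical names) for the fully probabilistic case:

```python
# Hypothetical sketch: verify the fully-probabilistic constraint, i.e.
# each state's total outgoing weight (over all labels and targets) is 0 or 1.
def is_fully_probabilistic(states, labels, rho, tol=1e-9):
    for x in states:
        total = sum(rho(x, a, y) for a in labels for y in states)
        if not (abs(total) < tol or abs(total - 1.0) < tol):
            return False
    return True

# A fair coin: state "s" flips to "h" or "t" with probability 1/2 each;
# "h" and "t" are terminal (total outgoing weight 0).
P = lambda x, a, y: 0.5 if (x, a, y) in {("s", "flip", "h"), ("s", "flip", "t")} else 0.0
print(is_fully_probabilistic({"s", "h", "t"}, {"flip"}, P))  # True
```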
\section{Weak bisimulations for WLTS over semirings}\label{sec:weak-bisim}
In the previous section we illustrated how weighted labelled
transitions systems can uniformly express several kinds of systems
such as non-deterministic, stochastic and probabilistic systems.
Remarkably, bisimulations for these systems were proved to be
instances of weighted bisimulations.
In this section we show how other observational equivalences can
be stated at the general level of weighted transition systems,
offering a treatment of these notions that is uniform across the wide range
of systems captured by weighted ones. Due to space constraints we
focus on weak bisimulation, but at the end we briefly discuss how the proposed
results can cover other notions of observational equivalence.
\subsection{From transitions to execution paths}
Let $(X,A+\{\tau\},\rho)$ be a \mfk W-LTS.
A \emph{finite execution path} $\pi$ for this system is a sequence of transitions, \ie~an
alternating sequence of states and labels like
\[\pi = x_0 \xrightarrow{a_1} x_1 \xrightarrow{a_2} x_2 \dots x_{n-1}\xrightarrow{a_n} x_n\]
such that for each transition $x_{i-1} \xrightarrow{a_i} x_i$ in the path:
\[
\rho(x_{i-1} \xrightarrow{a_i} x_i)\neq 0.
\]
Let $\pi$ denote the above path, then set:
\begin{gather*}
\mrm{length}(\pi) = n\qquad \mrm{first}(\pi) = x_0\qquad \mrm{last}(\pi) = x_n \qquad
\mrm{trace}(\pi) = a_1a_2\dots a_n
\text.\end{gather*}
to denote the length, starting state, ending state and trace of $\pi$ respectively.
In order to extend the definition of the weight function $\rho$ to
executions we need some additional structure on the domain of weights,
allowing us to capture concatenation of transitions. To this end, we
require weights to be drawn from a semiring, akin to the theory of
weighted automata. Recall that a semiring is a set $W$
equipped with two binary operations $+$ and $\cdot$ called
\emph{addition} and \emph{multiplication} respectively and such that:
\begin{itemize}
\item $(W,+,0)$ is a commutative monoid and $(W,\cdot,1)$ is a monoid;
\item multiplication left and right distributes over addition:
\begin{gather*}
a\cdot(b+c) = (a\cdot b) + (a\cdot c)\qquad
(a+b)\cdot c = (a\cdot c) + (b\cdot c)
\end{gather*}
\item multiplication by $0$ annihilates $W$:
\[0\cdot a = 0 = a \cdot 0\text.\]
\end{itemize}
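The semiring interface just described can be captured in a few lines; a Python sketch (ours, hypothetical names) with the two instances used elsewhere in the paper for non-deterministic and stochastic systems:

```python
# Hypothetical sketch: a semiring as a record of its two monoid operations.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    zero: Any                        # unit of addition (branching)
    one: Any                         # unit of multiplication (composition)
    add: Callable[[Any, Any], Any]
    mul: Callable[[Any, Any], Any]

BOOL = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)
REAL = Semiring(0.0, 1.0, lambda a, b: a + b, lambda a, b: a * b)

# Spot-check left distributivity on sample values: a*(b+c) == a*b + a*c.
a, b, c = 2.0, 3.0, 4.0
print(REAL.mul(a, REAL.add(b, c)) == REAL.add(REAL.mul(a, b), REAL.mul(a, c)))  # True
```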
Basically, the idea is to express parallel and subsequent transitions
(\ie~branching and composition) by means of addition and
multiplication respectively. Therefore, multiplication is not
required to be commutative (\cf~the semiring of formal languages).
Distributivity ensures that execution paths are independent from the
alternative branching \ie~given two executions sharing some sub-path,
we are not interested in which is the origin of the sharing; as the
following diagram illustrates:
\begin{equation}\label{eq:path-dist}
\begin{tikzpicture}[auto,font=\small,
baseline=(current bounding box.center),
extended/.style={shorten >=-#1, shorten <=-#1},
extended/.default=0pt,
dot/.style={circle,fill=black,minimum size=4pt,inner sep=0, outer sep=2pt}]
\node (s0) at (0,0) {
\begin{tikzpicture}[auto,yscale=0.6,xscale=.5]
\node[dot] (n0) at (0,2) {\(\)};
\node[dot] (n1) at (0,1) {\(\)};
\node[dot] (n2) at (-.7,0) {\(\)};
\node[dot] (n3) at (0.7,0) {\(\)};
\draw[->] (n0) to node [] {\(a\)} (n1);
\draw[->] (n1) to node [pos=.7,swap] {\(b\)} (n2);
\draw[->] (n1) to node [pos=.7] {\(c\)} (n3);
\end{tikzpicture}
};
\node[right=16pt of s0] (s1) {
\begin{tikzpicture}[auto,yscale=0.6,xscale=.5]
\node[dot] (n0) at (0,2) {\(\)};
\node[dot] (n1) at (-.7,1) {\(\)};
\node[dot] (n2) at (-.7,0) {\(\)};
\node[dot] (n3) at (0,2) {\(\)};
\node[dot] (n4) at (.7,1) {\(\)};
\node[dot] (n5) at (.7,0) {\(\)};
\draw[->] (n0) to node [pos=.7,swap] {\(a\)} (n1);
\draw[->] (n1) to node [swap] {\(b\)} (n2);
\draw[->] (n3) to node [pos=.7] {\(a\)} (n4);
\draw[->] (n4) to node [] {\(c\)} (n5);
\end{tikzpicture}
};
\node[right=23pt of s1] (s2) {
\begin{tikzpicture}[auto,yscale=0.6,xscale=.5]
\node[dot] (n0) at (0,0) {\(\)};
\node[dot] (n1) at (0,1) {\(\)};
\node[dot] (n2) at (-.7,2) {\(\)};
\node[dot] (n3) at (0.7,2) {\(\)};
\draw[<-] (n0) to node [swap] {\(c\)} (n1);
\draw[<-] (n1) to node [pos=.7] {\(a\)} (n2);
\draw[<-] (n1) to node [pos=.7,swap] {\(b\)} (n3);
\end{tikzpicture}
};
\node[right=16pt of s2] (s3) {
\begin{tikzpicture}[auto,yscale=0.6,xscale=.5]
\node[dot] (n0) at (-.7,2) {\(\)};
\node[dot] (n1) at (-.7,1) {\(\)};
\node[dot] (n2) at (0,0) {\(\)};
\node[dot] (n3) at (.7,2) {\(\)};
\node[dot] (n4) at (.7,1) {\(\)};
\node[dot] (n5) at (0,0) {\(\)};
\draw[->] (n0) to node [swap] {\(a\)} (n1);
\draw[->] (n1) to node [pos=.7,swap] {\(c\)} (n2);
\draw[->] (n3) to node [] {\(b\)} (n4);
\draw[->] (n4) to node [pos=.7] {\(c\)} (n5);
\end{tikzpicture}
};
\node at ($(s0)!0.5!(s1)$) {\(\equiv\)};
\node at ($(s2)!0.5!(s3)$) {\(\equiv\)};
\end{tikzpicture}
\end{equation}
Finally, since weights of (proper) transitions are always different from $0$,
the annihilation property means that no proper execution can contain
improper transitions.
Then, the weight function $\rho$ extends to finite paths by semiring multiplication
(therefore we shall use the same symbol):
\[\rho(x_0 \xrightarrow{a_1}x_1\dots \xrightarrow{a_n} x_n) \triangleq \prod_{i=1}^n\rho(x_{i-1},a_i,x_i)\]
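In code, this extension to finite paths is a fold of the semiring multiplication over the transitions; a Python sketch (ours, hypothetical names):

```python
# Hypothetical sketch: weight of a finite path as the semiring product of
# its transition weights, following the displayed formula.
from functools import reduce

def path_weight(rho, path, mul, one):
    """path: alternating sequence [x0, a1, x1, ..., an, xn]."""
    steps = [(path[i], path[i + 1], path[i + 2])
             for i in range(0, len(path) - 2, 2)]
    return reduce(mul, (rho(x, a, y) for (x, a, y) in steps), one)

# Probabilistic example: a tau-step of probability 0.5 then an a-step of 0.2.
rho = lambda x, a, y: {("s0", "tau", "s1"): 0.5,
                       ("s1", "a", "s2"): 0.2}.get((x, a, y), 0.0)
print(path_weight(rho, ["s0", "tau", "s1", "a", "s2"], lambda u, v: u * v, 1.0))  # 0.1
```

Note that the empty product correctly gives the length-zero path weight $1$, the multiplicative unit.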
In the following let \mfk W be a semiring $(W,+,0,\cdot,1)$.
Semirings offer enough structure to extend weight function to finite execution
paths compositionally but executions can also be (countably) infinite.
As with countable branching (\cf~Remark \ref{rem:wlts-strong}), paths of countable
length can be treated by requiring multiplication to be defined also over
(suitable) countable families of weights, obviously respecting the
semiring structure.
However, the additional requirement for $(W,\cdot,1)$ can be avoided
by dealing with suitable sets of paths as long as these convey enough
information for the notion of weak bisimulation (and observational
equivalence in general). In particular, a finite path $\pi$
determines a set of paths (possibly infinite) starting with $\pi$,
thus $\pi$ can be seen as a representative for the set. Moreover, the
behavior of a system can be reduced to its \emph{complete} executions:
a path is called \emph{complete} (or ``full'' \cite{baier97:cav}) if
it is either infinite or ends in a \emph{terminal} state.
Intuitively, we distinguish complete paths only up to the chosen
representatives: longer representatives generate smaller sets of
paths, which can be thought of as ``observing more'' of the system. If
two complete paths are distinguishable, we have to be able to
distinguish them in a finite way, \ie~there must be two representatives
with enough information to tell one set from the other. Otherwise, if
no such representatives exist, then the given complete paths are indeed
equivalent. Therefore, it is enough to be able to compositionally
weight (finite) representatives in order to distinguish any complete
path.
The remainder of this subsection elaborates the above intuition by
defining a $\sigma$-algebra over complete paths (for each state).
The method presented is a generalization to semirings of the one used in
\cite{baier98}. This structure allows us to deal
with sets of finite paths avoiding redundancies
(\cf~Example \ref{ex:class-reachability})
and to define weights compositionally.
Let $\mrm{Paths}(x)$, $\mrm{CPaths}(x)$ and $\mrm{FPaths}(x)$ denote the sets of
all, complete and finite paths starting in the state $x \in X$ respectively.
Likewise, we shall denote the corresponding sets of paths \wrt~any starting state
as $\mrm{Paths}$, $\mrm{CPaths}$ and $\mrm{FPaths}$ respectively (\eg~$\mrm{Paths} = \cup_{x\in X} \mrm{Paths}(x)$).
Paths naturally organize into a preorder by the prefix relation. In particular,
given $\pi,\pi' \in \mrm{Paths}(x)$ define $\pi\preceq\pi'$ if and only if
one of the following holds:
\begin{enumerate}
\item $\pi \equiv
  x \xrightarrow{a_1} x_1 \dots \xrightarrow{a_n} x_n$ and
  $\pi' \equiv
  x \xrightarrow{a'_1} x'_1 \dots \xrightarrow{a'_{n'}} x'_{n'}$
  (both finite),
  $n \leq n'$, and $x_i = x'_i$ and $a_i = a'_i$ for all $i \leq n$;
\item $\pi \equiv
x \xrightarrow{a_1} x_1 \dots \xrightarrow{a_n} x_n$ and
$\pi' \equiv x \xrightarrow{a'_1} x'_1 \dots$
(one finite and the other infinite),
$x_i = x'_i$ and $a_i = a'_i$ for $i \leq n$;
\item $\pi = \pi'$ (both infinite).
\end{enumerate}
For each finite path $\pi \in \mrm{FPaths}(x)$ define the \emph{cone
of complete paths generated by $\pi$} as follows:
\[
\cone\pi \triangleq \{\pi'\in \mrm{CPaths}(x) \mid \pi \preceq \pi'\}
\text.\]
Cones are precisely the sets we were sketching in the intuition above;
they form a subset of the powerset of $\mrm{CPaths}(x)$:
\[
\Gamma \triangleq \{\cone\pi \mid \pi \in \mrm{FPaths}(x)\}\text.
\]
This set is at most countable, since the set $\mrm{FPaths}(x)$ is so,
and any two of its elements are either disjoint or one is a subset
of the other, as the following lemmas state.
\begin{lemma}
\label{lem:fpath-countable}
For any state $x\in X$, the set of finite paths $\mrm{FPaths}(x)$
of an image countable \mfk W-LTS is at most countable.
\end{lemma}
\begin{proof}
We proceed by induction on the length $k$ of paths, showing that the
set of paths of each length is at most countable. For $k = 0$ there is
exactly one path, $\varepsilon$; assuming the set of paths of length
$k$ to be at most countable, the set of those of length $k+1$ is at
most countable as well, because the system is assumed to be image countable.
Then $\mrm{FPaths}(x)$ is at most countable, since it is the countable
disjoint union of the sets
\[\{\pi \in \mrm{Paths}(x) \mid \mrm{length}(\pi) = k\}\]
for $k \in \mbb N$.
\end{proof}
\begin{lemma}
\label{lem:geom-cones}
Two cones $\cone{\pi_1}$ and $\cone{\pi_2}$ are either disjoint
or one is a subset of the other.
\end{lemma}
\begin{proof}
For any $\pi \in \mrm{CPaths}(x)$, we have by definition:
\[\pi\in\cone{\pi_1} \iff \pi_1\preceq \pi \qquad \mbox{and} \qquad \pi\in\cone{\pi_2} \iff
\pi_2\preceq\pi\text.\]
If $\pi_1\preceq\pi_2$ then $\pi\in\cone{\pi_2} \Rightarrow \pi\in\cone{\pi_1}$
(likewise for $\pi_2\preceq\pi_1$). Otherwise,
$\pi_1\not\preceq\pi_2 \land \pi_2\not\preceq\pi_1$, and there is no $\pi$ such that
$\pi_1\preceq \pi \land \pi_2\preceq \pi$, since the prefixes of any path are
linearly ordered by $\preceq$.
\end{proof}
Given $\Pi\subseteq\mrm{FPaths}(x)$, the set of all cones generated by
its elements is denoted by $\cone\Pi$ and defined as the (at most
countable) union of the cones generated by each $\pi \in \Pi$. If
this union is over disjoint cones then $\Pi$ is said to be
\emph{minimal}.
Minimality is not preserved by set union, even if the operands are disjoint and
both minimal. As a counterexample, consider the sets $\{\pi\}$ and $\{\pi'\}$
for $\pi \prec \pi' \in \mrm{FPaths}(x)$; both are minimal and disjoint, but
their union is not minimal since $\cone{\pi'}\subseteq\cone{\pi}$.
However, $\Pi$ always has at least one minimal subset $\Pi'$ such that
\begin{equation}\label{eq:gen-same-cones}
\cone\Pi = \cone{\Pi'}
\text,\end{equation}
and among these there exists exactly one which is also minimal in the sense of prefixes:
\begin{lemma}\label{lem:fpaths-support}
For $\Pi \subseteq \mrm{FPaths}(x)$, there exists a minimal subset
$\Pi'\subseteq\Pi$ which satisfies (\ref{eq:gen-same-cones}) and is such that,
for any $\Pi''\subseteq\Pi$ satisfying (\ref{eq:gen-same-cones}):
$
\forall \pi''\in\Pi''\, \exists \pi' \in \Pi' . \pi'\preceq\pi''\text.
$
We denote such $\Pi'$ by $\scone\Pi$.
\end{lemma}
\begin{proof}
Clearly $\cone\Pi = \emptyset$ iff $\Pi = \emptyset$, since there are no infinite
prefix-descending chains.
Let $\scone{\Pi}$ be the set of prefix-minimal elements of $\Pi$; it is minimal and
$\cone{(\scone{\Pi})} \subseteq \cone\Pi$ since $\scone{\Pi} \subseteq \Pi$.
For every $\pi \in \Pi$ there exists
$\pi' \in \scone\Pi$ such that $\pi'\preceq\pi$ and, by
Lemma~\ref{lem:geom-cones}, $\cone\pi\subseteq\cone{\pi'}$, \ie~
$\cone\Pi\subseteq\cone{(\scone{\Pi})}$. Therefore $\cone\Pi = \cone{(\scone{\Pi})}$.
Finally, consider $\Pi''$ as in the statement: for every $\pi''\in \Pi'' \subseteq \Pi$
there exists $\pi' \in \scone\Pi$ such that $\pi'\preceq\pi''$.
Uniqueness follows straightforwardly.
\end{proof}
The set $\scone\Pi$ is called the \emph{minimal support of $\Pi$} and intuitively
corresponds to the ``minimal'' set of finite executions needed to completely
characterize the behavior captured by $\Pi$ and the complete paths it induces.
Any other path of $\Pi$ is therefore redundant (\cf~Example \ref{ex:class-reachability}).
The idea of which complete paths are distinguishable, and hence ``measurable''
(\ie~can be given a weight), is captured precisely by the notion of
$\sigma$-algebra. In fact, the set of all cones $\Gamma$ (together
with the empty set) induces a $\sigma$-algebra, as it forms a
\emph{semiring of sets} (in the sense of \cite{zaanen1958:integration}).
\begin{lemma}
\label{lem:cpath-semiring}
The set $\Gamma\cup\{\emptyset\}$ is a semiring of sets
and uniquely induces a $\sigma$-algebra over $\mrm{CPaths}(x)$.
\end{lemma}
\begin{proof}[Proof (Sketch)]
$\Gamma\cup\{\emptyset\}$
is closed under finite intersections, since cones are always either disjoint or
one a subset of the other. Closure under set difference follows from the
existence of minimal supports.
\end{proof}
As discussed before, in general the weight of
$\Pi\subseteq\mrm{FPaths}(x)$ cannot be defined as the sum of the
weights of its elements, due to redundancies. However, what we are
really interested in is the \emph{unique} set of behaviors
described by $\Pi$, \ie~the complete paths it subsumes.
Therefore we first extend $\rho$ to \emph{minimal} $\Pi$, as follows:
\[
\rho(\Pi) \triangleq \sum_{\pi \in \Pi}\rho(\pi) \quad \mbox{ for } \Pi \mbox{ minimal.}
\]
Then, for all $\Pi$, we simply take
\[
\rho(\Pi) \triangleq \rho(\scone\Pi)\text.
\]
Because $\Pi$ can be countably infinite, semiring addition
\emph{has to} support countable
sums over these sets (\cf~Remark~\ref{rem:wlts-strong}).
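To make this construction concrete, the following sketch (plain Python; the
path encoding as tuples of (action, weight, state) steps and the numeric
weights are illustrative choices of ours, not part of the formal development)
computes the minimal support $\scone\Pi$ and the induced weight $\rho(\Pi)$,
with the semiring passed in as its operations:

```python
from functools import reduce

def is_proper_prefix(p, q):
    """True iff path p is a strictly shorter prefix of path q."""
    return len(p) < len(q) and q[:len(p)] == p

def minimal_support(paths):
    """Prefix-minimal elements of a set of finite paths: drop every path
    extending another one, since its cone is contained in the shorter one's."""
    return {p for p in paths if not any(is_proper_prefix(q, p) for q in paths)}

def path_weight(path, times, one):
    """rho(pi): the semiring product of the step weights along the path."""
    return reduce(times, (w for _, w, _ in path), one)

def set_weight(paths, plus, zero, times, one):
    """rho(Pi): sum, in the semiring, over the minimal support only."""
    return reduce(plus, (path_weight(p, times, one)
                         for p in minimal_support(paths)), zero)

# Over (R, +, 0, *, 1): the second path extends the first and is dropped.
pi1 = (('a', 0.5, 'y'),)
pi2 = (('a', 0.5, 'y'), ('b', 0.4, 'z'))
pi3 = (('b', 0.5, 'w'),)
print(set_weight({pi1, pi2, pi3},
                 lambda a, b: a + b, 0.0,
                 lambda a, b: a * b, 1.0))    # 0.5 + 0.5 = 1.0
```

Note that summing over all three paths would have produced $1.2$, exceeding
$1$ in the generative reading; discarding the redundant extension restores
consistency.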
\subsection{Well-behaved semirings}\label{sec:good-semirings}
\begin{definition}\label{def:good-semiring}
Let the semiring $\mfk W$ be endowed with a preorder $\sqsubseteq$.
We call the semiring \emph{well-behaved} if, and only if, for
any two sets $\Pi_1, \Pi_2 \subseteq \mrm{FPaths}(x)$ the following holds:
\[\Pi_1 \subseteq \Pi_2 \Rightarrow \rho(\Pi_1) \sqsubseteq \rho(\Pi_2)\text.\]
\end{definition}
If the semiring is well-behaved then the addition unit $0$ is necessarily the
bottom of the preorder, because $\rho(\emptyset) \triangleq 0$.
Moreover, the semiring operations have to respect the preorder, \eg:
\[ a\sqsubseteq b \Rightarrow a + c \sqsubseteq b + c\text.\]
As a direct consequence, annihilation of parallel composition is
avoided by the \emph{zerosumfree} property of the semiring,
\ie~the sum of the weights
of proper transitions always yields the weight of a proper
transition, where proper means different from the addition unit.
Well-behaved semirings are precisely \emph{positively (partially) ordered
semirings}, and it is well known that these admit the natural preorder
\[ a \trianglelefteq b \defiff \exists c.\, a + c = b\]
which is respected by the semiring operations and has $0$ as bottom.
The natural preorder is the weakest preorder rendering a semiring
positively ordered (hence well-behaved), where weakest means that
for any such preorder $\sqsubseteq$ and elements $a,b$
\[a \trianglelefteq b \implies a \sqsubseteq b\text.\]
The converse implication holds only when the other preorder is natural as well.
\begin{lemma}
The natural preorder is the weakest preorder rendering the semiring
well-behaved.
\end{lemma}
Note that any idempotent semiring bears a natural preorder and hence
is well-behaved, and the same holds for every semiring considered in
the examples illustrated in this paper (\cf~Section \ref{sec:weak-instances}).
On the other hand, some arithmetic semirings like $(\mbb R,+,0,\cdot,1)$
are not positively ordered because of negatives; moreover, they are not
$\omega$-semirings (there is no limit for $1 + (-1) + 1 + (-1) + \dots$).
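On a finite carrier the natural preorder can be checked exhaustively; the
sketch below (an illustration of ours, with hypothetical small carriers) shows
that $0$ is the bottom for boolean and max-based addition, while ordinary
integer addition relates $1 \trianglelefteq 0$ and thus is not positively ordered:

```python
def natural_preorder(carrier, plus):
    """The natural preorder: a <= b iff a + c = b for some c (checked exhaustively)."""
    return {(a, b) for a in carrier for b in carrier
            if any(plus(a, c) == b for c in carrier)}

# Boolean semiring addition (disjunction): ff is the bottom.
bools = [False, True]
le = natural_preorder(bools, lambda a, b: a or b)
assert all((False, b) in le for b in bools)

# Max-based addition of a truncation-style semiring: 0 is the bottom.
trunc = list(range(4))
le = natural_preorder(trunc, max)
assert all((0, b) in le for b in trunc)

# Ordinary integer addition: negatives destroy positivity.
ints = list(range(-2, 3))
le = natural_preorder(ints, lambda a, b: a + b)
print((1, 0) in le)   # True: 1 + (-1) = 0, so 0 is not the bottom
```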
\subsection{Weak \mfk W-bisimulation}
\label{sec:weak-wbisim}
Weak bisimulations weaken the notion of strong bisimulation
by allowing sequences of silent actions before and after any observable one.
We are thus dealing with (suitable) paths instead of single transitions,
and states are compared on the basis of how appropriate classes of states
are reached from them by means of the allowed paths (\ie~making some silent actions,
before and after an observable, if any).
Therefore, the notion of how a class of states is reached, and which paths can
be used in doing so, is crucial in the definition of weak bisimulation.
For instance, for non-deterministic LTSs, the questions of how and whether a class
is reached coincide, and then it suffices to find a (suitable) path leading to the
class. This allows weak bisimulation for non-deterministic LTSs to rely on the
reflexive and transitive closure of the $\tau$-labelled transitions of
a system (\cf~Definition~\ref{def:lts-weak}) to blur the distinction
between sequences of silent actions, which can then be ``skipped''.
In fact, the $\tau$-closure at the basis of \eqref{eq:tau-clos}
defines a new LTS over the same state space
as the previous one, and such that every strong bisimulation for this new system
is a weak bisimulation for the given one and vice versa.
In \cite{pbpk03:weakautom} Buchholz and Kemper extend this notion to
a class of automata weighted over suitable semirings, \ie~those
whose operations are commutative and idempotent ($w+w=w$).
This class includes interesting examples such as the boolean and
bottleneck semirings (\cf~Section \ref{sec:other-semirings})
but not the semiring of non-negative real numbers, and therefore
does not cover the case of fully probabilistic systems.
Modulo some technicalities connected to initial and accepting states,
their results can be extended to labelled transition systems
and hold also for LTSs weighted over suitable semirings.
Their interesting construction relies on the $\tau$-closure
of a system, and it is known that this closure does not cover
the general case. For instance, it cannot be applied to recover
weak bisimulation for generative systems, as demonstrated by
Baier and Hermanns (\cf~\cite{baier98}).
The following example gives an intuition of the issue.
\begin{example}\label{ex:class-reachability}\rm
\newcommand{\convexpath}[2]{
[
create hullnodes/.code={
\global\edef\namelist{#1}
\foreach [count=\counter] \nodename in \namelist {
\global\edef\numberofnodes{\counter}
\node at (\nodename) [draw=none,name=hullnode\counter] {};
}
\node at (hullnode\numberofnodes) [name=hullnode0,draw=none] {};
\pgfmathtruncatemacro\lastnumber{\numberofnodes+1}
\node at (hullnode1) [name=hullnode\lastnumber,draw=none] {};
},
create hullnodes
]
($(hullnode1)!#2!-90:(hullnode0)$)
\foreach [
evaluate=\currentnode as \previousnode using \currentnode-1,
evaluate=\currentnode as \nextnode using \currentnode+1
] \currentnode in {1,...,\numberofnodes} {
-- ($(hullnode\currentnode)!#2!-90:(hullnode\previousnode)$)
let \p1 = ($(hullnode\currentnode)!#2!-90:(hullnode\previousnode) - (hullnode\currentnode)$),
\n1 = {atan2(\x1,\y1)},
\p2 = ($(hullnode\currentnode)!#2!90:(hullnode\nextnode) - (hullnode\currentnode)$),
\n2 = {atan2(\x2,\y2)},
\n{delta} = {-Mod(\n1-\n2,360)}
in
{arc [start angle=\n1, delta angle=\n{delta}, radius=#2]}
}
-- cycle
}
Consider the \mfk W-LTS below.
\[\begin{tikzpicture}[auto,scale=.75,font=\small,
state/.style={circle,draw=black,fill=white,
minimum size=11pt, inner sep=.5pt,outer sep=1pt}]
\node[state] (n0) at (0,1) {\(x\)};
\node[state] (n1) at (2,2) {\(x_1\)};
\node[state] (n2) at (4,2) {\(x_2\)};
\node[state] (n3) at (6,1) {\(x_3\)};
\node[state] (n4) at (2,0) {\(x_4\)};
\node[state] (n5) at (4,0) {\(x_5\)};
\node[state] (n6) at (8,1) {\(x_6\)};
\draw[->] (n0) to node [pos=.65] {\(b,w_1\)} (n1);
\draw[->] (n1) to node [] {\(b,w_2\)} (n2);
\draw[->] (n2) to node [pos=.45] {\(b,w_3\)} (n3);
\draw[->] (n3) to node [pos=.6] {\(a,w_7\)} (n5);
\draw[->] (n0) to node [pos=.6,swap] {\(a,w_4\)} (n4);
\draw[->] (n4) to node [swap] {\(b,w_5\)} (n5);
\draw[->] (n3) to node [] {\(a,w_6\)} (n6);
\begin{pgfonlayer}{background}
\draw[black!90,dotted,fill=gray!20] \convexpath{n2,n5,n4}{16pt};
\end{pgfonlayer}
\node[] (c) at ($(n2)!0.45!(n4)!0.2!(n5)$) {\(C\)};
\end{tikzpicture}
\]
There are four finite paths going from the state $x$ to the class $C$.
Their weights are:
\begin{align*}
\rho(x \xrightarrow{b} x_1\xrightarrow{b} x_2) &= w_1\cdot w_2\\
\rho(x \xrightarrow{b} x_1\xrightarrow{b} x_2\xrightarrow{b} x_3\xrightarrow{a} x_5)
&= w_1\cdot w_2\cdot w_3\cdot w_7\\
\rho(x \xrightarrow{a} x_4) &= w_4\\
\rho(x \xrightarrow{a} x_4 \xrightarrow{b} x_5) &= w_4\cdot w_5
\end{align*}
Suppose we define the weight
of the set of these paths as the sum of the weights of its elements, and
suppose that the system is generative; then the probability of
reaching $C$ from $x$ could exceed $1$. Likewise, in the case of a
stochastic system, the rate of reaching $C$ cannot take into account
paths passing through $C$ before ending in it. If we are interested in how
$C$ is reached from $x$ with actions yielding a trace in the set
$b^*ab^*$, the paths with weights $w_1\cdot w_2$ and $w_4\cdot w_5$ are ruled out,
because the first has a different trace and the second reaches $C$
before it ends.
\end{example}
Then, given a set of traces $T$, a state $x$ and a class of states $C$,
the set of finite paths of the given transition system reaching $C$
from $x$ with trace in $T$ that should be considered is:
\[\Lbag x,T,C \Rbag\triangleq
\Biggl\{\pi\;\Bigg|\;
\parbox{6.5cm}{
${\pi\in \mrm{FPaths}(x)}$,
${\mrm{last}(\pi)\in C}$, ${\mrm{trace}(\pi) \in T}$,
${\forall \pi'\prec\pi : \mrm{trace}(\pi') \in T \Rightarrow
\mrm{last}(\pi') \notin C}$
}
\Biggr\}
\]
since these are all and only the finite executions of the system
going from $x$ to $C$ with trace in $T$ and never passing through $C$ except
for their last state. The redundancies highlighted in the example above
are ruled out, since no execution path in this set is a prefix of another
in the same set. In particular, $\Lbag x,T,C \Rbag$ is the minimal support of the set of all
finite paths reaching $C$ from $x$ with trace in $T$:
\[\Lbag x,T,C \Rbag = \scone{\{\pi\mid\pi\in\mrm{FPaths}(x),\mrm{last}(\pi)\in C, \mrm{trace}(\pi) \in T\}}\text.\]
Therefore, weight functions can be consistently extended to these sets
by point-wise sums:
\[
\rho(\Lbag x,T,C \Rbag) = \sum_{\pi\in\Lbag x,T,C \Rbag} \rho(\pi)\text.
\]
The sum is at most countable, since $\mrm{FPaths}(x)$ is at most countable
and $\Lbag x,T,C \Rbag\subseteq\mrm{FPaths}(x)$.
Hence the addition operation of the semiring must support countable
sums, as discussed in Remark~\ref{rem:wlts-strong}.
When clear from the context, we may omit the bag brackets from $\rho(\Lbag x,T,C \Rbag)$.
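Taking the system of Example \ref{ex:class-reachability} with concrete numeric
weights (our own choice, purely for illustration), the set $\Lbag x,T,C\Rbag$
for $T = b^*ab^*$ can be enumerated mechanically; note how the two redundant
paths discussed above are excluded:

```python
import re

# The example system with illustrative numeric weights (our choice for w1..w7).
trans = {
    'x':  [('b', 0.5, 'x1'), ('a', 0.5, 'x4')],
    'x1': [('b', 1.0, 'x2')],
    'x2': [('b', 1.0, 'x3')],
    'x3': [('a', 0.4, 'x5'), ('a', 0.6, 'x6')],
    'x4': [('b', 1.0, 'x5')],
}
C = {'x2', 'x4', 'x5'}
T = re.compile(r'b*ab*$')          # the trace set b*ab*

def reach(x, limit=6):
    """Enumerate [[x,T,C]]: paths from x ending in C with trace in T, whose
    proper prefixes never hit C with a trace already in T."""
    out, frontier = [], [('', 1.0, x)]
    for _ in range(limit):
        nxt = []
        for tr, w, s in frontier:
            for a, wt, t in trans.get(s, []):
                tr2, w2 = tr + a, w * wt
                if t in C and T.match(tr2):
                    out.append((tr2, w2))   # stop: extensions are redundant
                else:
                    nxt.append((tr2, w2, t))
        frontier = nxt
    return out

print(sorted(reach('x')))   # [('a', 0.5), ('bbba', 0.2)]
```

The path with trace $bb$ is skipped (wrong trace), and the path with trace $ab$
never appears because its prefix with trace $a$ already ends in $C$.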
We are now ready to state the notion of weak bisimulation of a labelled transition
system weighted over any semiring admitting sums over (not necessarily every)
countable family of weights.
The notion we propose relies on the weights of the paths reaching each class
of the relation while making at most one observable action; hence the importance
of defining sets of paths reaching a class consistently.
\begin{definition}[Weak \mfk W-bisimulation]
\label{def:wlts-weak}
Let $(X,A+\{\tau\},\rho)$ be an LTS weighted over the semiring
\mfk W. A \emph{weak \mfk W-bisimulation} is an equivalence relation
$R$ on $X$ such that for all $x,x'\in X$, $(x,x') \in R$ implies that
for each label $a\in A$ and each equivalence class $C$
of $R$:
\begin{gather*}
\rho(x,\tau^*a\tau^*,C) = \rho(x',\tau^*a\tau^*,C)\\
\rho(x,\tau^*,C) = \rho(x',\tau^*,C)
\text.\end{gather*}
States $x$ and $x'$ are said to be \emph{weak \mfk W-bisimilar}
(or just weak bisimilar), written $x \approx_\mfk W x'$, if there exists a
weak \mfk W-bisimulation $R$ such that $x R x'$.
\end{definition}
The approach we propose applies to other behavioural equivalences as well.
For instance, delay bisimulation can be recovered for
WLTSs by simply considering, in the above definition of weak bisimulation,
sets of paths of the form $\Lbag x,\tau^*,C \Rbag$ and $\Lbag x,\tau^*a,C \Rbag$.
The notion of branching bisimulation relies on paths with the same traces
as those considered for defining weak bisimulation, but with some additional
constraints on the intermediate states. In particular, the states right
before the observable $a$ have to be in the same equivalence class, and
likewise the states right after it. Definition~\ref{def:wlts-weak}
is readily adapted to branching bisimulation by considering these particular
subsets of $\Lbag x,\tau^*a\tau^*,C \Rbag$.
\section{Examples of weak \mfk W-bisimulation}
\label{sec:weak-instances}
In this Section we instantiate Definition~\ref{def:wlts-weak} to
the systems introduced in Section~\ref{sec:wlts} as instances of
LTSs weighted over commutative $\omega$-monoids.
\subsection{Non-deterministic systems}
Let us recall the usual definition of weak bisimulation for LTS \cite{milner:cc}.
\begin{definition}[Weak non-deterministic bisimulation]
\label{def:lts-weak}
An equivalence relation $R\subseteq X\times X$ is a \emph{weak (non-deterministic)
bisimulation on $(X,A+\{\tau\},\rightarrow)$} iff for each $(x,x') \in R$,
label $\alpha \in A+\{\tau\}$ and equivalence class $C \in X/R$:
\begin{equation}\label{eq:tau-clos}
\exists y \in C. x\xRightarrow{\alpha}y \iff \exists y' \in C. x'\xRightarrow{\alpha}y'
\end{equation}
where $\mathop{\Rightarrow} \subseteq X\times (A\uplus\{\tau\}) \times X$ is the
well-known $\tau$-reflexive and $\tau$-transitive closure of the transition relation
$\rightarrow$.
Two states $x$ and $x'$ are said to be \emph{weak bisimilar} iff there exists
a weak non-deterministic bisimulation relation $\approx_n$ such that $x\approx_n x'$.
\end{definition}
Clearly, a weak bisimulation is a relation on states
induced by a strong bisimulation of a suitable LTS with the same states and actions.
In particular, weak bisimulations for $(X,A+\{\tau\},\rightarrow)$ are strong
bisimulations for $(X,A+\{\tau\},\Rightarrow)$ and vice versa.
The transition system $(X,A+\{\tau\},\Rightarrow)$ is sometimes referred to as
saturated or weak (\eg~in \cite{jensen:thesis}). This observation is at the basis of
some algorithmic and coalgebraic approaches to weak non-deterministic bisimulations
(\cf~Section~\ref{sec:cw-algorithm} and Section~\ref{sec:cat-view} respectively).
Section~\ref{sec:lts-as-wlts} illustrated that non-deterministic LTSs are
$\mfk 2$-WLTSs. The commutative monoid \mfk 2 is part of the
\emph{boolean semiring} of logical values under disjunction and conjunction
$(\{\tt t\!t,\tt f\!f\},\lor,\tt f\!f,\land,\tt t\!t)$ which we shall also denote
as \mfk 2. Then, by straightforward application of the definitions, the notions of
weak non-deterministic bisimulation and weak \mfk 2-bisimulation coincide.
\begin{proposition}
Definition \ref{def:lts-weak} is equivalent to Definition
\ref{def:wlts-weak} with $\mfk W = \mfk 2$.
\end{proposition}
It is easy to check that a similar correspondence holds for branching and delay
bisimulations.
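The boolean reading can be sensed concretely: in \mfk 2, path weights multiply
with conjunction (trivially true for existing transitions) and sets of paths
sum with disjunction, so $\rho(x,T,C)$ collapses to the existence of a suitable
path. A hypothetical mini-checker (states, labels, and the `t' encoding of
$\tau$ are our own choices):

```python
import re

# A three-state system with one silent step before the observable 'a'.
trans = {'x': [('tau', 'y')], 'y': [('a', 'z')], 'z': []}

def traces(x, limit=5):
    """All (trace, last state) pairs of finite paths from x; tau written 't'."""
    acc, frontier = [('', x)], [('', x)]
    for _ in range(limit):
        frontier = [(tr + ('t' if a == 'tau' else a), s2)
                    for tr, s in frontier for a, s2 in trans[s]]
        acc += frontier
    return acc

def rho_bool(x, pattern, C):
    """Boolean-semiring rho(x,T,C): does some path with trace in T end in C?"""
    pat = re.compile(pattern + '$')
    return any(bool(pat.match(tr)) and s in C for tr, s in traces(x))

print(rho_bool('x', 't*at*', {'z'}))   # True: x reaches z with trace tau a
```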
\subsection{Probabilistic systems}
In the definition of weak bisimulation for fully probabilistic systems
we are interested in the probability of reaching a class of states. This
aspect is present also in the case of strong bisimulation, but things
become more complex for weak equivalences due to silent actions and
multi-step executions. Moreover, $\sigma$-additivity is no longer
available, since the probability of reaching a class of states is not
the sum of the probabilities of reaching every single state in that
class. (On the contrary, a class is reachable iff any of its states is
so, which is the property we are interested in when dealing with
non-deterministic systems.)
Weak bisimulation for fully probabilistic systems was introduced by
Baier and Hermanns in \cite{baier97:cav,baier98}. Here we
recall briefly their definition; we refer the reader to
\emph{loc. cit.} for a detailed presentation.
\begin{definition}[Weak probabilistic bisimilarity {\cite{baier97:cav,baier98}}]
\label{def:plts-weak}
Given a fully probabilistic system $(X,A+\{\tau\},\mrm P)$,
an equivalence relation $R$ on $X$ is a \emph{weak (probabilistic)
bisimulation} iff for all $(x,x') \in R$,
any $a \in A$ and any equivalence class $C \in X/R$:
\begin{gather*}
\mrm{Prob}(x,\tau^*a\tau^*,C) = \mrm{Prob}(x',\tau^*a\tau^*,C) \\
\mrm{Prob}(x,\tau^*,C) = \mrm{Prob}(x',\tau^*,C)
\text.\end{gather*}
Two states $x$ and $x'$ are said to be \emph{weak bisimilar} iff there exists
a weak probabilistic bisimulation relation $\approx_p$ such that $x\approx_p x'$.
\end{definition}
The function $\mrm{Prob}$ is the extension over finite execution paths of the
unique probability measure induced by $\mrm{P}$ over the $\sigma$-field of
the basic cylinders of complete paths.
\begin{proposition}
Definition \ref{def:plts-weak} is equivalent to Definition
\ref{def:wlts-weak} with $\mfk W = (\mathbb R_0^+,+,0,\cdot,1)$.
\end{proposition}
The function $\mrm{P}$ is a weight function such that $\mrm{P}(x,\mbox{\large\bf\_} ,\mbox{\large\bf\_} )$
is a probability measure (or the constantly $0$ measure) which extends to the unique
$\sigma$-algebra on $\mrm{CPaths}(x)$ (Lemma~\ref{lem:cpath-semiring}). This defines
precisely $\mrm{Prob}$. In particular, for any $x \in X$ and $\Pi \subseteq \mrm{FPaths}(x)$,
$\mrm{Prob}(\Pi) = \mrm{Prob}(\scone{\Pi}) = \mrm{P}(\scone{\Pi}) = \mrm{P}(\Pi)$, where $\mrm{P}$
is seen as the weight function of a $\mbb R^+_0$-LTS.
\subsection{Stochastic systems}
As we have seen in Section~\ref{sec:rts-wlts}, stochastic transition
systems can be captured as WLTSs over $(\overline{\mbb R}^+_0,+,0)$ by describing
the exponential time distributions of a CTMC by their rates
\cite{klinS08}. Unfortunately, this does not extend to paths because
the sequential composition of two exponential distributions does not
yield an exponential distribution, and hence it cannot be represented
by an element of $\overline{\mbb R}^+_0$. Moreover, there are stochastic systems
(\eg~TIPP \cite{gotz93:tipp}, SPADES \cite{ak2005:spades}) whose
transition times follow generic probability distributions.
To overcome this shortcoming, in this Section we introduce a semiring of
weights, called \emph{stochastic variables}, which allows us to express
stochastic transition systems with generic distributions as WLTSs. The
results of this theory can then be readily applied to define various
behavioural equivalences, ranging from strong bisimulation to trace
equivalence, for all these kinds of systems. In particular, we define
\emph{weak stochastic bisimulation}
by instantiating Definition~\ref{def:wlts-weak} on the semiring of
stochastic variables.
The carrier of the semiring structure we are defining is the set \mbb T
of \emph{transition-time random variables}, \ie~random variables on
the nonnegative real numbers (closed with infinity), which describe
the nonnegative part of the time line.
Given two (possibly dependent) random variables $X$ and $Y$
from \mbb T, let $\min(X,Y)$ be the random variable
yielding the minimum between $X$ and $Y$. If the variables $X$ and
$Y$ characterize the time required by two transitions, then their
combined effect is defined by the stochastic race between the
two transitions; a race that is ``won'' by the transition completing earlier,
hence the minimum. For instance, given two stochastic transitions
${x \xrightarrow{X} x'}$ and ${y\xrightarrow{Y}y'}$, the transition time
for their ``combination'' going from $\{x,y\}$ to $\{x',y'\}$
is characterized by the random variable $\min(X,Y)$, \ie~the overall
time is given by the first transition to be completed in the specific run.
The minimum of random variables defines the operation $\min$ over \mbb T, with the
constantly $+\infty$ continuous random variable $\mcl T_{+\infty}$
(its density is the Dirac delta function $\delta_{+\infty}$) as the unit.
Random variables of the sort of $\mcl T_{+\infty}$ are self-independent
and, since they always yield $+\infty$, we shall make no distinction
between them and refer to \emph{the} $\mcl T_{+\infty}$ random variable.
In general, transition-time variables do not have to be self-independent, since
the events they describe usually depend on themselves. Intuitively,
it is like racing against ourselves, \ie~we are the only racer,
and therefore $\min(X,X) = X$. Formally:
\[
\mbb P(\min(X,X) > t)
= \mbb P(X > t \cap X > t)
= \mbb P(X > t)\cdot \mbb P(X > t\mid X > t) = \mbb P(X > t)\text.
\]
Let $X$ and $Y$ be two continuous random variables from \mbb T with probability
density functions $f_X$ and $f_Y$ respectively. The density $f_{\min(X,Y)}$
describing $\min(X,Y)$ is:
\begin{align*}
f_{\min(X,Y)}(z) = f_X(z) + f_Y(z) - \frac{\mathrm{d}}{\mathrm{d}z}F_{X,Y}(z,z)
\end{align*}
where $F_{X,Y}$ denotes the joint cumulative distribution function; indeed,
by inclusion-exclusion, $F_{\min(X,Y)}(z) = F_X(z) + F_Y(z) - F_{X,Y}(z,z)$.
When $X$ and $Y$ are independent (but not necessarily identically distributed),
$f_{\min(X,Y)}$ can be simplified to:
\[
f_{\min(X,Y)}(z) =
f_X(z)\cdot\int_z^{+\infty} f_Y(y)\mathrm{d} y +
f_Y(z)\cdot\int_z^{+\infty} f_X(x)\mathrm{d} x
\text.
\]
Intuitively, the
likelihood that one variable is the minimum must be ``weighted'' by
the probability that the other one is not.
In particular, for independent exponentially distributed variables
$X$ and $Y$, $\min(X,Y)$ is exponentially distributed and its
rate is the sum of the rates of the negative exponentials characterizing
$X$ and $Y$. Therefore, the commutative monoid
$(\mbb T,\min,\mcl T_{+\infty})$ faithfully generalizes the monoid
$(\overline{\mbb R}^+_0,+,0)$ used in Section~\ref{sec:rts-wlts} to capture
CTMCs as WLTSs.
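This race property is easy to check numerically; the Monte Carlo sketch below
(our own illustration, with arbitrary rates) confirms that the empirical mean
of $\min(X,Y)$ approaches $1/(\lambda_1+\lambda_2)$:

```python
import random

# For independent X ~ Exp(l1) and Y ~ Exp(l2), min(X, Y) ~ Exp(l1 + l2),
# so its mean should be close to 1 / (l1 + l2).
random.seed(0)
l1, l2, n = 2.0, 3.0, 200_000
samples = [min(random.expovariate(l1), random.expovariate(l2))
           for _ in range(n)]
mean = sum(samples) / n
print(round(mean, 3))                    # close to 1 / (2 + 3) = 0.2
```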
During the execution of a given path, the times of the transitions in the sequence
sum to the overall time. Therefore, the transition time for \eg~$x \xrightarrow{X} y \xrightarrow{Y} z$ is characterized by the random variable
$X+Y$, the sum of the variables characterizing the single transitions composing the path.
Sum and the constantly $0$ continuous variable $\mcl T_0$
define a commutative monoid over \mbb T. The operation has to be commutative
because the order a path imposes on its steps does not change the total time of
execution.
Let $X$ and $Y$ be two continuous random variables from \mbb T with probability
density functions $f_X$ and $f_Y$ respectively. The probability density
function $f_{X+Y}$ is:
\[
f_{X+Y}(t) = \int_0^t f_{X,Y}(s,t-s)\mathrm{d}
s
\]
and, if $X$ and $Y$ are independent (but not necessarily identically distributed), $f_{X+Y}$ is the convolution:
\[
f_{X+Y}(t) = \int_0^t f_X(s)\cdot f_Y(t-s)\mathrm{d}
s\text.
\]
It is easy to check that sum distributes over
minimum:
\[X+\min(Y,Z) = \min(X+Y,X+Z)\]
by taking advantage of the latter operation being idempotent.
Moreover, since sum is commutative,
left distributivity implies right distributivity (and vice versa).
Thus $\mfk S \triangleq (\mbb T,
\min,\mcl T_{+\infty},+,\mcl T_{0})$ is a (commutative and idempotent)
semiring, and stochastic systems can be read as \mfk S-LTSs.
This immediately induces a strong bisimulation (by instantiating
Definition~\ref{def:wlts-strong}) which corresponds to strong stochastic
bisimulation on rated LTSs (Definition \ref{def:rts-bisim}).
Moreover, following Definition \ref{def:wlts-weak}, we can readily
define weak stochastic bisimulation as weak \mfk S-bisimulation.
In the literature there are some (specific and ad hoc) notions of weak
bisimilarity for stochastic systems. The closest to ours is the one
given by Bernardo et al.~for CTMCs extended with passive rates and
\emph{instantaneous} actions \cite{mb2007:ictcs,mb2012:qapl}. Their
definition is finer than our weak \mfk S-bisimulation, since they allow
silent actions to be merged only when these are instantaneous and hence
unobservable also \wrt~time. Instead, in our definition sequences
of $\tau$ actions are equivalent as long as their overall ``rates'' are
the same (note that, in general, the sum of exponentially distributed
random variables is no longer exponentially distributed, but a hypoexponential).
In \cite{mb2012:qapl}, Bernardo et al.~relaxed the definition given in
\cite{mb2007:ictcs} to account also for non-instantaneous $\tau$-transitions.
However, to retain exponentially distributed variables, they approximate
hypoexponentials with exponentials with the same average. This approach
allows them to obtain a saturated system that is still a CTMC,
but loses precision since, in general, the average is the only moment preserved
by the operation. On the contrary, our approach does not introduce
any approximation.
In \cite{ln2005:ictac} L{\'o}pez and N{\'u}{\~n}ez proposed a definition of weak
bisimulation for stochastic transition systems with generic distributions. Their
(rather involved) definition is a refinement of the notion they previously
proposed in \cite{ln2004:fac} and relies on the reflexive and transitive closure
of silent transitions. However, their definition of strong
bisimulation does not correspond to the one arising from the theory of
WLTSs, and hence neither does the weak one.
\subsection{Other examples}
\label{sec:other-semirings}
The definition of weak \mfk W-bisimulation applies to many other
situations. In the following we briefly illustrate some interesting cases.
\paragraph{Tropical and arctic semirings}
These semirings are used very often in optimization problems,
especially for task scheduling and routing problems. Some examples
are: $(\overline{\mbb R},\min,+\infty,+,0)$; $(\overline{\mbb R},\max,-\infty,+,0)$;
$(\overline{\mbb R},\min,+\infty,\max,-\infty)$.
In these contexts, weak bisimulation would allow one to abstract from
``unobservable'' tasks, \eg~internal tasks, and to treat a cluster of
machines as a single one, reducing the complexity of the problem.
\paragraph{Truncation semiring}
$(\{0,\dots,k\},\max,0,\min\{\mbox{\large\bf\_} +\mbox{\large\bf\_} ,k\},k)$. It is a variant of the
above ones, used to reason ``up to'' a threshold $k$. A
weak bisimulation for this semiring allows us to abstract from how the
threshold is violated, but not from whether it is.
\paragraph{Probabilistic semiring}
Another semiring used for reasoning about probabilistic events is
$([0,1],\max,0,\cdot,1)$. This is used to model the maximum likelihood
of events, \eg~for troubleshooting, diagnosis, failure forecasts,
worst cases, etc. A weak bisimulation on this semiring allows us to
abstract from ``unlikely'' events, focusing on the most likely ones.
\paragraph{Formal languages}
A well-known semiring is that of formal languages over a given alphabet:
$(\wp(\Sigma^*),\cup,\emptyset,\circ,\{\varepsilon\})$. Here, a weak
bisimulation is a kind of \emph{determinization} \wrt~the words assigned to
$\tau$-transitions.
\section{A parametric algorithm for computing weak \mfk
W-bisim\-u\-la\-tions}\label{sec:cw-algorithm}
In this section we present an algorithm for computing weak \mfk
W-bisimulation equivalence classes which is parametric in the semiring
structure \mfk W. Being parametrized, the same algorithm can be used
in the mechanized verification and analysis of many kinds of systems.
Algorithms of this kind are often called \emph{universal}, since they do
not depend on any particular numerical domain nor on its machine
representation.
In particular, algorithms parametric over a semiring structure have
been successfully applied to other problems of computer science,
especially in the field of system analysis and optimization (\cf~\cite{lmrs2011}).
The algorithm we present is a variation of the well-known
Kanellakis-Smolka's algorithm for deciding strong non-deterministic
bisimulation \cite{ks1990:ic}.
Our solution is based on the same refinement technique used for the
\emph{coarsest stable partition}, but instead of ``strong''
transitions in the original system we consider ``weakened'' or
saturated ones. The idea of deciding weak bisimulation by computing
the strong bisimulation equivalence classes for the saturated version
of the system has been previously and successfully used \eg~for
non-deterministic or probabilistic weak bisimulations \cite{baier98}.
The resulting complexity is basically that of the coarsest stable
partition problem plus that introduced by the construction of the
saturated transitions. This latter contribution depends on the kind
of system under consideration and, in our case, on the properties of the
semiring \mfk W (the algorithm and its complexity are discussed in more
detail in Section~\ref{sec:cw-algo-and-complexity}).
Before outlining the general idea of the algorithm let us introduce
some notation. For a finite set $X$ we denote by $\mcl{X}$ a
partition of it \ie~a set of pairwise disjoint sets $B_0,\dots,B_n$
covering $X$:
\[
X = \biguplus \mcl X = \biguplus \{B_0,\dots,B_n\}\text.
\]
We shall refer to the elements of the partition \mcl X as \emph{blocks} or
\emph{classes} since every partition induces an equivalence relation
$\bigcup_{B\in\mcl X} B\times B$ on $X$ and vice versa.
Given a finite \mfk W-LTS $(X,A+\{\tau\},\rho)$ the general idea
for deciding weak \mfk W-bisimulation by partition refinement is
to start with a partition of the states $\mcl{X}_0$
coarser than the weak bisimilarity relation \eg~$\{X\}$ and then successively
refine the partition with the help of a \emph{splitter} (\ie~a witness
that the partition is not stable \wrt~the transitions). This process
eventually yields a partition $\mcl{X}_k$
which is the set of equivalence classes of weak bisimilarity.
A splitter of a partition $\mcl X$ is a pair made of an action and a class of $\mcl X$
that violates the condition for $\mcl{X}$ to be a weak bisimulation.
Reworded, a pair $\langle \alpha,C\rangle \in (A+\{\tau\})\times\mcl X$ is a splitter
for $\mcl X$ if, and only if, there exist $B \in \mcl X$ and $x,y\in B$ such that:
\begin{equation}\label{eq:cw-split-check}
\rho(x,\hat\alpha,C) \neq \rho(y,\hat\alpha,C)
\end{equation}
where $\hat\alpha$ is shorthand for the set of traces $\tau^*$
when $\alpha=\tau$, and $\tau^*a\tau^*$ when $\alpha = a \in A$.
Then $\mcl X_{i+1}$ is obtained from $\mcl X_i$ by splitting
every\footnote{In Kanellakis and Smolka's algorithm, only the block
$B$ is split, but in our case we need to evaluate every block anyway
because of saturation, \cf~Section~\ref{sec:cw-saturation}.} $B \in
\mcl X_i$ according to the selected splitter $\langle
\alpha,C\rangle$:
\begin{equation}
\label{eq:cw-refine}
\mcl X_{i+1} \triangleq \bigcup\left\{
B/{\underset{\scriptscriptstyle{\alpha,C}}{\approx}}\mid B \in \mcl X_i\right\}
\end{equation}
where $\underset{\scriptscriptstyle{\alpha,C}}{\approx}$ is the equivalence relation
on states induced by the splitter and such that:
\[
x \underset{\scriptscriptstyle{\alpha,C}}{\approx} y
\defiff\rho(x,\hat\alpha,C) = \rho(y,\hat\alpha,C)
\text.
\]
Note that a block $B$ can be split into more than two parts (in
contrast with the non-deterministic case, where splits are binary), since
splitting depends on the weights of outgoing weak transitions.
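The refinement step \eqref{eq:cw-refine} can be illustrated by a small Python sketch (the names \texttt{refine} and \texttt{weak\_weight} are ours; \texttt{weak\_weight} stands for a hypothetical callback computing $\rho(x,\hat\alpha,C)$ for the chosen splitter, and semiring values are assumed hashable):

```python
from collections import defaultdict

def refine(partition, weak_weight):
    """Split every block of `partition` by weak-transition weight."""
    new_partition = []
    for block in partition:
        groups = defaultdict(list)
        for x in block:
            # States with equal weight into the splitter class stay together.
            groups[weak_weight(x)].append(x)
        # A block may split into more than two parts,
        # one per distinct weight value.
        new_partition.extend(groups.values())
    return new_partition

# Toy example: four states with weights 0, 1, 1, 2 yield three classes.
p = refine([[0, 1, 2, 3]], lambda x: {0: 0, 1: 1, 2: 1, 3: 2}[x])
print(sorted(map(sorted, p)))  # [[0], [1, 2], [3]]
```

The sketch splits \emph{every} block, mirroring the footnote above: saturation forces all blocks to be re-evaluated against the splitter.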
\subsection{Computing weak transitions}
\label{sec:cw-saturation}
The algorithm outlined above follows the classical approach to the
coarsest stable partition problem where stability is given in terms of
weak weighted transitions like $\Lbag x,\tau^*,C \Rbag$ (and in general weighted
sets of paths \eg~$\Lbag x,T,C \Rbag$) but nothing is assumed on how these
values are computed. In this section, we show how weights of weak
transitions can be obtained as solutions of systems of linear
equations over the semiring \mfk W. Clearly, for some specific cases
and sets of paths, there may be more efficient \emph{ad-hoc} techniques
(\eg~saturated transitions can be precomputed
for non-deterministic LTSs);
however, the linear system at the core of our algorithm is a general
and flexible solution which can be readily adapted to other
observational equivalences (\cf~Example~\ref{ex:delay-bisim}).
Let $C$ be a class. For every $x \in X$ and
$\alpha \in A+\{\tau\}$ let $x_\alpha$ be a variable ranging over
the semiring carrier. Intuitively, once the system is solved, these variables represent:
\[
x_\tau = \rho(\Lbag x,\tau^*,C\Rbag) \qquad
x_a = \rho(\Lbag x,\tau^*a\tau^*,C\Rbag)
\]
The linear system is given by the equation families
\eqref{eq:cw-ls-in-class}, \eqref{eq:cw-ls-only-tau} and \eqref{eq:cw-ls-action}
which capture exactly the finite paths yielding the cones covering weak transitions.
\begin{alignat}{3}
\label{eq:cw-ls-in-class}
x_\tau &= 1
&& \quad\mbox{ for } x \in C \\
\label{eq:cw-ls-only-tau}
x_\tau &= {\sum_{y \in X} \rho(x\xrightarrow{\tau}y)\cdot y_\tau}
&& \quad\mbox{ for } x \notin C \\
\label{eq:cw-ls-action}
x_a &= {\sum_{y \in X} \rho(x\xrightarrow{a}y) \cdot y_\tau} +
{\sum_{y \in X} \rho(x\xrightarrow{\tau}y) \cdot y_a}
&&
\end{alignat}
The system is given as a whole but it can
be split into smaller sub-systems, improving the efficiency of the resolution
process. In fact, unknowns like $x_a$ depend only on those indexed by $\tau$
or $a$, and unknowns like $x_\tau$ depend only on those indexed by $\tau$.
Hence, instead of one system of $|A+\{\tau\}|\cdot|X|$ equations and unknowns,
we obtain $|A+\{\tau\}|$ systems of $|X|$ equations and unknowns by
first solving the sub-system for $x_\tau$ and then a separate
sub-system for each action $a \in A$ (where the $x_\tau$ are now constants).
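To make the resolution process concrete, the $\tau$-sub-system (equations \eqref{eq:cw-ls-in-class} and \eqref{eq:cw-ls-only-tau}) can be approximated by Kleene iteration from the bottom element, in line with the fix-point argument of the next subsection. The following Python sketch is ours (function names, default operations, and the fixed iteration count are illustrative assumptions; over the non-negative reals the iteration only approximates the least fix-point):

```python
def solve_tau(states, C, tau_weight,
              add=lambda a, b: a + b, mul=lambda a, b: a * b,
              zero=0.0, one=1.0, iters=1000):
    """Iterate x_tau = 1 for x in C, x_tau = sum_y rho(x -tau-> y) * y_tau
    otherwise, starting from the bottom element of the CPO."""
    x = {s: zero for s in states}          # bottom: constantly zero
    for _ in range(iters):
        nxt = {}
        for s in states:
            if s in C:
                nxt[s] = one               # empty execution, weight 1
            else:
                acc = zero
                for y in states:
                    acc = add(acc, mul(tau_weight(s, y), x[y]))
                nxt[s] = acc
        x = nxt
    return x

# Example over the non-negative reals (fully-probabilistic case):
# a -tau(0.5)-> a, a -tau(0.5)-> c, with C = {c}.
# x_c = 1 and x_a satisfies x_a = 0.5 * x_a + 0.5, i.e. x_a = 1.
w = {("a", "a"): 0.5, ("a", "c"): 0.5}
sol = solve_tau(["a", "c"], {"c"}, lambda s, y: w.get((s, y), 0.0))
print(round(sol["a"], 6))
```

Passing other \texttt{add}/\texttt{mul}/\texttt{zero}/\texttt{one} arguments instantiates the same iteration for other semirings, provided the iteration converges there.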
\begin{example}[Delay bisimulation]
\label{ex:delay-bisim}
Delay bisimulation is defined at the general level of WLTSs
simply by replacing $\Lbag x,\tau^*a\tau^*,C\Rbag$ with $\Lbag x,\tau^*a,C\Rbag$
in Definition~\ref{def:wlts-weak}. Then, delay bisimulation
equivalence classes can be computed with the same algorithm simply by
changing the saturation part at its core. Weights
of sets like $\Lbag x,\tau^*a,C\Rbag$ are computed as the solution to
the linear equation system:
\[
x_a = {\sum_{y \in X} \rho(x\xrightarrow{\tau}y)} \cdot y_a+
{\sum_{y \in C} \rho(x\xrightarrow{a}y)}
\text.\]
\end{example}
\subsubsection{Solvability}
Decidability of the algorithm depends on the solvability of the equation
system at its core, in particular on the existence and uniqueness of the
solution. In this section we prove that this holds for every positively
ordered $\omega$-semiring. The results can be extended to
$\sigma$-semirings provided that their $\sigma$-algebra covers
the countable families used by Theorem~\ref{thm:uniq-solution}.
The linear equation systems under consideration have a special form:
they have exactly the same number of equations and unknowns
(say $n$),
and every unknown appears alone on the left-hand side
of exactly one equation. Therefore, these systems
define an operator
\begin{equation}\label{eq:sys-matrix}
F(x) = M\times x + b
\end{equation}
over the space of $n$-dimensional vectors $W^n$,
where $M$ is an $n\times n$ matrix and $b$ an
$n$-dimensional vector, both defined by the equations of the
system.
Then, the solutions of the system are precisely
the fix-points of the operator $F$ and since the number of
equations and unknowns is the same, if $F$ has a fix-point, it is unique.
Let the semiring $\mfk W$ be positively ordered.
These semirings admit a natural preorder $\trianglelefteq$ which subsumes
any preorder $\sqsubseteq$ respecting the structure of the semiring;
hence we restrict ourselves to the former.
The point-wise extension of $\trianglelefteq$ to $n$-dimensional vectors
defines the partial order with bottom $(W^n,\dot\trianglelefteq,0^n)$;
suprema are lifted point-wise from $(W,\trianglelefteq,0)$,
where they are defined by means of sums. Therefore, suprema of
$\omega$-chains exist precisely under the assumption of addition over at most countable families.
\begin{lemma}\label{lem:cpo}
$(W^n,\dot\trianglelefteq,0^n)$ is $\omega$-complete
iff $\mfk W$ admits countable sums.
\end{lemma}
The operator $F$ manipulates its arguments only by additions
and constant multiplications which respect the natural order.
Thus $F$ is monotone with respect to $\dot\trianglelefteq$.
Moreover, $F$ preserves suprema of $\omega$-chains (and in general
of $\omega$-families) because suprema for $\trianglelefteq$ are
defined by means of additions and the order is lifted point-wise.
\begin{lemma}\label{lem:scott}
The operator $F$ over $(W^n,\dot\trianglelefteq,0^n)$ is Scott-continuous.
\end{lemma}
Finally, we can state the main result of this Section from which decidability
follows as a corollary.
\begin{theorem}\label{thm:uniq-solution}
Systems in the form of \eqref{eq:sys-matrix}
have unique solutions if the underlying semiring is
well-behaved and $\omega$-complete.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:cpo}, Lemma~\ref{lem:scott} and the Kleene fix-point
theorem, $F$ has a least fix-point. Because the linear equation
system has the same number of equations and unknowns, this solution
is unique.
\end{proof}
The linear equation systems defined by
the equation families \eqref{eq:cw-ls-in-class}, \eqref{eq:cw-ls-only-tau} and \eqref{eq:cw-ls-action} have exactly one solution, and hence the proposed algorithm
is decidable. Moreover, this holds for any
behavioural equivalence whose saturation can be expressed in a similar
way \eg~delay bisimulation (\cf~Example~\ref{ex:delay-bisim}).
\subsubsection{Adequacy}
If $x\in C$, then the empty execution $\varepsilon$ is the
only element of the set $\Lbag x,\tau^*,C \Rbag$ and
(by definition of reachability) $\rho(\varepsilon)$ is the value
of the $0$-fold multiplication, \ie~the unit $1$.
This case falls under \eqref{eq:cw-ls-in-class} and
hence $x_\tau$ is $\rho(x,\tau^*,C)$ when $x \in C$.
On the other hand, if $x\notin C$, then every path reaching $C$ from $x$
needs to have length strictly greater than $0$; reworded, it starts with a transition
${x\xrightarrow{\tau}y}$ and from $y$ heads towards $C$.
The weight of $\Lbag x,\tau\tau^*,C \Rbag$ is the sum of the weights
of its paths which are themselves the ordered multiplication of their
steps. Then by grouping paths by their second state the remaining parts
are exactly the paths in the set $\Lbag y,\tau^*,C \Rbag$.
Then we obtain the unfolding
\[\rho(\Lbag x,\tau\tau^*,C \Rbag) = \sum_{y\in X}\rho(x\xrightarrow{\tau}y)\cdot\rho(\Lbag y,\tau^*,C \Rbag)\]
which recursively defines the weight of these sets as the unfolding of
executions. In particular, the base case is precisely \eqref{eq:cw-ls-in-class}
and the inductive one is \eqref{eq:cw-ls-only-tau}.
Every path in the set $\Lbag x,\tau^*a\tau^*,C \Rbag$ contains exactly
one transition labelled by the action $a$; hence it is non-empty and its
first transition, to some state $y$, is labelled with either $a$ or $\tau$.
In the first case, the observable $a$ is consumed and the remaining path
is necessarily in the set $\Lbag y,\tau^*,C \Rbag$ covered above.
In the second case, the only observable of the path has not been consumed yet
and thus the remaining part of the path should be in the set
$\Lbag y,\tau^*a\tau^*,C \Rbag$
completing the case for \eqref{eq:cw-ls-action}.
\begin{proposition}
Let $\mfk W$ be a positively ordered $\omega$-semiring.
For any $C$, $\alpha$ and $x$, solutions for
\eqref{eq:cw-ls-in-class}, \eqref{eq:cw-ls-only-tau} and \eqref{eq:cw-ls-action}
are exactly the weights of $\Lbag x,\hat\alpha,C \Rbag$.
\end{proposition}
\subsection{The algorithm and its complexity}
\label{sec:cw-algo-and-complexity}
\begin{figure}
\caption{The algorithm for weak \mfk W-bisimulation.}
\label{fig:cw-algo}
\end{figure}
In this section we describe the algorithm and study its worst
case complexity. The algorithm and the resulting analysis follow the structure
of the Kanellakis-Smolka result. However, some assumptions available
in the case of strong bisimulation for non-deterministic systems
are not available in this setting. For instance, transitions have to be computed
on the fly. Moreover, as in many other algorithms parametrized over
semirings, no hypotheses are made on the numerical domain nor
on its machine representation. As a consequence, we cannot assume
constant-time random-access data structures or a linear order on the
elements of the semiring. However, since many practical semirings
admit total orderings and efficient data structures, we also describe
this second case, providing a more efficient version of the algorithm
than the one for the general case.
The first algorithm we propose is reported in Figure~\ref{fig:cw-algo}.
Given a finite \mfk W-LTS $(X,A+\{\tau\},\rho)$ as input, it returns a
partition \mcl{X} of $X$ inducing a weak \mfk W-bisimulation for the
system.
The partition \mcl X is initially assumed to have the set of states $X$
as its only block and corresponds
to the assumption of the largest possible equivalence relation on $X$ being also a
weak bisimulation. In general, any partition coarser than some weak bisimulation
would be a suitable initial partition.
The purpose of the two auxiliary partitions $\mcl X'$ and $\mcl X''$
is to keep track of which classes were added to \mcl X during the
previous iteration of the repeat-until loop, thus avoiding the reuse of a split
candidate. We use these additional partitions for readability, but the same result
may be achieved, for instance, by using two colours to distinguish blocks already
checked. Moreover, $\mcl X'$ and $\mcl X''$ make the flag $changed$ redundant.
The algorithm iterates over each split candidate
$\langle \alpha,C\rangle$ and tries to split the partition by checking
whether \eqref{eq:cw-split-check} holds. If the partition ``survives''
every split test, then it is stable and, in particular, it describes
a weak $\mfk W$-bisimulation relation. The saturated transitions
required to test $\langle \alpha,C\rangle$ are computed
by solving the linear equation systems described before.
Overall, we have to solve ${|A|+1}$ systems of $|X|$ linear equations and unknowns
for each $C$.
The complexity of solving these systems depends on the underlying
semiring structure. For instance, solving a system over the semiring
of non-negative real numbers is in \mrm P \cite{aho74:algobook},
whereas solving a system over the tropical (resp. arctic) semiring
is in $\mrm{NP} \cap \mrm{coNP}$ (\cf~\cite{gp12:corr}). Since
the algorithm is parametrized by the semiring, its complexity will
be parametrized by the one introduced by the solution of these
linear equation systems. Therefore we shall denote by
$\mcl{L}_\mfk W(n)$ the complexity of solving a system of $n$ linear
equations in $n$ variables over \mfk W.
\begin{remark}
\label{rem:cw-efficient-split}
The complexity of the split test can be made more precise since we are not
solving general linear systems, but a specific sub-class of them.
For instance, solving a linear system over the boolean semiring is
\mrm{NP}-complete in general, whereas we are interested in a specific
subclass encoding a reachability problem over a directed
graph, which is in \mrm P.
\end{remark}
Let $n$ and $m$ denote the number of states and
labels respectively. For each block $C$ used to generate splits,
there are exactly $m$ candidates, requiring $m$ split tests
and at most $m$ updates to \mcl X. Splits can be thought of as
describing a tree whose nodes are the various blocks encountered by
the algorithm during its execution and whose leaves are exactly the
elements of the final partition. Because the cardinality of \mcl X is
bounded by $n$, the algorithm can encounter at most $\mcl O(n)$ blocks
during its entire execution and hence performs at most $\mcl O(n)$
updates of \mcl X (which happens when the splits describe a perfect tree with
$n$ leaves). Therefore, in the worst case, the algorithm performs
$\mcl{O}(nm)$ split tests and $\mcl{O}(n)$ partition refinements.
Partition refinements and checks of \eqref{eq:cw-split-check} can
both be done in $\mcl{O}(n^2)$ without any additional assumption about
$X$, $A$ and \mfk W nor the use of particular data structures or
primitives. Therefore the asymptotic upper bound for time complexity
of the proposed algorithm is $\mcl{O}(nm(\mcl{L}_\mfk W(n) + n^2))$
where $\mcl{L}_\mfk W(n)$ is the upper bound for the complexity
introduced by computing the weak transitions for a given set of
states.
\begin{figure}
\caption{An alternative algorithm for linearly ordered blocks and weights.}
\label{fig:cw-algo-with-ords}
\end{figure}
The time complexity can be lowered by means of more efficient representations of
systems, partitions and weights. For instance, the structure of every semiring can
be used to define an ordering for its elements (\cf\cite{han2012:ordsemiring})
allowing the use of lookup data structures.
Under the assumption of some linear ordering for weights and blocks
(at least within the same partition)
the operations of refinement and split testing can be carried out
more efficiently by sorting lexicographically the transitions ending in the
splitting block $C$. The resulting algorithm is reported in
Figure~\ref{fig:cw-algo-with-ords}.
This allows the algorithm to carry out the refinement of \mcl X
while it is reading the lexicographically ordered list of the saturated
transitions. In fact, a block $B$ is split by $\langle\alpha,C\rangle$ if the list
contains different weights in the portion of the list where $B$ appears.
A change in the weights corresponds to two states $x$ and $y$ such that
\eqref{eq:cw-split-check} holds.
For each $\langle\alpha,C\rangle$ there are at most $n$ weak transitions
${\rho(x,\hat\alpha,C)}$ and these are sorted in
$\mcl O(n\mrm{ln}(n))$ -- or in $\mcl O(n)$ using a classical algorithm from
\cite{aho74:algobook}. In the worst case the algorithm encounters $\mcl O(n)$
blocks during its entire execution, yielding a worst-case time complexity in
$\mcl{O}(nm(\mcl{L}_\mfk W(n) + n))$.
Overall, we have proved the following result:
\begin{proposition}
The asymptotic upper bound for time complexity of the algorithm is
in $\mcl{O}(nm(\mcl{L}_\mfk W(n) + n^2))$, for the general case, and in
$\mcl{O}(nm(\mcl{L}_\mfk W(n) + n))$ given a linear ordering for blocks and weights.
Both algorithms have space complexity in $\mcl O(mn^2)$.
\end{proposition}
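The ordering-based split underlying Figure~\ref{fig:cw-algo-with-ords} can be sketched as follows (a Python sketch under our own naming; it assumes totally ordered weights and block identifiers, with \texttt{weak\_weight} a hypothetical callback for $\rho(x,\hat\alpha,C)$): states are sorted lexicographically by block and weight, and a block is cut wherever the weight changes within it.

```python
def split_by_sorting(partition, weak_weight):
    """Refine `partition` by sorting states on (block id, weight)."""
    block_of = {x: i for i, b in enumerate(partition) for x in b}
    xs = sorted((x for b in partition for x in b),
                key=lambda x: (block_of[x], weak_weight(x)))
    if not xs:
        return []
    result, current = [], [xs[0]]
    for prev, x in zip(xs, xs[1:]):
        if block_of[prev] == block_of[x] and weak_weight(prev) == weak_weight(x):
            current.append(x)          # same block, same weight: same class
        else:                          # block boundary or weight change
            result.append(current)
            current = [x]
    result.append(current)
    return result

# States 0 and 1 share a weight; 2 and 3 differ: three resulting classes.
print(split_by_sorting([[0, 1], [2, 3]],
                       lambda x: {0: 1, 1: 1, 2: 0, 3: 5}[x]))
```

A single linear scan of the sorted list performs all the cuts, which is the source of the improved $\mcl O(n)$ refinement cost quoted above (excluding the sorting itself).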
\section{Coalgebraic perspective}\label{sec:cat-view}
In this Section we illustrate the categorical construction behind
Definition~\ref{def:wlts-weak}. The presentation is succinct due to
space constraints but it is based on general results from coalgebraic
theory. In particular, we define weak bisimulations as cocongruences
of saturated or weak systems extending the elegant approach proposed
by Silva and Westerbaan in \cite{sw2013:epsilon}.
This is not the first work on a coalgebraic perspective of weak
bisimulations, as in recent years there have been
several works in this direction. In general, the approach is to
recover weak bisimulation as the coalgebraic bisimulation of saturated
systems. In \cite{sokolova05:entcs,sokolova09:sacs} Sokolova et
al.~studied the case of action-based coalgebras and demonstrated their
results on the cases of non-deterministic and fully-probabilistic
systems. In particular, the latter required changing the category of
coalgebras. Recently, Brengos
\cite{brengos2012:ifip,brengos2013:corr} proposed an interesting
construction based on ordered-functors which yields saturated
coalgebras for the same behavioural functor. Both these constructions
are parametric in the notion of saturation and are therefore far more
general; \cite{brengos2013:corr} describes an algebraic
structure and some conditions yielding precisely saturations for weak
bisimulations. However, this approach does not cover the case of
generative and stochastic systems \cite[Sec.~6]{brengos2012:ifip} yet.
In \cite{ps2012:cmcs} Soboci\'{n}ski describes a neat account of weak
(bi)simula\-tion for non-deterministic systems and proves that
saturation via the double-arrow construction (\ie~$\tau$-closure)
results from a suitable change of base functor having a left adjoint
in the 2-categorical sense.
Likewise, we rely on saturation of the given systems but we do not
require any additional parameter. Moreover, we base our definition on
cocongruences which allow us to work explicitly with the equivalence
classes and saturate the given coalgebras so that they describe
how each class is reached by each state, without the need to alter the
behavioural functor.
Our saturation construction builds on the account of
$\epsilon$-transitions recently given in \cite{sw2013:epsilon} and on
the neat coalgebraic perspective of trace equivalence given by Hasuo
in \cite{hasuo2010:trace}. Therefore the same settings are assumed,
\ie~we consider coalgebras for functors like $TF$ where $T$ and $F$
are endofunctors over a category \cat C with all finite limits and
$\omega$-colimits; $(T,\mu,\eta)$ is a monad; there exists a natural
transformation $\lambda$ distributing $F$ over $T$; the Kleisli
category $\cat{Kl}(T)$ is CPPO-enriched and has, for any $X \in \cat
C$, a final $(\mbox{\large\bf\_} + X)$-coalgebra. Before describing the saturation
construction let us state the main definition of this Section.
\begin{definition}\label{def:coalg-weak}
Given two $TF_\tau$-coalgebras $(X,\alpha)$ and $(Y,\beta)$, a span of
jointly monic arrows $X \xleftarrow{p} R \xrightarrow{q} Y$ describes
a weak bisimulation between $\alpha$ and $\beta$ if and only if
there exists an epic cospan $X \xrightarrow{ f} C \xleftarrow{ g} Y$
such that $(R,p,q)$ is the final span to make the following diagram
commute:
\[
\begin{tikzpicture}[auto,xscale=2.3,yscale=1.2,font=\footnotesize,
extended/.style={shorten >=-#1, shorten <=-#1},
baseline=(current bounding box.center)]
\node (n0) at (0,1) {\(X\)};
\node (n1) at (2,1) {\(Y\)};
\node (n2) at (1,.5) {\(C\)};
\node (n3) at (0,0) {\(TF_\tau X\)};
\node (n4) at (2,0) {\(TF_\tau Y\)};
\node (n5) at (1,-.5) {\(TF_\tau C\)};
\node (n6) at (-.7,1) {\(X\)};
\node (n7) at (2.7,1) {\(Y\)};
\node (n8) at (-.7,0) {\(TF_\tau X\)};
\node (n9) at (2.7,0) {\(TF_\tau Y\)};
\node (n10) at (1,1.5) {\(R\)};
\draw[->] (n0) to node [swap] {\( f\)} (n2);
\draw[->] (n1) to node [] {\( g\)} (n2);
\draw[->] (n0) to node [swap] {\(\alpha^w\)} (n3);
\draw[->] (n1) to node [] {\(\beta^w\)} (n4);
\draw[->] (n2) to node [swap] {\(\gamma\)} (n5);
\draw[->] (n3) to node [swap] {\(TF_\tau f\)} (n5);
\draw[->] (n4) to node [] {\(TF_\tau g\)} (n5);
\draw[->] (n6) to node [swap] {\(\alpha\)} (n8);
\draw[->] (n7) to node [] {\(\beta\)} (n9);
\draw[->] (n10) to node [swap] {\(p\)} (n0);
\draw[->] (n10) to node [] {\(q\)} (n1);
\end{tikzpicture}
\]
where $\alpha^w$ and $\beta^w$ are the \emph{weak saturated} $TF_\tau$-coalgebras
\wrt~$ f$ and $g$.
\end{definition}
Let us see how the weak saturation $\alpha^w$ is defined. In our
setting, the traces of a $TF$-coalgebra $\alpha$ are described by the
final map $\mrm{tr}_\alpha$ from the lifting of $\alpha$ in
$\cat{Kl}(T)$ to the final $\overline F$-coalgebra where $\overline F$
is the lifting of $F$ to $\cat{Kl}(T)$ induced by the distributive law
$\lambda : FT \Longrightarrow TF$ (\cf~\cite{hasuo2010:trace}). Roughly
speaking, the monad $T$ can be thought of as describing the branching of
the system whereas the observables are characterized by $F$. Assuming
this point of view, any $F$ can be extended with silent actions $\tau$
as the free pointed functor
\[
F_\tau \triangleq X + FX\text.
\]
Now, a $TF_\tau$-coalgebra $\alpha$ can be ``determinized'' by means
of its \emph{iterate} \cite{sw2013:epsilon}:
\[\mrm{itr}_\alpha \triangleq \nabla_{FX}\circ\mrm{tr}_\alpha\]
where $\nabla$ is the codiagonal; the traces refer to $\alpha$ seen as
a $T(X + B)$-coalgebra for $B = FX$, and $(X,\mrm{itr}_\alpha)$ is a
$TF$-coalgebra. The iterate offers an elegant and general way to
``compress'' executions with leading silent transitions like $\tau^*a$
into single-step transitions with exactly one observable, while retaining
the effects of the entire execution within the monad $T$.
These results cover executions ending with an observable
and hence do not directly lend themselves to equivalences based also on
trailing silent actions, as in the case of $\tau^*a\tau^*$ required by
weak bisimulation. However, suppose we have, for any given $TF_\tau$-coalgebra
$(X,\alpha)$, a $T$-coalgebra $(X,\alpha^\tau)$ describing how each state
reaches every class by $\tau$-transitions only; then, the coalgebra
describing reachability by $\tau^*a\tau^*$ is exactly:
\[
\alpha^\flat :
X \xrightarrow{\mrm{itr}_\alpha} TFX \xrightarrow{TF{\alpha^\tau}}
TFTX \xrightarrow{\lambda_X} TTFX \xrightarrow{\mu} TFX
\text.
\]
Then the saturated coalgebra $\alpha^w$ is defined by means of the
2-cell structure of $\cat{Kl}(T)$ as the join described by the diagram below.
\begin{equation}
\label{eq:coalg-split}
\begin{tikzpicture}[auto,xscale=2.5,yscale=1.2,font=\footnotesize,rotate=-45,
baseline=(current bounding box.center)]
\node (n0) at (0,1) {\(X\)};
\node (n1) at (0,0) {\(X\)};
\node (n2) at (1,1) {\(FX\)};
\node (n3) at (1,0) {\((X+F X)\)};
\draw[->] (n0) to node [swap] {\(\alpha^\tau\)} (n1);
\draw[->] (n0) to node [] {\(\alpha^\flat\)} (n2);
\draw[->] (n1) to node [swap] {\(\iota_1\)} (n3);
\draw[->] (n2) to node [] {\(\iota_2\)} (n3);
\draw[->] (n0) -- (n3);
\node[preaction={fill=white,inner sep = 0pt}] at ($(n1)!.5!(n2)$) {\(\alpha^w\triangleq\sqcup\)};
\node[] at ($(n1)!.25!(n2)$) {\(\sqsubseteq\)};
\node[] at ($(n2)!.25!(n1)$) {\(\sqsupseteq\)};
\end{tikzpicture}\end{equation}
This definition points out that $\tau^*$ and
$\tau^*a\tau^*$ are two similar but distinct cases.
In order to define the $T$-coalgebra $\alpha^\tau$, first we
need to be able to consider only the silent actions of the given
$\alpha$. This information can be isolated from $\alpha$
by means of the same structure used in \eqref{eq:coalg-split}.
Therefore we define, for every $TF_\tau$-coalgebra $\alpha$,
its silent and observable parts, namely $\alpha^s$ and
$\alpha^o$, as the (greatest) arrows to make the following
diagram commute and have $\alpha$ as their join.
\[
\begin{tikzpicture}[auto,xscale=2.5,yscale=1.2,font=\footnotesize,rotate=-45,
baseline=(current bounding box.center)]
\node (n0) at (0,1) {\(X\)};
\node (n1) at (0,0) {\(X\)};
\node (n2) at (1,1) {\(FX\)};
\node (n3) at (1,0) {\((X+F X)\)};
\draw[->] (n0) to node [swap] {\(\alpha^s\)} (n1);
\draw[->] (n0) to node [] {\(\alpha^o\)} (n2);
\draw[->] (n1) to node [swap] {\(\iota_1\)} (n3);
\draw[->] (n2) to node [] {\(\iota_2\)} (n3);
\draw[->] (n0) -- (n3);
\node[preaction={fill=white,inner sep = 0pt}] at ($(n1)!.5!(n2)$) {\(\alpha=\sqcup\)};
\node[] at ($(n1)!.25!(n2)$) {\(\sqsubseteq\)};
\node[] at ($(n2)!.25!(n1)$) {\(\sqsupseteq\)};
\end{tikzpicture}\]
Because $\alpha^\tau$ has to describe how each class is reached,
classes can be used as the observables needed to apply the iterate
construction to $\alpha^\tau$. However, to be able to select the class
to be reached and consider it as the only one observable by the
iterate (just as $F_\tau$ distinguishes silent and observable actions
by means of a coproduct) we need $X$ and $C$ to be represented as
indexed coproducts of simpler canonical subobjects corresponding to
the classes induced by $f : X \to C$.
Henceforth, for simplicity we assume $X \cong X\cdot 1$ and $C \cong
C\cdot 1$.
For each class $c : 1 \to C$ let $X\cong \overline{X}_c + X_c$ be the
split induced by $c$. This extends to the coalgebra $\alpha^s$ (by
coproduct) determining the coalgebra: $\alpha^s_c : \overline{X}_c \to
T(\overline{X}_c + X_c)$ whose iterate is the map $\alpha^+_c :
\overline{X}_c \to T(X_c)$ describing executions of silent action only
ending in $c$ (but starting elsewhere). This yields a $C$-indexed
family of morphisms which together describe $\tau^+$ and the
information is collected in one $T$-algebra as a join in the 2-cell
like \eqref{eq:coalg-split}. For this join to be admissible we
require $T$ to not exceed the completeness of 2-cells, \ie~for any $x
: 1 \to X \in \cat C$ the supremum of the set $\cat{Kl}(T)(1,X)$
determined by $\cat{Kl}(T)(x,X)$ exists. Reworded, if cells are
$\kappa$-CPPOs, then $T$ is $\kappa$-finitary; \eg~in \cat{Set}
$T$-coalgebras describe image $\kappa$-bounded $T$-branching
systems. Thus, for every $x$ the family of arrows $\{\alpha^+_c\circ
x\}$ is bounded by $\kappa$ and can be joined. These are
composed in $\alpha^+$ as the universal arrow in the $X$-fold
coproduct. The last step is provided by the monad
unit, which is a $T$-coalgebra describing how states reach their
containing class and can be easily joined with the above, obtaining,
finally, $\alpha^\tau$. This completes the construction of $\alpha^w$.
\paragraph{Weighted labelled transition systems}
Assuming at most countably many actions, image-countable $\mfk W$-LTSs
are in 1-1 correspondence with $\mathcal{F}_{\mathfrak{W}}(A\times\mbox{\large\bf\_} )$-coalgebras where
$(\mathcal{F}_{\mathfrak{W}},\mu,\eta)$ is the monad of $\mfk W$-valued functions with at most
countable support. In particular, $\mathcal{F}_{\mathfrak{W}} X$ is the set of morphisms in
$\cat{Set}(X,W)$ factoring through $\mbb N$. On arrows, $\mathcal{F}_{\mathfrak{W}}$ is defined as
$\mathcal{F}_{\mathfrak{W}} f (\varphi)(y)\triangleq \sum_{x\in f^{-1}(y)}\varphi(x)$. The unit $\eta$
is defined as $\eta(x)(y) \triangleq 1$ for $x = y$ and $0$ otherwise, and the
multiplication $\mu$ is defined as
$\mu(\psi)(x) \triangleq \sum_{\varphi}\psi(\varphi)\cdot\varphi(x)$.
If $\mfk W$ is the boolean semiring, $\mathcal{F}_{\mathfrak{W}}$ is precisely the countable
powerset monad. Strength and double strength readily generalize to
every $\mathcal{F}_{\mathfrak{W}}$ and by \cite{hasuo07:trace} there is a canonical law
$\lambda$ distributing $(A\times\mbox{\large\bf\_} )$ over $\mathcal{F}_{\mathfrak{W}}$. The semiring
$\mfk W$ can be easily endowed with an ordering which lifts point-wise
to $\mfk W$-valued functions \cite{han2012:ordsemiring}. In
particular, any $\omega$-semiring with a natural order
(\cf~Section~\ref{sec:good-semirings}) yields a
CPPO-enriched $\cat{Kl}(\mathcal{F}_{\mathfrak{W}})$ with bottom the constantly $0$ function.
\begin{theorem}
Let $T$ be $\mathcal{F}_{\mathfrak{W}}$ and $F$ be $(A\times\mbox{\large\bf\_} )$. For any given
$TF_\tau$-coalgebra $\alpha$ and its corresponding $\mfk W$-LTS,
Definition~\ref{def:coalg-weak} and Definition~\ref{def:wlts-weak}
coincide.
\end{theorem}
\begin{proof}
By unfolding of Definition~\ref{def:coalg-weak} and by minimality of
executions considered by the construction of $\alpha^w$.
\end{proof}
\section{Conclusions and future work}
\label{sec:concl}
In this paper we have introduced a general notion of \emph{weak
weighted bisimulation} which applies to any system that can be
specified as a LTS weighted over a \emph{semiring}. The semiring
structure allows us to compositionally extend weights to multi-step
transitions.
We have shown that our notion of weak
bisimulation naturally covers the cases of non-deterministic,
fully-probabilistic, and stochastic systems, among others.
We described a ``universal'' algorithm for computing weak bisimulations
parametric in the underlying semiring structure and proved its
decidability for every positively ordered $\omega$-semiring.
Finally, we gave a categorical account of the coalgebraic construction behind
these results, providing the basis for extending the results presented
here to other behavioural equivalences.
Our results come with great flexibility: on the one hand,
WLTSs can be instantiated to many kinds of systems (by
just providing suitable semirings) and, on the other,
many other behavioural equivalences can be considered
simply by changing the observation patterns used
in Definition~\ref{def:wlts-weak} and in the linear
equation systems at the core of the proposed algorithm,
as described in Example~\ref{ex:delay-bisim}.
A possible future work is to improve the efficiency of our algorithm,
\eg~by extending Paige-Tarjan's algorithm for strong bisimulation
instead of Kanellakis-Smolka's, or using more recent approaches
based on symbolic bisimulations \cite{wimmer2006:sigref}.
Obviously, for specific systems and semirings there are solutions
more efficient than ours. For instance, in the case of systems over the semiring of
non-negative real numbers (which captures \eg~probabilistic
systems) the asymptotic upper bound for time complexity of our
algorithm is $\mcl O(mn^{3.8})$ (since $\mcl{L}_{\mbb R^+_0}(n)$ is in
$\mcl O(n^{2.8})$ using \cite{aho74:algobook}). However, deciding
weak bisimulation for fully-probabilistic systems is in $\mcl O(mn^3)$
in the worst case using the algorithm proposed by Baier and Hermanns
in \cite{baier00} (the original analysis assumed $A$ to be fixed,
resulting in the worst-case complexity $\mcl O(n^3)$). Their
algorithm capitalises on properties not available at the general level of
WLTSs (even under the assumption of suitable orderings), such as:
sums of outgoing transitions are bounded,
there are complementary events, real numbers have more structure
than a semiring, weak and delay bisimulations coincide for finite
fully-probabilistic systems (\eg~this does not hold for non-deterministic LTSs). The aim of
future work is to generalize the efficiency results of
\cite{baier00} to a parametrized algorithm for constrained WLTSs, or
at least for some classes of WLTSs subject to suitable families of
constraints.
The construction presented in Section~\ref{sec:cat-view}
introduces some techniques and tools that can be used to deal with
other behavioural equivalences. In fact, we think that many
behavioural equivalences can be obtained by ``assembling'' smaller
components, by means of 2-splits, 2-merges and \emph{iterate}, as we
did for weak bisimulation. We plan to provide a formal, and easy to
use, language for describing and combining these ``building blocks''
in a modular way.
An important direction for future work is to generalize our
framework by weakening the assumptions on the underlying category
(introduced in order to observe and manipulate equivalence classes)
and by considering different behavioural functors. In particular, we
intend to extend this framework to \textsc{ULTraS}s, \ie~the generalization of
WLTSs recently proposed by Bernardo et al.~in
\cite{denicola13:ultras}. These are an example of \emph{staged transition systems}, where
several behavioural functors (or stages) are ``stacked'' together.
\end{document}
\begin{document}
\title{Compressed Conjugacy and the Word Problem for Outer
Automorphism Groups of Graph Groups}
\author{Niko Haubold \and Markus Lohrey \and Christian Mathissen}
\institute{Institut f\"ur Informatik, Universit\"at Leipzig, Germany\\
\texttt{\{haubold,lohrey,mathissen\}@informatik.uni-leipzig.de} }
\maketitle
\begin{abstract}
It is shown that for graph groups (right-angled Artin groups)
the conjugacy problem
as well as a restricted version of the simultaneous
conjugacy problem can be solved
in polynomial time even if input words are represented
in a compressed form. As a consequence it follows
that the word problem for the outer automorphism group of a
graph group can be solved in polynomial time.
\end{abstract}
\section{Introduction}
{\em Automorphism groups} and {\em outer automorphism groups} of {\em
graph groups} received a lot of interest in the past few
years. A graph group $\mathbb{G}(\Sigma,I)$ is given by a finite
undirected graph $(\Sigma,I)$ (without self-loops). The set $\Sigma$
is the set of generators of $\mathbb{G}(\Sigma,I)$ and every edge $(a,b) \in
I$ gives rise to a commutation relation $ab=ba$. Graph groups are
also known as {\em right-angled Artin groups} or {\em free partially
commutative groups}. Graph groups interpolate between finitely
generated free groups and finitely generated free Abelian groups. The
automorphism group of the free Abelian group $\mathbb{Z}^n$ is
$\mathsf{GL}(n,\mathbb{Z})$ and hence finitely generated. By a
classical result of Nielsen, automorphism groups of free groups are
also finitely generated; see e.g.\ \cite{LySch77}. For graph groups in
general, it was shown by Laurence \cite{Lau95} (building on
previous work by Servatius \cite{Ser89}) that their automorphism
groups are finitely generated. Only recently, Day \cite{Day09} has
shown that $\mathsf{Aut}(\mathbb{G}(\Sigma,I))$ is always finitely presented. Some
recent structural results on automorphism groups of graph groups can
be found in \cite{ChCrVo07,ChoVog09}; for a survey see \cite{Cha07}.
In this paper, we continue the investigation of algorithmic aspects of
automorphism groups of graph groups. In \cite{LoSchl07} it was shown
that the word problem for $\mathsf{Aut}(\mathbb{G}(\Sigma,I))$ can be solved in
polynomial time. The proof of this result used compression techniques.
It is well-known that the word problem for $\mathbb{G}(\Sigma,I)$ can be
solved in linear time. In \cite{LoSchl07}, a compressed (or succinct)
version of the word problem for graph groups was studied. In this
variant of the word problem, the input word is represented succinctly
by a so-called \emph{straight-line program}. This is a context free
grammar $\mathbb{A}$ that generates exactly one word $\mathsf{val}(\mathbb{A})$, see
Section~\ref{S SLP}. Since the length of this word may grow
exponentially with the size (number of productions) of the SLP $\mathbb{A}$,
SLPs can be seen indeed as a succinct string representation. SLPs
turned out to be a very flexible compressed representation of strings,
which are well suited for studying algorithms for compressed data, see
e.g. \cite{GaKaPlRy96,Lif07,Loh06siam,MiShTa97,Pla94,PlaRy98}. In
\cite{LoSchl07,Schl06} it was shown that the word problem for the
automorphism group $\mathsf{Aut}(G)$ of a group $G$ can be reduced in
polynomial time to the {\em compressed word problem} for $G$, where
the input word is succinctly given by an SLP. In \cite{Schl06}, it
was shown that the compressed word problem for a finitely generated
free group $F$ can be solved in polynomial time and in \cite{LoSchl07}
this result was extended to graph groups. It follows that the word
problem for $\mathsf{Aut}(\mathbb{G}(\Sigma,I))$ can be solved in polynomial time.
Recently, Macdonald \cite{Macd09} has shown that also the compressed
word problem for every fully residually free group can be solved in
polynomial time.
It is not straightforward to carry over the above-mentioned complexity
results from $\mathsf{Aut}(\mathbb{G}(\Sigma,I))$ to the {\em outer} automorphism
group $$\mathsf{Out}(\mathbb{G}(\Sigma,I)) =
\mathsf{Aut}(\mathbb{G}(\Sigma,I))/\text{Inn}(\mathbb{G}(\Sigma,I)).$$ Nevertheless,
Schleimer proved in \cite{Schl06} that the word problem for the outer
automorphism group of a finitely generated free group can be decided
in polynomial time. For this, he used a compressed variant of the
simultaneous conjugacy problem in free groups. In this paper, we
generalize Schleimer's result to graph groups: For every graph
$(\Sigma,I)$, the word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ can be solved
in polynomial time. Analogously to Schleimer's approach for free
groups, we reduce the word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ to a
compressed variant of the simultaneous conjugacy problem in
$\mathbb{G}(\Sigma,I)$. In this problem, we are given an SLP $\mathbb{A}_a$ for
every generator $a \in \Sigma$, and the question is whether there
exists $x \in \mathbb{G}(\Sigma,I)$ such that $a = x\, \mathsf{val}(\mathbb{A}_a)\, x^{-1}$ for
all $a \in \Sigma$. A large part of this paper develops a polynomial
time algorithm for this problem. Moreover, we also present a
polynomial time algorithm for the compressed version of the classical
conjugacy problem in graph groups: In this problem, we are given two
SLPs $\mathbb{A}$ and $\mathbb{B}$ and we ask whether there exists $x \in
\mathbb{G}(\Sigma,I)$ such that $\mathsf{val}(\mathbb{A}) = x\, \mathsf{val}(\mathbb{B}) x^{-1}$ in
$\mathbb{G}(\Sigma,I)$. For our polynomial time algorithm, we have
to develop a pattern matching algorithm for SLP-compressed
Mazurkiewicz traces, which is inspired by a pattern matching
algorithm for hierarchical message sequence charts from \cite{GenMus08}.
For the non-compressed version of the
conjugacy problem in $\mathbb{G}(\Sigma,I)$, a
linear time algorithm was presented in \cite{Wra89} based on
\cite{LiuWraZeg90}. In \cite{CrGoWi09} this result was generalized to
various subgroups of graph groups.
\section{Preliminaries}
Let $\Sigma$ be a finite alphabet. For a word $s = a_1 \cdots a_m$
($a_i \in \Sigma$) let
\begin{itemize}
\item $|s| = m$, $\mathsf{alph}(s) = \{a_1, \ldots, a_m\}$,
\item $s[i] = a_i$ for $1 \leq i \leq m$,
\item $s[i:j] = a_i \cdots a_j$ for $1 \leq i \leq j \leq m$ and
$s[i:j] = \varepsilon$ for $i > j$, and
\item $|s|_a = |\{k \mid s[k]=a\}|$ for $a \in \Sigma$.
\end{itemize}
We use $\Sigma^{-1} = \{ a^{-1} \mid a \in \Sigma\}$ to denote a
disjoint copy of $\Sigma$ and let $\Sigma^{\pm 1} = \Sigma \cup
\Sigma^{-1}$. Define $(a^{-1})^{-1} = a$; this defines an involution
${}^{-1} : \Sigma^{\pm 1} \to \Sigma^{\pm 1}$, which can be extended
to an involution on $(\Sigma^{\pm 1})^*$ by setting $(a_1 \cdots
a_n)^{-1} = a_n^{-1} \cdots a_1^{-1}$.
\subsection{Straight-line programs}
\label{S SLP}
We use straight-line programs as a succinct representation of
strings with recurring subpatterns \cite{PlRy99}. A
\textit{straight-line program (SLP) over the alphabet $\Gamma$} is a
context free grammar $\mathbb{A}=(V,\Gamma,S,P)$, where $V$ is the set of
\textit{nonterminals}, $\Gamma$ is the set of \textit{terminals},
$S\in V$ is the \textit{initial nonterminal}, and $P\subseteq V \times
(V\cup \Gamma)^*$ is the set of \textit{productions} such that (i) for
every $X\in V$ there is exactly one $\alpha \in (V \cup \Gamma)^*$
with $(X,\alpha)\in P$ and (ii) there is no cycle in the relation
$\{(X,Y)\in V\times V \mid \exists \alpha : (X,\alpha) \in P, Y
\in \mathsf{alph}(\alpha) \}$. These conditions ensure that the language
generated by the straight-line program $\mathbb{A}$ contains exactly one word
$\mathsf{val}(\mathbb{A})$. Moreover, every nonterminal $X\in V$
generates exactly one word that is denoted by $\mathsf{val}_\mathbb{A}(X)$, or
briefly $\mathsf{val}(X)$, if $\mathbb{A}$ is clear from the context. The size of
$\mathbb{A}$ is $|\mathbb{A}|= \sum_{(X,\alpha)\in P}|\alpha|$. It can be seen
easily that an SLP can be transformed in polynomial time into an
equivalent SLP in \textit{Chomsky normal form}, which means that all
productions have the form $A\rightarrow BC$ or $A\rightarrow a$ with
$A,B,C\in V$ and $a \in \Gamma$.
For an SLP $\mathbb{A}$ over $\Sigma^{\pm 1}$ (w.l.o.g. in Chomsky normal
form) we denote with $\mathbb{A}^{-1}$ the SLP that has for each terminal
rule $A\rightarrow a$ from $\mathbb{A}$ the terminal rule $A\rightarrow
a^{-1}$ and for each nonterminal rule $A\rightarrow BC$ from $\mathbb{A}$ the
nonterminal rule $A\rightarrow CB$. Clearly, $\mathsf{val}(\mathbb{A}^{-1}) =
\mathsf{val}(\mathbb{A})^{-1}$.
Let us state some simple algorithmic problems that
can be easily solved in polynomial time:
\begin{itemize}
\item Given an SLP $\mathbb{A}$, calculate $|\mathsf{val}(\mathbb{A})|$.
\item Given an SLP $\mathbb{A}$ and a number $i \in \{1, \ldots, |\mathsf{val}(\mathbb{A})|\}$, calculate
$\mathsf{val}(\mathbb{A})[i]$.
\item Given an SLP $\mathbb{A}$ and two numbers $1 \leq i \leq j \leq |\mathsf{val}(\mathbb{A})|$,
compute an SLP $\mathbb{B}$ with $\mathsf{val}(\mathbb{B}) = \mathsf{val}(\mathbb{A})[i:j]$.
\end{itemize}
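These basic operations, together with the SLP inversion described above, can be sketched in a few lines of Python. The class below is a minimal illustration, not an implementation from the literature; the dictionary encoding of productions and the uppercase-for-inverse convention for letters are our own assumptions.

```python
class SLP:
    """A straight-line program in Chomsky normal form.

    `rules` maps each nonterminal to either a single terminal character
    or a pair (B, C) of nonterminals; `start` is the initial nonterminal.
    """

    def __init__(self, rules, start):
        self.rules, self.start = rules, start
        self._len = {}  # memoised lengths |val(X)|

    def length(self, X=None):
        """|val(X)|, computed without expanding the (possibly exponential) word."""
        X = X or self.start
        if X not in self._len:
            rhs = self.rules[X]
            self._len[X] = (1 if isinstance(rhs, str)
                            else self.length(rhs[0]) + self.length(rhs[1]))
        return self._len[X]

    def at(self, i, X=None):
        """val(X)[i] (1-based), by descending the derivation tree."""
        X = X or self.start
        rhs = self.rules[X]
        if isinstance(rhs, str):
            return rhs
        B, C = rhs
        return self.at(i, B) if i <= self.length(B) else self.at(i - self.length(B), C)

    def inverse(self):
        """SLP for val(A)^{-1}: invert terminals (here a <-> A) and swap children."""
        inv = {X: (r.swapcase() if isinstance(r, str) else (r[1], r[0]))
               for X, r in self.rules.items()}
        return SLP(inv, self.start)

    def expand(self, X=None):
        """Expand val(X) explicitly -- for testing only, may be exponentially long."""
        X = X or self.start
        rhs = self.rules[X]
        return rhs if isinstance(rhs, str) else self.expand(rhs[0]) + self.expand(rhs[1])
```

For instance, the SLP with productions $X_5 \to X_4 X_4$, $X_4 \to X_3 X_3$, $X_3 \to X_1 X_2$, $X_1 \to a$, $X_2 \to b$ derives the word $(ab)^4$ of length $8$ from only five productions, and `length` and `at` answer queries about it without ever materialising the word.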
In \cite{Pla94}, Plandowski presented a polynomial time algorithm
for testing whether $\mathsf{val}(\mathbb{A}) = \mathsf{val}(\mathbb{B})$ for two given
SLPs $\mathbb{A}$ and $\mathbb{B}$. A cubic algorithm was presented by Lifshits \cite{Lif07}.
In fact, Lifshits gave an algorithm for compressed pattern
matching: given SLPs $\mathbb{A}$ and $\mathbb{B}$, is $\mathbb{A}$ a factor of $\mathbb{B}$?
The running time of his algorithm is $O(|\mathbb{A}| \cdot |\mathbb{B}|^2)$.
A \emph{composition system} $\mathbb{A} = (V,\Gamma,S,P)$ is defined
analogously to an SLP, but in addition to productions of the form
$A\to \alpha$ ($A\in V$, $\alpha\in (V\cup\Gamma)^*$) it may also
contain productions of the form $A \to B[i:j]$ for $B\in V$ and $i,j
\in \mathbb{N}$ \cite{GaKaPlRy96}. For such a production we define
$\mathsf{val}_{\mathbb{A}}(A) = \mathsf{val}_{\mathbb{A}}(B)[i:j]$.
The size of a production $A \to B[i:j]$ is
$\lceil\log(i)\rceil+\lceil\log(j)\rceil$. As for SLPs we define
$\mathsf{val}(\mathbb{A}) = \mathsf{val}_{\mathbb{A}}(S)$. In \cite{Hag00}, Hagenah presented a
polynomial time algorithm, which transforms a given composition system
$\mathbb{A}$ into an SLP $\mathbb{B}$ with $\mathsf{val}(\mathbb{A}) = \mathsf{val}(\mathbb{B})$.
Composition systems were also heavily used in \cite{LoSchl07,Schl06}
in order to solve compressed word problems efficiently.
\subsection{Trace monoids and graph groups}
We introduce some notions from trace theory. For a thorough
introduction see \cite{DieRoz95}. An \emph{independence alphabet} is a
finite undirected graph $(\Sigma,I)$ without loops. Then $I\subseteq
\Sigma \times \Sigma$ is an irreflexive and symmetric relation. The
complementary graph $(\Sigma,D)$ with $D=
(\Sigma\times\Sigma)\setminus I$ is called a \emph{dependence
alphabet}. The \emph{trace monoid} $\mathbb{M}(\Sigma,I)$ is defined as the
quotient $\mathbb{M}(\Sigma,I)=\Sigma^*/\{ab=ba\mid (a,b)\in I\}$ with
concatenation as operation and the empty word as the neutral
element. This monoid is cancellative and its elements are called
\emph{traces}. The trace represented by the word $s\in \Sigma^*$ is
denoted by $[s]_I$. To ease notation, we write
$u\in\Sigma^*$ also for the trace $[u]_I$. For $a\in \Sigma$ let
$I(a)=\{b\in\Sigma\mid (a,b)\in I\}$ be the letters that commute with $a$
and $D(a)=\{b\in \Sigma\mid (a,b)\in D\}$ be the letters that are
dependent on $a$. For traces $u,v\in \mathbb{M}(\Sigma,I)$ we denote with
$uIv$ the fact that $\mathsf{alph}(u)\times \mathsf{alph}(v)\subseteq I$. For
$\Gamma\subseteq \Sigma$ we say that $\Gamma$ is \emph{connected} if the
subgraph of $(\Sigma,D)$ induced by $\Gamma$ is connected. For a trace
$u$ we denote with $\max(u)$ the set of possible last letters of $u$,
i.e., $\max(u) = \{a\mid u=va\text{ for } a\in\Sigma, v\in
\mathbb{M}(\Sigma, I)\}$. Analogously we define $\min(u)$ to be the set of
possible first letters, i.e., $\min(u) = \{a\mid u=av\text{ for }
a\in\Sigma,v\in \mathbb{M}(\Sigma, I)\}$.
A convenient representation of traces is given by \emph{dependence graphs},
which are node-labeled directed acyclic graphs. For a word
$w \in\Sigma^*$ the dependence graph $D_w$ has vertex set
$\{1,\dots,|w|\}$ where the node $i$ is labeled with $w[i]$. There is an
edge from vertex $i$ to $j$ if and only if $i<j$ and $(w[i],w[j])\in
D$. It is easy to see that for two words $w,w'\in \Sigma^*$ we have
$[w]_I = [w']_I$ if and only if $D_w$ and $D_{w'}$ are isomorphic. Hence, we
can speak of \emph{the} dependence graph of a trace.
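In practice one need not test isomorphism of dependence graphs: by the classical projection criterion (see \cite{DieRoz95}), two words represent the same trace if and only if they have the same letter multiplicities and agree on every projection onto a pair of dependent letters. A small Python sketch, under our own assumption that traces are represented as strings and $D$ as a set of letter pairs:

```python
from collections import Counter

def same_trace(u, v, D):
    """Test [u]_I = [v]_I via the projection criterion: equal letter
    counts, and equal projections onto every dependent pair {a, b}."""
    if Counter(u) != Counter(v):
        return False
    letters = set(u)
    for a in letters:
        for b in letters:
            if (a, b) in D:  # D is the dependence relation
                pu = [c for c in u if c in (a, b)]
                pv = [c for c in v if c in (a, b)]
                if pu != pv:
                    return False
    return True
```

For example, with $\Sigma = \{a,b\}$, the words $ab$ and $ba$ denote the same trace exactly when $(a,b) \notin D$.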
For background in combinatorial group theory see \cite{LySch77}. The
\emph{free group} $F(\Sigma)$ generated by $\Sigma$ can be defined as
the quotient monoid
$$
F(\Sigma) = (\Sigma^{\pm 1})^*/\{ aa^{-1} = \varepsilon \mid a \in
\Sigma^{\pm 1} \}.
$$
For an independence alphabet $(\Sigma, I)$ the {\em graph group}
$\mathbb{G}(\Sigma,I)$ is defined as the quotient group
$$
\mathbb{G}(\Sigma,I) = F(\Sigma)/\{ab = ba \mid (a,b) \in I\}.
$$
From the independence alphabet $(\Sigma,I)$
we derive the independence alphabet
$$(\Sigma^{\pm 1}, \{(a^{\varepsilon_1},b^{\varepsilon_2})\mid (a,b)\in
I,\; \varepsilon_1,\varepsilon_2\in\{-1,1\}\}).$$
Abusing notation, we denote the independence relation
of this alphabet again with $I$.
Note that $(a,b)\in I$ implies $a^{-1}b = ba^{-1}$ in
$\mathbb{G}(\Sigma,I)$. Thus, the graph group $\mathbb{G}(\Sigma,I)$
can be also defined as the quotient
$$
\mathbb{G}(\Sigma,I) = \mathbb{M}(\Sigma^{\pm 1},I)/\{ aa^{-1} =
\varepsilon \mid a \in \Sigma^{\pm 1} \}.
$$
Graph groups are also known as right-angled Artin groups and free
partially commutative groups.
\subsection{(Outer) automorphism groups}
The \emph{automorphism group} $\mathsf{Aut}(G)$ of a group $G$ is the set of
all bijective homomorphisms from $G$ to itself with composition as
operation and the identity mapping as the neutral element. An
automorphism $\varphi$ is called \emph{inner} if there is a group
element $x\in G$ such that $\varphi(y)=xyx^{-1}$ for all $y\in G$.
The set of all inner automorphisms
for a group $G$ forms the \emph{inner automorphism group} of $G$
denoted by $\mathsf{Inn}(G)$. This is easily seen to be a normal subgroup of
$\mathsf{Aut}(G)$ and the quotient group $\mathsf{Out}(G)=\mathsf{Aut}(G)/\mathsf{Inn}(G)$
is called the \emph{outer automorphism group} of $G$.
Assume that $\mathsf{Aut}(G)$ is finitely generated\footnote{In general, this
need not be the case, even if $G$ is finitely generated.} and let
$\Psi=\{\psi_1,\dots,\psi_k\}$ be a monoid generating set for
$\mathsf{Aut}(G)$. Then $\Psi$ also generates $\mathsf{Out}(G)$ where we identify
$\psi_i$ with the coset $\psi_i\,\mathsf{Inn}(G)$ for $i\in\{1,\dots,k\}$.
Then the \emph{word problem} for the outer automorphism group can be
viewed as the following decision problem:
\noindent
INPUT: A word $w\in\Psi^*$.\\
QUESTION: Does $w=1$ in $\mathsf{Out}(G)$?
\noindent
Since an automorphism belongs to the same coset (with respect to $\mathsf{Inn}(G)$) as
the identity if and only if it is inner, we can rephrase the word
problem for $\mathsf{Out}(G)$ as follows:
\noindent
INPUT: A word $w\in\Psi^*$. \\
QUESTION: Does $w$ represent an element of $\mathsf{Inn}(G)$ in $\mathsf{Aut}(G)$?
\noindent
Building on results from \cite{Ser89}, Laurence has shown in
\cite{Lau95} that automorphism groups of graph groups are finitely
generated. Recently, Day \cite{Day09} proved that automorphism groups
of graph groups are in fact finitely presented. Further results on
(outer) automorphism groups of graph groups can be found in
\cite{ChCrVo07,ChoVog09}. The main purpose of this paper is to give a
polynomial time algorithm for the word problem for
$\mathsf{Out}(\mathbb{G}(\Sigma,I))$.
\section{Main results}\label{sec:main-results}
In this section we present the main results of this paper; their
proofs occupy the rest of the paper. In order to
solve the word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ in polynomial
time, we have to deal with compressed conjugacy problems in
$\mathbb{G}(\Sigma,I)$. Recall that two elements $g$ and $h$ of a
group $G$ are {\em conjugate} if and only if there exists $x \in G$
such that $g = x h x^{-1}$. The classical {\em conjugacy problem} for
$G$ asks whether two given elements of $G$ are conjugate. We will
consider a compressed variant of this problem in $\mathbb{G}(\Sigma,I)$,
which we call the \emph{compressed conjugacy problem for
$\mathbb{G}(\Sigma,I)$}, $\mathsf{CCP}(\Sigma,I)$ for short:
\noindent
INPUT: SLPs $\mathbb{A}$ and $\mathbb{B}$ over $\Sigma^{\pm 1}$.\\
QUESTION: Are $\mathsf{val}(\mathbb{A})$ and $\mathsf{val}(\mathbb{B})$ conjugate in
$\mathbb{G}(\Sigma,I)$?
\begin{theorem}\label{decidesccp}
Let $(\Sigma,I)$ be a fixed independence alphabet. Then,
$\mathsf{CCP}(\Sigma,I)$ can be solved in polynomial time.
\end{theorem}
We will prove Theorem~\ref{decidesccp} in Section~\ref{sec:CC}.
In order to solve the word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ in
polynomial time, Theorem~\ref{decidesccp} is not sufficient. We need
an extension of $\mathsf{CCP}(\Sigma,I)$ to several pairs of input SLPs. Let
us call this problem the {\em simultaneous compressed conjugacy
problem} for $\mathbb{G}(\Sigma,I)$:
\noindent
INPUT: SLPs $\mathbb{A}_1,\mathbb{B}_1,\dots,\mathbb{A}_n,\mathbb{B}_n$ over $\Sigma^{\pm 1}$.\\
QUESTION: Does there exist $x\in{(\Sigma^{\pm 1})}^*$ such that
$\mathsf{val}(\mathbb{A}_i)=x\,\mathsf{val}(\mathbb{B}_i)x^{-1}$ in $\mathbb{G}(\Sigma,I)$ for all
$i\in\{1,\dots,n\}$?
\noindent
The simultaneous (non-compressed) conjugacy problem also appears in
connection with group-based cryptography \cite{MyShUs08}.
Unfortunately, we do not know whether the simultaneous compressed
conjugacy problem can be solved in polynomial time. However, in order to
deal with the word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$, a
restriction of this problem suffices, where the SLPs
$\mathbb{B}_1,\dots,\mathbb{B}_n$ from the simultaneous compressed conjugacy problem
are the letters from $\Sigma$. We call this problem the {\em
restricted simultaneous compressed conjugacy problem}, briefly
$\mathsf{RSCCP}(\Sigma,I)$:
\noindent
INPUT: SLPs $\mathbb{A}_a$ ($a \in \Sigma$) over $\Sigma^{\pm 1}$.\\
QUESTION: Does there exist $x\in{(\Sigma^{\pm 1})}^*$ with
$\mathsf{val}(\mathbb{A}_a)=xax^{-1}$ in $\mathbb{G}(\Sigma,I)$ for all $a\in\Sigma$?
\noindent
An $x$ such that $\mathsf{val}(\mathbb{A}_a)=xax^{-1}$ in $\mathbb{G}(\Sigma,I)$ for
all $a\in\Sigma$ is called a {\em solution} of the
$\mathsf{RSCCP}(\Sigma,I)$-instance. The following theorem will be shown in
Section~\ref{sec:RSSC}.
\begin{theorem}\label{decide_ccp}
Let $(\Sigma,I)$ be a fixed independence alphabet. Then,
$\mathsf{RSCCP}(\Sigma,I)$ can be solved in polynomial time. Moreover, in
case a solution exists, one can compute an SLP for a solution in
polynomial time.
\end{theorem}
Using Theorem~\ref{decide_ccp}, it is straightforward to decide the
word problem for $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ in polynomial time.
\begin{theorem} \label{thm:outer}
Let $(\Sigma,I)$ be a fixed independence alphabet. Then, the word
problem for the group $\mathsf{Out}(\mathbb{G}(\Sigma,I))$ can be solved in
polynomial time.
\end{theorem}
\begin{proof}
Fix a finite monoid generating set $\Phi$ for
$\mathsf{Aut}(\mathbb{G}(\Sigma,I))$. Let $\varphi=\varphi_1\cdots
\varphi_n$ with $\varphi_1,\ldots, \varphi_n \in \Phi$ be the
input. By \cite{Schl06} we can compute in polynomial time SLPs
$\mathbb{A}_a$ ($a \in \Sigma$) over $\Sigma^{\pm 1}$ with
$\mathsf{val}(\mathbb{A}_a)=\varphi(a)$ in $\mathbb{G}(\Sigma,I)$ for all $a \in
\Sigma$. The automorphism $\varphi$ is inner if and only if there
exists $x$ such that $\mathsf{val}(\mathbb{A}_a)=x a x^{-1}$ in
$\mathbb{G}(\Sigma,I)$ for all $a \in \Sigma$. This can be decided
in polynomial time by Theorem~\ref{decide_ccp}. \qed
\end{proof}
It is important in Theorems~\ref{decidesccp}--\ref{thm:outer}
that we fix the independence alphabet $(\Sigma,I)$. It is open
whether these results also hold if $(\Sigma,I)$ is part of
the input.
\section{Simple facts for traces}
In this section, we state some simple facts on the prefix order of
trace monoids, which will be needed later. A trace $u$ is said to be
a \emph{prefix} of a trace $w$ if there exists a trace $v$ such that
$uv = w$ and we denote this fact by $u\preceq w$. The prefixes of a
trace $w$ correspond to the downward-closed node sets of the
dependence graph of $w$. Analogously a trace $v$ is a \emph{suffix} of
a trace $w$ if there is a trace $u$ such that $uv=w$. For two traces
$u,v \in \mathbb{M}(\Sigma,I)$, the \emph{infimum} $u \sqcap v$ is the
largest trace $s$ with respect to $\preceq$ such that $s \preceq u$ and $s
\preceq v$; it always exists \cite{CoMeZi93}. With $u \setminus v$ we
denote the unique trace $t$ such that $u = (u \sqcap v) t$; uniqueness
follows from the fact that $\mathbb{M}(\Sigma,I)$ is cancellative.
Note that $u \setminus v = u \setminus (u \sqcap v)$.
The \emph{supremum} $u \sqcup v$ of two traces $u,v \in
\mathbb{M}(\Sigma,I)$ is the smallest trace $s$ with respect to $\preceq$ such
that $u\preceq s$ and $v\preceq s$ if any such trace exists. The
following result can be found in \cite{CoMeZi93}:
\begin{lemma}[\cite{CoMeZi93}] \label{lemma-supremum-exists} The trace
$u\sqcup v$ exists if and only if $(u\setminus v)\, I\, ( v\setminus
u)$, in which case we have $u\sqcup v=(u\sqcap v)\,(u\setminus v)\,
( v\setminus u)$.
\end{lemma}
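On explicit (uncompressed) traces, $u \sqcap v$, $u \setminus v$ and the existence test of Lemma~\ref{lemma-supremum-exists} can be computed by greedily peeling common minimal letters; that this greedy peeling is correct relies on the lattice properties of the prefix order from \cite{CoMeZi93}. The Python sketch below is a naive illustration of the definitions (traces as strings, the reflexive dependence relation $D$ as a set of pairs), not the polynomial-time compressed algorithms discussed later; note that it returns one word representative of $u \sqcap v$.

```python
def minimal_positions(w, D):
    """Positions i such that w[i] is a minimal letter of the trace [w]_I,
    i.e. no earlier letter of w depends on w[i]."""
    return [i for i, c in enumerate(w)
            if all((w[j], c) not in D for j in range(i))]

def remove_minimal(w, a, D):
    """Delete one minimal occurrence of the letter a from [w]_I."""
    for i in minimal_positions(w, D):
        if w[i] == a:
            return w[:i] + w[i + 1:]
    raise ValueError(f"{a!r} is not a minimal letter of {w!r}")

def trace_inf(u, v, D):
    """Return (u ⊓ v, u \\ v, v \\ u) by peeling common minimal letters."""
    p = []
    while True:
        mu = {u[i] for i in minimal_positions(u, D)}
        mv = {v[i] for i in minimal_positions(v, D)}
        common = mu & mv
        if not common:
            return ''.join(p), u, v
        a = min(common)          # any common minimal letter works
        p.append(a)
        u, v = remove_minimal(u, a, D), remove_minimal(v, a, D)

def sup_exists(u, v, D):
    """Existence test of the lemma: u ⊔ v exists iff (u \\ v) I (v \\ u)."""
    _, du, dv = trace_inf(u, v, D)
    return all((a, b) not in D for a in set(du) for b in set(dv))
```

With $D$ making only $a$ and $b$ dependent, $ab$ and $ba$ are distinct traces with infimum $\varepsilon$ and no supremum, while $abc$ and $acb$ coincide as traces.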
We can define the supremum of several traces $w_1,\ldots, w_n$
analogously by induction: $w_1\sqcup\dots \sqcup w_n=(w_1\sqcup \dots
\sqcup w_{n-1})\sqcup w_n$. We mention a necessary and sufficient
condition for the existence of the supremum of several traces that
follows directly from the definition.
\begin{lemma}\label{trace_supremum}
Let $(\Sigma,I)$ be an independence alphabet and
$u_1,\dots,u_r\in\mathbb{M}(\Sigma,I)$. If $u=u_1\sqcup\dots\sqcup
u_{r-1}$ exists, then $s=u_1\sqcup\dots\sqcup u_r$ exists if and
only if $\left( u\setminus u_r \right)\;I\;\left( u_r\setminus
u\right)$. In this case $s= u\;\left( u_r\setminus u\right)$.
\end{lemma}
\begin{example} \label{Ex traces1} We consider the following
independence alphabet $(\Sigma,I)$:
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(18,9)(0,-5)
\gasset{Nadjust=wh,Nadjustdist=0.5,Nfill=n,Nframe=n,AHnb=0,linewidth=.1}
\node(a)(0,3){$c$}
\node(b)(9,3){$a$}
\node(d)(4.5,-3){$e$}
\node(c)(13.5,-3){$d$}
\node(e)(22.5,-3){$b$}
\drawedge(a,b){}
\drawedge(b,c){}
\drawedge(d,c){}
\drawedge(e,c){}
\drawedge(b,d){}
\end{picture}
\end{center}
Then the corresponding dependence alphabet is:
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(18,9)(0,-5)
\gasset{Nadjust=wh,Nadjustdist=0.5,Nfill=n,Nframe=n,AHnb=0,linewidth=.1}
\node(b)(0,3){$a$}
\node(d)(9,3){$e$}
\node(e)(4.5,-3){$b$}
\node(a)(13.5,-3){$c$}
\node(c)(22.5,-3){$d$}
\drawedge(e,b){}
\drawedge(e,a){}
\drawedge(a,c){}
\drawedge(a,d){}
\drawedge(e,d){}
\end{picture}
\end{center}
We consider the words $u=aeadbacdd$ and $v=eaabdcaeb$. Then the
dependence graphs $D_u$ of $u$ and $D_v$ of $v$ look as follows
(where we label the vertices $i$ with the letter $u[i]$
(resp. $v[i]$)):
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(78,15)(0,-10)
\gasset{Nadjust=wh,Nadjustdist=0.5,Nfill=n,Nframe=n,AHnb=1,linewidth=.1}
\put(-10,-3){$D_u$}
\node(b1)(0,3){$a$}
\node(d1)(0,-3){$e$}
\node(c1)(0,-9){$d$}
\node(b2)(9,3){$a$}
\node(e1)(13.5,-3){$b$}
\node(b3)(20.5,-3){$a$}
\node(a1)(18,-9){$c$}
\node(c2)(24.5,-9){$d$}
\node(c3)(31,-9){$d$}
\drawedge(b1,b2){}
\drawedge(d1,e1){}
\drawedge(b2,e1){}
\drawedge(c1,a1){}
\drawedge(e1,a1){}
\drawedge(e1,b3){}
\drawedge(a1,c2){}
\drawedge(c2,c3){}
\put(40,-3){$D_v$}
\node(b1')(50,3){$a$}
\node(d1')(50,-3){$e$}
\node(c1')(50,-9){$d$}
\node(b2')(59,3){$a$}
\node(e1')(63.5,-3){$b$}
\node(b3')(71.5,-3){$a$}
\node(a1')(68,-9){$c$}
\node(e2')(79.5,-3){$b$}
\node(d2')(75.5,-9){$e$}
\drawedge(b1',b2'){}
\drawedge(d1',e1'){}
\drawedge(b2',e1'){}
\drawedge(c1',a1'){}
\drawedge(e1',a1'){}
\drawedge(e1',b3'){}
\drawedge(b3',e2'){}
\drawedge(a1',d2'){}
\drawedge(d2',e2'){}
\end{picture}
\end{center}
Then we have $u\sqcap v=aeadbac=:p$ and its dependence graph is:
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(28,15)(0,-10)
\gasset{Nadjust=wh,Nadjustdist=0.5,Nfill=n,Nframe=n,AHnb=1,linewidth=.1}
\put(-15,-3){$D_p$}
\node(b1)(0,3){$a$}
\node(d1)(0,-3){$e$}
\node(c1)(0,-9){$d$}
\node(b2)(9,3){$a$}
\node(e1)(13.5,-3){$b$}
\node(b3)(20.5,-3){$a$}
\node(a1)(18,-9){$c$}
\drawedge(b1,b2){}
\drawedge(d1,e1){}
\drawedge(b2,e1){}
\drawedge(c1,a1){}
\drawedge(e1,a1){}
\drawedge(e1,b3){}
\end{picture}
\end{center}
Since $u\setminus p= dd$ and $v\setminus p=eb$ we have $(u\setminus
p) I (v\setminus p)$ and hence the supremum
$s=u\sqcup v= aeadbacddeb$ is defined. The dependence graph for $s$ is:\\
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(28,15)(0,-17)
\gasset{Nadjust=wh,Nadjustdist=0.5,Nfill=n,Nframe=n,AHnb=1,linewidth=.1}
\put(-15,-3){$D_s$}
\node(b1')(0,3){$a$}
\node(d1')(0,-3){$e$}
\node(c1')(0,-9){$d$}
\node(b2')(9,3){$a$}
\node(e1')(13.5,-3){$b$}
\node(b3')(21.5,-3){$a$}
\node(a1')(18,-9){$c$}
\node(e2')(29.5,-3){$b$}
\node(d2')(25.5,-9){$e$}
\node(c2')(21.5,-15){$d$}
\node(c3')(27.5,-15){$d$}
\drawedge(b1',b2'){}
\drawedge(d1',e1'){}
\drawedge(b2',e1'){}
\drawedge(c1',a1'){}
\drawedge(e1',a1'){}
\drawedge(e1',b3'){}
\drawedge(b3',e2'){}
\drawedge(a1',d2'){}
\drawedge(d2',e2'){}
\drawedge(a1',c2'){}
\drawedge(c2',c3'){}
\end{picture}
\end{center}
\end{example}
The following lemma is a basic fact about traces; see, for example,
\cite{DieRoz95}:
\begin{lemma}[Levi's Lemma]\label{levi_lemma}
Let $u_1,u_2,v_1,v_2$ be traces such that $u_1u_2=v_1v_2$. Then
there exist traces $x,y_1,y_2,z$ such that $y_1Iy_2$ and $u_1=xy_1$,
$u_2=y_2z$, $v_1=xy_2$, and $v_2=y_1z$.
\end{lemma}
We use Levi's Lemma to prove the following statement:
\begin{lemma}\label{unique_decomposition}
Let $a\in \Sigma$. The decomposition of a trace $t\in\mathbb{M}(\Sigma,I)$
as $t=u_1u_2$ with $u_2Ia$ and $|u_2|$ maximal is unique in
$\mathbb{M}(\Sigma,I)$.
\end{lemma}
\begin{proof}
Let $u_1u_2=t=v_1v_2$ be such that $u_2Ia$, $v_2Ia$ and $|u_2|$ and
$|v_2|$ are both maximal (hence $|u_2|=|v_2|$). By Levi's Lemma
there are traces $x,y_1,y_2,z$ such that $y_1Iy_2$ and $u_1=xy_1$,
$u_2=y_2z$, $v_1=xy_2$, and $v_2=y_1z$. From $u_2Ia$ and $v_2Ia$ we
get $y_1Ia$ and $y_2Ia$. Maximality of $|u_2|=|v_2|$ and
$xy_1u_2=t=xy_2v_2$ implies $y_1=y_2=\varepsilon$. Hence $u_1=v_1$
and $u_2=v_2$. \qed
\end{proof}
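The decomposition of Lemma~\ref{unique_decomposition} can be computed greedily on a word representative: scanning from the right, an occurrence joins $u_2$ exactly when its letter is independent of $a$ and of every letter already forced into $u_1$ to its right. A hypothetical Python sketch under the same string-based trace representation as before ($D$ a reflexive set of pairs):

```python
def split_max_indep_suffix(w, a, D):
    """Write [w]_I = u1·u2 with alph(u2) ⊆ I(a) and |u2| maximal."""
    u1, u2, blocked = [], [], set()
    for c in reversed(w):
        # c can move into the suffix u2 iff it commutes with a and with
        # every letter already forced into u1 to its right
        if (c, a) not in D and all((c, b) not in D for b in blocked):
            u2.append(c)
        else:
            blocked.add(c)
            u1.append(c)
    return ''.join(reversed(u1)), ''.join(reversed(u2))
```

For instance, if only $a$ and $b$ are dependent, then the trace $cab$ equals $ab \cdot c$ with $c\,I\,a$, and no longer suffix independent of $a$ exists.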
A \emph{trace rewriting system} $R$ over $\mathbb{M}(\Sigma,I)$ is
just a finite subset of $\mathbb{M}(\Sigma,I) \times
\mathbb{M}(\Sigma,I)$ \cite{Die90lncs}. We can define the
\emph{one-step rewrite relation} $\to_R \;\subseteq
\mathbb{M}(\Sigma,I) \times \mathbb{M}(\Sigma,I)$ by: $x \to_R y$ if
and only if there are $u,v \in \mathbb{M}(\Sigma,I)$ and $(\ell,r) \in
R$ such that $x = u\ell v$ and $y = u r v$. With $\xrightarrow{*}_R$
we denote the reflexive transitive closure of $\rightarrow_R$. The
notion of a confluent and terminating trace rewriting system is
defined as for other types of rewriting systems \cite{BoOt93}: A trace
rewriting system $R$ is called \emph{confluent} if for all $u,v,v'\in
\mathbb{M}(\Sigma,I)$ it holds that $u \xrightarrow{*}_R v$ and
$u\xrightarrow{*}_R v'$ imply that there is a trace $w$ with $v
\xrightarrow{*}_R w$ and $v'\xrightarrow{*}_R w$. It is called
\emph{terminating} if there does not exist an infinite chain
$u_0\rightarrow_R u_1 \rightarrow_R u_2 \cdots$. A trace $u$ is
\emph{$R$-irreducible} if no trace $v$ with $u \to_R v$ exists. The
set of all $R$-irreducible traces is denoted
with $\mathsf{IRR}(R)$. If $R$ is terminating and confluent, then for every
trace $u$, there exists a unique \emph{normal form} $\mathsf{NF}_R(u) \in
\mathsf{IRR}(R)$ such that $u \xrightarrow{*}_R \mathsf{NF}_R(u)$.
Let us now work in the trace monoid $\mathbb{M}(\Sigma^{\pm 1}, I)$.
For a trace $u=[a_1\cdots a_n]_I\in\mathbb{M}(\Sigma^{\pm 1},I)$
we denote with $u^{-1}$ the trace
$u^{-1}=[a_n^{-1}\cdots a_1^{-1}]_I$. It is easy to see that this
definition is independent of the chosen representative $a_1\cdots a_n$
of the trace $u$. It follows that we have
$[\mathsf{val}(\mathbb{A})]_I^{-1}=[\mathsf{val}(\mathbb{A}^{-1})]_I$ for an SLP $\mathbb{A}$. For the
rest of the paper, we fix the trace rewriting system
$$
R=\{([aa^{-1}]_I,[\varepsilon]_I)\mid a\in \Sigma^{\pm 1}\}
$$
over the trace monoid $\mathbb{M}(\Sigma^{\pm 1},I)$. Since $R$ is
length-reducing, $R$ is terminating. By \cite{Die90lncs,Wra88}, $R$
is also confluent. For traces $u,v \in \mathbb{M}(\Sigma^{\pm 1},I)$ we have
$u=v$ in $\mathbb{G}(\Sigma,I)$ if and only if $\mathsf{NF}_R(u)=\mathsf{NF}_R(v)$. Using
these facts, it was shown in \cite{Die90lncs,Wra88} that the word
problem for $\mathbb{G}(\Sigma,I)$ can be solved in linear time (on the RAM
model).
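As a naive illustration (quadratic rather than the linear time cited above), the rewriting with $R$ can be performed directly on words over $\Sigma^{\pm 1}$: a pair $a\,a^{-1}$ cancels whenever every letter between the two occurrences is independent of $a$. In the sketch below, which is our own illustration rather than the algorithm of \cite{Die90lncs,Wra88}, uppercase letters stand for inverses and $I$ is a set of pairs of base letters; confluence of $R$ guarantees that the cancellation order does not matter.

```python
def is_indep(x, y, I):
    """Independence of letters from Sigma^{±1}, via their base letters."""
    a, b = x.lower(), y.lower()
    return (a, b) in I or (b, a) in I

def nf(word, I):
    """A word representing NF_R([word]_I) (uppercase = inverse of lowercase)."""
    w = list(word)
    again = True
    while again:
        again = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == w[i].swapcase():
                    del w[j], w[i]   # cancel a·a^{-1}; delete j first (j > i)
                    again = True
                    break
                if not is_indep(w[i], w[j], I):
                    break            # w[i] cannot be moved past w[j]
            if again:
                break
    return ''.join(w)
```

Two words are then equal in $\mathbb{G}(\Sigma,I)$ iff their normal forms represent the same trace; in particular, a word represents the identity iff `nf` returns the empty word.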
\section{Algorithms for compressed traces}
In this section, we will recall some results from \cite{LoSchl07}
concerning traces, which are represented by SLPs. For SLPs $\mathbb{A}$ and
$\mathbb{B}$ over $\Sigma^{\pm 1}$ we say that $\mathbb{B}$ is an
\emph{$R$-reduction} of $\mathbb{A}$ if $[\mathsf{val}(\mathbb{B})]_I=\mathbb{N}F_R([\mathsf{val}(\mathbb{A})]_I)$.
We will need the following theorem.
\begin{theorem}[\cite{LoSchl07}]\label{R-reduction}
Let $\mathbb{A}$ be an SLP over $\Sigma^{\pm 1}$ representing a trace in
$\mathbb{M}(\Sigma^{\pm 1},I)$. We can compute an $R$-reduction for $\mathbb{A}$
in polynomial time.
\end{theorem}
\begin{corollary}\label{cwp_graphgroup}
The following decision problem can be solved in polynomial time.
\noindent
INPUT: An SLP $\mathbb{A}$ over $\Sigma^{\pm 1}$.\\
QUESTION: $\mathbb{N}F_R([\mathsf{val}(\mathbb{A})]_I)=[\varepsilon]_I$?
\end{corollary}
Note that this is equivalent to a polynomial time solution of the
compressed word problem for graph groups.
\begin{theorem}[\cite{LoSchl07}] \label{theo-compressed-inf}
For given SLPs $\mathbb{A}_0$ and $\mathbb{A}_1$ over $\Sigma^{\pm 1}$, we can compute in
polynomial time SLPs $\mathbb{P}$, $\mathbb{D}_0$, $\mathbb{D}_1$ with
$[\mathsf{val}(\mathbb{P})]_I=[\mathsf{val}(\mathbb{A}_0)]_I\sqcap[\mathsf{val}(\mathbb{A}_1)]_I$ and
$[\mathsf{val}(\mathbb{D}_i)]_I=[\mathsf{val}(\mathbb{A}_i)]_I\setminus [\mathsf{val}(\mathbb{A}_{1-i})]_I$ ($i
\in \{0,1\}$).
\end{theorem}
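On uncompressed words, the infimum and the two differences of Theorem~\ref{theo-compressed-inf} can be computed by greedily peeling common minimal letters: a letter belongs to the common prefix $u\sqcap v$ iff it is minimal in both remainders. The sketch below assumes the same hypothetical path-shaped alphabet as before, and reads $u\setminus v$ as the remainder with $u=(u\sqcap v)(u\setminus v)$ (an assumption about the notation, which is defined earlier in the paper).

```python
I = {frozenset("ac"), frozenset("ad"), frozenset("bd")}  # hypothetical alphabet

def indep(x, y):
    bx, by = x.lower(), y.lower()
    return bx != by and frozenset((bx, by)) in I

def minima(w):
    """Positions of the minimal letters of the trace [w]_I."""
    return [i for i, x in enumerate(w)
            if all(indep(x, w[k]) for k in range(i))]

def inf_diff(u, v):
    """Return (p, du, dv) with p a representative of u inf v, u = p*du and
    v = p*dv in M(Sigma^{+-1}, I): peel letters minimal in both remainders."""
    u, v, p = list(u), list(v), []
    matched = True
    while matched:
        matched = False
        for i in minima(u):
            for j in minima(v):
                if u[i] == v[j]:
                    p.append(u[i])
                    del u[i]
                    del v[j]
                    matched = True
                    break
            if matched:
                break
    return "".join(p), "".join(u), "".join(v)
```

For the normal form of Example~\ref{exacore} and its inverse, this recovers the prefix $c^{-1}a^{-1}b$ stated there.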
An immediate corollary of Theorem~\ref{theo-compressed-inf} and
Lemma~\ref{lemma-supremum-exists} is:
\begin{corollary} \label{theo-compressed-sup} For given SLPs $\mathbb{A}_0$
and $\mathbb{A}_1$ over $\Sigma^{\pm 1}$, we can check in polynomial time, whether
$[\mathsf{val}(\mathbb{A}_0)]_I\sqcup[\mathsf{val}(\mathbb{A}_1)]_I$ exists, and in case it
exists, we can compute in polynomial time an SLP $\mathbb{S}$ with
$[\mathsf{val}(\mathbb{S})]_I = [\mathsf{val}(\mathbb{A}_0)]_I\sqcup[\mathsf{val}(\mathbb{A}_1)]_I$.
\end{corollary}
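An uncompressed sketch of the supremum check. It assumes the characterization presumably supplied by Lemma~\ref{lemma-supremum-exists} (proved earlier in the paper, outside this excerpt): $u\sqcup v$ exists iff the differences $u\setminus v$ and $v\setminus u$ are independent, in which case $u\sqcup v=(u\sqcap v)(u\setminus v)(v\setminus u)$. Alphabet as in the previous sketch (hypothetical).

```python
I = {frozenset("ac"), frozenset("ad"), frozenset("bd")}  # hypothetical alphabet

def indep(x, y):
    bx, by = x.lower(), y.lower()
    return bx != by and frozenset((bx, by)) in I

def minima(w):
    """Positions of the minimal letters of the trace [w]_I."""
    return [i for i, x in enumerate(w)
            if all(indep(x, w[k]) for k in range(i))]

def inf_diff(u, v):
    """(u inf v, u \\ v, v \\ u): peel common minimal letters greedily."""
    u, v, p, matched = list(u), list(v), [], True
    while matched:
        matched = False
        for i in minima(u):
            for j in minima(v):
                if u[i] == v[j]:
                    p.append(u[i]); del u[i]; del v[j]; matched = True; break
            if matched:
                break
    return "".join(p), "".join(u), "".join(v)

def sup(u, v):
    """u sup v, or None: assumed to exist iff the two differences are
    independent, and then equal to (u inf v)(u\\v)(v\\u)."""
    p, du, dv = inf_diff(u, v)
    if all(indep(x, y) for x in set(du) for y in set(dv)):
        return p + du + dv
    return None
```

For instance, with $a,b$ dependent the traces $[a]$ and $[b]$ have no common upper bound, while independent letters do.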
Lemma~\ref{trace_supremum} and Corollary~\ref{theo-compressed-sup}
imply the following corollary.
\begin{corollary}\label{decide_compressed_supremum}
Let $r$ be a fixed constant. For given SLPs $\mathbb{V}_1,\dots,\mathbb{V}_r$
over $\Sigma^{\pm 1}$, we can decide in polynomial time whether
$[\mathsf{val}(\mathbb{V}_1)]_I\sqcup\dots\sqcup[\mathsf{val}(\mathbb{V}_r)]_I$ exists, and in
case it exists we can compute in polynomial time an SLP $\mathbb{S}$ with
$[\mathsf{val}(\mathbb{S})]_I=[\mathsf{val}(\mathbb{V}_1)]_I\sqcup\dots\sqcup[\mathsf{val}(\mathbb{V}_r)]_I$.
\end{corollary}
It is important that we fix the number $r$ of SLPs in
Corollary~\ref{decide_compressed_supremum}: each application
of Corollary~\ref{theo-compressed-sup} may increase the size of the SLP
polynomially. Hence, a non-fixed number of applications might
lead to an exponential blow-up.
\section{Double $a$-cones}
The definition of the problem $\mathsf{RSCCP}(\Sigma,I)$
in Section~\ref{sec:main-results} motivates
the following definition: A \emph{double $a$-cone} for $a\in\Sigma^{\pm 1}$ is an
$R$-irreducible trace of the form $uau^{-1}$ with $u\in
\mathbb{M}(\Sigma^{\pm 1},I)$. In this section, we will prove several
results on double $a$-cones, which will be needed later for deciding
$\mathsf{RSCCP}(\Sigma,I)$ in polynomial time.
\begin{lemma}\label{char_cone}
A trace $uau^{-1}$ is a double $a$-cone if and only if the following
conditions hold:
\begin{enumerate}[(1)]
\item $u \in \mathsf{IRR}(R)$
\item $\max(u)\cap (\{a,a^{-1}\}\cup I(a)) =\emptyset$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $v=uau^{-1}$ be a double $a$-cone.
Since $v \in \mathsf{IRR}(R)$, also $u \in \mathsf{IRR}(R)$.
If $a^\varepsilon\in \max(u)$ for
$\varepsilon\in\{1,-1\}$ then $v=u'a^{\varepsilon} a
a^{-\varepsilon} u'^{-1}$ for
some trace $u'$ contradicting the $R$-irreducibility of
$v$. Similarly, if there is some $b\in I(a)\cap \max(u)$, then $u=u'b$
for some trace $u'$ and $v=u'bab^{-1}u'^{-1}=u'abb^{-1}u'^{-1}$, again a
contradiction. Suppose on the other hand that $(1)$ and $(2)$ hold
for $v=uau^{-1}$. Since $u \in \mathsf{IRR}(R)$ and no element from
$\max(u)$ cancels against or commutes with $a$ it follows that $v$
is also $R$-irreducible. \qed
\end{proof}
It follows that in a double $a$-cone every letter lies either before
or after the central letter $a$ in the dependence order. The dependence
graph of a double $a$-cone always has the following form:
\begin{center}
\setlength{\unitlength}{1.2mm}
\begin{picture}(20,10)(-10,-5)
\gasset{Nadjust=wh,Nadjustdist=0,Nfill=n,Nframe=n,AHnb=0,linewidth=.1}
\node(a)(0,0){$a$}
\node(lo)(-10,5){}
\node(lm)(-1,0){}
\node(lu)(-10,-5){}
\node(ro)(10,5){}
\node(rm)(1,0){}
\node(ru)(10,-5){}
\drawedge(lo,lm){}
\drawedge(lo,lu){}
\drawedge(lm,lu){}
\drawedge(ro,rm){}
\drawedge(ro,ru){}
\drawedge(rm,ru){}
\end{picture}
\end{center}
By the following lemma, each double $a$-cone has a unique
factorization of the form $u_1 b u_2$ with $|u_1|=|u_2|$.
\begin{lemma}\label{unique_cone}
Let $v=uau^{-1}$ be a double $a$-cone and let $v=u_1bu_2$ with
$b\in\Sigma^{\pm 1}$ and $|u_1|=|u_2|$. Then $a=b$, $u_1=u$ and
$u_2=u^{-1}$.
\end{lemma}
\begin{proof}
Let $v=uau^{-1}=u_1bu_2$ be a double $a$-cone where
$|u_1|=|u_2|$. We have $\max(ua)=\{a\}$ and $(a,c)\in D$ for all
$c\in \min(u^{-1})$. Moreover $|u_1|=|u_2|=|u|$. By Levi's Lemma,
there exist traces $x$, $y_1$, $y_2$ and $z$ with $u_1b=xy_1$,
$u_2=y_2z$, $ua=xy_2$ and $u^{-1}=y_1z$. Assume that
$y_2\neq\varepsilon$. Since $\max(y_2)\subseteq \max(ua)=\{a\}$ we
get $\max(y_2)=\{a\}$. Since $(a,c)\in D$ for all $c\in
\min(y_1)\subseteq \min(u^{-1})$ and $y_1Iy_2$ it follows
$y_1=\varepsilon$. But then $|u|=|u^{-1}|=|z|<|y_2z|=|u_2|$ leads to
a contradiction. Hence, we must have $y_2=\varepsilon$. Thus
$|u|=|u^{-1}|=|y_1z|=|y_1|+|z|=|y_1|+|u_2|=|y_1|+|u|$ implies
$y_1=\varepsilon$. Therefore we get $ua=u_1b$ and
$u^{-1}=u_2$. Finally, since $\max(ua)=\{a\}$ we must have $a=b$ and
$u=u_1$. \qed
\end{proof}
\begin{lemma} \label{lemma-double-a-cone}
Let $w\in \mathbb{M}(\Sigma^{\pm 1}, I)$ be $R$-irreducible and $a\in
\Sigma^{\pm 1}$. Then the following three conditions are equivalent:
\begin{enumerate}[(1)]
\item There exists $x\in \mathbb{M}(\Sigma^{\pm 1}, I)$ with $w=xax^{-1}$ in $\mathbb{G}(\Sigma, I)$.
\item There exists $x\in \mathbb{M}(\Sigma^{\pm 1}, I)$ with $w=xax^{-1}$ in $\mathbb{M}(\Sigma^{\pm 1}, I)$.
\item $w$ is a double $a$-cone.
\end{enumerate}
\end{lemma}
\begin{proof}
Direction ``$(2)\Rightarrow (1)$'' is trivially true and
``$(2)\Leftrightarrow (3)$'' is just the definition of a double
$a$-cone. For ``$(1)\Rightarrow (2)$'' assume that $w=xax^{-1}$ in
$\mathbb{G}(\Sigma, I)$. Since $w \in \mathsf{IRR}(R)$, we have
$xax^{-1}\xrightarrow{*}_R w$. W.l.o.g., $x \in \mathsf{IRR}(R)$. Let
$n\geq 0$ such that $xax^{-1}\xrightarrow{}^n_R w$. We prove (2) by
induction on $n$.
For $n=0$ we have $w=xax^{-1}$ in $\mathbb{M}(\Sigma^{\pm 1}, I)$. So
assume $n>0$. If $a\in \max(x)$ we have $x=ya$ for some
$y\in\mathbb{M}(\Sigma^{\pm 1}, I)$ and hence $xax^{-1}=
yaaa^{-1}y^{-1}\rightarrow_R yay^{-1}$. Since $R$ is confluent, we
have $yay^{-1}\xrightarrow{*}_R w$ and since each rewriting rule
from $R$ reduces the length of a trace by $2$ it follows that
$yay^{-1}\xrightarrow{}^{n-1}_R w$. Hence, by induction, there
exists a trace $v$ with $w=vav^{-1}$ in $\mathbb{M}(\Sigma^{\pm 1},
I)$. The case where $a^{-1}\in\max(x)$ is analogous to the previous
case. If there exists $b\in\max(x)$ with $(a,b)\in I$ we can infer
that $x=yb$ for some trace $y$ and
$xax^{-1}=ybab^{-1}y^{-1}=yabb^{-1}y^{-1}\rightarrow_R yay^{-1}$. As
for the previous cases we obtain inductively $w=vav^{-1}$ in
$\mathbb{M}(\Sigma^{\pm 1},I)$ for some trace $v$. Finally, if
$\max(x)\cap(\{a,a^{-1}\}\cup I(a))=\emptyset$, then $xax^{-1}$ is a
double $a$-cone by Lemma~\ref{char_cone} and hence $R$-irreducible,
which contradicts $n>0$. \qed
\end{proof}
\begin{lemma}\label{condition_if_conjugating_then_supremum}
Let $w_a,v_a \in \mathbb{M}(\Sigma^{\pm 1},I)$ ($a \in \Sigma$) be
$R$-irreducible such that $w_a=v_a a v_a^{-1}$ in $\mathbb{M}(\Sigma^{\pm
1},I)$ for all $a \in \Sigma$ (thus, every $w_a$ is a double
$a$-cone). If there is a trace $x\in \mathbb{M}(\Sigma^{\pm 1},I)$ with $x
a x^{-1}=w_a$ in $\mathbb{G}(\Sigma,I)$ for all $a \in \Sigma$, then $s =
\bigsqcup_{a\in\Sigma} v_a$ exists and $s a s^{-1}=w_a$ in
$\mathbb{G}(\Sigma,I)$ for all $a \in \Sigma$.
\end{lemma}
\begin{proof}
Assume that a trace $x\in\mathbb{M}(\Sigma^{\pm 1},I)$ exists with
$xax^{-1}=w_a$ in $\mathbb{G}(\Sigma,I)$ for all $a\in\Sigma$. We
can assume w.l.o.g. that $x \in \mathsf{IRR}(R)$. First, write $x$ as
$x=x_a a^{n_a}$ with $n_a\in \mathbb{Z}$ and $|n_a|$ maximal for
every $a \in \Sigma$. Then $a, a^{-1}\not\in \max(x_a)$ and $x_a$
is uniquely determined by the cancellativity of $\mathbb{M}(\Sigma^{\pm
1},I)$. Next we write $x_a$ as $x_a=t_a u_a$ with $u_a I a$ and
$|u_a|$ maximal. This decomposition is unique by
Lemma~\ref{unique_decomposition}. We get
$$
xax^{-1}=t_a u_a a^{n_a} a a^{-n_a}u_a^{-1}t_a^{-1}\xrightarrow{*}_R
t_a u_a a u_a^{-1}t_a^{-1}\xrightarrow{*}_R t_a a t_a^{-1}
\xrightarrow{*}_R w_a = v_a a v_a^{-1} .
$$
From the choice of $n_a$ and $u_a$ it follows that $\max(t_a)\cap (\{a,
a^{-1}\} \cup I(a)) =\emptyset$. This implies that $t_a a t_a^{-1}
\in \mathsf{IRR}(R)$.
Hence $t_a a t_a^{-1}=v_a a v_a^{-1}$ in
$\mathbb{M}(\Sigma^{\pm 1},I)$ and by Lemma~\ref{unique_cone} it follows
that $t_a=v_a$. So for all $a\in\Sigma$ it holds that $v_a\preceq x$
and therefore $s= \bigsqcup_{a\in\Sigma} v_a$ exists.
Now we infer that $s a s^{-1}=w_a$ in $\mathbb{G}(\Sigma,I)$ for all
$a\in\Sigma$. Since $v_a\preceq x$ for all $a\in\Sigma$, there is
some trace $y$ such that $x=sy$ in $\mathbb{M}(\Sigma^{\pm 1},I)$.
Since $v_a\preceq s$, we can write $s=v_a r_a$
for every $a\in\Sigma$. Let $z_a = r_a
y$, so that $x = v_a r_a y = v_a z_a$.
As a suffix of the $R$-irreducible trace $x$, $z_a$ is
$R$-irreducible as well. By assumption we have
$$
\forall a\in\Sigma : v_a a v_a^{-1} = w_a = xax^{-1} = v_a z_a a z_a^{-1} v_a^{-1}
$$
in $\mathbb{G}(\Sigma,I)$ and hence, by cancelling $v_a$ and
$v_a^{-1}$,
$$
\forall a\in\Sigma : a = z_a a z_a^{-1}
$$
in $\mathbb{G}(\Sigma,I)$. Since $a$ as a single symbol is
$R$-irreducible, this means that
\begin{equation} \label{derivation to _a} \forall a\in\Sigma : z_a a
z_a^{-1} \to^*_R a .
\end{equation}
We prove by induction on $|z_a|$ that $\mathsf{alph}(z_a) \subseteq I(a) \cup \{a,a^{-1}\}$.
The case $z_a = \varepsilon$ is
clear. Now assume that $z_a \neq \varepsilon$. If every maximal
symbol in $z_a$ belongs to $\Sigma^{\pm 1} \setminus (I(a) \cup
\{a,a^{-1}\})$, then $z_a a z_a^{-1} \in \mathsf{IRR}(R)$ (recall
that $z_a \in \mathsf{IRR}(R)$), which contradicts (\ref{derivation to _a}).
Hence, let $z_a = z'_a b$ with $b \in I(a) \cup
\{a,a^{-1}\}$. We get $z_a a z_a^{-1} \to_R z'_a a {z'}_a^{-1}
\to^*_R a$. By induction, it follows that $\mathsf{alph}(z'_a)
\subseteq I(a) \cup \{a,a^{-1}\}$. Hence, the same is true for
$z_a$ and therefore for the prefix $r_a$ of $z_a$ as well. But this
implies $sas^{-1}= v_a r_a a r_a^{-1} v_a^{-1} = v_a a v_a^{-1} =
w_a$ in $\mathbb{G}(\Sigma,I)$. \qed
\end{proof}
\section{Restricted simultaneous compressed conjugacy}
\label{sec:RSSC}
Based on our results on double $a$-cones from the previous section, we
will prove Theorem~\ref{decide_ccp} in this section. First, we have to
prove the following lemma:
\begin{lemma}\label{slp_is_cone}
Let $a\in \Sigma^{\pm 1}$. For a given SLP $\mathbb{A}$ with
$[\mathsf{val}(\mathbb{A})]_I \in \mathsf{IRR}(R)$, we can check in polynomial time
whether $[\mathsf{val}(\mathbb{A})]_I$ is a double $a$-cone. In case
$[\mathsf{val}(\mathbb{A})]_I$ is a double $a$-cone, we can compute in polynomial
time an SLP $\mathbb{V}$ over $\Sigma^{\pm 1}$ with
$\mathsf{val}(\mathbb{A})=\mathsf{val}(\mathbb{V})\,a\,\mathsf{val}(\mathbb{V}^{-1})$ in $\mathbb{M}(\Sigma^{\pm 1},I)$.
\end{lemma}
\begin{proof}
First we check whether $|\mathsf{val}(\mathbb{A})|$ is odd. If not, then
$[\mathsf{val}(\mathbb{A})]_I$ cannot be a double $a$-cone. Assume that
$|\mathsf{val}(\mathbb{A})|=2k+1$ for some $k\geq 0$ and let $\mathsf{val}(\mathbb{A})=u_1bu_2$
with $|u_1|=|u_2|=k$. By \cite{Hag00} we can construct SLPs $\mathbb{V}_1$
and $\mathbb{V}_2$ such that $\mathsf{val}(\mathbb{V}_1)=\mathsf{val}(\mathbb{A})[1:k] =u_1$ and
$\mathsf{val}(\mathbb{V}_2)=\mathsf{val}(\mathbb{A})[k+2:2k+1]=u_2$. By Lemma~\ref{unique_cone},
$[\mathsf{val}(\mathbb{A})]_I$ is a double $a$-cone if and only if $a=b$ and
$[\mathsf{val}(\mathbb{V}_1)]_I=[\mathsf{val}(\mathbb{V}_2^{-1})]_I$. This can be checked in
polynomial time. \qed
\end{proof}
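On uncompressed words the same check is immediate (no appeal to \cite{Hag00} is needed to split at the middle position), with trace equality of the two halves tested via lexicographic normal forms. Alphabet and conventions as in the earlier sketches (hypothetical).

```python
I = {frozenset("ac"), frozenset("ad"), frozenset("bd")}  # hypothetical alphabet

def indep(x, y):
    bx, by = x.lower(), y.lower()
    return bx != by and frozenset((bx, by)) in I

def inv(x): return x.swapcase()
def invword(w): return "".join(inv(c) for c in reversed(w))

def lexnf(w):  # canonical representative of [w]_I: trace equality test
    w, out = list(w), []
    while w:
        i = min((x, k) for k, x in enumerate(w)
                if all(indep(x, w[j]) for j in range(k)))[1]
        out.append(w.pop(i))
    return "".join(out)

def double_cone_split(w, a):
    """Given an R-irreducible word w, decide whether [w]_I is a double
    a-cone; if so, return u with [w]_I = [u a u^{-1}]_I, else None.
    By Lemma unique_cone the split must happen at the middle letter."""
    if len(w) % 2 == 0:
        return None
    k = len(w) // 2
    u1, mid, u2 = w[:k], w[k], w[k + 1:]
    if mid != a or lexnf(u1) != lexnf(invword(u2)):
        return None
    return u1
```

The second test case below exercises the trace (rather than word) equality of the two halves: `"acbAC"` and `"cabAC"`-style representatives commute $a$ past $c$.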
We are now in a position to present a polynomial time algorithm for
$\mathsf{RSCCP}(\Sigma,I)$:
\noindent
{\it Proof of Theorem \ref{decide_ccp}.}
Let $\mathbb{A}_a$ ($a \in \Sigma$) be the input SLPs. We have to check
whether there exists $x$ such that $\mathsf{val}(\mathbb{A}_a) = xax^{-1}$
in $\mathbb{G}(\Sigma,I)$ for all $a
\in \Sigma$. Since the SLP $\mathbb{A}_a$ and an $R$-reduction of $\mathbb{A}_a$
represent the same group element in $\mathbb{G}(\Sigma,I)$, Theorem
\ref{R-reduction} allows us to assume that the input SLPs $\mathbb{A}_a$ ($a
\in \Sigma$) represent $R$-irreducible traces.
We first check whether every trace $[\mathsf{val}(\mathbb{A}_a)]_I$ is a double
$a$-cone. By Lemma~\ref{slp_is_cone} this is possible in polynomial
time. If there exists $a \in \Sigma$ such that $[\mathsf{val}(\mathbb{A}_a)]_I$ is
not a double $a$-cone, then we can reject by
Lemma~\ref{lemma-double-a-cone}. Otherwise, we can compute (using
again Lemma~\ref{slp_is_cone}) SLPs $\mathbb{V}_a$ ($a \in \Sigma$) such that
$[\mathsf{val}(\mathbb{A}_a)]_I = [\mathsf{val}(\mathbb{V}_a)]_I a [\mathsf{val}(\mathbb{V}_a)]_I^{-1}$ in
$\mathbb{M}(\Sigma^{\pm 1},I)$. Finally,
by Lemma~\ref{condition_if_conjugating_then_supremum}, it suffices to
check whether $\bigsqcup_{a\in\Sigma} [\mathsf{val}(\mathbb{V}_a)]_I$
exists, which is possible in polynomial time by
Corollary~\ref{decide_compressed_supremum} (recall that $|\Sigma|$ is
a constant in our consideration). Moreover, if this supremum exists,
then we can compute in polynomial time an SLP $\mathbb{S}$ with
$[\mathsf{val}(\mathbb{S})]_I = \bigsqcup_{a\in\Sigma} [\mathsf{val}(\mathbb{V}_a)]_I$. Then,
$\mathsf{val}(\mathbb{S})$ is a solution for our $\mathsf{RSCCP}(\Sigma,I)$-instance. \qed
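To make the whole procedure concrete, here is a naive uncompressed end-to-end sketch (it deliberately ignores all compression issues, which are what the paper actually solves), over the same hypothetical alphabet as in the earlier sketches. A final verification pass is included: by Lemma~\ref{condition_if_conjugating_then_supremum}, if any conjugator exists then the supremum of the wings is one, so rejecting when the candidate fails is sound.

```python
I = {frozenset("ac"), frozenset("ad"), frozenset("bd")}  # hypothetical alphabet

def indep(x, y):
    bx, by = x.lower(), y.lower()
    return bx != by and frozenset((bx, by)) in I

def inv(x): return x.swapcase()
def invword(w): return "".join(inv(c) for c in reversed(w))

def nf(w):  # NF_R by naive cancellation (unique by confluence of R)
    w, reduced = list(w), True
    while reduced:
        reduced = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == inv(w[i]):
                    del w[j], w[i]; reduced = True; break
                if not indep(w[i], w[j]):
                    break
            if reduced: break
    return "".join(w)

def lexnf(w):  # canonical representative: trace equality test
    w, out = list(w), []
    while w:
        i = min((x, k) for k, x in enumerate(w)
                if all(indep(x, w[j]) for j in range(k)))[1]
        out.append(w.pop(i))
    return "".join(out)

def minima(w):
    return [i for i, x in enumerate(w)
            if all(indep(x, w[k]) for k in range(i))]

def sup(u, v):  # supremum via infimum and independent differences, else None
    u, v, p, matched = list(u), list(v), [], True
    while matched:
        matched = False
        for i in minima(u):
            for j in minima(v):
                if u[i] == v[j]:
                    p.append(u[i]); del u[i]; del v[j]; matched = True; break
            if matched: break
    if all(indep(x, y) for x in set(u) for y in set(v)):
        return "".join(p) + "".join(u) + "".join(v)
    return None

def rsccp(words):
    """words: dict a -> w_a.  Return x with x a x^{-1} = w_a in G(Sigma, I)
    for all a, or None.  Mirrors the proof of Theorem decide_ccp: reduce,
    split each double a-cone, take the supremum of the wings, verify."""
    vs = []
    for a, w in words.items():
        w = nf(w); k = len(w) // 2
        if len(w) % 2 == 0 or w[k] != a or lexnf(w[:k]) != lexnf(invword(w[k+1:])):
            return None        # some [w_a] is not a double a-cone: reject
        vs.append(w[:k])
    s = vs[0]
    for u in vs[1:]:
        s = sup(s, u)
        if s is None:
            return None        # wings have no common upper bound: reject
    for a, w in words.items():  # sound by the lemma cited in the lead-in
        if lexnf(nf(s + a + invword(s))) != lexnf(nf(w)):
            return None
    return s
```

For example, conjugating each generator by $x=cb$ yields the instance solved in the first test below.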
\section{Computing the core of a compressed trace}
In order to prove Theorem~\ref{decidesccp} we need some further
concepts from \cite{Wra88}.
\begin{definition}
A trace $y$ is called \emph{cyclically $R$-irreducible} if $y \in
\mathsf{IRR}(R)$ and $\min(y)\cap\min(y^{-1})=\emptyset$. If for a trace
$x$ we have $\mathbb{N}F_R(x) = uyu^{-1}$ in $\mathbb{M}(\Sigma^{\pm 1},I)$ for
traces $y,u$ with $y$ cyclically $R$-irreducible, then we call $y$ the
\emph{core} of $x$, $\mathcal{C}R(x)$ for short.
\end{definition}
The trace $y$ in the last definition is uniquely defined
\cite{Wra88}. Moreover, note that a trace $t$ is a double $a$-cone if and
only if $t \in \mathsf{IRR}(R)$ and $\mathcal{C}R(t) = a$.
In this section, we will present a polynomial time algorithm for
computing an SLP that represents $\mathcal{C}R([\mathsf{val}(\mathbb{A})]_I)$ for a given SLP
$\mathbb{A}$. For this, we need the following lemmas.
\begin{lemma}\label{prefixinverseepsilon}
Let $p,t\in\mathbb{M}(\Sigma^{\pm 1},I)$. If $p\preceq t$, $p^{-1}\preceq
t$ and $t \in \mathsf{IRR}(R)$, then $p=\varepsilon$.
\end{lemma}
\begin{proof}
Suppose for contradiction that
$$
T=\{t \in \mathsf{IRR}(R) \mid \exists p\in \mathbb{M}(\Sigma^{\pm
1},I) \setminus \{ \varepsilon\} : p\preceq t \wedge p^{-1}\preceq
t \} \neq\emptyset.
$$
Let $t\in T$ with $|t|$ minimal and $p\in \mathbb{M}(\Sigma^{\pm 1},I)$
such that $p \neq \varepsilon$, $p\preceq t$, and $p^{-1}\preceq
t$. If $|p|=1$ then $p=a$ for some $a\in\Sigma^{\pm 1}$ and hence
$a\preceq t$ and $a^{-1}\preceq t$, a contradiction since
$aDa^{-1}$. If $|p|=2$ then $p=a_1a_2$ for some
$a_1,a_2\in\Sigma^{\pm 1}$. Since $t$, and therefore $p$, is
$R$-irreducible, we have $a_1 \neq a_2^{-1}$. Since $a_1\in \min(t)$ and
$a_2^{-1}\in\min(t)$ we have $a_1 I a_2^{-1}$, i.e., $a_1 I
a_2$. Hence, also $a_2 \in \min(t)$, which contradicts
$a_2^{-1}\in\min(t)$. So assume that $|p|>2$. Let $a\in
\min(p)$. Then $a\in \min(t)$, and there exist traces $y, t'$ with
$t = at' = p^{-1} y$. If $a \not\in \min(p^{-1})$, then $a \in
\min(y)$ and $a I p^{-1}$. But the latter independence contradicts
$a^{-1} \in \mathsf{alph}(p^{-1})$. Hence $a \in \min(p^{-1})$, i.e., $a^{-1}
\in \max(p)$. Thus, we can write $p=aqa^{-1}$ and
$p^{-1}=aq^{-1}a^{-1}$ with $q \neq \varepsilon$. Since $aqa^{-1} =
p \preceq at'$, $aq^{-1}a^{-1} = p^{-1} \preceq at'$ and
$\mathbb{M}(\Sigma^{\pm 1},I)$ is cancellative, we have $q\preceq t'$ and
$q^{-1}\preceq t'$. Since $q \neq \varepsilon$, we have a
contradiction to the fact that $|t|$ is minimal. \qed
\end{proof}
\begin{example}\label{exacore}
We take the independence alphabet from Example \ref{Ex traces1} and
consider the trace
$x=[c^{-1}d^{-1}a^{-1}ba^{-1}cabdc^{-1}d^{-1}a^{-1}b^{-1}dca]_I\in\mathbb{M}(\Sigma^{\pm
1},I)$, whose dependence graph looks as follows:
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(76,13)(-6,-3)
\gasset{Nadjust=wh,Nadjustdist=0.3,Nfill=n,Nframe=n,AHnb=1,linewidth=.2}
\node(oc1)(-6,7){$c^{-1}$}
\node(od1)(9,7){$d^{-1}$}
\node(oc2)(17,7){$c$}
\node(od2)(30,7){$d$}
\node(oc3)(42,7){$c^{-1}$}
\node(od3)(55,7){$d^{-1}$}
\node(od4)(63,7){$d$}
\node(oc4)(70,7){$c$}
\node(ua1)(-6,0){$a^{-1}$}
\node(ub1)(3,0){$b$}
\node(ua2)(15,0){$a^{-1}$}
\node(ua3)(24,0){$a$}
\node(ub2)(30,0){$b$}
\node(ua4)(44,0){$a^{-1}$}
\node(ub3)(58,0){$b^{-1}$}
\node(ua5)(70,0){$a$}
\drawedge(oc1,ub1){}
\drawedge(ub1,oc2){}
\drawedge(oc2,ub2){}
\drawedge(ub2,oc3){}
\drawedge(oc3,ub3){}
\drawedge(ub3,oc4){}
\drawedge(oc1,od1){}
\drawedge(od1,oc2){}
\drawedge(oc2,od2){}
\drawedge(od2,oc3){}
\drawedge(oc3,od3){}
\drawedge(od3,od4){}
\drawedge(od4,oc4){}
\drawedge(ua1,ub1){}
\drawedge(ub1,ua2){}
\drawedge(ua2,ua3){}
\drawedge(ua3,ub2){}
\drawedge(ub2,ua4){}
\drawedge(ua4,ub3){}
\drawedge(ub3,ua5){}
\drawqbezier[AHnb=0,linewidth=.1,dash={.7}0](14,-2,18,-6,23,-2)
\drawqbezier[AHnb=0,linewidth=.1,dash={.7}0](55,9,59,12,63,9)
\end{picture}
\end{center}
Then the $R$-reduction of $x$ is
$\mathbb{N}F_R(x)=[c^{-1}d^{-1}a^{-1}bcbdc^{-1}a^{-1}b^{-1}ca]_I$:
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(76,13)(-6,-3)
\gasset{Nadjust=wh,Nadjustdist=0.3,Nfill=n,Nframe=n,AHnb=1,linewidth=.2}
\node(oc1)(-6,7){$c^{-1}$}
\node(od1)(9,7){$d^{-1}$}
\node(oc2)(17,7){$c$}
\node(od2)(30,7){$d$}
\node(oc3)(42,7){$c^{-1}$}
\node(oc4)(70,7){$c$}
\node(ua1)(-6,0){$a^{-1}$}
\node(ub1)(3,0){$b$}
\node(ub2)(30,0){$b$}
\node(ua4)(44,0){$a^{-1}$}
\node(ub3)(58,0){$b^{-1}$}
\node(ua5)(70,0){$a$}
\drawedge(oc1,ub1){}
\drawedge(ub1,oc2){}
\drawedge(oc2,ub2){}
\drawedge(ub2,oc3){}
\drawedge(oc3,ub3){}
\drawedge(ub3,oc4){}
\drawedge(oc1,od1){}
\drawedge(od1,oc2){}
\drawedge(oc2,od2){}
\drawedge(od2,oc3){}
\drawedge(ua1,ub1){}
\drawedge(ub2,ua4){}
\drawedge(ua4,ub3){}
\drawedge(ub3,ua5){}
\drawqbezier[AHnb=0,linewidth=.1,dash={.7}0](3,12,7,3,3,-5)
\drawqbezier[AHnb=0,linewidth=.1,dash={.7}0](54,12,50,3,54,-5)
\end{picture}
\end{center}
Hence, the core of $x$ is $\mathcal{C}R(x)=[d^{-1}cbdc^{-1}a^{-1}]_I$ and looks
as follows:
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(35,13)(9,-3)
\gasset{Nadjust=wh,Nadjustdist=0.3,Nfill=n,Nframe=n,AHnb=1,linewidth=.2}
\node(od1)(9,7){$d^{-1}$}
\node(oc2)(17,7){$c$}
\node(od2)(30,7){$d$}
\node(oc3)(42,7){$c^{-1}$}
\node(ub2)(30,0){$b$}
\node(ua4)(44,0){$a^{-1}$}
\drawedge(oc2,ub2){}
\drawedge(ub2,oc3){}
\drawedge(od1,oc2){}
\drawedge(oc2,od2){}
\drawedge(od2,oc3){}
\drawedge(ub2,ua4){}
\end{picture}
\end{center}
Note that we have
$\mathbb{N}F_R (x)\sqcap \mathbb{N}F_R(x^{-1})=c^{-1}a^{-1}b$
and hence
\begin{align*}
\mathbb{N}F_R\Big(\big(\mathbb{N}F_R(x)\sqcap& \mathbb{N}F_R(x^{-1})\big)^{-1}\mathbb{N}F_R(x)\big(\mathbb{N}F_R(x)\sqcap \mathbb{N}F_R(x^{-1})\big)\Big)\\
&=\mathbb{N}F_R\Big(\big(c^{-1}a^{-1}b\big)^{-1}\big(c^{-1}d^{-1}a^{-1}bcbdc^{-1}a^{-1}b^{-1}ca\big)\big(c^{-1}a^{-1}b\big)\Big)\\
&=d^{-1}cbdc^{-1}a^{-1}=\mathcal{C}R(x).
\end{align*}
This identity holds for every trace, as we prove next.
\end{example}
\begin{lemma} \label{main core lemma}
Let $x \in \mathsf{IRR}(R)$ and $d=x\sqcap x^{-1}$. Then
$\mathbb{N}F_R(d^{-1}xd) = \mathcal{C}R(x)$.
\end{lemma}
\begin{proof}
Let $d=x\sqcap x^{-1}$. Thus, there are traces $y,z$ such that $dy
=x = z^{-1}d^{-1}$ and $\min(y) \cap \min(z) = \emptyset$. By Levi's
Lemma it follows that there are traces $u,v_1,v_2,w$ such that
$uv_1=d$, $v_2w=y$, $uv_2=z^{-1}$, $v_1w=d^{-1}$, and
$v_1Iv_2$. Hence we have $v_1^{-1}\preceq d^{-1}$ and $v_1\preceq
d^{-1}$ and since $x$ is $R$-irreducible, so is $d^{-1}$. We can apply
Lemma~\ref{prefixinverseepsilon} to infer that $v_1=\varepsilon$.
It follows that $u = d$, $w = d^{-1}$, and thus $x = dy = d v_2 w =
d v_2 d^{-1}$. Moreover, since $\min(v_2w) \cap \min (v_2^{-1}
u^{-1}) = \min(y) \cap \min(z) = \emptyset$, we have
$\min(v_2)\cap\min(v_2^{-1})=\emptyset$. Hence, $v_2$ is the core of
$x$. Moreover since $x$ (and therefore $v_2$) is $R$-irreducible, we
have $\mathbb{N}F_R(d^{-1}xd)=\mathbb{N}F_R(d^{-1}dv_2d^{-1}d)=v_2$. \qed
\end{proof}
We now easily obtain:
\begin{corollary}\label{computecore}
Fix an independence alphabet $(\Sigma^{\pm 1},I)$. Then, the
following problem can be solved in polynomial time:
\noindent
INPUT: An SLP $\mathbb{A}$ \\
OUTPUT: An SLP $\mathbb{B}$ with $[\mathsf{val}(\mathbb{B})]_I = \mathcal{C}R([\mathsf{val}(\mathbb{A})]_I)$
\end{corollary}
\begin{proof}
By Theorem~\ref{R-reduction} we can assume that $[\mathsf{val}(\mathbb{A})]_I \in \mathsf{IRR}(R)$.
Then, using Theorem~\ref{theo-compressed-inf} we can
compute in polynomial time an SLP $\mathbb{P}$ with
$[\mathsf{val}(\mathbb{P})]_I=[\mathsf{val}(\mathbb{A})]_I\sqcap[\mathsf{val}(\mathbb{A})^{-1}]_I$. By
Lemma~\ref{main core lemma} we have $\mathcal{C}R([\mathsf{val}(\mathbb{A})]_I)
=\mathbb{N}F_R([\mathsf{val}(\mathbb{P})^{-1}\mathsf{val}(\mathbb{A})\mathsf{val}(\mathbb{P})]_I)$. Finally, by
Theorem~\ref{R-reduction} we can compute in polynomial time an SLP
$\mathbb{B}$ such that
$[\mathsf{val}(\mathbb{B})]_I=\mathbb{N}F_R([\mathsf{val}(\mathbb{P})^{-1}\mathsf{val}(\mathbb{A})\mathsf{val}(\mathbb{P})]_I)$. \qed
\end{proof}
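An uncompressed sketch of this core computation, combining the earlier helpers (same hypothetical alphabet; `nf`, `lexnf` and the greedy infimum are as in the previous sketches):

```python
I = {frozenset("ac"), frozenset("ad"), frozenset("bd")}  # hypothetical alphabet

def indep(x, y):
    bx, by = x.lower(), y.lower()
    return bx != by and frozenset((bx, by)) in I

def inv(x): return x.swapcase()
def invword(w): return "".join(inv(c) for c in reversed(w))

def nf(w):  # NF_R by naive cancellation (unique by confluence of R)
    w, reduced = list(w), True
    while reduced:
        reduced = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == inv(w[i]):
                    del w[j], w[i]; reduced = True; break
                if not indep(w[i], w[j]):
                    break
            if reduced: break
    return "".join(w)

def lexnf(w):  # canonical representative: trace equality test
    w, out = list(w), []
    while w:
        i = min((x, k) for k, x in enumerate(w)
                if all(indep(x, w[j]) for j in range(k)))[1]
        out.append(w.pop(i))
    return "".join(out)

def inf(u, v):  # a representative of u inf v, by greedy peeling
    u, v, p, matched = list(u), list(v), [], True
    while matched:
        matched = False
        for i, x in enumerate(u):
            if not all(indep(x, u[k]) for k in range(i)):
                continue
            for j, y in enumerate(v):
                if y == x and all(indep(y, v[k]) for k in range(j)):
                    p.append(x); del u[i]; del v[j]; matched = True; break
            if matched: break
    return "".join(p)

def core(x):
    """C_R(x) via Lemma 'main core lemma': with w = NF_R(x) and
    d = w inf w^{-1}, the core is NF_R(d^{-1} w d)."""
    w = nf(x)
    d = inf(w, invword(w))
    return nf(invword(d) + w + d)
```

On the trace of Example~\ref{exacore} this recovers the core $d^{-1}cbdc^{-1}a^{-1}$, and the core of a double $a$-cone is $a$ itself.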
\section{A pattern matching algorithm for connected patterns}\label{sec:connected}
Our second tool for proving Theorem~\ref{decidesccp} is a pattern
matching algorithm for compressed traces.
For two traces $v$ and $w$ we say that $v$ is a factor of $w$ if there
is some trace $u$ such that $uv\preceq w$. We consider the following
problem and show that it can be solved in polynomial time if the
independence alphabet $(\Sigma,I)$ satisfies certain conditions.
\noindent
INPUT: An independence alphabet $(\Sigma,I)$ and two SLPs $\mathbb{T}$ and $\mathbb{P}$
over $\Sigma$.\\
QUESTION: Is $[\mathsf{val}(\mathbb{P})]_I$ a factor of $[\mathsf{val}(\mathbb{T})]_I$?
\noindent
We write $\mathsf{alph}(\mathbb{T})$ and $\mathsf{alph}(\mathbb{P})$ for $\mathsf{alph}(\mathsf{val}(\mathbb{T}))$ and
$\mathsf{alph}(\mathsf{val}(\mathbb{P}))$, respectively. We may assume that
$\Sigma=\mathsf{alph}(\mathbb{T})$ and that $\Sigma$ is connected; otherwise we simply
solve several smaller instances of the problem separately. Also, we
assume in the following that the SLPs $\mathbb{T}=(V,\Sigma,S,P)$ and $\mathbb{P}$
are in Chomsky normal form. Let $\Gamma\subseteq \Sigma$. We denote
by $\pi_{\Gamma}$ the homomorphism $\pi_{\Gamma}:\mathbb{M}(\Sigma,I)\to
\mathbb{M}(\Gamma,I\cap (\Gamma\times \Gamma))$ with $\pi_{\Gamma}(a)=a$ for
$a\in \Gamma$ and $\pi_{\Gamma}(a)=\varepsilon$ for $a\in
\Sigma\setminus\Gamma$. Let $V^\Gamma=\{X^\Gamma\mid X\in V\}$ be a
disjoint copy of $V$. For each production $p\in P$ define a new
production $p^\Gamma$ as follows. If $p$ is of the form $X\to a$
$(a\in \Sigma)$, then let $p^\Gamma=(X^\Gamma \to a)$ in case $a\in
\Gamma$ and $p^\Gamma=(X^\Gamma\to \varepsilon)$ otherwise. Moreover, if
$p\in P$ is of the form $X\to YZ$ $(X,Y,Z\in V)$ define
$p^\Gamma=(X^\Gamma\to Y^\Gamma Z^\Gamma)$. We denote with $\mathbb{T}^\Gamma$
the SLP $(V^\Gamma,\Gamma,S^\Gamma,P^\Gamma)$ where
$P^\Gamma=\{p^\Gamma\mid p\in P\}$.
Obviously, $\mathsf{val}(\mathbb{T}^{\Gamma})=\pi_\Gamma(\mathsf{val}(\mathbb{T}))$.
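The construction of $\mathbb{T}^\Gamma$ is easily made concrete. In the toy sketch below an SLP in Chomsky normal form is stored as a Python dict (a representation chosen here for brevity, not taken from the paper); `val` expands a nonterminal, which is exponential in general and is exactly what the compressed algorithms avoid, while `length` works directly on the compressed form.

```python
def val(slp, X):
    """Expand nonterminal X (exponential in general; for illustration only)."""
    rhs = slp[X]
    return val(slp, rhs[0]) + val(slp, rhs[1]) if isinstance(rhs, tuple) else rhs

def length(slp, X, memo=None):
    """|val(X)| computed on the compressed form, in time O(|slp|)."""
    memo = {} if memo is None else memo
    if X not in memo:
        rhs = slp[X]
        memo[X] = (length(slp, rhs[0], memo) + length(slp, rhs[1], memo)
                   if isinstance(rhs, tuple) else len(rhs))
    return memo[X]

def project(slp, gamma):
    """The SLP slp^Gamma: keep X -> YZ; map X -> a to X -> eps if a not in gamma."""
    return {X: rhs if isinstance(rhs, tuple) or rhs in gamma else ""
            for X, rhs in slp.items()}

# A small SLP in Chomsky normal form with val(S) = "abcabc":
T = {"S": ("T", "T"), "T": ("A", "U"), "U": ("B", "C"),
     "A": "a", "B": "b", "C": "c"}
```

Here `val(project(T, gamma), "S")` equals the projection $\pi_\Gamma(\mathsf{val}(\mathbb{T}))$, as claimed.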
In order to develop a polynomial time algorithm for the problem stated
above we need a succinct representation for an occurrence of $\mathbb{P}$ in
$\mathbb{T}$. Since $[\mathsf{val}(\mathbb{P})]_I$ is a factor of $[\mathsf{val}(\mathbb{T})]_I$ iff there
is a prefix $u\preceq[\mathsf{val}(\mathbb{T})]_I$ such that $u[\mathsf{val}(\mathbb{P})]_I\preceq
[\mathsf{val}(\mathbb{T})]_I$, we will in fact compute prefixes with the latter
property and represent a prefix $u$ by its Parikh image $(|u|_a)_{a\in
\Sigma}$. Hence we say a sequence $O=(O_a)_{a\in \Sigma}\in
\mathbb{N}^{\Sigma}$ is an \emph{occurrence} of a trace $v$ in a trace $w$ iff
there is a prefix $u\preceq w$ such that $u v\preceq w$, and
$O=(|u|_a)_{a\in \Sigma}$. For $\Gamma\subseteq \Sigma$ we write
$\pi_{\Gamma}(O)$ for the restriction
$(O_a)_{a\in\Gamma}$. Furthermore, we say that $O$ is an occurrence of
$\mathbb{P}$ in $\mathbb{T}$ if $O$ is an occurrence of $[\mathsf{val}(\mathbb{P})]_I$ in
$[\mathsf{val}(\mathbb{T})]_I$. Note that our definition of an occurrence of $\mathbb{P}$ in
$\mathbb{T}$ does not exactly correspond to the intuitive notion of an
occurrence as a convex subset of the dependence graph of
$[\mathsf{val}(\mathbb{T})]_I$. In fact, several occurrences $O$ may correspond to
one convex subset of the dependence graph of $[\mathsf{val}(\mathbb{T})]_I$ that is
isomorphic to the dependence graph of $[\mathsf{val}(\mathbb{P})]_I$, since for a
letter $a\in \Sigma$ that is independent of $\mathsf{alph}(\mathbb{P})$ there may
be several possibilities for the value
$O_a$. However, if we restrict to letters that are dependent on
$\mathsf{alph}(\mathbb{P})$, then our definition of an occurrence coincides with the
intuitive notion.
Let $X$ be a nonterminal of $\mathbb{T}$ with production $X\to YZ$ and let
$O$ be an occurrence of $[\mathsf{val}(\mathbb{P})]_I$ in $[\mathsf{val}(X)]_I$. If there
are $a,b\in \mathsf{alph}(\mathbb{P})$ such that $O_a<|\mathsf{val}(Y)|_a$ and
$O_b+|\mathsf{val}(\mathbb{P})|_b>|\mathsf{val}(Y)|_b$, then we say that $O$ is an occurrence
of $\mathbb{P}$ \emph{at the cut} of $X$. We assume w.l.o.g. that
$|\mathsf{val}(\mathbb{P})|\geq 2$; otherwise the problem simply reduces to checking
whether a certain letter occurs in $\mathsf{val}(\mathbb{T})$. This
assumption implies that $[\mathsf{val}(\mathbb{P})]_I$ is a factor of $[\mathsf{val}(\mathbb{T})]_I$ if
and only if there is a nonterminal $X$ of $\mathbb{T}$ for which there is an occurrence
of $\mathbb{P}$ at the cut of $X$.
\begin{example}\label{exapat1}
We take the independence alphabet from Example \ref{Ex traces1}
again. Let $X$ be a nonterminal with
$\mathsf{val}(X)=acbc\;ad\;cbc\;acbc\;acbc\;acbc\;acb|c\;acbc\;acbc\;acbc\;acb\;dc$
where '$|$' denotes the cut of $X$ and
$\mathsf{val}(\mathbb{P})=acbc\;acbc\;acbc\;acbc\;acbc$. Then the occurrences of
$\mathsf{val}(\mathbb{P})$ at the cut of $X$ are $(1,1,2,1)$, $(2,2,4,1)$,
$(3,3,6,1)$ and $(4,4,8,1)$ where the positions in a tuple
correspond to the letters in our alphabet in the order
$a,b,c,d$. We will see later how to construct them.
\end{example}
\begin{lemma}[\cite{LiuWraZeg90}]
\label{lem:liuwrazeg}
Let $v$ and $w$ be traces over $\Sigma$. A sequence $(n_a)_{a\in
\Sigma}\in \mathbb{N}^{\Sigma}$ is an occurrence of $v$ in $w$ if and only
if $(n_a,n_b)$ is an occurrence of $\pi_{\{a,b\}}(v)$ in
$\pi_{\{a,b\}}(w)$ for all $(a,b)\in D$.
\end{lemma}
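This projection criterion can be checked directly in Python on the data of Example~\ref{exapat1}. The dependence relation below (the reflexive pairs plus the path $a$--$b$, $b$--$c$, $c$--$d$) is a hypothetical reading of the running example's alphabet, and an occurrence is passed as its Parikh vector $O$.

```python
# Hypothetical dependence relation: reflexive pairs plus the path a-b, b-c, c-d.
D = [("a", "a"), ("b", "b"), ("c", "c"), ("d", "d"),
     ("a", "b"), ("b", "c"), ("c", "d")]

def proj(w, g):
    return "".join(c for c in w if c in g)

def is_occurrence(O, v, w):
    """O (the Parikh vector of a prefix) is an occurrence of [v]_I in [w]_I
    iff every projection to a dependent pair {a, b} is an occurrence of the
    projected pattern in the projected text (the lemma above)."""
    for a, b in D:
        g = {a, b}
        pv, pw = proj(v, g), proj(w, g)
        pos = sum(O[c] for c in g)
        pre = pw[:pos]                     # the unique candidate prefix
        if any(pre.count(c) != O[c] for c in g):
            return False
        if pw[pos:pos + len(pv)] != pv:    # projected pattern must follow it
            return False
    return True

# Example exapat1: val(X) (cut between ...acb and c...) and the pattern P.
X = "acbc" + "ad" + "cbc" + "acbc" * 3 + "acb" + "c" + "acbc" * 3 + "acb" + "dc"
P = "acbc" * 5
```

The four tuples of Example~\ref{exapat1} all pass the test, while e.g. the zero vector does not.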
\noindent
An \emph{arithmetic progression} is a subset of $\mathbb{N}^{\Sigma}$ of the form
$$
\{ (i_a)_{a\in \Sigma}+k\cdot (d_a)_{a\in \Sigma}\mid 0\leq
k\leq \ell\}.
$$
This set can be represented by the triple
$((i_a)_{a\in\Sigma},(d_a)_{a\in\Sigma},\ell)$. The
{\em descriptional size} $|((i_a)_{a\in\Sigma},(d_a)_{a\in\Sigma},\ell)|$ of the arithmetic
progression $((i_a)_{a\in\Sigma},(d_a)_{a\in\Sigma},\ell)$ is
$\log_2(\ell)+\sum_{a\in \Sigma}(\log_2(i_a)+\log_2(d_a))$.
In Example~\ref{exapat1}, the occurrences of $\mathsf{val}(\mathbb{P})$ at the cut of
$X$ form the arithmetic progression $\big((1,1,2,1),(1,1,2,0),3\big)$.
We will use the last lemma in order to compute the occurrences of
$\mathbb{P}$ in $\mathbb{T}$ in form of a family of arithmetic progressions. To this
aim, we follow a similar approach as Genest and Muscholl for message
sequence charts~\cite{GenMus08}. In particular Lemma~\ref{lem:sameNT}
below was inspired by \cite[Proposition~1]{GenMus08}.
Throughout the rest of this section we make the following assumption:
\begin{equation}
\label{assumption}
\text{$\mathsf{alph}(\mathbb{P})$ is
connected and $\{a,b\}\cap\mathsf{alph}(\mathbb{P})\ne
\emptyset$ for all $(a,b) \in D$ with $a\ne b$.}
\end{equation}
Let $X$ be a nonterminal of $\mathbb{T}$ and let $O$ be an occurrence of
$\mathbb{P}$ at the cut of $X$. Since the pattern is connected there must be
some $a,b\in \Sigma$ with $(a,b)\in D$ such that $\pi_{\{a,b\}}(O)$ is
at the cut of $X^{\{a,b\}}$. We will therefore compute occurrences of
$\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut of $X^{\{a,b\}}$. It is well
known that the occurrences of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut of
$X^{\{a,b\}}$ form an arithmetic progression
$((i_a,i_b),(d_a,d_b),\ell)$ and that $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ is of
the form $u^nv$ for some $n\geq \ell$ and strings $u,v\in\{a,b\}^*$
with $v\preceq u$, $|u|_a=d_a$ and $|u|_b=d_b$. Moreover, the
arithmetic progression $((i_a,i_b),(d_a,d_b),\ell)$ can be computed in
time $\mathcal{O}(|\mathbb{T}|^2|\mathbb{P}|)$ (see~\cite{Lif07}\footnote{In fact,
in~\cite{Lif07} it was shown that the arithmetic progression
$(i_a+i_b,d_a+d_b,\ell)$ can be computed in polynomial time. Observe
that from this the arithmetic progression
$((i_a,i_b),(d_a,d_b),\ell)$ can easily be computed.}). Now suppose
we have computed the occurrences of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the
cut of $X^{\{a,b\}}$ in form of an arithmetic progression. The
problem now is how to find (for the possibly exponentially many
occurrences in the arithmetic progression) matching occurrences of
projections onto all other pairs in $D$.
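For a single dependent pair the arithmetic-progression phenomenon can be observed directly on strings. The naive sketch below (the efficient compressed version being the cited result from \cite{Lif07}) lists the occurrences of a pattern that straddle a fixed cut and translates start positions into the Parikh vectors used above.

```python
def occurrences_at_cut(y, z, p):
    """Start positions i of p in t = y z that straddle the cut |y|
    (i < |y| < i + |p|); for a fixed cut these form an arithmetic
    progression whose difference is a period of p."""
    t, n = y + z, len(y)
    return [i for i in range(len(t) - len(p) + 1)
            if i < n < i + len(p) and t[i:i + len(p)] == p]

def parikh_pairs(t, positions, letters="ab"):
    """Translate start positions into the Parikh vectors of their prefixes."""
    return [tuple(t[:i].count(c) for c in letters) for i in positions]
```

For instance, the pattern $(ab)^4$ crosses the cut of $(ab)^4\cdot(ab)^3$ at positions $2,4,6$, an arithmetic progression with difference $2 = |ab|$.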
The following lemma states
that either there is a pair $(a,b)\in D$ such that the projection onto
$\{a,b\}$ is the first or the last element of an arithmetic
progression, or all projections lie at the cut of the same
nonterminal.
\begin{lemma}\label{lem:sameNT}
Let $X$ be a nonterminal of $\mathbb{T}$ and let $O$ be an occurrence of
$\mathbb{P}$ at the cut of $X$. Then either
\begin{enumerate}[(i)]
\item $\pi_{\{a,b\}}(O)$ is at the cut of $X^{\{a,b\}}$ for all
$(a,b)\in D$ with $a\ne b$, or
\item there are $a,b\in \mathsf{alph}(\mathbb{P})$ with $(a,b)\in D$ such that
$\pi_{\{a,b\}}(O)$ is the first or last element of the arithmetic
progression of occurrences of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut
of $X^{\{a,b\}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $X\to YZ$ be a production of $\mathbb{T}$. Clearly, by our general
assumption \eqref{assumption} it suffices to show that either (ii)
holds, or $O_a<|\mathsf{val}(Y)|_a<O_a+|\mathsf{val}(\mathbb{P})|_a$ for all $a\in
\mathsf{alph}(\mathbb{P})$. We show this assertion by induction on $|\mathsf{alph}(\mathbb{P})|$.
If $\mathsf{alph}(\mathbb{P})$ is a singleton, then it is trivially true.
Next, we consider the case $|\mathsf{alph}(\mathbb{P})|=2$. So let $\{a,b\}=
\mathsf{alph}(\mathbb{P})$ and hence $(a,b)\in D$ by \eqref{assumption}.
Assume that (ii) does not hold.
Consider the arithmetic progression $((i_a,i_b),(d_a,d_b),\ell)$ of
occurrences of $\mathsf{val}(\mathbb{P})$ at the cut of $X^{\{a,b\}}$. Then
$\mathsf{val}(\mathbb{P})$ is of the form $u^nv$ for some $n\geq \ell$ and strings
$u,v\in\{a,b\}^*$ with $v\preceq u$, $|u|_a=d_a$ and $|u|_b=d_b$.
We conclude that $d_a,d_b>0$ as otherwise $|\mathsf{alph}(\mathbb{P})|\leq 1$.
Suppose for contradiction that $i_a+\ell d_a >|\mathsf{val}(Y)|_a$. Since no
prefix $w$ of $\pi_{\{a,b\}}(\mathsf{val}(X))$ can satisfy $|w|_a<|\mathsf{val}(Y)|_a$
and $|w|_b>|\mathsf{val}(Y)|_b$ we conclude $i_b+\ell
d_b\geq|\mathsf{val}(Y)|_b$. But then the occurrence $(i_a+\ell d_a,i_b+\ell
d_b)$ is not at the cut of $X^{\{a,b\}}$, which is a
contradiction. Hence $i_a+\ell d_a\leq|\mathsf{val}(Y)|_a$ and by symmetry
$i_b+\ell d_b\leq|\mathsf{val}(Y)|_b$. Similarly, since $(i_a,i_b)$ is an
occurrence of $\mathsf{val}(\mathbb{P})$ at the cut of $X^{\{a,b\}}$, we get
$|\mathsf{val}(Y)|_a\leq i_a+|\mathsf{val}(\mathbb{P})|_a$ and $|\mathsf{val}(Y)|_b\leq
i_b+|\mathsf{val}(\mathbb{P})|_b$. As $\pi_{\{a,b\}}(O)$ is neither the first nor
the last element of the arithmetic progression we have
$O_a=i_a+kd_a$ and $O_b=i_b+kd_b$ for some $0<k<\ell$ and hence
$O_a<|\mathsf{val}(Y)|_a<O_a+|\mathsf{val}(\mathbb{P})|_a$ and
$O_b<|\mathsf{val}(Y)|_b<O_b+|\mathsf{val}(\mathbb{P})|_b$ as required.
Now, suppose that $|\mathsf{alph}(\mathbb{P})|\geq 3$. Since $O$ is an occurrence
at the cut of $X$, there are $a,b\in \mathsf{alph}(\mathbb{P})$ such that
$O_a<|\mathsf{val}(Y)|_a$ and $O_b+|\mathsf{val}(\mathbb{P})|_b>|\mathsf{val}(Y)|_b$. We may
assume that $(a,b)\in D$. Indeed, if $O_a+|\mathsf{val}(\mathbb{P})|_a>
|\mathsf{val}(Y)|_a$, we can choose $b=a$. Otherwise, since $\mathsf{alph}(\mathbb{P})$ is connected
there is a dependence path between $a$ and $b$. Since
$O_a+|\mathsf{val}(\mathbb{P})|_a\leq |\mathsf{val}(Y)|_a$, there must be an edge
$(a',b')\in D$ on this path such that
$a', b' \in \mathsf{alph}(\mathbb{P})$, $O_{a'}+|\mathsf{val}(\mathbb{P})|_{a'}\leq
|\mathsf{val}(Y)|_{a'}$ (and hence $O_{a'} < |\mathsf{val}(Y)|_{a'}$), and
$O_{b'}+|\mathsf{val}(\mathbb{P})|_{b'}>|\mathsf{val}(Y)|_{b'}$.
Next consider a spanning tree of
$(\mathsf{alph}(\mathbb{P}),D\cap\mathsf{alph}(\mathbb{P})\times\mathsf{alph}(\mathbb{P}))$ which contains the edge
$(a,b)$ (in case $a\ne b$). Let $c\notin\{a,b\}$ be a leaf of this
spanning tree (it exists since $|\mathsf{alph}(\mathbb{P})| \geq 3$).
Obviously, $\Delta=\mathsf{alph}(\mathbb{P})\setminus\{c\}$ is
connected and $\pi_\Delta(O)$ is at the cut of $X^\Delta$. Thus we
can apply the induction hypothesis. Assume again that (ii) does not
hold. Applying the induction hypothesis to $\pi_\Delta(\mathsf{val}(\mathbb{P}))$
and $X^\Delta$ we get $O_a<|\mathsf{val}(Y)|_a<O_a+|\mathsf{val}(\mathbb{P})|_a$ for all
$a\in \Delta$. In particular, $O_d<|\mathsf{val}(Y)|_d<O_d+|\mathsf{val}(\mathbb{P})|_d$
for some $d\in \Delta$ with $(c,d)\in D$. Hence, $\pi_{\{d,c\}}(O)$
is at the cut of $X^{\{d,c\}}$. Thus, applying the induction
hypothesis also to $\pi_{\{d,c\}}(\mathsf{val}(\mathbb{P}))$ and $X^{\{d,c\}}$ we
get $O_c<|\mathsf{val}(Y)|_c<O_c+|\mathsf{val}(\mathbb{P})|_c$.\qed
\end{proof}
The last lemma motivates partitioning the set of occurrences into
two sets. Let $O$ be an occurrence of $\mathbb{P}$ in $\mathbb{T}$ at the cut of
$X$. We call $O$ \emph{single} (for $X$) if there are
$a,b\in\mathsf{alph}(\mathbb{P})$ with $(a,b)\in D$ such that the projection
$\pi_{\{a,b\}}(O)$ is the first or the last element of the arithmetic
progression of occurrences of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut of
$X^{\{a,b\}}$. Otherwise, we call $O$ \emph{periodic} (for $X$). By
Lemma~\ref{lem:sameNT}, if $O$ is periodic, then $\pi_{\{a,b\}}(O)$ is
an element of the arithmetic progression of occurrences of
$\mathsf{val}(\mathbb{P}^{\{a,b\}})$ at the cut of $X^{\{a,b\}}$ for all $(a,b)\in D$
(but neither the first nor the last element, if $a,b\in\mathsf{alph}(\mathbb{P})$).
The next proposition shows that we can decide in polynomial time
whether there are single occurrences of $\mathbb{P}$ in $\mathbb{T}$.
\begin{proposition}\label{prop:computesingle}
Given $a,b\in\mathsf{alph}(\mathbb{P})$ with $(a,b)\in D$, a nonterminal $X$ of
$\mathbb{T}$ and an occurrence $(O_a,O_b)$ of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at
the cut of $X^{\{a,b\}}$, we can decide in time
$(|\mathbb{T}|+|\mathbb{P}|)^{O(1)}$ whether this occurrence is a projection of an
occurrence of $\mathbb{P}$ at the cut of $X$.
\end{proposition}
\begin{proof}
Let $a_1,\ldots,a_n$ be an enumeration of $\Sigma$ such that
$a=a_1$, $b=a_2$ and $D(a_i)\cap\{a_1,\ldots,a_{i-1}\}\ne \emptyset$
for all $2\leq i\leq n$. Moreover, we require that the elements of
$\mathsf{alph}(\mathbb{P})$ appear at the beginning of our enumeration, i.e., are
the elements $a_1,\ldots,a_j$ for some $j\leq n$. This can be
assumed since $\Sigma$ and $\mathsf{alph}(\mathbb{P})$ are connected. We iterate
over $3\leq i\leq n$ and compute, if possible, an integer $O_{a_i}$
such that $(O_{a_1},\ldots,O_{a_i})$ is an occurrence of
$\pi_{\{a_1,\ldots,a_i\}}(\mathsf{val}(\mathbb{P}))$ in
$\pi_{\{a_1,\ldots,a_i\}}(\mathsf{val}(X))$.
So let $i\geq 3$, $d=a_i$, and $\Delta=\{a_1,\ldots,a_{i-1}\}$.
By our general assumption \eqref{assumption} we can choose some
$c\in \Delta \cap\mathsf{alph}(\mathbb{P})$ such that $(c,d)\in D$. Let us further
assume that we have already constructed an occurrence
$(O_{a_1},\ldots,O_{a_{i-1}})$ of $\pi_{\Delta}(\mathsf{val}(\mathbb{P}))$ in
$\pi_{\Delta}(\mathsf{val}(X))$. First, we compute the number $k \geq 0$ such that
$d^kc$ is a prefix of $\pi_{\{c,d\}}(\mathsf{val}(\mathbb{P}))$. Then, we compute
$O_d$ such that there is a prefix $wd^kc$ of $\pi_{\{c,d\}}(\mathsf{val}(X))$
for some $w\in \{c,d\}^*$ with $|w|_c=O_c$, $|w|_d=O_d$. If such a
prefix does not exist, then there is no occurrence
$(O_{a_1},\ldots,O_{a_{i-1}},O_{d})$ of
$\pi_{\Delta \cup \{d\}}(\mathsf{val}(\mathbb{P}))$ in
$\pi_{\Delta\cup\{d\}}(\mathsf{val}(X))$. On the other hand, observe that
if there is such an occurrence
$(O_{a_1},\ldots,O_{a_{i-1}},O_{d})$, then $O_{d}=|w|_d$.
Finally, using~\cite{Lif07} we check in polynomial time for all $e\in
D(d)\cap\Delta$ whether $(O_e,O_d)$ is an occurrence of
$\pi_{\{d,e\}}(\mathsf{val}(\mathbb{P}))$ in $\pi_{\{d,e\}}(\mathsf{val}(X))$. By
Lemma~\ref{lem:liuwrazeg}, the latter holds if and only if
$(O_{a_1},\ldots,O_{a_{i-1}},O_d)$ is an occurrence of $\pi_{\Delta\cup\{d\}}(\mathsf{val}(\mathbb{P}))$
in $\pi_{\Delta\cup\{d\}}(\mathsf{val}(X))$. \qed
\end{proof}
It remains to show that for every nonterminal $X$ of $\mathbb{T}$ we can
compute the periodic occurrences. To this end we define the
amalgamation of arithmetic progressions. Let $\Gamma,\Gamma'\subseteq
\Sigma$ such that $\Gamma\cap\Gamma'\ne\emptyset$. Consider two
arithmetic progressions
$$
p=((i_a)_{a\in \Gamma},(d_a)_{a\in \Gamma},\ell), \qquad
p'=((i'_a)_{a\in \Gamma'},(d'_a)_{a\in \Gamma'},\ell').
$$
The \emph{amalgamation} of $p$ and $p'$ is
$$
p\otimes p'=\{v=(v_a)_{a\in \Gamma\cup\Gamma'}\mid \pi_{\Gamma}(v)\in
p\text{ and } \pi_{\Gamma'}(v)\in p'\} .
$$
\begin{example}
We continue Example \ref{exapat1} and show how to compute
occurrences at the cut. First we consider the projections of $\mathbb{P}$
and $X$:
\begin{align*}
\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P})) &=(ab)^5 & \mathsf{val}(X^{\{a,b\}})&=(ab)^6|(ab)^4 \\
\pi_{\{b,c\}}(\mathsf{val}(\mathbb{P})) &=(cbc)^5 & \mathsf{val}(X^{\{b,c\}})&=(cbc)^5cb|c(cbc)^4\\
\pi_{\{c,d\}}(\mathsf{val}(\mathbb{P})) &= c^{10} & \mathsf{val}(X^{\{c,d\}})&=c^2dc^9|c^8dc
\end{align*}
For the projections we find the arithmetic progressions
$p_{ab},p_{bc},p_{cd}$ of
occurrences at the cut:
\begin{eqnarray*}
\text{occurrences of } \pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))\textrm{ at the cut of }X^{\{a,b\}}:&p_{ab}=\big((2,2),(1,1),3\big)\\
\text{occurrences of }\pi_{\{b,c\}}(\mathsf{val}(\mathbb{P}))\textrm{ at the cut of }X^{\{b,c\}}:&p_{bc}=\big((1,2),(1,2),4\big)\\
\text{occurrences of }\pi_{\{c,d\}}(\mathsf{val}(\mathbb{P})) \textrm{ at the cut of }X^{\{c,d\}}:&p_{cd}=\big((2,1),(1,0),7\big).
\end{eqnarray*}
Note that in $p_{ab}$ the first component corresponds to $a$ and the
second to $b$ whereas in $p_{bc}$ the first component corresponds to
$b$ and the second to $c$. We amalgamate the arithmetic progressions
and obtain $p_{abc}=p_{ab}\otimes
p_{bc}=\big((2,2,4),(1,1,2),3\big)$. If we again amalgamate we
obtain $p_{abcd}=p_{abc}\otimes p_{cd}=
\big((2,2,4,1),(1,1,2,0),2\big)$.
This way we found occurrences $(2,2,4,1)$, $(3,3,6,1)$ and
$(4,4,8,1)$ of $\mathbb{P}$ at the cut of $X$. Observe that there is a fourth
occurrence $(1,1,2,1)$ that we did not find this way; this occurrence is single.
\end{example}
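The amalgamation computed in the example above can be verified by brute force: enumerate the elements of each arithmetic progression and keep exactly those combinations that agree on the shared letters, as in the definition of $p\otimes p'$. The following Python sketch is illustrative only (the function names are ours); the polynomial-time algorithm of Lemma~\ref{lem:intersectap} avoids this explicit enumeration.

```python
def elements(init, diff, length):
    # an arithmetic progression ((i_a), (d_a), ell) has the ell + 1
    # elements (i_a + k * d_a) for k = 0, ..., ell
    return [{a: init[a] + k * diff[a] for a in init} for k in range(length + 1)]

def join(us, vs):
    # brute-force amalgamation of two element lists: keep every
    # combination of tuples that agrees on the shared letters
    out = []
    for u in us:
        for v in vs:
            if all(u[a] == v[a] for a in set(u) & set(v)):
                out.append({**u, **v})
    return out

p_ab = elements({'a': 2, 'b': 2}, {'a': 1, 'b': 1}, 3)
p_bc = elements({'b': 1, 'c': 2}, {'b': 1, 'c': 2}, 4)
p_cd = elements({'c': 2, 'd': 1}, {'c': 1, 'd': 0}, 7)

p_abc = join(p_ab, p_bc)    # ((2,2,4), (1,1,2), 3): four elements
p_abcd = join(p_abc, p_cd)  # ((2,2,4,1), (1,1,2,0), 2): three elements
```

Running the sketch recovers exactly the periodic occurrences $(2,2,4,1)$, $(3,3,6,1)$ and $(4,4,8,1)$ of the example; the single occurrence $(1,1,2,1)$ is, as expected, not produced.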
\begin{lemma}\label{lem:intersectap}
Let $\Gamma,\Gamma'\subseteq \Sigma$ with
$\Gamma\cap\Gamma'\ne\emptyset$, and let $p=((i_a)_{a\in
\Gamma},(d_a)_{a\in \Gamma},\ell)$ and $p'=((i'_a)_{a\in
\Gamma'},(d'_a)_{a\in \Gamma'},\ell')$ be two arithmetic
progressions. Then $p\otimes p'$ is an arithmetic progression which
can be computed in time $(|p|+|p'|)^{O(1)}$.
\end{lemma}
\begin{proof}
We need to solve the system of linear equations
\begin{equation}\label{eq:amalgam-systm}
\left[~i_b+d_b\cdot x=i'_b+d'_b\cdot y~\right]_{b\in \Gamma\cap\Gamma'}
\end{equation}
for integers $x$ and $y$ under the constraint
\begin{equation}\label{eq:amalgam-constr}
0\leq x\leq \ell \text{ and } 0\leq y \leq \ell'.
\end{equation}
Let us fix an $a\in \Gamma\cap\Gamma'$. First we solve the single
equation
\begin{equation}\label{eq:amalgam-sing}
i_a+d_a\cdot x=i'_a+d'_a\cdot y.
\end{equation}
for non-negative integers $x$ and $y$. The solutions are given by
the least solution plus a multiple of the least common multiple of
$d_a$ and $d'_a$. We start by computing $g=\gcd(d_{a},d'_{a})$. If
$i_{a}\not\equiv i'_{a} \pmod g$, then there is no solution for
equation~\eqref{eq:amalgam-sing} and hence $p\otimes
p'=\emptyset$. In this case we stop. Otherwise, we compute the least
solution $s_{a}\geq \max(i_{a},i'_{a})$ of the simultaneous
congruences
\begin{align*}
z&\equiv i_{a}\pmod{d_{a}},\\z&\equiv i'_{a} \pmod{d'_{a}}.
\end{align*}
This can be accomplished with $O((\log d_a+\log d'_a)^2)$ bit
operations; see e.g.~\cite{BacSha96}. Let $k=(s_{a}-i_{a})/d_{a}
\geq 0$
and $k'=(s_a-i'_a)/d'_a \geq 0$. Now, the non-negative solutions of
equation~\eqref{eq:amalgam-sing} are given by
\begin{equation}\label{eq:amalgam-sol}
(x,y)=(k+\frac{d'_a}{g}\cdot t,k'+\frac{d_a}{g}\cdot t) \text{ for all } t\geq
0.
\end{equation}
If $|\Gamma\cap\Gamma'|=1$ we adapt the range for $t$ such that the
constraint~\eqref{eq:amalgam-constr} is satisfied and we are done.
Otherwise, \eqref{eq:amalgam-systm} is a system of at least $2$
linear equations in $2$ variables. Hence \eqref{eq:amalgam-systm} has
at least $2$ (and then infinitely many) solutions iff any two equations
are linearly dependent over $\mathbb{Q}$, i.e. for all $b\in \Gamma\cap\Gamma'$ the
following holds:
\begin{equation}\label{eq:amalgam-condit}
\exists k_b \in \mathbb{Q} :
d_a=k_b\cdot d_b,~~d'_a=k_b\cdot d'_b \text{ and }i'_a-i_a=k_b\cdot (i'_b-i_b)
\end{equation}
In this case all solutions of equation~\eqref{eq:amalgam-sing} are
solutions of equation~\eqref{eq:amalgam-systm}. Thus we can test
condition~\eqref{eq:amalgam-condit} for all $b\in \Gamma\cap\Gamma'$
and in case it holds it only remains to adapt the range for $t$ such
that the constraint~\eqref{eq:amalgam-constr} is satisfied.
Otherwise there is at most one solution and we can fix $b\in
\Gamma\cap\Gamma'$ such that \eqref{eq:amalgam-condit} does not
hold. We plug the solution~\eqref{eq:amalgam-sol} into $i_b+d_b\cdot
x=i'_b+d'_b\cdot y$ and obtain
\begin{equation*}
i_b+(k+ \frac{d'_a}{g}\cdot t)\cdot d_b =
i'_b+(k'+\frac{d_a}{g}\cdot t)\cdot d'_b.
\end{equation*}
We can solve this for $t$ (if possible) and test whether this gives
rise to a solution for \eqref{eq:amalgam-systm} under the
constraint~\eqref{eq:amalgam-constr}. \qed
\end{proof}
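The arithmetic core of this proof is computing the least solution $s_a\geq\max(i_a,i'_a)$ of the two simultaneous congruences via the extended Euclidean algorithm. The following Python sketch (illustrative; the function names are ours) carries this out, returning `None` when the congruences are incompatible, i.e. when $p\otimes p'=\emptyset$.

```python
def ext_gcd(a, b):
    # returns (g, s, t) with g = gcd(a, b) and a*s + b*t == g
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def least_common_solution(i, d, ip, dp):
    """Least z >= max(i, ip) with z = i (mod d) and z = ip (mod dp),
    or None if the two congruences are incompatible."""
    g, s, _ = ext_gcd(d, dp)
    if (ip - i) % g:
        return None          # no solution, so the amalgamation is empty
    lcm = d // g * dp
    # standard CRT: z = i + d*t where d*t = ip - i  (mod dp)
    z = (i + (ip - i) // g * s % (dp // g) * d) % lcm
    lo = max(i, ip)
    if z < lo:               # shift into the admissible range
        z += ((lo - z + lcm - 1) // lcm) * lcm
    return z
```

From the least solution one reads off $k$ and $k'$ and then the parametrized family of solutions, exactly as in the proof; incrementing $z$ by $\operatorname{lcm}(d_a,d'_a)$ steps through the progression.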
\begin{proposition}\label{prop:computeperiodic}
Let $X$ be a nonterminal of $\mathbb{T}$. The periodic occurrences of
$\mathbb{P}$ at the cut of $X$ form an arithmetic progression which
can be computed in time ${(|\mathbb{T}|+|\mathbb{P}|)^{O(1)}}$.
\end{proposition}
\begin{proof}
As in the proof of Proposition~\ref{prop:computesingle} let
$a_1,\ldots,a_n$ be an enumeration of $\Sigma$ such that
$\{a_1,\ldots,a_{i-1}\}\cap D(a_i)\ne \emptyset$ for all $2\leq
i\leq n$ and the elements of $\mathsf{alph}(\mathbb{P})$ appear at the beginning of
the enumeration. We iterate over $1\leq i\leq n$ and compute the
arithmetic progressions of the periodic occurrences of
$\pi_{\{a_1,\ldots,a_i\}}(\mathsf{val}(\mathbb{P}))$ at the cut of
$X^{\{a_1,\ldots,a_i\}}$. For $i=1$ this is easy.
So let $i\geq 2$, let $a=a_i$ and let
$\Delta=\{a_1,\ldots,a_{i-1}\}$. Assume that the periodic
occurrences of $\pi_{\Delta}(\mathsf{val}(\mathbb{P}))$ at the cut of $X^{\Delta}$
are given by the arithmetic progression
$p=((i_c)_{c\in\Delta},(d_c)_{c\in\Delta},\ell)$. For all $b\in
D(a)\cap \Delta$ let
$$p^{\{a,b\}}=((i^{\{a,b\}}_a,i^{\{a,b\}}_{b}),(d^{\{a,b\}}_a,d^{\{a,b\}}_{b}),n^{\{a,b\}})$$
be the occurrences of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut of
$X^{\{a,b\}}$ (without the first and the last occurrence if
$a,b\in\mathsf{alph}(\mathbb{P})$). Recall that we assume that
$\{c,d\}\cap\mathsf{alph}(\mathbb{P})\ne \emptyset$ for all $c,d\in \Sigma$ with
$(c,d)\in D$ and $c \neq d$. Hence, by Lemma~\ref{lem:liuwrazeg},
$O$ is a periodic occurrence
of $\pi_{\{a_1,\ldots,a_i\}}(\mathsf{val}(\mathbb{P}))$ at
the cut of $X^{\{a_1,\ldots,a_i\}}$ if and only if $\pi_\Delta(O)\in
p$ and $(O_a,O_{b})\in p^{\{a,b\}}$ for all $b\in D(a)\cap \Delta$.
Hence the periodic occurrences of
$\pi_{\{a_1,\ldots,a_i\}}(\mathsf{val}(\mathbb{P}))$ at the cut of
$X^{\{a_1,\ldots,a_i\}}$ are given by
$$ \bigotimes_{b\in D(a)\cap \Delta}p^{\{a,b\}} \otimes p.$$
The result follows now from Lemma~\ref{lem:intersectap}.\qed
\end{proof}
Summarizing the last section we get the following theorem.
\begin{theorem}\label{thm:patmat}
Given an independence alphabet $(\Sigma,I)$, and two SLPs $\mathbb{P}$ and
$\mathbb{T}$ over $\Sigma$ such that $\mathsf{alph}(\mathbb{P})=\mathsf{alph}(\mathbb{T})$, we can decide
in polynomial time whether $[\mathsf{val}(\mathbb{P})]_I$ is a factor of
$[\mathsf{val}(\mathbb{T})]_I$.
\end{theorem}
\begin{proof}
Note that our assumption \eqref{assumption} is satisfied
if $\mathsf{alph}(\mathbb{P})=\mathsf{alph}(\mathbb{T})$.
Recall that we may assume that $\mathsf{alph}(\mathbb{T})$ is connected and that
$|\mathsf{val}(\mathbb{P})|\geq 2$.
Let $X$ be a nonterminal of $\mathbb{T}$. Using \cite{Lif07} we compute
for each pair $(a,b)\in D$ the arithmetic progression of occurrences
of $\pi_{\{a,b\}}(\mathsf{val}(\mathbb{P}))$ at the cut of $X^{\{a,b\}}$. By applying
Proposition~\ref{prop:computesingle} to the first and to the last
elements of each of these arithmetic progressions, we compute in
polynomial time the single
occurrences at the cut of $X$. The periodic occurrences can be
computed in polynomial time using Proposition~\ref{prop:computeperiodic}. The result
follows now since by definition $[\mathsf{val}(\mathbb{P})]_I$ is a factor of
$[\mathsf{val}(\mathbb{T})]_I$ iff there is a nonterminal $X$ of $\mathbb{T}$ such that
there is either a single occurrence of $\mathbb{P}$ at the cut of $X$ or a
periodic occurrence of $\mathbb{P}$ at the cut of $X$. \qed
\end{proof}
\begin{remark}
In the last section we actually proved the theorem above under
weaker assumptions: We only need for each connected component
$\Sigma_i$ of $\mathsf{alph}(\mathbb{T})$ that $\Sigma_i\cap \mathsf{alph}(\mathbb{P})$ is connected
and that $\{a,b\}\cap\mathsf{alph}(\mathbb{P})\ne \emptyset$ for all $(a,b)\in
D\cap (\Sigma_i\times\Sigma_i)$ with $a\ne b$.
\end{remark}
\section{Compressed conjugacy}
\label{sec:CC}
In this section we will prove Theorem~\ref{decidesccp}. For this, we
will follow the approach from \cite{LiuWraZeg90,Wra89} for
non-compressed traces. The following result allows us to transfer the
conjugacy problem to a problem on (compressed) traces:
\begin{theorem}[\cite{LiuWraZeg90,Wra89}] \label{conjugacymonoidgroup}
Let $u,v\in \mathbb{M}(\Sigma^{\pm 1},I)$. Then the following are
equivalent:
\begin{enumerate}[(1)]
\item $u$ is conjugate to $v$ in $\mathbb{G}(\Sigma,I)$.
\item There exists $x \in \mathbb{M}(\Sigma^{\pm 1},I)$ such that $x\,
\mathcal{C}R(u) = \mathcal{C}R(v)\, x$ in $\mathbb{M}(\Sigma^{\pm 1},I)$ (one says that
$\mathcal{C}R(u)$ and $\mathcal{C}R(v)$ are conjugate in $\mathbb{M}(\Sigma^{\pm 1},I)$).
\item $|\mathcal{C}R(u)|_a = |\mathcal{C}R(v)|_a$ for all $a \in \Sigma^{\pm 1}$ and
there exists $k\leq |\Sigma^{\pm 1}|$ such that $\mathcal{C}R(u)$ is a
factor of $\mathcal{C}R(v)^k$.
\end{enumerate}
\end{theorem}
The equivalence of (1) and (2) can be found in \cite{Wra89}, the
equivalence of (2) and (3) is shown in \cite{LiuWraZeg90}. We can now
infer Theorem~\ref{decidesccp}:
\noindent
{\em Proof of Theorem~\ref{decidesccp}.}
Let $\mathbb{A}$ and $\mathbb{B}$ be two given SLPs over $\Sigma^{\pm 1}$. We want
to check whether $\mathsf{val}(\mathbb{A})$ and $\mathsf{val}(\mathbb{B})$ represent conjugate
elements of the graph group $\mathbb{G}(\Sigma,I)$. Using
Corollary~\ref{computecore}, we can compute in polynomial time SLPs
$\mathbb{C}$ and $\mathbb{D}$ with $[\mathsf{val}(\mathbb{C})]_I=\mathcal{C}R([\mathsf{val}(\mathbb{A})]_I)$ and
$[\mathsf{val}(\mathbb{D})]_I=\mathcal{C}R([\mathsf{val}(\mathbb{B})]_I)$. By
Theorem~\ref{conjugacymonoidgroup}, it suffices to check the following
two conditions:
\begin{itemize}
\item $|\mathcal{C}R([\mathsf{val}(\mathbb{C})]_I)|_a = |\mathcal{C}R([\mathsf{val}(\mathbb{D})]_I)|_a$ for all $a \in
\Sigma^{\pm 1}$
\item There exists $k\leq |\Sigma^{\pm 1}|$ such that
$\mathcal{C}R([\mathsf{val}(\mathbb{C})]_I)$ is a factor of $\mathcal{C}R([\mathsf{val}(\mathbb{D})]_I)^k$.
\end{itemize}
The first condition can be easily checked in polynomial time, since
the number of occurrences of a symbol in a compressed string can be
computed in polynomial time. Moreover, the second condition can be
checked in polynomial time by Theorem~\ref{thm:patmat}, since (by the
first condition) we can assume that $\mathsf{alph}(\mathsf{val}(\mathbb{C})) =
\mathsf{alph}(\mathsf{val}(\mathbb{D}))$. \qed
\section{Open problems}
Though we have shown that some cases of the simultaneous compressed
conjugacy problem for graph groups (see
Section~\ref{sec:main-results}) can be decided in polynomial time, it
remains unclear whether this also holds in the general case. It is
also unclear to the authors whether the general compressed pattern
matching problem for traces, where we drop restriction
\eqref{assumption}, can be decided in
polynomial time. Finally, it is not clear whether
Theorems~\ref{decidesccp}--\ref{thm:outer} also hold if the
independence alphabet is part of the input.
\end{document}
\begin{document}
\begin{abstract}
We prove that every $C^1$ three-dimensional flow with positive topological entropy can be
$C^1$ approximated by flows with homoclinic orbits.
This extends a previous result for $C^1$ surface diffeomorphisms \cite{g}.
\end{abstract}
\maketitle
\section{Introduction}
\noindent
In his classical paper \cite{k} Katok proved
that every $C^{1+\alpha}$ surface diffeomorphism with positive topological entropy
has a homoclinic orbit.
In \cite{g} Gan asked if this result is true for $C^1$ surface diffeomorphisms too.
He did not answer this question but managed to prove that
every $C^1$ surface diffeomorphism with positive entropy can be
$C^1$ approximated by diffeomorphisms with homoclinic orbits.
More recently, the authors \cite{gy} proved that
every three-dimensional flow can be $C^1$ approximated by Morse-Smale flows or by flows with
a homoclinic orbit (this entails the weak Palis conjecture for three-dimensional flows).
From this they deduced that
there is an open and dense subset of three-dimensional flows
where the property of having zero topological entropy is invariant under topological equivalence.
Moreover, the $C^1$ approximation by three-dimensional flows with robustly zero topological entropy is equivalent
to the $C^1$ approximation by Morse-Smale ones.
In this paper we will extend \cite{g} from surface diffeomorphisms to three-dimensional flows.
In other words, we will prove that every $C^1$ three-dimensional flow with positive topological entropy
can be $C^1$ approximated by flows with homoclinic orbits.
Let us state our result in a precise way.
The term {\em flow} will refer to $C^1$ vector fields $X$ defined on a compact
connected boundaryless Riemannian manifold $M$. To emphasize differentiability we say that $X$ is a $C^r$ flow, $r\in\mathbb{N}^+$.
When $\dim(M)=3$ we say that $X$ is a {\em three-dimensional flow}.
The flow of $X$ will be denoted by $\phi_t$ (or $\phi^X_t$ to emphasize $X$), $t\in\mathbb{R}$.
We denote by $\Phi_t=D\phi_t$ the derivative of $\phi_t$.
The space of $C^r$ flows $\mathcal{X}^r$ is endowed with the standard $C^r$ topology.
We say that $x\in M$ is a {\em periodic point} of a flow $X$ if there is a minimal positive number $\pi(x)$ (called the {\em period})
such that $\phi_{\pi(x)}(x)=x$.
Notice that $1$ is always an eigenvalue of the
derivative $D\phi_{\pi(x)}(x)$ with eigenvector $X(x)$.
The remaining eigenvalues will be referred to as the eigenvalues of $x$.
We say that the orbit $O(x)=\{\phi_t(x):t\in\mathbb{R}\}$ of a periodic point $x$ (or the periodic point $x$ itself)
is {\em hyperbolic} if it has no eigenvalue of modulus $1$.
In case there are eigenvalues of modulus both less than and bigger than $1$
we say that the hyperbolic periodic point is a {\em saddle}.
The Invariant Manifold Theory \cite{hps} asserts that through any periodic saddle $x$ there passes a pair of invariant manifolds,
the so-called strong stable and unstable manifolds
$W^{ss}(x)$ and $W^{uu}(x)$, tangent at $x$ to the eigenspaces corresponding to the eigenvalues
of modulus less than and bigger than $1$ respectively.
Saturating them with the flow we obtain the stable and unstable manifolds $W^s(x)$ and $W^u(x)$ respectively.
We say that $O$ is a {\em homoclinic orbit} (associated to a periodic saddle $x$)
if $O\subset W^s(x)\cap W^u(x)\setminus O(x)$. If, additionally, $\dim(T_qW^s(x)\cap T_qW^u(x))\neq1$ for some $q\in O$,
then we say that $O$ is a {\em homoclinic tangency}.
We say that $E\subset M$ is {\em $(T,\epsilon)$-separated} for some $T,\epsilon>0$
if for any distinct points $x,y\in E$ there exists $0\leq t\leq T$ such that
$d(\phi_t(x),\phi_t(y))>\epsilon$.
The number
$$
h(X)=\lim_{\epsilon\to0}\limsup_{T\to\infty}\frac{1}{T}\log \sup\{\operatorname{Car}(E):E \mbox{ is } (T,\epsilon)\mbox{-separated}\}
$$
is the so-called {\em topological entropy} of $X$.
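To make the definition concrete, here is a numerical illustration of ours (not part of the argument): for the circle doubling map $x\mapsto 2x \bmod 1$, a discrete-time system whose topological entropy is $\log 2$, one can greedily build a $(T,\epsilon)$-separated set from a fine grid and read off the growth rate $\frac{1}{T}\log\operatorname{Car}(E)$.

```python
from math import log

def doubling(x):
    # toy discrete-time system: the circle doubling map x -> 2x (mod 1)
    return (2.0 * x) % 1.0

def circle_dist(x, y):
    d = abs(x - y)
    return min(d, 1.0 - d)

def separated(x, y, T, eps):
    # (T, eps)-separated: some iterate t < T pushes the two orbits eps apart
    for _ in range(T):
        if circle_dist(x, y) > eps:
            return True
        x, y = doubling(x), doubling(y)
    return False

def entropy_estimate(T, eps, grid=4096):
    # greedily collect a (T, eps)-separated set from a fine grid of points
    kept = []
    for i in range(grid):
        x = i / grid
        if all(separated(x, y, T, eps) for y in kept):
            kept.append(x)
    return log(len(kept)) / T

est = entropy_estimate(T=6, eps=0.1)  # lies above log 2, approaches it as T grows
```

For fixed $\epsilon$ and moderate $T$ the estimate overshoots $\log 2\approx 0.693$ by roughly $\log(1/\epsilon)/T$; the overshoot disappears only in the limits of the definition.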
With these definitions we can state our result.
\begin{thm}
\label{thAA}
Every $C^1$ three-dimensional flow with positive topological entropy can be $C^1$ approximated by
flows with homoclinic orbits.
\end{thm}
The proof follows Gan's arguments \cite{g} using
the variational principle (e.g. \cite{bru}) and Ruelle's inequality.
But we simplify these arguments by using recent tools, such as flow-versions of results by Crovisier \cite{c} and Gan-Yang \cite{gy}.
Denote by $\operatorname{Cl}(\cdot)$ and $\operatorname{int}(\cdot)$ the closure and interior operations.
As in \cite{g} we get from Theorem \ref{thAA} the following corollary.
\begin{cor}
\label{c1}
If $\mathcal{H}_+=\{X\in \mathcal{X}^1:h(X)>0\}$, then $\operatorname{Cl}(\operatorname{int}(\mathcal{H}_+))=\mathcal{H}_+$
and so $\mathcal{H}_+$ has no isolated points.
\end{cor}
\section{Proof of Theorem \ref{thAA}}
\label{sec1}
\noindent
Denote by $\operatorname{Sing}(X)$ the set of singularities of a flow $X$.
Given $\Lambda\subset M$, we denote by $\Lambda^*=\Lambda\setminus\operatorname{Sing}(X)$ the set of regular points
of a flow $X$ in $\Lambda$.
Define by $E^X$ the map assigning to $p\in M$ the subspace of $T_pM$ generated by $X(p)$.
It turns out to be a one-dimensional subbundle of $TM$ when restricted to $M^*$.
Define also the normal subbundle $N$ over $M^*$ whose fiber
$N_p$ at $p\in M^*$ is the orthogonal complement of $E^X_p$ in $T_pM$.
Denoting by $\pi=\pi_p:T_pM\to N_p$ the orthogonal projection we obtain the {\em linear Poincar\'e flow}
$\psi_t:N\to N$ defined by $\psi_t(p)=\pi_{\phi_t(p)}\circ \Phi_t(p)$.
When necessary we will use the notation
$N^X$ and $\psi^X_t$ to indicate the dependence on $X$.
For a (not necessarily compact) invariant set $\Omega\subset M^*$, one says that {\em $\Omega$
has a dominated splitting with respect to the Poincar\'e flow} if there are a continuous splitting
$N_\Omega=N^-\oplus N^+$ into $\psi_t$-invariant subbundles $N^-,N^+$ and positive numbers $K,\lambda$ such that
$$
\|\psi_t|_{N^-_x}\|\cdot\|\psi_{-t}|_{N^+_{\phi_t(x)}}\|\leq Ke^{-\lambda t},
\quad\quad\forall x\in \Omega, t\geq0.
$$
Let $\mu$ be a Borel probability measure on $M$.
We say that $\mu$ is {\em nonatomic} if no point has positive mass.
We say that $\mu$ is supported on $H\subset M$ if $\operatorname{supp}(\mu)\subset H$, where $\operatorname{supp}(\mu)$
denotes the support of $\mu$.
We say that $\mu$ is
{\em invariant} if
$\mu(\phi_t(A))=\mu(A)$ for every Borel set $A$ and every $t\in\mathbb{R}$.
Moreover, $\mu$ is {\em ergodic} if it is invariant and
every measurable invariant set has measure $0$ or $1$.
Oseledets' Theorem \cite{s} ensures that for
every ergodic measure $\mu$ there are
an invariant set $R$ of full measure, a positive integer $k$, real numbers
$\chi_1<\chi_2<\cdots<\chi_{k}$ and a measurable invariant splitting $T_RM=
E^1\oplus \cdots \oplus E^k$ over $R$
such that
$$
\lim_{t\to\pm\infty}\frac{1}{t}\log\|\Phi_t(x) e^i\|=\chi_i,
\quad\quad\forall x\in R, \forall e^i\in E^i_x\setminus\{0\}, \forall 1\leq i\leq k.
$$
The numbers $\chi_1,\ldots, \chi_k$ are the so-called {\em Lyapunov exponents} of $\mu$.
Clearly, the Lyapunov exponent of $\mu$ corresponding to the flow
direction is zero. If the remaining exponents are nonzero, then we say that $\mu$ is a {\em hyperbolic measure}.
In such a case we can rewrite the Oseledets decomposition of $\mu$
as $T_RM=E^s\oplus E^X\oplus E^u$, where $E^s$ (resp. $E^u$) is the sum of those subbundles
$E^i$ whose corresponding Lyapunov exponent is negative (resp. positive).
We then say that the {\em Oseledets decomposition of $\mu$ is dominated with respect to the Poincar\'e flow}
if $\operatorname{supp}(\mu)^*\neq\emptyset$ (equivalently, $\mu(\operatorname{Sing}(X))=0$)
and the decomposition $N_R=N^s\oplus N^u$
given by $N^*=\pi(E^*)$ for $*=s,u$ is dominated with respect to the Poincar\'e flow.
We shall use the following lemma.
\begin{lemma}
\label{cro1}
Let $\mu$ be a hyperbolic ergodic measure of a flow $X$
whose Oseledets decomposition is dominated with respect to the Poincar\'e flow.
Then, there are $\eta,T>0$ such that $\mu$ is ergodic for $\phi^X_T$,
$$
\int\log\|\psi_T|_{N^s}\|d\mu\leq-\eta\quad \quad \mbox{ and }\quad \quad \int\log\|\psi_{-T}|_{N^u}\|d\mu\leq-\eta.
$$
\end{lemma}
\begin{proof}
It follows from the hypothesis that $\mu(\operatorname{Sing}(X))=0$.
On the other hand, $\mu$ is ergodic for $X$, so there is $T_1>0$ such that $\mu$
is totally ergodic for $\phi_{T_1}$ (cf. \cite{ps}).
Since $\mu$ is hyperbolic, there is $\eta_0>0$ such that any
Lyapunov exponent off the flow direction belongs to $\mathbb{R}\setminus [-\eta_0,\eta_0]$.
From this and the
Furstenberg-Kesten Theorem (see also p. 150 in \cite{w})
we obtain
$$
\lim_{n\to\infty}\frac{1}{n}\log\|\psi_{nT_1}|_{N^s_x}\|\leq -\eta_0\mbox{ and }
\lim_{n\to\infty}\frac{1}{n}\log\|\psi_{-nT_1}|_{N^u_{\phi_{nT_1}(x)}}\|\leq -\eta_0,
$$
for $\mu$-a.e. $x\in M$.
Hence
$$
\lim_{n\to\infty}\frac{1}{n} \int\log\| \psi_{nT_1}|_{N^s}\|d\mu\leq-\frac{\eta_0}2\quad\mbox{ and }\quad
\lim_{n\to\infty}\frac{1}{n} \int\log\|\psi_{-nT_1}|_{N^u}\|d\mu\leq -\frac{\eta_0}2
$$
by the Dominated Convergence Theorem.
Now take $T=nT_1$ and $\eta=n\frac{\eta_0}2$ with $n$ large.
\end{proof}
We say that $H\subset M$ is a {\em homoclinic class} of $X$ if there is a periodic saddle $x$
such that
$$
H=\operatorname{Cl}(\{q\in W^s(x)\cap W^u(x): \dim(T_qW^s(x)\cap T_qW^u(x))=1\}).
$$
A homoclinic class is {\em nontrivial} if it does not reduce to a single periodic orbit.
The following is the flow-version of Proposition 1.4 in \cite{c}.
\begin{prop}
\label{thA}
For every flow,
every hyperbolic ergodic measure whose
Oseledets decomposition is dominated with respect to the Poincar\'e flow is supported on a homoclinic class.
\end{prop}
\begin{proof}
Let $\mu$ be a hyperbolic ergodic measure of a flow $X$.
Suppose that the Oseledets decomposition of $\mu$ is dominated with respect to the linear Poincar\'e flow.
By Lemma \ref{cro1} there are $\eta,T>0$ such that $\mu$ is ergodic for $\phi^X_T$,
$$
\int\log\|\psi_T|_{N^s}\|d\mu\leq-\eta\quad \quad \mbox{ and }\quad \quad \int\log\|\psi_{-T}|_{N^u}\|d\mu\leq-\eta.
$$
It follows from the hypothesis that $\mu(\operatorname{Sing}(X))=0$. Since $\mu$ is ergodic, we obtain
$$
\int\log\|\Phi_T|_{E^X}\|d\mu=0.
$$
Replacing in the two previous inequalities we obtain
$$
\int\log\|\psi^*_T|_{N^s}\|d\mu\leq-\eta\quad \quad \mbox{ and }\quad \quad \int\log\|\psi^*_{-T}|_{N^u}\|d\mu\leq-\eta,
$$
where
$$
\psi^*_t(x)=\frac{\psi_t(x)}{\|\Phi_t|_{E^X_x}\|},
\quad\quad x\in M^*, t\in \mathbb{R}
$$
is the scaled linear Poincar\'e flow (cf. \cite{sgw}).
On the other hand, standard arguments (cf. \cite{lgw}) imply that the decomposition $N_R=N^s\oplus N^u$
(which is dominated for the Poincar\'e flow by hypothesis) extends continuously to a dominated splitting $N_{\operatorname{supp}(\mu)^*}=N^s\oplus N^u$
with respect to the linear Poincar\'e flow.
By Lemma 2.29 in \cite{ap}
there are a neighborhood $U$ of
$\operatorname{supp}(\mu)$ and a splitting $N_{\Lambda^*}=N^s\oplus N^u$
extending $N_{\operatorname{supp}(\mu)^*}=N^s\oplus N^u$,
where $\Lambda=\bigcap_{t\in\mathbb{R}}\phi_t(U)$.
From this point forward we can reproduce the arguments on p. 214 of \cite{sgw} to conclude the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thAA}]
Let $X$ be a three-dimensional flow with positive topological entropy.
By the variational principle (e.g. \cite{bru})
there is an invariant measure $\mu$ of $X$ such that $h_\mu(\phi_1)>0$,
where $h_\mu$ denotes the metric entropy. By the ergodic decomposition theorem we can assume that $\mu$ is ergodic.
By Ruelle's inequality (e.g. Theorem 5.1 in \cite{g}) we get that $\mu$ has at least one positive Lyapunov exponent.
By applying this inequality to the reversed flow we obtain that $\mu$ also has a negative exponent.
Since $\dim(M)=3$, we conclude that $\mu$ is hyperbolic of {\em saddle type}
(i.e. with both positive and negative exponents).
By the Ergodic Closing Lemma for flows (cf. Theorem 5.5 in \cite{sgw})
there are a sequence of flows $X_n$ and a sequence of hyperbolic periodic orbits $\gamma_n$ of $X_n$
such that $X_n\to X$ and $\gamma_n\to \operatorname{supp}(\mu)$ as $n\to \infty$, where the latter convergence is with respect to the Hausdorff
topology on compact subsets of $M$.
By passing to a subsequence if necessary we may assume that the index (stable manifold dimension) of these periodic orbits is a constant $i$.
Now we assume by contradiction that $X$ cannot be approximated by flows with homoclinic orbits.
Hence $X$ cannot be approximated by flows with homoclinic tangencies either.
Since $\dim(M)=3$, $i$ can take the values $0,1,2$ only.
If $i=2$ then each $\gamma_n$ is an attracting periodic orbit of $X^n$.
Since $X$ cannot be approximated by flows with homoclinic tangencies, Lemma 2.9 in \cite{gy}
implies that there is $T>0$ such that
$\|\psi^{X^n}_T|_{N^{X^n}_x}\|\leq \frac{1}2$ for all $n\in\mathbb{N}$ and all $x\in \gamma_n$.
Letting $n\to\infty$ we get $\|\psi^X_T|_{N_x}\|\leq\frac{1}2$ for all $x\in \operatorname{supp}(\mu)$.
This would imply that the Lyapunov exponents of $\mu$ off the flow direction are all negative.
Since $\mu$ is of saddle type, we obtain a contradiction, proving $i\neq 2$. Similarly, $i\neq0$ and so $i=1$.
This allows us to apply Corollary 2.10 in \cite{gy} to obtain a dominated splitting $N_{\operatorname{supp}(\mu)^*}=N^-\oplus N^+$ of index $1$ (i.e.
$\dim(N^-)=1$) with respect to the Poincar\'e flow.
Next we observe that both the Oseledets splitting $N^s\oplus N^u$ for the linear Poincar\'e flow
and the splitting $N^-\oplus N^+$ obtained above are pre-dominated of index $1$ in the sense
of Definition 2.1 in \cite{lgw}.
Since
pre-dominated splittings of prescribed index are unique (cf. Lemma 2.3 in \cite{lgw}),
we get $N^s\oplus N^u=N^-\oplus N^+$.
Since $N^-\oplus N^+$ is dominated with respect to the Poincar\'e flow,
the Oseledets decomposition $N^s\oplus N^u$ of $\mu$ is dominated with respect to the linear Poincar\'e flow as well.
We conclude that $\operatorname{m}u$ is supported on a homoclinic class
by Proposition \ref{thA}.
Since $\mu$ has positive metric entropy, such a homoclinic class is nontrivial and so
$X$ has a homoclinic orbit, contrary to the assumption.
This contradiction completes the proof of the theorem.
\end{proof}
\end{document}
\begin{document}
\title{
Schauder estimates for solutions of sub-Laplace equations with Dini terms
\thanks{This work was supported by the National
Natural Science Foundation of China (Grant No. 11271299), Natural
Science Foundation Research Project of Shaanxi Province (Grant No.
2012JM1014).} }
\author{Tingxi Hu, Pengcheng Niu\thanks{ Corresponding author. [email protected](P. Niu)}\\Department of Applied Mathematics,\\Northwestern Polytechnical University, Xi'an 710129, China}
\maketitle
\begin{abstract}
In this paper we establish Schauder estimates for the sub-Laplace equation
\[\Sigma _{j = 1}^mX_j^2u = f,\]
where ${X_1},{X_2}, \ldots ,{X_m}$ is a system of smooth vector fields which generates the first
layer in the Lie algebra of a Carnot group. We derive the estimate for the second
order derivatives of the solution to the equation with Dini continuous inhomogeneous term $f$ by the perturbation argument.
\textbf{Keywords:}\;Carnot group; sub-Laplace; Schauder estimate; Dini continuity; perturbation argument.
\textbf{MSC2010:}\;35B65; 35R03.
\end{abstract}
\section{Introduction}
Schauder estimates play an important role in the theory of elliptic equations, see [6, 11]. For the second
order uniformly elliptic equation in any bounded domain $\Omega \subset {\mathbb{R}^n}$
\[\Sigma _{i,j = 1}^n{a_{ij}}(x)\partial _{ij}^2u = f,\]
such estimates provide a bound of the H\"{o}lder norm in $\Omega$ of the second derivatives of the solution
$u$ in terms of the H\"{o}lder norms in $\Omega$ of the coefficients $a_{ij}$ and $f$.
A sharper form of these estimates was introduced by Caffarelli [3]
in the study of fully non-linear elliptic equations. He derived
Schauder estimates for viscosity solutions by comparing the
solutions with osculating quadratic polynomials in a neighborhood of
a fixed point, and this method is called \lq\lq perturbation argument". In
Caffarelli's approach, the H\"{o}lder regularity of $u$ at a point is
basically determined by the H\"{o}lder regularity of $a_{ij}$ and
$f$ at the same point; hence such estimates are called pointwise
Schauder estimates. Wang in [23] compared the quadratic part of the
solutions to the Laplace equation with solutions of approximate
equations and estimated the H\"{o}lder norm of $D^2u$ in terms of the
Dini continuous inhomogeneous term. Afterwards, the method was
used to investigate fully non-linear elliptic and parabolic
equations, see Liu-Trudinger-Wang [16], Tian-Wang [22].
For degenerate elliptic equations constructed by left translation invariant vector fields, several authors derived
Schauder estimates, see Lunardi [19], Capogna-Han [5], Polidoro-Di Francesco [21] and Guti\'{e}rrez-Lanconelli [12].
Schauder estimates for heat type equations induced by smooth vector fields satisfying H\"{o}rmander's finite rank condition were
shown by Bramanti-Brandolini [2].
Recently, Jiang-Tian [15] showed Schauder estimates for the Kohn-Laplace equation with Dini continuous inhomogeneous
term in the Heisenberg group in the spirit of [23]. We generalize the result in [15] to the sub-Laplace equations in
Carnot groups.
In the present paper we consider the equation
\begin{equation}
Lu \equiv \Sigma _{j = 1}^mX_j^2u = f \;\;\;in \;{B_1}(0),
\end{equation}
in which $L$ is the sub-Laplacian on a Carnot group $G$ and the right-hand term $f$ is Dini continuous, i.e., $f$ satisfies
\[\int_0^1 {\frac{{{\omega _f}(r)}}{r}dr} < \infty ,\]
where ${\omega _f}(r) = \mathop {\sup }\limits_{d(\xi ,\eta ) < r} |f(\xi ) - f(\eta )|$ and $d(\xi ,\eta )$ is the
pseudo-distance (see next section) between $\xi$ and $\eta$, ${B_1}(0)$ denotes the unit gauge ball centered at origin.
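As a quick illustration (not needed in the sequel): every H\"{o}lder continuous function is Dini continuous, since ${\omega _f}(r) \leqslant {[f]_{{C^{0,\alpha }}}}{r^\alpha }$ implies
\[\int_0^1 {\frac{{{\omega _f}(r)}}{r}dr} \leqslant {[f]_{{C^{0,\alpha }}}}\int_0^1 {{r^{\alpha - 1}}dr} = \frac{{{{[f]}_{{C^{0,\alpha }}}}}}{\alpha } < \infty .\]
The converse fails: for instance the modulus $\omega (r) = {(\log (e/r))^{ - 2}}$ is Dini but not H\"{o}lder of any order.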
Our main result is the following
\begin{theorem}
Let $u \in {C^2}({B_1}(0))$ be a solution of (1.1). Then for any $\xi ,\eta \in {B_{1/2}}(0)$ with $d = d(\xi ,\eta )$,
there exists a positive constant $C$ such that
\begin{eqnarray}
&&|{X_i}{X_j}u(\xi ) - {X_i}{X_j}u(\eta )| \nonumber\\
&&\leqslant C\left(d(\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{L^\infty }}} + \int_{\sqrt d }^1 {\frac{{{\omega
_f}(r)}}{{{r^2}}}dr} ) + \int_0^{\sqrt d } {\frac{{{\omega
_f}(r)}}{r}dr}\right).
\end{eqnarray}
In particular, if $f \in {C^{0,\alpha }}({B_1}(0))\;(0 < \alpha \leqslant 1)$, then
\begin{equation}
|{X_i}{X_j}u(\xi ) - {X_i}{X_j}u(\eta )| \leqslant C{d^{\alpha
/2}}\left(\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{C^{0,\alpha }}}}\right), \;\; \alpha \in (0,1),
\end{equation}
\begin{eqnarray}
&&|{X_i}{X_j}u(\xi ) - {X_i}{X_j}u(\eta )| \nonumber\\
&&\leqslant C{d^{1/2}}\left(\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{C^{0,1}}}}\left(1 + |\sqrt d \log \sqrt d |\right)\right),
\;\;\alpha=1.
\end{eqnarray}
\end{theorem}
The plan of the paper is as follows: in Section 2 we introduce material related to Carnot groups and some
preliminary lemmas. A maximum principle for the Dirichlet boundary value problem associated with (1.1) is also proved.
Section 3 is devoted to the proof of Theorem 1.1. We mention that the treatment for the Taylor polynomials
in the Carnot group is more complicated than in the Heisenberg group. Also necessary techniques to use the
perturbation argument are given.
\section{Preliminary results}
We begin by describing several known facts on Carnot groups and refer to [1, 10] for more information. Especially,
we provide a maximum principle (Lemma 2.5) for solutions of a boundary value problem to the sub-Laplace equation.
A Carnot group $G$ of step $s$ is a simply connected nilpotent Lie group such that its Lie algebra $\mathfrak{g}$
admits a stratification $\mathfrak{g}= \oplus _{l = 1}^s{V_l}$, with $[{V_1},{V_l}] = {V_{l + 1}}$ $(l = 1,2, \ldots ,s - 1)$
and $[{V_1},{V_s}] = \left\{ 0 \right\}$. Denoting ${m_l} = \dim {V_l}$, we fix on $G$ a system of coordinates
$\xi = \left( {{z_1},{z_2}, \ldots ,{z_s}} \right)$, in which ${z_l} = ({x_{l,1}},{x_{l,2}}, \ldots ,{x_{l,{m_l}}}) \in {\mathbb{R}^{{m_l}}}$.
Every Carnot group $G$ is naturally equipped with a family of non-isotropic dilations defined by $\delta_r$:
$${\delta _r}(\xi ) = (r{z_1},{r^2}{z_2}, \ldots ,{r^s}{z_s})\;,\xi \in G,r > 0,$$
and the homogeneous dimension of $G$ is given by $Q = \sum\limits_{l
= 1}^s {l{m_l}}$. We express by $dH(\xi)$ a fixed bi-invariant Haar
measure on $G$. One easily sees $dH({\delta _r}(\xi )) = {r^Q}dH(\xi
)$. By the Baker-Campbell-Hausdorff formula, the group law on $G$ is
$$\xi \eta = \xi + \eta + \sum\limits_{1 \leqslant l,k \leqslant s} {{Z_{l,k}}(\xi ,\eta )} \;,\;\;\xi ,\eta \in
G,
$$
where ${Z_{l,k}}(\xi ,\eta )$ is a fixed linear combination of
iterated commutators containing $l$ times $\xi$ and $k$ times
$\eta$.
The homogeneous norm of $\xi$ on $G$ is defined by $|\xi | =
{(\Sigma _{j = 1}^s|{z_j}{|^{2s!/j}})^{1/2s!}}$, where $|{z_j}|$
denotes the Euclidean norm of ${z_j} \in {\mathbb{R}^{{m_j}}}$. Such a
homogeneous norm on $G$ can be used to define a pseudo-distance on
$G$, namely $d(\xi ,\eta ) = |{\xi ^{ - 1}}\eta |$. Denote the
gauge ball of radius $r$ centered at $\xi$ by ${B_r}(\xi ) = \{
\eta \in G|d(\xi ,\eta ) < r\}$.
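The simplest nontrivial example (recorded here only for illustration, with one common sign convention) is the Heisenberg group ${\mathbb{H}^1} = {\mathbb{R}^3}$, a Carnot group of step $2$. Writing $\xi = (x,y,t)$, the group law and the horizontal vector fields are
\[\xi \xi ' = (x + x',y + y',t + t' + 2(x'y - xy')),\quad {X_1} = {\partial _x} + 2y{\partial _t},\;{X_2} = {\partial _y} - 2x{\partial _t},\]
so that $[{X_1},{X_2}] = - 4{\partial _t}$ spans the second layer; the dilations are ${\delta _r}(x,y,t) = (rx,ry,{r^2}t)$ and the homogeneous dimension is $Q = 1 \cdot 2 + 2 \cdot 1 = 4$.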
Let $X = \{ {X_1},{X_2}, \ldots ,{X_m}\}$ be a basis of $V_{1}$,
then we can write $X_{i}$ as
$${X_i} = {\partial _{1,i}} + \sum\limits_{j= i+1}^m{a_{ij}}(\xi ){\partial _{1,j}}
+\sum\limits_{l =2}^s\sum\limits_{k= 1}^{m_l}{b_{ilk}}(\xi ){\partial _{l,k}},\;{X_i}(0) = {\partial
_{1,i}},
$$
where ${a_{ij}}(\xi )$ and ${b_{ilk}}(\xi )$ are polynomials. The sub-Laplacian on $G$
associated with $X$ is of the form $L = \Sigma _{i = 1}^mX_i^2$,
which is a hypoelliptic second order partial differential operator.
Obviously, it holds $L(\frac{{x_{1,1}^2}} {2}) = 1$.
Suppose that $\{ {X_{l,1}},{X_{l,2}}, \ldots ,{X_{l,{m_l}}}\}$ is a
basis of $V_{l}$ and consider a multi-index $I = \{ ({i_k},{j_k})\}
_{k = 1}^s$ with ${i_k} \in \{ 1,2, \ldots ,l\} ,{j_k} \in \{ 1,2,
\ldots ,{m_{{i_k}}}\}$. For a smooth function $f$ on $G$, we
denote a derivative of $f$ with order $|I| = \Sigma _{k = 1}^s{i_k}$ by
$${X^I}f = {X_{{i_1},{j_1}}}{X_{{i_2},{j_2}}} \ldots {X_{{i_s},{j_s}}}f.
$$
We will also use the notation $Xf = ({X_1}f,{X_2}f, \ldots ,{X_m}f)$ for
convenience.
A polynomial on $G$ is a function which can be expressed in the
exponential coordinates by
$$P(\xi ) = \sum\limits_J {{a_J}{x^J}},$$
where $J = \{ {j_{i,k}}\} _{i = 1, \ldots ,{m_k}}^{k = 1, \ldots
,s}$, the ${a_J}$ are real numbers and ${x^J} = \Pi _{i = 1,
\ldots ,{m_k}}^{k = 1, \ldots ,s}x_{i,k}^{{j_{i,k}}}$. The
homogeneous degree of the monomial $x^{J}$ is given by the sum $|J|
= \Sigma _{k = 1}^s\Sigma _{i = 1}^{{m_k}}k{j_{i,k}}$.
Let $\Omega \subset G$ be an open set. If $k \in \mathbb{N}$ and $1
\leqslant p < \infty$, we define the horizontal Sobolev space by
$$H{W^{k,p}}(\Omega ) = \{ f:\;|{X^I}f| \in {L^p}(\Omega ),0 \leqslant |I| \leqslant k\}
.$$ Next we describe the H\"{o}lder space and Lipschitz space with
respect to the pseudo-distance. If $0 < \alpha \leqslant 1$ and $f$
is a function defined in an open set $\Omega$, let
$${[f]_{{C^{0,\alpha }}}} = \sup \left\{ {\frac{{|f(\xi ) - f(\eta )|}}
{{d{{(\xi ,\eta )}^\alpha }}}:\xi ,\eta \in \Omega ,\xi \ne \eta }
\right\}.
$$
The H\"{o}lder space is defined by
$${C^{0,\alpha }}(\Omega ) = \{ f|\;[f]_{{{C^{0,\alpha }}}} < \infty
\}, \;0<\alpha<1 $$
and Lipschitz space by ${C^{0,1}}(\Omega ) = \{
f|\;[f]_{{{C^{0,1}}}} < \infty \}$. In addition we denote that
$||f||_{{{C^{0,\alpha }}}}:=[f]_{{{C^{0,\alpha }}}}+ ||f||_{L^\infty}$, for $0< \alpha \leqslant 1$.
We introduce some known results that will be used in this paper.
\begin{lemma}([1, pp.390-391])
The gauge balls ${B_r}(\xi )(\xi \in G,r > 0)$
are $L$-regular open sets, i.e., for any $f \in
{C^\infty }({B_r}(\xi ))$, there exists a Perron-Wiener-Brelot
generalized solution $u \in {C^\infty }({B_r}(\xi )) \cap
C(\overline {{B_r}(\xi )} )$ to the boundary value problem
\begin{equation}\left\{ {\begin{array}{*{20}{c}}
{Lu = f} \\
{u{|_{\partial {B_r}(\xi )}} = g} \\
\end{array} } \right.\;\;\begin{array}{*{20}{c}}
{in\;{B_r}(\xi ),} \\
{g \in C(\partial {B_r}(\xi )).} \\
\end{array}
\end{equation}
\end{lemma}
\begin{lemma}(a priori estimates, [4])
Let $\Omega\subset G$ be an open set and $u$ be $L$ harmonic,
i.e., $u$ satisfies $Lu=0$, then for a given integer $k$ and any
multi-index $I$, $|I| \leqslant k$, there exists a constant $C$
depending on $G$ and $k$ such that if $\overline {{B_r}(\xi )}
\subset \Omega $, then
\begin{equation}
|{X^I}u|(\eta ) \leqslant C{r^{ - k}}\mathop {\sup
}\limits_{\overline {{B_r}(\xi )} } |u|,\;\;\eta \in {B_r}(\xi ).
\end{equation}
\end{lemma}
\begin{lemma} (Folland-Stein [7])
Let $\Omega\subset G$ be an open set, then for any $1<p<Q$, there
exists a positive constant $S_{p}$ depending on $G$, such that for
$f \in C_0^\infty (\Omega )$,
\begin{equation}
{\left(\int_\Omega {|f{|^{p*}}dH} \right)^{1/p*}} \leqslant
{S_p}{\left(\int_\Omega {|Xf{|^p}dH} \right)^{1/p}},
\end{equation}where $p* = \frac{{pQ}} {{Q - p}}, |Xf| = {(\Sigma _{j =
1}^m|{X_j}f{|^2})^{1/2}}.$
\end{lemma}
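We remark that in the proof of the maximum principle below only the case $p = 2$ of Lemma 2.3 is used, for which
\[{2^*} = \frac{{2Q}}{{Q - 2}};\]
this exponent is finite since every Carnot group of step $s \geqslant 2$ has $Q \geqslant 4$.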
The following technical lemma is adapted from Chen and Wu [6].
\begin{lemma}(De Giorgi's iteration lemma, [6])
Let $\varphi (t)$ be a nonnegative and non-increasing function on
$[{k_0}, + \infty )$ satisfying
$$\varphi (h) \leqslant \frac{C}
{{{{(h - k)}^\alpha }}}{[\varphi (k)]^\beta }, \;h > k \geqslant
{k_0},
$$
for some constant $C > 0,\alpha > 0,\beta > 1$. Then we have
\begin{equation}
\varphi ({k_0} + \tilde d) = 0,
\end{equation}
in which $\tilde d = {C^{1/\alpha }}{[\varphi ({k_0})]^{(\beta -
1)/\alpha }}{2^{\beta /(\beta - 1)}}$.
\end{lemma}
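For the reader's convenience we sketch the standard proof (cf. [6]): setting ${k_j} = {k_0} + \tilde d(1 - {2^{ - j}})$, the hypothesis together with the choice of $\tilde d$ gives inductively
\[\varphi ({k_j}) \leqslant \frac{{\varphi ({k_0})}}{{{2^{j\alpha /(\beta - 1)}}}},\]
and letting $j \to \infty$ yields $\varphi ({k_0} + \tilde d) = 0$ by the monotonicity of $\varphi$.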
Following the method of proving a classical maximum principle in [6,
Theorem 2.4], we can obtain the following result by combining Lemmas
2.3 and 2.4.
\begin{lemma} (Maximum principle)
Let $\Omega\subset G$ be an open set, $f \in {L^\infty }(\Omega )$,
$u \in {C^2}(\Omega )$ solves (2.1), then
\begin{equation}
\mathop {\sup }\limits_\Omega |u| \leqslant \mathop {\sup
}\limits_{\partial \Omega } |g| + C||f|{|_{{L^\infty }(\Omega
)}}|\Omega {|^{2/Q}}{2^{(Q + 2)/4}}.
\end{equation}
\end{lemma}
\textbf{Proof.} Notice that for every $\varphi \in C_0^2(\Omega
)$, we have
$$\int_\Omega {\Sigma _{i = 1}^{m}{X_i}u{X_i}\varphi } dH = - \int_\Omega {f\varphi dH}.
$$
Set ${k_0} = \mathop {\sup }\limits_{\partial \Omega } |g|$ and
$\varphi = {(u - k)_ + }$ with $k>k_{0}$, and denote $A(k) = \{
\xi \in \Omega |u > k\}$. It is easy to see that ${X_i}\varphi =
{X_i}u$ in $A(k)$. Then
\begin{equation}
\int_{A(k)} {\Sigma _{i = 1}^{m}|{X_i}\varphi {|^2}dH} =
\int_{A(k)} {\Sigma _{i = 1}^{m}{X_i}u{X_i}\varphi } dH =
\int_{A(k)} {f\varphi dH}.
\end{equation}
By Lemma 2.3, we obtain
\begin{equation}
{\left( {\int_{A(k)} {|\varphi {|^{2^*}}dH}}
\right)^{2/2^*}} \leqslant C\int_{A(k)} {\Sigma _{i =
1}^{m}|{X_i}\varphi {|^2}dH}.
\end{equation}
On the other hand,
\begin{eqnarray}
\int_{A(k)} {f\varphi dH} &\leqslant& {\left( {\int_{A(k)}
{|\varphi {|^{2^*}}dH} } \right)^{1/2^*}}{\left( {\int_{A(k)}
{|f{|^{2Q/(2 + Q)}}dH} }
\right)^{(2 + Q)/2Q}}
\nonumber\\
&\leqslant& {\left(\int_{A(k)} {|\varphi {|^{2^*}}dH}
\right)^{1/2^*}}||f|{|_{{L^\infty }(\Omega )}}|A(k){|^{(2 + Q)/2Q}}.
\end{eqnarray}
Since $A(h) \subset A(k)$ and $\varphi
\geqslant h - k$ in $A(h)$ if $k<h$, it follows
\begin{equation}
{(h - k)^{2^*}}|A(h)| \leqslant \int_{A(h)} {|\varphi {|^{2^*}}dH}
\leqslant \int_{A(k)} {|\varphi {|^{2^*}}dH}.
\end{equation}
Combining (2.6)-(2.9) yields
$$|A(h)| \leqslant \frac{{{{(C||f|{|_{{L^\infty }(\Omega )}})}^{2^*}}}}
{{{{(h - k)}^{2^*}}}}|A(k){|^{(Q + 2)/(Q - 2)}}.
$$
By Lemma 2.4 we get (2.5). $\Box$
We will need the following three Lemmas referring to [1, 8], which
are important in applying the perturbation argument.
\begin{lemma} (Taylor polynomial)
Let $f \in {C^\infty }(G)$, then for every integer $n$, there
exists a unique polynomial ${P_n}(f,0)$ of homogeneous degree at
most $n$, such that
\begin{equation}
{X^I}{P_n}(f,0)(0) = {X^I}f(0),
\end{equation}
for all multi-indices $I$ satisfying $|I| \leqslant n$.
\end{lemma}
\begin{lemma} (Remainder in Taylor formula)
Let $f \in {C^{n + 1}}(G), \,\xi \in G$, then
\begin{equation}
f(\eta ) - {P_n}(f,\xi )(\eta ) = {O_{\eta \to \xi }}(d{(\xi ,\eta )^{n + 1}}).
\end{equation}
\end{lemma}
\begin{lemma} (Mean value theorem)
There exist absolute constants $b,C>0$, depending only on $G$ and the
homogeneous norm $| \cdot |$, such that
\begin{equation}
|f(\xi \eta ) - f(\xi )| \leqslant C|\eta |\mathop {\sup
}\limits_{{B_{b|\eta |}}(\xi )} |Xf|,
\end{equation}
for all $f \in {C^1}(G)$ and every $\xi ,\eta \in G$.
\end{lemma}
\begin{remark}
The constant $b$ in Lemma 2.8 can be taken to be 1 when the homogeneous
norm $| \cdot |$ is replaced by the Carnot-Carath\'{e}odory distance,
see [1] for details. In the sequel we always suppose $b\geq1$ without loss
of generality.
\end{remark}
\section{Proof of main result}
\textbf{Proof of Theorem 1.1.} We divide the proof into three
steps.
\textbf{Step 1.} Denote ${B_k} = {B_{{\rho ^k}}}(0),\rho =
\frac{1} {2}$. By Lemma 2.1, there exists a solution ${u_k} \in
{C^\infty }({B_k}) \cap C({\bar B_k})$ to the boundary value problem
\[\left\{ {\begin{array}{*{20}{c}}
{L{u_k} = {f_0} = f(0)} \\
{{u_k} = u} \\
\end{array} } \right.\begin{array}{*{20}{c}}
{in\;{B_k},} \\
{on\;\partial {B_k}.} \\
\end{array} \]
Then ${v_k} = u - {u_k}$ satisfies the Dirichlet boundary value
problem
\[\left\{ {\begin{array}{*{20}{c}}
{L{v_k} = f - {f_0}} \\
{{v_k} = 0} \\
\end{array} } \right.\begin{array}{*{20}{c}}
{in\;{B_k},} \\
{on\;\partial {B_k}.} \\
\end{array} \]
By Lemma 2.5, we have
\begin{equation}
\mathop {\sup }\limits_{{B_k}} |{v_k}| \leqslant C{\rho
^{2k}}{\omega _f}({\rho ^k}).
\end{equation}
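Indeed, Lemma 2.5 applies here with $g = 0$ and right hand side $f - {f_0}$; by the dilation homogeneity of the Haar measure $|{B_k}| = {\rho ^{kQ}}|{B_1}(0)|$, and clearly $||f - {f_0}|{|_{{L^\infty }({B_k})}} \leqslant {\omega _f}({\rho ^k})$, so that
\[\mathop {\sup }\limits_{{B_k}} |{v_k}| \leqslant C||f - {f_0}|{|_{{L^\infty }({B_k})}}|{B_k}{|^{2/Q}} \leqslant C{\omega _f}({\rho ^k}){\rho ^{2k}}.\]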
Since ${w_k} = {u_k} - {u_{k + 1}}$ is $L$-harmonic in ${B_{k +
2}}$, we have by Lemma 2.2 and (3.1) that
\begin{equation}
\mathop {\sup }\limits_{{B_{k + 2}}} |{X_i}{w_k}| \leqslant C{\rho
^{ - k - 2}}\mathop {\sup }\limits_{{B_{k + 1}}} |{w_k}| \leqslant
C{\rho ^{ - k}}\left(\mathop {\sup }\limits_{{B_{k + 1}}} |{v_k}| +
\mathop {\sup }\limits_{{B_{k + 1}}} |{v_{k + 1}}|\right) \leqslant
C{\rho ^k}{\omega _f}({\rho ^k})
\end{equation}
and
\begin{equation}
\mathop {\sup }\limits_{{B_{k + 2}}} |{X_i}{X_j}{w_k}| \leqslant
C{\rho ^{ - 2k - 4}}\mathop {\sup }\limits_{{B_{k + 1}}} |{w_k}|
\leqslant C{\rho ^{ - 2k}}\left(\mathop {\sup }\limits_{{B_{k + 1}}}
|{v_k}| + \mathop {\sup }\limits_{{B_{k + 1}}} |{v_{k + 1}}|\right)
\leqslant C{\omega _f}({\rho ^k}).
\end{equation}
Applying Lemma 2.6 to $u \in {C^2}({B_1}(0))$, we get a polynomial
${P_2}(u,0)$ of homogeneous degree at most 2 such that for $1 \leqslant i,j
\leqslant m$,
$${X_i}{P_2}(u,0)(0) = {X_i}u(0)
$$
and
$${X_i}{X_j}{P_2}(u,0)(0) = {X_i}{X_j}u(0).
$$
By (3.1) and Lemma 2.7, we have
\begin{eqnarray}
\mathop {\sup }\limits_{{B_k}} |{u_k} - {P_2}(u,0)| &\leqslant&
\mathop {\sup }\limits_{{B_k}} |u - {u_k}| + \mathop {\sup
}\limits_{{B_k}} |u - {P_2}(u,0)|\nonumber\\
&\leqslant&C{\omega _f}({\rho ^k}){\rho ^{2k}} + o({\rho ^{2k}})
= o({\rho ^{2k}}).
\end{eqnarray}
Noting $L{P_2}(u,0) = Lu(0) = f(0) = L{u_k}$, we see that ${u_k} -
{P_2}(u,0)$ is $L$-harmonic, and it follows by Lemma 2.2 and (3.4) that
\[\mathop {\sup }\limits_{{B_k}} |{X_i}{u_k} - {X_i}{P_2}(u,0)| \leqslant C{\rho ^{ - k}}o({\rho ^{2k}}) = o({\rho ^k})\]
and
\[\mathop {\sup }\limits_{{B_k}} |{X_i}{X_j}{u_k} - {X_i}{X_j}{P_2}(u,0)| \leqslant C{\rho ^{ - 2k}}o({\rho ^{2k}}) = o(1),\]
hence
\begin{equation}
\mathop {\lim }\limits_{k \to \infty } {X_i}{u_k}(0) =
{X_i}{P_2}(u,0)(0) = {X_i}u(0),
\end{equation}
\begin{equation}
\mathop {\lim }\limits_{k \to \infty } {X_i}{X_j}{u_k}(0) =
{X_i}{X_j}{P_2}(u,0)(0) = {X_i}{X_j}u(0).
\end{equation}
For any point $\xi_{0}$ near the origin satisfying $|{\xi _0}|
\leqslant 1/(4{b^2})$, we have
\begin{eqnarray}
&&|{X_i}{X_j}u({\xi _0}) - {X_i}{X_j}u(0)|
\nonumber\\
&&\leqslant|{X_i}{X_j}u({\xi _0}) - {X_i}{X_j}{u_k}({\xi _0})| +
|{X_i}{X_j}{u_k}({\xi _0}) - {X_i}{X_j}{u_k}(0)| +
|{X_i}{X_j}{u_k}(0) - {X_i}{X_j}u(0)| \nonumber\\
&&: = {I_1} + {I_2} + {I_3}.
\end{eqnarray}
\textbf{Step 2.} We now estimate $I_{1},I_{2}$ and $I_{3}$,
respectively, to prove (1.2).
To estimate $I_{3}$, let $k$ satisfy ${\rho ^{2k + 4}} \leqslant
|{\xi _0}|: = {d_0} \leqslant {\rho ^{2k + 3}}$. It follows from (3.3)
and (3.6) that
\begin{equation}
{I_3} \leqslant \Sigma _{l = k}^\infty |{X_i}{X_j}{u_l}(0) -
{X_i}{X_j}{u_{l + 1}}(0)| \leqslant C\Sigma _{l = k}^\infty
\frac{{{\omega _f}({\rho ^l})}} {{{\rho ^l}}}{\rho ^l} \leqslant
C\int_0^{\sqrt {{d_0}} } {\frac{{{\omega _f}(r)}} {r}dr}.
\end{equation}
To estimate $I_{1}$, we consider the boundary value problem
\[\left\{ {\begin{array}{*{20}{c}}
{L{u'_k} = {f_{{\xi _0}}} = f({\xi _0})} \\
{{u'_k} = u} \\
\end{array} } \right.\begin{array}{*{20}{c}}
{in\;{B_k}({\xi _0}),} \\
{on\;\partial {B_k}({\xi _0}).} \\
\end{array} \]
Similarly to (3.3) and (3.6), we obtain
\begin{equation}
\mathop {\sup }\limits_{{B_{l + 2}}({\xi _0})}
|{X_i}{X_j}{u'_l}({\xi _0}) - {X_i}{X_j}{u'_{l + 1}}({\xi _0})|
\leqslant C{\omega _f}({\rho ^l}),
\end{equation}
\begin{equation}
\mathop {\lim }\limits_{k \to \infty } {X_i}{X_j}{u'_k}({\xi _0}) =
{X_i}{X_j}u({\xi _0}).
\end{equation}
Since $L({u'_k} - {u_k}) = {f_{{\xi _0}}} - {f_0}$ in ${B_{k +
2}}({\xi _0})$, it follows that
\[L[{u'_k} - {u_k} - \frac{1}
{2}({f_{{\xi _0}}} - {f_0})x_{1,1}^2] = 0,\;in\;{B_{k + 2}}({\xi
_0}).
\]
By Lemma 2.5 and (3.1), we have
\begin{eqnarray}
&&|{X_i}{X_j}{u'_k}({\xi _0}) - {X_i}{X_j}{u_k}({\xi _0})| \nonumber\\
&&\leqslant |({f_{{\xi _0}}} - {f_0})| +|{X_i}{X_j}{u'_k}({\xi
_0}) - {X_i}{X_j}{u_k}({\xi _0}) - \frac{1}
{2}{X_i}{X_j}({f_{{\xi _0}}} - {f_0})x_{1,1}^2| \nonumber\\
&&\leqslant C{\omega _f}({\rho ^k}) + C{\rho ^{ - 2k}}\mathop {\sup
}\limits_{{B_{k + 2}}({\xi _0})} |{u'_k} - {u_k}| + C{\rho ^{ - 2k}}\mathop {\sup
}\limits_{{B_{k + 2}}({\xi _0})} |({f_{{\xi _0}}} - {f_0})x_{1,1}^2|
\nonumber\\
&&\leqslant C{\omega _f}({\rho ^k}) + C{\rho ^{ - 2k}}\left(C{\rho
^{2k}}{\omega _f}({\rho ^k}) + \mathop {\sup }\limits_{\partial
{B_{k + 2}}({\xi _0})} |u - {u_k}|\right) + C{\omega _f}({\rho
^k}) \nonumber\\
&&\leqslant C{\omega _f}({\rho ^k}).
\end{eqnarray}
Arguing as in (3.8), we obtain from (3.9), (3.10) and (3.11)
that
\begin{eqnarray}
{I_1} &\leqslant& |{X_i}{X_j}u({\xi _0}) - {X_i}{X_j}{u'_k}({\xi
_0})| + |{X_i}{X_j}{u'_k}({\xi _0}) - {X_i}{X_j}{u_k}({\xi _0})|
\nonumber\\
&\leqslant& \Sigma _{l = k}^\infty |{X_i}{X_j}{u'_l}({\xi _0}) -
{X_i}{X_j}{u'_{l + 1}}({\xi _0})| + C{\omega _f}({\rho
^k})\nonumber\\
&\leqslant& C\int_0^{\sqrt {{d_0}} } {\frac{{{\omega _f}(r)}} {r}dr} .
\end{eqnarray}
Finally, let us estimate $I_{2}$. Since ${w_k} \in {C^\infty
}({B_{k + 2}})$, we have by Lemma 2.8 that
\begin{equation}
|{X_i}{X_j}{w_k}({\xi _0}) - {X_i}{X_j}{w_k}(0)| \leqslant
C{d_0}\mathop {\sup }\limits_{\tiny{\begin{array}{*{20}{c}}
{|\eta | < b|{\xi _0}| < {\rho ^{k + 2}}} \\
{l = 1,2, \ldots ,m} \\
\end{array} } }|{X_i}{X_j}{X_l}{w_k}(\eta )| \leqslant C{d_0}{\rho ^{ - k}}{\omega _f}({\rho
^k}).
\end{equation}
On the other hand, we derive
\begin{eqnarray}
&&|{X_i}{X_j}{u_1}({\xi _0}) - {X_i}{X_j}{u_1}(0)|\nonumber\\
&&\leqslant C{d_0}\mathop {\sup }\limits_{\tiny{\begin{array}{*{20}{c}}
{|\eta | < b|{\xi _0}| < {\rho ^{k + 2}}} \\
{l = 1,2, \ldots ,m} \\
\end{array} }} |{X_l}{X_i}{X_j}({u_1}(\eta ) - P({u_1},0)(\eta
))|\nonumber\\
&&\leqslant C{d_0}\left(\mathop {\sup }\limits_{{B_1}} |{u_1}| +
\mathop
{\sup }\limits_{{B_1}} |P({u_1},0)|\right)\nonumber\\
&&\leqslant C{d_0}\left(\mathop {\sup }\limits_{{B_1}} |u| +
||f|{|_{{L^\infty }}} + \mathop \Sigma \limits_{0 < |I| \leqslant 2}
\mathop {\sup }\limits_{{B_1}} |{X^I}({u_1} - \frac{1}
{2}{f_0}x_{1,1}^2)| + \mathop \Sigma \limits_{0 < |I| \leqslant 2}
\mathop {\sup }\limits_{{B_1}} |\frac{1}
{2}{X^I}({f_0}x_{1,1}^2)|\right)\nonumber\\
&&\leqslant C{d_0}\left(\mathop {\sup }\limits_{{B_1}} |u| +
||f|{|_{{L^\infty }}}\right).
\end{eqnarray}
Then we get by (3.13) and (3.14) that
\begin{eqnarray}
{I_2} &\leqslant& |{X_i}{X_j}{u_{k - 1}}({\xi _0}) - {X_i}{X_j}{u_{k
- 1}}(0)| + |{X_i}{X_j}{w_{k - 1}}({\xi _0}) - {X_i}{X_j}{w_{k -
1}}(0)|\nonumber\\
&\leqslant& |{X_i}{X_j}{u_1}({\xi _0}) - {X_i}{X_j}{u_1}(0)| +
\Sigma _{l = 1}^{k - 1}|{X_i}{X_j}{w_l}({\xi _0}) -
{X_i}{X_j}{w_l}(0)|\nonumber\\
&\leqslant& C{d_0}\left(\mathop {\sup }\limits_{{B_1}} |u| +
||f|{|_{{L^\infty }}} + \Sigma _{l = 1}^{k - 1}\frac{{{\omega
_f}({\rho ^l})}} {{{\rho ^{2l}}}}{\rho ^l}\right)\nonumber\\
&\leqslant& C{d_0}\left( {\mathop {\sup }\limits_{{B_1}} |u| +
||f|{|_{{L^\infty }}} + \int_{\sqrt {{d_0}} }^1 {\frac{{{\omega
_f}(r)}} {{{r^2}}}dr} } \right).
\end{eqnarray}
Substituting (3.8), (3.12), (3.15) into (3.7), we conclude that for
every $\xi_{0}$ satisfying ${d_0} = |{\xi _0}| \leqslant
1/(4{b^2})$, it holds
\begin{eqnarray}
&&|{X_i}{X_j}u({\xi _0}) - {X_i}{X_j}u(0)|\nonumber\\
&&\leqslant C\left( {{d_0}(\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{L^\infty }}} + \int_{\sqrt {{d_0}} }^1 {\frac{{{\omega
_f}(r)}} {{{r^2}}}dr} ) + \int_0^{\sqrt {{d_0}} } {\frac{{{\omega
_f}(r)}} {r}dr} } \right).
\end{eqnarray}
For any $\xi$ and $\eta$ in ${B_{1/2}}(0)$ with $d = d(\xi ,\eta )$,
let us choose points $\xi = {\xi _1}, \ldots ,{\xi _n} = \eta $ such that
\[d({\xi _i},{\xi _{i + 1}}) = d' \leqslant 1/(4{b^2}),\;(n - 1)d' = d,\;for\;1 \leqslant i \leqslant n -
1.
\]
By applying (3.16) to these points, we get (1.2).
\textbf{Step 3.} If $f \in {C^{0,\alpha }}({B_1}(0))$, $\alpha \in
(0,1)$, then
\[|f(\xi ) - f(\eta )| \leqslant {[f]_{{C^{0,\alpha }}}}d{(\xi ,\eta )^\alpha },\]
thus
\[{\omega _f}(r) = \mathop {\sup }\limits_{d(\xi ,\eta ) < r} |f(\xi ) - f(\eta )| \leqslant {[f]_{{C^{0,\alpha }}}}{r^\alpha }.\]
Hence the right-hand side of (1.2) satisfies
\begin{eqnarray*}
&&d\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| + ||f|{|_{{L^\infty
}}} + \int_{\sqrt d }^1 {\frac{{{\omega _f}(r)}} {{{r^2}}}dr} }
\right) + \int_0^{\sqrt d } {\frac{{{\omega _f}(r)}} {r}dr}
\nonumber\\
&& \leqslant d\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{L^\infty }}} + {{[f]}_{{C^{0,\alpha }}}}\int_{\sqrt d }^1
{\frac{1} {{{r^{2 - \alpha }}}}dr} } \right) + {[f]_{{C^{0,\alpha
}}}}\int_0^{\sqrt d } {\frac{1} {{{r^{1 - \alpha }}}}dr}
\nonumber\\
&&\leqslant d\left(\mathop {\sup }\limits_{{B_1}(0)} |u| + ||f|{|_{{L^\infty }}}\right) + \frac{d} {{1 -
\alpha }}{[f]_{{C^{0,\alpha }}}}\left( {\frac{1} {{{{(\sqrt d )}^{1
- \alpha }}}} - 1} \right) + \frac{1} {\alpha }{[f]_{{C^{0,\alpha
}}}}{\left( {\sqrt d } \right)^\alpha }
\nonumber\\
&&\leqslant C{d^{\alpha /2}}\left( {\mathop {\sup
}\limits_{{B_1}(0)} |u| + ||f|{|_{{C^{0,\alpha }}}}} \right).
\end{eqnarray*}
This proves (1.3).
If $f \in {C^{0,1}}({B_1}(0))$, then
\[{\omega _f}(r) = \mathop {\sup }\limits_{d(\xi ,\eta ) < r} |f(\xi ) - f(\eta )| \leqslant {[f]_{{C^{0,1}}}}r\]
and
\begin{eqnarray*}
&&d\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| + ||f|{|_{{L^\infty
}}} + \int_{\sqrt d }^1 {\frac{{{\omega _f}(r)}} {{{r^2}}}dr} }
\right) + \int_0^{\sqrt d } {\frac{{{\omega _f}(r)}} {r}dr}
\nonumber\\
&&\leqslant d\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{L^\infty }}} + {{[f]}_{{C^{0,1}}}}\int_{\sqrt d }^1
{\frac{1} {r}dr} } \right) + {[f]_{{C^{0,1}}}}\sqrt d \nonumber\\
&&\leqslant d\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{L^\infty }}}} \right) + {[f]_{{C^{0,1}}}}\sqrt d \left( {1
+ |\sqrt d \log \sqrt d |} \right)\nonumber\\
&&\leqslant {d^{1/2}}\left( {\mathop {\sup }\limits_{{B_1}(0)} |u| +
||f|{|_{{C^{0,1}}}}\left( {1 + |\sqrt d \log \sqrt d |} \right)}
\right),
\end{eqnarray*}
thus (1.4) is obtained. $\Box$
\end{document}
\begin{document}
\title{Complete Problems of Propositional Logic\\ for the Exponential Hierarchy}
\titlerunning{Complete Problems for the Exponential Hierarchy}
\author{Martin L\"{u}ck}
\institute{Institut f\"{u}r Theoretische Informatik\\
Leibniz Universit\"{a}t Hannover, DE\\
\texttt{[email protected]}}
\authorrunning{M. L\"{u}ck}
\maketitle
\begin{abstract}
Large complexity classes, like the exponential time hierarchy, received little attention in terms of finding complete problems.
In this work a generalization of propositional logic is investigated which fills this gap with the introduction of \emph{Boolean higher-order quantifiers} or equivalently \emph{Boolean Skolem functions}. This builds on the important results of Wrathall and Stockmeyer regarding complete problems, namely QBF and QBF$_k$, for the polynomial hierarchy. Furthermore it generalizes the \emph{Dependency QBF} problem introduced by Peterson, Reif and Azhar, which is complete for $\protect\ensuremath{\complClFont{NEXP}}\xspace$, the first level of the exponential hierarchy. It also turns out that the hardness results do not collapse under the restriction to conjunctive or disjunctive normal form, in contrast to plain QBF.
\end{abstract}
\section{Introduction}
The class of problems decidable in polynomial space, $\protect\ensuremath{\complClFont{PSPACE}}\xspace$, can equivalently be defined as the class $\protect\ensuremath{\complClFont{AP}}\xspace$, i.e.\@\xspace, via alternating machines with polynomial runtime and no bound on the alternation number. The classes $\SigmaP{k}$ and $\PiP{k}$ of the polynomial hierarchy are then exactly the restrictions of $\protect\ensuremath{\complClFont{AP}}\xspace$ to levels of bounded alternation \cite{alternation}. The problem of \emph{quantified Boolean formulas}, often called $\protect\ensuremath{\numberClassFont{QBF}}\xspace$ resp.\ $\protect\ensuremath{\numberClassFont{QBF}}\xspace_k$, is complete for these classes. The subscript $k$ denotes the number of allowed quantifier alternations of a QBF in prenex normal form, whereas $\protect\ensuremath{\numberClassFont{QBF}}\xspace$ imposes no bound on quantifier alternations. For this correspondence Stockmeyer called $\protect\ensuremath{\numberClassFont{QBF}}\xspace$ the \emph{$\omega$-jump} of the bounded $\protect\ensuremath{\numberClassFont{QBF}}\xspace_k$ variants, and similarly the class $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ the $\omega$-jump of the polynomial hierarchy \cite{polyH}, in reference to the arithmetical hierarchy.
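As a standard illustration (our example, not taken from the cited works): the prenex formula
\[\exists x_1\,\forall x_2\,\exists x_3\;\varphi(x_1,x_2,x_3)\]
with quantifier-free $\varphi$ has three quantifier blocks beginning existentially, and deciding the truth of such formulas is complete for $\SigmaP{3}$.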
On the scale of exponential time the alternation approach leads to discrepancies regarding natural complete problems. Allowing unbounded alternation in exponential time ($\protect\ensuremath{\complClFont{AEXP}}\xspace$) leads to the same class as exponential space, in symbols $\protect\ensuremath{\complClFont{AEXP}}\xspace = \protect\ensuremath{\complClFont{EXPSPACE}}\xspace$, and therefore $\protect\ensuremath{\complClFont{EXPSPACE}}\xspace$ is analogously the $\omega$-jump of exponential time classes with bounded alternations \cite{alternation}. Complete problems for $\protect\ensuremath{\complClFont{EXPSPACE}}\xspace$ are rare, often artificially constructed, and frequently just succinctly encoded variants of $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete problems
\cite{allender_eric_minimum_2015,goos_second_1995, MeyerS72}.
If the number of machine alternations is bounded by a polynomial then this leads to the class $\protect\ensuremath{\complClFont{AEXP}}\xspacePOLY$ which in fact lies between the exponential time hierarchy and its $\omega$-jump.
In this paper a natural complete problem is presented, similar to $\protect\ensuremath{\numberClassFont{QBF}}\xspace_k$, which allows quantification over Boolean functions and is complete for the levels of the exponential time hierarchy.
The first appearance of such Boolean formulas with quantified Boolean functions was in the work of Peterson, Reif and Azhar who modeled games of imperfect information as a problem they called \emph{DQBF} or \emph{Dependency QBF} \cite{dqbf}. The basic idea is that in a game the player $\exists$ may or may not see the whole state that is visible to player $\forall$, and hence her next move must depend only on the disclosed information.
The existence of a winning strategy in a game can often be modeled as a formula with a prefix of alternating quantifiers corresponding to the moves. This naturally fits games where all information about the state of the game is visible to both players, as quantified variables always may depend on each previously quantified value. Any existentially quantified proposition $x$ can equivalently be replaced by its \emph{Skolem function} which is a function depending on the $\forall$-quantified propositions to the left of $x$.
To model imperfect information in the game, all one has to do is now to restrict the arguments of the Skolem function. For first-order predicate logic several formal notions have been introduced to accommodate this semantics, e.g.\@\xspace, Henkin's branching quantifiers (see \cite{blass_henkin_1986}) or Hintikka's and Sandu's \emph{Independence Friendly Logic} \cite{hintikka_informational_1989}.
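The effect of Skolemization, and of restricting the Skolem function's arguments, can be checked by brute force on a toy instance. The following Python sketch (purely illustrative; the formula and all names are our own, not from the paper) verifies that $\forall y \exists x\,(x \leftrightarrow y)$ keeps its truth value under Skolemization, but becomes false once the Skolem function is forbidden to see $y$, as in the imperfect-information (DQBF) setting.

```python
from itertools import product

def phi(x, y):
    # Matrix of the toy formula: x <-> y
    return x == y

# Classical QBF reading: forall y exists x : phi(x, y).
qbf = all(any(phi(x, y) for x in (0, 1)) for y in (0, 1))

# Skolemized reading: exists f (a function of y) forall y : phi(f(y), y).
# Enumerate all four Boolean functions f: {0,1} -> {0,1} as value tables.
skolem = any(all(phi(f[y], y) for y in (0, 1))
             for f in product((0, 1), repeat=2))

# DQBF-style restriction: f may not see y, i.e. f is a constant.
restricted = any(all(phi(c, y) for y in (0, 1)) for c in (0, 1))

assert qbf == skolem == True   # Skolemization preserves truth
assert restricted == False     # hiding y from player "exists" flips the value
```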
\subsubsection*{Contribution.}
The presented problem is a generalization of the QBF problem where Skolem functions of variables are explicit syntactical objects. This logic will be called QBSF, as in \emph{Quantified Boolean Second-order Formulas}. It is shown that this introduction of function quantifiers to QBF (reminiscent of the step from first-order to second-order predicate logic) yields enough expressive power to encode alternating quantification of exponentially large words. The problem of deciding the truth of a given QBF with higher-order quantifiers is complete for the class $\protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$, but has natural complete fragments for every level of the exponential hierarchy.
The complexity of the problem is classified for several fragments, namely bounded numbers of function quantifiers and proposition quantifiers, as well as the restriction to formulas where function variables occur only as the Skolem functions of quantified propositions, i.e.\@\xspace, always with the same arguments. The latter fragment is used to give an alternative hardness proof for the original DQBF problem of Peterson, Reif and Azhar \cite{dqbf}.
\allowdisplaybreaks
\section{Preliminaries}
The reader is assumed to be familiar with the usual notions of Turing machines (TMs) and complexity classes, especially in the setting of alternation introduced by Chandra, Kozen and Stockmeyer \cite{alternation}. In accordance with the original definition of alternating machines (ATMs) we distinguish them by the type of their initial state.
We abbreviate alternating Turing machines that start in an existential state as \emph{$\Sigma$ type machines} ($\Sigma$-ATMs), and those which start in a universal state as \emph{$\Pi$ type machines} ($\Pi$-ATMs).
We define $\protect\ensuremath{\complClFont{EXP}}\xspace$ and $\protect\ensuremath{\complClFont{NEXP}}\xspace$ as the classes of problems which are decidable by a \mbox{(non-)}deterministic machine in time $2^{p(n)}$ for a polynomial $p$.
\begin{definition}
For $Q \in \{ \Sigma, \Pi \}$ and $g(n) \geq 1$, define $\ATIMEI{t(n), g(n)}{Q}$ as the class of all problems $A$ for which there is a $Q$-ATM deciding $A$ in time $\bigO{t(n)}$ with at most $g(n)$ alternations.
\end{definition}
The number of alternations is the maximal number of \emph{transitions} between universal and existential states or vice versa that the machine performs on inputs of length $n$, counting the initial configuration as the first alternation. A polynomial time $\Sigma$-ATM ($\Pi$-ATM) with $g(n)$ alternations is also called a $\SigmaP{g}$-machine (a $\protect\ensuremath{\Pi^{\complClFont{P}}_{g}}\xspace$-machine). For exponential time, i.e.\@\xspace, $2^{p(n)}$ for a polynomial $p$, we analogously write $\SigmaE{g}$ resp. $\protect\ensuremath{\Pi^{\complClFont{E}}_{g}}\xspace$.
\begin{definition}[\cite{alternation}]
\begin{alignat*}{2}
&\protect\ensuremath{\complClFont{AEXP}}\xspace &&\dfn \bigcup_{t \in 2^{n^{\bigO{1}}}} \ATIMES{t,t}\text{,}\\
&\protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace \;&&\dfn \bigcup_{\substack{t \in 2^{n^{\bigO{1}}}\\ p \in n^{\bigO{1}}}} \ATIMES{t,p}\text{.}
\end{alignat*}
\end{definition}
In this work we further require the notion of \emph{oracle Turing machines}. An oracle Turing machine is an ordinary Turing machine which additionally has access to an \emph{oracle language} $B$. The machine queries $B$ by writing an instance $x$ on a special \emph{oracle tape} and moving to a \emph{query state} $q_?$. Then, instead of $q_?$, one of two designated states $q_+$ or $q_-$ is entered instantaneously, indicating whether $x \in B$ or not. There is no bound on the number of queries during a computation of an oracle machine, i.e.\@\xspace, the machine can erase the oracle tape and ask further questions.
If $B$ is a language, then the usual complexity classes $\protect\ensuremath{\complClFont{P}}\xspace, \protect\ensuremath{\complClFont{NP}}\xspace, \protect\ensuremath{\complClFont{NEXP}}\xspace$ etc.\ are generalized to $\protect\ensuremath{\complClFont{P}}\xspace^B, \protect\ensuremath{\complClFont{NP}}\xspace^B, \protect\ensuremath{\complClFont{NEXP}}\xspace^B$ etc.\ where the definition is just changed from ordinary Turing machines to corresponding oracle machines with oracle $B$.
If $\mathcal{C}$ is a class of languages, then $\protect\ensuremath{\complClFont{P}}\xspace^{\mathcal{C}} \dfn \bigcup_{B \in \mathcal{C}} \protect\ensuremath{\complClFont{P}}\xspace^B$ and so on.
To classify the complexity of the presented decision problems we require some standard definitions.
\begin{definition}
A \emph{logspace-reduction} from a language $A$ to a language $B$ is a function $f$ that is computable in logarithmic space such that $x \in A \Leftrightarrow f(x) \in B$.
If such $f$ exists then write $A \protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}} B$. Say that $B$ is \emph{$\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-hard} for a complexity class $\protect\ensuremath{\mathcal{C}}$ if $A \in \protect\ensuremath{\mathcal{C}}$ implies $A \protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}} B$, and $B$ is \emph{$\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-complete} for $\protect\ensuremath{\mathcal{C}}$ if it is $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-hard for $\protect\ensuremath{\mathcal{C}}$ and $B \in \protect\ensuremath{\mathcal{C}}$.
\end{definition}
\begin{definition}[The Polynomial Hierarchy \cite{polyH}]
The levels of the polynomial hierarchy are defined inductively as follows, where $k \geq 1$:
\begin{itemize}
\item $\SigmaP{0} = \protect\ensuremath{\Pi^{\complClFont{P}}_{0}}\xspace = \DeltaP{0} \dfn \protect\ensuremath{\complClFont{P}}\xspace$.
\item $\SigmaP{k} \dfn \protect\ensuremath{\complClFont{NP}}\xspace^{\SigmaP{k-1}}$, $\protect\ensuremath{\Pi^{\complClFont{P}}_{k}}\xspace \dfn \protect\ensuremath{\complClFont{coNP}}\xspace^{\SigmaP{k-1}}$, $\DeltaP{k} \dfn \protect\ensuremath{\complClFont{P}}\xspace^{\SigmaP{k-1}}$.
\end{itemize}
\end{definition}
\begin{definition}[The Exponential Hierarchy \cite{Mocas, Simon}]
The levels of the exponential hierarchy are defined inductively as follows, where $k \geq 1$:
\begin{itemize}
\item $\SigmaE{0} = \protect\ensuremath{\Pi^{\complClFont{E}}_{0}}\xspace = \DeltaE{0} = \protect\ensuremath{\complClFont{EXP}}\xspace$.
\item $\SigmaE{k} \dfn \protect\ensuremath{\complClFont{NEXP}}\xspace^{\SigmaP{k-1}}$, $\protect\ensuremath{\Pi^{\complClFont{E}}_{k}}\xspace \dfn \protect\ensuremath{\complClFont{coNEXP}}\xspace^{\SigmaP{k-1}}$, $\DeltaE{k} \dfn \protect\ensuremath{\complClFont{EXP}}\xspace^{\SigmaP{k-1}}$.
\end{itemize}
\end{definition}
\begin{theorem}[\cite{alternation}]\label{thm:ph_by_alternations}
For all $k \geq 1$:
\begin{align*}
\SigmaP{k} &= \bigcup_{p \in n^{\bigO{1}}} \ATIMES{p, k}\text{,}\\
\protect\ensuremath{\Pi^{\complClFont{P}}_{k}}\xspace &= \bigcup_{p \in n^{\bigO{1}}} \ATIMEP{p, k}\text{.}
\end{align*}
\end{theorem}
The next two lemmas characterize the classes of the exponential hierarchy similarly to the characterization of the polynomial hierarchy in \cite{polyH,Wrathall}. The proofs are rather straightforward adaptations of that characterization.
First, it is possible to reduce a language recognized in alternating exponential time down to a language with deterministic polynomial time complexity by introducing additional \emph{word quantifiers}. These words roughly correspond to the \enquote{choices} of an encoded alternating machine; hence for the polynomial hierarchy words of polynomial length are quantified.
To encode machines deciding problems in $\SigmaE{k}$ or $\protect\ensuremath{\Pi^{\complClFont{E}}_{k}}\xspace$ we require, informally speaking, \emph{large word quantifiers}, i.e.\@\xspace, we quantify words of exponential length w.\,r.\,t.\@\xspace the input.
\begin{lemma}\label{thm:eh_by_quantifiers}
For $k\geq 1$, $A \in \SigmaE{k}$ if and only if there is $t \in 2^{n^{\bigO{1}}}$ and $B \in \protect\ensuremath{\complClFont{P}}\xspace$ s.\,t.\@\xspace{}
\[
x \in A \Leftrightarrow \exists y_1 \forall y_2 \ldots \Game_k y_k \; : \; \langle x,y_1,\ldots,y_k\rangle \in B\text{,}
\]
where $\Game_k = \forall$ for even $k$ and $\Game_k = \exists$ for odd $k$, and all $y_i$ have length bounded in $t(\size{x})$.
\end{lemma}
\begin{proof}
\enquote{$\Leftarrow$}: Define
\[
D \dfn \Set{ \langle 0^{t(\size{x})},x,y_{1}\rangle | \forall y_2 \ldots \Game_k y_k \; : \; \langle x,y_1,\ldots,y_k\rangle \in B }\text{,}
\]
where $0^{t(\size{x})}$ is the string consisting of $t(\size{x})$ zeros and the quantified $y_i$ are length-bounded by $t(\size{x})$. Then $D \in \protect\ensuremath{\Pi^{\complClFont{P}}_{k-1}}\xspace$, and the algorithm that guesses a $y_1$ of length $\leq t(\size{x})$ and queries $D$ as oracle witnesses that $A \in \protect\ensuremath{\complClFont{NEXP}}\xspace^{\protect\ensuremath{\Pi^{\complClFont{P}}_{k-1}}\xspace} = \SigmaE{k}$.
\enquote{$\Rightarrow$}:
Let $A$ be decided by some non-deterministic Turing machine $M$ with oracle $C \in \SigmaP{k-1}$. Assume that $M$ has runtime $t(n) = 2^{p(n)}$ for some polynomial $p$.
Consider now words of the form $z = \langle d,q,a\rangle$ of length $\bigO{t^2(\size{x})}$ where $d$ encodes $t(n)$ non-deterministic choices in a computation of $M$, $q$ encodes the oracle questions asked, and $a$ encodes the answers used by $M$. Then $x \in A$ if and only if there is such a word $z = \langle d,q,a\rangle$ s.\,t.\@\xspace{} $M$ accepts on the computation encoded by the choices $d$, and $a$ are actually the correct answers of the oracle $C$ to the queries in $q$.
With given $\langle x,z\rangle = \langle x,d,q,a\rangle$ the encoded computation of $M$ on the path $d$ can be simulated deterministically in time polynomial in $\size{z}$. With given $\langle x,z\rangle$, also the problem of determining whether the answers $a$ for the queries $q$ are correct for the oracle $C$ is in $\protect\ensuremath{\complClFont{P}}\xspace^C \subseteq \protect\ensuremath{\complClFont{P}}\xspace^{\SigmaP{k-1}} \subseteq \SigmaP{k}$. Therefore the set of all tuples $\langle x,z\rangle$ which fulfill both properties, call it $C'$, is in $\SigmaP{k}$.
By the quantifier characterization of $\SigmaP{k}$ it holds that $\langle x,z\rangle \in C'$ if and only if $\exists y_1 \ldots \Game_k y_k \; : \; \langle x,z,y_1,\ldots,y_k\rangle \in B$ for some set $B \in \protect\ensuremath{\complClFont{P}}\xspace$ and polynomially bounded, alternating quantifiers \cite{polyH,Wrathall}.
But then $x \in A \Leftrightarrow \exists \langle z,y_1\rangle \forall y_2 \ldots \Game_k y_k : \langle x,z,y_1,y_2,\ldots,y_k\rangle \in B$ with all quantified words of length bounded exponentially in $\size{x}$.
\end{proof}
We next state the known correspondence between the classes of the exponential hierarchy (which are defined via oracle machines) and the alternating time classes by the following lemma. It can be seen as the exponential analogue of \Cref{thm:ph_by_alternations}.
\begin{lemma}\label{thm:alternating_exp_classes}
For all $k \geq 1$:
\begin{align*}
\SigmaE{k} &= \bigcup_{t \in 2^{n^{\bigO{1}}}} \ATIMES{t, k}\text{,}\\
\protect\ensuremath{\Pi^{\complClFont{E}}_{k}}\xspace &= \bigcup_{t \in 2^{n^{\bigO{1}}}} \ATIMEP{t, k}\text{.}
\end{align*}
\end{lemma}
\begin{proof}
We show only the $\SigmaE{k}$ case as it can easily be adapted to the $\protect\ensuremath{\Pi^{\complClFont{E}}_{k}}\xspace$ case.
For \enquote{$\subseteq$}, apply the foregoing \Cref{thm:eh_by_quantifiers}. Use an alternating machine to guess the exponentially long quantified words and check in deterministic exponential time if the resulting word is in $B$. Now to \enquote{$\supseteq$}. Let $t \in 2^{n^{\bigO{1}}}$ s.\,t.\@\xspace{} $A \in \ATIMES{t,k}$. Then
$B \dfn \Set{\langle x,0^{t(\size{x})}\rangle | x \in A }$ is in $\SigmaP{k}$; hence, by the quantifier characterization of $\SigmaP{k}$,
\begin{align*}
x \in A &\Leftrightarrow \exists y_1 \ldots \Game_k y_k \; : \; \langle x,0^{t(\size{x})},y_1,\ldots,y_k\rangle \in C\\
&\Leftrightarrow \exists \langle y_0,y_1\rangle \forall y_2 \ldots \Game_k y_k \; : \; (y_0 = 0^{t(\size{x})}) \text{ and } \langle x,y_0,y_1,\ldots,y_k\rangle \in C\\
&\Leftrightarrow \exists \langle y_0,y_1\rangle \forall y_2 \ldots \Game_k y_k \; : \; \langle x,y_0,y_1,\ldots,y_k\rangle \in C'\\
\end{align*}
for some $C,C' \in \protect\ensuremath{\complClFont{P}}\xspace$ and alternating quantifiers which are exponentially bounded in $\size{x}$. By the previous lemma it then holds that $A \in \SigmaE{k}$.
\end{proof}
Orponen gave a characterization of the exponential hierarchy via an \emph{indirect simulation} technique \cite{orponen}. He introduced it primarily for its non-relativizing nature (while \emph{direct simulation} relativizes); however, it also makes it possible to use polynomial time machines to characterize languages of much higher complexity. Informally speaking, the whole computation of an exponential time machine is encoded into quantified oracles (instead of exponentially long words), which are then verified bit by bit, but in parallel, by an alternating oracle machine with only polynomial runtime. Baier and Wagner \cite{baier_analytic_1998} investigated the more general so-called \emph{type 0, type 1} and \emph{type 2} quantifiers, improving Orponen's result. These oracle characterizations play a major role in classifying the complexity of QBSF, the logic introduced in this paper, as they translate to the quantification of Boolean functions (which are per se exponentially large objects).
\begin{theorem}[\cite{baier_analytic_1998}]\label{thm:simulation}
Let $Q \in \{\Sigma, \Pi \}$, $k \in \mathbb{N}$. For every $L \in Q^E_{k}$ there is a polynomial $p$ and a deterministic polynomial time oracle machine $M$ s.\,t.\@\xspace{}
\begin{align*}
x \in L \Leftrightarrow \; &\Game_1 A_1 \subseteq \{0,1\}^{p(\size{x})} \; \ldots\; \Game_{k} A_{k} \subseteq \{0,1\}^{p(\size{x})} \; \Game_{k+1} y \in \{0,1\}^{p(\size{x})}\\
&\text{s.\,t.\@\xspace{}} \; M \text{ accepts }\langle x,y\rangle\text{ with oracle }\langle A_1, \ldots, A_{k}\rangle\text{,}
\end{align*}
where $\Game_1 = \exists$ if $Q = \Sigma$ and $\Game_1 = \forall$ if $Q = \Pi$, and $\Game_i \in \{ \exists, \forall \} \setminus \{ \Game_{i-1} \}$ for $1 < i \leq k+1$.
\end{theorem}
For sets $A_1, \ldots, A_k$ the term $\langle A_1, \ldots, A_k\rangle \dfn \Set{ \langle i, x\rangle | x \in A_i, 1 \leq i \leq k }$ is called \emph{efficient disjoint union} in this context. It allows the machine to access an arbitrary number of oracles in its computations by writing down the corresponding oracle index together with the query.
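The efficient disjoint union can be realized by any pairing that prefixes a query with the oracle index in a self-delimiting way. A minimal Python sketch, assuming a string encoding of queries and a separator-based pairing (both our own choices, not fixed by the definition):

```python
def pair(i, x):
    # One concrete (hypothetical) pairing <i, x>: prefix the query x
    # with the oracle index i, separated unambiguously.
    return f"{i}:{x}"

def disjoint_union(*oracles):
    # <A_1, ..., A_k> = { <i, x> : x in A_i }, realized as a membership test.
    def member(query):
        i, x = query.split(":", 1)
        return x in oracles[int(i) - 1]
    return member

A1, A2 = {"00", "11"}, {"01"}
oracle = disjoint_union(A1, A2)
assert oracle(pair(1, "11")) and not oracle(pair(2, "11"))
```

A machine with access to `oracle` can thus query any of the $A_i$ by writing the index together with the query, exactly as described above.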
The following is a variant of the above theorem where the number $k$ of alternations is not fixed but polynomial in the input size:
\begin{theorem}[\cite{hannula_complexity_2015}]\label{thm:simulationpoly}
For every set $L \in \protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$ there is a polynomial $p$ and a deterministic polynomial time oracle machine $M$ s.\,t.\@\xspace{}
\begin{align*}
x \in L \Leftrightarrow \;&\Game_1 A_1 \subseteq \{0,1\}^{p(\size{x})} \; \ldots\; \Game_{p(\size{x})} A_{p(\size{x})} \subseteq \{0,1\}^{p(\size{x})}\\
&\Game_1 y_1 \,\,\in \{0,1\}^{p(\size{x})} \; \ldots\; \Game_{p(\size{x})} y_{p(\size{x})} \,\,\in \{0,1\}^{p(\size{x})}\\
&\text{s.\,t.\@\xspace{}} \; M \text{ accepts } \langle x, y_1, \ldots, y_{p(\size{x})}\rangle \text{ with oracle }\langle A_1, \ldots, A_{p(\size{x})}\rangle\text{,}
\end{align*}
where $\Game_1 \ldots \Game_{p(\size{x})}$ is an alternating quantifier sequence.
\end{theorem}
Obviously each quantified word can be efficiently encoded in its own additional oracle. Since there are only polynomially many quantified words, in the unbounded case we can drop the word quantifiers completely.
These characterizations also have matching upper bounds. Suppose that a language is characterized by such a sequence of quantified oracles. Then, conversely, an alternating machine can non-deterministically guess the oracle sets in time exponential in $p$ and then simulate $M$, including the word quantifiers, in deterministic exponential time.
Together with \Cref{thm:alternating_exp_classes} we obtain:
\begin{corollary}\label{thm:alt_seq_poly}
$L \in \protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$ if and only if there is a polynomial $p$ and a deterministic polynomial time oracle machine $M$ s.\,t.\@\xspace{} $x \in L$ iff
\begin{align*}
\Game_1 A_1 \subseteq \{0,1\}^{p(\size{x})} \;\ldots\; &\Game_{p(\size{x})} A_{p(\size{x})} \subseteq \{0,1\}^{p(\size{x})}\; :\;\\
&M \text{ accepts } x \text{ with oracle } \langle A_1, \ldots, A_{p(\size{x})}\rangle\text{,}
\end{align*}
where $\Game_1, \ldots, \Game_{p(\size{x})}$ is an alternating sequence of quantifiers.
\end{corollary}
\begin{corollary}
For all $k \geq 1$, $L \in \SigmaE{k}$ if and only if there is a polynomial $p$ and a deterministic polynomial time oracle machine $M$ s.\,t.\@\xspace{} $x \in L$ iff
\begin{align*}
\exists A_1 \subseteq \{0,1\}^{p(\size{x})} \;\ldots\; &\Game_{k} A_{k} \subseteq \{0,1\}^{p(\size{x})}\; \Game_{k+1} y \in \{0,1\}^{p(\size{x})}\;:\;\\
&M \text{ accepts } \langle x,y\rangle \text{ with oracle } \langle A_1, \ldots, A_{k}\rangle\text{,}
\end{align*}
where $\Game_1 = \exists, \ldots, \Game_{k+1}$ is an alternating sequence of quantifiers.
\end{corollary}
\begin{corollary}\label{thm:alt_seq_pik}
For all $k \geq 1$, $L \in \protect\ensuremath{\Pi^{\complClFont{E}}_{k}}\xspace$ if and only if there is a polynomial $p$ and a deterministic polynomial time oracle machine $M$ s.\,t.\@\xspace{} $x \in L$ iff
\begin{align*}
\forall A_1 \subseteq \{0,1\}^{p(\size{x})} \;\ldots\; &\Game_{k} A_{k} \subseteq \{0,1\}^{p(\size{x})}\; \Game_{k+1} y \in \{0,1\}^{p(\size{x})}\;:\;\\
&M \text{ accepts } \langle x,y\rangle \text{ with oracle } \langle A_1, \ldots, A_{k}\rangle\text{,}
\end{align*}
where $\Game_1 = \forall, \ldots, \Game_{k+1}$ is an alternating sequence of quantifiers.
\end{corollary}
\section{Second-order QBF}
In this section the logic QBSF is introduced formally. It is a straightforward generalization of usual QBF that includes function variables; it can be interpreted as a \enquote{second-order} extension: it relates to plain QBF as second-order logic relates to first-order logic.
\begin{definition}[Syntax of QBSF]
The constants $1$ and $0$ are \emph{quantified Boolean second-order formulas (qbsfs)}. If $f^n$ is a function symbol of arity $n \geq 0$ and $\varphi_1, \ldots, \varphi_n$ are qbsfs, then $f^n(\varphi_1, \ldots, \varphi_n)$, $\varphi_1 \land \varphi_2$, $\neg \varphi_1$ and $\exists f^n \varphi_1$ are all qbsfs.
\end{definition}
Abbreviations like $\varphi \lor \psi$, $\varphi \rightarrow \psi$, $\varphi \leftrightarrow \psi$ and $\forall f^{n} \,\psi$ can be defined from this as usual. In this setting, propositions can be understood as functions of arity zero. If the arity of a symbol is clear or does not matter we drop the indicator from now on. Furthermore a sequence $x_1, \ldots, x_s$ of variables can be abbreviated as $\vec{x}$ if the number $s$ does not matter, with $\exists \vec{x}$ meaning $\exists x_1 \ldots \exists x_s$ and so on.
For practical reasons we transfer the terms \emph{first-order variable} and \emph{second-order variable} to the Boolean realm when referring to functions of arity zero resp. greater than zero.
\begin{definition}[Semantics of QBSF]
An \emph{interpretation} $\protect\ensuremath{\mathcal{I}}$ is a map from function variables $f^n$ to $n$-ary Boolean functions. A function variable occurs \emph{freely} if it is not in the scope of a matching quantifier. Write $\ensuremath{\mathrm{Fr}}(\varphi)$ for all free variables in the qbsf $\varphi$. For $\protect\ensuremath{\mathcal{I}}$ that are defined on $\ensuremath{\mathrm{Fr}}(\varphi)$, write $\llbracket \varphi \rrbracket_\protect\ensuremath{\mathcal{I}}$ for the valuation of $\varphi$ in $\protect\ensuremath{\mathcal{I}}$, which is defined as
\begin{alignat*}{2}
&\llbracket c \rrbracket_\protect\ensuremath{\mathcal{I}} &&\dfn c\text{ for }c \in \{0, 1\}\\
&\llbracket \varphi \land \psi \rrbracket_\protect\ensuremath{\mathcal{I}} &&\dfn \llbracket \varphi \rrbracket_\protect\ensuremath{\mathcal{I}} \cdot \llbracket \psi \rrbracket_\protect\ensuremath{\mathcal{I}}\\
&\llbracket \neg \varphi \rrbracket_\protect\ensuremath{\mathcal{I}} &&\dfn 1 - \llbracket \varphi \rrbracket_\protect\ensuremath{\mathcal{I}}\\
&\llbracket f^n(\varphi_1,\ldots,\varphi_n) \rrbracket_\protect\ensuremath{\mathcal{I}} &&\dfn \protect\ensuremath{\mathcal{I}}(f^n)(\llbracket \varphi_1 \rrbracket_\protect\ensuremath{\mathcal{I}}, \ldots, \llbracket \varphi_n \rrbracket_\protect\ensuremath{\mathcal{I}})\\
&\llbracket \exists f^n \varphi\rrbracket_\protect\ensuremath{\mathcal{I}} &&\dfn \max\Set{\llbracket\varphi\rrbracket_{\protect\ensuremath{\mathcal{I}}[f^n \mapsto F]} | F \colon \{0,1\}^n \to \{0,1\}}
\end{alignat*}
where $\protect\ensuremath{\mathcal{I}}[f^n \mapsto F]$ is the interpretation s.\,t.\@\xspace{} $\protect\ensuremath{\mathcal{I}}[f^n \mapsto F](f^n) = F$ and $\protect\ensuremath{\mathcal{I}}[f^n \mapsto F](g^m) = \protect\ensuremath{\mathcal{I}}(g^m)$ for $g^m \neq f^n$.
\end{definition}
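The five defining clauses above can be transcribed directly into a brute-force evaluator. The following Python sketch is purely illustrative (the AST encoding is our own); its runtime is doubly exponential in the quantified arities, matching the fact that an $n$-ary function quantifier ranges over $2^{2^n}$ Boolean functions.

```python
from itertools import product

def evaluate(phi, I):
    """Valuation [[phi]]_I for a tiny QBSF AST (hypothetical encoding):
    0/1 constants, ('and', a, b), ('not', a), ('app', f, args),
    ('exists', f, n, body) quantifying an n-ary Boolean function f."""
    if phi in (0, 1):
        return phi
    op = phi[0]
    if op == 'and':
        return evaluate(phi[1], I) * evaluate(phi[2], I)
    if op == 'not':
        return 1 - evaluate(phi[1], I)
    if op == 'app':
        _, f, args = phi
        return I[f][tuple(evaluate(a, I) for a in args)]
    if op == 'exists':
        _, f, n, body = phi
        # Enumerate all 2^(2^n) Boolean functions {0,1}^n -> {0,1} as tables.
        doms = list(product((0, 1), repeat=n))
        return max(
            evaluate(body, {**I, f: dict(zip(doms, vals))})
            for vals in product((0, 1), repeat=2 ** n)
        )

# exists f : f(0) and not f(1)  -- satisfied by the negation function
phi = ('exists', 'f', 1, ('and', ('app', 'f', [0]), ('not', ('app', 'f', [1]))))
assert evaluate(phi, {}) == 1
```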
Write $\protect\ensuremath{\mathcal{I}} \models \varphi$ for a qbsf $\varphi$ if $\protect\ensuremath{\mathcal{I}}$ is defined on $\ensuremath{\mathrm{Fr}}(\varphi)$ and $\llbracket \varphi \rrbracket_\protect\ensuremath{\mathcal{I}} = 1$.
Say that $\varphi$ \emph{entails} $\psi$, $\varphi \models \psi$, if $\protect\ensuremath{\mathcal{I}} \models \varphi \Rightarrow \protect\ensuremath{\mathcal{I}} \models \psi$ for all interpretations $\protect\ensuremath{\mathcal{I}}$ which are defined on $\ensuremath{\mathrm{Fr}}(\varphi) \cup \ensuremath{\mathrm{Fr}}(\psi)$. If $\varphi \models \psi$ and $\psi \models \varphi$, then $\varphi$ and $\psi$ are called \emph{equivalent}, in symbols $\varphi \equiv \psi$.
\begin{lemma}\label{thm:invariant}
The set of interpretations satisfying a qbsf $\varphi$ is invariant under substitution of equivalent subformulas in $\varphi$.
\end{lemma}
\begin{proof}
The proof is a straightforward induction on the structure of $\varphi$.
\end{proof}
Write $\protect\ensuremath{\mathrm{QBSF}}\xspace$ for the set of all qbsfs $\varphi$ for which $\emptyset \models \varphi$ holds, i.e.\@\xspace, $\varphi$ is satisfied by the empty interpretation.
If in a formula $\varphi$ all quantifiers are at the beginning of $\varphi$, then it is in \emph{prenex form}.
A second-order qbf is \emph{simple} if all function symbols have only propositions as arguments.
It is in \emph{conjunctive normal form (CNF)} if it is in prenex form, simple and the matrix is in propositional CNF. Analogously define \emph{disjunctive normal form (DNF)}.
\begin{theorem}\label{thm:soqbf-in-aexp}
$\protect\ensuremath{\mathrm{QBSF}}\xspace \in \protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$.
\end{theorem}
\begin{proof}
First transform $\varphi$ into prenex form $\varphi'$ in polynomial time.
Evaluate $\varphi'$ by alternating between existential and universal states for each quantifier alternation, guessing and writing down the truth tables of the quantified Boolean functions. These functions have arity at most $\size{\varphi}$, so this whole step requires time $2^{\size{\varphi}}\cdot \size{\varphi'}$. Evaluate the matrix in deterministic exponential time by looking up the truth tables and accept if and only if it is true.
\end{proof}
\begin{theorem}\label{thm:soqbf-hardness}
$\protect\ensuremath{\mathrm{QBSF}}\xspace$ in CNF or DNF is $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-hard for $\protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$.
\end{theorem}
\begin{proof}
Let $L \in \protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$, where $L \subseteq \Sigma^*$ for some alphabet $\Sigma$.
Let $\text{bin}(L) \dfn \Set{ \text{bin}(x) | x \in L }$, where $\text{bin}(\cdot)$ efficiently encodes words from $\Sigma^*$ over $\{0,1\}$. As $L \protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}} \text{bin}(L)$, we only need to consider languages $L$ over $\{0,1\}$.
By \Cref{thm:alt_seq_poly} there is a polynomial $p$ and a deterministic oracle Turing machine $M$ with polynomial runtime such that
\begin{align*}
x \in L \Leftrightarrow \;&\Game_1 A_1 \subseteq \{0,1\}^{p(\size{x})} \; \ldots\; \Game_{p(\size{x})} A_{p(\size{x})} \subseteq \{0,1\}^{p(\size{x})}\\
&\text{s.\,t.\@\xspace{}} \; M \text{ accepts }x\text{ with oracle }\langle A_1, \ldots, A_{p(\size{x})}\rangle\text{,}
\end{align*}
for an alternating quantifier sequence $\Game_1, \ldots, \Game_{p(\size{x})}$. Let $\ell = p(\size{x})$.
In this reduction we will represent the oracles $A_i$ as their characteristic Boolean functions, call them $c_i$.
Translate $M$ and $x$ into a formula $\exists \vec{z} \varphi_x(\vec{z})$, where $\size{\vec{z}}$ and $\size{\varphi_x}$ both are polynomial in $\size{x}$, and where $\varphi_x$ is in CNF. The encoding can be done as in \cite{cook_complexity_1971} and is possible in logspace (iterate over each possible timestep, tape, position and transition).
In an oracle-free setting we could now claim that $\exists \vec{z} \varphi_{x}(\vec{z})$ is true if and only if $M$ accepts $x$. The oracle queries, made in transitions to the state $q_?$, however require special handling. Assume that $M$ uses only $r + m$ tape cells for the oracle questions, $r \in \bigO{\log \ell}$, $m \in \bigO{\ell}$, i.e.\@\xspace, it always writes the index of the oracle and the concrete query on the same cells. Let the proposition $z^t_{p} \in \vec{z}$ mean that at timestep $t$ on position $p$ of the oracle tape there is a one. Then modify $\varphi_{x}$ as follows. Let a clause $C$ of $\varphi_{x}$ encode a possible transition from some state $q$ to the state $q_{?}$ in timestep $t$. If the transition assumes the oracle answer $q_{+}$, then $z^t_{1} \ldots z^t_{r}$ must represent some number $i$ in binary, $1 \leq i \leq \ell$, and $c_i(z^t_{r + 1},\ldots, z^t_{r + m}) = 1$ must hold. For the answer $q_-$, analogously $c_i(z^t_{r + 1},\ldots, z^t_{r + m}) = 0$. Therefore any transition to $q_?$ at a timestep $t$ must be encoded not in $C$ but instead in the new clauses $C^{+}_{1}, \ldots, C^{+}_{\ell}, C^{-}_{1}, \ldots, C^{-}_{\ell}$. Every such clause $C^{+}_{i}$/$C^{-}_{i}$ contains the same literals as $C$, but additionally says that the oracle number is $i$, and contains a single second-order atom of the form $c_i(\ldots)$ or $\neg c_i(\ldots)$. The new state of the transition is then changed to $q_{+}$/$q_{-}$ instead of $q_{?}$. As $\ell$ is polynomial in $\size{x}$, there are also only polynomially many cases for the oracle number. The number of arguments of the characteristic functions is exactly $m$, which is again polynomial in $\size{x}$. The logspace-computability of the new clauses is straightforward.
Altogether the second-order qbf $\Game_1 c^m_1 \ldots \Game_\ell c^m_\ell \; \exists \vec{z} \;\, \varphi_x$ is true if and only if $x \in L$.
Let us now consider the DNF case. As $M$ is deterministic, there is another deterministic oracle machine $M'$ with identical runtime which simulates $M$ including the oracle calls, but then rejects any word that is accepted by $M$ and vice versa. Let the formula $\varphi_{x}'$ be the translation of $(M',x)$ as explained before. Then $x \in L$ iff $\Game_1 c_1 \ldots \Game_\ell c_\ell \; \neg \exists \vec{z} \;\, \varphi'_x$ is true iff $\Game_1 c_1 \ldots \Game_\ell c_\ell \; \forall \vec{z} \;\, \widehat{\varphi'_x}$ is true, where $\widehat{\varphi'_x}$ is the dual formula of $\varphi'_x$ (i.e.\@\xspace, the negation normal form of its negation) and thus in DNF.
\end{proof}
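The dual formula used in the DNF case above is easy to compute syntactically: negate every literal and swap the roles of conjunction and disjunction, so the dual of a CNF can be read as a DNF. A small Python sketch (list-of-clauses encoding with signed integers, our own convention) checks that the dual indeed evaluates to the negation:

```python
def dual_cnf(cnf):
    # cnf: list of clauses, each a list of literals (int; negative = negated).
    # The dual (negation normal form of the negation) negates every literal
    # and swaps and/or, turning a CNF into a DNF.
    return [[-lit for lit in clause] for clause in cnf]  # read result as DNF

def eval_cnf(cnf, a):   # a: dict variable -> 0/1
    return all(any(a[abs(l)] == (l > 0) for l in c) for c in cnf)

def eval_dnf(dnf, a):
    return any(all(a[abs(l)] == (l > 0) for l in t) for t in dnf)

# (x1 or not x2) and x2; its dual: (not x1 and x2) or not x2
cnf = [[1, -2], [2]]
for x in (0, 1):
    for y in (0, 1):
        a = {1: x, 2: y}
        assert eval_dnf(dual_cnf(cnf), a) == (not eval_cnf(cnf, a))
```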
\begin{corollary}
$\protect\ensuremath{\mathrm{QBSF}}\xspace$ in CNF or DNF is $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-complete for $\protect\ensuremath{\complClFont{AEXP}(\complClFont{poly})}\xspace$.
\end{corollary}
\section{Fragments with bounded quantifier alternation}
In Orponen's original characterization of the $\SigmaE{k}$ classes a language $A$ is expressed by a sequence of $k$ alternatingly quantified oracles, the input being verified by a $\protect\ensuremath{\Pi^{\complClFont{P}}_{k}}\xspace$ oracle machine \cite{orponen}. Baier and Wagner improved this to a single word quantifier in the \enquote{first-order} suffix of the characterization instead of $k$ word quantifiers (see \Cref{thm:simulation}).
In this section we use a different strategy that reduces the first-order quantifier alternations directly on the level of formulas. The difference from Baier and Wagner's result is that we obtain CNF formulas where previously only DNF formulas could be obtained, and vice versa.
We define the following restricted problem of QBSF:
\begin{definition}
Let $n,m,k,\ell$ be non-negative integers, or $\omega$, and $P,Q \in \{ \Sigma, \Pi \}$. Write $\protect\ensuremath{\mathrm{QBSF}}\xspace(P^n_m Q^\ell_k)$ for the restriction of $\protect\ensuremath{\mathrm{QBSF}}\xspace$ to (prenex) formulas of the form
\[
\Game_1 f_1 \; \ldots\; \Game_p f_p \;\; \Game'_1 g^0_1 \;\ldots\; \Game'_q g^0_q \;\;H
\]
where $H$ is quantifier-free, the $f_i$ are functions of arbitrary arities, the $g_i$ are functions of arity zero (i.e.\@\xspace, propositional variables), $p \leq n$, $q \leq \ell$, the quantifiers $\Game_1 \ldots \Game_p$ alternate at most $m-1$ times, the quantifiers $\Game'_1 \ldots \Game'_q$ alternate at most $k-1$ times, $\Game_1 = \exists$ iff $P = \Sigma$, and $\Game'_{1} = \exists$ iff $Q = \Sigma$.
\end{definition}
\begin{example}
The formula $\exists f \, \forall x \, \forall y \, (x \land y \leftrightarrow f(x, y))$ is in $\protect\ensuremath{\mathrm{QBSF}}\xspace(\Sigma^1_1 \Pi^2_1)$ and $\protect\ensuremath{\mathrm{QBSF}}\xspace(\Sigma^\omega_1 \Pi^\omega_1)$, but not in $\protect\ensuremath{\mathrm{QBSF}}\xspace(\Pi^1_1 \Sigma^\omega_\omega)$ or in $\protect\ensuremath{\mathrm{QBSF}}\xspace(\Sigma^1_1 \Pi^1_1)$.
\end{example}
The following theorem demonstrates the reduction of propositional quantifier alternations.
The idea behind this is that the truth of the whole first-order part can be encoded in a single Boolean function. Denote dual quantifiers as $\overline{\exists} \dfn \forall$ and $\overline{\forall} \dfn \exists$.
\begin{theorem}[First-order alternation reduction]\label{thm:qbf_reduct}
Let $\varphi = \Game f \Game_1 x^0_1 \ldots \Game_k x^0_k H$ be a qbsf such that $H$ is quantifier-free and in CNF \emph{(}$\Game = \exists$\emph{)} resp. in DNF \emph{(}$\Game = \forall$\emph{)}.
Then there is an equivalent qbsf $\xi = \Game g \overline{\Game} x^0_1 \ldots \overline{\Game} x^0_k H'$ computable in logarithmic space, where $H'$ is in CNF \emph{(}$\Game = \exists$\emph{)} resp. in DNF \emph{(}$\Game = \forall$\emph{)}.
\end{theorem}
\begin{proof}
Let $f$ have arity $m$; this will be important later on. The first-order part of $\varphi$ is $\varphi' \dfn \Game_1 x_1 \ldots \Game_k x_k H$ (we drop the arities from now on); as all quantified variables are merely propositions, it is an ordinary qbf, except that function atoms occur in $H$. Let $\protect\ensuremath{\mathcal{I}}$ be some interpretation of $f$. To verify $\protect\ensuremath{\mathcal{I}} \models \varphi'$ we can use a set $S$ which models an \emph{assignment tree} of $\varphi'$. $S$ should have the following properties: it contains the empty assignment; further, if $S$ contains some partial assignment $s$ to $x_1, \ldots, x_{j-1}$ and $\Game_j = \exists$ ($\forall$), then it must also contain $s \cup \{x_j \mapsto 0\}$ or (and) $s \cup \{x_j \mapsto 1\}$. For any total assignment $s \in S$, i.e.\@\xspace, one that is defined on all of $x_1, \ldots, x_k$, the interpretation $\protect\ensuremath{\mathcal{I}} \cup s$ must satisfy $H$. By the semantics of QBSF, such an $S$ clearly exists iff $\protect\ensuremath{\mathcal{I}} \models \varphi'$.
This set $S$ is encoded in a new quantified function $g$ of arity $m^* \dfn \max\{m, 2k\}$. We define how $g$ represents each (possibly partial) assignment $s \in S$. For each propositional variable $x_i$ we use two bits: the first tells whether $s(x_i)$ is defined (1 if yes, 0 if no), and the second gives the value $s(x_i) \in \{0,1\}$ (and, say, 0 if undefined). Since $m^* \geq 2k$, all bits fit into the arguments of $g$; if $g$ has arity larger than $2k$, the trailing bits are assumed to be constantly 0. Write $\langle s \rangle$ for the binary vector of length $m^*$ which encodes $s$; then $g(\langle s\rangle) = 1$ iff $s \in S$.
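To make the encoding concrete, here is a minimal Python sketch of the two-bits-per-variable encoding $\langle s \rangle$ and of a function $g$ represented by a set $S$; all names are illustrative and not part of the construction itself:

```python
def encode(s, k, m_star):
    """Encode a partial assignment s (dict: variable index -> 0/1) of the
    propositions x_1..x_k as a 0/1 vector of length m_star >= 2*k.
    Two bits per variable: (defined?, value); trailing bits are 0."""
    bits = []
    for i in range(1, k + 1):
        if i in s:
            bits += [1, s[i]]
        else:
            bits += [0, 0]
    bits += [0] * (m_star - 2 * k)  # pad to arity m_star
    return tuple(bits)

# a set S of partial assignments is represented by g(<s>) = 1 iff s in S
S = {encode({}, 2, 4), encode({1: 1}, 2, 4), encode({1: 1, 2: 0}, 2, 4)}
g = lambda args: 1 if tuple(args) in S else 0
```

Here $k = 2$ and $m^* = 4$, so a partial assignment such as $\{x_1 \mapsto 1\}$ is encoded as the vector $(1,1,0,0)$.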
For the actual reduction of the quantifier alternations consider two cases. In the case $\Game = \exists$, $H$ is in CNF, say $H \dfn \bigwedge_{i = 1}^{n} C_i$ for clauses $C_i$. The conditions on the set $S$ encoded by a given $g$ are verified by the following formula in CNF:
\begin{align*}
\vartheta_{\varphi'}(g) \dfn &\forall x_1 \ldots \forall x_k \quad g(\vec{0}) \, \land \, \bigwedge_{i=1}^{n} \Big(g(1, x_1, \ldots, 1, x_k, \vec{0}) \rightarrow C_i\Big) \; \land\\
\quad \bigwedge_{\substack{i=1\\ \Game_i = \exists}}^{k-1} &\Bigg(g(1, x_1, \ldots, 1, x_i, \vec{0}) \rightarrow\\
\quad &\qquad\big(g(1, x_1, \ldots, 1, x_i, 1, 0, \vec{0}) \lor g(1, x_1, \ldots, 1, x_i, 1, 1, \vec{0})\big)\Bigg) \; \land\\
\quad\bigwedge_{\substack{i=1\\ \Game_i = \forall}}^{k-1} &\Bigg(g(1, x_1, \ldots, 1, x_i, \vec{0}) \rightarrow\\
\quad &\qquad\big(g(1, x_1, \ldots, 1, x_i, 1, 0, \vec{0}) \land g(1, x_1, \ldots, 1, x_i, 1, 1, \vec{0})\big)\Bigg)
\end{align*}
$\vartheta_{\varphi'}(g)$ is logspace-computable from $\varphi'$. Now replace $\varphi'$ in $\varphi$ with $\exists g \; \vartheta_{\varphi'}$. To see the correctness of this step, assume that $\protect\ensuremath{\mathcal{I}}$ is an interpretation of $f$. Since $\protect\ensuremath{\mathcal{I}} \models \varphi' \Leftrightarrow \protect\ensuremath{\mathcal{I}} \models \exists g\; \vartheta_{\varphi'}$, as explained above, we can apply \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:invariant}.
For the case $\Game = \forall$, the matrix of $\varphi'$ is in DNF. Consider $\vartheta_{\psi}$ where $\psi$ is the dual of $\varphi'$. Note that $\psi$ itself has a matrix in CNF, so $\vartheta_{\psi}$ can be constructed as above. Further it holds that
\[
\protect\ensuremath{\mathcal{I}} \models \varphi' \Leftrightarrow \protect\ensuremath{\mathcal{I}} \not\models \psi \Leftrightarrow \protect\ensuremath{\mathcal{I}} \not\models \exists g \; \vartheta_{\psi} \Leftrightarrow \protect\ensuremath{\mathcal{I}} \models \forall g \; \neg \vartheta_{\psi}\text{.}
\]
Therefore, now replace $\varphi'$ with $\forall g \; \widehat{\vartheta}_{\psi}$, where $\widehat{\vartheta}_{\psi}$ is the dual of $\vartheta_{\psi}$ and hence again in DNF.
Finally, replace all occurrences of $f(a_1, \ldots, a_m)$ by $f(a_1, \ldots, a_m, 0, \ldots, 0)$, i.e.\@\xspace, pad any possible interpretation of $f$ with zeros up to arity $m^*$. The functions $g$ and $f$ then have the same arity and the same quantifier type $\Game$. Hence we can merge them into a single function: replace $\Game f \Game g$ by $\Game h$, and likewise each expression $f(a_1, \ldots, a_{m^*})$ in the matrix with $h(0, a_1, \ldots, a_{m^*})$ and each $g(a_1, \ldots, a_{m^*})$ with $h(1, a_1, \ldots, a_{m^*})$. It is easy to see that the matrix then holds for some (all) interpretation(s) of $h$ if and only if it holds for some (all) interpretation(s) of $f$ and $g$. This concludes the proof.
\end{proof}
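The final padding-and-merging step of the proof is purely mechanical; the following Python sketch (with illustrative stand-in functions, assuming two Boolean functions of equal arity) shows the tag-bit construction $h(0,\vec{a}) = f(\vec{a})$, $h(1,\vec{a}) = g(\vec{a})$:

```python
def merge(f, g):
    """Merge two Boolean functions of equal arity m* into a single
    function h of arity m*+1: h(0, a...) = f(a...), h(1, a...) = g(a...)."""
    def h(tag, *args):
        return f(*args) if tag == 0 else g(*args)
    return h

f = lambda a, b: a & b   # stands in for the padded original function f
g = lambda a, b: a | b   # stands in for the tree-encoding function g
h = merge(f, g)
```

Quantifying over all interpretations of $h$ then ranges over exactly the pairs of interpretations of $f$ and $g$.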
\begin{theorem}\label{thm:no-alt-completeness}
The following problems restricted to CNF or DNF are $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-complete:
\begin{itemize}
\item If $k$ is even, then $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\Sigma^k_k \Sigma^{\omega}_1)$ for $\SigmaE{k}$ and $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\protect\ensuremath{\complClFont{P}}\xspacei^k_k \protect\ensuremath{\complClFont{P}}\xspacei^{\omega}_1)$ for $\protect\ensuremath{\complClFont{P}}\xspaceiE{k}$.
\item If $k$ is odd, then $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\Sigma^k_k \protect\ensuremath{\complClFont{P}}\xspacei^{\omega}_1)$ for $\SigmaE{k}$ and $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\protect\ensuremath{\complClFont{P}}\xspacei^k_k \Sigma^{\omega}_1)$ for $\protect\ensuremath{\complClFont{P}}\xspaceiE{k}$.
\item $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\Sigma^\omega_\omega \Sigma^{\omega}_1)$ and $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(\Sigma^\omega_\omega \protect\ensuremath{\complClFont{P}}\xspacei^{\omega}_1)$ for $\protect\ensuremath{\complClFont{AEXP}}\xspacePOLY$.
\end{itemize}
\end{theorem}
\begin{proof}
The upper bounds work as in \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:soqbf-in-aexp} by guessing the truth tables of the quantified functions.
For the lower bound consider a reduction similar to the proof of \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:soqbf-hardness}. By \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:simulation} we can already choose the number and quantifier type of the functions $c_1,\ldots,c_k$ correctly when reducing from a $\SigmaE{k}$ or $\protect\ensuremath{\complClFont{P}}\xspaceiE{k}$ language. The first-order part can be constructed accordingly as in \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:soqbf-hardness}, but, due to the single word quantifier introduced in \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:simulation}, now has the form $\exists \vec{y} \, \exists \vec{z} \,\varphi'_{x}(\vec{y},\vec{z})$ in CNF or $\forall \vec{y} \, \forall \vec{z} \, \varphi'_{x}(\vec{y},\vec{z})$ in DNF. Note that the computation of the deterministic machine on input $x$ can equivalently be represented as $\exists \vec{z} \,\varphi'_{x}$ in CNF or as $\forall \vec{z} \,\varphi'_{x}$ in DNF; we therefore choose the quantifiers to match as stated above.
By this construction, the hardness for the CNF cases with $\Sigma^\omega_1$ first-order part and the DNF cases with $\protect\ensuremath{\complClFont{P}}\xspacei^\omega_1$ first-order part is shown.
In the remaining cases apply \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:qbf_reduct} to obtain an equivalent formula with a CNF matrix after a $\protect\ensuremath{\complClFont{P}}\xspacei^\omega_1$ first-order prefix resp. a DNF matrix after a $\Sigma^\omega_1$ first-order prefix.\end{proof}
\section{Fragments with Skolem functions}
In the previous sections we considered the QBSF problem where function atoms could occur multiple times in a formula, in particular with different arguments. A Skolem function of a proposition $x$, however, is a Boolean function that depends only on certain other propositions $y_1,\ldots,y_n$, the so-called \emph{dependencies} of $x$. Hence, to connect QBSF to the Dependency QBF problem \cite{dqbf} and other logics of independence, we now focus on formulas in which all quantified functions are Skolem functions:
\begin{definition}
Let $n,m,k,\ell$ be non-negative integers or $\omega$. Let $P,Q \in \{\Sigma,\protect\ensuremath{\complClFont{P}}\xspacei\}$. Write $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}(P^n_m Q^\ell_k)$ for the restriction of $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF(P^n_m Q^\ell_k)$ to formulas in which for all function symbols $f$ it holds that $f$ always occurs with the same arguments.
\end{definition}
In contrast, \textsf{DQBF} is defined as follows:
\begin{definition}
Every formula of the form $\forall \vec{x} \; \exists y_1(\vec{z}_1) \ldots \exists y_n(\vec{z}_n) \; H$ is called a \emph{dqbf}, where the matrix $H$ is a quantifier-free propositional formula and $\vec{z}_i \subseteq \vec{x}$ f.\,a.\@\xspace $i = 1,\ldots,n$.
A dqbf of this form is \emph{true} if for all $i = 1,\ldots,n$ there is a Skolem function $Y_i$ of $y_i$ depending only on $\vec{z}_i$ s.\,t.\@\xspace for all assignments to $\vec{x}$ the matrix $H$ evaluates to true, provided the values of $Y_i$ are assigned to $y_i$.
As a decision problem, \textsf{DQBF} is defined as the set of all true dqbfs.
\end{definition}
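The semantics of dqbfs can be illustrated by a brute-force truth checker; the following Python sketch (illustrative and exponential-time, intended only for tiny instances) enumerates one Skolem table per $y_i$ over its visible dependencies:

```python
from itertools import product

def dqbf_true(n_x, deps, matrix):
    """Brute-force truth of a dqbf  forall x_1..x_{n_x} exists y_i(deps[i]) H.
    deps[i] lists the 0-based indices of the x-variables y_i may depend on;
    matrix(x, y) evaluates the matrix H."""
    # enumerate one Skolem function per y_i: a truth table over its visible bits
    tables = [list(product([0, 1], repeat=2 ** len(d))) for d in deps]
    for choice in product(*tables):
        ok = True
        for x in product([0, 1], repeat=n_x):
            y = []
            for i, d in enumerate(deps):
                idx = int("".join(str(x[j]) for j in d), 2) if d else 0
                y.append(choice[i][idx])
            if not matrix(x, y):
                ok = False
                break
        if ok:
            return True
    return False

# forall x1 x2  exists y1(x1)  (y1 <-> x1): true, since y1 can copy x1
print(dqbf_true(2, [[0]], lambda x, y: y[0] == x[0]))  # True
# forall x1 x2  exists y1(x2)  (y1 <-> x1): false, since y1 cannot see x1
print(dqbf_true(2, [[1]], lambda x, y: y[0] == x[0]))  # False
```

The second example shows why restricting dependencies matters: the same matrix flips from true to false once the Skolem function loses sight of $x_1$.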
In this section we will prove that the restricted problem $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}$ is complete for the same complexity classes as the general case, i.e.\@\xspace, the number of function quantifier alternations again determines the level in the exponential hierarchy.
Orponen's characterization via alternating oracle quantification allows polynomial-time machines to recognize languages of exponential time complexity \cite{orponen}. \citeauthor{hannula_complexity_2015} generalized this to a polynomial number of oracles \cite{hannula_complexity_2015}. In the following we use the notion of \emph{tableaus} to adapt these characterizations to our needs.
Call an oracle machine a \emph{single-query machine} if it asks at most one oracle question. Since the oracle tape cells directly correspond to the arguments of the quantified functions, an indirect simulation with a single-query machine admits an encoding in $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}$ as follows.
If $M$ is an ATM with state set $Q$ and tape alphabet $\Gamma$, then a \emph{configuration} of $M$ is a finite sequence $C \in (Q \cup \Gamma)^*$ which contains exactly one state.
A \emph{tableau} $T$ of $M$ is a finite sequence $C_1,\ldots,C_n$ of equally long configurations of $M$ such that each $C_{i+1}$ results from $C_i$ by a transition of $M$.
A tableau is \emph{pure} if all states assumed in the tableau except the last one have the same alternation type, i.e.\@\xspace, all existential or all universal. A pure tableau $C_1,\ldots,C_n$ is \emph{alternating} if $n \geq 2$ and $q_n$ has a different alternation type than $q_1,\ldots,q_{n-1}$, where $q_i$ is the state of $C_i$.
If $T, T'$ are tableaus of $M$, then $T'$ is a \emph{successor tableau} of $T$ if its first configuration is equal to $T$'s final configuration.
Let $T$ be a pure tableau that assumes $q$ as its last state. Say that $T$ is $k$-\emph{accepting} if
\begin{itemize}
\item either $k = 1$ and $q$ is an accepting state of the machine $M$,
\item or $k > 1$ and $T$ is alternating and
\begin{itemize}
\item $q$ is existential and $T$ has a pure $(k-1)$-accepting successor tableau,
\item or $q$ is universal and all pure successor tableaus of $T$ are $(k-1)$-accepting.
\end{itemize}
\end{itemize}
\begin{theorem}[Single-query indirect simulation]\label{thm:single-query}
For every $Q \in \{\Sigma, \protect\ensuremath{\complClFont{P}}\xspacei\}$ and every $Q^E_{g(n)}$-machine $M$ there is a polynomial $h$ and a single-query oracle $\SigmaP{4}$-machine $N$ such that $M$ accepts $x$ if and only if
\[
\Game_1 A_1 \; \Game_2 A_2 \; \ldots \; \Game_{g(\size{x})} A_{g(\size{x})} \,:\, N \text{ accepts } x \text{ with oracle }\langle A_1, \ldots, A_{g(\size{x})} \rangle\text{,}
\]
where $\Game_1, \ldots$ are alternating quantifiers starting with $\exists$ ($Q = \Sigma$) resp. $\forall$ ($Q = \protect\ensuremath{\complClFont{P}}\xspacei$), and further $A_i \subseteq \{0,1\}^{h(\size{x})}$ f.\,a.\@\xspace $i = 1, \ldots, g(\size{x})$.
\end{theorem}
\begin{proof}
Let $M$ have runtime $f$, and let $m \dfn g(\size{x})$ and $n \dfn f(\size{x})$.
W.l.o.g.\xspace{} we can assume the following: $m \leq n$, $2 \leq n$, $M$ always performs exactly $m$ alternations before it accepts or rejects, and in each alternation phase it performs exactly $n$ steps before it alternates, rejects or accepts. These properties imply that $M$ accepts $x$ if and only if some ($Q = \Sigma$) resp. all ($Q = \protect\ensuremath{\complClFont{P}}\xspacei$) of its pure tableaus starting with the initial configuration are $m$-accepting.
We further assume for simplicity that $M$ uses only one tape.
The idea is to encode the tableaus $T_1, \ldots, T_m$ of the alternation phases of the computation of $M$ in the oracles $A_1, \ldots, A_m$ as follows. Words of $A_i$ are \emph{indexed cells} $w = (c, t, p)$. Here $p$ denotes the position of the cell on the tape, $t$ is the current timestep, and $c \in Q \cup \Gamma$ is either the symbol written at position $p$, or the state of $M$ if $p$ happens to be the head position at timestep $t$.
Let $h$ be the polynomial size of such an indexed cell $w$ encoded over $\{0,1\}$ (by binary encoding of $t$ and $p$).
A set $A \subseteq \{0,1\}^{h(\size{x})}$ then represents a tableau $T = C_1, \ldots, C_n$ in the sense that $(c, t, p) \in A$ if and only if the cell at tape position $p$ contains $c$ at timestep $t$.
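As a concrete illustration of this representation, a Python sketch of mapping an indexed cell $(c, t, p)$ to a fixed-length $0/1$ word and of a tableau as a set of such words; the alphabet and the bit widths are illustrative assumptions, not taken from the construction:

```python
def encode_cell(c, t, p, alphabet, t_bits, p_bits):
    """Encode an indexed cell (c, t, p) as a fixed-length 0/1 word:
    the symbol c as an index over Q union Gamma, then t and p in binary."""
    c_idx = alphabet.index(c)
    to_bits = lambda v, w: [int(b) for b in format(v, "0{}b".format(w))]
    return tuple(to_bits(c_idx, len(alphabet).bit_length())
                 + to_bits(t, t_bits) + to_bits(p, p_bits))

alphabet = ["q0", "qa", "0", "1", "#"]   # states + tape symbols (illustrative)
A = {encode_cell("q0", 0, 4, alphabet, 3, 3),   # state q0 at position 4, time 0
     encode_cell("0", 0, 5, alphabet, 3, 3)}    # symbol 0 at position 5, time 0
# membership of an encoded cell in A plays the role of the oracle answer
```

Every word has the same length, matching the requirement $A \subseteq \{0,1\}^{h(\size{x})}$ of the theorem.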
We now translate the acceptance condition of $M$ into a formal expression, using $k$-acceptance of tableaus:
\[
\mathrm{Acc}_i \dfn \begin{cases}\exists A_i \subseteq \{ 0,1\}^{h(\size{x})} \quad(\mathrm{Val}_i \land \mathrm{Init}_i \land \mathrm{Alt}_i \land \mathrm{Acc}_{i+1})\quad &\text{if }\Game_i = \exists\\
\forall A_i \subseteq \{0,1\}^{h(\size{x})} \quad(\mathrm{Val}_i \land \mathrm{Init}_i) \rightarrow (\mathrm{Alt}_i \land \mathrm{Acc}_{i+1})\quad &\text{if }\Game_i=\forall\end{cases}
\]
for $1 \leq i \leq m$. The semantics is as follows: $\mathrm{Val}_i$ is true if $A_i$ encodes a pure tableau of $M$; $\mathrm{Init}_1$ is true if the initial configuration of $M$ on $x$ is encoded in $A_1$; $\mathrm{Init}_i$ for $i > 1$ is true if the first configuration of $A_i$ equals the last configuration of $A_{i-1}$ (i.e.\@\xspace, $A_i$ is a successor tableau of $A_{i-1}$); $\mathrm{Alt}_i$ for $i < m$ is true if the tableau encoded in $A_i$ is alternating; $\mathrm{Alt}_m$ is true if the tableau encoded in $A_m$ is $1$-accepting; and $\mathrm{Acc}_{m+1}$ is always true.
By the above definitions $M$ accepts $x$ if and only if the predicate $\mathrm{Acc}_1$ is true, as it states that $M$ has an $m$-accepting pure initial tableau.
The formula can be written in prenex form, i.e.\@\xspace, $\mathrm{Acc}_1$ holds if and only if $\Game_1 A_1 \subseteq \{0,1\}^{h(\size{x})} \ldots \Game_m A_m \subseteq \{0,1\}^{h(\size{x})} \; V_1$ holds, where the predicate $V_i$ is defined as
\[
V_i \dfn \begin{cases}(\mathrm{Val}_i \land \mathrm{Init}_i \land \mathrm{Alt}_i \land V_{i+1})&\text{if }\Game_i = \exists\\
(\mathrm{Val}_i \land \mathrm{Init}_i) \rightarrow (\mathrm{Alt}_i \land V_{i+1})\quad &\text{if }\Game_i =\forall\end{cases}
\]
for $1 \leq i \leq m$, and $V_{m+1} = 1$.
To prove the theorem we now give a single-query oracle ATM $N$ with polynomial runtime and $4$ alternations which accepts if and only if $V_1$ is true.
For a predicate $P$ write $\overline{P}$ for its complement. Group the predicates above as follows:
\begin{align*}
\protect\ensuremath{\mathcal{T}}_i &\dfn \Set{ \mathrm{Val}_j, \mathrm{Init}_j, \mathrm{Alt}_j | 1 \leq j < i} \text{ for }i = 1, \ldots, m+1\text{, }\\
\protect\ensuremath{\mathcal{F}}^0_i &\dfn \Set{ \overline{\mathrm{Val}_i} }\text{, }\protect\ensuremath{\mathcal{F}}^1_i \dfn \Set{ \overline{\mathrm{Init}_i} }\text{ for }i=1,\ldots, m\text{, and }\protect\ensuremath{\mathcal{F}}^0_{m+1} \dfn \emptyset\text{, } \protect\ensuremath{\mathcal{F}}^1_{m+1} \dfn \emptyset\text{.}
\end{align*}
By its definition $V_1$ is true if and only if $\exists i \in \{1,\ldots,m+1\}, \exists d \in \{0,1\}$ s.\,t.\@\xspace all predicates in $\protect\ensuremath{\mathcal{S}}^d_i \dfn \protect\ensuremath{\mathcal{T}}_i \cup \protect\ensuremath{\mathcal{F}}^d_i$ are true and further $\Game_i = \forall$ or $i > m$.
Hence the machine $N$ is defined to work as follows:
\begin{enumerate}
\item In time $\bigO{\log m}$ existentially guess $i$ and $d$,
\item In time $\bigO{\log m}$ universally branch on every predicate $P$ in $\protect\ensuremath{\mathcal{S}}^d_i$,
\item Verify that $P$ is true.
\end{enumerate}
It only remains to verify that every predicate $P$ (and accordingly $\overline{P}$) in $\protect\ensuremath{\mathcal{S}}^d_i$ can be checked in polynomial time, with only one oracle query, and at most two additional alternations.
We sketch the required alternating procedures, where quantifier symbols $\exists, \forall$ always imply branching.
If $(c,t,p)$ is a cell, then $w \in A_i$ means that $A_i$ contains the encoding of $w$. The available timesteps $t$ in each tableau range over $\{0, \ldots, n\}$. The available positions $p$ in the configurations are $\{0,\ldots,n,n+1,\ldots,2n+1\}$, where the input word is placed on positions $n+1, \ldots, n + \size{x}$ and the initial state of $M$ is given on position $n$.
The predicates are checked as follows:
\begin{itemize}
\item $\mathrm{Val}_i$: (check in parallel)\begin{itemize}
\item $\forall w \in \{0,1\}^h$ : if $w$ is not a valid encoded cell then $w \notin A_i$,
\item $\forall t \, \forall p \; \exists c \in Q \cup \Gamma \; : \;(c,t,p) \in A_i$,
\item $\forall w_0 = (c,t,p) \; \forall w_1 = (c', t, p) : $ if $c \neq c'$ then $\exists j \in \{0,1\}$ s.\,t.\@\xspace $w_j \notin A_i$,
\item $\forall w_0 = (c_0, t, p - 1) \;\, \forall w_1 = (c_1, t, p) \;\, \forall w_2 = (c_2, t, p + 1)$\\
$\forall w_3 = (c_3, t + 1, p -1)\;\, \forall w_4 = (c_4, t+1, p)\;\, \forall w_5 = (c_5, t+1, p+1)$ :\\
if $M$ has no transition from $(w_0,w_1,w_2)$ to $(w_3,w_4,w_5)$ then $\exists j \in \{0,\ldots,5\}$ s.\,t.\@\xspace $w_j \notin A_i$,
\item $\forall w_0 = (c,t,p) \; \forall w_1 =(c', t', p') :$ if $t < t' < n$ and $c, c'$ are states with different alternation types then $\exists j \in \{0,1\}$ s.\,t.\@\xspace $w_j \notin A_i$,
\end{itemize}
\item $\mathrm{Alt}_i$, $i < m$: $\exists w_0 = (c, n-1,p) \; \exists w_1 = (c', n, p')$ s.\,t.\@\xspace $w_0$ and $w_1$ contain states with different alternation types and $\forall j \in \{0,1\} : w_j \in A_i$,
\item $\mathrm{Alt}_{m}$: $\exists w = (q,n,p)$ : $q$ is an accepting state of $M$ and $w \in A_m$
\item $\mathrm{Init}_1$: (check in parallel)\begin{itemize}
\item $\forall i \in \{1, \ldots, \size{x}\} \; \exists w = (c, 0, n+i)$ s.\,t.\@\xspace $c$ is the $i$-th letter of $x$ and $w \in A_1$,
\item $(q_0,0,n) \in A_1$ where $q_0$ is the initial state of $M$,
\item $\forall i \notin \{n, \ldots,n+\size{x}\} \; \exists w = (\Box, 0, i) \in A_1$,
\end{itemize}
\item $\mathrm{Init}_i$, $i > 1$: $\forall w_0 = (c, 0, p) \; \forall w_1 = (c', n, p)$ : if $c \neq c'$ then $\exists j \in \{0,1\}$ s.\,t.\@\xspace $w_j \notin A_{i-j}$.\qedhere
\end{itemize}\end{proof}
\begin{theorem}
For $k \geq 1$, the following problems restricted to DNF are $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-complete:
\begin{itemize}
\item $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}(\Sigma^k_k \Sigma^{\omega}_4)$ for $\SigmaE{k}$,
\item $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}(\protect\ensuremath{\complClFont{P}}\xspacei^k_k \Sigma^{\omega}_4)$ for $\protect\ensuremath{\complClFont{P}}\xspaceiE{k}$,
\item $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}(\Sigma^\omega_\omega \Sigma^{\omega}_4)$ for $\protect\ensuremath{\complClFont{AEXP}}\xspacePOLY$.
\end{itemize}
\end{theorem}
\begin{proof}
The proof of the upper bounds is essentially the same as for \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:no-alt-completeness}. The lower bound proof is again similar to \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:soqbf-hardness}. It holds that the translation from $\SigmaP{k}$ machines to deterministic machines with word quantifiers relativizes (see \textcite[Lem.\ 1.1]{baker_second_1979}) and additionally preserves the single-query property.
As only one oracle question is asked by the encoded machine (at timestep, say, $t$), every function symbol occurs only with a fixed argument set, which describes the content of the oracle tape (modulo the oracle index) at timestep $t$.
\end{proof}
The method of single-query indirect simulation can be applied to obtain an alternative proof for the hardness of \textsf{DQBF}. Peterson, Reif and Azhar \cite{dqbf} state that every dqbf has an equivalent \emph{functional form} which is in essence a QBSF formula with implicit function symbols. Similarly, all $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}(\Sigma_1^\omega\Sigma^\omega_\omega)$ formulas are equivalent to the functional form of a DQBF formula with a straightforward efficient translation.
\begin{corollary}
\textsf{DQBF} is $\protect\ensuremath{\leq^\mathCommandFont{log}_\mathCommandFont{m}}$-complete for $\SigmaE{1}$.
\end{corollary}
\section{Conclusion}
The presented completeness results for the exponential hierarchy are in analogy to the results known for QBF; still they differ in subtle points.
One difference is that the ``$\omega$-jump'' of QBSF is complete for $\protect\ensuremath{\complClFont{AEXP}}\xspacePOLY$ and not for $\protect\ensuremath{\complClFont{EXPSPACE}}\xspace$. The reason is that any given input of length $n$ with explicit quantifiers, as in the QBF style, can express only $n$ alternations. This differs from decision problems which are defined via exponentially many alternations, e.g.\@\xspace, certain games. The class $\protect\ensuremath{\complClFont{AEXP}}\xspacePOLY$ is perhaps a more natural analogue of $\protect\ensuremath{\complClFont{AP}}\xspace$ than $\protect\ensuremath{\complClFont{AEXP}}\xspace$, at least in the cases where the number of quantifiers is bounded by the input itself, e.g.\@\xspace, by the number of logical operators.
Other differences concern normal forms: the $\SigmaP{k}$-hardness of \textsf{QBF}$_k$ already holds for CNF --- but only if the rightmost quantifier happens to be existential, i.e.\@\xspace, $k$ is odd. If it is universal, i.e.\@\xspace, $k$ is even, then \textsf{CNF-QBF}$_k$ is in $\SigmaP{k-1}$, i.e.\@\xspace, it is presumably easier. Conversely, DNF establishes hardness for $\SigmaP{k}$ only for even $k$.
This collapse, however, does not occur in $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF$.
This peculiar robustness can be explained if one remembers that function atoms occur in the formulas. While the innermost existential guessing can be avoided in a propositional formula in DNF (just scan for a single non-contradicting conjunction), this is not possible here: is the conjunction $f(x_{1},x_{2}) \land g(x_{2},x_{3})$ self-contradicting or not? The hardness results are a hint that the structure of formulas with Boolean second-order variables is unlikely to admit such shortcuts as in propositional logic.
On the other hand for $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}$ no such symmetry of CNF and DNF could be established, as \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:qbf_reduct} does not preserve the $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}$ condition.
Similarly, the proof of \protect\ensuremath{\numberClassFont{C}}\xspaceref{thm:qbf_reduct} can at best produce 3CNF (and non-Horn) formulas, even if the matrix of the formula was already in 2CNF and Horn form. How does the complexity of $\protect\ensuremath{\numberClassFont{Q}}\xspaceBSF$ change if 2CNF or 2DNF is considered, or Horn formulas? How do CNF and DNF influence the complexity of the $\protect\ensuremath\problemFont{QBSF^{\mathrm{uniq}}}$ fragment? Can the single-query indirect simulation be done by a $\SigmaP{k}$ machine with $k < 4$? Can the problem \textsf{DQBF} be generalized to incorporate universal quantification of Skolem functions?
\end{document}
\begin{document}
\title {Local public good provisioning in networks: \\
A Nash implementation mechanism}
\author{Shrutivandana~Sharma\thanks{Shrutivandana~Sharma obtained her Ph.D. from the University of Michigan, Ann Arbor (email: [email protected]).}
and
Demosthenis~Teneketzis\thanks{Demosthenis~Teneketzis is with the University of Michigan, Ann Arbor (email: [email protected]).}
}
\maketitle
\bibliographystyle{IEEEtran}
\begin{abstract}
In this paper we study resource allocation in decentralized information local public good networks. A network is a local public good network if each user's actions directly affect the utility of an arbitrary subset of network users. We consider networks where each user knows only that part of the network that either affects or is affected by it. Furthermore, each user's utility and action space are its private information, and each user is a self utility maximizer. This network model is motivated by several applications including wireless communications.
For this network model we formulate a decentralized resource allocation problem and develop a decentralized resource allocation mechanism (game form) that possesses the following properties: (i) All Nash equilibria of the game induced by the mechanism result in allocations that are optimal solutions of the corresponding centralized resource allocation problem (Nash implementation). (ii) All users voluntarily participate in the allocation process specified by the mechanism (individual rationality). (iii) The mechanism results in budget balance at all Nash equilibria and off equilibrium.
\end{abstract}
\section{Introduction} \label{Sec_intro}
In networks, individuals' actions often influence the performance of their directly connected neighbors. Such an influence can propagate through the network, affecting the performance of the entire network. Several real-world networks provide examples. For instance, in a wireless cellular network, the transmission of the base station to a given user (an action corresponding to this user) creates interference to the reception of other users and affects their performance. In an urban network, when a jurisdiction institutes a pollution abatement program, the benefits also accrue to nearby communities.
The influence of neighbors is also observed in the spread of information and innovation in social and research networks. Networks with the above characteristics are called local public good networks.
A local public good network differs from a typical public good system in that a local public good (alternatively, the action of an individual) is accessible to and directly influences the utilities of individuals in a particular neighborhood within a big network. On the other hand, a public good is accessible to and directly influences the utilities of all individuals in the system (\cite[Chapter 11]{Mas-Colell}). Because of the localized interactions of individuals, in local public good networks (such as the ones described above) the information about the network is often localized; i.e., the individuals are usually aware only of their neighborhoods and not of the entire network. In many situations the individuals also have some private information about the network or their own characteristics that is not known to anybody else in the network. Furthermore, the individuals may be selfish, caring only about their own benefit in the network. Such a decentralized information local public good network with selfish users gives rise to several research issues. In the next section we provide a survey of prior research on local public good networks.
\subsection{Literature survey} \label{chap5_Subsec_lit_surv}
There exists a large literature on local public goods within the context of local public good provisioning by various municipalities that follows the seminal work of \cite{Tiebout_56}. These works mainly consider network formation problems in which individuals choose where to locate based on their knowledge of the revenue and expenditure patterns (on local public goods) of various municipalities. In this paper we consider the problem of determining the levels of local public goods (actions of network agents) for a given network; thus, the problem addressed in this paper is distinctly different from those in the above literature. Recently, Bramoull\'{e} and Kranton \cite{Kranton_07} and Yuan \cite{Yuan_09} analyzed the influence of selfish users' behavior on the provision of local public goods in networks with fixed links among the users. The authors of \cite{Kranton_07} study a network model in which each user's payoff equals its benefit from the sum of efforts (treated as local public goods) of its neighbors less a cost for exerting its own effort. For concave benefits and linear costs, the authors analyze Nash equilibria (NE) of the game where each user's strategy is to choose the effort level that maximizes its own payoff from the provisioning of local public goods. The authors show that at such NE, \emph{specialization} can occur in local public goods provisioning. Specialization means that only a subset of individuals contribute to the local public goods and others free ride. The authors also show that specialization can benefit the society as a whole because, among all NE, the ones that are ``specialized'' result in the highest social welfare (sum of all users' payoffs). However, it is shown in \cite{Kranton_07} that none of the NE of the abovementioned game can result in a local public goods provisioning that achieves the maximum possible social welfare.
In \cite{Yuan_09} the work of \cite{Kranton_07} is extended to directed networks where the externality effects of users' efforts on each others' payoffs can be unidirectional or bidirectional. The authors of \cite{Yuan_09} investigate the relation between the structure of directed networks and the existence and nature of Nash equilibria of users' effort levels in those networks. For that matter they introduce a notion of ergodic stability to study the influence of perturbation of users' equilibrium efforts on the stability of NE. However, none of the NE of the game analyzed in \cite{Yuan_09} result in a local public goods provisioning that achieves optimum social welfare.
In this paper we consider a generalization of the network models investigated in \cite{Kranton_07, Yuan_09}. Specifically, we consider a fixed network where the actions of each user directly affect the utilities of an arbitrary subset of network users. In our model, each user's utility from its neighbors' actions is an arbitrary concave function of its neighbors' action profile. Each user in our model knows only that part of the network that either affects or is affected by it. Furthermore, each user's utility and action space are its private information, and each user is a self utility maximizer. Even though the network model we consider has similarities with those investigated in \cite{Kranton_07, Yuan_09}, the problem of local public goods provisioning we formulate in this paper is different from those in \cite{Kranton_07, Yuan_09}. Specifically, we formulate a problem of local public goods provisioning in the framework of implementation theory~\fn{Refer to \cite{Jack01, Tudor05, Sharma:2011} and \cite[Chapter 3]{Sharma_thesis} for an introduction to implementation theory.} and address questions such as -- How should the network users communicate so as to preserve their private information, yet make it possible to determine actions that achieve optimum social welfare? How to provide incentives to the selfish users to take actions that optimize the social welfare? How to make the selfish users voluntarily participate in any action determination mechanism that aims to optimize the social welfare? In a nutshell, the prior work of \cite{Kranton_07, Yuan_09} analyzed specific games, with linear cost functions, for local public good provision, whereas our work focuses on \emph{designing a mechanism that can induce, via nonlinear tax functions, ``appropriate'' games among the network users so as to implement the optimum social welfare in NE}.
It is this difference in the tax functions that distinguishes our results from those of \cite{Kranton_07, Yuan_09}.
Previous work on the implementation approach (Nash implementation) for (pure) public goods can be found in \cite{Groves_led_77, Hurwicz_79, Walker_81, Yan_02}. For our work, we drew inspiration from \cite{Hurwicz_79}, in which Hurwicz presents a Nash implementation mechanism that implements the Lindahl allocation (optimum social welfare) for a public good economy. Hurwicz' mechanism also possesses the properties of individual rationality (i.e. it induces the selfish users to voluntarily participate in the mechanism) and budget balance (i.e. it balances the flow of money in the system). A local public good network can be thought of as a limiting case of a public good network, in which the influence of each public good vanishes on a subset of network users. However, taking the corresponding limits in Hurwicz' mechanism does not result in a local public good provisioning mechanism with all the original properties of Hurwicz' mechanism. In particular, such a limiting mechanism does not retain the budget balance property, which is important to avoid any scarcity/wastage of money. In this paper we address the problem of designing a local public good provisioning mechanism that possesses the desirable properties of budget balance, individual rationality, and Nash implementation of optimum social welfare. The mechanism we develop is more general than Hurwicz' mechanism; Hurwicz' mechanism can be obtained as a special case of ours by setting $\ensuremath{\mathcal{R}}_i = \ensuremath{\mathcal{C}}_j = \ensuremath{\mathcal{N}} \;\forall\; i,j \in \ensuremath{\mathcal{N}}$ (the special case where all users' actions affect all users' utilities).
Our mechanism also provides a more efficient way to achieve the properties of Nash implementation, individual rationality, and budget balance, as it uses, in general, a much smaller message space than Hurwicz' mechanism.
To the best of our knowledge, the resource allocation problem and solution that we present in this paper constitute the first attempt to analyze a local public goods network model in the framework of implementation theory. Below we state our contributions.
\subsection{Contribution of the paper} \label{chap5_Subsec_contr_chap}
The key contributions of this paper are:
\begin{inparaenum}
\item The formulation of a problem of local public goods provisioning in the framework of implementation theory.
\item The specification of a game form~\fn{See \cite[Chapter 3]{Sharma_thesis} and \cite{Sharma:2011, Tudor05, Jack01} for the definition of ``game form''.} (decentralized mechanism) for the above problem that,
\begin{inparaenum}
\item[(i)] implements in NE the optimal solution of the corresponding centralized local public good provisioning problem;
\item[(ii)] is individually rational;~\fn{Refer to \cite[Chapter 3]{Sharma_thesis} and \cite{Sharma:2011} for the definitions of ``individual rationality'' and ``implementation in NE.''} and
\item[(iii)] results in budget balance at all NE and off equilibrium.
\end{inparaenum}
\end{inparaenum}
The rest of the paper is organized as follows. In Section~\ref{chap5_Subsec_mdl} we present the model of a local public good network. In Section~\ref{chap5_Subsec_res_alloc_prb} we formulate the local public good provisioning problem. In Section~\ref{chap5_Subsec_game_form} we present a game form for this problem and discuss its properties in Section~\ref{chap5_Subsec_Thm}. We conclude in Section~\ref{chap5_Sec_conc} with a discussion of future directions. \\
{\bf Notation used in the paper:}
We use bold font to represent vectors and normal font for scalars. We use bold uppercase letters to represent matrices. We denote an element of a vector by a subscript on the vector symbol, and an element of a matrix by a double subscript on the matrix symbol. To denote the vector whose elements are all $x_i$ such that $i \in \ensuremath{\mathcal{S}}$ for some set $\ensuremath{\mathcal{S}}$, we use the notation $(x_i)_{i \in \ensuremath{\mathcal{S}}}$ and abbreviate it as $\ensuremath{\bm{x}}_{\ensuremath{\mathcal{S}}}$. We treat bold {\bf 0} as a zero vector of appropriate size, determined by the context. We use the notation $(x_i,\bm{x}^*/i)$ to represent a vector of the same dimension as $\bm{x}^*$, whose $i$th element is $x_i$ and whose other elements are the same as the corresponding elements of $\bm{x}^*$. We represent a diagonal matrix of size $N \times N$ whose diagonal entries are the elements of the vector $\ensuremath{\bm{x}} \in \ensuremath{\mathbb{R}}^N$ by $\dg{\ensuremath{\bm{x}}}$.
\section{The local public good provisioning problem} \label{chap5_Sec_net_prob}
In this section we present a model of a local public good network motivated by various applications such as wireless communication \cite[Chapter 5]{Sharma_thesis} and social and information networks \cite{Yuan_09, Kranton_07}.
We first describe the components of the model and the assumptions we make on the properties of the network. We then present a resource allocation problem for this model and formulate it as an optimization problem.
\subsection{The network model (M)} \label{chap5_Subsec_mdl}
We consider a network consisting of $N$ users and one network operator. We denote the set of users by $\ensuremath{\mathcal{N}} := \{1,2,\dots,N\}$.
Each user $i\in\ensuremath{\mathcal{N}}$ has to take an action $a_i \in \ensuremath{\mathcal{A}}_i$, where $\ensuremath{\mathcal{A}}_i$ is the set that specifies user $i$'s feasible actions.
In a real network, a user's actions can be consumption/generation of resources or decisions regarding various tasks.
We assume that,
\begin{Ass} \label{chap5_Ass_Ai_pvt}
For all $i \in \ensuremath{\mathcal{N}}$, $\ensuremath{\mathcal{A}}_i$ is a convex and compact set in $\ensuremath{\mathbb{R}}$ that includes 0.~\fn{In this paper we assume the sets $\ensuremath{\mathcal{A}}_i, i \in \ensuremath{\mathcal{N}}$, to be subsets of $\ensuremath{\mathbb{R}}$ for simplicity. However, the decentralized mechanism and the results we present in this paper can be easily generalized to the scenario where each $\ensuremath{\mathcal{A}}_i \subset \ensuremath{\mathbb{R}}^{n_i}$ is a convex and compact set in a higher dimensional space $\ensuremath{\mathbb{R}}^{n_i}$; furthermore, the dimension $n_i$ can be different for different $i \in \ensuremath{\mathcal{N}}$.} Furthermore, for each user $i \in \ensuremath{\mathcal{N}}$, the set $\ensuremath{\mathcal{A}}_i$ is its private information, i.e. $\ensuremath{\mathcal{A}}_i$ is known only to user $i$ and nobody else in the network.
\end{Ass}
Because of the users' interactions in the network, the actions taken by a user directly affect the performance of other users in the network. Thus, the performance of the network is determined by the collective actions of all users. We assume that the network is large-scale; therefore, every user's actions directly affect only a subset of network users in $\ensuremath{\mathcal{N}}$. Thus we can treat each user's action as a local public good. We depict the above feature by a directed graph as shown in Fig.~\ref{chap5_Fig_large_nw}. In the graph, an arrow from $j$ to $i$ indicates that user $j$ affects user $i$; we represent the same in the text as $j \rightarrow i$. We assume that $i \rightarrow i$ for all $i \in \ensuremath{\mathcal{N}}$.
\begin{figure}[!ht]
\begin{minipage}[b]{0.30\linewidth}
\centering
\includegraphics[scale=0.30, page=1]{large_network.pdf}
\caption{{\small{A local public good network depicting the neighbor sets $\ensuremath{\mathcal{R}}_i$ and $\ensuremath{\mathcal{C}}_j$ of users $i$ and $j$ respectively.}}}
\label{chap5_Fig_large_nw}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.60\linewidth}
\centering
\includegraphics[scale=0.30, page=2]{large_network.pdf}
\caption{Illustration of the indexing rule for the set $\ensuremath{\mathcal{C}}_j$ shown in Fig.~\ref{chap5_Fig_large_nw}. The index $\ensuremath{\mathcal{I}}_{rj}$ of user $r \in \ensuremath{\mathcal{C}}_j$ is indicated on the arrow directed from $j$ to $r$. The notation used to denote these indices and the user with a particular index is shown outside the dashed boundary demarcating the set $\ensuremath{\mathcal{C}}_j$.}
\label{chap5_Fig_Cj_cyclic_index}
\end{minipage}
\end{figure}
Mathematically, we denote the set of users that affect user $i$ by $\ensuremath{\mathcal{R}}_i := \{k \in \ensuremath{\mathcal{N}} \mid k \rightarrow i\}$. Similarly, we denote the set of users that are affected by user $j$ by $\ensuremath{\mathcal{C}}_j := \{k \in \ensuremath{\mathcal{N}} \mid j \rightarrow k\}$. We represent the interactions of all network users together by a graph matrix $\ensuremath{\bm{G}} := [G_{ij}]_{N\times N}$. The matrix $\ensuremath{\bm{G}}$ consists of 0's and 1's, where $G_{ij} = 1$ represents that user $i$ is affected by user $j$, i.e. $j \in \ensuremath{\mathcal{R}}_i$, and $G_{ij} = 0$ represents no influence of user $j$ on user $i$, i.e. $j \notin \ensuremath{\mathcal{R}}_i$. Note that $\ensuremath{\bm{G}}$ need not be a symmetric matrix. Because $i \rightarrow i$, $G_{ii} = 1$ for all $i \in \ensuremath{\mathcal{N}}$. We assume that,
\begin{Ass} \label{chap5_Ass_G_indep_a}
The sets $\ensuremath{\mathcal{R}}_i, \ensuremath{\mathcal{C}}_i, i \in \ensuremath{\mathcal{N}}$, are independent of the users' action profile $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{N}}} := (a_k)_{k \in \ensuremath{\mathcal{N}}} \in \prod_{k \in \ensuremath{\mathcal{N}}} \ensuremath{\mathcal{A}}_k$. Furthermore, for each $i \in \ensuremath{\mathcal{N}}$, $|\ensuremath{\mathcal{C}}_i| \geq 3$.
\end{Ass}
We impose the condition $|\ensuremath{\mathcal{C}}_i| \geq 3, i\in\ensuremath{\mathcal{N}},$ to ensure that we can construct a mechanism that is budget balanced at all possible allocations, those that correspond to Nash equilibria as well as those that correspond to off-equilibrium messages.
For examples of applications where Assumption~\ref{chap5_Ass_G_indep_a} holds, see
Section~\ref{chap5_Subsec_apps}
and \cite{Yuan_09, Kranton_07}.
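The neighbor sets $\ensuremath{\mathcal{R}}_i$ and $\ensuremath{\mathcal{C}}_j$ defined above are simply the row and column supports of the graph matrix $\ensuremath{\bm{G}}$. The following minimal sketch illustrates this on a hypothetical 4-user graph (the matrix entries are made up; it is chosen so that every column has at least three 1's, matching the condition $|\ensuremath{\mathcal{C}}_i| \geq 3$):

```python
# Hypothetical 4-user graph matrix: G[i][j] = 1 means j -> i, i.e.
# user j's action affects user i.  Diagonal entries are 1 because
# i -> i; every column has >= 3 ones so that |C_j| >= 3 holds.
G = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 1, 1]]

def neighbor_sets(G):
    """R[i]: users affecting i (support of row i of G);
    C[j]: users affected by j (support of column j of G)."""
    N = len(G)
    R = [{j for j in range(N) if G[i][j]} for i in range(N)]
    C = [{i for i in range(N) if G[i][j]} for j in range(N)]
    return R, C

R, C = neighbor_sets(G)
# e.g. R[0] == {0, 1, 3}: users 0, 1, and 3 affect user 0.
```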
We assume that,
\begin{Ass}\label{Ass_user_i_knows_superset_aj}
Each user $i\in\ensuremath{\mathcal{N}}$ knows that the set of feasible actions $\ensuremath{\mathcal{A}}_j$ of each of its neighbors $j\in\ensuremath{\mathcal{R}}_i$ is a convex and compact subset of $\ensuremath{\mathbb{R}}$ that includes 0.
\end{Ass}
The performance that a user derives from the actions taken by the users affecting it is quantified by a utility function. We denote the utility of user $i \in \ensuremath{\mathcal{N}}$ resulting from the action profile $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i} := (a_k)_{k \in \ensuremath{\mathcal{R}}_i}$ by $u_i(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i})$. We assume that,
\begin{Ass} \label{chap5_Ass_util_conc_pvt}
For all $i \in \ensuremath{\mathcal{N}}$, the utility function $u_i : \ensuremath{\mathbb{R}}^{|\ensuremath{\mathcal{R}}_i|} \rightarrow \ensuremath{\mathbb{R}} \cup \{-\infty\}$ is concave in $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}$ and $u_i(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}) = -\infty$ for $a_i \notin \ensuremath{\mathcal{A}}_i$.~\fn{Note that $a_i$ is always an element of $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}$ because $i \rightarrow i$ and hence $i \in \ensuremath{\mathcal{R}}_i$.} The function $u_i$ is user $i$'s private information.
\end{Ass}
The assumptions that $u_i$ is concave and is user $i$'s private information are motivated by the applications described in Section~\ref{chap5_Subsec_apps} and \cite{Yuan_09, Kranton_07}. The assumption that $u_i(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}) = -\infty$ for $a_i \notin \ensuremath{\mathcal{A}}_i$ captures the fact that an action profile $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}$ is of no significance to user $i$ if $a_i \notin \ensuremath{\mathcal{A}}_i$.
We assume that,
\begin{Ass} \label{chap5_Ass_selfish_users}
Each network user $i \in \ensuremath{\mathcal{N}}$ is selfish, non-cooperative, and strategic.
\end{Ass}
Assumption~\ref{chap5_Ass_selfish_users} implies that the users have an incentive to misrepresent their private information; e.g. a user $i\in\ensuremath{\mathcal{N}}$ may not report its true preference for the users' actions to other users or to the network operator if misreporting results in an action profile in its own favor.
Each user $i \in \ensuremath{\mathcal{N}}$ pays a tax $t_i \in \ensuremath{\mathbb{R}}$ to the network operator. This tax can be imposed for the following reasons: (i) for the use of the network by the users; (ii) to provide incentives to the users to take actions that achieve a network-wide performance objective. The tax is set according to the rules specified by a mechanism, and it can be either positive or negative for a user. With the flexibility of either charging a user (positive tax) or paying compensation/subsidy to a user (negative tax), it is possible to induce the users to behave in a way such that a network-wide performance objective is achieved. For example, in a network with limited resources, we can set a ``positive tax'' for the users that receive resources close to the amounts they requested, and pay ``compensation'' to the users that receive resources far from their desired amounts. Thus, with the available resources, we can satisfy all the users and induce them to behave in a way that leads to a resource allocation that is optimal according to a network-wide performance criterion.
We assume that,
\begin{Ass} \label{chap5_Ass_net_op_acct}
The network operator does not have any utility associated with the users' actions or taxes. It does not derive any profit from the users' taxes and acts like an accountant that redistributes the tax among the users according to the specifications of the allocation mechanism.
\end{Ass}
Assumption~\ref{chap5_Ass_net_op_acct} implies that the tax is charged in a way such that
\begin{equation}
\sum_{i \in \ensuremath{\mathcal{N}}} t_i = 0. \label{chap5_Eq_sum_ti_0}
\end{equation}
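As a toy numerical illustration of the accountant role of the operator (this is \emph{not} the tax function of our mechanism), any vector of raw charges can be balanced by shifting each charge by the average, so that the resulting taxes sum to zero:

```python
def redistribute(raw_charges):
    """Toy budget-balancing rule (illustrative only, not the tax
    function of the mechanism developed in this paper): shift each
    raw charge by the average, so the resulting taxes t_i sum to
    zero and the network operator neither keeps nor injects money."""
    avg = sum(raw_charges) / len(raw_charges)
    return [c - avg for c in raw_charges]

taxes = redistribute([5.0, 2.0, -1.0, 6.0])   # -> [2.0, -1.0, -4.0, 3.0]
```

Users with positive $t_i$ are charged, users with negative $t_i$ receive a subsidy, and the operator's net intake is zero.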
To describe the ``overall satisfaction'' of a user from the performance it receives from all users' actions and the tax it pays for it, we define an ``aggregate utility function'' $u_i^A(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}, t_i): \ensuremath{\mathbb{R}}^{|\ensuremath{\mathcal{R}}_i| + 1} \rightarrow \ensuremath{\mathbb{R}} \cup \{-\infty\}$ for each user $i \in \ensuremath{\mathcal{N}}$:
\begin{equation}
\begin{split}
u_i^A(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}, t_i) :=
\left\{ \begin{array}{rl}
-t_i + u_i(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}), & \mb{if} \ a_i \in \ensuremath{\mathcal{A}}_i,
a_j \in \ensuremath{\mathbb{R}}, j \in \ensuremath{\mathcal{R}}_i \backslash \{i\}, \\
-\infty, & \mb{otherwise.}
\end{array} \right.
\end{split} \label{chap5_Eq_u_i^A}
\end{equation}
Because $u_i$ and $\ensuremath{\mathcal{A}}_i$ are user $i$'s private information (Assumptions~\ref{chap5_Ass_Ai_pvt} and \ref{chap5_Ass_util_conc_pvt}), the aggregate utility $u_i^A$ is also user $i$'s private information. As stated in Assumption~\ref{chap5_Ass_selfish_users}, users are non-cooperative and selfish. Therefore, \emph{the users are self aggregate utility maximizers}.
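The definition of the aggregate utility above can be sketched numerically. In this sketch the feasible set $\mathcal{A}_i$ is taken to be an interval and the concave utility is a made-up placeholder; both are illustrative assumptions, not part of the model:

```python
import math

def aggregate_utility(u_i, A_i, a_R, own_idx, t_i):
    """Sketch of u_i^A: returns -t_i + u_i(a_R) when the user's own
    action a_i = a_R[own_idx] lies in its feasible interval
    A_i = (lo, hi), and -infinity otherwise.  All arguments here are
    illustrative placeholders."""
    lo, hi = A_i
    if not (lo <= a_R[own_idx] <= hi):
        return -math.inf
    return -t_i + u_i(a_R)

# A made-up concave utility of the neighbors' action profile.
u = lambda a: -sum((x - 1.0) ** 2 for x in a)
val = aggregate_utility(u, (0.0, 2.0), [1.0, 1.0], 0, 0.5)   # -0.5
bad = aggregate_utility(u, (0.0, 2.0), [3.0, 1.0], 0, 0.5)   # -inf
```

The second call returns $-\infty$ because the user's own action $3.0$ lies outside its feasible interval, mirroring the ``otherwise'' branch of the definition.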
In this paper we restrict attention to static problems, i.e. we assume,
\begin{Ass} \label{chap5_Ass_net_const}
The set $\ensuremath{\mathcal{N}}$ of users, the graph $\ensuremath{\bm{G}}$, users' action spaces $\ensuremath{\mathcal{A}}_i, i \in \ensuremath{\mathcal{N}}$, and their utility functions $u_i, i \in \ensuremath{\mathcal{N}}$, are fixed in advance and they do not change during the time period of interest.
\end{Ass}
We also assume that,
\begin{Ass} \label{chap5_Ass_nbr_know}
Every user $i \in \ensuremath{\mathcal{N}}$ knows the set $\ensuremath{\mathcal{R}}_i$ of users that affect it as well as the set $\ensuremath{\mathcal{C}}_i$ of users that are affected by it. The network operator knows $\ensuremath{\mathcal{R}}_i$ and $\ensuremath{\mathcal{C}}_i$ for all $i \in \ensuremath{\mathcal{N}}$.
\end{Ass}
In networks where the sets $\ensuremath{\mathcal{R}}_i$ and $\ensuremath{\mathcal{C}}_i$ are not known to the users beforehand, Assumption~\ref{chap5_Ass_nbr_know} is still reasonable for the following reason. Since the graph $\ensuremath{\bm{G}}$ does not change during the time period of interest (Assumption~\ref{chap5_Ass_net_const}), the network operator can communicate the neighbor sets $\ensuremath{\mathcal{R}}_i$ and $\ensuremath{\mathcal{C}}_i, i \in \ensuremath{\mathcal{N}}$, to the respective users before the users determine their actions. Alternatively, the users can themselves determine their neighbor sets before determining their actions.~\fn{The exact method by which the users obtain information about their neighbor sets in a real network depends on the network characteristics.}
Thus, Assumption~\ref{chap5_Ass_nbr_know} can hold true for the rest of the action determination process.
In the next section we present some applications that motivate Model~(M).
\subsection{Applications} \label{chap5_Subsec_apps}
\subsubsection{Application A: Power allocation in cellular networks} \label{chap5_Subsubsec_power_alloc}
Consider a single cell downlink wireless data network consisting of a Base Station (BS) and $N$ mobile users as shown in Fig.~\ref{chap5_Fig_down_link}.
\begin{figure}[!ht]
\begin{minipage}[b]{0.30\linewidth}
\centering
\includegraphics[scale=.25,page=1]{cell_net.pdf}
\caption{A downlink network with $N$ mobile users and one base station}
\label{chap5_Fig_down_link}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[scale=.30,page=1]{ad_cluster.pdf}
\caption{{\small{Three display ad clusters, each consisting of one main ad and two sub ads.}}}
\label{chap5_Fig_ad_cluster}
\end{minipage}
\end{figure}
The BS uses Code Division Multiple Access (CDMA) technology to transmit data to the users, and each mobile user uses a Minimum Mean Square Error Multi-User Detector (MMSE-MUD) receiver to decode its data. The signature codes used by the BS are not completely orthogonal, as this helps increase the capacity of the network. Because of the non-orthogonal codes, each user experiences interference due to the BS transmissions intended for other users. However, as the users in the cell are at different distances from the BS, and the power transmitted by the BS undergoes propagation loss, not all transmissions by the BS create interference for every user. For example, consider arcs $1$ and $N$ shown in Fig.~\ref{chap5_Fig_down_link}, which are centered at the BS. Suppose the radius of arc $1$ is much smaller than that of arc $N$. Then, the signal transmitted by the BS for users inside circle~$1$ (the circle corresponding to arc~$1$) will become negligible by the time it reaches outside users such as user $N$ or user $2$. On the other hand, the BS signals transmitted for user $N$ and user $2$ will be received with significant power by the users inside circle~$1$. This asymmetric interference relation between the mobile users can be depicted by a graph similar to the one shown in Fig.~\ref{chap5_Fig_large_nw}. In this graph, an arrow from $j$ to $i$ would represent that the signal transmitted for user $j$ also affects user $i$. Note that since the signal transmitted for user $i$ must reach $i$, the assumption $i \rightarrow i$ made in Section~\ref{chap5_Subsec_mdl} holds in this case. If the users do not move very fast in the network, the network topology can be assumed to be fixed for small time periods. Therefore, if the BS transmits pilot signals to all network users, the users can determine which signals create interference with their signal reception.
Thus, each user would know its (interfering) neighbor set, as assumed in Assumption~\ref{chap5_Ass_nbr_know}. Note that if the powers transmitted by the BS to the users change, the set of interfering neighbors of each user may also change. This differs from Assumption~\ref{chap5_Ass_G_indep_a} in Model~(M). However, if the transmission power fluctuations resulting from a power allocation mechanism are not large, the set of interfering neighbors can be assumed to be fixed, and this situation is approximated by Assumption~\ref{chap5_Ass_G_indep_a}.
The Quality of Service (QoS) that a user receives from decoding its data is quantified by a utility function. Due to interference, the utility $u_i(\cdot)$ of user $i, i \in \ensuremath{\mathcal{N}}$, is a function of the vector $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}$, where $a_j$ is the transmission power used by the BS to transmit signals to user $j, j \in \ensuremath{\mathcal{N}}$, and $\ensuremath{\mathcal{R}}_i$ is the set of users such that the signals transmitted by the BS to users in $\ensuremath{\mathcal{R}}_i$ also reach user $i$. Note that in this case all transmissions, in other words the actions $a_i, i \in \ensuremath{\mathcal{N}},$ are carried out by the BS, unlike Model~(M) where each user $i \in \ensuremath{\mathcal{N}}$ takes its own action $a_i$. However, as we discuss below, the BS is only an agent which executes the outcome of the mechanism that determines these transmission powers. Thus, we can embed the downlink network scenario into Model~(M) by treating each $a_i$ as a decision ``corresponding'' to user $i, i \in \ensuremath{\mathcal{N}}$, which is executed by the BS for $i$. Since each user uses an MMSE-MUD receiver, a measure of user $i$'s $(i \in \ensuremath{\mathcal{N}})$ utility can be the negative of the MMSE at the output of its receiver,~\fn{See \cite{Verdu} for the derivation of (\ref{chap5_Eq_u_i_cell}).} i.e.,
\begin{equation}
\begin{split}
u_i(\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i})
= - M\!M\!S\!E_i
= - \min_{\bm{z_i}^T \in \ensuremath{\mathbb{R}}^{1 \times N}}
E[\| b_i - \bm{z_i}^T \bm{y_i} \|^2]
= - \big[(\bm{I} + \frac{2}{N_{0i}} \ensuremath{\bm{S}}_i \ensuremath{\bm{X}}_{\ensuremath{\mathcal{R}}_i} \ensuremath{\bm{S}}_i)^{-1}\big]_{ii},
\quad i \in \ensuremath{\mathcal{N}}.
\label{chap5_Eq_u_i_cell}
\end{split}
\end{equation}
In (\ref{chap5_Eq_u_i_cell}) $b_i$ is the transmitted data symbol for user $i$, $\bm{y_i}$ is the output of user $i$'s matched filter generated from its received data, $\bm{I}$ is the identity matrix of size $N \times N$, $N_{0i}/2$ is the two-sided power spectral density (PSD) of the thermal noise, $\ensuremath{\bm{X}}_{\ensuremath{\mathcal{R}}_i}$ is the cross-correlation matrix of the signature waveforms corresponding to the users $j \in \ensuremath{\mathcal{R}}_i$, and $\ensuremath{\bm{S}}_i:=\dg{(S_{ij})_{j \in \ensuremath{\mathcal{R}}_i}}$ is the diagonal matrix consisting of the signal amplitudes $S_{ij}, j \in \ensuremath{\mathcal{R}}_i$, received by user $i$. $S_{ij}$ is related to $a_j$ as $S_{ij}^2 = a_j h_{0i}$, $j \in \ensuremath{\mathcal{R}}_i$, where $h_{0i}$ is the channel gain from the BS to user $i$, which represents the power loss along this path. As shown in \cite{Sharma:2009, Sharma:2011}, the utility function given by (\ref{chap5_Eq_u_i_cell}) is close to concave in $\ensuremath{\bm{a}}_{\ensuremath{\mathcal{R}}_i}$. Thus, Assumption~\ref{chap5_Ass_util_conc_pvt} in Model~(M) can be thought of as an approximation to the downlink network scenario.
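The last expression in (\ref{chap5_Eq_u_i_cell}) can be checked numerically. The sketch below does so for a two-user neighborhood, where the $2\times 2$ matrix inverse can be written out by hand; the noise PSD, received amplitudes, and cross-correlations are made-up values, not parameters from the paper:

```python
def mmse_utility(i, N0i, s, X):
    """u_i = -[(I + (2/N0i) S X S)^{-1}]_{ii} for a 2-user
    neighborhood, with S = diag(s[0], s[1]) the received-amplitude
    matrix and X the signature cross-correlation matrix.
    All inputs are illustrative."""
    c = 2.0 / N0i
    # M = I + c * S X S, written out entrywise for the 2x2 case.
    M = [[1 + c * s[0] * X[0][0] * s[0],     c * s[0] * X[0][1] * s[1]],
         [    c * s[1] * X[1][0] * s[0], 1 + c * s[1] * X[1][1] * s[1]]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    # Diagonal entries of the inverse of a 2x2 matrix.
    inv_ii = (M[1][1] / det) if i == 0 else (M[0][0] / det)
    return -inv_ii

X = [[1.0, 0.5], [0.5, 1.0]]               # normalized cross-correlations
u0 = mmse_utility(0, 2.0, (1.0, 1.0), X)   # = -2/3.75
```

Increasing user 0's received amplitude lowers its MMSE and thus raises its utility, consistent with the interpretation of (\ref{chap5_Eq_u_i_cell}).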
Note that computing user $i$'s utility given in (\ref{chap5_Eq_u_i_cell}) requires knowledge of $N_{0i}$, $\ensuremath{\bm{X}}_{\ensuremath{\mathcal{R}}_i}$, and $h_{0i}$. The BS knows $\ensuremath{\bm{X}}_{\ensuremath{\mathcal{R}}_i}$ for each $i \in \ensuremath{\mathcal{N}}$ as it selects the signature waveform for each user. On the other hand, user $i, i\in\ensuremath{\mathcal{N}}$, knows the PSD $N_{0i}$ of the thermal noise and the channel gain $h_{0i}$, as these can be measured only at the respective receiver. Consider a network where the mobile users are selfish and non-cooperative. Then, these users may not want to reveal their measured values of $N_{0i}$ and $h_{0i}$. On the other hand, if the network operator that owns the BS does not have a utility and is not selfish, then the BS can announce the signature waveforms it uses for each user. Thus, each user $i \in \ensuremath{\mathcal{N}}$ would know its corresponding cross-correlation matrix $\ensuremath{\bm{X}}_{\ensuremath{\mathcal{R}}_i}$ and, consequently, its utility function $u_i$. However, since $N_{0i}$ and $h_{0i}$ are user $i$'s private information, the utility function $u_i$ is private information of user $i$, which is similar to Assumption~\ref{chap5_Ass_util_conc_pvt} in Model~(M). If the wireless channel conditions vary slowly compared to the time period of interest, the channel gains, and hence the users' utility functions, can be assumed to be fixed. As mentioned earlier, for slowly moving users the network topology, and hence the set of interfering neighbors, can also be assumed to be fixed. These features are captured by Assumption~\ref{chap5_Ass_net_const} in Model~(M).
In the presence of limited resources, the provision of desired QoS to all network users may not be possible. To manage the provision of QoS under such a situation the network operator (BS) can charge tax to the users and offer them the following tradeoff. It charges positive tax to the users that obtain a QoS close to their desirable one, and compensates the loss in the QoS of other users by providing a subsidy to them. Such a redistribution of money among users through the BS is possible under Assumption~\ref{chap5_Ass_net_op_acct} in Model~(M).
\subsubsection{Application B: Online advertising} \label{Subsubsec_onl_adv}
Consider an online guaranteed display (GD) ad system. In existing GD systems, individual advertisers sign contracts with web publishers in which the publishers agree to serve (within some given time period) a fixed number of impressions~\fn{A display instance of an ad.} of each ad for a lump sum payment. Here we consider an extension of current GD systems in which multiple advertisers can form clusters, as shown in Fig.~\ref{chap5_Fig_ad_cluster}, and contracts can be signed for the number of impressions of each cluster.
Figure~\ref{chap5_Fig_ad_cluster} shows three display ad clusters. In each ad cluster there is one main ad and two sub ads. For example in Fig.~\ref{chap5_Fig_ad_cluster}-(i), ad~1 is the main ad and ad~2 and ad~3 are the two sub ads.
Suppose that the ad clusters are formed in a way so that in each cluster, the main ad creates a positive externality~\fn{See \cite[Chapter 11]{Mas-Colell} for the definition of externality.} to the sub ads. For example, ad~1 can be a Honda ad, whereas ad~2 and ad~3 can be ads of local Honda dealer and local Honda mechanic. We call the ad cluster in which ad $i$ appears as the main ad as cluster $i$.~\fn{We assume that there is at most one ad cluster in which ad $i$ appears as the main ad; hence such a notation is well defined.} The arrangements of ads in clusters can be described by a graph similar to one shown in Fig.~\ref{chap5_Fig_large_nw}. In this graph an arrow from $j$ to $i$ would represent that ad $i$ appears in cluster $j$. For each impression of cluster $i$, $i \in \{1,2,4\}$, advertiser $i$ pays some fixed prespecified amount of money (bid) $b_i \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}l_+$ to the web publisher. Suppose the bid $b_i$ is known only to advertiser $i$ and the web publisher. Let $a_i, i \in \{1,2,4\},$ denote the number of impressions of clusters $i$ delivered to the users. Furthermore, let $A_i^{max}$ be the maximum number of impressions advertiser $i$ can request for cluster $i$, i.e. $0 \leq a_i \leq A_i^{max}$. The constraint $A_i^{max}$ may arise due to the budget constraint of advertiser $i$, or due to the restrictions imposed by the web publisher. For these reasons $A_i^{max}$ may be private information of advertiser $i$ (similar to Assumption~\ref{chap5_Ass_Ai_pvt}) or private knowledge between advertiser $i$ and the web publisher. Note that in the ad network, the number of impressions $a_i, i \in \{1,2,4\},$ can take only natural number values; therefore, the assumption of convex action sets $\ensuremath{\epsilon}nsuremath{\mathcal{A}}_i, i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}},$ in Assumption~\ref{chap5_Ass_Ai_pvt} can be thought of as an approximation to this case. 
Note also that in the above ad network example, no cluster is associated with advertiser 3 or the web publisher (i.e. there is no cluster with main ad 3 or an ad of the web publisher). Such a scenario can be captured by Model~(M) by associating dummy action variables $a_3$ and $a_0$ with advertiser 3 and web publisher respectively, and assuming $A_3^{max} = A_0^{max} = 0$.
Because of the way clusters are formed, each advertiser obtains a non-negative utility from the impressions of the clusters that it is part of. Thus we can represent the utilities of the four advertisers in the ad network of Fig.~\ref{chap5_Fig_ad_cluster} as follows:
\begin{equation}
\begin{split}
u_1(a_1, a_2) & = c_{11} a_1 + c_{12} a_2 - b_1 a_1 \\
u_2(a_1, a_2, a_4) & = c_{21} a_1 + c_{22} a_2 + c_{24} a_4 - b_2 a_2 \\
u_3(a_1, a_4) &= c_{31} a_1 + c_{34} a_4 \\
u_4(a_2, a_4) &= c_{42} a_2 + c_{44} a_4 - b_4 a_4.
\label{chap5_Eq_advt_util}
\end{split}
\end{equation}
In (\ref{chap5_Eq_advt_util}), $c_{ij} \in \mathbb{R}_+$, $i,j \in \{1,2,3,4\}$, are non-negative real-valued constants. The constant $c_{ij}$ represents the value obtained by advertiser $i$ from each impression of cluster $j$. Suppose that for each $j$, $c_{ij}$ is advertiser $i$'s private information. The term $-b_i a_i$, $i \in \{1,2,4\}$, represents the loss in utility/monetary value incurred by advertiser $i$ due to the prespecified payment it makes to the web publisher. Because the web publisher receives payments from the advertisers, it also obtains a utility, as follows:
\begin{equation}
\begin{split}
u_0(a_1, a_2, a_4) & = b_1 a_1 + b_2 a_2 + b_4 a_4.
\label{chap5_Eq_pub_util}
\end{split}
\end{equation}
Since each bid $b_i$, $i \in \{1,2,4\}$, is known to the web publisher, and none of the advertisers knows all of these bids, the utility function $u_0$ is the web publisher's private information. Similarly, since the $c_{ij}$ for each $j$ and the bid $b_i$ are private information of advertiser $i$, the utility function $u_i$ is advertiser $i$'s private information for each $i \in \{1,2,4\}$. These properties of the utility functions, along with their linearity given by (\ref{chap5_Eq_advt_util}) and (\ref{chap5_Eq_pub_util}), are modeled by Assumption~\ref{chap5_Ass_util_conc_pvt} in Model~(M). If we assume that the arrangement of ads in clusters and the bids $b_i$, $i \in \{1,2,4\}$, of the advertisers are predetermined and do not change with any decision regarding the impression delivery of the various clusters, we obtain Assumptions~\ref{chap5_Ass_G_indep_a} and \ref{chap5_Ass_net_const} in Model~(M).
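To make the payment structure concrete, the following sketch evaluates the advertiser and publisher utilities of (\ref{chap5_Eq_advt_util}) and (\ref{chap5_Eq_pub_util}) on a small instance. All numbers (the per-impression values $c_{ij}$, bids $b_i$, and impression counts $a_i$) are hypothetical, chosen only to illustrate that the bid payments are pure transfers between the advertisers and the web publisher, so the total surplus equals the total impression value.

```python
# Hypothetical instance of the four-advertiser ad network.
# c[i][j]: advertiser i's value per impression of cluster j (private info);
# b[i]: advertiser i's bid per impression of its own cluster (advertiser 3
#       owns no cluster); a[j]: impressions delivered for cluster j.
c = {1: {1: 5.0, 2: 1.0},
     2: {1: 2.0, 2: 4.0, 4: 1.5},
     3: {1: 1.0, 4: 2.0},
     4: {2: 0.5, 4: 6.0}}
b = {1: 3.0, 2: 2.5, 4: 4.0}
a = {1: 100, 2: 80, 4: 50}

def u_advertiser(i):
    """Utility of advertiser i: impression value minus its bid payment."""
    value = sum(cij * a[j] for j, cij in c[i].items())
    payment = b[i] * a[i] if i in b else 0.0
    return value - payment

# Web publisher's utility: the collected bid payments.
u_publisher = sum(b[i] * a[i] for i in b)

# Bid payments cancel in the aggregate: total surplus = total impression value.
total_surplus = u_publisher + sum(u_advertiser(i) for i in c)
total_value = sum(cij * a[j] for i in c for j, cij in c[i].items())
assert abs(total_surplus - total_value) < 1e-9
print(u_advertiser(1), u_publisher)   # 280.0 700.0
```

The cancellation holds for any choice of bids and impression counts, which is why the sum of the advertisers' and publisher's utilities depends only on the $c_{ij}$ terms.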
As represented by (\ref{chap5_Eq_advt_util}), each advertiser benefits from a number of other advertisers by being part of their ad clusters. For this reason, in addition to making a direct payment to the web publisher, each advertiser should also make a payment to those advertisers that create positive externalities for it. Furthermore, because of these mutual payments, the web publisher may offer the advertisers discounts on their direct payments to her. Such discounts can indirectly encourage advertisers to participate in the clustered GD ad scheme. One problem in implementing payments/discounts of the above type, however, is that in a big network the advertisers may not have direct contracts with all of their cluster-sharing advertisers. Furthermore, if the advertisers and the web publisher are strategic and self-utility maximizers, they will try to negotiate payments so as to obtain cluster impressions that maximize their respective utilities. In such a scenario, the above-mentioned redistribution of money among the advertisers and the web publisher can be facilitated through a third-party ad agency to which all advertisers and the web publisher can subscribe. The role of the ad agency can be mapped to that of the network operator in Model~(M).
Having discussed various applications that motivate Model (M), we now go back to this generic model and formulate a resource allocation problem for it.
\subsection{The decentralized local public good provisioning problem ($P_D$)} \label{chap5_Subsec_res_alloc_prb}
For the network model (M) we wish to develop a mechanism to determine the users' action and tax profiles $(\bm{a}_{\mathcal{N}}, \bm{t}_{\mathcal{N}}) := ((a_1, a_2, \dots, a_N), (t_1, t_2, \dots, t_N))$. We want the mechanism to work under the decentralized information constraints of the model and to lead to a solution of the following centralized problem. \\
{\bf The centralized problem ($\bm{P_C}$)}
\begin{equation}
\begin{split}
\max_{(\bm{a}_{\mathcal{N}}, \bm{t}_{\mathcal{N}})} & \quad \sum_{i \in \mathcal{N}} u_i^A(\bm{a}_{\mathcal{R}_i}, t_i) \\
\mb{s.t.} & \quad \sum_{i \in \mathcal{N}} t_i = 0
\label{chap5_Eq_Pc_obj1}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\equiv \ \ & \max_{(\bm{a}_{\mathcal{N}}, \bm{t}_{\mathcal{N}}) \in \mathcal{D}} \quad \sum_{i \in \mathcal{N}} u_i(\bm{a}_{\mathcal{R}_i}) \\
& \mb{where,}\ \ \mathcal{D} := \{ (\bm{a}_{\mathcal{N}}, \bm{t}_{\mathcal{N}}) \in \mathbb{R}^{2N} \mid a_i \in \mathcal{A}_i \;\forall\; i \in \mathcal{N}; \ \sum_{i \in \mathcal{N}} t_i = 0 \}
\label{chap5_Eq_Pc_obj2}
\end{split}
\end{equation}
The centralized optimization problem (\ref{chap5_Eq_Pc_obj1}) is equivalent to (\ref{chap5_Eq_Pc_obj2}) because, for $(\bm{a}_{\mathcal{N}}, \bm{t}_{\mathcal{N}}) \notin \mathcal{D}$, the objective function in (\ref{chap5_Eq_Pc_obj1}) is negative infinity by (\ref{chap5_Eq_u_i^A}). Thus $\mathcal{D}$ is the set of feasible solutions of Problem~($P_C$). Since, by Assumption~\ref{chap5_Ass_util_conc_pvt}, the objective function in (\ref{chap5_Eq_Pc_obj2}) is concave in $\bm{a}_{\mathcal{N}}$ and the sets $\mathcal{A}_i$, $i\in\mathcal{N}$, are convex and compact, there exists an optimal action profile $\bm{a}_{\mathcal{N}}^*$ for Problem~($P_C$). Furthermore, since the objective function in (\ref{chap5_Eq_Pc_obj2}) does not explicitly depend on $\bm{t}_{\mathcal{N}}$, an optimal solution of Problem $(P_C)$ must be of the form $(\bm{a}_{\mathcal{N}}^*, \bm{t}_{\mathcal{N}})$, where $\bm{t}_{\mathcal{N}}$ is any feasible tax profile for Problem $(P_C)$, i.e., a tax profile that satisfies (\ref{chap5_Eq_sum_ti_0}).
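Because the objective of Problem~($P_C$) is concave over compact, convex action sets, an optimal action profile can be computed with standard convex optimization when the utilities are known centrally. The sketch below runs projected gradient ascent on a hypothetical two-user instance with quadratic utilities; the utilities, bounds, and step size are illustrative assumptions, not part of Model~(M).

```python
# Toy centralized problem (P_C): maximize the social welfare
#   S(a1, a2) = -(a1 - 2)^2 - (a2 - 1)^2 - (a1 - a2)^2
# over the box A_1 x A_2 = [0, 3] x [0, 3], via projected gradient ascent.
def grad(a1, a2):
    return (-2.0 * (a1 - 2.0) - 2.0 * (a1 - a2),
            -2.0 * (a2 - 1.0) + 2.0 * (a1 - a2))

def project(x, lo=0.0, hi=3.0):
    # Projection onto the compact, convex action set [lo, hi].
    return min(max(x, lo), hi)

a1 = a2 = 0.0
for _ in range(5000):
    g1, g2 = grad(a1, a2)
    a1, a2 = project(a1 + 0.05 * g1), project(a2 + 0.05 * g2)

# First-order conditions give the interior optimum a* = (5/3, 4/3).
print(round(a1, 4), round(a2, 4))   # 1.6667 1.3333
```

The point of the decentralized mechanism developed next is precisely that no single entity can run such a computation, because no one knows all the utilities.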
The solutions of Problem~($P_C$) are the ideal action and tax profiles that we would like to obtain. If there existed an entity with centralized information about the network, i.e., one that knew all the utility functions $u_i$, $i\in\mathcal{N}$, and all action spaces $\mathcal{A}_i$, $i\in\mathcal{N}$, that entity could compute these ideal profiles by solving Problem~($P_C$). We therefore call the solutions of Problem~($P_C$) optimal centralized allocations. In the network described by Model~(M), no entity knows perfectly all the parameters that describe Problem~$(P_C)$ (Assumptions~\ref{chap5_Ass_Ai_pvt} and \ref{chap5_Ass_util_conc_pvt}). Therefore, we need to develop a mechanism that allows the network users to communicate with one another and that leads to optimal solutions of Problem~$(P_C)$. Since a key assumption in Model~(M) is that the users are strategic and non-cooperative, the mechanism we develop must take into account the users' strategic behavior in their communication with one another. To address all of these issues, we take the approach of implementation theory \cite{Jack01} for the solution of the decentralized local public good provisioning problem for Model~(M). Henceforth we refer to this decentralized allocation problem as Problem ($P_D$). In the next section we present a decentralized mechanism (game form) for local public good provisioning that works under the constraints imposed by Model~(M) and achieves optimal centralized allocations.
\section{A decentralized local public good provisioning mechanism} \label{chap5_Sec_dec_mech}
For Problem $(P_D)$, we want to develop a game form (message space and outcome function) that is \emph{individually rational}, \emph{budget balanced}, and that \emph{implements in Nash equilibria} the goal correspondence defined by the solutions of Problem~$(P_C)$.\fn{The definitions of game form, goal correspondence, individual rationality, budget balance, and implementation in Nash equilibria are given in \cite[Chapter 3]{Sharma_thesis}.} Individual rationality guarantees voluntary participation of the users in the allocation process specified by the game form; budget balance guarantees that no money is left unclaimed/unallocated at the end of the allocation process (i.e., it ensures (\ref{chap5_Eq_sum_ti_0})); and implementation in NE guarantees that the allocations corresponding to the set of NE of the game induced by the game form are a subset of the optimal centralized allocations (the solutions of Problem~($P_C$)).
We would like to clarify at this point the definition of individual rationality (voluntary participation) in the context of our problem. Note that in the network model~(M), the participation/non-participation of each user determines the network structure and the set of local public goods (users' actions) affecting the participating users. To define individual rationality in this setting, we consider our mechanism to consist of two stages, as discussed in \cite[Chapter 7]{Fudenberg_91}. In the first stage, knowing the game form, each user decides whether or not to participate in it. The users who decide not to participate are considered out of the system. Those who decide to participate follow the game form to determine the levels of the local public goods in the network formed by them.\fn{This network is the subgraph obtained by removing the nodes corresponding to non-participating users from the original graph (directed network) constructed by all the users in the system.} In such a two-stage mechanism, individual rationality implies the following: if the network formed by the participating users satisfies all the properties of Model~(M),\fn{In particular, the network formed by the participating users must satisfy Assumption~\ref{chap5_Ass_G_indep_a} that at least three users are affected by each local public good in this network. All other assumptions of Model~(M) automatically carry over to the network formed by any subset of the users in Model~(M).} then, at all NE of the game induced by the game form among the participating users, the utility of each participating user is at least as high as its utility without participation (i.e., if it were out of the system).
We would also like to clarify the rationale behind choosing NE as the solution concept for our problem. Because of Assumptions \ref{chap5_Ass_Ai_pvt} and \ref{chap5_Ass_util_conc_pvt} in Model~(M), the environment of our problem is one of incomplete information. One might therefore consider Bayesian Nash equilibrium or dominant strategies as appropriate solution concepts for our problem. However, since the users in Model~(M) do not possess any prior beliefs about the utility functions and action sets of the other users, we cannot use Bayesian Nash equilibrium as a solution concept for Model~(M). Furthermore, because of the impossibility results for the existence of non-parametric efficient dominant strategy mechanisms in classical public good environments \cite{Groves_Ledyard_1977}, we do not know whether it is possible to design such mechanisms for the local public good environment of Model~(M). The well-known Vickrey-Clarke-Groves (VCG) mechanisms, which achieve incentive compatibility and efficiency with respect to non-numeraire goods, do not guarantee budget balance \cite{Groves_Ledyard_1977}; hence they are inappropriate for our problem, as budget balance is one of our desired properties. VCG mechanisms are also unsuitable for our problem because they are direct mechanisms, and any direct mechanism would require an infinite message space to communicate the generic continuous (and concave) utility functions of the users in Model~(M). For all of the above reasons, and because of the known existence results for non-parametric, individually rational, budget-balanced Nash implementation mechanisms for classical private and public goods environments \cite{Groves_Ledyard_1977}, we choose Nash equilibrium as the solution concept for our problem.
We adopt Nash's original ``mass action'' interpretation of NE \cite[page 21]{Nash_thesis}. Implicit in this interpretation is the assumption that the problem's environment is stable, that is, it does not change before the agents reach their equilibrium strategies. This assumption is consistent with our Assumption~\ref{chap5_Ass_net_const}. Nash's ``mass action'' interpretation of NE has also been adopted in \cite[pp. 69-70]{Groves_Ledyard_1977}, \cite[page 664]{Reiter_Reichel_88}, \cite{Sharma:2011}, and \cite{Kakhbod_12a, Kakhbod_12b}. Specifically, quoting \cite{Reiter_Reichel_88}, ``we interpret our analysis as applying to an unspecified (message exchange) process in which users grope their way to a stationary message and in which the Nash property is a necessary condition for stationarity.''
We next construct a game form for the resource allocation problem ($P_D$) that achieves the above-mentioned desirable properties: Nash implementation, individual rationality, and budget balance.
\subsection{The game form} \label{chap5_Subsec_game_form}
In this section we present a game form for the local public good provisioning problem posed in Section~\ref{chap5_Subsec_res_alloc_prb}. We provide explicit expressions for each component of the game form: the message space and the outcome function. We assume that the game form is common knowledge among the users and the network operator.
{\bf The message space:}
Each user $i\in\mathcal{N}$ sends to the network operator a message $\bm{m}_i \in \mathbb{R}^{|\mathcal{R}_i|} \times \mathbb{R}_{+}^{|\mathcal{R}_i|} =: \mathcal{M}_i$ of the following form:
\begin{eqnarray}
\bm{m}_i := ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} ); \quad
\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|}, \; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_{+}^{|\mathcal{R}_i|},
\label{chap5_Eq_mi}
\end{eqnarray}
\begin{equation}
\mb{where,} \; \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} := ( \tensor*[^{i}]{a}{_k} )_{k \in \mathcal{R}_i}; \;
\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} := ( \tensor*[^{i}]{\pi}{_k} )_{k \in \mathcal{R}_i}, \; i \in \mathcal{N}.
\label{chap5_Eq_i_p_Ai}
\end{equation}
User $i$ also sends the component $(\tensor*[^{i}]{a}{_k}, \tensor*[^{i}]{\pi}{_k})$, $k \in \mathcal{R}_i$, of its message to its neighbor $k \in \mathcal{R}_i$. In this message, $\tensor*[^{i}]{a}{_k}$ is the action that user $i$, $i \in \mathcal{N}$, proposes for user $k$, $k \in \mathcal{R}_i$. Similarly, $\tensor*[^{i}]{\pi}{_k}$ is the price that user $i$, $i \in \mathcal{N}$, proposes to pay for the action of user $k$, $k \in \mathcal{R}_i$. A detailed interpretation of these message elements is given in Section~\ref{chap5_Subsec_Thm}. \\
{\bf The outcome function:}
After the users communicate their messages to the network operator, their actions and taxes are determined as follows. For each user $i \in \mathcal{N}$, the network operator determines the action $\h{a}_i$ of user $i$ from the messages communicated by the neighbors of $i$ that are affected by it (the set $\mathcal{C}_i$), i.e., from the message profile $\bm{m}_{\mathcal{C}_i} := (\bm{m}_k)_{k \in \mathcal{C}_i}$:
\begin{equation}
\h{a}_i(\bm{m}_{\mathcal{C}_i}) = \frac{1}{|\mathcal{C}_i|} \sum_{k \in \mathcal{C}_i} \tensor*[^{k}]{a}{_i}, \quad i \in \mathcal{N}.
\label{chap5_Eq_a_hat_i}
\end{equation}
To determine the users' taxes, the network operator considers each set $\mathcal{C}_j$, $j \in \mathcal{N}$, and assigns indices $1, 2, \dots, |\mathcal{C}_j|$ in a cyclic order to the users in $\mathcal{C}_j$. Each index $1, 2, \dots, |\mathcal{C}_j|$ is assigned to an arbitrary but unique user $i \in \mathcal{C}_j$. Once the indices are assigned to the users in each set $\mathcal{C}_j$, they remain fixed throughout the time period of interest. We denote the index of user $i$ associated with set $\mathcal{C}_j$ by $\mathcal{I}_{ij}$; $\mathcal{I}_{ij} \in \{ 1, 2, \dots, |\mathcal{C}_j| \}$ if $i \in \mathcal{C}_j$, and $\mathcal{I}_{ij} = 0$ if $i \notin \mathcal{C}_j$. Since for each set $\mathcal{C}_j$ each index $1, 2, \dots, |\mathcal{C}_j|$ is assigned to a unique user $i \in \mathcal{C}_j$, we have $\mathcal{I}_{ij} \neq \mathcal{I}_{kj}$ for all $i,k \in \mathcal{C}_j$ such that $i \neq k$. Note also that for any user $i \in \mathcal{N}$ and any $j, k \in \mathcal{R}_i$, the indices $\mathcal{I}_{ij}$ and $\mathcal{I}_{ik}$ are not necessarily the same and are independent of each other.
We denote the user with index $k \in \{ 1, 2, \dots, |\mathcal{C}_j| \}$ in set $\mathcal{C}_j$ by $\mathcal{C}_{j(k)}$. Thus, $\mathcal{C}_{j(\mathcal{I}_{ij})} = i$ for $i \in \mathcal{C}_j$. The cyclic-order indexing means that if $\mathcal{I}_{ij} = |\mathcal{C}_j|$, then $\mathcal{C}_{j(\mathcal{I}_{ij}+1)} = \mathcal{C}_{j(1)}$, $\mathcal{C}_{j(\mathcal{I}_{ij}+2)} = \mathcal{C}_{j(2)}$, and so on. In Fig.~\ref{chap5_Fig_Cj_cyclic_index} we illustrate the above indexing rule for the set $\mathcal{C}_j$ shown in Fig.~\ref{chap5_Fig_large_nw}.
Based on the above indexing, the users' taxes $\h{t}_i$, $i \in \mathcal{N}$, are determined as follows:
\begin{equation}
\begin{split}
\h{t}_i( (\bm{m}_{\mathcal{C}_j})_{j \in \mathcal{R}_i} ) =
& \sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j})\; \h{a}_j(\bm{m}_{\mathcal{C}_j})
+ \sum_{j \in \mathcal{R}_i} \tensor*[^{i}]{\pi}{_j} \left( \tensor*[^{i}]{a}{_j} -
\tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j} \right)^2 \\
& - \sum_{j \in \mathcal{R}_i} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j} \left( \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j} - \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}]{a}{_j} \right)^2
\label{chap5_Eq_t_hat_i}
\end{split}
\end{equation}
\begin{equation}
\mb{ where,} \quad
l_{ij}(\bm{m}_{\mathcal{C}_j}) = \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j} - \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }]{\pi}{_j},
\ \ j \in \mathcal{R}_i, \ \ i \in \mathcal{N}.
\label{chap5_Eq_l_ij}
\end{equation}
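The outcome function is straightforward to implement once a cyclic order is fixed on each set $\mathcal{C}_j$. The sketch below does so for a hypothetical 4-user network (the sets $\mathcal{C}_j$ are invented for illustration, each of size three as Model~(M) requires) and checks numerically that the taxes sum to zero for arbitrary, even off-equilibrium, messages: the price differences and the quadratic penalty terms both telescope around each cycle $\mathcal{C}_j$.

```python
import random

# Hypothetical 4-user network: C[j] lists, in a fixed cyclic order, the users
# affected by user j's action (each |C_j| = 3). R[i] = {j : i in C[j]}.
C = {1: [1, 2, 3], 2: [1, 2, 4], 3: [2, 3, 4], 4: [1, 3, 4]}
R = {i: [j for j in C if i in C[j]] for i in C}

def outcome(msgs):
    """Outcome function: msgs[i] = (action proposals, price proposals),
    two dicts keyed by j in R[i]."""
    # Allocated actions: average of the proposals made by the users in C_i.
    a_hat = {i: sum(msgs[k][0][i] for k in C[i]) / len(C[i]) for i in C}
    t_hat = {}
    for i in R:
        t = 0.0
        for j in R[i]:
            order = C[j]
            pos = order.index(i)
            nxt = order[(pos + 1) % len(order)]    # user C_{j(I_ij + 1)}
            nxt2 = order[(pos + 2) % len(order)]   # user C_{j(I_ij + 2)}
            l_ij = msgs[nxt][1][j] - msgs[nxt2][1][j]
            t += l_ij * a_hat[j]
            t += msgs[i][1][j] * (msgs[i][0][j] - msgs[nxt][0][j]) ** 2
            t -= msgs[nxt][1][j] * (msgs[nxt][0][j] - msgs[nxt2][0][j]) ** 2
        t_hat[i] = t
    return a_hat, t_hat

# Budget balance holds even at arbitrary (off-equilibrium) random messages.
random.seed(0)
msgs = {i: ({j: random.uniform(-2, 2) for j in R[i]},
            {j: random.uniform(0, 1) for j in R[i]}) for i in R}
a_hat, t_hat = outcome(msgs)
assert abs(sum(t_hat.values())) < 1e-9
```

This numerical check mirrors the budget-balance property established analytically in the next subsection.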
We would like to emphasize here that the presence of the network operator is necessary for strategy-proofness and implementation of the above game form. A detailed discussion on the need and significance of the network operator is presented in Section~\ref{chap5_Subsec_impl_mech}.
The game form given by (\ref{chap5_Eq_mi})--(\ref{chap5_Eq_l_ij}) and the users' aggregate utility functions in (\ref{chap5_Eq_u_i^A}) induce a game $(\times_{i\in\mathcal{N}}\mathcal{M}_i, (\h{a}_i, \h{t}_i)_{i\in\mathcal{N}}, \{u_i^A\}_{i \in \mathcal{N}})$. In this game, the set of network users $\mathcal{N}$ are the players, the strategy set of user $i$ is its message space $\mathcal{M}_i$, and a user's payoff is the utility $u_i^A\Big( \big( \h{a}_j( \bm{m}_{\mathcal{C}_j} ) \big)_{j \in \mathcal{R}_i}, \ \h{t}_i\big( (\bm{m}_{\mathcal{C}_j})_{j \in \mathcal{R}_i} \big) \Big)$ that it obtains at the allocation determined by the communicated messages. We define a NE of this game as a message profile $\bm{m}_{\mathcal{N}}^*$ with the following property: $\forall\, i \in \mathcal{N}$ and $\forall\, \bm{m}_i \in \mathcal{M}_i$,
\begin{equation}
\hs{-0.008in}
u_i^A\Big( \big( \h{a}_j( \bm{m}_{\mathcal{C}_j}^* ) \big)_{j \in \mathcal{R}_i}, \;
\h{t}_i\big( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i} \big) \Big)
\geq
u_i^A\Big( \big( \h{a}_j( \bm{m}_i, \bm{m}_{\mathcal{C}_j}^* / i ) \big)_{j \in \mathcal{R}_i}, \;
\h{t}_i\big( ( \bm{m}_i, \bm{m}_{\mathcal{C}_j}^* / i)_{j \in \mathcal{R}_i} \big) \Big).
\label{chap5_Eq_NE}
\end{equation}
As discussed earlier, NE in general describe the strategic behavior of users in games of complete information. This can be seen from (\ref{chap5_Eq_NE}): defining a NE requires complete information about all users' aggregate utility functions. However, the users in Model~(M) do not know each other's utilities; therefore, the game induced by the game form (\ref{chap5_Eq_mi})--(\ref{chap5_Eq_l_ij}) and the users' aggregate utility functions (\ref{chap5_Eq_u_i^A}) is not one of complete information. For our problem we consequently adopt the NE interpretation of \cite{Reiter_Reichel_88} and \cite[Section 4]{Groves_Ledyard_1977}, as discussed at the beginning of Section~\ref{chap5_Sec_dec_mech}. That is, we interpret NE as the ``stationary'' messages of an unspecified (message exchange) process that are characterized by the Nash property (\ref{chap5_Eq_NE}).
In the next section we show that the allocations obtained by the game form (\ref{chap5_Eq_mi})--(\ref{chap5_Eq_l_ij}) at all NE message profiles (i.e., profiles satisfying (\ref{chap5_Eq_NE})) are optimal centralized allocations.
\subsection{Properties of the game form} \label{chap5_Subsec_Thm}
We begin this section with an intuitive discussion on how the game form presented in Section~\ref{chap5_Subsec_game_form} achieves optimal centralized allocations. We then formalize the results in Theorems~\ref{chap5_Thm_NE_opt} and \ref{chap5_Thm_NE_exists}.
To understand how the proposed game form achieves optimal centralized allocations, let us look at the properties of the NE allocations corresponding to this game form. A NE of the game induced by the game form (\ref{chap5_Eq_mi})--(\ref{chap5_Eq_l_ij}) and the users' utility functions (\ref{chap5_Eq_u_i^A}) can be interpreted as follows: Given the users' messages $\bm{m}_k, k \in \mathcal{C}_i$, the outcome function computes user $i$'s action as $1/|\mathcal{C}_i| \big( \sum_{k \in \mathcal{C}_i} \tensor*[^{k}]{a}{_i} \big)$. Therefore, user $i$'s action proposal $\tensor*[^{i}]{a}{_i}$ can be interpreted as the increment that $i$ desires over the sum of the other users' action proposals for $i$, so as to bring its allocated action $\h{a}_i$ to its own desired value. Thus, if the action computed for $i$ based on its neighbors' proposals does not lie in $\mathcal{A}_i$, user $i$ can propose an appropriate action $\tensor*[^{i}]{a}{_i}$ and bring its allocated action within $\mathcal{A}_i$. The flexibility of proposing any action $\tensor*[^{i}]{a}{_i} \in \mathbb{R}$ gives each user $i \in \mathcal{N}$ the capability to bring its allocation $\h{a}_i$ within its feasible set $\mathcal{A}_i$ by unilateral deviation. Therefore, at any NE, $\h{a}_i \in \mathcal{A}_i, \ensuremath{\;\forall\;} i \in \mathcal{N}$.
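As a concrete sketch of the averaging rule just described, the following Python fragment (with hypothetical proposal values; \texttt{proposal\_to\_reach} is an illustrative helper, not part of the mechanism's specification) shows that, with the other users' proposals fixed, user $i$ can always choose its own proposal so that the average $\h{a}_i$ hits any target value.

```python
# Illustration of the averaging outcome rule:
# a_hat_i = (1/|C_i|) * sum of the action proposals for user i.

def allocated_action(proposals):
    """proposals: actions proposed for user i by the users k in C_i."""
    return sum(proposals) / len(proposals)

def proposal_to_reach(target, others):
    """Proposal user i must announce so that the average hits `target`,
    given the (fixed) proposals of the other users in C_i."""
    n = len(others) + 1
    return n * target - sum(others)

others = [2.0, 5.0]                      # neighbors' proposals for i (made up)
a_i = proposal_to_reach(3.0, others)     # i's unilateral deviation
print(allocated_action(others + [a_i]))  # -> 3.0
```

Since the map from $\tensor*[^{i}]{a}{_i}$ to the average is onto $\mathbb{R}$, this is exactly the unilateral-deviation argument used above.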
By taking the sum of the taxes in (\ref{chap5_Eq_t_hat_i}) it can further be seen, after some computation, that the allocated tax profile $(\h{t}_i)_{i \in \mathcal{N}}$ satisfies (\ref{chap5_Eq_sum_ti_0}) (even at off-NE messages). Thus, all NE allocations $\Big( (\h{a}_i(\bm{m}_{\mathcal{C}_i}^*))_{i \in \mathcal{N}}, \; (\h{t}_i( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i} ))_{i \in \mathcal{N}} \Big)$ lie in $\ensuremath{\nabla}o$ and hence are feasible solutions of Problem~($P_C$).
To see further properties of NE allocations, let us look at the tax function in (\ref{chap5_Eq_t_hat_i}). The tax of user $i$ consists of three types of terms. The type-1 term is $\sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j})\; \h{a}_j(\bm{m}_{\mathcal{C}_j})$; it depends on all action proposals for each of user $i$'s neighbors $j \in \mathcal{R}_i$, and on the price proposals for each of these neighbors by users other than user $i$. The type-2 term is $\sum_{j \in \mathcal{R}_i} \tensor*[^{i}]{\pi}{_j} \left( \tensor*[^{i}]{a}{_j} - \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j} \right)^2$; this term depends on $\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}$ as well as $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$. Finally, the type-3 term is $- \sum_{j \in \mathcal{R}_i} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j} \left( \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j} - \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}]{a}{_j} \right)^2$; this term depends only on the messages of users other than $i$.
Since $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$ does not affect the determination of user $i$'s action, and affects only the type-2 term in $\h{t}_i$, the NE strategy of each user $i \in \mathcal{N}$ that minimizes its tax is to propose, for each $j \in \mathcal{R}_i$, $\tensor*[^{i}]{\pi}{_j} = 0$ unless at the NE $\tensor*[^{i}]{a}{_j} = \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j}$. Since the type-2 and type-3 terms in the users' taxes are similar across users, for each $i \in \mathcal{N}$ and $j \in \mathcal{R}_i$, all users $k \in \mathcal{C}_j$ choose the above strategy at NE. Therefore, the type-2 and type-3 terms vanish from every user's tax $\h{t}_i, i \in \mathcal{N},$ at all NE. Thus, the tax that each user $i \in \mathcal{N}$ pays at a NE $\bm{m}_{\mathcal{N}}^*$ is of the form $\sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j}^*)\; \h{a}_j(\bm{m}_{\mathcal{C}_j}^*)$.
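The role of the three types of terms can be illustrated numerically. The sketch below uses made-up scalar prices and proposals for a single neighbor $j$; \texttt{tax\_terms} is a hypothetical helper mirroring the three types of terms, not the exact tax function (\ref{chap5_Eq_t_hat_i}). It shows that once all action proposals for $j$ coincide, as they do at a NE, only the type-1 term $l_{ij}\,\h{a}_j$ survives in user $i$'s tax.

```python
# Hypothetical check: with agreeing action proposals (as at a NE), the
# quadratic type-2 and type-3 terms of user i's tax vanish, leaving only
# the linear "personalized price times allocated action" term.

def tax_terms(price_i, price_next, a_i, a_next, a_next2, l, a_hat):
    """One neighbor j of user i.  All arguments are made-up scalars:
    price_i    : i's price proposal for j            (^i pi_j)
    price_next : next user's price proposal for j    (^{C_j(I_ij+1)} pi_j)
    a_i, a_next, a_next2 : action proposals for j by i and the next two
                           users in the cyclic order of C_j
    l, a_hat   : personalized price l_ij and allocated action a_hat_j."""
    type1 = l * a_hat                               # price * allocation
    type2 = price_i * (a_i - a_next) ** 2           # penalty on i's prices
    type3 = -price_next * (a_next - a_next2) ** 2   # others' penalty, refunded
    return type1, type2, type3

# At NE all proposals for j coincide, so type-2 and type-3 vanish:
t1, t2, t3 = tax_terms(0.7, 0.4, a_i=3.0, a_next=3.0, a_next2=3.0,
                       l=1.5, a_hat=3.0)
print(t1 + t2 + t3)   # -> 4.5, i.e. l_ij * a_hat_j only
```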
The NE term $l_{ij}(\bm{m}_{\mathcal{C}_j}^*), i \in \mathcal{N}, j \in \mathcal{R}_i$, can therefore be interpreted as the ``personalized price'' for user $i$ for the NE action $\h{a}_j(\bm{m}_{\mathcal{C}_j}^*)$ of its neighbor $j$. Note that at a NE, the personalized price for user $i$ is not controlled by $i$'s own message. The reduction of the users' NE taxes to the form $\sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j}^*)\; \h{a}_j(\bm{m}_{\mathcal{C}_j}^*)$ implies that at a NE, each user $i \in \mathcal{N}$ has control over its aggregate utility only through its action proposal.~\fn{Note that user $i$'s action proposal determines the actions of all the users $j \in \mathcal{R}_i$; thus, it affects user $i$'s utility $u_i\Big( \big( \h{a}_j( \bm{m}_{\mathcal{C}_j}^* ) \big)_{j \in \mathcal{R}_i} \Big)$ as well as its tax $\sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j}^*)\; \h{a}_j(\bm{m}_{\mathcal{C}_j}^*)$.} If all other users' messages are fixed, each user has the capability of shifting the allocated action profile $\h{\bm{a}}_{\mathcal{R}_i}$ to its desired value by proposing an appropriate $\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|}$ (see the discussion in the previous paragraph). Therefore, the NE strategy of each user $i \in \mathcal{N}$ is to propose an action profile $\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}$ that results in an allocation $\h{\bm{a}}_{\mathcal{R}_i}$ that maximizes its aggregate utility.
Thus, at a NE, each user maximizes its aggregate utility at its given personalized prices. By the construction of the tax function, the sum of the users' taxes is zero both at and off equilibrium. Thus, the users' individual aggregate utility maximizations also result in the maximization of the sum of the users' aggregate utilities subject to the budget balance constraint, which is the objective of Problem~($P_C$).
It is worth mentioning at this point the significance of the type-2 and type-3 terms in the users' taxes. As explained above, these two terms vanish at NE. However, if for some user $i \in \mathcal{N}$ these terms were not present in its tax $\h{t}_i$, then the price proposal $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$ of user $i$ would not affect its tax and hence its aggregate utility. In such a case, user $i$ could propose arbitrary prices $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$, because they would affect only the other users' NE prices. The presence of the type-2 and type-3 terms in user $i$'s tax prevents such behavior, as they impose a penalty on user $i$ if it proposes a high value of $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$ or if its action proposals for its neighbors deviate too much from the other users' proposals. Even though the presence of the type-2 and type-3 terms in user $i$'s tax is necessary, as explained above, it is important that the NE price $l_{ij}(\bm{m}_{\mathcal{C}_j}^*), j \in \mathcal{R}_i$, of user $i \in \mathcal{N}$ is not affected by $i$'s own proposal $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}$; otherwise, user $i$ might influence its own NE price in an unfair manner and might not behave as a price taker. To avoid such a situation, the type-2 and type-3 terms are designed so that they vanish at NE. Thus, this construction induces price-taking behavior in the users at NE and leads to optimal allocations.
The results that formally establish the above properties of the game form are summarized in Theorems~\ref{chap5_Thm_NE_opt} and \ref{chap5_Thm_NE_exists} below.
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{theorem} \ensuremath{\lambda}bel{chap5_Thm_NE_opt}
Let $\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*$ be a NE of the game induced by the game form presented in Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}). Let $ (\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*, \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) := ( \h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*), \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) ) := \Big( (\h{a}_i(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_i}^*))_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}},$ $(\h{t}_i( (\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_j}^*)_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} ))_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}} \Big)$ be the action and tax profiles at $\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*$ determined by the game form. Then,
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{itemize}
\item[(a)] Each user $i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}$ weakly prefers its allocation $(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^*, \h{t}_i^*)$ to the initial allocation $(\bm{0}, 0)$. Mathematically,
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{equation*}
\ u_i^A \Big( \h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^*, \h{t}_i^* \Big)
\ensuremath{\gamma}eq u_i^A \Big( \bm{0},\; 0 \Big),
\qquad \ensuremath{\epsilon}nsuremath{\;\forall\;} i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}.
\ensuremath{\lambda}bel{chap5_Eq_Thm_NE_opt}
\ensuremath{\epsilon}nd{equation*}
\item[(b)] $(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*, \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*)$ is an optimal solution of Problem ($P_C$).
$\boxtimes$ \vs{2mm}
\ensuremath{\epsilon}nd{itemize}
\ensuremath{\epsilon}nd{theorem}
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{theorem} \ensuremath{\lambda}bel{chap5_Thm_NE_exists}
Let $\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*$ be an optimum action profile corresponding to Problem~($P_C$). Then,
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{itemize}
\item[(a)] There exist a set of personalized prices $l_{ij}^*, j\in\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i, i\in\ensuremath{\epsilon}nsuremath{\mathcal{N}},$ such that
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{equation*}
\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^* =
\argmax_{ \ensuremath{\sigma}tackrel{ \h{a}_i \in \ensuremath{\epsilon}nsuremath{\mathcal{A}}_i }{ \h{a}_j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}l, \, j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i \ensuremath{\epsilon}nsuremath{\ensuremath{\epsilon}nsuremath{\bm{a}}ckslash} \{i\} } }
- \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} l_{ij}^*\; \h{a}_j + u_i(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}),
\quad \ensuremath{\epsilon}nsuremath{\;\forall\;} i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}.
\ensuremath{\lambda}bel{chap5_Eq_Thm_NE_exist_a}
\ensuremath{\epsilon}nd{equation*}
\item[(b)] There exists at least one NE $\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*$ of the game induced by the game form presented in Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}) such that, $\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) = \h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*$. Furthermore, if $\h{t}_i^* := \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} l_{ij}^* \h{a}_j^*, i\in\ensuremath{\epsilon}nsuremath{\mathcal{N}}$, the set of all NE $\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^* = (\ensuremath{\epsilon}nsuremath{\bm{m}}_i^*)_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}} = ( \tensor*[^{i}]{\ensuremath{\epsilon}nsuremath{\bm{a}}}{_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^*}, \tensor*[^{i}]{\ensuremath{\epsilon}nsuremath{\bm{p}}i}{_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^*} )$ that result in $(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*, \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*)$ is characterized by the solution of the following set of conditions:
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{eqnarray*}
\frac{1}{|\ensuremath{\epsilon}nsuremath{\mathcal{C}}_i|} \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{k \in \ensuremath{\epsilon}nsuremath{\mathcal{C}}_i} \tensor*[^{k}]{a}{_i^*}
& =& \h{a}_i^*, \quad i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}, \ensuremath{\lambda}bel{chap5_Thm2_NEchar_a_hat} \\
{}^{ \ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)} }\pi_j^* - {}^{ \ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+2)} }\pi_j^*
& =& l_{ij}^*, \quad j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i, \ i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}, \ensuremath{\lambda}bel{chap5_Thm2_NEchar_lij} \\
\tensor*[^{i}]{\pi}{_j^*}
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)}}a_j^* \right)^2
& =& 0, \quad j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i, \ \ i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}, \ensuremath{\lambda}bel{chap5_Thm2_NEchar_pi_a} \\
\tensor*[^{i}]{\pi}{_j^*}
& \ensuremath{\gamma}eq& 0, \quad j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i, \ \ i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}. \hs{2in}
\boxtimes \ensuremath{\lambda}bel{chap5_Thm2_NEchar_pi_pos}
\ensuremath{\epsilon}nd{eqnarray*}
\ensuremath{\epsilon}nd{itemize}
\ensuremath{\epsilon}nd{theorem}
Because Theorem~\ref{chap5_Thm_NE_opt} is stated for an arbitrary NE $\bm{m}_{\mathcal{N}}^*$ of the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}), the assertion of the theorem holds for all NE of this game. Thus, part~(a) of Theorem~\ref{chap5_Thm_NE_opt} establishes that the game form presented in Section~\ref{chap5_Subsec_game_form} is \emph{individually rational}, i.e., at any NE allocation, the aggregate utility of each user is at least as much as its aggregate utility before participating in the game/allocation process. Because of this property of the game form, each user voluntarily participates in the allocation process.
Part~(b) of Theorem~\ref{chap5_Thm_NE_opt} asserts that all NE of the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}) result in optimal centralized allocations (solutions of Problem~($P_C$)). Thus the set of NE allocations is a subset of the set of optimal centralized allocations. This establishes that the game form of Section~\ref{chap5_Subsec_game_form} \emph{implements in NE} the goal correspondence defined by the solutions of Problem~($P_C$). Because of this property, the game form is guaranteed to provide an optimal centralized allocation irrespective of which NE is reached in the game it induces.
The assertion of Theorem~\ref{chap5_Thm_NE_opt} that establishes the above two properties of the game form presented in Section~\ref{chap5_Subsec_game_form} is based on the assumption that there exists a NE of the game induced by this game form and the users' utility functions (\ref{chap5_Eq_u_i^A}). However, Theorem~\ref{chap5_Thm_NE_opt} does not say anything about the existence of NE. Theorem~\ref{chap5_Thm_NE_exists} asserts that NE exist in the above game, and provides conditions that characterize the set of all NE that result in optimal centralized allocations of the form $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*) = (\h{\bm{a}}_{\mathcal{N}}^* , (\sum_{j \in \mathcal{R}_i} l_{ij}^* \h{a}_j^*)_{i \in \mathcal{N}} )$, where $\h{\bm{a}}_{\mathcal{N}}^*$ is any optimal centralized action profile.
In addition to the above, Theorem~\ref{chap5_Thm_NE_exists} also establishes the following property of the game form. Since the optimal action profile $\h{\bm{a}}_{\mathcal{N}}^*$ in the statement of Theorem~\ref{chap5_Thm_NE_exists} is arbitrary, the theorem implies that the game form of Section~\ref{chap5_Subsec_game_form} can obtain each of the optimal action profiles of Problem~($P_C$) through at least one NE of the induced game. This establishes that the game form is not biased towards any particular optimal centralized action profile.
We present the proofs of Theorem~\ref{chap5_Thm_NE_opt} and Theorem~\ref{chap5_Thm_NE_exists} in Appendices~\ref{chap5_Proof_Thm_NE_opt} and \ref{chap5_Proof_Thm_NE_exists}. In Appendix~\ref{Apx_illus_ex} we also present an example to illustrate how the properties established by Theorems~\ref{chap5_Thm_NE_opt} and \ref{chap5_Thm_NE_exists} are achieved by the proposed game form.
In the next section we discuss how the proposed game form can be implemented in a real system, i.e., how the message communication and the determination of allocations specified by the game form can be carried out in practice.
\subsection{Implementation of the game form} \label{chap5_Subsec_impl_mech}
We explain in this section how the game form of Section~\ref{chap5_Subsec_game_form} can be implemented in the presence of a network operator, and why the presence of a network operator is necessary for its implementation. To this end, let us first suppose that no network operator is present in the network. As discussed in Section~\ref{chap5_Subsec_game_form}, the outcome function specifies the allocation $(\h{a}_i, \h{t}_i)$ for a user $i \in \mathcal{N}$ based on its neighbors' messages. Since the game form is common knowledge among the users, if each user announces its messages to all its neighbors, every user has the set of messages required to compute its own allocation. However, with this kind of local communication, the messages required to compute user $i$'s allocation are not necessarily known to users other than $i$. Therefore, even though the other users know the outcome function for user $i$, no other user can check whether the allocation determined by user $i$ corresponds to its neighbors' messages. Since each user $i \in \mathcal{N}$ is selfish, it cannot be relied upon to determine its own allocation. Therefore, in large-scale systems such as the one represented by Model~(M), where each user does not hear all other users' messages, the presence of a network operator is extremely important. The network operator's role is twofold. First, according to the specification of the game form (of Section~\ref{chap5_Subsec_game_form}), each user announces its messages to its neighbors as well as to the network operator. The network operator knows the network structure (Assumption~\ref{chap5_Ass_nbr_know}) and the outcome function for each user. Thus, it can compute all the allocations based on the messages it receives and tell each user its corresponding allocation (or it can check whether the allocation $(a_i^*, t_i^*)$ implemented by user $i, i \in \mathcal{N}$, is the same as that specified by the mechanism).
The other role of the network operator that facilitates implementation of the game form is the following. Note that the game form specifies a redistribution of money among the users by charging each user an appropriate positive or negative tax (see (\ref{chap5_Eq_sum_ti_0})). This means that tax money must flow from one subset of users to another. Since the users have neither complete network information nor knowledge of the other users' allocations, they cannot determine the appropriate flow of money in the network. The network operator implements this redistribution by acting as an accountant that collects money from the users who must pay a positive tax according to the game form and returns it to the users who must receive subsidies (negative taxes).
\section{Future directions} \label{chap5_Sec_conc}
The problem formulation and the solution of the local public goods provisioning problem presented in this paper open up several new directions for future research. First, the development of efficient mechanisms that can compute NE is an important open problem. This problem can be addressed in two different directions: (i) the development of algorithms that are guaranteed to converge to Nash equilibria of the games constructed in this paper, and (ii) the development of alternative mechanisms/game forms that lead to games with dynamically stable NE. Second, the network model we studied in this paper assumes a given set of users and a given network topology. In many local public good networks, such as social or research networks, the set of network users and the network topology must be determined as part of the network objective maximization. These situations give rise to interesting admission control and network formation problems, many of which are open research problems. Finally, in this paper we focused on a static resource allocation problem in which the characteristics of the local public good network do not change with time. The development of implementation mechanisms for dynamic situations, where the network characteristics change during the determination of the resource allocation, is an open research problem. \\
{\bf Acknowledgments:} \hs{1mm}
This work was supported in part by NSF grant CCF-1111061. The authors are grateful to Y. Chen and A. Anastasopoulos at the University of Michigan for stimulating discussions.
\bibliography{Shruti_full}
\appendices
In Appendices~\ref{chap5_Proof_Thm_NE_opt} and \ref{chap5_Proof_Thm_NE_exists} that follow, we present the proofs of Theorems~\ref{chap5_Thm_NE_opt} and \ref{chap5_Thm_NE_exists}, respectively. We divide each proof into several claims to organize the presentation. In Appendix~\ref{Apx_illus_ex} we present an example illustrating how the properties established by Theorems~\ref{chap5_Thm_NE_opt} and \ref{chap5_Thm_NE_exists} are achieved by the proposed game form.
\section{Proof of Theorem~1} \label{chap5_Proof_Thm_NE_opt}
We prove Theorem~\ref{chap5_Thm_NE_opt} in four claims. In Claims~\ref{chap5_Cl_NE_tax_form} and \ref{chap5_Cl_indv_ratn} we show that all users weakly prefer a NE allocation (corresponding to the game form presented in Section~\ref{chap5_Subsec_game_form}) to their initial allocations; these claims prove part~(a) of Theorem~\ref{chap5_Thm_NE_opt}. In Claim~\ref{chap5_Cl_fes_sol} we show that a NE allocation is a feasible solution of Problem~($P_C$). In Claim~\ref{chap5_Cl_NE_optl} we show that a NE action profile is an optimal action profile for Problem~($P_C$). Thus, Claim~\ref{chap5_Cl_fes_sol} and Claim~\ref{chap5_Cl_NE_optl} establish that a NE allocation is an optimal solution of Problem~($P_C$) and prove part~(b) of Theorem~\ref{chap5_Thm_NE_opt}.
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{Cl} \ensuremath{\lambda}bel{chap5_Cl_fes_sol}
If $\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^{*}$ is a NE of the game induced by the game form presented in Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}), then the action and tax profile $(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*, \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) := (\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*), \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}(\ensuremath{\epsilon}nsuremath{\bm{m}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) )$ is a feasible solution of Problem~($P_C$), i.e. $(\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*, \h{\ensuremath{\epsilon}nsuremath{\bm{t}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{N}}}^*) \in \ensuremath{\epsilon}nsuremath{\nabla}o$.
\ensuremath{\epsilon}nd{Cl}
{\bf Proof:}
We prove the feasibility of the NE action and tax profiles in two steps. First we prove the feasibility of the NE tax profile, then we prove the feasibility of the NE action profile.
To prove the feasibility of the NE tax profile, we need to show that it satisfies (\ref{chap5_Eq_sum_ti_0}). To do so, we first take the sum of the second and third terms on the right-hand side (RHS) of (\ref{chap5_Eq_t_hat_i}) over all $i \in \mathcal{N}$, i.e.
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{equation}
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{split}
\ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}} \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} \Bigg[
\tensor*[^{i}]{\pi}{_j} \left(\tensor*[^{i}]{a}{_j} -
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)}}]{a}{_j}\right)^2
-\tensor*[^{ \ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)} }]{\pi}{_j} \left(
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)}}]{a}{_j} -
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+2)}}]{a}{_j} \right)^2 \Bigg].
\ensuremath{\lambda}bel{chap5_Eq_sum_ti_23_one}
\ensuremath{\epsilon}nd{split}
\ensuremath{\epsilon}nd{equation}
From the construction of the graph matrix $\mathcal{G}$ and the sets $\mathcal{R}_i$ and $\mathcal{C}_j$, $i,j \in \mathcal{N}$, the sum $\sum_{i \in \mathcal{N}} \sum_{j \in \mathcal{R}_i} (\cdot)$ is equal to the sum $\sum_{j \in \mathcal{N}} \sum_{i \in \mathcal{C}_j} (\cdot)$. Therefore, we can rewrite (\ref{chap5_Eq_sum_ti_23_one}) as
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{equation}
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{split}
\ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}} \Bigg[
\ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{C}}_j} \tensor*[^{i}]{\pi}{_j} \left(\tensor*[^{i}]{a}{_j} -
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)}}]{a}{_j}\right)^2
- \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{i \in \ensuremath{\epsilon}nsuremath{\mathcal{C}}_j} \tensor*[^{ \ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)} }]{\pi}{_j} \left(
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+1)}}]{a}{_j} -
\tensor*[^{\ensuremath{\epsilon}nsuremath{\mathcal{C}}_{j(\ensuremath{\epsilon}nsuremath{\mathcal{I}}_{ij}+2)}}]{a}{_j} \right)^2 \Bigg].
\ensuremath{\lambda}bel{chap5_Eq_sum_ti_23_two}
\ensuremath{\epsilon}nd{split}
\ensuremath{\epsilon}nd{equation}
Note that both the sums inside the square brackets in (\ref{chap5_Eq_sum_ti_23_two}) are over all $i \in \ensuremath{\epsilon}nsuremath{\mathcal{C}}_j$. Because of the cyclic indexing of the users in each set $\ensuremath{\epsilon}nsuremath{\mathcal{C}}_j$, $j \in \ensuremath{\epsilon}nsuremath{\mathcal{N}}$, these two sums are equal. Therefore the overall sum in (\ref{chap5_Eq_sum_ti_23_two}) evaluates to zero. Thus, the sum of taxes in (\ref{chap5_Eq_t_hat_i}) reduces to
\begin{equation}
\begin{split}
\sum_{i \in \mathcal{N}} \h{t}_i( (\bm{m}_{\mathcal{C}_j})_{j \in \mathcal{R}_i}) =
\sum_{i \in \mathcal{N}} \sum_{j \in \mathcal{R}_i} l_{ij}(\bm{m}_{\mathcal{C}_j})\; \h{a}_j(\bm{m}_{\mathcal{C}_j}).
\label{chap5_Eq_sum_ti_1_one}
\end{split}
\end{equation}
Combining (\ref{chap5_Eq_l_ij}) and (\ref{chap5_Eq_sum_ti_1_one}) we obtain
\begin{equation}
\begin{split}
\sum_{i \in \mathcal{N}} \h{t}_i( (\bm{m}_{\mathcal{C}_j})_{j \in \mathcal{R}_i})
= \sum_{j \in \mathcal{N}} \Bigg[ \sum_{i \in \mathcal{C}_j} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j}
- \sum_{i \in \mathcal{C}_j} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }]{\pi}{_j} \Bigg] \h{a}_j(\bm{m}_{\mathcal{C}_j})
= 0.
\label{chap5_Eq_sum_ti_1_two}
\end{split}
\end{equation}
The second equality in (\ref{chap5_Eq_sum_ti_1_two}) follows because the cyclic indexing of the users in each set $\mathcal{C}_j$, $j \in \mathcal{N}$, makes the two sums inside the square brackets in (\ref{chap5_Eq_sum_ti_1_two}) equal. Because (\ref{chap5_Eq_sum_ti_1_two}) holds for every message profile $\bm{m}_{\mathcal{N}}$, it holds in particular at any NE $\bm{m}_{\mathcal{N}}^*$,
\begin{equation}
\sum_{i \in \mathcal{N}} \h{t}_i( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i}) = 0.
\label{chap5_Eq_sum_ti_NE_0}
\end{equation}
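For concreteness, the cyclic cancellation used above can be checked on a small example (a hypothetical item $j$ with three users; the indices $\mathcal{I}_{ij}+1$ and $\mathcal{I}_{ij}+2$ are interpreted cyclically). Suppose $\mathcal{C}_j = \{1,2,3\}$, so that the cyclic successor of user $1$ is user $2$, of user $2$ is user $3$, and of user $3$ is user $1$. The bracketed term appearing in (\ref{chap5_Eq_sum_ti_1_two}) then evaluates to
\begin{equation*}
\sum_{i \in \mathcal{C}_j} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j}
- \sum_{i \in \mathcal{C}_j} \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }]{\pi}{_j}
= \big( \tensor*[^{2}]{\pi}{_j} + \tensor*[^{3}]{\pi}{_j} + \tensor*[^{1}]{\pi}{_j} \big)
- \big( \tensor*[^{3}]{\pi}{_j} + \tensor*[^{1}]{\pi}{_j} + \tensor*[^{2}]{\pi}{_j} \big)
= 0.
\end{equation*}
Both sums enumerate the same price proposals, merely shifted by one position, which is why the taxes balance for every message profile and not only at a NE.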
To complete the proof of Claim~\ref{chap5_Cl_fes_sol}, we have to prove that $\h{a}_i(\bm{m}_{\mathcal{C}_i}^*) \in \mathcal{A}_i$ for all $i \in \mathcal{N}$. We prove this by contradiction.
Suppose $\h{a}_i^* \notin \mathcal{A}_i$ for some $i \in \mathcal{N}$. Then, from (\ref{chap5_Eq_u_i^A}), $u_i^A( \h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^* ) = -\infty$. Consider the message $\wt{\bm{m}}_i = ( ( \tensor*[^{i}]{\wt{a}}{_i}, \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*} / i ), \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}^*} )$, where $\tensor*[^{i}]{a}{_k^*}, k \in \mathcal{R}_i \setminus \{i\},$ and $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}^*}$ are respectively the NE action and price proposals of user $i$, and $\tensor*[^{i}]{\wt{a}}{_i}$ is chosen such that
\begin{equation}
\h{a}_i(\wt{\bm{m}}_i, \bm{m}_{\mathcal{C}_i}^*/i)
= \frac{1}{|\mathcal{C}_i|} \Big( \tensor*[^{i}]{\wt{a}}{_i} +
\sum_{\stackrel{k\in\mathcal{C}_i}{ k\neq i}} \tensor*[^{k}]{a}{_i^*} \Big)
\in \mathcal{A}_i.
\label{chap5_Eq_Cl1_a_m_tilda}
\end{equation}
Note that the flexibility of user $i$ in choosing any message $\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|}$ (see (\ref{chap5_Eq_mi})) allows it to choose an appropriate $\tensor*[^{i}]{\wt{a}}{_i}$ that satisfies the condition in (\ref{chap5_Eq_Cl1_a_m_tilda}). For the message $\wt{\bm{m}}_i$ constructed above,
\begin{equation}
\begin{split}
& u_i^A \Big( \big(\h{a}_{k}( \wt{\bm{m}}_i, \bm{m}_{\mathcal{C}_k}^*/i ) \big)_{k\in\mathcal{R}_i},\;\;
\h{t}_i \big(( \wt{\bm{m}}_i , \,\bm{m}_{\mathcal{C}_j}^* / i )_{j \in \mathcal{R}_i} \big) \Big) \\
= & - \h{t}_i \big(( \wt{\bm{m}}_i , \,\bm{m}_{\mathcal{C}_j}^* / i )_{j \in \mathcal{R}_i} \big)
+ u_i \Big( \big(\h{a}_{k}( \wt{\bm{m}}_i, \bm{m}_{\mathcal{C}_k}^*/i ) \big)_{k\in\mathcal{R}_i} \Big)
> - \infty = u_i^A( \h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^* ).
\label{chap5_Eq_Cl1_user_dev}
\end{split}
\end{equation}
Thus, if $\h{a}_i(\bm{m}_{\mathcal{C}_i}^*) \notin \mathcal{A}_i$, user $i$ finds it profitable to deviate to $\wt{\bm{m}}_i$. Inequality~(\ref{chap5_Eq_Cl1_user_dev}) implies that $\bm{m}_{\mathcal{N}}^*$ cannot be a NE, which is a contradiction. Therefore, at any NE $\bm{m}_{\mathcal{N}}^*$, we must have $\h{a}_i(\bm{m}_{\mathcal{C}_i}^*) \in \mathcal{A}_i \;\forall\; i \in \mathcal{N}$. This along with (\ref{chap5_Eq_sum_ti_NE_0}) implies that $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$ is a feasible solution of Problem~$(P_{C})$.
\fbox{}
\begin{Cl} \label{chap5_Cl_NE_tax_form}
If $\bm{m}_{\mathcal{N}}^{*}$ is a NE of the game induced by the game form presented in Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}), then the tax $\h{t}_i( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i}) =: \h{t}_i^*$ paid by user $i, i \in \mathcal{N},$ at the NE $\bm{m}_{\mathcal{N}}^*$ is of the form $\h{t}_i^* = \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^*$, where $l_{ij}^* = l_{ij}(\bm{m}_{\mathcal{C}_j}^*)$ and $\h{a}_j^* = \h{a}_j(\bm{m}_{\mathcal{C}_j}^*)$.
\end{Cl}
{\bf Proof:}
Let $\bm{m}_{\mathcal{N}}^*$ be the NE specified in the statement of Claim~\ref{chap5_Cl_NE_tax_form}. Then, for each $i\in\mathcal{N}$,
\begin{equation}
\begin{split}
u_i^A \Big( \big(\h{a}_{k}( \bm{m}_i, \bm{m}_{\mathcal{C}_k}^*/i ) \big)_{k\in\mathcal{R}_i}, \;
\h{t}_i \big(( \bm{m}_i , \,\bm{m}_{\mathcal{C}_j}^* / i )_{j \in \mathcal{R}_i} \big) \Big)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^* \Big), \, \;\forall\; \bm{m}_i \in \mathcal{M}_i.
\label{chap5_Cl2_uiA_NE_dev}
\end{split}
\end{equation}
Substituting $\bm{m}_i = ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*} , \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} ), \ \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}$, in (\ref{chap5_Cl2_uiA_NE_dev}) and using (\ref{chap5_Eq_a_hat_i}) implies that
\begin{equation}
\begin{split}
u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \;\; \h{t}_i \big( ( (\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*},
\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}), \, \bm{m}_{\mathcal{C}_j}^* / i )_{j \in \mathcal{R}_i} \big) \Big)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^* \Big), \quad
\;\forall\; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}.
\label{chap5_Cl2_uiA_NE_pi_dev}
\end{split}
\end{equation}
Since $u_i^A$ decreases in $t_i$ (see (\ref{chap5_Eq_u_i^A})), (\ref{chap5_Cl2_uiA_NE_pi_dev}) implies that
\begin{equation}
\begin{split}
\h{t}_i \Big( \big( (\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*} , \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}), \, \bm{m}_{\mathcal{C}_j}^* / i \big)_{j \in \mathcal{R}_i} \Big)
\geq \h{t}_i^*, \quad
\;\forall\; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}.
\label{chap5_Cl2_ti_NE_pi_dev}
\end{split}
\end{equation}
Substituting (\ref{chap5_Eq_t_hat_i}) in (\ref{chap5_Cl2_ti_NE_pi_dev}) results in
\begin{equation}
\begin{split}
\sum_{j \in \mathcal{R}_i} \bigg[ l_{ij}^* \h{a}_j^*
+ \tensor*[^{i}]{\pi}{_j}
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
- {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2 \bigg] & \geq \\
\sum_{j \in \mathcal{R}_i} \bigg[ l_{ij}^* \h{a}_j^*
+ \tensor*[^{i}]{\pi}{_j^*}
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
- {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2 \bigg] &,
\;\forall\; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}.
\label{chap5_Cl2_ti_exp_NE}
\end{split}
\end{equation}
Canceling the common terms in (\ref{chap5_Cl2_ti_exp_NE}) gives
\begin{equation}
\begin{split}
\sum_{j \in \mathcal{R}_i}
( \tensor*[^{i}]{\pi}{_j} - \tensor*[^{i}]{\pi}{_j^*} )
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
\geq 0, \qquad
\;\forall\; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}.
\label{chap5_Cl2_ti_cancel_NE}
\end{split}
\end{equation}
Since (\ref{chap5_Cl2_ti_cancel_NE}) must hold for all $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}$, in particular for $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} = \bm{0}$, in which case every term of the sum is nonpositive and hence must vanish, we must have that
\begin{equation}
\begin{split}
\mb{for each} \ j \in \mathcal{R}_i, \ \mb{either} \
\tensor*[^{i}]{\pi}{_j^*} = 0 \ \mb{or} \
\tensor*[^{i}]{a}{_j^*} = {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^*.
\label{chap5_Cl2_pi_a_0}
\end{split}
\end{equation}
From (\ref{chap5_Cl2_pi_a_0}) it follows that at any NE $\bm{m}_{\mathcal{N}}^*$,
\begin{equation}
\begin{split}
\tensor*[^{i}]{\pi}{_j^*}
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
= 0,
\ \ \;\forall\; j \in \mathcal{R}_i, \ \ \;\forall\; i \in \mathcal{N}.
\label{chap5_Cl2_pi_a_0_NE}
\end{split}
\end{equation}
Note that (\ref{chap5_Cl2_pi_a_0_NE}) also implies that for all $i \in \mathcal{N}$ and all $j \in \mathcal{R}_i$,
\begin{equation}
\begin{split}
{}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2
= 0.
\label{chap5_Cl2_pi_a_next_0_NE}
\end{split}
\end{equation}
Equation (\ref{chap5_Cl2_pi_a_next_0_NE}) follows from (\ref{chap5_Cl2_pi_a_0_NE}) because for each $i \in \mathcal{N}$, $j \in \mathcal{R}_i$ also implies that $j \in \mathcal{R}_{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}$.
Using (\ref{chap5_Cl2_pi_a_0_NE}) and (\ref{chap5_Cl2_pi_a_next_0_NE}) in (\ref{chap5_Eq_t_hat_i}), we obtain that any NE tax profile must be of the form
\begin{equation}
\h{t}_i^* = \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^*, \quad \;\forall\; i \in \mathcal{N}.
\label{chap5_Cl2_NE_tax_form}
\end{equation}
\fbox{}
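Claim~\ref{chap5_Cl_NE_tax_form} admits a useful interpretation. As can be read off from the expansion in (\ref{chap5_Eq_sum_ti_1_two}), the coefficient $l_{ij}$ defined in (\ref{chap5_Eq_l_ij}) takes the form
\begin{equation*}
l_{ij}^* = \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }]{\pi}{_j^*}
- \tensor*[^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }]{\pi}{_j^*},
\end{equation*}
i.e., it is built solely from the price proposals of the two cyclic successors of user $i$ in $\mathcal{C}_j$ and does not involve user $i$'s own proposal $\tensor*[^{i}]{\pi}{_j^*}$. At a NE, user $i$ therefore faces, for each $j \in \mathcal{R}_i$, a personalized per-unit price that it cannot influence unilaterally, and it pays a tax that is linear in the allocated actions $\h{a}_j^*$.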
\begin{Cl} \label{chap5_Cl_indv_ratn}
The game form given in Section~\ref{chap5_Subsec_game_form} is individually rational, i.e., at every NE $\bm{m}_{\mathcal{N}}^*$ of the game induced by this game form and the users' utilities in (\ref{chap5_Eq_u_i^A}), each user $i \in \mathcal{N}$ weakly prefers the allocation $(\h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^*)$ to the initial allocation $(\bm{0},0)$. Mathematically,
\begin{equation}
u_i^A \Big( \bm{0},\; 0 \Big)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \h{t}_i^* \Big),
\qquad \;\forall\; i \in \mathcal{N}.
\end{equation}
\end{Cl}
{\bf Proof:}
Suppose $\bm{m}_{\mathcal{N}}^{*}$ is a NE of the game induced by the game form presented in Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}). From Claim~\ref{chap5_Cl_NE_tax_form} we know the form of the users' taxes at $\bm{m}_{\mathcal{N}}^*$. Substituting that form from (\ref{chap5_Cl2_NE_tax_form}) into (\ref{chap5_Cl2_uiA_NE_dev}), we obtain that for each $i \in \mathcal{N}$,
\begin{equation}
\begin{split}
u_i^A \Big( \big(\h{a}_{k}( \bm{m}_i, \bm{m}_{\mathcal{C}_k}^*/i ) \big)_{k\in\mathcal{R}_i},\;\;
\h{t}_i \big(( \bm{m}_i , \,\bm{m}_{\mathcal{C}_j}^* / i )_{j \in \mathcal{R}_i} \big) \Big)
\leq u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^* \Big) &, \\
\;\forall\; \bm{m}_i = ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} ) \in \mathcal{M}_i &.
\label{chap5_Cl3_uiA_NE_dev}
\end{split}
\end{equation}
Substituting for $\h{t}_i$ in (\ref{chap5_Cl3_uiA_NE_dev}) from (\ref{chap5_Eq_t_hat_i}) and using (\ref{chap5_Cl2_pi_a_next_0_NE}), we obtain
\begin{equation}
\begin{split}
& u_i^A \bigg(
\Big(\h{a}_{k} \big( ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}},
\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} ), \bm{m}_{\mathcal{C}_k}^*/i \big) \Big)_{k\in\mathcal{R}_i},\;\;
\sum_{j \in \mathcal{R}_i} \! \Big( l_{ij}^*\; \h{a}_j \big(( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}},
\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}}), \bm{m}_{\mathcal{C}_j}^*/i \big)
+ \tensor*[^{i}]{\pi}{_j} \big( \tensor*[^{i}]{a}{_j}
- \tensor*[^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}]{a}{_j} \big)^2 \Big) \! \bigg) \\
& \leq u_i^A \Big(
\h{\bm{a}}_{\mathcal{R}_i}^*, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^* \Big),
\qquad \qquad \qquad \qquad \qquad \qquad
\;\forall\; \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|},
\;\forall\; \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} \in \mathbb{R}_+^{|\mathcal{R}_i|}.
\label{chap5_Cl3_uiA_ti_exp_dev}
\end{split}
\end{equation}
In particular, setting $\tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} = \bm{0}$ in (\ref{chap5_Cl3_uiA_ti_exp_dev}) implies that
\begin{equation}
\begin{split}
u_i^A \bigg( \Big(\h{a}_{k} \big( ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \bm{0} ),
\bm{m}_{\mathcal{C}_k}^*/i \big) \Big)_{k\in\mathcal{R}_i}, \;
\sum_{j \in \mathcal{R}_i} \Big( l_{ij}^*\; \h{a}_j \big(
( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \bm{0} ), \bm{m}_{\mathcal{C}_j}^*/i \big) \Big) \bigg)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^* \Big),
\;\forall\; \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|}.
\label{chap5_Cl3_uiA_pi_0_dev}
\end{split}
\end{equation}
Since (\ref{chap5_Cl3_uiA_pi_0_dev}) holds for all $\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|}$, we may choose $\tensor*[^{i}]{a}{_j}$ so that $\frac{1}{|\mathcal{C}_j|} \big( \tensor*[^{i}]{a}{_j} + \sum_{k \in \mathcal{C}_j \setminus \{i\} } \tensor*[^{k}]{a}{_j^*} \big) = \ol{a}_j$ for any given $\ol{a}_j$, $j \in \mathcal{R}_i$; this substitution implies
\begin{equation}
\begin{split}
u_i^A \Big( \big( \ol{a}_j \big)_{j \in \mathcal{R}_i},\;
\sum_{j \in \mathcal{R}_i} \big( l_{ij}^*\; \ol{a}_j \big) \Big)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^* \Big),
\; \;\forall\; \ol{a}_{\mathcal{R}_i} := (\ol{a}_j)_{j \in \mathcal{R}_i} \in \mathbb{R}^{|\mathcal{R}_i|}.
\label{chap5_Cl3_iaj_subs_dev}
\end{split}
\end{equation}
For $\ol{a}_{\mathcal{R}_i} = \bm{0}$, (\ref{chap5_Cl3_iaj_subs_dev}) implies further that
\begin{equation}
\begin{split}
u_i^A \Big( \bm{0},\; 0 \Big)
\leq \ u_i^A \Big( \h{\bm{a}}_{\mathcal{R}_i}^*, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j^* \Big),
\qquad \qquad\qquad\qquad\qquad \;\forall\; i \in \mathcal{N}.
\label{chap5_Cl3_ind_ratn}
\end{split}
\end{equation}
\fbox{}
\begin{Cl} \label{chap5_Cl_NE_optl}
A NE allocation $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$ is an optimal solution of the centralized problem~$(P_{C})$.
\end{Cl}
{\bf Proof:}
For each $i \in \mathcal{N}$, (\ref{chap5_Cl3_iaj_subs_dev}) can be equivalently written as
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{equation}
\ensuremath{\epsilon}nsuremath{\bm{e}}gin{split}
\h{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}^*
& \in \argmax_{\ol{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}l^{|\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i|}}
u_i^A \Big( \ol{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i},\; \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} l_{ij}^*\; \ol{a}_j \Big) \\
& = \argmax_{ \ensuremath{\sigma}tackrel{ \ol{a}_i \in \ensuremath{\epsilon}nsuremath{\mathcal{A}}_i }{ \ol{a}_j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}l, \, j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i \ensuremath{\epsilon}nsuremath{\ensuremath{\epsilon}nsuremath{\bm{a}}ckslash} \{i\} } }
\left\{ - \ensuremath{\epsilon}nsuremath{\ensuremath{\sigma}upset}m_{j \in \ensuremath{\epsilon}nsuremath{\mathcal{R}}_i} l_{ij}^*\; \ol{a}_j + u_i(\ol{\ensuremath{\epsilon}nsuremath{\bm{a}}}_{\ensuremath{\epsilon}nsuremath{\mathcal{R}}_i}) \right\}
\ensuremath{\lambda}bel{chap5_Cl4_aRi*_argmax}
\ensuremath{\epsilon}nd{split}
\ensuremath{\epsilon}nd{equation}
For each $i \in \mathcal{N}$, let $f_{\mathcal{A}_i}(a_i)$ be a convex function that characterizes the set $\mathcal{A}_i$, that is, $a_i \in \mathcal{A}_i \Leftrightarrow f_{\mathcal{A}_i}(a_i) \leq 0$.~\fn{By \cite{Boyd}, we can always find a convex function that characterizes a convex set.}
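For instance (an illustrative choice on our part, not one fixed by the argument), if $\mathcal{A}_i$ is an interval $[m_i, M_i]$, one may take
\begin{equation*}
f_{\mathcal{A}_i}(a_i) = \max\{\, m_i - a_i,\; a_i - M_i \,\},
\end{equation*}
which is convex, being the pointwise maximum of two affine functions, and satisfies $f_{\mathcal{A}_i}(a_i) \leq 0$ if and only if $a_i \in [m_i, M_i]$.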
Since for each $i \in \mathcal{N}$, $u_i( \ol{\bm{a}}_{\mathcal{R}_i} )$ is assumed to be concave in $\ol{\bm{a}}_{\mathcal{R}_i}$ and the set $\mathcal{A}_i$ is convex, the Karush-Kuhn-Tucker (KKT) conditions \cite[Chapter 11]{Boyd} are necessary and sufficient for $\h{\bm{a}}_{\mathcal{R}_i}^*$ to be a maximizer in (\ref{chap5_Cl4_aRi*_argmax}). Thus, for each $i \in \mathcal{N}$ there exists $\lambda_i \in \mathbb{R}_+$ such that $\h{\bm{a}}_{\mathcal{R}_i}^*$ and $\lambda_i$ satisfy the following KKT conditions:
\begin{equation}
\begin{split}
\;\forall\; j \in \mathcal{R}_i \setminus \{i\}, \quad l_{ij}^* - \nabla_{\ol{a}_j}
u_i(\ol{\bm{a}}_{\mathcal{R}_i})\mid_{ \ol{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* } & = 0, \\
l_{ii}^*
- \nabla_{\ol{a}_i} u_i(\ol{\bm{a}}_{\mathcal{R}_i})\mid_{ \ol{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* }
+ \lambda_i \nabla_{\ol{a}_i} f_{\mathcal{A}_i}(\ol{a}_i)\mid_{ \ol{a}_i = \h{a}_i^* } & = 0, \\
\lambda_i f_{\mathcal{A}_i}(\h{a}_i^*) & = 0.
\label{chap5_Cl4_KKT_indiv}
\end{split}
\end{equation}
For each $i \in \mathcal{N}$, adding the KKT condition equations in (\ref{chap5_Cl4_KKT_indiv}) over $k \in \mathcal{C}_i$ results in
\begin{equation}
\begin{split}
\sum_{k \in \mathcal{C}_i} l_{ki}^* - \nabla_{\ol{a}_i} \sum_{k \in \mathcal{C}_i}
u_k(\ol{\bm{a}}_{\mathcal{R}_k})\mid_{ \ol{\bm{a}}_{\mathcal{R}_k} = \h{\bm{a}}_{\mathcal{R}_k}^* }
+ \lambda_i \nabla_{\ol{a}_i} f_{\mathcal{A}_i}(\ol{a}_i)\mid_{ \ol{a}_i = \h{a}_i^* }
\ = \ 0.
\label{chap5_Cl4_KKT_indiv_sum}
\end{split}
\end{equation}
From (\ref{chap5_Eq_l_ij}) we have
\begin{equation}
\begin{split}
\sum_{k \in \mathcal{C}_i} l_{ki}^*
& = \sum_{k \in \mathcal{C}_i} \left( {}^{ \mathcal{C}_{i(\mathcal{I}_{ki}+1)} }\pi_i^*
- {}^{ \mathcal{C}_{i(\mathcal{I}_{ki}+2)} }\pi_i^* \right) = 0.
\label{chap5_Cl4_sum_lik_0}
\end{split}
\end{equation}
Substituting (\ref{chap5_Cl4_sum_lik_0}) in (\ref{chap5_Cl4_KKT_indiv_sum}), we obtain,~\fn{The second equality in (\ref{chap5_Cl4_KKT_cent}) is one of the KKT conditions from (\ref{chap5_Cl4_KKT_indiv}).} for all $i \in \mathcal{N}$,
\begin{equation}
\begin{split}
- \nabla_{\ol{a}_i} \sum_{k \in \mathcal{C}_i}
u_k(\ol{\bm{a}}_{\mathcal{R}_k})\mid_{ \ol{\bm{a}}_{\mathcal{R}_k} = \h{\bm{a}}_{\mathcal{R}_k}^* }
+ \lambda_i \nabla_{\ol{a}_i} f_{\mathcal{A}_i}(\ol{a}_i)\mid_{ \ol{a}_i = \h{a}_i^* } & = 0, \\
\lambda_i f_{\mathcal{A}_i}(\h{a}_i^*) & = 0.
\label{chap5_Cl4_KKT_cent}
\end{split}
\end{equation}
The conditions in (\ref{chap5_Cl4_KKT_cent}), along with the non-negativity of $\lambda_i, i \in \mathcal{N}$, are the KKT conditions (in the variable $\h{\bm{a}}_{\mathcal{N}}$) for Problem~($P_C$). Since ($P_C$) is a concave optimization problem, the KKT conditions are necessary and sufficient for optimality. As shown in (\ref{chap5_Cl4_KKT_cent}), the action profile $\h{\bm{a}}_{\mathcal{N}}^*$ satisfies these optimality conditions. Furthermore, the tax profile $\h{\bm{t}}_{\mathcal{N}}^*$ satisfies, by its definition, $\sum_{i \in \mathcal{N}} \h{t}_i^* = 0$. Therefore, the NE allocation $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$ is an optimal solution of Problem~($P_C$).
This completes the proof of Claim~\ref{chap5_Cl_NE_optl} and hence the proof of Theorem~\ref{chap5_Thm_NE_opt}.
\fbox{}
Claims~\ref{chap5_Cl_fes_sol}--\ref{chap5_Cl_NE_optl} (Theorem~\ref{chap5_Thm_NE_opt}) establish the properties of NE allocations under the assumption that the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}) possesses a NE. However, these claims do not guarantee the existence of a NE. Existence is guaranteed by Theorem~\ref{chap5_Thm_NE_exists}, which is proved next via Claims~\ref{chap5_Cl_Cent_Ldl_price} and \ref{chap5_Cl_Ldl_price_NE}.
\section{Proof of Theorem~2} \label{chap5_Proof_Thm_NE_exists}
We prove Theorem~\ref{chap5_Thm_NE_exists} in two steps. In the first step, we show that if the centralized problem $(P_C)$ has an optimal action profile $\h{\bm{a}}_{\mathcal{N}}^*$, there exists a set of personalized prices, one for each user $i\in\mathcal{N}$, such that when each user $i \in \mathcal{N}$ individually maximizes its own utility taking these prices as given, it obtains $\h{\bm{a}}_{\mathcal{R}_i}^*$ as an optimal action profile. In the second step, we show that the optimal action profile $\h{\bm{a}}_{\mathcal{N}}^*$ and the corresponding personalized prices can be used to construct message profiles that are NE of the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utility functions in (\ref{chap5_Eq_u_i^A}).
\begin{Cl} \label{chap5_Cl_Cent_Ldl_price}
If Problem~$(P_C)$ has an optimal action profile $\h{\bm{a}}_{\mathcal{N}}^*$, there exists a set of personalized prices $l_{ij}^*, j\in\mathcal{R}_i, i\in\mathcal{N},$
such that
\begin{equation}
\h{\bm{a}}_{\mathcal{R}_i}^* \in
\argmax_{ \stackrel{ \h{a}_i \in \mathcal{A}_i }{ \h{a}_j \in \mathbb{R}, \, j \in \mathcal{R}_i \setminus \{i\} } }
- \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j + u_i(\h{\bm{a}}_{\mathcal{R}_i}),
\quad \;\forall\; i \in \mathcal{N}.
\label{chap5_Cl5_a_Ri_argmax}
\end{equation}
\end{Cl}
{\bf Proof:}
Suppose $\h{\bm{a}}_{\mathcal{N}}^*$ is an optimal action profile corresponding to Problem~$(P_C)$. Writing the optimization problem $(P_C)$ only in terms of the variable $\h{\bm{a}}_{\mathcal{N}}$ gives
\begin{equation}
\begin{split}
\h{\bm{a}}_{\mathcal{N}}^* \in
\argmax_{\h{\bm{a}}_{\mathcal{N}}} & \ \sum_{i \in \mathcal{N}} u_i(\h{\bm{a}}_{\mathcal{R}_i}) \\
\mb{s.t.} & \ \ \h{a}_i \in \mathcal{A}_i, \;\forall\; i \in \mathcal{N}.
\label{chap5_Cl5_aN*_argmax}
\end{split}
\end{equation}
As stated earlier, an optimal solution of Problem~($P_C$) is of the form $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}})$, where $\h{\bm{a}}_{\mathcal{N}}^*$ is a solution of (\ref{chap5_Cl5_aN*_argmax}) and $\h{\bm{t}}_{\mathcal{N}} \in \mathbb{R}^N$ is any tax profile that satisfies (\ref{chap5_Eq_sum_ti_0}). Because the KKT conditions are necessary for optimality, the optimal solution in (\ref{chap5_Cl5_aN*_argmax}) must satisfy them. This implies that there exist $\lambda_i \in \mathbb{R}_+, i \in \mathcal{N},$ such that for each $i \in \mathcal{N}$, $\lambda_i$ and $\h{\bm{a}}_{\mathcal{N}}^*$ satisfy
\begin{equation}
\begin{split}
- \nabla_{\h{a}_i} \sum_{k \in \mathcal{C}_i}
u_k(\h{\bm{a}}_{\mathcal{R}_k})\mid_{ \h{\bm{a}}_{\mathcal{R}_k} = \h{\bm{a}}_{\mathcal{R}_k}^* }
+ \lambda_i \nabla_{\h{a}_i} f_{\mathcal{A}_i}(\h{a}_i)\mid_{ \h{a}_i = \h{a}_i^* } & = 0, \\
\lambda_i f_{\mathcal{A}_i}(\h{a}_i^*) & = 0,
\label{chap5_Cl5_KKT_cent}
\end{split}
\end{equation}
where $f_{\mathcal{A}_i}( \cdot )$ is the convex function defined in Claim~\ref{chap5_Cl_NE_optl}.
Defining for each $i \in \mathcal{N}$,
\begin{equation}
\begin{split}
l_{ij}^* & :=
\nabla_{\h{a}_j} u_i(\h{\bm{a}}_{\mathcal{R}_i})\mid_{ \h{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* },
\qquad j \in \mathcal{R}_i \setminus \{i\}, \\
l_{ii}^* & :=
\nabla_{\h{a}_i} u_i(\h{\bm{a}}_{\mathcal{R}_i})\mid_{ \h{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* }
- \lambda_i \nabla_{\h{a}_i} f_{\mathcal{A}_i}(\h{a}_i)\mid_{ \h{a}_i = \h{a}_i^* },
\label{chap5_Cl5_lij_define}
\end{split}
\end{equation}
we get, $\;\forall\; i \in \mathcal{N}$,
\begin{equation}
\begin{split}
\sum_{k \in \mathcal{C}_i} l_{ki}^*
= \nabla_{\h{a}_i} \sum_{k \in \mathcal{C}_i}
u_k(\h{\bm{a}}_{\mathcal{R}_k})\mid_{ \h{\bm{a}}_{\mathcal{R}_k} = \h{\bm{a}}_{\mathcal{R}_k}^* }
- \lambda_i \nabla_{\h{a}_i} f_{\mathcal{A}_i}(\h{a}_i)\mid_{ \h{a}_i = \h{a}_i^* }
\ = \ 0.
\label{chap5_Cl5_sum_lki}
\end{split}
\end{equation}
The second equality in (\ref{chap5_Cl5_sum_lki}) follows from (\ref{chap5_Cl5_KKT_cent}). Furthermore, (\ref{chap5_Cl5_lij_define}) implies that, $\;\forall\; i \in \mathcal{N}$,
\begin{equation}
\begin{split}
\;\forall\; j \in \mathcal{R}_i \setminus \{i\}, \quad
l_{ij}^* - \nabla_{\h{a}_j} u_i(\h{\bm{a}}_{\mathcal{R}_i})\mid_{ \h{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* }
& = 0, \\
l_{ii}^* - \nabla_{\h{a}_i} u_i(\h{\bm{a}}_{\mathcal{R}_i})\mid_{ \h{\bm{a}}_{\mathcal{R}_i} = \h{\bm{a}}_{\mathcal{R}_i}^* }
+ \lambda_i \nabla_{\h{a}_i} f_{\mathcal{A}_i}(\h{a}_i)\mid_{ \h{a}_i = \h{a}_i^* }
& = 0.
\label{chap5_Cl5_KKT_indiv}
\end{split}
\end{equation}
The equations in (\ref{chap5_Cl5_KKT_indiv}), along with the second equality in (\ref{chap5_Cl5_KKT_cent}), imply that for each $i \in \mathcal{N}$, $\h{\bm{a}}_{\mathcal{R}_i}^*$ and $\lambda_i$ satisfy the KKT conditions for the following maximization problem:
\begin{equation}
\max_{ \stackrel{ \h{a}_i \in \mathcal{A}_i }{ \h{a}_j \in \mathbb{R}, \, j \in \mathcal{R}_i \setminus \{i\} } }
- \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j + u_i(\h{\bm{a}}_{\mathcal{R}_i}).
\label{chap5_Cl5_max_indiv}
\end{equation}
Because the objective function in (\ref{chap5_Cl5_max_indiv}) is concave (Assumption~\ref{chap5_Ass_util_conc_pvt}), the KKT conditions are necessary and sufficient for optimality. Therefore, we conclude from (\ref{chap5_Cl5_KKT_indiv}) and (\ref{chap5_Cl5_KKT_cent}) that
\begin{equation*}
\h{\bm{a}}_{\mathcal{R}_i}^* \in
\argmax_{ \stackrel{ \h{a}_i \in \mathcal{A}_i }{ \h{a}_j \in \mathbb{R}, \, j \in \mathcal{R}_i \setminus \{i\} } }
- \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j + u_i(\h{\bm{a}}_{\mathcal{R}_i}),
\qquad \;\forall\; i \in \mathcal{N}.
\end{equation*}
\fbox{}
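To make the role of the personalized prices concrete, consider a hypothetical quadratic example (ours, for illustration only): suppose $u_i(\h{\bm{a}}_{\mathcal{R}_i}) = -\sum_{j \in \mathcal{R}_i} (\h{a}_j - \theta_{ij})^2$ for some constants $\theta_{ij}$, and suppose the constraint $\h{a}_i \in \mathcal{A}_i$ is inactive at the optimum, so that $\lambda_i = 0$. Then the prices in (\ref{chap5_Cl5_lij_define}) reduce to the marginal utilities
\begin{equation*}
l_{ij}^* = -2\left( \h{a}_j^* - \theta_{ij} \right), \qquad j \in \mathcal{R}_i,
\end{equation*}
i.e., user $i$ is charged for each $\h{a}_j$ at exactly the rate at which it marginally values $\h{a}_j$ at the optimum, which is why the price-taking problem (\ref{chap5_Cl5_a_Ri_argmax}) is maximized at $\h{\bm{a}}_{\mathcal{R}_i}^*$.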
\begin{Cl} \label{chap5_Cl_Ldl_price_NE}
Let $\h{\bm{a}}_{\mathcal{N}}^*$ be an optimal action profile for Problem~$(P_C)$, let $l_{ij}^*, j\in\mathcal{R}_i, i\in\mathcal{N},$ be the personalized prices corresponding to $\h{\bm{a}}_{\mathcal{N}}^*$ as defined in Claim~\ref{chap5_Cl_Cent_Ldl_price}, and let $\h{t}_i^* := \sum_{j \in \mathcal{R}_i} l_{ij}^* \h{a}_j^*, i\in\mathcal{N}$. Let $\bm{m}_i^* := ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}^*} ), i \in \mathcal{N},$ be a solution to the following set of relations:
\begin{eqnarray}
\frac{1}{|\mathcal{C}_i|} \sum_{k \in \mathcal{C}_i} \tensor*[^{k}]{a}{_i^*}
& = & \h{a}_i^*, \quad i \in \mathcal{N}, \label{chap5_Cl6_NEchar_a_hat} \\
{}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^* - {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }\pi_j^*
& = & l_{ij}^*, \quad j \in \mathcal{R}_i, \ i \in \mathcal{N}, \label{chap5_Cl6_NEchar_lij} \\
\tensor*[^{i}]{\pi}{_j^*}
\left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
& = & 0, \quad j \in \mathcal{R}_i, \ \ i \in \mathcal{N}, \label{chap5_Cl6_NEchar_pi_a} \\
\tensor*[^{i}]{\pi}{_j^*}
& \geq & 0, \quad j \in \mathcal{R}_i, \ \ i \in \mathcal{N}. \label{chap5_Cl6_NEchar_pi_pos}
\end{eqnarray}
Then, $\bm{m}_{\mathcal{N}}^* := (\bm{m}_1^*, \bm{m}_2^*, \dots, \bm{m}_N^*)$ is a NE of the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utility functions (\ref{chap5_Eq_u_i^A}). Furthermore, for each $i \in \mathcal{N}$, $ \h{a}_i(\bm{m}_{\mathcal{C}_i}^*) = \h{a}_i^*$, $ l_{ij}(\bm{m}_{\mathcal{C}_j}^*) = l_{ij}^*, j \in \mathcal{R}_i$, and $\h{t}_i( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i} ) = \h{t}_i^*$.
\end{Cl}
{\bf Proof:}
Note that the conditions in (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) are necessary for any NE $\bm{m}_{\mathcal{N}}^*$ of the game induced by the game form of Section~\ref{chap5_Subsec_game_form} and the users' utilities (\ref{chap5_Eq_u_i^A}) to result in the allocation $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$ (see (\ref{chap5_Eq_a_hat_i}), (\ref{chap5_Eq_l_ij}) and (\ref{chap5_Cl2_pi_a_0_NE})). Therefore, the set of solutions of (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}), if such a set exists, is a superset of the set of all NE corresponding to the above game that result in $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$. Below we show that the solution set of (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) is in fact exactly the set of all NE that result in $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$.
To prove this, we first show that the set of relations (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) does have a solution. Notice that (\ref{chap5_Cl6_NEchar_a_hat}) and (\ref{chap5_Cl6_NEchar_pi_a}) are satisfied by setting, for each $i \in \mathcal{N}$, $\tensor*[^{k}]{a}{_i^*} = \h{a}_i^*$ for all $k \in \mathcal{C}_i$. Notice also that for each $j \in \mathcal{N}$, the sum over $i \in \mathcal{C}_j$ of the right-hand side of (\ref{chap5_Cl6_NEchar_lij}) is $0$. Therefore, for each $j \in \mathcal{N}$, (\ref{chap5_Cl6_NEchar_lij}) has a solution in ${}^{i}\pi_j^*, i \in \mathcal{C}_j$. Furthermore, for any solution ${}^{i}\pi_j^*, i \in \mathcal{C}_j, j \in \mathcal{N}$, of (\ref{chap5_Cl6_NEchar_lij}), the shifted profile ${}^{i}\pi_j^* + c, i \in \mathcal{C}_j, j \in \mathcal{N}$, where $c$ is a constant, is also a solution of (\ref{chap5_Cl6_NEchar_lij}). Consequently, by choosing $c$ appropriately, we can select a solution of (\ref{chap5_Cl6_NEchar_lij}) such that (\ref{chap5_Cl6_NEchar_pi_pos}) is satisfied.
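As a concrete illustration (our own hypothetical numbers, with one consistent choice of the cyclic indexing in (\ref{chap5_Cl6_NEchar_lij})), suppose $\mathcal{C}_j = \{1,2,3\}$ and the cyclic structure of (\ref{chap5_Cl6_NEchar_lij}) reads
\begin{equation*}
l_{1j}^* = {}^{2}\pi_j^* - {}^{3}\pi_j^*, \qquad
l_{2j}^* = {}^{3}\pi_j^* - {}^{1}\pi_j^*, \qquad
l_{3j}^* = {}^{1}\pi_j^* - {}^{2}\pi_j^*.
\end{equation*}
For target prices $(l_{1j}^*, l_{2j}^*, l_{3j}^*) = (1, -3, 2)$, whose sum is $0$, setting ${}^{1}\pi_j^* = 0$ forces ${}^{2}\pi_j^* = -2$ and ${}^{3}\pi_j^* = -3$; shifting all three by $c = 3$ gives the nonnegative solution $(3, 1, 0)$ with the same pairwise differences.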
It is clear from the above discussion that (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) have multiple solutions. We now show that the set of solutions $\bm{m}_{\mathcal{N}}^*$ of (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) is the set of NE that result in $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$. From Claim~\ref{chap5_Cl_Cent_Ldl_price}, (\ref{chap5_Cl5_a_Ri_argmax}) can be equivalently written as
\begin{equation}
\begin{split}
\h{\bm{a}}_{\mathcal{R}_i}^*
& \in \argmax_{ \h{\bm{a}}_{\mathcal{R}_i} \in \mathbb{R}^{|\mathcal{R}_i|} }
u_i^A\Big( \h{\bm{a}}_{\mathcal{R}_i}, \sum_{j \in \mathcal{R}_i} l_{ij}^*\; \h{a}_j \Big),
\qquad i \in \mathcal{N}.
\label{chap5_Cl6_a_Ri_argmax}
\end{split}
\end{equation}
Substituting ${}^{i}a_j = \h{a}_j |\mathcal{C}_j| - \sum_{k \in \mathcal{C}_j \setminus \{i\} } {}^{k}a_j^*$ for each $j \in \mathcal{R}_i$, $i \in \mathcal{N}$, in (\ref{chap5_Cl6_a_Ri_argmax}), we obtain
\begin{equation}
\begin{split}
\tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*} \in
\argmax_{ \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}} \in \mathbb{R}^{|\mathcal{R}_i|} } u_i^A\bigg(
\Big( \frac{1}{|\mathcal{C}_j|} \big( {}^{i}a_j
+ \sum_{k \in \mathcal{C}_j \setminus \{i\} } {}^{k}a_j^* \big) \Big)_{j \in \mathcal{R}_i},
\sum_{j \in \mathcal{R}_i} l_{ij}^*\; \frac{1}{|\mathcal{C}_j|} \big( {}^{i}a_j
+ \sum_{k \in \mathcal{C}_j \setminus \{i\} } {}^{k}a_j^* \big) \bigg),
\ i \in \mathcal{N}.
\label{chap5_Cl6_ia_Ri_argmax}
\end{split}
\end{equation}
Because of (\ref{chap5_Cl6_NEchar_pi_a}), (\ref{chap5_Cl6_ia_Ri_argmax}) also implies that
\begin{equation}
\begin{split}
& ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}^*} ) \in
\argmax_{ ( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} )
\in \mathbb{R}^{|\mathcal{R}_i|} \times \mathbb{R}_+^{|\mathcal{R}_i|} }
u_i^A\bigg( \Big( \h{a}_j \big(
( {}^{i}\bm{a}_{\mathcal{R}_i}, {}^{i}\bm{\pi}_{\mathcal{R}_i} ), \; \bm{m}_{\mathcal{C}_j}^* / i \big)
\Big)_{j \in \mathcal{R}_i}, \\
& \sum_{j \in \mathcal{R}_i} l_{ij}^*\;
\h{a}_j \big( ( {}^{i}\bm{a}_{\mathcal{R}_i}, {}^{i}\bm{\pi}_{\mathcal{R}_i} ), \; \bm{m}_{\mathcal{C}_j}^* / i \big)
- \sum_{j \in \mathcal{R}_i} {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^*
- {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2
\bigg), \ i \in \mathcal{N}.
\label{chap5_Cl6_iaRi_ipiRi_argmax}
\end{split}
\end{equation}
Furthermore, since $u_i^A$ is strictly decreasing in the tax (see (\ref{chap5_Eq_u_i^A})), (\ref{chap5_Cl6_iaRi_ipiRi_argmax}) also implies the following:
\begin{equation}
\begin{split}
( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}^*}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}^*} ) \in
& \hs{-0.3in}\argmax_{( \tensor*[^{i}]{\bm{a}}{_{\mathcal{R}_i}}, \tensor*[^{i}]{\bm{\pi}}{_{\mathcal{R}_i}} )
\in \mathbb{R}^{|\mathcal{R}_i|} \times \mathbb{R}_+^{|\mathcal{R}_i|} }
u_i^A\bigg( \Big( \h{a}_j \big(
( {}^{i}\bm{a}_{\mathcal{R}_i}, {}^{i}\bm{\pi}_{\mathcal{R}_i} ), \bm{m}_{\mathcal{C}_j}^* / i \big)
\Big)_{j \in \mathcal{R}_i},
\sum_{j \in \mathcal{R}_i} l_{ij}^*
\h{a}_j \big( ( {}^{i}\bm{a}_{\mathcal{R}_i}, {}^{i}\bm{\pi}_{\mathcal{R}_i} ), \bm{m}_{\mathcal{C}_j}^* / i \big) \\
& \hs{-0.3in} + \sum_{j \in \mathcal{R}_i} {}^{i}\pi_j
\left( {}^{i}a_j - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
- \sum_{j \in \mathcal{R}_i} {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^*
- {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2
\bigg),
\ \quad i \in \mathcal{N}.
\label{chap5_Cl6_iaRi_ipiRi_NE_argmax}
\end{split}
\end{equation}
Eq.~(\ref{chap5_Cl6_iaRi_ipiRi_NE_argmax}) implies that, if the message exchange and allocation are done according to the game form presented in Section~\ref{chap5_Subsec_game_form}, then user $i, i \in \mathcal{N},$ maximizes its utility at $\bm{m}_i^*$ when all other users $j \in \mathcal{N} \backslash \{i\}$ choose their respective messages $\bm{m}_j^*, j \in \mathcal{N} \backslash \{i\}$. This, in turn, implies that a message profile $\bm{m}_{\mathcal{N}}^*$ that is a solution to (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) is a NE of the game induced by the aforementioned game form and the users' utilities (\ref{chap5_Eq_u_i^A}). Furthermore, it follows from (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) that the allocation at $\bm{m}_{\mathcal{N}}^*$ is
\begin{equation}
\begin{split}
\h{a}_i(\bm{m}_{\mathcal{C}_i}^*)
= & \frac{1}{|\mathcal{C}_i|} \sum_{k \in \mathcal{C}_i} \tensor*[^{k}]{a}{_i^*}
\ = \ \h{a}_i^*, \hs{1.65in} i \in \mathcal{N}, \\
l_{ij}(\bm{m}_{\mathcal{C}_j}^*)
= & {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+1)} }\pi_j^* - {}^{ \mathcal{C}_{j(\mathcal{I}_{ij}+2)} }\pi_j^*
\ = \ l_{ij}^*, \qquad j \in \mathcal{R}_i, \ i \in \mathcal{N}, \\
\hs{-0.3in} \h{t}_i\big( (\bm{m}_{\mathcal{C}_j}^*)_{j \in \mathcal{R}_i} \big)
\! = \! & \sum_{j \in \mathcal{R}_i} l_{ij} (\bm{m}_{\mathcal{C}_j}^*) \h{a}_j(\bm{m}_{\mathcal{C}_j}^*)
\!+\! \tensor*[^{i}]{\pi}{_j^*} \left( \tensor*[^{i}]{a}{_j^*} - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* \right)^2
\!-\! {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}\pi_j^*
\left( {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+1)}}a_j^* - {}^{\mathcal{C}_{j(\mathcal{I}_{ij}+2)}}a_j^* \right)^2 \\
\! = \! & \sum_{j \in \mathcal{R}_i} l_{ij}^* \h{a}_j^* \ = \ \h{t}_i^*, \hs{1.8in} i \in \mathcal{N}.
\label{chap5_Cl6_NE_cent}
\end{split}
\end{equation}
From (\ref{chap5_Cl6_NE_cent}) it follows that the set of solutions $\bm{m}_{\mathcal{N}}^*$ of (\ref{chap5_Cl6_NEchar_a_hat})--(\ref{chap5_Cl6_NEchar_pi_pos}) is exactly the set of NE that result in $(\h{\bm{a}}_{\mathcal{N}}^*, \h{\bm{t}}_{\mathcal{N}}^*)$.
This completes the proof of Claim~\ref{chap5_Cl_Ldl_price_NE} and hence the proof of Theorem~\ref{chap5_Thm_NE_exists}.
\fbox{}
\section{An example to illustrate properties of the game form} \label{Apx_illus_ex}
Consider a system with three users where each user's action affects the utilities of all three users. Let the sets of feasible actions of the users be $\mathcal{A}_1 = \mathcal{A}_2 = \mathcal{A}_3 = [0,1]$. Suppose that the users' utilities are:
\begin{equation}
\begin{split}
u_1(\h{a}_1, \h{a}_2, \h{a}_3) & = \ \ \h{a}_1^{\alpha_1} - \h{a}_2^{\beta_2} - \h{a}_3^{\beta_3},
\quad \h{a}_1 \in \mathcal{A}_1, \ (\h{a}_2, \h{a}_3) \in \mathbb{R}^2, \\
u_2(\h{a}_1, \h{a}_2, \h{a}_3) & = - \h{a}_1^{\beta_1} + \h{a}_2^{\alpha_2} - \h{a}_3^{\beta_3},
\quad \h{a}_2 \in \mathcal{A}_2, \ (\h{a}_1, \h{a}_3) \in \mathbb{R}^2, \\
u_3(\h{a}_1, \h{a}_2, \h{a}_3) & = - \h{a}_1^{\beta_1} - \h{a}_2^{\beta_2} + \h{a}_3^{\alpha_3},
\quad \h{a}_3 \in \mathcal{A}_3, \ (\h{a}_1, \h{a}_2) \in \mathbb{R}^2, \\
\mb{where}, & \quad \ \alpha_i \in (0,1), \ \beta_i \in (1,\infty) \ \;\forall\; i\in\{1,2,3\}.
\label{u_i_def}
\end{split}
\end{equation}
Because $\alpha_i \in (0,1)$ and $\beta_i \in (1,\infty)$, $i\in\{1,2,3\}$, the utility functions in (\ref{u_i_def}) are strictly concave over their domains. Therefore, there is a unique optimum action profile $\h{\bm{a}}^* = (\h{a}_1^*, \h{a}_2^*,\h{a}_3^*)$ corresponding to the centralized problem $(P_C)$ defined in (\ref{chap5_Eq_Pc_obj2}). This optimum action profile $\h{\bm{a}}^*$ is the solution to $\nabla_{\h{\bm{a}}} \big(\sum_{i\in\{1,2,3\}} u_i(\h{a}_1, \h{a}_2, \h{a}_3)\big) = 0$.
Computing this gradient we obtain,
\begin{eqnarray}
\left[
\begin{array}{l}
\alpha_1 \,\h{a}_1^{*^{(\alpha_1-1)}} - 2 \,\beta_1 \,\h{a}_1^{*^{(\beta_1-1)}} \\
\alpha_2 \,\h{a}_2^{*^{(\alpha_2-1)}} - 2 \,\beta_2 \,\h{a}_2^{*^{(\beta_2-1)}} \\
\alpha_3 \,\h{a}_3^{*^{(\alpha_3-1)}} - 2 \,\beta_3 \,\h{a}_3^{*^{(\beta_3-1)}}
\end{array}
\right]
& = &
0 \nonumber \\
\Rightarrow
\left[
\begin{array}{l}
{\h{a}_1^*} \\
{\h{a}_2^*} \\
{\h{a}_3^*}
\end{array}
\right]
& = &
\left[
\begin{array}{l}
\big(\frac{\alpha_1}{2 \beta_1}\big)^{1/(\beta_1 - \alpha_1)} \\
\big(\frac{\alpha_2}{2 \beta_2}\big)^{1/(\beta_2 - \alpha_2)} \\
\big(\frac{\alpha_3}{2 \beta_3}\big)^{1/(\beta_3 - \alpha_3)}
\end{array}
\right]
\label{a*_cent}
\end{eqnarray}
Note that because $\alpha_i < \beta_i \;\forall\; i\in\{1,2,3\},$ the action profile $\h{\bm{a}}^*$ in (\ref{a*_cent}) lies in $\mathcal{A}_1\times\mathcal{A}_2\times\mathcal{A}_3$ and hence is feasible. Equation (\ref{a*_cent}) thus gives the optimum centralized action profile.
\\
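The closed form in (\ref{a*_cent}) can also be verified numerically. The following sketch checks that each $\h{a}_i^*$ satisfies the corresponding first-order condition and is feasible; the parameter values $\alpha_i, \beta_i$ below are illustrative choices of ours, not taken from the problem statement.

```python
# Numerical check of the centralized optimum in (a*_cent):
# hat_a_i* = (alpha_i / (2 beta_i))^(1 / (beta_i - alpha_i)).
# The parameter values below are illustrative assumptions only.
alphas = [0.5, 0.3, 0.8]
betas = [2.0, 1.5, 3.0]

a_star = [(a / (2 * b)) ** (1 / (b - a)) for a, b in zip(alphas, betas)]

for a, b, x in zip(alphas, betas, a_star):
    # First-order condition: alpha x^(alpha-1) - 2 beta x^(beta-1) = 0
    assert abs(a * x ** (a - 1) - 2 * b * x ** (b - 1)) < 1e-9
    # Feasibility: x lies in A_i = [0, 1], since alpha_i < beta_i
    assert 0 <= x <= 1
```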
We now show the properties of the game form presented in Section~\ref{chap5_Subsec_game_form}.
We first show that the game induced by this game form under the above problem instance has a Nash equilibrium. Consider the following message profile of the users:
\begin{equation}
\begin{split}
\bm{m}_1^* =
\big( (\tensor*[^{1}]{a}{_{1}^*}, \tensor*[^{1}]{a}{_{2}^*}, \tensor*[^{1}]{a}{_{3}^*}), \
(\tensor*[^{1}]{\pi}{_{1}^*}, \tensor*[^{1}]{\pi}{_{2}^*}, \tensor*[^{1}]{\pi}{_{3}^*})
\big)
& =
\big( (\h{a}_1^*, \h{a}_2^*, \h{a}_3^*),\
(2 \beta_1 \h{a}_1^{*^{(\beta_1-1)}}, \beta_2 \h{a}_2^{*^{(\beta_2-1)}},
3 \beta_3 \h{a}_3^{*^{(\beta_3-1)}})
\big), \\
\bm{m}_2^* =
\big( (\tensor*[^{2}]{a}{_{1}^*}, \tensor*[^{2}]{a}{_{2}^*}, \tensor*[^{2}]{a}{_{3}^*}), \
(\tensor*[^{2}]{\pi}{_{1}^*}, \tensor*[^{2}]{\pi}{_{2}^*}, \tensor*[^{2}]{\pi}{_{3}^*})
\big)
& =
\big( (\h{a}_1^*, \h{a}_2^*, \h{a}_3^*),\
(3 \beta_1 \h{a}_1^{*^{(\beta_1-1)}}, 2 \beta_2 \h{a}_2^{*^{(\beta_2-1)}},
\beta_3 \h{a}_3^{*^{(\beta_3-1)}})
\big), \\
\bm{m}_3^* =
\big( (\tensor*[^{3}]{a}{_{1}^*}, \tensor*[^{3}]{a}{_{2}^*}, \tensor*[^{3}]{a}{_{3}^*}), \
(\tensor*[^{3}]{\pi}{_{1}^*}, \tensor*[^{3}]{\pi}{_{2}^*}, \tensor*[^{3}]{\pi}{_{3}^*})
\big)
& =
\big( (\h{a}_1^*, \h{a}_2^*, \h{a}_3^*),\
( \beta_1 \h{a}_1^{*^{(\beta_1-1)}}, 3 \beta_2 \h{a}_2^{*^{(\beta_2-1)}},
2 \beta_3 \h{a}_3^{*^{(\beta_3-1)}})
\big).
\label{msg_prof}
\end{split}
\end{equation}
The vector $(\h{a}_1^*, \h{a}_2^*, \h{a}_3^*)$ on the right hand side (RHS) of (\ref{msg_prof}) is the optimum centralized action profile computed in (\ref{a*_cent}), and $\beta_i, i\in\{1,2,3\}$, are the parameters of the utility functions in (\ref{u_i_def}). Because $\beta_i \in (1,\infty) \;\forall\; i\in\{1,2,3\}$, and $(\h{a}_1^*, \h{a}_2^*, \h{a}_3^*) \in \mathcal{A}_1\times\mathcal{A}_2\times\mathcal{A}_3$, (\ref{msg_prof}) implies that $\tensor*[^{i}]{\pi}{_{j}^*}\in\mathbb{R}_+ \;\forall\; i,j \in \{1,2,3\}$. Hence the message profile given in (\ref{msg_prof}) lies in the message space (defined by (\ref{chap5_Eq_mi}), (\ref{chap5_Eq_i_p_Ai})) of the proposed game form.
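As a numerical sanity check (not part of the argument), the following sketch builds the message profile of (\ref{msg_prof}) for illustrative parameter values of our own choosing and verifies that all price proposals are nonnegative and that the averaged allocation recovers $\h{a}_i^*$.

```python
# Consistency check of the candidate NE messages in (msg_prof):
# every user proposes the centralized optimum, so the averaged
# allocation hat_a_i = (1/3) sum_k (k-th proposal for i) equals hat_a_i*.
# Parameter values are illustrative assumptions.
alphas = [0.5, 0.3, 0.8]
betas = [2.0, 1.5, 3.0]
a_hat = [(a / (2 * b)) ** (1 / (b - a)) for a, b in zip(alphas, betas)]

# proposals[k][i]: action for user i+1 proposed by user k+1
proposals = [a_hat[:] for _ in range(3)]
# prices[k][i]: price proposed by user k+1 for user i+1, as in (msg_prof)
coeff = [[2, 1, 3], [3, 2, 1], [1, 3, 2]]
prices = [[coeff[k][i] * betas[i] * a_hat[i] ** (betas[i] - 1)
           for i in range(3)] for k in range(3)]

for i in range(3):
    alloc = sum(proposals[k][i] for k in range(3)) / 3
    assert abs(alloc - a_hat[i]) < 1e-12   # allocation equals hat_a_i*
    for k in range(3):
        assert prices[k][i] >= 0           # message lies in R^3 x R_+^3
```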
We will now show that the above message profile is a Nash equilibrium of the game induced by the proposed game form and the users' utility functions in (\ref{u_i_def}). To this end we will show that,
\begin{eqnarray}
\bm{m}_1^* &\in& \argmax_{\bm{m}_1 \in \mathbb{R}^3\times\mathbb{R}_+^3}
\left\{ u_1\big(\big(\h{a}_i(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*)\big)_{i\in\{1,2,3\}}\big) - \h{t}_1(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*) \right\}, \label{NE_argmax_cond_1} \\
\bm{m}_2^* &\in& \argmax_{\bm{m}_2 \in \mathbb{R}^3\times\mathbb{R}_+^3}
\left\{ u_2\big(\big(\h{a}_i(\bm{m}_1^*, \bm{m}_2, \bm{m}_3^*)\big)_{i\in\{1,2,3\}}\big) - \h{t}_2(\bm{m}_1^*, \bm{m}_2, \bm{m}_3^*) \right\}, \label{NE_argmax_cond_2}\\
\bm{m}_3^* &\in& \argmax_{\bm{m}_3 \in \mathbb{R}^3\times\mathbb{R}_+^3}
\left\{ u_3\big(\big(\h{a}_i(\bm{m}_1^*, \bm{m}_2^*, \bm{m}_3)\big)_{i\in\{1,2,3\}}\big) - \h{t}_3(\bm{m}_1^*, \bm{m}_2^*, \bm{m}_3) \right\},
\label{NE_argmax_cond_3}
\end{eqnarray}
where, $u_i, i\in\{1,2,3\}$, are the utility functions defined in (\ref{u_i_def}), and $\h{a}_i, \h{t}_i, i\in\{1,2,3\}$, are the action and tax functions of the game form defined by equations (\ref{chap5_Eq_a_hat_i})--(\ref{chap5_Eq_l_ij}).
Below we present a derivation for (\ref{NE_argmax_cond_1}). The conditions (\ref{NE_argmax_cond_2}) and (\ref{NE_argmax_cond_3}) can be derived similarly.\\
Let the indexing of the users for tax computation be the same as their original indexing, i.e. users $1,2,3$ are labeled as $1,2,3$ in all the cycles $\mathcal{C}_1, \mathcal{C}_2, \mathcal{C}_3$ (consisting of all three users) used in tax functions. Then, using equations (\ref{chap5_Eq_t_hat_i}), (\ref{chap5_Eq_l_ij}), the tax of user 1 at message profile $(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*)$ is:
\begin{equation}
\begin{split}
\h{t}_1(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*) = & \ \ \
\big( \tensor*[^{2}]{\pi}{_{1}^*} - \tensor*[^{3}]{\pi}{_{1}^*} \big) \;
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} + \tensor*[^{3}]{a}{_{1}^*} \big) \\
& +
\big( \tensor*[^{2}]{\pi}{_{2}^*} - \tensor*[^{3}]{\pi}{_{2}^*} \big) \;
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} + \tensor*[^{3}]{a}{_{2}^*} \big)
+ \big( \tensor*[^{2}]{\pi}{_{3}^*} - \tensor*[^{3}]{\pi}{_{3}^*} \big) \;
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} + \tensor*[^{3}]{a}{_{3}^*} \big) \\
& +
\tensor*[^{1}]{\pi}{_{1}} \big(\tensor*[^{1}]{a}{_{1}} - \tensor*[^{2}]{a}{_{1}^*} \big)^2 +
\tensor*[^{1}]{\pi}{_{2}} \big(\tensor*[^{1}]{a}{_{2}} - \tensor*[^{2}]{a}{_{2}^*} \big)^2 +
\tensor*[^{1}]{\pi}{_{3}} \big(\tensor*[^{1}]{a}{_{3}} - \tensor*[^{2}]{a}{_{3}^*} \big)^2 \\
& -
\tensor*[^{2}]{\pi}{_{1}^*} \big(\tensor*[^{2}]{a}{_{1}^*} - \tensor*[^{3}]{a}{_{1}^*} \big)^2 -
\tensor*[^{2}]{\pi}{_{2}^*} \big(\tensor*[^{2}]{a}{_{2}^*} - \tensor*[^{3}]{a}{_{2}^*} \big)^2 -
\tensor*[^{2}]{\pi}{_{3}^*} \big(\tensor*[^{2}]{a}{_{3}^*} - \tensor*[^{3}]{a}{_{3}^*} \big)^2.
\label{tax1_expanded}
\end{split}
\end{equation}
The last three terms on the RHS of (\ref{tax1_expanded}) are zero from the definition of $\tensor*[^{i}]{a}{_{j}^*}, i\in\{2,3\}, j\in\{1,2,3\}$, in (\ref{msg_prof}). Substituting the remaining terms of $\h{t}_1$ in the argument of (\ref{NE_argmax_cond_1}) and expanding $u_1$ we obtain,
\begin{equation}
\begin{split}
& u_1\big(\big(\h{a}_i(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*)\big)_{i\in\{1,2,3\}}\big) - \h{t}_1(\bm{m}_1, \bm{m}_2^*, \bm{m}_3^*) = \\
& \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} +
\tensor*[^{3}]{a}{_{1}^*}\big)\Big)^{\alpha_1}
-\Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} +
\tensor*[^{3}]{a}{_{2}^*}\big)\Big)^{\beta_2}
-\Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} +
\tensor*[^{3}]{a}{_{3}^*}\big)\Big)^{\beta_3}
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{1}^*} - \tensor*[^{3}]{\pi}{_{1}^*} \big) \\
& \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} + \tensor*[^{3}]{a}{_{1}^*} \big)
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{2}^*} - \tensor*[^{3}]{\pi}{_{2}^*} \big)
\big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} + \tensor*[^{3}]{a}{_{2}^*} \big)
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{3}^*} - \tensor*[^{3}]{\pi}{_{3}^*} \big)
\big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} + \tensor*[^{3}]{a}{_{3}^*} \big) \\
&
- \tensor*[^{1}]{\pi}{_{1}} \big(\tensor*[^{1}]{a}{_{1}} - \tensor*[^{2}]{a}{_{1}^*} \big)^2
- \tensor*[^{1}]{\pi}{_{2}} \big(\tensor*[^{1}]{a}{_{2}} - \tensor*[^{2}]{a}{_{2}^*} \big)^2
- \tensor*[^{1}]{\pi}{_{3}} \big(\tensor*[^{1}]{a}{_{3}} - \tensor*[^{2}]{a}{_{3}^*} \big)^2.
\label{u1_t1_expanded}
\end{split}
\end{equation}
Note that the last three terms on the RHS of (\ref{u1_t1_expanded}) are nonpositive for all messages $\big((\tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}}),$ $(\tensor*[^{1}]{\pi}{_{1}}, \tensor*[^{1}]{\pi}{_{2}}, \tensor*[^{1}]{\pi}{_{3}}) \big) \in \mathbb{R}^3\times\mathbb{R}_+^3$. Furthermore, the price proposals $(\tensor*[^{1}]{\pi}{_{1}}, \tensor*[^{1}]{\pi}{_{2}}, \tensor*[^{1}]{\pi}{_{3}})$ of user 1 affect only these three terms on the RHS of (\ref{u1_t1_expanded}). Therefore, in
any message $\bm{m}_1$ that maximizes (\ref{u1_t1_expanded}), if $\tensor*[^{1}]{a}{_{1}} \neq \tensor*[^{2}]{a}{_{1}^*}$ then $\tensor*[^{1}]{\pi}{_{1}} = 0$, because $\tensor*[^{1}]{\pi}{_{1}}$ can independently control the value of the quadratic term $\tensor*[^{1}]{\pi}{_{1}} \big(\tensor*[^{1}]{a}{_{1}} - \tensor*[^{2}]{a}{_{1}^*} \big)^2$.
Similarly, if $\tensor*[^{1}]{a}{_{2}} \neq \tensor*[^{2}]{a}{_{2}^*}$ and $\tensor*[^{1}]{a}{_{3}} \neq \tensor*[^{2}]{a}{_{3}^*}$, then $\tensor*[^{1}]{\pi}{_{2}} = 0$ and $\tensor*[^{1}]{\pi}{_{3}} = 0$, respectively. Consequently, the last three terms on the RHS of (\ref{u1_t1_expanded}) must be zero at any message that maximizes this expression.
As a result, the maximization of (\ref{u1_t1_expanded}) with respect to (w.r.t.) $\bm{m}_1$ is equivalent to the maximization of the following function $g( \tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}})$ w.r.t. $(\tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}})$:
\begin{equation}
\begin{split}
& g( \tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}}) := \\
& \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} +
\tensor*[^{3}]{a}{_{1}^*}\big)\Big)^{\alpha_1}
-\Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} +
\tensor*[^{3}]{a}{_{2}^*}\big)\Big)^{\beta_2}
-\Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} +
\tensor*[^{3}]{a}{_{3}^*}\big)\Big)^{\beta_3}
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{1}^*} - \tensor*[^{3}]{\pi}{_{1}^*} \big) \\
& \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} + \tensor*[^{3}]{a}{_{1}^*} \big)
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{2}^*} - \tensor*[^{3}]{\pi}{_{2}^*} \big)
\big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} + \tensor*[^{3}]{a}{_{2}^*} \big)
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{3}^*} - \tensor*[^{3}]{\pi}{_{3}^*} \big)
\big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} + \tensor*[^{3}]{a}{_{3}^*} \big).
\label{g_a}
\end{split}
\end{equation}
Because $\alpha_1 \in (0,1)$ and $\beta_2, \beta_3 \in (1,\infty)$, the function $g( \tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}})$ is strictly concave.
Therefore, the maximizer of the function is the solution to $\nabla_{(\tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}})} g( \tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}}) = 0$.
Solving this equation we obtain,
\begin{equation}
\begin{split}
\left[
\begin{array}{c}
\frac{1}{3}\; \alpha_1 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}} + \tensor*[^{2}]{a}{_{1}^*} +
\tensor*[^{3}]{a}{_{1}^*}\big)\Big)^{\alpha_1-1}
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{1}^*} - \tensor*[^{3}]{\pi}{_{1}^*} \big) \\
- \frac{1}{3}\;\beta_2 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}} + \tensor*[^{2}]{a}{_{2}^*} +
\tensor*[^{3}]{a}{_{2}^*}\big)\Big)^{\beta_2-1}
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{2}^*} - \tensor*[^{3}]{\pi}{_{2}^*} \big) \\
-\frac{1}{3}\;\beta_3 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}} + \tensor*[^{2}]{a}{_{3}^*} +
\tensor*[^{3}]{a}{_{3}^*}\big)\Big)^{\beta_3-1}
- \frac{1}{3} \big( \tensor*[^{2}]{\pi}{_{3}^*} - \tensor*[^{3}]{\pi}{_{3}^*} \big)
\end{array}
\right]
= 0.
\label{grad_g_a_pi}
\end{split}
\end{equation}
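The cancellation exploited below can also be confirmed numerically: with the prices of (\ref{msg_prof}) substituted, the gradient in (\ref{grad_g_a_pi}) vanishes when user 1 proposes the centralized optimum. The following sketch (illustrative parameter values, our own choices) checks this.

```python
# Check that the gradient in (grad_g_a_pi) vanishes at 1a = hat_a*,
# confirming that user 1's best response reproduces the centralized
# optimum.  Parameter values are illustrative assumptions.
alphas = [0.5, 0.3, 0.8]
betas = [2.0, 1.5, 3.0]
a_hat = [(al / (2 * b)) ** (1 / (b - al)) for al, b in zip(alphas, betas)]

# Price differences 2pi_j* - 3pi_j* implied by (msg_prof):
# 2 beta_1 a1^(b1-1), -beta_2 a2^(b2-1), -beta_3 a3^(b3-1)
d_pi = [2 * betas[0] * a_hat[0] ** (betas[0] - 1),
        -betas[1] * a_hat[1] ** (betas[1] - 1),
        -betas[2] * a_hat[2] ** (betas[2] - 1)]

a1 = a_hat[:]  # candidate best response of user 1
mean = [(a1[j] + 2 * a_hat[j]) / 3 for j in range(3)]

grad = [
    (1 / 3) * alphas[0] * mean[0] ** (alphas[0] - 1) - (1 / 3) * d_pi[0],
    -(1 / 3) * betas[1] * mean[1] ** (betas[1] - 1) - (1 / 3) * d_pi[1],
    -(1 / 3) * betas[2] * mean[2] ** (betas[2] - 1) - (1 / 3) * d_pi[2],
]
for gcomp in grad:
    assert abs(gcomp) < 1e-9  # each component of (grad_g_a_pi) is zero
```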
Substituting the values of $\tensor*[^{i}]{a}{_{j}^*}, \tensor*[^{i}]{\pi}{_{j}^*}, i,j\in\{1,2,3\},$ from (\ref{msg_prof}) into (\ref{grad_g_a_pi}) implies that,
\begin{equation}
\begin{split}
\left[
\begin{array}{c}
\alpha_1 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}} + 2 \h{a}_1^{*} \big)\Big)^{\alpha_1-1}
- 2 \beta_1 \h{a}_1^{*^{(\beta_1-1)}} \\
- \beta_2 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}} + 2 \h{a}_2^{*} \big)\Big)^{\beta_2-1}
+ \beta_2 \h{a}_2^{*^{(\beta_2-1)}} \\
-\beta_3 \Big(\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}} + 2 \h{a}_3^{*} \big)\Big)^{\beta_3-1}
+ \beta_3 \h{a}_3^{*^{(\beta_3-1)}}
\end{array}
\right]
= 0.
\label{grad_g_a*}
\end{split}
\end{equation}
Substituting the values of $\h{a}_1^{*},\h{a}_2^{*},\h{a}_3^{*}$ from (\ref{a*_cent}) into (\ref{grad_g_a*}), and solving for $\tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}}$, we obtain,
\begin{equation}
\begin{split}
\left[
\begin{array}{c}
\tensor*[^{1}]{a}{_{1}} \\
\tensor*[^{1}]{a}{_{2}} \\
\tensor*[^{1}]{a}{_{3}}
\end{array}
\right]
=
\left[
\begin{array}{c}
3 \left( \frac{2\beta_1}{\alpha_1} \left( \frac{\alpha_1}{2\beta_1} \right)^{\frac{\beta_1-1}{\beta_1-\alpha_1}} \right)^{\frac{1}{\alpha_1-1}}
- 2 \left( \frac{\alpha_1}{2\beta_1} \right)^{\frac{1}{\beta_1-\alpha_1}} \\
3 \left( \h{a}_2^{*^{(\beta_2-1)}} \right)^{\frac{1}{\beta_2-1}}
- 2 \h{a}_2^* \\
3 \left( \h{a}_3^{*^{(\beta_3-1)}} \right)^{\frac{1}{\beta_3-1}}
- 2 \h{a}_3^*
\end{array}
\right]
=
\left[
\begin{array}{c}
\left( \frac{\alpha_1}{2\beta_1} \right)^{\frac{1}{\beta_1-\alpha_1}} \\
\h{a}_2^* \\
\h{a}_3^*
\end{array}
\right]
=
\left[
\begin{array}{c}
\tensor*[^{1}]{a}{_{1}^*} \\
\tensor*[^{1}]{a}{_{2}^*} \\
\tensor*[^{1}]{a}{_{3}^*}
\end{array}
\right]
\label{grad_g_sol}
\end{split}
\end{equation}
The last equality in (\ref{grad_g_sol}) follows from the definition of $\tensor*[^{1}]{a}{_{1}^*}, \tensor*[^{1}]{a}{_{2}^*}, \tensor*[^{1}]{a}{_{3}^*}$ in (\ref{msg_prof}) and the value of the centralized optimum in (\ref{a*_cent}). Note that for $(\tensor*[^{1}]{a}{_{1}}, \tensor*[^{1}]{a}{_{2}}, \tensor*[^{1}]{a}{_{3}}) = (\tensor*[^{1}]{a}{_{1}^*}, \tensor*[^{1}]{a}{_{2}^*}, \tensor*[^{1}]{a}{_{3}^*})$, the last three quadratic terms in (\ref{u1_t1_expanded}) are zero for all $(\tensor*[^{1}]{\pi}{_{1}}, \tensor*[^{1}]{\pi}{_{2}}, \tensor*[^{1}]{\pi}{_{3}})$ (since $\tensor*[^{1}]{a}{_{1}^*}=\tensor*[^{2}]{a}{_{1}^*}, \ \tensor*[^{1}]{a}{_{2}^*}=\tensor*[^{2}]{a}{_{2}^*}, \ \tensor*[^{1}]{a}{_{3}^*}=\tensor*[^{2}]{a}{_{3}^*}$ by (\ref{msg_prof})). Hence, any price proposal $(\tensor*[^{1}]{\pi}{_{1}}, \tensor*[^{1}]{\pi}{_{2}}, \tensor*[^{1}]{\pi}{_{3}})$ along with $(\tensor*[^{1}]{a}{_{1}^*}, \tensor*[^{1}]{a}{_{2}^*}, \tensor*[^{1}]{a}{_{3}^*})$ is a maximizer of (\ref{u1_t1_expanded}). Consequently, the message $\bm{m}_1^*$ defined in (\ref{msg_prof}) is a maximizer in (\ref{NE_argmax_cond_1}).
By following similar arguments as above, the conditions (\ref{NE_argmax_cond_2}) and (\ref{NE_argmax_cond_3}) can also be derived. Thus, the message profile $\bm{m}^* = (\bm{m}_1^*, \bm{m}_2^*, \bm{m}_3^*)$ is a NE of the game under consideration.
\\
We now show that the allocations obtained at the above NE possess the properties established by Theorem~\ref{chap5_Thm_NE_opt}.
By substituting the messages defined by (\ref{msg_prof}) into equation (\ref{chap5_Eq_a_hat_i})
we obtain,
\begin{equation}
\left[
\begin{array}{c}
\h{a}_1(\bm{m}^*) \\
\h{a}_2(\bm{m}^*) \\
\h{a}_3(\bm{m}^*)
\end{array}
\right]
=
\left[
\begin{array}{c}
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{1}^*} + \tensor*[^{2}]{a}{_{1}^*} + \tensor*[^{3}]{a}{_{1}^*}\big) \\
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{2}^*} + \tensor*[^{2}]{a}{_{2}^*} + \tensor*[^{3}]{a}{_{2}^*}\big) \\
\frac{1}{3} \big(\tensor*[^{1}]{a}{_{3}^*} + \tensor*[^{2}]{a}{_{3}^*} + \tensor*[^{3}]{a}{_{3}^*}\big)
\end{array}
\right]
=
\left[
\begin{array}{c}
\h{a}_1^* \\
\h{a}_2^* \\
\h{a}_3^*
\end{array}
\right].
\label{NE_alloc_cent}
\end{equation}
Thus (\ref{NE_alloc_cent}) shows that the allocation at Nash equilibrium $\bm{m}^*$ is the optimal centralized allocation $\h{\bm{a}}^*$.
\\
To see budget balance at $\bm{m}^*$, note that the last six terms on the RHS of (\ref{tax1_expanded}) are zero at $\bm{m}=\bm{m}^*$. Similarly, the corresponding terms in the taxes of users 2 and 3 will also be zero. Adding the remaining terms in the users' taxes gives,
\begin{equation}
\begin{split}
\textstyle{\sum_{i\in\{1,2,3\}}} \h{t}_i(\bm{m}^*) =
& \ \ \ \, \big( \tensor*[^{2}]{\pi}{_{1}^*} - \tensor*[^{3}]{\pi}{_{1}^*} \big) \; \h{a}_1^*
+ \big( \tensor*[^{2}]{\pi}{_{2}^*} - \tensor*[^{3}]{\pi}{_{2}^*} \big) \; \h{a}_2^*
+ \big( \tensor*[^{2}]{\pi}{_{3}^*} - \tensor*[^{3}]{\pi}{_{3}^*} \big) \; \h{a}_3^* \\
&
+ \big( \tensor*[^{3}]{\pi}{_{1}^*} - \tensor*[^{1}]{\pi}{_{1}^*} \big) \; \h{a}_1^*
+ \big( \tensor*[^{3}]{\pi}{_{2}^*} - \tensor*[^{1}]{\pi}{_{2}^*} \big) \; \h{a}_2^*
+ \big( \tensor*[^{3}]{\pi}{_{3}^*} - \tensor*[^{1}]{\pi}{_{3}^*} \big) \; \h{a}_3^* \\
&
+ \big( \tensor*[^{1}]{\pi}{_{1}^*} - \tensor*[^{2}]{\pi}{_{1}^*} \big) \; \h{a}_1^*
+ \big( \tensor*[^{1}]{\pi}{_{2}^*} - \tensor*[^{2}]{\pi}{_{2}^*} \big) \; \h{a}_2^*
+ \big( \tensor*[^{1}]{\pi}{_{3}^*} - \tensor*[^{2}]{\pi}{_{3}^*} \big) \; \h{a}_3^* \\
= & \ \ \ \, 0.
\end{split}
\end{equation}
Voluntary participation at $\bm{m}^*$ can be seen by substituting $\bm{m}^*$ from (\ref{msg_prof}) into $u_i(\h{\bm{a}}(\bm{m}^*)) - \h{t}_i(\bm{m}^*), \; i\in\{1,2,3\}$, and computing this aggregate utility for each user using (\ref{u_i_def}), (\ref{a*_cent}), (\ref{msg_prof}) and (\ref{chap5_Eq_a_hat_i})--(\ref{chap5_Eq_l_ij}). The computation shows that $u_i(\h{\bm{a}}(\bm{m}^*)) - \h{t}_i(\bm{m}^*) \geq 0 \;\forall\; i\in\{1,2,3\}$, i.e., each user obtains at least the zero utility they would receive by not participating in the mechanism.
\\
\end{document}
\begin{document}
\date{} \title{EM-Based Channel Estimation from Crowd-Sourced RSSI Samples Corrupted by Noise and Interference}
\begin{abstract}
We propose a method for estimating channel parameters from RSSI measurements and the lost packet count, which works in the presence of losses due to both interference and signal attenuation below the noise floor. This is especially important in wireless networks, such as vehicular networks, where the propagation model changes with the density of nodes. The method is based on {\em Stochastic Expectation Maximization}, where the received data is modeled as a mixture of distributions (no/low interference and strong interference) that is incomplete (censored) due to packet losses. The PDFs in the mixture are Gamma, according to the commonly accepted model for wireless signal and interference power. This approach leverages the loss count as additional information, hence outperforming maximum likelihood estimation (denoted ML-), which does not use this information, for a small number of received RSSI samples. Hence, it allows inexpensive on-line channel estimation from ad-hoc collected data. The method also outperforms ML- on uncensored data mixtures, as ML- assumes that samples are from a single-mode PDF.
\end{abstract}
\section{Introduction} \label{sec:Intro}
For various reasons (such as participatory RF sensing in order to develop low-cost RF maps \cite{GhasemiSousa}, or calibrating the channel in order to reproduce field trials in a simulator), wireless systems often collect signal strength data ``on the fly'', i.e., in the course of actual operation. Such data is often collected in the form of paired values of Tx--Rx distance and the received signal strength indication (RSSI), which can be thought of (within a known additive constant) as the received power in dBm \cite{Vlavianos}. RSSI is measured on a per-packet basis. If there is too much noise and/or interference for a given measurement, the packet can be lost, in which case only the failure indication is recorded (indirectly, e.g., through packet sequencing). The data reduction challenge is to reconstruct, from the collection of recorded RSSI values and packets tagged as lost, the probability density function (PDF) of the received signal. With the PDF thus estimated, the analyst can accurately model the propagation in the environment (e.g., path loss vs.\ distance), and also model interference effects for a given scenario (e.g., geometry, spatial density of both active and inactive Tx-s, etc.). The widespread adoption of Nakagami PDFs for modeling radio links is justified by the abundant analysis of empirical data \cite{Goldsmith,Molisch}. When we refer to the Nakagami PDF, it applies to the signal amplitude; the corresponding power is Gamma-distributed, with shape parameter $m$ and scale parameter $\Omega$, and the dB power (hence, RSSI) can be thought of as log-Gamma. Note that Nakagami with $m=1$ corresponds to the Rayleigh distribution.
However, the problem of estimating the parameters of this PDF based on packet data collected over time periods of practical interest (the shorter the better) remains challenging. The reason is a high proportion of lost (censored) samples caused by interference and by low SNR due to fading or distance-based attenuation. As interference is intermittent, there are two broad classes of RSSI data points, namely, those with no (or low) interference, and those with enough interference to result in a significantly modified statistical model (different PDF). Note that maximum likelihood (ML) \cite{ChengBeaulieu}, typically the best approach for a single statistical model, does not offer a closed-form solution for data mixtures with loss counts. To derive the parameters of PDFs featured in a {\em censored mixture of two random variables} (RVs), representing samples with no/low interference and with strong interference, we propose the use of Stochastic Expectation-Maximization (SEM) \cite{ChauveauSEM} estimators. In addition, our approach leverages the loss count as side information to improve the estimation accuracy for a given number of samples. We introduce the notation {\em ML-} to denote ML that {\em utilizes a single-mode PDF assumption and only received samples}. In this paper, we demonstrate that our approach performs better than ML- in the presence of interference, because it starts with an assumption of two components (dual mixture) and because it uses the loss count as side information. It also outperforms ML- in cases without interference if the number of received samples is small, which is frequently the case in on-line estimation tasks.
The organization of the paper is as follows: in Section~\ref{sec:Syst} we briefly describe the system model based on an example, while in Section~\ref{sec:BasicElms}, we introduce basic algorithmic elements; in Section~\ref{sec:ChannelSEM} we present the algorithm used in our approach; we evaluate our model on both simulated and empirical data, and discuss the results in Section~\ref{sec:Eval}. In the last section we conclude and address future work.
\section{System Model and Motivational Example}\label{sec:Syst}
We refer to no/low interference samples as the {\em signal (or 1st) component}, and to strong interference samples as the {\em interference (or 2nd) component}. We propose to have both signal and interference components in the mixture modeled by the same family of PDFs, i.e., Gamma. Properly parameterized Gamma PDFs ({\em GPDFs}) are widely used to model small-scale fading, to approximate the product of the small-scale and lognormal fading distributions, and to approximate the interference power \cite{Heath13}.
Our claim that interference samples deserve to be modeled by a 2nd component is evident in Fig.~\ref{fig:grad} \cite{KokaljCamp}, where the distortion caused by interference increases with the spatial density of interferers. The field trial in which the samples were collected included 200 (moving) vehicles equipped with wireless modems; the test first ran with 100 active transmitters, then another 50 were added, and in the final third of the test all 200 modems were transmitting (3 parts delineated in Fig.~\ref{fig:grad}). Note that these and other RSSI measurements featured here were made on OFDM transmissions with a 10~MHz bandwidth centered near 5.9~GHz, in compliance with V2V DSRC IEEE~802.11p, using Atheros 802.11p chips.
\begin{figure}
\caption{Time plots from \cite{KokaljCamp}.}
\label{fig:grad}
\end{figure}
It appears that fading increases as more Tx-s are activated in the field, although the propagation environment has not changed (the density of vehicles is constant). This is the effect of the random phases of the interferers: the sum of $M$ random phasors with equal amplitudes approaches a Rayleigh distribution as $M$ grows. Hence, as the interference increases, $m$ in the Gamma PDF (and in the Nakagami PDF, for amplitudes) should approach 1. In this case, the peak dB power (as in Fig.~\ref{fig:grad}) is limited to $10\log_{10}(M)$ above the average power, but the dB power swings below the average can be huge, because of phasor-sum cancellations. In the Rayleigh limit (which $M=10$ roughly approximates), the probability of being 10 dB or more below the average is about 10\%, while the probability of being 10 dB or more above the average is practically zero.
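These tail probabilities can be checked directly: in the Rayleigh limit the received power is exponentially distributed, so both numbers follow from the exponential CDF (a quick sanity check, not part of the paper's derivation):

```python
import math

# In the Rayleigh limit, power is exponential with mean mu; the result is scale-free.
p_10db_below = 1.0 - math.exp(-0.1)  # P(power <= mu/10), i.e. 10 dB or more below the mean
p_10db_above = math.exp(-10.0)       # P(power >= 10*mu), i.e. 10 dB or more above the mean

print(round(p_10db_below, 3))  # 0.095 -- about 10%
print(p_10db_above)            # ~4.5e-05 -- essentially zero
```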
For this reason, we model the 2nd component in the mixture with a GPDF whose shape parameter $m$ is initially set to one, while the 1st component (pure signal) is modeled with a different $m$, initially set according to some side information about the data origin (mobile, static, indoor, outdoor, rural, urban, etc.). Starting with these and other initial values, the SEM algorithm should eventually converge to parameters that better characterize both the signal and the interference as functions of the distance from the signal Tx. In each RSSI mixture component, there are two sub-classes: received (uncensored) data and lost (censored) data. For the no/low interference case, the censored data are mostly at large distances, where the median Rx power is attenuated to or below the noise threshold. The Rx power can also drop below the noise floor at any distance as a result of deep fades due to multi-path. Per Fig.~\ref{fig:grad}, the interference causes similar, possibly more intense, fading on RSSI samples, causing more losses.
The stochastic EM algorithm is a known approach for computing ML estimates in the mixture problem. Our model is derived from an extension of the SEM algorithm \cite{CeleuxDiebolt}, dubbed SEMcm, in a particular case of incomplete data \cite{ChauveauSEM}, where the information loss is due to both mixture of distributions and censored observations. We aim to estimate the parameters of a left-censored dual mixture, which we propose as a model of observed wireless RSSI samples with countable losses, following \cite{ChauveauSEM}.
\section{Basic Algorithmic Elements}\label{sec:BasicElms}
A mixture of 2 distributions of the same family $p(y|\phi_i),\ i=1,2,$ is defined by
\begin{align}\eqnlabel{mixt1}
p_{\varphi}(y)=\alpha_1p(y|\phi_1)+\alpha_2p(y|\phi_2).
\end{align}
Here, $y$ is the RV modeling an arbitrary mixture sample, and $\alpha_1$ is the mixing probability of the first component. Equivalently,
\begin{align}\eqnlabel{mixt1a}
p_{\varphi}(y,z)=p(y|\varphi_z)= f_{\varphi_z}(y)
\end{align}
is the joint distribution of the RVs Y and Z, where Z is the indicator RV modeling the association with one of the two mixture components (with probability $\alpha_i$), and the subscript represents the PDF parameters that we aim to estimate:
\begin{align}\eqnlabel{mixt2}
\varphi=\paren{\alpha_1,\alpha_2,\phi_1,\phi_2},\ 0\leq \alpha_1 \leq 1,\ \alpha_1=1-\alpha_2.
\end{align}
We propose that de-logged RSSI values be modeled by a dual mixture of GPDFs $p(y|\phi_i),\ i=1,2.$ Hence, we have
\begin{align}\eqnlabel{mixt3}
\phi_i=\paren{m_i, \Omega_i};\ p(y|\phi_i)=\frac{1}{\Gamma(m_i)\Omega_i}\paren{\frac{y}{\Omega_i}}^{m_i-1}\eX{\frac{y}{\Omega_i}}.
\end{align}
This model is also depicted in plate notation in Fig.~\ref{fig:plate}~(a).
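As an illustration, samples from the dual mixture \eqnref{mixt1} with Gamma components \eqnref{mixt3} are easy to generate; the sketch below (function and variable names are ours) uses NumPy's gamma(shape, scale) parameterization, which matches $(m_i,\Omega_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, alpha1, m1, omega1, m2, omega2):
    """Draw n power samples from the dual Gamma mixture.
    numpy's gamma(shape, scale) matches the paper's parameterization:
    shape = m_i, scale = Omega_i, so the component mean is m_i * Omega_i."""
    z = rng.random(n) < alpha1                 # latent indicator Z
    y = np.where(z,
                 rng.gamma(m1, omega1, n),     # signal component
                 rng.gamma(m2, omega2, n))     # interference component
    return y, z
```

For example, with $\alpha_1=1/2$, $m_1=7$, $\Omega_1=1$, $m_2=35$, $\Omega_2=0.2$, the sample mean approaches $\alpha_1 m_1\Omega_1+\alpha_2 m_2\Omega_2 = 7$.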
Next, we introduce censoring: let $y\in R$, where $R$ is partitioned into disjoint domains $R=R_o\cup R_1,$ where $R_o$ is the subset of uncensored data, while $R_1$ is the subset corresponding to left-censored data, i.e., $y\le c_L$, where $c_L$ denotes the left threshold. Let us assume that there are $n$ samples in total (e.g., $n$ transmitted packets), $r_o$ of which are uncensored (received packets): $y_k =x_k \in R_o, k \in C_o, \abs{C_o}=r_o,$ and $r_1$ left-censored (lost) samples: $y_k \in R_1, k \in C_1, \abs{C_1}=r_1,$ where $r_o+r_1=n$.
Note that $C_o$ and $C_1$ are disjoint sets of sample indices (e.g., packet sequence numbers, SNs), $x_k$ is measured while $y_k$ is the real value (the two are not equal for censored samples). In our model, the total number of samples and losses could be obtained by tracking the SNs of received packets. We define
\begin{align}\eqnlabel{mixt4}
T^{\paren{p+1}}_{o,i}(x_k)=\E{Z_{k,i}|y=x_k,\phi^{(p)}}=\frac{\alpha^{(p)}_i p(x_k|\phi^{(p)}_i)}{\sum^{2}_{t=1}{\alpha^{(p)}_t p(x_k|\phi^{(p)}_t)}},
\end{align}
where $i=1,2,\ k\in C_o,$ with $T^{\paren{p+1}}_{o,i}(x_k)$ denoting the {\em current estimate} of the probability that the uncensored sample $x_k$ belongs to component $i$; and
\begin{align}\eqnlabel{mixt5}
T^{\paren{p+1}}_{1,i}=\E{Z_{iL}|y\in R_1,\phi^{(p)}}=\frac{\alpha^{(p)}_i \int_{R_1}{p(y|\phi^{(p)}_i)dy}}{\sum^{2}_{t=1}{\alpha^{(p)}_t \int_{R_1}{p(y|\phi^{(p)}_t)dy}}},
\end{align}
where $i=1,2,$ with $T^{\paren{p+1}}_{1,i}$ denoting the {\em current estimate} of the probability that a left-censored sample belongs to component $i.$
The current estimate refers to the $(p+1)$th iteration of the {\em $SEM_{cm}$} algorithm (described in the next subsection). Observe that we have 2 classes of binary latent variables in \eqnref{mixt4} and \eqnref{mixt5}, for $k\in C_o$ and $k\in C_1,$ respectively. The first class includes $r_o$ indicators $Z_{k,1}$ characterized by the ``probability of success'' $T^{\paren{p+1}}_{o,1}(x_k)$ (probability of the 1st component), with $Z_{k,1}=1-Z_{k,2}$; the second class has a single RV $Z_{1L}$ indicating the 1st component w.p.\ $T^{\paren{p+1}}_{1,1},$ with $Z_{1L}=1-Z_{2L}.$
The censored model is also depicted in plate notation in Fig.~\ref{fig:plate}~(b).
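Concretely, the two expectations \eqnref{mixt4} and \eqnref{mixt5} can be evaluated with SciPy's Gamma distribution; the sketch below uses our own function and variable names:

```python
import numpy as np
from scipy.stats import gamma

def e_step(x_obs, c_L, alpha, m, omega):
    """E-step: responsibilities for received samples and the shared
    responsibility for all lost (left-censored) samples."""
    # alpha_i * p(x_k | phi_i), normalized over i -> shape (r_o, 2)
    dens = np.stack([a * gamma.pdf(x_obs, mi, scale=oi)
                     for a, mi, oi in zip(alpha, m, omega)], axis=1)
    T_o = dens / dens.sum(axis=1, keepdims=True)
    # for censored samples, the density is replaced by the
    # left-tail mass P(y <= c_L) of each component
    mass = np.array([a * gamma.cdf(c_L, mi, scale=oi)
                     for a, mi, oi in zip(alpha, m, omega)])
    T_1 = mass / mass.sum()
    return T_o, T_1
```

Each row of `T_o` and the vector `T_1` sum to one by construction.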
\begin{figure}
\caption{Plate models for (a) the uncensored dual mixture of Gamma components and (b) the censored dual mixture of Gamma components; shaded circles represent observables.}
\label{fig:plate}
\end{figure}
\section{SEM-based Channel Estimation Algorithm}\label{sec:ChannelSEM}
Given samples of RSSI, and loss counts for different distances $d$ between a Tx and an Rx, the goal is now to obtain $\alpha_i$ and the two PDFs, $p(y|\phi_i),$ for $i=1$ (signal component) and $i=2$ (interference component), as a function of distance $d.$ We refer to all lost samples as left-censored, as the noise floor is on the left side of the support set of both components, and to the noise floor as the left threshold $c_L.$ Let us first revisit the EM algorithm for mixture data without censoring. We have samples $y,$ but we are missing the indicator RVs $z$ in \eqnref{mixt1a}. The EM algorithm replaces the maximization of the unknown $\log{p_{\varphi}(y,z)}$ by iterative maximizations of the expected log-likelihood, conditional on the observed sample $x$ and the current value of the parameter $\varphi$ \cite{SheskinNonParam}.
To calculate $Q(\varphi,\varphi^{(p)})=\E{\log{p_{\varphi}(y,z)|y=x,\varphi^{(p)}}}$ we must derive the current conditional density of $(y,z)$ given $y=x,$
\begin{align}\eqnlabel{mixt6}
h(y,z|y,\varphi^{(p)})=\frac{p_{\varphi^{(p)}}(y,z)}{f_{\varphi^{(p)}}(y)}.
\end{align}
Iteration $p+1$ has two steps:
\noindent {\bf E-step:} Compute $h(y,z|y,\varphi^{(p)})$ (hence $Q(\varphi,\varphi^{(p)})$)
\noindent {\bf M-step:} Choose $\varphi^{(p+1)}=\argmax_{\varphi\in\Phi}Q(\varphi,\varphi^{(p)}).$
Now, the stochastic EM (SEM) was introduced \cite{CeleuxDiebolt} to overcome the numerical limitations of EM. For the current value $\varphi^{(p)}$ of the parameter, it completes the observed samples by replacing each missing data by a value drawn at random from $h(y,z|y,\varphi^{(p)})$ (S-step), and then computes the ML estimate based on the completed sample (M-step). We first define the three steps for the left-censored dual-mixture in general, and then present the specific expressions for GPDF.
{\bf E-step:} Compute $T^{\paren{p+1}}_{o,i}(x_k)$ for $k\in C_o,\ i=1,2$
\noindent \qquad\qquad Compute $T^{\paren{p+1}}_{1,i}$ for $i=1,2$
{\bf S-step: (1)} For $x_k \in R_o, k \in C_o$ simulate $r_o$ binary vectors $z^{(p+1)}_k=\set{z^{(p+1)}_{k1},z^{(p+1)}_{k2}}$ by running Bernoulli experiments w.p. $T^{\paren{p+1}}_{o,1}$ ; {\bf(2)} simulate $r_1$ binary vectors $z^{(p+1)}_{Li}=\set{z^{(p+1)}_{Li1},z^{(p+1)}_{Li2}},$ $i=1,\cdots, r_1,$ each as a Bernoulli experiment w.p. $T^{\paren{p+1}}_{1,1}$; {\bf(3)} simulate $r_1$ missing left censored values sampling from $h(\cdot|c_L,\varphi^{(p)})=\frac{p_{\varphi^{(p)}}(\cdot)}{\int_{R_1}{f_{\varphi^{(p)}}(y)}dy}$;
\noindent\begin{align}\eqnlabel{mixt7}
\mbox{{\bf M-step:} Choose }\varphi^{(p+1)}=\argmax_{\varphi\in\Phi}Q(\varphi,\varphi^{(p)})
\end{align}
where
\begin{align}\eqnlabel{mixt8}
\nonumber &Q(\varphi,\varphi^{(p)})=\sum^{2}_{i=1}{\paren{\sum_{k\in C_o}{z^{(p+1)}_{ki}}+\sum^{r_1}_{j=1}{z^{(p+1)}_{Li,j}}}}\log\alpha^{p}_i+\\
&\sum^{2}_{i=1}{\paren{\sum_{k\in C_o}{z^{(p+1)}_{ki}\log p(x_k|\phi^p_i)}+\sum^{r_1}_{j=1}{z^{(p+1)}_{Li,j}\log p(y_{L,j}|\phi^p_i)}}}
\end{align}
We next evaluate $Q(\varphi,\varphi^{(p)})$ for GPDFs, resulting in the proposed channel estimation algorithm, dubbed {\em SEMcmG}:
\noindent {\bf E-step:} as in \eqnref{mixt7}-E, based on \eqnref{mixt3}
\noindent {\bf S-step:} as in \eqnref{mixt7}-S, do {\bf (1)-(3)}, based on \eqnref{mixt3}
\noindent {\bf M-step:} Based on \eqnref{mixt3} and \eqnref{mixt8} solve
\begin{align}\eqnlabel{mixt9}
\nonumber &\mbox{i. }\frac{\partial Q(\varphi,\varphi^{(p)})}{\partial\alpha_i}=0\Rightarrow\alpha^{(p+1)}_i=\frac{1}{n}\paren{\sum_{k\in C_o}{z^{(p+1)}_{ki}}+\sum^{r_1}_{j=1}{z^{(p+1)}_{Li,j}}},\\
\nonumber &\mbox{ii. }\frac{\partial Q(\varphi,\varphi^{(p)})}{\partial\Omega_i}=0\Rightarrow \Omega^{(p+1)}_{i}=\frac{\Omega^{(p+1)}_{im}}{m^p_i}\\
\nonumber &\Omega^{(p+1)}_{im}=\frac{\sum_{k\in C_o}{z^{(p+1)}_{ki}x_k}+\sum^{r_1}_{j=1}{z^{(p+1)}_{Li,j}y^{(p+1)}_{L,j}}}{n\alpha^{(p+1)}_i}\\
\nonumber&\mbox{iii. }\frac{\partial Q(\varphi,\varphi^{(p)})}{\partial m_i}=0\Rightarrow\Psi(x)=\frac{\deriv{\Gamma}(x)}{\Gamma(x)}\\
\nonumber&\Psi(x)\approx \log{x}-\frac{1}{2x}-\frac{1}{12x^2};\ L^{p+1}_{i,x}=\log{\frac{x}{\Omega^{(p+1)}_{i}}}\\
&L^{p+1}_{iA}=\frac{\sum_{k\in C_o}{z^{(p+1)}_{ki}L^{p+1}_{i,x_k}}+\sum^{r_1}_{j=1}{z^{(p+1)}_{Li,j}L^{p+1}_{i,y^{p+1}_{Lj}}}}{n\alpha^{(p+1)}_i}
\end{align}
$$\mbox{Solve }\Psi(m^{p+1}_i)-L^{p+1}_{iA}=0.$$
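Step iii reduces to inverting the digamma function. A minimal sketch using SciPy's exact $\Psi$ (instead of the series approximation quoted above), with bracketing justified by the monotonicity of $\Psi$:

```python
from scipy.special import digamma
from scipy.optimize import brentq

def solve_m(L_A):
    """Solve Psi(m) = L_A for the shape parameter m.
    Psi is strictly increasing from -inf to +inf on (0, inf),
    so a root always exists and can be bracketed."""
    lo, hi = 1e-6, 1.0
    while digamma(hi) < L_A:    # grow the bracket until it contains the root
        hi *= 2.0
    return brentq(lambda m: digamma(m) - L_A, lo, hi)
```

For example, `solve_m(digamma(3.3))` recovers 3.3.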
Note that the denominators $n\alpha^{(p+1)}_i$ in \eqnref{mixt9} average over the expected number of samples in each component. The total number of samples and losses could be obtained by tracking the sequence numbers of received packets.
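As a small illustration of this bookkeeping, the counts $n$, $r_o$, $r_1$ can be recovered from the received sequence numbers alone (a sketch assuming consecutive SNs without wrap-around; the function name is ours):

```python
def loss_counts(received_sns):
    """Recover n, r_o, r_1 from the sequence numbers of received packets."""
    sns = sorted(received_sns)
    n = sns[-1] - sns[0] + 1   # packets transmitted within the observed span
    r_o = len(sns)             # received (uncensored) samples
    r_1 = n - r_o              # lost (left-censored) samples
    return n, r_o, r_1
```

For example, receiving SNs 3, 4, 7, 9 implies 7 transmitted packets, 4 received, and 3 lost.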
\section{Evaluation} \label{sec:Eval}
\subsection{Model Evaluation on Simulated Data}
Besides evaluating the SEMcm algorithm on some trivial data sets (one component with left-censoring \cite{GaussMixtWiFi}; one doubly-censored component), we successfully evaluated SEMcmG on a simulated mixture of two left-censored components, meant to emulate interference-affected RSSI samples. The first component represents the signal over a distance range identical to the range considered in the empirical data evaluation: $l_d = 23$--$32$, where $l_d$ is the log-distance, defined as $10\log_{10}(\mbox{distance in m})$. The second component models RSSI samples with strong interference over the same distance range. We simulated different parameters, mostly with the interference component having $m=1$ (i.e., $m_2\approx 1$), following our discussion in Section~\ref{sec:Intro}. The results are encouraging. However, we now present a mixture with arbitrary parameters, chosen to create a ``signal cloud'' visually distinguishable from the interference cloud in the mixture scatter plot (bottom-left pane in Fig.~\ref{fig:making}), while capable of exemplifying the main concerns about censored RSSI mixtures. $m_1$ is chosen slightly high for the assumed mobile signal ($m_1=7$), while $m_2 =35$; such a high value of $m_2$ may represent a single (or dominant) interferer.
Signal attenuation over space is exponential, with the attenuation coefficient to be determined through parameter estimation. We choose to present the exponential attenuation in dB domain as a linear function of $l_d.$ Hence, as in our prior work \cite{KokaljCamp}, median path-loss [PL] is fitted by the straight-line function
\begin{align} \eqnlabel{pl}
\set{PL}=A-Bl_d.
\end{align}
Note that PL is defined as $PL=RSSI-10\log_{10}(P_t)$, where $P_t$ is the Tx power. Hence, it is distributed as log-Gamma. We present data points in some of our plots as PL rather than RSSI, as it reflects the propagation medium only (independent of the Tx power). The simulated $\Omega$ was chosen so that the linear fit into the dBm value of the Gamma mean (i.e., $10\log_{10}(\Omega m)$) vs.\ Tx--Rx distance equals \eqnref{pl} with $A=-16$, $B=3$. Note that $\Omega$ is a function $\Omega(l_d).$ With these values, the signal-only scatter plot (in dB) is presented in the upper-left corner of Fig.~\ref{fig:making}.
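The simulated $\Omega(l_d)$ follows by inverting the fitted mean line; the helper below (our naming) encodes the example values $A=-16$, $B=3$:

```python
import math

def omega_of_ld(l_d, m, A=-16.0, B=3.0):
    """Choose Omega(l_d) so that the Gamma mean in dB,
    10*log10(Omega*m), equals the fitted line A - B*l_d."""
    mean_db = A - B * l_d
    return 10.0 ** (mean_db / 10.0) / m
```

By construction, $10\log_{10}(\Omega(l_d)\,m)$ lies exactly on the line for every $l_d$.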
\begin{figure}
\caption{The {\bf making of} the simulated censored mixture.}
\label{fig:making}
\end{figure}
As for interference, for simplicity and without loss of generality, we propose that the median interference is constant over space, e.g., assuming one distant interferer. Such interference points (dBm) are shown in the upper-right plot in Fig.~\ref{fig:making}.
Notice that for both components we generated points for discrete values of $l_d,$ referred to as distance bins, with 0.5~dB spacing in between. For each bin we generated 1000 signal (or interference) points, referred to as bin arrays. Then, for each bin and each bin's sample index (1--1000), we selected with probability $1/2$ either the signal or the interference point in that place, making a balanced mixture of the two components and ending up with 1000 points per bin (bottom-left in Fig.~\ref{fig:making}). The choice of the mixing coefficient $1/2$ that gives equal weight to both components is deliberate, as such mixtures were the hardest to separate. Finally, we censor (drop) the points that are below the threshold $c_L = -109$~dBm (indicated in the bottom plots of Fig.~\ref{fig:making}), resulting in the set of points in the bottom-right plot of Fig.~\ref{fig:making}. These are the points fed into SEMcmG, along with the initial values of the parameters and the information of how many samples per bin were censored. The initial values were distorted with respect to the real values by up to 50\%.
\begin{figure}
\caption{Estimated mean (green) is almost identical to the real one for most distances (except for cluster-overlap bins), so that its linear fit (cyan) covers the black line (real mean from the bottom-right plot of Fig.~\ref{fig:making}).}
\label{fig:InterfMean}
\end{figure}
\begin{figure}
\caption{The SEM estimate of the $m$ parameter diverges from the real value in cluster-overlap bins (as do ML and MB). For other bins, ML and MB take the interference as part of the signal and estimate heavier fading ($m$ below 1).}
\label{fig:mixtM}
\end{figure}
Observe in the bottom-right plot of Fig.~\ref{fig:making} the red line, obtained as a Least Squares Error (LSE) estimate of the mean of the censored mixture, as opposed to the black line that represents the real mean of the signal. This illustrates how much the assumption of a single component (as in the presented LSE) can cost in terms of estimation error. With SEM, the per-bin estimates were practically perfect for most simulated mixtures when data losses constituted less than 60--70\% of the data, while for higher losses they were still better than ML estimates. For this particular mixture, losses were up to 45\% (Fig.~\ref{fig:mixtM}), in order to highlight the ``cluster overlap'' problem, i.e., the distance bins where the median values of the components are indistinguishable. Observe the green line in Fig.~\ref{fig:InterfMean}, which illustrates the signal's mean estimate. Only in the area around $l_d = 27$ (cluster overlap) does SEM diverge from the real mean, following the LSE mean estimate instead, and in the same area the interference mean estimate follows that of the real signal. We are looking into additional mechanisms to address this phenomenon.
Fig.~\ref{fig:mixtM} shows the $m$ estimate per bin. Again, as the likelihood equations are intractable for an exact maximum likelihood estimate, we compare our results for the $m$ parameter with good existing approximations. The {\em ML and moment-based (MB)} estimates in Fig.~\ref{fig:mixtM} are calculated based on the $r$ received samples. The former is obtained according to the following maximum likelihood approximation $$m^{ML}=\frac{6+\sqrt{36+48\Delta}}{24\Delta},\mbox{ \cite{ChengBeaulieu}, where}$$
$$\Delta=\ln{\paren{\frac{1}{r}\sum^{r}_{i=1}{p_i}}}-\frac{1}{r}\sum^{r}_{i=1}{\ln{p_i}}$$ and $p_i$ is the Rx power sample (de-logged RSSI). The latter, $m^{MB},$ follows eqn.~(10) from \cite{AbdiKaveh}, which is based on the first two sample moments of the received power $p_i$.
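For reference, the quoted approximation is simple to compute from the received power samples; the sketch below (our naming) takes $\Delta$ as the log of the sample mean minus the mean of the logs, the standard form of this statistic:

```python
import math

def m_ml(powers):
    """Approximate ML estimate of the Gamma/Nakagami shape parameter m
    from received power samples, per the quoted formula."""
    r = len(powers)
    delta = math.log(sum(powers) / r) - sum(math.log(p) for p in powers) / r
    return (6.0 + math.sqrt(36.0 + 48.0 * delta)) / (24.0 * delta)
```

For Gamma-distributed samples with shape $m=7$, the estimate concentrates near 7 as $r$ grows.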
The ML and MB estimates never outperform the SEM estimate, not even in the ``cluster overlap'' area (Fig.~\ref{fig:mixtM}). In fact, outside the ``overlap'', ML and MB produce huge errors, as they assume a single Gamma distribution and hence interpret the wide clouds outside the central area as a sign of deep fades; consequently, $m$ is estimated too low (around 1). This is a very important argument for the proposed approach, as interference clearly cannot be accounted for by any single-component model.
\begin{figure}
\caption{Convergence of the parameter estimates to the known true values over 30 iterations (bin 10): $\Omega$ estimate (upper plot), $m$ estimate (bottom plot).}
\label{fig:both}
\end{figure}
A feature of interest for on-line estimation is the convergence rate. Fig.~\ref{fig:both} illustrates it for both $\Omega$ and $m$ in a given bin, showing the step-by-step evolution of the estimated parameters. Both estimates would likely have improved further with additional iterations.
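The core SEM iteration can be sketched in a few lines. The following toy version is ours, not the paper's algorithm: it uses a single Gaussian component rather than the censored Gamma mixture, and hypothetical parameter values, but it shows the essential idea of exploiting the count of lost packets: stochastically impute the left-censored samples from the current fit, then re-estimate by complete-data ML.

```python
import random

def sem_censored_normal(observed, n_lost, c_l, iters=40, seed=1):
    # observed: samples above the censoring threshold c_l
    # n_lost:   count of censored (lost) samples below c_l
    rng = random.Random(seed)
    mu = sum(observed) / len(observed)
    sd = (sum((x - mu) ** 2 for x in observed) / len(observed)) ** 0.5
    for _ in range(iters):
        # stochastic E-step: impute censored values by rejection sampling
        imputed = []
        while len(imputed) < n_lost:
            x = rng.gauss(mu, sd)
            if x < c_l:
                imputed.append(x)
        # M-step: complete-data ML update
        data = observed + imputed
        mu = sum(data) / len(data)
        sd = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5
    return mu, sd

# Demo: true mean -113 dBm, std 3 dB, censoring at c_L = -115 dBm
rng = random.Random(7)
full = [rng.gauss(-113.0, 3.0) for _ in range(5000)]
obs = [x for x in full if x >= -115.0]
mu_hat, sd_hat = sem_censored_normal(obs, len(full) - len(obs), -115.0)
print(mu_hat, sd_hat)
```

Note how the observed-only mean is biased high (the censored tail is missing), while the SEM iteration pulls the estimate back toward the true mean.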
\subsection{Model Evaluation on Empirical Data}
The ZR trial, described in \cite{KokaljCamp}, included only one Tx at a time, mounted on a vehicle that traveled back and forth from the static Rx-s on a straight road $d_{max}=1200$~m long. This interference-free scenario let us study the performance of our SEMcm algorithms in terms of signal-component estimates when the initial values for the (non-existent) interference were arbitrary. As the Tx was mobile, suggesting a Gamma distribution, SEMcm with Gaussian PDFs gave poor estimates (as expected) and numerical instabilities, while SEMcmG showed good results. The initial values for the signal parameters were taken from imperfect estimates, based on a linear LSE fit to a pathloss function that was linear only beyond a break point and was further distorted by noise-floor saturation (Fig.~\ref{fig:ZRpt18}, upper pane).
\begin{figure}
\caption{ZR trial: mean ($10\log_{10}(\Omega m)$) estimates (upper pane) and $m$ estimates (middle pane) per distance bin, with transmitted and lost packet counts (bottom pane).}
\label{fig:ZRpt18}
\end{figure}
For simplicity we performed SEMcmG only for the distance bins after the break point (2nd segment), as the smaller distances involve the two-ray phenomenon. The linear fit of the initial $\Omega m$ in dBm, represented by the yellow line in Fig.~\ref{fig:ZRpt18}, matches \eqnref{pl} with coefficients A2 and B2. Other coefficients, based on the LSE over the 2nd segment only, came closer to the real median PL (known from running the same field trial with higher Tx power, which avoids the noise floor within the traversed distances).
The SEMcm estimated line (red line with circular markers) is almost identical to the real one. The initial value for $m$ was 1.5 (bottom black line in the middle pane of Fig.~\ref{fig:ZRpt18}), yet SEMcmG improved it to 3.3 on average, identical to its ML estimate. The ML estimate works well when there is a sufficient number of samples, which was the case here: the bottom pane of Fig.~\ref{fig:ZRpt18} shows the number of transmitted packets in black and the number of lost packets in red, and even the last bin, with the worst losses (75\%), still has more than 500 received packets, which is sufficient for ML.
In conclusion, without interference, SEMcm outperforms the LSE approach in estimating the mean ($10\log_{10}(\Omega m)$), while it is comparable to ML in estimating $m$.
\begin{figure}
\caption{Log-Gamma component PDFs ($f_1$ and $f_2$), based on the SEMcm estimates, given data from the 3rd part of Fig.~1 for a given distance bin. The mixing coefficients are found to be 0.1 and 0.9; the mixture PDF with these $\alpha_i$ is shown as an inset, along with the RSSI histogram of that distance bin. The arrows point to the similarity of the estimated PDF shapes and the empirical data.}
\label{fig:GammaCISS}
\end{figure}
Finally, Fig.~\ref{fig:GammaCISS} is based on the data featured in Fig.~1. Apart from showcasing the notion of the dual mixture and censoring, this figure affirms the censored-mixture approach by illustrating a good match between the SEM-reconstructed PDF of the data featured in Fig.~1 and its empirical distribution. Observe that the points left of the black vertical line around $-115$~dBm represent censored samples (i.e., $c_L=-115$).
\section{Conclusion}
Our main contribution is a novel model of interference-affected RSSI samples, presented as a censored mixture of Gamma PDFs, based on the insight from data collected for varying interference levels (see Fig.~1). We also applied a variant of the EM algorithm which not only mechanizes the computation of the parameters' ML estimates for our complex statistical model of {\em incomplete non-Gaussian mixed data} \cite{DempsterLairdRubin,LeeScott}, but also uses stochastic randomization to avoid strong dependence on its starting position, convergence to a saddle point, and a low convergence rate. A valuable property of this method is that it leverages the count of lost data to improve estimates from small numbers of samples, which is especially important for online estimation based on crowd-sourced data.
Our future work will explore online versions of EM algorithms \cite{onlineEMlatent} applied to our problem. Also, future work will address improvements for signal levels that are on average too close to interference levels, such as in cluster-overlap bins in Figures~\ref{fig:InterfMean}~and~\ref{fig:mixtM}. Although this is a common problem in data clustering, we believe that good predictive models for cluster overlaps could be developed based on signal samples in distance bins with good separation.
\end{document}
\begin{document}
A collection of Riemannian metrics $\{ g(t) \}_{t \in [0, T)}$ on a closed, smooth manifold $M$ evolves by Ricci flow if
$$\partial_t g = - 2 Rc \qquad \text{for all } t \in (0, T)$$
Solutions often form singularities in finite time and
Hamilton ~\cite{H95} showed that the Riemann curvature tensor blows up at a finite-time singularity, that is
$$\limsup_{t \nearrow T} \sup_{x \in M} | Rm |_{g(t)} ( x, t) = + \infty$$
In fact,
$$\limsup_{t \nearrow T} ( T - t) \sup_{x \in M} | Rm |_{g(t)} ( x, t) > 0 $$
as the parabolic scaling invariance of the Ricci flow suggests.
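To make the scale-invariance statement explicit (our addition, not from the source): if $g(t)$ solves the Ricci flow on $[0,T)$, then for any $\lambda > 0$ so does the parabolically rescaled family

```latex
\tilde g_\lambda (t) \;:=\; \lambda \, g\left( T + \lambda^{-1} t \right),
\qquad t \in [-\lambda T, 0),
```

since the Ricci tensor is invariant under constant rescaling of the metric, while $|Rm|_{\tilde g_\lambda} = \lambda^{-1} |Rm|_{g}$; hence $(T-t)\sup_{x \in M} |Rm|$ is the dimensionless quantity singled out in the bound above.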
Finite-time singularities of the Ricci flow are classified as
\begin{equation*} \begin{aligned}
\text{\textit{Type I} if } & \limsup_{t \nearrow T} ( T - t) \sup_{x \in M} | Rm |_{g(t)} ( x, t) < + \infty \text{, or }\\
\text{\textit{Type II} if } & \limsup_{t \nearrow T} ( T - t) \sup_{x \in M} | Rm |_{g(t)}( x, t) = + \infty
\end{aligned} \end{equation*}
Due in part to results of Enders-M{\"u}ller-Topping ~\cite{EMT11}, type I singularities are better understood than type II singularities in many ways.
The first construction of type II singularities for the Ricci flow appeared in ~\cite{DP06}.
Later, Gu and Zhu ~\cite{GZ08} constructed type II singularities for the Ricci flow by considering rotationally symmetric metrics on $S^{n+1}$.
Angenent, Isenberg, and Knopf ~\cite{AIK15} then provided an alternate construction that in particular allowed for curvature blow-up rates of
$$\sup_{x \in M} | Rm|_{g(t)}( x, t) \sim (T-t)^{-2 + \frac{2}{k} }$$
for any $k \in \mathbb{N}$ with $k \ge 3$.
Examples of type II singularities for the Ricci flow on $\mathbb{R}^n$ with blow-up rates of $(T-t)^{-\lambda}$ for any $\lambda \ge 2$ appeared in \cite{W14}.
The first examples of type II singularities for the K{\"a}hler-Ricci flow recently appeared in ~\cite{LTZ18}.
Here, we construct Ricci flow solutions on certain product manifolds that form type II singularities with curvature blow-up rates given by arbitrarily large powers of $(T-t)$.
The main theorem is the following:
\begin{thm} \label{mainThmAbridged}
Let $N^p$ be a closed $p$-dimensional manifold that admits an Einstein metric $g_N$ with positive Einstein constant.
Let $q \ge 10$ and $k > 1$.
Then there exists a smooth solution $\{ g_k(t) \}_{t \in [0, T)}$ to the Ricci flow on $M = N^p \times S^{q+1}$ that forms a local type II singularity at time $T < \infty$ such that
$$0 < \limsup_{t \nearrow T} (T-t)^{k} \sup_{x \in M} | Rm|_{g_k(t)} (x,t) \le \infty$$
and, for some $x \in S^{q+1}$,
$$\left( N^p \times S^{q+1}, \frac{1}{T-t} g_k( t ) , x \right) \xrightarrow[t \nearrow T]{ C^2_{loc} \left( N^p \times S^{q+1} \setminus( N^p \times \{ x \}) \right) } \big( C( N^p \times S^q), g_{RFC}, x_* \big)$$
where $g_{RFC}$ is the Ricci-flat cone metric
$$g_{RFC} = dr^2 + \frac{p-1}{p+q-1} r^2 g_{N} + \frac{q-1}{p+q-1} r^2 g_{\mathbb{S}^q}$$
on the cone $C(N^p \times S^q)$ with vertex $x_*$ (see section \ref{ricciFlatConeMetric} for additional details)
and $g_N$ is normalized such that $Ric_{g_N} = (p-1) g_N$.
\end{thm}
\noindent More precise asymptotics on the singularity formation will be obtained in the course of the proof; in particular, they imply convergence of the parabolically rescaled \textit{flows}, as opposed to mere convergence of \textit{time-slices}.
When $N$ is homogeneous with respect to $G$, the metrics $g_k(t)$ are cohomogeneity one with respect to the $G \times SO(q+1)$ action on $N^p \times S^{q+1}$.
The proof of theorem \ref{mainThmAbridged} thereby provides insight into the dynamics of the Ricci flow of metrics of low cohomogeneity and on multiply-warped product metrics.
Indeed, these rigorous examples of conical singularity formation for the Ricci flow (cf. ~\cite{M14, IKS16, Appleton19}) qualitatively differ from the examples of Ricci flow singularities modeled on generalized cylinders $\mathbb{R}^p \times S^q$ (see e.g. ~\cite{S00, AK04, AIK15, Carson17}).
Theorem \ref{mainThmAbridged} has some precedence in mean curvature flow.
In ~\cite{V94}, Vel{\'a}zquez examined the mean curvature flow starting from smooth, non-compact, $O(n) \times O(n)$-invariant hypersurfaces in $\mathbb{R}^{2n}$.
When $n \ge 4$, he shows that there exist such mean curvature flow solutions $\{ \Sigma_t \}_{t \in [0, T)}$ that form a finite-time singularity at the origin whose parabolic dilations $\{ ( T- t)^{-1/2} \Sigma_{ t} \}$ about the origin converge to the Simons cone in a precise sense.
Moreover, for any $l \ge 2$, there are such solutions with blow-up rates for the second fundamental form given by
$$\sup | A| \sim ( T-t)^{-\frac{ 1}{2} - \sigma_l } $$
$$ \text{ where } \quad \sigma_l = \frac{\frac{\alpha - 1}{2} + l}{1 + | \alpha| } \quad \text{ and } \quad \alpha = - \frac{2n-3}{2} + \frac{1}{2} \sqrt{ (2n - 1)^2 - 8(2n-2)}$$
After rescaling by the blow-up rate of the second fundamental form, the rescaled hypersurfaces $\{ (T- t)^{ -\frac{1}{2} - \sigma_l } \Sigma_{t} \}$ converge locally uniformly to a minimal hypersurface that is asymptotic to the Simons cone.
These solutions with $l = 2$ were further analyzed in ~\cite{GS18} where it is shown that the mean curvature blows up but at a rate strictly less than that of the second fundamental form.
While the Riemann curvature tensor and the Ricci tensor ~\cite{S05} blow up at a finite-time singularity of the Ricci flow, it is unknown if the scalar curvature necessarily blows up at a finite-time singularity.
This problem of scalar curvature blow-up is related to the possible blow-up rates of the Riemann curvature tensor.
Indeed, B. Wang ~\cite{W12} showed that
$$\limsup_{t \nearrow T} (T-t) \sqrt{ \sup_{x \in M} | Rm | } \sqrt{ \sup_{x \in M } R } > 0$$
which in particular implies that the scalar curvature can remain bounded only if $ |Rm|$ blows up at a rate of at least $(T- t)^{-2}$.
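Spelling out the implication (our one-line computation): along a sequence $t_i \nearrow T$ realizing the $\limsup$ with constant $c > 0$, a uniform bound $\sup_M R \le C$ would force

```latex
\sqrt{ \sup_{x \in M} | Rm |_{g(t_i)} } \;\ge\; \frac{c}{(T - t_i)\sqrt{C}},
\qquad\text{hence}\qquad
\sup_{x \in M} | Rm |_{g(t_i)} \;\ge\; \frac{c^2}{C \,(T - t_i)^2},
```

so bounded scalar curvature is compatible only with curvature blow-up at rate at least $(T-t)^{-2}$.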
Theorem \ref{mainThmAbridged} gives the existence of Ricci flow solutions with blow-up rates greater than $(T-t)^{-2}$ and singularities modeled on a Ricci-flat cone at parabolic scales.
Moreover, a formal matched asymptotic argument indicates that the solutions converge to the Ricci-flat B{\"o}hm metric on $\mathbb{R}^{q+1} \times N^p$ (see section \ref{bohmMetric}) at the curvature blow-up scale.
These asymptotics suggest that the solutions in theorem \ref{mainThmAbridged} may include examples of finite-time singularities of the Ricci flow with bounded scalar curvature.
We do not obtain the scalar curvature blow-up behavior in this current paper but believe it merits further investigation.
To this end, we include a formal and numerical argument that suggests the scalar curvature satisfies a type I bound, namely
$$\limsup_{t \nearrow T} (T-t) \sup_{x \in S^p \times S^{q+1} } |R|(x,t) < \infty$$
Additionally, because a Ricci-flat cone may be considered as a shrinking Ricci soliton, we note that the singularity formation exhibited in theorem \ref{mainThmAbridged} is consistent with the convergence results for Ricci flows with uniform scalar curvature bounds proved in ~\cite{Bamler18}.
The outline of the current work is as follows:
Section \ref{Setup} establishes some preliminary results for the Ricci flow of doubly-warped product metrics and collects notation for the coordinate systems used throughout the paper.
In section \ref{BoxDefn}, we set up the topological argument used to construct the solutions in theorem \ref{mainThmAbridged}.
The topological argument follows the general strategy employed in ~\cite{V94, HV}.
In our case, however, we must consider a system of partial differential equations (\ref{psRF}), rather than a scalar equation, and so the use of maximum principles is more delicate.
The remainder of the proof relies on technical estimates contained in sections \ref{PointwiseEst}, \ref{CoeffEst}, \ref{ShortTimeEsts}, and \ref{LongTimeEsts}, as well as an argument in section \ref{NoInnBlowup} that ensures the Ricci flow solutions remain smooth up to time $T$.
Section \ref{ScalarCurv} provides the formal argument for the scalar curvature behavior at the singularity time.
Finally, appendix \ref{AnalyticFacts} collects several facts about the special functions and weighted $L^2$-spaces used throughout the paper.
Appendix \ref{AppendixOfConstants} lists the parameters and constants used in the topological argument for the readers' convenience
and is ordered such that each constant depends only on those above it in the list.
\addtocontents{toc}{\protect\setcounter{tocdepth}{0}}
\end{document}
\begin{document}
\title[Regularity of a degenerate parabolic equation]{Regularity of a degenerate parabolic equation appearing in Ve\v{c}e\v{r}'s unified pricing of Asian options}
\author[H. Dong]{Hongjie Dong}
\address[H. Dong]{Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912, USA}
\email{Hongjie\_Dong@brown.edu}
\author[S. Kim]{Seick Kim}
\address[S. Kim]{Department of Mathematics, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, Republic of Korea}
\email{[email protected]}
\thanks{S. Kim is partially supported by NRF Grant No. 2012R1A1A2040411.}
\subjclass[2000]{35B65, 35K20, 91B28}
\keywords{Asian options; degenerate parabolic equation; regularity of solutions}
\begin{abstract}
Ve\v{c}e\v{r} \cite{Vecer02} derived a degenerate parabolic equation with a boundary condition characterizing the price of Asian options with generally sampled average.
It is well understood that there exists a unique probabilistic solution to such a problem but it remained unclear whether the probabilistic solution is a classical solution.
We prove that the probabilistic solutions to Ve\v{c}e\v{r}'s PDE are regular.
\end{abstract}
\maketitle
\section{Introduction and Main result}
An Asian option is a specialized form of an option where the payoff is not determined by the underlying price at maturity, but is tied to the average value of the underlying security over a certain time interval.
In an interesting article \cite{Vecer02}, Ve\v{c}e\v{r} presented a unifying PDE approach for pricing Asian options that works for both discrete and continuous arithmetic average.
By using a dimension reduction technique, he derived a simple degenerate parabolic equation in two variables $(t,x)\in [0,T)\times\mathbb R$
\begin{equation}
\label{eq:I03c}
u_t+\frac{1}{2}
\left(x-e^{-\int_0^t d\nu(s)}q(t) \right)^2 \sigma^2 u_{xx}=0\quad
\end{equation}
supplemented by a terminal condition
\begin{equation}
\label{eq:I04d}
u(T,x)=(x-K)_{+}:= \max(x-K,0),
\end{equation}
which gives the price of the Asian option.
Here, $\nu(t)$ is the measure representing the dividend yield, $\sigma$
is the volatility of the underlying asset, $q(t)$ is the trading strategy
given by
\[
q(t)=\exp\left\{-\int_t^T \,d\nu(s)\right\}\cdot\int_t^T
\exp\left\{-r(T-s)+\int_s^T \,d\nu(\tau)\right\}\,d\mu(s),
\]
where $r$ is the interest rate and $\mu(t)$ represents a general weighting
factor.
We ask readers to refer to \cite{Vecer02} for the derivation of
the equation \eqref{eq:I03c}.
It should be noted that the function $b$ given by
\begin{align}
\label{eq:I04x}
b(t) &:=e^{-\int_0^t d\nu(s)}q(t)\\
\nonumber
&= \exp\left\{-\int_0^T \,d\nu(s)\right\}\cdot
\int_t^T \exp\left\{-r(T-s)+\int_s^T \,d\nu(\tau)\right\}\,d\mu(s)
\end{align}
is a nonnegative monotone decreasing function defined on $[0,T]$, and the problem now reads as follows:
\begin{equation}
\label{eq:I01a}
u_t+\tfrac{1}{2}\left(x-b(t)\right)^2\sigma^2 u_{xx}=0.
\end{equation}
\begin{equation}
\label{eq:I02b}
u(T,x)=(x-K)_{+}.
\end{equation}
It is mathematically natural to investigate existence, uniqueness, and regularity of a solution to the degenerate parabolic problem \eqref{eq:I01a}, \eqref{eq:I02b}.
The existence and uniqueness question can be easily answered by using a probabilistic argument.
Indeed, the problem \eqref{eq:I01a}, \eqref{eq:I02b} admits a unique probabilistic solution
\begin{equation}
\label{eq:I09u}
u(t,x):=\mathbb E f(X_{T}(t,x)),
\end{equation}
where $f(y):=(y-K)_{+}$ and $X_s=X_s(t,x)$ is the stochastic process that satisfies, for $t\in[0,T]$ and $x\in\mathbb R$, the following SDE:
\begin{equation}
\label{eq:I08z}
\left\{\begin{array}{l}
dX_s=(X_s-b_s)\sigma\,dw_s,\quad s\ge t, \quad (\,b_s=b(s)\,)\\
X_{t}=x.
\end{array}\right.
\end{equation}
On the other hand, regularity of the probabilistic solution $u$ defined in
\eqref{eq:I09u} is a more subtle issue.
There is a classical result, originally due to Freidlin, saying that
if $f$ in \eqref{eq:I09u} is twice continuously differentiable and satisfies a certain growth condition,
then $u(t,x)$ defined by \eqref{eq:I09u} is meaningful
and twice differentiable with respect to $x$ continuously in $(t,x)$, etc.;
see e.g. \cite[Theorem V.7.4]{Krylov}.
However, in our case, $f$ is only Lipschitz continuous and thus
the above mentioned result is not applicable.
As a matter of fact, it is not trivial whether or not the problem
\eqref{eq:I03c}, \eqref{eq:I04d} admits any classical or strong solution.
This regularity question was studied by one of the authors in \cite{Kim}.
It is shown in \cite{Kim} that if $K=0$ (in this case we have the fixed strike Asian call option) and if $b(t)$ is a monotone decreasing
Lipschitz continuous function, then the probabilistic solution $u$ defined
in \eqref{eq:I09u} is indeed a classical solution.
We note that $b(t)$ satisfies such an assumption if $d\mu(t)=\rho(t)\,dt$ for some $\rho\in L^\infty([0,T])$ satisfying $\rho(t)\geq \rho_0>0$; i.e. the measure $\mu(t)$ is absolutely continuous with respect to the Lebesgue measure $dt$ and its density function $\rho(t)$ is strictly positive and bounded.
This excludes the cases when $\rho(t)$ is a nonnegative step function that vanishes on some intervals or when $\mu(t)$ is a linear combination of Dirac delta functions, which corresponds to discretely sampled Asian options.
These two cases are important in practice but have been left out in \cite{Kim}.
The goal of this article is, roughly speaking, to show that even in those cases, the probabilistic solution of problem \eqref{eq:I03c}, \eqref{eq:I04d} is still a classical solution.
As a matter of fact, we even give an improvement to the main result of \cite{Kim}.
To be precise, we will assume that the function $b(t)$ has the following properties.
\begin{enumerate}[i)]
\item
$b(t)$ is nonnegative and monotone decreasing on $[0,T]$
\item
$b(t)$ is discontinuous at most finitely many points $t_1<\cdots < t_n$ in $[0,T]$.
\item
$b(T)=0$ and there is an $\varepsilon>0$ such that
\begin{enumerate}
\item
$b(t)=0$ on $[T-\varepsilon, T]$ if $K \neq 0$.
\item
$m_1 \le -b'(t) \le m_2$ a.e. on $[T-\varepsilon, T]$ for some $m_1, m_2 >0$ if $K=0$.
\end{enumerate}
\end{enumerate}
It is clear that condition ii) allows us to treat the discrete sampling case.
We point out that in \cite{Kim} it is assumed that $K=0=b(T)$ and $m_1 \le -b'(t) \le m_2$ a.e. on $[0,T]$, so that $b(t)$ in \cite{Kim} satisfies the above properties.
We also note that condition iii) is technical but generic, in the sense that it can always be realized in practice by perturbing the sampling strategy.
In particular, note that by \eqref{eq:I04x}, we have $b(T)=0$ unless the measure $\mu(t)$ has a point mass at $T$.
We use the notation
\begin{equation*}
\mathbb H_T:= [0,T)\times \mathbb R,\quad
\bar{\mathbb H}_T:= [0,T]\times \mathbb R,\quad
\mathring{\mathbb H}_T:=\mathbb H_T\setminus \left(\bigcup_{i=1}^n \set{t_i}\times \mathbb R\right)
\end{equation*}
in our main result stated below.
\begin{theorem}
\label{thm:I01}
Let $b(t)$ satisfy the conditions i) - iii) above.
Suppose $u$ is the probabilistic solution of the problem \eqref{eq:I01a},
\eqref{eq:I02b};
i.e., $u(t,x)$ is defined by \eqref{eq:I09u}.
Then $u(t,x)$ is continuous in $\bar{\mathbb H}_T$ and satisfies the terminal
condition \eqref{eq:I02b}.
Moreover, $u(t,x)$ is twice differentiable with respect
to $x$ continuously in $\mathbb H_T$, differentiable with respect to $t$ continuously
in $\mathring{\mathbb H}_T$, and satisfies the equation \eqref{eq:I01a} in $\mathring{\mathbb H}_T$.
\end{theorem}
It is clear from \eqref{eq:I01a} that $u_t$ cannot be continuous where $u_{xx}$ is continuous but $b(t)$ is not continuous.
Therefore, if the measure $\mu(t)$ contains a pure point mass (i.e. a discrete sampling), then $u_t$ cannot be continuous in the entire $\mathbb H_T$.
\begin{conclusion}
The case $K=0$ corresponds to the fixed strike Asian call option.
In that case, it is recommended to employ a continuous sampling near the terminal time $T$.
Except for the case $K=0$, it is recommended not to sample near the terminal time to ensure that the solution becomes a classical one.
\end{conclusion}
A couple of further remarks are in order.
\begin{remark}
Suppose $K=b(T) \neq 0$ and let $T'=\inf\set{t\in [0,T]:b(t)=K}$.
It is a matter of computation to verify that the probabilistic solution
$u$ of the problem \eqref{eq:I01a}, \eqref{eq:I02b} in $[T',T]\times \mathbb R$ is
given by $u(t,x)=(x-K)_{+}$.
Of course, $u$ is not twice continuously differentiable with respect to $x$
there.
\end{remark}
\begin{remark}
In Ve\v{c}e\v{r}'s PDE method, the price of Asian option is determined by
$u(0,\cdot)$.
Theorem~\ref{thm:I01} suggests that to minimize numerical error in computing
$u(0,\cdot)$ by finite difference methods, one should include in time grids all the points where $b(t)$ is discontinuous (i.e., discrete sampling points).
\end{remark}
\section{Proof of Theorem~\ref{thm:I01}}
For $(t,x) \in \bar{\mathbb H}_T$, let $X_s=X_s(t,x)$ be the stochastic process
which satisfies \eqref{eq:I08z}.
It is well known that such a process $X_s$ exists; see e.g.,
\cite[Theorem V.1.1]{Krylov}.
The probabilistic solution $u$ of the
problem \eqref{eq:I01a}, \eqref{eq:I02b} is then given by
the formula \eqref{eq:I09u}.
It is then evident that $u$ is continuous in $\bar{\mathbb H}_T$ and satisfies
the terminal condition \eqref{eq:I02b}.
If $f$ in \eqref{eq:I09u} were twice continuously differentiable,
then, as it was mentioned in the introduction,
the theorem would follow from \cite[Theorem V.7.4]{Krylov}.
But this is clearly not the case since $f(y)=(y-K)_{+}$ is merely a Lipschitz
continuous function.
However, it should be pointed out that $u$ also has the representation
\[
u(t,x)=\mathbb E u(X_{\tilde T}(t,x)),\quad \forall \tilde T\in [t,T].
\]
Therefore, if $u(\tilde T,\cdot)$ is twice continuously differentiable for some
$\tilde T\in (0,T)$, then we would have the second part of the theorem with
$T$ replaced by $\tilde T$; i.e., $u(t,x)$ is twice
differentiable with respect to $x$ continuously in $\mathbb H_{\tilde T}$, differentiable
with respect to $t$ continuously in $\mathring{\mathbb H}_{\tilde T}$, and satisfies the equation
\eqref{eq:I01a} in $\mathring{\mathbb H}_{\tilde T}$.
Therefore, in the case when $K=0$, we have $m_1 \le -b'(t) \le m_2$ a.e. on $[T-\varepsilon, T]$ for some $m_1, m_2>0$, and thus by \cite[Theorem~1.10]{Kim}, we are done.
It only remains to consider the case when $K\neq 0$ and $b(t)=0$ on $[T-\varepsilon, T]$.
By the above observation, we have the theorem if we show that
$u\in \mathcal{C}^{1,2}_{t,x;\; loc}([T-\varepsilon,T)\times\mathbb R)$ and satisfies the equation \eqref{eq:I01a}
there.
Therefore, it will be enough for us to prove that the probabilistic solution $u$ of the problem
\begin{align}
\label{eq:P01a}
u_t+\frac{1}{2} \sigma^2 x^2 u_{xx}=0 \quad\text{in}\quad \mathbb H_T,\\
\label{eq:P01b}
u(T,x)=(x-K)_{+}
\end{align}
is a classical solution.
By the Black-Scholes-Merton's options pricing formula, we know its solution is $C^\infty$ in both $x$ and $t$ in $\mathbb H_T$ and thus we are done.
$\blacksquare$
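As an independent numerical sanity check (ours, not part of the proof), the probabilistic solution of \eqref{eq:P01a}, \eqref{eq:P01b} is the Black--Scholes call value with zero interest rate, and a Monte Carlo evaluation of the defining expectation reproduces the closed form; the parameter values below are arbitrary.

```python
import math
import random

def bs_call_zero_rate(x, K, sigma, tau):
    # Closed-form value of E( x*exp(sigma*W_tau - sigma^2*tau/2) - K )_+
    # (Black--Scholes call with zero interest rate, time to maturity tau).
    if tau == 0:
        return max(x - K, 0.0)
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(x / K) + 0.5 * sigma * sigma * tau) / sq
    d2 = d1 - sq
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return x * Phi(d1) - K * Phi(d2)

def mc_price(x, K, sigma, tau, n=200000, seed=0):
    # Monte Carlo estimate of the probabilistic solution u(T-tau, x).
    rng = random.Random(seed)
    drift = -0.5 * sigma * sigma * tau
    total = 0.0
    for _ in range(n):
        w = rng.gauss(0.0, math.sqrt(tau))
        total += max(x * math.exp(sigma * w + drift) - K, 0.0)
    return total / n

bs = bs_call_zero_rate(1.0, 0.9, 0.3, 1.0)
mc = mc_price(1.0, 0.9, 0.3, 1.0)
print(bs, mc)
```

The two values agree to within Monte Carlo error, confirming that the probabilistic solution coincides with the smooth closed-form solution.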
\section{Appendix}
We give a self-contained proof that the probabilistic solution $u(t,x)$ of the problem \eqref{eq:P01a}, \eqref{eq:P01b} is a classical solution without invoking Black-Scholes-Merton's formula.
Let $Y_s=Y_s(x)$ be the process satisfying the stochastic equation
\begin{equation}
\label{eq3.37}
dY_s=\sigma Y_s\,dw_s,\quad Y_0=x.
\end{equation}
It is easy to verify that $Y_s$ is given by
$Y_s= x e^{\sigma w_s-\sigma^2 s/2}$.
Then $u(t,x)$ is given by
\begin{equation}
\label{eq:M02a}
u(t,x)=\mathbb E(Y_{T-t}(x)-K)_{+}=\mathbb E\left(xe^{\sigma w_{T-t}-\sigma^2(T-t)/2}-K\right)_{+}.
\end{equation}
Since $Y_s$ is a martingale and $(y-K)_{+}=(y-K)+(K-y)_{+}$, we also have
\begin{equation}
\label{eq:M02b}
u(t,x)=x-K+\mathbb E\left(K-xe^{\sigma w_{T-t}-\sigma^2(T-t)/2}\right)_{+}.
\end{equation}
The above computations lead us to define
\begin{equation}
\label{eq:M11f}
v(t,x):=
\left\{\begin{array}{l}
\mathbb E\left(xe^{\sigma w_{T-t}-\sigma^2(T-t)/2}-K\right)_{+} \quad\text{if}\quad K>0,\\
\mathbb E\left(K-xe^{\sigma w_{T-t}-\sigma^2(T-t)/2}\right)_{+} \quad\text{if}\quad K<0.
\end{array}\right.
\end{equation}
By \eqref{eq:M02a} and \eqref{eq:M02b}, we find that
$u\in\mathcal{C}^{1,2}_{t,x;\; loc}(\mathbb H_T)$ and satisfies \eqref{eq:P01a} if and only if
$v\in\mathcal{C}^{1,2}_{t,x;\; loc}(\mathbb H_T)$ and satisfies \eqref{eq:P01a}.
Denote
\begin{equation*}
\mathbb H_T^{+}:=[0,T)\times(0,\infty),\quad
\mathbb H_T^{-}:=[0,T)\times(-\infty,0).
\end{equation*}
It is easy to check that $v\equiv 0$ in $\mathbb H_T\setminus \mathbb H_T^{+}$ if $K>0$ and
$v\equiv 0$ in $\mathbb H_T\setminus\mathbb H_T^{-}$ if $K<0$.
Also, by using an approximation argument similar to the one used
in \cite{Kim}, we find that $v$ belongs to $\mathcal{C}^{1+\alpha/2,2+\alpha}_{t,x;\; loc}(\mathbb H_T^{+}\cup \mathbb H_T^{-})$ and satisfies the equation \eqref{eq:P01a} in $\mathbb H_T^{+}\cup \mathbb H_T^{-}$
regardless of the sign of $K$.
Therefore, the proof will be complete if we show
\begin{equation}
\label{eq4.10}
\lim_{x\to 0}\,(|v_x|+|v_{xx}|+|v_t|)(t,x)=0\quad
\text{locally uniformly in }t\in [0,T).
\end{equation}
\begin{lemma}
Let $v$ be defined by \eqref{eq:M11f} and denote
\begin{equation*}
Q:=
\left\{
\begin{array}{l}
{[0,T)\times (0,K)}
\quad\text{if}\quad K>0, \\
{[0,T)\times (K,0)}
\quad\text{if}\quad K<0.
\end{array}\right.
\end{equation*}
Then we have the following estimate for $v(t,x)$ in $Q$:
\begin{equation}
\label{eq:M07g}
0\leq v(t,x)\leq \sqrt{\frac{2}{\pi}} \frac{\sigma K\sqrt{T}}{\ln |K/x|}
\exp\set{-\frac{(\ln|K/x|)^2}{2\sigma^2 T}}.
\end{equation}
\end{lemma}
\begin{proof}
We shall assume that $K>0$ and carry out the proof.
The proof for the case when $K<0$ is parallel and shall be omitted.
First of all, it is clear from \eqref{eq:M11f} that $v\geq 0$.
For any $(t,x)\in Q$, we define the process $Z_s=Z_s(t,x)=(t+s,Y_s(x))$,
where $Y_s=Y_s(x)=xe^{\sigma w_s-\sigma^2 s/2}$ satisfies the stochastic equation
\eqref{eq3.37} above.
Let $\tau=\tau(t,x)$ be the first exit time of $Z_s(t,x)$ from $Q$.
We define $\tilde v(t,x)$ by
\begin{equation*}
\tilde v(t,x)=\mathbb E g(Z_\tau(t,x))=\mathbb E g(t+\tau,Y_\tau(x)),
\end{equation*}
where the values of $g=g(s,y)$ on the parabolic boundary $\partial_p Q$ of
$Q$ are given by
\begin{equation*}
\left\{\begin{array}{l}
g(T,y)=0\quad\text{for}\quad 0<y<K,\\
g(s,0)=0\quad\text{and}\quad g(s,K)=K \quad\text{for}\quad 0\le s\le T.
\end{array}\right.
\end{equation*}
We claim that $v\leq \tilde v$ in $Q$.
To see this, first note that
\begin{equation*}
v(t,x)=\mathbb E v(Z_\tau(t,x))=\mathbb E v(t+\tau,Y_\tau(x)).
\end{equation*}
Thus, it is enough to show that $v\leq g$ on $\partial_p Q$.
It is obvious that $v(T,y)=0$ for any $y\in (0,K)$ and $v(t,0)=0$ for
any $t\in [0,T]$.
Also, since $(y-K)_{+}\leq y$ for any $y\geq 0$ and $e^{\sigma w_s-\sigma^2 s/2}$
is a martingale, we have
\begin{equation*}
v(t,K) \leq \mathbb E\left(Ke^{\sigma w_{T-t}-\sigma^2(T-t)/2}\right)= K,\quad
\forall t\in [0,T].
\end{equation*}
It thus follows that $v\leq \tilde v$ in $Q$.
Therefore, we have
\begin{align*}
v(t,x)
&\leq \tilde v(t,x)=K\mathbb P\{xe^{\sigma w_\tau-\sigma^2\tau/2}=K\}\\
&\le K\mathbb P\set{\sup_{0\le s<T-t}(\sigma w_s-\sigma^2 s/2)\ge \ln (K/x)}\\
&\le K\mathbb P\set{\sup_{0\le s<T}(\sigma w_s-\sigma^2 s/2)\ge \ln (K/x)}\\
&\le K\mathbb P\set{\sup_{0\le s<T}w_s\ge \sigma^{-1}\ln (K/x)}
=2K\mathbb P\set{w_T\ge \sigma^{-1}\ln (K/x)}\\
&\le K\sqrt{\frac{2}{\pi}} \frac{\sigma\sqrt{T}}{\ln (K/x)}
\exp\set{-\frac{(\ln(K/x))^2}{2\sigma^2 T}},
\end{align*}
where, in the last step, we have used an inequality
\begin{align*}
\int_\alpha^\infty e^{-x^2/2}\,dx &\leq
\frac{1}{\alpha}\int_\alpha^\infty x e^{-x^2/2}\,dx
=\alpha^{-1} e^{-\alpha^2/2},\quad \forall\alpha>0.
\end{align*}
The lemma is proved.
\end{proof}
Now, we prove the statement \eqref{eq4.10}.
We shall assume that $K>0$ since the case when $K<0$ can be treated in a similar way.
We extend $v$ to zero on $(T,\infty)\times (0,K)$.
Then, it is easy to see that $v$ belongs to $\mathcal{C}^{1+\alpha/2,2+\alpha}_{t,x;\; loc}([0,\infty)\times (0,K))$ and satisfies both the equation \eqref{eq:P01a} and the estimate \eqref{eq:M07g} in $[0,\infty)\times (0,K)$.
Denote
\begin{equation*}
Q_{r}(t_0,x_0):=(t_0,t_0+1)\times(x_0-r,x_0+r),\quad
\Pi_\rho:=(0,\rho^2)\times (-\rho,\rho).
\end{equation*}
Let $(t_0,x_0)\in [0,T)\times (0,2K/3)$ and set $r=r(x_0):=x_0/2$.
Define
\begin{equation*}
V(t,x)=v(t_0+ t,x_0+rx)
\end{equation*}
It is easy to verify that $V(t,x)$ satisfies the equation
\begin{equation*}
V_t+\frac{1}{2} a(x) V_{xx}=0\quad\text{in}\quad \Pi_1,
\end{equation*}
where $a(x):= \sigma^2 r^{-2}(x_0+rx)^2$ satisfies
\begin{equation*}
\sigma^2 \leq a(x)\le 9\sigma^2 \quad\text{and}\quad |a'(x)|\leq 6\sigma^2, \quad \forall x\in (-1,1).
\end{equation*}
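These ellipticity and Lipschitz bounds on $a$ can be confirmed by a quick numerical check (our check, with arbitrary test values of $\sigma$ and $x_0$):

```python
# a(x) = sigma^2 r^{-2} (x0 + r x)^2 with r = x0/2 satisfies
# sigma^2 <= a(x) <= 9 sigma^2 and |a'(x)| <= 6 sigma^2 on (-1, 1)
sigma, x0 = 0.4, 0.7          # arbitrary test values
r = x0 / 2
a = lambda x: (sigma ** 2) * (x0 + r * x) ** 2 / r ** 2
ap = lambda x: 2 * (sigma ** 2) * (x0 + r * x) / r
xs = [-1 + 2 * k / 1000 for k in range(1001)]
assert all(sigma ** 2 - 1e-9 <= a(x) <= 9 * sigma ** 2 + 1e-9 for x in xs)
assert all(abs(ap(x)) <= 6 * sigma ** 2 + 1e-9 for x in xs)
print("ellipticity bounds verified")
```

The extreme values are attained at the endpoints: $a(-1) = \sigma^2$ and $a(1) = 9\sigma^2$, since $x_0 + rx$ ranges over $(x_0/2,\, 3x_0/2)$ with $r = x_0/2$.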
By the Schauder estimates (see e.g., \cite{Kr2}), there is some $C=C(\sigma)$ such that
\[
|V_x(0,0)|+|V_{xx}(0,0)|+|V_t(0,0)|\leq C \sup_{\Pi_1}|V|,
\]
while by the estimate \eqref{eq:M07g} we have
\[
\sup_{\Pi_1} |V|=\sup_{Q_{r}(t_0,x_0)} |v| \leq
\frac{\sigma K\sqrt{2T}}{\sqrt{\pi}\ln |K/3r|}
\exp\set{-\frac{(\ln|K/3r|)^2}{2\sigma^2 T}}.
\]
Hence, there is some $r_0=r_0(K)$ and $N=N(\sigma, T, K)$ such that
\begin{equation}
\label{eq:P19d}
|V_x(0,0)|+ |V_{xx}(0,0)|+|V_t(0,0)|\leq N e^{-(\ln r)^2/N},
\end{equation}
provided that $r<r_0$.
Note that \eqref{eq:P19d} translates as follows:
There is some $r_0=r_0(K)$ and $N=N(\sigma, T, K)$ such that
if $x_0<r_0$ then
\[
x_0 |v_x(t_0,x_0)|+ x_0^2 |v_{xx}(t_0,x_0)|+|v_t(t_0,x_0)|\leq N e^{-(\ln x_0)^2/N}.
\]
The above estimate obviously implies \eqref{eq4.10}.
$\blacksquare$
\begin{acknowledgment}
The authors thank Jan Ve\v{c}e\v{r} for very helpful discussion.
\end{acknowledgment}
\end{document}
\begin{document}
\begin{abstract}
We give sufficient conditions on the Lebesgue exponents for
compositions of odd numbers of pseudo-differential
operators with symbols in modulation spaces. As a byproduct,
we obtain sufficient conditions for twisted convolutions of odd
numbers of factors to be bounded on Wiener amalgam spaces.
\end{abstract}
\title{Multi-linear products with
odd factors in pseudo-differential
calculus with symbols in modulation spaces}
\section{Introduction}\label{sec0}
\par
In this paper we deduce estimates for multi-linear Weyl products
and other products in pseudo-differential calculus
with odd numbers of factors, with symbols belonging to suitable
modulation spaces. In particular we improve some
of the odd multi-linear product estimates in \cite{CoToWa}. Here we
note that for the corresponding bilinear products, the results in
\cite{CoToWa} are sharp.
\par
Suppose that $N\ge 2$ is an integer. Then
it follows from \cite[Proposition 2.5]{CoToWa} that
\begin{gather}
M^{p_1,q_1}{\text{\footnotesize{$\#$}}} M^{p_2,q_2}{\text{\footnotesize{$\#$}}} M^{p_3,q_3} {\text{\footnotesize{$\#$}}}
\cdots {\text{\footnotesize{$\#$}}} M^{p_N,q_N}
\subseteq
M^{p_0',q_0'}
\label{Eq:NonWeightModEmb}
\intertext{holds true when}
\max \left ( \mathsf R _N(\textstyle{\frac 1{q'}}) ,0\right )
\le
\min \left (
{\textstyle{\frac 1{p_j},\frac 1{p_j'},\frac 1{q_j},\frac 1{q_j'}},\mathsf R _N(\frac 1p) }
\right ).
\label{Eq:LebExpCondProp2.5}
\end{gather}
Here
$$
\mathsf R _N (x) = \frac 1{N-1}\left (
\sum _{j=0}^Nx_j -1
\right ),
\qquad x=(x_0,x_1,\dots ,x_N)\in [0,1]^{N+1},
$$
and we observe that \cite[Proposition 2.5]{CoToWa}
is a weighted version of \eqref{Eq:NonWeightModEmb}
and \eqref{Eq:LebExpCondProp2.5}.
(See \cite{Ho3} and Section \ref{sec1} for notations.) We notice
that the conditions on $\frac 1{p_j'}$ and $\frac 1{q_j}$ in
\eqref{Eq:LebExpCondProp2.5} can be
omitted, because they are always fulfilled when the other conditions
hold (see e.{\,}g. Theorem 0.1$'$ in \cite{CoToWa} and its proof).
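For concreteness, $\mathsf R _N$ and the condition \eqref{Eq:LebExpCondProp2.5} are easy to evaluate. The following minimal Python sketch (illustrative only; the choice $N=5$ with $p_j=q_j=2$ corresponds to the Hilbert--Schmidt case discussed below) uses exact rational arithmetic:

```python
from fractions import Fraction

def R(x):
    """R_N(x) = (x_0 + ... + x_N - 1) / (N - 1) for x in [0,1]^(N+1)."""
    N = len(x) - 1
    return (sum(x) - Fraction(1)) / (N - 1)

# Hilbert-Schmidt case: p_j = q_j = 2 for every j, so each entry of
# 1/p and of 1/q' equals 1/2, and R_N(1/2,...,1/2) = 1/2 for all N >= 2.
N = 5
half = [Fraction(1, 2)] * (N + 1)
assert R(half) == Fraction(1, 2)

# The condition max(R_N(1/q'), 0) <= min(..., R_N(1/p)) then holds
# with equality, consistent with the embedding for M^{2,2} factors.
assert max(R(half), Fraction(0)) <= min(half + [R(half)])
```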
\par
The multi-linear multiplication property \eqref{Eq:NonWeightModEmb}
with \eqref{Eq:LebExpCondProp2.5} is obtained by interpolating the case
that \eqref{Eq:NonWeightModEmb} holds true when
\eqref{Eq:LebExpCondProp2.5} is replaced by
\begin{equation}\tag*{(\ref{Eq:LebExpCondProp2.5})$'$}
\mathsf R _N(\textstyle{\frac 1{q'}}) \le 0 \le \mathsf R _N(\textstyle{\frac 1p}),
\end{equation}
(see Proposition 2.1 in \cite{CoToWa}) with the case
\begin{equation}\tag*{(\ref{Eq:NonWeightModEmb})$'$}
M^{2,2}{\text{\footnotesize{$\#$}}} M^{2,2}{\text{\footnotesize{$\#$}}} M^{2,2}{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} M^{2,2}
\subseteq M^{2,2}.
\end{equation}
Note that the last multiplication property is equivalent to
the fact that compositions of Hilbert--Schmidt operators are
again Hilbert--Schmidt operators.
\par
In this paper we improve \eqref{Eq:NonWeightModEmb}
and \eqref{Eq:LebExpCondProp2.5} for multi-linear products
with an odd number of factors, by replacing \eqref{Eq:NonWeightModEmb}$'$
with the more general
\begin{equation}\tag*{(\ref{Eq:NonWeightModEmb})$''$}
M^{p,p}{\text{\footnotesize{$\#$}}} M^{p',p'}{\text{\footnotesize{$\#$}}} M^{p,p}{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} M^{p,p}
\subseteq M^{p,p}
\end{equation}
in the interpolation with \eqref{Eq:NonWeightModEmb}
and \eqref{Eq:LebExpCondProp2.5}$'$. This shows
that \eqref{Eq:NonWeightModEmb} remains true for odd $N$,
when \eqref{Eq:LebExpCondProp2.5} is replaced by
\begin{equation}\tag*{(\ref{Eq:LebExpCondProp2.5})$''$}
\max \left ( \mathsf R _N(\textstyle{\frac 1{q'}}) ,0\right )
\le
\min \left (
{\textstyle{\mathsf Q _N(\frac 1{p}),\mathsf Q _N(\frac 1{q'}),
\mathsf Q _N(\frac 1{p},\frac 1{q'}),
\mathsf R _N(\frac 1p) }} \right ).
\end{equation}
Here
$$
\mathsf Q _N (x,y) = \min _{j+k\, \text{odd}}(\textstyle{\frac 12}(x_j+y_k),
1-\textstyle{\frac 12}(x_j+y_k))
\quad \text{and}\quad
\mathsf Q _N (x) = \mathsf Q _N (x,x),
$$
when
$$
x=(x_0,x_1,\dots ,x_N)\in [0,1]^{N+1}
\quad \text{and}\quad
y=(y_0,y_1,\dots ,y_N)\in [0,1]^{N+1}.
$$
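The quantity $\mathsf Q _N$ can likewise be computed directly. A minimal Python sketch (illustrative only; the alternating vector below is a hypothetical example, chosen so that each pair $x_j+y_k$ with $j+k$ odd sums to $1$):

```python
from fractions import Fraction

def Q(x, y=None):
    """Q_N(x, y) = min over j+k odd of min(s, 1 - s), s = (x_j + y_k)/2."""
    if y is None:
        y = x                      # Q_N(x) = Q_N(x, x)
    vals = [min(s, 1 - s)
            for j, xj in enumerate(x)
            for k, yk in enumerate(y)
            if (j + k) % 2 == 1
            for s in [(xj + yk) / 2]]
    return min(vals)

# Alternate 1/p and 1/p' = 1 - 1/p: every pair with j + k odd then
# averages to 1/2, which is the largest possible value of Q_N.
p_inv = Fraction(1, 3)
x = [p_inv if j % 2 == 0 else 1 - p_inv for j in range(6)]   # N = 5
assert Q(x) == Fraction(1, 2)
```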
In fact, in Section \ref{sec2} we deduce weighted
versions of \eqref{Eq:NonWeightModEmb}
and \eqref{Eq:LebExpCondProp2.5}$'$ (see Proposition
\ref{Prop:MainPropOdd} and Theorem
\ref{Thm:MainThmOdd} in Section
\ref{sec2}).
\medspace
We observe that \eqref{Eq:NonWeightModEmb} is close
to certain regularization techniques for linear operators in
distribution theory.
In fact, a convenient way of regularizing an operator is to
enclose it between regularizing operators. For example,
suppose that $T$ is a linear and continuous operator
from the Schwartz space $\mathscr S (\rr d)$ to the
set of tempered distributions $\mathscr S '(\rr d)$.
The operator $T$ possesses only weak continuity
properties, in the sense that it is merely guaranteed that
$T$ maps elements from $\mathscr S (\rr d)$ into the
significantly larger space $\mathscr S '(\rr d)$. On the other hand,
by enclosing $T$ with operators $S_1,S_2$
which are regularizing in the sense that they are
mapping $\mathscr S '(\rr d)$ into the smaller space $\mathscr S (\rr d)$,
the resulting operator
$$
T_0=S_1\circ T\circ S_2
$$
is regularizing. Equivalently, by using the kernel theorem of
Schwartz and identifying kernels with operators one has that
\begin{equation}\label{Eq:OpThreeKernels}
\begin{gathered}
(K_1,K_2,K_3) \mapsto K_1\circ K_2\circ K_3,
\\[1ex]
\mathscr S (\rr {2d})\times \mathscr S '(\rr {2d})\times \mathscr S (\rr {2d})
\to
\mathscr S (\rr {2d}),
\end{gathered}
\end{equation}
is sequentially continuous. Here we observe that by omitting one of
the compositions one may only guarantee that
$$
(K_1,K_2)\mapsto K_1\circ K_2
$$
is sequentially continuous from
$\mathscr S (\rr {2d})\times \mathscr S '(\rr {2d})$
or
$\mathscr S '(\rr {2d})\times \mathscr S (\rr {2d})$
to $\mathscr S '(\rr {2d})$, which is a significantly weaker
continuity property.
\par
The continuity of the map \eqref{Eq:OpThreeKernels} can be obtained
by rewriting $K_1\circ K_2\circ K_3$ as
\begin{align*}
K_1\circ K_2\circ K_3(x,y) &= R_{K_2,F}(x,y)\equiv
\scal {K_2}{F(x,y,\, \cdot \, )},
\intertext{where}
F(x,y,x_1,x_2) &= (K_1\otimes K_3)(x,x_1,x_2,y)
=
K_1(x,x_1)K_3(x_2,y).
\end{align*}
The asserted continuity then follows from the facts
that $(K,F)\mapsto R_{K,F}$ is
continuous from $\mathscr S '(\rr {2d})\times \mathscr S (\rr {4d})$
to $\mathscr S (\rr {2d})$, and that $(K_1,K_3)\mapsto K_1\otimes K_3$
is continuous from $\mathscr S (\rr {2d})\times \mathscr S (\rr {2d})$
to $\mathscr S (\rr {4d})$.
\par
The mapping properties can also be conveniently formulated
within the theory of pseudo-differential calculus, e.{\,}g. in the Weyl calculus.
Recall that the Weyl product ${\text{\footnotesize{$\#$}}}$ of the elements
$a_1,a_2\in \mathscr S (\rr {2d})$ is defined by the formula
$$
\operatorname{Op} ^w(a_1{\text{\footnotesize{$\#$}}} a_2) = \operatorname{Op} ^w(a_1)\circ \operatorname{Op} ^w(a_2),
$$
where $\operatorname{Op} ^w(a)$ is the Weyl operator of $a\in \mathscr S (\rr {2d})$,
defined by
$$
\operatorname{Op} ^w(a)f(x) = (2\pi)^{-d}\iint _{\rr {2d}}a({\textstyle{\frac 12}}(x+y),\xi )
f(y)e^{i\scal {x-y}\xi}\, dyd\xi ,
$$
when $f\in \mathscr S (\rr d)$. The definition of $\operatorname{Op} ^w(a)$ extends
to a continuous operator from $\mathscr S '(\rr d)$ to $\mathscr S (\rr d)$.
We may also extend the definition of $\operatorname{Op} ^w(a)$ to allow
$a\in \mathscr S '(\rr {2d})$, and then $\operatorname{Op} ^w(a)$ is continuous from
$\mathscr S (\rr d)$ to $\mathscr S '(\rr d)$.
\par
By straightforward Fourier techniques it follows that
\eqref{Eq:OpThreeKernels} is equivalent to
\begin{align}
\mathscr S (\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S (\rr {2d})
&\subseteq
\mathscr S (\rr {2d}),
\label{Eq:ThreeSymbolsSchwartz}
\intertext{which by duality gives}
\mathscr S '(\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S (\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d})
&\subseteq
\mathscr S '(\rr {2d}).
\label{Eq:ThreeSymbolsSchwartzDual}
\end{align}
Again we observe that for the bilinear case we only have
$$
\mathscr S (\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d})\subseteq
\mathscr S '(\rr {2d}),
$$
and that $\mathscr S '(\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d})$ does not
make any sense.
\par
By similar arguments, \eqref{Eq:ThreeSymbolsSchwartz}
and \eqref{Eq:ThreeSymbolsSchwartzDual} can be extended
to any odd-linear Weyl products. That is, one has
\begin{align}
\mathscr S (\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S (\rr {2d})
{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} \mathscr S (\rr {2d})
&\subseteq
\mathscr S (\rr {2d}),
\tag*{(\ref{Eq:ThreeSymbolsSchwartz})$'$}
\intertext{and}
\mathscr S '(\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S (\rr {2d}){\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d})
{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} \mathscr S '(\rr {2d})
&\subseteq
\mathscr S '(\rr {2d}),
\tag*{(\ref{Eq:ThreeSymbolsSchwartzDual})$'$}
\end{align}
which are analogous to \eqref{Eq:NonWeightModEmb}$''$,
taking into account that $M^{p',p'}$ is the dual of $M^{p,p}$
when $p<\infty$.
\par
\par
\section{Preliminaries}\label{sec1}
\par
In this section we introduce notation and discuss the background on
Gelfand--Shilov spaces,
pseudo-differential operators, the Weyl
product, twisted convolution and modulation spaces.
Most proofs can be found in the literature and are therefore omitted.
\par
Let $0<h,s\in \mathbf R$ be fixed. The space $\mathcal S_{s,h}(\rr d)$
consists of all $f\in C^\infty (\rr d)$ such that
\begin{equation*}
\nm f{\mathcal S_{s,h}}\equiv \sup \frac {|x^\beta \partial ^\alpha
f(x)|}{h^{|\alpha | + |\beta |}\alpha !^s\, \beta !^s}
\end{equation*}
is finite, with supremum taken over all $\alpha ,\beta \in
\mathbf N^d$ and $x\in \rr d$.
\par
The space $\mathcal S_{s,h} \subseteq
\mathscr S$ ($\mathscr S$ denotes the Schwartz space) is a Banach space
which increases with $h$ and $s$.
Inclusions between topological spaces are understood to be continuous.
If $s>1/2$, or $s =1/2$ and $h$ is sufficiently large, then $\mathcal
S_{s,h}$ contains all finite linear combinations of Hermite functions.
Since the space of such linear combinations is dense in $\mathscr S$, it follows
that the topological dual $(\mathcal S_{s,h})'(\rr d)$ of $\mathcal S_{s,h}(\rr d)$ is
a Banach space which contains $\mathscr S'(\rr d)$.
\par
\subsection{Gelfand-Shilov spaces of functions and distributions}
\par
The \emph{Gelfand--Shilov spaces} $\mathcal S_{s}(\rr d)$ and
$\Sigma _{s}(\rr d)$ (cf. \cite{GS}) are the inductive and projective limits, respectively,
of $\mathcal S_{s,h}(\rr d)$, with respect to the parameter $h$. Thus
\begin{equation}\label{GSspacecond1}
\mathcal S_{s}(\rr d) = \bigcup _{h>0}\mathcal S_{s,h}(\rr d)
\quad \text{and}\quad \Sigma _{s}(\rr d) =\bigcap _{h>0}\mathcal S_{s,h}(\rr d),
\end{equation}
where $\mathcal S_{s}(\rr d)$ is equipped with the strongest topology such
that the inclusion map from $\mathcal S_{s,h}(\rr d)$ into $\mathcal S_{s}(\rr d)$
is continuous, for every choice of $h>0$. The space $\Sigma _s(\rr d)$ is a
Fr{\'e}chet space with seminorms
$\nm \, \cdot \, {\mathcal S_{s,h}}$, $h>0$. We have $\Sigma _s(\rr d)\neq \{ 0\}$
if and only if $s>1/2$, and $\mathcal S _s(\rr d)\neq \{ 0\}$
if and only if $s\ge 1/2$. From now on we assume that $s>1/2$ when we
consider $\Sigma _s(\rr d)$, and $s\ge 1/2$ when we consider $\mathcal S
_s(\rr d)$.
\medspace
The \emph{Gelfand--Shilov distribution spaces} $\mathcal S_{s}'(\rr d)$
and $\Sigma _s'(\rr d)$ are the projective and inductive limits,
respectively, of $\mathcal S_{s,h}'(\rr d)$ with respect to the
parameter $h$. This means that
\begin{equation}\tag*{(\ref{GSspacecond1})$'$}
\mathcal S_s'(\rr d) = \bigcap _{h>0}\mathcal S_{s,h}'(\rr d)\quad
\text{and}\quad \Sigma _s'(\rr d) =\bigcup _{h>0} \mathcal S_{s,h}'(\rr d).
\end{equation}
In \cite{GS, Ko, Pil} it is proved that $\mathcal S_s'(\rr d)$
is the topological dual of $\mathcal S_s(\rr d)$, and $\Sigma _s'(\rr d)$
is the topological dual of $\Sigma _s(\rr d)$.
\par
For each $\varepsilon >0$ and $s>1/2$ we have
\begin{equation}\label{GSembeddings}
\begin{alignedat}{3}
\mathcal S _{1/2}(\rr d) & \subseteq &\Sigma _s (\rr d) & \subseteq&
\mathcal S _s(\rr d) & \subseteq \Sigma _{s+\varepsilon}(\rr d)
\\[1ex]
\quad \text{and}\quad
\Sigma _{s+\varepsilon}' (\rr d) & \subseteq & \mathcal S _s'(\rr d)
& \subseteq & \Sigma _s'(\rr d) & \subseteq \mathcal S _{1/2}'(\rr d).
\end{alignedat}
\end{equation}
\par
The Gelfand--Shilov spaces are invariant under several basic operations,
e.{\,}g. translations, dilations, tensor products
and (partial) Fourier transformation.
\par
We normalize the Fourier transform of $f\in L^1(\rr d)$ as
$$
(\mathscr Ff)(\xi )= \widehat f(\xi ) \equiv (2\pi )^{-\frac d2}\int _{\rr
{d}} f(x)e^{-i\scal x\xi }\, dx,
$$
where $\scal \, \cdot \, \, \cdot \, $ denotes the scalar
product on $\rr d$. The map $\mathscr F$ extends
uniquely to homeomorphisms on $\mathscr S'(\rr d)$, $\mathcal S_s'(\rr d)$
and $\Sigma _s'(\rr d)$, and restricts to
homeomorphisms on $\mathscr S(\rr d)$, $\mathcal S_s(\rr d)$ and
$\Sigma _s(\rr d)$, and to a unitary operator on $L^2(\rr d)$.
\par
\subsection{Modulation spaces}
\par
Next we turn to the basic properties of modulation spaces, and start by
recalling the conditions for the involved weight functions. Let
$0<\omega \in L^\infty _{loc}(\rr d)$. Then $\omega$ is called
\emph{moderate} if there is a function $0<v\in L^\infty _{loc}(\rr d)$
such that
\begin{equation}\label{moderate}
\omega (x+y) \lesssim \omega (x)v(y),\quad x,y\in \rr d.
\end{equation}
Then $\omega$ is also called \emph{$v$-moderate}.
Here the notation $f(x) \lesssim g(x)$ means that there exists $C>0$
such that $f(x) \leq C g(x)$ for all arguments $x$ in the domain of $f$
and $g$. If $f \lesssim g$ and $g \lesssim f$ we write $f \asymp g$.
The function $v$ is called \emph{submultiplicative}
if it is even and \eqref{moderate} holds when $\omega =v$. We note that if
\eqref{moderate} holds then
$$
v^{-1}\lesssim \omega \lesssim v.
$$
For such $\omega$ it follows that \eqref{moderate} is true when
$$
v(x) =Ce^{c|x|},
$$
for some positive constants $c$ and $C$ (cf. \cite{Grochenig5}).
In particular, if $\omega$ is moderate on $\rr d$, then
$$
e^{-c|x|}\lesssim \omega (x)\lesssim e^{c|x|},
$$
for some $c>0$.
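For a concrete instance of \eqref{moderate}: the polynomial weight $\omega (x)=\eabs x ^s$ with $s\ge 0$ is $v$-moderate with $v(y)=2^{s/2}\eabs y ^s$, by Peetre's inequality $\eabs{x+y}^s\le 2^{s/2}\eabs x ^s \eabs y ^s$. A minimal Python check in dimension $d=1$ (illustrative only; the value $s=3$ and sample range are arbitrary):

```python
import math
import random

def w(x, s):
    """Polynomial-type weight <x>^s = (1 + |x|^2)^(s/2), here with d = 1."""
    return (1.0 + x * x) ** (s / 2.0)

# Peetre's inequality: 1 + (x+y)^2 <= 2(1 + x^2)(1 + y^2), hence
# w(x+y, s) <= w(x, s) * v(y) with v(y) = 2^(s/2) * w(y, s) for s >= 0.
random.seed(0)
s = 3.0
for _ in range(1000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    assert w(x + y, s) <= 2 ** (s / 2) * w(x, s) * w(y, s) * (1 + 1e-12)
```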
\par
The set of all moderate functions on $\rr d$
is denoted by $\mathscr P _E(\rr d)$.
If $v$ in \eqref{moderate}
can be chosen as $v(x)=\eabs{x}^s=(1+|x|^2)^{s/2}$ for some
$s \ge 0$, then $\omega$ is
said to be of polynomial type or polynomially moderate. We let
$\mathscr P (\rr d)$ be the set
of all polynomially moderate functions on $\rr d$.
\medspace
Let $\phi \in \mathcal S _s (\rr d) \setminus 0$ be
fixed. The \emph{short-time Fourier transform} (STFT) $V_\phi f$ of $f\in
\mathcal S _s ' (\rr d)$ with respect to the \emph{window function} $\phi$ is
the Gelfand--Shilov distribution on $\rr {2d}$ defined by
$$
V_\phi f(x,\xi ) \equiv
\mathscr F (f \, \overline {\phi (\, \cdot \, -x)})(\xi ).
$$
\par
For $a \in \mathcal S _{1/2} '(\rr {2d})$ and
$\Phi \in \mathcal S _{1/2} (\rr {2d}) \setminus 0$ the
\emph{symplectic short-time Fourier transform} $\mathcal V _{\Phi} a$
of $a$ with respect to $\Phi$ is defined similarly as
\begin{equation}\nonumber
\mathcal V _{\Phi} a(X,Y) = \mathscr F _\sigma \big( a\, \overline{ \Phi (\, \cdot \, -X) }
\big) (Y),\quad X,Y \in \rr {2d}.
\end{equation}
We have
\begin{multline}\label{stftcompare}
\mathcal V _{\Phi} a(X,Y) = 2^d V_\Phi a(x,\xi , -2\eta ,2y),
\\[1ex]
X=(x,\xi )\in \rr {2d},\ Y=(y,\eta )\in \rr {2d},
\end{multline}
which shows the close connection between $V_\Phi a$
and $\mathcal V _{\Phi} a$.
The Wigner distribution $W_{f,\phi}$ and $V_\phi f$ are also closely related.
\par
If $f ,\phi \in \mathcal S _s (\rr d)$ and $a,\Phi \in \mathcal S _s (\rr {2d})$ then
\begin{align*}
V_\phi f(x,\xi ) &= (2\pi )^{-\frac d2}\int f(y)\overline {\phi
(y-x)}e^{-i\scal y\xi}\, dy
\intertext{and}
\mathcal V _\Phi a(X,Y ) &= \pi ^{-d}\int a(Z)\overline {\Phi
(Z-X)}e^{2i\sigma (Y,Z)}\, dZ .
\end{align*}
\par
Let $\omega \in \mathscr P _E (\rr {2d})$, $p,q\in [1,\infty ]$
and $\phi \in \mathcal S _{1/2} (\rr d)\setminus 0$ be fixed. The
\emph{modulation space} $M^{p,q}_{(\omega )}(\rr d)$ consists of
all $f\in \mathcal S _{1/2} '(\rr d)$ such that
\begin{align}
\nm f{M^{p,q}_{(\omega )}} &\equiv \Big (\int _{\rr d}\Big (\int _{\rr d}
|V_\phi f(x,\xi )\omega (x,\xi )|^p\, dx\Big )^{q/p}\, d\xi \Big )^{1/q}
\label{modnorm1}
\intertext{is finite, and the Wiener amalgam
space $W^{p,q}_{(\omega )} (\rr d)$ consists of all $f\in
\mathcal S _{1/2} '(\rr d)$ such that}
\nm f {W^{p,q}_{(\omega )}} &\equiv \Big (\int _{\rr d}\Big (\int _{\rr d}
|V_\phi f(x,\xi )\omega (x,\xi )|^q\, d\xi \Big )^{p/q}\, dx \Big )^{1/p}
\label{modnorm2}
\end{align}
is finite (with obvious modifications in \eqref{modnorm1} and
\eqref{modnorm2} when $p=\infty$ or $q=\infty$).
\par
\begin{rem}\label{MoreWeightClasses}
As follows from Proposition \ref{p1.4} (2) below we have that in fact
$M^{p,q}_{(\omega )}(\rr d)$ contains the superspace $\Sigma _1(\rr d)$
of $\mathcal S _{1/2}(\rr d)$, and is contained in the subspace $\Sigma _1'(\rr d)$
of $\mathcal S _{1/2}'(\rr d)$, when $\omega \in \mathscr P _E(\rr {2d})$. Hence we
could from the beginning have assumed that $f \in \Sigma _1'(\rr d)$ in
\eqref{modnorm1} and \eqref{modnorm2}.
\par
On the other hand, in \cite{Toft8}, certain weight classes containing $\mathscr P
_E(\rr {2d})$ and superexponential weights are introduced. For any $s>1/2$,
the corresponding families of modulation spaces are large enough to contain
superspaces of $\mathcal S _s'(\rr d)$ and subspaces of $\mathcal S _s(\rr d)$.
\par
However, we are not dealing with these large families of modulation spaces
because we need (1) and (2) in Proposition \ref{p1.4}, which are not known to be
true for weights of this generality.
\end{rem}
\par
\begin{rem}
The literature contains slightly different conventions
concerning modulation and Wiener amalgam spaces. Sometimes our
definition of a Wiener amalgam space is considered as a particular
case of a general class of modulation spaces (cf.
\cite{Feichtinger1,Feichtinger2,Feichtinger6}). Our definition is
adapted to give the relation \eqref{twistfourmod} that suits our
purpose to transfer continuity for the Weyl product on
modulation spaces to continuity for twisted convolution on Wiener
amalgam spaces.
\end{rem}
\par
On the even-dimensional phase space $\rr {2d}$ we may define
modulation spaces based on the symplectic STFT.
Thus if $\omega \in \mathscr P _E (\rr {4d})$, $p,q\in [1,\infty ]$
and $\Phi \in \mathcal S _{1/2} (\rr {2d})\setminus 0$ are fixed, then
the \emph{symplectic modulation spaces}
$\EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ and Wiener amalgam spaces $\EuScript W ^{p,q}
_{(\omega )}(\rr {2d})$ are obtained by replacing
the STFT $a\mapsto V_\Phi a$ by the corresponding
symplectic version $a\mapsto \mathcal V _\Phi a$ in \eqref{modnorm1} and
\eqref{modnorm2}.
(Sometimes the word \emph{symplectic} before modulation space is
omitted for brevity.)
By \eqref{stftcompare} we have
$$
\EuScript M ^{p,q}_{(\omega )}(\rr {2d}) = M ^{p,q}_{(\omega _0)}(\rr {2d}),
\quad \omega(x,\xi, y, \eta) = \omega_0 (x,\xi, -2 \eta, 2 y).
$$
It follows that all properties which are valid for $M ^{p,q}_{(\omega )}$
carry over to $\EuScript M ^{p,q}_{(\omega )}$.
\par
From
\begin{equation}\label{FourSTFTs}
V_{\widehat \phi}\widehat f (\xi ,-x) =e^{i\scal x\xi}V_{\phi}f(x,\xi )
\end{equation}
it follows that
$$
f\in W^{q,p}_{(\omega )}(\rr d)\quad \Leftrightarrow \quad
\widehat f\in M^{p,q}_{(\omega _0)}(\rr d),\qquad \omega _0(\xi
,-x)=\omega (x,\xi ).
$$
In the symplectic situation these formulas read
\begin{equation}\label{stftsymplfour}
\mathcal V _{\mathscr F _\sigma \Phi}(\mathscr F _\sigma a)(X,Y) =
e^{2i\sigma (Y,X)}\mathcal V _\Phi a(Y,X)
\end{equation}
and
\begin{equation}\label{twistfourmod}
\mathscr F _\sigma \EuScript M ^{p,q}_{(\omega )}(\rr {2d}) = \EuScript W
^{q,p}_{(\omega _0)}(\rr {2d}), \qquad \omega _0(X,Y)=\omega (Y,X).
\end{equation}
\par
For brevity we denote $\EuScript M ^p _{(\omega )}= \EuScript M ^{p,p}_{(\omega
)}$, $\EuScript W ^p_{(\omega )}=\EuScript W ^{p,p}_{(\omega
)}$, and when $\omega \equiv 1$ we write $\EuScript M ^{p,q}=\EuScript M
^{p,q}_{(\omega )}$ and $\EuScript W ^{p,q}=\EuScript W ^{p,q}_{(\omega )}$. We
also let $\EuScript M ^{p,q} _{(\omega )} (\rr {2d})$ be the
completion of $\mathcal S _s(\rr {2d})$ with
respect to the norm $\nm \, \cdot \, {\EuScript M ^{p,q}_{(\omega )}}$.
\par
In the following proposition we list some basic facts on invariance, growth
and duality for modulation spaces. For any $p\in (0,\infty]$, its conjugate
exponent $p'\in [1,\infty ]$ is defined by
$$
p'=
\begin{cases}
\infty , & p\in (0,1],
\\[1ex]
\displaystyle{\frac p{p-1}}, & p\in (1,\infty),
\\[2ex]
1, & p=\infty .
\end{cases}
$$
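The conjugate exponent is straightforward to implement; a minimal Python sketch (illustrative only), which also checks the relation $1/p+1/p'=1$ for $p\in (1,\infty )$:

```python
import math

def conj(p):
    """Conjugate exponent: infinity on (0, 1], p/(p - 1) on (1, infinity),
    and 1 at p = infinity."""
    if p == math.inf:
        return 1.0
    if p <= 1:
        return math.inf
    return p / (p - 1)

assert conj(1) == math.inf and conj(math.inf) == 1.0
assert conj(2) == 2.0                          # p = 2 is self-conjugate
for p in (1.5, 3.0, 10.0):
    assert abs(1 / p + 1 / conj(p) - 1) < 1e-12
```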
Since our main results are formulated in terms of
symplectic modulation spaces, we state the result for them
instead of the modulation spaces $M ^{p,q}_{(\omega )}(\rr {d})$.
\par
\begin{prop}\label{p1.4}
Let $p,q,p_j,q_j\in [1,\infty ]$ for $j=1,2$, and $\omega
,\omega _1,\omega _2,v\in \mathscr P _E (\rr {4d})$ be such that
$v=\check v$, $\omega$ is $v$-moderate and $\omega _2\lesssim
\omega _1$. Then the following is true:
\begin{enumerate}
\item[{\rm{(1)}}] $a\in \EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ if and only if
\eqref{modnorm1} holds for any $\phi \in \EuScript M ^1_{(v)}(\rr {2d})\setminus
0$. Moreover, $\EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ is a
Banach space under the norm
in \eqref{modnorm1} and different choices of $\phi$ give rise to
equivalent norms;
\item[{\rm{(2)}}] if $p_1\le p_2$ and $q_1\le q_2$ then
$$
\Sigma _1 (\rr {2d})\subseteq \EuScript M ^{p,q}_{(\omega )}(\rr
{2d})\subseteq \Sigma _1 '(\rr {2d})
\quad \text{and}\quad
\EuScript M ^{p_1,q_1}_{(\omega _1)}(\rr
{2d})\subseteq \EuScript M ^{p_2,q_2}_{(\omega _2)}(\rr {2d}).
$$
If in addition $v\in \mathscr P (\rr {2d})$, then
$$
\mathscr S (\rr {2d})\subseteq \EuScript M ^{p,q}_{(\omega )}(\rr
{2d})\subseteq \mathscr S '(\rr {2d});
$$
\item[{\rm{(3)}}] the $L^2$ inner product $( \, \cdot \, ,\, \cdot \, )_{L^2}$ on $\mathcal S _{1/2}$
extends uniquely to a continuous sesquilinear form $( \, \cdot \, ,\, \cdot \, )$
on $\EuScript M ^{p,q}_{(\omega )}(\rr {2d})\times \EuScript M ^{p'\! ,q'}_{(1/\omega )}(\rr {2d})$.
On the other hand, if $\nmm a = \sup \abp {(a,b)}$, where the supremum is
taken over all $b\in \mathcal S _{1/2} (\rr {2d})$ such that
$\nm b{\EuScript M ^{p',q'}_{(1/\omega )}}\le 1$, then $\nmm {\, \cdot \,}$ and $\nm
\, \cdot \, {\EuScript M ^{p,q}_{(\omega )}}$ are equivalent norms;
\item[{\rm{(4)}}] if $p,q<\infty$, then $\mathcal S _{1/2} (\rr {2d})$ is dense in
$\EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ and the dual space of $\EuScript M
^{p,q}_{(\omega )}(\rr {2d})$ can be identified
with $\EuScript M ^{p'\! ,q'}_{(1/\omega )}(\rr {2d})$, through the form
$(\, \cdot \, ,\, \cdot \, )$. Moreover, $\mathcal S _{1/2} (\rr {2d})$ is weakly dense
in $\EuScript M ^{p' ,q'}_{(\omega )}(\rr {2d})$ with respect to the form $(\, \cdot \, ,\, \cdot \, )$
provided $(p,q) \neq (\infty,1)$ and $(p,q) \neq (1,\infty)$;
\item[{\rm{(5)}}] if $p,q,r,s,u,v \in [1,\infty]$, $0\le \theta \le 1$,
\begin{equation*}
\frac1p = \frac {1-\theta }{r}+\frac {\theta}{u} \quad
\text{and} \quad \frac 1q = \frac {1-\theta }{s}+\frac {\theta}{v},
\end{equation*}
then complex interpolation gives
\begin{equation*}
(\EuScript M ^{r,s}_{(\omega )},\EuScript M ^{u,v}_{(\omega )})_{[\theta ]}
= \EuScript M ^{p,q}_{(\omega )}.
\end{equation*}
\end{enumerate}
Similar facts hold if the $\EuScript M ^{p,q}_{(\omega )}$ spaces are replaced by
the $\EuScript W ^{p,q}_{(\omega )}$ spaces.
\end{prop}
\par
The proof of Proposition \ref{p1.4}
can be found in \cite {Cordero,Feichtinger1,Feichtinger2,
Feichtinger3, Feichtinger4, Feichtinger5, Grochenig2, Toft2,
Toft4, Toft5, Toft8}.
\par
In fact, (1) follows verbatim from Gr{\"o}chenig's argument in
\cite[Proposition 11.3.2 (c)]{Grochenig2}. Note that the window
class $\EuScript M ^1_{(v)}(\rr {2d})$ in (1) contains $\Sigma _1(\rr {2d})$,
which in turn contains $\mathcal S _{1/2}(\rr {2d})$. Furthermore, if in
addition $v\in \mathscr P (\rr {4d})$,
then $\EuScript M ^1_{(v)}(\rr {2d})$ contains $\mathscr S (\rr {2d})$.
\par
The proof of (2) in \cite[Chapter 12]{Grochenig2} is based on Gabor
frames and formulated for polynomial type weights $\mathscr P
(\rr {4d})$. These arguments also hold for the broader weight class $\mathscr P
_E(\rr {4d})$. Another way to prove this is by means of
\cite[Lemma 11.3.3]{Grochenig2} and Young's inequality.
\par
The assertions (3)--(5) in Proposition \ref{p1.4} can be found, for
more general weights, in Theorem 4.17 and in a combination of
Theorem 3.4 and Proposition 5.2 in \cite{Toft8}.
\par
\begin{rem}
For $p,q\in (0,\infty ]$ (instead of $p,q\in [1,\infty ]$ as in Proposition
\ref{p1.4}), $a\in \EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ if and only if
\eqref{modnorm1} holds for any $\phi \in \Sigma _1(\rr {2d})\setminus
0$. Moreover, $\EuScript M ^{p,q}_{(\omega )}(\rr {2d})$ is a
quasi-Banach space under the quasi-norm
in \eqref{modnorm1} and different choices of $\phi$ give rise to
equivalent quasi-norms. (See e.{\,}g. \cite{GaSa,Toft13}.)
\end{rem}
\par
\begin{rem}\label{remGSmodident}
Let $\mathcal P$ be the set of all
$\omega \in \mathscr P _E(\rr {4d})$ such that
$$
\omega (X,Y ) = e^{c(|X|^{1/s}+|Y|^{1/s})},
$$
for some $c>0$. (Note that this implies that $s\ge 1$.) Then
\begin{alignat*}{2}
\bigcap _{\omega \in \mathcal P}\EuScript M ^{p,q}_{(\omega )}(\rr {2d}) &=
\Sigma _s(\rr {2d}),
&\quad \phantom{\text{and}}\quad
\bigcup _{\omega \in \mathcal P}\EuScript M ^{p,q}_{(1/\omega )}(\rr {2d}) &=
\Sigma _s'(\rr {2d})
\\[1ex]
\bigcup _{\omega \in \mathcal P}\EuScript M ^{p,q}_{(\omega )}(\rr {2d}) &=
\mathcal S _s(\rr {2d}),
&
\bigcap _{\omega \in \mathcal P}\EuScript M ^{p,q}_{(1/\omega )}(\rr {2d}) &=
\mathcal S _s'(\rr {2d}),
\end{alignat*}
and for $\omega \in \mathcal P$
$$
\Sigma _s(\rr {2d})\subseteq \EuScript M ^{p,q}_{(\omega )}(\rr {2d})
\subseteq
\mathcal S _s(\rr {2d}) \quad \text{and}\quad
\mathcal S _s'(\rr {2d}) \subseteq \EuScript M ^{p,q}_{(1/\omega )}(\rr {2d}) \subseteq
\Sigma _s'(\rr {2d}).
$$
(Cf. \cite[Prop.~4.5]{CPRT10}, \cite[Prop.~4]{GZ}, \cite[Cor.~5.2]{Pilipovic1} and
\cite[Thm.~4.1]{Teo2}. See also \cite[Thm.~3.9]{Toft8} for an extension of these
inclusions to broader classes of Gelfand--Shilov and modulation spaces.)
\end{rem}
\medspace
By Proposition \ref{p1.4} (4) we have norm density of $\mathcal S _{1/2}$ in
$\EuScript M ^{p,q}_{(\omega )}$ when $p,q<\infty$. We may relax the assumptions
on $p$, provided we replace the norm convergence with \emph{narrow
convergence}.
This concept, which allows us to approximate elements in $\EuScript M ^{\infty,q}
_{(\omega )}(\rr {2d})$ for $1 \leq q < \infty$,
is treated in \cite{Sjo1,Toft2,Toft4}, and, for
the current setup of possibly exponential weights, in \cite{Toft8}.
(Sj{\"o}strand's original definition in \cite{Sjo1} is somewhat different.)
Narrow convergence is defined by means of the function
$$
H_{a,\omega ,p}(Y) \equiv \| \mathcal V _\Phi a(\, \cdot \, ,Y)\omega (\, \cdot \, ,Y) \|
_{L^p(\rr {2d})}, \quad Y \in \rr {2d},
$$
for $a \in \mathcal S _{1/2}'(\rr {2d})$, $\omega \in \mathscr P _E(\rr {4d})$,
$\Phi \in \mathcal S _{1/2}(\rr {2d}) \setminus 0$ and $p\in [1,\infty]$.
\par
\begin{defn}\label{p2.1}
Let $p,q\in [1,\infty
]$, and $a,a_j\in
\EuScript M ^{p,q}_{(\omega )}(\rr {2d})$, $j=1,2,\dots \ $. Then $a_j$
is said to \emph{converge narrowly} to $a$ with respect to $p,q$, $\Phi \in
\mathcal S _{1/2}(\rr {2d})\setminus 0$ and $\omega \in \mathscr P
_E(\rr {4d})$, if there exist $g_j,g \in L^q(\rr {2d})$ such that:
\begin{enumerate}
\item[{\rm{(1)}}] $a_j\to a$ in $\mathcal S _{1/2}'(\rr {2d})$ as $j\to \infty$;
\par
\item[{\rm{(2)}}] $H_{a_j,\omega ,p} \le g_j$ and $g_j \to g$ in $L^q(\rr
{2d})$ and a.{\,}e. as $j\to \infty$.
\end{enumerate}
\end{defn}
\par
\begin{prop}\label{narrowprop}
If $\omega \in \mathscr P _E(\rr {4d})$ and $1 \le q < \infty$ then the following
is true:
\begin{enumerate}
\item[{\rm{(1)}}] $\mathcal S _{1/2}(\rr {2d})$ is
dense in $\EuScript M ^{\infty,q}_{(\omega )}(\rr {2d})$ with respect to narrow
convergence;
\item[{\rm{(2)}}] $\EuScript M ^{\infty,q}_{(\omega )}(\rr {2d})$ is sequentially
complete with respect to the topology defined by narrow convergence.
\end{enumerate}
\end{prop}
\par
We refer to \cite{CoToWa} for a proof of Proposition
\ref{narrowprop}.
\par
\subsection{Pseudo-differential operators}
\par
Next we recall some basic facts from pseudo-differential calculus
(cf. \cite{Ho3}). Let $\mathbf{M} (d,\Omega)$ be the set of
all $d\times d$ matrices with entries in $\Omega$,
$s\ge 1/2$, $a\in \mathcal S _s
(\rr {2d})$, and $A\in \mathbf{M} (d,\mathbf R)$ be fixed. Then the
pseudo-differential operator $\operatorname{Op} _A(a)$ defined by
\begin{equation}\label{e0.5}
\operatorname{Op} _A(a)f(x) = (2\pi )^{-d}\iint _{\rr {2d}}a(x-A(x-y),\xi )
f(y)e^{i\scal {x-y}\xi}\, dyd\xi
\end{equation}
is a linear and continuous operator on $\mathcal S _s (\rr d)$.
For $a\in \mathcal S _s'(\rr {2d})$ the
pseudo-differential operator $\operatorname{Op} _A(a)$ is defined as the continuous
operator from $\mathcal S _s(\rr d)$ to $\mathcal S _s'(\rr d)$ with
distribution kernel given by
\begin{equation}\label{atkernel}
K_{a,A}(x,y)=(2\pi )^{-\frac d2}(\mathscr F _2^{-1}a)(x-A(x-y),x-y).
\end{equation}
Here $\mathscr F _2F$ is the partial Fourier transform of $F(x,y)\in
\mathcal S _s'(\rr {2d})$ with respect to the variable $y \in \rr d$. This
definition generalizes \eqref{e0.5} and is well defined, since the mappings
\begin{equation}\label{homeoF2tmap}
\mathscr F _2\quad \text{and}\quad F(x,y)\mapsto F(x-A(x-y),y-x)
\end{equation}
are homeomorphisms on $\mathcal S _s'(\rr {2d})$.
The map $a\mapsto K_{a,A}$ is hence a homeomorphism on
$\mathcal S _s'(\rr {2d})$.
\par
If $A=0$, then $\operatorname{Op} _A(a)$ is the standard or Kohn-Nirenberg representation
$a(x,D)$. If instead $A=\frac 12 I$, then $\operatorname{Op} _A(a)$ agrees
with the Weyl operator or Weyl quantization $\operatorname{Op} ^w(a)$.
\par
For any $K\in \mathcal S '_s(\rr {d_1+d_2})$, let $T_K$ be the
linear and continuous mapping from $\mathcal S _s(\rr {d_1})$
to $\mathcal S _s'(\rr {d_2})$ defined by
\begin{equation}\label{pre(A.1)}
(T_Kf,g)_{L^2(\rr {d_2})} = (K,g\otimes \overline f )_{L^2(\rr {d_1+d_2})},
\quad f \in \mathcal S _s(\rr {d_1}), \quad g \in \mathcal S _s(\rr {d_2}).
\end{equation}
It is a well known consequence of the Schwartz kernel
theorem that if $A\in \mathbf{M} (d,\mathbf R)$, then $K\mapsto T_K$ and $a\mapsto
\operatorname{Op} _A(a)$ are bijective mappings from $\mathscr S '(\rr {2d})$
to the space of linear and continuous mappings from $\mathscr S (\rr d)$ to
$\mathscr S '(\rr d)$ (cf. e.{\,}g. \cite{Ho3}).
\par
Likewise the maps $K\mapsto T_K$
and $a\mapsto \operatorname{Op} _A(a)$ are uniquely extendable to bijective
mappings from $\mathcal S _s'(\rr {2d})$ to the set of linear and
continuous mappings from $\mathcal S _s(\rr d)$ to $\mathcal S _s'(\rr d)$.
In fact, the asserted bijectivity for the map $K\mapsto T_K$ follows from
the kernel theorem \cite[Theorem 2.3]{LozPer} (cf. \cite[vol. IV]{GS}).
This kernel theorem corresponds to the Schwartz kernel theorem
in the usual distribution theory.
The other assertion follows from the fact that $a\mapsto K_{a,A}$
is a homeomorphism on $\mathcal S _s'(\rr {2d})$.
\par
In particular, for each $a_1\in \mathcal S _s '(\rr {2d})$ and $A_1,A_2\in
\mathbf{M} (d,\mathbf R)$, there is a unique $a_2\in \mathcal S _s '(\rr {2d})$ such that
$\operatorname{Op} _{A_1}(a_1) = \operatorname{Op} _{A_2} (a_2)$. The relation between $a_1$ and $a_2$
is given by
\begin{equation}\label{calculitransform}
\operatorname{Op} _{A_1}(a_1) = \operatorname{Op} _{A_2}(a_2) \quad \Leftrightarrow \quad
a_2(x,\xi )=e^{i\scal {(A_1-A_2)D_\xi }{D_x}}a_1(x,\xi ).
\end{equation}
(Cf. \cite{Ho3}.) Note that the right-hand side makes sense, since
it means $\widehat a_2(\xi ,x)=e^{i\scal {(A_1-A_2)x}{\xi}}\widehat a_1(\xi ,x)$,
and since the map $a(\xi ,x)\mapsto e^{i\scal {Ax}\xi }a(\xi ,x)$ is continuous on
$\mathcal S _s '(\rr {2d})$.
\par
For future reference we record the relationship
\begin{equation}\label{Eq:STFTLinkKernelSymbol}
\begin{aligned}
|(V_\Phi &K_{a,A})(x,y,\xi ,-\eta )|
\\
&=
|(V_\Psi a)(x-A(x-y),A^*\xi +(I-A^*)\eta ,\xi -\eta ,y-x)|,
\\[1ex]
\Phi (x,y) &= (\mathscr F _2\Psi )(x-A(x-y),x-y)
\end{aligned}
\end{equation}
between symbols and kernels of pseudo-differential operators,
which follows by straightforward applications of the Fourier inversion
formula (see also the proof of Proposition 2.5 in \cite{Toft15}).
We observe that for the Weyl case, \eqref{Eq:STFTLinkKernelSymbol}
takes the convenient form
\begin{equation}\label{Eq:STFTLinkKernelSymbolWeyl}
\begin{aligned}
|(V_\Phi K_{a}^w)(x,y,\xi ,-\eta )|
&=
\left | (\mathcal V _\Psi a)\left ({\textstyle{\frac 12}} (Y+X) ,{\textstyle{\frac 12}}(Y-X) \right ) \right |,
\\[1ex]
\Phi (x,y) &= (\mathscr F _2\Psi )(\textstyle{\frac 12}(x+y),x-y),
\\[1ex]
X&=(x,\xi )\in \rr {2d},\quad Y=(y,\eta )\in \rr {2d},
\end{aligned}
\end{equation}
when using symplectic short-time Fourier transforms. Here
$K_{a}^w$ is the kernel of $\operatorname{Op} ^w(a)$.
\par
Next we discuss symbol products in pseudo-differential calculi,
twisted convolution and related
operations (see \cite{Ho3,Folland}). Let $A\in \mathbf{M} (d,\mathbf R)$,
$s\ge 1/2$ and let $a,b\in
\mathcal S _s '(\rr {2d})$. The pseudo product with respect to
$A$ or the \emph{$A$-pseudo product} $a{\text{\footnotesize{$\#$}}} _{\! A}
b$ between $a$
and $b$ is the function or distribution which satisfies
\begin{align*}
\operatorname{Op} _A(a{\text{\footnotesize{$\#$}}} _{\! A} b) &= \operatorname{Op} _A(a)\circ \operatorname{Op} _A(b),
\intertext{provided the right-hand side
makes sense as a continuous operator from $\mathcal S _s(\rr d)$ to
$\mathcal S _s'(\rr d)$. Since the Weyl case is especially important,
we put $a{\text{\footnotesize{$\#$}}} b =a{\text{\footnotesize{$\#$}}} _{\! A}b$ when $A=\frac 12 I_d$, where
$I_d$ is the unit matrix of order $d$. That is, we have}
\operatorname{Op} ^w(a{\text{\footnotesize{$\#$}}} b) &= \operatorname{Op} ^w(a)\circ \operatorname{Op} ^w(b),
\end{align*}
provided the right-hand side makes sense.
\par
\subsection{The Weyl product and the twisted convolution}
\par
The symplectic Fourier transform ${\mathscr F _\sigma}$ is continuous on
$\mathcal S _s (\rr {2d})$ and extends uniquely to a homeomorphism on
$\mathcal S _s '(\rr {2d})$, and to a unitary map on $L^2(\rr {2d})$, since similar
facts hold for $\mathscr F$. Furthermore $\mathscr F _\sigma^{2}$ is the identity
operator.
\par
Let $s\ge 1/2$ and $a,b\in \mathcal S _s (\rr {2d})$. The \emph{twisted
convolution} of $a$ and $b$ is defined by
\begin{equation}\label{twist1}
(a \ast _\sigma b) (X)
= (2/\pi)^{\frac d2} \int _{\rr {2d}}a(X-Y) b(Y) e^{2 i \sigma(X,Y)}\, dY.
\end{equation}
The definition of $*_\sigma$ extends in different ways. For example
it extends to a continuous multiplication on $L^p(\rr {2d})$ when $p\in
[1,2]$, and to a continuous map from $\mathcal S _s '(\rr {2d})\times
\mathcal S _s (\rr {2d})$ to $\mathcal S _s '(\rr {2d})$. If $a,b \in
\mathcal S _s '(\rr {2d})$, then $a {\text{\footnotesize{$\#$}}} b$ makes sense if and only if $a
*_\sigma \widehat b$ makes sense, and
\begin{equation}\label{tvist1}
a {\text{\footnotesize{$\#$}}} b = (2\pi)^{-\frac d2} a \ast_\sigma (\mathscr F _\sigma {b}).
\end{equation}
For the twisted convolution we have
\begin{equation}\label{weylfourier1}
\mathscr F _\sigma (a *_\sigma b) = (\mathscr F _\sigma a) *_\sigma b =
\check{a} *_\sigma (\mathscr F _\sigma b),
\end{equation}
where $\check{a}(X)=a(-X)$ (cf. \cite{Toft1}). A
combination of \eqref{tvist1} and \eqref{weylfourier1} gives
\begin{equation}\label{weyltwist2}
\mathscr F _\sigma (a{\text{\footnotesize{$\#$}}} b) = (2\pi )^{-\frac d2}(\mathscr F _\sigma
a)*_\sigma (\mathscr F _\sigma b).
\end{equation}
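In detail, \eqref{weyltwist2} is obtained by applying $\mathscr F _\sigma$ to \eqref{tvist1} and then using the first equality in \eqref{weylfourier1}, with $\mathscr F _\sigma b$ in place of $b$:

```latex
\begin{equation*}
\mathscr F _\sigma (a{\text{\footnotesize{$\#$}}} b)
= (2\pi )^{-\frac d2}\mathscr F _\sigma \big ( a *_\sigma (\mathscr F _\sigma b) \big )
= (2\pi )^{-\frac d2}(\mathscr F _\sigma a) *_\sigma (\mathscr F _\sigma b).
\end{equation*}
```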
If $\widetilde a(X) = \overline{a(-X)}$ then
\begin{equation*}
(a_1*_\sigma a_2,b) = (a_1,b*_\sigma \widetilde a_2)=(a_2,\widetilde
a_1*_\sigma b),\quad (a_1*_\sigma a_2)*_\sigma b = a_1*_\sigma
(a_2*_\sigma b),
\end{equation*}
for appropriate $a_1, a_2,b$, and furthermore (cf. \cite{HTW})
\begin{equation}\label{duality0}
(a_1 {\text{\footnotesize{$\#$}}} a_2, b) = (a_2, \overline{a_1} {\text{\footnotesize{$\#$}}} b) = (a_1, b {\text{\footnotesize{$\#$}}}
\overline{a_2}).
\end{equation}
\par
We have the following result for the map $e^{it\scal {AD_\xi}{D_x}}$ in
\eqref{calculitransform} when the domains are modulation spaces. We refer
to \cite[Proposition 1.7]{Toft5} for the proof (see also
\cite[Proposition 6.14]{Toft8}).
\par
\begin{prop}\label{propCalculiTransfMod}
Let $\omega _0\in \mathscr P _E(\rr
{4d})$, $p,q\in [1,\infty ]$, $A_1,A_2\in \mathbf{M} (d,\mathbf R)$, and set
$$
\omega _A(x,\xi ,\eta ,y)= \omega _0(x-Ay,\xi -A^*\eta ,\eta ,y).
$$
The map $e^{i\scal {AD_\xi}{D_x}}$ on $\mathcal S _{1/2}'(\rr {2d})$
restricts to a homeomorphism from $M^{p,q}_{(\omega
_0)}(\rr {2d})$ to $M^{p,q}_{(\omega _A)}(\rr {2d})$.
\par
In particular, if $a_1,a_2\in \mathcal S _{1/2}'(\rr {2d})$ satisfy
\eqref{calculitransform}, then $a_1\in
M^{p,q}_{(\omega _{A_1})}(\rr {2d})$, if and only if $a_2\in
M^{p,q}_{(\omega _{A_2})}(\rr {2d})$.
\end{prop}
\par
(Note that
in the equality of (2) in \cite[Proposition 6.14]{Toft8},
$y$ and $\eta$ should be interchanged in the last two arguments
in $\omega _0$.)
\par
\section{Continuity for the Weyl product on
modulation spaces}\label{sec2}
\par
In this section we deduce results on sufficient conditions for
continuity of the Weyl product on
modulation spaces, and the twisted convolution on Wiener amalgam spaces.
The main results are Theorems
\ref{Thm:MainThmOdd} and \ref{Thm:MainThmOdd2},
concerning the Weyl product and more general products in
pseudo-differential calculus,
and Theorem \ref{Thm:MainThmOddTwistConv}, concerning the twisted
convolution.
\par
When proving Theorem \ref{Thm:MainThmOdd} we first need norm estimates.
Then we prove the uniqueness of the extension, where norm
approximation generally does not suffice, since the test function space may fail
to be dense in several of the domain spaces. The situation is
saved by a comprehensive argument based on narrow convergence.
First we prove the important
special cases Propositions \ref{Prop:Prop1} and \ref{Prop:Prop2}, and then
we deduce Theorem \ref{Thm:MainThmOdd}.
\par
For $N \ge 2$ we let $\mathsf R _N$ be the
function on $[0,1]^{N+1}$, given by
\begin{equation}\label{Eq:HYfunctional}
\begin{aligned}
\mathsf R _N(x) &= ({N-1})^{-1}\left ({\sum _{j=0}^N
x_j-1}\right ),
\\[1ex]
x &= (x_0,x_1,\dots ,x_N)\in [0,1]^{N+1},
\end{aligned}
\end{equation}
and we consider mappings of the form
\begin{align}
(a_1,\dots ,a_N)
&\mapsto
a_1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_N,
\label{Eq:Weylmap}
\intertext{or, more generally, mappings of the form}
(a_1,\dots ,a_N)
&\mapsto
a_1{\text{\footnotesize{$\#$}}} _{\! A}\cdots {\text{\footnotesize{$\#$}}} _{\! A} a_N.
\tag*{(\ref{Eq:Weylmap})$'$}
\end{align}
We observe that
\begin{equation}\label{Eq:HYfunctionalConj}
\mathsf R _N({\textstyle{\frac 1p}}) + \mathsf R _N({\textstyle{\frac 1{p'}}}) = 1.
\end{equation}
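In fact, \eqref{Eq:HYfunctionalConj} is immediate from the definition \eqref{Eq:HYfunctional} and the conjugate exponent relation $\frac 1{p_j}+\frac 1{p_j'}=1$, $j=0,\dots ,N$:

```latex
\begin{multline*}
\mathsf R _N({\textstyle{\frac 1p}}) + \mathsf R _N({\textstyle{\frac 1{p'}}})
= \frac 1{N-1}\left ( \sum _{j=0}^N \frac 1{p_j} -1 \right )
+ \frac 1{N-1}\left ( \sum _{j=0}^N \frac 1{p_j'} -1 \right )
\\[1ex]
= \frac 1{N-1}\left ( \sum _{j=0}^N \left ( \frac 1{p_j}+\frac 1{p_j'} \right ) -2 \right )
= \frac {(N+1)-2}{N-1} = 1.
\end{multline*}
```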
\par
We first show a formula for the STFT
of $a_1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_N$ expressed with
\begin{equation}\label{Fjdef}
F_j(X,Y) = \mathcal V_{\Phi _j}a_j (X+Y,X-Y).
\end{equation}
\par
The following lemma is a restatement of \cite[Lemma 2.3]{CoToWa}.
The proof is therefore omitted.
\par
\begin{lemma}\label{prodlemma}
Let $\Phi _j \in \mathcal S _{1/2}(\rr {2d})$, $j=1,\dots ,N$, $a_k \in
\mathcal S _{1/2}'(\rr {2d})$ for some $1 \le k \le N$, and $a_j \in
\mathcal S _{1/2}(\rr {2d})$ for $j \in \{1,\dots ,N\} \setminus k$.
Suppose
$$
\Phi _0 = \pi ^{(N-1)d}\Phi _1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} \Phi _N\quad
\text{and}\quad
a_0 = a_1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_N.
$$
If $F_j$ are given by \eqref{Fjdef} then
\begin{multline}\label{STFTintegral}
F_0(X_N,X_0)
\\[1ex]
=\idotsint _{\rr {2(N-1)d}}e^{2 i Q(X_0,\dots ,X_N)}
\prod _{j=1}^NF_j(X_j,X_{j-1}) \, dX_1
\cdots dX_{N-1}
\end{multline}
with
$$
Q(X_0,\dots,X_N)=\sum_{j=1}^{N-1}\sigma(X_j-X_0,X_{j+1}-X_0).
$$
\end{lemma}
\par
Next we use the previous lemma to find sufficient conditions
for the extension of \eqref{Eq:Weylmap} to modulation spaces.
The integral representation of $V_{\Phi _0}a_0$ in the previous
lemma leads to the weight condition
\begin{multline}\label{Eq:WeightCond}
1 \lesssim \omega _0(X_N+X_0,X_N-X_0)\prod _{j=1}^N
\omega _j(X_j+X_{j-1},X_j-X_{j-1}),
\\[1ex]
X_0,X_1,\dots ,X_N\in \rr {2d}.
\end{multline}
\par
The following result
is a restatement of \cite[Proposition 2.2]{CoToWa}. The proof is
therefore omitted.
\par
\begin{prop}\label{Prop:Prop1}
Let $p_j,q_j\in [1,\infty ]$, $j=0,1,\dots , N$, and suppose
$$
\mathsf R _N({\textstyle{\frac 1{q'}}})\le 0\le \mathsf R _N({\textstyle{\frac 1p}}).
$$
Let $\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, and suppose
\eqref{Eq:WeightCond} holds. Then the map \eqref{Eq:Weylmap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and
associative map from $\EuScript M ^{p_1,q_1}_{(\omega _1)}(\rr {2d})
\times \cdots \times
\EuScript M ^{p_N,q_N}_{(\omega _N)}(\rr {2d})$ to $\EuScript M ^{p_0',q_0'}
_{(1/\omega _0)}(\rr {2d})$.
\end{prop}
\par
The associativity means that for any
product \eqref{Eq:Weylmap}, where the factors $a_j$ satisfy
the hypotheses, the subproduct
$$
a_{k_1}{\text{\footnotesize{$\#$}}} a_{k_1+1} {\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_{k_2}
$$
is well defined as a distribution for any $1\le k_1 \le k_2\le N$, and
$$
a_1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_N = (a_1{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_k)
{\text{\footnotesize{$\#$}}} (a_{k+1}{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} a_N),
$$
for any $1\le k\le N-1$.
\par
For appropriate weights $\omega$ the space
$\EuScript M ^2_{(\omega )}(\rr {2d})$ consists of symbols of Hilbert--Schmidt
operators acting between certain modulation spaces (cf. \cite{Toft5,Toft9}).
\par
The next result is an extension of this fact.
\par
\begin{prop}\label{Prop:Prop2}
Let $N\ge 3$ be odd,
$p,p_j\in (0,\infty ]$, $j=1,\dots , N$,
and let
$\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, and
suppose \eqref{Eq:WeightCond} holds.
Then the following is true:
\begin{enumerate}
\item[{\rm{(1)}}] if $p_0=p_N=p$, $p_j=\max (1,p)$ when $j\in [3,N-2]$ is odd
and $p_j=p'$ when $j$ is even, then the map \eqref{Eq:Weylmap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p} _{(1/\omega _0)}(\rr {2d})$;
\par
\item[{\rm{(2)}}] if $p_j=p$ when $j$ is even and
$p_j=p'$ when $j$ is odd,
then the map \eqref{Eq:Weylmap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p'} _{(1/\omega _0)}(\rr {2d})$.
\end{enumerate}
\end{prop}
\par
Proposition \ref{Prop:Prop2} follows by combining
\eqref{Eq:STFTLinkKernelSymbolWeyl} with the following
result for kernel operators. The details are left for the reader.
\par
\begin{prop}\label{Prop:Prop2Kernels}
Let $N\ge 3$ be odd,
$p,p_j\in (0,\infty ]$, $j=1,\dots , N$,
and let $\omega _j \in \mathscr P _E(\rr {4d})$,
$j=0,1,\dots ,N$, be such that
\begin{equation}\label{Eq:KernelAlgWeightCond}
\inf _{x_j,\xi _j\in \rr d}
\left (
\omega _0(x_0,x_N,\xi _0,-\xi _N) \prod _{j=1}^N
\omega _j(x_{j-1},x_j,\xi _{j-1},-\xi _j)
\right ) >0.
\end{equation}
Then the following is true:
\begin{enumerate}
\item[{\rm{(1)}}] if $p_0=p_N=p$, $p_j=\max (1,p)$ when $j\in [3,N-2]$ is odd
and $p_j=p'$ when $j$ is even, then
\begin{equation}\label{Eq:MultLinKernelMap}
(K_1,K_2,\dots ,K_N) \mapsto K_1\circ K_2\circ \cdots \circ K_N
\end{equation}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$M ^{p} _{(1/\omega _0)}(\rr {2d})$, and
\begin{equation}\label{Eq:MultLinKernelMapEst}
\begin{aligned}
\nm {K_1\circ K_2\circ \cdots \circ K_N}{M^p_{(1/\omega _0)}}
&\lesssim
\prod _{j=1}^N \nm {K_j}{M^{p_j}_{(\omega _j)}},
\\[1ex]
K_j &\in M^{p_j}_{(\omega _j)}(\rr {2d}),\quad j=1,\dots ,N;
\end{aligned}
\end{equation}
\par
\item[{\rm{(2)}}] if $p_j=p$ when $j$ is even and
$p_j=p'$ when $j$ is odd,
then the map \eqref{Eq:MultLinKernelMap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$M ^{p'} _{(1/\omega _0)}(\rr {2d})$.
\end{enumerate}
\end{prop}
\par
\begin{proof}
We observe that \eqref{Eq:KernelAlgWeightCond} is the same
as
$$
\inf _{x_j,\xi _j\in \rr d}
\left (
\omega _0(x_0,x_N,\xi _0,\xi _N) \prod _{j=1}^N
\omega _j(x_{j-1},x_j,(-1)^{j-1}\xi _{j-1},(-1)^{j-1}\xi _j)
\right ) >0.
$$
First suppose that $K_j\in \mathcal S _{1/2}(\rr {2d})$ for every
$j$.
Let $p_0=\max (p,1)$,
\begin{align*}
\widetilde K_j
&=
\begin{cases}
K_j,& j\ \text{odd}
\\[1ex]
\overline{K_j},& j\ \text{even}
\end{cases}
\\[1ex]
\widetilde \omega _j(x,y,\xi ,\eta )
&=
\begin{cases}
\omega _j(x,y,\xi ,\eta ),& j\ \text{odd}
\\[1ex]
\omega _j(x,y,-\xi ,-\eta ),& j\ \text{even}
\end{cases}
\\[1ex]
y &= (x_2,x_3,\dots ,x_{N-2}),
\qquad
\eta = (\xi _2,\xi _3,\dots ,\xi _{N-2}),
\\[1ex]
G(x_0,x_N,x_1,x_{N-1}) &= \widetilde K_1(x_0,x_1)
\widetilde K_N(x_{N-1},x_N),
\\[1ex]
H_1 (x_1,x_{N-1},y)
&=
\prod _{j=1}^{(N-1)/2}\widetilde K_{2j}(x_{2j-1},x_{2j}),
\\[1ex]
H_2 (y) &=
\prod _{j=1}^{(N-3)/2}\widetilde K_{2j+1}(x_{2j},x_{2j+1}),
\\[1ex]
H(x_0,x_N) &= (H_2,H_1(x_1,x_{N-1},\, \cdot \, ))_{L^2},
\end{align*}
\begin{align*}
\vartheta _0(x_0,x_N,x_1,x_{N-1},&\xi _0,\xi _N,\xi _1,\xi _{N-1})
\\
&=
\omega _1(x_0,x_1,\xi _0,\xi _1)\omega _N(x_{N-1},x_N,\xi _{N-1},\xi _N),
\\[1ex]
\vartheta _1(x_1,x_{N-1},y,\xi _1,\xi _{N-1},\eta )
&=
\prod _{j=1}^{(N-1)/2}\widetilde \omega _{2j}(x_{2j-1},x_{2j},\xi _{2j-1},\xi _{2j}),
\intertext{and}
\vartheta _2(y,\eta )
&=
\prod _{j=1}^{(N-3)/2}\widetilde \omega _{2j+1}(x_{2j},x_{2j+1},\xi _{2j},\xi _{2j+1}).
\end{align*}
Then it follows by straight-forward computations that
\begin{equation}\label{Eq:CompKernelRef}
(K_1\circ \cdots \circ K_N)(x_0,x_N) = (G(x_0,x_N,\, \cdot \, ),\overline H)_{L^2},
\end{equation}
and that the right-hand side makes sense as an element in $\mathcal S _{1/2}(\rr {2d})$
when $K_j\in \mathcal S _{1/2}'(\rr {2d})$ for even $j$. In the same way,
the right-hand side of \eqref{Eq:CompKernelRef}
makes sense as an element in $\mathcal S _{1/2}'(\rr {2d})$
when $K_j\in \mathcal S _{1/2}'(\rr {2d})$ for odd $j$. Hence the map
\eqref{Eq:MultLinKernelMap} extends to continuous mappings from
$$
\mathcal S _{1/2}(\rr {2d})\times \mathcal S _{1/2}'(\rr {2d}) \times \mathcal S _{1/2}(\rr {2d})
\times \cdots \times \mathcal S _{1/2}(\rr {2d})
$$
to $\mathcal S _{1/2}(\rr {2d})$, and from
$$
\mathcal S _{1/2}'(\rr {2d})\times \mathcal S _{1/2}(\rr {2d}) \times \mathcal S _{1/2}'(\rr {2d})
\times \cdots \times \mathcal S _{1/2}'(\rr {2d})
$$
to $\mathcal S _{1/2}'(\rr {2d})$. The uniqueness of these extensions follows
by approximating those $K_j$ which belong to
$\mathcal S _{1/2}'(\rr {2d})$ by sequences
of elements in $\mathcal S _{1/2}(\rr {2d})$ which converge to those $K_j$ in
$\mathcal S _{1/2}'(\rr {2d})$.
\par
We have that $\mathcal S _{1/2}(\rr {2d})$ is dense in $M^p_{(\omega _j)}(\rr {2d})
\subseteq \mathcal S _{1/2}'(\rr {2d})$ when
$p<\infty$, and that $p'=1<\infty$ when $p=\infty$
and $p\le 1<\infty$ when $p'=\infty$. Hence it follows from the
uniqueness properties above that it suffices to prove that
\eqref{Eq:MultLinKernelMapEst} holds when $K_j\in \mathcal S _{1/2}(\rr {2d})$
for every $j$.
\par
We have
\begin{align}
\nm {\widetilde K_j}{M^{p_j}_{(\widetilde \omega _j)}}
&=
\nm {K_j}{M^{p_j}_{(\omega _j)}},
\\[1ex]
\nm G{M^p_{(\vartheta _0)}}
&\lesssim
\nm {K_1}{M^p_{(\omega _1)}}\nm {K_N}{M^p_{(\omega _N)}},
\label{Eq:KernelGNormEst}
\\[1ex]
\nm {H_1}{M^{p'}_{(\vartheta _1)}(\rr {(N-1)d})}
&=
\prod _{j=1}^{(N-1)/2}
\nm {\widetilde K_{2j}}{M^{p'}_{(\widetilde \omega _{2j})}(\rr {2d})},
\notag
\\[1ex]
\nm {H_2}{M^{p'}_{(\vartheta _2)}(\rr {(N-3)d})}
&=
\prod _{j=1}^{(N-3)/2}
\nm {\widetilde K_{2j+1}}{M^{p'}_{(\widetilde \omega _{2j+1})}(\rr {2d})},
\notag
\intertext{which implies}
\nm {\overline H}{M^{p'}_{(\vartheta )}}
&\lesssim
\prod _{j=2}^{N-1}\nm {\widetilde K_j}{M^{p_j}_{(\widetilde \omega _j)}}
=
\prod _{j=2}^{N-1}\nm {K_j}{M^{p_j}_{(\omega _j)}}.
\label{Eq:ModNormConjHEst}
\end{align}
A combination of \eqref{Eq:CompKernelRef},
\eqref{Eq:KernelGNormEst} and \eqref{Eq:ModNormConjHEst}
gives \eqref{Eq:MultLinKernelMapEst}, and (1) follows.
\par
The assertion (2) follows by similar arguments and is left for the reader.
\end{proof}
\par
The following result now follows by interpolation between
Propositions \ref{Prop:Prop1} and \ref{Prop:Prop2}.
Here and in what follows we let
$$
I_N=[0,N]\cap \mathbf Z,
\quad \text{and}\quad
\Omega _N =\sets {(j,k)\in I_N^2}{j+k\in 2\mathbf Z +1},
$$
and
\begin{equation}\label{Eq:sfQfunctional1}
\begin{aligned}
\mathsf Q _{0,N}(x,y)
&=
\min _{j+k\in 2\mathbf Z+1}
\left ( \frac {x_j+y_k}2 \right ),
\\[1ex]
\mathsf Q _N(x,y)
&=
\min _{j+k\in 2\mathbf Z+1}
\left ( \frac {x_j+y_k}2,1-\frac {x_j+y_k}2 \right ),
\\[1ex]
\mathsf Q _{0,N}(x) &= \mathsf Q _{0,N}(x,x),
\qquad
\mathsf Q _N(x) = \mathsf Q _N(x,x),
\\[1ex]
x
&=
(x_0,x_1,\dots ,x_N)\in [0,1]^{N+1},
\\[1ex]
y
&=
(y_0,y_1,\dots ,y_N)\in [0,1]^{N+1}.
\end{aligned}
\end{equation}
\par
\begin{prop}\label{Prop:MainPropOdd}
Let $N\ge 3$ be odd, $\mathsf R _N$ be as in \eqref{Eq:HYfunctional},
$\mathsf Q _N$ be as in \eqref{Eq:sfQfunctional1}, and let
$p_j,q_j\in [1,\infty ]$, $j=0,1,\dots , N$, be such that
\begin{equation}\label{Eq:pqConditionsA}
\max \left ( \mathsf R _N({\textstyle{\frac 1{q'}}}) ,0 \right )
\le \min
\left ( \mathsf Q _N({\textstyle{\frac 1p}}), \mathsf Q _N({\textstyle{\frac 1{q'}}}),
\mathsf Q _N({\textstyle{\frac 1p}},{\textstyle{\frac 1q}}),
\mathsf R _N({\textstyle{\frac 1p}})\right ).
\end{equation}
Also let $\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, and
suppose \eqref{Eq:WeightCond} holds.
Then the map \eqref{Eq:Weylmap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1,q_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N,q_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p_0',q_0'} _{(1/\omega _0)}(\rr {2d})$.
\end{prop}
\par
We observe that $\mathsf Q _N({\textstyle{\frac 1{q'}}})
=\mathsf Q _N({\textstyle{\frac 1q}})$ when $q$ is as in the
previous proposition.
\par
\begin{proof}
Evidently, the result holds true when $\mathsf R _N({\textstyle{\frac 1{q'}}}) \le 0$, in view
of Proposition \ref{Prop:Prop1}. We need to prove the result when
$\mathsf R _N({\textstyle{\frac 1{q'}}}) \ge 0$.
\par
We use the same notation as in Lemma \ref{Lemma:Athm0.3}
and its proof. By Propositions \ref{Prop:Prop1} and \ref{Prop:Prop2}
we have
\begin{align}
\EuScript M ^{r_1,s_1}_{(\omega _1)}
\times \cdots \times
\EuScript M ^{r_N,s_N}_{(\omega _N)} &\hookrightarrow \EuScript M ^{r_0',s_0'}
_{(1/\omega _0)}
\label{Eq:WeylProdExprs}
\intertext{and}
\EuScript M ^{u_1,u_1}_{(\omega _1)}
\times \cdots \times
\EuScript M ^{u_N,u_N}_{(\omega _N)} &\hookrightarrow \EuScript M ^{u_0',u_0'}
\label{Eq:WeylProdExpu}
\end{align}
when $r_j,s_j,u_j\in [1,\infty ]$, $j\in I_N$, satisfy
\begin{align}
\sum _{j=0}^N\frac 1{s_j'} &\le 1\le \sum _{j=0}^N\frac 1{r_j},
\label{Eq:reCond}
\intertext{and}
u_j &=
\begin{cases}
v', & j\in 2\mathbf Z,
\\[1ex]
v, & j\in 2\mathbf Z+1,
\end{cases}
\label{Eq:uTovCond}
\end{align}
for some $v\in [1,\infty ]$. Hence, by
combining Proposition \ref{p1.4} (5)
with multi-linear interpolation in \cite[Chapter 4]{BeLo},
we get
$$
\EuScript M ^{p_1,q_1}_{(\omega _1)}
\times \cdots \times
\EuScript M ^{p_N,q_N}_{(\omega _N)} \hookrightarrow \EuScript M ^{p_0',q_0'}
_{(1/\omega _0)}
$$
when
\begin{alignat}{3}
\frac 1{p_j} &= \frac {1-\theta}{r_j}+\frac \theta{v'}, &
\qquad
\frac 1{q_j} &= \frac {1-\theta}{s_j}+\frac \theta{v'}, &
\qquad j&\in 2\mathbf Z
\label{Eq:InterpolCond1}
\intertext{and}
\frac 1{p_k} &= \frac {1-\theta}{r_k}+\frac \theta{v}, &
\qquad
\frac 1{q_k} &= \frac {1-\theta}{s_k}+\frac \theta{v}, &
\qquad k&\in 2\mathbf Z+1.
\label{Eq:InterpolCond2}
\end{alignat}
This gives
\begin{multline*}
\sum _{j=0}^N \frac 1{p_j} =
(1-\theta )\sum _{j=0}^N \frac 1{r_j}
+ \theta \cdot \frac {N+1}2
\left (
\frac 1v+\frac 1{v'}
\right )
\\[1ex]
=
(1-\theta )\sum _{j=0}^N \frac 1{r_j}
+ \theta \cdot \frac {N+1}2 \ge 1+\theta \cdot \frac {N-1}2
\end{multline*}
and
\begin{multline*}
\sum _{j=0}^N \frac 1{q_j'} =
(1-\theta )\sum _{j=0}^N \frac 1{s_j'}
+ \theta \cdot \frac {N+1}2
\left (
\frac 1v+\frac 1{v'}
\right )
\\[1ex]
=
(1-\theta )\sum _{j=0}^N \frac 1{s_j'}
+ \theta \cdot \frac {N+1}2
\le 1+\theta \cdot \frac {N-1}2 ,
\end{multline*}
which implies
\begin{equation}\label{Eq:pqConditionsAStep1}
\mathsf R _N({\textstyle{\frac 1{q'}}})\le \frac \theta 2
\le \mathsf R _N({\textstyle{\frac 1p}}).
\end{equation}
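The estimate \eqref{Eq:pqConditionsAStep1} follows by inserting the two preceding sum estimates into the definition \eqref{Eq:HYfunctional} of $\mathsf R _N$:

```latex
\begin{align*}
\mathsf R _N({\textstyle{\frac 1p}})
&= \frac 1{N-1}\left ( \sum _{j=0}^N \frac 1{p_j} -1 \right )
\ge \frac 1{N-1}\cdot \theta \cdot \frac {N-1}2 = \frac \theta 2,
\\[1ex]
\mathsf R _N({\textstyle{\frac 1{q'}}})
&= \frac 1{N-1}\left ( \sum _{j=0}^N \frac 1{q_j'} -1 \right )
\le \frac 1{N-1}\cdot \theta \cdot \frac {N-1}2 = \frac \theta 2.
\end{align*}
```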
In particular we have $\mathsf R _N({\textstyle{\frac 1{q'}}}) \le \frac \theta 2$.
\par
By \eqref{Eq:InterpolCond1} and \eqref{Eq:InterpolCond2} we also get
$$
\frac 1{p_j}+\frac 1{p_k}
= \frac {1-\theta}{r_j}+\frac {1-\theta}{r_k}+\frac \theta{v}+\frac \theta{v'}
\ge
\frac \theta{v}+\frac \theta{v'}=\theta ,
$$
when $j+k$ is odd. That is,
\begin{equation}\label{Eq:IneqLebExp1}
\frac 12\left ( \frac 1{p_j}+\frac 1{p_k} \right ) \ge \frac \theta 2
\ge \mathsf R _N({\textstyle{\frac 1{q'}}}).
\end{equation}
In the same way we get
$$
\frac 12\left ( \frac 1{q_j}+\frac 1{q_k} \right ) \ge \frac \theta 2
\ge \mathsf R _N({\textstyle{\frac 1{q'}}})
\quad \text{and}\quad
\frac 12\left ( \frac 1{p_j}+\frac 1{q_k} \right ) \ge \frac \theta 2
\ge \mathsf R _N({\textstyle{\frac 1{q'}}})
$$
when $j+k$ is odd. We also have
$$
\frac 1{p_j'}+\frac 1{p_k'}
= \frac {1-\theta}{r_j'}+\frac {1-\theta}{r_k'}+\frac \theta{v'}+\frac \theta{v}
\ge
\frac \theta{v'}+\frac \theta{v}=\theta ,
$$
when $j+k$ is odd, and it follows that \eqref{Eq:IneqLebExp1}
and its two following inequalities hold true with $p_j'$, $p_k'$,
$q_j'$ and $q_k'$ in place of $p_j$, $p_k$,
$q_j$ and $q_k$, respectively, at each occurrence. By combining
these inequalities we get \eqref{Eq:pqConditionsA}.
\par
In order to verify the interpolation completely, we need to prove
that if $p,q\in [1,\infty ]^{N+1}$ satisfy \eqref{Eq:pqConditionsA},
then there are $r,s,u\in [1,\infty ]^{N+1}$, $v\in [1,\infty ]$
and $\theta \in [0,1]$ such that
\eqref{Eq:WeylProdExprs}--\eqref{Eq:InterpolCond2} hold.
As remarked above, the result holds true if
$\mathsf R _N({\textstyle{\frac 1{q'}}})\le 0$. Therefore assume that
$\mathsf R _N({\textstyle{\frac 1{q'}}})>0$. By \eqref{Eq:pqConditionsA}
it follows that $\mathsf R _N({\textstyle{\frac 1{q'}}})\le \frac 12$. Choose
$\theta \in (0,1]$ such that $\mathsf R _N({\textstyle{\frac 1{q'}}})=\frac \theta 2$.
By reasons of symmetry we may assume that
$$
\frac 1{p_0}=\min _{j\in I_N}\left ( \frac 1{p_j},\frac 1{q_j} \right ),
$$
and we shall consider the two cases when $\frac 1{p_0}\ge \frac \theta 2$
and when $\frac 1{p_0}< \frac \theta 2$, separately.
\par
First suppose that $\frac 1{p_0}\ge \frac \theta 2$. Then
$$
\min \left ( \frac 1{p},\frac 1{p'},\frac 1{q},\frac 1{q'},
\mathsf R _N({\textstyle{\frac 1p}})\right )
\ge \mathsf R _N({\textstyle{\frac 1{q'}}}),
$$
and the result follows from \cite[Proposition 2.5]{CoToWa}.
\par
Therefore assume that $\frac 1{p_0}< \frac \theta 2$ and let $v>2$ be
chosen such that
$$
\frac 1{p_0}=\frac \theta v.
$$
Then
\begin{equation}\label{Eq:LebExpEstEven}
\frac 1{p_j},\frac 1{q_j} \ge \frac 1{p_0} = \frac \theta v,
\qquad j\in 2\mathbf Z
\end{equation}
and since
$$
\mathsf Q _N({\textstyle{\frac 1p}}),\ \mathsf Q _N({\textstyle{\frac 1q}}),\
\mathsf Q _N({\textstyle{\frac 1p}},{\textstyle{\frac 1q}})
\ge
\mathsf R _N({\textstyle{\frac 1{q'}}}) = \frac \theta 2,
$$
we get
$$
\frac \theta v+\frac 1{p_k} = \frac 1{p_0}+\frac 1{p_k} \ge \theta
$$
and
$$
\frac \theta v+\frac 1{q_k} = \frac 1{p_0}+\frac 1{q_k} \ge \theta
$$
when $k$ is odd. This implies that
\begin{equation}\label{Eq:LebExpEstOdd}
\frac 1{p_k},\frac 1{q_k} \ge \frac \theta {v'},\qquad k\in 2\mathbf Z+1.
\end{equation}
\par
By \eqref{Eq:LebExpEstEven} and \eqref{Eq:LebExpEstOdd},
there are $r,s\in [1,\infty ]^{N+1}$ such that
\eqref{Eq:InterpolCond1} and \eqref{Eq:InterpolCond2}
hold. We have
\begin{multline*}
(1-\theta ) \mathsf R _N({\textstyle{\frac 1r}}) = \mathsf R _N({\textstyle{\frac 1p}}) -\theta \mathsf R _N({\textstyle{\frac 1{v},\frac 1{v'},\dots ,\frac 1{v},\frac 1{v'}}})
\\[1ex]
\ge
\mathsf R _N({\textstyle{\frac 1{q'}}}) -\theta \left (
\frac 1{N-1}\left (
\frac {N+1}2 \cdot \left (
\frac 1{v'}+\frac 1v
\right )
-1
\right )
\right )
\\[1ex]
=
\frac \theta 2 -\theta \left (
\frac 1{N-1}\left (
\frac {N+1}2 \cdot \left (
\frac 1{v'}+\frac 1v
\right )
-1
\right )
\right )
=0
\end{multline*}
and
\begin{multline*}
(1-\theta ) \mathsf R _N({\textstyle{\frac 1{s'}}}) = \mathsf R _N({\textstyle{\frac 1{q'}}}) -\theta \mathsf R _N({\textstyle{\frac 1{v'},\frac 1{v},\dots ,\frac 1{v'},\frac 1{v}}})
\\[1ex]
=
\frac \theta 2 -\theta \left (
\frac 1{N-1}\left (
\frac {N+1}2 \cdot \left (
\frac 1{v'}+\frac 1v
\right )
-1
\right )
\right )
=0.
\end{multline*}
Consequently, if $p,q\in [1,\infty ]^{N+1}$ satisfy \eqref{Eq:pqConditionsA},
we have found $r,s,u\in [1,\infty ]^{N+1}$
and $\theta \in [0,1]$ such that
\eqref{Eq:WeylProdExprs}--\eqref{Eq:InterpolCond2} hold. Hence
the interpolation works out properly and the result follows.
\end{proof}
\par
Next we refine Proposition \ref{Prop:MainPropOdd}
by removing some superfluous conditions. More precisely
we have the following.
\par
\begin{thm}\label{Thm:MainThmOdd}
Let $N\ge 3$ be odd, $\mathsf R _N$ be as in \eqref{Eq:HYfunctional},
$\mathsf Q _{0,N}$ and $\mathsf Q _N$ be as in \eqref{Eq:sfQfunctional1},
and let $p_j,q_j\in [1,\infty ]$, $j=0,1,\dots , N$, be such that
\begin{equation}\label{Eq:pqConditionsB}
\max \left ( \mathsf R _N({\textstyle{\frac 1{q'}}}) ,0 \right )
\le \min
\left ( \mathsf Q _{N}({\textstyle{\frac 1p}}), \mathsf Q _{0,N}({\textstyle{\frac 1{q'}}}),
\mathsf Q _N({\textstyle{\frac 1p}},{\textstyle{\frac 1q}}),
\mathsf R _N({\textstyle{\frac 1p}})\right ).
\end{equation}
Also let $\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, and
suppose \eqref{Eq:WeightCond} holds.
Then the map \eqref{Eq:Weylmap}
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1,q_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N,q_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p_0',q_0'} _{(1/\omega _0)}(\rr {2d})$.
\end{thm}
\par
We need some preparations for the proof of Theorem
\ref{Thm:MainThmOdd}. First we have
the following analogue of \cite[Lemma 2.7]{CoToWa}.
\par
\begin{lemma}\label{Lemma:Athm0.3}
Let $N \ge 3$ be odd, $x_j\in [0,1]$ and $y_{j,k}=\frac 12(x_j+x_k)$, $j,k=0,\dots ,N$,
and consider the inequalities:
\begin{enumerate}
\item[{\rm{(1)}}] $\displaystyle{(N-1)^{-1}\left (\sum _{k=0}^Nx_k -1\right )\le
\min _{(j,k)\in \Omega _N}y_{j,k}}$;
\item[{\rm{(2)}}] $y_{j,k}\le \frac 12$, for all $(j,k)\in \Omega _N$;
\item[{\rm{(3)}}] $\displaystyle{(N-1)^{-1}\left (\sum _{k=0}^Nx_k -1\right )
\le \min _{(j,k)\in \Omega _N}(1-y_{j,k})}$.
\end{enumerate}
Then
$$
{\rm{(1)}}\Rightarrow {\rm{(2)}} \Rightarrow {\rm{(3)}}.
$$
\end{lemma}
\par
\begin{rem}\label{Remark:Athm0.3}
We notice the similarities between the previous lemma and
\cite[Lemma 2.7]{CoToWa}. In fact, let
$N \ge 2$, $x_j\in [0,1]$, $j=0,\dots ,N$ and consider the inequalities:
\begin{enumerate}
\item[{\rm{(1)}}] $\displaystyle{(N-1)^{-1}\left (\sum _{k=0}^Nx_k -1\right )\le
\min _{0\le j\le N}x_j}$;
\item[{\rm{(2)}}] $x_j+x_k\le 1$, for all $k\neq j$;
\item[{\rm{(3)}}] $\displaystyle{(N-1)^{-1}\left (\sum _{k=0}^Nx_k -1\right )
\le \min _{0\le j\le N}(1-x_j)}$.
\end{enumerate}
Then \cite[Lemma 2.7]{CoToWa} shows that
$$
{\rm{(1)}}\Rightarrow {\rm{(2)}} \Rightarrow {\rm{(3)}}.
$$
\end{rem}
\par
\begin{proof}
We shall use similar ideas as in the proof of
\cite[Lemma 2.7]{CoToWa}.
Let
\begin{equation}\label{Eq:J1NJ2NDef}
J_{1,N}=\{ 0,\dots ,N \} \cap 2\mathbf Z,
\quad \text{and}\quad
J_{2,N}=\{ 0,\dots ,N \} \cap (2\mathbf Z +1),
\end{equation}
and assume that (1) holds but (2) fails. Then $x_j+x_k>1$
for some $(j,k)\in \Omega _N$.
By renumbering we may assume that $x_2+x_3\le x_j+x_{j+1}$
for every $j\in J_{1,N}\setminus 0$, and that $x_0+x_1>1$. Then (1) and the
fact that there are $(N-1)/2$ pairs $(j,j+1)$ with
$j\in J_{1,N}\setminus 0$ give
\begin{multline*}
(N-1)y_{2,3} = \frac {N-1}2 (2y_{2,3})
\le
\sum _{m=1}^{(N-1)/2} 2y_{2m,2m+1}
\\[1ex]
= \sum _{j=2}^{N} x_j
< \sum _{j=0}^Nx_j -1 \le (N-1)y_{2,3},
\end{multline*}
which is a contradiction. Hence the assumption
$x_0+x_1>1$ must be wrong and it follows that (1) implies (2).
\par
Now suppose that (2) holds, and let
$j_0\in J_{1,N}$ and $k_0\in J_{2,N}$.
Then
$$
x_j\le 1-x_k
\quad \text{and}\quad
x_k\le 1-x_j ,\quad j\in J_{1,N},\ k\in J_{2,N}.
$$
This gives
\begin{equation*}
\sum _{j\in J_{1,N}\setminus j_0} x_j \le \frac {N-1}2(1-x_{k_0})
\quad \text{and}\quad
\sum _{k\in J_{2,N}\setminus k_0} x_k \le \frac {N-1}2(1-x_{j_0}) ,
\end{equation*}
giving that
$$
\sum _{j\in I_{N}\setminus \{ j_0,k_0\} }
x_j \le (N-1)\left (1-\frac 12 \left ( x_{j_0} +x_{k_0} \right ) \right )
= (N-1)(1-y_{j_0,k_0}).
$$
Since
$$
x_{j_0}+x_{k_0}-1 = 2y_{j_0,k_0}-1 \le 0,
$$
we obtain
\begin{multline*}
\sum _{j\in I_{N}} x_j -1
=
(x_{j_0}+x_{k_0}-1) + \sum _{j\in I_{N}\setminus \{ j_0,k_0\} } x_j
\\[1ex]
\le
\sum _{j\in I_{N}\setminus \{ j_0,k_0\} } x_j \le (N-1)(1-y_{j_0,k_0}).
\end{multline*}
Since $j_0\in J_{1,N}$ and $k_0\in J_{2,N}$ were chosen arbitrarily, it
follows that (3) holds.
\end{proof}
\par
\begin{proof}[Proof of Theorem \ref{Thm:MainThmOdd}]
Let $I_N=\{ 0,1,\dots ,N\}$ be as before and let $j,k\in I_N$ satisfy
$j+k\in 2\mathbf Z+1$. Then the assumptions and Lemma
\ref{Lemma:Athm0.3} imply that
\begin{equation}\label{Eq:RNqLessMeans}
0\le \mathsf R _N({\textstyle{\frac 1{q'}}}) \le \min
\left (
\frac 12 \left ( \frac 1{q_j'}+\frac 1{q_k'} \right ),
\frac 12 \left ( \frac 1{q_j}+\frac 1{q_k} \right )
\right ),
\qquad j+k\in 2\mathbf Z +1.
\end{equation}
Hence \eqref{Eq:pqConditionsB} implies \eqref{Eq:pqConditionsA},
and the result follows from Proposition \ref{Prop:MainPropOdd}.
\end{proof}
\par
\begin{rem}
We observe that Theorem \ref{Thm:MainThmOdd} implies
the inclusion
\begin{equation}\label{Ex3-lin-form}
\EuScript M ^{\infty ,1}{\text{\footnotesize{$\#$}}} \EuScript M ^{2,2}{\text{\footnotesize{$\#$}}} \EuScript M ^{2,2} \subseteq \EuScript M ^{2,2}.
\end{equation}
In this context we observe that Theorem 0.1$'$ in \cite{CoToWa}
does not ensure the validity of this inclusion, while Theorem 2.9 in
\cite{CoToWa} does.
\end{rem}
\par
We may use \eqref{calculitransform} and Proposition
\ref{propCalculiTransfMod} to extend Theorem \ref{Thm:MainThmOdd}
to more general products
arising in the pseudo-differential calculi. More precisely, let
$\mathbf{M} (d,\Omega)$ be the set of all $d\times d$ matrices with entries in
the set $\Omega$, and let $A\in \mathbf{M} (d,\mathbf R)$.
By \eqref{calculitransform} we have
\begin{multline*}
a _1 {\text{\footnotesize{$\#$}}} _{\! A} \cdots {\text{\footnotesize{$\#$}}} _{\! A} a_N
=
e^{-i\scal {A_0D_\xi }{D_x}} ((e^{i\scal {A_0D_\xi }{D_x}}a _1)
{\text{\footnotesize{$\#$}}} \cdots {\text{\footnotesize{$\#$}}} (e^{i\scal {A_0D_\xi }{D_x}} a_N)),
\\[1ex]
A_0=A-\frac 12 I_d,
\end{multline*}
where $I_d$ is the $d\times d$ unit matrix (see (2.14) and (2.15)
in \cite{Toft15}).
If we combine this relation with
Proposition \ref{propCalculiTransfMod} and Theorem
\ref{Thm:MainThmOdd}, we get the following result.
The condition on the weight functions is
\begin{multline}\label{weightcondtcalc}
1 \lesssim \omega _0(T_A(X_N,X_0))\prod _{j=1}^N
\omega _j(T_A(X_{j},X_{j-1})),
\quad X_0,\dots ,X_N \in \rr {2d},
\end{multline}
where
\begin{multline}\label{Ttdef}
T_A(X,Y) =(y+A(x-y),\xi +A^*(\eta -\xi ),\eta -\xi , x-y),
\\[1ex]
X=(x,\xi )\in \rr {2d},\ Y=(y,\eta )\in \rr {2d}.
\end{multline}
(See (2.16) and (2.17) in \cite{Toft15}.)
\par
\begin{thm}\label{Thm:MainThmOdd2}
Let $A\in \mathbf{M} (d,\mathbf R)$, $N\ge 3$ be odd, $\mathsf R _N$
be as in \eqref{Eq:HYfunctional}, $\mathsf Q _{0,N}$ and $\mathsf Q _N$
be as in \eqref{Eq:sfQfunctional1}, and let
$p_j,q_j\in [1,\infty ]$, $j=0,1,\dots , N$, be such that
\begin{equation}\label{Eq:pqConditionsBAgain}
\max \left ( \mathsf R _N({\textstyle{\frac 1{q'}}}) ,0 \right )
\le \min
\left ( \mathsf Q _{N}({\textstyle{\frac 1p}}), \mathsf Q _{0,N}({\textstyle{\frac 1{q'}}}),
\mathsf Q _N({\textstyle{\frac 1p,\frac 1q}}),
\mathsf R _N({\textstyle{\frac 1p}})\right ).
\end{equation}
Also let $\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$,
and suppose
\eqref{weightcondtcalc} and \eqref{Ttdef} hold. Then the map
\eqref{Eq:Weylmap}$'$ from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from $M ^{p_1,q_1}
_{(\omega _1)}(\rr {2d}) \times \cdots \times M ^{p_N,q_N}
_{(\omega _N)}(\rr {2d})$ to $M ^{p_0',q_0'} _{(1/\omega _0)}(\rr {2d})$.
\end{thm}
\par
In the same way we get the following result by combining
\eqref{calculitransform} and Propositions
\ref{propCalculiTransfMod} and \ref{Prop:Prop2}. The details
are left for the reader.
\par
\begin{prop}\label{Prop:Prop2Ext}
Let $A\in \mathbf{M} (d,\mathbf R)$, $N\ge 3$ be odd,
$p,p_j\in (0,\infty ]$, $j=1,\dots , N$,
and let
$\omega _j \in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, and
suppose \eqref{weightcondtcalc} and \eqref{Ttdef} hold.
Then the following is true:
\begin{enumerate}
\item[{\rm{(1)}}] if $p_0=p_N=p$, $p_j=\max (1,p)$ when $j\in [3,N-2]$ is odd
and $p_j=p'$ when $j$ is even, then the map \eqref{Eq:Weylmap}$'$
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p} _{(1/\omega _0)}(\rr {2d})$;
\par
\item[{\rm{(2)}}] if $p_j=p$ when $j$ is even and
$p_j=p'$ when $j$ is odd,
then the map \eqref{Eq:Weylmap}$'$
from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript M ^{p_1} _{(\omega _1)}(\rr {2d}) \times \cdots
\times \EuScript M ^{p_N} _{(\omega _N)}(\rr {2d})$ to
$\EuScript M ^{p'} _{(1/\omega _0)}(\rr {2d})$.
\end{enumerate}
\end{prop}
\par
Finally we prove a continuity result for the twisted convolution.
The map \eqref{Eq:Weylmap} is then replaced by
\begin{equation}\label{Twistmap}
(a_1,a_2,\dots ,a_N)\mapsto a_1*_\sigma a_2*_\sigma \cdots *_\sigma a_N.
\end{equation}
The following
result follows immediately from Theorem \ref{Thm:MainThmOdd}.
Here the condition \eqref{Eq:WeightCond} is replaced by
\begin{multline}\label{weightcond3}
1 \lesssim \omega _0(X_N-X_0,X_N+X_0)\prod _{j=1}^N
\omega _j(X_j-X_{j-1},X_j+X_{j-1}),
\\[1ex]
X_0,X_1,\dots ,X_N\in \rr {2d}.
\end{multline}
\par
\begin{thm}\label{Thm:MainThmOddTwistConv}
Let $p_j,q_j\in [1,\infty ]$, $j=0,1,\dots , N$, and suppose that
\begin{equation*}
\max \left ( \mathsf R _N({\textstyle{\frac 1{p'}}}) ,0 \right )
\le \min
\left ( \mathsf Q _{N}({\textstyle{\frac 1q}}), \mathsf Q _{0,N}({\textstyle{\frac 1{p'}}}),
\mathsf Q _N({\textstyle{\frac 1p,\frac 1q}}),
\mathsf R _N({\textstyle{\frac 1q}})\right ).
\end{equation*}
Suppose $\omega _j\in \mathscr P _E(\rr {4d})$, $j=0,1,\dots ,N$, satisfy
\eqref{weightcond3}. Then the map
\eqref{Twistmap} from $\mathcal S _{1/2}(\rr {2d}) \times \cdots \times
\mathcal S _{1/2}(\rr {2d})$ to $\mathcal S _{1/2}(\rr {2d})$
extends uniquely to a continuous and associative map from
$\EuScript W^{p_1,q_1}_{(\omega _1)}(\rr {2d})
\times \cdots \times \EuScript W^{p_N,q_N}_{(\omega _N)}(\rr {2d})$
to $\EuScript W^{p_0',q_0'} _{(1/\omega _0)}(\rr {2d})$.
\end{thm}
\par
\end{document}
\begin{document}
\title[]{Eigenvalue estimates for Beltrami-Laplacian under Bakry-\'Emery Ricci curvature condition}
\author{Ling Wu, XingYu Song and Meng Zhu}
\address{School of Mathematical Sciences and Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China}
\email{ [email protected], [email protected], [email protected]}
\date{}
\begin{abstract}
On closed Riemannian manifolds with Bakry-\'Emery Ricci curvature bounded from below and bounded gradient of the potential function, we obtain lower bounds for all positive eigenvalues of the Beltrami-Laplacian instead of the drifted Laplacian. The lower bound of the $k$th eigenvalue depends on $k$, the lower bound of the Bakry-\'Emery Ricci curvature, the gradient bound of the potential function, and the dimension and diameter upper bound of the manifold, but not on the volume of the manifold. In particular, these results apply to closed manifolds with Ricci curvature bounded from below.
\end{abstract}
\maketitle
\section{Introduction}
Let $(M, g)$ be a Riemannian manifold and $f$ a smooth function on $M$. The Bakry-\'Emery Ricci curvature tensor $Ric+Hess\, f$, first introduced in \cite{BE}, is a natural generalization of the classical Ricci curvature tensor (the case where $f$ is constant). Here, $Ric$ and $Hess\, f$ denote the Ricci curvature tensor and the Hessian of $f$, respectively.
A lower bound on the Bakry-\'Emery Ricci curvature is the notion of ``Ricci curvature bounded below" for the smooth metric measure space $(M, g, e^{-f}dV)$, namely, $M$ equipped with the distance induced by $g$ and the measure $e^{-f}dV$, where $dV$ is the volume element. It can also be extended to general metric measure spaces and used to study Ricci limit spaces (see e.g. \cite{St1}, \cite{St2}, \cite{LV}). Moreover, manifolds with constant Bakry-\'Emery Ricci curvature are the so-called Ricci solitons, which play a crucial role in the singularity analysis of the Ricci flow (see e.g. \cite{Per}, \cite{TZ}, \cite{CW}, \cite{Bam}). Therefore, the question of whether the results for manifolds with Ricci curvature bounded below can also be established when the Bakry-\'Emery Ricci curvature is bounded below has drawn a lot of attention.
In this paper, we study eigenvalue estimates for the Beltrami-Laplacian $\Delta$ on closed manifolds. The basic assumptions are that $(M^m, g)$ is an $m$-dimensional closed Riemannian manifold with
\begin{equation}\label{basic assumption1}
Ric+Hess\,f\geq -K g,
\end{equation}
and
\begin{equation}\label{basic assumption2}
|\nabla f|\leq L,
\end{equation}
where $\nabla f$ is the gradient of $f$, and $K$ and $L$ are nonnegative constants.
On manifolds with Ricci curvature bounded below, there have been numerous results on eigenvalue estimates (see e.g. \cite{Lich}, \cite{Ob}, \cite{Cheeger}, \cite{Cheng}, \cite{LY}, \cite{ZY}, \cite{Li}). For manifolds with Bakry-\'Emery Ricci curvature bounded from below, normally the weighted measure $e^{-f}dV$ is considered, and the corresponding self-adjoint Laplace operator is the drifted Laplacian $\Delta_f=\Delta- \nabla f\cdot \nabla$. Under the assumptions \eqref{basic assumption1} and \eqref{basic assumption2}, Munteanu-Wang \cite{MW}, Su-Zhang \cite{SZ}, and Wu \cite{Wu} independently obtained a Cheng type upper bound for the first positive eigenvalue of $\Delta_f$. On the other hand, Charalambous-Lu-Rowlett \cite{CLR} proved lower bound estimates for all positive eigenvalues of $\Delta_f$. An eigenvalue comparison for the first positive eigenvalue of $\Delta_f$ is also given in \cite{BQ} and \cite{AN}.
Different from the above setting, here we consider the standard measure $dV$ and the Beltrami-Laplacian $\Delta$ under conditions \eqref{basic assumption1} and \eqref{basic assumption2}. A main difficulty arising in this case is that the Hessian of $f$ does not appear in the Bochner formula for $\Delta$, as opposed to the Bochner formula for $\Delta_f$. Thus, to utilize the lower bound on the Bakry-\'Emery Ricci curvature, we need to insert $Hess\,f$ manually, which causes an extra bad term $-Hess\,f(\nabla \cdot, \nabla \cdot)$. By using integration by parts and Moser iteration, we are able to overcome this difficulty.
Denoting the eigenvalues of $\Delta$ by $0=\lambda_0<\lambda_1\leq \lambda_2\leq \cdots\leq \lambda_k\leq \cdots$, we derive lower bounds for all the $\lambda_k$'s. More precisely, we show that
\begin{theorem}\label{main theorem}
Let $(M^m,g)$ be an $m$-dimensional closed Riemannian manifold. Assume that conditions \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Then\\
(1) we have
\begin{equation}\label{lower bound of 1}
\lambda_1\ge c_0;
\end{equation}
(2) for $m \ge 3$,
\begin{equation}\label{lower bound of k}
\lambda_k\ge c_1k^{\frac{2}{m}}, \ \forall k\geq 2,
\end{equation}
and for $m=2$,
\begin{equation}\label{lower bound of k m=2}
\lambda_k\ge c_2k^{\frac{1}{2}}, \ \forall k\geq 2.
\end{equation}
Here $c_0$, $c_1$ and $c_2$ are constants depending on $m$, $K$, $L$, and the upper bound $D$ of the diameter of $M$.
\end{theorem}
We prove (1) and (2) of Theorem \ref{main theorem} separately in sections 2 and 3 (see Theorem \ref{thm 1} and Theorem \ref{thm n}), where explicit expressions for $c_0$, $c_1$ and $c_2$ can also be found. In section 2, we establish the estimate \eqref{lower bound of 1} by finding a lower bound for Cheeger's isoperimetric constant $IN_1(M)$. In fact, we obtain a lower bound for the general isoperimetric constant $IN_{\alpha}(M)$, $\alpha>0$, defined in \cite{Li}. The proof follows a method of Dai-Wei-Zhang \cite{DWZ} and uses the volume comparison result of Q. Zhang and the third author \cite{ZZ}. In section 3, following the method in \cite{WZ} (see also \cite{LZZ}), the estimates \eqref{lower bound of k} and \eqref{lower bound of k m=2} are proved by using \eqref{lower bound of 1} and gradient estimates for eigenfunctions. The gradient estimates are obtained by Moser iteration, in which the required Sobolev inequality comes from the isoperimetric constant estimate in section 2.\\
\section{Isoperimetric constant estimate and lower bound of $\lambda_1$}
In this section, we prove part (1) of Theorem \ref{main theorem}. According to \cite{Cheeger}, it suffices to bound Cheeger's isoperimetric constant from below. Firstly, let us recall the definitions of isoperimetric constants. We adopt the notations and definitions in \cite{Li}.
\begin{definition}
Let $(M,g)$ be a compact Riemannian manifold (with or without boundary). For $\alpha>0$, the Neumann $\alpha$-isoperimetric constant of $M$ is defined by
$$ IN_\alpha(M)=\inf_{\substack{\partial\Omega_1=H=\partial\Omega_2 \\ M=\Omega_1\cup H \cup \Omega_2}}\frac{\mathrm{Vol}(H)}{\min\{{\mathrm{Vol}(\Omega_1),\mathrm{Vol}(\Omega_2)}\}^{\frac{1}{\alpha}}},$$
where the infimum is taken over all hypersurfaces $H$ dividing $M$ into two parts, denoted by $\Omega_1$ and $\Omega_2$, and $\mathrm{Vol}(\cdot)$ denotes the volume of a region.
\end{definition}
In \cite{Cheeger}, Cheeger showed that
\begin{lemma}\label{Cheeger}
Let $(M,g)$ be a closed Riemannian manifold. Then
\[\lambda_1 \ge \frac{IN_1(M)^2}{4}.\]
\end{lemma}
Thus, one can get a lower bound of $\lambda_1$ by bounding $IN_1(M)$ from below. As indicated in \cite{DWZ}, this can be done by using the method therein. For completeness, we state the result and also include the proof in the following.
\begin{theorem}\label{isoperimetric estimate}
Let $(M^m,g)$ be an $m$-dimensional complete Riemannian manifold, $m\geq 2$. Assume that \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Let $\Omega$ be a bounded convex domain in $M$. Then
for $1\le\alpha \le \frac{m}{m-1},$ we have
\begin{equation}
IN_\alpha(\Omega)\ge d^{-1}2^{-2m-1}5^{-m}e^{-(24-\frac{2}{\alpha})Ld-(104-\frac{1}{\alpha})Kd^2} \mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}},
\end{equation}
and for $0<\alpha<1,$ we have
\begin{equation}
IN_{\alpha}(\Omega)\ge d^{-1}2^{-2m-1}5^{-m}e^{-22Ld-103Kd^2}\mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}},
\end{equation}
where $d$ is the diameter of the domain $\Omega$.
In particular, if $M$ is closed, then
\begin{equation}
IN_1(M)\ge D^{-1}2^{-2m-1}5^{-m}e^{-22LD-103K D^2},
\end{equation}
and
\begin{equation}
IN_{\frac{m}{m-1}}(M)\ge D^{-1}2^{-2m-1}5^{-m}e^{-(22+\frac{2}{m})LD-(103+\frac{1}{m})K D^2}\mathrm{Vol}(M)^{\frac{1}{m}},
\end{equation}
where $D$ is an upper bound of the diameter of $M$.
\end{theorem}
Before starting the proof of Theorem \ref{isoperimetric estimate}, let us present some results that will be needed. First of all, Q. Zhang and the third author \cite{ZZ} proved a volume comparison theorem for manifolds satisfying \eqref{basic assumption1} and \eqref{basic assumption2}.
\begin{theorem}[\cite{ZZ}]\label{volume element comparison}
Let $(M^m, g)$ be an $m$-dimensional complete Riemannian manifold. Suppose that $Ric+\frac{1}{2}\mathscr{L}_Vg \ge -Kg$ for some constant $K\ge 0$ and smooth vector field $V$ with $|V|\le L$, where $\mathscr{L}_V$ denotes the Lie derivative in the direction of $V$. Then the following conclusions are true.
\\
(a) Let $A(s,\theta)$ denote the volume element of the metric $g$ on $M$ in geodesic polar coordinates. Then for any $0< s_1 <s_2$, we have
\begin{equation}\label{AC}
\frac{A(s_2,\theta)}{s_2^{m-1}}\le e^{2Ls_2+Ks_2^2} \frac{A(s_1,\theta)}{s_1^{m-1}}.
\end{equation}
\\
(b) For any $0<r_1<r_2$, we have
\begin{equation} \label{VC}
\frac{\mathrm{Vol}(B_{r_2}(x))}{r_2^m}\le e^{[K(r_2^2-r_1^2)+2L(r_2-r_1)]}\frac{\mathrm{Vol}(B_{r_1}(x))}{r_1^m},
\end{equation}
where $B_r(x)$ is the geodesic ball centered at $x\in M$ with radius $r$.
\end{theorem}
\begin{remark}
When $V=\nabla f$, the assumptions in the above theorem become \eqref{basic assumption1} and \eqref{basic assumption2}.
\end{remark}
Next, we need the following lemma by Gromov.
\begin{lemma}[\cite{Gro}]\label{Gromov}
Let $(M^m,g)$ be a complete Riemannian manifold. Let $\Omega$ be a convex domain in $M$, and $H$ a hypersurface dividing $\Omega$ into two parts $\Omega_1,\Omega_2$. For any Borel subsets $W_i \subset \Omega_i$, $i=1,2$, there exists an $x_1$
in one of the $W_i$, say $W_1$, and a subset $W$ of the other part $W_2$, such that
\begin{equation}
\mathrm{Vol}(W) \ge \frac{1}{2}\mathrm{Vol}(W_2),
\end{equation}
and for any $x_2\in W$, there is a unique minimal geodesic $\gamma_{x_1, x_2}$ between $x_1$ and $x_2$ which intersects $H$ at some $z$ with
\begin{equation}
dist(x_1,z)\ge dist(x_2,z),
\end{equation}
where $dist(x_1,z)$ denotes the distance between $x_1$ and $z$.
\end{lemma}
Combining Theorem \ref{volume element comparison} and Lemma \ref{Gromov}, we get
\begin{lemma}
Let $H$, $W$ and $x_1$ be as in Lemma \ref{Gromov}. Then
\begin{equation}
\mathrm{Vol}(W)\le D_12^{m-1}e^{4LD_1+4KD_1^2}\mathrm{Vol}(H^{'}),
\end{equation}
where $D_1=\sup_{x\in W} dist(x_1,x)$, and $H^{'}$ is the set of intersection points with $H$ of the geodesics $\gamma_{x_1,x}$ for all $x \in W$.
\end{lemma}
\proof Let $S_{x_1}$ be the set of unit tangent vectors of $M$ at $x_1$, and $\Gamma \subset S_{x_1}$ the subset of vectors $\theta$ such that $\gamma_{\theta} =\gamma_{x_1,x_2}$ for some $x_2\in W$. The volume element of the metric $g$ is written as $dV=A(\theta,t)d\theta \wedge dt$ in polar coordinates $(\theta,t) \in S_{x_1} \times \mathbb{R^{+}}$. For any $\theta \in \Gamma$, let $r(\theta)$ be the radius such that $\exp_{x_1}(r(\theta)\theta)\in H$. Then it follows from Lemma \ref{Gromov} that $W\subset \{\exp_{x_1}(r\theta)\,|\,r(\theta) \le r \le 2r(\theta),\ \theta \in \Gamma\}$, and hence
\begin{equation}\label{vol M}
\mathrm{Vol}(W) \le \int_{\Gamma}\int_{r(\theta)}^{2r(\theta)} A(\theta,t)dtd\theta.
\end{equation}
For $r(\theta) \le t \le 2r(\theta) \le 2D_1$, by \eqref{AC}, we have
\[\frac{A(\theta,t)}{t^{m-1}}\le e^{2Lt+Kt^2} \frac{A(\theta,r(\theta))}{r(\theta)^{m-1}},\]
which implies that
\[A(\theta,t) \le e^{4LD_1+4KD_1^2}2^{m-1}A(\theta,r(\theta)).\]
Plugging the above inequality into \eqref{vol M} gives \[\mathrm{Vol}(W)\le e^{4LD_1+4KD_1^2}2^{m-1}\int_{\Gamma} r(\theta)A(\theta,r(\theta))d\theta \le D_12^{m-1} e^{4LD_1+4KD_1^2} \mathrm{Vol}(H^{'}). \] \qed\\
When $W$ is the intersection of $\Omega$ and a ball in $M$, the above lemma implies the following.
\begin{corollary}
Let $H$ be any hypersurface dividing a convex domain $\Omega$ into two parts $\Omega_1,\Omega_2$. For any ball $B_r(x)$ in $M$, we have
\begin{equation}
\min(\mathrm{Vol}(B_r(x)\cap\Omega_1),\mathrm{Vol}(B_r(x)\cap\Omega_2)) \le 2^{m+1}re^{4Ld+4Kd^2}\mathrm{Vol}(H\cap(B_{2r}(x))),
\end{equation}
where $d=diam(\Omega)$ is the diameter of $\Omega$. In particular, if $B_r(x)\cap\Omega$ is divided equally by $H$, then
\begin{equation} \label{RV}
\mathrm{Vol}(B_r(x)\cap\Omega) \le 2^{m+2}re^{4Ld+4Kd^2}\mathrm{Vol}(H\cap B_{2r}(x)) .
\end{equation}
\end{corollary}
\proof Put $W_i=B_r(x)\cap\Omega_i$ in the above lemma and use $D_1 \le 2r$ and $H^{'}\subset H\cap B_{2r}(x).$ \qed\\
Now we are ready to prove Theorem \ref{isoperimetric estimate}.\\
\noindent{\it Proof of Theorem \ref{isoperimetric estimate}.} Let $H$ be any hypersurface dividing $\Omega$ into two parts, $\Omega_1$ and $\Omega_2$. We may assume that $\mathrm{Vol}(\Omega_1) \le \mathrm{Vol}(\Omega_2)$. For any $x\in\Omega_1$, let $r_x$ be the smallest radius such that
\[\mathrm{Vol}(B_{r_x}(x)\cap\Omega_1)=\mathrm{Vol}(B_{r_x}(x)\cap\Omega_2)=\frac{1}{2}\mathrm{Vol}(B_{r_x}(x)\cap\Omega).\]
By \eqref{RV}, we have
\begin{equation}\label{CC}
\mathrm{Vol}(B_{r_x}(x)\cap\Omega) \le 2^{m+2}r_xe^{4Ld+4Kd^2}\mathrm{Vol}(H\cap B_{2r_x}(x)).
\end{equation}
The domain $\Omega_1$ has a covering
\[\Omega_1\subset \cup_{x\in\Omega_1}B_{2r_x}(x).\]
By the Vitali Covering Lemma, we can choose a countable family of disjoint balls $B_i=B_{2r_{x_i}}(x_i)$ such that $\cup_iB_{10r_{x_i}}(x_i) \supset \Omega_1.$
So\[\mathrm{Vol}(\Omega_1)\le \sum_i \mathrm{Vol}(B_{10r_{x_i}}(x_i)\cap\Omega_1).\]
Applying the volume comparison Theorem \ref{volume element comparison} in $\Omega_1$ gives
\[\frac{\mathrm{Vol}(B_{10r_{x_i}}(x_i)\cap\Omega_1)}{(10r_{x_i})^m}\le e^{99Kr_{x_i}^2+18Lr_{x_i}}\frac{\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega_1)}{(r_{x_i})^m}.\]
On the other hand, since $\mathrm{Vol}(\Omega_1) \le \mathrm{Vol}(\Omega_2)$, we have $r_x \le d$ for any $x\in \Omega_1$. Thus,
\begin{align}
\mathrm{Vol}(B_{10r_{x_i}} (x_i) \cap\Omega_1)
&\le 10^me^{99Kd^2+18Ld} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega_1)\nonumber\\ &=2^{-1}10^me^{99Kd^2+18Ld} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)\nonumber.
\end{align}
Therefore,
\begin{equation}\label{vol omega 1}
\mathrm{Vol}(\Omega_1) \le 2^{-1}10^me^{99Kd^2+18Ld} \sum_i\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega) .
\end{equation}
Moreover, since the balls $B_i$ are disjoint, \eqref{CC} gives
\begin{equation}\label{vol H}
\mathrm{Vol}(H)\ge \sum_i \mathrm{Vol}(B_i \cap H) \ge 2^{-m-2}e^{-4Ld-4Kd^2} \sum_i r_{x_i}^{-1} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega).
\end{equation}
When $1\le \alpha \le \frac{m}{m-1}$, it follows from \eqref{vol omega 1} and \eqref{vol H} that
\begin{equation}\label{iso 1}
\begin{split}
\frac{\mathrm{Vol}(H)}{\mathrm{Vol}(\Omega_1)^{\frac{1}{\alpha}}} &\ge \frac{2^{-m-2}e^{-4Ld-4Kd^2}}{(2^{-1}10^me^{99Kd^2+18Ld})^{\frac{1}{\alpha}}}\frac{\sum_i r_{x_i}^{-1}\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)}{(\sum_i\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega))^{\frac{1}{\alpha}}} \\
& \ge \frac{2^{-m-2}e^{-4Ld-4Kd^2}}{2^{-1}10^me^{99Kd^2+18Ld}}\frac{\sum_ir_{x_i}^{-1} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)}{\sum_i\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)^{\frac{1}{\alpha}}} \\
& \ge 2^{-2m-1}5^{-m}e^{-22Ld-103Kd^2}\inf_i\frac{r_{x_i}^{-1}\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)}{\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)^\frac{1}{\alpha}} \\
&= 2^{-2m-1}5^{-m}e^{-22Ld-103Kd^2}\inf_i r_{x_i}^{-1}\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)^{1-\frac{1}{\alpha}},
\end{split}
\end{equation}
where the second inequality uses $(\sum_i a_i)^{\frac{1}{\alpha}}\le \sum_i a_i^{\frac{1}{\alpha}}$ and $c^{\frac{1}{\alpha}}\le c$ for $c\ge 1$, both valid since $\frac{1}{\alpha}\le 1$.
Applying the volume comparison Theorem \ref{volume element comparison} in $\Omega$ gives
\[\frac{\mathrm{Vol}(B_d(x_i)\cap\Omega)}{d^m}\le e^{Kd^2+2Ld}\frac{\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)} {r_{x_i}^m}.\]
Since $1-\frac{1}{\alpha} \ge 0$ and $m(1-\frac{1}{\alpha})-1\le0$,
we derive
\begin{equation}\label{iso 2}
\begin{aligned}
\inf_i r_{x_i}^{-1} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)^{1-\frac{1}{\alpha}} \ge & d^{m(1-\frac{1}{\alpha})-1}\inf_i r_{x_i}^{-m(1-\frac{1}{\alpha})} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)^{1-\frac{1}{\alpha}}\\
\geq & d^{-1}e^{-(Kd^2+2Ld)(1-\frac{1}{\alpha})}\mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}}.
\end{aligned}
\end{equation}
From \eqref{iso 1} and \eqref{iso 2}, we conclude that
\[IN_{\alpha}(\Omega)\ge d^{-1}2^{-2m-1}5^{-m}e^{-(24-\frac{2}{\alpha})Ld-(104-\frac{1}{\alpha})K d^2}\mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}}.\]
On the other hand, when $0<\alpha<1$, similarly to \eqref{iso 1}, we have
\begin{equation}\label{iso 3}
\begin{split}
\frac{\mathrm{Vol}(H)}{\mathrm{Vol}(\Omega_1)^{\frac{1}{\alpha}}}&=\frac{\mathrm{Vol}(H)}{\mathrm{Vol}(\Omega_1) \mathrm{Vol}(\Omega_1)^{\frac{1}{\alpha}-1}} \ge \frac{\mathrm{Vol}(H)}{\mathrm{Vol}(\Omega_1) \mathrm{Vol}(\Omega)^{\frac{1}{\alpha}-1}} \\
& \ge \frac{2^{-m-2}e^{-4Ld-4Kd^2}}{2^{-1}10^me^{99Kd^2+18Ld}}\frac{\sum_ir_{x_i}^{-1} \mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)}{\sum_i\mathrm{Vol}(B_{r_{x_i}}(x_i)\cap\Omega)} \mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}} \\
& \ge d^{-1}2^{-2m-1}5^{-m}e^{-22Ld-103Kd^2}\mathrm{Vol}(\Omega)^{1-\frac{1}{\alpha}} .
\end{split}
\end{equation}
Taking the infimum over $H$ finishes the proof. \qed\\
From Lemma \ref{Cheeger} and Theorem \ref{isoperimetric estimate}, we immediately obtain the estimate of the first eigenvalue.
\begin{theorem}\label{thm 1}
Let $(M^m,g)$ be an $m$-dimensional closed Riemannian manifold with diameter bounded from above by $D$, and $m\geq 2$. Suppose that \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Then
\begin{equation}
\lambda_1\ge \frac{1}{16}D^{-2}400^{-m}e^{-44LD-206KD^2}=: c_0.
\end{equation}
\end{theorem}
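For the reader's convenience, the explicit value of $c_0$ is obtained by simply squaring the isoperimetric bound: Lemma \ref{Cheeger} together with the closed-manifold case of Theorem \ref{isoperimetric estimate} gives

```latex
\lambda_1 \;\ge\; \frac{IN_1(M)^2}{4}
\;\ge\; \frac{1}{4}\left(D^{-1}2^{-2m-1}5^{-m}e^{-22LD-103KD^2}\right)^2
\;=\; \frac{1}{16}\,D^{-2}\,400^{-m}\,e^{-44LD-206KD^2},
```

since $\frac{1}{4}\cdot 2^{-4m-2}5^{-2m}=\frac{1}{16}(2^4\cdot 5^2)^{-m}=\frac{1}{16}\,400^{-m}$.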
To derive the lower bounds for higher order eigenvalues, we need gradient estimates for eigenfunctions, which in turn require a Sobolev inequality. According to section 9 in \cite{Li}, the desired Sobolev inequality follows from the lower bound estimate of $IN_{\frac{m}{m-1}}(M)$.
\begin{definition}[\cite{Li}]
Let $(M^m,g)$ be an $m$-dimensional compact Riemannian manifold (with or without boundary). For any $\alpha>0$, the Neumann $\alpha$-Sobolev constant of $M$ is defined by
\[SN_\alpha(M)=\inf_{f\in H^{1,1}(M)} \frac{\int_M |\nabla f|}{\{\inf_{k\in \mathbb{R}} \int_M |f-k|^\alpha\}^{\frac{1}{\alpha}}},\]
where $H^{1,1}(M)$ is the Sobolev space.
\end{definition}
As pointed out in \cite{Li}, when $\alpha>\frac{m}{m-1}$, it holds that $IN_{\alpha}(M)=SN_{\alpha}(M)=0$. In general, the relation between $IN_\alpha(M)$ and $SN_\alpha(M)$ is as follows.
\begin{lemma}[section 9 in \cite{Li}]\label{SN and IN}
For any $\alpha >0$, we have
$$ \min\{{1,2^{1-\frac{1}{\alpha}}}\} IN_\alpha(M) \le SN_\alpha(M)\le \max\{{1,2^{1-\frac{1}{\alpha}}}\}IN_\alpha(M). $$
\end{lemma}
Moreover, a lower bound on the Sobolev constant $SN_{\alpha}(M)$ provides a Sobolev inequality. In fact, we have
\begin{lemma}[Corollary 9.9 in \cite{Li}]\label{SN and Sobolev}
Let $(M^m, g)$ be a compact Riemannian manifold (with or without boundary). There exist constants $C_1(\alpha), C_2(\alpha)>0$ depending only on $\alpha$, such that
\[\int_M |\nabla f|^2 \ge C_1(\alpha)SN_\alpha(M)^2\left(\left(\int_M |f|^{\frac{2\alpha}{2-\alpha}}\right)^\frac{2-\alpha}{\alpha}-C_2(\alpha)\mathrm{Vol}(M)^{\frac{(2-2\alpha)}{\alpha}} \int_M |f|^2\right)\] for all $f\in H^{1,2}(M).$
\end{lemma}
Then, by choosing $\alpha=\frac{m}{m-1}$ for $m\geq 3$ and $\alpha=\frac{4}{3}$ for $m=2$, and combining Lemma \ref{SN and IN}, Lemma \ref{SN and Sobolev}, and Theorem \ref{isoperimetric estimate}, one obtains the following Sobolev inequalities.
\begin{corollary} \label{Sobolev inequality}
Let $(M^m,g)$ be an $m$-dimensional compact Riemannian manifold (with or without boundary). Assume that \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Then for any $f\in H^{1,2}(M)$,\\
(1) when $m\ge 3$, we have
\begin{equation}\label{Sobolev1}
\int_M |\nabla f|^2 \ge C_1(m) \tilde{C}^2 \mathrm{Vol}(M)^{\frac{2}{m}}\left(\left(\int_M |f|^{\frac{2m}{m-2}}\right)^{\frac{m-2}{m}}-C_2(m)\mathrm{Vol}(M)^{-\frac{2}{m}}\int_M|f|^2\right),
\end{equation}
where $\tilde{C}=D^{-1}2^{-2m-1}5^{-m}e^{-(22+\frac{2}{m})LD-(103+\frac{1}{m})K D^2}$, and $C_1(m)$ and $C_2(m)$ are dimensional constants; \\
(2) when $m=2$, one has
\begin{equation}\label{Sobolev2}
\int_M |\nabla f|^2 \ge \tilde{S}_1 \tilde{S}^2 \mathrm{Vol}(M)^{\frac{1}{2}}\left(\left(\int_M |f|^{4}\right)^{\frac{1}{2}}-\tilde{S}_2\mathrm{Vol}(M)^{-\frac{1}{2}}\int_M|f|^2\right),
\end{equation}
where $\tilde{S}_1$ and $\tilde{S}_2$ are absolute constants, and
$\tilde{S}=D^{-1}2^{-5}5^{-2}e^{-(22+\frac{1}{2})LD-(103+\frac{1}{4})K D^2}.$\\
\end{corollary}
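The reduction behind \eqref{Sobolev1} can be traced in one line. For $\alpha=\frac{m}{m-1}$ we have $2^{1-\frac{1}{\alpha}}=2^{\frac{1}{m}}>1$, so Lemma \ref{SN and IN} gives $SN_\alpha(M)\ge IN_\alpha(M)$, and Theorem \ref{isoperimetric estimate} then yields (writing $C_1(m)$ for $C_1(\frac{m}{m-1})$)

```latex
SN_{\frac{m}{m-1}}(M) \;\ge\; IN_{\frac{m}{m-1}}(M)
\;\ge\; \tilde{C}\,\mathrm{Vol}(M)^{\frac{1}{m}},
\qquad\text{so}\qquad
C_1(m)\,SN_{\frac{m}{m-1}}(M)^2 \;\ge\; C_1(m)\,\tilde{C}^2\,\mathrm{Vol}(M)^{\frac{2}{m}},
```

which, inserted into Lemma \ref{SN and Sobolev}, gives \eqref{Sobolev1}; the case $m=2$ with $\alpha=\frac{4}{3}$ is identical.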
\begin{remark}\label{constants}
By carefully following the proof of Corollary 9.9 in \cite{Li}, one can check that we may take $C_1(m)=\frac{(m-2)^2}{4(m-1)^2}2^{\frac{2-m}{m(m-1)}}$, $C_2(m)=2^{\frac{2m^3-7m^2+2m+4}{m(m-1)(m-2)}}$, $\tilde{S}_1=3^{-2}2^{-\frac{1}{6}}$, and $\tilde{S}_2=2^{\frac{7}{6}}$.
\end{remark}
\section{Gradient and higher order eigenvalue estimates}
In this section, we use a method in \cite{WZ} (see also \cite{LZZ}) to establish the lower bound estimates for higher order eigenvalues. Firstly, we prove a gradient estimate for eigenfunctions by Moser iteration.
\begin{proposition}\label{prop gradient estimate eigenfunction}
Let $(M^m,g)$, $m\ge3$, be an $m$-dimensional closed Riemannian manifold. Suppose that \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Let $\lambda$ be an eigenvalue of the Laplace operator, and $u$ an eigenfunction satisfying $\Delta u=-\lambda u$. Then we have the following gradient estimate:
\begin{equation}
|\nabla u|^2\le 2^m\left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{3\lambda+2K+2L^2+C_2}{C_1}\right)^\frac{m}{2}(\lambda+L^2)\mathrm{Vol}(M)^{-1}\int_M u^2,
\end{equation}
where $C_1=C_1(m)\tilde{C}^2$ and $C_2=C_1(m)\tilde{C}^2C_2(m)$, with $C_1(m),\ C_2(m)$ and $\tilde{C}$ the constants in \eqref{Sobolev1}.
In particular, when $||u||_{L^2}=1$, we have
\begin{equation}\label{gradient estimate eigenfunction}
|\nabla u|^2\le 2^m\left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{3\lambda+2K+2L^2+C_2}{C_1}\right)^\frac{m}{2}(\lambda+L^2)\mathrm{Vol}(M)^{-1}.
\end{equation}
\end{proposition}
\proof Let $v=|\nabla u|^2+L^2u^2$. The Bochner formula and assumptions \eqref{basic assumption1} and \eqref{basic assumption2} imply that
\begin{align}
\Delta v &=2|Hess\,u|^2+2\langle\nabla\Delta u,\nabla u\rangle+2Ric(\nabla u,\nabla u)+2L^2u\Delta u+2L^2|\nabla u|^2 \nonumber\\
&\ge 2|Hess\,u|^2-2\lambda|\nabla u|^2-2K|\nabla u|^2-2f_{ij}u_iu_j-2L^2\lambda u^2+2L^2|\nabla u|^2\nonumber\\
&=2|Hess\,u|^2-2\lambda v +(2L^2-2K)|\nabla u|^2-2f_{ij}u_iu_j\nonumber\\
&\ge 2u_{ij}^2-2(\lambda+K) v-2f_{ij}u_iu_j.\nonumber
\end{align}
Multiplying both sides above by $v^{p-1}$, $p\geq 2$, and integrating over $M$, notice that
\begin{align}
\int_Mv^{p-1}\Delta v&=-\int_M\langle\nabla v^{p-1},\nabla v\rangle=-\int_M(p-1)v^{p-2}\langle\nabla v,\nabla v\rangle\nonumber\\
&=-(p-1)\int_Mv^{p-2}|\nabla v|^2=-
\frac{4(p-1)}{p^2}\int_M|\nabla v^{\frac{p}{2}}|^2\nonumber.
\end{align}
Hence, we have
\betaegin{equation}gin{equation}\labelel{multiply v^p-1}
\frac{4(p-1)}{p^2}\sqrt{-1}nt_M|\nabla v^{\frac{p}{2}}|^2\le -2\sqrt{-1}nt_M u_{ij}^2v^{p-1}+2(\lambda+K) \sqrt{-1}nt_Mv^p+2\sqrt{-1}nt_Mf_{ij}
u_iu_jv^{p-1}.
\epsilonnd{equation}
For the third term on the right hand side above, integrating by part yields
\betaegin{equation}gin{align}\labelel{int by parts}
2\sqrt{-1}nt_Mf_{ij}u_iu_jv^{p-1}&=-2\sqrt{-1}nt_Mf_i(u_iu_jv^{p-1})_j\nonumber\\
&=\underbrace{-2\sqrt{-1}nt_Mf_iu_{ij}u_jv^{p-1}}_{I}\underbrace{-2\sqrt{-1}nt_Mf_iu_iu_{jj}v^{p-1}}_{II}\underbrace{-2\sqrt{-1}nt_Mf_iu_iu_j(p-1)v^{p-2}v_j}_{III}.
\epsilonnd{align}
For $I$ above, using the Cauchy-Schwarz inequality and the bound on $|\nabla f|$ gives
\begin{align}
I=-2\int_Mf_iu_{ij}u_jv^{p-1}& \le 2\int_M\left(u_{ij}^2v^{p-1}+\frac{1}{4}f_i^2u_j^2v^{p-1}\right)\nonumber\\
&=2\int_Mu_{ij}^2v^{p-1}+\frac{1}{2}\int_Mf_i^2u_j^2v^{p-1}\nonumber\\
&\le 2\int_Mu_{ij}^2v^{p-1}+\frac{L^2}{2}\int_Mv^{p}.\nonumber
\end{align}
Next, noticing that $v\ge2L|\nabla u||u|$, we have
$$II=-2\int_Mf_iu_iu_{jj}v^{p-1}=2\lambda\int_Mf_iu_iuv^{p-1}\le2\lambda L \int_M|\nabla u||u|v^{p-1}\le\lambda \int_M v^p.$$
Finally, applying the Cauchy-Schwarz inequality again to $III$, we deduce
\begin{align}
III=-2\int_Mf_iu_iu_j(p-1)v^{p-2}v_j&\le2(p-1)L\int_M|\nabla u|^2|\nabla v|v^{p-2}\le2(p-1)L\int_M|\nabla v|v^{p-1}\nonumber\\
&\le 2(p-1)L\left(\frac{1}{4\varepsilon_1}\int_Mv^p+\varepsilon_1\int_M|\nabla v|^2v^{p-2}\right)\nonumber\\
&=\frac{(p-1)L}{2\varepsilon_1}\int_Mv^p+\frac{8(p-1)L\varepsilon_1}{p^2}\int_M|\nabla v^{\frac{p}{2}}|^2,\nonumber
\end{align}
where $\varepsilon_1> 0$ is any constant. Thus, combining the above estimates in \eqref{multiply v^p-1}, we arrive at
$$\left(\frac{4(p-1)}{p^2}-\frac{8(p-1)L\varepsilon_1}{p^2}\right)\int_M|\nabla v^{\frac{p}{2}}|^2\le\left(3\lambda+\frac{L^2}{2}+ 2K+\frac{(p-1)L}{2\varepsilon_1}\right)\int_Mv^p.$$
Assume for now that $L>0$. Then, choosing $\varepsilon_1=\frac{1}{4L}$ and noticing that $\frac{2(p-1)}{p^2}\ge\frac{1}{p}$ for $p\geq 2$, one gets
\begin{equation}\label{L>0}
\int_M|\nabla v^{\frac{p}{2}}|^2\le p^2(3\lambda+2L^2+ 2K)\int_Mv^p.
\end{equation}
If $L=0$, then $f$ is a constant, and from \eqref{multiply v^p-1} we conclude that
\[\int_M|\nabla v^{\frac{p}{2}}|^2\le \frac{1}{2}p^2(\lambda+K)\int_Mv^p,\]
which is better than \eqref{L>0}. Therefore, we always have
\begin{equation}\label{L}
\int_M|\nabla v^{\frac{p}{2}}|^2\le p^2(3\lambda+2L^2+ 2K)\int_Mv^p.
\end{equation}
Recall the Sobolev inequality \eqref{Sobolev1},
\begin{align} \label{22222}
\int_M |\nabla f|^2 \ge C_1\mathrm{Vol}(M)^{\frac{2}{m}}\left(\int_M|f|^{\frac{2m}{m-2}}\right)^{\frac{m-2}{m}}-C_2\int_M|f|^2
\end{align}
for all $f\in H^{1,2}(M)$, where $C_1=C_1(m)\tilde{C}^2$ and $C_2=C_1(m)\tilde{C}^2C_2(m)$.
Putting $f=v^{\frac{p}{2}}$ and using \eqref{L} yield
$$\left(\int_Mv^{\frac{pm}{m-2}}\right)^{\frac{m-2}{m}}\le p^2\left(\frac{3\lambda+2K+2L^2+C_2}{C_1\mathrm{Vol}(M)^{\frac{2}{m}}}\right)\int_Mv^p.$$
Denote $Q=\frac {3\lambda+2K+2L^2+C_2}{C_1\mathrm{Vol}(M)^{\frac{2}{m}}}$ for convenience. The inequality above means that
$$||v||_{\frac{pm}{m-2}}\le(p^2Q)^\frac{1}{p}||v||_p$$ for all $p\ge2$.
Setting $\beta=\frac{m}{m-2}$ and $p=2\beta^j$ for $j=0,\ 1,\ 2,\ \dots,$ it follows that
$$||v||_{2\beta^{j+1}}\le2^{\frac{1}{\beta^j}}\beta^{\frac{j}{\beta^j}}Q^{\frac{1}{2\beta^j}}||v||_{2\beta^j}.$$
Iterating this estimate, we conclude that
$$||v||_{2\beta^{j+1}}\le2^{\sum_{l=0}^{j}\frac{1}{\beta^l}}\beta^{\sum_{l=0}^{j}\frac{l}{\beta^l}}Q^{\sum_{l=0}^{j}\frac{1}{2\beta^l}}||v||_2.$$ Letting $j\to\infty$, we obtain
$$||v||_{\infty}\le2^{\frac{m}{2}}\left(\frac{m}{m-2}\right)^\frac{m(m-2)}{4}Q^{\frac{m}{4}}||v||_2.$$
Notice that $\int_M v^2\le||v||_{\infty}\int_M v$. Therefore, the above estimate reduces to
$$\max\limits_{M} v\le2^m\left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}Q^{\frac{m}{2}}\int_Mv.$$
This finishes the proof, since $$\int_M v=\int_M(|\nabla u|^2+L^2u^2)=(\lambda+L^2)\int_Mu^2.$$ \qed\\
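\begin{remark}
The exponents appearing after letting $j\to\infty$ come from standard geometric series; we record this elementary computation for completeness. With $\beta=\frac{m}{m-2}>1$,
$$\sum_{l=0}^{\infty}\frac{1}{\beta^l}=\frac{1}{1-\frac{m-2}{m}}=\frac{m}{2},\qquad \sum_{l=0}^{\infty}\frac{l}{\beta^l}=\frac{\frac{m-2}{m}}{\left(1-\frac{m-2}{m}\right)^2}=\frac{m(m-2)}{4},\qquad \sum_{l=0}^{\infty}\frac{1}{2\beta^l}=\frac{m}{4},$$
which are precisely the exponents of $2$, $\frac{m}{m-2}$ and $Q$ in the bound for $||v||_{\infty}$ above.
\end{remark}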
When $m=2$, by using the Sobolev inequality \eqref{Sobolev2} instead of \eqref{Sobolev1}, one can similarly obtain the following gradient estimate for $u$.
\begin{proposition}\label{prop gradient estimate eigenfunction m=2}
If $(M, g)$ is a Riemann surface, $u$ is an eigenfunction associated to the eigenvalue $\lambda$, and \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied, then
\[|\nabla u|^2 \le 2^8\left(\frac{3\lambda+2K+2L^2+S_2}{S_1}\right)^2(\lambda +L^2)\mathrm{Vol}(M)^{-1}\int_M u^2,\]
where $S_1=\tilde{S_1}\tilde{S}^2$ and $S_2=\tilde{S_1}\tilde{S}^2\tilde{S_2}$, with $\tilde{S_1}$, $\tilde{S_2}$, $\tilde{S}$ the constants in \eqref{Sobolev2}.
\end{proposition}
Next, we prove a similar gradient estimate for linear combinations of eigenfunctions.
\begin{proposition}\label{prop gradient estimate combination}
Let $(M^m, g_{ij})$ be an $m$-dimensional closed Riemannian manifold satisfying \eqref{basic assumption1} and \eqref{basic assumption2}. Let $\phi_j$ be a normalized eigenfunction associated to $\lambda_j$, $j=1,\ 2,\ \dots,\ k$, i.e., $\Delta \phi_j=-\lambda_j \phi_j$ and $\int_M |\phi_j|^2 dV=1$. Then for any sequence of real numbers $b_j,\ j=1,\ 2,\ \dots,\ k,$ with $\sum_{j=1}^{k}b_j^2 \le 1$, the linear combination $w=\sum_{j=1}^{k}b_j\phi_j$ satisfies, for $m\geq 3$,
\begin{equation}\label{gradient estimate combination}
|\nabla w|^2 +L^2w^2 \le 2^m \left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{6\lambda_k+2K+2L^2+C_2}{C_1}\right)^{\frac{m}{2}}(\lambda_k+L^2)\mathrm{Vol}(M)^{-1},
\end{equation}
and for $m=2$,
\begin{equation} \label{gradient estimate combination m=2}
|\nabla w|^2+L^2w^2 \le 2^8\left(\frac{6\lambda_k+2K+2L^2+S_2}{S_1}\right)^2(\lambda_k+L^2)\mathrm{Vol}(M)^{-1},
\end{equation}
where $C_1,\ C_2,\ S_1,\ S_2$ are the constants in Propositions \ref{prop gradient estimate eigenfunction} and \ref{prop gradient estimate eigenfunction m=2}.
\end{proposition}
\proof Here, we only present the proof of \eqref{gradient estimate combination}. The proof of \eqref{gradient estimate combination m=2} is similar, using \eqref{Sobolev2} instead of \eqref{Sobolev1}. First of all, since $\lambda_k>0$, we can write
$$\Delta w=-\sum_{j=1}^{k}\lambda_jb_j\phi_j=-\lambda_k \eta,$$ where $\displaystyle \eta=\sum_{j=1}^{k}\frac{\lambda_j}{\lambda_k}b_j\phi_j.$\\
Let $v=|\nabla w|^2+L^2w^2$. Then
\begin{align}
\Delta v &=2|Hess\,w|^2+2\langle\nabla \Delta w,\nabla w\rangle+2Ric(\nabla w,\nabla w)+2L^2w\Delta w +2L^2|\nabla w|^2\nonumber\\
&\ge 2w_{ij}^2-2\lambda_k\eta_iw_i-2K|\nabla w|^2-2f_{ij}w_iw_j-2L^2\lambda_k\eta w\nonumber\\
&\ge 2w_{ij}^2-2\lambda_k\eta_iw_i-2Kv-2f_{ij}w_iw_j-2L^2\lambda_k\eta w.\nonumber
\end{align}
Multiplying both sides by $v^{p-1}$, $p \ge 2$, and integrating over $M$ give
\begin{equation} \label{222}
\begin{split}
\frac{4(p-1)}{p^2}\int_M|\nabla v^{\frac{p}{2}}|^2 &\le -2\int_M w_{ij}^2v^{p-1}+2\lambda_k\int_M \eta_iw_iv^{p-1}\\
&+2K\int_Mv^p+2\int_M f_{ij}w_iw_jv^{p-1} +2\lambda_k L^2\int_M \eta wv^{p-1}.
\end{split}
\end{equation}
Using H\"older inequality yields
\betaegin{equation}gin{equation}\labelel{combination1}
2\lambda_k\sqrt{-1}nt_M \epsilonta_iw_iv^{p-1} \le 2\lambda_k \sqrt{-1}nt_M |\nabla \epsilonta|v^{p-\frac{1}{2}} \le 2\lambda_k \left( \sqrt{-1}nt_M v^p \right)^{\frac{p-\frac{1}{2}}{p}} \left( \sqrt{-1}nt_M |\nabla\epsilonta|^{2p}\right)^{\frac{1}{2p}}.
\epsilonnd{equation}
Notice that the coefficients in $\nabla \epsilonta$ satisfy $\sum_{j=1}^{k}(\frac{\lambda_j}{\lambda_k}b_j)^2 \le \sum_{j=1}^{k}b_j^2 \le 1$ and $\sqrt{-1}nt_M v^p \bare \sqrt{-1}nt_M |\nabla w|^{2p}$. Thus,
\betaegin{equation}\labelel{combination2}
\sqrt{-1}nt_M |\nabla \epsilonta|^{2p} \le \max \limits_{b_1,\nablaots,b_k} \sqrt{-1}nt_M v^p.
\epsilonnd{equation}
By combining \epsilonqref{combination1} and \epsilonqref{combination2}, we obtain
\betaegin{equation}\labelel{max1}
2\lambda_k\sqrt{-1}nt_M \epsilonta_iw_iv^{p-1} \le 2\lambda_k \max \limits_{b_1,\nablaots,b_k} \sqrt{-1}nt_M v^p.
\epsilonnd{equation}
Here and in the rest of the proof, the maximum is taken for all real numbers $b_1,\cdots,b_k$ such that $\sum_{j=1}^k b_j^2\leq1$.
Similarly, for the last term of \epsilonqref{222}, we have
\betaegin{equation}
\alphaligned
2\lambda_kL^2 \sqrt{-1}nt_M \epsilonta wv^{p-1} \le& 2 \lambda_kL\sqrt{-1}nt_M |\epsilonta|v^{p-\frac{1}{2}} \le 2\lambda_kL \left( \sqrt{-1}nt_M v^p \right)^{\frac{p-\frac{1}{2}}{p}} \left( \sqrt{-1}nt_M |\epsilonta|^{2p}\right)^{\frac{1}{2p}}\\
\leq &2\lambda_k \max \limits_{b_1,\nablaots,b_k} \sqrt{-1}nt_M v^p
\epsilonndaligned
\epsilonnd{equation}
Finally, we need to deal with the fourth term on the right hand side of \eqref{222}. Integration by parts gives
\begin{equation}\label{22}
2\int_M f_{ij}w_iw_jv^{p-1}=\underbrace{-2\int_M f_{i}w_{ij}w_jv^{p-1}}_{I}\underbrace{-2\int_M f_{i}w_iw_{jj}v^{p-1}}_{II}\underbrace{-2\int_M f_{i}w_iw_j(p-1)v^{p-2}v_j}_{III}.
\end{equation}
Using the Cauchy-Schwarz inequality and the bound on $|\nabla f|$, we have
\[I=-2\int_M f_{i}w_{ij}w_jv^{p-1}\le 2\int_M w^2_{ij}v^{p-1}+\frac{L^2}{2}\int_M v^p \le 2\int_M w^2_{ij}v^{p-1} +\frac{L^2}{2} \max \limits_{b_1,\dots,b_k} \int_M v^p,\]
\[II=-2\int_M f_{i}w_iw_{jj}v^{p-1}\le 2\lambda_k \int_M |\nabla f| |\nabla w| |\eta| v^{p-1} \le 2\lambda_k L \int_M |\eta|v^{p-\frac{1}{2}} \le 2\lambda_k\max \limits_{b_1,\dots,b_k} \int_M v^p,\]
and
\begin{align}
III=-2\int_M f_{i}w_iw_j(p-1)v^{p-2}v_j &\le 2(p-1)L \int_M |\nabla w |^2v^{p-2}|\nabla v| \le 2(p-1)L \int_M v^{p-1}|\nabla v|\nonumber\\
& \le 2(p-1)L \left(\frac{1}{4\varepsilon_2}\int_M v^p +\varepsilon_2 \int_M v^{p-2}|\nabla v|^2 \right)\nonumber\\
&\le \frac{(p-1)L}{2\varepsilon_2}\max_{b_1,\dots,b_k}\int_M v^p +\frac{8(p-1)L\varepsilon_2}{p^2} \int_M |\nabla v^{\frac{p}{2}}|^2, \nonumber
\end{align}
where $\varepsilon_2 >0$ is an arbitrary constant. Hence, plugging the estimates above into \eqref{222} asserts that
\[ \left(\frac{4(p-1)}{p^2}-\frac{8(p-1)L\varepsilon_2}{p^2} \right ) \int_M |\nabla v^{\frac{p}{2}}|^2 \le \left(6\lambda_k +\frac{L^2}{2}+2K+\frac{(p-1)L}{2\varepsilon_2}\right) \max \limits_{b_1,\dots,b_k} \int_M v^p .\]
Choosing $\varepsilon_2=\frac{1}{4L}$, it follows that
\begin{equation}\label{max2}
\max_{b_1,\dots,b_k}\int_M|\nabla v^{\frac{p}{2}}|^2 \le p^2 \left(6\lambda_k+2K+2L^2\right) \max \limits_{b_1,\dots,b_k} \int_M v^p.
\end{equation}
Again, by \eqref{max2} and the Sobolev inequality \eqref{Sobolev1}, we have
\begin{equation}
\max \limits_{b_1,\dots,b_k} \left(\int_M v^{\frac{pm}{m-2}} \right)^{\frac{m-2}{m}} \le p^2
\left(\frac{6\lambda_k+2K+2L^2+C_2}{C_1\mathrm{Vol}(M)^{\frac{2}{m}}}\right) \max \limits_{b_1,\dots,b_k} \left(\int_M v^p \right).
\end{equation}
Denoting $Q=\frac{6\lambda_k+2K+2L^2+C_2}{C_1\mathrm{Vol}(M)^{\frac{2}{m}}}$ and using Moser iteration as in Proposition \ref{prop gradient estimate eigenfunction}, it follows that
\[\max \limits_{b_1,\dots,b_k} ||v||_{\infty} \le 2^{\frac{m}{2}} \left(\frac{m}{m-2} \right)^{\frac{m(m-2)}{4}}Q^{\frac{m}{4}} \max \limits_{b_1,\dots,b_k} ||v||_2 .\]
Squaring both sides above and noticing that
\[ \max \limits_{b_1,\dots,b_k} \int_M v^2 \le \max \limits_{b_1,\dots,b_k} ||v||_{\infty} \max \limits_{b_1,\dots,b_k}\int_M v, \]
we get
\begin{equation} \label{3}
\max \limits_{b_1,\dots,b_k} ||v||_{\infty} \le 2^m \left(\frac{m}{m-2} \right)^{\frac{m(m-2)}{2}}Q^{\frac{m}{2}} \max \limits_{b_1,\dots,b_k}\int_M v .
\end{equation}
On the other hand, since $\phi_1,\ \phi_2,\ \dots,\ \phi_k$ are orthonormal, we have
\[
\begin{split}
\int_M v &=\int_M (|\nabla w|^2+L^2w^2)=-\int_M w\Delta w+L^2\int_M w^2 \\ &=\int_M\Big(\sum_{j=1}^{k}b_j\phi_j\Big)\Big(\sum_{i=1}^{k} \lambda_i b_i \phi_i\Big)+L^2\int_M\Big(\sum_{j=1}^{k}b_j\phi_j\Big)^2 \\
&=\sum_{j=1}^{k} \lambda_j b_j^2+L^2\sum_{j=1}^{k}b_j^2 \le (\lambda_k+L^2)\sum_{j=1}^{k}b_j^2 \le \lambda_k+L^2.
\end{split}
\]
This, together with \eqref{3}, completes the proof.\qed\\
The above gradient estimate for linear combinations of eigenfunctions allows us to derive the following arithmetic inequality for the eigenvalues.
\begin{lemma}\label{lem combination eigenvalue}
Under the same assumptions and notations as in Proposition \ref{prop gradient estimate combination}, we have, for $m\geq 3$,
\begin{equation}\label{combination eigenvalue}
\lambda_1+\lambda_2+\dots+\lambda_k\le
m 2^m \left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{6\lambda_k+2K+2L^2+C_2}{C_1}\right)^{\frac{m}{2}}(\lambda_k+L^2),
\end{equation}
and for $m=2$,
\begin{equation}\label{combination eigenvalue m=2}
\lambda_1+\lambda_2+\dots+\lambda_k \le 2^9\left(\frac{6\lambda_k+2K+2L^2+S_2}{S_1}\right)^2(\lambda_k+L^2).
\end{equation}
\end{lemma}
\proof We only prove \eqref{combination eigenvalue} by using \eqref{gradient estimate combination}. The proof of \eqref{combination eigenvalue m=2} follows similarly from \eqref{gradient estimate combination m=2}.
If $k\le m$, the conclusion follows immediately from Proposition \ref{prop gradient estimate eigenfunction} by integrating both sides of \eqref{gradient estimate eigenfunction} for each $\phi_j$, $j=1,2,\cdots,k$.
When $k> m$, for each $x\in M$, we can find an orthogonal matrix $(a_{ij})_{k\times k}$ such that
$$\varphi_i=\sum_{j=1}^{k}a_{ij}\phi_j,\quad i=1,\ 2,\ \dots,\ k,$$ satisfy
$$\nabla_l\varphi_i(x)=0,\quad l=1,\ 2,\ \dots,\ m,\quad m+1\le i \le k. $$
Indeed, since the rank of the matrix
\begin{equation}
J=\begin{pmatrix}
\nabla_1\phi_1&\dots&\nabla_1\phi_k\\
\vdots& & \vdots\\
\nabla_m\phi_1&\dots&\nabla_m\phi_k
\end{pmatrix}
\end{equation}
is no more than $m$, there are $k-m$ linearly independent solutions of $J\vec{x}=\vec{0}$, and Gram-Schmidt orthogonalization then gives $(a_{ij})$.
Thus, we derive from Proposition \ref{prop gradient estimate combination} that, at $x$,
$$|\nabla \phi_1|^2+\dots+|\nabla \phi_k|^2=|\nabla \varphi_1|^2+\dots+|\nabla \varphi_k|^2=|\nabla \varphi_1|^2+\dots+|\nabla \varphi_m|^2 $$
$$\le m 2^m \left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{6\lambda_k+2K+2L^2+C_2}{C_1}\right)^{\frac{m}{2}}(\lambda_k+L^2)\mathrm{Vol}(M)^{-1}.$$
Integrating both sides over $M$ then gives Lemma \ref{lem combination eigenvalue}. \qed\\
\begin{remark}
Notice that the above lemma cannot be deduced directly from Propositions \ref{prop gradient estimate eigenfunction} and \ref{prop gradient estimate eigenfunction m=2}, since doing so would enlarge the coefficient $m$ on the right hand side of \eqref{combination eigenvalue} and \eqref{combination eigenvalue m=2} to $k$.
\end{remark}
From \eqref{combination eigenvalue} and \eqref{combination eigenvalue m=2}, in order to get a lower bound for $\lambda_k$, we only need the following lemma.
\begin{lemma}[\cite{WZ}]\label{lem WZ}
For $0\le\lambda_1\le\lambda_2\le\dots\le\lambda_k\le\dots$, if the inequality
\begin{equation}
\lambda_1+\lambda_2+\dots+\lambda_k\le C_3\lambda_k^{\frac{m}{2}+1}
\end{equation}
holds for any $k\ge 1$, then one has
\begin{equation}
\lambda_k\ge C_4k^{\frac{2}{m}},
\end{equation}
where
$$C_4=\min\left\{\lambda_1,\ \left(\frac{m}{C_3(m+2)}\right)^{\frac{2}{m}}\right\},$$
and $m\ge 1$ is an integer.
\end{lemma}
Now we can see that a lower bound for $\lambda_k$ follows immediately from Theorem \ref{thm 1}, Lemma \ref{lem combination eigenvalue} and Lemma \ref{lem WZ}.
\begin{theorem}\label{thm n}
Assume that $(M^m,g)$ is an $m$-dimensional closed Riemannian manifold such that \eqref{basic assumption1} and \eqref{basic assumption2} are satisfied. Let $c_0$ be the lower bound of $\lambda_1$ in Theorem \ref{thm 1}. Then\\
(1) for $m\geq 3$,
\begin{equation}\label{lambda k lower bound}
\lambda_k\ge c_1k^{\frac{2}{m}},\ \forall k\geq 2,
\end{equation}
where $c_1=\min\left\{c_0,\ \left(\frac{m}{C_5(m+2)}\right)^{\frac{2}{m}}\right\},$ and \\ $C_5=m2^m\left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}c_0^{-(\frac{m}{2}+1)}\left(\frac{6c_0+2K+2L^2+C_2}{C_1}\right)^\frac{m}{2}(c_0+L^2);$\\
(2) for $m=2$,
\begin{equation}\label{lambda k lower bound m=2}
\lambda_k \ge c_2k^{\frac{1}{2}},\ \forall k\geq 2,
\end{equation}
where $c_2=\min\left\{c_0,\ \left(\frac{2}{3C_6}\right)^{\frac{1}{2}}\right\}$, and $C_6=2^9c_0^{-3}\left(\frac{6c_0+2K+2L^2+S_2}{S_1}\right)^2\left(c_0+L^2\right). $
\end{theorem}
\proof To prove \eqref{lambda k lower bound}, from Lemma \ref{lem combination eigenvalue} we have
$$\lambda_1+\lambda_2+\dots+\lambda_k\le \lambda_k^{\frac{m}{2}+1}m 2^m \left(\frac{m}{m-2}\right)^\frac{m(m-2)}{2}\left(\frac{6+\frac{2K+2L^2+C_2} {\lambda_k}}{C_1}\right)^{\frac{m}{2}}\left(1+\frac{L^2}{\lambda_k}\right).$$
Since $\lambda_k\geq \lambda_1\geq c_0$, it follows that
\begin{equation}
\lambda_1+\lambda_2+\dots+\lambda_k\le C_5\lambda_k^{\frac{m}{2}+1}.
\end{equation}
The conclusion then follows from Lemma \ref{lem WZ}.
The proof of \eqref{lambda k lower bound m=2} is similar. \qed
\begin{remark}
Recall that the constants $C_1$, $C_2$, $S_1$, and $S_2$ have explicit expressions according to Corollary \ref{Sobolev inequality} and Remark \ref{constants}. Thus, the lower bound of $\lambda_k$ in the theorem above can also be expressed explicitly.
\end{remark}
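\begin{remark}
As a standard point of comparison (not used elsewhere in this paper), Weyl's asymptotic law for a closed $m$-dimensional Riemannian manifold states that
$$\lambda_k\sim \frac{4\pi^2}{\left(\omega_m\mathrm{Vol}(M)\right)^{\frac{2}{m}}}\,k^{\frac{2}{m}}\qquad\text{as } k\to\infty,$$
where $\omega_m$ denotes the volume of the unit ball in $\mathbb{R}^m$. Thus the growth rate $k^{\frac{2}{m}}$ in \eqref{lambda k lower bound} is sharp, and only the constant $c_1$ depends on the assumptions \eqref{basic assumption1} and \eqref{basic assumption2}.
\end{remark}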
\end{document}
\begin{document}
\begin{abstract}
We study a free transmission problem in which the solution minimizes a functional with different definitions in the positive and negative phases. We prove some asymptotic regularity results when the jumps of the diffusion coefficients become smaller along the free boundary. Finally, we prove a measure-theoretic result related to the free boundary.
\end{abstract}
\title{A non-isotropic free transmission problem governed by quasi-linear operators}
\textbf{Keywords:} variational calculus, transmission problems, free boundary, finite perimeter.
\textbf{2010 Mathematics Subject Classification:} 49J05, 35B65, 35Q92, 35Q35
\tableofcontents
\section{Introduction}\label{introduction}
In this article we intend to study regularity issues related to transmission problems. In various applied sciences, many phenomena are modelled by transmission problems, also known as phase transition problems. These kinds of models naturally appear when we study the diffusion of a quantity through different media, for example when modelling a composite material with different diffusion properties: a combination of ice and water, a mixture of chemicals, a tumour in some tissue, or heat conduction through different regions.
Very broadly speaking, the variational formulation of a transmission problem has the following form
$$
\int_{\Omega}a_+(x,v,\nabla v)\chi_{\{v>0\}}+a_-(x,v,\nabla v)\chi_{\{v\le 0\}}\,dx \;\to \,\min
$$
for an appropriate domain $\Omega\subset \mathbb{R}^N$, where $a_+$ and $a_-$ determine the diffusion in the positive and negative phases, and the candidates $v$ belong to an appropriate function space. The ice-water example is the most relatable, because the (solid) ice part corresponds to the negative phase and the (liquid) water part corresponds to the positive phase.
The mathematical analysis of transmission problems involves discontinuous coefficients, due to the difference in the properties of the different media. Let us focus on the stationary state of the ice-water combination and study the diffusion of heat, described by the temperature $T:\Omega\to \mathbb{R}$, $\Omega$ being the domain under study. We can say that in ice the diffusion is determined by an operator corresponding to the solid state of water, that is
$$
-\dive(a_-(x)\nabla T)=0\qquad\mbox {in ice, $\{T<0\}$}
$$
and in water the diffusion is determined by an operator corresponding to the liquid state
$$
-\dive(a_+(x)\nabla T)=0\qquad\mbox{in water, $\{T\ge0\}$}.
$$
Combining the above two PDEs, we can write
$$
-\dive(a(x)\nabla T)=0\qquad\mbox{in $\Omega$}
$$
with $a(x,T)=a_+(x)\chi_{\{T>0\}}+a_-(x)\chi_{\{T\le0\}}$, where $a$ is a discontinuous function along the free boundary of $T$. An important point to notice is that the diffusion tends to compensate the transition of phases, which is reflected in the free boundary condition. Supposing that the free boundary is sufficiently regular, the condition can formally be written as follows:
$$
\mathcal {G}(\partial _{\nu_+}T,\partial _{\nu_-}T)=0\qquad\mbox{on $\partial \{T>0\}$}
$$
for some function $\mathcal G$. The PDEs and the free boundary condition mentioned above can be posed in the following variational setup, which was studied in \cite{TA15}:
\begin{equation}\label{TA15}
\int_{\Omega}\langle A(x,v)\nabla v ,\nabla v \rangle-f(x,v)v+\gamma(x,v)\,dx
\end{equation}
with
\[
\begin{split}
A(x,v)=A_+(x)\chi_{\{v>0\}}+A_-(x)\chi_{\{v\le 0\}}\\
f(x,v)=f_+(x)\chi_{\{v>0\}}+f_-(x)\chi_{\{v\le 0\}}\\
\gamma(x,v)=\gamma_+(x)\chi_{\{v>0\}}+\gamma_-(x)\chi_{\{v\le 0\}}
\end{split}
\]
and matrices $A_{\pm}$ satisfying the ellipticity condition for any $\xi\in \mathbb{R}^N$
$$
\lambda |\xi|^2 \le \langle A_{\pm}\xi,\xi \rangle \le \Lambda |\xi|^2,
$$
$f_{\pm}\in L^N(\Omega)$, $\gamma_{\pm}\in C(\overline{\Omega})$.
One important point to note is that the functionals involved in phase transition problems are not convex; hence the existence result does not follow from classical methods. The approach involves tools from measure theory as well as from the calculus of variations (see \cite{LTQ15}, \cite{JDS18}, \cite{TA15}).
We remark that the addition of the last term $\gamma(x,u)$ (commonly called the compensation term) penalizes the change of phases, which in turn imposes some regularity on the free boundary. The role of this term is very evident in Section \ref{section5}, where we prove that it forces the free boundary to be a rectifiable set of finite $\mathcal H^{N-1}$ measure. The technique used to show the rectifiability of the free boundary is adapted from \cite{db12}; see also \cite{ButHar18} for an application of the same technique in shape optimization. We expect the free boundary to be even more regular, and the compensation term should play an important role in this.
In \cite{TA15} the authors have shown that as the discontinuity of the diffusion coefficients gets smaller, the solution $u$ of \eqref{TA15} becomes more regular, tending to Lipschitz regularity. We can imagine this as studying the behaviour of diffusion when the material becomes more and more homogenized with time.
In this article, we consider a functional corresponding to a quasi-linear operator in the respective phases (see \eqref{P}). The problem of phase transitions can be seen as a generalization of the free boundary problems studied by Alt, Caffarelli and Friedman (see \cite{altcaf81} for the one phase problem and \cite{ACF84} for the two phase model). One can see the results in \cite{ACF84} as a particular case of the functional \eqref{TA15} with $A_+=A_-=Id$ and $f_+=f_-=0$.
In fact, we can see the variational problem dealt with in this article as a combination of problems which fall into two broad categories: Bernoulli type free boundary problems (with the source term $f=0$; see \cite{altcaf81}, \cite{ACF84} for the linear case, and \cite{dp05} for a non-linear one phase problem) and obstacle type problems (with the compensation term $\gamma=0$; see \cite{FKR17}, \cite{FGS17}). Very roughly speaking, the minimizers of Bernoulli type functionals are less regular (at most Lipschitz continuous), while solutions of an obstacle problem can carry up to $C^{1,1}$ regularity. Since we are studying a mixture of both of the above mentioned problems, it is reasonable to think that one should not expect a minimizer to be more than Lipschitz continuous. The observations made in \cite{TA15}, \cite{LTQ15} indicate the same. One can refer to \cite{KLS17}, where an Alt-Caffarelli-Friedman monotonicity formula with two different operators is established and used to show Lipschitz continuity for solutions of PDEs with a jump discontinuity in the operator.
Another prominent work related to free transmission problems can be found in \cite{JDS18}, which deals with functionals of the form \eqref{JDS18}, whose solutions satisfy PDEs with different non-linearities in different phases:
\begin{equation}\label{JDS18}
\int_{\Omega\cap \{u>0\}}|\nabla u|^p-f_+(x)u+\gamma_+(x)\,dx+\int_{\Omega\cap \{u\le 0\}}|\nabla u|^q-f_-(x)u+\gamma_-(x)\,dx.
\end{equation}
Also refer to \cite{LTQ15}, where the above functional with $p=q$ is studied, proving that solutions are locally log-Lipschitz continuous in the domain. The functional \eqref{JDS18} is also studied in \cite{AF94}, assuming that the free boundary is a fixed surface with Lipschitz regularity.
We now present the mathematical setup we will be working with in this article. $\Omega \subset \mathbb{R}^N$ is open, smooth and bounded, $N\ge 3$, and $\mathcal{A}_+, \mathcal{A}_- \in L^{\infty} (\Omega)$ satisfy the following boundedness condition
\begin{equation}\label{ellipticity}
\lambda \le \mathcal{A}_{\pm}(x) \le \Lambda
\end{equation}
for almost every $x\in \Omega$, where $0<\lambda\le\Lambda<\infty$ are fixed constants. The functions $\gamma_{\pm}$ are continuous and integrable real valued functions on $\Omega$, $p\in [2,N)$ is fixed, and $f_{\pm}\in L^q(\Omega)$ for $q>\frac{N}{p}$.
We consider a functional of the form
\begin{equation}\tag{$\mathcal P$}\label{P}
\mathcal{F}_{\mathcal{A},f,\gamma}(v;\Omega )=\int_{\Omega} \mathcal{A} (x,v) |\nabla v|^{p} -f(x,v)v+\gamma(x,v)\,dx
\end{equation}
where the integrand is defined as
\[
\begin{split}
\mathcal{A} (x,s)&= \mathcal{A}_+(x)\chi_{\{s>0\}}+\mathcal{A}_-(x)\chi_{\{s\le 0\}}\\
f(x,s)&= f_+(x)\chi_{\{s>0\}}+f_-(x)\chi_{\{s\le 0\}}\\
\gamma(x,s)&= \gamma_+(x)\chi_{\{s>0\}}+\gamma_-(x)\chi_{\{s\le 0\}}.
\end{split}
\]
For candidate functions in the search for a minimizer, we consider the following Sobolev space with fixed boundary data $\phi\in W^{1,p}(\Omega)$:
$$
W^{1,p}_{\phi}(\Omega)=\left \{ v\in W^{1,p}(\Omega) \,\middle|\, v-\phi\in W_0^{1,p}(\Omega) \right \}.
$$
In the absence of any ambiguity, we will denote $\mathcal{F}_{\mathcal{A},f,\gamma}$ simply by $\mathcal{F}$, and we will mention only the subscripts which carry a risk of being ambiguous. Note that any quantity depending solely on $\mathcal{A},f,\gamma,\phi,\Omega$ will be referred to as a quantity depending on the data of the problem.
As mentioned earlier, the functional $\mathcal{F}$ in \eqref{P} is not convex in $W^{1,p}(\Omega)$; we will prove that $\mathcal{F}$ is lower semi-continuous with respect to $v$ in the $W^{1,p}(\Omega)$ topology via techniques from measure theory (see Theorem \ref{existence}). In Theorem \ref{holder}, we use results from the theory developed by Giaquinta and Giusti (see \cite{gigi84}, \cite{eg05}), related to quasi-minima of a functional, and conclude local boundedness and the existence of a universal modulus of continuity for all minimizers of $\mathcal{F}$. The method used in Theorem \ref{holder} differs from the one used in \cite{TA15}; we believe that the Giaquinta-Giusti arguments used in this article can be applied to more general classes of transmission problems.
The universal modulus of continuity plays a decisive role in the compactness arguments made while studying the asymptotic regularity of solutions in Sections \ref{section3} and \ref{section4}.
In Section \ref{section3} we will show that if $\mathcal{A}_{+}=\mathcal{A}_-=\mathcal{A}\in C(\Omega)$ and $f_{\pm}\in L^{N}(\Omega)$, then $u\in C^{0,1^-}_{loc}(\Omega)$, using the method of tangential analysis. The main idea is to study the regularity of a minimizer of $\mathcal{F}$ with coefficients $\mathcal{A}_{\pm}$ such that $\mathcal{F}$ is close to a given tangential free boundary problem. The arguments in Sections \ref{section3} and \ref{section4} can also be posed in terms of $\Gamma$-convergence (see \cite{dg68}, \cite{ab02}, \cite{eg05} for a comprehensive introduction to the subject), but we have refrained from using this term in the proofs. In Section \ref{section4}, we use arguments analogous to those in Section \ref{section3} to show that as the jumps between $\mathcal{A}_{\pm}\in C(\Omega)$ get smaller, solutions tend to be more regular, asymptotically approaching Lipschitz regularity.
In the last section, Section \ref{section5}, we prove that the free boundary $\partial ^*\{u>0\}$ of a minimizer $u$ of $\mathcal{F}$ in \eqref{P} is always a set of finite perimeter under the assumption of a homogeneous Dirichlet boundary condition, i.e. $\phi=0$, and $(\gamma_+-\gamma_-)>c>0$. We can also prove a similar result for a general boundary condition, but in order to avoid tedious calculations which would digress from the main idea of the proof, we have chosen to provide the proof only for the Dirichlet boundary condition; the key steps for the general case are mentioned in Remark \ref{general boundary}.
We also remark that the assumption on the ordering of $\gamma_{\pm}$, i.e. $\gamma_+>\gamma_-$, can be dropped and replaced with $\gamma(x,s)>0$ for $s\neq 0$ and $\gamma(x,0)=0$. In this case we can prove the rectifiability (finite perimeter) of the larger set $\partial ^*\{|u|>0\}$.
The proof in the last section involves techniques from geometric measure theory; we refer the reader to \cite{fmgmt}, \cite{HF69}, \cite{AFP00}, \cite{PK08}, \cite{GE15} for definitions and preliminary results used to prove Theorem \ref{finite perimeter}.
\section{Existence and minimal H\"older regularity}
We combine methods from the calculus of variations and measure theory to show the existence of a minimizer; note that the functional is not convex (see the discussion in \cite{TA15} for a counterexample). An approach similar to ours can be found in \cite{LTQ15}.
\begin{theo}\label{existence}
Given a boundary datum $\phi\in W^{1,p}(\Omega)$, there exists a minimizer $u \in W^{1,p}_{\phi}(\Omega)$ of $\mathcal{F}$ in the problem \eqref{P}.
\end{theo}
\begin{proof}
Since $f\in L^q(\Omega)$, $q>\frac{N}{p}$, and $\mathcal{A}_{\pm}$ satisfy the boundedness condition \eqref{ellipticity}, we have
\begin{equation}\label{1.1}
\mathcal{F}(v)\ge \int_{\Omega}\lambda|\nabla v|^p-f(x,v)v+\gamma(x,v)\,dx
\end{equation}
Since $p<N$ and $f\in L^q(\Omega)$ for $q>\frac{N}{p}>{p^*}'$, the H\"older and Poincar\'e inequalities give
\begin{equation}\label{1.2}
\int_{\Omega}f(x,v)v\,dx\le C(N)\|f\|_{L^{{p^*}'}(\Omega)}\|v\|_{L^{p^*}(\Omega)}\le C(N,q) \|f\|_{L^q(\Omega)}(\|\nabla v\|_{L^p(\Omega)} +C(\phi))
\end{equation}
Since $\gamma_{\pm}(x)$ are integrable in $\Omega$, the last term $\int_{\Omega}\gamma(x,v)\,dx$ is bounded. Combining this fact with \eqref{1.1} and \eqref{1.2}, we have
\begin{equation}\label{1.3}
\mathcal{F}(v)\ge \lambda \|\nabla v\|_{L^p(\Omega)}^p -C(N,q)\|f\|_{L^q}\left (\|\nabla v\|_{L^p(\Omega)}+C(\phi) \right )+C(\gamma)>-\infty
\end{equation}
for all $v\in W^{1,p}_{\phi}(\Omega)$. Thus the functional $\mathcal{F}$ is bounded from below on $W^{1,p}_{\phi}(\Omega)$, so its infimum is finite. Let $\{u_n\}$ be a minimizing sequence in $W^{1,p}_{\phi}(\Omega)$; by standard arguments (apply the Poincar\'e inequality to $\{u_n-\phi\}$, $u_n\in W^{1,p}_{\phi}(\Omega)$), we can show that
$$
\sup_{n\in \mathbb{N}}\mathcal{F}(u_n)<\infty \Rightarrow \sup_{n\in \mathbb{N}}\|u_n\|_{W^{1,p}(\Omega)}<\infty.
$$
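In more detail, this implication is a routine coercivity computation (with generic constants $C,C',C'',C_P$): \eqref{1.3} gives
\[
\lambda \|\nabla u_n\|_{L^p(\Omega)}^p - C\|\nabla u_n\|_{L^p(\Omega)} - C' \le \mathcal{F}(u_n)\le C'',
\]
and since $p>1$ the left-hand side grows superlinearly, so $\sup_n \|\nabla u_n\|_{L^p(\Omega)}<\infty$. Then, applying the Poincar\'e inequality to $u_n-\phi \in W^{1,p}_0(\Omega)$,
\[
\|u_n\|_{L^p(\Omega)}\le \|u_n-\phi\|_{L^p(\Omega)}+\|\phi\|_{L^p(\Omega)}\le C_P\big(\|\nabla u_n\|_{L^p(\Omega)}+\|\nabla \phi\|_{L^p(\Omega)}\big)+\|\phi\|_{L^p(\Omega)},
\]
which is uniformly bounded in $n$.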
Hence $\{u_n\}$ is bounded in the $W^{1,p}(\Omega)$ norm; by reflexivity of $W^{1,p}(\Omega)$, the sequence $\{u_n\}$ has a weak limit up to a subsequence.
Since $\Omega$ is bounded, by the Rellich theorem (see \cite{evans}, \cite{HB10}), $W^{1,p}(\Omega)$ embeds compactly into $L^p(\Omega)$. Therefore, there exists a function $u_0\in W_{\phi}^{1,p}(\Omega)$ such that $\nabla u_n \rightharpoonup \nabla u_0$ in $L^p(\Omega)$ and $u_n\to u_0$ in $L^p(\Omega)$ up to a subsequence. Moreover, $u_n\to u_0$ pointwise almost everywhere in $\Omega$ up to another subsequence. By Egorov's theorem (\cite{GBF99}, \cite{WR87}), given ${\varepsilon}>0$ there exists a set $\Omega_{{\varepsilon}}\subset \Omega$ such that $|\Omega\setminus \Omega_{{\varepsilon}}|<{\varepsilon}$ and $u_n \to u_0$ uniformly in $\Omega_{{\varepsilon}}$. Fix $\delta >0$; we see that
\begin{equation}
\begin{split}\label{lsc1}
\int_{\Omega_{{\varepsilon}}\cap \{u_0>\delta\}}\mathcal{A}(x,u_0) |\nabla u_0|^{p}\,dx&=\int_{\Omega_{{\varepsilon}}\cap \{u_0>\delta\}}\mathcal{A}_{+}(x)|\nabla u_0|^p\,dx\\
&\le \liminf_{n\to \infty}\int_{\Omega_{{\varepsilon}}\cap \{u_0>\delta\}}\mathcal{A}_+(x) |\nabla u_n|^{p}\,dx\\
&\le\liminf_{n\to \infty} \int_{\Omega_{{\varepsilon}}\cap \{u_n>\frac{\delta}{2}\}} \mathcal{A}_+(x) |\nabla u_n|^{p}\,dx\\
&\le \liminf_{n\to \infty}\int_{\Omega\cap \{u_n>0\}} \mathcal{A}_+(x) |\nabla u_n|^{p} \,dx\\
&= \liminf_{n\to \infty} \int_{\Omega\cap \{u_n>0\}} \mathcal{A}(x,u_n) |\nabla u_n|^{p} \,dx
\end{split}
\end{equation}
and from \eqref{ellipticity} we can write
\begin{equation}\label{1.5}
\int_{\Omega\setminus \Omega_{{\varepsilon}}} \lambda |\nabla u_0|^p\,dx \le \int_{\Omega\setminus \Omega_{{\varepsilon}}} \mathcal{A}(x,u_0) |\nabla u_0|^{p}\le \int_{\Omega\setminus \Omega_{{\varepsilon}}} \Lambda |\nabla u_0|^p \,dx\to 0 \mbox{ as ${\varepsilon}\to 0$}.
\end{equation}
By letting $\delta\to 0$ and ${\varepsilon}\to 0$, combine \eqref{lsc1} and \eqref{1.5} and we have
\begin{equation}\label{lsc+}
\int_{\Omega\cap \{u_0>0\}} \mathcal{A}(x,u_0)|\nabla u_0|^{p}\,dx\le \liminf_{n\to \infty} \int_{\Omega\cap \{u_n>0\}} \mathcal{A}(x,u_n) |\nabla u_n|^{p} \,dx.
\end{equation}
By considering the set $\Omega_{{\varepsilon}}\cap \{u_0<-\delta\}$ in \eqref{lsc1}, we can argue analogously to obtain
\begin{equation}\label{lsc-}
\int_{\Omega\cap \{u_0\le0\}}\ \mathcal{A}(x,u_0)|\nabla u_0|^{p} \,dx\le \liminf_{n\to \infty} \int_{\Omega\cap \{u_n\le0\}} \mathcal{A}(x,u_n) |\nabla u_n|^{p}\,dx
\end{equation}
Lower semi-continuity of the other terms in $\mathcal{F}$ is well known; that is,
$$
\int_{\Omega}f(x,u_0)u_0\,dx \,\le\, \liminf_{n\to \infty} \int_{\Omega}f(x,u_n)u_n\,dx
$$
and since $u_n\to u_0$ pointwise almost everywhere in $\Omega$,
$$
\int_{\Omega}\gamma(x,u_0)\,dx \, \le \, \liminf_{n\to \infty}\int_{\Omega}\gamma(x,u_n)\,dx.
$$
Along with \eqref{lsc+} and \eqref{lsc-}, we get
$$
\mathcal{F}(u_0)\le \liminf_{n\to \infty}\mathcal{F}(u_n)=\inf_{v\in W^{1,p}_{\phi}(\Omega)}\mathcal{F}(v)
$$
and it follows that $u_0$ is a minimizer of $\mathcal{F}$ in $W^{1,p}_{\phi}(\Omega)$. This concludes the proof of Theorem \ref{existence}.
\end{proof}
Now that we have established the existence of a minimizer of $\mathcal{F}$, we proceed to prove that any minimizer of $\mathcal{F}$ possesses minimal local H\"older continuity in the domain $\Omega$. For notational convenience, we define the following for $x\in \Omega$, $s\in \mathbb{R}$, $\xi \in \mathbb{R}^N$,
$$
F(x,s,\xi)= \mathcal{A}(x,s) |\xi|^{p} -f(x,s)s+\gamma(x,s)
$$
and observe that there exists $C>0$ such that $s\le s^p+C$ for all $s\ge 0$; using this and \eqref{ellipticity} we can write
\begin{equation}\label{estimates}
\lambda|\nabla u |^p-|f||u|^p-(C|f|+|\gamma|)\le F(x,u,\nabla u)\le \Lambda |\nabla u |^p+|f||u|^p+(C|f|+|\gamma|).
\end{equation}
Here, slightly abusing the notation, we have set $|f|=|f_{+}|+|f_-|$ and $|\gamma|=|\gamma_+|+|\gamma_-|$.
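For completeness, the elementary inequality $s\le s^p+C$ holds, e.g., with $C=1$, since $s\le s^p$ for $s\ge 1$ and $s\le 1$ for $0\le s<1$; it yields the bound on the source term used in \eqref{estimates}:
\[
|f(x,u)u|\le |f|\,|u|\le |f|\big(|u|^p+C\big)=|f|\,|u|^p+C|f|.
\]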
\begin{rema}
The existence result holds for more general values of $p\in (1, \infty)$. Since further regularity results are known only for the range of $p$ considered above, we restrict to $p\in [2,N)$.
\end{rema}
\begin{theo}\label{holder}
Let $u\in W^{1,p}(\Omega)$ be a minimizer of $\mathcal{F}$. Then $u$ is locally bounded and locally H\"older continuous in $\Omega$; that is, for all $\Omega'\Subset \Omega$ there exist $M(\Omega')>0$ and $0<\alpha_0<1$, depending only on the data of the problem, such that
$$
\|u\|_{C^{\alpha_0}(\Omega')}\le M\|u\|_{L^{\infty}(\Omega')}.
$$
\end{theo}
\begin{proof}
Since $F$ satisfies the estimates \eqref{estimates}, the minimization problem \eqref{P} falls into the general setting of the variational problems studied in \cite{eg05}; that is, it satisfies condition (7.2) in \cite[Section 7.1]{eg05}.
In particular, $F$ satisfies the hypothesis of \cite[Theorem 7.3]{eg05}, which proves the local boundedness of $u$ in terms of its $L^{p}$ norm. From the local boundedness of $u$ and \cite[Theorem 7.6]{eg05} we obtain local H\"older regularity of a minimizer of $\mathcal{F}$.
\end{proof}
\section{Small jumps and regularity in continuous medium}\label{section3}
In this section we prove that when $\mathcal{A}_+=\mathcal{A}_-=\mathcal{A}\in C(\Omega)$ and $f_{\pm}\in L^N(\Omega)$, a minimizer $u$ of $\mathcal{F}$ satisfies local $C^{0,1^-}$ regularity estimates. We interpret the functional studied in \cite{LTQ15} as a tangential free boundary problem; this strategy is adapted from \cite{TA15}. One can see \cite{TU13}, \cite{TE12}, \cite{TE11} for other applications of a similar strategy.
Also, we shall prove the theorems in this section for the unit ball centred at the origin, which, upon rescaling, represents a small ball contained inside a general domain $\Omega$. As we have already established the local boundedness of a solution in a general domain in the previous section, we will assume that a minimizer $u$ of $\mathcal{F}$ in $B_1=B_1(0)$ satisfies $\|u\|_{L^{\infty}(B_1)}=1$.
We will first prove that as the oscillation of the diffusion coefficients $\mathcal{A}_{\pm}$ gets smaller, the graph of a minimizer $u$ of $\mathcal{F}$ tends to the graph of a $C^{0,1^-}$ function in $B_{1/2}$. This will lead us to asymptotic $C^{0,1^-}$ estimates in $B_{1/2}$ at points located on the free boundary.
Then we will use the Moser-Harnack inequality and some geometric arguments to prove that $u$ is locally $C^{0,1^-}$ regular when $\mathcal{A}_+=\mathcal{A}_-=\mathcal{A}\in C(\Omega)$.
\begin{lemm}\label{convergence}
For a fixed boundary condition $\varphi$, let $u\in W_{\varphi}^{1,p}(B_1)$ be a minimizer of the functional $\mathcal{F}(\cdot,B_1)$ with $f_{\pm}\in L^N(B_1)$. Then for every ${\varepsilon}>0$ there exists a $\delta>0$ such that if
\begin{equation}\label{coefficients}
\|\mathcal{A}-\mathcal{A}_0\|_{L^1(B_1)}\le \delta
\end{equation}
for some constant $\mathcal{A}_0$ with $\lambda \le \mathcal{A}_0 \le \Lambda$, then there exists a function $u_0\in C^{0,1^-}(B_{1/2})$ such that
\begin{equation}\label{uniform limit}
\|u-u_0\|_{L^{\infty}(B_{1/2})}\le {\varepsilon}.
\end{equation}
Moreover, for every $0<\beta<1$ the constant of $\beta$-H\"older continuity of $u_0$, $C_0(\beta)$, depends only on $\mathcal{A}_0$, $\beta$ and the data of the problem.
\end{lemm}
\begin{proof}
We argue by contradiction: suppose there is a sequence of functions $\mathcal{A}_k$ satisfying \eqref{ellipticity} such that
\begin{equation}\label{converging coeff}
\lim_{k\to \infty} \|\mathcal{A}_k-\mathcal{A}_0\|_{L^1(B_1)}=0
\end{equation}
and there exists ${\varepsilon}_0>0$ such that, for the corresponding minimizers $u_k$ of $\mathcal{F}_{\mathcal{A}_k}$ and for every $w\in C^{0,1^-}(B_{1/2})$,
\begin{equation}\label{contradiction}
\|u_k-w\|_{L^{\infty}(B_{1/2})}>{\varepsilon}_0.
\end{equation}
Before moving into the main body of the proof, we make some observations. From the hypothesis \eqref{converging coeff}, we have $\mathcal{A}_{\pm,k}\to \mathcal{A}_{0}$ in $L^1(B_1)$. Then, up to a subsequence, $\mathcal{A}_{\pm,k}\to \mathcal{A}_0$ pointwise almost everywhere in $B_1$.
The functionals $\mathcal{F}_k=\mathcal{F}_{\mathcal{A}_k}$ satisfy the hypothesis of Theorem \ref{holder} in $B_1$; thus the minimizers $u_k$ of $\mathcal{F}_k$ are locally H\"older continuous in $B_1$, that is,
$$
\|u_k\|_{C^{\alpha_0}(\overline B_{1/2})}<K_0
$$
for some $K_0>0$ not depending on $k\in \mathbb{N}$, where $\alpha_0$ is as in Theorem \ref{holder}.
Therefore, we can apply the Arzel\`a--Ascoli theorem to $\{u_k\}$: there exists a function $h\in C^{0,\alpha_0}(B_{1/2})$ such that
\begin{equation}\label{AA limit}
u_k\to h\;\;\mbox{uniformly in $B_{1/2}$ up to a subsequence}.
\end{equation}
Therefore by compact embedding (see \cite{evans}, \cite{HB10}), we have a function $u_0\in W_{\varphi}^{1,p}(B_1)$ such that
\begin{equation}\label{weak limit}
\begin{split}
\{u_k\}\rightharpoonup u_0 &\mbox{ in $W^{1,p}(B_1)$ and}\\
\{u_k\}\to u_0& \mbox{ in $L^p(B_1)$}
\end{split}
\end{equation}
up to a subsequence.
From \eqref{AA limit} and \eqref{weak limit} we can say that $u_0=h$ almost everywhere in $B_{1/2}$. Also note that $\|u_k\|_{W^{1,p}(B_1)}$ is uniformly bounded in $k$: from the assumptions made at the beginning of the section, \eqref{ellipticity} and the minimality of $u_k$ for the functional $\mathcal{F}_k$, we have
\begin{equation}\label{equiintegrability}
\begin{split}
\lambda \int_{B_1}|\nabla u_k|^p\,dx \,\le \int_{B_1} \mathcal{A}_k(x,u_k)|\nabla u_k|^p\,dx\le \mathcal{F}_k(\phi)+\int_{B_1}f(x,u_k)u_k-\gamma(x,u_k)\,dx\\
\, \le \Lambda \int_{B_1}|\nabla \phi|^p\,dx + C<C_0<\infty.
\end{split}
\end{equation}
We proceed to show that the function $u_0\in W_{\varphi}^{1,p}(B_1)$ is a minimizer of
\begin{equation}\label{F0}
\mathcal{F}_0(v)=\int_{B_1}\mathcal{A}_0 |\nabla v|^{p}-f(x,v)v+\gamma(x,v)\,dx
\end{equation}
Observe that
\begin{equation}\label{principle part}
\liminf_{k\to \infty} \int_{B_1} \mathcal{A}_k (x,u_k) |\nabla u_k|^{p} \,dx \ge \int_{B_1} \mathcal{A}_0 |\nabla u_0|^{p}\,dx.
\end{equation}
Indeed, the inequality \eqref{principle part} can be shown by arguments similar to those in the proof of Theorem \ref{existence}. The only difference is that we need to apply Egorov's theorem to the sequences $\{u_k\}$ as well as $\{\mathcal{A}_k\}$. Fix ${\varepsilon}'>0$ and let $\Omega_{{\varepsilon}'}\subset B_1$ be such that $\mathcal{A}_k \to \mathcal{A}_0$ and $u_k \to u_0$ uniformly in $\Omega_{{\varepsilon}'}$ and $|B_1\setminus \Omega_{{\varepsilon}'}|<{\varepsilon}'$. From this information, one can verify that the sequence $\{ |\mathcal{A}_k(\cdot,u_k)|^{\frac{1}{p}}\nabla u_k \}$ converges weakly to $|\mathcal{A}_0|^{\frac{1}{p}} \nabla u_0$ in $L^{p}(\Omega_{{\varepsilon}'})$.
Then we can proceed exactly as in \eqref{lsc1}--\eqref{lsc-} to prove \eqref{principle part}.
\begin{comment}
the last term in the above inequation goes to Zero. Indeed since $\mathcal{A}_k\to \mathcal{A}_0$ pointwise upto subsequence. By Egorov's theorem, for all ${\varepsilon}'>0$ there exists $\Omega_{{\varepsilon}'}\subset B_1$ such that $|B_1\setminus \Omega_{{\varepsilon}'}|<{\varepsilon}'$ and $\mathcal{A}_k\to \mathcal{A}_0$ uniformly in $\Omega_{{\varepsilon}'}$, then we can argue as in the proof of the Theorem \ref{existence} by considering the domain $B_1$ instead of $\Omega$. That is, $\int_{\Omega_{{\varepsilon}'}}(\mathcal{A}_k(x,u_k)-\mathcal{A}_0)|\nabla u_k|^p\,dx$ goes to zero as $k\to \infty$ because $\mathcal{A}_{\pm,k}\to \mathcal{A}_0$ uniformly in $\Omega_{{\varepsilon}'}$. And from the fact that $\|u_k\|_{W^{1,p}(B_1)}< C_0$ from \eqref{equiintegrability} we have
$$
(\lambda-\Lambda)\int_{B_1\setminus \Omega_{{\varepsilon}'}} |\nabla u_k|^p\,dx \le \int_{B_1\setminus \Omega_{{\varepsilon}'}}(\mathcal{A}_k(x,u_k)-\mathcal{A}_0)|\nabla u_k|^p\,dx \le (\Lambda-\lambda)\int_{B_1\setminus \Omega_{{\varepsilon}'}} |\nabla u_k|^p\,dx
$$
the above quantity can be arbitrarily small choosing ${\varepsilon}'\to 0$.
\end{comment}
As $k\to \infty$, from \eqref{AA limit} we have
\begin{equation}\label{FB part}
\liminf_{k\to \infty}\int_{B_1} \gamma(x,u_k)\,dx \ge \int_{B_1}\gamma(x,u_0)\,dx.
\end{equation}
Adding \eqref{principle part} and \eqref{FB part}, and noting that $\int_{B_1}f(x,u_k)u_k\,dx\to \int_{B_1}f(x,u_0)u_0\,dx$ (by \eqref{weak limit} and the pointwise almost everywhere convergence of $u_k$), we get the following inequality:
\begin{equation}\label{part1}
\liminf_{k\to \infty} \mathcal{F}_k(u_k) \ge\,\mathcal{F}_0(u_0).
\end{equation}
Moreover, for any $v\in W_{\varphi}^{1,p}(B_1)$
\begin{equation}\label{part2}
\mathcal{F}_0(v) \ge \mathcal{F}_k(v)
+ \int_{B_1 } (\mathcal{A}_{0}-\mathcal{A}_{k}(x,v)) |\nabla v|^{p} \,dx
\end{equation}
The last term in \eqref{part2} goes to zero as $k\to \infty$ (by the same reasoning as for \eqref{principle part}). Therefore, from \eqref{part1} and \eqref{part2}, we have
$$
\mathcal{F}_0(v) \ge \lim_{k\to \infty} \mathcal{F}_k(v)\ge \liminf_{k\to \infty}\mathcal{F}_k(u_k)\ge \mathcal{F}_0(u_0)
$$
This shows that $u_0$ is a minimizer of $\mathcal{F}_0$.
From \cite{LTQ15} we know that the function $u_0$ is locally log-Lipschitz (and therefore locally $C^{0,1^-}$) in $B_{1}$, that is, $u_0\in C^{0,1^-}(B_{1/2})$. Since $u_k\to u_0$ uniformly in $B_{1/2}$, taking $w=u_0$ in \eqref{contradiction} gives a contradiction. Hence the lemma is proved.
\end{proof}
\begin{lemm}\label {far from FB 0}
Assume $0\in \partial \{u>0\}$ and the hypotheses of the previous lemma. Then for every $0<\alpha<1$ there exist $0<r_0 <1/2$ and $\delta=\delta(\alpha)>0$ such that if
$$
\|\mathcal{A}-\mathcal{A}_0\|_{L^1(B_1)} <\delta
$$
then
$$
\sup_{B_{r_0}}|u|\le r_0^{\alpha}.
$$
\end{lemm}
\begin{proof}
For ${\varepsilon}>0$ to be chosen later (and correspondingly $\delta>0$), we have from Lemma \ref{convergence} that
\begin{equation}\label{5.1}
\|\mathcal{A}-\mathcal{A}_{0}\|_{L^{1}(B_1)}<\delta \Rightarrow \|u-u_0\|_{L^{\infty}(B_{1/2})}<{\varepsilon}
\end{equation}
for some $u_0\in C^{0,1^-}(B_{1/2})$, that is, for every $0<\beta<1$ there exists $C(\beta)>0$ such that
\begin{equation}\label{5.2}
\sup_{B_r}|u_0(x)-u_0(0)|\le C(\beta) r^{\beta}\qquad\mbox{for $B_r\subset B_{1/2}$}
\end{equation}
Using \eqref{5.1} and \eqref{5.2}, we have
\begin{equation}\label{5.3}
\begin{split}
\sup_{B_r}|u(x)-u(0)|&\le \sup_{B_r}\Big ( |u(x)-u_0(x)|+|u_0(x)-u_0(0)|+|u_0(0)-u(0)| \Big ) \\
&\le 2{\varepsilon}+C(\beta)r^{\beta}
\end{split}
\end{equation}
Now select $\beta$ such that $\alpha<\beta<1$, and $r=r_0>0$ such that
$$
C(\beta)r_0^{\beta}=\frac{r_0^{\alpha}}{3},
$$
that is,
$$
r_0=\Big ( \frac{1}{3C(\beta)} \Big )^{1/(\beta-\alpha)},
$$
and select ${\varepsilon}>0$ (and accordingly $\delta({\varepsilon})>0$) such that
$$
{\varepsilon}<\frac{r_0^{\alpha}}{3}.
$$
Now from \eqref{5.3}, using the assumption that $u(0)=0$, we obtain the lemma.
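Explicitly, with these choices the estimate \eqref{5.3} closes by a direct arithmetic check, using $u(0)=0$:
\[
\sup_{B_{r_0}}|u|=\sup_{B_{r_0}}|u(x)-u(0)|\le 2{\varepsilon}+C(\beta)r_0^{\beta}< \frac{2r_0^{\alpha}}{3}+\frac{r_0^{\alpha}}{3}=r_0^{\alpha}.
\]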
\end{proof}
\begin{lemm}\label{far from FB}
Under the hypotheses of the previous lemma, for all $0<\alpha<1$ there exists $C_0>0$, depending only on $\alpha$ and the data of the problem, such that if
$$
\|\mathcal{A}-\mathcal{A}_0\|_{L^1(B_1)}<\delta
$$
then
$$
\sup_{B_{r}}|u(x)|\le C_0 r^{\alpha} \qquad \mbox{for $r<r_0$}.
$$
($r_0$ and $\delta$ are as in Lemma \ref{far from FB 0}.)
\end{lemm}
\begin{proof}
Let us first show that for all $k\in \mathbb{N}$
$$
\sup_{B_{r_0^{k}}}|u(x)|\le r_0^{k\alpha }.
$$
We already know from Lemma \ref{far from FB 0} that the above claim is true for $k=1$. Suppose it is true up to a value $k_0\in \mathbb{N}$; we claim that it is then also true for $k_0+1$.
Define $v(y)=\frac{1}{r_0^{k_0\alpha}}u(r_0^{k_0}y)$ for $y\in B_1=B_1(0)$.
We have
$$
\nabla v(y)=r_0^{k_0(1-\alpha)}\nabla u(r_0^{k_0}y)
$$
by change of variables we have
\begin{equation}
\begin{split}
\mathcal{F}_{\mathcal{A}}(u;B_{r_0^{k_0}})=&\,r_0^{Nk_0}\int_{B_1}r_0^{pk_0(\alpha-1)} \mathcal{A}(r_0^{k_0}y,v) |\nabla v(y)|^{p}\,dy \\
&-r_0^{Nk_0}\int_{B_1}\Big( r_0^{k_0\alpha}vf(r_0^{k_0}y,v)-\gamma(r_0^{k_0}y,v)\Big)\,dy.
\end{split}
\end{equation}
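Before identifying the rescaled functional, note that the $L^N$ norm of the rescaled source term $\tilde f(y,s)=r_0^{pk_0(1-\alpha)+k_0\alpha}f(r_0^{k_0}y,s)$ appearing below can be computed directly via the change of variables $x=r_0^{k_0}y$:
\[
\|\tilde f\|_{L^N(B_1)}^N=r_0^{N(pk_0(1-\alpha)+k_0\alpha)}\int_{B_1}|f(r_0^{k_0}y)|^N\,dy=r_0^{Nk_0(1-\alpha)(p-1)}\int_{B_{r_0^{k_0}}}|f(x)|^N\,dx,
\]
since $pk_0(1-\alpha)+k_0\alpha-k_0=k_0(1-\alpha)(p-1)\ge 0$ for $0<\alpha<1$ and $p\ge 2$; as $r_0<1$, this gives $\|\tilde f\|_{L^N(B_1)}\le \|f\|_{L^N(B_1)}$.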
Observe that $v$ is a minimizer of $\tilde{\mathcal{F}}=\mathcal{F}_{\tilde{\mathcal{A}}, \tilde f,\tilde \gamma}(\cdot;B_1)$ with
\[
\begin{split}
&\tilde{\mathcal{A}}(y,s)=\mathcal{A}(r_0^{k_0}y,s)\\
&\tilde f(y,s)=r_0^{pk_0(1-\alpha)+k_0\alpha}f(r_0^{k_0}y,s)\\
&\tilde \gamma (y,s)=r_0^{pk_0(1-\alpha)} \gamma (r_0^{k_0}y,s)
\end{split}
\]
and, moreover,
$$
\|\tilde f\|_{L^{N}(B_1)}=r_0^{k_0(1-\alpha)(p-1)}\|f\|_{L^N(B_{r_0^{k_0}})}\le \|f\|_{L^N(B_1)}.
$$
Since $\sup _{B_{1}}|v|= r_0^{-k_0 \alpha} \sup_{B_{r_0^{k_0}}}|u|\le 1$, the functional $\tilde{\mathcal{F}}$ satisfies the hypothesis of Lemma \ref{far from FB 0}, and since $0\in \partial \{v>0\}$ we have
$$
\sup_{B_{r_0}}|v|\le r_0^{\alpha}.
$$
Substituting $u$ from the definition of $v$,
$$
\sup_{B_{r_0^{k_0+1}}}|u|\le r_0^{(k_0+1)\alpha}.
$$
This proves the claim. Now we prove the lemma using a classical iteration argument. Let $0<r<r_0$ and select $k$ such that $r_0^{k+1}<r\le r_0^k$; we see that
$$
\sup_{B_r}|u|\le \sup_{B_{r_0^{k}}}|u|\le r_0^{k\alpha}=r_0^{(k+1)\alpha}\frac{1}{r_0^{\alpha}}\le \frac{1}{r_0^{\alpha}}r^{\alpha}.
$$
This concludes the proof.
\end{proof}
\begin{rema}\label{rem3.4}
We have obtained local asymptotic $C^{0,1^-}$ regularity estimates on $u$ at points on the free boundary. If $\mathcal{A}_{\pm}(x)$ are separately continuous and $f_{\pm}\in L^N$, a minimizer $u$ satisfies the following Euler--Lagrange equations in the positive and negative phases:
$$
\begin{cases}
-\dive(\mathcal{A}_+(x)|\nabla u|^{p-2}\nabla u)=f_+\qquad \mbox{in $\{u>0\}$}\\
-\dive(\mathcal{A}_-(x)|\nabla u|^{p-2}\nabla u)=f_-\qquad \mbox{in $\{u<0\}$}.
\end{cases}
$$
Since $p<N$, from \cite[Theorem 4.2]{EVT11} we have local $C^{0,1^-}$ regularity estimates on $u$ away from the free boundary. However, these estimates deteriorate as we move closer to the free boundary, so we cannot yet conclude that the local asymptotic $C^{0,1^-}$ regularity estimates hold in the entire domain under consideration. In order to prove this, we will use our information on how the estimates obtained in \cite{EVT11} deteriorate near the free boundary, together with the non-homogeneous Moser--Harnack inequality and some geometric arguments.
\end{rema}
\begin{lemm}\label{main result}
Let $\mathcal{A}_{\pm}\in C(B_1)$ and assume the hypotheses of the previous lemmas. Then for all $0<\alpha<1$ there exists $\delta>0$, depending on $\alpha$ and the data of the problem, such that if
$$
\|\mathcal{A}-\mathcal{A}_0\|_{L^1(B_1)}< \delta
$$
we have
$$
u\in C^{\alpha}(B_{1/2}).
$$
\end{lemm}
\begin{proof}
As mentioned in the previous remark, we only need to show that the constant of $\alpha$-H\"older continuity does not blow up as we move closer to the free boundary. This information, along with the local estimate in \cite[Theorem 4.2]{EVT11}, will prove our claim.
Let $x_0\in \{u>0\}\cap B_{1/2}$ be close to the free boundary. From \cite[Theorem 1]{serrin63}, we know that $u$ satisfies the non-homogeneous Moser--Harnack inequality for $r<d/4$, where $d=\dist(x_0,\partial \{u>0\})$:
\begin{equation}\label{harnack}
\sup_{B_r(x_0)} u\le C \left ( \inf_{B_{r/2}(x_0)}u+r\|f\|_{L^N(B_r(x_0))} \right ).
\end{equation}
Also, we know from \cite[Theorem 4.2]{EVT11} and Campanato's theorem that $u$ satisfies interior $C^{0,1^-}$ estimates in $\{u>0\}$, which, for a given $0<\alpha<1$, can be written as follows:
\begin{equation}\label{schauder}
\|u\|_{C^{\alpha}(B_r(x_0))} \le \|u\|_{C^{\alpha}(B_{d/2}(x_0))}\le \frac{C}{d^{\alpha}}\|u\|_{L^{\infty}(B_{2d/3})}
\end{equation}
Using \eqref{harnack} in \eqref{schauder} with an appropriate value of $r>0$, we get
\begin{equation}\label{control 0}
\|u\|_{C^{\alpha}(B_{r}(x_0))}\le \frac{C_1}{d^{\alpha}} \left ( u(x_0)+d\|f\|_{L^N(B_1)} \right ).
\end{equation}
Now we make use of Lemma \ref{far from FB}. Let $y_0\in \partial\{u>0\}$ be a point such that $d=\dist(x_0,y_0)=\dist (x_0,\partial \{u>0\})$. From Lemma \ref{far from FB} we have
\begin{equation}\label{holder on FB}
\sup_{B_{2d}(y_0)}|u|\le C_2 d^{\alpha}.
\end{equation}
Observe that $x_0\in B_{2d}(y_0)$; combining \eqref{control 0} and \eqref{holder on FB}, we get
$$
\|u\|_{C^{\alpha}(B_{r}(x_0))}\le C_1C_2+C_1d^{1-\alpha}\|f\|_{L^N(B_1)}
$$
that is
$$
\|u\|_{C^{\alpha}(B_{r}(x_0))}\le C_3+C_12^{1-\alpha}\|f\|_{L^N(B_1)}.
$$
From Remark \ref{rem3.4}, we already know that $u\in C_{\mathrm{loc}}^{0,\alpha}(\{u>0\}\cap B_{1/2})$ and $u\in C_{\mathrm{loc}}^{0,\alpha}(\{u<0\}\cap B_{1/2})$. Hence, the asymptotic $C^{0,1^-}$ regularity estimates on $u$ in $B_{1/2}$ are proved.
\end{proof}
\begin{theo}[Regularity in a continuous medium]\label{continuous medium}
Let $u\in W_{\phi}^{1,p}(\Omega)$ be a minimizer of $\mathcal{F}(\cdot,\Omega)$ with $\mathcal{A}_{+}=\mathcal{A}_{-}=\mathcal{A}\in C(\Omega)$ and $f_{\pm}\in L^N(\Omega)$. Then $u\in C^{0,1^-}_{\mathrm{loc}}(\Omega)$.
\end{theo}
\begin{proof}
The proof of this theorem follows by a simple rescaling argument. Let $\Omega'\Subset \Omega$, set $d=\dist (\Omega^{c},\Omega')$ and $\Omega^{''}=\{x\in \Omega \mid \dist(x,\Omega')<d/2\}$. From the previous lemmas, we know that $u$ is uniformly bounded in $\overline {\Omega^{''}}$ and, moreover, $\mathcal{A}$ is uniformly continuous in $\overline{\Omega^{''}}$.
Given $0<\alpha<1$, choose the corresponding ${\varepsilon}>0$ from the above lemmas and $\delta<d/2$ such that $|\mathcal{A}(x)-\mathcal{A}(y)|<{\varepsilon}$ when $|x-y|<\delta$ ($x,y\in \Omega'$). Let $x_0\in \Omega'$, fix $\mathcal{A}_0=\mathcal{A}(x_0)$, and rescale $u_\delta(y)=u(x_0+\delta y)$ for $y\in B_1$. We can apply Lemma \ref{main result} to $u_\delta$ and obtain $u_{\delta}\in C^{0,\alpha}(B_{1/2})$; thus $u\in C^{0,\alpha}(B_{\delta/2}(x_0))$. Covering $\Omega'$ with balls of radius $\delta/2$, we prove the theorem.
\end{proof}
\section{Asymptotic regularity estimates}\label{section4}
Now, we will use the same strategy as in the previous section to show that the regularity of a minimizer of $\mathcal{F}$ with $\mathcal{A}_{\pm}\in C(\Omega)$ and $f_{\pm}\in L^N(\Omega)$ tends asymptotically to local Lipschitz regularity as the $L^1(\Omega)$ distance between $\mathcal{A}_+$ and $\mathcal{A}_-$ becomes smaller. That is, given any $0<\alpha<1$ there is a distance $\delta>0$ such that if $\|\mathcal{A}_+-\mathcal{A}_-\|_{L^1(\Omega)}$ is smaller than $\delta$, then $u\in C_{\mathrm{loc}}^{0,\alpha}(\Omega)$.
The result of Theorem \ref{continuous medium} is the limit case when the distance between $\mathcal{A}_+$ and $\mathcal{A}_-$ is zero. In fact, instead of considering $\mathcal{F}_0$ (defined in \eqref{F0}, Lemma \ref{convergence}), we will consider the functional in the hypothesis of Theorem \ref{continuous medium} as a tangential free boundary problem and recover regularity in converging solutions. Note that Lipschitz regularity for minimizers does not hold even when $\mathcal{A}_+=\mathcal{A}_-=\mbox{constant}$ (see \cite{LTQ15}).
We will assume that $\mathcal{A}_+$ and $\mathcal{A}_-$ are separately continuous in $\Omega$ with a modulus of continuity $\omega$:
\begin{equation}\label{modulus of continuity}
|\mathcal{A}_{\pm}(x)-\mathcal{A}_{\pm}(y)|\le \omega(|x-y|).
\end{equation}
\begin{theo}
Let $u\in W^{1,p}_{\phi}(\Omega)$ be a minimizer of $\mathcal{F}$ with $\mathcal{A}_{\pm}$ satisfying \eqref{modulus of continuity} and $f_{\pm}\in L^N(\Omega)$. Then given $\Omega'\Subset \Omega$, for all $0<\alpha<1$ there exists a $\delta >0$ depending on $\alpha,\Omega'$ and data of the problem such that if
\begin{equation}\label{Lpconvergence}
\|\mathcal{A}_{+}-\mathcal{A}_{-}\|_{L^{1}(\Omega)}\le \delta
\end{equation}
then
$$
u\in C^{0,\alpha}(\Omega').
$$
\end{theo}
\begin{proof}
The proof follows the same lines as in the previous section; we outline a brief sketch for the reader.
For the same reasons as discussed at the beginning of Section \ref{section3}, we prove the result for the unit ball $B_1$ centred at $0$ and assume $\|u\|_{L^{\infty}(B_1)}=1$.
Consider a sequence $\{\mathcal{A}_{\pm}^k\}\subset C(\Omega)$ satisfying \eqref{modulus of continuity} and such that $\|\mathcal{A}_+^k-\mathcal{A}_-^k\|_{L^{1}(B_1)}\rightarrow 0$, and let $u_k$ be a minimizer of the functional $\mathcal{F}_k=\mathcal{F}_{\mathcal{A}^k}(\cdot;B_1)$, where $\mathcal{A}^k(x,s)$ is defined as
$$
\mathcal{A}^k(x,s)=\mathcal{A}^k_+(x)\chi_{\{s>0\}}+\mathcal{A}^k_-(x)\chi_{\{s\le 0\}}.
$$
To show that the sequence $\{u_k\}$ uniformly converges to a $C^{0,1^-}$ function in $B_{1/2}$, we argue by contradiction. Let us assume that there exists ${\varepsilon}_0>0$ such that for every $w\in C^{0,1^-}(B_{1/2})$ we have $\|u_k-w\|_{L^{\infty}(B_{1/2})}>{\varepsilon}_0$.
Then we can argue in the same way as in the proof of Lemma \ref{convergence} to contradict this assumption.
Since $\mathcal{A}_{\pm}^k$ satisfy \eqref{Lpconvergence} and \eqref{modulus of continuity}, we can apply Egorov's theorem and then the Arzel\`a--Ascoli theorem to $\{\mathcal{A}^k\}$ in $B_{1/2}$. Thus the sequence $\{\mathcal{A}^k\}$ converges (up to a subsequence) uniformly to a continuous function $\mathcal{A}^*$ satisfying \eqref{modulus of continuity}. Then, proceeding as in \eqref{principle part} and \eqref{part2}, the sequence $\{u_k\}$ converges uniformly in $B_{1/2}$ to a minimizer $u^*$ of $\mathcal{F}^*$ defined as
$$
\mathcal{F}^{*}(v)=\int_{B_1}\mathcal{A}^*(x)|\nabla v|^p-f(x,v)v+\gamma(x,v)\,dx .
$$
In order to show the regularity estimates on $u$, we consider $\mathcal{F}^*$ as the tangential free boundary problem instead of $\mathcal{F}_0$ as in the previous section.
From Theorem \ref{continuous medium} we know $u^*\in C^{0,1^-}(B_{1/2})$. Hence we reach a contradiction, and we conclude that $u_k\to u^{*}\in C^{0,1^-}(B_{1/2})$ uniformly in $B_{1/2}$.
Rescaling arguments analogous to those in the proof of Lemma \ref{far from FB} can be used to show the asymptotic $C^{0,1^-}$ estimates on $u$ in $B_{1/2}$ at points on the free boundary.
Then we can prove the theorem using the Schauder type estimates (see \eqref{schauder}) and the non-homogeneous Moser--Harnack inequality (see \eqref{harnack}), with the same geometric arguments as in Lemma \ref{main result}.
\end{proof}
\section{Finite perimeter of the free boundary}\label{section5}
In this section, we will prove that the free boundary of a minimizer of $\mathcal{F}$ is a set of finite perimeter. This result marks the impact of the last term $\gamma_{\pm}(x)$ on the nature of the problem. Heuristically speaking, this term penalizes the transition between phases and thus imposes some flux balance along the free boundary, which in turn forces the free boundary to gain some regularity.
The technique we adapt is from geometric measure theory; one can refer to \cite{db12}, and to \cite{ButHar18} for an application of the same technique in a shape optimization problem. The main idea is simple: the information related to the perimeter of the free boundary is contained in the integral $\int_{\{|u|>0\}}|\nabla u|\,dx$, and the link between this integral and the perimeter is expressed through the co-area formula.
We will assume the Dirichlet boundary condition and that the terms $\gamma_{\pm}$ are strictly ordered, that is,
\begin{equation}\label{ordering}
0<c<\gamma_{+}(x)-\gamma_-(x).
\end{equation}
\begin{theo}\label{finite perimeter}
Given a minimizer $u$ of $\mathcal{F}$ with $\gamma$ satisfying \eqref{ordering} and $\phi=0$ on $\partial \Omega$, the reduced boundary $\partial^* \{u>0\}$ is a set of finite perimeter.
\end{theo}
\begin{proof}
For ${\varepsilon}>0$ define the following
$$
u_{{\varepsilon}}=(u-{\varepsilon})^+-(u+{\varepsilon})^-\mbox{ and } A_{{\varepsilon}}=\{0<u\le {\varepsilon}\}\cap \Omega.
$$
Note that
$$u_{\varepsilon}=\begin{cases}
u-{\varepsilon}&\hbox{if }u>{\varepsilon}\\
u+{\varepsilon}&\hbox{if }u<-{\varepsilon}\\
0&\hbox{if }|u|\le{\varepsilon}
\end{cases}
\qquad\hbox{and}\qquad
\nabla u_{\varepsilon}=\begin{cases}
\nabla u&\hbox{a.e. on }\{|u|>{\varepsilon}\}\\
0&\hbox{a.e. on }\{|u|\le{\varepsilon}\}.
\end{cases}$$
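The competitor $u_{{\varepsilon}}$ perturbs the lower-order source term only at order ${\varepsilon}$: since $|u_{{\varepsilon}}-u|\le {\varepsilon}$ pointwise and $u_{{\varepsilon}}$ has the same sign as $u$ wherever $u_{{\varepsilon}}\neq 0$, one checks that
\[
\Big |\int_{\Omega}f(x,u_{{\varepsilon}})u_{{\varepsilon}}-f(x,u)u\,dx \Big |\le {\varepsilon}\int_{\Omega}|f|\,dx\le C(f,\Omega,N)\,{\varepsilon},
\]
which is the origin of the constant $C(f,\Omega,N)$ in the inequality below.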
From the minimality of $u$ for the functional $\mathcal{F}$ we have
$$
\mathcal{F}(u)\le \mathcal{F}(u_{{\varepsilon}})
$$
therefore
\[
\begin{split}
\int_{\Omega}\mathcal{A}(x,u)|\nabla u|^p-\mathcal{A}(x,u_{{\varepsilon}})|\nabla u_{{\varepsilon}}|^p\,dx +\int_{\Omega} \gamma(x,u)-\gamma(x,u_{{\varepsilon}})\,dx & \le C(f,\Omega,N){\varepsilon}\\
\Rightarrow \int_{\{0<|u|\le {\varepsilon} \}}\mathcal{A}(x,u)|\nabla u|^p\,dx +\int_{\{0<u\le {\varepsilon}\}}(\gamma_+(x)-\gamma_-(x))\,dx&\le C(f,\Omega,N){\varepsilon}.
\end{split}
\]
Using the hypotheses \eqref{ellipticity} and \eqref{ordering} in the above inequality, we obtain the following:
$$
\int_{A_{{\varepsilon}}}\lambda |\nabla u|^p \,dx +c |A_{{\varepsilon}}|\le C(f,\Omega,N){\varepsilon}
$$
This implies that $\int_{A_{{\varepsilon}}}|\nabla u|^p\,dx \le C{\varepsilon}$ and $|A_{{\varepsilon}}|\le C{\varepsilon}$; from the H\"older inequality, we have
$$
\int_{\{0<u<{\varepsilon}\}}|\nabla u|\,dx \le \int_{A_{{\varepsilon}}}|\nabla u|\,dx\le \left ( \int_{A_{{\varepsilon}}}|\nabla u |^p\,dx\right )^{1/p}|A_{{\varepsilon}}|^{1/p'}\le C{\varepsilon}
$$
where $C=C(f,\lambda,c,\Omega,N)$.
Now, we use the coarea formula and we deduce
$$\int_0^{\varepsilon}\mathcal{H}^{N-1}\big(\partial^*\{u>t\}\big)\,dt\le C{\varepsilon}.$$
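In more detail, this is the standard co-area step: for a Sobolev function,
\[
\int_{\{0<u<{\varepsilon}\}}|\nabla u|\,dx=\int_0^{{\varepsilon}}\mathcal{H}^{N-1}\big(u^{-1}(t)\cap \{0<u<{\varepsilon}\}\big)\,dt,
\]
and for almost every level $t\in(0,{\varepsilon})$ the reduced boundary $\partial^*\{u>t\}$ is contained in the level set $u^{-1}(t)$ up to an $\mathcal{H}^{N-1}$-negligible set, which yields the display above.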
Thus there exists a sequence $\delta_n\to0$ such that
$$\mathcal{H}^{N-1}\big(\partial^*\{u>\delta_n\}\big)\le C\qquad\hbox{for every }n.$$
Since $\{u>\delta_n\}\to \{u>0\}$ in $L^1(\Omega)$ and the perimeter is lower semicontinuous with respect to $L^1$-convergence of sets (see \cite{AFP00}), we finally conclude that
$$\mathcal{H}^{N-1}\big(\partial^*\{u>0\}\big)\le C$$
as required.
\end{proof}
\begin{rema}
After adding the assumption that $\gamma(x,0)=0$ and $\gamma(x,s)>c>0$ for $s\neq 0$, and following the lines of the proof of Theorem \ref{finite perimeter}, one can show that the reduced boundary $\partial ^*\{u<0\}$ is also a set of finite perimeter. In this case, the ordering condition \eqref{ordering} in the hypothesis of Theorem \ref{finite perimeter} can be dropped.
\end{rema}
\begin{rema}\label{general boundary}
We can prove a local version of Theorem \ref{finite perimeter} for a general boundary condition, using the same ideas. For this, we consider a ball $B_r$ such that $B_{2r}$ is contained in the domain $\Omega$, and we define the test function $\tilde{u_{\varepsilon}}$ as
$$
\tilde {u_{{\varepsilon}}} =\eta u+ (1-\eta)u_{{\varepsilon}}
$$
where $u_{{\varepsilon}}$ is defined as in the proof of Theorem \ref{finite perimeter} and $\eta$ is a smooth function such that
$$
\eta(x)= \begin{cases}
0 \;\; \mbox{if $x\in B_r$}\\
1 \;\; \mbox{if $x\in \Omega\setminus B_{2r}$}.
\end{cases}
$$
If we use $\tilde{u_{{\varepsilon}}}$ instead of $u_{{\varepsilon}}$ as a test function in the proof of Theorem \ref{finite perimeter}, we can show that the reduced free boundary is a set of finite perimeter inside the ball $B_r$. A more elaborate discussion can be found in Section 5.2 of \cite{velichkovnotes} for the case of the one-phase free boundary problem.
\end{rema}
\begin{comment}
Now that we know that the free boundary of minimizers of $\mathcal{F}$ is an $(N-1)$-dimensional surface, we show that for any cone with a given opening there exists a small distance $\delta>0$ as in Corollary \ref{convergence2} such that the free boundary surface can be contained inside that cone, with the vertex of the cone at a free boundary point.
\begin{theo}
Given that $\mathcal{F}, \mathcal{F}_0$ satisfy the hypotheses of Corollary \ref{convergence2}, for every $\Omega'\Subset \Omega$ there exists a $\delta >0$ such that if
\begin{equation}\label {hypothesis}
\|\mathcal{A}_+-\mathcal{A}_-\|_{L^2(\Omega)}+\|\gamma_+-\gamma_-\|_{L^{\infty}(\Omega)}<\delta
\end{equation}
then the free boundary of the minimizer $u_{\delta}$ is Lipschitz in $\Omega'$, with Lipschitz constant depending on $\Omega'$.
\end{theo}
\begin{proof}
We first note that the minimizer $u_0$ of the functional $\mathcal{F}_0$ defined in Theorem \ref{convergence} is locally $C^{1,\sigma}$ regular. That is, for every open $\Omega'\Subset \Omega$ there exists $C=C(\Omega')$ such that for $B_r=B_r(0)\Subset \Omega'$ (supposing that $0\in \Omega'$ without loss of generality)
\begin{equation}\label{C1alpha}
\sup_{B_r}|u_0(x)-u_0(0)-\nabla u_0(0)\cdot x|\le C(\Omega')r^{1+\sigma}
\end{equation}
Suppose $0\in \Omega'$ is a free boundary point of a minimizer $u_{\delta}$ of $\mathcal{F}_{\mathcal{A}}$ satisfying \eqref{hypothesis} and $B_r$ is the largest ball inside $\Omega'$ centred at $0$. We will prove that for $\delta(\Omega')>\delta>0$ the free boundary $\partial \{u_{\delta}>0\}$ can be contained inside the complement of a cone with vertex on $\{u_{\delta}>0\}$.
Proceeding from \eqref{C1alpha}, we have for all $x\in B_r(0)$
\begin{equation}\label{lies between planes}
\nu \cdot x-C r^{1+\sigma}\le u_0(x)-u_0(0) \le \nu \cdot x+Cr^{1+\sigma}
\end{equation}
this means that the graph of the function $u_0(\cdot)-u_0(0)$ lies between the two planes $\nu \cdot x-C r^{1+\sigma}$ and $\nu \cdot x+Cr^{1+\sigma}$. From Corollary \ref{convergence2}, if $\delta$ is smaller than a given $\delta(\Omega')$, we have from \eqref{lies between planes}
$$
\nu \cdot x-C r^{1+\sigma}\le u_{\delta}(x)-u_{\delta}(0) \le \nu \cdot x+Cr^{1+\sigma}
$$
for all $x\in B_r$. Since $0$ is a free boundary point of $u_{\delta}$, we have $u_{\delta}(0)=0$, and hence
$$
\nu \cdot x-C r^{1+\sigma}\le u_{\delta}(x) \le \nu \cdot x+Cr^{1+\sigma}
$$
thus for every $x\in B_r\cap \partial \{u_{\delta}>0\}$, we have $u_{\delta}(x)=0$ and
\begin{equation}\label{what the use?}
-Cr^{1+\sigma}\le \nu\cdot x \le Cr^{1+\sigma}
\end{equation}
in other words, the free boundary of $u_{\delta}$ is contained inside the complement of a cone of opening $\frac{1}{|\nu|}C(\Omega')r^{1+\sigma}$, with vertex $0$ and axis the vector $\nu$.
\end{proof}
\end{comment}
\begin{thebibliography}{20}
\bibitem{AF94}
{\sc E.~Acerbi and N.~Fusco}, {\em A transmission problem in the calculus of
variations}, Calc. Var. Partial Differential Equations, 2
(1994), pp.~1--16.
\bibitem{altcaf81}
{\sc H.~Alt and L.~Caffarelli}, {\em Existence and regularity for a minimum
problem with free boundary.}, J. Reine Angew. Math, 325 (1981), pp.~105--144.
\bibitem{ACF84}
{\sc H.~Alt, L.~Caffarelli, and A.~Friedman}, {\em Variational problems with
two phases and their free boundaries}, Trans. Amer. Math. Soc., 282 (1984), pp.~431--461.
\bibitem{TA15}
{\sc M.~D. Amaral and E.~Teixeira}, {\em Free transmission problems},
Comm. Math. Phys., 337 (2015), pp.~1465--1489.
\bibitem{AFP00}
{\sc L.~Ambrosio, N.~Fusco, and D.~Pallara}, {\em Functions of Bounded
Variation and Free Discontinuity Problems}, Oxford mathematical monographs,
Clarendon Press, 2000.
\bibitem{ab02}
{\sc A.~Braides}, {\em Gamma-convergence for Beginners}, Oxford Lecture Series
in Mathematics and Its Applications, Clarendon Press, 2002.
\bibitem{HB10}
{\sc H.~Brezis}, {\em Functional Analysis, Sobolev Spaces and Partial
Differential Equations}, Universitext, Springer New York, 2010.
\bibitem{db12}
{\sc D.~Bucur}, {\em Minimization of the k-th eigenvalue of the {D}irichlet
{L}aplacian}, Arch. Ration. Mech. Anal., 206 (2012),
pp.~1073--1083.
\bibitem{ButHar18}
{\sc G.~Buttazzo and H.~Shrivastava}, {\em Optimal shapes for general integral
functionals}, Ann. H. Lebesgue, (to appear).
\bibitem{JDS18}
{\sc J.~da~Silva and J.~Rossi}, {\em A limit case in non-isotropic two-phase
minimization problems driven by $p$-Laplacians}, Interfaces Free Bound., 20 (2018), pp.~379--406.
\bibitem{dp05}
{\sc D.~Danielli and A.~Petrosyan}, {\em A minimum problem with free boundary
for a degenerate quasilinear operator}, Calc. Var. Partial Differential Equations, 23 (2005), pp.~97--124.
\bibitem{dg68}
{\sc E.~De~Giorgi}, {\em Teoremi di semicontinuit{\`a} nel calcolo delle
variazioni: (lezioni tenute nell'anno accademico 1968-69)}, Istituto
nazionale di alta matematica, 1969.
\bibitem{evans}
{\sc L.~C. Evans}, {\em Partial differential equations}, vol.~19 of Graduate
studies in mathematics, American Mathematical Society, 2010.
\bibitem{GE15}
{\sc L.~C. Evans and R.~F. Gariepy}, {\em Measure Theory and Fine Properties of
Functions, Revised Edition}, Textbooks in Mathematics, CRC Press, 2015.
\bibitem{HF69}
{\sc H.~Federer}, {\em Geometric Measure Theory}, Classics in Mathematics,
Springer-Verlag, New York, 1969.
\bibitem{FKR17}
{\sc A.~Figalli, B.~Krummel, and X.~Ros-Oton}, {\em On the regularity of the
free boundary in the $p$-Laplacian obstacle problem}, J. Differential
Equations, 263 (2017), pp.~1931--1954.
\bibitem{FGS17}
{\sc M.~Focardi, F.~Geraci, and E.~Spadaro}, {\em The classical obstacle
problem for nonlinear variational energies}, Nonlinear Anal., 154 (2017), pp.~71--87.
\bibitem{GBF99}
{\sc G.~B. Folland}, {\em Real analysis: Modern techniques and their
applications}, Pure and applied mathematics, John Wiley and Sons, 2013.
\bibitem{gigi84}
{\sc M.~Giaquinta and E.~Giusti}, {\em Quasi-minima}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire, 1 (1984), pp.~79--107.
\bibitem{eg05}
{\sc E.~Giusti}, {\em Direct methods in the calculus of variations}, World
Scientific, 2003.
\bibitem{KLS17}
{\sc S.~Kim, K.~A. Lee, and H.~Shahgholian}, {\em An elliptic free boundary
arising from the jump of conductivity}, Nonlinear Anal., 161 (2017), pp.~1--29.
\bibitem{PK08}
{\sc S.~G. Krantz and H.~R. Parks}, {\em Geometric Integration Theory},
Birkh\"auser Advanced Texts Basler Lehrb\"ucher, Springer Science and
Business Media, 2008.
\bibitem{LTQ15}
{\sc R.~Leit\~ao, O.~S. de~Queiroz, and E.~Teixeira}, {\em Regularity for
degenerate two-phase free boundary problems}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire, 32 (2015), pp.~741--762.
\bibitem{fmgmt}
{\sc F.~Maggi}, {\em Sets of finite perimeter and geometric variational
problems}, vol.~135 of Cambridge Studies in Advanced Mathematics, Cambridge
University Press, 2012.
\bibitem{WR87}
{\sc W.~Rudin}, {\em Real and Complex Analysis}, McGraw-Hill series in Higher
Mathematics, McGraw-Hill Education, 3~ed., 1987.
\bibitem{serrin63}
{\sc J.~Serrin}, {\em A Harnack inequality for nonlinear equations}, Bull. Amer. Math. Soc. (N.S.), 69 (1963), pp.~481--486.
\bibitem{EVT11}
{\sc E.~Teixeira}, {\em Sharp regularity for general Poisson equations with
borderline sources}, J. Math. Pures Appl., 99
(2011), pp.~150--164.
\bibitem{TE11}
{\sc E.~Teixeira}, {\em Universal moduli of
continuity for solutions to fully nonlinear elliptic equations}, Arch. Ration. Mech. Anal., 211 (2011), pp.~911--927.
\bibitem{TE12}
{\sc E.~Teixeira}, {\em Regularity for
quasilinear equations on degenerate singular sets}, Math. Ann.,
358 (2014), pp.~241--256.
\bibitem{TU13}
{\sc E.~Teixeira and J.~Urbano}, {\em A geometric tangential approach to sharp
regularity for degenerate evolution equations}, Anal. PDE, 7 (2014),
pp.~733--744.
\bibitem{velichkovnotes}
{\sc B.~Velichkov}, {\em Regularity of the one-phase free boundaries}, (Lecture notes).
\end{thebibliography}
\end{document}
\begin{document}
\title[Upper triangular matrices and operations in $K$-theory]{Upper triangular matrices and operations in odd primary connective $K$-theory}
\author{Laura Stanley}
\author{Sarah Whitehouse}
\address{School of Mathematics and Statistics, University of Sheffield, Sheffield S3 7RH, UK.}
\email{[email protected]}
\begin{abstract}
We prove analogues for odd primes of results of Snaith and Barker-Snaith.
Let $\ell$ denote the $p$-complete connective Adams summand and consider
the group of left $\ell$-module automorphisms of $\ell\wedge\ell$ in the stable homotopy category
which induce the identity on mod $p$ homology.
We prove a group isomorphism between this group and a certain group
of infinite invertible upper triangular matrices with entries in the $p$-adic integers. We
determine information about the matrix corresponding to the automorphism $1\wedge\Psi^q$ of $\ell\wedge\ell$,
where $\Psi^q$ is the Adams operation and $q$ is an integer which generates the $p$-adic units.
\end{abstract}
\keywords{$K$-theory operations, upper triangular technology}
\subjclass[2010]{Primary: 55S25;
Secondary: 19L64,
11B65.
}
\date{$18^{\text{th}}$ April 2012}
\maketitle
\section{Introduction}
\label{SecIntro}
We prove analogues for odd primes of results of Snaith and Barker-Snaith~\cite{Snaith-utt, BaSn}.
Let $\ell$ denote the Adams summand of the $p$-complete connective $K$-theory spectrum and let
$\mathcal{A}ut_{\text{left-}\ell\text{-mod}}^0(\ell\wedge\ell)$ be the group of left $\ell$-module automorphisms
of $\ell\wedge\ell$ in the stable homotopy category which induce the identity on mod $p$ homology.
The first main result is Theorem~\ref{iso}, which gives a group isomorphism between this group and a certain group
of infinite invertible upper triangular matrices with entries in the $p$-adic integers. The second main result is
Theorem~\ref{isomatrix}, which determines information about the matrix corresponding to the automorphism $1\wedge\Psi^q$ of $\ell\wedge\ell$,
where $\Psi^q$ is the Adams operation and $q$ is an integer which generates the $p$-adic units $\mathbb{Z}_p^\times$.
An application is given to the important maps $1\wedge\varphi_n$ where $\varphi_n=(\Psi^q-1)(\Psi^q-\hat{q})\dots(\Psi^q-\hat{q}^{n-1})$
and $\hat{q}=q^{p-1}$.
While the general strategy of the proofs is the same as in the $2$-primary case, there
are differences in algebraic and combinatorial details. In places we obtain entirely new information.
Notably, Theorem~\ref{app} gives a new closed formula, involving $q$-binomial coefficients, for
each entry in the matrix corresponding to the map $1\wedge\varphi_n$.
This article is organized as follows. Section 2 contains the proof of the
upper triangular matrix result. Section 3 presents an explicit basis for the
torsion-free part of $\pi_*(\ell\wedge \ell)$. This is used in Section 4 to
obtain information about the matrix corresponding to $1\wedge \Psi^q$. Applications
are discussed in Section 5 and there is a short appendix about the $q$-binomial theorem.
This paper is based on work in the Ph.D. thesis of the first author~\cite{stanley},
produced under the supervision of the second author.
\section{Upper triangular technology}
\label{SecUTT}
In this section we prove the odd primary analogue of a theorem of Snaith~\cite[Theorem 1.2]{Snaith-utt}; see also~\cite[Theorem $3.1.2$]{Vic'sBook}. This provides an identification between a group of
$p$-adic infinite upper triangular matrices and certain operations for the Adams summand of complex connective $K$-theory.
Let $ku$ be the $p$-adic connective complex $K$-theory spectrum and let $\ell$ be the $p$-adic Adams summand.
\begin{dfn}
Let $\End_{\text{left-}\ell\text{-mod}}(\ell\wedge\ell)$ be the ring of left $\ell$-module endomorphisms of $\ell\wedge\ell$ of degree zero
in the stable homotopy category
and let $\mathcal{A}ut_{\text{left-}\ell\text{-mod}}(\ell\wedge\ell)$ be the group of units of this ring. Denote by $\mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge\ell)$ the subgroup consisting of those homotopy equivalences which induce the identity map in mod $p$ homology.
\end{dfn}
\begin{dfn}\label{matrix group}
Consider the group (under matrix multiplication) of invertible infinite upper triangular matrices with entries in the $p$-adic integers $\mathbb{Z}_p$.
An element is a matrix $X=(X_{i,j})$ for $i,j\in\mathbb{N}_0$, where $X_{i,j}\in\mathbb{Z}_p$, $X_{i,j}=0$ for $i>j$,
and $X_{i,i}\in \mathbb{Z}_p^\times$.
Let $U_\infty\mathbb{Z}_p$ be the subgroup with all diagonal entries lying in the subgroup $1+p\mathbb{Z}_p$ of $\mathbb{Z}_p^\times$.
\end{dfn}
The main theorem of this section is as follows.
\begin{thm}\label{iso}
There is an isomorphism of groups
$$
\Lambda:U_\infty\mathbb{Z}_p\xrightarrow{\cong}\mathcal{A}ut^0_{\textup{left-}\ell\textup{-mod}}(\ell\wedge \ell).
$$
\end{thm}
The method of proof is essentially that used by Snaith to prove the analogous result for $p=2$, but there are differences of detail.
A basic ingredient in the proof is the following splitting
$$
\ell\wedge\ell\simeq\ell\wedge\bigvee_{n\geqslant0}\mathcal{K}(n).
$$
This goes back to work of Kane~\cite{Kane}. (He claimed the result in the $p$-local setting. In~\cite{cdgm} a gap in his argument was identified
and fixed in the $p$-complete situation.) The spectra $\mathcal{K}(n)$ appearing in the splitting are suspensions of Brown-Gitler spectra,
realising a weight filtration of the homology of $\Omega^2S^3\langle 3\rangle_p$. They are $p$-complete finite spectra.
(We remark that in the $2$-primary case studied by
Snaith, the pieces of the splitting should also be suspensions of Brown-Gitler spectra,
rather than the finite complexes $F_{4n}/F_{4n-1}$.)
The splitting means that it is enough to study left $\ell$-module maps of the form
$\varphi_{m,n}:\ell\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$
for each $m$, $n\geqslant0$.
We use a suitable Adams spectral sequence to identify particular maps
$\iota_{m,n}:\ell\wedge\mathcal{K}(m)\rightarrow\ell\wedge\mathcal{K}(n)$
which are represented by generators of certain groups on the $E_2$ page of the spectral sequence. These
maps $\iota_{m,n}$ are used to define the required isomorphism.
The work in this section is organised as follows. Firstly,
we set up the required Adams spectral sequence. Next
we establish the stable isomorphism class of
the mod $p$ cohomology of $\mathcal{K}(n)$ in order to simplify the $E_2$ term.
We note that the spectral sequence collapses at the $E_2$ term. We then pick generators of certain groups on the $E_2$ page
to give the maps $\iota_{m,n}$ used in the definition of the map. The spectral sequence is then further analysed to show that
this map is bijective. Finally we show that the choice of the maps $\iota_{m,n}$ can be made in such a way that $\Lambda$ is a group isomorphism.
Since we consider left $\ell$-module maps, a map $\varphi_{m,n}$ as above is determined by its restriction to $S^0\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$. This is an element of the homotopy group
$$
[\mathcal{K}(m),\ell\wedge\mathcal{K}(n)]=[S^0,\ell\wedge\mathcal{K}(n)\wedge D(\mathcal{K}(m))]_p.
$$
Here we are abusing notation slightly by writing $D(\mathcal{K}(m))$ to mean the $p$-completion of the Spanier-Whitehead dual of the finite spectrum $Y$,
where $\mathcal{K}(m)\simeq Y_p$.
We write $\mathcal{A}_p$ for the mod $p$ Steenrod algebra and we let $B=E[Q_0, Q_1]\subset \mathcal{A}_p$ be the exterior
subalgebra generated by $Q_0=\beta$ and $Q_1$, where $Q_0$ has degree $1$ and $Q_1$ has degree $2p-1$.
All homology and cohomology groups will be with coefficients in $\mathbb{Z}/p$ unless explicitly stated otherwise; we omit the coefficients from
the notation.
We use the Adams spectral sequence with $E_2$ term
\begin{equation}
\label{ASS}
E_2^{s,t}=\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(m))),\mathbb{Z}/p)
\end{equation}
and which converges to
$$
E^{s,t}_\infty=[S^0,\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(m))]_{t-s}\otimes\mathbb{Z}_p
=\pi_{t-s}(\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(m)))\otimes\mathbb{Z}_p.
$$
\noindent The $E_2$ term simplifies in a standard way as follows.
\begin{align}
E_2^{s,t}&=\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(m))),\mathbb{Z}/p)\notag\\
&\cong\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell)\otimes H^*(\mathcal{K}(n))\otimes H^*(D(\mathcal{K}(m))),\mathbb{Z}/p)\notag\\
&\cong\Ext_{\mathcal{A}_p}^{s,t}((\mathcal{A}_p\otimes_B\mathbb{Z}/p) \otimes H^*(\mathcal{K}(n))\otimes H^*(D(\mathcal{K}(m))),\mathbb{Z}/p)\notag\\
&\cong\Ext_{\mathcal{A}_p}^{s,t}(\mathcal{A}_p\otimes_B (H^*(\mathcal{K}(n))\otimes H^*(D(\mathcal{K}(m)))),\mathbb{Z}/p)\notag\\
&\cong\Ext_B^{s,t}(H^*(\mathcal{K}(n)) \otimes H^*(D(\mathcal{K}(m))) ,\mathbb{Z}/p).\notag
\end{align}
The first two isomorphisms hold by the K\"{u}nneth theorem and by the fact that $H^*(\ell)\cong\mathcal{A}_p\otimes_B\mathbb{Z}/p$ (see~\cite[Part III, Proposition $16.6$]{AdamsSH}), respectively.
Next we use the isomorphism of left $\mathcal{A}_p$-modules
$$
(\mathcal{A}_p\otimes_B\mathbb{Z}/p)\otimes M\cong\mathcal{A}_p\otimes_B M
$$
where $\mathcal{A}_p$ acts diagonally on the
left-hand side by the comultiplication and on the right-hand side by multiplication within $\mathcal{A}_p$;
see~\cite[Part III, Proof of Proposition $16.1$]{AdamsSH}. Finally we use a standard change of rings isomorphism.
To simplify the $E_2$ term further, we use the theory of stable isomorphism classes; see~\cite[Part III, Chapter 16]{AdamsSH}.
Stable isomorphism of modules over $B$ will be denoted by $\approxeq$.
The stable classes we need are expressible in terms of two basic
$B$-modules, the augmentation ideal $I$ of $B$ and the $B$-module $\Sigma$ with a single copy of $\mathbb{Z}/p$ in degree $1$.
We denote the $a$-fold tensor power of $I$ by $I^a$, and similarly for $\Sigma$.
We write $\nu_p$ for the $p$-adic valuation function.
\begin{thm}\label{stable Kn}
There are stable isomorphisms
\begin{align*}
H^*(\mathcal{K}(n))&\approxeq\Sigma^{2n(p-1)-\nu_p(n!)}I^{\nu_p(n!)},\\
H^*(D(\mathcal{K}(n)))&\approxeq\Sigma^{\nu_p(n!)-2n(p-1)}I^{-\nu_p(n!)}.
\end{align*}
\end{thm}
\begin{proof}
From~\cite[Lemma 8:3, Lemma 8:4]{Kane}, we have the following calculations
of the $Q_0$ and $Q_1$ homology of $H^*(\mathcal{K}(n))$, each of which is
concentrated in a single degree.
\begin{align*}
H(H^*(\mathcal{K}(n));Q_0)=\mathbb{Z}/p &\text{\ \ in dimension $2n(p-1)$ and}\\
H(H^*(\mathcal{K}(n));Q_1)=\mathbb{Z}/p &\text{\ \ in dimension $2(p-1)(\nu_p(n!)+n)$}.
\end{align*}
The first stable isomorphism then follows from~\cite[Part III, Theorem $16.3$]{AdamsSH}.
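As a check on the exponents, recall that the $Q_0$- and $Q_1$-homologies of $\Sigma$ are concentrated in degree $1$, while those of $I$ are concentrated in degrees $1$ and $2p-1$ respectively, and these degrees add under tensor products. Hence $\Sigma^{2n(p-1)-\nu_p(n!)}I^{\nu_p(n!)}$ has $Q_0$-homology in dimension $2n(p-1)$ and $Q_1$-homology in dimension
$$
2n(p-1)-\nu_p(n!)+(2p-1)\nu_p(n!)=2(p-1)(n+\nu_p(n!)),
$$
in agreement with the calculations above.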
The Universal Coefficient Theorem gives us the $B$-module isomorphism
$$
H^*(\mathcal{K}(n))\cong\Hom^*_{\mathbb{Z}/p}(H_{-*}(\mathcal{K}(n)),\mathbb{Z}/p).
$$
For any invertible $B$-module, its linear dual is its inverse stable isomorphism class, by~\cite[Part III, Lemma 16.3(i)]{AdamsSH},
so it follows from the above that
$$
H_{-*}(\mathcal{K}(n))\approxeq\Sigma^{\nu_p(n!)-2n(p-1)}I^{-\nu_p(n!)}.
$$
Then Spanier-Whitehead duality gives us the $B$-module isomorphism
$$
H^*(D(\mathcal{K}(n)))\cong H_{-*}(\mathcal{K}(n)),
$$
which gives the second stable isomorphism.
\end{proof}
\begin{cor}
In the spectral sequence~(\ref{ASS}) we have for $s>0$,
$$
E^{s,t}_2\cong\Ext_B^{s+\nu_p(n!)-\nu_p(m!),t-2(n-m)(p-1)+\nu_p(n!)-\nu_p(m!)}(\mathbb{Z}/p,\mathbb{Z}/p).
$$
\end{cor}
\begin{proof}
This follows from Theorem~\ref{stable Kn} and
the standard dimension-shifting isomorphisms of $\Ext$ groups
\begin{equation}\label{dimshift}
\begin{aligned}
\Ext^{s,t}_B(I\otimes M,\mathbb{Z}/p)&\cong\Ext^{s+1,t}_B(M,\mathbb{Z}/p)\\
\Ext^{s,t}_B(\Sigma\otimes M,\mathbb{Z}/p)&\cong\Ext^{s,t-1}_B(M,\mathbb{Z}/p)
\end{aligned}
\end{equation}
for $s>0$ and $M$ a $B$-module.
\end{proof}
\begin{lem}
The spectral sequence~(\ref{ASS}) collapses at the $E_2$ term.
\end{lem}
\begin{proof}
Recall that
$\Ext^{*,*}_B(\mathbb{Z}/p,\mathbb{Z}/p)=\mathbb{Z}/p[c,d]$ where $c\in\Ext_B^{1,1}$ and $d\in\Ext_B^{1,2p-1}$.
It follows from the above that, away from the line $s=0$, all non-zero terms are in even total degrees,
so there are no non-trivial differentials when $s>0$.
Showing there are no non-trivial differentials when $s=0$ can be done by the method of~\cite[Part III, Lemma 17.12]{AdamsSH}.
Consider an element $e\in E_2^{0,t}$ where $t$ is odd (if $t$ is even, there can be no non-trivial differentials for degree reasons).
We proceed by induction. Suppose that $d_i=0$ for $i<r$, so $E_2^{s,t}\cong E_r^{s,t}$.
We have $ce=0$ as this lies in odd total degree, hence $d_r(ce)=0$.
But $cd_r(e)=d_r(ce)$ because this is a spectral sequence of modules over $\Ext_B^{*,*}(\mathbb{Z}/p,\mathbb{Z}/p)$,
so $cd_r(e)=0$. Away from the $s=0$ line, the $E_2=E_r$ page of the spectral sequence reduces to a polynomial algebra
with $c$ corresponding to one of the generators, so multiplication by $c$ is a monomorphism on $E_r^{s,t}$ for $s>0$. Thus
$d_r(e)=0$ which completes the induction.
\end{proof}
\begin{dfn}\label{iota}
For $m\geqslant n$,
let ${\iota_{m,n}}:\ell\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$ be a map which is represented in the spectral sequence by a choice of generator of
$$
E_2^{m-n-\nu_p(n!)+\nu_p(m!),m-n-\nu_p(n!)+\nu_p(m!)}.
$$
Also let $\iota_{m,m}$ be the identity on $\ell\wedge\mathcal{K}(m)$.
\end{dfn}
\begin{prop}\label{lambda bij}
There is a bijective map
$$
\Lambda:U_\infty\mathbb{Z}_p\rightarrow\mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell),$$
given by
$$
X\mapsto\sum_{m\geqslant n}X_{n,m}\iota_{m,n}:\ell\wedge(\bigvee_{i\geqslant0}\mathcal{K}(i))\rightarrow \ell\wedge(\bigvee_{i\geqslant0}\mathcal{K}(i)).
$$
\end{prop}
\begin{proof}
Firstly, we check that $\Lambda$ does have the correct target.
Note that a left $\ell$-module endomorphism of $\ell\wedge\ell$ which induces the identity
on mod $p$ homology corresponds to a collection of maps $\varphi_{m,n}: \ell\wedge \mathcal{K}(m)\rightarrow \ell\wedge \mathcal{K}(n)$,
where $\varphi_{m,n}$ induces the zero map for $m\neq n$ and each $\varphi_{m,m}$ induces the identity.
In the spectral sequence, elements represented in the $s=0$ line are detected in mod $p$ homology.
So, for $m\neq n$, we are interested in elements of
$\pi_0(\ell\wedge\mathcal{K}(n)\wedge D(\mathcal{K}(m)))\otimes \mathbb{Z}_p$
represented in the spectral sequence in $E_2^{s,s}=E_\infty^{s,s}$
with $s>0$.
We have $E_2^{s,s}=\Ext^{u,v}_B(\mathbb{Z}/p,\mathbb{Z}/p)$ for $s>0$, where
\begin{align*}
u&=s+\nu_p(n!)-\nu_p(m!),\\
v&=s-2(n-m)(p-1)+\nu_p(n!)-\nu_p(m!).
\end{align*}
So $v-u=2(m-n)(p-1)$. If $n>m$ then $u>v$ and these groups are all zero.
This explains the range $m\geqslant n$ in the definition of the map $\Lambda$.
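Indeed, a non-zero class in $\Ext_B^{u,v}(\mathbb{Z}/p,\mathbb{Z}/p)=\mathbb{Z}/p[c,d]$, where $c\in\Ext_B^{1,1}$ and $d\in\Ext_B^{1,2p-1}$, is a scalar multiple of some monomial $c^ad^b$, which lies in bidegree
$$
(u,v)=(a+b,\,a+(2p-1)b),
$$
so that $v-u=2(p-1)b\geqslant0$; thus $\Ext_B^{u,v}(\mathbb{Z}/p,\mathbb{Z}/p)=0$ whenever $u>v$.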
It is clear that $\sum_{m\geqslant n}X_{n,m}\iota_{m,n}$ defines a left-$\ell$-module endomorphism of $\ell\wedge\ell$.
It is easy to check that it is invertible because the coefficient $X_{m,m}$ of each identity map $\iota_{m,m}$ is a unit. For $m\neq n$,
we have chosen $\iota_{m,n}$ represented away from the $s=0$ line, so this map
induces the zero map on mod $p$ homology. Evidently $\iota_{m,m}$ induces the identity map, and since its coefficient
lies in $1+p\mathbb{Z}_p$ the resulting map on $\ell\wedge\ell$ induces the identity on
mod $p$ homology. Hence $\Lambda$ does take values in $\mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell)$.
To show that $\Lambda$ is bijective, we consider non-trivial homotopy classes of left-$\ell$-module maps of the form
$\varphi_{m,n}:\ell\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$,
where $m\geqslant n$, such that $\varphi_{m,n}$
induces the identity on mod $p$ homology if $m=n$ and induces zero if $m>n$.
We start with $m>n$. As we have seen, a map $\varphi_{m,n}$ as above is
represented in the spectral sequence in $E_2^{s,s}=E_\infty^{s,s}$ with $s>0$.
Any non-zero $E_2^{s,s}=\Ext_B^{u,v}(\mathbb{Z}/p,\mathbb{Z}/p)$ group is isomorphic to $\mathbb{Z}/p$ generated by
$$
c^{s+n-m+\nu_p(n!)-\nu_p(m!)}d^{m-n}.
$$
This group is non-zero precisely when $s\geqslant(m-n)-\nu_p(n!)+\nu_p(m!)$.
So the map $\varphi_{m,n}$ is represented in
$$
E_\infty^{j+m-n-\nu_p(n!)+\nu_p(m!),j+m-n-\nu_p(n!)+\nu_p(m!)}
$$
for some integer $j\geqslant 0$.
\noindent If
$$
E_\infty^{m-n-\nu_p(n!)+\nu_p(m!),m-n-\nu_p(n!)+\nu_p(m!)}=\mathbb{Z}/p\{x\}
$$
then
$$
E_\infty^{j+m-n-\nu_p(n!)+\nu_p(m!),j+m-n-\nu_p(n!)+\nu_p(m!)}=\mathbb{Z}/p\{c^jx\}.
$$
The ring structure of the spectral sequence yields that multiplication by $c$ in the spectral sequence
corresponds to multiplication by $p$ on $\pi_0(\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(m)))\otimes\mathbb{Z}_p$, so we see
that
$$
\varphi_{m,n}=\gamma p^j\iota_{m,n}
$$
for some $p$-adic unit $\gamma$ and integer $j\geqslant 0$.
If $m=n$, we need to consider the terms $E_2^{s,s}=E_\infty^{s,s}$ for $s\geqslant 0$.
Here we see that $E_2^{s,s}=E_\infty^{s,s}=\mathbb{Z}/p\{c^s\}$ for $s\geqslant 0$. Again multiplication
by $c$ corresponds to multiplication by $p$ on $\pi_0(\ell \wedge \mathcal{K}(n) \wedge D(\mathcal{K}(n)))\otimes\mathbb{Z}_p$.
Thus
$$
\varphi_{m,m}=\gamma p^j\iota_{m,m}
$$
for some $p$-adic unit $\gamma$ and integer $j\geqslant 0$. The map $\varphi_{m,m}$ induces the identity on mod $p$ homology if and only if $j=0$ and $\gamma\in 1+p\mathbb{Z}_p$, corresponding to the condition that the diagonal entries of the matrix lie in $1+p\mathbb{Z}_p$.
This shows that, for each collection of maps $\varphi_{m,n}$ corresponding to an element of the target $\mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell)$, there is a unique choice of $X_{n,m}\in\mathbb{Z}_p$ for $m> n$ and $X_{m,m}\in 1+p\mathbb{Z}_p$, such that the map
is the image under $\Lambda$ of the matrix $X$.
\end{proof}
We now fix choices of the maps $\iota_{m,n}$ in such a way that $\Lambda$ is a group isomorphism.
\begin{prop}\label{extprod}
We can choose the maps $\iota_{m,n}$ as follows.
As before let $\iota_{m,m}$ be the identity map on $\ell\wedge\mathcal{K}(m)$, let $\iota_{m+1,m}$ be as already described, then let
$$
\iota_{m,n}=\iota_{n+1,n}\iota_{n+2,n+1}\cdots\iota_{m,m-1}
$$
for all $m>n+1$. Then
\begin{displaymath}
\iota_{m,n}\iota_{k,l}=\begin{cases}
\iota_{k,n} & \textrm{ if } k\geqslant l=m\geqslant n,\\
0 & \textrm{ otherwise,}\end{cases}
\end{displaymath}
and with these choices $\Lambda$ is an isomorphism of groups.
\end{prop}
\begin{proof}
Let $\iota_{m,n}$ be any choice of generator as in Definition~\ref{iota}. To justify that these can be chosen as above,
we need to consider the relationship between the product $\iota_{m,n}\iota_{k,m}$ and $\iota_{k,n}$,
for $k>m>n$. Let $s(m,n)=m-n-\nu_p(n!)+\nu_p(m!)$, then $\iota_{m,n}$ is represented by a generator of
$$
\Ext^{s(m,n),s(m,n)}_B(\Sigma^{2(n-m)(p-1)+\nu_p(m!)-\nu_p(n!)}I^{\nu_p(n!)-\nu_p(m!)},\mathbb{Z}/p),
$$
$\iota_{k,m}$ is represented by a generator of
$$
\Ext^{s(k,m),s(k,m)}_B(\Sigma^{2(m-k)(p-1)+\nu_p(k!)-\nu_p(m!)}I^{\nu_p(m!)-\nu_p(k!)},\mathbb{Z}/p)
$$
and $\iota_{k,n}$ is represented by a generator of
$$
\Ext^{s(k,n),s(k,n)}_B(\Sigma^{2(n-k)(p-1)+\nu_p(k!)-\nu_p(n!)}I^{\nu_p(n!)-\nu_p(k!)},\mathbb{Z}/p).
$$
The product $\iota_{m,n}\iota_{k,m}$ is represented by the product of the representatives under the pairing of $\Ext$ groups
$$
\Ext^{s,s}(\Sigma^aI^b,\mathbb{Z}/p)\otimes \Ext^{s',s'}(\Sigma^{a'}I^{b'},\mathbb{Z}/p)\rightarrow\Ext^{s+s',s+s'}(\Sigma^{a+a'}I^{b+b'},\mathbb{Z}/p)
$$
induced by the isomorphism $\Sigma^aI^b\otimes\Sigma^{a'}I^{b'}\cong\Sigma^{a+a'}I^{b+b'}$. We can identify this pairing using the following
commutative diagram.
\footnotesize\begin{equation*}
\xymatrix{
\Ext^{s,s}(\Sigma^aI^b,\mathbb{Z}/p)\otimes\Ext^{s',s'}(\Sigma^{a'}I^{b'},\mathbb{Z}/p) \ar[r] \ar[d]_{\cong} & \Ext^{s+s',s+s'}(\Sigma^{a+a'}I^{b+b'},\mathbb{Z}/p) \\
\Ext^{s+b,s-a}(\mathbb{Z}/p,\mathbb{Z}/p)\otimes\Ext^{s'+b',s'-a'}(\mathbb{Z}/p,\mathbb{Z}/p) \ar[r] & \Ext^{s+s'+b+b',s+s'-a-a'}(\mathbb{Z}/p,\mathbb{Z}/p) \ar[u]_{\cong} }
\end{equation*}
\normalsize
The bottom pairing is the Yoneda splicing and it is an isomorphism when all the groups are non-zero, as any non-zero $\Ext$ group here is a copy of $\mathbb{Z}/p$. The vertical isomorphisms are the dimension-shifting isomorphisms. So the top pairing is an isomorphism whenever the groups are non-zero
and since $s(k,m)+s(m,n)=s(k,n)$ this holds in our case. Hence up to a $p$-adic unit $u_{k,m,n}$ we have
$$
\iota_{m,n}\iota_{k,m}=u_{k,m,n}\iota_{k,n},
$$
and we can choose the maps $\iota_{m,n}$ as stated above.
Now $\Lambda$ is a group isomorphism because
\begin{align*}
\Lambda(X)\Lambda(Y)&=\left(\sum_{m\geqslant n}X_{n,m}\iota_{m,n}\right)\left(\sum_{k\geqslant l}Y_{l,k}\iota_{k,l}\right)
=\sum_{k\geqslant l=m\geqslant n}X_{n,m}Y_{l,k}\iota_{m,n}\iota_{k,l}\\
&=\sum_{k\geqslant l\geqslant n}X_{n,l}Y_{l,k}\iota_{k,n}
=\sum_{k\geqslant n}(XY)_{n,k}\iota_{k,n}\\
&=\Lambda(XY).\qedhere
\end{align*}
\end{proof}
\noindent Hence we have now proved Theorem~\ref{iso}.
\section{A basis of the torsion-free part of $\pi_*(\ell\wedge\ell)$}
\label{Secbasis}
In this section, we find a basis for the torsion-free part of the homotopy groups $\pi_*(\ell\wedge\ell)$.
To do this we follow methods introduced by Adams in~\cite{AdamsSH}. We then study some of the properties of
this basis including how it relates to Kane's splitting. We explore its behaviour with
relation to the Adams spectral sequence in order to assess the effect of the maps $(\iota_{m,n})_*$.
This will allow us to compare with the effect of $(1\wedge\Psi^q)_*$
and hence, in the next section, to deduce information about the matrix corresponding to $1\wedge\Psi^q$
under the isomorphism $\Lambda$.
It would be interesting to compare the basis that we find here with elements
of the torsion-free part of $\pi_*(\ell\wedge\ell)$
studied in~\cite[\S9,10]{br}. We hope to return to this in future work.
\subsection{A basis}
\label{subsec:basis}
We consider the torsion-free part of $\pi_*(\ell\wedge\ell)$ by considering its image in
$\pi_*(\ell\wedge\ell)\otimes\mathbb{Q}_p=\mathbb{Q}_p[\hat{u},\hat{v}]$,
where $\pi_*(\ell)=\mathbb{Z}_p[\hat{u}]$.
We fix a choice of $q$ primitive modulo $p^2$, so that $q$ is a topological
generator of the $p$-adic units $\mathbb{Z}_p^\times$ and we let $\hat{q}=q^{p-1}$.
We also adopt the notation $\rho=2(p-1)$.
The integrality conditions governing the image can be found as in~\cite[Part III, Theorem 17.5]{AdamsSH}
and are as follows.
\begin{prop}\label{subring}
For $f(\hat{u},\hat{v})\in\mathbb{Q}_p[\hat{u},\hat{v}]$ to be in the image
of $\pi_*(\ell\wedge\ell)$ it is necessary and sufficient for $f$ to satisfy the following two conditions.
\newcounter{itemcounter}
\begin{list}
{(\arabic{itemcounter})}
{\usecounter{itemcounter}\leftmargin=0.5em}
\item $f(kt,lt)\in\mathbb{Z}_p[t]$ for all $k$, $l\in 1+p\mathbb{Z}_p$.
\item $f(\hat{u},\hat{v})$ is in the subring $\mathbb{Z}_p[\frac{\hat{u}}{p},\frac{\hat{v}}{p}]$.\qed
\end{list}
\end{prop}
We begin with the following polynomials. They have been chosen by starting from the basis elements for
$L_0(l)$ given in~\cite[Proposition 4.2]{CCW2} and multiplying each by a suitable power of $\hat{u}$
and a suitable power of $p$ to bring it into $\mathbb{Z}_p[\frac{\hat{u}}{p},\frac{\hat{v}}{p}]$.
\begin{dfn}\label{fk}
Define
\begin{align*}
c_{k}&=\prod_{i=0}^{k-1}\frac{\hat{v}-\hat{q}^{i}\hat{u}}{\hat{q}^k-\hat{q}^{i}},\\
f_{k}&=p^{\nu_p(k!)}c_{k}.
\end{align*}
\end{dfn}
\noindent It is easy to check that the elements $f_{k}$ lie in $\mathbb{Z}_p[\frac{\hat{u}}{p},\frac{\hat{v}}{p}]$ for all $k\in\mathbb{N}_0$, using
that $\nu_p\left(\prod_{i=0}^{k-1}(\hat{q}^k-\hat{q}^{i})\right)=\nu_p(k!)+k$.
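The valuation identity just used is easy to confirm numerically. The following sketch (not part of the paper; it makes the illustrative choices $p=3$ and $q=2$, a primitive root modulo $9$, so $\hat{q}=4$) checks it for small $k$:

```python
# Sanity check (illustrative, p = 3, q = 2, qhat = q^(p-1) = 4): verify
#   nu_p( prod_{i=0}^{k-1} (qhat^k - qhat^i) ) = nu_p(k!) + k
# for small k, where nu_p is the p-adic valuation.
from math import factorial

def nu(p, n):
    """p-adic valuation of a nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p, q = 3, 2
qhat = q ** (p - 1)

for k in range(1, 9):
    prod = 1
    for i in range(k):
        prod *= qhat ** k - qhat ** i
    assert nu(p, prod) == nu(p, factorial(k)) + k
print("valuation identity verified for k = 1..8")
```

The identity follows from $\hat{q}^k-\hat{q}^i=\hat{q}^i(\hat{q}^{k-i}-1)$ together with $\nu_p(\hat{q}^j-1)=1+\nu_p(j)$, which holds because $q$ is primitive modulo $p^2$.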
\begin{dfn}\label{basis elements}
Let
$$
F_{i,j,k}=\hat{u}^i\left(\frac{\hat{u}}{p}\right)^jf_{k}.
$$
\end{dfn}
\noindent Now the method of Adams leads to the analogue of~\cite[Part III, Proposition 17.6]{AdamsSH}.
\begin{thm}\label{basis}
\begin{list}
{(\arabic{itemcounter})}
{\usecounter{itemcounter}\leftmargin=0.5em}
\item The intersection of the subring satisfying condition (1) of Proposition~\ref{subring} with $\mathbb{Q}_p[\hat{u},\hat{v}]$ is free on the $\mathbb{Z}_p[\hat{u}]$-basis $\{c_{k}\,|\,k\geqslant 0\}$.
\item A $\mathbb{Z}_p$-basis for $\pi_*(\ell\wedge\ell)/\textup{Torsion}$ is given by the polynomials $F_{i,j,k}$,
where $k\geqslant 0$, $0\leqslant j\leqslant \nu_p(k!)$ with $i=0$ if $j<\nu_p(k!)$ and $i\geqslant0$ if $j=\nu_p(k!)$.
\end{list}
\end{thm}
\begin{proof}
Firstly, it follows from~\cite[Proposition 4.2]{CCW2} that the elements $c_{k}$ satisfy
Proposition~\ref{subring} condition $(1)$, but will not do so if divided by
more $p$'s. It follows that $\left(\frac{\hat{u}}{p}\right)^{\nu_p(k!)}f_{k}$ satisfies
this condition, but $\left(\frac{\hat{u}}{p}\right)^{\nu_p(k!)+1}f_{k}$ does not.
To prove part (1), note that the $c_{k}$ are clearly linearly independent. Consider a polynomial $f(\hat{u},\hat{v})\in\mathbb{Q}_p[\hat{u},\hat{v}]$
satisfying condition (1) of Proposition~\ref{subring} and suppose that $f$ is homogeneous of degree $\rho n$.
We can write $f$ as
$$
f(\hat{u},\hat{v})=\lambda_0\hat{u}^n+\lambda_1\hat{u}^{n-1}c_{1}+\lambda_2\hat{u}^{n-2}c_{2}+\cdots.
$$
Assume as an inductive hypothesis that $\lambda_0,\lambda_1,\ldots,\lambda_{s-1}$ lie in $\mathbb{Z}_p$. Let the sum of the remaining terms be
$$
g(\hat{u},\hat{v})=\lambda_s\hat{u}^{n-s}c_{s}+\lambda_{s+1}\hat{u}^{n-s-1}c_{s+1}+\cdots.
$$
This sum must also satisfy condition (1) of Proposition~\ref{subring}. Thus
$g(t,\hat{q}^st)=\lambda_s t^n\in\mathbb{Z}_p[t]$
and hence $\lambda_s\in\mathbb{Z}_p$. The initial case for $\lambda_0$ works in the same way and this completes the induction.
Thus we can write $f$ as a $\mathbb{Z}_p[\hat{u}]$-linear combination of the $c_{k}$s.
To prove part (2), write
$$
n_k:=\text{ numerator of }c_{k}=\prod_{i=0}^{k-1}(\hat{v}-\hat{q}^{i}\hat{u}).
$$
In degree $\rho k$ there are $k+1$ $\mathbb{Q}_p$-basis elements
$$
n_k,\hat{u}n_{k-1},\hat{u}^2n_{k-2},\ldots,\hat{u}^k.
$$
In order to produce the elements $F_{i,j,k}$ we divided each of the above elements by the highest power of $p$ which leaves it satisfying both conditions (1) and (2) of Proposition~\ref{subring}. For the element $\hat{u}^i n_s$, this is $\min\{p^{s+\nu_p(s!)},p^{s+i}\}$. Now consider an element $f(\hat{u},\hat{v})\in\mathbb{Q}_p[\hat{u},\hat{v}]$, homogeneous of degree $\rho k$, which satisfies conditions (1) and (2) of Proposition~\ref{subring}. We can write $f$ as
$$
f(\hat{u},\hat{v})=\frac{\lambda_0}{p^{a_0}}\hat{u}^k+\frac{\lambda_1}{p^{a_1}}\hat{u}^{k-1}n_1
+\frac{\lambda_2}{p^{a_2}}\hat{u}^{k-2}n_2+\cdots
$$
where $\lambda_i\in\mathbb{Z}_p$ for $i\geqslant 0$. By part (1), $a_s\leqslant \nu_p(\text{denominator of }c_{s})=s+\nu_p(s!)$.
We also claim that $a_s\leqslant (k-s)+s=k$.
Let the inductive hypothesis for a downwards induction be that $a_{s'}\leqslant k$ for $s'>s$. Let the sum of the remaining terms be
$$
g(\hat{u},\hat{v})=\frac{\lambda_0}{p^{a_0}}\hat{u}^k+\cdots+\frac{\lambda_s}{p^{a_s}}\hat{u}^{k-s}n_s,
$$
which must also satisfy conditions (1) and (2) of Proposition~\ref{subring}. The top coefficient $\frac{\lambda_s}{p^{a_s}}$ is the coefficient of $\hat{u}^{k-s}\hat{v}^s$ so because $g$ satisfies condition (2) of Proposition~\ref{subring} we must have that $a_s\leqslant (k-s)+s=k$. The first step of the induction works in the same way and the induction is complete.
Thus $f$ is a $\mathbb{Z}_p$-linear combination of the elements $F_{i,j,k}$.
\end{proof}
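The integrality claim for the elements $f_k$ can also be cross-checked by direct computation. The sketch below (an illustration only, again with the choices $p=3$, $q=2$; the helper names are ours, not the paper's) expands $c_k$ with exact rational coefficients and verifies that every coefficient of $f_k=p^{\nu_p(k!)}c_k$, rewritten in the monomials $(\hat{u}/p)^a(\hat{v}/p)^b$, is $p$-integral:

```python
# Check (illustrative, p = 3, q = 2, qhat = 4): the coefficients of
# f_k = p^{nu_p(k!)} c_k, written in the monomials (u/p)^a (v/p)^b,
# are p-integral, i.e. f_k lies in Z_p[u/p, v/p].
from fractions import Fraction
from math import factorial

def nu(p, n):
    """p-adic valuation of a nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def nu_frac(p, x):
    """p-adic valuation of a nonzero Fraction x."""
    return nu(p, x.numerator) - nu(p, x.denominator)

p, q = 3, 2
qhat = q ** (p - 1)

def c(k):
    """Coefficients of c_k as a dict {(a, b): Fraction} for u^a v^b."""
    poly = {(0, 0): Fraction(1)}
    for i in range(k):
        denom = qhat ** k - qhat ** i
        new = {}
        for (a, b), co in poly.items():
            # multiply by (v - qhat^i u) / (qhat^k - qhat^i)
            new[(a, b + 1)] = new.get((a, b + 1), Fraction(0)) + co / denom
            new[(a + 1, b)] = new.get((a + 1, b), Fraction(0)) - co * qhat ** i / denom
        poly = new
    return poly

for k in range(1, 6):
    vk = nu(p, factorial(k))  # f_k = p^vk * c_k
    for (a, b), co in c(k).items():
        if co:
            # coefficient of (u/p)^a (v/p)^b in f_k is p^(vk + a + b) * co
            assert vk + a + b + nu_frac(p, co) >= 0
print("f_k lies in Z_p[u/p, v/p] for k = 1..5")
```

Since $c_k$ is homogeneous of degree $k$ in $\hat{u},\hat{v}$, the check amounts to each coefficient of $c_k$ having valuation at least $-k-\nu_p(k!)+\nu_p(k!) = -k$ after the factor $p^{\nu_p(k!)}$ is included.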
\subsection{Properties of the Basis}
We now consider how the basis we have found above relates to Kane's splitting of $\ell\wedge\ell$.
\begin{dfn}
Let
$$
G_{m,n}=\frac{\pi_m(\ell\wedge\mathcal{K}(n))}{\text{Torsion}}.
$$
Then we have $$G_{*,*}=\bigoplus_{m,n}G_{m,n}\cong\frac{\pi_*(\ell\wedge\ell)}{\text{Torsion}}.$$
\end{dfn}
\begin{prop}\label{homotopyGmn}
For each $n\geqslant0$,
\begin{displaymath}
G_{m,n}=
\begin{cases}
\mathbb{Z}_p & \textrm{ if } m \textrm{ is a multiple of $\rho$ and } m\geqslant\rho n,\\
0 & \textrm{ otherwise.}
\end{cases}
\end{displaymath}
\end{prop}
\begin{proof}
This follows directly from the description of $\frac{\pi_*(\ell\wedge \mathcal{K}(n))}{\text{Torsion}}$ given in~\cite[Proposition 9:2]{Kane}.
\end{proof}
\begin{dfn}\label{gml}
For $m\geqslant l$, define the element $g_{m,l}\in\mathbb{Z}_p[\frac{\hat{u}}{p},\frac{\hat{v}}{p}]$ to be the
element produced from $f_{l}$ lying in degree $\rho m$, i.e.
\begin{displaymath}
g_{m,l}=\left\{\begin{array}{ll}
F_{0,m-l,l} & \textrm{ if } m\leqslant \nu_p(l!)+l,\\
F_{m-l-\nu_p(l!),\nu_p(l!),l} & \textrm{ if } m>\nu_p(l!)+l.
\end{array} \right.
\end{displaymath}
\end{dfn}
\begin{lem}\label{mhomotopy}
The elements $\{g_{m,l}:0\leqslant l\leqslant m\}$ form a basis for $G_{\rho m,*}$.
\end{lem}
\begin{proof}
The elements $\{g_{m,l}:0\leqslant l\leqslant m\}$ are precisely all of the basis elements $F_{i,j,k}$ which lie in homotopy degree $\rho m$.
\end{proof}
We will need to see that $\pi_*(\ell\wedge \ell)$ contains no torsion of order larger than $p$. To prove
this we need information about the stable class of $H^*(\ell)$.
\begin{prop}\label{stable l}
The stable isomorphism class of $H^*(\ell)$ as a $B$-module is
$$
\bigotimes_{i=0}^\infty\bigoplus_{j=0}^{p-1}\Sigma^{j(2p^{i+1}-2p^{i}-\pi_p(i))}I^{j\pi_p(i)},
$$
where $\pi_p(i)=\frac{p^i-1}{p-1}$.
\end{prop}
\begin{proof}
We calculate the $Q_0$ and $Q_1$ homologies of $H_{-*}(\ell)$ explicitly and
deduce its stable class and then dualise this statement to find the stable class of $H^*(\ell)$.
Recall that
$$
\pi_*(\ell\wedge H\mathbb{Z}/p)\cong H_*(\ell)\cong\mathbb{Z}/p[\bar{\xi}_1,\bar{\xi}_2,\ldots]\otimes E(\bar{\tau}_2,\bar{\tau}_3,\ldots),
$$
where a bar over an element denotes the image of that element under the anti-automorphism $\chi$ of the
dual Steenrod algebra $\mathcal{A}_p^*$; see~\cite{Knapp}.
The action of $Q_0$ sends $\bar{\xi}_i$ to zero and $\bar{\tau}_i$ to $-\bar{\xi}_i$. Using the K\"{u}nneth Theorem, we see
that the $Q_0$ homology of $\pi_{-*}(\ell\wedge H\mathbb{Z}/p)$ is isomorphic to $\mathbb{Z}/p[\bar{\xi}_1]$.
The action of $Q_1$ sends $\bar{\xi}_i$ to zero and $\bar{\tau}_i$ to $-\bar{\xi}_{i-1}^p$.
Again using the K\"{u}nneth Theorem, we see that the $Q_1$ homology of $\pi_{-*}(\ell\wedge H\mathbb{Z}/p)$ is isomorphic to ${\mathbb{Z}/p[\bar{\xi}_1,\bar{\xi}_2,\ldots]}/{(\bar{\xi}_1^p,\bar{\xi}_2^p,\ldots)}$.
Recall from~\cite[Part III, Proposition 16.4]{AdamsSH} that there are so-called lightning flash modules $M_i$
characterised as follows.
For $i\geqslant1$, $M_i$ is a finite-dimensional submodule of $\pi_{-*}(\ell\wedge H\mathbb{Z}/p)$ such that
\begin{list}
{(\arabic{itemcounter})}
{\usecounter{itemcounter}\leftmargin=0.5em}
\item $H(M_i;Q_0)\cong\mathbb{Z}/p$ generated by $\bar{\xi}_1^{p^{i-1}}$ and
\item $H(M_i;Q_1)\cong\mathbb{Z}/p$ generated by $\bar{\xi}_i$.
\end{list}
It follows that there is a stable isomorphism
$$
M_i\approxeq\Sigma^{-2p^i+2p^{i-1}+\pi_p(i-1)}I^{-\pi_p(i-1)}.
$$
We claim that the map
$$
\bigotimes_{i=1}^\infty\bigoplus_{j=0}^{p-1}M_i^j\rightarrow
\pi_{-*}(\ell\wedge H\mathbb{Z}/p)
$$
is a stable isomorphism.
Indeed it is clear that it is a $B$-module map and, again using the K\"{u}nneth
Theorem, we see that it induces an isomorphism on both $Q_0$ and $Q_1$ homology,
so it is a stable isomorphism by~\cite[Part III, Lemma 16.7]{AdamsSH}.
Dualising, we get a stable isomorphism
$$
H^*(\ell)\approxeq(1+M_1^*+{M_1^*}^2+\cdots+{M_1^*}^{p-1})(1+M_2^*+{M_2^*}^2+\cdots+{M_2^*}^{p-1})\ldots,
$$
and combining this with
$$
M_i^*\approxeq\Sigma^{2p^i-2p^{i-1}-\pi_p(i-1)}I^{\pi_p(i-1)}
$$
gives the result.
\end{proof}
\begin{prop}\label{p torsion}
Let $\tilde{G}_{m,n}=\pi_m(\ell\wedge\mathcal{K}(n))$, then $\tilde{G}_{m,n}\cong G_{m,n}\oplus W_{m,n}$ where $W_{m,n}$ is a finite elementary abelian $p$-group, i.e. $\tilde{G}_{m,n}$ contains no torsion of order larger than $p$.
\end{prop}
\begin{proof}
This is proved for the case $p=2$ in \cite[Part III, Chapter 17]{AdamsSH}; the odd primary analogue is similar. We require two conditions in order to apply the two results of Adams necessary to prove this. Firstly that $H_r(\ell\wedge\ell;\mathbb{Z})$ is finitely generated for each $r$, which is true (see \cite[p.353]{AdamsSH}), and secondly that, as a $B$-module, $H^*(\ell\wedge\ell)$ is stably isomorphic to $\oplus_i\Sigma^{a(i,p)}I^{b(i,p)}$ where $b(i,p)\geqslant0$ and $a(i,p)+b(i,p)\equiv0\pmod 2$.
For the second condition, using the K\"{u}nneth formula and Proposition~\ref{stable l},
$$
H^*(\ell\wedge\ell)\cong H^*(\ell)\otimes H^*(\ell)\approxeq
\left(
\bigotimes_{i=0}^\infty\bigoplus_{j=0}^{p-1}\Sigma^{j(2p^{i+1}-2p^{i}-\pi_p(i))}I^{j\pi_p(i)}
\right)^{\otimes 2}.
$$
Then it is clear that
the required condition holds for each summand and hence for $H^*(\ell\wedge\ell)$.
Given these assumptions we can now apply \cite[Part III, Lemma 17.1]{AdamsSH} which states that $H_*(ku\wedge\ell;\mathbb{Z})$ and hence $H_*(\ell\wedge\ell;\mathbb{Z})$ has no torsion of order higher than $p$. Then from \cite[Part III, Proposition 17.2(i)]{AdamsSH}, the Hurewicz homomorphism $$h:\pi_*(ku\wedge\ell)\rightarrow H_*(ku\wedge\ell;\mathbb{Z})$$ is a monomorphism. Since this is true of $ku\wedge\ell$ it follows that the same is true of $\ell\wedge\ell$ and so the result follows.
\end{proof}
\begin{dfn}
Consider the projection map
$$
\ell\wedge\ell\simeq\bigvee_{n\geqslant0}\ell\wedge\mathcal{K}(n)\rightarrow\ell\wedge\mathcal{K}(n)
$$
and let $P_n:G_{*,*}\rightarrow G_{*,n}$ be the induced projection map on homotopy modulo torsion.
\end{dfn}
\begin{lem}\label{proj}
$P_n(g_{n,l})=0$ if $l<n$.
\end{lem}
\begin{proof}
Since $G_{m,n}$ is torsion-free, we can consider just whether $P_n(g_{n,l})$ is zero in $G_{*,n}\otimes\mathbb{Q}_p$. Let $l<n$; then for some $\alpha(n,l)\in\mathbb{N}_0$,
$$
g_{n,l}=\frac{\hat{u}^{n-l}}{p^{\alpha(n,l)}}f_{l}\in
\hat{u}^{n-l}\frac{\pi_{\rho l}(\ell\wedge\ell)}{\text{Torsion}}\otimes\mathbb{Q}_p
\subset\frac{\pi_{\rho n}(\ell\wedge\ell)}{\text{Torsion}}\otimes\mathbb{Q}_p.
$$
Since $P_n$ is a left $\ell$-module map, $P_n(g_{n,l})$ is $\hat{u}^{n-l}$ times an element of
$G_{\rho l, n}$. But this group is zero, by Proposition~\ref{homotopyGmn}.
\end{proof}
\subsection{The Elements $z_{m}$}
Now we choose labels for generators of certain homotopy groups and compare with our basis elements.
\begin{dfn}
Let $z_{n}$ be a generator for $G_{\rho n,n}\cong\mathbb{Z}_p$ and let $\tilde{z}_{n}$ be any element in $\tilde{G}_{\rho n,n}\cong G_{\rho n,n}\oplus W_{\rho n,n}$ where the first co-ordinate is $z_{n}$.
\end{dfn}
\begin{prop}\label{0 or 1}
In the Adams spectral sequence
$$
E_2^{s,t}\cong\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell\wedge\mathcal{K}(n)),\mathbb{Z}/p)\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p
$$
the class of $\tilde{z}_{n}$ is represented in either $E_2^{0,\rho n}$ or $E_2^{1,\rho n+1}$.
\end{prop}
\begin{proof}
Consider the Adams spectral sequence
\begin{align*}
E_2^{s,t}&=\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell\wedge\mathcal{K}(n)),\mathbb{Z}/p)\\
& \cong\Ext_{\mathcal{A}_p}^{s,t}(H^*(\ell)\otimes H^*(\mathcal{K}(n)),\mathbb{Z}/p)\\
&\cong\Ext_B^{s,t}(H^*(\mathcal{K}(n)),\mathbb{Z}/p) \qquad \qquad\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p.
\end{align*}
By Theorem~\ref{stable Kn}, for $s>0$,
$$
E_2^{s,t}\cong\Ext_B^{s,t}(\Sigma^{\rho n-\nu_p(n!)}I^{\nu_p(n!)},\mathbb{Z}/p)
\cong\Ext_B^{s+\nu_p(n!),t-\rho n+\nu_p(n!)}(\mathbb{Z}/p,\mathbb{Z}/p).
$$
We see that, away from $s=0$, the $E_2$ term is isomorphic to a shifted version of $\Ext_B^{*,*}(\mathbb{Z}/p,\mathbb{Z}/p)\cong\mathbb{Z}/p[c,d]$ with $c\in\Ext_B^{1,1}$ and $d\in\Ext_B^{1,2p-1}$. The spectral sequence collapses at $E_2$, just as for~(\ref{ASS}).
The spectral sequence gives us information about a filtration $F^i$ of $\tilde{G}_{\rho n,n}=\pi_{\rho n}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p$.
The multiplicative structure of the spectral sequence is such that $pF^i= F^{i+1}$,
for $i\geqslant 1$ and $F^1\cong \mathbb{Z}_p$. By Proposition~\ref{p torsion}, $W_{\rho n,n}$ is an elementary abelian $p$-group, so $pW_{\rho n,n}=0$ and $W_{\rho n,n}$ must be represented in $E_2^{0,\rho n}$.
Assume that the generator $\tilde{z}_{n}$ is represented in $E_2^{j,\rho n+j}$ for some $j\geqslant2$; then we must have that $\tilde{z}_{n}\in F^j$. Then there is some generator $\tilde{z}_{n}'\in F^1$ such that $p^{j}\tilde{z}_{n}'$ is a generator for $F^{j+1}$. Therefore, for some $\gamma\in\mathbb{Z}_p$,
$$
p^j\gamma\tilde{z}_{n}'=p\tilde{z}_{n}.
$$
Thus
$$
p(p^{j-1}\gamma\tilde{z}_{n}'-\tilde{z}_{n})=0
$$
and hence we must have $p^{j-1}\gamma\tilde{z}_{n}'-\tilde{z}_{n}\in W_{\rho n,n}$ because nothing else has any torsion. This implies that in $G_{\rho n,n}$, $z_{n}$ has a factor $p$ which contradicts the fact that we chose $z_{n}$ to be a generator of $G_{\rho n,n}\cong\mathbb{Z}_p$.
\end{proof}
We can now give a more explicit description of the generators $z_{n}$ in terms of our basis elements $g_{m,l}$. We first state
a lemma we will need; this is readily proved from the definitions.
\begin{lem}\label{alglem}
For $0\leqslant i\leqslant m-1$,
\footnotesize
\begin{displaymath}
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}g_{m,i}=\begin{cases}
\frac{1}{p^{\nu_p(m!)+m-\nu_p(i!)-i}}\hat{u}^{\nu_p(m!)-\nu_p(i!)+m-i}\left(\frac{\hat{u}}{p}\right)^{\nu_p(i!)}f_{i} & \textrm{ if } m\leqslant \nu_p(i!)+i,\\
\frac{1}{p^{\nu_p(m!)}}\hat{u}^{\nu_p(m!)-\nu_p(i!)+m-i}\left(\frac{\hat{u}}{p}\right)^{\nu_p(i!)}f_{i} & \textrm{ if }m>\nu_p(i!)+i.
\end{cases}
\end{displaymath}\qed
\end{lem}
\normalsize
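The lemma above is pure bookkeeping with exponents of $\hat{u}$ and $p$. Writing each side as $\hat{u}^{c}\,p^{-d}f_i$, both cases assert $c=\nu_p(m!)+m-i$ together with the appropriate $d$; the sketch below (illustrative, with $p=3$; the function names are ours) compares the exponent pairs on both sides:

```python
# Bookkeeping check (illustrative, p = 3) for the two cases of the lemma:
# write each element as u^c * p^(-d) * f_i and compare exponents.
from math import factorial

def nu(p, n):
    """p-adic valuation of a nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 3

def g_exponents(m, i):
    """g_{m,i} as (c, d) with g_{m,i} = u^c p^(-d) f_i (definition of g_{m,l})."""
    vi = nu(p, factorial(i))
    if m <= vi + i:
        return (m - i, m - i)   # F_{0, m-i, i} = (u/p)^{m-i} f_i
    return (m - i, vi)          # F_{m-i-vi, vi, i} = u^{m-i-vi} (u/p)^{vi} f_i

for m in range(1, 12):
    vm = nu(p, factorial(m))
    for i in range(m):
        vi = nu(p, factorial(i))
        c_, d_ = g_exponents(m, i)
        lhs = (c_ + vm, d_ + vm)  # multiply by (u/p)^{nu_p(m!)}
        if m <= vi + i:           # first case of the lemma
            rhs = (vm - vi + m - i + vi, vm + m - vi - i + vi)
        else:                     # second case of the lemma
            rhs = (vm - vi + m - i + vi, vm + vi)
        assert lhs == rhs
print("exponent bookkeeping verified for m < 12")
```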
\begin{dfn}
For $m,i\geqslant 0$, define $\beta(m,i)\in\mathbb{N}_0$ by
\begin{displaymath}
\beta(m,i)=\left\{\begin{array}{ll}
\nu_p(m!) & \textrm{ if } m>\nu_p(i!)+i,\\
\nu_p(m!)+m-\nu_p(i!)-i & \textrm{ if } m\leqslant \nu_p(i!)+i.
\end{array}\right.
\end{displaymath}
\end{dfn}
\begin{prop} \label{z in basis elements}
The generators $z_{m}\in G_{\rho m, *}$ have the following form
$$
z_{m}=\sum_{i=0}^m p^{\beta(m,i)}\lambda_{m,i}g_{m,i}
$$
where $\lambda_{s,t}\in\mathbb{Z}_p$ if $s\neq t$ and $\lambda_{s,s}\in\mathbb{Z}_p^\times$.
\end{prop}
\begin{proof}
By Lemma~\ref{mhomotopy}, $\{g_{m,l}:0\leqslant l\leqslant m\}$ forms a basis for $G_{\rho m,*}$, so
we can express $z_{m}$ in terms of this basis,
\begin{equation}\label{z in basis}
z_{m}=\lambda_{m,m}g_{m,m}+
\tilde{\lambda}_{m,m-1}g_{m,m-1}+\cdots
+\tilde{\lambda}_{m,0}g_{m,0},
\end{equation}
where $\lambda_{m,m},\tilde{\lambda}_{m,l}\in\mathbb{Z}_p$. The projection map $P_m:G_{\rho m,*}\rightarrow G_{\rho m,m}$
acts as the identity on $z_{m}$ and as zero on all $g_{m, l}$ where $m\neq l$ by Lemma~\ref{proj}. Hence
$$
z_{m}=P_m(z_{m})=\lambda_{m,m}P_m(g_{m,m})=\lambda_{m,m}P_m(f_{m}).
$$
Thus the coefficient $\lambda_{m,m}$ is a unit, since otherwise $z_{m}$ would have a factor of $p$, contradicting its choice as a generator of $G_{\rho m,m}\cong\mathbb{Z}_p$.
We can now multiply by the largest power of $\frac{\hat{u}}{p}$ possible to leave the result still lying in $G_{*,m}$ and we get
$$
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}z_{m}
=\lambda_{m,m}P_m\left(\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}f_{m}\right)
$$
which lies in $G_{\rho(\nu_p(m!)+m),m}$. By multiplying equation~\eqref{z in basis} by $\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}$ we now have the following relation in $G_{\rho(\nu_p(m!)+m),m}\otimes\mathbb{Q}_p$
\begin{equation}\label{coefficients}
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}z_{m}=
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}\lambda_{m,m}f_{m}+
\sum_{i=0}^{m-1}\tilde{\lambda}_{m,i}\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}g_{m,i}.
\end{equation}
Since the left hand side of this equation lies in $G_{\rho(\nu_p(m!)+m),m}$, we can calculate how many factors of $p$ each
$\tilde{\lambda}_{m,i}$ must have to ensure that, once the right hand side is expressed in terms of the basis in Theorem~\ref{basis}, all the coefficients are $p$-adic integers.
Using Lemma~\ref{alglem} we can see that if $m\leqslant \nu_p(i!)+i$ we have
$$
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}g_{m,i}=\frac{1}{p^{\nu_p(m!)+m-\nu_p(i!)-i}}\hat{u}^{\nu_p(m!)-\nu_p(i!)+m-i}
\left(\frac{\hat{u}}{p}\right)^{\nu_p(i!)}f_{i}
$$
and so in equation~\eqref{coefficients} the coefficient $\tilde{\lambda}_{m,i}$ must be divisible by $p^{\nu_p(m!)+m-\nu_p(i!)-i}$.
Similarly, when $m>\nu_p(i!)+i$ we have
$$
\left(\frac{\hat{u}}{p}\right)^{\nu_p(m!)}g_{m,i}=\frac{1}{p^{\nu_p(m!)}}\hat{u}^{\nu_p(m!)-\nu_p(i!)+m-i}
\left(\frac{\hat{u}}{p}\right)^{\nu_p(i!)}f_{i}
$$
and so in equation~\eqref{coefficients}, $\tilde{\lambda}_{m,i}$ must be divisible by $p^{\nu_p(m!)}$.
\end{proof}
\begin{prop}\label{0}
In Proposition~\ref{0 or 1}, $\tilde{z}_{n}$ is actually represented in $E_2^{0,\rho n}$.
\end{prop}
\begin{proof}
We will assume that $\tilde{z}_{n}$ is represented in $E_2^{1,\rho n+1}$ in the spectral sequence and obtain a contradiction.
The spectral sequence in question collapses and, away from $s=0$, the $E_2$ page is
$$
E_2^{s,t}\cong\Ext_B^{s+\nu_p(n!),t-\rho n+\nu_p(n!)}(\mathbb{Z}/p,\mathbb{Z}/p).
$$
This is a shifted version of $\Ext_B^{*,*}(\mathbb{Z}/p,\mathbb{Z}/p)\cong\mathbb{Z}/p[c,d]$, so on the line $s=1$ the non-zero groups are $E_2^{1,\rho n+1}$, $E_2^{1,\rho(n+1)+1}$, $\ldots$, $E_2^{1,\rho(n+\nu_p(n!)+1)+1}$ and each of these is a single copy of $\mathbb{Z}/p$. Using the multiplicative structure of the spectral sequence, if there is a class $w\in E_2^{j,\rho(n+j)+1}$ and the group $E_2^{j,\rho(n+j+1)+1}$ is non-zero then there exists a class $w'\in E_2^{j,\rho(n+j+1)+1}$ such that $pw'=\hat{u}w$. In other words if $w$ is represented by $c^xd^y$ in the spectral sequence where $x\geqslant1$, $y\geqslant0$ then $w'$ is represented by $c^{x-1}d^{y+1}$. Applying this to $\tilde{z}_{n}\in E_2^{1,\rho n+1}$, since $E_2^{1,\rho(n+\nu_p(n!)+1)+1}$ is non-zero, there must exist a class $w\in E_2^{1,\rho(n+\nu_p(n!)+1)+1}$ such that
$$
\hat{u}^{1+\nu_p(n!)}\tilde{z}_{n}=p^{1+\nu_p(n!)}w.
$$
This implies that $\hat{u}^{1+\nu_p(n!)}\tilde{z}_{n}$ is divisible by $p^{1+\nu_p(n!)}$ in $G_{*,*}$. However this contradicts the proof of Proposition~\ref{z in basis elements}, hence $\tilde{z}_{n}$ must be represented in $E_2^{0,\rho n}$.
\end{proof}
\begin{lem}\label{zonly}
In the spectral sequence
$$
E_2^{s,t}\cong\Ext^{s,t}_B(H^*(\mathcal{K}(n)),\mathbb{Z}/p)\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p,
$$
up to multiplication by a unit, $\left(\frac{\hat{u}}{p}\right)^i(p\tilde{z}_{n})$ is represented by $c^{\nu_p(n!)+1-i}d^i$ for $0\leqslant i\leqslant\nu_p(n!)$ and $\hat{u}^j\left(\frac{\hat{u}}{p}\right)^{\nu_p(n!)}(\tilde{z}_{n})$ is represented by $d^{\nu_p(n!)+j}$ for $j\geqslant1$.
\end{lem}
\begin{proof}
From Proposition~\ref{0} we know that in the spectral sequence
$$
E_2^{s,t}\cong\Ext^{s,t}_B(H^*(\mathcal{K}(n);\mathbb{Z}/p),\mathbb{Z}/p)\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p
$$
$\tilde{z}_{n}$ is represented in $E_2^{0,\rho n}$. By the multiplicative structure of the spectral sequence this means that $p\tilde{z}_{n}$ is represented in $E_2^{1,\rho n+1}$. We have
$$
E_2^{1,\rho n+1}\cong\Ext_B^{1+\nu_p(n!),1+\nu_p(n!)}(\mathbb{Z}/p,\mathbb{Z}/p)
\cong\mathbb{Z}/p\langle c^{1+\nu_p(n!)}\rangle.
$$
We know from~\cite[Part III, Lemma 17.11]{AdamsSH} that in the spectral sequence, multiplication by $c$ and $d$ correspond to multiplication by $p$ and $\hat{u}$ respectively on homotopy groups. We list below some homotopy elements of $\pi_*(\ell\wedge\mathcal{K}(n))$ with a choice of corresponding representatives in the spectral sequence.
\begin{displaymath}
\begin{tabular}{c|c}
Homotopy element & Representative\\
\hline
&\\
$p\tilde{z}_{n}$ & $c^{1+\nu_p(n!)}$ \\
$\left(\frac{\hat{u}}{p}\right)(p\tilde{z}_{n})$ & $c^{\nu_p(n!)}d$ \\
$\left(\frac{\hat{u}}{p}\right)^2(p\tilde{z}_{n})$ & $c^{\nu_p(n!)-1}d^2$ \\
\vdots & \vdots \\
$\left(\frac{\hat{u}}{p}\right)^{\nu_p(n!)}(p\tilde{z}_{n})$ & $cd^{\nu_p(n!)}$ \\
$\hat{u}\left(\frac{\hat{u}}{p}\right)^{\nu_p(n!)}(\tilde{z}_{n})$ & $d^{\nu_p(n!)+1}$ \\
$\hat{u}^2\left(\frac{\hat{u}}{p}\right)^{\nu_p(n!)}(\tilde{z}_{n})$ & $d^{\nu_p(n!)+2}$ \\
\vdots & \vdots \\
\end{tabular}
\end{displaymath}
From this table it is clear that the descriptions given in the statement are correct.
\end{proof}
Recall from Definition~\ref{iota} the maps $$\iota_{m,n}:\ell\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$$ which were maps represented in the spectral sequence
\begin{align*}
E_2^{s,t}\cong\Ext_B^{s,t}(H^*(D(\mathcal{K}(m)))&\otimes H^*(\mathcal{K}(n)),\mathbb{Z}/p)\\
&\Longrightarrow\pi_{t-s}(D(\mathcal{K}(m))\wedge\mathcal{K}(n)\wedge \ell)\otimes\mathbb{Z}_p
\end{align*}
by a choice of generator of
$$
E_2^{m-n-\nu_p(n!)+\nu_p(m!),m-n-\nu_p(n!)+\nu_p(m!)}.
$$
\begin{prop}\label{iota on z}
For $m>n$, the map induced in the $(\rho m)$th homotopy group $$(\iota_{m,n})_*:G_{\rho m,m}\rightarrow G_{\rho m, n}$$ satisfies the condition
$$
(\iota_{m,n})_*(z_{m})=\mu_{m,n}p^{\nu_p(m!)-\nu_p(n!)}\hat{u}^{m-n}z_{n}
$$
for some $p$-adic unit $\mu_{m,n}$.
\end{prop}
\begin{proof}
By definition, $\tilde{z}_{m}$ is any element in $G_{\rho m,m}\oplus W_{\rho m,m}$ whose first co-ordinate is $z_{m}$. Also, $W_{\rho m,m}$ has torsion of order at most $p$ by Proposition~\ref{p torsion}. We will prove the analogous result for the element $p\tilde{z}_{m}=pz_{m}$; then by linearity the required result will be true for $z_{m}$.
By Lemma~\ref{zonly}, in the spectral sequence
$$
E_2^{s,t}\cong\Ext^{s,t}_B(H^*(\mathcal{K}(m)),\mathbb{Z}/p)\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(m))\otimes\mathbb{Z}_p,
$$
$p\tilde{z}_{m}$ is represented in $E_2^{1,\rho m+1}\cong\Ext_B^{1+\nu_p(m!),1+\nu_p(m!)}(\mathbb{Z}/p,\mathbb{Z}/p)$, up to a unit, by $c^{1+\nu_p(m!)}$.
Recall that in the spectral sequence
\begin{align*}
E_2^{s,t}\cong\Ext_B^{s,t}(H^*(D(\mathcal{K}(m));\mathbb{Z}/p)&\otimes H^*(\mathcal{K}(n);\mathbb{Z}/p),\mathbb{Z}/p)\\
&\Longrightarrow\pi_{t-s}(D(\mathcal{K}(m))\wedge\mathcal{K}(n)\wedge \ell)\otimes\mathbb{Z}_p,
\end{align*}
the maps $\iota_{m,n}:\ell\wedge\mathcal{K}(m)\rightarrow \ell\wedge\mathcal{K}(n)$ are represented in
\begin{align*}
E_2^{m-n-\nu_p(n!)+\nu_p(m!),m-n-\nu_p(n!)+\nu_p(m!)}&\cong\Ext_B^{m-n,(m-n)(\rho+1)}(\mathbb{Z}/p,\mathbb{Z}/p)\\
&\cong\mathbb{Z}/p\langle d^{m-n}\rangle.
\end{align*}
Using the pairing of Ext groups described in the proof of Proposition~\ref{extprod},
$$
\Ext^{s,t}(\Sigma^aI^b,\mathbb{Z}/p)\otimes \Ext^{s',t'}(\Sigma^{a'}I^{b'},\mathbb{Z}/p)\rightarrow\Ext^{s+s',t+t'}(\Sigma^{a+a'}I^{b+b'},\mathbb{Z}/p)
$$
we get an induced pairing on the $E_2$ pages of the respective Adams spectral sequences. Since in all cases the spectral sequences collapse this passes to the $E_\infty$ pages. The pairing also respects filtrations, so the Ext group pairing passes to a pairing of spectral sequences, giving us a map
\small\begin{align*}
\Ext_B^{s,t}(H^*(D(\mathcal{K}(m)))\otimes H^*(\mathcal{K}(n)),\mathbb{Z}/p)&\otimes\Ext^{s',t'}_B(H^*(\mathcal{K}(m)),\mathbb{Z}/p)\\
&\rightarrow\Ext_B^{s+s',t+t'}(H^*(\mathcal{K}(n)),\mathbb{Z}/p).
\end{align*}\normalsize
This shows that $(\iota_{m,n})_*(p\tilde{z}_{m})$ is represented in the spectral sequence
$$
E_2^{s,t}\cong\Ext_B^{s,t}(H^*(\mathcal{K}(n)),\mathbb{Z}/p)\Longrightarrow\pi_{t-s}(\ell\wedge\mathcal{K}(n))\otimes\mathbb{Z}_p
$$
by a generator of
\begin{align*}
&E_2^{1+m-n-\nu_p(n!)+\nu_p(m!),\rho m+1+m-n-\nu_p(n!)+\nu_p(m!)}\\
&\quad\cong\Ext_B^{1+m-n-\nu_p(n!)+\nu_p(m!),\rho m+1+m-n-\nu_p(n!)+\nu_p(m!)}(H^*(\mathcal{K}(n)),\mathbb{Z}/p)\\
&\quad\cong\Ext_B^{1+m-n-\nu_p(n!)+\nu_p(m!),\rho m+1+m-n-\nu_p(n!)+\nu_p(m!)}(\Sigma^{\rho n-\nu_p(n!)}I^{\nu_p(n!)},\mathbb{Z}/p)\\
&\quad\cong\Ext_B^{1+m-n+\nu_p(m!),\rho m+1+m-n+\nu_p(m!)-\rho n}(\mathbb{Z}/p,\mathbb{Z}/p)\\
&\quad\cong\Ext_B^{1+m-n+\nu_p(m!),1+(\rho+1)(m-n)+\nu_p(m!)}(\mathbb{Z}/p,\mathbb{Z}/p)\\
&\quad\cong\mathbb{Z}/p\langle c^{1+\nu_p(m!)}d^{m-n}\rangle.
\end{align*}
Thus $(\iota_{m,n})_*(p\tilde{z}_{m})$ is, up to a unit, represented by $c^{1+\nu_p(m!)}d^{m-n}$ and all that remains is to express this element in terms of $p\tilde{z}_{n}$.
Using Lemma~\ref{zonly} we can see that we have two cases for $(\iota_{m,n})_*(p\tilde{z}_{m})$, either the power of $d$ in its representative is at least $\nu_p(n!)+1$ (and hence the power of $c$ in its representative is zero) or not.
In the first case we have $m-n\geqslant \nu_p(n!)+1$. Then by Lemma \ref{zonly}, $d^{m-n}$ represents
$$
\left(\frac{\hat{u}}{p}\right)^{\nu_p(n!)}\hat{u}^{m-n-\nu_p(n!)}\tilde{z}_{n}=p^{-\nu_p(n!)}\hat{u}^{m-n}\tilde{z}_{n}.
$$
This implies that up to a $p$-adic unit, $(\iota_{m,n})_*(p\tilde{z}_{m})$ is equal to
$$
p^{1+\nu_p(m!)}p^{-\nu_p(n!)}\hat{u}^{m-n}\tilde{z}_{n}=p^{\nu_p(m!)-\nu_p(n!)}\hat{u}^{m-n}(p\tilde{z}_{n}).
$$
In the second case we have $m-n<\nu_p(n!)+1$. Hence by Lemma~\ref{zonly} the representative is $c^{1+\nu_p(n!)-m+n}d^{m-n}$ and this represents the homotopy element
$$
\left(\frac{\hat{u}}{p}\right)^{m-n}(p\tilde{z}_{n}).
$$
This gives us that up to a $p$-adic unit, $(\iota_{m,n})_*(p\tilde{z}_{m})$ is equal to
$$
p^{1+\nu_p(m!)-(1+\nu_p(n!)-m+n)}\left(\frac{\hat{u}}{p}\right)^{m-n}(p\tilde{z}_{n})
=p^{\nu_p(m!)-\nu_p(n!)}\hat{u}^{m-n}(p\tilde{z}_{n}).\qedhere
$$
\end{proof}
\section{The matrix of $1\wedge \Psi^q$}
\label{Secmatrix}
In this section we study the matrix corresponding to the map $1\wedge\Psi^q:\ell\wedge\ell\rightarrow\ell\wedge\ell$ under the isomorphism $\Lambda$ of Theorem~\ref{iso}. Firstly information on the form of this matrix
is obtained by comparing the effect of the maps $(\iota_{m,n})_*$
on the basis elements $z_{m}$ with the effect of the induced map $(1\wedge\Psi^q)_*$.
Then it is shown that, by altering $\Lambda$ by a conjugation, the matrix can be given a particularly nice and simple
form.
A remark about our notation is in order. We are following~\cite{CCW2} in denoting by $\Psi^q$ the $\ell$ Adams operation which acts on
$\pi_{2(p-1)k}(\ell)$ as multiplication by $q^{(p-1)k}=\hat{q}^k$. Some authors write $\Psi^{\hat{q}}$ for this operation.
Our choice in~\cite{CCW2} was motivated by wanting to compare directly the $ku$ operations with the $\ell$ ones.
But of course the $\ell$ operation only depends on $\hat{q}$.
\subsection{The effect of $1\wedge\Psi^q$ on the basis}
\begin{lem}\label{action on f}
For $m\geqslant1$,
$$
(1\wedge\Psi^q)_*(f_{m})=\hat{q}^mf_{m}+p^{\nu_p(m)}\hat{u}f_{m-1}.
$$
\end{lem}
\begin{proof}
Using that the map $(1\wedge\Psi^q)_*$ fixes $\hat{u}$, multiplies $\hat{v}$ by $\hat{q}$ and is additive and multiplicative,
a straightforward calculation gives
$$
(1\wedge\Psi^q)_*(c_{m})=\hat{q}^mc_{m}+\hat{u}c_{m-1}.
$$
Then
\begin{align*}
(1\wedge\Psi^q)_*(f_{m})&=\hat{q}^mf_{m}+p^{\nu_p(m!)-\nu_p((m-1)!)}\hat{u}f_{m-1}\\
&=\hat{q}^mf_{m}+p^{\nu_p(m)}\hat{u}f_{m-1}.\qedhere
\end{align*}
\end{proof}
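The identity $(1\wedge\Psi^q)_*(c_{m})=\hat{q}^mc_{m}+\hat{u}c_{m-1}$ underlying the lemma can be confirmed by direct polynomial manipulation. The following sketch (illustrative only, with $p=3$, $q=2$ and helper names of our own) applies the substitution $\hat{v}\mapsto\hat{q}\hat{v}$ to $c_m$ and compares with $\hat{q}^mc_m+\hat{u}c_{m-1}$:

```python
# Verify (illustrative, p = 3, q = 2, qhat = 4) the identity
#   c_m(u, qhat*v) = qhat^m c_m(u, v) + u c_{m-1}(u, v)
# by expanding both sides with exact Fraction coefficients.
from fractions import Fraction

p, q = 3, 2
qhat = q ** (p - 1)

def c(k):
    """Coefficients of c_k as a dict {(a, b): Fraction} for u^a v^b."""
    poly = {(0, 0): Fraction(1)}
    for i in range(k):
        denom = qhat ** k - qhat ** i
        new = {}
        for (a, b), co in poly.items():
            new[(a, b + 1)] = new.get((a, b + 1), Fraction(0)) + co / denom
            new[(a + 1, b)] = new.get((a + 1, b), Fraction(0)) - co * qhat ** i / denom
        poly = new
    return poly

for m in range(1, 7):
    cm, cm1 = c(m), c(m - 1)
    # left-hand side: substitute v -> qhat * v in c_m
    lhs = {(a, b): co * qhat ** b for (a, b), co in cm.items()}
    # right-hand side: qhat^m * c_m + u * c_{m-1}
    rhs = {(a, b): qhat ** m * co for (a, b), co in cm.items()}
    for (a, b), co in cm1.items():
        rhs[(a + 1, b)] = rhs.get((a + 1, b), Fraction(0)) + co
    assert all(lhs.get(key, 0) == rhs.get(key, 0) for key in set(lhs) | set(rhs))
print("Psi^q action on c_m verified for m = 1..6")
```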
\begin{prop}\label{action on g}
The action of $(1\wedge\Psi^q)_*$ on the basis elements is as follows.
\begin{displaymath}
(1\wedge\Psi^q)_*(g_{m,m})=\left\{\begin{array}{ll}
\hat{q}^mg_{m,m}+p^{\nu_p(m)+1}g_{m,m-1}& \textrm{ if } m>p,\\
\hat{q}^mg_{m,m}+pg_{m,m-1} & \textrm{ if } m=p,\\
\hat{q}^mg_{m,m}+g_{m,m-1} & \textrm{ if } 1\leqslant m\leqslant p-1,\\
g_{0,0} & \textrm{ if } m=0.
\end{array}\right.
\end{displaymath}
And for $m>n$,
\small\begin{multline*}
(1\wedge\Psi^q)_*(g_{m,n})\\
=\begin{cases}
\hat{q}^ng_{m,n}+g_{m,n-1}& \text{ if } m>\nu_p(n!)+n,\\
\hat{q}^ng_{m,n}+p^{\nu_p(n!)+n-m}g_{m,n-1}& \text{ if } \nu_p((n-1)!)+n-1<m\leqslant \nu_p(n!)+n,\\
\hat{q}^ng_{m,n}+p^{\nu_p(n)+1}g_{m,n-1}& \text{ if } m\leqslant \nu_p((n-1)!)+n-1.
\end{cases}
\end{multline*}
\normalsize
\end{prop}
\begin{proof}
This is a matter of straightforward case-by-case calculation, using Definition~\ref{gml} and Lemma~\ref{action on f}.
\end{proof}
\subsection{The Coefficients of the Matrix}
Let $A\in U_\infty\mathbb{Z}_p$ be the matrix such that $\Lambda(A)=1\wedge\Psi^q$. The main result to be proved in this section
provides some restrictions on the form of the matrix $A$, by comparing actions on the basis elements $z_{m}$ from Proposition~\ref{z in basis elements}.
The following lemma is needed in the proof.
It follows easily from the definitions.
\begin{lem}\label{lower g}
\begin{equation*}
\hat{u}^{m-n}g_{n,i}=\begin{cases}
p^{m-n}g_{m,i} & \textrm{ if } n\leqslant m\leqslant \nu_p(i!)+i,\\
p^{\nu_p(i!)-n+i}g_{m,i} & \textrm{ if } n\leqslant \nu_p(i!)+i<m,\\
g_{m,i} & \textrm{ if } \nu_p(i!)+i<n\leqslant m.
\end{cases}
\end{equation*}\qed
\end{lem}
\begin{prop}\label{Aform}
The matrix $A$ corresponding to the map $1\wedge\Psi^q:\ell\wedge\ell\rightarrow\ell\wedge\ell$ under the isomorphism $\Lambda$ has the form
\begin{displaymath}
A=\left(\begin{array}{cccccc}
1 & \upsilon_0 & a_{0,2} & a_{0,3} & a_{0,4} & \cdots\\
0 & \hat{q} & \upsilon_1 & a_{1,3} & a_{1,4} & \cdots\\
0 & 0 & \hat{q}^2 & \upsilon_2 & a_{2,4} & \cdots\\
0 & 0 & 0 & \hat{q}^3 & \upsilon_3 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right),
\end{displaymath}
where $\upsilon_i\in\mathbb{Z}_p^\times$ for all $i\geqslant 0$ and $a_{i,j}\in\mathbb{Z}_p$ for all $i,j\geqslant 0$.
\end{prop}
\begin{proof}
By definition of $\Lambda$,
$$
(1\wedge\Psi^q)_*(z_{m})=\sum_{n\leqslant m}A_{n,m}(\iota_{m,n})_*(z_{m}).
$$
Using Propositions~\ref{z in basis elements} and~\ref{iota on z}, this becomes
\begin{align}\label{equate}
\sum_{i=0}^mp^{\beta(m,i)}\lambda_{m,i}(1\wedge\Psi^q)_*(g_{m,i})&=A_{m,m}\sum_{i=0}^mp^{\beta(m,i)}\lambda_{m,i}g_{m,i}\nonumber\\
&+\sum_{n<m}\sum_{i=0}^nA_{n,m}\mu_{m,n}p^{\nu_p(m!)-\nu_p(n!)+\beta(n,i)}\hat{u}^{m-n}\lambda_{n,i}g_{n,i},
\end{align}
where $\lambda_{m,i}\in\mathbb{Z}_p$, $\lambda_{m,m}\in\mathbb{Z}_p^\times$ and $\mu_{m,n}\in\mathbb{Z}_p^\times$.
We will determine information about the $A_{n,m}$s by equating coefficients in equation~\eqref{equate} and using
Proposition~\ref{action on g} and Lemma~\ref{lower g}. Firstly if $m=0$, then
$$
\lambda_{0,0}=\lambda_{0,0}(1\wedge\Psi^q)_*(g_{0,0})=A_{0,0}\lambda_{0,0}g_{0,0}=A_{0,0}\lambda_{0,0},
$$
so $A_{0,0}=1$.
We will now split the rest of the proof into three cases.
Case (i): $1\leqslant m\leqslant p-1$. Equating coefficients of $g_{m,m}$ gives
$\hat{q}^m\lambda_{m,m}=A_{m,m}\lambda_{m,m}$,
so $A_{m,m}=\hat{q}^m$.
\noindent Next, equating coefficients of $g_{m,m-1}$, and using
$\hat{u}g_{m-1,m-1}=g_{m,m-1}$ and $\beta(m,m)=\beta(m,m-1)=0$,
gives
$$
\lambda_{m,m}+\lambda_{m,m-1}\hat{q}^{m-1}
=\hat{q}^{m}\lambda_{m,m-1}+A_{m-1,m}\mu_{m,m-1}\lambda_{m-1,m-1}.
$$
So
$$
A_{m-1,m}=\mu_{m,m-1}^{-1}\lambda_{m-1,m-1}^{-1}((\hat{q}^{m-1}-\hat{q}^{m})
\lambda_{m,m-1}+\lambda_{m,m})\ \in \mathbb{Z}_p^\times.
$$
Case (ii): $m=p$. Equating coefficients of $g_{m,m}$ gives
$\hat{q}^m\lambda_{m,m}=A_{m,m}\lambda_{m,m}$, so we have $A_{m,m}=\hat{q}^m$ as before.
Then, equating coefficients of $g_{m,m-1}$ gives
$$
\lambda_{m,m}p+p\lambda_{m,m-1}\hat{q}^{m-1}
=\hat{q}^{m}p\lambda_{m,m-1}+A_{m-1,m}\mu_{m,m-1}p\lambda_{m-1,m-1},
$$
where this time we have used $\hat{u}g_{m-1,m-1}=g_{m,m-1}$,
$\beta(m,m-1)=\nu_p(p!)=1$ and $\beta(m,m)=0$.
So
$$
A_{m-1,m}=\mu_{m,m-1}^{-1}\lambda_{m-1,m-1}^{-1}((\hat{q}^{m-1}-\hat{q}^{m})
\lambda_{m,m-1}+\lambda_{m,m})\ \in \mathbb{Z}_p^\times.
$$
Case (iii): $m>p$. We find that $A_{m,m}=\hat{q}^m$ as before.
Then, equating coefficients of $g_{m,m-1}$ we find
\begin{align*}
\lambda_{m,m}p^{\nu_p(m)+1}+&p^{\nu_p(m)+1}\lambda_{m,m-1}\hat{q}^{m-1}\\
&=\hat{q}^{m}p^{\nu_p(m)+1}\lambda_{m,m-1}+A_{m-1,m}\mu_{m,m-1}p^{\nu_p(m)+1}\lambda_{m-1,m-1},
\end{align*}
where we have used $g_{m,m-1}=\frac{\hat{u}}{p}g_{m-1,m-1}$, and
$$
\beta(m,m-1)=\nu_p(m!)+m-\nu_p((m-1)!)-(m-1)=\nu_p(m)+1.
$$
So
$$
A_{m-1,m}=\mu_{m,m-1}^{-1}\lambda_{m-1,m-1}^{-1}((\hat{q}^{m-1}-\hat{q}^{m})
\lambda_{m,m-1}+\lambda_{m,m})\ \in \mathbb{Z}_p^\times.\qedhere
$$
\end{proof}
\subsection{Conjugation}
We now complete the proof of the odd primary analogue of~\cite[Theorem $4.2$]{BaSn}, by conjugating to obtain a particularly
nice form for the matrix. The argument we give for this follows an idea suggested by Francis Clarke, see~\cite[Theorem $5.4.3$]{Vic'sBook}.
Firstly, let $E$ be the invertible diagonal matrix with
$$
E_{i,j}=\begin{cases}
1&\text{if $i=j=0$,}\\
\upsilon_0\upsilon_1\dots \upsilon_{i-1},&\text{if $i=j>0$},\\
0,&\text{otherwise}.
\end{cases}
$$
Then
\begin{displaymath}
EAE^{-1}=C=\left(\begin{array}{cccccc}
1 & 1 & c_{0,2} & c_{0,3} & c_{0,4} & \cdots\\
0 & \hat{q} & 1 & c_{1,3} & c_{1,4} & \cdots\\
0 & 0 & \hat{q}^2 & 1 & c_{2,4} & \cdots\\
0 & 0 & 0 & \hat{q}^3 & 1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right),
\end{displaymath}
for some $c_{i,j}\in\mathbb{Z}_p$.
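Although nothing in the argument depends on it, the effect of conjugating by $E$ can be sanity-checked numerically on finite truncations. The sketch below is our own illustration (it is not part of the paper): it uses small nonzero integers as stand-ins for the units $\upsilon_i$ and the integer $3$ as a stand-in for $\hat{q}$, and confirms that $EAE^{-1}$ keeps the diagonal $\hat{q}^i$ while normalizing the superdiagonal to $1$:

```python
from fractions import Fraction
import random

random.seed(1)
N, q = 8, 3  # truncation size; sample integer value for q-hat

# random upper triangular A with diagonal q^i and superdiagonal "units" v_i
v = [random.choice([1, 2, -1, 5]) for _ in range(N)]  # stand-ins for upsilon_i
A = [[Fraction(0)] * N for _ in range(N)]
for i in range(N):
    A[i][i] = Fraction(q**i)
    if i + 1 < N:
        A[i][i + 1] = Fraction(v[i])
    for j in range(i + 2, N):
        A[i][j] = Fraction(random.randint(-9, 9))

# E is diagonal with E_{0,0} = 1 and E_{i,i} = v_0 v_1 ... v_{i-1}
E = [Fraction(1)] * N
for i in range(1, N):
    E[i] = E[i - 1] * v[i - 1]

# conjugation by a diagonal matrix: (E A E^{-1})_{i,j} = E_i A_{i,j} / E_j
C = [[E[i] * A[i][j] / E[j] for j in range(N)] for i in range(N)]

for i in range(N):
    assert C[i][i] == q**i           # diagonal preserved
    if i + 1 < N:
        assert C[i][i + 1] == 1      # superdiagonal normalized to 1
```

The check works for any choice of units, since $(EAE^{-1})_{i,i+1}=E_{i,i}\,\upsilon_i\,E_{i+1,i+1}^{-1}=1$.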
\begin{thm}
There exists an invertible upper triangular matrix $U$ such that $UCU^{-1}=R$, where
\begin{displaymath}
R=\left(\begin{array}{cccccc}
1 & 1 & 0 & 0 & 0 & \cdots\\
0 & \hat{q} & 1 & 0 & 0 & \cdots\\
0 & 0 & \hat{q}^2 & 1 & 0 & \cdots\\
0 & 0 & 0 & \hat{q}^3 & 1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right).
\end{displaymath}
One such is given by $U=(U_{i,j})_{i,j\geqslant0}\in U_\infty\mathbb{Z}_p$ where the first row is chosen to be
\begin{displaymath}
U_{0,j}=\left\{\begin{array}{ll}
1 & \textrm{ if } j=0,\\
0 & \textrm{ otherwise,}
\end{array}\right.
\end{displaymath}
and each subsequent row is defined recursively from the previous one by
$$
U_{i+1,j}=\left(\sum_{s=i}^{j-2}U_{i,s}c_{s,j}\right)+U_{i,j-1}+(\hat{q}^j-\hat{q}^i)U_{i,j}.
$$
\end{thm}
\begin{proof}
Let $U$ be the matrix defined recursively above. First we check that $U$ is upper triangular and invertible. It is clear that $U_{i,j}\in\mathbb{Z}_p$ for $i,j\geqslant0$. We show that $U_{i,j}=0$ for $i>j$ by induction on $i$. The recurrence immediately gives $U_{1,0}=0$. Now assume that $U_{i-1,j}=0$ for all $j<i-1$. Then, for $i>j$,
$$
U_{i,j}=U_{i-1,j-1}+(\hat{q}^j-\hat{q}^{i-1})U_{i-1,j}.
$$
If $j<i-1$, then both $U_{i-1,j-1}$ and $U_{i-1,j}$ are zero by assumption, while if $j=i-1$, then $U_{i-1,j-1}$ is zero and $\hat{q}^{j}-\hat{q}^{i-1}=\hat{q}^{i-1}-\hat{q}^{i-1}=0$, so the induction is complete.
Now we show that $U_{i,i}\in\mathbb{Z}_p^\times$ for all $i\geqslant0$, so that $U$ is invertible.
Again we proceed by induction. Clearly $U_{0,0}=1\in\mathbb{Z}_p^\times$. Now assume that $U_{i,i}\in\mathbb{Z}_p^\times$.
We have
$$
U_{i+1,i+1}=U_{i,i}+(\hat{q}^{i+1}-\hat{q}^i)U_{i,i+1}
$$
and since $\hat{q}^{i+1}-\hat{q}^i=\hat{q}^i(\hat{q}-1)\equiv 0\bmod p$, it follows that $U_{i+1,i+1}\in \mathbb{Z}_p^\times$.
To show that $UCU^{-1}=R$, we compare entries $(UC)_{i,j}$ and $(RU)_{i,j}$. Diagonally
$(UC)_{i,i}=\hat{q}^iU_{i,i}=(RU)_{i,i}$. Now let $j>i$; the entries of $UC$ and $RU$ are given by
\begin{align*}
(UC)_{i,j}&=\left(\sum_{s=i}^{j-2}U_{i,s}c_{s,j}\right)+U_{i,j-1}+\hat{q}^jU_{i,j},\\
(RU)_{i,j}&=\hat{q}^iU_{i,j}+U_{i+1,j}.
\end{align*}
Then the recurrence relation for the entries $U_{i,j}$ tells us that
$(UC)_{i,j}=(RU)_{i,j}$.
Since $UC$ and $RU$ are both upper triangular, their entries with $j<i$ vanish as well, so $UC=RU$.
\end{proof}
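The recurrence defining $U$ lends itself to a direct numerical check on finite truncations (products of upper triangular matrices are computed exactly there). The following sketch is our own verification code, not part of the paper: it builds $C$ from random integer stand-ins for the $c_{i,j}$ and the integer $3$ for $\hat{q}$, generates $U$ row by row from the recurrence, and confirms $UC=RU$ entrywise:

```python
import random

random.seed(0)
N, q = 10, 3  # truncation size; sample integer value for q-hat
c = {(i, j): random.randint(-5, 5) for i in range(N) for j in range(i + 2, N)}

def C(i, j):
    """Entries of C: diagonal q^i, superdiagonal 1, arbitrary c_{i,j} above."""
    if j == i:
        return q**i
    if j == i + 1:
        return 1
    if j > i + 1:
        return c[(i, j)]
    return 0

# build U row by row from the recurrence in the theorem (0-indexed),
# with U_{0,j} = delta_{0,j}
U = [[0] * N for _ in range(N)]
U[0][0] = 1
for i in range(N - 1):
    for j in range(N):
        s_sum = sum(U[i][s] * c[(s, j)] for s in range(i, j - 1))
        prev = U[i][j - 1] if j >= 1 else 0
        U[i + 1][j] = s_sum + prev + (q**j - q**i) * U[i][j]

def Rmat(i, j):
    """Entries of R = D + S: diagonal q^i, superdiagonal 1."""
    if j == i:
        return q**i
    if j == i + 1:
        return 1
    return 0

# check (UC)_{i,j} = (RU)_{i,j} on the truncation
for i in range(N - 1):
    for j in range(N):
        UC = sum(U[i][s] * C(s, j) for s in range(N))
        RU = sum(Rmat(i, s) * U[s][j] for s in range(N))
        assert UC == RU
```

The assertions are independent of the random choice of the $c_{i,j}$, exactly as the proof predicts.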
So we now have the following result.
\begin{thm}\label{isomatrix}
There is an isomorphism of groups
$$
\Lambda': U_\infty\mathbb{Z}_p\rightarrow\mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell),
$$
under which
the automorphism $1\wedge\Psi^q$ corresponds to the matrix
\begin{displaymath}
R=\left(\begin{array}{cccccc}
1 & 1 & 0 & 0 & 0 & \cdots\\
0 & \hat{q} & 1 & 0 & 0 & \cdots\\
0 & 0 & \hat{q}^2 & 1 & 0 & \cdots\\
0 & 0 & 0 & \hat{q}^3 & 1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right).
\end{displaymath}
The isomorphism is given by $\Lambda'(X)= \Lambda(B^{-1}XB)$, where
$B=UE$ and $E$ and $U$ are the matrices above.
\end{thm}
\begin{proof}
Since $B$ is an invertible upper triangular matrix, $X\mapsto B^{-1}XB$ is a group isomorphism
$U_\infty\mathbb{Z}_p\rightarrow U_\infty\mathbb{Z}_p$ and the result follows.
\end{proof}
\begin{remark}
It would be interesting to find an explicit basis for which the matrix of $1\wedge\Psi^q$ is precisely
$R$. We hope to return to this in future work.
\end{remark}
\section{Applications}
\label{SecApplications}
In this section we present two applications. Firstly, we obtain precise information about the important map
$$
1\wedge\varphi_n=1\wedge(\Psi^q-1)(\Psi^q-\hat{q})\ldots(\Psi^q-\hat{q}^{n-1}):\ell\wedge\ell\rightarrow\ell\wedge\ell.
$$
We give closed formulas involving $q$-binomial coefficients for all the entries in the corresponding matrix.
Secondly, we give a new description of the ring $\ell^0(\ell)$ of degree zero stable operations for the ($p$-local)
Adams summand in terms of matrices.
\subsection{The Map $1\wedge\varphi_n$ and the Matrix $X_n$}
We apply the preceding result to the study of the map
$$
1\wedge\varphi_n=1\wedge(\Psi^q-1)(\Psi^q-\hat{q})\ldots(\Psi^q-\hat{q}^{n-1}):\ell\wedge\ell\rightarrow\ell\wedge\ell.
$$
The analogous map was first studied by Milgram in~\cite{Milgram} in relation to real
connective $K$-theory $ko$ localised at the prime $2$. We follow the method used in~\cite[Theorem $5.4$]{BaSn}, but we are
able to produce new closed formulas for every entry in the matrix corresponding to the above map, in terms of
$q$-binomial coefficients (also known as Gaussian polynomials). A short discussion of the relevant information about these can be found in the
appendix.
We will write $\tilde{U}_\infty\mathbb{Z}_p$ for the ring of upper triangular matrices with entries in the $p$-adic integers.
The group $U_\infty\mathbb{Z}_p$ is a subgroup of the multiplicative group of units in this ring. Generalising the group isomorphism $\Lambda'$ of
Theorem~\ref{isomatrix} we can construct the following diagram
\begin{displaymath}
\xymatrix{
U_\infty\mathbb{Z}_p \ar[rrr]^{\Lambda'}_{\cong} \ar[d]_\cap &&& \mathcal{A}ut^0_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell) \ar[d]_\cap \\
\tilde{U}_\infty\mathbb{Z}_p \ar[rrr]_{\lambda'} &&& \End_{\text{left-}\ell\text{-mod}}(\ell\wedge \ell)}
\end{displaymath}
where $\lambda'_{|U_\infty\mathbb{Z}_p}=\Lambda'$. Recall that the map $\Lambda'$ sends a matrix
$A\in U_\infty\mathbb{Z}_p$ to $\Lambda'(A)=\Lambda(B^{-1}AB)=\sum_{m\geqslant n}(B^{-1}AB)_{n,m}\iota_{m,n}$. We extend this by letting
$$
\lambda'(A')=\sum_{m\geqslant n}(B^{-1}A'B)_{n,m}\iota_{m,n}
$$
to obtain a left-$\ell$-module endomorphism of $\ell\wedge\ell$. This is a multiplicative map by the same argument given for $\Lambda$ in the proof of Proposition~\ref{extprod}.
By moving from $U_\infty\mathbb{Z}_p$ to $\tilde{U}_\infty\mathbb{Z}_p$ it is now possible to use the additive structure given by matrix addition
and it is easy to check that $\lambda'$ respects addition.
\begin{dfn}
Let
$$
\varphi_n=(\Psi^q-1)(\Psi^q-\hat{q})\cdots(\Psi^q-\hat{q}^{n-1})
$$ and let $R_n=R-\hat{q}^{n-1}I\in\tilde{U}_\infty\mathbb{Z}_p$ and $X_n=R_1R_2\cdots R_n\in\tilde{U}_\infty\mathbb{Z}_p$.
Here $I$ denotes the infinite identity matrix.
\end{dfn}
By Theorem~\ref{isomatrix}, the map $1\wedge\Psi^q$ corresponds to the matrix $R$.
It follows that $1\wedge\varphi_n$ corresponds to the matrix $X_n$.
A basic tool we will use is splitting up the matrix $R$
in order to easily calculate its powers.
\begin{dfn}\label{SD}
Define matrices $D$ and $S$ in $\tilde{U}_\infty\mathbb{Z}_p$ by
\begin{displaymath}
D=\left(\begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & \cdots\\
0 & \hat{q} & 0 & 0 & 0 & \cdots\\
0 & 0 & \hat{q}^2 & 0 & 0 & \cdots\\
0 & 0 & 0 & \hat{q}^3 & 0 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right),\qquad
S=\left(\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & \cdots\\
0 & 0 & 1 & 0 & 0 & \cdots\\
0 & 0 & 0 & 1 & 0 & \cdots\\
0 & 0 & 0 & 0 & 1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right).
\end{displaymath}
\end{dfn}
Then $R=D+S$. Since powers of $D$ and $S$ are easy to calculate,
and we have $SD=\hat{q}DS$, we are in a situation where we can apply the $q$-binomial theorem
to calculate $R^n=(D+S)^n$. See the appendix for a short discussion of the
$q$-binomial coefficients ${n\brack m}_{q}$.
\begin{lem}\label{Rpower}
$$
(R^n)_{s,s+c}={n\brack n-c}_{\hat{q}}\hat{q}^{(s-1)(n-c)}.
$$
\end{lem}
\begin{proof}
We note that $D^iS^j$ is given by
\begin{displaymath}
(D^iS^j)_{s,t}=\begin{cases}
\hat{q}^{(s-1)i}&\textrm{ if }t=s+j,\\
0&\textrm{ otherwise.}
\end{cases}
\end{displaymath}
Applying the $q$-binomial theorem~(\ref{expand}), we have
$$
(R^n)_{s,s+c}=((D+S)^n)_{s,s+c}=\sum_{i=0}^n{n\brack i}_{\hat{q}}(D^iS^{n-i})_{s,s+c}.
$$
\noindent For any particular value of $c$ at most one term in this sum is non-zero, namely the $i=n-c$ term
if $0\leqslant c\leqslant n$. The result follows.
\end{proof}
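Although the proof is complete, the lemma is easy to sanity-check numerically: on finite truncations, products of upper triangular matrices are computed exactly. The sketch below is our own illustration (not part of the paper), with the integer $3$ standing in for the $p$-adic unit $\hat{q}$:

```python
from fractions import Fraction

def qbinom(n, k, q):
    # Gaussian binomial [n choose k]_q via the product formula in the appendix
    if k < 0 or k > n:
        return Fraction(0)
    r = Fraction(1)
    for j in range(k):
        r *= Fraction(1 - q**(n - j), 1 - q**(k - j))
    return r

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][s] * B[s][j] for s in range(N)) for j in range(N)]
            for i in range(N)]

N, q, n = 12, 3, 4   # truncation size, sample integer value for q-hat, power
D = [[q**i if i == j else 0 for j in range(N)] for i in range(N)]
S = [[1 if j == i + 1 else 0 for j in range(N)] for i in range(N)]
R = [[D[i][j] + S[i][j] for j in range(N)] for i in range(N)]

Rn = [[int(i == j) for j in range(N)] for i in range(N)]
for _ in range(n):
    Rn = matmul(Rn, R)

# Lemma: (R^n)_{s,s+c} = [n choose n-c]_q * q-hat^{(s-1)(n-c)}, rows from 1
for s in range(1, 6):
    for c in range(n + 1):
        assert Rn[s - 1][s - 1 + c] == qbinom(n, n - c, q) * q**((s - 1) * (n - c))
```

The truncation introduces no error here because every entry of a product of upper triangular matrices is a finite sum over intermediate indices between the row and the column.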
Let $\Omega$ denote the homotopy equivalence giving Kane's splitting,
$$
\Omega:\bigvee_{n\geqslant0}\ell\wedge\mathcal{K}(n)\rightarrow\ell\wedge\ell.
$$
\begin{thm}\label{app}
\begin{list}
{(\arabic{itemcounter})}
{\usecounter{itemcounter}\leftmargin=0.5em}
\item The first $n$ columns of the matrix $X_n$ are trivial.
\item Let $C_n$ be the mapping cone of the restriction of $\Omega$ to the first $n$ pieces of the splitting of $\ell\wedge\ell$, i.e.
$$
C_n=\text{Cone}\left(\Omega_|:\bigvee_{0\leqslant m\leqslant n-1}\ell\wedge\mathcal{K}(m)\rightarrow\ell\wedge\ell\right).
$$
Then in the $p$-complete stable homotopy category there exists a commutative diagram of left $\ell$-module spectra of the form
\begin{displaymath}
\xymatrix{\ell\wedge\ell \ar[rr]^{1\wedge\varphi_n} \ar[dr]_{\pi_n} && \ell\wedge\ell\\
& C_n \ar[ur]_{\hat{\varphi}_n}& }
\end{displaymath}
where $\pi_n$ is the cofibre of $\Omega_|$ and $\hat{\varphi}_n$ is determined up to homotopy by the diagram.
\item For $n\geqslant1$, we have $(X_n)_{s,s+c}=0$ if $c<0$ or $c>n$ and for $0\leqslant c\leqslant n$,
$$
(X_n)_{s,s+c}=
\sum_{i=c}^n(-1)^{n-i}\hat{q}^{{n-i\choose 2}+(s-1)(i-c)}{n\brack i}_{\hat{q}}{i\brack i-c}_{\hat{q}}.
$$
\end{list}
\end{thm}
\begin{proof}
\quad (1)\ \ The result is certainly true of $X_1=R_1$. We prove the result for all $n\geqslant1$ by induction.
Assume that the first $n$ columns of $X_n$ are trivial, i.e. $(X_n)_{i,j}=0$ if $j\leqslant n$. By definition $X_{n+1}=X_nR_{n+1}$.
We also have $(R_{n+1})_{i,j}=0$ unless $j=i$ or $j=i+1$, and $(R_{n+1})_{n+1,n+1}=0$. Now
$$
(X_{n+1})_{i,j}=(X_n)_{i,j-1}(R_{n+1})_{j-1,j}+(X_n)_{i,j}(R_{n+1})_{j,j}.
$$
This is zero if $j\leqslant n$ because $(X_n)_{i,j-1},(X_n)_{i,j}=0$ and it is zero if $j=n+1$
because $(X_n)_{i,n},(R_{n+1})_{n+1,n+1}=0$.
\noindent (2)\ \ We know that $1\wedge\varphi_n=\lambda'(X_n)$. In order for $1\wedge\varphi_n$ to factor via $C_n$ (and for the diagram to commute) we need to show that $X_n$ corresponds under $\lambda'$ to a left $\ell$-module endomorphism of $\vee_{m\geqslant0}\ell\wedge\mathcal{K}(m)$ which is trivial on each piece $\ell\wedge\mathcal{K}(m)$ where $m\leqslant n-1$. The map $\lambda'(X_n)$ acts trivially on the pieces $\ell\wedge\mathcal{K}(m)$ with $m\leqslant n-1$ if each map $\iota_{m,k}:\ell\wedge\mathcal{K}(m)\rightarrow\ell\wedge\mathcal{K}(k)$ has coefficient zero when $m\leqslant n-1$ in the explicit description of $\lambda'(X_n)$. This corresponds to the condition $(X_n)_{k,m}=0$ when $m\leqslant n-1$, which is true by part (1).
\noindent (3)\ \ Certainly $X_n$ is upper triangular, so $(X_n)_{s,s+c}=0$ if $c<0$. We show that $(X_n)_{s,s+c}=0$ if $c>n$ by induction on $n$. The initial case for the induction is $X_1$ where this clearly holds. Assume that $(X_{n-1})_{s,s+c}=0$ if $c>n-1$. As in part (1), we have $X_n=X_{n-1}R_n$ and $(X_n)_{i,j}=(X_{n-1})_{i,j-1}(R_n)_{j-1,j}+(X_{n-1})_{i,j}(R_n)_{j,j}$. Now let $j>n$, then
$$
(X_n)_{s,s+j}=(X_{n-1})_{s,s+j-1}(R_n)_{s+j-1,s+j}+(X_{n-1})_{s,s+j}(R_n)_{s+j,s+j}
$$
and this is zero because both $(X_{n-1})_{s,s+j-1}$ and $(X_{n-1})_{s,s+j}$ are zero by the inductive hypothesis.
For the second part, by~\cite[Proposition 8]{ccw},
$$
X_n=\sum_{i=0}^n(-1)^{n-i}\hat{q}^{n-i\choose 2}{n\brack i}_{\hat{q}}R^i.
$$
Hence, using Lemma~\ref{Rpower},
\begin{align*}
(X_n)_{s,s+c}&=\sum_{i=0}^n(-1)^{n-i}\hat{q}^{n-i\choose 2}{n\brack i}_{\hat{q}}(R^i)_{s,s+c}\\
&=\sum_{i=c}^n(-1)^{n-i}\hat{q}^{n-i\choose 2}{n\brack i}_{\hat{q}}{i\brack i-c}_{\hat{q}}\hat{q}^{(s-1)(i-c)}.
\end{align*}
The range of the final sum can be restricted to start from $c$ rather than $0$ because the second $q$-binomial coefficient is zero
for $i<c$.
\end{proof}
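Parts (1) and (3) of the theorem are likewise amenable to an informal numerical check on finite truncations. The sketch below is our own verification (not part of the paper), with the integer $3$ standing in for $\hat{q}$: it forms $X_n=R_1R_2\cdots R_n$ directly and compares it with the closed formula:

```python
from fractions import Fraction

def qbinom(n, k, q):
    # Gaussian binomial [n choose k]_q via the product formula in the appendix
    if k < 0 or k > n:
        return Fraction(0)
    r = Fraction(1)
    for j in range(k):
        r *= Fraction(1 - q**(n - j), 1 - q**(k - j))
    return r

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][s] * B[s][j] for s in range(N)) for j in range(N)]
            for i in range(N)]

N, q, n = 14, 3, 3   # truncation size, sample integer value for q-hat, n
R = [[q**i if i == j else (1 if j == i + 1 else 0) for j in range(N)]
     for i in range(N)]

# X_n = R_1 R_2 ... R_n with R_k = R - q-hat^{k-1} I
Xn = [[int(i == j) for j in range(N)] for i in range(N)]
for k in range(1, n + 1):
    Rk = [[R[i][j] - (q**(k - 1) if i == j else 0) for j in range(N)]
          for i in range(N)]
    Xn = matmul(Xn, Rk)

def comb2(m):
    return m * (m - 1) // 2   # ordinary binomial coefficient (m choose 2)

# part (3): closed formula for (X_n)_{s,s+c}, rows indexed from 1
for s in range(1, 6):
    for c in range(n + 1):
        closed = sum((-1)**(n - i) * q**(comb2(n - i) + (s - 1) * (i - c))
                     * qbinom(n, i, q) * qbinom(i, i - c, q)
                     for i in range(c, n + 1))
        assert Xn[s - 1][s - 1 + c] == closed

# part (1): the first n columns of X_n are trivial
for j in range(n):
    for i in range(N):
        assert Xn[i][j] == 0
```

Since the $R_k$ are commuting polynomials in $R$, the order of the factors is immaterial, and the truncation is again exact for upper triangular matrices.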
\subsection{$K$-Theory Operations}
The matrix approach provides another way of viewing the ring of stable
degree zero operations on the $p$-local Adams summand. We will work in this final section in the $p$-local stable homotopy category. In a slight abuse of notation let $\ell$ now denote the Adams summand of $p$-local complex connective $K$-theory (rather than the $p$-complete version).
Let $\tilde{U}_\infty\mathbb{Z}_{(p)}$ be the ring of upper triangular matrices with entries in the $p$-local integers.
\begin{dfn}
We define a filtration on $\tilde{U}_\infty\mathbb{Z}_{(p)}$ by setting, for $n\in\mathbb{N}$,
$$
U_n=\{X\in\tilde{U}_\infty\mathbb{Z}_{(p)}:X_{i,j}=0\text{ if }j\leqslant n\}.
$$
$$
This gives a decreasing filtration
$$
\tilde{U}_\infty\mathbb{Z}_{(p)}=U_0\supset U_1\supset U_2\supset\cdots
$$
where each $U_n$ is a two-sided ideal of $\tilde{U}_\infty\mathbb{Z}_{(p)}$.
\end{dfn}
This column filtration gives
a filtration by two-sided ideals because the matrices are upper triangular
(and this would not be the case if we filtered by rows). This can be regarded as the natural filtration
on $\tilde{U}_\infty\mathbb{Z}_{(p)}$ and $\tilde{U}_\infty\mathbb{Z}_{(p)}$ is complete with respect to this topology.
\begin{thm}\label{topringapp}
The ring of degree zero stable operations of the Adams summand, $\ell^0(\ell)$, is isomorphic as a topological ring
to the completion of the subring of $\tilde{U}_\infty\mathbb{Z}_{(p)}$ generated by the matrix $R$.
\end{thm}
\begin{proof}
We have the following description of $\ell^0(\ell)$ from~\cite[Theorem 4.4]{CCW2}:
$$
\ell^0(\ell)=\left\{\sum_{n=0}^\infty a_n\varphi_n:a_n\in\mathbb{Z}_{(p)}\right\}.
$$
This is complete in the filtration topology when filtered by the
ideals
$$
\left\{\sum_{n=m}^\infty a_n\varphi_n:a_n\in\mathbb{Z}_{(p)}\right\}.
$$
Define a map $\alpha:\ell^0(\ell)\rightarrow\tilde{U}_\infty\mathbb{Z}_{(p)}$ as
the continuous ring homomorphism determined by $\alpha(\Psi^q)=R$.
We have
$$
\alpha(\varphi_n)=\alpha(\prod_{i=0}^{n-1}(\Psi^q-\hat{q}^i))=\prod_{i=0}^{n-1}(R-\hat{q}^i I)=X_n.
$$
By Theorem~\ref{app} (1), the first $n$ columns of $X_n$ are trivial, so $\alpha(\varphi_n)\in U_n$. Thus $\alpha$ respects the filtration and so when applied to infinite sums $\alpha\left(\sum_{n=0}^\infty a_n\varphi_n\right)=\sum_{n=0}^\infty a_nX_n$ is well-defined (each entry in the matrix is a finite sum).
We have $\ker\alpha=\left\{\sum_{n=0}^\infty a_n\varphi_n:a_n=0\text{ for all }n\right\}=0$, so $\alpha$ is injective.
Let $\mathcal{S}=\left\{\sum_{n=0}^N a_n R^n:a_n\in\mathbb{Z}_{(p)},N\in\mathbb{N}_0\right\}$, written $\mathcal{S}$ to avoid a clash with the shift matrix $S$ of Definition~\ref{SD}. It is clear that $\mathcal{S}\subseteq\im(\alpha)$. Because $\alpha$ is continuous and $\tilde{U}_\infty\mathbb{Z}_{(p)}$ is complete, it follows that the completion of $\mathcal{S}$ is precisely the image of $\alpha$.
\end{proof}
Similar descriptions can be given for ${ku_{(p)}}^0(ku_{(p)})$
and ${ko_{(2)}}^0(ko_{(2)})$.
\section{Appendix: The $q$-Binomial Theorem}
\label{appendix}
The $q$-binomial coefficients, also known as Gaussian polynomials, arise in many diverse areas of mathematics.
Perhaps the nicest way to define them is as the coefficients arising in the
following version of the
$q$-binomial theorem. If $X$ and $Y$ are variables which $q$-commute,
that is, $YX=qXY$, then for $n\in\mathbb{N}_0$ we have
\begin{equation}\label{expand}
(X+Y)^n=\sum_{i=0}^n{n\brack i}_q X^iY^{n-i},
\end{equation}
where ${n\brack i}_q$ is a $q$-binomial coefficient.
This version of the $q$-binomial theorem goes back to~\cite{Schutzenberger}.
(Various other results also go under the name of $q$-binomial theorem.)
The above point of view has several nice features. It makes evident the relationship with the
ordinary binomial coefficients and that the
coefficients ${n\brack i}_q$ are indeed integer polynomials in $q$. The
two standard recurrences for $q$-binomial coefficients are easily read off, by writing
$(X+Y)^n$ as $(X+Y)(X+Y)^{n-1}$ and as $(X+Y)^{n-1}(X+Y)$.
On the other hand, if one starts from the closed formula
for the $q$-binomial coefficients:
$$
{n\brack i}_q=\prod_{j=0}^{i-1}\frac{1-q^{n-j}}{1-q^{i-j}}\qquad\text{where $n,i\in\mathbb{N}_0$},
$$
then it is easy to deduce the standard recurrences, and~(\ref{expand}) can be readily proved
by induction using either one of them.
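As a small mechanical illustration (ours, not part of the appendix), one of the standard recurrences, ${n\brack i}_q={n-1\brack i-1}_q+q^i{n-1\brack i}_q$, and the closed product formula can be cross-checked against each other; for example, ${4\brack 2}_q=1+q+2q^2+q^3+q^4$:

```python
from fractions import Fraction

def qbinom_poly(n, k):
    """Coefficient list of [n choose k]_q, built from the recurrence
    [m choose i]_q = [m-1 choose i-1]_q + q^i [m-1 choose i]_q."""
    table = {(0, 0): [1]}
    for m in range(1, n + 1):
        for i in range(m + 1):
            a = table.get((m - 1, i - 1), [0])
            b = table.get((m - 1, i), [0])
            shifted = [0] * i + b                      # multiply by q^i
            size = max(len(a), len(shifted))
            table[(m, i)] = [(a[d] if d < len(a) else 0)
                             + (shifted[d] if d < len(shifted) else 0)
                             for d in range(size)]
    p = table.get((n, k), [0])
    while len(p) > 1 and p[-1] == 0:                   # strip trailing zeros
        p = p[:-1]
    return p

def qbinom_product(n, k, q):
    # the closed product formula, evaluated at an integer value of q
    r = Fraction(1)
    for j in range(k):
        r *= Fraction(1 - q**(n - j), 1 - q**(k - j))
    return r

# [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4
assert qbinom_poly(4, 2) == [1, 1, 2, 1, 1]
# the two descriptions agree at sample integer values of q
for q in (2, 3, 5):
    for n in range(7):
        for k in range(n + 1):
            val = sum(cf * q**d for d, cf in enumerate(qbinom_poly(n, k)))
            assert val == qbinom_product(n, k, q)
```

This also exhibits concretely that the coefficients produced by the product formula are integer polynomials in $q$.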
\end{document}
\begin{document}
\title[]{Instability of some Riemannian manifolds with real Killing spinors}
\author{Changliang Wang}
\address{Max-Planck-Institut f\"ur Mathematik, Vivatsgasse 7, Bonn 53111, Germany}
\email{[email protected]}
\author{M. Y.-K. Wang}
\address{Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, L8S4K1 Canada}
\email{[email protected]}
\subjclass[2010]{Primary 53C25}
\keywords{linear stability, $\nu$-entropy, Sasaki Einstein metrics, real Killing spinors}
\date{revised \today}
\begin{abstract}
We prove the instability of some families of Riemannian manifolds with non-trivial real Killing spinors.
These include the invariant Einstein metrics on the Aloff-Wallach spaces $N_{k, l}={\rm SU}(3)/i_{k, l}(S^{1})$ (which are
all nearly ${\rm G}_2$ except $N_{1,0}$), and Sasaki Einstein circle bundles over certain irreducible Hermitian symmetric spaces.
We also prove the instability of most of the simply connected non-symmetric compact homogeneous Einstein spaces of
dimensions $5, 6, $ and $7$, including the strict nearly K\"ahler ones (except ${\rm G}_2/{\rm SU}(3)$).
\end{abstract}
\maketitle
\section{\bf Introduction}
In this article we will derive the instability of some families of simply connected closed Einstein manifolds most of which admit
a non-trivial real Killing spinor. One consequence of our work is the existence of examples of unstable Einstein manifolds with
non-trivial real Killing spinors whose Euclidean metric cones realize all the possible irreducible special holonomy types.
Recall that for a spin manifold $(M^n, g)$, a Killing spinor $\sigma$ is a section of the complex spinor bundle which satisfies the equation
$$ \nabla_X \sigma = c \, X \cdot \sigma $$
for all tangent vectors $X$, where $\nabla$ is the spinor connection induced by the Levi-Civita connection of $g$, $\cdot$ denotes
Clifford multiplication, and $c$ is a priori a complex constant. By the fundamental work of T. Friedrich and his colleagues it is
now well-known that $c$, if nonzero, is either purely imaginary or real. Furthermore, the metric $g$ must be Einstein with
Einstein constant $\Lambda = 4c^2 (n-1)$.
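For example, on the unit round sphere $S^n$, which carries a maximal family of real Killing spinors with $c = \pm\frac{1}{2}$, this formula recovers the familiar value
\[
\Lambda \;=\; 4c^{2}(n-1) \;=\; 4\cdot\tfrac{1}{4}\,(n-1) \;=\; n-1,
\]
consistent with ${\rm Ric}=(n-1)g$ for the round metric.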
In the $c=0$ case, the metric $g$ has restricted holonomy properly contained in ${\rm SO}(n)$. Calabi-Yau, hyperk\"ahler, torsion free
${\rm G}_2$ and ${\rm Spin}(7)$ manifolds belong to this class. These Einstein manifolds all turn out to be stable by the work of
Dai-Wang-Wei \cite{DWW05}. When $c$ is purely imaginary and $(M, g)$ is complete, the classification was achieved by H. Baum \cite{Bau89},
and proofs of the stability of the Einstein metrics were given, first by Kr\"oncke in \cite{Kr17}, and later
by the first author in \cite{Wan17}.
In the real case, an important conceptual classification was given by B\"ar \cite{Ba93}, which can be summarized by the
statement that $(M, g)$ admits a non-trivial real Killing spinor iff its Euclidean metric cone admits a
non-trivial parallel spinor. Of course the detailed classification of these manifolds includes the study of Sasaki Einstein manifolds
(see e.g. \cite{BFGK91}, \cite{BG08}) in odd dimensions and nearly K\"ahler $6$-manifolds (see e.g. \cite{FH17}).
Furthermore, T. Friedrich \cite{Fr80} gave a lower bound for the eigenvalues of the Dirac operator on closed spin manifolds with positive
scalar curvature that depended on the dimension and minimum value of the scalar curvature. This result was later generalized in
\cite{Hi86} where the positivity (resp. minimum value) of the scalar curvature was replaced by the positivity
(resp. value) of the first eigenvalue of the conformal Laplacian. In both cases, the equality case is characterized
by manifolds admitting a non-trivial real Killing spinor. By comparison, the equality case of the Lichnerowicz estimate
for the first eigenvalue of the Laplace-Beltrami operator on a manifold with positive Ricci curvature is characterized by the
round spheres, which are stable and happen also to have a maximal family of Killing spinors.
In addition to their intrinsic interest within Differential Geometry, manifolds with real Killing spinors are of great interest
in Mathematical Physics. In the 1980s, such manifolds, especially ones of dimension six or seven, were independently investigated
by theoretical physicists in their pursuit of Kaluza-Klein compactifications in supergravity theories \cite{DNP86}.
More recently, interest in these spaces from the physics community stems from the AdS/CFT correspondence, see e.g., \cite{GMSW05}.
For all the above reasons, there is good motivation to study the stability problem for manifolds admitting
real Killing spinors.
We shall actually use various different notions of stability for Einstein metrics in this paper.
These are different from notions of stability used by physicists, see e.g., \cite{DNP86}, \cite{GiHa02},
\cite{GiHaP03}. All Einstein manifolds under consideration hereafter will have positive Einstein constant.
Unless otherwise stated, we will exclude the case of round spheres.
The first stability notion comes from the fact that for a closed manifold $M^{n}$ Einstein metrics
are precisely the critical points of the normalized total scalar curvature functional
\begin{equation}
\widetilde{\bf S}(g)=\frac{1}{({\rm Vol}(M, g))^{\frac{n-2}{n}}}\int_{M}s_{g}\,d{\rm vol}_{g}
\end{equation}
where in the above $s_{g}$ is the scalar curvature of the Riemannian metric $g$ on $M^{n}$. Since this functional is invariant
under the action of the diffeomorphisms of $M$ and is locally minimizing along conformal change directions, it is customary to
restrict $\widetilde{\bf S}$ to the space of Riemannian metrics with constant scalar curvature and fixed volume. The tangent space
to this ILH-manifold consists of the {\em TT-tensors}, i.e., symmetric $2$-tensors satisfying ${\rm tr}_g(h) = 0 $ and ${\delta}_g h = 0$
(\cite{Bes87}, section 4.G). The second variation of $\widetilde{\bf S}$ is then given by
\begin{equation} \label{2ndvar-Hilbert}
\widetilde{\bf S}^{\prime\prime}_{g}(h, h)=\frac{-1}{2({\rm Vol}(M, g))^{\frac{n-2}{n}}}\int_{M}\langle\nabla^{*}\nabla h-2\mathring{R}h, h\rangle \,d{\rm vol}_{g}
\end{equation}
where $(\mathring{R}h)_{ij}$ is defined to be $\sum_{k, l} \,R_{ikjl}h^{kl}$ and our convention for the curvature tensor is
$R_{X, Y} = \nabla_{[X, Y]} - [\nabla_X, \nabla_Y ]$.
\begin{defn} \label{stability-def}
A closed Einstein manifold $(M, g)$ is
\noindent{$($a$)$} $\widetilde{\bf S}$-stable if $g$ is a local maximum of $\widetilde{\bf S}$
restricted to the space of Riemannian metrics on $M$ with constant scalar curvature and the same volume as $g$;
\noindent{$($b$)$} $\widetilde{\bf S}$-linearly stable if
$\langle \nabla^* \nabla h - 2 \mathring{R} h, h \rangle_{L^2(M, g)} \geq 0 $ for all
TT-tensors $h$ on $M$.
\end{defn}
Note that $\widetilde{\bf S}$-stability is the first notion of stability mentioned by Koiso (\cite{Koi80}, p. 52),
while the second notion is weaker than that given in Definition 2.7 there. Both notions of stability in the above include the
possibility of non-trivial (resp. infinitesimal) Einstein deformations (which may not be integrable in general).
In the first case, the value of the restricted functional would be unchanged, while in
the second case one would get an eigentensor of the Lichnerowicz Laplacian with eigenvalue equal to twice
the Einstein constant $\Lambda$, owing to the identity $\nabla^* \nabla - 2 \mathring{R} = -(\Delta_L + 2 \Lambda \mathbb{I})$
for an Einstein manifold. The $\widetilde{\bf S}$-coindex of $g$ is the dimension (necessarily finite by elliptic theory)
of the maximal negative definite subspace for the quadratic form
\begin{equation} \label{quadratic}
{\mathscr Q}(h, h) := \langle \nabla^* \nabla h - 2 \mathring{R} h, h \rangle_{L^2(M, g)}.
\end{equation}
The corresponding notions of instability are given by negation. Hence $\widetilde{\bf S}$-linear instability
implies $\widetilde{\bf S}$-instability. Moreover, $\widetilde{\bf S}$-linear instability implies $\nu$-linear instability and further also $\nu$-instability (see Definition $\ref{nu-stability-def}$ below). Then by Theorem 1.3 in \cite{Kr15},
(since $\Lambda > 0$) it also implies that $g$ is {\em dynamically unstable} for the Ricci flow.
Another notion of stability comes from the $\nu$-entropy of Perelman. For detailed information about this
functional and its second order properties we refer the reader to \cite{Pe02}, \cite{CHI04}, \cite{CM12}, and \cite{CH15}.
For us the important facts about the $\nu$-entropy to recall are that its value is unchanged by the action of diffeomorphisms
and homotheties, it is monotonically increasing along Ricci flows, and its critical points consist of shrinking
gradient Ricci solitons (which include Einstein metrics with positive $\Lambda$).
At an Einstein metric $g$, the second
variation of the $\nu$-entropy is given (up to a positive constant) by $- \frac{1}{2} {\mathscr Q}(h, h)$ on the subspace
$ \ker {\rm tr}_g \cap \ker \delta_g$ of TT-tensors. However, unlike the case of the $\widetilde{\bf S}$ functional, the second
variation is no longer always positive on ${\mathscr C}(M)g$ -- the positive directions are given by eigenfunctions
of the Laplacian of $g$ corresponding to eigenvalues less than $2\Lambda$.
(Our convention for eigenvalues is given by $\Delta \phi = - \lambda \phi$ with $\lambda \geq 0$.)
\begin{defn} \label{nu-stability-def}
A closed Einstein manifold $(M, g)$ with Einstein constant $\Lambda$ is
\noindent{$($a$)$} $\nu$-stable if $g$ is a local maximizer of the $\nu$-entropy;
\noindent{$($b$)$} $\nu$-linearly stable if the second variation of the $\nu$-entropy is negative semi-definite on
${\mathscr C}(M)g \oplus (\ker {\rm tr}_g \cap \ker \delta_g)$.
\end{defn}
By \cite{CH15}, $\nu$-linear stability is equivalent to ${\mathscr Q}(h, h) \geq 0$ for all
TT-tensors and $\lambda_1(M, g) \geq 2 \Lambda$. Consequently, there are two sources contributing to
$\nu$-linear instability, and $\widetilde{\bf S}$-linear instability implies $\nu$-linear instability.
We will discuss the restriction of the second variation of the $\nu$-entropy to ${\mathscr C}(M)g$ in
greater detail in section \ref{instab-conf}, where we deduce the $\nu$-linear instability of
some homogeneous Einstein metrics which admit real Killing spinors. Here we only note the interesting fact
that destabilizing directions coming from conformal deformation by eigenfunctions necessarily deform a
homogeneous Einstein metric away from the space of homogeneous metrics.
Before stating our results on instability, we need to recall a few more facts about simply connected
manifolds admitting a non-trivial real Killing spinor. Because of our assumption of simple connectivity
and our exclusion of round spheres, such a manifold is de Rham irreducible and cannot be a symmetric space
(p. 35, \cite{BFGK91}, Theorem 13). By the results of B\"ar \cite{Ba93}, if its Euclidean cone has ${\rm SU}(m+1)$
holonomy, $m \geq 2$, then $(M^{2m+1}, g)$ is Sasaki Einstein. (The implicit scaling involved is
choosing $\Lambda = 2m$, whence $c = \pm \frac{1}{2}$.)
Conversely, a simply connected Sasaki Einstein manifold is spin \cite{Mo97} and admits non-trivial
real Killing spinors (\cite{FrK90}, Theorem 1). The dimension of the space of real Killing spinors
is $2$ and the chiral nature (i.e., whether or not both signs of $c$ occur) of these spinors depends
on the parity of $m$.
If the Euclidean cone of $(M^{4m+3}, g)$ has ${\rm Sp}(m+1)$ holonomy, $m \geq 1$, then $(M^{4m+3}, g)$ is $3$-Sasakian.
In this case the dimension of the space of real Killing spinors is $m+2$ and only one sign of $c$ occurs
once the orientation is fixed \cite{W89}.
Finally, if the Euclidean cone has ${\rm Spin}(7)$ (resp. ${\rm G}_2$) holonomy, then by \cite{BFGK91} and \cite{Ba93}
$(M, g)$ has a nearly ${\rm G}_2$ (resp. a strict nearly K\"ahler) structure, and the space of real
Killing spinors has dimension $1$. The converse statements are proved respectively in
\cite{BFGK91} and \cite{Gr90}. We also refer to \cite{FKSM97} for nearly ${\rm G}_{2}$ structures and Killing spinors.
An interesting family of simply connected closed Riemannian manifolds which are nearly ${\rm G}_2$
are the Aloff-Wallach spaces $N_{k, l} = {\rm SU}(3)/U_{k,l}$, where $k,l$ are relatively prime integers
and $U_{k,l}$ is the circle ${\rm diag}(e^{2\pi i k\theta}, e^{2\pi i l \theta}, e^{-2 \pi i (k+l) \theta})$
in ${\rm SU}(3)$. It is well-known that these manifolds are spin and, up to isometry, they admit two ${\rm SU}(3)$-invariant
Einstein metrics \cite{W82}, \cite{CR84}, \cite{PP84}, \cite{KoV93}, \cite{Nik04}. Except for the spaces
$N_{1, -1}$ and $N_{1,1}$, all the Euclidean cone metrics of these ${\rm SU}(3)$-invariant Einstein metrics
have holonomy ${\rm Spin}(7)$ (see \cite{CR84}, \cite{BFGK91}, and \cite{Ba93}). Topologically speaking, the $N_{k,l}$
exhibit infinitely many homotopy types, and there exist pairs which are homeomorphic but not diffeomorphic \cite{KS91}.
\begin{thm} \label{Aloff-Wallach}
The invariant Einstein metrics on the Aloff-Wallach manifolds $N_{k, l}$ described above are all
$\widetilde{\bf S}$-linearly unstable, and therefore $\nu$-linearly unstable.
\end{thm}
The proof of this theorem will be given in subsections \ref{AWI} and \ref{AWII}. Some remarks about the two
exceptional Aloff-Wallach spaces are also given in section \ref{homog}.
\medskip
Let $(M^{2m+1}, g)$, $m \geq 2$, be a closed simply connected Sasaki Einstein manifold. The Sasaki structure
is {\em regular} if the characteristic vector field generates a free circle action on $M$. In this case,
$M$ is a principal circle bundle over a Fano K\"ahler Einstein manifold $B$ such that the projection map
is a Riemannian submersion with totally geodesic fibres, and the Euler class of the bundle is a rational
multiple of the first Chern class of $B$. It follows from Corollary 1.7 in \cite{WW18} that
if the second Betti number of $B$ is greater than $1$, then $(M, g)$ is $\widetilde{\bf S}$-linearly unstable.
When $b_2(B) = 1$, we have $H^2(B; \mathbb{Z}) \approx \mathbb{Z}$ (since $H^2(B; \mathbb{Z})$ is torsion free), so all principal
circle bundles over it are, up to a change in orientation in the fibers, quotients of the circle bundle corresponding
to one of the two indivisible classes in $H^2(B; \mathbb{Z})$. The total spaces of these two circle bundles are diffeomorphic
and simply connected. The simplest examples of Fano K\"ahler Einstein manifolds with $b_2 =1$ are the irreducible
hermitian symmetric spaces of compact type. For complex projective space $\mathbb{CP}^m$, the corresponding simply connected
regular Sasaki Einstein manifold over it is just $S^{2m+1}$ equipped with the constant curvature $1$ metric, which
is $\widetilde{\bf S}$-linearly stable. By contrast we have
\begin{thm} The following simply connected regular Sasaki Einstein manifolds are $\nu$-linearly unstable from
conformal variations:
\noindent{$($a$)$} ${\rm SO}(p+2)/{\rm SO}(p), p \geq 3,$ circle bundle over the complex quadric $ {\rm SO}(p+2)/({\rm SO}(p) \times {\rm SO}(2))$;
\noindent{$($b$)$} ${\rm E}_6/ {\rm Spin}(10)$ and ${\rm E}_7/{\rm E}_6$, which are respectively circle bundles over the
hermitian symmetric spaces ${\rm E}_6/({\rm Spin}(10) \cdot {\rm U}(1))$ and ${\rm E}_7/({\rm E}_6 \cdot {\rm U}(1))$;
\noindent{$($c$)$} ${\rm SU}(p+2)/({\rm SU}(p) \times {\rm SU}(2)), p \geq 2$, a circle bundle over the complex Grassmannian
${\rm SU}(p+2)/{\rm S}({\rm U}(p)\times {\rm U}(2))$.
Moreover, the Stiefel manifolds in $($a$)$ above are also $\widetilde{\bf S}$-linearly unstable, and for $k \geq 4$, ${\rm Sp}(k)/{\rm SU}(k)$,
which are circle bundles over ${\rm Sp}(k)/{\rm U}(k)$, are $\widetilde{\bf S}$-linearly unstable, and so $\nu$-linearly unstable.
\end{thm}
The proof of this theorem, including the dimensions of the destabilizing eigenspaces, is given in
sections \ref{Stiefel} and \ref{instab-conf}.
The $\widetilde{\bf S}$-linear instability of ${\rm Sp}(k)/{\rm SU}(k)$, $k\geq4$, follows from Koiso's work in \cite{Koi80} and Corollary 6.1 in \cite{Wan17}. Indeed, by Koiso's calculations on page 68 and the table on page 70 in \cite{Koi80}, the Hermitian symmetric space ${\rm Sp}(k)/{\rm U}(k)$ of dimension $k^{2}+k$ is $\widetilde{\bf S}$-linearly unstable. Moreover, after rescaling the symmetric metric used in \cite{Koi80} so that the
new Einstein constant is $k^2+k+2$, one finds that $\nabla^{*}\nabla-2\mathring{R}$ has a negative eigenvalue $-4\frac{k^{2}+k+2}{2(k+1)}=-2k-\frac{4}{k+1}<-8$ if $k\geq4$.
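The rescaling arithmetic in the last step can be checked mechanically; the following sketch (illustrative only, not part of the argument) verifies the eigenvalue identity and the bound exactly over the rationals:

```python
from fractions import Fraction

# Check -4(k^2+k+2)/(2(k+1)) == -2k - 4/(k+1), and that it is < -8 for k >= 4.
for k in range(4, 200):
    lhs = Fraction(-4 * (k**2 + k + 2), 2 * (k + 1))
    rhs = -2 * k - Fraction(4, k + 1)
    assert lhs == rhs
    assert lhs < -8
print("eigenvalue identity and bound verified for 4 <= k < 200")
```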
\smallskip
In the last section of this paper we discuss the stability of compact simply connected homogeneous Einstein
manifolds of dimension $\leq 7$ by putting together the results in this paper and \cite{WW18}
with classification results for these manifolds, and the work of \cite{Koi80} and \cite{CH15}. The results are
summarized as follows:
\begin{thm} \label{lowdim}
Let $(M=G/K, g)$ be a compact simply connected Einstein manifold on which the semisimple connected
Lie group $G$ acts almost effectively by isometries and with isotropy group $K$. Assume that $(G, K)$ is not
a Riemannian symmetric pair and that $5 \leq \dim M \leq 7$. Assume further that $M$ is neither $S^3 \times S^3$
nor the isotropy irreducible space ${\rm Sp}(2)/{\rm SU}(2)$. Then $g$ is $\widetilde{\bf S}$-linearly unstable.
\end{thm}
\medskip
As mentioned at the beginning of the Introduction, it follows from all the above theorems and Corollary \ref{3-Sasakian}
that there are $\widetilde{\bf S}$-linearly unstable (and hence $\nu$-linearly unstable and dynamically unstable)
examples of manifolds admitting non-trivial real Killing spinors exhibiting all possible Euclidean metric cone special
holonomy types and in all admissible dimensions. By contrast, up to now, the only $\widetilde{\bf S}$-linearly stable
examples with non-trivial real Killing spinors are the constant curvature spheres.
\medskip
\noindent{\bf Acknowledgements:} The first author would like to thank Professors Xianzhe Dai and Guofang Wei for their interest and many helpful discussions. During the Fall term of 2017-2018, he was supported by a Fields Postdoctoral Fellowship. He thanks the Fields Institute for Research in Mathematical Sciences for the support.
The second author is partially supported by NSERC Discovery Grant No. OPG00009421.
Both authors thank Professors Stuart Hall and Thomas Murphy for their comments on an earlier version of the paper.
\section{\bf Instability of Einstein metrics on Aloff-Wallach spaces} \label{AW}
In this section we will prove Theorem $\ref{Aloff-Wallach}$, i.e., deduce the $\widetilde{\bf S}$-linear instability of all invariant Einstein metrics on the Aloff-Wallach spaces $N_{k, l}={\rm SU}(3)/U_{k,l}$, where $k, l$ are integers, and $U_{k,l}$ is the circle ${\rm diag}(e^{2\pi i k\theta}, e^{2\pi i l \theta}, e^{-2 \pi i (k+l) \theta})$ in ${\rm SU}(3)$. We will assume in addition that $k, l$ are
coprime, so that $N_{k, l}$ is simply connected, and remove diffeomorphic spaces by assuming that $k \geq l \geq 0$.
In the proof we will use the explicit solutions in \cite{CR84} for the invariant Einstein equations on all Aloff-Wallach spaces except one invariant Einstein metric on $N_{1,0}$. Thus in \S2.1 and 2.2 we will follow the notation in \cite{CR84}, except that the parameters $\alpha, \beta, \gamma, \delta$ in $(\ref{InvariantMetric})$ below are $\frac{1}{\alpha^{2}}, \frac{1}{\beta^{2}}, \frac{1}{\gamma^{2}}, \frac{1}{\delta^{2}}$ in \cite{CR84}.
In \cite{CR84}, the Aloff-Wallach spaces $N_{k, l}$ are denoted instead by $N^{pq0}$ with co-prime integers $p, q$. The Lie algebra of the embedded circle subgroup is generated by $N$ as defined in (\ref{LieAlgebraBasis}) below. By comparing $N$ with the Lie algebra of $U_{k, l}$, one obtains the following relationship between $k, l$ and $p, q$:
\begin{equation}\label{pq-klRelation}
\begin{cases}
p=(k-l)c,\\
q=3(k+l)c,
\end{cases}
\end{equation}
for some proportionality constant $c$.
Our assumptions on $k, l$ translate into the conditions that $p, q\geq0, (p, q)=1,$ and $3p\leq q$. Then the integer pairs $(k, l)$ and $(p, q)$ uniquely determine each other by $(\ref{pq-klRelation})$. In \cite{CR84}, more general spaces $N^{pqr}$ with integers $p, q$, and $r$ taken to be relatively prime were studied. These spaces have $N^{pq0}$ as their universal covers. We also note that the
special spaces $N_{1,1}, N_{1, 0}$ correspond respectively to $N^{010}$ and $N^{130}$. (These spaces are special because their isotropy
representations contain equivalent irreducible summands.)
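The correspondence $(\ref{pq-klRelation})$ is easy to make explicit: dividing $(k-l,\, 3(k+l))$ by its greatest common divisor recovers the co-prime pair $(p, q)$. A small sketch (illustrative only; the function name is ours):

```python
from math import gcd

def pq_from_kl(k, l):
    """Co-prime (p, q) with p = (k-l)c, q = 3(k+l)c, as in (pq-klRelation)."""
    p0, q0 = k - l, 3 * (k + l)
    g = gcd(p0, q0)
    return p0 // g, q0 // g

# The two special Aloff-Wallach spaces: N_{1,1} <-> N^{010}, N_{1,0} <-> N^{130}
print(pq_from_kl(1, 1), pq_from_kl(1, 0))  # prints: (0, 1) (1, 3)
```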
\subsection{Einstein metrics on $N^{pq0}$}
In this subsection, we will recall the Einstein equations on the Aloff-Wallach spaces and their solutions
in \cite{CR84}. They will play important roles in \S $\ref{AWI}$.
We use the following basis and the decomposition of the Lie algebra $\mathfrak{su}(3)$ as in \cite{CR84}.
For each fixed pair of integers $p, q$ with $(p, q)=1$, let
\begin{equation}\label{LieAlgebraBasis}
\begin{aligned}
& N = -\frac{i}{\sqrt{3p^{2}+q^{2}}}\begin{bmatrix}
-\frac{\sqrt{3}}{6}(q-3p) & 0 & 0 \\
0 & -\frac{\sqrt{3}}{6}(q+3p) & 0 \\
0 & 0 & \frac{\sqrt{3}}{3}q
\end{bmatrix}, \\ \\
& Z = -\frac{i}{\sqrt{3p^{2}+q^{2}}}\begin{bmatrix}
\frac{p+q}{2} & 0 & 0 \\
0 & \frac{p-q}{2} & 0 \\
0 & 0 & -p
\end{bmatrix},\\ \\
& X_{1}=-\frac{1}{2}i\lambda_{1}=\begin{bmatrix}
0 & -\frac{1}{2}i & 0\\
-\frac{1}{2}i & 0 & 0\\
0 & 0 & 0
\end{bmatrix},
\qquad
X_{2}=-\frac{1}{2}i\lambda_{2}=\begin{bmatrix}
0 & -\frac{1}{2} & 0\\
\frac{1}{2} & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},\\ \\
& X_{4}=-\frac{1}{2}i\lambda_{4}=\begin{bmatrix}
0 & 0 & -\frac{1}{2}i\\
0 & 0 & 0\\
-\frac{1}{2}i & 0 & 0
\end{bmatrix},
\qquad
X_{5}=-\frac{1}{2}i\lambda_{5}=\begin{bmatrix}
0 & 0 & -\frac{1}{2}\\
0 & 0 & 0 \\
\frac{1}{2} & 0 & 0
\end{bmatrix},\\ \\
& X_{6}=-\frac{1}{2}i\lambda_{6}=\begin{bmatrix}
0 & 0 & 0\\
0 & 0 & -\frac{1}{2}i\\
0 & -\frac{1}{2}i & 0
\end{bmatrix},
\qquad
X_{7}=-\frac{1}{2}i\lambda_{7}=\begin{bmatrix}
0 & 0 & 0\\
0 & 0 & -\frac{1}{2}\\
0 & \frac{1}{2} & 0
\end{bmatrix},
\end{aligned}
\end{equation}
where $\lambda_{k}$ are called the Gell-Mann matrices in the physics literature.
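As a sanity check on $(\ref{LieAlgebraBasis})$ (a numerical illustration, not needed for the proofs), one can verify for a sample pair, say $(p, q)=(1, 3)$, that all eight matrices are traceless and anti-Hermitian, i.e.\ lie in $\mathfrak{su}(3)$, and that they are mutually $Q$-orthogonal with $Q(X, X)=3$:

```python
import numpy as np

p, q = 1, 3
r = np.sqrt(3*p**2 + q**2)

N = -1j/r * np.diag([-np.sqrt(3)/6*(q - 3*p), -np.sqrt(3)/6*(q + 3*p), np.sqrt(3)/3*q])
Z = -1j/r * np.diag([(p + q)/2, (p - q)/2, -p])
X1 = np.array([[0, -0.5j, 0], [-0.5j, 0, 0], [0, 0, 0]])
X2 = np.array([[0, -0.5, 0], [0.5, 0, 0], [0, 0, 0]], dtype=complex)
X4 = np.array([[0, 0, -0.5j], [0, 0, 0], [-0.5j, 0, 0]])
X5 = np.array([[0, 0, -0.5], [0, 0, 0], [0.5, 0, 0]], dtype=complex)
X6 = np.array([[0, 0, 0], [0, 0, -0.5j], [0, -0.5j, 0]])
X7 = np.array([[0, 0, 0], [0, 0, -0.5], [0, 0.5, 0]], dtype=complex)

basis = [N, Z, X1, X2, X4, X5, X6, X7]

def Q_form(X, Y):
    # Q(X, Y) = -6 tr(XY), the negative of the Killing form of su(3)
    return -6 * np.trace(X @ Y)

for X in basis:
    assert abs(np.trace(X)) < 1e-12        # traceless
    assert np.allclose(X.conj().T, -X)     # anti-Hermitian
for i, X in enumerate(basis):
    for j, Y in enumerate(basis):
        assert abs(Q_form(X, Y) - (3 if i == j else 0)) < 1e-12
print("basis checks passed")
```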
Let $\mathfrak{h}={\rm span}(N)$, $\mathfrak{m}_{1}={\rm span}(X_{1}, X_{2})$, $\mathfrak{m}_{2}={\rm span}(Z)$,
$\mathfrak{m}_{3}={\rm span}(X_{4}, X_{5})$, and $\mathfrak{m}_{4}={\rm span}(X_{6}, X_{7})$. Then we
have the Lie algebra decomposition:
\begin{equation}\label{LieAlgDec}
\mathfrak{su}(3)=\mathfrak{t}\oplus\mathfrak{m}_{1}\oplus\mathfrak{m}_{3}\oplus\mathfrak{m}_{4}
=\mathfrak{h}\oplus\mathfrak{m}_{2}\oplus\mathfrak{m}_{1}\oplus\mathfrak{m}_{3}\oplus\mathfrak{m}_{4}=\mathfrak{h}\oplus\mathfrak{m},
\end{equation}
where $\mathfrak{t}=\mathfrak{h}\oplus\mathfrak{m}_{2}$, and $\mathfrak{m}=\mathfrak{m}_{1}\oplus\mathfrak{m}_{2}\oplus\mathfrak{m}_{3}\oplus\mathfrak{m}_{4}$. Let $H_{p,q} \approx {\rm U}(1)$ be the isotropy group (generated by $N$) of the identity coset $[I_{3}]$, where $I_{3}$ denotes the identity matrix of size $3$. We identify the tangent space $T_{[I_{3}]}N^{pq0}$ with $\mathfrak{m}$ as usual.
The background metric chosen in \cite{CR84} is the negative of the Killing form of $\mathfrak{su}(3)$:
$Q(X, Y)=-6\,{\rm tr}(XY)$ for any $X, Y\in \mathfrak{su}(3)$. This can be seen from (2.7) and the first equation in (2.5)
of \cite{CR84}. Then for any four positive real numbers $\alpha, \beta, \gamma, \delta$, the following ${\rm Ad}_{H}$-invariant
inner product
\begin{equation}\label{InvariantMetric}
g(\alpha, \beta, \gamma, \delta)=\alpha Q\vert_{\mathfrak{m}_{1}}\oplus\beta Q\vert_{\mathfrak{m}_{2}}
\oplus\gamma Q\vert_{\mathfrak{m}_{3}}\oplus\delta Q\vert_{\mathfrak{m}_{4}}
\end{equation}
on $\mathfrak{m}$ induces an ${\rm SU}(3)$-invariant Riemannian metric on $N^{pq0}$. We will also use $g(\alpha, \beta, \gamma, \delta)$ to denote this invariant Riemannian metric. In the generic cases, namely when $(p, q)\neq (0, 1)$ or $(1, 3)$, the components $\mathfrak{m}_{1}$, $\mathfrak{m}_{2}$, $\mathfrak{m}_{3}$, and $\mathfrak{m}_{4}$ of the decomposition of the isotropy representation $\mathfrak{m}$ are inequivalent to each other. Thus the inner products in $(\ref{InvariantMetric})$ induce all of the ${\rm SU}(3)$-invariant metrics on $N^{pq0}$. On the other hand, it turns out that all Einstein metrics on $N^{010}$ and $N^{130}$ obtained in \cite{CR84} and \cite{PP84} also have the block diagonal form as in $(\ref{InvariantMetric})$. Thus, as in \cite{CR84}, we first consider only invariant metrics defined as in $(\ref{InvariantMetric})$.
We remind the reader that the $\alpha, \beta, \gamma, \delta$ in $(\ref{InvariantMetric})$ are actually $\frac{1}{\alpha^{2}}, \frac{1}{\beta^{2}}, \frac{1}{\gamma^{2}}, \frac{1}{\delta^{2}}$ in \cite{CR84}.
The Ricci tensors of the metrics in $(\ref{InvariantMetric})$ have the same block diagonal form as the metrics. Their
components are given in (2.13) in \cite{CR84}, and we recall them below.
\begin{equation}\label{RicciCurvatureAloffWallach}
\begin{split}
Ric\vert_{\mathfrak{m}_{1}} &= \left[\frac{3}{4}\frac{1}{\alpha}+\frac{1}{8}\left(\frac{\alpha}{\gamma\delta}-\frac{\gamma}{\alpha\delta}-\frac{\delta}{\alpha\gamma}\right)
-\frac{1}{4}q^{2}\frac{\beta}{\alpha^{2}}\right]g\vert_{\mathfrak{m}_{1}},\\
Ric\vert_{\mathfrak{m}_{2}} &= \left[\frac{1}{4}q^{2}\frac{\beta}{\alpha^{2}}+\frac{1}{16}(3p+q)^{2}\frac{\beta}{\gamma^{2}}+\frac{1}{16}(3p-q)^{2}\frac{\beta}{\delta^{2}}\right]
g\vert_{\mathfrak{m}_{2}},\\
Ric\vert_{\mathfrak{m}_{3}} &=\left[\frac{3}{4}\frac{1}{\gamma}+\frac{1}{8}\left(\frac{\gamma}{\alpha\delta}-\frac{\alpha}{\gamma\delta}-\frac{\delta}{\alpha\gamma}\right)
-\frac{1}{16}(3p+q)^{2}\frac{\beta}{\gamma^{2}}\right]g\vert_{\mathfrak{m}_{3}},\\
Ric\vert_{\mathfrak{m}_{4}}&=\left[\frac{3}{4}\frac{1}{\delta}+\frac{1}{8}\left(\frac{\delta}{\alpha\gamma}-\frac{\alpha}{\gamma\delta}
-\frac{\gamma}{\alpha\delta}\right)-\frac{1}{16}(3p-q)^{2}\frac{\beta}{\delta^{2}}\right]g\vert_{\mathfrak{m}_{4}}.
\end{split}
\end{equation}
Recall also the following change of variables given in (3.3) in \cite{CR84}:
\begin{equation}\label{ChangeVariables}
a=\frac{\delta}{\alpha},\quad b=\frac{\gamma}{\alpha}, \quad u=\sqrt{\frac{\beta\delta}{\alpha\gamma}}\frac{(3p+q)}{\sqrt{2}}, \quad v=-\sqrt{\frac{\beta\gamma}{\alpha\delta}}\frac{(3p-q)}{\sqrt{2}}, \quad \lambda=96\frac{\gamma\delta e^{2}}{\alpha},
\end{equation}
where we have used our choice of $\alpha, \beta, \gamma, \delta$ as in $(\ref{InvariantMetric})$. Using this change of variables, Castellani and Romans transformed the Einstein equations for the metric $g(\alpha, \beta, \gamma, \delta)$ with Einstein constant $12e^{2}$ to the equations
\begin{equation}\label{EinsteinCondition}
\begin{aligned}
6ab+1-a^{2}-b^{2}-(av+bu)^{2} &= \lambda,\\
6a+b^{2}-a^{2}-1-u^{2} &= \lambda,\\
6b+a^{2}-b^{2}-1-v^{2} &= \lambda,\\
(av+bu)^{2}+u^{2}+v^{2} &= \lambda.
\end{aligned}
\end{equation}
This is (3.2) in \cite{CR84}. They further obtain the following rather explicit solutions to the equations $(\ref{EinsteinCondition})$:
\begin{equation}\label{Solution}
\begin{split}
a &= c+\frac{1}{2}d+\frac{3}{2} \qquad (-1\leq c\leq 1),\\
b &= c-\frac{1}{2}d+\frac{3}{2},\\
u^{2} &= \frac{5}{2}-2\left(c+\frac{1}{2}d\right)^{2}, \qquad uv=-2+\frac{5}{2}c^{2},\\
v^{2} &= \frac{5}{2}-2\left(c-\frac{1}{2}d\right)^{2},\\
\lambda &= \frac{3}{2}(c+2)^{2},
\end{split}
\end{equation}
where $d=\pm\sqrt{1-c^{2}}$ and $c$ is related to $p$ and $q$ by
\begin{equation}\label{EquationForc}
\frac{3p}{q}=\frac{1-\frac{av}{bu}}{1+\frac{av}{bu}}.
\end{equation}
These are the equations (3.4) and (3.5) in \cite{CR84}.
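As a numerical spot check (illustrative only, and not used in the proofs), one can confirm that the formulas $(\ref{Solution})$ solve the Einstein system of Castellani and Romans, written with $u^{2}$ in the second equation and $v^{2}$ in the third as the $a\leftrightarrow b$, $u\leftrightarrow v$ symmetry requires. Since only $u^{2}$, $v^{2}$, and $uv$ are specified, we expand $(av+bu)^{2}=a^{2}v^{2}+b^{2}u^{2}+2ab\,uv$:

```python
from math import sqrt

def check(c, sign):
    # Evaluate the solution formulas (Solution) and test all four equations.
    d = sign * sqrt(1 - c * c)
    a, b = c + d/2 + 1.5, c - d/2 + 1.5
    u2 = 2.5 - 2 * (c + d/2)**2
    v2 = 2.5 - 2 * (c - d/2)**2
    uv = -2 + 2.5 * c * c
    lam = 1.5 * (c + 2)**2
    avbu2 = a*a*v2 + b*b*u2 + 2*a*b*uv        # (av + bu)^2
    eqs = [6*a*b + 1 - a*a - b*b - avbu2,
           6*a + b*b - a*a - 1 - u2,
           6*b + a*a - b*b - 1 - v2,
           avbu2 + u2 + v2]
    return all(abs(e - lam) < 1e-9 for e in eqs)

print(all(check(c/10, s) for c in range(-9, 10) for s in (1, -1)))  # prints: True
```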
For each pair of co-prime non-negative integers $p, q$ with $3p\leq q$, a solution of $(\ref{EquationForc})$ satisfying $-1\leq c\leq -\frac{2}{\sqrt{5}}$ with $d=\sqrt{1-c^{2}}$ and the corresponding Einstein metric was obtained in \cite{CR84}. Then in \cite{PP84}, Page and Pope showed that for each pair of such integers $p, q$, another solution of $(\ref{EquationForc})$ satisfying $\frac{2}{\sqrt{5}}\leq c\leq 1$ with $d=-\sqrt{1-c^{2}}$ actually gives a geometrically inequivalent Einstein metric, and furthermore that there are exactly two geometrically inequivalent Einstein metrics on each $N^{pq0}$ among metrics of the form (\ref{InvariantMetric}). Their $\widetilde{\bf S}$-linear instability will be shown in \S2.2. However, in \cite{Nik04}, Nikonorov pointed out that the two Einstein metrics on $N^{130}$ obtained in \cite{CR84} and \cite{PP84} are isometric to each other. Moreover, he found a geometrically inequivalent invariant Einstein metric on $N^{130}$ and showed that there are exactly two geometrically inequivalent invariant Einstein metrics on $N^{130}$. The $\widetilde{\bf S}$-linear instability of the additional invariant Einstein metric will be shown in \S2.3.
\subsection{Instability of invariant Einstein metrics in \cite{CR84} and \cite{PP84}} \label{AWI}
By using the Ricci curvature formulas (\ref{RicciCurvatureAloffWallach}), one easily obtains the scalar curvature of $g(\alpha, \beta, \gamma, \delta)$ as
\begin{equation*}
\begin{aligned}
s_{g(\alpha, \beta, \gamma, \delta)}=
&\frac{3}{2}\left(\frac{1}{\alpha}+\frac{1}{\gamma}+\frac{1}{\delta}\right)-\frac{1}{4}\left(\frac{\alpha}{\gamma\delta}
+\frac{\gamma}{\alpha\delta}+\frac{\delta}{\alpha\gamma}\right)\\
&-\frac{1}{4}q^{2}\frac{\beta}{\alpha^{2}}
-\frac{1}{16}(3p+q)^{2}\frac{\beta}{\gamma^{2}}-\frac{1}{16}(3p-q)^{2}\frac{\beta}{\delta^{2}}.
\end{aligned}
\end{equation*}
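Since $\dim\mathfrak{m}_{1}=\dim\mathfrak{m}_{3}=\dim\mathfrak{m}_{4}=2$ and $\dim\mathfrak{m}_{2}=1$, this expression is the weighted sum $2r_{1}+r_{2}+2r_{3}+2r_{4}$ of the bracketed coefficients in $(\ref{RicciCurvatureAloffWallach})$. A quick numerical cross-check of the coefficients (illustrative only; the function names are ours):

```python
def ricci_coeffs(al, be, ga, de, p, q):
    # Bracketed coefficients r_1, ..., r_4 of (RicciCurvatureAloffWallach)
    r1 = 0.75/al + (al/(ga*de) - ga/(al*de) - de/(al*ga))/8 - 0.25*q*q*be/al**2
    r2 = 0.25*q*q*be/al**2 + (3*p+q)**2*be/(16*ga*ga) + (3*p-q)**2*be/(16*de*de)
    r3 = 0.75/ga + (ga/(al*de) - al/(ga*de) - de/(al*ga))/8 - (3*p+q)**2*be/(16*ga*ga)
    r4 = 0.75/de + (de/(al*ga) - al/(ga*de) - ga/(al*de))/8 - (3*p-q)**2*be/(16*de*de)
    return r1, r2, r3, r4

def scalar(al, be, ga, de, p, q):
    # Scalar curvature formula displayed above
    return (1.5*(1/al + 1/ga + 1/de)
            - 0.25*(al/(ga*de) + ga/(al*de) + de/(al*ga))
            - 0.25*q*q*be/al**2
            - (3*p+q)**2*be/(16*ga*ga) - (3*p-q)**2*be/(16*de*de))

r1, r2, r3, r4 = ricci_coeffs(1.2, 0.8, 0.9, 1.1, 1, 5)
print(abs(2*r1 + r2 + 2*r3 + 2*r4 - scalar(1.2, 0.8, 0.9, 1.1, 1, 5)) < 1e-12)
```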
The volume of $(N^{pq0}, g(\alpha, \beta, \gamma, \delta))$, denoted by
${\rm Vol}(g(\alpha, \beta, \gamma, \delta))$, is equal to ${\rm Vol}(Q)\alpha\beta^{\frac{1}{2}}\gamma\delta,$
where ${\rm Vol}(Q)$ denotes the volume of the space $N^{pq0}$ with the metric induced by $Q$. Then the normalized total scalar curvature of a metric $g(\alpha, \beta, \gamma, \delta)$ in $(\ref{InvariantMetric})$ is given by
\begin{equation*}
\begin{aligned}
\widetilde{\bf S}(g(\alpha, \beta, \gamma, \delta))
&=({\rm Vol}(g(\alpha, \beta, \gamma, \delta)))^{\frac{2}{7}}s_{g(\alpha, \beta, \gamma, \delta)}\\
&=({\rm Vol}(Q))^{\frac{2}{7}}(\alpha\beta^{\frac{1}{2}}\gamma\delta)^{\frac{2}{7}}\bigg[\frac{3}{2}\left(\frac{1}{\alpha}+\frac{1}{\gamma}+\frac{1}{\delta}\right)-\frac{1}{4}\left(\frac{\alpha}{\gamma\delta}
+\frac{\gamma}{\alpha\delta}+\frac{\delta}{\alpha\gamma}\right)\\
&\quad -\frac{1}{4}q^{2}\frac{\beta}{\alpha^{2}}
-\frac{1}{16}(3p+q)^{2}\frac{\beta}{\gamma^{2}}-\frac{1}{16}(3p-q)^{2}\frac{\beta}{\delta^{2}}\bigg].
\end{aligned}
\end{equation*}
By straightforward calculations, one has the following partial derivatives
\begin{equation*}
\begin{split}
\frac{\partial}{\partial\gamma}\widetilde{\bf S}(g(\alpha, \beta, \gamma, \delta)) &= {\rm Vol}(Q)^{\frac{2}{7}}\frac{1}{56}
(\alpha\beta^{\frac{1}{2}}\gamma\delta)^{\frac{2}{7}}\frac{1}{\alpha^{3}\gamma^{3}\delta^{3}}F_{3}(\alpha, \beta, \gamma, \delta),\\
\frac{\partial}{\partial\delta}\widetilde{\bf S}(g(\alpha, \beta, \gamma, \delta)) &= {\rm Vol}(Q)^{\frac{2}{7}}\frac{1}{56}
(\alpha\beta^{\frac{1}{2}}\gamma\delta)^{\frac{2}{7}}\frac{1}{\alpha^{3}\gamma^{3}\delta^{3}}F_{4}(\alpha, \beta, \gamma, \delta),
\end{split}
\end{equation*}
where
\begin{equation}\label{F3}
\begin{split}
F_{3}(\alpha, \beta, \gamma, \delta)=
&-60\alpha^{3}\gamma\delta^{3}+24(\alpha^{2}\delta^{3}+\alpha^{3}\delta^{2})\gamma^{2}+10\alpha^{4}\gamma\delta^{2}\\
&-18\alpha^{2}\gamma^{3}\delta^{2}+10\alpha^{2}\gamma\delta^{4}-4q^{2}\alpha\beta\gamma^{2}\delta^{3}\\
&+6(3p+q)^{2}\alpha^{3}\beta\delta^{3}-(3p-q)^{2}\alpha^{3}\beta\gamma^{2}\delta,
\end{split}
\end{equation}
\begin{equation}\label{F4}
\begin{split}
F_{4}(\alpha, \beta, \gamma, \delta)=
&-60\alpha^{3}\gamma^{3}\delta+24(\alpha^{2}\gamma^{3}+\alpha^{3}\gamma^{2})\delta^{2}+10\alpha^{4}\gamma^{2}\delta\\
&+10\alpha^{2}\gamma^{4}\delta-18\alpha^{2}\gamma^{2}\delta^{3}-4q^{2}\alpha\beta\gamma^{3}\delta^{2}\\
&-(3p+q)^{2}\alpha^{3}\beta\gamma\delta^{2}+6(3p-q)^{2}\alpha^{3}\beta\gamma^{3}.
\end{split}
\end{equation}
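The closed-form derivatives above are easy to misstate, so a finite-difference spot check is worthwhile (illustrative only; we set ${\rm Vol}(Q)=1$, which only rescales both sides by the same constant):

```python
def s(al, be, ga, de, p, q):
    # scalar curvature of g(alpha, beta, gamma, delta) on N^{pq0}
    return (1.5*(1/al + 1/ga + 1/de)
            - 0.25*(al/(ga*de) + ga/(al*de) + de/(al*ga))
            - 0.25*q*q*be/al**2
            - (3*p+q)**2*be/(16*ga*ga) - (3*p-q)**2*be/(16*de*de))

def S(al, be, ga, de, p, q):
    # normalized total scalar curvature with Vol(Q) = 1
    return (al * be**0.5 * ga * de)**(2/7) * s(al, be, ga, de, p, q)

def F3(al, be, ga, de, p, q):
    return (-60*al**3*ga*de**3 + 24*(al**2*de**3 + al**3*de**2)*ga**2
            + 10*al**4*ga*de**2 - 18*al**2*ga**3*de**2 + 10*al**2*ga*de**4
            - 4*q*q*al*be*ga**2*de**3 + 6*(3*p+q)**2*al**3*be*de**3
            - (3*p-q)**2*al**3*be*ga**2*de)

def F4(al, be, ga, de, p, q):
    return (-60*al**3*ga**3*de + 24*(al**2*ga**3 + al**3*ga**2)*de**2
            + 10*al**4*ga**2*de + 10*al**2*ga**4*de - 18*al**2*ga**2*de**3
            - 4*q*q*al*be*ga**3*de**2 - (3*p+q)**2*al**3*be*ga*de**2
            + 6*(3*p-q)**2*al**3*be*ga**3)

al, be, ga, de, p, q, h = 1.1, 0.7, 0.9, 1.3, 1, 5, 1e-6
pref = (al * be**0.5 * ga * de)**(2/7) / (56 * al**3 * ga**3 * de**3)
dS_dga = (S(al, be, ga+h, de, p, q) - S(al, be, ga-h, de, p, q)) / (2*h)
dS_dde = (S(al, be, ga, de+h, p, q) - S(al, be, ga, de-h, p, q)) / (2*h)
print(abs(dS_dga - pref*F3(al, be, ga, de, p, q)) < 1e-7,
      abs(dS_dde - pref*F4(al, be, ga, de, p, q)) < 1e-7)
```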
Let $g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})$ be a fixed but arbitrary invariant Einstein metric as in \cite{CR84} and \cite{PP84}. We investigate the stability of this invariant Einstein metric by varying the components of the metric in $\mathfrak{m}_{3}\oplus\mathfrak{m}_{4}$. This keeps the variations within the class of homogeneous metrics.
Accordingly, consider the function
\begin{equation}
\widetilde{S}(t):= \widetilde{\bf S}(g(\alpha_{0}, \beta_{0}, \gamma_{0}+At, \delta_{0}+Bt)), \qquad t\geq0,
\end{equation}
where $A$ and $B$ are parameters.
\begin{prop}\label{CR-Instability}
There exist parameters $A$ and $B$ {\rm (}depending on $\alpha_{0}, \beta_{0}, \gamma_{0},$ and $\delta_{0}${\rm )} such that
\begin{equation}
\frac{d^{2}}{dt^{2}}\widetilde{S}(0)>0.
\end{equation}
\end{prop}
\begin{proof}
Since
\begin{equation*}
\begin{aligned}
\frac{d}{dt}\widetilde{S}(t)
&=A\, \frac{\partial\widetilde{\bf S}}{\partial\gamma}(\alpha_{0}, \beta_{0}, \gamma_{0}+At, \delta_{0}+Bt)+B\, \frac{\partial\widetilde{\bf S}}{\partial\delta}(\alpha_{0}, \beta_{0}, \gamma_{0}+At, \delta_{0}+Bt)\\
&=\frac{({\rm Vol}(Q))^{\frac{2}{7}}(\alpha_{0}\beta_{0}^{\frac{1}{2}}(\gamma_{0}+At)(\delta_{0}+Bt))^{\frac{2}{7}}}{56\alpha_{0}^{3}(\gamma_{0}+At)^{3}(\delta_{0}+Bt)^{3}}
\,[A F_{3}(\alpha_{0}, \beta_{0}, \gamma_{0}+At, \delta_{0}+Bt)\\
&\quad +B F_{4}(\alpha_{0}, \beta_{0}, \gamma_{0}+At, \delta_{0}+Bt)],
\end{aligned}
\end{equation*}
and $F_{3}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})=F_{4}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})=0$, we have
\begin{equation*}
\begin{aligned}
\frac{d^{2}}{dt^{2}}\widetilde{S}(0)=
&\,\frac{({\rm Vol}(Q))^{\frac{2}{7}}(\alpha_{0}\beta_{0}^{\frac{1}{2}}\gamma_{0}\delta_{0})^{\frac{2}{7}}}{56\alpha_{0}^{3}\gamma_{0}^{3}\delta_{0}^{3}}
\bigg[A^{2}\frac{\partial F_{3}}{\partial \gamma}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})\\
& +AB\left(\frac{\partial F_{3}}{\partial \delta}+\frac{\partial F_{4}}{\partial \gamma}\right)(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})
+B^{2}\frac{\partial F_{4}}{\partial \delta}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})\bigg].
\end{aligned}
\end{equation*}
Thus we only need to show that there exist $A$ and $B$ such that
$$A^{2}\frac{\partial F_{3}}{\partial \gamma}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})+AB\left(\frac{\partial F_{3}}{\partial \delta}+\frac{\partial F_{4}}{\partial \gamma}\right)(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})+B^{2}\frac{\partial F_{4}}{\partial \delta}(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})>0.$$
For this it suffices to show that
\begin{equation}
\left[\left(\frac{\partial F_{3}}{\partial \delta}+\frac{\partial F_{4}}{\partial \gamma}\right)^{2}-4\left(\frac{\partial F_{3}}{\partial \gamma}\right)\left(\frac{\partial F_{4}}{\partial \delta}\right)\right](\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})>0.
\end{equation}
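The reduction used here is the elementary fact that a binary quadratic form $PA^{2}+QAB+RB^{2}$ with positive discriminant $Q^{2}-4PR$ is indefinite and hence takes positive values. A sketch of how a positive direction $(A, B)$ can be produced (the function name is ours):

```python
import random

def positive_direction(P, Q, R):
    """Return (A, B) with P*A^2 + Q*A*B + R*B^2 > 0, given Q^2 - 4*P*R > 0."""
    assert Q*Q - 4*P*R > 0
    if P > 0:
        return 1.0, 0.0
    if R > 0:
        return 0.0, 1.0
    if P != 0:                      # P < 0, R <= 0: maximize over A at fixed B = 1
        return -Q / (2*P), 1.0
    return (1 + abs(R)) / Q, 1.0    # P = 0, so Q != 0

random.seed(0)
ok = True
for _ in range(1000):
    P, Q, R = (random.uniform(-5, 5) for _ in range(3))
    if Q*Q - 4*P*R > 0:
        A, B = positive_direction(P, Q, R)
        ok = ok and P*A*A + Q*A*B + R*B*B > 0
print(ok)  # prints: True
```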
For the solution $(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})$, from the equations in $(\ref{ChangeVariables})$, one can easily deduce that
\begin{equation}\label{ChangeVariables1}
q^{2}\beta_{0}=\frac{(av+bu)^{2}\alpha^{3}_{0}}{2\gamma_{0}\delta_{0}}, \quad (3p+q)^{2}\beta_{0}=\frac{2u^{2}\alpha_{0}\gamma_{0}}{\delta_{0}}, \quad (3p-q)^{2}\beta_{0}=\frac{2v^{2}\alpha_{0}\delta_{0}}{\gamma_{0}}.
\end{equation}
Then by substituting the first two equations in $(\ref{ChangeVariables})$, the last equation in $(\ref{EinsteinCondition})$, and equations in $(\ref{ChangeVariables1})$ into the partial derivatives of the functions $F_{3}$ and $F_{4}$ defined in $(\ref{F3})$ and $(\ref{F4})$, we obtain
\begin{align*}
&\quad\left[\left(\frac{\partial F_{3}}{\partial \delta}+\frac{\partial F_{4}}{\partial \gamma}\right)^{2}-4\left(\frac{\partial F_{3}}{\partial \gamma}\right)\left(\frac{\partial F_{4}}{\partial \delta}\right)\right](\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})\\
&=\alpha^{8}_{0}\gamma^{2}_{0}\delta^{2}_{0}[(-132b+144ba-132a+40+4b^{2}+4a^{2}-12\lambda+46u^{2}+46v^{2})^{2}\\
&\quad-4(-60a+48ab+48b+10-54b^{2}+10a^{2}-4\lambda+4u^{2})\cdot\\
&\quad(-60b+48ab+48a+10+10b^{2}-54a^{2}-4\lambda+4v^{2})]\\
&=32\alpha^{8}_{0}\gamma^{2}_{0}\delta^{2}_{0}(-392c^{4}-273c^{3}+812c^{2}+840c+168)\\
&=32\alpha^{8}_{0}\gamma^{2}_{0}\delta^{2}_{0}f(c)
\end{align*}
where
$$f(c):=-392c^{4}-273c^{3}+812c^{2}+840c+168.$$
In the second last step above, we have used the equations in $(\ref{Solution})$ and $d=\pm\sqrt{1-c^{2}}$. Since for all invariant Einstein metrics in \cite{CR84} and \cite{PP84} the parameter $c\in \big[-1, -\frac{2}{\sqrt{5}}\big]\cup\big[\frac{2}{\sqrt{5}}, 1\big]$, in order to complete the proof, we only need to show that
$f(c)>0$ for such $c$.
By simple calculations, one can see that
$$f^{\prime\prime}(c)<0, \quad \text{for} \ \ -1\leq c\leq -0.85 \ \ \text{or} \ \ 0.85\leq c\leq 1.$$
It follows that
$$f^{\prime}(c)\leq f^{\prime}(-1)=-35<0 \quad \text{for} \ \ -1\leq c\leq -0.85,$$
and
$$f^{\prime}(c)\geq f^{\prime}(1)=77>0 \quad \text{for} \ \ 0.85\leq c\leq 1.$$
Thus
$$f(c)\geq f(-0.85)>3>0 \quad \text{for} \ \ -1\leq c\leq -0.85,$$
and
$$f(c)\geq f(0.85)>1096>0 \quad \text{for} \ \ 0.85\leq c\leq 1.$$
In particular,
$$f(c)=-392c^{4}-273c^{3}+812c^{2}+840c+168>0, \quad \text{for} \ \ -1\leq c\leq -\frac{2}{\sqrt{5}} \ \ \text{or} \ \ \frac{2}{\sqrt{5}}\leq c\leq 1.$$
This completes the proof.
\end{proof}
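\begin{rmk}
For the reader's convenience, we record the elementary derivative computations used in the proof above:
\begin{align*}
f^{\prime}(c)&=-1568c^{3}-819c^{2}+1624c+840,\\
f^{\prime\prime}(c)&=-4704c^{2}-1638c+1624.
\end{align*}
In particular, $f^{\prime\prime}(\pm 0.85)=-4704(0.7225)\mp 1638(0.85)+1624<0$, and the endpoint values are $f(-1)=21$ and $f(1)=1155$.
\end{rmk}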
Next we shall show that the invariant variations used in Proposition \ref{CR-Instability} above are actually divergence-free. Then
the $\widetilde{\bf S}$-linear instability of the invariant Einstein metrics follows immediately from Proposition $\ref{CR-Instability}$.
\begin{lem} \label{divergencefree}
Let $(G/K, g)$ be a $G$-homogeneous Riemannian manifold of dimension $n$ with $G$ compact and $K \subset G$ closed. Let $Q$ be a fixed
bi-invariant metric on $G$ and use it to write $\mathfrak{g} = \mathfrak{k} \perp \mathfrak{m}$. Suppose that
\begin{equation} \label{m-decomposition}
\mathfrak{m} = \mathfrak{m}_1 \oplus \cdots \oplus \mathfrak{m}_r
\end{equation}
is a $Q$-orthogonal decomposition of $\mathfrak{m}$ into ${\rm Ad}(K)$-invariant summands.
Finally suppose that $g$ and the $G$-invariant symmetric $2$-tensor $h$ are given by
\begin{eqnarray*}
g & = & a_1 Q| \mathfrak{m}_1 \oplus \cdots \oplus a_r Q | \mathfrak{m}_r, \,\,\, a_i > 0 \\
h & = & c_1 Q| \mathfrak{m}_1 \oplus \cdots \oplus c_r Q | \mathfrak{m}_r, \,\,\, c_i \in \mathbb{R}.
\end{eqnarray*}
Then $\delta_g h = 0$.
\end{lem}
\begin{proof} We identify $\mathfrak{m}$ with the tangent space of $G/K$ at $[K]$ as usual.
Let $\{X_{1}, \cdots, X_{n}\}\subset\mathfrak{m}$ be an orthonormal basis with respect to $g$, and extend them to Killing vector fields
in a neighborhood of the base point $[K]$. Then we have at $[K]$
\begin{align*}
(\delta_{g}h)(X_{j}) & =-\sum^{n}_{i=1}(\nabla_{X_{i}}h)(X_{i}, X_{j})\\
& =-\sum^{n}_{i=1}\big(X_{i}(h(X_{i}, X_{j}))-h(\nabla_{X_{i}}X_{i}, X_{j})-h(X_{i}, \nabla_{X_{i}}X_{j})\big)\\
& =-\sum^{n}_{i=1}\big(h(X_{i}, [X_{i}, X_{j}])-h(X_{i}, \nabla_{X_{i}}X_{j})-h(\nabla_{X_{i}}X_{i}, X_{j})\big)\\
& =\sum^{n}_{i=1}\big(h(X_{i}, \nabla_{X_{j}}X_{i})+h(\nabla_{X_{i}}X_{i}, X_{j})\big)\\
& =\sum^{n}_{i, k=1}g(\nabla_{X_{j}}X_{i}, X_{k})h(X_{i}, X_{k})+
\sum^{n}_{i, k=1}g(\nabla_{X_{i}}X_{i}, X_{k}) h(X_k, X_j)
\end{align*}
where in the third equality above we used the $G$-invariance of $h$.
We next use the fact that covariant derivatives involving Killing vector fields on a homogeneous
Riemannian manifold can be expressed entirely in terms of Lie brackets (see e.g. Lemma 7.27 in \cite{Bes87}).
After some simplification and replacing brackets for vector fields with the negative of the corresponding Lie brackets in
$\mathfrak{g}$, we obtain
\begin{equation} \label{div-formula}
(\delta_g h)(X_j) = \sum_{i, k} h(X_j, X_k) g([X_k, X_i], X_i) - \sum_{i} h([X_j, X_i]_{\mathfrak{m}}, X_i)
\end{equation}
where $[\cdot, \cdot]_{\mathfrak{m}}$ denotes the $Q$-orthogonal projection of the bracket onto $\mathfrak{m}$.
Let $\{ e_q^{(\ell)}, 1 \leq \ell \leq r, \, 1 \leq q \leq d_{\ell} := \dim \mathfrak{m}_{\ell} \}$ be a $Q$-orthonormal
basis of $\mathfrak{m}$ adapted to the decomposition (\ref{m-decomposition}). The corresponding adapted $g$-orthonormal
basis is then given by $ X_q^{(\ell)} := \frac{1}{\sqrt{a_{\ell}}} e_q^{(\ell)}$. We examine separately the two
sums in (\ref{div-formula}). Let $X_j = X_q^{(\ell)} \in \mathfrak{m}_{\ell}$.
The first sum is then equal to
$$ \frac{c_{\ell}}{a_{\ell}} \sum_{i=1}^r \sum_{\alpha = 1}^{d_i} \, a_i \,
Q\left(\left[\frac{e_q^{(\ell)}}{\sqrt{a_{\ell}}}, \frac{e_{\alpha}^{(i)}}{\sqrt{a_i}}\right], \frac{e_{\alpha}^{(i)}}{\sqrt{a_i}} \right)
= \frac{c_{\ell}}{a_{\ell}^{\frac{3}{2}}} \sum_{i=1}^r \sum_{\alpha = 1}^{d_i} \,
Q\left(\left[e_q^{(\ell)}, e_{\alpha}^{(i)} \right], e_{\alpha}^{(i)} \right) = 0 $$
since $Q$ is bi-invariant, so that $Q([X, Y], Y)=-Q(Y, [X, Y])$ and each summand $Q([e_q^{(\ell)}, e_{\alpha}^{(i)}], e_{\alpha}^{(i)})$ vanishes.
Similarly, the second sum is equal to
$$ \frac{1}{\sqrt{a_{\ell}}} \sum_{i=1}^r \sum_{\alpha = 1}^{d_i} \, c_i \,
Q\left(\left[e_q^{(\ell)}, \frac{e_{\alpha}^{(i)}}{\sqrt{a_i}}\right], \frac{e_{\alpha}^{(i)}}{\sqrt{a_i}} \right)
= \frac{1}{\sqrt{a_{\ell}}} \,\sum_{i=1}^r \frac{c_i}{a_i} \,\sum_{\alpha = 1}^{d_i} \,
Q\left(\left[e_q^{(\ell)}, e_{\alpha}^{(i)} \right], e_{\alpha}^{(i)} \right) = 0 $$
again by the bi-invariance of $Q$.
\end{proof}
\begin{rmk}
Note that in the above Lemma, the ${\rm Ad}(K)$-invariant summands $\mathfrak{m}_i$ are not assumed to be
irreducible or pairwise inequivalent. This will be important in the next subsection.
\end{rmk}
\begin{prop}\label{CR-linear-instability}
The invariant Einstein metrics on $N^{pq0}$ where $(p, q) \neq (1, 3)$ are $\widetilde{\bf S}$-linearly unstable.
\end{prop}
\begin{proof}
Let $g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})$ be a fixed but arbitrary invariant Einstein metric on $N^{pq0}\neq N^{130}$.
Up to isometry any such metric is diagonal with respect to the decomposition (\ref{m-decomposition}).
Moreover, with the parameters $A$ and $B$ obtained in Proposition $\ref{CR-Instability}$, $A\cdot(Q\vert_{\mathfrak{m}_{3}})\oplus B\cdot(Q\vert_{\mathfrak{m}_{4}})$ is an Ad$_H$-invariant symmetric bilinear form on $\mathfrak{m}$, and so it induces an ${\rm SU}(3)$-invariant
symmetric 2-tensor $h$ on $N^{pq0}$.
By Proposition $\ref{CR-Instability}$, the second variation of $\widetilde{\bf S}$ at $g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})$
is strictly positive along $h$, i.e., $\widetilde{\bf S}^{\prime\prime}_{g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})}(h, h)>0$. If we
replace $h$ by its trace-free part given by $h_{0}=h-\frac{2}{7}\big(\frac{A}{\gamma_{0}}+\frac{B}{\delta_{0}}\big)g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})$, since the normalized total scalar curvature functional is homothety invariant, we have
$$\widetilde{\bf S}^{\prime\prime}_{g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})}(h_{0}, h_{0})=\widetilde{\bf S}^{\prime\prime}_{g(\alpha_{0}, \beta_{0}, \gamma_{0}, \delta_{0})}(h, h)>0.$$
But Lemma $\ref{divergencefree}$ implies that $h$ is divergence-free, and so $h_{0}$ is a TT-tensor. Thus $h_{0}$ is a $\widetilde{\bf S}$-linearly unstable direction.
\end{proof}
\subsection{Instability of the invariant Einstein metric on $N^{130}$ in \cite{Nik04}} \label{AWII}
In order to complete the proof of Theorem $\ref{Aloff-Wallach}$, we only need to check the $\widetilde{\bf S}$-linear instability of
the invariant Einstein metric on $N^{130}$ found by Nikonorov in \cite{Nik04}.
For $N^{130}$ the irreducible sub-representations $\mathfrak{m}_{1}$ and $\mathfrak{m}_{4}$ in $(\ref{LieAlgDec})$ are isomorphic to each other. Thus there are ${\rm SU}(3)$-invariant metrics on $N^{130}$ that are not block diagonal with respect to the decomposition of the isotropy representation in $(\ref{LieAlgDec})$. In \cite{Nik04}, Nikonorov showed that the two block diagonal (with respect to the decomposition in $(\ref{LieAlgDec})$) ${\rm SU}(3)$-invariant Einstein metrics on $N^{130}$ obtained in \cite{CR84} and \cite{PP84} are isometric to each other. He also found a geometrically distinct ${\rm SU}(3)$-invariant Einstein metric on $N^{130}$ that is not block diagonal with respect to the decomposition in $(\ref{LieAlgDec})$.
Since $\mathfrak{m}_1$ and $\mathfrak{m}_4$ are equivalent as $H_{1,3}$-representations, there is a whole circle's worth of different ways of decomposing
$\mathfrak{m}_1 \perp \mathfrak{m}_4$ as $ W_1 \perp W_2$, where $W_i$ are subspaces of $\mathfrak{m}_1 \perp \mathfrak{m}_4$ which are isomorphic as $H_{1,3}$-representations
to $\mathfrak{m}_1 \approx \mathfrak{m}_4$. Thus Nikonorov considered the $1$-parameter family of ${\rm Ad}(H_{1,3})$-invariant
irreducible decompositions of the isotropy representation on $N^{130}$ given by
\begin{equation}\label{LieAlgDecInNik}
\mathfrak{m}=\mathfrak{p}_{1}\oplus\mathfrak{p}_{2}\oplus\mathfrak{p}_{3}\oplus\mathfrak{p}_{4}
\end{equation}
where $\mathfrak{p}_{1}={\rm span}(Y_{1}, Y_{2}), \mathfrak{p}_{2}={\rm span}(Y_{3}, Y_{4}), \mathfrak{p}_{3}=\mathfrak{m}_{3}={\rm span}(X_{4}, X_{5}), \mathfrak{p}_{4}=\mathfrak{m}_{2}={\rm span}(Z)$,
$$Y_{1}=-\cos(\alpha)(2X_{2})-\sin(\alpha)(2X_{7}), \qquad Y_{2}=-\cos(\alpha)(2X_{1})-\sin(\alpha)(2X_{6}),$$
$$Y_{3}=\sin(\alpha)(2X_{2})-\cos(\alpha)(2X_{7}), \qquad Y_{4}=\sin(\alpha)(2X_{1})-\cos(\alpha)(2X_{6}),$$
$\alpha\in\mathbb{R}$, and $X_{1}, X_{2}, X_{4}, X_{5}, X_{6}, X_{7}$, and $Z$ are as given in $(\ref{LieAlgebraBasis})$.
He showed that for a suitably chosen value $\alpha\in\mathbb{R}$, this additional invariant Einstein metric is diagonal with respect to the corresponding decomposition in $(\ref{LieAlgDecInNik})$. Therefore, we will consider in the following the invariant metrics on $N^{130}$ of the form
\begin{equation}
g(x_{1}, x_{2}, x_{3}, x_{4})=x_{1}Q^{\prime}\vert_{\mathfrak{p}_{1}}\oplus x_{2}Q^{\prime}\vert_{\mathfrak{p}_{2}}\oplus x_{3}Q^{\prime}\vert_{\mathfrak{p}_{3}}\oplus
x_{4}Q^{\prime}\vert_{\mathfrak{p}_{4}},
\end{equation}
where $Q^{\prime}$ is the multiple of the Killing form of $\mathfrak{su}(3)$ given by $Q^{\prime}(X, Y)=-\frac{1}{2}{\rm tr}(XY)$ for $X, Y\in \mathfrak{su}(3)$.
For this family of metrics the scalar curvature formula is given in \cite{Nik04} as
\begin{equation}
\begin{aligned}
s_{g(x_{1}, x_{2}, x_{3}, x_{4})}=
&\frac{12}{x_{1}}+\frac{12}{x_{2}}+\frac{12}{x_{3}}+\frac{6a}{x_{4}}-\frac{3-3a}{2}\bigg(\frac{x_{4}}{x^{2}_{1}}+\frac{x_{4}}{x^{2}_{2}}\bigg)\\
&-2\bigg(\frac{x_{1}}{x_{2}x_{3}}+\frac{x_{2}}{x_{1}x_{3}}+\frac{x_{3}}{x_{1}x_{2}}\bigg)
-3a\bigg(\frac{x_{1}}{x_{2}x_{4}}+\frac{x_{2}}{x_{1}x_{4}}+\frac{x_{4}}{x_{1}x_{2}}\bigg),
\end{aligned}
\end{equation}
where $a=\sin^{2}(2\alpha)$.
The Einstein equations were then considered in three different cases: $a=0, a=1,$ and $0<a<1$. When $a=0$, the Einstein metrics obtained in \cite{CR84} and \cite{PP84} were recovered. When $a=1$, two solutions of the Einstein equations were found. Approximate values of these solutions are
\begin{equation}\label{Nik-solution1}
(x_{1}, x_{2}, x_{3}, x_{4})\approx(5.67352, 1.09220, 5.50695, 5.72906),
\end{equation}
and
\begin{equation}\label{Nik-solution2}
(x_{1}, x_{2}, x_{3}, x_{4})\approx(1.09220, 5.67352, 5.50695, 5.72906).
\end{equation}
However, these two solutions give rise to isometric invariant Einstein metrics. When $0<a<1$, no new Einstein metrics were obtained.
We will now show that the new Einstein metric obtained in the $a=1$ case is unstable.
The normalized total scalar curvature of $g(x_{1}, x_{2}, x_{3}, x_{4})$ is
\begin{align*}
\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))=
& \,{\rm Vol}(Q^{\prime})^{\frac{2}{7}}(x^{2}_{1}x^{2}_{2}x^{2}_{3}x_{4})^{\frac{1}{7}}
\bigg[\frac{12}{x_{1}}+\frac{12}{x_{2}}+\frac{12}{x_{3}}+\frac{6}{x_{4}}\\
&-2\bigg(\frac{x_{1}}{x_{2}x_{3}}+\frac{x_{2}}{x_{1}x_{3}}+\frac{x_{3}}{x_{1}x_{2}}\bigg)
-3\bigg(\frac{x_{1}}{x_{2}x_{4}}+\frac{x_{2}}{x_{1}x_{4}}+\frac{x_{4}}{x_{1}x_{2}}\bigg)\bigg],
\end{align*}
where ${\rm Vol}(Q^{\prime})$ is the volume of $N^{130}$ with respect to the metric induced by $Q^{\prime}\vert_{\mathfrak{m}}$.
\begin{prop}
At the solution given by $(\ref{Nik-solution1})$, we have
\begin{equation}
\frac{\partial^{2}}{\partial x^{2}_{2}}\,\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))>0.
\end{equation}
\end{prop}
\begin{proof}
One easily computes that
\begin{equation}\label{FristDerivativeNik}
\begin{aligned}
\frac{\partial}{\partial x_{2}}\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))=
&{\rm Vol}(Q^{\prime})^{\frac{2}{7}}(x^{2}_{1}x^{2}_{2}x^{2}_{3}x_{4})^{\frac{1}{7}}\frac{2}{7x_{2}}\bigg[\frac{12}{x_{1}}-\frac{30}{x_{2}}+\frac{12}{x_{3}}
+\frac{6}{x_{4}}+5\frac{x_{1}}{x_{2}x_{3}}\\
&-9\frac{x_{2}}{x_{1}x_{3}}+5\frac{x_{3}}{x_{1}x_{2}}+\frac{15}{2}\frac{x_{1}}{x_{2}x_{4}}
-\frac{27}{2}\frac{x_{2}}{x_{1}x_{4}}+\frac{15}{2}\frac{x_{4}}{x_{1}x_{2}}\bigg].
\end{aligned}
\end{equation}
At the solution given by $(\ref{Nik-solution1})$, the second order partial derivative with respect to $x_{2}$ is
\begin{equation}\label{SecondDerivativenNik}
\begin{aligned}
\frac{\partial^{2}}{\partial x^{2}_{2}}\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))=
& {\rm Vol}(Q^{\prime})^{\frac{2}{7}}(x^{2}_{1}x^{2}_{2}x^{2}_{3}x_{4})^{\frac{1}{7}}\frac{2}{7x_{2}}\bigg[\frac{30}{x^{2}_{2}}-5\frac{x_{1}}{x^{2}_{2}x_{3}}
-9\frac{1}{x_{1}x_{3}}\\
& -5\frac{x_{3}}{x_{1}x^{2}_{2}}-\frac{15}{2}\frac{x_{1}}{x^{2}_{2}x_{4}}
-\frac{27}{2}\frac{1}{x_{1}x_{4}}-\frac{15}{2}\frac{x_{4}}{x_{1}x^{2}_{2}}\bigg].
\end{aligned}
\end{equation}
Because the first order partial derivative in $(\ref{FristDerivativeNik})$ vanishes at this solution, we have
\begin{equation}
-\bigg(5\frac{x_{1}}{x_{3}}+5\frac{x_{3}}{x_{1}}+\frac{15}{2}\frac{x_{1}}{x_{4}}+\frac{15}{2}\frac{x_{4}}{x_{1}}\bigg)
=12\frac{x_{2}}{x_{1}}-30+12\frac{x_{2}}{x_{3}}+6\frac{x_{2}}{x_{4}}-9\frac{x^{2}_{2}}{x_{1}x_{3}}-\frac{27}{2}\frac{x^{2}_{2}}{x_{1}x_{4}}.
\end{equation}
Substituting this into $(\ref{SecondDerivativenNik})$ and factoring $\frac{1}{x^{2}_{2}}$ out, we obtain
\begin{eqnarray*}
\frac{\partial^{2}}{\partial x^{2}_{2}}\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))
&=&{\rm Vol}(Q^{\prime})^{\frac{2}{7}}(x^{2}_{1}x^{2}_{2}x^{2}_{3}x_{4})^{\frac{1}{7}}\frac{2}{7x^{3}_{2}}
\bigg[12\frac{x_{2}}{x_{1}}\\
& &+\bigg(12-18\frac{x_{2}}{x_{1}}\bigg)\frac{x_{2}}{x_{3}}+\bigg(6-27\frac{x_{2}}{x_{1}}\bigg)\frac{x_{2}}{x_{4}}\bigg].
\end{eqnarray*}
This is strictly positive for $(x_{1}, x_{2}, x_{3}, x_{4})\approx(5.67352, 1.09220, 5.50695, 5.72906)$, since $x_{1} > 5.5, x_{2}<1.1,$ and therefore $\frac{x_{2}}{x_{1}}<\frac{1}{5}$.
\end{proof}
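As a numerical cross-check (values rounded to four decimal places), at the solution $(\ref{Nik-solution1})$ one has
\begin{equation*}
\frac{x_{2}}{x_{1}}\approx 0.1925, \quad \frac{x_{2}}{x_{3}}\approx 0.1983, \quad \frac{x_{2}}{x_{4}}\approx 0.1906,
\end{equation*}
so the bracket in the last display of the proof evaluates to approximately $2.310+1.693+0.153\approx 4.16>0$.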
For the solution $(x_{1}, x_{2}, x_{3}, x_{4})\approx(1.09220, 5.67352, 5.50695, 5.72906)$, similarly one can show that $\frac{\partial^{2}}{\partial x^{2}_{1}}\widetilde{\bf S}(g(x_{1}, x_{2}, x_{3}, x_{4}))$ is strictly positive.
As in Proposition $\ref{CR-linear-instability}$, this implies that the invariant Einstein metric obtained in \cite{Nik04} is $\widetilde{\bf S}$-linearly unstable. Together with Proposition $\ref{CR-linear-instability}$ this completes the proof of Theorem $\ref{Aloff-Wallach}$.
\section{\bf Instability of Einstein metrics on the Stiefel manifolds} \label{Stiefel}
The Stiefel manifold $V_{2}(\mathbb{R}^{n+1})= \frac{{\rm SO}(n+1)}{{\rm SO}(n-1)}$ $(n\geq 3)$ may be viewed as a
principal circle bundle over the real oriented Grassmannian $\frac{{\rm SO}(n+1)}{{\rm SO}(n-1){\rm SO}(2)}$, which is an
irreducible Hermitian symmetric space (with second Betti number $b_2 = 1$). Sagle \cite{Sa70} constructed an
invariant Einstein metric on $V_{2}(\mathbb{R}^{n+1})$ which is now known to be unique (up to isometry and homothety) among all
${\rm SO}(n+1)$-invariant metrics \cite{Ker98}, except when $n=3$. Its relevance for us is that it can be viewed as the regular
Sasaki Einstein metric determined by the base considered as a Fano Einstein manifold with the symmetric
metric scaled so that its scalar curvature equals $2n+2$. In this section, we will show that this Einstein metric
is $\widetilde{\bf S}$-linearly unstable, and therefore $\nu$-linearly unstable. Additionally, in Example
$\ref{hyperquadric}$ in \S $\ref{instab-conf}$, we will show that this Einstein metric is also $\nu$-linearly
unstable along conformal variation directions. When $n=3$, ${\rm SO}(4)/{\rm SO}(2)$ is diffeomorphic to $S^2 \times S^3$,
so the product metric is a second Einstein metric which is not isometric to the Sasaki Einstein metric. The product
metric is of course also $\widetilde{\bf S}$-linearly unstable.
We will follow the notation in \cite{Ker98}. Embedding ${\rm SO}(n-1)$ into ${\rm SO}(n+1)$ as
${\rm SO}(n-1)\cong\begin{bmatrix} Id_{2} & 0 \\ 0 & {\rm SO}(n-1)\end{bmatrix}\subset {\rm SO}(n+1)$ gives rise to the Stiefel manifold
$V_{2}(\mathbb{R}^{n+1})$ as a quotient space $\frac{SO(n+1)}{SO(n-1)}$. On the Lie algebra level, the embedding is $\mathfrak{so}(n-1)\cong\begin{bmatrix} 0 & 0 \\ 0 & \mathfrak{so}(n-1)\end{bmatrix}\subset \mathfrak{so}(n+1)$.
We then choose the ${\rm Ad}_{{\rm SO}(n-1)}$-invariant complement $\mathfrak{p}=\mathfrak{so}(n-1)^{\perp}$
(with respect to the Killing form). The isotropy representation of ${\rm SO}(n-1)$ on $\mathfrak{p}$ can be decomposed into irreducible sub-representations as $\mathfrak{p}=\mathfrak{p}_{0}\oplus\mathfrak{p}_{1}\oplus\mathfrak{p}_{2}$, where $\mathfrak{p}_{0}={\rm span}\{E_{12}\}$, $\mathfrak{p}_{j}={\rm span}\{E_{j, 2+i} \vert 1\leq i\leq n-1\}$ for $j=1, 2$,
and $E_{ij}$ denotes the matrix with 1 in the $(i,j)$-entry, $-1$ in the $(j,i)$-entry, and zeros everywhere else.
Let $Q^{\prime}$ be the multiple of the Killing form of $\mathfrak{so}(n+1)$ given by
$Q^{\prime}(X, Y)=-\frac{1}{2}{\rm tr}(XY)$ for $X, Y\in \mathfrak{so}(n+1)$, and choose
$Q^{\prime}|_{\mathfrak{p}}$ as the background metric. Then we consider ${\rm SO}(n+1)$-invariant metrics on
$\frac{{\rm SO}(n+1)}{{\rm SO}(n-1)}$ induced by
\begin{equation*}
g(x_{0}, x_{1}, x_{2})=x_{0}Q^{\prime}|_{\mathfrak{p}_{0}}\oplus x_{1}Q^{\prime}|_{\mathfrak{p}_{1}}\oplus x_{2}Q^{\prime}|_{\mathfrak{p}_{2}}
\end{equation*}
for $x_{0}, x_{1}, x_{2}>0$.
Recall the scalar curvature formula from section 4 of \cite{Ker98}:
\begin{equation*}
s_{g(x_{0}, x_{1}, x_{2})}=(n-1)\left(\frac{n-1}{x_{1}}+\frac{n-1}{x_{2}}+\frac{1}{x_{0}}\right)
-\frac{n-1}{2}\left(\frac{x_{1}}{x_{2}x_{0}}+\frac{x_{2}}{x_{1}x_{0}}+\frac{x_{0}}{x_{1}x_{2}}\right).
\end{equation*}
By considering variations of this scalar curvature function, Kerr showed that $x_{1}=x_{2}$ and $x_{0}=\frac{2(n-1)}{n}x_{1}$ give the unique SO$(n+1)$-invariant Einstein metric up to diffeomorphisms and homotheties. In particular, we will consider the Einstein metric with $x_{0}=2(n-1), x_{1}=x_{2}=n$.
Now the normalized total scalar curvature of $g(x_{0}, x_{1}, x_{2})$ is
\begin{eqnarray*}
\widetilde{\bf S}(g(x_{0}, x_{1}, x_{2}))
&=& {\rm Vol}(Q^{\prime})^{\frac{2}{2n-1}}(x_{0}x^{n-1}_{1}x^{n-1}_{2})^{\frac{1}{2n-1}}
\bigg[(n-1)\left(\frac{n-1}{x_{1}}+\frac{n-1}{x_{2}}+\frac{1}{x_{0}}\right)\\
& & -\frac{n-1}{2}\left(\frac{x_{1}}{x_{2}x_{0}}+\frac{x_{2}}{x_{1}x_{0}}+\frac{x_{0}}{x_{1}x_{2}}\right)\bigg],
\end{eqnarray*}
where ${\rm Vol}(Q^{\prime})$ is the volume of the Stiefel manifold with the metric induced by $Q^{\prime}$. Its first partial derivative with respect to $x_{1}$ is
\begin{eqnarray*}
\frac{\partial}{\partial x_{1}}\widetilde{\bf S}(g(x_{0}, x_{1}, x_{2}))
&=&{\rm Vol}(Q^{\prime})^{\frac{2}{2n-1}}(x_{0}x^{n-1}_{1}x^{n-1}_{2})^{\frac{1}{2n-1}}\frac{(n-1)}{(2n-1)x_{1}}
\bigg[-\frac{n(n-1)}{x_{1}}\\
& &+(n-1)\left(\frac{n-1}{x_{2}}+\frac{1}{x_{0}}\right)-\frac{(3n-2)x_{1}}{2x_{2}x_{0}}
+\frac{n}{2}\left(\frac{x_{2}}{x_{1}x_{0}}+\frac{x_{0}}{x_{1}x_{2}}\right)\bigg].
\end{eqnarray*}
Then at the Einstein metric with $x_{0}=2(n-1), x_{1}=x_{2}=n$, the second derivative with respect to $x_{1}$ is
\begin{eqnarray*}
& &\frac{\partial^{2}}{\partial x^{2}_{1}}\widetilde{\bf S}(g(x_{0}, x_{1}, x_{2}))|_{(2(n-1), n, n)}\\
&=& {\rm Vol}(Q^{\prime})^{\frac{2}{2n-1}}(x_{0}x^{n-1}_{1}x^{n-1}_{2})^{\frac{1}{2n-1}}\frac{(n-1)}{(2n-1)x_{1}}
\bigg[\frac{n(n-1)}{x^{2}_{1}}-\frac{3n-2}{2x_{2}x_{0}}\\
& &+\frac{n}{2}\left(-\frac{x_{2}}{x^{2}_{1}x_{0}}-\frac{x_{0}}{x^{2}_{1}x_{2}}\right)\bigg]\bigg|_{(x_{0}, x_{1}, x_{2})=(2(n-1), n, n)}\\
&=& {\rm Vol}(Q^{\prime})^{\frac{2}{2n-1}}(2(n-1)n^{2n-2})^{\frac{1}{2n-1}}\frac{(n-1)[(n-3)(2n^2-2n+1)+1]}{2(2n-1)(n-1)n^3}\\
&\geq& {\rm Vol}(Q^{\prime})^{\frac{2}{2n-1}}(2(n-1)n^{2n-2})^{\frac{1}{2n-1}}\frac{(n-1)}{2(2n-1)(n-1)n^3}>0,
\end{eqnarray*}
for $n\geq3$.
As in Proposition $\ref{CR-linear-instability}$, together with Lemma $\ref{divergencefree}$,
this implies the $\widetilde{\bf S}$-linear instability of the invariant Sasaki Einstein metric on
$V_{2}(\mathbb{R}^{n+1})$ with $n\geq3$.
\section{\bf Instability from conformal deformations} \label{instab-conf}
A second source of instability for the $\nu$-functional comes from conformal deformations of the
Einstein metric in question. A sufficient condition for instability is that the smallest nonzero eigenvalue of
the Laplace-Beltrami operator is less than $2 \Lambda$, where $\Lambda$ denotes the Einstein constant
\cite{CH15}. In fact, by \cite{CHI04}, provided that the Einstein manifold $(M, g)$ is not the constant
curvature sphere, the operator $\mathscr{S}$ given by
\begin{equation}
{\mathscr S} u := - {\rm Hess}_g u + (\Delta u) g + \Lambda u g
\end{equation}
is injective and maps eigenfunctions of the Laplace-Beltrami operator with eigenvalue $\lambda$ to divergence-free
symmetric $2$-tensors which are eigentensors of the Lichnerowicz Laplacian with the same eigenvalue.
When the Einstein manifold is a homogeneous space $(G/K, g)$ where $G$ is a semisimple compact Lie
group, $K$ is a closed subgroup, and $g$ is induced by the negative of the Killing form
$Q_G$ of $G$, then $L^2(G/K, g)$ is a Hilbert space direct sum of the irreducible finite-dimensional
unitary representations of $G$ which are of class $1$ with respect to $K$, with multiplicity equal to the
dimension of the subspace of $K$-fixed vectors \cite{MU80}. Furthermore, the eigenvalues are given by the
Casimir constants $Q_G( \lambda, \lambda + 2 \delta )$ of the irreducible class $1$ representations,
where $\lambda$ is the dominant weight of the representation, and $2\delta$ is the sum of the positive roots of $G$.
By abuse of notation we have used $Q_G$ to denote also the inner product induced by the Killing form on the dual of the
chosen real Cartan subalgebra in $\mathfrak{g}$. Obviously, in the above we can replace $g$ by any negative multiple of
the Killing form. (Note that $Q_G$ is negative definite on $\mathfrak{g}$ but positive definite on
the Cartan subalgebra.)
If, on the other hand, the Einstein manifold lies in the canonical variation
(see \cite{Bes87} pp. 252-255) of the Killing form metric along a closed intermediate subgroup
$K \subset H \subset G$, then the spectrum of the Laplacian can be determined using the results
in \cite{BB90}, which improves upon the work in \cite{BeBo82}.
\begin{example} \label{NK-S3S3}
Let $G = S^3 \times S^3 \times S^3$ and $K$ be the image of the diagonally embedded $S^3$ in $G$.
The Killing form metric is well-known to be nearly K\"ahler, and the dimension of the associated space of
real Killing spinors is $1$. We will show that this Einstein metric is $\nu$-unstable.
For convenience we will take the metric $g$ on $G$ to be the product of the normalized Killing form $Q^{\prime}$
of ${\rm SU}(2)$, which is that multiple of the Killing form such that the maximal root of ${\rm SU}(2)$ has length $\sqrt{2}$.
Since $Q = - 4 Q^{\prime}$, using Corollary 1.7 and Table IV in \cite{WZ85}, one deduces that the Einstein constant of
$g$ is $\Lambda = \frac{5}{3}$. (Note that $g$ induces three times $Q^{\prime}$ on the diagonal subalgebra and
the isotropy representation of $G/K$ consists of two copies of the adjoint representation of ${\rm SU}(2)$.)
Next we determine the irreducible class $1$ unitary representations of $S^3 \times S^3 \times S^3$ relative to
the diagonal subgroup. The irreducible unitary representations of $S^3 \times S^3 \times S^3$
consist of external tensor products $\rho_1 \hat{\otimes} \rho_2 \hat{\otimes} \rho_3$ of irreducible
unitary representations of the individual factors. If only one $\rho_i$ is non-trivial, then the representation
remains irreducible upon restriction to the diagonal subgroup and so cannot be of class $1$.
If exactly two of the $\rho_i$ are non-trivial and equal to the $2$-dimensional vector representation of $S^3$, then
the Clebsch-Gordan formula shows that a $1$-dimensional trivial summand appears upon restriction to the diagonal
subgroup. Hence by permuting the $S^3$ factors we obtain three inequivalent class $1$ irreducible representations
of $S^3 \times S^3 \times S^3$ all having the same Casimir constant of $2 \cdot \frac{3}{2} = 3$. Since this is
less than $2 \Lambda = \frac{10}{3}$, $\nu$-instability has been established.
\end{example}
It seems appropriate to recall here the following lemma which we will use repeatedly later in this section.
\begin{lem} \label{compareCasimir}
Let $\mathfrak{g}$ be a complex semisimple Lie algebra with Killing form $Q$. Let $\lambda_1, \lambda_2$ denote the dominant weights of two
irreducible $($finite-dimensional$)$ complex representations of $\mathfrak{g}$. Assume that for each simple root $\alpha$
of $\mathfrak{g}$ we have
$$ \frac{2 Q(\lambda_1, \alpha)}{Q(\alpha, \alpha)} \geq \frac{2 Q(\lambda_2, \alpha)}{Q(\alpha, \alpha)}. $$
Then the corresponding Casimir constants satisfy
$$ Q(\lambda_1, \lambda_1 + 2\delta) \geq Q(\lambda_2, \lambda_2 + 2\delta)$$
with equality iff $\lambda_1 = \lambda_2$.
\end{lem}
\begin{rmk}
Applying the above lemma to Example \ref{NK-S3S3} we see that the first eigenspace of the Laplacian
has dimension $12$.
\end{rmk}
Using the classification theorem of Butruille \cite{Bu05} for strict nearly K\"ahler simply connected homogeneous
$6$-manifolds, one obtains
\begin{prop} \label{NK-appl}
The only $\nu$-stable strict nearly K\"ahler simply connected homogeneous $6$-manifold is $S^6 = {\rm G}_2/{\rm SU}(3)$
with the round metric.
\end{prop}
\begin{proof}
Recall that a strict nearly K\"ahler structure is one that is not K\"ahler. The classification theorem of Butruille
states that, up to homothety, the only simply connected homogeneous strict nearly K\"ahler $6$-manifolds are
$S^6 = {\rm G}_2/{\rm SU}(3), ({\rm SU}(2)\times {\rm SU}(2) \times {\rm SU}(2))/ \Delta {\rm SU}(2), \mathbb{C}{\rm P}^3 = {\rm Sp}(2)/({\rm Sp}(1) \times {\rm U}(1)), $
and ${\rm SU}(3)/T^2$, each equipped with a unique invariant nearly K\"ahler structure.
In the first case, the nearly K\"ahler metric is the constant curvature metric, which is $\widetilde{\bf S}$-stable.
The second case is treated in Example \ref{NK-S3S3} above. For the last two cases, the nearly K\"ahler metric
lies in the canonical variation of the Riemannian submersions given by the twistor fibrations
$$ {\rm Sp}(2)/({\rm Sp}(1) {\rm U}(1)) \longrightarrow {\rm Sp}(2)/({\rm Sp}(1)\times {\rm Sp}(1)) = S^4 $$
$$ {\rm SU}(3)/T^2 \longrightarrow {\rm SU}(3)/{\rm S}({\rm U}(2){\rm U}(1)) = \mathbb{C}{\rm P}^2$$
equipped with the metrics induced by the negative of the Killing form of $G$. The Einstein metrics are given
by 9.72 of \cite{Bes87} and the first graph of Fig. 9.72 there. The Fubini-Study metric on $\mathbb{C}{\rm P}^3$
is $\widetilde{\bf S}$-stable \cite{Koi80} so the strict nearly K\"ahler one must be given by the local minimum in the canonical variation.
For the last case, the fiber and base metrics are Einstein. The Einstein constant $\Lambda_B$ of the base is $\frac{1}{2}$
since we are using the Killing form metric on a symmetric space (see Corollary 1.6 in \cite{WZ85}). The fibers
are ${\rm SU}(2)/{\rm U}(1)$ and hence are symmetric as well. But the Killing form of ${\rm SU}(3)$ restricts to $\frac{3}{2}$
times the Killing form of ${\rm SU}(2)$ by page 583 of \cite{WZ85}. So the Einstein constant $\Lambda_F$ of the fibers is
$\frac{2}{3} \cdot \frac{1}{2} = \frac{1}{3}$. It follows that $\Lambda_B - 2 \Lambda_F = -\frac{1}{6} < 0$
and a destabilizing TT-tensor is given, for example, by Theorem 1.1 in \cite{WW18}.
\end{proof}
\begin{rmk} It is actually known that the Killing form metric on ${\rm SU}(3)/T^2$ is a local minimum
for the normalized scalar curvature functional on the space of ${\rm SU}(3)$-invariant metrics. This can be
checked by directly computing the Hessian of the normalized scalar curvature function at the Killing form
metric. Since $b_2({\rm SU}(3)/T^2) = 2$, it follows from \cite{CHI04} that the three invariant K\"ahler Einstein
metrics on it are also $\nu$-unstable.
Analogous computations for ${\rm SU}(n+1)/T^n$ show that the Killing form metric is also $\nu$-unstable.
\end{rmk}
Consider next the situation in which we have a circle bundle
\begin{equation} \label{bundle-over-symmetric}
F={\rm U}(1) = (H \cdot {\rm U}(1))/H \longrightarrow M = G/H \stackrel{\pi}{\longrightarrow} B= G/(H \cdot {\rm U}(1))
\end{equation}
where the base is an irreducible compact Hermitian symmetric space of dimension $2m$.
$B$ is simply connected and has second Betti number equal to $1$. The above fibration becomes a Riemannian
submersion with totally geodesic fibers if we give $G/H$ and $G/(H \cdot {\rm U}(1))$ the normal metrics induced by $Q_G$.
We shall denote these respectively by $g$ and $\check{g}$, and denote the induced metrics on the fibers by $\hat{g}$.
In particular the base metric is K\"ahler-Einstein and has Ricci curvature $\frac{1}{2}$.
The canonical variation of $g$ introduced in Chapter 9.G of \cite{Bes87} is the $1$-parameter family of metrics
$$ g_t = t^2 \hat{g} + \check{g} $$
on the total space $M$, where in the above definition the horizontal and vertical distributions of the Riemannian submersion are
used implicitly. There is a unique choice of $t^2$ (indeed $t^2 = \frac{2m}{m+1}$) which makes $g_t$ into an Einstein metric
with Einstein constant $\Lambda=\frac{m}{2m+2}$. If we multiply $g_t$ by $\frac{1}{4m+4}$, then the resulting metric
would have Einstein constant $2m$ and the submersed metric on the base would have Einstein constant $2m+2$.
In other words, the rescaled metric would be Sasaki Einstein. Since stability properties are independent of homothety, those
of $g_t$ are the same as those of the corresponding Sasaki Einstein metric.
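As an elementary check of the rescaling arithmetic above (added for the reader's convenience): the Ricci tensor is invariant under homothety, so if $\tilde{g} = c\, g$ then the Einstein constant transforms as $\tilde{\Lambda} = \Lambda/c$. With $c = \frac{1}{4m+4}$ this recovers the Sasaki Einstein normalization stated above:

```latex
\tilde{\Lambda} = (4m+4) \cdot \Lambda_{g_{t_*}} = (4m+4) \cdot \frac{m}{2m+2} = 2m,
\qquad
\tilde{\Lambda}_B = (4m+4) \cdot \frac{1}{2} = 2m+2 .
```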
Simply connected Sasaki Einstein manifolds are spin and admit non-trivial real Killing spinors. So we shall
take care in the following to ensure that $G/H$ is also simply connected. Let $t_*^2$ denote the special value
$\frac{2m}{m+1}$. It follows that the Einstein metric $g_{t_*}$ admits at least two linearly independent Killing spinors.
We shall show below that in some cases the Einstein metric $g_{t_*}$ is $\nu$-unstable by exhibiting an eigenvalue of the
Laplacian which is less than $2\Lambda_{g_{t_*}} = \frac{m}{m+1}$.
To do this we will use the results in \cite{BeBo82} and \cite{BB90}, which we recall briefly below. Let $\Delta_t$ and $\Delta_v$
denote respectively the Laplacian of $g_t$ and the vertical Laplacian of the Riemannian submersion (\ref{bundle-over-symmetric})
(with metric $g_1$). Because of the totally geodesic property, all fibers of our fibration are isometric, and the vertical Laplacian
is just the collection of Laplacians of the fibers (for the metrics $\hat{g}$). Then $\Delta_1$ is the Laplacian of
the Killing form metric, whose eigenvalues can be found using representation theory. We have the relation
$$ \Delta_t = \Delta_1 + \left(\frac{1}{t^2} -1 \right) \Delta_v.$$
Note that in the totally geodesic situation, $\Delta_1$ and $\Delta_v$ commute. The operator
$\Delta_v$ is not elliptic, however, but has discrete spectrum, and its eigenvalues can have infinite multiplicities.
The crucial fact for us is the
\begin{thm} $($\cite{BeBo82} Theorem 3.6, \cite{BetPi13} Remark 3.3$)$ \label{simult-eigen}
$L^2(M, g_1)$ has a Hilbert space basis consisting of simultaneous eigenfunctions of $\Delta_1$ and $\Delta_v$.
\end{thm}
It follows from this that every eigenvalue of $\Delta_t$ is the sum of an eigenvalue of $\Delta_1$ and
$\frac{1}{t^2} -1$ times an eigenvalue of $\Delta_v$. It is not completely straightforward to decide which
combinations of eigenvalues occur in general, but this has been worked out in \cite{BB90}.
In our situation, we will assume $m>1$; otherwise $B= S^2$ and $M$, being of dimension $3$, must have
constant curvature and so is $\widetilde{\bf S}$-stable. Then
$t_*^2 > 1$ and $\frac{1}{t_*^2} - 1 = -\left(\frac{m-1}{2m} \right) < 0.$
Let $h_0$ denote the usual metric on ${\rm U}(1) = S^1$ so that it has circumference $2\pi$. Suppose the Killing form metric
induces on the ${\rm U}(1)$ fibers in (\ref{bundle-over-symmetric}) the metric $a h_0$ where $a > 0$. Then since the eigenvalues
of $\Delta_v$ are of the form $\frac{1}{a} \ell^2$ where $\ell \in \mathbb{Z}$, it follows that the eigenvalues of $\Delta_{t_*}$
are of the form
\begin{equation} \label{eigenvalueform}
\lambda + \left(\frac{1}{t_*^2} - 1\right) \frac{\ell^2}{a} = \lambda - \left(\frac{m-1}{2m}\right) \frac{ \ell^2}{a} \leq \lambda,
\end{equation}
where $\lambda$ is an eigenvalue of $\Delta_1$.
In order to apply the results in \cite{BB90}, we need to write $M = G/H$ as $G/H \times_L {\rm U}(1)$
where $L:={\rm U}(1)$ acts freely on the right of $P=G/H=M$ and isometrically on the left of $F = {\rm U}(1)$.
Note that the metric on $M$ is $g_{t_*}$ and the metric on $F$ is $t_*^2 a h_0$. On the other hand,
the principal bundle $p: P \longrightarrow B$ is the projection of $G/H$ onto $B=G/K$ where both spaces
are equipped with the normal metric induced by $Q_G$, and so $L$ indeed acts via isometries
of this metric. Now in the proof of the results in \cite{BB90}, the authors employ a separate canonical
variation along the fibers of this Riemannian submersion, which, when combined with Cheeger's trick, kills off
the metric along $L$ as the variation parameter tends to infinity. In the limit we then get the eigenvalues of
$\Delta_{t_*}$ expressed as the sum of eigenvalues of the horizontal Laplacian of the fibration $p$ and
``corresponding" eigenvalues of $(F, t_*^2 a h_0)$. Here ``corresponding" means that the action of the group
$L$ on the irreducible summands of the eigenspaces in $L^2(P, Q_G)$ and $L^2(F, t_*^2 a h_0)$ must be the same.
Finally, note that the eigenvalues of the horizontal Laplacian of the fibration $p$ can be written as a
difference of an eigenvalue of the Laplacian of $Q_G$ and an eigenvalue of the vertical Laplacian corresponding
to the metric $a h_0$. This gives back the form (\ref{eigenvalueform}) of the eigenvalue together with
the additional information as to which $\ell$ can occur for a given $\lambda$.
\medskip
\noindent{\bf Observation}: The above discussion implies that if we can find an irreducible unitary representation
of $G$ that is of class $1$ relative to $H$ on which $L$ acts non-trivially and if this representation has
a Casimir constant $\leq \frac{m}{m+1}$ then the Einstein metric $g_{t_*}$ is $\nu$-unstable. Furthermore, if strict inequality
holds for the Casimir constant, then the same conclusion holds without having to check whether the action of $L$ is trivial or not.
\begin{example} \label{hyperquadric}
Let $G={\rm SO}(m+2), K= {\rm SO}(m) \times {\rm SO}(2)$, and $H= {\rm SO}(m)$ with $m\geq 3$. Then $L = {\rm SO}(2)$ and $B= G/K$
is the hyperquadric of complex dimension $m$. Note that $G/H$ is simply connected. The vector representation
$\rho_{m+2}$ of ${\rm SO}(m+2)$ on $\mathbb{C}^{m+2}$ has a fixed point set of complex dimension $2$ when restricted to
${\rm SO}(m)$. $L$ acts on the right of $\mathbb{C}^{m+2}$ via the usual
representation of ${\rm U}(1)$ by rotations ($\ell = 1$). The element $i$ of the Lie algebra of ${\rm U}(1)$ corresponds to the
matrix in $\mathfrak{so}(m+2)$ consisting of zeros everywhere except for a single $2 \times 2$ block in the lower right hand
corner given by
$$ \left( \begin{array}{rr}
0 & -1 \\
1 & 0
\end{array} \right).
$$
This matrix has length $2m$ with respect to $Q_G$, and so the constant $a = 2m$.
The Casimir constant of $\rho_{m+2}$ is $\frac{m+1}{2m} < \frac{m}{m+1}$
since $Q_G = 2(m+2 -2) Q_G^{\prime}$ where $Q_G^{\prime}$ is that multiple of the negative of the Killing form
so that the maximal root has length $-\sqrt{2}$. (See pp. 583-586 of \cite{WZ85} for more details.)
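The inequality between the Casimir constant and $\frac{m}{m+1}$ asserted above can be verified by clearing denominators (an elementary check added here for convenience):

```latex
\frac{m+1}{2m} < \frac{m}{m+1}
\;\Longleftrightarrow\; (m+1)^2 < 2m^2
\;\Longleftrightarrow\; m^2 - 2m - 1 > 0,
```

which holds for all $m \geq 3$ (at $m=3$ the left-hand side equals $2$), consistent with the standing assumption $m \geq 3$ of this example.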
The corresponding eigenvalue of $\Delta_{t_*}$ is
$$\frac{m+1}{2m} - \frac{m-1}{2m} \frac{1}{2m}.$$
Since the multiplicity of the eigenvalue $\frac{m+1}{2m}$ in $L^2({\rm SO}(m+2)/{\rm SO}(m))$ is $2(m+2)$ we obtain a
$2(m+2)$-dimensional positive definite subspace for the second variation of the $\nu$-functional that is
orthogonal to the unstable direction we found in \S \ref{Stiefel}.
\end{example}
\begin{example} \label{E6}
Let $G = {\rm E}_6$, $H = {\rm Spin}(10)$, and $K = ({\rm Spin}(10) \times {\rm U}(1))/\Delta(\mathbb{Z}/4)$. Then
$B=G/K$ is a Hermitian symmetric space of dimension $2m = 32$, and $G/H$ is simply connected.
(But $G/K$ is not effective since the center $\mathbb{Z}/3$ of ${\rm E}_6$ (lying in the ${\rm U}(1)$ factor in $K$)
is the ineffective kernel.) The Einstein constant $\Lambda_{g_{t_*}}$ is equal to $\frac{16}{34}$.
Let $\pi_{\lambda}$ be one of the two lowest dimensional irreducible unitary representations of ${\rm E}_6$,
and $\lambda$ be its dominant weight. The complex dimension of $\pi_{\lambda}$ is $27$, and by Table 25, p. 203, of
\cite{Dyn52}, upon restriction to ${\rm Spin}(10)$ it decomposes as $\rho_{10} \oplus \Delta^{+}_{10} \oplus \mathbb{I}$
where $\rho_{10}$ is the vector representation of ${\rm Spin}(10)$, $\Delta^{+}_{10}$ is the positive spin representation,
and $\mathbb{I}$ denotes a trivial one-dimensional representation. Hence $\pi_{\lambda}$ is of class $1$ with respect to
${\rm Spin}(10)$. If we picked the other lowest dimensional representation, which is contragredient to $\pi_{\lambda}$,
then in the decomposition the $+$ spin representation would be replaced by the $-$ spin representation.
Now $Q_{{\rm E}_6} = 24 Q^{\prime}_{{\rm E}_6}$ so from Table III, p. 586 of \cite{WZ85},
$Q_G(\lambda, \lambda + 2\delta) = \frac{1}{24} \cdot \frac{52}{3} = \frac{13}{18} < \frac{16}{17} = 2 \Lambda_{g_{t_*}}.$
So there is no need to determine the action of $L = {\rm U}(1)$ on the right of $\pi_{\lambda}$. We obtain
a $2 \cdot 27 = 54$-dimensional subspace of divergence-free symmetric $2$-tensors on which the second
variation of the $\nu$-functional is positive definite.
\end{example}
\begin{example} \label{E7}
Let $G= {\rm E}_7$, $H= {\rm E}_6$ and $K = {\rm E}_6 \cdot {\rm U}(1)$ where $K$ is the quotient of
${\rm E}_6 \times {\rm U}(1)$ by the diagonally embedded $\mathbb{Z}/3$. (The center of ${\rm E}_6$ is
$\mathbb{Z}/3$.) $G/H$ is simply connected. The dimension of $G/K = B$ is $2m= 54$ and so the Einstein
constant $\Lambda$ for $g_{t_*}$ is $\frac{27}{56}$. $G/K$ again is not effective, but can be made so
by dividing by the center of ${\rm E}_7$, which is $\mathbb{Z}/2$.
We consider the lowest dimensional non-trivial irreducible representation $\pi_{\lambda}$ of ${\rm E}_7$,
which is of dimension $56$.
By Table 25, p. 204 of \cite{Dyn52}, upon restriction to ${\rm E}_6$, $\pi_{\lambda}$ decomposes as
$2 \mathbb{I} \oplus \rho$, where $\rho$ is the real irreducible representation of ${\rm E}_6$ corresponding to one of the
$27$-dimensional irreducible complex representations of ${\rm E}_6$. So $\pi_{\lambda}$ is of class $1$ with respect to
${\rm E}_6$ with fixed point set of dimension $2$. Because $Q_G = 36 Q_G^{\prime}$, the Casimir constant
$Q_G(\lambda, \lambda + 2 \delta) = \frac{1}{36} \cdot \frac{57}{2} = \frac{57}{72} < 2\Lambda = \frac{27}{28}$
(see Table III, p. 586 of \cite{WZ85}). So again we do not need to determine the action of ${\rm U}(1)$ on this
irreducible summand in $L^2(G/H)$ and we obtain a $2 \cdot 56$-dimensional subspace on which the second variation of the
$\nu$-functional is positive definite.
\end{example}
\begin{example} \label{CGr-2planes}
Let $G = {\rm SU}(p+2)$, $H={\rm SU}(p) \times {\rm SU}(2)$, and $K = {\rm S}({\rm U}(p)\times {\rm U}(2))$ with $p \geq 2$. Then $G/H$ is simply
connected. Note that $G/K$ has the distinction of being the only Hermitian symmetric space that is also quaternionic
symmetric. Its dimension is $2m = 4p$, so the Einstein constant $\Lambda_{t_*} = \frac{p}{2p+1}$.
Let $\mu_k$ denote the vector representation of ${\rm SU}(k)$ on $\mathbb{C}^k$. We claim that $\Lambda^2 \mu_{p+2}$,
which is irreducible, is of class $1$ relative to $H$. This follows from the calculation
\begin{equation} \label{rep-decompose}
\Lambda^2 \mu_{p+2}|\, {\rm SU}(p)\times {\rm SU}(2) = \Lambda^2 \mu_p \hat{\otimes} \mathbb{I} \oplus \mathbb{I} \hat{\otimes} \Lambda^2 \mu_2
\oplus \mu_p \hat{\otimes} \mu_2
\end{equation}
where $\mathbb{I}$ denotes the $1$-dimensional trivial representation and $\hat{\otimes}$ denotes the external
tensor product. Since $\mu_2$ has dimension $2$ and determinants in ${\rm SU}(2)$ equal $1$, we get a
single trivial summand upon restriction, provided $p > 2$.
Let $\lambda$ denote the dominant weight of $\Lambda^2 \mu_{p+2}$. Since $Q_{{\rm SU}(p+2)} = 2(p+2) Q_{{\rm SU}(p+2)}^\prime$,
using Table III of \cite{WZ85}, we have
$$ Q_G(\lambda, \lambda + 2\delta) = \frac{1}{2(p+2)} \cdot 2p \cdot \frac{p+3}{p+2} = \frac{p(p+3)}{ (p+2)^2} <
2 \Lambda_{g_{t_*}} = \frac{2p}{2p+1}.$$
So again it is unnecessary to determine the action of ${\rm U}(1) = L$ on $\Lambda^2 \mu_{p+2}$. We obtain a
$(p+2)(p+1)/2$-dimensional subspace on which the second variation of the $\nu$-functional is positive definite.
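The Casimir inequality displayed above can be confirmed by cross-multiplying (an elementary check added for convenience):

```latex
\frac{p(p+3)}{(p+2)^2} < \frac{2p}{2p+1}
\;\Longleftrightarrow\; (p+3)(2p+1) < 2(p+2)^2
\;\Longleftrightarrow\; 2p^2 + 7p + 3 < 2p^2 + 8p + 8,
```

which holds for all $p \geq 1$.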
Notice that when $p=2$, $G/H = {\rm SU}(4)/({\rm SU}(2)\times {\rm SU}(2)) = {\rm SO}(6)/{\rm SO}(4)$, which we analysed in Example (\ref{hyperquadric}).
In this special case, we have an additional trivial summand coming from the $\Lambda^2 \mu_p$ in (\ref{rep-decompose}),
and so the multiplicity of $\Lambda^2 \mu_{p+2}$ in $L^2(G/H)$ is doubled, which is consistent with the analysis
in Example (\ref{hyperquadric}).
\end{example}
\section{\bf Low-dimensional homogeneous Einstein spaces and Sasaki Einstein spaces} \label{homog}
In this section we will first apply the results in the earlier sections and in \cite{WW18} to determine the stability
of low-dimensional simply connected compact homogeneous Einstein manifolds. Given such a manifold we will write it
in the form $G/K$ where $G$ is compact, connected, semisimple, and $K$ is a closed subgroup of $G$. We also assume that $G$
acts almost effectively on $G/K$. These assumptions are not too restrictive, since the isometry group of a compact
Riemannian manifold is compact, and the semisimple part of the identity component of a transitive Lie group acting
on a simply connected closed manifold also acts transitively on it.
The $\widetilde{\bf S}$ and $\nu$-linear stability of the symmetric metric on compact symmetric spaces
was analysed by Koiso \cite{Koi80} and Cao-He \cite{CH15}. We shall therefore assume that $(G, K)$ is not a symmetric
pair unless otherwise stated. Also recall that any product Einstein metric with positive scalar curvature is $\widetilde{\bf S}$-linearly unstable.
We shall begin with dimension five, since Jensen \cite{J69} proved that all simply connected homogeneous $4$-manifolds
are symmetric.
\smallskip
\noindent{\bf I. Dimension five}
\smallskip
The classification of simply connected compact homogeneous Einstein $5$-manifolds was given in
\cite{ADF96}. There are only two non-symmetric cases: the Stiefel manifold ${\rm SO}(4)/{\rm SO}(2)$, and the family
$({\rm SU}(2) \times {\rm SU}(2))/U_{k,l}$ where $k, l$ are relatively prime integers not both equal to $1$, and
$U_{k, l}$ is the circle embedded by $e^{i \theta} \mapsto (e^{i k \theta}, e^{i l \theta})$. The unique
${\rm SU}(2) \times {\rm SU}(2)$-invariant Einstein metric is in fact of Riemannian submersion type over
$S^2 \times S^2$. It is not Sasaki Einstein, and by Corollary 1.3 in \cite{WW18}, it is $\widetilde{\bf S}$-linearly
unstable. The case of the Stiefel manifold ${\rm SO}(4)/{\rm SO}(2)$ actually corresponds to the $(k, l) = (1, 1)$ case of the
above infinite family. There are two invariant Einstein metrics on this space. One is the Sasaki Einstein metric, which is
$\widetilde{\bf S}$-linearly unstable, and the other is the product metric, which is also $\widetilde{\bf S}$-linearly
unstable.
The symmetric $5$-manifolds are $S^5$ (stable), $S^3 \times S^2$ ($\widetilde{\bf S}$-linearly unstable),
and ${\rm SU}(3)/{\rm SO}(3)$, which is neutrally linearly stable, i.e., $\nu$-linearly stable and the kernel of
the second variation operator contains a symmetric $2$-tensor orthogonal to the orbit of the diffeomorphism
group.
\medskip
\noindent{\bf II. Dimension six}
\medskip
The classification of simply connected compact homogeneous Einstein metrics in dimension $6$ is as yet
incomplete. The only open case is that of $S^3 \times S^3$ with a left-invariant metric. The remaining possibilities
are classified in \cite{NR03}. In the same paper, the authors showed that if the left-invariant Einstein metric
has an additional circle of isometries acting by right translations, then up to isometries and homotheties it
must be the product metric or the strict nearly K\"ahler metric induced by the Killing form on
$({\rm SU}(2) \times {\rm SU}(2) \times {\rm SU}(2))/ \Delta {\rm SU}(2)$. Quite recently, this result has been improved
in \cite{BCHL18} to allow the same conclusion as long as $S^3\times S^3 = G/K$ with $K \neq \mathbb{Z}/2$.
The stability of the strict nearly K\"ahler metrics was dealt with in Proposition \ref{NK-appl} in section
\ref{instab-conf}. The only non-symmetric case is that of $\mathbb{C}{\rm P}^3 = {\rm Sp}(2)/({\rm Sp}(1) \times {\rm U}(1))$
with the Ziller metric. This metric is $\widetilde{\bf S}$-linearly unstable as it lies in the canonical variation
of the Fubini-Study metric on $\mathbb{C}{\rm P}^3$, viewed as a Riemannian submersion with totally
geodesic fibers over the self-dual Einstein space $\mathbb{H}{\rm P}^1= S^4$.
The symmetric cases are all $\widetilde{\bf S}$-linearly unstable except for $S^6$ and $\mathbb{C}{\rm P}^3$, which are
both stable.
\medskip
\noindent{\bf III. Dimension seven}
\medskip
The seven-dimensional simply connected compact homogeneous Einstein manifolds were classified in
\cite{Nik04}. Except for $S^7$ the symmetric cases are all product manifolds, and hence are $\widetilde{\bf S}$-linearly
unstable. As for the non-symmetric cases, those which are not product manifolds consist of
\begin{enumerate}
\item[$($1$)$] the Aloff-Wallach spaces $N_{k, l}$, with $k, l$ relatively prime integers; \\
\item[$($2$)$] the circle bundles over $S^2 \times S^2 \times S^2$; \\
\item[$($3$)$] the circle bundles over $\mathbb{C}{\rm P}^2 \times S^2$; \\
\item[$($4$)$] the Jensen squashed $7$-sphere; \\
\item[$($5$)$] the Stiefel manifold ${\rm SO}(5)/{\rm SO}(3)$; \\
\item[$($6$)$] the isotropy irreducible space ${\rm Sp}(2)/{\rm SU}(2)$ where the embedding of ${\rm SU}(2)$ is via the
irreducible $4$-dimensional symplectic representation.
\end{enumerate}
The first case is covered by Theorem \ref{Aloff-Wallach}, proved in \S \ref{AWI} and \ref{AWII}. The second and
third cases have $\widetilde{\bf S}$-coindex of at least $2$ and $1$ respectively by results
in \cite{WW18}. The metric in case $4$ lies in the canonical variation of the Riemannian submersion given by
the Hopf fibration. It is clearly $\widetilde{\bf S}$-linearly unstable since the round metric on $S^7$ is stable.
The Euclidean cone of the Jensen sphere has ${\rm Spin}(7)$ holonomy, i.e., the Jensen metric is nearly ${\rm G}_2$ with only
a $1$-dimensional space of real Killing spinors. The fifth case is discussed in \S \ref{Stiefel} and in Example
$\ref{hyperquadric}$ in \S $\ref{instab-conf}$. The nature of the
last case remains open.
The discussions in I - III above complete the proof of Theorem \ref{lowdim}.
\medskip
One of the two isometry classes of ${\rm SU}(3)$-invariant Einstein metrics on $N_{1,1}$ is $3$-Sasakian and fits
into the more general context of regular $3$-Sasakian manifolds. We refer the reader to Chapter 13 of \cite{BG08}
for background about this family of spaces. It turns out that regular $3$-Sasakian manifolds are given by certain principal ${\rm SO}(3)$ or ${\rm Sp}(1)$
bundles over a quaternionic K\"ahler manifold with positive scalar curvature. (One gets an ${\rm Sp}(1)$ bundle only
when the quaternionic K\"ahler manifold is quaternionic projective space.) The prevailing conjecture is that
the only quaternionic K\"ahler manifolds with positive scalar curvature are the quaternionic symmetric spaces.
This conjecture has been proved in dimensions $4$ \cite{Hit81} and $8$ \cite{PS91}.
The $3$-Sasakian metric makes the bundle projection into a Riemannian submersion with totally geodesic fibres.
By looking at the canonical variation of this Riemannian submersion, it follows immediately that there is a
second Einstein metric on the principal bundle that is not isometric to the $3$-Sasakian metric.
(This fact was independently observed by B\'erard Bergery and S. Salamon.) In fact the second Einstein metric is
always a local minimum in the canonical variation, see e.g., Theorem 3.4.1 in \cite{BG99}. So this more general
viewpoint explains the existence of the second Einstein metric and its $\widetilde{\bf S}$-linear instability, and applies
in particular to $N_{1, 1}$. The $\widetilde{\bf S}$-linear instability of the $3$-Sasakian metric, which cannot
be detected from the canonical variation, can be explained by Corollary 1.7 in \cite{WW18}, since it is also the
Sasaki Einstein metric on the circle bundle over one of the homogeneous K\"ahler Einstein metrics on ${\rm SU}(3)/T$,
which has $b_2 > 1$.
The same argument works for the regular $3$-Sasakian manifold over the complex Grassmannian ${\rm SU}(p+2)/({\rm S}({\rm U}(p) \times {\rm U}(2)))$,
which are the only Hermitian symmetric spaces with a quaternionic K\"ahler structure. For the twistor spaces of the other compact
quaternionic symmetric spaces, the second Betti number is $1$, so their instability is at present unclear. We have therefore deduced
\begin{cor} \label{3-Sasakian} The two Einstein metrics lying in the canonical variation of the
regular $3$-Sasakian fibration ${\rm SO}(3) \longrightarrow {\rm SU}(p+2)/({\rm S}({\rm U}(p) \times \Delta {\rm S}^1))
\longrightarrow {\rm SU}(p+2)/({\rm S}({\rm U}(p) \times {\rm U}(2)))$, $p \geq 1$, are both $\widetilde{\bf S}$-linearly unstable.
\end{cor}
\end{document}
\begin{document}
\title{Understanding popular matchings via stable matchings}
\author{\'Agnes Cseh\inst{1,2} \and Yuri Faenza\inst{3} \and Telikepalli Kavitha\inst{4}\thanks{Part of this work was done while visiting MPI for Informatics, Saarland Informatics Campus, Germany.} \and Vladlena Powers\inst{3}}
\institute{Hasso-Plattner-Institute, University of Potsdam, Germany \and Institute of Economics, Centre for Economic and Regional Studies, Hungary; \email{[email protected]} \and IEOR, Columbia University, New York, USA;
\email{\{yf2414, vp2342\}@columbia.edu} \and Tata Institute of Fundamental Research, Mumbai, India; \email{[email protected]}}
\maketitle
\pagestyle{plain}
\begin{abstract}
An instance of the marriage problem is given by a graph $G = (A \cup B,E)$, together with, for each vertex of $G$, a strict preference order over its neighbors. A matching $M$ of $G$ is {\em popular} in the marriage instance if $M$ does not lose a head-to-head election against any matching where vertices are voters. Every stable matching is a {\em min-size} popular matching; another subclass of popular matchings that always exist and can be easily computed is the set of {\em dominant} matchings. A popular matching $M$ is dominant if $M$ wins the head-to-head election against any larger matching. Thus every dominant matching is a {\em max-size} popular matching and it is known that the set of dominant matchings is the linear image of the set of stable matchings in an auxiliary graph. Results from the literature seem to suggest that stable and dominant matchings behave, from a complexity theory point of view, in a very similar manner within the class of popular matchings.
The goal of this paper is to show that there are instead differences in the tractability of stable and dominant matchings, and to investigate further their importance for popular matchings. First, we show that it is easy to check if all popular matchings are also stable, however it is co-NP-hard to check if all popular matchings are also dominant. Second, we show how some new and recent hardness results on \emph{popular} matching problems can be deduced from the NP-hardness of certain problems on \emph{stable} matchings, also studied in this paper, thus showing that stable matchings can be employed not only to obtain positive results on popular matchings (as is known), but also most of the negative ones. Problems for which we show new hardness results include finding a min-size (resp.~max-size) popular matching that is not stable (resp.~dominant). A known result for which we give a new and simple proof is the NP-hardness of finding a popular matching when $G$ is non-bipartite.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Consider a bipartite graph $G = (A \cup B,E)$ on $n$ vertices and $m$ edges where each vertex has a strict ranking of its neighbors. Such a graph \new{supplied with preference lists}, also called a {\em marriage} instance, is an extensively studied model in two-sided matching markets. The problem of computing a
{\em stable} matching in $G$ is classical. A matching $M$ is stable if there is no {\em blocking edge} with respect to $M$, i.e., an edge whose endpoints prefer each other to their respective assignments in $M$. The notion of stability was introduced by Gale and Shapley~\cite{GS62} in 1962 who showed that stable matchings always exist in $G$ and there is a simple linear time algorithm to find one.
Stable matchings in an instance \new{with an underlying bipartite graph} $G$ are well-understood~\cite{GI89}, with efficient algorithms~\cite{Fed92,Fed94,ILG87,Rot92,TS98,VV89} to solve several optimization problems that have many applications in economics, computer science, mathematics, and operations research. Here we study a related and more relaxed notion called {\em popularity}. This notion was introduced by G\"ardenfors~\cite{Gar75} in 1975 who
showed that every stable matching is also popular. For any vertex $u$, its preference over neighbors extends naturally to a preference over matchings as follows:
$u$ prefers $M$ to $M'$ either if (i)~$u$ is matched in $M$ and unmatched in $M'$ or (ii)~$u$ is matched in both and prefers
its partner in $M$ to its partner in~$M'$. Let $\psi(M,M')$ be the number of vertices that prefer $M$ to~$M'$.
\begin{definition}
\label{pop-def}
A matching $M$ is {\em popular} if $\psi(M,M') \ge \psi(M',M)$ for every matching $M'$ in $G$,
i.e., $\Delta(M,M') \ge 0$ where $\Delta(M,M') = \psi(M,M') - \psi(M',M)$.
\end{definition}
Hence, in a voting-based context, vertices constitute the set of voters, and each matching in the instance is an alternative. In a head-to-head election between two matchings, each vertex casts a vote for the matching that it prefers and it abstains from voting if its assignment is the same in both matchings. A popular matching, by definition, never loses such a head-to-head election against another matching. Equivalently, a popular matching is a weak {\em Condorcet winner}~\cite{Con85,wiki-condorcet} in the corresponding voting instance. It is easy to show that a stable matching is a min-size popular matching~\cite{HK13b}. Thus larger matchings and more generally, matchings that achieve more social good, are possible by relaxing the constraint of stability to popularity.
Algorithmic questions for popular matchings in bipartite graphs have been well-studied in the last decade~\cite{BIM10,CK18,FKPZ18,HK13b,HK17,Kav14,Kav16}. We currently know efficient algorithms for the following problems in bipartite graphs: (i)~min-size popular matchings, (ii)~max-size popular matchings, and (iii)~finding a popular matching with a given edge. All these algorithms compute either a stable matching or a {\em dominant} matching.
\begin{definition}
\label{def:dominant}
A popular matching $M$ is dominant in $G$ if $M$ is more popular than any larger matching in $G$, i.e., $\Delta(M,M') > 0$ for any matching
$M'$ such that $|M'| > |M|$.
\end{definition}
Thus a dominant matching defeats every larger matching in a head-to-head election, so it immediately follows that a dominant matching is a popular matching of maximum size. The example in Fig.~\ref{fig:example} (from \cite{HK13b}) demonstrates the differences between stable, dominant, and max-size matchings.
In the graph $G = (A \cup B, E)$ here, we have $A = \left\{ a_1, a_2, a_3 \right\}$ and $B = \left\{ b_1, b_2, b_3 \right\}$. The preferences are depicted both as numbers on the edges and as lists to the left of the drawn graph. Vertex $b_1$ is the top choice for all $a_i$'s, $b_2$ is the second choice for $a_1$ and $a_2$, and $b_3$ is the third choice for $a_1$ alone. The preference lists of the $b_i$ vertices are symmetric. There are two popular matchings here, both of the same cardinality: $M_1 = \{(a_1,b_1),(a_2,b_2)\}$ and $M_2 = \{(a_1,b_2),(a_2,b_1)\}$.
The matching $M_1$ is stable, but not dominant, since it is {\em not} more popular than the larger matching $M_3 = \{(a_1,b_3),(a_2,b_2),(a_3,b_1)\}$. Observe that in an election between $M_1$ and $M_3$, vertices $a_1, b_1$ vote for $M_1$, vertices $a_3, b_3$ vote for $M_3$, and vertices $a_2, b_2$ are indifferent between $M_1$ and $M_3$; thus $\Delta(M_1,M_3) = 2 - 2 = 0$.
The matching $M_2$ is dominant since $M_2$ is more popular than $M_3$: observe that $\Delta(M_2,M_3) = 4 - 2 = 2$ since $a_1,b_1,a_2,b_2$ prefer $M_2$ to $M_3$ while $a_3, b_3$ prefer $M_3$ to $M_2$. However $M_2$ is not stable, since $(a_1,b_1)$ blocks it.
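The two elections above can be replayed mechanically. The following Python sketch (our own illustrative code, not from the paper; the preference lists of the example instance are hard-coded) computes $\Delta(M,M')$ straight from Definition~\ref{pop-def}:

```python
# Strict preference lists of the example instance (best neighbor first).
pref = {
    'a1': ['b1', 'b2', 'b3'], 'a2': ['b1', 'b2'], 'a3': ['b1'],
    'b1': ['a1', 'a2', 'a3'], 'b2': ['a1', 'a2'], 'b3': ['a1'],
}

def rank(u, v):
    # Position of v in u's list; being unmatched (v is None) is worse
    # than having any neighbor as partner.
    return pref[u].index(v) if v is not None else len(pref[u])

def matching(pairs):
    # Store a matching as a vertex -> partner map (both directions).
    M = {}
    for a, b in pairs:
        M[a], M[b] = b, a
    return M

def delta(M, N):
    # Delta(M, N) = #vertices preferring M  -  #vertices preferring N.
    d = 0
    for u in pref:
        rm, rn = rank(u, M.get(u)), rank(u, N.get(u))
        d += (rm < rn) - (rn < rm)   # +1 if u votes for M, -1 for N
    return d

M1 = matching([('a1', 'b1'), ('a2', 'b2')])
M2 = matching([('a1', 'b2'), ('a2', 'b1')])
M3 = matching([('a1', 'b3'), ('a2', 'b2'), ('a3', 'b1')])
print(delta(M1, M3))  # 0: M1 only ties with the larger M3
print(delta(M2, M3))  # 2: M2 defeats the larger M3
```

A matching $M$ is popular exactly when \texttt{delta(M, N)} is nonnegative for every matching \texttt{N}; on small instances this can be verified by brute force over all matchings.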
\begin{figure}
\caption{The above instance admits two popular matchings. The stable matching $M_1$ is marked by dotted red edges, while the dominant matching $M_2$ is marked by wavy teal edges.}
\label{fig:example}
\end{figure}
Dominant matchings always exist in a bipartite graph~\cite{HK13b} and a dominant matching can be computed in linear time~\cite{Kav14}. Moreover, dominant matchings are the linear image of stable matchings in an auxiliary instance~\cite{CK18}, hence oftentimes an optimization problem over the set of dominant matchings (e.g. finding one of maximum weight) boils down to solving the same problem on the set of stable matchings. Very recently, the following rather surprising result was shown~\cite{FKPZ18}: it is NP-hard to decide if a bipartite graph admits a popular matching that is {\em neither stable nor dominant.}
\subsection{Our problems, results, and techniques}\label{sec:our}
Everything known so far about stable and dominant matchings seemed to suggest that these classes play symmetric roles in popular matching problems in bipartite graphs: both classes are always non-empty and one is a tractable subclass of {\em min-size popular matchings} while the other is a tractable subclass of {\em max-size popular matchings}. Our first set of results shows that this symmetry does not always hold.
Our starting point is an investigation of the complexity of the following two natural and easy-to-ask questions on popular matchings in a bipartite graph $G$ \new{with strict preference lists}:
\begin{enumerate}[(1)]
\item is {\em every} popular matching in $G$ also stable?
\item is {\em every} popular matching in $G$ also dominant?
\end{enumerate}
Both these questions are trivial to answer in instances that admit popular matchings of more than one size.
Then the answer to both questions is ``no'' since $\{$dominant matchings$\} \cap \{$stable matchings$\} = \emptyset$ in such graphs as
dominant matchings are max-size popular matchings while stable matchings are min-size popular matchings.
Thus, in this case, a dominant matching is an {\em unstable} popular matching and a stable matching is a {\em non-dominant} popular matching in $G$.
However, when all popular matchings in $G$ have the same size, these questions are non-trivial.
Moreover, it is useful to ask these questions because when there are edge utilities, the problem of finding a max-utility
popular matching is NP-hard in general and also hard to approximate to a factor better than 2~\cite{FKPZ18}; however if every popular matching is stable (similarly, dominant), then the max-utility
popular matching problem can be solved in polynomial time. Thus a ``yes'' answer to either of these questions has applications.
We show the following dichotomy here: though both these questions seem analogous, {\em only one} of them is easy to answer.
\begin{itemize}
\item[($\ast$)] {\em There is an $O(m^2)$ algorithm to decide if every popular matching in $G = (A \cup B, E)$ is stable, however it is co-NP complete to decide if every popular matching in $G = (A \cup B, E)$ is dominant.}
\end{itemize}
The first step in proving $(\ast)$ is to show that questions (1) and (2) are equivalent to the following:\footnote{Although similar in spirit, the arguments leading from (1) to ($1'$) and from (2) to ($2'$) are not the same, see Lemma~\ref{non-stab-domn} and Lemma~\ref{non-domn-stable}.}
\begin{itemize}
\item[($1'$)] is every \emph{dominant} matching in $G$ also stable?
\item[($2'$)]\label{it:twoprime} is every \emph{stable} matching in $G$ also dominant?
\end{itemize}
In Section~\ref{sec:dom-vs-stab}, we give a combinatorial algorithm that solves ($1'$) in time $O(m^2)$. We settle the complexity of (2) and ($2'$) in Section~\ref{sec:stable}, showing that these problems are co-NP hard. We deduce this from the hardness of finding a stable matching with a certain augmenting path, a result that we also prove in this paper.
Our hardness reduction is surprisingly simple when compared to those that appeared in recent publications on popular matchings~\cite{FKPZ18,GMSZ18,Kav18}, and establishes a new connection between hardness of problems for stable matchings and hardness of problems for popular matchings. This connection turns out to be very fertile: we exploit it further to show NP-hardness of the following new decision problems for a bipartite graph $G$ (in particular, these hardness results are not implied by the reductions from~\cite{FKPZ18,GMSZ18,Kav18}):
\begin{itemize}
\item[(3)] is there a stable matching in $G$ that is dominant?
\item[(4)] is there a max-size popular matching in $G$ that is not dominant?
\item[(5)] is there a min-size popular matching in $G$ that is not stable?
\end{itemize}
A general graph (not necessarily bipartite) with strict preference lists is called a {\em roommates} instance.
Popular matchings need not always exist in a roommates instance and the popular roommates problem is to decide if a given instance admits one or not. The complexity of the popular roommates problem was open for close to a decade and very recently, two independent proofs of NP-hardness~\cite{FKPZ18,GMSZ18} of this problem were shown. Both these proofs are rather lengthy and technical.
We use the hardness result for problem~(3) to show a short and simple proof of NP-hardness of the popular roommates problem.
Moreover, the hardness result for (5) shows an alternative and much simpler proof of NP-hardness (compared to \cite{FKPZ18}) of the following decision problem in a marriage instance $G = (A \cup B, E)$ \new{equipped with strict preference lists}: is there a popular matching in $G$ that is neither stable nor dominant?\footnote{The reduction in \cite{FKPZ18} also showed that it was NP-hard to decide if $G$ admits a popular matching that is neither a min-size nor a max-size popular matching. Our reduction does not imply this.}
Algorithms for computing min-size/max-size popular matchings and for the popular edge problem compute either stable matchings or dominant matchings.
Dominant matchings in $G$ are stable matchings in a related graph $G'$ (see Section~\ref{prelims}) and so the machinery of stable matchings is used to solve dominant matching problems. Thus all positive results in the domain of popular matchings can be attributed to stable matchings.
Conversely, all hardness results proved in this paper rely on the fact that it is hard to find stable matchings that have / do not have certain augmenting paths. Hence, properties of stable matchings are also responsible for, and provide a unified approach to, the hardness of many popular matching problems\new{.}
\subsection{Background and related results}
In all problems considered in this paper, all vertices of a graph have strict preference lists over their neighbors. The first algorithmic question studied in the domain of popular matchings was in the one-sided preference lists model in bipartite graphs: here, \new{unlike in our setting,} only one side of the graph consists of agents who have preferences over their neighbors; vertices on the other side are objects with no preferences. Popular matchings need not always exist here and an efficient algorithm was shown in \cite{AIKM07} to determine if one exists or not.
Popular matchings always exist in bipartite graphs when every vertex has a strict preference list~\cite{Gar75}. However when preference lists are not strict, the problem of deciding if a popular matching exists or not is NP-hard~\cite{BIM10,CHK17}. The first non-trivial algorithms designed for computing popular matchings in bipartite graphs with strict preference lists were the max-size popular matching algorithms~\cite{HK13b,Kav14}. These algorithms compute dominant matchings and the term {\em dominant matching} was formally defined a few years later in \cite{CK18} to solve the ``popular edge'' problem.
As mentioned earlier, it was recently shown that it is NP-hard to decide if a
marriage instance admits a popular matching that is neither stable nor dominant~\cite{FKPZ18}. This hardness result was shown by a reduction from 1-in-3 SAT and a consequence of this hardness result was the hardness of the
popular roommates problem. The NP-hardness of the popular roommates problem shown in \cite{GMSZ18} was established by a reduction from a problem called the
{\em partitioned vertex cover} problem.
There are several efficient algorithms to solve the stable roommates problem~\cite{Irv85,Sub94,TS98}. In contrast, the {\em dominant roommates}
problem, i.e., the problem of deciding whether a roommates instance admits a dominant matching or not, is NP-hard~\cite{FKPZ18}. Our NP-hardness proof of the popular roommates problem also proves the hardness of the dominant roommates problem and it is much simpler than the proof in \cite{FKPZ18}.
We remark that constructions from the present paper have been employed in the subsequent work~\cite{FK20} to show \emph{polyhedral} results. In particular,~\cite{FK20} builds on Section~\ref{sec:stable} to show that the dominant matching polytope has an exponential number of facets, and it builds on Section~\ref{sec:min-size} to show that the popular matching polytope has near-exponential extension complexity.
\paragraph{Organization of the paper.} Section~\ref{prelims} contains known facts on popular, stable, and dominant matchings that will be used throughout the paper. In Section~\ref{sec:dom-vs-stab}, we present an algorithm to decide if $G$ has an unstable popular matching. Section~\ref{sec:stable} contains our co-NP hardness result, which we deduce from the hardness of deciding whether a stable matching \emph{with} certain augmenting paths exists; this hardness is also proved in Section~\ref{sec:stable}. In Section~\ref{sec:new} we show how the hardness of deciding whether a stable matching \emph{without} certain augmenting paths exists implies new and known hardness results, in particular, the hardness of deciding if there exists a matching that is both stable and dominant.
\section{Preliminaries}
\label{prelims}
Let $G = (A \cup B, E)$ be \new{the graph in our input}. We will often refer to vertices in $A$ and $B$ as {\em men} and {\em women}, respectively. We will always assume that each vertex has a strict preference order over his/her neighbors\new{, and the set of these lists is denoted by~$\mathcal{P}$. An instance of our problem consists therefore of $G$ and~$\mathcal{P}$, but we will omit to explicitly mention ${\cal P}$ when it is clear from the context.} We often abbreviate $n=|A\cup B|$ and $m=|E|$. We now sketch four important tools developed for popular matchings in earlier papers. Each of these will be used in later parts of this paper.
\noindent{\bf 1) Characterization of popular matchings.} Let $M$ be any matching in $G$. For any edge $(a,b) \notin M$, define $\mathsf{vote}_a(b,M)$ as follows (here $M(a)$ is $a$'s partner in the matching
$M$ and $M(a) = \mathsf{null}$ if $a$ is unmatched in $M$):
\begin{equation*}
\mathsf{vote}_a(b,M) = \begin{cases} + & \text{if\ $a$\ prefers\ $b$\ to\ $M(a)$};\\
- & \text{if\ $a$\ prefers\ $M(a)$\ to\ $b$.}
\end{cases}
\end{equation*}
We can similarly define $\mathsf{vote}_b(a,M)$. Label every edge $(a,b) \notin M$ by $(\mathsf{vote}_a(b,M),\mathsf{vote}_b(a,M))$. Thus every edge outside
$M$ has a label in $\{(\pm, \pm)\}$. Note that an edge $e$ is labeled $(+,+)$ if and only if $e$ is a blocking edge to $M$.
Let $G_M$ be the subgraph of $G$ obtained by deleting edges labeled $(-,-)$ from $G$. The following theorem characterizes popular matchings. This characterization holds in non-bipartite graphs as well.
\begin{theorem}[\cite{HK13b}]
\label{thm:char-popular}
Matching $M$ is popular in instance $G\new{, \mathcal{P}}$ if and only if $G_M$ does not contain any of the following with respect to~$M$:
\begin{enumerate}
\item[(i)] an alternating cycle with a $(+,+)$ edge;
\item[(ii)] an alternating path with two distinct $(+,+)$ edges;
\item[(iii)] an alternating path with a $(+,+)$ edge and an unmatched vertex as an endpoint.
\end{enumerate}
\end{theorem}
The following theorem characterizes dominant matchings.
\begin{theorem}[\cite{CK18}]
\label{thm:dominant}
A popular matching $M$ is dominant iff there is no $M$-augmenting path in $G_M$.
\end{theorem}
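Theorems~\ref{thm:char-popular} and~\ref{thm:dominant} suggest a direct computational check. The sketch below (our own illustrative Python, with the running example hard-coded) labels every non-matching edge and then tests dominance of an already-popular matching $M$ by searching for an $M$-augmenting path in $G_M$ via an alternating search from the unmatched $A$-vertices:

```python
# Preference lists of the running example (best neighbor first).
pref = {
    'a1': ['b1', 'b2', 'b3'], 'a2': ['b1', 'b2'], 'a3': ['b1'],
    'b1': ['a1', 'a2', 'a3'], 'b2': ['a1', 'a2'], 'b3': ['a1'],
}
edges = [(a, b) for a in ('a1', 'a2', 'a3') for b in pref[a]]

def rank(u, v):
    return pref[u].index(v) if v is not None else len(pref[u])

def label(M, a, b):
    # Label of a non-matching edge: '+' iff the endpoint prefers the
    # other endpoint to its own partner in M (None = unmatched).
    va = '+' if rank(a, b) < rank(a, M.get(a)) else '-'
    vb = '+' if rank(b, a) < rank(b, M.get(b)) else '-'
    return (va, vb)

def is_dominant(M):
    # Assumes M is popular.  Build G_M by dropping (-,-) edges; M is
    # dominant iff G_M has no M-augmenting path (Theorem above).
    g_m = [e for e in edges
           if M.get(e[0]) != e[1] and label(M, *e) != ('-', '-')]
    frontier = [a for a in ('a1', 'a2', 'a3') if a not in M]
    seen = set(frontier)
    while frontier:
        a = frontier.pop()
        for x, b in g_m:
            if x != a:
                continue
            if b not in M:
                return False      # unmatched b reached: augmenting path
            if M[b] not in seen:  # follow the matching edge back to A
                seen.add(M[b])
                frontier.append(M[b])
    return True

M1 = {'a1': 'b1', 'b1': 'a1', 'a2': 'b2', 'b2': 'a2'}
M2 = {'a1': 'b2', 'b2': 'a1', 'a2': 'b1', 'b1': 'a2'}
print(is_dominant(M1), is_dominant(M2))  # False True
```

The check agrees with the example discussed later: $M_1$ admits the augmenting path $\langle b_3, a_1, b_1, a_3\rangle$ in $G_{M_1}$ and so is not dominant, while $M_2$ is.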
\noindent{\bf 2) The graph $G'$.}
Dominant matchings in $G$ are equivalent to stable matchings in a related graph $G'$: this equivalence was first used in \cite{CK18},
later simplified in \cite{FKPZ18}.
The graph $G'$ is the bidirected graph corresponding to $G$. The vertex set of $G'$ is the same as the vertex set $A \cup B$ of $G$
and every edge $(a,b)$ in $G$ is replaced by 2 edges in $G'$: one directed from $a$ to $b$ denoted by $(a^+,b^-)$ and the other directed from $b$ to $a$ denoted by
$(a^-,b^+)$. Let $u \in A\cup B$ and suppose $v_1 \succ v_2 \succ \cdots \succ v_k$ is $u$'s preference order in $G$. Then $u$'s preference order in $G'$ is:
\[ v^-_1 \succ v^-_2 \succ \cdots \succ v^-_k \succ v^+_1 \succ v^+_2 \succ \cdots \succ v^+_k.\]
That is, every vertex prefers outgoing edges to incoming edges; among outgoing edges it maintains its original preference order, and among incoming edges it again maintains its original preference order. Observe that vertex preferences in $G'$ are expressed on incident edges rather than on neighbors. However it is easy to see that stable matchings in $G'$ are equivalent to stable matchings in the following conventional graph that has 3 vertices $u^+, u^-, d(u)$ corresponding to each vertex $u$ in $G'$. The preference order of $u^+$ is $v^-_1 \succ v^-_2 \succ \cdots \succ v^-_k \succ d(u)$, the preference order of $u^-$ is $d(u) \succ v^+_1 \succ v^+_2 \succ \cdots \succ v^+_k$, and the preference order of $d(u)$ is $u^+ \succ u^-$.
It was shown in \cite{CK18,FKPZ18} that any stable matching $M'$ in $G'$ projects to a dominant matching $M$ in $G$ by setting $(a,b) \in M$ if and only if either $(a^+,b^-)$ or $(a^-,b^+)$ is in $M'$,
and conversely, any dominant matching in $G$ can be realized as a stable matching in $G'$.
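As a minimal sketch (our own encoding: the copies $v^-$ and $v^+$ of a neighbor $v$ are written as strings), the preference lists of $G'$ can be derived mechanically from those of $G$:

```python
def transformed_prefs(pref):
    """Preference lists in the bidirected graph G': each vertex ranks the
    outgoing copies v^- of its neighbors (in original order) above all
    the incoming copies v^+ (again in original order)."""
    return {u: [v + '-' for v in lst] + [v + '+' for v in lst]
            for u, lst in pref.items()}

pref = {'a1': ['b1', 'b2', 'b3']}
print(transformed_prefs(pref)['a1'])
# ['b1-', 'b2-', 'b3-', 'b1+', 'b2+', 'b3+']
```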
\noindent{\bf 3) The set of matched vertices.} We will be using the {\em Rural Hospitals Theorem} for stable matchings (see e.g.~\cite{GI89}) that states that all stable matchings in $G$ match the same subset of vertices.
Note that every dominant matching in $G$ matches the same subset of vertices (via the Rural Hospitals Theorem for stable matchings in $G'$). More generally, the following fact is true, where $V(N)$ is the set of vertices matched in a matching $N$.
\begin{lemma}[\cite{Hirakawa-MatchUp15,HK13b}]
\label{lem:all-same}
Let $S$ be a stable, $M$ be a popular, and $D$ be a dominant matching in a marriage instance $G = (A \cup B, E)\new{, \mathcal{P}}$. Then $V(S)\subseteq V(M)\subseteq V(D)$.
\end{lemma}
Thus, in particular, in instances where stable matchings have the same size as dominant matchings, all popular matchings match the same subset of vertices.
\noindent{\bf 4) Witness of a popular matching.}
Let $\tilde{G}$ be the graph $G$ augmented with self-loops. That is, we assume each vertex is its own last choice neighbor. So we can henceforth
regard any matching $M$ in $G$ as a perfect matching $\tilde{M}$ in $\tilde{G}$ by including self-loops at all vertices left unmatched in $M$.
The following edge weight function $\mathsf{wt}_M$ in $\tilde{G}$ will be useful to us. For any edge $(a,b)$ in $G$, define:
\begin{equation*}
\mathsf{wt}_M(a,b) = \begin{cases} 2 & \text{if\ $(a,b)$\ is\ labeled\ $(+,+)$}\\
-2 & \text{if\ $(a,b)$\ is\ labeled\ $(-,-)$}\\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
We need to define $\mathsf{wt}_M$ for self-loops as well: let $\mathsf{wt}_M(u,u) = 0$ if $u$ is matched to itself in $\tilde{M}$, else $\mathsf{wt}_M(u,u) = -1$.
For any matching $N$ in $G$, it is easy to see that $\mathsf{wt}_M(\tilde{N}) = \Delta(N,M)$
and so $M$ is popular in $G$ if and only if every perfect matching in $\tilde{G}$ has weight at most 0. Theorem~\ref{thm:witness} follows from
LP-duality and the fact that $G$ is a bipartite graph.
\begin{theorem}[\cite{KMN11,Kav16}]
\label{thm:witness}
A matching $M$ in $G = (A \cup B, E), \mathcal{P}$ is popular if and only if there exists a vector $\vec{\alpha} \in \{0, \pm 1\}^n$ such that
$\sum_{u \in A \cup B} \alpha_u = 0$,
\[ \alpha_a + \alpha_b \ \ \ge \ \ \mathsf{wt}_M(a,b)\ \ \ \forall\, (a,b)\in E\ \ \ \ \ \ \text{and}\ \ \ \ \ \ \alpha_u \ \ \ge\ \ \mathsf{wt}_M(u,u)\ \ \ \forall\, u \in A \cup B.\]
\end{theorem}
For any popular matching $M$, a vector $\vec{\alpha}$ as given in Theorem~\ref{thm:witness} will be called $M$'s {\em witness} or {\em dual certificate}. A popular matching may have several witnesses. Any stable matching has $\vec{0}$ as a witness.
Call an edge $e$ {\em popular} if there is a popular matching in $G$ that contains $e$. For any popular edge $(a,b)$, it was shown in~\cite{FK20} (using complementary slackness) that $\alpha_a + \alpha_b = \mathsf{wt}_M(a,b)$. This will be a useful fact for us.
Another useful fact from \cite{FK20} (again by complementary slackness) is that if vertex $u$ is left unmatched in some popular matching then $\alpha_u = \mathsf{wt}_M(u,u)$ for every popular matching $M$; thus
$\alpha_u = 0$ if $u$ is left unmatched in $M$, else $\alpha_u = -1$.
We now illustrate each of the above four tools on our example instance $G\new{, \mathcal{P}}$ from Fig.~\ref{fig:example}.
\begin{enumerate}[1)]
\item Recall the matching $M_1 = \{(a_1,b_1),(a_2,b_2)\}$. We will test if $M_1$ is popular/dominant using Theorems~\ref{thm:char-popular} and~\ref{thm:dominant}. First we label each edge $(a,b)$ not in $M_1$ by $(\mathsf{vote}_a(b,M_1),\mathsf{vote}_b(a,M_1))$, see Fig.~\ref{fig:edge_labels}. Since no edge is labeled $(-,-)$, the subgraph $G_{M_1} = G$. Observe that $G_{M_1}$ has no forbidden alternating path/cycle as given in Theorem~\ref{thm:char-popular}. Thus $M_1$ is popular. However there is an $M_1$-augmenting path $\langle b_3, a_1, b_1, a_3\rangle$ in $G_{M_1}$, so $M_1$ is not dominant (by Theorem~\ref{thm:dominant}).
\begin{figure}
\caption{Edge labels with respect to $M_1 =\left\{(a_1,b_1),(a_2,b_2) \right\}$.}
\label{fig:edge_labels}
\end{figure}
\item The bidirected graph $G'$ corresponding to $G$ is given in Fig.~\ref{fig:G'}. Observe that in \new{the transformed set of} preference lists, each vertex ranks its outgoing edges in their original order, followed by the incoming copies of the same edges in their original order. The only stable matching in $G'$ is $\{(a_1^+,b_2^-), (a_2^-,b_1^+)\}$ (marked by dashed teal edges in Fig.~\ref{fig:G'}) and it projects to
$M_2 = \{(a_1,b_2),(a_2,b_1)\}$---this is the only dominant matching in~$G$.
\begin{figure}
\caption{The bidirected graph $G'$ corresponding to $G$, and the \new{transformed} preference lists.}
\label{fig:G'}
\end{figure}
\item The set of matched vertices is the same for all popular matchings in the above instance: $V(M_1) = V(M_2) = \{a_1,a_2,b_1,b_2\}$.
\item We will construct a witness $\vec{\alpha}$ for $M_2$, see Fig.~\ref{fig:witness}. So $\vec{\alpha} \in \{0, \pm 1\}^6$. Since $M_2$ leaves $a_3$ and $b_3$ unmatched, it has to be the case that $\alpha_{a_3} = \alpha_{b_3} = 0$. We also know that $\alpha_{a_1} = \alpha_{b_1} = 1$ because $\alpha_{a_1} + \alpha_{b_1} \ge \mathsf{wt}_{M_2}(a_1,b_1) = 2$. Since $\sum_{u \in A \cup B} \alpha_u = 0$, the only possibility for the remaining two vertices is $\alpha_{a_2} = \alpha_{b_2} = -1$. We have $\alpha_u \ge \mathsf{wt}_{M_2}(u,u)$ for all vertices $u$ since
$\mathsf{wt}_{M_2}(a_3,a_3) = \mathsf{wt}_{M_2}(b_3,b_3) = 0$ and $\mathsf{wt}_{M_2}(v,v) = -1$ for other vertices $v$.
Observe that $\alpha_{a_1} + \alpha_{b_3} = 1 > 0 = \mathsf{wt}_{M_2}(a_1,b_3)$ and
$\alpha_{a_3} + \alpha_{b_1} = 1 > 0 = \mathsf{wt}_{M_2}(a_3,b_1)$. For the remaining 4 edges, the corresponding constraint is tight. That is,
$\alpha_a + \alpha_b = \mathsf{wt}_{M_2}(a,b)$ for $a \in \{a_1,a_2\}$ and $b \in \{b_1,b_2\}$.
\begin{figure}
\caption{A witness $\vec{\alpha}$ for the dominant matching $M_2$.}
\label{fig:witness}
\end{figure}
\end{enumerate}
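The witness check carried out by hand in item 4 is just a feasibility test for the system in Theorem~\ref{thm:witness}. A sketch in Python (our own code, with the running example hard-coded):

```python
# Preference lists of the running example (best neighbor first).
pref = {
    'a1': ['b1', 'b2', 'b3'], 'a2': ['b1', 'b2'], 'a3': ['b1'],
    'b1': ['a1', 'a2', 'a3'], 'b2': ['a1', 'a2'], 'b3': ['a1'],
}
edges = [(a, b) for a in ('a1', 'a2', 'a3') for b in pref[a]]

def rank(u, v):
    return pref[u].index(v) if v is not None else len(pref[u])

def wt(M, a, b):
    # wt_M on an edge of G: +2 for a (+,+) edge, -2 for (-,-), else 0.
    if M.get(a) == b:
        return 0
    va = rank(a, b) < rank(a, M.get(a))
    vb = rank(b, a) < rank(b, M.get(b))
    return 2 if va and vb else -2 if not (va or vb) else 0

def is_witness(M, alpha):
    # Feasibility test: alpha sums to zero, covers every edge, and
    # alpha_u >= wt_M(u,u) for every self-loop (0 if u unmatched, -1 else).
    if sum(alpha.values()) != 0:
        return False
    if any(alpha[a] + alpha[b] < wt(M, a, b) for a, b in edges):
        return False
    return all(alpha[u] >= (0 if u not in M else -1) for u in pref)

M2 = {'a1': 'b2', 'b2': 'a1', 'a2': 'b1', 'b1': 'a2'}
alpha = {'a1': 1, 'b1': 1, 'a2': -1, 'b2': -1, 'a3': 0, 'b3': 0}
print(is_witness(M2, alpha))  # True: this is the witness of item 4
```

Note that the all-zero vector fails for $M_2$, since the blocking edge $(a_1,b_1)$ has $\mathsf{wt}_{M_2}(a_1,b_1) = 2$; this matches the fact that only stable matchings have $\vec{0}$ as a witness.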
\section{Finding an unstable popular matching}
\label{sec:dom-vs-stab}
We are given $G = (A \cup B,E)$ with strict preference lists and the problem is to decide if every popular matching in $G$ is also stable, i.e., if $\{$popular matchings$\} = \{$stable matchings$\}$ or not in $G$. In this section we show an efficient algorithm to answer this question.
\begin{pr}
\textsf{Input: } A bipartite graph $G = (A \cup B,E)$ with strict preference lists.\\
\textsf{Decide: }
If there is an unstable popular matching in $G$.
\end{pr}
Let $G$ admit an unstable popular matching~$M$ and let $\vec{\alpha}\in\{0, \pm 1\}^n$ be $M$'s witness. Let $A_0$ be the set of vertices $a\in A$ with $\alpha_a = 0$ and let $B_0$ be the set of vertices $b\in B$ with $\alpha_b = 0$.
\begin{figure}
\caption{$M_0$ is the matching $M$ restricted to vertices in $A_0\cup B_0$ and $M_1$ is the matching $M$ restricted to remaining vertices.}
\label{fig:first}
\end{figure}
Let $M_0$ be the set of edges $(a,b) \in M$ such that $\alpha_a = \alpha_b = 0$. Let $M_1$ be the set of edges $(a,b) \in M$ such that $\alpha_a,\alpha_b \in \{\pm 1\}$. Note that $M = M_0 \,\dot{\cup}\, M_1$, since the parities of $\alpha_a$ and $\alpha_b$ have to be the same for any popular edge due to the tightness of the constraint
$\alpha_a + \alpha_b = \mathsf{wt}_M(a,b) = 0$.
The construction can be followed in Fig.~\ref{fig:first}. $A \setminus A_0$ has been further split into $A_1 \cup A_{-1}$: $A_1$ is the set of those vertices $a$ with $\alpha_a = 1$ and $A_{-1}$ is the set of those vertices $a$ with $\alpha_a = -1$; similarly, $B \setminus B_0$ has been further split into $B_1 \cup B_{-1}$: $B_1$ is the set of those vertices $b$ with $\alpha_b = 1$ and $B_{-1}$ is the set of those vertices $b$ with $\alpha_b = -1$. We have $M_1 \subseteq (A_1\times B_{-1})\cup(A_{-1}\times B_1)$ since for any edge $(a,b)$ in $M$, we have $\alpha_a + \alpha_b = \mathsf{wt}_M(a,b) = 0$.
Now we run a transformation $M_0 \leadsto D$ as given in~\cite{CK18}: let $G_0$ be the graph $G$ restricted to $A_0 \cup B_0$. Run the Gale-Shapley algorithm in the graph $G'_0$ with the starting matching $M'_0 = \{(u^+,v^-): (u,v) \in M_0 \ \text{and}\ u\in A_0, v\in B_0\}$, where $G'_0$ is the graph obtained from $G_0$ as described in Section~\ref{prelims}.
That is, instead of starting with the empty matching, we start with the matching $M'_0$ in $G'_0$;
unmatched men in $A_0$ propose in decreasing order of preference and whenever a woman receives a proposal from a neighbor that she prefers to her current partner (her preferences as given in $G'_0$), she rejects her current partner and accepts this proposal. This results in a stable matching in $G'_0$, equivalently, a dominant matching $D$ in $G_0$, see Section~\ref{prelims}.
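For concreteness, here is a sketch of men-proposing deferred acceptance with an optional starting matching (our own illustrative code; the seeding is the only deviation from the textbook algorithm, and in the setting above the output is a stable matching of $G'_0$ because the seed $M'_0$ comes from the stable matching $M_0$):

```python
from collections import deque

def gale_shapley(men_pref, women_pref, seed=()):
    """Men-proposing deferred acceptance, optionally seeded with a
    starting matching (iterable of (man, woman) pairs).  Free men propose
    in decreasing order of preference; a woman rejects her current
    partner whenever she receives a proposal she prefers."""
    match = {w: m for m, w in seed}          # woman -> man
    nxt = {m: 0 for m in men_pref}           # next proposal index per man
    free = deque(m for m in men_pref if m not in match.values())
    while free:
        m = free.popleft()
        if nxt[m] >= len(men_pref[m]):
            continue                         # m has exhausted his list
        w = men_pref[m][nxt[m]]
        nxt[m] += 1
        cur = match.get(w)
        if cur is None:
            match[w] = m
        elif women_pref[w].index(m) < women_pref[w].index(cur):
            match[w] = m
            free.append(cur)                 # cur is dethroned
        else:
            free.append(m)                   # w rejects m
    return match

men = {'a1': ['b1', 'b2'], 'a2': ['b1', 'b2']}
women = {'b1': ['a1', 'a2'], 'b2': ['a1', 'a2']}
print(gale_shapley(men, women))  # {'b1': 'a1', 'b2': 'a2'}
```

Starting from an arbitrary seed does not in general produce a stable matching; the correctness here relies on $M'_0$ itself admitting no blocking pair in $G'_0$.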
It was moreover shown in~\cite{CK18} that $M^* = M_1 \cup D$ is a dominant matching in $G$.
We include a new and much simpler proof of this fact below.
\begin{new-claim}
\label{clm:dominant-matching}
The matching $M^* = M_1 \cup D$ is dominant in $G$.
\end{new-claim}
\begin{proof}
The dominant matching $D$ was obtained as the linear image of a stable matching (call it $D'$) in $G'_0$.
Let $A'_1$ be the set of $a \in A_0$ such that $(a^+,b^-) \in D'$ for some $b \in B_0$ and $A'_{-1} = A_0 \setminus A'_1$.
Similarly, let $B'_1$ be the set of $b \in B_0$ such that $(a^-,b^+) \in D'$ for some $a \in A_0$ and $B'_{-1} = B_0 \setminus B'_1$. See Fig.~\ref{fig:Appendix1}.
\begin{figure}
\caption{We transformed the stable matching $M_0$ on $A_0 \cup B_0$ to the dominant matching $D$:
this partitions $A_0$ into $A'_1 \cup A'_{-1}$ and $B_0$ into $B'_1 \cup B'_{-1}$.}
\label{fig:Appendix1}
\end{figure}
Observe that every vertex in $A'_1 \cup B'_1$ is matched in $D$, however not every vertex in $A'_{-1} \cup B'_{-1}$ is necessarily
matched in $D$. That is, $A'_{-1} \cup B'_{-1}$ may contain unmatched vertices.
The popular matching $M$ has a witness $\vec{\alpha} \in \{0, \pm 1\}^n$: recall that this was the witness used to partition $A \cup B$ into $A_0 \cup B_0$ and $(A \setminus A_0) \cup (B \setminus B_0)$. We have $\alpha_u = 0$ for all $u \in A_0 \cup B_0$ and
$\alpha_u \in \{\pm 1\}$ for all $u \in (A \setminus A_0) \cup (B \setminus B_0)$.
In order to prove the popularity of $M^* = M_1 \cup D$, we will exhibit a witness $\vec{\beta}$, defined as follows.
For the vertices outside $A_0\cup B_0$, set $\beta_u = \alpha_u$ since nothing has changed for these vertices in
the transformation $M \leadsto M^*$. For the vertices in $A_0\cup B_0$, set $\beta_u = 1$ for $u \in A'_1 \cup B'_1$ and
$\beta_u = -1$ for every matched vertex $u \in A'_{-1} \cup B'_{-1}$. For every unmatched vertex $u$, set $\beta_u = 0$.
Thus $\beta_u \ge \mathsf{wt}_{M^*}(u,u)$ for all $u$ and $\sum_{u \in A\cup B} \beta_u = 0$: this is because $D \subseteq (A'_1\times B'_{-1})\cup(A'_{-1}\times B'_1)$, so for every edge $(a,b) \in D$, we have $\beta_a + \beta_b = 0$;
recall that $\alpha_a + \alpha_b = 0$ for all $(a,b) \in M_1$.
What is left to show is that $\beta_a + \beta_b \ge \mathsf{wt}_{M^*}(a,b)$ for every edge $(a,b)$.
The correctness of the Gale-Shapley algorithm in $G'_0$ to compute popular matchings in $G_0$ immediately implies that every edge with both endpoints in $A_0 \cup B_0$ is covered by the sum of $\beta$-values of its endpoints (see \cite{FKPZ18}). We will now show that every edge with one endpoint in $A_0\cup B_0$ and the other endpoint in $(A \setminus A_0)\cup (B \setminus B_0)$ is also covered.
\begin{itemize}
\item Every edge in $A'_1\times B_1$ is covered since $\beta_a + \beta_b = 2 \ge \mathsf{wt}_{M^*}(a,b)$.
Similarly every edge in $A_1\times B'_1$ is also covered.
\item Consider any edge $(a,b) \in A'_1\times B_{-1}$. We have $\alpha_a = 0$ and $\alpha_b = -1$ and so $\mathsf{wt}_M(a,b) \le -1$,
i.e., $\mathsf{wt}_M(a,b) = -2$ since this is a value in $\{0, \pm 2\}$. This means $b$ prefers $M(b) = M^*(b)$ to $a$. Thus $\mathsf{wt}_{M^*}(a,b) \le 0$.
Since $\beta_a = 1$ and $\beta_b = -1$, we have $\beta_a + \beta_b \ge \mathsf{wt}_{M^*}(a,b)$.
Similarly every edge in $A_{-1} \times B'_1$ is also covered.
\item Consider any edge $(a,b)$ in $A_{-1}\times B'_{-1}$. We have $\mathsf{wt}_M(a,b) \le -1$ which means that
$\mathsf{wt}_M(a,b) = -2$. That is, both $a$ and $b$ prefer their partners in $M$ to each other. We will now show that
$\mathsf{wt}_{M^*}(a,b) = -2$. Since $M(a) = M^*(a)$, nothing has changed for $a$ and so $a$ prefers $M^*(a)$ to $b$.
We claim that $M^*(b) \succeq_b M(b)$, i.e., $b$ is no worse in $M^*$ than in $M$.
This is because we ran the Gale-Shapley algorithm in $G'_0$ with $M'_0$ as the starting matching.
So if $b \in B'_{-1}$ changed its partner from $u^+$ in $M'_0$ to $v^+$ in $D'$ then $b$ prefers $v$ to $u$.
Thus every $b \in B'_{-1}$ has at least as good a partner in $D$ as in $M_0$. Hence
$\mathsf{wt}_{M^*}(a,b) = -2 = \beta_a + \beta_b$.
By the same reasoning, we can argue that every edge in $A_1\times B'_{-1}$ is also covered.
\item Consider any edge $(a,b)$ in $A'_{-1}\times B_{-1}$. We have $\alpha_a = 0$
and $\alpha_b = -1$ and so $\mathsf{wt}_M(a,b) \le -1$,
i.e., $\mathsf{wt}_M(a,b) = -2$. So $b$ prefers $M(b) = M^*(b)$ to $a$. We claim that $M^*(a) \succeq_a M(a)$. This will imply that
$\mathsf{wt}_{M^*}(a,b) = -2$.
Recall that $D'$ is a stable matching in $G'_0$: here every vertex prefers outgoing edges to incoming edges,
thus every $a \in A'_{-1}$ gets at least as good a partner as $S^*(a)$ in $D$, where $S^*$ is the men-optimal stable matching in $G_0$;
in turn, $S^*(a) \succeq_a M_0(a)$ for all $a \in A_0$.
Hence $M^*(a) \succeq_a M(a)$ for every $a \in A'_{-1}$. Thus $\mathsf{wt}_{M^*}(a,b) = -2 = \beta_a + \beta_b$. We can similarly show that every edge in $A'_{-1}\times B_1$ is also covered.
\end{itemize}
Thus $M^*$ is a popular matching in $G$. We will now show that $M^*$ is a {\em dominant} matching in~$G$. Observe that
$\beta_u \in \{\pm 1\}$ for every matched vertex $u$: we will use this fact to show that $M^*$ is dominant.
Recall that $\beta_u = 0$ for all unmatched vertices $u$.
Let $\rho = \langle a_0,b_1,a_1,\ldots,b_k,a_k,b_{k+1}\rangle$ be any $M^*$-augmenting path in $G$.
We have $\beta_{a_0} = \beta_{b_{k+1}} = 0$, hence $\beta_{b_1}= \beta_{a_k} = 1$, and so $\beta_{a_1} = \beta_{b_k} = -1$.
Either (i)~$\beta_{b_2} = -1$ or (ii)~$\beta_{b_2} = 1$ which implies that $\beta_{a_2} = -1$. It is now easy to see that in the path $\rho$,
for some $i \in \{1,\ldots,k-1\}$ the edge $(a_i,b_{i+1})$ has to be labeled $(-,-)$. That is, $\rho$ is {\em not} an
augmenting path in $G_{M^*}$. Thus there is no $M^*$-augmenting path in $G_{M^*}$, hence $M^*$ is a dominant matching in $G$
(by Theorem~\ref{thm:dominant}). \qed
\end{proof}
Since $M$ is an unstable matching, there is an edge $(a,b)$ that blocks~$M$. Since $\mathsf{wt}_M(a,b) = 2$, the endpoints of a blocking edge $(a,b)$ have to satisfy $\alpha_a = \alpha_b = 1$; so $a \in A_1$ and $b \in B_1$. The edge $(a,b)$ blocks $M^*$ as well since the matching $M_1$ was unchanged by this transformation of $M_0$ to $D$, so $M^*(a) = M_1(a)$ and $M^*(b) = M_1(b)$, thus $a$ and $b$ prefer each other to their respective partners in~$M^*$. So $M^*$ is an unstable dominant matching and Lemma~\ref{non-stab-domn} follows.
\begin{lemma}
\label{non-stab-domn}
If $G,{\cal P}$ has an unstable popular matching then $G,{\cal P}$ admits an unstable dominant matching.
\end{lemma}
Hence in order to answer the question of whether every popular matching in $G$ is stable or not, we need to decide if there exists a dominant matching $M$ in $G$ with a blocking edge.
We present a simple combinatorial algorithm for this problem.
Our algorithm is based on the equivalence between dominant matchings in $G$ and stable matchings in $G'$.
Our task is to determine if there exists a stable matching in $G'$ that includes a pair of edges $(a^+,v^-)$ and $(u^-,b^+)$ such that $a$ and $b$ prefer each other to $v$ and $u$, respectively, in~$G$. It is easy to decide in $O(m^3)$ time whether such a stable matching exists or not in~$G'$.
\begin{itemize}
\item For every pair of edges $e_1 = (a,v)$ and $e_2 = (u,b)$ in $G$ such that
$a$ and $b$ prefer each other to $v$ and $u$, respectively: determine if there is a stable matching in $G'$ that contains the pair of edges
$(a^+,v^-)$ and~$(u^-,b^+)$.
\end{itemize}
In the graph $G'$, we modify the Gale-Shapley algorithm so that $b$ rejects proposals
from all neighbors ranked worse than $u^-$ and $v$ rejects all proposals from neighbors ranked worse than~$a^+$.
If the resulting algorithm returns a stable matching that contains the edges $(a^+,v^-)$ and~$(u^-,b^+)$, then we have the desired matching; else $G'$
has no stable matching that contains this particular pair of edges.
In order to determine if there exists an unstable dominant matching, we may need to go through all
pairs of edges $(e_1,e_2) \in E\times E$. Since we can determine in linear time if there exists a stable matching in $G'$ with any particular pair of edges~\cite{GI89},
the running time of this algorithm is $O(m^3)$.
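Both modifications above, and those used for the faster algorithm, amount to auto-rejecting certain proposals in a men-proposing Gale-Shapley run, possibly seeded with a partial matching. The following sketch is our own illustration (the function name and the \texttt{forbidden} and \texttt{start} parameters are our devices, not notation from the text):

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs, forbidden=frozenset(), start=None):
    """Men-proposing Gale-Shapley. Proposals in `forbidden` are rejected
    outright, and the run may begin from a partial matching `start`
    (mapping woman -> man) instead of the empty matching."""
    match = dict(start or {})                      # woman -> man
    engaged = set(match.values())
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    nxt = {m: 0 for m in men_prefs}                # next index m proposes to
    free = deque(m for m in men_prefs if m not in engaged)
    while free:
        m = free.popleft()
        if nxt[m] >= len(men_prefs[m]):
            continue                               # m exhausted his list
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if (m, w) in forbidden:
            free.appendleft(m)                     # auto-rejected; try next
            continue
        cur = match.get(w)
        if cur is None or rank[w][m] < rank[w][cur]:
            match[w] = m                           # w accepts m
            if cur is not None:
                free.append(cur)                   # cur becomes free again
        else:
            free.appendleft(m)                     # w rejects m
    return match
```

For example, the rule "$v$ rejects all proposals from neighbors ranked worse than $a^+$" corresponds to placing every such pair $(x, v)$ into \texttt{forbidden}.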
\paragraph{A faster algorithm.} It is easy to improve the running time to $O(m^2)$. For each $(a,b) \in E$ we check the following.
\begin{itemize}
\item[($\circ$)] Does there exist a stable matching in $G'$ such that
\begin{inparaenum}[(1)]
\item $a^+$ is matched to a neighbor that is ranked worse than $b^-$ in $a$'s list, and
\item $b^+$ is matched to a neighbor that is ranked worse than $a^-$ in $b$'s list?
\end{inparaenum}
\end{itemize}
We modify the Gale-Shapley algorithm in $G'$ so that (1)~$b$ rejects all offers from superscript~$+$ neighbors, i.e., $b$ accepts proposals only from superscript~$-$ neighbors, and (2)~every neighbor of $a$ that is ranked better than $b^-$ rejects proposals from~$a^+$.
Suppose ($\circ$) holds.
Then this modified Gale-Shapley algorithm returns, among all such stable matchings, the men-optimal and women-pessimal one~\cite{GI89}.
Thus among all stable matchings that match $a^+$ to a neighbor ranked worse than $b^-$ and that include some edge $(\ast,b^+)$,
the matching returned by the above algorithm matches $b$ to its least preferred neighbor and $a$ to its most preferred neighbor.
Hence if the modified Gale-Shapley algorithm returns a matching that is \begin{inparaenum}[(i)] \item unstable or
\item includes an edge $(a^-,\ast)$ or
\item matches $b^+$ to a neighbor better than $a^-$, then there is no dominant matching $M$ in $G$ such that the pair $(a,b)$ blocks~$M$.
\end{inparaenum}
Else we have the desired stable matching in $G'$, call this matching~$M'$.
The projection of the matching $M'$ onto the edge set of $G$ will be a dominant matching in $G$ with $(a,b)$ as a blocking edge.
Since we may need to go through all edges in $E$ and the time taken for any edge $(a,b)$ is $O(m)$, the entire running time of this
algorithm is~$O(m^2)$. We have thus shown the following theorem.
\begin{theorem}
\label{thm:unstable}
Given $G = (A \cup B,E)$ on $m$ edges and strict preference lists of its vertices, we can decide in $O(m^2)$ time whether every popular matching in $G$ is stable or not; if not, we can return an unstable popular matching.
\end{theorem}
\section{Finding a non-dominant popular matching}
\label{sec:stable}
Given an instance $G = (A \cup B, E)\new{, \mathcal{P}}$, the problem we consider here is to decide if every popular matching is also
dominant, i.e., to decide if $\{$popular matchings$\} = \{$dominant matchings$\}$ or not in~$G$.
\begin{pr}
\textsf{Input: } A bipartite graph $G = (A \cup B,E)$ with strict preference lists.\\
\textsf{Decide: } If there is a non-dominant popular matching in $G$.
\end{pr}
In this section, we show the following.
\begin{theorem}
\label{thm:non-dominant}
Given $G = (A \cup B,E)$ with strict preference lists, it is NP-complete to decide if $G$ admits a popular matching that is not dominant.
\end{theorem}
We start with the following lemma, which is the counterpart of Lemma~\ref{non-stab-domn}.
\begin{lemma}
\label{non-domn-stable}
If $G,{\cal P}$ has a non-dominant popular matching $M$ then $G,{\cal P}$ admits a non-dominant stable matching $N$. If $M$ is given, then $N$ can be found efficiently.
\end{lemma}
\begin{proof}
Let $M$ be a non-dominant popular matching in $G$ and $\vec{\alpha}\in\{0, \pm 1\}^n$ its witness (see Theorem~\ref{thm:witness}).
We will use the decomposition illustrated in Fig.~\ref{fig:first} to show the existence of a non-dominant stable matching in $G$. As per the decomposition in Fig.~\ref{fig:first}, $M = M_0 \mathbin{\mathaccent\cdot\cup} M_1$. Since $M$ is not dominant, there exists an $M$-augmenting path $\rho$ in $G_M$ (by Theorem~\ref{thm:dominant}). The endpoints of $\rho$ (call them $u$ and~$v$) are unmatched in $M$, hence $\alpha_u = \alpha_v = 0$ (see Section~\ref{prelims})
and so $u$ and $v$ are in $A_0 \cup B_0$.
In the graph $G_M$, the vertices in $B_{-1} \cup A_{-1}$ (these vertices have $\alpha$-value $-1$) are adjacent only to vertices in $A_1 \cup B_1$ as all other edges incident to vertices in $B_{-1} \cup A_{-1}$ are labeled $(-,-)$, and these are not present in~$G_M$. Suppose the $M$-alternating path $\rho$ leaves the vertex set $A_0 \cup B_0$, i.e., suppose it contains a non-matching edge between $A_0 \cup B_0$ and $A_1 \cup B_1$. Since the partners of vertices in $A_1 \cup B_1$ are in $B_{-1} \cup A_{-1}$, the path $\rho$ can never return to $A_0 \cup B_0$. However we know that the last vertex $v$ of $\rho$ is in $A_0\cup B_0$. Thus $\rho$ never leaves $A_0 \cup B_0$, i.e., $\rho$ is an $M_0$-augmenting path in $G^*_{M_0}$, where $G^*$ is the graph $G$ restricted to vertices in $A_0 \cup B_0$.
We now apply the transformation $M_1 \leadsto S$ given in \cite{CK18} to convert $M_1$ into a stable matching $S$, as follows. The matching $S$ is obtained by running the Gale-Shapley algorithm in the subgraph $G_1$, which is the graph $G$ restricted to $(A\setminus A_0)\cup(B\setminus B_0)$; however, rather than starting with the empty matching, we start with the matching $M_1 \cap (A_{-1}\times B_1)$. Unmatched men (initially, these are the vertices in $A_1$) propose in decreasing order of preference, and whenever a woman receives a proposal from a neighbor that she prefers to her current partner (with respect to her preferences in $G$), she rejects her current partner and accepts this proposal. This results in a stable matching $S$ in $G_1$.
It was shown in \cite{CK18} that $N = M_0 \cup S$ is a stable matching in $G$. We include a new and simple proof of this below (see Claim~\ref{clm:stable-matching}). Since $\rho$ is an $M_0$-augmenting path in $G^*_{M_0}$ and $M_0$ is a subset of $N$, it follows that $\rho$ is an $N$-augmenting path in $G_N$. Thus $N$ is a non-dominant stable matching in~$G$. \qed
\end{proof}
\begin{new-claim}
\label{clm:stable-matching}
The matching $N = M_0 \cup S$ is a stable matching in $G$.
\end{new-claim}
\begin{proof}
The matching $S$ is obtained by running the Gale-Shapley algorithm in the graph $G_1$ on vertex set $(A \setminus A_0)\cup(B \setminus B_0)$.
We did not compute the matching $S$ from scratch --- we started with the edges of the matching $M_1$ restricted to $A_{-1} \times B_1$. The resulting matching $S$ therefore has the following two useful properties:
\begin{itemize}
\item $S(b) \succeq_b M_1(b)$ for every $b \in B_1$. This is because to begin with, every $b \in B_1$ is matched to $M_1(b)$ and $b$ will
change her partner only if she receives a proposal from a neighbor better than $M_1(b)$.
\item $S(a) \succeq_a M_1(a)$ for every $a \in A_1$. This is because all vertices in $B_{-1}$ are unmatched in our starting matching and
every $b \in B_{-1}$ prefers her partner in $M_1$ to any neighbor in $A_{-1}$ (since every edge in $A_{-1}\times B_{-1}$ is labeled $(-,-)$ with respect to $M_1$).
Thus in the matching $S$, $a$ will get accepted either by $M_1(a)$ or a better neighbor.
\end{itemize}
It is now easy to show that $N$ is a stable matching. We already know that $M_0$ is a stable matching on $A_0 \cup B_0$ and $S$ is a stable
matching on $(A \setminus A_0) \cup (B \setminus B_0)$. It is left to show that no edge $(a,b)$ with one endpoint in $A_0 \cup B_0$ and
another endpoint in $(A \setminus A_0) \cup (B \setminus B_0)$ {\em blocks} $N$, i.e., we need to show that $\mathsf{wt}_N(a,b) \le 0$ for every such edge $(a,b)$.
Suppose $a \in A_0$ and $b \in B_{-1}$. Let $\vec{\alpha}$ be the witness of $M$ used to partition $A\cup B$ into $A_0\cup B_0$ and $(A \setminus A_0) \cup (B\setminus B_0)$. So $\alpha_a = 0$ and $\alpha_b = -1$, hence
$\mathsf{wt}_M(a,b) \le -1$, i.e., $\mathsf{wt}_M(a,b) = -2$. Thus $a$ prefers $M_0(a) = N(a)$ to $b$. Hence $\mathsf{wt}_N(a,b) \le 0$. We can similarly show that
$\mathsf{wt}_N(a,b) \le 0$ for $a \in A_{-1}$ and $b \in B_0$.
Suppose $a \in A_0$ and $b \in B_1$. Then $\alpha_a = 0$ and $\alpha_b = 1$ and so $\mathsf{wt}_M(a,b) \le 1$, i.e., $\mathsf{wt}_M(a,b) \le 0$.
So if $a$ prefers $M_0(a) = N(a)$ to $b$ then we can immediately conclude that $\mathsf{wt}_N(a,b) \le 0$. Else $b$ prefers $M_1(b)$ to $a$ and
we have noted above that $S(b) \succeq_b M_1(b)$ for every $b \in B_1$. Thus $b$ prefers $N(b) = S(b)$ to $a$ and so
$\mathsf{wt}_N(a,b) \le 0$. We can similarly argue that $\mathsf{wt}_N(a,b) \le 0$ for every $a \in A_1$ and $b \in B_0$.
This finishes the proof of this claim. \qed
\end{proof}
Our problem now is to decide if there exists a non-dominant stable matching $N$ in $G$, i.e., a stable matching $N$ with an $N$-augmenting path in $G_N$ (see Theorem~\ref{thm:dominant}). We show a reduction from 3SAT. Given a 3SAT formula $\phi$, we transform $\phi = C_1 \wedge \cdots \wedge C_m$ as follows: let $X_1,\ldots,X_n$ be the $n$ variables in $\phi$. For each $i$:
\begin{itemize}
\item replace all occurrences of $\neg X_i$ in $\phi$ with $X_{n+i}$ (a single new variable);
\item add the clauses $X_i \vee X_{n+i}$ and $\neg X_i \vee \neg X_{n+i}$ to capture $\neg X_i \iff X_{n+i}$.
\end{itemize}
Thus, the updated formula is $\phi = C_1 \wedge \cdots \wedge C_m \wedge C_{m+1} \wedge\cdots\wedge C_{m+n} \wedge D_{m+n+1} \wedge \cdots \wedge D_{m+2n}$, where $C_1,\ldots,C_m$ are the original $m$ clauses with negated literals substituted by new variables and for $1 \le i \le n$: $C_{m+i}$ is the clause $X_i \vee X_{n+i}$ and $D_{m+n+i}$ is the clause $\neg X_i \vee \neg X_{n+i}$. Corresponding to the above formula $\phi$, we will construct an instance $G\new{, \mathcal{P}}$ whose high-level picture is shown in Fig.~\ref{newfig1:example}.
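This preprocessing step is straightforward to implement. The sketch below uses our own encoding (not from the text): a literal is a nonzero integer, $+i$ for $X_i$ and $-i$ for $\neg X_i$, and the positive clauses $C_1,\ldots,C_{m+n}$ and the negated clauses $D_{m+n+1},\ldots,D_{m+2n}$ are returned separately:

```python
def eliminate_negations(clauses, n):
    # Replace each occurrence of the literal -i by the fresh variable n + i,
    # then add clauses enforcing X_{n+i} <-> not X_i.
    positive = [[lit if lit > 0 else n - lit for lit in c] for c in clauses]
    positive += [[i, n + i] for i in range(1, n + 1)]      # X_i or X_{n+i}
    negated = [[-i, -(n + i)] for i in range(1, n + 1)]    # not X_i or not X_{n+i}
    return positive, negated
```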
\begin{figure}
\caption{The high-level picture of the instance $G\new{, \mathcal{P}}$.}
\label{newfig1:example}
\end{figure}
$G$ is the series composition of an edge $(s,u_0)$, a gadget for each clause $i$ that starts with node $v_{i-1}$ and ends with node $u_i$, and finally, an edge $(v_{m+2n},t)$. The gadget corresponding to clause $i$ contains the parallel composition of disjoint gadgets $Z_{i,j}$ for each literal $j$ in clause $i$. Note that gadgets $Z_{i,j}$ associated with positive literals (so $i\leq m+n$) are different from those associated with
negated literals (here $i\geq m+n+1$).
For a variable $x$, we use $a_i,b_i,a'_i,b'_i$ to denote the 4 vertices in the gadget of $x$ in $C_i$ and $c_k,d_k,c'_k,d'_k$ to denote the 4 vertices in the gadget of $\neg x$ in $D_k$. The edge $(a_i,d_k)$ will be called a {\em consistency} edge. This connects the gadget of a positive literal to the (unique) corresponding negative literal (see Fig.~\ref{newfig2:example}).
Definitions of preference lists are given in Section~\ref{sec:preferences}. Here, we explain the main steps of the proof. Define $F:=\{(u_i,v_i):0 \le i \le m+2n\}$ to be the set of \emph{basic edges}.
\begin{figure}
\caption{Suppose $x$ occurs in clause $C_i$ (gadget on the left) and $\neg x$ occurs in clause $D_k$ (gadget on the right). The rank $\infty$ on the edge $(d_k,c_k)$ denotes that $c_k$ is $d_k$'s {\em last} choice.}
\label{newfig2:example}
\end{figure}
\begin{lemma}
\label{lem:unstable-edges}
Let $S$ be any stable matching in $G,{\cal P}$. Then
\begin{enumerate}
\item\label{enum:unstable-edges-1} $S$ leaves $s,t$ unmatched, $F\subseteq S$, and $S$ contains no consistency edge. \end{enumerate}
Also, for every variable $x \in \{X_1,\ldots,X_{2n}\}$, we have:
\begin{enumerate}
\setcounter{enumi}{1}
\item\label{enum:unstable-edges-2} From the gadget of $\neg x$, either (i)~$(c_k,d_k),(c'_k,d'_k) \in S$ or (ii)~$(c_k,d'_k),(c'_k,d_k) \in S$.
--- If (i) happens, we say that the gadget is in \emph{true state}, else that it is in \emph{false state}.
\item\label{enum:unstable-edges-3} From a gadget of $x$ in, say, the $i$-th clause, either the pair of edges (i)~$(a_i,b'_i),(a'_i,b_i)$ or
(ii)~$(a_i,b_i),(a'_i,b'_i)$ is in $S$.
--- If (i) happens, we say that the gadget is in \emph{true state}, else that it is in \emph{false state}.
\item\label{enum:unstable-edges-4} If the gadget of $\neg x$ is in true state then all the gadgets of $x$ are in false state.
\end{enumerate}
\noindent Conversely, any matching $S$ of $G$ that satisfies statements 1--4 above is stable.
\end{lemma}
Lemma~\ref{lem:unstable-edges} implies that each stable matching can be mapped to a true/false assignment as follows:
\begin{itemize}
\item For each $x$, if the gadget of $\neg x$ is in true state, then set $x= \mathsf{false}$, else set $x = \mathsf{true}$.
\end{itemize}
Conversely, for any $\mathsf{true}$/$\mathsf{false}$ assignment $A$ to the variables in $\phi$, we define an associated matching $M_A$ as follows. First, include all basic edges in $M_A$.
For any $x \in \{X_1,\ldots,X_{2n}\}$:
\begin{itemize}
\item If $x = \mathsf{true}$ then set all gadgets corresponding to $x$ in true state, and the gadget corresponding to $\neg x$ in false state. Thus the {\em dotted red} edges (see Fig.~\ref{newfig2:example}) from all these gadgets get added to $M_A$.
\item Otherwise, set all gadgets corresponding to $x$ in false state and the gadget corresponding to $\neg x$ in true state. Thus the {\em dashed blue} edges (see Fig.~\ref{newfig2:example}) from all these gadgets get added to $M_A$.
\end{itemize}
Lemma~\ref{lem:unstable-edges} implies that $M_A$ is stable. The next fact concludes the reduction.
\begin{lemma}
\label{cl:stable-no-consistency} Let $S$ be a stable matching in $G,{\cal P}$. If there is an augmenting path $\rho$ in $G_{S}$, then $\rho$ goes from $s$ to $t$, does not use any consistency edge, and passes, in each clause, through exactly one gadget, which is in true state. In particular, the assignment corresponding to $S$ is feasible. Conversely, if $A$ is a feasible assignment, in each clause $i$ there is a gadget $Z_{i,j_i}$ in true state, and there exists an augmenting path in $G_{S}$ that passes through $Z_{i,j_i}$ for all $i$.
\end{lemma}
In the rest of the section, we give the missing details from the construction, and prove Lemma~\ref{lem:unstable-edges} and Lemma~\ref{cl:stable-no-consistency}.
\subsection{The preference lists}\label{sec:preferences}
The gadgets corresponding to an occurrence of $x$ in the $i$-th clause and the occurrence of $\neg x$ are given in
Fig.~\ref{newfig2:example}. Let $D_k$ be the clause that contains the unique occurrence of $\neg x$ in the transformed formula $\phi$.
Note that the labels of vertices $a_i,b_i,a'_i,b'_i$ should depend on $x$; however, for the sake of readability, we omit the dependency on $x$, since it is always clear from the context. Similarly for the labels of vertices $c_k,d_k,c'_k,d'_k$.
We now describe the preference lists of the 4 vertices in the gadget of $x$ in the clause $C_i$.
\begin{minipage}[c]{0.45\textwidth}
\centering
\begin{align*}
& a_i : \ b_i \succ d_k \succ v_{i-1} \succ b'_i \qquad\qquad && a'_i : \ b'_i \succ b_i \\
& b_i : \ a'_i \succ a_i \succ u_i \qquad\qquad && b'_i : \ a_i \succ a'_i \\
\end{align*}
\end{minipage}
We now describe the preference lists of the 4 vertices in the gadget of $\neg x$.
\begin{minipage}[c]{0.45\textwidth}
\centering
\begin{align*}
& c_k : \ d_k \succ d'_k \succ v_{k-1} \qquad\qquad && c'_k : \ d'_k \succ d_k \\
& d_k : \ c'_k \succ a_i \succ a_j \succ \cdots \succ u_k \succ c_k \qquad\qquad && d'_k : \ c_k \succ c'_k \\
\end{align*}
\end{minipage}
Here $a_i,a_j,\ldots$ are the occurrences of the $a$-vertex in all the gadgets corresponding to literal $x$ in the formula $\phi$.
That is, the literal $x$ occurs in clauses $i,j,\ldots$ The order among the vertices $a_i,a_j,\ldots$ in $d_k$'s preference list does
not matter.
We now describe the preference lists of vertices $u_i$ and $v_i$. The preference list of $u_0$ is $v_0 \succ s$ and
for $1 \le i \le m+n$, the preference list of $u_i$ is as given on the left (these correspond to {\em positive} clauses $C_i$)
and for $m+n+1 \le i \le m+2n$, the preference list of $u_i$ is as given on the right (these correspond to {\em negative} clauses $D_i$):
\[ u_i: \ \ b_{i1} \succ b_{i2} \succ b_{i3} \succ \underline{v_i} \ \ \ \ \ \ \ \ \ \ \ \ \text{or}\ \ \ \ \ \ \ \ \ \ \ \ u_i: \ \ \underline{v_i} \succ d_{i1} \succ d_{i2},\]
where $b_{ij}$ (similarly, $d_{ij}$) is the $b$-vertex (resp., $d$-vertex) that appears in the gadget corresponding to the $j$-th literal in
the $i$-th clause. If the $i$-th clause (a positive one) consists of only 2 literals then there is no $b_{i3}$ here. The vertex $v_i$ is underlined.
The preference list of $v_{m+2n}$ is $u_{m+2n} \succ t$ and
for $1 \le i \le m+n$, the preference list of $v_{i-1}$ is as given on the left (these correspond to {\em positive} clauses $C_i$)
and for $m+n+1 \le i \le m+2n$, the preference list of $v_{i-1}$ is as given on the right (these correspond to {\em negative} clauses $D_i$):
\[ v_{i-1}: \ \ \underline{u_{i-1}} \succ a_{i1} \succ a_{i2} \succ a_{i3} \ \ \ \ \ \ \ \ \ \ \ \ \text{or}\ \ \ \ \ \ \ \ \ \ \ \ v_{i-1}: \ \ c_{i1} \succ c_{i2} \succ \underline{u_{i-1}},\]
where $a_{ij}$ (similarly, $c_{ij}$) is the $a$-vertex (resp., $c$-vertex) that appears in the gadget corresponding to the $j$-th literal
in the $i$-th clause. If the $i$-th clause (a positive one) consists of only 2 literals then there is no $a_{i3}$ here. The vertex $u_{i-1}$
is underlined.
\subsection{Proof of Lemma~\ref{lem:unstable-edges}}
We first derive some useful properties. Let $S$ be the matching given by the union of the basic set and the pair of dashed blue edges from the gadget of every literal. One easily checks that $S$ is stable. Thus $s$ and $t$ are unstable vertices, i.e., they remain unmatched in any stable matching. Suppose an edge $e$ is labeled $(-,-)$ with respect to some stable matching. It is an easy fact~\cite{GI89} that $e$ is an {\em unstable edge}, i.e., no stable matching contains $e$.
The following lemma is based on this fact.
\begin{lemma}
\label{lem:unstable-edges-aux}
Let $S$ be any stable matching in $G,{\cal P}$. Then (i)~$S$ does not contain any consistency edge and (ii) $S$ is a superset of the basic set.
\end{lemma}
\begin{proof}
Consider the matching $N$ given by the union of the basic set, the pair of dashed blue edges from the gadget of every positive literal, and the pair of dotted red edges from the gadget of every negative literal. It is easy to see that $N$ is a stable matching. Every consistency edge is labeled $(-,-)$ with respect to $N$. Thus every consistency edge is unstable, proving (i).
Observe that every edge $(v_{i-1},a_i)$ is labeled $(-,-)$ with respect to $N$. Similarly, every edge $(d_k,u_k)$ is also labeled $(-,-)$ with respect to $N$. Thus no stable
matching can contain these edges. Recall that all the $u$-vertices and $v$-vertices are stable, so this immediately implies that $v_{i-1}$ is matched to $u_{i-1}$ for all $i \le m+n$ and $u_k$ is matched to $v_k$ for all $k \ge m+n+1$. Also recall that the 4 vertices in each literal gadget are stable.
This implies $u_{m+n}$ and $v_{m+n}$ are also matched to each other in every stable matching, proving (ii). \qed
\end{proof}
We can now conclude the proof of Lemma~\ref{lem:unstable-edges}. Let $S$ be any stable matching in $G$. Statement~\ref{enum:unstable-edges-1} follows from Lemma~\ref{lem:unstable-edges-aux}. Statements~\ref{enum:unstable-edges-2} and~\ref{enum:unstable-edges-3} follow from statement~\ref{enum:unstable-edges-1} and stability. As for statement~\ref{enum:unstable-edges-4}, if
the dashed blue pair of edges from $\neg x$'s gadget is included in $S$ then the stability of $S$ implies that $S$ has to contain the dashed blue pair of edges from every gadget of $x$---otherwise some consistency edge would be a {\em blocking edge} to $S$. Finally, it is easy to see that any matching that satisfies statements \ref{enum:unstable-edges-1}-\ref{enum:unstable-edges-4} is stable.
\subsection{Proof of Lemma~\ref{cl:stable-no-consistency}}
We first show the second part of Lemma~\ref{cl:stable-no-consistency}. Let $S$ be a stable matching of $G$. If there is a gadget in true state in each clause, then an $S$-augmenting path starting at $s$ and ending at $t$ in $G_S$ is easily constructed as follows. First, take edge $(s,u_0)$, then all edges $(u_i,v_i)$ and:
\begin{itemize}
\item for each clause $C_i$ with gadget $Z$ set to true, edges $(v_{i-1},a_i), (a_i,b_i'), (b_i',a_i'), (a_i',b_i), (b_i,u_i)$;
\item for each clause $D_k$ with gadget $Z$ set to true, edges $(v_{k-1},c_k), (c_k,d_k), (d_k,u_k)$.
\end{itemize}
Last, take edge $(v_{m+2n},t)$. In order to prove the other direction, we start with some auxiliary facts.
\begin{new-claim}
\label{claim0}
All popular matchings in $G$ have the same size and match the same set of vertices. In particular, they leave unmatched only $s$ and $t$.
\end{new-claim}
\begin{proof}
Consider the matching $M$ given by the union of the basic set with the pair of dashed blue edges from the gadget of every literal.
Note that $M$ is a stable matching in $G$. We claim that $M$ is also a dominant matching in $G$. Since a dominant (resp. stable) matching is a popular matching of maximum (resp. minimum) size, the first claim follows. The second is then implied by Lemma~\ref{lem:all-same}.
Consider the edge $(v_0,a_1)$: this is labeled $(-,-)$ with respect to $M$ since both $v_0$ and $a_1$ prefer
their partners in $M$ to each other. Thus there is no $s$-$t$ path in the graph $G_{M}$. As $s$ and $t$ are the only unmatched vertices,
there is no $M$-augmenting path in $G_{M}$. So $M$ is a dominant matching in $G$ by Theorem~\ref{thm:dominant}. \qed
\end{proof}
\begin{new-claim}
\label{cl:inproof-stable-no-consistency} Let $S$ be a stable matching in $G$ and $\rho$ an augmenting path in $G_{S}$. Then $\rho$ goes from $s$ to $t$ and does not use any consistency edge.
\end{new-claim}
\begin{proof}
By Claim~\ref{claim0}, stable matchings in $G$ leave only two vertices unmatched: these are $s$ and $t$. So $\rho$ must go from $s$ to~$t$. Recall that consistency edges cannot be included in $S$ (see Lemma~\ref{lem:unstable-edges}). For parity reasons, when traversing $\rho$ from $s$ to $t$, a consistency edge can occur in $\rho$ only if it leads $\rho$ back to an earlier clause. That is, a consistency edge $(a_i,d_k)$ has to be traversed in $\rho$ in the direction $d_k \rightarrow a_i$. Let $e = (d_k,a_i)$ be the first consistency edge traversed in $\rho$.
Observe that $\rho$ then cannot reach $t$, because the nodes $v_{i-1}$ and $u_i$ must already have been traversed by $\rho$. Hence, consistency edges cannot appear in~$\rho$. \qed
\end{proof}
We can now conclude the proof of Lemma~\ref{cl:stable-no-consistency}.
Assume that there is an augmenting path $\rho$ in $G_S$, starting at $s$ and terminating at $t$. By Claim~\ref{cl:inproof-stable-no-consistency}, $\rho$ goes from $s$ to $t$ and uses no consistency edge; hence it traverses all clauses, and for each clause $i$ there is a path in $G_S$ between $v_{i-1}$ and $u_i$. The literal whose gadget provides this $v_{i-1} \rightarrow u_i$ path is set to $\mathsf{true}$, and thus the $i$-th clause is satisfied. As this holds for each $i$,
$\phi$ has a satisfying assignment.
\subsection{Max-size popular matchings}
A non-dominant popular matching trivially exists if the size of a stable matching differs from the size of a dominant matching in an instance. Our next result is tailored for such instances. We will now show that it is NP-hard to decide if $G\new{, \mathcal{P}}$ admits a {\em max-size} popular matching that is not dominant.
\begin{pr}
\textsf{Input: } A bipartite graph $G = (A \cup B,E)$ with strict preference lists.\\
\textsf{Decide: } If there is a non-dominant matching among max-size popular matchings in~$G$.
\end{pr}
Given a 3SAT formula $\phi$, we will transform it as described earlier and build the graph $G_{\phi}$. Recall that all popular matchings in $G_{\phi}$ have the same size.
We proved in Theorem~\ref{thm:non-dominant} that it is NP-hard to decide if $G_{\phi}$ admits a popular matching that is not dominant. Consider the graph $H = G_{\phi} \cup \rho$ where $\rho$ is the path $\langle p_0,q_0,p_1,q_1\rangle$ with $p_1$ and $q_0$ being each other's top choices. There are no edges between a vertex in $\rho$ and a vertex in $G_{\phi}$. A max-size popular matching in $H$ consists of the pair of edges $(p_0,q_0), (p_1,q_1)$, and a popular matching in $G_{\phi}$.
Hence a max-size popular matching in $H$ that is {\em not} dominant consists of $(p_0,q_0), (p_1,q_1)$, and a popular matching in $G_{\phi}$ that is non-dominant. Thus the desired result immediately follows.
\begin{theorem}
\label{thm:max-size}
Given $G = (A \cup B,E)$ \new{ and $\mathcal{P}$} where in \new{$\mathcal{P}$}, every vertex has a strict ranking over its neighbors, it is NP-complete to decide if $G$ admits a max-size popular matching that is not dominant.
\end{theorem}
\section{Hardness of finding a stable matching that is dominant and consequences for popular matchings}\label{sec:new}
Given $G = (A \cup B, E)$ \new{and a set of strictly ordered lists $\mathcal{P}$}, we first consider here the problem of deciding if $G$ admits a matching that is {\em both} stable and dominant.
\begin{pr}\label{pr:stable-cap-dominant}
\textsf{Input: } A bipartite graph $G = (A \cup B,E)$ with strict preference lists.\\
\textsf{Decide: } If there is a stable matching in $G$ that is also dominant.
\end{pr}
In instances where all popular matchings have the same size, such a matching $M$ is desirable as there are no blocking edges with respect to $M$; moreover, $M$ defeats any larger matching in a head-to-head election.
In Section~\ref{sec:stable-dominant}, we show that the above problem is NP-complete.
We will then use this hardness to show the NP-completeness of deciding if $G$ admits a min-size popular matching that is unstable and also to give a short proof of NP-hardness of the popular roommates problem.
\subsection{Finding a matching that is both stable and dominant}
\label{sec:stable-dominant}
We know from Theorem~\ref{thm:dominant} that Problem~\ref{pr:stable-cap-dominant} is equivalent to the problem of deciding if there exists a stable matching $M$ such that $M$ has no augmenting path in $G_M$. We will show a simple reduction from 3SAT to this stable matching problem. Note that the graph $G$ here (and hence the reduction) is different from the instance given in Section~\ref{sec:stable}. Given a 3SAT formula $\phi$, we will transform $\phi$ as done in Section~\ref{sec:stable} so that there is exactly one occurrence of $\neg x$ in $\phi$ for every variable $x$.
Corresponding to the above formula $\phi$, we will construct an instance $G\new{, \mathcal{P}}$ whose high-level picture is shown in Fig.~\ref{newfig3:example}.
There are two ``unwanted'' vertices $s, t$ along with $u^{\ell}_j,v^{\ell}_j$ for every clause $\ell$ in $\phi$, for $0 \le j \le i$, where $i$ is the number of literals in clause $\ell$, along with one gadget for each literal in every clause.
\begin{figure}
\caption{The high-level picture of the gadgets corresponding to clauses $\ell$ with three literals, and $\ell'$ with two negated literals in the instance $G\new{, \mathcal{P}}$.}
\label{newfig3:example}
\end{figure}
In Fig.~\ref{newfig3:example}, $Z_{{\ell}j}$ is the gadget corresponding to the $j$-th literal of clause $\ell$, where each clause has 2 or 3 literals. As before, we have a separate gadget for every occurrence of each literal. That is, there is a separate gadget for each occurrence of $x \in \{X_1,\ldots,X_{2n}\}$ in $\phi$ and another gadget for the unique occurrence of $\neg x$ in $\phi$.
As done in Section~\ref{sec:stable}, vertices from a gadget corresponding to $x$ are denoted by $a_i,a_i',b_i,b_i'$, while vertices from a gadget corresponding to $\neg x$ are denoted by $c_k,c_k',d_k,d'_k$; however, adjacency lists and preferences are different here, see again Fig.~\ref{newfig3:example}. As before, we have a {\em consistency edge} between $x$'s gadget and $\neg x$'s gadget: this is now between the $b$-vertex of $x$'s gadget and the $c$-vertex of $\neg x$'s gadget (see Fig.~\ref{newfig4:example}). Again, we postpone the definition of the preference lists, and instead state the following lemma, analogous to Lemma~\ref{lem:unstable-edges}. The proof of Lemma~\ref{lem:unstable-edges-bis} is analogous to that of Lemma~\ref{lem:unstable-edges} and hence omitted.
\begin{figure}
\caption{Suppose $x$ occurs as the first literal in the $i$-th clause and $\neg x$ occurs as the second literal in the $k$-th clause. For convenience, we use $a_1,b_1,a'_1,b'_1$ to denote the 4 vertices in this gadget of $x$ and $c_2,d_2,c'_2,d'_2$ to denote the 4 vertices in the gadget of $\neg x$. As before, $\infty$ and $\infty-1$ denote last choice neighbor and last but one choice neighbor, respectively.}
\label{newfig4:example}
\end{figure}
\begin{lemma}
\label{lem:unstable-edges-bis}
Let $S$ be any stable matching in $G,{\cal P}$. Then
\begin{enumerate}
\item\label{enumbis:unstable-edges-1} $S$ leaves $s$ and $t$ unmatched and it does not contain any consistency edge,
while the edges $(u^{\ell}_i,v^{\ell}_i)$ for all clauses $\ell$ and indices $i$ are in $S$. \end{enumerate}
Also, for every variable $x \in \{X_1,\dots, X_{2n}\}$, we have:
\begin{enumerate}
\setcounter{enumi}{1}
\item\label{enumbis:unstable-edges-2} From the gadget of $\neg x$, either the pair (i)~$(c_k,d_k),(c'_k,d'_k)$ or (ii)~$(c_k,d'_k),(c'_k,d_k)$ is in $S$.
--- If (i) happens, we say that the gadget is in \emph{false state}, else that it is in \emph{true state}.
\item\label{enumbis:unstable-edges-3} From a gadget of $x$, say, its gadget in the $i$-th clause, either the pair of edges (i)~$(a_i,b_i),(a'_i,b'_i)$ or
(ii)~$(a_i,b'_i),(a'_i,b_i)$ is in $S$.
--- If (i) happens, we say that the gadget is in \emph{true state}, else that it is in \emph{false state}.
\item\label{enumbis:unstable-edges-4} If the gadget of $\neg x$ is in true state then all the gadgets of $x$ are in false state. \end{enumerate}
\noindent Conversely, any matching $S$ of $G$ that satisfies 1-4 above is stable.
\end{lemma}
\noindent Corresponding to any stable matching $M$, define a $\mathsf{true}$/$\mathsf{false}$ assignment $A_M$ as follows:
\begin{itemize}
\item If the dashed blue pair $(c_k,d_k)$ and $(c'_k,d'_k)$ from $\neg x$'s gadget belongs to $M$ then
$A_M$ sets $x$ to $\mathsf{true}$; else $A_M$ sets $x$ to $\mathsf{false}$.
\end{itemize}
Conversely, for any $\mathsf{true}$/$\mathsf{false}$ assignment $A$ to the variables in $\phi$, we define an associated matching $M_A$ as follows. For any $x \in \{X_1,\ldots,X_{2n}\}$:
\begin{itemize}
\item If $x = \mathsf{true}$, then set every gadget corresponding to $x$ in true state, and the gadget corresponding to $\neg x$ in false state.
Thus the {\em dashed blue} edges (see Fig.~\ref{newfig4:example}) from all these gadgets get added to $M_A$.
\item Otherwise, set every gadget corresponding to $x$ in false state, and the gadget corresponding to $\neg x$ in true state.
Thus the {\em dotted red} edges (see Fig.~\ref{newfig4:example}) from all these gadgets get added to $M_A$.
\end{itemize}
Finally, include the edges $(u^{\ell}_i,v^{\ell}_i)$ for all $\ell$ and $i$.
Using Lemma~\ref{lem:unstable-edges-bis}, it is easy to see that $M_A$ is a stable matching.
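Although stability of $M_A$ is verified here via Lemma~\ref{lem:unstable-edges-bis}, the underlying blocking-edge condition is mechanical: an edge $(v,u)$ blocks a matching exactly when both endpoints strictly prefer each other to their current partners (with being unmatched ranked below every neighbor). The sketch below is a generic check of this condition, assuming a hypothetical rank-dictionary encoding of the preference lists; it is an illustration, not the paper's construction.

```python
def is_stable(matching, pref_rank):
    """Return True iff `matching` admits no blocking edge.

    `matching` maps each matched vertex to its partner (symmetric);
    `pref_rank[v][u]` is the rank that neighbor u holds in v's
    preference list (0 = top choice).  An unmatched vertex prefers
    any neighbor to staying alone, modeled as rank(None) = infinity.
    """
    def rank(v, u):
        if u is None:
            return float('inf')  # being unmatched is worse than any neighbor
        return pref_rank[v][u]

    for v, ranks in pref_rank.items():
        for u in ranks:  # each edge is scanned from both sides; that is harmless
            # (v, u) blocks iff both endpoints strictly prefer each other
            # to their current partners
            if (rank(v, u) < rank(v, matching.get(v))
                    and rank(u, v) < rank(u, matching.get(u))):
                return False
    return True
```

For instance, in a $2\times 2$ instance where both $a$-vertices rank $b_1$ first and both $b$-vertices rank $a_1$ first, the matching $\{(a_1,b_1),(a_2,b_2)\}$ passes the check while $\{(a_1,b_2),(a_2,b_1)\}$ fails, blocked by $(a_1,b_1)$.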
Our goal is to show that $\phi$ is satisfiable if and only if $G$ has a stable matching $S$ without an $S$-augmenting path in $G_S$. This is achieved by the following lemma, whose proof is given at the end of this section.
\begin{lemma}
\label{cl:stable-no-consistency-bis} If there is a stable matching $M$ such that $G_M$ has no augmenting path, then $A_M$ satisfies $\phi$. Conversely, if there exists a satisfying assignment $A$, then there is no augmenting path with respect to $M_A$ in $G_{M_A}$.
\end{lemma}
We can therefore conclude the following.
\begin{theorem}
\label{thm:stable-and-dominant}
Given $G = (A \cup B,E),{\cal P}$, it is NP-complete to decide if $G$ admits a matching that is both stable and dominant.
\end{theorem}
\subsubsection{Preference lists}\label{sec:hardn2-lists}
We now describe the preference lists of the 4 vertices $a_1,b_1,a'_1,b'_1$ in the gadget of $x$ in the $i$-th clause.
We assume $x$ to be the first literal in this clause.
Note that the consistency edge now exists between the $b$-vertex in $x$'s gadget and the $c$-vertex in $\neg x$'s gadget
(see Fig.~\ref{newfig4:example}).
\begin{minipage}[c]{0.45\textwidth}
\centering
\begin{align*}
& a_1 : \ b_1 \succ v^i_0 \succ b'_1 \qquad\qquad && a'_1 : \ b'_1 \succ b_1 \\
& b_1 : \ a'_1 \succ c_k \succ a_1 \succ u^i_1 \qquad\qquad && b'_1 : \ a_1 \succ a'_1 \\
\end{align*}
\end{minipage}
We now describe the preference lists of the 4 vertices $c_2,d_2,c'_2,d'_2$ in the gadget of $\neg x$ (see Fig.~\ref{newfig4:example}).
We assume $\neg x$ to be the second literal in this clause (let this be the $k$-th clause).
\begin{minipage}[c]{0.45\textwidth}
\centering
\begin{align*}
& c_2 : \ d_2 \succ b_i \succ b_j \succ \cdots \succ d'_2 \succ v^k_1 \qquad\qquad && c'_2 : \ d'_2 \succ d_2\\
& d_2 : \ c'_2 \succ u^k_2 \succ c_2 \qquad\qquad && d'_2 : \ c_2 \succ c'_2 \\
\end{align*}
\end{minipage}
Here $b_i,b_j,\ldots$ are the $b$-vertices of all the gadgets corresponding to occurrences of the literal $x$ in the formula $\phi$.
The order among these vertices in $c_2$'s preference list is not important.
We now describe the preference lists of vertices $u^{\ell}_i$ and $v^{\ell}_i$ for $0 \le i \le r$ in clause~$\ell$,
where $r \in \{2,3\}$ is the number of literals in this clause.
The preference list of $u^{\ell}_0$ is $v^{\ell}_0 \succ s$
and the preference list of $v^{\ell}_r$ is $u^{\ell}_r \succ t$.
Let $1 \le j \le r$ and let $a_j,b_j,a'_j,b'_j$ be the 4 vertices in the gadget of the $j$-th literal in this clause if it is a positive clause and let $c_j,d_j,c'_j,d'_j$ be the 4 vertices in the gadget of the $j$-th literal in this clause if it is a negative clause. The preference list of $u^{\ell}_j$ is as given on the left (resp. right) for positive (resp. negative) clauses.
\[ u^{\ell}_j: \ \ b_j \succ v^{\ell}_j \ \ \ \ \ \ \ \ \ \ \ \ \text{or}\ \ \ \ \ \ \ \ \ \ \ \ u^{\ell}_j: \ \ v^{\ell}_j \succ d_j.\]
The preference list of $v^{\ell}_{j-1}$ is as given on the left (resp. right) for positive (resp. negative) clauses.
\[ v^{\ell}_{j-1}: \ \ u^{\ell}_{j-1} \succ a_j \ \ \ \ \ \ \ \ \ \ \ \ \text{or}\ \ \ \ \ \ \ \ \ \ \ \ v^{\ell}_{j-1}: \ \ c_j \succ u^{\ell}_{j-1}.\]
Observe that $s$ and $t$ are the last choices for each of their neighbors. The preferences of $s$ and $t$ are not relevant and it is easy to see the following.
\begin{new-claim}\label{cl:hardn2} Both $s$ and $t$ are unstable vertices.
\end{new-claim}
\subsubsection{Proof of Lemma~\ref{cl:stable-no-consistency-bis}}\label{sec:hardn2-secondlemma}
Take any stable matching $M$. Suppose $A_M$ does not satisfy, say, the $r$-th clause. We show that $G_M$ has an augmenting path with respect to $M$, concluding the proof of one direction. Note that all gadgets in the $r$-th clause are in false state in $M$. Construct the augmenting path in $G_M$ as follows:
\begin{itemize}
\item Let the $r$-th clause be a positive clause $x \vee y \vee z$. Let $a_1,b_1,a'_1,b'_1$ be the 4 vertices in $x$'s gadget,
$a_2,b_2,a'_2,b'_2$ be the 4 vertices in $y$'s gadget, and $a_3,b_3,a'_3,b'_3$ be the 4 vertices in $z$'s gadget. Thus we have the
following augmenting path with respect to $M$:
\[s - (u^r_0,v^r_0) - {\color{red}(a_1,b'_1)} - {\color{red}(a'_1,b_1)} - (u^r_1,v^r_1) - {\color{red}(a_2,b'_2)} - {\color{red}(a'_2,b_2)} - (u^r_2,v^r_2) - {\color{red}(a_3,b'_3)} - {\color{red}(a'_3,b_3)} - (u^r_3,v^r_3) - t.\]
\item Let the $r$-th clause be a negative clause $\neg x \vee \neg y$. So both $x$ and $y$ are set to $\mathsf{true}$ and
$M$ contains the dashed blue pair of edges from $\neg x$'s gadget and also from $\neg y$'s gadget.
Let $c_1,d_1,c'_1,d'_1$ be the 4 vertices in $\neg x$'s gadget and let $c_2,d_2,c'_2,d'_2$ be the 4 vertices in $\neg y$'s gadget.
Thus we have the following augmenting path with respect to $M$:
\[s - (u^r_0,v^r_0) - {\color{blue}(c_1,d_1)} - (u^r_1,v^r_1) - {\color{blue}(c_2,d_2)} - (u^r_2,v^r_2) - t.\]
\end{itemize}
Conversely, assume that $\phi$ has a satisfying assignment $A$.
We show that there is no $M_A$-augmenting path in $G_{M_A}$.
Let $\ell$ be any positive clause. Suppose $A$ sets the $j$-th literal in this clause to $\mathsf{true}$. Let $a_j,b_j,a'_j,b'_j$ be the
4 vertices in the gadget corresponding to the $j$-th literal. So $(a_j,b_j) \in M_A$ and hence there is no edge $(v^{\ell}_{j-1},a_j)$
in $G_{M_A}$. Thus there is no alternating path between $s$ and $t$ such that all the intermediate vertices on this path correspond to the $\ell$-th clause.
However there are consistency edges jumping across clauses---so there may be an $M_A$-augmenting path that begins with vertices
corresponding to the $\ell$-th clause and then uses a consistency edge. Let $\rho$ be an $M_A$-alternating path in $G_{M_A}$ with $s$ as an
endpoint and let $(s,u^{\ell}_0)$ be the first edge in~$\rho$. So the prefix of $\rho$ consists of vertices that belong to the $\ell$-th clause.
Let $b_i$ be the last vertex of the $\ell$-th clause in this prefix of $\rho$. Let $a_i,b_i,a'_i,b'_i$ be the 4 vertices in the gadget of $b_i$
(let this correspond to variable $x$) in the $\ell$-th clause. Observe that $(a_i,b'_i)$ and $(a'_i,b_i)$ are in $M_A$---otherwise $b_i$ is
not reachable from $u^{\ell}_0$ in $G_{M_A}$ via a path of vertices in the $\ell$-th clause.
Assume the consistency edge $(b_i,c_k)$ belongs to $\rho$, where $c_k$ is a vertex in the gadget of $\neg x$.
Let $c_k,d_k,c'_k,d'_k$ be the vertices in the gadget of $\neg x$ and suppose this literal occurs in the $r$-th clause.
Since the dotted red pair in $x$'s gadget in the $\ell$-th clause is in $M_A$, the dotted red pair $(c_k,d'_k)$ and $(c'_k,d_k)$ has to be in $M_A$.
Since $c'_k$ and $d'_k$ are degree~2 vertices, $\rho$ has to contain the subpath $c_k - d'_k - c'_k - d_k$. However $\rho$ is now stuck at $d_k$ in the
graph $G_{M_A}$. The path cannot go back to $c_k$. There is no edge between $d_k$ and $u^r_k$ in $G_{M_A}$ since this is a $(-,-)$ edge as
both $d_k$ and $u^r_k$ prefer their
respective partners in $M_A$ to each other. Thus the alternating path $\rho$ has to terminate at $d_k$.
The case when $\ell$ is a negative clause is even simpler since in this case $\rho$ cannot leave the vertices of the $\ell$-th clause using a consistency
edge. We know that the assignment $A$ sets some literal in the $\ell$-th clause to $\mathsf{true}$: let this be the $k$-th literal,
so $(c'_k,d_k) \in M_A$ and hence there is no edge $(d_k,u^{\ell}_k)$ in $G_{M_A}$. Thus there is no $M_A$-augmenting path in $G_{M_A}$ in this case also, concluding the proof. \qed
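The computational step at the heart of both directions of this proof is searching for an $M$-augmenting path inside $G_M$, the graph obtained from $G$ by deleting the edges labeled $(-,-)$, i.e., those whose two endpoints both strictly prefer their $M$-partners. This search is a standard alternating BFS, sketched below in Python; the rank-dictionary data layout and function names are assumptions made for illustration, not part of the paper.

```python
from collections import deque

def has_augmenting_path(adj, matching, pref_rank, source):
    """Search for an M-augmenting path from the unmatched vertex `source`
    inside the pruned graph G_M.

    `adj[v]` lists v's neighbors, `matching` maps matched vertices to
    their partners (symmetric), `pref_rank[v][u]` is u's rank in v's
    preference list (0 = best); unmatched vertices rank None as infinity.
    """
    def rank(v, u):
        return float('inf') if u is None else pref_rank[v][u]

    def in_G_M(v, u):
        # delete the edge only if it is labeled (-,-), i.e. both endpoints
        # strictly prefer their current partners to each other
        return not (rank(v, matching.get(v)) < rank(v, u)
                    and rank(u, matching.get(u)) < rank(u, v))

    # BFS over alternating-path states: (vertex, next edge must be matched?)
    start = (source, False)           # the first edge must be unmatched
    seen = {start}
    queue = deque([start])
    while queue:
        v, expect_matched = queue.popleft()
        for u in adj[v]:
            if not in_G_M(v, u):
                continue
            is_matched_edge = matching.get(v) == u
            if is_matched_edge != expect_matched:
                continue              # would break the alternation
            if not expect_matched and u not in matching:
                return True           # reached another unmatched vertex
            nxt = (u, not expect_matched)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

In the reduction above, the only unmatched vertices of a stable matching are $s$ and $t$, so a successful search started at $s$ corresponds exactly to an $s$--$t$ augmenting path of the kind exhibited in the proof.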
\subsection{Min-size popular matchings}
\label{sec:min-size}
In this section we investigate the counterpart of Theorem~\ref{thm:max-size}, i.e., the complexity of determining if $G = (A\cup B,E)$ admits a
{\em min-size} popular matching that is not stable.
\begin{pr}
\textsf{Input: } A bipartite graph $G = (A \cup B,E)$ \new{with strict preference lists}.\\
\textsf{Decide: } If there is an unstable matching among min-size popular matchings in~$G$.
\end{pr}
Given a 3SAT formula $\phi$, we transform it as described in Section~\ref{sec:stable} and build the graph $G$ as described in
Section~\ref{sec:stable-dominant}. We now augment the bipartite graph $G$ into bipartite graph $H$ as follows:
\begin{itemize}
\item Add a new vertex $w$, which is adjacent to each $d'$-vertex in $\neg x$'s gadget for every variable $x \in\{X_1,\ldots,X_{2n}\}$.
\item Add a square $\langle t,t',r',r\rangle$ at the $t$-end of the graph (see Fig.~\ref{newfig5:example}), where $t',r',r$ are new vertices.
\end{itemize}
\begin{figure}
\caption{We add a square $\langle t,t',r',r\rangle$ at the end of the graph $G$, and a vertex $w$ as shown in the figure above.
Recall that each $s$-$t$ path in $G$ corresponds to a clause in $\phi$.}
\label{newfig5:example}
\end{figure}
The preference lists of the vertices in $\{r,r',t',t\}$ are as follows:
\[r: r' \succ t \ \ \ \ \ \ \ \ \ \ \ \ \ \ r': r \succ t'\ \ \ \ \ \ \ \ \ \ \ \ \ \ t': r' \succ t\ \ \ \ \ \ \ \ \ \ \ \ \ \ t: \cdots \succ r \succ t'.\]
Recall that the vertex $t$ is adjacent to the last $v^{\ell}$-vertex in every clause gadget $\ell$. The vertex $t$ prefers all its $v$-neighbors (the order among these is not important) to its neighbors in the square, namely $r$ and $t'$, and $t$ prefers $r$ to $t'$.
Regarding the vertex $w$, this vertex is the last choice for all its neighbors and $w$'s preference list is some permutation of
its neighbors (the order among these neighbors is not important).
This finishes the description of the graph $H$. We denote the collection of all preference lists again by~${\cal P}$. The proof of Lemma~\ref{lemma:min-size} is analogous to the proof of Lemma~\ref{lem:unstable-edges}.
\begin{lemma}
\label{lemma:min-size}
Let $S$ be any stable matching in $H,{\cal P}$. Then $S$ contains the edges $(u^{\ell}_i,v^{\ell}_i)$ for all clauses $\ell$ and all $i$ and the pair of edges
$(r,r'),(t,t')$. Moreover, $S$ does not contain any consistency edge.
\end{lemma}
\begin{corollary}
\label{cor3}
For any stable matching $S$ in $H, \mathcal{P}$ and any variable $x \in \{X_1,\ldots,X_{2n}\}$, we have:
\begin{enumerate}
\item From the gadget of $\neg x$, either the pair (i)~$(c_k,d_k),(c'_k,d'_k)$ or (ii)~$(c_k,d'_k),(c'_k,d_k)$ is in $S$.
\item From a gadget of $x$, say its gadget in the $i$-th clause, either the pair of edges (i)~$(a_i,b_i),(a'_i,b'_i)$ or
(ii)~$(a_i,b'_i),(a'_i,b_i)$ is in $S$.
\end{enumerate}
\end{corollary}
It follows from Lemma~\ref{lemma:min-size} and Corollary~\ref{cor3} that a stable matching in $H$
matches all vertices except $s$ and $w$. A max-size popular matching in $H$ is a perfect matching:
it includes the edges $(s,u^{\ell}_0), (v^{\ell}_0,c_k),(d'_k,w)$, $(d_k,c'_k)$ for some negative clause $\ell$.
We now prove the following theorem.
\begin{theorem}
\label{thm:min-size}
$H,{\cal P}$ admits an unstable min-size popular matching if and only if $G$ admits a stable matching that is dominant.
\end{theorem}
\begin{proof}
Suppose $G$ admits a stable matching $N$ that is also dominant. We claim that $M = N \cup \{(r,t),(r',t')\}$
is a popular matching in $H$. Note that there is a blocking edge $(r,r')$ with respect to~$M$. Since $M$ matches exactly the stable vertices,
this would make $M$ an unstable min-size popular matching.
Recall Theorem~\ref{thm:char-popular} (from Section~\ref{prelims}) that characterizes popular matchings.
We will now show that the matching $M$ satisfies the three sufficient conditions for popularity as given in Theorem~\ref{thm:char-popular}.
Note that $(r,r')$ is the only blocking edge with respect to $M$.
Thus property~(ii) from Theorem~\ref{thm:char-popular} obviously
holds. Property~(i) holds since the edge $(t,t')$ is labeled $(-,-)$ with respect to $M$.
Thus there is no alternating cycle in $H_M$ with the edge
$(r,r')$. We will now show property~(iii) also holds in $H_M$, thus $M$ is a popular matching in $H$.
There are two unmatched vertices in $M$: $s$ and $w$. We need to check that the edge $(r,r')$ is not reachable via an
$M$-alternating path from either $s$ or $w$ in $H_M$.
Since the matching $N$ is dominant in $G$, there is no
$M$-alternating path between $s$ and $t$ in $H_M$. Thus the blocking edge $(r,r')$ is not reachable from $s$ by an $M$-alternating path in $H_M$.
Consider the vertex $w$ and any of its neighbors, say $d'_k$ (see Fig.~\ref{newfig4:example}). We know that $N$ includes either the dotted red pair of edges $(c_k,d'_k),(c'_k,d_k)$ or the dashed blue pair of edges $(c_k,d_k),(c'_k,d'_k)$. In both cases, the blocking edge $(r,r')$
is not reachable in $H_M$ by an $M$-alternating path with $(w,d'_k)$ as a starting edge. This proves one side of the reduction.
We will now show the converse. Let $M$ be a min-size popular matching in $H$ that is unstable. Since $M$ is a min-size popular matching,
the set of vertices matched in $M$ is the set of stable vertices.
Consider the edges $(v^{\ell}_{i-1},a_i), (d_k,u^{\ell}_k)$, and
$(b_i,c_k)$ for any $\ell$, $i$, and $k$. For each of those edges, there is a stable matching in $G$ (and thus in $H$)
where all these edges are labeled $(-,-)$ (see the proof of Lemma~\ref{lem:unstable-edges}). Thus these edges are {\em slack}\footnote{An edge $(x,y)$ is slack with respect to a popular matching $N$ and its witness $\vec{\alpha}$ if $\alpha_x + \alpha_y > \mathsf{wt}_N(x,y)$.} with respect to
a popular matching and its witness, i.e. $\vec{0}$ here---so they are {\em unpopular} edges (see Section~\ref{prelims}). Hence $M$ has to contain
the edges $(u^{\ell}_i,v^{\ell}_i)$ for all clauses $\ell$ and all $i$.
The 4 vertices in a literal gadget are stable, hence matched among themselves in $M$ (since they must all be matched, and we excluded other edges incident to them), i.e., either the dotted red pair or the dashed blue pair from each literal
gadget belongs to $M$. Also, the consistency
edge cannot be a {\em blocking edge} to $M$ as this would make a blocking edge reachable via an $M$-alternating path in $H_M$ from
the unmatched vertex $w$, a contradiction to $M$'s popularity in $H$ (by Theorem~\ref{thm:char-popular}).
Since $M$ is unstable, there is a blocking edge with respect to $M$. The only possibility is from within the square $\langle r,r',t',t\rangle$.
Thus $M$ has to contain the pair of edges $(r,t)$ and $(r',t')$: this makes $(r,r')$ a blocking edge with respect to $M$. Consider the matching
$N = M \setminus \{(r,t),(r',t')\}$. We have already argued that $N$ is a stable matching in $G$.
Suppose $N$ is not dominant in $G$. Then there is an $N$-alternating path between $s$ and $t$ in $G_{N}$. Thus in the graph
$H_M$, there is an $M$-alternating path from the unmatched vertex $s$
to the blocking edge $(r,r')$. This contradicts the popularity of $M$ in $H$ by Theorem~\ref{thm:char-popular}.
Hence $N$ is a dominant matching in $G$. \qed
\end{proof}
We proved in Section~\ref{sec:stable-dominant} that the problem of deciding if $G$ admits a stable matching that is dominant is NP-hard.
Thus we can conclude the following theorem.
\begin{theorem}
\label{thm:min-size-unstable}
Given $G = (A \cup B,E)$ \new{and $\mathcal{P}$, }
it is NP-complete to decide if $G$ admits a min-size popular matching that is not stable.
\end{theorem}
Our \new{graph} $H$ is such that every popular matching here has size either $n/2$ or $n/2 - 1$, where $n$ is the number of vertices in~$H$. All popular matchings of size $n/2$ are dominant since they are perfect matchings. Thus a popular matching $M$ in $H$ is neither stable nor dominant if and only if $M$ is an unstable popular matching of size $n/2 -1$, i.e., $M$ is an unstable min-size popular matching in~$H$. So we have shown a new and simple proof of NP-hardness of the problem of deciding if a marriage instance admits a popular matching that is neither stable nor dominant.
\subsection{A simple proof of NP-hardness of the popular roommates problem}
\label{sec:pop-roommates}
We know that the popular roommates problem is NP-hard~\cite{FKPZ18,GMSZ18}. Here we adapt the hardness reduction given in Section~\ref{sec:stable-dominant} to show a short and simple proof of hardness of this problem. We first mention some useful structural results from Section~\ref{prelims} that extend to the roommates case.
The first is the {\em Rural Hospitals Theorem}, see \cite[Theorem 4.5.2]{GI89}. That is, if $H = (V,E)$ admits stable matchings, then all stable matchings in $H$ have to match the same subset of vertices. Another useful property is that if an edge $e$ is labeled $(-,-)$ with respect to a stable matching in $H$ then no popular
matching in $H$ can include $e$~\cite{Kav18}. This is based on the {\em tightness} of popular edges in roommates instances and the fact that an edge labeled $(-,-)$ with respect to a stable matching is slack.
Last, we recall that the characterization of popular matchings given in Theorem~\ref{thm:char-popular} also holds when $G$ is non-bipartite.
\begin{pr}
\textsf{Input: } A (non-bipartite) graph $H = (V,E)$ with strict preference lists.\\
\textsf{Decide: } If there is a popular matching in $H$.
\end{pr}
Given a 3SAT formula $\phi$, we transform it as described in Section~\ref{sec:stable} and build the graph $G$ as described in
Section~\ref{sec:stable-dominant}. We now augment the bipartite graph $G$ into a non-bipartite graph $H$, depicted in Fig.~\ref{fig:roommates} as follows:
\begin{itemize}
\item Add edges between $s$ and the $d'$-vertex in $\neg x$'s gadget for every variable $x$.
\item At the other end of the graph $G$, add an edge $(t,r)$ along with a triangle $\langle r,r',r''\rangle$, where $r,r',r''$ are new
vertices.
\end{itemize}
\begin{figure}
\caption{We add a new triangle $\langle r,r',r''\rangle$ to $t$, and connect $s$ with all $d'$ vertices as shown in the figure above.}
\label{fig:roommates}
\end{figure}
The preference lists of the vertices in $\{r,r',r'',t\}$ are as follows:
\[r: r' \succ r'' \succ t \ \ \ \ \ \ \ \ \ \ \ \ \ \ r': r'' \succ r\ \ \ \ \ \ \ \ \ \ \ \ \ \ r'': r \succ r'\ \ \ \ \ \ \ \ \ \ \ \ \ \ t: \cdots \succ r.\]
The vertex $t$ is adjacent to the last $v^{\ell}$-vertex in every clause gadget $\ell$. The vertex $t$ prefers all its $v$-neighbors
(the order among these is not important) to $r$. We denote the collection of all preference lists again by ${\cal P}$.
The vertex $s$ is the last choice neighbor for all its neighbors. The preference list of $s$ is not relevant.
Recall the vertex $w$ used in Section~\ref{sec:min-size}: now we have merged the vertices $s$ and $w$. This creates odd cycles, which is allowed here since the graph $H$ is non-bipartite.
\subsubsection{Popular matchings in $H$}
Let $M$ be a popular matching in $H$. Observe that $M$ has to match $r,r'$, and $r''$ since each of these vertices is a top choice neighbor for some vertex. If one of these vertices is left unmatched in $M$ then there would be a blocking edge incident to an unmatched vertex, a
contradiction to its popularity~(see Theorem~\ref{thm:char-popular}, condition~(iii)).
Since $t$ is the only {\em outside} neighbor of $r,r',r''$, the matching $M$ has to contain the pair of edges $(t,r),(r',r'')$. Note that the edge $(r,r'')$ blocks $M$.
Let $H^* = H \setminus \{t,r,r',r''\}$. Consider the matching $N = M\setminus\{(t,r),(r',r'')\}$ in $H^*$. Since $M$ is popular in $H$, the
matching $N$ has to be popular in $H^*$. We claim $N$ has to match all vertices in $H^*$ except $s$. This is because $H^*$ admits a stable
matching: consider $S = \cup_{\ell,i}\{(u^{\ell}_i,v^{\ell}_i)\} \cup\{$dashed blue edges in every literal gadget$\}$---this is a stable
matching in $H^*$. We know that $N$ has to match all stable vertices~\cite{HK13b}. Since the number of vertices in $H^*$ is odd, the vertex $s$ is left unmatched in $N$.
Consider any consistency edge. We claim this is an {\em unpopular} edge in $H^*$. This is because there is a stable matching in $H^*$ where this edge is labeled $(-,-)$
(see the proof of Lemma~\ref{lem:unstable-edges}), hence this edge cannot be used in any popular matching in $H^*$~\cite{Kav18}. Since all vertices in $H^*$ except $s$ have to be matched in $N$, the following 3 observations hold:
\begin{enumerate}
\item $N$ contains the edges $(u^{\ell}_i,v^{\ell}_i)$ for all clauses $\ell$ and all $i$.
\item From the gadget of $\neg x$, either the pair (i)~$(c_k,d_k),(c'_k,d'_k)$ or (ii)~$(c_k,d'_k),(c'_k,d_k)$ is in $N$.
\item From a gadget of $x$, say, its gadget in the $i$-th clause, either the pair of edges (i)~$(a_i,b_i),(a'_i,b'_i)$ or
(ii)~$(a_i,b'_i),(a'_i,b_i)$ is in $N$.
\end{enumerate}
We are now ready to prove Theorem~\ref{thm:roommates}, whose proof is similar to the proof of Theorem~\ref{thm:min-size}.
\begin{theorem}
\label{thm:roommates}
$H,{\cal P}$ admits a popular matching if and only if $G$ admits a stable matching that is dominant.
\end{theorem}
\begin{proof}
Suppose $G$ admits a stable matching $S$ that is also dominant. We claim that $M = S \cup \{(t,r),(r',r'')\}$ is a popular matching in $H$. We will again use Theorem~\ref{thm:char-popular} to prove the popularity of $M$ in $H$.
There is exactly one blocking edge with respect to $M$: this is the edge $(r,r'')$.
Observe that properties~(i) and (ii) from Theorem~\ref{thm:char-popular} are easily seen to hold.
We will now show that property~(iii) from Theorem~\ref{thm:char-popular} also holds.
We need to check that the edge $(r,r'')$ is not reachable via an $M$-alternating path from $s$ in $H_M$.
Since the matching $S$ is dominant in $G$, there is no $S$-alternating path between $s$ and $t$ in $G_{S}$. Thus the blocking edge $(r,r'')$ is not reachable from $s$ by an $M$-alternating path in $G_{M}$.
We now need to show that
the blocking edge $(r,r'')$ is not reachable from $s$ by an $M$-alternating path in $H_M$, i.e., when the first edge of the alternating path is $(s,d'_k)$ for some $d'_k$. We know that $M$ includes either the dotted red pair of edges $(c_k,d'_k),(c'_k,d_k)$ or the dashed blue pair of edges $(c_k,d_k),(c'_k,d'_k)$ from this gadget. In both cases, it can be checked that the blocking edge $(r,r'')$ is not reachable in $H_M$ by an $M$-alternating path with $(s,d'_k)$ as the starting edge. This proves one side of the reduction.
We will now show the converse. Suppose $H$ admits a popular matching $M$. We argued above that the pair of edges $(t,r)$ and $(r',r'')$ is in $M$. Consider the matching $N = M \setminus \{(t,r),(r',r'')\}$. We claim that $N$ is a stable matching in $G$.
From observations~1-3 given above, it follows that the only possibility of a blocking edge to $N$ is from a consistency edge. However a consistency edge cannot block $N$ as this would make a blocking edge reachable by an $M$-alternating path in $H_M$ from the unmatched vertex $s$ and this contradicts $M$'s popularity (by Theorem~\ref{thm:char-popular}).
We now claim that $N$ is a dominant matching in $G$. Suppose not. Then there is an $N$-alternating path between $s$ and $t$ in $G_{N}$. Thus in the graph $H_M$, there is an $M$-alternating path from the unmatched vertex $s$ to the blocking edge $(r,r'')$. This contradicts the popularity of $M$ in $H$ by Theorem~\ref{thm:char-popular}. Hence $N$ is a dominant matching in $G$. \qed
\end{proof}
We know that the problem of deciding if $G$ admits a stable matching that is dominant is NP-hard. This completes our new
proof of the NP-hardness of the popular roommates problem. Observe that every popular matching in $H$ is a max-size matching (since only the vertex $s$ is left unmatched), hence every popular matching in $H$ is dominant.
Thus our reduction above also shows a simple proof of NP-hardness of the dominant roommates problem.
\subsubsection{Conclusions and an open problem} We considered popular matching problems in a bipartite graph $G = (A \cup B, E)$ with strict preferences. We showed a simple $O(|E|^2)$ algorithm for deciding if there exists a popular matching in $G$ that is {\em not} stable. An open problem is to improve the running time of this algorithm.
We showed that the problems of deciding if a bipartite graph admits a stable matching that is (i)~dominant, (ii)~{\em not} dominant are NP-hard. These results imply many new hardness results for popular matchings in bipartite graphs, including the hardness of finding (1)~a popular matching that is {\em not} dominant, (2)~a min-size popular matching that is {\em not} stable, and (3)~a max-size popular matching that is {\em not} dominant.
\subsubsection{Acknowledgements.} \'{A}gnes Cseh was supported by the Federal Ministry of Education and Research of Germany in the framework of KI-LAB-ITSE (project number 01IS19066), OTKA grant K128611, and COST Action CA16228 European Network for Game Theory. Telikepalli Kavitha was supported by project no. RTI4001 of the Department of Atomic Energy, Government of India.
\end{document}
\begin{document}
\title{De-Gaussification by inconclusive photon subtraction}
\author{Stefano Olivares\footnote{[email protected]}
and Matteo G. A. Paris\footnote{[email protected]}}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di
Milano, Italia}
\begin{abstract}
We address conditional de-Gaussification of continuous variable
states by inconclusive photon subtraction (IPS) and review in
detail its application to the bipartite twin-beam state of radiation.
The IPS map is derived in the Fock basis, as well as its
counterpart in the phase space. Teleportation assisted by IPS
states is analyzed and the corresponding fidelity evaluated as
a function of the involved parameters. Nonlocality of IPS states
is investigated by means of different tests including displaced
parity, homodyne detection, pseudospin, and displaced on/off
photodetection. Dissipation and thermal noise are taken into account,
as well as non-unit quantum efficiency in the detection stage.
We show that the IPS process, for a suitable choice of the
involved parameters, improves teleportation fidelity and enhances
nonlocal properties.
\end{abstract}
\maketitle
\section{Introduction}
Nonclassical properties of the radiation field play a relevant
role in modern information processing since, in general, they improve
continuous variable (CV) communication protocols based on light
manipulation \cite{vLB_rev,FOP:napoli:05}. Indeed, {\em quantum light}
finds application in several fundamental tests of quantum
mechanics \cite{ts0}, as well as in high precision measurements
and high capacity communication channels \cite{qcm, furusawa}.
Among nonclassical features, entanglement plays a major role,
being the essential resource for quantum
computing, teleportation, and cryptographic protocols. Recently,
CV entanglement has been proved as a valuable tool also for
improving optical resolution, spectroscopy, interferometry,
tomography, and discrimination of quantum operations.
Recent experimental realizations also include dense coding
\cite{dense} and teleportation network \cite{qtn}.
\par
Entanglement in optical systems is usually generated through
parametric downconversion in nonlinear crystals. The resulting
bipartite state, the so-called twin-beam state of radiation
(TWB), allows the realization of several beautiful experiments
and the demonstration of the above quantum protocols. However,
the resources available to generate CV entangled states are
unavoidably limited: nonlinearities are generally small, and,
in turn, the resulting states have a limited amount of
entanglement and energy. In this context, practical applications
require novel schemes to create more entangled states or to
increase the degree of entanglement of a given signal.
\par
In quantum mechanics, the reduction postulate provides an
alternative mechanism to achieve {\em effective} nonlinear
dynamics. In fact, if a measurement is performed on a
portion of a composite system the output state strongly
depends on the results of the measurement. As a consequence,
the {\em conditional} state of the unmeasured part, {\em i.e.}
the sub-ensemble corresponding to a given outcome, may be connected
to the initial one by a (strongly) nonlinear map.
In this paper, we focus our attention on a scheme of this kind,
and address a conditional method based on subtraction of photons
to enhance nonclassical features. In particular, we analyze
how, and to which extent, photon subtraction may be used
to increase nonlocal correlations of twin-beams.
As we will see, photon subtraction transforms the
Gaussian Wigner function of the TWB into a non-Gaussian one, and
therefore it is also referred to as a {\em de-Gaussification}
process.
\par
The photon subtraction process on TWBs was first proposed in
\cite{opatr}, where a well-defined number of photons is
subtracted from both parties of a TWB by transmitting each
mode through a beam splitter and performing a joint photon-number
measurement on the reflected beams. The degree of entanglement
is then increased and the fidelity of the CV teleportation
assisted by such a photon-subtracted state is improved \cite{coch}.
However, this scheme is based on the possibility of resolving the
actual number of revealed photons. In \cite{ips:PRA:67} we showed
that the improvement of teleportation fidelity is possible also
when the number of detected photons is not known. In our scheme we
use on/off avalanche photodetectors, which are only able to distinguish
the presence from the absence of radiation. For this reason we
referred to this method as inconclusive photon subtraction
(IPS). The single-mode version of this process has been recently
implemented \cite{weng:PRL:04} and the nonclassicality of the
generated state starting from squeezed vacuum has been
theoretically investigated \cite{jeong,fock:oli}.
In addition, nonlocal properties of the photon-subtracted TWBs have
been investigated by means of
different nonlocality tests
\cite{ips:nonloc,OP:PSnoise,nha,sanchez,daffer,IOP:05}, finding
enhanced nonlocal properties depending on the particular test and
on the choice of the involved parameters.
\par
This paper reviews the effects of the IPS process on TWBs both in
the ideal case, i.e., when the detectors are not affected by
losses and no dissipation or thermal noise occurs during the
propagation of the involved modes, and in the realistic case, when
non-unit quantum efficiency is taken into account and the dynamics
through a noisy channel is considered.
\par
The paper is structured as follows: in the next section
we introduce photon subtraction as a method to enhance nonclassicality
of a radiation state and illustrate inconclusive photon subtraction
on a single-mode field. The de-Gaussification process
on two-mode fields is described in Sec.~\ref{s:IPS},
where the map of the IPS process is given both in the Fock representation
and in the phase-space. In Sec.~\ref{s:dyn:TWB} we briefly review the dynamics
of a TWB in noisy channels and show that IPS can be profitably applied also
in the presence of noise. The CV
teleportation protocol is described in Sec.~\ref{s:tele}, where
we compare the teleportation fidelity when the protocol is assisted
or not by the IPS process. In the following Sections, in order to
characterize in detail the nonlocal properties of the IPS states,
we consider different {\em Bell} tests, namely, the nonlocality
test in the phase space (Sec.~\ref{s:DP}), the homodyne detection
test (Sec.~\ref{s:HD}), the pseudospin test (Sec.~\ref{s:PS}),
and a nonlocality test based on on/off photodetection
(Sec.~\ref{s:on:off}). Finally, Sec.~\ref{s:remarks} closes
the paper with some concluding remarks.
\section{Photon subtraction}\label{s:ps}
The idea of enhancing nonclassical properties of radiation by subtraction
of photons has been introduced in the context of Schr\"odinger cat
generation \cite{dak97} and subsequently applied to the improvement of CV
teleportation fidelity \cite{opatr}. In the schemes of
Refs.~\cite{dak97,opatr} the field-mode to be ``photon subtracted'' (PS)
impinges onto a beam splitter with high transmissivity, whose second
port is left unexcited. At the output of the beam splitter the reflected
mode undergoes photon number measurement whereas the conditional state of
the transmitted mode represents the PS state. The properties of the PS
state depend on the number of detected photons, with single-photon
subtracted states playing a major role in the enhancement of
nonclassicality. Unfortunately, the realization of photon number resolving
detectors is still experimentally challenging, and therefore a question
arises concerning the experimental feasibility of subtraction schemes.
\par
Photodetectors usually available in quantum optics, such as
avalanche photodiodes (APDs), operate in Geiger mode
\cite{rev, serg}. They can be used to reconstruct the photon
statistics \cite{CVP,CVP:B} but cannot be used as photon counters.
APDs show high quantum efficiency but their breakdown current
is independent of the number of detected photons, which in turn
cannot be determined. The outcome of these APDs is either ``off''
(no photons detected) or ``on'', i.e., a ``click'', indicating
the detection of one or more photons. Actually, such an
outcome can be provided by any photodetector (photomultiplier,
hybrid photodetector, cryogenic thermal detector) for which the
charge contained in dark pulses is definitely below that of the output
current pulses corresponding to the detection of at least one
photon. Note that for most high-gain photomultipliers the anodic pulses
corresponding to no photons detected can be easily discriminated by
a threshold from those corresponding to the detection of one or more
photons.
\par
It appears therefore of interest to investigate the properties of
photon subtracted states when the number of detected photons
is not discriminated. Such a process will be referred to as
inconclusive photon subtraction (IPS) throughout the paper.
\begin{figure}
\caption{Scheme of the IPS process: the input state
$\varrho_{\rm s}$ (mode $a$) is mixed with the vacuum (mode $b$) at an
unbalanced beam splitter with transmissivity $T$, and on/off
photodetection with quantum efficiency $\varepsilon$ is performed on the
reflected beam.}
\label{f:scheme}
\end{figure}
The scheme of the IPS process is sketched in Fig.~\ref{f:scheme}. The
mode $a$, excited in the state $\varrho_{\rm s}$, is mixed with the vacuum
$\varrho_{0} = \ket{0}\bra{0}$ (mode $b$) at an unbalanced beam splitter (BS)
with transmissivity $T=\cos^2\phi$ and then, on/off avalanche
photodetection with quantum efficiency $\varepsilon$ is performed on the
reflected beam. APDs can only discriminate the presence of
radiation from the vacuum. The positive operator-valued measure
(POVM) $\{\Pi_0(\varepsilon),\Pi_1(\varepsilon)\}$ of the detector
is given by
\begin{eqnarray}
\Pi_0 (\varepsilon) = \sum_{k=0}^\infty(1-\varepsilon)^k \: |k\rangle\langle
k|\,, \qquad \Pi_1 (\varepsilon) = {\mathbb I} -
\Pi_0(\varepsilon)\label{onoffPOM}\;.
\end{eqnarray}
The whole process can be characterized by $T$ and $\varepsilon$ which will
be referred to as the IPS transmissivity and the IPS quantum efficiency.
The conditional state of the transmitted mode after the observation
of a click is given by
\begin{eqnarray}
\varrho_1 = \frac{1}{p_1(\phi,\varepsilon)} \hbox{Tr}_b
\left[U_{ab}(\phi)\varrho_{\rm s}\otimes\varrho_0\,U_{ab}^{\dag}(\phi)\:
{\mathbb I}_a \otimes \Pi_1 (\varepsilon)\right]
\label{cond1}\;,
\end{eqnarray}
where $U_{ab}(\phi) = \exp\{-\phi(a^{\dag}b - a b^{\dag})\}$ is the
evolution operator of the beam splitter, and $p_1(\phi,\varepsilon)$ is the
probability of a click. In general, the transformation (\ref{cond1})
realizes a non-unitary quantum operation
$\varrho_1=\mathcal{E}(\varrho_{\rm s})$ with operator-sum decomposition
given by
\begin{eqnarray}
\mathcal{E}(\varrho_{\rm s})
= \frac{1}{p_1(\phi,\varepsilon)}\:
\sum_{p=1}^{\infty}\:m_p(\phi,\varepsilon)\:M_p(\phi)\: \varrho_{\rm s} \:
M_p^{\dag}(\phi)\:\label{map1}\;
\end{eqnarray}
where
\begin{align}
&m_p(\phi,\varepsilon) =
{\displaystyle \frac{\tan^{2p}\phi\,\,[1-(1-\varepsilon)^p]}{p!}}\,,
\\
&M_{p}(\phi) = a^p \, \cos^{a^\dag a}\phi \,,
\end{align}
which follow from the explicit evaluation of the partial trace in
(\ref{cond1}).
The IPS state obtained by applying the map (\ref{map1}) to a
Gaussian state is no longer Gaussian: IPS thus represents
an effective source of non-Gaussian states, which would otherwise
have to be generated by highly nonlinear, and thus inherently
low-rate, optical processes.
\par
In general the IPS process can produce an output state whose energy is
larger than that of the input state and whose nonclassical properties
can be enhanced. As an example, we address photon subtraction on a
Gaussian state described by the following Wigner function (the
Wigner-function formalism makes the analytical calculations more
straightforward):
\begin{equation}\label{single:gauss}
W_{\rm s}(z) = \frac{\exp\{ -F|z|^2 - G (z^2 + {z^*}^2) \}}
{\pi \sqrt{(F^2 - 4 G^2)^{-1}}}\,,
\end{equation}
whose energy is given by
\begin{equation}
E_{\rm s} = \int_{\mathbb{C}} d^2 z\, \left(|z|^2 - \frac12\right)\,
W_{\rm s}(z) = \frac{F}{F^2-4G^2} - \frac12\,.
\end{equation}
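This expression is readily verified numerically. The sketch below (ours, with the convention $z = x + iy$ so that $d^2z = dx\,dy$) integrates the Wigner function (\ref{single:gauss}) on a grid for the squeezed-vacuum values $F = 2\cosh 2r$, $G = -\sinh 2r$ used later in this Section, recovering $E_{\rm s} = \sinh^2 r$:

```python
import numpy as np

r = 0.5
F, G = 2 * np.cosh(2*r), -np.sinh(2*r)     # squeezed-vacuum values
u = np.linspace(-6.0, 6.0, 601)
X, Y = np.meshgrid(u, u, indexing="ij")    # z = x + i y, d^2 z = dx dy
absz2 = X**2 + Y**2                        # |z|^2
# note z^2 + z*^2 = 2 (x^2 - y^2)
W = (np.sqrt(F**2 - 4*G**2) / np.pi
     * np.exp(-F * absz2 - 2 * G * (X**2 - Y**2)))
dA = (u[1] - u[0])**2
E_num = float(np.sum((absz2 - 0.5) * W) * dA)
E_formula = F / (F**2 - 4*G**2) - 0.5      # equals sinh(r)^2 here
```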
When the state (\ref{single:gauss}) undergoes the IPS process described
above, the Wigner function associated with the output state $\varrho_{1}$
reads \cite{fock:oli}
\begin{equation}\label{single:ips}
W_{1}(z) =
\frac{1}{p_1(\phi,\varepsilon)}\,\sum_{k=1}^{2}
\mathcal{C}_k(\phi,\varepsilon)\,
W_{k}(z)\,,
\end{equation}
with $\mathcal{C}_1(\phi,\varepsilon)=1$, $\mathcal{C}_2(\phi,\varepsilon)
= -(\varepsilon\sqrt{{\rm Det}[{\boldsymbol B} + \bmsigma_{\rm M}]})^{-1}$,
where
\begin{align}
{\boldsymbol B} = (1-T)\bmsigma + \frac{T}{2} \mathbbm{1}_2\,, \quad
\bmsigma_{\rm M} = \frac{2-\varepsilon}{2\varepsilon}\, \mathbbm{1}_2\,,
\end{align}
$\mathbbm{1}_2$ being the $2\times 2$ identity matrix, and $\bmsigma$ the
covariance matrix associated with the state (\ref{single:gauss})
\begin{equation}
\bmsigma = \left(\begin{array}{cc}
(F+2G)^{-1} & 0 \\
0 & (F-2G)^{-1}
\end{array}\right)\,,
\end{equation}
where $[\bmsigma]_{hk} = \frac12 \langle \{R_h,R_k\} \rangle -
\langle R_h \rangle \langle R_k \rangle$, $\{A,B\} = AB + BA$ denotes the
anticommutator, and
\begin{equation}
\boldsymbol{R} = (R_1,R_2)^T \equiv \left( \frac{a + a^{\dag}}{\sqrt{2}},
\frac{a - a^{\dag}}{i\sqrt{2}} \right)^T \,,
\end{equation}
$(\cdots)^{T}$ being the transposition operation.
Notice that $W_1(z)$
is no longer Gaussian. In Eq.~(\ref{single:ips}) we defined
\begin{equation}
W_{k}(z) = \frac{\exp\{ -F_k |z|^2 - G_k (z^2 + {z^*}^2) \}}
{\pi \sqrt{(F_k^2 - 4 G_k^2)^{-1}}}\,,
\end{equation}
where
\begin{align}
&F_1 = \mathcal{U}_{+} + \mathcal{U}_{-}\,, \quad
G_1 = \frac12(\mathcal{U}_{+} - \mathcal{U}_{-})\,, \\
&F_2 = 2(\mathcal{V}_{+} + \mathcal{V}_{-})\,, \quad
G_2 = \mathcal{V}_{+} - \mathcal{V}_{-}\,,
\end{align}
with
\begin{align}
&\mathcal{U}_{\pm} = \frac{F \pm 2G}
{2T+(1-T)(F \pm 2G)}\,, \\
&\mathcal{V}_{\pm} = \frac{F + 2(1 \pm G) T}
{4T + (1-T)[2\varepsilon + (2-\varepsilon)(F \pm 2 G)]}\,.
\end{align}
Thanks to the analytical expression (\ref{single:ips}), the energy of
the photon-subtracted state is simply given by
\begin{eqnarray}
E_1(T,\varepsilon) = \frac{1}{p_1(T,\varepsilon)}\,\sum_{k=1}^{2}
\mathcal{C}_k\,\left[
\frac{F_k}{F_k^2-4G_k^2} - \frac12
\right]\,,
\end{eqnarray}
with $\mathcal{C}_k \equiv \mathcal{C}_k(T,\varepsilon)$ and
where we put $T = \cos^2 \phi$.
\begin{figure}
\caption{Logarithmic plots of the energies $E_{\rm s}$ and $E_{1}$ of the
input and output states as functions of $\tanh r$ for different values of
$T$ and $\varepsilon$.}
\label{f:single:en}
\end{figure}
\begin{figure}
\caption{Plots of the energy $E_{1}$ for two values of $T$ and different
values of $\varepsilon$ as a function of $\tanh r$.}
\label{f:single:eta}
\end{figure}
\par
Let us now focus our attention on the IPS process applied to the squeezed
vacuum $\ket{0,r} = S(r)\ket{0}$, $S(r) =
\exp\{ \frac12 r ({a^{\dag}}^2 - a^2) \}$ being the squeezing operator,
which has been recently realized experimentally \cite{weng:PRL:04}.
The Wigner function associated with $\ket{0,r}$ is given by
Eq.~(\ref{single:gauss}) with $F = 2 \cosh 2r$ and $G = - \sinh 2r$.
In Figs.~\ref{f:single:en} we plot the energies
$E_{\rm s}$ and $E_{1}$ of the input and output states, respectively, for
different values of the involved parameters as functions of $\tanh r$.
We can see that there is a threshold on $r$, depending on $T$ and
$\varepsilon$, under which the IPS state has a larger energy than the input
state. Furthermore, when $\varepsilon = 1$, $T \to 1$ and $r \to 0$ we can
see that $E_1 \to 1$: in these limits the output state approaches the
squeezed Fock state $S(r)\ket{1}$ \cite{jeong,fock:oli}.
Finally, in Fig.~\ref{f:single:eta} $E_1$ is plotted for two values of
$T$ and different values of $\varepsilon$ as a function of $\tanh r$: we
find that as $r$ increases, the IPS quantum efficiency becomes less
relevant to the process.
\section{Photon subtraction on bipartite states}\label{s:IPS}
\begin{figure}
\caption{Scheme of the IPS process. The two modes, $a$ and $b$, of a shared
bipartite state $\varrho_{\rm s}$ are mixed with vacuum modes at two
unbalanced beam splitters with equal transmissivity $T$, and the reflected
modes are revealed by on/off photodetectors with equal efficiency
$\varepsilon$.}
\label{f:IPS:scheme}
\end{figure}
In this Section we address the de-Gaussification of bipartite states by IPS.
The de-Gaussification can be achieved by subtracting
photons from both modes through on/off detection \cite{ips:PRA:67,coch}.
The IPS scheme for two modes is sketched in Fig.~\ref{f:IPS:scheme}. The
modes $a$ and $b$ of the shared bipartite state $\varrho_{\rm s}$ are mixed
with vacuum modes at two unbalanced beam splitters (BS) with equal
transmissivity $T = \cos^2\phi$; the reflected modes $c$ and $d$ are then
revealed by avalanche photodetectors (APD) with equal efficiency
$\varepsilon$. The conditional measurement on modes $c$ and $d$ is
described by the POVM (assuming equal quantum efficiency for the two
photodetectors)
\begin{align}
{\Pi}_{00} (\varepsilon) &= {\Pi}_{0,c} (\varepsilon) \otimes {\Pi}_{0,d}
(\varepsilon)\,, \\
{\Pi}_{01} (\varepsilon) &= {\Pi}_{0,c} (\varepsilon) \otimes
{\Pi}_{1,d} (\varepsilon)\:, \\
{\Pi}_{10} (\varepsilon) &= {\Pi}_{1,c} (\varepsilon) \otimes {\Pi}_{0,d}
(\varepsilon)\,, \\
{\Pi}_{11} (\varepsilon) &= {\Pi}_{1,c} (\varepsilon) \otimes
{\Pi}_{1,d} (\varepsilon)\;.
\label{povm11}
\end{align}
When the two photodetectors jointly click, the conditioned output state
of modes $a$ and $b$ is given by \cite{ips:PRA:67,ips:nonloc}
\begin{widetext}
\begin{equation}
\mathcal{E}(\varrho_{\rm s})
= \frac{\hbox{Tr}_{cd}\big[
U_{ac}(\phi)\otimes U_{bd}(\phi) \: \varrho_{\rm s}
\otimes |0\rangle_{cd}{}_{dc}\langle 0|
\: U_{ac}^{\dag}(\phi)\otimes U_{bd}^{\dag}(\phi) \:
{\mathbb I}_a \otimes
{\mathbb I}_b \otimes
{\Pi}_{11} (\varepsilon)
\big]}{p_{11}(r,\phi,\varepsilon)}\:, \label{ptr}
\end{equation}
\end{widetext}
where $U_{ac}(\phi)=\exp\{-\phi(a^{\dag} c-a c^{\dag}) \}$ and
$U_{bd}(\phi)$ are the evolution operators of the beam splitters,
$|0\rangle_{cd} \equiv |0\rangle_{c}\otimes|0\rangle_{d}$, and
$p_{11}(r,\phi,\varepsilon)$ is the probability of a click
in both the detectors.
The partial trace on modes $c$ and $d$ can be explicitly evaluated, thus
arriving at the following decomposition of the IPS map:
\begin{multline}
\mathcal{E}(\varrho_{\rm s})
= \frac{1}{p_{11}(r,\phi,\varepsilon)}\:\\
\times \sum_{p,q=1}^{\infty}\:m_p(\phi,\varepsilon)
\:M_{pq}(\phi)\: \varrho_{\rm s}
\: M_{pq}^{\dag}(\phi)\: m_q(\phi,\varepsilon)\:\label{KE}
\end{multline}
where
\begin{equation}
M_{pq}(\phi) = a^p b^q \, (\cos\phi)^{a^\dag a + b^\dag b}\,.
\end{equation}
Eq.~(\ref{KE}) is indeed an operator-sum representation of the IPS map:
$\{p,q\}\equiv \theta$ should be understood as a multi-index, so that
(\ref{KE}) reads $\mathcal{E}(\varrho_{\rm s}) = \sum_\theta A_\theta
\varrho_{\rm s} A^\dag_\theta$ with $A_\theta=
[p_{11}(r,\phi,\varepsilon)]^{-1/2}\,
\sqrt{m_p(\phi,\varepsilon)\,m_q(\phi,\varepsilon)}\:M_{pq}(\phi)$.
\par
From now on we focus our attention on the case in which the shared state is
the twin-beam state of radiation (TWB) $\varrho_{\rm s} =
\dket{\Lambda}\dbra{\Lambda}$, where $\dket{\Lambda} =
\sqrt{1-\lambda^2}\sum_k \lambda^{k} \ket{k}\otimes\ket{k}$ with
$\lambda=\tanh r$, $r$ being the TWB squeezing parameter. The TWB is
obtained by parametric down-conversion of the vacuum, $\dket{\Lambda} =
\exp\{ r(a^\dag b^\dag - ab) \}\ket{0}$, $a$ and $b$ being field operators,
and it is described by the Gaussian Wigner function
\begin{equation}
W_{0}(\alpha,\beta) =
\frac{\exp\{
-2 \widetilde{A}_0 (|\alpha|^2+|\beta|^2)
+ 2 \widetilde{B}_0 (\alpha\beta + \calpha\cbeta) \}}
{\pi^2\sqrt{{\rm Det}[\bmsigma_0]}}\,,
\label{twb:wig}
\end{equation}
with
\begin{equation}
\widetilde{A}_0 = \frac{A_0}{4 \sqrt{{\rm Det}[\bmsigma_0]}}\,,\qquad
\widetilde{B}_0 = \frac{B_0}{4 \sqrt{{\rm Det}[\bmsigma_0]}}\,,
\end{equation}
where $A_0 \equiv A_0(r) = \cosh(2 r)$,
$B_0 \equiv B_0(r) = \sinh (2 r)$ and $\bmsigma_0$ is the covariance matrix
\begin{equation}\label{cvm:twb}
\bmsigma_0 = \frac12
\left(
\begin{array}{cc}
A_0\, \mathbbm{1}_2 & B_0\, \bmsigma_3\\[1ex]
B_0\, \bmsigma_3 & A_0\, \mathbbm{1}_2
\end{array}\right)\:,
\end{equation}
$\mathbbm{1}_2$ being the $2 \times 2$ identity matrix and $\bmsigma_3 =
{\rm Diag}(1,-1)$; $\bmsigma_0$ is defined as $[\bmsigma_0]_{hk} =
\frac12\langle \{ R_h, R_k \}\rangle - \langle R_h \rangle
\langle R_k \rangle$ with
\begin{align}
\boldsymbol{R} &= (R_1, R_2, R_3, R_4)^{T} \\
&\equiv \left(
\frac{a+a^{\dag}}{\sqrt{2}},\frac{a-a^{\dag}}{i\sqrt{2}},
\frac{b+b^{\dag}}{\sqrt{2}},\frac{b-b^{\dag}}{i\sqrt{2}}
\right)^{T}\,.
\end{align}
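As a simple numerical aside (ours, not part of the derivation), one can verify that $\bmsigma_0$ in Eq.~(\ref{cvm:twb}) has ${\rm Det}[\bmsigma_0] = [(A_0^2-B_0^2)/4]^2 = 1/16$ for every $r$, reflecting the purity of the TWB and implying $\widetilde{A}_0 = A_0$ and $\widetilde{B}_0 = B_0$:

```python
import numpy as np

def twb_cov(r):
    """TWB covariance matrix of Eq. (cvm:twb)."""
    A0, B0 = np.cosh(2*r), np.sinh(2*r)
    s3, I2 = np.diag([1.0, -1.0]), np.eye(2)
    return 0.5 * np.block([[A0 * I2, B0 * s3],
                           [B0 * s3, A0 * I2]])

dets = [np.linalg.det(twb_cov(r)) for r in (0.0, 0.3, 1.0, 2.0)]
# every entry equals ((A0^2 - B0^2)/4)^2 = 1/16, independently of r
```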
Now we explicitly calculate the Wigner function of
the corresponding IPS state which, as one may expect, is no longer
Gaussian, nor is it positive-definite. The state entering the two beam
splitters is described by the Wigner function
is described by the Wigner function
\begin{equation}
W_{0}^{\hbox{\tiny (in)}}(\alpha,\beta,\zeta,\xi) =
W_{0}(\alpha,\beta)\,
\frac{4}{\pi^2} \exp\left\{ -2|\zeta|^2 - 2|\xi|^2 \right\}\,,
\end{equation}
where the second factor at the right hand side represents the two vacuum
states of modes $c$ and $d$.
The action of the beam splitters on $W^{\hbox{\tiny (in)}}_{0}$ can be
summarized by the following change of variables \cite{FOP:napoli:05}
\begin{align}
&\alpha \to \alpha\cos\phi + \zeta\sin\phi\,,
&\zeta \to \zeta\cos\phi - \alpha\sin\phi\,,\\
&\beta \to \beta\cos\phi + \xi\sin\phi\,,
&\xi \to \xi\cos\phi - \beta\sin\phi\,,
\end{align}
and the output state, after the beam splitters, is then given by
\begin{multline}
W_{r,\phi}^{\hbox{\tiny (out)}}(\alpha,\beta,\zeta,\xi) =\\
\frac{4}{\pi^2}\, W_{r,\phi}(\alpha,\beta)\,
\exp\left\{ -a |\xi|^2 + w \xi + \cw \cxi \right\} \\
\times\exp\big\{ -a |\zeta|^2 + (v + 2 \witB_0 \xi \sin^2\phi)\zeta \\
+ (\cv + 2 \witB_0 \cxi \sin^2\phi)\czeta \big\}\,,
\end{multline}
where
\begin{multline}
W_{r,\phi}(\alpha,\beta) =\\
\frac{
\exp\{ -b (|\alpha|^2 + |\beta|^2)
+ 2 \witB_0 \cos^2\phi\, (\alpha\beta + \calpha\cbeta) \}}
{\pi^2\sqrt{{\rm Det}[\bmsigma_0]}}
\end{multline}
and
\begin{align}
&a \equiv a(r,\phi) = 2 (\witA_0 \sin^2\phi + \cos^2\phi),\\
&b \equiv b(r,\phi) = 2 (\witA_0 \cos^2\phi + \sin^2\phi)\,,\\
&v \equiv v(r,\phi) = 2 \cos\phi\, \sin\phi\,
[(1-\witA_0)\calpha + \witB_0 \beta],\\
&w \equiv w(r,\phi) = 2 \cos\phi\, \sin\phi\,
[(1-\witA_0)\cbeta + \witB_0 \alpha]\,.
\end{align}
\par
At this stage on/off detection is performed on modes
$c$ and $d$ (see Fig.~\ref{f:IPS:scheme}). We are interested in
the situation when both the detectors click. The Wigner function
of the double click element $\Pi_{11}(\varepsilon)$ of the POVM
[see Eq.~(\ref{povm11})] is given by \cite{ips:PRA:67,cond:cola}
\begin{align}
W_{\varepsilon}(\zeta,\xi) &\equiv
W[\Pi_{11}(\varepsilon)](\zeta,\xi)\\
&= \frac{1}{\pi^2}\{
1-Q_{\varepsilon}(\zeta)-Q_{\varepsilon}(\xi)
+Q_{\varepsilon}(\zeta) Q_{\varepsilon}(\xi)
\}\,,
\end{align}
with
\begin{equation}
Q_{\varepsilon}(z) = \frac{2}{2-\varepsilon}\,
\exp\Bigg\{-\frac{2\varepsilon}{2-\varepsilon}\, |z|^2 \Bigg\}\,.
\end{equation}
Using Eq.~(\ref{ptr}) and the phase-space expression of trace
for each mode, i.e.,
\begin{equation}
{\rm Tr}[O_1\,O_2] = \pi \int_{\mathbb C} d^2z\, W[O_{1}](z)\,
W[O_{2}](z)\,,
\end{equation}
the Wigner function of the output state, conditioned to the double click
event, reads
\begin{equation}\label{w:ips:informal}
W_{r,\phi,\varepsilon}(\alpha,\beta) =
\frac{f(\alpha,\beta)}{p_{11}
(r,\phi,\varepsilon)}\,,
\end{equation}
where $f(\alpha,\beta) \equiv f_{r,\phi,\varepsilon}(\alpha,\beta)$ with
\begin{multline}\label{w:ips:informal:f}
f(\alpha,\beta) =
\pi^2\,\int_{\mathbb{C}^2}d^2\zeta\,d^2\xi\,
\frac{4}{\pi^2}\,W_{r,\phi}(\alpha,\beta)\, \\
\times \sum_{k=1}^4 \frac{C_k}{\pi^2}\,
G_{r,\phi,\varepsilon}^{(k)}(\alpha,\beta,\zeta,\xi)\,,
\end{multline}
with $C_k \equiv C_k(\varepsilon)$ and $C_1 = 1$,
$C_2 = C_3 = -2(2-\varepsilon)^{-1}$, $C_4 = 4(2-\varepsilon)^{-2}$;
the double-click probability $p_{11}(r,\phi,\varepsilon)$
can be written as function of $f(\alpha,\beta)$ as follows
\begin{equation}\label{w:ips:informal:p}
p_{11}(r,\phi,\varepsilon) =
\pi^2\,\int_{\mathbb{C}^2}d^2\alpha\,d^2\beta\,
f(\alpha,\beta)\,.
\end{equation}
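As a side check (ours, not part of the derivation), the phase-space expression for the trace used above can be tested on two coherent states, for which $\pi\int d^2z\, W_\alpha(z)\,W_\beta(z) = |\langle\alpha|\beta\rangle|^2 = e^{-|\alpha-\beta|^2}$:

```python
import numpy as np

alpha, beta = 0.8 + 0.3j, -0.2 + 0.6j      # illustrative amplitudes
u = np.linspace(-6.0, 6.0, 601)
X, Y = np.meshgrid(u, u, indexing="ij")
Z = X + 1j * Y                             # d^2 z = dx dy
W_a = (2/np.pi) * np.exp(-2 * np.abs(Z - alpha)**2)   # Wigner of |alpha>
W_b = (2/np.pi) * np.exp(-2 * np.abs(Z - beta)**2)    # Wigner of |beta>
overlap = float(np.pi * np.sum(W_a * W_b) * (u[1] - u[0])**2)
# overlap should equal exp(-|alpha - beta|^2)
```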
The quantities $G_{r,\phi,\varepsilon}^{(k)}(\alpha,\beta,\zeta,\xi)$
in Eq.~(\ref{w:ips:informal:f}) are given by
\begin{multline}
G_{r,\phi,\varepsilon}^{(k)}(\alpha,\beta,\zeta,\xi) = \\
\exp\big\{ -x_k |\zeta|^2 + (v + 2 B \xi \sin^2\phi)\zeta \\
+(\cv + 2 B \cxi \sin^2\phi)\czeta \big\} \\
\times\exp\left\{ -y_k |\xi|^2 + w \xi + \cw \cxi \right\}\,,
\label{meas:int}
\end{multline}
where $x_k \equiv x_k(r,\phi,\varepsilon)$, and
$y_k \equiv y_k(r,\phi,\varepsilon)$ are
\begin{align}
&x_1 = x_3 = y_1 = y_2 = a\,, \nonumber\\
&x_2 = x_4 = y_3 = y_4 = a + 2\varepsilon(2-\varepsilon)^{-1}\,. \nonumber
\end{align}
After the integrations we have
\begin{multline}
f(\alpha,\beta) =\frac{1}{\pi^2}\,
\sum_{k=1}^4 {\cal C}_k
\,\exp\{ (f_k-b)|\alpha|^2 + (g_k-b)|\beta|^2 \\
+(2 \witB_0 T + h_k)(\alpha\beta + \calpha\cbeta)\}
\end{multline}
and
\begin{equation}
p_{11}(r,T,\varepsilon) =
\sum_{k=1}^4
\frac{{\cal C}_k}
{(b-f_k)(b-g_k)-(2 \witB_0 T + h_k)^2}
\,,
\end{equation}
where we put $T = \cos^2\phi = 1 - \sin^2 \phi$, and defined
\begin{equation}
{\cal C}_k \equiv {\cal C}_k (r,T,\varepsilon) =
\frac{4 C_k}
{[x_ky_k - 4 \witB_0^2 (1-T)^2]\sqrt{{\rm Det}[\bmsigma_0]}}
\end{equation}
and $f_k \equiv f_k(r,T)$, $g_k \equiv g_k(r,T)$, and $h_k \equiv h_k(r,T)$
given by
\begin{align}
&f_k =
{\cal N}_k
\, [x_k \witB_0^2 + 4 \witB_0^2 (1-\witA_0)(1-T) + y_k (1-\witA_0)^2]\,,\\
&g_k =
{\cal N}_k
\, [x_k (1-\witA_0)^2 + 4 \witB_0^2 (1-\witA_0)
(1-T) + y_k \witB_0^2]\,,\\
&h_k =
{\cal N}_k
\, \{(x_k + y_k) \witB_0 (1-\witA_0)\nonumber\\
&\hspace{2cm}
+ 2 \witB_0 [\witB_0^2 + (1-\witA_0)^2] (1-T)\}\,,\\
&{\cal N}_k \equiv {\cal N}_k(r,T) =
{\displaystyle
\frac{4 T\, (1-T)}{x_k y_k - 4 \witB_0^2(1-T)^2}\,.
}
\end{align}
In this way, the Wigner function of the IPS state can be rewritten as
\begin{eqnarray}\label{ips:wigner}
W_{\hbox{\rm\tiny IPS}}(\alpha,\beta) =
\frac{1}{\pi^2\, p_{11}(r,T,\varepsilon)}
\sum_{k=1}^4 {\cal C}_k\,W_{k}(\alpha,\beta)\,,
\end{eqnarray}
with
\begin{multline}
W_{k}(\alpha,\beta) =
\exp\{ (f_k-b) |\alpha|^2 +(g_k-b) |\beta|^2 \\
+ (2\witB_0 T + h_k) (\alpha\beta + \calpha\cbeta)\}\,.
\end{multline}
Finally, the density matrix corresponding to $W_{\hbox{\rm\tiny
IPS}}(\alpha,\beta)$ reads as follows \cite{ips:PRA:67}
\begin{multline}
\label{ips:fck}
{\varrho}_{\hbox{\rm\tiny IPS}} =
\frac{1-\lambda^2}{p_{11}(r,T,\varepsilon)}
\sum_{n,m=0}^{\infty} (\lambda T)^{n+m}
\sum_{h,k=0}^{{\rm Min}[n,m]} {\cal A}_{h,k}(T,\varepsilon) \\
\times
\sqrt{ \binom{n}{h}\binom{n}{k}\binom{m}{h}\binom{m}{k} }
\, \ket{n-k}_a \ket{n-h}_b {_b}\bra{m-h} {_a}\bra{m-k}\,,
\end{multline}
where $\lambda = \tanh r$ and
\begin{equation}\label{ips:fhk}
{\cal A}_{h,k}(T,\varepsilon)= \left[ 1 - (1-\varepsilon)^h \right]
\left[ 1 - (1-\varepsilon)^k
\right] \left( \frac{1-T}{T} \right)^{h+k}\:.
\end{equation}
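The Fock expansion (\ref{ips:fck}) lends itself to a direct numerical check. The following sketch (our own; the cutoff and parameter values are arbitrary) builds the unnormalized matrix from the truncated sums, normalizes it by its trace, and verifies that the result is a legitimate two-mode density matrix (hermitian, unit trace, positive up to truncation error):

```python
import numpy as np
from math import comb

def rho_ips(lam, T, eps, nmax):
    """Truncated Fock matrix of Eq. (ips:fck), normalized by its trace."""
    d = nmax + 1
    rho = np.zeros((d, d, d, d))           # indices (n_a, n_b, m_a, m_b)
    for n in range(d):
        for m in range(d):
            for h in range(min(n, m) + 1):
                for k in range(min(n, m) + 1):
                    A = ((1 - (1 - eps)**h) * (1 - (1 - eps)**k)
                         * ((1 - T) / T)**(h + k))
                    c = (lam * T)**(n + m) * A * np.sqrt(
                        comb(n, h) * comb(n, k) * comb(m, h) * comb(m, k))
                    rho[n - k, n - h, m - k, m - h] += c
    rho = rho.reshape(d * d, d * d)
    return rho / np.trace(rho)

rho = rho_ips(lam=0.5, T=0.8, eps=0.9, nmax=12)
evals = np.linalg.eigvalsh(rho)
```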
\par
\begin{figure}
\caption{Logarithmic plots of the energies $E_{\rm s}$ and
$E_{\hbox{\rm\tiny IPS}}$ of the bipartite input and output states as
functions of $\tanh r$ for different values of the involved parameters.}
\label{f:ips:en}
\end{figure}
\begin{figure}
\caption{Plots of the energy $E_{\hbox{\rm\tiny IPS}}$ for two values of
$T$ and different values of $\varepsilon$ as a function of $\tanh r$.}
\label{f:ips:eta}
\end{figure}
In Fig.~\ref{f:ips:en} we plot the energies $E_{\rm s}$ and
$E_{\hbox{\rm\tiny IPS}}$ of the bipartite input and output states,
respectively, for different values of the involved parameters as functions
of $\tanh r$. We recall that for a given Wigner function $W(v,w)$ of a
bipartite state, the corresponding energy is
\begin{equation}
E = \int_{\mathbb{C}^2} d^2 v\, d^2 w\, \left(|v|^2 + |w|^2 - 1\right)\,
W(v,w)\,.
\end{equation}
If the bipartite state has a Wigner function of the form
\begin{equation}\label{two:generic}
W_{\rm s}(v,w) =
\frac{\exp\{ -F|v|^2 -G|w|^2 + H (vw + v^* w^*) \}}
{\pi^2 (FG-H^2)^{-1}}\,,
\end{equation}
then its energy reads:
\begin{equation}
E_{\rm s} = \frac{F+G}{FG - H^2}-1\,;
\end{equation}
thus, in the case of a TWB input state, $F$, $G$, and $H$ are
obtained from Eq.~(\ref{twb:wig}), and the energy of the state emerging
from the IPS process can be written as
\begin{equation}
E_{\hbox{\rm\tiny IPS}} = \frac{1}{p_{11}(r,T,\varepsilon)}
\sum_{k=1}^4 {\cal C}_k \left[
\frac{F_k+G_k}{(F_k G_k - H_k^2)^2}-\frac{1}{F_k G_k - H_k^2}
\right]\,
\end{equation}
with $F_k = b-f_k$, $G_k = b-g_k$, and $H_k = 2 \witB_0 T + h_k$, where all
the involved quantities are the same as in Eq.~(\ref{ips:wigner}).
As in the single-mode case, we can see that there is a threshold on $r$,
depending on $T$ and $\varepsilon$, under which the IPS state has a larger
energy than the input state. In Fig.~\ref{f:ips:eta} $E_{\hbox{\rm \tiny
IPS}}$ is plotted for two values of $T$ and different values of
$\varepsilon$ as a function of $\tanh r$: we find that as $r$ decreases,
the IPS quantum efficiency becomes less relevant.
\par
The state given in Eq.~(\ref{ips:wigner}) is no longer Gaussian; its use
to improve continuous variable teleportation \cite{ips:PRA:67}, as well as
to enhance nonlocality \cite{ips:nonloc,nha,sanchez}, will be investigated
in the following Sections.
\section{Dynamics of TWB in noisy channels}\label{s:dyn:TWB}
Before addressing the properties of the IPS bipartite state described in
the previous Section, we review the evolution of the twin-beam state of
radiation (TWB) in a noisy environment, namely, an environment where
dissipation and thermal noise take place \cite{OP:PSnoise}. As we
will see, the effect of propagation through this kind of channel can be
included in our analysis by a simple change of the involved quantities.
Using a more compact form, Eq.~(\ref{twb:wig}) can
also be rewritten as
\begin{equation}\label{gauss:form}
W_{0}(\bmX) =
\frac{\exp\left\{ -\frac12\, \bmX^{T}\,\bmsigma_{0}^{-1}\,\bmX \right\}}
{\pi^2 \sqrt{{\rm Det}[\bmsigma_0]}}\,,
\end{equation}
with $\bmX = (x_1,y_1,x_2,y_2)^{T}$, $\alpha=\frac{1}{\sqrt{2}}(x_1+iy_1)$
and $\beta=\frac{1}{\sqrt{2}}(x_2+iy_2)$, and $(\cdots)^{T}$ denoting the
transposition operation.
\par
When the two modes of the TWB interact with a noisy environment, namely in the
presence of dissipation and thermal noise, the evolution of the Wigner
function (\ref{twb:wig}) is described by the following Fokker-Planck equation
\cite{wm:quantopt:94,binary,seraf:PRA:69}
\begin{equation}\label{fp:eq:cmp}
\partial_t W_{t}(\bmX) = \frac12 \Big(
\partial_{\bmX}^T {\rm I}\!\Gamma \bmX + \partial_{\bmX}^T
{\rm I}\!\Gamma \bmsigma_{\infty} \partial_{\bmX} \Big) W_{t}(\bmX)\,,
\end{equation}
with $\partial_{\bmX} =
(\partial_{x_1},\partial_{y_1},\partial_{x_2},\partial_{y_2})^{T}$.
The damping matrix is given by ${\rm I}\!\Gamma = \bigoplus_{k=1}^2\,
\Gamma_k \mathbbm{1}_2$, whereas
\begin{eqnarray}
\bmsigma_{\infty} &= \bigoplus_{k=1}^{2}\, \bmsigma_{\infty}^{(k)} =
\left(
\begin{array}{cc}
\bmsigma_{\infty}^{(1)} & \boldsymbol{0} \\[1ex]
\boldsymbol{0} & \bmsigma_{\infty}^{(2)}
\end{array}
\right)\,,
\end{eqnarray}
where $\boldsymbol{0}$ is the $2 \times 2$ null matrix and
\begin{equation}
\bmsigma_{\infty}^{(k)} =
\frac12
\left(
\begin{array}{cc}
1 + 2 N_{k} & 0\\[1ex]
0 & 1 + 2 N_k
\end{array}
\right)\,.
\end{equation}
$\Gamma_k$, $N_k$ denote the damping rate and the average number of
thermal photons of the channel $k$, respectively. $\bmsigma_{\infty}$
represents the covariance matrix of the environment and, in turn, the
asymptotic covariance matrix of the evolved TWB. Since the environment is
itself excited in a Gaussian state, the evolution induced by
(\ref{fp:eq:cmp}) preserves the Gaussian form (\ref{gauss:form}). The
covariance matrix at time $t$ reads as follows
\cite{seraf:PRA:69,FOP:napoli:05}
\begin{equation}
\bmsigma_t = \mathbbm{G}_t^{1/2}\,\bmsigma_0\,\mathbbm{G}_t^{1/2}
+ (\mathbbm{1} - \mathbbm{G}_t)\,\bmsigma_{\infty}\,,
\end{equation}
where $\mathbbm{G}_t = \bigoplus_{k=1}^2\,e^{-\Gamma_k t}\,\mathbbm{1}_2$.
The covariance matrix $\bmsigma_t$ can be also written as
\begin{equation}\label{evol:cvm:12}
\bmsigma_t = \frac 12
\left(
\begin{array}{cc}
A_t(\Gamma_1,N_1)\, \mathbbm{1}_2& B_t(\Gamma_1)\,\bmsigma_3 \\[1ex]
B_t(\Gamma_2)\, \bmsigma_3 & A_t(\Gamma_2,N_2)\, \mathbbm{1}_2
\end{array}
\right)
\end{equation}
with
\begin{equation}
\label{AtBt}
\begin{array}{l}
A_t(\Gamma_k,N_k) = A_0\,e^{-\Gamma_k t}
+ \left(1-e^{-\Gamma_k t}\right) (1 + 2 N_k)\,,\\[1ex]
B_t(\Gamma_k) = B_0\,e^{-\Gamma_k t}\,.
\end{array}
\end{equation}
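Equations (\ref{AtBt}) are easily implemented numerically; the sketch below (ours, with illustrative parameter values) checks that $\bmsigma_t$ relaxes to the environment covariance matrix $\frac{1+2N}{2}\,\mathbbm{1}$ for $\Gamma t \gg 1$, while at $t=0$ it retains the pure-TWB value ${\rm Det}[\bmsigma_0]=1/16$:

```python
import numpy as np

def sigma_t(r, Gamma, N, t):
    """Evolved TWB covariance matrix, Eqs. (evol:cvm:12)-(AtBt),
    with equal damping and thermal noise in the two channels."""
    At = np.cosh(2*r) * np.exp(-Gamma*t) + (1 - np.exp(-Gamma*t)) * (1 + 2*N)
    Bt = np.sinh(2*r) * np.exp(-Gamma*t)
    s3, I2 = np.diag([1.0, -1.0]), np.eye(2)
    return 0.5 * np.block([[At * I2, Bt * s3],
                           [Bt * s3, At * I2]])

r, Gamma, N = 1.0, 1.0, 0.5
s0 = sigma_t(r, Gamma, N, 0.0)        # initial TWB, Det = 1/16
s_late = sigma_t(r, Gamma, N, 50.0)   # ~ (1 + 2N)/2 * identity
```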
Finally, if we assume $\Gamma_1 = \Gamma_2 = \Gamma$ and $N_1 = N_2 = N$,
then the covariance matrix (\ref{evol:cvm:12}) becomes formally identical
to (\ref{cvm:twb}) and the corresponding Wigner function reads
\begin{equation}
W_{t}(\alpha,\beta) =
\frac{\exp\{
-2 \widetilde{A}_t (|\alpha|^2+|\beta|^2)
+ 2 \widetilde{B}_t (\alpha\beta + \calpha\cbeta)\}}
{\pi^2\sqrt{{\rm Det}[\bmsigma_t]}}\,,
\label{twb:wig:noise}
\end{equation}
with
\begin{equation}
\widetilde{A}_t = \frac{A_t(\Gamma,N)}{4\sqrt{{\rm Det}[\bmsigma_t]}}\,,
\qquad
\widetilde{B}_t = \frac{B_t(\Gamma)}{4\sqrt{{\rm Det}[\bmsigma_t]}}\,.
\end{equation}
If the IPS process is performed on a TWB evolved in a noisy
environment with both the channels having the same damping rate and thermal
noise, then the Wigner function of the state arriving at the beam splitters
is now given by Eq.~(\ref{twb:wig:noise}), and the output state is still
described by Eq.~(\ref{ips:wigner}), but with the following substitutions
\begin{equation}\label{sostituzioni}
\witA_0 \to \widetilde{A}_t \,,\quad
\witB_0 \to \widetilde{B}_t \,,\quad
\bmsigma_0 \to \bmsigma_t\,.
\end{equation}
\section{Continuous variable teleportation}\label{s:tele}
\begin{figure}
\caption{Scheme of the CV teleportation. One of the two modes of a shared
bipartite state $\varrho_{\rm s}$ is mixed at a balanced beam splitter with
the state to be teleported; double-homodyne measurement is performed on the
emerging modes, and the outcome is used to displace the remaining mode of
the shared state.}
\label{f:TScheme}
\end{figure}
The scheme of continuous variable (CV) teleportation is sketched in
Fig.~\ref{f:TScheme}. A bipartite state $W_{\rm s}$ is shared between two
parties: one mode of the state is mixed at a balanced beam splitter (BS)
with the state to be teleported, $W_{\rm in}$, then double-homodyne
measurement is performed on the two emerging modes. The complex outcome
$\xi$ of the measurement is used in order to displace the remaining mode of
$W_{\rm s}$ and the teleported state $W_{\rm out}$ is obtained averaging
over all the possible outcomes. Here we address the teleportation of the
coherent state $\ket{\alpha}$, whose Wigner function reads
\begin{equation}
W_{\rm in}(z) = \frac{2}{\pi} \exp\{ -2 |z-\alpha|^2 \}\,.
\end{equation}
If we consider the following generic shared state:
\begin{equation}
W_{\rm s}(v,w) =
\frac{\exp\{ -F|v|^2 -G|w|^2 + H (vw + v^* w^*) \}}
{\pi^2 (FG-H^2)^{-1}}\,,
\end{equation}
and since the POVM describing the double homodyne detection is
\begin{equation}
W_{\xi}(z,v) = \frac{1}{\pi^2}\, \delta^{(2)}(z-v^*-\xi)\,,
\end{equation}
$\delta^{(2)}(\zeta)$ being the Dirac delta function over the complex
plane, the output state $W_{\rm out}$ is given by \cite{FOP:napoli:05}
\begin{align}
W_{{\rm out}}(w) &= \pi^2 \int_{\mathbb C}d^2\xi
\int_{{\mathbb C}^2}d^2z\,d^2v\, W_{\rm in}(z)\,\nonumber \\
&\hspace{2cm} \times\,W_{\rm s}(v,w-\xi)\,
W_{\xi}(z,v)\\
&=\frac{1}{\pi \sigma_{\rm out}}
\exp\left\{ - \frac{|w-\alpha|^{2}}{\sigma_{\rm out}}\right\}\,,
\end{align}
where
\begin{equation}
\sigma_{\rm out} = \frac{1}{2} + \frac{F+G-2H}{FG-H^2}\,;
\end{equation}
in turn, the average fidelity of teleportation of coherent states reads as
follows:
\begin{align}\label{gen:fid}
\overline{F} &\equiv
\pi \int_{\mathbb C} d^2w\, W_{\rm in}(w)\,W_{\rm out}(w)\\
&=\frac{FG-H^2}{FG-H^2+F+G-2H} = \frac{2}{1+2\,\sigma_{\rm out}}\,.
\end{align}
\par
When the shared state is the TWB of Eq.~(\ref{twb:wig}), the average
fidelity is obtained from Eq.~(\ref{gen:fid}) with $F=G=2\widetilde{A}_0$
and $H=2\widetilde{B}_0$, i.e.,
\begin{equation}
\overline{F}_{\hbox{\tiny TWB}}(\lambda) = \mbox{$\frac12$}(1+\lambda)
\end{equation}
whereas in the presence of noise one should use the substitutions
(\ref{sostituzioni}). $\overline{F}_{\hbox{\tiny TWB}}$ is plotted in
Fig.~\ref{f:fid:TWB}.
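As a consistency check (ours, not part of the original treatment), inserting the ideal TWB values $F = G = 2\widetilde{A}_0 = 2\cosh 2r$ and $H = 2\widetilde{B}_0 = 2\sinh 2r$ into Eq.~(\ref{gen:fid}) must reproduce $\overline{F}_{\hbox{\tiny TWB}} = \frac12(1+\lambda)$:

```python
import numpy as np

def fidelity(F, G, H):
    """Average fidelity of Eq. (gen:fid) for a generic Gaussian shared state."""
    return (F*G - H**2) / (F*G - H**2 + F + G - 2*H)

fids, targets = [], []
for r in (0.1, 0.5, 1.5):
    F = G = 2 * np.cosh(2*r)
    H = 2 * np.sinh(2*r)
    fids.append(fidelity(F, G, H))
    targets.append((1 + np.tanh(r)) / 2)   # (1 + lambda)/2, lambda = tanh r
```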
\begin{figure}
\caption{Plot of the teleportation fidelity $\overline{F}_{\hbox{\tiny
TWB}}$ as a function of $\lambda = \tanh r$.}
\label{f:fid:TWB}
\end{figure}
When the teleportation is assisted by IPS, then the fidelity reads as
follows:
\begin{equation}
\overline{F}_{\hbox{\tiny IPS}}
= \frac{1}{p_{11}(r,T,\varepsilon)} \sum_{k=1}^{4}
\frac{{\cal C}_k}{F_k G_k-H_k^2+F_k+G_k-2H_k}\,,
\end{equation}
with $F_k = b-f_k$, $G_k = b-g_k$, and $H_k = 2 \witB_0 T + h_k$, where all
the involved quantities are the same as in Eq.~(\ref{ips:wigner}). The
results are presented in Fig.~\ref{f:fidid} for $\varepsilon = 1$ and
$\Gamma t = N =0$. The IPS state improves the average fidelity of quantum
teleportation when $\lambda$ is below a certain threshold, which depends on
$T$ (and $\varepsilon$). Notice that, for $T< 0.5$,
$\overline{F}_{\hbox{\tiny IPS}}(\lambda)$ is always below
$\overline{F}_{\hbox{\tiny TWB}}(\lambda)$, at least for $\varepsilon=1$.
\begin{figure}
\caption{Plots of the teleportation fidelity $\overline{F}_{\hbox{\tiny
IPS}}$ as a function of $\lambda = \tanh r$ for different values of $T$,
with $\varepsilon = 1$ and $\Gamma t = N = 0$, compared with
$\overline{F}_{\hbox{\tiny TWB}}$.}
\label{f:fidid}
\end{figure}
The effect of dissipation and thermal noise is shown in
Fig.~\ref{f:fidNoise}.
\begin{figure}
\caption{Plots of the teleportation fidelity $\overline{F}_{\hbox{\tiny
IPS}}$ in the presence of dissipation and thermal noise.}
\label{f:fidNoise}
\end{figure}
\par
In order to quantify the improvement and to study its dependence on $T$ and
$\varepsilon$, we define the following ``relative improvement'':
\begin{equation}
{\cal R}_{F}(r,T,\varepsilon,\Gamma,N) =
\frac{\overline{F}_{\hbox{\tiny IPS}}(r,T,\varepsilon,\Gamma,N)-
\overline{F}_{\hbox{\tiny TWB}}(r,\Gamma,N)}
{\overline{F}_{\hbox{\tiny TWB}}(r,\Gamma,N)}\,,
\end{equation}
which is plotted in Fig.~\ref{f:fid3D}: we can see that ${\cal R}_{F}$ and,
in turn, $\overline{F}_{\hbox{\tiny IPS}}$ are mainly affected by $T$ when
$\Gamma t$ and $N$ are fixed.
\begin{figure}
\caption{Plots of ${\cal R}_{F}$.}
\label{f:fid3D}
\end{figure}
In Fig.~\ref{f:fidperN} we plot ${\cal R}_{F}$ as a function of $\lambda=\tanh
r$ and the quantity ${\cal R}_{F}^{\rm (id)}$ defined as follows:
\begin{equation}
{\cal R}_{F}^{\rm (id)}(r,T,\varepsilon,\Gamma,N)=
\frac{\overline{F}_{\hbox{\tiny IPS}}(r,T,\varepsilon,\Gamma,N)-
\overline{F}_{\hbox{\tiny TWB}}(r,0,0)}
{\overline{F}_{\hbox{\tiny TWB}}(r,0,0)}\,,
\end{equation}
i.e., the relative improvement of the fidelity using IPS in the presence of
losses and thermal noise with respect to the fidelity using the TWB in
ideal conditions ($\Gamma t = N = 0$): we can see that, for this particular
choice of the parameters, not only is the fidelity improved with respect to
the TWB-based teleportation in the presence of the same dissipation and
thermal noise (solid line in Fig.~\ref{f:fidperN}), but the results can
also be better than in the ideal case (dot-dashed line). We can conclude that
IPS applied to a TWB degraded by dissipation and a noisy environment can
improve the fidelity of teleportation up to and beyond the value achievable
using the TWB in ideal conditions.
\begin{figure}
\caption{Plot of the relative enhancement ${\cal R}_{F}$.}
\label{f:fidperN}
\end{figure}
\begin{figure}
\caption{Plot of the teleportation fidelity as a function of the average
number of photons $N$ of the shared state in the case of TWB (dashed line)
and a photon subtracted TWB (solid line) for $T = 0.999$, $\varepsilon =
1$, and in ideal conditions (i.e., $\Gamma t = N = 0$). The inset is a
magnification of the region $0<N<2$.}
\label{f:fid:energy}
\end{figure}
\par
Finally, in Fig.~\ref{f:fid:energy} we plot the teleportation fidelity as a
function of the average number of photons $N$ of the shared state in the
case of TWB and a photon subtracted TWB: we can see that for a fixed energy
of the shared quantum channel the best fidelity is achieved by the TWB
state. The same result holds in the presence of dissipation and thermal
noise.
\par
In the next Sections we will analyze the nonlocality of the IPS state in
the presence of noise by means of Bell's inequalities \cite{OP:PSnoise}.
\section{Nonlocality in the phase space} \label{s:DP}
Parity is a dichotomic variable and thus can be used to establish
Bell-like inequalities \cite{CHSH}.
The displaced parity operator on two modes is defined as \cite{bana}
\begin{equation}
\hat{\Pi}(\alpha,\beta) =
D_a(\alpha)(-1)^{a^\dag a}D_a^\dag(\alpha)
\otimes D_b(\beta)(-1)^{b^\dag b}D_b^\dag(\beta)\,,
\end{equation}
where $\alpha, \beta \in {\mathbb C}$, $a$ and $b$ are mode operators and
$D_a(\alpha)=\exp\{\alpha a^\dag - \calpha a\}$ and $D_b(\beta)$ are
single-mode displacement operators. Since the two-mode Wigner function
$W(\alpha,\beta)$ can be expressed as \cite{FOP:napoli:05}
\begin{equation}
W(\alpha,\beta) = \frac{4}{\pi^2}\, \Pi(\alpha,\beta)\,,
\end{equation}
$\Pi(\alpha,\beta)$ being the expectation value of $\hat\Pi(\alpha,\beta)$,
the violation of these inequalities is also known as nonlocality in the
phase-space. The quantity involved in such inequalities can be written as
follows
\begin{equation}\label{bell:general}
{\cal B}_{\rm DP} = \Pi(\alpha_1,\beta_1)+ \Pi(\alpha_2,\beta_1)
+ \Pi(\alpha_1,\beta_2)-\Pi(\alpha_2,\beta_2)\,,
\end{equation}
which, for local theories, satisfies $|\mathcal{B}_{\rm DP}|\le 2$.
\par
Following Ref.~\cite{bana}, one can choose a particular set of
displaced parity operators, arriving at the following combination
\cite{ips:PRA:70}
\begin{multline}
{\cal B}_{\rm DP}({\cal J}) =
\Pi(\sqrt{\cal J},-\sqrt{\cal J})+ \Pi(-3\sqrt{\cal J},-\sqrt{\cal J})\\
+ \Pi(\sqrt{\cal J},3\sqrt{\cal J})-\Pi(-3\sqrt{\cal J},3\sqrt{\cal J})\,,
\label{bell:ale}
\end{multline}
which, for the TWB, gives a maximum ${\cal B}_{\rm DP} = 2.32$ (for ${\cal
J} = 1.6 \times 10^{-3}$) greater than the value $2.19$ obtained in
Ref.~\cite{bana}. Notice that, even in the infinite squeezing limit, the
violation is never maximal, i.e., $|\mathcal{B}_{\rm DP}| < 2\sqrt{2}$
\cite{jeong1}.
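This maximum can be checked numerically. The sketch below assumes the standard TWB parity expectation $\Pi(\alpha,\beta)=\exp\{-2\cosh(2r)(|\alpha|^2+|\beta|^2)+2\sinh(2r)(\alpha\beta+\bar\alpha\bar\beta)\}$ (not derived in this section) and evaluates the combination (\ref{bell:ale}) at strong squeezing:

```python
import math

def pi_twb(a, b, r):
    # displaced-parity expectation on a TWB, for real displacements a, b
    return math.exp(-2 * math.cosh(2 * r) * (a * a + b * b)
                    + 4 * math.sinh(2 * r) * a * b)

def bell_dp(J, r):
    # combination of Eq. (bell:ale)
    s = math.sqrt(J)
    return (pi_twb(s, -s, r) + pi_twb(-3 * s, -s, r)
            + pi_twb(s, 3 * s, r) - pi_twb(-3 * s, 3 * s, r))

r = 3.0  # strong squeezing
b_max = max(bell_dp(k * 1e-6, r) for k in range(1, 300))
assert 2 < b_max < 2 * math.sqrt(2)
assert abs(b_max - 2.32) < 0.01
```

The maximum found numerically is $\approx 2.32$, in agreement with the value quoted above, and remains below the bound $2\sqrt{2}$.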
\par
In Ref.~\cite{ips:PRA:70} we studied Eq.~(\ref{bell:ale}) for both the TWB
and the IPS state in an ideal scenario, namely in the absence of
dissipation and noise; we showed that, using IPS, the maximum violation
is achieved for $T,\varepsilon \to 1$ and for values of $r$ smaller than
for the TWB.
\par
\begin{figure}
\caption{Plots of the Bell parameters ${\cal B}_{\rm DP}$.}
\label{f:DP}
\end{figure}
Now, by means of the Eq.~(\ref{ips:wigner}) and the substitutions
(\ref{sostituzioni}), we can study how noise affects ${\cal B}_{\rm DP}$.
The results are shown in Fig.~\ref{f:DP} for $\varepsilon = 1$: as one may
expect, the overall effect of noise is to reduce the violation of the
Bell's inequality. When dissipation alone is present ($N=0$), the maximum
violation is achieved using the IPS for values of $r$ smaller than for
the TWB, as in the ideal case. On the other hand, one can see that the
presence of thermal noise mainly affects the IPS results. In fact, for
$\Gamma t = 0.01$ and $N=0.2$, one has $|{\cal B}_{\rm DP}^{\rm (TWB)}|>2$
for a range of $r$ values, whereas $|{\cal B}_{\rm DP}^{\rm (IPS)}|$ falls
below the threshold for violation. Note that the maximum violation, both
for the TWB and the IPS state, depends on the squeezing parameter $r$.
\par
\begin{figure}
\caption{The surfaces are plots of the Bell parameters ${\cal B}_{\rm DP}$.}
\label{f:DP_3D}
\end{figure}
In Fig.~\ref{f:DP_3D} we plot ${\cal B}_{\rm DP}^{\rm (IPS)}$ as a function
of $T$ and $\varepsilon$. We can see that the main contribution to the
Bell parameter is due to the transmissivity $T$. Moreover, as $T \to
1$, the Bell parameter is actually independent of $\varepsilon$.
Note that the values of ${\cal J}$ and $r$, which maximize the violation,
depend on $\Gamma t$ and $N$, as one can see from Fig.~\ref{f:DP}: in
Fig.~\ref{f:DP_3D} we have chosen to fix the environmental parameters in
order to compare the two plots, even though better results can be obtained
by maximizing ${\cal B}_{\rm DP}^{\rm (IPS)}$ with respect to ${\cal J}$ and
$T$.
\par
We conclude that, considering the displaced parity test in the presence
of noise, the IPS is quite robust if the thermal noise is below a threshold
value (depending on the environmental parameters) and for small values of the
TWB parameter $r$.
\section{Nonlocality and homodyne detection} \label{s:HD}
In principle there are two approaches to testing Bell's inequalities
for a bipartite state: either one can employ a test for continuous-variable
systems, such as that described in Sec.~\ref{s:DP}, or one can convert the
problem to Bell's inequalities tests on two qubits by mapping the
two modes into two-qubit systems. In this and the following Section we
will consider this latter case.
\par
The Wigner function $W_{\hbox{\tiny IPS}}(\alpha,\beta)$ given in
Eq.~(\ref{ips:wigner}) is no longer positive-definite and thus
it can be used to test the violation of Bell's
inequalities by means of homodyne detection, i.e., measuring the
quadratures $x_{\vartheta}$ and $x_{\varphi}$ of the two IPS modes $a$ and
$b$, respectively, as proposed in Refs.~\cite{nha,sanchez}.
In this case, one can dichotomize the measured quadratures by assigning the
outcome $+1$ when $x \ge 0$, and $-1$ otherwise. The nonlocality of
$W_{\hbox{\tiny IPS}}(\alpha,\beta)$ in ideal conditions has been studied in
Ref.~\cite{ips:PRA:70} where we also discussed the effect of the homodyne
detection efficiency $\eta_{\rm H}$.
\par
Let us now focus our attention on $W_{\hbox{\tiny IPS}}(\alpha,\beta)$
when the IPS process is applied to the TWB evolved through the noisy
channel, namely, using the substitutions (\ref{sostituzioni}). After the
dichotomization of the homodyne outputs, one obtains the following Bell
parameter
\begin{equation}\label{bell:homo}
{\cal B}_{\rm HD} =
E(\vartheta_1,\varphi_1) + E(\vartheta_1,\varphi_2)
+ E(\vartheta_2,\varphi_1) - E(\vartheta_2,\varphi_2)\,,
\end{equation}
where $\vartheta_k$ and $\varphi_k$ are the phases of the two
homodyne measurements at the modes $a$ and $b$, respectively, and
\begin{equation}
E(\vartheta_h,\varphi_k) =
\int_{\mathbb{R}^2} d x_{\vartheta_h}\,d x_{\varphi_k}\,
{\rm sign}[x_{\vartheta_h}\, x_{\varphi_k}]\,
P(x_{\vartheta_h}, x_{\varphi_k})\,,
\end{equation}
$P(x_{\vartheta_h}, x_{\varphi_k})$ being the joint
probability of obtaining the two outcomes
$x_{\vartheta_h}$ and $x_{\varphi_k}$ \cite{sanchez}. As usual,
violation of Bell's inequality is achieved when $|{\cal B}_{\rm HD}|>2$.
\par
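For the bare TWB the sign-binned correlators can be evaluated in closed form: for jointly Gaussian quadratures, $E = (2/\pi)\arcsin\rho$ (the Gaussian arcsine law), with the TWB quadrature correlation $\rho = \tanh(2r)\cos(\vartheta+\varphi)$ (both standard results, assumed here rather than derived in the text). The resulting Bell parameter approaches, but never exceeds, the local bound $2$, which is why the de-Gaussified IPS state is needed for a violation:

```python
import math

def corr(theta, phi, r):
    # sign-binned quadrature correlator for a TWB via the Gaussian arcsine law
    rho = math.tanh(2 * r) * math.cos(theta + phi)
    return (2 / math.pi) * math.asin(rho)

def bell_hd_twb(r, t1=0.0, t2=math.pi / 2, p1=-math.pi / 4, p2=math.pi / 4):
    # Bell parameter of Eq. (bell:homo) with the phase choices used below
    return (corr(t1, p1, r) + corr(t1, p2, r)
            + corr(t2, p1, r) - corr(t2, p2, r))

assert all(bell_hd_twb(0.1 * k) <= 2.0 + 1e-12 for k in range(1, 100))
assert abs(bell_hd_twb(8.0) - 2.0) < 1e-5   # saturates 2 only as r -> infinity
```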
\begin{figure}
\caption{Plots of the Bell parameter ${\cal B}_{\rm HD}$.}
\label{f:HD}
\end{figure}
In Fig.~\ref{f:HD} we plot ${\cal B}_{\rm HD}$ for $\vartheta_1 = 0$,
$\vartheta_2 = \pi/2$, $\varphi_1 = -\pi/4$ and $\varphi_2 = \pi/4$: as for
the ideal case \cite{ips:PRA:70,sanchez}, the Bell's inequality is
violated for a suitable choice of the squeezing parameter $r$. Obviously,
the presence of noise reduces the violation, but we can see that the effect
of thermal noise is not so large as in the case of the displaced parity
test addressed in Sec.~\ref{s:DP} (see Fig.~\ref{f:DP}).
\begin{figure}
\caption{The surfaces are plots of the Bell parameters ${\cal B}_{\rm HD}$.}
\label{f:HD_3D}
\end{figure}
In Fig.~\ref{f:HD_3D} we plot ${\cal B}_{\rm HD}$ as a function of $T$
and $\varepsilon$: as for the displaced parity test (see
Fig.~\ref{f:DP_3D}), we can see that the main contribution to the
Bell parameter is due to the transmissivity $T$.
\par
Notice that the high efficiency of this kind of detector
allows a loophole-free test of hidden variable theories
\cite{gil}, though the violations obtained are quite small.
This is due to the intrinsic information loss of the binning
process, which is used to convert the continuous homodyne data into
dichotomic results \cite{mun1}.
\section{Nonlocality and pseudospin test} \label{s:PS}
Another way to map a two-mode continuous variable system into a two-qubit
system is by means of the pseudospin test: this consists in measuring
three single-mode Hermitian operator $S_k$ satisfying the Pauli matrix algebra
$[S_h,S_k]=2i\varepsilon_{hkl}\,S_l$, $S_k^2 = {\mathbb I}$, $h,k,l=1,2,3$,
and $\varepsilon_{hkl}$ is the totally antisymmetric tensor with
$\varepsilon_{123}=+1$ \cite{filip:PRA:66,chen:PRL:88}. For the sake of
clarity, we will refer to $S_1$, $S_2$ and $S_3$ as $S_x$, $S_y$ and $S_z$,
respectively. In this way one can write the following correlation function
\begin{equation}
E({\bf a},{\bf b}) = \langle ({\bf a}\cdot{\bf S})\,
({\bf b}\cdot{\bf S})\rangle\,,
\end{equation}
where ${\bf a}$ and ${\bf b}$ are unit vectors such that
\begin{align}
{\bf a}\cdot{\bf S} &= \cos \vartheta_a\, S_z +
\sin \vartheta_a\, (e^{i \varphi_a} S_{-} + e^{-i \varphi_a} S_{+})\,,\\
{\bf b}\cdot{\bf S} &= \cos \vartheta_b\, S_z +
\sin \vartheta_b\, (e^{i \varphi_b} S_{-} + e^{-i \varphi_b} S_{+})\,,
\end{align}
with $S_{\pm} = \frac12 (S_x \pm i S_y)$. In the following, without loss of
generality, we set $\varphi_k = 0$. Finally, the Bell parameter reads
\begin{equation}\label{bell:PS}
{\cal B}_{\rm PS} = E({\bf a}_1,{\bf b}_1)+E({\bf a}_1,{\bf b}_2)
+E({\bf a}_2,{\bf b}_1)-E({\bf a}_2,{\bf b}_2)\,,
\end{equation}
corresponding to the CHSH Bell's inequality $|{\cal B}_{\rm PS}|\le 2$. In
order to study Eq.~(\ref{bell:PS}) we should choose a specific
representation of the pseudospin operators; note that, as pointed out in
Refs.~\cite{revzen, ferraro:3:nonloc}, the violation of Bell inequalities
for continuous variable systems depends, besides on the orientational
parameters, on the chosen representation, since different $S_k$ lead to
different expectation values of ${\cal B}_{\rm PS}$. Here we consider the
pseudospin operators corresponding to the Wigner functions \cite{revzen}
\begin{align}
W_x(\alpha)&=\frac{1}{\pi}\,{\rm sign}\big[\Re{\rm e}[\alpha]\big]\,,\quad
W_z(\alpha)= -\frac{1}{2}\,\delta^{(2)}(\alpha)\,,\label{PS:W:xz}\\
&W_y(\alpha)=-\frac{1}{2\pi}\, \delta\big(\Re{\rm e}[\alpha] \big)\,
{\cal P} \frac{1}{\Im{\rm m}[\alpha]}\,,
\end{align}
where ${\cal P}$ denotes the Cauchy principal value. Thanks to
(\ref{PS:W:xz}) one obtains
\begin{multline}
E_{\rm TWB}({\bf a},{\bf b}) = \cos\vartheta_a \cos\vartheta_b \\
+ \frac{2\sin\vartheta_a \sin\vartheta_b}{\pi}\,
\arctan\big[ \sinh(2r) \big]\,,
\end{multline}
for the TWB, and, for the IPS,
\begin{multline}
E_{\rm IPS}({\bf a},{\bf b}) =
\sum_{k=1}^4 \frac{{\cal C}_k}{p_{11}(r,T,\varepsilon)}
\Bigg[
\frac{\cos\vartheta_a \cos\vartheta_b}{4} \\
+ \frac{2 \sin\vartheta_a \sin\vartheta_b}{\pi{\cal A}_k}\,
\arctan\left( \frac{2 \wtB_0 T + h_k}{\sqrt{{\cal A}_k}} \right)
\Bigg]
\end{multline}
where $ {\cal A}_k=(b-f_k)(b-g_k)-(2 \wtB_0 T + h_k)^2$,
and all the other quantities have been defined in Sec.~\ref{s:IPS}.
\par
\begin{figure}
\caption{Plots of the Bell parameter ${\cal B}_{\rm PS}$.}
\label{f:PS:id}
\end{figure}
In Fig.~\ref{f:PS:id} we plot ${\cal B}_{\rm PS}$ for the TWB and IPS in
the ideal case, namely in the absence of dissipation and thermal noise. For
all the Figures we set $\vartheta_{a_1}=0$, $\vartheta_{a_2}=\pi/2$, and
$\vartheta_{b_1}=-\vartheta_{b_2}=\pi/4$. As
usual the IPS leads to better results for small values of $r$. Whereas
${\cal B}_{\rm PS}^{\rm (TWB)} \to 2\sqrt{2}$ as $r\to \infty$,
${\cal B}_{\rm PS}^{\rm (IPS)}$ has a maximum and, then, falls below the
threshold $2$ as $r$ increases. It is interesting to note that there is a
region of small values of $r$ for which ${\cal B}_{\rm PS}^{\rm (TWB)}\le
2 < {\cal B}_{\rm PS}^{\rm (IPS)}$, i.e., the IPS process can increase
the nonlocal properties of a TWB which does not violate the Bell's
inequality for the pseudospin test, in such a way that the resulting state
violates it. This fact is also present in the case of the displaced parity
test described in Sec.~\ref{s:DP}, but using the pseudospin test the effect
is enhanced. Notice that the maximum violations for the IPS occur for a
range of experimentally achievable values of $r$.
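With these angles, the TWB correlation function above reduces to ${\cal B}_{\rm PS}^{\rm (TWB)} = \sqrt{2}\,\big[1+(2/\pi)\arctan(\sinh 2r)\big]$, which can be checked numerically (a sketch):

```python
import math

def bell_ps_twb(r):
    # B_PS for the TWB with theta_a1 = 0, theta_a2 = pi/2, theta_b1 = -theta_b2 = pi/4
    k = (2 / math.pi) * math.atan(math.sinh(2 * r))
    return math.sqrt(2) * (1 + k)

assert abs(bell_ps_twb(0.0) - math.sqrt(2)) < 1e-12   # no violation at r = 0
assert bell_ps_twb(0.3) < 2 < bell_ps_twb(0.4)        # violation sets in near r ~ 0.35
assert 2 * math.sqrt(2) - bell_ps_twb(10.0) < 1e-8    # -> 2*sqrt(2) as r -> infinity
```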
\par
\begin{figure}
\caption{Plots of the Bell parameter ${\cal B}_{\rm PS}$.}
\label{f:PS:tau}
\end{figure}
In Fig.~\ref{f:PS:tau} we consider the presence of the dissipation alone
and vary $T$. We can see that IPS is effective also when the
effective transmissivity $T$ is not very high.
We take into account the effect of dissipation and thermal noise
in Figs.~\ref{f:PS:gamma} and \ref{f:PS:th}: we can conclude that
IPS is quite robust with respect to these sources of noise and, moreover,
one can think of employing IPS as a useful resource in order to reduce the
effect of noise.
\begin{figure}
\caption{Plots of the Bell parameter ${\cal B}_{\rm PS}$.}
\label{f:PS:gamma}
\end{figure}
\begin{figure}
\caption{Plots of the Bell parameter ${\cal B}_{\rm PS}$.}
\label{f:PS:th}
\end{figure}
\begin{figure}
\caption{The surfaces are plots of the Bell parameters ${\cal B}_{\rm PS}$.}
\label{f:PS_3D}
\end{figure}
In Fig.~\ref{f:PS_3D} we plot ${\cal B}_{\rm PS}^{\rm (IPS)}$
as a function of $T$ and $\varepsilon$: the main effect on the Bell
parameter is due to the transmissivity $T$, as in the previous cases.
\section{Nonlocality and on/off photodetection}\label{s:on:off}
\begin{figure}
\caption{Scheme of the nonlocality test based on displaced
on/off photodetection: the two modes $a$ and $b$ of a bipartite state
$\varrho$ are locally displaced by an amount $\alpha$ and $\beta$
respectively, and then revealed through on/off photodetection.
The corresponding correlation function violates Bell's
inequalities for dichotomic measurements for a suitable choice of the
parameters $\alpha$ and $\beta$, depending on the kind of state
under investigation. The violation holds also for non-unit quantum
efficiency and non-zero dark counts.
}
\label{f:Donoff}
\end{figure}
The nonlocality test we are going to analyze is schematically depicted in
Fig.~\ref{f:Donoff}: two modes of the de-Gaussified TWB radiation field,
$a$ and $b$, described by the density matrix $\varrho$, are locally
displaced by an amount $\alpha$ and $\beta$ respectively and, finally,
they are revealed by on/off
photodetectors, i.e., detectors which have no output when no photon is
detected and a fixed output when one or more photons are detected. The
action of an on/off detector is described by the following two-value
positive operator-valued measure (POVM) $\{\Pi_{0,\eta,D},
\Pi_{1,\eta,D}\}$ \cite{FOP:napoli:05}
\begin{subequations}
\label{povm1}
\begin{align}
&{\Pi}_{0,\eta,D} = \frac1{1+D}\: \sum_{k=0}^{\infty}
\left( 1-\frac{\eta}{1+D} \right)^k \ket{k}\bra{k}\:,\\
&{\Pi}_{1,\eta,D} = \mathbb{I} - {\Pi}_{0,\eta,D}\:,\end{align}
\end{subequations}
$\eta$ being the quantum efficiency and $D$ the mean number
of dark counts, i.e., of clicks with vacuum input.
In writing Eq.~(\ref{povm1}) we have considered a thermal background
as the origin of dark counts. An analogous expression may be written
for a Poissonian background \cite{IOP:05}. For small
values of the mean number $D$ of dark counts (as generally happens at
optical frequencies) the two kinds of background are indistinguishable.
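As a check of the POVM (\ref{povm1}), consider a coherent-state input $|\alpha\rangle$ (a test case assumed here, not discussed in the text): summing the no-click probability in the number basis gives the closed form $e^{-\eta|\alpha|^2/(1+D)}/(1+D)$, which the sketch below verifies:

```python
import math

def p_off_series(alpha2, eta, D, kmax=200):
    # <alpha| Pi_{0,eta,D} |alpha>, summed in the number basis, Eq. (povm1);
    # alpha2 = |alpha|^2
    q = 1 - eta / (1 + D)
    term = math.exp(-alpha2)  # k = 0 Poisson weight |<0|alpha>|^2
    total = 0.0
    for k in range(kmax):
        total += q ** k * term
        term *= alpha2 / (k + 1)  # next Poisson weight
    return total / (1 + D)

def p_off_closed(alpha2, eta, D):
    return math.exp(-eta * alpha2 / (1 + D)) / (1 + D)

for alpha2, eta, D in [(0.5, 0.8, 0.0), (2.0, 0.6, 0.1), (4.0, 1.0, 0.05)]:
    assert abs(p_off_series(alpha2, eta, D) - p_off_closed(alpha2, eta, D)) < 1e-12
```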
\par
Overall, taking into account the displacement, the measurement
on both modes $a$ and $b$ is described by the POVM (we are
assuming the same quantum efficiency and dark counts for both the
photodetectors)
\begin{equation}
{\Pi}_{hk}^{(\eta,D)} (\alpha,\beta)=
{\Pi}_{h}^{(\eta,D)}(\alpha)\,
\otimes
{\Pi}_{k}^{(\eta,D)}(\beta)\,
\label{povmhk}\;,
\end{equation}
where $h,k = 0,1$, and ${\Pi}_{h}^{(\eta,D)}(z)\equiv D(z)\,
{\Pi}_{h,\eta,D}\,D^{\dag}(z)$, $D(z)=\exp\left\{z a^\dag - z^* a\right\}$
being the displacement operator and $z\in {\mathbb C}$ a complex
parameter.
\par
In order to analyze the nonlocality of the state $\varrho$,
we introduce the following correlation function:
\begin{align}
E_{\eta,D}(\alpha,\beta) &=
\sum_{h,k=0}^{1} (-)^{h+k}\,
\left\langle {\Pi}_{hk}^{(\eta,D)} (\alpha,\beta)
\right\rangle \label{E:eta} \\
&= 1 + 4\, {\cal I}_{\eta,D}(\alpha,\beta) -
2\, \left[ {\cal G}_{\eta,D}(\alpha) + {\cal Y}_{\eta,D}(\beta) \right]\:,
\nonumber
\end{align}
where
\begin{align}
&{\cal I}_{\eta,D}(\alpha,\beta) =
\left\langle {\Pi}_{00}^{(\eta,D)} (\alpha,\beta) \right\rangle
\label{DefFunsa} \\
&{\cal G}_{\eta,D}(\alpha) =
\left\langle {\Pi}_{0}^{(\eta,D)}(\alpha)\otimes\mathbb{I} \right\rangle
\label{DefFunsb} \\
&{\cal Y}_{\eta,D}(\beta) =
\left\langle \mathbb{I}\otimes{\Pi}_{0}^{(\eta,D)}(\beta) \right\rangle\:,
\label{DefFunsc}
\end{align}
and where $\media{ A } \equiv {\rm Tr}[\varrho\, A]$ denotes
ensemble average on both the modes.
The so-called Bell parameter is defined by considering four different
values of the complex displacement parameters as follows:
\begin{align}
{\cal B}_{\eta,D} &= E_{\eta,D}(\alpha,\beta) +
E_{\eta,D}(\alpha',\beta) \nonumber \\
&\hspace{1cm} + E_{\eta,D}(\alpha,\beta') - E_{\eta,D}(\alpha',\beta')\\
&= \:\: 2 + 4\left\{ {\cal I}_{\eta,D}(\alpha,\beta)
+ {\cal I}_{\eta,D}(\alpha',\beta) +
{\cal I}_{\eta,D}(\alpha,\beta')\right. \nonumber\\
&\left.\hspace{1cm}
- {\cal I}_{\eta,D}(\alpha',\beta')
- {\cal G}_{\eta,D}(\alpha) - {\cal Y}_{\eta,D}(\beta) \right\}\,.
\label{B:param}
\end{align}
Any local theory implies that $|{\cal B}_{\eta,D}|$ satisfies the
CHSH version of the Bell inequality, i.e., $|{\cal B}_{\eta,D}|\leq 2$
$\forall \alpha,\alpha',\beta, \beta'$ \cite{CHSH}, while the
quantum mechanical description of the same kind of experiment does not
impose this bound.
\par
Notice that using Eqs.~(\ref{povm1}) and
(\ref{DefFunsa})--(\ref{DefFunsc}), we obtain the following scaling
properties for the functions ${\cal I}_{\eta,D}(\alpha,\beta)$, ${\cal
G}_{\eta,D}(\alpha)$ and ${\cal Y}_{\eta,D}(\beta)$
\begin{align}
\label{D2etaa}
&{\cal I}_{\eta,D}(\alpha,\beta) = \left(\frac{1}{1+D}\right)^2\,
{\cal I}_{\eta/(1+D)}(\alpha,\beta)\\
\label{D2etab}
&{\cal G}_{\eta,D}(\alpha) = \frac{1}{1+D}\,{\cal G}_{\eta/(1+D)}(\alpha)\\
\label{D2etac}
&{\cal Y}_{\eta,D}(\beta) = \frac{1}{1+D}\,{\cal Y}_{\eta/(1+D)}(\beta)
\end{align}
where ${\cal I}_{\eta} = {\cal I}_{\eta,0}$,
${\cal G}_{\eta} = {\cal G}_{\eta,0}$, and
${\cal Y}_{\eta} = {\cal Y}_{\eta,0}$.
Therefore, it will be enough to study the Bell parameter
for $D=0$, namely ${\cal B}_{\eta} = {\cal B}_{\eta,0}$, and then we can use
Eqs.~(\ref{D2etaa})--(\ref{D2etac}) to take into account the effects of
non-negligible dark counts. From now on we will assume $D=0$ and suppress
the explicit dependence on $D$. Notice that using expression
(\ref{B:param}) for the Bell parameter the CHSH inequality $|{\cal
B}_{\eta,D}|\leq 2$ can be rewritten as
\begin{align}
-1 <& \:\:
{\cal I}_{\eta,D}(\alpha,\beta) +
{\cal I}_{\eta,D}(\alpha',\beta) +
{\cal I}_{\eta,D}(\alpha,\beta') \nonumber \\ &-
{\cal I}_{\eta,D}(\alpha',\beta')
- {\cal G}_{\eta,D}(\alpha) - {\cal Y}_{\eta,D}(\beta) < 0
\label{CH}\;,
\end{align}
which represents the CH version of the Bell inequality for our system \cite{CH}.
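The scaling relations (\ref{D2etaa})--(\ref{D2etac}) stem from the elementwise operator identity $\Pi_{0,\eta,D} = (1+D)^{-1}\,\Pi_{0,\eta/(1+D),0}$, immediate from Eq.~(\ref{povm1}); a minimal numerical check of the diagonal matrix elements (a sketch with arbitrary test values):

```python
def pi_off_diag(eta, D, kmax):
    # diagonal elements <k| Pi_{0,eta,D} |k> from Eq. (povm1)
    return [(1 - eta / (1 + D)) ** k / (1 + D) for k in range(kmax)]

eta, D, kmax = 0.7, 0.3, 50
lhs = pi_off_diag(eta, D, kmax)
rhs = [p / (1 + D) for p in pi_off_diag(eta / (1 + D), 0.0, kmax)]
assert all(abs(u - v) < 1e-15 for u, v in zip(lhs, rhs))
```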
\par
In order to simplify the calculations, throughout this Section we will use
the Wigner formalism. The Wigner functions associated with the elements of
the POVM (\ref{povm1}) for $D=0$ are given by \cite{IOP:05}
\begin{align}
W[\Pi_{0,\eta}] (z) &= \frac{\Delta_\eta}{\pi \eta}\,
\exp\left\{ - \Delta_\eta\, |z|^2 \right\}\,, \\
W[\Pi_{1,\eta}] (z) &= W[\iid](z) - W[\Pi_{0,\eta}] (z) \,,
\end{align}
with $\Delta_\eta = 2 \eta / (2 - \eta)$, and $W[\iid](z)=\pi^{-1}$.
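A consistency check on $\Delta_\eta$: pairing $W[\Pi_{0,\eta}]$ with the vacuum Wigner function $W_{|0\rangle}(z)=(2/\pi)\,e^{-2|z|^2}$ (the standard form in this convention, assumed here) under the single-mode trace rule ${\rm Tr}[O_1 O_2]=\pi\int d^2z\,W[O_1](z)\,W[O_2](z)$ must give $\langle 0|\Pi_{0,\eta}|0\rangle = 1$ for every $\eta$ at $D=0$, since a dark-count-free detector never clicks on the vacuum. A numerical sketch:

```python
import math

def w_pi0(r2, eta):
    # Wigner function of Pi_{0,eta} evaluated at |z|^2 = r2
    delta = 2 * eta / (2 - eta)
    return delta / (math.pi * eta) * math.exp(-delta * r2)

def overlap_with_vacuum(eta, steps=100000, rmax=6.0):
    # pi * Int d^2z W[Pi_{0,eta}](z) W_vac(z), radial form d^2z = 2*pi*r*dr
    h = rmax / steps
    total = 0.0
    for i in range(1, steps):
        r = i * h
        total += w_pi0(r * r, eta) * (2 / math.pi) * math.exp(-2 * r * r) * 2 * math.pi * r
    return math.pi * total * h

for eta in (0.3, 0.6, 0.9):
    assert abs(overlap_with_vacuum(eta) - 1.0) < 1e-6
```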
Then, noticing that for any operator $O$ one has
\begin{equation}
W[D(\alpha)\,O \,D^{\dag}(\alpha)](z) = W[O](z - \alpha)\,,
\end{equation}
it follows that
$W[D(\alpha)\,{\Pi}_{0,\eta}\,D^{\dag}(\alpha)](z)$ is given by
\begin{equation}
W[D(\alpha)\,{\Pi}_{0,\eta}\,D^{\dag}(\alpha)](z)
= W[\Pi_{0,\eta}] (z-\alpha)\,,
\end{equation}
and therefore
\begin{align}
W[\Pi_{00}^{(\eta,0)}(\alpha,\beta)](z,w)&=
W[\Pi_{0,\eta}] (z-\alpha)\, W[\Pi_{0,\eta}] (w-\beta) \\
W[\Pi_{0,\eta}(\alpha)\otimes\iid](z,w)&=
W[\Pi_{0,\eta}] (z-\alpha)\: \pi^{-1} \\
W[\iid \otimes \Pi_{0,\eta}(\beta)](z,w)&=
\pi^{-1}\:W[\Pi_{0,\eta}] (w-\beta)
\label{uwigs}\;.
\end{align}
Finally, thanks to the trace rule expressed in the phase space
of two modes, i.e.,
\begin{equation}
{\rm Tr}[O_1\,O_2] =
\pi^2
\int_{\mathbb{C}^2}\!d^{2}z \:d^{2}w
\, W[O_1](z,w)\,W[O_2](z,w)\,,
\end{equation}
one can evaluate the functions ${\cal I}_{\eta}(\alpha,\beta)$,
${\cal G}_{\eta}(\alpha)$, and ${\cal Y}_{\eta}(\beta)$, and in turn
the Bell parameter ${\cal B}_{\eta}$ in Eq.~(\ref{B:param}),
as a sum of Gaussian integrals in the complex plane.
\par
Let us now consider the TWB (\ref{twb:wig}). Since the Wigner functions of
the TWB and of the POVM (\ref{povmhk}) are Gaussian, it is quite simple
to evaluate ${\cal I}_{\eta}(\alpha, \beta)$, ${\cal G}_{\eta}(\alpha)$, and
${\cal Y}_{\eta}(\beta)$ of the correlation function (\ref{E:eta}) and, then,
${\cal B}_{\eta}$; we have
\begin{align}
&{\cal I}_{\eta} (\alpha,\beta) =
\frac{{\cal M}_{\eta}(r)}{\eta^2\sqrt{{\rm Det}[\bmsigma_0]}}\,\nonumber \\
&\hspace{1cm}\times \exp\big\{
- \widetilde{F}_{\eta} \, (|\alpha|^2 + |\beta|^2)
+\widetilde{H}_{\eta} \, (\alpha\beta + \calpha\cbeta)\big\}\\
&{\cal G}_{\eta}(\alpha) = {\cal Y}_{\eta}(\alpha) =
\frac{\left(2\sqrt{{\rm Det}[\bmsigma_0]}\right)^{-1} \Delta_{\eta}}
{[2(\witA_0^2 - \witB_0^2) + \witA_0 \Delta_{\eta}]}\nonumber \\
&\hspace{1cm}\times
\exp\left\{ -\frac{2 \Delta_{\eta} }
{ 2(\witA_0^2 - \witB_0^2) + \witA_0 \Delta_{\eta} }\,|\alpha|^{2} \right\}
\end{align}
with
\begin{align}
&\widetilde{F}_{\eta}
\equiv\widetilde{F}_{\eta}(r)=
\Delta_{\eta}-(2 \witA_0 + \Delta_\eta)\,{\cal M}_{\eta}(r)\\
&\widetilde{H}_{\eta} \equiv
\widetilde{H}_{\eta}(r)= 2 \witB_0 {\cal M}_{\eta}(r)\\
&{\cal M}_{\eta}(r)
= \frac{\Delta_{\eta}^2}{4(\witA_0^2 - \witB_0^2) + 4 \witA_0 \Delta_{\eta} +
\Delta_{\eta}^2 }\,.
\end{align}
\par
In order to study Eq.~(\ref{B:param}), we consider the parametrization
$\alpha = -\beta = {\cal J}$ and $\alpha' = -\beta' = -\sqrt{11}{\cal J}$
(more details are given in \cite{IOP:05}).
The parametrization was chosen after a semi-analytical
analysis and maximizes the violation of the Bell's inequality (for
$\eta=1$). In Fig.~\ref{f:3D} we plot ${\cal B}_{\eta}$ for $\eta=1$:
as one can see the inequality $|{\cal B}_{\eta}|\leq 2$ is violated for a
wide range of parameters, and the maximum violation (${\cal
B}_{\eta}=2.45$) is achieved when ${\cal J}=0.16$ and $r=0.74$.
The effect of non-unit efficiency in the detection stage is to reduce
the violation; this is shown in Fig.~\ref{f:eta}, where we plot ${\cal
B}_{\eta}$ as a function of ${\cal J}$ with $r=0.74$ for different values
of the quantum efficiency. Note that though the violation in the ideal
case, i.e., $\eta=1$, is smaller than for the Bell states, the TWBs
are more robust when one takes into account non-unit quantum
efficiency.
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:3D}
\end{figure}
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:eta}
\end{figure}
\par
In the case of the state (\ref{ips:wigner}), the correlation function
(\ref{E:eta}) reads (for the sake of simplicity we do not write explicitly
the dependence on $r$, $T$ and $\varepsilon$)
\begin{multline}
E_{\eta}(\alpha,\beta) = 1 +
\frac{1}{p_{11}(r,T,\varepsilon)}
\sum_{k=1}^4 {\cal C}_k\,
\big\{ 4\, {\cal I}^{(k)}_{\eta}(\alpha, \beta) \\
- 2 \big[{\cal G}^{(k)}_{\eta}(\alpha) +
{\cal Y}^{(k)}_{\eta}(\beta)\big]\big\}\,,
\end{multline}
where
\begin{align}
&{\cal I}^{(k)}_{\eta} (\alpha,\beta) =
\frac{{\cal M}_{\eta}^{(k)}(r,T,\varepsilon)}{\eta^2}\, \nonumber\\
&\times
\exp\big\{
-\widetilde{G}_{\eta}^{(k)} \, |\alpha|^2
-\widetilde{F}_{\eta}^{(k)} \, |\beta|^2
+\widetilde{H}_{\eta}^{(k)} \, (\alpha\beta + \calpha\cbeta)
\big\}\,,\\
&{\cal G}^{(k)}_{\eta}(\alpha) =
\frac{ \Delta_{\eta}}{[G_k\,(F_k + \Delta_{\eta}) -
H_k^2]\,\eta} \nonumber\\
&\hspace{1cm}\times
\exp\left\{
-\frac{(F_k G_k - H_k^2)\,\Delta_{\eta}}{G_k\,(F_k +
\Delta_{\eta})-H_k^2}\,|\alpha|^{2} \right\}\,, \\
&{\cal Y}^{(k)}_{\eta}(\beta) =
\frac{ \Delta_{\eta}}{[F_k\,(G_k + \Delta_{\eta}) -
H_k^2]\,\eta} \nonumber\\
&\hspace{1cm}\times
\exp\left\{
-\frac{(F_k G_k - H_k^2)\,\Delta_{\eta}}{F_k\,(G_k +
\Delta_{\eta})-H_k^2}\,|\beta|^{2} \right\}\,,
\end{align}
with $\widetilde{F}_{\eta}^{(k)}
\equiv\widetilde{F}_{\eta}^{(k)}(r,T,\varepsilon)$,
$\widetilde{G}_{\eta}^{(k)}
\equiv\widetilde{G}_{\eta}^{(k)}(r,T,\varepsilon)$, and
$\widetilde{H}_{\eta}^{(k)}
\equiv\widetilde{H}_{\eta}^{(k)}(r,T,\varepsilon)$ given by
\begin{align}
&\widetilde{F}_{\eta}^{(k)} =
\Delta_{\eta}-(F_k+\Delta_\eta)\,{\cal M}_{\eta}^{(k)}(r,T,\varepsilon)\,,\\
&\widetilde{G}_{\eta}^{(k)} =
\Delta_{\eta}-(G_k+\Delta_\eta)\,{\cal M}_{\eta}^{(k)}(r,T,\varepsilon)\,,\\
&\widetilde{H}_{\eta}^{(k)} =
H_k\,{\cal M}_{\eta}^{(k)}(r,T,\varepsilon)\,,\\
&{\cal M}_{\eta}^{(k)}(r,T,\varepsilon)
= \frac{\Delta_{\eta}^2}{(F_k+\Delta_{\eta})(G_k+\Delta_{\eta}) - H_k^2}\,,
\end{align}
where $F_k = b-f_k$, $G_k = b-g_k$, and $H_k = 2 \witB_0 T + h_k$, and all
the involved quantities are the same as in Eq.~(\ref{ips:wigner}).
\par
In order to study Eq.~(\ref{B:param}), we consider the parametrization
$\alpha = -\beta = {\cal J}$ and $\alpha' = -\beta' = -\sqrt{11}{\cal J}$.
This parametrization was chosen after a semi-analytical analysis and
maximizes the violation of the Bell's inequality (for $\eta=1$)
\cite{IOP:05}. The results are shown in Figs.~\ref{f:IPS3D} and
\ref{f:IPStau} for $\eta = 1$ and $\varepsilon = 1$: we can see that the
IPS enhances the violation of the inequality $|{\cal B}_{\eta}|\le 2$ for
small values of $r$ (see also Refs.~\cite{OP:PSnoise,ips:PRA:67, ips:PRA:70}).
Moreover, as one may expect, the maximum violation is
achieved as $T \to 1$, whereas, as the effective transmissivity of the
IPS process decreases, the inequality becomes satisfied for all
values of $r$, as we can see in Fig.~\ref{f:IPStau} for $T = 0.6$.
\par
In Fig.~\ref{f:IPSeta} we plot ${\cal B}_{\eta}$ for the IPS
with $T = 0.9999$, $\varepsilon = 1$ and different $\eta$.
As for the TWB, we can have
violation of the Bell's inequality also for detection efficiencies close to
$80 \%$. As for the Bell states and the TWB, an $\eta$- and $r$-dependent
choice of the parameters in Eq.~(\ref{B:param}) can improve this result.
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:IPS3D}
\end{figure}
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:IPStau}
\end{figure}
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:IPSeta}
\end{figure}
The effect of a non-unit $\varepsilon$ is studied in
Fig.~\ref{f:IPSTauEta}, where we plot ${\cal B}_\eta$ as a function of $T$
and $\varepsilon$ for fixed values of the other involved parameters.
We can see that the main effect on the Bell parameter is due to the
transmissivity $T$.
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:IPSTauEta}
\end{figure}
\par
Finally, the effect of dissipation and thermal noise affecting the
propagation of the TWB before the IPS process is shown in
Fig.~\ref{f:IPSN}.
\begin{figure}
\caption{Plot of ${\cal B}_{\eta}$.}
\label{f:IPSN}
\end{figure}
\section{Conclusions} \label{s:remarks}
We have analyzed in detail a photon subtraction scheme to
de-Gaussify states of radiation and, in particular, to
enhance the nonlocal properties of twin beams. The scheme is
based on conditional inconclusive subtraction of photons (IPS),
which may be achieved by means of linear optical components
and avalanche on/off photodetectors. The IPS process can
be implemented with current technology and, indeed, its application
to single-mode states has recently been realized with high
conditional probability \cite{weng:PRL:04}.
\par
We found that the IPS process improves the fidelity of
coherent-state teleportation and showed, by using several
different nonlocality tests, that it also enhances nonlocal
correlations. IPS may be profitably used also on
nonmaximally entangled mixed states,
such as the ones coming from the evolution of a TWB in a noisy channel.
In addition, the effectiveness of the process is not dramatically
influenced either by the transmissivity
of the beam splitter used to subtract photons, or by
the quantum efficiency of the detectors used to reveal them.
\par
We conclude that IPS on TWB is a robust and realistic scheme to
improve quantum information processing with CV radiation states.
\section{Acknowledgments}
This work has been supported by MIUR through the project
PRIN-2005024254-002.
\end{document}
\begin{document}
\allowdisplaybreaks
\renewcommand{\thefootnote}{$\star$}
\newcommand{1510.08599}{1510.08599}
\renewcommand{042}{042}
\FirstPageHeading
\ShortArticleName{Zeros of Quasi-Orthogonal Jacobi Polynomials}
\ArticleName{Zeros of Quasi-Orthogonal Jacobi Polynomials\footnote{This paper is a~contribution to the Special Issue
on Orthogonal Polynomials, Special Functions and Applications.
The full collection is available at \href{http://www.emis.de/journals/SIGMA/OPSFA2015.html}{http://www.emis.de/journals/SIGMA/OPSFA2015.html}}}
\Author{Kathy DRIVER and Kerstin JORDAAN}
\AuthorNameForHeading{K.~Driver and K.~Jordaan}
\Address{Department of Mathematics and Applied Mathematics, University of Pretoria,\\ Pretoria, 0002, South Africa}
\Email{\href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}}
\ArticleDates{Received October 30, 2015, in f\/inal form April 20, 2016; Published online April 27, 2016}
\Abstract{We consider interlacing properties satisf\/ied by the zeros of Jacobi polynomials in quasi-orthogonal sequences characterised by $\alpha>-1$, $-2<\beta<-1$. We give necessary and suf\/f\/icient conditions under which a conjecture by Askey, that the zeros of Jacobi polyno\-mials~$P_n^{(\alpha, \beta)}$ and~$P_{n}^{(\alpha,\beta+2)}$ are interlacing, holds when the parameters~$\alpha$ and~$\beta$ are in the range $\alpha>-1$ and $-2<\beta<-1$. We prove that the zeros of $P_n^{(\alpha, \beta)}$ and $P_{n+1}^{(\alpha,\beta)}$ do not interlace for any $n\in\mathbb{N}$, $n\geq2$ and any f\/ixed~$\alpha$,~$\beta$ with $\alpha>-1$, $-2<\beta<-1$. The interlacing of zeros of~$P_n^{(\alpha,\beta)}$ and $P_m^{(\alpha,\beta+t)}$ for $m,n\in\mathbb{N}$ is discussed for~$\alpha$ and~$\beta$ in this range, $t\geq 1$, and new upper and lower bounds are derived for the zero of $P_n^{(\alpha,\beta)}$ that is less than~$-1$.}
\Keywords{interlacing of zeros; quasi-orthogonal Jacobi polynomials}
\Classification{33C50; 42C05}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
Let $\{p_n\}_{n=0}^\infty$, $\deg(p_n)=n$, $n\in\mathbb{N}$, be a sequence of orthogonal polynomials with respect to a~positive Borel measure $\mu$ supported on an interval~$(a,b)$. It is well known
(see~\cite{Sze}) that the zeros of $p_n$ are real and simple and lie in $(a,b)$ while, if for each $n$ we denote the zeros of $p_n$ in increasing
order by $x_{1,n}<x_{2,n}<\dots<x_{n,n}$, then
\begin{gather*}
x_{1,n}<x_{1,n-1}<x_{2,n}<x_{2,n-1}<\dots<x_{n-1,n-1}<x_{n,n},
\end{gather*}
a property called the interlacing of zeros.
Since our discussion will include interlacing of zeros of polynomials of non-consecutive degree, we recall the following def\/initions:
\begin{definition}\label{SI}
Let $n \in {\mathbb N}$. If $x_{1,n}<x_{2,n}<\dots<x_{n,n}$ are the zeros of $p_n$ and
$y_{1,n}<y_{2,n}<\dots<y_{n,n}$ are the zeros of $q_{n}$, then the zeros of $p_n$ and $q_{n}$ are interlacing if
\begin{gather*}x_{1,n}<y_{1,n}<x_{2,n}<y_{2,n}<\dots<x_{n,n}<y_{n,n}
\end{gather*}
or if
\begin{gather*}y_{1,n}<x_{1,n}<y_{2,n}<x_{2,n}<\dots<y_{n,n}<x_{n,n}.
\end{gather*}
\end{definition}
The def\/inition of interlacing of zeros of two polynomials whose degrees dif\/fer by more than one was introduced by Stieltjes~\cite{Sze}.
\begin{definition}
Let $m,n \in {\mathbb N}$, $m \leq n-2$. The zeros of the polynomials $p_n$ and~$q_m$ are interlacing if there exist~$m$ open intervals, with endpoints at successive zeros of~$p_n$, each of which contains exactly one zero of~$q_m$.
\end{definition}
{\bf Askey conjecture.} In \cite{Ask}, Richard Askey
conjectured that the zeros of the Jacobi polyno\-mials~$P_{n}^{(\alpha,\beta)}$ and $P_n^{(\alpha,\beta+2)}$ are interlacing for each $n \in {\mathbb N}$, $\alpha,\beta > -1$. A more
general version of the Askey conjecture was proved in~\cite{DrJoMb}, namely that
the zeros of $P_{n}^{(\alpha,\beta)}$ and the zeros of $P_n^{(\alpha-k,\beta+t)}$ are interlacing
for each $n\in\mathbb{N}$, $\alpha,\beta>-1$ and any real numbers $t$ and~$k$ with $0\leq t,k\leq 2$.
Here, we investigate Askey's conjecture,
and several extensions thereof, in the context of sequences of Jacobi polynomials that are
quasi-orthogonal of order~$1$.
The concept of quasi-orthogonality of order $1$ was introduced by
Riesz in~\cite{Rie} in his seminal work on the moment problem. Fej\'{e}r~\cite{Fej} considered
quasi-orthogonality of order $2$ while the general case was f\/irst studied by Shohat~\cite{Sho}. Chihara~\cite{Chi} discussed quasi-orthogonality of order $r$ in the context of three-term recurrence relations and Dickinson~\cite{Dickinson} improved Chihara's result by deriving a system of recurrence relations that provides both necessary and suf\/f\/icient conditions for quasi-orthogonality. Algebraic properties of the linear functional associated with quasi-orthogonality are investigated in \cite{Draux, maroni1, maroni2, maroni}. Quasi-orthogonal polynomials have also been studied in the context of connection coef\/f\/icients, see for example \cite{area, askey65,Dimitrov, Ism, sz,tr73,tr75,wilson} as well as Geronimus canonical spectral transformations of the measure (cf.~\cite{zh}). Properties of orthogonal polynomials associated with such Geronimus perturbations, including properties satisf\/ied by the zeros, have been analysed in~\cite{Branq}.
The def\/inition of quasi-orthogonality of a sequence of polynomials is the following:
\begin{definition}
Let $\{q_n\}_{n=0}^\infty$ be a sequence of polynomials with $\deg(q_n) = n$ for each $n \in
{\mathbb N}$. For a positive integer $r < n$, the sequence $\{q_n\}_{n=0}^\infty$ is
quasi-orthogonal of order $r$ with respect to a positive Borel measure $\mu$ if
\begin{gather}
\int x^k q_n(x) d{\mu} (x) =0 \qquad \mbox{for} \quad k=0,\ldots,n-1-r.
\label{2}
\end{gather}
\end{definition}
If (\ref{2}) holds for $r=0$, the sequence $\{q_n\}_{n=0}^\infty$ is orthogonal with respect to
the measure~$\mu$. A~cha\-racterisation of a polynomial $q_n$ that is quasi-orthogonal of order~$r$ with respect to a~positive measure~$\mu$, as a linear combination of $p_n,p_{n-1},\dots,p_{n-r}$ where $\{p_n\}_{n=0}^\infty$ is orthogonal with respect to~$\mu$, was f\/irst investigated by Shohat (cf.~\cite{Sho}). A full statement and proof of this result can be found in \cite[Theorem~1]{BDR}.
Quasi-orthogonal polynomials arise in a natural way in the context of
classical orthogonal polynomials that depend on one or more parameters. The sequence of Jacobi polyno\-mials~$\big\{P_{n}^{(\alpha, \beta)}\big\} _{n=0}^\infty$ is orthogonal on $(-1,1)$ with respect to the weight function $(1-x)^{\alpha}(1+x)^{\beta}$ when $\alpha > -1$, $\beta > -1$. The three-term recurrence relation \cite[(4.5.1)]{Sze} satisf\/ied by the sequence is
\begin{gather}
c_n P_n^{(\alpha, \beta)} (x) = (x - d_n) P_{n-1}^{(\alpha, \beta)} (x) - e_n P_{n-2}^{(\alpha, \beta)} (x), \qquad n = 2,3,\dots, \label{1}
\end{gather}
where
\begin{gather}c_{n} = 2n(n +\alpha + \beta)/ (2n +\alpha + \beta-1)(2n +\alpha + \beta),\nonumber\\
d_{n} = \big({\beta}^2 - {\alpha}^2\big)/(2n +\alpha + \beta -2) (2n +\alpha + \beta), \label{d}\\
e_{n} = 2(n +\alpha -1) (n + \beta -1)/(2n +\alpha + \beta-2)(2n +\alpha + \beta -1),\nonumber
\end{gather}
and $P_{0}^{(\alpha,\beta)} (x) \equiv 1$, $P_1^{(\alpha,\beta)} (x) = \frac12 (\alpha
+\beta +2) x +\frac12(\alpha - \beta)$. For values of $\alpha$ and $\beta$ outside the range
$\alpha ,\beta > -1$, the Jacobi sequence $\big\{P_{n}^{(\alpha, \beta)}\big\} _{n=0}^\infty$ can be
def\/ined by the three-term recurrence relation~\eqref{1}. The quasi-orthogonal Jacobi sequences of order $1$ and $2$ are of particular interest since, apart from the orthogonal Jacobi sequences, these are the only sequences of Jacobi polynomials for $\alpha,\beta\in \mathbb{R}$ where all~$n$ zeros of $P_{n}^{(\alpha,\beta)}$ are real and distinct.
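Since the recurrence~\eqref{1} def\/ines $P_{n}^{(\alpha,\beta)}$ for $\beta<-1$ as well, the polynomials and their zeros can be generated numerically straight from~\eqref{1}. The following is a minimal sketch assuming NumPy; the helper names \texttt{jacobi} and \texttt{zeros} are ours, not part of any library.

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    """P_n^{(a,b)} built from the three-term recurrence (1); valid also for b < -1."""
    p_prev = Poly([1.0])                            # P_0 = 1
    p_curr = Poly([(a - b) / 2, (a + b + 2) / 2])   # P_1 = ((a+b+2)x + (a-b))/2
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    """Sorted real zeros of P_n^{(a,b)}."""
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

# Quasi-orthogonal (order 1) case: all n zeros real, exactly one below -1.
z = zeros(5, 0.5, -1.5)
```

For $\alpha=0.5$, $\beta=-1.5$, $n=5$ this reproduces the qualitative picture of the order-$1$ quasi-orthogonal case: one zero below $-1$, the remaining four inside $(-1,1)$.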
In \cite[Theorem~7]{BDR}, it is proved that if $-1< \alpha,\beta <0$, and $k,l \in {\mathbb N}$ with $k+l<n$, the
Jacobi polynomials $\big\{P_{n}^{(\alpha-k, \beta-l)}\big\} _{n=0}^\infty$ are quasi-orthogonal of order~$k+l$ with respect to the weight function $(1-x)^{\alpha}(1+x)^{\beta}$ on the interval $[-1,1]$.
Interlacing properties of zeros of quasi-orthogonal and orthogonal Jacobi polynomials of the same or consecutive degree were discussed and the following result proved in \cite[Corollary~4]{BDR}.
\begin{lemma}\label{bdr}
Fix $\alpha$ and $\beta$, $\alpha > -1$ and $-2 < \beta< -1$ and denote the sequence of Jacobi
polynomials by $\big\{P_{n}^{(\alpha, \beta)}\big\}_{n=0}^\infty$. For each $n \in{\mathbb N}$, $n \geq
1$, let $x_{1,n}<x_{2,n}<\dots<x_{n,n}$ denote the zeros of the $($quasi-orthogonal$)$ polynomial $P_{n}^{(\alpha,\beta)}$ and
$y_{1,n}<y_{2,n}<\dots<y_{n,n}$ denote the zeros of the $($orthogonal$)$ polynomial $P_{n}^{(\alpha,\beta +1)}$. Then
\begin{gather} x_{1,n}< -1 < y_{1,n} < x_{2,n}< y_{2,n} <\dots< x_{n,n}< y_{n,n}<1
\label{2.1}
\end{gather}
and
\begin{gather}x_{1,n+1}< -1 < y_{1,n} < x_{2,n+1}< y_{2,n} <\dots< x_{n,n+1}< y_{n,n}<
x_{n+1,n+1}<1. \label{2.2}
\end{gather}
\end{lemma}
For a proof, see \cite[Corollary~4(ii)(a)]{BDR} with $\beta$ replaced by $\beta +1$.
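The chain \eqref{2.1} is easy to probe numerically by building both polynomials from the recurrence~\eqref{1}. A minimal sketch, assuming NumPy (the helpers \texttt{jacobi} and \texttt{zeros} are ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    # P_n^{(a,b)} from the three-term recurrence (1); valid also for b < -1
    p_prev, p_curr = Poly([1.0]), Poly([(a - b) / 2, (a + b + 2) / 2])
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

n, a, b = 6, 0.3, -1.4
x = zeros(n, a, b)       # zeros of the quasi-orthogonal P_n^{(a,b)}
y = zeros(n, a, b + 1)   # zeros of the orthogonal P_n^{(a,b+1)}

# chain (2.1): x_1 < -1 < y_1 < x_2 < y_2 < ... < x_n < y_n < 1
chain = [x[0], -1.0]
for i in range(1, n):
    chain += [y[i - 1], x[i]]
chain += [y[n - 1], 1.0]
ok = all(s < t for s, t in zip(chain, chain[1:]))
```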
In \cite{bu}, Bustamante, Mart\'{i}nez-Cruz and Quesada apply the interlacing properties of zeros of quasi-orthogonal and orthogonal Jacobi polynomials given in~\cite{DrJoMb} and in Lemma~\ref{bdr} to show that best possible one-sided polynomial approximants to a unit step function on the interval $[-1,1]$, which are in some cases unique, can be obtained using Hermite interpolation at interlaced zeros of quasi-orthogonal and orthogonal Jacobi polynomials.
We assume throughout this paper that~$\alpha$ and $\beta$ are f\/ixed numbers lying in the range $\alpha>-1$, $-2<\beta<-1$.
In Section~\ref{sectionintqo}, we analyse the interlacing properties of zeros of polynomials of consecutive, and non-consecutive, degree within a sequence of quasi-orthogonal Jacobi polynomials of order~$1$. In Section~\ref{sectionAskey} we prove a necessary and suf\/f\/icient condition for the Askey conjecture to hold between the zeros of an orthogonal and a quasi-orthogonal (order~1) sequence of Jacobi polynomials of the same degree and then extend this to the case where the polynomials are of consecutive degree. In Section~\ref{sectionst} we discuss interlacing properties and inequalities satisf\/ied by the zeros of orthogonal and quasi-orthogonal (order~$1$) Jacobi polynomials whose degrees dif\/fer by more than unity. In Section~\ref{bounds} we derive upper and lower bounds for the zero of $P_n^{(\alpha,\beta)}$ that is $<-1$.
Note that, since Jacobi polynomials satisfy the symmetry property \cite[equation~(4.1.1)]{Ism}
\begin{gather}
P_n^{(\alpha, \beta)} (x) = {(-1)}^n P_n^{(\beta, \alpha)} (-x), \label{2.6}
\end{gather}
each result proved for quasi-orthogonal Jacobi polynomials $P_n^{(\alpha,\beta)}$ with $\alpha > -1$, $-2 < \beta< -1$ has an analogue for the corresponding quasi-orthogonal polynomial with $\beta>-1$, $-2<\alpha<-1$.
\section{Quasi-orthogonal Jacobi polynomials of order 1}\label{sectionintqo}
\subsection[Zeros of $P_{n}^{(\alpha,\beta)}$ and $P_{n-k}^{(\alpha,\beta)}$, $k,n\in\mathbb{N}$, $1\leq k<n$]{Zeros of $\boldsymbol{P_{n}^{(\alpha,\beta)}}$ and $\boldsymbol{P_{n-k}^{(\alpha,\beta)}}$, $\boldsymbol{k,n\in\mathbb{N}}$, $\boldsymbol{1\leq k<n}$}
Our f\/irst result proves that for any $n \in {\mathbb N}$, $n \geq 2$, interlacing does not hold between the
$n$ real zeros of $P_{n}^{(\alpha,\beta)}$ and the $n+1$ real zeros of $P_{n+1}^{(\alpha,\beta)}$.
However, the $n-1$ zeros of $P_{n}^{(\alpha,\beta)}$ in $(-1,1)$ interlace with the $n$ zeros of $P_{n+1}^{(\alpha,\beta)}$ in $(-1,1)$.
Moreover, the $n+1$ zeros of $(1+x) P_{n}^{(\alpha,\beta)}(x)$ interlace with the $n+1$ zeros of
$P_{n+1}^{(\alpha,\beta)}(x)$ for each $n\in\mathbb{N}$.
\begin{theorem} \label{Th:2.1}
Fix $\alpha$ and $\beta$, $\alpha > -1$ and $-2 < \beta< -1$ and denote the sequence of Jacobi
polynomials by $\big\{P_{n}^{(\alpha, \beta)}\big\}_{n=0}^\infty$. For each $n \in{\mathbb N}$, $n \geq
1$, let $x_{1,n}<x_{2,n}<\dots<x_{n,n}$ denote the zeros of $P_{n}^{(\alpha,\beta)}$. Then
\begin{gather}x_{1,n} < x_{1,n+1}< -1 < x_{2,n+1}< x_{2,n} <\dots < x_{n,n+1}< x_{n,n}< x_{n+1,n+1}<1. \label{2.3}
\end{gather}
\end{theorem}
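The chain \eqref{2.3} can be spot-checked numerically by generating $P_n^{(\alpha,\beta)}$ and $P_{n+1}^{(\alpha,\beta)}$ from the recurrence~\eqref{1}. A sketch, assuming NumPy (helper names ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    # P_n^{(a,b)} from the three-term recurrence (1); valid also for b < -1
    p_prev, p_curr = Poly([1.0]), Poly([(a - b) / 2, (a + b + 2) / 2])
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

n, a, b = 5, 0.93, -1.5
xn  = zeros(n, a, b)
xn1 = zeros(n + 1, a, b)

# chain (2.3): x_{1,n} < x_{1,n+1} < -1 < x_{2,n+1} < x_{2,n} < ... < x_{n+1,n+1} < 1
chain = [xn[0], xn1[0], -1.0]
for i in range(1, n):
    chain += [xn1[i], xn[i]]
chain += [xn1[n], 1.0]
ok = all(s < t for s, t in zip(chain, chain[1:]))
```

In particular, the two smallest zeros satisfy $x_{1,n} < x_{1,n+1} < -1$, so full interlacing between the two sets of real zeros fails, exactly as the theorem asserts.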
\begin{corollary} Let $\big\{P_{n}^{(\alpha, \beta)}\big\}_{n=0}^\infty$ denote the sequence of Jacobi polynomials and fix~$\alpha$ and~$\beta$ with $\alpha > -1$ and $-2 < \beta< -1$.
The zeros of $P_{n-k}^{(\alpha,\beta)}$ and the zeros of $P_{n}^{(\alpha,\beta)}$ do not interlace for any $k,n \in{\mathbb N}$, $n \geq 3$, $k\in\{1,\dots,n-1\}$.
\end{corollary}
\begin{proof}
It follows immediately from Def\/inition \ref{SI} and the def\/inition of Stieltjes interlacing
that interlacing does not hold between the zeros of two polynomials if any zero of the polynomial of smaller
degree lies outside the interval with endpoints at the smallest and largest zero of the
polynomial of larger degree. Since~\eqref{2.3} shows that $x_{1,n-k} <\dots< x_{1,n-1} < x_{1,n} <
-1 < x_{n,n}$ for each $n \in{\mathbb N}$, the smallest zero of~$P_{n-k}^{(\alpha,\beta)}$ lies outside the interval $(x_{1,n},x_{n,n})$, and this proves the result.
\end{proof}
\begin{remark}
Theorem \ref{Th:2.1} complements results proved by Dimitrov, Ismail and Rafaeli~\cite{DiIsRa} who consider the interlacing properties of zeros of orthogonal polynomials arising from perturbations of the weight function of orthogonality. However, the sequences of polynomials considered in \cite{DiIsRa} retain orthogonality. Shifting from the orthogonal case to the quasi-orthogonal order~$1$ Jacobi case $P_n^{(\alpha,\beta)}$ with $\alpha>-1$, $-2<\beta<-1$ may be viewed as a perturbation of the (orthogonal) Jacobi weight function ${(1-x)}^{\alpha}{(1+x)}^{\beta}$, $\alpha,\beta>-1$, by the factor ${(1+x)}^{-1}$.
\end{remark}
\begin{remark}\label{rem} Relation~\eqref{2.3} proves that for each f\/ixed $\alpha$ and $\beta$ with $\alpha >-1$ and $-2 < \beta< -1$, the zero of~$P_{n}^{(\alpha,\beta)}$ that is less than $-1$ increases with~$n$.
\end{remark}
\begin{corollary} \label{cor:2.2}
For each fixed $\alpha$, $\beta$ with $-2 < \alpha < -1$ and $\beta > -1$, and each $n \in
{\mathbb N}$, $n \geq 2$,
\begin{itemize}\itemsep=0pt
\item[$(i)$]the $n+1$ zeros of $(1-x) P_{n}^{(\alpha,\beta)}(x)$ interlace with the
$n+1$ zeros of $P_{n+1}^{(\alpha,\beta)}(x)$;
\item[$(ii)$] the $n-1$ zeros of $P_{n}^{(\alpha,\beta)}$ that
lie in the interval $(-1,1)$ interlace with the $n$ zeros of $P_{n+1}^{(\alpha,\beta)}$ that lie
in the interval $(-1,1)$;
\item[$(iii)$] interlacing does not hold between all the real zeros of $P_{n}^{(\alpha,\beta)}$ and all the real zeros of
$P_{n+1}^{(\alpha,\beta)}$ for any $n \in {\mathbb N}$, $n \geq 2$;
\item[$(iv)$] the zero of $P_n^{(\alpha,\beta)}$ that is $>1$ decreases with~$n$.
\end{itemize}
\end{corollary}
\begin{proof} The result follows from Theorem~\ref{Th:2.1} and the symmetry property (\ref{2.6}) of Jacobi polyno\-mials.
\end{proof}
\subsection[Co-primality and zeros of $P_{n}^{(\alpha,\beta)}$ and $P_{n-k}^{(\alpha,\beta)}$, $k,n\in\mathbb{N}$, $2\leq k<n$]{Co-primality and zeros of $\boldsymbol{P_{n}^{(\alpha,\beta)}}$ and $\boldsymbol{P_{n-k}^{(\alpha,\beta)}}$, $\boldsymbol{k,n\in\mathbb{N}}$, $\boldsymbol{2\leq k<n}$}
Common zeros of two polynomials, should they exist,
play a crucial role when discussing interlacing properties of their zeros, see, for example, \cite{DrMu}. The polynomials $P_{n}^{(\alpha,\beta)}$ and $P_{n-1}^{(\alpha,\beta)}$ of consecutive degree are co-prime for each $n \in{\mathbb N}$, $n \geq 1$, and each f\/ixed~$\alpha$,~$\beta$ with $\alpha > -1$ and $-2
< \beta< -1$. This follows from Theorem~\ref{Th:2.1} but is also immediate from the three-term recurrence
relation~\eqref{1}: if $P_{n}^{(\alpha,\beta)}$ and $P_{n-1}^{(\alpha,\beta)}$ had a common
zero, this would also be a zero of $P_{n-2}^{(\alpha,\beta)}$ and, iterating~(\ref{1}) downward, of
every polynomial in the sequence, contradicting $P_{0}^{(\alpha,\beta)} (x) \equiv 1$.
\begin{theorem}\label{Th:3.1} Let $\big\{P_{n}^{(\alpha, \beta)}\big\}_{n=0}^\infty$ denote the sequence of Jacobi polynomials and fix~$\alpha$ and $\beta$ with $\alpha > -1$ and $-2 < \beta< -1$. If $P_{n}^{(\alpha,\beta)}$ and
$P_{n-2}^{(\alpha,\beta)}$ are co-prime for each $n \in{\mathbb N}$, $n \geq 3$, then the zeros of $(x+1)(x-d_n)P_{n-2}^{(\alpha,\beta)}$ interlace with the zeros of $P_{n}^{(\alpha,\beta)}$ where $d_n$ is given in~\eqref{d}.
\end{theorem}
\begin{remark}We note that results analogous to Theorem~\ref{Th:3.1} can be proven for the zeros of Jacobi polynomials $P_{n}^{(\alpha,\beta)}$ and
$P_{n-k}^{(\alpha,\beta)}$ when $k,n\in\mathbb{N}$, $3\leq k < n$.
\end{remark}
\section{An extension of the Askey conjecture}\label{sectionAskey}
\subsection[Zeros of $P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+2)}$, $n\in\mathbb{N}$]{Zeros of $\boldsymbol{P_{n}^{(\alpha,\beta)}}$ and $\boldsymbol{P_{n}^{(\alpha,\beta+2)}}$, $\boldsymbol{n\in\mathbb{N}}$}
We investigate an extension of the Askey conjecture that the zeros of the Jacobi polyno\-mials~$P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+2)}$ are interlacing when $\alpha > -1$, $-2 < \beta< -1$ and prove a necessary and suf\/f\/icient condition for interlacing between the zeros of these two polynomials to occur.
\begin{theorem}\label{Th:Askey1}
Suppose that $\alpha > -1$, $-2 < \beta< -1$, and $\big\{P_n^{(\alpha,\beta)}\big\}_{n=0}^\infty$ is the sequence of Jacobi polynomials. Let $\delta:= -1-\frac{2(\beta+1)}{\alpha +\beta+2n +2}$. For each $n \in{\mathbb N}$, the zeros of $P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+2)}$ are interlacing if and only if $\delta < x_{2,n}$, where $x_{2,n}$ is the smallest zero of~$P_n^{(\alpha,\beta)}$ in the interval $(-1,1)$.
\end{theorem}
\begin{remark} Numerical evidence conf\/irms that the condition $\delta<x_{2,n}$ in Theorem~\ref{Th:Askey1} is nontrivial: there are values of $\alpha$ and $\beta$ for which it is satisf\/ied and others where it is not. For example, when $n=5$, $\alpha=2.35$ and $\beta =-1.5$ we have $\delta=-0.922179$ and $x_{2,n}=-0.885666$, whereas for the same $n$ and $\alpha$ with $\beta =-1.9$ we have $\delta=-0.855422$ and $x_{2,n}=-0.961637$. Analytically, one can see that the condition is more likely to be satisf\/ied when $\delta$ approaches~$-1$, a lower bound for~$x_{2,n}$, i.e., when $\beta \to -1$ with $\alpha>-1$ and $n\in\mathbb{N}$ f\/ixed.
\end{remark}
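The two numerical examples in the remark can be reproduced directly: $\delta$ is closed-form, while $x_{2,n}$ is obtained from the recurrence~\eqref{1}. A sketch assuming NumPy (helper names ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    # P_n^{(a,b)} from the three-term recurrence (1); valid also for b < -1
    p_prev, p_curr = Poly([1.0]), Poly([(a - b) / 2, (a + b + 2) / 2])
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

def delta(n, a, b):
    return -1 - 2 * (b + 1) / (a + b + 2 * n + 2)

n, a = 5, 2.35
d1, x2_1 = delta(n, a, -1.5), zeros(n, a, -1.5)[1]   # beta = -1.5: d1 < x2_1, interlacing holds
d2, x2_2 = delta(n, a, -1.9), zeros(n, a, -1.9)[1]   # beta = -1.9: d2 > x2_2, interlacing fails
```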
Although full interlacing between the zeros of $P_n^{(\alpha,\beta)}$ and $P_n^{(\alpha,\beta+2)}$ cannot occur when $\delta \geq x_{2,n}$, there is an interlacing result, involving the point~$\delta$, that holds between the zeros of $P_n^{(\alpha,\beta)}$ that lie in the interval $(-1,1)$ and the zeros of $P_n^{(\alpha,\beta+2)}$ provided the two polynomials have no common zeros.
\begin{theorem}\label{Th:Askey2}
Suppose that $\alpha > -1$, $-2 < \beta < -1$ and $\big\{P_n^{(\alpha,\beta)}\big\}_{n=0}^\infty$ is the sequence of Jacobi polynomials. Suppose that $P_n^{(\alpha,\beta)}$ and $P_n^{(\alpha,\beta+2)}$ have no common zeros and assume that $\delta:= -1-\frac{2(\beta+1)}{\alpha +\beta+2n +2}> x_{2,n}$. For each $n \in{\mathbb N}$, the zeros of $(x-\delta) P_n^{(\alpha,\beta)}(x)$ interlace with the zeros of $P_n^{(\alpha,\beta+2)}(x)$.
\end{theorem}
\subsection[Zeros of $P_{n}^{(\alpha,\beta)}$ and $P_{n-1}^{(\alpha,\beta+2)}$, $n\in\mathbb{N}$]{Zeros of $\boldsymbol{P_{n}^{(\alpha,\beta)}}$ and $\boldsymbol{P_{n-1}^{(\alpha,\beta+2)}}$, $\boldsymbol{n\in\mathbb{N}}$}
\begin{theorem}\label{Th:la} Let $\alpha > -1$, $-2 < \beta < -1$ and $\big\{P_n^{(\alpha,\beta)}\big\}_{n=0}^\infty$ be the sequence of Jacobi polyno\-mials. Let $x_{1,n}<x_{2,n}<\dots<x_{n,n}$ denote the zeros of $P_{n}^{(\alpha,\beta)}$ and $z_{1,n-1}<z_{2,n-1}<\dots<z_{n-1,n-1}$ denote the zeros of $P_{n-1}^{(\alpha,\beta+2)}$.
Then $P_{n-1}^{(\alpha,\beta+2)}$ and $P_n^{(\alpha,\beta)}$ are co-prime and the zeros of \mbox{$(1+x)P_{n-1}^{(\alpha, \beta+2)}$} interlace with the zeros of $P_n^{(\alpha,\beta)}$, i.e.,
\begin{gather*}
x_{1,n}<-1<x_{2,n}<z_{1,n-1}<x_{3,n}<\dots<x_{n-1,n}<z_{n-2,n-1}<x_{n,n}<z_{n-1,n-1}.
\end{gather*}
\end{theorem}
\section[Zeros of $P_n^{(\alpha,\beta)}$ and $P_{n-2}^{(\alpha,\beta+t)}$, $t \geq 1$, $n\in\mathbb{N}$]{Zeros of $\boldsymbol{P_n^{(\alpha,\beta)}}$ and $\boldsymbol{P_{n-2}^{(\alpha,\beta+t)}}$, $\boldsymbol{t \geq 1}$, $\boldsymbol{n\in\mathbb{N}}$}\label{sectionst}
For f\/ixed $\alpha>-1$, $-2 < \beta< -1$, and f\/ixed $t \geq 1$, the parameter $\beta+t$ is
greater than $-1$ and the sequence of Jacobi polynomials $\big\{P_{n}^{(\alpha,\beta+t)}\big\} _{n=0}^\infty$ is orthogonal on the interval $(-1,1)$. It is known (see \eqref{2.1} and \eqref{2.2}) that the zeros of the quasi-orthogonal polynomial $P_n^{(\alpha,\beta)}$ interlace with the zeros of
the (orthogonal) polynomial $P_{n-1}^{(\alpha,\beta+1)}$, as well as with the zeros of the (orthogonal) polynomial
$P_n^{(\alpha,\beta+1)}$. Here, we discuss interlacing between the zeros of~$P_n^{(\alpha,\beta)}$
and the zeros of the (orthogonal) polynomial $P_{n-2}^{(\alpha, \beta+1)}$. We also prove that
the zeros of~$P_n^{(\alpha,\beta)}$ and the zeros of $P_{n-2}^{(\alpha,\beta+t)}$ interlace for
continuous variation of $t$, $2 \leq t \leq 4$, and that the polyno\-mials~$P_n^{(\alpha,\beta)}$ and~$P_{n-2}^{(\alpha,\beta+t)}$ are co-prime for any $t\in[2,4]$.
\begin{theorem}\label{Th:St1} Let $n \in{\mathbb N}$, $n\geq3$, $\alpha$, $\beta$ fixed, $\alpha >-1$, $-2 < \beta < -1$, and suppose $\big\{P_n^{(\alpha,\beta)}\big\}_{n=0}^\infty$ is the sequence of Jacobi polynomials.
\begin{itemize}\itemsep=0pt
\item[$(i)$] The $n-2$ distinct zeros of $P_{n-2}^{(\alpha,\beta+1)}$ $($which all lie in the
interval $(-1,1))$ together with the point $\frac{2(n+\beta)(\alpha+\beta+n)}{(\alpha+\beta+2n)(\alpha+\beta+2n-1)}-1$, interlace
with the $n-1$ distinct zeros of $P_n^{(\alpha,\beta)}$ that lie in~$(-1,1)$, provided $P_{n-2}^{(\alpha,\beta+1)}$ and $P_n^{(\alpha,\beta)}$ are co-prime.
\item[$(ii)$] For $2 \le t \le 4$, the $n-2$ distinct zeros of $P_{n-2}^{(\alpha,\beta+t)}$
interlace with the $n-1$ zeros of $P_n^{(\alpha,\beta)}$ that lie in $(-1,1)$.
\end{itemize}
\end{theorem}
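Part~(ii) can be spot-checked numerically for several values of $t$, again building the polynomials from the recurrence~\eqref{1}. A sketch assuming NumPy (helper names ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    # P_n^{(a,b)} from the three-term recurrence (1); valid also for b < -1
    p_prev, p_curr = Poly([1.0]), Poly([(a - b) / 2, (a + b + 2) / 2])
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

n, a, b = 6, 0.4, -1.3
inner = zeros(n, a, b)[1:]        # the n-1 zeros of P_n^{(a,b)} inside (-1,1)

ok = True
for t in (2.0, 3.0, 4.0):
    z = zeros(n - 2, a, b + t)    # all n-2 zeros lie in (-1,1)
    # each gap between successive inner zeros contains exactly one zero of P_{n-2}^{(a,b+t)}
    ok = ok and all(inner[i] < z[i] < inner[i + 1] for i in range(n - 2))
```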
\begin{remark}
Note that Theorem \ref{Th:St1}(ii) does not assume that $P_{n-2}^{(\alpha,\beta+t)}$ and
$P_n^{(\alpha,\beta)}$ are co-prime, $2 \le t \le 4$. This assumption is not required since the proof will show that $P_{n-2}^{(\alpha,\beta+t)}$ and $P_n^{(\alpha,\beta)}$ are co-prime for every~$t$, $2 \le t \le 4$, $\alpha >-1$, $-2 < \beta < -1$, and $n \in{\mathbb N}$.
\end{remark}
\section[Bounds for the smallest zeros of $P_n^{(\alpha,\beta)}$, $n\in\mathbb{N}$]{Bounds for the smallest zeros of $\boldsymbol{P_n^{(\alpha,\beta)}}$, $\boldsymbol{n\in\mathbb{N}}$}\label{bounds}
In this section we derive upper and lower bounds for the zero
of $P_{n}^{(\alpha,\beta)}$ that lies outside the interval $(-1,1)$ when $\alpha > -1$, $-2 < \beta< -1$.
\begin{theorem}\label{Th:bounds} Let $n \in{\mathbb N}$, $n\geq3$, $\alpha$, $\beta$ fixed, $\alpha >-1$, $-2 < \beta < -1$. Denote the smallest zero of the Jacobi polynomial $P_n^{(\alpha,\beta)}$ by $x_{1,n}$. Then
\begin{gather}\label{bd}
-1+A_n<-1+\frac{D_n}{C_n}<x_{1,n}<-B_n<-1,
\end{gather}
where \begin{subequations}\label{labels}
\begin{gather}
A_n=\frac{2(\beta+1)}{2n+\alpha+\beta},\\
B_n=1-\frac{2(\beta +1)(\beta +2)}{(n+\beta+1) (n+\alpha +\beta+1)},\\
C_n=(\beta +3) (\alpha +\beta +2)+2 (n-1) (n+\alpha +\beta+2),\\
D_n=2(\beta+1)(\beta+3).
\end{gather}\end{subequations}
\end{theorem}
The upper and lower bounds obtained in Theorem \ref{Th:bounds} for the zero of $P_n^{(\alpha,\beta)}$, $\alpha>-1$, $-2<\beta<-1$, that is smaller than $-1$, approach $-1$ as $n\to\infty$. This is consistent with the observation that this zero increases with~$n$ (cf.\ Remark~\ref{rem}). These bounds for the smallest zero of a~quasi-orthogonal (order~1) Jacobi polynomial are remarkably good. We provide some numerical examples in Table~\ref{Jac} to illustrate the inequalities in~\eqref{bd}.
\begin{table}[!ht] \centering \caption{Bounds for the smallest zero of $P_{15}^{(\alpha,\beta)}(x)$ for dif\/ferent values of $\alpha>-1$ and $-2<\beta<-1$.}\label{Jac}
\begin{tabular}{|c|c|c|c|}
\hline
$\alpha$, $\beta$ & $-1+\frac{D_n}{C_n}$ & $x_{1,15}$ & $-B_n$\tsep{3pt}\bsep{3pt}\\
\hline
$\alpha=0.93$, $\beta=-1.9$ & $-1.0044$ & $-1.00287$ & $-1.00085$\\
\hline
$\alpha=-0.93$, $\beta=-1.9$ & $-1.005$& $-1.00327$ & $-1.00097$\\
\hline
$\alpha=-0.93$, $\beta=-1.05$& $-1.0004636$ & $-1.0004635$ & $-1.00045$ \\
\hline
$ \alpha=0.93$, $\beta=-1.05$& $-1.0004094$ & $-1.0004088$ & $-1.0004001$\\
\hline
$\alpha=8.3$, $\beta=-1.55$& $-1.00235$ &$-1.00231$ & $-1.00151$\\
\hline
\end{tabular}
\end{table}
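The entries of Table~\ref{Jac} and the chain of inequalities \eqref{bd} can be reproduced numerically, with $x_{1,n}$ computed from the recurrence~\eqref{1}. A sketch assuming NumPy (helper names ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def jacobi(n, a, b):
    # P_n^{(a,b)} from the three-term recurrence (1); valid also for b < -1
    p_prev, p_curr = Poly([1.0]), Poly([(a - b) / 2, (a + b + 2) / 2])
    if n == 0:
        return p_prev
    x = Poly([0.0, 1.0])
    for k in range(2, n + 1):
        c = 2 * k * (k + a + b) / ((2 * k + a + b - 1) * (2 * k + a + b))
        d = (b * b - a * a) / ((2 * k + a + b - 2) * (2 * k + a + b))
        e = 2 * (k + a - 1) * (k + b - 1) / ((2 * k + a + b - 2) * (2 * k + a + b - 1))
        p_prev, p_curr = p_curr, ((x - d) * p_curr - e * p_prev) / c
    return p_curr

def zeros(n, a, b):
    r = jacobi(n, a, b).roots()
    assert np.allclose(r.imag, 0.0, atol=1e-6)
    return np.sort(r.real)

# First row of Table 1: n = 15, alpha = 0.93, beta = -1.9
n, a, b = 15, 0.93, -1.9
A = 2 * (b + 1) / (2 * n + a + b)
B = 1 - 2 * (b + 1) * (b + 2) / ((n + b + 1) * (n + a + b + 1))
C = (b + 3) * (a + b + 2) + 2 * (n - 1) * (n + a + b + 2)
D = 2 * (b + 1) * (b + 3)

x1 = zeros(n, a, b)[0]            # the zero of P_15^{(a,b)} below -1
chain_ok = (-1 + A) < (-1 + D / C) < x1 < -B < -1
```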
\section{Proof of main results}
We will make use of the following mixed three-term recurrence relations satisf\/ied by Jacobi polynomials.
The relations are derived from contiguous relations satisf\/ied by~$_2F_1$ hypergeometric functions and can easily be verif\/ied by comparing coef\/f\/icients of powers of~$x$ on both sides of the equations.
\begin{lemma}Let $\big\{P_n^{(\alpha,\beta)}\big\}_{n=0}^\infty$ be the sequence of Jacobi polynomials. Then, for $n\in\mathbb{N}$, $n\geq 2$:
\begin{gather}
2 n (\alpha +\beta +n) P_n^{(\alpha ,\beta )}= -(1+x) (\alpha +n-1) (\alpha +\beta +2 n) P_{n-2}^{(\alpha ,\beta +1)} \nonumber\\
\qquad{}-[2 (\beta +n) (\alpha +\beta +n)- (x+1) (\alpha +\beta +2 n-1) (\alpha +\beta +2 n)] P_{n-1}^{(\alpha ,\beta )},
\label{2.17}\\
(x+1) (\alpha +\beta +n+1) P_{n-1}^{(\alpha ,\beta +2)}= 2 n P_n^{(\alpha ,\beta )} +2 (\beta +1) P_{n-1}^{(\alpha ,\beta +1)},\label{fo}\\
\label{n2b2}
\frac{(\beta +n)}{2n} \left(x+1-A_n\right) P_{n-1}^{(\alpha ,\beta )}=
\frac{(x+1)^2 (\alpha +n-1)}{4n} P_{n-2}^{(\alpha ,\beta +2)}+\frac{(\beta +1)}{\alpha +\beta +2 n} P_{n}^{(\alpha ,\beta )},\\
\label{n2b3}
\left(x+B_n\right) P_{n-1}^{(\alpha ,\beta )}-A(x)P_n^{(\alpha ,\beta )}= \frac{(x+1)^3 (\alpha +n-1) (\alpha +\beta +2 n)}{4 (\beta +n) (\beta +n+1)}P_{n-2}^{(\alpha ,\beta +3)},
\\
\label{n2b4}
\left(C_n(x+1)-D_n\right)P_{n-1}^{(\alpha,\beta)}
=\frac{(x+1)^4 E_{n}}{8(n+\beta)(\beta+2)}P_{n-2}^{(\alpha,\beta+4)}-\frac{n B(x)}{2(n+\beta)(\beta+2)}P_{n}^{(\alpha,\beta)},
\end{gather}
where $A_n$, $B_n$, $C_n$ and $D_n$ are given in~\eqref{labels},
\begin{gather*}
E_n=(2n+\alpha+\beta)(n+\alpha-1)(n+\alpha+\beta+1)(n+\alpha+\beta+2)
\end{gather*} and
\begin{gather*}A(x)= \frac{n (2 (\beta +1) (\beta +2)-(n-1) (x+1) (\alpha +n-1))}{(\beta +n) (\beta +n+1) (\alpha +\beta +n+1)},\\
B(x)= \big(\alpha ^2+5 \alpha \beta +7 \alpha +4 \beta ^3+24 \beta ^2+39 \beta -2 n^3-3 \alpha n^2-5 \beta n^2-4 n^2-\alpha ^2 n-5 \alpha \beta n \\
\hphantom{B(x)=}{}
-4 \alpha n+10 \beta n+14 n+16\big)-2 (n-1)(n+\alpha-1) (2 n+\alpha +3 \beta+4)x\\
\hphantom{B(x)=}{} -(n-1)(n+\alpha-1)(2n+\alpha +\beta)x^2.
\end{gather*}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{Th:2.1}] Evaluating the mixed three-term recurrence relation \cite[p.~265]{Rai}
\begin{gather*}
\frac12(2+\alpha +\beta +2n) (x+1) P_{n}^{(\alpha,\beta +1)}(x) = (n +1) P_{n +1}^{(\alpha,\beta)}(x) + (1+\beta+n)
P_{n}^{(\alpha,\beta)}(x)
\end{gather*}
at successive zeros $x_{i,n}$, $x_{i+1,n}$
of $P_{n}^{(\alpha,\beta)}$, $i\in\{1,\dots,n-1\}$, we obtain
\begin{gather}4{(n+1)}^2 P_{n +1}^{(\alpha,\beta)} (x_{i,n}) P_{n +1}^{(\alpha,\beta)}(x_{i+1,n})\nonumber\\
\qquad{} = {(2+\alpha +\beta +2n)}^2(x_{i,n}+1)(x_{i+1,n}+1) P_{n}^{(\alpha,\beta +1)}(x_{i,n}) P_{n}^{(\alpha,\beta +1)}(x_{i+1,n}). \label{2.5}
\end{gather}
Now, from \eqref{2.1}, $(1+x_{i,n})( 1+x_{i+1,n})<0$ when $i=1$ and $(1+x_{i,n})(
1+x_{i+1,n})>0$ for $i\in\{2,\dots,n-1\}$ while $P_{n}^{(\alpha,\beta +1)}(x_{i,n})
P_{n}^{(\alpha,\beta +1)}(x_{i+1,n}) <0$ for $i\in\{1,2,\dots,n-1\}$. We deduce from~\eqref{2.5}
that $P_{n +1}^{(\alpha,\beta)}(x_{i,n})$ and $P_{n +1}^{(\alpha,\beta)}(x_{i+1,n})$ have the same
sign for $i=1$ and dif\/fer in sign for $i=2,\dots,n-1$. Since the zeros are distinct, it follows that $P_{n +1}^{(\alpha,\beta)}$
has an even number of zeros in the interval $(x_{1,n},x_{2,n})$ and an odd number of zeros in each of the intervals $(x_{i,n},x_{i+1,n})$, $i\in\{2,\dots,n-1\}$. Therefore $P_{n
+1}^{(\alpha,\beta)}$ has at least $n-2$ simple zeros between $x_{2,n}$ and $x_{n,n}$, plus
its smallest zero $x_{1, n+1}$ which is $<-1$ and, from~\eqref{2.1} and~\eqref{2.2}, its largest zero $x_{n+1,
n+1} > y_{n,n} > x_{n,n}$. Therefore, $n$ zeros of $P_{n+1}^{(\alpha,\beta)}$ are accounted for
and we must still have either no zeros or two zeros of $P_{n +1}^{(\alpha,\beta)}$ in the interval
$(x_{1,n},x_{2,n})$ where $x_{1,n} < -1 < x_{2,n}$ for each $n \in{\mathbb N}$, $n \geq 1$. Since
exactly one of the zeros of $P_{n +1}^{(\alpha,\beta)}$ is $<-1$ for $n\in{\mathbb N}$, $n\geq
1$, the only possibility is $x_{1,n}< x_{1,n+1}< -1 < x_{2,n+1}< x_{2,n}$ which proves the
result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:3.1}]
It follows from \eqref{1} and the assumption that $P_{n}^{(\alpha,\beta)}$
and $P_{n-2}^{(\alpha,\beta)}$ are co-prime that $P_{n}^{(\alpha,\beta)}(d_n)\neq 0$ since $c_n$, $e_n>0$ provided that $n\geq3$. Evaluating~\eqref{1} at the $n-2$ pairs
of successive zeros $x_{i,n}$ and $x_{i+1,n}$, $i\in\{2,\dots,n-1\}$, of $P_{n}^{(\alpha,\beta)}$ that lie in the interval $(-1,1)$, we obtain
\begin{gather}
\frac{P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n})}{P_{n-2}^{(\alpha,\beta)}(x_{i,n})
P_{n-2}^{(\alpha,\beta)}(x_{i+1,n})}= \frac{(e_n)^2}{(d_n-x_{i,n})(d_n-x_{i+1,n})}.\label{4.1}
\end{gather}
The right-hand side of~\eqref{4.1} is positive if and only if $d_n\notin(x_{i,n},x_{i+1,n})$, while
\begin{gather*}
P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n})<0
\end{gather*}
for each $i\in\{2,\dots,n-1\}$ since we know from Theorem~\ref{Th:2.1} that the zeros of $P_{n}^{(\alpha,\beta)}$
and $P_{n-1}^{(\alpha,\beta)}$ that lie in the interval $(-1,1)$ are interlacing. Therefore,
from~\eqref{4.1}, $P_{n-2}^{(\alpha,\beta)}$ changes sign between each pair of successive zeros
of $P_n^{(\alpha,\beta)}$ that lie in $(-1,1)$ except possibly for one pair~$x_{j,n}$,~$x_{j+1,n}$, with $x_{j,n} < d_n < x_{j+1,n}$, $j\in\{2,\dots,n-1\}$. There are
$n-2$ intervals with endpoints at the successive zeros of $P_{n}^{(\alpha,\beta)}$ that lie in
the interval $(-1,1)$ and $P_{n-2}^{(\alpha,\beta)}$ has exactly $n-3$ distinct zeros in
$(-1,1)$. Therefore, the zeros of $P_{n-2}^{(\alpha,\beta)}$ that lie in the interval $(-1,1)$,
together with the point $d_n$, must interlace with the $n-1$ zeros of
$P_n^{(\alpha,\beta)}$ that lie in $(-1,1)$. The stated interlacing result follows from Theorem~\ref{Th:2.1} since $x_{1,n-2} < x_{1,n} < -1 < x_{2,n}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:Askey1}] Suppose that $\delta < x_{2,n}$. From \cite[equation~(11), p.~71]{Rai},
\begin{gather}
(2(\beta+1) + (x+1)(\alpha +\beta+2n +2)) P_{n}^{(\alpha,\beta+1)} \nonumber\\
\qquad{}= (x+1)(\alpha +\beta+n+2)P_{n}^{(\alpha,\beta+2)}+ 2(\beta +n +1) P_{n}^{(\alpha,\beta)}.\label{5.1}
\end{gather}
Since $P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+1)}$ are co-prime for each $n \in{\mathbb N}$ and each f\/ixed $\alpha$, $\beta$, $\alpha > -1$, $-2 < \beta< -1$ by \eqref{2.1}, it follows from~\eqref{5.1} that the only possible common zero of $P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+2)}$ is $\delta:= -1-\frac{2(\beta+1)}{\alpha +\beta+2n +2}$. If $\delta < x_{2,n}$ then $P_{n}^{(\alpha,\beta)}$ and $P_{n}^{(\alpha,\beta+2)}$ are co-prime
since all the zeros of $P_{n}^{(\alpha,\beta+2)}$ lie in $(-1,1)$ and $x_{2,n}$ is the smallest zero of $P_{n}^{(\alpha,\beta)}$ in $(-1,1)$.
Evaluating~\eqref{5.1} at successive zeros $x_{1,n}< -1< x_{2,n}<\dots<x_{n,n}<1$ of $P_{n}^{(\alpha,\beta)}$ we obtain, for each $i \in \{1,2,\dots,n-1\}$,
\begin{gather} (x_{i,n}+1) (x_{i+1,n}+1)P_n^{(\alpha,\beta+2)}(x_{i,n}) P_n^{(\alpha,\beta +2)}(x_{i+1,n}){(\alpha+\beta +n+2)}^2\nonumber\\
\qquad{}= {(\alpha+\beta +2n+2)}^2(x_{i,n} -\delta) (x_{i+1,n} -\delta)
P_{n}^{(\alpha,\beta+1)}(x_{i,n}) P_{n}^{(\alpha,\beta+1)}(x_{i+1,n}). \label{5.2}
\end{gather}
Now, from \eqref{2.1}, $(x_{i,n}+1)(x_{i+1,n}+1)<0$ when $i=1;$ $(x_{i,n}+1)(x_{i+1,n}+1)>0$ for $i = 2,3,\dots,n-1$ and
$P_n^{(\alpha,\beta+1)}(x_{i,n})P_n^{(\alpha,\beta+1)}(x_{i+1,n}) <0$ for each $i = 1, 2,\dots,n-1$. Since $x_{1,n}<-1<\delta < x_{2,n}$ by assumption, we deduce from~\eqref{5.2} that $P_n^{(\alpha,\beta +2)}(x_{i,n})$ and $P_n^{(\alpha,\beta+2)}(x_{i+1,n})$ dif\/fer in sign for
each $i=1,2,\dots,n-1$. It follows that $P_n^{(\alpha,\beta +2)}$ has an odd number of zeros in each one of the intervals $(x_{i,n},x_{i+1,n})$ for $i = 1, 2,3,\dots,n-1$. Also, from~\eqref{2.1}, $x_{n,n}< z_{n,n}$ where $z_{1,n}<z_{2,n}<\dots <z_{n,n}$ are the zeros of
$P_{n}^{(\alpha,\beta+2)}$. It follows that the zeros of $P_n^{(\alpha,\beta)}$ and $P_n^{(\alpha,\beta+2)}$ are interlacing.
Suppose that $\delta > x_{2,n}$. A similar analysis of~\eqref{5.2} shows that $P_n^{(\alpha,\beta +2)}$ has the same sign at the smallest two zeros $x_{1,n}$ and $x_{2,n}$ of $P_n^{(\alpha,\beta)}$ and therefore an even number of zeros in the interval $(x_{1,n},x_{2,n})$, which shows that interlacing does not hold. Obviously, if $\delta = x_{2,n}$ then~$\delta$ is a~common zero of $P_n^{(\alpha,\beta)}$ and $P_n^{(\alpha,\beta+2)}$ so interlacing does not hold. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:Askey2}]
Evaluating \eqref{5.1} at successive zeros $z_{i,n}$, $z_{i+1,n}$ of $P_{n}^{(\alpha,\beta+2)}$, we have, for each $i \in \{1,2,\dots,n-1\}$,
\begin{gather*}4{(\beta+n+1)} ^2 P_n^{(\alpha,\beta)}(z_{i,n}) P_n^{(\alpha,\beta)}(z_{i+1,n})\\
\qquad{} = {(\alpha +\beta +2n +2)}^2(z_{i,n} -\delta) (z_{i+1,n} -\delta)
P_{n}^{(\alpha,\beta+1)}(z_{i,n}) P_{n}^{(\alpha,\beta+1)}(z_{i+1,n}).
\end{gather*}
From \cite[Theorem~2.4]{DrJoMb} we know that if $y_{1,n}<y_{2,n}<\dots<y_{n,n}$ denote the zeros of $P_n^{(\alpha,\beta+1)}$, then
\begin{gather}
-1 < y_{1,n} < z_{1,n}< y_{2,n} < z_{2,n} < \dots< y_{n,n}< z_{n,n}<1, \label{4.5}
\end{gather}
so that $P_n^{(\alpha,\beta+1)}(z_{i,n})P_n^{(\alpha,\beta+1)}(z_{i+1,n}) <0$ for each $i = 1, 2,\dots,n-1$, while $(z_{i,n} -\delta)(z_{i+1,n} -\delta) > 0$ unless $\delta \in (z_{i,n}, z_{i+1,n})$. This means that there are two possibilities: (a) $P_n^{(\alpha,\beta)}$ has
$n-1$ sign changes between successive zeros of $P_n^{(\alpha,\beta+2)}$ in $(-1,1)$ and $\delta \notin (z_{i,n}, z_{i+1,n})$ for any $i \in \{1,2,\dots,n-1\}$; or~(b) $P_n^{(\alpha,\beta)}$ has $n-2$ sign changes between successive zeros of $P_n^{(\alpha,\beta+2)}$ in $(-1,1)$ and
$\delta$ lies in one interval, say $\delta \in (z_{j,n}, z_{j+1,n})$ where $j \in \{1,2,\dots,n-1\}$. If~(a) holds then since $P_n^{(\alpha,\beta)}$ has exactly $n-1$ simple zeros in $(-1,1)$, these zeros, together with the point~$\delta$, interlace with the zeros of
$P_n^{(\alpha,\beta+2)}$ in~$(-1,1)$. If, on the other hand, (b)~holds then $P_n^{(\alpha,\beta)}$ has no sign change, and therefore an even number of zeros, in the interval $(z_{j,n}, z_{j+1,n})$ that contains~$\delta$. Since~$P_n^{(\alpha,\beta)}$ has exactly $n-1$ simple zeros in $(-1,1)$ and $n-2$
sign changes in $(-1,1)$, we deduce that no zero of $P_n^{(\alpha,\beta)}$ lies in the interval $(z_{j,n}, z_{j+1,n})$ that
contains~$\delta$ and one zero of~$P_n^{(\alpha,\beta)}$ is either $< z_{1,n}$ or $> z_{n,n}$. Since we know from \eqref{2.1} that
the largest zero $x_{n,n}$ of~$P_n^{(\alpha,\beta)}$ satisf\/ies $x_{n,n}< y_{n,n}$ while from~\eqref{4.5} $y_{n,n}< z_{n,n}$, the only possibility is that the smallest zero~$x_{2,n}$ of~$P_n^{(\alpha,\beta)}$ in $(-1,1)$ is $< z_{1,n}$. Therefore, the zeros of $(x-\delta)P_n^{(\alpha,\beta)}(x)$ interlace with the zeros of~$P_n^{(\alpha,\beta+2)}(x)$ and the result follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Th:la}] Evaluating \eqref{fo} at successive zeros $x_{i,n}$, $x_{i+1,n}$, $i\in\{1,2,\dots,n-1\}$ of~$P_n^{(\alpha,\beta)}$ we obtain
\begin{gather*}
(x_{i,n}+1)(x_{i+1,n}+1)(\alpha +\beta +n+1)^2 P_{n-1}^{(\alpha ,\beta +2)}(x_{i,n})P_{n-1}^{(\alpha ,\beta +2)}(x_{i+1,n})\\
\qquad{} = 4 (\beta +1)^2 P_{n-1}^{(\alpha ,\beta +1)}(x_{i,n})P_{n-1}^{(\alpha ,\beta +1)}(x_{i+1,n}).
\end{gather*}
From \eqref{2.2} with $n$ replaced by $n-1$ we have $P_{n-1}^{(\alpha ,\beta +1)}(x_{i,n})P_{n-1}^{(\alpha ,\beta +1)}(x_{i+1,n})<0$ while $(1+x_{i,n})(1+x_{i+1,n})<0$ for $i=1$ and $>0$ for $i\in\{2,3,\dots,n-1\}$. Therefore $P_{n-1}^{(\alpha ,\beta +2)}(x_{i,n})P_{n-1}^{(\alpha ,\beta +2)}(x_{i+1,n}){<}0$ for each $i\in\{2,3,\dots,n-1\}$. Hence $P_{n-1}^{(\alpha ,\beta +2)}$ has an even number of sign changes in $(x_{1,n},x_{2,n})$ and an odd number of sign changes in $(x_{i,n},x_{i+1,n})$ for $i\in\{2,3,\dots,n-1\}$. Since $P_{n-1}^{(\alpha,\beta+2)}$ has $n-1$ distinct zeros, there must be exactly one zero of $P_{n-1}^{(\alpha,\beta+2)}$ in each of the $n-2$ intervals $(x_{i,n},x_{i+1,n})$, $i\in\{2,3,\dots,n-1\}$. The remaining zero of $P_{n-1}^{(\alpha,\beta+2)}$ must lie in $(-1,1)$ and cannot lie in the interval $(-1,x_{2,n})$. Therefore the only possibility is that the largest zero $z_{n-1,n-1}$ of $P_{n-1}^{(\alpha,\beta+2)}$ is $>x_{n,n}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:St1}]
(i)~We can write \eqref{2.17} as \begin{gather}\label{2.18}
(k_1 - (x+1)k_2)P_{n-1}^{(\alpha,\beta)} (x) = -(1+x)k_3P_{n-2}^{(\alpha,\beta+1)}(x) - k_4 P_{n}^{(\alpha,\beta)} (x).
\end{gather}
Let $x_{i,n}$, $i\in\{1,2,\dots,n\}$ denote the zeros of $P_{n}^{(\alpha,\beta)}$ in ascending order. Note that $(1+x_{i,n}) \neq \frac{k_1}{k_2}$ for any $i\in\{1,\dots,n\}$ since that would
contradict the assumption that $P_{n}^{(\alpha,\beta)}$ and $P_{n-2}^{(\alpha,\beta+1)}$ are
co-prime. Evaluating~\eqref{2.18} at each pair of zeros $x_{i,n}$ and $x_{i+1,n}$,
$i\in\{2,\dots,n-1\}$, of $P_{n}^{(\alpha,\beta)}$ that lie in the interval $(-1,1)$, we obtain
\begin{gather} \label{2.19}
\frac{P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n})}{P_{n-2}^{(\alpha,\beta+1)}(x_{i,n})
P_{n-2}^{(\alpha,\beta+1)}(x_{i+1,n})}= \frac{(1+x_{i,n})(1+ x_{i+1,n}) k_3^2}{(k_1-
(x_{i,n}+1)k_2)(k_1- (x_{i+1,n}+1)k_2)}.
\end{gather}
Since $(1+x_{i,n})$ and $(1+x_{i+1,n})$ are positive for $i\in\{2,\dots,n-1\}$, the right-hand
side of \eqref{2.19} is positive if and only if $\frac{k_1}{k_2} -1 \notin(x_{i,n},x_{i+1,n})$
for any $i\in\{2,\dots,n-1\}$. Suppose, now, that
$\frac{k_1}{k_2} -1\notin(x_{i,n},x_{i+1,n})$ for any $i\in\{2,\dots,n-1\}$. Since the zeros $x_{i,n-1}$, $i\in\{2,\dots,n-1\}$ of~$P_{n-1}^{(\alpha,\beta)}$ interlace with the
zeros $x_{i,n}$, $i\in\{2,\dots,n\}$ of $P_n^{(\alpha,\beta)}$, $\alpha > -1$, $-2<\beta<-1$ (Theorem~\ref{Th:2.1}),
we see from~\eqref{2.19} that $P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n}) < 0$ for each
$i\in\{2,\dots,n-1\}$, $n \in{\mathbb N}$, $n \geq 2$. Therefore if
$\frac{k_1}{k_2} -1\notin(x_{i,n},x_{i+1,n})$ for any $i\in\{2,\dots,n-1\}$, the $n-2$ distinct
zeros of~$P_{n-2}^{(\alpha,\beta+1)}$ in $(-1,1)$ interlace with the $n-1$ zeros of~$P_{n}^{(\alpha,\beta)}$ that lie in $(-1,1)$. Further, by our assumption, the point
$\frac{k_1}{k_2} -1$ lies outside the interval with endpoints at the smallest positive zero
$x_{2,n}$ of~$P_{n}^{(\alpha,\beta)}$ and its largest zero $x_{n,n}$ so interlacing holds between the $n-2$ simple zeros of~$ P_{n-2}^{(\alpha,\beta+1)}$ together with
the point $\frac{k_1}{k_2} -1$ and the $n-1$ zeros of~$P_n^{(\alpha,\beta)}$ that lie in
$(-1,1)$. Suppose now that $\frac{k_1}{k_2} -1\in(x_{i,n}, x_{i+1,n})$ for some
$i\in\{2,\dots,n-1\}$. Then, in this single interval, say $(x_{j,n}, x_{j+1,n})$, containing
$\frac{k_1}{k_2} -1$, there will be no sign change of~$P_{n-2}^{(\alpha,\beta+1)}$ but its sign
will change in each of the remaining $n-3$ intervals with endpoints at the successive zeros of
$P_{n}^{(\alpha,\beta)}$. However, evaluating~\eqref{2.18} at~$x_{1,n}$ and~$x_{2,n}$, we obtain
\begin{gather}\label{2.20}
\frac{P_{n-1}^{(\alpha,\beta)}(x_{1,n})P_{n-1}^{(\alpha,\beta)}(x_{2,n})}{P_{n-2}^{(\alpha,\beta+1)}(x_{1,n})
P_{n-2}^{(\alpha,\beta+1)}(x_{2,n})}= \frac{(1+x_{1,n})(1+ x_{2,n}) k_3^2}{k_2^2\big(\frac{k_1}{k_2} -1-x_{1,n}\big)\big(\frac{k_1}{k_2} -1-x_{2,n}\big)}.
\end{gather}
Now $\frac{k_1}{k_2} -1\in(x_{i,n}, x_{i+1,n})$ for some $i\in\{2,\dots,n-1\}$ so $\frac{k_1}{k_2} -1\notin(x_{1,n}, x_{2,n})$. The right-hand side of
\eqref{2.20} is therefore negative while, from Theorem \ref{Th:2.1} with $n$ replaced by $n-1$, we know that
$P_{n-1}^{(\alpha,\beta)}(x_{1,n})P_{n-1}^{(\alpha,\beta)}(x_{2,n})>0$.
Therefore $P_{n-2}^{(\alpha,\beta+1)}$ has a dif\/ferent sign at $x_{1,n}$ and~$x_{2,n}$, and
hence one zero greater than $-1$ but less than $x_{2,n}$. We deduce that, in each case, the $n-2$
simple zeros of $P_{n-2}^{(\alpha,\beta+1)}$, together with the point $\frac{k_1}{k_2} -1$,
interlace with the $n-1$ zeros of $P_n^{(\alpha,\beta)}$ in $(-1,1)$ if
$P_{n-2}^{(\alpha,\beta+1)}$ and $P_n^{(\alpha,\beta)}$ are co-prime.
(ii) Since the zeros of $P_{n-2}^{(\alpha,\beta +t)}$ are increasing functions of $t$ for
$2 \le t \le 4$ \cite[Theorem~6.21.1]{Sze}, it will be suf\/f\/icient to prove~(ii) in the two
special cases $t=2$ and $t=4$.
For the case $t=2$, we note that since the polynomials $P_{n}^{(\alpha,\beta)}$ and $P_{n-1}^{(\alpha,\beta)}$ are
co-prime~\eqref{2.3}, it follows from \eqref{n2b2} that the only possible common zero of
$P_{n-2}^{(\alpha,\beta +2)}$ and $P_{n}^{(\alpha,\beta)}$ is $-1 + \frac{2(\beta+1)}{\alpha +
\beta + 2n}$ which is $<-1$ for each $\alpha$, $\beta$, $\alpha>-1$, $-2<\beta < -1$. Since all of
the zeros of $P_{n-2}^{(\alpha,\beta +2)}$ lie in the interval $(-1,1)$, $P_{n-2}^{(\alpha,\beta +2)}$ and
$P_{n}^{(\alpha,\beta)}$ are co-prime for $\alpha>-1$, $-2<\beta < -1$.
Evaluating \eqref{n2b2} at the $n-2$ pairs of successive zeros $x_{i,n}$ and $x_{i+1,n}$,
$i\in\{2,\dots,n-1\}$ of $P_{n}^{(\alpha,\beta)}$ that lie in $(-1,1)$ yields
\begin{gather}
\frac{P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n})}{P_{n-2}^{(\alpha,\beta+2)}(x_{i,n})
P_{n-2}^{(\alpha, \beta+2)}(x_{i+1,n})} \nonumber\\
\qquad{} = \frac{(x_{i,n}+1)^2 (x_{i+1,n}+1)^2(\alpha+n-1)^2(\alpha+\beta+2n)^2}
{4(\beta+n)^2 (2n+\alpha+\beta)^2(x_{i,n}+1-A_n)(x_{i+1,n}+1-A_n)}.\label{2.21}
\end{gather}
The right-hand side of \eqref{2.21} is positive since $\frac{2(\beta+1)}{\alpha +\beta+2n} \notin (1 + x_{i,n}, 1 + x_{i+1,n})$ for any $i\in\{2,\dots,n-1\}$.
By Theorem \ref{Th:2.1}, $P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n} ){<} 0$ and
hence $P_{n-2}^{(\alpha,\beta+2)}(x_{i,n}) P_{n-2}^{(\alpha,\beta+2)}(x_{i+1,n}) {<} 0$ for each
$i\in\{2,\dots,n-1\}$, and, for $t=2$, the interlacing result follows.
For the case $t=4$, since $P_{n}^{(\alpha,\beta)}$ and $P_{n-1}^{(\alpha,\beta)}$ are
co-prime, \eqref{n2b4} implies that the only possible common zero of
$P_{n-2}^{(\alpha,\beta +4)}$ and $P_{n}^{(\alpha,\beta)}$ is $-1+\frac{D_n}{C_n}$. Since $D_n<0$ and $C_n>0$ for each
$\alpha$, $\beta$ with $\alpha>-1$, $-2<\beta < -1$, and $n \geq 3$, it follows that $-1+\frac{D_n}{C_n}<-1$ and therefore, since all of the zeros of
$P_{n-2}^{(\alpha,\beta +4)}$ lie in $(-1,1)$, $P_{n-2}^{(\alpha,\beta +4)}$ and $P_{n}^{(\alpha,\beta)}$ are co-prime for $\alpha>-1$, $-2<\beta < -1$.
Evaluating \eqref{n2b4} at the $n-2$ pairs of successive zeros $x_{i,n}$ and $x_{i+1,n}$,
$i\in\{2,\dots,n-1\}$ of $P_{n}^{(\alpha,\beta)}$ that lie in $(-1,1)$,
\begin{gather}
\frac{P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(
x_{i+1,n})}{P_{n-2}^{(\alpha,\beta+4)}(x_{i,n})
P_{n-2}^{(\alpha, \beta+4)}(x_{i+1,n})}\nonumber\\
\qquad{}
=\frac{(x_{i,n}+1)^4(x_{i+1,n}+1)^4E_n^2}{64(n+\beta)^2(\beta+2)^2(C_n(x_{i,n}+1)-D_n)(C_n(x_{i+1,n}+1)-D_n)}.\label{n2b4r}
\end{gather}
The right-hand side of \eqref{n2b4r} is positive since $\frac{D_n}{C_n} \notin (1 + x_{i,n}, 1 + x_{i+1,n})$ for $i\in\{2,\dots,n-1\}$. By Theorem \ref{Th:2.1}, $P_{n-1}^{(\alpha,\beta)}(x_{i,n})P_{n-1}^{(\alpha,\beta)}(x_{i+1,n} )< 0$ and
hence $P_{n-2}^{(\alpha,\beta+4)}(x_{i,n}) P_{n-2}^{(\alpha,\beta+4)}(x_{i+1,n}) < 0$ for each
$i\in\{2,\dots,n-1\}$, and, for $t=4$, the interlacing result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:bounds}] Let $x_{1,n}$ and $y_{1,n-2}^{(\alpha,\beta+t)}$, $t\in\{2,3,4\}$ denote the smallest zero of $P_n^{(\alpha,\beta)}$ and $P_{n-2}^{(\alpha,\beta+t)}$ respectively. It follows from~\eqref{2.3}, Theorem~\ref{Th:St1}(ii) and the monotonicity of the zeros of Jacobi polynomials (cf.\ \cite[Theorem~7.1.2]{Ism}) that
\begin{gather}\label{order}
x_{1,n-1}<x_{1,n}<-1<x_{2,n}<y_{1,n-2}^{(\alpha,\beta+2)}<y_{1,n-2}^{(\alpha,\beta+3)}<y_{1,n-2}^{(\alpha,\beta+4)}.
\end{gather}
Since $\lim\limits_{x\to -\infty}P_n^{(\alpha,\beta)}(x)=\infty$ for $n$ even, while $\lim\limits_{x\to -\infty}P_n^{(\alpha,\beta)}(x)=-\infty$ for~$n$ odd, we deduce from~\eqref{order} that
\begin{gather}\label{sign}
\frac{P_{n-2}^{(\alpha,\beta+t)}(x_{1,n})}{P_{n-1}^{(\alpha,\beta)}(x_{1,n})}>0 \qquad \text{for} \quad t\in\{2,3,4\}.
\end{gather}
Evaluating \eqref{n2b2} at $x_{1,n}$, we obtain
\begin{gather*}
\frac{P_{n-2}^{(\alpha,\beta+2)}(x_{1,n})}{P_{n-1}^{(\alpha,\beta)}(x_{1,n})}=
\frac{2(\beta+n)((x_{1,n}+1)(2n+\alpha+\beta)-2(\beta+1))}{(x_{1,n}+1)^2(n+\alpha-1)(2n+\alpha+\beta)}
\end{gather*}
and therefore it follows from \eqref{sign} that $(x_{1,n}+1)(2n+\alpha+\beta)-2(\beta+1)>0$ for $n\geq3$.
This yields the bound \begin{gather*}
x_{1,n}>\frac{2(\beta+1)}{2n+\alpha+\beta}-1.
\end{gather*}
Next, evaluating \eqref{n2b3} at $x_{1,n}$ we obtain
\begin{gather}\label{ag}
\frac{P_{n-2}^{(\alpha,\beta+3)}(x_{1,n})}{P_{n-1}^{(\alpha,\beta)}(x_{1,n})}
=\frac{4(n+\beta)(n+\beta+1)(x_{1,n}+B_n)}{(x_{1,n}+1)^3(n+\alpha-1)(2n+\alpha+\beta)}.
\end{gather}
Since the left hand side of \eqref{ag} is positive by~\eqref{sign}, $(x_{1,n}+1)^3<0$ by~\eqref{2.1} and $B_n>1$ for $n\geq 3$, $\alpha>-1$, $-2<\beta<-1$, we see that $x_{1,n}<-B_n<-1$.
Evaluating \eqref{n2b4} at $x_{1,n}$ we obtain{\samepage
\begin{gather*}
\frac{P_{n-2}^{(\alpha,\beta+4)}(x_{1,n})}{P_{n-1}^{(\alpha,\beta)}(x_{1,n})}
=\frac{8(n+\beta)(\beta+2)(C_n(x_{1,n}+1)-D_n)}{(x_{1,n}+1)^4E_n}
\end{gather*}
and it follows from \eqref{sign} that $C_n(x_{1,n}+1)-D_n>0$, that is, $x_{1,n}>-1+\frac{D_n}{C_n}$ since $C_n>0$.}
Finally, since $C_n-(\beta+3)(2n+\alpha+\beta)=2(n-1)(n+\alpha-1)>0$ for $n\geq 3$ and $\alpha>-1$, we see that
\begin{gather*}
-1+\frac{2(\beta+1)}{2n+\alpha+\beta}<-1+\frac{2(\beta+1)(\beta+3)}{C_n}
\end{gather*} for each $n\geq 3$, $\alpha>-1$ and $-2<\beta<-1$.
\end{proof}
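The fully explicit bound $x_{1,n}>-1+\frac{2(\beta+1)}{2n+\alpha+\beta}$ obtained above is easy to probe numerically. The sketch below is our illustration, not part of the proof; the helper evaluates $P_n^{(\alpha,\beta)}$ by the standard finite sum $P_n^{(\alpha,\beta)}(x)=\sum_{s=0}^{n}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}\big(\frac{x-1}{2}\big)^{s}\big(\frac{x+1}{2}\big)^{n-s}$, locates the single zero below $-1$ by bisection, and checks the bound for a few admissible parameter choices.

```python
def gbinom(a, k):
    # generalized binomial coefficient C(a, k) for real a, integer k >= 0
    r = 1.0
    for i in range(k):
        r *= (a - i) / (k - i)
    return r

def jacobi(n, alpha, beta, x):
    # P_n^{(alpha,beta)}(x) via the standard finite sum; valid for all real x
    return sum(gbinom(n + alpha, n - s) * gbinom(n + beta, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
               for s in range(n + 1))

def smallest_zero(n, alpha, beta, lo=-3.0, hi=-1.0, iters=80):
    # bisect for the unique zero below -1 (alpha > -1, -2 < beta < -1);
    # assumes a single sign change of P_n^{(alpha,beta)} on (lo, hi)
    flo = jacobi(n, alpha, beta, lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        fmid = jacobi(n, alpha, beta, mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return (lo + hi) / 2

for n, alpha, beta in [(3, 0.5, -1.5), (5, 1.0, -1.2), (6, -0.5, -1.8)]:
    x1 = smallest_zero(n, alpha, beta)
    lower = -1 + 2 * (beta + 1) / (2 * n + alpha + beta)
    assert lower < x1 < -1
```

The parameter triples are arbitrary choices inside the admissible range $n\ge3$, $\alpha>-1$, $-2<\beta<-1$.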
\subsection*{Acknowledgments}
The research of both authors was funded by the National Research Foundation of South Africa. We thank the referees for helpful suggestions and insights.
\pdfbookmark[1]{References}{ref}
\LastPageEnding
\end{document}
\begin{document}
\title{Threshold phenomena in $k$-dominant skylines of
random samples}
\begin{abstract}
Skylines emerged as a useful notion in database queries for
selecting representative groups in multivariate data samples for
further decision making, multi-objective optimization or data
processing, and the $k$-dominant skylines were naturally introduced
to resolve the abundance of skylines when the dimensionality grows
or when the coordinates are negatively correlated. We prove in this
paper that the expected number of $k$-dominant skylines is
asymptotically zero for large samples when $1\le k\le d-1$ under two
reasonable (continuous) probability assumptions of the input points,
$d$ being the (finite) dimensionality, in contrast to the asymptotic
unboundedness when $k=d$. In addition to such an asymptotic
zero-infinity property, we also establish a sharp threshold
phenomenon for the expected ($d-1$)-dominant skylines when the
dimensionality is allowed to grow with $n$. Several related issues,
such as the dominant cycle structures and numerical aspects, are
also briefly studied.
\end{abstract}
\noindent \emph{Key words.} Skyline, dominance, maxima, random
samples, Pareto optimality, threshold phenomena, multi-objective
optimization, computational geometry, asymptotic approximations,
average-case analysis of algorithms.
\section{Introduction}
The last decade has witnessed a drastic change in information
dissemination from Web 1.0 to Web 2.0, the most notable
representative products being YouTube and Facebook. Data have been
generated at an unprecedented pace and range, powerful search
engines are indispensable, and screening useful or usable
information (via ``sort engines'') from the vast is generally
becoming more important than searching and gathering. Skylines of
multivariate data sample were introduced for selecting
representative groups in the database query literature by
B\"{o}rzs\"{o}nyi et al.\ (see \cite{BKS01}) and had appeared in
diverse areas under several different guises and names: \emph{Pareto
optimality}, \emph{efficiency}, \emph{maxima}, \emph{admissibility},
\emph{elite}, \emph{sink}, etc.; see \cite{CHT03,CHT11} and the
references therein for more information. These diverse terms reveal
the importance of the use of skyline as an effective means of data
summarization in theory and in practice. Many different notions and
variants of skylines have been proposed in the literature, following
the original paper \cite{BKS01}. In particular, the $k$\!-dominant
skylines were introduced by Chan et al.\ (see \cite{CJTTZ06}) in
situations when the skylines are abundant and have received much
attention since, although they had already been studied in the
Russian literature (see for example \cite{BO96,Orlova91}). We focus
in this paper on the asymptotic estimates of such skylines and prove
several types of threshold phenomena under different probability
assumptions of the input samples, which, in addition to their
theoretical interests, are believed to be useful for practitioners.
\paragraph{Skylines and $k$-dominant skylines} The definitions of
skyline and many of its variants are based on the notion of
dominance. Given a $d$-dimensional dataset $\mathscr{D}$, a point
$\v{p}\in\mathscr{D}$ is said to \emph{dominate} another point
$\v{q}\in\mathscr{D}$ if $p_j\le q_j$ for $1\le j\le d$ and $p_j<q_j$
for at least one $j$, where $\v{p}=(p_1,\dots,p_d)$ and
$\v{q}=(q_1,\dots,q_d)$. The non-dominated points in
$\mathscr{D}$ are called the \emph{skyline} (or \emph{skyline
points}) of $\mathscr{D}$. By relaxing the full dominance definition
to partial dominance, we say that a point $\v{p}\in\mathscr{D}$
\emph{$k$-dominates} another point $\v{q} \in\mathscr{D}$ if there
are $k$ dimensions in which $p_j$ is not greater than $q_j$ and is
less than in at least one of these $k$ dimensions\footnote{If we
change the definition of the $k$-dominant skyline to be ``exactly
$k$'' (instead of $\ge k$) coordinates smaller than or equal to and
at least $1$ smaller than, then the same types of results in this
paper also hold.}. The points in $\mathscr{D}$ that are not
$k$-dominated by any other points are defined to be the
\emph{$k$-dominant skyline} of $\mathscr{D}$; see \cite{CJTTZ06}.
See also \cite{BO96} for a different formulation.
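For concreteness, the two definitions translate directly into code (a small illustrative sketch we add here, with smaller coordinates being better): $\v{p}$ $k$-dominates $\v{q}$ precisely when $p_j\le q_j$ in at least $k$ coordinates and $p_j<q_j$ in at least one coordinate, since any strict coordinate can be included among the $k$.

```python
def k_dominates(p, q, k):
    # p k-dominates q: at least k coordinates with p_j <= q_j,
    # at least one of them strict
    le = sum(pj <= qj for pj, qj in zip(p, q))
    lt = sum(pj < qj for pj, qj in zip(p, q))
    return le >= k and lt >= 1

def k_dominant_skyline(points, k):
    # the points of the dataset not k-dominated by any other point;
    # k = d recovers the ordinary skyline
    return [p for p in points
            if not any(k_dominates(q, p, k) for q in points if q != p)]

pts = [(1, 2), (2, 1), (2, 2)]
assert k_dominant_skyline(pts, 2) == [(1, 2), (2, 1)]  # the skyline
assert k_dominant_skyline(pts, 1) == []                # 1-dominant skyline is empty
```

The brute-force quadratic scan is for illustration only; efficient skyline algorithms are a separate topic.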
The definition of $k$-dominant skyline implies that for a fixed
dataset the number of $k$-dominant skylines decreases as $k$ becomes
smaller. Such a monotonicity property will be used later. To see
this, consider any point $\v{p}$ in the unit square. It is a skyline
(or $2$-dominant skyline) point if no other points have
simultaneously smaller $x$- and smaller $y$-values; namely, no other
points can lie in the shaded region
\begin{tikzpicture}[scale=0.3]
\begin{scope}
\clip (-0.5,-0.5) rectangle (0,0);
\foreach \k in {0.8,0.7,...,-0.8} {
\draw [red!40,rotate=-40](\k,-2) -- (\k,2);
}
\end{scope}
\draw[-latex,line width=0.3pt](-0.5,-0.5) rectangle (0.5,0.5);
\draw[line width=0.3pt] (0,-0.5) -- (0,0.5);
\draw[line width=0.3pt] (-0.5,0) -- (0.5,0);
\filldraw[] circle(0.07);
\end{tikzpicture} (where $\v{p}$ is the dotted point in the middle
of this figure). However, to be a $1$-dominant skyline point
requires that all other points must have simultaneously larger $x$-
and larger $y$-values, or, equivalently, they cannot lie in the
shaded region
\begin{tikzpicture}[scale=0.3]
\begin{scope}
\clip (-0.5,0) rectangle (0,0.5);
\foreach \k in {0.8,0.7,...,-0.8} {
\draw [red!40,rotate=-40](\k,-2) -- (\k,2);
}
\end{scope}
\begin{scope}
\clip (-0.5,-0.5) rectangle (0.5,0);
\foreach \k in {0.8,0.7,...,-0.8} {
\draw [red!40,rotate=-40](\k,-2) -- (\k,2);
}
\end{scope}
\draw[-latex,line width=0.3pt](-0.5,-0.5) rectangle (0.5,0.5);
\draw[line width=0.3pt] (0,-0.5) -- (0,0.5);
\draw[line width=0.3pt] (-0.5,0) -- (0.5,0);
\filldraw[] circle(0.07);
\end{tikzpicture}.
On the other hand, the transitivity property of skylines fails for
$k$-dominant skylines when $1\le k\le d-1$, meaning that their
cardinality may be zero and there may be cycles.
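A minimal example of such a cycle (our illustration): in the plane, the points $(1,2)$ and $(2,1)$ each $1$-dominate the other, so the $1$-dominant skyline of this two-point dataset is empty even though both points are ordinary skyline points.

```python
def k_dominates(p, q, k):
    # p k-dominates q: >= k coordinates with p_j <= q_j, one of them strict
    le = sum(a <= b for a, b in zip(p, q))
    lt = sum(a < b for a, b in zip(p, q))
    return le >= k and lt >= 1

p, q = (1, 2), (2, 1)
assert k_dominates(p, q, 1) and k_dominates(q, p, 1)          # 2-cycle under 1-dominance
assert not k_dominates(p, q, 2) and not k_dominates(q, p, 2)  # both are skyline points
```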
\paragraph{The number of skyline points} The number of skyline
points is a key issue in their use and usefulness. This quantity
under suitable random assumptions of the input is also important for
practical modeling or reference purposes, as well as for the
analysis of skyline-finding algorithms. The two major, simple,
representative random models are \emph{hypercubes} and
\emph{simplices}. Assuming that the input dataset $\mathscr{D}
=\{\v{p}_1,\dots, \v{p}_n\}$ is taken uniformly and independently
from the hypercube $[0,1]^d$, then it has been known since the
1960's (see \cite{BNS66}) that the expected number of skyline points
of $\mathscr{D}$ is asymptotic to $\frac{(\log n)^{d-1}}{(d-1)!}$
for large $n$ and finite $d$, exhibiting the independence of the
coordinates. (Intuitively, if one sorts according to one dimension,
then each other dimension roughly contributes $\log n$ skyline
points.) On the other hand, if we assume that the input points are
uniformly sampled from the $d$-dimensional simplex $\{|x_1|+\cdots
+|x_d|\le 1, x_j\in(-1,0]\}$, then the expected number of skyline
points is asymptotic to $\Gamma \left(\frac1d\right) n^{1-\frac1d}$,
reflecting obviously a stronger negative correlation of the
coordinates; see \cite{BDHT05} and the references cited there. Here
$\Gamma$ denotes Euler's Gamma function. For the number of skyline
points under other models, see \cite{Baryshnikov07, Devroye86,
Devroye93,SY08} and the references therein.
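A quick Monte Carlo experiment (our illustrative sketch) is consistent with the $\frac{(\log n)^{d-1}}{(d-1)!}$ asymptotics; for $d=2$ the expected number of skyline points is in fact exactly the harmonic number $H_n\approx\log n$:

```python
import random

def skyline_count(points):
    # number of non-dominated points (smaller is better in both coordinates;
    # ties have probability zero for continuous samples)
    def dom(p, q):
        return p != q and p[0] <= q[0] and p[1] <= q[1]
    return sum(1 for p in points if not any(dom(q, p) for q in points))

random.seed(1)
n, trials = 200, 30
avg = sum(skyline_count([(random.random(), random.random()) for _ in range(n)])
          for _ in range(trials)) / trials
h_n = sum(1 / i for i in range(1, n + 1))  # exact expectation for d = 2
# avg is typically within a unit or so of H_200, which is about 5.88
assert abs(h_n - 5.878) < 0.01
assert 3 < avg < 10
```

The sample size, number of trials and seed are arbitrary choices for illustration.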
On the other hand, in contrast to the recent growing trend of
studying high dimensional datasets, not much is known for the
expected number of skyline points when $d$ is allowed to grow with
$n$. Such a direction is especially useful as practical situations
always deal with finite $n$ and finite $d$ (whose dependence on $n$
is often not clear). The only exception along this direction is the
uniform estimates given in \cite{Hwang04} (see also \cite{BDHT05})
for the expected number of skyline points in a random uniform
sample of $n$ points from the hypercube $[0,1]^d$. While the order
$\frac{(\log n)^{d-1}} {(d-1)!}$ may seem slowly growing as $d$
increases, it soon reaches the order $n$ when $d$ is around $\log
n$, which is relatively small for moderate values of $n$.
Consequently, the skyline points become too numerous to be of direct
use. The growth of skyline points in the random $d$-dimensional
simplex model is even faster and we can show that almost all points
are skylines when $d$ roughly exceeds $\frac{\log n} {\log\log n}$,
again small for $n$ not too large.
\paragraph{The cardinality of $k$-dominant skyline} Since
$k$-dominant skylines were proposed (see \cite{CJTTZ06}) to resolve
the skyline-abundance problem, it is of interest to know their
quantity under suitable random models. A critical step in applying
$k$-dominant skyline is to identify an appropriate $k$ such that the
size of the $k$-dominant skyline is within an acceptable range.
But this may not always be feasible. Consider the $5$-dimensional
dataset $\mathscr{D}$ given in Table~\ref{tab:ex}. The six points
are all skyline points, one ($\v{p}_6$) is a $4$-dominant skyline
point, and no point is in the $3$-dominant skyline. Clearly,
$\v{p}_6$ is to some extent better than the other points since it
contains two components with the lowest value $1$. However, it was
already mentioned in \cite{CJTTZ06} that some $k$-dominant skylines
may be empty. For example, if we drop $\v{p}_6$ from $\mathscr{D}$,
then the five points are all skyline points but all $k$-dominant
skylines are empty for $1\le k\le 4$. In this example, other
alternatives to $k$-dominant skylines have to be used.
Unfortunately, such a property of \emph{excessive skylines but few
$k$-dominant skylines} is not uncommon, and we show in this paper
that, under the hypercube and the simplex random models, the
expected number of $k$-dominant skylines both tends to zero for
large $n$ and $1\le k\le d-1$.
\begin{center}
\label{tab:ex}
\begin{tabular}{c|ccc}\toprule
point & skyline& $4$-dominant skyline & $3$-dominant skyline\\
\midrule
$\v{p}_1{\ }(1,2,2,3,3)$&\ding{52}&-&-\\
$\v{p}_2{\ } (3,1,2,2,3)$&\ding{52}&-&-\\
$\v{p}_3{\ } (3,3,1,2,2)$&\ding{52}&-&-\\
$\v{p}_4{\ } (2,3,3,1,2)$&\ding{52}&-&-\\
$\v{p}_5{\ } (2,2,3,3,1)$&\ding{52}&-&-\\
$\v{p}_6{\ } (2,3,1,1,3)$&\ding{52}&\ding{52}&-\\
\bottomrule
\end{tabular}\vspace*{.5cm}
\centerline{\emph{Table 1: An example showing the property of
many skylines but few $k$-dominant skylines.}}
\end{center}
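Several of the claims in this example can be checked mechanically. The sketch below (our addition; \texttt{k\_dominates} implements the definition above, smaller coordinates being better) verifies that all six points are skyline points, that the $3$-dominant skyline is empty, and that dropping $\v{p}_6$ leaves the $4$-dominant skyline, and hence, by monotonicity, every $k$-dominant skyline with $k\le 4$, empty:

```python
def k_dominates(p, q, k):
    # p k-dominates q: >= k coordinates with p_j <= q_j, one of them strict
    le = sum(a <= b for a, b in zip(p, q))
    lt = sum(a < b for a, b in zip(p, q))
    return le >= k and lt >= 1

def k_dominant_skyline(points, k):
    # points not k-dominated by any other point
    return [p for p in points
            if not any(k_dominates(q, p, k) for q in points if q != p)]

pts = [(1, 2, 2, 3, 3), (3, 1, 2, 2, 3), (3, 3, 1, 2, 2),
       (2, 3, 3, 1, 2), (2, 2, 3, 3, 1), (2, 3, 1, 1, 3)]

assert k_dominant_skyline(pts, 5) == pts      # all six points are skyline points
assert k_dominant_skyline(pts, 3) == []       # the 3-dominant skyline is empty
assert set(k_dominant_skyline(pts, 4)) <= {pts[5]}  # at most p6 can survive at k = 4
assert k_dominant_skyline(pts[:5], 4) == []   # dropping p6 empties the 4-dominant skyline
```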
\paragraph{Threshold phenomena} We clarify two types of threshold
phenomena for the expected number of $k$-dominant skylines in random
samples.
\begin{enumerate}
\item \emph{Large sample, bounded dimension}:
\[
\text{Expected number of $k$-dominant skylines}
\to \left\{\begin{array}{ll}
0, & \text{if }1\le k\le d-1;\\
\infty,& \text{if }k=d,
\end{array}
\right.
\]
as the sample size $n\to\infty$. While such a result is not new, being
contained as a special case of the general theory developed in
\cite{BO96} for finite dimensional skylines, we will give an
independent, transparent, self-contained proof, which, in addition
to being more precise, can be extended to the case when the
dimensionality goes unbounded with the sample size.
\item \emph{Large sample, moderate dimension}: There exists
an integer $d_0=d_0(n)\approx \sqrt{\frac{2\log n}{\log\frac{\log n}
{\log\log n}}} +1$ such that (see \eqref{EMkn-large})
\begin{align*}
\text{Expected number of $(d-1)$-dominant skylines}
\to \left\{\begin{array}{ll}
0, &\text{if } d\le d_0-1;\\
\infty,& \text{if } d\ge d_0+2,
\end{array} \right.
\end{align*}
as $n\to\infty$, and the two cases $d=d_0$ and $d=d_0+1$ lead to
two different oscillating functions, the first ($d=d_0$)
fluctuating between $0$ and $\frac{e^{-\gamma}}{2-e^{-e^{-1}}}$
and the second between $\frac{e^{-\gamma}}{2-e^{-e^{-1}}}$ and
$O\left(\frac{\log n}{\log\log n}\right)$,
where $\gamma$ is Euler's constant; see \eqref{n-ii2}
and \eqref{n-ii3}. We consider only random samples from
hypercubes. Other regions and other values of $k$, $k<d-1$ are
expected to exhibit similar threshold phenomena with different
$d_0$, but the analysis becomes excessively long and involved.
More details will be discussed elsewhere.
\end{enumerate}
We see from these phenomena that the usual ``curse of high
dimensionality'' thus takes another form here, which one may term the
``curse of constant dimensionality,'' referring to the situation
when no $k$-dominant skyline point at all exists. Also the model
where dimensionality can vary with the sample size is, at least from
a practical point of view, more reasonable; see
Sections~\ref{sec:d-large} and \ref{sec:threshold} for more
discussions and details.
\paragraph{Related works} In addition to the partial dominance
used in defining $k$-dominant skylines (see \cite{CJTTZ06}), there
are also several other skyline variants for retrieving more
representative points; these include skybands \cite{PTFS05}, top-$k$
dominating queries \cite{IBS08,PTFS05,YM09}, strong skylines
\cite{ZGLTW05}, skyline frequency \cite{CJTTZ062}, approximately
dominating representatives \cite{KP07}, $\varepsilon$-skylines
\cite{XZT08}, and top-$k$ skylines \cite{BGG07, LYH09}. See also the
survey paper \cite{IBS08} for more information.
\paragraph{Organization of the paper} This paper presents a
systematic study on the asymptotic estimates of the number of
$k$-dominant skyline points under random models. It is organized as
follows. We derive in the next section (\S~\ref{sec:1st}) an
asymptotic vanishing property for the number of $k$-dominant skyline
points under a common hypercube model when the dimensionality is
bounded. The extension to include more points in the partial
dominant skyline is shown to suffer from a similar drawback in
Section~\ref{sec:layers}. We then prove in Section~\ref{sec:simplex}
that changing the underlying model from hypercube to simplex does
not remedy the asymptotic vanishing property either.
Section~\ref{sec:cm} deals with a categorical model for which the
results have a very different nature. Roughly, as the total number
of possible sample points is finite in this model, the expected number of
$k$-dominant skylines will be asymptotically linear, meaning too
many choices for ranking or selection purposes. All these results
point to the negative side for the use of $k$-dominant skylines
under similar data situations. We then address the positive side in
the last few sections by considering again the hypercubes but with
growing dimensionality. A sharp threshold phenomenon is discovered
in Section~\ref{sec:threshold} when $d\to\infty$ with $n$, the
asymptotic approximations needed being derived in
Section~\ref{sec:d-large}. Another new threshold result, for the
expected number of dominant cycles, is given in Section~\ref{sec:nc}.
Section~\ref{sec:ub} provides a uniform lower-bound estimate for the
expected number of $k$-dominant skyline points for $1\le k\le d-1$. We conclude
in Section~\ref{sec:fin} with some numerical aspects of the
estimates we derived.
\section{Random samples from hypercubes}
\label{sec:1st}
The simplest random model is the hypercube $[0,1]^d$, which is also
the most natural and most studied one. It can also be used when
data are discrete in nature but span uniformly over a sufficiently
large interval.
In this section, we derive asymptotic estimates for the expected
number of $k$-dominant skyline points in a random sample of $n$
points $\mathscr{D} := \{\v{p}_{1},\ldots ,\v{p}_n\}$ uniformly and
independently drawn from $[0,1]^d$, $d\ge2$. Let $M_{d,k}(n)$ denote
the number of $k$-dominant skyline points of $\mathscr{D}$. We first
derive a crude upper bound for the expected number
$\mathbb{E}[M_{d,k}(n)]$, which implies that
$\mathbb{E}[M_{d,k}(n)]$ is asymptotically zero as $n$ grows
unbounded and $1\le k\le d-1$. More precise estimates are possible
and will be derived in Section~\ref{sec:d-large}. For a point
$\v{p}\in [0,1]^d$, denote by $B_{k}(\v{p})$ the region of the
points in $[0,1]^d$ that $k$-dominate $\v{p}$. Also, $|A|$ denotes
the volume of the region $A$.
\begin{thm}[Asymptotic zero-infinity property for large $n$ and
bounded $d$] For fixed $d\ge2$ \label{thm-q1}
\begin{equation}\label{a1}
\mathbb{E}[M_{d,k}(n)]
\to \left\{\begin{array}{ll}
0, & \text{if }1\le k\le d-1;\\
\infty,& \text{if }k=d,
\end{array}
\right.
\end{equation}
as $n\to\infty$.
\end{thm}
\noindent \emph{Proof.}\ The case $k=d$ has been known since the 1960's (see
\cite{BNS66}) and has been re-derived several times in the literature.
We assume $1\le k \le d-1$. Since $M_{d,k}(n)\le
M_{d,d-1}(n)$ for fixed $d$ and for $1\le k\le d-1$, we only prove
that $\mathbb{E}[M_{d,d-1}(n)]\to 0$.
We start from the integral representation
\begin{align}
\mathbb{E}[M_{d,d-1}(n)] &= n\mb{P}\left(\v{p}_1 \text{ is a
($d-1$)-dominant skyline point}\right) \nonumber \\
&= n\int_{[0,1]^d} \left(1-|B_{d-1}(\v{x})|\right)^{n-1}
\text{d}\v{x}\label{EM} ,
\end{align}
because if $\v{x}$ is not $(d-1)$-dominated by any of the other $n-1$
points, they all have to lie in the region $[0,1]^d \setminus
B_{d-1}(\v{x})$. Here and throughout this paper, $\text{d}\v{x}$ is the
abbreviation of $\text{d}x_1\cdots \text{d}x_d$.
To estimate the integral in \eqref{EM}, we split it into two parts,
one part having sufficiently small volume (corresponding roughly to
small $x_1\cdots x_d$) and the other with $|B_{d-1}(\v{x})|$ bounded
away from zero, rendering the term $(1-|B_{d-1}(\v{x})|)^{n-1}$ also
small.
For a fixed number $t$ satisfying $1<t<\frac{d}{d-1}$, define the
region
\begin{align} \label{Qn}
Q_{n}
:=\bigcup_{1\le \ell \le d}\left\{\v{x}\in [0,1]^{d}:
x_{\ell}\le n^{-\frac td}\text{ and }
\prod_{j\not= \ell} x_j
\le n^{-\frac{d-1}d\,t}\right\} .
\end{align}
Then
\[
\mathbb{E}[M_{d,d-1}(n)]
\le n\left| Q_{n}\right|+n\int_{[0,1]^{d}\setminus Q_{n}}
\left( 1-\left| B_{d-1}(\v{x})\right|\right)^{n-1}
\mathrm{d} \v{x}.
\]
The volume of $Q_n$ is bounded above by
\begin{align*}
\left| Q_{n}\right|
\le dn^{-\frac td} \int_{\substack{x_1\cdots x_{d-1}
\le n^{-\frac{d-1}{d}\,t}
\\ \v{x}\in[0,1]^d}}\text{d}\v{x}.
\end{align*}
To estimate the last integral, let
\[
A_d(\delta) := \int_{\substack{x_1\cdots x_{d-1}\le \delta
\\ \v{x}\in[0,1]^d}}\text{d}\v{x} \qquad(d\ge2),
\]
where $0<\delta<1$. Then $A_2(\delta) = \delta$, and, integrating
over the coordinate $x_{d-1}$ (the range $x_{d-1}\le\delta$
contributes~$\delta$),
\[
A_d(\delta) = \delta + \int_{\delta}^1 A_{d-1}
\left(\frac{\delta}{u}\right)
\text{d}u \qquad(d\ge3).
\]
A simple induction gives
\[
A_d(\delta) = \delta\sum_{0\le j\le d-2}
\frac{|\log \delta|^{j}}{j!}
= O\big(\delta(1+|\log\delta|)^{d-2}\big)
\qquad(d\ge2),
\]
and we obtain, by taking $\delta = n^{-\frac{d-1}{d}\,t}$,
\[
|Q_n|=O\left(n^{-t} (\log n)^{d-2}\right).\label{Qn-est}
\]
On the other hand, by an inclusion-exclusion argument, we have
\begin{align} \label{Bd}
|B_{d-1}(\v{x})|
=\sum_{1\le \ell \le d}\prod_{j\neq \ell}
x_j-(d-1)\prod_{1\le j\le d}x_j.
\end{align}
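In more detail, $B_{d-1}(\v{x})$ is the union of the $d$ boxes
$A_\ell$, $1\le\ell\le d$, where $A_\ell$ consists of the points
that are at least as good as $\v{x}$ in all coordinates $j\ne\ell$,
so that $|A_\ell| = \prod_{j\ne\ell}x_j$. The intersection of any
two or more of these boxes has volume $\prod_{1\le j\le d}x_j$, so
inclusion-exclusion gives
\[
|B_{d-1}(\v{x})|
= \sum_{1\le\ell\le d}\prod_{j\ne\ell}x_j
+\prod_{1\le j\le d}x_j \sum_{2\le m\le d}(-1)^{m-1}\binom{d}{m}
= \sum_{1\le\ell\le d}\prod_{j\ne\ell}x_j
-(d-1)\prod_{1\le j\le d}x_j,
\]
since $\sum_{2\le m\le d}(-1)^{m-1}\binom{d}{m}=-(d-1)$.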
Now if $\v{x}\in [0,1]^{d}\setminus Q_{n}$, then
\[
\left| B_{d-1}(\v{x})\right|
\ge \max_{1\le \ell \le d}\prod_{i\neq \ell }x_{i}
\ge n^{-\frac{d-1}{d}\,t}.
\]
The first inequality follows from \eqref{Bd} because each of the
remaining $d-1$ products $\prod_{j\ne\ell}x_j$ is at least
$\prod_{1\le j\le d}x_j$. For the second, suppose instead that
$\prod_{i\ne\ell}x_i<n^{-\frac{d-1}{d}\,t}$ for every $\ell$;
multiplying these $d$ inequalities gives
$\prod_{1\le j\le d} x_j<n^{-t}$, so some coordinate satisfies
$x_\ell\le n^{-\frac td}$, and this $\ell$ would place $\v{x}$
in $Q_n$.
Thus, we have
\begin{align} \label{dm1-skl}
\mathbb{E}[M_{d,d-1}(n)]
= O\left( n^{1-t}(\log n)^{d-2}\right)
+O\left( n\exp \left(-(n-1)n^{-\frac{d-1}{d}\,t}\right)\right),
\end{align}
and we see easily that the right-hand side tends to zero by our
choice of $t$. More precisely, if we take
\[
t = \frac{d}{d-1}\left(1
-\frac{\log\left(\frac d{d-1}\log n\right)}{\log n}\right),
\]
so as to balance the two $O$-terms in \eqref{dm1-skl}, then
\[
\mathbb{E}[M_{d,d-1}(n)] = O\left(n^{-\frac1{d-1}}
(\log n)^d\right).
\]
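Indeed, with this choice of $t$, we have $n^{-\frac{d-1}{d}\,t}
= \frac{d}{d-1}\cdot\frac{\log n}{n}$, so that
\[
n\exp\left(-(n-1)n^{-\frac{d-1}{d}\,t}\right)
= n\exp\left(-\tfrac{d}{d-1}\left(1-\tfrac1n\right)\log n\right)
= O\left(n^{-\frac1{d-1}}\right),
\]
while
\[
n^{1-t}(\log n)^{d-2}
= n^{-\frac1{d-1}}\left(\tfrac{d}{d-1}\log n\right)^{\frac d{d-1}}
(\log n)^{d-2}
= O\left(n^{-\frac1{d-1}}(\log n)^{d}\right),
\]
because $d-2+\frac{d}{d-1}\le d$ for $d\ge2$.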
This and the monotonicity of $M_{d,k}(n)$ (in $k$) proves \eqref{a1}.
{\quad\rule{3mm}{3mm}\,}
The fact that $\mathbb{E}[M_{d,k}(n)]\to0$ implies that the
$k$-dominance relation forms many cycles, but the corresponding
cycle structures are very difficult to quantify; see
Section~\ref{sec:fin} for some preliminary results.
\section{``Clouds'' of $k$-dominant skylines}
\label{sec:layers}
The asymptotic vanishing property (Theorem~\ref{thm-q1}) for the
expected number of $k$-dominant skylines limits their usefulness
whenever the input data behave like our random models. In
particular, if one is interested in finding the top-$K$
representative points, then the probability of obtaining enough
candidates tends to zero. A simple remedy to this situation (still
following the same notion of partial dominance between points)
is to consider the points that are $k$-dominated by at most a
specified number, say $j$, of other points, which we refer to as
the ``cloud'' of $k$-dominant skylines. But we show that this
notion suffers from a similar vanishing drawback under the random
hypercube model, unless $j$ is chosen large enough.
Let $L_{d,k}(n,j)$ denote the number of points in the random sample
$\{\v{p}_1,\ldots ,\v{p}_n\}$ that are $k$-dominated by exactly $j$
points, where the $n$ points are uniformly and independently
selected from $[0,1]^{d}$. Note that $L_{d,k}(n,0)$ is nothing but
$M_{d,k}(n)$.
\begin{thm}[Asymptotic zero-infinity property for clouds of
$k$-dominant skylines] For fixed $d\ge2$ and $1\le k\le d$,
\[
\mathbb{E}[L_{d,k}(n,j)] \to \left\{\begin{array}{ll}
0, & \text{if }1\le k\le d-1;\\
\infty,& \text{if }k=d,
\end{array}
\right.
\]
uniformly for $0\le j=o(n^{(1-\varepsilon)/d})$, as $n\to\infty$,
where $\varepsilon>0$ is an arbitrarily small constant.
\end{thm}
The theorem roughly says that, even under this more flexible notion
of partial dominance, the expected number of skyline points so
constructed still tends to zero for $1\le k\le d-1$ when the
dimensionality is fixed.
\noindent \emph{Proof.}\
The case when $k=d$ is also derived in \cite{BNS66} (under the
name of ``$(j+1)^{\text{st}}$ layer, 1-st quadrant-admissible
points''), where it is shown that
\[
\mathbb{E}[L_{d,d}(n,j)] =
\sum_{j<i_1\le\cdots\le i_{d-1}\le n}\frac1{i_1\cdots i_{d-1}},
\]
from which we obtain
\begin{align} \label{Lddnj}
\mathbb{E}[L_{d,d}(n,j)]
\sim \frac{\left(\log\frac{n}{j+1}\right)^{d-1}}{(d-1)!},
\end{align}
if $\log(n/(j+1))\to\infty$, where the symbol ``$\sim$'' means that
the ratio of the two sides tends to $1$ as $n$ goes unbounded.
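The estimate \eqref{Lddnj} can also be read off directly from the
harmonic sum: up to lower-order diagonal terms, the sum over weakly
increasing tuples equals
\[
\frac1{(d-1)!}\biggl(\,\sum_{j<i\le n}\frac1i\biggr)^{d-1},
\]
and $\sum_{j<i\le n}\frac1i = \log\frac{n}{j+1}+O(1)$.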
Alternatively, we can use the integral representation (see
\cite{BCHL98})
\begin{align}
\mathbb{E}[L_{d,d}(n,j)] &= n \binom{n-1}{j}\int_{[0,1]^d}
\left(x_1\cdots x_d\right)^{j} \left(1-x_1\cdots x_d
\right)^{n-1-j} \text{d} \v{x}\nonumber \\
&= \frac{n}{(d-1)!}\binom{n-1}{j}\int_0^1
t^j (1-t)^{n-1-j} \left(\log\tfrac1t\right)^{d-1}
\text{d} t, \label{int-ELdd}
\end{align}
by the change of variables $t = x_1\cdots x_d$. A
straightforward evaluation then gives \eqref{Lddnj}.
Note that $\frac{\mathbb{E}[L_{d,d}(n,j)]}n$ equals the probability
that the first-quadrant subtree of the root has size $j$ in random
quadtrees; see \cite[Appendix]{FLLS95}. This connection also
provides several other expressions for $\mathbb{E}[L_{d,d}(n,j)]$.
For example,
\begin{align*}
\mathbb{E}[L_{d,d}(n,j)]
= n\binom{n-1}{j}\sum_{0\le \ell\le n-1-j}\binom{n-1-j}{\ell}
\frac{(-1)^\ell}{(j+1+\ell)^{d}};
\end{align*}
see also \cite{BDHT05}.
For the remaining cases, we consider only $k=d-1$ and prove that
$\mb{E}[L_{d,d-1}(n,j)]\to0$. The reason is that
\[
\sum_{0\le \ell \le j} L_{d,k}(n,\ell)
\le \sum_{0\le \ell \le j} L_{d,d-1}(n,\ell)
\qquad(1\le k\le d-1).
\]
To see this, observe that if a point $\v{p}$ $(d-1)$-dominates
another point $\v{q}$, then $\v{p}$ also $k$-dominates $\v{q}$
for $1\le k\le d-2$. Thus every point that is $k$-dominated by at
most $j$ other points is also $(d-1)$-dominated by at most $j$
other points, and the sum on the left-hand side, which counts the
former points, is at most the sum on the right-hand side, which
counts the latter.
To prove $\mb{E}[L_{d,d-1}(n,j)]\to0$, we apply the same argument
used in the proof of Theorem~\ref{thm-q1} starting from the integral
representation
\begin{align*}
\mathbb{E}[L_{d,d-1}(n,j)]
&= n\,\mb{P}(\text{exactly } j
\text{ points in $\{\v{p}_2,\dots,\v{p}_n\}$
$(d-1)$-dominate } \v{p}_1)\\
&= n\binom{n-1}{j}
\int_{[0,1]^d} \left|B_{d-1}(\v{x})\right|^j
\left(1-\left|B_{d-1}(\v{x})\right|\right)^{n-1-j}\text{d}\v{x}.
\end{align*}
Now we fix a constant $t$ satisfying $1<t<\frac d{d-1}$, and then
choose $Q_n$ as in \eqref{Qn}. Then we have
\[
|Q_n| = O\left(n^{-t}(\log n)^{d-2}\right),
\]
and
\[
n^{-\frac{d-1}{d}\,t} \le |B_{d-1}(\v{x})| \le 1
\qquad(\v{x}\in[0,1]^d\setminus Q_n).
\]
It follows that
\begin{align*}
\mathbb{E}[L_{d,d-1}(n,j)]
&\le n|Q_n| + n\binom{n-1}{j}
\int_{[0,1]^d\setminus Q_n} \left|B_{d-1}(\v{x})\right|^j
\left(1-\left|B_{d-1}(\v{x})\right|\right)^{n-1-j}\text{d}\v{x}\\
&= O\left(n^{1-t}(\log n)^{d-2}\right)
+ O\left(n\binom{n-1}{j} \exp\left(-(n-1-j)
n^{-\frac{d-1}{d}\,t}\right)\right).
\end{align*}
Now choose
\[
t = \frac{d}{d-1}\left(1-
\frac{\log((j+\frac{d}{d-1})\log n)}{\log n}\right),
\]
so that
\[
n\binom{n-1}{j}\exp\left(-(n-1-j)
n^{-\frac{d-1}{d}\,t}\right)
= O\left(n^{1+j} n^{-j-\frac{d}{d-1}}\right)
= O(n^{-\frac1{d-1}}),
\]
and
\[
n^{1-t} = n^{-\frac1{d-1}} \left(j+\tfrac{d}{d-1}\right)
^{\frac d{d-1}}(\log n)^{\frac d{d-1}}
= O\left(n^{-\frac{\varepsilon}{d-1}}
(\log n)^{\frac{d}{d-1}}\right),
\]
uniformly for $j=O(n^{\frac{1-\varepsilon}{d}})$. Thus
\[
\mathbb{E}[L_{d,d-1}(n,j)]
= O\left(n^{-\frac{\varepsilon}{d-1}}
(\log n)^{d-2+\frac d{d-1}}
+ n^{-\frac1{d-1}}\right) \to 0.
\]
This proves the theorem. {\quad\rule{3mm}{3mm}\,}
A more precise asymptotic estimate for $\mb{E}[L_{d,d-1}(n,j)]$ will
be derived in Section~\ref{sec:d-large}; see \eqref{Ldd}.
Another easy special case is $k=1$, which is dual to the case $k=d$
because we have
\[
\mathbb{E}[L_{d,1}(n,j)] = \mathbb{E}[L_{d,d}(n,n-1-j)].
\]
Thus, by (\ref{int-ELdd}), we have
\begin{align*}
\mathbb{E}[L_{d,1}(n,j)]
&= \frac{n}{(d-1)!}\binom{n-1}{j}
\int_0^1 t^{n-1-j} (1-t)^j (-\log t)^{d-1} \text{d}t\\
&\sim \frac{n^{j+1}}{(d-1)!j!}
\int_0^\infty e^{-nt} t^{j+d-1} \text{d}t\\
&\sim \binom{j+d-1}{j} n^{-d+1},
\end{align*}
for large $n$ and $0\le j=o(\sqrt{n})$.
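In particular, taking $j=0$ recovers
\[
\mathbb{E}[M_{d,1}(n)] = \mathbb{E}[L_{d,1}(n,0)]
\sim n^{-(d-1)},
\]
which for $d=2$ is consistent with the exact value
$\mathbb{E}[M_{2,1}(n)]=\frac1n$ obtained by integrating
\eqref{EM} directly.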
\begin{figure}
\caption{\emph{Simulated values of $\sum_{0\le j\le m}
L_{d,k}(n,j)$.}}
\label{fg2}
\end{figure}
In general, if we are to select the top-$K$ representatives using
such clouds of partially dominant skylines, how large should $j$
be? That is, what is the minimum $m$ such that $\sum_{0\le j\le
m}L_{d,k}(n,j)>K$? Some simulation results are given in
Figure~\ref{fg2}.
\section{Random samples from simplices}
\label{sec:simplex}
We show in this section that the asymptotic vanishing property of
$k$-dominant skylines occurs not only in the case of the
$d$-dimensional hypercube distribution, but also in the
$d$-dimensional simplex distribution
\[
S_d=\left\{\v{x}:-1\le x_j\le 0 \text{ and }
\left\| \v{x}\right\|
:=\sum_{1\le j\le d}\left|x_j\right| \le 1\right\}.
\]
In particular, $S_2$ is the right triangle
\raisebox{-0.05cm}{\begin{tikzpicture}[scale=0.5]
\draw[line width=0.5pt] (0,0) -- (-0.5,0) -- (0,-0.5) -- (0,0);
\end{tikzpicture}}. Such a shape induces a negative dependence
between the two coordinates and thus a larger number of skyline
points.
Let $M^{[s]}_{d,k}(n)$ denote the cardinality of the $k$-dominant
skyline of the set $\mathscr{D} := \{\v{p}_1,\ldots ,\v{p}_n\}$,
where these $n$ points are uniformly and independently distributed
over $S_{d}$. For a point $\v{p}\in S_d$, denote by
$B^{[s]}_k(\v{p})$ the region of points in $S_d$ that
$k$-dominate $\v{p}$.
\begin{thm}[Asymptotic zero-infinity property for finite-dimensional
simplex] For fixed $d\ge2$ and $1\le k\le d$,
\[
\mathbb{E}[M^{[s]}_{d,k}(n)]
\to \left\{\begin{array}{ll}
0, & \text{if }1\le k\le d-1;\\
\infty,& \text{if }k=d,
\end{array}
\right.
\]
as $n\to\infty$.
\end{thm}
\noindent \emph{Proof.}\ For $k=d$, it is known (see \cite{CHT11}) that
\begin{align*}
\mathbb{E}[M^{[s]}_{d,d}(n)]
&= d! n \int_{S_d}\left(1-\left(1-
{\textstyle\sum}_{1\le i\le d} |x_i|\right)^d\right)^{n-1}
\text{d} \v{x} \\
&=n\sum_{0\le j<d} \binom{d-1}{j}
(-1)^j\frac{\Gamma(n)\Gamma\left(\frac{j+1}d\right)}
{\Gamma\left(n+\frac{j+1}{d}\right)}\\
&= \Gamma\left(\tfrac1d\right) n^{1-\frac1d}\left(
1+O\left(d n^{-\frac1d}\right)\right),
\end{align*}
where $\Gamma$ denotes the Gamma function. Thus the expected number
of skylines tends to infinity as $n$ goes unbounded.
Consider now $1\le k<d$. It suffices to examine the case $k=d-1$.
For a point $\v{x}\in S_{d}$ ($\v{x}\neq \v{0}$), let $\bm{\xi}:=
\frac{\v{x}}{\|\v{x}\|}$. Then $B_{d-1}^{[s]}(\bm{\xi}) \subset
B^{[s]}_{d-1}(\v{x})$. We now prove that
\begin{equation}
\left|B^{[s]}_{d-1}(\bm{\xi})\right|
\ge \frac{1}{d!d^{d}}\qquad( \bm{\xi}\in S_{d},
\left\| \bm{\xi}\right\| =1). \label{t2}
\end{equation}
Since $\left\| \bm{\xi}\right\|=1$, there is at least one
coordinate $|\xi_j|\ge \frac1d$. Without loss of generality, assume
$|\xi_d| \ge \frac1d$. Then $\sum_{1\le j<d} | \xi_j| \le
\frac{d-1}d$. Let
\[
T := \{\v{y}\in S_{d}:y_j\le \xi_j
\text{ for }1\le j\le d-1\text{ and }y_d\le 0\}.
\]
We have $T\subset B^{[s]}_{d-1}(\bm{\xi})$ and
\[
|T| = |S_d|\left|\xi_d\right|^{d} \ge \frac{1}{d!d^{d}},
\]
since $T$ is a copy of $S_d$ scaled by $\left|\xi_d\right|$. Thus
\eqref{t2} holds and we have
\begin{align*}
\mathbb{E}[M^{[s]}_{d,d-1}(n)]
&=nd!\int_{S_{d}}\left( 1-d!\left| B_{d-1}^{[s]}
(\v{x})\right| \right)^{n-1}\mathrm{d}\v{x}\\
&=O\left( n\left(1-d^{-d}\right)^{n}\right)\\
&\rightarrow 0,
\end{align*}
as $n\rightarrow \infty $. {\quad\rule{3mm}{3mm}\,}
We see that in this simplex model the expected number of
$k$-dominant skyline points tends to zero at an \emph{exponential}
rate (in $n$), in contrast to the \emph{polynomial} rate in the
hypercube model.
Does the expected number of $k$-dominant skyline points always tend
to zero? Here is a simple, artificial counterexample.
\quad \newline
\textbf{Example 1.} Assume $d=4,k=3$. Let
\[
A:=\left\{ (-t,-2t,3t,4t):1\le t\le 2\right\} .
\]
Then any two points in $A$ are incomparable (none dominating the
other) by the relation of $k$-dominance. Thus, the number of
$k$-dominant skyline points is equal to $n$ almost surely if
$\v{p}_1,\ldots ,\v{p}_n$ are uniformly and independently
distributed in $A$.
\section{A categorical model}
\label{sec:cm}
The preceding negative results are based on assuming that the points
are generated from some \emph{continuous models}, which are often a good
approximation to situations where the input can assume a sufficiently large
range of different values. What if we assume instead that the inputs are
sampled from some \emph{discrete space}, which is also often encountered
in practical applications? We show in this section that \emph{the
expected number of $k$-dominant skylines is always linear for
$1\le k\le d$}, in contrast to the asymptotic zero-infinity property
we derived above.
Assume that $n$ points $\mathscr{D} := \{\v{p}_1,\dots, \v{p}_n\}$
are chosen uniformly and independently from the product space
\[
\mathscr{P} := \bigotimes_{1\le j\le d}S_j,
\]
where
\[
S_j =\{1,2,\ldots ,u_j\}\qquad(u_j\ge2).
\]
Let $M_{d,k}^{[c]}(n)$ denote the number of $k$-dominant skylines in
$\mathscr{D}$. Unlike the continuous cases, the variation of the
random variables $M_{d,k}^{[c]}(n)$ is easier to predict as the
number of possible points in $\mathscr{P}$ is finite. Interestingly,
the first-order asymptotic estimate for the expected value of
$M_{d,k}^{[c]}(n)$ is independent of $k$ for $1\le k\le d$, where
the case $k=d$ gives the expected skyline count.
\begin{thm}[Asymptotic linearity for finite-dimensional categorical
model] The expected number of $k$-dominant skylines satisfies
\begin{align} \label{Mkn-cm}
\frac{\mb{E}[M_{d,k}^{[c]}(n)]}n\rightarrow \frac 1u
\qquad(1\le k\le d; d\ge 2),
\end{align}
as $n\rightarrow \infty$, where
\[
u := \prod_{1\le j\le d}u_j.
\]
\end{thm}
Now the problem is again the excessive number of skyline points.
Such a discrete model also exhibits another interesting phenomenon,
not present in the continuous models: for fixed $n$, the expected
number of $k$-dominant skyline points is not monotonically
increasing as $d$ grows.
\noindent \emph{Proof.}\
Let $\v{x}=(x_1,x_2,\ldots, x_d)\in \mathscr{P}$. Denote by
$B^{[c]}_k(\v{x})$ the set of points in $\mathscr{P}$ that
$k$-dominate $\v{x}$. Then
\begin{align} \label{EMkn-cm}
\mb{E}[M_{d,k}^{[c]}(n)]
&= n\mb{P}(\v{p}_1 \text{ is a $k$-dominant
skyline point})\nonumber \\
&=\frac{n}{u}\sum_{\v{x}\in
\mathscr{P}}\left( 1-
\frac{\left| B_{k}^{[c]}(\v{x})\right|}{u}\right)^{n-1}.
\end{align}
If $\v{y}\in B_k^{[c]}(\v{x})$, then $\v{y}$ is better than or equal
to $\v{x}$ (and strictly better at least once) in all coordinates
except possibly $\ell$ of them, say $j_1,\ldots ,j_{\ell }$, where
$0\le \ell \le d-k$. Thus
\[
\left| B_d^{[c]}(\v{x})\right|
= \Bigl(\,\prod_{1\le j\le d} x_j\Bigr)-1 ,
\]
and for $1\le k<d$
\begin{align}
\left| B_k^{[c]}(\v{x})\right|
=\sum_{0\le \ell\le d-k}\sum_{1\le
j_1<j_2<\cdots <j_\ell \le d}\left(
\frac{\prod_{1\le i\le d}x_{i}}{
\prod_{1\le i\le \ell} x_{j_i}}-1\right)
\prod_{1\le i\le \ell}
\left(u_{j_i}-x_{j_i}\right) . \label{Bkvx}
\end{align}
Here the product
\[
\frac{\prod_{1\le i\le d}x_{i}}{
\prod_{1\le i\le \ell} x_{j_i}}
= \prod_{i\neq j_r;\, r=1,\dots,\ell} x_i
\]
enumerates all possible values that a $k$-dominating point can
assume in the $d-\ell$ ($\ge k$) remaining coordinates, the
subtracted ``$1$'' removes the possibility that all these $d-\ell$
coordinates are equal to the corresponding $x_i$, and the last
product in \eqref{Bkvx} counts all possible values for the other
$\ell$ coordinates, in which the point is strictly worse.
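As an illustration, take $d=2$ and $k=1$. Then \eqref{Bkvx} reads
\[
\left|B_1^{[c]}(\v{x})\right|
= (x_1x_2-1)+(x_2-1)(u_1-x_1)+(x_1-1)(u_2-x_2),
\]
which equals $0$ at $\v{x}=(1,1)$ and $u_1u_2-1$ at
$\v{x}=(u_1,u_2)$, as it must: nothing $1$-dominates the best point
$(1,1)$, while every other point of $\mathscr{P}$ $1$-dominates the
worst point $(u_1,u_2)$.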
Since there is a unique point $\v{1} := (\overbrace{1,\ldots ,1}^d)$
in $\mathscr{P}$ with $\left| B_{k}^{[c]}(\v{1})\right| =0$, all
other terms in the sum on the right-hand side of (\ref{EMkn-cm})
being exponentially small, we obtain \eqref{Mkn-cm}.
{\quad\rule{3mm}{3mm}\,}
\begin{figure}
\caption{\emph{A graphical rendering of
$\mb{E}[M_{d,k}^{[c]}(n)]$.}}
\end{figure}
In the special case when $u_j=2$ for all $1\le j\le d$, we have
\[
\left| B_{k}^{[c]}(\v{x})\right|
=\left( 2^\ell-1\right) \sum_{0\le j \le d-k}
\binom{d-\ell}{j},
\]
where $\v{x}\in \{1,2\}^d$ and $\ell$ denotes the number of times
``$2$'' occurs in $\v{x}$ (``$1$'' thus occurring $d-\ell$ times).
The closed-form expression \eqref{EMkn-cm} then simplifies to
\[
\mb{E}[M_{d,k}^{[c]}(n)]=\frac{n}{2^{d}}
\sum_{0\le\ell \le d}\binom{d}{\ell}\left( 1-\frac{
2^\ell-1}{2^d}\sum_{0\le j\le d-k}
\binom{d-\ell}{j}\right)^{n-1},
\]
from which it follows that
\[
\frac{\mb{E}[M_{d,k}^{[c]}(n)]}{n}\rightarrow
\frac{1}{2^{d}}\quad \text{as }
n\rightarrow \infty .
\]
\begin{figure}
\caption{\emph{Two plots of the ratio
$\mb{E}[M_{d,k}^{[c]}(n)]/n$.}}
\end{figure}
Since the product space $\mathscr{P}$ is finite, we can indeed fully
characterize the asymptotic distribution of $M_{d,k}^{[c]}(n)$.
\begin{thm}[Asymptotic binomial distribution for finite-dimensional
categorical model] The distribution of $M_{d,k}^{[c]}(n)$ is
asymptotically equivalent to a binomial distribution with parameters
$n$ and $1/u$.
\end{thm}
\noindent \emph{Proof.}\
Let $X_n$ denote the number of $j$'s for which $\v{p}_j
=(1,\dots,1)$, $1\le j\le n$. Then, obviously, $X_n$ is binomially
distributed with parameters $n$ and $1/u$, namely,
\[
\mathbb{P}(X_n=\ell) =
\binom{n}{\ell}\frac1{u^\ell}
\left(1-\frac1u\right)^{n-\ell} \qquad
(0\le \ell \le n).
\]
Now if at least one of the points $\v{p}_j$ equals $(1,\ldots ,1)$,
then $M_{d,k}^{[c]}(n)=X_n$. Thus
\[
\mb{P}\left( M_{d,k}^{[c]}(n)\neq X_n\right)
\leq \mb{P}\left( \v{p}_j\neq (1,\ldots ,1)
\text{ for all } j\right)
=\left( 1-\frac1u\right) ^{n}\rightarrow 0,
\]
and thus the distribution of $M_{d,k}^{[c]}(n)$ is asymptotic to the
distribution of $X_n$. {\quad\rule{3mm}{3mm}\,}
In particular, we see that the variance of $M_{d,k}^{[c]}(n)$ is
also asymptotically linear
\[
\frac{\mb{V}[M_{d,k}^{[c]}(n)]}{n}
\to \frac1{u}\left(1-\frac1u\right)
\qquad(1\le k\le d).
\]
These considerations extend easily to the case of non-uniform
discrete distributions. More generally, assume that the data set is
sampled from the set $\{\v{a} _{1},\ldots ,\v{a}_{m}\}\subset
\mathscr{P}$ and each point is endowed with the probability
$\mb{P}(\v{a}_j)$. Let $p_k(\v{a}_j)$ be the probability that
$\v{a}_j$ is $k$-dominated, that is, $p_k(\v{a}_j)$ is equal to the
sum of $\mb{P}(\v{a}_i)$ such that $\v{a}_i$ $k$-dominates
$\v{a}_j$. Then the expected number of $k$-dominant skyline points
satisfies
\[
\mb{E}[M_{d,k}^{[c]}(n)]
=n\sum_{1\le j\le m}\mb{P}(\v{a}_j)\left( 1-p_k(
\v{a}_j)\right)^{n-1}.
\]
Let
\[
q_k := \sum_{\substack{p_k(\v{a}_j)=0\\
1\le j\le m}} \mb{P}(\v{a}_j)
\]
be the total probability of the points in
$\{\v{a}_1,\ldots ,\v{a}_m\}$ that are not $k$-dominated. Then,
since the expected number of $k$-dominant skyline points is
expressed as a finite sum, we have
\[
\frac{\mb{E}[M_{d,k}^{[c]}(n)]}{n}\rightarrow q_{k},
\quad \text{as }n\rightarrow
\infty .
\]
Note that $q_{k}$ may range from zero to one.
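For example, if the data are sampled from just two points $\v{a}_1$
and $\v{a}_2$ with $\v{a}_1$ $k$-dominating $\v{a}_2$, and
$\mb{P}(\v{a}_1)=p$, then $p_k(\v{a}_1)=0$, $p_k(\v{a}_2)=p$, and
\[
\frac{\mb{E}[M_{d,k}^{[c]}(n)]}{n}
= p + (1-p)(1-p)^{n-1} \rightarrow p = q_k.
\]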
\section{Uniform asymptotic estimates for $\mb{E}[M_{d,d-1}(n)]$}
\label{sec:d-large}
We derive in this section two uniform asymptotic estimates for
$\mb{E}[M_{d,d-1}(n)]$ in two overlapping ranges. To state our
results, we need to introduce the Lambert $W$-function (see
\cite{CGHK96}), which is implicitly defined by the equation
\begin{align} \label{lambert-w}
W(z)e^{W(z)} =z .
\end{align}
For our purpose, we take $W$ to be the principal branch that is
positive for positive $z$ and satisfies the asymptotic approximation
\begin{align} \label{Wx}
W(x) = \log x - \log\log x + \frac{\log\log x}{\log x}
+ O\left(\frac{(\log\log x)^2}{(\log x)^2} \right),
\end{align}
for large $x$.
Our first asymptotic estimate covers $d$ in the range
\[
3\le d\le \sqrt{\frac{2\log n}{W(2\log n)+K}},
\]
where $K\to\infty$ with $n$, and the second the range
\[
(\log n)^{1/3}\ll
d\le 2\sqrt{\frac{\log n}{W(\log n)-C}},
\]
for some constant $C>0$. The upper bounds of the two ranges do not
differ significantly but are sufficient for our purpose of proving
the threshold phenomenon, which we discuss in the next section.
Very roughly, the expected number of $(d-1)$-dominant skylines is
asymptotically negligible in the first range, and undergoes a
phase transition from being almost zero to unbounded in the
second.
\begin{thm}[Uniform estimate for large $n$ and moderate $d$]
\label{thm:ud} If $d\ge3$ and
\begin{align} \label{d-rg1}
\frac{2\log n}{d^2}-W(2\log n)\to\infty,
\end{align}
then
\begin{align} \label{dm1-dom}
\mathbb{E}[M_{d,d-1}(n)] =
\frac{n^{-\frac1{d-1}}}{d-1}\,\Gamma\left(\frac1{d-1}\right)^d
\left(1+O\left(dn^{-\frac1{(d-1)(d-2)}}\right)\right),
\end{align}
uniformly in $d$ for large $n$.
\end{thm}
Note that if $d$ is of the form
\[
d = \left\lfloor \sqrt{\frac{2\log n}
{W(2\log n)+2v}}\right\rfloor,
\]
then
\[
d n^{-\frac1{(d-1)(d-2)}} = e^{-v}\left(1+O\left(
\frac{(1+|v|)W(2\log n)^{3/2}}{\sqrt{\log n}}\right)\right),
\]
which becomes $o(1)$ if $v\to\infty$.
On the other hand, when $d=2$, we have, by \eqref{EM},
\[
\mathbb{E}[M_{2,1}(n)]
= n\int_0^1\!\!\int_0^1\left((1-x)(1-y)\right)^{n-1}
\text{d} x\text{d} y
= \frac1n.
\]
\]
\noindent \emph{Proof.}\
We again begin with the integral representation \eqref{EM},
where $B_{d-1}(\v{x})$ is given in \eqref{Bd}.
By the elementary inequalities (see \cite{BHLT01})
\[
e^{-nt}(1-nt^2)\le (1-t)^n \le e^{-nt}
\qquad(n\ge1; t\in[0,1]),
\]
we have
\[
E_{n,d}-E_{n,d}' \le \mathbb{E}[M_{d,d-1}(n+1)] \le E_{n,d},
\]
where
\begin{align*}
E_{n,d} &:= n\int_{[0,1]^d}
e^{-n|B_{d-1}(\v{x})|}\text{d} \v{x},\\
E_{n,d}' &:= n^2 \int_{[0,1]^d} |B_{d-1}(\v{x})|^2
e^{-n|B_{d-1}(\v{x})|}\text{d} \v{x}.
\end{align*}
We will see that $E_{n,d}'$ is asymptotically of smaller order
than $E_{n,d}$. The intuition here is that most contribution to the
integral comes from $\v{x}$ for which $|B_{d-1}(\v{x})|$ is small,
implying that $(1-|B_{d-1}(\v{x})|)^n$ is close to
$e^{-n|B_{d-1}(\v{x})|}$. Also replacing $n+1$ by $n$ in the
resulting asymptotic approximation gives rise only to smaller order
errors. However, the uniform error bound represents the most delicate
part of our proof.
We start with the asymptotic evaluation of $E_{n,d}$. Making the
change of variables $x_j \mapsto \frac{y_j}N$, where $N :=
n^{\frac1{d-1}}$, we obtain
\begin{align}
E_{n,d} &= N^{-1} \int_{[0,N]^d}
e^{-y_1\cdots y_d \left(\frac1{y_1}
+\cdots+\frac1{y_d}\right) +
\frac{d-1}{N}y_1\cdots y_d}\text{d} \v{y}\nonumber \\
&= N^{-1} \left(\phi_d(n)-f_d(n)
+R_d(n) \right),\label{End-Rnd}
\end{align}
where
\begin{align*}
\phi_d(n) & := \int_{\mathbb{R}_+^d}
e^{-y_1\cdots y_d \left(\frac1{y_1}
+\cdots+\frac1{y_d}\right)}\text{d} \v{y},\\
f_d(n) &:= \left(\int_{\mathbb{R}_+^d}-
\int_{[0,N]^d}\right)
e^{-y_1\cdots y_d \left(\frac1{y_1}
+\cdots+\frac1{y_d}\right)}\text{d} \v{y},\\
R_d(n) &:= \int_{[0,N]^d}
e^{-y_1\cdots y_d \left(\frac1{y_1}
+\cdots+\frac1{y_d}\right)}\left(
e^{\frac{d-1}{N}y_1\cdots y_d}-1\right)\text{d} \v{y}.
\end{align*}
We focus on the evaluation of the integral $\phi_d(n)$, leaving the
lengthier estimation of the two error terms $f_d(n)$ and $R_d(n)$ to
Appendix A.
We now carry out the change of variables $t_j := \prod_{\ell\not=j}
y_\ell$ for $1\le j\le d$, the Jacobian being
\[
\frac{\partial(y_1,\dots,y_d)}{\partial(t_1,\dots,t_d)}
=\left[\begin{array}{ccc}
\frac{\partial y_1}{\partial t_1} & \cdots
& \frac{\partial y_1}{\partial t_d} \\
\vdots & \ddots & \vdots \\
\frac{\partial y_d}{\partial t_1} & \cdots
& \frac{\partial y_d}{\partial t_d}
\end{array}\right]
\]
whose determinant is equal to $1/\det J$, where
\[
J :=\frac{\partial(t_1,\dots,t_d)}
{\partial(y_1,\cdots,y_d)}.
\]
Note that the entries of $J$ satisfy
\[
J_{i,j}=\left\{\begin{array}{ll}
0,& \text{if } i=j;\\
\displaystyle \frac{y_1\cdots y_d}{y_iy_j},
& \text{if }i\neq j.
\end{array}\right.
\]
It follows that
\[
\det J = (y_1\cdots y_d)^{d-2} \det T,
\]
where $T$ is a $d\times d$ matrix with $T_{i,i}=0$ and $T_{i,j}=1$
for $i\neq j$. The determinant of $T$ is seen to be $(-1)^{d-1}
(d-1)$ by adding all rows of $T$ to the first, by taking the factor
$d-1$ out, and then by subtracting the first row from all other
rows. Thus we have
\begin{align*}
\det J &= (-1)^{d-1}(d-1)(y_1\cdots y_d)^{d-2}\\
&= (-1)^{d-1}(d-1)(t_1\cdots t_d)^{\frac{d-2}{d-1}}.
\end{align*}
Thus, by the integral representation of the Gamma function
\[
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \text{d} t
\qquad(x>0),
\]
we obtain
\begin{align*}
\phi_d(n) &= \frac{1}{d-1}
\int_{\mathbb{R}_+^d} e^{-(t_1+\cdots +t_d)}
(t_1\cdots t_d)^{-\frac{d-2}{d-1}}\text{d} \v{t}\\
&=\frac{1}{d-1}\left(\int_0^\infty
e^{-u} u^{-\frac{d-2}{d-1}}\text{d} u\right)^d\\
&= \frac1{d-1}\,\Gamma\left(\frac1{d-1}\right)^d.
\end{align*}
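Although Theorem~\ref{thm:ud} assumes $d\ge3$, the main term is
consistent with the case $d=2$: there $\phi_2 = \Gamma(1)^2 = 1$,
so that $E_{n,2} \approx N^{-1}\phi_2 = \frac1n$, in accordance
with the exact value $\mathbb{E}[M_{2,1}(n)]=\frac1n$ computed
above.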
We will prove in Appendix A that
\begin{align}
\frac{f_d(n)}{\phi_d(n)}
&= O\left(d n^{-\frac1{(d-1)(d-2)}} \right),\nonumber \\
\frac{R_d(n)}{\phi_d(n)} &= O\left(d 2^{-d}
n^{-\frac1{d-1}} \right).\label{Rdn-ratio}
\end{align}
In a similar manner, we have
\begin{align*}
E_{n,d}' &= O\left(n^2\int_{\mathbb{R}_+^d}
\left(x_1\cdots x_d{\textstyle \sum}_{1\le j\le d}
\tfrac1{x_j}\right)^2 e^{-n x_1\cdots x_d
\sum_{1\le j\le d}\tfrac1{x_j}} \text{d}\v{x}\right)\\
&= O\left(\frac{n^{-\frac2{d-1}}}{d-1}\int_{\mathbb{R}_+^d}
\left(t_1+\cdots+t_d\right)^2 e^{-(t_1+\cdots+t_d)}
(t_1\cdots t_d)^{-\frac{d-2}{d-1}}\text{d}\v{t}\right).
\end{align*}
The last integral can be evaluated in a more general form as
follows. Let $[z^j]f(z)$ denote the coefficient of $z^j$ in the
Taylor expansion of $f$.
\begin{align*}
&\int_{\mathbb{R}_+^d}
\left(t_1+\cdots+t_d\right)^j e^{-(t_1+\cdots+t_d)}
(t_1\cdots t_d)^{-\frac{d-2}{d-1}}\text{d}\v{t}\\
&\qquad= j![z^j] \int_{\mathbb{R}_+^d}
e^{-(1-z)(t_1+\cdots+t_d)}
(t_1\cdots t_d)^{-\frac{d-2}{d-1}}\text{d}\v{t}\\
&\qquad= j![z^j] \frac{\Gamma(\frac1{d-1})^d}
{(1-z)^{\frac d{d-1}}}\\
&\qquad= j!\Gamma\left(\frac1{d-1}\right)^d
\binom{\frac1{d-1}+j}{j},
\end{align*}
for $j\ge0$. Thus
\begin{align*}
\frac{E_{n,d}'}{\phi_d(n)} = O\left(n^{-\frac2{d-1}} \right).
\end{align*}
Collecting these estimates proves the theorem. {\quad\rule{3mm}{3mm}\,}
When $d$ increases beyond the range \eqref{d-rg1}, the error term
$f_d(n)$ (see \eqref{End-Rnd}) is no longer negligible, and a more
delicate analysis is needed.
\begin{thm}[Uniform asymptotic estimate in the critical range]
\label{thm:ud2}
If
\begin{align} \label{d-rg2}
\frac d{(\log n)^{1/3}}\to\infty\quad
\text{and}\quad d\le 2\sqrt{\frac{\log n}
{W\left(\frac{4\log n}{(e\log 2)^2}\right)}},
\end{align}
then, with $\rho := \frac{d}{e n^{1/d^2}}$,
\begin{align} \label{dm1-dom2}
\mathbb{E}[M_{d,d-1}(n)] =
\frac{n^{-\frac1{d-1}}}{d-1}\,\Gamma\left(\frac1{d-1}\right)^d
\left(\frac1{2-e^{-\rho}}+
O\left(\frac{\rho (\rho+1)e^{-\rho}}{(2-e^{-\rho})^3}
\left(\frac1d+\frac{\log n}{d^3}\right)\right)\right),
\end{align}
uniformly in $d$ for large $n$.
\end{thm}
The proof of this theorem is very long and is thus relegated to
Appendix B. The crucial step is an asymptotic estimate for
$f_d(n)$, proved by an inductive argument: we first derive a
recurrence of the form
\[
f_d(n) = g_d(n) + \Phi[f_d](n) + \text{smaller order
terms},
\]
where
\begin{align*}
g_d(n) := \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^{j-1} (d-1-j)^{j-1} \Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
n^{\frac1{d-1}-\frac{1}{d-1-j}} ,
\end{align*}
and $\Phi$ is an operator defined by
\[
\Phi[f_d](n) := \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}-\frac{1}{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} f_{d-j}(nv_1\cdots v_j)\text{d}\v{v}.
\]
Then \eqref{dm1-dom2} follows from iterating the operator and a
careful analysis of the resulting sums.
\begin{cor} If $d$ is of the form
\[
d = \left\lfloor\sqrt{\frac{2\log n}{W(2\log n)-2v-2}}
\right \rfloor,
\]
then
\begin{align} \label{cor-ph-tr}
\frac{\mathbb{E}[M_{d,d-1}(n)]}
{\frac{n^{-\frac1{d-1}}}{d-1}\,
\Gamma\left(\frac1{d-1}\right)^d}
\sim \left\{\begin{array}{ll}
1, &\text{if }v\to-\infty;\\
\frac1{2-e^{-e^v}},&\text{if }v=O(1);\\
\frac12, &\text{if } v\to\infty.
\end{array} \right.
\end{align}
\end{cor}
\noindent \emph{Proof.}\ Observe that
\[
\rho = \frac{d}{e n^{1/d^2}}
= e^v\left(1+O\left(\frac{1+|v|}{W(2\log n)}\right)
\right).
\]
Thus \eqref{cor-ph-tr} follows from this and \eqref{dm1-dom2}.
{\quad\rule{3mm}{3mm}\,}
Combining the ranges \eqref{d-rg1} and \eqref{d-rg2} of the two
estimates \eqref{dm1-dom} and \eqref{dm1-dom2}, we see that
\begin{cor} If
\[
3\le d\le 2\sqrt{\frac{\log n}{W(4e^{-2}\log n)}},
\]
then
\[
\mathbb{E}[M_{d,d-1}(n)] \sim
\frac{1}{2-e^{-\rho}}\cdot
\frac{n^{-\frac1{d-1}}}
{d-1}\Gamma\left(\frac1{d-1}\right)^d,
\]
uniformly in $d$.
\end{cor}
We conclude from these estimates that $\mathbb{E}[M_{d,d-1}(n)]$
is, up to a bounded factor, very well approximated by
$\frac{n^{-\frac1{d-1}}}{d-1}\Gamma\left(\frac1{d-1}\right)^d$.
\noindent \textbf{Remark.} An analysis similar to that for
\eqref{dm1-dom} leads to ($L_{d,k}(n,j)$ is defined in
Section~\ref{sec:layers})
\begin{align} \label{Ldd}
\mathbb{E}[L_{d,d-1}(n,j)]
\sim c_{d,j} n^{-\frac1{d-1}},
\end{align}
for each finite integer $j\ge0$, where
\begin{align*}
c_{d,j}
&:= \frac1{(d-1)j!}
\int_{\mathbb{R}_+^d}(v_1+\cdots+v_d)^j
e^{-(v_1+\cdots +v_d)}(v_1\cdots v_d)^{-\frac{d-2}{d-1}}
\text{d}\v{v}\\
&= \frac1{d-1}\,\Gamma\left(\frac1{d-1}\right)^d
\binom{j+\frac1{d-1}}{j},
\end{align*}
uniformly when $\frac{2\log n}{d^2}-W(2\log n)\to\infty$ and
$j=o\left(n^{\frac{1-\varepsilon}{d}}\right)$, $\varepsilon\in(0,1)$.
The consideration for larger $d$ as for \eqref{dm1-dom2} is similar.
\section{Threshold phenomenon for $\mb{E}[M_{d,d-1}(n)]$ when
$d\to\infty$} \label{sec:threshold}
With the asymptotic estimates \eqref{dm1-dom} and \eqref{dm1-dom2}
derived in the previous section, we prove in this section a rather
unexpected threshold phenomenon for the expected number of
$(d-1)$-dominant skylines $\mb{E}[M_{d,d-1}(n)]$ (in random samples
from the $d$-dimensional hypercube) when $d-1$ is near
$\sqrt{\frac{2\log n}{W(2\log n)}}$.
\begin{thm}[Threshold phenomenon] Let
\begin{align} \label{d-large}
d_0 =d_0(n):= \tr{\sqrt{\frac{2\log n}{W(2\log n)}}}
+1,
\end{align}
where $W$ denotes the Lambert-W function. Then the expected number
of ($d-1$)-dominant skyline points satisfies
\begin{align} \label{EMkn-large}
\lim_{n\to\infty}\mb{E}[M_{d,d-1}(n)] =
\left\{\begin{array}{ll}
0, &\text{if } d< d_0;\\
\infty,& \text{if } d>d_0+1.
\end{array} \right.
\end{align}
If $d=d_0$, then $\lim_{n\to\infty} \mb{E}[M_{d,d-1}(n)]$ does not
exist: the expected value oscillates between $0$ and
$\frac{e^{-\gamma}}{2-e^{-e^{-1}}}$, with
\begin{align} \label{n-ii2}
\mb{E}[M_{d,d-1}(n)] \sim
\frac{e^{-\gamma}}{2-e^{-e^{-1}}}\,
\varphi_0\left(\sqrt{\frac{2\log n}
{W(2\log n)}}\right),
\end{align}
where $\varphi_0(x)$ is a bounded oscillating function of $x$
defined by
\[
\varphi_0(x) := e^{-\{x\}} x^{-2\{x\}}.
\]
If $d=d_0+1$, then $\lim_{n\to\infty} \mb{E}[M_{d,d-1}(n)]$ does
not exist: the expected value oscillates between
$\frac{e^{-\gamma}}{2-e^{-e^{-1}}}$ and values of order
$\frac{\log n}{\log\log n}$, with
\begin{align} \label{n-ii3}
\mb{E}[M_{d,d-1}(n)] \sim
\frac{e^{-\gamma}}{2-e^{-e^{-1}}}\,
\varphi_1\left(\sqrt{\frac{2\log n}
{W(2\log n)}}\right),
\end{align}
where $\varphi_1(x)$ is an oscillating function of $x$ defined by
\[
\varphi_1(x) := e^{1-\{x\}} x^{2-2\{x\}}.
\]
\end{thm}
\noindent \emph{Proof.}\
By monotonicity, it suffices to examine the asymptotic behavior
of $\mathbb{E}[M_{d,d-1}(n)]$ for $d$ near $d_0$. Observe that if
\[
d = d_0+m = \sqrt{\frac{2\log n}{W_n}}-\tau_n
+m + 1,
\]
where $m$ is an integer and $\tau_n$ denotes the fractional part of
$\sqrt{\frac{2\log n}{W(2\log n)}}$, namely,
\[
\tau_n :=
\left\{\sqrt{\frac{2\log n}{W_n}}\right\}
= \sqrt{\frac{2\log n}{W_n}} -
\tr{\sqrt{\frac{2\log n}{W_n}}},
\]
then
\[
\rho = \frac{d}{e n^{1/d^2}} = e^{-1}\left(1+O
\left(\frac{W_n^{\frac32}|m+\tau_n|}{\sqrt{\log n}}
\right)\right)\to e^{-1},
\]
where, here and throughout the proof, $W_n := W(2\log n)$. Thus for
bounded $m$
\[
\frac1{2-e^{-\rho}}\to\frac1{2-e^{-e^{-1}}}.
\]
On the other hand, by \eqref{dm1-dom2} and the asymptotic estimate
$\Gamma(x)= x^{-1}-\gamma+O(x)$ as $x\to0$, where $\gamma$ denotes
the Euler constant, we see that
\begin{align*}
\frac{n^{-\frac1{d-1}}}{d-1}\Gamma\left(\frac1{d-1}\right)^d
&= e^{-\gamma+m-\tau_n}\left(\frac{2\log n}{W_n}
\right)^{m-\tau_n}\left(1+O\left(
\frac{W_n^{\frac32}(m+\tau_n+1)^2}{\sqrt{\log n}}
\right)\right)\\
&\left\{\begin{array}{ll}
\to 0, &\text{if } m\le -1;\\
\sim e^{-\gamma}\varphi_0\left(\sqrt{\frac{2\log n}
{W_n}}\right),
&\text{if } m=0 ;\\
\sim e^{-\gamma}\varphi_1\left(\sqrt{\frac{2\log n}
{W_n}}\right),
&\text{if } m=1 ;\\
\to \infty, &\text{if } m\ge 2.
\end{array} \right.
\end{align*}
This proves \eqref{EMkn-large}, \eqref{n-ii2} and \eqref{n-ii3}. It
remains to consider more precisely the behavior of $\varphi_0(x)$
and $\varphi_1(x)$.
Obviously, by definition, $\varphi_0(x)\in (0,1]$ and
$\varphi_1(x)\in [1,\infty)$ because $\{x\}\in[0,1)$ for
$x\in\mathbb{R}_+$. If $\{x\}=0$, then $\varphi_0(x)=1$; more
generally,
\[
\varphi_0(x) \to \left\{\begin{array}{ll}
1, & \text{if }\{x\}\log x = o(1);\\
0, & \text{if }\{x\}\log x \to\infty.
\end{array}\right.
\]
On the other hand,
\[
\varphi_1(x) \to \left\{\begin{array}{ll}
1, & \text{if }(1-\{x\})\log x = o(1);\\
\infty, & \text{if }(1-\{x\})\log x \to\infty.
\end{array}\right.
\]
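To make the oscillation concrete, the following Python sketch (the helper name is ours) evaluates $\varphi_1$ directly: at an integer $x$ one gets $\varphi_1(x)=e\,x^2$, while just below the next integer the value drops to near $1$.

```python
import math

def phi1(x):
    """phi_1(x) = e^(1 - {x}) * x^(2 - 2{x}), where {x} is the fractional part."""
    f = x - math.floor(x)
    return math.exp(1.0 - f) * x**(2.0 - 2.0 * f)

print(phi1(3.0))    # equals e * 9 at an integer point
print(phi1(3.999))  # close to 1 just below the next integer
```

This illustrates why $\varphi_1(x)\in[1,\infty)$ and why its value depends so strongly on $\{x\}$.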
We now prove that
\begin{align} \label{phi-zero}
\tau_n=0 \text{ if and only if } n= i^{i^2} \; (i\ge2).
\end{align}
First, if $n=i^{i^2}$, then $2\log n=2i^2\log i$ and the positive
solution to the equation (see \eqref{lambert-w})
\[
W_ne^{W_n} = 2i^2\log i,
\]
is given by $W_n= 2\log i$, as can be easily checked. Thus
\begin{align} \label{ii2}
\sqrt{\frac{2\log n}{W_n}}=i \qquad(i\ge2).
\end{align}
Conversely, if the relation \eqref{ii2} holds, then the positive
solution to the equations
\[
\frac{2\log n}{W_n} = i^2,
\text{ and } W_ne^{W_n} = 2\log n,
\]
is given by $n=i^{i^2}$. This proves \eqref{phi-zero}.
It follows particularly, by \eqref{dm1-dom2}, that
\[
\lim_{i\to\infty} \mathbb{E}\left[M_{i,i-1}\left(i^{i^2}\right)\right]
= \frac{e^{-\gamma}}{2-e^{-e^{-1}}}.
\]
This completes the proof of the theorem. {\quad\rule{3mm}{3mm}\,}
The function $d_0$ of $n$ on the right-hand side of \eqref{d-large}
grows extremely slowly. Let $a_i := i^{i^2}$ with $a_1:=2$. Then
$d_0=i+1$ for $a_i\le n<a_{i+1}$, which is small for almost all
practical sizes of $n$:
\[
d_0 = \left\{\begin{array}{ll}
2, & \text{if } 2\le n\le 15;\\
3, & \text{if } 16\le n\le 19682;\\
4, & \text{if } 19683 \le n\le 42949\,67295;\\
5, & \text{if } 42949\,67296\le n\le
2.98\dots\times 10^{17};\\
6, & \text{if } 2.98\dots\times 10^{17}
\le n\le 1.03\dots\times 10^{28}.
\end{array} \right.
\]
This partly explains why the asymptotic vanishing property of
$\mb{E}[M_{d,k}(n)]$ for large $n$ and fixed $d$ is ``invisible''
for moderate values of $n$.
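The table above is easy to reproduce numerically. The sketch below (function names are ours) computes $d_0 = \lfloor\sqrt{2\log n/W(2\log n)}\rfloor + 1$, evaluating the Lambert $W$ function by Newton iteration; exactly at the boundary values $n=i^{i^2}$ the floor is sensitive to rounding, but interior values match the table.

```python
import math

def lambert_w(x, tol=1e-13):
    """Principal branch of Lambert W for x > 0: solve w * e^w = x by Newton."""
    w = math.log(1.0 + x)  # crude but adequate starting point for x > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def d0(n):
    """d_0(n) = floor(sqrt(2 log n / W(2 log n))) + 1."""
    L = 2.0 * math.log(n)
    return math.floor(math.sqrt(L / lambert_w(L))) + 1
```

For instance, `d0(15)` gives $2$ and `d0(16)` gives $3$, in agreement with the breakpoint $a_2=2^4=16$.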
Note that we did not replace the Lambert-W function in
\eqref{d-large} by its asymptotic expansion \eqref{Wx} so as to make
the expression more transparent, the reason being that no matter how
many terms of the asymptotic expansion of $W$ we use, the resulting
expression is never $o(1)$. This is because all terms in the
expansion are of orders in powers of $\log \log n$ and $\log\log
\log n$, and they are all much smaller than $\log n$ in the
numerator of the first term on the right-hand side of
\eqref{d-large}.
Extending the same analysis to other values of $k$ becomes more
difficult and messy except for $k=1$ for which we have
\[
\mathbb{E}[M_{d,1}(n)]
= n \int_{[0,1]^d} (x_1\cdots x_d)^{n-1} \text{d}\v{x}
= n^{1-d}.
\]
Note that this always tends to zero no matter how large the value of
$d$ is.
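The identity $\mathbb{E}[M_{d,1}(n)]=n^{1-d}$ also has a direct probabilistic check: a point survives the $1$-dominant skyline exactly when it is the coordinatewise maximum of the whole sample, so the count is a Bernoulli$(n^{1-d})$ variable. A minimal Monte Carlo sketch (helper names ours):

```python
import random

def mc_E_Md1(n, d, trials=20000, seed=7):
    """Monte Carlo estimate of E[M_{d,1}(n)] on the hypercube [0,1]^d."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pts = [[rng.random() for _ in range(d)] for _ in range(n)]
        # a survivor is never beaten in any single coordinate by any other point
        total += sum(
            all(p[i] >= q[i] for q in pts if q is not p for i in range(d))
            for p in pts
        )
    return total / trials
```

For $n=3$, $d=2$ the estimate is close to $3^{-1}$, as predicted.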
On the other hand, for $1\le k\le d-1$, we can derive the more
precise estimate
\begin{align*}
\mathbb{E}[M_{d,k}(n)] &=O\left(n\int_{[0,1]^d} \exp\left(-n
\sum_{1\le j_1<\cdots<j_k\le d}x_{j_1}\cdots x_{j_k}\right)
\text{d} \v{x}\right)\\
&=O\left( n^{1-\frac dk}\right).
\end{align*}
However, a more precise uniform asymptotic approximation (in $n, d$,
and $k$) is less obvious and describing the corresponding threshold
phenomena if any for other values of $k$ also remains unclear.
Intuitively, the asymptotic vanishing property is expected to hold
as long as $k \ge d/2$, whether $d$ is fixed or grows with $n$,
because the probability of a $k$-dominance for a random pair of
points is then larger than one half, making it less likely to find
$k$-dominant skyline points in such a case.
\section{Expected number of dominant cycles}
\label{sec:nc}
The asymptotic zero-infinity property can be viewed from another
different angle by examining the \emph{number of dominant cycles}.
\noindent \textbf{Definition.} We say that $m$ points
$\{\v{p}_1,\dots,\v{p}_m\}$ form a $k$-dominant cycle (of length
$m$) if $\v{p}_i$ $k$-dominates $\v{p}_{i+1}$ for $i=1, \dots,m-1$
and $\v{p}_m$ $k$-dominates $\v{p}_1$.
Roughly, the number of $k$-dominant cycles is inversely related
to the number of $k$-dominant skyline points. Note that by
transitivity there is no cycle when $k=d$. Thus the number of cycles
seems a better measure for clarifying the structure of $k$-dominant
skylines. However, the general configuration of the cycle structure
is very complicated. We content ourselves in this section with the
consideration of cycles of length $d$ when $k=d-1$.
\begin{lmm} Let $C_{n,d}$ denote the number of $(d-1)$-dominant
cycles of length $d$ in a random sample of $n$ points uniformly and
independently chosen from $[0,1]^d$. Then the expected value of
$C_{n,d}$ satisfies
\begin{align} \label{ECnd}
\mathbb{E}[C_{n,d}] = \binom{n}{d} \frac{d!^{2-d}}{d}.
\end{align}
\end{lmm}
\noindent \emph{Proof.}\
Since the total number of cycles of length $d$ is given by
$\binom{n}d\frac{d!}{d}$, we see that
\[
\mathbb{E}[C_{n,d}] = \binom{n}d\frac{d!}{d}
\mathbb{P}\left(\{\v{p}_1,\dots,\v{p}_d\} \text{ form
a $(d-1)$-dominant cycle of length $d$} \right).
\]
Assume that $\{\v{p}_1,\dots,\v{p}_d\}$ form a $(d-1)$-dominant
cycle of length $d$. Let
\[
\v{p}_i = (p_{i,1},\dots,p_{i,d}) \qquad(i=1,\dots,d).
\]
Then for each coordinate $j$, there exists an $\ell$ such that
\[
p_{1,j}>p_{2,j}>\cdots>p_{\ell, j}, \quad
p_{\ell, j}<p_{\ell+1,j}, \quad
p_{\ell+1,j}>\cdots>p_{d,j}>p_{1,j},
\]
and the $\ell$'s are all distinct ($d!$ cases). Thus the probability
of the event that $\{\v{p}_1,\dots,\v{p}_d\}$ form a
$(d-1)$-dominant cycle is given by
\[
\frac{d!}{d!^d},
\]
from which (\ref{ECnd}) follows. {\quad\rule{3mm}{3mm}\,}
In particular, we see that
\[
\mathbb{E}[C_{n,2}] = \frac{n(n-1)}{4},
\]
which means that half of the pairs are cycles, rendering the
$1$-dominant skylines less likely to occur. The first few other
$\mathbb{E}[C_{n,d}]$ are given by
\begin{align*}
\{\mathbb{E}[C_{n,d}] \}_{d\ge3}
&= \left\{\tfrac{n(n-1)(n-2)}{108},
\tfrac{n(n-1)(n-2)(n-3)}{55296},
\tfrac{n(n-1)(n-2)(n-3)(n-4)}{1036800000},\right.\\
&\qquad \left.
\tfrac{n(n-1)(n-2)(n-3)(n-4)(n-5)}{1160950579200000},\dots
\right\}.
\end{align*}
We see that the denominator grows very fast and we expect another
type of threshold phenomenon.
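Formula \eqref{ECnd} is easy to evaluate exactly; the short sketch below (function name ours) uses rational arithmetic and reproduces the expressions listed above.

```python
from fractions import Fraction
from math import comb, factorial

def expected_cycles(n, d):
    """E[C_{n,d}] = binom(n,d) * d!^(2-d) / d, as an exact rational."""
    return Fraction(comb(n, d) * factorial(d)**2, factorial(d)**d * d)
```

For example, `expected_cycles(n, 2)` equals $n(n-1)/4$ and `expected_cycles(n, 3)` equals $n(n-1)(n-2)/108$.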
Let
\[
d_1 := \tr{\frac{\log n}{W(e^{-1}\log n)}+\tfrac12},
\]
and $\tau_n$ denote the fractional part of $\frac{\log
n}{W(e^{-1}\log n)}+\tfrac12$. Also let
\begin{align*}
\begin{split}
\upsilon(t) &:= \frac{1+\frac12\log 2\pi}{W+1}
+ \frac W{(\log n)(W+1)}\Biggl(t
\\&\qquad -
\frac{12W^3+(35-12\log2\pi)W^2+(34-24\log2\pi)W+
23+(\log2\pi)^2}{24(W+1)^3}\Biggr),
\end{split}
\end{align*}
where $t\in\mathbb{R}$ and $W$ represents $W(e^{-1}\log n)$. Note
that $W$ is of order $\log\log n$.
\begin{thm} The expected number of $(d-1)$-dominant cycles of
length $d$ satisfies
\begin{align*}
\mathbb{E}[C_{n,d}]
\to \left\{\begin{array}{ll}
\infty, & \text{if }2\le d<d_1;\\
0, & \text{if } d>d_1,
\end{array}
\right.
\end{align*}
as $n\to\infty$.
When $d=d_1$, we can write $\tau_n=\upsilon(t)$; then
\begin{align} \label{dd1}
\mathbb{E}[C_{n,d}]
\left\{\begin{array}{ll}
\to 0, & \text{if }t\to-\infty;\\
\sim e^t, & \text{if }t=O(1);\\
\to\infty, &\text{if }t\to\infty,
\end{array} \right.
\end{align}
as $n\to\infty$.
\end{thm}
\noindent \emph{Proof.}\ Write
\[
d = d_1 - m = \frac{\log n}{W(e^{-1}\log n)} + \tfrac12
- v,
\]
where $v=m+\tau_n$. Then a straightforward calculation using
\eqref{ECnd} and Stirling's formula gives
\begin{align*}
\frac1d\log \mathbb{E}[C_{n,d}]
&= v\left(W(e^{-1}\log n)
+1\right)
- 1-\tfrac12\,\log 2\pi \\
&\qquad\qquad +
O\left(\frac{W(e^{-1}\log n)^2+(v^2+1)W(e^{-1}\log n)}
{\log n}\right).
\end{align*}
Thus $\mathbb{E}[C_{n,d}]\to\infty$ if $m\ge1$ and
$\mathbb{E}[C_{n,d}]\to0$ if $m\le -1$. When $m=0$
($v=\tau_n$), this asymptotic expansion is insufficient and we need
more terms. If $v=\tau_n=\upsilon(t)$, then the same calculation as
above gives
\[
\mathbb{E}[C_{n,d}] = e^t\left(1+O\left(
\frac{W^2+1}{\log n}\right)\right).
\]
This implies \eqref{dd1}. {\quad\rule{3mm}{3mm}\,}
Let
\[
a_i := \left\lfloor\left(\tfrac{i-\frac12}{e}\right)^{i-\frac12}
\right\rfloor+1\qquad(i\ge1).
\]
Then
\[
d_1=d_1(n) = i \text{ if } a_i\le n<a_{i+1}.
\]
The first few values of $a_i$ are given as follows.
\[
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline
$i$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ &$10$
& $11$ & $12$\\ \hline
$a_i$ & $3$ & $10$ & $49$ & $290$ & $2022$ & $16165$
&$145405$ & $1453435$ & $15982276$\\ \hline
\end{tabular}
\]
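The thresholds $a_i$ are straightforward to compute; the following sketch (helper name ours) evaluates the defining formula in floating point and reproduces the tabulated values for small $i$.

```python
import math

def a(i):
    """a_i = floor(((i - 1/2)/e)^(i - 1/2)) + 1."""
    t = i - 0.5
    return math.floor((t / math.e)**t) + 1
```

For instance, `a(4)` through `a(8)` give $3, 10, 49, 290, 2022$, matching the table.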
\section{A uniform lower bound for $\mathbb{E}[M_{d,k}(n)]$}
\label{sec:ub}
The convergence rate in \eqref{a1} is very slow if $d$ is large and
$k$ is close to $d$. It is interesting to characterize the
transition of $M_{d,k}(n)$ from zero to $n$ as $k$ increases under
the condition that $d$ and $n$ are fixed. However, the exact
characterization is not easy, so we derive instead a lower bound
that provides a good approximation to the real transition.
\begin{thm}[Uniform lower bound in $d, k$ and $n$] Define
\[
\beta_{d,k}:=\sum_{0\le j\le d-k}\binom{d}{j}2^{-d}.
\]
Then, for $n\ge1$ and $1\le k\le d-1$,
\begin{equation}
\mathbb{E}[M_{d,k}(n)]
\ge n I_n(\beta_{d,k}), \label{t3}
\end{equation}
where
\[
I_n(x) := x\int_{x}^{1}t^{-2}
\left(1-t\right)^{n-1}\mathrm{d}t.
\]
\end{thm}
\noindent \emph{Proof.}\
Select two random points $\v{x},\v{y}$ uniformly and
independently in $ [0,1]^{d}$. Obviously,
\[
\mathbb{P}\left( \v{x}\text{ $k$-dominates }
\v{y}\right) =\beta_{d,k}.
\]
On the other hand, by definition, $\mathbb{P} \left( \v{x}\text{
$k$-dominates }\v{y} \right) = \int_{[0,1]^{d}}\left| B_{k}(\v{x})
\right| \mathrm{d}\v{x}$. Thus
\[
\int_{\lbrack 0,1]^{d}}\left| B_{k}(\v{x})\right|
\mathrm{d} \v{x}=\beta_{d,k}.
\]
Let
\[
F(t)=\left| \left\{ \v{x}\in [0,1]^{d}
:\left| B_{k}(\v{x})\right|
\le t\right\} \right| ,
\]
be the distribution function of $|B_{k}(\v{x})|$. By Markov's
inequality,
\[
t\left( 1-F(t)\right)
\le \int_{\lbrack 0,1]^{d}}\left|
B_{k}(\v{x})\right|\mathrm{d} \v{x}
\qquad(t\in(0,1)) .
\]
Thus
\[
F(t)
\ge 1-\frac{\int_{[0,1]^{d}}\left|
B_{k}(\v{x})\right|\mathrm{d}\v{x}}{t}
=1-\frac{\beta_{d,k}}{t}.
\]
Define
\[
G(t):=\max\left\{1-\frac{\beta_{d,k}}{t}, 0
\right\}.
\]
Then $F(t)\ge G(t)$. Now
\begin{equation} \label{tm1}
\mathbb{E}[M_{d,k}(n)]
=n\int_{[0,1]^{d}}\left( 1-\left|B_{k}(\v{x})\right|
\right)^{n-1}\mathrm{d}\v{x}
=n\int_{0}^{1}\left( 1-t\right)^{n-1}\frac{\mathrm{d}F(t)}
{\mathrm{d}t} .
\end{equation}
Since the integral on the right-hand side of \eqref{tm1} becomes
smaller if the distribution function $F(t)$ is replaced by $G(t)$,
we have
\begin{align*}
\mathbb{E}[M_{d,k}(n)]
\ge n\int_0^1 (1-t)^{n-1}
\frac{\mathrm{d}G(t)}{\mathrm{d}t} ,
\end{align*}
from which \eqref{t3} follows. {\quad\rule{3mm}{3mm}\,}
A useful, convergent asymptotic expansion for $I_n(x)$, derived by
successive integration by parts, is as follows.
\begin{align*}
I_n(x) &= \sum_{j\ge0}
\frac{(-1)^j (j+1)!}{n(n+1)\cdots(n+j)}
\,x^{-j-1}(1-x)^{n+j}\\
&= \frac{(1-x)^n}{nx} - \frac{2(1-x)^{n+1}}{n(n+1)x^2}
+ \cdots,
\end{align*}
as long as $x\gg1/n$. In particular, $I_n(x)\to0$ in this range of
$x$. If $xn\to c>0$, then
\[
I_n(x) \to c\int_c^\infty u^{-2} e^{-u} \text{d}u,
\]
the latter tending to $1$ as $c$ approaches zero.
We see that the transition of $I_n(x)$ from zero to one occurs at
$x\asymp n^{-1}$ (that is, $x$ is bounded above and below by
constant multiples of $n^{-1}$). In terms of $d$ and $k$, this
arises when $d\to\infty$
and $\beta_{d,k}\asymp n^{-1}$. Now, by known estimate for binomial
distribution (see \cite{Hwang97} and the references cited there)
\[
\beta_{d,k} \asymp (2\alpha-1)^{-1}
d^{-1/2} 2^{-d}\alpha^{-\alpha d}
(1-\alpha)^{-(1-\alpha)d},
\]
when $k\ge d/2+K\sqrt{d}$, where $\alpha := k/d$ and $K>1$ is a
constant. We deduce from this that the transition of
$I_n(\beta_{d,k})$ from zero to one occurs at $c\log n$ for some
$c\in(0,1)$. The exact location of this $c$ matters less since $I_n$
is simply a lower bound; see Figure~\ref{fg1}.
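The lower bound \eqref{t3} is easy to evaluate numerically. The sketch below (helper names ours) computes $\beta_{d,k}$ exactly and $I_n(x)$ by a midpoint rule, which suffices for moderate $n$; the case $n=1$, where $I_1(x)=1-x$ exactly, is a convenient check.

```python
import math

def beta(d, k):
    """beta_{d,k} = sum_{j<=d-k} C(d,j) / 2^d: the probability that one
    uniform point in [0,1]^d k-dominates another."""
    return sum(math.comb(d, j) for j in range(d - k + 1)) / 2**d

def I(n, x, steps=200_000):
    """I_n(x) = x * int_x^1 t^(-2) (1-t)^(n-1) dt, by the midpoint rule."""
    h = (1.0 - x) / steps
    s = sum((1.0 - (x + (i + 0.5) * h))**(n - 1) / (x + (i + 0.5) * h)**2
            for i in range(steps))
    return x * h * s

def lower_bound(n, d, k):
    """The bound E[M_{d,k}(n)] >= n * I_n(beta_{d,k}) from the theorem above."""
    return n * I(n, beta(d, k))
```

Evaluating `lower_bound` over a grid of $k$ for fixed $n$ and $d$ makes the zero-to-$n$ transition visible directly.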
\begin{figure}
\caption{\emph{Simulation result of $\mathbb{E}[M_{d,k}(n)]$
and the lower bound $nI_n(\beta_{d,k})$.}}
\label{fg1}
\end{figure}
\section{Conclusions}
\label{sec:fin}
While the notion of $k$-dominant skyline appeared as a natural means
of solving the abundance of skyline, its use in diverse contexts has
to be carefully considered, in view of the results we derived in
this paper. We summarize our findings and highlight suggestions for
possible practical uses.
The asymptotic results we derived in this paper are either of a
vanishing type or of a blow-up nature; briefly, they are either zero
or infinity as the sample size grows unbounded, making the
selection of representative points more subtle. The expected number
of $k$-dominant skyline points approaches zero under either of the
following situations.
\begin{itemize}
\item Hypercube: both $d$ and $k<d$ bounded;
\item Simplex: both $d$ and $k<d$ bounded;
\item Hypercube: extending the $k$-dominant skyline to
the dominance by a cluster of $j$ points with both $d$ and $k$
bounded.
\end{itemize}
In all cases, zero appears as the limit when $n\to\infty$. However,
for practical purposes, $n$ is always finite, and thus the above
limit results become less useful from a computational point of view.
One needs asymptotic estimates that are uniform in $d$, $k$ and $n$.
But such results are often very difficult to obtain. The uniform asymptotic
approximation \eqref{dm1-dom} we obtained leads to several
interesting consequences, including particularly the threshold
phenomenon \eqref{EMkn-large}.
We conclude this paper by showing how the asymptotic results we
derived above can be applied in more practical situations. Assume
that our sample is of size, say $n=10^4$ or $n=10^5$, and the
dimensionality $d$ is in the range $\{4,5,6,7,8\}$ (smaller $d$ may
result in more biased inferences while larger $d$ will yield too
many skyline points). We also assume that our data set is
sufficiently random and can be modeled by the hypercube model. If
our aim is to choose a reasonably small number of candidates for
further decision making, then how can our asymptotic estimates help?
First, for this range of $n$ and $d$, the expected numbers of
skyline points can be easily computed by the recurrence relation
(see \cite{BDHT05})
\[
\mu_{n,d} = \frac1{d-1}\sum_{1\le j\le d-1}
H_n^{(d-j)}\mu_{n,j} \qquad(d\ge2),
\]
where $\mu_{n,d}:= \mb{E}[M_{d,d}(n)]$, $H_n^{(a)} := \sum_{1\le
j\le n}j^{-a}$ are the harmonic numbers and $\mu_{n,1} := 1$, and
are given approximately by
\[
\{164.7, 426.3, 902.7, 1633.1,2603\} \qquad(n=10^4;d=4,5,6,7,8),
\]
and
\[
\{304.9, 955.8, 2432.1, 5239.4,9845\} \qquad(n=10^5;d=4,5,6,7,8),
\]
which are often too many for further consideration. So we turn to
$(d-1)$-dominant skyline and estimate their numbers by our
asymptotic approximations. However, both Theorems~\ref{thm:ud} and
\ref{thm:ud2} have poor error terms, and a better numerical
approximation to $\mb{E}[M_{d,d-1}(n)]$ for most moderate values
of $n$ and $d$ is given by
\[
\phi_d(n)-g_d(n)= \sum_{0\le j\le d-2}\binom{d}{j}
(-1)^j (d-1-j)^{j-1} \Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
n^{\frac1{d-1}-\frac{1}{d-1-j}}.
\]
We thus obtain, for example, the following numerical values
\begin{align*}
\mb{E}[M_{d,d-1}(10^4)]\approx
\begin{tabular}{|c||c|c|c|c|c|} \hline
$d$ & $4$ & $5$ & $6$ & $7$ & $8$ \\ \hline\hline
$\phi_d(n)-g_d(n)$ & $0.61$ & $5.06$ & $24.85$ & $88.90$ & $243.96$
\\ \hline
Monte Carlo & $0.57$ & $4.82$ & $23.98$ & $83.89$ &
$226.65$ \\ \hline
\end{tabular}
\end{align*}
and
\begin{align*}
\mb{E}[M_{d,d-1}(10^5)]\approx
\begin{tabular}{|c||c|c|c|c|c|} \hline
$d$ & 4 & 5 & 6 & 7 & 8\\ \hline\hline
$\phi_d(n)-g_d(n)$ & 0.31& 3.69 & 24.94& 115.31 & 404.7\\ \hline
Monte Carlo & 0.29 & 3.61 & 24.38 & 111.79 & 386.08\\ \hline
\end{tabular}
\end{align*}
From these tables, one can choose a suitable $d$ according to the
need of practical uses. Here we also see the characteristic property
of the skylines, either very few or very many points.
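Both rows of these tables can be reproduced with a few lines of code. The sketch below (function names ours) implements the recurrence for $\mu_{n,d}$ and the sum $\phi_d(n)-g_d(n)$; note that in the latter we include a prefactor $n^{-1/(d-1)}$, our reading of the normalization coming from \eqref{dm1-dom2}, which should be treated as an assumption, although with it the values agree with the Monte Carlo row to about two decimal places.

```python
import math

def skyline_expectations(n, dmax):
    """mu[d] = E[M_{d,d}(n)] via mu_{n,d} = (1/(d-1)) sum_j H_n^{(d-j)} mu_{n,j}."""
    H = [None] + [sum(j**-a for j in range(1, n + 1)) for a in range(1, dmax)]
    mu = [None, 1.0]  # mu_{n,1} = 1
    for d in range(2, dmax + 1):
        mu.append(sum(H[d - j] * mu[j] for j in range(1, d)) / (d - 1))
    return mu

def approx_E_Mdd1(n, d):
    """n^{-1/(d-1)} * (phi_d(n) - g_d(n)): approximation to E[M_{d,d-1}(n)]
    (the prefactor is our assumption; see the lead-in above)."""
    return sum(
        math.comb(d, j) * (-1)**j * (d - 1 - j)**(j - 1)
        * math.gamma(1 / (d - 1 - j))**(d - j)
        * n**(-1 / (d - 1 - j))
        for j in range(d - 1)
    )
```

For $n=10^4$ this gives $\mu_{n,4}\approx 164.7$ and $\mathbb{E}[M_{4,3}(10^4)]\approx 0.57$, consistent with the tables.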
Our Monte Carlo simulations are carried out by a three-phase
algorithm (extending our two-phase maxima-finding one in
\cite{CHT11}) for finding the $k$-dominant skylines. Briefly, the
first two phases are modified from the algorithms presented
in \cite{CHT11} and the last phase removes all cycles.
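We do not reproduce the three-phase algorithm here, but for small samples a naive $O(n^2 d)$ brute force suffices to cross-check the tables. The sketch below (names ours, not the algorithm of \cite{CHT11}) tests $k$-dominance via the almost-surely equivalent criterion of being strictly larger in at least $k$ coordinates, which is valid for continuous samples.

```python
import random

def k_dominates(p, q, k):
    """p k-dominates q: p is strictly larger in at least k coordinates
    (ties occur with probability zero for continuous samples)."""
    return sum(pi > qi for pi, qi in zip(p, q)) >= k

def k_skyline_count(points, k):
    """Number of points not k-dominated by any other point: O(n^2 d)."""
    return sum(
        not any(k_dominates(q, p, k) for q in points if q is not p)
        for p in points
    )

def mc_estimate(n, d, k, trials=50, seed=1):
    """Monte Carlo estimate of E[M_{d,k}(n)] on [0,1]^d."""
    rng = random.Random(seed)
    return sum(
        k_skyline_count(
            [tuple(rng.random() for _ in range(d)) for _ in range(n)], k)
        for _ in range(trials)
    ) / trials
```

With $k=d$ this reduces to the ordinary skyline, so `mc_estimate(n, 2, 2)` should be close to the harmonic number $H_n$.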
\section*{Appendix A. Error analysis: $d\le \sqrt{\frac{2\log n}
{W(2\log n)+K}}$}
Recall that $N := n^{\frac1{d-1}}$ and consider the integral
\begin{align*}
f_d(n) = \left(\int_{\mathbb{R}_+^d}-\int_{[0,N]^d}\right)
e^{-y_1\cdots y_d \left(\frac1{y_1}+\cdots+\frac1{y_d}
\right)} \text{d}\v{y}
= \sum_{1\le j\le d}\binom{d}{j}(-1)^{j-1} \phi_{d,j}(n),
\end{align*}
where
\begin{align}\label{phi-djn}
\phi_{d,j}(n) := \int_{[0,N]^{d-j}
\times (N,\infty)^j} e^{-y_1\cdots y_d
\left(\frac1{y_1}+\cdots+\frac1{y_d}
\right)} \text{d}\v{y}.
\end{align}
So our $\phi_d(n)=\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d$
corresponds to $\phi_{d,0}(n)$; see \eqref{End-Rnd}.
\begin{prop} \label{prop-1}
Let $d\ge3$ satisfy $\frac{2\log n}{d^2}-W(2\log n)\to\infty$.
Then
\begin{align}\label{Err-Bd}
f_d(n) = O\left(\phi_d(n)
d N^{-\frac1{d-2}}\right),
\end{align}
uniformly in $d$.
\end{prop}
\noindent \emph{Proof.}\ We first prove that uniformly for $1\le j\le d$,
\begin{align} \label{Endj}
\phi_{d,j}(n) = O\left(\Gamma\left(\tfrac1{d-2}\right)^{d-1}
N^{-\frac{j}{d-2}}\right).
\end{align}
Consider first the range $1\le j\le d-2$. By extending the
integration ranges and then carrying out the changes of variables
$y_\ell \mapsto Nv_{d-\ell+1}$ for $d-j+1\le \ell \le d$, we obtain
the bounds
\begin{align*}
\phi_{d,j}(n)
&= N^j\int_{(1,\infty)^j}\!\!\int_{[0,N]^{d-j}}
e^{-N^j v_1\cdots v_j y_1\cdots y_{d-j}
\left(\frac1{y_1}+\cdots+\frac1{y_{d-j}}+
\frac1{Nv_1}+\cdots+\frac1{Nv_j}
\right)} \text{d}\v{y}\text{d}\v{v} \\
&\le N^j\int_{(1,\infty)^j}\!\!\int_{\mathbb{R}_+^{d-j}}
e^{-N^j v_1\cdots v_jy_1\cdots y_{d-j}
\left(\frac1{y_1}+\cdots+\frac1{y_{d-j}}
\right)} \text{d}\v{y}\text{d}\v{v}.
\end{align*}
By the change of variables $y_j \mapsto \lambda^{-\frac1{d-1}} x_j$
for $1\le j\le d$, we have, for $\lambda>0$,
\[
\int_{\mathbb{R}_+^{d}}
e^{-\lambda y_1\cdots y_{d}
\left(\frac1{y_1}+\cdots+\frac1{y_{d}}\right)}\text{d}\v{y}
= \frac{\Gamma\left(\tfrac1{d-1}\right)^{d}}
{d-1}\,\lambda^{-\frac{d}{d-1}}\qquad(d\ge2).
\]
It follows that
\begin{align*}
\phi_{d,j}(n) &\le \frac{\Gamma\left(\tfrac1{d-1-j}\right)^{d-j}}
{d-1-j} \, N^{-\frac{j}{d-1-j}} \int_{(1,\infty)^j}
(v_1\cdots v_j)^{-1-\frac1{d-1-j}}\text{d}\v{v}\\
&= (d-1-j)^{j-1} \Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
N^{-\frac{j}{d-1-j}}\\
&= O\left(\Gamma\left(\tfrac1{d-2}\right)^{d-1}
N^{-\frac{j}{d-2}}\right),
\end{align*}
uniformly for $1\le j\le d-2$. The remaining two cases $j=d-1, d$
are much smaller; we start with $\phi_{d,d}(n)$. By the same
analysis used above, we have
\begin{align*}
\phi_{d,d}(n) &= \int_{(N,\infty)^d} e^{-
x_1\cdots x_d \left(\frac1{x_1}+\cdots+\frac1{x_{d}}\right)}
\text{d}\v{x}\\
&\le \int_{(N,\infty)^d} e^{-
x_1\cdots x_d \left(\frac1{x_1}+\cdots+\frac1{x_{d-1}}\right)}
\text{d}\v{x}\\
&\le \int_{(N,\infty)^{d-1}} \frac{e^{-N
x_1\cdots x_{d-1} \left(\frac1{x_1}+\cdots+\frac1{x_{d-2}}
\right)}}{x_1\cdots x_{d-1}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-2}}\right)}
\text{d}\v{x}.
\end{align*}
By the inequality
\begin{align} \label{ta-ineq}
\int_N^\infty t^{-\alpha} e^{-\lambda t} \text{d}t
\le \lambda^{-1}N^{-\alpha} e^{-\lambda N}
\qquad(\alpha\ge0,\lambda>0),
\end{align}
we obtain
\begin{align*}
\phi_{d,d}(n) &\le N^{-2} \int_{(N,\infty)^{d-2}}
\frac{e^{-N^2
x_1\cdots x_{d-2} \left(\frac1{x_1}+\cdots+\frac1{x_{d-2}}
\right)}}{x_1\cdots x_{d-2}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-2}}\right)}
\text{d}\v{x}\\
&\le \cdots\\
&\le N^{-2-4-\cdots-2(d-3)}\int_{(N,\infty)^2}
\frac{e^{-N^{d-2}(x_1+x_2)}}{(x_1+x_2)^{d-2}}
\text{d}\v{x}\\
&= N^{-(d-2)(d-3)} \int_{2N}^\infty \frac{e^{-N^{d-2}w}}
{w^{d-2}}(w-2N) \text{d}w\\
&\le 2^{3-d} N^{-d^2+3d-1} e^{-2N^{d-1}}.
\end{align*}
Thus
\begin{align}\label{phi-ddn}
\phi_{d,d}(n) = O\left(2^{-d} n^{-d+2+\frac1{d-1}} e^{-2n}\right).
\end{align}
Finally,
\begin{align*}
\phi_{d,d-1}(n) &\le \int_{(N,\infty)^{d-1}} \frac{e^{-
x_1\cdots x_{d-1}}}
{x_1\cdots x_{d-1} \left(\frac1{x_1}+\cdots+\frac1{x_{d-1}}\right)}
\,\text{d}\v{x}\\
&\le \frac{1}{d-1}\int_{(N,\infty)^{d-1}}
\frac{e^{-x_1\cdots x_{d-1} }}
{(x_1\cdots x_{d-1})^{1+\frac1{d-1}}}
\,\text{d}\v{x},
\end{align*}
by the inequality of arithmetic and geometric means
\[
\frac1{d-1}\left(\frac1{x_1}+\cdots+\frac1{x_{d-1}}\right)
\ge (x_1\cdots x_{d-1})^{\frac1{d-1}}.
\]
Applying successively the inequality \eqref{ta-ineq}, we obtain
\begin{align*}
\phi_{d,d-1}(n) &\le \frac{N^{-1-\frac1{d-1}}}{d-1}
\int_{(N,\infty)^{d-2}}
\frac{e^{-Nx_1\cdots x_{d-2}}}
{(x_1\cdots x_{d-2})^{2+\frac1{d-1}}}
\,\text{d}\v{x}\\
&\le \cdots\\
&\le \frac{N^{-(d^2-2d+2)}}{d-1}
e^{-N^{d-1}}.
\end{align*}
It follows that
\begin{align}\label{phi-dd1n}
\phi_{d,d-1}(n) = O\left(d^{-1}
n^{-d+1-\frac1{d-1}} e^{-n}\right).
\end{align}
We see that both $\phi_{d,d-1}(n)$ and $\phi_{d,d}(n)$ are much
smaller than the right-hand side of \eqref{Endj}.
The remaining case is when $d=2$. Obviously,
\[
\phi_{2,1}(n) < \int_0^\infty \!\!\!\int_N^\infty e^{-y_1-y_2}
\text{d}y_2\text{d}y_1 = e^{-N}.
\]
The upper bound \eqref{Err-Bd} then follows from summing
$\phi_{d,j}(n)$ for $j$ from $1$ to $d$ using \eqref{Endj}
\begin{align*}
\sum_{1\le j\le d}\binom{d}j (-1)^{j-1} \phi_{d,j}(n)
&= O\left(\Gamma\left(\tfrac1{d-2}\right)^{d-1}
\sum_{j\ge1} \frac{d^j}{j!} N^{-\frac{j}{d-2}}\right)\\
&= O\left(\Gamma\left(\tfrac1{d-2}\right)^{d-1}
d N^{-\frac1{d-2}}\right),
\end{align*}
since $dN^{-\frac1{d-2}}\to0$ for $d$ in the range \eqref{d-rg1}.
It remains to estimate $R_d(n)$, which can be proved to be bounded
above by
\begin{align*}
R_d(n) &= O\left(\frac{d}{N}\int_{\mathbb{R}_+^d}
y_1\cdots y_d e^{-y_1\cdots y_d\left(\frac1{y_1}+\cdots
+\frac1{y_d}\right)} \text{d} \v{y}\right)\\
&= O\left(\frac{1}{N}\Gamma\left(\frac{2}{d-1}\right)^d \right);
\end{align*}
this proves \eqref{Rdn-ratio}. {\quad\rule{3mm}{3mm}\,}
\section*{Appendix B. Proof of Theorem~\ref{thm:ud2}}
We prove Theorem~\ref{thm:ud2} in this Appendix. Our method of proof
consists in a finer evaluation of the integrals $\phi_{d,j}(n)$,
leading to a more precise asymptotic approximation to $f_d(n)$.
\begin{prop} Uniformly for $d$ in the range \eqref{d-rg2}
\begin{align}\label{fdn-asymp}
f_d(n) \sim \frac{1-e^{-\rho}}{2-e^{-\rho}}\cdot
\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d,
\end{align}
where $\rho := \frac{d}{en^{1/d^2}}$.
\end{prop}
\noindent \emph{Proof.}\
Consider again \eqref{phi-djn} and start with the changes of
variables $y_\ell \mapsto Nv_{d-\ell+1}$ for $d-j+1\le \ell \le d$,
\begin{align*}
\phi_{d,j}(n)
= N^j\int_{(1,\infty)^j}\!\!\int_{[0,N]^{d-j}}
e^{-\lambda_{N,j}(\v{v})y_1\cdots y_{d-j}
\left(\frac1{y_1}+\cdots+\frac1{y_{d-j}}+
\frac1{Nv_1}+\cdots+\frac1{Nv_j}
\right)} \text{d}\v{y}\text{d}\v{v},
\end{align*}
where $\lambda_{N,j}(\v{v}) := N^j v_1\cdots v_j$. Then we carry out
the change of variables
\[
y_\ell \mapsto \lambda_{N,j}(\v{v})^{-\frac1{d-1-j}} x_\ell
\qquad(1\le \ell \le d-j),
\]
and obtain
\[
\phi_{d,j}(n) = \psi_{d,j}(n)+\omega_{d,j}(n),
\]
where
\begin{align*}
\psi_{d,j}(n) &=
N^{-\frac j{d-1-j}}\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}}
\int_{[0,N_0]^{d-j}}
e^{-x_1\cdots x_{d-j}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-j}}
\right)} \text{d}\v{x}\text{d}\v{v},
\end{align*}
with
\[
N_0 := N^{\frac{d-1}{d-1-j}}(v_1\cdots v_j)^{\frac1{d-1-j}}
= (n v_1\cdots v_j)^{\frac1{d-1-j}},
\]
and the error introduced is bounded above by
\begin{align*}
\omega_{d,j}(n) &:= N^{-\frac j{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} \\
&\qquad \times\int_{[0,N_0]^{d-j}}
e^{-x_1\cdots x_{d-j}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-j}}
\right)} \left(e^{-\frac{x_1\cdots x_{d-j}}{N_0}
\left(\frac1{v_1}+\cdots+\frac1{v_j}\right)}-1\right)
\text{d}\v{x}\text{d}\v{v} \\
&= O\left(N^{-1-\frac {2j}{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac2{d-1-j}} \left(\tfrac1{v_1}+\cdots
+\tfrac1{v_j} \right)\right.\\
&\qquad\qquad\qquad\left. \times\int_{\mathbb{R}_+^{d-j}}
e^{-x_1\cdots x_{d-j}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-j}}
\right)}x_1\cdots x_{d-j} \text{d}\v{x}\text{d}\v{v}
\right)\\
&= O\left(j 2^{-j}
(d-1-j)^{j-2}\Gamma\left(\tfrac{2}{d-1-j}\right)^{d-j}
N^{-1-\frac{2j}{d-1-j}}\right).
\end{align*}
Thus the total contribution of $\omega_{d,j}(n)$ to $f_d(n)$ is
bounded above by
\begin{align} \label{hdn}
\begin{split}
h_d(n) &:=
\sum_{1\le j\le d-2} \binom{d}{j}(-1)^{j-1}
\omega_{d,j}(n) \\
&\le \sum_{1\le j\le d-2}\binom{d}{j} j 2^{-j}
(d-1-j)^{j-2}\Gamma\left(\tfrac{2}{d-1-j}\right)^{d-j}
n^{\frac1{d-1}-\frac{2}{d-1-j}},
\end{split}
\end{align}
which will be seen to be of a smaller order.
\paragraph{The recurrence relation} Now
\begin{align*}
\psi_{d,j}(n) &=
N^{-\frac j{d-1-j}}\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} \int_{\mathbb{R}_+^{d-j}}
e^{-x_1\cdots x_{d-j}
\left(\frac1{x_1}+\cdots+\frac1{x_{d-j}}
\right)} \text{d}\v{x}\text{d}\v{v} \\
&\qquad - N^{-\frac j{d-1-j}}\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} f_{d-j}(nv_1\cdots v_j)\text{d}\v{v}\\
&= (d-1-j)^{j-1} \Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
N^{-\frac{j}{d-1-j}} \\
&\qquad - N^{-\frac j{d-1-j}}\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} f_{d-j}(nv_1\cdots v_j)\text{d}\v{v}.
\end{align*}
So we get the following recurrence relation.
\begin{lmm} The integrals $f_d(n)$ satisfy
\begin{align}
f_d(n) &= g_d(n) + h_d(n) + \eta_d(n)\nonumber \\
& \quad + \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}-\frac{1}{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}}
f_{d-j}(nv_1\cdots v_j)\mathrm{d}\v{v},
\label{fdn-rr}
\end{align}
for $d\ge3$, with the initial condition
\[
f_2(n) = 2e^{-n}-e^{-2n},
\]
where $h_d(n)$ is given in \eqref{hdn},
\begin{align*}
g_d(n) := \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^{j-1} (d-1-j)^{j-1}
\Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
n^{\frac1{d-1}-\frac{1}{d-1-j}} ,
\end{align*}
and $\eta_d(n) := \phi_{d,d-1}(n) + \phi_{d,d}(n)$.
\end{lmm}
Note that, by \eqref{phi-ddn} and \eqref{phi-dd1n},
\begin{align*}
\eta_d(n)&= O\left(d^{-1}n^{-d+1-\frac1{d-1}}e^{-n}
+2^{-d}n^{-d+2+\frac1{d-1}}e^{-2n} \right)\\
&= O\left(n^{-d+2} e^{-n}\right).
\end{align*}
Also, by the change of variables $t\mapsto v_1\cdots v_j$, we have
\begin{align*}
f_d(n) &= g_d(n)+h_d(n)+\eta_d(n) \\
&\qquad+ \sum_{1\le j\le d-2}\binom{d}{j}
\frac{(-1)^j n^{\frac1{d-1}-\frac{1}{d-1-j}}}{(j-1)!}
\int_1^\infty t^{-1-\frac1{d-1-j}}(\log t)^{j-1}
f_{d-j}(nt)\text{d} t,
\end{align*}
which is easier to use with symbolic computation software.
We then obtain, for example,
\begin{align*}
f_3(n) &= 3n^{-\frac12}+ O\left(n^{-\frac32}\right),\\
f_4(n) &= 4\pi^{\frac32}n^{-\frac16}+O\left(n^{-\frac23}\right),\\
f_5(n) &= \frac{80\pi^4}{9\Gamma\left(\frac23\right)^4}
\,n^{-\frac1{12}}-60\pi^{\frac32}n^{-\frac14}
+O\left(n^{-\frac5{12}}\right).
\end{align*}
But the expressions soon become too messy.
\paragraph{Asymptotic estimate for $g_d(n)$} We derive first a
uniform asymptotic approximation to $g_d(n)$, which will be needed
later. We focus on the case when $d$ tends to infinity with $n$.
\begin{lmm} If $d$ satisfies \eqref{d-rg2}, then
\begin{align}
g_d(n) &= \frac1{d-1}
\Gamma\left(\frac1{d-1}\right)^d
\left\{1-e^{-\rho}+\rho e^{-\rho}\left(\frac{2\rho -1}{2d} +
\frac{\rho-3}{d^3}\,\log n \right) \right.\nonumber \\
&\hspace*{4cm}\left. +
O\left(\frac{\rho e^{-\rho}(\rho^3+1)}{d^2} \left(
1+\frac{\log^2 n}{d^4}\right)\right)\right\},
\label{gdn-asymp}
\end{align}
uniformly in $d$.
\end{lmm}
\noindent \emph{Proof.}\ First, we have
\begin{align*}
&\frac{\binom{d}{j}(-1)^{j-1} (d-1-j)^{j-1}
\Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
n^{-\frac{j}{(d-1)(d-1-j)}}}
{\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d}\\
&\quad= \frac{d^j}{j!}(-1)^{j-1} n^{-\frac j{d^2}}
\exp\left(-j-\frac{2j^2-j}{2d}- \frac{j(j+2)}{d^3}\log n
+O\left(\frac{j^3}{d^2}+
\frac{j^3}{d^4}\log n\right)\right),
\end{align*}
uniformly for $j=o(d^{\frac23})$. Summing over all $j$ gives
\eqref{gdn-asymp}. Here the errors omitted are estimated by the
inequalities
\begin{align*}
\left\{\begin{array}{rl}
\binom{d}{j} \!\!\!\!&= O\left(\frac{d^j}{j!}\,
e^{-\frac{j^2}{2d}}\right),\\
\Gamma\left(\tfrac1{x}\right)\!\!\!\! &\le x, \qquad(x\ge1)\\
(d-1-j)^{d-1}\!\!\!\!&\le d^{d-1} e^{-j-\frac{j^2}{2d}},
\end{array}\right.
\end{align*}
for $1\le j\le d-2$, and we see that the contribution of terms in
$g_d(n)$ with indices larger than, say $j_0 := \lfloor
d^{\frac35}\rfloor$ are bounded
above by
\begin{align*}
&\sum_{j\ge j_0} \binom{d}{j}(-1)^{j-1} (d-1-j)^{j-1}
\Gamma\left(\tfrac1{d-1-j}\right)^{d-j}
\, n^{-\frac{j}{(d-1)(d-1-j)}} \\
&\qquad = O\left(\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d
\sum_{j\ge j_0}\frac{\rho^j}{j!}\right)\\
&\qquad = O\left(\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d
\frac{\rho^{j_0}}{j_0!}\right).
\end{align*}
Thus for $d$ in the range \eqref{d-rg2}
\begin{align*}
j_0\log\rho - \log j_0! &= \tfrac25 d^{\frac35}\log d
-d^{-\frac75} \log n + d^{\frac35}+O(\log d) \\
&\le -\left(2^{-\frac 75}-\tfrac1{5}\, 2^{\frac35}\right)
(\log n)^{\frac3{10}}(\log\log n)^{\frac7{10}}(1+o(1))\\
&\le -\tfrac3{40} (\log n)^{\frac3{10}}
(\log\log n)^{\frac7{10}} (1+o(1)),
\end{align*}
so that
\[
\frac{\rho^{j_0}}{j_0!} = O\left(e^{-\tfrac3{40}
(\log n)^{\frac3{10}}
(\log\log n)^{\frac7{10}}(1+o(1))}\right),
\]
and the sum of these terms is asymptotically negligible. The errors
$\sum_{j\ge j_0}\frac{\rho^j}{j!}$ are estimated similarly.
{\quad\rule{3mm}{3mm}\,}
\paragraph{Iteration of the $\Phi$-operator} To derive a similar
estimate for $f_d(n)$, we define the operator
\[
\Phi[f_d](n) := \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}-\frac{1}{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} f_{d-j}(nv_1\cdots v_j)\text{d}\v{v}.
\]
By iterating the recurrence \eqref{fdn-rr}, we obtain
\begin{align*}
f_d = g_d +h_d+\eta_d+ \sum_{1\le j\le d-2}
\Phi^j[g_d+h_d+\eta_d],
\end{align*}
where $\Phi^j[f_d]= \Phi[\Phi^{j-1}[f_d]]$ denotes the $j$-th
iterate of the $\Phi$-operator.
Surprisingly, despite the complicated forms of the partial sums,
each $\Phi^m[g_d]$ can be explicitly evaluated and differs from
$g_d$ only by a single term.
\begin{lmm} For any $m\ge0$
\begin{align}\label{Phi-m}
\Phi^m[g_d](n) = \sum_{m< \ell\le d-2}\binom{d}{\ell}
(-1)^{\ell-1}(d-1-\ell)^{\ell-1}
\Gamma\left(\tfrac1{d-1-\ell}\right)^{d-\ell}
n^{\frac1{d-1}-\frac{1}{d-1-\ell}} \sigma_m(\ell) ,
\end{align}
where $\sigma_m(\ell)$ is always positive and defined by
\begin{align*}
\sigma_m(\ell) :=\sum_{\substack{j_1+\cdots+j_{m+1}=\ell\\
j_1,\dots,j_{m+1}\ge1}}\binom{\ell}{j_1,\cdots,j_{m+1}}.
\end{align*}
\end{lmm}
Note that
\begin{align*}
\sigma_m(\ell) &= \ell![z^\ell]\left(e^z-1\right)^{m+1}\\
&=\sum_{1\le r\le m+1}
\binom{m+1}{r}(-1)^{m+1-r}r^\ell.
\end{align*}
\noindent \emph{Proof.}\ By definition and by rearranging the terms
\[
g_d(n) = \sum_{1\le j\le d-2}\binom{d}{j+1}
(-1)^{d-j} j^{d-2-j} \Gamma\left(\tfrac1{j}\right)^{j+1}
n^{\frac1{d-1}-\frac{1}{j}}.
\]
Substituting this expression into the $\Phi$-operator, we see that
\begin{align*}
\Phi[g_d](n) &= \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}-\frac{1}{d-1-j}}
\int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{d-1-j}} g_{d-j}(nv_1\cdots v_j)\text{d}\v{v}\\
&= \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}}\\
&\qquad \times\sum_{1\le \ell\le d-j-2}\binom{d-j}{\ell+1}
(-1)^{d-j-\ell} \ell^{d-2-j-\ell}
\Gamma\left(\tfrac1{\ell}\right)^{\ell+1}
n^{-\frac{1}{\ell}} \int_{(1,\infty)^j}\left(v_1\cdots v_j
\right)^{-1-\frac1{\ell}}\text{d} \v{v} .
\end{align*}
Then
\begin{align*}
\Phi[g_d](n)
&= \sum_{1\le j\le d-2}\binom{d}{j}
(-1)^j n^{\frac1{d-1}}
\sum_{1\le \ell\le d-j-2}\binom{d-j}{\ell+1}
(-1)^{d-j-\ell} \ell^{d-2-\ell}
\Gamma\left(\tfrac1{\ell}\right)^{\ell+1}
n^{-\frac{1}{\ell}} \\
&= \sum_{1\le \ell\le d-2}\binom{d}{\ell+1}
(-1)^{d-\ell}\ell^{d-2-\ell}
\Gamma\left(\tfrac1{\ell}\right)^{\ell+1}
n^{\frac1{d-1}-\frac{1}{\ell}}
\sum_{1\le j\le d-2-\ell}\binom{d-1-\ell}{j}\\
&= \sum_{1\le \ell\le d-2}\binom{d}{\ell+1}
(-1)^{d-\ell}\ell^{d-2-\ell}
\Gamma\left(\tfrac1{\ell}\right)^{\ell+1}
n^{\frac1{d-1}-\frac{1}{\ell}}
\left(2^{d-1-\ell}-2\right) \\
&= \sum_{1\le \ell\le d-2}\binom{d}{\ell}
(-1)^{\ell-1}(d-1-\ell)^{\ell-1}
\Gamma\left(\tfrac1{d-1-\ell}\right)^{d-\ell}
n^{\frac1{d-1}-\frac{1}{d-1-\ell}}
\left(2^{\ell}-2\right) .
\end{align*}
Repeating the same analysis and applying induction on $m$, we obtain
\eqref{Phi-m}. {\quad\rule{3mm}{3mm}\,}
\begin{cor} If $d$ satisfies \eqref{d-rg2}, then
\[
\Phi^m[g_d](n) \sim (-1)^{m}
\tfrac1{d-1}\Gamma\left(\tfrac1{d-1}\right)^d
\left(1-e^{-\rho}\right)^{m+1}\qquad(m=0,1,\dots).
\]
\end{cor}
Summing over all $0\le m\le d-2$, we deduce \eqref{fdn-asymp};
it remains only to establish the error estimates.
\paragraph{Error analysis} The consideration of $\Phi^m[h_d]$ is
similar and we obtain
\begin{align*}
\Phi^m[h_d](n) &\le \sum_{m< \ell\le d-2}\binom{d}{\ell}
2^{-\ell} (d-1-\ell)^{\ell-2}
\Gamma\left(\tfrac2{d-1-\ell}\right)^{d-\ell}
n^{\frac1{d-1}-\frac{2}{d-1-\ell}} \sigma_m'(\ell)
\end{align*}
where
\begin{align*}
\sigma_m'(\ell) &:=\sum_{\substack{j_1+\cdots+j_{m+1}=\ell\\
j_1,\dots,j_{m+1}\ge1}}\binom{\ell}{j_1,\cdots,j_{m+1}}j_{m+1}\\
&= \ell![z^\ell]ze^z\left(e^z-1\right)^m\\
&= \ell \sum_{0\le r\le m}
\binom{m}{r}(-1)^{m-r}(r+1)^\ell \qquad(m\ge0).
\end{align*}
Thus, with
\[
\rho _0 := \frac{d}{e n^{2/d^2}}
\]
which is always $\le \log 2$ when $d$ satisfies
\eqref{d-rg2}, we then have
\begin{align*}
\frac{\Phi^m[h_d](n)}
{\tfrac{1}{d(d-1)2^d}\Gamma\left(
\tfrac1{d-1}\right)^d n^{-\frac1{d-1}}}
&= O\left(\sum_{0\le r\le m}\binom{m}{r}(-1)^{m-r}
\sum_{\ell\ge0} \frac{\rho_0^\ell}{(\ell-1)!}\,(r+1)^\ell \right)\\
&= O\left( \rho_0e^{\rho_0}
\sum_{0\le r\le m}\binom{m}{r}(-1)^{m-r}
(r+1)e^{r\rho_0}\right)\\
&= O\left( \rho_0 e^{\rho_0}\left((e^{\rho_0}-1)^{m-1}
\left((m+1)e^{\rho_0}-1\right) \right)\right).
\end{align*}
Now
\[
\sum_{0\le m\le d-2} \left((x-1)^{m-1}
\left((m+1)x-1\right)\right) = O(d^2)
\]
whenever $0\le x\le 2$. It follows that
\[
\sum_{0\le m\le d-2}\Phi^m[h_d] = O\left(
2^{-d}d^{-2} \Gamma\left(
\tfrac1{d-1}\right)^d n^{-\frac1{d-1}}
\rho_0 e^{\rho_0}\right),
\]
which holds uniformly as long as $e^{\rho_0}\le 2$. This is how the
upper limit of $d$ in \eqref{d-rg2} arises.
In such a case,
\[
\sum_{0\le m\le d-2}\Phi^m[h_d] = O\left(
2^{-d}d^{-1} \Gamma\left(
\tfrac1{d-1}\right)^d n^{-\frac1{d-1}-\frac2{d^2}}
\right).
\]
We now consider $\Phi^m[\eta_d]$. Note that an exponentially small
term remains exponentially small under the $\Phi$-operator because
\[
\int_{(1,\infty)^j} (v_1\cdots v_j)^{-1-\alpha}
e^{-nv_1\cdots v_j} \text{d} {\v{v}} \sim n^{-j} e^{-n}.
\]
Thus all terms of the form $\Phi^m[\eta_d]$ are asymptotically
negligible, and we deduce \eqref{fdn-asymp}.
{\quad\rule{3mm}{3mm}\,}
Further calculations give
\begin{align*}
\frac{f_d(n)}{\frac1{d-1}\Gamma\left(\frac1{d-1}\right)^d}
&= \frac{1-e^{-\rho}}{2-e^{-\rho}} + \frac{\rho
e^{-\rho}}{(2-e^{-\rho})^3}
\left(\frac{2\rho -1+(\rho+\tfrac12)e^{-\rho}}{d}\right.\\
&\quad \left.+
\frac{2(\rho-3)+\left(\rho+3\right)e^{-\rho}}{d^3}\log n\right)
+O\left(\frac{\rho e^{-\rho}}{d^2}(\rho^3+1)
\left(1+\frac{\log^2 n}{d^4}\right)\right).
\end{align*}
Note that the range \eqref{d-rg1} arises because we had to drop
factors of the form $(-1)^j$ in estimating the sum of $h_d(n)$. With
a more careful analysis along the same inductive line, we can extend
the range of uniformity of \eqref{fdn-asymp}.
\end{document}
\begin{lmm} Let
\begin{align}\label{dtt}
d:= \left\lfloor\frac{\log n}{W(\log n)+v}\right\rfloor,
\end{align}
where $v\in\mathbb{R}$. Then
\[
dn^{-1/d} \to
\left\{\begin{array}{ll}
0, & \text{if } v\to\infty;\\
e^{-v}, & \text{if } v=O(1);\\
\infty, & \text{if } v\to-\infty.
\end{array}
\right.
\]
\end{lmm}
Note that if $d$ satisfies \eqref{dtt}, then by \eqref{Wx}
\[
d \sim \frac{\log n}{\log\log n-\log\log \log n+v}.
\]
\noindent \emph{Proof.}\ Write
\[
d = \frac{\log n}{W(\log n)+v} -\tau_n,
\]
where $\tau_n$ denotes the fractional part of $(\log n)/(W(\log n)+v)$.
Then
\[
\log d - \frac{\log n}{d} \sim -v -\frac{v}{\log\log n}
-\frac{\tau_n W(\log n)^2}{\log n}.
\]
From this and the monotonicity of $dn^{-1/d}$ (in $d$) the lemma
follows. {\quad\rule{3mm}{3mm}\,}
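As an illustrative numerical check of the lemma (not needed for the proof), one can work with $L=\log n$ directly, so that the astronomically large $n$ never has to be formed: $\log\bigl(dn^{-1/d}\bigr)=\log d - L/d$. The Lambert $W$ value is computed below by a hand-rolled Newton iteration (an implementation choice, not prescribed by the text), and the tolerance is generous because the convergence in the lemma is only asymptotic.

```python
import math

def lambert_w(x, iters=60):
    # Newton iteration for the principal branch of w e^w = x, x > 0
    w = math.log(x) - math.log(math.log(x)) if x > math.e else 0.5
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / ((w + 1.0) * ew)
    return w

L, v = 1e8, 1.0                      # L plays the role of log n
d = math.floor(L / (lambert_w(L) + v))
# log(d * n^{-1/d}) = log d - (log n)/d should be close to -v
assert abs((math.log(d) - L / d) + v) < 0.3
```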
\end{document}
\begin{document}
\date{27 September 2022}
\subjclass[2020]{11F11, 11F27, 11F37}
\keywords{Appell functions, theta functions, mock theta functions, Ramanujan's lost notebook}
\begin{abstract}
Ramanujan's lost notebook contains many mock theta functions and mock theta function identities not mentioned in his last letter to Hardy. For example, we find the four tenth-order mock theta functions and their six identities. The six identities themselves are of a spectacular nature and were first proved by Choi. We also find more than eight sixth-order mock theta functions in the lost notebook, but among their identities there is only a single relationship like those of the tenth orders. Using Appell function properties of Hickerson and Mortenson, we discover and prove three new identities for the sixth-order mock theta functions which are in the spirit of the six tenth-order identities.
\end{abstract}
\maketitle
\section{Introduction}
Let $q:=q_{\tau}=e^{2 \pi i \tau}$, $\tau\in\mathbb{H}:=\{ z\in \mathbb{C}| \textup{Im}(z)>0 \}$, and define $\mathbb{C}^*:=\mathbb{C}-\{0\}$. Recall
\begin{gather*}
(x)_n=(x;q)_n:=\prod_{i=0}^{n-1}(1-q^ix), \ \ (x)_{\infty}=(x;q)_{\infty}:=\prod_{i\ge 0}(1-q^ix),\notag \\
\Theta(x;q):=(x)_{\infty}(q/x)_{\infty}(q)_{\infty}=\sum_{n=-\infty}^{\infty}(-1)^nq^{\binom{n}{2}}x^n,
\end{gather*}
where the equality of product and sum in the last line follows from Jacobi's triple product identity. Here $a$ and $m$ denote integers with $m$ positive. Define
\begin{gather*}
\Theta_{a,m}:=\Theta(q^a;q^m), \ \ \Theta_m:=\Theta_{m,3m}=\prod_{i\ge 1}(1-q^{mi}), \ {\text{and }}\overline{\Theta}_{a,m}:=\Theta(-q^a;q^m).
\end{gather*}
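For readers who wish to experiment, the triple product identity behind the definition of $\Theta(x;q)$ is easy to check numerically. The following Python snippet (illustrative only; the truncation depths are ad hoc choices that suffice for $|q|$ well below $1$) compares truncations of the product and of the bilateral sum at a generic point.

```python
def qpoch(x, q, N=200):
    # truncated q-Pochhammer symbol (x;q)_infinity
    p = 1.0
    for i in range(N):
        p *= 1.0 - q ** i * x
    return p

def theta_prod(x, q):
    # Theta(x;q) = (x;q)_inf (q/x;q)_inf (q;q)_inf
    return qpoch(x, q) * qpoch(q / x, q) * qpoch(q, q)

def theta_sum(x, q, N=40):
    # Theta(x;q) = sum_n (-1)^n q^{binom(n,2)} x^n
    return sum((-1) ** n * q ** (n * (n - 1) // 2) * x ** n
               for n in range(-N, N + 1))

q, x = 0.1, 0.7
assert abs(theta_prod(x, q) - theta_sum(x, q)) < 1e-12
```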
We first revisit the tenth-order mock theta functions \cite{C1, C2, C3, RLN}
{\allowdisplaybreaks \begin{align*}
{\phi}_{10}(q)&:=\sum_{n\ge 0}\frac{q^{\binom{n+1}{2}}}{(q;q^2)_{n+1}}, \ \ {\psi}_{10}(q):=\sum_{n\ge 0}\frac{q^{\binom{n+2}{2}}}{(q;q^2)_{n+1}}, \\
& \ \ \ \ \ {X}_{10}(q):=\sum_{n\ge 0}\frac{(-1)^nq^{n^2}}{(-q;q)_{2n}}, \ \ {\chi}_{10}(q):=\sum_{n\ge 0}\frac{(-1)^nq^{(n+1)^2}}{(-q;q)_{2n+1}},\notag
\end{align*}}
which satisfy many identities such as the slightly-rewritten \cite{C1,C2}
{\allowdisplaybreaks \begin{align}
q^{2}\phi_{10}(q^9)-\frac{\psi_{10}(\omega q)-\psi_{10}(\omega^2 q)}{\omega - \omega^2}
&=-q\frac{\Theta_{1,2}}{\Theta_{3,6}}\frac{\Theta_{3,15}\Theta_{6}}{\Theta_{3}},
\label{equation:tenth-id-1}\\
q^{-2}\psi_{10}(q^9)+\frac{\omega \phi_{10}(\omega q)-\omega^2\phi_{10}(\omega^2 q)}{\omega - \omega^2}
&=\frac{\Theta_{1,2}}{\Theta_{3,6}}\frac{\Theta_{6,15}\Theta_{6}}{\Theta_{3}},
\label{equation:tenth-id-2}\\
X_{10}(q^9)-\frac{\omega \chi_{10}(\omega q)-\omega^2\chi_{10}(\omega^2 q)}{\omega - \omega^2}
&=\frac{\overline{\Theta}_{1,4}}{\overline{\Theta}_{3,12}}\frac{\Theta_{18,30}\Theta_{3}}{\Theta_{6}},
\label{equation:tenth-id-3}\\
\chi_{10}(q^9)+q^{2}\frac{ X_{10}(\omega q)-X_{10}(\omega^2 q)}{\omega - \omega^2}&=-q^3\frac{\overline{\Theta}_{1,4}}{\overline{\Theta}_{3,12}}
\frac{\Theta_{6,30}\Theta_{3}}{\Theta_{6}},
\label{equation:tenth-id-4}
\end{align}}
where $\omega$ is a primitive third root of unity, as well as \cite{C3}
\begin{align}
\phi_{10}(q)-q^{-1}\psi_{10}(-q^4)+q^{-2}\chi_{10}(q^8)&=\frac{\overline{\Theta}_{1,2}\Theta(-q^2;-q^{10})}{\Theta_{2,8}},
\label{equation:RLN-id-five}\\
\psi_{10}(q)+q\phi_{10}(-q^4)+X_{10}(q^8)&=\frac{\overline{\Theta}_{1,2}\Theta(-q^6;-q^{10})}{\Theta_{2,8}}.
\label{equation:RLN-id-six}
\end{align}
The six identities were originally found in Ramanujan's lost notebook \cite{RLN}. What led Ramanujan to these identities is a continuing mystery. Indeed, in Andrews and Berndt's fifth volume on Ramanujan's lost notebook \cite[p. 396]{ABV}, they state
``{\em It is inconceivable that an identity such as (\ref{equation:RLN-id-five}) could be stumbled upon by a mindless search algorithm without any overarching theoretical insight.}''
\noindent The six identities were first proved by Choi \cite{C1, C2, C3} using methods similar to those of Hickerson in his proof of the mock theta conjectures \cite{H1, H2}. Identities (\ref{equation:tenth-id-1})--(\ref{equation:tenth-id-4}) were later given short proofs by Zwegers \cite{Zw3}.
We recall that Appell functions are building blocks for Ramanujan's classical mock theta functions. We define them as follows:
\begin{equation}
m(x,z;q):=\frac{1}{\Theta(z;q)}\sum_{r=-\infty}^{\infty}\frac{(-1)^rq^{\binom{r}{2}}z^r}{1-q^{r-1}xz}.\label{equation:mdef-eq}
\end{equation}
As an example, two of Ramanujan's sixth-order mock theta functions read
\begin{align}
\phi(q)&:=\sum_{n\ge 0}\frac{(-1)^nq^{n^2}(q;q^2)_n}{(-q)_{2n}}=2m(q,-1;q^3)
\label{equation:6th-phi(q)},\\
\psi(q)&:=\sum_{n\ge 0}\frac{(-1)^nq^{(n+1)^2}(q;q^2)_n}{(-q)_{2n+1}}=m(1,-q;q^3)
\label{equation:6th-psi(q)}.
\end{align}
In \cite{M2018}, we gave short proofs of all six of Ramanujan's identities for the tenth-order mock theta functions by using a recent result on Appell function properties.
\begin{theorem} \label{theorem:msplit-general-n} \cite[Theorem $3.5$]{HM} For generic $x,z,z'\in \mathbb{C}^*$
{\allowdisplaybreaks \begin{align}
D_n(x,z,z';q)=z' \Theta_n^3 \sum_{r=0}^{n-1}
\frac{q^{{\binom{r}{2}}} (-xz)^r
\Theta\big(-q^{{\binom{n}{2}+r}} (-x)^n z z';q^n\big)
\Theta(q^{nr} z^n/z';q^{n^2})}
{\Theta(xz;q) \Theta(z';q^{n^2}) \Theta\big(-q^{{\binom{n}{2}}} (-x)^n z';q^n)\Theta(q^r z;q^n\big )},
\end{align}}
where
\begin{equation}
D_n(x,z,z';q):=m(x,z;q) - \sum_{r=0}^{n-1} q^{{-\binom{r+1}{2}}} (-x)^r m\big({-}q^{{\binom{n}{2}-nr}} (-x)^n, z'; q^{n^2} \big).
\label{equation:Dn-def}
\end{equation}
\end{theorem}
\noindent The idea behind the proofs is straightforward. Once one has the Appell function forms of the tenth-order mock theta functions, one regroups the Appell functions by using (\ref{equation:Dn-def}) and then replaces them with the appropriate sums of quotients of theta functions given by Theorem \ref{theorem:msplit-general-n}. Each of the six identities is then reduced to proving a theta function identity which can be verified through several applications of the three-term Weierstrass relation for theta functions \cite[(1.)]{We}, \cite{Ko}: For generic $a,b,c,d\in \mathbb{C}^*$
\begin{align*}
\Theta(ac;q)\Theta(a/c;q)\Theta(bd;q)\Theta(b/d;q)&=\Theta(ad;q)\Theta(a/d;q)\Theta(bc;q)\Theta(b/c;q)\\
&\qquad +b/c \cdot \Theta(ab;q)\Theta(a/b;q)\Theta(cd;q)\Theta(c/d;q).
\end{align*}
For the sixth-order functions, we find the following two identities in the lost notebook \cite[p. 135, Entry 7.4.2]{ABV}, \cite[p. 13, equations 5b, 6b]{RLN}.
\begin{align}
\phi(q^9)-\psi(q)-q^{-3}\psi(q^9)&=\frac{\overline{\Theta}_{3,12}\Theta_{6}^2}{\overline{\Theta}_{1,4}\overline{\Theta}_{9,36}}
\label{equation:RLN6-A},\\
\frac{\psi(\omega q)-\psi(\omega^2q)}{(\omega-\omega^2)q}&=\frac{\overline{\Theta}_{1,4}\overline{\Theta}_{9,36}\Theta_{3,6}}{\overline{\Theta}_{3,12}\Theta_{6}}
\label{equation:RLN6-B}.
\end{align}
Whereas the latter follows from the Appell function property \cite{HM, Zw2}
\begin{equation}
m(x,z_1;q)-m(x,z_0;q)=\frac{z_0(q)_{\infty}^3\Theta(z_1/z_0;q)\Theta(xz_0z_1;q)}{\Theta(z_0;q)\Theta(z_1;q)\Theta(xz_0;q)\Theta(xz_1;q)}, \label{equation:changing-z}
\end{equation}
the former has shadows of Theorem \ref{theorem:msplit-general-n}. In Section \ref{section:id0}, we will demonstrate that Theorem \ref{theorem:msplit-general-n} can be used to prove (\ref{equation:RLN6-A}); however, verifying the resulting theta function identity is more difficult, and instead of standard theta function identities, we will use a Maple software package developed by Frank Garvan \cite{FG}.
Of course, there are many more sixth-order mock theta functions in the lost notebook \cite{AH, BC}, see Section \ref{section:sixth-orders} for a list. It is natural to ask if they too enjoy identities similar to (\ref{equation:RLN6-A}). Once one has the Appell function forms of the other sixth-order mock theta functions, see Section \ref{section:sixth-orders}, one can use Appell function properties and Theorem \ref{theorem:msplit-general-n} to construct identities where one side looks like the left-hand side of (\ref{equation:RLN6-A}) and the other side is a sum of quotients of theta functions. One sees this play out in Section \ref{section:id0}. But do the sums collapse to a single quotient? This leads us to three new identities for the sixth-order mock theta functions.
\begin{theorem} The following identities for the sixth-order mock theta functions $\rho(q)$, $\sigma(q)$, $\lambda(q)$, $\mu(q)$, $\phi_{\_}(q)$, and $\psi_{\_}(q)$ are true.
\begin{align}
q\rho(q)+q^3\rho(q^9)-2\sigma(q^9)&=q\frac{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2},
\label{equation:newSixth-1}\\
q\lambda(q)+q^{3}\lambda(q^9)-2\mu(q^{9})&=-\frac{\Theta_{1}\Theta_3^3}{\Theta_2^2\Theta_{3,18}},
\label{equation:newSixth-2}\\
\psi\_(q)+q^{-3}\psi\_(q^9)-\phi\_(q^9)&=q \frac{\Theta_6^5\Theta_{18}^{3}}{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^3\Theta_{3,18}}.
\label{equation:newSixth-3}
\end{align}
\end{theorem}
In Section \ref{section:prelim} we recall basic facts for theta functions and Appell functions. In Section \ref{section:tech}, we prove theta function identities using Frank Garvan's Maple packages for $q$-series and theta functions \cite{FG}. In Section \ref{section:sixth-orders}, we recall the Eulerian and Appell function forms for six sixth-order mock theta functions found in the lost notebook. In Sections \ref{section:id1} to \ref{section:id3} we prove identities (\ref{equation:newSixth-1}) to (\ref{equation:newSixth-3}) respectively.
\section{Preliminaries}\label{section:prelim}
We have the general identities:
\begin{subequations}
{\allowdisplaybreaks \begin{gather}
\Theta(q^n x;q)=(-1)^nq^{-\binom{n}{2}}x^{-n}\Theta(x;q), \ \ n\in\mathbb{Z},\label{equation:theta-elliptic}\\
\Theta(x;q)=\Theta(q/x;q)=-x\Theta(x^{-1};q)\label{equation:theta-inv}.
\end{gather}}
\end{subequations}
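The quasi-periodicity and inversion identities above are immediate from the sum representation of $\Theta(x;q)$ and can be spot-checked numerically (an illustrative check with an ad hoc truncation):

```python
def theta_sum(x, q, N=40):
    # Theta(x;q) = sum_n (-1)^n q^{binom(n,2)} x^n
    return sum((-1) ** n * q ** (n * (n - 1) // 2) * x ** n
               for n in range(-N, N + 1))

q, x = 0.15, 0.8
for n in (1, 2, 3):
    lhs = theta_sum(q ** n * x, q)
    rhs = (-1) ** n * q ** (-(n * (n - 1) // 2)) * x ** (-n) * theta_sum(x, q)
    assert abs(lhs - rhs) < 1e-10
# Theta(x;q) = Theta(q/x;q) = -x Theta(1/x;q)
assert abs(theta_sum(x, q) - theta_sum(q / x, q)) < 1e-10
assert abs(theta_sum(x, q) + x * theta_sum(1 / x, q)) < 1e-10
```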
The Appell function $m(x,z;q)$ satisfies several functional equations and identities, which we collect in the form of a proposition \cite{HM, Zw2}:
\begin{proposition} For generic $x,z\in \mathbb{C}^*$
{\allowdisplaybreaks \begin{subequations}
\begin{gather}
m(x,z;q)=m(x,qz;q),\label{equation:mxqz-fnq-z}\\
m(x,z;q)=x^{-1}m(x^{-1},z^{-1};q),\label{equation:mxqz-flip}\\
m(x,z;q)=m(x,x^{-1}z^{-1};q).\label{equation:mxqz-fnq-newz}
\end{gather}
\end{subequations}}
\end{proposition}
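Both the proposition and the change-of-$z$ identity (\ref{equation:changing-z}) lend themselves to quick numerical spot checks directly from the definition of $m(x,z;q)$. The Python sketch below (illustrative only; generic parameter values and ad hoc truncations) verifies all four relations.

```python
def qpoch(x, q, N=200):
    # truncated (x;q)_infinity
    p = 1.0
    for i in range(N):
        p *= 1.0 - q ** i * x
    return p

def theta_sum(x, q, N=40):
    # Theta(x;q) = sum_n (-1)^n q^{binom(n,2)} x^n
    return sum((-1) ** n * q ** (n * (n - 1) // 2) * x ** n
               for n in range(-N, N + 1))

def appell_m(x, z, q, N=30):
    # m(x,z;q) from the definition, with a truncated bilateral sum
    s = sum((-1) ** r * q ** (r * (r - 1) // 2) * z ** r
            / (1.0 - q ** (r - 1) * x * z) for r in range(-N, N + 1))
    return s / theta_sum(z, q)

q, x = 0.1, 0.45
z0, z1 = 0.7, 1.3
m0 = appell_m(x, z0, q)
# the three functional equations of the proposition
assert abs(m0 - appell_m(x, q * z0, q)) < 1e-8
assert abs(m0 - appell_m(1 / x, 1 / z0, q) / x) < 1e-8
assert abs(m0 - appell_m(x, 1 / (x * z0), q)) < 1e-8
# the change-of-z identity
rhs = (z0 * qpoch(q, q) ** 3 * theta_sum(z1 / z0, q)
       * theta_sum(x * z0 * z1, q)
       / (theta_sum(z0, q) * theta_sum(z1, q)
          * theta_sum(x * z0, q) * theta_sum(x * z1, q)))
assert abs(appell_m(x, z1, q) - m0 - rhs) < 1e-8
```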
We point out the $n=3$ specialization of \cite[Theorem $3.5$]{HM}:
\begin{corollary} \label{corollary:msplitn3zprime} For generic $x,z,z'\in \mathbb{C}^*$
\begin{align}
D_3(x,z,z';q)&=\frac{z'\Theta_3^3}{\Theta(xz;q)\Theta(z';q^{9})\Theta(x^3z';q^3)}\Big [
\frac{1}{z}\frac{\Theta(x^3zz';q^3)\Theta(z^3/z';q^{9})}{\Theta(z;q^3)}\label{equation:msplit3} \\
&\ \ \ \ \ -\frac{x}{q}\frac{\Theta(qx^3zz';q^3)\Theta(q^{3}z^3/z';q^{9})}{\Theta(qz;q^3)}
+\frac{x^2z}{q}\frac{\Theta(q^2x^3zz';q^3)\Theta(q^{6}z^3/z';q^{9})}{\Theta(q^2z;q^3)}\Big ],\notag
\end{align}
where
\begin{align}
D_3(x,z,z';q)&:=m(x,z;q)-m\Big (q^{3}x^3,z';q^{9}\Big )\label{equation:D3-def}\\
&\ \ \ \ \ +q^{-1}xm\Big (x^3,z';q^{9}\Big )-q^{-3}x^2m\Big (q^{-3}x^3,z';q^{9}\Big ).\notag
\end{align}
\end{corollary}
\section{Technical Results}\label{section:tech}
\begin{lemma} \label{lemma:dTerm-id1} We have
{\allowdisplaybreaks \begin{align}
D_3(1,-q,-q^9;q^3)
&=\frac{\Theta_{9}^3}{\overline{\Theta}_{1,3}\overline{\Theta}_{9,27}\overline{\Theta}_{0,9}}
\Big [
q\frac{\Theta_{1,9}\Theta_{6,27}}{\overline{\Theta}_{1,9}}
-q^{2}\frac{\Theta_{4,9}\Theta_{3,27}}{\overline{\Theta}_{4,9}}
-\frac{\Theta_{7,9}\Theta_{12,27}}{\overline{\Theta}_{7,9}} \Big ],
\label{equation:dTerm-id1}\\
D_3(1,q,q^9;q^6)
&=
-\frac{\Theta_{18}^3}{\Theta_{1,6}\Theta_{9,54}\Theta_{9,18}}
\Big [ q^{2} \frac{\Theta_{10,18}\Theta_{6,54}}{\Theta_{1,18}}
+q^{3} \frac{\Theta_{16,18}\Theta_{12,54}}{\Theta_{7,18}}
+\frac{\Theta_{4,18}\Theta_{30,54}}{\Theta_{13,18}} \Big ],
\label{equation:dTerm-id2}\\
D_3(1,-q^2,-q^{27};q^6)
&=\frac{\Theta_{18}^3}{\overline{\Theta}_{2,6}\overline{\Theta}_{27,54}\overline{\Theta}_{9,18}}
\Big [ q^{2}\frac{\Theta_{11,18}\Theta_{21,54}}{\overline{\Theta}_{2,18}}
+q^{10}\frac{\Theta_{17,18}\Theta_{3,54}}{\overline{\Theta}_{8,18}}
+q^{4}\frac{\Theta_{5,18}\Theta_{15,54}}{\overline{\Theta}_{14,18}}
\Big ],
\label{equation:dTerm-id3A}\\
D_3(1,-q,-1;q^6)
&=\frac{\Theta_{18}^3}{\overline{\Theta}_{1,6}\overline{\Theta}_{0,54}\overline{\Theta}_{0,18}}
\Big [ q^{-1}\frac{\Theta_{1,18}\Theta_{3,54}}{\overline{\Theta}_{1,18}}
+q^{-6}\frac{\Theta_{7,18}\Theta_{21,54}}{\overline{\Theta}_{7,18}}
+q^{-5}\frac{\Theta_{13,18}\Theta_{39,54}}{\overline{\Theta}_{13,18}}
\Big ],
\label{equation:dTerm-id3B}\\
D_{3}(1,q,-1;q^3)
&=-\frac{\Theta_{9}^3}{\Theta_{1}\overline{\Theta}_{0,27}\overline{\Theta}_{0,9}}
\Big [ q^{-1}\frac{\overline{\Theta}_{1,9}\overline{\Theta}_{3,27}}{\Theta_{1,9}}
-q^{-3}\frac{\overline{\Theta}_{4,9}\overline{\Theta}_{12,27}}{\Theta_{4,9}}
+q^{-2}\frac{\overline{\Theta}_{7,9}\overline{\Theta}_{21,27}}{\Theta_{7,9}}
\Big ].
\label{equation:dTerm-id4}
\end{align}}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:dTerm-id1}]
We use Corollary \ref{corollary:msplitn3zprime} and (\ref{equation:theta-elliptic}).
\end{proof}
\begin{proposition}\label{proposition:finalTheta-ids} We have
{\allowdisplaybreaks \begin{align}
\frac{\overline{\Theta}_{3,12}\Theta_{6}^2}{\overline{\Theta}_{1,4}\overline{\Theta}_{9,36}}
&=-D_3(1,-q,-q^9;q^3)
+\frac{\Theta_{27}^3\Theta_{9}^2}{\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}^3},
\label{equation:finalTheta-id1}\\
q \frac{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2}
&=-D_3(1,q,q^9;q^6)-\frac{\Theta_{54}^3\Theta_{18,54}^2}{\Theta_{9,54}^3\Theta_{27,54}},
\label{equation:finalTheta-id2}\\
-\frac{\Theta_1\Theta_3^3}{\Theta_2^2\Theta_{3,18}}
&=D_3(1,-q^2,-q^{27};q^6)+D_3(1,-q,-1;q^6) \label{equation:finalTheta-id3} \\
&\qquad \qquad +q^{12}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{27,54}^2\overline{\Theta}_{18,54}^2}
-q^{-6}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{0,54}^2\overline{\Theta}_{9,54}^2},
\notag \\
2q \frac{\Theta_6^5\Theta_{18}^{3}}{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^3\Theta_{3,18}}
&=-D_{3}(1,q,-1;q^3)+q\frac{\Theta_{6}^3}{\Theta_{1}\Theta_{2}}
+2q^{9}\frac{\overline{\Theta}_{27,108}^3}{\Theta_{9}\overline{\Theta}_{9,36}}\label{equation:finalTheta-id4} \\
&\qquad -2\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}} +q^{6}\frac{\Theta_{54}^3}{\Theta_{9}\Theta_{18}}
+q^{-3}\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}^2}.\notag
\end{align}}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{proposition:finalTheta-ids}] Frank Garvan's Maple packages {\em qseries} and {\em thetaids} prove all four theta function identities \cite{FG}. We give a brief description of the process where we use (\ref{equation:finalTheta-id2}) as a running example.
We first normalize (\ref{equation:finalTheta-id2}) to obtain the equivalent identity
\begin{equation}
g({\tau}):=f_1(\tau)+f_2(\tau)+f_3(\tau)-f_4(\tau)-1=0,\label{equation:finalTheta-id2-normal}
\end{equation}
where
\begin{gather*}
f_1(\tau):=q\frac{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2}{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}
\frac{\Theta_{18}^3}{\Theta_{1,6}\Theta_{9,54}\Theta_{9,18}}
\frac{\Theta_{10,18}\Theta_{6,54}}{\Theta_{1,18}},\\
f_2(\tau):=q^2\frac{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2}{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}
\frac{\Theta_{18}^3}{\Theta_{1,6}\Theta_{9,54}\Theta_{9,18}}
\frac{\Theta_{16,18}\Theta_{12,54}}{\Theta_{7,18}},\\
f_3(\tau):=\frac{1}{q}\frac{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2}{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}
\frac{\Theta_{18}^3}{\Theta_{1,6}\Theta_{9,54}\Theta_{9,18}}
\frac{\Theta_{4,18}\Theta_{30,54}}{\Theta_{13,18}}, \quad
f_4(\tau):=\frac{1}{q}\frac{\Theta_{1,6}^2\Theta_{2}\Theta_{9}^2}{\Theta_{3,6}\Theta_{6}^4\Theta_{18}}
\frac{\Theta_{54}^3\Theta_{18,54}^2}{\Theta_{9,54}^3\Theta_{27,54}}.
\end{gather*}
Here $N:=54$. For the first step, one uses \cite[Theorem 18]{Rob} to verify that each $f_{j}(\tau)$, $1\le j\le 4$, is a modular function on $\Gamma_{1}(N)$. For the second step, one uses \cite[Corollary 4]{CKP} to find a set $\mathcal{S}_{N}$ of inequivalent cusps for $\Gamma_{1}(N)$. One also determines the fan width of each cusp. For the third step, one uses \cite[Lemma 3.2]{Biag} to calculate the invariant order of each modular function at each of the cusps of $\Gamma_{1}(N)$. For the fourth step, one uses the Valence Formula \cite[p. 98]{Rank} to determine the number of terms to verify in order to confirm identity (\ref{equation:finalTheta-id2-normal}). To this end, one calculates $B$ where
\begin{equation*}
B:=\sum_{\substack{s\in\mathcal{S}_{N}\\ s\ne i\infty}}\textup{min}(\{ \textup{ORD}(f_{j},s,\Gamma_{1}(N)):1\le j\le n\}\cup\{0\}),
\end{equation*}
and where $\textup{ORD}(f,\zeta,\Gamma):=\kappa(\zeta,\Gamma)\textup{ord}(f,\zeta)$, with $\kappa(\zeta,\Gamma)$ denoting the fan width of the cusp $\zeta$ and $\textup{ord}(f,\zeta)$ denoting the invariant order. We direct the interested reader to \cite[p. 91]{Rank} for further details.
In our running example, $B=-63$. From the Valence Formula \cite[Corollary 2.5]{FG} we know that (\ref{equation:finalTheta-id2-normal}) is true if and only if
\begin{equation*}
\textup{ORD}(g(\tau), i\infty,\Gamma_{1}(N))>-B
\end{equation*}
Hence one only needs to verify identity (\ref{equation:finalTheta-id2-normal}) out through $\mathcal{O}(q^{64})$, which is what one does in the fifth and final step.
\end{proof}
\section{The sixth-order mock theta functions}\label{section:sixth-orders}
We recall the relevant `6th order' mock theta functions \cite[Section 5]{HM}, but we omit $\gamma(q)$.
{\allowdisplaybreaks \begin{align}
\rho(q)&:=\sum_{n\ge 0}\frac{q^{\binom{n+1}{2}}(-q)_n}{(q;q^2)_{n+1}}=-q^{-1}m(1,q;q^6)\label{equation:6th-rho(q)}\\
\sigma(q)&:=\sum_{n\ge 0}\frac{q^{\binom{n+2}{2}}(-q)_n}{(q;q^2)_{n+1}}=-m(q^2,q;q^6)\label{equation:6th-sigma(q)}\\
\lambda(q)&:=\sum_{n\ge 0}\frac{(-1)^nq^n(q;q^2)_n}{(-q)_n}
=q^{-1}m(1,-q^2;q^6)+q^{-1}m(1,-q;q^6)\label{equation:6th-lambda(q)}\\
&=2q^{-1}m(1,-q^2;q^6)+\frac{\Theta_{1,2}\overline{\Theta}_{3,12}}{\overline{\Theta}_{1,4}}\notag\\
\mu(q)&:={\sum_{n\ge 0}}^*\frac{(-1)^n(q;q^2)_n}{(-q)_n}=\frac12 +\frac12 \sum_{n\ge 0}\frac{(-1)^nq^{n+1}(1+q^n)(q;q^2)_n}{(-q;q)_{n+1}}\label{equation:6th-mu(q)}\\
&=m(q^2,-1;q^6)+ m(q^2,-q^3;q^6)=2m(q^2,-1;q^6)-\frac{\Theta_{1,2}\overline{\Theta}_{1,3}}{2\overline{\Theta}_{1,4}}\notag\\
{\phi}\_(q)&:=\sum_{n\ge 1}\frac{q^n(-q;q)_{2n-1}}{(q;q^2)_n}=-\frac{3}{4}m(q,q;q^3)-\frac{1}{4}m(q,-q;q^3)
\label{equation:phibar}\\
&=-m(q,q;q^3)-q\frac{\overline{\Theta}_{3,12}^3}{\Theta_1\overline{\Theta}_{1,4}}\notag\\
{\psi}\_(q)&:=\sum_{n\ge 1}\frac{q^n(-q;q)_{2n-2}}{(q;q^2)_n}=-\frac{3}{4}m(1,q;q^3)+\frac{1}{4}m(1,-q;q^3)\label{equation:psibar}\\
&=-\frac{1}{2}m(1,q;q^3)+q\frac{{\Theta}_{6}^3}{2\Theta_1 \Theta_2}\notag
\end{align}}
\section{Proof of identity (\ref{equation:RLN6-A})}\label{section:id0}
We rewrite the left-hand side of identity (\ref{equation:RLN6-A}). We first recall the Appell function forms (\ref{equation:6th-phi(q)}) and (\ref{equation:6th-psi(q)}), and we then use properties (\ref{equation:mxqz-fnq-newz}) and (\ref{equation:changing-z}) to obtain
{\allowdisplaybreaks \begin{align*}
\phi(q^9)&-\psi(q)-q^{-3}\psi(q^9)\\
&=2m(q^9,-1;q^{27})-m(1,-q;q^3)-q^{-3}m(1,-q^{9};q^{27})\\
&=m(q^9,-1;q^{27})+m(q^9,-q^{-9};q^{27})-m(1,-q;q^3)-q^{-3}m(1,-q^{9};q^{27})\\
&=m(q^9,-q^{9};q^{27})+\frac{\Theta_{27}^3\Theta_{9}^2}{\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}^3}\\
&\qquad +m(q^9,-q^{-9};q^{27})-m(1,-q;q^3)-q^{-3}m(1,-q^{9};q^{27}).
\end{align*}}
Next, property (\ref{equation:mxqz-flip}), Definition (\ref{equation:D3-def}), and Lemma \ref{lemma:dTerm-id1} yield
{\allowdisplaybreaks \begin{align*}
\phi(q^9)&-\psi(q)-q^{-3}\psi(q^9)\\
&=m(q^9,-q^{9};q^{27})+q^{-9}m(q^{-9},-q^{9};q^{27})-m(1,-q;q^3)-q^{-3}m(1,-q^{9};q^{27})\\
&\quad +\frac{\Theta_{27}^3\Theta_{9}^2}{\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}^3}\\
&=-D_3(1,-q,-q^9;q^{3}) +\frac{\Theta_{27}^3\Theta_{9}^2}{\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}^3}.
\end{align*}}
The result follows from identity (\ref{equation:finalTheta-id1}).
\section{Proof of identity (\ref{equation:newSixth-1})}\label{section:id1}
We take a slightly different approach. We use Definition (\ref{equation:D3-def}), properties (\ref{equation:mxqz-flip}) and (\ref{equation:changing-z}), and Appell function forms (\ref{equation:6th-rho(q)}) and (\ref{equation:6th-sigma(q)}) to obtain
{\allowdisplaybreaks \begin{align*}
D_3(1,q,q^9;q^6)
&=m(1,q;q^6)-m(q^{18},q^{9};q^{54})+q^{-6}m(1,q^{9};q^{54})-q^{-18}m(q^{-18},q^{9};q^{54})\\
&=m(1,q;q^6)-m(q^{18},q^{9};q^{54})+q^{-6}m(1,q^{9};q^{54})-m(q^{18},q^{-9};q^{54})\\
&=m(1,q;q^6)-2m(q^{18},q^{9};q^{54})+q^{-6}m(1,q^{9};q^{54})
-\frac{\Theta_{54}^3\Theta_{18,54}^2}{\Theta_{9,54}^3\Theta_{27,54}}\\
&=-q\rho(q)+2\sigma(q^9)-q^3\rho(q^9)-\frac{\Theta_{54}^3\Theta_{18,54}^2}{\Theta_{9,54}^3\Theta_{27,54}}.
\end{align*}}
Rearranging terms, we have
\begin{align*}
q\rho(q)-2\sigma(q^9)+q^3\rho(q^9)
=-D_3(1,q,q^9;q^6)-\frac{\Theta_{54}^3\Theta_{18,54}^2}{\Theta_{9,54}^3\Theta_{27,54}}.
\end{align*}
The result follows from identity (\ref{equation:finalTheta-id2}).
\section{Proof of identity (\ref{equation:newSixth-2})}\label{section:id2}
Using Definition (\ref{equation:D3-def}), property (\ref{equation:mxqz-flip}), and then property (\ref{equation:changing-z}) twice produces
\begin{align*}
D_3&(1,-q^2,-q^{27};q^6)\\
&=m(1,-q^2;q^6)-m(q^{18},-q^{27};q^{54})+q^{-6}m(1,-q^{27};q^{54})-q^{-18}m(q^{-18},-q^{27};q^{54})\\
&=m(1,-q^2;q^6)-m(q^{18},-q^{27};q^{54})+q^{-6}m(1,-q^{27};q^{54})-m(q^{18},-q^{-27};q^{54})\\
&=m(1,-q^2;q^6)-2m(q^{18},-q^{27};q^{54})+q^{-6}m(1,-q^{18};q^{54})
-q^{12}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{27,54}^2\overline{\Theta}_{18,54}^2}.
\end{align*}
Likewise, using Definition (\ref{equation:D3-def}), property (\ref{equation:mxqz-flip}), and then property (\ref{equation:changing-z}) yields
\begin{align*}
D_3&(1,-q,-1;q^6)\\
&=m(1,-q;q^6)-m(q^{18},-1;q^{54})+q^{-6}m(1,-1;q^{54})-q^{-18}m(q^{-18},-1;q^{54})\\
&=m(1,-q;q^6)-2m(q^{18},-1;q^{54})+q^{-6}m(1,-1;q^{54})\\
&=m(1,-q;q^6)-2m(q^{18},-1;q^{54})+q^{-6}m(1,-q^9;q^{54})
+q^{-6}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{0,54}^2\overline{\Theta}_{9,54}^2}.
\end{align*}
Summing the above two expressions and rearranging terms yields
\begin{align*}
q&\lambda(q)+q^{3}\lambda(q^9)-2\mu(q^9)\\
&= D_3(1,-q^2,-q^{27};q^6)+D_3(1,-q,-1;q^6)
+q^{12}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{27,54}^2\overline{\Theta}_{18,54}^2}
-q^{-6}\frac{\Theta_{54}^3\Theta_{9,54}^2}{\overline{\Theta}_{0,54}^2\overline{\Theta}_{9,54}^2}.
\end{align*}
The result follows by (\ref{equation:finalTheta-id3}).
\section{Proof of identity (\ref{equation:newSixth-3})}\label{section:id3}
Using Definition (\ref{equation:D3-def}), property (\ref{equation:mxqz-flip}), property (\ref{equation:changing-z}), and then the Appell function forms (\ref{equation:phibar}) and (\ref{equation:psibar}) gives
\begin{align*}
D_{3}&(1,q,-1;q^3)\\
&=m(1,q;q^3)-m(q^{9},-1;q^{27})+q^{-3}m(1,-1;q^{27})-q^{-9}m(q^{-9},-1;q^{27})\\
&=m(1,q;q^3)-2m(q^{9},-1;q^{27})+q^{-3}m(1,-1;q^{27})\\
&=m(1,q;q^3)
-2m(q^{9},-q^9;q^{27})-2\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}}\\
& \qquad \qquad +q^{-3}m(1,q^9;q^{27})+q^{-3}\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}^2}\\
&=-2\psi_{\_}(q)+2q\frac{\Theta_{6}^3}{2\Theta_{1}\Theta_{2}}
+2\phi_{\_}(q^{9})+2q^{9}\frac{\overline{\Theta}_{27,108}^3}{\Theta_{9}\overline{\Theta}_{9,36}}
-2\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}}\\
& \qquad \qquad -2q^{-3}\psi_{\_}(q^{9})+2q^{6}\frac{\Theta_{54}^3}{2\Theta_{9}\Theta_{18}}
+q^{-3}\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}^2}.
\end{align*}
Rearranging terms gives us
\begin{align*}
2\psi\_(q)+2q^{-3}\psi\_(q^9)-2\phi\_(q^9)&=-D_{3}(1,q,-1;q^3)+q\frac{\Theta_{6}^3}{\Theta_{1}\Theta_{2}}
+2q^{9}\frac{\overline{\Theta}_{27,108}^3}{\Theta_{9}\overline{\Theta}_{9,36}}
\\
& \qquad -2\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}\overline{\Theta}_{9,27}} +q^{6}\frac{\Theta_{54}^3}{\Theta_{9}\Theta_{18}}
+q^{-3}\frac{\Theta_{27}^3\overline{\Theta}_{9,27}^2}{\Theta_{9}^2\overline{\Theta}_{0,27}^2}.
\end{align*}
The result follows by (\ref{equation:finalTheta-id4}).
\end{document}
\begin{document}
\title{Quantitative characterization of stress concentration in the presence of closely spaced hard inclusions in two-dimensional linear elasticity\thanks{This work is supported by NRF 2016R1A2B4011304 and 2017R1A4A1014735}}
\author{Hyeonbae Kang\thanks{Department of Mathematics, Inha University, Incheon
22212, S. Korea ([email protected])} \and Sanghyeon Yu\thanks{Seminar for Applied Mathematics, Department of Mathematics, ETH Z\"urich, R\"amistrasse 101, CH-8092 Z\"urich, Switzerland ([email protected])}}
\maketitle
\begin{abstract}
In the region between close-to-touching hard inclusions, the stress may be arbitrarily large as the inclusions get closer. The stress is represented by the gradient of a solution to the Lam\'e system of linear elasticity. We consider the problem of characterizing the gradient blow-up of the solution in the narrow region between two inclusions and estimating its magnitude. We introduce singular functions which are constructed in terms of nuclei of strain and hence are solutions of the Lam\'{e} system, and then show that the singular behavior of the gradient in the narrow region can be precisely captured by these singular functions. As a consequence of the characterization, we are able to recover the existing upper bound on the blow-up rate of the gradient, namely, $\epsilon^{-1/2}$ where $\epsilon$ is the distance between the two inclusions. We then show that it is in fact optimal by exhibiting cases where $\epsilon^{-1/2}$ is also a lower bound. This work is the first to completely reveal the singular nature of the gradient blow-up in the context of the Lam\'{e} system with hard inclusions. The singular functions introduced in this paper play essential roles in overcoming the difficulties in applying the methods of previous works. The main tools of this paper are layer potential techniques and the variational principle. The variational principle can be applied because the singular functions of this paper are solutions of the Lam\'{e} system.
\end{abstract}
\noindent {\footnotesize {\bf AMS subject classifications.} 35J47, 74B05, 35B40}
\noindent {\footnotesize {\bf Key words.} stress concentration, gradient blow-up, closely spaced inclusions, hard inclusion, Lam\'{e} system, linear elasticity, high contrast, optimal bound, singular functions, nuclei of strain}
\tableofcontents
\section{Introduction}
When two inclusions are close to touching, physical fields such as the stress or the electric field may become arbitrarily large in the narrow region between the inclusions. It is quite important to understand such field concentration precisely.
Stress concentration may occur in fiber-reinforced composites where elastic inclusions are densely packed \cite{bab}. The electric field can be greatly enhanced in the case of conducting inclusions, which can be utilized to achieve subwavelength imaging and sensitive spectroscopy \cite{YA-SIREV}.
In response to such importance, there has been much progress in understanding the field concentration in the last decade or so. In the context of electrostatics (or anti-plane elasticity), the field is the gradient of a solution to the Laplace equation, and precise estimates of the gradient were obtained. It was discovered that when the conductivity of the inclusions is $\infty$, the blow-up rate of the gradient is $\epsilon^{-1/2}$ in two dimensions \cite{AKL-MA-05,Yun-SIAP-07}, where $\epsilon$ is the distance between the two inclusions, and it is $|\epsilon \ln \epsilon|^{-1}$ in three dimensions \cite{BLY-ARMA-09}. There is a long list of literature in this direction of research, e.g., \cite{AKLLL-JMPA-07, AKLLZ-JDE-09, BLY-CPDE-10, Gorb-MMS-16, GN-MMS-12, Lekner-PRSA-12, LLBY-QAM-14, LY-CPDE-09, LY-JDE-11, Yun-JMAA-09, Yun-JDE-16}.
While these works concern estimates of the blow-up rate of the gradient, there is another direction of research aimed at characterizing the singular behavior of the gradient \cite{ACKLY-ARMA-13, KLY-MA-15, KLY-JMPA-13, KLY-SIAP-14, LY-JMAA-15}. There, an explicit function, called a singular function, is introduced, and the singular behavior of the gradient is completely characterized by this singular function. Since the singular function is closely related to this work, we include a brief discussion of it at the beginning of subsection \ref{subsec:singular}. All the works mentioned above concern the homogeneous equation and inclusions with smooth boundaries. Recently there have been important extensions to the inhomogeneous equation \cite{DL-arXiv} and to inclusions with corners (the bow-tie shape) \cite{KYun}.
In this paper, we consider a similar problem in linear elasticity, i.e., the Lam\'e system.
We assume that two hard inclusions, which have infinite shear modulus, are present with a small separation distance $\epsilon$. The stress is represented in terms of the gradient of a solution to the Lam\'e system.
We are interested in the singular behavior of the stress (or the gradient) when the distance $\epsilon$ goes to zero.
Even though much progress has been made for the Laplace equation of anti-plane elasticity as mentioned above, not much is known about the gradient blow-up in the context of full elasticity, e.g., the Lam\'e system. Recently, significant progress has been made by Bao {\it et al} \cite{BLL-ARMA-15, BLL-arXiv}: it is proved in \cite{BLL-ARMA-15} that $\epsilon^{-1/2}$ is an upper bound on the blow-up rate of the gradient for the two-dimensional Lam\'e system. We emphasize that there is significant difficulty in applying the methods for scalar equations to systems of equations; for instance, the maximum principle does not hold for systems. In \cite{BLL-ARMA-15} the authors came up with an ingenious iteration technique to overcome this difficulty and obtain the upper bound on the blow-up rate. However, it was still not known whether it is also a lower bound.
The purpose of this paper is to construct singular functions for the two-dimensional Lam\'e system, analogous to the one for electrostatics, and to characterize the singular behavior of the gradient using these singular functions. In fact, we construct singular functions as carefully chosen linear combinations of nuclei of strain, and show that they capture the singular behavior of the gradient precisely. Nuclei of strain are the columns of the Kelvin matrix of the fundamental solution to the Lam\'e system and their variants. As a consequence of this characterization, we are able to reobtain the result of \cite{BLL-ARMA-15}, which states that $\epsilon^{-1/2}$ is an upper bound on the blow-up rate of the gradient, with a different proof. More importantly, the characterization enables us to show that the rate $\epsilon^{-1/2}$ is actually optimal, in the sense that there are cases where $\epsilon^{-1/2}$ is also a lower bound on the blow-up rate.
To the best of our knowledge, this work is the first to completely reveal the singular nature of the gradient blow-up in the context of the Lam\'e system with hard inclusions. The singular functions introduced in this paper play essential roles in overcoming the difficulties encountered in applying the methods of previous works.
We emphasize that the nuclei of strain and the singular functions are solutions of the Lam\'e system. This has a significant implication: we heavily use the variational principle for proving the characterization of the stress concentration in section \ref{sec:bvp}, which is possible only because the singular functions are solutions of the Lam\'e system. This makes the method of this paper significantly different from that of \cite{BLL-ARMA-15}. We include a brief comparison of the two methods at the end of subsection \ref{subsec:bvp}.
It is worth mentioning that the singular functions constructed in this paper are also applied to an important problem other than the analysis of the gradient blow-up. In fact, quantitative analysis of the gradient is closely related to the computation of the effective properties of densely packed
composites. In \cite{BK-ARMA-01}, Berlyand et al. provided the first rigorous justification of the asymptotic formula for the effective conductivity, which was found by Keller \cite{Keller-JAP-63}. However, the corresponding formulas of Flaherty-Keller \cite{FK-CPAM-73} for the effective elastic properties have not, to the best of our knowledge, been rigorously proved. Using the singular functions of this paper we are able to prove these formulas in a mathematically rigorous way. We emphasize that this is possible only because the singular functions are solutions of the Lam\'e system. We report this result in a separate paper \cite{KY}.
Accurate numerical computation of the gradient in the presence of closely spaced hard inclusions is a well-known challenging problem in computational mathematics and sciences. A serious difficulty arises when computing the gradient, since a fine mesh is required to capture the gradient blow-up in the narrow region. The precise characterization of the gradient blow-up can be utilized to design an efficient numerical scheme for computing the gradient. This was done for the conductivity case in \cite{KLY-JMPA-13}. The result of this paper may open up a way to perform such computations for isotropic elasticity.
It is worth mentioning that, for the Lam\'e system where the two inclusions are circular holes, the gradient blow-up was recently characterized by a singular function in \cite{LY-arXiv}; moreover, the optimal blow-up rate of the gradient was obtained there. The holes are characterized by the vanishing traction condition on the boundary, and the blow-up rate is the same as in the hard inclusion case, namely, $\epsilon^{-1/2}$. We emphasize that, unlike in anti-plane elasticity, the hole case is not the dual problem of the hard inclusion case, and a different method is required to handle it.
This paper consists of six sections, including the introduction, and appendices. In section \ref{sec:prob}, we formulate the problem to be considered, derive some preliminary results which will be used in later sections, and describe the geometry of the two inclusions. In section \ref{sec:singular}, singular functions are constructed in terms of nuclei of strain and their properties are derived for later use. Sections \ref{sec:bvp} and \ref{sec:free} respectively deal with the problem of characterizing the stress concentration in a bounded domain and in the free space. In section \ref{sec:symmetric_case} we consider the case when the inclusions are symmetric, in particular, when they are disks of the same radius, and show that $\epsilon^{-1/2}$ is a lower bound on the blow-up rate of the gradient when the Lam\'e constants satisfy a certain constraint. Since each section is rather long and its subject can be viewed as independent, we include an introduction in each section. The appendices prove some results used in the text, in particular the existence and uniqueness of the solution to the exterior problem of the Lam\'e system and the layer potential representations of the solutions to the boundary value problem and the free space problem.
Throughout this paper, we use the expression $A \lesssim B$ to imply that there is a constant $C$ independent of $\epsilon$ such that $A \le CB$. The expression $A \approx B$ implies that both $A \lesssim B$ and $B \lesssim A$ hold.
\section{Problem formulation and preliminaries}\label{sec:prob}
In this section we formulate the problem of characterizing the stress concentration. The main tools in dealing with the problem are the layer potential technique and the variational principle; we introduce them in this section. We then consider the existence and uniqueness question for the exterior problem of the Lam\'e system with arbitrary Dirichlet data. The final subsection describes the geometry of the two inclusions in a precise manner.
\subsection{Lam\'{e} system with hard inclusions: a problem formulation}
We consider two disjoint elastic inclusions $D_1$ and $D_2$ which are embedded in $\mathbb{R}^2$ occupied by an elastic material. We assume that $D_1$ and $D_2$ are simply connected bounded domains with $C^4$-smooth boundaries. The results of this paper remain valid even if the boundaries are only $C^{3,\alpha}$ for some $\alpha>0$, but we assume $C^4$ for convenience; the advantage of assuming $C^4$ is made clear in subsection \ref{subsection:geo_two_incl}. We also assume some convexity of the boundaries, which is precisely described in the same subsection.
Let $(\lambda,\mu)$ be the pair of Lam\'{e} constants of $D^e:= \mathbb{R}^2 \setminus \overline{D_1 \cup D_2}$ which satisfies the strong ellipticity conditions
$\mu>0$ and $\lambda+\mu>0$. Then the elasticity tensor is given by $\mathbb{C}=(C_{ijkl})$ with
$$
C_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}), \quad i,j,k,l=1,2,
$$
where $\delta_{ij}$ denotes Kronecker's delta. The Lam\'e operator $\mathcal{L}_{\lambda,\mu}$ of the linear isotropic elasticity is defined by
\begin{equation}
\mathcal{L}_{\lambda,\mu} {\bf u}:= \nabla \cdot \mathbb{C} \widehat{\nabla}{\bf u} = \mu \Delta {\bf u}+(\lambda+\mu)\nabla \nabla \cdot {\bf u},
\end{equation}
where $\widehat{\nabla}$ denotes the symmetric gradient, namely,
$$
\widehat{\nabla} {\bf u} = \frac{1}{2} \left( \nabla {\bf u} +\nabla {\bf u}^T\right) \quad \text{($T$ for transpose)}.
$$
The corresponding conormal derivative $\partial_\nu {\bf u}$ on $\partial D_j$ is defined as
\begin{equation}
\partial_\nu {\bf u} = (\mathbb{C} \widehat{\nabla}{\bf u}) {\bf n},
\end{equation}
where ${\bf n}$ is the outward unit normal vector to $\partial D_j$ ($j=1,2$).
Given a displacement field ${\bf u}=(u_1,u_2)^T$, $\widehat{\nabla} {\bf u}$ is the strain tensor while the stress tensor $\mbox{\boldmath $\Gs$} = (\sigma_{ij})_{i,j=1}^2$ is defined to be
\begin{equation}\label{def_stress}
\mbox{\boldmath $\Gs$} := \mathbb{C} \widehat{\nabla}{\bf u} = \lambda \mbox{tr} ( \widehat{\nabla} {\bf u}){\bf I} +2\mu \widehat{\nabla} {\bf u},
\end{equation}
namely,
\begin{align}
\sigma_{11} &= (\lambda+2\mu) \partial_1 u_1 + \lambda\partial_2 u_2,
\nonumber
\\
\sigma_{22} &= \lambda \partial_1 u_1 + (\lambda+2\mu) \partial_2 u_2, \label{stress_cartesian}
\\
\sigma_{12} &= \sigma_{21} = \mu (\partial_2 u_1 + \partial_1 u_2). \nonumber
\end{align}
Here and throughout this paper, tr stands for the trace and $\partial_j$ denotes the partial derivative with respect to the $x_j$-variable for $j=1,2$.
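The identity $\mathbb{C}\widehat{\nabla}{\bf u} = \lambda \mbox{tr}(\widehat{\nabla}{\bf u}){\bf I} + 2\mu \widehat{\nabla}{\bf u}$ used in \eqnref{def_stress} can be checked directly from the definition of $\mathbb{C}$: for any symmetric matrix $E$,

```latex
\[
(\mathbb{C}E)_{ij} = \sum_{k,l=1}^{2} C_{ijkl} E_{kl}
= \lambda \delta_{ij} \sum_{k=1}^{2} E_{kk} + \mu \left( E_{ij} + E_{ji} \right)
= \lambda\, \mbox{tr}(E)\, \delta_{ij} + 2\mu E_{ij},
\]
```

where the symmetry $E_{kl}=E_{lk}$ is used in the last equality.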
Let $\mbox{\boldmath $\GY$}$ be the collection of all functions $\mbox{\boldmath $\Gy$}$ such that $\widehat{\nabla} \mbox{\boldmath $\Gy$} = 0$ in $\mathbb{R}^2$, {\it i.e.}, the three-dimensional vector space spanned by
the displacement fields of the rigid motions $\{\Psi_j\}_{j=1}^3$ defined as follows:
\begin{equation}\label{Psidef}
\Psi_1({\bf x})=\begin{bmatrix} 1\\0 \end{bmatrix}, \quad
\Psi_2({\bf x})=\begin{bmatrix} 0\\1 \end{bmatrix}, \quad
\Psi_3({\bf x})=\begin{bmatrix} -y\\ x \end{bmatrix}.
\end{equation}
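For instance, one verifies directly that these are displacement fields of rigid motions: $\Psi_1$ and $\Psi_2$ are constant, and for the rotation field $\Psi_3$,

```latex
\[
\nabla \Psi_3 = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},
\qquad
\widehat{\nabla} \Psi_3 = \frac{1}{2} \left( \nabla \Psi_3 + \nabla \Psi_3^T \right) = 0,
\]
```

so that $\widehat{\nabla} \Psi_j = 0$ for $j=1,2,3$, as required.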
Throughout this paper we denote the point ${\bf x}$ in $\mathbb{R}^2$ by either $(x_1,x_2)^T$ or $(x,y)^T$, as convenient.
We assume $D_1$ and $D_2$ are hard inclusions. This assumption is encoded in the boundary conditions on $\partial D_j$ in the following problem: Let $\Omega$ be a bounded domain in $\mathbb{R}^2$ containing $D_1$ and $D_2$ such that $\mbox{dist}(\partial \Omega, D_1 \cup D_2) \geq C$ for some constant $C>0$.
Let us denote
$$
\widetilde{\Omega}=\Omega\setminus\overline{D_1\cup D_2}.
$$
For given Dirichlet data ${\bf g}$ we consider the following problem:
\begin{equation}\label{elas_eqn_bdd}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf u}= 0 \quad &\mbox{ in } \widetilde{\Omega},\\[2mm]
\displaystyle {\bf u}=\sum_{j=1}^3 c_{ij} \Psi_j({\bf x}) \quad &\mbox{ on } \partial D_i, \quad i=1,2 ,\\[2mm]
\displaystyle {\bf u} = \mathbf{g} \quad& \mbox{ on } \partial \Omega,
\end{array}
\right.
\end{equation}
where the constants $c_{ij}$ are determined by the conditions
\begin{equation}\label{int_zero}
\int_{\partial D_i} \partial_\nu {\bf u} |_+ \cdot \Psi_j \,d\sigma =0, \quad i=1,2, \, j=1,2,3.
\end{equation}
Here and afterwards, the subscript $+$ denotes the limit from outside $\partial D_j$.
Let
\begin{equation}
\epsilon:= \mbox{dist}(D_1, D_2).
\end{equation}
The gradient $\nabla {\bf u}$ of the solution ${\bf u}$ to \eqnref{elas_eqn_bdd} may become arbitrarily large as two inclusions get closer, namely, as $\epsilon \to 0$. The main purpose of this paper is to characterize the blow-up of $\nabla {\bf u}$. Roughly speaking, we show that ${\bf u}$ can be decomposed as
\begin{equation}\label{BuBsBb}
{\bf u}={\bf s}+ {\bf b},
\end{equation}
where $\nabla {\bf s}$ has the main singularity of $\nabla {\bf u}$ while $\nabla {\bf b}$ is regular or less singular. So the singular behavior of $\nabla {\bf u}$ is characterized by that of $\nabla {\bf s}$. We will find ${\bf s}$ in an explicit form. The characterization of the gradient blow-up enables us to show that the optimal blow-up rate of $\nabla {\bf u}$ in terms of $\epsilon$ is $\epsilon^{-1/2}$. It is proved in \cite{BLL-ARMA-15} that $\epsilon^{-1/2}$ is an upper bound on the blow-up rate of $\nabla {\bf u}$ as mentioned before.
The problem in the presence of hard inclusions may be considered as the limiting problem of a high contrast elasticity problem when the shear modulus of the inclusions degenerates to infinity \cite{BLL-ARMA-15}. When the shear modulus is bounded away from zero and infinity, it is known that the gradient is bounded regardless of the distance between inclusions \cite{LN}.
We also consider the free space problem: For a given function ${\bf H}$ satisfying $\mathcal{L}_{\lambda,\mu} {\bf H}= 0 \mbox{ in } \mathbb{R}^2$, the displacement field ${\bf u}$ satisfies
\begin{equation}\label{elas_eqn_free}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf u}= 0 \quad &\mbox{in } D^e,\\[2mm]
\displaystyle
{\bf u}=\sum_{j=1}^3 d_{ij} \Psi_j \quad &\mbox{on } {\partial D_i}, \quad i=1,2 ,
\\[2mm]
\displaystyle {\bf u}({\bf x})-{\bf H}({\bf x}) = O(|{\bf x}|^{-1}) \quad& \mbox{as } |{\bf x}| \rightarrow \infty,
\end{array}
\right.
\end{equation}
where the constants $d_{ij}$ are determined by the condition \eqnref{int_zero}. We will obtain the decomposition of the form \eqnref{BuBsBb} and estimates of $\nabla {\bf u}$ for this problem as well.
\subsection{Layer potentials for 2D Lam\'{e} system}
The Kelvin matrix of fundamental solutions ${\bf \GG} = \left( \Gamma_{ij} \right)_{i, j = 1}^2$ to the Lam\'{e} operator $\mathcal{L}_{\lambda, \mu}$ is given by
\begin{equation}\label{Kelvin}
\Gamma_{ij}({\bf x}) =
\displaystyle \alpha_1 \delta_{ij} \ln{|{\bf x}|} - \alpha_2 \displaystyle \frac{x_i x_j}{|{\bf x}| ^2}
\end{equation}
where
\begin{equation}
\alpha_1 = \frac{1}{4\pi} \left( \frac{1}{\mu} + \frac{1}{\lambda + 2 \mu} \right) \quad\mbox{and}\quad
\alpha_2 = \frac{1}{4\pi} \left( \frac{1}{\mu} - \frac{1}{\lambda + 2 \mu} \right).
\end{equation}
In short, ${\bf \GG}$ can be expressed as
\begin{equation}\label{Kelvintensor}
{\bf \GG}({\bf x}) = \alpha_1 \ln|{\bf x}|\, {\bf I} - \alpha_2 {\bf x} \otimes \nabla(\ln |{\bf x}|),
\end{equation}
where ${\bf I}$ is the identity matrix.
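Indeed, since $\nabla(\ln|{\bf x}|) = {\bf x}/|{\bf x}|^2$, the tensor form \eqnref{Kelvintensor} reproduces \eqnref{Kelvin} entrywise:

```latex
\[
\big( {\bf x} \otimes \nabla(\ln|{\bf x}|) \big)_{ij}
= x_i\, \partial_j \ln|{\bf x}|
= \frac{x_i x_j}{|{\bf x}|^2},
\qquad
\Gamma_{ij}({\bf x}) = \alpha_1 \delta_{ij} \ln|{\bf x}| - \alpha_2 \frac{x_i x_j}{|{\bf x}|^2}.
\]
```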
For a given bounded domain $D$ with $C^2$ boundary, the single and double layer potentials on $\partial D$ associated with the pair of Lam\'{e} parameters $(\lambda,\mu)$ are defined by
\begin{align}
& \mathcal{S}_{\partial D} [\mbox{\boldmath $\Gvf$}] ({\bf x}) := \int_{\partial D} {\bf \GG} ({\bf x}-{\bf y}) \mbox{\boldmath $\Gvf$}({\bf y}) d \sigma({\bf y}), \quad {\bf x} \in \mathbb{R}^2,\\
& \mathcal{D}_{\partial D} [\mbox{\boldmath $\Gvf$}] ({\bf x}) := \int_{\partial D} \partial_\nu {\bf \GG} ({\bf x}-{\bf y}) \mbox{\boldmath $\Gvf$}({\bf y}) d \sigma({\bf y}), \quad {\bf x} \in \mathbb{R}^2 \setminus \partial D,
\end{align}
where the conormal derivative $\partial_\nu {\bf \GG} ({\bf x}-{\bf y})$ is defined by
$$
\partial_\nu {\bf \GG} ({\bf x}-{\bf y}) {\bf b} = \partial_\nu ({\bf \GG} ({\bf x}-{\bf y}){\bf b})
$$
for any constant vector ${\bf b}$.
Let $H^{1/2}(\partial D)$ be the usual $L^2$-Sobolev space of order $1/2$ on $\partial D$ and $H^{-1/2}(\partial D)$ be its dual space. With functions $\Psi_j$ in \eqnref{Psidef} we define
\begin{equation}
H^{-1/2}_\Psi (\partial D) := \{ {\bf f}\in H^{-1/2}(\partial D)^2 : \int_{\partial D} {\bf f}\cdot \Psi_j =0, \,j=1,2,3 \}.
\end{equation}
The following propositions on representations of the solutions to \eqnref{elas_eqn_bdd} and \eqnref{elas_eqn_free} can be proved in a standard way (see, for example, \cite{AK-book-07}). We include brief proofs in the Appendix.
\begin{prop}\label{thm_u_layer_bdd_case}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_bdd} and let ${\bf f}:= \partial_\nu {\bf u}|_-$ on $\partial\Omega$. Define
\begin{equation}\label{eqn_def_H_Omega}
{\bf H}_{\Omega}({\bf x}) =-\mathcal{S}_{\partial\Omega}[{\bf f}]({\bf x})+ \mathcal{D}_{\partial\Omega}[{\bf g}]({\bf x}), \quad {\bf x} \in \Omega.
\end{equation}
Then there is a unique pair $(\mbox{\boldmath $\Gvf$}_1,\mbox{\boldmath $\Gvf$}_2)\in H^{-1/2}_\Psi (\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$ such that
\begin{equation}\label{rep_bdd}
{\bf u}({\bf x}) = {\bf H}_\Omega({\bf x}) + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]({\bf x}), \quad {\bf x} \in \Omega.
\end{equation}
In fact, $\mbox{\boldmath $\Gvf$}_j$ is given by $\mbox{\boldmath $\Gvf$}_j = \partial_\nu {\bf u}|_+$ on $\partial D_j$ for $j=1,2$.
\end{prop}
\begin{prop}\label{thm_u_layer}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free}. Then there is a unique pair $(\mbox{\boldmath $\Gvf$}_1,\mbox{\boldmath $\Gvf$}_2)\in H^{-1/2}_\Psi (\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$ such that
\begin{equation}\label{singlerep}
{\bf u}({\bf x}) = {\bf H}({\bf x}) + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]({\bf x}), \quad {\bf x} \in D^e.
\end{equation}
In fact, $\mbox{\boldmath $\Gvf$}_j$ is given by $\mbox{\boldmath $\Gvf$}_j = \partial_\nu {\bf u}|_+$ on $\partial D_j$ for $j=1,2$.
\end{prop}
Note that $\int_{\partial D_j} \mbox{\boldmath $\Gvf$}_j =0$, which holds because $\mbox{\boldmath $\Gvf$}_j$ belongs to $H^{-1/2}_\Psi (\partial D_j)$. So, we have $\mathcal{S}_{\partial D_j}[\mbox{\boldmath $\Gvf$}_j]({\bf x}) = O(|{\bf x}|^{-1})$ as $|{\bf x}| \to \infty$. Thus ${\bf u}$ given by \eqnref{singlerep} satisfies the last condition in \eqnref{elas_eqn_free}.
Note that since the domains $D_1$ and $D_2$ are assumed to have $C^{4}$ boundaries, the solutions to \eqnref{elas_eqn_bdd} and \eqnref{elas_eqn_free} are $C^{3,\alpha}$ in $\Omega \setminus (D_1 \cup D_2)$ including $\partial D_1 \cup \partial D_2$ for any $0<\alpha <1$.
We now prove an analogue of the addition formula for ${\bf \GG} ({\bf x}-{\bf y})$. Let $\{ {\bf e}_1, {\bf e}_2 \}$ be the standard basis for $\mathbb{R}^2$. For $n\in \mathbb{Z}$ let
\begin{equation}
P_n({\bf x}) = r^{|n|} e^{in \theta},
\end{equation}
where $(r, \theta)$ denotes the polar coordinates of ${\bf x}$. Let
\begin{align}
{\bf v}_n^{(i)}({\bf x}) &= \alpha_1 P_n({\bf x}) {\bf e}_i -\alpha_2 x_i \nabla P_n({\bf x}), \quad i=1,2, \\
{\bf w}_n({\bf x}) &= \alpha_2 \nabla P_n ({\bf x}).
\end{align}
Since $P_n$ is harmonic in $\mathbb{R}^2$, one can easily see that ${\bf w}_n$ is a solution to the Lam\'{e} system in $\mathbb{R}^2$. To show that ${\bf v}_n^{(i)}$ is a solution to the Lam\'{e} system in $\mathbb{R}^2$, we prove a more general fact:
\begin{lemma}\label{harLame}
If $h$ is a harmonic function, then a vector-valued function ${\bf v}$ of the form
\begin{equation}
{\bf v}({\bf x}) = \alpha_1 h({\bf x}){\bf e}_j-\alpha_2 x_j \nabla h({\bf x})
\end{equation}
for $j=1, 2$, is a solution of the Lam\'e system, namely, $\mathcal{L}_{\lambda, \mu} {\bf v}=0$.
\end{lemma}
\noindent {\sl Proof}. \
We only prove the case when $j=1$.
Let us write ${\bf v}=(v_1, v_2)^T$. Simple computations show that
$$
\Delta v_1 = -2\alpha_2 \partial_1^2 h,
$$
and
$$
\Delta v_2 = -2\alpha_2 \partial_1 \partial_2 h.
$$
We also have
$$
\nabla\cdot {\bf v} = \alpha_1 \partial_1 h -\alpha_2 (x_1 \Delta h + \partial_1 h)
=(\alpha_1-\alpha_2) \partial_1 h.
$$
Therefore we obtain
\begin{align*}
[\mu \Delta {\bf v} + (\lambda+\mu) \nabla(\nabla\cdot {\bf v})]\cdot{\bf e}_k &=
-2 \mu \alpha_2 \partial_1\partial_k h + (\lambda+\mu)(\alpha_1-\alpha_2) \partial_1\partial_k h
\\
&=
\Big(-\frac{1}{2\pi} \frac{\lambda+\mu}{\lambda+2\mu}+\frac{1}{2\pi} \frac{\lambda+\mu}{\lambda+2\mu}\Big)\partial_1 \partial_k h
=0
\end{align*}
for $k=1,2$. This completes the proof.
\qed
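The cancellation in the last step rests on the identity $2\mu\alpha_2 = (\lambda+\mu)(\alpha_1-\alpha_2)$, which follows directly from the definitions of $\alpha_1$ and $\alpha_2$:

```latex
\[
2\mu\alpha_2 = \frac{2\mu}{4\pi} \left( \frac{1}{\mu} - \frac{1}{\lambda+2\mu} \right)
= \frac{1}{2\pi}\, \frac{\lambda+\mu}{\lambda+2\mu},
\qquad
(\lambda+\mu)(\alpha_1-\alpha_2)
= \frac{\lambda+\mu}{4\pi} \cdot \frac{2}{\lambda+2\mu}
= \frac{1}{2\pi}\, \frac{\lambda+\mu}{\lambda+2\mu}.
\]
```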
We obtain the following proposition.
\begin{prop}\label{thm:funda_sol_series}
The fundamental solution ${\bf \GG}$ admits the following series expansion: for $|{\bf x}| > |{\bf y}|$ and for any constant vector ${\bf b}$ in $\mathbb{R}^2$
\begin{align}
{\bf \GG}({\bf x}-{\bf y}) {\bf b} &= -\sum_{n\neq 0}\frac{1}{2|n|}\frac{e^{-in\theta}}{r^{|n|}} \sum_{i=1}^2({\bf v}_n^{(i)}({\bf y}) \cdot {\bf b}){\bf e}_i
\nonumber \\
& \quad - \sum_{n\neq 0}\frac{1}{2|n|}\frac{{\bf x} e^{-in\theta }}{r^{|n|}} \big({\bf w}_n({\bf y})\cdot {\bf b}\big)
+\alpha_1 \ln|{\bf x}| {\bf b}, \label{addition}
\end{align}
where ${\bf x}=(r, \theta)$ in the polar coordinates.
Moreover, the series converges absolutely and uniformly in ${\bf x}$ and ${\bf y}$ provided that there are numbers $r_1$ and $r_2$ such that $|{\bf y}| \le r_1 < r_2 \le |{\bf x}|$.
\end{prop}
\noindent {\sl Proof}. \
By \eqnref{Kelvintensor}, we have
$$
{\bf \GG}({\bf x}-{\bf y}){\bf b} = \alpha_1 \ln|{\bf x}-{\bf y}|\,{\bf b} + \alpha_2 \big(\nabla_{\bf y}(\ln |{\bf x}-{\bf y}|) \cdot {\bf b}\big) ({\bf x}-{\bf y})
$$
for any constant vector ${\bf b}$. The addition formula for $\ln|{\bf x}-{\bf y}|$ reads
$$
\ln|{\bf x}-{\bf y}| = \ln|{\bf x}| - \sum_{n\neq 0} \frac{1}{2|n|}\frac{e^{-in\theta}}{r^{|n|}} P_n({\bf y}).
$$
By substituting this formula into the one above, we obtain \eqnref{addition}.
\qed
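For reference, writing ${\bf y}=(\rho,\phi)$ in polar coordinates and pairing the terms for $n$ and $-n$, the addition formula for $\ln|{\bf x}-{\bf y}|$ takes the classical real form, valid for $\rho < r$:

```latex
\[
\ln|{\bf x}-{\bf y}|
= \ln r - \sum_{n \ge 1} \frac{1}{2n\, r^{n}}
\left( e^{-in\theta} \rho^{n} e^{in\phi} + e^{in\theta} \rho^{n} e^{-in\phi} \right)
= \ln r - \sum_{n \ge 1} \frac{1}{n} \left( \frac{\rho}{r} \right)^{\! n} \cos n(\theta-\phi).
\]
```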
\subsection{The exterior problem and the variational principle}
In this subsection we consider the following exterior Dirichlet problem for the Lam\'e system:
\begin{equation}\label{eqn_ext_diri}
\left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} \mathbf{v}= 0 \quad &\mbox{ \rm in } D^e,\\[2mm]
\displaystyle \mathbf{v}= \mathbf{g} \quad &\mbox{ \rm on } \partial D^e=\partial D_1 \cup \partial D_2,
\end{array}
\right.
\end{equation}
for ${\bf g} \in H^{1/2}(\partial D^e)^2:= H^{1/2}(\partial D_1)^2 \times H^{1/2}(\partial D_2)^2$. We seek a solution in the function space $\mathcal{A}^*$ defined as follows: Let $\mathcal{A}$ be the collection of all ${\bf v} \in H^1_\text{loc}(D^e)$ for which there exists a $2 \times 2$ symmetric matrix $B$ such that
\begin{equation}\label{2002}
{\bf v}({\bf x})= \sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) B {\bf e}_j + O(|{\bf x}|^{-2}) \quad\mbox{as } |{\bf x}| \to \infty,
\end{equation}
where $\{{\bf e}_1, {\bf e}_2 \}$ is the standard basis of $\mathbb{R}^2$. We emphasize that ${\bf v}({\bf x})=O(|{\bf x}|^{-1})$ as $|{\bf x}| \to \infty$. We then define
\begin{equation}
\mathcal{A}^* := \left\{ {\bf u} = {\bf v} + \sum_{j=1}^3 b_j \Psi_j ~|~ {\bf v} \in \mathcal{A}, \ \ b_j: \text{constant} \right\}.
\end{equation}
A proof of the following theorem is given in the Appendix.
\begin{theorem}\label{thm_ext_diri}
For any ${\bf g} \in H^{1/2}(\partial D^e)^2$, \eqnref{eqn_ext_diri} admits a unique solution in $\mathcal{A}^*$.
\end{theorem}
This theorem in a different form is proved in \cite{Constanda} when $D^e$ is the complement of a simply connected domain. Here, $\partial D^e$ has two components, namely, $\partial D^e = \partial D_1 \cup \partial D_2$. Moreover, the proof in this paper is completely different from that of \cite{Constanda}. It is worth mentioning that the term $\sum_{j=1}^3 b_j \Psi_j$ plays the role of the solution corresponding to the component of ${\bf g}$ spanned by $\Psi_j$, $j=1,2,3$.
The condition \eqnref{2002} is somewhat unfamiliar. To motivate it we prove the following lemma. This lemma will be used in the proof of Theorem \ref{thm_ext_diri}.
\begin{lemma}\label{lem:Acal}
\begin{itemize}
\item[(i)] If $\mbox{\boldmath $\Gvf$}=(\mbox{\boldmath $\Gvf$}_1, \mbox{\boldmath $\Gvf$}_2) \in H^{-1/2}(\partial D_1)^2 \times H^{-1/2}(\partial D_2)^2$ and satisfies
\begin{equation}\label{vanishingcon}
\int_{\partial D_1} \mbox{\boldmath $\Gvf$}_1 \cdot \Psi_k + \int_{\partial D_2} \mbox{\boldmath $\Gvf$}_2 \cdot \Psi_k =0, \quad k=1,2,3,
\end{equation}
then ${\bf v}$, defined by
\begin{equation}\label{Bvtemp}
{\bf v}({\bf x})= \mathcal{S}_{\partial D_1} [\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2} [\mbox{\boldmath $\Gvf$}_2]({\bf x}), \quad {\bf x} \in D^e,
\end{equation}
belongs to $\mathcal{A}$.
\item[(ii)] If $\mbox{\boldmath $\Gy$}=(\mbox{\boldmath $\Gy$}_1, \mbox{\boldmath $\Gy$}_2)$ belongs to $H^{1/2}(\partial D_1)^2 \times H^{1/2}(\partial D_2)^2$, then ${\bf w}$, defined by
\begin{equation}
{\bf w}({\bf x})= \mathcal{D}_{\partial D_1} [\mbox{\boldmath $\Gy$}_1]({\bf x}) + \mathcal{D}_{\partial D_2} [\mbox{\boldmath $\Gy$}_2]({\bf x}), \quad {\bf x} \in D^e,
\end{equation}
belongs to $\mathcal{A}$.
\end{itemize}
\end{lemma}
\noindent {\sl Proof}. \
If ${\bf y} \in \partial D^e$ and $|{\bf x}| \to \infty$, then by the Taylor expansion we have
\begin{equation}\label{Taylor}
{\bf \GG}({\bf x}-{\bf y}) = {\bf \GG}({\bf x}) - \sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) y_j + O(|{\bf x}|^{-2}).
\end{equation}
So ${\bf v}$ defined by \eqnref{Bvtemp} takes the form
\begin{equation}\label{2000}
{\bf v}({\bf x})= {\bf \GG}({\bf x}) \int_{\partial D^e} \mbox{\boldmath $\Gvf$} - \sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) \int_{\partial D^e} y_j \mbox{\boldmath $\Gvf$} + O(|{\bf x}|^{-2}).
\end{equation}
Here and throughout this paper we use $\int_{\partial D^e} \mbox{\boldmath $\Gvf$}$ to denote $\int_{\partial D_1} \mbox{\boldmath $\Gvf$}_1 + \int_{\partial D_2} \mbox{\boldmath $\Gvf$}_2$ for ease of notation. The assumption \eqnref{vanishingcon} for $k=1,2$ implies that the first term on the right-hand side of \eqnref{2000} vanishes. Define the matrix $B:= (b_{ij})_{i,j=1,2}$ by
$$
\begin{bmatrix} b_{11} \\ b_{21} \end{bmatrix} := -\int_{\partial D^e} y_1 \mbox{\boldmath $\Gvf$} \quad\mbox{and}\quad
\begin{bmatrix} b_{12} \\ b_{22} \end{bmatrix} := -\int_{\partial D^e} y_2 \mbox{\boldmath $\Gvf$}.
$$
Then, we may rewrite \eqnref{2000} as
$$
{\bf v}({\bf x})= \sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) B {\bf e}_j + O(|{\bf x}|^{-2}) \quad\mbox{as } |{\bf x}| \to \infty.
$$
Note that the assumption \eqnref{vanishingcon} for $k=3$ implies $b_{12}=b_{21}$, namely, $B$ is symmetric.
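This last implication can be seen explicitly: since $\Psi_3({\bf y})=(-y_2,y_1)^T$, the condition \eqnref{vanishingcon} for $k=3$ reads

```latex
\[
0 = \int_{\partial D^e} \mbox{\boldmath $\Gvf$} \cdot \Psi_3
= \int_{\partial D^e} \Big( y_1\, (\mbox{\boldmath $\Gvf$} \cdot {\bf e}_2) - y_2\, (\mbox{\boldmath $\Gvf$} \cdot {\bf e}_1) \Big),
\]
```

so the two integrals defining $b_{21}$ and $b_{12}$ coincide, i.e., $b_{21}=b_{12}$.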
To prove (ii), let ${\bf u}_j$ be the solution to $\mathcal{L}_{\lambda, \mu} {\bf u}_j =0$ in $D_j$ and ${\bf u}_j= \mbox{\boldmath $\Gy$}_j$ on $\partial D_j$. Then $\partial_\nu {\bf u}_j \in H^{-1/2}_\Psi (\partial D_j)$ and Green's formula for the Lam\'e system shows that the following holds:
$$
\mathcal{D}_{\partial D_1} [\mbox{\boldmath $\Gy$}_1]({\bf x})= \mathcal{S}_{\partial D_1} [\partial_\nu {\bf u}_1]({\bf x}), \quad \mathcal{D}_{\partial D_2} [\mbox{\boldmath $\Gy$}_2]({\bf x})= \mathcal{S}_{\partial D_2} [\partial_\nu {\bf u}_2]({\bf x}), \quad {\bf x} \in D^e.
$$
So, we have
$$
{\bf w}({\bf x})= \mathcal{S}_{\partial D_1} [\partial_\nu {\bf u}_1]({\bf x}) + \mathcal{S}_{\partial D_2} [\partial_\nu {\bf u}_2]({\bf x}), \quad {\bf x} \in D^e.
$$
So, (ii) follows from (i).
\qed
The most important property of functions of the form $\sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) B {\bf e}_j$ lies in the following fact.
\begin{lemma}\label{Bfunction}
Let ${\bf v}({\bf x})= \sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) B {\bf e}_j$ for some symmetric matrix $B$. Then the following holds for any simple closed Lipschitz curve $C$ such that $0 \notin C$:
\begin{equation}\label{crux}
\int_{C} \partial_\nu {\bf v} \cdot \Psi_k=0, \quad k=1,2,3.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
Since the cases of $k=1,2$ are easier to prove, we only consider the case of $k=3$. Let $U$ be the bounded domain enclosed by $C$. If $0 \notin U$, then by Green's formula for the Lam\'e system, we have
$$
\int_{C} \partial_\nu {\bf v} \cdot \Psi_3 = \int_U \mathbb{C} \widehat{\nabla} {\bf v}: \widehat{\nabla} \Psi_3=0.
$$
Suppose that $0 \in U$. Then choose $B_r$, the disk of radius $r$ centered at $0$, so that $\overline{B_r} \subset U$. Then, we see that
$$
\int_{C} \partial_\nu {\bf v} \cdot \Psi_3 = \int_{\partial B_r} \partial_\nu {\bf v} \cdot \Psi_3.
$$
Straightforward but tedious computations show that on $\partial B_r$
$$
\partial_\nu {\bf v} \cdot \Psi_3 = \frac{1}{2\pi r} (b_{21}-b_{12})+ \frac{\lambda+\mu}{\lambda+2\mu}(b_{11} -\frac{b_{12}}{2}-\frac{b_{21}}{2} - b_{22})\frac{\sin 2\theta}{2\pi r},
$$
where $(r,\theta)$ is the polar coordinates.
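The two angular integrals involved are elementary: with arc-length element $r\,d\theta$ on $\partial B_r$,

```latex
\[
\int_0^{2\pi} \frac{1}{2\pi r}\, r\, d\theta = 1,
\qquad
\int_0^{2\pi} \frac{\sin 2\theta}{2\pi r}\, r\, d\theta = 0.
\]
```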
So we obtain
$$
\int_{\partial B_r} \partial_\nu {\bf v} \cdot \Psi_3 = b_{21}-b_{12}.
$$
Since $b_{12}=b_{21}$, \eqnref{crux} follows. \qed
The following lemma shows that Green's formula holds for ${\bf u}, {\bf v}\in \mathcal{A}^*$. It is worth mentioning that the $-$ sign appears on the right-hand side of \eqnref{231} below since the normal vector on $\partial D^e$ is directed outward.
\begin{lemma}\label{cor_betti}
If ${\bf u}, {\bf v}\in \mathcal{A}^*$ and $\mathcal{L}_{\lambda,\mu}{\bf u}=0$ in $D^e$, then
\begin{equation}\label{231}
\int_{D^e}\mathbb{C} \widehat{\nabla} {\bf u}: \widehat{\nabla} {\bf v} =-\int_{\partial D^e} \partial_\nu {\bf u}|_+ \cdot {\bf v},
\end{equation}
where the left-hand side is understood to be
\begin{equation}
\int_{D^e}\mathbb{C} \widehat{\nabla} {\bf u}: \widehat{\nabla} {\bf v} = \lim_{R \to \infty} \int_{B_R \setminus (D_1 \cup D_2)}\mathbb{C} \widehat{\nabla} {\bf u}: \widehat{\nabla} {\bf v}.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We have
$$
\int_{B_R \setminus (D_1 \cup D_2)}\mathbb{C} \widehat{\nabla} {\bf u}: \widehat{\nabla} {\bf v}
= -\int_{\partial D^e} \partial_\nu {\bf u}|_+ \cdot {\bf v} + \int_{\partial B_R} \partial_\nu {\bf u}|_+ \cdot {\bf v}.
$$
So, it suffices to prove that
$$
\lim_{R \to \infty} \int_{\partial B_R} \partial_\nu {\bf u}|_+ \cdot {\bf v} =0.
$$
Let ${\bf u}= {\bf u}_1+{\bf u}_2+{\bf u}_3$ where ${\bf u}_1$ is of the form $\sum_{j=1}^2 \partial_j {\bf \GG}({\bf x}) B {\bf e}_j$, ${\bf u}_2({\bf x})=O(|{\bf x}|^{-2})$, and ${\bf u}_3$ is of the form $\sum_{k=1}^3 a_k \Psi_k$. We also let ${\bf v}= {\bf v}_1+{\bf v}_2$ where ${\bf v}_1({\bf x})=O(|{\bf x}|^{-1})$ and ${\bf v}_2$ is of the form $\sum_{k=1}^3 b_k \Psi_k$. Since $\partial_\nu {\bf u}_3=0$ on $\partial B_R$ for any $R$, we have
$$
\int_{\partial B_R} \partial_\nu {\bf u}|_+ \cdot {\bf v} = \int_{\partial B_R} \partial_\nu ({\bf u}_1+{\bf u}_2)|_+ \cdot ({\bf v}_1+{\bf v}_2).
$$
We see
$$
\lim_{R \to \infty} \left[ \int_{\partial B_R} \partial_\nu ({\bf u}_1+{\bf u}_2)|_+ \cdot {\bf v}_1 + \int_{\partial B_R} \partial_\nu {\bf u}_2|_+ \cdot {\bf v}_2 \right]=0
$$
by considering the decay at $\infty$ of the functions involved. We also have from \eqnref{crux}
$$
\int_{\partial B_R} \partial_\nu {\bf u}_1 |_+ \cdot {\bf v}_2 =0.
$$
This completes the proof. \qed
The following variational principle for the exterior Dirichlet problem plays a crucial role in what follows.
\begin{lemma}\label{lem_var_principle}
Define
\begin{equation}\label{eqn_def_Ecal}
\mathcal{E}_{D^e}[{\bf w}]:= \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf w} :\widehat{\nabla}{\bf w}.
\end{equation}
Let ${\bf u}$ be the solution in $\mathcal{A}^*$ to \eqnref{eqn_ext_diri} with ${\bf g}\in H^{1/2}(\partial D^e)^2$. Then the following variational principle holds:
\begin{equation}\label{variation}
\mathcal{E}_{D^e}[{\bf u}] =\min_{{\bf w}\in W_{\bf g}} \mathcal{E}_{D^e}[{\bf w}],
\end{equation}
where
$$
W_{{\bf g}} = \big\{ {\bf w}\in \mathcal{A}^* : {\bf w}|_{\partial D^e} = {\bf g}\big\}.
$$
\end{lemma}
\noindent {\sl Proof}. \
Let ${\bf w} \in W_{{\bf g}}$. By Lemma \ref{cor_betti}, we have
\begin{align*}
\int_{D^e}\mathbb{C} \widehat{\nabla}{\bf u}:\widehat{\nabla}{\bf u} = -\int_{\partial D^e} \partial_\nu {\bf u} |_+ \cdot {\bf g} = \int_{D^e}\mathbb{C} \widehat{\nabla}{\bf u}:\widehat{\nabla}{\bf w}.
\end{align*}
By the Cauchy-Schwarz inequality, we have
\begin{align*}
\int_{D^e} \mathbb{C}\widehat{\nabla}{\bf u}:\widehat{\nabla}{\bf u}
= \int_{D^e} \mathbb{C}\widehat{\nabla}{\bf u}:\widehat{\nabla}{\bf w}
\le \frac{1}{2}(\int_{D^e} \mathbb{C}\widehat{\nabla}{\bf u}:\widehat{\nabla}{\bf u}
+ \int_{D^e} \mathbb{C}\widehat{\nabla}{\bf w}:\widehat{\nabla}{\bf w}).
\end{align*}
Thus \eqnref{variation} holds. \qed
\subsection{An estimate for the free space problem}\label{subsec:nabla_k_u_H_bdd}
The purpose of this subsection is to prove the following proposition which will be used in section \ref{sec:free}.
\begin{prop}\label{freeest}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free} for a given ${\bf H}$. Then for any disk $B$ centered at $0$ containing $\overline{D_1 \cup D_2}$ and for $k=0,1,2, \ldots$, there is a constant $C_k$ independent of $\epsilon$ (and ${\bf H}$) such that
\begin{equation}\label{eqn_nablak_u_H_estim}
\| \nabla^k ({\bf u}-{\bf H})\|_{L^\infty(\mathbb{R}^2\setminus B)}\le C_k \|{\bf H} \|_{H^{1}(B)}.
\end{equation}
\end{prop}
The main emphasis of \eqnref{eqn_nablak_u_H_estim} is that the estimate holds independently of $\epsilon$, the distance between $D_1$ and $D_2$. It shows that even if ${\bf u}$ depends on $\epsilon$, the dependence is negligible far away from the inclusions.
To prove Proposition \ref{freeest}, we begin with the following lemma.
\begin{lemma}\label{lem:Ecal_u_H_estim}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free}. There is a constant $C$ independent of $\epsilon$ and ${\bf H}$ such that
\begin{equation}
\mathcal{E}_{D^e}[{\bf u}-{\bf H}] \le C \|{\bf H}\|^2_{H^{1}(B)},
\end{equation}
where $\mathcal{E}_{D^e}$ is defined in \eqnref{eqn_def_Ecal} and $B$ is a disk centered at $0$ containing $\overline{D_1 \cup D_2}$.
\end{lemma}
\noindent {\sl Proof}. \
We first observe from \eqnref{int_zero} and the second condition in \eqnref{elas_eqn_free} that
$$
\int_{\partial D_i} \partial_\nu {\bf u} |_+ \cdot {\bf u}=0, \quad i=1,2.
$$
Since $\mathcal{L}_{\lambda,\mu} {\bf H}=0$ in $\mathbb{R}^2$, we also have
$$
\int_{\partial D_i} \partial_\nu {\bf H} |_+ \cdot {\bf u}=0, \quad i=1,2.
$$
So we have
\begin{align*}
\mathcal{E}_{D^e}[{\bf u}-{\bf H}] &= \int_{D^e}\mathbb{C} \widehat{\nabla}({\bf u}-{\bf H}):\widehat{\nabla}({\bf u}-{\bf H}) \\
&= -\int_{\partial D^e} \partial_\nu ({\bf u}-{\bf H}) |_+ \cdot ({\bf u}-{\bf H})
=\int_{\partial D^e} \partial_\nu ({\bf u}-{\bf H}) |_+ \cdot {\bf H}.
\end{align*}
Let $R$ be the radius of $B$ and let $r$ be such that $r < R$ and $\overline{D_1 \cup D_2} \subset B_{r}$. Let $\chi$ be a smooth radial function such that $\chi({\bf x})=1$ if $|{\bf x}| \le r$ and $\chi({\bf x})=0$ if $|{\bf x}| \ge R$. Let ${\bf w}:= -\chi {\bf H}$. Then we have
\begin{align*}
\int_{D^e}\mathbb{C} \widehat{\nabla}({\bf u}-{\bf H}):\widehat{\nabla}{\bf w}
= -\int_{\partial D^e} \partial_\nu ({\bf u}-{\bf H}) |_+ \cdot {\bf w}
=\int_{\partial D^e} \partial_\nu ({\bf u}-{\bf H}) |_+ \cdot {\bf H}.
\end{align*}
It then follows that
\begin{align*}
\mathcal{E}_{D^e}[{\bf u}-{\bf H}] = \int_{D^e} \mathbb{C}\widehat{\nabla}({\bf u}-{\bf H}):\widehat{\nabla}{\bf w} \leq \frac{1}{2} \left( \mathcal{E}_{D^e}[{\bf u}-{\bf H}] + \mathcal{E}_{D^e}[{\bf w}] \right).
\end{align*}
So we have
\begin{align*}
\mathcal{E}_{D^e}[{\bf u}-{\bf H}] &\leq
\mathcal{E}_{D^e}[{\bf w}] \le C \| {\bf H}\|_{H^1(B)}^2.
\end{align*}
The proof is completed.
\qed
\noindent{\sl Proof of Proposition \ref{freeest}}.
By Proposition \ref{thm_u_layer}, the solution ${\bf u}$ is represented as
\begin{equation}
{\bf u} = {\bf H} + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]+\mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]
\end{equation}
with $\mbox{\boldmath $\Gvf$}_j = \partial_\nu {\bf u}|_+$ on $\partial D_j$, $j=1,2$. Proposition \ref{thm:funda_sol_series} yields
\begin{equation}\label{u_H_far_expand}
({\bf u}-{\bf H})({\bf x})
= \sum_{n\neq 0}\frac{1}{2|n|}\frac{e^{-in\theta}}{r^{|n|}} \left( -M_n^{(1)} {\bf e}_1 - M_n^{(2)} {\bf e}_2 + M_n^{(3)}{\bf x} \right), \quad r=|{\bf x}| \geq R,
\end{equation}
where $R$ is the radius of $B$ and $M_n^{(i)}$ is given by
\begin{align}
M_n^{(i)} &= \int_{\partial D_1} {\bf v}_n^{(i)} \cdot \mbox{\boldmath $\Gvf$}_1 \, d\sigma +
\int_{\partial D_2} {\bf v}_n^{(i)} \cdot \mbox{\boldmath $\Gvf$}_2 \, d\sigma, \quad i=1,2,
\\
M_n^{(3)} &= \int_{\partial D_1} {\bf w}_n \cdot \mbox{\boldmath $\Gvf$}_1 \, d\sigma +
\int_{\partial D_2} {\bf w}_n \cdot \mbox{\boldmath $\Gvf$}_2 \, d\sigma.
\end{align}
Observe that the dependence of ${\bf u}-{\bf H}$ on $\epsilon$ is contained only in the coefficients $M_n^{(i)}$, $i=1,2,3$.
Let $r$ be such that $r < R$ and $B_{r}$ contains $\overline{D_1 \cup D_2}$. We now show that
there is a constant $C$ independent of $\epsilon$ and $n$ such that
\begin{equation}\label{M3_estim}
|M_n^{(i)}| \le C |n| r^{|n|+2} \|{\bf H} \|_{H^1(B_r)}
\end{equation}
for all $n \neq 0$ and $i=1,2,3$.
For simplicity, we consider only $i=1$. The other cases can be proved in exactly the same way.
Let ${\bf v}$ be the solution to \eqnref{elas_eqn_free} when ${\bf H}={\bf v}^{(1)}_n$. Since $\mbox{\boldmath $\Gvf$}_j = \partial_\nu {\bf u}|_+$ on $\partial D_j$ and ${\bf v}|_{\partial D_j} \in \Psi$, we have using \eqnref{int_zero} that
\begin{align*}
M_n^{(1)} &= \int_{\partial D^e} {\bf v}_n^{(1)} \cdot \partial_\nu {\bf u}|_+ \\
&= -\int_{\partial D^e} ({\bf v}-{\bf v}_n^{(1)}) \cdot \partial_\nu {\bf u}|_+ \\
&= -\int_{\partial D^e} ({\bf v}-{\bf v}_n^{(1)}) \cdot \partial_\nu ({\bf u}-{\bf H})|_+
+\int_{\partial D^e} {\bf v}_n^{(1)} \cdot \partial_\nu {\bf H}|_+ \\
&= \int_{D^e} \mathbb{C} \widehat{\nabla} ({\bf v}-{\bf v}_n^{(1)}): \widehat{\nabla} ({\bf u}-{\bf H})
+ \int_{\partial D^e} {\bf v}_n^{(1)} \cdot \partial_\nu {\bf H}|_+ .
\end{align*}
So, by applying Lemma \ref{lem:Ecal_u_H_estim} to $\mathcal{E}_{D^e}[{\bf v}-{\bf v}_n^{(1)}]$ and $\mathcal{E}_{D^e}[{\bf u}-{\bf H}]$ on $B_{r}$, we obtain
\begin{align*}
|M_n^{(1)}| &\le \mathcal{E}_{D^e}\big[{\bf v}-{\bf v}_n^{(1)}\big]^{1/2}
\mathcal{E}_{D^e}\big[{\bf u}-{\bf H}\big]^{1/2}
+ \sum_{i=1}^2 \|{\bf v}_n^{(1)} \|_{H^{1/2}(\partial D_i)} \| \partial_\nu {\bf H} \|_{H^{-1/2}(\partial D_i)} \\
& \le C \|{\bf v}_n^{(1)} \|_{H^1(B_{r})} \|{\bf H} \|_{H^1(B_{r})} + \sum_{i=1}^2 \|{\bf v}_n^{(1)} \|_{H^{1}(D_i)} \| {\bf H} \|_{H^{1}(D_i)} \\
& \le C \|{\bf v}_n^{(1)} \|_{H^1(B_{r})} \|{\bf H} \|_{H^1(B_{r})}.
\end{align*}
Here and throughout this paper, the constant $C$ appearing in the estimates may differ at each occurrence. Since ${\bf v}_n^{(1)}$ is a homogeneous polynomial of order $n$, there is a constant $C$ independent of $n$ such that
$$
\|{\bf v}_n^{(1)} \|_{H^1(B_{r})} \le C |n| r^{|n|+2},
$$
assuming that $r >1$.
It follows from \eqnref{u_H_far_expand} and \eqnref{M3_estim} that
\begin{align*}
\|{\bf u}-{\bf H}\|_{L^\infty(\mathbb{R}^2\setminus B_R)} & \le C \sum_{n\neq 0} \frac{1}{2|n|} \frac{1}{R^{|n|}}(|n| r^{|n|+2})\|{\bf H} \|_{H^1(B_r)} \\
& \le C \sum_{n\neq 0} \left( \frac{r}{R} \right)^{|n|} \|{\bf H} \|_{H^1(B)} \le C \|{\bf H} \|_{H^1(B)}.
\end{align*}
This proves \eqnref{eqn_nablak_u_H_estim} for $k=0$.
If $k>0$, we differentiate \eqnref{u_H_far_expand} to obtain \eqnref{eqn_nablak_u_H_estim}. This completes the proof.
\qed
\subsection{Geometry of two inclusions}\label{subsection:geo_two_incl}
In this subsection, we describe the geometry of the two inclusions $D_1$ and $D_2$. See Figure \ref{fig:two_inclusions}.
Suppose that there are unique points ${\bf z}_1\in \partial D_1$ and ${\bf z}_2\in \partial D_2$ such that
\begin{equation}
|{\bf z}_1-{\bf z}_2| = \mbox{dist}(D_1, D_2).
\end{equation}
We assume that $D_j$ is strictly convex near ${\bf z}_j$, namely, there is a common neighborhood $U$ of ${\bf z}_1$ and ${\bf z}_2$ such that $D_j \cap U$ is strictly convex for $j=1,2$. Moreover, we assume that
$$
\mbox{dist}(D_1, D_2 \setminus U) \ge C \quad\mbox{and}\quad \mbox{dist}(D_2, D_1 \setminus U) \ge C
$$
for some positive constant $C$ independent of $\epsilon$. This assumption says that, away from the neighborhoods of ${\bf z}_1$ and ${\bf z}_2$, $D_1$ and $D_2$ are at a fixed distance from each other. We need one more assumption: the center of the circle osculating $D_j$ at ${\bf z}_j$ lies inside $D_j$ ($j=1,2$). This assumption is needed for defining the singular function ${\bf q}_3$ in \eqnref{q3def} later. We emphasize that strictly convex domains satisfy all these assumptions.
Let $\kappa_j$ be the curvature of $\partial D_j$ at ${\bf z}_j$. Let $B_j$ be the disk osculating to $D_j$ at ${\bf z}_j$ ($j=1,2$). Then the radius $r_j$ of $B_j$ is given by $r_j=1/\kappa_j$. Let $R_j$ be the reflection with respect to $\partial B_j$ and let ${\bf p}_1$ and ${\bf p}_2$ be the unique fixed points of
the combined reflections $R_1\circ R_2$ and $R_2\circ R_1$, respectively.
\begin{figure*}
\caption{Geometry of the two inclusions and osculating circles}
\end{figure*}
Let ${\bf n}$ be the unit vector in the direction of ${\bf p}_2-{\bf p}_1$ and let ${\bf t}$ be the unit vector perpendicular to ${\bf n}$ such that $({\bf n},{\bf t})$ is positively oriented.
We set $(x,y)\in\mathbb{R}^2$ to be the Cartesian coordinates such that
${\bf p}=({\bf p}_1+{\bf p}_2)/2$ is the origin and the $x$-axis is parallel to ${\bf n}$.
Then one can show (see \cite{AKL-MA-05}) that ${\bf p}_1$ and ${\bf p}_2$ are written as
\begin{equation}\label{pjdef}
{\bf p}_1=(-a,0)\quad\mbox{and}\quad{\bf p}_2=(a,0),
\end{equation}
where the constant $a$ is given by
\begin{equation}\label{a_def}
a :=\frac{\sqrt\epsilon\sqrt{ (2 r_1 + \epsilon) (2 r_2 + \epsilon) (2 r_1 + 2 r_2 +
\epsilon)}}{2 (r_1 + r_2 + \epsilon)},
\end{equation}
from which one can infer
\begin{equation}\label{def_alpha}
a= \sqrt{\frac{2 }{\kappa_1+\kappa_2}}\sqrt{\epsilon}+O(\epsilon^{3/2}).
\end{equation}
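For the reader's convenience, we sketch how \eqnref{def_alpha} follows from \eqnref{a_def}; recall that $r_j = 1/\kappa_j$:
\begin{align*}
a &= \frac{\sqrt\epsilon\sqrt{ (2 r_1 + \epsilon) (2 r_2 + \epsilon) (2 r_1 + 2 r_2 + \epsilon)}}{2 (r_1 + r_2 + \epsilon)}
= \frac{\sqrt\epsilon\,\sqrt{8 r_1 r_2 (r_1+r_2)}}{2(r_1+r_2)}\big(1+O(\epsilon)\big) \\
&= \sqrt{\frac{2 r_1 r_2}{r_1+r_2}}\,\sqrt\epsilon + O(\epsilon^{3/2}),
\end{align*}
and $2r_1 r_2/(r_1+r_2) = 2/(\kappa_1+\kappa_2)$.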
Then the center ${\bf c}_i$ of $B_i$ ($i=1,2$) is given by
\begin{equation}\label{c1c2}
{\bf c}_i = \Big((-1)^i \sqrt{r_i^2+a^2},0\Big)=\big((-1)^i r_i +O(\epsilon),0\big).
\end{equation}
So we have
\begin{equation}
{\bf z}_i = (-1)^{i+1}\Big(r_i-\sqrt{r_i^2+a^2},0\Big) = \left( (-1)^i\frac{ \kappa_i}{\kappa_1+\kappa_2} \epsilon + O(\epsilon^2), 0 \right).
\end{equation}
Let us consider the narrow region between $D_1$ and $D_2$. See Figure \ref{fig:narrow_gap}.
There exists $L>0$ (independent of $\epsilon$) and functions $f_1,f_2:[-L,L]\rightarrow \mathbb{R}$ such that
\begin{equation}\label{foneftwo}
{\bf z}_1=\big(-f_1(0),0\big), \quad {\bf z}_2=\big(f_2(0),0\big), \quad f_1'(0)= f_2'(0)=0,
\end{equation}
and $\partial D_1$ and $\partial D_2$ are graphs of $-f_1(y)$ and $f_2(y)$ for $|y|<L$, {\it i.e.},
\begin{equation}\label{Bx1Bx2}
{\bf x}_1(y):=(-f_1(y),y)\in \partial D_1 \quad\mbox{and}\quad {\bf x}_2(y):=(f_2(y),y)\in \partial D_2.
\end{equation}
Since $D_j$ is strictly convex near ${\bf z}_j$, the function $f_j$ is strictly convex for $j=1,2$.
Note that, for $i=1,2$ and $|y|<L$,
\begin{equation}\label{fitaylor}
f_i(y) = \frac{\kappa_i}{\kappa_1+\kappa_2}\epsilon+\frac{1}{2!}\kappa_i y^2 +\frac{1}{3!}\omega_i y^3 + O(\epsilon^2+y^4)
\end{equation}
for some constant $\omega_i$. Let us define for later use a constant $\tau$ as
\begin{equation}\label{def_tau}
\tau=|\kappa_1-\kappa_2|+|\omega_1|+|\omega_2|.
\end{equation}
We denote by $\Pi_l$ for $0<l\le L$ the narrow region between $D_1$ and $D_2$ defined as
\begin{equation}\label{narrowregion}
\Pi_l = \{ (x,y) \in\mathbb{R}^2 : -f_1(y)<x<f_2(y), \ |y|<l\}.
\end{equation}
\begin{figure*}
\caption{Geometry of the narrow gap region $\Pi_L$}
\end{figure*}
\section{Singular functions and their properties}\label{sec:singular}
Let $\Psi_j$, $j=1,2,3$, be the rigid motions defined in \eqnref{Psidef} and let ${\bf h}_j$ be the solution to the following problem:
\begin{equation}\label{hj_def}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf h}_j= 0 \quad &\mbox{ in } D^e,\\[2mm]
\displaystyle {\bf h}_j= -\frac{1}{2}\Psi_j({\bf x}) \quad &\mbox{ on } \partial D_1,\\[2mm]
\displaystyle {\bf h}_j= \frac{1}{2}\Psi_j({\bf x})\quad &\mbox{ on } \partial D_2.
\end{array}
\right.
\end{equation}
It turns out that ${\bf h}_j$ ($j=1,2,3$) captures the singular behavior of the solution ${\bf u}$ to \eqnref{elas_eqn_free}. In fact, ${\bf u}$ can be decomposed in the following form:
\begin{equation}
{\bf u} = \sum_{j=1}^3 c_j {\bf h}_j + {\bf b}
\end{equation}
for some constants $c_j$, where $\nabla {\bf b}$ is bounded in a bounded domain containing the narrow region $\Pi_L$ between $D_1$ and $D_2$. In other words, the blow-up behavior of $\nabla {\bf u}$ is completely characterized by that of $\sum_{j=1}^3 c_j \nabla {\bf h}_j$.
We emphasize that $|{\bf h}_j |_{\partial D_1} - {\bf h}_j |_{\partial D_2}|=1$ for $j=1,2$. So one expects that $|\nabla {\bf h}_j| \approx \epsilon^{-1}$ in the narrow region between $D_1$ and $D_2$. The function ${\bf h}_3$ has a weaker singularity since $|{\bf h}_3 |_{\partial D_1} - {\bf h}_3 |_{\partial D_2}|=|{\bf x}|$.
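This expectation can be seen heuristically from the mean value theorem (a sketch, not used in the sequel): the segment joining ${\bf z}_1$ and ${\bf z}_2$ has length $\epsilon$ and lies in $\overline{D^e}$, so for $j=1,2$,
$$
1 = \big| {\bf h}_j({\bf z}_2) - {\bf h}_j({\bf z}_1) \big| \le \epsilon \sup_{{\bf x} \in [{\bf z}_1,{\bf z}_2]} |\nabla {\bf h}_j({\bf x})|,
$$
and hence $|\nabla {\bf h}_j|$ must be at least of order $\epsilon^{-1}$ somewhere in the narrow region.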
The purpose of this section is to construct explicit singular functions, denoted by ${\bf q}_j$, which yield good approximations of ${\bf h}_j$ and to derive their important properties.
\subsection{Construction of singular functions}\label{subsec:singular}
We begin with a brief review of the singular function for the electro-static case.
Let ${\bf p}_j$ ($j=1,2$) be the fixed points of the combined reflections given in \eqnref{pjdef} and let
\begin{equation}\label{qBdef}
q_B({\bf x}) = \frac{1}{2\pi} (\ln|{\bf x}-{\bf p}_1|-\ln|{\bf x}-{\bf p}_2|).
\end{equation}
This function was introduced in \cite{Yun-SIAP-07} and used in an essential way for the characterization of the gradient blow-up in the context of electro-statics \cite{ACKLY-ARMA-13}. The most important property of $q_B({\bf x})$ is that it takes constant values on $\partial B_j$, the circles osculating to $\partial D_j$ at ${\bf z}_j$, $j=1,2$. This is because $\partial B_1$ and $\partial B_2$ are circles of Apollonius of ${\bf p}_1$ and ${\bf p}_2$.
Note that $\frac{1}{2\pi}\ln|{\bf x}|$ is a fundamental solution of the Laplacian and represents a point source of the electric field. So it is natural to expect that, in the linear elasticity case as well, point source functions may characterize the gradient blow-up. There are various types of point source functions in linear elasticity, often called {\it nuclei of strain}. We will use the following nuclei of strain as basic building blocks of the singular functions:
\begin{equation}\label{nuclei}
{\bf \GG}({\bf x}){\bf e}_1, \quad {\bf \GG}({\bf x}){\bf e}_2, \quad
\frac{{\bf x}}{|{\bf x}|^2}, \quad \frac{{\bf x}^\perp}{|{\bf x}|^2},
\end{equation}
where ${\bf x}^\perp=(-y,x)$ for ${\bf x}=(x,y)\in \mathbb{R}^2$.
These nuclei of strain have physical meanings: the function ${\bf \GG}({\bf x}){\bf e}_j$ represents the point force applied at the origin in the direction of ${\bf e}_j$, and the functions ${\bf x}/|{\bf x}|^2$ and ${\bf x}^\perp/|{\bf x}|^2$ represent the point source of the pressure and that of the moment located at the origin, respectively (see, for example, \cite{Love}).
We emphasize that the functions given in \eqnref{nuclei} are solutions to the Lam\'{e} system for ${\bf x} \neq 0$.
In fact, the first two are solutions since they are columns of the fundamental solution, and so are the last two because of the following relations:
\begin{equation}\label{doublet}
\begin{cases}
\displaystyle (\alpha_1-\alpha_2) \frac{{\bf x}}{|{\bf x}|^2} = \partial_1 ({\bf \GG}({\bf x}) {\bf e}_1) + \partial_2 ({\bf \GG}({\bf x}) {\bf e}_2),
\\[1em]
\displaystyle (\alpha_1+\alpha_2) \frac{{\bf x}^\perp}{|{\bf x}|^2} = \partial_1 ({\bf \GG}({\bf x}) {\bf e}_2) - \partial_2 ({\bf \GG}({\bf x}) {\bf e}_1),
\end{cases}
\end{equation}
where $\alpha_1$ and $\alpha_2$ are constants appearing in the definition \eqnref{Kelvin} of the fundamental solution. The identities in \eqnref{doublet} can be proved by straightforward computations.
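For the reader's convenience, we sketch the computation for the first identity, assuming the Kelvin form ${\bf \GG}({\bf x}) = \alpha_1 \ln|{\bf x}|\, {\bf I} - \alpha_2\, {\bf x}\otimes{\bf x}/|{\bf x}|^2$ of \eqnref{Kelvin}. Writing $r=|{\bf x}|$ and ${\bf x}=(x_1,x_2)$, one computes
$$
\partial_1 ({\bf \GG}({\bf x}) {\bf e}_1) =
\begin{bmatrix} \alpha_1 x_1/r^2 - 2\alpha_2 x_1 x_2^2/r^4 \\ -\alpha_2 x_2 (x_2^2-x_1^2)/r^4 \end{bmatrix},
\quad
\partial_2 ({\bf \GG}({\bf x}) {\bf e}_2) =
\begin{bmatrix} -\alpha_2 x_1 (x_1^2-x_2^2)/r^4 \\ \alpha_1 x_2/r^2 - 2\alpha_2 x_2 x_1^2/r^4 \end{bmatrix},
$$
and the sum equals $(\alpha_1-\alpha_2){\bf x}/r^2$ since, for instance, $x_1(2x_2^2+x_1^2-x_2^2)=x_1 r^2$ in the first component. The second identity is checked in the same way.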
The singular functions of this paper are constructed as linear combinations of functions given in \eqnref{nuclei}. To motivate the construction, we temporarily assume that two inclusions $D_1$ and $D_2$ are symmetric with respect to both $x$- and $y$-axes. If we write ${\bf h}_1=(h_{11}, h_{12})^T$, then thanks to the symmetry of the inclusions and boundary conditions in \eqnref{hj_def}, the following two functions are also solutions of \eqnref{hj_def} for $j=1$:
$$
\begin{bmatrix} h_{11}(x,-y) \\ -h_{12}(x,-y) \end{bmatrix}, \quad
\begin{bmatrix} -h_{11}(-x,y) \\ h_{12}(-x,y) \end{bmatrix}.
$$
By the uniqueness of the solution, we see that ${\bf h}_1$ has the following symmetric property with respect to $x$- and $y$-axes:
\begin{equation}\label{honesymm}
\begin{cases}
h_{11}(x,y)=h_{11}(x,-y)=-h_{11}(-x,y), \\
h_{12}(x,y)=-h_{12}(x,-y)=h_{12}(-x,y).
\end{cases}
\end{equation}
One can see that the following two functions have the same symmetry:
$$
{\bf \GG}({\bf x}-{\bf p}_1){\bf e}_1-{\bf \GG}({\bf x}-{\bf p}_2){\bf e}_1, \quad \frac{{\bf x}-{\bf p}_1}{|{\bf x}-{\bf p}_1|^2}+\frac{{\bf x}-{\bf p}_2}{|{\bf x}-{\bf p}_2|^2}.
$$
So the first singular function ${\bf q}_1$ is constructed as a linear combination of these functions.
On the other hand, one can see in a similar way that ${\bf h}_2=(h_{21}, h_{22})^T$ has the following symmetric property:
\begin{equation}\label{htwosymm}
\begin{cases}
h_{21}(x,y)=-h_{21}(x,-y)=h_{21}(-x,y), \\
h_{22}(x,y)=h_{22}(x,-y)=-h_{22}(-x,y),
\end{cases}
\end{equation}
and the following two functions have the same symmetry:
$$
{\bf \GG}({\bf x}-{\bf p}_1){\bf e}_2-{\bf \GG}({\bf x}-{\bf p}_2){\bf e}_2, \quad \frac{({\bf x}-{\bf p}_1)^{\perp}}{|{\bf x}-{\bf p}_1|^2}+\frac{({\bf x}-{\bf p}_2)^{\perp}}{|{\bf x}-{\bf p}_2|^2}.
$$
So ${\bf q}_2$ is constructed as a linear combination of these functions.
The singular functions of this paper are defined by
\begin{equation}\label{Bqone}
{\bf q}_1({\bf x}) := {\bf \GG}({\bf x}-{\bf p}_1){\bf e}_1-{\bf \GG}({\bf x}-{\bf p}_2) {\bf e}_1 + {\alpha_2 a} \left( \frac{{\bf x}-{\bf p}_1}{|{\bf x}-{\bf p}_1|^2}+\frac{{\bf x}-{\bf p}_2}{|{\bf x}-{\bf p}_2|^2}\right),
\end{equation}
and
\begin{equation}\label{Bqtwo}
{\bf q}_2({\bf x}) := {\bf \GG}({\bf x}-{\bf p}_1){\bf e}_2 -{\bf \GG}({\bf x}-{\bf p}_2) {\bf e}_2 - {\alpha_2 a} \left( \frac{({\bf x}-{\bf p}_1)^{\perp}}{|{\bf x}-{\bf p}_1|^2}+\frac{({\bf x}-{\bf p}_2)^{\perp}}{|{\bf x}-{\bf p}_2|^2}\right),
\end{equation}
where $a$ is the number appearing in \eqnref{pjdef}. We emphasize that $a$ depends on $\epsilon$. In fact, we repeatedly use the fact that $a \approx \sqrt{\epsilon}$.
The functions ${\bf q}_j$ satisfy $\mathcal{L}_{\lambda, \mu} {\bf q}_j=0$ in $\mathbb{R}^2 \setminus \{ {\bf p}_1, {\bf p}_2 \}$, and
\begin{equation}
{\bf q}_j({\bf x})= O(|{\bf x}|^{-1}) \quad\mbox{as } |{\bf x}| \to \infty,
\end{equation}
as one can easily see. We emphasize that the symmetry of $D_1\cup D_2$ is not assumed here.
It will be proved later in Proposition \ref{prop_Brj} that
\begin{equation}\label{BhjBqj}
{\bf h}_j \approx \frac{m_j}{\sqrt\epsilon} {\bf q}_j
\end{equation}
where $m_1$ and $m_2$ are constants defined by
\begin{equation}\label{m2def}
m_1 := \big[(\alpha_1 - \alpha_2)\sqrt{2(\kappa_1+\kappa_2)}\,\big]^{-1}, \quad
m_2 := \big[(\alpha_1 + \alpha_2)\sqrt{2(\kappa_1+\kappa_2)}\,\big]^{-1}.
\end{equation}
So blow-up of $\nabla {\bf h}_j$ is captured by an explicit function $\frac{m_j}{\sqrt\epsilon} \nabla {\bf q}_j$. This is a crucial fact for investigating blow-up of $\nabla {\bf u}$ in this paper.
We now construct the third singular function ${\bf q}_3$ which approximates ${\bf h}_3$.
For that we introduce
${\bf \GG}^\perp = \big( \Gamma_{ij}^\perp \big)_{i, j = 1}^2$ which is defined by
\begin{equation}\label{Kelvin_modified}
{\bf \GG}^\perp({\bf x}) =
\displaystyle \alpha_1 \arg{({\bf x})} \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}
- \frac{\alpha_2}{|{\bf x}| ^2} \begin{bmatrix} -x_1 x_2 & -x_2^2 \\ x_1^2 & x_1 x_2 \end{bmatrix}.
\end{equation}
We emphasize that ${\bf \GG}^\perp$ is a multi-valued function since $\arg{({\bf x})}$ is. So ${\bf \GG}^\perp$ is defined in $\mathbb{R}^2$ except on a branch cut starting from the origin.
Note that
\begin{equation}\label{Gej}
{\bf \GG}^\perp({\bf x}){\bf e}_j =\alpha_1 \arg({\bf x}) {\bf e}_j - \alpha_2 x_j \nabla (\arg({\bf x})), \quad j=1,2.
\end{equation}
Since $\arg({\bf x})$ is a harmonic function, we infer from Lemma \ref{harLame} that ${\bf \GG}^\perp({\bf x}){\bf e}_j$ is a solution to the Lam\'{e} system (except on the branch cut).
We now define the singular function ${\bf q}_3$ by
\begin{align}
{\bf q}_3({\bf x}) &= m_3\left( {\bf \GG}^\perp({\bf x}-{\bf p}_1) - {\bf \GG}^\perp({\bf x}-{\bf c}_1) \right) {\bf e}_1 + m_3\left( {\bf \GG}^\perp({\bf x}-{\bf p}_2) - {\bf \GG}^\perp({\bf x}-{\bf c}_2) \right) {\bf e}_1 \nonumber \\
&\quad +m_3\alpha_2 a \left( \frac{({\bf x}-{\bf p}_1)^\perp}{|{\bf x}-{\bf p}_1|^2} - \frac{({\bf x}-{\bf p}_2)^\perp}{|{\bf x}-{\bf p}_2|^2} \right), \label{q3def}
\end{align}
where
\begin{equation}\label{m3def}
m_3:= \big[{(\alpha_1-\alpha_2)(\kappa_1+\kappa_2)}\big]^{-1},
\end{equation}
and ${\bf c}_1$ and ${\bf c}_2$ are the centers of the osculating disks $B_1$ and $B_2$, respectively. It is worth mentioning that ${\bf \GG}^\perp({\bf x}-{\bf p}_j) - {\bf \GG}^\perp({\bf x}-{\bf c}_j)$ is well-defined in $\mathbb{R}^2$ except on a branch cut connecting ${\bf p}_j$ and ${\bf c}_j$. So ${\bf q}_3$ is well-defined and is a solution of the Lam\'e system in $D^e$.
We will show in Lemma \ref{lem_q3_asymp} that ${\bf q}_3$ has the same local behavior as ${\bf h}_3$.
In subsections to follow we derive technical estimates of ${\bf q}_j$ and its derivatives which will be used in later sections.
\subsection{Estimates of the function $\zeta$}\label{sec:zeta}
We show in the next subsection that the singular function ${\bf q}_j$ ($j=1,2$) can be nicely represented using the function $q_B$ given in \eqnref{qBdef}. In fact, it is slightly more convenient to use the function $\zeta({\bf x})$ defined by
\begin{equation}
\zeta({\bf x}) = 2\pi q_B({\bf x}).
\end{equation}
The following lemma collects estimates for the function $\zeta$ to be used in the next subsection. Some of the estimates are essentially proved in \cite{ACKLY-ARMA-13}. However, in that paper the estimates are not written explicitly and their derivations are scattered among other proofs. So we include proofs.
\begin{lemma}
\begin{itemize}
\item[(i)] Let $\Pi_{L}$ be the narrow region defined in \eqnref{narrowregion}. It holds that
\begin{equation}\label{GzBxest}
|\zeta({\bf x})| \lesssim \sqrt{\epsilon}, \quad {\bf x}\in \Pi_{L},
\end{equation}
and
\begin{equation}\label{Gzgradest}
|\partial_1 \zeta({\bf x}) | \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2}, \quad
|\partial_2 \zeta({\bf x}) | \lesssim \frac{\sqrt{\epsilon}|y|}{\epsilon+y^2}, \quad {\bf x}=(x,y) \in \Pi_{L}.
\end{equation}
\item[(ii)] Let ${\bf x}_j(y)$ be the defining functions for $\partial D_j$ for $j=1,2$ as given in \eqnref{Bx1Bx2}. For $|y|<L$ and $j=1,2$, we have
\begin{align}
|\zeta({\bf x}_j(y))-\zeta|_{\partial B_j}| &\lesssim \sqrt{\epsilon} (|\omega_j||y|+|y|^2), \label{difference} \\
\Big| \frac{d}{dy} \zeta({\bf x}_j(y))\Big| &\lesssim \sqrt{\epsilon}, \label{eqn_bdry_deri1_1i} \\
\Big|\frac{d^2}{dy^2}\zeta({\bf x}_j(y))\Big| &\lesssim \frac{\sqrt\epsilon}{{\epsilon}+ y^2}. \label{eqn_bdry_deri2_1i}
\end{align}
\end{itemize}
\end{lemma}
\noindent {\sl Proof}. \
(i) Since ${\bf p}_1=(-a,0)$ and ${\bf p}_2=(a,0)$, we can rewrite $\zeta({\bf x})$ as
\begin{equation}\label{xi_cartesian}
\zeta({\bf x})=\frac{1}{2}\ln \frac{(x+a)^2+y^2}{(x-a)^2+y^2} = \frac{1}{2} \ln \left( 1+ \frac{4 a x}{(x-a)^2+y^2} \right).
\end{equation}
Since $a \approx \sqrt\epsilon$, $\epsilon+y^2\approx (x \pm a)^2+y^2$, and $|x|\lesssim \epsilon+y^2$ for $(x,y)\in\Pi_{L}$,
we obtain
$$
|\zeta({\bf x})| =\frac{1}{2}\ln(1+O(\sqrt\epsilon)) = O( \sqrt{\epsilon}) \quad\mbox{for } {\bf x}\in\Pi_{L},
$$
which yields \eqnref{GzBxest}.
Assume ${\bf x}=(x,y)\in \Pi_L$.
Since
$$
|x|\lesssim \epsilon+y^2 \quad\mbox{and}\quad \epsilon+y^2\lesssim (x\pm a)^2+y^2,
$$
we have from the first identity in \eqnref{xi_cartesian} that
\begin{align*}
|\partial_1 \zeta({\bf x})| &= \Big| \frac{x+a}{(x+a)^2+y^2} - \frac{x-a}{(x-a)^2+y^2}\Big|
\\
&=\Big|\frac{2 a (a^2-x^2+y^2) }{((x+a)^2+y^2) ((x-a)^2+y^2)} \Big|
\lesssim \frac{\sqrt\epsilon(\epsilon+y^2)}{(\epsilon+y^2)^2} \lesssim \frac{\sqrt\epsilon }{\epsilon+y^2},
\end{align*}
and
\begin{align*}
|\partial_2 \zeta({\bf x})|
&= |y| \Big| \frac{1}{(x+a)^2+y^2} - \frac{1}{(x-a)^2+y^2}\Big|
\\
&\lesssim |y| \Big|\frac{4 a x }{((x+a)^2+y^2) ((x-a)^2+y^2)} \Big|
\lesssim \frac{\sqrt{\epsilon} |y| |x|}{(\epsilon+y^2)^2} \lesssim\frac{\sqrt{\epsilon}|y|}{\epsilon+y^2}.
\end{align*}
So \eqnref{Gzgradest} is proved.
(ii) We now prove \eqnref{difference}. For simplicity, we assume $j=1$.
Let us write the boundary $\partial B_1$ of the osculating disk $B_1$ as $(-f_B(y),y)$ for $|y|<L$. Recall from \eqnref{fitaylor} that
\begin{equation}\label{f1fB_diff}
f_1(y) - f_B(y) = \frac{1}{3!} \omega_1 y^3 + O(y^4).
\end{equation}
From \eqnref{xi_cartesian}, we have
\begin{align}
|\zeta({\bf x}_1(y))-\zeta|_{\partial B_1} |&=\frac{1}{2}
\Big|
\ln \Big(1-\frac{4af_1(y)}{(f_1(y)+a)^2+y^2}\Big)
-
\ln \Big(1-\frac{4af_B(y)}{(f_B(y)+a)^2+y^2}\Big)
\Big|
\nonumber
\\
&=
\frac{1}{2}
\Big|
\ln \Big(1- 4a \frac{\eta_1(y)}{\eta_2(y)} \Big)\Big|,
\label{xi1y_xiB_diff}
\end{align}
where
\begin{align}
\eta_1(y) &= \frac{f_1(y)}{(f_1(y)+a)^2+y^2} - \frac{f_B(y)}{(f_B(y)+a)^2+y^2},
\\
\eta_2(y) &=1-\frac{4af_B(y)}{(f_B(y)+a)^2+y^2}.
\end{align}
Since $a\approx \sqrt{\epsilon}$, $f_B(y) \approx \epsilon+y^2$, and $(f_B(y)+a)^2+y^2 \approx \epsilon+y^2$,
we see that
\begin{equation}\label{G2_estim}
|\eta_2(y)|\approx 1 \quad \mbox{for }|y|<L.
\end{equation}
From \eqnref{f1fB_diff} and the facts that $f_1 \approx \epsilon+y^2$, $f_B \approx \epsilon+y^2$ and $a\approx \sqrt\epsilon$, we have, for $|y|<L$,
\begin{align}
|\eta_1(y)| &= \left| \frac{(f_1(y)-f_B(y))(y^2-f_1(y)f_B(y)+a^2)}{((f_1(y)+a)^2+y^2)((f_B(y)+a)^2+y^2)} \right|
\nonumber
\\
&\lesssim \frac{(|\omega_1| |y|^3 +y^4) (y^2+(\epsilon+y^2)^2+\epsilon)}{(\epsilon+y^2)^2}
\nonumber
\\
&\lesssim |\omega_1| |y| + y^2.
\label{G1_estim}
\end{align}
Since $a\approx \sqrt\epsilon$, it follows from \eqnref{xi1y_xiB_diff}, \eqnref{G2_estim}, and \eqnref{G1_estim} that
$$
|\zeta({\bf x}_1(y))-\zeta|_{\partial B_1} | \lesssim \sqrt\epsilon (|\omega_1||y|+y^2).
$$
Therefore \eqnref{difference} is proved.
We now prove \eqnref{eqn_bdry_deri1_1i} and \eqnref{eqn_bdry_deri2_1i} for $j=1$. The cases for $j=2$ can be handled similarly.
In view of \eqnref{xi_cartesian}, we have
\begin{align}
\frac{d}{dy} \zeta({\bf x}_1(y))
&=\frac{d}{dy} \zeta(-f_1(y),y)\nonumber\\
&=\Big(\frac{-f_1(y)+ a}{(-f_1(y)+ a)^2+y^2}
-\frac{-f_1(y)- a}{(-f_1(y)- a)^2+y^2}\Big)(-f_1'(y))
\nonumber\\
&\quad\quad + \frac{y}{(-f_1(y)+ a)^2+y^2}
-\frac{y}{(-f_1(y)- a)^2+y^2}
\nonumber\\
&=2 a \frac{N(y)}{D_+(y) D_-(y)},
\label{xi1_bdry_deri_NDD}
\end{align}
where $N$ and $D_\pm$ are given by
\begin{align*}
N(y)&:=-( a^2-f_1(y)^2+y^2)f_1'(y)+2 f_1(y) y,
\\
D_\pm(y) &:= (-f_1(y)\pm a)^2+y^2.
\end{align*}
\end{align*}
It is easy to see that
\begin{equation}\label{eqn_Dy_estim}
D_\pm(y) \approx \epsilon+y^2.
\end{equation}
As consequences of \eqnref{def_alpha} and \eqnref{fitaylor}, we have
$$
a^2=\frac{2\epsilon}{\kappa_1+\kappa_2}+O(\epsilon^2), \quad f_1(y)=\frac{\kappa_1}{\kappa_1+\kappa_2}\epsilon+\frac{1}{2}\kappa_1 y^2+ O(\epsilon^2+y^3),
$$
and hence
\begin{align}
N(y) &= -\Big(\frac{2\epsilon}{\kappa_1+\kappa_2}+y^2\Big)\kappa_1 y
\nonumber
\\
&\qquad +2\Big(\frac{\kappa_1 }{\kappa_1+\kappa_2}\epsilon + \frac{1}{2}\kappa_1 y^2\Big)y +O(\epsilon y^3 + \epsilon^2 y + y^4)
\nonumber
\\
&= O(\epsilon y^2 + \epsilon^2 y + y^4).
\label{eqn_Ny_estim}
\end{align}
Then, from \eqnref{xi1_bdry_deri_NDD} and the fact that $ a \approx \sqrt{\epsilon}$, we have
\begin{align*}
\Big|\frac{d}{dy} \zeta({\bf x}_1(y))\Big| \lesssim a \frac{\epsilon y^2 + \epsilon^2 y+y^4 }{(\epsilon+y^2)^2} \lesssim \sqrt\epsilon.
\end{align*}
So, \eqnref{eqn_bdry_deri1_1i} is proved.
Now let us consider \eqnref{eqn_bdry_deri2_1i}. We have
\begin{align}
\frac{d^2}{dy^2} \zeta({\bf x}_1(y))
&=\frac{d^2}{dy^2} \zeta(-f_1(y),y)
=\frac{d}{dy}\Big( 2 a \frac{N(y)}{D_+(y) D_-(y)}\Big)
\nonumber\\
&=2 a\frac{N'(y)}{D_+(y) D_-(y)}-2 a\frac{N(y) (D_+)'(y)}{(D_+(y))^2 D_-(y)} -2 a
\frac{N(y) (D_-)'(y)}{ D_+(y) (D_-(y))^2}
\label{xi1_bdry_deri2_ND}.
\end{align}
We also have
\begin{align*}
N'(y) &= - a^2 f_1'' + 2f_1 (f_1')^2 + (f_1)^2 f_1''-y^2 f_1''+2 f_1,
\\
(D_\pm)'(y) &= 2(-f_1\pm a)(-f_1')+2y.
\end{align*}
Since $f_1(y)\approx \epsilon+y^2$, $f_1'(y)=O(y)$ and $f_1''(y)=O(1)$, we have
$$
|N'(y)| \lesssim \epsilon+y^2, \quad |(D_\pm)'(y)|\lesssim|y|.
$$
Then, from \eqnref{eqn_Dy_estim}-\eqnref{xi1_bdry_deri2_ND} and the fact that $a\approx \sqrt\epsilon$, we obtain
$$
\Big|\frac{d^2}{dy^2} \zeta({\bf x}_1(y)) \Big| \lesssim a \bigg( \frac{\epsilon+y^2}{(\epsilon+y^2)^2}+\frac{(\epsilon y^2+\epsilon^2 y+y^4)|y|}{(\epsilon+y^2)^3}\bigg) \lesssim \frac{\sqrt{\epsilon}}{\epsilon+y^2}.
$$
The proof is completed.
\qed
\subsection{Estimates of singular functions}\label{subsec:estim_singular}
The purpose of this subsection is to derive estimates of the singular functions ${\bf q}_j$ in the narrow region $\Pi_{L}$ and on $\partial D_1 \cup \partial D_2$, which will be used in the later parts of the paper.
We begin by showing that singular functions can be explicitly represented by the function $\zeta$ introduced in the previous subsection. Set
\begin{equation}\label{AGzBxdef}
A_\zeta({\bf x}):= \left[1-\frac{\sinh^2\zeta({\bf x})}{a^2} y^2 \right]^{1/2}, \quad {\bf x}=(x,y).
\end{equation}
If ${\bf x}=(x,y)\in \Pi_{L}$, then it holds by \eqnref{GzBxest} that $|\zeta({\bf x})| \lesssim \sqrt{\epsilon}$. Since $a\approx \sqrt{\epsilon}$ by \eqnref{def_alpha}, we have
$$
\frac{\sinh^2\zeta({\bf x})}{a^2} \lesssim 1.
$$
So there exists a constant $0<L_0<L$ (independent of $\epsilon$) such that
\begin{equation}\label{1sinh2Gza212}
1-\frac{\sinh^2\zeta({\bf x})}{a^2} y^2 \geq \frac{1}{2}, \quad {\bf x} =(x,y) \in \Pi_{L_0}.
\end{equation}
Note that
\begin{equation}\label{AGzBx}
A_\zeta({\bf x}) = 1 + O(y^2), \quad {\bf x} =(x,y) \in \Pi_{L_0}.
\end{equation}
Singular functions ${\bf q}_j$ can be represented in terms of $\zeta$ as follows
\begin{prop}\label{h_S1_cartesian}
Let ${\bf q}_i ({\bf x})= (q_{i1}({\bf x}), q_{i2}({\bf x}))^T$ for $i=1,2$. If ${\bf x}\in\Pi_{L_0}$, then $q_{ij}$ are given by
\begin{align}
\displaystyle q_{11}({\bf x}) &= \alpha_1 \zeta({\bf x}) - \alpha_2 A_\zeta({\bf x}) \sinh\zeta({\bf x}), \label{qoneone} \\
\displaystyle q_{12}({\bf x})= q_{21}({\bf x}) &= \alpha_2 a^{-1}y \sinh^2\zeta({\bf x}), \label{qonetwo} \\
\displaystyle q_{22}({\bf x}) &= \alpha_1 \zeta({\bf x}) + \alpha_2 A_\zeta({\bf x}) \sinh\zeta({\bf x}). \label{qtwotwo}
\end{align}
\end{prop}
\noindent {\sl Proof}. \
From the definition \eqnref{Bqone} of ${\bf q}_1$ and the first identity in \eqnref{xi_cartesian}, we have
\begin{align}
q_{11}({\bf x}) &= {\alpha_1} \zeta({\bf x})- {\alpha_2} \bigg[ \frac{(x+a)^2}{(x+a)^2+y^2}-\frac{(x-a)^2}{(x-a)^2+y^2}\bigg] \nonumber\\
&\quad + {\alpha_2 a} \bigg[ \frac{x+a}{(x+a)^2+y^2}+\frac{x-a}{(x-a)^2+y^2}\bigg] \nonumber\\
&={\alpha_1} \zeta({\bf x}) - {\alpha_2} \bigg[ \frac{x(x+a)}{(x+a)^2+y^2}-\frac{x(x-a)}{(x-a)^2+y^2}\bigg] \nonumber\\
&= {\alpha_1} \zeta({\bf x}) - {\alpha_2} \frac{2a x (a^2-x^2+y^2)}{((x-a)^2+y^2)((x+a)^2+y^2)}
\label{q11_xi_cartesian_rep}
\end{align}
for ${\bf x}\in \mathbb{R}^2\setminus\{{\bf p}_1,{\bf p}_2\}$.
Thanks to the first identity in \eqnref{xi_cartesian} again, we have
\begin{align}
\sinh\zeta({\bf x})&=\frac{1}{2}\left( \sqrt{\frac{(x+a)^2+y^2}{(x-a)^2+y^2}}- \sqrt{\frac{(x-a)^2+y^2}{(x+a)^2+y^2}}\right)
\nonumber
\\
&=\frac{2a x}{\sqrt{(x-a)^2+y^2}\sqrt{(x+a)^2+y^2}}
\label{sinh_1i_cartes}
\end{align}
for ${\bf x}\in \mathbb{R}^2\setminus\{{\bf p}_1,{\bf p}_2\}$. Then straightforward computations yield
\begin{equation}\label{A_1i_cartes}
A_\zeta({\bf x}) =\frac{a^2-x^2+y^2}{\sqrt{((x+a)^2+y^2)((x-a)^2+y^2)}}, \quad {\bf x}\in\Pi_{L_0}.
\end{equation}
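Indeed, squaring \eqnref{AGzBxdef} and using \eqnref{sinh_1i_cartes}, a routine computation gives
$$
\big((x+a)^2+y^2\big)\big((x-a)^2+y^2\big)-4x^2y^2 =(x^2+a^2+y^2)^2-4x^2(a^2+y^2)=(a^2-x^2+y^2)^2,
$$
so that
$$
A_\zeta({\bf x})^2 = 1-\frac{4x^2y^2}{((x+a)^2+y^2)((x-a)^2+y^2)} = \frac{(a^2-x^2+y^2)^2}{((x+a)^2+y^2)((x-a)^2+y^2)}.
$$
Taking the nonnegative square root, and noting that $a^2-x^2+y^2>0$ in $\Pi_{L_0}$ for small $\epsilon$ (since $|x|\lesssim \epsilon+y^2$ there), yields \eqnref{A_1i_cartes}.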
This together with \eqnref{q11_xi_cartesian_rep} yields \eqnref{qoneone}.
The identity \eqnref{qonetwo} can be proved similarly. In fact, one can see that
\begin{align}
q_{12}({\bf x}) &= -{\alpha_2} \bigg[ \frac{(x+a)y}{(x+a)^2+y^2}-\frac{(x-a)y}{(x-a)^2+y^2}\bigg] \nonumber\\
& \quad + {\alpha_2 a} \bigg[ \frac{y}{(x+a)^2+y^2}+\frac{y}{(x-a)^2+y^2}\bigg] \nonumber\\
&={\alpha_2} \frac{4ax^2y}{((x+a)^2+y^2) ((x-a)^2+y^2)},
\label{q12_xi_cartesian_rep}
\end{align}
and \eqnref{qonetwo} follows from \eqnref{sinh_1i_cartes}.
Similarly one can show that $q_{21}=q_{12}$ and \eqnref{qtwotwo} hold. We omit the proof.
\qed
Proposition \ref{h_S1_cartesian} already reveals an important property of the singular functions: they are almost constant near the points ${\bf z}_1$ and ${\bf z}_2$. This can be seen more clearly when the two osculating disks have the same radius.
\begin{lemma} \label{lem:Bq_on_circle}
Assume that $B_1$ and $B_2$ have the same radius $r_0$. Then
it holds for ${\bf x} \in \partial B_i$, $i=1,2$, that
\begin{align}
{\bf q}_1({\bf x}) &= \left(\frac{\sqrt\epsilon}{m_1} + t_1\right) \frac{(-1)^i}{2} \Psi_1+ \alpha_2\frac{a}{r_0^2}
\begin{bmatrix} x \\ y \end{bmatrix}, \label{qonesame}
\\
{\bf q}_2({\bf x}) &= \left(\frac{\sqrt\epsilon}{m_2}+t_2\right)\frac{(-1)^i}{2}\Psi_2 + \alpha_2\frac{a}{r_0^2}
\begin{bmatrix} y \\ x \end{bmatrix}, \label{qtwosame}
\end{align}
where $m_j$ are constants defined by \eqnref{m2def} and $t_j$ are constants satisfying
\begin{equation}\label{tj_estim}
|t_j| \le C (\alpha_1+\alpha_2)\epsilon^{3/2}
\end{equation}
for some constant $C$ independent of $(\alpha_1,\alpha_2)$, or equivalently, independent of $(\lambda, \mu)$, as well as $\epsilon$.
\end{lemma}
\noindent {\sl Proof}. \
If $r_1=r_2=r_0$, \eqnref{a_def} reads
\begin{equation}\label{asame}
a= \frac{\sqrt{\epsilon(4r_0+\epsilon)}}{2} .
\end{equation}
We see from \eqnref{xi_cartesian} that the constant values of $\zeta$ on $\partial B_i$ are as follows:
\begin{equation}\label{zeta_Bi_s}
\zeta|_{\partial B_i} = (-1)^i \sinh^{-1} (a/r_0), \quad i=1,2.
\end{equation}
Let $s=\sinh^{-1} (a/r_0)$. Note that $a=r_0\sinh s$.
Then it follows from \eqnref{asame} that
$$
r_0\cosh s = {\sqrt{r_0^2+a^2}} = {r_0+\epsilon/2}.
$$
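The second equality above is a direct consequence of \eqnref{asame}:
$$
r_0^2+a^2 = r_0^2+\frac{\epsilon(4r_0+\epsilon)}{4} = r_0^2+r_0\epsilon+\frac{\epsilon^2}{4} = \Big(r_0+\frac{\epsilon}{2}\Big)^2.
$$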
Since the center of $\partial B_i$ is $(-1)^i(r_0 + {\epsilon}/{2}, 0)$, we have
$$
\partial B_i = \big\{ (x,y)\in \mathbb{R}^2: \big(x - (-1)^i r_0 \cosh s\big)^2 +y^2 = r_0^2 \big\}.
$$
So, for $(x,y) \in \partial B_i$, we obtain
\begin{align}
(x\pm a)^2 + y^2
&=x^2\pm 2x r_0 \sinh s + r_0^2\sinh^2 s +y^2
\nonumber
\\
&=(x - (-1)^i r_0 \cosh s)^2 +y^2 -r_0^2 +(-1)^i 2x r_0 \cosh s \pm 2x r_0 \sinh s
\nonumber
\\
&= ((-1)^i\cosh s\pm \sinh s)2r_0x,
\label{xpma2py2}
\end{align}
and
\begin{align*}
a^2 -x^2 +y^2 &=
(x - (-1)^i r_0 \cosh s)^2+y^2-r_0^2 -2x^2 +(-1)^i 2xr_0 \cosh s
\\
&=((-1)^i r_0 \cosh s-x)2x =\big((-1)^i (r_0 + \epsilon/2)-x\big)2x.
\end{align*}
Then, for $(x,y) \in \partial B_i$, we have
\begin{align}
\frac{2a x}{\sqrt{(x-a)^2+y^2}\sqrt{(x+a)^2+y^2}}
&=(-1)^i \frac{a}{r_0},
\nonumber
\\
\frac{a^2-x^2+y^2}{\sqrt{((x+a)^2+y^2)((x-a)^2+y^2)}}
&= 1+\frac{\epsilon}{2r_0}-(-1)^i\frac{x}{r_0}.
\langlebel{twofractions_pBi}
\end{align}
Note that $\zeta|_{\partial B_i}=(-1)^is$.
So we obtain from \eqnref{q11_xi_cartesian_rep} and \eqnref{twofractions_pBi} that
$$
q_{11}({\bf x}) = (-1)^i\alpha_1 s -\alpha_2 (-1)^i (1+\frac{\epsilon}{2r_0})\frac{a}{r_0} + \alpha_2\frac{a}{r_0}\frac{x}{r_0}, \quad {\bf x} \in \partial B_i.
$$
Since $a=\sqrt{r_0 \epsilon} + O(\epsilon^{3/2})$, $s=\sqrt{\epsilon/r_0} + O(\epsilon^{3/2})$, and $m_1=\sqrt{r_0}/[2(\alpha_1-\alpha_2)]$, it follows that
\begin{align*}
q_{11}({\bf x}) &=(-1)^i \left( (\alpha_1 - \alpha_2)
\sqrt{\epsilon/r_0}
+ (\alpha_1+\alpha_2)O(\epsilon^{3/2}) \right) + \alpha_2\frac{a}{r_0^2}x \\
&=(-1)^i \Big( \frac{1}{2}\frac{\sqrt\epsilon}{m_1} + (\alpha_1+\alpha_2)O(\epsilon^{3/2}) \Big) + \alpha_2\frac{a}{r_0^2}x, \quad {\bf x} \in \partial B_i.
\end{align*}
We also obtain from \eqnref{q12_xi_cartesian_rep} and \eqnref{xpma2py2} that
$$
q_{12}({\bf x}) = \alpha_2 \frac{a}{r_0^2}y, \quad {\bf x} \in \partial B_1 \cup \partial B_2.
$$
This proves \eqnref{qonesame}. One can prove \eqnref{qtwosame} similarly.
\qed
We have the following lemma in $\Pi_{L_0}$.
\begin{lemma}\label{lem_hS_grad_estim}
We have, for ${\bf x} \in \Pi_{L_0}$,
\begin{equation}\label{poneest}
|\partial_1 q_{11}({\bf x})| + |\partial_1 q_{22}({\bf x})| \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2},
\end{equation}
and
\begin{equation}\label{otherest}
|\partial_2 q_{11} ({\bf x})| + |\partial_2 q_{22} ({\bf x})| + | \nabla q_{12}({\bf x}) | \lesssim \frac{\sqrt\epsilon|y|}{\epsilon+y^2}+ \sqrt\epsilon.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We only consider ${\bf q}_1$. Estimates for ${\bf q}_2$ can be obtained similarly.
First we consider $\partial_1 q_{11}({\bf x})$. By \eqnref{qoneone}, we have
\begin{align}
\partial_1 q_{11} ({\bf x}) &= \alpha_1 \partial_1 \zeta({\bf x}) - \alpha_2 \cosh(\zeta({\bf x})) A_\zeta({\bf x}) \partial_1 \zeta({\bf x}) \nonumber \\
&\quad + \alpha_2 \sinh^2(\zeta({\bf x}))
\cosh(\zeta({\bf x})) \frac{y^2}{a^2}
A_\zeta({\bf x})^{-1} \partial_1 \zeta({\bf x}) .
\label{eqn_dx_hS_1x}
\end{align}
Thanks to \eqnref{1sinh2Gza212}, we have
\begin{equation}\label{AGz_approx_1}
|A_\zeta({\bf x})|\approx 1.
\end{equation}
Then, using the fact that $a\approx\sqrt\epsilon$, we obtain
$$
|\partial_1 q_{11} ({\bf x})| \lesssim |\partial_1 \zeta({\bf x})| + |\zeta({\bf x})|^2 \frac{y^2}{\epsilon} |\partial_1 \zeta({\bf x})|.
$$
We then infer from \eqnref{GzBxest} and \eqnref{Gzgradest} that
$$
|\partial_1 q_{11}({\bf x})| \lesssim \frac{\sqrt{\epsilon}}{\epsilon+y^2}
+ \epsilon \frac{y^2}{\epsilon}\frac{\sqrt{\epsilon}}{\epsilon+y^2}
\lesssim \frac{\sqrt\epsilon}{\epsilon+y^2}.
$$
This proves \eqnref{poneest}.
To prove \eqnref{otherest} for ${\bf q}_1$, we compute using \eqnref{qoneone} and \eqnref{qonetwo}
\begin{align*}
\partial_2 q_{11} ({\bf x}) &= \alpha_1 \partial_2 \zeta({\bf x})
- \alpha_2 \cosh(\zeta({\bf x})) A_\zeta({\bf x}) \partial_2 \zeta({\bf x})
\\
&\quad + \alpha_2 \sinh(\zeta({\bf x})) A_\zeta({\bf x})^{-1}
\Big(\sinh \zeta({\bf x}) \partial_2 \zeta({\bf x}) \frac{y^2}{a^2}+\sinh^2\zeta({\bf x}) \frac{y}{a^2}\Big),
\end{align*}
and
\begin{align*}
\partial_1 q_{12}({\bf x}) &= 2\alpha_2 a y \sinh\zeta({\bf x}) \cosh\zeta({\bf x}) \partial_1 \zeta({\bf x}), \\
\partial_2 q_{12}({\bf x}) &= 2\alpha_2 a y \sinh\zeta({\bf x}) \cosh\zeta({\bf x}) \partial_2 \zeta({\bf x}) + \alpha_2 a \sinh^2\zeta({\bf x}).
\end{align*}
So, \eqnref{otherest} can be proved in the same way as above.
The proof is complete.
\qed
Let ${\bf h}_1=(h_{11},h_{12})^T$ be the solution to \eqnref{hj_def} for $j=1$. Then,
we have ${\bf h}_{1}({\bf x}_2(y))-{\bf h}_{1}({\bf x}_1(y)) = (1,0)^T$. Since $|{\bf x}_2(0)- {\bf x}_1(0)|=\epsilon$,
one can expect
\begin{equation}\label{h1_temp_estim}
\begin{cases}
\partial_1 h_{11}(0,0) = \epsilon^{-1} + O(1), \\
|\partial_2 h_{11} (0,0)| + |\partial_1 h_{12} (0,0)| + |\partial_2 h_{12} (0,0)| \lesssim 1.
\end{cases}
\end{equation}
One can expect a similar behavior for ${\bf h}_2$ as well.
We now show that $\frac{m_j}{\sqrt\epsilon}{\bf q}_j$ has exactly the same behavior as $\epsilon \to 0$.
\begin{lemma}\label{singular_q_origin}
It holds for small $\epsilon>0$ that
\begin{equation}\label{p1q11}
\begin{cases}
\partial_1 q_{11} (0,0) = \displaystyle \frac{1}{m_1\sqrt\epsilon} + O(\sqrt\epsilon), \\
|\partial_2 q_{11} (0,0)| + |\partial_1 q_{12} (0,0)| + |\partial_2 q_{12} (0,0)| \lesssim \sqrt\epsilon,
\end{cases}
\end{equation}
and
\begin{equation}\label{p1q22}
\begin{cases}
\partial_1 q_{22} (0,0) = \displaystyle \frac{1}{m_2\sqrt\epsilon} + O(\sqrt\epsilon), \\
|\partial_1 q_{21} (0,0)| + |\partial_2 q_{21} (0,0)| + |\partial_2 q_{22} (0,0)| \lesssim \sqrt\epsilon.
\end{cases}
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
Since $\partial_1 \zeta(0,0) = 2/a$ and $\zeta(0,0)=0$, it follows from \eqnref{eqn_dx_hS_1x} that
$$
\partial_1 q_{11} (0,0) = \frac{2(\alpha_1-\alpha_2)}{a}.
$$
Since $a = \sqrt{2\epsilon}/\sqrt{\kappa_1+\kappa_2}+O(\epsilon^{3/2})$, the first equality in \eqnref{p1q11} follows.
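Indeed, expanding $1/a$ gives
$$
\partial_1 q_{11}(0,0)=\frac{2(\alpha_1-\alpha_2)}{a} = 2(\alpha_1-\alpha_2)\sqrt{\frac{\kappa_1+\kappa_2}{2\epsilon}}\,\big(1+O(\epsilon)\big) = \frac{(\alpha_1-\alpha_2)\sqrt{2(\kappa_1+\kappa_2)}}{\sqrt{\epsilon}}+O(\sqrt\epsilon),
$$
which is $\frac{1}{m_1\sqrt\epsilon}+O(\sqrt\epsilon)$ by the definition \eqnref{m2def} of $m_1$.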
From \eqnref{otherest}, we have
$$
|\partial_2 q_{11} (0,0)|+| \partial_1 q_{12}(0,0) |+| \partial_2 q_{12}(0,0) | \lesssim \sqrt\epsilon.
$$
This proves \eqnref{p1q11}. \eqnref{p1q22} can be proved similarly.
\qed
\begin{lemma}\label{lem_qj_far_estim}
For $j=1,2$, we have
\begin{equation}\label{Bqjest}
\|{\bf q}_j\|_{L^\infty(D^e\setminus \Pi_{L_0})}+\| \nabla \mathbf{q}_{j} \|_{L^\infty(D^e\setminus \Pi_{L_0})} \lesssim \sqrt\epsilon.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We only prove \eqnref{Bqjest} for $j=1$. The same proof applies to the case when $j=2$.
Recall that
$$
{\bf q}_1({\bf x}) = {\bf \GG}({\bf x}-{\bf p}_1){\bf e}_1-{\bf \GG}({\bf x}-{\bf p}_2) {\bf e}_1 + {\alpha_2 a} \left( \frac{{\bf x}-{\bf p}_1}{|{\bf x}-{\bf p}_1|^2}+\frac{{\bf x}-{\bf p}_2}{|{\bf x}-{\bf p}_2|^2}\right).
$$
Note that if ${\bf x}\in D^e \setminus \Pi_{L_0}$, then $1 \lesssim |{\bf x}-{\bf p}|$ for all ${\bf p}$ lying on the line segment $\overline{{\bf p}_1{\bf p}_2}$.
Since $a \approx \sqrt{\epsilon}$, the second term on the right-hand side above and its derivative are bounded by a constant multiple of $\sqrt{\epsilon}$.
One can easily show that the first term also satisfies the same estimate. In fact, by the mean value theorem, we have
\begin{align*}
|{\bf \GG}({\bf x}-{\bf p}_1)-{\bf \GG}({\bf x}-{\bf p}_2)| \lesssim |\nabla {\bf \GG}({\bf x}-{\bf p}_*)| |{\bf p}_1-{\bf p}_2|
\end{align*}
for some ${\bf p}_*$ on $\overline{{\bf p}_1{\bf p}_2}$. We also have
\begin{align*}
|\nabla({\bf \GG}({\bf x}-{\bf p}_1)-{\bf \GG}({\bf x}-{\bf p}_2))| \lesssim |\nabla^2 {\bf \GG}({\bf x}-{\bf p}_{**})| |{\bf p}_1-{\bf p}_2|
\end{align*}
for some ${\bf p}_{**}$ on $\overline{{\bf p}_1{\bf p}_2}$. Since $|{\bf p}_1-{\bf p}_2|=2a\approx \sqrt\epsilon$, \eqnref{Bqjest} follows.
\qed
As a corollary, we have the following estimate for $\nabla{\bf q}_j$.
\begin{cor}\label{cor_hS_blowup_estim}
For $j=1,2$, we have
\begin{equation}\label{Bq12est}
\|\nabla {\bf q}_j\|_{L^\infty(D^e)} \approx \epsilon^{-1/2}.
\end{equation}
\end{cor}
\noindent {\sl Proof}. \
The upper estimate $\|\nabla {\bf q}_j\|_{L^\infty(D^e)} \lesssim \epsilon^{-1/2}$ is a consequence of Lemmas \ref{lem_hS_grad_estim} and \ref{lem_qj_far_estim}, while the lower estimate follows from Lemma \ref{singular_q_origin}.
\qed
We have the following lemma on $\partial D_1\cup \partial D_2$.
\begin{lemma}\label{lem_hS_bdry_estim}
Let ${\bf x}_k(y)$ be the defining functions for $\partial D_k$ for $k=1,2$ as given in \eqnref{Bx1Bx2}. For $|y|<L_0$, the following holds:
\begin{align}
q_{11} ({\bf x}_k(y)) &= (-1)^k {(\alpha_1-\alpha_2) \kappa_k a} + O \left( E \right), \label{q11asym}
\\
q_{12}({\bf x}_k(y))=q_{21}({\bf x}_k(y)) &= {\alpha_2 \kappa_k^2 ay} + O \left( |y| E \right), \label{q12asym} \\
q_{22} ({\bf x}_k(y)) &= (-1)^k {(\alpha_1+\alpha_2) \kappa_k a} + O \left( E \right), \label{q22asym}
\end{align}
where
\begin{equation}\label{Edef}
E:= \epsilon^{3/2} + \sqrt\epsilon y^2 + \tau \sqrt\epsilon |y|.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We see from \eqnref{xi_cartesian} that
$$
\zeta|_{\partial B_k} =(-1)^k\sinh^{-1} (\kappa_k a)= (-1)^k \kappa_k a + O(a^3).
$$
Since $a\approx \sqrt{\epsilon}$, we infer from \eqnref{difference} that
\begin{equation}\label{101}
\zeta({\bf x}_k(y))= (-1)^k\kappa_k a +O \left( E \right),
\end{equation}
and
\begin{equation}\label{102}
\sinh \zeta({\bf x}_k(y))= (-1)^k\kappa_k a +O \left( E \right).
\end{equation}
Combining \eqnref{AGzBx}, \eqnref{101} and \eqnref{102}, one can see that \eqnref{q11asym}, \eqnref{q12asym} and \eqnref{q22asym} follow from \eqnref{qoneone}, \eqnref{qonetwo}, and \eqnref{qtwotwo}, respectively.
\qed
Then, using \eqnref{def_alpha} and the definitions \eqnref{m2def} of $m_1$ and $m_2$, we immediately obtain the following corollary.
\begin{cor}\label{cor_hS_D1D2_diff_estim}
For $|y|<L_0$, we have
\begin{align}
q_{11}({\bf x}_2(y)) - q_{11}({\bf x}_1(y)) &= m_1^{-1} \sqrt\epsilon +O(E),
\\
q_{12}({\bf x}_2(y)) - q_{12}({\bf x}_1(y)) &= q_{21}({\bf x}_2(y)) - q_{21}({\bf x}_1(y))= O(\sqrt\epsilon\tau|y|), \\
q_{22}({\bf x}_2(y)) - q_{22}({\bf x}_1(y)) &= m_2^{-1}\sqrt\epsilon +O(E),
\end{align}
where $E$ is given by \eqnref{Edef}.
\end{cor}
We then obtain the following lemma for estimates of the derivatives of ${\bf q}_j$.
\begin{lemma} \label{lem_hS_bdry_deri_estim}
Let ${\bf x}_k(y)$ be the defining functions for $\partial D_k$ for $k=1,2$ as given in \eqnref{Bx1Bx2}. For $|y|<L_0$, the following holds:
\begin{align}
\Big| \frac{d}{dy} {\bf q}_{j}({\bf x}_k(y)) \Big| & \lesssim \sqrt\epsilon, \label{qderiest1}
\\
\Big|\frac{d^2}{dy^2} q_{11}({\bf x}_k(y)) \Big| + \Big|\frac{d^2}{dy^2} q_{22} ({\bf x}_k(y))\Big| & \lesssim \frac{\sqrt\epsilon}{{\epsilon}+ y^2}, \label{qderiest2} \\
\Big|\frac{d^2}{dy^2} q_{12} ({\bf x}_k(y))\Big| & \lesssim \sqrt\epsilon. \label{qderiest3}
\end{align}
\end{lemma}
\noindent {\sl Proof}. \
We only prove inequalities corresponding to ${\bf q}_1 ({\bf x}_1(y))$. Those for other cases, namely, ${\bf q}_1 ({\bf x}_2(y))$ and ${\bf q}_2 ({\bf x}_k(y))$, can be treated similarly.
For ease of notation, let us define
$\varphi(y)$ and $\Phi(y)$ by
$$
\varphi(y):=\zeta({\bf x}_1(y)), \quad
\Phi(y):= A_\zeta ({\bf x}_1(y)).
$$
We see from \eqnref{difference}-\eqnref{eqn_bdry_deri2_1i} that
\begin{equation}\label{Gvfest}
|\varphi(y)| \lesssim \sqrt\epsilon, \quad |\varphi'(y)| \lesssim\sqrt\epsilon, \quad |\varphi''(y)| \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2}.
\end{equation}
We also have
\begin{equation}\label{Phiest}
|\Phi(y)| \approx 1, \quad |\Phi'(y)| \lesssim |y|, \quad |\Phi''(y)| \lesssim 1.
\end{equation}
The first estimate in the above is an immediate consequence of \eqnref{AGz_approx_1}, and the last two can be proved using the definition \eqnref{AGzBxdef} of $A_\zeta({\bf x})$. In fact, straightforward computations yield
$$
\displaystyle\Phi'(y)=-\frac{1}{2a^2 \Phi} \left(y^2 \varphi' \sinh 2\varphi + 2y\sinh^2 \varphi\right),
$$
and
\begin{align*}
\Phi''(y)&=
\displaystyle-\frac{1}{2a^2 \Phi} \big( 4 y \varphi' \sinh 2\varphi + y^2 \varphi'' \sinh 2\varphi + 2y^2(\varphi')^2\cosh 2\varphi + 2\sinh^2 \varphi \big)
\\
&\qquad
+\frac{\Phi'}{2a^2 \Phi^2} \left(y^2 \varphi' \sinh 2\varphi + 2y\sinh^2 \varphi\right).
\end{align*}
Then, using \eqnref{Gvfest} and the fact that $|\Phi|\approx 1$, we obtain
\begin{align*}
|\Phi'(y)| &\lesssim \frac{1}{\epsilon}( y^2\epsilon + |y|{\epsilon})
\lesssim |y|,
\\[0.5em]
|\Phi''(y)| &\lesssim \frac{1}{\epsilon} \Big(|y|\epsilon + y^2 \frac{\sqrt\epsilon}{\epsilon+y^2}\sqrt\epsilon + y^2 \epsilon + \epsilon \Big) + \frac{|y|}{\epsilon} \big(y^2 \epsilon + |y| \epsilon \big)\lesssim 1.
\end{align*}
We have from \eqnref{qoneone} and \eqnref{qonetwo} that
\begin{align*}
\frac{d}{dy} q_{11} ({\bf x}_1(y))&= \alpha_1 \varphi' - \alpha_2 \left( \Phi \varphi'\cosh \varphi + \Phi'\sinh \varphi \right),
\\
\frac{d}{dy} q_{12} ({\bf x}_1(y))&= \alpha_2 a^{-1} (\sinh^2 \varphi + y \varphi' \sinh 2 \varphi),
\\
\frac{d^2}{dy^2} q_{11}({\bf x}_1(y))&= \alpha_1 \varphi'' -
\alpha_2\big( (\Phi' \varphi' + \Phi \varphi'' + \Phi' \varphi')\cosh\varphi+ (\Phi \varphi'^2 + \Phi'') \sinh\varphi \big),
\\
\frac{d^2}{dy^2} q_{12} ({\bf x}_1(y))
&= \alpha_2 a^{-1} (2\varphi'\sinh 2\varphi + y\varphi'' \sinh2\varphi + 2y\varphi'^2 \cosh 2\varphi ).
\end{align*}
Since $a \approx \sqrt{\epsilon}$, \eqnref{qderiest1}-\eqnref{qderiest3} now follow from \eqnref{Gvfest} and \eqnref{Phiest}.
\qed
We now estimate ${\bf q}_3$ whose behavior resembles that of the solution ${\bf h}_3=(h_{31},h_{32})^T$ to \eqnref{hj_def} for $j=3$.
Since ${\bf h}_3|_{\partial D_i} = \frac{(-1)^i}{2}(-y,x)^T$ for $i=1,2$, we see that
$$
{\bf h}_{3}({\bf x}_2(y))-{\bf h}_{3}({\bf x}_1(y)) = \left( -y, \frac{f_1(y) + f_2(y)}{2} \right)^T.
$$
Since $|{\bf x}_2(y)-{\bf x}_1(y)|=f_1(y)+f_2(y)$,
one can expect that the following holds for small $\epsilon>0$ and for $(x,y)$ near the origin:
\begin{equation}\label{h3_temp_estim}
\partial_1 h_{31} \approx \frac{-y}{f_1(y)+f_2(y)} + O(1) = -\frac{y}{\epsilon+\frac{1}{2}(\kappa_1+\kappa_2)y^2} + O(1)
\end{equation}
and
\begin{equation}
|\partial_2 h_{31}(x,y)|+|\partial_1 h_{32}(x,y)|+|\partial_2 h_{32}(x,y)| \lesssim 1.
\end{equation}
The following lemma shows that ${\bf q}_3$ has exactly the same local behavior.
\begin{lemma} \label{lem_q3_asymp}
For ${\bf x} =(x,y)\in \Pi_L$, we have
\begin{equation}
\partial_1 q_{31}({\bf x}) = -\frac{y}{\epsilon+\frac{1}{2}(\kappa_1+\kappa_2)y^2} +O(1),
\end{equation}
and
\begin{equation}
|\partial_2 q_{31}({\bf x})|+|\partial_1 q_{32}({\bf x})|+|\partial_2 q_{32}({\bf x})| \lesssim 1.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
Let $m_3$ be the number defined by \eqnref{m3def}.
For ease of computation, we decompose ${\bf q}_3$ as ${\bf q}_3=\widetilde{{\bf q}}_3+{\bf w}$ where ${\bf w}({\bf x}) = -m_3({\bf \GG}^\perp({\bf x}-{\bf c}_1)+{\bf \GG}^\perp({\bf x}-{\bf c}_2))$. It is clear that $|\nabla {\bf w}({\bf x})| \lesssim 1$ for ${\bf x}\in\Pi_L$.
Now we consider $\widetilde{{\bf q}}_3=(\tilde{q}_{31},\tilde{q}_{32})^T$, which is given by
$$
\widetilde{{\bf q}}_3=m_3\left({\bf \GG}^\perp({\bf x}-{\bf p}_1)+{\bf \GG}^\perp({\bf x}-{\bf p}_2)\right)+m_3 \alpha_2 a \left( \frac{({\bf x}-{\bf p}_1)^\perp}{|{\bf x}-{\bf p}_1|^2} - \frac{({\bf x}-{\bf p}_2)^\perp}{|{\bf x}-{\bf p}_2|^2} \right).
$$
From the definition \eqnref{Kelvin_modified} of ${\bf \GG}^\perp$, we have
\begin{align*}
\tilde{q}_{31}({\bf x}) &= m_3\alpha_1 \sum_{i=1}^2 \arg\big(x-(-1)^{i}a+iy\big)
\\
&\quad -m_3\alpha_2 \sum_{i=1}^2\frac{(x-(-1)^{i}a)(-y) }{(x-(-1)^i a)^2+y^2} + m_3\alpha_2 a \sum_{i=1}^2 \frac{(-1)^{i+1} (-y)}{(x-(-1)^ia)^2+y^2},
\\
\tilde{q}_{32}({\bf x}) &= -m_3\alpha_2 \sum_{i=1}^2 \frac{(x-(-1)^{i}a)^2 }{(x-(-1)^{i}a)^2+y^2} + m_3\alpha_2 a \sum_{i=1}^2 \frac{(-1)^{i+1}(x-(-1)^{i}a)}{(x-(-1)^{i}a)^2+y^2}.
\end{align*}
Straightforward computations yield
\begin{align*}
\partial_1 \tilde{q}_{31}({\bf x}) &= -(\kappa_1+\kappa_2)^{-1} f({\bf x})
- 2m_3\alpha_2 xy \big[x h_+({\bf x}) + a h_-({\bf x})\big], \\
\partial_2 \tilde{q}_{31}({\bf x}) &=
m_3(\alpha_1+\alpha_2)x g_+({\bf x})+m_3 \alpha_1 a g_-({\bf x})-2m_3\alpha_2 x y^2 h_+({\bf x}),
\\
\partial_1 \tilde{q}_{32}({\bf x}) &= -2m_3 \alpha_2 x g_+({\bf x})-m_3\alpha_2 a g_-({\bf x})
\\
&\qquad
+ 2m_3 \alpha_2 x \big[ (x^2+a^2) h_+({\bf x})+2ax h_-({\bf x})\big],
\\
\partial_2 \tilde{q}_{32}({\bf x}) &= 2m_3\alpha_2 xy \big[x h_+({\bf x}) + a h_-({\bf x})\big],
\end{align*}
where $f$, $g_\pm$ and $h_\pm$ are defined by
\begin{align*}
f({\bf x})&=\frac{y}{(x+a)^2+y^2}+\frac{y}{(x-a)^2+y^2},
\\
g_\pm({\bf x})&=\frac{1}{(x+a)^2 + y^2}\pm\frac{1}{(x-a)^2 + y^2},
\\
h_\pm({\bf x})&=\frac{1}{((x+a)^2 + y^2)^2}\pm\frac{1}{((x-a)^2 + y^2)^2}.
\end{align*}
Since $a\approx \sqrt\epsilon$, $|x|\lesssim \epsilon+y^2$ and $(x\pm a)^2+y^2 \approx \epsilon+y^2$, we see that
\begin{align*}
&|g_+({\bf x})| \lesssim \frac{1}{\epsilon+y^2}, \quad |h_+({\bf x})| \lesssim \frac{1}{(\epsilon+y^2)^2},
\\
&|g_-({\bf x})| = \left| \frac{4ax}{((x+a)^2+y^2)((x-a)^2+y^2)} \right| \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2},
\\
&|h_-({\bf x})| = \left| \frac{4ax((x+a)^2+(x-a)^2+2y^2)}{((x+a)^2+y^2)^2((x-a)^2+y^2)^2} \right| \lesssim \frac{\sqrt\epsilon}{(\epsilon+y^2)^2}.
\end{align*}
Therefore, we obtain
\begin{align*}
|&\partial_1 \tilde{q}_{31}({\bf x}) + (\kappa_1+\kappa_2)^{-1} f({\bf x})| \lesssim \frac{(\epsilon+y^2)^2 y}{(\epsilon+y^2)^2} + \frac{\epsilon(\epsilon+y^2)y}{(\epsilon+y^2)^2}\lesssim 1,
\\
|&\partial_2 \tilde{q}_{31}({\bf x})| \lesssim \frac{\epsilon+y^2}{\epsilon+y^2}+\sqrt\epsilon\frac{ \sqrt\epsilon}{\epsilon+y^2}
+ \frac{(\epsilon+y^2)y^2}{(\epsilon+y^2)^2} \lesssim 1,
\\
|&\partial_1 \tilde{q}_{32}({\bf x})| \lesssim \frac{\epsilon+y^2}{\epsilon+y^2} +\sqrt\epsilon\frac{ \sqrt\epsilon}{\epsilon+y^2}
+ \frac{(\epsilon+y^2)^3 +\epsilon (\epsilon+y^2)}{(\epsilon+y^2)^2}+\frac{\epsilon(\epsilon+y^2)}{(\epsilon+y^2)^2} \lesssim 1, \\
|&\partial_2 \tilde{q}_{32}({\bf x})| \lesssim \frac{(\epsilon+y^2)^2 y}{(\epsilon+y^2)^2} + \frac{\epsilon(\epsilon+y^2)y}{(\epsilon+y^2)^2}\lesssim 1.
\end{align*}
Now it remains to show that
\begin{equation}\label{f_asymp}
f({\bf x}) = \frac{(\kappa_1+\kappa_2) y}{\epsilon+\frac{1}{2}(\kappa_1+\kappa_2)y^2} + O(1).
\end{equation}
Since
\begin{align*}
\left| \frac{y}{(x\pm a)^2 + y^2}-\frac{ y}{a^2+y^2} \right|
&\lesssim \left| \frac{y(x^2\pm 2ax)}{((x\pm a)^2 + y^2)(a^2+y^2)} \right|
\\
&\lesssim \frac{y((\epsilon+y^2)^2+\sqrt\epsilon(\epsilon+y^2))}{(\epsilon+y^2)^2}\lesssim 1,
\end{align*}
we see that
$$
f({\bf x}) = \frac{2y}{a^2+y^2} + O(1).
$$
Since
$$
\frac{y}{a^2+y^2}=\frac{y}{{2\epsilon}/{(\kappa_1+\kappa_2)}+O(\epsilon^2)+y^2}
=\frac{\frac{1}{2}(\kappa_1+\kappa_2) y}{\epsilon+\frac{1}{2}(\kappa_1+\kappa_2)y^2}+O(1),
$$
the desired estimate \eqnref{f_asymp} follows. This completes the proof.
\qed
\begin{lemma}\label{cor_q3_near_origin}
The following holds:
\begin{equation}\label{q3est1}
|\nabla {\bf q}_3(0,0)| \lesssim 1,
\quad
\nabla {\bf q}_3(0,a) = \frac{\sqrt{\kappa_1+\kappa_2}}{\sqrt{2\epsilon}} {\bf e}_1\otimes{\bf e}_1 + O(1).
\end{equation}
Moreover, we have
\begin{equation}\label{Bq3est2}
\|\nabla {\bf q}_{3}\|_{L^\infty(D^e \setminus \Pi_L)} \lesssim 1,
\end{equation}
and
\begin{equation}\label{Bq3est}
\|\nabla {\bf q}_{3}\|_{L^\infty(D^e)} \approx \frac{ 1}{\sqrt{\epsilon}}.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
The estimates in \eqnref{q3est1} are consequences of Lemma \ref{lem_q3_asymp}. The estimate \eqnref{Bq3est} is a consequence of Lemma \ref{lem_q3_asymp} and \eqnref{Bq3est2}.
To prove \eqnref{Bq3est2}, recall that
\begin{align*}
{\bf q}_3({\bf x}) &= m_3\left( {\bf \GG}^\partialerp({\bf x}-{\bf p}_1) - {\bf \GG}^\partialerp({\bf x}-{\bf c}_1) \right) {\bf e}_1 + m_3\left( {\bf \GG}^\partialerp({\bf x}-{\bf p}_2) - {\bf \GG}^\partialerp({\bf x}-{\bf c}_2) \right) {\bf e}_1 \nonumber \\
&\quad -m_3\alpha_2 a \left( \frac{({\bf x}-{\bf p}_1)^\partialerp}{|{\bf x}-{\bf p}_1|^2} - \frac{({\bf x}-{\bf p}_2)^\partialerp}{|{\bf x}-{\bf p}_2|^2} \right).
\end{align*}
If ${\bf x}\in D^e \setminus \Pi_{L}$, then $1 \lesssim |{\bf x}-{\bf c}|$ for all ${\bf c}$ on the line segment $\overline{{\bf c}_1{\bf c}_2}$.
Note that ${\bf p}_1$ and ${\bf p}_2$ lie on $\overline{{\bf c}_1{\bf c}_2}$, so all the terms in parentheses above and their gradients are bounded, and \eqnref{Bq3est2} follows.
\qed
\subsection{Approximations by singular functions}\label{subsec:approx_energy_rjrj}
In this section we prove \eqnref{BhjBqj}. More precisely, we prove the following proposition.
\begin{prop}\label{prop_Brj}
For $j=1,2$, let ${\bf h}_j$ be the solution to \eqnref{hj_def} in $\mathcal{A}^*$ and $m_j$ be the constant defined in \eqnref{m2def}. Then it holds that
\begin{equation}\label{hj_def_decomp}
{\bf h}_j = \frac{m_j}{\sqrt\epsilon}{\bf q}_j + {\bf r}_j,
\end{equation}
where $\nabla {\bf r}_j$ satisfies
\begin{equation}\label{BrjestDe}
\int_{D^e} \mathbb{C} \widehat{\nabla}{\bf r}_j:\widehat{\nabla}{\bf r}_j \lesssim 1.
\end{equation}
\end{prop}
To prove Proposition \ref{prop_Brj} we apply the variational principle. We emphasize that this is possible only because the singular function ${\bf q}_j$ is the solution to the Lam\'e system, namely, $\mathcal{L}_{\lambda,\mu} {\bf q}_j= 0$ in $D^e$, and so is ${\bf r}_j$. Note that ${\bf r}_j \in \mathcal{A}^*$ and
$$
{\bf r}_j= (-1)^i\frac{1}{2} \Psi_j - \frac{m_j}{\sqrt\epsilon}{\bf q}_j \quad \mbox{ on } \partial D_i, \quad i=1,2.
$$
Let
\begin{equation}\label{Wjdef}
W_j = \left\{ {\bf v} \in \mathcal{A}^* ~|~ {\bf v}|_{\partial D_i} = (-1)^i\frac{1}{2}\Psi_j -\frac{m_j}{\sqrt\epsilon}{\bf q}_j \right\},
\end{equation}
and let $\mathcal{E}_{D^e}$ be the energy functional defined in \eqnref{eqn_def_Ecal}.
By the variational principle \eqnref{variation}, we have
\begin{equation}\label{rj_var_prin}
\mathcal{E}_{D^e}[{\bf r}_j] =\min_{{\bf v}\in W_j} \mathcal{E}_{D^e}[{\bf v}].
\end{equation}
We define the test function ${\bf r}_j^K$ as follows: for $(x,y)\in \Pi_{L_0}$ let
\begin{align}\label{eqn_def_rKj_PiL0}
\displaystyle{\bf r}^K_j(x,y) &:=
\frac{{\bf r}_j({\bf x}_2(y))-{\bf r}_j({\bf x}_1(y))}{f_1(y)+f_2(y)}[ x+f_1(y)] + {\bf r}_j({\bf x}_1(y)).
\end{align}
Note that
\begin{equation}\label{rKbdry}
{\bf r}^K_j = (-1)^i\frac{1}{2}\Psi_j -\frac{m_j}{\sqrt\epsilon}{\bf q}_j = {\bf r}_j \quad\mbox{on } \partial D_i \cap \partial \Pi_{L_0}, \ \ i=1,2,
\end{equation}
and ${\bf r}_j^K$ is a linear interpolation of ${\bf r}_j|_{\partial D^e}$ in the $x$-direction. So, in $\Pi_{L_0}$, ${\bf r}_j^K(x,y)$ is a linear function of $x$ for each fixed $y$.
Let $B_0$ be a disk containing $\overline{D_1\cup D_2}$, and extend ${\bf r}^K_j$ to $D^e \setminus \Pi_{L_0}$ so that ${\bf r}_j^K|_{\mathbb{R}^2\setminus B_0}=0$, $\|{\bf r}_j^K\|_{H^1(D^e\setminus \Pi_{L_0})} \lesssim 1$, and the boundary condition \eqnref{rKbdry} holds on $\partial D_i$ for $i=1,2$.
Then, ${\bf r}^K_j$ belongs to $W_j$.
We have the following lemma.
\begin{lemma}\label{lem_rjK_grad_estim}
We have, for $(x,y)\in \Pi_{L_0}$,
\begin{equation}\label{Brjest}
|\nabla {\bf r}_j^K(x,y)| \lesssim 1+\frac{\tau|y|}{\epsilon+y^2}.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We prove \eqnref{Brjest} for $j=1$. The case for $j=2$ can be proved in a similar way.
Let us write ${\bf r}_j^{K}({\bf x})= (r^K_{j1}({\bf x}),r^K_{j2}({\bf x}) )^T$. To keep the expressions simple, we introduce
\begin{align}
d(y)&:=f_1(y)+f_2(y),\nonumber
\\
\phi(y)&:=1-\frac{m_1}{\sqrt\epsilon}\big[ q_{11}({\bf x}_2(y))-{q}_{11}({\bf x}_1(y))\big],\nonumber
\\
\eta(y) &:= -\frac{1}{2}-\frac{m_1}{\sqrt\epsilon} q_{11}({\bf x}_1(y)).
\label{eqn_def_d_phi}
\end{align}
Then $r^K_{11}$ can be rewritten as
$$
r^K_{11}(x,y) = \frac{\phi(y)}{d(y)} x +\frac{\phi(y)f_1(y)}{d(y)} +\eta(y), \quad (x,y)\in \Pi_{L_0}.
$$
Straightforward computations show that
\begin{align}
\partial_1 r^K_{11} &= \frac{\phi}{d}, \label{dx_rK_1x}
\\[0.5em]
\partial_2 r^K_{11} &= \left[\frac{\phi'}{d}-\frac{ \phi d'}{d^2} \right]x +\frac{\phi' f_1}{d} + \frac{\phi f_1' }{d}-\frac{ \phi f_1 d'}{d^2} +\eta'.
\label{eqn_dy_rK_1x}
\end{align}
Note that
\begin{equation}\label{eqn_estim_d}
d(y) \approx \epsilon +y^2, \quad |d'(y)|\lesssim|y| ,\quad |d''(y)|\lesssim 1.
\end{equation}
Note also that, from \eqnref{m2def}, Corollary \ref{cor_hS_D1D2_diff_estim} and Lemma \ref{lem_hS_bdry_deri_estim}, we have
\begin{equation}\label{eqn_estim_phi}
|\phi(y)| \lesssim \epsilon+ y^2+\tau|y| ,\qquad |\phi'(y)|,|\eta'(y)| \lesssim 1.
\end{equation}
From \eqnref{dx_rK_1x}-\eqnref{eqn_estim_phi} and the fact that $|x|\lesssim \epsilon +y^2 $ for $(x,y)\in\Pi_L$, we have
\begin{align*}
|\partial_1 r^K_{11}| &\lesssim \frac{\epsilon + y^2 + \tau|y|}{\epsilon+y^2} \lesssim 1+\frac{ \tau|y|}{\epsilon+y^2},
\\[0.5em]
|\partial_2 r^K_{11}| &\lesssim \left[ \frac{1}{\epsilon+y^2}+\frac{(\epsilon+|y|)|y|}{(\epsilon+y^2)^2}\right] (\epsilon + y^2)
\\
&\qquad
+ \frac{\epsilon+y^2}{\epsilon+y^2} + \frac{(\epsilon+|y|) |y|}{\epsilon+y^2}+\frac{(\epsilon+|y|)(\epsilon+y^2)|y|}{(\epsilon+y^2)^2} + 1
\lesssim 1.
\end{align*}
In a similar way, one can see that
\begin{align*}
|\partial_1 r^K_{12}| &\lesssim 1+\frac{ \tau|y| }{\epsilon+y^2},
\quad
|\partial_2 r^K_{12}| \lesssim 1.
\end{align*}
This completes the proof.
\qed
\noindent{\sl Proof of Proposition \ref{prop_Brj}}.
By the variational principle \eqnref{rj_var_prin}, we have
\begin{align*}
\mathcal{E}_{D^e}[{\bf r}_j] \le \mathcal{E}_{D^e}[{\bf r}_j^K]
\lesssim \|\nabla{\bf r}_j^K \|_{L^2(D^e)}^2 .
\end{align*}
It follows from Lemma \ref{lem_rjK_grad_estim} that
\begin{align*}
\int_{D^e} |\nabla{\bf r}^K_j|^2
&\lesssim\int_{\Pi_{L_0}} |\nabla{\bf r}^K_j|^2 + \int_{D^e\setminus \Pi_{L_0}} |\nabla{\bf r}^K_j|^2
\\
&\lesssim\int_{-L_0}^{L_0}\int_{-f_1(y)}^{f_2(y)} \Big(\frac{\epsilon+|y|}{\epsilon+y^2}\Big)^2 \,dx\, dy +1
\\
&\lesssim \int_{-L_0}^{L_0} \frac{(\epsilon+|y|)^2}{\epsilon+y^2} \,dy +1
\lesssim 1.
\end{align*}
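Here the last integral is $O(1)$ uniformly in $\epsilon$ since, for $|y|\le L_0$,
$$
\frac{(\epsilon+|y|)^2}{\epsilon+y^2} \le \frac{2\epsilon^2+2y^2}{\epsilon+y^2} \le 2\epsilon+2 \lesssim 1.
$$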
So the proof is complete.
\qed
\section{Stress concentration-boundary value problem}\label{sec:bvp}
This section deals with the stress concentration, {\it i.e.}, the gradient blow-up of the solution to the boundary value problem \eqnref{elas_eqn_bdd}. We characterize the stress concentration in the narrow region between the two inclusions in terms of the singular functions ${\bf q}_j$ defined in \eqnref{Bqone}, \eqnref{Bqtwo}, and \eqnref{q3def}. The main results (Theorems \ref{main_thm_2_general_bdd} and \ref{cor-bdd}) are stated and proved in subsection \ref{subsec:bvp}. Preliminary results required for proving the main ones are also stated in the same subsection; their proofs are given in subsequent subsections. At the end of subsection \ref{subsec:bvp} we include a brief comparison of this paper's method with that of \cite{BLL-ARMA-15}, where the upper bound on the gradient blow-up is obtained.
\subsection{Characterization of stress concentration-BVP}\label{subsec:bvp}
We first introduce functions ${\bf h}_{\Omega,j}$ for the boundary value problem, analogously to the functions ${\bf h}_j$ defined in \eqnref{hj_def} for the free space problem. They are solutions to the following problem:
\begin{equation} \label{eqn_hOmega}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf h}_{\Omega,j}= 0 \quad &\mbox{ in } \widetilde{\Omega},\\[2mm]
\displaystyle {\bf h}_{\Omega,j}=\frac{(-1)^i}{2}\Psi_j \quad &\mbox{ on } \partial D_i, \ i=1,2,
\\[2mm]
\displaystyle {\bf h}_{\Omega,j}=0 \quad &\mbox{ on } \partial \Omega.
\end{array}
\right.
\end{equation}
One can easily see that the solution ${\bf u}$ to \eqnref{elas_eqn_bdd} admits the decomposition
\begin{equation}\label{eqn_decomp_u_bdd}
{\bf u} = {\bf v}_{\Omega} - \sum_{j=1}^3 (c_{1j}-c_{2j}){{\bf h}}_{\Omega,j} \quad \mbox{in }\widetilde\Omega,
\end{equation}
where ${\bf v}_\Omega$ is the solution to $\mathcal{L}_{\lambda, \mu} {\bf v}_\Omega =0$ in $\widetilde{\Omega}$ with the boundary condition
\begin{equation}
{\bf v}_\Omega = \frac{1}{2} \sum_{j=1}^3 (c_{1j}+c_{2j}) \Psi_j \quad \mbox{on } \partial D_1 \cup \partial D_2.
\end{equation}
Note that
$$
{\bf v}_\Omega|_{\partial D_1} - {\bf v}_\Omega|_{\partial D_2} = \frac{1}{2} (c_{13}+c_{23}) (\Psi_3 |_{\partial D_1} - \Psi_3 |_{\partial D_2})= O(|{\bf x}|),
$$
from which one expects that $\nabla {\bf v}_\Omega$ does not blow up even when $\epsilon \to 0$. In fact, it was proved in \cite{BLL-ARMA-15} that
\begin{equation}\label{eqn_v_Omega_grad_bdd}
\|\nabla{\bf v}_{\Omega}\|_{L^\infty(\widetilde\Omega)} \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
So the singular behavior of $\nabla{\bf u}$ is determined by the function $\sum_{j=1}^3 (c_{1j}-c_{2j}){{\bf h}}_{\Omega,j}$.
In the sequel, we investigate the asymptotic behavior of $c_{1j}-c_{2j}$ and ${\bf h}_{\Omega,j}$ as $\epsilon \to 0$.
For that purpose, we introduce the following boundary integrals:
\begin{equation}\label{def_Ical_Jcal}
\mathcal{I}_{jk}:=\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_j :\widehat{\nabla} {\bf h}_k \quad\mbox{ and }\quad \mathcal{J}_{\Omega,k}:=\int_{\partial D^e} \frac{\partial{\bf h}_k}{\partial \nu}\Big|_+ \cdot {\bf H}_\Omega, \quad j,k=1,2,3,
\end{equation}
where ${\bf h}_j$ is the solution to \eqnref{hj_def} in $\mathcal{A}^*$ and ${\bf H}_\Omega$ is the function defined by \eqnref{eqn_def_H_Omega}. We emphasize that $\mathcal{I}_{jk}$ is defined by ${\bf h}_j$, not by ${\bf h}_{\Omega,j}$.
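We also note in passing that the quantities $\mathcal{I}_{jk}$ are symmetric in their indices, as a consequence of the major symmetry of the elasticity tensor $\mathbb{C}$:

```latex
% Symmetry of the coefficient matrix built from \mathcal{I}_{jk}:
\begin{equation*}
\mathcal{I}_{jk}
= \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_j : \widehat{\nabla} {\bf h}_k
= \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_k : \widehat{\nabla} {\bf h}_j
= \mathcal{I}_{kj}, \quad j,k=1,2,3.
\end{equation*}
```

This is why the matrix appearing in Lemma \ref{lem_represent_c1_c2_bdd} is symmetric.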
The relation among $c_{1j}-c_{2j}$, $\mathcal{I}_{jk}$ and $\mathcal{J}_{\Omega,k}$ is given by the following lemma.
\begin{lemma}\label{lem_represent_c1_c2_bdd}
The constants $c_{ij}$ appearing in \eqnref{elas_eqn_bdd} satisfy
\begin{equation}\label{c_diff_rep}
\begin{bmatrix}
\mathcal{I}_{11} & \mathcal{I}_{12} & \mathcal{I}_{13} \\
\mathcal{I}_{12} & \mathcal{I}_{22} & \mathcal{I}_{23} \\
\mathcal{I}_{13} & \mathcal{I}_{23} & \mathcal{I}_{33}
\end{bmatrix}
\begin{bmatrix}
c_{11}-c_{21} \\
c_{12}-c_{22} \\
c_{13}-c_{23}
\end{bmatrix}
=
\begin{bmatrix}
\mathcal{J}_{\Omega,1} \\
\mathcal{J}_{\Omega,2} \\
\mathcal{J}_{\Omega,3}
\end{bmatrix}.
\end{equation}
\end{lemma}
By inverting \eqnref{c_diff_rep}, we will see that the asymptotic behavior of $c_{1j}-c_{2j}$ as $\epsilon \to 0$ can be described in terms of $\mathcal{K}_{\Omega,j}$ which are defined by
\begin{equation}\label{def_Kcal_Omega}
{\mathcal{K}}_{\Omega,1}={\mathcal{J}}_{\Omega,1}-\frac{{\mathcal{J}}_{\Omega,3} \mathcal{I}_{13}}{\mathcal{I}_{33}}, \quad
{\mathcal{K}}_{\Omega,2}={\mathcal{J}}_{\Omega,2}-\frac{{\mathcal{J}}_{\Omega,3} \mathcal{I}_{23}}{\mathcal{I}_{33}},\quad
{\mathcal{K}}_{\Omega,3}=\frac{{\mathcal{J}}_{\Omega,3}}{\mathcal{I}_{33}}.
\end{equation}
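These combinations arise from a Gaussian elimination of $c_{13}-c_{23}$ in \eqnref{c_diff_rep}; the following is a heuristic sketch, assuming $\mathcal{I}_{33}\neq 0$.

```latex
% Solve the third row of the system for c_{13}-c_{23},
%   c_{13}-c_{23} = \mathcal{I}_{33}^{-1}\big(\mathcal{J}_{\Omega,3}
%     - \mathcal{I}_{13}(c_{11}-c_{21}) - \mathcal{I}_{23}(c_{12}-c_{22})\big),
% and substitute into the first two rows:
\begin{align*}
\Big(\mathcal{I}_{11}-\frac{\mathcal{I}_{13}^2}{\mathcal{I}_{33}}\Big)(c_{11}-c_{21})
+\Big(\mathcal{I}_{12}-\frac{\mathcal{I}_{13}\mathcal{I}_{23}}{\mathcal{I}_{33}}\Big)(c_{12}-c_{22})
&= \mathcal{J}_{\Omega,1}-\frac{\mathcal{J}_{\Omega,3}\mathcal{I}_{13}}{\mathcal{I}_{33}}
= \mathcal{K}_{\Omega,1},
\\
\Big(\mathcal{I}_{12}-\frac{\mathcal{I}_{13}\mathcal{I}_{23}}{\mathcal{I}_{33}}\Big)(c_{11}-c_{21})
+\Big(\mathcal{I}_{22}-\frac{\mathcal{I}_{23}^2}{\mathcal{I}_{33}}\Big)(c_{12}-c_{22})
&= \mathcal{K}_{\Omega,2}.
\end{align*}
```

The right-hand sides of the reduced system are exactly $\mathcal{K}_{\Omega,1}$ and $\mathcal{K}_{\Omega,2}$, while $\mathcal{K}_{\Omega,3}$ is the leading part of $c_{13}-c_{23}$ obtained from the third row.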
In fact, the following propositions hold. We mention that they are consequences of Proposition \ref{prop_Brj}, which is proved by the variational principle and the properties of the singular functions ${\bf q}_j$.
\begin{prop}\label{prop_Kcal_Omega_estim}
For $j=1,2,3$, we have
\begin{equation}\label{Kcaljest}
|\mathcal{K}_{\Omega,j}|\lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
\end{prop}
\begin{prop}\label{prop_diff_c1_c2_asymp_bdd}
We have
\begin{align}
c_{11}-c_{21}&= \mathcal{K}_{\Omega,1}m_1^{-1} \sqrt\epsilon +O(\sqrt\epsilon\widetilde{E}), \label{c11c21}
\\
c_{12}-c_{22}&= \mathcal{K}_{\Omega,2}m_2^{-1} \sqrt\epsilon +O(\sqrt\epsilon\widetilde{E}), \label{c12c22}
\\
c_{13}-c_{23}&= \mathcal{K}_{\Omega,3} + O(\widetilde{E}), \label{c13c23}
\end{align}
where
\begin{equation}
\widetilde{E}:=(\sqrt\epsilon+\tau \sqrt\epsilon|\ln\epsilon|)\| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
\end{prop}
As an immediate consequence of Propositions \ref{prop_Kcal_Omega_estim} and \ref{prop_diff_c1_c2_asymp_bdd}, we obtain the following corollary.
\begin{cor}\label{cor_cij_estim}
We have
\begin{equation}
|c_{11}-c_{21}|+ |c_{12}-c_{22}| \lesssim \sqrt\epsilon \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}
\end{equation}
and
\begin{equation}
|c_{13}-c_{23}| \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
\end{cor}
Regarding the asymptotic behavior of ${\bf h}_{\Omega,j}$, we obtain the following proposition.
\begin{prop}\label{hGOjandBq}
Let $m_j$, $j=1,2$, be the constant defined by \eqnref{m2def}. We have for $j=1,2$
\begin{equation}\label{eqn_hOj_qj_rOj_decomp}
{\bf h}_{\Omega,j} = \frac{m_j}{\sqrt\epsilon} {\bf q}_j +{\bf r}_{\Omega,j},
\end{equation}
where ${\bf r}_{\Omega,j}$ satisfies
\begin{equation}\label{rGOj}
\begin{cases}
\displaystyle|\nabla {\bf r}_{\Omega,j}({\bf x}) | \lesssim 1+\frac{\tau |y|}{\epsilon+y^2} &\quad \mbox{for }{\bf x}\in \Pi_{L_0},
\\[1em]
\displaystyle|\nabla {\bf r}_{\Omega,j}({\bf x}) | \lesssim 1 &\quad \mbox{for }{\bf x}\in \widetilde\Omega\setminus{\Pi_{L_0}}.
\end{cases}
\end{equation}
Here, $\tau$ is the constant defined by \eqnref{def_tau}. We also have
\begin{equation}\label{eqn_hOj_qj_rOj_decomp2}
{\bf h}_{\Omega,3} = {\bf q}_3+ {\bf r}_{\Omega,3} \quad\mbox{in } \widetilde\Omega,
\end{equation}
where ${\bf r}_{\Omega,3}$ satisfies
\begin{equation}\label{rGO3}
| \nabla {\bf r}_{\Omega,3}| \lesssim 1 \quad \mbox{in } \widetilde\Omega.
\end{equation}
\end{prop}
It is worth emphasizing that if the two inclusions are symmetric with respect to both the $x$- and $y$-axes, then $\tau=0$, so that $|\nabla {\bf r}_{\Omega,j}|\lesssim 1$ in $\widetilde \Omega$ for $j=1,2$ as well.
With the help of the preliminary results presented above, we are now able to state and prove the main results of this section.
\begin{theorem}\label{main_thm_2_general_bdd}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_bdd} for some ${\bf g} \in C^{1,\gamma}(\partial\Omega)$.
Then the following decomposition holds:
\begin{equation}
{\bf u}({\bf x}) = {\bf b}_\Omega({\bf x})- \sum_{j=1}^3 \big({\mathcal{K}}_{\Omega,j}+s_{\Omega,j}\big){\bf q}_j({\bf x})
, \quad{\bf x}\in \widetilde\Omega,
\end{equation}
where ${\mathcal{K}}_{\Omega,j}$ are the constants defined by \eqnref{def_Kcal_Omega} (and thus satisfy \eqnref{Kcaljest}), $s_{\Omega, j}$ are constants satisfying
\begin{equation}
|s_{\Omega,j}| \lesssim \tau\sqrt\epsilon|\ln\epsilon| \|{\bf g}\|_{C^{1,\gamma}(\partial\Omega)},
\end{equation}
and the function ${\bf b}_\Omega$ satisfies
\begin{equation}
\|\nabla {\bf b}_\Omega\|_{L^\infty(\widetilde\Omega)} \lesssim \|{\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
\end{theorem}
\begin{theorem}\label{cor-bdd}
It holds that
\begin{equation}\label{gradestbdd}
\frac{ \sum_{j=1}^2 |\mathcal{K}_{\Omega,j}| }{\sqrt{\epsilon}}\lesssim\| \nabla {\bf u}\|_{L^\infty(\widetilde\Omega)} \lesssim \frac{\|{\bf g}\|_{C^{1,\gamma}(\partial\Omega)}}{\sqrt{\epsilon}}.
\end{equation}
\end{theorem}
The upper estimate in \eqnref{gradestbdd} was proved in \cite{BLL-ARMA-15}. The lower estimate shows that $\epsilon^{-1/2}$ is also a lower bound on the blow-up rate of $\nabla {\bf u}$ as $\epsilon \to 0$, provided that
\begin{equation}
1 \lesssim \sum_{j=1}^2 |\mathcal{K}_{\Omega,j}|.
\end{equation}
We will show that this condition is fulfilled in some special cases (see section \ref{sec:symmetric_case}).
\noindent{\sl Proof of Theorem \ref{main_thm_2_general_bdd}}.
According to Proposition \ref{prop_diff_c1_c2_asymp_bdd}, $c_{1j}-c_{2j}$ can be written as
\begin{align*}
c_{11}-c_{21}&= \mathcal{K}_{\Omega,1}m_1^{-1} \sqrt\epsilon + m_1^{-1}\sqrt\epsilon (s_{\Omega, 1} +s_{\Omega, 1}'),
\\
c_{12}-c_{22}&= \mathcal{K}_{\Omega,2}m_2^{-1} \sqrt\epsilon + m_2^{-1}\sqrt\epsilon (s_{\Omega, 2} +s_{\Omega, 2}'),
\\
c_{13}-c_{23}&= \mathcal{K}_{\Omega,3} + s_{\Omega, 3} +s_{\Omega, 3}',
\end{align*}
where the constants $s_{\Omega,j}$ and $s_{\Omega,j}'$ satisfy
\begin{align}
|s_{\Omega,j}|&\lesssim \tau \sqrt\epsilon |\ln\epsilon|\| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}, \label{s_GO_j_estim_temp1}
\\
|s_{\Omega,j}'|&\lesssim \sqrt\epsilon\| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}. \label{s_GO_j_estim_temp2}
\end{align}
By substituting \eqnref{eqn_hOj_qj_rOj_decomp}, \eqnref{eqn_hOj_qj_rOj_decomp2}, and the above three identities into \eqnref{eqn_decomp_u_bdd}, we have
\begin{align}
{\bf u}
&= {\bf v}_\Omega - \bigg(\sum_{j=1}^2 (c_{1j}-c_{2j} )\Big(\frac{m_j}{\sqrt\epsilon}{\bf q}_j + {\bf r}_{\Omega,j}\Big)\bigg)
-(c_{13}-c_{23}) ({\bf q}_3 + {\bf r}_{\Omega,3})\nonumber
\\
&= {\bf v}_\Omega - \sum_{j=1}^3 (\mathcal{K}_{\Omega,j} + s_{\Omega,j} + s_{\Omega,j}' ){\bf q}_j - \sum_{j=1}^3(c_{1j}-c_{2j}) {\bf r}_{\Omega,j}.
\label{eqn_u_decomp_temp1}
\end{align}
Let
$$
{\bf b}_\Omega := {\bf u} + \sum_{j=1}^3 (\mathcal{K}_{\Omega,j} + s_{\Omega,j}){\bf q}_j.
$$
Then, from \eqnref{eqn_u_decomp_temp1}, we have
\begin{align*}
\nabla {\bf b}_\Omega &= \nabla {\bf v}_\Omega -\sum_{j=1}^3 s_{\Omega,j}'\nabla{\bf q}_j -\sum_{j=1}^3 (c_{1j}-c_{2j}) \nabla {\bf r}_{\Omega,j}
=: I_1+I_2+I_3.
\end{align*}
We now prove that each $I_j$ is bounded. That $|I_1| \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}$ was already noted in \eqnref{eqn_v_Omega_grad_bdd}.
By \eqnref{Bq12est} and \eqnref{Bq3est}, we have
\begin{equation}\label{Bq123est}
\| \nabla {\bf q}_j\|_{L^\infty(\widetilde\Omega)} \lesssim \epsilon^{-1/2}, \quad j=1,2,3.
\end{equation}
So, by \eqnref{s_GO_j_estim_temp2}, we have
$$
|I_2| \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
$$
We have from \eqnref{rGOj} and \eqnref{rGO3} that
$$
\| \nabla {\bf r}_{\Omega,1}\|_{L^\infty(\widetilde\Omega)}+\| \nabla {\bf r}_{\Omega,2}\|_{L^\infty(\widetilde\Omega)} \lesssim 1+\tau /\sqrt\epsilon
$$
and
$$
\| \nabla {\bf r}_{\Omega,3}\|_{L^\infty(\widetilde\Omega)} \lesssim 1.
$$
Therefore, it follows from Corollary \ref{cor_cij_estim} that
\begin{align*}
|I_3| &\leq \left| \sum_{j=1}^2 (c_{1j}-c_{2j}) \nabla{\bf r}_{\Omega,j} \right| + \left| (c_{13}-c_{23}) \nabla {\bf r}_{\Omega,3} \right|
\\
&\lesssim
\big(\sqrt\epsilon (1+\tau /\sqrt\epsilon) + 1\big)\| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}\lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{align*}
The proof is complete.
\qed
\noindent{\sl Proof of Theorem \ref{cor-bdd}}.
The upper estimate in \eqnref{gradestbdd} is a consequence of Proposition \ref{prop_Kcal_Omega_estim}, Theorem \ref{main_thm_2_general_bdd}, and \eqnref{Bq123est}.
To derive the lower estimate, we consider $\nabla {\bf u}(0,0)$.
It follows from Lemma \ref{singular_q_origin}, Lemma \ref{cor_q3_near_origin} and Theorem \ref{main_thm_2_general_bdd} that
$$
\nabla {\bf u}(0,0) =-\frac{\mathcal{K}_{\Omega,1}}{m_1\sqrt\epsilon}{\bf e}_1\otimes{\bf e}_1
- \frac{\mathcal{K}_{\Omega,2}}{m_2\sqrt\epsilon}{\bf e}_2\otimes{\bf e}_1 + O(1+\tau |\ln\epsilon|).
$$
Since the error term is of lower order than $\epsilon^{-1/2}$, reading off the $(1,1)$ and $(2,1)$ entries yields the lower estimate.
\qed
As mentioned earlier, the upper bound in \eqnref{gradestbdd} was proved in \cite{BLL-ARMA-15}, so it is helpful to compare the method of that paper with the one used here. In fact, some of the results obtained in \cite{BLL-ARMA-15} will be used for the proofs in this section. There, the solution ${\bf u}$ to \eqnref{elas_eqn_bdd} is expressed as follows:
\begin{equation}\label{yyli_decomp}
{\bf u}=\sum_{i=1}^2\sum_{k=1}^3 c_{ik} {\bf v}_{ik} + {\bf v}_3,
\end{equation}
where ${\bf v}_{ik}$ is the solution to
\begin{equation}\label{vikyan}
\begin{cases}
\mathcal{L}_{\lambda,\mu}{\bf v}_{ik}=0 &\quad \mbox{in } \widetilde \Omega,
\\
{\bf v}_{ik} = \Psi_k &\quad \mbox{on }\partial D_i,
\\
{\bf v}_{ik} = 0 &\quad \mbox{on } \partial D_j \cup \partial\Omega, \ j\neq i,
\end{cases}
\end{equation}
and ${\bf v}_3$ is the solution to
$$
\begin{cases}
\mathcal{L}_{\lambda,\mu}{\bf v}_3=0 &\quad \mbox{in } \widetilde \Omega,
\\
{\bf v}_3 = 0 &\quad \mbox{on }\partial D_1\cup\partial D_2,
\\
{\bf v}_3 = {\bf g} &\quad \mbox{on } \partial\Omega.
\end{cases}
$$
Note that
\begin{equation}\label{eqn_hGOk_v1k_v2k}
{\bf h}_{\Omega,k}=-\frac{1}{2}{\bf v}_{1k}+\frac{1}{2}{\bf v}_{2k}, \quad k=1,2,3.
\end{equation}
The $6\times 6$ linear system of equations for $c_{ik}$ is derived using \eqnref{int_zero}. The linear system is truncated to a $3\times 3$ one and then the difference $c_{1j}-c_{2j}$ is expressed using the following integrals:
\begin{align*}
a_{jk} &:= \int_{\partial D_1} \partial_\nu {\bf v}_{1j}|_+ \cdot \Psi_k = \int_{\widetilde\Omega} \mathbb{C} \widehat{\nabla} {\bf v}_{1j}: \widehat{\nabla} {\bf v}_{1k},
\\
b_k &:= \int_{\partial D_1} \partial_\nu {\bf v}_3|_+ \cdot \Psi_k,
\end{align*}
for $j,k=1,2,3$. Note that the integral $a_{jk}$ is similar to the quantity $\mathcal{I}_{jk}$ of this paper. The difference lies in that $\mathcal{I}_{jk}$ is defined using the free space solution ${\bf h}_j$.
To investigate the asymptotic behavior of $a_{jk}$ and $b_k$ as $\epsilon\rightarrow 0$,
the function ${\bf v}_{ik}$ is approximated by ${\bf v}_{ik}^K$, which is defined by
\begin{equation}\label{BvikK}
\begin{cases}
\displaystyle {\bf v}_{1k}^K(x,y)= \frac{-x+f_2(y)}{f_1(y)+f_2(y)} \Psi_k, \\[2mm]
\displaystyle {\bf v}_{2k}^K(x,y)= \frac{x+f_1(y)}{f_1(y)+f_2(y)} \Psi_k,
\end{cases}
(x,y) \in \Pi_L, \quad k=1,2,3.
\end{equation}
In fact, it is proved that
\begin{align}
\nabla {\bf v}_{ik}(x,y) &= \nabla {\bf v}_{ik}^K(x,y) + O \left( 1+ \frac{|y|}{\epsilon+y^2} \right) \quad\mbox{for } k=1,2, \label{Bao1} \\
\nabla {\bf v}_{i3}(x,y) &= \nabla {\bf v}_{i3}^K(x,y) + O (1). \label{Bao2}
\end{align}
From these approximations, which are derived using a new iteration technique, the upper bound $\epsilon^{-1/2}$ on the blow-up rate of $|\nabla {\bf u}|$ is obtained in \cite{BLL-ARMA-15}. However, a lower bound has not been obtained, partly because the functions ${\bf v}_{ik}^K$ are {\it not} solutions of the Lam\'e system.
In this paper, we introduce new singular functions ${\bf q}_j$, which are solutions of the Lam\'{e} system, as explained in section \ref{sec:singular}. Using these singular functions, we are able to derive precise asymptotic formulas for $\nabla {\bf u}$ as $\epsilon \to 0$. As a consequence, we reprove that $\epsilon^{-1/2}$ is indeed an upper bound on the blow-up rate. Moreover, the asymptotic formulas enable us to show that $\epsilon^{-1/2}$ is a lower bound on the blow-up rate as well in some cases, as presented in section \ref{sec:symmetric_case}. We emphasize that the asymptotic formulas are obtained using the variational principle, which is possible only because the ${\bf q}_j$ are solutions of the Lam\'{e} system.
\subsection{Preliminary estimates of boundary integrals}
In this subsection, we characterize the asymptotic behavior as $\epsilon \to 0$ of the following boundary integrals:
$$
\int_{\partial D_i} {\partial_\nu {\bf q}_j}\cdot \Psi_k,
\quad
\int_{\partial D^e} {\partial_\nu {\bf q}_j}\cdot {\bf q}_k.
$$
These integrals appear in later sections.
We first prove the following lemma.
\begin{lemma}\label{lem_hS_bdry_int12}
\begin{itemize}
\item[(i)] For $k=1,2$, we have
\begin{equation}\label{qjpsik}
\displaystyle\int_{\partial D_i} {\partial_\nu {\bf q}_j}\cdot \Psi_k =(-1)^{i+1}\delta_{jk} , \quad i, j=1,2.
\end{equation}
\item[(ii)] For $k=3$, we have
\begin{align}
\displaystyle\int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot \Psi_3 &= 0,
\label{int_h1S_bdry_Psi3}
\\
\displaystyle\int_{\partial D_i} {\partial_\nu {\bf q}_2}\cdot \Psi_3 &= (-1)^{i+1} {a} (-1 + 4\pi \alpha_2 \mu),
\label{int_h2S_bdry_Psi3}
\end{align}
for $i=1,2$, where $a$ is the constant defined by \eqnref{a_def}.
\end{itemize}
\end{lemma}
\noindent {\sl Proof}. \
Suppose that $k=1,2$.
Since $\mathcal{L}_{\lambda,\mu}{\bf \GG}({\bf x}-{\bf p}_l){\bf e}_j = \delta({\bf x}-{\bf p}_l) {\bf e}_j$, Green's formula yields
\begin{align*}
\int_{\partial D_i} \partial_{\nu_{\bf x}}{\bf \GG}({\bf x}-{\bf p}_l) {\bf e}_j\cdot \Psi_k \,d\sigma({\bf x}) = \int_{D_i} \mathcal{L}_{\lambda,\mu}{\bf \GG}({\bf x}-{\bf p}_l){\bf e}_j \cdot \Psi_k \,d{\bf x} = \delta_{il}\delta_{jk}.
\end{align*}
Green's formula also yields
\begin{align*}
\int_{\partial D_i} \partial_{\nu_{\bf x}}\left(\frac{{\bf x}-{\bf p}_l}{|{\bf x}-{\bf p}_l|^2}\right)\cdot \Psi_k \,d\sigma({\bf x})=0.
\end{align*}
In fact, if $i=l$, then we apply Green's formula to $\mathbb{R}^2 \setminus D_i$, and to $D_i$ if $i \neq l$. So, \eqnref{qjpsik} follows from \eqnref{Bqone} and \eqnref{Bqtwo}.
We now prove \eqnref{int_h1S_bdry_Psi3} and \eqnref{int_h2S_bdry_Psi3} when $i=1$. The case when $i=2$ can be proved in the same way.
Let us prove \eqnref{int_h2S_bdry_Psi3} first.
In view of the definition \eqnref{Bqtwo} of ${\bf q}_2$, we have
\begin{equation}\label{350}
\int_{\partial D_1} {\partial_\nu {\bf q}_2}\cdot \Psi_3 = \int_{\partial D_1} \partial_{\nu_{\bf x}} ({\bf \GG} ({\bf x}-{\bf p}_1){\bf e}_2) \cdot \Psi_3 -
{\alpha_2 a} \int_{\partial D_1} \partial_{\nu_{\bf x}}\left( \frac{({\bf x}-{\bf p}_1)^\perp}{|{\bf x}-{\bf p}_1|^2}\right) \cdot \Psi_3.
\end{equation}
Since $\mathcal{L}_{\lambda,\mu}({\bf \GG} ({\bf x}-{\bf p}_1) {\bf e}_2)= \delta_{{\bf p}_1}({\bf x}) {\bf e}_2$, one can see that
\begin{equation}\label{351}
\int_{\partial D_1} \partial_{\nu_{\bf x}} ({\bf \GG} ({\bf x}-{\bf p}_1){\bf e}_2) \cdot \Psi_3 = {\bf e}_2 \cdot \Psi_3({\bf p}_1)= -a,
\end{equation}
where the last equality holds because ${\bf p}_1=(-a,0)$.
By using a change of variables ${\bf x}\rightarrow {\bf x}+{\bf p}_1$ and the fact that $\Psi_3({\bf x}+{\bf p}_1)=\Psi_3({\bf x}) - a\Psi_2$, we obtain
\begin{align}
& \int_{\partial D_1} \partial_{\nu_{\bf x}}\left( \frac{({\bf x}-{\bf p}_1)^\perp}{|{\bf x}-{\bf p}_1|^2}\right) \cdot \Psi_3({\bf x})
=\int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_3({\bf x}+{\bf p}_1) \nonumber \\
&= -a \int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_2
+\int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_3({\bf x}). \label{int_dipole_Psi3_decomp}
\end{align}
One can show as before that
\begin{equation}\label{int_dipole_Psi3_prep}
\int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_2 \, d\sigma({\bf x}) =0.
\end{equation}
Let $B$ be a disk centered at $0$ such that $\partial D_1-{\bf p}_1 \subset B$. Then Green's formula yields
$$
\int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_3({\bf x}) =
\int_{\partial B} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_3({\bf x}).
$$
Straightforward computations show that
$$
\partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) = (-2\mu) \frac{{\bf x}^\perp}{|{\bf x}|^3} \quad \mbox{ for }{\bf x} \in \partial B.
$$
So, we have
$$
\int_{\partial D_1-{\bf p}_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot \Psi_3({\bf x}) =
\int_{\partial B} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}^\perp}{|{\bf x}|^2}\right) \cdot {\bf x}^{\perp}
= \int_{\partial B} (-2\mu) \frac{1}{|{\bf x}|}=-4\pi \mu .
$$
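The value of the last integral can also be checked directly: it is independent of the radius $R$ of $B$, since $|{\bf x}|=R$ on $\partial B$ and $|\partial B|=2\pi R$:

```latex
% Radius independence of the boundary integral over \partial B:
\begin{equation*}
\int_{\partial B} (-2\mu)\frac{1}{|{\bf x}|}\,d\sigma({\bf x})
= -\frac{2\mu}{R}\,|\partial B|
= -\frac{2\mu}{R}\cdot 2\pi R
= -4\pi\mu.
\end{equation*}
```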
It then follows from \eqnref{int_dipole_Psi3_decomp} and \eqnref{int_dipole_Psi3_prep} that
\begin{equation}\label{352}
\int_{\partial D_1} \partial_{\nu_{\bf x}} \left( \frac{({\bf x}-{\bf p}_1)^\perp}{|{\bf x}-{\bf p}_1|^2}\right) \cdot \Psi_3({\bf x}) =-4\pi \mu.
\end{equation}
Combining \eqnref{350}-\eqnref{352}, we obtain \eqnref{int_h2S_bdry_Psi3}.
We now prove \eqnref{int_h1S_bdry_Psi3}. Like \eqnref{351} we have
$$
\int_{\partial D_1} \partial_{\nu_{\bf x}} ({\bf \GG} ({\bf x}-{\bf p}_1){\bf e}_1) \cdot \Psi_3 = {\bf e}_1 \cdot \Psi_3({\bf p}_1)= 0.
$$
In the same way as for \eqnref{352}, one can show that
$$
\int_{\partial D_1} \partial_{\nu_{\bf x}}\left( \frac{{\bf x}-{\bf p}_1}{|{\bf x}-{\bf p}_1|^2}\right) \cdot \Psi_3=0.
$$
Therefore, from the definition \eqnref{Bqone} of ${\bf q}_1$, we obtain \eqnref{int_h1S_bdry_Psi3}, and the proof is complete.
\qed
\begin{lemma}\label{lem_dhS_hS_int12}
We have
\begin{align}
&\displaystyle\int_{\partial D^e} {\partial_{\nu} {\bf q}_j}
\cdot {\bf q}_j
= - {m_j^{-1}}\sqrt\epsilon + O(\tau\epsilon|\ln\epsilon|+\epsilon), \quad j=1,2,
\label{qjqj}
\\
& \displaystyle\int_{\partial D^e} {\partial_\nu {\bf q}_1} \cdot {\bf q}_2 = O(\tau\epsilon|\ln\epsilon|+\epsilon).
\label{q1q2}
\end{align}
\end{lemma}
Before proving Lemma \ref{lem_dhS_hS_int12}, we need to estimate the conormal derivatives $\partial_{\nu} {\bf q}_j$ on $\partial D_1 \cup \partial D_2$. We have the following lemma.
\begin{lemma} \label{lem_conormal_estim}
For ${\bf x}=(x,y)\in (\partial D_1 \cup \partial D_2) \cap \partial\Pi_{L_0}$, we have
\begin{equation}\label{4200}
\big|{\partial_\nu {\bf q}_1}({\bf x})\cdot {\bf e}_1\big| \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2},
\quad
\big|{\partial_\nu {\bf q}_1}({\bf x})\cdot {\bf e}_2\big| \lesssim \frac{\sqrt\epsilon|y|}{\epsilon+y^2}+\sqrt\epsilon,
\end{equation}
and
\begin{equation}\label{4300}
\big|{\partial_\nu {\bf q}_2}({\bf x})\cdot {\bf e}_1\big| \lesssim \frac{\sqrt\epsilon|y|}{\epsilon+y^2}+\sqrt\epsilon,
\quad
\big|{\partial_\nu {\bf q}_2}({\bf x})\cdot {\bf e}_2\big| \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2}.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \ We prove \eqnref{4200} only; \eqnref{4300} can be proved similarly.
Let $\mbox{\boldmath $\Gs$}^1= (\sigma^1_{ij})_{i,j=1}^2$ be the stress tensor of ${\bf q}_1$, namely, $\mbox{\boldmath $\Gs$}^1 := \mathbb{C} \widehat{\nabla}{\bf q}_1$. According to \eqnref{stress_cartesian}, the entries of $\mbox{\boldmath $\Gs$}^1$ can be written as
\begin{align*}
\sigma^1_{11} &= (\lambda+2\mu) \partial_1 q_{11} + \lambda\partial_2 q_{12}, \\
\sigma^1_{22} &= \lambda \partial_1 q_{11} + (\lambda+2\mu) \partial_2 q_{12}, \\
\sigma^1_{12} &= \sigma^1_{21} = \mu (\partial_2 q_{11} + \partial_1 q_{12}).
\end{align*}
Thus we have the following estimates from Lemma \ref{lem_hS_grad_estim}:
$$
|\sigma^1_{11}| + |\sigma^1_{22}|\lesssim \frac{\sqrt\epsilon}{\epsilon+y^2},
\qquad
|\sigma^1_{12}| \lesssim \frac{\sqrt\epsilon|y|}{\epsilon+y^2} + \sqrt\epsilon \qquad \mbox{for } (x,y)\in\Pi_{L_0}.
$$
Note that $\partial_\nu {\bf q}_1=\mbox{\boldmath $\Gs$}^1 {\bf n}$ and the outward unit normal vector ${\bf n}$ on $\partial D_i \cap \partial\Pi_{L_0}$ is given as follows:
$$
{\bf n} = \frac{1}{\sqrt{1+(f_i'(y))^2}} \big( (-1)^{i+1},f_i'(y) \big), \quad i=1,2.
$$
Moreover, we have $|f_i'(y)|\lesssim |y|$. Therefore, we obtain
\begin{align*}
|{\partial_\nu {\bf q}_1}(x,y)\cdot {\bf e}_1|
&=|(\mbox{\boldmath $\Gs$}^1 {\bf n} )_1|
= \left| \frac{1}{\sqrt{1+(f_i'(y))^2}} ((-1)^{i+1}\sigma^1_{11} + f_i'(y)\sigma^1_{12}) \right|
\\
&\lesssim \frac{\sqrt\epsilon}{\epsilon+y^2} + \frac{\sqrt\epsilon y^2}{\epsilon+y^2} \lesssim \frac{\sqrt\epsilon}{\epsilon+y^2},
\end{align*}
and
\begin{align*}
|{\partial_\nu {\bf q}_1}(x,y)\cdot {\bf e}_2|
&=
|(\mbox{\boldmath $\Gs$}^1 {\bf n} )_2|
= \left| \frac{1}{\sqrt{1+(f_i'(y))^2}} ((-1)^{i+1}\sigma^1_{12} + f_i'(y)\sigma^1_{22}) \right|
\\
&\lesssim \frac{\sqrt\epsilon |y|}{\epsilon+y^2} + \sqrt\epsilon,
\end{align*}
for ${\bf x}=(x,y)\in \partial D_i \cap \partial \Pi_{L_0}$ and $i=1,2$.
The proof is complete.
\qed
\noindent{\sl Proof of Lemma \ref{lem_dhS_hS_int12}}. To prove \eqnref{qjqj}, we write
\begin{align*}
\int_{\partial D_1\cup \partial D_2} {\partial_\nu {\bf q}_1}\cdot {\bf q}_1
&= \sum_{i=1}^2 (-1)^i (\alpha_1-\alpha_2)\kappa_i a \int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot\Psi_1 \\
&\qquad + \sum_{i=1}^2 \int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot \big[{\bf q}_1 -
(-1)^i (\alpha_1-\alpha_2) \kappa_i a \, \Psi_1 \big].
\end{align*}
By Lemma \ref{lem_hS_bdry_int12} (i), we have
$$
\sum_{i=1}^2 (-1)^i (\alpha_1-\alpha_2)\kappa_i a \int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot\Psi_1 = -(\alpha_1-\alpha_2)(\kappa_1+\kappa_2) a.
$$
Then \eqnref{def_alpha} and \eqnref{m2def} yield
$$
\sum_{i=1}^2 (-1)^i (\alpha_1-\alpha_2)\kappa_i a \int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot\Psi_1 = -m_1^{-1} \sqrt{\epsilon} + O(\epsilon^{3/2}).
$$
It then remains to show that
\begin{equation}\label{500}
\int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot \big[{\bf q}_1 -
(-1)^i (\alpha_1-\alpha_2) \kappa_i a \, \Psi_1 \big] = O(\tau\epsilon|\ln\epsilon|+\epsilon), \quad i=1,2.
\end{equation}
To prove \eqnref{500}, let us write
\begin{align*}
\int_{\partial D_1} {\partial_\nu {\bf q}_1}\cdot \big[{\bf q}_1 -
(-1) (\alpha_1-\alpha_2) \kappa_1 a \, \Psi_1
\big] =
\int_{\partial D_1 \cap \partial \Pi_{L_0} }
+ \int_{\partial D_1 \setminus \partial\Pi_{L_0} }
:=I_1 + I_2.
\end{align*}
From Lemma \ref{lem_qj_far_estim} and the fact that $a\approx\sqrt{\epsilon}$, we see that $|I_2|\lesssim \epsilon$.
Note that
$$
|I_1| \le \int_{\partial D_1\cap\partial \Pi_{L_0}} \big|{\partial_\nu {\bf q}_1}\cdot{\bf e}_1 \big(q_{11} + (\alpha_1-\alpha_2) \kappa_1 a
\big)\big| + \big|{\partial_\nu {\bf q}_1}\cdot{\bf e}_2 \,q_{12}\big|.
$$
From \eqnref{q11asym} and \eqnref{q12asym}, we see that
$$
| q_{11} + (\alpha_1-\alpha_2) \kappa_1 a | \lesssim \epsilon^{3/2} + \sqrt\epsilon y^2+ \tau \sqrt\epsilon |y|
$$
and
$$
|q_{12}|\lesssim \sqrt\epsilon |y|.
$$
It then follows from Lemma \ref{lem_conormal_estim} that
\begin{align*}
|I_1| &\lesssim \int_{\partial D_1\cap\partial \Pi_{L_0}} \frac{\sqrt\epsilon}{\epsilon+y^2} (\epsilon^{3/2} + \sqrt\epsilon y^2+ \tau \sqrt\epsilon |y|) + \Big(\frac{ \sqrt\epsilon |y|}{\epsilon+y^2} + \sqrt\epsilon \Big) \sqrt\epsilon|y|
\\
&\lesssim \int_{-L_0}^{L_0} \frac{\epsilon \tau|y|}{\epsilon+y^2} \,dy + \epsilon
\lesssim \tau \epsilon |\ln \epsilon|+ \epsilon.
\end{align*}
This proves \eqnref{500} for $i=1$. The case for $i=2$ can be proved in the same way. So,
\eqnref{qjqj} is proved.
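We record, for the reader's convenience, the explicit evaluation of the logarithmic integral used in the estimate of $I_1$ above (for small $\epsilon$ and fixed $L_0$):

```latex
% Explicit evaluation of the logarithmic integral:
\begin{equation*}
\int_{-L_0}^{L_0} \frac{\epsilon\tau|y|}{\epsilon+y^2}\,dy
= 2\epsilon\tau \int_0^{L_0} \frac{y}{\epsilon+y^2}\,dy
= \epsilon\tau \ln\frac{\epsilon+L_0^2}{\epsilon}
\lesssim \tau\epsilon|\ln\epsilon|.
\end{equation*}
```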
Next we prove \eqnref{q1q2}. Thanks to \eqnref{qjpsik} with $j=1$ and $k=2$, we can write
\begin{align}
\int_{\partial D^e} {\partial_\nu {\bf q}_1}\cdot {\bf q}_2
&= \sum_{i=1}^2 \alpha_2 \kappa_i^2 a \int_{\partial D_i} \partial_\nu {\bf q}_1 \cdot y\Psi_1 \nonumber \\
&\qquad + \sum_{i=1}^2 \int_{\partial D_i} {\partial_\nu {\bf q}_1}\cdot \big[{\bf q}_2 -
\alpha_2 \kappa_i^2 a y \,\Psi_1 -(-1)^i(\alpha_1+\alpha_2) \kappa_i a \, \Psi_2) \big]. \langlebel{5000}
\end{align}
Green's formula yields
\begin{align*}
\int_{\partial D_i}\partial_\nu {\bf q}_1 \cdot y\Psi_1 &= \int_{\partial D_i}\partial_\nu {\bf q}_1 \cdot y\Psi_1
-\int_{\partial D_i}\partial_\nu (y\Psi_1) \cdot {\bf q}_1 +\int_{\partial D_i}\partial_\nu (y\Psi_1) \cdot {\bf q}_1 \\
&= \int_{ D_i}\mathcal{L}_{\lambda,\mu} {\bf q}_1 \cdot y\Psi_1
-\int_{ D_i} \mathcal{L}_{\lambda,\mu}(y\Psi_1) \cdot {\bf q}_1 +\int_{\partial D_i}\partial_\nu (y\Psi_1) \cdot {\bf q}_1 \\
&= \int_{ D_i}\mathcal{L}_{\lambda,\mu} {\bf q}_1 \cdot y\Psi_1 +\int_{\partial D_i}\partial_\nu (y\Psi_1) \cdot {\bf q}_1 .
\end{align*}
Observe from \eqnref{doublet} and the definition \eqnref{Bqone} of ${\bf q}_1$ that
\begin{equation}\label{qonedirac}
\mathcal{L}_{\lambda,\mu}{\bf q}_1 = (\delta_{{\bf p}_1}-\delta_{{\bf p}_2}){\bf e}_1 + \sum_{j=1}^2 \frac{\alpha_2 a}{\alpha_1-\alpha_2}
\big( \partial_1 \delta_{{\bf p}_j} {\bf e}_1 + \partial_2 \delta_{{\bf p}_j} {\bf e}_2 \big),
\end{equation}
where $\delta_{{\bf p}_j}$ denotes the Dirac delta at ${\bf p}_j$. So, we see that
$$
\int_{ D_i}\mathcal{L}_{\lambda,\mu} {\bf q}_1 \cdot y\Psi_1 =0.
$$
It follows from Lemma \ref{lem_qj_far_estim} and Lemma \ref{lem_hS_bdry_estim} that $\| {\bf q}_1\|_{L^\infty(\partial D_i)} \lesssim \sqrt{\epsilon}$ for $i=1,2$. So we have
$$
\int_{\partial D_i}\partial_\nu (y\Psi_1) \cdot {\bf q}_1 = O(\sqrt\epsilon),
$$
and hence
\begin{equation}\label{5100}
\int_{\partial D_i}\partial_\nu {\bf q}_1 \cdot y\Psi_1 = O(\sqrt\epsilon), \quad i=1,2.
\end{equation}
Let
\begin{align*}
\int_{\partial D_1} {\partial_\nu {\bf q}_1}\cdot \big[{\bf q}_2 -
\alpha_2 \kappa_1^2 a y \,\Psi_1 +(\alpha_1+\alpha_2) \kappa_1 a \, \Psi_2
\big] &=
\int_{\partial D_1 \cap \partial \Pi_{L_0} }
+
\int_{\partial D_1 \setminus \partial\Pi_{L_0} }
\\
&:=J_1+J_2.
\end{align*}
As before, from Lemma \ref{lem_qj_far_estim} and the fact that $a\approx\sqrt{\epsilon}$, we see that $|J_2| \lesssim \epsilon$.
From Lemma \ref{lem_hS_bdry_estim}, Lemma \ref{lem_conormal_estim} and the fact that $a\approx\sqrt\epsilon$, we have
\begin{align*}
|J_1|&\lesssim \int_{\partial D_1\cap\partial \Pi_{L_0}} \big|{\partial_\nu {\bf q}_1}\cdot\Psi_1 \, \big( q_{21} - \alpha_2 \kappa_1^2 a y \big) \big| + \big|{\partial_\nu {\bf q}_1}\cdot\Psi_2 \,\big( q_{22} + (\alpha_1+\alpha_2) \kappa_1 a
\big)\big|
\\
&\lesssim \int_{\partial D_1\cap\partial \Pi_{L_0}} \Big( \frac{\sqrt\epsilon}{\epsilon+y^2} + \frac{ \sqrt\epsilon |y|}{\epsilon+y^2} + \sqrt\epsilon \Big)
(\epsilon^{3/2} + \sqrt\epsilon y^2+ \tau \sqrt\epsilon |y|)
\\
&\lesssim \int_{-L_0}^{L_0} \frac{\tau\epsilon |y|}{\epsilon+y^2} \,dy + \epsilon
\\
&\lesssim \tau \epsilon |\ln \epsilon|+ \epsilon.
\end{align*}
So we obtain
\begin{equation}\label{5200}
\left| \int_{\partial D_1} {\partial_\nu {\bf q}_1}\cdot \big({\bf q}_2 - \alpha_2 \kappa_1^2 a y \,\Psi_1 + (\alpha_1+\alpha_2) \kappa_1 a \,\Psi_2 \big) \right| \lesssim\tau \epsilon |\ln \epsilon|+ \epsilon.
\end{equation}
Similarly, one can see that
\begin{equation}\label{5300}
\left| \int_{\partial D_2} {\partial_\nu {\bf q}_1}\cdot \big({\bf q}_2 - \alpha_2 \kappa_2^2 a y \,\Psi_1 - (\alpha_1+\alpha_2) \kappa_2 a \,\Psi_2 \big) \right| \lesssim\tau \epsilon |\ln \epsilon|+ \epsilon.
\end{equation}
Since $a\approx \sqrt\epsilon$, \eqnref{q1q2} follows from \eqnref{5000} and \eqnref{5100}-\eqnref{5300}.
The proof is complete.
\qed
\subsection{Proof of Lemma \ref{lem_represent_c1_c2_bdd}}
We first show that
\begin{equation}\label{Ijk_different_rep}
\mathcal{I}_{jk} =\int_{\partial D_1} {\partial_\nu {\bf h}_j} \cdot \Psi_k =- \int_{\partial D_2} {\partial_\nu {\bf h}_j} \cdot \Psi_k, \quad
j=1,2,\,k=1,2,3.
\end{equation}
In fact, we see from Lemma \ref{cor_betti} and the boundary conditions of ${\bf h}_j$ that
$$
\mathcal{I}_{jk} = -\int_{\partial D^e} {\partial_\nu {\bf h}_j} \cdot {\bf h}_k
= \frac{1}{2}\int_{\partial D_1} {\partial_\nu {\bf h}_j} \cdot \Psi_k -
\frac{1}{2}\int_{\partial D_2} {\partial_\nu {\bf h}_j} \cdot \Psi_k.
$$
Since $\widehat{\nabla} \Psi_k=0$, we obtain using Lemma \ref{cor_betti} that
\begin{equation}\label{eqn_int_hj_D1_D2_zero}
\int_{\partial D^e} {\partial_\nu {\bf h}_j} \cdot \Psi_k = -\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_j :\widehat{\nabla} \Psi_k =0.
\end{equation}
So, \eqnref{Ijk_different_rep} follows.
Since $\mathcal{L}_{\lambda,\mu}{\bf H}_\Omega=0$ in $D_i$, we have
$$
\int_{\partial D_i} {\partial_\nu {\bf H}_\Omega}\cdot {\bf h}_j = (-1)^i \frac{1}{2} \int_{\partial D_i} {\partial_\nu{\bf H}_\Omega}\cdot \Psi_j=0.
$$
Thus we have
\begin{equation}\label{400}
\mathcal{J}_{\Omega,j} = \int_{\partial D^e} {\partial_\nu {\bf h}_j} \big|_+ \cdot {\bf H}_\Omega =
\int_{\partial D^e} \big( {\partial_\nu {\bf h}_j} \big|_+ \cdot {\bf H}_\Omega
- {\partial_\nu {\bf H}_\Omega}\cdot {\bf h}_j \big).
\end{equation}
One can easily see from \eqnref{rep_bdd} and Lemma \ref{lem:Acal} (i) that ${\bf u}-{\bf H}_\Omega$ can be extended to $D^e$ so that the extended function, still denoted by ${\bf u}-{\bf H}_\Omega$, satisfies $\mathcal{L}_{\lambda,\mu}({\bf u}-{\bf H}_\Omega)=0$ in $D^e$ and ${\bf u}-{\bf H}_\Omega\in \mathcal{A}$. Therefore, we have from Lemma \ref{cor_betti}
$$
\int_{\partial D^e}
({\bf u}-{\bf H}_\Omega)\cdot {\partial_\nu {\bf h}_j} \big|_+ - {\partial_\nu({\bf u}-{\bf H}_\Omega)} \big|_+ \cdot{\bf h}_j
=0.
$$
We then infer from \eqnref{int_zero} and \eqnref{400} that
$$
\mathcal{J}_{\Omega,j} = \int_{\partial D^e} {\partial_\nu {\bf h}_j} \big|_+ \cdot {\bf u} - {\partial_\nu {\bf u}} \big|_+ \cdot {\bf h}_j = \int_{\partial D^e} {\partial_\nu {\bf h}_j} \big|_+ \cdot {\bf u}.
$$
Then the boundary condition in \eqnref{elas_eqn_bdd} and \eqnref{Ijk_different_rep} yield
\begin{align*}
\mathcal{J}_{\Omega,j} &= \sum_{k=1}^3 c_{1k} \int_{\partial D_1} {\partial_\nu {\bf h}_j} \big|_+ \cdot \Psi_k +
c_{2k} \int_{\partial D_2} {\partial_\nu {\bf h}_j} \big|_+ \cdot \Psi_k \\
& = \sum_{k=1}^3 (c_{1k} - c_{2k}) \mathcal{I}_{jk}.
\end{align*}
So, \eqnref{c_diff_rep} follows.
\qed
\subsection{Estimates of integrals $\mathcal{I}_{jk}$ and $\mathcal{J}_k$ and proof of Proposition \ref{prop_Kcal_Omega_estim}}
In this subsection we derive estimates of the integrals $\mathcal{I}_{jk}$ and $\mathcal{J}_{\Omega,k}$, and prove
Proposition \ref{prop_Kcal_Omega_estim} as a consequence. Some of the estimates obtained in this subsection will also be used in the next subsection.
\begin{lemma}\label{prop_I11_I22_I12_asymp}
The following holds:
\begin{align}
\mathcal{I}_{11} &= m_1 \epsilon^{-1/2} + O(\tau|\ln\epsilon| + 1), \label{Ical11} \\
\mathcal{I}_{12} &= O(\tau|\ln\epsilon| + 1), \label{Ical12} \\
\mathcal{I}_{22} &= m_2 \epsilon^{-1/2} + O(\tau|\ln\epsilon| + 1), \label{Ical22}
\end{align}
as $\epsilon \to 0$.
\end{lemma}
\noindent {\sl Proof}. \
According to \eqnref{hj_def_decomp}, we have
\begin{align*}
\mathcal{I}_{jk} &= \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_j :\widehat{\nabla} {\bf h}_k
=
\int_{D^e} \mathbb{C}\widehat{\nabla} (\frac{m_j}{\sqrt\epsilon}{\bf q}_j + {\bf r}_j):\widehat{\nabla} {\bf h}_k
\\
&= \frac{m_j}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf h}_k
+\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf h}_k
\\
&=\frac{m_j}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf h}_k
+\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla} (\frac{m_k}{\sqrt\epsilon}{\bf q}_k + {\bf r}_k)
\\
&=\frac{m_j}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf h}_k
+\frac{m_k}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf q}_k
+ \int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla}{\bf r}_k .
\end{align*}
Since
\begin{align*}
\frac{m_k}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf q}_k &=
\frac{m_k}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} ({\bf h}_j - \frac{m_j}{\sqrt\epsilon}{\bf q}_j):\widehat{\nabla} {\bf q}_k
\\
&=
\frac{m_k}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_k:\widehat{\nabla} {\bf h}_j
-\frac{m_j m_k}{\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf q}_k,
\end{align*}
it follows that
\begin{align*}
\mathcal{I}_{jk}&=
\frac{m_j}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf h}_k
+\frac{m_k}{\sqrt\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_k:\widehat{\nabla} {\bf h}_j \\
&\qquad
-\frac{m_j m_k}{\epsilon}\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf q}_k
+ \int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla}{\bf r}_k.
\end{align*}
Then Lemma \ref{cor_betti} yields
\begin{align*}
\mathcal{I}_{jk}
&= -\frac{m_j}{\sqrt\epsilon} \int_{\partial D^e}{\partial_\nu {\bf q}_j}\cdot {\bf h}_k
- \frac{m_k}{\sqrt\epsilon} \int_{\partial D^e}{\partial_\nu {\bf q}_k}\cdot {\bf h}_j \\
&\qquad +
\frac{m_j m_k}{\epsilon}\int_{\partial D^e} {\partial_\nu {\bf q}_j}\cdot {\bf q}_k
+\int_{D^e} \mathbb{C}\widehat{\nabla} {\bf r}_j:\widehat{\nabla}{\bf r}_k.
\end{align*}
Now, \eqnref{Ical11}-\eqnref{Ical22} follow from Lemma \ref{lem_hS_bdry_int12} and \ref{lem_dhS_hS_int12}. In fact, we have from Proposition \ref{prop_Brj} that
$$
\left| \int_{D^e}\mathbb{C} \widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf r}_k \right|
\lesssim \mathcal{E}[{\bf r}_j]^{1/2} \mathcal{E}[{\bf r}_k]^{1/2} \lesssim 1.
$$
Since ${\bf h}_j = (-1)^i \frac{1}{2} \Psi_j$ on $\partial D_i$, we have
\begin{align*}
\mathcal{I}_{jk}
&= -\frac{m_j}{\sqrt\epsilon}\sum_{i=1}^2\frac{(-1)^i}{2}\int_{\partial D_i}{\partial_\nu {\bf q}_j}\cdot \Psi_k
\\
&\quad -\frac{m_k}{\sqrt\epsilon}\sum_{i=1}^2\frac{(-1)^i}{2}\int_{\partial D_i}{\partial_\nu {\bf q}_k}\cdot \Psi_j
+ \frac{m_j m_k}{\epsilon}\int_{\partial D^e} {\partial_\nu {\bf q}_j}\cdot {\bf q}_k + O(1).
\end{align*}
Then, from \eqnref{qjpsik}, \eqnref{qjqj} and \eqnref{q1q2}, we have
\begin{align*}
\mathcal{I}_{11}&= \frac{m_1^2}{\epsilon} \big(-\frac{\sqrt\epsilon}{m_1} + O(\tau\epsilon|\ln\epsilon|+\epsilon)\big) + 2 \frac{m_1}{\sqrt\epsilon} +O(1)
= \frac{m_1}{\sqrt\epsilon} + O(\tau|\ln\epsilon| + 1),
\\
\mathcal{I}_{22}&= \frac{m_2^2}{\epsilon} \big(-\frac{\sqrt\epsilon}{m_2} + O(\tau\epsilon|\ln\epsilon|+\epsilon)\big) + 2 \frac{m_2}{\sqrt\epsilon} +O(1)
= \frac{m_2}{\sqrt\epsilon} + O(\tau|\ln\epsilon| + 1),
\\
\mathcal{I}_{12}&= \frac{m_1 m_2}{\epsilon} O(\tau\epsilon|\ln\epsilon|+\epsilon) +O(1)= O(\tau|\ln\epsilon| + 1).
\end{align*}
This completes the proof.
\qed
\begin{lemma}\label{lem_I13_I23_estim}
We have
\begin{equation}\label{Icalj3}
|\mathcal{I}_{13}|, |\mathcal{I}_{23}| \lesssim 1,
\end{equation}
and
\begin{equation}\label{Ical33}
\mathcal{I}_{33}\approx 1.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
We prove \eqnref{Ical33} first. For that we closely follow the proof of (4.12) in \cite{BLL-ARMA-15}.
Let ${\bf h}_3^K$ be the function defined as follows: for $(x,y)\in \Pi_L$,
\begin{equation}\label{eqn_h3K_def1}
{\bf h}^K_3(x,y)=\frac{ x+f_1(y) }{f_2(y)+f_1(y)} \Psi_3 -\frac{1}{2}\Psi_3.
\end{equation}
We emphasize that
\begin{equation}\label{heKv3K}
{\bf h}_3^K = -\frac{1}{2} {\bf v}_{13}^K + \frac{1}{2} {\bf v}_{23}^K,
\end{equation}
where ${\bf v}_{i3}^K$ is defined by \eqnref{BvikK}.
We then extend ${\bf h}_3^K$ to $D^e \setminus \Pi_{L}$ so that
\begin{align}
\begin{cases}
\displaystyle{\bf h}^K_3 = (-1)^i\frac{1}{2} \Psi_3 \quad &\mbox{ on } \partial D_i, i=1,2,
\\[3mm]
\displaystyle {\bf h}_3^K|_{\mathbb{R}^2\setminus B_0}=0,
\\[0.5em]
\displaystyle\|{\bf h}_3^K\|_{H^1(D^e\setminus \Pi_{L})} \lesssim 1,
\end{cases}
\label{eqn_h3K_def2}
\end{align}
where $B_0$ is a disk which contains $\overline{D_1\cup D_2}$.
It is easy to see that, for $(x,y)\in \Pi_{L}$,
\begin{equation} \label{eqn_h3K_asymp}
\partial_1 h^K_{31} = -\frac{y}{\epsilon+\frac{1}{2}(\kappa_1+\kappa_2) y^2} + O(1),
\end{equation}
and
\begin{equation} \label{eqn_h3K_asymp2}
\partial_2 h^K_{31}, \,
\partial_1 h^K_{32}, \,
\partial_2 h^K_{32} = O(1).
\end{equation}
We mention that these estimates together with Lemma \ref{lem_q3_asymp} show that $\nabla {\bf h}_3^K$ and $\nabla {\bf q}_3$ have the same behavior in $\Pi_L$. In fact, we have
\begin{equation}\label{q3h3}
|\nabla {\bf q}_3 - \nabla {\bf h}_3^K| \lesssim 1 \quad \mbox{in }\Pi_L.
\end{equation}
This estimate will be used in the proof of Proposition \ref{hGOjandBq}.
By Lemma \ref{lem_var_principle}, we have
\begin{align*}
\mathcal{I}_{33}&=\mathcal{E}_{D^e}[{\bf h}_3] \leq \mathcal{E}_{D^e} [{\bf h}_3^K]
\lesssim
\int_{D^e} |\nabla{\bf h}_3^K|^2
\\
&\lesssim\int_{\Pi_{L}} |\nabla{\bf h}^K_3|^2 + \int_{D^e\setminus \Pi_{L}} |\nabla{\bf h}^K_3|^2
\\
&\lesssim\int_{-L}^{L}\int_{-f_1(y)}^{f_2(y)} \big(\frac{|y|}{\epsilon+y^2}\big)^2 \,dx\, dy +1
\\
&\lesssim \int_{-L}^{L} \frac{y^2}{\epsilon+y^2} \,dy +1
\lesssim 1,
\end{align*}
where the second to last inequality holds since $f_2(y)+f_1(y) \lesssim \epsilon + y^2$.
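The last integral is bounded by an elementary estimate: since $y^2/(\epsilon+y^2)\le 1$,
$$
\int_{-L}^{L} \frac{y^2}{\epsilon+y^2} \,dy \le \int_{-L}^{L} 1 \,dy = 2L \lesssim 1.
$$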
To prove the opposite inequality, we invoke a result in \cite{BLL-ARMA-15}: For any ${\bf v}\in H^1(\Pi_{L}\setminus\Pi_{L_0})$ satisfying ${\bf v}=0$ on $\partial D_1 \cap \partial (\Pi_{L}\setminus\Pi_{L_0})$, it holds
\begin{equation}\label{Korn_ver_h3}
\int_{\Pi_{L}\setminus\overline{\Pi_{L_0}}} |\nabla {\bf v}|^2
\lesssim \int_{\Pi_{L}\setminus\overline{\Pi_{L_0}}} |\widehat{\nabla} {\bf v}|^2.
\end{equation}
(See the proof of (4.12) in \cite{BLL-ARMA-15}.)
Let $\widetilde{{\bf h}}_3 := {\bf h}_3 + \frac{1}{2}\Psi_3$. Then $\widetilde{{\bf h}}_3=0$ on $\partial D_1 \cap \partial(\Pi_{L}\setminus\Pi_{L_0})$ and
$\widetilde{{\bf h}}_3=\Psi_3$ on $\partial D_2 \cap \partial(\Pi_{L}\setminus\Pi_{L_0})$. Therefore, using \eqnref{Korn_ver_h3}, we have
\begin{align}
\mathcal{I}_{33} = \mathcal{E}_{D^e}[{\bf h}_3]=\mathcal{E}_{D^e}[\widetilde{{\bf h}}_3] \gtrsim \int_{\Pi_{L}\setminus\overline{\Pi_{L_0}}} |\nabla \widetilde{{\bf h}}_3|^2 \gtrsim 1.
\end{align}
So, \eqnref{Ical33} is proved.
To prove \eqnref{Icalj3}, let $j=1$ or $2$. From Lemma \ref{cor_betti} and \eqnref{hj_def_decomp}, we have
\begin{align*}
\mathcal{I}_{j3}
&=\frac{m_j}{\sqrt\epsilon}\int_{D^e} \widehat{\nabla} {\bf q}_j:\widehat{\nabla} {\bf h}_3 +\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf h}_3 \\
&=-\frac{m_j}{\sqrt\epsilon}\int_{\partial D^e} \partial_\nu {\bf q}_j \cdot{\bf h}_3 +\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf h}_3 \\
&=\frac{1}{2} \frac{m_j}{\sqrt\epsilon} \big(\int_{\partial D_1} \partial_\nu {\bf q}_j \cdot\Psi_3
- \int_{\partial D_2} \partial_\nu {\bf q}_j \cdot\Psi_3\big)
+\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf r}_j:\widehat{\nabla} {\bf h}_3 \\
&=: I+II.
\end{align*}
From Lemma \ref{lem_hS_bdry_int12} (ii) and the fact that $a \approx \sqrt\epsilon$, we have
$$
I=
\frac{m_j}{\sqrt\epsilon} \delta_{2j}\, {a(-1 + 4\pi \alpha_2 \mu)} =O(1).
$$
It is clear from Proposition \ref{prop_Brj} and \eqnref{Ical33} that
$$
|II| \lesssim {\mathcal{E}_{D^e}[{\bf r}_j]}^{1/2} {\mathcal{I}_{33}}^{1/2}\lesssim 1.
$$
This proves \eqnref{Icalj3}.
\qed
\begin{lemma}\label{lem_Jk_Omega_estim}
We have
$$
|\mathcal{J}_{\Omega,k}| \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}, \quad k=1,2,3.
$$
\end{lemma}
Before proving Lemma \ref{lem_Jk_Omega_estim}, let us make a short remark on the regularity of ${\bf H}_\Omega$.
Recall that ${\bf H}_\Omega$ is defined by
$$
{\bf H}_\Omega = -\mathcal{S}_{\partial \Omega}\big[\partial_\nu {\bf u}\big|_{\partial\Omega}\big]+\mathcal{D}_{\partial \Omega}[{\bf g}]\quad \mbox{in }\Omega.
$$
As shown in \cite{BLL-ARMA-15}, we have
\begin{equation}\label{Omega_gradu_bdd}
\| \nabla {\bf u}\|_{L^\infty(\widetilde\Omega \setminus \Pi_{L})} \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
In particular, we have
$$
\| \partial_\nu {\bf u} \|_{L^\infty(\partial\Omega)}\lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
$$
So, for any $\Omega_1$ such that $\overline{\Omega_1} \subset \Omega$, we have
\begin{equation}\label{HOmega_regular}
\|{\bf H}_\Omega \|_{C^2(\Omega_1)}\lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
We also have
\begin{equation}\label{HOmega_regular2}
\|{\bf H}_\Omega \|_{H^1(\partial\Omega)} \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}.
\end{equation}
The importance of these inequalities is that they hold independently of $\epsilon$.
\noindent{\sl Proof of Lemma \ref{lem_Jk_Omega_estim}}.
Let us first consider the case when $k=1,2$. For simplicity, we assume $\| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)}=1$.
Since $\int_{\partial D^e} {\partial_\nu {\bf H}_\Omega} \cdot {\bf h}_k=0$, we have
$$
\mathcal{J}_{\Omega,k} = \int_{\partial D^e} {\partial_\nu{\bf h}_k} \cdot {\bf H}_\Omega
- \int_{\partial D^e} {\partial_\nu {\bf H}_\Omega} \cdot {\bf h}_k.
$$
Then \eqnref{hj_def_decomp} yields
\begin{align*}
\mathcal{J}_{\Omega,k} &=
\frac{m_k}{\sqrt\epsilon}\big(\int_{\partial D^e} {\partial_\nu {\bf q}_k} \cdot {\bf H}_\Omega
- {\partial_\nu {\bf H}_\Omega} \cdot {\bf q}_k\big)
+ \int_{\partial D^e} {\partial_\nu {\bf r}_k} \cdot {\bf H}_\Omega - \int_{\partial D^e} {\partial_\nu {\bf H}_\Omega} \cdot {\bf r}_k
\\
&:=I_k + II_k + III_k.
\end{align*}
Green's formula for the Lam\'e system and \eqnref{qonedirac} yield
\begin{align}
&\frac{\sqrt\epsilon}{m_1} I_1 = \int_{D_1\cup D_2} \mathcal{L}_{\lambda,\mu}{\bf q}_1 \cdot {\bf H}_\Omega \nonumber \\
&={ ({\bf H}_\Omega({\bf p}_1)-{\bf H}_\Omega({\bf p}_2))\cdot{\bf e}_1 }
- \sum_{j=1}^2 \frac{ \alpha_2 a}{(\alpha_1-\alpha_2)} { (\partial_1 {\bf H}_\Omega({\bf p}_j)\cdot{\bf e}_1+\partial_2 {\bf H}_\Omega({\bf p}_j)\cdot {\bf e}_2)}. \label{m1I1}
\end{align}
Since $a \approx \sqrt\epsilon$ and \eqnref{HOmega_regular} holds, we have
$$
\sum_{j=1}^2 \frac{ \alpha_2 a}{(\alpha_1-\alpha_2)} { (\partial_1 {\bf H}_\Omega({\bf p}_j)\cdot{\bf e}_1+\partial_2 {\bf H}_\Omega({\bf p}_j)\cdot {\bf e}_2)} = O(\sqrt{\epsilon}).
$$
Since ${\bf p}_1 = (-a,0)$ and ${\bf p}_2=(a,0)$, the mean value theorem shows that there is a point, say ${\bf p}_*$, on the line segment $\overline{{\bf p}_1{\bf p}_2}$ such that
$$
|({\bf H}_\Omega({\bf p}_1)-{\bf H}_\Omega({\bf p}_2))\cdot{\bf e}_1| \le 2a |\partial_1 {\bf H}_\Omega({\bf p}_*)\cdot {\bf e}_1|.
$$
So, we have
$$
|({\bf H}_\Omega({\bf p}_1)-{\bf H}_\Omega({\bf p}_2))\cdot{\bf e}_1| \lesssim \sqrt\epsilon.
$$
Therefore, from \eqnref{m1I1}, we obtain
$$
I_1 = O(1).
$$
Similarly, one can show
$$
I_2 = O(1).
$$
Next we estimate $II_k$. Let ${\bf v}\in \mathcal{A}^*$ be the solution to the following exterior Dirichlet problem
\begin{equation}\label{eqn_vH}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf v}= 0 \quad &\mbox{ in } D^e,\\[2mm]
\displaystyle {\bf v}= {{\bf H}_\Omega} \quad &\mbox{ on } \partial D^e.
\end{array}
\right.
\end{equation}
From \eqnref{variation}, we have
\begin{equation}
\mathcal{E}_{D^e}[{\bf v}] =\min_{{\bf w}\in W} \mathcal{E}_{D^e}[{\bf w}],
\end{equation}
where
$$
W: = \big\{ {\bf w}\in \mathcal{A}^* : {\bf w}|_{\partial D^e} = {\bf H}_\Omega\big\}.
$$
Let ${\bf w}$ be a function such that
\begin{equation}
\begin{cases}
{\bf w}|_{\Pi_L \cup \partial D^e}={\bf H}_\Omega,
\\
{\bf w}|_{\mathbb{R}^2\setminus B_0}=0,
\\
\| {\bf w} \|_{H^1(\mathbb{R}^2)} \lesssim \| {\bf g}\|_{C^{1,\gamma}(\partial\Omega)},
\end{cases}
\end{equation}
where $B_0$ is a disk which contains $\overline{D_1\cup D_2}$. It is worth mentioning that the third condition above is fulfilled thanks to \eqnref{HOmega_regular}.
Since ${\bf w}\in W$, we have
\begin{equation}\label{vH}
\mathcal{E}_{D^e}[{\bf v}] \leq \mathcal{E}_{D^e}[{\bf w}] \lesssim 1.
\end{equation}
Then, using Lemma \ref{cor_betti} and Proposition \ref{prop_Brj}, we obtain
\begin{align}
|II_k| &=
\left| \int_{\partial D^e} {\partial_\nu {\bf r}_k} \cdot {\bf v} \right| \nonumber \\
&= \left| \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf r}_k : \widehat{\nabla}{\bf v} \right|
\lesssim {\mathcal{E}_{D^e}[{\bf r}_k]}^{1/2}{\mathcal{E}_{D^e}[{\bf v}]}^{1/2} \lesssim 1, \quad k=1,2.
\label{eqn_JkII_1}
\end{align}
Let us now consider $III_k$. We see from Lemma \ref{lem_qj_far_estim} and \ref{lem_hS_bdry_estim} that
\begin{align*}
\big|{\bf r}_k|_{\partial D_i}\big| & = \left| \frac{(-1)^i}{2}\Psi_k- \frac{m_k}{\sqrt\epsilon}{\bf q}_k|_{\partial D_i} \right|
\lesssim 1.
\end{align*}
So it follows from \eqnref{HOmega_regular2} that
$$
|III_k| \lesssim \| \partial_\nu {\bf H}_\Omega \|_{L^2(\partial\Omega)} \lesssim 1, \quad k=1,2.
$$
Therefore, we have
$$
|\mathcal{J}_{\Omega,k}|\leq |I_k|+|II_k|+|III_k| \lesssim 1, \quad k=1,2.
$$
To deal with the case when $k=3$, let ${\bf v}$ be the solution to \eqnref{eqn_vH} as before. It follows from \eqnref{Ical33} and \eqnref{vH} that
\begin{align*}
|\mathcal{J}_{\Omega,3}| &= \left| \int_{\partial D^e} {\partial_\nu {\bf h}_3} \cdot {\bf v} \right|
\\
&= \left| \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf h}_3 : \widehat{\nabla}{\bf v} \right|
\lesssim \mathcal{I}_{33}^{1/2} \mathcal{E}_{D^e}[{\bf v}]^{1/2} \lesssim 1.
\end{align*}
The proof is completed.
\qed
Proposition \ref{prop_Kcal_Omega_estim} follows from Lemma \ref{prop_I11_I22_I12_asymp}, \ref{lem_I13_I23_estim} and \ref{lem_Jk_Omega_estim}.
\subsection{Proof of Proposition \ref{prop_diff_c1_c2_asymp_bdd}}
Set
$$
\mathcal{I}:=
\begin{bmatrix}
\mathcal{I}_{11} && \mathcal{I}_{12} && \mathcal{I}_{13} \\
\mathcal{I}_{12} && \mathcal{I}_{22} && \mathcal{I}_{23} \\
\mathcal{I}_{13} && \mathcal{I}_{23} && \mathcal{I}_{33}
\end{bmatrix}.
$$
From Lemma \ref{prop_I11_I22_I12_asymp} and \ref{lem_I13_I23_estim}, we have
\begin{align}
\mbox{det}\, \mathcal{I} &=
{\mathcal{I}_{11}} \big(\mathcal{I}_{22} \mathcal{I}_{33}- \mathcal{I}_{23}^2\big)
-
{\mathcal{I}_{12}} \big(\mathcal{I}_{12} \mathcal{I}_{33}- \mathcal{I}_{13}\mathcal{I}_{23}\big)
+
{\mathcal{I}_{13}} \big(\mathcal{I}_{12} \mathcal{I}_{23}- \mathcal{I}_{13}\mathcal{I}_{22}\big)
\notag
\\
&= \mathcal{I}_{11} \mathcal{I}_{22} \mathcal{I}_{33} + O(\epsilon^{-1/2})
= \frac{m_1 m_2}{\epsilon} \mathcal{I}_{33}(1+O(\sqrt\epsilon)).
\label{detI_asymp}
\end{align}
So, by \eqnref{Ical33}, the matrix $\mathcal{I}$ is invertible for sufficiently small $\epsilon$.
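Let us indicate the computation behind \eqnref{detI_asymp}. By \eqnref{Ical11} and \eqnref{Ical22}, the dominant product in the cofactor expansion is
$$
\mathcal{I}_{11}\mathcal{I}_{22}\mathcal{I}_{33} = \frac{m_1 m_2}{\epsilon}\, \mathcal{I}_{33}\big(1+O(\sqrt\epsilon)\big),
$$
while, for instance, $|\mathcal{I}_{11}\mathcal{I}_{23}^2| \lesssim \epsilon^{-1/2}$ and $|\mathcal{I}_{13}^2\mathcal{I}_{22}| \lesssim \epsilon^{-1/2}$ by \eqnref{Icalj3}, so the remaining products are absorbed into the $O(\epsilon^{-1/2})$ term.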
By Lemma \ref{lem_represent_c1_c2_bdd} and Cramer's rule, we have
\begin{align}
c_{11}-c_{21} &=
\frac{\mathcal{J}_{\Omega,1}}{\mbox{det}\,\mathcal{I}} \big(\mathcal{I}_{22} \mathcal{I}_{33}- \mathcal{I}_{23}^2\big)
-
\frac{\mathcal{J}_{\Omega,2}}{\mbox{det}\,\mathcal{I}} \big(\mathcal{I}_{12} \mathcal{I}_{33}- \mathcal{I}_{13}\mathcal{I}_{23}\big)
\notag
\\
&\quad +
\frac{\mathcal{J}_{\Omega,3}}{\mbox{det}\,\mathcal{I}} \big(\mathcal{I}_{12} \mathcal{I}_{23}- \mathcal{I}_{13}\mathcal{I}_{22}\big).
\label{c11_c21_rep}
\end{align}
Recall from
Lemma \ref{prop_I11_I22_I12_asymp}, \ref{lem_I13_I23_estim} and \ref{lem_Jk_Omega_estim} that
$$
\mathcal{I}_{11}, \mathcal{I}_{22} \approx \epsilon^{-1/2}, \quad
|\mathcal{I}_{12}| \lesssim 1+\tau |\ln\epsilon|,
\quad
|\mathcal{I}_{j3}|\lesssim 1, \quad \mathcal{I}_{33}\approx 1,
$$
and
$$
|\mathcal{J}_{\Omega,j}| \lesssim \|{\bf g}\|_{C^{1,\gamma}(\partial\Omega)}, \quad
$$
for $j=1,2,3$.
For simplicity, we may assume $\|{\bf g}\|_{C^{1,\gamma}(\partial\Omega)}=1$.
Then, from \eqnref{detI_asymp} and \eqnref{c11_c21_rep}, we can easily check that
\begin{align*}
c_{11}-c_{21}&=
\frac{\mathcal{J}_{\Omega,1}}{\mbox{det}\,\mathcal{I}} \mathcal{I}_{22}\mathcal{I}_{33} -
\frac{\mathcal{J}_{\Omega,3}}{\mbox{det}\,\mathcal{I}} \mathcal{I}_{13}\mathcal{I}_{22}
+ O(\epsilon+\tau \epsilon|\ln\epsilon|).
\end{align*}
Hence, by applying \eqnref{Ical11} and the second equality in \eqnref{detI_asymp}, we obtain
$$
c_{11}-c_{21}
= \frac{\sqrt\epsilon}{m_1}\big(\mathcal{J}_{\Omega,1}-\frac{\mathcal{J}_{\Omega,3} \mathcal{I}_{13}}{\mathcal{I}_{33}}\big) + O(\epsilon+\tau \epsilon|\ln\epsilon|).
$$
Similarly, we have
\begin{align*}
c_{12}-c_{22}&=\frac{\sqrt\epsilon}{m_2}\big(\mathcal{J}_{\Omega,2}-\frac{\mathcal{J}_{\Omega,3} \mathcal{I}_{23}}{\mathcal{I}_{33}}\big) + O(\epsilon+\tau \epsilon|\ln\epsilon|),
\\
c_{13}-c_{23}&= \frac{\mathcal{J}_{\Omega,3}}{\mathcal{I}_{33}} + O(\sqrt\epsilon + \tau \sqrt\epsilon|\ln\epsilon|).
\end{align*}
Finally, the definition \eqnref{def_Kcal_Omega} of $\mathcal{K}_{\Omega,j}$ yields \eqnref{c11c21}-\eqnref{c13c23}. The proof of Proposition \ref{prop_diff_c1_c2_asymp_bdd} is completed.
\qed
\subsection{Proof of Proposition \ref{hGOjandBq}}
To prove \eqnref{rGOj} we modify the function ${\bf r}_j^K$ introduced in \eqnref{eqn_def_rKj_PiL0}. Let
${\bf r}_{\Omega,j}^K$ be a function in $C^2(\mathbb{R}^2)$ such that
\begin{equation}\label{6000}
\begin{cases}
\displaystyle{\bf r}_{\Omega,j}^K = {\bf r}_j^K|_{\Pi_{L_0}} \quad & \mbox{in } \Pi_{L_0},
\\[2mm]
\displaystyle{\bf r}_{\Omega,j}^K = \frac{(-1)^i}{2} \Psi_j - \frac{m_j}{\sqrt\epsilon}{\bf q}_j\quad &\mbox{on } \partial D_i, \ i=1,2,
\\[2mm]
\displaystyle {\bf r}_{\Omega,j}^K =-\frac{m_j}{\sqrt\epsilon}{\bf q}_j \quad & \mbox{on } \partial \Omega,
\end{cases}
\end{equation}
and
\begin{equation}\label{rK_Omega_C2}
\|{\bf r}^K_{\Omega,j} \|_{H^1(\mathbb{R}^2\setminus \Pi_{L_0})} \lesssim 1.
\end{equation}
We emphasize that ${\bf r}_{\Omega,j}^K= {\bf r}_{j}^K$ is a linear interpolation in the gap region $\Pi_{L_0}$. Note that $\nabla {\bf r}_j^K|_{\Pi_{L_0}}$ is already estimated in Lemma \ref{lem_rjK_grad_estim}.
Let
\begin{equation}\label{eqn_decomp_r_Omega_j}
{\bf w}_j := {\bf r}_{\Omega,j}- {\bf r}^K_{\Omega,j}, \quad j=1,2 ,
\end{equation}
where ${\bf r}_{\Omega,j}$ is the function defined by \eqnref{eqn_hOj_qj_rOj_decomp}. We see that the function ${\bf w}_j$ is the solution to the following problem:
\begin{equation}\label{eqn_wj}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf w}_j= -\mathcal{L}_{\lambda,\mu}{\bf r}_{\Omega,j}^K \quad &\mbox{ in } \widetilde\Omega,\\[2mm]
\displaystyle {\bf w}_j= 0 \quad &\mbox{ on } \partial D_1 \cup \partial D_2 \cup \partial \Omega.
\end{array}
\right.
\end{equation}
The following lemma can be proved by arguments parallel to the proof of Lemma 3.6 in \cite{BLL-ARMA-15}. So, we omit the proof.
\begin{lemma}\label{lem_grad_estim_YYLi}
Let ${\bf v}$ be a solution to
$\mathcal{L}_{\lambda,\mu} {\bf v}= -\mathcal{L}_{\lambda,\mu}{\bf f}$ in $\widetilde{\Omega}$ with
${\bf v}=0$ on $\partial D_1\cup \partial D_2 \cup \partial\Omega$, where ${\bf f}$ is a given function belonging to $C^2(\mathbb{R}^2)$.
Assume that the following conditions hold:
\begin{itemize}
\item[(i)]
The function ${\bf v}$ satisfies
$$
\int_{\widetilde{\Omega}} |\nabla {\bf v}|^2\lesssim 1.
$$
\item[(ii)] The function ${\bf f}$ satisfies
$$
|(\mathcal{L}_{\lambda,\mu}{\bf f})(x,y)| \lesssim\frac{1}{\epsilon+y^2} \quad \mbox{ for }(x,y)\in \Pi_{L}.
$$
\end{itemize}
Then we have, for $0<L'<L$,
$$
\|\nabla {\bf v}\|_{L^\infty(\Pi_{L'})}\lesssim 1.
$$
\end{lemma}
\begin{lemma}\label{cor_estim_wj}
For $j=1,2$, let ${\bf w}_j$ be the solution to \eqnref{eqn_wj}.
Then we have
\begin{equation}\label{nablaBw}
|\nabla {\bf w}_j({\bf x})|\lesssim 1 \quad \mbox{ \rm for }{\bf x}\in \Pi_{L_0}.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
It suffices to show that the hypotheses (i) and (ii) of Lemma \ref{lem_grad_estim_YYLi} are fulfilled, namely,
\begin{equation}\label{hypo1}
\int_{\widetilde\Omega} |\nabla {\bf w}_j|^2\lesssim 1
\end{equation}
and
\begin{equation}\label{hypo2}
|(\mathcal{L}_{\lambda,\mu} {\bf r}_{\Omega,j}^K)(x,y)|\lesssim \frac{1}{\epsilon+y^2}
\quad \mbox{for } (x,y)\in \Pi_{L}.
\end{equation}
By the first Korn's inequality, the variational principle and Lemma \ref{lem_rjK_grad_estim}, we have
\begin{align*}
\int_{\widetilde\Omega} |\nabla {\bf w}_j|^2 &\leq
2 \int_{\widetilde\Omega} |\widehat{\nabla} {\bf w}_j|^2
\lesssim\int_{\widetilde\Omega} \mathbb{C} \widehat{\nabla} {\bf w}_j: \widehat{\nabla} {\bf w}_j
\\
&\lesssim \int_{\widetilde\Omega} \mathbb{C} \widehat{\nabla} {\bf r}_{\Omega,j}: \widehat{\nabla} {\bf r}_{\Omega,j}
+ \int_{\widetilde\Omega} \mathbb{C} \widehat{\nabla} {\bf r}_{\Omega,j}^K: \widehat{\nabla} {\bf r}_{\Omega,j}^K
\\
&\lesssim \int_{\widetilde\Omega} \mathbb{C} \widehat{\nabla} {\bf r}_{\Omega,j}^K: \widehat{\nabla} {\bf r}_{\Omega,j}^K
\\
&\lesssim
\int_{\Pi_{L_0}} |\nabla {\bf r}_{j}^K|^2 + \int_{\widetilde{\Omega}\setminus\Pi_{L_0}} |\nabla {\bf r}_{j}^K|^2
\lesssim\int_{-L_0}^{L_0} \int_{-f_1(y)}^{f_2(y)} \frac{\epsilon+|y|}{\epsilon+y^2} \,dx dy +1
\\
&\lesssim
\int_{-L_0}^{L_0} (\epsilon+|y|)\,dy
+1
\lesssim 1.
\end{align*}
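Here the inner integral in $x$ was estimated using $f_1(y)+f_2(y) \lesssim \epsilon+y^2$:
$$
\int_{-f_1(y)}^{f_2(y)} \frac{\epsilon+|y|}{\epsilon+y^2} \,dx = \big(f_1(y)+f_2(y)\big) \frac{\epsilon+|y|}{\epsilon+y^2} \lesssim \epsilon+|y|.
$$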
So we obtain \eqnref{hypo1}.
We now prove \eqnref{hypo2}. Let $d$, $\phi$ and $\eta$ be the functions defined in \eqnref{eqn_def_d_phi}. It follows from \eqnref{dx_rK_1x} and \eqnref{eqn_dy_rK_1x} that, for $(x,y)\in\Pi_{L_0}$,
\begin{align}
\partial_{11} r^K_{\Omega,11}=\partial_{11} r^K_{11} &=0,
\nonumber\\[0.5em]
\partial_{12} r^K_{\Omega,11}=\partial_{12} r^K_{11} &= \frac{\phi'}{d}-\frac{ \phi d'}{d^2},
\nonumber\\[0.5em]
\partial_{22} r^K_{\Omega,11}=\partial_{22} r^K_{11} &=
\left[
\frac{\phi''}{d} -\frac{2\phi' d' }{d^2} -\frac{\phi d''}{d^2}
+\frac{2 \phi d'^2}{d^3} \right]x
\nonumber\\
&\quad \quad
+\frac{\phi'' f_1}{d} + 2\frac{\phi'f_1'}{d} -2\frac{\phi' f_1 d'}{d^2} + \frac{\phi f_1''}{d}
\nonumber\\
&\quad\quad
-2 \frac{\phi f_1' d'}{d^2} - \frac{\phi f_1 d''}{d^2}+ 2 \frac{\phi f_1 d'^2 }{d^3}
+\eta''. \label{dyy_rK_1x}
\end{align}
In addition to \eqnref{eqn_estim_d} and \eqnref{eqn_estim_phi}, we have
\begin{equation}\label{eqn_estim_phi_recall_modified}
|\phi''(y)|,|\eta''(y)|\lesssim \frac{1}{\epsilon+y^2}.
\end{equation}
Then, using \eqnref{eqn_estim_d}, \eqnref{eqn_estim_phi}, \eqnref{dyy_rK_1x}, \eqnref{eqn_estim_phi_recall_modified} and the fact that $|x|\lesssim\epsilon+y^2$ for $(x,y)\in\Pi_{L_0}$, we have
\begin{align*}
|\partial_{12} r^K_{\Omega,11}| &\lesssim \frac{1}{\epsilon+y^2} + \frac{(\epsilon+y^2)|y|}{(\epsilon+y^2)^2}
\lesssim \frac{1}{\epsilon+y^2},
\end{align*}
and
\begin{align*}
|\partial_{22} r^K_{\Omega,11}| &\lesssim \left[\frac{1}{\epsilon+y^2} + \frac{|y|}{(\epsilon+y^2)^2}+\frac{\epsilon+y^2
}{(\epsilon+y^2)^2}+ \frac{(\epsilon+y^2)y^2}{(\epsilon+y^2)^3}\right] (\epsilon+y^2)
\\[0.5em]
&\qquad + \frac{1}{{\epsilon}+y^2}\frac{\epsilon+y^2}{\epsilon+y^2} + \frac{|y|}{\epsilon+y^2} +\frac{(\epsilon+y^2)|y|}{(\epsilon+y^2)^2}+ \frac{\epsilon+y^2}{\epsilon+y^2}
\\
&\qquad +
\frac{(\epsilon+y^2)y^2}{(\epsilon+y^2)^2} + \frac{(\epsilon+y^2)(\epsilon+y^2)}{(\epsilon+y^2)^2} + \frac{(\epsilon+y^2)(\epsilon+y^2)y^2}{(\epsilon+y^2)^3} + \frac{1}{{\epsilon}+y^2}
\\
&
\lesssim \frac{1}{{\epsilon}+y^2}.
\end{align*}
The proof is completed.
\qed
Now we are ready to prove Proposition \ref{hGOjandBq}.
\noindent{\sl Proof of Proposition \ref{hGOjandBq}}. Let us look into estimates in $\widetilde{\Omega}\setminus\Pi_{L}$ first. Let ${\bf v}_{ij}$ be the function defined in \eqnref{vikyan}. It is proved in \cite{BLL-ARMA-15} that
$$
\| \nabla {\bf v}_{ij} \|_{L^\infty(\widetilde{\Omega}\setminus\Pi_{L})} \lesssim 1, \quad i=1,2, \ \ j=1,2,3.
$$
Since ${\bf h}_{\Omega,j}=-\frac{1}{2}{\bf v}_{1j}+\frac{1}{2}{\bf v}_{2j}$, we have
$$
\| \nabla {\bf h}_{\Omega,j}\|_{L^\infty(\widetilde{\Omega}\setminus\Pi_{L_0})} \lesssim 1, \quad j=1,2,3.
$$
This estimate together with \eqnref{Bqjest} and \eqnref{Bq3est2} yields the second part of \eqnref{rGOj} and \eqnref{rGO3} on $\widetilde{\Omega}\setminus\Pi_{L_0}$.
By \eqnref{Brjest} and the first line of \eqnref{6000}, we have
$$
|\nabla {\bf r}_{\Omega,j}^K({\bf x}) | = |\nabla {\bf r}_j^K ({\bf x})| \lesssim 1+\frac{\tau |y|}{\epsilon+y^2} \quad \mbox{for }{\bf x}\in \Pi_{L_0}.
$$
Then, the first part of \eqnref{rGOj} follows from \eqnref{eqn_decomp_r_Omega_j} and \eqnref{nablaBw}.
The estimate \eqnref{rGO3} on $\Pi_{L}$ follows from \eqnref{eqn_hGOk_v1k_v2k}, \eqnref{Bao2}, \eqnref{heKv3K} and \eqnref{q3h3}. In fact, we have on $\Pi_L$
\begin{align*}
\nabla {\bf h}_{\Omega,3} &=-\frac{1}{2} \nabla {\bf v}_{13} + \frac{1}{2} \nabla {\bf v}_{23} \\
&=-\frac{1}{2} \nabla {\bf v}_{13}^K + \frac{1}{2} \nabla {\bf v}_{23}^K + O (1) \\
&= \nabla {\bf h}_3^K + O (1) = \nabla {\bf q}_3 + O (1).
\end{align*}
This completes the proof.
\qed
\section{Stress concentration-the free space problem}\langlebel{sec:free}
In this section we consider the free space problem \eqnref{elas_eqn_free} and characterize the singular behavior of its solution.
Analogously to $\mathcal{J}_{\Omega,j}$ in \eqnref{def_Ical_Jcal}, we define
\begin{equation}\label{def_Jcal_free}
\mathcal{J}_{j}=\int_{\partial D^e} {\partial_\nu {\bf h}_j} \cdot {\bf H}, \quad j=1,2,3,
\end{equation}
where ${\bf H}$ is the background solution of the problem \eqnref{elas_eqn_free}. It is worth emphasizing that
$\mathcal{J}_j$ is defined using ${\bf H}$ while $\mathcal{J}_{\Omega,j}$ uses ${\bf H}_\Omega$. Analogously to $\mathcal{K}_{\Omega,j}$ in \eqnref{def_Kcal_Omega}, we define
\begin{equation}\label{def_Kcal}
\mathcal{K}_1=\mathcal{J}_1-\frac{\mathcal{J}_3 \mathcal{I}_{13}}{\mathcal{I}_{33}}, \quad
\mathcal{K}_2=\mathcal{J}_2-\frac{\mathcal{J}_3 \mathcal{I}_{23}}{\mathcal{I}_{33}},\quad
\mathcal{K}_3=\frac{\mathcal{J}_3}{\mathcal{I}_{33}}.
\end{equation}
Then the constants $\mathcal{K}_j$ are bounded regardless of $\epsilon$ (see \eqnref{KGOcaljest}).
The following is the main result of this section.
\begin{theorem}\label{main_thm_2_general_free}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free}.
Then we have the following decomposition of ${\bf u}-{\bf H}$:
\begin{equation}\label{Bu-BH=}
({\bf u}-{\bf H})({\bf x}) = {\bf b}({\bf x})- \sum_{j=1}^3 \big({\mathcal{K}}_{j}+s_j\big){\bf q}_j({\bf x})
, \quad{\bf x}\in D^e,
\end{equation}
where the constants $s_j$, $j=1,2,3$, satisfy
\begin{equation}\label{|s_j|}
|s_j|\lesssim \tau\sqrt\epsilon|\ln\epsilon| \|{\bf H} \|_{H^1(B)},
\end{equation}
and the function ${\bf b}$ satisfies
\begin{equation}\label{nablaBb}
\|\nabla {\bf b}\|_{L^\infty(D^e)} \lesssim \|{\bf H} \|_{H^1(B)}.
\end{equation}
Here, $B$ is a disk containing $\overline{D_1 \cup D_2}$.
\end{theorem}
By a proof analogous to that of Theorem \ref{cor-bdd}, we can derive the following theorem from Theorem \ref{main_thm_2_general_free}.
\begin{theorem}
It holds that
\begin{equation}
\frac{\sum_{j=1,2}|\mathcal{K}_{j}|}{\sqrt{\epsilon}}\lesssim\| \nabla ({\bf u}-{\bf H})\|_{L^\infty(D^e)} \lesssim
\frac{\|{\bf H} \|_{H^1(B)}}{\sqrt{\epsilon}}.
\end{equation}
\end{theorem}
We prove Theorem \ref{main_thm_2_general_free} based on Theorem \ref{main_thm_2_general_bdd}. Let $B$ be a disk containing $\overline{D_1\cup D_2}$. We assume for convenience that the center of $B$ is $0$.
Then the solution ${\bf u}$ to \eqnref{elas_eqn_free} is the solution to \eqnref{elas_eqn_bdd} with $\Omega=B$ and ${\bf g} = {\bf u}|_{\partial B}$.
So we obtain the following decomposition of the solution ${\bf u}$ in $B$ by applying Theorem \ref{main_thm_2_general_bdd}:
\begin{equation}\label{eqn_u_free_decomp_BR}
{\bf u} = {\bf b}_{B} - \sum_{j=1}^3 (\mathcal{K}_{B,j}+s_{B,j}) {\bf q}_j \quad \mbox{in } D^e \cap B,
\end{equation}
where the constants $s_{B,j}$ and the function ${\bf b}_{B}$ satisfy
\begin{equation}\label{eqn_s_BR_estim}
|s_{B,j}| \lesssim \tau \sqrt\epsilon |\ln\epsilon| \| {\bf u}\|_{C^{1,\gamma}(\partial B)}
\end{equation}
and
\begin{equation}\label{eqn_nabla_b_BR_estim}
\|\nabla{\bf b}_{B}\|_{L^\infty(D^e\cap B)} \lesssim \| {\bf u}\|_{C^{1,\gamma}(\partial B)}.
\end{equation}
Although \eqnref{eqn_u_free_decomp_BR} looks similar to \eqnref{Bu-BH=},
there are three things to be clarified. First, the coefficient of ${\bf q}_j$ in \eqnref{eqn_u_free_decomp_BR} is given by $\mathcal{K}_{B,j}$, not by $\mathcal{K}_j$.
Second, the right-hand sides of \eqnref{eqn_s_BR_estim} and \eqnref{eqn_nabla_b_BR_estim} depend on $\epsilon$ since $\|{\bf u}\|_{C^{1,\gamma}(\partial B)}$ does. We need to prove that $\|{\bf u}\|_{C^{1,\gamma}(\partial B)}$ is bounded regardless of $\epsilon$.
Third, the decomposition is valid only in $B$, not in the whole region $D^e$. In the following we elaborate on these issues to show that \eqnref{eqn_u_free_decomp_BR}-\eqnref{eqn_nabla_b_BR_estim} actually yield \eqnref{Bu-BH=}-\eqnref{nablaBb}.
\begin{lemma}\label{Kcalj_lem}
$\mathcal{K}_{B,j} = \mathcal{K}_j$ for $j=1,2,3$.
\end{lemma}
\noindent {\sl Proof}. \
By Green's formula and the fact that ${\bf u}-{\bf H} \in \mathcal{A}$, we have
$$
-\mathcal{S}_{\partial B}\big[\partial_\nu({\bf u}-{\bf H}) |_{\partial B}\big]({\bf x})+\mathcal{D}_{\partial B}[({\bf u}-{\bf H})|_{\partial B}]({\bf x})
=0, \quad {\bf x} \in B.
$$
Then, by Green's formula again, we have
\begin{align*}
{\bf H}_{B}({\bf x}) &= -\mathcal{S}_{\partial B}\big[{\partial_\nu {\bf u}}\big|_{\partial B}\big]({\bf x})+\mathcal{D}_{\partial B}[{\bf u}|_{\partial B}]({\bf x})
\\
&= -\mathcal{S}_{\partial B}\big[{\partial_\nu {\bf H}}\big|_{\partial B}\big]({\bf x})+\mathcal{D}_{\partial B}[{\bf H}|_{\partial B}]({\bf x})
={\bf H}({\bf x}),
\end{align*}
for ${\bf x}\in B$. Therefore, Lemma \ref{Kcalj_lem} follows from \eqnref{def_Ical_Jcal}, \eqnref{def_Kcal_Omega} and \eqnref{def_Kcal}.
\qed
\begin{lemma}\label{lem_uBnorm_estim}
Let $B_1$ be a disk containing $\overline{B}$. We have
\begin{equation}\label{uB_norm_estim}
\| {\bf u}\|_{C^{1,\gamma}(\partial B)} \lesssim \| {\bf H}\|_{H^1(B_1)},
\end{equation}
and
\begin{equation}\label{KGOcaljest}
|\mathcal{K}_j| \lesssim \| {\bf H}\|_{H^1(B_1)}, \quad j=1,2,3.
\end{equation}
\end{lemma}
\noindent {\sl Proof}. \
By Proposition \ref{freeest}, we have
$$
\| {\bf u}\|_{C^{1,\gamma}(\partial B)} \leq \| {\bf u}-{\bf H}\|_{C^{1,\gamma}(\partial B)} + \| {\bf H}\|_{C^{1,\gamma}(\partial B)} \lesssim \| {\bf H}\|_{H^1(B_1)}.
$$
By Proposition \ref{prop_Kcal_Omega_estim} and Lemma \ref{Kcalj_lem}, we have
$$
|\mathcal{K}_{j}|=|\mathcal{K}_{B,j}|\lesssim \| {\bf u}\|_{C^{1,\gamma}(\partial B)}.
$$
So, \eqnref{KGOcaljest} follows from \eqnref{uB_norm_estim}.
\qed
\noindent{\sl Proof of Theorem \ref{main_thm_2_general_free}}.
Let $s_j:=s_{B,j}$. Then it follows from \eqnref{eqn_s_BR_estim} and \eqnref{uB_norm_estim} that
\begin{equation}\label{eqn_sj_estim}
|s_{j}| \lesssim \tau \sqrt\epsilon |\ln\epsilon| \| {\bf H}\|_{H^1(B_1)}.
\end{equation}
Let
\begin{equation}\label{Bb}
{\bf b} := {\bf u}-{\bf H}+\sum_{j=1}^3 (\mathcal{K}_{j}+s_{j}) {\bf q}_j.
\end{equation}
To estimate ${\bf b}$ in $D^e$, we split the region $D^e$ into $D^e\cap B$ and $D^e\setminus B$.
Using \eqnref{eqn_u_free_decomp_BR}, Lemma \ref{Kcalj_lem} and the fact that $s_j=s_{B,j}$, we have
$$
{\bf b} = \Big({\bf u}+\sum_{j=1}^3 (\mathcal{K}_{B,j}+s_{B,j}) {\bf q}_j\Big) - {\bf H} ={\bf b}_{B}-{\bf H} \quad\mbox{in } D^e \cap B.
$$
So, we infer from \eqnref{eqn_nabla_b_BR_estim} and \eqnref{uB_norm_estim} that
\begin{align*}
\|\nabla{\bf b}\|_{L^\infty(D^e\cap B)}&\leq
\|\nabla{\bf b}_{B}\|_{L^\infty(D^e \cap B)}+ \| \nabla {\bf H} \|_{L^\infty(D^e \cap B)}
\\
&\lesssim \| {\bf u}\|_{C^{1,\gamma}(\partial B)} + \|\nabla {\bf H} \|_{L^\infty(D^e \cap B)}
\lesssim \| {\bf H}\|_{H^1(B_1)}.
\end{align*}
Let us now consider estimates on the region $D^e\setminus B$. From Proposition \ref{freeest} and \eqnref{Bb}, we see that
\begin{align*}
\| \nabla{\bf b}\|_{L^\infty(D^e\setminus B)}&
= \Big\|\nabla({\bf u}-{\bf H})+\sum_{j=1}^3 (\mathcal{K}_{j}+s_{j}) \nabla{\bf q}_j \Big\|_{L^\infty(D^e\setminus B)} \\
&\leq
\|\nabla({\bf u}- {\bf H}) \|_{L^\infty(D^e\setminus B)}
+ \Big\| \sum_{j=1}^3 (\mathcal{K}_{j}+s_{j}) \nabla{\bf q}_j \Big\|_{L^\infty(D^e\setminus B)}
\\
&\lesssim \| {\bf H}\|_{H^1(B_1)} + \Big\| \sum_{j=1}^3 (\mathcal{K}_{j}+s_{j}) \nabla{\bf q}_j \Big\|_{L^\infty(D^e\setminus B)}.
\end{align*}
Lemma \ref{lem_qj_far_estim} and \eqnref{Bq3est2} show that
$$
\| \nabla{\bf q}_j\|_{L^\infty(D^e\setminus B)} \lesssim 1, \quad j=1,2,3.
$$
Therefore, from \eqnref{KGOcaljest} and \eqnref{eqn_sj_estim}, we obtain
\begin{align*}
\|\nabla{\bf b}\|_{L^\infty(D^e\setminus B)}
&\lesssim \| {\bf H}\|_{H^1(B_1)} + (1+\sqrt\epsilon|\ln\epsilon|)\| {\bf H}\|_{H^1(B_1)} \lesssim \|{\bf H}\|_{H^1(B_1)}.
\end{align*}
So we have
$$
\|\nabla{\bf b}\|_{L^\infty(D^e)}\lesssim \| {\bf H}\|_{H^1(B_1)},
$$
and the proof of Theorem \ref{main_thm_2_general_free} is completed (with $B$ replaced by $B_1$).
\qed
\section{Symmetric inclusions and optimality of the blow-up rate}\label{sec:symmetric_case}
In this section, we show that \eqnref{Bu-BH=} can be further simplified by assuming some symmetry of the inclusions $D_1$ and $D_2$. More importantly, we show that the blow-up rate $\epsilon^{-1/2}$ of $|\nabla{\bf u}|$ is \emph{optimal} by considering two circular inclusions. The singular functions ${\bf q}_j$ play an essential role here as well.
Let us first assume that the background field ${\bf H}$ can be decomposed as
\begin{equation}\label{H_symm_decomp}
{\bf H} = {\bf H}_e + {\bf H}_o,
\end{equation}
where ${\bf H}_e=(H_{e1}, H_{e2})^T$ and ${\bf H}_o=(H_{o1}, H_{o2})^T$ respectively have the following symmetry properties:
\begin{equation}\label{Hesymm}
\begin{cases}
H_{e1}(x,y)=H_{e1}(x,-y)=-H_{e1}(-x,y), \\
H_{e2}(x,y)=-H_{e2}(x,-y)=H_{e2}(-x,y),
\end{cases}
\end{equation}
and
\begin{equation}\label{Hosymm}
\begin{cases}
H_{o1}(x,y)=-H_{o1}(x,-y)=H_{o1}(-x,y), \\
H_{o2}(x,y)=H_{o2}(x,-y)=-H_{o2}(-x,y).
\end{cases}
\end{equation}
If ${\bf H}$ is a uniform loading, that is, ${\bf H}({\bf x}) = (Ax,B y)^T + C(y, x)^T$ for some real coefficients $A,B$ and $C$, then we may take ${\bf H}_e = (Ax,By)^T$ and ${\bf H}_o = C(y,x)^T$.
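As a quick numerical sanity check (purely illustrative; the coefficients and sample points below are our own choices), one can verify that ${\bf H}_e=(Ax,By)^T$ satisfies \eqnref{Hesymm} and ${\bf H}_o=C(y,x)^T$ satisfies \eqnref{Hosymm}:

```python
# Sanity check (illustrative): the even part He = (A x, B y)^T satisfies the
# symmetry (Hesymm), and the odd part Ho = C (y, x)^T satisfies (Hosymm).
# The coefficients A, B, C and the sample points are arbitrary choices.
A, B, C = 2.0, -1.5, 0.7

def He(x, y):
    return (A * x, B * y)

def Ho(x, y):
    return (C * y, C * x)

def satisfies_Hesymm(H, x, y):
    h1, h2 = H(x, y)
    # H1(x,y) = H1(x,-y) = -H1(-x,y)  and  H2(x,y) = -H2(x,-y) = H2(-x,y)
    return (h1 == H(x, -y)[0] == -H(-x, y)[0]
            and h2 == -H(x, -y)[1] == H(-x, y)[1])

def satisfies_Hosymm(H, x, y):
    h1, h2 = H(x, y)
    # H1(x,y) = -H1(x,-y) = H1(-x,y)  and  H2(x,y) = H2(x,-y) = -H2(-x,y)
    return (h1 == -H(x, -y)[0] == H(-x, y)[0]
            and h2 == H(x, -y)[1] == -H(-x, y)[1])
```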
\subsection{Symmetric inclusions}
Let us assume that $D_1 \cup D_2$ is symmetric with respect to both $x$- and $y$-axes. Then we have the following theorem.
\begin{theorem} \label{thm_symm_charact_free}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free} under the assumption that
$D_1 \cup D_2$ is symmetric with respect to both $x$- and $y$-axes. Then, it holds that
\begin{equation}\label{BuBHBq}
({\bf u}-{\bf H})({\bf x}) = {\bf b}({\bf x})+ {\mathcal{J}}_{1}\mathbf{q}_1({\bf x})
+ {\mathcal{J}}_{2}\mathbf{q}_2({\bf x})
, \quad{\bf x}\in D^e,
\end{equation}
where the function ${\bf b}$ satisfies
\begin{equation}\label{nablabH}
|\nabla {\bf b}({\bf x})| \lesssim \| {\bf H}\|_{H^1(B)} \quad\mbox{for all } {\bf x} \in D^e.
\end{equation}
Here, $B$ is a disk containing $\overline{D_1\cup D_2}$. Moreover, if ${\bf H}={\bf H}_e$, {\it i.e.}, ${\bf H}$ satisfies \eqnref{Hesymm}, then $\mathcal{J}_2=0$; if ${\bf H}$ satisfies \eqnref{Hosymm}, then $\mathcal{J}_1=0$.
\end{theorem}
\noindent {\sl Proof}. \
Since $D_1$ and $D_2$ are symmetric, the number $\tau$ defined by \eqnref{def_tau} is $0$. So it follows from \eqnref{|s_j|} that $s_j=0$ for $j=1,2,3$.
Now it remains to show that $\mathcal{K}_1=\mathcal{J}_1$, $\mathcal{K}_2=\mathcal{J}_2$ and $\mathcal{K}_3=0$, for which it is enough to show that $\mathcal{J}_3=0$ by the definition \eqnref{def_Kcal} of $\mathcal{K}_j$. Recall that
$$
\mathcal{J}_3=\int_{\partial D^e} {\partial_\nu {\bf h}_3}\cdot {\bf H}.
$$
Let ${\bf h}_3=(h_{31},h_{32})^T$. Thanks to the symmetry of the inclusions and the boundary condition of ${\bf h}_3$, one can see that the following two functions are also solutions of \eqnref{hj_def}:
$$
\begin{bmatrix} -h_{31}(x,-y) \\ h_{32}(x,-y) \end{bmatrix}, \quad
\begin{bmatrix} -h_{31}(-x,y) \\ h_{32}(-x,y) \end{bmatrix}.
$$
So, by the uniqueness of the solution we see that ${\bf h}_3$ satisfies the following symmetry:
\begin{equation}\label{h3symm}
\begin{cases}
h_{31}(x,y)=-h_{31}(x,-y)=-h_{31}(-x,y), \\
h_{32}(x,y)=h_{32}(x,-y)=h_{32}(-x,y).
\end{cases}
\end{equation}
The outward normal ${\bf n}=(n_{1}, n_{2})^T$ to $\partial D^e$ satisfies
\begin{equation}\label{nsymm}
\begin{cases}
n_{1}(x,y)=n_{1}(x,-y)=-n_1(-x,y), \\
n_{2}(x,y)=-n_{2}(x,-y)=n_{2}(-x,y).
\end{cases}
\end{equation}
So, the conormal derivative ${\bf f}:=\partial_\nu {\bf h}_3$ on $\partial D^e$ enjoys the following symmetry:
\begin{equation}\label{fsymm}
\begin{cases}
f_1(x,y) = -f_1(x,-y) = -f_1(-x,y), \\
f_2(x,y) = f_2(x,-y) = f_2(-x,y).
\end{cases}
\end{equation}
Let ${\bf H} = {\bf H}_e + {\bf H}_o$ be the decomposition as in \eqnref{H_symm_decomp}. We write $\mathcal{J}_3$ as
$$
\mathcal{J}_3 = \int_{\partial D^e}{\bf f}\cdot {\bf H}_e + \int_{\partial D^e}{\bf f}\cdot {\bf H}_o=: I+II.
$$
Using the symmetry with respect to the $y$-axis in \eqnref{Hesymm} and \eqnref{fsymm}, we have
\begin{align*}
I&=\int_{\partial D_1 } (f_1,f_2)\cdot (H_{e1},H_{e2}) + \int_{\partial D_2} (f_1,f_2)\cdot (H_{e1},H_{e2})
\\
&=\int_{\partial D_1 } (f_1,f_2)\cdot (H_{e1},H_{e2}) + \int_{\partial D_1} (-f_1,f_2)\cdot (-H_{e1},H_{e2})
\\
&=2 \int_{\partial D_1} f_1 H_{e1} + f_2 H_{e2}.
\end{align*}
Then, using the symmetry with respect to the $x$-axis in \eqnref{Hesymm} and \eqnref{fsymm}, we obtain
\begin{align*}
I&= 2\int_{\partial D_1 \cap \{y\geq 0\}} \big( f_1 H_{e1} + f_2 H_{e2} \big) + 2 \int_{\partial D_1 \cap \{y< 0\}} \big( f_1 H_{e1} + f_2 H_{e2} \big)
\\
&= 2 \int_{\partial D_1 \cap \{y\geq 0\}} \big( f_1 H_{e1} + f_2 H_{e2} \big) + 2 \int_{\partial D_1 \cap \{y\geq 0\}} \big( (-f_1) H_{e1} + f_2 (-H_{e2}) \big)
=0.
\end{align*}
In exactly the same way, one can show that $II=0$.
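The cancellation can also be checked numerically on model data (an illustrative sketch, not part of the proof): if ${\bf f}$ has the symmetry \eqnref{fsymm} and ${\bf H}_e$ the symmetry \eqnref{Hesymm}, then the integral of ${\bf f}\cdot{\bf H}_e$ over a curve symmetric with respect to both axes vanishes. Here we take the unit circle and model functions chosen only for their parities:

```python
import math

# Illustrative check: if f has the symmetry (fsymm) and He has (Hesymm),
# then the integral of f . He over a curve symmetric in both axes vanishes.
# We use the unit circle and the model choices f = (x*y, x^2), He = (x, y),
# picked solely because they have the required parities.
def f(x, y):
    return (x * y, x * x)  # f1 odd under both reflections, f2 even under both

def He(x, y):
    return (x, y)          # satisfies (Hesymm)

def circle_integral(n=20000):
    """Midpoint rule for the line integral of f . He over |x| = 1 (ds = dtheta)."""
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * (k + 0.5) / n
        x, y = math.cos(t), math.sin(t)
        f1, f2 = f(x, y)
        h1, h2 = He(x, y)
        total += f1 * h1 + f2 * h2
    return total * 2.0 * math.pi / n
```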
Suppose that ${\bf H}$ has the symmetry property \eqnref{Hesymm}. Let ${\bf g}:=\partial_\nu {\bf h}_2$. Then \eqnref{htwosymm} and \eqnref{nsymm} show that ${\bf g}$ has the following symmetry properties:
\begin{equation}\label{gsymm}
\begin{cases}
g_1(x,y) = -g_1(x,-y)= g_1(-x,y) , \\
g_2(x,y) = g_2(x,-y)= -g_2(-x,y) .
\end{cases}
\end{equation}
So one can see as before that
$$
\mathcal{J}_2 = \int_{\partial D^e}{\bf g}\cdot {\bf H}_e =0.
$$
Similarly, one can show that $\mathcal{J}_1=0$ if ${\bf H}$ has the symmetry property \eqnref{Hosymm}. This completes the proof. \qed
\begin{cor}\label{10000}
Under the same hypothesis as in Theorem \ref{thm_symm_charact_free}, we have
\begin{equation}
(|{\mathcal{J}}_{1}|+|{\mathcal{J}}_{2}|) \frac{1}{\sqrt{\epsilon}}\lesssim \| \nabla ({\bf u}-{\bf H})\|_{L^\infty(D^e)} \lesssim \frac{1}{\sqrt{\epsilon}}.
\end{equation}
\end{cor}
We emphasize that $\mathcal{J}_{1}$ and $\mathcal{J}_{2}$ do depend on $\epsilon$. In the next subsection we show that there are some cases such that $1 \lesssim |\mathcal{J}_{1}| + |\mathcal{J}_{2}|$ by considering circular inclusions. This implies that $\epsilon^{-1/2}$ is the optimal blow-up rate of the gradient.
\subsection{Circular inclusions and optimality of the blow-up rate}
In this subsection we show that $\epsilon^{-1/2}$ is a lower bound on the blow-up rate of the gradient by considering two circular inclusions under a uniform loading.
\begin{prop}\label{prop:J1_J2_lower}
Suppose that $D_1$ and $D_2$ are disks with the same radius $r_0$ and let ${\bf u}$ be the solution to \eqnref{elas_eqn_free}.
Let
\begin{equation}
\alpha^*= \alpha^*(\lambda,\mu) := \frac{\lambda+\mu}{\mu}.
\end{equation}
\begin{itemize}
\item[(i)] If ${\bf H}(x,y)=(A x,B y)^T$ with $A\neq 0$, then there are $\epsilon_0>0$ and $\alpha_0>0$ independent of $\epsilon$ such that for any $(\lambda,\mu)$ satisfying $\alpha^*(\lambda,\mu) \le \alpha_0$ and $\epsilon \le \epsilon_0$ the following holds:
\begin{equation}\label{Jcaluniform1}
1\lesssim|\mathcal{J}_1| \quad \mbox{and} \quad \mathcal{J}_2=0.
\end{equation}
\item[(ii)] If ${\bf H}(x,y) = C(y, x)^T$ with $C\neq 0$, then there are $\epsilon_0>0$ and $\alpha_0>0$ independent of $\epsilon$ such that for any $(\lambda,\mu)$ satisfying $\alpha^*(\lambda,\mu) \le \alpha_0$ and $\epsilon \le \epsilon_0$ the following holds:
\begin{equation}\label{Jcaluniform2}
\mathcal{J}_1=0\quad \mbox{and} \quad 1\lesssim|\mathcal{J}_2|.
\end{equation}
\end{itemize}
\end{prop}
We emphasize that the condition $\alpha^*(\lambda,\mu) \le \alpha_0$ can be satisfied even if $\alpha_0$ is small. In fact, strong convexity only requires $\mu>0$ and $\lambda+\mu>0$, so the condition is satisfied by taking $\lambda$ negative and close to $-\mu$.
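Concretely (a trivial numerical illustration with sample values of our own choosing):

```python
# Illustrative check: strong convexity only requires mu > 0 and lam + mu > 0,
# so alpha* = (lam + mu)/mu can be made arbitrarily small with lam negative.
def alpha_star(lam, mu):
    assert mu > 0 and lam + mu > 0, "strong convexity violated"
    return (lam + mu) / mu
```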
\noindent {\sl Proof}. \
We only prove (i) since (ii) can be proved similarly.
Since ${\bf H}(x,y)=(A x,B y)^T$ satisfies \eqnref{Hesymm}, $\mathcal{J}_{2}=0$ by Theorem \ref{thm_symm_charact_free}.
To prove $1 \lesssim|\mathcal{J}_{1}|$, we define ${\bf r}_1$ by
\begin{equation}\label{Br1def}
(1+ \frac{m_1}{\sqrt\epsilon} t_1){\bf h}_1 = \frac{m_1}{\sqrt\epsilon} {\bf q}_1 + {\bf r}_1,
\end{equation}
where $t_1$ is the constant appearing in Lemma \ref{lem:Bq_on_circle}. Then ${\bf r}_1$ satisfies $\mathcal{L}_{\lambda,\mu} {\bf r}_1=0$ in $D^e$ and ${\bf r}_1\in\mathcal{A}^*$. It also satisfies, according to Lemma \ref{lem:Bq_on_circle}, the boundary condition
\begin{equation}\label{tilde_Br1_bdry}
{\bf r}_1 = - \frac{m_1 \alpha_2 a}{\sqrt\epsilon r_0^2} \begin{bmatrix} x \\ y \end{bmatrix} \quad\mbox{on } \partial D_1 \cup \partial D_2.
\end{equation}
Since
$$
\mathcal{J}_1=\int_{\partial D^e} {\partial_\nu {\bf h}_1}\cdot {\bf H},
$$
we may write, using \eqnref{Br1def}, $(1+\frac{m_1}{\sqrt\epsilon} t_1)\mathcal{J}_{1}$ as
\begin{align}
(1+\frac{m_1}{\sqrt\epsilon} t_1)\mathcal{J}_{1} &=
\int_{\partial D^e} \partial_\nu ((1+\frac{m_1}{\sqrt\epsilon} t_1){\bf h}_1 ) \cdot {\bf H}
-
\int_{\partial D^e} \partial_\nu {\bf H} \cdot ((1+\frac{m_1}{\sqrt\epsilon} t_1){\bf h}_1)
\nonumber
\\
&=
\int_{\partial D^e} \partial_\nu (\frac{m_1}{\sqrt\epsilon} {\bf q}_1 + {\bf r}_1 ) \cdot {\bf H}
-
\int_{\partial D^e} \partial_\nu {\bf H} \cdot (\frac{m_1}{\sqrt\epsilon} {\bf q}_1 + {\bf r}_1)
\nonumber
\\
&=
\frac{m_1}{\sqrt\epsilon} \left(\int_{\partial D^e} \partial_\nu {\bf q}_1 \cdot {\bf H}
- \partial_\nu {\bf H} \cdot {\bf q}_1 \right)
+ \int_{\partial D^e} \partial_\nu {\bf r}_1 \cdot {\bf H} - \int_{\partial D^e} \partial_\nu {\bf H} \cdot {\bf r}_1
\nonumber
\\
&=: I_1+I_2+I_3.
\label{J1_decomp_GlGm}
\end{align}
To estimate $I_1$, we first recall that $m_1 := \big[(\alpha_1 - \alpha_2)\sqrt{2(\kappa_1+\kappa_2)}\,\big]^{-1}$. Since $\kappa_1=\kappa_2=1/r_0$,
we have
\begin{equation}\label{monesqrt}
\frac{m_1}{\sqrt\epsilon}= \frac{\sqrt{r_0}}{2(\alpha_1-\alpha_2)\sqrt{\epsilon}}.
\end{equation}
Then Green's formula for the Lam\'{e} system and \eqnref{qonedirac} yield
\begin{align*}
I_1 &= \frac{m_1}{\sqrt\epsilon} \int_{D_1\cup D_2} \mathcal{L}_{\lambda,\mu}{\bf q}_1 \cdot {\bf H} \\
&= \frac{\sqrt{r_0}}{2(\alpha_1-\alpha_2)\sqrt{\epsilon}} \left[ ({\bf H}({\bf p}_1)-{\bf H}({\bf p}_2))\cdot{\bf e}_1
- \sum_{j=1}^2 \frac{\alpha_2 a}{\alpha_1-\alpha_2} (\partial_1 {\bf H}({\bf p}_j)\cdot{\bf e}_1+\partial_2 {\bf H}({\bf p}_j)\cdot {\bf e}_2) \right].
\end{align*}
Since ${\bf p}_1=(-a,0)$, ${\bf p}_2=(a,0)$ and ${\bf H}(x,y)=(Ax, By)^T$, we arrive at
\begin{equation}\label{Ione}
I_1 =- \frac{a\sqrt{r_0}}{2(\alpha_1-\alpha_2)\sqrt{\epsilon}} \left[ 2A +
\frac{2\alpha_2(A+B)}{\alpha_1-\alpha_2} \right].
\end{equation}
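For the reader's convenience, the intermediate computation is as follows (with ${\bf e}_1=(1,0)^T$ and ${\bf e}_2=(0,1)^T$):

```latex
\begin{align*}
({\bf H}({\bf p}_1)-{\bf H}({\bf p}_2))\cdot{\bf e}_1 &= (-Aa)-(Aa) = -2Aa,\\
\partial_1 {\bf H}({\bf p}_j)\cdot{\bf e}_1+\partial_2 {\bf H}({\bf p}_j)\cdot {\bf e}_2 &= A+B, \quad j=1,2,
\end{align*}
```

so the bracket in the preceding formula for $I_1$ equals $-2Aa - \frac{2\alpha_2 a}{\alpha_1-\alpha_2}(A+B) = -a\big[2A + \frac{2\alpha_2(A+B)}{\alpha_1-\alpha_2}\big]$, which yields \eqnref{Ione}.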
Since $\alpha_1=\frac{\lambda+3\mu}{4\pi\mu(\lambda+2\mu)}$ and $\alpha_2=\frac{\lambda+\mu}{4\pi\mu(\lambda+2\mu)}$, we have
\begin{align}
\frac{1}{\alpha_1-\alpha_2} &= {2\pi (\lambda+2\mu)} = 2\pi \mu (1+ \alpha^*),
\label{Ga1Ga2_Gastar1}
\\
\frac{\alpha_2}{\alpha_1-\alpha_2} &= \frac{\lambda+\mu}{2\mu} = \frac{\alpha^*}{2}.
\label{Ga1Ga2_Gastar2}
\end{align}
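These two identities are elementary but easy to get wrong; the following snippet (illustrative only) verifies them numerically:

```python
import math

# Numerical check of (Ga1Ga2_Gastar1)-(Ga1Ga2_Gastar2): with
# alpha1 = (lam+3mu)/(4 pi mu (lam+2mu)) and alpha2 = (lam+mu)/(4 pi mu (lam+2mu)),
# one has 1/(alpha1-alpha2) = 2 pi mu (1+alpha*) and
# alpha2/(alpha1-alpha2) = alpha*/2, where alpha* = (lam+mu)/mu.
def check_identities(lam, mu, tol=1e-9):
    a1 = (lam + 3.0 * mu) / (4.0 * math.pi * mu * (lam + 2.0 * mu))
    a2 = (lam + mu) / (4.0 * math.pi * mu * (lam + 2.0 * mu))
    astar = (lam + mu) / mu
    ok1 = abs(1.0 / (a1 - a2) - 2.0 * math.pi * mu * (1.0 + astar)) < tol * mu * (1.0 + astar)
    ok2 = abs(a2 / (a1 - a2) - astar / 2.0) < tol * (1.0 + abs(astar))
    return ok1 and ok2
```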
Since $a=\sqrt{r_0 \epsilon}+O(\epsilon^{3/2})$, we have
\begin{equation}\label{ar0}
\frac{a\sqrt{r_0}}{\sqrt{\epsilon}} = r_0 + O(\epsilon).
\end{equation}
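The expansions leading to \eqnref{ar0} can be checked numerically under the identification $a^2=(r_0+\epsilon/2)^2-r_0^2=r_0\epsilon+\epsilon^2/4$, which is the standard formula for two disks of radius $r_0$ at distance $\epsilon$ (this explicit formula is our assumption here; the paper's $a$ is defined earlier in the text):

```python
import math

# Illustrative check of a = sqrt(r0*eps) + O(eps^{3/2}) and
# a*sqrt(r0)/sqrt(eps) = r0 + O(eps), assuming (our assumption, standard for
# two disks of radius r0 at distance eps) that a^2 = r0*eps + eps^2/4.
def a_exact(r0, eps):
    return math.sqrt(r0 * eps + eps * eps / 4.0)

def expansion_errors(r0, eps):
    a = a_exact(r0, eps)
    err_a = abs(a - math.sqrt(r0 * eps))                      # expected O(eps^{3/2})
    err_ratio = abs(a * math.sqrt(r0) / math.sqrt(eps) - r0)  # expected O(eps)
    return err_a, err_ratio
```

Shrinking $\epsilon$ by a factor of $100$ should shrink the two errors by factors of roughly $1000$ and $100$, respectively.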
Substituting \eqnref{Ga1Ga2_Gastar1}-\eqnref{ar0} into \eqnref{Ione}, we obtain
\begin{align}
I_1 &= - 2\pi \mu(1+\alpha^*)(r_0 + O(\epsilon)) \left[ 2A + \alpha^*(A+B) \right]
\nonumber
\\
&= -2\pi r_0 \mu \big(2A + O(\alpha^* + \epsilon+ \alpha^*\epsilon) \big).
\label{lim_JI_GlGm}
\end{align}
To estimate $I_2$, let $B_r$ be a disk of radius $r$ centered at the origin containing $\overline{D_1 \cup D_2}$, and choose $R$ so that $r<R$.
Let $\chi$ be a smooth radial function such that $\chi({\bf x})=1$ if $|{\bf x}| \le r$ and $\chi({\bf x})=0$ if $|{\bf x}| \ge R$. Let ${\bf v}:= \chi {\bf H}$. Then, using Green's formula, we obtain
\begin{equation}\label{eqn_JII_estim_GlGm1}
|I_2| =
\left| \int_{\partial D^e} \frac{\partial {\bf r}_1}{\partial\nu} \cdot {\bf v} \right|
= \left| \int_{D^e} \mathbb{C} \widehat{\nabla} {\bf r}_1 : \widehat{\nabla}{\bf v} \right|
\leq {\mathcal{E}_{D^e}[{\bf r}_1]}^{1/2}{\mathcal{E}_{D^e}[{\bf v}]}^{1/2}.
\end{equation}
Let ${\bf w}({\bf x}):= \chi({\bf x}){\bf x}$ for ${\bf x} \in D^e$. Then
\begin{equation}
- \frac{m_1 \alpha_2 a}{\sqrt\epsilon r_0^2} {\bf w} = {\bf r}_1 \quad\mbox{on } \partial D^e.
\end{equation}
Then, the variational principle \eqnref{variation} yields
\begin{equation}
\mathcal{E}_{D^e}[{\bf r}_1] \le \left(\frac{m_1 \alpha_2 a}{\sqrt\epsilon r_0^2} \right)^2 \mathcal{E}_{D^e}[{\bf w}].
\end{equation}
We emphasize that the variational principle holds since ${\bf r}_1$ is a solution of the Lam\'e system. So, we arrive at
\begin{equation}\label{Itwo}
|I_2| \le \frac{m_1 \alpha_2 a}{\sqrt\epsilon r_0^2} \mathcal{E}_{D^e}[{\bf w}]^{1/2} \mathcal{E}_{D^e}[{\bf v}]^{1/2}.
\end{equation}
Note that $\| \nabla {\bf w}\|^2_{L^2(D^e)} \lesssim 1$ and
$$
\mathcal{E}_{D^e}[{\bf w}]= \int_{D^e}\mathbb{C}\widehat{\nabla} {\bf w}: \widehat{\nabla}{\bf w}\lesssim (\lambda+2\mu) \| \nabla {\bf w}\|^2_{L^2(D^e)} \lesssim \lambda+2\mu.
$$
Here and throughout this proof, $X \lesssim Y$ means that $X \le C Y$ for some constant $C$ independent of $(\lambda, \mu)$ and $\epsilon$. Similarly, one can see that
$$
\mathcal{E}_{D^e}[{\bf v}] \lesssim \lambda+2\mu.
$$
So we infer from \eqnref{Itwo} that
\begin{equation}\label{Itwo2}
|I_2| \lesssim \frac{m_1 \alpha_2 a}{\sqrt\epsilon r_0^2} (\lambda+2\mu) .
\end{equation}
It then follows from \eqnref{monesqrt}, \eqnref{Ga1Ga2_Gastar1}--\eqnref{ar0} that
\begin{align}
|I_2|&\lesssim \frac{\alpha_2}{\alpha_1-\alpha_2} (\lambda+2\mu) \lesssim \alpha^* \mu(1+\alpha^*) \lesssim \mu \alpha^*.
\label{J_II_estim_GlGm}
\end{align}
Since ${\bf H}=(Ax,By)$, it is easy to see that
$$
\|\partial_\nu {\bf H}\|_{L^\infty(\partial D^e)} \lesssim \lambda+2\mu.
$$
So, by \eqnref{tilde_Br1_bdry} and the fact that $a\approx \sqrt\epsilon$, we see that
\begin{equation}\label{J_III_estim_GlGm}
|I_3| \lesssim m_1 \alpha_2 (\lambda+2\mu) \lesssim \frac{\alpha_2}{\alpha_1-\alpha_2}(\lambda+2\mu)\lesssim \mu\alpha^*.
\end{equation}
Combining \eqnref{J1_decomp_GlGm}, \eqnref{lim_JI_GlGm}, \eqnref{J_II_estim_GlGm} and \eqnref{J_III_estim_GlGm}, we have
$$
(1+\frac{m_1}{\sqrt\epsilon} t_1)\mathcal{J}_{1} = -4\pi r_0 \mu \big(A + O(\alpha^* + \epsilon+ \alpha^*\epsilon) \big).
$$
Recall from \eqnref{tj_estim} that $|t_j| \lesssim (\alpha_1+\alpha_2)\epsilon^{3/2}$.
So, using \eqnref{Ga1Ga2_Gastar2}, we have
$$
\left| \frac{m_1}{\sqrt\epsilon} t_1 \right| \lesssim \frac{\alpha_1+\alpha_2}{\alpha_1-\alpha_2}\epsilon \lesssim (1 + \frac{2\alpha_2}{\alpha_1-\alpha_2})\epsilon \lesssim (1+\alpha^*)\epsilon.$$
Therefore, we finally arrive at
$$
\mathcal{J}_1 = -\frac{{4\pi r_0} \mu (A+ O(\alpha^*+\epsilon+ \alpha^*\epsilon))} {1 + O({\epsilon}+\alpha^* \epsilon)}.
$$
Since $A\neq 0$, there are $\alpha_0$ and $\epsilon_0$ such that
$$
1 \lesssim |\mathcal{J}_1|
$$
for all $(\lambda,\mu)$ satisfying $\alpha^* \le \alpha_0$ and $\epsilon \le \epsilon_0$. This completes the proof.
\qed
Corollary \ref{10000} and Proposition \ref{prop:J1_J2_lower} show that $\epsilon^{-1/2}$ is a lower bound on the blow-up rate of $\nabla {\bf u}$ as $\epsilon \to 0$. In fact, we have
\begin{equation}
\|\nabla {\bf u} \|_{L^\infty(B \setminus (D_1 \cup D_2))} \approx \epsilon^{-1/2}
\end{equation}
as $\epsilon \to 0$, provided that $\alpha^* \le \alpha_0$.
Moreover, we have a more refined estimate of $\nabla {\bf u}(0,0)$.
\begin{theorem}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_free} when $D_1$ and $D_2$ are disks with the same radius. Suppose that the Lam\'e parameters $(\lambda,\mu)$ satisfy $\alpha^* \le \alpha_0$. Then the following holds as $\epsilon \to 0$.
\begin{itemize}
\item[(i)] If ${\bf H}(x,y)=(A x,B y)$ with $A\neq 0$, then
\begin{equation}\label{divest}
|\partial_1 u_1 (0,0)| \approx \epsilon^{-1/2} \quad\mbox{and}\quad |\partial_2 u_1 (0,0)| + |\partial_1 u_2 (0,0)| + |\partial_2 u_2 (0,0)| \lesssim 1.
\end{equation}
\item[(ii)] If ${\bf H}(x,y) = C( y, x)$ with $C\neq 0$, then
\begin{equation}\label{shearest}
|\partial_1 u_2 (0,0)| \approx \epsilon^{-1/2} \quad\mbox{and}\quad |\partial_1 u_1 (0,0)| + |\partial_2 u_1 (0,0)| + |\partial_2 u_2 (0,0)| \lesssim 1.
\end{equation}
\end{itemize}
\end{theorem}
\noindent {\sl Proof}. \
Suppose that ${\bf H}(x,y)=(A x,B y)$ with $A\neq 0$. It then follows from \eqnref{BuBHBq} and \eqnref{Jcaluniform1} that
\begin{equation}
\nabla({\bf u}-{\bf H})({\bf x}) = \nabla {\bf b}({\bf x})+ {\mathcal{J}}_{1} \nabla {\bf q}_1({\bf x}).
\end{equation}
Then \eqnref{p1q11}, \eqnref{nablabH} and \eqnref{Jcaluniform1} yield \eqnref{divest}. (ii) can be proved similarly. \qed
The estimates \eqnref{divest} and \eqnref{shearest} yield, in particular,
\begin{equation}\label{divest2}
|\nabla \cdot {\bf u}(0,0)| \approx \epsilon^{-1/2} \quad\mbox{and}\quad |\partial_2 u_1 (0,0)| + |\partial_1 u_2 (0,0)| \lesssim 1
\end{equation}
if ${\bf H}(x,y)=(A x,B y)$ with $A\neq 0$, and
\begin{equation}\label{shearest2}
|\partial_1 u_2 (0,0)| + |\partial_2 u_1 (0,0)|\approx \epsilon^{-1/2} \quad\mbox{and}\quad |\nabla \cdot {\bf u}(0,0)| \lesssim 1
\end{equation}
if ${\bf H}(x,y) = C( y, x)$ with $C\neq 0$. Note that $\nabla \cdot {\bf u}$ represents the bulk force, while $|\partial_1 u_2| + |\partial_2 u_1|$ represents the magnitude of the shear force. These estimates are in accordance with results of numerical experiments in \cite{KK-Conm-16}.
\section*{Conclusion}
We investigate the problem of characterizing the stress concentration in the narrow region between two hard inclusions and of deriving optimal estimates of the magnitude of the stress in the context of isotropic linear elasticity. We introduce singular functions constructed using nuclei of strain, and then show that they capture precisely the singular behavior of the stress as the distance between the two inclusions tends to zero. As a consequence we derive an upper bound of the blow-up rate of the stress, namely $\epsilon^{-1/2}$, where $\epsilon$ is the distance between the two inclusions. We then show that $\epsilon^{-1/2}$ is the optimal blow-up rate in the sense that it is also a lower bound on the rate of the stress blow-up when the inclusions are disks of the same radius.
To show that $\epsilon^{-1/2}$ is a lower bound in the case of circular inclusions, we impose a certain condition on the Lam\'e parameters. This condition does not seem natural, and it may possibly be removed.
In fact, it is likely, as suggested in numerical experiments in \cite{KK-Conm-16}, that $\epsilon^{-1/2}$ is a lower bound without any assumption on Lam\'e parameters if the background field is a uniform loading. It is quite interesting and challenging to clarify this.
\appendix
\section{The Neumann-Poincar\'e operator and the exterior problem}\label{sec:ext}
In this section we prove Propositions \ref{thm_u_layer_bdd_case} and \ref{thm_u_layer}, and Theorem \ref{thm_ext_diri}. The proofs are based on the layer potential technique.
\subsection{The NP operator}
Let us begin by reviewing well-known results on layer potentials on simple closed curves. Let $D$ be a simply connected bounded domain in $\mathbb{R}^2$ with $C^{1,\alpha}$ ($\alpha >0$) boundary. The co-normal derivative of the single layer potential and the double layer potential satisfy the following jump formulas:
\begin{align}
\partial_\nu\mathcal{S}_{\partial D} [\mbox{\boldmath $\Gvf$}]|_{\pm} ({\bf x}) &= \left(\pm \frac{1}{2}I + \mathcal{K}_{\partial D}^* \right) [\mbox{\boldmath $\Gvf$}]({\bf x}), \quad {\bf x} \in \partial D, \label{singlejump} \\
\mathcal{D}_{\partial D} [\mbox{\boldmath $\Gvf$}]|_{\pm} ({\bf x}) &= \left(\mp \frac{1}{2}I + \mathcal{K}_{\partial D} \right) [\mbox{\boldmath $\Gvf$}]({\bf x}), \quad {\bf x} \in \partial D, \label{doublejump}
\end{align}
where $\mathcal{K}_{\partial D}$ is the boundary integral operator defined by
\begin{equation}
\mathcal{K}_{\partial D} [\mbox{\boldmath $\Gvf$}]({\bf x}) := \mbox{p.v.} \int_{\partial D} \partial_{\nu_{{\bf y}}} {\bf \GG} ({\bf x}-{\bf y}) \mbox{\boldmath $\Gvf$}({\bf y}) d \sigma({\bf y}), \quad {\bf x} \in \partial D,
\end{equation}
and $\mathcal{K}_{\partial D}^*$ is the adjoint operator of $\mathcal{K}_{\partial D}$ on $L^2(\partial D)^2$. Here, p.v. stands for the Cauchy principal value. The operators $\mathcal{K}_{\partial D}$ and $\mathcal{K}_{\partial D}^*$ are called the Neumann-Poincar\'e (NP) operators.
It is known that the operator $-1/2I + \mathcal{K}_{\partial D}^*$ is a Fredholm operator of index $0$, that it is invertible on $H^{-1/2}_\Psi(\partial D)^2$, and that its kernel is three-dimensional (see, for example, \cite{DKV-Duke-88}). It is worth mentioning that the NP operator can be realized as a self-adjoint operator on $H^{-1/2}(\partial D)^2$ by introducing a new inner product, and that it is polynomially compact (see \cite{AJKKY-arXiv}).
We now consider $D^e = \mathbb{R}^2 \setminus \overline{(D_1 \cup D_2)}$, whose boundary $\partial D^e$ consists of two disjoint curves $\partial D_1$ and $\partial D_2$. To define the NP operator in this case, we consider the solution to \eqnref{elas_eqn_free} in the form of \eqnref{singlerep}, namely,
$$
{\bf u}({\bf x}) = {\bf H}({\bf x}) + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]({\bf x}).
$$
The boundary condition on $\partial D^e$ in \eqnref{elas_eqn_free} amounts to
$$
\partial_\nu \left({\bf H} + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1] + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2] \right) |_- =0 \quad\mbox{on } \partial D^e,
$$
which, according to \eqnref{singlejump}, is equivalent to the following system of integral equations:
$$
\left\{
\begin{array}{rl}
\displaystyle \left(- \frac{1}{2}I + \mathcal{K}_{\partial D_1}^* \right) [\mbox{\boldmath $\Gvf$}_1] + \partial_{\nu} \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]|_{\partial D_1} &= -\partial_{\nu} {\bf H} \quad\mbox{on } \partial D_1, \\
\displaystyle \partial_{\nu} \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]|_{\partial D_2} + \left(- \frac{1}{2}I + \mathcal{K}_{\partial D_2}^* \right) [\mbox{\boldmath $\Gvf$}_2] &= -\partial_{\nu} {\bf H} \quad\mbox{on } \partial D_2.
\end{array}
\right.
$$
This system of integral equations can be rewritten as
\begin{equation}\label{inteqnBH}
\left( -\frac{1}{2} \mathbb{I} + \mathbb{K}^* \right)
\begin{bmatrix} \mbox{\boldmath $\Gvf$}_1 \\ \mbox{\boldmath $\Gvf$}_2 \end{bmatrix} = - \begin{bmatrix} \partial_{\nu} {\bf H} |_{\partial D_1} \\ \partial_{\nu} {\bf H} |_{\partial D_2} \end{bmatrix},
\end{equation}
where $\mathbb{I}$ is the identity operator and $\mathbb{K}^*$, which is the NP operator on $\partial D^e$, is defined by
\begin{equation}
\mathbb{K}^* := \begin{bmatrix} \mathcal{K}_{\partial D_1}^* & \partial_{\nu_1} \mathcal{S}_{\partial D_2} \\ \partial_{\nu_2} \mathcal{S}_{\partial D_1} & \mathcal{K}_{\partial D_2}^* \end{bmatrix}.
\end{equation}
Special attention is needed for the off-diagonal entries above: for example, $\partial_{\nu_1} \mathcal{S}_{\partial D_2}$ means that the single layer potential is defined on $\partial D_2$ and the co-normal derivative is evaluated on $\partial D_1$, so the operator maps $H^{-1/2}(\partial D_2)^2$ into $H^{-1/2}(\partial D_1)^2$. One can see that the adjoint operator $\mathbb{K}$ of $\mathbb{K}^*$ on $L^2(\partial D^e)^2$ is given by
\begin{equation}
\mathbb{K} = \begin{bmatrix} \mathcal{K}_{\partial D_1} & \mathcal{D}_{\partial D_2}|_{\partial D_1} \\ \mathcal{D}_{\partial D_1}|_{\partial D_2} & \mathcal{K}_{\partial D_2} \end{bmatrix}.
\end{equation}
Here $\mathcal{D}_{\partial D_2}|_{\partial D_1}$ means the double layer potential on $\partial D_2$ evaluated on $\partial D_1$. We emphasize that
\begin{equation}
(\mathcal{D}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1] + \mathcal{D}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2])|_+
= \left( -\frac{1}{2} \mathbb{I} + \mathbb{K} \right)
\begin{bmatrix} \mbox{\boldmath $\Gvf$}_1 \\ \mbox{\boldmath $\Gvf$}_2 \end{bmatrix} \quad\mbox{on } \partial D^e.
\end{equation}
\begin{lemma}
The operator $-1/2 \mathbb{I} + \mathbb{K}^*$ is of Fredholm index $0$ on $H^{-1/2}(\partial D^e)^2$.
\end{lemma}
\noindent {\sl Proof}. \
We express $-1/2 \mathbb{I} + \mathbb{K}^*$ as
\begin{equation}\label{comppert}
-1/2 \mathbb{I} + \mathbb{K}^* = \begin{bmatrix} -1/2I + \mathcal{K}_{\partial D_1}^* & 0 \\ 0 & -1/2I + \mathcal{K}_{\partial D_2}^* \end{bmatrix} +
\begin{bmatrix} 0 & \partial_{\nu_1} \mathcal{S}_{\partial D_2} \\ \partial_{\nu_2} \mathcal{S}_{\partial D_1} & 0 \end{bmatrix}.
\end{equation}
Since $-1/2I + \mathcal{K}_{\partial D_j}^*$ is of Fredholm index $0$ for $j=1,2$, so is the first operator on the right-hand side above. Since $\partial D_1$ and $\partial D_2$ are a positive distance apart, the off-diagonal operators have smooth integral kernels, so the second operator on the right-hand side is compact. Since the Fredholm index is invariant under compact perturbations, $-1/2 \mathbb{I} + \mathbb{K}^*$ is of Fredholm index $0$.
\qed
In the following we prove Propositions \ref{thm_u_layer_bdd_case} and \ref{thm_u_layer}, and Theorem \ref{thm_ext_diri}. We prove Proposition \ref{thm_u_layer} first since it is simpler.
\subsection{Proof of Proposition \ref{thm_u_layer}}
We first prove the following proposition.
\begin{prop}\label{appinv}
The operator $-1/2 \mathbb{I} + \mathbb{K}^*$ is invertible on $H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$.
\end{prop}
\noindent {\sl Proof}. \
As we see from \eqnref{comppert}, $-1/2 \mathbb{I} + \mathbb{K}^*$ is a compact perturbation of an operator which is invertible on $H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$. So, by the Fredholm alternative, it suffices to prove the injectivity of $-1/2 \mathbb{I} + \mathbb{K}^*$.
Suppose that
\begin{equation}\label{injectassume}
\left( -\frac{1}{2} \mathbb{I} + \mathbb{K}^* \right)
\begin{bmatrix} \mbox{\boldmath $\Gvf$}_1 \\ \mbox{\boldmath $\Gvf$}_2 \end{bmatrix} = 0
\end{equation}
for some $(\mbox{\boldmath $\Gvf$}_1 , \mbox{\boldmath $\Gvf$}_2) \in H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$ and let
$$
{\bf u}({\bf x}):= \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]({\bf x}), \quad {\bf x} \in \mathbb{R}^2.
$$
Then \eqnref{injectassume} implies that $\mathcal{L}_{\lambda, \mu} {\bf u}=0$ in $D_i$ and $\partial_\nu {\bf u}=0$ on $\partial D_i$ for $i=1,2$. So
\begin{equation}\label{BuDi}
{\bf u}= \sum_{j=1}^3 a_{ij} \Psi_j \quad \mbox{in } D_i
\end{equation}
for some constants $a_{ij}$. Since the single layer potential is continuous across $\partial D_i$, we have ${\bf u}|_+= \sum_{j=1}^3 a_{ij} \Psi_j$ on $\partial D_i$. Moreover, by the jump formula \eqnref{singlejump} for the single layer potential, we have
$$
\int_{\partial D_i} \partial_{\nu} {\bf u}|_+ \cdot \Psi_j = \int_{\partial D_i} (\mbox{\boldmath $\Gvf$}_i + \partial_{\nu} {\bf u}|_-) \cdot \Psi_j = \int_{\partial D_i} \mbox{\boldmath $\Gvf$}_i \cdot \Psi_j =0
$$
since $\mbox{\boldmath $\Gvf$}_i \in H^{-1/2}_\Psi(\partial D_i)$. So ${\bf u}$ is a solution to \eqnref{elas_eqn_free} with ${\bf H}=0$. It is worth mentioning that the decay condition at $\infty$ is satisfied because $\mbox{\boldmath $\Gvf$}_j \in H^{-1/2}_\Psi(\partial D_j)$. We then have
$$
\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf u}:\widehat{\nabla} {\bf u} = \int_{\partial D^e} \partial_\nu {\bf u}|_+ \cdot {\bf u} = \sum_{j=1}^3 \sum_{i=1}^2 a_{ij} \int_{\partial D_i} \partial_\nu {\bf u} \cdot \Psi_j =0,
$$
where the last equality follows from \eqnref{int_zero}. Hence $\widehat{\nabla}{\bf u}=0$ in $D^e$, so ${\bf u}$ is a rigid motion there; the decay condition at infinity then forces ${\bf u}=0$ in $D^e$. By the jump formula \eqnref{singlejump} for the single layer potential, we have
$$
\mbox{\boldmath $\Gvf$}_i = \partial_\nu {\bf u}|_+ - \partial_\nu {\bf u}|_- = 0 \quad\mbox{on } \partial D_i
$$
for $i=1,2$. This completes the proof. \qed
\noindent{\sl Proof of Proposition \ref{thm_u_layer}}. Note that since $\mathcal{L}_{\lambda, \mu} {\bf H}=0$ in $\mathbb{R}^2$, $\partial_{\nu} {\bf H}|_{\partial D_i} \in H^{-1/2}_\Psi(\partial D_i)$ for $i=1,2$. So
we solve \eqnref{inteqnBH} for $(\mbox{\boldmath $\Gvf$}_1 , \mbox{\boldmath $\Gvf$}_2)$ in $H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$. Then ${\bf u}$ defined by \eqnref{singlerep} is the solution to \eqnref{elas_eqn_free}. \qed
\subsection{Proof of Proposition \ref{thm_u_layer_bdd_case}}
Let ${\bf u}$ be the solution to \eqnref{elas_eqn_bdd} and let ${\bf f}:= \partial_\nu{\bf u}$ on $\partial\Omega$. Let ${\bf H}_{\Omega}$ be the function defined by \eqnref{eqn_def_H_Omega}. We emphasize that ${\bf H}_\Omega({\bf x})$ is defined not only for ${\bf x} \in \Omega$, but also for ${\bf x} \in \mathbb{R}^2 \setminus \overline{\Omega}$. Moreover, one can see from \eqnref{singlejump} and \eqnref{doublejump} that the following holds:
\begin{equation}\label{HGOjump}
{\bf H}_\Omega|_- - {\bf H}_\Omega|_+ = {\bf g}, \quad \partial_\nu{\bf H}_\Omega|_- - \partial_\nu{\bf H}_\Omega|_+ = {\bf f} \quad \mbox{on } \partial\Omega.
\end{equation}
Let $(\mbox{\boldmath $\Gvf$}_1,\mbox{\boldmath $\Gvf$}_2)\in H^{-1/2}_\Psi (\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$ be the unique solution to \eqnref{inteqnBH} with ${\bf H}$ replaced by ${\bf H}_\Omega$, and let
\begin{equation}
{\bf v}_1({\bf x}) = {\bf H}_\Omega({\bf x}) + \mathcal{S}_{\partial D_1}[\mbox{\boldmath $\Gvf$}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[\mbox{\boldmath $\Gvf$}_2]({\bf x}), \quad {\bf x} \in D^e \setminus \partial\Omega.
\end{equation}
Then ${\bf v}_1$ is a solution to
\begin{equation}\label{elas_eqn_bdd2}
\ \left \{
\begin{array} {ll}
\displaystyle \mathcal{L}_{\lambda,\mu} {\bf v}= 0 \quad &\mbox{ in } D^e \setminus \partial\Omega,\\[2mm]
\displaystyle {\bf v}=\sum_{j=1}^3 a_{ij} \Psi_j({\bf x}), \quad &\mbox{ on } \partial D_i, \quad i=1,2 ,\\[2mm]
\displaystyle {\bf v}|_- - {\bf v}|_+ = {\bf g}, \quad \partial_\nu{\bf v}|_- - \partial_\nu{\bf v}|_+ = {\bf f} \quad & \mbox{on } \partial\Omega, \\[2mm]
\displaystyle {\bf v}({\bf x})= O(|{\bf x}|^{-1}) \quad &\mbox{as } |{\bf x}| \to \infty,
\end{array}
\right.
\end{equation}
where the constants $a_{ij}$ are determined by the condition \eqnref{int_zero}.
Let
$$
{\bf v}_2({\bf x}) := \left\{
\begin{array} {ll}
\displaystyle {\bf u}({\bf x}) \quad & {\bf x} \in \Omega \setminus \overline{D_1 \cup D_2}, \\[2mm]
\displaystyle 0 \quad & {\bf x} \in \mathbb{R}^2 \setminus \overline{\Omega}.
\end{array}
\right.
$$
Then ${\bf v}_2$ is also a solution to \eqnref{elas_eqn_bdd2} with the same ${\bf g}$ and ${\bf f}$.
Let ${\bf v}:={\bf v}_1-{\bf v}_2$. Then ${\bf v}$ is a solution to \eqnref{elas_eqn_bdd2} with ${\bf g}=0$ and ${\bf f}=0$. So, we have
$$
\int_{D^e} \mathbb{C} \widehat{\nabla} {\bf v}:\widehat{\nabla} {\bf v} = \int_{\partial D^e} \partial_\nu {\bf v}|_+ \cdot {\bf v} = \sum_{j=1}^3 \sum_{i=1}^2 c_{ij} \int_{\partial D_i} \partial_\nu {\bf v} \cdot \Psi_j =0,
$$
where the last equality follows from \eqnref{int_zero}. So we infer ${\bf v}=0$ in $D^e$. In particular, ${\bf u}={\bf v}_1$ in $\Omega \setminus \overline{D_1 \cup D_2}$ as desired.
\qed
\subsection{Proof of Theorem \ref{thm_ext_diri}}
Let
$$
V = \left\{{\bf f} \in H^{1/2}(\partial D^e)^2: \mathbb{K}[{\bf f}]=\frac{1}{2}{\bf f} \right\}
$$
and
$$
W = \left\{ {\bf f} \in H^{-1/2}(\partial D^e)^2 :\mathbb{K}^*[{\bf f}]=\frac{1}{2}{\bf f} \right\},
$$
which are the null spaces of $-1/2\mathbb{I} + \mathbb{K}$ and $-1/2\mathbb{I} + \mathbb{K}^*$, respectively. In particular, we have $\dim V=\dim W$. For $j=1,2,3$, let
$$
\alpha_j^1({\bf x}) = \begin{cases}
\Psi_j({\bf x}) &\quad \mbox{if } {\bf x} \in \partial D_1,
\\
0 &\quad \mbox{if } {\bf x} \in \partial D_2,
\end{cases}
$$
and
$$
\alpha_j^2({\bf x}) = \begin{cases}
0 &\quad \mbox{if } {\bf x} \in \partial D_1, \\
\Psi_j({\bf x}) &\quad \mbox{if } {\bf x} \in \partial D_2.
\end{cases}
$$
\begin{lemma}\label{prop_alpha}
The following holds:
\begin{itemize}
\item[(i)] $\dim V=\dim W=6$.
\item[(ii)] $\{ \alpha_j^1, \alpha_j^2 : j=1,2,3 \}$ is a basis of $V$.
\end{itemize}
\end{lemma}
\noindent {\sl Proof}. \
If ${\bf x} \in \mathbb{R}^2 \setminus \overline{D_i}$, then
\begin{equation}
\mathcal{D}_{\partial D_i} [\Psi_j]({\bf x}) = \int_{D_i} \mathbb{C} \widehat{\nabla}_{\bf y} {\bf \GG}({\bf x}-{\bf y}) : \widehat{\nabla} \Psi_j({\bf y})=0,
\end{equation}
for $i=1,2$ and $j=1,2,3$. In particular, we have $(-1/2I + \mathcal{K}_{\partial D_i})[\Psi_j]=0$ on $\partial D_i$. So, we infer that $\{ \alpha_j^1, \alpha_j^2 : j=1,2,3 \} \subset V$.
Since $\alpha_j^1$ and $\alpha_j^2$, $j=1,2,3$, are linearly independent, we infer $\dim V \ge 6$.
On the other hand, since $-1/2 \mathbb{I} + \mathbb{K}^*$ is a Fredholm operator of index 0, we have $H^{-1/2}(\partial D^e)^2 = \mathrm{Range}(-1/2\mathbb{I} + \mathbb{K}^*) \oplus W$. According to Proposition \ref{appinv}, $H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2) \subset \mathrm{Range}(-1/2\mathbb{I} + \mathbb{K}^*)$. Since $H^{-1/2}_\Psi(\partial D_i)$ has co-dimension 3 in $H^{-1/2}(\partial D_i)^2$, we infer that the co-dimension of $\mathrm{Range}(-1/2\mathbb{I} + \mathbb{K}^*)$ in $H^{-1/2}(\partial D^e)^2$ is greater than or equal to $6$. So, $\dim W \le 6$. This completes the proof. \qed
\begin{lemma}\label{WPsi}
Let
\begin{equation}
W_\Psi := \Big\{ {\bf f}= ({\bf f}_1, {\bf f}_2)\in W : \int_{\partial D_1} {\bf f}_1 \cdot \Psi_j + \int_{\partial D_2} {\bf f}_2 \cdot \Psi_j =0 \, \mbox{ for } j=1,2,3 \Big\}.
\end{equation}
Then, $\dim W_\Psi = 3$.
\end{lemma}
\noindent {\sl Proof}. \
For $i=1,2$, define $\beta_j^i$ by
\begin{equation}
\beta_j^i:= \begin{cases}
\alpha_j^i \quad&\mbox{if } j=1,2, \\
\alpha_3^i + c_i \alpha_1^i + d_i \alpha_2^i \quad&\mbox{if } j=3,
\end{cases}
\end{equation}
where the constants $c_i$ and $d_i$ are chosen so that
\begin{equation}
\langle \beta_j^i, \beta_l^k \rangle =0 \quad\mbox{if } (i,j) \neq (k,l).
\end{equation}
We claim that there is an eigenfunction ${\bf f}_j^i \in W$ such that
\begin{equation}
\langle {\bf f}_j^i, \beta_l^k \rangle =
\begin{cases}
1 \quad\mbox{if } (i,j) = (k,l), \\
0 \quad\mbox{if } (i,j) \neq (k,l).
\end{cases}
\end{equation}
In fact, since $( -1/2\mathbb{I} + \mathbb{K}^* )[\beta_j^i] \in H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$,
there is a unique ${\bf g} = ({\bf g}_1, {\bf g}_2) \in H^{-1/2}_\Psi(\partial D_1) \times H^{-1/2}_\Psi(\partial D_2)$ such that
$$
\left( -\frac{1}{2}\mathbb{I} + \mathbb{K}^* \right)[{\bf g}] = \left( -\frac{1}{2}\mathbb{I} + \mathbb{K}^* \right)[\beta_j^i].
$$
Let ${\bf f} := \beta_j^i - {\bf g}$. Then, ${\bf f} \in W$. Moreover, we have
$$
\langle {\bf f}, \beta_l^k \rangle = \langle \beta_j^i, \beta_l^k \rangle.
$$
So ${\bf f}_j^i:= \langle \beta_j^i, \beta_j^i \rangle^{-1} {\bf f}$ is the desired function.
For $i=1,2$, let
$$
{\bf g}_1^i := {\bf f}_1^i + c_i {\bf f}_3^i, \quad {\bf g}_2^i := {\bf f}_2^i + d_i {\bf f}_3^i, \quad {\bf g}_3^i:= {\bf f}_3^i.
$$
Then, one can see that
\begin{equation}
\langle {\bf g}_j^i, \alpha_l^k \rangle =
\begin{cases}
1 \quad\mbox{if } (i,j) = (k,l), \\
0 \quad\mbox{if } (i,j) \neq (k,l).
\end{cases}
\end{equation}
Then ${\bf g}_j^1-{\bf g}_j^2$ ($j=1,2,3$) are three linearly independent functions belonging to $W_\Psi$, while ${\bf g}_j^1+ {\bf g}_j^2$ ($j=1,2,3$) do not belong to $W_\Psi$. So $\dim W_\Psi =3$. \qed
Define the operator $\mathbb{S}: H^{-1/2}(\partial D^e)^2 \to H^{1/2}(\partial D^e)^2$ as follows: for ${\bf f} = ({\bf f}_1, {\bf f}_2) \in H^{-1/2}(\partial D^e)^2$ let
\begin{equation}\label{defBv}
{\bf v}({\bf x}):= \mathcal{S}_{\partial D_1}[{\bf f}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[{\bf f}_2]({\bf x}),
\end{equation}
and
\begin{equation}
\mathbb{S}[{\bf f}] := \begin{bmatrix} {\bf v}|_{\partial D_1} \\ {\bf v}|_{\partial D_2} \end{bmatrix}.
\end{equation}
\begin{lemma}\label{Vdecom}
Let
\begin{equation}
V^+ := \mbox{span} \, \{ \Psi_1, \Psi_2, \Psi_3 \}.
\end{equation}
Then, the following holds:
\begin{itemize}
\item[(i)] $\mathbb{S}$ maps $W$ into $V$, and $\mathbb{S}$ is injective on $W_\Psi$.
\item[(ii)] $V= \mathbb{S}(W_\Psi) \oplus V^+$.
\end{itemize}
\end{lemma}
\noindent {\sl Proof}. \
Let ${\bf f} \in W$ and define ${\bf v}$ by \eqnref{defBv}. Then we have
$$
\partial_\nu {\bf v}|_-=(-1/2\mathbb{I} + \mathbb{K}^*)[{\bf f}]=0 \quad\mbox{on } \partial D^e.
$$
Since $\mathcal{L}_{\lambda, \mu} {\bf v}=0$ in $D_i$ ($i=1,2$), we infer that ${\bf v}= \sum_j a_{ij}\Psi_j$ on $\partial D_i$ for some constants $a_{ij}$. So $\mathbb{S}[{\bf f}] \in V$.
If further ${\bf f} \in W_\Psi$, then ${\bf v}({\bf x})=O(|{\bf x}|^{-1})$ as $|{\bf x}| \to \infty$. So, if $\mathbb{S}[{\bf f}]=0$, then ${\bf v}=0$ in $D^e$, and hence ${\bf v}=0$ in $\mathbb{R}^2$. Thus we have ${\bf f}=\partial_\nu {\bf v}|_+ - \partial_\nu {\bf v}|_-=0$ on $\partial D^e$. This proves (i).
We now show that $\mathbb{S}(W_\Psi) \cap V^+ = \{ 0\}$. In fact, if ${\bf f} = ({\bf f}_1, {\bf f}_2) \in W_\Psi$ satisfies $\mathbb{S}[{\bf f}]= \sum_{j} a_j \Psi_j$ on $\partial D^e$, let
$$
{\bf v}({\bf x}):= \mathcal{S}_{\partial D_1}[{\bf f}_1]({\bf x}) + \mathcal{S}_{\partial D_2}[{\bf f}_2]({\bf x}), \quad {\bf x} \in \mathbb{R}^2.
$$
Then ${\bf v} \in \mathcal{A}$, and
\begin{align*}
\int_{D^e} \mathbb{C} \widehat{\nabla}{\bf v}: \widehat{\nabla}{\bf v} &= -\int_{\partial D^e} \partial_\nu {\bf v}|_+ \cdot {\bf v} \\
&= -\int_{\partial D^e} {\bf f} \cdot (\sum_{j} a_j \Psi_j) - \int_{\partial D^e} \partial_\nu {\bf v}|_- \cdot (\sum_{j} a_j \Psi_j)=0.
\end{align*}
So, ${\bf v} =0$ in $D^e$, and hence $\sum_{j} a_j \Psi_j=0$ on $\partial D^e$.
Since $\mathbb{S}$ is injective on $W_\Psi$, $\dim \mathbb{S}(W_\Psi)=3$. So $\dim (\mathbb{S}(W_\Psi) \oplus V^+)=6$. This yields (ii). \qed
Since $-1/2\mathbb{I} + \mathbb{K}$ is a Fredholm operator of index 0, $\mathrm{Range}(-1/2\mathbb{I} + \mathbb{K})$ has co-dimension $6$ in $H^{1/2}(\partial D^e)^2$. So, together with Lemma \ref{Vdecom}, we obtain the following proposition.
\begin{prop}\label{last}
$H^{1/2}(\partial D^e)^2 =\mathrm{Range}(-1/2\mathbb{I} + \mathbb{K}) \oplus \mathbb{S}(W_\Psi) \oplus V^+$.
\end{prop}
\noindent{\sl Proof of Theorem \ref{thm_ext_diri}}. Let ${\bf g} \in H^{1/2}(\partial D^e)^2$. According to the previous proposition, there is ${\bf f}=({\bf f}_1, {\bf f}_2) \in H^{1/2}(\partial D^e)^2$, $\mbox{\boldmath $\Gvf$}=(\mbox{\boldmath $\Gvf$}_1, \mbox{\boldmath $\Gvf$}_2) \in W_\Psi$, and constants $a_1,a_2,a_3$ such that
$$
{\bf g}= \left(-\frac{1}{2}\mathbb{I} + \mathbb{K} \right)[{\bf f}] + \mathbb{S}[\mbox{\boldmath $\Gvf$}] + \sum_{j=1}^3 a_j \Psi_j.
$$
Then the solution ${\bf u}$ is given by
$$
{\bf u} = \sum_{i=1}^2 \left( \mathcal{D}_{\partial D_i}[{\bf f}_i] + \mathcal{S}_{\partial D_i}[\mbox{\boldmath $\Gvf$}_i] \right) + \sum_{j=1}^3 a_j \Psi_j \quad\mbox{in } D^e.
$$
Note that $\sum_{i=1}^2 \left( \mathcal{D}_{\partial D_i}[{\bf f}_i] + \mathcal{S}_{\partial D_i}[\mbox{\boldmath $\Gvf$}_i] \right) \in \mathcal{A}$ by Lemma \ref{lem:Acal}. So ${\bf u} \in \mathcal{A}^*$.
For uniqueness, assume that ${\bf u}$ and ${\bf v}$ are two solutions in $\mathcal{A}^*$, and let ${\bf w}:={\bf u}-{\bf v}$. Then ${\bf w} \in \mathcal{A}^*$ and ${\bf w}=0$ on $\partial D^e$. Let ${\bf w}={\bf w}_1 + {\bf w}_2$ be such that ${\bf w}_1 \in \mathcal{A}$ and ${\bf w}_2= \sum_{j=1}^3 a_j \Psi_j$. Then by Lemma \ref{cor_betti}, we have
$$
0= \int_{D^e} \mathbb{C} \widehat{\nabla}{\bf w}:\widehat{\nabla}{\bf w}= \int_{D^e} \mathbb{C} \widehat{\nabla}{\bf w}_1:\widehat{\nabla}{\bf w}_1 .
$$
Since ${\bf w}_1({\bf x}) \to 0$ as $|{\bf x}| \to \infty$, ${\bf w}_1=0$ in $D^e$. So, $\sum_{j=1}^3 a_j \Psi_j=0$ on $\partial D^e$, which implies $a_j=0$, $j=1,2,3$. This completes the proof. \qed
\end{document}
\begin{document}
\title{Braid presentation of spatial graphs}
\author{Ken Kanno}
\address{Graduate School of Education, Waseda University, Nishi-Waseda 1-6-1, Shinjuku-ku, Tokyo, 169-8050, Japan}
\email{[email protected]}
\author{Kouki Taniyama}
\address{Department of Mathematics, School of Education, Waseda University, Nishi-Waseda 1-6-1, Shinjuku-ku, Tokyo, 169-8050, Japan}
\email{[email protected]}
\subjclass[2000]{Primary 57M25; Secondary 57M15}
\date{}
\dedicatory{}
\keywords{spatial graph, braid presentation}
\begin{abstract}
We define braid presentation of edge-oriented spatial graphs as a natural generalization of braid presentation of oriented links. We show that every spatial graph has a braid presentation. For an oriented link it is known that the braid index is equal to the minimal number of Seifert circles. We show that an analogy does not hold for spatial graphs.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this paper we work in the piecewise linear category. Let $G$ be a finite edge-oriented graph. Namely $G$ consists of finitely many vertices and finitely many edges, and each edge has a fixed orientation. An edge-oriented graph is called a digraph in graph theory. We consider a graph as a topological space in the usual way. Let ${\mathbb S}^3$ be the unit 3-sphere in the $xyzw$-space ${\mathbb R}^4$ centered at the origin of ${\mathbb R}^4$. An embedding of $G$ into ${\mathbb S}^3$ is called a {\it spatial embedding} of $G$. Then the image is also called a spatial embedding or a {\it spatial graph}.
Let $A$ (resp. $C$) be the intersection of ${\mathbb S}^3$ and the $zw$-plane (resp. $xy$-plane). Then the union $A\cup C$ is a Hopf link in ${\mathbb S}^3$. We call $A$ the {\it axis} and $C$ the {\it core}.
Let $\pi:{\mathbb S}^3-A\to C$ be a natural projection defined by
$\displaystyle{\pi(x,y,z,w)=(\frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}},0,0)}$.
We give a counter-clockwise orientation to $C$ on $xy$-plane and fix it. We say that a continuous map $\varphi:G\to C$ is {\it locally orientation preserving} if for any edge $e$ of $G$ and any point $p$ on $e$ there is a neighbourhood $U$ of $p$ in $e$ such that the restriction map of $\varphi$ to $U$ is an orientation preserving embedding. Let $f:G\to {\mathbb S}^3$ be a spatial embedding. We say that $f$ or its image $f(G)$ is a {\it braid presentation} if $f(G)$ is disjoint from $A$ and the composition map $\pi\circ f':G\to C$ is locally orientation preserving where $f':G\to {\mathbb S}^3-A$ is the map defined by $f'(p)=f(p)$ for any $p$ in $G$. Note that this generalizes the braid presentation defined for $\theta_m$-curve in \cite{S-T}.
The following theorem shows that every edge-oriented spatial graph has a braid presentation up to ambient isotopy of ${\mathbb S}^3$. This generalizes Alexander's theorem that every oriented link can be expressed as a closed braid \cite{Alexander}, and the analogous result proved for $\theta_m$-curves in \cite{S-T}.
\vskip 3mm
\begin{Theorem}\label{main-theorem1}
Let $G$ be a finite edge-oriented graph and $f:G\to {\mathbb S}^3$ a spatial embedding. Then there is a braid presentation $g:G\to {\mathbb S}^3$ that is ambient isotopic to $f$ in ${\mathbb S}^3$.
\end{Theorem}
\vskip 3mm
In \cite{Yamada} it is shown that the minimal number of Seifert circles of an oriented link is equal to the braid index of the link. We consider an analogy for spatial graphs. Let $g:G\to {\mathbb S}^3$ be a braid presentation. Let $P_p=\pi^{-1}(p)$ for $p\in C$. Let $\tilde{b}(g)=\tilde{b}(g(G))$ be the maximum of $|g(G)\cap P_p|$ where $|X|$ denotes the cardinality of the set $X$ and $p$ varies over all points in $C$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding. Let $b(f)=b(f(G))$ be the minimum of $\tilde{b}(g)$ where $g$ varies over all braid presentations that are ambient isotopic to $f$.
We call $b(f)=b(f(G))$ the {\it braid index} of $f$ or $f(G)$.
Let ${\mathbb S}^2$ be the intersection of ${\mathbb S}^3$ and the $xyz$-space. Then any spatial embedding $f:G\to {\mathbb S}^3$ has a diagram on ${\mathbb S}^2$ up to ambient isotopy of ${\mathbb S}^3$. Let $D$ be a diagram of $f$ on ${\mathbb S}^2$. Let $S(D)$ be a plane graph in ${\mathbb S}^2$ obtained from $D$ by smoothing every crossing of $D$. Here the smoothing respects the orientations of the edges. See Figure \ref{smoothing}.
\begin{figure}\label{smoothing}
\end{figure}
Let $\mu(X)$ be the number of connected components of a space $X$. Let $s(f)=s(f(G))$ be the minimum of $\mu(S(D))$ where $D$ varies over all diagrams of $f$ up to ambient isotopy of ${\mathbb S}^3$. We call $s(f)=s(f(G))$ the {\it smoothing index} of $f$ or $f(G)$. Note that our $s(f)$ is different from $s(G)$ defined for $\theta_m$-curve in \cite{S-T}.
We will show in Proposition \ref{proposition1} that for any natural number $n$ there is a spatial embedding $f$ of $G$ with $b(f)\geq n$, provided $G$ contains a cycle as an unoriented graph. In contrast we will show in the next theorem that $s(f)=s(g)$ for any two spatial embeddings $f$ and $g$ of $G$ unless $G$ satisfies certain conditions.
By ${\rm indeg}(v,G)={\rm indeg}(v)$ (resp. ${\rm outdeg}(v,G)={\rm outdeg}(v)$) we denote the number of the edges whose head (resp. tail) is the vertex $v$ of $G$. Then ${\rm deg}(v,G)={\rm deg}(v)={\rm indeg}(v,G)+{\rm outdeg}(v,G)$ is called the {\it degree} of $v$ in $G$.
We say that an edge-oriented graph $G$ is {\it circulating} if ${\rm indeg}(v,G)={\rm outdeg}(v,G)$ for any vertex $v$ of $G$. Namely each component of a circulating graph is Eulerian. Let $\chi(X)$ be the Euler characteristic of a space $X$.
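The two notions just introduced, the circulating condition and the Euler characteristic of a finite graph, are purely combinatorial. The following Python sketch (an illustration only; the edge-list representation and function names are ours, not from the paper) checks ${\rm indeg}(v,G)={\rm outdeg}(v,G)$ at every vertex and computes $\chi(G)=|V|-|E|$.

```python
# Illustrative sketch (not from the paper): checking the "circulating"
# condition indeg(v) = outdeg(v) for every vertex of an edge-oriented
# graph, and computing the Euler characteristic chi(G) = |V| - |E|.
from collections import Counter

def is_circulating(vertices, edges):
    """edges: list of (tail, head) pairs giving each edge's orientation."""
    indeg = Counter(h for _, h in edges)
    outdeg = Counter(t for t, _ in edges)
    return all(indeg[v] == outdeg[v] for v in vertices)

def euler_characteristic(vertices, edges):
    return len(vertices) - len(edges)

# An oriented 3-cycle is circulating; adding a pendant edge breaks it.
cycle = [(1, 2), (2, 3), (3, 1)]
print(is_circulating([1, 2, 3], cycle))                 # True
print(is_circulating([1, 2, 3, 4], cycle + [(3, 4)]))   # False
print(euler_characteristic([1, 2, 3], cycle))           # 0
```

Note that, as in the definition above, a circulating graph is exactly one in which every component admits an Eulerian circuit respecting the edge orientations.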
Then we have the following theorem.
\vskip 3mm
\begin{Theorem}\label{main-theorem2}
Let $G$ be a finite edge-oriented graph without isolated vertices.
(1) Suppose that $G$ is not circulating. Then for any spatial embedding $f:G\to {\mathbb S}^3$, $s(f)={\rm max}\{1,\chi(G)\}$.
(2) Suppose that $G$ is circulating. Then for any natural number $n$ there is a spatial embedding $f:G\to {\mathbb S}^3$ such that $s(f)\geq n$.
\end{Theorem}
\vskip 3mm
\begin{Remark}\label{remark}
{\rm
Another choice for the smoothing index as a generalization of the number of Seifert circles of an oriented link is the use of the first Betti number instead of the number of connected components. Let $s'(f)$ be the minimum of $\beta_1(S(D))$ among all diagrams $D$ of $f$. By the Euler-Poincar\'{e} formula we have $\beta_1(S(D))=\mu(S(D))-\chi(S(D))$. Since smoothing does not change the Euler characteristic we have $\chi(S(D))=\chi(G)$. Then we have $s'(f)=s(f)-\chi(G)$. Thus we have that $s'(f)$ is determined by $s(f)$ after all.
}
\end{Remark}
\section{Proof of Theorem \ref{main-theorem1}}
The following proof is a natural extension of a proof of Alexander's theorem by Cromwell \cite{Cromwell} using {\it rectangular diagrams} of oriented links that appear in \cite{Brunn} \cite{Cromwell} \cite{M-M} etc.
In this section we regard ${\mathbb S}^2$ as a one-point compactification of the $xy$-plane. Thus we may suppose that all diagrams are on the $xy$-plane.
In the following we sometimes do not distinguish an abstract vertex or edge from its image in ${\mathbb S}^3$ or on ${\mathbb S}^2$.
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem1}.}
Let $D$ be a diagram of the spatial embedding $f:G\to {\mathbb S}^3$. We will deform $D$ step by step so that it is still a diagram of $f$ up to ambient isotopy in ${\mathbb S}^3$ as follows. First we move $D$ if necessary so that $D$ lies to the left of the $y$-axis. Namely $D$ is contained in the region of the $xy$-plane defined by $x<0$.
By a local deformation near each vertex we may suppose that all edges go down with respect to the $y$-coordinate in each small neighbourhood of a vertex of $G$. Then we further deform $D$ so that it satisfies the following conditions.
(1) $D$ is a union of finitely many line segments.
(2) Each vertex $v$ has a small disk neighbourhood $N_v$ such that the diagram $D$ on $N_v$ is a union of ${\rm indeg}(v,G)+{\rm outdeg}(v,G)$ line segments each of which has $v$ as one of its end points, and each of which goes down with respect to the $y$-coordinate.
(3) A line segment that is not contained in any $N_v$ is parallel to $x$-axis or $y$-axis.
Then we have that each crossing of $D$ is a crossing between a horizontal line segment (parallel to $x$-axis) and a vertical line segment (parallel to $y$-axis). Then by a local deformation as illustrated in Figure \ref{manji} we may further assume that a horizontal line segment is over a vertical line segment at each crossing. Note that the disk neighbourhood $N_v$ can be taken to be arbitrarily small. Then by a slight deformation we have that a straight line that contains a vertical line segment going up with respect to the $y$-coordinate contains no other vertical line segments and is disjoint from any $N_v$.
Let $s_1,\cdots,s_n$ be the vertical line segments that go up with respect to the $y$-coordinate. We may suppose without loss of generality that the $x$-coordinate of $s_i$ is less than that of $s_j$ if $i<j$. Let $R_1,\cdots,R_n$ be sufficiently large upright rectangles with $R_1\supset\cdots\supset R_n$ and $s_i\subset \partial R_i$ for each $i$. We replace each $s_i$ by $\partial R_i-{\rm int}(s_i)$ that crosses under the horizontal line segments at every crossing. Finally we tilt the horizontal line segments other than $\partial R_i$ slightly so that they go down with respect to the $y$-coordinate. Then we have a diagram $D$ of $f$ that totally turns around the origin of the $xy$-plane. Then $D$ represents a braid presentation. See for example Figure \ref{braiding}. This completes the proof. $\Box$
\begin{figure}\label{manji}
\end{figure}
\begin{figure}\label{braiding}
\end{figure}
\section{Proof of Theorem \ref{main-theorem2}}
A vertex $v$ of an edge-oriented graph $G$ is called a {\it source} (resp. {\it sink}) of $G$ if ${\rm indeg}(v,G)=0$ (resp. ${\rm outdeg}(v,G)=0$).
\begin{Proposition}\label{proposition1}
Let $G$ be a finite edge-oriented graph. Suppose that $G$ contains a cycle as an unoriented graph.
Then for any natural number $n$ there is a spatial embedding $f:G\to {\mathbb S}^3$ such that $b(f)\geq n$.
\end{Proposition}
\vskip 3mm
\noindent{\bf Proof.} Let $\gamma$ be a cycle of $G$. Note that $\gamma$ may not be an oriented cycle as an edge-oriented subgraph of $G$. Let $\alpha$ be the number of the sources of $\gamma$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding of $G$ such that the bridge index ${\rm bridge}(f(\gamma))$ of the knot $f(\gamma)$ is greater than or equal to $n+\alpha$. By the definition we have $b(f(\gamma))\leq b(f(G))$. We may suppose that $f(\gamma)$ is a braid presentation with $\tilde{b}(f(\gamma))=b(f(\gamma))$. Then $f(\gamma)$ has at most $2(\tilde{b}(f(\gamma))+\alpha)$ critical points with respect to the $y$-coordinate. Therefore we have ${\rm bridge}(f(\gamma))\leq\tilde{b}(f(\gamma))+\alpha$. Therefore we have $n+\alpha\leq{\rm bridge}(f(\gamma))\leq\tilde{b}(f(\gamma))+\alpha=b(f(\gamma))+\alpha$. Thus we have $n\leq b(f(\gamma))\leq b(f(G))$ as desired. $\Box$
\vskip 3mm
\begin{Proposition}\label{proposition2}
Let $G$ be a finite edge-oriented graph and $f:G\to {\mathbb S}^3$ a spatial embedding. Then $s(f)\geq\chi(G)$.
\end{Proposition}
\vskip 3mm
\noindent{\bf Proof.} Let $D$ be a diagram of $f$. It is sufficient to show that $\mu(S(D))\geq\chi(G)$.
Since smoothing does not change the Euler characteristic we have that $\chi(G)=\chi(S(D))$.
Let $\beta_1(X)$ be the first Betti number of a space $X$.
By the Euler-Poincar\'{e} formula we have $\chi(S(D))=\mu(S(D))-\beta_1(S(D))$. Therefore we have $\chi(S(D))\leq\mu(S(D))$. Thus we have $\mu(S(D))\geq\chi(G)$. $\Box$
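The counting in this proof can be checked on small examples. The Python sketch below (illustrative only; the representation and names are our assumptions) computes $\mu(X)$ for a finite graph by union-find and recovers $\beta_1(X)=\mu(X)-\chi(X)$ via the Euler-Poincar\'{e} formula.

```python
# Illustrative check of the Euler-Poincare relation chi(X) = mu(X) - beta_1(X)
# for a finite graph X, where mu is the number of connected components and
# beta_1 = mu - chi is the first Betti number (number of independent cycles).
def components(vertices, edges):
    """Number of connected components, via union-find with path halving."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

def betti_numbers(vertices, edges):
    mu = components(vertices, edges)
    chi = len(vertices) - len(edges)   # Euler characteristic of a graph
    beta1 = mu - chi                   # Euler-Poincare formula
    return mu, beta1

# Two components: a triangle (one cycle) and an isolated edge (no cycle).
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 1), (4, 5)]
print(betti_numbers(V, E))  # (2, 1)
```

In particular $\beta_1\geq 0$ gives $\chi(X)\leq\mu(X)$, which is exactly the inequality used in the proof above.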
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem2} (1).} First we show that $s(f)\geq{\rm max}\{1,\chi(G)\}$. Let $D$ be a diagram of $f$. By the definition we have $\mu(S(D))\geq1$. By Proposition \ref{proposition2} we have $\mu(S(D))\geq\chi(G)$. Thus $\mu(S(D))\geq{\rm max}\{1,\chi(G)\}$ holds for any diagram $D$ of $f$. This implies $s(f)\geq{\rm max}\{1,\chi(G)\}$.
Next we show that $f(G)$ has a diagram $D$ with $\mu(S(D))={\rm max}\{1,\chi(G)\}$. Let $H$ be the maximal subgraph of $G$ that has no vertices of degree less than 2. If $H$ is not an empty graph and $H$ is not circulating then we set $G'=H$. Suppose that $H$ is an empty graph or a circulating graph. Suppose that there is a component $I$ of $H$ that is not a component of $G$. Let $J$ be the component of $G$ containing $I$. Let $e$ be an edge of $J$ that is not an edge of $I$ but incident to a vertex of $I$. Let $G'$ be the minimal subgraph of $G$ that contains $H$ and $e$. Suppose that every component of $H$ is also a component of $G$. Let $e$ be an edge of $G$ that is not an edge of $H$. Let $G'$ be the minimal subgraph of $G$ that contains $H$ and $e$. Note that in any case we have $\chi(G')\leq1$.
Let $f'$ be the restriction map of the spatial embedding $f:G\to {\mathbb S}^3$ to $G'$. We will construct a diagram $D'$ of $f'$ with $\mu(S(D'))={\rm max}\{1,\chi(G')\}=1$. Namely we will construct $D'$ such that $S(D')$ is connected. We start from a diagram $D'$ of $f'$ and deform it step by step and finally have $D'$ with $S(D')$ connected.
By Theorem \ref{main-theorem1} we may suppose that $f'$ is a braid presentation. By deforming the braid presentation if necessary we have that $f'$ has a diagram $D'$ on the $xy$-plane with the following properties.
(1) There exists a rectangle $B=[-3,-1]\times[-2,2]$ in the $xy$-plane such that at every point on $D'$ in $B$ the edge-orientation goes down with respect to the $y$-coordinate.
(2) Outside of $B$ the diagram $D'$ consists of some parallel arcs turning around the origin of the $xy$-plane.
See for example Figure \ref{proof1} (a). In the following deformations we always keep the condition that everything goes down with respect to the $y$-coordinate inside $B$.
Suppose that $G'$ has some sources and/or sinks. Then by pulling up the sources and moving them to the right as illustrated in Figure \ref{proof1-2} and pulling down the sinks and moving them to the left we have that all sources are in $[-3,-1]\times[1,2]$ and all sinks are in $[-3,-1]\times[-2,-1]$ and all parallel arcs go left to the sources and go right to the sinks outside of $B'=[-3,-1]\times[-1,1]$. See for example Figure \ref{proof1} (b).
\begin{figure}\label{proof1}
\end{figure}
\begin{figure}\label{proof1-2}
\end{figure}
Then we deform the diagram $D'$ in $B'$ such that the following conditions hold.
(1) If a vertex $v$ of $G'$ satisfies $1\leq{\rm indeg}(v,G')\leq{\rm outdeg}(v,G')$ then it is rightmost in $B'$.
(2) If a vertex $v$ of $G'$ satisfies ${\rm indeg}(v,G')>{\rm outdeg}(v,G')\geq1$ then it is leftmost in $B'$.
See for example Figure \ref{proof2} (a).
Now we perform smoothing for the crossings in $B'$ and obtain $S(D')$ on $B'$. See for example Figure \ref{proof2} (b) and (c).
We do not deform $D'$ inside $B'$ any more. We will deform $D'$ only outside of $B'$. However to make the situation simple we further perform the following replacement of $S(D')$ on $B'$. For each vertex $v$ with $2\leq{\rm indeg}(v,G')\leq{\rm outdeg}(v,G')$ we replace a neighbourhood of it on $B'$ by ${\rm indeg}(v,G')-1$ parallel arcs and a vertex $u$ with ${\rm indeg}(u)=1$ and ${\rm outdeg}(u)={\rm outdeg}(v,G')-{\rm indeg}(v,G')+1$. Similarly for each vertex $v$ with ${\rm indeg}(v,G')>{\rm outdeg}(v,G')\geq2$ we replace a neighbourhood of it on $B'$ by ${\rm outdeg}(v,G')-1$ parallel arcs and a vertex $u$ with ${\rm outdeg}(u)=1$ and ${\rm indeg}(u)={\rm indeg}(v,G')-{\rm outdeg}(v,G')+1$. See Figure \ref{proof3} and Figure \ref{proof2} (d). Then we perform edge contractions if necessary so that there exists at most one vertex, say $v_+$ with $1={\rm indeg}(v_+)<{\rm outdeg}(v_+)$, and at most one vertex, say $v_-$ with ${\rm indeg}(v_-)>{\rm outdeg}(v_-)=1$. See for example Figure \ref{proof2} (e). Note that these replacements never decrease the number of connected components. Therefore it is sufficient to show that $S(D')$ is connected after these replacements.
\begin{figure}\label{proof2}
\end{figure}
\begin{figure}\label{proof3}
\end{figure}
Now suppose that there is just one sink of $G'$. Let $P$ be any point of $S(D')$. We start from $P$ along the flow of edge orientations of $S(D')$. If we come across the vertex $v_+$ then we choose the leftmost way. Namely we turn to the right at $v_+$. Then we see that as we turn around the origin of ${\mathbb R}^2$ we move to the left and we finally reach the sink. Thus we have that $S(D')$ is arcwise connected. Similarly if there are no sinks of $G'$ then starting from any point of $S(D')$ we finally reach the outermost circle turning around the origin. Thus $S(D')$ is arcwise connected. Suppose that there is at most one source of $G'$. Then we see by going against the flow that $S(D')$ is arcwise connected.
Therefore it is sufficient to consider the case that there are at least two sinks and two sources of $G'$. Then by the definition of $G'$ we have that $G'$ has no vertices of degree one.
Let ${\mathcal B}$ be a disk in $B-B'$ containing all sinks in its interior. Let $s_1,\cdots,s_k$ be the sinks and $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$,$P_{k,1}$, $\cdots$, $P_{k,{\rm indeg}(s_k,G')}$ the points of intersection of $S(D')$ and $\partial{\mathcal B}$ such that they appear in this order on $\partial{\mathcal B}$, $P_{1,1}$ is adjacent to $v_-$ and $P_{i,j}$ is adjacent to $s_i$ for each $i$ and $j$.
See for example Figure \ref{proof1} and Figure \ref{proof4}.
\begin{figure}\label{proof4}
\end{figure}
We will deform $D'$ only on ${\mathcal B}$. We divide the points $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$,$P_{k,1}$, $\cdots$, $P_{k,{\rm indeg}(s_k,G')}$ into some sets of points ${\mathcal S}_1,\cdots,{\mathcal S}_\alpha$ such that for each $i$ the points in ${\mathcal S}_i$ are consecutive on $\partial{\mathcal B}$ and any two points in ${\mathcal S}_i$ can be connected by an arc in $S(D')$ outside ${\rm int}{\mathcal B}$. We may suppose without loss of generality that $P_{1,1}\in{\mathcal S}_1$ and for each $i$ there is a pair of consecutive points on $\partial{\mathcal B}$ such that one is contained in ${\mathcal S}_i$ and the other is contained in ${\mathcal S}_{i+1}$ where we consider $\alpha+1=1$.
We will show that ${\mathcal S}_i$ contains two or more points, possibly except for $i=1$. To see this we start from $P_{i,j}\neq P_{1,1}$ and trace $S(D')$ against the flow. Then we reach $v_+$ or a source. Then we choose an edge that is next to the edge along which we came, and along the flow we trace $S(D')$. If we come across $v_+$ then we choose the leftmost way.
Then we must reach $P_{i,j+1}$ or $P_{i,j-1}$, where we consider $P_{i,0}=P_{i-1,{\rm indeg}(s_{i-1},G')}$, $P_{i,{\rm indeg}(s_i,G')+1}=P_{i+1,1}$ and $P_{k+1,1}=P_{1,1}$.
Let ${\mathcal B}={\mathcal B}_1\supset{\mathcal B}_2\supset\cdots\supset{\mathcal B}_k$ be a sequence of disks such that their boundaries $\partial{\mathcal B}_1,\partial{\mathcal B}_2,\cdots,\partial{\mathcal B}_k$ form concentric circles in ${\mathcal B}$.
Let ${\mathcal A}_i={\mathcal B}_{i}-{\rm int}{\mathcal B}_{i+1}$ be the annulus.
Suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is contained in ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$ but not contained in ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_{i-1}$.
First suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is a proper subset of ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$.
Then we leave $s_1$ in ${\mathcal A}_1$, rename the sinks $s_2,\cdots,s_k$ as $s_1,\cdots,s_{k-1}$, and rename the points of intersection of $S(D')$ and $\partial{\mathcal B}_2$ as illustrated in Figure \ref{proof5}. Then we redivide the points $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$, $P_{k-1,1}$, $\cdots$, $P_{k-1,{\rm indeg}(s_{k-1},G')}$ into some sets of points, still denoted by ${\mathcal S}_1,\cdots,{\mathcal S}_\alpha$, such that for each $i$ the points in ${\mathcal S}_i$ are consecutive on $\partial{\mathcal B}_2$ and any two points in ${\mathcal S}_i$ can be connected by an arc in $S(D')$ outside ${\rm int}{\mathcal B}_2$. We may suppose without loss of generality that $P_{1,1}\in{\mathcal S}_1$ and that for each $i$ there is a pair of consecutive points on $\partial{\mathcal B}_2$ such that one is contained in ${\mathcal S}_i$ and the other is contained in ${\mathcal S}_{i+1}$, where we consider $\alpha+1=1$.
Then by the construction we have that ${\mathcal S}_i$ contains two or more points, possibly except for $i=1$ or $i=\alpha$.
If ${\mathcal A}_\alpha$ contains just one point then we reverse the cyclic order for the next step. Namely we rename again $s_1,s_2,\cdots,s_{k-1}$ $s_{k-1},s_{k-2},\cdots,s_{1}$ and rename ${\mathcal S}_1,{\mathcal S}_2,\cdots,{\mathcal S}_\alpha$ ${\mathcal S}_\alpha,{\mathcal S}_{\alpha-1},\cdots,{\mathcal S}_1$, and rename the points $P_{i,j}$ along the new cyclic order on $\partial{\mathcal B}_2$.
Next suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is equal to the set ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$.
Then we deform $D'$ as illustrated in Figure \ref{proof6} and consider $S(D')$.
Note that the new $P_{1,1}$ and the new $P_{k-1,{\rm indeg}(s_{k-1},G')}$ can be connected by an arc in $S(D')$ outside ${\rm int}\,{\mathcal B}_2$.
Therefore each new ${\mathcal S}_i$ contains at least two points.
Next we deform $D'$ inside ${\mathcal B}_2$ and leave the new $s_1$ in ${\mathcal A}_2$ in a similar way.
We continue this deformation and finally have the desired $S(D')$.
Now we return to the whole graph $G$. Let $G''$ be the maximal subgraph of $G$ that contains $G'$ and satisfies $\mu(G'')=\mu(G')$. Let $T_1,\cdots,T_n$ be the tree components of $G$ that are disjoint from $G'$. Then $G=G''\cup T_1\cup\cdots\cup T_n$.
Let $f''$ be the restriction of the spatial embedding $f:G\to{\mathbb S}^3$ to $G''$. Let $D''$ be a diagram of $f''$ whose subdiagram for $f'$ is $D'$ and has no more crossings than $D'$. Then we have that $S(D'')$ and $S(D')$ have the same homotopy type. In particular $S(D'')$ is connected. Let $m={\rm min}(\beta_1(S(D'')),n)$. Let $Q_1,\cdots,Q_m$ be points on $S(D'')$ other than the vertices such that $S(D'')-\{Q_1,\cdots,Q_m\}$ is still connected. We may suppose that these points are away from the neighbourhoods of the crossings of $D''$ where the smoothings are performed. Let $D$ be a diagram of $f$ whose subdiagram for $f''$ is $D''$ such that the crossings of $D$ other than that of $D''$ are exactly the points $Q_1,\cdots,Q_m$ where the crossing $Q_i$ is between an edge of $G'$ and an edge of $T_i$.
Then we see that $\mu(S(D))=1+n-m$. See for example Figure \ref{proof7}.
Note that we have the following equality.
\[
\chi(G)=\chi(G'')+n=\chi(S(D''))+n=\mu(S(D''))-\beta_1(S(D''))+n=1-\beta_1(S(D''))+n.
\]
Therefore if $m={\rm min}(\beta_1(S(D'')),n)=n$ then we have $\chi(G)\leq1$ and $\mu(S(D))=1+n-m=1$ as desired.
If $m={\rm min}(\beta_1(S(D'')),n)=\beta_1(S(D''))$ then we have $\chi(G)\geq1$ and $\mu(S(D))=1+n-\beta_1(S(D''))=\mu(S(D''))-\beta_1(S(D''))+n=\chi(S(D''))+n=\chi(G'')+n=\chi(G)$ as desired. This completes the proof.
$\Box$
\begin{figure}\label{proof5}
\end{figure}
\begin{figure}\label{proof6}
\end{figure}
\begin{figure}\label{proof7}
\end{figure}
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem2} (2).} Let $\gamma$ be an oriented cycle of $G$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding of $G$ such that the braid index $b(f(\gamma))$ of the knot $f(\gamma)$ is greater than or equal to $n-\chi(G)$.
Let $D$ be any diagram of $f$. It is sufficient to show that $\mu(S(D))\geq n$. We replace each neighbourhood of a vertex $v$ of $D$ by ${\rm indeg}(v,G)$ oriented arcs as follows. For a vertex $v$ that is not on $\gamma$ we replace it by mutually disjoint oriented arcs. See for example Figure \ref{proof8}.
Let $v$ be a vertex of $G$ that is on $\gamma$. Let $N$ be a small neighbourhood of $v$ on $D$. Suppose that there is a pair of edges not contained in $\gamma$, say $e_i$ and $e_o$, such that the head of $e_i$ is $v$, the tail of $e_o$ is $v$, and they are next to each other in $N$. Then we take them away from $v$ and connect them. We do this for all such pairs. Then we have the situation that every edge in $N$ not on $f(\gamma)$ goes from the right of $f(\gamma)$ to the left of $f(\gamma)$ or from the left of $f(\gamma)$ to the right of $f(\gamma)$. Then we split them off and let $f(\gamma)$ go over them. Let $D'$ be the result of these replacements.
See for example Figure \ref{proof9}. Then we have that $D'$ is a diagram of some oriented link, say $L$. Since $L$ contains a knot $f(\gamma)$ we have that the braid index $b(L)$ of $L$ is greater than or equal to $n-\chi(G)$. By the result in \cite{Yamada} we have that $\mu(S(D'))\geq b(L)$. Therefore we have $\mu(S(D'))\geq n-\chi(G)$. Note that the homotopy type of $S(D)$ is obtained from $S(D')$ by adding $\sum_{v}({\rm indeg}(v,G)-1)$ edges where the summation is taken over all vertices $v$ of $G$. Therefore we have that
$\mu(S(D))\geq\mu(S(D'))-\sum_{v}({\rm indeg}(v,G)-1)$. By the handshaking lemma and by the assumption that $G$ is circulating we have that ${\displaystyle \sum_{v}({\rm indeg}(v,G)-1)=-\chi(G)}$.
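The handshaking step can be made explicit. Since every edge of a directed graph has exactly one head, $\sum_v{\rm indeg}(v,G)=|E(G)|$, and hence (a routine verification spelled out here for clarity):

```latex
\[
\sum_{v}\bigl({\rm indeg}(v,G)-1\bigr)
  = |E(G)|-|V(G)|
  = -\chi(G),
\]
```

using $\chi(G)=|V(G)|-|E(G)|$ for a graph.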
Thus we have $\mu(S(D))\geq n$ as desired. $\Box$
\begin{figure}\label{proof8}
\end{figure}
\begin{figure}\label{proof9}
\end{figure}
The following example shows that even for circulating graphs the difference $b(f)-s(f)$ depends on the spatial embedding $f$.
\vskip 3mm
\begin{Example}\label{example}
{\rm
Let $G$ be a circulating graph with two vertices and eight edges joining them.
Let $f:G\to {\mathbb S}^3$ be a trivial embedding of $G$. Then we have that $b(f)=4$ and $s(f)=1$.
Let $g:G\to {\mathbb S}^3$ be the spatial embedding of $G$ illustrated in Figure \ref{example1}. Note that $g(G)$ contains a knot $K$ that is a connected sum of three figure eight knots. Then we have ${\rm bridge}(K)=4$. Suppose that $K$ is given a braid presentation compatible with its edge orientations. Then we may suppose that $K$ is as illustrated in Figure \ref{example2}, where the box represents some $n$-braid. Then we have that ${\rm bridge}(K)\leq n-1$. Therefore we have that $n\geq5$. Since $G$ has two more oriented cycles other than $K$ we have that $b(g)\geq7$. Since $g$ has a braid presentation with $\tilde{b}(g)=7$ we have $b(g)=7$. However we have that $s(g)=1$ as illustrated in Figure \ref{example1}.
}
\end{Example}
\begin{figure}\label{example1}
\end{figure}
\begin{figure}\label{example2}
\end{figure}
\section*{Acknowledgments}
The authors are grateful to Professor Shin'ichi Suzuki for his constant guidance and encouragement. The authors are also grateful to Dr. Ryuzo Torii for his helpful comments.
{\normalsize
}
\end{document}
\begin{document}
\title{Transience, Recurrence and Critical Behavior for Long-Range Percolation}
\begin{abstract}
We study the behavior of the random walk on the infinite cluster
of independent long-range percolation in dimensions $d=1,2$, where $x$ and $y$ are
connected with probability $\sim\beta\|x-y\|^{-s}$.
We show that if $d<s<2d$ then the
walk is transient, and if $s\geq 2d$, then the walk is recurrent. The proof of
transience is based on a renormalization argument. As a corollary of this
renormalization argument, we get that for every dimension $d\geq 1$,
if $d<s<2d$, then
there is no infinite cluster at criticality. This result is extended to the
free random cluster model.
A second corollary is that when $d\geq 2$ and $d<s<2d$ we can erase
all long enough bonds and still have an infinite cluster.
The proof of recurrence in two dimensions is based on general stability
results for recurrence in random electrical networks. In particular, we show
that i.i.d. conductances on a recurrent graph of bounded degree yield a
recurrent electrical network.
\end{abstract}
\section{Introduction}
\subsection{Background}\label{background}
Long-range percolation (introduced by Schulman in 1983 \cite{schul}) is a percolation
model on the integer lattice $\mathbb{Z}^d$ in which every
two vertices can be connected by a bond.
The probability of the bond between two vertices to be open depends on
the distance between the vertices.
The most intensively studied models are those in which the probability of a bond being open decays polynomially with its length.
\subsection{The model - definitions and known results}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be s.t. $0\leq P_k=P_{-k} < 1$ for every $k\in\mathbb{Z}^d$.
We consider the following percolation model on $\mathbb{Z}^d$: for every $u$ and $v$ in $\mathbb{Z}^d$,
the bond connecting $u$ and $v$ is open with probability $P_{u-v}$. The different bonds
are independent of each other.
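As a concrete illustration (not part of the original text), the restriction of this measure to a finite box in $\mathbb{Z}$ can be sampled directly; the specific choice $P_k=\min(1,\beta|k|^{-s})$ below is just one admissible sequence with $P_k\sim\beta|k|^{-s}$:

```python
import random

def sample_long_range(n, beta, s, seed=0):
    """Sample the open bonds of long-range percolation restricted to
    the box {0, ..., n-1} in Z, with P_k = min(1, beta * k**(-s)).
    Bonds are independent; returns the set of open bonds (i, j), i < j."""
    rng = random.Random(seed)
    open_bonds = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, beta * (j - i) ** (-s))
            if rng.random() < p:
                open_bonds.add((i, j))
    return open_bonds
```

For $\beta\geq 1$ every nearest-neighbour bond has $P_1=1$, so the sampled box is trivially connected; the interesting regime is the long bonds.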
\begin{definition}
For a function $f:\mathbb{Z}^d\to\mathbb{R}$, we say that $\{P_k\}$ is {\em asymptotic to} $f$ if
\begin{equation*}
\lim_{\|k\|\to\infty}\frac{P_k}{f(k)}=1.
\end{equation*}
We denote this by $P_k\sim f(k)$.
\end{definition}
Since the model is shift invariant and ergodic, the event that an infinite cluster exists
is a zero--one event. We say that $\{P_k\}$ is {\em percolating} if a.s. there exists an
infinite cluster.
We consider systems for which $P_k\sim \beta \|k\|_1^{-s}$ for certain $s$ and $\beta$.
The following facts are trivial.
\begin{itemize}
\item
If $s\leq d$, then $\sum_k{P_k}=\infty$. Therefore, by the Borel--Cantelli lemma,
every vertex is connected to infinitely many other vertices. Thus, there exists an infinite
cluster.
\item
If $\sum_k{P_k}\leq 1$ then by domination by a (sub)critical Galton--Watson tree
there is no infinite cluster. Therefore,
for every $s>d$ and $\beta$ one can find a set $\{P_k\}$ s.t. $P_k\sim \beta {\|k\|_1^{-s}}$
and s.t. there is no infinite cluster.
\end{itemize}
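The divergence claimed in the first item follows from a standard count (added here for completeness): the number of $k\in\mathbb{Z}^d$ with $\|k\|_1=r$ is of order $r^{d-1}$, so for $P_k\sim\beta\|k\|_1^{-s}$,

```latex
\[
\sum_{k\neq 0}P_k \;\asymp\; \sum_{r=1}^{\infty} r^{d-1}\,r^{-s}
  \;=\; \sum_{r=1}^{\infty} r^{d-1-s},
\]
```

which diverges precisely when $s\leq d$.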
In \cite{schul}, Schulman proved that if $d=1$ and $s>2$, then there is no infinite cluster.
Newman and Schulman (\cite{NS}) and Aizenman and Newman (\cite{AN}) proved,
among other results, the following:
\begin{theorem}
(A)
If $d=1$, $1<s<2$, and $P_k\sim \beta |k|^{-s}$ for some $\beta>0$, then there exists a
$\{P'_k\}$ s.t. $P'_k=P_k$ for every $k\geq 2$, $P'_1<1$, and $\{P'_k\}$ is percolating.
I.e., if $1<s<2$ then by increasing $P_1$ one can make the system percolating.
\\(B)
If $d=1$, $s=2$, $\beta>1$, and $P_k\sim \beta |k|^{-s}$, then there exists a
$\{P'_k\}$ s.t. $P'_k=P_k$ for every $k\geq 2$, $P'_1<1$, and $\{P'_k\}$ is
percolating.
\\(C)
If $d=1$, $s=2$, $\beta\leq 1$, and $P_k\sim \beta |k|^{-s}$, then $\{P_k\}$ is not
percolating.
\end{theorem}
These results show the existence of a phase transition for $d=1$, $1<s<2$, and
$\beta>0$, and for $d=1$, $s=2$ and $\beta>1$.
When considering $\mathbb{Z}^d$ for $d>1$, the picture is simpler. The following fact is a trivial
implication of the existence of infinite clusters for nearest-neighbor percolation:
\begin{itemize}
\item
If $d>1$, $s>d$ and $P_k\sim \beta {\|k\|_1^{-s}}$ for some $\beta>0$, then there exists a
percolating
$\{P'_k\}$ s.t. $P'_k=P_k$ for every $\|k\|_1\geq 2$ and $P'_k<1$ for every $k$ whose norm is
$1$.
\end{itemize}
If $d>1$, then for any $s>d$ and $\beta>0$ we may obtain a transition between the
phases of existence
and non-existence of an infinite cluster by only changing $\{P_k|k\in A\}$ for a finite
set $A$.
In \cite{uniq}, Gandolfi, Keane and Newman proved a general uniqueness theorem.
A special case of it is the following theorem:
\begin{theorem}\label{GKN}
If $\{P_k\}_{k\in\mathbb{Z}^d}$ is percolating and for every $k\in\mathbb{Z}^d$ there exist $n$ and
$k_1,...,k_n$ s.t. $k=k_1+k_2+...+k_n$ and $P_{k_i}>0$ for all $1\leq i \leq n$ then
a.s. the infinite cluster is unique.
In particular,
if $\{P_k\}_{k\in\mathbb{Z}^d}$ is percolating and $P_k\sim \beta {\|k\|_1^{-s}}$ for some $s$ and
$\beta>0$, then a.s. the infinite cluster is unique.
\end{theorem}
\subsection{Goals}
Random walks on percolation clusters have been studied intensively in
recent years. In \cite{GKZ}, Grimmett, Kesten and Zhang showed that the infinite cluster of
supercritical percolation in $\mathbb{Z}^d$ is transient for all $d\geq 3$.
See also \cite{BPP}, \cite{elch} and \cite{us}.
The problem discussed in this paper,
suggested by Itai Benjamini, was to determine when a random walk on the long-range
percolation cluster is transient. In \cite{Jesper}, Jespersen and Blumen worked on
a model quite similar to long-range percolation on $\mathbb{Z}$, and they
predicted that when $s<2$ the random walk is transient, and when $s=2$ it is recurrent.
\subsection{Behavior of the random walk}\label{rwalk}
The main theorem proved here is:
\begin{theorem}\label{main}
(I) Consider long-range percolation on $\mathbb{Z}$ with parameters
$P_k\sim\beta |k|^{-s}$ such that a.s. there is an infinite cluster.
If $1<s<2$ then
the infinite cluster is
transient. If $s=2$, then the infinite cluster is recurrent.
\\(II) Let $\{P_k\}_{k\in\mathbb{Z}^2}$ be percolating for $\mathbb{Z}^2$
such that $P_{k}\sim\beta \|k\|_1^{-s}$.
If $2<s<4$ then the infinite cluster
is transient. If $s\geq 4$, then the infinite cluster is recurrent.
\end{theorem}
In Section \ref{trsprf}, we prove
the transience for the one-dimensional case where $1<s<2$ and
for the two-dimensional case where $2<s<4$.
Actually, we prove more: we show that for every $q>1$ there is a
flow on the infinite cluster with finite $q$-energy, where the $q$-energy of a
flow $f$ is defined as
\begin{equation}\label{energy}
{\cal E}_q(f)=\sum_e{f(e)^q}.
\end{equation}
It is well known that finite $2$-energy is equivalent to
transience of the random walk (see e.g. \cite{yuval}, section 9),
so the existence of such flows is indeed a
generalization of the transience result (see also \cite{LP}, \cite{HoMo} and \cite{us}).
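The $q$-energy of \eqref{energy} is straightforward to compute for a flow given as an edge-to-value map; the sketch below is purely illustrative:

```python
def q_energy(flow, q):
    """q-energy E_q(f) = sum over edges e of f(e)**q,
    for a flow given as a dict {edge: flow_value}."""
    return sum(abs(value) ** q for value in flow.values())
```

For $q=2$ this is the usual Dirichlet energy whose finiteness characterizes transience.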
In Section \ref{recprf} we prove the recurrence for the one-dimensional case
with $s=2$ and for the two-dimensional case with $s\geq 4$.
\subsection{Critical behavior}\label{critbehav}
As a corollary of the main renormalization lemma, we prove the following
theorem, which applies to every dimension:
\begin{theorem}\label{criti}
Let $d\geq1$ and let $\{P_k\}_{k\in\mathbb{Z}^d}$ be probabilities such that
$P_k\sim\beta \|k\|_1^{-s}$. Assume that $d<s<2d$.
Then, if $\{P_k\}$ is
percolating then it is not critical, i.e. there exists an $\epsilon>0$ such
that the sequence $\{P'_k=(1-\epsilon)P_k\}$ is also percolating.
\end{theorem}
In \cite{hasl}, Hara and Slade proved, among other results, that for dimension $d\geq6$ and an
exponential decay of the probabilities, there is no infinite cluster at criticality.
It is of interest to compare Theorem \ref{criti}
with the results of Aizenman and Newman (\cite{AN}), that show that for $d=1$
and $s=2$, a.s. there exists an infinite cluster at criticality. In \cite{ACCN},
Aizenman, Chayes, Chayes and Newman showed the analogous result for the Ising model:
they showed that if $s=2$, then at the critical temperature there is a
non-zero magnetization.
The technique that is used to prove Theorem \ref{criti} is used in Section
\ref{frcm} to prove the analogous result for
the infinite volume limit of the free random cluster model, and to get:
\begin{theorem}\label{fr_int}
Let $\{P_k\}$ be a sequence of nonnegative numbers such that
$P_k\sim \|k\|_1^{-s}$ ($d<s<2d$) and let $\beta>0$. Consider the infinite volume limit of the
free random cluster model with probabilities $1-e^{-\beta P_k}$ and with
$q\geq 1$ states.
Then, at the critical inverse temperature
\begin{equation*}
\beta_c=\inf\{\beta : \text{a.s. there exists an infinite cluster}\}
\end{equation*}
there is no infinite cluster.
\end{theorem}
However, this
technique fails to prove this result for the wired measure, so in the wired
case the question is still open. A partial answer for the case $s\leq\frac{3}{2}d$ is given
by Aizenman and Fern\'andez in \cite{AiFe}. Consider the
Ising model with $s\leq\frac{3}{2}d$ when
the interactions obey the {\em reflection positivity}
condition (which is defined there). Denote by $M(\beta)$ the magnetization at
inverse temperature $\beta$. Consider the critical exponent $\hat{\beta}$ such that
\begin{equation*}
M(\beta)\sim |\beta-\beta_c|^{\hat{\beta}}
\end{equation*}
for $\beta$ near the critical value $\beta_c$. They proved that (under the above assumptions)
$\hat{\beta}$ (as well as other critical exponents) exists and they showed that
$\hat{\beta} = \frac{1}{2}$.
A corollary of Theorem \ref{fr_int} is
\begin{corollary}\label{ising_extr}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be nonnegative numbers s.t. $P_k=P_{-k}$ for every $k$
and s.t. $P_k\sim \|k\|_1^{-s}$ ($d<s<2d$).
Consider the Potts model with $q$ states on $\mathbb{Z}^d$, s.t. the interaction between
$v$ and $u$ is $P_{v-u}$. At the critical temperature,
the free measure is extremal.
\end{corollary}
Another consequence of the renormalization lemma is the following:
\begin{theorem}\label{jeff}
Let $d>1$ and let $\{P_k\}_{k\in \mathbb{Z}^d}$ be probabilities s.t.
$P_k\sim \beta \|k\|_1^{-s}$ for
some $s<2d$. Assume that the independent percolation model with $\{P_k\}$ has
a.s. an infinite cluster. Then there exists $N$ s.t. the independent
percolation model with probabilities
\begin{equation*}
P'_k = \left\{
\begin{array}{ll}
P_k &\|k\|_1<N\\
0 &\|k\|_1\geq N
\end{array}
\right.
\end{equation*}
also has, a.s., an infinite cluster.
\end{theorem}
In \cite{steif}, Meester and Steif prove the analogous result for supercritical
arrays of exponentially
decaying probabilities. It is still unknown whether the same statement is true
for probabilities that decay faster than $\|k\|_1^{-s}$ ($s<2d$) and slower than
exponentially.
\subsection{Random electrical networks}
The proof of
recurrence for the two-dimensional case involves some calculations on random
electrical networks. In Section \ref{elnet} we study such networks,
and prove
stability results for their recurrence. One of our goals in that section is:
\begin{theorem}\label{iid}
Let $G$ be a recurrent graph with bounded degree. Assign i.i.d.
conductances on the edges of $G$. Then, a.s., the resulting electrical
network is recurrent.
\end{theorem}
In \cite{triid} Pemantle and Peres studied the analogous question for the transient case,
i.e.
under what conditions i.i.d. conductances on a transient graph would preserve
its transience. They proved that it occurs if and only if there exists $p<1$ s.t.
an infinite cluster for (nearest-neighbor) percolation with parameter $p$ is transient.
Comparing the results indicates that recurrence is more stable than transience for
this type of perturbation.
Section \ref{elnet} is self-contained, i.e. it does not use any of the results proved
in other sections.
\section{The transience proof}\label{trsprf}
In this section we give the proof that the $d$-dimensional long-range
percolation cluster, with $d<s<2d$, is a transient graph.
Our methods use the idea of iterated renormalization for
long-range percolation that was introduced in \cite{NS}, where it was used in order
to prove the following theorem:
\begin{theorem}[Newman and Schulman, 1986]\label{thm:ns}
Let $1<s<2$ be fixed, and consider an independent one-dimensional
percolation model such that the bond between $i$ and $j$ is open with
probability $P_{i-j}=\eta_s(\beta,|i-j|)$, where
\begin{equation}\label{defeta}
\eta_s(\beta,k)=1-\exp(-{\beta |k|^{-s}}),
\end{equation}
and each vertex is alive with probability $\lambda\leq 1$. Then for $\lambda$
sufficiently close to $1$
and $\beta$ large enough, there exists, a.s., an infinite cluster.
\end{theorem}
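The connection function \eqref{defeta} is easy to tabulate numerically; the helper below (an illustrative sketch, with $s$ passed explicitly as a parameter) may clarify its behaviour:

```python
import math

def eta_s(beta, k, s):
    """eta_s(beta, k) = 1 - exp(-beta * |k|**(-s)): the probability,
    in the Newman--Schulman parametrization, that the bond between
    vertices i and j with |i - j| = k is open."""
    return 1.0 - math.exp(-beta * abs(k) ** (-s))
```

Note that $\eta_s(\beta,k)\approx\beta|k|^{-s}$ for long bonds, and is increasing in $\beta$ and decreasing in $|k|$.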
In order to prove our results, we need the following definition and
the following two renormalization lemmas:
\begin{definition}
We say
that the cubes
${\cal C}_1=v_1+[0,N-1]^d$ and ${\cal C}_2=v_2+[0,N-1]^d$ are {\em $k$ cubes
away from each other} if $\|v_1-v_2\|_1=Nk$.
\end{definition}
We will always use the notion of two cubes being $k$ cubes away from each other for
pairs of cubes of the same size that are aligned on the same grid.
\begin{lemma}\label{normal1}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be such that $P_k=P_{-k}$ for every $k$, such that
$P_k>0$ for every $k\in\mathbb{Z}^d\setminus\{0\}$,
and such that
there exists $d<s<2d$ s.t.
\begin{equation}\label{posliminf}
\liminf_{\|k\|_1\to\infty}\frac{P_k}{\|k\|_1^{-s}}>0.
\end{equation}
Assume that the percolation model on $\mathbb{Z}^d$ with probabilities $\{P_k\}$ has,
a.s., an infinite cluster.
Then, for every $\epsilon>0$ and $\rho$ there exists $N$ such that with
probability bigger than $1-\epsilon$, inside the cube $[0,N-1]^d$
there exists an open cluster that contains at least $\rho N^{\frac{s}{2}}$ vertices.
\end{lemma}
Lemma \ref{normal1} shows that most of the cubes contain big clusters. We also
want to estimate the probability that the clusters in two different cubes are connected
to each other.
\begin{lemma}\label{connectprob}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be such that $P_k=P_{-k}$ for every $k$, and such that
there exists $d<s<2d$ s.t.
\begin{equation*}
\liminf_{\|k\|_1\to\infty}\frac{P_k}{\|k\|_1^{-s}}>0.
\end{equation*}
Let $k_0$ be s.t. if $\|k\|_1>k_0$ then $P_k>0$, and let
\begin{equation*}
\gamma=\inf_{\|k\|_1>k_0}\frac{-\log(1-P_k)}{\|k\|_1^{-s}}>0.
\end{equation*}
Let $\rho>2(2k_0)^d$, and let $N$ and $l$ be integers. Let $C_1$ and $C_2$ be
cubes of side-length $N$, which are $l$ cubes away from each other.
Assume further that $C_1$ and $C_2$ contain clusters $U_1$ and $U_2$,
each of size $\rho N^\frac{s}{2}$.
Then, the probability that there is an open bond between a vertex in
$U_1$ and a vertex in $U_2$ is at least $\eta_s(\zeta\gamma\rho^2,l)$
for $\zeta=2^{-s-1}d^{-s}$.
\end{lemma}
In order to prove Lemma \ref{normal1}, we will need a few definitions as well as another lemma, Lemma \ref{lem:aldous} below, which is proved
in Appendix \ref{app:aldous}.
Let $M$ be a (large) integer, and let $1<\xi<2$. An {\em inhomogeneous random graph with size $M$ and parameter $\xi$}, as defined in \cite{aldous}, is a set of particles $h_1,\ldots,h_k$ of masses
$m(h_1),\ldots,m(h_k)$ such that $\sum_{i=1}^km(h_i)=M$, such that for every $i\neq j$, there is a bond between the particles $h_i$ and $h_j$ with probability
$\eta_\xi\big(m(h_i)\cdot m(h_j),M\big)$, and different bonds are independent of each other.
For a connected component $C$ in an inhomogeneous random graph $H$, we say that its {\em mass} is $m(C)=\sum_{i:h_i\in C}m(h_i)$. For $\chi>0$ and an inhomogeneous random graph $H$, we define
$N_\chi(H)$ to be the number of connected clusters in $H$ whose mass is greater than or equal to $\chi$.
\begin{lemma}\label{lem:aldous}
Let $1<\xi<2$, and let $\gamma<1$ be such that
\begin{equation}\label{eq:gammadef}
18\gamma>16+\xi.
\end{equation}
There exists $\varphi=\varphi(\xi,\gamma)>0$ and $M_0=M_0(\xi,\gamma)$ such that for all $M>M_0$ and
every inhomogeneous random graph $H$ with size $M$ and parameter $\xi$,
\[
P\big( N_{M^\gamma}(H) \geq 2\big) < M^{-\varphi}.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{normal1}]
Let $s/d<\xi<2$, and let $\gamma<1$ be as in \eqref{eq:gammadef}.
Let $\varphi=\varphi(\xi,\gamma)$ be as in Lemma \ref{lem:aldous}.
Notice that by Theorem \ref{GKN} (\cite{uniq}) there is a unique infinite cluster.
Choose
\begin{equation*}
C_n=n^a\text{ and }D_n=n^{-b},
\end{equation*}
where
\begin{equation}\label{eq:reqab}
a>b>1,\ 2b<a(2d-s), \mbox{ and }
(da-b)/da>\gamma.
\end{equation}
Choose $\epsilon'$ s.t.
\begin{equation}\label{prodconv}
3\epsilon'\prod_{k=1}^{\infty}{(1+3D_k)}<\epsilon.
\end{equation}
Such an $\epsilon'$ exists because the product in (\ref{prodconv}) converges.
Let
\begin{equation*}
\lambda = \inf_{\|k\|_1>0}\frac{-\log(1-P_k)}{\|k\|_1^{-s}}.
\end{equation*}
Notice that since
\begin{equation*}
\lim_{x\searrow 0}\frac{-\log(1-x)}{x}=1,
\end{equation*}
we get that $\lambda>0$. By the choice of $\lambda$,
for every $k$
we have that $P_k\geq\eta_s(\lambda,\|k\|_1)$.
Denote by $\alpha$ the density of the infinite percolation cluster. Let
$M>\max(M_0,2/\alpha)$, where $M_0$ is as in Lemma \ref{lem:aldous},
be s.t. the following conditions hold:
\begin{enumerate}
\item
with probability bigger than $1-\epsilon '$ at
least $\frac{1}{2}\alpha M^d$
of the vertices in $[0,M-1]^d$ are in the infinite cluster.
\item
For every $n\geq 1$,
\begin{equation}\label{eq:Mgamma}
\frac \alpha 2M((n-1)!)^{da-b} \geq \big[M(n!)^{da}\big]^\gamma.
\end{equation}
\item
For every $n\geq 1$,
\begin{equation}\label{eq:Mxi}
\eta_\xi\left(1,2\big[M((n-1)!)^a\big]^d\right) \leq \eta_s(\lambda,dM(n!)^a)
\end{equation}
\item
For every $n\geq 1$,
\begin{equation}\label{eq:Mvarphi}
n^{2ad} < \big(M[(n-1)!]^a\big)^{\varphi/2}
\end{equation}
\item
\begin{equation}\label{eq:Mvarphieps}
M^{-d\varphi}<\epsilon'
\end{equation}
\item
For every $n$,
\begin{equation}\label{eq:Mdvarphieps}
\big[M((n-1)!)^a\big]^{-d\varphi/2}\leq\epsilon'D_n.
\end{equation}
\end{enumerate}
The existence of
this $M$ follows from the ($d$-dimensional) ergodic theorem as well as \eqref{eq:reqab} and the fact that
\eqref{eq:Mxi}, \eqref{eq:Mvarphi}, \eqref{eq:Mvarphieps} and \eqref{eq:Mdvarphieps} hold for all large enough $M$.
The infinite cluster is unique, so all of the percolating vertices in
$[0,M-1]^d$ will be connected to each other within some
big cube containing $[0,M-1]^d$.
Let $K$ be such that they are all connected inside $[-K,M+K-1]^d$ with
probability $>1-\epsilon '$.
We define a {\em semi-cluster} in a cube
\begin{equation*}
{\cal C}=\prod_{i=1}^{d}{[l_iM,(l_i+1)M-1]}\text{ }(l_i\in\mathbb{Z}\text{ }\forall i)
\end{equation*}
to be a maximal (w.r.t. containment) set of vertices in the cube that is contained in a connected subset of the
$K$-enlargement
\begin{equation*}
{\cal C}_K=\prod_{i=1}^{d}{[l_iM-K,(l_i+1)M+K-1]}
\end{equation*}
of the cube.
We call a cube ${\cal C}$
{\em alive}
if there is a unique semi-cluster in ${\cal C}$ of size larger than $\frac{1}{2}\alpha M^d$.
We now show that by the choice of $M$ and $K$, the probability that there is more than one semi-cluster of size larger than $\frac{1}{2}\alpha M^d$ in
${\cal C}$ is less than $\epsilon'$.
Indeed, for every $x,y\in{\cal C}$, we have $P_{x-y}>\eta_\xi(1,M^d)=:\upsilon_1$. Therefore, we can sample the configuration $\omega$ in two steps as follows: For every $x,y\in{\cal C}$, let $P'_{x,y}$ be the value such that $P'_{x,y}+\upsilon_1-\upsilon_1P'_{x,y}=P_{x-y}$. We then sample $\omega'$ as an independent configuration where the bond $(x,y)$ appears w.p. $P'_{x,y}$ if $x,y\in{\cal C}$ and $P_{x-y}$ otherwise.
We then sample the configuration $\omega''$ as an i.i.d. $\upsilon_1$ configuration on ${\cal C}$, and set $\omega:=\omega'\cup\omega''$. Then $\omega$ has the required distribution (i.e. independent with probability $P_{x-y}$ for the edge $(x,y)$ for all $x$ and $y$).
Now, for every two $\omega'$-semi-clusters, $S_1$ and $S_2$, the probability that they are connected in $\omega$ is the probability that there is an $\omega''$ edge between them, which is $\eta_\xi(|S_1|\cdot|S_2|,M^d)$. Thus, the semi-clusters in $\omega'$ form an inhomogeneous random graph, and by Lemma \ref{lem:aldous}, the probability that there is more than one semi-cluster larger than $\frac{\alpha}{2}M^d$ is bounded by $M^{-d\varphi}<\epsilon'$.
Therefore, a cube of side-length $M$ is alive with probability at least $1-3\epsilon'$.
\ignore{
For every living cube, choose a semi-cluster (by {\em semi-cluster} we mean
a set of vertices in the cube that is contained in a connected subset of the
$K$-enlargement of the cube) of size at least $\frac{1}{2}\alpha M^d$ inside it.
We say that two cubes ${\cal C}_1$ and ${\cal C}_2$ are attached to each other if there exists an
open bond between the semi-cluster in ${\cal C}_1$ and the semi-cluster in ${\cal C}_2$.
If the cubes ${\cal C}_1$ and ${\cal C}_2$ are alive and are $k$ cubes
away from each other, then the probability that they are connected
is at least
\begin{equation*}
\eta_s(\gamma M^{2d-s},k)
\end{equation*}
for $\gamma=\frac{1}{4}\alpha^2\lambda(2d)^{-s}$.
This is true because there are at least $\frac{1}{4}\alpha^2M^{2d}$ pairs of vertices
$(v_1,v_2)$ from the semi-clusters of $C_1$ and $C_2$ respectively s.t. $\|v_1-v_2\|_1>k_0$.
For these vertices, $\|v_1-v_2\|_1<2dkM$. So, the probability that there is no edge between $v_1$
and $v_2$ is bounded by $1-\eta_s(\lambda,2dkM)=\exp(-\lambda(2dkM)^{-s})$. So, the probability
that there is no edge between the semi-cluster in $C_1$ and the one in $C_2$ is no more than
\begin{eqnarray*}
\left[\exp(-\lambda(2dkM)^{-s})\right]^{\frac{1}{4}\alpha^2M^{2d}}
&=&\exp\left(-\frac{1}{4}\alpha^2M^{2d}\lambda(2dkM)^{-s}\right)\\
&=&\exp\left(-\frac{1}{4}\alpha^2\lambda(2d)^{-s}M^{2d-s}k^{-s}\right)\\
&=&1-\eta_s(\gamma M^{2d-s},k).
\end{eqnarray*}
}
For $k=1,2,\ldots,$ let
\[
M_k=M\prod_{l=1}^kC_l=M[k!]^a \ \ \ \ \ \mbox{and} \ \ \ \ \ U_k=\frac{\alpha}{2}\prod_{j=1}^{k-1}D_j=\frac{\alpha}{2}[(k-1)!]^{-b}
\]
Note that $M_1=M$.
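The closed forms above telescope, since $C_l=l^a$ and $D_j=j^{-b}$; a quick numerical check (illustrative only, with arbitrary values of $M$ and $a$):

```python
from math import factorial

def M_k(M, a, k):
    """M_k = M * prod_{l=1}^k C_l with C_l = l**a,
    which telescopes to M * (k!)**a."""
    prod = 1.0
    for l in range(1, k + 1):
        prod *= l ** a
    return M * prod
```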
\ignore{
Choose some large number $\beta$.
Take $M$
and $K$ s.t. $\gamma M^{2d-s}>\beta$ and s.t. the probability of a cube
to be alive is more than $1-2\epsilon'$. The probability that two cubes that are $k$
cubes away from each other are attached is at least $\eta_s(\beta,k)$.
}
Let $R$ be a number such that
\begin{equation}\label{katanmeps}
(MR+2K)^d<2(MR)^d.
\end{equation}
We want to renormalize to
cubes of side length $N=RM+K$.
We cannot apply the renormalization
argument from \cite{NS}, because the events that two (close enough) cubes
are alive are dependent. Thus, we use a different technique
of renormalization:
Consider the $M$-sided cubes as first stage vertices. Then, take cubes of side-length $C_1$ of
first stage vertices, and consider them as second stage vertices. Now, take cubes of side-length
$C_2$ of second stage vertices and consider them as third stage vertices.
Keep on taking cubes of
side length $C_n$ of $n$-stage vertices and consider them as $n+1$ stage vertices.
Choose $R$ to be
\begin{equation*}
R=\prod_{n=1}^{L}C_n
\end{equation*}
for $L$ large enough for (\ref{katanmeps}) to hold.
We already have a notion of a first stage vertex being alive.
Define inductively that an $n$-stage vertex is {\em alive} if
at least $D_n(C_n)^d$ of the $(n-1)$-stage vertices in it are alive, and every
two of those vertices are {\em attached} to each other, i.e. there is an open bond
between the big clusters in these $n-1$ stage vertices.
Denote by $\lambda_n$ the
probability that an $n$-stage vertex is not alive. We want to bound $\lambda_n$:
Denote by $\phi_n$ the probability that there aren't enough living
$(n-1)$-stage vertices inside our $n$-stage vertex, and by $\psi_n$ the
probability that not every two of them are attached to each other. Then,
$\lambda_n\leq\phi_n+\psi_n$.
Given $\lambda_{n-1}$, the
expected number of dead $(n-1)$-stage vertices in
an $n$-stage vertex is $\lambda_{n-1}C_n^d$. Therefore, by the Markov
inequality,
\begin{equation*}
\phi_n\leq\frac{\lambda_{n-1}}{1-D_n}.
\end{equation*}
To estimate $\psi_n$, we again use the same argument, based on Lemma \ref{lem:aldous}.
For every pair ${\cal C}_1$ and ${\cal C}_2$ of level $n-1$ vertices of distance bounded by $C_n$, and every $x,y\in{\cal C}={\cal C}_1\cup{\cal C}_2$, we have $P_{x-y}>\eta_\xi(1,2M_{n-1}^d)=:\upsilon_{n}$. Therefore, we can sample the configuration $\omega$ in two steps as follows: For every $x,y\in{\cal C}$, let $P'_{x,y}$ be the value such that $P'_{x,y}+\upsilon_n-\upsilon_nP'_{x,y}=P_{x-y}$. We then sample $\omega'$ as an independent configuration where the bond $(x,y)$ appears w.p. $P'_{x,y}$ if $x,y\in{\cal C}$ and $P_{x-y}$ otherwise.
We then sample the configuration $\omega''$ as an i.i.d. $\upsilon_n$ configuration on ${\cal C}$, and set $\omega:=\omega'\cup\omega''$. Then $\omega$ has the required distribution (i.e. independent with probability $P_{x-y}$ for the edge $(x,y)$ for all $x$ and $y$).
Now, for every two $\omega'$-semi-clusters, $S_1$ and $S_2$, the probability that they are connected in $\omega$ is the probability that there is an $\omega''$ edge between them, which is $\eta_\xi(|S_1|\cdot|S_2|,2M_{n-1}^d)$. Thus, the semi-clusters in $\omega'$ form an inhomogeneous random graph, and by Lemma \ref{lem:aldous}, the probability that there is more than one semi-cluster larger than $U_nM_{n-1}^d$ is bounded by $M_{n-1}^{-d\varphi}$. However, if both ${\cal C}_1$ and ${\cal C}_2$ are alive and the big components in them are not connected to each other, then there are (at least) two semi-clusters in ${\cal C}$ which are larger than $U_nM_{n-1}^d$. Therefore,
\begin{eqnarray*}
\psi_n \leq P\left[
\exists_{{\cal C}_1 \mbox{ and } {\cal C}_2} \mbox{in $[0,M_n]^d$, alive and not connected}
\right]
\leq M_{n-1}^{-d\mbox{var}phi}{{M_n^d}\choose{2}}\leq M_{n-1}^{-d\mbox{var}phi/2}
\leq\epsilon'D_n.
{\cal E}d{eqnarray*}
\ignore{
Every living $(n-1)$-stage vertex includes at least
\begin{equation*}
V_n=\prod_{k=1}^{n-1}(C_k)^dD_k=((n-1)!)^{da-b}
\end{equation*}
living first-stage vertices inside its connected component. The distance
between those first-stage vertices cannot exceed
\begin{equation*}
U_n=d\prod_{k=1}^{n}C_k=d(n!)^a.
\end{equation*}
Therefore,
\begin{equation*}
\psi_n\leq \binom{(C_n)^d}{2}(1-\eta_s(\beta,U_n))^{{V_n}^2}
\leq {C_n}^{2d}e^{-\beta U_n^{-s}{V_n}^2},
\end{equation*}
i.e.
\begin{eqnarray*}
\psi_n\leq n^{2da}e^{-\beta(d^{-s}(n!)^{-as}\cdot ((n-1)!)^{2(da-b)})}\\
=n^{2ad}\cdot e^{-\beta(d^{-s}n^{-as}\cdot ((n-1)!)^{a(2d-s)-2b})}.
\end{eqnarray*}
(Notice that the event that the connecting edges exist might depend on the
existence of enough living vertices. However, in this case, the FKG inequality
works in our favor.)
\\This shows that $\psi_n$ decays faster than exponentially, and therefore,
since we control $\beta$ and can make it as large as we like, we can achieve
\begin{equation*}
\psi_n<\epsilon' D_n
\end{equation*}
for every $n$.
}
By the choice of $M$ and $K$, and by the definition of $\lambda_1$, we see
that $\lambda_1<3\epsilon'$. In addition, for every $n$,
\begin{eqnarray*}
\lambda_n\leq\psi_n+\phi_n
&\leq&\epsilon'D_n+\frac{\lambda_{n-1}}{1-D_n}\\
&\leq&\epsilon'D_n+\lambda_{n-1}(1+2D_n)\\
&\leq&(1+3D_n)\max(\lambda_{n-1},\epsilon').
\end{eqnarray*}
Therefore, by induction, we get that for every $n$
\begin{equation*}
\lambda_n\leq 3\epsilon'\prod_{k=1}^{n}(1+3D_k),
\end{equation*}
and so, for all $n$,
\begin{equation*}
\lambda_n\leq \Theta\epsilon'
\end{equation*}
where
\begin{equation*}
\Theta=3\prod_{k=1}^{\infty}{(1+3D_k)}<\infty.
\end{equation*}
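Since the $D_k$ are summable, the infinite product in the definition of $\Theta$ indeed converges. A quick numeric sanity check (with the hypothetical choice $D_k=k^{-2}$; any summable sequence behaves the same way):

```python
import math

# Partial products of prod_k (1 + 3 D_k) with the hypothetical choice
# D_k = k**(-2); convergence only needs sum_k D_k < infinity.
def log_partial_product(terms, b=2.0):
    return sum(math.log1p(3.0 * k ** (-b)) for k in range(1, terms + 1))

p1 = math.exp(log_partial_product(10_000))
p2 = math.exp(log_partial_product(100_000))
assert abs(p1 - p2) / p2 < 1e-3   # the partial products have stabilised
```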
So, with probability at least $1-\Theta\epsilon'>1-\epsilon$, we have a
cluster of size
\begin{equation*}
\prod_{n=1}^L{D_n(C_n)^d}=\prod_{n=1}^{L}{n^{da-b}}
=\left(\prod_{n=1}^L{C_n}\right)^\frac{da-b}{a}
=R^\frac{da-b}{a}.
\end{equation*}
This is larger than $2\rho R^{\frac{s}{2}}$ if $L$ is large enough, because
\begin{equation*}
\frac{da-b}{a}>\frac{s}{2}.
\end{equation*}
So, by (\ref{katanmeps}), the lemma is proved for
$N=RM+2K$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{connectprob}]
There are $\rho^2N^s$ pairs of vertices $(v_1,v_2)$ s.t. $v_1\in U_1$ and $v_2\in U_2$.
For every $v_1\in U_1$ there are at most $(2k_0)^d<\frac{1}{2}\rho N^\frac{s}{2}$ vertices at
distance smaller than or equal to $k_0$ from $v_1$. So, at least half of the pairs $(v_1,v_2)$
satisfy $\|v_1-v_2\|_1>k_0$. All of the pairs satisfy $\|v_1-v_2\|_1\leq 2ldN$.
For a given pair $(v_1, v_2)$ s.t. $\|v_1-v_2\|_1>k_0$,
the probability that there is no edge between $v_1$ and $v_2$ is
bounded by $1-\eta_s(\gamma,2ldN)$. So, the probability that
there is no edge between $U_1$ and $U_2$ is bounded by
\begin{eqnarray*}
\left[1-\eta_s(\gamma,2ldN)\right]^{\frac{1}{2}\rho^2N^s}
&=&\left[\exp(-\gamma(2ldN)^{-s})\right]^{\frac{1}{2}\rho^2N^s}\\
&=&\exp(-\gamma(2ldN)^{-s}\cdot\tfrac{1}{2}\rho^2N^s)\\
&=&\exp(-\tfrac{1}{2}(2d)^{-s}\gamma\rho^2l^{-s})\\
&=&1-\eta_s(\zeta\gamma\rho^2,l).
\end{eqnarray*}
\end{proof}
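The chain of equalities in this proof can be checked numerically; in the sketch below, $\eta_s(\beta,r)=1-\exp(-\beta r^{-s})$ as elsewhere in the paper, $\zeta=2^{-s-1}d^{-s}$, and all parameter values are hypothetical.

```python
import math

# eta_s(beta, r) = 1 - exp(-beta * r**(-s)); zeta = 2**(-s-1) * d**(-s).
# The parameter values below are hypothetical.
def eta(beta, r, s):
    return 1.0 - math.exp(-beta * r ** (-s))

d, s, gamma, rho, l, N = 2, 3.0, 5.0, 4.0, 7, 50
zeta = 2.0 ** (-s - 1) * d ** (-s)
lhs = (1.0 - eta(gamma, 2 * l * d * N, s)) ** (0.5 * rho ** 2 * N ** s)
rhs = 1.0 - eta(zeta * gamma * rho ** 2, l, s)
assert abs(lhs - rhs) < 1e-8
```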
We can now use Lemma \ref{normal1} and Lemma \ref{connectprob} to prove the following
extension of Theorem \ref{criti}:
\begin{theorem}
Let $d\geq 1$, and let $\{P_k\}_{k\in\mathbb{Z}^d}$ be probabilities such that there exists $s<2d$
for which
\begin{equation}\label{hasum}
\liminf_{\|k\|\to\infty}\frac{P_k}{{\|k\|_1^{-s}}}>0.
\end{equation}
Then, if $\{P_k\}$ is percolating, there exists an $\epsilon>0$ such that
$\{P'_k=(1-\epsilon)P_k\}$ is percolating too.
\end{theorem}
\begin{proof}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be a percolating system that satisfies (\ref{hasum}).
Let $k_0$ and $\gamma$ be as in Lemma \ref{connectprob}. Let, again,
$\zeta=2^{-s-1}d^{-s}$.
Let $\lambda<1$, $\beta$ and $\delta>0$ be such that a system in which every
vertex is alive with probability $\lambda-\delta$ and every two vertices $x$
and $y$ are connected to each other with probability $\eta_s(\beta(1-\delta),\|x-y\|_1)$
is percolating. For one dimension one can choose such $\lambda, \beta$ and $\delta$
by Theorem \ref{thm:ns}. For higher dimensions we may use the fact that site-bond
nearest neighbor percolation with high enough parameters has, a.s., an infinite cluster.
Let $\rho>2(2k_0)^d$ be s.t. $\zeta\gamma\rho^2\geq\beta$. By Lemma \ref{normal1},
there exists $N$ s.t.
a cube of side length $N$ contains a cluster of size $\rho N^\frac{s}{2}$
with probability bigger than $\lambda$. A cube that contains
a cluster of size at least $\rho N^\frac{s}{2}$ will be considered {\em alive}.
For $\omega\leq 1$, consider the system $\{P'_k=\omega P_k\}$.
The probability that in the system $\{P'_k\}$
an $N$-cube is alive is a continuous function of $\omega$.
If we define $k'_0$ and $\gamma'$ for $\{P'_k\}$
the same way we defined $k_0$ and $\gamma$, then we get that $k'_0=k_0$, and
$\gamma'$ is a continuous function of $\omega$.
Choose $\epsilon$ so small that in the system $\{P'_k=(1-\epsilon)P_k\}$ the
probability of an $N$-cube being alive is no less than $\lambda-\delta$ and
$\gamma'\geq(1-\delta)\gamma$. Then, in the system
$\{P'_k\}$, every $N$-cube is alive with probability bigger than $\lambda-\delta$, and
two cubes at distance $k$ cubes from each other are connected with probability
bigger than
\begin{eqnarray*}
\eta_s(\zeta\gamma'\rho^2,k)&\geq&\eta_s((1-\delta)\zeta\gamma\rho^2,k)\\
&\geq&\eta_s(\beta(1-\delta),k).
\end{eqnarray*}
So, by the choice of $\beta$, $\lambda$ and $\delta$, a.s. there is an infinite cluster
in the system $\{P'_k\}$.
\end{proof}
\begin{corollary}
Let $d\geq 1$, and let $\{P_k\}_{k\in\mathbb{Z}^d}$ be probabilities such that there exists $s<2d$
for which
\begin{equation}
\liminf_{\|k\|\to\infty}\frac{P_k}{{\|k\|_1^{-s}}}>0.
\end{equation}
If $\{P_k\}$ is critical, i.e. for every $\epsilon>0$ the system
$\{(1+\epsilon)P_k\}$
is percolating but the system
$\{(1-\epsilon)P_k\}$
is not percolating, then $\{P_k\}$ is not percolating.
\end{corollary}
Lemma \ref{normal1} also serves us in proving Theorem \ref{jeff}.
\begin{proof}[Proof of Theorem \ref{jeff}]
Let $\{P_k\}_{k\in \mathbb{Z}^d}$ be such that
\begin{equation*}
\liminf_{\|k\|\to\infty}\frac{P_k}{\|k\|_1^{-s}}>0
\end{equation*}
for $s<2d$.
Let $k_0$, $\gamma$ and $\zeta$ be as before.
Let $\epsilon$ and $\rho>2(2k_0)^d$ be s.t. site-bond nearest neighbor
percolation on $\mathbb{Z}^d$, in which every site is alive with probability $1-\epsilon$
and every bond is open with probability $\eta_s(\zeta\gamma\rho^2,1)$,
percolates. Let $N$ be suitable for those $\epsilon$ and $\rho$ by
Lemma \ref{normal1}. Now, erase all of the bonds of length bigger than $4Nd$.
Renormalize the space to cubes of side-length $N$.
By erasing only bonds of length $>4Nd$, we did not erase bonds that are
either inside
$N$-cubes or between neighboring $N$-cubes.
So, the renormalized picture still gives us site-bond percolation with probabilities
$1-\epsilon$ and $\eta_s(\zeta\gamma\rho^2,1)$, and therefore an infinite cluster exists a.s.
\end{proof}
Returning to transience, we now prove that for large enough parameters $\beta$
and $\lambda$, the infinite cluster is transient. Later we will use
Lemma \ref{normal1} and Lemma \ref{connectprob} to reduce any percolating system
(with $d<s<2d$) to one with these large $\beta$ and $\lambda$.
\begin{lemma}\label{trans1}
Let $d\geq 1$ and $d<s<2d$. Consider the independent bond-site percolation model in which every
two vertices, $i$ and $j$, are connected with probability
$\eta_s(\beta,\|i-j\|_1)$, and every vertex is alive with probability $\lambda<1$.
If $\beta$ is large enough and $\lambda$ is close enough to $1$, then (a.s.) the random
walk on the infinite cluster is transient.
\end{lemma}
In order to prove Lemma \ref{trans1}, we need the notion of a {\em renormalized graph}:
for a sequence $\{C_n\}_{n=1}^{\infty}$, we construct a graph whose vertices are marked
$V_l(j_l,\ldots,j_1)$ where $l=0,1,\ldots$ and $1\leq j_n\leq C_n$. For convenience,
set $V_k(0,0,\ldots,0,j_l,\ldots,j_1)=V_l(j_l,\ldots,j_1)$. For $l\geq m$,
we define $V_l(j_l,\ldots,j_m)$ to be the set
\begin{equation*}
\{V_l(j_l,\ldots,j_m,u_{m-1},\ldots,u_1)\,|\,1\leq u_{m-1}\leq C_{m-1},\ldots,1\leq u_1\leq C_1\}.
\end{equation*}
\begin{definition}
A {\em renormalized graph} for a sequence $\{C_n\}_{n=1}^{\infty}$ is a graph whose vertices
are $V_l(j_l,\ldots,j_1)$ where $l=0,1,\ldots$ and $1\leq j_n\leq C_n$, such that
for every $k\geq l>2$, every $j_k,\ldots,j_{l+1}$ and every $u_l,u_{l-1}$ and
$w_l,w_{l-1}$, there is an edge connecting a vertex in $V_k(j_k,\ldots,j_{l+1},u_l,u_{l-1})$
and a vertex in $V_k(j_k,\ldots,j_{l+1},w_l,w_{l-1})$.
\end{definition}
One may view a renormalized graph as a graph having the following recursive structure:
the $n$-th stage of the graph is composed of $C_n$
graphs of stage $(n-1)$, such that every $(n-2)$-stage graph in each of
them is connected to every $(n-2)$-stage graph in any other.
(A zero-stage graph is a single vertex.)
\begin{lemma}\label{subgraph}
Under the conditions of Lemma \ref{trans1}, if $\beta$
and $\lambda$ are large enough, then a.s. the infinite cluster contains a
renormalized sub-graph with $C_n=(n+1)^{2d}$.
\end{lemma}
\begin{proof}
We will show
that with positive probability $0$ belongs to a renormalized sub-graph. Then, by
ergodicity of the shift operator and the fact that the event
$E=\{\text{there exists a renormalized sub-graph}\}$ is shift invariant, we get
${\bf P}(E)=1$. In order to do that, we use the exact same technique used by Newman
and Schulman in \cite{NS}:
take
\begin{equation}\label{hagda}
W_n=2(n+1)^2, \qquad \theta_n=1-\frac{n^{-1.5}}{2}, \qquad \lambda_n=1-\frac{(n+1)^{-1.5}}{4}.
\end{equation}
Renormalize $\mathbb{Z}^d$ by viewing cubes of side-length $W_1$ as {\em first stage} vertices
(the original vertices will be viewed as zero-stage vertices). Then, take cubes
of side-length $W_2$ of first-stage vertices as {\em second stage} vertices,
and continue grouping together cubes of side-length $W_n$ of $(n-1)$-stage vertices
to form {\em $n$-stage} vertices.
We now define inductively the notion of an ($n$-stage) vertex being alive:
the notion of a zero-stage vertex being alive is given to us. A first-stage
vertex is {\em alive} if at least $\theta_1W_1^d$ of its vertices are alive,
and they are all connected to each other.
For every living first-stage vertex, we choose $C_1$ zero-stage vertices, and call
them {\em active}. The {\em active part} of a first-stage vertex is the set of active
zero-stage vertices in it.
The active part of a living zero-stage vertex is the singleton containing the vertex.
We now define (inductively) simultaneously the notion of an $n$-stage vertex being alive,
and of the active part of this vertex.
For $n\geq 2$, we say that an $n$-stage vertex $v$ is {\em alive} if:
\\(A) At least $\theta_nW_n^d$ of its vertices are alive, and
\\(B) If $i_1$ is a living $(n-2)$-stage vertex that belongs to a living
$(n-1)$-stage vertex $i_2$ that belongs to $v$, and $j_1$ is a living $(n-2)$-stage
vertex that belongs to a living $(n-1)$-stage vertex $j_2$ that belongs to $v$, then
there exists an open bond connecting a zero-stage vertex in the active part of $i_1$
to a zero-stage vertex in the active part of $j_1$.
When choosing the active vertices, if the vertex that includes $0$ is alive, we choose it to
be active.
To define the active part: if $v$ is a living $n$-stage vertex, then we choose $C_n$ of its
living $(n-1)$-stage vertices to be active.
The active part of $v$ is the union of the active parts
of its active vertices. (Notice that the active part is always a set of zero-stage vertices.)
We denote the event that (A) occurs for the $n$-stage vertex containing $0$ by $A_n$,
and by $B_n$ we denote the event that (B) occurs for the $n$-stage vertex containing $0$.
$A_n(v)$ and $B_n(v)$ will denote the same events for the $n$-stage vertex $v$. Of
course, ${\bf P}(A_n)={\bf P}(A_n(v))$ and ${\bf P}(B_n)={\bf P}(B_n(v))$ for every $v$.
Further, we denote by $L_n(v)$ the event that the $n$-stage vertex $v$ is alive,
and by $L_n$ the event that the $n$-stage vertex containing $0$ is alive.
Let $v$ be an $n$-stage vertex. Given $A_n$ we want
to estimate the probability of $B_n$: we have
at most
\begin{equation}\label{num_pairs}
{{(W_nW_{n-1})^d}\choose{2}}<16^d(n+1)^{8d}
\end{equation}
pairs of $(n-2)$-stage vertices.
If $i_1$ and $i_2$ are living $(n-2)$-stage vertices in $v$, then the distance between
a zero-stage vertex in $i_1$ and a zero-stage vertex in $i_2$ cannot exceed
\begin{equation}\label{dist_act}
\prod_{k=1}^{n}{W_k}=2^n((n+1)!)^2.
\end{equation}
The size of the active part of $i_1$ (and of $i_2$) is
\begin{equation}\label{num_act}
\prod_{k=1}^{n}{C_k}=((n+1)!)^{2d}.
\end{equation}
By (\ref{num_act}) and (\ref{dist_act}), the probability that there is no open bond
between $i_1$ and $i_2$ is bounded by
\begin{eqnarray*}
\left[\exp\left(-\beta\cdot 2^{-ns}((n+1)!)^{-2s}\right)\right]^{((n+1)!)^{4d}}\\
=\exp\left(-\beta\cdot 2^{-ns}((n+1)!)^{4d-2s}\right),
\end{eqnarray*}
and by (\ref{num_pairs}) we get
\begin{eqnarray}\label{probB}
{\bf P}\left[B_n^c|A_n\right]
&\leq&
16^d(n+1)^{8d}\exp\left(-\beta\cdot 2^{-ns}((n+1)!)^{4d-2s}\right)\\
\nonumber &\leq& \exp\left(9d\log(n)-\beta\cdot 2^{-ns}((n+1)!)^{4d-2s}\right).
\end{eqnarray}
We may assume that $\beta>1$, since we deal with
``large enough'' $\beta$.
By (\ref{probB}), there exists $n_0$ s.t. if $n>n_0$ then
\begin{equation}\label{fastthanexp}
{\bf P}\left[B_n^c|A_n\right]<e^{-n}.
\end{equation}
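To see why such an $n_0$ exists: since $d<s<2d$, the exponent $4d-2s$ is positive, so the factorial term in (\ref{probB}) eventually crushes both $9d\log n$ and the linear term $n$. A numeric sketch, with the hypothetical parameters $d=2$, $s=3$, $\beta=1.5$:

```python
import math

# log of the bound in (probB): 9 d log n - beta * 2**(-n s) * ((n+1)!)**(4d-2s),
# with hypothetical parameters d = 2, s = 3 (so 4d - 2s = 2), beta = 1.5.
def log_bound(n, d=2, s=3.0, beta=1.5):
    log_fact = math.lgamma(n + 2)  # log((n+1)!)
    fact_term = math.exp(-n * s * math.log(2) + (4 * d - 2 * s) * log_fact)
    return 9 * d * math.log(n) - beta * fact_term

# Beyond some n0, the bound drops below e^{-n}:
assert all(log_bound(n) < -n for n in range(20, 40))
```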
We now prove the following claim:
\begin{claim}\label{induc}
There exists $n_1$ such that for every $n>n_1$, if ${\bf P}(L_n)\geq\lambda_n$ then
${\bf P}(L_{n+1})\geq\lambda_{n+1}$.
\end{claim}
\begin{proof}
Let $\psi={\bf P}(L_n)$. First, we estimate ${\bf P}(A_{n+1})$. The event
$A_{n+1}^c$ is the event that at least $(1-\theta_{n+1})W_{n+1}^d$ vertices are
dead. The number of dead vertices is a $(W_{n+1}^d, 1-\psi)$ binomial variable,
and by the induction hypothesis together with (\ref{hagda}),
$1-\psi \leq \frac{1}{2}(1-\theta_{n+1})$. So, by large deviation estimates,
\begin{eqnarray}\label{larged}
{\bf P}(A_{n+1}^c) &<& \exp\left(-\frac{\frac{1}{2}(1-\theta_{n+1})W_{n+1}^d}{16}\right)\\
\nonumber &\leq& \exp\left(-\frac{n^{2d-1.5}}{32}\right).
\end{eqnarray}
If $n_1>n_0$ and is large enough, then by (\ref{fastthanexp}) and (\ref{larged}),
\begin{eqnarray*}
{\bf P}(L_{n+1}^c) &\leq& {\bf P}(A_{n+1}^c)+{\bf P}(B_{n+1}^c|A_{n+1})\\
&\leq& \exp\left(-\frac{n^{2d-1.5}}{32}\right)+e^{-n}\\
&\leq& \frac{(n+1)^{-1.5}}{4}=1-\lambda_{n+1}.
\end{eqnarray*}
\end{proof}
We can take $\beta$ and $\lambda$ so large that ${\bf P}(L_{n_1})>\lambda_{n_1}$. But then,
by Claim \ref{induc}, for every $n>n_1$, ${\bf P}(L_{n})>\lambda_{n}$. So, since the events
$L_n$ are positively correlated,
\begin{equation*}
{\bf P}\left(\bigcap_{n=1}^{\infty}L_n\right)\geq\prod_{n=1}^{\infty}{{\bf P}(L_n)}>0.
\end{equation*}
So with positive probability, $0$ is in an infinite cluster. The active part of the infinite
cluster (i.e. the union of the active parts of the $n$-stage vertices containing $0$, over all $n$)
is a renormalized sub-graph of the infinite cluster that contains $0$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{trans1}]
In view of Lemma \ref{subgraph} it suffices to show that for $C_n=(n+1)^{2d}$,
the renormalized graph is transient.
We build, inductively, a flow $F$ from $V_1(1)$ to infinity with
finite energy. First, $F$ flows $C_1^{-1}$ mass from $V_1(1)$ to each of
\begin{equation*}
\{V_1(i)\}_{i=2}^{C_1}.
\end{equation*}
Now, inductively, assume that $F$ distributes the mass among
\begin{equation*}
\{V_n(i_1,\ldots,i_n)\,|\,2\leq i_k\leq C_k\}.
\end{equation*}
Then, for each $(n-1)$-stage graph $V_n(i),i\neq 1$ and every $n$-stage graph
$V_{n+1}(j),j\neq 1$, there are two vertices, $p^{(n)}_{i,j}\in V_n(i)$ and
$q^{(n)}_{i,j}\in V_{n+1}(j)$, which are connected to each other by an open bond.
(Notice that the vertices $\{p^{(n)}_{i,2},\ldots,p^{(n)}_{i,C_{n+1}}\}$, as well as
$\{q^{(n)}_{2,j},\ldots,q^{(n)}_{C_n,j}\}$, do not necessarily differ from each other.)
Inductively, we know how to flow mass from one vertex in $V_n(i)$ to all of
$V_n(i)$. We can flow it backwards in the same manner to any desired vertex.
Flow the mass so that it is distributed equally among
$\{p^{(n)}_{i,2},\ldots,p^{(n)}_{i,C_{n+1}}\}$ (if a vertex appears twice, it gets a
double portion). Now flow the mass from each $p^{(n)}_{i,j}$ to the corresponding
$q^{(n)}_{i,j}$,
and from $q^{(n)}_{i,j}$ (again in the familiar inductive way) we flood
$V_{n+1}(j)$. Now we bound the energy of the flow. Let $E_n$ be the maximal
possible energy of the first $n$ stages of the flow (i.e. the part of the flow
which distributes the mass from the origin to $V_{n+1}$ and takes it backwards to
$\{p^{(n+1)}_{i,j}\}\subset V_{n+1}$). It can be
bounded by the energy of the first $n-1$ stages of the flow, plus:
\\(A) Flowing between $p^{(n)}_{i,j}$ and $q^{(n)}_{i,j}$: this has energy at most
$(C_nC_{n+1})^{-1}$.
\\(B) Flowing inside $V_{n+1}$: the energy is bounded by
$\frac{E_{n-1}}{C_{n+1}}$.
\\So,
\begin{equation*}
E_n\leq \left(1+\frac{1}{C_{n+1}}\right)E_{n-1}+\frac{1}{C_nC_{n+1}}.
\end{equation*}
The total energy is bounded by the supremum of $\{E_n\}$, which is finite because
\begin{equation*}
\sum_{n=1}^{\infty}{\frac{1}{C_n}}<\infty.
\end{equation*}
\end{proof}
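The boundedness of the energy can also be seen by iterating the recursion numerically. The sketch below uses $C_n=(n+1)^{2d}$ with the hypothetical values $d=1$ and $E_1=1$; only $\sum_n 1/C_n<\infty$ matters, so the exact index shift in the recursion is immaterial.

```python
# Iterate E_n <= (1 + 1/C_{n+1}) E_{n-1} + 1/(C_n * C_{n+1}) with
# C_n = (n+1)**(2d).  d = 1 and E_1 = 1 are hypothetical; boundedness
# only uses sum_n 1/C_n < infinity.
def energy_bounds(d=1, steps=200, E1=1.0):
    C = lambda n: (n + 1) ** (2 * d)
    E, out = E1, [E1]
    for n in range(2, steps + 1):
        E = (1 + 1.0 / C(n + 1)) * E + 1.0 / (C(n) * C(n + 1))
        out.append(E)
    return out

Es = energy_bounds()
assert Es[-1] < 10 * Es[0]           # the energy bounds stay bounded
assert abs(Es[-1] - Es[-2]) < 1e-3   # and converge
```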
Let $v$ be a vertex. The amount of flow that goes through $v$ is defined to be
$
f(v)=\frac{1}{2}\sum{|f(e)|},
$
where the sum is taken over all of the edges $e$ that have $v$ as an end point.
Then, we get a notion of the {\em energy
of the flow through the vertices}, defined as
\begin{equation*}
{\cal E}_{\text{vertices}}=\sum\limits_{v\text{ is a vertex}}f(v)^2.
\end{equation*}
\begin{remark}\label{vertices}
The same calculation as in Lemma \ref{trans1} yields that not only the energy of the flow on the
bonds is finite, but also the energy of the flow through the vertices.
\end{remark}
This fact allows us to obtain the main goal of this section:
\begin{theorem}\label{thm:transience}
Let $d\geq 1$, and let $\{P_k\}_{k\in \mathbb{Z}^d}$ satisfy:
\\(A) $P_k = P_{-k}$ for every $k\in\mathbb{Z}^d$.
\\(B) The independent percolation model in which the bond between $i$ and $j$ is
open with probability $P_{i-j}$ has, a.s., an infinite cluster.
\\(C) There exists $d<s<2d$ s.t.
\begin{equation}\label{katanm2}
\liminf_{\|k\|\to\infty}\frac{P_k}{{\|k\|_1^{-s}}}>0.
\end{equation}
\\(D)
\begin{equation*}\label{finitdeg}
\sum_{k\in\mathbb{Z}^d}{P_k}<\infty.
\end{equation*}
Then, a.s., a random walk on the infinite cluster is transient.
\end{theorem}
\begin{proof}
By (D), the degree of every vertex in the infinite cluster is finite, so the random walk
is well defined.
Let $\beta$ and $\lambda$ be
large enough for Lemma \ref{trans1}. Then, by Lemma \ref{normal1}, there
exists $N$ such that after renormalizing with cubes of side-length $N$ we
get a system whose connection probabilities dominate
$\eta_s(\beta,|i-j|)$, and the probability of a vertex to live is bigger than
$\lambda$. By Lemma \ref{trans1}, there is a flow on this graph whose energy is
finite. For the walk to be transient, the energy of the flow should also be
finite inside the $N$-cubes. This is true because of Remark
\ref{vertices} and the fact that inside each $N$-cube there are no
more than
$\binom{N^d}{2}$
bonds.
\end{proof}
One can consider other types of energy as well. For any $q$, we define the
$q$-energy of a flow as in equation (\ref{energy}).
Theorem \ref{thm:transience} says that for every $\{P_k\}$ that satisfies conditions
(A) through (D),
there is a flow
with finite $2$-energy. Actually, one can say more:
\begin{theorem}
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be as in Theorem \ref{thm:transience}. Then, for
every $q>1$, there is a flow with finite $q$-energy on the infinite cluster.
\end{theorem}
\begin{proof}[Sketch of the proof]
The proof is essentially the same as the proof of Theorem \ref{thm:transience}.
We can construct a renormalized sub-graph of the infinite cluster with
$C_n=(n+1)^{kd}$, for $k$ s.t. $k(q-1)>1$. We construct the flow the same way we did
in Lemma \ref{trans1}. The same energy estimation now yields the
required finiteness of the energy. Lemma \ref{normal1} and
Remark \ref{vertices} are used the same way they were used in Theorem
\ref{thm:transience}.
If we construct a renormalized graph with $C_n=2^n$ (such a graph a.s. exists as a
sub-graph
of the infinite cluster), we get a flow whose $q$-energy is finite for every $q>1$.
\end{proof}
\section{The recurrence proofs}\label{recprf}
In this section we prove the recurrence results. Unlike the transient case,
here we give two different proofs: one for the one-dimensional case, and the
other for the two-dimensional case. We begin with the easier
one-dimensional case.
\begin{theorem}\label{one_d_recur}
Let $\{P_k\}_{k=1}^\infty$ be a sequence of probabilities s.t.:
\\(A) The independent percolation model in which the bond between $i$ and $j$ is
open with probability $P_{|i-j|}$ has, a.s., an infinite cluster, and
\\(B)
\begin{equation*}
\limsup_{k\to\infty}\frac{P_k}{k^{-2}}<\infty.
\end{equation*}
Then, a.s., a random walk on the infinite cluster is recurrent.
\end{theorem}
The proof of the theorem relies on the Nash-Williams theorem, whose proof
can be found in \cite{yuval}:
\begin{theorem}[Nash-Williams]\label{Thm:nw}
Let $G$ be a graph with conductance $C_e$ on every edge $e$. Consider a
random walk on the graph such that when the particle is at some vertex, it
chooses its way with probabilities proportional to the conductances on the
edges that it sees.
Let $\{\Pi_n\}_{n=1}^\infty$ be disjoint cut-sets, and denote by
$C_{\Pi_n}$ the sum of the conductances in $\Pi_n$.
If
\begin{equation*}
\sum_n{C_{\Pi_n}^{-1}}=\infty,
\end{equation*}
then the random walk is recurrent.
\end{theorem}
In order to prove Theorem \ref{one_d_recur}, we need the following definition and
three easy lemmas.
The following definition appeared originally in \cite{AN} and \cite{NS}.
\begin{definition}[Continuum Bond Model]\label{defcont}
Let $\beta$ be s.t.
\begin{equation*}
\int_{0}^{1}\int_{k}^{k+1}{\beta(x-y)^{-2}dydx}>P_k
\end{equation*}
for every $k$. The {\em continuum bond model} is the two-dimensional inhomogeneous Poisson
process $\xi$ with density $\beta(x-y)^{-2}$. We say that two sets $A$ and $B$ are
connected if $\xi(A\times B)>0$.
\end{definition}
Notice that by Definition \ref{defcont}, the probability that the interval $[i,i+1]$ is
connected to $[j,j+1]$ in the continuum model is not smaller than the probability that $i$
is directly connected to $j$ in the original model. (By saying that a vertex is
{\em directly connected} to an interval, we mean
that there is an open bond between this vertex and some vertex in the interval.) So, we get:
\begin{claim}\label{condisc}
Let $I$ be an interval. Let $M$ be the length of the shortest interval that contains all of
the vertices that are directly connected to $I$ in the original model. Let $M'$ be the length
of the smallest interval $J$ s.t. $\xi(I\times (\mathbb{R}-J))=0$. Then $M'$ stochastically dominates
$M$.
\end{claim}
\begin{lemma}\label{soi}
(A) Under the conditions of Theorem \ref{one_d_recur}, let $I$ be an interval of
length $N$. Then, the probability that there exists a vertex at distance
bigger than $d$ from the interval that is directly connected to the interval is
$O\left(\frac{N}{d}\right)$.\\
(B) Consider the continuum bond model. Let $I$ be an interval of length $N$, and let
$J$ be the smallest interval s.t. $\xi(I\times (\mathbb{R}-J))=0$. Then
${\bf P}(|J|>d)=O\left(\frac{N}{d}\right)$.
\end{lemma}
\begin{proof}
(A) Let
\begin{equation*}
\beta'=\sup_{k}\frac{P_k}{k^{-2}}<\infty.
\end{equation*}
If $v$ is at distance $k$ from $I$, then the probability that $v$ is directly
connected to $I$ is bounded by
\begin{equation*}
\beta'\sum_{m=k}^{k+N}{m^{-2}}<\frac{2\beta' N}{k^2}.
\end{equation*}
So, the probability that there is a vertex at distance bigger than $d$ that is directly
connected to $I$ is bounded by
\begin{equation*}
2\sum_{k=d}^{\infty}\frac{2\beta' N}{k^2} = O\left(\frac{N}{d}\right).
\end{equation*}
(B) is proved in exactly the same way.
\end{proof}
\begin{lemma}\label{log}
Under the same conditions, and again letting $I$ be an interval of length $N$,
the expected number of open bonds exiting $I$ is $O(\log N)$.
In particular, there is a constant $\gamma$ s.t. the probability of having
more than $\gamma\log N$ open bonds exiting $I$ is smaller than $0.5$.
\end{lemma}
\begin{proof}
Again, let
\begin{equation*}
\beta'=\sup_{k}\frac{P_k}{k^{-2}}<\infty.
\end{equation*}
The expected number of open bonds exiting $I$ is
\begin{eqnarray*}
\sum_{v\in I, u\notin I}{\bf P}(v\leftrightarrow u)&\leq&
\beta'\sum_{v\in I, u\notin I}(u-v)^{-2}\\
&=&2\beta'\sum_{i=1}^{N}\sum_{k=i}^{\infty}k^{-2}\\
&\leq&4\beta'\sum_{i=1}^{N}\frac{1}{i}\\
&=&O(\log N).
\end{eqnarray*}
Let $C$ be s.t. the expected value is less than $C\log N$ for all $N$. For any $\gamma>2C$,
by Markov's inequality, the probability that more than $\gamma\log N$ open bonds
exit $I$ is smaller than $0.5$.
\end{proof}
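The logarithmic growth can be seen numerically. The sketch below (with the hypothetical normalisation $\beta'=1$) evaluates $2\beta'\sum_{i=1}^{N}\sum_{k\geq i}k^{-2}$ and checks that its ratio to $\log N$ stabilises.

```python
import math

# Expected number of open bonds exiting an interval of length N when
# P_k = beta' * k**(-2) (beta' = 1 is a hypothetical normalisation):
# 2 * beta' * sum_{i=1}^{N} sum_{k>=i} k**(-2), truncated at a large horizon.
def expected_exiting(N, beta_p=1.0, horizon=200_000):
    tail = sum(k ** -2 for k in range(1, horizon))  # ~ sum_{k>=1} k^-2
    total, partial = 0.0, 0.0
    for i in range(1, N + 1):
        total += tail - partial          # sum_{k >= i} k^-2
        partial += i ** -2
    return 2 * beta_p * total

r1 = expected_exiting(1000) / math.log(1000)
r2 = expected_exiting(4000) / math.log(4000)
assert abs(r1 - r2) / r2 < 0.1   # growth is proportional to log N
```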
\begin{lemma}\label{halfovern}
Let $A_i$ be independent events s.t. ${\bf P}(A_i)\geq 0.5$ for every $i$.
Then, a.s.,
\begin{equation*}
\sum_{i=1}^{\infty}\frac{1_{A_i}}{i}=\infty.
\end{equation*}
\end{lemma}
\begin{proof}
Let
\begin{equation*}
U_k=\sum_{i=2^k}^{2^{k+1}-1}\frac{1_{A_i}}{i}.
\end{equation*}
Then,
\begin{equation}\label{U}
U_k\geq 2^{-(k+1)}\sum_{i=2^k}^{2^{k+1}-1}1_{A_i}.
\end{equation}
The variables $U_k$ are independent of each other, and by (\ref{U}),
for every $k$ we have ${\bf P}(U_k\geq 0.25)\geq 0.5$. Therefore, by the second
Borel--Cantelli lemma,
\begin{equation*}
\sum_{i=1}^{\infty}\frac{1_{A_i}}{i}=\sum_{k=0}^{\infty}U_k=\infty
\end{equation*}
a.s.
\end{proof}
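A quick Monte Carlo illustration of the lemma (with ${\bf P}(A_i)$ exactly $0.5$; the seed and sample size are arbitrary): the partial sums of $1_{A_i}/i$ keep growing, at rate roughly $\frac{1}{2}\log N$.

```python
import random

# Partial sum of 1_{A_i}/i for independent events with P(A_i) = 0.5;
# over the first 10**5 terms its expectation is about 0.5 * log(10**5) ~ 5.8.
random.seed(0)
S = sum(1.0 / i for i in range(1, 10 ** 5 + 1) if random.random() < 0.5)
assert S > 3.0   # falling this far below the mean has negligible probability
```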
\begin{proof}[Proof of Theorem \ref{one_d_recur}]
We will show that with probability $1$, the infinite cluster satisfies the
Nash-Williams condition.
Let $I_0$ be some interval. We define $I_n$ inductively to be the smallest
interval that contains all of the vertices that are connected directly to $I_{n-1}$.
Denote
\begin{equation*}
D_n=\frac{|I_{n+1}|}{|I_n|}.
\end{equation*}
The edges exiting $I_{n+1}$ are stochastically dominated by the edges exiting
an interval of length $|I_{n+1}|$ (without the restriction that no edge starting
at $I_n$ exits $I_{n+1}$). Furthermore, given $I_n$, the edges exiting
$I_{n+1}$ are independent of those exiting $I_n$.
Let $\{U_n\}_{n=1}^\infty$ be independent copies of the continuum bond model.
Then, by Claim \ref{condisc}, the sequence $D_n$ is stochastically dominated by the sequence
$D'_n=\frac{|I'_{n+1}|}{|I_n|}$,
where $I'_{n+1}$ is the smallest interval s.t. $\mathbb{R}-I'_{n+1}$ is not connected to
the copy of $I_{n}$ in $U_n$.
The variables $D'_n$ are i.i.d. Therefore, by Lemma \ref{soi}, the sequence
$\{\log(D_n)\}$ is dominated by a sequence of
i.i.d. variables $d_n=\log(D'_n)$, which satisfy ${\bf E} (d_n)<M$ for some finite $M$.
Let $\Pi_n$ be the set of
bonds exiting $I_n$. Then, $\{\Pi_n\}_{n=1}^\infty$ are disjoint cut-sets.
Given the intervals $\{I_n\}_{n=1}^N$, the set $\Pi_N$ is independent of
$\{\Pi_n\}_{n=1}^{N-1}$. Now, independently for each $N$, by Lemma \ref{log},
with probability bigger than $0.5$,
\begin{equation}\label{pin}
|\Pi_N|<\gamma\sum_{n=1}^{N}{d_n}.
\end{equation}
By the strong law of large numbers, with probability $1$, for all large
enough $N$,
\begin{equation}\label{sumdn}
\sum_{n=1}^{N}{d_n}<2MN.
\end{equation}
Combining (\ref{pin}), (\ref{sumdn}) and Lemma \ref{halfovern}, we get that
the Nash-Williams condition is a.s. satisfied.
\end{proof}
We now work on the two-dimensional case. Our strategy in this case will
be to project the long bonds on the short ones. That is, for every open long bond we
find a path of nearest-neighbor bonds s.t. the end points of the path are those of
the original long bond. Then, we erase the long bond, and assign its conductance to
this path. In order to keep the conductance of the whole graph, if the path is
of length $n$, we add $n$ to the conductance of each of the bonds involved
in it. To make the discussion above more precise, we state it as a lemma.
\begin{lemma}\label{proj}
Let $s>3$ and let $P_{i,j}$ be a sequence of probabilities such that
\begin{equation*}
\limsup_{i,j\to\infty}{\frac{P_{i,j}}{(i+j)^{-s}}}<\infty.
\end{equation*}
Consider a shift invariant percolation model on $\mathbb{Z}^2$ in which a bond between
$(x_1,y_1)$ and $(x_2,y_2)$ is open with marginal probability
$P_{|x_1-x_2|,|y_1-y_2|}$. Assign conductance $1$ to every open bond,
and $0$ to every closed one. Call this electrical network $G_1$. Now,
perform the following projection process: for every open long
(i.e. not nearest neighbor)
bond $(x_1,y_1),(x_2,y_2)$ we
\\(A) erase the bond, and
\\(B) increase the conductance of each nearest neighbor bond in
$[(x_1,y_1),(x_1,y_2)]\cup[(x_1,y_2),(x_2,y_2)]$ by
$|x_1-x_2|+|y_1-y_2|$.
\\We call this new electrical network $G_2$. Then
\\(I) A.s. all of the conductances in $G_2$ are finite.
\\(II) The effective conductance of $G_2$ is at least that of $G_1$.
\\(III) The distribution of the conductance of an edge in $G_2$ is shift
invariant.
\\(IV) If $s>4$ then the conductance of an edge is in $L^1$.
\\(V) If $s=4$ then the conductance $C_e$ of an edge has a {\em Cauchy tail},
i.e. there is a constant $\chi$ such that ${\bf P}(C_e>n\chi)\leq n^{-1}$
for every $n$.
\end{lemma}
To complete the picture, we need the following theorem about random electrical
networks on $\mathbb{Z}^2$. The theorem is proved in the next section.
\begin{theorem}\label{cauchytail}
Let $G$ be a random electrical network on the
nearest neighbor bonds of the lattice $\mathbb{Z}^2$, such that all of
the edges have the same conductance distribution, and this distribution has a
Cauchy tail. Then, a.s., a random walk on $G$ is recurrent.
\end{theorem}
Lemma \ref{proj} and Theorem \ref{cauchytail} imply the following theorem:
\begin{theorem}\label{two_d_recur}
Let $s\geq 4$ and let $P_{i,j}$ be probabilities such that
\begin{equation}\label{sgadol4}
\limsup_{i,j\to\infty}{\frac{P_{i,j}}{(i+j)^{-s}}}<\infty.
\end{equation}
Consider a shift invariant percolation model on $\mathbb{Z}^2$ in which the bond between
$(x_1,y_1)$ and $(x_2,y_2)$ is open with marginal probability
$P_{|x_1-x_2|,|y_1-y_2|}$.
If there exists an infinite cluster, then the random walk on this cluster is
recurrent.
\end{theorem}
\begin{proof}
The case $s=4$ follows directly from Lemma \ref{proj} and Theorem \ref{cauchytail}. For
the case $s>4$, notice that if (\ref{sgadol4}) holds for some $s>4$, then it holds
for $s=4$ too.
\end{proof}
\begin{proof}[Proof of Lemma \ref{proj}]
(I): We calculate the expected number of bonds that are projected on the edge
$(x,y),(x,y+1)$:
W.l.o.g., the projected bond starts at some $(x,y_1)$ with $y_1\leq y$, continues through
$(x,y_2)$ with $y_2\geq y+1$, and ends at some $(x_1,y_2)$. The expected number will be
\begin{eqnarray*}
2\sum_{y_1\leq y,y_2\geq y+1,x_1}{P_{|y_2-y_1|,|x_1-x|}}
&\leq& 4M\sum_{j\leq 0,k\geq 1,h\geq 0}{(k-j+h)^{-s}}\\
&\leq& 4M\sum_{l>0,h\geq 0}{(l+h)^{1-s}}\\
&\leq& 4M\sum_{l>0}{l^{2-s}}<\infty,
\end{eqnarray*}
where
\begin{equation*}
M=\sup_{i,j}{\frac{P_{i,j}}{(i+j)^{-s}}}<\infty,
\end{equation*}
and therefore (I) is true.
\\(II) Let $E$ be a bond which is projected on a path of length $n$. $E$ has
conductance $1$, and is therefore equivalent to a sequence of $n$ edges with
conductance $n$ each. So, divide $E$ that way. By identifying the
endpoints of these edges with actual vertices of the lattice, we only increase
the effective conductance of the network.
\\(III) is trivial.
\\(IV) and (V) follow from the same calculation performed in the proof of (I).
\end{proof}
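The series-equivalence used in step (II), namely that a unit-conductance bond is equivalent to a chain of $n$ edges of conductance $n$ each, can be checked numerically. This is an illustrative sketch with our own function name:

```python
def series_resistance(conductances):
    # Resistance of conductors in series: sum of reciprocal conductances.
    return sum(1.0 / c for c in conductances)

# A bond of conductance 1 (resistance 1) equals a chain of
# n edges, each of conductance n (resistance 1/n):
for n in (1, 5, 17):
    assert abs(series_resistance([float(n)] * n) - 1.0) < 1e-9
```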
\section{Random electrical networks}\label{elnet}
In this section we discuss random electrical networks.
We have two main goals in
this section:
\\{\bf Theorem \ref{cauchytail}.}
{\em Let $G$ be a random electrical
network on the lattice $\mathbb{Z}^2$, such that all
of the edges have the same conductance distribution, and this distribution has
a Cauchy tail. (Notice that we do not require any independence.)
Then, a.s., a random walk on $G$ is recurrent. }
and
\\{\bf Theorem \ref{iid}.}
{\em Let $G$ be a recurrent graph with bounded degree. Assign i.i.d.
conductances on the edges of $G$. Then, a.s., the resulting electrical
network is
recurrent. }
Notice that if in Theorem \ref{cauchytail} we do not require a Cauchy tail,
then the network might be transient. A good example would be the projected
two-dimensional long-range percolation with $3<s<4$ (see Lemma \ref{proj}).
{}First, we prove Theorem \ref{cauchytail}, which is important for the
previous section. We need the following lemma, which provides a bound on the
sum of random variables with a Cauchy tail:
\begin{lemma}\label{ergod}
Let $\{f_i\}_{i=1}^\infty$ be identically distributed positive random variables
that have a Cauchy tail. Then, for every $\epsilon>0$ there exist $K$ and $N$ such that
if $n>N$, then
\begin{equation*}
{\bf P}\left(\frac{1}{n}\sum_{i=1}^{n}{f_i}>K\log n\right)<\epsilon.
\end{equation*}
\end{lemma}
\begin{proof}
$f_i$ has a Cauchy tail, so there exists $C$ such that for every $n$,
\begin{equation*}
{\bf P}(f_i>n)<\frac{C}{n}.
\end{equation*}
Let $M>\frac{2}{\epsilon}$ be a large
number. Let $N$ be large enough that $CN^{1-M}<\frac{1}{2}\epsilon$. Choose $n>N$, and
let $g_i=\min (f_i,n^M)$ for all $1\leq i\leq n$. Then,
\begin{eqnarray*}
{\bf P}\left(\frac{1}{n}\sum_{i=1}^{n}{f_i}
\neq\frac{1}{n}\sum_{i=1}^{n}{g_i}\right)
&\leq& n\cdot{\bf P}(f_1\neq g_1)\\
&\leq& Cn^{1-M}<\frac{1}{2}\epsilon.
\end{eqnarray*}
${\bf E} (g_i)\leq CM\log n$, and $g_i$ is positive. Therefore, by Markov's inequality,
if we take $K=CM^2$, then
\begin{equation*}
{\bf P}\left(\frac{1}{n}\sum_{i=1}^{n}{g_i}>K\log n\right)<
\frac{CM\log n}{CM^2\log n}=\frac{1}{M}<\frac{1}{2}\epsilon,
\end{equation*}
and so
\begin{equation*}
{\bf P}\left(\frac{1}{n}\sum_{i=1}^{n}{f_i}>K\log n\right)<
\epsilon.
\end{equation*}
\end{proof}
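The bound in Lemma \ref{ergod} can be illustrated by a small Monte Carlo experiment. This is a numerical illustration only, not part of the proof; the choice of $|$standard Cauchy$|$ samples (which have a Cauchy tail) and of the constant $K=10$ are ours.

```python
import math
import random

def cauchy_tail_sample(rng):
    # |standard Cauchy|: P(X > t) ~ 2/(pi*t), i.e. this law has a Cauchy tail.
    return abs(math.tan(math.pi * (rng.random() - 0.5)))

def exceed_probability(n, K, trials, seed=0):
    # Estimate P( (1/n) * sum_{i=1}^n f_i > K * log n ) over independent trials.
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        avg = sum(cauchy_tail_sample(rng) for _ in range(n)) / n
        if avg > K * math.log(n):
            exceed += 1
    return exceed / trials
```

For $n=2000$ and $K=10$ the estimated probability is small, in line with the lemma's statement that the empirical mean rarely exceeds $K\log n$.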
We use another lemma:
\begin{lemma}\label{1on}
Let $A_n$ be a sequence of events such that ${\bf P}(A_n)>1-\epsilon$ for every
$n$, and let $\{a_n\}_{n=1}^\infty$ be a sequence s.t.
\begin{equation*}
\sum_{n=1}^\infty{a_n}=\infty.
\end{equation*}
Then, with probability at least $1-\epsilon$,
\begin{equation*}
\sum_{n=1}^{\infty}{1_{A_n}}\cdot{a_n}=\infty.
\end{equation*}
\end{lemma}
\begin{proof}
It is enough to show that for any $M$,
\begin{equation*}
{\bf P}\left(\sum_{n=1}^{\infty}{1_{A_n}}\cdot{a_n}<M\right)\leq\epsilon.
\end{equation*}
Assume that for some $M$ this is false. Define $B_M$ to be the event
\begin{equation*}
B_M=\left\{\sum_{n=1}^{\infty}{1_{A_n}}\cdot{a_n}<M\right\}.
\end{equation*}
Since ${\bf P}(B_M)>\epsilon$, we know that there exists $\delta>0$ such that
${\bf P}(A_n|B_M)>\delta$ for all $n$.
Therefore,
\begin{equation*}
{\bf E} \left(\sum_{n=1}^{\infty}{1_{A_n}}\cdot{a_n}\,\Big|\,B_M\right)
\geq\delta\sum_{n=1}^{\infty}{a_n}=\infty,
\end{equation*}
which contradicts the definition of $B_M$.
\end{proof}
Now, we can prove Theorem \ref{cauchytail}.
\begin{proof}[Proof of Theorem \ref{cauchytail}]
Let $G$ be a random electrical network on the lattice $\mathbb{Z}^2$, such that all of
the edges have the same conductance distribution, and this distribution has a
Cauchy tail.
\\Define the cutset $\Pi_n$ to be the set of edges exiting the
square $[-n,n]\times[-n,n]$. We want to estimate
\begin{equation*}
\sum_n{C_{\Pi_n}^{-1}}.
\end{equation*}
Let $\epsilon>0$ be arbitrary. Let $e_n(i)$ be the $i$-th edge (out of
$(8n+4)$) in $\Pi_n$. By Lemma \ref{ergod}, there exist $K$ and $N$,
such that for every $n>N$, we have
\begin{equation}\label{casum}
{\bf P}\left(\sum_{i=1}^{8n+4}{C(e_n(i))}\leq Kn\log n\right)>1-\epsilon.
\end{equation}
Call the event in equation (\ref{casum}) $A_n$.
Set $a_n=(Kn\log n)^{-1}$ for $n\geq N$.
Now,
\begin{equation*}
\sum_n{C_{\Pi_n}^{-1}}\geq\sum_{n=N}^\infty{1_{A_n}\cdot{a_n}}.
\end{equation*}
By the definition of $\{a_n\}$,
\begin{equation*}
\sum_{n=N}^{\infty}a_n=\infty.
\end{equation*}
On the other hand, ${\bf P}(A_n)>1-\epsilon$ for all $n$.
So, by Lemma \ref{1on},
\begin{equation*}
{\bf P}\left(\sum_n{C_{\Pi_n}^{-1}}=\infty\right)\geq 1-\epsilon.
\end{equation*}
Since $\epsilon$ is arbitrary, we get that a.s.
\begin{equation*}
\sum_n{C_{\Pi_n}^{-1}}=\infty,
\end{equation*}
which implies recurrence by the Nash--Williams criterion.
\end{proof}
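For completeness, we recall the Nash--Williams criterion invoked in the last step. This is a standard fact about electrical networks (see, e.g., the literature on random walks and electrical networks), not a result of this paper:

```latex
% Nash--Williams criterion: if \Pi_1,\Pi_2,\ldots are pairwise disjoint
% cutsets, each separating a fixed vertex o from infinity, then
\[
R_{\mathrm{eff}}(o\to\infty)\;\geq\;\sum_{n} C_{\Pi_n}^{-1},
\qquad
C_{\Pi_n}:=\sum_{e\in\Pi_n}C(e).
\]
% Hence \sum_n C_{\Pi_n}^{-1}=\infty forces infinite effective resistance,
% i.e. recurrence of the random walk on the network.
```

In the proof above the cutsets $\Pi_n$, the edges exiting $[-n,n]\times[-n,n]$, are indeed pairwise disjoint.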
Now, we turn to proving Theorem \ref{iid}. First, we need a lemma:
\begin{lemma}[Yuval Peres]\label{yuval}
Let $G$ be a recurrent graph, and let $C_e$ be random conductances on the edges
of $G$. Suppose that there exists $M$ such that ${\bf E} (C_e)<M$ for each edge $e$.
Then, a.s., $G$ with the conductances $\{C_e\}$ is a recurrent electrical
network.
\end{lemma}
\begin{proof}
Let $v_0\in G$, and let $\{G_n\}$ be an increasing sequence of finite sub-graphs of $G$,
s.t. $v_0\in G_n$ for every $n$ and s.t.
$G=\cup_{n=1}^{\infty}G_n$. By the definition of effective conductance,
\begin{equation*}
\lim_{n\to\infty}C_{{\rm eff}}(G_n)=C_{{\rm eff}}(G)=0.
\end{equation*}
Let $X_n$ be the space of functions $f$ s.t. $f(v_0)=1$ and
$f(u)=0$ for every $u\in G-G_n$. We know that
\begin{equation*}
C_{{\rm eff}}(G_n)=\inf_{f\in X_n}\sum_{(v,w) \text{ is an edge in } G}{(f(v)-f(w))^2}.
\end{equation*}
If we denote by $H$ (resp. $H_n$) the electrical network of the graph $G$
(resp. $G_n$)
and conductances $C_e$, then
\begin{equation*}
C_{{\rm eff}}(H_n)=\inf_{f\in X_n}\sum_{(v,w) \text{ is an edge in } G}
{C_{v,w}(f(v)-f(w))^2}.
\end{equation*}
Let $f\in X_n$. Write $G_n(f)$ for
\begin{equation*}
\sum_{(v,w) \text{ is an edge in $G$}}{(f(v)-f(w))^2}
\end{equation*}
and $H_n(f)$ for
\begin{equation*}
\sum_{(v, w)\text{ is an edge in $G$}}{C_{v,w}(f(v)-f(w))^2}.
\end{equation*}
There exists an $f\in X_n$ such that $G_n(f)=C_{{\rm eff}}(G_n)$. Since
${\bf E} (H_n(f))\leq MG_n(f)$,
we get that
\begin{equation*}
{\bf E} (C_{{\rm eff}}(H_n))\leq M\,C_{{\rm eff}}(G_n).
\end{equation*}
So, by Fatou's lemma,
\begin{equation*}
{\bf E}(C_{{\rm eff}}(H))\leq\lim_{n\to\infty}{\bf E}(C_{{\rm eff}}(H_n))\leq
M\lim_{n\to\infty}(C_{{\rm eff}}(G_n))=0,
\end{equation*}
and therefore $C_{{\rm eff}}(H)=0$ a.s.
\end{proof}
Now we can prove Theorem \text{ }f{iid}. The main idea is to change the conductances
in a manner that will not decrease the effective conductance, but after this
change, the conductances will have bounded expectations (although they might
be dependent).
\begin{proof}[Proof of Theorem \ref{iid}]
Let $G$ be a recurrent graph, and let $d$ be the maximal degree in $G$.
Let $\{C_e\}_{\{e \text{ is an edge in }G\}}$ be i.i.d.
non-negative variables, and let $H$ be the electrical network defined on the
graph $G$ with the conductances $\{C_e\}$. We want to prove that with
probability one $H$ is recurrent. Let $M$ be so large that
\begin{equation*}
{\bf P}(C_e\geq M)<\frac{1}{d^5}.
\end{equation*}
We introduce some notation: edges whose conductances are bigger than $M$
will be called {\em bad} edges. Vertices which belong to bad edges will also be
called bad. We look at connected clusters of bad edges. Edges that are good
but have at least one bad vertex will be called {\em boundary} edges.
By the choice of $M$, the sizes of the clusters of bad edges are dominated by
sub-critical Galton-Watson trees. Define a new network $H'$ as
follows: Let $U(e)$ be the connected component to which $e$
belongs (if $e$ is bad) or to which $e$ is attached (if it is a boundary edge).
If $e$ is in the boundary of two components, then we take $U(e)$ to be their
union. For a bad or boundary $e$, the new conductance will be
$2M\cdot (\#U(e)+\#\partial U(e))^2$, where $\#$ measures the number of edges.
If $e$ is a good edge then its conductance is unchanged.
The size of the connected cluster satisfies
\begin{equation*}
{\bf P}(\#U(e)+\#\partial U(e)>n)=o(n^{-4}).
\end{equation*}
Therefore, the expected values of the conductances of the edges are
uniformly bounded, so by Lemma \ref{yuval}
$H'$ is recurrent. All we need to prove is that the effective resistance of
$H'$ is not bigger than that of $H$: Let $F$ be a flow, and let $U$ be a
connected component of bad edges in $G$.
The energy of $F$ on $U$ in the network $H$ will
be
\begin{equation*}
{\cal E}_{U,F}(H)=\sum_{e\in U\cup \partial U}{\frac{F_e^2}{C_e}}\geq
\sum_{e\in \partial U}{\frac{F_e^2}{C_e}}\geq
\sum_{e\in \partial U}{\frac{F_e^2}{M}}.
\end{equation*}
{}For every $e$ in $U\cup \partial U$, the flow $|F_e|$ is smaller than
\begin{equation*}
\sum_{e'\in \partial U}|F_{e'}|,
\end{equation*}
so
\begin{equation*}
F^2_e\leq \#\partial U\cdot\sum_{e'\in \partial U}{F_{e'}^2}
\leq M\cdot\#\partial U\cdot {\cal E}_{U,F}(H).
\end{equation*}
Therefore,
\begin{eqnarray*}
{\cal E}_{U,F}(H')&=&
\sum_{e\in U\cup \partial U}{\frac{F_e^2}{2M\cdot (\#U+\#\partial U)^2}}\\
&\leq&(\#U+\#\partial U)\frac{M\cdot\#\partial U\cdot {\cal E}_{U,F}(H)}
{2M\cdot (\#U+\#\partial U)^2}\leq {\cal E}_{U,F}(H).
\end{eqnarray*}
Thus, by Thomson's theorem (see \cite{yuval}), the effective resistance of
$H'$ is not bigger than that of $H$, and we are done.
\end{proof}
\section{Critical behavior of the free long-range
random cluster model}\label{frcm}
We return to the critical behavior. Our goal in this section is to prove Theorem
\ref{fr_int} and Corollary \ref{ising_extr}.
We begin with the following extension of Theorem \ref{fr_int}:
\begin{theorem}\label{fcr}
Let $d<s<2d$ and let $\{P_k\}_{k\in\mathbb{Z}^d}$ be nonnegative numbers such that
$\forall_k(P_k=P_{-k})$ and
\begin{equation}\label{smas}
\liminf_{\|k\|\to\infty}\frac{P_k}{{\|k\|_1^{-s}}}>0.
\end{equation}
Let $\beta>0$, and consider the infinite volume limit of the free random
cluster model with probabilities $1-e^{-\beta P_k}$ and with $q\geq 1$ states.
Then, a.s., at
\begin{equation*}
\beta_c=\inf(\beta |\text{ a.s. there exists an infinite cluster})
\end{equation*}
there is no infinite cluster.
\end{theorem}
We need the following extension of Lemma \ref{normal1}:
\begin{lemma}\label{normal2}
Let $d\geq 1$.
Consider an ergodic (not necessarily independent) percolation model on $\mathbb{Z}^d$
which satisfies
\begin{equation}\label{dominate}
{\bf P}\left(i\leftrightarrow j \,\big|\, {\mathcal B}_{i,j}\right)\geq P_{i-j},
\end{equation}
where $i\leftrightarrow j$ denotes the event of having an open bond
between $i$ and $j$, and ${\mathcal B}_{i,j}$ is the $\sigma$-field generated by all
of the events $\{i'\leftrightarrow j'\}_{(i',j')\neq(i,j)}$.
Assume further that:
\\(A) The distribution has the FKG property {\rm \cite{FKG}}.
\\(B) A.s. there is a unique infinite cluster.
\\(C) There exists $d<s<2d$ s.t.
\begin{equation*}
\liminf_{\|k\|\to\infty}\frac{P_k}{\|k\|^{-s}}>0.
\end{equation*}
Then, for every $\epsilon>0$ and $\rho$ there exists $N$ such that with
probability bigger than $1-\epsilon$, inside the cube $[0,N-1]^d$ there exists
an open cluster which contains at least $\rho N^{\frac{s}{2}}$ vertices.
\end{lemma}
Lemma \ref{normal2} is proved exactly the same way as Lemma \ref{normal1}. Lemma
\ref{normal2} is valid for the free random cluster model measure considered in
Theorem \ref{fcr}. We can use Lemma \ref{normal2} to prove the
following:
\begin{lemma}\label{normalf}
Let $d<s<2d$ and let $\{P_k\}_{k\in\mathbb{Z}^d}$ be nonnegative numbers such that
$\forall_k(P_k=P_{-k})$ and
\begin{equation}\label{smals}
\liminf_{\|k\|\to\infty}\frac{P_k}{{\|k\|_1^{-s}}}>0.
\end{equation}
Let $\beta>0$, and consider the infinite volume limit of the free random
cluster model with probabilities $1-e^{-\beta P_k}$. Assume that, a.s., there
is an infinite cluster.
Then, for every $\epsilon$ and $\rho$ there is an $N$ such that given
the values (open or closed) of all of the edges that have at least one end
point out of the cube $[0,N-1]^d$, the probability of having an open cluster
of size $\rho N^{\frac{s}{2}}$ within $[0,N-1]^d$ is larger than $1-\epsilon$.
\end{lemma}
\begin{proof}
The proof follows the guideline of the proof of Lemma \ref{normal1}: Choose
$\epsilon'$ and $\theta$, and let $M$ be s.t. by Lemma \ref{normal2} with
probability larger than $1-\epsilon'$ there exists an open cluster of size
$\sqrt{\theta}M^{\frac{s}{2}}$ inside $[0,M-1]^d$. Let $K$ be s.t. this probability is larger than
$1-2\epsilon'$ even if all of the edges with at least one endpoint out of
$[-K,K+M-1]^d$ are closed. Such $K$ exists because the free measure on $\mathbb{Z}^d$ is
the limit of the free measures on $[-K,K+M-1]^d$
when $K$ tends to infinity. Now, let $R$ be a large number.
Assume that all of the edges with (at least) one
endpoint out of $[-K,RM+K-1]^d$ are closed. For a cube
\begin{equation*}
{\cal C}=\prod_{i=1}^d{[l_iM,(l_i+1)M-1]}\qquad\qquad 0\leq l_i\leq R-1
\end{equation*}
in $[-K,RM+K-1]^d$, the probability that the cube is {\em alive}, i.e. has
an open cluster of size $\sqrt{\theta}M^{\frac{s}{2}}$,
is larger than $1-2\epsilon'$
(because of domination). The probability that there exists an open bond between two
living cubes that are $k$ cubes away from each other is larger than
$\eta_s(\frac{\theta}{2},k)$.
Now, we can proceed exactly as in the proof of Lemma
\ref{normal1}. With $\epsilon'$, $\theta$ and $R$ properly chosen, the lemma
is proved.
\end{proof}
Now, we can prove Theorem \ref{fcr}:
\begin{proof}[Proof of Theorem \ref{fcr}]
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be such that for every $k$, $P_k=P_{-k}$ and such that
\begin{equation*}
\kappa=\liminf_{\|k\|\to\infty}\frac{P_k}{\|k\|_1^{-s}}>0.
\end{equation*}
Let $\beta$ be s.t. for the Random Cluster Model with interactions $\{P_k\}$
and inverse temperature $\beta$ there exists, a.s., an
infinite cluster. What we need to show is that there exists an $\epsilon>0$
s.t. there exists an infinite cluster at inverse temperature $\beta-\epsilon$.
For every $a$ and $b$ consider the independent percolation model $\mathrm{idp}(a,b,s)$
in which every vertex exists with probability $a$ and two vertices $x$ and $y$
are attached to each other with probability $1-e^{-b\|x-y\|^{-s}}$.
Let $\gamma$, $\lambda$ and $\delta$ be s.t. in
$\mathrm{idp}(\lambda-\delta,\gamma-\delta,s)$ there exists, a.s., an infinite cluster.
Let $N$ be so large that by Lemma \ref{normalf} with probability larger than
$\lambda$ there exists a cluster of size $\rho N^{\frac{s}{2}}$ inside
$[0,N-1]^d$, where the probability is with respect to the free measure on
$[0,N-1]^d$, and $\rho$ is s.t.
\begin{equation}\label{rho}
\frac{\rho^2}{2q}>\gamma.
\end{equation}
By the choice of $\rho$ in (\ref{rho}) we get that
the probability of having an open bond between clusters of size
$\rho N^{\frac{s}{2}}$ that are located in the cubes at $Nx$ and $Ny$ is
(no matter what happens in any other bond)
at least $1-e^{-\gamma\|x-y\|_1^{-s}}$.
Now, let $\epsilon>0$ be s.t. at inverse temperature $\beta-\epsilon$ the
probability of having this big cluster is larger than $\lambda-\delta$, and the
probability of having an open bond is larger than $1-e^{-(\gamma-\delta)\|x-y\|^{-s}}$.
Such $\epsilon$ exists, because the probability of any event in a finite
random cluster model is a continuous function of the (inverse) temperature.
The renormalized model at inverse temperature
$\beta-\epsilon$ dominates
$\mathrm{idp}(\lambda-\delta,\gamma-\delta,s)$, and therefore has an infinite cluster.
\end{proof}
We can now restate and prove Corollary \ref{ising_extr}:
\\{\bf Corollary \ref{ising_extr}.}
{\em
Let $\{P_k\}_{k\in\mathbb{Z}^d}$ be nonnegative numbers s.t. $P_k=P_{-k}$ for every $k$
and s.t. $P_k\sim \|k\|_1^{-s}$ ($d<s<2d$).
Consider the Potts model with $q$ states on $\mathbb{Z}^d$, s.t. the interaction between
$v$ and $u$ is $P_{v-u}$. At the critical temperature,
the free measure is extremal.
}
\begin{proof}[Proof of Corollary \ref{ising_extr}]
Recall the following construction of a configuration of the free measure
of the Potts model:
choose a configuration of the free measure of the Random Cluster model, and
color each of the clusters by one of the $q$ states. The states of different clusters are
independent of each other.
By Theorem \ref{fr_int}, there is no infinite cluster at the critical temperature.
Therefore, for every $n$ and $\epsilon$ there exists $K$ s.t. with probability $1-\epsilon$,
for every $x$ s.t. $\|x\|_1\leq n$ and $y$ s.t. $\|y\|_1\geq K$, the vertices $x$ and $y$ belong to distinct
clusters.
Therefore, for the Potts model, there is an event $E$ of probability bigger than
$1-\epsilon$ s.t. given $E$, the coloring of $\{x:\|x\|_1\leq n\}$ is independent of the
coloring of $\{y:\|y\|_1\geq K\}$. Therefore, the tail $\sigma$-field
\begin{equation*}
\bigcap_{K=1}^{\infty}\sigma\left(v \text{ s.t. } \|v\|_1>K \right)
\end{equation*}
is trivial, and therefore the measure is extremal.
\end{proof}
\section{Remarks and problems}
Many more questions can be asked about these clusters. One example is the
volume growth rate. It can be shown that the growth of the infinite cluster
is not bigger than exponential with the constant
\begin{equation*}
\sum_{k\in\mathbb{Z}^d}P_k.
\end{equation*}
In the case $d<s<2d$, the growth can be bounded from below by $\exp(n^{\phi(s)})$, for
$\phi(s)=\log_2(2d/s)-\epsilon$.
This can be proved as follows: if $\beta$ is large enough, then in the proof of
Theorem \ref{thm:ns}, we may take $C_n=\exp(2^{\phi(s)\cdot n})$. Then, the
$n$-th degree cluster contains
\begin{equation*}
\prod_{k<n}{C_k}
\end{equation*}
vertices, while its diameter is at most $2^n$. This gives a lower bound of
$\exp(n^{\phi(s)})$ for the growth. If $\beta$ is not so large, then by using
Lemma \ref{normal1} we can make it large enough.
\\In the case $s=2d$, the volume growth rate is subexponential
(see \cite{BenBer}). In the case $s<2d$ it is not known. So, we get
a few questions on the structure of the infinite cluster.
\begin{question}\label{growth}
What is the volume growth rate of the infinite clusters of super-critical
long-range percolation with $d<s<2d$?
Is it exponential?
\end{question}
\begin{question}
How many times do two independent random walks paths on the infinite cluster of
long-range percolation intersect?
\end{question}
\begin{question}
Are there any nontrivial harmonic functions on the infinite cluster of
one-dimensional long-range percolation with $d<s<2d$?
\end{question}
Other questions can be asked about the critical behavior. The renormalization
lemma (Lemma \ref{normal1}) is only valid when $d<s<2d$. So, the arguments
given here say nothing about the critical behavior in other cases. In the case
$d=1$ and $s=2$, Aizenman and Newman proved that there exists an infinite
cluster at criticality (see \cite{AN}). For the other cases the following
questions are still open:
\begin{question}\label{sgeq2d}
Does critical long-range percolation have an infinite cluster when
$d \geq 2$ and
$s\geq 2d$?
\end{question}
As remarked by G. Slade, the methods used in \cite{hasl} might be used to prove
that for $d>6$ and $s>d+2$ there is no infinite cluster at criticality.
This would reduce Question \ref{sgeq2d}
to the case $2\leq d\leq 6$.
\begin{question}
Does the conclusion of Theorem \ref{jeff} hold for sequences which decay
faster than those treated in Theorem \ref{jeff} and slower than those treated
by Steif and Meester
{\em (\cite{steif})}? I.e.,
let $d\geq 2$. For which percolating $d$-dimensional arrays of probabilities
$\{P_k\}_{k\in \mathbb{Z}^d}$ does there exist an $N$ s.t. the independent
percolation model with probabilities
\begin{equation*}
P'_k = \left\{
\begin{array}{ll}
P_k &\|k\|_1<N\\
0 &\|k\|_1\geq N
\end{array}
\right.
\end{equation*}
also has, a.s., an infinite cluster?
\end{question}
The arguments given in this paper are not strong enough to prove that there is
no infinite cluster in the wired random cluster model at the critical
temperature. So, the following question is still open:
\begin{question}\label{wired}
Is there an infinite cluster at the critical temperature in the wired random
cluster model with $d<s<2d$?
\end{question}
A different formulation of the same question is
\\{\bf Question \ref{wired} (Revised).}
{\em Let $d\geq 1$ and let $d<s<2d$. Let $\{P_k\}_{k\in \mathbb{Z}^d}$ be s.t. $P_k=P_{-k}$
for every $k$ and s.t. $P_k\sim \|k\|_1^{-s}$. Consider the Potts model (with $q$ states)
with interaction
$P_{u-v}$ between $u$ and $v$. Let $\beta$ be the critical inverse temperature for
this Potts model. Is there a unique Gibbs measure at inverse temperature $\beta$?
}
Question \ref{wired} is related to the question of whether the free and the wired
measures agree at the critical point. Conjecturing that for high values of $q$,
the number of states,
the critical wired measure has an infinite cluster, we would get the conjecture
that the two measures do not agree at the critical point.
\appendix
\section{Proof of Lemma \ref{lem:aldous}}\label{app:aldous}
The proof of Lemma \ref{lem:aldous} is based on the methods from \cite{aldous}. As in \cite{aldous}, we construct simultaneously
a random walk and the random graph. We cite a result from \cite{aldous} on the connection between excursions of the random
walk and the connected components of the graph. We then use this result to prove Lemma \ref{lem:aldous}.
The construction of the random walk and the random graph is as follows: For each ordered pair $(i,j),i\neq j$, let $U_{i,j}$ be an exponential
variable with rate $M^{-\xi}m(h_j)$, independent over pairs. Choose $v_1$ by size biased sampling (i.e. the probability that $v_1=h_i$ is proportional to
$m(h_i)$). Let $\{v:U_{v_1,v}\leq m(v_1)\}$ be the set of children of $v_1$, and order them as $v(2),v(3),\ldots$ so that $U_{v_1,v(i)}$ is increasing.
Start the walk $z(\cdot)$ with $z(0)=0$, and let
\[
z(u)=-u+\sum_v m(v){\bf 1}_{(U_{v_1,v}\leq u)}, \ \ \ \ \ \ \ \ 0\leq u\leq m(v_1).
\]
In particular,
\[
z(m(v_1)) = -m(v_1) +\sum_{v\mbox{ child of } v_1}m(v).
\]
Inductively, write $\tau_{i-1}=\sum_{j\leq i-1}m(v_j)$. If $v_i$ is in the same component as $v_1$, then the set
\[
\{v\notin\{v_1,\ldots,v_{i-1}\}: v\mbox{ is a child of one of } \{v_1,\ldots,v_{i-1}\} \}
\]
consists of $v_1,\ldots,v_{l(i)}$ for some $l(i)\geq i$. Let the children of $v_i$ be
\[
\{v\notin\{v_1,\ldots,v_{l(i)}\}: U_{v_i,v} \leq m(v_i)\},
\]
and order them as $v_{l(i)+1},v_{l(i)+2},\ldots$ such that $U_{v_i,v}$ is increasing.
Set
\begin{equation*}
z(\tau_{i-1}+u)=z(\tau_{i-1})-u + \sum_{v \mbox{ child of } v_i} m(v){\bf 1}_{U_{v_i,v}<u}, \ \ \ \ \ \ \ 0\leq u \leq m(v_i).
\end{equation*}
After exhausting the component containing $v_1$, choose the next vertex by size biased sampling among the remaining vertices. Continue.
For simplicity, for $u>M$ we define $z(u)=z(M)+M-u.$
This construction yields a forest on the vertices $h_1,\ldots,h_k$, an ordering $v_1,\ldots,v_k$ and a walk $z(u); \ 0\leq u \leq M$. Add extra edges
between $v_i$ and $v_j$ for every pair such that $i<j\leq l(i)$ and $U_{v_i,v_j}\leq m(v_i)$. The resulting random graph has the same distribution as the
inhomogeneous random graph, the ordering of the vertices $v_1,\ldots,v_k$ is size biased, and the relation between the components of the graph and the
random walk is as appears in the lemma below:
\begin{lemma}[\cite{aldous}, Page 828]\label{lem:appfromaldous}
Every connected component in the graph is a sequence of vertices $v_i,v_{i+1},\ldots,v_j$ such that
\[
z(\tau_j)=z(\tau_i)-m(v_i), \ \ \ z(u)\geq z(\tau_j) \mbox{ on } \tau_{i-1} < u < \tau_j.
\]
Furthermore, the size of the component is $\tau_j-\tau_{i-1}$.
\end{lemma}
We now use this construction to prove Lemma \text{ }f{lem:aldous}.
For $0\leq u\leq M$, define $i(u)=\min\{i:\tau_{i}\geq u\}$ to be the particle that is being processed at time $u$.
We define the set $B(u)$ to be the set of all particles seen up to time $u$, namely
\[
B(u)=\{v_j:j\leq i(u) \mbox{ or } \exists_{k<i(u)}\mbox{ s.t. $v_j$ is a child of $v_k$ } \mbox{ or } U_{i(u),j}<u-\tau_{i(u)-1}\}.
\]
Then define the {\em drift} $D(u)$ to be
\[
D(u)= - 1 + M^{-\xi}\sum_{h_i\notin B(u)}m^2(h_i).
\]
$D(\cdot)$ is the drift of $z(\cdot)$ in the sense that
\[
I(u):=z(u)-\int_0^u D(s)ds
\]
is a martingale.
Clearly, $D(u)$ is decreasing in $u$.
We recall that $1<\xi<2$, and that $\gamma$ is chosen such that $1>\gamma>\frac{16+\xi}{18}$. We also take $\gamma'$ s.t. $\gamma'>\frac{4+\xi}{6}$ and $3\gamma - 2 > \gamma'$.
\ignore{
We take $\epsilon$ so that
$\epsilon<\gamma-\gamma'$ and $\epsilon<(2\gamma'-\xi)/4$, but $\epsilon>1-\gamma$.
}
Note that $\gamma-\gamma'>2(1-\gamma)$, and take $\gamma-\gamma'> \epsilon >2(1-\gamma)$.
Then, $\epsilon/2>1-\gamma$, $\epsilon<\gamma-\gamma'$ and $\epsilon<(2\gamma'-\xi)/4$.
Let $\alpha=\xi-\gamma'-\epsilon$. Note that $\alpha-\gamma'=\xi-2\gamma'-\epsilon<-5\epsilon$. Let $\delta<\epsilon/2-(1-\gamma)$, and let $\theta$ be s.t. $\gamma'-\delta<\theta<\gamma'$.
\ignore{
\[
\gamma-\gamma' < (2\gamma'-\xi)/4 ?
\]
\begin{eqnarray*}
\gamma-\gamma' < 1-\gamma' < 1 - \frac{4+\xi}{6} = \frac{2-\xi}{6}< (2\gamma'-\xi)/4
\end{eqnarray*}
}
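The exponent bookkeeping above can be sanity-checked numerically. The following sketch picks sample values of $\gamma,\gamma',\epsilon,\alpha,\delta,\theta$ for a given $\xi$ and verifies every stated constraint; the offset `eta` and the midpoint choices are our own illustrative selections (the text only requires that such choices exist).

```python
def choose_exponents(xi, eta=1e-4):
    """Sample exponents satisfying the constraints in the text.

    The offset `eta` and the midpoint choices below are our own
    illustrative selections, not the paper's."""
    assert 1 < xi < 2
    gamma = (16 + xi) / 18 + eta            # 1 > gamma > (16+xi)/18
    gamma_p = (4 + xi) / 6 + 1.5 * eta      # gamma' > (4+xi)/6 and 3*gamma-2 > gamma'
    # gamma - gamma' > eps > 2*(1-gamma): take the midpoint of the window
    eps = (2 * (1 - gamma) + (gamma - gamma_p)) / 2
    alpha = xi - gamma_p - eps
    delta = (eps / 2 - (1 - gamma)) / 2     # 0 < delta < eps/2 - (1-gamma)
    theta = gamma_p - delta / 2             # gamma' - delta < theta < gamma'
    # Verify every constraint stated in the text.
    assert gamma < 1 and 3 * gamma - 2 > gamma_p
    assert 2 * (1 - gamma) < eps < gamma - gamma_p
    assert eps < (2 * gamma_p - xi) / 4
    assert alpha - gamma_p < -5 * eps
    assert 0 < delta < eps / 2 - (1 - gamma)
    assert gamma_p - delta < theta < gamma_p
    return gamma, gamma_p, eps, alpha, delta, theta
```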
\begin{claim}\label{claim:bigpart}
For $M$ large enough, with probability larger than $1-e^{-M^{(\gamma-\gamma'-\epsilon)/2}}$, for every $u>\frac{1}{2}M^{\gamma}$ and every $h_i\notin B(u)$, we have
$m(h_i)<M^\alpha$.
\end{claim}
\begin{proof}
As $B$ is increasing, it is enough to consider $u=\frac{1}{2}M^{\gamma}$. By the construction, for a given particle $h_i$ with $m(h_i)\geq M^\alpha$,
\[
P(h_i\notin B(u))\leq \exp(-u M^{-\xi} M^\alpha)
=\exp\left(-\frac 12 M^{\gamma + \alpha -\xi}\right) = \exp\left(-\frac 12 M^{\gamma-\gamma'-\epsilon}\right),
\]
and since $\gamma-\gamma'-\epsilon>0$ and there are at most $M^{1-\alpha}<M$ such particles, the claim follows.
\end{proof}
Let $A$ be the event $A=\{\forall_{h_i\notin B(M^{\gamma}/2)}\ m(h_i)<M^\alpha\}$. Then by the previous claim $P(A)\geq 1-e^{-M^{(\gamma-\gamma'-\epsilon)/2}}$.
We calculate the variance of the increment of $I(\cdot)$ within one time unit. Assume that $u>\frac 12M^{\gamma}$. Let ${\mathcal F}_u$ be the $\sigma$-algebra generated by the process up to time $u$.
Remembering that
$E(I(u+1)|{\mathcal F}_u)=I(u)$, we get
\begin{eqnarray}\label{eq:varmart}
\nonumber
\mbox{var} (I(u+1)-I(u)|{\mathcal F}_u ; A) &\leq&
(D(u)+1)^2 + M^{-\xi}\sum_{i\notin B(u)} m^3(h_i)\\
&\leq& (D(u)+1)(D(u)+1+M^{\alpha}).
\end{eqnarray}
We now turn our attention to the rate at which $D(u)$ decreases after time $M^{\gamma}/2$. Let
\[
L(u)=D(u)+1=M^{-\xi}\sum_{i\notin B(u)}m^2(h_i).
\]
\begin{lemma}\label{lem:decdrift} Let $\kappa=\frac 13 M^{\gamma'}$. Let $u> \frac 12M^{\gamma}$. Let $L:=L(u)$. Let
\[
\ell=\left\lceil\log_2\left(\frac{2M^\alpha}{LM^{\xi-1}}\right)\right\rceil.
\]
Then, conditioned on the event $A$,
\begin{equation}\label{eq:decdrift}
P\left(\left.L(u) - L(u+\kappa) < \frac{L}{16\ell}\cdot \left[1-e^{-\frac{L}{2} \frac{\kappa}{M}}\right] \right| {\mathcal F}_u ; A\right)
\leq \exp\left(- \left[1-e^{-\frac{L}{2} \frac{\kappa}{M}} \right] \cdot \frac{LM^\xi}{32\ell M^{2\alpha}} \right).
\end{equation}
\end{lemma}
\begin{proof}
For every $i\notin B(u)$, the (conditional) probability $P_i$ that $i\notin B(u+\kappa)$ satisfies
$
P_i\leq e^{-M^{-\xi}\kappa m(h_i)}.
$
Let $L=L(u)$.
Note that
\begin{eqnarray*}
M^{-\xi}\sum_{i:m(h_i) < \frac 12LM^{\xi-1}}m^2(h_i)
&\leq& \frac 12LM^{\xi-1} M^{-\xi}\sum_{i:m(h_i) < \frac 12LM^{\xi-1}}m(h_i)\\
&\leq& \frac 12LM^{\xi-1} M^{-\xi} M = \frac 12L.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
M^{-\xi}\sum_{i:m(h_i) \geq \frac 12LM^{\xi-1}}m^2(h_i)
\geq \frac 12L.
\end{eqnarray*}
Recall that
\[
\ell=\left\lceil\log_2\left(\frac{2M^\alpha}{LM^{\xi-1}}\right)\right\rceil.
\]
For $j=0,\ldots,\ell$, let
\[
B_j=\left\{h_i\notin B(u):
2^j\leq \frac{m(h_i)}{\frac 12LM^{\xi-1}} <2^{j+1}
\right\}.
\]
Let $j_0$ maximize
\[
\sum_{h_i\in B_j}m^2(h_i),
\]
and let $B:=B_{j_0}$.
Then, conditioned on $A$,
\begin{equation}\label{eq:inB}
M^{-\xi}\sum_{h_i\in B} m^2(h_i)\geq \frac{L}{2\ell}.
\end{equation}
$P(h_i\in B(u+\kappa))\geq 1-\exp\left(-\frac{L}{2}\, \frac{\kappa}{M}\right)$ for each $h_i\in B$, and $|B|\geq \frac{LM^\xi}{2\ell M^{2\alpha}}$. Therefore, by standard binomial estimates,
\begin{eqnarray*}
P\left(L(u) - L(u+\kappa) < \frac{L}{16\ell}\cdot \left[1-e^{-\frac{L}{2} \frac{\kappa}{M}}\right] \right)
\leq \exp\left(- \left[1-e^{-\frac{L}{2} \frac{\kappa}{M}}\right] \cdot \frac{LM^\xi}{32\ell M^{2\alpha}} \right)
\end{eqnarray*}
as desired.
\ignore{
\[
\frac{L^2\kappa M^\xi}{M^{2\alpha+1}}\geq M^{\gamma'+\xi-2\alpha-1}=M^{\gamma'+\xi-2(\xi-\gamma'-\epsilon)-1}
\]
\[
\gamma'+\xi-2(\xi-\gamma'-\epsilon)-1=3\gamma'-\xi-1-\epsilon > 3\frac{4+\xi}{6}-\xi-1-\epsilon
\]
\[
=2-\frac{\xi}{2}-1-\epsilon = 1-\frac{\xi}{2}-\epsilon
\]
need to check:
\[
\epsilon<1-\xi/2.
\]
\[
\epsilon < \gamma-\gamma'< 1-\gamma' < 1 - \frac{4+\xi}{6}=\frac{2-\xi}{6}<\frac{2-\xi}{2}=1-\xi/2.
\]
}
\end{proof}
Note that if
$L$ is of constant order of magnitude or larger, then
the bound in \eqref{eq:decdrift} decays exponentially with a positive power of $M$. Thus, applying Lemma \ref{lem:decdrift} again and again, we get the following corollary regarding the decrease of the drift:
\begin{corollary}\label{cor:decdrift}
Let $L_0=L(M^{\gamma}/2)$.
Then with probability larger than $1-M^{-1}$, for every $k$ such that
\[
L_0-\frac{kM^{\theta - 1}}{2}>\frac 12,
\]
we have
\begin{equation}\label{eq:decdrift2}
L(M^{\gamma}+k\kappa)\leq L_0-\frac {kM^{\theta-1}}{2}.
\end{equation}
Furthermore, with probability larger than $1-M^{-1}$, for every $u$ such that $L(u)>\frac 12$ and $u>\frac 12 M^{\gamma}$,
\begin{equation}\label{eq:decdrift3}
L(u+\kappa)\leq L(u)-\frac {M^{\theta-1}}{2} \ \ \ \mbox{ and } \ \ \ L(u-\kappa)\geq L(u)+\frac {M^{\theta-1}}{2}.
\end{equation}
\end{corollary}
\ignore{
On the other hand, $L$ cannot decay too fast, as demonstrated in the following lemma:
\begin{lemma}\label{lem:Dslow}
Fix $u>\frac 12 M^{\gamma}$. Let $O=O(u)$ be the event
\[
O=\left\{ L(u)>\frac 12\ ,\
L(u+\kappa)<0.9L(u), \mbox{ and $u$ and $u+\kappa$ are in the same component excursion}
\right\}.
\]
Then there exists $\eta>0$ such that $P(O)\leq \exp(-M^\eta)$ for all $M$ large enough and all inhomogeneous graphs of size $M$.
\end{lemma}
\begin{proof}
For every particle $h_i$ with $m(h_i)>M^{\xi-\gamma'}/100$, we have
\[
P(h_i\notin B(u))\leq\exp\left(-M^{-\xi}\cdot \frac{M^{\gamma}}{2}\cdot\frac{M^{\xi-\gamma'}}{100} \right) =
\exp\big(-M^{\gamma-\gamma'}/200\big),
\]
and as there are at most $M$ such particles, the probability that there exists a particle outside $B(u)$ with mass greater than
$M^{\xi-\gamma'}/100$ decays stretched exponentially with $M$. For every particle $h_i$, if $h_i\in B(u+\kappa)\setminus B(u)$ and $u$ and $u+\kappa$ are in the same component excursion, then $h_i$ is a child of one of the particles appearing in the time window $[u,u+\kappa]$, and this happens, for every particle independently, with probability less than $1/100$.
Each particle contributes at most $M^{-\xi}m^2(h_i)<M^{\xi-2\gamma'}$ to the sum $L(u)$, and $\xi-2\gamma'<0$.
Let $f(h_i)=M^{-\xi}m^2(h_i)$. Then we have
\[
\sum f(h_i) > M^{2\gamma'-\xi} \sup f(h_i).
\]
Thus, using the exponential Markov inequality,
\[
P(O)\leq e^{-M^{2\gamma'-\xi}},
\]
as desired.
\end{proof}
}
Let $u_0=\max\left(\inf\{u:L(u)\leq 1
\},\ M^{\gamma}/2\right)$. Let $u_k=u_0+k\kappa$, $k\in\mathbb{Z}$. Let $T\leq\infty$ be the time at which the first excursion generated by a component larger than $M^{\gamma}$ ends.
If no such component exists, then $T=\infty$.
\begin{lemma}\label{lem:first_block}
There exists
$\varphi>0$ such that for every $M$ large enough and every inhomogeneous random graph of size $M$ and parameter $\xi$,
\[
P(T\geq u_0) > 1-M^{-\varphi}.
\]
\end{lemma}
\begin{proof}
\ignore{
Let $R$ be the event
$
\left\{u_0\geq M^{\gamma} \mbox{ and } z\left(M^{\gamma}/2+M^{\gamma-\epsilon} \right)<\kappa \right\}.
$
We estimate the probability of $R$: By Corollary \ref{cor:decdrift},
\[
P\left(R\ ;\ L(M^{\gamma}/2) < 1+\frac 32M^{\theta -1 + \gamma - \gamma'} \right) < M^{-1}.
\]
In other words,
$
P(R\mbox{ and }K^c) < M^{-1}
$
where
\[
K=\left\{\sum_{h_i\notin B(\frac 12M^{\gamma})} M^{-\xi} m^2(h_i) < 1+\frac 32M^{\theta -1 + \gamma - \gamma'} \right\}
\]
Calculation yields:
\begin{eqnarray*}
E\left[\left.\sum_{h_i\in B(M^{\gamma}/2+M^{\gamma-\epsilon})\setminus B(M^{\gamma}/2)}m(h_i)
\right|K\right] &\geq&
(1-M^{-\epsilon})M^{\gamma - \epsilon}
\cdot (1+\frac 32M^{\theta -1 + \gamma - \gamma'})\\
&\geq&
M^{\gamma - \epsilon}
\cdot (1+M^{\theta -1 + \gamma - \gamma'}),
\end{eqnarray*}
and
\[
\mbox{var}\left[\left.\sum_{h_i\in B(M^{\gamma}/2+M^{\gamma-\epsilon})\setminus B(M^{\gamma}/2)}m(h_i)
\right|K\right] \leq M^{\gamma + \alpha}
\cdot (1+M^{\theta -1 + \gamma - \gamma'}).
\]
Therefore,
}
Let $u\geq \frac 12M^{\gamma}$.
Calculation yields:
\[
E\left[\left.
\sum_{h_i\in B(u+\kappa)\setminus B(u)}m(h_i)
\right|{\mathcal F}_u\right]\geq (1-M^{-\epsilon})\kappa L(u).
\]
\ignore{
\[
P(h_i)=1-\exp(-M^{-\xi}m(h_i)\kappa)\geq M^{-\xi}m(h_i)\kappa(1 - M^{-\xi}m(h_i)\kappa)
\]
so we need
\[
M^{-\xi}m(h_i)\kappa<M^{-\epsilon}
\]
Check:
\[
M^{-\xi}m(h_i)\kappa < \frac 13 M^{-\xi+\alpha+\gamma'}<M^{-\epsilon}
\]
}
\ignore{
and
\[
\mbox{var}\left[\left.
\sum_{h_i\in B(u+\kappa)\setminus B(u)}m(h_i)
\right|{\mathcal F}_u\right]\leq M^\alpha\kappa L(u).
\]
Let $X=\sum_{h_i\in B(u+\kappa)\setminus B(u)}m(h_i)$.
Then, if $L(u)\geq 1+3M^{-2\epsilon}$, by Chebyshev's inequality we get
\begin{eqnarray*}
P\left[\left.
z(u+\kappa)-z(u) \leq \kappa(L(u)-1)/2
\right|{\mathcal F}_u\right]
&=&
P\left[\left.
X \leq \kappa\big[1+(D(u))/2\big]
\right|{\mathcal F}_u\right]\\
\leq
P\left[\left.
X \leq E(X|{\mathcal F}_u) - \kappa D(u)/2
\right|{\mathcal F}_u\right]
&\leq&
\frac{\mbox{var} (X|{\mathcal F}_u)}{\kappa^2 D^2(u)/4}
\leq
\frac{1}{12} \frac{L(u) M^{\alpha - \gamma'}}{D^2(u)}.\\
&\leq&
\frac{1}{12} \frac{L(u) M^{-5\epsilon}}{D^2(u)}.
\end{eqnarray*}
}
Let $W(u)$ be the event $\{z(u+\kappa)-z(u) \geq \kappa D(u)/2\}$. Let $E$ be the event
\[
E=
\left\{
\forall_{0\leq k \leq M^{\gamma}/2\kappa}
D(M^{\gamma}-k\kappa)\geq
k\frac{M^{\theta-1}}{2}
\right\}.
\]
Note that by Corollary \ref{cor:decdrift}, $P(E^c\ ;\ T<u_0)<M^{-1}$. Therefore, it suffices to estimate $P(E\ ;\ T<u_0)$.
By the exponential Markov inequality, for every $u>M^{\gamma}/2$
\[
P\left[\left.
W^c(u);A;D(u)\geq 3M^{-\epsilon}
\right|{\mathcal F}_u\right] \leq \frac{\exp(-M^{2\epsilon})}{16}.
\]
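For reference, the exponential Markov inequality invoked here, in the lower-tail form used throughout this section (a standard fact, stated generically): for any $K>0$ and any real $a$,

```latex
P(X\le a)\;=\;P\!\left(e^{-KX}\ge e^{-Ka}\right)\;\le\; e^{Ka}\,E\!\left[e^{-KX}\right],
```

after which the resulting exponent is optimized over $K$.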
\ignore{
If $D(u)> 3M^{-\epsilon}$, then $L(u)>1+3M^{-\epsilon}$ and $(1-M^{-\epsilon})L(u)>1+D(u)-1.1M^{-\epsilon}>1+D(u)/2$. In particular, the gap is of order of magnitude of $D(u)$. Now exponential Markov:
Checking exponential Markov for this case:
\begin{eqnarray*}
\sum P_iX_i = M^{\gamma'}\ ; \ \sup X_i = M^\alpha < M^{\gamma'-5\epsilon}\\
P\left(X< M^{\gamma'}(1-M^{-\epsilon})\right)\leq\frac{E(e^{-KX})}{e^{-KM^{\gamma'}(1-M^{-\epsilon})}}
\end{eqnarray*}
\[
\log\left(1-P_i+P_ie^{-KX_i}\right) < -P_i(KX_i - K^2X_i^2) < -P_iKX_i + M^\alpha K^2P_iX_i
\]
\[
\sum\log(...) < -KM^{\gamma'} + K^2M^\alpha M^{\gamma'}
\]
So we need to find $K$ that minimizes
\[
K^2M^{\alpha+\gamma'} - KM^{\gamma'-\epsilon}
\]
\begin{eqnarray*}
2KM^{\alpha+\gamma'} = M^{\gamma'-\epsilon}\\
K=M^{\gamma'-\epsilon-\gamma'-\alpha}/2=M^{-\epsilon-\alpha}/2
\end{eqnarray*}
\[
K^2M^{\alpha+\gamma'} - KM^{\gamma'-\epsilon} = M^{\gamma'-2\epsilon-\alpha}/4>M^{\epsilon}/4.
\]
need to check: for small $x$,
\[
\log(1-p+pe^{-x})<-px+px^2?
\]
For small $x$,
\[
\log(1-p+pe^{-x}) < \log(1-p+p(1-x+x^2)) = \log(1-px+px^2)
<p(-x+x^2)-p^2(cx^2)...
\]
So all of this works if $x$ is (objectively) small enough and $p\ll 1$, which is, indeed the case.
}
Let $k_0=3M^{1-\epsilon-\theta}$, and let $W$ be the event
\[
W=
\bigcap_{k>k_0:u_0-k\kappa\geq M^{\gamma}/2}
W(u_0-k\kappa).
\]
Then $P(E\ ; W^c)<M^{-1}$. Therefore, it suffices to prove that if both the events $E$ and $W$ occur, then $T\geq u_0$. Under the event $E\cap W$, we have that
\[
z(3M^{\gamma}/4) - z(M^{\gamma}/2) \geq
\frac{\kappa M^{\theta-1}}{4}\sum_{k=0}^{M^{\gamma}/4\kappa}\left(M^{\gamma}/2\kappa-k\right)
\geq \frac {M^{2\gamma+\theta-\gamma'-1}}{576}\geq M^{2\gamma-\delta-1}.
\]
Therefore, under the event $E\cap W$, for every $u$ between $3M^{\gamma}/4$ and $u_0$ we have $z(u)>z(M^{\gamma}/2)$,
and therefore $u_0$ is in the same component excursion as $3M^{\gamma}/4$, and thus $T>u_0$.
\ignore{
between $u_0$ and $u_{-k_0}$ we have distance of $M^{1-\epsilon-\theta+\gamma'}$. So we need to show that
\[
1-\epsilon-\theta+\gamma' < 2\gamma-\delta-1.
\]
\begin{eqnarray*}
1-\epsilon-\theta+\gamma' < 1-\epsilon + \delta
\end{eqnarray*}
So we need
\begin{eqnarray*}
2-\epsilon+2\delta < 2\gamma.
\end{eqnarray*}
i.e.
\begin{eqnarray*}
1-\gamma < \epsilon/2 - \delta
\end{eqnarray*}
}
\ignore{
To this end we start by
estimating the probability of the event $W(u)$ for various values of $u$.
Let $f(u)=M^{2\epsilon}D(u)$.
}
\end{proof}
\begin{lemma}\label{lem:other_blocks}
With probability at least $1-CM^{-\varphi}$,
there exists no excursion generated by a component larger than $M^{\gamma}$ which starts after time $u_0$.
\end{lemma}
\begin{proof}
The calculation is similar to the one from the previous proof. First, for $k=0,1,2,\ldots,\frac{M^{1-\theta}}{2}$,
\begin{eqnarray*}
&&P\left(
\exists_{u\in[u_{k+1},u_{k+2}]} I(u)>I(u_{k})+\kappa |D(u_{k})|/2
\right)\\
&\leq& \frac{4\kappa M^\alpha }{\kappa^2 D(u_{k})^2}
\leq 48M^{\alpha-\gamma'}\cdot M^{2(1-\theta)}k^{-2}\\
&=& Ck^{-2} M^{2+\xi-\epsilon-2\gamma'-2\theta} = C\frac{M^{-\varphi}}{k^2}
\end{eqnarray*}
for some $\varphi > 0$.
\ignore{
\begin{eqnarray*}
2+\xi-\epsilon-2\gamma'-2\theta
< 2+\xi - \frac{8+2\xi}{3} +2\delta -\epsilon \\
=\frac{\xi - 2}{3} + 2\delta -\epsilon < 0.
\end{eqnarray*}
}
Thus
\[
P(F)\leq CM^{-\varphi}\sum_{k=1}^\infty k^{-2}
\]
for
\begin{eqnarray*}
F=
\left\{\exists_{k\in\{1,2,\ldots,\frac{M^{1-\theta}}{2}\}}
\exists_{u\in[u_{k+1},u_{k+2}]} I(u)>I(u_{k})+\kappa |D(u_{k})|/2
\right\}.
\end{eqnarray*}
Similarly, for each $k\geq 2$,
\begin{eqnarray*}
P\left(
\exists_{u\in[u_{k-1},u_{k}]} \left|I(u)-I(u_{k-1})\right| \geq \frac 14\kappa |D(u_{k})|
\right)
\leq \frac{4\kappa M^\alpha }{\kappa^2 D(u_{k})^2}
\leq\frac{4M^{-\varphi}}{(k-1)^2},
\end{eqnarray*}
and thus
\[
P(B) \leq 4M^{-\varphi}\sum_{k=1}^\infty k^{-2}
\]
for
\[
B=
\left\{\exists_{k\in\{2,3,\ldots,\frac{M^{1-\theta}}{2}\}}
\exists_{u\in[u_{k-1},u_{k}]} \left|I(u)-I(u_{k-1})\right| \geq \frac 14\kappa |D(u_{k})|
\right\}.
\]
On the event $B^c\cap F^c$, for every $k$ and every $u\in[u_{k-1},u_k]$ and $u'\in[u_{k+1},u_{k+2}]$,
we have that
\[
z(u') < z(u_k) - k\kappa M^{\theta - 1}/2
\ \ \ \mbox {and} \ \ \
z(u) > z(u_k) -k\kappa M^{\theta - 1}/4.
\]
Therefore, $z(u')<z(u)-\frac k4\kappa M^{\theta - 1} < z(u)-kM^\alpha$, and therefore, since the particle mass is bounded by $M^\alpha$, no excursion of length greater than or equal to $M^{\gamma'}$ can start at any point $u$ between $u_0$ and $u_0+\frac M2$.
By standard estimates for the size biased sequence, the probability that there is an excursion of length $M^{\gamma}$ starting after $u_0+M/2$ and no excursion of length larger than $M^{\gamma'}$ between $u_0$ and $u_0+M/2$ decays like $\exp(-M^{\gamma-\gamma'})$. Therefore, with probability at least $1-CM^{-\varphi}$, there is no such excursion starting after $u_0$.
\ignore{
\[
\gamma'+\theta - 1 > \alpha?
\]
\begin{eqnarray*}
\gamma'+\theta - 1 - \alpha > 2\gamma' - 1 - \delta -\alpha \\
= 2\gamma' - 1 - \delta - \xi + \gamma' + \epsilon
= 3\gamma' - 1 - \xi +\epsilon - \delta\\
> 3\gamma' - 1 - \xi > \frac{4+\xi}{2} - 1 - \xi = 1-\frac \xi 2 > 0.
\end{eqnarray*}
}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:aldous}]
Lemma \ref{lem:aldous} now follows from Lemma \ref{lem:first_block} and Lemma \ref{lem:other_blocks}.
\end{proof}
\section*{Acknowledgment}
First, I thank Yuval Peres and Itai Benjamini for presenting these problems
to me and for helping me during the research. I also wish to thank Omer Angel
and Elchanan Mossel for helpful suggestions. I thank Jeff Steif for his help
in improving the exposition of the paper and for presenting
to me the question leading to Theorem \ref{jeff}. I thank Michael Aizenman for
useful and interesting discussions.
I thank Mario W\"uthrich for finding a mistake in an earlier version.
\begin{thebibliography}{AAA}
\bibitem{ACCN}
Aizenman M., Chayes J. T., Chayes L. and Newman C. M. (1988) Discontinuity of Magnetization in One Dimensional $1/|x-y|^2$ Ising and Potts Models. {\sl J. Stat. Phys.} {\bf 50}, 1--41.
\bibitem{AiFe}
Aizenman M. and Fern\'{a}ndez R. (1988) Critical Exponents for Long-Range Interactions. {\sl Lett. Math. Phys.} {\bf 16}, 39--49.
\bibitem{AN}
Aizenman M. and Newman C. M. (1986) Discontinuity of the Percolation Density
in One Dimensional $1/|x-y|^2$ Percolation Models. {\sl Commun. Math. Phys.} {\bf 107}, 611--647.
\bibitem{aldous}
Aldous, D. (1997) Brownian excursions, critical random graphs and the multiplicative coalescent.
{\sl Ann. Probab.} {\bf 25}, no. 2 , 812--854.
\bibitem{us}
Angel O., Benjamini I., Berger N., Peres Y. (2001) Transience of percolation clusters
on wedges. {\sl preprint}
\bibitem{BenBer}
Benjamini I. and Berger N. (2001) {\sl In Preparation}.
\bibitem{BPP}
Benjamini I. Pemantle R. and Peres Y. (1998)
Unpredictable paths and percolation.
{\sl Ann. Probab.} {\bf 26}, no. 3, 1198--1211.
\bibitem{FKG}
Fortuin C.M., Kasteleyn P.W. and Ginibre J. (1971) Correlation inequalities on some partially ordered sets. {\sl Comm. Math. Phys} {\bf 22} 89--103.
\bibitem{uniq}
Gandolfi A., Keane M. S. and Newman C. M. (1992) Uniqueness of the
infinite component in a random graph with applications to percolation and
spin glasses, {\it Probab. Theory Related Fields} {\bf 92}, 511--527.
\bibitem{Jesper} Jespersen S. and Blumen A. (2000) Small-world networks: Links with long-tailed
distributions , {\sl Phys. Rev. E} {\bf 62}, 6270--6274.
\bibitem{GKZ}
Grimmett G.R., Kesten H. and Zhang Y. (1993) Random walk on the infinite
cluster of the percolation model, {\sl Probab. Th. Rel. Fields} {\bf 96} 33--44
\bibitem{elch}
H\"{a}ggstr\"{o}m O. and Mossel E. (1998). Nearest-neighbor walks with low
predictability profile and percolation in $2+\epsilon$ dimensions. {\sl Ann. Probab.} {\bf 26}, 1212--1231.
\bibitem{hasl}
Hara T. and Slade G. (1990). Critical Behavior for Percolation in High Dimensions. {\sl Commun. Math. Phys.} {\bf 128}, 333--391.
\bibitem{HoMo}
Hoffman C., Mossel, E. (1998)
Energy of flows on percolation clusters,
{\it Annals of Potential Analysis, to appear}
\bibitem{LP}
Levin D. and Peres Y. (1998).
Energy and cutsets in infinite percolation clusters.
{\it Proceedings of the Cortona Workshop on Random Walks
and Discrete Potential Theory}, M. Picardello and W. Woess (Editors),
Cambridge Univ.\ Press.
\bibitem{steif}
Meester R. and Steif J. E. (1996) On the continuity of the critical value for long range
percolation in the exponential case. {\sl Comm. Math. Phys} {\bf 180} 483--504
\bibitem{NS}
Newman C. M. and Schulman L. S. (1986)
One Dimensional $1/|j-i|^s$ Percolation Models: The Existence of a Transition for $s\leq 2$. {\sl Commun. Math. Phys.} {\bf 104}, 547--571.
\bibitem{triid}
Pemantle R. and Peres Y. (1996) On which graphs are all random walks in random
environments transient? {\em Random Discrete Structures}, IMA Volume 76, D. Aldous and R. Pemantle (Editors), Springer-Verlag.
\bibitem{yuval}
Peres Y. (1999) Probability on trees: an introductory climb.
{\em Lectures on probability theory and statistics (Saint-Flour, 1997)}, 193--280, Lecture Notes in Math., {\bf 1717}, Springer, Berlin.
\bibitem{schul}
Schulman L. S. (1983) Long-range percolation in one dimension.
{\sl J. Phys. A} {\bf 16}, no. 17, L639--L641
\end{thebibliography}
\noindent
Noam Berger, \\
Department of Statistics, \\
367 Evans Hall \#3860, \\
University of California Berkeley, \\
CA 94720-3860 \\
e-mail:[email protected]\\
\end{document}
\begin{document}
\title[Topological Regular Neighborhoods]{Topological Regular Neighborhoods \thanks{This version of Part I is bare in
spots and short on polish, but experts will find all necessary details.}}
\author{Robert D. Edwards}
\address{Department of Mathematics, University of California, Los Angeles, }
\email{[email protected]}
\dedicatory{
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
\\Preparation of the electronic manuscript was supported by NSF Grant
DMS-0407583. Final editing was carried out by Fredric Ancel, Craig Guilbault
and Gerard Venema.}
\begin{abstract}
This article is one of three highly influential articles on the topology of
manifolds written by Robert D. Edwards in the 1970's but never published.
Organizers of the Workshops in Geometric Topology
(http://www.uwm.edu/\allowbreak\symbol{126}craigg/workshopgtt.htm) with the
support of the National Science Foundation have facilitated the preparation of
electronic versions of these articles to make them publicly available.
Preparation of the first of those articles \textquotedblleft Suspensions of
homology spheres\textquotedblright\ was completed in 2006. A more complete
introduction to the series can be found in that article, which is posted on
the arXiv at: http://arxiv.org\allowbreak/abs/math/\allowbreak0610573v1 and on
a web page devoted to this project: http://www.\allowbreak uwm.\allowbreak
edu/\allowbreak\symbol{126}craigg/\allowbreak EdwardsManuscripts.htm
Preparation of the second article \textquotedblleft Approximating certain
cell-like maps by homeomorphisms\textquotedblright\ is nearing completion. The
current article \textquotedblleft Topological regular
neighborhoods\textquotedblright\ is the third and final article of the series.
(\textbf{Note. }This ordering is not chronological, but rather by relative
readiness of the original manuscripts for publication.) It develops a
comprehensive theory of regular neighborhoods of locally flatly embedded
topological manifolds in high dimensional topological manifolds. The following
original abstract for that paper was also published as an AMS research
announcement:
\noindent\textbf{Original Abstract. }(AMS Notices Announcement): A theory of
topological regular neighborhoods is described, which represents the full
analogue in TOP of piecewise linear regular neighborhoods (or block bundles)
in PL. In simplest terms, a topological regular neighborhood of a manifold $M$
locally flatly embedded in a manifold $Q$ ($\partial M=\varnothing=\partial
Q\;$here) is a closed manifold neighborhood $V$ which is homeomorphic fixing
$\partial V\cup M$ to the mapping cylinder of some proper surjection $\partial
V\rightarrow M$. The principal theorem asserts the existence and uniqueness of
such neighborhoods, for $\dim Q\geq6$. One application is that a cell-like
surjection of cell complexes is a simple homotopy equivalence (first proved
for homeomorphisms by Chapman). There is a notion of transversality for such
neighborhoods, and the theory also holds for locally tamely embedded polyhedra
in topological manifolds. This work is a derivative of the work of
Kirby-Siebenmann; its immediate predecessor is Siebenmann's \textquotedblleft
Approximating cellular maps by homeomorphisms\textquotedblright\ Topology
11(1972), 271-294.
This version of Part I is bare in spots and short on polish, but experts will
find all necessary details. Part II is only sketched.
\end{abstract}
\maketitle
\tableofcontents
\noindent\textbf{Note from the editors. }This manuscript is an electronic
version of a handwritten manuscript obtained from the author and dating back
to 1973. As noted in the abstract, this is not a complete and polished work.
Part I is nearly complete but lacking in a few details; a plan for Part II is
described in the manuscript, but there is no evidence it was ever written.
Despite its incomplete nature, the handwritten version of this manuscript was
widely circulated and read. Its influence can be deduced from its appearance
(sometimes under the alternative title \textquotedblleft TOP regular
neighborhoods\textquotedblright) in the bibliographies of a large number of
important papers from that era.
In the process of editing the original manuscript, some obvious `typos' were
corrected and a few other minor improvements were made. For example a number
of missing references, which the author had intended to fill in later, have
been included, and others were updated from preprint status to their final
publication form. (This accounts for a few post-1973 references in the
bibliography.) In a few places, modern notation---more compatible with a \TeX\
document---replaces earlier notation. Otherwise, this version remains faithful
to the original. In particular, no attempt was made to complete unfinished
portions of the manuscript. Notes from the author (sometimes to himself) about
missing details or planned improvements are included. The decision to leave
the manuscript largely unaltered leads to a few awkward situations. For
example, some passages make references to the unwritten `Part II'; and in a
few places there are incomplete sentences---sometimes due to phrases cut off
or rendered unreadable by Xerox machines from long ago. A missing portion of
text is indicated by a short blank line: $\underline{\quad\quad\quad}$.
Despite the minor imperfections, readers will find much interesting and
important mathematics, and some excellent exposition, on these pages.
The editors apologize and accept full responsibility for any new errors that
crept into the manuscript during the conversion process.
\part{
}
\section{Introduction
}
A topological regular neighborhood of a manifold $M$ locally flatly embedded
in a manifold $Q$ ($\partial M=\varnothing=\partial Q$ here; all manifolds
topological) is most easily defined as a closed manifold neighborhood $V$ of
$M$ in $Q$ such that $(V;\partial V,M)$ is homeomorphic to the mapping
cylinder $(Z(r|);\partial V,M)$, of the restriction to $\partial V$ of some
proper retraction $r:V\rightarrow M$. The basic aim of this paper is to prove
the existence and uniqueness of such neighborhoods, for $\dim Q\geq6$. This is
essentially accomplished in Sections 5 and 6. It turns out that such
neighborhoods are more useful if their definition is given in less stringent
form. The alternative (but equivalent) definition is given in Section 1 and
developed in Sections 3 and 4.
\indent Topological regular neighborhoods can be regarded as the analogue in
TOP of block bundles in PL. They have the disadvantage of certain dimension
restrictions, but they have the advantage of a bit more flexibility: certain
pathological fibers are permitted and conversely certain nice fibers can be demanded.
\indent For example, the following is true: if $M^{m}$ is a locally flat
submanifold of $Q^{m+q}$ (say no boundaries), $m+q\geq6$, then $M$ has a
closed manifold mapping cylinder neighborhood $V$ in $Q$ (as above) such that
all fibers $\{r^{-1}(x)\}$ are locally flat $q$-discs which intersect
$\partial V$ in locally flat $(q-1)$-spheres.
\indent Hence one feature of topological regular neighborhoods is that they
may serve as ersatz disc bundle neighborhoods in dimensions where the latter
may fail to exist (see Remark 1.3). However, they have other uses as well, for
example for showing that a cell-like map of cell complexes is a simple
homotopy equivalence, and transversality. The theory also extends to tamely
embedded polyhedra in topological manifolds.
\indent There are several other prior and related neighborhood theories in the
literature, but we defer discussion of these until Section 2, after definitions.
\indent This work grew out of my alternative proof \cite{E2} of Chapman's
Theorem that a topological homeomorphism of polyhedra is a simple homotopy
equivalence. In fact, it was developed to correct a flaw in my first proof of
that theorem, a flaw which it turned out had a much simpler remedy. (The flaw
was an implicit assumption that all triangulations are combinatorial; the
remedy is represented by Theorem 1.2 in \cite{E1}.)
\indent I would like to thank L. Siebenmann for his many valuable comments and
suggestions concerning this paper. Also, I thank Alexis Marin and Ron Stern
for their participation in its development.\newline
\section{Notation, definitions and some examples}
Throughout this paper, we will adhere to the following notational conventions.
\[
B^{n}=[-1,1]^{n}\subset\mathbb{R}^{n}=\mathbb{R}^{n}\times0\subset
\mathbb{R}^{q}.
\]
$\partial B^{n}$ or $\dot{B}^{n},$ $\operatorname*{int}B^{n}$ or $\mathring
{B}^{n}$, $rB^{n}$ and $r\partial B^{n}$ (for $r>0$) are all used in the usual
ways. $D^{n}$ is used to denote any homeomorphic copy of the unit ball
$\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\mid\sum x_{i}^{2}\leq1\}$ in
$\mathbb{R}^{n}$ and $S^{n-1}$ any homeomorphic copy of its boundary; if the
context requires, regard $D^{n}$ and $S^{n-1}$ as actually being the unit ball
and sphere. (Reason for this rigmarole: Sometimes it's useful to have distinct
$n$-cells $B^{n}$ and $D^{n}$ around.)
\indent Given a map $f:X\rightarrow Y$, let $Z(f)$ denote the mapping cylinder
and $\rho:Z(f)\rightarrow Y$ the mapping cylinder retraction. Thus
\[
Z(f)=(X\times\lbrack0,1]\sqcup Y)/\{(x,1)\sim f(x)\text{ for }x\in X\}
\]
and $\rho(x,t)=f(x)$.
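To fix the picture (a standard observation added for orientation, not a claim of the manuscript): if $c:S^{q-1}\rightarrow\{x\}$ is the constant map, the mapping cylinder is the cone on $S^{q-1}$,

```latex
Z(c)\;=\;\bigl(S^{q-1}\times[0,1]\sqcup\{x\}\bigr)\big/\{(v,1)\sim x\}\;\approx\;D^{q},
```

with $\rho$ the retraction to the cone point $x$; this is the local model for the locally flat disc fibers $r^{-1}(x)$ mentioned in the introduction.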
\indent A map of pairs $f:(X,A)\rightarrow(Y,B)$ is \emph{faithful }if
$f^{-1}(B)=A$, not more. We prefer to save `proper' for its more widespread
meaning: $f:X\rightarrow Y$ is \emph{proper} if preimages of compact sets are compact.
\indent The notation $f:X{\normalsize
{\includegraphics[
trim=0.000000in 0.002046in 0.000000in 0.002046in,
height=0.1064in,
width=0.2093in
]
{unto.eps}
}
}Y$ indicates that $\operatorname*{domain}(f)\subset X$, not necessarily equal
to $X$.
\indent Suppose $M^{m}$ is a topological manifold (with or without boundary,
compact or not). The following definition is the first of two.
\begin{definition}
[Mapping cylinder version]An \textit{(abstract) }\textbf{topological regular
neighborhood} of $M^{m}$ (TRN for short) is a triple $(V^{m+q},M^{m},r)$ where
$V$ is a manifold-with-boundary and $r:V\rightarrow M$ is a proper retraction
such that
\end{definition}
\begin{itemize}
\item[(1)] $(M,\partial M) \hookrightarrow(V, \partial V)$ is a faithful,
locally flat inclusion (\textit{faithful} $\equiv M \cap\partial V = \partial
M)$,
\item[(2)] $\delta V\equiv r^{-1}(\partial M)$ is a collared codimension 0
submanifold of $\partial V$ (define $\dot{V}=cl(\partial V-\delta V)$ and
$\mathring{V}=V-\dot{V}$), and
\item[(3)] $(V;\dot{V},M,r)$ is isomorphic (keeping $\dot{V}\cup M$ fixed) to
the mapping cylinder of $r|:\dot{V}\rightarrow M$, that is, $(V;\dot
{V},M,r)\approxeq(Z(r|_{\dot{V}});\dot{V},M,\rho)$ where $\rho$ is the mapping
cylinder retraction
\end{itemize}
\indent This definition, although quite natural, turns out to be too
restrictive for certain purposes. For example, one would like the composition
of TRN's to be a TRN. Consider:
\begin{example}
\label{Ex 1.1}(See Figure 1.) This example describes two mapping cylinder
TRN's $r_{1}:V_{1}\rightarrow V_{2}$ and $r_{2}:V_{2}\rightarrow J$ whose
composite $r_{2}r_{1}:V_{1}\rightarrow J$ is not a mapping cylinder TRN.
For the purposes of this example, let $I=J=K=[-1,1]$, to be thought of as
first, second, and third coordinate intervals in $\mathbb{R}^{3}$.
Let $(J\times K,J,r_{0})$ be the mapping cylinder TRN as pictured in Figure
1a,\begin{figure}
\caption{Example 1: Composition of TRN's}
\end{figure}such that $r_{0}^{-1}\left( 0\right) =\Delta^{1}\vee\Delta^{2}$
is the only non-interval point inverse. Product this TRN with the interval $I$
to get
\[
(V_{1},V_{2},r_{1})\equiv(I\times J\times K,I\times J,\operatorname{id}
_{I}\times r_{0})
\]
as shown in Figure 1b.
Let $V_{2}=I\times J\overset{r_{2}}{\longrightarrow}J$ be obtained from the
standard-projection TRN $\pi_{J}:I\times J\rightarrow J$ by a slight
perturbation of the projection map $\pi_{J}$, as shown in Figure 1c.
Specifically, let $h:I\times J\rightarrow I\times J$ be a $(t\times J)$-level
preserving homeomorphism such that
\[
h(I\times0)\cap I\times0=C\equiv\operatorname*{cl}\{(1/n,0)\mid n>0\}\subset
I\times0\text{,}
\]
and define $r_{2}=\pi_{J}h^{-1}:I\times J\rightarrow J$.
The composition $r_{2}r_{1}:V_{1}\rightarrow J$ is not a mapping cylinder
projection, as $(r_{2}r_{1})^{-1}(0)\approx(I\times0\times K)\cup
(C\times\Delta^{2})$, which is not a cone. See Figure 1d.
\end{example}
Before giving the second definition, it is worth considering the analogous
situation in PL for motivation. There, one can define an abstract regular
neighborhood of a manifold $M$ (without boundary here) as a triple $(V,M,r)$
where $V$ is a manifold with boundary, $M\subset\operatorname*{int}V$ and
$r:V\rightarrow M$ is a PL collapsible retraction, where \emph{collapsible}
means each point inverse $r^{-1}(x)$ is a collapsible polyhedron. This is M.
Cohen's observation \cite{Co2}, and it provides an alternative way of defining
block bundles \cite[\S 4]{RSI}. Cohen shows that such a $V$ has topological
mapping cylinder structure (\cite{Co1}; one has to be careful with PL mapping
cylinders [?]). With this definition, the composition of PL regular
neighborhoods, as defined above, is readily a PL regular neighborhood
\cite[Lemma 8.6]{Co2}.
\indent The most general analogue in TOP of a piecewise linear collapsible
polyhedron is a cell-like compactum. This suggests the topological adaptation
of Cohen's definition. First we need some preliminary definitions, which we
give in anodyne form for the nonexpert in shape theory.
\indent Let $X$ be a finite dimensional compact metric space. Such an $X$ is
\emph{cell-like} \label{cell-like}if $X$ embeds in some euclidean space
$\mathbb{R}^{q}$ so that its image is cellular, that is, the intersection of
open $q$-cells. Similarly, $X$ is $k$\emph{-sphere-like} if $X$ embeds in some
euclidean space $X\hookrightarrow\mathbb{R}^{q}$ so that $X=\cap_{i=1}
^{\infty}f_{i}(S^{k}\times\mathbb{R}^{q-k})$ where each $f_{i}:S^{k}
\times\mathbb{R}^{q-k}\rightarrow\mathbb{R}^{q}$ is an embedding, and
$\operatorname*{image}f_{i+1}\hookrightarrow\operatorname*{image}f_{i}$ is a
homotopy equivalence. Also, $X$ is $k$\emph{-UV} if given any embedding
$X\hookrightarrow\mathbb{R}^{q}$ and any neighborhood $U$ of $X$ in
$\mathbb{R}^{q}$, there is a neighborhood $V$ of $X$, $V\subset U$, such that
any map $\alpha:S^{k}\rightarrow V$ is null-homotopic in $U$. $X$ is
$\emph{UV}^{k}$ if it is $j$-\emph{UV} for $0\leq j\leq k$. It is the message
of shape theory that these properties are intrinsic properties of $X$ and can
be so characterized, without any reference to a specific embedding.
\indent An inclusion $Y\hookrightarrow X$ of a closed subset $Y$ into a
locally compact, finite-dimensional separable metric space $X$ is a
\emph{shape equivalence} if for any embedding $X\hookrightarrow Q$ of $X$ onto
a closed subset of a manifold $Q$ (i.e., proper embedding), the following
holds: given neighborhoods $U$ of $X$ in $Q$ and $W$ of $Y$ in $Q$, there is a
neighborhood $V$ of $X$ in $Q$ such that $V$ homotopically deforms into $W$ in
$U$, keeping some neighborhood $N$ of $Y$ fixed. That is, there is a homotopy
$h_{t}:V\rightarrow U$, $t\in\lbrack0,1]$, joining $\operatorname{id}
_{V}=h_{0}:V\rightarrow U$ to a map $h_{1}:V\rightarrow W$, such that
$h_{t}|_{N}=\operatorname{id}$. (independent of $t$).
Shape theory says that if this definition holds for one proper embedding
$X\hookrightarrow Q$, it holds for all \cite[p.499]{La2}.
Suppose $r:V\rightarrow M$ is a proper retraction of spaces and $\dot{V}$ is
some distinguished closed subset of $V$. We use the notation $F_{x}=r^{-1}(x)$
and $\dot{F}_{x}=F_{x}\cap\dot{V}$, for $x\in M$. Recall $r$ is
\emph{cell-like} (CE) if each $F_{x}$ is cell-like \cite{La}. We call $r$
\emph{cell-like, sphere-like} (CS) if each $F_{x}$ is cell-like and each
$\dot{F}_{x}$ is sphere-like. Finally (and most importantly), we call $r$
\emph{cone-like} if each $F_{x}$ is cell-like and the pair $(F_{x}-x,\dot
{F}_{x})$ is proper shape equivalent to $(\dot{F}_{x}\times\lbrack0,1),\dot
{F}_{x})$. See \cite{BS}. In order to obviate proper shape theory, we remark
in advance that in the following definition, one can interpret
\emph{cone-like} \label{cone-like}to mean that $r$ is CS and each inclusion
$\dot{F}_{x}\hookrightarrow F_{x}-x$ is a shape equivalence (in fact, in
codimension $\geq3$ one need only assume $r$ is CE and each $\dot{F}_{x}$ has
property $1$-UV; details are in \S 3).
\indent$M^{m}$ is a topological manifold (with or without boundary, compact or
not). The following definition differs from the previous mapping cylinder
version only in condition (3).
\begin{definition}
[Cone-like version]An (abstract)\textbf{ topological regular}
\textbf{neighborhood }of $M^{m}$ is a triple $(V^{m+q},M^{m},r)$ where $V$ is
a manifold-with-boundary and $r:V\rightarrow M$ is a proper retraction such that
\end{definition}
\begin{itemize}
\item[(1)] $(M,\partial M)\hookrightarrow(V,\partial V)$ is a faithful locally
flat inclusion.
\item[(2)] $\delta V\equiv r^{-1}(\partial M)$ is a collared codimension $0$
submanifold of $\partial V$ (define $\dot{V}=cl(\partial V-\delta V)$ and
$\mathring{V}=V-\dot{V})$, and
\item[(3)] $r:V\rightarrow M$ is cone-like.\footnote{See the note at bottom of
page \pageref{condition(0)}.}
\end{itemize}
\indent The following examples illuminate the definition. The last two
are relevant only to codimension 2.
\begin{example}
This example shows why $r$ must be more than just cell-like. Let $V$ be any
compact contractible manifold, $m=\operatorname{point}\in\operatorname*{int}
V$, and $r:V\rightarrow m$ the retraction. Then $r$ is CE,
but if one wants uniqueness to hold in the theory, there must be some
condition which forces $\partial V$ to be a homotopy sphere instead of just a
homology sphere.
\end{example}
\begin{example}
This shows the need for the strong cone-like hypothesis on $r$ in codimension
2. (For polyhedra, see Siebenmann's example in \S 8\footnote{\textbf{Note from
editors:} In fact, this example did not make it into \S 8.}). Let
$(B^{m+2},D^{m})$ be a knotted locally flat ball pair such that the sphere
pair $(\partial B^{m+2},\partial D^{m})=(\partial B^{m+2},\partial B^{m})$ is
standard. Recall that these can be constructed with $(B^{m+2}-D^{m},\partial
B^{m+2}-D^{m})$ highly connected \cite{Wa}. There is a CS retraction
$r:B^{m+2}\rightarrow D^{m}$ which is a standard $B^{2}$-fibered projection
over $D^{m}-0$ and such that $F_{0}$ is homotopy equivalent to the
contractible space $B^{m+2}-(D^{m}-0)$, with $\dot{F}_{0}\approx S^{1}\times
B^{m}$. Since $(B^{m+2},D^{m})$ is not standard, it is necessary to rule out
such an $r$.
\end{example}
\begin{example}
This shows that in codimension 2, it is not enough to just assume that $r$ is
cell-like and each inclusion $\dot{F}_{x}\hookrightarrow F_{x}-x$ is a shape
equivalence (as opposed to the proper shape equivalence in the definition of
cone-like). \textbf{Note. }This example is incomplete. It requires a knotted
embedding $f:S^{n}\rightarrow S^{n+2}$ which permits a concordance
$F:S^{n}\times I\rightarrow S^{n+2}\times I$ to the standard $S^{n}$ so that
$S^{n+2}-f\left( S^{n}\right) \hookrightarrow S^{n+2}\times I-F\left(
S^{n}\times I\right) $ is a homotopy equivalence (everything locally flat).
Then we could construct this example.\label{ex: cell-like/cone-like}
\end{example}
\begin{remark}
(Concerning $\delta V$). If $(V,M,r)$ is a TRN of $M$, then $(\delta
V,\partial M,r|_{\delta V})$ is a TRN of $\partial M$ (either definition).
\textbf{Note:} $(\delta V)^{\cdot}=\partial{\dot{V}}$, which we will denote
$\delta\dot{V}$; also $(\delta V)^{\circ}=\partial\mathring{V}$, which we will
denote $\delta\mathring{V}$. Actually, our definition of TRN for manifolds
with boundary is not the most general, as one need not require $\delta V$ to
coincide with $r^{-1}(\partial M)$. We postpone this relaxation and its
details until the discussion of neighborhoods of polyhedral pairs in Part II,
where it becomes necessary.
\end{remark}
\begin{remark}
(Concerning the equivalence of definitions). It is routine to show that a
mapping cylinder TRN is a cone-like TRN, using definitions. The converse of
course is not strictly true, but it is as true as could be expected: if
$(V,M,r)$ is a cone-like TRN, then there is a mapping cylinder retraction
$r^{\prime}:V\rightarrow M$ which is arbitrarily close to $r$ and agrees with
$r$ on $\dot{V}$ $(\dim V\neq4;$ for $\dim V=3$ see next remark). That is,
$(V;\dot{V},M,r^{\prime})\approx(Z(r|_{\dot{V}});\dot{V},M,\rho)$ $(rel$
$\dot{V}\cup M)$. Details are in Section 4.
\end{remark}
\begin{remark}
(Concerning non-locally flat embeddings of $M$). The definitions make perfect
sense even if $M$ is not locally flatly embedded in $V$. However, we cannot
say anything non-trivial regarding existence-uniqueness in this case, and the
techniques of this paper are no help there. Recall that if non-combinatorial
triangulations of topological manifolds exist, i.e., if the double suspension
of some genuine homology sphere is topologically homeomorphic to a real sphere,
then there is a non-locally flat embedding of $S^{1}$ (namely the suspension
circle of the above suspension) into some sphere such that the embedding has a
manifold mapping cylinder neighborhood. Further details are in \cite{Gl}.
If $M^{m}$ is an arbitrary, possibly wild submanifold of $Q^{m+q}$, then
$M=M\times0\subset Q\times\mathbb{R}^{1}$ is locally flat (no dimension
restrictions; details recounted in \cite{BrS} for $q>1$). Thus if $V$ is a
TRN of a non-locally flatly embedded $M$ (either definition), then
$V\times\lbrack-1,1]$ is a genuine TRN of $M\times0$.
\end{remark}
\begin{remark}
(Concerning disc bundle neighborhoods). Topological regular neighborhoods may
serve as a partial substitute for topological disc bundle neighborhoods in
dimensions where the latter don't exist (although even when disc bundle
neighborhoods exist, the uniqueness of TRN's is still useful; e.g., the
topological invariance of simple homotopy type for cell complexes, \S 9). We
recall what is known about existence-uniqueness of disc bundle neighborhoods.
If $M^{m}\hookrightarrow\operatorname*{int}Q^{m+q}$ is a locally flat
topological embedding, then $M^{m}$ has a unique disc bundle neighborhood if
$m+q\leq3$ (semi-classical); $q=1$ \cite{Bro}, $q=2$, $m+q\geq5$ \cite[AMS
Notices 1971]{KS}, $m\geq3$, $m+q=5,6$ again essentially by \cite{KS} (no
upper bound on $m+q$ for existence); $m\leq q+2$ [resp. $m\leq6$, $m\leq5$],
$q\geq7$ [resp. $q=6$, $q=5$], with existence holding for these $m$ increased
by one \cite{St}.
Hence the first $m+q\neq4$ case where existence fails is $(m,m+q)=(4,7)$,
realizable by a counterexample of Hirsch.
\end{remark}
\begin{remark}
(Concerning low dimensions). Subsequent theorems in Part I are all stated and
proved for ambient dimension $\geq6$ (exceptions: the mapping cylinder theorem
(\S 4) only requires ambient dimension $\geq5$, and the local contractibility
theorem (\S 7) has no dimension restrictions). As usual, all theorems hold
when ambient dimension $\leq2$ (same proofs work) and all theorems hold when
ambient dimension $=3$, if we adopt the same convention that Siebenmann did in
\cite{Si1} to get around the Poincar\'{e} conjecture: in the cone-like
definition of TRN, assume in addition that each fiber $F_{x}$ has a manifold
neighborhood in $V$ which is prime $(\equiv$ there is no 2-sphere which
separates the manifold into two non-cells). The mapping cylinder definition
works as stated; its fibers automatically have this property. Ambient
dimensions 4 and 5 remain a mystery because of the failure of the s-cobordism
theorem there \cite{Si4}. But remember that in dimension 5, disc bundle
neighborhoods exist and are unique (see preceding Remark).
\end{remark}
\indent We continue with more definitions. Two abstract topological regular
neighborhoods $(V_{0},M,r_{0})$ and $(V_{1},M,r_{1})$ are \emph{homeomorphic}
if they are homeomorphic as triples $(V_{0},M,\delta V_{0})\approx
(V_{1},M,\delta V_{1})$ keeping $M$ fixed. Two such TRN's are
\emph{isomorphic} if they are homeomorphic via $h:(V_{0},M,\delta
V_{0})\overset{\approx}{\longrightarrow}(V_{1},M,\delta V_{1})$ so that
$r_{1}=r_{0}h^{-1}$. This notion seldom arises because of its excessive strength.
\indent If $(M,\partial M)\hookrightarrow(Q,\partial Q)$ is a faithful locally
flat inclusion and $V$ is a TRN of $M$ in $Q$, we always assume (unless
otherwise stated) that $V\cap\partial Q=\delta V$ and that $(\dot{V}
,\delta\dot{V})$ is collared in $(Q-\mathring{V},\partial Q-\delta\mathring
{V})$. Two TRN's $(V_{0},M,r_{0})$ and $(V_{1},M,r_{1})$ of $M$ in $Q$ are
\emph{equivalent }in $Q$ if there is a homeomorphism of $Q$ whose restriction
gives a homeomorphism of $V_{0}$ onto $V_{1}$. They are \emph{equivalent by
ambient isotopy} if this homeomorphism can be chosen isotopic to
$\operatorname{id}_{Q}$ through homeomorphisms of $Q$ fixed on $M$. Invariably
such an ambient isotopy will by construction leave a neighborhood of $M$
fixed; if not, it can be so arranged by the isotopy extension theorem.
\indent Although not explicitly required in the definition, all our
equivalences by ambient isotopy $h_{t}:Q\rightarrow Q$, $t\in\lbrack0,1]$, can
be followed by a \emph{cone-like homotopy} $r_{t}^{\prime}:V_{1}\rightarrow M$
($\equiv$ homotopy through cone-like retractions) joining $r_{0}^{\prime
}=r_{0}h_{1}^{-1}$ to $r_{1}^{\prime}=r_{1}$. This will sometimes prove
useful, and will be mentioned explicitly whenever it arises.
\indent We conclude this section with a useful example, which captures the
difference between topological disc bundles and topological regular neighborhoods.
\begin{example}
\label{Ex: capping off}(Capping Off). This example illustrates the fundamental
compactification operation for TRN's. Suppose $r:\mathbb{R}^{m}\times
B^{q}\rightarrow\mathbb{R}^{m}=\mathbb{R}^{m}\times0$ is any cone-like
retraction. Regard $S^{m}=\mathbb{R}^{m}\cup\infty$ and define $i$ =
$\operatorname{inclusion}\times\operatorname{id}:\mathbb{R}^{m}\times
B^{q}\hookrightarrow S^{m}\times B^{q}$. Then $\overline{r}:S^{m}\times
B^{q}\rightarrow S^{m}$ defined by
\[
\overline{r}=\left\{
\begin{array}
[c]{ll}
iri^{-1}\quad\text{on}\quad(S^{m}-\infty)\times B^{q} & \\
\operatorname{projection}\text{ to}\;\infty\;\text{on}\;\infty\times B^{q} &
\end{array}
\right.
\]
is a cone-like retraction.
\end{example}
\section{Statement of results; general remarks}
The primary goal of Part I is to prove:
\begin{theorem}
[Existence-Uniqueness Theorem]\label{existence-uniqueness thm}Suppose
$(M^{m},\partial M)\hookrightarrow(Q^{m+q},\partial Q)$ is a faithful locally
flat inclusion of topological manifolds, $m+q\geq7$, $(\geq6$ provided that
$\partial M=\varnothing$ or that the conclusion already holds at $\partial
M)$. Then $M$ has a topological regular neighborhood in $Q$, and any two are
equivalent by ambient isotopy of $Q$.
\end{theorem}
\noindent\textbf{Addendum.} The ambient isotopy $h_{t}:Q\rightarrow Q$ which
realizes the homeomorphism of $(V_{0},M,r_{0})$ to $(V_{1},M,r_{1})$ may be
chosen as the composition $h_{t}=h_{1,t}^{-1}h_{0,t}$ of two well-controlled
ambient isotopies $h_{0,t}$ and $h_{1,t}$, where \emph{well-controlled} means
that each $h_{i,t}$, $t\in\left[ 0,1\right] $, moves only those points which
lie near $V_{i}$ but not near $M$, along tracks which lie arbitrarily close to
individual fibers of $V_{i}$. Furthermore, the cone-like retractions
$r_{0}h_{1}^{-1}$ and $r_{1}$ of $V_{1}$ to $M$, which are close by
construction, can be joined by a small cone-like homotopy.\newline
\newline\textbf{Remark 2.2} (Concerning special neighborhoods.) There are
actually several useful subclasses of TRN's, each gotten by putting more
restrictions on the fibers $(F_{x},\dot{F}_{x})$ in either original
definition. The Existence-Uniqueness Theorem holds for each class (with no
change in the proof). Some subclasses, in order of increasing restrictiveness:
\begin{itemize}
\item[(1)] (the original fibers, for comparison) $(F_{x},\dot{F}_{x}
)\overset{\text{shape}}{\thicksim}(B^{q},S^{q-1})$
\item[(2)] $(F_{x},\dot{F}_{x})\overset{\text{htpy equiv.}}{\thicksim}
(B^{q},S^{q-1})$
\item[(3)] (1) plus $F_{x}$ and $\dot{F}_{x}$ are ANR's. Note this implies (2) holds.
\item[(4)] $(F_{x},\dot{F}_{x})\overset{\text{homeo.}}{\approx}(B^{q}
,S^{q-1})$
\item[(5)] $(F_{x},\dot{F}_{x})\overset{\text{homeo.}}{\approx}(B^{q}
,S^{q-1})$ and each $(F_{x},\dot{F}_{x})$ is locally flat in $(V,\dot{V})$.
\end{itemize}
\indent Class (5) provides the nicest neighborhoods as far as existence is
concerned, whereas the original cone-like definition offers the strongest
uniqueness theorem. The cone-like homotopy of the Addendum belongs to the
appropriate class.
\indent The theory of topological regular neighborhoods is quite evidently
modelled on the theory of PL regular neighborhoods and PL block bundles (which
are really the same things, looked at from different perspectives; cf.
\cite[\S 4]{RSI}). For the former, our preferred reference is Cohen \cite{Co2},
and we have already remarked (in \S 1 after Example \ref{Ex 1.1}) how the
treatment there is reflected here. Topological regular neighborhoods are not
by definition partitioned into blocks, but they can be if the core manifold
$M$ has a handle structure (as it does if $\dim M\neq4,5)$. This is discussed
more fully in Part II. Topological regular neighborhood theory is completely
parallel to block bundle theory, except for the bothersome dimension restrictions.
\indent It is worth recalling other topological neighborhood theories which
are already established. Suppose $X$ is a compact subset of a topological
manifold $Q$. If $X$ is arbitrary there is little that can be said, except
that most embeddings of $X$ into $Q$ (most $\equiv$ a dense $G_{\delta}$
subset of all embeddings) are \emph{locally tame}, defined to mean $Q-X$ is
$k$-LC at $X$ for all $0\leq k\leq\dim Q-\dim X-2$, where $\dim X$ is the
covering dimension. Interestingly, in the trivial range $2\dim X+2\leq\dim
Q\neq4$, homotopy implies ambient isotopy for such locally tame embeddings
\cite{Bry}. Below this range there is no hope of classifying neighborhoods as
there may be uncountably many distinct neighborhood germs, even for $X$ a
locally tamely embedded ANR.
If $X$ is shape dominated by a finite complex, there is a nice theory of open
regular neighborhoods worked out by Siebenmann \cite{Si3}. Briefly, an open
regular neighborhood of $X$ in $Q$ is an open neighborhood $U$ which satisfies
a certain compression property: given any compact subset $K$ of $U$ and any
neighborhood $W$ of $X$, there is a homeomorphism $h$ of $U$ having compact
support and fixing a neighborhood of $X$, such that $h(K)\subset W$. Such
neighborhoods have the homotopy type of $X$ and are unique. They exist if and
only if $X$ is shape dominated by a finite complex, the \textquotedblleft
if\textquotedblright\ part assuming $\dim X\leq\dim Q-3$ and $X\hookrightarrow
Q$ locally tame. Furthermore $X$ has an open radial neighborhood if and only
if $X$ actually has the shape of a finite complex $(U$ is \emph{radial} if
$U-X\approx Y\times\mathbb{R}^{1}$ for some compactum $Y$). The difference
between these situations is precisely measured by an obstruction in
$\widetilde{K}_{0}(\pi_{1}(U-X))$ that takes arbitrary values.
\indent Johnson has recently observed these facts for $X$ a topological
manifold \cite{Jo}.
\indent If $X^{m}$ is a polyhedron embedded in a topological manifold
$Q^{m+q}$, $q\geq3$, Weller has observed that any two closed manifold
neighborhoods of $X$ which are PL regular neighborhoods in some (possibly
unrelated) PL structures, are topologically homeomorphic by Chapman's
topological invariance of simple homotopy type.
\indent This theory of topological regular neighborhoods represents a
sharpened form of the topological regular neighborhood theory of
Rourke-Sanderson \cite{RS4}. Briefly the relation is this: given a fixed
manifold $M$, the Rourke-Sanderson paper classifies germs at $M$ of all
manifold pairs $(Q,M)$, where $M$ is embedded in $Q$ as a locally flat
submanifold; two such pairs $(Q_{0},M)$ and $(Q_{1},M)$ have equivalent germs
if there are neighborhoods $U_{i}$ of $M$ in $Q_{i}$, $i=0,1$, such that
$(U_{0},M)\approx(U_{1},M)$ keeping $M$ fixed. This paper shows that each germ
class $[(Q,M)]$ contains as a representative a unique topological regular
neighborhood $(V,M)$. This paper recovers all the results of \cite{RS4}. We
recall them as they arise.
\indent A word on cell-like maps. They clearly play a central role in this
paper, so it is worth repeating some history from \cite{Si1} (whose complete
introduction is well worth reading). In 1967, D. Sullivan observed that the
geometrical formalism used by S. P. Novikov to prove that a homeomorphism
$h:M\rightarrow N$ of manifolds preserves rational Pontrjagin classes, uses
only the fact that $h$ is proper, and a \emph{hereditary homotopy equivalence}
in the sense that for each open $V\subset N$ the restriction $h^{-1}
V\rightarrow V$ is a homotopy equivalence. Lacher \cite{La} was able to
identify such proper equivalences as precisely CE maps, providing one
restricts attention to ENR's (= euclidean neighborhood retracts = retracts of
open subsets of euclidean space).
\indent This paper can be regarded as an extension of Siebenmann's \cite{Si1}
in the following sense: he establishes that a cell-like surjection of
$n$-manifolds is a limit of homeomorphisms. This paper establishes that a
cone-like retraction $r:V\rightarrow M$ of manifolds is locally the limit of
disc bundle projections. For this reason our proofs in \S 5 bear strong
resemblance to Siebenmann's proofs.
\section{Homotopy properties of TRN's}
The purpose of this section is to prove Proposition 3.1. below, which
establishes certain basic homotopy properties of TRN's. The essential result,
without refinements, is that the difference $V_{1}-\mathring{V}_{0}$ between
two TRN's of the same manifold $M\subset V_{0}\subset\mathring{V}_{1}\subset
V_{1}$ is a proper $h$-cobordism.
\indent For simplicity, we will always assume $\partial M=\varnothing=\delta
V$ in this section, with the understanding that the $\partial M\neq
\varnothing\neq\delta V$ versions of all results also hold.
\indent When reading the following Proposition, it is worth keeping in mind
that parts (1) and (2) are trivial for mapping cylinder TRN's.\footnote{This
section, as well as perhaps pages \pageref{cell-like}-\pageref{cone-like},
could have benefitted from an overhaul for clarity. I wish to emphasize that
in Proposition \ref{Prop (homotopy proposition)} what we really want is a
property (0)\label{condition(0)} from which (1) and (2) follow.
\par
(0) $\left( V-M,\dot{V}\right) $ \emph{is proper homotopy equivalent to
}$\left( \dot{V}\times\lbrack0,1),\dot{V}\times0\right) $\emph{, by an
}$\varepsilon$\emph{-controlled proper homotopy equivalence.
}
\par
\noindent This property (0) is what \textquotedblleft
cone-like\textquotedblright\ is all about.}\label{footnote cross reference}
\begin{proposition}
[Homotopy Proposition]\label{Prop (homotopy proposition)}Suppose $(V,M,r)$ is
a topological regular neighborhood (either definition). Then
\begin{enumerate}
\item $M$ is a strong deformation retract of $V$. In fact, the following type
of partial deformations exist: Given any majorant map $\epsilon:M\rightarrow
(0,\infty)$ and any neighborhood $U$ of $M$ in $V$, there is a neighborhood
$W$ of $M$, $W\subset U$, and a deformation $f_{t}:V\rightarrow V$,
$t\in\lbrack0,1]$, such that $f_{0}=\operatorname{id}_{V}$, $f_{1}(V)\subset
U$ and for each $t$, $f_{t}|_{W}=\operatorname{id}_{W}$ and $f_{t}(V-W)\subset
V-W$, (i.e., $W$ is `undisturbed' by the homotopy), and $rf_{t}$ is $\epsilon
$-close to $r$.
\item $\dot{V}$ is a strong deformation retract of $V-M$. In fact, given
$\epsilon:M\rightarrow(0,\infty)$, there is a deformation $g_{t}
:V-M\rightarrow V-M\;(rel\;\dot{V})$, joining $g_{0}=\operatorname{id}_{V-M}$
to a retraction $g_{1}:V-M\rightarrow\dot{V}$, such that for each $t,rg_{t}$
is $\epsilon$-close to $r$.
\item If $(V_{0},M,r_{0})$ is a TRN such that $V_{0}\subset\mathring{V}$ is a
closed neighborhood of $M$ in $V$, then the difference $(V-\mathring{V}
_{0};\dot{V}_{0},\dot{V})$ is a proper $h$-cobordism.
\end{enumerate}
\end{proposition}
\indent Part (3) is a straightforward consequence of parts (1) and (2). The
remainder of this section is concerned with proving parts (1) and (2) for
cone-like TRN's.
\indent Before proceeding to the proof, we make some brief asides. The first
is to point out that in the definition of cone-like TRN, if one only assumes
that $r:V\rightarrow M$ is CE instead of cone-like, then the fibers $F_{x}$ and
their boundaries $\dot{F}_{x}$ all have the cohomology properties one would
expect. Namely, by duality, $\check{H}^{\ast}(\dot{F}_{x})\approx H^{\ast
}(S^{q-1})$ and $\check{H}^{\ast}(F_{x}-x,\dot{F}_{x})=0$ (here $\check
{H}^{\ast}$ denotes \v{C}ech cohomology). See details below. Also in
codimension $\geq3$, $F_{x}-x$ is $1$-UV. However, as Example 2 shows,
$\dot{F}_{x}$ may not have the shape of $S^{q-1}$.
\indent If one is only interested in establishing the non-proper, codimension
$\geq3$ case of part (3) above, there is an especially simple proof, called to
my attention by Alexis Marin.
\begin{proposition}
[\textbf{Illustrative Proposition}]Suppose $(V_{i}^{m+q},M^{m},r_{i})$,
$i=0,1$, are \textbf{cell-like} TRN's of $M$, with $V_{0}\subset V_{1}$ and
$q\geq3$, such that all fiber boundaries $\{\dot{F}_{x,i}=r_{i}^{-1}
(x)\cap\dot{V}_{i}\mid x\in M$, $i=0,1\}$ are $1$-UV. Then the inclusion
$\dot{V}_{0}\hookrightarrow V_{1}-M$ is a homotopy equivalence. Hence, if
$V_{0}\subset\mathring{V}_{1}$, the difference $(V_{1}-\mathring{V}_{0}
;\dot{V}_{0},\dot{V}_{1})$ is an $h$-cobordism (using the additional parallel
facts that $\dot{V}_{i}\hookrightarrow V_{i}-M$ are homotopy equivalences,
$i=0,1$).
\end{proposition}
\begin{proof}
The cell-like retraction $r_{i}:V_{i}\rightarrow M$ is a homotopy equivalence
by the theorem of Lacher.
The maps $V_{i}-M\hookrightarrow V_{i}$ and $\dot{V}_{i}\overset{\alpha
}{\hookrightarrow}V_{i}\overset{r}{\longrightarrow}M$ induce $\pi_{1}
$-isomorphisms, the first by general position and the others because $r$ and
$r\alpha$ are $1$-UV surjections \cite[p.505]{La2}. Hence all universal covers
are compatible, and we have covering TRN's.
It suffices to show $\widetilde{\dot{V}}\hookrightarrow\widetilde{V}
_{1}-\widetilde{M}$ induces homology isomorphisms, for then the theorems of
Hurewicz and Whitehead apply. The topmost square below represents
Lefschetz/Alexander duality and its naturality, and the remaining squares are
the homology sequence of a pair. (\textbf{Note. }For simplicity, the $\sim$'s
are omitted from the diagram.)
\[
\begin{CD}
H_{v}^{m+q-\ast}(V_{0}) @<{\approx}<\text{inclusion}^{\ast}< H_{c}^{m+q-\ast}(M)\\
@V\text{(Lefschetz Duality)} V{\approx}V @V{\approx}V\text{(Alexander Duality)}V\\
H_{\ast}(V_0,\dot{V_{0}}) @>{\approx}>> H_{\ast}(V_1,V_{1}-M)\\
@V{\partial}VV @V{\partial}VV\\
H_{\ast-1}(\dot{V_{0}}) @>{\approx}>\text{(Five Lemma)}> H_{\ast-1}(V_{1}-M)\\
@VVV @VVV\\
H_{\ast-1}(V_{0}) @>{\approx}>\text{inclusion}_{\ast}> H_{\ast-1}(V_{1})
\end{CD}
\]
\end{proof}
\indent Unfortunately the above proof has no straightforward generalization to
the proper category and to codimension 2, and it provides no information about
the tracks of the homotopies. For this reason we adopt the following approach,
which is, in a sense, more elementary because it uses no algebra and duality,
but unfortunately is more elaborate, using elementary shape theory.
\indent The following discussion uses the notion of \emph{resolution} of a TRN
$r:V\rightarrow M$, which provides a way of compactifying deleted fibers
$\{F_{x}-x\}$ by inserting a $(q-1)$-sphere in place of $x$. The definition is
local in character. Suppose $r:V^{m+q}\rightarrow\mathbb{R}^{m}$ is a TRN of
$\mathbb{R}^{m}$ $(\mathbb{R}_{+}^{m}$ in the with-boundary case). Let $U$
$\approx\mathbb{R}^{m}\times2B^{q}$ be a neighborhood of $\mathbb{R}
^{m}=\mathbb{R}^{m}\times0$ in $\mathring{V}$ such that $U$ is closed and
collared in $V$. Let $\lambda:2B^{q}\rightarrow2B^{q}$ be the map with
$\lambda(B^{q})=0$ and $\lambda|_{\partial2B^{q}}=\operatorname{id}$,
extended linearly on radial lines joining $\partial B^{q}$ to
$\partial2B^{q}$ and define $p:V\rightarrow V$ by letting $p|_{U}
=\operatorname{id}_{\mathbb{R}^{m}}\times\lambda$ and $p|_{V-U}=$ identity.
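Explicitly, for concreteness, one may take
\[
\lambda(v)=\left\{
\begin{array}
[c]{ll}
0, & |v|\leq1\\
(2|v|-2)\,v/|v|\,, & 1\leq|v|\leq2
\end{array}
\right.
\]
so that $\lambda$ collapses $B^{q}$ to the origin, fixes $\partial2B^{q}$
pointwise, and is linear on each radial segment joining $\partial B^{q}$ to
$\partial2B^{q}$.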
Define $r^{\prime}=rp:V\rightarrow\mathbb{R}^{m}$. Let $F_{x}^{\prime}$
denote
\[
(r^{\prime})^{-1}(x)=p^{-1}(F_{x}-x)\cup(x\times B^{q})
\]
with distinguished subsets $\dot{F}_{x}^{\prime}=\dot{F}_{x}$, $(D_{x},\dot
{D}_{x})=x\times(B^{q},\partial B^{q})$ and $A_{x}=F_{x}^{\prime
}-\operatorname*{int}D_{x}$.
It is another straightforward exercise to deduce parts (1) and (2) of
Proposition 3.1 above from part (2) of the following Proposition, by applying
it to successive coordinate charts of $M$ to manufacture the desired deformations.
\begin{proposition}
Suppose $r:V^{m+q}\rightarrow\mathbb{R}^{m}$ is a cone-like TRN, resolved to
$r^{\prime}:V\rightarrow\mathbb{R}^{m}$ as above. Then
\begin{enumerate}
\item for each $x\in\mathbb{R}^{m}$, the inclusions $\dot{F}_{x}
\hookrightarrow A_{x}$ and $\dot{D}_{x}\hookrightarrow A_{x}$ are shape
equivalences, and
\item the inclusions $\dot{V}\hookrightarrow V-\mathring{U}$ and $\dot
{U}\hookrightarrow V-\mathring{U}$ are proper homotopy equivalences. In fact,
given any majorant map $\epsilon:\mathbb{R}^{m}\rightarrow(0,\infty)$, there
exist deformation retractions $f_{t}:V-\mathring{U}\rightarrow V-\mathring
{U}\;(rel\;\dot{U})$ of $V-\mathring{U}$ into $\dot{U}$, and $g_{t}
:V-\mathring{U}\rightarrow V-\mathring{U}\;(rel\;\dot{V})$ of $V-\mathring{U}$
into $\dot{V}$, such that both $r^{\prime}f_{t}$ and $r^{\prime}g_{t}$ are
$\epsilon$-close to $r^{\prime}$.
\end{enumerate}
\end{proposition}
\textbf{Note.} The proof shows that the Proposition holds under the \emph{a
priori} weaker hypothesis that $r$ be CS and each inclusion $\dot{F}
_{x}\hookrightarrow F_{x}-x$ be a shape equivalence. It also holds if $q\geq
3$, $r$ is CE, and each $\dot{F}_{x}$ is $1$-UV.
\begin{proof}
[Proof of Proposition]Part (1). By the hypotheses and elementary shape theory,
$\dot{V}$ is a strong deformation retract of the noncompact $V-U$ in the
$\epsilon$-controlled manner suggested by part (2) (see below). For each $x$,
this provides a shape map from $A_{x}$ to $\dot{F}_{x}$: just push $A_{x}$
into $V-U$, and homotope it out to $\dot{V}$, as close as desired to $\dot
{F}_{x}$. This is a shape equivalence, the inverse of $\dot{F}_{x}
\hookrightarrow A_{x}$.
Assuming $r$ is cone-like, that is, each $(A_{x}-\dot{D}_{x},\dot{F}_{x})$ is
proper shape equivalent to $(\dot{F}_{x}\times\lbrack0,1),\dot{F}_{x}\times
0)$, then in fact $(V-U,\dot{V})$ is proper homotopy equivalent to $(\dot
{V}\times\lbrack0,1),\dot{V}\times0)$ by a well controlled homotopy, and this
can be used to show each $\dot{D}_{x}\hookrightarrow A_{x}$ is a shape
equivalence, as above.
Consider now the weaker hypothesis of the Note. By excision, each inclusion
$\dot{D}_{x}\hookrightarrow A_{x}$ is degree 1 on \v{C}ech cohomology,
and by hypothesis $\dot{F}_{x}$ hence $A_{x}$ has the shape of some sphere,
necessarily $S^{q-1}$. Hence $\dot{D}_{x}\hookrightarrow A_{x}$ is a shape
equivalence. Note that this argument fails when it is not known that $\dot
{F}_{x}$ has the shape of a sphere (cf. Example 4).
Part (2). Assuming $r$ is cone-like, then part (2) is a quick consequence of
the $\epsilon$-controlled proper homotopy equivalence $(V-U,\dot{V})\sim
(\dot{V}\times\lbrack0,1),\dot{V}\times0)$ mentioned in the second paragraph
above, and in fact there is no need to prove part (1). On the other hand, if
using the hypothesis of the Note, then one wants to know part (1)
$\Rightarrow$ part (2). This implication is a corollary of a Whitehead-type
theorem for shape, which we state in the Appendix.
\end{proof}
\section{Cone-like TRN's are mapping cylinder TRN's}
The purpose of this section is to prove the equivalence of the two definitions
given in \S 1. As already noted, a mapping cylinder TRN is clearly a cone-like TRN.
\begin{theorem}
Suppose $(V^{m+q},M^{m},r_{0})$ is a cone-like topological regular
neighborhood. Suppose $m+q\geq6$, or that $m+q=5$ and the conclusion below
already holds for $(\delta V,\partial M,r_{0}|)$. Then there is a mapping
cylinder retraction $r_{1}:V\rightarrow M$, arbitrarily close to $r_{0}$, such
that $r_{1}^{-1}(\partial M)=r_{0}^{-1}(\partial M)=\delta V$ and
$r_{1}|_{\delta V}=r_{0}|_{\delta V}$ if $r_{0}|_{\delta V}$ is already a
mapping cylinder retraction. Hence $(V,M,r_{1})$ is a mapping cylinder TRN. In
addition there is an arbitrarily small homotopy of cone-like retractions
$r_{t}:V\rightarrow M$, $t\in\lbrack0,1]$, joining $r_{0}$ to $r_{1}$, such
that $r_{t}^{-1}(\partial M)=\delta V$ and $r_{t}|_{\delta V}=r_{0}|_{\delta
V}$ if $r_{0}|_{\delta V}$ is already a mapping cylinder retraction.
\end{theorem}
\begin{proof}
This is proved using radial engulfing (PL if desired) to effect a shrinking
argument, just as in Edwards-Glaser \cite{EG}. The homotopy comes for free.
$\underline{\quad\quad\quad}$.
\end{proof}
\section{The Handle Straightening Theorem and Lemma}
The Existence-Uniqueness Theorem is based on the following Handle
Straightening Theorem, which is inspired by Siebenmann's Main Theorem in
\cite{Si1}. In essence, it is gotten by crossing the source manifold in
Siebenmann's theorem with $B^{q}$.
\indent Recall the notation $f:X{\small
{\includegraphics[
height=0.1064in,
width=0.2093in
]
{unto.eps}
}
}Y$ means that domain $f$ is a subset of $X$.
\begin{theorem}
[\textbf{Handle Straightening Theorem}]\label{handle straightening th}Suppose
given a cone-like TRN $(V^{m+q},B^{k}\times\mathbb{R}^{n},r)$, $k+n=m$,
$m+q\geq6$, along with an open embedding $f:B^{k}\times\mathbb{R}^{n}\times
B^{q}{\small
{\includegraphics[
height=0.1064in,
width=0.2093in
]
{unto.eps}
}
}V$ defined near
\[
\vdash\!\dashv\;\equiv B^{k}\times\mathbb{R}^{n}\times0\cup\partial
B^{k}\times\mathbb{R}^{n}\times B^{q}
\]
such that $f(x,0)=x$ for
\[
x\in B^{k}\times\mathbb{R}^{n};\ f(\partial B^{k}\times\mathbb{R}^{n}\times
B^{q})=\delta V\equiv r^{-1}(\partial B^{k}\times\mathbb{R}^{n})
\]
and $rf=\operatorname{projection}$ on $\partial B^{k}\times\mathbb{R}
^{n}\times B^{q}$.
Then there exists a triangle of maps
\begin {diagram}
B^{k}\times\mathbb{R}^{n}\times B^{q} & & & \\
\dTo^F_\approx & \rdTo^R & &\\
& &B^{k}\times\mathbb{R}^{n} &\\
&\ruTo_r & &\\
V^{m+q} & & &\\
\end{diagram}
\[
\text{(not commutative)
}
\]
such that
\begin{enumerate}
\item $R$ is a cone-like TRN retraction to $B^{k}\times\mathbb{R}^{n}
=B^{k}\times\mathbb{R}^{n}\times0$, with $R^{-1}(\partial B^{k}\times
\mathbb{R}^{n})=\partial B^{k}\times\mathbb{R}^{n}\times B^{q}$,
\item $F$ is a homeomorphism such that $F=f$ near $\vdash\!\dashv$,
\item $R=rF$ over $B^{k}\times(\mathbb{R}^{n}-4\mathring{B}^{n})\cup\partial
B^{k}\times\mathbb{R}^{n}$, and
\item $R=\operatorname{projection}$ over $B^{k}\times B^{n}\cup\partial
B^{k}\times\mathbb{R}^{n}$.
\end{enumerate}
\end{theorem}
\begin{remark}
If we define $r^{\prime}=RF^{-1}$, then
\begin{enumerate}
\item[a)] $r^{\prime}$ is a $q$-disc fiber bundle projection over $B^{k}\times
B^{n}\cup\partial B^{k}\times\mathbb{R}^{n}$, and
\item[b)] $r^{\prime}=r$ over $B^{k}\times(\mathbb{R}^{n}-4\mathring{B}
^{n})\cup\partial B^{k}\times\mathbb{R}^{n}$.
\end{enumerate}
\end{remark}
\noindent\textbf{Note. }There is a cone-like homotopy joining $r$ to
$r^{\prime}$, but its existence is not immediate from the proof below. The
discussion of such homotopies is deferred until \S $\underline{\quad\quad}$.
\indent The Theorem above is deduced from the following Lemma using the
inversion device introduced in \cite{Si1}.
\begin{lemma}
[\textbf{Handle Straightening Lemma}]The same data is given, and the same
conclusion is drawn, except that (3) and (4) are replaced by
\begin{enumerate}
\item[(3$^{\prime}$)] $R=rF$ over $B^{k}\times B^{n}\cup\partial B^{k}
\times\mathbb{R}^{n}$
\item[(4$^{\prime}$)] $R=$ standard projection over $B^{k}\times
(\mathbb{R}^{n}-4\mathring{B}^{n})\cup\partial B^{k}\times\mathbb{R}^{n}$.
\end{enumerate}
\end{lemma}
\begin{proof}
[Proof that Lemma implies Theorem]In this proof, the Handle Lemma is applied
twice, the first time only to compactify $V$.
Let $S^{n}=\mathbb{R}^{n}\cup\infty$. The $F$ and $R$ given by the Handle
Lemma provide, via compactification (see Example \ref{Ex: capping off}), the
$F_{\infty}$ and $R_{\infty}$ in the triangle
\begin {diagram}
B^{k}\times S^{n}\times B^{q} & & & \\
\dTo^{F_{\infty}}_\approx &\rdTo^{R_{\infty}} & & \\
& &B^{k}\times\mathbb{R}^{n} & \\
&\ruTo_{r_{\infty}} & &\\
V_{\infty} & & &\\
\end{diagram}
\[
\text{(not commutative)
}
\]
(The replacement $A\rightsquigarrow A_{\infty}$ for $A=$ any of: $V,F,R,$ or
$\vdash\!\dashv$, suggests compactification, while $A^{\#}$ below suggests the
analogue of $A$ in the inverted context.) Restrict $F_{\infty}$ to a
neighborhood of
\[
\vdash\!\dashv^{\#}\equiv B^{k}\times(S^{n}-0)\times0\cup\partial B^{k}
\times(S^{n}-0)\times B^{q}
\]
in
\[
B^{k}\times(S^{n}-0)\times B^{q}
\]
to get
\[
f^{\#}:B^{k}\times(S^{n}-0)\times B^{q}{\small
{\includegraphics[
height=0.1064in,
width=0.2093in
]
{unto.eps}
}
}V^{\#}\equiv V_{\infty}-r^{-1}(B^{k}\times0).
\]
The Handle Lemma can be applied to TRN's of $B^{k}\times(S^{n}-0)$ by
imagining $S^{n}-0$ identified with $\mathbb{R}^{n}$ by the natural inversion
homeomorphism
\[
\theta:\mathbb{R}^{n}\cup\infty\rightarrow\mathbb{R}^{n}\cup\infty
\]
given by
\[
\theta(y)=y/|y|^{2}\quad\mbox{for}\quad y\neq0,\infty\quad\mbox{and}\quad
\theta(0)=\infty\quad\mbox{and}\quad\theta(\infty)=0.
\]
In such inverted applications, the original subsets $B^{k}\times rB^{n}$ and
$B^{k}\times(\mathbb{R}^{n}-r\mathring{B}^{n})$ of $B^{k}\times\mathbb{R}^{n}$
are replaced by $B^{k}\times(S^{n}-(1/r)\mathring{B}^{n})$ and $B^{k}
\times((1/r)B^{n}-0)$ of $B^{k}\times(S^{n}-0)$. (Note: Under this
interpretation of inversion, the homeomorphism $\theta$ does \textbf{not}
explicitly appear anywhere in the following proof).
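\indent For instance, since $|\theta(y)|=1/|y|$ for $y\neq0,\infty$, one
checks directly that
\[
\theta(rB^{n})=S^{n}-(1/r)\mathring{B}^{n}\quad\text{and}\quad\theta
(\mathbb{R}^{n}-r\mathring{B}^{n})=(1/r)B^{n}-0,
\]
with $\theta(0)=\infty$ accounting for the point at infinity; this is
precisely the replacement rule just stated.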
Apply the Handle Lemma to the TRN $r^{\#}\equiv r_{\infty}|:V^{\#}\rightarrow
B^{k}\times(S^{n}-0)$ to get maps $F^{\#}$ and $R^{\#}$ in the triangle
\begin{diagram}
V_{\infty}-r^{-1}(B^{k}\times0)\equiv V^{\#} & & \\
&\rdTo^{r^{\#}\equiv r_{\infty}|V^{\#}} &\\
\uTo^{F^{\#}}_\approx & & B^{k}\times(S^{n}-0)\\
&\ruTo_{R^{\#}} \\
B^{k}\times(S^{n}-0)\times B^{q} & \\
\end{diagram}
\[
\text{(not commutative)
}
\]
Thus:
\begin{enumerate}
\item[(1)] $R^{\#}$ is a cone-like TRN retraction to $B^{k}\times(S^{n}-0)$ with
$(R^{\#})^{-1}(\partial B^{k}\times(S^{n}-0))=\partial B^{k}\times
(S^{n}-0)\times B^{q}$,
\item[(2)] $F^{\#}$ is a homeomorphism such that $F^{\#}=f^{\#}$ near $\vdash
\!\dashv^{\#}$,
\item[(3)] $R^{\#}=r^{\#}F^{\#}$ over $B^{k}\times(S^{n}-\mathring{B}^{n}
)\cup\partial B^{k}\times(S^{n}-0)$, and
\item[(4)] $R^{\#}=$ standard projection over $B^{k}\times((1/4)B^{n}-0)\cup\partial
B^{k}\times(S^{n}-0)$.
\end{enumerate}
Extend $r^{\#}$ and $R^{\#}$ using $r_{\infty}$ and $R_{\infty}$ to get
\begin{diagram}
V^{\#} & \rInto & V_{\infty} & & \\
& & &\rdTo^{r_{\infty}^{\#}} &\\
\uTo^{F^{\#}} & & \includegraphics{upunto-F.eps} & &B^{k}\times S^{n}\\
& & & \ruTo_{R_{\infty}^{\#}} &\\
B^{k}\times(S^{n}-0)\times B^{q} &\rInto & B^{k}\times S^{n}\times B^{q}& &\\
\end{diagram}
\[
\text{(not commutative)
}
\]
We must extend $F^{\#}$ to a homeomorphism $F_{\infty}^{\#}:B^{k}\times
S^{n}\times B^{q}\rightarrow V_{\infty}$. First restrict $F^{\#}$ to
$B^{k}\times(S^{n}-(1/10)\mathring{B}^{n})\times B^{q}$ and then extend over
\[
D\equiv(B^{k}-(1-\epsilon)\mathring{B}^{k})\times(1/10)B^{n}\times B^{q}\cup
B^{k}\times(1/10)B^{n}\times\epsilon B^{q}
\]
via $f$ (for some small $\epsilon>0)$ to get an embedding
\[
G^{\#}:B^{k}\times S^{n}\times B^{q}-(1-\epsilon)\mathring{B}^{k}
\times(1/10)\mathring{B}^{n}\times(B^{q}-\epsilon\mathring{B}^{q})\rightarrow
V_{\infty}.
\]
Now the difference $\operatorname*{cl}(V_{\infty}-\operatorname*{image}
(G^{\#}))$ is a compact $s$-cobordism between manifolds-with-boundary
\[
G^{\#}((1-\epsilon)B^{k}\times(1/10)B^{n}\times\epsilon\mathring{B}^{q})
\]
and $\operatorname*{cl}(\partial V_{\infty}^{\#}-\operatorname*{image}G^{\#}
)$, with product boundary cobordism
\[
G^{\#}(\partial\lbrack(1-\epsilon)B^{k}\times(1/10)B^{n}]\times(B^{q}
-\epsilon\mathring{B}^{q})).
\]
Hence this difference is a product, so $G^{\#}$ extends to $F_{\infty}^{\#}$
as desired.
Finally, taking restrictions to the original sets $V$, $B^{k}\times
\mathbb{R}^{n}\times B^{q}$ and $B^{k}\times\mathbb{R}^{n}$ yields the triangle
\begin{diagram}
V_{\infty} & \lInto & V & & &\\
&&&\rdTo^r&\\
\uTo^{F_{\infty}^{\#}} & & \uTo^{F_{\infty}^{\#}|\equiv F_{1}}_\approx & & B^{k}\times\mathbb{R}^{n}\\
&&&\ruTo_{R_{1}\equiv R_{\infty}^{\#}|} &\\
B^{k}\times S^{n}\times B^{q} &\lInto & B^{k}\times\mathbb{R}^{n}\times
B^{q} & & \\
\end{diagram}
\[
\text{(not commutative)
}
\]
The maps $F_{1}$ and $R_{1}$ satisfy properties (1) and (2) of the Handle
Theorem, along with
\begin{enumerate}
\item[(3$^{\prime\prime}$)] $R_{1}=rF_{1}$ over $B^{k}\times(\mathbb{R}^{n}
-\mathring{B}^{n})\cup\partial B^{k}\times\mathbb{R}^{n}$, and
\item[(4$^{\prime\prime}$)] $R_{1}=$ standard projection over $B^{k}\times
(1/4)B^{n}\cup\partial B^{k}\times\mathbb{R}^{n}$.
\end{enumerate}
These are clearly equivalent to (3) and (4) of the Handle Theorem, completing
the proof that the Handle Straightening Lemma implies the Handle Straightening Theorem.
\end{proof}
\begin{proof}
[Proof of Handle Straightening Lemma]The proof is based on a diagram which
derives from the classic diagram of Kirby-Siebenmann; its immediate
predecessor is the diagram in \cite{Si1}.
To make certain constructions precise, we make two preliminary modifications
in the given data. First, by compression toward $\vdash\!\dashv$ in
$B^{k}\times\mathbb{R}^{n}\times B^{q}$, we arrange that $f$ is defined on a
neighborhood of $(B^{k}-(1/2)\mathring{B}^{k})\times\mathbb{R}^{n}\times
B^{q}\cup B^{k}\times\mathbb{R}^{n}\times(1/2)B^{q}$ in $B^{k}\times
\mathbb{R}^{n}\times B^{q}$. Second, by redefining $r$ over $B^{k}
\times4\mathring{B}^{n}$ by conjugation, we arrange that $rf$ is standard
projection over $(B^{k}-(1/2)\mathring{B}^{k})\times3B^{n}\cup\partial
B^{k}\times\mathbb{R}^{n}$. Clearly there is no loss in proving the Lemma for
these modified $r$ and $f$.
The diagram is constructed essentially from the bottom up. All the right hand
triangles commute, as do all the squares but two: the one below $h$ and the
one containing $F$. The details of the construction follow.
\begin{diagram}
\phantom{.} &\lTo &B^{k}\times\mathbb{R}^{n}\times B^{q} & & & \rTo^{R \phantom{long gap}} &
B^{k}\times\mathbb{R}^{n} & &&\\
\dTo & &\uTo_{j\times\operatorname{id}_{B^{q}}}& & & &\uTo_j & &\luInto(3,5) &\\
& &B^{k}\times\mathbb{R}^{n}\times B^{q} & & &\rTo^{S \phantom{long gap}} &
B^{k}\times\mathbb{R}^{n} & & &\\
& &\dTo_{e\times\operatorname{id}_{B^{q}}} & & & &\dTo_e & &\luInto(2,3)&\\
& &B^{k}\times T^{n}\times B^{q} & & &\rTo^{s \phantom{long gap}} &
B^{k}\times T^{n} & & &\\
& &\uInto & & & &\uInto& \luInto(3,1) & & &B^{k}\times2B^{n}\\
& &B^{k}\times T^{n}\times B^{q} &\rTo_\approx^h &W_{1} & \rTo^{r_{1}}
& B^{k}\times T^{n} & & \ldInto(2,1)\ldInto(2,3) &\ldInto(3,5)\\
&&\uInto & &\uInto & &\uInto & & &\\
& & B^{k}\times T_{0}^{n}\times B^{q} & \includegraphics{unto-f0.eps} &W_{0} & \rTo^{r_{0}}
& B^{k}\times T_{0}^{n} & & &\\
& &\dTo_{\alpha\times\operatorname{id}_{B^{q}}} & & \dTo_{\alpha_{0}} &
& \dTo^{\alpha=}_{\operatorname{id}\times \alpha '} & & &\\
& &B^{k}\times\mathbb{R}^{n}\times B^{q} & \includegraphics{unto-f.eps} & V^{m+q} & \rTo^r &
B^{k}\times\mathbb{R}^{n} & &&\\
&&&&\uTo&&&&\\
&\rTo^{\phantom{really long gap} F}&&&\phantom{.}&&&&\\
\end{diagram}
\centerline{\bf Main Diagram}
\noindent\textbf{Note. }Details in the remainder of the proof are not yet
completely filled in.\newline\newline\textbf{[About }$e$ \textbf{and}
$p$\textbf{.]} Regard $T^{n}$ as the quotient $\mathbb{R}^{n}/(8\mathbb{Z}
)^{n}$ of $\mathbb{R}^{n}$, where $\mathbb{Z}$ denotes the integers, and let
$e^{\prime}:\mathbb{R}^{n}\rightarrow T^{n}$ be the corresponding quotient
map. Define $e=\operatorname{id}_{B^{k}}\times e^{\prime}$. Abusively we
regard $B^{k}\times rB^{n}\subset B^{k}\times T^{n}$ for $r<4$. Choose $p\in
T^{n}-2B^{n}$ and let $T_{0}^{n}=T^{n}-p$.\newline\newline\textbf{[About
}$\alpha:B^{k}\times T_{0}^{n}\rightarrow B^{k}\times3\mathring{B}^{n}
$\textbf{.]} Let $\alpha^{\prime}:T_{0}^{n}\rightarrow3\mathring{B}^{n}$ be an
immersion such that $\alpha^{\prime}|_{2B^{n}}=\operatorname{id}$. Define
$\alpha=\operatorname{id}_{B^{k}}\times\alpha^{\prime}$. This makes the four
triangles commute.\newline\newline\textbf{[About }$j:B^{k}\times\mathbb{R}
^{n}\rightarrow B^{k}\times\mathbb{R}^{n}$\textbf{.]} It is the non-surjective
embedding obtained by restriction of the homeomorphism $J:\mathbb{R}
^{m}\rightarrow4\mathring{B}^{m}=4\mathring{B}^{k}\times4\mathring{B}^{n}$
which fixes precisely $2\mathring{B}^{m}$ and on each ray from the origin is
linearly conjugate to the homeomorphism $\gamma:[0,\infty)\rightarrow
\lbrack0,-)$ defined by $\gamma|_{[0,-]}=\operatorname{id}$ and $\gamma
(x)=\underline{\quad\quad\quad}$. \newline\newline\textbf{[About }$W_{0}$,
$r_{0}$, $\alpha_{0}$ and $f_{0}$\textbf{.]} These are defined via pullback.
Thus
\[
W_{0}=\{(x,y)\in V\times B^{k}\times(T^{n}-p)\mid r(x)=\alpha(y)\}
\]
and $\alpha_{0}(x,y)=x$ and $r_{0}(x,y)=y$ and $f_{0}\equiv(f|,rf|):B^{k}
\times(T^{n}-p)\times0\cup\underline{\quad\quad\quad}\rightarrow W_{0}$. We
have that $\alpha_{0}$ is an immersion, $W_{0}$ is a manifold and $r_{0}$ is a
cone-like retraction to $f_{0}(B^{k}\times T_{0}^{n}\times0)$, by the obvious
generalization of \cite[Lemma 2.3]{Si1}. Also, $f_{0}$ is naturally an open
embedding of some neighborhood of $B^{k}\times T_{0}^{n}\times0\cup
(B^{k}-(1/2)\mathring{B}^{k})\times T_{0}^{n}\times B^{q}$, and $r_{0}f_{0}$
is standard projection on this set. \newline\newline\textbf{[Construction of
}$W_{1}$, $r_{1}$ and $h$\textbf{.]} The open embedding
\[
f_{0}|:(B^{k}-(1/2)B^{k})\times T_{0}^{n}\times B^{q}\rightarrow W_{0}
\]
defines by attachment a manifold
\[
W_{1}^{\prime}\equiv(B^{k}-(1/2)B^{k})\times T^{n}\times B^{q}\cup_{f_{0}
|}W_{0}
\]
and an open embedding
\[
f_{1}^{\prime}:\operatorname*{domain}f_{0}\cup(B^{k}-(1/2)B^{k})\times
T^{n}\times B^{q}\rightarrow W_{1}^{\prime}.
\]
Now use the infinite $s$-cobordism theorem and capping off (Example
\ref{Ex: capping off}) to get $W_{1},r_{1}$ and $f_{1}$, and then get $h$
by the compact $s$-cobordism theorem [Details to be filled out here].\newline
\newline\textbf{[Construction of }$s$\textbf{.]} The preceding step produced a
cone-like retraction
\[
r_{1}h:(B^{k}\times T^{n}-(1/2)B^{k}\times p)\times B^{q}\rightarrow
B^{k}\times T^{n}-(1/2)B^{k}\times p
\]
which is standard projection near $\partial B^{k}\times T^{n}\times B^{q}$.
Let $s$ be the natural compactification of
\[
\omega(r_{1}h)(\omega^{-1}\times\operatorname{id}_{B^{q}}):(B^{k}\times
T^{n}-(0,p))\times B^{q}\rightarrow B^{k}\times T^{n}-(0,p)
\]
where
\[
\omega:B^{k}\times T^{n}-(1/2)B^{k}\times p\rightarrow B^{k}\times
T^{n}-(0,p)
\]
is a homeomorphism which is fixed near \underline{$\quad\quad\quad$}
.\newline\newline\textbf{[Construction of }$S$ and $R$\textbf{.]} $S$ is the
unique covering cone-like retraction. $R$ is defined by $jS(j\times
\operatorname{id})^{-1}$ on $j(B^{k}\times\mathbb{R}^{n})\times B^{q}$, and is
extended via the identity over all of $B^{k}\times\mathbb{R}^{n}\times B^{q}$.
It is the crux of the torus device that $R$ is continuous. \newline
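\noindent\textbf{Note. }The intended continuity estimate is presumably the
standard one: since $S$ covers $s$, it commutes with the deck translations
$y\mapsto y+8v$, $v\in\mathbb{Z}^{n}$, so its displacement from standard
projection,
\[
\sup\{\,d(S(x,y,z),(x,y))\mid(x,y,z)\in B^{k}\times\mathbb{R}^{n}\times
B^{q}\,\},
\]
is a supremum over the compact quotient $B^{k}\times T^{n}\times B^{q}$ and
hence finite; conjugating by the compression $j$ then shrinks this bounded
displacement to $0$ at the frontier of $\operatorname{image}(j)$.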
\newline\textbf{[Construction of }$F$\textbf{.]} The left hand side of the
diagram from top to bottom defines an open embedding
\[
\phi:B^{k}\times2\mathring{B}^{n}\times B^{q}\rightarrow r^{-1}(B^{k}
\times2\mathring{B}^{n})\subset V
\]
such that $r\phi=R|_{B^{k}\times2\mathring{B}^{n}\times B^{q}}$. Extend $\phi$ over
\underline{$\quad\quad\quad$} by $f$ and then over all of $B^{k}
\times\mathbb{R}^{n}\times B^{q}$ by engulfing, to get $F$. All the necessary
homotopies for engulfing follow from the Homotopy Proposition (Prop.
\ref{Prop (homotopy proposition)}); recall the engulfing may be PL if desired,
as $\operatorname*{int}V$ is PL triangulable.
\end{proof}
\section{Proof of Existence-Uniqueness Theorem}
This section proves Theorem \ref{existence-uniqueness thm}, without the
cone-like homotopy, but with the well-controlled ambient isotopy.
\begin{proof}
[Sketch of Proof]Existence follows from a good uniqueness theorem;
\textquotedblleft good\textquotedblright\ means we want a relative C-D
statement as in \cite[p.71]{EK}. This good uniqueness theorem follows in
straightforward fashion from the Handle Straightening Theorem
\ref{handle straightening th}, much like the situation in \cite{EK}.
\end{proof}
\section{Local contractibility of the space of cone-like retractions;
cone-like homotopies}
This section is independent of the preceding Sections 2--6, and has no
dimension restrictions. This section plays a role in this paper analogous to
the role of the local contractibility of the homeomorphism group of a manifold
(\cite{Ce}, \cite{EK}) in Siebenmann's paper \cite{Si1}.
\indent Let $M$ be a fixed manifold and $V$ a fixed topological regular
neighborhood of $M$, with distinguished submanifold $\delta V\subset\partial
V$ but \emph{without} a specific retraction. Let $C(V,M)$ be the space of all
cone-like retractions $r:V\rightarrow M$ such that $r^{-1}(\partial M)=\delta
V$, topologized with the majorant topology given by majorant maps on $M$. That
is, given a majorant map $\epsilon:M\rightarrow(0,\infty)$, the $\epsilon
$-neighborhood of $r:V\rightarrow M$ is
\[
N(r,\epsilon)=\{p\in C(V,M)\mid d(p(x),r(x))<\epsilon
(r(x))\;\mbox{for all}\;x\in V\}
\]
where $d$ is the metric on $M$. Although $C(V,M)$ is decidedly non-metric if
$M$ is not compact, it turns out that $C(V,M)$ is closed under Cauchy limits
if $d$ is a complete metric (Compare \cite{Si1}); this fact is not so
essential to us as the following facts.
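\indent For example, when $M$ is compact every majorant map attains a
positive minimum,
\[
\inf\{\epsilon(x)\mid x\in M\}>0,
\]
so the majorant topology on $C(V,M)$ coincides with the usual uniform
(sup-metric) topology; the distinction matters only for noncompact $M$.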
\indent Call a cone-like retraction $r:V\rightarrow M$ \emph{locally
approximable by bundle projections} (\emph{locally approximable} for short)
if each $x\in M$ has an open neighborhood $W$ in $M$ such that $r|:r^{-1}
(W)\rightarrow W$ is arbitrarily closely approximable by disc bundle
projections (uniformly, not majorantly). Let $C_{0}(V,M)$ denote
the subset of $C(V,M)$ of all such locally approximable retractions. Of
course, it is a corollary of Section 6 that $C_{0}(V,M)=C(V,M)$ if $\dim
V\geq6$; however, working with $C_{0}(V,M)$ obviates dimension restrictions.
\indent The goal of this section is to show that $C_{0}(V,M)$ is locally
0-connected (defined below) and that a certain cone-like homotopy extension
principle holds, analogous to the isotopy extension principle for
homeomorphisms. Actually $C_{0}(V,M)$ is locally $k$-connected for all $k$ by
a routine adaptation of the Eilenberg-Wilder argument. The torus techniques
for local contractibility fail us in this section so we turn to an adaptation
$\underline{\quad\quad\quad}$
\begin{proposition}
\textbf{(1)} Suppose $r\in C_{0}(V,M)$ and $U\approx\mathbb{R}^{m}$ or
$U\approx\mathbb{R}_{+}^{m}$ is a coordinate chart in $M$. Then $r|_{r^{-1}
(U)}$ is arbitrarily closely approximable by disc bundle projections
(uniformly here).
\noindent\textbf{(2)} $C_{0}(V,M)$ is closed in $C(V,M)$. (As noted above
$C(V,M)$ is closed in $P(V,M)=$ all proper maps $V\rightarrow M$, but we don't
need this).
\end{proposition}
\begin{theorem}
The local 0-connectivity (indeed local $k$-connectivity) of $C_{0}(V,M)$, as
mentioned above.
\end{theorem}
\begin{proof}
This proof is accomplished by a limit argument, by first proving the
\textquotedblleft almost\textquotedblright\ local contractibility of the space
of disc bundle projections. This is completely analogous to my proof by
\textquotedblleft almost handle straightening\textquotedblright, using
\v{C}ernavskii meshing, of the following result.
\end{proof}
\begin{theorem}
\textbf{(A)} Given any PL manifold $M$ and $\epsilon>0$, there exists
$\delta>0$ such that if $h:M\rightarrow M$ is a PL homeomorphism which is
$\delta$-close to the identity, then $h$ may be PL $\epsilon$-isotoped as
close as desired to $\operatorname{id}_{M}$ (but not all the way to
$\operatorname{id}_{M}$, by a counterexample of Kirby-Siebenmann). This
process is canonical PL.\newline\textbf{(B)} (from \textbf{(A)}.) The PL homeomorphism group of $M$
is locally contractible as a topological group (but not as a semisimplicial complex).
\end{theorem}
\section{Topological regular neighborhoods of polyhedra in manifolds}
This section sketches the extension of the previous sections to polyhedra in
manifolds. There are two technical points that have to be sorted out before
saying that the definitions of TRN's routinely extend to polyhedra. The
first concerns the polyhedral analogue of locally flat. The second concerns
what to do at the boundary $\delta V$ for polyhedral pairs.
\indent A faithful PL embedding $f:(X,Y)\rightarrow(Q,\partial Q)$ of a
polyhedral pair into a PL manifold is \emph{locally homotopically unknotted}
if for each $x\in X$, both deleted links
\[
\operatorname*{lk}(f(x),Q)-\operatorname*{lk}(f(x),f(X))
\]
and (if $x\in Y$)
\[
\operatorname*{lk}(f(x),\partial Q)-\operatorname*{lk}(f(x),f(Y))
\]
have \emph{free} $\pi_{1}$ (for each component). For codimension $\geq3$ this
is always true as the $\pi_{1}$'s are trivial by general position. Note that
for $(X,Y)$ a codimension 2 manifold $(M,\partial M)$, this is just the usual
local homotopy unknottedness definition.
\indent A faithful topological embedding $f:(X,Y)\rightarrow(Q,\partial Q)$ of
a polyhedral pair into a topological manifold is \emph{locally tame} if for
each $x\in X$ there is an open neighborhood $(U,\partial U)$ of $f(x)$ in
$(Q,\partial Q)$ such that the embedding $f|:f^{-1}(U,\partial U)\rightarrow
(U,\partial U)$ is PL locally unknotted for some PL manifold structure on
$(U,\partial U)$. Note the PL structure on the source is induced from $X$, but
the PL structures on the $U$'s (for various $x$) need not be compatible. The
unknottedness condition is independent of the PL structure on $U$.
\indent In the definition of TRN's for polyhedra one should replace
\textquotedblleft$(M,\partial M)$ locally flat in $(V,\partial V)$
\textquotedblright\ with \textquotedblleft$(X,Y)$ locally tame in $(V,\partial
V)$\textquotedblright.
\indent There is another change required in case $Y\neq\varnothing$, because
the condition that $r^{-1}(Y)$ be a TRN of $Y$ is too restrictive, as the
following example shows.
\begin{example}
Let $X$ be an interval, $Y$ the midpoint of $X$, and $\left( V,\partial
V\right) =(2$-disc, boundary$)$ as shown. Then $r^{-1}\left( Y\right) $
must be disconnected.
\begin{center}
\includegraphics[
trim=0.000000in 0.000000in -0.056877in 0.000000in,
height=1.5056in,
width=1.5056in
]
{trn-disc.eps}
\end{center}
\end{example}
\indent This same problem cropped up in \cite{E1} and the remedy is the
same---namely, not to require $\delta V$ to be all of $r^{-1}\left( Y\right)
$. Details are trivial.
\indent Having established these two technical points, the definition of
TRN's (either mapping cylinder or cone-like) for polyhedra in manifolds is as indicated.
\begin{theorem}
Existence-Uniqueness holds exactly as in the manifold case (with the same
dimension restrictions on $Q$).
\end{theorem}
Perhaps the quickest proof of this is by analogy:
\[
\frac{\text{This proof}}{\text{Proof of Th. \ref{existence-uniqueness thm}}
}\quad\approx\quad\frac{\text{Siebenmann's \cite{Si2}}}{\text{Edwards-Kirby's
\cite{EK}}}
\]
\newline\indent That is, the extension to the above theorem of the proofs in
\S 5 and \S 6 is completely analogous to the extension to locally cone-like
TOP stratified sets of the local contractibility of the homeomorphism group of
a topological manifold, done by Siebenmann.
\indent The local unknottedness hypothesis ensures that the $s$-cobordism
theorem holds at all applications. Details omitted here.
\section{CW\ complexes}
\begin{remark}
In the following, one may replace `CW complex' with `cell complex'. In
particular, one doesn't need the skeletal filtration present in CW complexes.
\end{remark}
It turns out that CW complexes in manifolds have topological regular
neighborhoods \emph{stably}, that is, $X\subset Q$ has a topological regular
neighborhood in $Q\times\mathbb{R}^{s}$ for some $s\geq0$. Furthermore they
are unique nonstably. The most useful application of these facts seems to be a
proof that a CE map ($\equiv$ proper cell-like surjection) of CW complexes is
a simple homotopy equivalence (first proved for homeomorphisms by Chapman
\cite{Ch}). Our discussion below is toward this goal.
\indent All our CW complexes from now on are \emph{finite} (i.e., compact).
This discussion trivially generalizes to nonfinite CW complexes of finite
dimension, but we postpone details for arbitrary CW complexes.
\indent Either definition of topological regular neighborhood given at the
start of the paper is valid with $M$ replaced by a CW complex $X$, subject to
certain provisos. For the mapping cylinder definition, they are: regard
$\partial X=\varnothing=\delta V$ always; replace \textquotedblleft locally
flat\textquotedblright\ by \textquotedblleft each $\dot{F}_{x}$ is
$1$-UV\textquotedblright, and always assume $\dim X\leq\dim V-3$. For the
second definition, the provisos are the same, except that \textquotedblleft
locally flat\textquotedblright\ is replaced by \textquotedblleft$X$ is $1$-LCC
in $V$\textquotedblright, that is, $V-X$ is $1$-LC at $X$. This implies each
$\dot{F}_{x}$ is $1$-UV, and in the presence of mapping cylinder structure,
the conditions are equivalent.
\begin{remark}
If $M_{f}$ is a mapping cylinder for some proper map $f:A\rightarrow B$, then
$M_{f}\times I^{k}$ (with $I^{k}=[-1,1]^{k}$) has a natural mapping cylinder
structure for the map
\[
(f\times\pi)|:(A\times I^{k}\cup M_{f}\times\partial I^{k})\rightarrow
B\times0=B
\]
where $\pi:I^{k}\rightarrow0$ is projection, as suggested by the diagram. The
new fibers $\{F_{b}\times I^{k}\}$ have $UV^{k-1}$ boundaries
\[
\{(F_{b}\times I^{k})^{\cdot}=\dot{F}_{b}\times I^{k}\cup F_{b}\times\partial
I^{k}\},
\]
regardless of the nature of $F_{b}$, because $(F_{b}\times I^{k})^{\cdot}$ has
the shape of $\Sigma^{k-1}\dot{F}_{b}$.
\end{remark}
\begin{theorem}
\label{s.h.e. theorem for TRNs}If $V$ is any abstract TRN of CW complex $X$,
as defined above, then $X\hookrightarrow V$ is a simple homotopy equivalence.
\end{theorem}
\begin{theorem}
\label{existence-uniqueness thm for CW complexes}Suppose $X\subset Q$ is a CW
complex embedded in a topological manifold, $\partial Q=\varnothing$. Then
\begin{enumerate}
\item (Existence) $X$ has a mapping cylinder TRN in $Q\times\mathbb{R}^{s}$
for some $s\geq0$, and
\item (Uniqueness) If $V_{0}$ and $V_{1}$ are two TRN's of $X$ in $Q$, then
$V_{0}$ is homeomorphic to $V_{1}$ by an ambient isotopy of $Q$ which fixes a
neighborhood of $X$.
\end{enumerate}
\end{theorem}
\begin{corollary}
[to Theorem \ref{s.h.e. theorem for TRNs} and Part 1 of Theorem
\ref{existence-uniqueness thm for CW complexes}]A homeomorphism
$h:X\rightarrow Y$ of CW complexes is a simple homotopy equivalence.
\begin{proof}
[Proof of Corollary]Let $V$ be a TRN of $Y$, by Theorem
\ref{existence-uniqueness thm for CW complexes}. Then Theorem
\ref{s.h.e. theorem for TRNs} says
that both the inclusion $\eta:Y\rightarrow V$ and the embedding $\eta
h:X\rightarrow V$ are simple homotopy equivalences, hence so is $h$.
\end{proof}
\end{corollary}
\begin{proof}
[Proof of Theorem \ref{s.h.e. theorem for TRNs}]This is just an extension to
CW complexes of an argument in \cite{E2}. One inducts on the number of cells
in $X$, and uses TRN uniqueness to accomplish the splitting of $V$ over
$S^{n-1}\times0$ in $S^{n-1}\times(-1,1)=$ open collar neighborhood of
$\infty$ in the last open cell of $X$. Once $V$ is split, one applies
induction and the Sum Theorem.
\end{proof}
\begin{proof}
[Proof of Theorem \ref{existence-uniqueness thm for CW complexes}
]\emph{(Uniqueness)}. Pull $V_{0}$ into $\operatorname*{int}V_{1}$ by
engulfing, and then apply the $s$-cobordism theorem to the difference
$V_{1}-\operatorname{int}V_{0}$. It is an $s$-cobordism because $V_{0}\subset
V_{1}$ is a simple homotopy equivalence, and throwing away
$\operatorname*{int}V_{0}$ with its codimension $\geq3$ spine does not change
this.
\noindent\emph{(Existence)} Interestingly, the proof has nothing to
do with the previous theory; it is just a straightforward inductive exercise.
We first remark that in the following construction, the advantage of always
working in the ambient manifold $Q\times\mathbb{R}^{s}$ (even if
$Q=\mathbb{R}^{q}$), rather than constructing $V$ in the abstract, is that it
automatically provides the correct framing for the normal bundle of the
embedding $g_{\partial}:\partial D^{n}\rightarrow\partial(V\times B^{n})$
(defined below) which is used to attach the handle. If one didn't choose this
framing correctly, some future $g_{\partial}$ might not have a framing. Thus,
working in $Q$ obviates paying attention to bundle trivializations.
Suppose $Y$ is a CW complex with mapping cylinder TRN $r:V\rightarrow Y$,
where $V$ is a collared, $\operatorname*{codim}0$ submanifold of $Q$. Suppose
$X=Y\cup_{f|_{\partial D^{n}}}f(D^{n})\subset Q$ where $f:D^{n}\rightarrow Q$
is such that $f(D^{n})\cap Y=f(\partial D^{n})$ and $f|_{\operatorname*{int}
D^{n}}$ is an embedding. Define $g_{\partial}:\partial D^{n}\rightarrow
\operatorname{int}V\times\partial B^{n}\subset\partial(V\times B^{n})$ by
$g_{\partial}(x)=(f(x),x)$ (recall $D^{n}=B^{n}$); it is a locally flat
embedding. Let
\[
F=\{\lambda g_{\partial}(x)\mid x\in\partial D^{n},0\leq\lambda\leq1\}\subset
V\times B^{n}
\]
be the submapping cylinder of the natural map $g_{\partial}(\partial
D^{n})\rightarrow f(\partial D^{n})\subset Y$, where the fibers $\{\lambda
_{w}\}$ are those of the natural mapping cylinder retraction $r_{1}:V\times
B^{n}\rightarrow Y$.
Let $g:D^{n}\rightarrow Q\times\mathbb{R}^{n}-\operatorname*{int}(V\times
B^{n})$ be a locally flat embedding extending $g_{\partial}$, such that $g$ is
homotopic to $f$ in $V\times\mathbb{R}^{n}$ by a homotopy which agrees in
$\partial D^{n}$ with the straight line homotopy in $F$ joining $g_{\partial}$
to $f|_{\partial D^{n}}$. Then $X^{\prime}\equiv Y\cup F\cup g(D^{n})$ is
homeomorphic to $X$ by the restriction $h|:X^{\prime}\rightarrow X$ of a
homeomorphism $h:Q\times\mathbb{R}^{n}\rightarrow Q\times\mathbb{R}^{n}$
(since homotopy yields isotopy in the trivial range). Thus it suffices to
construct a TRN $V^{\prime}$ for $X^{\prime}$ in $Q\times\mathbb{R}^{n}$.
Let $(H,\delta H)$ be the total space of a normal disc bundle for
$(g(D^{n}),g(\partial D^{n}))$ in $(Q\times\mathbb{R}^{n}-\operatorname*{int}
(V\times B^{n}),\partial(V\times B^{n}))$. Then $(H,\delta H)\approx
(g(D^{n}),g(\partial D^{n}))\times B^{q}$. Define $V^{\prime}=V\times
B^{n}\cup_{\delta H}H$. We can define mapping cylinder retraction $r^{\prime
}:V^{\prime}\rightarrow X^{\prime}$ by adjusting the mapping cylinder
retraction $r_{1}:V\times B^{n}\rightarrow Y$ to \textquotedblleft turn the
corner\textquotedblright\ near $F$, and then extending over $H$, as follows.
Let $r_{1}^{\prime}:V\times B^{n}\rightarrow Y\cup F$ be the mapping cylinder
retraction obtained from $r_{1}$ as suggested by the following figure (note
the identifications made on the bottom of the rectangles are compatible with
the indicated projections).
In particular, $r_{1}^{\prime}|:\delta H\rightarrow g(\partial
D^{n})$ is standard projection, so $r_{1}^{\prime}$ extends, using standard
projection $H\rightarrow g(D^{n})$, to a retraction $r^{\prime}:V^{\prime
}\rightarrow X^{\prime}$. Clearly $r^{\prime}$ is a mapping cylinder
retraction, and the $1$-UV property follows because $X^{\prime}$ is $1$-LCC in
$V^{\prime}$.
There is an interesting alternative way of defining $r^{\prime}:V^{\prime
}\rightarrow X^{\prime}$, observed by Siebenmann. Let $p_{1}:V^{\prime
}\rightarrow V\times B^{n}\cup g(D^{n})$ be the extension-via-the-identity of
some natural relative mapping cylinder retraction $p_{0}:H\rightarrow\delta
H\cup g(D^{n})$ and let $p_{2}:V\times B^{n}\cup g(D^{n})\rightarrow
X^{\prime}$ (\textbf{not} a retraction, but a natural extension of
$r_{1}:V\times B^{n}\rightarrow Y$ such that $p_{2}|:g(\operatorname*{int}
D^{n})\rightarrow X^{\prime}-Y$ is a homeomorphism). Then $p=p_{2}
p_{1}:V^{\prime}\rightarrow X^{\prime}$ is a CE map which restricts in
$X^{\prime}$ to a CE map $p|:X^{\prime}\rightarrow X^{\prime}$. In the usual
fashion, let $q:V^{\prime}\rightarrow V^{\prime}$, with $q|_{\partial
V^{\prime}}=\operatorname{id}$, be a map which is a homeomorphism off $F$,
such that $q|_{X^{\prime}}=p|_{X^{\prime}}$. Then $r^{\prime}\equiv
pq^{-1}:V^{\prime}\rightarrow X^{\prime}$ is a well-defined mapping cylinder retraction.
\end{proof}
\section{Concerning mapping cylinder neighborhoods of other compacta}
Consider the following wildly optimistic conjecture.
\textit{Every compact ENR (= euclidean neighborhood retract) }$X\subset
\mathbb{R}^{n}$\textit{, with }$\dim X\leq n-3$\textit{ and }$\mathbb{R}
^{n}-X$\textit{ }$1$\textit{-LC at }$X$\textit{, has a manifold mapping
cylinder neighborhood which is unique up to homeomorphism. Or at least, every
such }$X$\textit{ has such a unique neighborhood stably, in some }
$\mathbb{R}^{n+p}$\textit{. (This conjecture has a natural Hilbert cube
version for compact ANR's).}
This conjecture is stronger than Borsuk's question (the finite dimensional
version) of whether compact ENR's have finite homotopy type; equivalent to
Borsuk's question is whether such $X$ as above have radial neighborhoods in
$\mathbb{R}^{n}$ or even $\mathbb{R}^{n+p}$ (recall $U$ is radial if
$U-X\approx Y\times\mathbb{R}^{1}$ for some compactum $Y$). (See \cite{Si3} for
the best known implications.) Incidentally, the easiest way to prove the
implication: $X$ has finite type $\Rightarrow$ $X$ has a radial neighborhood
stably, is to use the following readily proved stable version of
Geoghegan-Summerhill \cite{GS}: two compact subsets $X$ and $Y$ of
$\mathbb{R}^{n}$ have the same (Borsuk) shape $\Leftrightarrow$ the quotients
$\mathbb{R}^{2n+2}/X$ and $\mathbb{R}^{2n+2}/Y$ are homeomorphic.
If the conjecture above were true, it would imply that all such $X$ are CE
images of manifolds. It is known conversely that any finite dimensional CE
image of a manifold is an ENR. And such an ENR does have a mapping cylinder
neighborhood stably, namely a quotient of one for the source manifold
stabilized, via the decomposition argument of \cite{Sh}.
\indent It is interesting to compare the Conjecture to two questions raised by
Chapman in the Proceedings of the 1973 Georgia Topology Conference. These are
finite dimensional versions. Let $X$ be a compact ENR and $K,L$ finite cell
complexes.
\noindent\textbf{Question 1.} \emph{If }$f:K\rightarrow X$\emph{ and
}$g:L\rightarrow X$\emph{ are CE mappings, does there exist a simple homotopy
equivalence }$h:K\rightarrow L$\emph{ such that }$gh\sim f$\emph{?}
\noindent\textbf{Question 2.} \emph{If }$f:X\rightarrow K$\emph{ and
}$g:X\rightarrow L$\emph{ are CE mappings, does there exist a simple homotopy
equivalence }$h:K\rightarrow L$\emph{ such that }$hf\sim g$\emph{?}
The answer to Question 1 is yes if the stable \emph{uniqueness} part of the
Conjecture is true; the answer to Question 2 is yes if the stable
\emph{existence} part of the Conjecture is true.
\section{\textbf{Appendix: }An extension of some well known homotopy theorems}
This appendix presents a useful generalization of the familiar Whitehead
theorem for weak homotopy equivalences. Using an elementary shape theory
definition, the Theorem encompasses Whitehead's Theorem on the one hand
($Z=\operatorname{point}$), and the Lacher-Kozlowski-Price-$\underline
{\quad\quad\quad}$ Theorem for cell-like mappings on the other hand, in
addition to having applications in between.
\indent We work in the category of locally compact metric ANR's and proper
maps (whose point universes need \textbf{not} be ANR's).
\indent A map $f:X\rightarrow Y$ of compact metric spaces (\emph{not}
necessarily ANR's) is a $k$\emph{-shape equivalence}
\label{defn: k-shape equivalence}if both $X$ and $Y$ have finitely many
components and $f$ induces isomorphisms on the homotopy groups up through
dimension $k$. As these homotopy groups are awkward inverse limits, we give
the definition in primitive form (assuming $X$ and $Y$ connected; otherwise
make it hold componentwise). If $X\hookrightarrow L$ and $Y\hookrightarrow M$
are embedded as subsets of ANR's $L$ and $M$ and if $U_{X}$ and $U_{Y}$ are
arbitrary neighborhoods then there are smaller neighborhoods $V_{X}\subset
U_{X}$ and $V_{Y}\subset U_{Y}$ and a map $F:V_{X}\rightarrow V_{Y}$ extending
$f:X\rightarrow Y$, such that for any $i$, $0\leq i\leq k:$
\begin{itemize}
\item \emph{injectivity}: for any map $\alpha:S^{i}\rightarrow V_{X}$ if
$F\alpha\sim0$ in $U_{Y}$, then $\alpha\sim0$ in $U_{X}$, and
\item \emph{surjectivity}: for any map $\beta:S^{i}\rightarrow V_{Y}$, there
is a map $\alpha:S^{i}\rightarrow V_{X}$ such that $F\alpha\sim\beta$ in
$U_{Y}$. Surjectivity can in fact be accomplished by homotopy rel basepoint,
as a consequence of $\pi_{1}$ surjectivity.
\end{itemize}
\indent As usual in shape theory, this definition holds for any pair of
embeddings of $X$ and $Y$ into ANR's if it holds for one pair.
\indent Some authors would define a $k$-shape equivalence as being only
surjective in dimension $k$ (e.g. \cite[p.404]{Sp}, \cite{Ko}) and would prove
the following theorem with $\dim J\leq k$ and $J=K$. However, it seems that
for applications, the form we state it in is perhaps more natural.
If $f:X\rightarrow Y$ is a map and $p:Y\rightarrow Z$ is a surjection, then
$f$ is a $k$\emph{-shape equivalence over} $Z$ if for each $z\in Z$,
$f|:f^{-1}(p^{-1}(z))\rightarrow p^{-1}(z)$ is a $k$-shape equivalence.
\noindent\textbf{Note.} In the following, [proper] means \textquotedblleft
proper\textquotedblright\ is optional. The theorem and corollary are most
believable with proper in place. In fact, on page
\pageref{defn: k-shape equivalence}, I haven't defined $k$-shape equivalent
for non-compact spaces.
\begin{theorem}
[{Compare \cite[p.404, Th.22]{Sp} and \cite{Ko}}]\textit{Suppose
}$f:X\rightarrow Y$\textit{ is a [proper] map of locally compact metric ANR's
and }$p:Y\rightarrow Z$\textit{ is a surjection to a separable metric space
}$Z$\textit{. Suppose }$f$\textit{ is a }$k$\textit{-shape equivalence. In the
diagram below, suppose }$J$\textit{ is an arbitrary simplicial complex, }$\dim
J\leq k+1$\textit{, with subcomplex }$L$\textit{, and }$g:L\rightarrow
X$\textit{ and }$h:J\rightarrow Y$\textit{ are maps which make the diagram
commute.}
\textit{Given any majorant map }$\epsilon:Z\rightarrow(0,\infty)$\textit{,
there exists a lift }$g^{\prime}:J\rightarrow X$\textit{ extending }
$g$\textit{ such that }$pfg^{\prime}$\textit{ is }$\epsilon$\textit{-close to
}$ph$\textit{. Furthermore, if }$K$\textit{ is a subcomplex of }$J$\textit{
with }$\dim K\leq k$\textit{ then }$fg^{\prime}|K$\textit{ may be assumed
}$(p,\epsilon)$\textit{-homotopic to }$h|_{K}$\textit{.}
\begin{diagram}
L & \rTo^g & X \\
\dInto & \ruDotsto^{g'} & \dTo_f \\
J & \rTo^h & Y \\
& & \dTo_p \\
& & Z \\
\end{diagram}
\begin{proof}
Standard lifting argument.
\end{proof}
\end{theorem}
The following Corollary encompasses several well-known theorems.
\begin{corollary}
Suppose $f:X\rightarrow Y$ is a [proper] map of locally compact metric ANR's
such that for some $k$,
\begin{enumerate}
\item $\dim X\leq k$ and $\dim Y\leq k$, and
\item $f$ is a $k$-shape equivalence over $Z$ for some proper surjection
$p:Y\rightarrow Z$.
\end{enumerate}
Then $f$ is a [proper] homotopy equivalence. In fact, there is a [proper]
homotopy inverse $g:Y\rightarrow X$ such that $fg\sim\operatorname{id}_{Y}$ by
an arbitrarily $p$-small homotopy, and $gf\sim\operatorname{id}_{X}$ by an
arbitrarily $pf$-small homotopy.\newline\newline\noindent\textbf{Proof.}
Routine mapping cylinder-nerve argument.
\end{corollary}
\part{}
\section{Additional topics}
This part is not yet written. Topics to include: neighborhoods of a pair,
neighborhoods by restriction, Lickorish-Siebenmann Theorem for TRN's,
transversality (with discussion of Hudson's example), the group TOP.
\end{document}
\begin{document}
\baselineskip16pt
\title{Neural-to-Tree Policy Distillation with Policy Improvement Criterion}
\begin{sciabstract}
While deep reinforcement learning has achieved promising results in challenging decision-making tasks, the deep neural networks at the core of its success are mostly black-boxes. A feasible way to gain insight into a black-box model is to distill it into an interpretable model such as a decision tree, which consists of if-then rules and is easy to grasp and verify. However, traditional model distillation is usually a supervised learning task under a stationary data distribution assumption, which is violated in reinforcement learning. Therefore, a typical policy distillation that clones model behaviors with even a small error can introduce a data distribution shift, resulting in an unsatisfactory distilled policy with low fidelity or low performance. In this paper, we propose to address this issue by changing the distillation objective from behavior cloning to maximizing an advantage evaluation. The novel distillation objective maximizes an approximated cumulative reward and focuses more on disastrous behaviors in critical states, which controls the effect of the data shift. We evaluate our method on several Gym tasks, a commercial fighting game, and a self-driving car simulator. The empirical results show that the proposed method can preserve a higher cumulative reward than behavior cloning and learn a policy more consistent with the original one. Moreover, by examining the rules extracted from the distilled decision trees, we demonstrate that the proposed method delivers reasonable and robust decisions.
\end{sciabstract}
\section{Introduction}
Deep reinforcement learning has achieved abundant accomplishments in many fields, including but not limited to order dispatching \citep{tang2019deep}, autonomous vehicles \citep{o2018scalable}, biology \citep{neftci2019reinforcement}, and medicine \citep{popova2018deep}. However, its success usually depends on the Deep Neural Network (DNN), which has some pitfalls, for example being hard to verify \citep{bastani2016measuring} and lacking interpretability \citep{Deep_learning}. In contrast, the decision tree establishes an intelligible relationship between the features and the predictions, which can also be translated into rules to enable robustness verification \citep{verif-tree} and application in security-critical areas \citep{healthcare}. However, decision trees are challenging to learn even in the supervised setting; there has been work that trains tree-based policies in dynamic environments \citep{ernst2005tree,liu2018toward}, but these algorithms have limited generalization and only scale to toy environments like \textit{CartPole}. To combine the advantages of both kinds of representation, a number of approaches based on knowledge distillation have been proposed \citep{frosst2017distilling,bastani2018verifiable}, i.e., neural-to-tree model distillation. Specifically, they train a teacher DNN model, use it to generate labeled samples, and finally fit a student tree to the samples, aiming to achieve robustness verification or model interpretation.
In sequential decision-making tasks, the basic assumption behind model distillation by supervised learning --- a stationary data distribution --- is violated \citep{ross2010efficient}. If the student greedily imitates the teacher behavior, it can drift away from the demonstrated states due to error accumulation, especially in complex environments \citep{ross2011reduction}. To address this issue, Viper uses the DAgger \citep{ross2011reduction} imitation learning approach to constantly collect state-action pairs for iteratively revising the student DT policies. The augmented data facilitates learning, but can also cause inconsistency with the teacher --- some of the collected states might never be encountered by the teacher, yet still affect the tree's structure. Therefore, it is meaningful and valuable to explore how to distill effectively with offline data.
In this paper, we propose a novel distillation algorithm, Dpic, which optimizes a new objective derived from the policy improvement criterion. Intuitively, the data-driven distillation scheme has the same prerequisites as policy optimization --- a known policy model and some trajectories sampled by it. Different from behavior cloning, which minimizes the 0-1 loss, or Viper, which considers the long-term cost in the online setting, policy optimization, in particular TRPO, provides a criterion for performance improvement over the old policy. The criterion can be used with offline data and inherently controls the distribution shift problem. However, gradient-based optimization techniques are hard to apply to the nondifferentiable tree-based model. Moreover, explicit action evaluations are needed when assessing the improvement --- it is not enough to distinguish the right actions from the wrong ones. We instantiate our approach by modifying the traditional decision tree learning algorithms --- \textit{info gain} for classification and \textit{CART} for regression --- to optimize the policy improvement criterion; we show the new optimization objective can effectively improve performance with offline data. Then, we integrate our objective with the offline Viper; we find that our algorithm provides a more consistent policy without sacrificing efficiency or model complexity.
Overall, our main contributions in this work are:
\begin{itemize}
\item We propose an advantage-guided neural-to-tree distillation approach and analyse the connection between the advantage cost and the cumulative reward of the distilled tree policy.
\item We devise two practical neural-to-tree distillation algorithms and test them in several environments, showing that our methods perform well in terms of both the average return and the consistency of the state distribution.
\item We investigate the interpretability of the distilled tree policies in the fighting game and the self-driving task, showing that they deliver reasonable explanations.
\end{itemize}
\section{Related Work}
\subsection{Policy Distillation}
Unlike other model distillation methods for RL \citep{policy_distillation,dis_policy_distillation}, which aim to satisfy computational or timing demands, our work falls into cross-model transfer --- neural-to-tree policy distillation. Some work \citep{coppens2019distilling} utilizes distillation to explain an existing policy but limits itself to traditional supervised methods. The work most related to ours is Viper \citep{bastani2018verifiable}, which uses DAgger \citep{ross2011reduction} to constantly revise the policy in an online way and also takes into account the Q-function of the teacher policy to prioritize states of critical importance. In contrast, the proposed method, derived from policy improvement, evaluates and maximizes the advantage of the distilled policy. As a result, the decision-tree learning in Viper only discriminates the right actions from the wrong ones, while the proposed method evaluates every action and tries to select the best one. Moreover, our tree-based policy learning algorithm can achieve good enough performance offline to avoid the model distortion caused by data aggregation. It can also be regarded as an extension of cost-sensitive learning.
\subsection{Model-Agnostic Interpretation Method}
Our work can be regarded as a tool for model-agnostic interpretation \citep{doshi2017towards,molnar2019interpretable}, which separates the explanations from the machine learning model. There has been work in this domain, such as providing the feature importances that influence the outcome of the predictor \citep{NIPS2019_9211} and generating counterfactual inputs that would change the black-box classification decision \citep{chang2018explaining,singla2019explanation}, both of which provide a visual explanation of the decision. These methods largely focus on local explanations or on analyzing the key regions that influence the prediction. Another research direction is model distillation, which distills knowledge from a black-box model into a structured model to achieve global interpretability \citep{frosst2017distilling,zhang2019interpreting}. As an effective neural-to-tree distillation method, our work belongs to the latter, aiming to make the entire RL policy model transparent. Thanks to the structured model, this method can easily meet verification, safety, and non-discrimination requirements.
\section{Distilling Deep Models into Tree Models}
Deep neural network models encode their knowledge in the connections between neurons. Benefiting from excellent fitting and generalization abilities, deep models are quite flexible and powerful in complex learning tasks. However, the distributed knowledge representation blocks their interpretability, which could pose potential dangers in real applications. Decision trees have been widely used in data mining and machine learning as a comprehensible knowledge representation, which can be easily understood and efficiently verified. Therefore, transforming a deep model into a decision tree is a feasible way to break the black-box barrier of DNNs. In this section, we first present a general decision tree learning process and then introduce previous studies on distillation into decision trees.
\subsection{Decision Tree Learning}
Decision tree learning, as a predictive modeling approach, is commonly used in machine learning and data mining. A tree applies a hierarchy of filters at its internal nodes to assign each input to a leaf node carrying a prediction.
By ``testing'' on the features continuously, i.e., adding if-then rules, the decision tree divides up the feature space, using the information from data to predict the corresponding value or label.
Formally, the decision tree (commonly binary tree) $\mathbb T$ is built by dividing the training data $\mathcal{D}$ into subsets $\mathcal{D}_L$ and $\mathcal{D}_R$ recursively.
The split point $x\in\mathcal{X}$ for each node $\mathbb{N}$ is chosen by a certain criterion, and the splitting repeats recursively until the termination condition is met.
\subsection{Node Splitting Criterion}
\subsubsection{Error reduction}
Denote the misclassification error as $E$, data size as $N$, and the misclassification error rate as $\mathcal E=E/N$. A common target of decision tree learning is to minimize the sum of the misclassification error $E$ at each leaf node $\mathbb N$:
\begin{small}$\sum_{\mathbb N=1}^{|\mathbb T|}E_{\mathbb N}(\mathbb T)$\end{small}.
In the process of tree construction, a straightforward strategy is that the split point $x$ in each $\mathbb N$ is chosen by maximizing the \textit{error reduction}:
\begin{equation}
R^E_\mathbb N = \mathcal E(\mathcal D_\mathbb N)-\frac{N_{\mathbb N,L}}{N_\mathbb N}\mathcal E(\mathcal D_{\mathbb N,L}|x)-\frac{N_{\mathbb N,R}}{N_\mathbb N}\mathcal E(\mathcal D_{\mathbb N,R}|x)
\end{equation}
where the subscripts $\mathbb N, L$ and $\mathbb N, R$ represent $\mathbb N$'s child-nodes.
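As a concrete sketch of this greedy criterion (the helper names are ours, not from the paper), the split point can be chosen by scanning candidate thresholds of a single feature and keeping the one with the largest error reduction:

```python
import numpy as np

def error_rate(labels):
    """Misclassification error rate E/N: the fraction of samples
    outside the majority class."""
    if len(labels) == 0:
        return 0.0
    return 1.0 - np.bincount(labels).max() / len(labels)

def error_reduction(features, labels, split_point):
    """R^E for splitting one feature at `split_point`."""
    left = labels[features <= split_point]
    right = labels[features > split_point]
    n = len(labels)
    return (error_rate(labels)
            - len(left) / n * error_rate(left)
            - len(right) / n * error_rate(right))

# Toy data: label 0 for small feature values, label 1 for large ones.
x = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([0, 0, 0, 1, 1, 1])
candidates = [0.25, 0.5, 0.85]
best = max(candidates, key=lambda s: error_reduction(x, y, s))
```

Here the threshold $0.5$ produces two pure children, so its error reduction equals the parent error rate.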
\subsubsection{Cost reduction}
Here we consider the misclassification cost instead of the 0-1 error.
Denote the cost of classifying a sample as label $k \in \mathcal Y$ as $C^k$, the total cost of label $k$ in $\mathbb N$ as
\begin{small}$C_{\mathbb N}^k=\sum_{\mathcal{D}_\mathbb N}C^k$\end{small}
, and the total cost at each leaf node $\mathbb N$ as
\begin{small}$W_\mathbb N = \sum_{k\in\mathcal{Y}}C^k_\mathbb N$ \end{small}.
Aligning with the traditional greedy algorithm, we define the classification cost rate \begin{small}
$\mathcal C^k={C^k}/W$
\end{small} and the minimal classification cost rate
\begin{small}
$\mathcal C=\min_{k\in\mathcal{Y}}\mathcal C^k = \min_{k\in\mathcal Y}{C^k}/W$
\end{small}.
Now the objective of the decision tree is:
\begin{small}
$\sum_{\mathbb N=1}^{|\mathbb T|}C_\mathbb N(\mathbb T)$
\end{small}.
Correspondingly, the greedy node-splitting strategy is to maximize the \textit{cost reduction}:
\begin{equation}
R^C_\mathbb N = \mathcal C(\mathcal D_\mathbb N)-\frac{W_{\mathbb N, L}}{W_\mathbb N}\mathcal C(\mathcal D_{\mathbb N,L}|x)-\frac{W_{\mathbb N, R}}{W_\mathbb N}\mathcal C(\mathcal D_{\mathbb N, R}|x)
\end{equation}
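The cost-weighted criterion can be sketched the same way (again with our own helper names); each row of the cost matrix holds a sample's per-label costs $C^k$, and children are weighted by their total cost $W$ rather than their size:

```python
import numpy as np

def cost_rate(costs):
    """Minimal classification cost rate C = min_k C^k / W, where row i of
    `costs` holds the per-label costs C^k for sample i."""
    per_label = costs.sum(axis=0)            # C^k for each label k
    W = per_label.sum()                      # total cost W
    return per_label.min() / W if W > 0 else 0.0

def cost_reduction(costs, mask_left):
    """R^C for a candidate split, given the boolean left-child mask."""
    W, W_L = costs.sum(), costs[mask_left].sum()
    return (cost_rate(costs)
            - W_L / W * cost_rate(costs[mask_left])
            - (W - W_L) / W * cost_rate(costs[~mask_left]))

# Two samples prefer label 0, two prefer label 1; the split separates them.
costs = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])
mask = np.array([True, True, False, False])
```

Splitting the two groups apart makes each child's minimal cost rate zero, so $R^C$ equals the parent's rate of $0.5$.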
\subsubsection{Node-splitting degradation problem}
From the definition, we can find that $\forall k \in \mathcal Y$,
\begin{equation}
\mathcal C^k(\mathcal D_\mathbb N)=\frac{W_{L}}{W}\mathcal C^k(\mathcal D_L)+\frac{W_R}{W}\mathcal C^k(\mathcal D_R)
\end{equation}
This means that if, for every split point, the minimal-cost labels of both child nodes coincide with that of the parent node, then $R^C$ will be equal to zero and the choice of split point will be arbitrary. Formally, if
\begin{small}
$\forall x\in \mathcal X, \arg\min_{k \in\mathcal Y}\mathcal C^k(\mathcal D)=\arg\min_{k \in\mathcal Y}\mathcal C^k(\mathcal D_L|x)=\arg\min_{k \in\mathcal Y}\mathcal C^k(\mathcal D_R|x)$,
\end{small}
then
$
R^C\equiv 0
$.
We call this the \textit{node-splitting degradation problem}. The mathematical cause is that the function $f(\mathbf{z})=\min_i\{z_i\}$ is not strictly concave. That is, if
\begin{small}
${\exists} \mathbf{z'}, \mathbf{z''}, \arg\min \{z'_i\}=\arg\min \{z''_i\}=\arg\min \{\theta z'_i + (1-\theta)z''_i\}$,
\end{small}
then
\begin{equation}
f(\theta \mathbf z' + (1-\theta)\mathbf z'') = \theta f(\mathbf z') + (1-\theta)f(\mathbf z'')
\end{equation}
Define $
\mathcal O = \{(\mathbf z', \mathbf z'', \theta \mathbf z' + ( 1- \theta ) \mathbf z'' ) | \arg \min \{z'_i\} = \arg \min \{ z''_i \} = \arg \min \{\theta z'_i + (1-\theta)z''_i\} \}$.
If $\forall x\in \mathcal X$, $(\mathcal C(\mathcal D_{L}|x), \mathcal C(\mathcal D_R|x), \mathcal C(\mathcal D)) \in \mathcal O$, then the evaluation for each split point will be identical, which could lead to a bad tree.
\subsubsection{Entropy form}
To tackle this problem, previous studies introduced entropy-related evaluations such as \textit{info gain}.
The corresponding information gain is
\begin{equation}
IG^\mathcal C_\mathbb N = H_{\mathcal C}(\mathcal D_\mathbb N) - \frac{W_{\mathbb N,L}}{W_\mathbb N}H_{\mathcal C} (\mathcal D_{\mathbb N,L}|x) - \frac{W_{\mathbb N,R}}{W_\mathbb N}H_{\mathcal C}(\mathcal D_{\mathbb N,R}|x)
\end{equation}
where $H_{\mathcal C}(\mathcal D)$ indicates the entropy of cost distribution on $\mathcal D$, i.e., $$H_{\mathcal C}(\mathcal D)=-\sum_{k \in\mathcal Y}\mathcal C^k(\mathcal D)\log \mathcal C^k(\mathcal D).$$
Compared to the min function $\mathcal C(\mathcal D)=\min_{k \in \mathcal Y}\{\mathcal C^k(\mathcal D)\}$, the entropy form $H_{\mathcal C}(\mathcal D)$ has three properties: 1) it is strictly concave; 2) it has the same saddle points; 3) it has the same monotonicity in each variable. These properties ensure that the objective avoids the degradation problem without affecting the optimization results. The detailed proof is given in Appendix A.
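The degradation and its entropy-based fix can be seen numerically. In this sketch (names and data are our own; natural logarithm assumed, all cost rates positive) every sample prefers label 0, so every split leaves the minimal-cost label unchanged: the min-based reduction is identically zero, while the entropy gain still discriminates between splits:

```python
import numpy as np

def rates(costs):
    """Per-label cost rates C^k / W for a block of samples."""
    per_label = costs.sum(axis=0)
    return per_label / per_label.sum()

def min_reduction(costs, mask):
    c = lambda block: rates(block).min()
    W, W_L = costs.sum(), costs[mask].sum()
    return c(costs) - W_L / W * c(costs[mask]) - (W - W_L) / W * c(costs[~mask])

def entropy_gain(costs, mask):
    H = lambda block: -(rates(block) * np.log(rates(block))).sum()
    W, W_L = costs.sum(), costs[mask].sum()
    return H(costs) - W_L / W * H(costs[mask]) - (W - W_L) / W * H(costs[~mask])

# Label 0 is cheaper for every sample, so argmin never changes under any split.
x = np.array([1.0, 2.0, 3.0, 4.0])
costs = np.array([[0.1, 0.9], [0.2, 0.8], [0.4, 0.6], [0.45, 0.55]])
thresholds = [1.5, 2.5, 3.5]
mr = [min_reduction(costs, x <= t) for t in thresholds]  # all (exactly) zero
ig = [entropy_gain(costs, x <= t) for t in thresholds]   # all distinct
```

The entropy gain peaks at the middle threshold, which separates the confident samples from the ambiguous ones, whereas the min-based criterion cannot tell the three splits apart.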
\subsection{Distilling Deep Model Agents into Trees}
In the general form of distillation, knowledge is transferred to the distilled model by training it on a transfer set. In RL, agents make decisions according to decision models or predictive models, i.e., the knowledge is usually stored in the decision behavior or in the predictions of the future.
Correspondingly, there are two approaches to knowledge distillation from an existing agent, which construct different transfer sets --- decision data-driven and predictive data-driven \citep{policy_distillation}. The former implements the distillation by greedily imitating the demonstrated actions, i.e., behavior cloning. The latter approximates the Q-function of the DNN policy by a regression tree and afterwards chooses the action with the highest ``quality''. In both cases the transfer data is generated and collected by the policy to be distilled interacting with the environment.
\begin{table*}[t]
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{lll}
\toprule
Name & Objective function & Distribution to be evaluated \\
\midrule
BC &
$\sum_{s\in \mathcal{D}}\mathbb{I}(\mathbb T(s)\neq \pi^{*}(s))$
&
$E_{\mathbb N}^a=\sum_{s\in \mathcal D_{\mathbb N}} \mathbb{I}(a\neq \pi^{*}(s)) $
\\
Dpic
&
$\sum_{s\in \mathcal{D}}-A(s, \mathbb T(s))$
&
$C_{\mathbb N}^a=\sum_{s\in \mathcal D_{\mathbb N}}-A(s,a)$
\\
Dpic$^R$
&
$
\sum_{s\in \mathcal{D}}-A(s, \mathbb T(s)) +\alpha \mathbb I(\mathbb T(s)\neq \pi^{*}(s))
$
&
$
C^{\text{R, a}}_{\mathbb N}=\sum_{s\in \mathcal D_{\mathbb N}}-A(s,a)
+\alpha \mathbb I(a \neq \pi^{*}(s))
$
\\
\bottomrule
\end{tabular}
\caption{Distillation objectives and the node-level cost distributions evaluated by each algorithm.}
\label{comparison-table}
\end{table*}
\section{Distillation with Policy Improvement Criterion}
In this section, we discuss the problem that the traditional algorithm (behavior cloning) faces in distillation and then propose our distillation objective.
\subsection{Distribution Shift Problem}
\label{section distributin shift}
In supervised learning, the i.i.d.\ assumption is made for both training and testing data, implying that all samples stem from the same probability distribution. In reinforcement learning, however, the state distribution depends on the policy, so an inappropriate behavior not only misses the instant reward but also shifts the data distribution, leading to an extra loss in cumulative reward.
\subsubsection{Error accumulation}
Consider a student policy $\hat \pi$ that imitates an expert policy $\pi^*$ by behavior cloning; the training data (transfer set) follows the distribution induced by the expert, $d_{\pi^*}$, while the testing data follows $d_{\hat \pi}$.
Denote the immediate cost of taking action $a$ in state $s$ as $l(s, a)$, its expectation under the policy $\pi$ as $l_{\pi}(s)=\mathbb{E}_{a\sim\pi_s}(l(s,a))$, the expected $T$-step cost as $J(\pi)=T\mathbb{E}_{s\sim d_{\pi}}(l_\pi(s))$, and the 0-1 loss of executing action $a$ in state $s$ as $e(s,a)$. According to \citep{ross2010efficient}, if $\mathbb{E}_{s\sim d_{\pi^*}}[e_{\hat\pi}(s)]\leq \epsilon$, then
\begin{equation}
\label{error-accum-1}
J(\hat{\pi})\leq J(\pi^*)+T^2\epsilon
\end{equation}
But if we have a long-term cost $l^l_{\hat\pi}(s)=Tl_{\hat\pi}(s)$ which satisfies
$\mathbb{E}_{s\sim d_{\pi^*}}[l^l_{\hat\pi}(s)]\leq \epsilon^l$, then
\begin{equation}
\label{error-accum-2}
J(\hat{\pi})\leq J(\pi^*)+T\epsilon^l
\end{equation}
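To make the gap concrete (the numbers below are purely illustrative, chosen by us): if both tolerances happen to equal $0.01$ over a horizon of $T=100$ steps, the two bounds differ by a factor of $T$:

```latex
\[
T = 100,\ \epsilon = \epsilon^{l} = 0.01:\qquad
J(\hat\pi) \le J(\pi^*) + T^{2}\epsilon = J(\pi^*) + 100,
\quad\text{whereas}\quad
J(\hat\pi) \le J(\pi^*) + T\epsilon^{l} = J(\pi^*) + 1.
\]
```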
\subsection{Policy Improvement Criterion}
Define an infinite-horizon discounted Markov decision process by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma)$ with state space $\mathcal{S}$, action space $\mathcal{A}$, state transition probability distribution $P: \mathcal S \times \mathcal A\times\mathcal S\rightarrow [0, 1]$, reward function $r: \mathcal S \times \mathcal A \rightarrow \mathbb R$, initial state distribution $\rho_0$, and discount factor $\gamma$. Given a policy $\pi$, its expected discounted reward $\eta(\pi)$ is:
\begin{equation}
\eta(\pi) = \mathbb{E}_{s_0, a_0,\ldots\sim (\rho_0,\pi, P)}\left[\sum_{t=0}^{\infty}\gamma^t r(s_t)\right]
\end{equation}
where the notation $s_0, a_0, s_1...\sim (\rho_0,\pi, P)$ indicates $s_0\sim\rho_0(\cdot)$,$a_t\sim\pi(\cdot|s_t)$, $s_{t+1}\sim P(\cdot|s_t,a_t)$.
Recall the standard definitions of the state-action value function $Q_\pi$, the value function $V_\pi$, and the advantage function $A_\pi$ are:
\begin{equation}
Q_\pi(s_t, a_t)=\mathbb{E}_{s_{t+1},a_{t+1},\ldots\sim(\pi, P)}\left[\sum_{l=0}^{\infty}\gamma^l r(s_{t+l})\right]
\end{equation}
\begin{equation}
V_\pi(s_t)=\mathbb{E}_{a_t, s_{t+1},\ldots\sim(\pi, P)}\left[\sum_{l=0}^{\infty}\gamma^l r(s_{t+l})\right]
\end{equation}
\begin{equation}
A_\pi(s, a)=Q_\pi(s,a)-V_\pi(s)
\label{advantage}
\end{equation}
Then the expected return of student policy $\hat \pi$ can be expressed in terms of the advantage over expert policy $\pi^*$, accumulated over timesteps (the detailed proof is analogous to \citep{schulman2015trust}):
\begin{equation}
\label{equ: policy update 1}
\eta(\hat \pi)=\eta(\pi^*)+\mathbb{E}_{s_0, a_0,\ldots\sim (\rho_0,\hat \pi, P)}\left[\sum_{t=0}^{\infty}\gamma^tA_{\pi^*}(s_t, a_t)\right]
\end{equation}
Rewriting Equation (\ref{equ: policy update 1}) in terms of the discounted state visitation $\rho_\pi(s)=\sum_{t=0}^{\infty}\gamma^tP(s_t=s)$, we have:
\begin{equation}
\label{equ: policy update 2}
\eta(\hat \pi)= \eta(\pi^*) + \sum_s\rho_{\hat\pi}(s)\sum_a\hat\pi(a|s)A_{\pi^*}(s,a),
\end{equation}
which means that when we sample a batch of data from $\rho_{\hat\pi}$, we can use $\pi^*$'s information (the advantage $A_{\pi^*}(s,a)$) to assess $\hat \pi$'s cumulative reward.
\subsubsection{Distillation objective}
Now we use a decision tree $\mathbb T$ to mimic the DNN policy $\pi^*$, and the loss can be defined as
\begin{equation}
\begin{split}
\ell(\mathbb T)=\eta(\pi^*)-\eta(\mathbb T)
&= -\sum_s\rho_{\mathbb T}(s)A_{\pi^*}(s,\mathbb T(s))\\
&= -\sum_{s\in \mathcal{D_{\mathbb T}}}A_{\pi^*}(s, \mathbb T(s))
\end{split}
\end{equation}
Here $\mathcal D_\mathbb T$ indicates that the data are sampled using the policy $\mathbb T$.
Considering that the offline data $\mathcal D_{\pi^*}$ is collected by $\pi^*$, we can rewrite the objective with an importance ratio:
\begin{equation}
\ell(\mathbb T) = \sum_{s\in \mathcal{D_{\pi^*}}}-\frac{\rho_{\mathbb T}(s)}{\rho_{\pi^*}(s)}A_{\pi^*}(s, \mathbb T(s))
\end{equation}
When the transfer data is infinite and the distilled tree has no complexity limitation, the resulting policy $\mathbb T$ will be extremely similar to the existing policy $\pi^*$, and the difference between the corresponding state distributions becomes negligible. Dropping the ratio gives the objective:
\begin{equation}
\label{objective1}
\hat{\ell}(\mathbb T) = \sum_{s\in \mathcal{D_{\pi^*}}}-A_{\pi^*}(s, \mathbb T(s))
\end{equation}
Now we can use the advantage information, which characterizes long-term effects in the offline setting, as a classification cost to construct the tree.
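In tree-building terms, Equation (\ref{objective1}) turns the advantage into a per-action classification cost: a leaf commits to the action with the smallest summed cost over the samples routed to it. A minimal sketch (the array layout and function name are our own):

```python
import numpy as np

def dpic_leaf_action(advantages):
    """Dpic leaf rule.  `advantages` is an (n_samples, n_actions) array of
    A_{pi*}(s, a) for the samples routed to one leaf.  Committing the leaf
    to action a costs C^a = sum_s -A(s, a); pick the cheapest action."""
    costs = -advantages.sum(axis=0)
    return int(np.argmin(costs))

# Two samples, three actions; action 2 has the largest summed advantage.
A = np.array([[0.0, -1.0, 0.5],
              [0.1,  0.2, 0.6]])
```

Unlike behavior cloning, which only counts how many samples disagree with the teacher, this rule weighs each action by how much return it forfeits.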
\subsubsection{Regularization}
As mentioned before, maximizing the pure advantage ignores the distribution mismatch caused by policy differences, so it only works well when the tree is large. The main reason is that the new objective inherently weights samples by their advantage information, which hinders the tree's fitting ability: to fit identical data, optimizing the new objective produces a larger tree than behavior cloning does. However, it is usually necessary to limit the model complexity when model explanation is the goal, in which case the distribution mismatch is usually non-negligible. To lessen this problem, we further add the objective of behavior cloning as a penalty term, yielding the second objective:
\begin{equation}
\hat\ell^{R}(\mathbb T)=\sum_{s\in\mathcal D_{\pi^{*}}}-A_{\pi^*}(s, \mathbb T(s))+\alpha \mathbb I (\mathbb T(s)\neq \pi^{*}(s))
\end{equation}
where the temperature coefficient $\alpha$ determines the strength of the penalty and is set prior to training. As $\alpha$ increases, the effect of the advantage information on the tree fades away.
The algorithms are listed in Table \ref{comparison-table}.
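The regularized cost in Table \ref{comparison-table} simply adds the behavior-cloning indicator to the advantage cost; per node and per candidate action it can be evaluated as follows (a sketch with our own names, self-contained toy data):

```python
import numpy as np

def dpic_r_cost(advantages, teacher_actions, action, alpha):
    """C^{R,a} = sum_s [ -A(s, a) + alpha * 1(a != pi*(s)) ] for one node."""
    penalty = alpha * (teacher_actions != action)
    return float((-advantages[:, action] + penalty).sum())

# Toy node: two samples, three actions; the teacher picks action 2 in both.
A = np.array([[0.0, -1.0, 0.5],
              [0.1,  0.2, 0.6]])
teacher = np.array([2, 2])
```

With $\alpha = 0.1$, action 2 incurs no penalty and keeps the lowest cost, while disagreeing actions pay the extra indicator term.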
\subsubsection{Advantage evaluation}
The calculation of the advantage requires $Q_{\pi^*}$ and $V_{\pi^*}$, but most RL algorithms expose only one of them: for policy-gradient methods, the value function and action distribution are available, while for Q-learning based methods, the state-action value is accessible.
Inspired by the prior work --- SQIL \citep{reddy2019sqil}, we assume that the behavior of the DNN agent follows the maximum entropy model \citep{levine2018reinforcement}, and derive the advantage based on soft Q-learning \citep{haarnoja2017reinforcement}.
The DNN policy $\pi^*$ and the value function $V_{\pi^*}$ can be defined by $Q_{\pi^*}$ :
\begin{equation}
\label{softq}
\pi^*(a|s)=\frac{\exp{(Q_{\pi^*}(s,a))}}{\sum_{a'\in \mathcal A} \exp{(Q_{\pi^*}(s, a'))}}
\end{equation}
\begin{equation}
V_{\pi^*}(s)=\log{\sum_{a\in\mathcal A}
\exp{( Q_{\pi^*}(s,a)})}
\end{equation}
Therefore, we can convert between $Q_{\pi^*}$ and $V_{\pi^*}$. That is, our algorithm can be applied to any neural network policy, regardless of how the policy was trained.
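Under this maximum-entropy model the conversions are a log-sum-exp and a softmax; a numpy sketch (not the authors' code) shows how the advantage is recovered from Q-values alone:

```python
import numpy as np

def soft_value(q):
    """V(s) = log sum_a exp Q(s, a), computed with the max-shift trick."""
    m = q.max()
    return m + np.log(np.exp(q - m).sum())

def soft_policy(q):
    """pi*(a|s) = exp Q(s, a) / sum_a' exp Q(s, a'): a softmax over Q."""
    z = np.exp(q - q.max())
    return z / z.sum()

q = np.array([1.0, 2.0, 0.5])     # Q_{pi*}(s, .) for a single state
adv = q - soft_value(q)           # A(s, a) = Q(s, a) - V(s)
```

Note that under this model $\pi^*(a|s) = \exp A(s,a)$, so the policy, value, and advantage are mutually recoverable from any one of $Q$ or $(\pi, V)$.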
\begin{table}[t]
\centering
\begin{tabular}{c|ccccccc}
\toprule
n &BC &FQ &Viper$_M$ & Dpic &Dpic$_M$ &Dpic$^R$ &Dpic$^R_M$ \\
\midrule
\multicolumn{8}{c}{CartPole} \\
\midrule
$1$&$\bm{70.4}$&$9.5$&$28.9$&$14.8$&$32.1$&$56.7$&$38.7$\\
$3$&$175.7$&$9.3$&$171.7$&$17.8$&$166.8$&$175.7$&$\bm{189.7}$\\
$7$&$167.8$&$12.1$&$181.2$&$112.5$&$154.3$&$\bm{194.0}$&$186.1$\\
$15$&$176.9$&$12.5$&$179.2$&$168.3$&$159.7$&$\bm{192.0}$&$187.4$\\
$31$&$190.0$&$26.4$&$168.6$&$\bm{199.2}$&$181.9$&$194.7$&$185.3$\\
$63$&$\bm{190.8}$&$25.9$&$167.5$&$171.1$&$181.2$&$186.9$&$190.5$\\
\midrule
\multicolumn{8}{c}{MountainCar} \\
\midrule
$1$&$-117.4$&$-200.0$&$-121.3$&$-117.4$&$-139.9$&$\bm{-116.5}$&$-120.0$\\
$3$&$-118.3$&$-200.0$&$-118.3$&$-118.7$&$-117.0$&$-118.4$&$\bm{-110.0}$\\
$7$&$-107.7$&$-200.0$&$-110.0$&$-105.8$&$-114.0$&$\bm{-105.3}$&$-105.3$\\
$15$&$-111.1$&$-200.0$&$\bm{-104.3}$&$-106.9$&$-116.2$&$-105.3$&$-105.3$\\
$31$&$-105.5$&$-200.0$&$-105.5$&$-106.0$&$-102.5$&$-100.0$&$\bm{-99.3}$\\
$63$&$-95.0$&$-200.0$&$-105.3$&$\bm{-94.2}$&$-105.5$&$-96.6$&$-97.5$\\
\midrule
\multicolumn{8}{c}{Acrobot} \\
\midrule
$1$&$-84.3$&$-500.0$&$-97.0$&$-86.9$&$-94.0$&$-79.4$&$\bm{-79.0}$\\
$3$&$-85.7$&$-500.0$&$-92.6$&$-83.5$&$-95.8$&$\bm{-82.5}$&$-98.8$\\
$7$&$-82.4$&$-500.0$&$-88.5$&$-94.8$&$-130.8$&$-86.9$&$\bm{-82.0}$\\
$15$&$-86.7$&$-500.0$&$-99.2$&$-83.0$&$-140.4$&$-89.3$&$\bm{-81.7}$\\
$31$&$-93.6$&$-500.0$&$-98.5$&$-90.0$&$-105.0$&$\bm{-82.5}$&$-86.2$\\
$63$&$-85.1$&$-500.0$&$-89.2$&$-86.6$&$\bm{-78.6}$&$-82.2$&$-81.5$\\
\bottomrule
\end{tabular}
\caption{The average returns of the distilled trees across ten runs.}
\label{table-offline}
\end{table}
\section{Evaluation}
In this section, we evaluate the proposed algorithms on four Gym tasks, including three classic control tasks and a variant of the Pong game \citep{bastani2018verifiable} in Atari, which extracts 7-dimensional states from the image observation. The DNN policies for the classic control tasks are trained by Ape-X \citep{horgan2018distributed}, and for Pong we adopt the optimal model provided in \citep{bastani2018verifiable}. The mean cumulative rewards obtained by the teachers on CartPole, MountainCar, Acrobot, and Pong are $200.0$, $-63.2$, $-102.0$, and $21.0$, respectively. All results are averaged across 100 episodes. The algorithm with the regularization term has an additional hyper-parameter $\alpha$; we describe how we choose it below. All algorithms other than ours adopt the standard decision tree implementations (i.e., \textit{ID3} and \textit{CART}) in scikit-learn \citep{scikit-learn}.
\subsection{Comparison in Offline Setting}
On the classic control tasks, we compare our algorithms to other existing distillation methods, including behavior cloning (BC), fitting Q (FQ), and the offline Viper without data aggregation (Viper$_M$). Our tree construction follows the classical decision tree algorithm; in particular, for each tree node, we choose the split point by minimizing $\hat{\ell}$ (Dpic) or $\hat{\ell}^R$ (Dpic$^R$) (summarized in Table \ref{comparison-table}). The transfer data (comprising the observation, the selected action, and the Q values of all actions) is collected by interacting with the environments using well-trained neural network policies. Viper$_M$, Dpic$_M$, and Dpic$^R_M$ resample data in the same way as Viper, while the others do not. In algorithms with regularization terms, we explore the following values for the error cost weighting coefficient $\alpha$, $\{0.02, 0.04, 0.08, 0.1, 0.15\}$, and choose the best one by the returns.
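As an illustration, the cost-based split selection described above might be sketched as follows. The per-sample action-cost matrix (e.g.\ Q-value regret) and the exhaustive threshold scan are our assumptions for this sketch, not the paper's implementation of $\hat{\ell}$:

```python
import numpy as np

def leaf_cost(cost_rows):
    # A leaf predicts one action for all samples it covers; its cost is the
    # best achievable sum of per-sample action costs (e.g. Q-value regret).
    return cost_rows.sum(axis=0).min() if len(cost_rows) else 0.0

def best_split(X, costs):
    """Greedy split-point search minimizing total leaf cost instead of impurity.

    X: (n, d) observations; costs: (n, k) per-sample cost of each action.
    Returns (feature, threshold, cost) of the best axis-aligned split.
    """
    n, d = X.shape
    best = (None, None, leaf_cost(costs))  # "no split" baseline
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:  # candidate thresholds
            left = X[:, j] <= t
            total = leaf_cost(costs[left]) + leaf_cost(costs[~left])
            if total < best[2]:
                best = (j, t, total)
    return best
```

The only difference from a classical CART split search is the objective: per-leaf misclassification impurity is replaced by a reward-aware cost.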
We train decision trees with maximum node numbers of $\{1, 3, 7, 15, 31, 63\}$ and run each setting ten times with different fixed data. As shown in Table \ref{table-offline}, our algorithms obtain the highest rewards in $15/18$ settings, regardless of the size of the tree. Moreover, as the number of nodes grows and the expressiveness of the tree increases, the gap between teacher and student shrinks, which yields better results for the simplified algorithm ($199.2$ on CartPole, $-94.2$ on MountainCar, and $-78.6$ on Acrobot). Dpic and Dpic$^R$ can thus be applied to policy distillation under any limitation on model complexity.
\begin{figure}
\caption{Comparison to Viper in Atari Pong.
}
\label{average rewards}
\label{Maximum Mean Depency}
\label{pong}
\end{figure}
\subsection{Comparison to VIPER}
On the Atari Pong benchmark, we integrate our algorithm into the offline Viper (which always collects data using the teacher) and compare it to the standard and offline Viper. In Figure \ref{average rewards}, we compare the reward achieved by the three algorithms as a function of the number of rollouts (each experiment is run three times with different seeds). We also compare the state distribution discrepancy using the maximum mean discrepancy (\textit{MK-MMD} \citep{gretton2012kernel}), a distance measure on probability distributions. All algorithms limit the tree size to 80 nodes. Dpic achieves performance and efficiency comparable to Viper but much better consistency ($0.0871$ vs.\ $0.3445$ in MMD). In contrast, Viper without data aggregation has equivalent model fidelity but worse performance ($-7.6$ vs.\ $21.0$).
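For intuition, the distribution-discrepancy measurement can be sketched with a single Gaussian kernel. The paper uses the multi-kernel MK-MMD of Gretton et al., so the single-kernel estimator and the bandwidth parameter \texttt{sigma} below are simplifying assumptions:

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Biased squared MMD between sample sets X (n, d) and Y (m, d) with a
    single Gaussian kernel -- a simplified stand-in for the multi-kernel
    MK-MMD used in the paper to compare state distributions."""
    def k(A, B):
        # Pairwise squared distances via the expansion of ||a - b||^2.
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Two student policies visiting similar states as the teacher yield an MMD near zero; a large value indicates the distilled tree drifts to a different part of the state space.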
\begin{figure*}
\caption{Explanation of the attack actions in the \textit{Fighting Game}.}
\label{skills tree}
\label{feature importance}
\label{game}
\end{figure*}
\section{Policy Explanation}
Here we mainly experiment with policy explanation through feature importance in a commercial fighting game and an autopilot simulation environment. These tasks are more complex: the fighting game requires skillful combo attacks to beat the opponent, while the autopilot simulator needs to avoid pedestrians and arrive at the destination. The state dimension of both tasks is about 200. We also illustrate the online performance of the distilled trees.
The results are shown in Table \ref{big-experiments}; our methods learn better-performing trees on both tasks.
\begin{table}[htp]
\centering
\begin{tabular}{lcc}
\toprule
& Fighting Game & Autopilot Simulation \\
\midrule
Random & $44.32 \pm 19.87$ & $8.4514 \pm 11.02$ \\
Q-distillation & $181.81 \pm 24.32$ & $14.7221 \pm 6.51$ \\
BC & $ 187.70 \pm 18.82 $&$15.4205 \pm 7.10$ \\
Dpic & $190.78 \pm 17.47$ & $15.5012 \pm 7.94$ \\
Dpic$^R$ & $\bm{195.24 \pm 12.35}$ & $\bm{16.0453 \pm 6.87}$\\
Viper$_M$ & $ 189.31 \pm 19.08 $ & $ 15.5540 \pm 7.20 $ \\
\midrule
DNN(Expert) & $194.20 \pm 17.84$ & $16.5686 \pm 6.36$\\
\bottomrule
\end{tabular}
\caption{The average returns of distilled trees.}
\label{big-experiments}
\end{table}
\subsubsection{Fighting Game}
The goal of the task is to beat the opponent within a limited time. The opponent cannot fight back while suffering damage, so the best strategy is to perform consecutive strikes according to the skill tree (shown in Figure \ref{skills tree}) as much as possible.
To explore how the agent fights the opponent, we analyze the feature importance of the four attack actions.
Traditional feature importance is the average, over the splits between the root node and the leaf nodes, of the decrease in impurity attributed to a feature.
We extend the node impurity to our cost ratio; the resulting value likewise indicates the importance of the feature.
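A minimal sketch of this generalized importance, assuming a hypothetical dict-based tree representation and a pluggable \texttt{node\_cost} function (classical impurity, or the cost ratio above):

```python
def feature_importance(tree, node_cost):
    """Importance of each feature as its normalized total cost decrease.

    `tree` is a hypothetical dict-based node: leaves hold 'samples'; internal
    nodes additionally hold 'feature', 'left', 'right'. `node_cost(samples)`
    returns the cost at a node (impurity classically; our cost ratio here).
    """
    imp = {}
    def walk(node):
        if 'feature' not in node:        # leaf: nothing to attribute
            return
        s = node['samples']
        l, r = node['left'], node['right']
        # Weighted decrease in cost achieved by this split.
        drop = node_cost(s) \
            - (len(l['samples']) / len(s)) * node_cost(l['samples']) \
            - (len(r['samples']) / len(s)) * node_cost(r['samples'])
        imp[node['feature']] = imp.get(node['feature'], 0.0) + drop * len(s)
        walk(l)
        walk(r)
    walk(tree)
    total = sum(imp.values()) or 1.0
    return {f: v / total for f, v in imp.items()}
```

Plugging in Gini impurity recovers the traditional importance; plugging in the cost ratio gives the reward-aware variant used for the analysis above.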
Figure \ref{feature importance} shows the statistical results of the top-1 important features.
Our results show that the agent pays attention to combo attacks: e.g., the \textit{light attack} is related to the \textit{combo2} status, while \textit{skill1} and \textit{skill2} are related to \textit{combo1} and \textit{combo3}, respectively. In these combo statuses, the corresponding actions keep the combo attack running smoothly, as Figure \ref{skills tree} shows. By contrast, the baseline result highlights the importance of distance, mana points, and whether a skill is available, which are more basic features.
We also find that the agent focuses on the preparation time (\textit{foreswing} in the game) of other actions during the attack process. We conjecture that the agent infers the current combo status from this time, because the preparation time of actions changes across combo statuses.
All in all, by focusing on reward information, our algorithm provides a more objective-oriented explanation, whereas BC pays more attention to state statistics, so its distilled tree has a more statistical character.
\subsubsection{Autopilot Simulation}
We also test our algorithm on an autopilot simulation environment.
In this task, the agent needs to drive the car safely to the destination while pedestrians walk randomly on the road.
The action set comprises the steering wheel and the throttle, each with 11 discrete actions. Unlike in the previous experiments, the neural network policy is trained by the PPO algorithm.
We calculate the advantage by Equation (\ref{softq}) and construct trees from 200{,}000 samples. The observation includes 64 laser beams covering 180 degrees in front of the vehicle.
We sample a trajectory and mark the top-5 important radar bearings per frame. Figure \ref{autopilot} shows sample states where our approach produces more meaningful saliency features for the autopilot agent than the baseline approach. Although both policies focus on pedestrian and curb locations, the traditional decision tree policy does not attend to orientation, whereas ours focuses more on the obstacles ahead.
The whole process is shown in Appendix C.
\begin{figure}
\caption{The top-5 important radar bearings while driving (blue lines).
The caption of each subfigure shows the corresponding wheel action, e.g., ``right $7^{\circ}$''.}
\label{autopilot}
\end{figure}
\section{Conclusion and Future Work}
We have applied model distillation to reinforcement learning to generate a tree-based policy. Considering the particularities of RL, we described a decision tree method for mimicking an existing neural network policy, which maximizes the cumulative reward instead of classification accuracy. Experiments on a series of tasks show improvements in both performance and consistency with the original policy, verifying that it is effective to employ information beyond the demonstrated action, e.g., Q, V, and the demonstrated action distribution. Finally, our approach can be conveniently applied even to real games and industrial simulators, which may help provide interpretability and security for automated systems.
\end{document}
\begin{document}
\title{Restorable Shortest Path Tiebreaking for Edge-Faulty Graphs}
\thispagestyle{empty}
\begin{abstract}
The \emph{restoration lemma} by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [Dist.\ Comp.\ '02] proves that, in an undirected unweighted graph, any replacement shortest path avoiding a failing edge can be expressed as the concatenation of two original shortest paths.
However, the lemma is \emph{tiebreaking-sensitive}: if one selects a particular canonical shortest path for each node pair, it is no longer guaranteed that one can build replacement paths by concatenating two \emph{selected} shortest paths.
They left as an open problem whether a method of shortest path tiebreaking with this desirable property is generally possible.
We settle this question affirmatively with the first general construction of \emph{restorable tiebreaking schemes}.
We then show applications to various problems in fault-tolerant network design.
These include a faster algorithm for subset replacement paths, more efficient fault-tolerant (exact) distance labeling schemes, fault-tolerant subset distance preservers and $+4$ additive spanners with improved sparsity, and fast distributed algorithms that construct these objects.
For example, an almost immediate corollary of our restorable tiebreaking scheme is the first nontrivial distributed construction of sparse fault-tolerant distance preservers resilient to \emph{three} faults.
\end{abstract}
\tableofcontents
\setcounter{page}{1}
\section{Introduction}
This paper builds on a classic work of Afek, Bremler-Barr, Kaplan, Cohen, and Merritt from 2002, which initiated a theory of \emph{shortest path restoration} in graphs \cite{ABKCM02}.
The premise is that one has a network, represented by a graph $G$, and one has computed its shortest paths and stored them in a routing table.
But then, an edge in the graph breaks, rendering some of the paths unusable.
We want to efficiently \emph{restore} these paths, changing the table to reroute them along a new shortest path between the same endpoints in the surviving graph.
An ideal solution will both avoid recomputing shortest paths from scratch and only require easy-to-implement changes to the routing table.
Motivated by the fact that multiprotocol label switching (MPLS) allows for efficient concatenation of paths, Afek et al.~\cite{ABKCM02} developed the following elegant structure theorem for the problem, called the \emph{restoration lemma}.
All graphs in this discussion are undirected and unweighted.
\begin{theorem} [Restoration Lemma \cite{ABKCM02}]
For any graph $G = (V, E)$, vertices $s,t \in V$, and failing edge $e \in E$, there exists a vertex $x$ and a replacement shortest $s \leadsto t$ path avoiding $e$ that is the concatenation of two original shortest paths $\pi(s, x), \pi(t, x)$ in $G$.
\end{theorem}
We remark that some versions of this lemma were perhaps implicit in prior work, e.g., \cite{HS01}.
The restoration lemma itself has proved somewhat difficult to apply directly, and most applications of this theory use weaker variants instead (e.g., \cite{bodwin2017preserving, BiloCG0PP18, ChechikCFK17}).
The issue is that the restoration lemma is \emph{tiebreaking-sensitive}, in a sense that we next explain.
To illustrate, let us try a naive attempt at applying the restoration lemma.
One might try to restore a shortest path $\pi(s, t)$ under a failing edge $e$ by searching over all possible midpoint nodes $x$, concatenating the existing shortest paths $\pi(s, x), \pi(t, x)$, and then selecting the replacement $s \leadsto t$ path to be the shortest among all concatenated paths that avoid $e$.
It might seem that the restoration lemma promises that one such choice of midpoint $x$ will yield a valid replacement shortest path.
But this isn't quite right: the restoration lemma only promises that \emph{there exist} two shortest paths of the form $\pi(s, x), \pi(t, x)$ whose concatenation forms a valid replacement path.
Generally there can be many tied-for-shortest $s \leadsto x$ and $t \leadsto x$ paths, and in designing the initial routing table we implicitly broke ties to select just one of them.
The bad case is when, for the proper midpoint node $x$, we select the canonical shortest $s \leadsto x$ path to be one that uses the failing edge $e$ (even though the restoration lemma promises that a \emph{different} $s \leadsto x$ shortest path avoids $e$), and thus this restoration-by-concatenation algorithm wrongly discards $x$ as a potential midpoint node.
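The restoration-by-concatenation procedure just described can be sketched as follows (representing the tiebreaking scheme as a dictionary of vertex lists is our assumption). With an arbitrary scheme this function may return a suboptimal path or \texttt{None} even when a replacement path exists, which is exactly the tiebreaking sensitivity at issue; a restorable scheme guarantees it succeeds:

```python
def restore_by_concatenation(pi, V, s, t, e):
    """Naive restoration: scan midpoints x, concatenate the canonical paths
    pi[(s, x)] and the reverse of pi[(t, x)], and keep the shortest candidate
    avoiding the failing edge e.

    pi maps ordered vertex pairs to vertex lists (with pi[(v, v)] = [v]);
    e is an undirected edge given as a frozenset of its two endpoints.
    """
    def avoids(path, e):
        return all(frozenset(pair) != e for pair in zip(path, path[1:]))

    best = None
    for x in V:
        p, q = pi.get((s, x)), pi.get((t, x))
        if p is None or q is None:
            continue
        cand = p + q[::-1][1:]            # s ~> x followed by x ~> t
        if avoids(cand, e) and (best is None or len(cand) < len(best)):
            best = cand
    return best
```

On the 4-cycle $0$--$1$--$2$--$3$--$0$ with canonical paths routed through vertex $1$, failing edge $\{1,2\}$ forces the procedure to find the replacement $0 \leadsto 2$ path through vertex $3$.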
\begin{figure}
\caption{The restoration lemma is ``tiebreaking-sensitive'' in the sense that there could be several tied-for-shortest paths between $s$ and the midpoint node $x$. The restoration lemma promises that \emph{one} of these paths avoids the failing edge, but the \emph{selected} canonical path might not.}
\end{figure}
So, is it possible to break shortest path ties in such a way that this restoration-by-concatenation method works?
Afek et al.~\cite{ABKCM02} discussed this question extensively, and gave a partial negative resolution: when the input graph is a $4$-cycle, one cannot select \emph{symmetric} shortest paths to enable the method (for completeness, we include a formal proof in Theorem \ref{thm:impossible} in the appendix).
By ``symmetric'' we mean that, for all nodes $s, t$, the selected $s \leadsto t$ and $t \leadsto s$ shortest paths are the same.
However, Afek et al.~\cite{ABKCM02} also point out that the MPLS protocol is inherently asymmetric, and so in principle one can choose different $s \leadsto t$ and $t \leadsto s$ shortest paths.
They left as a central open question whether the restoration lemma can be implemented by an asymmetric tiebreaking scheme (see their remark at the bottom of page 8).
In the meantime, they showed that one can select a larger ``base set'' of $O(mn)$ paths\footnote{More precisely, their base set is generated by first choosing an \emph{arbitrary} set of $n \choose 2$ canonical shortest paths, and then taking as the base set every path that consists of a canonical shortest path concatenated with a single extra edge on either end. From this, one computes a more precise upper bound of $\le m(n-1)$ on the base set size. Correctness of this base set can be proved using Theorem \ref{thm:wtdrestorationintro}.} such that one can restore shortest paths by concatenating two of these paths, and they suggested as an intermediate open question whether their base set size can be improved.
This method has found applications in network design (e.g., \cite{ABKCM02, bodwin2017preserving, BiloCG0PP18, ChechikCFK17}), but these applications tend to pay an overhead associated to the larger base set size.
Despite this, the main result of this paper is a positive resolution of the question left by Afek et al.~\cite{ABKCM02}: we prove that asymmetry is indeed enough to allow restorable tiebreaking in every graph.
\begin{theorem} [Main Result] \label{thm:intromain}
In any graph $G$, one can select a \textbf{single} shortest path for each \textbf{ordered} pair of vertices such that, for any pair of vertices $s, t$ and a failing edge $e$ such that an $s \leadsto t$ path remains in $G \setminus \{e\}$, there is a vertex $x$ and a replacement shortest $s \leadsto t$ path avoiding $e$ that is the concatenation of the selected path $\pi(s, x)$ and the reverse of the selected path $\pi(t, x)$.
\end{theorem}
We emphasize again that this theorem is possible only because we select an independent shortest path for each \emph{ordered} pair of vertices, and thus asymmetry is allowed.
The shortest path tiebreaking method used in Theorem \ref{thm:intromain} has a few other desirable properties, outlined in Section \ref{sec:tiebreaking}.
Most importantly it is \emph{consistent}, which implies that the selected paths have the right structure to be encoded in a routing table.
It can also be efficiently computed, using a single call to any APSP algorithm that can handle directed weighted input graphs.
We next overview some of our applications of this theorem in algorithms and network design.
We will not specifically revisit the original application in \cite{ABKCM02} to the MPLS routing protocol, but let us briefly discuss the interaction between Theorem \ref{thm:intromain} and this protocol.
Note that Theorem \ref{thm:intromain} builds an $s \leadsto t$ replacement path by concatenating two paths of the form $\pi(s, x), \pi(t, x)$, which are directed towards a middle vertex $x$.
Since the MPLS protocol can efficiently concatenate \emph{oriented} paths (e.g., of the form $\pi(s, x), \pi(x, t)$), one would likely apply our theorem in this context by carrying two routing tables, one of which encodes our tiebreaking scheme $\pi$ and the other of which encodes its reverse $\overline{\pi}$ (i.e., $\pi(s, t) =: \overline{\pi}(t, s)$).
An $s \leadsto t$ replacement path would be computed by scanning over midpoint nodes $x$, and considering paths formed by concatenating the $s \leadsto x$ shortest path from the first routing table with the $x \leadsto t$ shortest path from the second routing table.
For more details on the MPLS protocol and its relationship to this method of path restoration, we refer to \cite{ABKCM02}.
\subsection{Applications}
\paragraph{Replacement Path Algorithms.}
Our first applications of restorable shortest path tiebreaking are to the computation of replacement paths.
The problem has been extensively studied in the \emph{single-pair} setting, where the input is a graph $G = (V, E)$ and a vertex pair $s, t$, and the goal is to report $\dist_{G \setminus \{e\}}(s, t)$ for every edge $e$ along a shortest $s \leadsto t$ path.
The single-pair setting can be solved in $\widetilde{O}(m+n)$ time \cite{HS01, malik1989k}.
Recently, Chechik and Cohen \cite{CC19} introduced the \emph{sourcewise} setting, in which one wants to solve the problem for all pairs in $\{s\} \times V$ simultaneously.
They gave an algorithm with $\widetilde{O}\left(m \sqrt{n} + n^2 \right)$ runtime on an $n$-node, $m$-edge graph, and they showed that this runtime is optimal (up to hidden polylog factors) under the Boolean Matrix Multiplication conjecture.
This was subsequently generalized to the $S \times V$ setting by Gupta, Jain, and Modi \cite{GJM20}.
We study the natural \emph{subsetwise} version of the problem, $\srp$, where one is given a graph $G$ and a vertex subset $S$, and the goal is to solve the replacement path problem simultaneously for all pairs in $S \times S$.
We prove:
\begin{theorem}
Given an $n$-vertex, $m$-edge undirected unweighted graph $G$ and $|S| = \sigma$ source vertices, there is a centralized algorithm that solves $\srp$ in $O(\sigma m) + \widetilde{O}(\sigma^2 n)$ time.
\end{theorem}
We remark that, in the case where most pairs $s, t \in S$ have $\dist_G(s, t) = \Omega(n)$, the latter term $\sigma^2 n$ in the runtime is the time required to write down the output.
So this term is unimprovable, up to the hidden $\log$ factors.
The leading term of $\sigma m$ is likely required for any ``combinatorial'' algorithm to compute single-source shortest paths even in the non-faulty setting.
That is: $\sigma m$ is the time to run BFS search from $\sigma$ sources, and it is widely believed \cite{VW10} that multi-source BFS search is the fastest algorithm to compute $S \times S$ shortest paths in unweighted graphs, except for a class of ``algebraic'' algorithms that rely on fast matrix multiplication as a subroutine and which may be faster when $\sigma, m$ are both large \cite{Seidel95, SZ99}.
\paragraph{Fault-tolerant preservers and additive spanners.}
We next discuss our applications for the efficient constructions of fault-tolerant distance preservers, defined as follows:
\begin{definition}[$S \times T$ $f$-FT Preserver]
A subgraph $H \subseteq G$ is an $S \times T$ $f$-FT preserver if for every $s \in S, t \in T,$ and $F \subseteq E$ of size $|F| \le f$, it holds that
$\dist_{H \setminus F}(s,t)=\dist_{G \setminus F}(s,t).$
\end{definition}
When $T=S$ the object is called a \emph{subset} preserver of $S$, and when $T=V$ (all vertices in the input graph) the object is sometimes called an \emph{FT-BFS structure}, since the $f=0$ case is then solved by a collection of BFS trees. The primary objective for all of these objects is to minimize the size of the preserver, as measured by its number of edges.
For $f=1$, it was shown in \cite{bodwin2017preserving, BCPS20} that one can compute an $S \times S$ $1$-FT preserver with $O(|S|n)$ edges, by properly applying the original version of the restoration lemma by Afek et al.~\cite{ABKCM02}.
Our restorable tiebreaking scheme provides a simple and more general way to convert from $S \times V$ $(f-1)$-FT preservers to $S \times S$ $f$-FT preservers, which also enjoys better construction time, in the centralized and distributed settings.
For example, for $f=1$ we can compute an $S \times S$ 1-FT preserver simply by taking the union of BFS trees from each source in $S$, where each BFS tree is computed using our tiebreaking scheme.
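The union-of-BFS-trees construction can be sketched as follows. Note that plain queue-order tiebreaking, used here for brevity, does not by itself yield the restorable scheme that the fault-tolerance guarantee requires; this only illustrates the union step and the $O(|S|n)$ edge bound:

```python
from collections import deque

def bfs_tree_edges(adj, s):
    """Parent edges of a BFS tree from s; the tree preserves all {s} x V
    distances. Tiebreaking here is by queue order, not the paper's
    restorable scheme."""
    parent, edges = {s: None}, set()
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                edges.add(frozenset((u, v)))
                q.append(v)
    return edges

def union_of_bfs_trees(adj, S):
    # Each tree has at most n - 1 edges, so the union has O(|S| n) edges;
    # with a restorable tiebreaking scheme it is an S x S 1-FT preserver.
    out = set()
    for s in S:
        out |= bfs_tree_edges(adj, s)
    return out
```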
More generally, we get the following bounds:
\begin{theorem}
Given an $n$-vertex graph $G = (V, E)$, a set of source vertices $S \subseteq V$, and a fixed nonnegative integer $f$, there is an $(f+1)$-FT $S \times S$ distance preserver of $G, S$ on
$$O\left( n^{2-1/2^f} |S|^{1/2^f} \right) \mbox{~edges.}$$
\end{theorem}
This bound extends the results of \cite{bodwin2017preserving, BCPS20} to larger $f$; for $\ge 2$ faults (that is, $f \ge 1$ in the above theorem), it was not previously known how to compute preservers of this size.
Moreover, using a standard reduction in the literature, we can use these preservers to build improved fault-tolerant \emph{additive spanners}:
\begin{definition} [FT Additive Spanners]
Given a graph $G = (V, E)$ and a set of source vertices $S \subseteq V$, an $f$-FT $+k$ additive spanner is a subgraph $H$ satisfying
$$\dist_{H \setminus F}(s, t) \leq \dist_{G \setminus F}(s, t)+k$$
for all vertices $s, t \in S$ and sets of $|F| \le f$ failing edges.
\end{definition}
\begin{theorem} \label{thm:ftspanintro}
For any $n$-vertex graph $G = (V, E)$ and nonnegative integer $f$, there is an $(f+1)$-FT $+4$ additive spanner on $O_f\left(n^{1 + 2^f / (2^f + 1)}\right)$ edges.
\end{theorem}
This theorem extends a bound of Bil{\' o}, Grandoni, Gual{\' a}, Leucci, Proietti \cite{BGGLP15}, which establishes single-fault $+4$ spanners on $O(n^{3/2})$ edges; this is exactly the construction one gets by plugging in $f=0$ in the above theorem (the same result is also obtained as a corollary of results in \cite{bodwin2017preserving, BCPS20}).
For $\ge 2$ faults ($f \ge 1$ in the above theorem), $+4$ fault-tolerant spanners of the size given in Theorem \ref{thm:ftspanintro} were not previously known.
However, there are many other notable constructions of fault-tolerant additive spanners with different $+c$ error bounds; see for example \cite{BGGLP15, Parter17, BCP12, bodwin2017preserving, BCPS15}.
\paragraph{Distributed constructions of fault-tolerant preservers.} Distributed constructions of FT preservers, in the $\mathsf{CONGEST}$ model of distributed computing \cite{Peleg:2000}, attracted attention recently \cite{DinitzK:11,GhaffariP16,DR20,ParterDualDist20}. In the context of exact distance preservers, Ghaffari and Parter \cite{GhaffariP16} presented the first distributed constructions of fault tolerant distance preserving structures. For every $n$-vertex $D$-diameter graph $G=(V,E)$ and a source vertex $s \in V$, they gave an $\widetilde{O}(D)$-round randomized $\mathsf{CONGEST}$ algorithm for computing a $1$-FT $\{s\} \times V$ preserver
with $O(n^{3/2})$ edges. Recently, Parter \cite{ParterDualDist20} extended this construction to $1$-FT $S \times V$ preservers with $\widetilde{O}(\sqrt{|S|}n^{3/2})$ edges, using $\widetilde{O}(D+\sqrt{n |S|})$ rounds. \cite{ParterDualDist20} also presented a distributed construction of sourcewise preservers against two \emph{edge} failures, with $O(|S|^{1/8}\cdot n^{15/8})$ edges, using $\widetilde{O}(D+n^{7/8}|S|^{1/8}+|S|^{5/4}n^{3/4})$ rounds. These constructions immediately yield $+2$ additive spanners resilient to two edge failures with a subquadratic number of edges and sublinear round complexity. To date, we still lack efficient distributed constructions\footnote{By efficient, we mean with a subquadratic number of edges and sublinear round complexity.} of $f$-FT preservers (or additive spanners) for $f\geq 3$. In addition, no efficient constructions are known for FT spanners with additive stretch larger than two (which are sparser, in terms of the number of edges, than the current $+2$ FT-additive spanners). Finally, there are no efficient constructions of subsetwise FT-preservers: the only distributed construction of $1$-FT $S \times S$ preservers employs the sourcewise construction of $1$-FT $S \times V$ preservers, ending with a subgraph of $O(\sqrt{|S|}n^{3/2})$ edges, which is quite far from the state-of-the-art (centralized) bound of $O(|S|n)$ edges.
In this work, we make progress along all of these directions. Combining the restorable tiebreaking scheme with the work of \cite{ParterDualDist20} allows us to provide efficient constructions of $f$-FT $S \times S$ preservers for $f \in \{1,2,3\}$ whose size bounds match the state-of-the-art bounds of the centralized constructions.
As a result, we also get the first distributed constructions of $+4$ additive spanners resilient to $f \in \{1,2,3\}$ edge faults.
\begin{theorem}[Distributed Constructions of Subsetwise FT-Preservers]\label{thm:dist-constructions}
For every $D$-diameter $n$-vertex graph $G=(V,E)$, there exist randomized distributed $\mathsf{CONGEST}$ algorithms for computing:
\begin{itemize}[noitemsep]
\item $1$-FT $S\times S$ preservers with $O(|S|n)$ edges and $\widetilde{O}(D+|S|)$ rounds.
\item $2$-FT $S \times S$ preservers with $O(\sqrt{|S|}n^{3/2})$ edges and $\widetilde{O}(D+\sqrt{|S|n})$ rounds.
\item $3$-FT $S \times S$ preservers with $O(|S|^{1/8}\cdot n^{15/8})$ edges and $\widetilde{O}(D+n^{7/8}|S|^{1/8}+|S|^{5/4}n^{3/4})$ rounds.
\end{itemize}
\end{theorem}
Using the $f$-FT $S\times S$ preservers for $f\in \{1,2,3\}$ for a subset $S$ of size $\sigma \in \{\sqrt{n}, n^{1/3}, n^{1/9}\}$ respectively, we get the first distributed constructions of $f$-FT $+4$ additive spanners.
\begin{corollary}[Distributed Constructions of FT-Additive Spanners]\label{cor:dist-add-constructions}
For every $D$-diameter $n$-vertex graph $G=(V,E)$, there exist randomized distributed algorithms for computing:
\begin{itemize}[noitemsep]
\item $1$-FT $+4$ additive spanners with $\widetilde{O}(n^{3/2})$ edges and $\widetilde{O}(D+\sqrt{n})$ rounds.
\item $2$-FT $+4$ additive spanners with $\widetilde{O}(n^{5/3})$ edges and $\widetilde{O}(D+n^{5/6})$ rounds.
\item $3$-FT $+4$ additive spanners with $\widetilde{O}(n^{17/9})$ edges and $\widetilde{O}(D+n^{8/9})$ rounds.
\end{itemize}
\end{corollary}
One can also remove the $\log$ factors in these spanner sizes in exchange for an edge bound that holds in expectation, instead of with high probability.
\paragraph{Fault-Tolerant Exact Distance Labeling.}
A \emph{distance labeling scheme} is a way to assign short bitstring labels to each vertex of a graph $G$ such that $\dist(s, t)$ can be recovered by inspecting only the labels associated with $s$ and $t$ (and no other information about $G$) \cite{GPPR04,FKMS05,SK85}. In an \emph{$f$-FT} distance labeling scheme, the labels are assigned to both the vertices and the edges of the graph, such that for any set of $|F| \le f$ failing edges, we can even recover $\dist_{G \setminus F}(s, t)$ by inspecting only the labels of $s, t$ and the edge set $F$.
These are sometimes called \emph{forbidden-set labels} \cite{CT07,CGKT07}, and they have been extensively studied in specific graph families, especially due to applications in routing \cite{ACG12, ACGP16}.
Prior work mainly focused on connectivity labels and \emph{approximate} distance labels. In those works, labels were also assigned to edges, and one may inspect the labels of the failing edges as well.
In our setting we consider \emph{exact} distance labels. Interestingly, our approach will not need to use edge labels; that is, it recovers $\dist_{G \setminus F}(s, t)$ only from the labels of $s, t$ and a description of the edge set $F$.
Since one can always provide the entire graph description as part of the label, our main objective is to provide FT exact distance labels of \emph{subquadratic} length.
To the best of our knowledge, the only prior labeling scheme for recovering exact distances under faults was given by Bil{\' o} et al.~\cite{BCGLPP18}. They showed that given a source vertex $s$, one can recover distances in $\{s\} \times V$ under one failing edge using labels of size $O(n^{1/2})$. For the all-pairs setting, this would extend to label sizes of $O(n^{3/2})$ bits.
We prove:
\begin{theorem}[Subquadratic FT labels for Exact Distances]
For any fixed integer $f \geq 0$ and $n$-vertex unweighted undirected graph, there is an $(f+1)$-FT distance labeling scheme that assigns each vertex a label of
$$O\left(n^{2 - 1/2^f} \log n\right) \text{ bits}.$$
\end{theorem}
For $f=0$ our vertex labels have size $\widetilde{O}(n)$, improving over $\widetilde{O}(n^{3/2})$ from \cite{BCGLPP18}.
Our size is near-optimal for $f=0$, in the sense that $\Omega(n)$ label sizes are required even for non-faulty exact distance labeling. This follows from a simple information-theoretic lower bound: one can recover the graph from the labeling, and there are $2^{\Theta(n^2)}$ $n$-vertex graphs, so $\Omega(n^2)$ bits are needed in total, i.e., $\Omega(n)$ bits per vertex on average.
Finally, we remark that FT labels for exact distances are also closely related to distance sensitivity oracles: these are global, centralized data structures that report $s$-$t$ distances in $G \setminus F$ efficiently. For any $f=O(\log n/\log\log n)$, Weimann and Yuster \cite{weimann2010replacement,weimann2013replacement} provided a construction of distance sensitivity oracles using subcubic space and subquadratic query time. The state-of-the-art bounds for this setting are given by van den Brand and Saranurak \cite{van2019sensitive}. It is unclear, however, how to balance the information of these global succinct data structures among the $n$ vertices, in the form of distributed labels.
\subsection{Other Graph Settings}
The results of this paper do not extend to directed and/or weighted graphs; we use both undirectedness and unweightedness in the proof of our main theorem.
Indeed, it was noted in \cite{ABKCM02, bodwin2017preserving} that the restoration lemma itself does not remain true in general for graphs that are weighted and/or directed, so there is not much hope for a direct extension of Theorem \ref{thm:intromain} to these settings.
However, we will briefly discuss two extensions of this theory to other graph settings that appear in prior work.
First, the original work of Afek et al.~\cite{ABKCM02} included the following version of the restoration lemma for weighted graphs:
\begin{theorem} [Weighted Restoration Lemma \cite{ABKCM02}] \label{thm:wtdrestorationintro}
For any undirected graph $G = (V, E, w)$ with positive edge weights, vertices $s, t \in V$, and failing edge $e \in E$, there exists an edge $(u, v)$ such that for \textbf{any} shortest paths $\pi(s, u), \pi(v, t)$, the path $\pi(s, u) \circ (u, v) \circ \pi(v, t)$ is a replacement $s \leadsto t$ shortest path avoiding $e$.
\end{theorem}
The weighted restoration lemma gives a weaker structural property than the original restoration lemma, since it includes a middle edge between the two concatenated shortest paths.
But it is not tiebreaking-sensitive and it extends to weighted graphs, which makes it useful in some settings (in particular, to generate small base sets without reference to a particular tiebreaking scheme, as discussed above).
The second extension to mention is that the weighted and unweighted restoration lemmas both extend to DAGs, and so many of their applications extend to DAGs as well \cite{ABKCM02, bodwin2017preserving}.
It seems very plausible that our main result admits some kind of extension to unweighted DAGs, but we leave the appropriate formulation and proof as a direction for future work.
\section{Replacement Path Tiebreaking Schemes \label{sec:tiebreaking}}
In this section we formally introduce the framework of \emph{tiebreaking schemes} in network design, and we extend it into the setting where graph edges can fail.
The following objects are studied in prior work on non-faulty graphs:
\begin{definition} [Shortest Path Tiebreaking Schemes]
In a graph $G$, a \emph{shortest path tiebreaking scheme} $\pi$ is a function from vertex pairs $s, t$ to one particular shortest $s \leadsto t$ path $\pi(s, t)$ in $G$ (or $\pi(s, t) := \emptyset$ if no $s \leadsto t$ path exists).
\end{definition}
It is often useful to enforce coordination between the choices of shortest paths.
Two basic kinds of coordination are:
\begin{definition} [Symmetry]
A tiebreaking scheme $\pi$ is \emph{symmetric} if for all vertex pairs $s, t$, we have $\pi(s, t) = \pi(t, s)$ (when $\pi(s, t), \pi(t, s)$ are viewed as undirected paths).
\end{definition}
\begin{definition} [Consistency]
A tiebreaking scheme $\pi$ is \emph{consistent} if, for all vertices $s, t, u, v$, if $u$ precedes $v$ in $\pi(s, t)$, then $\pi(u, v)$ is a contiguous subpath of $\pi(s, t)$.
\end{definition}
Consistency is important for many reasons; it is worth explicitly pointing out the following two:
\begin{itemize}
\item As is well known, in any graph $G = (V, E)$ one can find a subtree that preserves all $\{s\} \times V$ distances.
Consistency gives a natural converse to this statement: if one selects shortest paths using a consistent tiebreaking scheme and then overlays the $\{s\} \times V$ shortest paths, one gets a tree.
\item It is standard to encode the shortest paths of a graph $G$ in a \emph{routing table} -- that is, a matrix indexed by the vertices of $G$ whose $(i, j)^{th}$ entry holds the vertex ID of the next hop on an $i \leadsto j$ shortest path.
For example, the standard implementation of the Floyd-Warshall shortest path algorithm outputs shortest paths via a routing table of this form.
Many routing tables in practice work similarly; for example, routing tables for the Internet typically encode only the next hop on the way to the destination.
Once again, consistency gives a natural converse: if one selects shortest paths in a graph using consistent tiebreaking, then it is possible to encode these paths in a routing table.
\end{itemize}
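As a concrete illustration of the second point, the next-hop encoding can be sketched as follows. This is a minimal Python sketch of our own (the function names are not from any library); Floyd-Warshall with first-found tiebreaking is used here, matching the routing-table output described above.

```python
def floyd_warshall_next_hop(n, edges):
    """All-pairs shortest paths over an unweighted undirected graph,
    together with a next-hop routing table: nxt[i][j] is the vertex
    immediately after i on the selected i -> j path.  Consistency of
    the selected paths is what makes this compact encoding valid."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1
        nxt[u][v], nxt[v][u] = v, u
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]   # route via k: first hop toward k
    return dist, nxt

def route(nxt, i, j):
    """Recover the full i -> j path by repeated next-hop lookups."""
    path = [i]
    while path[-1] != j:
        path.append(nxt[path[-1]][j])
    return path

# 5-cycle example: recover paths purely from next-hop entries.
dist, nxt = floyd_warshall_next_hop(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```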
Since our goal is to study tiebreaking under edge faults, we introduce the following extended definition:
\begin{definition} [$f$-Replacement Path Tiebreaking Schemes]
In a graph $G = (V, E)$, an \emph{$f$-replacement path tiebreaking scheme ($f$-RPTS)} is a function of the form $\pi(s, t \mid F)$, where $s, t \in V$ and $F \subseteq E, |F| \le f$.
The requirement is that, for any fixed set of failing edges $F$ of size $|F| \le f$, the two-parameter function $\pi(\cdot, \cdot \mid F)$ is a shortest path tiebreaking scheme in the graph $G \setminus F$.
\end{definition}
We will say that an RPTS $\pi$ is symmetric or consistent if, for any given set $F$ of size $|F| \le f$, the tiebreaking scheme $\pi(\cdot, \cdot \mid F)$ is symmetric or consistent over the graph $G \setminus F$.
We also introduce the following natural property, which says that selected shortest paths do not change unless this is forced by a new fault:
\begin{definition} [Stability]
An $f$-RPTS $\pi$ is \emph{stable} if, for all $s, t, F$ with $|F| \le f-1$ and every edge $e \notin \pi(s, t \mid F)$, we have $\pi(s, t \mid F) = \pi(s, t \mid F \cup \{e\})$.
\end{definition}
Our main result is expressed formally using an additional coordination property of RPTSes that we call \emph{restorability}.
We will discuss that in the following section.
\section{Restorable Tiebreaking \label{sec:restorable}}
The main result in this paper concerns the following new coordination property:
\begin{definition} [$f$-Restorable Tiebreaking]
An RPTS $\pi$ is \emph{$f$-restorable} if, for all vertices $s, t$ and nonempty edge fault sets $F$ of size $|F| \le f$, there exists a vertex $x$ and a proper fault subset $F' \subsetneq F$ such that the concatenation of the paths $\pi(s, x \mid F'), \pi(t, x \mid F')$ forms an $s \leadsto t$ replacement path avoiding $F$.
(Note: it is not required that this concatenated path is specifically equal to $\pi(s, t \mid F)$, just that it is one of the possible replacement paths.)
\end{definition}
The motivation for this definition comes from the restoration lemma of Afek et al.~\cite{ABKCM02}, discussed above.
It is not obvious at this point that any or all graphs should admit a restorable RPTS.
That is the subject of our main result.
\subsection{Antisymmetric Tiebreaking Weight Functions and Restorability}
We will analyze a class of RPTSes generated by the following method.
Let $G = (V, E)$ be the undirected unweighted input graph.
Convert $G$ to a directed graph by replacing each undirected edge $(u, v) \in E$ with both directed edges $\{(u, v), (v, u)\}$.
For this symmetric directed graph, we define:
\begin{definition} [Antisymmetric Tiebreaking Weight Function] \label{def:atw}
A function $r : E \to \mathbb{R}$ is an antisymmetric $f$-fault tiebreaking weight (ATW) function for $G$ if it satisfies:
\begin{itemize}
\item (Antisymmetric) $r(u, v) = -r(v, u)$ for all $(u, v) \in E$,
\item ($f$-Fault Tiebreaking) Let $G^*$ be the directed weighted graph obtained by setting the weight of each edge in $G$ to be $w(u, v) := 1 + r(u, v)$.
The requirement is that for any edge subset $F$ of size $|F| \le f$, the graph $G^* \setminus F$ has a unique shortest path for each node pair, and moreover these unique shortest paths are each a shortest path in the graph $G \setminus F$.
\end{itemize}
\end{definition}
Any antisymmetric $f$-fault tiebreaking weight function $r$ naturally generates an $f$-RPTS for $G$, in which $\pi(s, t \mid F)$ is the unique shortest $s \leadsto t$ path in the graph $G^* \setminus F$.
We next prove that any $f$-RPTS generated in this way is $f$-restorable.
Afterwards, we will discuss issues related to existence and computation of antisymmetric $f$-fault tiebreaking weight functions.
\begin{theorem} \label{thm:restorabletiebreaking}
Any $f$-RPTS $\pi$ generated by an antisymmetric $f$-fault tiebreaking weight function $r$ is simultaneously stable, consistent, and $f$-restorable.
\end{theorem}
\begin{proof}
Consistency and stability follow immediately from the fact that the paths selected by $\pi$ under fault set $F$ are unique shortest paths in the graph $G^* \setminus F$.
The rest of this proof establishes that $\pi$ is $f$-restorable.
We notice that it suffices to consider only the special case $f=1$, for the following reason.
Suppose we are analyzing $f$-restorability, and we consider an arbitrary set $F$ of $|F| \le f$ failing edges.
We can select a subset $F' \subsetneq F$ containing all but one of the failing edges.
We can then view $G \setminus F'$ as the input graph, and we can view $\pi$ as a $1$-RPTS over this input graph.
If we can prove that $\pi$ is $1$-restorable on $G \setminus F'$, this implies in turn that $\pi$ is $f$-restorable over $G$.
So, the following proof assumes $f = 1$.
Let $s, t$ be vertices and let $(u, v)$ be the one failing edge.
We may assume without loss of generality that $(u, v) \in \pi(s, t)$, with that orientation, since otherwise by stability we have $\pi(s, t) = \pi(s, t \mid (u, v))$, and so the claim is immediate by choosing (say) $x=t$.
Let $x$ be the last vertex along $\pi(s, t \mid (u, v))$ such that $\pi(s, x)$ avoids $(u, v)$, and let $y$ be the vertex immediately after $x$ along $\pi(s, t \mid (u, v))$.
Hence $(u, v) \in \pi(s, y)$.
These definitions are all recapped in the following diagram.
\begin{center}
\begin{tikzpicture}
\draw [fill=black] (0, 0) circle [radius=0.15];
\draw [fill=black] (7, 0) circle [radius=0.15];
\node at (0, -0.5) {$s$};
\node at (7, -0.5) {$t$};
\draw [fill=black] (3, 0) circle [radius=0.15];
\draw [fill=black] (4, 0) circle [radius=0.15];
\node at (3.5, 0) {\Huge $\mathbf \times$};
\node at (3, -0.5) {$u$};
\node at (4, -0.5) {$v$};
\draw [fill=black] (3, 1) circle [radius=0.15];
\draw [fill=black] (4, 1) circle [radius=0.15];
\node at (3, 1.5) {$x$};
\node at (4, 1.5) {$y$};
\draw [dashed] plot coordinates {(0, 0) (3, 1) (4, 1) (7, 0)};
\node [rotate=-21] at (5.8, 0.8) {$\pi(s, t \mid (u, v))$};
\draw (0, 0) -- (3, 1);
\draw plot [smooth] coordinates {(0, 0) (3, 0) (4, 0) (4, 1)};
\node at (1.5, -0.4) {$\pi(s, y)$};
\node [rotate=21] at (1.2, 0.8) {$\pi(s, x)$};
\end{tikzpicture}
\end{center}
Our goal is now to argue that $\pi(t, x)$ does not use the edge $(u, v)$, and thus $\pi(s, x) \cup \pi(t, x)$ forms a replacement path avoiding $(u, v)$.
Since $(u, v) \in \pi(s, t)$ with that orientation, we have $\dist_G(u, t) > \dist_G(v, t)$; hence if $(v, u) \in \pi(t, x)$ then it appears with that particular orientation, $v$ preceding $u$.
In the following we will write $\dist^*(\cdot, \cdot)$ for the distance function in the directed reweighted graph $G^*$ (so $\dist^*$ is not necessarily integral, and it is asymmetric in its two parameters).
Recall that $\pi(s, y)$ includes the edge $(u, v)$, and hence the $u \leadsto y$ path through $v$ is shorter than the alternate $u \leadsto y$ path through $x$.
We thus have the inequality:
$$\dist^*(u, v) + \dist^*(v, y) < \dist^*(u, x) + \dist^*(x, y).$$
Since $(u, v), (x, y)$ are single edges, we can write
$$(1 + r(u, v)) + \dist^*(v, y) < \dist^*(u, x) + (1 + r(x, y)),$$
Rearranging and using antisymmetry of $r$, we get
$$(1 + r(y, x)) + \dist^*(v, y) < \dist^*(u, x) + (1 + r(v, u)),$$
and so, since $\dist^*(y, x) \le 1 + r(y, x)$ (as $(y, x)$ is a single edge),
$$\dist^*(v, y) + \dist^*(y, x) < (1 + r(v, u)) + \dist^*(u, x).$$
This inequality says that, in the reweighted graph, the $v \leadsto x$ path that passes through $y$ is shorter than any $v \leadsto x$ path that begins with the edge $(v, u)$.
So $(v, u) \notin \pi(v, x)$.
By consistency and the previously-mentioned fact that $\dist(v, t) < \dist(u, t)$, this also implies that $(v, u) \notin \pi(t, x)$, as desired.
\end{proof}
To complete our main result, we still need to prove existence of antisymmetric tiebreaking weight functions.
\begin{theorem} \label{thm:rwtexists}
Every undirected unweighted graph $G$ admits an antisymmetric $f$-fault tiebreaking weight function $r$ (for any $f$).
\end{theorem}
\begin{proof}
The simplest way to generate $r$ is randomly.
Let $n$ be the number of nodes in $G$, and let $\varepsilon < 1/(2n)$.
For each edge $(u, v) \in G$, set $r(u, v)$ to a uniform random real number in the interval $[-\varepsilon, \varepsilon]$, and set $r(v, u) := -r(u, v)$.
Antisymmetry of $r$ is immediate.
We then need to argue that $r$ acts as tiebreaking edge weights (with probability $1$).
Fix an edge subset $F$ and nodes $s, t$.
Consider two tied-for-shortest $s \leadsto t$ paths $q_1, q_2$ in the graph $G \setminus F$.
The probability that the lengths of $q_1, q_2$ remain tied in $G^*$ is $0$.
To see this, consider an edge $e \in q_1 \setminus q_2$.
If we fix the value of $r$ on all other edges in $(q_1 \cup q_2) \setminus \{e\}$, then there is at most one value of $r(e)$ that would cause the lengths of $q_1, q_2$ to tie.
The probability we set $r(e)$ to exactly this value in the interval $[-\varepsilon, \varepsilon]$ is $0$.
Thus, with probability $1$, among the set of shortest $s \leadsto t$ paths in $G \setminus F$ there will be a unique one of minimum length in $G^*$.
Finally, we need to show that no non-shortest path in $G$ becomes a shortest path in $G^*$.
Let $q_1$ again be a shortest $s \leadsto t$ path in $G \setminus F$, and let $q_2$ be a non-shortest simple $s \leadsto t$ path in $G \setminus F$.
Since $G \setminus F$ is unweighted, the length of $q_2$ is at least $1$ more than the length of $q_1$.
Since $\varepsilon < 1/(2n)$, and since $q_1, q_2$ each contain at most $n-1$ edges, the length of $q_1$ increases over the reweighting in $G^*$ by $<1/2$, and the length of $q_2$ decreases over the reweighting in $G^*$ by $<1/2$.
Thus $q_1$ remains strictly shorter than $q_2$ in $G^*$.
\end{proof}
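To make the sampling procedure concrete, the following Python sketch (our own illustration; no library names are assumed) draws a random antisymmetric weight function on a 4-cycle, where the two tied $0 \leadsto 2$ shortest paths are separated by the reweighting while both remain within $1/2$ of their original length:

```python
import random

def random_atw(edges, n, seed=1):
    """Sample a candidate antisymmetric tiebreaking weight function r,
    as in the proof above: r(u, v) uniform in (-eps, eps) with
    eps = 1/(2n), and r(v, u) := -r(u, v)."""
    rng = random.Random(seed)
    eps = 1.0 / (2 * n)
    r = {}
    for u, v in edges:
        x = rng.uniform(-eps, eps)
        r[u, v], r[v, u] = x, -x
    return r

# 4-cycle: the two shortest 0 -> 2 paths (0-1-2 and 0-3-2) are tied in G.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
r = random_atw(edges, n=4)
w = lambda u, v: 1 + r[u, v]          # directed edge weight in G*

len_a = w(0, 1) + w(1, 2)             # path 0-1-2 in G*
len_b = w(0, 3) + w(3, 2)             # path 0-3-2 in G*
assert len_a != len_b                 # the tie is broken (with probability 1)
assert abs(len_a - 2) < 0.5 and abs(len_b - 2) < 0.5   # both still shortest in G
```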
\subsection{Bit Complexity and Determinism \label{sec:detbit}}
Before continuing, we address two possible shortcomings in the proof of Theorem \ref{thm:rwtexists}.
The first concern is that this proof operates in the real-RAM model: we allow ourselves to perturb edge weights by arbitrary real numbers in the interval $[-\varepsilon, \varepsilon]$.
Practical implementations might need to care about the \emph{bit complexity} of $r$; that is, they might pay a time/space penalty if there is a significant space overhead to representing the edge weights in the graph $G^*$.
Fortunately, there is a bit-efficient solution to this problem, based on an application of the \emph{isolation lemma} of Mulmuley, Vazirani, and Vazirani.
We will state a slightly special case of the isolation lemma here (for paths in graphs):
\begin{theorem} [Isolation Lemma \cite{MVV87}]
Let $G = (V, E)$ be a graph and let $\Pi$ be a set of paths in $G$.
Suppose we choose an integer weight for each edge in $G$ uniformly at random from the range $\{1, \dots, W\}$.
Then, with probability at least $1 - |E|/W$, there is a unique path $\pi \in \Pi$ that has minimum length among the paths in $\Pi$.
\end{theorem}
The surprise in the isolation lemma is that there is no dependence on the number of paths $|\Pi|$: there can even be exponentially many paths in $\Pi$, and yet a unique shortest path in $\Pi$ still exists with good probability.
This makes it very helpful for tiebreaking applications, like the following.
\begin{corollary}
For any $n$-node undirected unweighted graph $G = (V, E)$ and integer $f \ge 1$, there is a randomized polynomial-time algorithm that returns an antisymmetric $f$-fault tiebreaking weight function $r$ for which each value $r(u, v)$ can be represented in $O(f \log n)$ bits.
\end{corollary}
\begin{proof}
Set $W := n^{2f+4+c}$, where $c$ is a positive integer constant that we will choose later.
Set the values of the weight function $r(u, v)$ by selecting a uniform-random number from among the $2W+1$ numbers in the set
$$\left\{\frac{i}{W} \cdot \frac{1}{2n} \ \mid \ i \in \{-W, -W+1, \dots, W-1, W\} \right\},$$
and set $r(v, u) := -r(u, v)$ to enforce antisymmetry.
Note that we need to encode one of $2W+1$ values for each edge weight, which requires $O(\log W) = O(f \log n)$ bits per edge.
In the following, let $G^*$ be the directed reweighted version of $G$ according to the weight function $r$.
Now fix nodes $s, t$ and a set of edges $F$ with $|F| \le f$, and let $\Pi$ be the set of shortest $s \leadsto t$ paths in the graph $G \setminus F$.
Exactly as in Theorem \ref{thm:rwtexists}, since each edge weight changes by at most $1/(2n)$ over the reweighting, path lengths change by $<1/2$, and so no non-shortest path in $G \setminus F$ can become a shortest path in the graph $G^* \setminus F$.
Meanwhile, by the isolation lemma, with probability at least
$$1 - \frac{|E|}{n^{2f+4+c}} > 1 - \frac{1}{n^{2f+2+c}},$$
there is a unique path in $\Pi$ whose weight decreases the most over the reweighting to $G^*$, which is thus the unique shortest $s \leadsto t$ path in $G^* \setminus F$.
By a union bound over the $\le n^{2f+2}$ possible choices of $s, t, F$, with probability $\ge 1 - \frac{1}{n^c}$, for \emph{every} possible choice of $s, t, F$ there is a unique shortest $s \leadsto t$ path in the graph $G^* \setminus F$.
Thus, with high probability (controlled by the choice of $c$), $r$ is $f$-tiebreaking.
\end{proof}
The second possible shortcoming of this approach is that it is \emph{randomized}; it is natural to ask whether one can achieve an antisymmetric weight function \emph{deterministically}.
There is one easy way to do so, using a slight tweak on a folklore method:
\begin{theorem}
There is a deterministic polynomial time algorithm that, given an $n$-node graph $G = (V, E)$, computes antisymmetric $f$-tiebreaking edge weights (for any $f$) using $O(|E|)$ bits per edge.
\end{theorem}
\begin{proof}
Assume without loss of generality that $G$ is connected (otherwise one can compute tiebreaking edge weights over each connected component individually).
We define our weight function $r$ as follows.
Arbitrarily assign the edges ID numbers $i \in \{1, \dots, |E|\}$.
Then, assign edge weights
$$r(u, v) := \text{sign}(u-v) \cdot C^{-i} \cdot \frac{1}{2n},$$
where $C$ is some large enough absolute constant.
Antisymmetry is immediate, and we note that this expression is represented in $O(|E| + \log n) = O(|E|)$ bits.
To show that $r$ is indeed an $f$-tiebreaking function, let $G^*$ be the directed weighted graph associated to $r$, let $F$ be an arbitrary set of edge faults.
\begin{itemize}
\item First, we argue that no non-shortest path in $G \setminus F$ can become a shortest path in $G^* \setminus F$.
This part is essentially identical to Theorem \ref{thm:rwtexists}:
since $G$ is unweighted, the initial length of any non-shortest path is at least $1$ more than the length of a shortest path.
In $G^* \setminus F$, each edge weight changes by $\le 1/(2n)$, and thus the length of any simple path $\pi$ changes by $<1/2$.
It follows that no non-shortest path in $G \setminus F$ can become a shortest path in $G^* \setminus F$.
\item Next, we argue that $G^* \setminus F$ has \emph{unique} shortest paths.
Fix nodes $s, t$ and let $\pi_1, \pi_2$ be distinct $s \leadsto t$ (not-necessarily-shortest) paths.
Let $(u, v)$ be the edge in $\pi_1 \setminus \pi_2$ with the smallest ID, and suppose without loss of generality that $\text{sign}(u - v) = 1$ (with respect to the orientation in which $\pi_1$ traverses this edge).
Then, since the weight magnitudes $C^{-i}$ decrease geometrically in the edge ID $i$, the weight increase on $(u, v)$ is strictly larger than the total weight change over all other edges in the symmetric difference of $\pi_1, \pi_2$.
Thus, if $\pi_1, \pi_2$ are tied in unweighted length, then $\pi_2$ is strictly shorter than $\pi_1$ in $G^*$.
It follows that there cannot be two equally-shortest $s \leadsto t$ paths in $G^* \setminus F$. \qedhere
\end{itemize}
\end{proof}
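The deterministic weight assignment in the proof above can be sketched as follows, using exact rational arithmetic to keep the geometric weights precise. This is a toy illustration of our own; the choice $C = 4$ and the edge-ID assignment are ours.

```python
from fractions import Fraction

def deterministic_atw(edges, n, C=4):
    """Deterministic antisymmetric weights, as in the proof above:
    the edge with ID i (1-indexed) gets magnitude C^(-i) / (2n),
    signed by the endpoint order.  Fractions keep the geometric
    weights exact; the constant C = 4 is our illustrative choice."""
    r = {}
    for i, (u, v) in enumerate(edges, start=1):
        mag = Fraction(1, C**i * 2 * n)
        sgn = 1 if u > v else -1          # sign(u - v)
        r[u, v], r[v, u] = sgn * mag, -sgn * mag
    return r

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
r = deterministic_atw(edges, n=4)
w = lambda u, v: 1 + r[u, v]              # directed edge weight in G*

# The two tied 0 -> 2 paths of the 4-cycle get distinct exact lengths,
# and the lowest-ID differing edge dominates the comparison.
len_a = w(0, 1) + w(1, 2)
len_b = w(0, 3) + w(3, 2)
assert len_a != len_b
assert abs(r[0, 1]) > abs(r[1, 2]) + abs(r[2, 3]) + abs(r[3, 0])
```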
One can ask whether there is a construction of antisymmetric $f$-fault tiebreaking weight functions that is simultaneously deterministic and which achieves reasonable bit complexity, perhaps closer to the $O(f \log n)$ bound achieved by the isolation lemma.
Proving or refuting such a construction for general graphs is an interesting open problem.
In fact, it is even open to achieve an algorithm that deterministically computes tiebreaking edge weights with bit complexity $O(\log n)$ (or perhaps a bit worse), even in the \emph{non-faulty} $f=0$ setting, and even if we do not require antisymmetry.
That said, we comment that the reweighting used in the previous theorem will only assign $O(|E|)$ different possible edge weights.
Thus, if we give up the standard representation of numbers, we could remap these edge weights and represent them using only $O(\log n)$ bits per edge.
\section{Applications}
\subsection{Fault-Tolerant Preservers and Tiebreaking Schemes}
We will begin by recapping some prior work on fault-tolerant $S \times V$ preservers in light of our RPTS framework (which will be used centrally in our applications).
We supply a new auxiliary theorem to fill in a gap in the literature highlighted by this framework.
Fault-tolerant $S \times V$ preservers were introduced by Parter and Peleg \cite{PP14}, initially called \emph{FT-BFS structures} (since the $f=0$ case is solved by BFS trees).
We mentioned earlier that, for a tiebreaking scheme $\pi$, one gets a valid BFS tree by overlaying the $\{s\} \times V$ shortest paths selected by $\pi$ if $\pi$ is consistent.
To generalize this basic fact, it is natural to ask what properties of an $f$-RPTS yield an $f$-FT $S \times V$ preserver of optimal size, when the structure is formed by overlaying all the replacement paths selected by $\pi$.
For $f=1$, this question was settled by Parter and Peleg \cite{PP14}:
\begin{theorem} [\cite{PP14}] \label{thm:oneftbfs} ~
For any $n$-vertex graph $G = (V, E)$, set of source vertices $S \subseteq V$, and consistent stable $1$-RPTS $\pi$, the $1$-FT $S \times V$ preserver formed by overlaying all $S \times V$ replacement paths selected by $\pi$ under $\le 1$ failing edge has $O\left(n^{3/2} |S|^{1/2}\right)$ edges. This bound is existentially tight.
\end{theorem}
The extension to $f=2$ was established by Parter \cite{parter2015dual} and Gupta and Khan \cite{GK19}:
\begin{theorem}[\cite{parter2015dual,GK19}] \label{thm:2ftbfs}~
For any $n$-vertex graph $G = (V, E)$ and $S \subseteq V$, there is a $2$-FT $S \times V$ preserver with
$$O\left(n^{5/3} |S|^{1/3}\right) \mbox{~edges}.$$
This bound is also existentially tight.
\end{theorem}
In contrast to Theorem \ref{thm:oneftbfs} which works with any stable and consistent $1$-RPTS, the upper bound part of Theorem \ref{thm:2ftbfs} uses additional properties beyond stability and consistency (in \cite{GK19}, called ``preferred'' paths).
For general fixed $f$, Parter \cite{parter2015dual} gave examples of graphs where any preserver $H$ has
$$|E(H)| = \Omega\left(n^{2 - \frac{1}{f+1}} |S|^{\frac{1}{f+1}} \right).$$
It is a major open question in the area to prove or refute tightness of this lower bound for any $f\geq 3$.
The only general upper bound is due to Bodwin, Grandoni, Parter, and Vassilevska-Williams \cite{bodwin2017preserving}, who gave a randomized algorithm that constructed preservers of size
$$|E(H)| = \widetilde{O}\left(n^{2 - 1/2^f} |S|^{1/2^f}\right)$$
in $\operatorname{poly}(n, f)$ time.
In fact, as we explain next, a reframing of this argument shows that the $\log$ factors can be shaved by the algorithm that simply overlays the paths selected by a consistent stable RPTS.
(Note: naively, the runtime to compute all these paths is $n^{O(f)}$, hence much worse than the one obtained in \cite{bodwin2017preserving} and polynomial only if we treat $f$ as fixed.)
\begin{theorem} [\cite{bodwin2017preserving}] \label{thm:expftbfs}
For any $n$-vertex graph $G = (V, E)$, $S \subseteq V$, fixed nonnegative integer $f$, and consistent stable $f$-RPTS $\pi$, the $f$-FT $S \times V$ preserver formed by overlaying all $S \times V$ replacement paths selected by $\pi$ under $\le f$ failing edges has
$$O\left(n^{2 - 1/2^f} |S|^{1/2^f}\right) \mbox{~edges.}$$
\end{theorem}
\begin{proof} [Proof Sketch]
At a very high level, the original algorithm in \cite{bodwin2017preserving} can be viewed as follows.
Let $\pi$ be a consistent stable RPTS (stability is used throughout the argument, and consistency is mostly used to enable the ``last-edge observation'' -- we refer to \cite{bodwin2017preserving} for details on this observation).
Consider all the replacement paths $\{\pi(s, v \mid F)\}$ for $s \in S, v \in V$, and fault sets $F$, and consider the ``final detours'' of these paths -- the exact definition of a final detour is not important in this exposition, but the final detour is a suffix of the relevant path.
We then separately consider paths with a ``short'' final detour, of length $\le \ell$ for some appropriate parameter $\ell$, and ``long'' final detours of length $>\ell$.
It turns out that we can add all edges in short final detours to the preserver without much trouble.
To handle the edges in long final detours, we randomly sample a node subset $R$ of size $|R| = \Theta((n \log n) / \ell)$, and we argue that it hits each long detour with high probability.
The subsample $R$ is then used to inform some additional edges in the preserver, which cover the paths with long detours.
Let us imagine that we run the same algorithm, but we only sample a node subset of size $|R| = \Theta(n/\ell)$.
This removes the $\log$ factors from the size of the output preserver, but in exchange, any given path with a long detour is covered in the preserver with only \emph{constant} probability.
In other words: we have a randomized construction that gives an incomplete ``preserver'' $H'$ with only
$$|E(H')| = O\left(n^{2 - 1/2^f} |S|^{1/2^f}\right)$$
edges, which includes any given replacement path $\pi(s, v \mid F)$ with constant probability or higher.
Thus, if we instead consider our original algorithm for the preserver $H$, which includes every replacement path $\pi(s, v \mid F)$ with probability $1$, then $H$ will have more edges than $H'$ by only a constant factor.
So we have
$$|E(H)| = O\left(n^{2 - 1/2^f} |S|^{1/2^f}\right)$$
as well, proving the theorem.
\end{proof}
Plugging in $f=1$ to the bound in the previous theorem recovers the result of Parter and Peleg \cite{PP14}, but this bound is non-optimal already for $f=2$. It is natural to ask whether the analysis of consistent stable RPTSes can be improved, or whether additional tiebreaking properties are necessary.
As a new auxiliary result, we show the latter: consistent and stable tiebreaking schemes cannot improve further on this bound, and in particular these properties alone are non-optimal at least for $f=2$.
\begin{theorem} \label{thm:cslowerbound}
For any nonnegative integers $f, \sigma$, there are $n$-vertex unweighted (directed or undirected) graphs $G = (V, E)$, vertex subsets $S \subseteq V$ of size $|S| = \sigma$, and $f$-RPTSes that are consistent, stable, and (for undirected graphs) symmetric, which give rise to $f$-FT $S \times V$ preservers on $\Omega(n^{2 - 1/2^f} \sigma^{1/2^f})$ edges.
\end{theorem}
The proof of this theorem is given in Appendix \ref{app:conslb}.
We also note that the lower bound of Theorem \ref{thm:cslowerbound} holds under a careful selection of one particular ``bad'' tiebreaking scheme, which happens to be consistent, stable, and symmetric.
This lower bound breaks down, however, for more specific classes of tiebreaking: for example, when small random perturbations of the edge weights are used to break ties. It is therefore intriguing to ask whether one can obtain $f$-FT $S \times V$ preservers with optimal edge bounds using random edge perturbations to break ties among replacement paths. The dual-failure case $f=2$ should serve as a convenient starting point: it would be very interesting to see whether one can replace the involved tiebreaking scheme of \cite{parter2015dual,GK19} (i.e., of preferred paths) with random edge perturbations while matching the same asymptotic edge bound (or, alternatively, to prove otherwise), especially because these random edge perturbations enable restorable tiebreaking.
\subsection{Subset Replacement Path Algorithm}
In the $\srp$ problem, the input is a graph $G = (V, E)$ and a set of source vertices $S$, and the output is: for every pair of vertices $s, t \in S$ and every failing edge $e \in E$, report $\dist_{G \setminus \{e\}}(s, t)$.
We next present our algorithm for $\srp$.
We will use the following algorithm from prior work, for the single-pair setting:
\begin{theorem} [\cite{HS01}] \label{thm:singlepairalg}
When $|S|=2$, there is an algorithm that solves $\srp(G, S)$ in time $\widetilde{O}(m + n)$.
\end{theorem}
\begin{proof} [Proof Sketch]
Although we will use this theorem as a black box, we sketch its proof anyway, for completeness.
Let $S = \{s, t\}$, and suppose we perturb edge weights in $G$ to make shortest paths unique.
Compute shortest path trees $T_s, T_t$ leaving $s, t$ respectively.
Applying the weighted restoration lemma (Theorem \ref{thm:wtdrestorationintro}), every edge $(u, v)$ \emph{uniquely} defines a candidate $s \leadsto t$ replacement path, by concatenating the $s \leadsto u$ path in $T_s$ to the edge $(u, v)$ to the $v \leadsto t$ path in $T_t$.
Notice that, after computing distances from $s$ and $t$, we can determine the length of this candidate path in constant time.
We may therefore quickly sort the edges $\{(u, v)\}$ by length of their associated candidate replacement path.
We then consider the edges along the true shortest path $\pi(s, t)$; initially all these edges are \emph{unlabeled}.
Stepping through each edge $(u, v)$ in our list in order, we look at its associated candidate replacement path $q \setminus (u, v)$.
For every still-unlabeled edge $e \in \pi(s, t)$ that is \emph{not} included in $q$, we label that edge with the length of $q$.
That is: since $q$ is the shortest among all candidate replacement paths that avoids $e$, it must witness the $s \leadsto t$ distance in the event that $e$ fails.
It is nontrivial to quickly determine the edge(s) along $\pi(s, t)$ that are both unlabeled and not contained in $q$.
This requires careful encoding of the shortest path trees $T_s, T_t$ in certain data structures.
We will not sketch these details; we refer to \cite{HS01} for more information.
\end{proof}
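The candidate-enumeration idea behind this algorithm can be sketched in a brute-force form that skips the data structures needed for the $\widetilde{O}(m+n)$ bound. This is our simplified illustration (all names are ours), using tiny random perturbations for unique shortest paths rather than the exact machinery of \cite{HS01}: each edge, in each orientation, proposes one candidate replacement path, and a candidate is charged to a failing edge only if both of its tree segments avoid that edge.

```python
import heapq, random

def dijkstra(adj, src):
    """Weighted shortest paths from src; returns (dist, parent) maps."""
    dist, parent = {src: 0.0}, {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, wt in adj[u]:
            nd = d + wt
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def tree_path_edges(parent, u):
    """Edges (as frozensets) on the shortest path tree branch down to u."""
    out = set()
    while parent[u] is not None:
        out.add(frozenset((u, parent[u])))
        u = parent[u]
    return out

def replacement_dists(nodes, edges, s, t, seed=0):
    """For every failing edge e on pi(s, t), the s-t replacement distance,
    via the one-candidate-per-edge rule of the weighted restoration lemma:
    each oriented edge (u, v) proposes pi(s, u) + (u, v) + pi(v, t)."""
    rng = random.Random(seed)
    wt = {frozenset(e): 1 + rng.uniform(0, 1e-9) for e in edges}  # unique SPs
    adj = {x: [] for x in nodes}
    for u, v in edges:
        adj[u].append((v, wt[frozenset((u, v))]))
        adj[v].append((u, wt[frozenset((u, v))]))
    ds, ps = dijkstra(adj, s)
    dt, pt = dijkstra(adj, t)
    ans = {}
    for e in tree_path_edges(ps, t):          # failing edges on pi(s, t)
        best = float("inf")
        for u, v in edges:
            for a, b in ((u, v), (v, u)):     # both orientations of the middle edge
                if frozenset((a, b)) == e:
                    continue
                if e in tree_path_edges(ps, a) or e in tree_path_edges(pt, b):
                    continue                  # candidate does not avoid e
                best = min(best, ds[a] + wt[frozenset((a, b))] + dt[b])
        ans[e] = best
    return ans

# 5-cycle: pi(0, 2) = 0-1-2; each of its two edges fails in turn,
# and the replacement route 0-4-3-2 has length 3.
ans = replacement_dists(range(5), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 0, 2)
```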
We then use Algorithm \ref{alg:srp} to solve $\srp$ in the general setting.
\begin{algorithm}
\caption{$\srp$ Algorithm \label{alg:srp}}
\label{ALG:basic}
\begin{algorithmic}[1]
\REQUIRE Undirected unweighted graph $G = (V, E)$ on $n$ vertices, vertex subset $S \subseteq V$ of size $|S| = \sigma$.
\STATE Let $\pi$ be a consistent stable $1$-restorable RPTS.
\FORALL{$s \in S$}
\STATE Compute an outgoing shortest path tree $T_s$, rooted at $s$, using the (non-faulty) shortest paths selected by $\pi$.
\ENDFOR
\FORALL{$s_1, s_2 \in S$}
\STATE Using Theorem \ref{thm:singlepairalg}, solve $\srp$ on vertices $\{s_1, s_2\}$ in the graph $T_{s_1} \cup T_{s_2}$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{theorem}
Given an $n$-vertex, $m$-edge graph $G$ and $|S| = \sigma$ source vertices, Algorithm \ref{alg:srp} solves $\srp$ in $O(\sigma m) + \widetilde{O}(\sigma^2 n)$ time.
\end{theorem}
\begin{proof}
First we show correctness.
Since $\pi$ is $1$-restorable, for any $s_1, s_2 \in S$ and failing edge $e$, there exists a vertex $x$ such that $\pi(s_1, x) \cup \pi(s_2, x)$ forms a shortest $s_1 \leadsto s_2$ path in $G \setminus \{e\}$.
Since $\pi(s_1, x), \pi(s_2, x)$ are both contained in the graph $T_{s_1} \cup T_{s_2}$, the algorithm correctly outputs a shortest $s_1 \leadsto s_2$ replacement path avoiding $e$.
We now analyze runtime.
Computing $\pi$ takes $O(m)$ time, under Theorem \ref{thm:rwtexists}, due to the selection of random edge weights.\footnote{We analyze in the real-RAM computational model, but by the discussion in Section \ref{sec:detbit}, the (randomized) time to compute $\pi$ would be $\widetilde{O}(m)$ if one attends to bit precision.}
Using Dijkstra's algorithm, it takes $O(m + n \log n)$ time to compute an outgoing shortest path tree $T_s$ in the reweighted graph $G^*$ for each of the $|S| = \sigma$ source vertices, which costs $O(\sigma(m + n \log n))$ time in total.
Finally, we solve $\srp$ for each pair of vertices $s_1, s_2 \in S$, on the graph $T_{s_1} \cup T_{s_2}$ which has only $O(n)$ edges.
By Theorem \ref{thm:singlepairalg} each pair requires $\widetilde{O}(n)$ time, for a total of $\widetilde{O}(\sigma^2 n)$ time.
\end{proof}
\subsection{Distance Labeling Schemes}
In this section, we show how to use our $f$-restorable RPTSes to provide fault-tolerant (exact) distance labels of sub-quadratic size. We have:
\begin{theorem}
For any fixed integer $f \geq 0$ and any $n$-vertex unweighted undirected graph, there is an $(f+1)$-FT distance labeling scheme that assigns each vertex a label of
$$O\left(n^{2 - 1/2^f} \log n\right) \text{ bits}.$$
\end{theorem}
\begin{proof}
Let $\pi$ be an $(f+1)$-RPTS from Theorem \ref{thm:restorabletiebreaking} on the input graph $G = (V, E)$ that is simultaneously stable, consistent, and $(f+1)$-restorable.
For each vertex $s \in V$, compute an $f$-FT $\{s\} \times V$ preserver by overlaying the $\{s\} \times V$ replacement paths selected by $\pi$ with respect to $\le f$ edge failures.
The label of $s$ just explicitly stores the edges of this preserver; the bound on label size comes from the number of edges in Theorem \ref{thm:expftbfs} and the fact that $O(\log n)$ bits are required to describe each edge.
To compute $\dist_{G \setminus F}(s, t)$ for $|F| \le f+1$, we simply read the labels of $s, t$ to determine their associated preservers and we union these together.
By the $(f+1)$-restorability of $\pi$, there exists a valid $s \leadsto t$ replacement path avoiding $F$ formed by concatenating two paths of the form $\pi(s, x \mid F'), \pi(t, x \mid F')$ with $|F'| \le f$, which are included in the preservers of $s$ and $t$ respectively.
So, it suffices to remove the edges of $F$ from the union of these two preservers and then compute $\dist(s, t)$ in the remaining graph.
\end{proof}
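The query procedure of this labeling scheme can be sketched as follows, assuming labels are stored simply as sets of edges (a hypothetical interface for illustration; real labels would be bit strings of $O(\log n)$ bits per edge):

```python
from collections import deque

def ft_label_query(label_s, label_t, F, s, t):
    """Answer dist_{G \ F}(s, t) from the two labels, each a set of
    frozenset edges (the stored preserver of that vertex): union the
    preservers, drop the faulty edges, and BFS (graph is unweighted)."""
    edges = (label_s | label_t) - {frozenset(e) for e in F}
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")
```

By the restorability argument in the proof, the union of the two preservers contains a valid $s \leadsto t$ replacement path, so the BFS distance equals $\dist_{G \setminus F}(s, t)$.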
\subsection{Applications in Fault-Tolerant Network Design}
We next show applications in fault-tolerant network design.
The following theorem extends Theorem 2 of \cite{bodwin2017preserving}, which was proved (by a very different argument) for $f=1$, to all $f\geq 1$.
\begin{theorem} \label{thm:oneftss}
Given an $n$-vertex graph $G = (V, E)$, a set of source vertices $S \subseteq V$, and a fixed nonnegative integer $f$, there is an $(f+1)$-FT $S \times S$ distance preserver of $G, S$ on $O\left( n^{2-1/2^f} |S|^{1/2^f} \right)$ edges.
\end{theorem}
\begin{proof}
Using Theorem \ref{thm:restorabletiebreaking}, let $\pi$ be an $(f+1)$-RPTS that is simultaneously stable, consistent, and $(f+1)$-restorable.
The construction is to build an $f$-FT $S \times V$ preserver by overlaying all $S \times V$ replacement paths selected by $\pi$ with respect to $\le f$ edge faults.
By Theorem \ref{thm:expftbfs}, this has the claimed number of edges.
To prove correctness of the construction, we invoke restorability.
Consider some vertices $s, t \in S$ and a set $F$ of $|F| \le f+1$ edge faults.
Since $\pi$ is $(f+1)$-restorable, there is a valid $s \leadsto t$ replacement path avoiding $F$ that is the concatenation of two shortest paths of the form $\pi(s, x \mid F'), \pi(t, x \mid F')$, where $|F'| \le f$ and $x \in V$.
These replacement paths are added as part of the $f$-FT preservers, and hence the union of the preservers includes a valid $s \leadsto t$ replacement path avoiding $F$.
\end{proof}
We can then plug these subset distance preservers into a standard application to additive spanners.
We will black-box the relationship between subset preservers and additive spanners, so that it may be applied again when we give distributed constructions below.
The following lemma is standard in the literature on spanners.
\begin{lemma} \label{lem:prestospan}
Suppose that we can construct an $f$-FT $S \times S$ distance preserver on $g(n, \sigma, f)$ edges, for any set of $|S|=\sigma$ vertices in an $n$-vertex graph $G$.
Then $G$ has an $f$-FT $+4$ additive spanner on $O(g(n, \sigma, f) + nf + n^2 f/\sigma)$ edges.
\end{lemma}
\begin{proof}
For simplicity, we will give a randomized construction where the error bound holds deterministically and the edge bound holds in expectation.
Naturally, one can repeat the construction $O(\log n)$ times and select the sparsest output spanner to boost the edge bound to a high-probability guarantee.
Let $C \subseteq V$ be a subset of $\sigma$ vertices, selected uniformly at random.
Call the vertices in $C$ \emph{cluster centers}.
For each vertex $v \in V$, if it has at least $f+1$ neighbors in $C$, then add an arbitrary $f+1$ edges connecting $v$ to vertices in $C$ to the spanner $H$, and we will say that $v$ is \emph{clustered}.
Otherwise, if $v$ has $\le f$ neighbors in $C$, then add all edges incident to $v$ to the spanner, and we will say that $v$ is \emph{unclustered}.
The second and final step in the construction is to add an $f$-FT subset distance preserver over the vertex set $C$ to the spanner $H$.
\paragraph{Spanner Size.}
By hypothesis, the subset distance preserver in the construction costs $g(n, \sigma, f)$ edges, which gives the first term in our claimed edge bound.
Consider an arbitrary node $v$, and let us count the expected number of edges added due to $v$ in the first step of the construction.
We add $O(f)$ edges per \emph{clustered} node, for $O(nf)$ edges in total, which gives the second term in our edge bound.
We then analyze the edges added due to unclustered nodes; this part is a little more complicated, but still standard in the area (e.g., \cite{BKMP10}).
Let $v$ be an arbitrary node, let $N(v)$ be its neighborhood, and suppose $\deg(v) \ge 2f$.
Our goal is to bound the expected number of edges contributed by $v$ being unclustered, which is the quantity
$$\Pr\left[ |\{c \in N(v) \ \mid \ c \text{ cluster center}\}| \le f\right] \cdot \deg(v).$$
To compute this probability, we can imagine $\sigma$ sequential experiments, in which the next cluster center is randomly selected among the previously un-selected nodes.
The experiment is ``successful'' if the cluster center is one of the $\Theta(\deg(v))$ unselected nodes in $N(v)$, or ``unsuccessful'' if it is one of the $\Theta(n)$ other unselected nodes.
Thus each experiment succeeds with probability $\Theta(\deg(v)/n)$.
By standard Chernoff bounds, the probability that $\le f$ experiments succeed is
$$O\left(\frac{f}{\deg(v)} \cdot \frac{n}{\sigma}\right).$$
Thus the expected number of edges added due to $v$ unclustered is $O(nf/\sigma)$, and summing over the $n$ nodes in the graph, the total is $O(n^2 f/\sigma)$.
\paragraph{Spanner Correctness.}
Now we prove that $H$ is an $f$-FT $+4$ spanner of $G$ (deterministically).
Consider any two vertices $s, t$ and a set of $|F| \le f$ edge faults, and let $q = q(s, t \mid F)$ be any replacement path between them.
Let $x$ be the first clustered vertex and $y$ the last clustered vertex in $q$.
Let $c_x, c_y$ be cluster centers adjacent to $x, y$, respectively, in the graph $G \setminus F$ (since $x, y$ are each adjacent to $f+1$ cluster centers initially, at least one adjacency still holds after $F$ is removed).
We then have:
\begin{align*}
\dist_{H \setminus F}(s, t) &\le \dist_{H \setminus F}(s, x) + \dist_{H \setminus F}(x, y) + \dist_{H \setminus F}(y, t)\\
&= \dist_{G \setminus F}(s, x) + \dist_{H \setminus F}(x, y) + \dist_{G \setminus F}(y, t) \tag*{all unclustered edges in $H$}\\
&\le \dist_{G \setminus F}(s, x) + \left(2 + \dist_{H \setminus F}(c_x, c_y) \right) + \dist_{G \setminus F}(y, t) \tag*{triangle inequality}\\
&= \dist_{G \setminus F}(s, x) + \left(2 + \dist_{G \setminus F}(c_x, c_y) \right) + \dist_{G \setminus F}(y, t) \tag*{$C \times C$ preserver}\\
&\le \dist_{G \setminus F}(s, x) + \left(4 + \dist_{G \setminus F}(x, y) \right) + \dist_{G \setminus F}(y, t) \tag*{triangle inequality}\\
&= \dist_{G \setminus F}(s, t) + 4.
\end{align*}
where the last equality follows since $x, y$ lie on a valid $s \leadsto t$ replacement path.
\end{proof}
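The clustering phase of the construction in the proof can be sketched as follows (function name and graph representation are our own choices; the subset-preserver phase is omitted):

```python
import random

def cluster_step(adj, sigma, f, rng=None):
    """First phase of the spanner construction from the lemma:
    sample sigma cluster centers; a vertex with at least f+1
    neighbors among the centers keeps f+1 such edges (clustered),
    otherwise it keeps all incident edges (unclustered).
    Returns the partial spanner edge set H, the clustered
    vertices, and the centers C."""
    rng = rng or random.Random(0)
    C = set(rng.sample(sorted(adj), sigma))
    H, clustered = set(), set()
    for v in adj:
        into_C = [u for u in adj[v] if u in C]
        if len(into_C) >= f + 1:
            clustered.add(v)
            for u in into_C[:f + 1]:
                H.add(frozenset((u, v)))
        else:
            for u in adj[v]:
                H.add(frozenset((u, v)))
    return H, clustered, C
```

The full construction of the lemma would then add an $f$-FT $C \times C$ subset distance preserver on top of $H$.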
Using this, we get:
\begin{theorem}\label{thm:additive}
For any $n$-vertex graph $G = (V, E)$ and nonnegative integer $f$, there is an $(f+1)$-FT $+4$ additive spanner on $O_f\left(n^{1 + 2^f / (2^f + 1)}\right)$ edges.
\end{theorem}
\begin{proof}
We simply apply Lemma \ref{lem:prestospan} to the subset distance preservers from Theorem \ref{thm:oneftss}, balancing parameters by choosing $\sigma := n^{1/(2^f + 1)}$.
From Theorem \ref{thm:oneftss}, the size of the subset distance preserver is
\begin{align*}
O_f\left(n^{2 - 1/2^f} \cdot \left(n^{1/(2^f + 1)}\right)^{1/2^f}\right) &= O_f\left(n^{2 - 1/2^f + 1/(2^f \cdot (2^f + 1))}\right)\\
&= O_f\left(n^{2 - 1/(2^f + 1)}\right)\\
&= O_f\left(n^{1 + 2^f / (2^f + 1)}\right).
\end{align*}
The $O(nf)$ term in Lemma \ref{lem:prestospan} is dominated by this bound, and the $O(n^2 f/\sigma)$ term is again
\begin{align*}
O_f\left(n^{2 - 1/(2^f+1)}\right) = O_f\left(n^{1 + 2^f / (2^f + 1)}\right),
\end{align*}
and the theorem follows.
\end{proof}
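As a sanity check on the exponent arithmetic (an illustration we add here, not part of the proof), the three exponents can be compared with exact rational arithmetic:

```python
from fractions import Fraction

def exponents(f):
    """With sigma = n^{1/(2^f+1)}: the preserver bound
    n^{2-1/2^f} * sigma^{1/2^f}, the n^2/sigma term, and the
    claimed bound n^{1+2^f/(2^f+1)} should all have the same
    exponent of n."""
    p = 2 ** f
    sigma_exp = Fraction(1, p + 1)
    preserver = Fraction(2) - Fraction(1, p) + sigma_exp * Fraction(1, p)
    leftover = Fraction(2) - sigma_exp      # exponent of n^2 / sigma
    target = Fraction(1) + Fraction(p, p + 1)
    return preserver, leftover, target
```

For instance, $f=1$ gives $2 - \tfrac12 + \tfrac16 = 2 - \tfrac13 = 1 + \tfrac23 = \tfrac53$ in all three cases.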
\input{distributed-cons.tex}
\FloatBarrier
\appendix
\section{Impossibility of Symmetry and Restorability}
As observed by Afek et al.~\cite{ABKCM02}, one cannot generally have restorability and symmetry at the same time:
\begin{theorem}\label{thm:impossible}
There are input graphs that do not admit a tiebreaking scheme that is simultaneously symmetric and $1$-restorable.
\end{theorem}
\begin{proof}
The simplest example is a $C_4$:
\begin{center}
\begin{tikzpicture}
\draw [fill=black] (3, 0) circle [radius=0.15];
\draw [fill=black] (4, 0) circle [radius=0.15];
\node at (3.5, 0) {\Huge $\mathbf \times$};
\node at (3, -0.5) {$s$};
\node at (4, -0.5) {$t$};
\draw [fill=black] (3, 1) circle [radius=0.15];
\draw [fill=black] (4, 1) circle [radius=0.15];
\node at (3, 1.5) {$x$};
\node at (4, 1.5) {$y$};
\draw (3, 0) -- (3, 1) -- (4, 1) -- (4, 0) -- cycle;
\end{tikzpicture}
\end{center}
Assume $\pi$ is symmetric and consider the selected non-faulty shortest paths $\pi(s, y)$ and $\pi(x, t)$ going between the two opposite corners.
These paths must intersect on an edge; without loss of generality, suppose this edge is $(s, t)$.
Then $\pi(s, t)$ is just the single edge $(s, t)$.
Suppose this edge fails, and so the unique replacement $s \leadsto t$ path is $q = (s, x, y, t)$.
Since both $\pi(s, y)$ and $\pi(x, t)$ use the edge $(s, t)$, this path does not decompose into two non-faulty shortest paths selected by $\pi$.
Hence $\pi$ is not $1$-restorable.
\end{proof}
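The case analysis above can also be verified exhaustively; the following sketch (our own encoding of the $C_4$ instance, with vertices $s,x,y,t$ renamed $0,\ldots,3$ around the cycle) enumerates all four symmetric tiebreaking schemes and checks $1$-restorability by brute force:

```python
from itertools import product

# C_4 on vertices 0..3 in cyclic order.
EDGES = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]

def path_edges(p):
    return {frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)}

# Adjacent pairs are forced to use the single connecting edge, so a
# symmetric scheme is fixed by its two diagonal choices.
DIAG = {(0, 2): ([0, 1, 2], [0, 3, 2]), (1, 3): ([1, 2, 3], [1, 0, 3])}

def schemes():
    for c02, c13 in product(DIAG[(0, 2)], DIAG[(1, 3)]):
        pi = {(u, v): [u, v] for u in range(4) for v in range(4)
              if frozenset((u, v)) in EDGES}
        for p in (c02, c13):
            pi[(p[0], p[-1])] = p
            pi[(p[-1], p[0])] = p[::-1]   # symmetry
        for v in range(4):
            pi[(v, v)] = [v]
        yield pi

def one_restorable(pi):
    # For every pair (s,t) and single fault e, some concatenation
    # pi(s,w) + reverse(pi(t,w)) must avoid e and form a simple path;
    # in C_4 minus an edge a simple s-t path is unique, hence shortest.
    for s in range(4):
        for t in range(4):
            if s == t:
                continue
            for e in EDGES:
                ok = False
                for w in range(4):
                    p1, p2 = pi[(s, w)], pi[(t, w)]
                    q = p1 + p2[::-1][1:]
                    qe = path_edges(p1) | path_edges(p2)
                    if e not in qe and len(set(q)) == len(q):
                        ok = True
                if not ok:
                    return False
    return True
```

All four schemes fail, matching the argument in the proof.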
\section{Lower bound for $f$-failures preservers with a consistent and stable tie-breaking scheme \label{app:conslb}}
In this section we prove Theorem \ref{thm:cslowerbound} by giving a lower bound construction for $S \times V$ distance preservers using a consistent and stable tie-breaking scheme.
We begin by showing the construction for the single source case (i.e., $\sigma=1$) and then extend it to the case of multiple sources.
Our construction is based on the graph $G_f(d)=(V_f,E_f)$, defined inductively.
For $f=1$, $G_1(d)=(V_1, E_1)$ consists of three components:
\begin{enumerate}[noitemsep]
\item a set of vertices $U=\{u^1_1,\ldots,u^1_d\}$ connected by a path
$P_1=[u^1_1, \ldots, u^1_d]$,
\item a set of terminal vertices $Z=\{z_1,\ldots,z_d\}$
(viewed by convention as ordered from left to right),
\item a collection of $d$ vertex disjoint paths $\{Q^1_{i}\}$,
where each path $Q^1_{i}$ connects $u^1_i$ and $z_i$ and has length $d-i+1$, for every $i \in \{1, \ldots, d\}$.
\end{enumerate}
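The construction of $G_1(d)$ can be sketched directly; this also checks, for $f=1$, the equal-length property of root-to-leaf paths claimed in Lemma \ref{lem:prop_induc_path}(4) (vertex names are our own):

```python
from collections import deque

def build_G1(d):
    """Build G_1(d) as an adjacency dict, mirroring the construction:
    a path P_1 = [u_1..u_d], terminals z_1..z_d, and disjoint paths
    Q_i of length d-i+1 joining u_i to z_i."""
    adj = {}
    def add_edge(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    for i in range(1, d):
        add_edge(('u', i), ('u', i + 1))
    for i in range(1, d + 1):
        # Q_i has d-i+1 edges, hence d-i internal vertices.
        prev = ('u', i)
        for j in range(1, d - i + 1):
            add_edge(prev, ('q', i, j))
            prev = ('q', i, j)
        add_edge(prev, ('z', i))
    return adj

def bfs_dist(adj, s):
    dist = {s: 0}
    dq = deque([s])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist
```

Each root-to-leaf path uses $i-1$ edges of $P_1$ and $d-i+1$ edges of $Q_i$, hence has length exactly $d$; the graph is a tree, so these paths are unique.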
The vertex $\mbox{\tt r}(G_1(d))=u^1_1$ is fixed as the root of $G_1(d)$, hence
the edges of the paths $Q^1_i$ are viewed as directed away from $u^1_i$,
and the terminal vertices of $Z$ are viewed as the \emph{leaves} of the graph,
denoted $\mbox{\tt Leaf}(G_1(d))=Z$.
Overall, the vertex and edge sets of $G_1(d)$ are
$V_1=U \cup Z \cup \bigcup_{i=1}^d V(Q^1_i)$ and
$E_1=E(P_1) \cup \bigcup_{i=1}^d E(Q^1_i)$.
For ease of future analysis, we assign labels to the leaves
$z_i \in \mbox{\tt Leaf}(G_1(d))$.
We define a labeling function $\mbox{\tt Label}_f: \mbox{\tt Leaf}(G_f(d)) \to E(G_f(d))^f$.
The label of each leaf corresponds to a set of edge faults under which
the path from root to leaf is still maintained (as will be proved later on).
Specifically, $\mbox{\tt Label}_1(z_i, G_1(d))=(u^1_i,u^1_{i+1})$ for $i \in [1,d-1]$.
In addition, define
$P(z_i,G_1(d)) = P_1[\mbox{\tt r}(G_1(d)),u^1_i] \circ Q^1_i$
to be the path from the root $u^1_1$ to the leaf $z_i$.
To complete the inductive construction, let us describe the construction
of the graph $G_{f}(d)=(V_{f}, E_{f})$, for $f\ge 2$,
given the graph $G_{f-1}(\sqrt{d})=(V_{f-1}, E_{f-1})$.
The graph $G_{f}(d)=(V_{f}, E_{f})$ consists of the following components.
First, it contains a path $P_f=[u^f_1, \ldots, u^f_d]$, where
the vertex $\mbox{\tt r}(G_{f}(d))=u^f_1$ is fixed to be the root.
In addition, it contains $d$ disjoint copies of the graph $G'=G_{f-1}(\sqrt{d})$,
denoted by $G'_1, \ldots, G'_d$
(viewed by convention as ordered from left to right),
where each $G'_i$ is connected to $u^f_i$ by a collection of $d$
vertex disjoint paths $Q^f_i$, for $i \in \{1, \ldots, d\}$,
connecting the vertices $u^f_i$ with $\mbox{\tt r}(G'_i)$.
The length of $Q^f_i$ is $d-i+1$, and
the leaf set of the graph $G_{f}(d)$ is the union of the leaf sets of $G'_j$'s,
$\mbox{\tt Leaf}(G_{f}(d))=\bigcup_{j=1}^d \mbox{\tt Leaf}(G'_j)$.
Next, define the labels $\mbox{\tt Label}_f(z)$ for each $z \in \mbox{\tt Leaf}(G_{f}(d))$.
For every $j \in \{1, \ldots, d\}$ and any leaf $z_{j,i} \in \mbox{\tt Leaf}(G'_j)$,
let $\mbox{\tt Label}_f(z_{j,i}, G_{f}(d))=(u^f_j,u^f_{j+1}) \circ \mbox{\tt Label}_{f-1}(z_{j,i}, G'_j)$.
Denote the size (number of vertices) of $G_f(d)$ by $\mbox{\tt N}(f,d)$,
its depth (maximum distance between the root vertex $\mbox{\tt r}(G_f(d))$ to a leaf vertex in $\mbox{\tt Leaf}(G_f(d))$) by $\mbox{\tt depth}(f,d)$, and its number of leaves by $\mbox{\tt nLeaf}(f,d) = |\mbox{\tt Leaf}(G_f(d))|$.
Note that for $f=1$,
$\mbox{\tt N}(1,d) = 2d+d^2 \leq 2d^2$,
$\mbox{\tt depth}(1,d)=d$ and $\mbox{\tt nLeaf}(1,d)=d$.
We now observe that the following inductive relations hold.
\begin{observation}
\label{obs:rel}
(a) $\mbox{\tt depth}(f,d)=O(d)$, (b) $\mbox{\tt nLeaf}(f,d)=d^{2-1/2^{f-1}}$ and (c) $\mbox{\tt N}(f,d)\leq 2f\cdot d^2$.
\end{observation}
\begin{proof}
(a) follows by the length of $Q^f_i$, which implies that
$\mbox{\tt depth}(f,d)=d+\mbox{\tt depth}(f-1,\sqrt{d})\leq 2d$.
(b) follows by the fact that the terminals of the paths starting with
$u_1^f, \ldots, u_d^f$ are the terminals of the graphs $G'_1, \ldots, G'_d$
which are disjoint copies of $G_{f-1}(\sqrt{d})$, so $\mbox{\tt nLeaf}(f,d)=d \cdot \mbox{\tt nLeaf}(f-1,\sqrt{d})$.
(c) follows by summing the vertices in the $d$ copies of $G'_i$
(yielding $d \cdot \mbox{\tt N}(f-1,\sqrt{d})$) and the vertices in $d$ vertex disjoint paths,
namely $Q^f_1, \ldots, Q^f_d$ of total $d^2$ vertices,
yielding $\mbox{\tt N}(f,d)=d \cdot \mbox{\tt N}(f-1,\sqrt{d})+d^2\leq 2fd^2$.
\end{proof}
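The recurrences behind the observation can be checked numerically when $d$ is a repeated square, so that $\sqrt{d}$ stays integral at every level (an illustration with our own function names):

```python
def stats(f, d):
    """Evaluate the recurrences: depth(1,d)=d, nLeaf(1,d)=d,
    N(1,d)=2d+d^2, and each quantity for (f,d) built from
    (f-1, sqrt(d)) as in the construction of G_f(d)."""
    if f == 1:
        return d, d, 2 * d + d * d
    r = round(d ** 0.5)
    assert r * r == d, "d must be a perfect square at every level"
    dep, lv, sz = stats(f - 1, r)
    return d + dep, d * lv, d * sz + d * d

# Example: f = 3, d = 256, sqrt chain 256 -> 16 -> 4.
dep, lv, sz = stats(3, 256)
```

For this instance the closed forms give $\mbox{\tt nLeaf}(3,256) = 256^{2-1/4} = 2^{14} = 16384$, with depth at most $2d$ and size at most $2f d^2$.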
Consider the set of leaves in $G_f(d)$, namely,
$\mbox{\tt Leaf}(G_f(d)) = \bigcup_{i=1}^d \mbox{\tt Leaf}(G'_i) = \{z_1, \ldots, z_\lambda\}$,
ordered from left to right according to their appearance in $G_f(d)$.
For every leaf vertex $z \in \mbox{\tt Leaf}(G_f(d))$, we define inductively a path $P(z, G_f(d))$ connecting the root $\mbox{\tt r}(G_{f}(d))=u^f_1$ with the leaf $z$. As described above for $f=1$, $P(z_i,G_1(d)) = P_1[\mbox{\tt r}(G_1(d)),u^1_i] \circ Q^1_i$. Consider a leaf $z \in \mbox{\tt Leaf}(G_f(d))$ such that $z$ is the $i^{th}$ leaf in the graph $G'_j$. We therefore denote $z$ as $z_{j,i}$, and define $P(z_{j,i},G_f(d)) = P_f[\mbox{\tt r}(G_f(d)),u^f_j] \circ Q^f_j \circ P(z_{j,i},G'_j)$. We next claim the following on these paths.
\begin{lemma}
\label{lem:prop_induc_path}
For every leaf $z_{j,i} \in \mbox{\tt Leaf}(G_f(d))$ it holds that: \\
(1) The path $P(z_{j,i}, G_f(d))$ is the only $u^f_1-z_{j,i}$ path in $G_f(d)$.\\
(2) $P(z_{j,i}, G_f(d)) \subseteq G \setminus \left(\bigcup_{\ell \geq i}\mbox{\tt Label}_f(z_{j,\ell}, G_f(d)) \cup \bigcup_{k > j, \ell \in [1,\mbox{\tt nLeaf}(f-1,\sqrt{d})]}\mbox{\tt Label}_f(z_{k,\ell}, G_f(d))\right)$.\\
(3) $P(z_{j,i}, G_f(d)) \not\subseteq G \setminus \mbox{\tt Label}_f(z_{k,\ell}, G_f(d))$
for $k<j$ and every $\ell \in [1,\mbox{\tt nLeaf}(f-1,\sqrt{d})]$, as well as for $k= j$ and every $\ell\in [1, i-1]$.\\
(4) $|P(z, G_f(d))| = |P(z', G_f(d))|$ for every $z,z' \in \mbox{\tt Leaf}(G_f(d))$.
\end{lemma}
\begin{proof}
We prove the claims by induction on $f$.
For $f=1$, the lemma holds by construction.
Assume this holds for every $f' \leq f-1$ and consider $G_f(d)$.
Recall that $P_f=[u^f_1, \ldots, u^f_d]$, and let $G'_1, \ldots, G'_d$ be $d$ copies
of the graph $G_{f-1}(\sqrt{d})$, viewed as ordered from left to right,
where $G'_j$ is connected to $u^f_j$. That is, there are disjoint paths $Q^f_j$
connecting $u^f_j$ and $\mbox{\tt r}(G'_j)$, for every $j\in \{1,\ldots, d\}$.
Consider a leaf vertex $z_{j,i}$, the $i^{th}$ leaf vertex in $G'_j$. By the inductive assumption, there exists a single path $P(z_{j,i}, G'_j)$
between the root $\mbox{\tt r}(G'_j)$ and the leaf $z_{j,i}$, for every $j \in \{1,\ldots, d\}$.
We now show that there is a single path between $\mbox{\tt r}(G_f(d)) = u^f_1$
and $z_{j,i}$ for every $j\in \{1,\ldots, d\}$.
Since there is a single path $P'$ connecting $\mbox{\tt r}(G_f(d))$ and $\mbox{\tt r}(G'_j)$ given by $P'=P_f[u^f_1, u^f_j]\circ Q^f_j$, it follows that
$P(z_{j,i}, G_f(d))=P' \circ P(z_{j,i}, G'_j)$ is a unique path in $G_f(d)$.
We now show (2). We first show that $P(z_{j,i}, G_f(d)) \subseteq G \setminus \bigcup_{\ell \geq i ~\mid~ z_{j,\ell} \in \mbox{\tt Leaf}(G'_j)} \mbox{\tt Label}_f(z_{j,\ell}, G_f(d))$. By the inductive assumption,
$P(z_{j,i}, G'_j) \subseteq G \setminus \bigcup_{\ell \geq i} \mbox{\tt Label}_{f-1}(z_{j,\ell}, G'_j)$.
Since $\mbox{\tt Label}_f(z_{j,\ell}, G_f(d))=(u^f_{j}, u^f_{j+1}) \circ \mbox{\tt Label}_{f-1}(z_{j,\ell}, G'_j)$,
it remains to show that $e_j=(u^f_{j}, u^f_{j+1}) \notin P(z_{j,i}, G_f(d))$.
Since $P'$ diverges from $P_f$ at the vertex $u^f_j$,
it holds that $e_j, \ldots, e_{d-1} \notin P(z_{j,i}, G_f(d))$. We next complete the proof for every leaf vertex $z_{k,\ell} \in \mbox{\tt Leaf}(G'_k)$ with $k > j$ and every $\ell \in [1,\mbox{\tt nLeaf}(f-1,\sqrt{d})]$. The claim holds as $G'_j$ and $G'_k$ are edge-disjoint, and $e_j, \ldots, e_{d-1} \notin P(z_{j,i}, G_f(d))$.
Consider claim (3) and a leaf vertex $z_{j,i} \in \mbox{\tt Leaf}(G'_j)$ for some $j \in \{1,\ldots, d\}$ and $i \in [1,\mbox{\tt nLeaf}(f-1,\sqrt{d})]$. Let $Z_1=\{z_{j,\ell} \in \mbox{\tt Leaf}(G'_j) \mid \ell < i\}$ be the set
of leaves to the left of $z_{j,i}$ that belong to $G'_j$, and let
$Z_2=\{z_{k,\ell} \notin \mbox{\tt Leaf}(G'_j) \mid j > k\}$ be
the complementary set of leaves to the left of $z_{j,i}$. By the inductive assumption,
$P(z_{j,i}, G'_j) \nsubseteq G \setminus \mbox{\tt Label}_{f-1}(z_{j,\ell}, G'_j)$ for every $z_{j,\ell} \in Z_1$.
The claim holds for $Z_1$ as the order of the leaves in $G'_j$ agrees with their order in $G_f(d)$, and $\mbox{\tt Label}_{f-1}(z_{j,\ell}, G'_j) \subset \mbox{\tt Label}_{f}(z_{j,\ell}, G_f(d))$.
Next, consider the complementary leaf set $Z_2$ to the left of $z_{j,i}$.
Since for every $z_{k,\ell} \in Z_2$, the divergence point of $P(z_{k,\ell}, G_f(d))$ and $P_f$ is at $u^f_k$
for $k < j$, it follows that $e_k=(u^f_k, u^f_{k+1}) \in P(z_{j,i}, G_f(d))$,
and thus $P(z_{j,i}, G_f(d)) \nsubseteq G \setminus \mbox{\tt Label}_f(z_{k,\ell}, G_f(d))$
for every $z_{k,\ell} \in Z_2$. Finally, consider (4). By setting the length of the paths $Q^f_j$ to $d-j+1$ for every $j \in \{1,\ldots, d\}$, we have that $\dist(u^f_1, \mbox{\tt r}(G'_j))=d$ for every $j \in [1,d]$. The proof then follows by induction as well, since $|P(z_{j,i}, G'_j)|=|P(z_{k,\ell}, G'_k)|$ for every $k, j \in [1,d]$ and $i,\ell \in [1, \mbox{\tt nLeaf}(f-1,\sqrt{d})]$.
\end{proof}
Finally, we turn to describe the graph $G^*_f(V, E,W)$ which establishes our
lower bound, where $W$ is a particular bad edge weight function that determines the consistent tie-breaking scheme which provides the lower bound. The graph $G^*_f(V, E,W)$ consists of three components.
The first is the graph $G_{f}(d)$ for $d=\lfloor \sqrt{n/(4f)} \rfloor$.
By Obs. \ref{obs:rel}, $\mbox{\tt N}(f,d)=|V(G_{f}(d))|\leq n/2$.
The second component of $G^*_f(V, E,W)$ is a set of vertices
$X=\{x_1, \ldots, x_\chi\}$, where the last vertex of $P_f$, namely $u^f_d$, is
connected to all the vertices of $X$.
The cardinality of $X$ is $\chi=n-\mbox{\tt N}(f,d)-1$.
The third component of $G^*_f(V, E,W)$ is a complete bipartite graph $B$
connecting the vertices of $X$ with the leaf set $\mbox{\tt Leaf}(G_f(d))$, i.e.,
the disjoint leaf sets $\mbox{\tt Leaf}(G'_1), \ldots, \mbox{\tt Leaf}(G'_d)$.
We finally define the weight function $W: E \to [1,1+1/n^2)$. Let $W(e)=1$ for every $e \in E \setminus E(B)$.
The weights of the bipartite graph edges $B$ are defined as follows. Consider all leaf vertices $\mbox{\tt Leaf}(G_f(d))$ from left to right given by $\{z_1, \ldots, z_\lambda\}$. Then, $W(z_j,x_i)=1+(\lambda-j)/n^4$ for every $z_j$ and every $x_i \in X$.
The vertex set of the resulting graph is thus
$V=V(G_{f}(d))\cup \{v^{*}\} \cup X$ and hence $|V|=n$.
By Prop. (b) of Obs. \ref{obs:rel},
$\mbox{\tt nLeaf}(G_f(d))=d^{2-1/2^{f-1}}=\Theta((n/f)^{1-1/2^f}),$
hence $|E(B)|=\Theta((n/f)^{2-1/2^f})$.
We now complete the proof of Thm. \ref{thm:cslowerbound} for the single source case.
\begin{proof}[Thm. \ref{thm:cslowerbound} for $|S|=1$.]
Let $s=u^f_1$ be the chosen source in the graph $G^*_f(V, E,W)$.
We first claim that under the weights $W$, there is a unique shortest path, denoted by $\pi(s,x_i ~\mid~ F)$ for every $x_i \in X$ and every fault set $F \in \{\mbox{\tt Label}_f(z_1, G_f(d)), \ldots, \mbox{\tt Label}_f(z_\lambda, G_f(d))\}$. By Lemma \ref{lem:prop_induc_path}(1), there is a unique shortest path from $s$ to each $z_j \in \mbox{\tt Leaf}(G_f(d))$ denoted by $P(z_j, G_f(d))$.
In addition, by Lemma \ref{lem:prop_induc_path}(4), the unweighted lengths of all the $s$-$z_j$ paths are the same for every $z_j$. Since each $x_i$ is connected to each $z_j$ with a distinct edge weight in $[1,1+1/n^2)$, we get that each $x_i$ has a unique shortest path from $s$ in each subgraph $G \setminus \mbox{\tt Label}_f(z_j, G_f(d))$. Note that since the uniqueness of $\pi$ is provided by the edge weights, it is both consistent and stable. Also note that the perturbations in $W$ are sufficiently small that they serve only to break ties between paths of equal unweighted length.
We now claim that a collection of $\{s\} \times X$ replacement paths (chosen based on the weights of $W$) contains all edges of the bipartite graph $B$. Formally, letting
$$\mathcal{P}=\bigcup_{x_i \in X} \bigcup_{z_j \in \mbox{\tt Leaf}(G_f(d))} \pi(s,x_i ~\mid~ \mbox{\tt Label}_f(z_j, G_f(d)))~,$$
we will show that $E(B) \subseteq \bigcup_{P \in \mathcal{P}} P$ which will complete the proof.
To see this, we show that $\pi(s,x_i ~\mid~ \mbox{\tt Label}_f(z_j, G_f(d)))=P(z_j, G_f(d)) \circ (z_j,x_i)$. Indeed, by Lemma \ref{lem:prop_induc_path}(2), we have that $P(z_j, G_f(d)) \subseteq G \setminus \mbox{\tt Label}_f(z_j, G_f(d))$. It remains to show that the shortest $s$-$x_i$ path (based on edge weights) in $G \setminus \mbox{\tt Label}_f(z_j, G_f(d))$ goes through $z_j$.
By Lemma \ref{lem:prop_induc_path}(2,3), the only $z_k$ vertices in $\mbox{\tt Leaf}(G_f(d))$ that are connected to $s$ in $G \setminus \mbox{\tt Label}_f(z_j, G_f(d))$ are $\{z_1,\ldots, z_j\}$. Since $W(z_1,x_i) > W(z_2,x_i)> \ldots > W(z_j,x_i)$, we have that $(z_j, x_i)$ is the last edge of $\pi(s,x_i ~\mid~ \mbox{\tt Label}_f(z_j, G_f(d)))$. As this holds for every $x_i \in X$ and every $z_j \in \mbox{\tt Leaf}(G_f(d))$, the claim follows.
\end{proof}
\paragraph{Extension to multiple sources.}
Given a parameter $\sigma$ representing the number of sources,
the lower bound graph $G$ includes $\sigma$ copies, $G'_1, \ldots, G'_\sigma$, of $G_f(d)$,
where $d=O(\sqrt{n/(4f\sigma)})$.
By Obs. \ref{obs:rel}, each copy consists of at most $n/(2\sigma)$ vertices.
We now add to $G$ a collection $X$ of $\Theta(n)$ vertices connected to the $\sigma$ leaf sets
$\mbox{\tt Leaf}(G'_1), \ldots, \mbox{\tt Leaf}(G'_\sigma)$ by a complete bipartite graph $B'$. See Fig. \ref{fig:LB-graph-induc} for an illustration.
We adjust the size of the set $X$ in the construction so that $|V(G)|=n$.
Since $\mbox{\tt nLeaf}(G'_i)=\Omega((n / (f\sigma))^{1-1/2^f})$ (see Obs. \ref{obs:rel}),
overall $|E(G)| = \Omega(n \cdot \sigma \cdot \mbox{\tt nLeaf}(G_f(d))) =
\Omega(\sigma^{1/2^f}\cdot (n/f)^{2-1/2^f})$.
The weights of all graph edges not in $B'$ are set to $1$. For every $i \in \{1,\ldots, \sigma\}$, the edge weights of the bipartite graph $B_i=(\mbox{\tt Leaf}(G'_i),X)$ are set in the same manner as for the single source case.
Since the path from each source $s_i$ to $X$ cannot aid the vertices of $G'_j$
for $j \neq i$, the analysis of the single-source case can be applied
to show that each of the bipartite graph edges is necessary
under a certain set of at most $f$ edge faults.
This completes the proof of Thm. \ref{thm:cslowerbound}.
\begin{figure}
\caption{\sf Top: Illustration of the graphs $G_1(d)$ and $G_f(d)$. Each graph $G'_i$ is a copy of $G_{f-1}(\sqrt{d})$.}
\label{fig:LB-graph-induc}
\end{figure}
\begin{figure}
\caption{\sf Illustration of the lower bound graph $G^*_f(V,E,W)$ for $f=2$. The edge weights of the bipartite graph are monotone decreasing as a function of the leaf index from left to right.\label{fig:LB-graph-final}}
\end{figure}
\end{document}
\begin{document}
\title{The singularity category of a Nakayama algebra}
\keywords{Nakayama algebra, Resolution quiver, Singularity category}
\thanks{Supported by the National Natural Science Foundation of China (No. 11201446).}
\subjclass[2010]{Primary 16G10; Secondary 16D90, 18E30}
\author{Dawei Shen}
\address{School of Mathematical Sciences \\
University of Science and Technology of China \\
Hefei, Anhui 230026 \\
P. R. China}
\email{[email protected]}
\urladdr{http://home.ustc.edu.cn/~sdw12345/}
\date{\today}
\begin{abstract}
Let $A$ be a Nakayama algebra. We give a description of the singularity category of $A$ inside its stable module category. We prove that there is a duality between the singularity category of $A$ and the singularity category of its opposite algebra. As a consequence, the resolution quiver of $A$ and the resolution quiver of its opposite algebra have the same number of cycles and the same number of cyclic vertices.
\end{abstract}
\maketitle
\section{Introduction}
Let $A$ be an artin algebra. Denote by $A\mbox{-}\mathrm{mod}$ the category of finitely generated left $A$-modules, and by $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ the bounded derived category of $A\mbox{-}\mathrm{mod}$. Recall that a complex in $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ is \emph{perfect} provided that it is isomorphic to a bounded complex of finitely generated projective $A$-modules. Following \cite{Buc1987, Hap1991, Orl2004}, the \emph{singularity category} $\mathbf{D}_\mathrm{sg}(A)$ of $A$ is the quotient triangulated category of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ with respect to the full subcategory consisting of perfect complexes. The singularity category of a Nakayama algebra was recently described in \cite{CY2013}.
Let $A$ be a connected Nakayama algebra without simple projective modules. Following \cite{Rin2013}, the \emph{resolution quiver} $R(A)$ of $A$ is defined as follows: the vertex set is the set of isomorphism classes of simple $A$-modules, and there is an arrow from $S$ to $\tau \soc P(S)$ for each simple $A$-module $S$; see also \cite{Gus1985}. Here, $P(S)$ is the projective cover of $S$, `soc' denotes the socle of a module, and $\tau = D \Tr$ is the Auslander-Reiten translation \cite{ARS1995}. A simple $A$-module is called \emph{cyclic} provided that it lies in a cycle of $R(A)$.
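As an illustration (our own sketch, not part of the text): for a connected Nakayama algebra with cyclic quiver, the resolution quiver is commonly described via the Kupisch series $(c_1,\ldots,c_n)$ as the functional graph $i \mapsto i + c_i \pmod n$. Assuming that description (and 0-based indices), its cycles and cyclic vertices can be computed directly:

```python
def resolution_quiver_cycles(c):
    """Cycles of the resolution quiver of a cyclic Nakayama algebra,
    assuming the standard description via the Kupisch series
    c = (c_0, ..., c_{n-1}): the arrow out of vertex i goes to
    (i + c_i) mod n. Returns (number of cycles, set of cyclic vertices)."""
    n = len(c)
    f = {i: (i + c[i]) % n for i in range(n)}
    # In a functional graph, f^n(v) always lies on a cycle, and every
    # cyclic vertex arises this way.
    reach = list(range(n))
    for _ in range(n):
        reach = [f[v] for v in reach]
    cyclic = set(reach)
    # Count cycles by walking each cyclic vertex once.
    seen, cycles = set(), 0
    for v in cyclic:
        if v in seen:
            continue
        cycles += 1
        while v not in seen:
            seen.add(v)
            v = f[v]
    return cycles, cyclic
```

For a selfinjective Nakayama algebra with all projectives of length $n$ the map is the identity, so every simple module is cyclic.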
The following consideration is inspired by \cite{Rin2013}. Let $A$ be a connected Nakayama algebra of infinite global dimension. Let $\mathcal{S}_c$ be a complete set of pairwise non-isomorphic cyclic simple $A$-modules. Let $\mathcal{X}_c$ be the set formed by indecomposable $A$-modules $X$ such that $\operatorname{top} X$ and $\tau\soc X$ both belong to $\mathcal{S}_c$. Here, `top' denotes the top of a module. Denote by $\mathcal{F}$ the full subcategory of $A\mbox{-}\mathrm{mod}$ whose objects are finite direct sums of objects in $\mathcal{X}_c$. It turns out that $\mathcal{F}$ is a Frobenius abelian category, and it is equivalent to $A'\mbox{-}\mathrm{mod}$ with $A'$ a connected selfinjective Nakayama algebra. Denote by $\underline{\mathcal{F}}$ the stable category of $\mathcal{F}$ modulo projective objects; it is a triangulated category by \cite{Hap1988}. We emphasize that the stable category $\underline{\mathcal{F}}$ is a full subcategory of the stable module category $A\mbox{-}\underline{\mathrm{mod}}$ of $A$.
The well-known result of \cite{Buc1987, Hap1991} describes the singularity category of a Gorenstein algebra $A$ via the subcategory of $A\mbox{-}\underline{\mathrm{mod}}$ formed by Gorenstein projective modules. Here, we recall that an artin algebra is \emph{Gorenstein} if the injective dimension of the regular module is finite on both sides. In general, a Nakayama algebra is not Gorenstein \cite{Rin2013, CY2013}. The following result describes the singularity category of a Nakayama algebra via the subcategory $\underline{\mathcal{F}}$ of $A\mbox{-}\underline{\mathrm{mod}}$. For a Gorenstein Nakayama algebra, these two descriptions coincide; compare \cite{Rin2013}.
\begin{thm}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then
the singularity category $\mathbf{D}_\mathrm{sg}(A)$ and the stable category $\underline{\mathcal{F}}$ are triangle equivalent.
\end{thm}
Denote by $A\mbox{-}\mathrm{inj}$ the category of finitely generated injective $A$-modules, and by $\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ the bounded homotopy category of $A\mbox{-}\mathrm{inj}$. We view $\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ as a thick subcategory of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ via the canonical functor. Then the quotient triangulated category $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ is triangle equivalent to the opposite category of the singularity category $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$ of $A^\mathrm{op}$. Here, $A^\mathrm{op}$ is the opposite algebra of $A$. In general, it seems that for an arbitrary artin algebra $A$, there is no obvious relation between $\mathbf{D}_\mathrm{sg}(A)$ and $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$. However, we have the following result for a Nakayama algebra.
\begin{prop}
Let $A$ be a Nakayama algebra. Then the singularity category $\mathbf{D}_\mathrm{sg}(A)$ is triangle equivalent to $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$. Equivalently, there is a triangle duality between $\mathbf{D}_\mathrm{sg}(A)$ and $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$.
\end{prop}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Recall from \cite{Shen2014} that the resolution quivers $R(A)$ and $R(A^\mathrm{op})$ have the same number of cyclic vertices. The following result strengthens the previous one by a different method.
\begin{prop}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then the resolution quivers $R(A)$ and $R(A^\mathrm{op})$ have the same number of cycles and the same number of cyclic vertices.
\end{prop}
The paper is organized as follows. In Section 2, we recall some facts on singularity categories of artin algebras and the simplification in the sense of \cite{Rin1976}. In Section 3, we introduce the Frobenius subcategory $\mathcal{F}$ and prove Theorem 1.1. The proofs of Propositions 1.2 and 1.3 are given in Sections 4 and 5, respectively.
Throughout this paper, we fix a commutative artinian ring $R$. All categories, morphisms and functors are $R$-linear.
\section{Preliminaries}
We first recall some facts on the singularity category of an artin algebra.
Let $A$ be an artin algebra over $R$. Recall that $A\mbox{-}\mathrm{mod}$ denotes the category of finitely generated left $A$-modules. Let $A\mbox{-}\mathrm{proj}$ denote the full subcategory consisting of projective $A$-modules, and $A\mbox{-}\mathrm{inj}$ the full subcategory consisting of injective $A$-modules. Denote by $A\mbox{-}\underline{\mathrm{mod}}$ the projectively stable category of finitely generated $A$-modules; it is obtained from $A\mbox{-}\mathrm{mod}$ by factoring out the ideal of all maps which factor through projective $A$-modules; see \cite[IV.1]{ARS1995}.
Recall that for an $A$-module $M$, its \emph{syzygy} $\syz(M)$ is the kernel of its projective cover $P(M) \to M$. This gives rise to the \emph{syzygy functor} $\syz: A\mbox{-}\underline{\mathrm{mod}} \to A\mbox{-}\underline{\mathrm{mod}}$. Let $\syz^0(M) = M$ and $\syz^{i+1}(M) = \syz(\syz^i(M))$ for $i \geq 0$. Denote by $\syz^i(A\mbox{-}\mathrm{mod})$ the full subcategory of $A\mbox{-}\mathrm{mod}$ formed by modules $M$ such that there is an exact sequence $0 \to M \to P_{i-1} \to \cdots \to P_{1} \to P_0$ with each $P_j$ projective. We also denote by $\syz^i_0(A\mbox{-}\mathrm{mod})$ the full subcategory of $\syz^i(A\mbox{-}\mathrm{mod})$ formed by modules without indecomposable projective direct summands.
Recall that $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ denotes the bounded derived category of $A\mbox{-}\mathrm{mod}$, whose translation functor is denoted by $[1]$. For each integer $n$, let $[n]$ denote the $n$-th power of $[1]$. The category $A\mbox{-}\mathrm{mod}$ is viewed as a full subcategory of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ by identifying an $A$-module with the corresponding stalk complex concentrated at degree zero. Recall that a complex in $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ is \emph{perfect} provided that it is isomorphic to a bounded complex of finitely generated projective $A$-modules. Perfect complexes form a thick subcategory of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$, which is denoted by $\mathbf{perf}(A)$. Here, we recall that a triangulated subcategory is \emph{thick} if it is closed under direct summands.
Following \cite{Buc1987, Hap1991, Orl2004}, the quotient triangulated category
\[\mathbf{D}_\mathrm{sg}(A) = \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{perf}(A)\]
is called the \emph{singularity category} of $A$. Denote by $q: \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod}) \to \mathbf{D}_\mathrm{sg}(A)$ the quotient functor. We recall that the objects in $\mathbf{D}_\mathrm{sg}(A)$ are bounded complexes of finitely generated $A$-modules. The translation functor of $\mathbf{D}_\mathrm{sg}(A)$ is also denoted by $[1]$.
The following results are well known.
\begin{lem}[{\cite[Lemma~2.1]{Chen2011b}}]\label{lem2.1}
Let $X$ be a complex in $\mathbf{D}_\mathrm{sg}(A)$ and $n_0>0$. Then for any $n$ sufficiently large, there exists a module $M$ in $\syz^{n_0}(A\mbox{-}\mathrm{mod})$ such that $X \simeq q(M)[n]$.
\end{lem}
\begin{lem}[{\cite[Lemma~2.2]{Chen2011b}}]\label{lem2.2}
Let $0 \to N \to P_{n-1} \to \cdots \to P_0 \to M \to 0$ be an exact sequence in $A\mbox{-}\mathrm{mod}$ with each $P_i$ projective. Then there is an isomorphism $q(M) \simeq q(N)[n]$ in $\mathbf{D}_\mathrm{sg}(A)$. In particular, there is a natural isomorphism $\theta_M^n: q(M) \simeq q(\syz^n(M))[n]$ for any $M$ in $A\mbox{-}\mathrm{mod}$ and $n \geq 0$.
\end{lem}
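To indicate the mechanism behind Lemma~\ref{lem2.2} in the simplest case $n = 1$: the exact sequence $0 \to \syz(M) \to P(M) \to M \to 0$ induces a triangle
\[\syz(M) \longrightarrow P(M) \longrightarrow M \longrightarrow \syz(M)[1]\]
in $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$. Since $P(M)$ is perfect, it vanishes in $\mathbf{D}_\mathrm{sg}(A)$, and the triangle yields an isomorphism $q(M) \simeq q(\syz(M))[1]$; the general case follows by iteration.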
Observe that the composition $A\mbox{-}\mathrm{mod} \to \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod}) \xrightarrow{q} \mathbf{D}_\mathrm{sg}(A)$ vanishes on projective modules. Hence it induces a unique functor $q': A\mbox{-}\underline{\mathrm{mod}} \to \mathbf{D}_\mathrm{sg}(A)$. It follows from Lemma~\ref{lem2.2} that for each $n \geq 0$, the following diagram of functors
\[\xymatrix{
A\mbox{-}\underline{\mathrm{mod}} \ar[d]_{q'} \ar[r]^{\syz^n} &A\mbox{-}\underline{\mathrm{mod}}\ar[d]^{q'} \\
\mathbf{D}_\mathrm{sg}(A)\ar[r]^{[-n]} &\mathbf{D}_\mathrm{sg}(A)}\]
is commutative. Let $M$ and $N$ be in $A\mbox{-}\mathrm{mod}$ and $n \geq 0$. Lemma~\ref{lem2.2} yields a natural map
\[\Phi^n: \underline{\Hom}_A(\syz^n(M), \syz^n(N)) \longrightarrow \Hom_{\mathbf{D}_\mathrm{sg}(A)}(q(M), q(N)).\]
Here, $\Phi^0$ is induced by $q'$ and $\Phi^n(f) = (\theta^n_N)^{-1}\circ(q'(f)[n])\circ\theta^n_M$ for $n \geq 1$.
Consider the following chain of maps $\{G^{n,n+1}\}_{n \geq 0}$ such that
\[G^{n,n+1}: \underline{\Hom}_A(\syz^n(M),\syz^n(N)) \longrightarrow \underline{\Hom}_A(\syz^{n+1}(M),\syz^{n+1}(N))\]
is induced by the syzygy functor $\syz$. The sequence $\{\Phi^n\}_{n \geq 0}$ is compatible with $\{G^{n,n+1}\}_{n\geq0}$, that is, $\Phi^{n+1} \circ G^{n,n+1} = \Phi^n$ for each $n \geq 0$. Then we obtain an induced map
\[\Phi: \varinjlim_{n\geq0}\underline{\Hom}_A(\syz^n(M),\syz^n(N)) \longrightarrow \Hom_{\mathbf{D}_\mathrm{sg}(A)}(q(M), q(N)).\]
\begin{lem}[{\cite[Exemple~2.3]{KV1987}}]\label{lem2.3}
Let $M$ and $N$ be in $A\mbox{-}\mathrm{mod}$. Then there is a natural isomorphism
\[\Phi: \varinjlim_{n\geq0}\underline{\Hom}_A(\syz^n(M),\syz^n(N)) \overset{\simeq}\longrightarrow \Hom_{\mathbf{D}_\mathrm{sg}(A)}(q(M),q(N)).\]
\end{lem}
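As a trivial consistency check, if $M$ is projective then $q(M) \simeq 0$ in $\mathbf{D}_\mathrm{sg}(A)$, while $\underline{\Hom}_A(\syz^n(M), \syz^n(N)) = 0$ for all $n \geq 0$, so both sides of the isomorphism in Lemma~\ref{lem2.3} vanish.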
Next we recall the \emph{simplification} in the sense of \cite{Rin1976}.
Let $\mathcal{A}$ be an abelian category. Recall that an object $X$ in $\mathcal{A}$ is a \emph{brick} if $\End_{\mathcal{A}}(X)$ is a division ring. Two objects $X$ and $Y$ are \emph{orthogonal} if $\Hom_{\mathcal{A}}(X,Y) = 0$ and $\Hom_{\mathcal{A}}(Y,X) = 0$. A full subcategory $\mathcal{W}$ of $\mathcal{A}$ is called a \emph{wide subcategory} if it is closed under kernels, cokernels and extensions. In particular, $\mathcal{W}$ is an abelian category and the inclusion functor is exact. Recall that an abelian category $\mathcal{A}$ is called a \emph{length category} provided that each object in $\mathcal{A}$ has a composition series.
Let $\mathcal{E}$ be a set of objects in an abelian category $\mathcal{A}$. For an object $C$ in $\mathcal{A}$, an \emph{$\mathcal{E}$-filtration} of $C$ is given by a sequence of subobjects
\[0 = C_0 \subseteq C_1 \subseteq C_2 \subseteq \cdots \subseteq C_m = C,\]
such that each factor $C_i/C_{i-1}$ belongs to $\mathcal{E}$ for $1 \leq i \leq m$. Denote by $\mathcal{F}(\mathcal{E})$ the full subcategory of $\mathcal{A}$ formed by objects in $\mathcal{A}$ with an $\mathcal{E}$-filtration.
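For example, if $\mathcal{A}$ is a length category and $\mathcal{E}$ is a complete set of pairwise non-isomorphic simple objects of $\mathcal{A}$, then an $\mathcal{E}$-filtration is precisely a composition series and $\mathcal{F}(\mathcal{E}) = \mathcal{A}$.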
\begin{lem}[{\cite[Theorem~1.2]{Rin1976}}]\label{lem2.4}
Let $\mathcal{E}$ be a set of pairwise orthogonal bricks in $\mathcal{A}$. Then $\mathcal{F}(\mathcal{E})$ is a wide subcategory of $\mathcal{A}$; moreover, $\mathcal{F}(\mathcal{E})$ is a length category and $\mathcal{E}$ is a complete set of pairwise non-isomorphic simple objects in $\mathcal{F}(\mathcal{E})$.
\end{lem}
Let $A$ be a connected Nakayama algebra without simple projective modules. Recall that the vertex set of the \emph{resolution quiver} $R(A)$ of $A$ is the set of isomorphism classes of simple $A$-modules, and there is an arrow from $S$ to $\gamma(S) = \tau\soc P(S)$ for each simple $A$-module $S$. Since each vertex in $R(A)$ is the start of a unique arrow, each connected component of $R(A)$ contains precisely one cycle. A simple $A$-module is called \emph{cyclic} provided that it lies in a cycle of $R(A)$.
Let $A$ be a connected Nakayama algebra of infinite global dimension. In particular, $A$ has no simple projective modules. Let $\mathcal{S}$ be a complete set of pairwise non-isomorphic simple $A$-modules. Denote by $\mathcal{S}_c$ the subset of all cyclic simple $A$-modules, and by $\mathcal{S}_{nc}$ the subset of all \emph{noncyclic} simple $A$-modules.
\begin{lem}\label{lem2.5}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then $\mathcal{S}_c$ is a complete set of pairwise non-isomorphic simple $A$-modules of infinite injective dimension, and $\mathcal{S}_{nc}$ is a complete set of pairwise non-isomorphic simple $A$-modules of finite injective dimension.
\end{lem}
\begin{proof}
This is dual to \cite[Corollaries~3.6 and 3.7]{Mad2005}.
\end{proof}
We will need the following fact. Recall that `top' denotes the top of a module.
\begin{lem}[{\cite[Corollary to Lemma~2]{Rin2013}}]\label{lem2.6}
Let $A$ be a connected Nakayama algebra without simple projective modules. Assume that $M$ is an indecomposable $A$-module and $m \geq 0$. Then either $\syz^{2m}(M) = 0$ or else $\operatorname{top} \syz^{2m}(M) = \gamma^m(\operatorname{top} M)$.
\end{lem}
\section{A Frobenius subcategory}
In this section, we introduce a Frobenius subcategory of the module category of a Nakayama algebra, whose stable category is triangle equivalent to the singularity category of the given algebra.
Throughout this section, $A$ is a connected Nakayama algebra of infinite global dimension. Denote by $n(A)$ the number of isomorphism classes of simple $A$-modules, and by $l(M)$ the composition length of an $A$-module $M$. Recall that $\mathcal{S}$ denotes a complete set of pairwise non-isomorphic simple $A$-modules, $\mathcal{S}_c$ the subset of all cyclic simple $A$-modules and $\mathcal{S}_{nc}$ the subset of all noncyclic simple $A$-modules. Observe that the map $\gamma$ restricts to a permutation on $\mathcal{S}_c$. Let $\mathcal{X}_c$ be the set formed by indecomposable $A$-modules $X$ such that both $\operatorname{top} X$ and $\tau\soc X$ belong to $\mathcal{S}_c$.
The proof of the following well-known lemma uses the structure of indecomposable modules over Nakayama algebras; see \cite[IV.3 and VI.2]{ARS1995}. Each indecomposable $A$-module $X$ is uniserial, and it is uniquely determined by its top and its composition length. Its composition factors from the top are $S, \tau S, \cdots, \tau^{l-1}S$, where $S = \operatorname{top} X$ and $l = l(X)$. In particular, the projective cover $P(X)$ of $X$ is indecomposable.
\begin{lem}\label{lem3.1}
Let $M$ be an indecomposable $A$-module which contains a nonzero projective submodule $P$. Then $M$ is projective.
\end{lem}
\begin{proof}
Suppose, on the contrary, that $M$ is nonprojective. Then there is a proper surjection $\pi: P(M) \to M$, where $P(M)$ is the projective cover of $M$. Restricting $\pi$ yields a proper surjection $\pi^{-1}(P) \to P$, which splits since $P$ is projective. This is impossible, since $P(M)$ is uniserial and thus its submodule $\pi^{-1}(P)$ is indecomposable.
\end{proof}
\begin{lem}\label{lem3.2}
Let $f: X \to Y$ be a morphism in $\mathcal{X}_c$. Then $\Ker f$, $\Coker f$ and $\operatorname{Im} f$ belong to $\mathcal{X}_c \cup \{0\}$.
\end{lem}
\begin{proof}
We may assume that $f$ is nonzero. Then we have $\operatorname{top}(\operatorname{Im} f) = \operatorname{top} X$ and $\tau\soc(\operatorname{Im} f) = \tau\soc Y$, both of which belong to $\mathcal{S}_c$. Thus, $\operatorname{Im} f$ belongs to $\mathcal{X}_c$.
If $f$ is not a monomorphism, then $\operatorname{top}(\Ker f) = \tau\soc(\operatorname{Im} f) = \tau\soc Y$ and $\tau\soc(\Ker f) = \tau\soc X$. Thus, $\Ker f$ belongs to $\mathcal{X}_c$.
If $f$ is not an epimorphism, then $\operatorname{top}(\Coker f) = \operatorname{top} Y$ and $ \tau\soc(\Coker f) = \operatorname{top}(\operatorname{Im} f) = \operatorname{top} X$. Thus, $\Coker f$ belongs to $\mathcal{X}_c$.
\end{proof}
\begin{lem}\label{lem3.3}
Let $X$ be an object in $\mathcal{X}_c$. If $0 \subsetneq X'' \subsetneq X' \subsetneq X$ are subobjects of $X$ such that $X'/X''$ belongs to $\mathcal{X}_c$, then $X''$ and $X/X'$ belong to $\mathcal{X}_c$.
\end{lem}
\begin{proof}
Since both $\operatorname{top} X'' = \tau\soc(X'/X'')$ and $\tau\soc X'' = \tau\soc X$ belong to $\mathcal{S}_c$, it follows from the definition that $X''$ belongs to $\mathcal{X}_c$. Similarly, since both $\operatorname{top}(X/X') = \operatorname{top} X$ and $\tau\soc(X/X') = \operatorname{top}(X'/X'')$ belong to $\mathcal{S}_c$, it follows from the definition that $X/X'$ belongs to $\mathcal{X}_c$.
\end{proof}
Denote by $\mathcal{P}_c$ a complete set of projective covers of modules in $\mathcal{S}_c$. We claim that $\mathcal{P}_c$ is a subset of $\mathcal{X}_c$. Indeed, we have $\operatorname{top} P(S) = S$ and $\tau\soc P(S) = \gamma(S)$, both of which belong to $\mathcal{S}_c$. It follows that $\mathcal{X}_c$ is closed under projective covers.
For each $S$ in $\mathcal{S}_c$, let $E(S)$ denote the indecomposable $A$-module of the least composition length among those objects $X$ in $\mathcal{X}_c$ with $\operatorname{top} X = S$. Inspired by \cite[Section~4]{Rin2013}, we call $E(S)$ the \emph{elementary module} associated to $S$. Denote by $\mathcal{E}_c$ the set of elementary modules.
Recall that $\mathcal{F}(\mathcal{E}_c)$ is the full subcategory of $A\mbox{-}\mathrm{mod}$ formed by $A$-modules with an $\mathcal{E}_c$-filtration.
The \emph{support} of an $A$-module $M$ is the subset of $\mathcal{S}$ consisting of those simple $A$-modules appearing as a composition factor of $M$. For a set $\mathcal{X}$ of $A$-modules, we denote by $\add \mathcal{X}$ the full subcategory of $A\mbox{-}\mathrm{mod}$ whose objects are direct summands of finite direct sums of objects in $\mathcal{X}$.
The following result is close in spirit to \cite[Proposition~2]{Rin2013}. In particular, we prove that each elementary module $E$ is a brick and thus $l(E)\leq n(A)$.
\begin{prop}\label{prop3.4}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then the following statements hold.
\begin{enumerate}[ref=\theprop(\arabic*)]
\item The set $\mathcal{E}_c$ of elementary modules is a set of pairwise orthogonal bricks, and thus $\mathcal{F}(\mathcal{E}_c)$ is a wide subcategory of $A\mbox{-}\mathrm{mod}$.
\item $\mathcal{F}(\mathcal{E}_c) = \add \mathcal{X}_c$, which is closed under projective covers.
\item\label{prop3.4(3)} Let $E$ and $E'$ be elementary modules. Then $E = E'$ if and only if their supports have nonempty intersection.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $f: E \to E'$ be a nonzero map between elementary modules. By Lemma~\ref{lem3.2}, $\operatorname{Im} f$ belongs to $\mathcal{X}_c$. However, $\operatorname{Im} f$ is a factor module of $E$ with the same top; by the minimality of the elementary module $E$ we have $E = \operatorname{Im} f$, and thus $f$ is an injective map. Similarly, if $f$ is not surjective, then $\Coker f$ belongs to $\mathcal{X}_c$ and is a factor module of $E'$ with the same top and strictly smaller length, contradicting the minimality of $E'$; hence $f$ is a surjective map and thus an isomorphism. Therefore $\mathcal{E}_c$ is a set of pairwise orthogonal bricks.
By Lemma~\ref{lem2.4} $\mathcal{F}(\mathcal{E}_c)$ is a wide subcategory of $A\mbox{-}\mathrm{mod}$. In particular, it is closed under direct sums and direct summands.
(2) We prove that any module $X$ in $\mathcal{X}_c$ belongs to $\mathcal{F}(\mathcal{E}_c)$, and thus $\add \mathcal{X}_c \subseteq \mathcal{F}(\mathcal{E}_c)$. We use induction on $l(X)$. Set $S = \operatorname{top} X \in \mathcal{S}_c$. If $X = E(S) \in \mathcal{E}_c$, we are done. Otherwise, there is a proper surjective map $\pi: X \to E(S)$. By Lemma~\ref{lem3.2} we have $\Ker \pi \in \mathcal{X}_c$. Then by induction $\Ker \pi \in \mathcal{F}(\mathcal{E}_c)$. Therefore $X \in \mathcal{F}(\mathcal{E}_c)$.
Recall that each elementary module $E$ satisfies that $\operatorname{top} E \in \mathcal{S}_c$ and $\tau\soc E \in \mathcal{S}_c$. It follows from its $\mathcal{E}_c$-filtration that each indecomposable object $X$ in $\mathcal{F}(\mathcal{E}_c)$ satisfies that $\operatorname{top} X \in \mathcal{S}_c$ and $\tau\soc X \in \mathcal{S}_c$. Then by definition $X$ belongs to $\mathcal{X}_c$. Therefore $\add \mathcal{X}_c \supseteq \mathcal{F}(\mathcal{E}_c)$, and thus $\mathcal{F}(\mathcal{E}_c) = \add \mathcal{X}_c$. Since $\mathcal{X}_c$ is closed under projective covers, we infer that $\add \mathcal{X}_c$ is closed under projective covers.
(3) Suppose that $E \neq E'$ are elementary modules with a common composition factor. We may assume that $l(E) \leq l(E')$. Since $E$ and $E'$ are orthogonal, there exists a chain $0 \subsetneq E_1 \subsetneq E_2 \subsetneq E'$ of $A$-modules such that $E_2/E_1 = E$. By Lemma~\ref{lem3.3} we have that $E'/E_2$ belongs to $\mathcal{X}_c$. This contradicts the definition of the elementary module $E'$.
\end{proof}
\begin{lem}\label{lem3.5}
Let $S$ be a cyclic simple $A$-module. Then the following statements hold.
\begin{enumerate}[ref=\thelem(\arabic*)]
\item\label{lem3.5(1)} The injective dimension of $E(S)$ is infinite, and the injective dimension of $P(S)$ is finite.
\item\label{lem3.5(2)} There is a unique simple $A$-module $S'$ in $\mathcal{S}_c$ such that $\operatorname{top} E(S') = \tau\soc E(S)$ and $\Ext_A^1(E(S),E(S')) \neq 0$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) We recall from Lemma~\ref{lem2.5} that $\mathcal{S}_c$ is a complete set of pairwise non-isomorphic simple $A$-modules of infinite injective dimension. Since the elementary modules have pairwise disjoint supports, for each $S$ in $\mathcal{S}_c$, the support of $E(S)$ contains precisely one cyclic simple $A$-module, that is, $S$. In other words, each composition factor of $E(S)$ different from $S$ is a noncyclic simple $A$-module, and thus has finite injective dimension. It follows that $E(S)$ has infinite injective dimension.
Let $h: P(S) \to I$ be an injective envelope of the $A$-module $P(S)$. We claim that each composition factor $S'$ of $\Coker h$ is a noncyclic simple $A$-module, and thus has finite injective dimension. Consequently, the injective dimension of $\Coker h$ is finite. Therefore the injective dimension of $P(S)$ is finite.
For the claim, we observe by Lemma~\ref{lem3.1} that $P(S) \subsetneq P(S') \subseteq I$. Then we have $\gamma(S') = \gamma(S)$. Recall that the restriction of $\gamma$ to cyclic simple $A$-modules is injective. Therefore $S'$ is a noncyclic simple $A$-module, since $S$ is a cyclic simple $A$-module and $S' \neq S$.
(2) Let $E = E(S)$. Recall that $P(S)$ lies in $\mathcal{X}_c$ and thus in $\mathcal{F}(\mathcal{E}_c)$. Consider the $\mathcal{E}_c$-filtration of $P(S)$, say
\[0 = M_0 \subsetneq M_1 \subsetneq \cdots \subsetneq M_{t-1} \subsetneq M_t = P(S),\]
such that $M_i/M_{i-1}$ is elementary for $1 \leq i \leq t$. We observe that $M_t/M_{t-1} = E$ and $t \geq 2$, since by (1) we have $E(S)\neq P(S)$. Set $E' = M_{t-1}/M_{t-2}$. Note that $E' = E(S')$ for some cyclic simple $A$-module $S'$. Then
\[\operatorname{top} E' = \operatorname{top}(M_{t-1}/M_{t-2}) = \tau\soc(M_t/M_{t-1}) = \tau\soc E.\]
Since $M_t/M_{t-2} = P(S)/M_{t-2}$ is indecomposable, the exact sequence
\[0 \to M_{t-1}/M_{t-2} \to M_t/M_{t-2} \to M_t/M_{t-1} \to 0\]
does not split. Then we have $\Ext_A^1(E, E') \neq 0$. The uniqueness of $S'$ is obvious, since $S' = \tau\soc E(S)$.
\end{proof}
Recall that by definition $\tau\soc E$ lies in $\mathcal{S}_c$ for each elementary module $E$. We have a map $\delta: \mathcal{S}_c \to \mathcal{S}_c$, which sends a cyclic simple $A$-module $S$ to $\delta(S) = \tau\soc E(S)$. We claim that $\delta$ is injective and thus bijective. Indeed, if $\delta(S) = \delta(\bar{S})$, then $\soc E(S) = \soc E(\bar{S})$. It follows from Proposition~\ref{prop3.4(3)} that $S = \bar{S}$.
\begin{cor}\label{cor3.6}
Let $S$ be a cyclic simple $A$-module and $t$ the minimal positive integer such that $\delta^t(S) = S$. Then $\mathcal{S}_c = \{S, \delta(S), \cdots, \delta^{t-1}(S)\}$ and $\mathcal{S}$ is the disjoint union of the supports of all elementary modules.
\end{cor}
\begin{proof}
Since $A$ is a connected Nakayama algebra without simple projective modules, any nonempty subset of $\mathcal{S}$ which is closed under $\tau$ must be $\mathcal{S}$. By Lemma~\ref{lem3.5(2)}, the union of the supports of all $E(\delta^i(S))$ is closed under $\tau$; we infer that this union is $\mathcal{S}$. Let $S'$ be a cyclic simple $A$-module. Then there exists an integer $0 \leq i \leq t-1$ such that the supports of $E(\delta^i(S))$ and $E(S')$ have nonempty intersection. It follows from Proposition~\ref{prop3.4(3)} that $S' = \delta^i(S)$.
\end{proof}
\begin{prop}\label{prop3.7}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then $\mathcal{F}(\mathcal{E}_c)$ is equivalent to $A'\mbox{-}\mathrm{mod}$, where $A'$ is a connected selfinjective Nakayama algebra.
\end{prop}
\begin{proof}
Let $P = \oplus_{S\in\mathcal{S}_c}P(S)$ and $A' = \End_A(P)^\mathrm{op}$. Then $P$ is a projective object in $\mathcal{F}(\mathcal{E}_c)$, since $\mathcal{F}(\mathcal{E}_c)$ is a wide subcategory of $A\mbox{-}\mathrm{mod}$. The natural projection $P(S) \to E(S)$ is a projective cover in the category $\mathcal{F}(\mathcal{E}_c)$. Recall from Lemma~\ref{lem2.4} that $\mathcal{F}(\mathcal{E}_c)$ is a length category with $\mathcal{E}_c = \{E(S)\mid S \in \mathcal{S}_c\}$ a complete set of pairwise non-isomorphic simple objects. We infer that for each object $X$ in $\mathcal{F}(\mathcal{E}_c)$, there is an epimorphism $P' \to X$ with $P'$ in $\add P$. Then $P$ is a projective generator for $\mathcal{F}(\mathcal{E}_c)$. We have an equivalence $\mathcal{F}(\mathcal{E}_c) \simeq A'\mbox{-}\mathrm{mod}$; compare \cite[Chapter IV, Theorem~5.3]{Mit1965}.
Since each indecomposable object in $\mathcal{F}(\mathcal{E}_c)$ is uniserial, we infer that $A'$ is a Nakayama algebra. Denote by $\tau'$ the Auslander-Reiten translation of $A'$. Then we have $\tau'E(S) = E(\delta(S))$ by Lemma~\ref{lem3.5(2)}. It follows from Corollary~\ref{cor3.6} that all simple $A'$-modules are in the same $\tau'$-orbit. Therefore, the Nakayama algebra $A'$ is connected.
It remains to show that $A'$ is selfinjective. Since $\gamma$ restricts to a permutation on $\mathcal{S}_c$, the modules in $\mathcal{P}_c$ have pairwise distinct socles. Therefore, we have $\mathcal{S}_c = \{\tau\soc P \mid P \in \mathcal{P}_c\}$. Let $E$ be an elementary module. Since $\tau\soc E$ lies in $\mathcal{S}_c$, there exists $P$ in $\mathcal{P}_c$ with $\soc P = \soc E$ and $\soc P \neq \soc E'$ for any elementary module $E' \neq E$. It follows that the socle of $P$ in the category $\mathcal{F}(\mathcal{E}_c)$ is $E$. We have proven that every simple $A'$-module embeds into a projective $A'$-module. Therefore, $A'$ is selfinjective.
\end{proof}
The following result is analogous to \cite[Proposition~4]{Rin2013}.
\begin{lem}\label{lem3.8}
The following statements are equivalent for an indecomposable nonprojective $A$-module $M$.
\begin{enumerate}[ref=\thelem(\arabic*)]
\item $M$ belongs to $\mathcal{F}(\mathcal{E}_c)$.
\item\label{lem3.8(2)} There is an exact sequence $0 \to M \to P_n \to \cdots \to P_1 \to P_0 \to M \to 0$ for some $n \geq 1$ such that each $P_i$ belongs to $\mathcal{P}_c$.
\item There is an exact sequence $P_1 \to P_0 \to M \to 0$ such that $P_i$ belongs to $\mathcal{P}_c$ for $i=0,1$.
\end{enumerate}
\end{lem}
\begin{proof}
``(1) $\Rightarrow$ (2)'' Recall that for an indecomposable nonprojective module $M'$ over a selfinjective Nakayama algebra $A'$, there exists an exact sequence $0 \to M' \to P'_n \to \cdots \to P'_1 \to P'_0 \to M' \to 0$ for some $n \geq 1$ such that each $P'_i$ is indecomposable projective. Then (2) follows from Proposition~\ref{prop3.7}.
``(2) $\Rightarrow$ (3)'' This is obvious.
``(3) $\Rightarrow$ (1)'' Observe that $\operatorname{top} M = \operatorname{top} P_0$ and $\tau\soc M = \operatorname{top} \syz M = \operatorname{top} P_1$, both of which belong to $\mathcal{S}_c$. Then by definition $M$ belongs to $\mathcal{X}_c$, and thus to $\mathcal{F}(\mathcal{E}_c)$ by Proposition~\ref{prop3.4}.
\end{proof}
Recall that each component of the resolution quiver $R(A)$ has a unique cycle. For each noncyclic vertex $S$ in $R(A)$, there exists a unique path of minimal length starting with $S$ and ending in a cycle. We call the length of this path the distance between $S$ and the cycle. Let $d(A)$ be the maximal distance between noncyclic vertices and cycles. Observe that $\gamma^d(S)$ is cyclic for each simple $A$-module $S$.
\begin{lem}\label{lem3.9}
Let $d = d(A)$ be as above. Then the following statements hold.
\begin{enumerate}[ref=\thelem(\arabic*)]
\item\label{lem3.9(1)} $\syz^{2d}(M)$ belongs to $\mathcal{F}(\mathcal{E}_c)$ for any $M$ in $A\mbox{-}\mathrm{mod}$.
\item $\syz_0^{2d}(A\mbox{-}\mathrm{mod}) \subseteq \mathcal{F}(\mathcal{E}_c) \subseteq \syz^{2d}(A\mbox{-}\mathrm{mod})$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) We may assume that $M$ is indecomposable. It follows from Lemma~\ref{lem2.6} that either $\syz^{2d}(M)$ is zero or $\operatorname{top} \syz^{2d}(M) = \gamma^d(\operatorname{top} M)$. If $\syz^{2d}(M)$ is indecomposable projective, then $\syz^{2d}(M)$ belongs to $\mathcal{P}_c$. If $\syz^{2d}(M)$ is indecomposable nonprojective, then $\operatorname{top} \syz^{2d}(M) = \gamma^d(\operatorname{top} M)$ and $\tau\soc \syz^{2d}(M) = \operatorname{top} \syz^{2d+1}(M) = \gamma^d(\operatorname{top} \syz M)$, both of which belong to $\mathcal{S}_c$. Then by definition $\syz^{2d}(M)$ belongs to $\mathcal{X}_c$, and thus to $\mathcal{F}(\mathcal{E}_c)$.
(2) The first inclusion follows from (1), and the second one follows from Lemma~\ref{lem3.8(2)}.
\end{proof}
By Proposition~\ref{prop3.7} $\mathcal{F}(\mathcal{E}_c)$ is a Frobenius category whose projective objects are precisely $\add \mathcal{P}_c$. Denote by $\underline{\mathcal{F}}(\mathcal{E}_c)$ the stable category of $\mathcal{F}(\mathcal{E}_c)$ modulo projective objects. It is a triangulated category; see \cite{Hap1988}.
Recall from Proposition~\ref{prop3.4} that $\mathcal{F}(\mathcal{E}_c)$ is a wide subcategory of $A\mbox{-}\mathrm{mod}$ which is closed under projective covers. Consider the inclusion functor $i: \mathcal{F}(\mathcal{E}_c) \to A\mbox{-}\mathrm{mod}$. It induces a unique fully faithful functor $i': \underline{\mathcal{F}}(\mathcal{E}_c) \to A\mbox{-}\underline{\mathrm{mod}}$. We recall the induced functor $q': A\mbox{-}\underline{\mathrm{mod}} \to \mathbf{D}_\mathrm{sg}(A)$ from Section~2.
The following is the main result of this section, which describes the singularity category of $A$ as a subcategory of the stable module category of $A$.
\begin{thm}\label{thm3.10}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then the composite functor $q'\circ{i}': \underline{\mathcal{F}}(\mathcal{E}_c) \to A\mbox{-}\underline{\mathrm{mod}} \to \mathbf{D}_\mathrm{sg}(A)$ is a triangle equivalence.
\end{thm}
\begin{proof}
Observe that the composite functor $\mathcal{F}(\mathcal{E}_c) \xrightarrow{i} A\mbox{-}\mathrm{mod} \to \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod}) \xrightarrow{q} \mathbf{D}_\mathrm{sg}(A)$ is a $\partial$-functor in the sense of \cite[Section~1]{Kel1991}; compare \cite[Lemma~2.4]{Chen2011a}. Then the functor $q' \circ i'$ is a triangle functor; see \cite[Lemma~2.5]{Chen2011a}.
Recall that the subcategory $\mathcal{F}(\mathcal{E}_c)$ of $A\mbox{-}\mathrm{mod}$ is wide and closed under projective covers; moreover, $\mathcal{F}(\mathcal{E}_c)$ is a Frobenius category. Then the restriction of the syzygy functor $\syz: A\mbox{-}\underline{\mathrm{mod}} \to A\mbox{-}\underline{\mathrm{mod}}$ on $\underline{\mathcal{F}}(\mathcal{E}_c)$ is an autoequivalence, in particular, it is fully faithful. Then the functor $q'\circ i'$ is fully faithful by the natural isomorphism in Lemma~\ref{lem2.3}.
It remains to show that the functor $q'\circ i'$ is also dense. Let $X$ be an object in $\mathbf{D}_\mathrm{sg}(A)$. It follows from Lemmas \ref{lem2.1} and \ref{lem3.9(1)} that there exist a module $M$ in $\mathcal{F}(\mathcal{E}_c)$ and a sufficiently large integer $n$ such that $X \simeq q(M)[n]$ in $\mathbf{D}_\mathrm{sg}(A)$. By the above, the image $\operatorname{Im}(q'\circ i')$ is a triangulated subcategory of $\mathbf{D}_\mathrm{sg}(A)$; in particular, it is closed under $[m]$ for all $m \in \mathds{Z}$. It follows from $X \simeq q(M)[n]$ that $X$ lies in $\operatorname{Im}(q'\circ i')$. This finishes our proof.
\end{proof}
We observe the following immediate consequence of Proposition~\ref{prop3.7} and Theorem~\ref{thm3.10}.
\begin{cor}[compare {\cite[Corollary~3.11]{CY2013}}]\label{cor3.11}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then there is a triangle equivalence between $\mathbf{D}_\mathrm{sg}(A)$ and $A'\mbox{-}\underline{\mathrm{mod}}$ for a connected selfinjective Nakayama algebra $A'$.
\end{cor}
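We remark that in the degenerate case where $A$ is itself selfinjective, every simple $A$-module is cyclic and $E(S) = S$ for each $S$, so that $\mathcal{F}(\mathcal{E}_c) = A\mbox{-}\mathrm{mod}$; Theorem~\ref{thm3.10} then recovers the classical triangle equivalence $\mathbf{D}_\mathrm{sg}(A) \simeq A\mbox{-}\underline{\mathrm{mod}}$ for selfinjective algebras; compare \cite{Ric1989}.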
\section{A duality between singularity categories}
In this section, we prove that there is a triangle duality between the singularity category of a Nakayama algebra and the singularity category of its opposite algebra. The proof uses the Frobenius subcategory in the previous section.
Let $A$ be a connected Nakayama algebra of infinite global dimension. We recall from Propositions \ref{prop3.4} and \ref{prop3.7} that the category $\mathcal{F} = \mathcal{F}(\mathcal{E}_c)$ is a wide subcategory of $A\mbox{-}\mathrm{mod}$ closed under projective covers; it is equivalent to $A'\mbox{-}\mathrm{mod}$ for a connected selfinjective Nakayama algebra $A'$.
Consider the inclusion functor $i: \mathcal{F} \to A\mbox{-}\mathrm{mod}$. We claim that it admits an exact right adjoint $i_\rho: A\mbox{-}\mathrm{mod} \to \mathcal{F}$.
For the claim, recall from the proof of Proposition~\ref{prop3.7} that $A' = \End_A(P)^\mathrm{op}$ with $P = \oplus_{S\in\mathcal{S}_c}P(S)$. We identify $\mathcal{F}$ with $A'\mbox{-}\mathrm{mod}$. Then the inclusion $i$ is identified with $P\otimes_{A'}-$. The right adjoint is given by $i_\rho = \Hom_A(P,-)$. It is exact since ${}_AP$ is projective.
The adjoint pair $(i, i_\rho)$ induces an adjoint pair $(i^*, i_\rho^*)$ of triangle functors between bounded derived categories. Here, for an exact functor $F$ between abelian categories, $F^*$ denotes its extension on bounded derived categories.
Recall that $\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ denotes the bounded homotopy category of $A\mbox{-}\mathrm{inj}$. We view $\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ as a thick subcategory of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ via the canonical functor. We mention that by the usual duality on module categories, the quotient triangulated category $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ is triangle equivalent to the opposite category of the singularity category $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$ of $A^\mathrm{op}$. Here, $A^\mathrm{op}$ is the opposite algebra of $A$.
The proof of the following result is similar to \cite[Proposition~2.13]{CY2013}. Recall that $\mathcal{P}_c$ is a complete set of pairwise non-isomorphic indecomposable projective objects in $\mathcal{F} = \mathcal{F}(\mathcal{E}_c)$.
\begin{lem}\label{lem4.1}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then the above functors $i_\rho^*$ and $i^*$ induce mutually inverse triangle equivalences between $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$ and $\mathbf{D}^\mathrm{b}(\mathcal{F})/\mathbf{K}^\mathrm{b}(\add \mathcal{P}_c)$.
\end{lem}
\begin{proof}
Observe by \cite[Lemma 3.3.1]{CK2011} that $i^*: \mathbf{D}^\mathrm{b}(\mathcal{F}) \to \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ is fully faithful. It follows that its right adjoint $i_\rho^*$ induces a triangle equivalence
\[\overline{i_\rho^*}: \mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\Ker i_\rho^* \simeq \mathbf{D}^\mathrm{b}(\mathcal{F});\]
see \cite[Chapter I, Section 1, 1.3 Proposition]{GZ1967}. Here, $\Ker F$ denotes the essential kernel of an additive functor $F$.
We claim that $\Ker i_\rho^* = \mathrm{thick}\langle\mathcal{S}_{nc}\rangle$, the smallest thick subcategory of $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})$ containing $\mathcal{S}_{nc}$. Here, we recall that $\mathcal{S}_{nc}$ denotes the set of noncyclic simple $A$-modules. By Lemma~\ref{lem2.5} each noncyclic simple $A$-module has finite injective dimension. It follows from the claim that $\Ker i_\rho^* \subseteq \mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$.
For the claim, we observe that $\Ker i_\rho = \mathcal{F}(\mathcal{S}_{nc})$, the full subcategory of $A\mbox{-}\mathrm{mod}$ formed by $A$-modules with an $\mathcal{S}_{nc}$-filtration. The claim follows from the fact that a complex $X$ is in $\Ker i_\rho^*$ if and only if each cohomology $H^i(X)$ is in $\Ker i_\rho$.
We observe that $i_\rho$ preserves injective objects since it has an exact left adjoint. It follows that $i_\rho^*(\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})) \subseteq \mathbf{K}^\mathrm{b}(\add \mathcal{P}_c)$. By Lemma~\ref{lem3.5(1)} each module $Q$ in $\mathcal{P}_c$ has finite injective dimension. Note that $i_\rho^*Q = Q$. Therefore $i_\rho^*(\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})) \supseteq \mathbf{K}^\mathrm{b}(\add \mathcal{P}_c)$, and thus $i_\rho^*(\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})) = \mathbf{K}^\mathrm{b}(\add \mathcal{P}_c)$. From this equality, the triangle equivalence $\overline{i_\rho^*}$ restricts to a triangle equivalence
\[\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})/\mathbf{K}er i_\rho^* \simeq \mathbf{K}^\mathrm{b}(\add \mathcal{P}_c).\]
The desired equivalence follows from \cite[Chapitre I, \S2, 4-3 Corollaire]{Ver1977}.
\end{proof}
\begin{prop}\label{prop4.2}
Let $A$ be a Nakayama algebra. Then the singularity category $\mathbf{D}_\mathrm{sg}(A)$ is triangle equivalent to $\mathbf{D}^\mathrm{b}(A\mbox{-}\mathrm{mod})/\mathbf{K}^\mathrm{b}(A\mbox{-}\mathrm{inj})$. Equivalently, there is a triangle duality between $\mathbf{D}_\mathrm{sg}(A)$ and $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$.
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $A$ is a connected Nakayama algebra of infinite global dimension. Then the singularity category $\mathbf{D}_\mathrm{sg}(A)$ is triangle equivalent to the stable category $\underline{\mathcal{F}}$ by Theorem~\ref{thm3.10}.
Since $\mathcal{F}$ is a Frobenius abelian category, it follows from \cite[Theorem~2.1]{Ric1989} that the stable category $\underline{\mathcal{F}}$ is triangle equivalent to $\mathbf{D}^\mathrm{b}(\mathcal{F})/\mathbf{K}^\mathrm{b}(\add \mathcal{P}_c)$. Then the conclusion follows from Lemma~\ref{lem4.1}.
\end{proof}
\section{The resolution quivers}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Recall that $n(A)$ denotes the number of isomorphism classes of simple $A$-modules. By Corollary~\ref{cor3.11} the Auslander-Reiten quiver of the singularity category $\mathbf{D}_\mathrm{sg}(A)$ is isomorphic to a truncated tube $\mathds{Z}\mathbb{A}_m/\langle\tau^t\rangle$, where $m = m(A)$ denotes its height and $t = t(A)$ denotes its rank. Here, we use the fact that the Auslander-Reiten quiver of the stable module category of a connected selfinjective Nakayama algebra is a truncated tube; compare~\cite[VI.2]{ARS1995}.
Recall that $R(A)$ denotes the resolution quiver of $A$. We denote by $c(A)$ the number of cycles in $R(A)$. Let $C$ be a cycle in $R(A)$. Then the \emph{size} $s(C)$ of $C$ is the number of vertices in $C$, and the \emph{weight} $w(C)$ of $C$ is $\frac{\sum_Sl(P(S))}{n(A)}$, where $S$ runs through all vertices of $C$. Here, $l(P(S))$ is the composition length of the projective cover $P(S)$ of a simple $A$-module $S$. Recall from~\cite{Shen2014} that all cycles in the resolution quiver $R(A)$ have the same size and the same weight. We denote $s(A) = s(C)$ and $w(A) = w(C)$ for an arbitrary cycle $C$ in $R(A)$.
For two positive integers $a$ and $b$, we denote their greatest common divisor by $(a,b)$.
\begin{lem}\label{lem5.1}
Let $m = m(A)$ and $t = t(A)$ be as above. Then $c(A) = (m+1,t)$, $s(A) = \frac{t}{(m+1,t)}$ and $w(A) = \frac{m+1}{(m+1,t)}$.
\end{lem}
\begin{proof}
Recall from \cite[Theorem~3.8]{CY2013} that there exists a sequence of algebra homomorphisms
\[A = A_0 \xrightarrow{\eta_0} A_1 \xrightarrow{\eta_1} A_2 \xrightarrow{} \cdots \xrightarrow{} A_{r-1} \xrightarrow{\eta_{r-1}} A_r\]
such that each $A_i$ is a connected Nakayama algebra and $A_r$ is selfinjective; moreover, each $\eta_i$ induces a triangle equivalence between $\mathbf{D}_\mathrm{sg}(A_i)$ and $\mathbf{D}_\mathrm{sg}(A_{i+1})$. Following \cite[Lemma~2.2]{Shen2014}, each $\eta_i$ induces a bijection between the set of cycles in $R(A_i)$ and the set of cycles in $R(A_{i+1})$, which preserves sizes and weights. Then we have $m(A_i) = m(A_{i+1})$, $t(A_i) = t(A_{i+1})$, $c(A_i) = c(A_{i+1})$, $s(A_i) = s(A_{i+1})$ and $w(A_i) = w(A_{i+1})$ for $0 \leq i \leq r-1$. Therefore it is enough to prove the equations for selfinjective Nakayama algebras.
Let $A$ be a connected selfinjective Nakayama algebra. Then $t$ equals the number of isomorphism classes of simple $A$-modules, and $m+1$ equals the radical length of $A$. We claim that $s(A) = \frac{t}{(m+1,t)}$. Granting the claim, we have $c(A) = \frac{t}{s(A)} = (m+1,t)$ and $w(A) = \frac{(m+1)s(A)}{t} = \frac{m+1}{(m+1,t)}$.
For the claim, let $\{S_1, \cdots, S_t\}$ be a complete set of pairwise non-isomorphic simple $A$-modules such that $\tau S_i = S_{i+1}$ for $1 \leq i \leq t$. Here, we let $S_{t+j} = S_j$ for each $j > 0$. Then we have $\gamma(S_i) = S_{i+m+1}$, and thus $\gamma^d(S_i) = S_i$ if and only if $t$ divides $d(m+1)$. It follows that $R(A)$ consists of cycles of size $\frac{t}{(m+1,t)}$.
\end{proof}
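To illustrate the formulas of Lemma~\ref{lem5.1} on a concrete instance (the numerical values are chosen purely for illustration), consider a connected selfinjective Nakayama algebra $A$ with $t = 4$ simple modules and radical length $m+1 = 6$. Then
\[
c(A) = (m+1,t) = (6,4) = 2, \qquad
s(A) = \frac{t}{(m+1,t)} = \frac{4}{2} = 2, \qquad
w(A) = \frac{m+1}{(m+1,t)} = \frac{6}{2} = 3.
\]
Indeed, $\gamma(S_i) = S_{i+6} = S_{i+2}$, so $R(A)$ consists of the two $2$-cycles $S_1 \to S_3 \to S_1$ and $S_2 \to S_4 \to S_2$; since each $l(P(S_i)) = 6$, each cycle has weight $\frac{6+6}{4} = 3$, as predicted.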
The following result establishes the relationship between the resolution quiver of a Nakayama algebra and the resolution quiver of its opposite algebra.
\begin{prop}\label{prop5.2}
Let $A$ be a connected Nakayama algebra of infinite global dimension. Then the following statements hold.
\begin{enumerate}
\item The resolution quivers $R(A)$ and $R(A^\mathrm{op})$ have the same number of cycles and the same number of cyclic vertices.
\item All cycles in $R(A)$ and $R(A^\mathrm{op})$ have the same weight.
\end{enumerate}
\end{prop}
\begin{proof}
By Proposition~\ref{prop4.2} there is a triangle duality between $\mathbf{D}_\mathrm{sg}(A)$ and $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$. Then the Auslander-Reiten quiver of $\mathbf{D}_\mathrm{sg}(A^\mathrm{op})$ is isomorphic to the opposite quiver of the Auslander-Reiten quiver of $\mathbf{D}_\mathrm{sg}(A)$. Therefore, we have $m(A) = m(A^\mathrm{op})$ and $t(A) = t(A^\mathrm{op})$. It follows from Lemma~\ref{lem5.1} that $c(A) = c(A^\mathrm{op})$, $s(A) = s(A^\mathrm{op})$ and $w(A) = w(A^\mathrm{op})$.
\end{proof}
The following example shows that these two resolution quivers $R(A)$ and $R(A^\mathrm{op})$ may not be isomorphic in general.
\begin{exm}
Let $A$ be a connected Nakayama algebra with admissible sequence $(7,6,6,5)$. Assume that $\{S_1, S_2, S_3, S_4\}$ is a complete set of pairwise non-isomorphic simple $A$-modules such that $\tau S_i = S_{i+1}$ for $1 \leq i \leq 4$. Then we have $l(P_1) = 7$, $l(P_2) = 6$, $l(P_3) = 6$ and $l(P_4) = 5$. There is an arrow from $S_i$ to $S_j$ in $R(A)$ if and only if $4$ divides $i-j+l(P_i)$.
Denote by $D$ the usual duality, and by $(-)^*$ the duality on projectives. Then $\{DS_4, DS_3,DS_2,DS_1\}$ is a complete set of pairwise non-isomorphic simple $A^\mathrm{op}$-modules such that $\tau' DS_i = DS_{i-1}$. Here, $\tau'$ is the Auslander-Reiten translation of $A^\mathrm{op}$. We observe that $l(P_4^*) = 6$, $l(P_3^*) = 7$, $l(P_2^*) = 6$ and $l(P_1^*) = 5$. Therefore the admissible sequence of $A^\mathrm{op}$ is $(6,7,6,5)$. There is an arrow from $DS_i$ to $DS_j$ in $R(A^\mathrm{op})$ if and only if $4$ divides $i-j-l(P_i^*)$.
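As a direct check of the arrow rule for $R(A)$ (an elementary computation from the lengths given above), the condition $4 \mid i-j+l(P_i)$ yields
\[
S_1 \to S_4 \;(1-4+7=4), \quad
S_2 \to S_4 \;(2-4+6=4), \quad
S_3 \to S_1 \;(3-1+6=8), \quad
S_4 \to S_1 \;(4-1+5=8),
\]
in agreement with the left-hand quiver displayed below.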
The resolution quivers $R(A)$ and $R(A^\mathrm{op})$ are shown as follows.
\[\begin{aligned}[c]
\xymatrix@R=10pt{
S_3 \ar@{->}[r] &S_1 \ar@/^/[r] &S_4 \ar@/^/[l]\ar@{<-}[r] &S_2}
\end{aligned}
,\qquad
\begin{aligned}[c]
\xymatrix@R=10pt{
DS_1 \ar@{->}[dr] \\
&DS_4 \ar@/^/[r] &DS_2 \ar@/^/[l] \\
DS_3 \ar@{->}[ur]}
\end{aligned}.\]
\end{exm}
\end{document}
\begin{document}
{\small
\noindent\textsl{Manuscript submitted for publication} in :\\
\textbf{ESAIM: Control, Optimisation and Calculus of Variations}\\
URL: \texttt{http://www.edpsciences.org/cocv/}}
\title[Monge parameterizations with 3 state and 2 controls]{
Flatness and Monge parameterization\\of two-input systems,
\\ control-affine with 4 states \\or general with 3 states}
\author{David Avanessoff}
\author{Jean-Baptiste Pomet}
\dedicatory{INRIA, B.P. 93, 06902 Sophia Antipolis cedex, France\\
\textup{\texttt{[email protected]},
\texttt{[email protected]}}}
\date{April 21, 2005, revised November 30, 2005.}
\keywords{Dynamic feedback linearization, Flat control systems, Monge problem,
Monge equations}
\subjclass{93B18, 93B29, 34C20}
\begin{abstract}
This paper studies Monge parameterization, or differential flatness,
of control-affine systems with four states and two
controls. Some of them are known to be flat, and this implies admitting a
Monge parameterization.
Focusing on systems outside this class, we describe the only possible
structure of such a parameterization for these systems, and give a lower bound
on the order of this parameterization, if it exists.
This lower bound is good enough to recover the known results about
``$(x,u)$-flatness'' of these systems, with much more elementary techniques.
\paragraph{\textsc{R{\'e}sum{\'e}}} We are interested in Monge
parameterizations, or flatness, of control-affine systems with four states
and two inputs. Previous work characterizes those of these systems that are
``$(x,u)$-flat'', but it is not known whether any of the remaining systems
are flat, or not. The conjecture is that none is flat, nor
Monge-parameterizable. For these systems, we show that any parameterization
is of a particular type, and we give a lower bound on the order of this
parameterization, sufficient to recover, in a much more elementary manner,
the known result on ``$(x,u)$-flatness''.
\end{abstract}
\maketitle
\section{Introduction}
In control theory, after a line of research on exact linearization by dynamic
state feedback \cite{Isid-Moo-deL86,Char-Lev-Mar89,Char-Lev-Mar91}, the concept of
differential flatness was introduced in 1992 in~\cite{Flie-Lev-Mar-R92cras} (see also
\cite{Flie-Lev-Mar-R95ijc,Flie-Lev-Mar-R99geo}).
Flatness is equivalent to exact linearization by dynamic state feedback of a
special type, called ``endogenous''~\cite{Flie-Lev-Mar-R92cras}, but, as
pointed out in that reference, it has its own interest, perhaps more important than linearity itself.
An interpretation and framework for that notion is also proposed in
\cite{Aran-Moo-Pom95vars,Pome95vars,VanN-Rat-Mur98}; see
\cite{Mart-Rou-Mur02tri} for a recent review.
The \emph{Monge problem} (see the survey article \cite{Zerv32}, published in 1932, that mentions
the prominent contributions \cite{Hilb12} and \cite{Cart15}, and others)
is the one of finding explicit formulas giving the ``general solution'' of
an under-determined system of ODEs as functions of some arbitrary functions of
time and a certain number of their time-derivatives (in fact \cite{Zerv32} allows
to change the independent variable, but we keep it to be time).
Let us call such formulas a \emph{Monge parameterization}, its \emph{order}
being the number of time-derivatives.
The authors of \cite{Flie-Lev-Mar-R92cras} already made the link with the above
mentioned work on under-determined systems of ODEs dating
back to the beginning of the 20\textsuperscript{th} century; for instance, they used
\cite{Hilb12,Cart15} to obtain, in \cite{Rouc94,Mart-Rou94} some results on
flatness or linearizability of control systems.
Let us make precise the relation between flatness and Monge parameterizability~:
flatness is the existence of some functions
---we call this collection of functions a \emph{flat output}---
of the state, the controls and a certain number $j$ of
time-derivatives of the control, that ``invert'' the formulas of a Monge
parameterization, i.e. a solution $t\mapsto(x(t),u(t))$ of the control system
corresponds to only one choice of the arbitrary functions of time appearing
in the parameterization, given by these functions.
Let us call $j$ the \emph{order} of the flat output.
Characterizing differential flatness, or dynamic
state feedback linearizability is still an open problem~\cite{Flie-Lev-Mar-R99open}, apart from the
case of single-input systems~\cite{Char-Lev-Mar89,Cart15}.
The main difficulty is that the order of a parameterization or
a flat output, if any exists, is not known beforehand:
for a given system, if one can construct a parameterization, or a flat output,
it has a definite order, but if, for some integer $j$, one proves that there is
no parameterization of order $j$, then it might admit a parameterization of
higher order, and we do not know any a priori bound on the possible $j$'s.
In the present paper, we consider systems of the smallest dimensions for which
the answer is not known; we do not really overcome the above mentioned ``main difficulty'', in the sense that
we only show that our class of systems does not admit a parameterization of
order lower than certain bounds, but the description of the parameterization
that we give, and the resulting system of PDEs is valid at any order.
Consider a general control-affine system in $\mathbb{R}^4$ with two controls, where $\xi\in\mathbb{R}^4$ is the state, $\widetilde{w}_1$ and $\widetilde{w}_2$ are the two scalar controls
and $X_0$, $X_1$ and $X_2$ are three smooth vector fields~:
\begin{displaymath}
\dot{\xi}={X}_0(\xi)+\widetilde{w}_1 X_1(\xi)+\widetilde{w}_2 X_2(\xi)\ .
\end{displaymath}
In \cite{Pome97cocv}, one can find a
necessary and sufficient condition on $X_0$, $X_1$, $X_2$ for this system to admit a
flat output depending on the state and control only ($j=0$ according to
the above notations).
Systems that do \emph{not} satisfy this condition may or may not
admit flat outputs depending also on some time-derivatives of the control
($j>0$). This is recalled and commented on in Sections~\ref{sec-plan} and \ref{sec-main}.
Instead of the above control system, we study a reduced equation
(\ref{sys3}); let us briefly explain why it represents, modulo a possibly
dynamic feedback transformation, all the relevant cases.
Systems for which the iterated Lie brackets
of $X_1$ and $X_2$ do not have maximum rank can be treated in a rather simple
manner \cite[first cases of Theorem~3.1]{Pome97cocv}; if on the contrary
iterated Lie brackets do have maximum rank, it is well known (Engel normal form for
distributions of rank 2 in $\mathbb{R}^4$, see~\cite{Brya-Che-Gar-G-G91}) that,
after a nonsingular feedback
($\widetilde{w}_i=\beta^{i,0}(\xi)+\beta^{i,1}(\xi)w_1+\beta^{i,2}(\xi)w_2$, $i=1,2$,
with $\beta^{1,1}\beta^{2,2}-\beta^{1,2}\beta^{2,1}\neq0$),
there are
coordinates such that the system reads
\begin{equation}
\label{sys4}
\dot{\xi}_1=w_1\,,\ \
\dot{\xi}_2=\gamma(\xi_1,\xi_2,\xi_3,\xi_4)+\xi_3w_1\,,\ \
\dot{\xi}_3=\delta(\xi_1,\xi_2,\xi_3,\xi_4)+\xi_4w_1\,,\ \
\dot{\xi}_4=w_2
\end{equation}
with some smooth functions $\gamma$ and $\delta$.
One can eliminate $w_1$ and $w_2$ and, renaming $\xi_1,\xi_2,\xi_3,\xi_4$ as
$x,y,z,w$, obtain the two following relations between these four functions of time~:
\begin{equation}
\label{sysbis}
\dot{y}=\gamma(x,y,z,w)+z\dot{x}\,,\ \ \
\dot{z}=\delta(x,y,z,w)+w\dot{x}
\end{equation}
(this can also be seen as a control system with state $(x,y,z)$ and controls
$w$ and $\dot{x}$).
If $\gamma$ does not depend on $w$, this system is always
parameterizable, and even flat (see \cite{Pome97cocv} or Example~\ref{ex-x} below).
If, on the contrary, $\gamma$ does depend on its last argument, one can,
around a point where the partial derivative is nonzero, invert $\gamma$ with
respect to $w$, \textit{i.e.} \ transform the first equation into
$w=g(x,y,z,\dot{y}-z\dot{x})$ for some function $g$, and obtain, substituting
into the last equation, a single differential relation between
$x,y,z$ written as (\ref{sys3}) in the next section.
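To make this reduction concrete on the simplest instance (a choice made purely for illustration), take $\gamma(x,y,z,w)=w$ and $\delta(x,y,z,w)=y$ in (\ref{sysbis}). The first equation of (\ref{sysbis}) then reads $\dot{y} = w + z\dot{x}$, i.e. $w = \dot{y}-z\dot{x} = \lambda$, and substituting into the second equation yields
\[
\dot{z} \;=\; y \;+\; \lambda\,\dot{x}\,,
\]
i.e. (\ref{sys3}) with $g=\lambda$ and $h=y$; this is precisely the equation of Example~\ref{ex-xu} below.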
Note that (\ref{sys3}) also represents the general (non-affine) systems in
$\mathbb{R}^3$ with two controls that satisfy the necessary condition given in
\cite{Rouc94,Slui93}, i.e. they are ``ruled''; we do not develop this here, see
\cite{Avan05th} or a future publication.
The paper technically focuses on Monge parameterizations of (\ref{sys3}).
The problem is unsolved if $g$ and $h$ are such that system (\ref{sys4}) does
not satisfy the above mentioned necessary and sufficient condition.
We do not give a complete solution, but our results are more general than ---and
imply---
those of \cite{Pome97cocv}.
The techniques used in the present paper, derived from the original
proof of non-parameterizability of some special systems
in~\cite{Hilb12} (see also~\cite{Rouc92}), are much simpler and more elementary than those of
\cite{Pome97cocv}: recovering the results from that paper in this
way has some interest in itself.
\section{Problem statement}
\label{sec-state}
\subsection{The systems under consideration}
\label{sec-sys}
This paper studies the solutions $t\mapsto(x(t),y(t)$, $z(t))$ of the scalar
differential equation
\begin{equation}
\label{sys3}
\dot{z}\ \;=\;\
h(x,y,z,\lambda)\;+\;g(x,y,z,\lambda)\,\dot{x}\ \ \ \
\mbox{with}\ \ \ \
\lambda=\dot{y}-z\dot{x}\
\end{equation}
where $g$ and $h$ are two real analytic functions $\Omega\to\mathbb{R}$, $\Omega$ being
an open connected subset of $\mathbb{R}^4$.
We assume that $g$ does depend on $\lambda$; more precisely, associating
to $g$ a map
$G:\Omega\to\mathbb{R}^4$ defined by
$G(x,y,z,\lambda)=(x,y,z,g(x,y,z,\lambda))$,
and
denoting by $g_4$ the partial derivative of $g$ with respect
to its fourth argument,
\begin{equation}
\label{g4}
g_4\mbox{ does not vanish on } \Omega
\ \ \ \mbox{and}\ \ \
G\mbox{ defines a diffeomorphism }\Omega\to G(\Omega)\;.
\end{equation}
We denote by $\widehat{\Omega}$ the open connected subset of $\mathbb{R}^5$ defined from
$\Omega$ by~:
\begin{equation}
\label{OO}
(x,y,z,\dot{x},\dot{y})\in\widehat{\Omega}\ \Leftrightarrow\
(x,y,z,\dot{y}-z\dot{x})\in\Omega\ .
\end{equation}
From $g$ and $h$ one may define $\gamma$ and $\delta$, two real analytic
functions $G(\Omega)\to\mathbb{R}$, such that
$G^{-1}(x,y,z,w)=(x,y,z,\gamma(x,y,z,w))$ and $\delta=h\circ G^{-1}$, \textit{i.e.}
\begin{eqnarray}\label{defgamma}
&\!\!\!\!w=g(x,y,z,\lambda)\Leftrightarrow \lambda=\gamma(x,y,z,w)\ ,
\\\label{defdelta}&
h(x,y,z,\lambda)=\delta(x,y,z,g(x,y,z,\lambda))\,,
\ \ \
\delta(x,y,z,w)=h(x,y,z,\gamma(x,y,z,w))\,.
\end{eqnarray}
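As an illustration of (\ref{defgamma})-(\ref{defdelta}) (restricting, to satisfy (\ref{g4}), to a domain chosen here only for the sake of the example), take $g=\lambda^2$ and $h=y$ on $\Omega=\{(x,y,z,\lambda),\,\lambda>0\}$, so that $g_4=2\lambda\neq0$ and $G$ is a diffeomorphism onto $\{w>0\}$. Then
\[
w=g(x,y,z,\lambda)=\lambda^2 \;\Longleftrightarrow\; \lambda=\gamma(x,y,z,w)=\sqrt{w}\,,
\qquad
\delta(x,y,z,w)=h(x,y,z,\sqrt{w})=y\,.
\]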
Then, one may associate to (\ref{sys3}) the control-affine
system (\ref{sys4}) in $\mathbb{R}^4$ with two controls, that can also be written as
(\ref{sysbis}); our interest however focuses on
system (\ref{sys3}) defined by $g$ and $h$ as above. Let us set some conventions~:
\begin{description}
\item[The functions $\gamma$ and $\delta$] when using the notations $\gamma$ and $\delta$, \emph{it
is not assumed that they are related to $g$ and $h$} by
(\ref{defgamma}) and (\ref{defdelta}), unless this is explicitly stated.
\item[Notations for the derivatives]
We denote partial derivatives by subscript
indexes. For functions of
many variables, like $\varphi(u,\ldots,u^{(k)},v,\ldots,v^{(\ell)})$ in
(\ref{para1}), we use the name of the variable as a subscript~:
$p_{xu^{(k-1)}}$ means $\partial^2p/\partial x \partial u^{(k-1)}$, $\varphi_{v^{(\ell)}}$ means $\partial\varphi/\partial v^{(\ell)}$
in
(\ref{EDPp}-b).
Since the arguments of $g$, $h$, $\gamma$, $\delta$ and a few other
functions will sometimes be
intricate functions of other variables, we use numeric subscripts for their
partial derivatives~:
$h_2$ stands for $\partial h/\partial y$, or $g_{4,4,4}$ for
$\partial^3 g/\partial\lambda^3$.
To avoid confusions, we will not use numeric subscripts for other purposes
than partial derivatives, except the subscript 0, as in
$(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0)$ for a reference point.
The dot denotes, as usual, derivative with respect to time, and
$^{(j)}$ the $j$\textsuperscript{th} time-derivatives.
\end{description}
The following elementary lemma
---we do state it because the argument is used repeatedly throughout
the paper---
states that no differential equation
independent from (\ref{sys3}) can be satisfied identically by \emph{all} solutions of
(\ref{sys3})~:
\begin{lmm}
\label{lem-absurd}
For $M\in\mathbb{N}$, let $W$ be an open subset of $\mathbb{R}^{3+2M}$ and
$R:W\to\mathbb{R}$ a smooth function.
If \emph{any} solution $(x(.),y(.),z(.))$ of system (\ref{sys3}),
defined on some time-interval $I$
and such that
$(z(t),x(t),\ldots,x^{(M)}(t),y(t),\ldots,y^{(M)}(t))$ is in $W$ for all $t$
in $I$, satisfies
$\ \ R(z(t),x(t),\ldots,x^{(M)}(t),y(t),\ldots,y^{(M)}(t))=0\ $ identically on
$I$, then $R$ is identically zero on $W$.
\end{lmm}
\begin{proof}
For \emph{any} $\mathcal{X}\in W$ there is a germ of solution of (\ref{sys3})
such that
$(z(0),x(0),\ldots,x^{(M)}(0)$, $y(0),\ldots,y^{(M)}(0))=\mathcal{X}$. Indeed,
take e.g. for $x(.)$ and $y(.)$ the polynomials in $t$ of degree $M$ that
have these derivatives at time zero; the Cauchy-Lipschitz theorem then yields a
(unique) $z(.)$ solution of (\ref{sys3}) with the prescribed $z(0)$.
\end{proof}
\subsection{The notion of parameterization}
In order to give rigorous definitions without taking care of
time-intervals of definition of the solutions, we consider germs of solutions
at time 0, instead of solutions themselves.
For $O$ an open subset of $\mathbb{R}^n$, the notation
$\mathcal{C}_0^{\infty}({\mathbb{R}},O)$ stands for the set of germs at $t=0$
of smooth functions
of one variable with values in $O$, see e.g. \cite{Golu-Gui73}.
Let $k$, $\ell$, $L$ be some non negative integers, $U$ an open subset of
${\mathbb{R}}^{k+\ell+2}$ and
$V$ an open subset of $\mathbb{R}^{2L+3}$.
We denote by
${\mathcal{U}}\subset{\mathcal{C}}_0^{\infty}({\mathbb{R}},{\mathbb{R}}^2)$
(resp. $\mathcal{V}\subset{\mathcal{C}}_0^{\infty}({\mathbb{R}},{\mathbb{R}}^3)$~)
the set of germs
of smooth functions $t\mapsto(u(t),v(t))$ (resp. $t\mapsto(x(t),y(t),z(t))$~)
such that their jets at $t=0$ to the orders specified below are
in $U$ (resp. in $V$)~:
\begin{eqnarray}\label{defu}
{\mathcal{U}}=\{(u,v)\in {\mathcal{C}}_0^{\infty}(\mathbb{R},\mathbb{R}^2) |
(u(0),\dot{u}(0),\ldots,u^{(k)}(0),v(0),\ldots,v^{(\ell)}(0)) \in U \},
\\\label{defv}
\mathcal{V}=\{(x,y,z)\in\mathcal{C}_0^{\infty}(\mathbb{R},\mathbb{R}^3) |
(x(0),y(0),z(0),\dot{x}(0),\dot{y}(0),\ldots,x^{(L)}(0),y^{(L)}(0)) \in V \}.
\end{eqnarray}
These are open sets for the Whitney
${\mathcal{C}}^{\infty}$ topology \cite[p.~42]{Golu-Gui73}.
\begin{dfntn}[Monge parameterization]\label{defparam}
Let $k,\ell, L$ be non negative integers, $L>0$, $k\leq\ell$, and
$\mathcal{X}=(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots,x^{(L)}_0$, $y^{(L)}_0)$ a point
in $\widehat{\Omega}\times\mathbb{R}^{2L-2}$ ($\widehat{\Omega}$ is defined in (\ref{OO})).
A \emph{parameterization of order $(k,\ell)$ at $\mathcal{X}$} for system
(\ref{sys3}) is defined by
\begin{itemize}
\item
a neighborhood $V$ of $\mathcal{X}$ in $\widehat{\Omega}\times\mathbb{R}^{2L-2}$,
\item
an open subset $U\subset{\mathbb{R}}^{k+\ell+2}$ and
\item
three real analytic functions $U\to {\mathbb{R}}$, denoted
$\varphi$, $\psi$, $\chi$,
\end{itemize}
such that, with $\mathcal{U}$ and $\mathcal{V}$ defined
from $U$ and $V$ according
to (\ref{defu})-(\ref{defv}), and\\
$\Gamma:\mathcal{U}\to{\mathcal{C}}_0^{\infty}(\mathbb{R},\mathbb{R}^3)$ the
map that assigns to $(u,v)\in{\mathcal{U}}$ the germ $\Gamma(u,v)$ at $t=0$ of
\begin{equation}
\label{para1}
t\ \mapsto\ \left(\begin{array}{c}x(t)\\y(t)\\z(t)\end{array}\right)
=
\left(
\begin{array}{l}
\varphi(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t))\\
\psi(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t))\\
\chi(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t))
\end{array}\right)
\ ,
\end{equation}
the following three properties hold~:
\begin{enumerate}
\item\label{def2}
for all $(u,v)$ belonging to $\mathcal{U}$,
$\Gamma(u,v)$
is a solution of system (\ref{sys3}),
\item\label{def1}
the map $\Gamma$ is open and $\Gamma({\mathcal{U}})\supset{\mathcal{V}}$,
\item\label{def3}
the two maps $U\to{\mathbb{R}}^3$ defined by the triples $(\varphi_{u^{(k)}},\psi_{u^{(k)}},\chi_{u^{(k)}})$ and\\
$(\varphi_{v^{({\ell})}},\psi_{v^{({\ell})}},\chi_{v^{({\ell})}})$ are
identically zero on no open subset of $U$.
\end{enumerate}
\end{dfntn}
\begin{rmrk}[On ordering the pairs $(k,\ell)$]
\label{rmk-kl}
Since $u$ and $v$ play a symmetric role, they can always be exchanged, and
there is no lack of generality in assuming $k\leq\ell$.
This convention is useful only when giving bounds on $(k,\ell)$. For
instance, $k\geq2$ means that both integers are no smaller than 2.
\end{rmrk}
\begin{xmpl}
\label{ex-xu}
Consider the equation $\ \dot{z}=y+(\dot{y}-z\dot{x})\dot{x}\ $, \textit{i.e.} \
(\ref{sys3}) with $g=\lambda$, $h=y$ (and $\widehat{\Omega}=\mathbb{R}^5$). At any
$(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ddot{x}_0,\ddot{y}_0)$ such that
$\ddot{x}_0+{\dot{x}_0}^{\,3}\neq1$, a parameterization of order $(1,2)$ is
given by~:
\begin{equation}
\label{eq:ex-xu}
x=v\,,\ \
y=\frac{{\dot{v}}^2u+\dot{u}}{\ddot{v}+{\dot{v}}^3-1}\,,\ \
z=\frac{(1-\ddot{v})u+\dot{v}\dot{u}}{\ddot{v}+{\dot{v}}^3-1}\,.
\end{equation}
It is easy to check that $(x,y,z)$ given by these formulas does satisfy the
equation, point~\ref{def1} is true because the above formulas can be
``inverted'' by $u=-z+y\dot{x}$, $v=x$ (this gives the ``flat output'', see
section~\ref{sec-flat}), point~\ref{def3} is true because $\psi_{\dot{u}}$,
$\psi_{\ddot{v}}$, $\chi_{\dot{u}}$ and $\chi_{\ddot{v}}$ are nonzero rational
functions. Here, $L=2$ and $V$ can be taken to be the whole set of
$(x,y,z,\dot{x},\dot{y},\ddot{x},\ddot{y})\in\mathbb{R}^7$ such that
$\ddot{x}+{\dot{x}}^{3}\neq1$ and $U$ the whole set of
$(u,\dot{u},v,\dot{v},\ddot{v})\in\mathbb{R}^5$ such that
$\ddot{v}+{\dot{v}}^3\neq1$.
\end{xmpl}
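The inversion claimed in Example~\ref{ex-xu} can be checked by a one-line computation: with $D=\ddot{v}+\dot{v}^3-1$ and $\dot{x}=\dot{v}$,
\[
-z + y\dot{x} \;=\; \frac{-(1-\ddot{v})u - \dot{v}\dot{u} + \dot{v}\left(\dot{v}^2u+\dot{u}\right)}{D}
\;=\; \frac{\left(\ddot{v}+\dot{v}^3-1\right)u}{D} \;=\; u\,,
\]
while $v=x$ holds by the first formula of (\ref{eq:ex-xu}).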
\begin{xmpl}
\label{ex-x}
Suppose that the function $\gamma$ in (\ref{sysbis}) depends on $x,y,z$ only
(this is treated in \cite[case 6 in Theorem~3.1]{Pome97cocv}).
For such systems, eliminating $w$ does not lead to (\ref{sys3}),
but to the simpler relation $\dot{y}-z\dot{x}=\gamma(x,y,z)$. One can easily
adapt the above definition replacing (\ref{sys3}) by this relation.
This system $\dot{y}-z\dot{x}=\gamma(x,y,z)$ admits a
parameterization of order (1,1) at any $(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0)$ such
that $\dot{x}_0+\gamma_3(x_0,y_0,z_0)\neq 0$.
\\\textit{Proof.}\
In a neighborhood of such a point, the map
$(x,\dot{x},y,z)\mapsto(x,\dot{x},y,\gamma(x,y,z)+z\dot{x})$ is a
local diffeomorphism, whose inverse can be written as
$(x,\dot{x},y,\dot{y})\mapsto(x,\dot{x},y$, $\chi(x,\dot{x},y,\dot{y}))$, thus
defining a map $\chi$.
Then $x=u, y=v, z=\chi(u,\dot{u},v,\dot{v})$ defines a parameterization of order
(1,1) in a neighborhood of these points.
\end{xmpl}
\begin{rmrk}
\label{rmrk-L}
The integer $L$ characterizes the number of derivatives needed
to describe the open set where the parameterization is valid.
For instance, in Examples~\ref{ex-xu} and \ref{ex-x}, $L$ must be taken no smaller than
2 and 1 respectively.
Obviously, a parameterization of order $(k,\ell)$ at
$(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots,x^{(L)}_0,y^{(L)}_0)$ is also, for $L'>L$ and
\emph{any} $(x^{(L+1)}_0,y^{(L+1)}_0,\ldots,x^{(L')}_0,y^{(L')}_0)$,
a parameterization of the same
order at $(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots,x^{(L')}_0,y^{(L')}_0)$.
\end{rmrk}
The above definition is local around some jet of solutions of
(\ref{sys3}). In general, the idea of a global parameterization, meaning
that $\Gamma$ would be defined globally, is not realistic; it is not
realistic either to require that there exists a parameterization around all jets
(this would be ``everywhere local'' rather than ``global'')~: the systems in
example~\ref{ex-x} admit a local parameterization around ``almost every'' jet,
meaning jets outside the zeroes
of a real analytic function (namely jets such that $\dot{x}+\gamma_3(x,y,z)\neq0$).
We shall not define more precisely the notion of ``almost everywhere local''
parameterizability, but rather the following (sloppier) one.
\begin{dfntn}
\label{def-param-bof}
We say that system (\ref{sys3}) \emph{admits a parameterization of
order $(k,\ell)$ somewhere in $\Omega$} if there exist an integer $L$ and at
least one jet $(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots$, $x^{(L)}_0,y^{(L)}_0)\in \widehat{\Omega}\times\mathbb{R}^{2L-2}$
with a parameterization of order $(k,\ell)$ at this jet in the sense of Definition~\ref{defparam}.
\end{dfntn}
In a colloquial way this is a ``somewhere local'' property. Using real
analyticity, it should imply ``almost everywhere local'', but
we do not investigate this.
\subsection{The functions $S$, $T$ and $J$}
\label{sec-ST}
Given $g,h$,
let us define three functions $S$, $T$ and $J$, to be used to discriminate
different cases. They were already more or less present in \cite{Pome97cocv}.
The most compact way is as follows~: let $\omega$, $\omega^1$ and $\eta$ be the following
differential forms in the variables $x,y,z,\lambda$~:
\begin{equation}
\label{om}
\begin{array}{ll}
\!\!\!\!\omega^1=\xdif{y}-z\xdif{x}\,,
&
\omega \ =\ -2\,{g_{4}}^{2}\xdif{x}
+\left(g_{4,4}\,h_{4}- g_{4}\,h_{4,4}\right)\omega^1
-g_{4,4}\left(\xdif{z}-g\xdif{x}\right),
\\
&
\eta \ = \ \xdif{z}-g\xdif{x}-h_4\,\omega^1\ .
\end{array}
\end{equation}
From (\ref{g4}),
$\omega\!\wedge\!\omega^1\!\wedge\!\eta=2{g_4}^2\xdif{x}\!\wedge\!\xdif{y}\!\wedge\!\xdif{z}\neq0$.
Decompose $\xdif{\omega}\!\wedge\!\omega$ on the basis
$\omega,\omega^1,\eta,\xdif{\lambda}$, thus defining the functions $S$, $T$
and $J$ (we say more on their expression and meaning in section~\ref{sec-ST0})~:
\begin{eqnarray}
\label{STJ}
\xdif{\omega}\wedge\omega &=& -\left(
\frac{S}{2g_4}\,\xdif{\lambda}\wedge\eta+\frac{T}{2}\,\xdif{\lambda}\wedge\omega^1
\;+\;J\,\omega^1\wedge\eta
\right)\wedge\omega\ .
\end{eqnarray}
\begin{xmpl}\label{3ex}
Let us illustrate the computation of $S$, $T$
and $J$ on the following three particular cases of (\ref{sys3}). For each of
them, the table below gives the differential forms $\omega$ and $\eta$, the
decomposition of $\xdif{\omega}\wedge\omega$ on $\omega^1, \omega, \eta,
\xdif{\lambda}$ and the resulting $S,T,J$ according to (\ref{STJ}).
System (a) was already studied in Example~\ref{ex-xu}.
\begin{equation}
\label{eq:ex}
\mbox{(a):}\ \dot{z}=y+(\dot{y}-z\dot{x})\dot{x}\;,
\ \ \ \ \ \
\mbox{(b):}\ \dot{z}=y+(\dot{y}-z\dot{x})(\dot{y}-(z-1)\dot{x})\;,
\ \ \ \ \ \
\mbox{(c):}\ \dot{z}=y+(\dot{y}-z\dot{x})^2\dot{x}\;.
\end{equation}
\begin{center}
\begin{tabular}{c|cc|c|c|c|}
\begin{tabular}{c} system\\(\ref{eq:ex})\end{tabular} & $g (x,y,z,\lambda)$ & $h(x,y,z,\lambda)$ &
$ \begin{array}{c} -\omega/2 \\ \eta \end{array}$
& $\xdif{\omega}\wedge\omega$
& $S,T,J$
\\
\hline
(a)
& $\lambda$ & $y$
& $\begin{array}{l}
\xdif{x}
\\
\xdif{z}-\lambda\xdif{x}
\end{array} $
& 0 & 0, 0, 0
\\
\hline
(b)
& $\lambda$ & $y+\lambda^2$
& $\begin{array}{l}
\xdif{y}-(z-1)\xdif{x}
\\
\xdif{z}-\lambda\xdif{x}-2\lambda\omega^1
\end{array} $
& $\omega^1\wedge\eta\wedge\omega$
& 0, 0, $-1$
\\
\hline
(c)
& $\lambda^2$ & $y$
& $\begin{array}{l}
\xdif{z}+3\lambda^2\xdif{x}
\\
\xdif{z}-\lambda^2\xdif{x}
\end{array} $
& $\frac{3}{\lambda}\xdif{\lambda}\wedge\eta\wedge\omega$
& $-12$, 0, 0
\\
\hline
\end{tabular}
\end{center}
\end{xmpl}
\subsection{Contributions and organization of the paper}
\label{sec-plan}
If $S=T=J=0$, \textit{i.e.} \ $\xdif{\omega}\wedge\omega=0$, system (\ref{sys3}) admits a
parameterization of order (1,2), at all points except some singularities.
This is stated further as Theorem~\ref{prop-plats}, but was already contained
in~\cite{Pome97cocv}.
We conjecture that these systems are the only parameterizable ones of
these dimensions, \textit{i.e.} \ system (\ref{sys3}) admits no parameterization of any
order if $(S,T,J)\neq(0,0,0)$, \textit{i.e.} \ if $\xdif{\omega}\wedge\omega\neq0$.
This is unfortunately still a conjecture, but we give the following results, valid if
$(S,T,J)\neq(0,0,0)$
(recall that $k\leq\ell$, see Remark~\ref{rmk-kl})~:
\begin{itemize}
\item
system (\ref{sys3}) admits no parameterization of order $(k,\ell)$ with $k\leq
2$ or $k=\ell=3$ (Theorem~\ref{th-3}),
\item
a parameterization of order
$(k,\ell)$ must come from a solution of the system of PDEs $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ (Theorem~\ref{th-cns}),
\item since a solution of this system of PDEs is also sufficient to construct a
parameterization (Theorem~\ref{edpparam}), the conjecture can be entirely re-formulated in terms
of this system of partial differential relations.
\end{itemize}
Note that this allows one to recover the results
from \cite{Pome97cocv} on $(x,u)$-flatness\footnote{The term ``dynamic
linearizable'' in \cite{Pome97cocv} is synonymous to ``flat'' here.
}. See Remark~\ref{rmrk-old} for details.
The paper is organized as follows.
Section~\ref{sec-edp} is about the above mentioned partial differential system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
Section~\ref{sec-ST0} is devoted to some special constructions for the case
where $S=T=0$, and geometric interpretations.
The main results are stated in Section~\ref{sec-main}, based on sufficient
conditions obtained in Sections~\ref{sec-edp} and \ref{sec-ST0}, and necessary
conditions stated and proved in Section~\ref{sec-nec}.
Sections~\ref{sec-flat} and \ref{sec-concl} comment on flatness vs. Monge
parameterization and then give a conclusion and perspectives.
\section{A system of partial differential equations}
\label{sec-edp}
\emph{This section can profitably be skipped in a first
reading}; the reader may come back, when needed, to this material, which might
appear, at first sight, somewhat disconnected from the thread of the paper.
It defines $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ and its ``regular solutions'', proves that a regular solution induces a
parameterization of order $(k,\ell)$, and that no regular solution exists
unless $k\geq3$ and $\ell\geq4$.
\subsection{The equation $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, regular solutions}
For $k$ and $\ell$ some positive integers, we define a partial differential
system in $k+\ell+1$ independent variables and one dependent variable,
\textit{i.e.} \ the unknown is one function of $k+\ell+1$ variables.
The dependent variable is denoted by $p$ and the independent variables by
$u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell-1})}$.
Although the names of the variables may suggest
``time-derivatives'', time is \emph{not} a variable here.
In $\mathbb{R}^{k+\ell+1}$ with the independent variables as coordinates, let $F$ be the differential operator of order 1
\begin{equation}
\label{F}
F=\sum_{i=0}^{k-2}u^{(i+1)}
\partialx{u^{(i)}}+\displaystyle\sum_{i=0}^{\ell-2}v^{(i+1)}\partialx{v^{(i)}}\
,
\end{equation}
where the first sum is zero if $k\leq 1$ and the second one is zero if $\ell\leq 1$.
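For concreteness, here is $F$ written out for $k=3$ and $\ell=4$, the smallest orders for which regular solutions are not ruled out below (Proposition~\ref{prop-edp})~:
\begin{displaymath}
F=\dot{u}\,\partialx{u}+\ddot{u}\,\partialx{\dot{u}}
+\dot{v}\,\partialx{v}+\ddot{v}\,\partialx{\dot{v}}+v^{(3)}\,\partialx{\ddot{v}}\;,
\end{displaymath}
\textit{i.e.} \ $F$ acts on functions of $u,\dot{u},\ddot{u},x,v,\dot{v},\ddot{v},v^{(3)}$ like a total time-derivative truncated at the highest-order variables $\ddot{u}$ and $v^{(3)}$, and ignoring $x$.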
Let $\widetilde{\Omega}$ be an open connected subset of $\mathbb{R}^4$ and
$\gamma,\delta$ two real analytic functions $\widetilde{\Omega}\to\mathbb{R}$ such that
$\gamma_4$ (partial derivative of $\gamma$ with respect to its
4\textsuperscript{th} argument, see end
of section~\ref{sec-sys}) does not vanish on
$\widetilde{\Omega}$.
Consider the system of two partial differential equations and three inequations~:
\begin{equation}
\label{EDPp}
\mathcal{E}^{\gamma,\delta}_{k,\ell}\left\{
\begin{array}{ll}
p_{u^{(k-1)}}\bigl(Fp_x-\delta(x,p,p_x,p_{xx})\bigr)
-p_{xu^{(k-1)}}\bigl(Fp-\gamma(x,p,p_x,p_{xx})\bigr)=0\;,&\mbox{(a)}\\
p_{u^{(k-1)}}\,p_{xv^{(\ell-1)}}-p_{xu^{(k-1)}}\,p_{v^{(\ell-1)}}=0\;,
&\mbox{(b)}
\\
p_{u^{(k-1)}}\neq 0\;,&\mbox{(c)}
\\
p_{v^{(\ell-1)}}\neq 0\;,&\mbox{(d)}\\
\gamma_1+\gamma_2\,p_x+\gamma_3\,p_{xx}+\gamma_4\,p_{xxx}-\delta\neq0\;.
&\mbox{(e)}
\end{array}\right.\!\!\!\end{equation}
To any $p$ satisfying $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, we associate
two functions $\sigma$ and $\tau$, and a vector field $E$~:
\begin{equation}\label{defsigtau}
\sigma=-\frac{p_{v^{(\ell-1)}}}{p_{u^{(k-1)}}}
\;,\
\tau=\frac{-Fp+\gamma(x,p,p_x,p_{xx})}{p_{u^{(k-1)}}}\;,\ E=\sigma\partialx{u^{(k-1)}}+\partialx{v^{(\ell-1)}}\,.
\end{equation}
We also introduce the differential operator $D$ (see Remark~\ref{rmk-spur} on
the additional variables $\dot{x},\ldots,x^{(k+\ell-1)}$)~:
\begin{equation}
\label{D}
D=F+\tau\partialx{u^{(k-1)}}+
\displaystyle\sum_{i=0}^{k+\ell-2}x^{(i+1)}\partialx{x^{(i)}}\ .
\end{equation}
\begin{dfntn}[Regular solutions of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$]
\label{def-Kreg}
A \emph{regular} solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ is a real analytic
function $p:O\to\mathbb{R}$, with $O$ a connected open subset of
$\mathbb{R}^{k+\ell+1}$,
such that the image of $O$ by $(x,p,p_x,p_{xx})$ is contained in
$\widetilde{\Omega}$, (\ref{EDPp}-a,b) are identically satisfied on $O$, the left-hand
sides of (\ref{EDPp}-c,d,e) are not identically zero,
and, for at least one integer $K\in\{1,\ldots,k+\ell-2\}$,
\begin{equation}
\label{EDPpK}
ED^{K}p\neq0
\end{equation}
(not identically zero, as a function of
$u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)},\dot{x},\ldots,x^{(K)}$ on
$O\times\mathbb{R}^K$).
\hspace{2em}
We call it \emph{$K$-regular} if $K$ is the smallest such
integer, \textit{i.e.} \ if $ED^ip=0$ for all $i\leq K-1$.
\end{dfntn}
\begin{rmrk}[on the additional variables $\dot{x},\ldots,x^{(k+\ell-1)}$ in $D$]
\label{rmk-spur}
These variables appear in the expression (\ref{D}). Note that $D$ is only
applied (recursively) to functions of
$u,\ldots,u^{(k-1)},x$, $v,\ldots,v^{(\ell-1)}$ only; hence we view it as a
vector field in $\mathbb{R}^{k+\ell+1}$ with these variables as parameters.
In fact, $D$ is only used in $ED^ip$, $1\leq i\leq k+\ell-1$. This is a
polynomial with respect to the variables $\dot{x},\ddot{x},\ldots,x^{(i)}$ with coefficients depending on
$u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}$ via the functions $p$, $\gamma$,
$\delta$ and their partial derivatives. Hence $ED^ip=0$ means that all these
coefficients are zero, \textit{i.e.} \ it encodes a collection of differential
relations on $p$, where the spurious variables
$\dot{x},\ddot{x},\ldots,x^{(i)}$ no longer appear. Likewise, $ED^ip\neq 0$ means that one of these relations is
not satisfied.
\end{rmrk}
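To illustrate on the first iterate how the spurious variables disappear, take $i=1$. Since $p$ does not depend on $\dot{x},\ldots,x^{(k+\ell-1)}$, (\ref{D}) and (\ref{defsigtau}) give $Dp=\gamma(x,p,p_x,p_{xx})+\dot{x}\,p_x$; moreover $Ep=\sigma\,p_{u^{(k-1)}}+p_{v^{(\ell-1)}}=0$ by the very definition of $\sigma$, and $Ep_x$ is the left-hand side of (\ref{EDPp}-b) divided by $p_{u^{(k-1)}}$, hence zero. Applying $E$ to $Dp$ therefore yields
\begin{displaymath}
EDp=\gamma_3\,Ep_x+\gamma_4\,Ep_{xx}+\dot{x}\,Ep_x=\gamma_4\,Ep_{xx}\;:
\end{displaymath}
the coefficient of $\dot{x}$ vanishes identically and, since $\gamma_4\neq0$, the condition $EDp=0$ encodes the single differential relation $Ep_{xx}=0$ on $p$, in which the spurious variable $\dot{x}$ indeed no longer appears.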
\begin{dfntn}
\label{def-Kreg-gen}
We say that system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ \emph{admits a regular (resp. $K$-regular) solution somewhere in
$\widehat{\Omega}$}
if there exist a connected open set $O\subset\mathbb{R}^{k+\ell+1}$ and a regular (resp. $K$-regular) solution $p:O\to\mathbb{R}$.
\end{dfntn}
\begin{rmrk}
\label{rmk-EDPbis}
It is easily seen that $p$ is a solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ if and only if
there exist $\sigma$ and $\tau$ such that $(p,\sigma,\tau)$ is a solution of
\begin{equation}
\label{EDPbis}
\begin{array}{rll}
Fp+\tau p_{u^{(k-1)}}=\gamma(x,p,p_x,p_{xx})&\hspace{1em}&
Ep=0\,,\ \sigma_x=0\,,
\\
Fp_x+\tau p_{x,u^{(k-1)}}=\delta(x,p,p_x,p_{xx})&&p_{u^{(k-1)}}\neq0\,,\
\tau_x\neq0\,,\ \sigma\neq0
\end{array}
\end{equation}
Indeed, (\ref{EDPp}) does imply the above relations with $\sigma$ and $\tau$
given by (\ref{defsigtau}); in particular,
$\tau_x\neq0$ is equivalent to (e) and $\sigma\neq0$ to (d); conversely,
eliminating $\sigma$ and $\tau$ in (\ref{EDPbis}), one recovers $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
Note also that, with $g$ and $h$ related to $\gamma$ and $\delta$ by (\ref{defgamma}) and
(\ref{defdelta}), any solution of the above equations and inequations satisfies
\begin{equation}
\label{psolgh}
Dp_x=h(x,p,p_x,Dp-p_x\dot{x})+g(x,p,p_x,Dp-p_x\dot{x})\dot{x}\ .
\end{equation}
\end{rmrk}
The following will be used repeatedly in the paper~:
\begin{lmm}
\label{lemAAA}
If $p$ is a solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ and
\begin{enumerate}
\item\label{pt1} either it satisfies a relation of the type $p_x=\alpha(x,p)$ with
$\alpha$ a function of two variables,
\item\label{pt2} or it satisfies a relation of the type $p_{xx}=\alpha(x,p,p_x)$ with
$\alpha$ a function of three variables,
\item\label{pt3} or it satisfies two relations of the type
$p_{xxx}=\alpha(x,p,p_x,p_{xx})$ and\\
$Fp_{xx}+\tau p_{xxu^{(k-1)}}=\psi(x,p,p_x,p_{xx})$, with $\psi$
and $\alpha$ two functions of four variables,
\end{enumerate}
then it satisfies $ED^ip=0$ for all $i\geq0$ and
hence is \emph{not} a regular solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
\end{lmm}
\begin{proof}
Point~\ref{pt1} implies point~\ref{pt2} because
differentiating the relation $p_x=\alpha(x,p)$ with respect to $x$
yields $p_{xx}=\alpha_x(x,p)+p_x\alpha_p(x,p)$.
Likewise, point~\ref{pt2} implies point~\ref{pt3}~:
differentiating the relation $p_{xx}=\alpha(x,p,p_x)$ with respect to $x$
yields
$p_{xxx}=\alpha_x(x,p,p_x)+p_x\alpha_p(x,p,p_x)+p_{xx}\alpha_{p_x}(x,p,p_x)$
while differentiating it along the vector field
$F+\tau\,\partial\!/\partial u^{(k-1)}$ and using (\ref{EDPbis}) yields \\
$Fp_{xx}+\tau p_{xxu^{(k-1)}}=\gamma(x,p,p_x,p_{xx})\,\alpha_p(x,p,p_x) +
\delta(x,p,p_x,p_{xx})\,\alpha_{p_x}(x,p,p_x)$.
Let us prove that point~\ref{pt3} implies $ED^ip=ED^ip_x=ED^ip_{xx}=0$ for all
$i\geq0$, hence the lemma. This is indeed true for $i=0$, and the following three
relations,
$Dp=\gamma(x,p,p_x,p_{xx})+\dot{x}\,p_{x}$,
$Dp_x=\delta(x,p,p_x,p_{xx})+\dot{x}\,p_{xx}$,
$Dp_{xx}=\psi(x,p,p_x,p_{xx})+\dot{x}\,\alpha(x,p,p_x,p_{xx})$,
which are implied by (\ref{D}), (\ref{EDPbis}) and the two relations in point~\ref{pt3},
allow one to pass from $i$ to $i+1$
($ED^ix=Ex^{(i)}=0$ and $ED^i\dot{x}=Ex^{(i+1)}=0$ follow from the very definition of
$D$ and $E$).
\end{proof}
\subsection{The relation with Monge parameterizations}
\label{sec-edpsuff}
Let us now explain how a Monge parameterization for system (\ref{sys3}) can be
deduced from a regular solution $p:O\to\mathbb{R}$ of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
This may seem anecdotal but it is not, for we shall prove (\textit{cf.} \
sections~\ref{sec-main} and \ref{sec-nec}) that all Monge parameterizations
are of this type, except when $g$ and $h$ are such that
$\xdif{\omega}\wedge\omega=0$ (see (\ref{om})-(\ref{STJ})).
We saw in Remark~\ref{rmk-EDPbis} that (\ref{EDPp}-e) is equivalent to
$\tau_x\neq0$; let
$(u_0,\ldots,u_0^{(k-1)},x_0,v_0,\ldots$, $v_0^{(\ell-1)})\in O$ be such that
$\tau_x(u_0,\ldots,u_0^{(k-1)},x_0,v_0,\ldots,v_0^{(\ell-1)})\neq0$.
Choose any $(u_0^{(k)},v_0^{(\ell)})\in\mathbb{R}^2$ (for instance with
$v_0^{(\ell)}=0$) such that
\begin{equation}
\label{eq:ukvl}
u_0^{(k)}-\sigma(u_0,\ldots, u_0^{(k-1)},v_0,\ldots, v_0^{(\ell-1)})\,v_0^{(\ell)}
=\tau(u_0,\ldots,u_0^{(k-1)},x_0,v_0,\ldots,v_0^{(\ell-1)}).
\end{equation}
Then, the implicit function theorem provides a neighborhood
$V$ of $(u_0,\ldots,u_0^{(k)}$, $v_0,\ldots,v_0^{(\ell)})$ in $\mathbb{R}^{k+\ell+2}$
and a real analytic map $\varphi:V\to\mathbb{R}$
such that $\varphi(u_0,\ldots,u_0^{(k)}$, $v_0,\ldots,v_0^{(\ell)})=x_0$ and
\begin{equation}
\label{defphi}
\tau(u,\ldots,u^{(k-1)},\varphi(u\cdots v^{(\ell)}),v,\ldots,v^{(\ell-1)})\,=\,
u^{(k)}-\sigma(u,\ldots,u^{(k-1)},v,\ldots,v^{(\ell-1)})\,v^{(\ell)}
\end{equation}
identically on $V$. Two other maps $V\to\mathbb{R}$ may be defined by
\begin{eqnarray}
\label{defpsi}
\psi(u,\ldots,u^{(k)},v,\ldots,v^{(\ell)})&=&p(u,\ldots,u^{(k-1)},\varphi(\cdots),v,\ldots,v^{(\ell-1)}),
\\
\label{defchi}
\chi(u,\ldots,u^{(k)},v,\ldots,v^{(\ell)})&=&
p_x(u,\ldots,u^{(k-1)},\varphi(\cdots),v,\ldots,v^{(\ell-1)}).
\end{eqnarray}
From these $\varphi$, $\psi$ and $\chi$, one can define a map $\Gamma$ as in
(\ref{para1}) that is a candidate for a parameterization.
We prove below that, if $p$ is a regular solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, then this
$\Gamma$ is
indeed a parameterization, at least away from some singularities. The following lemma
describes these singularities;
it is proved in Appendix~\ref{app-1}.
\begin{lmm}\label{jacob}
Let $O$ be an open connected subset of $\mathbb{R}^{k+\ell+1}$ and $p:O\to\mathbb{R}$ be a
$K$-regular solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, see (\ref{EDPp}). Define the map
$\pi:O\times\mathbb{R}^K\to\mathbb{R}^{K+2}$ by
\begin{equation}
\label{pi}
\!\!\pi(u\cdots u^{(k-1)},x,v\cdots v^{(\ell-1)},\dot{x}\cdots x^{(K)})=\!\left(\!\!\!
\begin{array}{c}
p_x(u\cdots u^{(k-1)},x,v\cdots v^{(\ell-1)})\\
p(u\cdots u^{(k-1)},x,v\cdots v^{(\ell-1)})
\\Dp(u\cdots u^{(k-1)},x,v\cdots v^{(\ell-1)},\dot{x})\\\vdots
\\D^Kp(u\cdots u^{(k-1)},x,v\cdots v^{(\ell-1)},\dot{x}\cdots x^{(K)})
\end{array}\!\!\!\right).
\end{equation}
There exist two non-negative integers $i_0\leq k$ and
$j_0\leq\ell$ such that $i_0+j_0=K+2$ and
\begin{equation}
\label{detpi}
\det\left(
\partialxy{\pi}{u^{(k-i_0)}},\ldots,\partialxy{\pi}{u^{(k-1)}},
\partialxy{\pi}{v^{(\ell-j_0)}},\ldots,\partialxy{\pi}{v^{(\ell-1)}}\right)
\end{equation}
is a nonzero real analytic function on $O\times\mathbb{R}^K$.
\end{lmm}
We can now state precisely the announced sufficient condition.
Its interest is discussed in Remark~\ref{rmk-suff}.
\begin{thrm}\label{edpparam}
Let $p:O\to\mathbb{R}$, with $O\subset\mathbb{R}^{k+\ell+1}$ open, be a
$K$-regular solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, and $i_0,j_0$ be given by Lemma~\ref{jacob}.
Then, the maps $\varphi,\psi,\chi$ constructed above define a parameterization
$\Gamma$ of system (\ref{sys3}) of order $(k,\ell)$ (see
Definition~\ref{defparam}) at any
jet of solutions $(x_0,y_0,z_0,\dot{x}_0,\ldots,x_0^{(K)},\dot{y}_0,\ldots,y_0^{(K)})$
such that,
for some $u_0,\ldots,u_0^{(k-1)},v_0,\ldots,v_0^{(\ell-1)}$,
\begin{equation}\label{lien}\left.\begin{array}{l}
(u_0,\ldots,u_0^{(k-1)},x_0,v_0,\ldots,v_0^{(\ell-1)})\in O\,,\\
z_0=p_x(u_0,\ldots, u_0^{(k-1)}, v_0,\ldots, v_0^{(\ell-1)}, x_0)\,,\\
y_0^{(i)}=D^ip(u_0,\ldots, u_0^{(k-1)}, v_0,\ldots, v_0^{(\ell-1)}, x_0, \ldots,
x_0^{(i)})\ \ \ 0\leq i\leq K\,,
\end{array}\right\}
\end{equation}
the left-hand sides of (\ref{EDPp}-c,d,e) are all nonzero at
$(u_0,\ldots,u_0^{(k-1)},x_0,v_0,\ldots,v_0^{(\ell-1)})$,
and the function $ED^Kp$ and the determinant (\ref{detpi}) are nonzero at point
$(u_0,\ldots,u_0^{(k-1)},x_0,\ldots,x_0^{(K)}$, $v_0,\ldots,v_0^{(\ell-1)})\in
O\times\mathbb{R}^{K}\ $.
\end{thrm}
\begin{proof}
Let us prove that $\Gamma$ given by (\ref{para1}),
with the maps $\varphi,\psi,\chi$ constructed above,
satisfies the three points of Definition~\ref{defparam}.
Differentiating (\ref{defphi}) with respect to $u^{(k)}$ and $v^{(\ell)}$ yields
$\varphi_{u^{(k)}}\tau_x=1$, $\varphi_{v^{(\ell)}}\tau_x=-\sigma$, hence
point~\ref{def3} ($\sigma\neq0$ from (\ref{EDPbis})).
To prove point \ref{def2},
let $u(.),v(.)$ be arbitrary and $x(.),y(.),z(.)$ be defined by (\ref{para1}).
Differentiating (\ref{para1}) with respect to time, using relations
(\ref{defpsi}) and (\ref{defchi}), taking $u^{(k)}(t)$ from
(\ref{defphi}), one has
$$
\dot{y}(t)=Fp+\tau p_{u^{(k-1)}}+v^{(\ell)}(t)Ep+\dot{x}(t)\,z(t)\,,\ \ \
\dot{z}(t)=Fp_x+\tau p_{x,u^{(k-1)}}+v^{(\ell)}(t)Ep_x+\dot{x}(t)\,p_{xx}\,,
$$
where $F$ is given by (\ref{F}) and the argument $(u(t)\ldots u^{(k-1)}(t),x(t),v(t)\ldots
v^{(\ell-1)}(t))$ of $Fp$, $Fp_x$, $Ep$, $Ep_x$, $\tau$, $p_{x,u^{(k-1)}}$, $p_{u^{(k-1)}}$ and $p_{xx}$ is omitted.
Then (\ref{EDPbis}) implies, again omitting
the argument of $p_{xx}$, that
$\dot{y}(t)=\gamma(x(t),y(t),z(t),p_{xx})+z(t)\dot{x}(t)$ and
$\dot{z}(t)=\delta(x(t),y(t),z(t),p_{xx})+p_{xx}\dot{x}(t)$.
The first equation yields $p_{xx}=g(x(t),y(t)$, $z(t),\dot{y}(t)-z(t)\dot{x}(t))$
with $g$ related to $\gamma$ by (\ref{defgamma}), and then the second one
yields (\ref{sys3}), with $h$ related to $\delta$ by (\ref{defdelta}).
This proves point~\ref{def2}. The rest of the proof is devoted to point \ref{def1}.
Let $t\mapsto(x(t),y(t),z(t))$ be a solution of (\ref{sys3}).
We may consider $\Gamma(u,v)=(x,y,z)$ (see (\ref{para1}))
as a system of three
ordinary differential equations in two unknown functions $u, v$~:
\begin{eqnarray}
\label{eqq101}
\!\!\!\!u^{(k)} -\sigma(u,\ldots,u^{(k-1)},v,\ldots, v^{(\ell-1)}) v^{(\ell)} -
\tau(u,\ldots,u^{(k-1)},x,v,\ldots, v^{(\ell-1)})
&=&0,
\\
\label{eqq102}
p(u,\ldots,u^{(k-1)},x,v,\ldots, v^{(\ell-1)}) &=&y,
\\
\label{eqq103}
p_x(u,\ldots,u^{(k-1)},x,v,\ldots, v^{(\ell-1)}) &=&z.
\end{eqnarray}
Differentiating (\ref{eqq102}) $K+1$ times, substituting $u^{(k)}$ from (\ref{eqq101}), and using the fact
that $ED^ip=0$ for $i\leq K$ (see Definition~\ref{def-Kreg}), we get
\begin{eqnarray}
\label{eqq102i}
D^ip\,(u(t),\ldots,u^{(k-1)}(t),v(t),\ldots,
v^{(\ell-1)}(t),x(t),\ldots,x^{(i)}(t))=\frac {d^iy}{dt^i}(t)
\;,\ \ 1\leq i\leq K,
\\[1ex]
\nonumber
v^{(\ell)}(t)\;ED^{K}p\,(u(t),\ldots,u^{(k-1)}(t),v(t),\ldots,
v^{(\ell-1)}(t),x(t),\ldots,x^{(K)}(t))
\hspace{8em}
\\
\label{eqq102K}
\;+\;D^{K+1}p\,(u(t),\ldots,u^{(k-1)}(t),v(t),\ldots,
v^{(\ell-1)}(t),x(t),\ldots,x^{(K+1)}(t))
=\frac {d^{K+1}y}{dt^{K+1}}(t)\,.
\end{eqnarray}
Equations (\ref{eqq102})-(\ref{eqq103})-(\ref{eqq102i}) can be written
\begin{equation}
\label{eq:pipi}
\pi(u,\ldots,u^{(k-1)},x,v,\ldots,
v^{(\ell-1)},\dot{x},\ldots,x^{(K)})\ =\
\left(
\begin{array}{c}
z\\y\\\dot{y}\\\vdots\\y^{(K)}
\end{array}\right)
\end{equation}
with $\pi$ given by (\ref{pi}).
From the implicit function theorem, since the determinant (\ref{detpi}) is nonzero,
(\ref{eqq102})-(\ref{eqq103})-(\ref{eqq102i})
yield
$u^{(k-i_0)},\ldots,u^{(k-1)}$, $v^{(\ell-j_0)},\ldots,v^{(\ell-1)}$ as explicit
functions of $u,\ldots,u^{(k-i_0-1)}$, $v,\ldots,v^{(\ell-j_0-1)}$,
$x,\ldots,x^{(K)}$, $y,\ldots,y^{(K)}$ and $z$.
Let us single out those giving the lowest-order derivatives~:
\begin{equation}
\begin{array}{l}\label{uvij}
u^{(k-i_0)}=f^1(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)},x,\ldots,x^{(K)},z,y,\ldots,y^{(K)}),\\
v^{(\ell-j_0)}=f^2(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)},x,\ldots,x^{(K)},z,y,\ldots,y^{(K)}).
\end{array}
\end{equation}
Let us prove that, provided that $(x,y,z)$ is a solution of
(\ref{sys3}), system (\ref{uvij}) is equivalent
to (\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103}), \textit{i.e.} \ to $\Gamma(u,v)=(x,y,z)$.
It is obvious that any $t\mapsto(u(t),v(t),x(t),y(t),z(t))$ that satisfies (\ref{sys3}),
(\ref{eqq101}), (\ref{eqq102}) and (\ref{eqq103}) also satisfies (\ref{uvij}),
because these equations were obtained from consequences of those. Conversely, let
$t\mapsto(u(t),v(t),x(t),y(t),z(t))$ be such that (\ref{sys3}) and
(\ref{uvij}) are satisfied; differentiating (\ref{uvij})
and substituting each time $\dot{z}$ from (\ref{sys3}) and
$(u^{(k-i_0)},v^{(\ell-j_0)})$ from (\ref{uvij}), one obtains
\begin{equation}\!
\begin{array}{l}\label{uvijDIFF}
u^{(k-i_0+i)}=f^{1,i}(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)},x,\ldots,x^{(K+i)},z,y,\ldots,y^{(K+i)}),\
i\in\mathbb{N},\\
v^{(\ell-j_0+j)}=f^{2,j}(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)},x,\ldots,x^{(K+j)},z,y,\ldots,y^{(K+j)}),\
j\in\mathbb{N}.
\end{array}\!\!\!
\end{equation}
Now, substitute the values of $u^{(k-i_0)},\ldots,u^{(k)}$,
$v^{(\ell-j_0)},\ldots,v^{(\ell)}$ from (\ref{uvijDIFF}) into (\ref{eqq101}),
(\ref{eqq102}) and (\ref{eqq103}); either the obtained relations are
identically satisfied, and hence it is true that any solution of (\ref{sys3}) and
(\ref{uvij}) also satisfies (\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103}), or
one obtains at least one relation of the form (recall that $k\leq\ell$):
$$
R(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)},x,\ldots,x^{(K+\ell)},z,y,\ldots,y^{(K+\ell)})=0.
$$
This relation has been obtained (indirectly) by differentiating and combining
(\ref{sys3})-(\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103}).
This is absurd because
(\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103})-(\ref{eqq102i})-(\ref{eqq102K}) are the only independent
relations of order $k,\ell$ obtained by differentiating and combining\footnote{
In other words,
(\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103})-(\ref{eqq102i})-(\ref{eqq102K}),
as a system of ODEs in $u$ and $v$, is formally integrable (see e.g. \cite[Chapter IX]{Brya-Che-Gar-G-G91}).
This means, for a system of ODEs with independent variable $t$, that no new independent equation of the same orders ($k$ with
respect to $u$ and $\ell$ with respect to $v$) can be obtained by
differentiating
and combining these equations.
It is known \cite[Chapter IX]{Brya-Che-Gar-G-G91} that a sufficient
condition is that this is true when differentiating only once and the system
allows one to express the highest order derivatives as functions of the others.
Formal integrability also means that, given any initial condition
$(u(0),\ldots,u^{(k)}(0),v(0),\ldots,v^{(\ell)}(0))$ that satisfies these
relations, there is a solution of the system of ODEs with these initial conditions.
}
(\ref{eqq101})-(\ref{eqq102})-(\ref{eqq103})
because, on the one hand, since $ED^Kp\neq0$, differentiating (\ref{eqq102K}) and (\ref{eqq101})
further will produce higher-order differential equations in which the higher-order
derivatives cannot be eliminated, and on the other hand, differentiating
(\ref{eqq103}) and substituting $\dot{z}$ from (\ref{sys3}), $u^{(k)}$
from (\ref{eqq101}) and $\dot{y}$ from (\ref{eqq102i}) for $i=1$ yields the
trivial $0=0$ because $p$ is a solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, see the proof of
point~\ref{def2} above.
We have now established that, for $(x,y,z)$ a solution of (\ref{sys3}), $\Gamma(u,v)=(x,y,z)$ is equivalent to
(\ref{uvij}).
Using the
Cauchy--Lipschitz theorem with continuous dependence on the parameters, one can define a continuous map
$s:\mathcal{V}\to\mathcal{U}$ mapping a germ $(x,y,z)$ to the unique
germ of solution of (\ref{uvij}) with fixed initial condition
$(u,\ldots,u^{(k-i_0-1)},v,\ldots,v^{(\ell-j_0-1)})=(u_0,\ldots,u_0^{(k-i_0-1)},v_0,\ldots,v_0^{(\ell-j_0-1)})$.
Then
$s$ is a continuous right inverse of $\Gamma$, \textit{i.e.} \ $\Gamma\circ s=Id$.
This proves point~\ref{def1}.
\end{proof}
\subsection{On (non-)existence of regular solutions of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$}
\label{sec-EDP}
\begin{cnjctr}
\label{conj-EDP} For any real analytic functions $\gamma$ and $\delta$ (with
$\gamma_4\neq0$), and any integers $k,\ell$, the partial differential system
$\mathcal{E}^{\gamma,\delta}_{k,\ell}$ (see (\ref{EDPp})) does not admit any regular solution $p$.
\end{cnjctr}
An equivalent way of stating this conjecture is: ``the equations $ED^ip=0$,
for $1\leq i\leq k+\ell-2$, are consequences of (\ref{EDPp})''. Note that ``$ED^ip=0$'' in
fact encodes several partial differential relations on $p$; see Remark~\ref{rmk-spur}.
If $\gamma$ and $\delta$ are polynomials, this can be easily
phrased in terms of the differential ideals in the set of polynomials with
respect to the variables $u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}$ with
$k+\ell+1$ commuting derivatives (all the partial derivatives with respect to
these variables).
This is still a conjecture for general integers $k$ and $\ell$, but
we prove it for ``small enough'' $k,\ell$, namely
if one of them is smaller than 3 or if $k=\ell=3$.
The following statements assume $k\leq\ell$ (see Remark~\ref{rmk-kl}).
\begin{prpstn}
\label{prop-edp}
If system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, with $k\leq\ell$, admits a regular solution, then $k\geq3$,
$\ell\geq4$ and the determinant
\begin{equation}
\label{eq:det2}\left|
\begin{array}{lll}
p_{u^{(k-1)}}&p_{u^{(k-2)}}&p_{u^{(k-3)}} \\
p_{xu^{(k-1)}}&p_{xu^{(k-2)}}&p_{xu^{(k-3)}} \\
p_{xxu^{(k-1)}}&p_{xxu^{(k-2)}}&p_{xxu^{(k-3)}}
\end{array}
\right|
\end{equation}
is a nonzero real analytic function.
\end{prpstn}
\begin{proof}
Straightforward consequence of Lemma~\ref{lemAAA} and the three following
lemmas, proved in Appendix~\ref{app-sys}.
\end{proof}
\begin{lmm}
\label{lem-1}
If $p$ is a solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$
and either $k=1$ or
$\left|
\begin{array}{ll}
p_{u^{(k-1)}}&p_{u^{(k-2)}}\\
p_{xu^{(k-1)}}&p_{xu^{(k-2)}}
\end{array}
\right|=0$,
then around each point such that $p_{u^{(k-1)}}\neq0$, there exists a function
$\alpha$ of two variables such that a relation $p_x=\alpha(x,p)$ holds
identically on a neighborhood of that point.
\end{lmm}
\begin{lmm}
\label{lem-2}
Suppose that $p$ is a solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ with
\begin{equation}
\label{inv2}
\ell\geq k\geq2\;,\ \ \ p_{u^{(k-1)}}\neq0\;,\ \ \
\left|
\begin{array}{ll}
p_{u^{(k-1)}}&p_{u^{(k-2)}}\\
p_{xu^{(k-1)}}&p_{xu^{(k-2)}}
\end{array}
\right|
\neq0\ .
\end{equation}
If either $k=2$ or the determinant (\ref{eq:det2}) is identically zero,
then, around any point where the two quantities in (\ref{inv2}) are nonzero,
there exists a function
$\alpha$ of three variables such that a relation $p_{xx}=\alpha(x,p,p_x)$ holds
identically on a neighborhood of that point.
\end{lmm}
\begin{lmm}
\label{lem-3}
Let $k=\ell=3$.
For any solution $p$ of $\mathcal{E}^{\gamma,\delta}_{3,3}$, in a neighborhood of any point where the
determinant (\ref{eq:det2}) is nonzero, there exist two functions $\alpha$
and $\psi$ of four variables such that
$p_{xxx}=\alpha(x,p,p_x,p_{xx})$ and
$Fp_{xx}+\tau p_{xxu^{(k-1)}}=\psi(x,p,p_x,p_{xx})$
identically on a neighborhood of that point.
\end{lmm}
\section{Remarks on the case where $S=T=0$}
\label{sec-ST0}
\subsection{Geometric meaning of the differential form {\boldmath$\omega$} and the condition $S=T=0$}
For $(x,y,z)$ such that the set
$\Lambda=\{\lambda\in\mathbb{R},(x,y,z,\lambda)\in\Omega\}$ is nonempty,
(\ref{sys3}) defines, by varying $\lambda$ in $\Lambda$ and
$\dot{x}$ in $\mathbb{R}$, a surface $\Sigma$ in [the tangent space at
$(x,y,z)$ to] $\mathbb{R}^3$. Fixing $\lambda$ in $\Lambda$ and varying
$\dot{x}$ in $\mathbb{R}$ yields a straight line $S_\lambda$ (direction
$(1,z,g(x,y,z,\lambda))$). Obviously, $\Sigma=\bigcup_{\lambda\in\Lambda}S_\lambda$;
$\Sigma$ is a ruled surface.
For each $\lambda\in\Lambda$, let $P_\lambda$ be the \emph{osculating
hyperbolic paraboloid to $\Sigma$ along $S_\lambda$}, \textit{i.e.} \ the
unique\footnote{General hyperbolic paraboloid:
$\left(a^{11}\dot{x}+a^{12}Y+a^{13}Z\right)
\left(a^{21}\dot{x}+a^{22}Y+a^{23}Z\right)
+a^{31}\dot{x}+a^{32}Y+a^{33}Z+a^0=0$, where the matrix
$[a^{ij}]$ is invertible and $Y,Z$ stand for $\dot{y}-z\dot{x}-\lambda,
\dot{z}-g\dot{x}-h$. It contains $S_\lambda$
if and
only if $a^{11}=a^{31}=a^0=0$. Contact at order 2 means $a^{13}=0$, $a^{33}=-a^{12}a^{21}/g_{4}$,
$a^{32}=-h_{4}a^{33}$, $a^{22}=\frac12a^{21}(g_{4}h_{44}-g_{44}h_{4})/{g_{4}}^2$,
$a^{23}=\frac12a^{21}g_{44}/{g_{4}}^2$.
Normalization: $a^{12}=a^{21}=1$.
}
such quadric that contains $S_\lambda$ and has a contact of order 2 with
$\Sigma$ at \emph{all} points of $S_\lambda$. Its equation is
\begin{displaymath}
\textstyle\left( \dot{y}-z\dot{x}-\lambda \right)
\left( \dot{x}+{\frac { h_{{44}}g_{{4}}-g_{{44}}h_{{4}} }{{2\,g_{{4}}}^{2}}}\left( \dot{y}-z\dot{x}-\lambda \right)+{\frac {g_{{44}} }{2\,{g_{4}}^{2}}}\left( \dot{z}-g\dot{x}-h \right) \right) -{\frac {\dot{z}-g\dot{x}-h}{g_{{4}}}}+{\frac {h_{{4}} }{g_{{4}}}}\left( \dot{y}-z\dot{x}-\lambda \right)=0
\end{displaymath}
where we omitted the argument $(x,y,z,\lambda)$ of $h$ and $g$. With $\omega$,
$\omega^1$, $\eta$ defined in (\ref{om})
and $\dot{\xi}$ the vector with coordinates $\dot{x},\dot{y},\dot{z}$, the
above equation reads
\begin{displaymath}
-\,\left(\langle\omega^1,\dot{\xi}\rangle-\lambda\right)
\frac{\langle\omega,\dot{\xi}\rangle + \left(
h_{{44}}g_{{4}}-g_{{44}}h_{{4}} \right)\lambda + g_{{44}}h}
{2{g_{4}}^{2}}
\,-\, \frac{ \langle\eta,\dot{\xi}\rangle-h}{g_{4}} =0\ ,
\end{displaymath}
that can in turn be rewritten
$\langle\omega^1,\dot{\xi}\rangle\langle\omega,\dot{\xi}\rangle
-\langle\omega^3,\dot{\xi}\rangle-a^0=0$, with $\omega^3$ and $a^0$ some
differential form and function;
$\omega$, $\omega^3$ and $a^0$ are uniquely defined up to multiplication by a
non-vanishing function; they encode how the ``osculating hyperbolic paraboloid''
depends on $x,y,z$ and $\lambda$.
We will have to distinguish the case when $S$ and $T$, whose explicit
expressions derive from (\ref{om}) and (\ref{STJ}):
\begin{equation}\label{defST}
S=2\,g_{4}\,g_{444}\;-\;3\,{g_{44}}^2\;,\ \ \
T=2\,g_{4}\,h_{444}\;-\;3\,g_{44}\,h_{44}\;,
\end{equation}
are zero.
From (\ref{STJ}), it means that the Lie derivative of
$\omega$ along $\partial/\partial\lambda$ is co-linear to $\omega$, and this
is classically equivalent to a decomposition $\omega=k\,\hat{\omega}^2$ where
$k\neq0$ is a function of the four variables $x,y,z,\lambda$ but
$\hat{\omega}^2$ is a differential form in the \emph{three} variables $x,y,z$, the
first integrals of $\partial/\partial\lambda$. Then,
one can prove that the form $\hat{\omega}^3=\omega^3/k$ and the function $\hat{a}^0=a^0/k$
also involve the variables $x,y,z$ only.
From $\omega$'s expression, one can
take for instance $k=g_{44}$ or $k=gg_{44}-2{g_{4}}^2$ (they do not vanish
simultaneously because $g_{4}$ does not vanish).
Hence $S=T=0$ if and only if, for each fixed $(x,y,z)$, the osculating
hyperbolic paraboloid $P_\lambda$ in fact does not depend on $\lambda$ \textit{i.e.} \
the surface $\Sigma$ itself is a hyperbolic paraboloid, its
equation being
\begin{equation}
\label{eqq50}
\langle\omega^1,\dot{\xi}\rangle
\langle\hat{\omega}^2,\dot{\xi}\rangle
+
\langle\hat{\omega}^3,\dot{\xi}\rangle
+
\hat{a}^0=0\ ,
\end{equation}
where $\dot\xi$ is the vector of coordinates $\dot{x},\dot{y},\dot{z}$.
This yields the following proposition\footnote{
We introduced the osculating hyperbolic paraboloid because it gives some
geometric insight on $\omega$, $S$ and $T$, but it is not formally
\emph{needed}: Proposition~\ref{propST0} can be
stated without it, and proved as
follows, based on (\ref{defST}) (see also \cite{Avan05th}): the general
solution of $S=0$ is a
linear fractional expression $\displaystyle g=
\bigl.\bigl(\hat{b}^0+\hat{b}^1\lambda \bigr)\bigr/
\bigl(\hat{c}^0+\hat{c}^1\lambda\bigr)$ where $\hat{b}^0$, $\hat{b}^1$,
$\hat{c}^0$, $\hat{c}^1$ are functions of $x,y,z$ only
---this is known, for $S/(g_{4})^2$ is the Schwarzian derivative of $g$
with respect to its 4\textsuperscript{th} argument, but anyway elementary---
and $g_4\neq0$ translates into $\hat{b}^0\hat{c}^1-\hat{b}^1\hat{c}^0\neq
0$; then $T=0$ yields $\displaystyle h=
\bigl.\bigl(\hat{a}^0+\hat{a}^1\lambda+\hat{a}^2\lambda^2 \bigr)\bigr/
\bigl(\hat{c}^0+\hat{c}^1\lambda\bigr)$
with $\hat{a}^0$, $\hat{a}^1$, $\hat{a}^2$ functions of $x,y,z$.
With such $g$ and $h$, multiplying both sides of (\ref{sys3}) by
$\hat{c}^0+\hat{c}^1\lambda$ yields the equation in
Proposition~\ref{propST0}.
},
where the functions $\hat{a}^0$, $\hat{a}^1$,
$\hat{a}^2$, $\hat{b}^0$, $\hat{b}^1$, $\hat{c}^0$, $\hat{c}^1$ of $x,y,z$ are
defined by
\begin{equation}
\label{om12}
\hat{\omega}^2=\frac{\omega}{k}=\hat{b}^1\xdif{x}+\hat{a}^2\omega^1-\hat{c}^1\xdif{z}
,\ \
\hat{\omega}^3=\frac{\omega^3}{k}=\hat{b}^0\xdif{x}+\hat{a}^1\omega^1-\hat{c}^0\xdif{z}
,\ \
\hat{a}^0=\frac{a^0}{k}\;.
\end{equation}
\begin{prpstn}
\label{propST0}
If $S$ and
$T$, given by (\ref{om})-(\ref{STJ}), or (\ref{defST}), are identically zero
on $\Omega$,
then, for any $(x_0,y_0,z_0,\lambda_0)$ in $\Omega$, there exist an open set
$W\subset\mathbb{R}^3$, an open interval $I\subset\mathbb{R}$, with
$(x_0,y_0,z_0,\lambda_0)\in W\times I\subset\Omega$, and seven smooth functions $W\to\mathbb{R}$ denoted by $\hat{a}^0$, $\hat{a}^1$, $\hat{a}^2$,
$\hat{b}^0$, $\hat{b}^1$, $\hat{c}^0$, $\hat{c}^1$ such that
$\hat{c}^0+\hat{c}^1\lambda$ does not vanish on $W\times I$,
$\hat{c}^1\hat{b}^0-\hat{b}^1\hat{c}^0$ does not vanish on $W$,
and, for $(x,y,z,\lambda)\in W\times I$, $\dot{x}\in\mathbb{R}$ and $\dot{z}\in\mathbb{R}$,
equation (\ref{sys3}) is equivalent to
\begin{displaymath}
\lambda\left(\hat{b}^1\!(\!x,\!y,\!z\!)\dot{x}+\hat{a}^2\!(\!x,\!y,\!z\!)\lambda
-\hat{c}^1\!(\!x,\!y,\!z\!)\dot{z}
\right)
+
\left( \hat{b}^0\!(\!x,\!y,\!z\!)\dot{x}+\hat{a}^1\!(\!x,\!y,\!z\!)\lambda -\hat{c}^0\!(\!x,\!y,\!z\!)\dot{z}\right)
+
\hat{a}^0\!(\!x,\!y,\!z\!)=0.
\end{displaymath}
\end{prpstn}
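As a direct consistency check with the footnote above (not needed in the sequel), one may verify from (\ref{defST}) that a linear fractional $g=\bigl(\hat{b}^0+\hat{b}^1\lambda\bigr)\big/\bigl(\hat{c}^0+\hat{c}^1\lambda\bigr)$ does satisfy $S=0$~: writing $W=\hat{b}^1\hat{c}^0-\hat{b}^0\hat{c}^1$ and $w=\hat{c}^0+\hat{c}^1\lambda$, one has
\begin{displaymath}
g_4=\frac{W}{w^2}\,,\quad
g_{44}=\frac{-2\,W\hat{c}^1}{w^3}\,,\quad
g_{444}=\frac{6\,W(\hat{c}^1)^2}{w^4}\,,\quad
2\,g_4\,g_{444}=3\,{g_{44}}^2=\frac{12\,W^2(\hat{c}^1)^2}{w^6}\,,
\end{displaymath}
hence $S=0$ by (\ref{defST}); a similar computation with $h=\bigl(\hat{a}^0+\hat{a}^1\lambda+\hat{a}^2\lambda^2\bigr)\big/\bigl(\hat{c}^0+\hat{c}^1\lambda\bigr)$ gives $T=0$.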
\subsection{A parameterization of order $(1,2)$ if $S=T=J=0$}
It is known~\cite{Pome97cocv} that system (\ref{sys3})
is $(x,u)$-flat (see Section~\ref{sec-flat}) if $S=T=J=0$. For the sake of completeness, let us re-state this result
in terms of parameterization. We start with the following particular case of (\ref{sys3}):
\begin{equation}
\label{eq:new2}
\dot{z} = \kappa(x,y,z) \,\dot{x} \,\lambda
+a(x,y,z) \,\lambda
+b(x,y,z) \,\dot{x}+c(x,y,z)
\ \ \mbox{with}\ \
\lambda=\dot{y}-z\dot{x}
\end{equation}
where $\kappa$ does not vanish on the domain where it is defined.
Note that Example~\ref{ex-xu} was of this type with $\kappa=1$, $a=b=0$, $c=y$.
For short, define the following vector fields:
$$
X^0=c\partialx{z}\,,\
X^1=\partialx{x}+z\partialx{y}+b\partialx{z}\,,\
X^2=\partialx{y}+a\partialx{z}\,,\
X^3=\kappa\partialx{z}\ .
$$
Note that, for $h$ an arbitrary smooth function of
$x$, $y$ and $z$, $X^0h$, $X^1h$, $X^2h$, $X^3h$ also depend on $x,y,z$ only.
\begin{lmm}
\label{lmm-canonxulin}
System (\ref{eq:new2}) admits a parameterization of order (1,2) at
any $(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0$, $\ddot{x}_0,\ddot{y}_0)$ such that
\begin{equation}
\label{eq:new4}
\kappa\,\ddot{x}_0+\kappa^2{\dot{x}_0}^3
+\bigl(X^1\kappa-X^3b+2a\kappa\bigr){\dot{x}_0}^2
+\bigl(X^1a+X^0\kappa-X^3c-X^2b+a^2\bigr){\dot{x}_0} +X^0a-X^3c\neq0.
\end{equation}
\end{lmm}
\begin{proof}
From (\ref{eq:new4}), the two vector fields $Y=X^2+\dot{x}X^3$ and
$Z=[\,X^0+\dot{x}X^1\,,\,X^2+\dot{x}X^3\,]+\ddot{x}X^3$ are linearly
independent at point $(x_0,y_0,z_0,\dot{x}_0,\ddot{x}_0)$.
Let then $h$ be a function of $(x,y,z,\dot{x})$ such that $Yh=0$
and $Zh\neq 0$; its ``time-derivative along system
(\ref{eq:new2})'', given by
$\dot{h}= X^0h +\bigl(X^1h\bigr)\dot{x} +\bigl(Yh\bigr)\lambda+\bigl(\partial h/\partial\dot{x})\ddot{x}$,
does not depend on $\lambda$: it is a function of
$(x,y,z,\dot{x},\ddot{x})$; also, since $Yh=0$, one has $Y\dot{h}=Zh$;
finally, $Zh\neq 0$ implies that
$\xdif{h}\wedge\xdif{\dot{h}}\wedge\xdif{x}\wedge\xdif{\dot{x}}\wedge\xdif{\ddot{x}}\neq0$.
In turn, this implies that
$(x,y,z,\dot{x},\ddot{x})\mapsto(h(x,y,z,\dot{x}),\dot{h}(x,y,z,\dot{x},\ddot{x}),x,\dot{x},\ddot{x})$
defines a local diffeomorphism at $(x_0,y_0,z_0,\dot{x}_0,\ddot{x}_0)$. Let $\psi$ and $\chi$ be the two functions of
five variables such that the inverse of that local diffeomorphism is
$(u,\dot{u},v,\dot{v},\ddot{v})\mapsto
(v,\psi(u,\dot{u},v,\dot{v},\ddot{v}),\chi(u,\dot{u},v,\dot{v},\ddot{v}),\dot{v},\ddot{v})$.
The parameterization (\ref{para1}) is given by~:
$x=v,y=\psi(u,\dot{u},v,\dot{v},\ddot{v}),z=\chi(u,\dot{u},v,\dot{v},\ddot{v})$.
\end{proof}
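As a sanity check (ours, not part of the original statement), one can specialize the nondegeneracy condition (\ref{eq:new4}) to Example~\ref{ex-xu}, where $\kappa=1$, $a=b=0$ and $c=y$. All the Lie-derivative coefficients then vanish: $X^1\kappa=X^0\kappa=0$ because $\kappa$ is constant, $X^3b=X^2b=X^1a=X^0a=0$ because $a=b=0$, and $X^3c=\kappa\,\partial c/\partial z=\partial y/\partial z=0$ because $c=y$. Hence

```latex
% Condition (eq:new4) specialized to Example ex-xu
% (kappa = 1, a = b = 0, c = y; all bracketed coefficients vanish):
\[
\kappa\,\ddot{x}_0+\kappa^2\dot{x}_0^{\,3}
+\bigl(X^1\kappa-X^3b+2a\kappa\bigr)\dot{x}_0^{\,2}
+\bigl(X^1a+X^0\kappa-X^3c-X^2b+a^2\bigr)\dot{x}_0
+X^0a-X^3c
\;=\;\ddot{x}_0+\dot{x}_0^{\,3}\;.
\]
```

For that example the lemma therefore provides a parameterization of order $(1,2)$ at every jet with $\ddot{x}_0+\dot{x}_0^{\,3}\neq0$.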
\begin{thrm}
\label{prop-plats}
If $S=T=J=0$, then system (\ref{sys3}) admits a parameterization of order $(1,2)$ at any
$(x_0,y_0,z_0,\dot{x}_0$, $\dot{y}_0,\ddot{x}_0,\ddot{y}_0) \in
\bigl(\widehat{\Omega}\times\mathbb{R}^2\bigr)\setminus F$,
where $F\subset\widehat{\Omega}\times\mathbb{R}^2$ is closed with empty interior.
\end{thrm}
\begin{proof}
From Proposition~\ref{propST0}, (\ref{sys3}) and (\ref{eqq50}) are
identical. Since
$\xdif{\hat{\omega}^2}\wedge\hat{\omega}^2=0$ (see (\ref{STJ})-(\ref{om12})),
there is a local
change of coordinates $(\tilde{x},\tilde{y},\tilde{z})=P(x,y,z)$ such that
$\hat{\omega}^2=k'\xdif{\tilde{x}}$ and
$\omega^1=k''\big(\xdif{\tilde{y}}-\tilde{z}\xdif{\tilde{x}}\big)$ with
$k'\neq0$, $k''\neq0$. Hence $P$ transforms (\ref{eqq50}) into
(\ref{eq:new2}), for some $\kappa,a,b,c$.
Lemma~\ref{lmm-canonxulin} gives $\varphi,\psi,\chi$ defining a
parameterization of order (1,2) for this system. Then
$P^{-1}\circ\varphi,P^{-1}\circ\psi,P^{-1}\circ\chi$ define one for
the original system (\ref{sys3}), or (\ref{eqq50});
$\bigl(\widehat{\Omega}\times\mathbb{R}^2\bigr)\setminus F$ is the inverse image by
$P$ of the set defined by (\ref{eq:new4}).
\end{proof}
\subsection{A normal form if $S=T=0$ and $J\neq0$}
\begin{prpstn}
\label{lem11}
Assume that the functions $g$ and $h$ defining system (\ref{sys3}) are such
that $S$ and $T$ defined by (\ref{STJ}) or (\ref{defST}) are identically zero
on $\Omega$, and let
$(x_0,y_0,z_0,\lambda_0)\in\Omega$ be such that
$J(x_0,y_0,z_0,\lambda_0)\neq0$.
There exist an open set
$W\subset\mathbb{R}^3$ and an open interval $I\subset\mathbb{R}$ such that
$(x_0,y_0,z_0,\lambda_0)\in W\times I\subset\Omega$, a smooth
diffeomorphism $P$ from $W$ to $P(W)\subset\mathbb{R}^3$ and six smooth functions
$P(W)\to\mathbb{R}$ denoted $\kappa$, $\alpha$,
$\beta$, $a$, $b$, $c$ such that, with the
change of coordinates
$(\tilde{x},\tilde{y},\tilde{z})=P(x,y,z)$, system (\ref{sys3})
reads
\begin{equation}
\label{eq:bof}
\dot{\tilde{z}}=\kappa(\tilde{x},\tilde{y},\tilde{z})\left(\dot{\tilde{y}}-\alpha(\tilde{x},\tilde{y},\tilde{z})\,\dot{\tilde{x}}\right)
\left(\dot{\tilde{y}}-\beta(\tilde{x},\tilde{y},\tilde{z})\,\dot{\tilde{x}}\right)
\;+\;a(\tilde{x},\tilde{y},\tilde{z})\,\dot{\tilde{x}}+b(\tilde{x},\tilde{y},\tilde{z})\,\dot{\tilde{y}}+c(\tilde{x},\tilde{y},\tilde{z})
\end{equation}
and none of the functions
$\kappa$, $\alpha-\beta$,
$\alpha_3$ and $\beta_3$ vanish on $P(W)$.
\end{prpstn}
\begin{proof}
From Proposition~\ref{propST0}, we consider system (\ref{eqq50}).
Let $P^1,P^2$ be a pair of independent first integrals of the vector field
$\hat{c}^1\left(\partialx{x}+z\partialx{y}\right) +\hat{b}^1\partialx{z}$;
from (\ref{om12}), $\omega^1,\hat{\omega}^2$ span the annihilator of this
vector field, and hence are independent linear combinations of $\xdif{P}^1$ and
$\xdif{P}^2$:
possibly interchanging $P^1$ and $P^2$ or adding one to the other,
there exist smooth functions
$k^1,k^2,f^1,f^2$ such that
$\hat{\omega}^i=k^i\left(\xdif{P}^2-f^i\xdif{P}^1\right)$, $f^1-f^2\neq0$, $k^i\neq0$,
$i=1,2$.
Now, take for $P^3$ any function such that
$\xdif{P}^1\wedge\xdif{P}^2\wedge\xdif{P}^3\neq0$;
decomposing $\hat{\omega}^3$, we get three smooth
functions $ p^0, p^1, p^2$ such that $\hat{\omega}^3\ =\ p^0\left(-\xdif{P}^3+
p^1\xdif{P}^1+ p^2\xdif{P}^2\right)$, $p^0\neq0$.
The change of coordinates
$P=(P^1,P^2,P^3)$ does transform system
(\ref{eqq50}) into (\ref{eq:bof}) with
$$
\kappa=\frac{k^1k^2}{ p^0}\circ P^{-1},\ \
\alpha=f^1\circ P^{-1},\ \ \beta=f^2\circ P^{-1},\ \ a= p^1\circ P^{-1},\ \
b= p^2\circ P^{-1},\ \ c=\frac{\hat{a}^0}{ p^0}\circ P^{-1}\;.
$$
$\kappa$ and $\alpha-\beta$ are nonzero because $f^1-f^2$, $k^1$ and $k^2$
are.
$\alpha_3$ and $\beta_3$ are nonzero because the inverse images of
$\alpha_3\xdif{\tilde{x}}\wedge\xdif{\tilde{y}}\wedge\xdif{\tilde{z}}$ and
$\beta_3\xdif{\tilde{x}}\wedge\xdif{\tilde{y}}\wedge\xdif{\tilde{z}}$ by $P$
are $\xdif{P}^1\wedge\xdif{P}^2\wedge\xdif{f}^i$ for $i=1,2$, which are equal,
by construction, to $\xdif{\omega}^1\wedge\omega^1/(k^1)^2$ and
$\xdif{\hat{\omega}}^{2}\wedge\hat{\omega}^2/(k^2)^2$, which are both nonzero
(the second one because $J\neq0$).
\end{proof}
Note that (\ref{eq:bof}) is not in the form (\ref{sys3})
unless $\alpha=\tilde{z}$ or $\beta=\tilde{z}$. Since $\alpha_3\neq0$ and $\beta_3\neq0$, this suggests the following
local changes of coordinates $A$ and $B$, which both transform
(\ref{eq:bof}) into a new system of the form (\ref{sys3}):
\begin{equation}
\label{diffST0}
(\tilde{x},\tilde{y},\tilde{z})\mapsto A(\tilde{x},\tilde{y},\tilde{z})=(\tilde{x},\tilde{y},\alpha(\tilde{x},\tilde{y},\tilde{z}))\ \ \ \mbox{and}\ \ \
(\tilde{x},\tilde{y},\tilde{z})\mapsto B(\tilde{x},\tilde{y},\tilde{z})=(\tilde{x},\tilde{y},\beta(\tilde{x},\tilde{y},\tilde{z}))\;.
\end{equation}
These two systems of the form (\ref{sys3}) correspond to two choices $h^1,g^1$
and $h^2,g^2$ instead of the original $h,g$, and they yield, according to (\ref{defgamma}) and
(\ref{defdelta}), two possible sets of
functions $\gamma$ and $\delta$. These will be used in Theorem~\ref{th0};
let us give their explicit expression~:
\begin{equation}
\label{case1}
\gamma^i(x,y,z,w)=
\frac{w-m^{i,0}(x,y,z)}{m^{i,1}(x,y,z)}
\ ,\ \
\delta^i=n^{i,0}+n^{i,1}\gamma+n^{i,2}\gamma^2
\ ,\ \ i\in\{1,2\}
\end{equation}
with (these are obtained from each other by interchanging $\alpha$ and $\beta$)~:
\begin{equation}
\label{case2}
\begin{array}{l}
m^{1,0}=\left(\alpha_1+\alpha\,\alpha_2+(a+b\,\alpha)\,\alpha_3\right)\circ A^{-1}\,,\ \
m^{1,1}=\left(\kappa\,\alpha_3\,(\alpha-\beta)\right)\circ A^{-1}\,,\\
n^{1,0}=\alpha_3\circ A^{-1}\,,\ \
n^{1,1}=(\alpha_2+b\,\alpha_3)\circ A^{-1}\,,\ \
n^{1,2}=(\kappa\,\alpha_3)\circ A^{-1}\,,
\\[1ex]
m^{2,0}=\left(\beta_1+\beta\,\beta_2+(a+b\,\beta)\,\beta_3\right)\circ B^{-1}\,,\ \
m^{2,1}=\left(\kappa\,\beta_3\,(\beta-\alpha)\right)\circ B^{-1}\,,\\
n^{2,0}=\beta_3\circ B^{-1}\,,\ \
n^{2,1}=(\beta_2+b\,\beta_3)\circ B^{-1}\,,\ \
n^{2,2}=(\kappa\,\beta_3)\circ B^{-1}\,.
\end{array}
\end{equation}
\begin{xmpl}
\label{ex:2gamma}
System (\ref{eq:ex}-b) in Example~\ref{3ex} is already in the form (\ref{eq:bof}). The above choices are, for this system:
\begin{equation}
\label{eq:ex2gamma}
\gamma^1(x,y,z,w)=w
,\
\delta^1(x,y,z,w)=y+w^2
,\ \gamma^2(x,y,z,w)=-w
,\
\delta^2(x,y,z,w)=y+w^2.
\end{equation}
\end{xmpl}
\section{Main results}
\label{sec-main}
We gather here our main results in condensed form.
They rely on precise \emph{local} results from other sections~:
sufficient (sections~\ref{sec-ST0} and \ref{sec-edpsuff})
or necessary (section~\ref{sec-nec}) conditions for parameterizability,
results on solutions of the partial differential system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$
(section~\ref{sec-EDP}) and on the relation between flatness and
parameterizability (section~\ref{sec-flat}).
We are not able to give local precise necessary and sufficient conditions at a
given point (jet) because singularities are not the same for necessary and for
sufficient conditions; instead, we use the ``somewhere'' notion as in
Definitions~\ref{def-Kreg-gen} and \ref{def-param-bof}.
\begin{thrm}\label{th-cns}
System (\ref{sys3}) admits a parameterization of order $(k,\ell)$ somewhere in $\Omega$ if and only if
\begin{enumerate}
\item \label{cns1}either $S=T=J=0$ on $\Omega$ (in this case, one can take
$(k,\ell)=(1,2)$),
\item \label{cns2}or $S=T=0$ on $\Omega$ and one of the two systems
$\mathcal{E}^{\gamma^1,\delta^1}_{k,\ell}$ or $\mathcal{E}^{\gamma^2,\delta^2}_{k,\ell}$
with $\gamma^i$, $\delta^i$ given by (\ref{case1})-(\ref{case2}), admits
a regular solution somewhere in $\widehat{\Omega}$.
\item \label{cns3}or $S$ and $T$ are not both identically zero, and the system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$
with $\gamma$ and $\delta$ defined from $g$ and $h$ according to
(\ref{defgamma}) and (\ref{defdelta}) admits
a regular solution somewhere in $\widehat{\Omega}$.
\end{enumerate}
\end{thrm}
\begin{proof}
Sufficiency~:
the parameterization is provided, away from an explicitly described set of
singularities, by Theorem~\ref{prop-plats} if
point~\ref{cns1} holds, and by Theorem~\ref{edpparam} if one of the two
other points holds.
For necessity, assume that there is a
parameterization of order $(k,\ell)$ at a point $(x,y,z,\dot{x},\dot{y},\ldots,x^{(L)},y^{(L)})$
in $\bigl(\widehat{\Omega}\times\mathbb{R}^{2L-2}\bigr)\backslash F$. From Theorems
\ref{thSTnot0} and \ref{th0}, this implies that one of the three points holds.
\end{proof}
\begin{xmpl}\label{ex:mainth}
Consider again systems (a), (b) and (c) in (\ref{eq:ex}).
From point~\ref{cns1} of the theorem, system (a) admits a parameterization
of order (1,2), see also Example~\ref{ex-xu}. System (b) falls under
point~\ref{cns2} of the theorem: it has a parameterization of order $(k,\ell)$
if and only if one of the two systems of PDEs
\begin{equation}
\label{EDPp2}
\begin{array}{ll}
p_{u^{(k-1)}}\bigl(Fp_x-p-{p_{xx}}^2\bigr)
-p_{xu^{(k-1)}}\bigl(Fp\pm p_{xx}\bigr)=p_{u^{(k-1)}}\,p_{xv^{(\ell-1)}}-p_{xu^{(k-1)}}\,p_{v^{(\ell-1)}}=0\;,
\\
p_{u^{(k-1)}}\neq 0\;,\ \ \
p_{v^{(\ell-1)}}\neq 0\;,\ \ \
p+{p_{xx}}^2\pm p_{xxx}\neq0
\end{array}
\end{equation}
admits a ``regular solution''. Point~\ref{cns3} of the theorem is relevant to
system (c) because $S\neq0$: (c) admits a parameterization of
order $(k,\ell)$ if and only if there is a ``regular solution'' $p$ to
\begin{equation}
\label{EDPp3}
\begin{array}{ll}
p_{u^{(k-1)}}\bigl(Fp_x-p\bigr)
-p_{xu^{(k-1)}}\bigl(Fp-\sqrt{p_{xx}}\bigr)=p_{u^{(k-1)}}\,p_{xv^{(\ell-1)}}-p_{xu^{(k-1)}}\,p_{v^{(\ell-1)}}=0\;,
\\
p_{u^{(k-1)}}\neq 0\;,\ \ \
p_{v^{(\ell-1)}}\neq 0\;,\ \ \
p-{p_{xxx}}/{2\sqrt{p_{xx}}} \neq0\ .
\end{array}
\end{equation}
If Conjecture~\ref{conj-EDP} is true, neither system (b) nor system (c) admits
a parameterization of any order.
\end{xmpl}
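For the reader's convenience, here is how the first equations of systems like (\ref{EDPp2}) and (\ref{EDPp3}) arise; this elimination is our paraphrase of the relations established in the proof of Theorem~\ref{thSTnot0}. That proof yields the two relations of (\ref{EDPbis}), namely $Fp+\tau\,p_{u^{(k-1)}}=\gamma(x,p,p_x,p_{xx})$ and $Fp_x-\delta(x,p,p_x,p_{xx})+\tau\,p_{xu^{(k-1)}}=0$; eliminating $\tau$ between them gives a single $\tau$-free equation:

```latex
% Eliminating tau between the two relations of (EDPbis):
%   Fp   + tau p_{u^{(k-1)}}  = gamma(x,p,p_x,p_{xx}),
%   Fp_x + tau p_{xu^{(k-1)}} = delta(x,p,p_x,p_{xx}),
% i.e. substituting tau = (gamma - Fp)/p_{u^{(k-1)}}:
\[
p_{u^{(k-1)}}\bigl(Fp_x-\delta(x,p,p_x,p_{xx})\bigr)
-p_{xu^{(k-1)}}\bigl(Fp-\gamma(x,p,p_x,p_{xx})\bigr)\;=\;0\;.
\]
```

With $\gamma^{1,2}(x,y,z,w)=\pm w$ and $\delta^{1,2}(x,y,z,w)=y+w^2$, as in (\ref{eq:ex2gamma}), this produces the first equation of (\ref{EDPp2}); reading (\ref{EDPp3}) backwards, system (c) corresponds to the choices $\gamma(x,y,z,w)=\sqrt{w}$ and $\delta(x,y,z,w)=y$.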
Theorem~\ref{th-cns} gives a central role to the system of PDEs $\mathcal{E}^{\gamma,\delta}_{k,\ell}$. It
makes Conjecture~\ref{conj-EDP} equivalent to Conjecture~\ref{conj-param}
below.
Theorem~\ref{th-3} states that the conjecture is true for $k,\ell$ ``small enough''.
\begin{cnjctr}
\label{conj-param}
If $\xdif{\omega}\wedge\omega$ (or $(S,T,J)$) is not identically zero on
$\Omega$, then system (\ref{sys3}) does not admit a parameterization of any
order at any point (jet of any order).
\end{cnjctr}
\begin{thrm}\label{th-3}
If system (\ref{sys3}) admits a parameterization of order $(k,\ell)$, with $k\leq\ell$, at some
jet, then either $S=T=J=0$ or $k\geq3$ and $\ell\geq4$.
\end{thrm}
\begin{proof}
This is a simple consequence of Theorem~\ref{th-cns} and
Proposition~\ref{prop-edp}.
\end{proof}
\begin{rmrk}
\label{rmk-suff}
If our Conjecture~\ref{conj-EDP} is correct, the systems $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ never
have any regular solutions, and the sufficiency part of Theorem~\ref{th-cns}
(apart from case~\ref{cns1}) is essentially void, and so is
Theorem~\ref{edpparam}. However, Conjecture~\ref{conj-EDP} is still a
conjecture, and the interest of the sufficient conditions above is to
make this conjecture, which only deals with a set of
partial differential equalities and inequalities, equivalent to
Conjecture~\ref{conj-param} below.
For instance, if one comes up with a regular solution of some of these
systems $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, this will yield a new class of systems that admit a
parameterization.
\end{rmrk}
\begin{rmrk}[on recovering the results of \cite{Pome97cocv}]
\label{rmrk-old}
The main result in that reference can be phrased as follows~:
\begin{quote}
``~(\ref{sys4}) is $(x,u)$-dynamic linearizable (\textit{i.e.} \ $(x,u)$-flat) if
and only if $S=T=J=0$~''~.
\end{quote}
Sufficiency is elementary in~\cite{Pome97cocv}; Theorem~\ref{prop-plats} implies it.
The difficult part is to prove that $S=T=J=0$ is necessary; that
proof is very technical in~\cite{Pome97cocv}: it relies on some simplifications performed via
computer algebra.
From our Proposition~\ref{prop-xu33}, $(x,u)$-flatness implies existence of a parameterization of
some order $(k,\ell)$ with $k\leq 3$ and $\ell\leq3$. Hence Theorem~\ref{th-cns}
does imply the above statement.
\end{rmrk}
\section{Necessary conditions}
\label{sec-nec}
\subsection{The case where $S$ and $T$ are not both zero}
The following lemma is needed to state the theorem.
\begin{lmm}\label{lemphi}
If $(S,T,J)\neq(0,0,0)$ and system (\ref{sys3}) admits a
parameterization $(\varphi,\psi,\chi)$ of order $(k,\ell)$
at point
$(x_0,y_0,z_0,\ldots,x^{(L)}_0,y^{(L)}_0)\in\mathbb{R}^{2L+3}$,
then $\varphi_{u^{(k)}}$ is a nonzero real analytic function.
\end{lmm}
\begin{proof}
Suppose, for contradiction, a parameterization in which $\varphi$ does not depend on $u^{(k)}$. Substituting in (\ref{sys3}) yields
$$
\dot{\chi}=h(\varphi,\psi,\chi,\dot{\psi}-\chi\dot{\varphi})
+g(\varphi,\psi,\chi,\dot{\psi}-\chi\dot{\varphi})\dot{\varphi}\ .
$$
Since $\dot{\varphi}$ does
not depend on $u^{(k+1)}$, differentiating twice with respect to $u^{(k+1)}$ yields
$$
\chi_{u^{(k)}}=\psi_{u^{(k)}}(h_{4}+g_{4}\dot{\varphi})\ ,\ \ \
0={\psi_{u^{(k)}}}^2(h_{4,4}+g_{4,4}\dot{\varphi})\ .
$$
If $\psi_{u^{(k)}}$ were zero then, from the first relation,
$\chi_{u^{(k)}}$ would vanish too, and this would contradict point~\ref{def3} in
Definition~\ref{defparam}; hence the second relation implies that
$h_{4,4}+g_{4,4}\dot{\varphi}$ is identically zero. From point~\ref{def1} in the
same definition, this implies that \emph{all} solutions of (\ref{sys3}) satisfy
the relation~:
$h_{4,4}(x,y,z,\dot{y}-z\dot{x})+g_{4,4}(x,y,z,\dot{y}-z\dot{x})\dot{x}=0$.
From Lemma~\ref{lem-absurd}, this implies that $h_{4,4}$ and
$g_{4,4}$ vanish identically as functions of four variables, and hence $S=T=J=0$. This
proves the lemma.
\end{proof}
\begin{thrm}\label{thSTnot0}
Assume that either $S$ or $T$ is not identically zero on $\Omega$, and
that system (\ref{sys3}) admits a
parameterization of order $(k,\ell)$ at
$\mathcal{X}=(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots,x^{(L)}_0,y^{(L)}_0)\in\widehat{\Omega}\times\mathbb{R}^{2L-2}$,
with $k,\ell,L$ some integers and
$\varphi,\psi,\chi$ defined on $U\subset\mathbb{R}^{k+\ell+2}$.
Then $k\geq1$, $\ell\geq 1$ and, for any point
$(u_0,\ldots,u_0^{(k)},v_0,\ldots,v_0^{(\ell)})\in U$ (not necessarily sent to
$\mathcal{X}$ by the parameterization) such that
$$\varphi_{u^{(k)}}(u_0,\ldots,u_0^{(k)},v_0,\ldots,v_0^{(\ell)})\neq 0,$$
there exist a neighborhood $O$ of
$(u_0,\ldots,u^{(k-1)}_0,\,\varphi(u_0,\ldots,v_0^{(\ell)})\,,v_0,\ldots,v^{(\ell-1)}_0)$
in $\mathbb{R}^{k+\ell+1}$ and a regular solution $p:O\to\mathbb{R}$ of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$,
related to $\varphi,\psi,\chi$ by
(\ref{defphi}), (\ref{defpsi}) and (\ref{defchi}), the functions $\gamma$ and
$\delta$ being related to $g$ and
$h$ by (\ref{defgamma}) and (\ref{defdelta}).
\end{thrm}
\begin{rmrk}
\label{rmk-Lmin}
The regular solution $p$ is $K$-regular for some positive integer $K\leq
k+\ell-2$. If $L>K$, Theorem~\ref{edpparam} implies,
possibly away from some singular values of
$(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0,\ldots$, $x^{(K)}_0,y^{(K)}_0)$, that system
(\ref{sys3}) also admits a parameterization of order $(k,\ell)$ at
$(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0$, $\ldots,x^{(K)}_0,y^{(K)}_0)$.
See also Remark~\ref{rmrk-L}.
\end{rmrk}
\begin{proof}
Assume that system (\ref{sys3}) admits a parameterization
$(\varphi,\psi,\chi)$ of order $(k,\ell)$ at
$(x_0,y_0$, $z_0, \dot{x}_0,$ $\dot{y}_0$, $\ldots, x^{(L)}_0$, $y^{(L)}_0)$.
Since $\varphi_{u^{(k)}}$ does not vanish, one can apply the inverse function theorem to
the map
$$(u,\dot{u},\ldots,u^{(k)},v,\dot{v},\ldots,v^{({\ell})}) \mapsto
(u,\ldots,u^{(k-1)},\varphi(u,\ldots,u^{(k)},v,\ldots,v^{(\ell)}),v,\ldots,v^{({\ell})})$$
and define locally a function $r$ of $k+\ell+2$ variables such that
\begin{equation}
\label{defr}
\varphi(u,\dot{u},\ldots,u^{(k)},v,\dot{v},\ldots,v^{({\ell})})=x
\ \Leftrightarrow\
r(u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell})})=u^{(k)}
\ .
\end{equation}
Defining two functions $p,q$ by substitution of $u^{(k)}$ in $\psi$, $\chi$,
the parameterization can be re-written implicitly as
\begin{equation} \label{eqp}
\left\{\begin{array}{ll}
y=p(u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell})}),\\
z=q(u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell})}),\\
u^{(k)}=r(u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell})}).
\end{array}\right. \end{equation}
We now work with this form of the parameterization and
$u,\dot{u},\ldots,u^{(k-1)},x,\dot{x},\ddot{x},\ldots\;v,\dot{v},\ldots$, $v^{({\ell})},v^{({\ell+1})},\ldots\;$
instead of
$u,\dot{u},\ldots,u^{(k-1)},u^{(k)},\ldots\;v,\dot{v},\ldots,v^{({\ell})},v^{({\ell+1})},\ldots\;$.
In order to simplify notations,
let us agree that, if $k=0$, the list $u,\dot{u},\ldots,u^{(k-1)}$ is
empty and any term involving the index $k-1$ is zero (same with $\ell-1$ if $\ell=0$).
Let us also define $\mathcal{P}$ and $\mathcal{Q}$ by
\begin{equation}
\label{PQ}
\mathcal{P}=Fp+rp_{u^{(k-1)}}+v^{(\ell)}p_{v^{(\ell-1)}}+v^{(\ell+1)}p_{v^{(\ell)}}\,,\ \
\mathcal{Q}=Fq+rq_{u^{(k-1)}}+v^{(\ell)}q_{v^{(\ell-1)}}+v^{(\ell+1)}q_{v^{(\ell)}}\ ,
\end{equation}
with $F$ given by (\ref{F}).
$\mathcal{P}$ and $\mathcal{Q}$ depend on
$u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell+1})}$
but not on $\dot{x}$; $Fp$ and $Fq$ depend neither on $\dot{x}$ nor on $v^{({\ell+1})}$.
When substituting (\ref{eqp}) in (\ref{sys3}), using
$\dot{y}={\mathcal{P}}+\dot{x}p_x$ and $\dot{z}={\mathcal{Q}}+\dot{x}q_x$,
one obtains~:
\begin{equation}\label{eq2}
{\mathcal{Q}}+\dot{x}q_x=h(x,p,q,\lambda)+g(x,p,q,\lambda)\dot{x}\ \ \
\mbox{with}\ \ \lambda={\mathcal{P}}+\dot{x}(p_x-q).
\end{equation}
Differentiating each side three times with respect to
$\dot{x}$, one obtains~:
\begin{eqnarray}
\label{eq201}
&&\hspace{-3.6em}
q_x=
\left(h_{4}(x,p,q,\lambda)+g_{4}(x,p,q,\lambda)\dot{x}\right)(p_x-q)+g(x,p,q,\lambda),\\
\label{eq202}
&&\hspace{-3.6em}
0=
\left(h_{4,4}(x,p,q,\lambda)+g_{4,4}(x,p,q,\lambda)\dot{x}\right)(p_x-q)^2+2g_{4}(x,p,q,\lambda)(p_x-q),\\
\label{eq203}
&&\hspace{-3.6em}
0=
\left(h_{4,4,4}(x,p,q,{\lambda})+g_{4,4,4}(x,p,q,{\lambda})\dot{x}\right)(p_x-q)^3+3g_{4,4}(x,p,q,\lambda)(p_x-q)^2.
\end{eqnarray}
Combining (\ref{eq202}) and (\ref{eq203}) to cancel the first term in each
equation, one obtains
(see $S$ and $T$ in (\ref{defST}))~:
\begin{equation}
\label{eq:2023}
\left( \vphantom{\frac12}
T(x,p,q,{\lambda})+S(x,p,q,{\lambda})\dot{x}\right)(p_x-q)^2=0.
\end{equation}
The second factor must be zero because,
if $T+S\dot{x}$ were identically zero as a function of
$u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}$, then,
by Definition~\ref{defparam} (point \ref{def1}), \emph{all} solutions
$(x(t),y(t),z(t))$ of $(\ref{sys3})$ would satisfy
$T(x,y,z,\dot{y}-z\dot{x})+\dot{x}S(x,y,z,\dot{y}-z\dot{x})=0$ identically, and
this would imply that $S$ and $T$ are identically zero functions of four
variables, contrary to our assumption.
The relation $q=p_x$ implies
\begin{equation}
\label{PPP}
\lambda=\mathcal{P}=Fp+rp_{u^{(k-1)}}+v^{(\ell)}p_{v^{(\ell-1)}}+v^{(\ell+1)}p_{v^{(\ell)}}
\end{equation}
and (\ref{eq201}) then yields
$p_{xx}=g(x,p,p_x,\lambda)$, or, with $\gamma$ defined by (\ref{defgamma}),
\begin{equation}
\label{eqq30}
\lambda=\gamma(x,p,p_x,p_{xx})\ .
\end{equation}
Since neither $p$ nor $Fp$ nor $r$ depends on $v^{(\ell+1)}$, (\ref{PPP}) and (\ref{eqq30}) yield
$p_{v^{(\ell)}}=0$, \textit{i.e.} \ $p$ is a
function of $u,\ldots,u^{(k-1)},x$, $v,\ldots,v^{(\ell-1)}$ only.
Then (\ref{PPP}) and (\ref{eqq30}) imply (\ref{ducon0}) with
$f=\gamma$. Furthermore, since $\varphi_{v^{(\ell)}}\neq0$ (point~\ref{def3} of Definition~\ref{defparam}),
(\ref{defr}) implies $r_{v^{(\ell)}}\neq0$.
Also, if $p$ were a function of $x$ only, then all solutions of (\ref{sys3}) would
satisfy a relation $y(t)=p(x(t))$, which is absurd from Lemma~\ref{lem-absurd}.
We may then apply Lemma~\ref{lem0} and assert that $k\geq1$, $\ell\geq1$,
$p_{u^{(k-1)}}\neq0$, $p_{v^{(\ell-1)}}\neq0$.
Since $p$ does not depend on $v^{(\ell)}$, (\ref{eqq30}) implies that the
right-hand side of (\ref{PPP}) does not depend on $v^{(\ell)}$ either; since
$p_{u^{(k-1)}}\neq0$, $r$ must be affine with respect to $v^{(\ell)}$, \textit{i.e.}
\begin{equation}
\label{r}
r\ =\ \tau\,+\,\sigma\,v^{(\ell)}\ ,
\end{equation}
with $\sigma$ and $\tau$ some functions of
$u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}$.
Since $p$, $q=p_x$, $\lambda$ and $q_x=p_{xx}$ do not depend on $v^{(\ell)}$,
(\ref{eq2}) implies that ${\mathcal{Q}}$ does not depend on $v^{(\ell)}$
either; with $p_x=q$, and $r$ given by (\ref{r}), the expression of
${\mathcal{Q}}_{v^{(\ell)}}$ is
$\sigma p_{xu^{(k-1)}}+p_{xv^{(\ell-1)}}$ while, from (\ref{PPP}), that of
${\mathcal{P}}_{v^{(\ell)}}$ is $\sigma p_{u^{(k-1)}}+p_{v^{(\ell-1)}}$; both must vanish.
Collecting this, one gets
\begin{equation}
\label{eqq31}
\sigma p_{u^{(k-1)}}+p_{v^{(\ell-1)}}=\sigma
p_{xu^{(k-1)}}+p_{xv^{(\ell-1)}}=0\ .
\end{equation}
Since $p_{u^{(k-1)}}\neq0$ and $p_{v^{(\ell-1)}}\neq0$,
(\ref{eqq31}) implies $E\sigma=0$, and also $\sigma_x=0$, $\sigma\neq0$. Then,
since $r_x\neq0$ (see (\ref{defr})), (\ref{r}) implies $\tau_x\neq0$.
With the above remarks, (\ref{PPP}) yields $\mathcal{P}=\lambda=Fp+\tau p_{u^{(k-1)}}$
and hence, from (\ref{eqq30}), the first relation in (\ref{EDPbis}).
In a similar way, (\ref{PQ}) yields $\mathcal{Q}=Fp_x+\tau p_{xu^{(k-1)}}$,
and substituting in (\ref{eq2}), one obtains (the terms involving $\dot{x}$
disappear according to (\ref{eqq30}))
$Fp_x-\delta(x,p,p_x,p_{xx})+p_{xu^{(k-1)}}\tau=0$ with $\delta$ defined by
(\ref{defdelta}).
This proves that $p$ satisfies (\ref{EDPbis}), equivalent to (\ref{EDPp}) according to
Remark~\ref{rmk-EDPbis}, and hence that $p$ is a solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
To prove by contradiction that it is $K$-regular for some $K\leq k+\ell+1$,
assume that $ED^ip=0$ for $0\leq i\leq k+\ell$.
Then
$p_x,p,\ldots,D^{(k+\ell-1)}p, x,\ldots,x^{(k+\ell-1)}$ are $2k+2\ell+1$
functions in the $2k+2\ell$ variables $u,\ldots,u^{(k-1)}, v,\ldots, v^{(\ell-1)},
x,\ldots,x^{(k+\ell-1)}$. At points where the Jacobian matrix has constant
rank, there is at least one nontrivial relation between them. From
point~\ref{def1} of Definition~\ref{defparam}, this would imply that all
solutions of system (\ref{sys3}) satisfy this relation, say
$R(z(t),y(t),\ldots,y^{(k+\ell-1)}(t),x(t),\ldots,x^{(k+\ell-1)}(t))=0$, which
is absurd from Lemma~\ref{lem-absurd}.
\end{proof}
\subsection{The case where $S$ and $T$ are zero}
Here, the situation is slightly more complicated:
we also establish that any parameterization ``derives from'' a solution of
the system of PDEs (\ref{EDPp}), but this is correct only if $J$ is not zero,
and there are two distinct (non-equivalent) choices for $\gamma$ and $\delta$.
If $J\neq0$, we saw, in
section~\ref{sec-ST0}, that possibly after a change of coordinates, system
(\ref{sys3}) can be written as (\ref{eq:bof}), which we re-write here without
the tildes:
\begin{equation}
\label{sysST0}
\dot{z}=\kappa(x,y,z)\left(\dot{y}-\alpha(x,y,z)\,\dot{x}\right)
\left(\dot{y}-\beta(x,y,z)\,\dot{x}\right)
\;+\;a(x,y,z)\,\dot{x}+b(x,y,z)\,\dot{y}+c(x,y,z)\;,
\end{equation}
where $\kappa,\alpha,\beta,a,b,c$ are real analytic functions of three
variables and
$
\kappa\neq0$, $\alpha-\beta\neq0$, $\partial\alpha/\partial z\neq0$, $\partial\beta/\partial z\neq0$.
We state the theorem for this class of systems, because it is simpler to
describe the two possible choices for $\gamma$ and $\delta$ than with
(\ref{sys3}), knowing that $S=T=0$.
\begin{lmm}
If system (\ref{sysST0}) admits a parameterization $(\varphi,\psi,\chi)$ of order $(k,\ell)$
at a point, then $\varphi_{u^{(k)}}$ is a nonzero real analytic function.
\end{lmm}
\begin{proof}After a change of coordinates (\ref{diffST0}), use Lemma~\ref{lemphi}.\end{proof}
\begin{thrm}\label{th0}
Let $(x_0,y_0,z_0)$ be a point where $\kappa$, $\alpha-\beta$,
$\alpha_3$ and $\beta_3$ are nonzero, and $k,\ell,L$ three integers.
If system (\ref{sysST0}) has a parameterization of order
$(k,\ell)$ at $\mathcal{X}=(x_0,y_0,z_0,\dot{x}_0,\dot{y}_0$, $\ldots,x^{(L)}_0$,
$y^{(L)}_0)$
with $\varphi,\psi,\chi$ defined on $U\subset\mathbb{R}^{k+\ell+2}$,
then $k\geq1$, $\ell\geq 1$ and, for any point
$(u_0,\ldots,u_0^{(k)},v_0$, $\ldots,v_0^{(\ell)})\in U$ (not necessarily sent to
$\mathcal{X}$ by the parameterization) such that
$$\varphi_{u^{(k)}}(u_0,\ldots,u_0^{(k)},v_0,\ldots,v_0^{(\ell)})\neq 0,$$
there exist a neighborhood $O$ of $(u_0,\ldots,u^{(k-1)}_0,\,\varphi(u_0,\ldots,v_0^{(\ell)})\,,v_0,\ldots,v^{(\ell-1)}_0)$
in $\mathbb{R}^{k+\ell+1}$ and a regular solution $p:O\to\mathbb{R}$ of one of the two systems
$\mathcal{E}^{\gamma^1,\delta^1}_{k,\ell}$ or $\mathcal{E}^{\gamma^2,\delta^2}_{k,\ell}$
with $\gamma^i$, $\delta^i$ given by (\ref{case1})-(\ref{case2}), such that $p,\varphi,\psi,\chi$
are related by (\ref{defphi}), (\ref{defpsi}) and (\ref{defchi}).
\end{thrm}
Remark~\ref{rmk-Lmin} applies to this theorem in the same way as to Theorem~\ref{thSTnot0}.
\begin{proof}
As in the beginning of the proof of Theorem~\ref{thSTnot0},
a parameterization $(\varphi, \psi, \chi)$
of order $(k,\ell)$ with $\varphi_{u^{(k)}}\neq0$ yields
an implicit form (\ref{eqp}).
Substituting in (\ref{sysST0}), one obtains an identity between two polynomials in
$v^{(\ell+1)}$ and $\dot{x}$.
The coefficient of $(v^{(\ell+1)})^2$ in the right-hand side must be zero and
this yields that $p$ cannot depend on $v^{(\ell)}$; the linear term in
$v^{(\ell+1)}$ then implies that $q$ does not depend on $v^{(\ell)}$ either.
To go further, let us define, as in the proof of Theorem~\ref{thSTnot0},
\begin{equation}
\label{j}
\mathcal{P}=Fp+rp_{u^{(k-1)}}+v^{(\ell)}p_{v^{(\ell-1)}}\,,\ \
\mathcal{Q}=Fq+rq_{u^{(k-1)}}+v^{(\ell)}q_{v^{(\ell-1)}}\,,
\end{equation}
with $F$ as in (\ref{F}). Still substituting in (\ref{sysST0}), the terms of degree 0, 1 and 2 with respect
to $\dot{x}$ then yield
\begin{equation} \label{eq:deg2xter}
\begin{array}{l}
{\mathcal{Q}}=\kappa(x,p,q){\mathcal{P}}^2+b(x,p,q){\mathcal{P}}+c(x,p,q)\,,\\
q_x=\kappa(x,p,q)\left(2p_x-\alpha(x,p,q)-\beta(x,p,q)\right)\mathcal{P}+a(x,p,q)+b(x,p,q)p_x\,,\\
0=\left(p_x-\alpha(x,p,q)\right)\left(p_x-\beta(x,p,q)\right)\,.
\end{array}\end{equation}
The factors in
the third equation cannot both be zero because $\alpha-\beta\neq0$. Let us assume
\begin{equation}
\label{pxqALT}
p_x-\alpha(x,p,q)=0\;,\ \ \
p_x-\beta(x,p,q)\neq 0
\end{equation}
(interchange the roles of $\alpha$ and $\beta$ for the other alternative).
Since $\alpha_3\neq0$, the map $A$ defined in (\ref{diffST0}) has locally an
inverse $A^{-1}$, and the equation in (\ref{pxqALT}) is equivalent to
$(x,p,q)=A^{-1}(x,p,p_x)$; by differentiation an expression of $q_x$ as a
function of $x,p,p_x,p_{xx}$ is obtained; solving the second equation in
(\ref{eq:deg2xter}) for $\mathcal{P}$ and substituting $q$ and $q_x$,
one obtains $\mathcal{P}=\gamma^1(x,p,p_x,p_{xx})$ with $\gamma^1$ defined by
(\ref{case1})-(\ref{case2}).
If one had chosen the other alternative in (\ref{pxqALT}), $A$ and $\gamma^1$
would be replaced by $B$ and $\gamma^2$.
Since $\mathcal{P}$ is also given by (\ref{j}), the relation (\ref{ducon0})
holds with $f=\gamma^1$;
also, for the same reasons as in the proof of Theorem~\ref{thSTnot0} (two
lines after (\ref{eqq30})), $r_{v^{(\ell)}}$ is nonzero and it would be absurd
for $p$ to depend on $x$ only.
One may then apply Lemma~\ref{lem0} and deduce that $k\geq1$, $\ell\geq1$,
$p_{u^{(k-1)}}\neq0$, $p_{v^{(\ell-1)}}\neq0$.
Since neither $p$ nor $\mathcal{P}=\gamma^1(x,p,p_x,p_{xx})$ depend on $v^{(\ell)}$
and $p_{u^{(k-1)}}\neq0$, the first equation in (\ref{j}) implies that $r$
assumes the form (\ref{r}) with $\sigma$ and $\tau$ some functions of the
$k+\ell+1$ variables
$u,\dot{u},\ldots,u^{(k-1)},x,v,\dot{v},\ldots,v^{({\ell-1})}$, and that two
relations hold: on the one hand $\sigma p_{u^{(k-1)}}+p_{v^{(\ell-1)}}=0$, i.e. one of the
relations in (\ref{EDPbis}), and on the other hand the first relation in
(\ref{EDPbis}) with $\gamma=\gamma^1$.
Similarly, the second equation in (\ref{j}) yields
$\sigma q_{u^{(k-1)}}+q_{v^{(\ell-1)}}=0$ and
$Fq+\tau q_{u^{(k-1)}}=\mathcal{Q}=\kappa\mathcal{P}^2+b\mathcal{P}+c$.
Applying $F+\tau\partial/\partial{u^{(k-1)}}$ and $E$ to the first relation in
(\ref{pxqALT}) and using the four relations we just established, one obtains
on the one hand the second relation in (\ref{EDPbis}), with $\delta=\delta^1$
($\delta^1$ defined in (\ref{case1})-(\ref{case2})) and on the other hand
$\sigma p_{xu^{(k-1)}}+p_{xv^{(\ell-1)}}=0$. The relations $\sigma_x=0$,
$\sigma\neq0$ and $\tau_x\neq0$ are then obtained exactly like at the end of
the proof of theorem~\ref{thSTnot0}; hence $p$ satisfies (\ref{EDPbis}) with
$\gamma=\gamma^1$ and $\delta=\delta^1$; this proves, thanks to
Remark~\ref{rmk-EDPbis}, that $p$ is a solution of
$\mathcal{E}^{\gamma^1,\delta^1}_{k,\ell}$ (it would be $\mathcal{E}^{\gamma^2,\delta^2}_{k,\ell}$
if one had chosen the other alternative in (\ref{pxqALT})).
The last paragraph of the proof of Theorem~\ref{thSTnot0} can be used to prove
that this solution is $K$-regular with $K\leq k+\ell+1$.
\end{proof}
\section{Flat outputs and differential flatness}
\label{sec-flat}
\begin{dfntn}[flatness, endogenous parameterization~\cite{Flie-Lev-Mar-R92cras}]
\label{def-flat}
A pair $A\!=\!(a,b)$ of real analytic functions
on a neighborhood of $(x_0,y_0,z_0,\ldots,x^{(j)}_0,y^{(j)}_0)$ in
$\widehat{\Omega}\times\mathbb{R}^{2j-2}$ is a
\emph{flat output of order $j$} at
$\mathcal{X}=(x_0,y_0,z_0,\ldots,x^{(L)}_0,y^{(L)}_0)$
(with $L\geq j\geq0$) for system (\ref{sys3}) if there exists a Monge
parameterization (\ref{para1}) of some order $(k,\ell)$ at $\mathcal{X}$
such that any germ
$(x(.),y(.),z(.),u(.),v(.))\in\mathcal{V}\times\mathcal{U}$
(with $U,V$ possibly smaller than in (\ref{para1}))
satisfies (\ref{flat1}) if and only if it satisfies (\ref{flat2}):
\begin{eqnarray}
\label{flat1}&\left.
\begin{array}{rcl}
\varphi\bigl(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t)\bigr)&=&x(t)
\\
\psi\bigl(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t)\bigr)&=&y(t)
\\
\chi\bigl(u(t),\dot{u}(t),\ldots,u^{(k)}(t),v(t),\dot{v}(t),\ldots,v^{(\ell)}(t)\bigr)&=&z(t)
\end{array}
\right\}\;,
\\
\label{flat2}&
\left.\!\!\!\!\!\!\!\!\!\!\!\!
\begin{array}{l}
\dot{z}(t)=h\bigl(x(t),y(t),z(t),\,\dot{y}(t)\!-\!z(t)\dot{x}(t)\,\bigr)
\;+\;g\bigl(x(t),y(t),z(t),\,\dot{y}(t)\!-\!z(t)\dot{x}(t)\,\bigr)\;\dot{x}(t)
\\
u(t)=a\bigl(x(t),y(t),z(t),\dot{x}(t),\dot{y}(t),\ddot{x}(t),\ddot{y}(t),
\ldots, x^{(j)}(t),y^{(j)}(t)\bigr)
\\
v(t)=b\bigl(x(t),y(t),z(t),\dot{x}(t),\dot{y}(t),\ddot{x}(t),\ddot{y}(t),
\ldots, x^{(j)}(t),y^{(j)}(t)\bigr)\
\end{array}
\right\}\!.
\end{eqnarray}
System (\ref{sys3}) is called \emph{flat} if and only if it admits a flat
output of order $j$ for some $j\in\mathbb{N}$.
A Monge parameterization is \emph{endogenous}\footnote{
This terminology (endogenous vs. exogenous) is borrowed from the authors of
\cite{Flie-Lev-Mar-R92cras,Mart92th}; it usually qualifies feedbacks rather than
parameterizations, but the notion is exactly the same.
}
if and only if there exists a flat output associated to this parameterization
as above.
\end{dfntn}
In control theory, flatness is a better-known notion than Monge
parameterization.
For general control systems, flatness implies the existence of a
parameterization (this is obvious from the above definition), and it is
conjectured~\cite{Flie-Lev-Mar-R99open} that the two notions are in fact
equivalent, at least away from some singular points. In any case, our results are relevant to both: systems (\ref{sys3}) that are
proved to be parameterizable are also flat, and our efforts toward proving that
the other ones are not parameterizable would also prove that they are not flat.
Theorem~\ref{edpparam} gave a procedure to derive a parameterization of
(\ref{sys3}) from a regular solution $p$ of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$, and we saw in
Section~\ref{sec-main} that, unless $S=T=J=0$, these are the only possible
parameterizations. One can tell when such a parameterization is endogenous:
\begin{prpstn}
Let $p:O\to\mathbb{R}$, with $O\subset\mathbb{R}^{k+\ell+1}$ open, be a
regular solution of system $\mathcal{E}^{\gamma,\delta}_{k,\ell}$.
The parameterization of order $(k,\ell)$
of system (\ref{sys3}) associated to $p$ according to Theorem~\ref{edpparam}
is endogenous if and only if $p$ is exactly $(k+\ell-2)$-regular; then, the
associated flat output is of order $j\leq k+\ell-2$.
\end{prpstn}
\begin{proof}
At the end of the proof of Theorem~\ref{edpparam}, it was established that
(\ref{flat1}), written
$\Gamma(u,v)=(x,y,z)$, is equivalent to (\ref{uvij}) if $(x,y,z)$ is a
solution of (\ref{sys3}).
If either $i_0<k$ or $j_0<\ell$ in (\ref{uvij}), then there are, for fixed
$(x(.),y(.),z(.))$, infinitely many solutions $(u(.),v(.))$ of (\ref{uvij}),
while there is a unique one for (\ref{flat2}). Hence $i_0=k$ and $j_0=\ell$
if (\ref{flat1}) is equivalent to (\ref{flat2}); then $K=i_0+j_0-2=k+\ell-2$
so that $p$ is $(k+\ell-2)$-regular and (\ref{uvij}) (where $u$ and $v$ do
not appear in the right-hand side) is of the form (\ref{flat2}) with
$j=K=k+\ell-2$.
\end{proof}
The main result in~\cite{Pome97cocv} is a necessary condition for ``$(x,u)$-dynamic linearizability'' ($(x,u)$-flatness
might be more appropriate) of system (\ref{sys4}).
For system (\ref{sys4}), it means existence of a flat output whose components
are functions
of $\xi^1,\xi^2,\xi^3,\xi^4,w^1,w^2$; for system (\ref{sys3}), it translates
as follows.
The functions $\gamma$ and $\delta$ in
(\ref{sys4}) are supposed to be related to $g$ and $h$ in
(\ref{sys3}) according to (\ref{defgamma}) and (\ref{defdelta}).
\begin{dfntn}
\label{def-xu}
System (\ref{sys4}) is ``$(x,u)$-dynamic linearizable'' if and only if system (\ref{sys3}) admits a flat output of order 2 of a
special kind~:
$A(x,y,z,\dot{x},\dot{y},\ddot{x},\ddot{y})=\mathfrak{a}(x,y,z,\lambda,\dot{x},\dot{\lambda})$
for some smooth $\mathfrak{a}$.
\end{dfntn}
The following proposition is useful to recover the main result from
\cite{Pome97cocv}, see Remark~\ref{rmrk-old}.
\begin{prpstn}
\label{prop-xu33}
If system \textup{(\ref{sys4})} is ``$(x,u)$-dynamic linearizable'' in the sense of
\textup{\cite{Pome97cocv}}, then \textup{(\ref{sys3})} admits a parameterization of order
$(k,\ell)$ with $k\leq3$ and $\ell\leq3$.
\end{prpstn}
\begin{proof}$\ $\\[-1\baselineskip]
Consider the map
$\displaystyle
(x,y,z,\lambda,\dot{x},\dot{\lambda},\ldots,x^{(4)},\lambda^{(4)}) \mapsto
\left( \!\!
\begin{array}{c}
\mathfrak{a}(x,y,z,\lambda,\dot{x},\dot{\lambda})
\\
\dot{\mathfrak{a}}(x,y,z,\lambda,\dot{x},\dot{\lambda},\ddot{x},\ddot{\lambda})
\\
\ddot{\mathfrak{a}}(x,y,z,\lambda,\dot{x},\dot{\lambda},\ldots,x^{(3)},\lambda^{(3)})
\\
\mathfrak{a}^{(3)}(x,y,z,\lambda,\dot{x},\dot{\lambda},\ldots,x^{(4)},\lambda^{(4)})
\end{array}
\!\!\right).
$
\\
Its Jacobian is $8\times12$, and has rank 8, but the $8\times8$ sub-matrix
corresponding to derivatives with respect to
$\dot{x},\dot{\lambda},\ldots,x^{(4)},\lambda^{(4)}$ has rank 4 only.
Hence $x$, $y$, $z$, and $\lambda$ can be expressed as functions of the
components of
$\mathfrak{a},\dot{\mathfrak{a}},\ddot{\mathfrak{a}},\mathfrak{a}^{(3)}$,
yielding a Monge parameterization of order at most $(3,3)$.
\end{proof}
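The rank argument in this proof can be mimicked symbolically. The following is only an illustrative sketch: it uses a hypothetical stand-in for $\mathfrak{a}$, in fewer variables and with a single time-derivative (rather than the $8\times12$ Jacobian of the proof), to show how one stacks a function with its total time-derivative and inspects the generic rank of the resulting Jacobian.

```python
import sympy as sp

# Hypothetical stand-in for the flat-output candidate \mathfrak{a}(x, y, lam),
# purely for illustration; not the paper's actual map.
x, y, lam, xd, yd, lamd = sp.symbols('x y lam xd yd lamd')
a = x*lam + y
# Total time-derivative of a by the chain rule (xd, yd, lamd are the velocities).
ad = sp.diff(a, x)*xd + sp.diff(a, y)*yd + sp.diff(a, lam)*lamd
# Jacobian of the stacked map (a, a-dot) with respect to all jet variables.
J = sp.Matrix([a, ad]).jacobian([x, y, lam, xd, yd, lamd])
print(J.rank())  # generic rank of the 2x6 Jacobian
```

The same mechanism, applied to $(\mathfrak{a},\dot{\mathfrak{a}},\ddot{\mathfrak{a}},\mathfrak{a}^{(3)})$, produces the $8\times12$ Jacobian and the sub-matrix ranks used in the proof.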
\section{Conclusion}
\label{sec-concl}
Let us discuss both flatness (see Section~\ref{sec-flat}) and Monge parameterization.
For convenience, assume $k\leq\ell$
and call F-systems the systems (\ref{sys3}) such
that $S=T=J=0$ and C-systems all the other ones.
F-systems are flat; this was proved in~\cite{Pome97cocv}. This paper adds that they
admit a Monge parameterization of order $(1,2)$, but it does \emph{not} prove
differential flatness of any system not known to be flat up to now:
C-systems are not believed to be flat.
Nor does it prove non-flatness of any system:
it only \emph{conjectures} that no C-system admits a parameterization, and hence that none of
them is flat. To the best of our
knowledge, no one knows whether simple systems like
(\ref{eq:ex}-b) or (\ref{eq:ex}-c) are flat or not.
The first contribution of the paper is to prove that a C-system admits a
parameterization of order $(k,\ell)$ if and only if the PDEs
$\mathcal{E}^{\gamma,\delta}_{k,\ell}$, for suitable $\gamma,\delta$, admit a ``regular solution'' $p$. The second
contribution is to prove that, for any $\gamma,\delta$, there is no regular
solution to $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ if either $k\leq2$ or $k=\ell=3$
(this does not contradict existence of parameterizations of order $(1,2)$ for
F-systems: these do not ``derive from'' a solution of these PDEs). We guess, in
Conjecture~\ref{conj-EDP}, that even for higher values of the integers $k,\ell$,
\emph{none} of these PDEs have any regular solution; this would
imply that C-systems are not flat.
Besides recovering the results from~\cite{Pome97cocv} with far more
natural and elementary arguments, we believe that some insight was gained on
Monge parameterizations of
\emph{any order} for ``C-systems'', by reducing non-parameterizability to
non-existence of solutions to a systems of PDEs that can easily be written for
any $k,\ell$.
The main perspective raised by this paper is to \textbf{prove Conjecture~\ref{conj-EDP}}.
The only \emph{theoretical} difficulty is, in fact, that no a priori bound on the
integers $k,\ell$ is known.
Indeed, as explained in Section~\ref{sec-EDP}, for fixed
$k,\ell,\gamma,\delta$, it amounts to a classical problem.
To prove Proposition~\ref{prop-edp}, we solved, in a synthetic manner, that
problem for $k\leq2$ or $k=\ell=3$ and arbitrary $\gamma$ and $\delta$.
We lack an argument that does not reduce to finitely many case-by-case
computations, or a better understanding of the structure, to treat arbitrary $k,\ell$.
Let us comment further on the (non-trivial) case where
$\gamma$ and $\delta$ are \emph{polynomials}, for instance the very simple ones in
(\ref{EDPp2}). For fixed $k,\ell$, the question can be formulated in terms of
differential polynomial rings:
does the differential ideal generated by left-hand sides of the equations (\ref{EDPp2})
contain the polynomials $ED^ip$~?
Differential elimination (see~\cite{Ritt50} or the recent
survey~\cite{Hube03}) is relevant here; finite algorithms
have already been implemented in computer algebra systems.
Although we have not yet succeeded (because of complexity) in carrying out
these computations, even on example (\ref{EDPp2}) for $(k,\ell)=(3,4)$, and
although it will certainly not \emph{provide} a bound on $k,\ell$, we do believe that
computer algebra is a considerable potential help.
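For fixed $k,\ell$, the membership question above is the differential analogue of an ideal-membership test. The following sketch is only an analogy in the \emph{ordinary} (non-differential) polynomial setting, with hypothetical generators rather than the actual left-hand sides of (\ref{EDPp2}); it illustrates the kind of Groebner-basis reduction that differential-elimination packages automate.

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
# Hypothetical generators, stand-ins for the equations' left-hand sides.
gens = [x**2 + y, x*y - 1]
G = list(groebner(gens, x, y, order='lex').exprs)

# A polynomial in the ideal by construction: its remainder on division
# by the Groebner basis is zero.
p = (x*gens[0] + y*gens[1]).expand()
_, r = reduced(p, G, x, y, order='lex')
print(r)  # 0

# A polynomial outside the ideal leaves a nonzero remainder.
_, r2 = reduced(x, G, x, y, order='lex')
print(r2)
```

In the differential setting, the generators are differential polynomials and reduction is performed modulo their derivatives as well, which is precisely where the complexity blow-up mentioned above occurs.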
Another perspective is to enlarge the present approach to \textbf{higher
dimensional control systems}.
For instance, what would play the role of our system of PDEs $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ when,
instead of (\ref{sys3}), one considers a single relation between more than
three scalar functions of time (this captures, instead of (\ref{sys4}),
control affine systems with $n$ states and 2 controls, $n>4$)~?
We have very little insight on this question: the present paper strongly takes
advantage of the special structure inherent to our small dimension; the
situation could be far more complex.
\appendix
\section{}
\label{app-1}
\begin{proof}[Proof of Lemma \ref{jacob}]
For this proof only, the notation $\mathcal{F}_{i,j}$ ($0\leq i\leq k$,
$0\leq j\leq\ell$) stands either for the following family of $i+j$ vectors in
$\mathbb{R}^{K+2}$ or for the corresponding $(K+2)\times(i+j)$ matrix~:
\begin{displaymath}
\mathcal{F}_{i,j}\ =\ \left(
\partialxy{\pi}{u^{(k-i)}},\ldots,\partialxy{\pi}{u^{(k-1)}},
\partialxy{\pi}{v^{(\ell-j)}},\ldots,\partialxy{\pi}{v^{(\ell-1)}}
\right)
\end{displaymath}
with the convention that if $i$ or $j$ is zero the corresponding list is empty;
$\mathcal{F}_{i,j}$ depends on
$u,\ldots,u^{(k-1)},v,\ldots,v^{(\ell-1)},x,\ldots,x^{(K)}$.
Let us first prove that, at least outside a closed subset of empty interior,
\begin{equation}
\label{eq:hh}
\rank\mathcal{F}_{k,\ell}\ =\ K+2\ .
\end{equation}
Indeed, if it is smaller at all points of $O\times\mathbb{R}^K$, then, around
points (they form an open dense set) where it is locally constant, there
is at least one function $R$ such
that a non-trivial identity
$R(p_x,p,\ldots,D^{K}p,x,\ldots,x^{(K)})=0$ holds and the partial
derivative of $R$ with respect to at least one of its $K+2$ first arguments is
nonzero.
Since $p$ is $K$-regular, applying $E$ to this relation shows that $R$
does not depend on $D^{K}p$, and hence does not depend on $x^{(K)}$
either. Then, applying $ED$, $ED^2$ and so on, and using the fact that,
according to (\ref{psolgh}), $Dp_x$ is a function of $p_x,p,Dp,x,\dot{x}$, we finally get a relation $R(p_x,p,x)=0$ with
$(R_{p_x},R_p)\neq(0,0)$.
Differentiating with respect to $u^{(k-1)}$, one obtains
$R_{p_x}p_{xu^{(k-1)}}+R_p p_{u^{(k-1)}}=0$; hence, from the first relation in
(\ref{EDPp}-c), $R_{p_x}\neq 0$, and the relation $R(p_x,p,x)=0$ implies, in a
neighborhood of almost any point, $p_x=f(p,x)$
for some smooth function $f$. From Lemma~\ref{lemAAA}, this would contradict
the fact that the solution $p$ is $K$-regular. This proves (\ref{eq:hh}).
Let now $W_s$ ($1\leq s\leq K+2$) be the set of pairs $(i,j)$ such that $i+j=s$
and the rank of $\mathcal{F}_{i,j}$ is $s$ at least at one point in
$O\times\mathbb{R}^K$, \textit{i.e.} \ one of the $s\times s$ minors of $\mathcal{F}_{i,j}$ is a
nonzero real analytic function on $O\times\mathbb{R}^K$. The lemma states that $W_{K+2}$ is
nonempty; in order to prove it by contradiction, suppose that
$W_{K+2}=\varnothing$ and
let $\overline{s}$ be the smallest $s$ such that $W_s=\varnothing$.
From (\ref{EDPp}-c), $W_1$ contains $(1,0)$, hence $2\leq \overline{s}\leq K+2<k+\ell+1$.
Take $(i',j')$ in $W_{\overline{s}-1}$; $\mathcal{F}_{i',j'}$ has rank $i'+j'$ (\textit{i.e.} \ is
made of $i'+j'$ linearly independent vectors) on an open dense set
$A\subset O\times\mathbb{R}^K$.
Let $i_1\leq k$ and $j_1\leq\ell$ be the largest integers such that
$\mathcal{F}_{i_1,j'}$ and $\mathcal{F}_{i',j_1}$ have rank $\overline{s}-1$ on $A$.
On the one hand, since $i'+j'=\overline{s}-1<k+\ell$, one has either
$i'<k$ or $j'<\ell$. On the other hand, since $W_{\overline{s}}$ is empty, it contains neither $(i'+1,j')$ nor
$(i',j'+1)$; hence the rank of $\mathcal{F}_{i'+1,j'}$ is less than $i'+j'+1$
if $i'<k$, and so is the rank of $\mathcal{F}_{i',j'+1}$ if $j'<\ell$.
To sum up, the following implications hold:
$\;i'<k\Rightarrow i_1\geq i'+1\;$ and $\;j'<\ell\Rightarrow j_1\geq j'+1\;$.
From (\ref{eq:hh}), one has either $i_1<k$ or $j_1<\ell$.
Possibly exchanging $u$ and $v$, assume $i_1<k$; all the vectors
$\partial{\pi}/\partial{u^{(k-i_1)}}$, \ldots,
$\partial{\pi}/\partial{u^{(k-i'+1)}}$,
$\partial{\pi}/\partial{u^{(\ell-j_1)}}$, \ldots,
$\partial{\pi}/\partial{u^{(\ell-j'+1)}}$ are then linear combinations of the vectors in
$\mathcal{F}_{i',j'}$, while $\partial{\pi}/\partial{u^{(k-i_1-1)}}$ is not~:
\begin{equation}
\label{eqq1}
\rank\mathcal{F}_{i',j'}=i'+j'\,,\ \ \ \
\rank\mathcal{F}_{i_1,j_1}=i'+j'\,,\ \ \ \
\rank\left(
\partialxy{\pi}{u^{(k-i_1-1)}}\,,\,\mathcal{F}_{i',j'}
\right)=i'+j'+1
\end{equation}
on an open dense subset of $O\times\mathbb{R}^K$, that we still call $A$ although it
could be smaller.
In a neighborhood of any point in this set, one can, from the third relation, apply
the inverse function theorem and obtain, for an open
$\Omega\subset\mathbb{R}^{k+\ell+K+1}$, a map $\Omega\to\mathbb{R}^{i'+j'+1}$ that expresses
$u^{(k-i')},\ldots,u^{(k-1)}$, $v^{(\ell-j')},\ldots,v^{(\ell-1)}$ and
$u^{(k-i_1-1)}$ as functions of
$u,\ldots,u^{(k-i_1-2)}$, $u^{(k-i_1)},\ldots,u^{(k-i'-1)}$, $v,\ldots,v^{(\ell-j'-1)}$,
$x,\ldots,x^{(K)}$ and $i'+j'+1$ functions
chosen among $p_x,p,Dp,\ldots,D^{K}p$ ($i'+j'+1$
columns defining an invertible minor in $\left(
\partial{\pi}/\partial{u^{(k-i_1-1)}}\,,\,\mathcal{F}_{i',j'}
\right)$).
Focusing on $u^{(k-i_1-1)}$, one has
\begin{equation}
\label{eqq5}
\!\!\!\!\!\!\begin{array}{l}
\!\!u^{(k-i_1-1)}=\\
B\left(u,\ldots,u^{(k-i_1-2)},u^{(k-i_1)},\ldots,u^{(k-i'-1)},v,\ldots,v^{(\ell-j'-1)},x,\ldots,x^{(K)},p_x,p,\ldots,D^{K}p\right)
\end{array}
\end{equation}
where $B$ is some smooth function of $k+\ell+2K+2-i'-j'$ variables and
we have written all the functions $p_x,p,Dp,\ldots,D^{K}p$ although $B$
really depends only on $i'+j'+1$ of them.
Differentiating (\ref{eqq5}) with respect to
$u^{(k-i')},\ldots,u^{(k-1)},v^{(\ell-j')},\ldots,v^{(\ell-1)}$, one has, with
obvious matrix notation, $\left(\partialxy{B}{p_x}\ \partialxy{B}{p}\ \cdots\
\partialxy{B}{D^{K-1}p}\right)\mathcal{F}_{i',j'}=0$,
where the right-hand side
is a line-vector of dimension $i'+j'$; from (\ref{eqq1}), this implies
\begin{equation}
\label{eqq60}
\left(\partialxy{B}{p_x}\ \partialxy{B}{p}\ \cdots\
\partialxy{B}{D^{K-1}p}\right)\mathcal{F}_{i_1,j_1}\ =\ 0\;,
\end{equation}
where the right-hand side
is now a bigger line-vector of dimension $i_1+j_1$.
Differentiating (\ref{eqq5}) with respect to
$u^{(k-i_1)},\ldots,u^{(k-i'-1)},v^{(\ell-j_1)},\ldots,v^{(\ell-j'-1)}$ and
using (\ref{eqq60}) yields that $B$ does not depend on its arguments
$u^{(k-i_1)},\ldots,u^{(k-i'-1)}$ and $v^{(\ell-j_1)},\ldots,v^{(\ell-j'-1)}$.
$B$ cannot depend on $D^{K}p$ either because $ED^{K}p\neq0$ and all the
other arguments of $B$ are constant along $E$; then it cannot depend on $x^{(K)}$ either because
$x^{(K)}$ appears in no other argument; (\ref{eqq5}) becomes
\begin{displaymath}
u^{(k-i_1-1)}=B\left(u,\ldots,u^{(k-i_1-2)},v,\ldots,v^{(\ell-j_1-1)},x,\ldots,x^{(K-1)},p_x,p,\ldots,D^{K-1}p\right)\;.
\end{displaymath}
Applying $D$, using (\ref{psolgh}) and
substituting $u^{(k-i_1-1)}$ from above, one gets, for some smooth $C$,
\begin{equation}
\label{eq:00001}
u^{(k-i_1)}=C\bigl(\, u,\ldots,u^{(k-i_1-2)},
\underbrace{v,\ldots,v^{(\ell-j_1)}}_{\substack{{\rm empty\; if}\; j_1=\ell}},
x,\ldots,x^{(K)},p_x,p,\ldots,D^{K}p\bigr)\;.
\end{equation}
Differentiating with respect to
$u^{(k-i')},\ldots,u^{(k-1)},v^{(\ell-j')},\ldots,v^{(\ell-1)}$ yields\\
$\left(\partialxy{C}{p_x}\ \partialxy{C}{p}\ \cdots\
\partialxy{C}{D^{K-1}p}\right)\mathcal{F}_{i',j'}=0$,
the right-hand side being a line-vector of dimension $i'+j'$. From the first
two relations in
(\ref{eqq1}), $\partial\pi/\partial u^{(k-i_1-1)}$ is a linear
combination of the columns of $\mathcal{F}_{i',j'}$, hence one also has
$\displaystyle
\left(\partialxy{C}{p_x}\ \partialxy{C}{p}\ \cdots\
\partialxy{C}{D^{K-1}p}\right)\partialxy{\pi}{u^{(k-i_1-1)}}\ =\ 0\ .
$
\\This implies that the derivative of
the right-hand side of (\ref{eq:00001}) with respect to $u^{(k-i_1-1)}$ is
zero. This is absurd.
\end{proof}
\section{Proof of Lemmas \ref{lem-1}, \ref{lem-2} and \ref{lem-3}}
\label{app-sys}
We need some notation and preliminaries.
With $F$, $E$ and $\tau$ defined in (\ref{F}) and (\ref{defsigtau}), define the vector fields
\begin{eqnarray}
\label{XY}&\displaystyle
{X}=\partialx{x}\;,\ \ \
{Y}=F+\tau\partialx{u^{(k-1)}}\
\\
\label{eq:bra}&
{X}_1=[{X},{Y}],\ {X}_2=[{X}_1,{Y}]\,,\hspace{2em}
{E}_2=[E,{Y}],\ {E}_3=[{E}_2,{Y}]\ .
\end{eqnarray}
Then (\ref{EDPbis}) obviously implies
\begin{equation}
\label{YpYpx}
Yp=\gamma(x,p,p_x,p_{x,x})\,,\ \ \ \ Yp_x=\delta(x,p,p_x,p_{x,x})\,,\ \ \ \ {X}\sigma=0\,,
\end{equation}
and a simple computation yields (we recall $E$ from (\ref{defsigtau}))~:
\begin{eqnarray}
\label{66}&
{X}_1=\tau_x\,\partialx{u^{(k-1)}}\,,\ \
{X}_2=\tau_x\,\partialx{u^{(k-2)}}+(\cdots)\partialx{u^{(k-1)}}\,,\ \
\\
\nonumber
&
E=\partialx{v^{(\ell-1)}}+\sigma\,\partialx{u^{(k-1)}}\,,\ \
{E}_2=\partialx{v^{(\ell-2)}}+\sigma\,\partialx{u^{(k-2)}}
+(\cdots)\partialx{u^{(k-1)}}\,,\ \
\\
\nonumber
&
{E}_3=\partialx{v^{(\ell-3)}}+\sigma\,\partialx{u^{(k-3)}}
+(\cdots)\partialx{u^{(k-2)}}+(\cdots)\partialx{u^{(k-1)}}\,.
\end{eqnarray}
The vector fields ${X}_1$ and ${X}_2$ are linearly independent because
$\tau_x\neq0$, see (\ref{EDPbis}).
Computing the following brackets
and decomposing on ${X}_1$ and ${X}_2$, one gets
\begin{eqnarray}
\label{bralie}&
[{X},{X}_1]=\lambda {X}_1\,,\ \ \ [{X}_1,{X}_2]=\lambda'{X}_1+\lambda''{X}_2\ ,
\\
\label{bralie2}&
[{X},{E}]=0\,,\ \ \ [{X},{E}_2]=\mu{X}_1\,,\ \ \
[{X},{E}_3]=\mu'{X}_1+\mu''{X}_2\,,\ \ \
[{E}_2,{X}_2]=\nu'{X}_1+\nu''{X}_2\ ,
\end{eqnarray}
for some functions $\lambda,\lambda',\lambda'',\mu,\mu',\mu'',\nu',\nu''$.
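Bracket identities such as (\ref{bralie}) and (\ref{bralie2}) are mechanical to verify. As a minimal sanity check (not the paper's actual fields, just the coordinate formula $[V,W]^i=\sum_j\bigl(V^j\partial_j W^i-W^j\partial_j V^i\bigr)$), one can compute the bracket of a field of the shape $\partial_x+z\partial_y+a\,\partial_z$ with a field $f\partial_y+g\partial_z$ and check that the $\partial_y$-component is $Xf-g$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a = sp.Function('a')(x, y, z)  # hypothetical coefficient functions
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)

def lie_bracket(V, W, coords):
    # [V, W]^i = sum_j (V^j dW^i/dx_j - W^j dV^i/dx_j)
    return [sum(V[j]*sp.diff(W[i], c) - W[j]*sp.diff(V[i], c)
                for j, c in enumerate(coords))
            for i in range(len(coords))]

X = [1, z, a]   # X = d/dx + z d/dy + a d/dz
W = [0, f, g]   # a field of the shape f d/dy + g d/dz
B = lie_bracket(X, W, (x, y, z))

# The d/dy component of [X, W] is X f - g.
Xf = sp.diff(f, x) + z*sp.diff(f, y) + a*sp.diff(f, z)
print(sp.simplify(B[1] - (Xf - g)))  # 0
```

The same helper, fed the coefficient lists of the actual $X$, $Y$, $X_1$, $X_2$, $E$, $E_2$, $E_3$, reproduces the decompositions above.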
\begin{proof}[Proof of Lemma \Rref{lem-1}]
From (\ref{EDPp}-c), $y=p(u,\ldots, u^{(k-1)},x,v,\ldots, v^{(\ell-1)})$ defines
local coordinates
$u,\ldots,u^{(k-2)},y,x,v,\ldots,v^{(\ell-1)}$.
Composing $p_x$ with the inverse of this change of coordinates, one obtains a function $\alpha$ of $k+\ell+1$ variables such that
$p_x=\alpha(u,\ldots,u^{(k-2)},\,p\,,x,\ldots, v^{(\ell-1)})$ identically.
Since $Ep=Ep_x=0$ (see (\ref{EDPbis})), applying $E$ to both sides of this identity
yields that $\alpha$ does not depend on its argument $v^{(\ell-1)}$.
Similarly, if $k\geq2$,
differentiating both sides of the same identity with respect to
$u^{(k-1)}$ and $u^{(k-2)}$, the fact that the determinant in the
lemma is zero implies that $\alpha$ does
not depend on its argument $u^{(k-2)}$.
To sum up, $p$ and $p_x$ satisfy an identity
$$p_x=\alpha(u,\ldots,u^{(k-3)},\,p\,,x,v,\ldots, v^{(\ell-2)}),$$
where the first list is empty if $k=1$ or $k=2$.
Now define two integers $m\leq k-3$ and $n\leq\ell-2$ as
the smallest such that
$\alpha$ depends on $u,\ldots,u^{(m)},x,y,v,\ldots,v^{(n)}$, with the
convention that $m<0$ if $k=1$, $k=2$, or $\alpha$ depends on none of the
variables $u,\ldots,u^{(k-3)}$ and $n<0$
if $\alpha$ depends on none of the variables $v,\ldots,v^{(\ell-2)}$.
Applying ${Y}$ to both sides of the above identity yields
$$
{Y} p_x=\alpha_y\,{Y} p
+\sum_{i=0}^{m}u^{(i+1)}\alpha_{u^{(i)}}\,+\sum_{i=0}^{n}v^{(i+1)}\alpha_{
v^{(i)}}\ ,
$$
where, if $m<0$ or $n<0$, the corresponding sum is empty.
Using (\ref{YpYpx}), since
$p_{xx}=\alpha_x+\alpha\alpha_y$, one can replace ${Y} p$ with
$\gamma(x,y,\alpha,\alpha_x+\alpha\alpha_y)$ and ${Y} p_x$ with
$\delta(x,y,\alpha,\alpha_x+\alpha\alpha_y)$
in the above equation, where all terms except the last one of each non-empty
sum therefore depend on $u,\ldots,u^{(m)},x,y,v,\ldots,v^{(n)}$ only. Differentiating with
respect to $u^{(m+1)}$ and $v^{(n+1)}$ yields $\alpha_{u^{(m)}}=\alpha_{v^{(n)}}=0$,
which is possible only if $m<0$ and $n<0$, hence the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \Rref{lem-2}]
From (\ref{inv2}), setting
\begin{equation}
\label{yz}
y=p(u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}),\
z=p_x(u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}),
\end{equation}
one gets some local coordinates
$(u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-1)})$.
In these coordinates, the vector fields
${X}$ and ${Y}$ defined by (\ref{XY}) have the following expressions, where
$\chi$ and $\alpha$ are some functions, to be studied further~:
\begin{eqnarray}
\label{3-X}
{X}&=&\partialx{x}+z\partialx{y}+\alpha\partialx{z}\;,
\\
\label{3-F}
{Y}&=&\gamma\partialx{y}+\delta\partialx{z}
+\chi\partialx{u^{(k-3)}}
+\sum_{i=0}^{k-4}u^{(i+1)}\partialx{u^{(i)}}
+\sum_{i=0}^{\ell-1}v^{(i+1)}\partialx{v^{(i)}}\;.
\end{eqnarray}
In the expression of ${Y}$, the third term is zero if $k=2$,
the fourth term ($\sum_{i=0}^{k-4}\cdots$) is zero if $k=2$ or $k=3$,
and the notations $\gamma$ and $\delta$ are slightly abusive~: $\gamma$
stands for the function
$$(u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-1)})\mapsto\gamma(x,y,z,\alpha(u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-1)}))\;,$$
and the same for $\delta$. With the same abuse of notations, (\ref{EDPp}-e) reads
\begin{equation}
\label{taux3}
{X}\gamma-\delta\neq0.
\end{equation}
The equalities
$(\sigma\partialx{u^{(k-1)}}+\partialx{v^{(\ell-1)}})u^{(k-2)}
=\partialx{x}u^{(k-2)}=\partialx{u^{(k-1)}}u^{(k-2)} =0$ are obvious in the
original coordinates.
Since the inverse of the change of coordinates (\ref{yz}) is given by
$$
u^{(k-2)}\!=\chi(u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-1)}),\
u^{(k-1)}\!={Y}\!\chi(u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-1)}),
$$
and ${E}$, ${X}$ and ${X}_{1}$ are given by (\ref{66}), those equalities imply
\begin{equation}
\label{eq:666}
{E}\chi={X}\chi={X}_{\,1}\,\chi=0\ .
\end{equation}
Then, from (\ref{eq:bra}), (\ref{3-X}) and (\ref{3-F}),
\begin{eqnarray}
\label{cro1}
{X}_1&=& \left({X}\gamma-\delta\right)\partialx{y} \;+\;
\left({X}\delta-{Y}\alpha\right)\partialx{z} \;,
\\[0pt]
\label{cro2}
[{X},{X}_1] &=&
\left({X}^2\gamma-2{X}\delta+{Y}\alpha\right)\partialx{y}
\;+\;
\left({X}^2\delta-{X}{Y}\alpha-{X}_1\alpha\right)\partialx{z}\ .
\end{eqnarray}
With these expressions of ${X}$ and ${X}_1$, the first relation in
(\ref{bralie}) implies~:
\begin{equation}
\label{det0}
\left|
\begin{array}{cc}
{X}\gamma-\delta&{X}^2\gamma-2{X}\delta+{Y}\alpha
\\
{X}\delta-{Y}\alpha&{X}^2\delta-{X}{Y}\alpha-{X}_1\alpha
\end{array}
\right|=0 \ .
\end{equation}
The definition of $\alpha$ implies ${X} z=\alpha$.
In the original coordinates, this translates into the identity
$p_{xx}=\alpha(u,\ldots,u^{(k-3)},x,p,p_x,v,\ldots,v^{(\ell-1)})$.
Since $Ep=Ep_x=Ep_{x,x}=0$ (see (\ref{EDPbis})), applying $E$ to both sides of this identity
yields that $\alpha$ does not depend on its argument $v^{(\ell-1)}$. Also,
if $k\geq 3$,
differentiating both sides with respect to $u^{(k-1)}$, $u^{(k-2)}$ and
$u^{(k-3)}$, we obtain that the determinant (\ref{eq:det2}) is zero if and
only if $\alpha$ does not depend on its argument $u^{(k-3)}$.
To sum up, under the assumptions of the lemma,
\begin{equation}
\label{al2}
\alpha\ \mbox{depends on}\ \
u,\ldots,u^{(k-4)},x,y,z,v,\ldots,v^{(\ell-2)}
\ \ \mbox{only}
\end{equation}
with the convention that the first list is empty if $k=2$ or $k=3$.
Now define two integers $m\leq k-4$ and $n\leq\ell-2$ as
the smallest such that
$\alpha$ depends on $u,\ldots,u^{(m)},x,y,z,v,\ldots,v^{(n)}$, with the
convention that $m<0$ if $k=2$, $k=3$, or $\alpha$ depends on none of the
variables $u,\ldots,u^{(k-4)}$, and $n<0$
if $\alpha$ depends on none of the variables $v,\ldots,v^{(\ell-2)}$.
We have
\begin{equation}
\label{eq:mn}
m\geq0\;\Rightarrow\;\alpha_{u^{(m)}}\neq0\ ,\ \ \ \ n\geq0\;\Rightarrow\;\alpha_{v^{(n)}}\neq0\ .
\end{equation}
Since $m$ is no larger than $k-4$, $\chi$ does not appear in
the expression of ${Y}\alpha$~:
\begin{eqnarray}
\label{Falpha3}
{Y}\alpha & = & \gamma\alpha_{y}+\delta\alpha_{z}
+\sum_{i=0}^{m}u^{(i+1)}\alpha_{u^{(i)}}
+\sum_{i=0}^{n}v^{(i+1)}\alpha_{v^{(i)}}
\end{eqnarray}
where the first (or second) sum is empty if $m$ (or $n$) is negative.
In the left-hand side of (\ref{det0}), all the terms depend only on
$u,\ldots,u^{(m)},x,y,z,v,\ldots,v^{(n)}$, except ${Y}\alpha$, ${X}{Y}\alpha$
and ${X}_1\alpha$ that depend on
$u^{(m+1)}$ if $m\geq0$ or on $v^{(n+1)}$ if $n\geq0$ (see above); the
determinant is a polynomial of degree two with respect to
$u^{(m+1)}$ and $v^{(n+1)}$ with coefficients depending on $u,\ldots,u^{(m)},x,y,z,v,\ldots,v^{(n)}$ only, and the term of degree two, coming from $({Y}\alpha)^2$, is
$$
\left(\alpha_{u^{(m)}}u^{(m+1)}
+\alpha_{v^{(n)}}v^{(n+1)}\right)^2\ .
$$
Hence (\ref{det0}) implies $\alpha_{u^{(m)}}=\alpha_{v^{(n)}}=0$ and, from
(\ref{eq:mn}), that $m$ and $n$ are negative. By definition of these integers, this implies that $\alpha$ depends on $(x,y,z)$
only: in the original coordinates, one has $p_{xx}=\alpha(x,p,p_x)$.
\end{proof}
Before proving Lemma~\ref{lem-3}, we need to extract more information from the
previous proof~:
\begin{lmm}
\label{lem-2cro}
Assume, as in Lemma~\ref{lem-2}, that $p$ is a solution of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ satisfying
(\ref{inv2}), but assume also that
$\ell\geq k\geq3$ and the determinant (\ref{eq:det2}) is nonzero.
Then $[{X},{E}_2]=[{X},{E}_3]=0$.
\end{lmm}
\begin{proof}
Starting as in the proof of Lemma~\ref{lem-2}, one does not obtain
(\ref{al2}) but, since (\ref{eq:det2}) is nonzero,
\begin{equation}
\label{al2'}
\alpha\ \mbox{depends on}\ \
u,\ldots,u^{(k-4)},x,y,z,v,\ldots,v^{(\ell-2)}
\ \ \mbox{and}\ \ \alpha_{u^{(k-3)}}\neq 0\ .
\end{equation}
Since $Ep=Ep_x=0$, one has $E=\partial/\partial v^{(\ell-1)}$ in these
coordinates.
The first equation in (\ref{eq:666}) then
reads $\chi_{v^{(\ell-1)}}=0$, and (\ref{eq:bra}) and (\ref{3-F}) yield
$$
{E}_2=\partialx{v^{(\ell-2)}}\;,\ \ \
[{X},{E}_2]=-\,\alpha_{v^{(\ell-2)}}\,\partialx{z}\ .
$$
Since $[{X},{E}_2]=\mu{X}_1$ (see (\ref{bralie2})), relations (\ref{cro1}) and
(\ref{taux3}) imply that
$\alpha_{v^{(\ell-2)}}$, $\mu$, and the bracket $[{X},{E}_2]$ are zero, and
prove the first part of the lemma. Let us turn to $[{X},{E}_3]$~:
from (\ref{eq:bra}) and (\ref{3-F}), one gets, since ${E}_2$ and ${X}$
commute, and ${X}\chi=0$,
\begin{equation}
\label{eq:EE3}
{E}_3=\chi_{v^{(\ell-2)}}\partialx{u^{(k-3)}}+\partialx{v^{(\ell-3)}}
\;,\ \ \
[{X},{E}_3]=-({E}_3\alpha)\,\partialx{z}\ .
\end{equation}
In order to prove that ${E}_3\alpha=0$,
let us examine equation (\ref{det0}). For short, we use the symbol $\mathcal{O}$
to denote \emph{any function} that depends on
$u,\ldots,u^{(k-3)},x,y,z,v,\ldots,v^{(\ell-3)}$ only. For instance,
${X}\gamma-\delta=\mathcal{O}$, and all terms in the determinant are of this
nature, except the following three~:
\begin{eqnarray*}
{Y}\alpha & = & \chi\,\alpha_{u^{(k-3)}} +v^{(\ell-2)}\alpha_{v^{(\ell-3)}} +\mathcal{O},\\
{X}{Y}\alpha & = & \chi\,{X}\alpha_{u^{(k-3)}} +v^{(\ell-2)}{X}\alpha_{v^{(\ell-3)}}
+\mathcal{O},\\
{X}_1\alpha & = & -\alpha_z\,\left(\chi\,\alpha_{u^{(k-3)}}
+v^{(\ell-2)}\alpha_{v^{(\ell-3)}}\right) +\mathcal{O}
\end{eqnarray*}
(we used ${X}\chi=0$).
Setting $\zeta=\chi\,\alpha_{u^{(k-3)}}
+v^{(\ell-2)}\alpha_{v^{(\ell-3)}}$, one has
\begin{equation}
\label{99}
{X}\zeta =\frac{{X}\alpha_{u^{(k-3)}}}{\alpha_{u^{(k-3)}}}\zeta
+\mathbf{b}\,v^{(\ell-2)}
\ \ \ \mbox{with}\ \ \
\mathbf{b}={X}\alpha_{v^{(\ell-3)}}
-\alpha_{v^{(\ell-3)}}\,\frac{{X}\alpha_{u^{(k-3)}}}{\alpha_{u^{(k-3)}}}\ ,
\end{equation}
and equation (\ref{det0}) reads
\begin{eqnarray}
\label{88}
\zeta^2
+\mathcal{O}\,\zeta
-({X}\gamma-\delta)\,\mathbf{b}\,v^{(\ell-2)}+\mathcal{O}
&=&0\ .
\end{eqnarray}
Differentiating with respect to ${X}$ and using (\ref{99}) yields
\begin{eqnarray*}
2\, \frac{{X}\alpha_{u^{(k-3)}}}{\alpha_{u^{(k-3)}}}\,\zeta^2
+
\left(2\mathbf{b}\,v^{(\ell-2)}+
\mathcal{O}\right)
\zeta
+\mathcal{O}\,
v^{(\ell-2)}
+\mathcal{O} &=&0\ .
\end{eqnarray*}
Then, eliminating $\zeta$ between these two polynomials yields the resultant
$$
\left|
\begin{array}{cccc}
1 & \mathcal{O} & -({X}\gamma-\delta)\mathbf{b}\, v^{(\ell-2)}+\mathcal{O} & 0
\\
0 & 1 & \mathcal{O} & -({X}\gamma-\delta)\mathbf{b}\, v^{(\ell-2)}+\mathcal{O}
\\
2\frac{{X}\alpha_{u^{(k-3)}}}{\alpha_{u^{(k-3)}}} &
2\mathbf{b}\, v^{(\ell-2)}+\mathcal{O} &
\mathcal{O}\, v^{(\ell-2)}+\mathcal{O} & 0
\\
0 & 2\frac{{X}\alpha_{u^{(k-3)}}}{\alpha_{u^{(k-3)}}} &
2\mathbf{b}\, v^{(\ell-2)}+\mathcal{O} &
\mathcal{O}\, v^{(\ell-2)}+\mathcal{O}
\end{array}
\right|=0\ .
$$
This is a polynomial of degree at most three with respect to $v^{(\ell-2)}$,
the coefficient of $(v^{(\ell-2)})^3$ being $-4\mathbf{b}^3({X}\gamma-\delta)$.
Hence $\mathbf{b}=0$ and, from (\ref{88}), $\zeta$ does not depend on
$v^{(\ell-2)}$.
This implies ${E}_3\alpha=0$ because, from
(\ref{eq:EE3}) and the definition of $\zeta$, one has
$
\zeta_{v^{(\ell-2)}}={E}_3\alpha
$.
\end{proof}
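The elimination of $\zeta$ in the proof above is a classical resultant computation: the $4\times4$ determinant is the Sylvester resultant of the two degree-two polynomials in $\zeta$. A small sympy check, with hypothetical symbols in place of the $\mathcal{O}$ coefficients, shows the mechanism: eliminating $\zeta$ produces a polynomial in the remaining variable.

```python
import sympy as sp

zeta, v = sp.symbols('zeta v')
a1, b1, a2, b2, c2 = sp.symbols('a1 b1 a2 b2 c2')

# Stand-ins for (88) and its X-derivative: both of degree two in zeta,
# with coefficients depending (at most linearly) on v.
P = zeta**2 + a1*zeta + b1*v
Q = a2*zeta**2 + (b2*v + c2)*zeta + b1

# The resultant eliminates zeta, leaving a polynomial in v and the
# coefficient symbols (the analogue of the 4x4 determinant in the proof).
R = sp.resultant(P, Q, zeta)
print(zeta in R.free_symbols)  # False: zeta has been eliminated
```

Inspecting the coefficient of the top power of $v$ in such a resultant is exactly the step that forces $\mathbf{b}=0$ in the proof.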
\begin{proof}[Proof of Lemma \Rref{lem-3}]
The independent variables in $\mathcal{E}^{\gamma,\delta}_{3,3}$ are $u,\dot{u},\ddot{u},x,v,\dot{v},\ddot{v}$.
Since the determinant (\ref{eq:det2}) is nonzero,
one defines local coordinates
$(x,y,z,w,v,\dot{v},\ddot{v})$ by
\begin{equation}
\label{yzw}
y=p(u,\dot{u},\ddot{u},x,v,\dot{v},\ddot{v}),\ \ \
z=p_x(u,\dot{u},\ddot{u},x,v,\dot{v},\ddot{v}),\ \ \
w=p_{xx}(u,\dot{u},\ddot{u},x,v,\dot{v},\ddot{v}).
\end{equation}
In these coordinates, ${X}$ and ${Y}$, defined in (\ref{XY}), have the following expressions, with
$\psi$ and $\alpha$ some functions to be studied further~:
\begin{eqnarray}
\label{4-X}
{X}&=&\partialx{x}+z\partialx{y}+w\partialx{z}+\alpha\partialx{w}\;,
\\
\label{4-F}
{Y}&=&\gamma\partialx{y}+\delta\partialx{z}
+\psi\partialx{w}
+\dot{v}\partialx{v}+\ddot{v}\partialx{\dot{v}}\;.
\end{eqnarray}
Then, using, for short, the following notation $\Gamma$~:
\begin{equation}
\label{Gamma}
\Gamma\ =\ {X}\gamma-\delta\ \neq\ 0\ ,
\end{equation}
one has
\begin{eqnarray}
\label{C4-X1}
{X}_1&\!\!\!\!=\!\!\!\!&\Gamma\partialx{y}+\left({X}\delta-\psi\right)\partialx{z}
+\left({X}\psi-{Y} \alpha\right)\partialx{w},
\\
\label{C4-[X,X1]}
\!\!\!\!\!\!\!\!\!\!\!\!\!
[{X},{X}_1]&\!\!\!\!=\!\!\!\!&\left({X}\Gamma-{X}\delta+\psi\right)\partialx{y}
+\left({X}^2\delta-2{X}\psi+{Y} \alpha\right)\partialx{z}
+\left({X}^2\psi-{X}{Y} \alpha -{X}_1\alpha\right)\partialx{w}.
\end{eqnarray}
Also,
$$
{E}=\partialx{\ddot{v}}
\,,\ \ \ \
{E}_2=[{E},{Y}]=\psi_{\ddot{v}}\partialx{w}+\partialx{\dot{v}}
\,,\ \ \ \
[{X},{E}_2]=
\psi_{\ddot{v}}\partialx{z}+\left({X}\psi_{\ddot{v}}-{E}_2\alpha\right)\partialx{w}\,
$$
but, from Lemma~\ref{lem-2cro}, one has $[{X},{E}_2]=0$, hence
$\psi_{\ddot{v}}=0$, ${E}_2=\partial/\partial{\dot{v}}$ and
$\alpha_{\dot{v}}=0$.
Then
$$
{E}_3=[\partialx{\dot{v}},{Y}]=\psi_{\dot{v}}\partialx{w}+\partialx{v}
\,,\ \ \ \
[{X},{E}_3]=\psi_{\dot{v}}\partialx{z}+
\left({X}\psi_{\dot{v}}-{E}_3\alpha\right)\partialx{w}\,,
$$
but, from Lemma~\ref{lem-2cro}, one has $[{X},{E}_3]=0$, hence
$\psi_{\dot{v}}=0$, ${E}_3=\partial/\partial{v}$ and
$\alpha_{v}=0$.
To sum up,
\begin{equation}
\label{eq:0}
{E}=\partialx{\ddot{v}}
\,,\ \ \ \
{E}_2=\partialx{\dot{v}}\,,\ \ {E}_3=\partialx{v}\,,
\end{equation}
$\alpha$ depends on $(x,y,z,w)$ only, and $\psi$ on $(x,y,z,w,v)$ only.
\\\textbf{Notation:}
until the end of this proof, $\mathcal{O}$ stands for \emph{any} function of
$x,y,z,w$ only. For instance, $\alpha=\mathcal{O}$, $\gamma=\mathcal{O}$,
$\delta=\mathcal{O}$, $\Gamma=\mathcal{O}$, ${X}\Gamma=\mathcal{O}$ ,
${X}\delta=\mathcal{O}$ and ${X}^2\delta=\mathcal{O}$.
From (\ref{bralie}), (\ref{C4-X1}) and (\ref{C4-[X,X1]}), one has
$\left|
\begin{array}{cc}
\Gamma & {X}\delta-\psi\\
{X}\Gamma-{X}\delta+\psi & {X}^2\delta-2{X}\psi+{Y} \alpha
\end{array}\right|=0\ .$
Hence
\begin{equation}
\label{Xpsi}
{X}\psi\ =\ \frac1{2\Gamma}\psi^2+\mathcal{O}\psi+\mathcal{O}\ .
\end{equation}
We now write the expression (\ref{C4-X1}) of ${X}_1$ as
\begin{eqnarray}
\label{X1sep}
&&{X}_1={X}_1^0+\psi{X}_1^1+\psi^2{X}_1^2
\\
\label{X1011}
&\mbox{with}&\displaystyle
{X}_1^0=\Gamma\partialx{y}+\mathcal{O}\,\partialx{z}+\mathcal{O}\,\partialx{w}\;,\ \ \
{X}_1^1=-\,\partialx{z}+\mathcal{O}\,\partialx{w}\;,\ \ \
{X}_1^2=\frac1{2\Gamma}\partialx{w}\;.
\end{eqnarray}
Note that ${X}_1^0$, ${X}_1^1$ and ${X}_1^2$ are vector fields in the variables $x,y,z,w$ only.
Now define~:
\begin{eqnarray}
\label{U}
U&=& -{X}_1^1-\frac{\psi}{\Gamma}\,\partialx{w}\ =\
\partialx{z}+\left(\mathcal{O}-\frac\psi\Gamma\right)\partialx{w}\ ,
\\
\label{V}
V&=& {X}_1^0-\psi^2{X}_1^2\ =\
\Gamma\partialx{y}+\mathcal{O}\,\partialx{z}+\left(\mathcal{O}-\frac{\psi^2}{2\Gamma}\right)\partialx{w}\
,
\end{eqnarray}
so that
\begin{equation}
\label{X1UV}
{X}_1=V-\psi U
\end{equation}
and, from (\ref{4-F}) and (\ref{X1sep}) one deduces the following expression of ${X}_2=[{X}_1,{Y}]$~:
\begin{equation}
\label{C4-X2}
{X}_2\ =\ \big({Y}\psi\big)\,U+\big({X}_1\psi\big)\,\partialx{w}
+\psi^3\frac{\Gamma_w}{2\Gamma^2}\,\partialx{w}
+\psi^2\left(\frac{\gamma_w}{2\Gamma}\partialx{y}+\mathcal{O}\partialx{z}+\mathcal{O}\partialx{w}\right)
+\psi{X}_2^1+{X}_2^0
\end{equation}
where ${X}_2^1$ and ${X}_2^0$ are two vector fields in the variables
$x,y,z,w$ only.
This formula and (\ref{eq:0}) imply
$[{E}_2,{X}_2]=\big({Y}\psi\big)_{\dot{v}}\,U=\psi_{v}\,U$; hence, from
the last relation in (\ref{bralie2}), either $\psi_{v}$ is identically zero or
$U$ is a linear combination of ${X}_1$ and ${X}_2$.
In the former case $\psi$ depends on $x,y,z,w$ only, as desired; we therefore
assume, until the end of the proof, that $U$ is a linear combination of
${X}_1$ and ${X}_2$.
This implies, using (\ref{X1UV}), that ${X}_2$ and ${X}_1$ are linear
combinations of $U$ and $V$; hence $U,V$ is
another basis for ${X}_1,{X}_2$.
Also, from (\ref{bralie}) $[U,V]$ must be a linear combination of $U$ and $V$.
From (\ref{U}) and (\ref{V}),
\begin{displaymath}
[U,V]=\frac{{X}_1\psi}{\Gamma}\partialx{w}
-\psi^2\,\mathcal{O}\,\partialx{w}
+\psi\,W^1+W^0
\end{displaymath}
where $W^1$ and $W^0$ are two vector fields in the variables
$x,y,z,w$ only, and, finally, with $Z^1$ and $Z^0$ two other vector fields in the variables
$x,y,z,w$ only, one has, from (\ref{C4-X2})
\begin{displaymath}
{X}_2-({Y}\!\psi)\,U-\Gamma\,[U,V]\ =\
\psi^3\frac{\Gamma_w}{2\Gamma^2}\,\partialx{w}
+
\psi^2\left(\frac{\gamma_w}{2\Gamma}\partialx{y}
+\mathcal{O}\partialx{z}+\mathcal{O}\partialx{w}\right)
+\psi\,Z^1+Z^0\ .
\end{displaymath}
This vector field is also a linear
combination of $U$ and $V$. Computing the determinant in the basis
$\partial/\partial y$, $\partial/\partial z$, $\partial/\partial w$, one has,
using (\ref{U}) and (\ref{V}),
$$
\det\big(U,V,\,{X}_2-({Y}\!\psi)U-\Gamma[U,V]\,\big)=
\frac{\gamma_w}{\Gamma^3}\psi^4+\mathcal{O}\psi^3+\mathcal{O}\psi^2+\mathcal{O}\psi+\mathcal{O}
=0\
.
$$
It is assumed from the definition of $\mathcal{E}^{\gamma,\delta}_{k,\ell}$ that the
partial derivative of $\gamma$ with respect to its fourth argument is
nonzero; hence $\gamma_w\neq0$ and the above polynomial of degree 4 with
respect to $\psi$ is nontrivial; its coefficients depend on $x,y,z,w$ only,
hence $\psi$ cannot depend on $v$.
We have proved that, in any case, both $\alpha$ and $\psi$ depend on
$x,y,z,w$ only, and this yields the desired identities in the lemma.
\end{proof}
\section{}
\label{app-lem0}
\begin{lmm}
\label{lem0}
Let $p$ be a smooth function of $u,\ldots,u^{(k-1)},x,v,\ldots,v^{(\ell-1)}$, $r$
a smooth function of
$u,\ldots,u^{(k-1)},x$, $v,\ldots,v^{(\ell)}$, with $r_{v^{(\ell)}}\neq0$, and $f$
a smooth function of four variables such that
\begin{equation}
\label{ducon0}
\sum_{i=0}^{k-2}u^{(i+1)}p_{u^{(i)}}\;+\;rp_{u^{(k-1)}}\;+\;
\sum_{i=0}^{\ell-1}v^{(i+1)}p_{v^{(i)}}\ =\ f(x,p,p_{x},p_{xx})
\end{equation}
where, by convention, $rp_{u^{(k-1)}}$ is zero if $k=0$ and the first
(resp. last) sum is zero if $k\leq 1$ (resp. $\ell=0$).
Then either $p$ depends on $x$ only or
\begin{equation}
\label{ducon}
k\geq1\,,\ \ \ \ell\geq1\,,\ \ \ p_{u^{(k-1)}}\neq0\,,\ \ \ p_{v^{(\ell-1)}}\neq0\,.
\end{equation}
\end{lmm}
\begin{proof}
Let $m\leq k-1$ and $n\leq\ell-1$ be the smallest integers such that
$p$ depends on $u,\ldots,u^{(m)},x,v,\ldots,v^{(n)}$ only;
if $p$ depends on none of the variables $u,\ldots,u^{(k-1)}$ (or
$v,\ldots,v^{(\ell-1)}$), take $m<0$ (or $n<0$).
Then $p_{u^{(m)}}\neq0$ if $m\geq0$
and $p_{v^{(n)}}\neq0$ if $n\geq0$.
The lemma states that either $m<0$ and $n<0$ or
$k\geq 1$, $\ell\geq1$ and $(m,n)=(k-1,\ell-1)$.
This is indeed true~:\\
- if $m=k-1$ and $k\geq1$ then
$n=\ell-1$ and $\ell\geq 1$ because if not, differentiating both sides in (\ref{ducon0})
with respect to $v^{(\ell)}$ would yield $r_{v^{(\ell)}}p_{u^{(k-1)}}=0$, but
the lemma assumes that $r_{v^{(\ell)}}\neq0$,\\
- if $m<k-1$ or $k=0$, (\ref{ducon0}) becomes~:
$\sum_{i=0}^{m}u^{(i+1)}p_{u^{(i)}}+\sum_{i=0}^{n}v^{(i+1)}p_{v^{(i)}}=f(x,p,p_{x},p_{xx})$;
if $m\geq0$, differentiating with respect to $u^{(m+1)}$ yields $p_{u^{(m)}}=0$ and if
$n\geq0$, differentiating with respect to $v^{(n+1)}$ yields $p_{v^{(n)}}=0$;
hence $m$ and $n$ must both be negative.
\end{proof}
\noindent
This paper owes a lot not only to the original article
\cite{Hilb12}, but also to the careful re-reading of that article by \emph{P.~Rouchon} in
\cite{Rouc92} (see also in \cite{Rouc94}).
\noindent
Also, the authors are very grateful to their colleague \emph{José Grimm} at INRIA Sophia
Antipolis for an extremely careful reading of the manuscript that led to many
improvements.
\end{document}
\begin{document}
\title{The water waves equations: from Zakharov to Euler}
\author{
T. Alazard,
N. Burq,
C. Zuily}
\date{}
\maketitle
\abstract{Starting from the Zakharov/Craig-Sulem formulation of the gravity water waves equations,
we prove that one can define a pressure term and hence obtain a solution
of the classical Euler equations.
These results hold in rough domains, under minimal regularity assumptions
ensuring, in terms of Sobolev spaces, that the solutions are $C^1$.}
\section{Introduction}
We study the dynamics of an incompressible layer of inviscid liquid,
having constant density, occupying
a fluid domain with a free surface.
We begin by describing the fluid domain. Hereafter, $d\ge 1$, $t$
denotes the time variable and $x\in \mathbf{R}^d$ and $y\in \mathbf{R}$ denote the horizontal and vertical variables.
We work in a fluid domain with free boundary of the form
$$
\Omega=\{\,(t,x,y)\in (0,T)\times\mathbf{R}^d\times\mathbf{R} \, : \, (x,y) \in \Omega(t)\,\},
$$
where $\Omega(t)$ is the $(d+1)$-dimensional
domain located between two hypersurfaces:
a free surface denoted by $\Sigma(t)$ which
will be supposed to be a graph and a fixed bottom $\Gamma$.
For each time $t$, one has
$$
\Omega(t)=\left\{ (x,y)\in \mathcal{O} \, :\, y < \eta(t,x)\right\},
$$
where $\mathcal{O}$ is a given open connected domain and where $\eta$ is the free surface elevation.
We denote by $\Sigma$ the free surface:
$$
\Sigma = \{(t,x,y): t\in(0,T), (x,y)\in \Sigma(t)\},
$$
where $ \Sigma(t)=\{ (x,y)\in\mathbf{R}^d\times \mathbf{R}\,:\, y=\eta(t,x)\}$
and we set $\Gamma=\partial\Omega(t)\setminus \Sigma(t)$.
Notice that $\Gamma$ does not depend on time.
Two classical examples are the case of infinite depth
($\mathcal{O}=\mathbf{R}^{d+1}$ so that $\Gamma=\emptyset$)
and the case where the bottom is the graph of a function (this corresponds to the case
$\mathcal{O}=\{(x,y)\in\mathbf{R}^d\times \mathbf{R}\,:\, y> b(x)\}$ for some given function $b$).
We introduce now a condition which ensures that, at time $t$, there exists a fixed strip separating the free surface from the bottom.
\begin{equation}\label{eta*}
(H_t):\qquad \exists h>0 : \quad \Gamma
\subset \{(x,y)\in \mathbf{R}^d \times \mathbf{R}: y<\eta(t, x)-h\}.
\end{equation}
No regularity assumption will be made on the bottom $\Gamma$.
\subsubsection*{The incompressible Euler equation with free surface}
Hereafter, we use the following notation:
$$
\nabla=(\partial_{x_i})_{1\leq i\leq d},\quad \nabla_{x,y} =(\nabla,\partial_y),
\quad \Delta = \sum_{1\leq i\leq d} \partial_{x_i}^2,\quad
\Delta_{x,y} = \Delta+\partial_y^2.
$$
The Eulerian velocity field $v\colon \Omega \rightarrow \mathbf{R}^{d+1}$
solves the incompressible Euler equation
\begin{equation*}
\partial_{t} v +v\cdot \nabla_{x,y} v + \nabla_{x,y} P
= - g e_y ,\quad \text{div}\,_{x,y} v =0 \quad\text{in }\Omega,
\end{equation*}
where $g$ is the acceleration due to gravity ($g>0$) and $P$ is the pressure.
The problem is then given by three boundary conditions:
\begin{itemize}
\item a kinematic condition (which states that the free surface moves with the fluid)
\begin{equation}
\partial_{t} \eta = \sqrt{1+|\nabla \eta|^2 } \, (v \cdot n) \quad \text{ on } \Sigma,
\end{equation}
where $n$ is the unit exterior normal to $\Omega(t)$,
\item
a dynamic condition (that expresses a balance of forces across the free surface)
\begin{equation}\label{syst}
P=0 \quad \text{ on } \Sigma,
\end{equation}
\item the "solid wall" boundary condition at the bottom $\Gamma$
\begin{equation}
v\cdot \nu=0,
\end{equation}
\end{itemize}
where $\nu$ is the normal vector to $\Gamma$
whenever it exists. In the case of arbitrary bottom this condition will be implicit and contained in a variational formulation.
\subsubsection*{The Zakharov/Craig-Sulem formulation}
A popular form of the water-waves system
is given by the Zakharov/Craig-Sulem
formulation. This is an elegant
formulation of the water-waves equations where
all the unknowns are evaluated at the free surface only.
Let us recall the derivation of this system.
Assume, furthermore, that the motion of the liquid is irrotational.
The velocity field $v$
is therefore given by $v=\nabla_{x,y} \Phi$
for some velocity potential $\Phi\colon \Omega\rightarrow \mathbf{R}$ satisfying
$$
\Delta_{x,y}\Phi=0\quad\text{in }\Omega,
\qquad \partial_\nu \Phi =0\quad\text{on }\Gamma,
$$
and the Bernoulli equation
\begin{equation}\label{Bern}
\partial_{t} \Phi +\frac{1}{2} \left\lvert \nabla_{x,y}\Phi\right\rvert^2 + P +g y = 0 \quad\text{in }\Omega.
\end{equation}
Following Zakharov~\cite{Zakharov1968},
introduce the trace of the potential on the free surface:
$$
\psi(t,x)=\Phi(t,x,\eta(t,x)).
$$
Notice that since $\Phi$ is harmonic, $\eta$ and $\psi$ fully determine $\Phi$.
Craig and Sulem (see~\cite{CrSu}) observe that one can
form a system of two evolution equations for $\eta$ and
$\psi$. To do so, they introduce the Dirichlet-Neumann operator $G(\eta)$
that relates $\psi$ to the normal derivative
$\partial_n\Phi$ of the potential by
\begin{align*}
(G(\eta) \psi) (t,x)&=\sqrt{1+|\nabla\eta|^2}\,
\partial _n \Phi\arrowvert_{y=\eta(t,x)}\\
&=(\partial_y \Phi)(t,x,\eta(t,x))-\nabla_x \eta (t,x)\cdot (\nabla_x \Phi)(t,x,\eta(t,x)).
\end{align*}
(For the case with a rough bottom, we recall the precise construction later on).
Directly from this definition, one has
\begin{equation}\label{eq:1}
\partial_t \eta=G(\eta)\psi.
\end{equation}
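Let us spell this out: combining the kinematic boundary condition with the
definition of $G(\eta)$ and $v=\nabla_{x,y}\Phi$, one finds
$$
\partial_t\eta
=\sqrt{1+|\nabla\eta|^2}\,(v\cdot n)\arrowvert_{y=\eta}
=\sqrt{1+|\nabla\eta|^2}\,\partial_n\Phi\arrowvert_{y=\eta}
=G(\eta)\psi.
$$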
It is proved in~\cite{CrSu} (see also the computations in~\S\ref{S:pressure})
that the condition $P=0$ on the free surface implies that
\begin{equation}\label{eq:2}
\partial_{t}\psi+g \eta
+ \frac{1}{2}\left\lvert\nabla \psi\right\rvert^2 -\frac{1}{2}
\frac{\bigl(\nabla \eta\cdot\nabla \psi +G(\eta) \psi \bigr)^2}{1+|\nabla \eta|^2}
= 0.
\end{equation}
The system~\eqref{eq:1}--\eqref{eq:2} is in Hamiltonian form
(see~\cite{CrSu,Zakharov1968}), where the Hamiltonian is given by
$$
\mathcal{H}=\frac{1}{2} \int_{\mathbf{R}^d} \psi G(\eta)\psi +g\eta^2\, dx.
$$
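Explicitly, the Hamiltonian structure means that $\eta$ and $\psi$ are
canonically conjugate variables (see~\cite{CrSu,Zakharov1968}): the
system~\eqref{eq:1}--\eqref{eq:2} takes the form
$$
\partial_t \eta=\frac{\delta\mathcal{H}}{\delta\psi},\qquad
\partial_t \psi=-\frac{\delta\mathcal{H}}{\delta\eta},
$$
where $\delta\mathcal{H}/\delta\eta$ and $\delta\mathcal{H}/\delta\psi$
denote the variational derivatives of $\mathcal{H}$.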
The problem to be considered here is that of the equivalence of the previous two formulations of the water-waves problem.
Assume that the Zakharov/Craig-Sulem system has been solved. Namely, assume
that, for some $r>1+ d/2$,
$(\eta, \psi) \in C^0(I, H^{r}(\mathbf{R}^d)\times H^{r}(\mathbf{R}^d))$ solves~\eqref{eq:1}-\eqref{eq:2}.
We would like to show that we have then indeed solved the initial free boundary Euler system. In particular, we have to define the pressure, which does not appear in the system~\eqref{eq:1}--\eqref{eq:2}. To do so we set
$$ B=\frac{\nabla \eta \cdot\nabla \psi+ G(\eta)\psi}{1+|\nabla \eta|^2},
\qquad
V= \nabla \psi -B \nabla\eta.$$
Then $B$ and $V$ belong to the space $C^0(I, H^{\frac{1}{2}}(\mathbf{R}^d)).$
It follows from~\cite{ABZ1} that (for fixed $t$) one can define unique variational solutions to the problems
$$
\Delta_{x,y}\Phi=0\quad\text{in }\Omega,\qquad
\Phi\arrowvert_\Sigma =\psi,\qquad \partial_\nu \Phi =0\quad\text{on }\Gamma.
$$
$$
\Delta_{x,y} Q=0\quad\text{in }\Omega,\qquad
Q\arrowvert _\Sigma =g \eta + \frac{1}{2}(B^2 + \vert V \vert^2) ,\qquad \partial_\nu Q =0\quad\text{on }\Gamma.
$$
Then we shall define $P\in \mathcal{D}'(\Omega)$ by
$$
P:= Q-gy-\frac{1}{2} \leqft\lvert \nabla_{x,y}\Phi\right\rvert^2
$$
and we shall show firstly that $P$ has a trace on $\Sigma$ which is equal to $0$, and secondly that $ Q =-\partial_t \Phi$, which will show, according to \eqref{Bern}, that we have indeed solved Bernoulli's (and therefore Euler's) equation.
These assertions are not straightforward because we are working with solutions of low regularity and we consider general bottoms (namely, no regularity assumption is made on the bottom).
Indeed, the analysis would have been much easier for $r>2+d/2$ and a flat bottom.
\bigbreak
\noindent\textbf{Acknowledgements.}
T.A. was supported by the French Agence Nationale de la Recherche, projects ANR-08-JCJC-0132-01 and ANR-08-JCJC-0124-01.
\section{Low regularity Cauchy theory}
Since we are interested in low regularity solutions, we begin by recalling the well-posedness results proved in~\cite{ABZ3}. These results clarify the Cauchy theory of
the water waves equations both in terms of the regularity indices required of the initial conditions and of the smoothness of the bottom of the domain (namely, no regularity assumption is made on the bottom).
Recall that the Zakharov/Craig-Sulem system reads
\begin{equation}\label{system}
\left\{
\begin{aligned}
&\partial_{t}\eta-G(\eta)\psi=0,\\[0.5ex]
&\partial_{t}\psi+g \eta
+ \frac{1}{2}\left\lvert\nabla \psi\right\rvert^2 -\frac{1}{2}
\frac{\bigl(\nabla \eta\cdot\nabla \psi +G(\eta) \psi \bigr)^2}{1+|\nabla \eta|^2}
= 0.
\end{aligned}
\right.
\end{equation}
It is useful to introduce the vertical and horizontal components of the velocity,
\begin{gather*}
B:= (v_y)\arrowvert_{y=\eta} = (\partial_y \Phi)\arrowvert_{y=\eta},\quad
V := (v_x)\arrowvert_{y=\eta}=(\nabla_x \Phi)\arrowvert_{y=\eta}.
\end{gather*}
These can be defined in terms of $\eta$ and $\psi$ by means of the formulas
\begin{equation}\label{defi:BV}
B=\frac{\nabla \eta \cdot\nabla \psi+ G(\eta)\psi}{1+|\nabla \eta|^2},
\qquad
V= \nabla \psi -B \nabla\eta.
\end{equation}
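These formulas simply invert the trace relations $\nabla\psi=V+B\nabla\eta$
and $G(\eta)\psi=B-V\cdot\nabla\eta$ (established below, see \eqref{debutV}
and \eqref{suiteV}). Indeed, with $B,V$ given by~\eqref{defi:BV},
$$
V+B\nabla\eta=\nabla\psi-B\nabla\eta+B\nabla\eta=\nabla\psi,
$$
and
$$
B-V\cdot\nabla\eta
=B\big(1+\vert\nabla\eta\vert^2\big)-\nabla\psi\cdot\nabla\eta
=G(\eta)\psi.
$$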
Also, recall that the Taylor coefficient $a=-\partial_y P\arrowvert_{\Sigma}$
can be defined in terms of $\eta,V, B,\psi$ only (see~\S$4.3.1$ in~\cite{LannesLivre}).
In \cite{ABZ3} we proved the following results about low regularity solutions.
We refer to the introduction of~\cite{ABZ3,LannesJAMS}
for references and a short historical survey of the background of this problem.
\begin{theo}[\cite{ABZ3}]
\label{theo:Cauchy}
Let $d\ge 1$, $s>1+d/2$ and consider initial data $(\eta_{0},\psi_{0})$ such that
$(i)$
$\eta_0\in H^{s+\frac{1}{2}}(\mathbf{R}^d),\quad \psi_0\in H^{s+\frac{1}{2}}(\mathbf{R}^d),\quad V_0\in H^{s}(\mathbf{R}^d),\quad B_0\in H^{s}(\mathbf{R}^d)$,
$(ii)$ the condition $(H_0)$ in \eqref{eta*} holds at time $t=0$,
$(iii)$ there exists a positive constant $c$ such that, for all $x$ in $\mathbf{R}^d$,
$a_0(x)\geq c$.
Then there exists $T>0$ such that
the Cauchy problem for \eqref{system}
with initial data $(\eta_{0},\psi_{0})$ has a unique solution
$$
(\eta,\psi)\in C^0\big([0,T], H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d)\big),
$$
such that
\begin{enumerate}
\item $(V,B)\in C^0\big([0,T], H^{s}(\mathbf{R}^d)\times H^{s}(\mathbf{R}^d)\big)$,
\item the condition $(H_t)$ in \eqref{eta*} holds for $t\in [0,T]$ with $h$ replaced by
$h/2$,
\item $a(t,x)\ge c/2,$ for all $(t,x)$ in $[0,T] \times \mathbf{R}^d$.
\end{enumerate}
\end{theo}
\begin{theo}[\cite{ABZ3}]\label{theo.strichartz}
Assume $\Gamma = \emptyset$. Let $d=2$, $s>1+\frac{d}{2}- \frac{1}{12}$ and consider initial data
$(\eta_{0},\psi_{0})$ such that
\begin{equation*}
\eta_0\in H^{s+\frac{1}{2}}(\mathbf{R}^d),\quad \psi_0\in H^{s+\frac{1}{2}}(\mathbf{R}^d), \quad V_0\in H^{s}(\mathbf{R}^d),
\quad B_0\in H^{s}(\mathbf{R}^d).
\end{equation*}
Then there exists $T>0$ such that
the Cauchy problem for \eqref{system}
with initial data $(\eta_{0},\psi_{0})$ has a solution $(\eta, \psi)$ such that
$$
(\eta,\psi,V,B)\in C^0\big([0,T];H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s}(\mathbf{R}^d)\times H^{s}(\mathbf{R}^d)\big).
$$
\end{theo}
\begin{rema}
$(i)$ For the sake of simplicity we stated Theorem \ref{theo.strichartz} in dimension $d=2$
(recall that $d$ is the dimension of the interface). One can prove such a result in any dimension $d \geq 2,$ the number
$1/12$ being replaced by an index depending on $d$.
$(ii)$ Notice that in infinite depth ($\Gamma = \emptyset$) the Taylor condition
(which is assumption $(iii)$ in Theorem~\ref{theo:Cauchy}) is always satisfied, as proved by Wu~\cite{WuInvent}.
\end{rema}
Now having solved the system \eqref{system} in $(\eta, \psi)$ we have to show that we have indeed solved the initial system in $(\eta,v)$. This is the purpose of the following section.
There is one point that should be emphasized concerning the regularity.
Below we consider solutions $(\eta,\psi)$ of \eqref{system} such that
$$
(\eta,\psi)\in C^0\big([0,T];H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d)),
$$
with the only assumption that $s>\frac{1}{2} +\frac{d}{2}$
(and the assumption that there exists
$h>0$ such that the condition $(H_t)$ in \eqref{eta*}
holds for $t\in [0,T]$). Consequently, the results proved
in this note apply to the settings considered in the above theorems.
\section{From Zakharov to Euler}
\subsection{The variational theory}
In this paragraph the time variable is fixed, so we omit it and work in a fixed domain $\Omega$ whose top boundary $\Sigma$ is Lipschitz, i.e.\ $\eta \in W^{1,\infty}(\mathbf{R}^d).$
We recall here the variational theory, developed in~\cite{ABZ1},
allowing us to solve the following problem in the case of arbitrary bottom,
\begin{equation}\label{dirichlet}
\Delta \Phi = 0 \quad \text{in } \Omega, \quad \Phi\arrowvert_{\Sigma} = \psi, \quad \frac{\partial \Phi}{\partial\nu}\arrowvert_\Gamma = 0.
\end{equation}
Notice that $\Omega$ is not necessarily bounded below. We proceed as follows.
Denote by $\mathcal{D}$ the space of functions $u\in C^\infty(\Omega)$ such that $\nabla_{x,y} u\in L^2(\Omega) $ and let $\mathcal{D}_0 $ be the subspace of functions $u \in \mathcal{D}$ such that $u$ vanishes near the top boundary~$\Sigma.$
\begin{lemm}[see Prop 2.2 in~\cite{ABZ1}]
There exists a positive weight $g\in L^\infty_{loc}(\Omega)$
equal to $1$
near the top boundary $\Sigma$ of $\Omega$ and
$C>0$ such that for all $u\in \mathcal{D}_0$
\begin{equation}\label{poincare}
\iint_\Omega g(x,y) \vert u(x,y)\vert^2 dx dy
\leq C \iint_\Omega \vert \nabla_{x,y}u(x,y) \vert^2 dx dy.
\end{equation}
\end{lemm}
Using this lemma one can prove the following result.
\begin{prop}[see page 422 in~\cite{ABZ1}]\label{hilbert}
Denote by $H^{1,0}(\Omega)$ the space of functions
$u$ on $\Omega$ for which there exists a sequence
$(u_n) \subset \mathcal{D}_0$ such that
$$\nabla_{x,y}u_n \to \nabla_{x,y} u \quad \text{ in }
L^2(\Omega), \quad u_n \to u \quad \text{ in } L^2(\Omega, g dx dy),
$$
endowed with the scalar product
$$
(u , v)_{H^{1,0}(\Omega)} = (\nabla_x u ,\nabla_x v)_{L^2(\Omega)}
+ (\partial_y u ,\partial_yv)_{L^2(\Omega)}.
$$
Then $H^{1,0}(\Omega)$ is a Hilbert space and \eqref{poincare} holds for $u \in H^{1,0}(\Omega).$
\end{prop}
Let $\psi \in H^{\frac{1}{2}}(\mathbf{R}^d)$.
One can construct (see
below after \eqref{u}) $\underline{\psi} \in H^1(\Omega)$
such that
$$
\operatorname{supp} \underline{\psi}\subset \{(x,y): \eta(t,x)-h\leq y \leq \eta(t,x)\},
\quad \underline{\psi}\arrowvert_{\Sigma} = \psi.
$$
Using Proposition~\ref{hilbert} we deduce that there exists a unique $u \in H^{1,0}(\Omega)$
such that, for all $\theta \in H^{1,0}(\Omega)$,
$$
\iint_\Omega \nabla_{x,y}u(x,y) \cdot \nabla_{x,y} \theta (x,y)dx dy
= -\iint_\Omega \nabla_{x,y}\underline{\psi}(x,y) \cdot\nabla_{x,y}
\theta (x,y)dx dy.
$$
Then to solve the problem \eqref{dirichlet} we set $\Phi = u + \underline{\psi}.$
\begin{rema}
As for the usual Neumann problem the meaning
of the third condition in \eqref{dirichlet} is included
in the definition of the space $H^{1,0}(\Omega). $
It can be written as in \eqref{dirichlet} if the bottom $\Gamma$ is sufficiently smooth.
\end{rema}
\subsection{The main result}
Let us assume that the Zakharov system \eqref{system}
has been solved on $I =(0,T)$, which means that we have found, for $ s>\frac{1}{2} + \frac{d}{2},$ a solution
$$
(\eta, \psi) \in C^0(\overline{I}, H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d)) ,
$$
of the system
\begin{equation}\label{Zaharov}
\left\{
\begin{aligned}
&\partial_t \eta = G(\eta) \psi,\\
& \partial_t \psi = - g \eta - \frac{1}{2} \vert \nabla \psi \vert ^2
+\frac{1}{2} \frac{(\nabla \psi \cdot \nabla \eta + G(\eta) \psi)^2}{1 + \vert \nabla \eta \vert^2}.
\end{aligned}
\right.
\end{equation}
Let $B,V$ be defined by \eqref{defi:BV}. Then $(B,V) \in C^0(I, H^{s-\frac{1}{2}}(\mathbf{R}^d) \times H^{s-\frac{1}{2}}(\mathbf{R}^d)).$
The above variational theory shows that one can solve (for fixed $t$) the problem
\begin{equation}\label{eq:Q}
\Delta_{x,y} Q = 0 \quad \text{in } \Omega, \quad Q\arrowvert_\Sigma = g \eta + \frac{1}{2}(B^2 + \vert V \vert^2)\in H^{\frac{1}{2}}(\mathbf{R}^d).
\end{equation}
Here is the main result of this article.
\begin{theo}\label{maintheo}
Let $\Phi$ and $Q$ be the variational solutions of the problems \eqref{dirichlet} and \eqref{eq:Q}. Set $P = Q-gy -\frac{1}{2}\vert \nabla_{x,y}\Phi\vert^2.$ Then $v := \nabla_{x,y}\Phi$ satisfies the Euler system
$$\partial_{t} v +(v\cdot \nabla_{x,y}) v + \nabla_{x,y} P
= - g e_y \quad \text{in }\Omega, $$
together with the conditions
\begin{equation}
\left\{
\begin{aligned}
&\text{div}\,_{x,y} v =0, \quad \text{curl}\, _{x,y} v =0 \quad \text{in } \Omega, \\
& \partial_t \eta = (1+| \nabla\eta|^2)^\frac{1}{2} \, (v\cdot n) \quad \text{on } \Sigma, \\
& P= 0 \quad \text{on } \Sigma.
\end{aligned}
\right.
\end{equation}
\end{theo}
The rest of the paper is devoted to the proof of this result. We proceed in several steps.
\subsection{Straightening the free boundary}
First of all, if condition $(H_t)$ is satisfied on $I$, for $T$
small enough, one can find $\eta_*\in L^\infty(\mathbf{R}^d)$ independent of $t$ such that
\begin{equation}\label{eta}
\left\{
\begin{aligned}
&(i)\quad \nabla_x \eta_* \in H^\infty(\mathbf{R}^d),\quad \Vert \nabla_x \eta_* \Vert_{L^\infty(\mathbf{R}^d)}\leq C \Vert \eta \Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d))},\\
&(ii) \quad \eta(t,x)-h \leq \eta_*(x) \leq \eta(t,x)- \frac{h}{2}, \quad \forall (t,x)\in I\times \mathbf{R}^d,\\
&(iii) \quad \Gamma \subset \{(x,y) \in \mathcal{O}: y<\eta_*(x)\}.
\end{aligned}
\right.
\end{equation}
Indeed using the first equation in \eqref{Zaharov} we have
\begin{equation*}
\begin{aligned}
\Vert \eta(t,\cdot)-\eta_0\Vert_{L^\infty(\mathbf{R}^d)} &\leq \int_0^t\Vert G(\eta)\psi(\sigma,\cdot)\Vert_{H^{s-\frac{1}{2}}(\mathbf{R}^d)} d\sigma\\
&\leq TC\big(\Vert(\eta, \psi)\Vert_{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
\end{aligned}
\end{equation*}
Therefore taking $T$ small enough
we make $\Vert \eta(t,\cdot)-\eta_0\Vert_{L^\infty(\mathbf{R}^d)}$
as small as we want. Then we take
$ \eta_*(x) = -\frac{2h}{3} + e^{-\delta\vert D_x \vert}\eta_0, $ with $\delta>0$ small,
and writing
$$\eta_*(x) = -\frac{2h}{3} + \eta(t,x) - (\eta(t,x) - \eta_0( x))
+ (e^{-\delta\vert D_x \vert}\eta_0 - \eta_0(x)),
$$
$$
we obtain \eqref{eta}.
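Let us sketch why $(ii)$ in \eqref{eta} follows, writing $\delta>0$ for the
smoothing parameter in the definition of $\eta_*$. Both correction terms in
the last display are small:
$\Vert \eta(t,\cdot)-\eta_0\Vert_{L^\infty(\mathbf{R}^d)}\leq h/12$ for $T$
small enough by the previous estimate, while
$\Vert e^{-\delta\vert D_x \vert}\eta_0-\eta_0\Vert_{L^\infty(\mathbf{R}^d)}\leq h/12$
for $\delta$ small enough, since
$e^{-\delta\vert D_x \vert}\eta_0\to\eta_0$ in
$H^{s+\frac{1}{2}}(\mathbf{R}^d)\hookrightarrow L^\infty(\mathbf{R}^d)$ as $\delta\to 0$. Hence
$$
\eta(t,x)-\frac{2h}{3}-\frac{h}{6}\ \leq\ \eta_*(x)\ \leq\ \eta(t,x)-\frac{2h}{3}+\frac{h}{6},
$$
that is, $\eta(t,x)-\frac{5h}{6}\leq \eta_*(x)\leq \eta(t,x)-\frac{h}{2}$, which implies $(ii)$.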
In what follows we shall set
\begin{equation}\label{lesomega}
\left\{
\begin{aligned}
&\Omega_1(t) = \{(x,y): x \in \mathbf{R}^d, \eta_*(x)<y< \eta(t,x)\},\\
&\Omega_1 = \{(t,x,y): t\in I, (x,y)\in \Omega_1(t)\},
\quad \Omega_2 = \{(x,y)\in \mathcal{O}: y\leq\eta_*(x)\},\\
&\tilde{\Omega}_1 = \{(x,z): x \in \mathbf{R}^d, z \in (-1,0)\},\\
&\tilde{\Omega}_2 = \{(x,z) \in \mathbf{R}^d \times (-\infty,-1]: (x, z+1+\eta_*(x)) \in \Omega_2\} \\
&\tilde{\Omega} = \tilde{\Omega}_1 \cup \tilde{\Omega}_2.
\end{aligned}
\right.
\end{equation}
Following Lannes~\cite{LannesJAMS}, for $t\in I$ consider the map $(x,z)\mapsto (x, \rho(t,x,z))$
from $\tilde{\Omega} $ to $\mathbf{R}^{d+1}$ defined by
\begin{equation}\label{diffeo}
\left\{
\begin{aligned}
\rho(t,x,z) &= (1+z)e^{\delta z \langle D_x \rangle} \eta(t,x) -z \eta_*(x) \quad \text{if } (x,z) \in \tilde{\Omega}_1,\\
\rho(t,x,z) &= z+1+ \eta_*(x) \quad \text{if } (x,z) \in \tilde{\Omega}_2,
\end{aligned}
\right.
\end{equation}
where $\delta>0$ is chosen such that
$$\delta \Vert \eta \Vert_{L^ \infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))}=: \delta_0 \ll 1.$$
Notice that since $s>\frac{1}{2} + \frac{d}{2},$ taking $\delta$
small enough and using \eqref{eta} $(i), (ii),$ we obtain the estimates
\begin{equation}\label{rhokappa}
\begin{aligned}
(i)& \quad \partial_z \rho(t ,x,z) \geq \min\Big(\frac{h}{3}, 1\Big)
\quad \forall (t,x,z)\in I \times \tilde{\Omega} ,\\
(ii)&\quad \Vert \nabla_{x,z} \rho\Vert_{ L^\infty(I \times \tilde{\Omega} )}
\leq C (1+\Vert \eta \Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d))}) .\\
\end{aligned}
\end{equation}
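Let us sketch the proof of the lower bound $(i)$; the bound $(ii)$ is proved
similarly. In $\tilde{\Omega}_2$ one has $\partial_z\rho=1$. In
$\tilde{\Omega}_1$, writing $\delta$ for the small parameter in \eqref{diffeo},
$$
\partial_z \rho
= e^{\delta z \langle D_x \rangle}\eta
+(1+z)\,\delta\langle D_x \rangle e^{\delta z \langle D_x \rangle}\eta
-\eta_*
= (\eta-\eta_*) + R,
\qquad
\Vert R\Vert_{L^\infty}\leq C\,\delta\,\Vert \eta \Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d))},
$$
using that $\langle D_x \rangle\eta\in H^{s-\frac{1}{2}}(\mathbf{R}^d)\subset L^\infty(\mathbf{R}^d)$
since $s-\frac{1}{2}>\frac{d}{2}$. By \eqref{eta} $(ii)$ one has
$\eta-\eta_*\geq \frac{h}{2}$, whence
$\partial_z\rho\geq \frac{h}{2}-C\delta_0\geq\min\big(\frac{h}{3},1\big)$
once $\delta_0$ is small enough (depending on $h$).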
It follows from \eqref{rhokappa} $(i)$ that the map $(t,x,z) \mapsto (t,x,\rho(t,x,z))$ is a diffeomorphism from $I \times \tilde{\Omega} $ to $\Omega$ which is of class $W^{1,\infty}.$
We denote by $\kappa$ the inverse map of $\rho$:
\begin{equation}\label{kappa}
\begin{aligned}
(t,x,\rho(t,x,z)) &= (t,x,y), \quad (t,x,z)\in I\times \tilde{\Omega}\\
\Longleftrightarrow \quad (t,x,z) &= (t,x,\kappa(t,x,y)), \quad (t,x,y) \in \Omega.
\end{aligned}
\end{equation}
\subsection{The Dirichlet-Neumann operator}
Let $\Phi$ be the variational solution described above (with fixed $t$)
of the problem
\begin{equation}\label{var}
\left\{
\begin{aligned}
&\Delta_{x,y} \Phi =0 \quad \text{in } \Omega (t),\\
&\Phi \arrowvert_{\Sigma(t)} = \psi(t,\cdot) ,\\
&\partial_\nu \Phi \arrowvert_{\Gamma} = 0.
\end{aligned}
\right.
\end{equation}
Let us recall that
\begin{equation}\label{u}
\Phi = u + \underline{\psi}
\end{equation}
where $u \in H^{1,0}(\Omega(t))$ and $\underline{\psi}$ is an extension of $\psi$ to $\Omega (t).$
Here is a construction of $\underline{\psi}$.
Let $\chi \in C^\infty(\mathbf{R})$ with $\chi(a)= 0$ if $a\leq-1$ and $\chi(a) = 1$ if $a\geq -\frac{1}{2}.$
Let $\underline{\tilde{\psi}}(t,x,z) = \chi(z) e^{z \langle D_x \rangle} \psi(t,x)$
for $z\leq 0.$ It is classical that $ \underline{\tilde{\psi}}\in L^\infty(I,H^1(\tilde{\Omega}))$
if $\psi \in L^\infty(I,H^\frac{1}{2}(\mathbf{R}^d))$ and
$$
\Vert \underline{\tilde{\psi}} \Vert_{L^\infty(I, H^1(\tilde{\Omega}))}
\leq C \Vert \psi \Vert_{L^\infty(I, H^\frac{1}{2}(\mathbf{R}^d))}.
$$
Then we set
\begin{equation}\label{psisoul}
\underline{\psi}(t,x,y)= \underline{\tilde{\psi}}\big(t, x, \kappa(t,x,y)\big).
\end{equation}
Since $\eta \in C^0(I, W^{1,\infty}(\mathbf{R}^d))$
we have $\underline{\psi}(t, \cdot) \in H^1(\Omega(t)),\,
\underline{\psi}\arrowvert_{\Sigma(t)} = \psi$
and
$$
\Vert \underline{\psi}(t, \cdot) \Vert_{H^1(\Omega(t))}
\leq C\big(\Vert \eta \Vert_{L^\infty(I, W^{1,\infty}(\mathbf{R}^d))}\big)
\Vert \psi \Vert_{L^\infty(I, H^\frac{1}{2}(\mathbf{R}^d))}.
$$
Then we define the Dirichlet-Neumann operator by
\begin{equation}
\begin{aligned}
G(\eta)\psi (t,x) &= \sqrt{1+\vert \nabla \eta \vert^2}\,\partial_n \Phi\arrowvert_{\Sigma}\\
&=(\partial_y \Phi)(t,x, \eta(t,x)) - \nabla_x \eta(t,x) \cdot(\nabla_x \Phi)(t,x,\eta(t,x)).
\end{aligned}
\end{equation}
It has been shown in \cite{ABZ3} (see~$\S3$) that $G(\eta)\psi $ is well defined in
$C^0(\overline{I},H^{-\frac{1}{2}}(\mathbf{R}^d))$
if $\eta\in C^0(\overline{I},W^{1,\infty}(\mathbf{R}^d))$
and $\psi \in C^0(\overline{I},H^{\frac{1}{2}}(\mathbf{R}^d))$.
\begin{rema}
Recall that we have set
\begin{equation}\label{omega}
\Omega(t) = \{(x,y) \in \mathcal{O}: y< \eta(t,x)\},
\quad \Omega = \{(t,x,y): t\in I, (x,y)\in \Omega(t)\}.
\end{equation}
For a function $f\in L^1_{loc}(\Omega)$ if $\partial_t f$
denotes its derivative in the sense of distributions we have
\begin{equation}\label{derf}
\langle \partial_t f, \varphi \rangle
= \lim_{\varepsilon \to 0}\Big\langle \frac{f(\cdot +\varepsilon,\cdot,\cdot) - f(\cdot,\cdot,\cdot)}{\varepsilon}, \varphi \Big\rangle, \quad \forall \varphi \in C_0^\infty(\Omega).
\end{equation}
This point should be clarified due to the
particular form of the set $\Omega$
since we have to show that if $(t,x,y) \in \operatorname{supp} \varphi = K$
then
$(t+\varepsilon,x,y)\in \Omega $ for $\varepsilon$ sufficiently small, independently of the point $(t,x,y)$.
This is true. Indeed if $(t,x,y) \in K$ there exists a fixed $\delta>0$
(depending only on $K,\eta $) such that $y\leq \eta(t,x) - \delta.$
Since by \eqref{Zaharov}
$$
\vert \eta(t +\varepsilon,x) - \eta(t,x)\vert \leq \varepsilon \Vert G(\eta)\psi \Vert_{L^\infty(I\times \mathbf{R}^d)}\leq \varepsilon C
$$ where
$C= C\big(\Vert(\eta,\psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big),$
we have if $\varepsilon < \frac{\delta}{C} $,
$$
y-\eta(t+\varepsilon,x) = y-\eta(t ,x) + \eta(t ,x) - \eta(t+\varepsilon,x) \leq -\delta + \varepsilon C<0.
$$
Notice that since
$\eta \in C^0(\overline{I}, H^{s+\frac{1}{2}}(\mathbf{R}^d)), \partial_t \eta = G(\eta) \psi \in C^0(\overline{I}, H^{s - \frac{1}{2}}(\mathbf{R}^d))$
and \\ $s> \frac{1}{2} + \frac{d}{2}$ we have $\rho \in W^{1,\infty}(I\times \tilde{\Omega}).$
\end{rema}
The main step in the proof of Theorem \ref{maintheo} is the following.
\begin{prop}\label{theoprinc}
Let $\Phi$ be defined by \eqref{var} and $Q\in H^{1,0}(\Omega(t))$ by \eqref{eq:Q}. Then for all $t\in I$
\begin{equation*}
\begin{aligned}
& (i) \quad \partial_t \Phi(t, \cdot) \in H^{1,0}(\Omega(t)), \\
& (ii) \quad \partial_t \Phi = -Q \quad \text{in } \mathcal{D}'(\Omega).
\end{aligned}
\end{equation*}
\end{prop}
This result will be proved in \S 3.6.
\subsection{Preliminaries}
If $f $ is a function defined on $\Omega$
we shall denote by $\tilde{f}$ its image
by the diffeomorphism $(t,x,z)\mapsto (t,x,\rho(t,x,z))$.
Thus we have
\begin{equation}\label{image}
\tilde{f}(t,x,z) = f(t,x,\rho(t,x,z)) \Leftrightarrow f(t,x,y) = \tilde{f}\big(t,x,\kappa(t,x,y)\big).
\end{equation}
Formally we have the following equalities for $(t,x,y) = (t,x,\rho(t,x,z)) \in \Omega$
and $\nabla = \nabla_x$
\begin{equation}\label{dt}
\left\{
\begin{aligned}
\partial_yf(t,x,y) &= \frac{1}{\partial_z \rho}\partial_z \tilde{f}(t,x,z)
\Leftrightarrow \partial_z \tilde{f}(t,x,z) = \partial_z \rho(t,x,\kappa(t,x,y)) \partial_y f(t,x,y),\\
\nabla f(t,x,y) &= \big(\nabla \tilde{f} - \frac{\nabla \rho}{\partial_z \rho}\, \partial_z\tilde{f}\big)(t,x,z) \Leftrightarrow \nabla \tilde{f}(t,x,z) = \big( \nabla f + \nabla \rho\,\, \partial_y f\big)(t,x,y),\\
\partial_t f(t,x,y) &= \big( \partial_t \tilde{f}
+ \partial_t \kappa (t,x,y) \partial_z\tilde{f}\big)(t,x,\kappa(t,x,y)).
\end{aligned}
\right.
\end{equation}
We shall set in what follows
\begin{equation}\label{lambda}
\Lambda_1 = \frac{1}{\partial_z \rho} \partial_z,\quad \Lambda_2
= \nabla_x - \frac{\nabla_x \rho}{\partial_z \rho}\partial_z.
\end{equation}
Finally, recall that if $u$ is the function defined by \eqref{u} we have
\begin{equation}\label{egvar}
\iint_{\Omega(t)} \nabla_{x,y}u(t,x,y) \cdot \nabla_{x,y}\theta(x,y) dx dy
= - \iint_{\Omega(t)} \nabla_{x,y}\underline{\psi}(t, x,y) \cdot \nabla_{x,y}\theta(x,y) dx dy
\end{equation}
for all $\theta \in H^{1,0}(\Omega(t)) $ which implies that for $t\in I,$
\begin{equation}\label{est}
\Vert \nabla_{x,y} u(t, \cdot)\Vert_{L^2(\Omega(t))} \leq C\big(\Vert \eta \Vert_{L^\infty(I, W^{1,\infty}(\mathbf{R}^d))}\big) \Vert \psi \Vert_{L^\infty(I,H^\frac{1}{2}(\mathbf{R}^d))}.
\end{equation}
Let $u$ be defined by \eqref{u}.
Since $(\eta,\psi)\in C^0(\overline{I}, H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))$
the elliptic regularity theorem proved in \cite{ABZ3} (see Theorem 3.16)
shows that
$$
\partial_z \tilde{u}, \nabla_x \tilde{u}
\in C_z^0([-1,0], H^{s-\frac{1}{2}}(\mathbf{R}^d))\subset C^0([-1, 0] \times \mathbf{R}^d),
$$
since $s-\frac{1}{2} > \frac{d}{2}.$
It follows from \eqref{dt} that $\partial_y u$ and $\nabla_x u$ have a trace on $\Sigma$ and
$$
\partial_y u\arrowvert_\Sigma
= \frac{1}{\partial_z \rho(t,x,0)} \partial_z \tilde{u}(t,x,0), \quad \nabla_x u\arrowvert_\Sigma
= \big(\nabla_x \tilde{u} - \frac{\nabla_x \eta}{\partial_z \rho(t,x,0)} \partial_z \tilde{u}\big)(t,x,0).
$$
Since $\tilde{u}(t,x,0) =0$ it follows that
$$
\nabla_x u\arrowvert_\Sigma + (\nabla_x \eta) \partial_y u\arrowvert_\Sigma = 0
$$
from which we deduce, since $\Phi = u + \underline{\psi}$,
\begin{equation}\label{debutV}
\nabla_x \Phi \arrowvert_\Sigma + (\nabla_x \eta) \partial_y \Phi\arrowvert_\Sigma = \nabla_x \psi.
\end{equation}
On the other hand one has
\begin{equation}\label{suiteV}
G(\eta)\psi= \big(\partial_y \Phi - \nabla_x \eta \cdot\nabla_x \Phi \big)\arrowvert_\Sigma.
\end{equation}
It follows from \eqref{debutV} and \eqref{suiteV} that we have
\begin{equation}\label{finV}
\nabla_x \Phi\arrowvert_\Sigma = V, \quad \partial_y \Phi\arrowvert_\Sigma = B.
\end{equation}
According to \eqref{eq:Q}, $P = Q - gy - \frac{1}{2}\vert \nabla_{x,y} \Phi \vert^2$ has a trace on $\Sigma$ and $P\arrowvert_\Sigma =0.$
\subsection{The regularity results}\label{S:pressure}
The main steps in the proof of Proposition \ref{theoprinc} are the following.
\begin{lemm}\label{dtphi}
Let $\tilde{u}$ be defined by \eqref{image} and $\kappa$ by \eqref{kappa}. Then for all $t_0 \in I$
the function $(x,y) \mapsto U(t_0,x,y):= \partial_t \tilde{u}(t_0,x,\kappa (t_0,x,y))$ belongs to $H^{1,0}(\Omega(t_0)).$ Moreover there exists a function $\mathcal{F}: \mathbf{R}^+ \to \mathbf{R}^+$ such that $$ \sup_{t\in I} \iint_{\Omega(t)} \vert \nabla_{x,y}U(t,x,y)\vert^2 dx dy \leq \mathcal{F}(\Vert (\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s+\frac{1}{2}}(\mathbf{R}^d))}).$$
\end{lemm}
\begin{lemm}\label{chain}
In the sense of distributions on $\Omega $ we have the chain rule
$$\partial_t u(t,x,y) = \partial_t \tilde{u}(t,x,\kappa(t,x,y))
+ \partial_t \kappa(t,x,y) \partial_z\tilde{u}(t,x, \kappa(t,x,y)).
$$
\end{lemm}
These lemmas are proved in the next paragraph.
\begin{proof}[Proof of Proposition \ref{theoprinc}]
According to \eqref{u} and Lemma \ref{chain} we have
\begin{equation}\label{Dtphi}
\partial_t \Phi(t,x,y) = \partial_t \tilde{u}(t,x,\kappa(t,x,y)) + \underline{w}(t,x,y)
\end{equation}
where
$$\underline{w}(t,x,y) = \partial_t \kappa(t,x,y) \partial_z\tilde{u}(t,x, \kappa(t,x,y)) + \partial_t \underline{\psi}(t,x,y).$$
According to Lemma \ref{dtphi} the first term in the right hand side of \eqref{Dtphi} belongs to $H^{1,0}(\Omega(t)).$ Denoting by $\tilde{\underline{w}} $ the image of $\underline{w}$, if we show that
\begin{equation}\label{underw}
\left\{
\begin{aligned}
&(i)\quad \tilde{\underline{w}} \in H^1(\mathbf{R}^d \times \mathbf{R}),\\
&(ii) \quad \operatorname{supp} \tilde{\underline{w}} \subset \{(x,z)\in \mathbf{R}^d \times (-1,0)\},\\
&(iii) \quad \underline{w}\arrowvert_\Sigma = -g\eta - \frac{1}{2}(B^2 +\vert V \vert^2),
\end{aligned}\right.
\end{equation}
then $\partial_t \Phi$ will be the variational solution of the problem
$$\Delta_{x,y} (\partial_t \Phi) = 0, \quad \partial_t \Phi \arrowvert_\Sigma = -g\eta - \frac{1}{2}(B^2 +\vert V \vert^2).$$
By uniqueness, we deduce from \eqref{eq:Q} that $\partial_t \Phi = -Q,$ which completes the proof of Proposition \ref{theoprinc}. Therefore we are left with the proof of \eqref{underw}.
Recall that $\tilde{\underline{\psi}}(t,x,z) = \chi(z) e^{z\langle D_x \rangle }\psi(t,x).$ Moreover by Lemma \ref{chain} we have
$$ \widetilde{ \partial_t \underline{\psi}} =\Big( \partial_t \tilde{\underline{\psi}} - \frac{\partial_t \rho(t,x,z)}{\partial_z\rho(t,x,z)} \partial_z\tilde{\underline{\psi}}\Big)(t,x,z). $$
Since $\psi \in H^{s+\frac{1}{2}}(\mathbf{R}^d), \partial_t \psi \in H^{s-\frac{1}{2}}(\mathbf{R}^d), \partial_t \eta \in H^{s-\frac{1}{2}}(\mathbf{R}^d)$, the classical properties of the Poisson kernel show that $\partial_t \tilde{\underline{\psi}} $ and $\partial_z\tilde{\underline{\psi}}$ and $\frac{\partial_t \rho }{\partial_z\rho } $ belong to $ H^s(\mathbf{R}^d \times (-1,0))$ therefore to $H^1(\mathbf{R}^d \times (-1,0)) $ since $s>\frac{1}{2} + \frac{d}{2}.$ It follows that the points $(i)$ and $(ii)$ in \eqref{underw} are satisfied by $\widetilde{ \partial_t \underline{\psi}}$. Now according to \eqref{diffeo} $\widetilde{\partial_t \kappa}$ is supported in $\mathbf{R}^d \times (-1,0)$ and it follows from the elliptic regularity that $\widetilde{\partial_t \kappa}\partial_z \tilde{u}$ belongs to $H^1(\mathbf{R}^d \times (-1,0)).$ Let us check now point $(iii)$. Since $\partial_t \eta = G(\eta)\psi$ we have
\begin{equation}\label{trace}
\partial_t \underline{\psi}(t,x,y)\arrowvert_\Sigma = \widetilde{ \partial_t \underline{\psi}} \arrowvert_{z=0} = \partial_t \psi - G(\eta)\psi \cdot \partial_y \underline{\psi}(t,x,y) \arrowvert_\Sigma.
\end{equation}
On the other hand we have
\begin{equation*}
\begin{aligned}
\partial_t \kappa(t,x,y) \partial_z\tilde{u}(t,x, \kappa(t,x,y))\arrowvert_\Sigma &= \partial_t \kappa(t,x,y)\partial_z \rho(t,x, \kappa(t,x,y)) \partial_y {u}(t,x,y)\arrowvert_\Sigma\\
&= -\partial_t \rho(t,x, \kappa(t,x,y)) \partial_y {u}(t,x,y)\arrowvert_\Sigma\\
&= -\partial_t \rho(t,x, \kappa(t,x,y)) \big(\partial_y\Phi(t,x,y) - \partial_y \underline{\psi}(t,x,y) \big)\arrowvert_\Sigma\\
&= -G(\eta)\psi \cdot \big(B- \partial_y \underline{\psi}(t,x,y)\big) \arrowvert_\Sigma.
\end{aligned}
\end{equation*}
So using \eqref{trace} we find
$$ \underline{w}\, \arrowvert_\Sigma = \partial_t \psi - B\, G(\eta)\psi.$$
It follows from the second equation of \eqref{Zaharov} and from \eqref{defi:BV} that
$$ \underline{w}\, \arrowvert_\Sigma = -g\eta -\frac{1}{2}(B^2+ \vert V \vert^2).$$
This proves claim $(iii)$ in \eqref{underw} and completes the proof of Proposition \ref{theoprinc}.
\end{proof}
\subsection{Proof of the Lemmas}
\subsubsection{Proof of Lemma \ref{dtphi}}
Recall (see \eqref{diffeo} and \eqref{lambda}) that we have set
\begin{equation*}\label{lambdaj}
\left\{
\begin{aligned}
\rho(t,x,z) &= (1+z)e^{\delta z \langle D_x \rangle} \eta(t,x) -z \eta_*(x) \quad \text{if } (x,z) \in \tilde{\Omega}_1,\\
\rho(t,x,z) &= z+1+ \eta_*(x) \quad \text{if } (x,z) \in \tilde{\Omega}_2,\\
\Lambda_1(t) &= \frac{1}{\partial_z \rho (t,\cdot)} \partial_z, \quad \Lambda_2(t) = \nabla_x - \frac{\nabla_x \rho(t, \cdot)}{\partial_z \rho(t,\cdot)}\partial_z
\end{aligned}
\right.
\end{equation*}
and that $\kappa_t$ has been defined in \eqref{kappa}.
If we set $\hat{\kappa}_t(x,y) = (x, \kappa(t, x,y))$ then $\hat{\kappa}_t$ is a bijective map from the space $H^{1,0}(\tilde{\Omega})$ (defined as in Proposition \ref{hilbert}) to the space $H^{1,0}(\Omega(t))$. Indeed, near the top boundary ($z\in (-2,0)$) this follows from the classical invariance of the usual space $H^1_0$ under a $W^{1,\infty}$-diffeomorphism, while near the bottom our diffeomorphism is of class $H^\infty$, hence preserves the space $H^{1,0}.$
Now we fix $t_0 \in I$, take $\varepsilon\in \mathbf{R}\setminus\{0\}$ small enough, and set for $t\in I$
\begin{equation}\label{FH}
\left\{
\begin{aligned}
F (t) &= \iint_{\Omega (t)} \nabla_{x,y}u(t,x,y) \cdot \nabla_{x,y}\theta(x,y) dx dy \\
H (t) &= - \iint_{\Omega (t)} \nabla_{x,y}\underline{\psi}(t, x,y) \cdot \nabla_{x,y}\theta(x,y) dx dy
\end{aligned}
\right.
\end{equation}
where $\theta \in H^{1,0}(\Omega(t)) $ is chosen as follows.
In $F(t_{0}+\varepsilon )$ we take
$$\theta_1 (x,y) = \frac{ u (t_0 + \varepsilon,x, y) - \tilde{u}(t_0, x,\kappa (t_0+ \varepsilon, x,y))}{\varepsilon} \in H^{1,0}(\Omega(t_0+ \varepsilon)).$$
In $F(t_0)$ we take
$$\theta_2 (x,y) = \frac{\tilde{u}(t_0 + \varepsilon,x, \kappa(t_0,x,y)) - u(t_0, x,y)}{\varepsilon} \in H^{1,0}(\Omega(t_0)).$$
Then in the variables $(x,z)$ we have
\begin{equation}\label{thetaj}
\begin{aligned}
& \tilde{\theta}_1(x,z) = \theta_1(x,\rho(t_0 +\varepsilon,x,z)) = \frac{ \tilde{u} (t_0 + \varepsilon,x, z) - \tilde{u}(t_0, x,z)}{\varepsilon}\\
& \tilde{\theta}_2(x,z) = \theta_2(x,\rho(t_0,x,z)) = \frac{ \tilde{u} (t_0 + \varepsilon,x, z) - \tilde{u}(t_0, x,z)}{\varepsilon},
\end{aligned}
\end{equation}
so we see that $\tilde{\theta}_1(x,z) =\tilde{\theta}_2(x,z)=:\tilde{\theta} (x,z)$.
It follows from \eqref{egvar} that for all $t\in I$ we have $F (t) = H (t).$ Therefore
$$
J_\varepsilon(t_0) := \frac{F (t_0 + \varepsilon) - F (t_0)}{\varepsilon} = \frac{H (t_0 + \varepsilon) - H (t_0)}{\varepsilon} =: \mathcal{J}_\varepsilon(t_0).
$$
Then after changing variables as in \eqref{diffeo} we obtain
\begin{equation}\label{J=sumK}
\begin{aligned}
J_\varepsilon(t_0) = \frac{1}{\varepsilon}\sum_{j=1}^2 \iint_{\tilde{\Omega} }
\big[&\Lambda_j(t_0+\varepsilon)\tilde{u}(t_0+\varepsilon, x,z)\Lambda_j(t_0+\varepsilon)
\tilde{\theta}(x,z)\partial_z \rho(t_0+ \varepsilon,x,z) \\
-&\Lambda_j(t_0)\tilde{u}(t_0 , x,z)\Lambda_j(t_0)
\tilde{\theta}(x,z) \partial_z\rho(t_0,x,z)\big]dx dz
=:\sum_{j=1}^2 K_{j,\varepsilon}(t_0).
\end{aligned}
\end{equation}
With the notation used in \eqref{lambdaj} we can write,
\begin{equation}\label{champ}
\Lambda_j(t_0+ \varepsilon) - \Lambda_j(t_0) = \beta_{j,\varepsilon}(t_0,x,z) \partial_z, \quad j=1,2.
\end{equation}
Notice that since the function $\rho$ does not depend on $t$ for $z \leq -1$, we have $\beta_{j,\varepsilon} =0$ on this set.
Then we have the following Lemma.
\begin{lemm}\label{estbeta}
There exists a non decreasing function $\mathcal{F}:\mathbf{R}^+ \to \mathbf{R}^+$ such that
$$
\sup_{t_0 \in I}\iint_{\tilde{\Omega} }\vert \beta_{j,\varepsilon}(t_0,x,z)\vert^2 dxdz
\leq \varepsilon^2 \mathcal{F}\big(\Vert (\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
$$
\end{lemm}
\begin{proof}
In the set $\{(x,z): x\in \mathbf{R}^d, z \in (-1,0)\}$ the most delicate term to deal with is
$$
(1) := \frac{\nabla_x \rho }{ \partial_z \rho} (t_0+ \varepsilon,x,z) - \frac{\nabla_x \rho }{ \partial_z \rho}(t_0,x,z)= \varepsilon \int_0^1 \partial_t\Big( \frac{\nabla_x \rho}{\partial_z \rho} \Big)(t_0 + \varepsilon \lambda,x,z)
\,d\lambda.
$$
We have
$$
\partial_t\Big( \frac{\nabla_x \rho}{\partial_z \rho} \Big)
=\frac{\nabla_x \partial_t \rho}{\partial_z \rho}
-\frac{(\partial_z \partial_t \rho)\nabla_x \rho}{(\partial_z \rho)^2}.
$$
First of all we have $\partial_z \rho \geq \frac{h}{3}.$
Now since $s-\frac{1}{2} >\frac{d}{2} \geq \frac{1}{2},$
we can write
\begin{equation}\label{nablarho}
\begin{aligned}
\Vert\nabla_x \partial_t \rho(t,\cdot) \Vert_{ L^2(\tilde{\Omega}_1) }
&\leq 2\Vert e^{\delta z \vert D_x \vert}G(\eta)\psi(t,\cdot) \Vert_{L^2((-1,0),H^1(\mathbf{R}^d))} \\
& \leq C\Vert G(\eta)\psi(t,\cdot)\Vert_{H^\frac{1}{2} (\mathbf{R}^d)}\leq C\Vert G(\eta)\psi(t,\cdot)\Vert_{H^{s- \frac{1}{2}} (\mathbf{R}^d)}\\
&\leq C\big(\Vert (\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
\end{aligned}
\end{equation}
On the other hand we have
\begin{equation*}
\begin{aligned}
\Vert \nabla_x \rho(t,\cdot) \Vert_{L^\infty(\tilde{\Omega}_1)}
&\leq C\Vert e^{\delta z \vert D_x \vert}\nabla_x \eta(t,\cdot) \Vert_{L^\infty((-1,0), H^{s-\frac{1}{2}}(\mathbf{R}^d))}
+ \Vert \nabla_x \eta_*\Vert_{L^\infty(\mathbf{R}^d)}\\
&\leq C' \Vert \eta(t,\cdot) \Vert_{H^{s+\frac{1}{2}}(\mathbf{R}^d)}
+\Vert \nabla_x \eta_*\Vert_{L^\infty(\mathbf{R}^d)} \leq C''\Vert \eta(t,\cdot) \Vert_{H^{s+\frac{1}{2}}(\mathbf{R}^d)}
\end{aligned}
\end{equation*}
by \eqref{eta}. Finally, since
$$
\partial_z \partial_t \rho
= e^{\delta z \vert D_x \vert}G(\eta) \psi
+ (1+z)\delta e^{\delta z \vert D_x \vert}\vert D_x \vert G(\eta) \psi,
$$
we have as in \eqref{nablarho}
\begin{equation}\label{estrho2}
\Vert\partial_z \partial_t \rho(t,\cdot) \Vert_{ L^2(\tilde{\Omega})} \leq C\big(\Vert (\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d) \times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
\end{equation}
Then the Lemma follows.
\end{proof}
Thus we can write for $j=1,2,$
\begin{equation}\label{J1=}
\begin{aligned}
K_{j,\varepsilon}(t_0) &= \sum_{k=1}^4\iint_{\tilde{\Omega}}A^k_{j,\varepsilon}(t_0,x,z)dx dz,\\
A^1_{j,\varepsilon}(t_0,\cdot)
&= \Lambda_j(t_0)\Big[\frac {\tilde{u}(t_0+\varepsilon,\cdot)-\tilde{u}(t_0,\cdot)}{\varepsilon}\Big]
\Lambda_j(t_0)\tilde{\theta}(\cdot)\partial_z\rho(t_0,\cdot),\\
A^2_{j,\varepsilon}(t_0,\cdot)
&=\Big[\frac{\Lambda_j(t_0+\varepsilon) -\Lambda_j(t_0)}{\varepsilon}\Big]
\tilde{u}(t_0,\cdot)\Lambda_j(t_0)\tilde{\theta}(\cdot)\partial_z \rho(t_0,\cdot),\\
A^3_{j,\varepsilon}(t_0,\cdot)
&=\Lambda_j(t_0+\varepsilon)\tilde{u}(t_0+\varepsilon,\cdot)
\Big[\frac{\Lambda_j(t_0+\varepsilon) -\Lambda_j(t_0)}{\varepsilon}\Big]\tilde{\theta}(\cdot)\partial_z\rho(t_0,\cdot),\\
A^4_{j,\varepsilon}(t_0,\cdot)
&=\Lambda_j(t_0+\varepsilon)\tilde{u}(t_0+\varepsilon,\cdot)
\Lambda_j(t_0+\varepsilon)\tilde{\theta}(\cdot)
\Big[\frac{\partial_z\rho(t_0+\varepsilon, \cdot) - \partial_z\rho(t_0,\cdot)}{\varepsilon}\Big].\\
\end{aligned}
\end{equation}
In what follows, to simplify notation, we set $X=(x,z)\in \tilde{\Omega}$ and recall that $\Lambda_j(t_0+\varepsilon) -\Lambda_j(t_0)=0$ when $z\leq -1.$
First of all, using the lower bound $\partial_z\rho(t_0,X) \geq \frac{h}{3}$,
we obtain
\begin{equation}\label{A1}
\iint_{\tilde{\Omega} }A^1_{j,\varepsilon}(t_0,X)dX \geq \frac{h}{3} \left\lVert \Lambda_j(t_0)\Big[\frac{\tilde{u}(t_0 + \varepsilon,\cdot) - \tilde{u}(t_0,\cdot)}{\varepsilon} \Big] \right\rVert^2_{L^2(\tilde{\Omega} )}.
\end{equation}
Now it follows from \eqref{champ} that
$$\left\lvert \iint_{\tilde{\Omega}}A^2_{j,\varepsilon}(t_0,X)dX \right\rvert
\leq \sup_{t\in I}\Big\Vert \frac{\beta_{j,\varepsilon}}{\varepsilon}\Big\Vert_{L^2(\tilde{\Omega})}
\sup_{t\in I}\Vert \partial_z \tilde{u}(t,\cdot)\Vert_{L^\infty_z((-1,0), L^\infty(\mathbf{R}^d))}\Vert
\Lambda_j(t_0)\tilde{\theta}\Vert_{L^2(\tilde{\Omega} )}.
$$
Since $s-\frac{1}{2}>\frac{d}{2}$ the elliptic regularity theorem shows that
\begin{equation}\label{regtheo}
\begin{aligned}
\sup_{t\in I}\Vert \partial_z \tilde{u}(t,\cdot)\Vert_{L^\infty_z((-1,0), L^\infty(\mathbf{R}^d))}&\leq \sup_{t\in I}\Vert \partial_z \tilde{u}(t,\cdot)\Vert_{L^\infty_z((-1,0), H^{s-\frac{1}{2}}(\mathbf{R}^d))}\\
&\leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
\end{aligned}
\end{equation}
Using Lemma \ref{estbeta} we deduce that
\begin{equation}\label{A2}
\left\lvert \iint_{\tilde{\Omega}}A^2_{j,\varepsilon}(t_0,X)dX \right\rvert
\leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)
\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)\Vert\Lambda_j(t_0)\tilde{\theta}\Vert_{L^2(\tilde{\Omega})}.
\end{equation}
Now write
\begin{multline*}
\iint_{\tilde{\Omega} }A^3_{j,\varepsilon}(t_0,X)dX \\
= \iint_{\tilde{\Omega} }\Lambda_j(t_0+ \varepsilon)\tilde{u}(t_0+\varepsilon,X)
\beta_{j,\varepsilon}(t_0,X)\partial_z\tilde{\theta}(X)\partial_z\rho(t_0,X)dX.
\end{multline*}
By elliptic regularity, $\Lambda_j(t)\tilde{u}$ is bounded in $L^\infty_{t,x,z}$ by a function depending only on
$$
\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}.
$$
Therefore we can write
\begin{equation}\label{A3}
\left\lvert \iint_{\tilde{\Omega}}A^3_{j,\varepsilon}(t_0,X)dX \right\rvert
\leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)
\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)\Vert\partial_z\tilde{\theta}\Vert_{L^2(\tilde{\Omega})}.
\end{equation}
Since
$$
\frac{\partial_z\rho(t_0+\varepsilon, x,z) - \partial_z\rho(t_0,x,z)}{\varepsilon}
=\int_0^1 \partial_t \partial_z \rho(t_0+ \lambda\varepsilon, x,z) d\lambda
$$
(which vanishes when $z\leq -1$), we find using \eqref{regtheo} and \eqref{estrho2}
\begin{equation}\label{A4}
\left\lvert \iint_{\tilde{\Omega}}A^4_{j,\varepsilon}(t_0,X)dX \right\rvert \leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)\Vert\partial_z\tilde{\theta}\Vert_{L^2(\tilde{\Omega})}.
\end{equation}
Now we consider
\begin{equation}\label{estdH}
\mathcal{J}_\varepsilon = \frac{H (t_0+\varepsilon)-H (t_0)}{\varepsilon}.
\end{equation}
We make the change of variable $(x,z)\to (x,\rho(t_0,x,z))$
in the integral and we decompose the new integral as in \eqref{J=sumK},
\eqref{J1=}. This gives, with $X=(x,z)$,
$$
\mathcal{J}_\varepsilon = \sum_{j=1}^2\mathcal{K}_{j,\varepsilon}(t_0),
\quad \mathcal{K}_{j,\varepsilon}(t_0)
= \sum_{k=1}^4\iint_{\tilde{\Omega}}\mathcal{A}^k_{j,\varepsilon}(t_0,X)dX,
$$
where $\mathcal{A}^k_{j,\varepsilon}$ has the same form as $-A^k_{j,\varepsilon}$ in \eqref{J1=}
except that $\tilde{u}$ is replaced by $\underline{\tilde{\psi}}.$
Recall that $\underline{\tilde{\psi}}(t,x,z) = \chi(z)e^{z\vert D_x\vert} \psi(t,x).$ Now we have
\begin{equation*}
\begin{aligned}
\Vert \Lambda_j \partial_t \underline{\tilde{\psi}}\Vert_{L^\infty(I, L^2(\tilde{\Omega}))}
&\leq \mathcal{F}\big(\Vert \eta\Vert_{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)\Vert
\partial_t \underline{\tilde{\psi}}\Vert_{L^\infty(I, L_z^2((-1,0),H^1(\mathbf{R}^d)))} \\
&\leq \mathcal{F}\big(\Vert \eta\Vert_{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)
\Vert \partial_t\psi\Vert_{L^\infty(I, H^\frac{1}{2} (\mathbf{R}^d))}\\
&\leq \mathcal{F}\big(\Vert \eta\Vert_{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)
\Vert \partial_t\psi\Vert_{L^\infty(I, H^{s-\frac{1}{2}} (\mathbf{R}^d))}
\end{aligned}
\end{equation*}
since $s-\frac{1}{2} \geq \frac{1}{2}.$
Using the equation \eqref{Zaharov} on $\psi$,
and the fact that $H^{s-\frac{1}{2}}(\mathbf{R}^d)$ is an algebra we obtain
$$ \Vert \Lambda_j \partial_t \underline{\tilde{\psi}}\Vert_{L^\infty(I, L^2(\tilde{\Omega}))} \leq \mathcal{F}\big(\Vert (\eta,\psi)\Vert_{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).$$
It follows that we have
\begin{equation}\label{estAA1}
\left\lvert \iint_{\tilde{\Omega}}\mathcal{A}^1_{j,\varepsilon}(t_0,X)dX\right\rvert \leq \mathcal{F}\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big) \Vert\Lambda_j(t_0)\tilde{\theta}\Vert_{L^2(\tilde{\Omega})}.
\end{equation}
Now since
\begin{equation*}
\begin{aligned}
\Vert \Lambda_j(t_0)\underline{\tilde{\psi}} (t,\cdot)\Vert_{L^2_z((-1,0),L^\infty(\mathbf{R}^d))}
&\leq \mathcal{F}(\Vert \eta\Vert _{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))})\Vert
\underline{\tilde{\psi}}(t_0,\cdot)\Vert_{L^2_z((-1,0),H^{\frac{d}{2}+\varepsilon}(\mathbf{R}^d))}\\
&\leq \mathcal{F}(\Vert \eta\Vert _{L^\infty(I, H^{s+\frac{1}{2}}(\mathbf{R}^d))})\Vert\psi\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d))}
\end{aligned}
\end{equation*}
we can use the same estimates as in \eqref{A2}, \eqref{A3}, \eqref{A4} to bound the terms $\mathcal{A}_{j,\varepsilon}^k$ for $k = 2,3,4$. We obtain finally
\begin{equation}\label{H}
\left\lvert \frac{H (t_0 + \varepsilon) - H (t_0)}{\varepsilon}\right\rvert \leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big)\sum_{j=1}^2 \Vert\Lambda_j(t_0)\tilde{\theta}\Vert_{L^2(\tilde{\Omega})}.
\end{equation}
Summing up using \eqref{J1=}, \eqref{A1}, \eqref{A2},\eqref{A3},\eqref{A4}, \eqref{H} we find that, setting
$$ \tilde{U}_\varepsilon(t_0,\cdot) = \frac{\tilde{u}(t_0 + \varepsilon,\cdot) - \tilde{u}(t_0,\cdot)}{\varepsilon},$$
there exists a non decreasing function $\mathcal{F}:\mathbf{R}^+ \to \mathbf{R}^+$ such that for all $\varepsilon>0 $
$$\sum_{j=1}^2 \sup_{t_0 \in I} \Vert \Lambda_j(t_0) \tilde{U}_\varepsilon(t_0,\cdot)\Vert_{L^2(\tilde{\Omega})} \leq \mathcal{F} \big( \Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).$$
Since $\tilde{u}(t,\cdot) \in H^{1,0}(\tilde{\Omega})$, the Poincar\'e inequality ensures that
\begin{equation}\label{Uborne}
\left\lVert\tilde{U}_\varepsilon(t_0,\cdot)\right\rVert_{L^2(\tilde{\Omega} )} \leq C\big(\Vert(\eta, \psi)\Vert_{L^\infty(I,H^{s+\frac{1}{2}}(\mathbf{R}^d)\times H^{s+\frac{1}{2}}(\mathbf{R}^d))}\big).
\end{equation}
It follows that we can extract a subsequence $(\tilde{U}_{\varepsilon_k})$ which converges in the weak-star topology of $(L^\infty \cap C^0)(I, H^{1,0}(\tilde{\Omega}))$.
But this sequence converges in $\mathcal{D}'(I\times \tilde{\Omega})$
to $\partial_t \tilde{u}.$ Therefore $\partial_t \tilde{u} \in C^0(I, H^{1,0}(\tilde{\Omega}))$ and this implies that $\partial_t \tilde{u}(t_0, \cdot, \kappa(t_0, \cdot,\cdot))$ belongs to $H^{1,0}(\Omega(t_0))$, which completes the proof of Lemma \ref{dtphi}.
\subsubsection{Proof of Lemma \ref{chain}}
Let $ \varphi \in C_0^\infty(\Omega)$ and set
\begin{equation}\label{I+J}
\begin{aligned}
&v_\varepsilon(t,x,y) = \frac{1}{\varepsilon}[\tilde{u}(t+\varepsilon,x,\kappa(t+\varepsilon,x,y)) - \tilde{u}(t +\varepsilon ,x,\kappa(t,x,y))], \\
&w_\varepsilon(t,x,y) = \frac{1}{\varepsilon}[\tilde{u}(t+\varepsilon,x,\kappa(t,x,y)) - \tilde{u}(t ,x,\kappa(t,x,y))],\\
&J_\varepsilon = \iint_{\Omega}v_\varepsilon(t,x,y) \varphi(t,x,y) dt dx dy, \quad K_\varepsilon = \iint_{\Omega}w_\varepsilon(t,x,y) \varphi(t,x,y) dt dx dy,\\
&I_\varepsilon = J_\varepsilon + K_\varepsilon.
\end{aligned}
\end{equation}
Let us consider first $K_\varepsilon.$ In the integral in $y$ we make the change of variable $\kappa(t,x,y) = z \Leftrightarrow y = \rho(t,x,z).$ Then setting $\tilde{\varphi}(t,x,z) = \varphi(t,x,\rho(t,x,z))$ and $X = (x,z) \in \tilde{\Omega}$ we obtain
$$K_\varepsilon = \iint_I \int_{\tilde{\Omega}}\frac{ \tilde{u}(t+\varepsilon,X) - \tilde{u}(t ,X)}{\varepsilon} \tilde{\varphi}(t,X) \partial_z \rho(t,X)dt dX.$$
Since $\rho \in C^1(I \times \tilde{\Omega})$ we have $\tilde{\varphi} \cdot \partial_z \rho \in C_0^0(I\times \tilde{\Omega}).$ Now we know that the sequence $\tilde{U}_\varepsilon = \frac{ \tilde{u}(\cdot+\varepsilon,\cdot) - \tilde{u}(\cdot,\cdot)}{\varepsilon}$ converges in $\mathcal{D}'(I\times\tilde{\Omega})$ to $\partial_t\tilde{u}.$ Using this fact, approximating $\tilde{\varphi} \cdot \partial_z \rho$ by a sequence in $C_0^\infty(I\times\tilde{\Omega})$, and using \eqref{Uborne}, we deduce that
$$
\lim_{\varepsilon \to 0} K_\varepsilon = \int_I\iint_{\tilde{\Omega}} \partial_t \tilde{u}(t,X)\tilde{\varphi}(t,X) \partial_z \rho(t,X)dt dX.$$
Coming back to the $(t,x,y)$ variables we obtain
\begin{equation}\label{K}
\lim_{\varepsilon \to 0} K_\varepsilon = \iint_{ \Omega} \partial_t \tilde{u}(t,x,\kappa(t,x,y)) \varphi (t,x,y) dt dx dy.
\end{equation}
Let us now consider $J_\varepsilon.$ We split it into two integrals; in the first we set $\kappa(t+\varepsilon,x,y) =z$, in the second we set $\kappa(t ,x,y) =z.$ With $X=(x,z)\in \tilde{\Omega}$ we obtain
$$J_\varepsilon = \frac{1}{\varepsilon}\int_I \iint_{\tilde{\Omega}_1 }\tilde{u}(t + \varepsilon,X) \Big( \int_0^1 \frac{d}{d\sigma}\big\{\varphi(t,x,\rho(t+\varepsilon \sigma,X))\partial_z\rho(t+\varepsilon \sigma,X)\big\}d\sigma\Big) dtdX.$$
Differentiating with respect to $\sigma$ we easily see that
$$J_\varepsilon = \int_I \iint_{\tilde{\Omega}}\tilde{u}(t+\varepsilon,X) \frac{\partial}{\partial z}\Big( \int_0^1 \partial_t\rho(t+\varepsilon \sigma,X) \varphi(t,x,\rho(t+\varepsilon \sigma,X))d\sigma\Big) dt dX.$$
Since $\tilde{u}$ is continuous in $t$ with values in $L^2(\tilde{\Omega})$, $\partial_t\rho$ is continuous in $(t,x,z)$ and $\varphi \in C_0^\infty$, we can pass to the limit and obtain
$$\lim_{\varepsilon \to 0}J_\varepsilon = \int_I \iint_{\tilde{\Omega}_1}\tilde{u}(t ,X) \frac{\partial}{\partial z}\Big( \partial_t \rho(t,X) \varphi(t,x,\rho(t ,X)) \Big) dt dX.$$
Now we can integrate by parts. Since, thanks to $\varphi,$ we have compact support in $z$ we obtain
$$\lim_{\varepsilon \to 0}J_\varepsilon = - \int_I \iint_{\tilde{\Omega}}\partial_z \tilde{u}(t ,X)\partial_t \rho(t,X) \varphi(t,x,\rho(t ,X)) dt dX.$$
Now since
$$\partial_t \rho(t,X) = -\partial_t \kappa(t,x,y)\partial_z \rho(t,x,z)$$
setting in the integral in $z$, $\rho(t,X) = y$ we obtain
\begin{equation}\label{Jeps}
\lim_{\varepsilon \to 0}J_\varepsilon = \iint_{\Omega}\partial_z \tilde{u}(t ,x,\kappa(t,x,y)) \partial_t\kappa(t,x,y) \varphi(t,x,y) dt dx dy.
\end{equation}
Then Lemma \ref{chain} follows from \eqref{I+J}, \eqref{K} and \eqref{Jeps}.
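The chain rule of Lemma \ref{chain} can be illustrated numerically in the same hedged spirit: in the sketch below (not part of the paper), $\rho$ is our own arbitrary choice, picked so that its inverse $\kappa$ in $z$ is available in closed form, and both sides of the identity are compared at a sample point by finite differences.

```python
import math

# Illustrative check (not from the paper) of the chain rule
#   d_t u(t,x,y) = d_t utilde(t,x,kappa) + d_t kappa(t,x,y) * d_z utilde(t,x,kappa),
# where utilde(t,x,z) = u(t,x,rho(t,x,z)) and kappa inverts rho in z.

def rho(t, x, z):               # arbitrary choice, explicitly invertible in z
    return z * (1.0 + 0.2 * math.sin(x + t))

def kappa(t, x, y):             # inverse: rho(t,x,kappa(t,x,y)) = y
    return y / (1.0 + 0.2 * math.sin(x + t))

def u(t, x, y):                 # arbitrary smooth test function
    return math.sin(t) * math.exp(y) + x * y

def utilde(t, x, z):
    return u(t, x, rho(t, x, z))

def d(g, i, args, h=1e-6):
    """Central finite difference of g in its i-th argument."""
    a = list(args)
    a[i] += h
    up = g(*a)
    a[i] -= 2 * h
    dn = g(*a)
    return (up - dn) / (2 * h)

t0, x0, y0 = 0.5, 1.1, -0.3
z0 = kappa(t0, x0, y0)

lhs = d(u, 0, (t0, x0, y0))
rhs = (d(utilde, 0, (t0, x0, z0))
       + d(kappa, 0, (t0, x0, y0)) * d(utilde, 2, (t0, x0, z0)))

assert abs(lhs - rhs) < 1e-6
```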
\addcontentsline{toc}{section}{Bibliography}
\end{document}
\begin{document}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\title{Optimal harvesting and spatial patterns in a semi arid
vegetation system
}
\author{Hannes Uecker\\ \small
Institut f\"ur Mathematik, Universit\"at Oldenburg,\\
\small D26111 Oldenburg, [email protected]}
\normalsize
\maketitle
\begin{abstract} We consider an infinite time horizon spatially distributed
optimal harvesting
problem for a vegetation and soil water reaction diffusion system,
with rainfall as the main external parameter.
By Pontryagin's maximum principle we derive
the associated four component canonical system, and numerically
analyze this and hence
the optimal control problem in two steps.
First we numerically compute a rather rich bifurcation structure of
{\em flat} (spatially homogeneous) and {\em patterned} canonical
steady states
(FCSS and PCSS, respectively), in 1D and 2D. Then we compute
time dependent solutions of the canonical system that connect to
some FCSS or PCSS. The method is efficient in dealing with
non-unique canonical steady states, and thus also with
multiple local maxima of the objective function.
It turns out that over wide parameter regimes the FCSS, i.e., spatially uniform
harvesting, are not optimal. Instead,
controlling the system to a PCSS yields a higher profit.
Moreover, compared to (a simple model of) private optimization, the social
control gives a higher yield, and vegetation survives
for much lower rainfall. {In addition, the computation of the optimal (social)
control gives an optimal tax to incorporate into the private optimization. }
\end{abstract}
\noindent\def\ds{\displaystyle}\def\lap{\Delta}\def\al{\alpha}
{\bf Keywords:} distributed optimal control, bioeconomics, optimal harvesting\\
{\bf MSC:} 49J20, 49N90, 35B32
\section{Introduction and main results}
Vegetation patterns such as spots and stripes appear in ecosystems all over the
world, in particular in so called semi arid areas \cite{DBC08}.
Semi arid here means
that there is enough water to support {\em some}
vegetation, but not enough water for a dense homogeneous vegetation.
Such systems are often modeled in the form of two or more component
reaction--diffusion systems
for plant and water densities, with rainfall $R$
as the main bifurcation parameter, and the patterns
are attributed
to a positive feedback loop between plant density and water infiltration.
Starting with a homogeneous equilibrium
of high plant density for large $R$, stationary
spatial patterns appear as $R$ is lowered, often following a
universal sequence \cite{GRS14}. This may lead to
catastrophic, sometimes irreversible, regime shifts, where
a patterned vegetation suddenly dies out completely, leaving
a desert behind as $R$ drops below a certain
threshold, or as some other parameter such as harvesting or
grazing by herbivores
is varied. There is a rather large number
of specific models, each agreeing with field observations in
various parameter regimes, see e.g., \cite{meron01,riet01}, and
\cite{riet04} for a review,
or \cite{SBB09,meron13} and the references therein for further reviews
including more recent work on
early warning signals for desertification and other critical transitions,
usually associated with subcritical bifurcations.
A so far much less studied problem is the spatially distributed
dynamic optimal control (OC) of vegetation systems
by choosing harvesting or grazing by herbivores
in such a space and time dependent way that some economic objective function
is maximized.
Following \cite{BX10} we consider an infinite
time horizon OC problem for a reaction--diffusion system for
vegetation biomass and soil water, which is roughly based on
\cite{riet01}. Related optimal control problems have also been considered
in \cite{BX08,Xe10,Brocketal2013}, with the focus on the
so called ``Optimal Diffusion Instability'' of flat
canonical steady states (FCSS, see \S\ref{pssec} for OC related
definitions), which
similar to a Turing bifurcation yields the bifurcation of
spatially {\em patterned} canonical steady states (PCSS) from FCSS.
However, in these works, and
in \cite{BX10}, only few PCSS have been actually calculated numerically,
and no canonical paths, i.e., time dependent solutions of the canonical
system.
Finite time horizon cases have recently been considered in
\cite{CP12, ACKT13, Apre14},
mostly focusing on theoretical aspects, and on
problems with control constraints, which altogether gives a rather
different setting than ours; see also Remark \ref{lrem}.
Here we apply the numerical framework from \cite{GU15, U15p2} to the
so called canonical system for the states and co--states,
derived via Pontryagin's Maximum Principle. First we
calculate (branches) of canonical steady states (CSS), including
branches of PCSS, and in a second step
canonical paths connecting to some CSS. Our main result is that
in wide ranges of parameters, in particular for low rainfall $R$,
FCSS and their canonical paths are not optimal, and that controlling
the system to a PCSS yields a higher profit.
{Thus, this seems to be the first example of a bio-economically
motivated optimal control problem, where the global bifurcation structure
of CSS has been computed in some detail, showing multiple CSS at
fixed parameter values and with dominant non--spatially homogeneous
steady states, and where moreover canonical paths to such PCSS have
been computed, with significant gains in welfare.}
We also compare these results to results for the same system
with (initially) no external control, i.e., the system with private
optimization, and find that the optimal control significantly
increases the profit, and supports vegetation at significantly
lower rainfall levels, {which again has important welfare implications.
Remarkably, in our system the co-state
of the vegetation can be identified with
a tax for the private optimization problem, and thus,
by solving the canonical system we find an optimal space and time
dependent tax for the private optimization problem.}
A standard reference on ecological economics or
``Bioeconomics'' is \cite{Cl90}, including a very readable
account, and applications, of Pontryagin's Maximum Principle{} in the context of
ODE models; see also \cite{LW07}.
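For readers unfamiliar with canonical systems, the following toy illustration (ours, not taken from \cite{Cl90} or the model below) shows the structure in the simplest ODE setting: for $\max \int_0^\infty e^{-rt}\ln h\,dt$ subject to $\dot x = x(1-x)-h$, the current-value Hamiltonian $H=\ln h+\lambda\,(x(1-x)-h)$ gives $h=1/\lambda$ from the maximum condition, and a two component state/co-state canonical system whose steady state is the ODE analogue of the CSS studied in this paper.

```python
# Toy illustration (not the paper's vegetation model): infinite horizon
# harvesting  max \int e^{-r t} ln h dt   s.t.   x' = x(1-x) - h.
# The maximum condition gives h = 1/lam, and the canonical system reads
#   x'   = x(1-x) - 1/lam,
#   lam' = r*lam - lam*(1 - 2*x).
r = 0.1                           # discount rate (arbitrary choice)

# Canonical steady state: lam' = 0 forces 1 - 2x = r; x' = 0 forces h = x(1-x).
x_css = (1.0 - r) / 2.0
h_css = x_css * (1.0 - x_css)
lam_css = 1.0 / h_css

def canonical_rhs(x, lam):
    """Right-hand side of the canonical (state/co-state) system."""
    return (x * (1.0 - x) - 1.0 / lam,
            r * lam - lam * (1.0 - 2.0 * x))

dx, dlam = canonical_rhs(x_css, lam_css)
assert abs(dx) < 1e-12 and abs(dlam) < 1e-12   # stationarity check
```

In the PDE setting of this paper the same construction produces a four component canonical system, and the flat steady state above is replaced by the FCSS/PCSS branches computed numerically.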
A review of management rules for semi arid grazing systems including
comparison to real data, and making plain the importance of such rules,
is given in \cite{QB12}. However, the
(family of) models discussed there consist of discrete time
evolutions without spatial dependence, and with focus on the
rainfall $R$ as a time--dependent stochastic parameter; here we
consider a deterministic PDE model. Thus comparison between
the two (classes of) models is difficult, but it would be interesting
to include spatial dependence into the model in \cite{QB12}.
Finally, in, e.g., \cite{Neu03}, \cite{DL09},
stationary
spatial OC problems for a fishery model are considered, including
numerical simulations, which correspond to our calculation of
canonical steady states for our model \reff{oc1} below.
The results of \cite{Neu03, DL09} show that for their models it is
{\em economically optimal} to provide ``no--take'' marine reserves.
This is similar to our finding of {\em optimal} patterned
canonical steady states, but
here we go beyond the steady case with the computation of optimal paths.
This in particular tells us how to
dynamically control the system to an (at least locally)
optimal steady state.
Moreover, even for the steady states there are important
technical differences between the OC problems considered in
\cite{Neu03, DL09} with {\em unique} positive canonical steady states,
and the OC problems considered here, which in relevant parameter regimes
are distinguished by having {\em many} canonical steady states, of which several
can be locally stable (see Remark \ref{defrem} for definitions, and, again,
Remark \ref{lrem}).
{Thus, management rules for our system are considerably
more complicated than those in, e.g., \cite{Neu03,DL09}, as they
must take the different CSS into account, and, given an initial state,
must decide to which CSS's domain of
attraction it belongs. Partly to illustrate this
point, we also compute a Skiba (or indifference) point \cite{Skiba78}
between different CSS in \S\ref{pskibasec}. }
In the remainder of this Introduction we explain the model and the use
of Pontryagin's Maximum Principle{} to derive the canonical system (\S\ref{pssec}), explain
the basic idea of the numerical method (\S\ref{sssec}), and summarize
the results (\S\ref{srsec}). In \S\ref{rsec} we present the
quantitative results, and in \S\ref{dsec} we give a
brief further discussion.
The method has been implemented in {\tt Matlab} as an ``add-on'' package
{\tt p2pOC} to the continuation and bifurcation software {\tt pde2path}
\cite{p2pure}, and
the {\tt Matlab} functions and scripts to run (most of) the simulations, the
underlying libraries, manuals of the software, and some more demos
can be downloaded at \cite{p2phome}. \\[3mm]
\begin{minipage}{\textwidth}}\def\emip{\end{minipage}
{\bf Acknowledgment.} I thank D. Gra\ss, ORCOS Wien,
for valuable comments on the economic terminology, and the anonymous
referees for valuable questions and comments on the first version of
the manuscript.
\emip
\subsection{Problem setup}\label{pssec}
In dimensionless variables the model for vegetation and soil water
with social optimization from \cite{BX10} is as follows.
Let $\Om\subset{\mathbb R}^d$ be a one--dimensional (1D) or two--dimensional (2D)
bounded domain,
$v=v(x,t)$ the vegetation density at time $t\ge 0$ and space $x\in\Om$,
$w=w(x,t)$ the soil water saturation,
\bce
$E=E(x,t)$ the harvesting
effort (control), $H(v,E)=v^\al E^{1-\al}$ the harvest, and
\ece
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{j0}
J(v_0,w_0,E)=\frac 1 {|\Om|}
\int_0^\infty\er^{-\rho t}\int_\Om J_c(v(x,t),E(x,t))\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} t
}
the (spatially averaged) objective function, where $\rho>0$ is the discount
rate, and
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{
J_c(v,E)=pH(v,E)-cE
}
is the (local) current value profit. This profit depends on the price $p$,
the costs $c$ for harvesting, and $v$, $E$ in a classical Cobb--Douglas form
with elasticity parameter $0<\al<1$.
The problem reads
\begin{subequations}\label{oc1}
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}al{
V(v_0,w_0)&=\max_{E(\cdot,\cdot)} J(v_0,w_0,E), \quad\text{where}\\
\pa_t v&=d_1\Delta v+[gwv^\eta-d(1+\del v)]v-H(v,E),\\
\pa_t w&=d_2\Delta w+R(\beta+\xi v)-(r_u v+r_w)w, \\
(v,w)|_{t=0}&=(v_0,w_0).
}
\end{subequations}
The parameters and
default values are explained in Table \ref{tab1}, together with
brief comments on the terms appearing in \reff{oc1}. One essential
feature of (\ref{oc1}b,c) is the positive feedback loop between
vegetation $v$ and water $w$. Clearly, the vegetation
growth rate $gwv^\eta$ increases with $w$, but the vegetation
$v$ also has a positive effect on water infiltration,
for instance due to loosened soil, modeled by
$R\xi v$ in (\ref{oc1}c).
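To get a quantitative feel for this feedback, the spatially homogeneous steady states of the uncontrolled state equations (\ref{oc1}b,c) (i.e., $E=0$, hence $H=0$) can be found by eliminating $w$ and solving a scalar equation for $v$. The following Python sketch is purely illustrative (it is not part of the {\tt p2pOC} package; the bracket $[500,3000]$ and the choice $R=60$ are ours), using the Table \ref{tab1} defaults:

```python
from scipy.optimize import brentq

# Table 1 defaults (dimensionless); R = 60 and E = 0 (no harvesting)
g, eta, d, delta = 0.001, 0.5, 0.03, 0.005
beta, xi, r_u, r_w, R = 0.9, 0.001, 0.01, 0.1, 60.0

def w_eq(v):
    # soil water balance in (1c): R*(beta + xi*v) = (r_u*v + r_w)*w
    return R*(beta + xi*v)/(r_u*v + r_w)

def veg_balance(v):
    # vegetation balance in (1b): g*w*v^eta - d*(1 + delta*v) = 0
    return g*w_eq(v)*v**eta - d*(1 + delta*v)

# upper (vegetated) flat steady state; the bracket is an illustrative guess
v_star = brentq(veg_balance, 500.0, 3000.0)
w_star = w_eq(v_star)
print(v_star, w_star)
```

Decreasing $R$ in this sketch shrinks and eventually destroys the bracket, mirroring the loss of the vegetated flat state at low rainfall.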
\begin{remark}}\def\erem{\end{remark}\label{grazrem}{\rm In \cite{BX10}, $E$ is also referred to as
the grazing by herbivores, and \reff{oc1} is called a semi arid
grazing system. This seems somewhat oversimplified, because from a practical
point of view, controlling in detail the movement and grazing behavior
of, e.g., cattle, certainly is a more complicated problem than
controlling genuine harvesting. Thus, henceforth we stick to calling $E$
the harvesting effort.}
\mbox{$\rfloor$}\erem
We complement
\reff{oc1} with homogeneous Neumann boundary conditions (BC)
$\pa_\nu v=0$ and $\pa_\nu w=0$ on $\pa\Om$, where $\nu$ is
the outer normal. The discounted
time integral in \reff{j0} is typical for economic (here bioeconomic)
problems, where ``profits now'' weigh more than mid or far
future profits. More specifically, $\rho$ corresponds to a long-term
investment rate. We normalize $J$ in \reff{j0} by $|\Om|$ for easier
comparison between different domains and space dimensions.
In (bio)economics, the control $E$, chosen externally by a ``social
planner'' to maximize the social value $J$, is often called social
control, as opposed to private optimization, see \reff{pde2} below.
Finally,
the $\max$ in (\ref{oc1}a) runs over all {\em admissible} controls $E$;
essentially this means that $E\in L^\infty([0,\infty)\times\Om,{\mathbb R})$,
where moreover implicitly we have the control constraint $E\ge 0$, and
state constraint $v,w\ge 0$ for the associated solutions of (\ref{oc1}b,c).
However, in our simulations these constraints will always
naturally be fulfilled,
i.e., {\em inactive}, see also Remark \ref{lrem}.
{\small
\begin{table*}\bce\begin{tabular}{|p{10mm}|p{92mm}|p{38mm}|}
\hline
param.&meaning&default values\\
\hline
$g,\eta$&coefficient and exponent in plant growth rate
$gwv^\eta$&$g=0.001, \eta=0.5$\\
$d,\del$&coefficients in plant death rate
$d(1+\del v)$&$d=0.03, \del=0.005$\\
$\beta,\xi$&coefficients in the infiltration function $\beta+\xi v$,
&
$\beta=0.9, \xi=0.001$\\
$R$&rainfall parameter, used as main bifurcation parameter&
between 4 and 100\\
$r_u, r_w$&water uptake and evaporation parameters in the water loss
rate $r_u v+r_w$&$r_u=0.01, r_w=0.1$\\
$d_{1,2}$& diffusion constants for vegetation and water (resp.)&
$d_1=0.05$, $d_2=10$\\\hline
$\rho$&discount rate&$\rho=0.03$\\
$c,p,\al$&(economic) param.~in the harvesting
\mbox{$H(v,E)=v^\al E^{1-\al}$} and in the value $J_c(v,E){=}pH(v,E){-}cE$&
$c=1, p=1.1, \al=0.3$\\
&($\rho$ and $p$ used as bifurcation param.~in \S\ref{rhosec})&\\
\hline
\end{tabular}
\caption{Dimensionless parameters and default values in \reff{oc1};
see \cite{BX10} for further comments on the modeling. In particular,
following \cite[\S4.2]{BX10} we have a rather large $d_2$. \label{tab1}}
\ece
\end{table*}
}
Introducing the costates $(\lam,\mu)=(\lam,\mu)(x,t)$ and the
(local current value) Hamiltonian
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}al{
\CH(v,w,\lam,\mu,E)=J_c(v,E)
&+\lam\bigl[d_1\Delta v+(gwv^\eta-d(1+\del v))v-H\bigr]\notag\\
&+\mu\bigl[d_2\Delta w+R(\beta+\xi v)-(r_u v+r_w)w\bigr],
\label{hammax2}
}
by Pontryagin's Maximum Principle{} for $\tilde{\CH}=
\int_0^\infty \er^{-\rho t} \ov{\CH}(t)\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} t$ with the spatial integral
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{fullH}
\ov{\CH}(t)=\int_\Om \CH(v(x,t),w(x,t),\lam(x,t),\mu(x,t),E(x,t))
\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x,
}
an optimal solution $(v,w,\lam,\mu)$ has to solve the canonical system (CS)
\begin{subequations}\label{cs}
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}al{
\pa_t v&=\pa_\lam\CH=d_1\Delta v+[gwv^\eta-d(1+\del v)]v-H,\\
\pa_t w&=\pa_\mu\CH=d_2\Delta w+R(\beta+\xi v)-(r_u v+r_w)w,\\
\pa_t\lam&=\rho\lam-\pa_v\CH=\rho\lam-p\al v^{\al-1}E^{1-\al}-
\lam\bigl[g(\eta+1)wv^\eta-2d\del v-d-\al v^{\al-1}E^{1-\al}]\\\notag
&\qquad\qquad\qquad-\mu(R\xi-r_uw)-d_1\Delta \lam,
\\
\pa_t\mu&=\rho\mu-\pa_w\CH=\rho\mu-\lam g v^{\eta+1}+\mu(r_u v+r_w)-d_2\Delta \mu,
}
where $E=\operatornamewithlimits{argmax}_{\tilde{E}}\CH(v,w,\lam,\mu,\tilde{E})$, which is obtained from
solving $\pa_E \CH=0$ for $E$, giving
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{
E=\left(\frac{(p-\lam)(1-\al)}{c}\right)^{1/\al}v.
}
The costates $(\lam,\mu)$ also fulfill zero flux BC, and
derivatives like $\pa_v \CH$ etc are taken
variationally, i.e., for $\ov{\CH}$. For instance,
for $\Phi(v,\lam)=\lam\Delta v$ we have $\ov{\Phi}(v,\lam)
=\int_\Om \lam\Delta v\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x
=\int_\Om (\Delta\lam)v\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x$ by Gau\ss' theorem, hence
$\delta_v \ov{\Phi}(v,\lam)[h]=\int (\Delta\lam) h\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x$, and
by the Riesz representation theorem we
identify $\delta_v \ov{\Phi}(v,\lam)$ and hence $\pa_v\Phi(v,\lam)$
with the multiplier $\Delta\lam$. Moreover,
we used the so called intertemporal
transversality conditions
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{tcond}
\lim_{t\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow\infty}\er^{-\rho t}\int_\Om v(x,t)\lam(x,t)\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x=0
\text{ and } \lim_{t\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow\infty}\er^{-\rho t}\int_\Om w(x,t)\mu(x,t)\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x=0,
}
which is justified since we are only interested in bounded solutions.
\end{subequations}
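The maximizer (\ref{cs}e) can be sanity-checked numerically: at it, $\pa_E\CH$ vanishes and nearby efforts give a smaller value of the $E$--dependent part $(p-\lam)v^\al E^{1-\al}-cE$ of $\CH$. A small Python check (the values $\lam=0.59$, $v=376.32$ are the rounded FCSS/13 entries of Table \ref{tab3} and serve only as sample data):

```python
# economic parameters from Table 1, sample state/costate values from Table 3
p, c, al = 1.1, 1.0, 0.3
lam, v = 0.59, 376.32

def H_E(E):
    # E-dependent part of the Hamiltonian: (p - lam)*v^al*E^(1-al) - c*E
    return (p - lam)*v**al*E**(1 - al) - c*E

# candidate maximizer from (cs e)
E_star = ((p - lam)*(1 - al)/c)**(1/al)*v

# central-difference check of the first-order condition dH/dE = 0
h = 1e-6*E_star
dHdE = (H_E(E_star + h) - H_E(E_star - h))/(2*h)
print(abs(dHdE))  # numerically zero
```

Since $E\mapsto H_E(E)$ is strictly concave for $\lam<p$, this stationary point is indeed the unique maximizer.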
For convenience, setting
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{
u:=(v,w,\lam,\mu):\Om\times [0,\infty)\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow{\mathbb R}^4,
}
we collect (\ref{cs}a--e) and the boundary conditions into
\begin{subequations}\label{cs2}
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}al{\label{cs2a}
\pa_t u&=-G(u):=\CD\Delta u+f(u), \quad \CD=\bpm d_1&0&0&0\\0&d_2&0&0\\
0&0&-d_1&0\\0&0&0&-d_2\epm,\\
\pa_\nu u&=0\text{ on } \pa\Om, \quad (v,w)|_{t=0}=(v_0,w_0). \label{cs2b}
}
\end{subequations}
In \reff{cs2b} we only have initial conditions for the states,
i.e., half the variables, and
\reff{cs2a} is ill-posed as an initial value problem
due to the backward diffusion in $(\lam,\mu)$. Thus, below
we shall further restrict the transversality condition \reff{tcond}
to requiring that $u(t)$ converges to a steady state, i.e.~a solution
of
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{cs3}
G(u)=0,\quad \pa_\nu u=0\text{ on } \pa\Om.
}
\begin{remark}}\def\erem{\end{remark}\label{defrem}{\rm {\bf (Definitions and notations)}
A solution $u$ of the canonical system \reff{cs2} is called a
\emph{canonical path},
and a solution $\uh$ of \reff{cs3} is called a
\emph{canonical steady state (CSS)}. With a slight abuse of notation
we also call $(v,w,E)$ with $E$ given by (\ref{cs}e) a canonical
path, suppressing the associated co-states $\lam,\mu$.
In particular, if $\hat u$ is a CSS, so is $(\hat v,\hat w,\hat E)$.
A CSS $\hat u$ is called {\em flat} if it is spatially homogeneous,
and {\em patterned} otherwise, and we use the acronyms
FCSS and PCSS, respectively.
For convenience, given $t\mapsto u(t)$
we also write
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{jredef}
J(u):=J(v_0,w_0,E),
}
with $(v_0,w_0)$ and $E$ (via (\ref{cs}e)) taken from $u$.
A canonical path $u$ (or $(v,w,E)$)
is called {\em optimal} if there is no canonical
path starting at the state values $(v(0), w(0))$
and yielding a higher $J$ than $J(u)$. As a special case, a
CSS $\uh=(\hat v,\hat w, \hat \lam,
\hat \mu)$ is called optimal if there is no canonical
path starting at $(\hat v, \hat w)$ and yielding a higher $J$
than $J(\hat v,\hat w, \hat E)$. We use the acronyms OSS for any optimal
CSS, and FOSS
and POSS for optimal flat or patterned CSS $\uh$, respectively.
An OSS $\uh$
is called {\em locally stable} if
for all admissible $(v_0,w_0)$ close to $(\hat v, \hat w)$
there is an optimal path $u$ with
$\lim_{t\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow\infty}u(t){=}\uh$.\footnote{For ``close to'' and $\lim$
we may use, e.g., the $H^1(\Om)$ norm, but since all solutions will be
smooth, for instance as solutions of semilinear elliptic systems with
smooth coefficients, we decided to omit the introduction of function
spaces here. Similarly, $(v_0,w_0)$ ``admissible'' should be read as
$v_0(x),w_0(x)\ge 0$ for $x\in\Om$.}
Similarly, $\uh$ is called {\em globally stable} if for all
admissible $(v_0,w_0)$ the associated optimal path has
$\lim_{t\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow\infty}u(t){=}\uh$.
Finally, the
{\em domain of attraction} of a locally (or globally)
stable OSS $\uh$ is defined as the set of all $(v_0,w_0)$ such that
the associated optimal path $u$ fulfills $\lim_{t\rightarrow}\def\Ra{\Rightarrow}\def\la{\leftarrow\infty}u(t){=}\uh$.
See also \cite{GU15} for more formal definitions, and further
comments on the notion of optimal system, and, e.g.,
the transversality condition \reff{tcond}.
}
\mbox{$\rfloor$} \erem
\begin{remark}}\def\erem{\end{remark}\label{lrem}{\rm
For background on OC in a PDE setting see for instance
\cite{Tr10} and the references therein, or specifically \cite{RZ99, RZ99b, LM01} and
\cite[Chapter 5]{AAC11} for Pontryagin's Maximum Principle{}
for OC problems for semilinear parabolic state evolutions. However,
these works are in a
finite time horizon setting, often the objective function
is linear in the control, and there are state or control constraints.
For instance, denoting the control by $k=k(x,t)$, often $k$ is chosen
from some bounded interval $K$, and therefore is not obtained from
the analogue of (\ref{cs}e), but rather takes values from $\pa K$, which
is usually called bang--bang control.
In, e.g., \cite{CP12, ACKT13}, some specific models of this type have been
studied in a rather theoretical way, i.e., the focus is on
deriving the canonical system and showing well-posedness and
the existence of an optimal control. \cite{Apre14} additionally
contains numerical simulations for a finite time horizon control--constrained
OC problem for a three species spatial predator-prey system, again
leading to bang--bang type controls. See also \cite{NPS11}
and the references therein for numerical methods for (finite time horizon)
constrained parabolic optimal control problems.
Similarly, the (stationary) fishery problems in \cite{Neu03, DL09}
come with harvesting (i.e.~control) constraints. Moreover, in contrast
to our zero--flux boundary conditions (\ref{cs2}b),
Dirichlet boundary conditions are imposed.
In \cite{KXL15} the models from \cite{Neu03, DL09} are extended to Robin boundary conditions,
and a finite time horizon, with a discounted profit of the form
$J=\int_0^T \int_\Om p\er^{-\rho t}h(x,t)u(x,t)\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} t$,
where $p,\rho>0$ denote the price and discount rate, $h$ is the harvest,
and $u$ the (fish) population density, which fulfills a rather general
semilinear parabolic equation including advection. The first focus is
again on well--posedness and the first order optimality conditions, and
numerical simulations are presented for some specific model choices,
illustrating the dynamic formation and evolution of marine reserves.
However, the setting again is quite different from ours, due to the
finite time horizon, and since $J$ is linear in $h$ and $u$, and
since consequently there are constraints on $h$, leading to (unique)
bang--bang controls.
Here we consider neither (active)
control or state constraints, nor a finite terminal time;
instead we work on the infinite time horizon.
Our models and method are motivated by \cite{BX08, BX10}, which also
discuss Pontryagin's Maximum Principle{} in this setting. We do not aim at theoretical results, but
rather consider
\reff{cs2} after a spatial discretization as a (large) ODE problem,
and essentially treat this using the notations and ideas from
\cite{BPS01} and \cite[Chapter 7]{grassetal2008}, and the algorithms from
\cite{GU15, U15p2}, to numerically compute optimal paths.}
\mbox{$\rfloor$}
\erem
\subsection{Solution steps}\label{sssec}
Using the canonical system \reff{cs2}
we proceed in two steps:
first we compute (branches of) CSS, and
second we solve the
``connecting orbit problem'' of computing canonical paths connecting
to some CSS.
Thus we take a
broader perspective than aiming at computing just one optimal control,
given an initial condition $(v_0,w_0)$, which without further information
is an ill-posed problem anyway. Instead, our method aims to give a
somewhat global
picture by identifying the (in general multiple)
optimal CSS and their respective domains
of attraction, as follows:
\bci
\item[(i)] To calculate
CSS (which automatically fulfill \reff{tcond})
we set up \reff{cs3} as a bifurcation problem
and use the package \p2p\ \cite{p2pure, p2p2}
to find a branch of FCSS, from which various branches of PCSS bifurcate.
We focus on the rainfall parameter $R$ as the main bifurcation parameter,
but in \S\ref{rhosec}
also briefly discuss the dependence on the discount rate $\rho$ and
the price $p$ as bifurcation parameters.
By calculating in parallel the (spatially averaged) current value profits
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{jca}
J_{c,a}(v,E):=\frac 1 {|\Om|} \int_\Om J_c(v(x),E(x))\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x
}
of the CSS we can moreover immediately find which of the CSS maximize $J_{c,a}$
and hence $J=J_{c,a}/\rho$ amongst the CSS.
\item[(ii)] In a second step we calculate canonical paths ending
at a CSS (and
often starting at the state values of a different CSS), and the objective
values of the canonical paths.
\eci
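Regarding step (i), the relation $J=J_{c,a}/\rho$ for a CSS is simply $\int_0^\infty \er^{-\rho t}\,{\rm d}t=1/\rho$ applied to a time-constant profit. A quick numerical sanity check in Python (the profit value is the FCSS/13 entry of Table \ref{tab3}; the truncation time $T=600$ is an illustrative choice):

```python
import numpy as np

rho = 0.03
J_ca = 25.85     # current-value profit of FCSS/13 at R = 28 (Table 3)

# trapezoid rule for J = int_0^infty exp(-rho*t)*J_ca dt, truncated at T = 600
t = np.linspace(0.0, 600.0, 200_001)
f = np.exp(-rho*t)*J_ca
J_quad = np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

print(J_quad, J_ca/rho)  # both approximately 861.67
```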
Using a Finite Element Method (FEM) discretization, \reff{cs2} is
converted into the ODE system (with a slight abuse of notation)
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{dcs}
M\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e}t u=-G(u),
}
where $u\in{\mathbb R}^{4n}$ is a large vector containing the nodal values
of $u=(v,w,\lam,\mu)$ at the $n$ spatial discretization points
(typical values are $n=30$ to $n=100$ in 1D, and $n=1000$ and larger in 2D),
and $M\in {\mathbb R}^{4n\times 4n}$ is the so called mass matrix, which is large but sparse. In \reff{dcs} and the following we mostly suppress
the dependence of $G$ on the rainfall $R$ (or the other parameters).
For (i) we thus need to solve the problem $G(u)=0$, which can be considered
as an elliptic problem after changing the signs in the equations
$G_{3,4}(u)=0$ for the costates.
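To experiment outside of {\tt pde2path}, the structure of the discretized operator in \reff{dcs} can be mimicked with a simple finite-difference stand-in; the Python sketch below (illustrative only, since {\tt p2pOC} uses a FEM discretization with mass matrix $M$) assembles a sparse 1D Laplacian with zero-flux boundary conditions and checks that constants lie in its kernel, as they must for Neumann BC:

```python
import numpy as np
import scipy.sparse as sparse

# 1D Laplacian on (-L, L) with zero-flux (Neumann) BC, n grid points
n, L = 50, 5.0
h = 2*L/(n - 1)
main = -2.0*np.ones(n)
main[0] = main[-1] = -1.0   # first-order one-sided stencil encodes du/dn = 0
off = np.ones(n - 1)
Lap = sparse.diags([off, main, off], [-1, 0, 1])/h**2

print(abs((Lap @ np.ones(n)).max()))  # 0.0: constants are in the kernel
```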
For (ii) we choose a suitable truncation time $T>0$ and
replace the transversality condition \reff{tcond} by the
condition
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{
u(T)\in W_s(\hat u) \text{ and } \|u(T)-\hat{u}\| \text{ small},
}
where $W_s(\hat{u})$ is the stable manifold of the CSS
$\hat{u}$ for the finite dimensional approximation
\reff{dcs} of \reff{cs}. In practice we use
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{
u(T)\in E_s(\hat u) \text{ and } \|u(T)-\hat{u}\| \text{ small},
}
where $E_s(\hat u)$ is the stable eigenspace of $\hat u$, i.e.,
the linear approximation of $W_s(\hat{u})$ at $\uh$. At $t=0$ we already
have the boundary conditions
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{ic1}
(v,w)|_{t=0}=(v_0,w_0)\in{\mathbb R}^{2n}
}
for the states.
To have a well-defined two point boundary value problem in time
we thus need
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{spp} \dim E_s(\hat u)=2n.}
Since the (generalized, in the sense of $M$ on the left hand side of \reff{dcs})
eigenvalues of the linearization $-\pa_u G(\hat u)$ of \reff{cs2} around
$\uh$ are always symmetric around $\rho/2$, see \cite[Appendix A]{GU15},
we always have $\dim E_s(\hat u)\le 2n$. The number
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{ddef}
d(\hat u)=2n-\dim E_s(\hat u)
}
is called the {\em defect} of $\hat u$, a CSS $\hat u$ with $d(\hat u)>0$
is called {\em defective}, and if $d(\hat u)=0$, then
$\uh$ has the so called {\em saddle point property} (SPP). Clearly, CSS
with the SPP are the only ones for which, for all
$(v_0,w_0)$ close to $(\hat v,\hat w)$, we may expect a solution
of the connecting orbit problem
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{tbvpCS}
M\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e}t u=-G(u), \quad (v,w)|_{t=0}=(v_0,w_0), \quad
u(T)\in E_s(\hat u), \text{ and $\|u(T)-\uh\|$ small.}
}
See \cite{GU15} for further comments on the significance
of the SPP \reff{spp} on the discrete level, and its (mesh-independent)
meaning for the canonical system as a PDE.
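The symmetry of the spectrum of $-\pa_u G(\uh)$ about $\rho/2$ can already be seen in a single-state toy canonical system $\dot x=\pa_\lam\CH$, $\dot\lam=\rho\lam-\pa_x\CH$, whose $2\times 2$ Jacobian always has trace $\rho$, so its eigenvalues come in pairs $\nu$, $\rho-\nu$. A minimal Python illustration (the coupling values are arbitrary):

```python
import numpy as np

rho = 0.03
# Jacobian of a 1-state canonical system has the form [[a, k], [m, rho - a]];
# its trace is rho for any couplings a, k, m
a, k, m = -0.2, 0.5, 2.0
Jac = np.array([[a, k], [m, rho - a]])
nu = np.linalg.eigvals(Jac)

print(nu.sum().real)  # rho (up to rounding): eigenvalues are nu, rho - nu
```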
\subsection{Results}\label{srsec}
The bifurcation diagram from step (i) for \reff{cs3} turns out to be quite rich,
already over small 1D domains. Thus we mostly focus on 1D,
but we include a short 2D discussion in \S\ref{2dsec}.
Details will be given in \S\ref{rsec},
but in a nutshell we have: In pertinent $R$ regimes there are
many CSS, but most of them do not fulfill the SPP, and most of
those that do fulfill the SPP are not optimal. On the other hand,
in particular at low $R$ there are locally stable POSS
(patterned optimal steady states).
Before further commenting
on this, we briefly review results for the so called uncontrolled case.
In \cite[\S2]{BX10} it is shown that the case of private
objectives, where economic agents (ranchers) are located at each site $x$,
and each one maximizes his or her private profits, leads to the
system
\begin{subequations}\label{pde2}
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}al{
\pa_t v&=d_1\Delta v+(gwv^\eta-d(1+\del v)-A)v,\\
\pa_t w&=d_2\Delta w+R(\beta+\xi v)-(r_u v+r_w)w,
}
\end{subequations}
i.e., the harvest is $H=Av$ with a fixed parameter $A>0$. In detail,
if ranchers with certain property rights individually
maximize $\pi(v,E)=p v^\al E^{1-\al}-cE$, then this leads to
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{poE}
\text{$E=\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v} v$ with
$\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v}=\ds\left(\frac{p(1-\al)}{c}\right)^{1/\al}$ and hence $A=
\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v}^{1-\al}$.}
}
The same happens in the so called open access case, where agents may
harvest freely, giving
$E=\hat\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v} v$ with $\hat\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v}=\ds \left(\frac{c}{p}\right)^{-1/\al}$ and
hence $A=\hat\gamma}\def\om{\omega}\def\th{\theta}\def\uh{\hat{u}}\def\vh{\hat{v}^{1-\al}$. On the other hand, \reff{pde2} can also be seen
as an ``undisturbed Nature'' case with modified vegetation death rate
coefficients $\tilde{d}=d+A$ and $\tilde{\del}=\frac {d\del} {d+A}$.
For the economic parameters $(c,p,\al)=(1,1.1,0.3)$ from Table \ref{tab1}
we obtain $A=0.543$, which is rather large compared to the original
$d=0.03$ from Table \ref{tab1}.
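The value $A=0.543$ follows directly from \reff{poE}; a short Python check, with the open access coefficient $\hat\gamma^{1-\al}$ added for comparison (our addition, for illustration):

```python
# economic parameters from Table 1
c, p, al = 1.0, 1.1, 0.3

# private optimization, eq. (poE): E = gamma*v, harvest H = A*v
gamma = (p*(1 - al)/c)**(1/al)
A = gamma**(1 - al)
print(round(A, 3))      # 0.543, as quoted in the text

# open access: zero-profit effort E = gamma_hat*v
gamma_hat = (c/p)**(-1/al)
A_hat = gamma_hat**(1 - al)
print(A_hat > A)        # True: open access harvests more aggressively
```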
The bifurcation picture for steady states of \reff{pde2} is
roughly similar to that of \reff{cs3}, but there are also significant
differences, and we altogether summarize our results as follows:
\bci
\item[{\bf (a)}] For large $R$ there is a ``high vegetation'' FCSS,
which is a globally stable FOSS (flat optimal steady state).
\item[{\bf (b)}] For smaller $R$ the FCSS from {\bf (a)} loses optimality,
and there bifurcate branches of (locally stable) POSS
(Patterned Optimal Steady States).
In particular, we can calculate canonical paths from
the state values of the FCSS to some POSS which increase
the profit (up to 40\%, see \ref{p0val}).
\item[{\bf (c)}] The uncontrolled flat steady states (FSS) of \reff{pde2}
only exist
for much larger $R$ values than the FCSS.
At equal $R$, the profit $J$ (or equivalently the
discounted value $J/\rho$) of the FSS is much lower (e.g., one tenth, see
Table \ref{tab3}, bottom center)
than the value of the FCSS of \reff{cs3}.
\item[{\bf (d)}] For the initial value problem \reff{pde2} we may consider the
stability of steady states, while CSS at best have the SPP.
It turns out that the FSS branch loses stability to a patterned
steady state of \reff{pde2} at a much larger value of $R$ than
that at which the FOSS loses optimality in
the optimal control problem \reff{oc1}. Thus, in the uncontrolled problem
pattern formation sets in at larger $R$.
\eci
Roughly speaking, {\bf (b)} means that at low $R$ it is advantageous
to restrict harvesting to certain areas. This is similar
to the marine reserves in the fishery models in \cite{Neu03, DL09},
and is not
entirely surprising: it is well known, both from models
and from field studies of semi arid systems,
that low rainfall levels lead to
patterned (or patchy) vegetation also in uncontrolled systems,
and that such vegetation can often be sustained at
lower rainfall levels than a uniform state \cite{meron01,riet04,meron13};
this is precisely what happens here as well.
However, the quantitative differences between steady states of
\reff{cs2} and \reff{pde2} are quite significant, in particular in
the sense {\bf (c),(d)}: The controlled system sustains vegetation
at much lower $R$ values, and at equal $R$ it yields a much
higher profit than \reff{pde2}. Moreover, the computation of
canonical paths from some initial state (often a CSS) to some OSS
yields precise information on {\em how} to steer the system to the OSS
in order to maximize the objective function.
{\begin{remark}}\def\erem{\end{remark}\label{taxrem}{\rm In economics,
the co-states $\lam$ and $\mu$ are also called {\em shadow prices},
which are sometimes
difficult to interpret; here, however, the shadow price $\lam$ has a
nice interpretation as an optimal tax for private optimization, as follows.
Introducing a tax $\tau$ per unit harvest, i.e.,
setting
\def\tilde{\pi}{\tilde{\pi}}
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{potax1}
\tilde{\pi}(v,E)=(p-\tau)v^\al E^{1-\al}-cE,
}
we obtain
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{poEt}
E=\left(\frac{(p-\tau)(1-\al)}{c}\right)^{1/\al}v
}
instead of \reff{poE}. Thus, after solving the optimization
problem \reff{oc1} via Pontryagin's Maximum Principle,
and thus in particular computing the shadow price
$\lam$ of the vegetation constraint along an optimal path, comparing \reff{poEt}
to (\ref{cs}e) we see that
private optimization maximizes $J$ from \reff{j0} if the tax is
set as $\tau=\lam$, which in general is time and space dependent.
}
\mbox{$\rfloor$}\erem }
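The statement $\tau=\lam$ can be illustrated numerically: with the tax set to the shadow price, the private effort \reff{poEt} coincides with the socially optimal effort (\ref{cs}e), while untaxed ranchers over-harvest. A small Python sketch with the rounded FCSS/13 values from Table \ref{tab3} as sample data:

```python
# economic parameters from Table 1, sample values from Table 3 (FCSS/13)
p, c, al = 1.1, 1.0, 0.3
lam, v = 0.59, 376.32    # shadow price and vegetation density at R = 28

def E_private(tau):
    # effort of a rancher facing a tax tau per unit harvest, eq. (poEt)
    return ((p - tau)*(1 - al)/c)**(1/al)*v

E_social = ((p - lam)*(1 - al)/c)**(1/al)*v   # eq. (cs e)

print(E_private(lam) == E_social)   # True: tau = lam decentralizes the optimum
print(E_private(0.0) > E_social)    # True: without the tax, over-harvesting
```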
As already said, our method follows \cite{GU15}, where we study optimal
controls for a model of a shallow lake ecology/economy, given by
a scalar parabolic PDE. However, the results are rather different.
First of all, without control, i.e., for a fixed control
equal to some parameter, the scalar PDE in \cite{GU15} shows no
pattern formation: the patterns in \cite{GU15} are only due
to the control, which may be called ``control induced pattern formation''
or, as in \cite{BX08, BX10}, ``optimal diffusion instability'' (ODI).
However, the parameters in \cite{GU15} have to
be carefully fine--tuned to obtain POSS, which moreover are only locally stable,
see also \cite{grass2014}.
Here, to obtain POSS we need no fine--tuning of parameters.
From the methodological and algorithmic point of view, our results for \reff{oc1}
illustrate that our two--step approach is well suited to deal with
non--uniqueness of CSS in nonlinear PDE optimal control problems,
and the typically associated multiple canonical paths and multiple local maxima
of the objective function. See also \S\ref{dsec}
for further discussion of efficiency.
\section{Numerical simulations}\label{rsec}
\subsection{1D canonical steady states bifurcating from
the FCSS branch}
\label{1dcss-sec}
The bifurcation scenario
for the stationary problem $G(u)=0$ can be studied conveniently
with \p2p. First we concentrate on the 1D case $u=u(x)$, $x\in(-L,L)$,
where the domain length must be chosen so as to capture
pertinent instabilities of the FCSS branch. In \cite{BX08, BX10},
conditions for pattern forming instabilities at a FCSS are given
in terms of the Hamiltonian $\CH$ and its derivatives.
These are similar to the well known
Turing space conditions \cite{Murray89book}, and moreover allow
the calculation of the critical wave--number $k_c$ of
the bifurcation patterns. For instance, at $R=5$ \cite{BX10}
find $(k_-,k_+)\approx (0.146, 1.455)$ for the band of
unstable wave numbers at the FCSS.
If one is interested in accurately capturing the first bifurcation,
then one should
either fit the domain to the (wave number of the) first instability (see, e.g.,
\cite{uwsnak14} for examples), or use a very
large domain, which gives a rather dense set of allowed wave numbers.
However, for simplicity, and with the (expensive) $t$--dependent
canonical paths in mind, here we do not want
to use a very large domain, and, moreover, rather take the point
of view that the domain comes with the model. Thus, we
do not want to be too
precise on fitting the domain to the first instability over an
infinite domain, and simply choose $L=5$. Of course, increasing the domain size
(certainly in integer multiples of $L$) will only increase the
number of patterns and bifurcations, and on the other hand
there is a critical minimal
domain size below which no patterns exist.
In order to present our results in a domain independent way
we give averaged quantities such as $J_{c,a}$, see \reff{jca}, and
\red}\def\p2p{{\tt pde2path}}\def\Gti{\tilde{G}ga{\label{sprdef}
\spr{v}:=\frac 1 {|\Om|} \int_\Om v(x)\, {\rm d}}\def\ri{{\rm i}}\def\sech{{\rm sech}}\def\er{{\rm e} x, \quad
\text{ and so on}.
}
Figure \ref{f1}(a) shows a basic bifurcation diagram of 1D CSS. We start
with a FCSS at $R=34$ which can be easily found from an initial guess
$(v,w,\lam,\mu)=(400, 10, 0.5, 1)$ followed by a Newton
loop.\footnote{(\ref{cs}a,b) also has
the trivial solution branch $(v,w)=(0,R\beta/r_w)$, which yields the
trivial branch FCSS$_0$ with
$(v,w,\lam,\mu)=(0,R\beta/r_w,0,0) \text{ (and hence $E=0$),}$
which however is of little interest here.}
\begin{figure}
\caption{{\small (a) (partial) bifurcation diagram of CSS in 1D, including
a (small) selection of bifurcation points indicated by $\circ$. The
labeled points are tabulated in Table \ref{tab3}.}
\label{f1}}
\end{figure}
See Table \ref{tab3}
for numerical values of some of the FCSS found this way, and of other
selected points in the bifurcation diagrams. Following the FCSS branch
with decreasing $R$ we find a
number of branch points to PCSS, and near $R=4$
we find a fold for the FCSS branch. The lower FCSS branch continues back
to large $R$, but is not interesting from a modeling point
of view. The upper FCSS branch has the SPP until the first bifurcation
at $R=R_c\approx 21.5$, where a PCSS branch p1 with period $20/3$
bifurcates subcritically; see the example plots of solutions on p1.
This is a pitchfork
bifurcation, but here and in general we only follow the branch in one direction;
the other direction is often related to the first by symmetry, e.g., spatial
translation by half a period.
\begin{table}[ht]\begin{footnotesize}
\bce
\begin{tabular}{|c|ccC{5mm}C{1mm}C{10mm}ccC{9mm}C{1mm}cc|}
\hline
point&FCSS/13&p1/16&p1/34&&FCSS/17&p1/11&p1/38&\mbox{\hs{-2mm}}FCSS/28&&p1/49&p2/23\\
R&28&28&28&&26&26&26&20&&20&20\\
$\spr{v}$&376.32&337.92&283.31&&335.36&311.63&252.46&223.59&&175.49&185.08\\
$\spr{w}$&9.25&10.37&13.17&&9.3&10.05&13.34&9.62&&13.28&11.64\\
$\spr{\lambda}$&0.59&0.53&0.35&&0.59&0.56&0.34&0.62&&0.34&0.45\\
$\spr{\mu}$&1.09&1.06&1.06&&1.04&1.02&1.03&0.87&&0.92&0.89\\
$J_{c}$&25.85&25.06&25.02&&22.45&22&22.14&13.66&&14.43&13.61\\
$d$&0&2&0&&0&2&0&2&&0&1\\
\hline
point&FCSS/44&FCSS/91&p1/65&\vline&\mbox{\hs{-2mm}}FSS$^*$/121&p1$^*$/97&p6$^*$/30&\mbox{\hs{-2mm}}FCSS/105&\vline&q1/76&q6/69\\
R&10&10&10&\vline& 60&60&60&60&\vline&20&20\\
$\spr{v}$&75.08&0.21&71.33&\vline& 79.73&163.42&86.5&1304.5&\vline&183.14&151.22\\
$\spr{w}$&11.46&88.21&12.71&\vline& 65.51&39.21&61.4&10.06&\vline&12.02&14.54\\
$\spr{\lambda}$&0.68&0.9&0.42&\vline&NA&NA&NA&0.49&\vline&0.35&0.31 \\
$\spr{\mu}$&0.5&0.0006&0.65&\vline&NA&NA&NA&1.75&\vline&0.91 &0.91\\
$J_{c}$&3.51&0.002&4.36&\vline& 14.3&29.31&15.51&120.8&\vline&13.93&13.93\\
$d$&4&4&0&\vline& NA(u)&NA(s)&NA(u)&0&\vline&0&0\\
\hline
\end{tabular}
\caption{{\small Characteristics of selected points marked in
Fig.~\ref{f1}(1D, top half and bottom left in the table),
Fig.~\ref{fnc} (bottom center in the table, where $^*$ denotes values
from the case of private optimization) and
Fig.~\ref{f3} (2D case, bottom right in the table). NA for the ${}^*$ values
means that these values are not defined; for the defect the additional
u,s are used to indicate unstable vs stable solutions.}
\label{tab3}}
\ece \end{footnotesize}
\end{table}
The p1 branch has a fold at $R_f\approx 31$, after which it has the SPP down
to a secondary bifurcation at $R_2\approx 9.4$.
Near $R=3.1$ the p1 branch also
has a second
fold, after which it continues to larger $R$ as a branch of small amplitude
patterns. (a) also shows the PCSS branches bifurcating from the second and
third bifurcation points on the FCSS; interestingly, p3 connects back to
p2 in a secondary bifurcation. However, except for the FCSS branch for
$R>R_c$ and the p1 branch between the first fold and the first
secondary bifurcation, no other branch has solutions with the SPP,
cf.~\reff{spp}.
Figure \ref{f1}(b) shows the FCSS and p1 branches in a $J_{c,a}$ over
$R$ bifurcation diagram. This already shows that at, e.g., $R=20$
(in fact, for $R$ smaller than about 24.4),
solutions on p1 yield a larger $J_{c,a}$ than the FCSS,
and thus appear as candidates
for POSS. From the applied point of view, probably the most
interesting aspect of the solution plots in (c) and (d) is
that after the first fold on the p1 branch
the effort $E$ has local minima, not maxima, at the maxima of
$v$. Instead, $E$ has maxima on the slopes near the maximum of $v$.
{Taking into account (\ref{cs}e) and the co-state plots in
the second row of (d), this can be attributed to the distinctive
peaks in $\lam$ at the maxima of $v$. These peaks in the shadow price
of the vegetation evolution illustrate that it is not optimal
to harvest at the peaks in $v$ as this will strongly decrease future
income. Also note that the (average) shadow prices $\spr{\lam}$ on
the p1 branch after the fold are lower than on the FCSS
branch at the same $R$, while at least at low $R$, $\spr{\mu}$ is higher
on p1 than on FCSS. }
The vegetation patterns (p1 branch)
survive for lower $R$ (up to $R_{\text{crit}}\approx 3.1$) than
the FCSS branch ($R_{\text{crit}}\approx 3.7$).
However, the difference is not large, and this bottom end of $R$
will not
be our interest here, despite its significance for critical transitions.
Instead, we are interested in the optimality
of CSS for intermediate $R_{\min}<R<R_f$, with $R_{\min}=5$, say.
\begin{remark}\label{unirem}{\rm Although our picture of CSS obtained above
is already somewhat complicated, naturally it
is far from complete. Firstly, we only followed the first three
bifurcations from the FCSS branch, and secondly, there are (plenty of)
secondary bifurcations on the branches p1, p2 and p3, which here
we do not follow.
In particular, given that the 1.5--modal (in, e.g., $v$)
branch
p1 maximizes $J$ amongst the CSS, a natural question is whether
there also exist unimodal or 0.5--modal branches, which might give
even higher $J$. The answer is (partly) yes: while we could not
find a 0.5--modal branch, there is a unimodal
branch p0, which bifurcates from p2 in a secondary bifurcation,
or, more precisely, connects p1 and p2. See \S\ref{unisec} for details,
where inter alia we study the bifurcation behavior in the price $p$.
Moreover, p0 then maximizes $J$ amongst the CSS, and, loosely speaking,
turns out to be a global maximum for \reff{oc1}.
Nevertheless, until \S\ref{unisec},
for the sake of clarity we restrict to the primary
branches which bifurcate from the FCSS when varying $R$. However,
one should keep in mind that whatever method one uses to
study optimization problems like \reff{oc1} it is always possible
to be stuck with some local maxima, and to miss some global maximum.
}
\mbox{$\rfloor$}
\end{remark}
\subsection{Comparison to private optimization}
\label{ncsec}
As already explained in the Introduction, private objectives, i.e.,
individual ranchers maximizing $\pi(v,E)=p v^\al E^{1-\al}-cE$,
leads to the system
\begin{subequations}\label{pde12}
\begin{align}
\pa_t v&=d_1\Delta v+(gwv^\eta-d(1+\del v)-A)v,\\
\pa_t w&=d_2\Delta w+R(\beta+\xi v)-(r_u v+r_w)w,
\end{align}
\end{subequations}
with $A=\left(p(1-\al)/c\right)^{(1-\al)/\al}=0.543$ for the economic parameters
$(c,p,\al)=(1,1.1,0.3)$ from Table \ref{tab1}. In this
section we compare the bifurcation diagram in $R$ for steady states of
\reff{pde12}, see Fig.~\ref{fnc}, to that for \reff{cs2} in Fig.~\ref{f1}.
Roughly speaking, both are similar, but for \reff{pde12} the bifurcations to
patterned steady states occur at larger $R$,
and of course also have to be interpreted differently. First of
all, we start the bifurcation diagram at $R=130$ with a dynamically
stable flat steady state (FSS) of \reff{pde12}, which loses stability at $R_c\approx 122$ due to a
supercritical pitchfork bifurcation of a branch p1nc of patterns with period 5.
There are a number of further bifurcations from the FSS branch;
as an example we give p6nc. The p1nc solutions lose stability in a secondary
bifurcation near $R=61$ (not followed here), and eventually all branches
undergo a fold between $R=36$ and $R=24$, and turn into small
amplitude branches.
Similar to the CSS case, here we also have $\spr{\pi({\rm p1nc})}>
\spr{\pi({\rm FSS})}$,
i.e., the patterned states yield a higher (average) profit than the
flat states.\footnote{Although $J_{c,a}$ and $\spr{\pi}$ have different
interpretations, in Table \ref{tab3} we use $J_{c,a}$ also for
$\spr{\pi}$, as both are actually defined by the same expression.}
In (c,d) we compare the FSS branch
with the FCSS branch. Besides again showing that the fold of the FCSS is at
much lower $R$, and hence the socially
controlled system supports a uniform vegetation
down to much lower $R$, this also illustrates that, at given $R$,
$\spr{P}$ and $\spr{v}$ are
significantly higher on the FCSS branch. Finally,
\reff{pde12} has the trivial branch $(v,w)=(0,R\beta/r_w)$,
which however again is of no interest to us.
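As a quick numerical cross-check of the constant $A$ (a sketch; the closed form used here is an assumption inferred from the effort rule $E=\left(p(1-\al)/c\right)^{1/\al}v$, i.e.~(\ref{cs}e) with $\lam=0$):

```python
# Cross-check of A for the private-optimization model: with the myopic
# effort E = (p*(1-alpha)/c)**(1/alpha) * v (shadow price lambda = 0),
# the harvest term v**alpha * E**(1-alpha) reduces to A*v.
c, p, alpha = 1.0, 1.1, 0.3              # economic parameters

effort_ratio = (p * (1 - alpha) / c) ** (1 / alpha)   # E / v
A = effort_ratio ** (1 - alpha)

print(round(A, 3))                       # 0.543, the value quoted in the text
```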
\begin{figure}
\caption{{\small (a) Bifurcation diagram for the case of private optimization,
where $\spr{\pi}$ takes the role of $J_{c,a}$.}}
\label{fnc}
\end{figure}
\subsection{Canonical paths}\label{1dcpsec}
\subsubsection{Main results}
The computation of CSS branches is only half of the program outlined above;
we also need to solve the time dependent problem \reff{tbvpCS} to
\bci
\item
given a point $(v_0,w_0)$ and a CSS $\hat{u}$,
determine if
there exists a canonical path from $(v_0,w_0)$ to $\hat u$;
\item
compare canonical paths $t\mapsto u(t)$ (or
$(v,w,E)(t)$)
to different $\hat u$, i.e.: compute and compare their economic values
$J(u):=J(v_0,w_0,E)$, cf.~\reff{jredef},
to find {\em optimal} paths.
\eci
Assuming that the spatial mesh consists of $n$ points, we summarize the
algorithm for the first point as follows, with more details
given in \cite{GU15,U15p2}. First we compute $\Psi\in {\mathbb R}^{2n\times 4n}$
corresponding to the unstable eigenspace of $\hat u$ such that the
right BC $u(T)\in E_s(\uh)$ is equivalent to $\Psi (u(T)-\uh)=0$. Then, to
solve \reff{tbvpCS} we use a modification {\tt mtom} of the two-point BVP
solver TOM \cite{MT04},
which allows one to handle systems of the form $M\,{\rm d}_t u=-G(u)$,
where $M\in{\mathbb R}^{4n\times 4n}$ is the mass matrix arising in the FEM
discretization of \reff{cs}. A crucial step in solving
(nonlinear) BVPs is a good initial guess for a
solution $t\mapsto u(t)$, and we combine {\tt mtom} with
a continuation algorithm in the initial states, again see
\cite[\S2.1]{GU15}
for further discussion, and \cite{U15p2} for implementation details.
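The boundary-condition matrix $\Psi$ can be illustrated on a small toy system; the following sketch (with an arbitrary $4\times 4$ stand-in for the Jacobian, not the actual FEM matrices) builds $\Psi$ from left eigenvectors so that $\Psi w=0$ exactly when $w$ lies in the stable eigenspace:

```python
import numpy as np

# Toy version of the right boundary condition Psi (u(T) - uhat) = 0:
# Psi collects the left eigenvectors of the Jacobian at uhat that
# belong to eigenvalues with positive real part, so its kernel is the
# stable eigenspace E_s(uhat).
rng = np.random.default_rng(0)
D = np.diag([1.0, 0.5, -0.3, -2.0])      # two unstable, two stable modes
Q = rng.standard_normal((4, 4))
Jac = Q @ D @ np.linalg.inv(Q)           # stand-in Jacobian with known spectrum

vals, V = np.linalg.eig(Jac)             # columns of V: right eigenvectors
W = np.linalg.inv(V)                     # rows of W: left eigenvectors
unstable = vals.real > 0
Psi = W[unstable]                        # one row per unstable eigenvalue

# Any perturbation in the span of the stable eigenvectors is
# annihilated by Psi (up to round-off):
w_stable = V[:, ~unstable] @ rng.standard_normal((~unstable).sum())
print(np.max(np.abs(Psi @ w_stable)) < 1e-8)
```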
For the second point we note that for a CSS $\uh$ we simply have
$J(\uh)=J_{c,a}(\uh)/\rho$.
Given a canonical path $u(t)$ that
converges to a CSS $\uh$, and a final time $T$, we may then approximate
\begin{align}
J(v_0,w_0,E)=&\int_0^T{\rm e}^{-\rho t}J_{c,a}(v(t,\cdot),E(t,\cdot))\,{\rm d}t
+\frac{{\rm e}^{-\rho T}}{\rho}J_{c,a}(\uh).
\end{align}
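A minimal sketch of how this approximation is evaluated from a computed path (with a synthetic scalar $J_{c,a}(t)$ standing in for the spatially averaged profit along the path; $\rho$, $T$, and the decay profile are illustrative assumptions):

```python
import numpy as np

# Discounted objective along a truncated path plus the stationary tail:
# J = int_0^T exp(-rho*t) * Jca(t) dt + exp(-rho*T)/rho * Jca(uhat).
rho, T = 0.03, 200.0
t = np.linspace(0.0, T, 2001)
Jca_hat = 14.43                          # stand-in value at the target CSS
Jca_path = Jca_hat + 5.0 * np.exp(-t)    # transient decaying to the CSS value

f = np.exp(-rho * t) * Jca_path
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoidal rule
J = integral + np.exp(-rho * T) / rho * Jca_hat
print(round(J, 2))
```

For this decaying transient the result is close to $J_{c,a}(\uh)/\rho$ plus the discounted transient contribution, as expected.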
In principle, given $\uh=(\hat v,\hat w, \hat \lam, \hat \mu)$ with
$d(\uh)=0$ we could choose any
$(v_0,w_0)$ in a neighborhood of $(\hat v,\hat w)$ (or globally,
if $\uh$ is a globally stable OSS) and aim to find a canonical path from
$(v_0,w_0)$ to $\uh$. However, the philosophy of our simulations
rather is to start at the state values of
some CSS, and see if we can find canonical paths to
some other CSS which give a higher $J$.
We discuss such canonical paths in decreasing $R$, starting with $R=26$ in
Fig.~\ref{vf2}, and postponing the situation at $R=28$ to \S\ref{pskibasec}.
\begin{remark}\label{prerem}{\rm a) Note again that our discussion
is based on the primary bifurcations in $R$ from the FCSS in Fig.~\ref{f1},
which misses a branch of unimodal CSS, cf.~Remark \ref{unirem}
and \S\ref{unisec}.\\
b) Although only the state values $(v_0,w_0)$ are fixed as initial
conditions for canonical paths as connecting orbits, in order not to
clutter notations and language we write, e.g., $u_{\text{FCSS$\to$PS}}$
for a connecting orbit starting at the state values of
$\uh_{\text{FCSS}}$ and connecting to $\uh_\text{PS}$.
}
\mbox{$\rfloor$}
\end{remark}
\paragraph{$R=26$.}
At $R=26$ we have two
CSS with $d(\uh)=0$, namely $\uh_{\text{FCSS}}$ given by FCSS/pt17,
and $\uh_{\text{PS}}$
given by p1/pt38. Figure \ref{vf2} shows two canonical paths
to these CSS. (a) shows $\|v(t)-v_0\|_\infty$ and $\|v(t)-v_1\|_\infty$
for the path from the ``intermediate'' patterned CSS $\uh_{\text{PSI}}$
given by p1/pt11 to
$\uh_{\text{FCSS}}$. This indicates
that, and how fast, the canonical path leaves the initial $(v_0,w_0)$ and
approaches the goal $(v_1,w_1)=(\hat v,\hat w)$ (the differences
in the second component $\|w(t)-w_*\|_\infty$ are always smaller).
Moreover we plot $4J_{c,a}(t)$, illustrating
that $J_{c,a}(t)$ does not vary much along the canonical path.
However, the differences may accumulate.
\begin{figure}
\caption{{\small $R=26$. (a) convergence behavior, the
current value profit, and obtained objective values for
the canonical paths from p1/pt11
to the FCSS (left), and $E,v,w$ on the path; (b) the same for the
canonical path from the FCSS to p1/pt38. (c) co-state paths from (a),(b).}}
\label{vf2}
\end{figure}
The values of the solutions are as follows.
We readily have
\begin{align}\label{cpv1}
J(\uh_{\text{PSI}})=733.37<J(\uh_{\text{PS}})=738.12<J(\uh_{\text{FCSS}})=748.29,
\end{align}
and for the paths $u_\text{PSI$\to$FCSS}$ (a),
$u_\text{FCSS$\to$PS}$ (b), and
$u_\text{PSI$\to$PS}$ (not shown) we obtain
\begin{align}\label{cpv2}
J(u_\text{PSI$\to$FCSS})=735.51, \quad J(u_\text{PSI$\to$PS})=744.28,
\text{ and } J(u_\text{FCSS$\to$PS})=749.53.
\end{align}
The result for $J(u_\text{PSI$\to$FCSS})$
seems natural, as controlling the system to
a CSS with a higher value should increase $J$.
However, the results for $J(u_\text{PSI$\to$PS})$ and
$J(u_\text{FCSS$\to$PS})$ may at first seem counter--intuitive.
In $u_\text{FCSS$\to$PS}$ we go to a CSS with a smaller value, but
the transients yield a higher profit for the path.
In particular, this shows that
{\em a CSS which maximizes $J$ amongst all CSS is not necessarily optimal}.
Similarly, although $\uh_{\text{PS}}$ as a CSS has a lower value
than $\uh_{\text{FCSS}}$,
starting at $\uh_{\text{PSI}}$ it is advantageous to go to $\uh_{\text{PS}}$
rather than to $\uh_{\text{FCSS}}$.
Due to folds in the continuation in the initial states,
again see \cite{GU15} for details, we could not compute a path
from $\uh_{\text{PS}}$ to $\uh_{\text{FCSS}}$. Thus we conclude
that such paths do not exist, and (tentatively, see Remark \ref{prerem})
classify $\uh_{\text{PS}}$ as an at least locally stable
POSS, with $\uh_{\text{FCSS}}$ and $\uh_{\text{PSI}}$
in its domain of attraction.
The control to go from $\uh_{\text{PSI}}$ to $\uh_{\text{FCSS}}$ in (a) is
intuitively clear: Increase/decrease $E$ near the maxima/minima of $v_0$. Going
from $\uh_{\text{FCSS}}$ to $\uh_{\text{PS}}$ in (b)
warrants a bit more discussion:
For a short transient, $E$ is reduced
around the locations $x_2= -5$ and $x_4=5/3$ of the maxima of
$\hat v_{\text{PS}}$.
This is enough to give an increase of $v$ around $x_{2,4}$.
However, under the given conditions this does not decrease soil water
near $x_{2,4}$, i.e., the increased infiltration at larger $v$
dominates the higher uptake by plants. After this transient,
$E$ increases near $x_{2,4}$, thus producing the higher $J$;
see also the discussion of Fig.~\ref{f2b} below.
{As the behavior of $E$ follows from (\ref{cs}e), i.e.,
$E=\left(\frac{(p-\lam)(1-\al)}
{c}\right)^{1/\al}v$, for illustration
we also plot $\lam,\mu$ for the paths in (a), (b) in Fig.~\ref{vf2}(c). }
\begin{figure}
\caption{{\small The canonical path from FCSS to PS at $R=10$.}}
\label{f2b}
\end{figure}
\paragraph{Smaller $R$.}
For $R{<}R_c{\approx}21.5$, in Fig.~\ref{f1} the only CSS with the SPP is the
patterned state $\uh_{\text{PS}}(R)$. In Fig.~\ref{f2b} we
focus on the case $R{=}10$, and here only remark that the results are
qualitatively similar for all $R_{\min}{<}R{<}R_c$.
We show the canonical path from the FCSS to the PCSS, where
now the strategy to reach a patterned state
already indicated in Fig.~\ref{vf2}(b)
becomes more prominent.
Up to $t\approx 10$, $E$ is reduced around $x_2=-5$ and $x_4=5/3$.
Conversely, $E$ is initially increased near the minima $x_1=-5/3$ and
$x_3=5$ of $\hat v$, leading to a decrease of $v$ and an increase
of $w$ near $x_{1,3}$.
On the other hand, due to diffusion of $w$
the increase of $v$ near $x_{2,4}$ does {\em not} lead to a decrease
of $w$ compared to the FCSS $w_0$. Instead $w$ increases significantly
{\em everywhere}. After this transient
the harvesting effort $E$ is increased near $x_{2,4}$, leading to
an overall quick convergence of $u(t)$ to the PCSS $\uh$.
Thus, the main point for the strategy to go from $\uh_{\text{FCSS}}$
to $\uh_{\text{PS}}$
is to initially introduce a spatial variation (of the right
wavelength) into $E$, which yields maxima of $v$ at the minima of this initial
$E$, but then to rather quickly switch to the harvesting on the slopes
of the generated maxima of $v$. The canonical path shows precisely how to do
this.
Also note (blue curve in Fig.~\ref{f2b}(a)) that the initial harvesting briefly yields a higher
$J_{c,a}$ than $J_{c,a}(\uh_{\text{PS}})$
but in the transition $4<t<25$, say, $J_{c,a}(t)$ is significantly
below $J_{c,a}(\uh_{\text{PS}})$.
For the values we have
\begin{align}\label{r10val}
J(\uh_{\text{FCSS}})=116.9<J(u_\text{FCSS$\to$PS})=132.49<
J(\uh_{\text{PS}})=145.26.
\end{align}
Thus, again tentatively,
see Remark \ref{prerem}, and in particular \reff{p0val} below,
we classify $\uh_{\text{PS}}$ at $R=10$ as a POSS, with
$\uh_{\text{FCSS}}$ in its domain of attraction.
For completeness we remark that at $R=20$ we have
\begin{align}\label{r20val}
\text{$J(\uh_{\text{FCSS}})=455.31$,
$J(\uh_{\text{PS}})=480.88$, and $J(u_{\text{FCSS$\to$PS}})=474.57$.}
\end{align}
\subsubsection{A patterned Skiba point}\label{pskibasec}
In ODE OC applications, if there are several locally stable OSS,
then often an important issue is to identify their domains of attractions.
These are separated by so called threshold or Skiba--points (if $N=1$)
or Skiba--manifolds (if $N>1$) \cite{Skiba78, grassetal2008,KW10}.
\begin{figure}
\caption{{\small In (a), the blue and red
lines give $J$ for the canonical paths $u_{\text{$\to$FCSS}}$ and $u_{\text{$\to$PS}}$, respectively.}}
\label{vf3}
\end{figure}
Roughly speaking, these are (consist of) initial states from which
there is more than one
optimal path {\em with the same value, but connecting to different CSS}.
In PDE
applications, even under spatial discretization with moderate $nN$,
Skiba manifolds should be expected to become very complicated objects.
In Fig.~\ref{vf3} we just give one example of a patterned Skiba
point ``between'' $\uh_{\text{PS}}$ given by p1/pt34
and $\uh_{\text{FCSS}}$ given by
FCSS/pt13, at $R=28$, where $\uh_{\text{PS}}$ and $\uh_{\text{FCSS}}$ are
the two
possible targets for canonical paths. The blue and red lines in (a) give
$J$ for the canonical paths as indicated.
The lines intersect near $\al\approx 0.9$,
giving the same value $J$. Hence, while the two paths are completely
different, they both are equally optimal, and
for illustration (b) shows the two paths for $\al=0.9$, where
$|J_{\text{$\to$FCSS}}-J_{\text{$\to$PS}}|<0.08$.
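Numerically, locating such a Skiba point reduces to a one-dimensional root find on the difference of the two value curves; a sketch with synthetic linear stand-ins for the computed $J(\al)$ values (the slopes and intercepts are invented for illustration, chosen so that the crossing sits near $\al=0.9$ as in the text):

```python
# Skiba point as the crossing of the value curves of the two canonical
# path families; the linear models below are hypothetical stand-ins.
def J_to_FCSS(alpha):
    return 750.0 - 20.0 * alpha          # decreasing in alpha (invented)

def J_to_PS(alpha):
    return 714.0 + 20.0 * alpha          # increasing in alpha (invented)

def diff(alpha):
    return J_to_FCSS(alpha) - J_to_PS(alpha)

# Bisection on the sign change of the difference over [0, 1]:
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if diff(lo) * diff(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha_skiba = 0.5 * (lo + hi)
print(round(alpha_skiba, 3))             # 0.9 for these stand-ins
```

In practice each evaluation of $J(\al)$ requires computing a full canonical path, so a coarse scan followed by such a bracketing step keeps the number of path computations small.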
\subsection{2D results}\label{2dsec}
The basic mechanisms of pattern formation in
reaction--diffusion models can usually be studied in 1D, but
for quantitative results for vegetation--water ecosystem models one should
also consider the more pertinent 2D situation, and clearly the same
applies to the OC system. Even though we do not
expect qualitatively different results from the 1D case, here we
give a short overview over 2D PCSS and the associated canonical paths.
\begin{figure}
\caption{{\small Partial bifurcation diagram and example plots of CSS in 2D.}}
\label{f3}
\end{figure}
The first question concerns the second spatial length, now called
$y$--direction. For classical
Turing bifurcations, typically stripes and/or hexagonal spots are
the most stable bifurcating 2D patterns, see \cite{uwsnak14} and the references
therein for discussion,
and thus by analogy here we choose the domain
$\Om=(-L,L)\times (-\sqrt{3} L/2, \sqrt{3}L/2)$ with $L=5$ as in 1D.
Figure \ref{f3}(a) shows five branches bifurcating from the FCSS.
It turns out that the 1D branch p1 actually comes out of the 5th
bifurcation point in 2D, and is therefore called q5 now, while the
first branch q1 corresponds to horizontal stripes, see (c).
Thus, $L=5$, chosen for simplicity, does
not capture the first instability in 1D. However, the first
bifurcation points are very close together; moreover, upon continuation
to low $R$ the q5 branch still yields the highest $J_{c,a}$, see (b).
The sixth branch q6 is a hexagon branch.
Similar to q5, both q1 and q6 have the SPP after their first folds.
The other branches are other types of stripes or spots,
for instance ``squares'', but none of these have the SPP. All branches exhibit
some secondary bifurcations, not shown here,
and, in order not to overload the bifurcation
diagram, we only plot the starting segments of q2 (green) and q3 (magenta).
\begin{figure}
\caption{{\small Snapshots of $E$ and $(v,w)$ on the canonical
path from the FCSS to the
hexagons. The states $v,w$ both go directly into their ultimate hex
pattern, which then only grows, while $E$ shows a switching
analogously to the 1D case.}}
\label{f4}
\end{figure}
At, for instance, $R=20$ we now have 3 possible targets for canonical
paths: $\uh_{\text{hs}}$ (horizontal stripes) from q1,
$\uh_{\text{hex}}$ (hexagons) from q6, and $\uh_{\text{vs}}$
(vertical stripes) from q5, already
discussed in 1D as p1. It turns out that we can reach each of these
from the FCSS, with $J(\uh_{{\rm FCSS}})=455.31$, cf.~\reff{r20val},
and similarly $J(\uh_{\text{vs}})=480.88$ and $J(u_{\text{FCSS$\to$vs}})=474.56$
as in 1D (since these values are normalized by $|\Om|$).
For the path to $\uh_{\text{hs}}$ with $J(\uh_{\text{hs}})=464.33$ we obtain
$J=464.91$, and the path to $\uh_{\text{hex}}$ with $J(\uh_{\text{hex}})
=464.33=J(\uh_{\text{hs}})$
(up to 2 digits) yields $J=467.38$. Thus we again have $V(\uh_{{\rm FCSS}})=
474.6$. The strategies for these paths are the natural extensions
of the 1D case: given a target PCSS $\uh$, initially $E$ has
minima at the maxima of $\hat v$, but after a rather short transient
during which $v(t,\cdot)$ develops maxima at the right places,
$E$ changes to harvesting in the neighborhood of these maxima.
Movies of these paths can be found at \cite{p2phome},
and in Fig.~\ref{f4}
we present some snapshots.
We could not compute canonical paths from any of the PCSS to any
other PCSS, with the continuation typically failing due to a sequence of
folds.
Thus we strongly expect all three PCSS to be locally stable POSS.
\subsection{Remarks on further parameter dependence}
\label{rhosec}
So far we varied the rainfall $R$ as our external bifurcation parameter.
Similarly, we could vary some other of the physical parameters
$g,\eta,\ldots,d_{1,2}$
(first six rows of Table \ref{tab1}), and in most cases one may expect
bifurcations to patterned CSS.
Maybe even more interesting from an application point of view
is the dependence on the economic parameters
$\rho, c,p$ and $\al$ (discount rate, cost for harvesting, price of harvest,
and elasticity), as these may vary strongly with economic
circumstances.
Moreover, varying a second parameter often
also gives bifurcations
to branches which were missed upon continuation of just the primary
parameter, and these may play an important role in the overall
structure of the solution set; this does happen here, see \S\ref{unisec} below.
\subsubsection{Experiments with the discount rate $\rho$}\label{dratesec}
In Fig.~\ref{rf1} we illustrate the dependence of
the PCSS on the p1 branch from Fig.~\ref{f1} on $\rho$, at fixed $R=10$,
cf.~also Fig.~\ref{f2b}. Panel (a) shows the bifurcation diagram;
to obtain the blue and black branches we continued
the points p1/pt65 and FCSS/pt44 from $\rho=0.03$ down to $\rho=0.005$,
reset counters, renamed p1 to r1, and continued to larger $\rho$ again.
Both the FCSS and the r1 branches then show folds at $\rho\approx 0.185$
and $\rho\approx 0.325$, respectively.
More importantly, the r1 branch has the SPP
for small $\rho$, but loses it
at $\rho\approx 0.032$ to another PCSS branch s1. Solutions on s1
have maxima of different heights in $v$, see (b), and have the SPP only
up to the
fold at $\rho=\rho_f\approx 0.046$. Moreover, there are further bifurcations
from the FCSS branch to PCSS branches, but none of these has the SPP.
\begin{figure}
\caption{{\small Bifurcations when varying the
discount rate $\rho$ at $R=10$.
The blue branch at $\rho=0.03$ corresponds to the PCSS from Fig.~\ref{f2b}.}}
\label{rf1}
\end{figure}
With $\uh_{\text{PS}}$ given by s1/pt9 ($\rho=0.04$), the control
to go from the FCSS to $\uh$ shows a similar switching strategy
as, e.g., in Fig.~\ref{f2b}. The values are
\begin{align}\label{skewval}
J(\uh_{\text{FCSS}})=82.5,\quad J(u_{\text{FCSS$\to$PS}})=92.8,\quad
J(\uh_{\text{PS}})=106.8,
\end{align}
which are more or less comparable
to Fig.~\ref{f2b} (with $\rho=0.03$).
On the other hand, for the canonical path from the FCSS to
r1/pt0 at $\rho=0.005$ we obtain
\begin{align}\label{lrval}
J(\uh_{\text{FCSS}})=774.9,\quad J(u_{\text{FCSS$\to$PS}})=892.2,
\quad J(\uh_{\text{PS}})=897.7.
\end{align}
In addition to the larger
total values due to the smaller discount rate, compared
to Fig.~\ref{f2b} the canonical path to $\uh_{\text{PS}}$ now has
almost the same value as $\uh_{\text{PS}}$ itself.
This illustrates that at low $\rho$ the
transients have less influence, as expected.
For $\rho>\rho_f$ none of the CSS on
the branches that are shown in Fig.~\ref{rf1}, or that can be obtained
from the shown bifurcation points, have the SPP. This does not mean
that PCSS with the SPP do not exist in this parameter regime, but rather
that they must be obtained by continuation and bifurcation in some other way,
cf.~Remark \ref{unirem} and \S\ref{unisec}.
\subsubsection{Dependence on the price $p$, and the unimodal branch}\label{unisec}
In Fig.~\ref{rf2}(a) we illustrate the dependence
of the FCSS and p1 branches on the price $p$, with fixed $R=10$,
starting at $p=1.1$ with p1/pt65 and FCSS/pt44 from Fig.~\ref{f1},
respectively.
Naturally, the values
decrease as $p$ decreases, and not surprisingly p1 bifurcates
from the FCSS branch at some $p_c{\approx}0.55$. Next, as an additional
benefit we find the ``unimodal'' branch p0, which bifurcates from the
FCSS branch near $p{=}0.5$, and which yields a higher
$J_{c,a}$ than p1.
\begin{figure}
\caption{{\small (a) branches FCSS (black), p1 (blue), and p0 (red) over $p$,
$R=10$ fixed. (b) Continuation in $R$ of the new branch p0 found in (a),
and the bifurcating branch p01 (green), together with the known branches FCSS (black) and p2 from Fig.~\ref{f1}.}}
\label{rf2}
\end{figure}
Thus, in a next step we continue p0 to $p=1.1$ and then
switch back to continuation in $R$, with $p=1.1$ fixed, i.e.,
p0/pt0 in (b)-(c) is pt112 from (a).
It turns out that the p0 branch has the SPP up to the fold at
$R_{\text{f}}\approx 30.15$, and slightly below the fold there is a
bifurcation point to the green branch. This contains
some ``skewed'' solutions, and connects p0 and p1.
Ultimately, p0 connects back to p2 from
Fig.~\ref{f1} at low amplitude near $R=21.1$. Thus, we could also have found
p0 by following secondary bifurcations in Fig.~\ref{f1}.
The values pertinent for
the canonical path from FCSS/pt44 to
p0/pt0 are
\begin{align}\label{p0val}
J(\uh_{\text{FCSS}})=116.9\ (\text{as in \reff{r10val}}),\quad
J(u_{\text{FCSS$\to$P0}})=145.11,\quad
J(\uh_{\text{P0}})=165.42,
\end{align}
which shows that the path to p0 dominates the path to p1 from
Fig.~\ref{f2b}. Thus, the point FCSS/pt44 is in the domain of attraction
of p0, and not of p1.
On the other hand, we could not find canonical paths from p1/pt65
to p0/pt0 (or vice versa). Therefore p1/pt65 can still be classified
as an at least locally stable POSS, and similarly, p0/pt0 is {\em only
locally stable}, since it does not attract p1/pt65. Next one could
compute a number of Skiba--points (cf.~\S\ref{pskibasec})
``between'' p0 and p1 to roughly
characterize the respective domains of attraction, but here we refrain
from this.
Finally, despite trying some further combinations
of continuations/bifurcations and also suitable direct
initial guesses followed by a Newton loop,
we could not find a ``half''--modal branch in
the parameter regions studied so far,
i.e., a branch on which solutions are monotone in $v$. Thus, it appears
that such ``long wave'' PCSS, i.e., with period 20,
do not exist in these parameter regions.
\section{Summary and discussion}\label{dsec}
Our numerical approach for spatially distributed optimal control problems
may yield rich results, if applied carefully, in the following sense.
First, the canonical system may have many steady states, and
it is in general not clear how to find all {\em relevant} CSS,
and which of the CSS have the SPP and hence
are suitable targets for canonical paths,
and second
it needs to be checked which CSS ultimately belong to optimal paths.
On the other hand, the value $J$ of the CSS
can easily be calculated in parallel with the
bifurcation diagram of the CSS, allowing one to identify which
CSS maximize $J$ amongst all CSS. Compared to the computation
of canonical paths (or direct methods for the optimization
problem \reff{oc1}), this first step is relatively cheap numerically, but
(together with the SPP)
typically still gives a strong indication for optimal CSS.
The computation of canonical paths is a connecting orbits problem, and
in particular in two spatial dimensions this may
become numerically expensive. In practice we found our two--step
approach to be reasonably fast for up to 5000 degrees
of freedom of $u$ at fixed time, e.g., for our vegetation
model 1250 spatial discretization
points and 4 components, and
up to 100 temporal discretization points;
up to such values a continuation step in the calculation
of a canonical path takes up to a minute on a desktop computer
(Intel i7, 2.3 GHz), such that a typical canonical path is
computed in about 5 continuation steps in at most 5 minutes,
and often much more quickly. In 1D, with $n=50$, say,
typical canonical paths are computed in
a few seconds.
Here we applied our method exemplarily to the optimal
control model from \cite{BX10}, and for this we again
summarize our main results as follows, cf.~also {\bf (a)--(d)}
in the Introduction:
\bci
\item[(i)] Compared to the case of private optimization, we have CSS (both flat
and patterned) for significantly
lower rainfall values $R$, and the whole Turing (like) bifurcation scenario
is shifted to lower $R$. This is important for welfare as it means a much
increased robustness of vegetation (and hence harvest) with respect to low
rainfall.
Moreover, at a given $R$ the social control gives a significantly
(almost ten times) higher $J$
for steady states than private optimization; see, e.g., Table \ref{tab3}, bottom center, and Fig.~\ref{fnc}(c), for numerical values.
\item[(ii)] At low $R$, some PCSS yield a higher $J$ than the FCSS, and
some of these PCSS are locally stable POSS.
\item[(iii)]
The optimal controls to reach such a POSS $\uh$ from a FCSS
follow some general rules: first decrease the harvesting effort $E$ at the
location of the maxima of the desired POSS, then start harvesting
near (but not at) the maxima of the POSS, as determined by $E$ from the POSS.
The increase in welfare by controlling the system from a FCSS
to a POSS can be up to 40\%, see \reff{p0val}. See also, e.g., \reff{cpv2}, \reff{r10val}, \reff{r20val}, and the values given in \S\ref{2dsec} for
further numerical values.
{\item[(iv)] The co--state (or shadow price) $\lam$ computed for an optimal path
can be interpreted
as an optimal tax for private optimization, see Remark \ref{taxrem}. }
\eci
{The points (ii) and (iii) emphasize that resource management rules
in systems with multiple CSS may need to take several of these into account,
as also illustrated by the Skiba point computations in \S\ref{pskibasec}.}
We strongly expect similar results about POSS in other spatially
distributed optimal control problems for (Turing like) systems
of PDE. Thus we hope that our numerical approach
is a valuable tool to
study the basic behavior of spatially distributed optimal control models
with the states fulfilling a reaction diffusion system.
\renewcommand{\baselinestretch}{1.05}
\small
\input{veg-r1.bbl}
\end{document}
\begin{document}
\title[Characterization of two-scale gradient Young measures]
{Characterization of two-scale gradient Young measures and
application to homogenization}
\author[J.-F. Babadjian, M. Ba\'{\i}a \& P. M. Santos]
{Jean-Fran\c cois Babadjian, Margarida Ba\'{\i}a \& Pedro M. Santos}
\address[Jean-Fran\c{c}ois Babadjian]{SISSA, Via Beirut 2-4, 34014 Trieste, Italy}
\email{[email protected]}
\address[Margarida Ba\'{\i}a]{Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisbon, Portugal}
\email{[email protected]}
\address[Pedro M. Santos]{Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisbon, Portugal}
\email{[email protected]}
\maketitle
\begin{abstract}
This work is devoted to the study of two-scale gradient Young
measures naturally arising in nonlinear elasticity homogenization
problems. Precisely, a characterization of this class of measures
is derived and an integral representation formula for homogenized
energies, whose integrands satisfy very weak regularity
assumptions, is obtained in terms of two-scale gradient Young
measures.
\end{abstract}
\begin{center}
\begin{minipage}{13cm}
\small{ \keywords{\noindent {\sc Keywords:} Young measures,
homogenization, $\Gamma$-convergence, two-scale convergence.}
\subjclass{\noindent {\sc 2000 Mathematics Subject Classification:}
74Q05, 49J45, 28A33, 46E27.} }
\end{minipage}
\end{center}
\section{Introduction}
\noindent Young (or parametrized) measures were introduced in
optimal control theory by L. C. Young \cite{LCY3} to study
nonconvex variational problems for which no classical solution
exists, and to provide an effective notion of generalized solution
for problems in the Calculus of Variations.
Starting with the works of Tartar \cite{T1} on hyperbolic
conservation laws, Young measures have been an important tool for
studying the asymptotic behavior of solutions of nonlinear partial
differential equations (see also DiPerna \cite{DP}). A key feature
of these measures is their capacity to capture the oscillations of
minimizing sequences of nonconvex variational problems, and many
applications arise, {\it e.g.}, in models of elastic crystals (see
Chipot \& Kinderlehrer \cite{CK} and Fonseca \cite{F}), phase
transitions (see Ball \& James \cite{BJ}) and optimal design (see
Bonnetier \& Conca \cite{BC}, Maestre \& Pedregal \cite{MP} and
Pedregal \cite{P3}). The special properties of Young measures
generated by sequences of gradients of Sobolev functions have been
studied by Kinderlehrer \& Pedregal \cite{KP2,KP} and are relevant
in the applications to nonlinear elasticity.
The lack of information on the spatial structure of oscillations
presents an obstacle for the application of Young measures to
homogenization problems. Two-scale Young measures, which have been
introduced by E in \cite{W} to study periodic homogenization of
nonlinear transport equations, contain some information on the
amount of oscillations and extend Nguetseng's notion of two-scale
convergence (see \cite{N1} and Allaire \cite{A}). Other
(generalized) multiscale Young measures have been introduced in the
works of Alberti \& M\"{u}ller \cite{AM} and Ambrosio \& Frid
\cite{AF}.\\
From a variational point of view periodic homogenization of
integral functionals rests on the study of the equilibrium
states, or minimizers, of a family of functionals of the type
\begin{equation}\label{main-funct}
{\mathcal{F}}_\e(u):=\int_\O f\left(x,\left\la \frac{x}{\e}\right\ra,\nabla
u(x)\right)\, dx,
\end{equation} as $\e\to 0$, under suitable boundary conditions. Here $\O$ (bounded
open subset of ${\mathbb{R}}^N$) is the reference configuration of a
nonlinear elastic body with periodic microstructure and whose
heterogeneities scale like a small parameter $\e>0$. The function
$u\in W^{1,p}(\O;{\mathbb{R}}^d)$ stands for a deformation and $f:\O \times Q
\times {\mathbb{R}}^{d \times N} \to [0,+\infty)$, with $Q:=(0,1)^N$, is the
stored energy density of this body, which is assumed to satisfy
standard $p$-coercivity and $p$-growth conditions, with $p>1$. The
presence of the term $\la x/\e \ra$ (fractional part of the vector
$x/\e$ componentwise) takes into account the periodic
microstructure of the body leading the integrand of
(\ref{main-funct}) to be periodic with respect to that variable. The
macroscopic (or averaged) description of this material may be
understood by the $\G$-limit of (\ref{main-funct}) with respect to
the weak $W^{1,p}(\O;{\mathbb{R}}^d)$-topology (or, equivalently, with
respect to the $L^{p}(\O;{\mathbb{R}}^d)$-topology if $\O$ is, for instance,
Lipschitz) and it has already been studied by many authors in the
Sobolev setting. Namely, under several regularity assumptions on $f$
it has been proved that for all $u\in W^{1,p}(\O;{\mathbb{R}}^{d})$,
\begin{equation}\label{fhom-2}
{\mathcal F}_{\rm hom}(u):=\G\text{-}\lim_{\e\to 0} \mathcal F_{\e}(u) =\int_\O f_{\rm
hom}(x,\nabla u(x))\, dx,\end{equation} \noindent where for all
$(x,\xi) \in \O \times {\mathbb{R}}^{d \times N}$
\begin{eqnarray}\label{fhom}
&&\hspace{-1cm}f_{\rm hom}(x,\xi)\nonumber\\
&&\hspace{-0.5cm}= \lim_{T \to +\infty}\inf_{\phi} \left\{
- \hskip -1em \int_{(0,T)^N}f(x,y,\xi + \nabla \phi(y))\, dy : \phi \in
W^{1,p}_0((0,T)^N;{\mathbb{R}}^d)\right\}
\end{eqnarray}
(see Ba\'{\i}a \& Fonseca \cite{BF,BF1}, Braides \cite{B1}, Braides
\& Defranceschi \cite{BD}, Marcellini \cite{Ma} and M\"uller
\cite{Mu}).
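To make formula (\ref{fhom}) concrete, the following one-dimensional computation (a standard sketch under simplifying assumptions, not taken from the references above) shows how the cell problem can be solved in closed form for a quadratic energy.

```latex
% Assumption: N = d = 1 and f(x,y,\xi) = a(y)\xi^2, with a measurable,
% 1-periodic and 0 < \alpha \le a(y) \le \beta. By convexity the limit
% over T in (\ref{fhom}) reduces to a single-cell minimization:
f_{\rm hom}(\xi)
   = \min_{\phi \in W^{1,2}_{\rm per}(0,1)}
     \int_0^1 a(y)\bigl(\xi + \phi'(y)\bigr)^2\,dy
   = \Bigl(\int_0^1 \frac{dy}{a(y)}\Bigr)^{-1}\xi^2 .
% Sketch: the Euler-Lagrange equation gives a(y)(\xi + \phi'(y)) = c
% for some constant c; dividing by a, integrating over (0,1) and using
% the periodicity of \phi yields c = \xi (\int_0^1 a(y)^{-1} dy)^{-1}.
% The homogenized coefficient is thus the harmonic mean of a.
```

In particular, even in this simplest case $f_{\rm hom}$ differs from the naive average $\bigl(\int_0^1 a(y)\,dy\bigr)\xi^2$.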
We also refer to Anza Hafsa, Mandallena \& Michaille \cite{HMM}
where a formula for the function $f_{\rm hom}$ has been given in
terms of gradient Young measures. In the convex case, Barchiesi
\cite{Bar} and Pedregal \cite{P} have derived the same $\G$-limit
result (\ref{fhom-2}) with Young measure techniques. The main
contribution of \cite{Bar} is to weaken, as much as possible, the
regularity of $f$, which is only assumed to be an ``admissible integrand"
in the sense of Valadier \cite{V} (see Definition \ref{admint}
below). Using the same kind of arguments, Pedregal extended
this result to the nonconvex case in \cite{P2}.\\
We note that solutions of
$$\min_{u=u_0 \text{ on }\partial \O} \int_\O f_{\rm
hom}(x,\nabla u(x))\, dx$$ \noindent only give an average of the
oscillations that minimizing sequences may develop.
From a mathematical
point of view, the main property of Young measures is their
capability of describing the asymptotic behavior of integrals of the
form $$\int_{\O} f(v_{\e}(x))\,dx,$$ where $f$ is some nonlinear
function and $\{v_{\e}\}$ is an oscillating sequence. To address
the homogenization of (\ref{main-funct}) we consider Young measures
generated by sequences of the type $\{(\la \cdot / \e\ra, \nabla
u_\e)\}$, which are, roughly speaking, what we will call {\it
two-scale gradient Young measures}. From a physical point of view,
we seek to capture microstructures -- due to finer and finer
oscillations of minimizing sequences that cannot reach an optimal
state -- at a given scale $\e$ (period of the material
heterogeneities). In this way, the minima of the limit problem
capture two kinds of oscillations of the minimizing sequences:
those due to the periodic heterogeneities of the material and those
due to a possible multi-well structure.
Our main result gives a complete algebraic
characterization of two-scale gradient Young
measures (see Definition \ref{MGYM} below) in the spirit of Kinderlehrer \& Pedregal \cite{KP}.
We derive this characterization in terms of a Jensen's inequality with
test functions in the space ${\mathcal{E}}_p$ of continuous functions
$f:{\overline Q} \times {\mathbb{R}}^{d \times N} \to {\mathbb{R}}$ such that the
limit $$\lim_{|\xi|\to +\infty}\frac{f(y,\xi)}{1+|\xi|^p}$$ exists
uniformly with respect to $y \in \overline Q$. Namely, we prove the
following result.
\begin{thm}\label{bbs}Let $\O$ be a bounded open subset
of ${\mathbb{R}}^N$ with Lipschitz boundary and let $\nu \in L^\infty_w(\O
\times Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ be such that $\nu_{(x,y)} \in
{\mathcal{P}}({\mathbb{R}}^{d \times N})$ for a.e.\ $(x,y) \in \O \times Q$. The family
$\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a {\rm two-scale
gradient Young measure} if and only if the three conditions below
hold:
\begin{itemize}
\item[i)] there exist $u \in W^{1,p}(\Omega; {\mathbb{R}}^d)$ and $u_1 \in
L^{p}(\Omega; W_{per}^{1,p}(Q;{\mathbb{R}}^d))$ such that
\begin{equation}\label{i}\int_{{\mathbb{R}}^{d \times N}} \xi\, d\nu_{(x,y)}(\xi) =
\nabla u(x) + \nabla_y u_1(x,y) \quad \text{ for a.e. }(x,y) \in \O
\times Q;
\end{equation}
\item[ii)] for every $f\in {\mathcal{E}}_p$
\begin{equation}\label{ii}
\int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi) \, d\nu_{(x,y)}(\xi)\, dy
\geqslant f_{\rm hom}(\nabla u(x)) \quad \text{ for a.e. }x \in \O,
\end{equation}
where
\begin{eqnarray}\label{fhom1}
&&\hspace{-1cm}f_{\rm hom}(\xi)\nonumber\\
&&\hspace{-0.5cm}= \lim_{T \to +\infty}\inf_{\phi} \left\{
- \hskip -1em \int_{(0,T)^N}f(\la y \ra,\xi + \nabla \phi(y))\, dy : \phi \in
W^{1,p}_0((0,T)^N;{\mathbb{R}}^d)\right\};
\end{eqnarray}
\item[iii)] \begin{equation}\label{iii}\displaystyle (x,y) \mapsto
\int_{{\mathbb{R}}^{d \times N}} {|\xi|}^p d\nu_{(x,y)}(\xi) \in L^1(\O
\times Q).\end{equation}
\end{itemize}
\end{thm}
We note that ${\mathcal{E}}_p$ is separable (see Section 3), and thus condition
(ii) only needs to be checked for countably many test functions $f$.
The proof of this theorem is similar to that of Kinderlehrer \&
Pedregal \cite{KP}. We first address the homogeneous case, that is,
we consider two-scale gradient Young measures that are independent
of the macroscopic variable $x \in \O$. This case rests on the
Hahn-Banach Separation Theorem. The general case will be obtained by
splitting $\O$ into suitable small subsets and by approximating
these measures by two-scale Young measures that are piecewise
constant with respect to the variable $x \in \O$.
Theorem \ref{bbs} turns out to be useful for obtaining a representation
of the $\G$-limit of (\ref{main-funct}) in terms of two-scale
gradient Young measures. This is the aim of our second result, where,
following Barchiesi \cite{Bar}, we consider very weak regularity
hypotheses on the integrand $f$.
\begin{thm}\label{exprGlim} Let $\O$ be a bounded open subset
of ${\mathbb{R}}^N$ with Lipschitz boundary and let $f:\O\times Q\times
{\mathbb{R}}^{d\times N}\rightarrow [0,+\infty)$ be an admissible
integrand. Assume that there exist constants $\a$, $\b>0$ and $p
\in (1,+\infty)$ such that for all $(x,y,\xi) \in \O\times Q
\times {\mathbb{R}}^{d\times N}$
\begin{equation}\label{pgrowth}
\a|\xi|^p \leqslant f(x,y,\xi)\leqslant \b(1+|\xi|^{p}).
\end{equation}
Then the functional ${\mathcal{F}}_\e$ $\G$-converges with respect to the
weak $W^{1,p}(\O;{\mathbb{R}}^d)$-topology (or equivalently the strong
$L^p(\O;{\mathbb{R}}^d)$-topology) to ${\mathcal{F}}_{\rm hom}: W^{1,p}(\O;{\mathbb{R}}^{d})\to
[0,+\infty)$ given by
\begin{equation}\label{for-hom}\mathcal
F_{\rm hom}(u)= \min_{\nu \in {\mathcal{M}}_u} \int_\O \int_Q \int_{{\mathbb{R}}^{d
\times N}} f(x,y,\xi)\, d\nu_{(x,y)}(\xi)\,dy\, dx,\end{equation}
where
\begin{eqnarray}\label{mu}
{\mathcal M}_{u} & := & \Bigg\{ \nu \in L_w^\infty(\O \times Q ;
{\mathcal{M}}({\mathbb{R}}^{d \times N})): ~ \{\nu_{(x,y)}\}_{(x,y)\in \O\times Q}
\text{ is a }\nonumber\\
&& \hspace{0.5cm}\text{two-scale gradient Young measure such that }\nonumber\\
&& \hspace{0.5cm}\nabla u(x) = \int_Q \int_{{\mathbb{R}}^{d \times N}}\xi
\, d\nu_{(x,y)}(\xi)\, dy ~\text{ for a.e. }x \in \O \Bigg\}
\end{eqnarray}
for all $u \in W^{1,p}(\O;{\mathbb{R}}^{d}).$
\end{thm}
This work is organized as follows: Section 2 collects the main
notation and results used throughout; Section 3 is devoted to the
proof of Theorem \ref{bbs}; and in Section 4 we address the proof of
the homogenization result, Theorem \ref{exprGlim}.
\section{Some preliminaries}
\noindent The purpose of this section is to give a brief overview of
the concepts and results that are used in the sequel. Almost all of
these results are stated without proof, as they can be readily found
in the references given below.
\subsection{Notation}
\noindent Throughout this work $\O$ is an open bounded subset of
${\mathbb{R}}^N$ with Lipschitz boundary, ${\mathcal{A}}(\O)$ denotes the family of all
open subsets of $\O$, ${\mathcal{L}}^N$ is the Lebesgue measure in ${\mathbb{R}}^N$,
${\mathbb{R}}^{d\times N}$ is identified with the set of real $d\times N$
matrices and $Q:=(0,1)^N$ is the unit cube in ${\mathbb{R}}^N$. The symbols
$\la \cdot \ra$ and $[\cdot]$ stand, respectively, for the
fractional and integer part of a number, or a vector, componentwise.
The Dirac mass at a point $a\in {\mathbb{R}}^m$ is denoted by $\delta_a$. The
symbol
$- \hskip -1em \int_A$ stands for the average $\text{\small${\mathcal{L}}^N(A)^{-1}$}\int_A$.\\
Let $U$ be an open subset of ${\mathbb{R}}^m$. Then:
\begin{itemize}
\item ${\mathcal{C}}_c(U)$ is the space of continuous functions $f:U\to {\mathbb{R}}$
with compact support.
\item ${\mathcal{C}}_0(U)$ is the closure of ${\mathcal{C}}_c(U)$ for the uniform
convergence; it coincides with the space of all continuous
functions $f:U \to {\mathbb{R}}$ such that, for every $\eta >0$, there
exists a compact set $K_\eta \subset U$ with $|f| < \eta$ on $U
\setminus K_\eta$.
\item ${\mathcal{M}}(U)$ is the space of real-valued Radon measures with
finite total variation. We recall that by the Riesz Representation
Theorem ${\mathcal{M}}(U)$ can be identified with the dual space of ${\mathcal{C}}_0(U)$
through the duality $$\la \mu ,\phi \ra = \int_U \phi \, d\mu,
\qquad\mu \in {\mathcal{M}}(U), \quad \phi \in {\mathcal{C}}_0(U).$$
\item ${\mathcal P}(U)$ denotes the space of probability
measures on $U$, {\it i.e. } the space of all $\mu \in {\mathcal{M}}(U)$ such that $\mu
\geqslant 0$ and $\mu(U) =1$.
\item $L^1(\O;{\mathcal{C}}_0(U))$ is the space of maps $\phi : \O \to
{\mathcal{C}}_0(U)$ such that
\begin{itemize}
\item[{\small i})] $\phi$ is strongly measurable, {\it i.e. } there
exists a sequence of simple functions $s_n : \O \to {\mathcal{C}}_0(U)$ such
that $\|s_n(x) - \phi(x)\|_{{\mathcal{C}}_0(U)} \to 0$ for a.e. $x \in \O$;
\item[{\small ii})] $x \mapsto \|\phi(x)\|_{{\mathcal{C}}_0(U)} \in L^1(\O)$.
\end{itemize}
We recall that the linear space spanned by $\{\varphi \otimes \psi :
\; \varphi \in L^1(\O) \text{ and } \psi \in {\mathcal{C}}_{0}(U) \}$ is dense
in $L^{1}(\O;{\mathcal{C}}_{0}(U))$.
\item $L^{\infty}_{w}(\O;{\mathcal{M}}(U))$ is the space of maps $\nu:
\O \to {\mathcal{M}}(U)$ such that
\begin{itemize}
\item[{\small i)}] $\nu$ is weak* measurable, {\it i.e. } $x
\mapsto \langle \nu_{x}, \phi \rangle$ is measurable for every $\phi
\in {\mathcal{C}}_0(U);$
\item[{\small ii)}] $x \mapsto \|\nu_{x}\|_{{\mathcal{M}}(U)}\in
L^{\infty}(\O).$
\end{itemize}
The space $L^{\infty}_{w}(\O;{\mathcal{M}}(U))$ can be identified with the dual
of $L^{1}(\O;{\mathcal{C}}_{0}(U))$ through the duality
$$\la \mu ,\phi \ra
= \int_\O \int_U \phi(x,\xi) \, d\mu_x(\xi)\, dx, \qquad \mu \in
L^{\infty}_{w}(\O;{\mathcal{M}}(U)), \quad \phi \in L^1(\O;{\mathcal{C}}_0(U)),$$ where
$\phi(x,\xi):=\phi(x)(\xi)$ for all $(x,\xi)\in \O\times U$. Hence
it can be endowed with the weak* topology (see {\it e.g.}\ Theorem 2.11
in M\'alek, Ne\v cas, Rokyta \& R\r u\v zi\v cka \cite{MNRR}).
\item The space $W_{\rm per}^{1,p}(Q;{\mathbb{R}}^d)$ stands for the
$W^{1,p}$-closure of all functions $f \in {\mathcal{C}}^1({\mathbb{R}}^N,{\mathbb{R}}^d)$ which
are $Q$-periodic.
\end{itemize}
\subsection{Young measures}
\noindent We recall here the notion of Young measure and some of
its basic properties. We refer the reader to {Braides}
\cite{BLNotes}, {M\"uller} \cite{M}, Pedregal \cite{Pbook},
Roub\'{\i}\v{c}ek \cite{TR}, Valadier \cite{V1} and references
therein for a detailed description on the subject.
\begin{defi}{\rm (Young measure)} Let $\nu\in L^{\infty}_{w}(\O;{\mathcal{M}}({\mathbb{R}}^m))$
and let $z_n:\Omega \rightarrow {\mathbb{R}}^m$ be a sequence of measurable
functions. The family of measures $\{\nu_x\}_{x \in \O}$ is said to
be the {\rm Young measure} generated by $\{z_n\}$ provided
$\nu_{x}\in {\mathcal P}({\mathbb{R}}^m)$ for a.e.\ $x\in \O$ and
$$\delta_{z_n}\xrightharpoonup {*}{} \nu~\text{in}~ L^{\infty}_{w}(\O;{\mathcal{M}}({\mathbb{R}}^m)),$$
{\it i.e. } for all $\psi\in L^{1}(\O;{\mathcal{C}}_{0}({\mathbb{R}}^m))$
$$\lim_{n \to +\infty}\int_{\O}\psi(x,z_n(x))\,dx=\int_{\O}\int_{{\mathbb{R}}^m}\psi(x,\xi)\,d\nu_{x}(\xi)\,dx.$$
\end{defi}
The family $\{\nu_x\}_{x \in \O}$ is said to be a {\it homogeneous}
Young measure if the map $x \mapsto \nu_x$ is independent of $x$.
In this case the family $\{\nu_x\}_{x \in \O}$ is identified with a
single element $\nu$ of ${\mathcal{M}}({\mathbb{R}}^m)$.
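The following classical example (included here only for illustration) exhibits a homogeneous Young measure generated by a rapidly oscillating sequence.

```latex
% Take O = (0,1), m = 1 and z_n(x) := sign(sin(2\pi n x)), which
% spends half of each period at +1 and half at -1. For every
% \psi \in L^1(\O;{\mathcal{C}}_0({\mathbb{R}})),
\lim_{n\to+\infty}\int_0^1 \psi(x,z_n(x))\,dx
   = \int_0^1 \Bigl(\tfrac12\,\psi(x,-1)+\tfrac12\,\psi(x,1)\Bigr)dx,
% so {z_n} generates the homogeneous Young measure
\nu = \tfrac12\,\delta_{-1}+\tfrac12\,\delta_{1}.
% Note that z_n converges weakly-* to 0 in L^\infty(0,1), and
% 0 = \int \xi\,d\nu(\xi): the Young measure retains the oscillating
% values +1 and -1 that the weak limit averages out.
```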
The following result asserts the existence of Young measures (see
Ball \cite{ball}).
\begin{thm}\label{young}
Let $\{z_n\}$ be a sequence of measurable functions $z_n :\O \to
{\mathbb{R}}^m$. Then there exist a subsequence $\{z_{n_k}\}$ and $\nu \in
L^{\infty}_{w}(\O; {\mathcal{M}}({\mathbb{R}}^{m}))$ with $\nu_x \geqslant 0$ for a.e.\ $x
\in \O$, such that $\delta_{z_{n_k}}\xrightharpoonup {*}{} \nu$ in
$L^{\infty}_{w}(\O;{\mathcal{M}}({\mathbb{R}}^m))$ and the following properties hold:
\begin{itemize}
\item[(i)] $\|\nu_x\|_{{\mathcal{M}}({\mathbb{R}}^m)} = \nu_x({\mathbb{R}}^m) \leqslant 1$ for
a.e.\ $x \in \O$;\\
\item[(ii)] if \, ${\rm dist}(z_{n_k},K)\to 0$ in measure
for some closed set $K\subset {\mathbb{R}}^{m}$, then
${\rm Supp}(\nu_x) \subset K$ for a.e.\ $x \in \O$;\\
\item[(iii)] $\|\nu_x\|_{{\mathcal{M}}({\mathbb{R}}^m)} = 1$ if and only if there
exists a Borel function $g : {\mathbb{R}}^m \to [0,+\infty]$ such that
$$\lim_{|\xi|\to +\infty}g(\xi) = +\infty \quad \text{ and } \quad
\sup_{k \in {\mathbb{N}}} \int_\O g(z_{n_k}(x))\, dx <+\infty;$$
\item[(iv)] if $f:\O \times {\mathbb{R}}^m \to [0,+\infty]$ is a normal
integrand, then $$\liminf_{k \to +\infty} \int_\O f(x,z_{n_k}(x))\,
dx \geqslant \int_\O \int_{{\mathbb{R}}^m} f(x,\xi)\, d\nu_x(\xi)\, dx;$$
\item[(v)] if $(iii)$ holds and if $f:\O \times {\mathbb{R}}^m \to
[0,+\infty]$ is a Carath\'eodory integrand such that the sequence
$\{f(\cdot,z_{n_k})\}$ is equi-integrable then
$$\lim_{k \to +\infty} \int_\O f(x,z_{n_k}(x))\,dx=\int_\O \int_{{\mathbb{R}}^m} f(x,\xi)\, d\nu_x(\xi)\, dx.$$
\end{itemize}
\end{thm}
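An elementary example (ours, included for illustration) shows that the inequality in (i) can be strict when mass escapes to infinity.

```latex
% Take O = (0,1), m = 1 and z_n(x) := n for all x. For every
% \varphi \in {\mathcal{C}}_0({\mathbb{R}}),
\lim_{n\to+\infty}\int_0^1 \varphi(z_n(x))\,dx
   = \lim_{n\to+\infty}\varphi(n) = 0,
% hence \delta_{z_n} converges weakly-* to 0, i.e. \nu_x = 0 and
% \|\nu_x\|_{{\mathcal{M}}({\mathbb{R}})} = 0 < 1 for a.e. x: all the mass is lost at
% infinity. Consistently with (iii), no Borel function g with
% g(\xi) \to +\infty as |\xi| \to +\infty can satisfy
% \sup_n \int_0^1 g(z_n(x))\,dx < +\infty, since this integral
% equals g(n).
```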
\subsection{Two-scale gradient Young measures}
\noindent As remarked by Pedregal \cite{P}, regular Young measures
do not always provide enough information on the oscillations of a
certain sequence $\{v_\e\}$. To better understand oscillations that
occur at a given length scale $\e$ we may study the Young measure
generated by the pair $\{(\la \cdot/\e \ra,v_\e)\}$. In this paper
we are interested in the case where $v_\e=\nabla u_\e$, for some
sequence $\{u_\e\} \subset W^{1,p}(\O;{\mathbb{R}}^d)$, with $1<p<\infty$.
Let $\mu \in L_w^\infty(\O; {\mathcal{M}}({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N}))$ and
$\{u_\e\} \subset W^{1,p}(\O;{\mathbb{R}}^d)$ be such that the pair $\{(\la
\cdot/\e \ra,\nabla u_\e)\}$ generates the Young measure
$\{\mu_{x}\}_{x\in \O}$. By an application of the Generalized
Riemann-Lebesgue Lemma (see {\it e.g.}\ Lemma 5.2 in Allaire
\cite{A} or Theorem 3 in Lukkassen, Nguetseng \& Wall \cite{LNW})
the sequence $\{\la \cdot/\e\ra\}$ generates the homogeneous Young
measure $dy:={\mathcal{L}}^N \lfloor Q$ (restriction of the Lebesgue measure
to $Q$). Then by the Disintegration Theorem (see Valadier \cite{V2})
there exists a map $\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times
N}))$ with $\nu_{(x,y)} \in {\mathcal{P}}({\mathbb{R}}^{d \times N})$ for a.e.\ $(x,y)
\in \O \times Q$ and such that $\mu_x = \nu_{(x,y)} \otimes dy$ for
a.e. $x\in \O$, {\it i.e. }
$$\int_{{\mathbb{R}}^N \times {\mathbb{R}}^{d \times N}} \phi(y,\xi)\, d\mu_x(y,\xi)=\int_Q
\int_{{\mathbb{R}}^{d \times N}} \phi(y,\xi)\, d\nu_{(x,y)}(\xi)\, dy$$ for
every $\phi \in {\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$.
The family $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is referred to in
\cite{P} as the {\it two-scale (gradient) Young measure} associated
with the sequence $\{\nabla u_\e\}$ at scale $\e$. More precisely, we
give the following definition.
\begin{defi}\label{MGYM}
Let $\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$. The
family $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is said to be a
{\rm two-scale gradient Young measure} if $\nu_{(x,y)} \in
{\mathcal{P}}({\mathbb{R}}^{d \times N})$ for a.e.\ $(x,y) \in \O \times Q$ and if for
every sequence $\{\e_n\} \to 0$ there exists a bounded
sequence $\{u_n\}$ in $W^{1,p}(\O;{\mathbb{R}}^d)$ such that
$\{(\la\cdot/\e_n \ra,\nabla u_n)\}$ generates the Young measure
$\{\nu_{(x,y)} \otimes dy\}_{x\in \O}$ {\it i.e. } for every $z \in L^1(\O)$
and $\varphi \in {\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$,
$$\lim_{n \to +\infty}\int_\O z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,\nabla u_n(x)\right) dx
= \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}}z(x) \, \varphi(y,\xi)\,
d\nu_{(x,y)}(\xi)\, dy\, dx.$$ In this case
$\{\nu_{(x,y)}\}_{(x,y)\in \O\times Q}$ is also called the two-scale
Young measure associated to $\{\nabla u_n\}$.
\end{defi}
\begin{example}\label{ex}
{\rm Let $\{\e_n\} \to 0$, and let $u:\O\to{\mathbb{R}}^{d}$ and
$u_{1}:\O \times {\mathbb{R}}^N \to {\mathbb{R}}^d$ be smooth functions such that
$u_1(x,\cdot)$ is $Q$-periodic for all $x \in \O$. Define
$$u_n(x):=u(x)+\e_n u_{1}\left(x,\frac{x}{\e_n}\right).$$
The two-scale gradient Young measure $\{\nu_{(x,y)}\}_{(x,y)\in
\O\times Q}$ associated to $\{ \nabla u_n\}$ is given by
$$\nu_{(x,y)}:=\delta_{\nabla u(x) + \nabla_{y} u_{1}(x,y)} \text{ for
all } (x,y)\in \O\times Q.$$ Indeed, let us show that
$\{(\la\cdot/\e_n\ra,\nabla u_n)\}$ generates the Young measure
$\{\nu_{(x,y)}\otimes dy\}_{x \in \O}$. First we note that $\nabla
u_n(x) = \nabla u(x) + \e_n \nabla_x u_1(x,x/\e_n) + \nabla_y
u_1(x,x/\e_n)$ for every $x \in \O$. Since $x
\mapsto \nabla_x u_1(x,x/\e)$ is weakly convergent in $L^p(\O;{\mathbb{R}}^{d
\times N})$ (see {\it e.g.}\ Example 3 in Lukkassen, Nguetseng \&
Wall \cite{LNW}), we have $\e_n \nabla_x u_1(\cdot,
\cdot/\e_n) \to 0$ strongly in $L^p(\O;{\mathbb{R}}^{d \times N})$, so in particular
$$(\la \cdot / \e_n \ra ,\nabla u_n(\cdot)) - \left(\la \cdot / \e_n \ra , \nabla u(\cdot)
+ \nabla_y u_1(\cdot,\cdot/\e_n)\right) =(0,\e_n \nabla_x
u_1(\cdot,\cdot/\e_n)) \to 0$$ in measure. Thus from Lemma 6.3 in
Pedregal \cite{Pbook} the sequences $$\{(\la \cdot / \e_n \ra
,\nabla u_n)\} \text{ and } \{\left(\la \cdot / \e_n \ra , \nabla
u(\cdot) + \nabla_y u_1(\cdot,\cdot/\e_n)\right)\}$$ generate the
same Young measure. By Riemann-Lebesgue's Lemma we have for every
$\psi \in L^1(\O)$ and every $\varphi \in {\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d
\times N})$
\begin{eqnarray*}
\lim_{n \to +\infty}\int_\O \psi(x) \, \varphi\left(
\left\la\frac{x}{\e_n}\right\ra, \nabla u(x) + \nabla_y
u_1\left(x,\frac{x}{\e_n}\right) \right)\, dx\\
= \int_\O \int_Q \psi(x)\, \varphi(y, \nabla u(x) + \nabla_y
u_1(x,y) )\, dy \, dx \end{eqnarray*} which proves the claim. }
\end{example}
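A concrete instance of Example \ref{ex} (with data chosen only for illustration) makes the role of the fast variable $y$ explicit.

```latex
% Assumption: N = d = 1, u(x) := Fx for some F \in {\mathbb{R}}, and
% u_1(x,y) := (2\pi)^{-1}\sin(2\pi y) (independent of x). Then
\nabla u_n(x) = F + \cos\!\Bigl(\frac{2\pi x}{\e_n}\Bigr),
% and the two-scale gradient Young measure is the Dirac family
\nu_{(x,y)} = \delta_{F+\cos(2\pi y)}.
% By contrast, the ordinary Young measure generated by {\nabla u_n}
% alone is the y-average \int_0^1 \delta_{F+\cos(2\pi y)}\,dy at every
% x: the extra variable y records where within the period the
% oscillation takes place, information that the one-scale Young
% measure discards.
```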
\begin{example}\label{ex1}
{\rm Let $\{\e_n\} \to 0$, and let $u:\O\to{\mathbb{R}}^{d}$ and
$u_{2}:\O \times {\mathbb{R}}^N \times {\mathbb{R}}^N \to {\mathbb{R}}^d$ be smooth functions
such that $u_2(x,\cdot,\cdot)$ is separately $Q$-periodic with
respect to its second and third variable, for all $x \in \O$. Define
$$v_n(x):=u(x)+\e_n^2 u_{2}\left(x,\frac{x}{\e_n},\frac{x}{\e_n^2}\right).$$
Arguing as previously, both sequences $$\{(\la \cdot / \e_n \ra
,\nabla v_n)\}\text{ and }\{\left(\la \cdot / \e_n \ra , \nabla
u(\cdot) + \nabla_z u_2(\cdot,\cdot/\e_n,\cdot/\e_n^2)\right)\}$$
generate the same Young measure. Using once more the
Riemann-Lebesgue Lemma we have that for every $\psi \in L^1(\O)$ and
every $\varphi \in {\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$
\begin{eqnarray*} \lim_{n \to +\infty}\int_\O \psi(x) \,
\varphi\left( \left\la\frac{x}{\e_n}\right\ra, \nabla u(x) +
\nabla_z u_2\left(x,\frac{x}{\e_n},\frac{x}{\e_n^2}\right) \right)\, dx\\
= \int_\O \int_Q \int_Q \psi(x)\, \varphi(y, \nabla u(x) + \nabla_z
u_2(x,y,z) )\, dz\, dy \, dx. \end{eqnarray*} Hence, the two-scale
Young measure associated to $\{\nabla v_{n}\}$ is $$\nu_{(x,y)}:=
\int_Q \delta_{\nabla u(x) + \nabla_z u_{2} (x,y,z)} \,dz,$$ for
a.e.\ $(x,y) \in \O \times Q$, {\it i.e. }
$$\la \nu_{(x,y)}, \phi \ra= \int_Q \phi(\nabla u(x) + \nabla_z u_{2}
(x,y,z))\,dz \quad \text{ for all }\phi \in {\mathcal{C}}_0({\mathbb{R}}^{d \times
N}).$$ Note that in this example we do not get a Dirac mass because
there are oscillations occurring at different scales than $\e_n$,
namely at scale $\e_n^2$, that the two-scale Young measure misses
(see Valadier in \cite{V} for more details). }
\end{example}
\begin{rmk}\label{underlying}
{\rm Let $\{\e_n\}$, $\{u_n\}$ and $\nu$ be as in Definition
\ref{MGYM}. Since $\nabla u_n$ does not change when a constant is
added to $u_n$, there is no loss of generality in assuming that all
the functions $u_n$ have zero average. Moreover, let $\{u_{n_k}\}$ be a
subsequence of $\{u_n\}$. Then there exists a subsequence
$\{u_{n_{k_{j}}}\}$ and $u \in W^{1,p}(\O;{\mathbb{R}}^d)$ (with zero
average) such that $u_{n_{k_{j}}} \rightharpoonup u$ in
$W^{1,p}(\O;{\mathbb{R}}^d)$ and
$$\nabla u(x) = \int_Q \int_{{\mathbb{R}}^{d \times N}} \xi\,
d\nu_{(x,y)}(\xi)\, dy \quad \text{ a.e.\ in }\O$$ (see {\it e.g.}\
the proof of Lemma \ref{nec} below). It follows that $u$ is uniquely
defined because if $v$ is the weak $W^{1,p}(\O;{\mathbb{R}}^d)$-limit of
another subsequence of $u_{n_k}$ then
$$\nabla u(x) = \int_Q \int_{{\mathbb{R}}^{d \times N}} \xi\,
d\nu_{(x,y)}(\xi)\, dy=\nabla v(x) \quad \text{ a.e. in }\O,$$ which
implies that $u=v$ since they both have zero average. As a
consequence $u_n \rightharpoonup u$ in $W^{1,p}(\O;{\mathbb{R}}^d)$ and we
can show in a similar way that $u$ is also independent of the
sequence $\{\e_n\}$. The function $u$ is called {\it the underlying
deformation} of $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$.}
\end{rmk}
In the following lemma, we show that there is no loss of generality
in assuming that the generating sequences in Definition \ref{MGYM}
match the boundary condition of the underlying deformation.
\begin{lemma}\label{boundary}
Let $\{\e_n \} \to 0$ and $\{u_n\} \subset W^{1,p}(\Omega;{\mathbb{R}}^d)$ be
such that $u_n \rightharpoonup u$ in $W^{1,p}(\Omega;{\mathbb{R}}^d)$ for
some $u \in W^{1,p}(\Omega;{\mathbb{R}}^d)$. Suppose that $\{(\la
\cdot/\e_n\ra, \nabla u_n )\}$ generates the Young measure $\{
\nu_{(x,y)} \otimes dy \}_{x \in \O}$. Then there exists a sequence
$\{v_n\} \subset W^{1,p}(\Omega;{\mathbb{R}}^d)$ such that $v_n
\rightharpoonup u$ in $W^{1,p}(\Omega;{\mathbb{R}}^d)$, $v_n=u$ on a
neighborhood of $\partial \O$ and $\{(\la \cdot/\e_n\ra, \nabla v_n
)\}$ also generates $\{\nu_{(x,y)} \otimes dy \}_{x \in \O}$.
\end{lemma}
\begin{proof}
For any $k \in {\mathbb{N}}$ let $\O_k:=\left\{x\in \O:~ {\rm dist}(x,{\mathbb{R}}^N
\setminus \O)>1/k\right\}$ and let $\Phi_k \in {\mathcal{C}}^{\infty}_
{c}(\O;[0,1])$ be a cut-off function such that
$$\Phi_k:=\left\{\begin{array}{l}
1 \quad \text{if} ~ x \in \O_k,\\
0 \quad \text{if} ~ x \in \O \setminus \overline
\O_{k+1}
\end{array}\right.$$
and $|\nabla \Phi_k| \leqslant C\,k$, for some constant $C>0$. Let $w_{n,k} \in W^{1,p}(\O;{\mathbb{R}}^d)$ be
given by
$$w_{n,k}:=(1-\Phi_k)u+\Phi_k u_n,$$
\noindent so that
$$\nabla w_{n,k}=(1-\Phi_k)\nabla u+\Phi_k \nabla
u_n+(u_n-u)\otimes \nabla \Phi_k.$$ Since $u_n \to u$ strongly in
$L^p(\O;{\mathbb{R}}^d)$, we have
\begin{equation}\label{bdy1}
\lim_{k \to +\infty} \lim_{n \to +\infty}
{\|w_{n,k}-u\|}_{L^p(\O;{\mathbb{R}}^d)}=0
\end{equation}
and, as a consequence of
$$\lim_{k \to +\infty} \lim_{n \to +\infty} {\|(u_n-u)\otimes \nabla
\Phi_k\|}_{L^p(\O;{\mathbb{R}}^{d \times N})}=0,$$ it follows that
\begin{equation}\label{bdy3}
\sup_{n,k \in {\mathbb{N}}}\|\nabla w_{n,k}\|_{L^p(\O;{\mathbb{R}}^{d \times N})}
<+\infty.
\end{equation}
Let $z$ and $\varphi$ be in a countable dense
subset of $L^1(\O)$ and ${\mathcal{C}}_{0}({\mathbb{R}}^N \times {\mathbb{R}}^{d\times N}),$
respectively. Then
\begin{eqnarray}\label{bdy4}
&&\lim_{k \to +\infty} \lim_{n \to +\infty}\int_{\O}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla
w_{n,k}(x)\right)\, dx\nonumber\\
&&\hspace{2cm} = \lim_{k \to +\infty} \lim_{n \to +\infty}\int_{\O_k}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla
u_n(x)\right)\, dx\nonumber\\
&&\hspace{3cm} + \lim_{k \to +\infty} \lim_{n \to +\infty}\int_{\O
\setminus \O_k}z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla w_{n,k}(x)\right)\, dx\nonumber\\
&&\hspace{2cm} = \int_{\O} z(x) \int_Q \int_{{\mathbb{R}}^{d \times N}}
\varphi (y,\xi) \, d\nu_{(x,y)} \, dy \, dx.
\end{eqnarray}
By a diagonalization argument (see {\it e.g.} Lemma 7.2 in Braides,
Fonseca \& Francfort \cite{BFF}) and taking into account
(\ref{bdy1})-(\ref{bdy4}), we can find a sequence $\{k(n)\} \nearrow
+\infty$ such that, upon setting $v_n:=w_{n,k(n)}$, we have $v_n
\rightharpoonup u$ in $W^{1,p}(\O;{\mathbb{R}}^d)$, $v_n=u$ on a neighborhood
of $\partial \O$ and for every $z$ and $\varphi$ in a countable
dense subset of $L^1(\O)$ and ${\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d\times N})$,
respectively,
\begin{equation*}
\lim_{n \to +\infty}\int_{\O}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla v_n(x)\right)\,
dx = \int_\Omega z(x) \int_Q \int_{{\mathbb{R}}^{d \times N}} \varphi(y,\xi)
\,d\nu_{(x,y)}(\xi) \,dy\,dx.
\end{equation*}
\end{proof}
A two-scale gradient Young measure $\{\nu_{(x,y)}\}_{(x,y) \in \O
\times Q}$ is said to be {\it homogeneous} if the map $(x,y) \mapsto
\nu_{(x,y)}$ is independent of $x$. In this case, $\nu$ can be
identified with an element of $L^\infty_w(Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$
and we write $\{\nu_y\}_{y \in Q} \equiv\{\nu_{(x,y)}\}_{(x,y) \in
\O \times Q}.$\\
We next define the average of a map $\nu \in L^\infty_w(\O \times
Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ for which $\{\nu_{(x,y)}\}_{(x,y) \in \O
\times Q}$ is a two-scale gradient Young measure. This notion,
useful for the analysis developed in Section \ref{homogeneous}, will
provide an important example of a homogeneous two-scale gradient Young
measure.
\begin{defi}\label{ave} Let $\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times
N}))$ be such that $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a
two-scale gradient Young measure. The average of
$\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ (with respect to the
variable $x$) is the family $\{\overline \nu_{y}\}_{y \in Q}$
defined by
$$\la {\overline \nu}_y, \varphi \ra:=
- \hskip -1em \int_{\O}\int_{{\mathbb{R}}^{d \times N}}\varphi(\xi)\,d\nu_{(x,y)} \,dx$$ for every
$\varphi \in {\mathcal{C}}_0({\mathbb{R}}^{d \times N})$.
\end{defi}
If $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a two-scale
gradient Young measure, then it can be seen that $\overline
\mu:= \overline \nu_y \otimes dy$ is the average of $\{\mu_x\}_{x
\in \O}$ with $\mu_x:=\nu_{(x,y)} \otimes dy$ and $\mu\in
L^{\infty}_{w}(\O;{\mathcal{M}}({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N}))$. Thus,
$\overline \mu$ is a homogeneous Young measure by Definition
\ref{MGYM} and Theorem 7.1 in Pedregal \cite{Pbook}.
In the following lemma we prove that $\{\overline \nu_y\}_{y \in
Q}$ is actually a homogeneous two-scale gradient Young measure. We
will use the same kind of blow-up argument as in the proof of
Theorem 7.1 in Pedregal \cite{Pbook}, splitting $Q$ into suitable
subsets. However, contrary to \cite{Pbook}, we will not use
Vitali's Covering Theorem because the radii of these sets
(which may vary from one set to another) may interact with the
length scale of our problem, $\e$, in a way we are unable to
control. Instead, we construct a covering consisting of
subsets of fixed radius. It is enough for our purposes to consider
the case where the underlying deformation is affine and $\O=Q$.
\begin{lemma}\label{average}
Let $\nu \in L^\infty_w(Q \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ be such
that $\{\nu_{(x,y)}\}_{(x,y) \in Q \times Q}$ is a two-scale
gradient Young measure with underlying deformation $F\cdot$, for
$F\in {\mathbb{R}}^{d\times N}$. Then $\{\overline \nu_y\}_{y\in Q}$ is a
homogeneous two-scale gradient Young measure with the same underlying
deformation.
\end{lemma}
\begin{proof}
Note that by Definition \ref{ave} and Fubini's Theorem, it follows
that $y \mapsto \overline \nu_y$ is weakly*-measurable and thus
$\overline \nu \in L^\infty_w(Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$. We have to
show that for every sequence $\{\e_n \} \to 0$, there exists
$\{v_n\} \subset W^{1,p}(Q;{\mathbb{R}}^{d})$ such that $\{(\la \cdot/\e_n
\ra,\nabla v_n)\}$ generates $\overline
\mu:=\overline{\nu}_{y}\otimes dy$ and $v_n \rightharpoonup F\cdot$
in $W^{1,p}(Q;{\mathbb{R}}^{d}).$
Let $\{u_n\} \subset W^{1,p}(Q;{\mathbb{R}}^d)$ be such that $\{(\la n \,
\cdot \ra, \nabla u_n)\}$ generates $\{\nu_{(x,y)} \otimes dy\}_{x
\in Q}$ and $u_n \rightharpoonup F\cdot$ in $W^{1,p}(Q;{\mathbb{R}}^d)$ (see
Remark \ref{underlying}). Without loss of generality we may assume
that $u_n(x)= Fx$ on a neighborhood of $\partial Q$ (see Lemma
\ref{boundary}).
Let $\{\e_n\} \to 0$ and for each $n$ define
$\rho_n:=\e_n [1/\sqrt\e_n]$. Then there exist $m_n \in {\mathbb{N}}$, $a_i^n
\in \rho_n {\mathbb{Z}}^N \cap Q$ and a measurable set $E_n \subset Q$ with
${\mathcal{L}}^N(E_n) \to 0$ such that
$$Q= \bigcup_{i=1}^{m_n}\big(a^n_{i}+\rho_{n}Q\big) \cup E_n.$$
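Note in passing that these two scales are compatible; assuming, as we may, that $\e_n \leqslant 1$ for all $n$, we have
$$\frac{\rho_n}{\e_n}=\left[\frac{1}{\sqrt{\e_n}}\right] \in {\mathbb{N}}, \qquad \rho_n \leqslant \sqrt{\e_n} \to 0 \qquad \text{and} \qquad \frac{\e_n}{\rho_n}=\frac{1}{[1/\sqrt{\e_n}]}\leqslant 2\sqrt{\e_n} \to 0,$$
where the last estimate follows from $[t] \geqslant t/2$ whenever $t \geqslant 1$.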
Define
$$v_n(x):=\left\{ \begin{array}{ll}\rho_n
u_{\rho_n/\e_n}\left(\frac{x-a^n_{i}}{\rho_n}\right)+Fa^n_{i}&
\text{ if } x\in a^n_{i}+\rho_n Q \text{ and } i \in
\{1,\ldots,m_n\},\\\\
Fx & \text{ otherwise}. \end{array}\right.$$ Note that the previous
definition makes sense since $\rho_n/\e_n \in {\mathbb{N}}$. We remark that
$\{v_n\} \subset W^{1,p}(Q;{\mathbb{R}}^d)$ and $v_n \rightharpoonup F\cdot$
in $W^{1,p}(Q;{\mathbb{R}}^d)$ since $u_n \rightharpoonup F\cdot$ in this
space. Let $z \in {\mathcal{C}}_c(Q)$ and $\varphi\in {\mathcal{C}}_{0}({\mathbb{R}}^N \times
{\mathbb{R}}^{d\times N})$. Then we have
\begin{eqnarray*}
&&\int_{Q}z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla v_n(x)\right)\, dx\\
&&\hspace{1cm}= \sum_{i=1}^{m_n} \int_{a^n_{i}+\rho_n Q}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla u_{\rho_n/\e_n}\left(\frac{x-a^n_{i}}{\rho_n}\right)\right)\, dx\\
&&\hspace{1.5cm} + \int_{E_n}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,
F\right)\, dx \\
&&\hspace{1cm}=\sum_{i=1}^{m_n}
z(a^n_{i})\int_{a^n_{i}+\rho_nQ}\varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla u_{\rho_n/\e_n}\left(\frac{x-a^n_{i}}{\rho_n}\right)\right)\,
dx\\
&&\hspace{1.5cm} +\sum_{i=1}^{m_n} \int_{a^n_{i}+\rho_nQ}
\left(z(x)-z(a^n_{i})\right)
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla
u_{\rho_n/\e_n}\left(\frac{x-a^n_{i}}{\rho_n}\right)\right)\,
dx\\
&&\hspace{1.5cm} + \int_{E_n}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, F\right)\, dx,
\end{eqnarray*}
and, consequently,
\begin{eqnarray}
&&\int_{Q}z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla v_n(x)\right)\, dx\nonumber\\
&&\hspace{2cm}= \sum_{i=1}^{m_n}\rho_n^N
z(a^n_{i})\int_{Q}\varphi\left(\left\la\frac{a^n_{i}+\rho_nx}{\e_n}\right\ra,
\nabla u_{\rho_n/\e_n}(x)\right)\, dx\nonumber\\
&&\hspace{3cm} + o(1), \text{ as }n \to +\infty
\end{eqnarray}
by changing variables, using the uniform continuity of $z$ and the
fact that ${\mathcal{L}}^N(E_n) \to 0$. Hence, as $a_i^n/\e_n \in {\mathbb{Z}}^N$, it
follows that
\begin{eqnarray}\label{1333}
&&\int_{Q}z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla v_n(x)\right)\, dx\nonumber\\
&&\hspace{2cm}= \sum_{i=1}^{m_n}\rho_n^N
z(a^n_{i})\int_{Q}\varphi\left(\left\la\frac{x}{\e_n/\rho_n}\right\ra,
\nabla u_{\rho_n/\e_n}(x)\right)\, dx\nonumber\\
&&\hspace{3cm} + o(1), \text{ as }n \to +\infty,
\end{eqnarray}
and passing to the limit in (\ref{1333}) and using Definition
\ref{ave}, we conclude that
\begin{eqnarray*}
&&\lim_{n \to +\infty}\int_{Q}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla v_n(x)\right)\,
dx\\
&&\hspace{2cm}= \int_{Q} z(x)\, dx \int_{Q} \int_Q \int_{{\mathbb{R}}^{d
\times N}}
\varphi(y,\xi)\, d\nu_{(x,y)}(\xi)\,dy \, dx\\
&&\hspace{2cm}= \la \overline \nu, \varphi \ra \int_{Q} z(x)\, dx.
\end{eqnarray*}
Since by density the previous equality holds for every $z \in
L^1(Q)$, the sequence $\{(\la\cdot/\e_n\ra,\nabla v_n)\}$ generates the
homogeneous Young measure $\overline \nu_y \otimes dy$ and,
consequently, $\{\overline\nu_y\}_{y \in Q}$ is a homogeneous
two-scale gradient Young measure.
\end{proof}
\section{Proof of Theorem \ref{bbs}}\label{Proof1}
\noindent The aim of this section is to prove Theorem \ref{bbs}. We
start by introducing the space ${\mathcal{E}}_p$ of all continuous functions
$f : \overline Q \times {\mathbb{R}}^{d \times N} \to {\mathbb{R}}$ such that the
limit $$\lim_{|\xi| \to +\infty} \frac {f(y,\xi)}{1+{|\xi|}^p}$$
exists uniformly with respect to $y\in \overline Q$. As an example,
the function $(y,\xi) \mapsto a(y)|\xi|^p$, where $a \in
{\mathcal{C}}(\overline Q)$, is in ${\mathcal{E}}_p$.
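By way of contrast, a function whose normalized values oscillate at infinity fails to belong to this space: for instance,
$$(y,\xi) \mapsto \sin(|\xi|)\,\big(1+{|\xi|}^p\big)$$
is not in ${\mathcal{E}}_p$, since the quotient $\sin(|\xi|)$ admits no limit as $|\xi| \to +\infty$.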
It can be checked that ${\mathcal{E}}_p$ is a Banach space under the norm
\begin{equation*}
{\|f\|}_{{\mathcal{E}}_p}:= \sup_{y \in \overline Q, \, \xi \in {\mathbb{R}}^{d \times
N}} \frac {|f(y,\xi)|}{1+{|\xi|}^p}.
\end{equation*}
In addition, ${\mathcal{E}}_p$ is isomorphic to the space ${\mathcal{C}}\big({\overline Q}
\times ({\mathbb{R}}^{d \times N} \cup \{\infty\})\big)$ under the map
$$\begin{array}{llll}
& {\mathcal{E}}_p & \longrightarrow & {\mathcal{C}}\big({\overline Q} \times
({\mathbb{R}}^{d \times N} \cup \{\infty\}) \big)\\
\displaystyle & f & \longmapsto & \displaystyle (y,\xi) \mapsto
\left\{\begin{array}{lll}
\hspace{0.3cm} \frac{f(y,\xi)}{1+|\xi|^p}& \text{ if }&(y,\xi)\in \overline{Q}\times {\mathbb{R}}^{d\times N}\\
\lim\limits_{|\xi| \to +\infty} \frac {f(y,\xi)}{1+{|\xi|}^p}&
\text{ if } & |\xi|=+\infty,
\end{array}\right.
\end{array}$$
where ${\mathbb{R}}^{d \times N} \cup \{\infty\}$ denotes the one-point
compactification of ${\mathbb{R}}^{d \times N}$, and, consequently, it is
separable. Furthermore, for all $f\in {\mathcal{E}}_p$ there exists a constant
$c>0$ such that
\begin{equation}\label{Ep}
|f(y,\xi)| \leqslant c(1+|\xi|^p), \quad \text{ for all }(y,\xi) \in
\overline Q \times {\mathbb{R}}^{d \times N}.
\end{equation}
We denote by $({\mathcal{E}}_p)'$ the dual space of ${\mathcal{E}}_p$ and the brackets
$\la \cdot,\cdot\ra_{({\mathcal{E}}_p)',{\mathcal{E}}_p}$ stand for the duality product
between $({\mathcal{E}}_p)'$ and ${\mathcal{E}}_p$.
\subsection{Necessity}
\noindent We start by showing that conditions i)-iii) in
(\ref{i})-(\ref{iii}) are necessary. Precisely, we prove the
following result.
\begin{lemma}\label{nec} Let $\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times
N}))$ be such that $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a
{\rm two-scale gradient Young measure}. Then
\begin{itemize}
\item[i)] there exist $u \in W^{1,p}(\Omega; {\mathbb{R}}^d)$ and $u_1 \in
L^{p}(\Omega; W_{per}^{1,p}(Q;{\mathbb{R}}^d))$ such that
$$\int_{{\mathbb{R}}^{d \times N}} \xi \, d\nu_{(x,y)}(\xi) = \nabla u(x) + \nabla_y u_1(x,y)
\quad \text{ for a.e.\ }(x,y) \in \O \times Q;$$
\item[ii)] for every $f\in {\mathcal{E}}_p$ we have that
\begin{equation*}
\int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi) \, d\nu_{(x,y)}(\xi)\, dy
\geqslant f_{\rm hom}(\nabla u(x)) \quad \text{ for a.e.\ }x \in \O,
\end{equation*}
where $f_{\rm hom}$ is given by (\ref{fhom1}); \item[iii)] $\displaystyle
(x,y) \mapsto \int_{{\mathbb{R}}^{d \times N}} {|\xi|}^p d\nu_{(x,y)}(\xi)
\in L^1(\O \times Q).$
\end{itemize}
\end{lemma}
\begin{proof}
Let $\{\nu_{(x,y)}\}_{(x,y) \in\O \times Q}$ be a two-scale gradient
Young measure.
We start by proving that i) holds. By Definition \ref{MGYM} and
Remark \ref{underlying} there exists $u \in W^{1,p}(\O;{\mathbb{R}}^d)$ such
that for every sequence $\{\e_n\} \to 0$ one can find $\{u_n\}
\subset W^{1,p}(\O;{\mathbb{R}}^d)$ such that $\{(\la \cdot/\e_n\ra ,\nabla
u_n) \}$ generates the Young measure $\{\nu_{(x,y)} \otimes dy\}_{x
\in \O}$ and $u_n \rightharpoonup u$ in $W^{1,p}(\O;{\mathbb{R}}^d)$. Up to a
subsequence (still denoted by $u_n$), we can also assume that
$\{|\nabla u_n|^p\}$ is equi-integrable (see the Decomposition Lemma
in Fonseca, M\"uller \& Pedregal \cite{FMP}) and that there exists a
function $u_1 \in L^{p}(\O;W^{1,p}_{\rm per} (Q;{\mathbb{R}}^d))$ such that
the sequence $\{\nabla u_n\}$ two-scale converges to $\nabla u +
\nabla_y u_1$ (see {\it e.g.}\ Theorem 13 in Lukkassen, Nguetseng \&
Wall \cite{LNW}; see also Allaire \cite{A} or Nguetseng \cite{N1}).
Consequently, for all $\phi \in {\mathcal{C}}^\infty_c(\O \times Q;{\mathbb{R}}^{d
\times N})$ we have that
\begin{eqnarray}\label{2scaleconv}&&\lim_{n \to +\infty} \int_\O \nabla u_n(x) \cdot
\phi\left(x,\left\la\frac{x}{\e_n}\right\ra\right)dx\nonumber\\
&&\hspace{2cm} = \int_\O \int_Q (\nabla u(x) + \nabla_y u_1(x,y))
\cdot \phi(x,y)\, dy\, dx.\end{eqnarray} Set $f(x,y,\xi)=\xi \cdot
\phi(x,y)\,\,$ for $(x,y,\xi) \in \O\times Q\times {\mathbb{R}}^{d \times
N}$. As $f$ is a Carath\'eodory integrand (measurable in $x$ and
continuous in $(y,\xi)$) and the sequence $\{f(\cdot,\la
\cdot/\e_n\ra,\nabla u_n(\cdot))\}$ is equi-integrable, by Theorem
\ref{young} v) we get that
\begin{equation}\label{YM} \lim_{n \to +\infty} \int_\O \nabla u_n(x) \cdot
\phi\left(x,\left\la\frac{x}{\e_n}\right\ra\right)dx= \int_\O \int_Q
\int_{{\mathbb{R}}^{d \times N}} \xi \cdot \phi(x,y)\, d\nu_{(x,y)}(\xi)\,
dy\, dx.
\end{equation}
Consequently, from (\ref{2scaleconv}) and (\ref{YM}) we get for
a.e.\ $(x,y) \in \O \times Q$
$$\int_{{\mathbb{R}}^{d \times N}} \xi \, d\nu_{(x,y)}(\xi) = \nabla u(x) +
\nabla_y u_1(x,y),$$ which proves i).
Let us see now that iii) is satisfied. As $\{\nabla u_n\}$ is
$p$-equi-integrable then by Theorem \ref{young} v) we get that
$$\int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} |\xi|^p \,
d\nu_{(x,y)}(\xi)\, dy\, dx = \lim_{n \to +\infty} \int_\O |\nabla
u_n|^p\, dx < +\infty,$$ which completes the proof of iii).
Finally, let us see that condition ii) holds by application of the
classical $\G$-convergence result for the homogenization of integral
functionals (see Braides \cite{B1} or M\"{u}ller \cite{Mu}). Let $f
\in {\mathcal{E}}_p$. In particular $f$ satisfies the $p$-growth condition
(\ref{Ep}) but it is not necessarily $p$-coercive. For every $\a>0$
and $M>0$, define $f_{M,\a}(y,\xi):=f_M(y,\xi) + \a |\xi|^p$ where
$f_M(y,\xi)=\max\{-M,f(y,\xi)\}$. Then
$$\a|\xi|^p - M \leqslant
f_{M,\a}(y,\xi) \leqslant (c+\a) (1+|\xi|^p), \quad \text{ for all
}(y,\xi) \in \overline Q \times {\mathbb{R}}^{d \times N}.$$ Hence, by e.g.\
Theorem 14.5 in Braides \cite{B1} ($\G$-$\liminf$ inequality) and
since $f_{M,\a} \geqslant f$, we get that for every $A \in {\mathcal{A}}(\O)$
\begin{eqnarray}\label{2056}
\liminf_{n \to +\infty} \int_A f_{M,\a} \left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n(x) \right)dx & \geqslant & \int_A
(f_{M,\a})_{\rm hom}(\nabla u(x))\, dx\nonumber\\
& \geqslant & \int_A f_{\rm hom}(\nabla u(x))\, dx
\end{eqnarray}
where $f_{\rm hom}$ is defined in (\ref{fhom1}). On the other hand,
\begin{eqnarray}\label{2057}
\liminf_{n \to +\infty} \int_A f_{M,\a}\left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n(x) \right)dx & \leqslant & \liminf_{n
\to +\infty} \int_A f_M \left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n(x) \right)dx\nonumber\\
&& + \a \sup_{n \in {\mathbb{N}}} \int_A |\nabla u_n|^p\, dx.
\end{eqnarray}
Gathering (\ref{2056}) and (\ref{2057}), and passing to the limit
as $\a \to 0$, we obtain that
\begin{equation}\label{gliminfca}
\liminf_{n \to +\infty} \int_A f_M\left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n (x)\right)dx \geqslant \int_A f_{\rm
hom}(\nabla u(x))\, dx.
\end{equation}
Define the set
$$A_n^M:= \left\{x \in A : \; f\left(\left\la
\frac{x}{\e_n}\right\ra, \nabla u_n(x) \right) \leqslant -M \right\}$$
and notice that by Chebyshev's Inequality $${\mathcal{L}}^N (A_n^M) \leqslant
c/M,$$ for some constant $c >0$ independent of $n$ and $M$. Then
\begin{eqnarray}\label{cali}
\int_A f_M \left(\left\la \frac{x}{\e_n}\right\ra, \nabla u_n(x)
\right) dx & = & -M {\mathcal{L}}^N(A_n^M)\nonumber\\
&& + \int_{A\setminus A_n^M} f\left(\left\la
\frac{x}{\e_n}\right\ra, \nabla u_n(x) \right)dx\nonumber\\
& \leqslant & \int_{A\setminus A_n^M} f\left(\left\la
\frac{x}{\e_n}\right\ra, \nabla u_n(x) \right)dx.
\end{eqnarray}
As $\{ |\nabla u_n|^p \}$ is equi-integrable, by the $p$-growth
condition (\ref{Ep}), it follows that $\{f(\la \cdot/\e_n\ra,\nabla
u_n)\}$ is also equi-integrable. Thus
\begin{equation}\label{matin}
\int_{A_n^M}f\left(\left\la \frac{x}{\e_n}\right\ra, \nabla u_n(x)
\right)dx \xrightarrow[M \to +\infty]{} 0 \end{equation} uniformly
with respect to $n \in {\mathbb{N}}$. By (\ref{gliminfca}), (\ref{cali})
and (\ref{matin}) we get that
\begin{equation}\label{gliminfbis}
\liminf_{n \to +\infty} \int_A f\left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n (x)\right)dx \geqslant \int_A f_{\rm
hom}(\nabla u(x))\, dx.
\end{equation}
Finally, since $\{f(\la \cdot/\e_n\ra,\nabla u_n)\}$ is
equi-integrable, by Theorem \ref{young} v) we have that
\begin{equation}\label{2scale}
\lim_{n \to +\infty} \int_A
f\left(\left\la\frac{x}{\e_n}\right\ra,\nabla u_n (x)\right)dx =
\int_A \int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\nu_{(x,y)}(\xi)\,
dy\, dx
\end{equation}
and we conclude the proof of ii) thanks to (\ref{gliminfbis}) and
(\ref{2scale}) together with a localization argument.
\end{proof}
\subsection{Sufficiency}
\noindent We show here that these conditions are also sufficient to
characterize two-scale gradient Young measures. Following the lines
of Kinderlehrer \& Pedregal \cite{KP}, we first study the
homogeneous case. The non-homogeneous one will be obtained through a
suitable approximation of two-scale gradient Young measures by
piecewise constant ones.
\subsubsection{Homogeneous case}\label{homogeneous}
\noindent Our aim here is to prove the following result.
\begin{lemma}\label{homo}
Let $F \in {\mathbb{R}}^{d \times N}$ and $\nu \in L_w^\infty(Q;{\mathcal{M}}({\mathbb{R}}^{d
\times N}))$ be such that $\nu_y \in {\mathcal{P}}({\mathbb{R}}^{d \times N})$ for
a.e.\ $y\in Q$. Assume that
\begin{equation}\label{n1}\displaystyle F =
\int_Q \int_{{\mathbb{R}}^{d \times N}}\xi \, d\nu_y(\xi)\,
dy,\end{equation}
\begin{equation}\label{n2}f_{\rm hom}(F) \leqslant \int_Q
\int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\nu_y(\xi)\; dy \end{equation}
for every $f \in {\mathcal{E}}_p$, and that
\begin{equation}\label{powerp}\int_Q \int_{{\mathbb{R}}^{d \times N}}
|\xi|^p \, d\nu_y(\xi)\, dy <+\infty.\end{equation} Then
$\{\nu_y\}_{y \in Q}$ is a homogeneous two-scale gradient Young
measure.
\end{lemma}
As in Kinderlehrer \& Pedregal \cite{KP}, the argument in this case
will rest on the Hahn-Banach Separation Theorem, which implies that
any element $\nu \in L^\infty_w(Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ satisfying
the hypotheses of Theorem \ref{bbs} belongs to a suitable
convex and weak*-closed set of homogeneous two-scale gradient
Young measures. To prove Lemma \ref{homo} we start by fixing some
notation and stating auxiliary lemmas.
For every $F \in {\mathbb{R}}^{d \times N}$ let
\begin{eqnarray}\label{MF}
M_F & \hspace{-0.8cm} := \hspace{-0.6cm}& \bigg\{\nu \in
L^\infty_w(Q;{\mathcal{M}}({\mathbb{R}}^{d \times
N})): ~~~~\{\nu_y\}_{y \in Q} \text{ is a homogeneous two-scale }\nonumber\\
&&\hspace{0.5cm}\text{gradient Young measure and } \int_Q
\int_{{\mathbb{R}}^{d \times N}}\xi\, d\nu_{y}(\xi)\, dy = F \bigg\}.
\end{eqnarray}
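As a simple illustration, the set $M_F$ is never empty: the constant family $\nu_y:=\delta_F$ belongs to $M_F$. Indeed, its barycenter is clearly $F$ and, taking $u_n(x):=Fx$, the Riemann-Lebesgue Lemma yields, for every $z \in L^1(Q)$ and $\varphi \in {\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$,
$$\lim_{n \to +\infty}\int_Q z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra, F\right)dx = \int_Q z(x)\, dx \int_Q \varphi(y,F)\, dy,$$
so that $\{(\la \cdot/\e_n\ra,\nabla u_n)\}$ generates the homogeneous Young measure $\delta_F \otimes dy$.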
\begin{rmk}\label{indep}{\rm
The set $M_F$ is independent of $\O$, {\it i.e. } if $\nu \in M_F$ and
$\O'\subset {\mathbb{R}}^{N}$ is another domain, then for all $\{\e_n\} \to
0$ there exists a sequence $\{v_n\}\subset W^{1,p}(\O';{\mathbb{R}}^d)$ such that
$\{( \la\cdot / \e_n \ra,\nabla v_n)\}$ generates $\nu_y \otimes
dy$. Indeed, let $r>0$ be such that $\O' \subset r \O$. Fix an
arbitrary sequence $\{\e_n\} \to 0$ and define $\delta_n=\e_n/r$.
Then there exists a sequence $\{u_n\}\subset W^{1,p}(\O;{\mathbb{R}}^d)$ such
that $\{( \la \cdot / \delta_n \ra,\nabla u_n)\}$ generates the
homogeneous Young measure $\nu_y \otimes dy$. Define now $v_n(x)=r
\, u_n(x/r)$ so that $v_n$ belongs to $W^{1,p}(r\O ;{\mathbb{R}}^d)$ and thus
{\it a fortiori} to $W^{1,p}(\O';{\mathbb{R}}^d)$. A simple change of
variable shows that the sequence $\{(\la \cdot / \e_n \ra, \nabla
v_n)\}$ generates the homogeneous Young measure $\nu_y \otimes dy$
as well. }\end{rmk}
The next technical result allows us to construct two-scale
gradient Young measures from measures of this class that are
defined on disjoint subsets of $\O$. It will be of use in Lemma
\ref{closeconvex} below to prove the convexity of the set $M_F$.
\begin{lemma}\label{colagem de MGYM}
Let $D$ be an open subset of $\O$ with Lipschitz boundary, and let
$\mu$, $\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ be
such that $\{\mu_{(x,y)}\}_{(x,y) \in \O \times Q}$ and
$\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ are two-scale gradient
Young measures with same underlying deformation $u \in
W^{1,p}(\O;{\mathbb{R}}^d)$. Let
$$\sigma_{(x,y)}:=\left\{\begin{array}{ll}
\mu_{(x,y)} & \text{if } ~ (x,y) \in D \times Q\\
\nu_{(x,y)} & \text{if } ~ (x,y) \in (\O \setminus D) \times Q.
\end{array}\right.$$
\noindent Then $\sigma \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times
N}))$ and $\{\sigma_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a
two-scale gradient Young measure with underlying deformation $u
\in W^{1,p}(\O;{\mathbb{R}}^d)$.
\end{lemma}
\begin{proof}
We have to show that for every sequence $\{\e_n\} \to 0$ there
exists $\{w_n\} \subset W^{1,p}(\O;{\mathbb{R}}^d)$ such that $w_n
\rightharpoonup u$ in $W^{1,p}(\O;{\mathbb{R}}^d)$ and $\{(\la \cdot/\e_n
\ra, \nabla w_n)\}$ generates the Young measure $\{\sigma_{(x,y)}
\otimes dy\}_{x \in \O}$.
By Lemma \ref{boundary}, there exist sequences $\{u_n\} \subset
W^{1,p}(D;{\mathbb{R}}^d)$ and $\{v_n\} \subset W^{1,p}(\O \setminus
{\overline D};{\mathbb{R}}^d)$ such that $u_n \rightharpoonup u$ in
$W^{1,p}(D;{\mathbb{R}}^d)$, $v_n \rightharpoonup u$ in $W^{1,p}(\O \setminus
\overline D;{\mathbb{R}}^d)$, $u_n=v_n=u$ on $\partial D$ and such that
$\{(\la \cdot/\e_n \ra, \nabla u_n)\}$ and $\{(\la \cdot/\e_n \ra,
\nabla v_n)\}$ generate, respectively, the Young measures
$\{\mu_{(x,y)} \otimes dy\}_{x \in D}$ and $\{\nu_{(x,y)} \otimes
dy\}_{x \in \O\setminus \overline D}$.
Define
$$w_n:=\left\{\begin{array}{ll}
u_n & \text{if } x \in D,\\
v_n & \text{if } x \in \O \setminus \overline D
\end{array}\right.$$
Then $\{w_n\} \subset W^{1,p}(\O;{\mathbb{R}}^d)$, $w_n \rightharpoonup u$ in
$W^{1,p}(\O;{\mathbb{R}}^d)$ and given $z \in L^1(\O)$ and $\varphi \in
{\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$ we have
\begin{eqnarray*}
&&\lim_{n \to +\infty} \int_{\O} z(x) \,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla w_n(x)\right)
\,dx\\
&&\hspace{2cm}= \lim_{n \to +\infty} \int_{D} z(x) \,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla u_n(x)\right) \,dx \\
&&\hspace{2.5cm}+ \lim_{n \to +\infty} \int_{\O \setminus D} z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,
\nabla v_n(x)\right) \,dx\\
&&\hspace{2cm}= \int_{\O} z(x) \int_Q \int_{{\mathbb{R}}^{d \times N}}
\varphi(y,\xi) \,d\sigma_{(x,y)}(\xi) \,dy\,dx,
\end{eqnarray*}
which concludes the proof.
\end{proof}
As a consequence of Remark \ref{indep}, there is no loss of
generality in assuming hereafter that $\O=Q$. We can now prove the
following result.
\begin{lemma}\label{closeconvex}
$M_{F}$ is a convex and weak*-closed subset of $({\mathcal{E}}_p)'$.
\end{lemma}
\begin{proof}
We identify every element $\nu \in M_F$ with a homogeneous Young
measure $\nu_y \otimes dy$.
We start by showing that $M_{F}$ is a subset of $({\mathcal{E}}_p)'$. For this
purpose let $\nu \in M_F$. Arguing exactly as
in the proof of Lemma \ref{nec} one can show that
$$K:=\int_Q \int_{{\mathbb{R}}^{d \times N}}|\xi|^p\,d\nu_y(\xi)\, dy <
+\infty.$$ Hence, using the fact that $\nu_y$ are probability
measures for a.e.\ $y \in Q$, for every $f \in {\mathcal{E}}_p$ we have that
\begin{eqnarray*}
\int_Q \int_{{\mathbb{R}}^{d \times N}}f(y,\xi)\, d\nu_y(\xi)\, dy & \leqslant &
\|f \|_{{\mathcal{E}}_p}\int_Q \int_{{\mathbb{R}}^{d \times
N}}(1+|\xi|^p)\,d\nu_y(\xi)\, dy\\
& = & (1+K)\|f\|_{{\mathcal{E}}_p}.
\end{eqnarray*}
As a consequence, $M_F \subset ({\mathcal{E}}_p)'$.
Let us now prove that $M_F$ is closed for the weak*-topology of
$({\mathcal{E}}_p)'$. Denoting by $\overline{M_F}$ the closure of $M_F$ for the
weak*-topology of $({\mathcal{E}}_p)'$, it is enough to show that
$\overline{M_F}\subset M_F$. Since ${\mathcal{E}}_p$ is separable, the weak*-topology of $({\mathcal{E}}_p)'$ is metrizable on bounded sets and
thus, if $\nu \in \overline{M_F}$, there exists a sequence
$\{\nu^k\} \subset M_F$ such that $\nu^k \xrightharpoonup{*}{} \nu$
in $({\mathcal{E}}_p)'$. Hence, since the map $(y,\xi)\mapsto \xi_{ij}$ is in
${\mathcal{E}}_p$ (where $1\leqslant i \leqslant d$ and $1 \leqslant j \leqslant N$), we get,
from the definition of weak*-convergence in $({\mathcal{E}}_p)'$, that
\begin{equation}\label{condF}
\int_Q \int_{{\mathbb{R}}^{d \times N}} \xi \,d\nu_y(\xi)\, dy=\lim_{k \to
+\infty}\int_Q \int_{{\mathbb{R}}^{d \times N}} \xi \,d\nu^k_y(\xi)\, dy=F.
\end{equation}
It remains to show that $\{{\nu_y}\}_{y\in Q}$ is a homogeneous
two-scale gradient Young measure. By definition, given $\{\e_n\} \to 0$, for
each $k \in {\mathbb{N}}$ there exist sequences $\{u^k_n\}_{n \in {\mathbb{N}}}
\subset W^{1,p}(Q;{\mathbb{R}}^{d})$ such that $\{(\la\cdot/\e_n \ra, \nabla
u^k_n)\}_{n \in {\mathbb{N}}}$ generate the homogeneous Young measures
$\nu^k_y \otimes dy$. For every $(z,\varphi)$ in a countable dense
subset of $L^{1}(Q)\times {\mathcal{C}}_{0}({\mathbb{R}}^N \times {\mathbb{R}}^{d\times N})$ we
have that
\begin{eqnarray*}&&\lim_{k\to +\infty}\lim_{n \to +\infty}
\int_{Q}z(x)\, \varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla
u^k_n(x)\right)\, dx\\
&&\hspace{2cm}= \lim_{k \to +\infty}
\int_{Q}\int_{Q}\int_{{\mathbb{R}}^{d\times N}} z(x)\, \varphi(y,\xi)\,
d\nu^k_{y}(\xi) \,dy\, dx\\
&&\hspace{2cm}= \int_{Q}z(x)\, dx \int_{Q} \int_{{\mathbb{R}}^{d\times N}}
\varphi(y,\xi)\, d\nu_{y}(\xi) \,dy,
\end{eqnarray*}
where we have used the fact that ${\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times
N}) \subset {\mathcal{E}}_p$ in the second equality. By a diagonalization
argument we can find a sequence $\{k(n)\}\nearrow +\infty$ such
that, setting $v_n:=u^{k(n)}_n$, we have that
\begin{eqnarray*}
\lim_{n \to +\infty}\int_{Q}z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra, \nabla
v_n(x)\right)\, dx = \int_{Q}z(x)\, dx \int_{Q}\int_{{\mathbb{R}}^{d\times
N}} \varphi(y,\xi) \, d\nu_{y}(\xi) \,dy.
\end{eqnarray*}
Thus, $\{\nu_y\}_{y\in Q}$ is a homogeneous two-scale gradient Young measure,
which together with (\ref{condF}) implies that $\nu\in M_F.$
Next we show that $M_{F}$ is convex. Given $\mu$, $\nu\in M_F$ and
$t \in (0,1)$ we have to show that $t\mu+(1-t)\nu \in M_{F}$. Let
$D=(0,t)\times {(0,1)}^{N-1} \subset Q$ and define
$$\sigma_{(x,y)}:=\left\{\begin{array}{ll}
\mu_y & \text{if} ~ (x,y) \in D \times Q\\
\nu_y & \text{if} ~ (x,y) \in (Q \setminus D) \times Q.
\end{array}\right.$$
By Lemma \ref{colagem de MGYM} we have that
$\{\sigma_{(x,y)}\}_{(x,y) \in Q \times Q}$ is a two-scale
gradient Young measure and from Lemma \ref{average} its average
$\{\overline \sigma_y\}_{y \in Q}$ is a homogeneous two-scale
gradient Young measure. We claim that
$\overline{\sigma}=t\mu+(1-t)\nu$. Indeed, for every $\varphi \in
L^1(Q ; {\mathcal{C}}_0({\mathbb{R}}^{d \times N}))$
\begin{eqnarray*}
\int_{Q}\int_{{\mathbb{R}}^{d \times N}}\varphi(y,\xi)\, d\overline
\sigma_y(\xi)\, dy &=& \int_{Q} \int_{Q}\int_{{\mathbb{R}}^{d \times
N}}\varphi(y,\xi)\,d\sigma_{(x,y)} (\xi)\, dy \,dx\\
& = & t\int_Q\int_{{\mathbb{R}}^{d \times N}}\varphi(y,\xi)\,
d\mu_y (\xi)\, dy\\
&& + (1-t) \int_Q\int_{{\mathbb{R}}^{d \times N}}\varphi(y,\xi)\, d\nu_y
(\xi)\, dy.
\end{eqnarray*}
In particular, $$\int_{Q}\int_{{\mathbb{R}}^{d \times N}}\xi\, d\overline
\sigma_y(\xi)\, dy =
t\int_Q\int_{{\mathbb{R}}^{d \times N}}\xi\, d\mu_y (\xi)\, dy\\
+ (1-t) \int_Q\int_{{\mathbb{R}}^{d \times N}}\xi\, d\nu_y (\xi)\, dy=F,$$
and thus $\overline{\sigma}=t\mu+(1-t)\nu \in M_F$.
\end{proof}
We are now in a position to show the sufficiency of conditions i)-iii)
in (\ref{i})-(\ref{iii}) in the homogeneous case.
\begin{proof}[Proof of Lemma \ref{homo}]
Let $F\in {\mathbb{R}}^{d\times N}$ and $\nu\in L_{w}^{\infty}(Q;
{\mathcal{M}}({\mathbb{R}}^{d\times N}))$ be such that $\nu_y \in {\mathcal
P}({\mathbb{R}}^{d\times N})$ for a.e. $y\in Q$, and
(\ref{n1})-(\ref{powerp}) hold. We will proceed by contradiction
using the Hahn-Banach Separation Theorem. Assume that
$\{\nu_y\}_{y\in Q}$ is not a homogeneous two-scale gradient Young measure.
By Lemma \ref{closeconvex}, $M_F$ is a convex and weak* closed
subset of $({\mathcal{E}}_p)'$. Moreover, by (\ref{powerp}) and the fact that
$\{\nu_y\}_{y\in Q}$ is a family of probability measures, we get
that $\nu \in ({\mathcal{E}}_p)'$ as well (see {\it e.g.}\ the first part of
the proof of Lemma \ref{closeconvex}). As $\nu \not\in M_F$,
according to the Hahn-Banach Separation Theorem, we can separate
$\nu$ from $M_F$, {\it i.e. } there exist a weak*-continuous linear map $L :
({\mathcal{E}}_p)' \to {\mathbb{R}}$ and $\alpha \in {\mathbb{R}}$ such that $\langle
L,\nu\rangle_{({\mathcal{E}}_p)',{\mathcal{E}}_p} < \alpha$ and $\langle
L,\mu\rangle_{({\mathcal{E}}_p)',{\mathcal{E}}_p} \geqslant \alpha$ for all $\mu \in M_F$. Let
$f \in {\mathcal{E}}_p$ be such that
\begin{eqnarray}\label{1}
\alpha \leqslant \langle L , \mu \rangle_{({\mathcal{E}}_p)'',({\mathcal{E}}_p)'} & = & \langle
\mu , f \rangle_{({\mathcal{E}}_p)',{\mathcal{E}}_p}\nonumber\\
& = & \int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\mu_y(\xi)\, dy
\quad\text{for all}\,\,\,\mu \in M_F,
\end{eqnarray}
and
\begin{eqnarray}\label{2}
\alpha >\langle L , \nu \rangle_{({\mathcal{E}}_p)'',({\mathcal{E}}_p)'} & = & \langle \nu
, f \rangle_{({\mathcal{E}}_p)',{\mathcal{E}}_p}\nonumber\\
& = & \int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\nu_y(\xi)\, dy
\geqslant f_{\rm hom}(F).
\end{eqnarray}
Let
$$f_{\rm H}(F) := \inf_{\mu \in M_F} \int_Q \int_{{\mathbb{R}}^{d \times N}}
f(y,\xi)\, d\mu_y(\xi)\, dy, \quad F \in {\mathbb{R}}^{d \times N}.$$ Then,
by (\ref{1}), we have that $\a \leqslant f_{\rm H}(F)$. We are going to
show that
\begin{equation}\label{n3}f_{\rm H}(F) \leqslant f_{\rm
hom}(F),\end{equation} which contradicts (\ref{2}) and yields the
conclusion of the lemma.
To prove (\ref{n3}), let $T \in {\mathbb{N}}$ and $\phi \in
W_0^{1,p}((0,T)^N;{\mathbb{R}}^{d})$. Extend $\phi$ to ${\mathbb{R}}^N$ by
$(0,T)^N$-periodicity and consider the sequence
$$\phi_n(x)= Fx+\e_n \phi\left(\frac{x}{\e_n}\right),$$
where $\{\e_n\} \to 0$ is an arbitrary sequence. Let $\varphi \in
{\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})$ and $z \in L^1(Q)$. Then, since
$T \in {\mathbb{N}}$, the function $y \mapsto \varphi(\la y \ra,F+\nabla
\phi(y))$ is $(0,T)^N$-periodic and according to the
Riemann-Lebesgue Lemma, we get that
\begin{eqnarray}
&&\lim_{n \to +\infty}\int_Q z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,\nabla \phi_n(x)
\right)dx\nonumber\\
&&\hspace{2cm}= \lim_{n \to +\infty}\int_Q z(x)\,
\varphi\left(\left\la\frac{x}{\e_n}\right\ra,F+\nabla
\phi\left(\frac{x}{\e_n}\right) \right)dx\nonumber\\
&&\hspace{2cm}=\int_Q z(x)\,dx - \hskip -1em \int_{(0,T)^N} \varphi(\la y
\ra,F+\nabla \phi(y))\,dy.\label{n4}
\end{eqnarray}
Observe that
\begin{eqnarray}
&&- \hskip -1em \int_{(0,T)^N} \varphi(\la y \ra,F+\nabla
\phi(y))\,dy\nonumber\\
&&\hspace{1cm}= \frac1 {T^N} \sum_{a_i \in {\mathbb Z}^N \cap
[0,T)^N} \int_{a_i+Q} \varphi(\la y \ra ,F+\nabla
\phi(y))\,dy \nonumber\\
&&\hspace{1cm}= \frac1 {T^N} \sum_{a_i \in {\mathbb Z}^N \cap
[0,T)^N} \int_{Q} \varphi(\la a_i+y \ra,F+\nabla
\phi(a_i+y))\,dy \nonumber\\
&&\hspace{1cm}= \frac1 {T^N} \sum_{a_i \in {\mathbb Z}^N \cap
[0,T)^N} \int_{Q} \varphi(y,F+\nabla \phi(a_i+y))\,dy.\label{n5}
\end{eqnarray}
Thus, from (\ref{n4}) and (\ref{n5}), the pair $\{(\la\cdot
/\e_n\ra,\nabla \phi_n )\}$ generates the homogeneous Young measure
$$\mu:=\sum_{a_i \in {\mathbb Z}^N \cap [0,T)^N} \frac1 {T^N} \delta_{F+\nabla \phi(a_i+y)} \otimes dy.$$
Then, since $\phi \in W_0^{1,p}((0,T)^N;{\mathbb{R}}^{d})$,
$$\int_Q \int_{{\mathbb{R}}^{d \times N}}\xi\, d\mu_{y}(\xi)\, dy = F + - \hskip -1em \int_{(0,T)^N}\nabla \phi(y)\, dy = F,$$
which implies that $\mu\in M_F.$ In addition
$$- \hskip -1em \int_{(0,T)^N} f(\la y \ra,F+\nabla
\phi(y))\,dy=\int_{Q}\int_{{\mathbb{R}}^{d\times
N}}f(y,\xi)\,d\mu_{y}(\xi)\,dy,$$ and then
$$- \hskip -1em \int_{(0,T)^N}f(\la y\ra,F+\nabla
\phi(y))\,dy \geqslant f_{\rm H}(F).$$ As a consequence, taking the
infimum over all $\phi \in W_0^{1,p}((0,T)^N;{\mathbb{R}}^{d})$ and the limit
as $T \to +\infty$ we get that $f_{\rm hom}(F) \geqslant f_{\rm H}(F)$
which proves (\ref{n3}).
\end{proof}
Let us conclude this section by stating a localization result
which allows us to obtain homogeneous two-scale gradient Young
measures from arbitrary ones.
\begin{proposition}\label{localization}
Let $\nu \in L^\infty_w(\O \times Q ; {\mathcal{M}}({\mathbb{R}}^{d \times N}))$ be such
that $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a two-scale
gradient Young measure. Then for a.e.\ $a \in \O$,
$\{\nu_{(a,y)}\}_{y \in Q}$ is a homogeneous two-scale gradient
Young measure.
\end{proposition}
\begin{proof}
Since $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a two-scale
gradient Young measure, from Lemma \ref{nec} it satisfies
properties (\ref{i}), (\ref{ii}) and (\ref{iii}) above. Since
$u_1(x,\cdot)$ is $Q$-periodic for a.e.\ $x \in \O$, integrating
(\ref{i}) with respect to $y \in Q$, it follows that
\begin{equation}\label{ibis}
\int_Q \int_{{\mathbb{R}}^{d \times N}} \xi \, d\nu_{(x,y)}(\xi)\, dy= \nabla
u(x)\quad \text{ for a.e.\ }x \in \O.
\end{equation}
Furthermore, (\ref{iii}) implies that
\begin{equation}\label{iiibis}
\int_Q \int_{{\mathbb{R}}^{d \times N}} |\xi|^p\, d\nu_{(x,y)}(\xi)\, dy <
+\infty, \quad \text{ for a.e.\ }x \in \O.
\end{equation}
Let $E \subset \O$ be a set of Lebesgue measure zero outside of
which (\ref{ibis}), (\ref{ii}) and (\ref{iiibis}) hold. Then for
every $a \in \O \setminus E$
$$\int_Q \int_{{\mathbb{R}}^{d \times N}} \xi \, d\nu_{(a,y)}(\xi)\, dy= \nabla
u(a),$$
$$\int_Q \int_{{\mathbb{R}}^{d \times N}} |\xi|^p \, d\nu_{(a,y)}(\xi)\, dy
<+\infty,$$ and
$$\int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\nu_{(a,y)}(\xi)\, dy \geqslant
f_{\rm hom}( \nabla u(a))$$ \noindent for every $f \in {\mathcal{E}}_p$.
As a consequence of Lemma \ref{homo}, for every $a \in \O\setminus
E$, the family $\{\nu_{(a,y)}\}_{y \in Q}$ is a homogeneous
two-scale gradient Young measure.
\end{proof}
\subsubsection{The nonhomogeneous case}\label{nonhomogeneous}
\noindent We now treat the general case, whose proof is based on
Proposition \ref{localization} and a suitable decomposition of the
domain $\O$. We use (a variant of) Vitali's Covering Theorem and an
approximation of two-scale gradient Young measures by measures of
this class that are piecewise constant with respect to $x$.
\begin{lemma}\label{nonhomo}
Let $\O$ be a bounded and open subset of ${\mathbb{R}}^N$ with Lipschitz
boundary. Let $\nu \in L_w^\infty(\O \times Q;{\mathcal{M}}( {\mathbb{R}}^{d \times
N}))$ be such that $\nu_{(x,y)} \in {\mathcal{P}}({\mathbb{R}}^{d \times N})$ for
a.e.\ $(x,y) \in \O \times Q$. Suppose that
\begin{itemize}
\item[(i)] there exist $u \in W^{1,p}(\O;{\mathbb{R}}^d)$ and $u_1 \in
L^p(\O;W^{1,p}_{\rm per}(Q;{\mathbb{R}}^d))$ satisfying
\begin{equation}\label{1715}
\int_{{\mathbb{R}}^{d \times N}} \xi \, d\nu_{(x,y)}(\xi)= \nabla u(x) +
\nabla_y u_1(x,y)\quad \text{ for a.e.\ }(x,y) \in \O \times Q;
\end{equation}
\item[(ii)] for every $f \in {\mathcal{E}}_p$,
\begin{equation}\label{1721}
f_{\rm hom}(\nabla u(x)) \leqslant \int_Q \int_{{\mathbb{R}}^{d \times N}}
f(y,\xi)\, d\nu_{(x,y)}(\xi)\, dy \quad \text{ for a.e.\ }x \in \O;
\end{equation}
\item[(iii)] $\displaystyle (x,y) \mapsto \int_{{\mathbb{R}}^{d \times N}} |\xi|^p\,
d\nu_{(x,y)}(\xi) \in L^1(\O \times Q).$
\end{itemize}
Then $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a two-scale
gradient Young measure with underlying deformation $u$.
\end{lemma}
\begin{proof}
In a first step, we address the case where the underlying
deformation is zero, while the general case is treated
afterwards.
{\it Step 1.} Assume $u=0$ and let $(\varphi, z)$ be in a countable
dense subset of ${\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N})\times L^1(\O)$.
Set
$$\overline \varphi(x):= \int_Q \int_{{\mathbb{R}}^{d \times N}}
\varphi(y,\xi)\, d\nu_{(x,y)}(\xi)\, dy.$$ Let $k \in {\mathbb{N}}$ and let
$E \subset \O$ be the set of Lebesgue measure zero given by
Proposition \ref{localization}. According to Lemma 7.9 in
Pedregal \cite{Pbook}, there exist points $a_i^k \in \O \setminus E$
and positive numbers $\rho_i^k \leqslant 1/k$ such that $\{a_i^k +
\rho_i^k \overline \O\}$ are pairwise disjoint for each $k$,
$$\overline \O=\bigcup_{i\geqslant 1}(a_i^k + \rho_i^k \overline \O) \cup
E_k,\qquad {\mathcal{L}}^N(E_k)=0$$ and
\begin{equation}\label{Vitali}
\int_\O z(x) \, \overline \varphi(x)\, dx = \lim_{k \to +\infty}
\sum_{i\geqslant 1} \overline \varphi(a_i^k) \int_{a_i^k + \rho_i^k \O}
z(x)\, dx. \end{equation} For each $k \in {\mathbb{N}}$, let $m_k \in {\mathbb{N}}$ be
large enough so that
\begin{equation}\label{Mk}
\left|\sum_{i=1}^{m_k} \overline \varphi(a_i^k) \int_{a_i^k +
\rho_i^k \O} z(x)\, dx - \sum_{i \geqslant 1} \overline \varphi(a_i^k)
\int_{a_i^k + \rho_i^k \O} z(x)\, dx \right| < \frac{1}{k}.
\end{equation}
For fixed $i$ and $k$, by the choice of $a_i^k$ and Proposition
\ref{localization} the family $\{\nu_{(a_i^k,y)}\}_{y \in Q}$ is a
homogeneous two-scale gradient Young measure. Hence by Remark
\ref{indep} and Lemma \ref{boundary}, for every sequence $\{\e_n\}
\to 0$, there exist sequences $\{u_n^{i,k}\}_{n \in {\mathbb{N}}} \subset
W_0^{1,p}(a_i^k + \rho_i^k\O;{\mathbb{R}}^d)$ such that
$$\lim_{n \to +\infty}\int_{a_i^k + \rho_i^k\O} z(x) \,
\varphi \left(\left\la \frac{x}{\e_n}\right\ra,\nabla u_n^{i,k}(x)
\right)\, dx = \overline \varphi (a_i^k) \int_{a_i^k + \rho_i^k\O}
z(x)\, dx .$$ Summing up
\begin{eqnarray}\label{homoaik}
&&\lim_{n \to +\infty}\sum_{i=1}^{m_k}\int_{a_i^k + \rho_i^k\O}
z(x)\, \varphi \left(\left\la \frac{x}{\e_n}\right\ra,\nabla
u_n^{i,k}(x) \right)\, dx\nonumber\\
&&\hspace{2cm} = \sum_{i=1}^{m_k}\overline \varphi (a_i^k)
\int_{a_i^k + \rho_i^k\O} z(x)\, dx.
\end{eqnarray}
Let us define
$$u_n^k(x):=\left\{
\begin{array}{ll}
u_n^{i,k}(x) & \text{ if } x \in a_i^k + \rho_i^k \O,\\
0 & \text{ otherwise}
\end{array}\right.$$
and remark that $u_n^k \in W^{1,p}_0(\O;{\mathbb{R}}^d)$. Since the sets
$a_i^k + \rho_i^k\O$ are pairwise disjoint for each $k$ we have
that
\begin{eqnarray}\label{compute} &&\int_\O z(x) \, \varphi
\left(\left\la \frac{x}{\e_n}\right\ra,\nabla u_n^{k}(x) \right)\,
dx\nonumber\\
&&\hspace{1cm}= \sum_{i \geqslant 1} \int_{a_i^k + \rho_i^k \O} z(x) \,
\varphi \left(\left\la \frac{x}{\e_n}\right\ra,\nabla u_n^{i,k}(x)
\right)\, dx\nonumber\\
&&\hspace{1cm}= \sum_{i = 1}^{m_k} \int_{a_i^k + \rho_i^k \O} z(x)\,
\varphi \left(\left\la \frac{x}{\e_n}\right\ra,\nabla
u_n^{i,k}(x) \right)\, dx\nonumber\\
&&\hspace{2cm}+ \int_{\O \cap \, \bigcup_{i > m_k} (a_i^k + \rho_i^k
\O)} z(x) \, \varphi \left(\left\la \frac{x}{\e_n}\right\ra,\nabla
u_n^{k}(x) \right)\, dx.
\end{eqnarray}
But as $z \in L^1(\O)$ and ${\mathcal{L}}^N\left( \O \cap \,\bigcup_{i
> m_k} (a_i^k + \rho_i^k \O) \right) \to 0$, as $k \to +\infty$, it
follows that
\begin{eqnarray}\label{meas}
\lim_{k \to +\infty}\lim_{n \to +\infty} \left| \int_{\O\cap\,
\bigcup_{i
> m_k} (a_i^k + \rho_i^k \O)} z(x)\, \varphi \left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n^{k}(x) \right)\, dx \right|=0.
\end{eqnarray}
Then, gathering (\ref{Vitali})-(\ref{meas}) we obtain that
\begin{eqnarray*}
&&\lim_{k \to +\infty}\lim_{n \to +\infty} \int_\O z(x) \, \varphi
\left(\left\la \frac{x}{\e_n}\right\ra,\nabla u_n^{k}(x)
\right)\, dx\\
&&\hspace{1cm} = \lim_{k \to +\infty}\lim_{n \to +\infty} \sum_{i =
1}^{m_k} \int_{a_i^k + \rho_i^k \O} z(x) \, \varphi \left(\left\la
\frac{x}{\e_n}\right\ra,\nabla u_n^{i,k}(x)
\right)\, dx\\
&&\hspace{1cm} = \lim_{k \to +\infty} \sum_{i=1}^{m_k}\overline
\varphi (a_i^k) \int_{a_i^k + \rho_i^k\O} z(x)\, dx\\
&&\hspace{1cm} = \lim_{k \to +\infty} \sum_{i\geqslant 1}\overline
\varphi (a_i^k) \int_{a_i^k + \rho_i^k\O} z(x)\, dx\\
&&\hspace{1cm} =\int_\O z(x) \, \overline \varphi(x)\, dx.
\end{eqnarray*}
A diagonalization argument implies the existence of a sequence
$\{k(n)\} \nearrow +\infty$, as $n \to +\infty$, such that, setting
$u_n:=u_n^{k(n)}$,
$$\lim_{n \to +\infty} \int_\O z(x) \, \varphi
\left(\left\la \frac{x}{\e_n}\right\ra,\nabla u_n(x) \right)\,
dx=\int_\O z(x)\, \overline \varphi(x)\, dx$$ and
$u_n\rightharpoonup 0$ in $W^{1,p}(\O;{\mathbb{R}}^d)$, which
completes the proof whenever $u=0$.\\
{\it Step 2.} Consider now a general $u \in W^{1,p}(\O;{\mathbb{R}}^d)$ and
$\nu$ satisfying properties (i)-(iii). We define $\tilde \nu \in
L^\infty_w(\O \times Q; {\mathcal{M}}({\mathbb{R}}^{d \times N}))$ by
\begin{equation}\label{nutilde}
\la \tilde \nu , \varphi \ra := \int_\O \int_Q \int_{{\mathbb{R}}^{d \times
N}} \varphi(x,y,\xi - \nabla u(x))\, d\nu_{(x,y)}(\xi)\, dy \, dx,
\end{equation} for every $\varphi \in L^1(\O \times Q;{\mathcal{C}}_0({\mathbb{R}}^{d \times N}))$.
We can easily check that $\tilde \nu$ satisfies the analogue of
properties (i)-(iii) with $\tilde{u}=0$. Hence, applying Step 1,
for every sequence $\{\e_n\} \to 0$ we may find a sequence
$\{\tilde u_n\} \subset W^{1,p}(\O;{\mathbb{R}}^d)$ such that $\{(\la
\cdot/\e_n \ra,\nabla \tilde u_n)\}$ generates the Young measure
$\{\tilde\nu_{(x,y)} \otimes dy\}_{x \in \O}$. Defining
$u_n:=\tilde u_n + u$, we claim that $\{(\la \cdot/\e_n \ra,\nabla
u_n)\}$ generates $\{\nu_{(x,y)}\otimes dy\}_{x \in \O}$. Indeed
let $\psi \in L^1(\O;{\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N}))$ and
define $\tilde \psi(x,y,\xi):=\psi(x,y,\xi+\nabla u(x))$; note that
$\tilde \psi \in L^1(\O;{\mathcal{C}}_0({\mathbb{R}}^N \times {\mathbb{R}}^{d \times N}))$ as
well. Then by (\ref{nutilde}),
\begin{eqnarray*}
\lim_{n \to +\infty} \int_\O \psi\left(x,\left\la \frac{x}{\e_n}
\right\ra,\nabla u_n(x) \right)\, dx & = & \lim_{n \to +\infty}
\int_\O \tilde \psi\left(x,\left\la \frac{x}{\e_n}
\right\ra,\nabla \tilde u_n(x) \right)\, dx\\
& = & \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}}\tilde
\psi(x,y,\xi)\, d\tilde\nu_{(x,y)}(\xi)\, dy\, dx\\
& =& \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} \psi(x,y,\xi)\,
d\nu_{(x,y)}(\xi)\, dy\, dx
\end{eqnarray*}
which completes the proof.
\end{proof}
The next corollary asserts the independence of the sequence in
Definition \ref{MGYM}.
\begin{coro}\label{scaleindep}
Let $\{u_n\}$ be a bounded sequence in $W^{1,p}(\O;{\mathbb{R}}^d)$. Assume
that there exists a sequence $\{\e_n\} \to 0$ such that the pair
$\{(\la \cdot/\e_n\ra,\nabla u_n)\}$ generates a Young measure
$\{\nu_{(x,y)} \otimes dy\}_{x \in \O}$. Then the family
$\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$ is a two-scale gradient
Young measure.
\end{coro}
\section{Proof of Theorem \ref{exprGlim}}
\noindent Before proving Theorem \ref{exprGlim} we start by
recalling Valadier's notion of {\it admissible integrand} (see
\cite{V}).
\begin{defi}\label{admint}
A function $f:\O\times Q \times {\mathbb{R}}^{d\times N}\rightarrow
[0,+\infty)$ is said to be an {\rm admissible integrand} if for any
$\eta>0$, there exist compact sets $K_{\eta}\subset \O$ and
$Y_{\eta}\subset Q$, with ${\mathcal{L}}^{N}(\O\setminus K_{\eta})<\eta$ and
${\mathcal{L}}^{N}(Q\setminus Y_{\eta})<\eta$, and such that
$f|_{K_{\eta}\times Y_{\eta}\times {\mathbb{R}}^{d\times N}}$ is continuous.
\end{defi}
We observe that from Lemma 4.11 in Barchiesi \cite{Bar}, if $f$ is
an admissible integrand then, for fixed $\e>0$, the function
$(x,\xi) \mapsto f(x,\la x/\e\ra ,\xi)$ is ${\mathcal{L}}(\O) \otimes
\mathcal B({\mathbb{R}}^{d \times N})$-measurable, where ${\mathcal{L}}(\O)$ and
$\mathcal B({\mathbb{R}}^{d \times N})$ denote, respectively, the
$\sigma$-algebra of Lebesgue measurable subsets of $\O$ and Borel
subsets of ${\mathbb{R}}^{d \times N}$. In particular, the functional
(\ref{main-funct}) is well defined in $W^{1,p}(\O;{\mathbb{R}}^{d})$.
\begin{proof}[Proof of Theorem \ref{exprGlim}]
Let $u\in W^{1,p}(\O;{\mathbb{R}}^{d})$ and let $\{\e_n\} \to 0$. We start
by showing that
\begin{equation}\label{glimsup}
\G\text{-}\limsup_{n \to +\infty}{\mathcal{F}}_{\e_n}(u) \leqslant
\inf_{\nu \in {\mathcal{M}}_u} \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}}
f(x,y,\xi)\, d\nu_{(x,y)}(\xi)\,dy\, dx.
\end{equation}
where ${\mathcal{M}}_u$ is the set defined in (\ref{mu}). Let $\nu \in {\mathcal{M}}_u$.
By Remark \ref{underlying}, there exists a sequence $\{u_n\}$
\subset W^{1,p}(\O;{\mathbb{R}}^d)$ such that $\{(\la \cdot / \e_n \ra ,
\nabla u_n )\}$ generates the Young measure $\{\nu_{(x,y)} \otimes
dy\}_{x \in \O}$ and $u_n \rightharpoonup u$ in
$W^{1,p}(\O;{\mathbb{R}}^d)$. Extract a subsequence $\{\e_{n_k}\} \subset
\{\e_n\}$ such that
\begin{equation*}
\limsup_{n \to +\infty} {\mathcal{F}}_{\e_n}(u_n) = \lim_{k \to +\infty}
{\mathcal{F}}_{\e_{n_k}}(u_{n_k})
\end{equation*}
and that $\{|\nabla u_{n_k}|^p\}$ is equi-integrable, which is
always possible by the Decomposition Lemma (see Lemma 1.2 in
Fonseca, M\"uller \& Pedregal \cite{FMP}). In particular, due to
the $p$-growth condition (\ref{pgrowth}), the sequence
$\{f(\cdot,\la\cdot/\e_{n_k}\ra,\nabla u_{n_k})\}$ is
equi-integrable as well and applying Theorem 2.8 (ii) in Barchiesi
\cite{Bar2} we get that
\begin{eqnarray}\label{n7}
\G\text{-}\limsup_{n \to +\infty}{\mathcal{F}}_{\e_n}(u) & \leqslant & \lim_{k \to
+\infty} \int_\O
f\left(x,\left\la\frac{x}{\e_{n_k}}\right\ra,\nabla
u_{n_k}(x)\right)\, dx\\
& = &\int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} f(x,y,\xi)\,
d\nu_{(x,y)}(\xi)\, dy\, dx.
\end{eqnarray}
Taking the infimum over all $\nu \in {\mathcal{M}}_u$ on the right-hand side
of (\ref{n7}) yields (\ref{glimsup}).
Let us prove now that
\begin{equation}\label{gliminf}
\G\text{-}\liminf_{n \to +\infty}{\mathcal{F}}_{\e_n}(u) \geqslant \inf_{\nu
\in {\mathcal{M}}_u} \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} f(x,y,\xi)\,
d\nu_{(x,y)}(\xi)\,dy\, dx.
\end{equation}
Let $\eta>0$ and let $\{u_n\}\subset W^{1,p}(\O;{\mathbb{R}}^d)$ be such that
$u_n \rightharpoonup u$ in $W^{1,p}(\O;{\mathbb{R}}^d)$ and
\begin{equation}\label{gliminfeta}
\liminf_{n \to +\infty} {\mathcal{F}}_{\e_n}(u_n) \leqslant \G\text{-}\liminf_{n
\to +\infty} {\mathcal{F}}_{\e_n}(u) + \eta.
\end{equation}
For a subsequence $\{n_k\}$, we can assume that there exists
$\nu \in L^\infty_w(\O \times Q;{\mathcal{M}}({\mathbb{R}}^{d \times N}))$ such that
$\{(\la \cdot / \e_{n_k} \ra,\nabla u_{n_k})\}$ generates a
Young measure $\{\nu_{(x,y)} \otimes dy\}_{x \in \O}$ and
\begin{equation}\label{lim=liminf}
\lim_{k \to +\infty}{\mathcal{F}}_{\e_{n_k}}(u_{n_k}) = \liminf_{n \to
+\infty} {\mathcal{F}}_{\e_n}(u_n).
\end{equation}
We remark that $\{\nabla
u_{n_k}\}$ is equi-integrable since it is bounded in $L^p(\O;{\mathbb{R}}^{d
\times N})$ and $p>1$. Thus, by Theorem \ref{young} (v) we get that
for every $A \in {\mathcal{A}}(\O)$,
$$\int_A\nabla u(x)\, dx = \lim_{k \to +\infty} \int_A \nabla u_{n_k} (x)\,
dx= \int_A \int_Q \int_{{\mathbb{R}}^{d \times N}} \xi\, d\nu_{(x,y)}(\xi)\,
dy\, dx.$$ By the arbitrariness of the set $A$, it follows that
\begin{equation}\label{nablau(x)}
\nabla u(x) =\int_Q \int_{{\mathbb{R}}^{d \times N}} \xi\,
d\nu_{(x,y)}(\xi)\, dy\quad \text{ a.e. in }\O. \end{equation} As a
consequence of Corollary \ref{scaleindep} $\{\nu_{(x,y)}\}_{(x,y)
\in \O \times Q}$ is a two-scale gradient Young measure and, by
(\ref{nablau(x)}), we also have that $\nu \in {\mathcal{M}}_u$. Applying now
Theorem 2.8 (i) in Barchiesi \cite{Bar2} we get that
\begin{eqnarray*}
&&\lim_{k \to +\infty} \int_\O
f\left(x,\left\la\frac{x}{\e_{n_k}}\right\ra,\nabla u_{n_k}(x)
\right)\, dx\\
&&\hspace{2cm} \geqslant \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} f(x,y,\xi)\, d\nu_{(x,y)}(\xi)\, dy\, dx\\
&&\hspace{2cm} \geqslant \inf_{\nu \in {\mathcal{M}}_u} \int_\O \int_Q \int_{{\mathbb{R}}^{d
\times N}} f(x,y,\xi)\, d\nu_{(x,y)}(\xi)\,dy\, dx.
\end{eqnarray*}
Hence by (\ref{gliminfeta}), (\ref{lim=liminf}) and the
arbitrariness of $\eta$ we get the desired result. Gathering
(\ref{glimsup}) and (\ref{gliminf}), we obtain that
$$\G\text{-}\lim_{n \to +\infty}{\mathcal{F}}_{\e_n}(u)=\inf_{\nu \in
{\mathcal{M}}_u} \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} f(x,y,\xi)\,
d\nu_{(x,y)}(\xi)\,dy\, dx.$$
It remains to prove that the minimum is attained. To this aim,
consider a recovering sequence $\{\bar u_n\} \subset
W^{1,p}(\O;{\mathbb{R}}^d)$. Arguing exactly as before we can assume that (a
subsequence of) $\{\nabla \bar u_n\}$ generates a two-scale gradient
Young measure $\{\nu_{(x,y)}\}_{(x,y) \in \O \times Q}$, that $\nu
\in {\mathcal{M}}_u$ and $\{f(\cdot,\la\cdot/\e_n \ra,\nabla \bar u_n)\}$ is
equi-integrable. According to Theorem 2.8 (ii) in Barchiesi
\cite{Bar2} and using the fact that $\{\bar u_n\}$ is a recovering
sequence,
\begin{eqnarray*}
\G\text{-}\lim_{n \to +\infty}{\mathcal{F}}_{\e_n}(u) & = & \lim_{n \to
+\infty} \int_\O f\left(x,\left\la\frac{x}{\e_n}\right\ra,\nabla \bar u_n(x) \right)\, dx\\
& = & \int_\O \int_Q \int_{{\mathbb{R}}^{d \times N}} f(x,y,\xi)\,
d\nu_{(x,y)}(\xi)\,dy\, dx
\end{eqnarray*}
which completes the proof.
\end{proof}
Let us conclude by stating a corollary that provides an alternative
formula to derive the homogenized energy density $f_{\rm hom}$ in
(\ref{fhom1}).
\begin{coro}\label{glim}
If $f:Q \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ is a Carath\'eodory
integrand (independent of $x$) satisfying (\ref{pgrowth}), then
for every $u \in W^{1,p}(\O;{\mathbb{R}}^d)$,
$${\mathcal{F}}_{\rm hom}(u)=\int_\O f_{\rm hom}(\nabla u(x))\, dx,$$
where for every $F \in {\mathbb{R}}^{d \times N}$,
$$f_{\rm hom}(F) = \min_{\nu \in M_F} \int_Q \int_{{\mathbb{R}}^{d \times N}} f(y,\xi)\, d\nu_y(\xi)\, dy$$
and $M_F$ is defined in (\ref{MF}).
\end{coro}
\begin{proof}
It is known from {\it e.g.}\ Theorem 14.5 in Braides \& Defranceschi
\cite{BD} that
$${\mathcal{F}}_{\rm hom}(u)=\int_\O f_{\rm hom}(\nabla u(x))\, dx$$
where $f_{\rm hom}$ is defined in (\ref{fhom1}). By Theorem
\ref{exprGlim} with $\O=Q$ and $u(x)=Fx$, we get that
$$f_{\rm hom}(F)=\min_{\nu \in {\mathcal{M}}_u} \int_Q \int_Q \int_{{\mathbb{R}}^{d
\times N}} f(x,y,\xi)\, d\nu_{(x,y)}(\xi)\,dy\, dx.$$ The claim
follows from Lemma \ref{average}.
\end{proof}
\noindent {\it Acknowledgments.} The authors wish to thank Irene
Fonseca for suggesting the problem. They also gratefully acknowledge
Gianni Dal Maso and Marco Barchiesi for fruitful comments and
stimulating discussions.
The research of J.-F. Babadjian has been supported by the Marie
Curie Research Training Network MRTN-CT-2004-505226 `Multi-scale
modelling and characterisation for phase transformations in advanced
materials' (MULTIMAT). He also acknowledges CEMAT, Department of Mathematics, Instituto Superior Técnico, Lisbon, for its hospitality and
support.
The research of M. Ba\'{\i}a was supported by Funda\c{c}\~{a}o
para a Ci\^{e}ncia e a Tecnologia under Grant PRAXIS XXI
SFRH/BPD/22775/2005 and by Fundo Social Europeu.
\end{document}
\begin{document}
\title[Wall's continued fractions and Jacobi matrices]
{A note on Wall's modification of the Schur algorithm and linear pencils of Jacobi matrices}
\author{Maxim Derevyagin}
\address{
Maxim Derevyagin\\
University of Mississippi\\
Department of Mathematics\\
Hume Hall 305 \\
P. O. Box 1848 \\
University, MS 38677-1848, USA }
\email{[email protected]}
\date{\today}
\subjclass{Primary 47A57, 47B36; Secondary 30E05, 30B70, 42C05}
\keywords{the Schur algorithm, Jacobi matrix, linear pencil, interpolation problem,
continued fraction, Nevanlinna function, orthogonal polynomials, orthogonal rational functions}
\begin{abstract}
In this note we revive a transformation that was introduced by H. S. Wall and that establishes a one-to-one correspondence between continued fraction representations of Schur, Carath\'eodory, and Nevanlinna functions. This transformation can be considered as an analog of the Szeg\H{o} mapping, but it is based on the Cayley transform, which relates the upper half-plane to the unit disc. For example, it will be shown that, when applying the Wall transformation, instead of OPRL we get a sequence of orthogonal rational functions that satisfy a three-term recurrence relation of the form $(H-\lambda J)u=0$, where $u$ is a semi-infinite vector whose entries are the rational functions. Moreover, $J$ and $H$ are Hermitian Jacobi matrices for which a version of the Denisov-Rakhmanov theorem holds true. Finally, we will demonstrate how pseudo-Jacobi polynomials (aka Routh-Romanovski polynomials) fit into the picture.
\end{abstract}
\maketitle
\section{Introduction}
In September of the year 1916, Issai Schur submitted the first paper of a series of two \cite{Schur1}, \cite{Schur2} that presented a new parametrization of functions that are analytic and bounded by 1 in the open unit disc ${\mathbb D}$, together with an algorithm for computing the corresponding parameters. The algorithm is now known as the Schur algorithm. In fact, it has been literally a hundred years, and yet there is still a continuing interest in further developing the findings of I. Schur. One of the reasons is that the Schur algorithm is a successor of the Euclidean algorithm, which has many theoretical and practical applications. Another is that the Schur algorithm is intimately related to orthogonal polynomials on the unit circle (hereafter abbreviated OPUC), a theory that has seen enormous progress since the beginning of the 21st century. However, the ideology of this note is based on an old result that appeared in 1944 in a paper by Hubert Stanley Wall \cite{Wall44}. Nevertheless, a proper recasting of the result gives new insights and perspectives on the theory of orthogonal polynomials. We will see this later, but for now let us briefly recall the basics of the Schur algorithm. First of all, we need to consider a Schur function $f$, which is an analytic function mapping ${\mathbb D}$ to its closure $\overline{{\mathbb D}}$, that is,
\begin{equation*}
\sup_{z\in{\mathbb D}}\, |f(z)| \leq 1.
\end{equation*}
As a matter of fact, a Schur function is the input for the Schur algorithm. Namely, given a Schur function $f$, the Schur algorithm generates a sequence of Schur functions $\{f_n\}_{n=0}^{\infty}$ by means of the following relations
\begin{equation} \label{SchurAlg}
\begin{split}
f_0(z)&=f(z), \\
f_n(z) &= \frac{\gamma_n + zf_{n+1}(z)}{1+\bar\gamma_n zf_{n+1}(z)}, \quad n=0, 1, 2, \dots,
\end{split}
\end{equation}
where $\gamma_n=f_n(0)$ are called Schur parameters and satisfy the relation
\begin{equation}\label{SchurCond}
|\gamma_n|<1, \quad n=0, 1, 2, \dots.
\end{equation}
To be more precise, the Schur algorithm gives a sequence that is either finite or infinite. In what follows we are mainly interested in Schur functions that produce infinite sequences of Schur parameters. In other words, when we say that $f$ is a Schur function we mean that there is an infinite sequence of linear fractional transformations \eqref{SchurAlg} generated by $f$ unless said otherwise.
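As an aside, the relations \eqref{SchurAlg} are easy to test numerically. The following sketch (our own illustration, not part of the original text) unwinds the recursion from an assumed tail $f_N\equiv 0$, which is itself a Schur function, so the result is an approximant of $f$ built from finitely many Schur parameters.

```python
def schur_eval(gammas, z, tail=0j):
    # Unwind f_n = (gamma_n + z*f_{n+1}) / (1 + conj(gamma_n)*z*f_{n+1}),
    # starting from an assumed tail f_N = 0 (itself a Schur function).
    f = tail
    for g in reversed(gammas):
        f = (g + z * f) / (1 + g.conjugate() * z * f)
    return f
```

With a single parameter the approximant is the constant $\gamma_0$, and for any parameters with $|\gamma_n|<1$ and $|z|<1$ the value stays in the unit disc, consistent with \eqref{SchurCond}.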
The Schur algorithm is the key to Wall's theory, and we now proceed with it in the following way. The goal of the next section is to show that orthogonal polynomials on the real line (henceforth OPRL) and OPUC correspond to totally different interpolation problems. Hence, in addition to attempts to identify OPUC with OPRL, it is also natural to find the real line image of OPUC under a transformation that keeps the underlying interpolation problems equivalent. This is what is actually done in Sections 3 and 4 through the findings of H. S. Wall. In Section 5 we adapt the more general theory developed in \cite{BDZh}, \cite{Der10}, and \cite{DZh} to this very particular case of Wall's continued fraction representation of Nevanlinna functions; that section also reveals the relation between the spectral theories of OPUC and linear pencils of Jacobi matrices. At the end, in Section 6, we give an example based on pseudo-Jacobi polynomials, which are sometimes called Romanovski or even Routh-Romanovski polynomials.
\section{The same old moment problems}
One of the original motivations for constructing the Schur algorithm was to solve an interpolation problem that now bears the name of Schur's coefficient problem (for instance, see \cite[Chapter 3, Section 3]{A65} or \cite[Section 9]{DK03}). This interpolation problem is not of principal interest here, and it is better for us to consider an equivalent problem in the class of Carath\'eodory functions (see \cite[Chapter 5, Section 1]{A65}).
Before formulating it, recall that a Carath\'eodory function $F$ is an analytic function on ${\mathbb D}} \def\dE{{\mathbb E}} \def\dF{{\mathbb F}$ which obeys
\begin{equation} \label{CarCond}
F(0)=1, \qquad \Rl F(z) >0\,\mbox{ when }\,z\in\mathbb{D}.
\end{equation}
It is well known that if $f$ is a Schur function then the function
\begin{equation} \label{Carathe}
F(z) = \frac{1+zf(z)}{1-zf(z)}
\end{equation}
is a Carath\'eodory function and vice versa.
Indeed, for the function $F$ defined by \eqref{Carathe} it is easily seen that
\begin{equation}\label{ReOfCar}
\Rl F(z)=\frac{1-|zf(z)|^2}{|1-zf(z)|^2}>0, \quad z\in{\mathbb D}.
\end{equation}
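As a quick numerical sanity check of \eqref{Carathe} and \eqref{ReOfCar} (our own sketch, using the hypothetical Schur function $f(z)=z/2$), one can verify that the constructed $F$ has positive real part matching the formula above.

```python
def caratheodory_from_schur(f, z):
    # F(z) = (1 + z*f(z)) / (1 - z*f(z)); by the identity \eqref{ReOfCar},
    # Re F(z) should equal (1 - |z f(z)|^2) / |1 - z f(z)|^2 > 0 on the disc.
    w = z * f(z)
    return (1 + w) / (1 - w)
```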
So, we are in the position to formulate the Carath\'eodory coefficient problem: given the complex numbers $c_1$, $c_2$, $c_3$, \dots, find necessary and sufficient conditions for the function
\begin{equation}\label{CarCP}
F(z)=1+2c_1z+2c_2z^2+2c_3z^3+\dots
\end{equation}
to be a Carath\'eodory function. Getting to the conditions using the Schur algorithm is relatively easy, since the Schur parameters can be expressed in terms of the coefficients $c_1$, $c_2$, $c_3$, \dots. For instance, the formulas can be extracted in a similar way as is done in \cite[Section 1.3]{OPUC1} (see also \cite[Section 9]{DK03}). Therefore, the condition \eqref{SchurCond} is a solution to the Carath\'eodory coefficient problem. Clearly, there can be only one function corresponding to the coefficients $c_1$, $c_2$, $c_3$, $\dots$ due to the uniqueness theorem for analytic functions. This means that we do not have to think about describing all possible functions generated by the given sequence $c_1$, $c_2$, $c_3$, \dots. Therefore, the problem is fully resolved.
Another way of solving Carath\'eodory's coefficient problem leads to trigonometric moment problems. To see this, one needs to take into account that Carath\'{e}odory functions admit the representation \cite[Chapter 3, Section 1]{A65}
\begin{equation} \label{IntCara}
F(z) = \int_0^{2\pi} \frac{e^{i\theta}+z}{e^{i\theta}-z}\, d\mu(\theta)
\end{equation}
for some non-trivial (that is, infinitely supported) probability measure $\mu$. Next, after combining \eqref{CarCP} and \eqref{IntCara} one can see that the Carath\'eodory coefficient problem reads: find necessary and sufficient conditions on the coefficients $c_1$, $c_2$, $c_3$, $\dots$ in order that there exists a probability measure $\mu$ such that
\[
c_n=\int_0^{2\pi} e^{-in\theta}\, d\mu(\theta), \quad n=1,2,3,\dots.
\]
Once we have a moment problem it seems natural to consider orthogonal polynomials since they play a prominent role for the analogous moment problems on the real line, which will be discussed later. So, given a moment sequence $c_1$, $c_2$, $c_3$, $\dots$ one can define a sequence of monic polynomials by the formulas
\[
\Phi_n(z)=\frac{1}{D_{n-1}}\begin{vmatrix}
c_0&\overline{c}_1&\dots&\overline{c}_n\\
c_1&{c}_0&\dots&\overline{c}_{n-1}\\
\vdots&\vdots&&\vdots\\
c_{n-1}&{c}_{n-2}&\dots&\overline{c}_1\\
1&z&\dots&z^n\\
\end{vmatrix}, \quad n=0,1,2\dots,
\]
where $c_0=1$, $D_{-1}=1$, and $D_{n-1}=\det(c_{k-j})_{k,j=0}^{n-1}$ with $c_{-j}=\overline{c}_j$ for $j=1,2,3,\dots$. Evidently, $\Phi_n$ is correctly defined if and only if $D_{n-1}\ne 0$, which is a step towards the condition we are looking for.
In addition, if $D_{n-1}\ne 0$ for $n=1,2,\dots$ then the polynomials $\Phi_n$ satisfy the Szeg\H{o} recurrence \cite{Ger40}:
\begin{equation}\label{SzRec}
\begin{split}
\Phi_{n+1}(z)&=z\Phi_n(z)-\overline{\alpha}_n\Phi_n^*(z)\\
\Phi_{n+1}^*(z)&=\Phi_n^*(z)-\alpha_nz\Phi_n(z),
\end{split}
\end{equation}
where $\Phi_0=1$ and $\Phi_n^*$ is the reversed polynomial of $\Phi_n$, that is,
\begin{equation}\label{RevPol}
\Phi_n^*(z)=z^n\overline{\Phi_n(1/\overline{z})}.
\end{equation}
Finally, according to Favard's theorem for the unit circle (for example, see \cite{ENZG91}), the polynomials $\Phi_n$ are orthogonal with respect to a positive measure supported on the unit circle $\dT$ if and only if the coefficients $\alpha_n$, called Verblunsky coefficients, satisfy
\begin{equation}\label{VerbCond}
|\alpha_n|<1, \quad n=0, 1, 2, \dots,
\end{equation}
which delivers the condition we wanted in order to resolve the Carath\'eodory coefficient problem. The latter condition came into play in a way different from using the Schur algorithm, but \eqref{VerbCond} is actually the same as \eqref{SchurCond} if we take into account the Geronimus theorem \cite[Theorem 3.1.4]{OPUC1}:
\[
\alpha_n=\gamma_n,\quad n=0,1,2,\dots.
\]
Thus, we are back to the Schur algorithm. As a result, one sees that OPUC are associated with solving the interpolation problem in the class of Carath\'eodory functions and, consequently, they absorb the information about the interpolation.
To close this discussion we should consider the real line case, which lies within the same circle of ideas. Let us begin with the Hamburger moment problem \cite[Chapter 2, Section 1]{A65}: given an infinite sequence of real numbers $s_0=1$, $s_1$, $s_2$, \dots, find a probability measure $\sigma$ supported on the real line $\dR$ such that
\begin{equation}\label{Hmp}
s_n=\int_{\dR} t^{n}\, d\sigma(t), \quad n=1,2,3,\dots.
\end{equation}
This problem is not always solvable and therefore we need to discuss the existence criterion, which will be done through the use of OPRL. To this end, introduce the polynomials
\[
P_n(\lambda)=\frac{1}{\sqrt{|\Delta_n\Delta_{n-1}|}}\begin{vmatrix}
s_0&{s}_1&\dots&{s}_n\\
s_1&{s}_2&\dots&{s}_{n+1}\\
\vdots&\vdots&&\vdots\\
s_{n-1}&{s}_{n}&\dots&s_{2n-1}\\
1&\lambda&\dots&\lambda^n\\
\end{vmatrix}, \quad n=0,1,2\dots,
\]
where $\lambda$ is in the upper half-plane $\dC_+$, $\Delta_{-1}=1$, and $\Delta_n=\det(s_{k+j})_{k,j=0}^{n}$. These polynomials are correctly defined provided that $\Delta_n\ne0$ for $n=1,2,3,\dots$ and, in this case, they are orthogonal with respect to a quasi-definite moment functional, which implies that they satisfy three-term recurrence relations \cite[Chapter 1]{Chi78}:
\begin{equation}\label{qOPrelations}
\lambda P_{n}(\lambda)=b_{n}P_{n+1}(\lambda)+a_nP_n(\lambda)+\epsilon_{n-1}b_{n-1}P_{n-1}(\lambda), \quad n=0,1,2,\dots
\end{equation}
where $b_{-1}=0$, $\epsilon_n=\pm 1$, $b_n>0$, and $a_n\in\dR$. In this context the Favard theorem states that there exists a measure $\sigma$ satisfying \eqref{Hmp} if and only if $\epsilon_n=1$ for all nonnegative integers $n$ \cite[Chapter I, Theorem 4.4]{Chi78}. While we are on the subject, it is worth mentioning that in the latter case \eqref{qOPrelations} can be rewritten by means of a symmetric tridiagonal matrix called a Jacobi matrix
\[
\begin{pmatrix}
a_0 & b_0& &\\
b_0& a_1& b_1&\\
& b_1&a_2&\\
&&&\ddots
\end{pmatrix}
\begin{pmatrix}
P_0\\
P_1\\
P_2\\
\vdots
\end{pmatrix}=\lambda \begin{pmatrix}
P_0\\
P_1\\
P_2\\
\vdots
\end{pmatrix}.
\]
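As an illustration (our own example, not in the text): solving \eqref{qOPrelations} for $P_{n+1}$ lets one generate the polynomials numerically. For the hypothetical constant coefficients $a_n=0$, $b_n=1/2$ (with all $\epsilon_n=1$) the recurrence becomes $P_{n+1}=2\lambda P_n-P_{n-1}$, which reproduces the Chebyshev polynomials of the second kind, $U_n(\cos\theta)=\sin((n+1)\theta)/\sin\theta$.

```python
import math

def run_recurrence(n, lam, a, b):
    # Solve lam*P_k = b_k*P_{k+1} + a_k*P_k + b_{k-1}*P_{k-1} for P_{k+1},
    # starting from P_{-1} = 0, P_0 = 1 (assuming all epsilon_k = 1); return P_n.
    p_prev, p = 0.0, 1.0
    for k in range(n):
        b_prev = b(k - 1) if k > 0 else 0.0
        p_prev, p = p, ((lam - a(k)) * p - b_prev * p_prev) / b(k)
    return p
```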
It is noteworthy that such an explicit appearance of Jacobi matrices here is strikingly different from the unit circle case, for which it took many years and papers to create a proper analog of Jacobi matrices (for details see \cite[Chapter 4]{OPUC1}).
As for the underlying interpolation problems, one has to recall that a Nevanlinna function $\varphi$ is an analytic function on $\dC_+$ which satisfies
\[
\operatorname{Im} \varphi(\lambda)>0, \quad \lambda\in\dC_+.
\]
Next, it is not so hard to check that if $\sigma$ is a positive measure on $\dR$ then the function
\[
\varphi(\lambda)=\int_{\dR}\frac{d\sigma(t)}{t-\lambda}
\]
is a Nevanlinna function and
\begin{equation}\label{NevHam}
\varphi(\lambda)=-\frac{s_0}{\lambda}-\frac{s_1}{\lambda^2}-\dots-\frac{s_{2n}}{\lambda^{2n+1}}+
o\left(\frac{1}{\lambda^{2n+1}}\right), \quad \lambda=iy, \quad y\to\infty,
\end{equation}
for any $n$. Moreover, the Hamburger-Nevanlinna theorem (see \cite[Chapter 3, Section 2]{A65} or \cite[Proposition 4.13]{Simon98}) says that the classical Hamburger moment problem is equivalent to finding a Nevanlinna function with the property \eqref{NevHam} for all nonnegative integers $n$. To solve this equivalent problem one can apply a step-by-step algorithm similar to the Schur algorithm, which for a given $\varphi_0=\varphi$ gives a sequence of Nevanlinna functions
\[
\varphi_j(\lambda)=-\frac{1}{\lambda-a_j+b_j^2\varphi_{j+1}(\lambda)}, \quad j=0,1,2, \dots,
\]
where $a_j$ and $b_j$ are the same coefficients as in \eqref{qOPrelations}. Furthermore, as one can see the algorithm in the real line case is a straightforward generalization of Euclid's algorithm to the case of formal Laurent series \cite[Section 5.1]{JT}. Consequently, it leads to a continued fraction
\begin{equation}\label{Jfraction}
\varphi(\lambda)\sim-\frac{1}{\lambda-a_0-\displaystyle{\frac{b_0^2}
{\lambda-a_1-\displaystyle{\frac{b_1^2}{\ddots}}}}}=-\cfr{1}{\lambda-a_0}-
\cfr{b_0^2}{\lambda-a_{1}}-
\cfr{b_1^2}{\lambda-a_{2}}-\dots
\end{equation}
which generates the relation \eqref{qOPrelations} in the standard way \cite[Chapter 1, Section 4]{A65}, \cite[Section 5]{Simon98}. By the way, in what follows we will be using the second form of representing continued fractions since it makes formulas more transparent and shorter.
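As a numerical illustration (our own, not from the text): truncating \eqref{Jfraction} at depth $N$ with tail $\varphi_N\equiv 0$ and unwinding $\varphi_j=-1/(\lambda-a_j+b_j^2\varphi_{j+1})$ gives a rational approximant of $\varphi$. For the hypothetical constant coefficients $a_j=0$, $b_j=1/2$, the Jacobi matrix of the semicircle (Chebyshev $U$) measure on $[-1,1]$, the fraction converges on $\dC_+$ to the Cauchy transform $\varphi(\lambda)=2(-\lambda+\sqrt{\lambda^2-1})$, a standard closed form stated here only for checking.

```python
import cmath

def j_fraction(lam, a, b, depth):
    # Unwind phi_j = -1 / (lam - a_j + b_j**2 * phi_{j+1}) from an
    # assumed zero tail; returns the truncated value of phi = phi_0.
    phi = 0j
    for j in reversed(range(depth)):
        phi = -1 / (lam - a(j) + b(j) ** 2 * phi)
    return phi
```

The imaginary part of the result is positive on $\dC_+$, as it must be for a Nevanlinna function.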
Summing up, we have arrived at the desired interpolation problem and the step-by-step algorithm in the real line case. However, this time the interpolation is at $\infty$ (see \cite[Chapter 3, Section 3.6]{A65}), unlike the unit circle case, where the corresponding problem is the multiple interpolation at $0$. This means that although the two problems look somewhat similar, they are different in nature. Indeed, the unit circle case concerns the multiple interpolation at $0$, which belongs to the domain of analyticity of Carath\'eodory functions. But the real line case deals with the multiple interpolation at $\infty$, which belongs to the boundary of the domain of analyticity of Nevanlinna functions. That is, a Nevanlinna function does not have to be analytic at $\infty$ and thus we cannot apply the uniqueness theorem. In turn, this entails that there might be many Nevanlinna functions satisfying \eqref{NevHam}, which leads to the theory of extensions of symmetric Jacobi operators to self-adjoint ones \cite{A65}, \cite{Simon98}.
To conclude this section let us formulate the following statement, which is well known but perhaps was never worded exactly this way.
\begin{proposition}
The trigonometric and Hamburger moment problems are representatives of a class of interpolation problems, which are called Nevanlinna-Pick problems. Moreover, there are Hamburger moment problems that cannot be restated in the form of trigonometric moment problems. In other words, the OPRL theory is not equivalent to the OPUC theory.
\end{proposition}
\section{Revisiting Wall's ideas}
As explained in the previous section, the theories of OPUC and OPRL do not line up completely since they correspond to different kinds of interpolation problems. So, one can ask a few natural questions. For instance, what would be the theory corresponding to OPUC on the real line? One of the answers to the question is given by the Szeg\H{o} mapping \cite[Section 13.1]{OPUC2}. However, this answer is not entirely natural for the corresponding interpolation problems. As is known, a simple transformation establishes a one-to-one correspondence between Carath\'eodory and Nevanlinna functions. More precisely, it is clear that if $F$ is a Carath\'eodory function then the function
\[
\varphi(\lambda)=iF(z), \quad z=\frac{i-\lambda}{i+\lambda},
\]
is a Nevanlinna function and in view of this transformation there is a natural relation between interpolation problems in the classes of Carath\'eodory and Nevanlinna functions. Besides, the Cayley transform is also natural for relating unitary and self-adjoint operators. Hence, an instinctive way to answer the question would be through the use of the Cayley transform. It appears that this scheme was realized by H. S. Wall \cite{Wall44}, \cite{Wall46} (see also \cite{Wall48}). Actually, H. S. Wall developed the idea of representing Schur, Carath\'eodory, and Nevanlinna functions by means of continued fractions and the starting point of that study was the classical Schur algorithm. In this section we reframe the core that lies behind Wall's representations and tailor it to our further needs.
Let us start by noticing that the first obvious discrepancy between OPUC and OPRL is that the sequence of transformations \eqref{SchurAlg} is not a continued fraction contrary to the real line case. Nevertheless, the first step that was done by H. S. Wall is the observation
\[
f_n(z)=\frac{\gamma_n + zf_{n+1}(z)}{1+\bar\gamma_n zf_{n+1}(z)}
=\gamma_n+\frac{(1-|\gamma_n|^2)z}{\displaystyle{\overline{\gamma}_nz+\frac{1}{f_{n+1}(z)}}}.
\]
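Wall's rewriting of a single Schur step can be sanity-checked numerically. The following Python sketch (the values of $\gamma_n$, $z$, $f_{n+1}(z)$ are arbitrary sample points chosen by us, with the required moduli, not taken from the text) confirms that the two representations agree:

```python
# Check that one Schur step equals Wall's two-level continued fraction form.
# gamma, z, f1 are arbitrary sample values: |gamma| < 1, |z| < 1, f1 != 0.
gamma, z, f1 = 0.3 - 0.4j, 0.5 + 0.2j, 0.1 + 0.6j

lhs = (gamma + z * f1) / (1 + gamma.conjugate() * z * f1)
rhs = gamma + (1 - abs(gamma) ** 2) * z / (gamma.conjugate() * z + 1 / f1)

assert abs(lhs - rhs) < 1e-12
```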
Clearly, such representations can be combined into the following expansion
\begin{equation}\label{WF}
f(z)\sim\gamma_0+\cfr{(1-|\gamma_0|^2)z}{\overline{\gamma}_0z}+
\cfr{1}{\gamma_1}+
\cfr{(1-|\gamma_1|^2)z}{\overline{\gamma}_1z}+\dots.
\end{equation}
Let us stress here again that the structure of this fraction and, hence, the underlying recurrence relations are not suitable for any operator interpretation. So, to speculate, one may say that after arriving at \eqref{WF} H. S. Wall decided to see if it was possible to do better for another class of analytic functions. As we have already seen, the next one in line would be the class of Carath\'eodory functions.
In order to get Wall's representation of Carath\'eodory functions it will be convenient to represent linear fractional transformations using $2\times 2$ matrices. Namely, we can follow the notation from \cite[page 33]{OPUC1} and rewrite \eqref{SchurAlg} in the following manner
\begin{equation}\label{MatSA}
\begin{pmatrix}
f_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
z&\gamma_n\\
\overline{\gamma}_nz&1
\end{pmatrix}
\begin{pmatrix}
f_{n+1}(z)\\
1
\end{pmatrix},
\end{equation}
where the symbol $\stackrel{.}{=}$ is used in the sense that
\[
\begin{pmatrix}
a\\
b
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
c\\
d
\end{pmatrix} \Leftrightarrow \frac{a}{b}=\frac{c}{d}.
\]
The next step is to use the Wall Ansatz, which consists in introducing the function
\begin{equation}\label{WallAnsatz}
\begin{pmatrix}
h_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
-\delta_n&1\\
{\delta}_nz&1
\end{pmatrix}
\begin{pmatrix}
f_{n}(z)\\
1
\end{pmatrix},
\end{equation}
where the coefficients $\delta_n$ are given by the recurrence relations
\[
\delta_0=1, \quad \delta_k=\frac{\overline{\gamma}_{k-1}-\delta_{k-1}}{1-\gamma_{k-1}\delta_{k-1}}, \quad k=1,2, \dots, n.
\]
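A short Python sketch (the Schur parameters $\gamma_k$ below are arbitrary sample data with $|\gamma_k|<1$, not taken from the text) confirms numerically that this recursion keeps every $\delta_n$ on the unit circle, a fact used repeatedly below:

```python
# The recursion delta_k = (conj(gamma_{k-1}) - delta_{k-1})/(1 - gamma_{k-1}*delta_{k-1})
# preserves |delta_k| = 1 whenever |gamma_k| < 1 and delta_0 = 1.
gammas = [0.3 - 0.4j, -0.2 + 0.5j, 0.6 + 0.1j, -0.45 - 0.3j]   # sample data

deltas = [1.0 + 0.0j]                  # delta_0 = 1
for gam in gammas:
    d = deltas[-1]
    deltas.append((gam.conjugate() - d) / (1 - gam * d))

assert all(abs(abs(d) - 1) < 1e-12 for d in deltas)
```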
Then \eqref{MatSA} becomes
\[
\begin{pmatrix}
h_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
-\delta_n&1\\
{\delta}_nz&1
\end{pmatrix}
\begin{pmatrix}
z&\gamma_n\\
\overline{\gamma}_nz&1
\end{pmatrix}
\begin{pmatrix}
-\delta_{n+1}&1\\
{\delta}_{n+1}z&1
\end{pmatrix}^{-1}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix},
\]
which reduces to
\begin{equation}\label{HalfTr}
\begin{pmatrix}
h_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
0& (z+1)(\delta_n-\overline{\gamma}_n)\\
z(z+1)\frac{\delta_n(1-|\gamma_n|^2)}{1-\gamma_n\delta_n}& \delta_n(z+1)\left(\frac{1-\overline{\gamma}_n\overline{\delta}_n}{1-\gamma_n\delta_n}-z\right)
\end{pmatrix}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix}.
\end{equation}
Since the $(1,1)$-entry of the $2\times 2$ matrix in \eqref{HalfTr} is $0$, the corresponding transformations obviously lead to a continued fraction, but before writing it down we can simplify it. To this end, notice that, due to the definition of $\stackrel{.}{=}$, multiplying each entry of the $2\times 2$ matrix by the same non-vanishing expression gives a relation equivalent to the original one. In particular, if we multiply the matrix by $\overline{\delta}_n(1-\gamma_n\delta_n)/(z+1)$ and take into account that $|\delta_n|=1$, we get
\begin{equation}\label{MainMTr}
\begin{pmatrix}
h_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
0& |1-\gamma_n\delta_n|^2\\
z{(1-|\gamma_n\delta_n|^2)}& (1-\overline{\gamma}_n\overline{\delta}_n)-(1-\gamma_n\delta_n)z
\end{pmatrix}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix}.
\end{equation}
Still, the entries of the matrix in the latter relation look a bit heavy, and it is possible to simplify them, as was done by H. S. Wall. So, let us introduce two sequences of numbers
\begin{equation}\label{gr}
g_{n+1}=\frac{|1-\gamma_n\delta_n|^2}{2\Rl(1-\gamma_n\delta_n)},\quad
r_{n+1}=-\frac{\operatorname{Im}(1-\gamma_n\delta_n)}{\Rl(1-\gamma_n\delta_n)}, \quad n=0, 1, 2, \dots
\end{equation}
and then divide the $2\times 2$ matrix in \eqref{MainMTr} by $\Rl(1-\gamma_n\delta_n)>0$. This manipulation leads to the equivalent representation of \eqref{MainMTr}
\begin{equation}\label{MainMTrgr}
\begin{pmatrix}
h_n(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
0& 2g_{n+1}\\
2(1-g_{n+1})z& (1+ir_{n+1})-(1-ir_{n+1})z
\end{pmatrix}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix}.
\end{equation}
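The single step \eqref{MainMTrgr} can be tested numerically. The Python sketch below (all sample values are ours: $|z|<1$, $|\gamma_n|<1$, $|\delta_n|=1$, $|f_{n+1}(z)|<1$) builds $f_n$ from $f_{n+1}$ by the Schur step, forms $h_n$ and $h_{n+1}$ via the Wall Ansatz, and checks the resulting linear fractional relation:

```python
import cmath

z = 0.4 + 0.3j                         # sample point in the unit disk
gamma = 0.5 - 0.2j                     # Schur parameter, |gamma| < 1
delta = cmath.exp(0.7j)                # |delta_n| = 1
f_next = 0.3 + 0.1j                    # sample value of f_{n+1}(z)

delta_next = (gamma.conjugate() - delta) / (1 - gamma * delta)
f_n = (gamma + z * f_next) / (1 + gamma.conjugate() * z * f_next)

# Wall Ansatz: h = (1 - delta*f)/(1 + delta*z*f)
h_n = (1 - delta * f_n) / (1 + delta * z * f_n)
h_next = (1 - delta_next * f_next) / (1 + delta_next * z * f_next)

u = 1 - gamma * delta                  # Re u > 0 since |gamma*delta| < 1
g = abs(u) ** 2 / (2 * u.real)         # g_{n+1}
r = -u.imag / u.real                   # r_{n+1}

lhs = h_n
rhs = 2 * g / (2 * (1 - g) * z * h_next + (1 + 1j * r) - (1 - 1j * r) * z)
assert abs(lhs - rhs) < 1e-10
```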
Next, one can observe that \eqref{gr}, \eqref{SchurCond}, and $|\delta_k|=1$ imply
\begin{equation}\label{grCond}
0<g_k<1,\quad -\infty<r_k<+\infty, \quad k=1,2,3,\dots.
\end{equation}
Indeed, to see the validity of the first one we need to notice that $\Rl(1-\gamma_n\delta_n)>0$ and
\[
\begin{split}
1>&|\gamma_n\delta_n|^2=|1-(1-\gamma_n\delta_n)|^2=
\\&1-2\Rl(1-\gamma_n\delta_n)+\Rl^2(1-\gamma_n\delta_n)+\operatorname{Im}^2(1-\gamma_n\delta_n)=
\\&1-2\Rl(1-\gamma_n\delta_n)+|1-\gamma_n\delta_n|^2.
\end{split}
\]
The second inequality in \eqref{grCond} is obvious. Now everything is in place and we
can consecutively apply the linear fractional transformations \eqref{MainMTrgr}, which eventually gives a continued fraction expansion of $h_0$
\begin{equation}\label{CFforh_0}
h_0(z)=\frac{1-f(z)}{1+zf(z)}\sim \cfr{2g_1z}{(1+ir_{1})-(1-ir_{1})z}+
\cfr{4(1-g_{1})g_2z}{(1+ir_{2})-(1-ir_{2})z}+
\dots.
\end{equation}
The structure of the latter continued fraction resembles the continued fraction derived by Ya. L. Geronimus \cite{Ger41}, but Geronimus' fraction is different. Besides, the continued fraction \eqref{CFforh_0} is implicitly present in \cite{DG}, where a tridiagonal approach to OPUC was developed in relation to the Bistritz test and the split Levinson algorithm.
Switching to the original goal, which is to get a continued fraction representation of Carath\'eodory functions, one can see that in general $h_0$ does not belong to the class of Carath\'eodory functions. Nevertheless, a simple linear fractional transformation sends $h_0$ to $F$ defined by \eqref{Carathe}
\[
F(z)=\frac{1+z}{1-z+2z h_0(z)}.
\]
Therefore, we arrive at the following representation of $F$
\begin{equation}\label{CarFrac}
\Scale[1.05]{F(z)\sim \cfr{1+z}{1-z}+\cfr{4g_1z}{(1+ir_{1})-(1-ir_{1})z}+
\cfr{4(1-g_{1})g_2z}{(1+ir_{2})-(1-ir_{2})z}+
\cfr{4(1-g_{2})g_3z}{(1+ir_{3})-(1-ir_{3})z}+\dots}.
\end{equation}
Besides, the continued fraction \eqref{CarFrac} converges locally uniformly in ${\mathbb D}$ \cite[(iv) on page 292]{Wall48}, which is essentially a consequence of a convergence result regarding \eqref{WF} (see \cite[Theorem 77.1]{Wall48}).
Finally, we are in the position to formulate a partial answer to the question posed at the beginning of this section.
\begin{theorem}[Wall's theorem] \label{WallTf} Let $F$ be a Carath\'eodory function corresponding to the Schur parameters $\gamma_0$, $\gamma_1$, $\gamma_2$, \dots. Then the Nevanlinna function $\varphi$ defined via the relation
\begin{equation}\label{CayleyT}
\varphi(\lambda)=iF(z), \quad z=\frac{i-\lambda}{i+\lambda}
\end{equation}
can also be generated with the help of the following continued fraction
\begin{equation}\label{ThforN}
\varphi(\lambda)\sim
-\cfr{1}{\lambda}-\cfr{g_1(\lambda^2+1)}{\lambda-r_{1}}-
\cfr{(1-g_{1})g_2(\lambda^2+1)}{\lambda-r_{2}}-
\cfr{(1-g_{2})g_3(\lambda^2+1)}{\lambda-r_{3}}-\dots,
\end{equation}
where the numbers $g_k$ and $r_k$ are defined by \eqref{gr}. Conversely, any two sequences of numbers $g_k$ and $r_k$ that obey \eqref{grCond} produce a Nevanlinna function $\varphi$ normalized by the condition $\varphi(i)=i$ through \eqref{ThforN}.
In this case, the Schur parameters of the Carath\'eodory function $F$ defined by \eqref{CayleyT} can be recovered in the following two steps:
\[
u_k:=1-\gamma_k\delta_k=\frac{2g_{k+1}}{1+r_{k+1}^2}-\frac{2g_{k+1}r_{k+1}}{1+r_{k+1}^2}i
\]
and then since $\delta_0=1$ we have
\[
\gamma_0=1-u_0, \quad \gamma_{k+1}=\frac{u_0u_1\dots u_k}{\overline{u}_0\overline{u}_1\dots \overline{u}_k}(1-u_{k+1}),\quad k=0, 1, 2, \dots.
\]
\end{theorem}
\begin{proof}
To get \eqref{ThforN} from \eqref{CarFrac} is easy. It is just the straightforward substitution of \eqref{CayleyT} into
\eqref{CarFrac}. The rest is a consequence of the convergence result that was mentioned above and simple algebraic manipulations.
\end{proof}
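The first step of the recovery procedure is straightforward to verify numerically. In the Python sketch below (the Schur parameters are arbitrary sample data chosen by us), we compute $\delta_k$, $g_{k+1}$, $r_{k+1}$ and check that the displayed formula for $u_k$ reproduces $1-\gamma_k\delta_k$, as well as $\gamma_0=1-u_0$:

```python
# Round-trip check of the first recovery step: from (g_{k+1}, r_{k+1})
# back to u_k = 1 - gamma_k * delta_k.  The gammas are sample data.
gammas = [0.3 - 0.4j, -0.2 + 0.5j, 0.55 + 0.1j]

delta = 1.0 + 0j                       # delta_0 = 1
for k, gam in enumerate(gammas):
    u = 1 - gam * delta                # u_k = 1 - gamma_k delta_k
    g = abs(u) ** 2 / (2 * u.real)     # g_{k+1}
    r = -u.imag / u.real               # r_{k+1}
    # recovery formula: u_k from g_{k+1} and r_{k+1}
    u_rec = 2 * g / (1 + r**2) - 2j * g * r / (1 + r**2)
    assert abs(u_rec - u) < 1e-12
    if k == 0:
        assert abs((1 - u) - gammas[0]) < 1e-12   # gamma_0 = 1 - u_0
    delta = (gam.conjugate() - delta) / (1 - gam * delta)
```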
The reader who is familiar with the various types of continued fractions will easily recognize a Thiele fraction in \eqref{ThforN} \cite[Appendix A]{JT}. More precisely, it looks like an even or odd part of a Thiele fraction (see \cite[Sections 2.4.2 and 2.4.3]{JT} for the definitions of even and odd parts of a continued fraction), and more details about the relation between \eqref{ThforN} and Thiele fractions can be found in \cite{STZh}. In fact, Thiele fractions are associated with an interpolating process. Hence, it is quite natural that we encounter them here, as we are solving the interpolation problem of finding necessary and sufficient conditions on the Taylor coefficients of $\varphi$ at $i$ in order that $\operatorname{Im}\varphi(\lambda) >0$ for $\lambda\in\dC_+$. Put differently, \eqref{ThforN} is basically the image of the Schur algorithm under the Cayley transform.
At the same time, \eqref{ThforN} is a particular case of the continued fractions of type $R_{II}$ that were introduced by M. Ismail and D. Masson \cite{IM95} and were shown to generate biorthogonal rational functions. That is, there is some orthogonality behind the scenes here, but we can also see this in a different way since, in a sense, the original object is OPUC.
\begin{corollary}[Wall's characterization]\label{CharNev}
There is a bijection between pairs of infinite sequences with the property \eqref{grCond} and Nevanlinna functions $\varphi$ normalized by the condition $\varphi(i)=i$, provided that we also consider finite sequences where the last $g_n$ is equal to $1$. In case we have infinitely many $g_k$, the continued fraction \eqref{ThforN} converges locally uniformly in $\dC_+$.
\end{corollary}
\begin{proof}
The statement is a consequence of Theorem \ref{WallTf} and the corresponding result for Schur parameters \cite{DK03} (see also \cite[Theorem 3.1.3]{OPUC1}).
\end{proof}
\begin{remark} Almost everything from this section is present in \cite{Wall44} or \cite{Wall46} in one way or another. Besides, the corresponding material was also included in the book \cite[Sections 77 and 78]{Wall48}. One of the few modifications made here is the direct use of \eqref{CayleyT}. As a matter of fact, H. S. Wall did not obtain \eqref{ThforN} directly from \eqref{CarFrac} but rather used an intermediate class of analytic functions (see \cite[Theorem 78.1]{Wall48}) in order to get his characterization of Nevanlinna functions. Nevertheless, the combination of his steps gives exactly the Cayley transform \eqref{CayleyT}.
\end{remark}
\section{Approximants to the Wall continued fractions}
It is well known that OPRL appear as denominators of approximants to $J$-fractions \eqref{Jfraction} (details can be found in \cite{A65} or \cite{Simon98}). So, in this section we are going to explore the approximants to the continued fraction \eqref{ThforN}, which does resemble \eqref{Jfraction} but corresponds to the multiple interpolation at $i$ instead of the multiple interpolation at $\infty$.
To begin with, note that we know that combining the first $n+1$ iterates \eqref{SchurAlg} leads to the following representation of the function $f=f_0$
\begin{equation}\label{nIterations}
f(z)=\frac{A_{n}(z)+zB_{n}^*(z)f_{n+1}(z)}{B_{n}(z)+zA_{n}^*(z)f_{n+1}(z)},
\end{equation}
where $A_{n}$, $B_{n}$ are polynomials called Wall polynomials, $A_{n}^*$, $B_{n}^*$ are the reversed polynomials defined by
\[
A_n^*(z)=z^n\overline{A_n(1/\overline{z})}, \quad B_n^*(z)=z^n\overline{B_n(1/\overline{z})},
\]
and $f_{n+1}(z)$ is the $(n+1)$-th iterate of the Schur algorithm \cite[Section 1.3]{OPUC1}.
It turns out that the Wall polynomials and the sequence $\{\Phi_n\}_{n=0}^{\infty}$ of OPUC are related via the Pint\'{e}r-Nevai formula \cite{PN}:
\begin{equation*}
\Phi_n(z)=zB_{n-1}^*(z)-A_{n-1}^*(z),\qquad \Phi_n^*(z)=B_{n-1}(z)-zA_{n-1}(z).
\end{equation*}
In fact, there are recurrence formulas for Wall polynomials which can be easily obtained from the matrix interpretation of formula \eqref{nIterations}, that is,
\[
\begin{pmatrix}
zB_n^*(z)&A_n(z)\\
zA_n^*(z)&B_n(z)
\end{pmatrix}
=\begin{pmatrix}
z&\gamma_0\\
\overline{\gamma}_0z&1
\end{pmatrix}
\begin{pmatrix}
z&\gamma_1\\
\overline{\gamma}_1z&1
\end{pmatrix}\dots
\begin{pmatrix}
z&\gamma_n\\
\overline{\gamma}_nz&1
\end{pmatrix}.
\]
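Both the matrix recurrence and the Pint\'{e}r-Nevai formula can be cross-checked numerically. In the Python sketch below (Schur parameters and evaluation points are sample data chosen by us), the Wall polynomials are evaluated through the matrix product above, while $\Phi_n$ and $\Phi_n^*$ are generated by the Szeg\H{o} recurrence $\Phi_{n+1}=z\Phi_n-\overline{\gamma}_n\Phi_n^*$, $\Phi_{n+1}^*=\Phi_n^*-\gamma_nz\Phi_n$; the representation \eqref{nIterations} is tested as well:

```python
import numpy as np

gammas = [0.3 - 0.4j, -0.2 + 0.5j, 0.25 + 0.1j]   # sample Schur parameters
f_next = 0.2 + 0.3j                               # sample value of f_{n+1}(z)

for z in [0.3 + 0.2j, -0.5 + 0.1j, 0.6j]:         # sample points, z != 0
    # product of the elementary matrices: [[z*B_n^*, A_n], [z*A_n^*, B_n]]
    M = np.eye(2, dtype=complex)
    for gam in gammas:
        M = M @ np.array([[z, gam], [np.conj(gam) * z, 1]])

    # Szego recurrence for Phi_{n+1} and its reversed polynomial
    Phi, PhiStar = 1 + 0j, 1 + 0j
    for gam in gammas:
        Phi, PhiStar = z * Phi - np.conj(gam) * PhiStar, PhiStar - gam * z * Phi

    # Pinter-Nevai: Phi_{n+1} = z*B_n^* - A_n^*,  Phi_{n+1}^* = B_n - z*A_n
    assert abs(Phi - (M[0, 0] - M[1, 0] / z)) < 1e-10
    assert abs(PhiStar - (M[1, 1] - z * M[0, 1])) < 1e-10

    # the representation of f = f_0 through the Wall polynomials
    f = f_next
    for gam in reversed(gammas):
        f = (gam + z * f) / (1 + np.conj(gam) * z * f)
    assert abs(f - (M[0, 1] + M[0, 0] * f_next) / (M[1, 1] + M[1, 0] * f_next)) < 1e-10
```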
Then, the next order of business will be to find the formulas for approximants to the continued fraction \eqref{CarFrac}. To this end, let us first recall that
\[
\begin{pmatrix}
F(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
0&1+z\\
2z&1-z
\end{pmatrix}
\begin{pmatrix}
h_{0}(z)\\
1
\end{pmatrix}.
\]
Secondly, it follows from \eqref{nIterations} that
\[
\begin{pmatrix}
f_0(z)\\
1
\end{pmatrix}\stackrel{.}{=}\begin{pmatrix}
zB_n^*(z)&A_n(z)\\
zA_n^*(z)&B_n(z)
\end{pmatrix}\begin{pmatrix}
f_{n+1}(z)\\
1
\end{pmatrix}
\]
Next, taking into account \eqref{WallAnsatz} the latter relation reduces to
\[
\begin{pmatrix}
h_0(z)\\
1
\end{pmatrix}\stackrel{.}{=}\begin{pmatrix}
-1&1\\
z&1
\end{pmatrix}\begin{pmatrix}
zB_n^*(z)&A_n(z)\\
zA_n^*(z)&B_n(z)
\end{pmatrix}
\begin{pmatrix}
-\delta_{n+1}&1\\
{\delta}_{n+1}z&1
\end{pmatrix}^{-1}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix}
\]
which further gives
\[
\begin{pmatrix}
F(z)\\
1
\end{pmatrix}\stackrel{.}{=}
\begin{pmatrix}
0&1+z\\
2z&1-z
\end{pmatrix}\begin{pmatrix}
-1&1\\
z&1
\end{pmatrix}\begin{pmatrix}
zB_n^*(z)&A_n(z)\\
zA_n^*(z)&B_n(z)
\end{pmatrix}
\begin{pmatrix}
1&-1\\
-\delta_{n+1}z&-{\delta}_{n+1}
\end{pmatrix}
\begin{pmatrix}
h_{n+1}(z)\\
1
\end{pmatrix}.
\]
Now, introducing
\[
\begin{split}
W_n(z)&=
\begin{pmatrix}
w_{1,1}^{(n)}(z)&w_{1,2}^{(n)}(z)\\
w_{2,1}^{(n)}(z)&w_{2,2}^{(n)}(z)
\end{pmatrix}
\\&:=
\begin{pmatrix}
z^2+z&1+z\\
-z&2
\end{pmatrix}\begin{pmatrix}
zB_n^*(z)&A_n(z)\\
zA_n^*(z)&B_n(z)
\end{pmatrix}
\begin{pmatrix}
1&-1\\
-\delta_{n+1}z&-{\delta}_{n+1}
\end{pmatrix}
\end{split}
\]
yields
\begin{equation}\label{nIterationsCar}
F(z)=\frac{w_{1,1}^{(n)}(z)+w_{1,2}^{(n)}(z)h_{n+1}(z)}{w_{2,1}^{(n)}(z)+w_{2,2}^{(n)}(z)h_{n+1}(z)},
\end{equation}
which is another form of the representation
\begin{equation}\label{CarFracTrunc}
\Scale[1.05]{F(z)=\cfr{1+z}{1-z}+\cfr{4g_1z}{(1+ir_{1})-(1-ir_{1})z}+
\dots+
\cfr{4(1-g_{n})g_{n+1}z}{(1+ir_{n+1})-(1-ir_{n+1})z+2(1-g_{n+1})zh_{n+1}(z)}}.
\end{equation}
Hence, we get the formula for the approximants
\[
\frac{w_{1,1}^{(n)}(z)}{w_{2,1}^{(n)}(z)}=
\cfr{1+z}{1-z}+\cfr{4g_1z}{(1+ir_{1})-(1-ir_{1})z}+
\dots+
\cfr{4(1-g_{n})g_{n+1}z}{(1+ir_{n+1})-(1-ir_{n+1})z}
\]
by setting $h_{n+1}=0$ in \eqref{nIterationsCar} and \eqref{CarFracTrunc}.
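As a consistency check, one can compare the matrix form of the approximants with a direct bottom-up evaluation of the truncated continued fraction. The Python sketch below (the values of $z$, $g_k$, $r_k$ are sample data chosen by us) composes the initial transformation with the matrices \eqref{MainMTrgr} and evaluates the nested fraction \eqref{CarFracTrunc} with $h_{n+1}=0$:

```python
import numpy as np

z = 0.35 - 0.25j                       # sample point in the unit disk
gs = [0.6, 0.3, 0.8, 0.45]             # sample g_1, ..., g_4 in (0, 1)
rs = [0.2, -0.7, 1.1, 0.5]             # sample r_1, ..., r_4 (real)

d = [(1 + 1j * r) - (1 - 1j * r) * z for r in rs]

# matrix route: initial transformation composed with the one-step matrices
M = np.array([[0, 1 + z], [2 * z, 1 - z]])
for g, dk in zip(gs, d):
    M = M @ np.array([[0, 2 * g], [2 * (1 - g) * z, dk]])
lhs = M[0, 1] / M[1, 1]                # image of h_{n+1} = 0, i.e. of (0, 1)^T

# continued-fraction route: bottom-up evaluation with h_{n+1} = 0
den = d[-1]
for k in range(len(gs) - 2, -1, -1):
    den = d[k] + 4 * (1 - gs[k]) * gs[k + 1] * z / den
rhs = (1 + z) / ((1 - z) + 4 * gs[0] * z / den)

assert abs(lhs - rhs) < 1e-10
```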
\begin{remark} It appears that in \cite{CCSV14} the authors considered the following continued fraction
\[
\frac{1+z-(1-z)F(z)}{2zF(z)}=h_0(z)=\cfr{2g_1z}{(1+ir_{1})-(1-ir_{1})z}+
\cfr{4(1-g_{1})g_2z}{(1+ir_{2})-(1-ir_{2})z}+\dots
\]
and in subsequent papers they were developing a theory of the corresponding polynomials. Actually, as one can see from the above reasoning, such a theory is just a veiled theory of OPUC. However, this relation deserves special attention from the point of view of spectral transformations (see \cite{Zh97} for the terminology and an explanation of the importance to the field), as $h_0$ is a spectral transformation of $F$ and vice versa.
\end{remark}
At this point we are ready to figure out what happens in the case of Nevanlinna functions. To do so, one has to apply the Cayley transform \eqref{CayleyT}. After applying it, the first transformation
\[
\begin{pmatrix}
0&1+z\\
2z&1-z
\end{pmatrix}
\]
reduces to
\[
\cW_0(\lambda)=\begin{pmatrix}
0&-1\\
i-\lambda&\lambda
\end{pmatrix},
\]
where some simplifications were made in accordance with the fact that $\cW_0$ represents a linear fractional transformation. Then, the transformations \eqref{MainMTrgr} become
\[
\cW_n(\lambda)=
\begin{pmatrix}
0& g_{n}(i+\lambda)\\
(1-g_{n})(i-\lambda)& \lambda-r_{n}
\end{pmatrix}, \quad n=1,2,\dots.
\]
Clearly, the family $\cW_n$ generates \eqref{ThforN} and also we have
\[
\begin{pmatrix}
\varphi(\lambda)\\
1
\end{pmatrix}\stackrel{.}{=}
\cW_0(\lambda)
\cW_1(\lambda)
\dots
\cW_{n}(\lambda)
\begin{pmatrix}
H_{n}(\lambda)\\
1
\end{pmatrix}, \quad n=0, 1, 2\dots,
\]
where $H_{n}(\lambda)=h_{n}(z)$ with $z=(i-\lambda)/(i+\lambda)$ or, equivalently,
\begin{equation}\label{ThforNtrunc}
\varphi(\lambda)=
-\cfr{1}{\lambda}-\cfr{g_1(\lambda^2+1)}{\lambda-r_{1}}-
\dots-
\cfr{(1-g_{n-1})g_n(\lambda^2+1)}{\lambda-r_{n}+(1-g_{n})(i-\lambda)H_n(\lambda)}.
\end{equation}
As a result, we have the following statement.
\begin{theorem}
The transfer matrix
\[
\cW_{[0,n]}(\lambda):=\cW_0(\lambda)
\cW_1(\lambda)
\dots
\cW_{n}(\lambda)
\]
has the following structure
\[
\cW_{[0,n]}(\lambda)=
\begin{pmatrix}
(1-g_n)(i-\lambda){\mathcal A}_{n-1}(\lambda)&{\mathcal A}_{n}(\lambda)\\
(1-g_n)(i-\lambda)\cB_{n-1}(\lambda)&\cB_{n}(\lambda)
\end{pmatrix},
\]
where the polynomials ${\mathcal A}_n$ and $\cB_n$ satisfy the recurrence relations
\begin{equation}\label{MPrelations}
\begin{split}
{\mathcal A}_{n}(\lambda)=(\lambda-r_n){\mathcal A}_{n-1}(\lambda)-(1-g_{n-1})g_{n}(\lambda^2+1){\mathcal A}_{n-2}(\lambda)\\
\cB_{n}(\lambda)=(\lambda-r_n)\cB_{n-1}(\lambda)-(1-g_{n-1})g_{n}(\lambda^2+1)\cB_{n-2}(\lambda)
\end{split}
\end{equation}
with the initial conditions
\[
{\mathcal A}_{-1}=0, \quad {\mathcal A}_{0}=-1, \quad \cB_{-1}=1, \quad \cB_{0}=\lambda.
\]
Besides, the formula
\begin{equation}\label{DmNP}
\varphi(\lambda)=\frac{(1-g_n){\mathcal A}_{n-1}(\lambda)(\tau(\lambda)^{-1}+\lambda)-{\mathcal A}_{n}(\lambda)}{(1-g_n)\cB_{n-1}(\lambda)(\tau(\lambda)^{-1}+\lambda)-\cB_{n}(\lambda)}
\end{equation}
gives a description of all Nevanlinna functions $\varphi$ that have the prescribed $n+1$ Taylor coefficients at $\lambda=i$ in terms of an arbitrary Nevanlinna function $\tau$ such that
\begin{equation}\label{FirstI}
\varphi(i)=i.
\end{equation}
In fact, the latter condition is also the first interpolation condition because it is inherited from the unit circle case since we only consider probability measures on $\dT$.
\end{theorem}
\begin{proof}
At first, define $\cW_{[0,n]}$ to be
\[
\cW_{[0,n]}(\lambda)=
\begin{pmatrix}
\cC_{n}(\lambda)&{\mathcal A}_{n}(\lambda)\\
{\mathcal D}_{n}(\lambda)&\cB_{n}(\lambda)
\end{pmatrix}.
\]
Next, the definition of $\cW_{[0,n]}$ entails the formula
\[
\begin{split}
\begin{pmatrix}
\cC_{n}(\lambda)&{\mathcal A}_{n}(\lambda)\\
{\mathcal D}_{n}(\lambda)&\cB_{n}(\lambda)
\end{pmatrix}&=\cW_{[0,n]}(\lambda)=\cW_{[0,n-1]}(\lambda)\cW_{n}(\lambda)\\
&=
\begin{pmatrix}
\cC_{n-1}(\lambda)&{\mathcal A}_{n-1}(\lambda)\\
{\mathcal D}_{n-1}(\lambda)&\cB_{n-1}(\lambda)
\end{pmatrix}\begin{pmatrix}
0& g_{n}(i+\lambda)\\
(1-g_{n})(i-\lambda)& \lambda-r_{n}
\end{pmatrix},
\end{split}
\]
which gives
\[
\cC_{n}(\lambda)=(1-g_n)(i-\lambda){\mathcal A}_{n-1}(\lambda), \quad {\mathcal D}_{n}(\lambda)=(1-g_n)(i-\lambda)\cB_{n-1}(\lambda),
\]
and then \eqref{MPrelations} follows from the rest of the relations. The initial conditions are the result of the form of $\cW_0$. So, we can now proceed to formula \eqref{DmNP}. As a matter of fact, it is a consequence of \eqref{ThforNtrunc} and the corresponding result for Schur functions \cite{DK03} if we set
\[
\tau(\lambda)=-\frac{1}{\lambda+(i-\lambda)H_n(\lambda)},
\]
which means that
\[
\tau(\lambda)=-\cfr{1}{\lambda}-
\cfr{g_{n+1}(\lambda^2+1)}{\lambda-r_{n+1}}-
\cfr{(1-g_{n+1})g_{n+2}(\lambda^2+1)}{\lambda-r_{n+2}}-
\dots
\]
and therefore the function $\tau$ can be an arbitrary Nevanlinna function satisfying \eqref{FirstI} in view of Corollary \ref{CharNev}.
\end{proof}
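The structure of the transfer matrix and the recurrences \eqref{MPrelations} can be cross-checked numerically. In the Python sketch below (sample $\lambda$, $g_k$, $r_k$ chosen by us, with the convention $g_0=0$, $r_0=0$), the products $\cW_0(\lambda)\cdots\cW_n(\lambda)$ are compared entrywise with the polynomials generated by \eqref{MPrelations}:

```python
import numpy as np

lam = 0.7 + 0.9j                       # sample spectral parameter
gs = [0.0, 0.3, 0.6, 0.45, 0.8]        # g_0 = 0 by convention, 0 < g_k < 1
rs = [0.0, -0.2, 0.5, 1.1, -0.7]       # r_0 = 0 by convention, r_k real

def W(k):
    if k == 0:
        return np.array([[0, -1], [1j - lam, lam]])
    return np.array([[0, gs[k] * (1j + lam)],
                     [(1 - gs[k]) * (1j - lam), lam - rs[k]]])

A = [0, -1]                            # A_{-1}, A_0
B = [1, lam]                           # B_{-1}, B_0
prod = W(0)
for n in range(1, 5):
    prod = prod @ W(n)
    A.append((lam - rs[n]) * A[-1] - (1 - gs[n - 1]) * gs[n] * (lam**2 + 1) * A[-2])
    B.append((lam - rs[n]) * B[-1] - (1 - gs[n - 1]) * gs[n] * (lam**2 + 1) * B[-2])
    # compare with the claimed structure of W_{[0,n]}
    assert abs(prod[0, 0] - (1 - gs[n]) * (1j - lam) * A[-2]) < 1e-9
    assert abs(prod[0, 1] - A[-1]) < 1e-9
    assert abs(prod[1, 0] - (1 - gs[n]) * (1j - lam) * B[-2]) < 1e-9
    assert abs(prod[1, 1] - B[-1]) < 1e-9
```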
\begin{remark} Formulas \eqref{nIterations}, \eqref{nIterationsCar}, and \eqref{DmNP} represent a standard piece of the theory of any truncated Nevanlinna-Pick problem, which is the generality that includes all interpolation problems considered here (for more details see \cite[Chapter 3]{A65}). Those formulas are obtained one from another and are, essentially, just one formula. In particular, we have that
\[
i\frac{w_{1,1}^{(n)}(z)}{w_{2,1}^{(n)}(z)}=\frac{{\mathcal A}_{n+1}(\lambda)}{\cB_{n+1}(\lambda)}, \quad z=\frac{i-\lambda}{i+\lambda},
\]
which is a way to have a connection between OPUC and the polynomials that we get on the real line.
\end{remark}
\section{The underlying linear pencils}
Here we will consider the recurrence relations \eqref{MPrelations} as a particular case of the theory of linear pencils of tridiagonal matrices that was elaborated in \cite{BDZh}, \cite{Der10}, and \cite{DZh}, which, in turn, had their origin in \cite{Zh99}.
Above all, there is no doubt that the relations
\begin{equation}\label{rec_rel}
u_{j+1}(\lambda)-(\lambda-r_{j})u_{j}(\lambda)+(1-g_{j-1})g_{j}(\lambda^2+1)u_{j-1}(\lambda)=0,
\quad j=0, 1, 2, \dots
\end{equation}
were known to H. S. Wall, for they are naturally associated with the fraction \eqref{ThforN} (for example, see \cite[Section 2.1.1]{JT}). More precisely, \eqref{rec_rel} is exactly the same as the second relation in \eqref{MPrelations} if one uses the following convention
\[
u_{n}=\cB_{n-1}, \quad r_0=0, \quad g_0=0.
\]
Evidently, the initial conditions
\[
u_{-1}=0, \quad u_0=1
\]
guarantee that $u_{n}=\cB_{n-1}$.
At first glance, one may conclude that although \eqref{rec_rel} is a three-term recurrence relation, it looks peculiar, and the connection to Jacobi matrices, which is a very powerful tool for OPRL, is unclear. That might be the reason it did not attract any attention at the time. So, let us take a more careful look at \eqref{rec_rel} and try to reinterpret the relation as a spectral problem. With this thought in mind, one can rewrite \eqref{rec_rel} in the following manner
\[
u_{j+1}(\lambda)+r_{j}u_{j}(\lambda)+(1-g_{j-1})g_{j}u_{j-1}(\lambda)-\lambda u_{j}(\lambda)+(1-g_{j-1})g_{j}\lambda^2u_{j-1}(\lambda)=0,
\]
which is a quadratic eigenvalue problem and is also known as a quadratic pencil. The explicit operator form
\[
\Scale[0.92]{
\begin{pmatrix}
r_0 & 1& &\\
g_1& r_1& 1&\\
& (1-g_{1})g_{2}&r_2&\ddots\\
&&\ddots&\ddots
\end{pmatrix}
\begin{pmatrix}
u_0\\
u_1\\
u_2\\
\vdots
\end{pmatrix}-\lambda \begin{pmatrix}
u_0\\
u_1\\
u_2\\
\vdots
\end{pmatrix}
+
\lambda^2\begin{pmatrix}
0&&&\\
g_1&0&&\\
&(1-g_{1})g_{2}&0&\\
&&\ddots&\ddots
\end{pmatrix}
\begin{pmatrix}
u_0\\
u_1\\
u_2\\
\vdots
\end{pmatrix}=0}
\]
looks messy, but it is an operator interpretation of \eqref{rec_rel}. A usual method to deal with quadratic pencils is to reduce them to linear pencils, that is, to spectral problems of the form $(A-\lambda B)u=0$, and this is what we are going to do next. Actually, it is more efficient to use the specifics of the problem than to apply the standard machinery of pencils. Namely, repeating the reasoning from \cite{Der10}, one can verify that the recurrence relation~\eqref{rec_rel} can be renormalized to the
following one
\begin{equation}\label{r_rl}
\mathfrak{b}_j(i-\lambda)\widehat{u}_{j+1}-
(\lambda-\mathfrak{a}_j)\widehat{u}_{j}-{\mathfrak{b}}_{j-1}(i+\lambda)\widehat{u}_{j-1}
=0,\quad j=0, 1, 2, \dots,
\end{equation}
where the numbers $\mathfrak{a}_j$ and $\mathfrak{b}_j$
are defined by the formulas
\begin{equation}\label{ab}
\mathfrak{a}_j=r_j,\quad \mathfrak{b}_j=\sqrt{(1-g_{j})g_{j+1}},\quad
j=0, 1, 2, \dots
\end{equation}
and the transformation $u\to\widehat{u}$ has the following form
\begin{equation}\label{wTransform}
\widehat{u}_0=u_0,\quad
\widehat{u}_j={\displaystyle\frac{u_j}{\mathfrak{b}_0\dots
\mathfrak{b}_{j-1}(i-\lambda)^j}},\quad j=1,2,3,\dots.
\end{equation}
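One can verify numerically that the substitution \eqref{wTransform} indeed turns \eqref{rec_rel} into \eqref{r_rl}; the key point is the identity $(i-\lambda)(i+\lambda)=-(\lambda^2+1)$. A Python sketch with sample data chosen by us (and the convention $g_0=r_0=0$):

```python
import math

lam = 0.4 + 1.2j                        # sample spectral parameter
gs = [0.0, 0.35, 0.6, 0.2, 0.75]        # g_0 = 0 (convention), 0 < g_k < 1
rs = [0.0, 0.3, -0.8, 1.4, 0.1]         # r_0 = 0 (convention), r_k real

# u_j from the three-term recurrence: u_{-1} = 0, u_0 = 1  (u[j+1] stores u_j)
u = [0.0 + 0j, 1.0 + 0j]
for j in range(4):
    c = (1 - gs[j - 1]) * gs[j] if j >= 1 else 0.0
    u.append((lam - rs[j]) * u[-1] - c * (lam**2 + 1) * u[-2])

a = rs
b = [math.sqrt((1 - gs[j]) * gs[j + 1]) for j in range(4)]

# the renormalization u -> u_hat
uh = [u[1]]                             # u_hat_0 = u_0
prod = 1.0 + 0j
for j in range(1, 5):
    prod *= b[j - 1] * (1j - lam)
    uh.append(u[j + 1] / prod)

# the renormalized recurrence should hold for j = 1, 2, 3
for j in range(1, 4):
    res = (b[j] * (1j - lam) * uh[j + 1]
           - (lam - a[j]) * uh[j]
           - b[j - 1] * (1j + lam) * uh[j - 1])
    assert abs(res) < 1e-9
```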
Now, it is easy to see that relation~\eqref{r_rl} leads to the linear pencil
\begin{equation}\label{WallPencil}
(H-\lambda J)\widehat{u}=0,
\end{equation}
where
\[
H=\left(
\begin{array}{cccc}
\mathfrak{a}_0 & i\mathfrak{b}_0 & & \\
-i{\mathfrak{b}}_0 & \mathfrak{a}_1 & i\mathfrak{b}_1 & \\
& -i{\mathfrak{b}}_1 & \mathfrak{a}_2 & \ddots \\
& & \ddots & \ddots \\
\end{array}
\right),\quad\
J=\left(
\begin{array}{cccc}
1 & \mathfrak{b}_0 & & \\
\mathfrak{b}_0 & 1 & \mathfrak{b}_1 & \\
& \mathfrak{b}_1 & 1 & \ddots \\
& & \ddots & \ddots \\
\end{array}
\right)
\]
are Jacobi matrices. Apart from this, there is another way of transforming \eqref{rec_rel} into a linear pencil \cite[Proposition 3]{Zh99}, but the method described there gives a linear pencil formed by two bidiagonal matrices.
As a matter of fact, working with linear pencils is a bit more delicate than dealing with ordinary spectral problems. For instance, even if the operators $J$ and $H$ generating the pencil are symmetric, it does not mean that we can expect good spectral properties. That is why we need to check whether we can say more about the Jacobi matrices $J$ and $H$.
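As a sanity check, one can build finite truncations of the Jacobi matrices $H$ and $J$ and verify that the renormalized vector $\widehat{u}$ from \eqref{wTransform} annihilates the corresponding rows of $H-\lambda J$. A Python sketch with sample data chosen by us:

```python
import numpy as np

lam = -0.3 + 0.8j                       # sample spectral parameter
gs = [0.0, 0.5, 0.25, 0.7, 0.4]         # g_0 = 0 (convention), 0 < g_k < 1
rs = [0.0, -1.0, 0.2, 0.6, -0.4]        # r_0 = 0 (convention), r_k real

# u_j from the three-term recurrence and the renormalized u_hat_j
u = [0j, 1 + 0j]                        # u[j+1] stores u_j
for j in range(4):
    c = (1 - gs[j - 1]) * gs[j] if j >= 1 else 0.0
    u.append((lam - rs[j]) * u[-1] - c * (lam**2 + 1) * u[-2])

b = [np.sqrt((1 - gs[j]) * gs[j + 1]) for j in range(4)]
uh = [u[1]]
prod = 1 + 0j
for j in range(1, 5):
    prod *= b[j - 1] * (1j - lam)
    uh.append(u[j + 1] / prod)
uh = np.array(uh)

# truncated Jacobi matrices H and J
n = 5
H = np.zeros((n, n), dtype=complex)
J = np.eye(n, dtype=complex)
for j in range(n):
    H[j, j] = rs[j]
    if j < n - 1:
        H[j, j + 1] = 1j * b[j]
        H[j + 1, j] = -1j * b[j]
        J[j, j + 1] = b[j]
        J[j + 1, j] = b[j]

res = (H - lam * J) @ uh
assert max(abs(res[:4])) < 1e-9         # rows 0..3 of (H - lam*J) u_hat = 0
```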
\begin{proposition}\label{positiveWall}
The operator $J$ is self-adjoint and nonnegative, that is,
\begin{equation}\label{Nonnegativity}
(J\xi,\xi)_{\ell^2}\ge 0
\end{equation}
for any finitely supported $\xi\in \ell^2$, i.e. $\xi=(\xi_0,\xi_1,\dots,\xi_n,0,0,\dots)^{\top}$.
\end{proposition}
\begin{proof}
Since $J$ has only real entries, it is enough to prove \eqref{Nonnegativity} for vectors with real entries. So, let us consider the quadratic form
\begin{equation}\label{jform2}
\left(J\xi,\xi\right)=\xi_0^2+2\mathfrak{b}_0{\xi}_0{\xi}_1+\xi_1^2+2\mathfrak{b}_1{\xi}_1{\xi}_2+\dots+\xi_n^2,
\end{equation}
where
$\xi=({\xi}_0,{\xi}_1,\dots,{\xi}_n)^{\top}\in\dR^{n+1}$.
To complete the proof we need to use one of Wall's theorems on chain sequences. To this end, recall that a sequence $\mathfrak{b}_j^2$ is called a chain sequence (see \cite[Section 19]{Wall48} or \cite[Chapter III, Section 5]{Chi78}) if there exists a sequence $\{g_k\}_{k=0}^{\infty}$ such that
\[
\begin{split}
0\le g_0<1,\quad 0<g_k<1, \quad k=1,2,3,\dots,\\
\mathfrak{b}_k^2=(1-g_{k-1})g_k, \quad k=1,2,3,\dots.
\end{split}
\]
In other words, our sequence $\mathfrak{b}_j^2$ is definitely a chain sequence. Consequently, \cite[Theorem 20.1]{Wall48} (see also \cite[Chapter III, Sections 5 and 6 ]{Chi78}) brings us to the desired statement.
\end{proof}
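The proposition can be probed numerically: for any parameter sequence with $0<g_k<1$, the numbers $\mathfrak{b}_j^2=(1-g_j)g_{j+1}$ form a chain sequence, and every finite truncation of $J$ should then be positive semidefinite. A Python sketch with random sample data of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
g = rng.uniform(0.05, 0.95, size=n)          # sample parameters, 0 < g_k < 1
b = np.sqrt((1 - g[:-1]) * g[1:])            # b_j^2 = (1 - g_j) g_{j+1}: a chain sequence

# truncated J: unit diagonal, b_j on the off-diagonals
J = np.eye(n) + np.diag(b, 1) + np.diag(b, -1)
assert np.linalg.eigvalsh(J).min() > -1e-10  # no negative eigenvalues
```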
Therefore, we have the nonnegativity of $J$, and this suggests that there are ways to analyze the linear pencil relatively easily. However, it still has to be done a bit more carefully than in the ordinary case. The rough idea is to replace the problem
$(H-\lambda J)\widehat{u}=0$ with an ordinary one such as $(J^{-1/2}HJ^{-1/2}-\lambda I)\widehat{u}=0$ (see \cite{Der10}) or $(J^{-1}H-\lambda I)\widehat{u}=0$. The task is not trivial, as there may be pitfalls; later on we will discuss an example exhibiting a drastic change in comparison with Jacobi matrices, one that happens only in the linear pencil case.
\begin{remark}
It is noteworthy that Proposition \ref{positiveWall} is just another form of one of Wall's results, which means that H. S. Wall proved that the pencil $H-\lambda J$ is a good object in the sense of spectral theory. Under these circumstances, it makes perfect sense to refer to the pencils in question as Wall pencils.
\end{remark}
Once we have established that Wall pencils lead to self-adjoint operators and thus have a reasonable spectral theory behind them, we can discuss it. At first, note that the results from \cite{Der10} are applicable to Wall pencils, but under the additional restriction that the measure in the integral representation of the function $\varphi$ is a probability measure. This condition simply means that the underlying pencil can be reduced to a self-adjoint operator. In all other cases, one has to handle non-densely defined self-adjoint operators.
Regarding the behavior of the entries of $H-\lambda J$, we can get a pencil form for the majority of results from the theory of OPUC. In particular, we are ready to formulate the result that is mainly the motivation to write this note.
\begin{theorem}[The Denisov-Rakhmanov Theorem]\label{DR} Let $\varphi$ be a Nevanlinna function of the form
\[
\varphi(\lambda)=a\lambda+\int_{\dR}\frac{1+t\lambda}{t-\lambda}d\sigma(t),
\]
which is subject to $\varphi(i)=i$.
If $\sigma'>0$ almost everywhere with respect to the Lebesgue measure on $\dR$ then for the corresponding Wall pencil \eqref{WallPencil} we have that
\begin{equation}
\mathfrak{a}_k\to 0, \quad \mathfrak{b}_k\to\frac{1}{2} \quad \text{ as } k\to \infty.
\end{equation}
\end{theorem}
\begin{proof}
The statement is an immediate consequence of the Rakhmanov theorem for the unit circle and the Wall constructions. Indeed, the condition that
$\sigma'>0$ almost everywhere with respect to the Lebesgue measure on $\dR$ ensures that the corresponding measure $\mu$ on the unit circle is positive almost everywhere on $\dT$. Actually, the measure $\mu$ is determined by $F$, which, in turn, is defined through \eqref{CayleyT}. Besides, the relation between $\sigma$ and $\mu$ is explicit and can be found in \cite[Section 59]{AG}. Next, it follows from the Rakhmanov theorem (for example, see \cite[Corollary 9.1.11]{OPUC2})
that
\[
\lim_{k\to\infty}\gamma_k=0.
\]
The rest is immediate from formulas \eqref{gr} and \eqref{ab}.
\end{proof}
As one can see, the situation is a bit unusual: the entries of the pencil are uniformly bounded but the corresponding measure is supported on the whole real line, which never happens for OPRL. The trick with the support is hidden in the property of $J$, which has no bounded inverse. That is, when we reduce the pencil spectral problem to an ordinary one, we have to deal with
the operator $J^{-1/2}HJ^{-1/2}$, which is not bounded and, in the general situation we have here, does not have to be densely defined (see \cite{Der10}).
Note that Theorem \ref{DR} is just another way to look at the spectral theory of OPUC, but it does lead to further insights into the theory of linear pencils. Another interesting observation can be made. To this end, let us recall that the results from \cite{Der10} and \cite{DZh}, together with the Wall theory, basically say that any Nevanlinna function
\[
\varphi(\lambda)=a\lambda+b+\int_{\dR}\frac{1+t\lambda}{t-\lambda}d\sigma(t),
\]
admits the continued fraction representation
\begin{equation}\label{ContF}
\varphi(\lambda)=-\cfr{1}{a^{(2)}_0\lambda-a_0^{(1)}}-
\cfr{b_0^2(\lambda-z_0)(\lambda-\overline{z}_0)}{a^{(2)}_1\lambda-a_1^{(1)}}-
\cfr{b_{1}^2(\lambda-z_1)(\lambda-\overline{z}_1)}{a^{(2)}_2\lambda-a_2^{(1)}}-
\dots,
\end{equation}
where $z_0$, $z_1$, $\dots$ are some given numbers from $\dC_+$, that is, they are the interpolation nodes. Besides, this continued fraction and the theory in \cite{Der10} and \cite{DZh} are based on a different renormalization of the entries. Still, it is also a result of the Schur algorithm, but to obtain \eqref{ContF} one needs to change the point at which the Schur transformation is performed. In addition, as one can see from the findings in \cite{Der10} and \cite{DZh}, the entries of \eqref{ContF} depend on the respective interpolation nodes continuously. Therefore, it seems plausible that the following statement is true.
\begin{conjecture} Let $\sigma$ be the measure corresponding to $\varphi$. If $\sigma'>0$ almost everywhere with respect to the Lebesgue measure on $\dR$ and $z_k\to i$ as $k\to\infty$ then there exists a renormalization of the coefficients of \eqref{ContF} such that
\[
a_k^{(1)}\to 0,\quad a^{(2)}_k\to 1,\quad b_{k}\to\frac{1}{2}\quad \text{ as } k\to \infty.
\]
\end{conjecture}
Furthermore, one can easily reformulate the Szeg\H{o} theorem in terms of Wall pencils and then make a similar conjecture; in fact, this can be done for the majority of the results from \cite{OPUC1} and \cite{OPUC2}.
\section{The reference measure and other related distributions}
Now is the time to consider examples. The first one that we have already encountered is the reference measure, that is, the limiting pencil from Theorem \ref{DR}. Since the essence of that theorem comes from OPUC, i.e., from the relation
\[
\lim_{k\to\infty}\gamma_k=0,
\]
we just need to find the Wall pencil that corresponds to the case
\[
\gamma_k=0, \quad k=0,1,2,\dots.
\]
Next, \eqref{gr} and \eqref{ab} yield
\[
\left(
\begin{array}{cccc}
0 & \frac{1}{\sqrt{2}}i & & \\
-\frac{1}{\sqrt{2}}i & 0 & \frac{1}{2}i & \\
 & -\frac{1}{2}i & 0 & \ddots \\
 & & \ddots & \ddots \\
\end{array}
\right)-\lambda \left(
\begin{array}{cccc}
1 & \frac{1}{\sqrt{2}} & & \\
\frac{1}{\sqrt{2}} & 1 & \frac{1}{2} & \\
 & \frac{1}{2} & 1 & \ddots \\
 & & \ddots & \ddots \\
\end{array}
\right),
\]
and the underlying Nevanlinna function has the following continued fraction representation
\[
m_0(\lambda)=
-\frac{1}{\lambda-\displaystyle{\frac{\frac{1}{2}(\lambda^2+1)}{{\lambda}-
\displaystyle{\frac{\frac{1}{4}(\lambda^2+1)}{{\ddots}}}}}}.
\]
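As a quick numerical sanity check (our own sketch, not part of the original argument), this continued fraction can be evaluated by backward recursion, assuming that the partial numerators indicated by the dots continue as $\frac{1}{4}(\lambda^2+1)$:

```python
# Backward evaluation of the continued fraction for m_0 at lambda = 2i.
# Assumption (suggested by the \ddots): every partial numerator after the
# first one equals (1/4)*(lambda^2 + 1).
lam = 2j
c = (lam * lam + 1) / 4
tail = lam                      # starting value for the tail of the fraction
for _ in range(200):            # backward iteration t <- lambda - c/t
    tail = lam - c / tail
m0 = -1 / (lam - 2 * c / tail)  # the top level has numerator (lambda^2+1)/2
print(m0)                       # close to i, in line with m_0 = i on C_+
```

The iteration converges geometrically to the attracting root $(\lambda+i)/2$ of the tail equation, and the top level then collapses to $-1/(\lambda-i\lambda+\dots)=i$.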
Besides, since $f(z)\equiv 0$ in ${\mathbb D}$ is the function that gives the sequence of Schur parameters equal to zero, we can easily get that $m_0(\lambda)=i$ in $\dC_+$, which can be extended to the lower half-plane:
\[
m_0(\lambda)=\begin{cases} i &\mbox{if } \lambda\in\dC_+ \\
-i & \mbox{if } \lambda\in\dC_- \end{cases}.
\]
As is known, $m_0$ has the following integral representation
\[
m_0(\lambda)=\int_{\dR}\frac{1+t\lambda}{t-\lambda}\,\frac{dt}{\pi(1+t^2)}.
\]
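This representation can also be probed numerically. The sketch below (our own check) assumes the Cauchy weight is normalized to the probability measure $dt/(\pi(1+t^2))$, which is forced by the normalization $\varphi(i)=i$; the substitution $t=\tan\theta$ turns the integrand into a smooth function on $[-\pi/2,\pi/2]$:

```python
import math

def m0_integral(lam, n=4001):
    # (1/pi) * int_R (1 + t*lam)/(t - lam) dt/(1 + t^2), by composite Simpson
    # after t = tan(theta); the integrand becomes
    # (cos(theta) + lam*sin(theta)) / (sin(theta) - lam*cos(theta)).
    h = math.pi / (n - 1)
    total = 0j
    for k in range(n):
        th = -math.pi / 2 + k * h
        val = (math.cos(th) + lam * math.sin(th)) / (math.sin(th) - lam * math.cos(th))
        total += val * (1 if k in (0, n - 1) else 4 if k % 2 else 2)
    return total * h / 3 / math.pi

print(m0_integral(2j), m0_integral(-1 + 3j))  # both close to i
```

Both sample points give a value close to $i$, reflecting the fact that $m_0$ is constant on $\dC_+$.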
In a sense, this example is the simplest one in the theory of Wall pencils but has a feature showing a difference between Wall pencils and Jacobi matrices. Namely, one of the most powerful tools in the theory of Jacobi matrices is the $m$-function (aka Weyl function). Now, if one thinks in terms of $m$-functions of Jacobi matrices then one would define the $m$-function of a Wall pencil by the following formula
\[
m_0(\lambda)=((H_0-\lambda J_0 )^{-1}e,e)_{\ell^2}, \quad e=(1,0,\dots)^\top,
\]
which we would expect to hold for all $\lambda\in\dC_+$ as we are dealing with the self-adjoint case (there is only one solution of the corresponding interpolation problem).
Thus, we would also have
\[
m_0(\lambda)=\frac{1}{\lambda}((1/\lambda-(H_0-\lambda_0 J_0)^{-1}J_0)^{-1}(H_0-\lambda_0J_0)^{-1}e,e)_{\ell^2},
\]
where $\lambda_0\in\dC_+$ is some fixed point. The latter would imply that
\[
m_0(\lambda)\to 0, \quad \lambda=iy, \quad y\to\infty,
\]
which is impossible in our case. This dissimilarity means that we cannot define $m$-functions of Wall pencils in the same way as it is done for Jacobi matrices (cf. \cite{Der10}). Instead, it could be done in a way similar to the way one defines Weyl functions for differential operators; say, to this end one could use boundary triplets and abstract Weyl functions \cite{DM95}. Let us stress that although Jacobi matrices and Wall pencils are generated by different interpolation problems, Wall pencils are more generic, as they are in one-to-one correspondence with the entire class of Nevanlinna functions, which is not the case for Jacobi matrices.
In principle, any explicit example of OPUC can be transformed to a Wall pencil, whose entries and generating measure can be computed. However, looking at the integral representation of $m_0$ one sees the Cauchy distribution in there, and this leads to certain ideas related to the mechanism of the transformation. Actually, it turns out that one can easily construct a family of examples based on the Cauchy distribution. As a matter of fact, \cite{BO} deals with the pseudo-Jacobi ensemble, which is based on the Cauchy distribution, and some formulas from \cite{BO} show the presence of functions that would be appropriate to consider in the framework of this note. So, following \cite{BO},
let us recall that the Gauss hypergeometric function is a function defined via the series
\[
{{}_2F_1\left(\left.\begin{array}{c}a,b\\ c \end{array}\right|z\right)=
1+\frac{a b}{c}\frac{z}{1!}+\frac{a(a+1)b(b+1)}{c(c+1)}\frac{z^2}{2!}
+\dots,}
\]
where $z$ is an independent variable and $a$, $b$, and $c$ are complex parameters. Clearly, if either $a$ or $b$ is a negative integer then ${}_2F_1\left(\left.\begin{array}{c}a,b\\ c \end{array}\right|z\right)$ is a polynomial. Therefore, setting $a=-n$ with a nonnegative integer $n$ and $z=2/(1+ix)$ we see that the function
\[
R_n(x)={}_2F_1\left(\left.\begin{array}{c}-n,b\\ c \end{array}\right| \frac{2}{1+ix}\right)
\]
is a rational function in a new variable $x$. Next, using the contiguous relation \cite[Section 2.5]{AAR99}
\[
\Scale[0.85]{
a(1-z){}_2F_1\left(\left.\begin{array}{c}a+1,b\\ c \end{array}\right| z\right)+
(c-2a-(b-a)z){}_2F_1\left(\left.\begin{array}{c}a,b\\ c \end{array}\right| z\right)-
(c-a){}_2F_1\left(\left.\begin{array}{c}a-1,b\\ c \end{array}\right| z\right)=0}
\]
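Before specializing, the contiguous relation itself can be verified numerically with a truncated Gauss series (a sketch with arbitrarily chosen parameter values):

```python
def hyp2f1(a, b, c, z, terms=300):
    # truncated Gauss series for 2F1(a, b; c; z); adequate for |z| < 1
    s = term = 1 + 0j
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += term
    return s

a, b, c, z = 0.3 + 0.2j, 1.1, 2.4, 0.35 + 0.1j
residual = (a * (1 - z) * hyp2f1(a + 1, b, c, z)
            + (c - 2 * a - (b - a) * z) * hyp2f1(a, b, c, z)
            - (c - a) * hyp2f1(a - 1, b, c, z))
print(abs(residual))  # numerically zero
```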
for the function $R_n$, one gets
\[
-n\left(1-\frac{2}{1+ix}\right)R_{n-1}(x)+\left(c+2n-(b+n)\frac{2}{1+ix}\right)R_n(x)-(c+n)R_{n+1}(x)=0,
\]
which reduces to the following recurrence relation
\begin{equation}\label{CauchyCom}
n(x+i)R_{n-1}(x)+((c-2b)i-x(c+2n))R_n(x)+(x-i)(c+n)R_{n+1}(x)=0.
\end{equation}
As was shown in \cite{IM95} (see also \cite{Zh99}) if a system satisfies a relation of the type \eqref{CauchyCom}, then it is a system of orthogonal rational functions. That is, the rational functions $R_n$ are orthogonal with respect to some functional. As a matter of fact, \eqref{CauchyCom} is far beyond the scope of the current paper since its coefficients are generally complex numbers with no symmetry. However, if we restrict ourselves to the case
\[
c=2b, \quad b=s\in\dR,
\]
we get the following statement for the functions $R_n(x)=R_n(x,s)$.
\begin{proposition}
The rational functions $R_n(x,s)$ satisfy the following doubly spectral relations
\begin{equation}\label{CauchyNonSym1}
n(x+i)R_{n-1}(x,s)-2x(s+n)R_n(x,s)+(2s+n)(x-i)R_{n+1}(x,s)=0.
\end{equation}
This means that if $s$ is fixed then \eqref{CauchyNonSym1} as an $x$-relation is easily reducible to \eqref{r_rl}, which is equivalent to a recurrence relation of type $R_{II}$ introduced in \cite{IM95}, and if $x$ is fixed then \eqref{CauchyNonSym1} is an $R_I$-type recurrence relation in $s$ introduced in \cite{IM95} as well.
Consequently, there exists a functional in $x$ and a functional in $s$ such that the rational functions $R_n(x,s)$ form an orthogonal system in $s$ and an orthogonal system in $x$.
\end{proposition}
\begin{proof}
The proof is just an application of the Favard-type results from \cite{IM95} to \eqref{CauchyNonSym1}: we first consider \eqref{CauchyNonSym1} as a relation in $x$ and get the functional in $x$; then we look at \eqref{CauchyNonSym1} as a relation in $s$.
\end{proof}
\begin{remark}
The polynomials $(x-i)^nR_n$ appear in \cite{BO} and it is their large-$n$ behavior that essentially matters for the pseudo-Jacobi ensemble.
\end{remark}
We can also make \eqref{CauchyNonSym1} symmetric, which can be done by introducing new rational functions
\[
C_n(x)=\sqrt{1\cdot\left(\frac{1}{2(1+s)}\right)^{-1}\dots \frac{2s+n-1}{2(n-1+s)}\left(\frac{n}{2(n+s)}\right)^{-1}}\,R_n(x).
\]
For the new rational functions, relation \eqref{CauchyNonSym1} reads
\begin{equation}\label{CauchySym}
\small
\sqrt{\frac{(n-1+2s)n}{4(n-1+s)(n+s)}}(x+i)C_{n-1}(x)-xC_n(x)+
\sqrt{\frac{(n+2s)(n+1)}{4(n+s)(n+1+s)}}(x-i)C_{n+1}(x)=0.
\end{equation}
\normalsize
One of the good things that come from \eqref{CauchySym} is that the zeroes of $C_n$ are real (say, the results of \cite{DZh} or \cite{Der10} can be applied in this case). Besides, one can also notice that for $C_n$ we have
\begin{equation}\label{chain}
r_{k-1}=0,\quad g_k=\frac{k}{2k+2s},\quad k=1,2,3,\dots,
\end{equation}
which shows that the example fits into the Wall theory as long as $0<g_k<1$ for all $k$, that is, when
\[
s>-\frac{1}{2}.
\]
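The inequality is elementary: $g_k=k/(2k+2s)<1$ is equivalent to $s>-k/2$, which is binding at $k=1$. A trivial numerical illustration:

```python
# g_k = k/(2k + 2s) must lie in (0, 1) for every k >= 1; the constraint is
# binding at k = 1, which is where the threshold s > -1/2 comes from.
def g(k, s):
    return k / (2 * k + 2 * s)

s_ok, s_bad = -0.49, -0.51
ok = all(0 < g(k, s_ok) < 1 for k in range(1, 1000))
bad = 0 < g(1, s_bad) < 1
print(ok, bad)  # True False
```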
However, if $s>1/2$ then we can explicitly write the measure of orthogonality by making a connection to pseudo-Jacobi polynomials, which are also called Routh-Romanovski polynomials.
\begin{proposition} If $s>1/2$ then
\[
\int_{\dR}C_n(t)\frac{1}{(t-i)^k}\frac{dt}{(1+t^2)^s}=0, \quad k=1,2,\dots, n.
\]
\end{proposition}
\begin{proof}
Without loss of generality, we will show the orthogonality for $R_n$ rather than for $C_n$. First, we notice that $R_n$ is a finite sum and the summation can be reversed in order to get the following:
\[
\begin{split}
R_n(x)&={}_2F_1\left(\left.\begin{array}{c}-n,s\\ 2s \end{array}\right| \frac{2}{1+ix}\right)\\
&=\left(\frac{-2}{1+ix}\right)^n\frac{(s)_n}{(2s)_n}{}_2F_1\left(\left.\begin{array}{c}-n,1-n-2s\\ 1-n-s \end{array}\right| \frac{1+ix}{2}\right),
\end{split}
\]
where $(a)_n$ is the Pochhammer symbol, that is, $(a)_n=a(a+1)\dots(a+n-1)$.
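The reversal of the terminating series can be confirmed numerically; here is a sketch with sample values $n=3$, $s=0.7$, $x=0.4$ (both sides are finite sums, so no convergence issues arise even though $|z|>1$):

```python
def poch(a, n):
    # Pochhammer symbol (a)_n = a(a+1)...(a+n-1)
    p = 1 + 0j
    for k in range(n):
        p *= a + k
    return p

def f21_term(n, b, c, z):
    # terminating series 2F1(-n, b; c; z)
    s = term = 1 + 0j
    for k in range(n):
        term *= (-n + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += term
    return s

n, sp, x = 3, 0.7, 0.4
z = 2 / (1 + 1j * x)
lhs = f21_term(n, sp, 2 * sp, z)
rhs = (-z) ** n * poch(sp, n) / poch(2 * sp, n) * f21_term(n, 1 - n - 2 * sp, 1 - n - sp, 1 / z)
print(abs(lhs - rhs))  # numerically zero
```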
Recalling that Jacobi polynomials are defined by
\[
P_n^{(\alpha,\beta)}(x)=\frac{(\alpha+1)_n}{n!}{}_2F_1\left(\left.\begin{array}{c}-n,n+\alpha+\beta+1\\ \alpha+1 \end{array}\right| \frac{1-x}{2}\right)
\]
we get that
\[
R_n(x)= \frac{c_n}{(1+ix)^n}P_n^{(-s-n,-s-n)}(-ix).
\]
Next, according to \cite{As87} we have
\[
\int_{\dR}P_n^{(-n-s,-n-s)}(ix)(1+ix)^m\frac{dx}{(1+x^2)^{s+n}}=0, \quad m=0,1,2,\dots, n-1,
\]
which clearly leads to the desired result.
\end{proof}
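The Askey integral can also be probed numerically. The sketch below (our own check, with sample values $n=2$, $s=1.3$) builds $P_n^{(\alpha,\beta)}$ from the terminating hypergeometric formula above and integrates via the substitution $x=\tan\theta$:

```python
import math

def jacobi(n, al, be, y):
    # P_n^{(al,be)}(y) = (al+1)_n/n! * 2F1(-n, n+al+be+1; al+1; (1-y)/2)
    c0 = 1 + 0j
    for k in range(n):
        c0 *= (al + 1 + k) / (k + 1)
    z = (1 - y) / 2
    s = term = 1 + 0j
    for k in range(n):
        term *= (-n + k) * (n + al + be + 1 + k) / ((al + 1 + k) * (k + 1)) * z
        s += term
    return c0 * s

def askey_integral(n, s_, m, N=40001):
    # int_R P_n^{(-n-s,-n-s)}(ix) (1+ix)^m dx/(1+x^2)^(s+n), via x = tan(theta)
    al = -n - s_
    h = math.pi / (N - 1)
    total = 0j
    for k in range(1, N - 1):      # the integrand vanishes at the endpoints
        x = math.tan(-math.pi / 2 + k * h)
        val = jacobi(n, al, al, 1j * x) * (1 + 1j * x) ** m * (1 + x * x) ** (1 - s_ - n)
        total += val * (4 if k % 2 else 2)
    return total * h / 3

print(abs(askey_integral(2, 1.3, 0)), abs(askey_integral(2, 1.3, 1)))  # both ~ 0
```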
As one might notice, the theory related to this example may easily go beyond the condition $s>-\frac{1}{2}$. Say, if $s$ is not a negative integer, then we can make the inverse Wall transformation, which will return a sequence of Schur parameters. Then the formulas that determine $\gamma_k$ from $r_k$ and $g_k$ show that there could be only a finite number of Schur parameters that lie outside of the closed unit disc. Such situations are feasible to understand and, recently, it has been shown in \cite{DS16} that the OPUC techniques can still work in such nonclassical cases.
\noindent\textbf{Acknowledgements.} I'd like to thank Alexei Zhedanov, who has been feeding me with the information on the theory underlying $R_{II}$ recurrence relations for years and who, after reading the manuscript, provided me with a few comments that helped to improve the presentation of the paper. Besides, I'm very grateful to my wife, Anastasiia, who read the manuscript and found an enormous amount of typos and inaccuracies.
\begin{thebibliography}{99}
\bibitem{A65}
N.I. Akhiezer, \textit{The classical moment problem and some related questions in analysis}, Hafner Publishing Co., New York, 1965.
\bibitem{AG}
N. I. Akhiezer, I. M. Glazman, \textit{Theory of linear operators in Hilbert space}. Translated from the Russian and with a preface by Merlynd Nestell. Reprint of the 1961 and 1963 translations. Two volumes bound as one. Dover Publications, Inc., New York, 1993.
\bibitem{AAR99}
G.E. Andrews, R. Askey, R. Roy, \textit{Special functions}. Encyclopedia of Mathematics and its Applications, 71. Cambridge University Press, Cambridge, 1999.
\bibitem{As87}
R. Askey, \textit{An integral of Ramanujan and orthogonal polynomials}, J. Indian Math. Soc. (N.S.) 51 (1987), 27--36 (1988).
\bibitem{BDZh}
B.~Beckermann, M.~Derevyagin, and A.~Zhedanov, {\it The linear pencil approach to rational interpolation}, J. Approx. Theory 162 (2010), 1322--1346.
\bibitem{BO}
A. Borodin, G. Olshanski, {\it Infinite random matrices and ergodic measures}, Comm. Math. Phys. 223 (2001), 87--123.
\bibitem{CCSV14}
K. Castillo, M. S. Costa, A. Sri Ranga, D. O. Veronese, \textit{A Favard type theorem for orthogonal polynomials on the unit circle from a three term recurrence formula}, J. Approx. Theory 184 (2014), 146--162.
\bibitem{Chi78}
T. S. Chihara, \textit{An introduction to orthogonal polynomials}. Mathematics and its Applications, Vol. 13. Gordon and Breach Science Publishers, New York-London-Paris, 1978.
\bibitem{DG}
P. Delsarte, Y. Genin, {\it The tridiagonal approach to Szeg\"o's orthogonal polynomials, Toeplitz linear systems, and related interpolation problems}, SIAM J. Math. Anal. 19 (1988), no. 3, 718--735.
\bibitem{Der10}
M.S.~Derevyagin, {\it The Jacobi matrices approach to Nevanlinna-Pick problems}, J. Approx. Theory 163 (2011), no. 2, 117--142.
\bibitem{DS16}
M. Derevyagin, B. Simanek, {\it On Szeg\H{o}'s theorem for a nonclassical case}, J. Funct. Anal. (2016), http://dx.doi.org/10.1016/j.jfa.2016.07.009
\bibitem{DZh}
M.S.~Derevyagin, A.S.~Zhedanov, {\it An operator approach to multipoint Pad\'e approximations}, J. Approx. Theory 157 (2009), 70--88.
\bibitem{DM95}
V.A. Derkach, M. M. Malamud, \textit{The extension theory of Hermitian operators and the moment problem}, Analysis. 3. J. Math. Sci. 73 (1995), no. 2, 141--242.
\bibitem{DK03}
H. Dym, V. Katsnelson, \textit{Contributions of Issai Schur to analysis}. Studies in memory of Issai Schur (Chevaleret/Rehovot, 2000), Progr. Math., 210, Birkh\"auser Boston, Boston, MA, 2003.
\bibitem{ENZG91}
T. Erd\'elyi, P. Nevai, J. Zhang, J. Geronimo, {\it A simple proof of ``Favard's theorem'' on the unit circle}, Atti Sem. Mat. Fis. Univ. Modena 39 (1991), no. 2, 551--556.
\bibitem{Ger40}
Ya. Geronimus, \textit{Generalized orthogonal polynomials and the Christoffel-Darboux formula}, C. R. (Doklady) Acad. Sci. URSS (N.S.) 26 (1940), 847--849.
\bibitem{Ger41}
Ya. Geronimus, \textit{Sur quelques propri\'et\'es des polynomes orthogonaux g\'en\'eralis\'es}, Rec. Math. [Mat. Sbornik] N.S. 9 (51) (1941), 121--135. (Russian. French summary)
\bibitem{JT}
W.B. Jones, W.J. Thron, {\it Continued fractions. Analytic theory and applications}. Encyclopedia of Mathematics and its Applications, 11. Addison-Wesley Publishing Co., Reading, Mass., 1980.
\bibitem{IM95}
M.E.H. Ismail, D.R. Masson, {\it Generalized orthogonality and continued fractions}, J. Approx. Theory 83 (1995), 1--40.
\bibitem{PN}
F. Pint\'{e}r and P. Nevai, {\em Schur functions and orthogonal polynomials on the unit circle}, in ``Approximation Theory and Function Series'', Bolyai Soc. Math. Stud., 5, pp. 293--306, J\'{a}nos Bolyai Math. Soc., Budapest, 1996.
\bibitem{Schur1}
I. Schur, \textit{\"Uber Potenzreihen, die im Innern des Einheitskreises beschr\"ankt sind}, J. Reine Angew. Math. 147 (1917), 205--232 (in German).
\bibitem{Schur2}
I. Schur, \textit{\"Uber Potenzreihen, die im Innern des Einheitskreises beschr\"ankt sind}, J. Reine Angew. Math. 148 (1918), 122--145 (in German).
\bibitem{Simon98}
B. Simon, \textit{The classical moment problem as a self-adjoint finite difference operator}, Adv. Math. 137 (1998), no. 1, 82--203.
\bibitem{OPUC1}
B. Simon, {\em Orthogonal Polynomials on the Unit Circle, Part One: Classical Theory}, American Mathematical Society, Providence, RI, 2005.
\bibitem{OPUC2}
B. Simon, {\em Orthogonal Polynomials on the Unit Circle, Part Two: Spectral Theory}, American Mathematical Society, Providence, RI, 2005.
\bibitem{STZh}
V. P. Spiridonov, S. Tsujimoto, A. S. Zhedanov, \textit{Integrable discrete time chains for the Frobenius-Stickelberger-Thiele polynomials}, Comm. Math. Phys. 272 (2007), no. 1, 139--165.
\bibitem{Wall44}
H.S. Wall, {\it Continued fractions and bounded analytic functions}, Bull. Amer. Math. Soc. 50 (1944), 110--119.
\bibitem{Wall46}
H.S. Wall, {\it Bounded J-fractions}, Bull. Amer. Math. Soc. 52 (1946), 686--693.
\bibitem{Wall48}
H. S. Wall, \textit{Analytic Theory of Continued Fractions}. D. Van Nostrand Company, Inc., New York, N.Y., 1948.
\bibitem{Zh97}
A. Zhedanov, \textit{Rational spectral transformations and orthogonal polynomials}, J. Comput. Appl. Math. 85 (1997), no. 1, 67--86.
\bibitem{Zh99}
A. Zhedanov, \textit{Biorthogonal rational functions and the generalized eigenvalue problem}, J. Approx. Theory 101 (1999), no. 2, 303--329.
\end{thebibliography}
\end{document}
\begin{document}
\title{An FE-inexact heterogeneous ADMM for Elliptic Optimal Control Problems with {$L^1$}-Control Cost}
\author{Xiaoliang Song$\mathbf{^1}$ \and Bo Yu$\mathbf{^{1}}$ \and Yiyang Wang$\mathbf{^1}$ \and Xuping Zhang$\mathbf{^1}$}
\institute{Xiaoliang Song\at
\email{[email protected]}
\and
Bo Yu \at
\email{[email protected]}
\and
Yiyang Wang \at
\email{[email protected]}
\and
Xuping Zhang \at
\email{[email protected]}
\and
\at
$\mathbf{^1}$ School of Mathematical Sciences,
Dalian University of Technology, Dalian, Liaoning, 116025, China
}
\maketitle
\begin{abstract}
Elliptic PDE-constrained optimal control problems with $L^1$-control cost ($L^1$-EOCP) are considered. To solve $L^1$-EOCP, the primal-dual active set (PDAS) method, which is a special semismooth Newton (SSN) method, has long been a first choice. However, solving Newton equations is in general expensive. Motivated by the success of the alternating direction method of multipliers (ADMM), we consider extending the ADMM to $L^1$-EOCP. To discretize $L^1$-EOCP, the piecewise linear finite element (FE) is considered. However, different from the finite dimensional $l^1$-norm, the discretized $L^1$-norm does not have a decoupled form. To overcome this difficulty, an effective approach is to utilize nodal quadrature formulas to approximately discretize the $L^1$-norm and $L^2$-norm. It is proved that these approximation steps do not change the order of the error estimates. To solve the discretized problem, an inexact heterogeneous ADMM (ihADMM) is proposed. Different from the classical ADMM, the ihADMM adopts two different weighted inner products to define the augmented Lagrangian function in the two subproblems, respectively. Benefiting from such different weighting techniques, the two subproblems of ihADMM can be efficiently implemented. Furthermore, theoretical results on the global convergence as well as the iteration complexity result $o(1/k)$ for ihADMM are given. In order to obtain more accurate solutions, a two-phase strategy is also presented, in which the primal-dual active set (PDAS) method is used as a postprocessor of the ihADMM. Numerical results not only confirm the error estimates, but also show that the ihADMM and the two-phase strategy are highly efficient.
\keywords{optimal control\and sparsity\and finite element\and ADMM}
\subclass{ 49N05\and 65N30\and 49M2\and 68W15}
\end{abstract}
\section{Introduction}
\label{intro}
In this paper, we study the following non-differentiable optimal control problem with \textsl{$L^1$}-control cost, which is known to lead to sparse controls:
\begin{equation}\label{eqn:orginal problems}
\qquad \left\{ \begin{aligned}
\min \limits_{(y,u)\in Y\times U}^{}\ \ J(y,u)&=\frac{1}{2}\|y-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^2(\Omega)}^{2}+\beta\|u\|_{L^1(\Omega)} \\
{\rm s.t.}\qquad\qquad Ay&=u+y_c\ \ \mathrm{in}\ \Omega, \\
y &=0\quad \mathrm{on}\ \partial\Omega,\\
u &\in U_{ad}=\{v(x)|a\leq v(x)\leq b, {\rm a.e }\ \mathrm{on}\ \Omega\}\subseteq U,
\end{aligned} \right.\tag{$\mathrm{P}$}
\end{equation}
where $Y:=H_0^1(\Omega)$, $U:=L^2(\Omega)$, $\Omega\subseteq \mathbb{R}^n$, $n=2$ or $3$, is a convex, open and bounded domain with $C^{1,1}$- or polygonal boundary $\Gamma$; $y_c$, $y_d \in L^2(\Omega)$ and parameters $-\infty<a<0<b<+\infty$, $\alpha$, $\beta>0$. Moreover, the operator $A$ is a second-order linear elliptic differential operator. Such an optimal control problem (\ref{eqn:orginal problems}) plays an important role in the placement of control devices \cite{Stadler}. In some cases, it is difficult or undesirable to place control devices all over the control domain, and one hopes to localize controllers in small and most effective regions.
For the study of optimal control problems with sparsity promoting terms, as far as we know, the first paper devoted to this topic was published by Stadler \cite{Stadler}, in which structural properties of the control variables were analyzed and two Newton-type algorithms (the semismooth Newton algorithm and the primal-dual active set method) were proposed in the case of the linear-quadratic elliptic optimal control problem. In 2011, a priori and a posteriori error estimates were first given by Wachsmuth and Wachsmuth in \cite{WaWa} for piecewise linear control discretizations, in which a convergence rate of order $\mathcal{O}(h)$ under the $L^2$ norm is obtained.
In a sequence of papers \cite{CaHerWa1,CaHerWaoptimal}, for the non-convex case governed by a semilinear elliptic equation, Casas et al. proved second-order necessary and sufficient optimality conditions. Using the second-order sufficient optimality conditions, the authors provided error estimates of order $h$ w.r.t. the $L^\infty$ norm for three different choices of the control discretization (the piecewise constant and piecewise linear control discretizations and the variational control discretization).
Next, let us mention some existing numerical methods for solving problem (\ref{eqn:orginal problems}). Since problem (\ref{eqn:orginal problems}) is nonsmooth, applying semismooth Newton methods has been a natural first choice. A special semismooth Newton method with an active set strategy, called the primal-dual active set (PDAS) method, was introduced in \cite{BeItKu} for control constrained elliptic optimal control problems. It is proved to have local superlinear convergence (see \cite{Ulbrich2} for more details). Furthermore, mesh-independence results for semismooth Newton methods were established in \cite{meshindependent}. However, solving Newton equations is in general expensive, especially when the discretization is at a fine level.
Recently, for finite dimensional large scale optimization problems, some efficient first-order algorithms, such as iterative shrinkage/soft thresholding algorithms (ISTA) \cite{Blumen}, accelerated proximal gradient (APG)-based methods \cite{inexactAPG,Beck,inexactABCD}, and the ADMM \cite{Boyd,SunToh1,SunToh2,Fazel}, have become state-of-the-art algorithms. Thanks to the iteration complexity $O(1/k^2)$, a fast inexact proximal (FIP) method in function space, which is actually the APG method, was proposed to solve problem (\ref{eqn:orginal problems}) in \cite{FIP}. As we know, the efficiency of the FIP method depends on how close the step-length is to the Lipschitz constant. However, in general, choosing an appropriate step-length is difficult since the Lipschitz constant is usually not available analytically.
In this paper, we will mainly focus on the ADMM algorithm. The classical ADMM was originally proposed by Glowinski and Marroco \cite{GlowMarro} and Gabay and Mercier \cite{Gabay}, and it has found lots of efficient applications in a broad spectrum of areas. In particular, we refer to \cite{Boyd} for a review of the applications of ADMM in the areas of distributed optimization and statistical learning.
Motivated by the success of the finite dimensional ADMM algorithm, it is reasonable to consider extending the ADMM to infinite dimensional optimal control problems, as well as the corresponding discretized problems. In 2016, the authors of \cite{splitbregmanforOCP} adapted the split Bregman method (equivalent to the classical ADMM) to handle PDE-constrained optimization problems with total variation regularization. However, for the discretized problem, the authors did not take advantage of the inherent structure of the problem and still used the classical ADMM to solve it.
In this paper, making full use of the inherent structure of the problem, we aim to design an appropriate ADMM-type algorithm to solve problem (\ref{eqn:orginal problems}). In order to employ the ADMM algorithm and obtain a separable form, by adding an artificial variable $z$ we can separate the smooth and nonsmooth terms and equivalently reformulate problem (\ref{eqn:orginal problems}) as:
\begin{equation}\label{eqn:modified problems}
\left\{ \begin{aligned}
\min \limits_{(y,u,z)\in Y\times U\times U}^{}\ \ J(y,u,z)&=\frac{1}{2}\|y-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{4}\|u\|_{L^2(\Omega)}^{2}+\frac{\alpha}{4}\|z\|_{L^2(\Omega)}^{2}+\beta\|z\|_{L^1(\Omega)} \\
{\rm s.t.}\qquad\qquad\qquad Ay&=u+y_c\ \ \mathrm{in}\ \Omega, \\
y&=0\quad \mathrm{on}\ \partial\Omega,\\
u&=z,\\
z&\in U_{ad}=\{v(x)|a\leq v(x)\leq b, {\rm a.e }\ \mathrm{on}\ \Omega\}\subseteq U.
\end{aligned} \right.\tag{$\mathrm{\widetilde{P}}$}
\end{equation}
An attractive feature of problem (\ref{eqn:modified problems}) is that the objective function with respect to each variable is strongly convex, which ensures the existence and uniqueness of the optimal solution. Moreover, in many algorithms, strong convexity is a boon to good convergence and makes possible more convenient stopping criteria.
Then an inexact ADMM in function space is developed for (\ref{eqn:modified problems}).
Focusing on the inherent structure of the ADMM in function space is worthwhile for us to propose an appropriate discretization scheme and give a suitable algorithm to solve the corresponding discretized problem. As will be mentioned in Section \ref{sec:2}, since each subproblem of the inexact ADMM algorithm for (\ref{eqn:modified problems}) is well structured, it can be efficiently solved. Thus, it will be a crucial point in the numerical analysis to construct similar structures for the discretized problem.
To discretize problem (\ref{eqn:modified problems}), we consider using the piecewise linear finite element to discretize the state variable $y$, the control variable $u$ and the artificial variable $z$. However, the resulting discretized problem is not in a decoupled form, as the finite dimensional $l^1$-regularization optimization problem usually is, since the discretized $L^1$-norm does not have a decoupled form:
\begin{equation*}
\|z_h\|_{L^{1}(\Omega_h)}=\int_{\Omega_h}\left|\sum_{i=1}^{n}z_i\phi_i(x)\right|\mathrm{d}x.
\end{equation*}
Thus, we employ the following nodal quadrature formulas to approximately discretize the $L^1$-norm and we have
\begin{equation*}
\|z_h\|_{L^{1}_h(\Omega_h)}=\sum_{i=1}^{n}|z_i|\int_{\Omega_h}\phi_i(x)\mathrm{d}x,
\end{equation*}
which was introduced in \cite{WaWa}.
Moreover, in order to obtain a closed form solution for the subproblem in $z$, a similar quadrature formula is also used to discretize the squared $L^2$-norm:
\begin{equation}\label{equ:approxL2norm}
\|z_h\|_{L^{2}_h(\Omega_h)}^2={\sum_{i=1}^{n}(z_i)^2}\int_{\Omega_h}\phi_i(x)\mathrm{d}x.
\end{equation}
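To make the lumping concrete, here is an illustrative 1D analogue (our own sketch; the paper works in 2D/3D): on a uniform P1 mesh the weights $w_i=\int\phi_i\,\mathrm{d}x$ coincide with the row sums of the consistent mass matrix, and the lumped norms reduce to weighted nodal sums.

```python
# Illustrative 1D analogue of the nodal quadrature ("mass lumping") on a
# uniform P1 mesh over [0, 1]: w_i = int phi_i dx equals the i-th row sum
# of the consistent mass matrix.
n_el = 8
h = 1.0 / n_el
n = n_el + 1
diag = [h / 3] + [2 * h / 3] * (n - 2) + [h / 3]   # consistent mass matrix diagonal
off = [h / 6] * (n - 1)                            # first off-diagonal
w = [h / 2] + [h] * (n - 2) + [h / 2]              # lumped weights int phi_i dx

row_sums = [diag[i]
            + (off[i - 1] if i > 0 else 0.0)
            + (off[i] if i < n - 1 else 0.0)
            for i in range(n)]

z = [(-1.0) ** i for i in range(n)]                 # nodal values of z_h
l1_h = sum(abs(zi) * wi for zi, wi in zip(z, w))    # ||z_h||_{L^1_h}
l2_h_sq = sum(zi * zi * wi for zi, wi in zip(z, w)) # ||z_h||^2_{L^2_h}
print(l1_h, l2_h_sq)  # both 1.0 here: |z_i| = 1 and the weights sum to 1
```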
For the new finite element discretization scheme, we establish an a priori finite element error estimate w.r.t. the $L^2$ norm, i.e.
$\|u-u_h\|_{L^2(\Omega)}\leq C(\alpha^{-1}h+\alpha^{-\frac{3}{2}}h^2)$,
which is the same as the result shown in \cite{WaWa}.
To solve (\ref{equ:approx discretized matrix-vector form}), i.e., the discrete version of (\ref{eqn:modified problems}), we consider using an ADMM-type algorithm. However, when the classical ADMM is directly used to solve (\ref{equ:approx discretized matrix-vector form}), the subproblems lack the favorable structure of the continuous case and cannot be efficiently solved.
Thus, making use of the inherent structure of (\ref{equ:approx discretized matrix-vector form}), a heterogeneous ADMM is proposed. Meanwhile, it is sometimes unnecessary to exactly compute the solution of each subproblem even if this is doable, especially at the early stage of the whole process. For example, if a subproblem is equivalent to solving a large-scale or ill-conditioned linear system, it is a natural idea to use iterative methods such as Krylov-based methods. Hence, taking the inexactness of the solutions of the associated subproblems into account, a more practical inexact heterogeneous ADMM (ihADMM) is proposed. Different from the classical ADMM, we utilize two different weighted inner products to define the augmented Lagrangian function for the two subproblems, respectively.
Specifically, based on the $M_h$-weighted inner product, the augmented Lagrangian function with respect to the $u$-subproblem in the $k$-th iteration is defined as
\begin{equation*}
\mathcal{L}_\sigma(u,z^k;\lambda^k)=f(u)+g(z^k)+\langle\lambda^k,M_h(u-z^k)\rangle+\frac{\sigma}{2}\|u-z^k\|_{M_h}^{2},
\end{equation*}
where $M_h$ is the mass matrix. On the other hand, for the $z$-subproblem, based on the $W_h$-weighted inner product, the augmented Lagrangian function in the $k$-th iteration is defined as
\begin{equation*}
\mathcal{L}_\sigma(u^{k+1},z;\lambda^k)=f(u^{k+1})+g(z)+\langle\lambda^k,M_h(u^{k+1}-z)\rangle+\frac{\sigma}{2}\|u^{k+1}-z\|_{W_h}^{2},
\end{equation*}
where the lumped mass matrix $W_h$ is diagonal.
As will be mentioned in Section \ref{sec:4}, benefiting from the different weighting techniques, each subproblem of ihADMM for (\ref{equ:approx discretized matrix-vector form}) can be efficiently solved. Specifically, the $u$-subproblem of ihADMM, which results in a large-scale linear system, is the main computational cost of the whole algorithm. The $M_h$-weighting allows us to reduce the block three-by-three system to a block two-by-two system without any extra computational cost, thus reducing the amount of calculation. On the other hand, the $W_h$-weighting makes the $z$-subproblem decouple and admit a closed form solution given by the soft thresholding operator and the projection operator onto the box constraint $[a,b]$. Moreover, global convergence and the iteration complexity result $o(1/k)$ in the non-ergodic sense for our ihADMM will be proved.
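The nodewise $z$-update can be sketched as follows. Because $W_h$ is diagonal, each component solves a scalar problem of the generic form $\min_z \frac{c}{2}(z-v)^2+\tau|z|$ over $z\in[a,b]$, whose solution is soft thresholding followed by projection. (The actual inputs $v$ and threshold $\tau$ in the ihADMM depend on $\sigma$, $\alpha$, $\beta$ and the lumped weights; this is only a hedged illustration of the operator structure, not the paper's exact formulas.)

```python
def soft(v, tau):
    # soft-thresholding operator, the prox of tau * |.|
    if v > tau:
        return v - tau
    if v < -tau:
        return v + tau
    return 0.0

def z_update(v, tau, a, b):
    # minimizer of (1/2)(z - v)^2 + tau*|z| over [a, b]; for a 1D convex
    # objective, the constrained minimizer is the clamp of the unconstrained
    # one, so shrinking first and then projecting is valid (here a < 0 < b)
    return min(max(soft(v, tau), a), b)

print(z_update(2.0, 0.5, -1.0, 1.0))   # 1.0: shrink to 1.5, clip to b
print(z_update(0.3, 0.5, -1.0, 1.0))   # 0.0: below the threshold
print(z_update(-2.0, 0.5, -1.0, 1.0))  # -1.0
```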
Taking the discretization error into account, we should mention that our ihADMM algorithm is sufficient and efficient for obtaining an approximate solution of problem (\ref{equ:approx discretized matrix-vector form}) with moderate accuracy.
Furthermore, in order to obtain more accurate solutions when required, we propose a two-phase strategy that combines the ihADMM with a semismooth Newton method. Specifically, in Phase I, our ihADMM algorithm is used to generate a reasonably good initial point to warm-start Phase II, in which the PDAS method is employed as a postprocessor of our ihADMM to solve the discretized problem to high accuracy.
The remainder of the paper is organized as follows. In Section \ref{sec:2}, an inexact ADMM algorithm in function space for solving (\ref{eqn:orginal problems}) is described. In Section \ref{sec:3}, the finite element approximation is introduced and a priori error estimates are proved. In Section \ref{sec:4}, an inexact heterogeneous ADMM (ihADMM) is proposed for the discretized problem; the PDAS method used as the Phase-II algorithm is also presented. In Section \ref{sec:5},
numerical results are given to confirm the finite element error estimates and show the efficiency of our ihADMM and the two-phase strategy. Finally, we conclude our paper in Section \ref{sec:6}.
\section{An inexact ADMM for (\ref{eqn:modified problems}) in Function Space}
\label{sec:2}
In this paper, we assume that the elliptic PDEs involved in problem (\ref{eqn:modified problems}),
\begin{equation}\label{eqn:state equations}
\begin{aligned}
Ay&=u+y_c \qquad \mathrm{in}\ \Omega, \\
y&=0 \qquad \qquad \mathrm{on}\ \partial\Omega,
\end{aligned}
\end{equation}
satisfy the following assumption.
\begin{assumption}\label{equ:assumption:1}
The linear second-order differential operator $A$ is defined by
\begin{equation}\label{operator A}
(Ay)(x):=-\sum \limits^{n}_{i,j=1}\partial_{x_{j}}(a_{ij}(x)y_{x_{i}})+c_0(x)y(x),
\end{equation}
where the functions $a_{ij}(x), c_0(x)\in L^{\infty}(\Omega)$ satisfy $c_0\geq0$ and $a_{ij}(x)=a_{ji}(x)$,
and $A$ is uniformly elliptic, i.e., there is a constant $\theta>0$ such that
\begin{equation}\label{equ:operator A coercivity}
\sum \limits^{n}_{i,j=1}a_{ij}(x)\xi_i\xi_j\geq\theta\|\xi\|^2 \quad \mathrm{for\ almost\ all}\ x\in \Omega\ \mathrm{and\ all}\ \xi \in \mathbb{R}^n.
\end{equation}
\end{assumption}
Then, the weak formulation of (\ref{eqn:state equations}) is given by
\begin{equation}\label{eqn:weak form}
\mathrm{Find}\ y\in H_0^1(\Omega):\ a(y,v)=(u+y_c,v)_{L^2(\Omega)}\quad \forall v \in H_0^1(\Omega),
\end{equation}
with the bilinear form
\begin{equation}\label{eqn:bilinear form}
a(y,v)=\int_{\Omega}(\sum \limits^{n}_{i,j=1}a_{ij}y_{x_{i}}v_{x_{j}}+c_0yv)\mathrm{d}x.
\end{equation}
\begin{proposition}{\rm\textbf{\cite [Theorem B.4] {KiSt}}}\label{equ:weak formulation}
Under Assumption {\ref{equ:assumption:1}}, the bilinear form $a(\cdot,\cdot)$ in {\rm (\ref{eqn:bilinear form})} is bounded and $V$-coercive for $V=H^1_0(\Omega)$. In particular, for every $u \in L^2(\Omega)$ and $y_c\in L^2(\Omega)$, {\rm (\ref{eqn:state equations})} has a unique weak solution $y\in H^1_0(\Omega)$ given by {\rm (\ref{eqn:weak form})}. Furthermore,
\begin{equation}\label{equ:control estimats state}
\|y\|_{H^1}\leq C (\|u\|_{L^2(\Omega)}+\|y_c\|_{L^2(\Omega)}),
\end{equation}
for a constant $C$ depending only on $a_{ij}$, $c_0$ and $\Omega$.
\end{proposition}
By Proposition \ref{equ:weak formulation}, the solution operator $\mathcal{S}$: $H^{-1}(\Omega)\rightarrow H^1_0(\Omega)$ with $y(u):=\mathcal{S}(u+y_c)$ is well-defined; it is called the control-to-state mapping and is a continuous linear injective operator. Since $H_0^1(\Omega)$ is a Hilbert space, the adjoint operator $\mathcal{S}^*$: $H^{-1}(\Omega)\rightarrow H_0^1(\Omega)$ is also a continuous linear operator.
It is clear that problem (\ref{eqn:modified problems}) is continuous and strongly convex. Therefore, the existence and uniqueness of the solution of (\ref{eqn:modified problems}) are obvious. The optimal solution can be characterized by the following Karush-Kuhn-Tucker (KKT) conditions:
\begin{theorem}[{\rm First-Order Optimality Condition}]\label{First-Order Optimality Condition}
Under Assumption \ref{equ:assumption:1}, {\rm($y^*$, $u^*$, $z^*$)} is the optimal solution of {\rm(\ref{eqn:modified problems})} if and only if there exist an adjoint state $p^*\in H_0^1(\Omega)$ and a Lagrange multiplier $\lambda^*\in L^2(\Omega)$ such that the following conditions hold in the weak sense
\begin{subequations}\label{eqn:KKT}
\begin{eqnarray}
&&\begin{aligned}\label{eqn1:KKT}
y^*=\mathcal{S}(u^*+y_c),
\end{aligned} \\
&& \begin{aligned}\label{eqn2:KKT}
p^*&=\mathcal{S}^*(y_d-y^*),
\end{aligned}\\
&&\frac{\alpha}{2} u^*-p^*+\lambda^*=0,\label{eqn3:KKT}\\
&&u^*=z^*,\label{eqn4:KKT}\\
&&{z^*} \in U_{ad},\label{eqn5:KKT}\\
&&{\left\langle\frac{\alpha}{2} z^*-\lambda^*,\tilde{z}-z^*\right\rangle_{L^2(\Omega)}+\beta(\|\tilde{z}\|_{L^1(\Omega)}-\|z^*\|_{L^1(\Omega)})}\geq0,\quad \forall \tilde{z} \in U_{ad}.\label{eqn6:KKT}
\end{eqnarray}
\end{subequations}
Moreover,
we have
\begin{equation}\label{equ:z-piecepoint form}
u^*=\mathrm{\Pi}_{U_{ad}}\left(\frac{1}{\alpha}{\rm{soft}}\left(p^*,\beta\right)\right),
\end{equation}
where the projection operator $\mathrm{\Pi}_{U_{ad}}(\cdot)$ and the soft thresholding operator $\rm {soft}(\cdot,\cdot)$ are defined as follows, respectively,
\begin{eqnarray}
\mathrm{\Pi}_{U_{ad}}(v(x))&:=&\max\{a,\min\{v(x),b\}\}, \quad {\rm soft}(v(x),\beta):={\rm{sgn}}(v(x))\circ\max(|v(x)|-\beta,0)\label{softtheresholds}.
\end{eqnarray}
In addition, the optimal control $u^*$ has the regularity $u^*\in H^1(\Omega)$.
\end{theorem}
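The projection formula (\ref{equ:z-piecepoint form}) can be evaluated pointwise. The following minimal Python sketch composes the soft thresholding operator and the box projection at a single grid point; the function names and the scalar setting are illustrative assumptions, not part of the paper.

```python
def soft(v, beta):
    """Soft thresholding: sgn(v) * max(|v| - beta, 0)."""
    return (1.0 if v > 0 else -1.0 if v < 0 else 0.0) * max(abs(v) - beta, 0.0)

def project_box(v, a, b):
    """Projection onto the box [a, b]: max(a, min(v, b))."""
    return max(a, min(v, b))

def optimal_control(p, alpha, beta, a, b):
    """Evaluate u = Pi_{U_ad}( soft(p, beta) / alpha ) at one point."""
    return project_box(soft(p, beta) / alpha, a, b)
```

For example, with $p^*=2$, $\alpha=2$, $\beta=0.5$ and box $[-1,1]$, the formula yields ${\rm soft}(2,0.5)/2 = 0.75$, which already lies inside the box.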
ADMM is a simple but powerful algorithm.
Next, we introduce the ADMM in function space.
Focusing on the ADMM algorithm in function space helps us to better understand the inherent structure of the problem, which in turn guides the choice of an appropriate discretization scheme and of a suitable ADMM-type algorithm for the corresponding discretized problem.
Using the operator $\mathcal{S}$, problem (\ref{eqn:modified problems}) can be equivalently rewritten in the following reduced form:
\begin{equation}\label{eqn:reduced form}
\left\{ \begin{aligned}
&\min \limits_{u, z}^{}\ \ f(u)+g(z)\\
&~~{\rm{s.t.}}\quad u=z,
\end{aligned} \right.\tag{$\mathrm{RP}$}
\end{equation}
with the reduced cost function
\begin{eqnarray}
f(u):&=&\frac{1}{2}\|\mathcal{S}(u+y_c)-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{4}\|u\|_{L^2(\Omega)}^{2},\\
g(z):&=&\frac{\alpha}{4}\|z\|_{L^2(\Omega)}^{2}+\beta\|z\|_{L^1(\Omega)}+\delta_{U_{ad}}(z).
\end{eqnarray}
Let us define the augmented Lagrangian function of (\ref{eqn:reduced form}) as follows:
\begin{equation}\label{augmented Lagrangian function}
\mathcal{L}_\sigma(u,z;\lambda)=f(u)+g(z)+\langle\lambda,u-z\rangle_{L^2(\Omega)}+\frac{\sigma}{2}\|u-z\|_{L^2(\Omega)}^{2}
\end{equation}
where $\lambda \in L^2(\Omega)$ is the Lagrange multiplier and $\sigma>0$ is a penalty parameter. Moreover, for the convergence and iteration complexity analysis, we define the function $R: (u,z,\lambda)\rightarrow [0,\infty)$ by:
\begin{equation}\label{KKT function}
R(u,z,\lambda):=\|\nabla f(u)+\lambda\|^2_{L^2(\Omega)}+{\rm dist}^2(0, -\lambda+\partial g(z))+\|u-z\|^2_{L^2(\Omega)}.
\end{equation}
Then, the iterative scheme of inexact ADMM for the problem (\ref{eqn:reduced form}) is shown in Algorithm \ref{algo1:ADMM for problems RP}.
\begin{algorithm}[H]
\caption{inexact ADMM algorithm for (\ref{eqn:reduced form})}\label{algo1:ADMM for problems RP}
\textbf{Input}: {$(z^0, u^0, \lambda^0)\in {\rm dom} (\delta_{U_{ad}}(\cdot))\times L^2(\Omega) \times L^2(\Omega)$ and a parameter $\tau \in (0,\frac{1+\sqrt{5}}{2})$. Let $\{\epsilon_k\}^\infty_{k=0}$ be a sequence satisfying $\{\epsilon_k\}^\infty_{k=0}\subseteq [0,+\infty)$ and $\sum\limits_{k=0}^{\infty}\epsilon_k<\infty$. Set $k=0$.}\\
\textbf{Output}: {$ u^k, z^{k}, \lambda^k$}
\begin{description}
\item[Step 1] Compute an inexact minimizer
\begin{equation*}\label{u-subproblem}
\begin{aligned}
u^{k+1}&=\arg\min \mathcal{L}_\sigma(u,z^k;\lambda^k)-\langle\delta^k, u\rangle_{L^2(\Omega)},
\end{aligned}
\end{equation*}
where the error vector ${\delta}^k$ satisfies $\|{\delta}^k\|_{L^2(\Omega)} \leq {\epsilon_k}$.
\item[Step 2] Compute $z^{k+1}$ as follows:
\begin{equation*}\label{z-subproblem}
\begin{aligned}
z^{k+1}&=\arg\min\mathcal{L}_\sigma(u^{k+1},z;\lambda^k).
\end{aligned}
\end{equation*}
\item[Step 3] Compute
\begin{eqnarray*}
\lambda^{k+1} &=& \lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{eqnarray*}
\item[Step 4] If a termination criterion is not met, set $k:=k+1$ and go to Step 1.
\end{description}
\end{algorithm}
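To illustrate the structure of the three steps above, the following Python sketch applies them to a scalar toy analogue of (\ref{eqn:reduced form}) with $f(u)=\frac{1}{2}(u-d)^2+\frac{\alpha}{4}u^2$ and $g(z)=\frac{\alpha}{4}z^2+\beta|z|+\delta_{[a,b]}(z)$. This is an assumed illustration, not the paper's PDE setting; here the $u$-subproblem is a scalar quadratic and is solved exactly, so $\delta^k=0$.

```python
def soft(v, beta):
    """Soft thresholding: sgn(v) * max(|v| - beta, 0)."""
    return (1.0 if v > 0 else -1.0 if v < 0 else 0.0) * max(abs(v) - beta, 0.0)

def admm(d, alpha, beta, a, b, sigma=1.0, tau=1.0, iters=200):
    """Toy scalar ADMM: f(u) = (u-d)^2/2 + alpha/4 u^2, g as above."""
    u, z, lam = 0.0, 0.0, 0.0
    for _ in range(iters):
        # u-step: exact minimizer of the scalar quadratic subproblem
        u = (d + sigma * z - lam) / (1.0 + alpha / 2.0 + sigma)
        # z-step: closed form via soft thresholding and box projection,
        # with gamma = alpha/2 + sigma
        gamma = alpha / 2.0 + sigma
        z = max(a, min(soft(sigma * u + lam, beta) / gamma, b))
        # multiplier update with step length tau in (0, (1+sqrt(5))/2)
        lam = lam + tau * sigma * (u - z)
    return u, z, lam
```

For $d=1$, $\alpha=0.5$, $\beta=0.1$ and box $[-1,1]$, the iterates converge to $u=z=0.6$, the minimizer of $\frac{1}{2}(u-1)^2+\frac{\alpha}{2}u^2+\beta|u|$ on $[-1,1]$.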
Regarding the global convergence as well as the iteration complexity of the inexact ADMM for (\ref{eqn:modified problems}), we have the following results.
\begin{theorem}
Suppose that Assumption {\rm\ref{equ:assumption:1}} holds. Let $(y^*,u^*,z^*,p^*,\lambda^*)$ be the KKT point of {\rm(\ref{eqn:modified problems})} which satisfies {\rm(\ref{eqn:KKT})}, and let $\{(u^{k},z^{k},\lambda^k)\}$ be the sequence generated by Algorithm \ref{algo1:ADMM for problems RP} with the associated state sequence $\{y^k\}$ and adjoint state sequence $\{p^k\}$. Then we have
\begin{eqnarray*}
&&\lim\limits_{k\rightarrow\infty}^{}\{\|u^{k}-u^*\|_{L^{2}(\Omega)}+\|z^{k}-z^*\|_{L^{2}(\Omega)}+\|\lambda^{k}-\lambda^*\|_{L^{2}(\Omega)} \}= 0,\\
&& \lim\limits_{k\rightarrow\infty}^{}\{\|y^{k}-y^*\|_{H_0^{1}(\Omega)}+\|p^{k}-p^*\|_{H_0^{1}(\Omega)} \}= 0.
\end{eqnarray*}
Moreover, there exists a constant $C$ only depending on the initial point ${(u^0,z^0,\lambda^0)}$ and the optimal solution ${(u^*,z^*,\lambda^*)}$ such that for $k\geq1$,
\begin{eqnarray}
&&\min\limits^{}_{1\leq i\leq k} \{R(u^i,z^i,\lambda^i)\}\leq\frac{C}{k}, \quad \lim\limits^{}_{k\rightarrow\infty}\left(k\times\min\limits^{}_{1\leq i\leq k} \{R(u^i,z^i,\lambda^i)\}\right) =0,
\end{eqnarray}
where $R(\cdot)$ is defined as in {\rm(\ref{KKT function})}.
\end{theorem}
\begin{proof}
The proof is a direct application of the general inexact ADMM in Hilbert space to the problem (\ref{eqn:reduced form}) and is omitted here; we refer the reader to \cite{Boyd,inexactADMM}.
\end{proof}
\begin{remark}
1). If we omit the error term $\delta^k$, the first subproblem of Algorithm {\rm\ref{algo1:ADMM for problems RP}} is a convex differentiable optimization problem with respect to $u$; thus it is equivalent to solving the following system{\rm:}
\begin{equation}\label{equ:saddle point problems1}
\left[
\begin{array}{ccc}
I & 0 & \quad\mathcal{S}^{-*} \\
0 & (\frac{\alpha}{2}+\sigma)I & \quad-I \\
\mathcal{S}^{-1} & -I & \quad0 \\
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1} \\
p^{k+1} \\
\end{array}
\right]=\left[
\begin{array}{c}
y_d \\
\sigma z^k-\lambda^k \\
y_c \\
\end{array}
\right].
\end{equation}
Moreover,
we can eliminate the variable $p$ and derive the following reduced system{\rm:}
\begin{equation}\label{equ:saddle point problems2}
\left[
\begin{array}{cc}
(\frac{\alpha}{2}+\sigma)I & \quad \mathcal{S}^* \\
-\mathcal{S} & \quad I\\
\end{array}
\right]\left[
\begin{array}{c}
u^{k+1} \\
y^{k+1} \\
\end{array}
\right]=\left[
\begin{array}{c}
\mathcal{S}^*y_d+\sigma z^k-\lambda^k \\
\mathcal{S}y_c \\
\end{array}
\right],
\end{equation}
where $I$ represents the identity operator.
2). It is easy to see that the $z$-subproblem has a closed-form solution:
\begin{equation}\label{z-closed form solution}
\begin{aligned}
z^{k+1}
&=\mathrm{\Pi}_{U_{ad}}\left(\frac{1}{\gamma}{\rm{soft}}\left(\sigma u^{k+1}+\lambda^k,\beta\right)\right),
\end{aligned}
\end{equation}
where $\gamma=\frac{\alpha}{2}+\sigma$.
\end{remark}
Based on the favorable structure of (\ref{equ:saddle point problems2}) and (\ref{z-closed form solution}), it will be a crucial point in the numerical analysis to establish relations parallel to
(\ref{equ:saddle point problems2}) and (\ref{z-closed form solution}) also for the discretized problem.
\section{Finite Element Approximation}
\label{sec:3}
The goal of this section is to study the approximation of problems (\ref{eqn:orginal problems}) and (\ref{eqn:modified problems}) by finite elements.
To achieve our aim, we first consider a family of regular and quasi-uniform triangulations $\{\mathcal{T}_h\}_{h>0}$ of $\bar{\Omega}$. For each cell $T\in \mathcal{T}_h$, let us define the diameter of the set $T$ by $\rho_{T}:={\rm diam}\ T$ and define $\sigma_{T}$ to be the diameter of the largest ball contained in $T$. The mesh size of the grid is defined by $h=\max_{T\in \mathcal{T}_h}\rho_{T}$. We suppose that the following regularity assumptions on the triangulation are satisfied, which are standard in the context of error estimates.
\begin{assumption}\label{assumption on mesh}
There exist two positive constants $\kappa$ and $\tau$ such that
\begin{equation*}
\frac{\rho_{T}}{\sigma_{T}}\leq \kappa \quad {\rm and}\quad \frac{h}{\rho_{T}}\leq \tau,
\end{equation*}
hold for all $T\in \mathcal{T}_h$ and all $h>0$.
Let us define $\bar{\Omega}_h=\bigcup_{T\in \mathcal{T}_h}T$, and let ${\Omega}_h \subset\Omega$ and $\Gamma_h$ denote its interior and its boundary, respectively. In the case that $\Omega$ is a convex polyhedral domain, we have $\Omega=\Omega_h$. In the case that $\Omega$ has a $C^{1,1}$-boundary $\Gamma$, we assume that $\bar{\Omega}_h$ is convex and that all boundary vertices of $\bar{\Omega}_h$ are contained in $\Gamma$, such that
$|\Omega\backslash {\Omega}_h|\leq c h^2$,
where $|\cdot|$ denotes the measure of the set and $c>0$ is a constant.
\end{assumption}
On account of the homogeneous boundary condition of the state equation, we use
\begin{equation*}
Y_h =\left\{y_h\in C(\bar{\Omega})~\big{|}~y_{h|T}\in \mathcal{P}_1~ {\rm{for\ all}}~ T\in \mathcal{T}_h~ \mathrm{and}~ y_h=0~ \mathrm{in } ~\bar{\Omega}\backslash {\Omega}_h\right\}
\end{equation*}
as the discretized state space, where $\mathcal{P}_1$ denotes the space of polynomials of degree less than or equal to $1$. For a given source term $y_c$ and right-hand side $u\in L^2(\Omega)$, we denote by $y_h(u)$ the approximated state associated with $u$, which is the unique solution for the following discretized weak formulation:
\begin{equation}\label{eqn:discrete weak solution}
\int_{\Omega_h}\left(\sum \limits^{n}_{i,j=1}a_{ij}{y_h}_{x_{i}}{v_h}_{x_{j}}+c_0y_hv_h\right)\mathrm{d}x=\int_{\Omega_h}(u+y_c)v_h{\rm{d}}x \qquad \forall v_h\in Y_h.
\end{equation}
Moreover, $y_h(u)$ can also be expressed as $y_h(u)={\mathcal{S}}_{h}(u+y_c)$, in which ${\mathcal{S}}_{h}$ is a discretized version of $\mathcal{S}$ and an injective, self-adjoint operator. The following error estimates are well known.
\begin{lemma}{\rm\textbf{\cite[Theorem 4.4.6]{Ciarlet}}}\label{eqn:lemma1}
For a given $u\in L^2(\Omega)$, let $y$ and $y_h(u)$ be the unique solution of {\rm(\ref{eqn:weak form})} and {\rm(\ref{eqn:discrete weak solution})}, respectively. Then there exists a constant $c_1>0$ independent of $h$, $u$ and $y_c$ such that
\begin{equation}\label{estimates1}
\|y-y_h(u)\|_{L^2(\Omega)}+h\|\nabla y-\nabla y_h(u)\|_{L^2(\Omega)}\leq c_1h^2(\|u\|_{L^2(\Omega)}+\|y_c\|_{L^2(\Omega)}).
\end{equation}
In particular, this implies $\|\mathcal{S}-\mathcal{S}_h\|_{L^2\rightarrow L^2}\leq c_1h^2$ and $\|\mathcal{S}-\mathcal{S}_h\|_{L^2\rightarrow H^1}\leq c_1h$.
\end{lemma}
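The $O(h^2)$ rate of the $L^2$-error in the lemma above can be checked numerically in a simplified setting. The following Python sketch is a hypothetical 1D analogue, not the operator $A$ of the paper: it solves $-y''=\pi^2\sin(\pi x)$ on $(0,1)$ with homogeneous Dirichlet conditions by piecewise-linear finite elements on a uniform mesh, with a nodal quadrature of the load, and observes that the maximal nodal error is reduced by a factor of about four when $h$ is halved.

```python
import math

def solve_p1(n):
    """P1 FEM for -y'' = pi^2 sin(pi x) on (0,1), y(0)=y(1)=0, n interior nodes."""
    h = 1.0 / (n + 1)
    # load vector via nodal quadrature: f_i ~ h * f(x_i)
    f = [h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    # stiffness matrix is tridiagonal with entries (-1/h, 2/h, -1/h);
    # solve the system by the Thomas algorithm
    a, b, c = -1.0 / h, 2.0 / h, -1.0 / h
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, f[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y, h

def max_nodal_error(n):
    """Maximal nodal error against the exact solution y(x) = sin(pi x)."""
    y, h = solve_p1(n)
    return max(abs(y[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))
```

Halving $h$ (e.g. from $1/16$ to $1/32$) reduces the maximal nodal error by a factor close to $4$, consistent with second-order convergence.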
Considering the homogeneous boundary condition of the adjoint state equation (\ref{eqn:state equations}) and the projection formula (\ref{equ:z-piecepoint form}), we use
\begin{equation*}
U_h =\left\{u_h\in C(\bar{\Omega})~\big{|}~u_{h|T}\in \mathcal{P}_1~ {\rm{for\ all}}~ T\in \mathcal{T}_h~ \mathrm{and}~ u_h=0~ \mathrm{in } ~\bar{\Omega}\backslash{\Omega}_h\right\},
\end{equation*}
as the discretized space of the control $u$ and artificial variable $z$.
For a given regular and quasi-uniform triangulation $\mathcal{T}_h$ with nodes $\{x_i\}_{i=1}^{N_h}$, let $\{\phi_i(x)\} _{i=1}^{N_h}$ be the set of nodal basis functions associated with the nodes $\{x_i\}_{i=1}^{N_h}$, where the basis functions satisfy the following properties:
\begin{equation}\label{basic functions properties}
\phi_i(x) \geq 0, \quad \|\phi_i(x)\|_{\infty} = 1 \quad \forall i=1,2,...,N_h,\quad \sum\limits_{i=1}^{N_h}\phi_i(x)=1.
\end{equation}
The elements $z_h \in U_h$, $u_h\in U_h$ and $y_h\in Y_h$ can be represented in the following forms, respectively:
\begin{equation*}
u_h=\sum \limits_{i=1}^{N_h}u_i\phi_i(x), \quad z_h=\sum \limits_{i=1}^{N_h}z_i\phi_i(x),\quad y_h=\sum \limits_{i=1}^{N_h}y_i\phi_i(x),
\end{equation*}
and $u_h(x_i)=u_i$, $z_h(x_i)=z_i$ and $y_h(x_i)=y_i$ hold.
Let $U_{ad,h}$ denote the discretized feasible set, which is defined by
\begin{eqnarray*}
U_{ad,h}:&=&U_h\cap U_{ad}
=\left\{z_h=\sum \limits_{i=1}^{N_h}z_i\phi_i(x)~\big{|}~a\leq z_i\leq b, \forall i=1,...,N_h\right\}\subset U_{ad}.
\end{eqnarray*}
Following the approach of \cite{Cars}, for the error analysis further below, let us introduce a quasi-interpolation operator $\Pi_h:L^1(\Omega_h)\rightarrow U_h$ which provides interpolation estimates. For an arbitrary $w\in L^1(\Omega)$, the operator $\Pi_h$ is constructed as follows:
\begin{equation}\label{equ:quasi-interpolation}
\Pi_hw=\sum\limits_{i=1}^{N_h}\pi_i(w)\phi_i(x), \quad \pi_i(w)=\frac{\int_{\Omega_h}w(x)\phi_i(x){\rm{d}}x}{\int_{\Omega_h}\phi_i(x){\rm{d}}x}.
\end{equation}
It is known that
\begin{equation}\label{quasi-interpolation property}
w\in U_{ad} \Rightarrow \Pi_hw \in U_{ad,h}, \quad {\rm for\ all}\ w\in L^1(\Omega).
\end{equation}
Based on the assumption on the mesh and the control discretization, we extend $\Pi_hw$ to $\Omega$ by taking $\Pi_hw=w$ for every $x\in\Omega\backslash {\Omega}_h$, and we have the following estimates of the interpolation error. For detailed proofs, we refer to \cite{Cars,MeReVe}.
\begin{lemma}\label{eqn:lemma3}
There is a constant $c_2$ independent of $h$ such that
\begin{equation*}
h\|z-\Pi_hz\|_{L^2(\Omega)}+\|z-\Pi_hz\|_{H^{-1}(\Omega)}\leq c_2h^2\|z\|_{H^1(\Omega)},
\end{equation*}
holds for all $z\in H^1(\Omega)$.
\end{lemma}
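A minimal 1D illustration of the quasi-interpolation operator $\Pi_h$: each coefficient $\pi_i(w)$ is a $\phi_i$-weighted average of $w$. The mesh, the hat-function implementation and the midpoint quadrature below are assumptions for illustration only. Constants are reproduced exactly (by the partition of unity), and bounds $a\leq w\leq b$ are preserved, consistent with (\ref{quasi-interpolation property}).

```python
def hat(i, x, nodes):
    """Piecewise-linear nodal basis function phi_i on a uniform 1D mesh."""
    h = nodes[1] - nodes[0]
    return max(0.0, 1.0 - abs(x - nodes[i]) / h)

def quasi_interpolate(w, nodes, nq=2000):
    """Coefficients pi_i(w) = int(w*phi_i) / int(phi_i), via midpoint rule."""
    x0, x1 = nodes[0], nodes[-1]
    dx = (x1 - x0) / nq
    xs = [x0 + (k + 0.5) * dx for k in range(nq)]
    coeffs = []
    for i in range(len(nodes)):
        num = sum(w(x) * hat(i, x, nodes) for x in xs) * dx
        den = sum(hat(i, x, nodes) for x in xs) * dx
        coeffs.append(num / den)
    return coeffs
```

Since the weights $\phi_i\geq0$, each $\pi_i(w)$ is a convex-combination-type average, which is exactly why $w\in U_{ad}$ implies $\Pi_h w\in U_{ad,h}$.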
Now, we can consider a discretized version of problem (\ref{eqn:modified problems}) as:
\begin{equation}\label{equ:separable discrete problem}
\left\{ \begin{aligned}
&\min J_h(y_h,u_h,z_h)=\frac{1}{2}\|y_h-y_d\|_{L^2(\Omega_h)}^{2}+\frac{\alpha}{4}\|u_h\|_{L^2(\Omega_h)}^{2}
+\frac{\alpha}{4}\|z_h\|_{L^{2}(\Omega_h)}^{2}+\beta\|z_h\|_{L^{1}(\Omega_h)} \\
&\qquad\qquad {\rm{s.t.}}\qquad \quad~~ y_h=\mathcal{S}_h(u_h+y_c), \\
&\qquad \qquad \qquad \quad\qquad u_h=z_h,\\
&\qquad \qquad \qquad \quad\qquad z_h\in U_{ad,h},
\end{aligned} \right.\tag{$\mathrm{\widetilde{P}}_{h}$}
\end{equation}
where
\begin{eqnarray}
\label{eqn:exact norm1}\|z_h\|^2_{L^2(\Omega_h)}&=& \int_{\Omega_h}\left(\sum\limits_{i=1}^{N_h}z_i\phi_i(x)\right)^2\mathrm{d}x, \\
\label{eqn:exact norm2}\|z_h\|_{L^1(\Omega_h)}&=&\int_{\Omega_h}\big{|}\sum\limits_{i=1}^{N_h}z_i\phi_i(x)\big{|}\mathrm{d}x.
\end{eqnarray}
Correspondingly, for problem (\ref{eqn:orginal problems}), we have the following discretized version:
\begin{equation}\label{equ:discrete problem}
\left\{ \begin{aligned}
&\min \limits_{(y_h,u_h,z_h)\in Y_h\times U_h\times U_h}^{}J_h(y_h,u_h,z_h)=\frac{1}{2}\|y_h-y_d\|_{L^2(\Omega_h)}^{2}+\frac{\alpha}{2}\|u_h\|_{L^2(\Omega_h)}^{2}+\beta\|u_h\|_{L^{1}(\Omega_h)} \\
&\qquad\qquad {\rm{s.t.}}\qquad \quad~~ y_h=\mathcal{S}_h(u_h+y_c), \\
&\qquad \qquad \qquad \quad\qquad u_h\in U_{ad,h}.
\end{aligned} \right.\tag{$\mathrm{P}_{h}$}
\end{equation}
For problem (\ref{equ:discrete problem}), the authors of \cite{WaWa} gave the following error estimate.
\begin{theorem}{\rm\textbf{\cite[Proposition 4.3]{WaWa}}}\label{theorem:error1}
Let $(y, u)$ be the optimal solution of problem {\rm(\ref{eqn:orginal problems})}, and $(y_h, u_h)$ be the optimal solution of problem {\rm(\ref{equ:discrete problem})}. For every $h_0>0$ and $\alpha_0>0$, there is a constant $C>0$ such that for all $0<\alpha\leq\alpha_0$ and $0<h\leq h_0$ it holds that
\begin{equation}\label{u-error-estimates}
\|u-u_h\|_{L^2(\Omega)}\leq C(\alpha^{-1}h+\alpha^{-\frac{3}{2}}h^2),
\end{equation}
where $C$ is a constant independent of $h$ and $\alpha$.
\end{theorem}
However, the resulting discretized problem (\ref{equ:separable discrete problem}) is not in a decoupled form, unlike typical finite-dimensional $l^1$-regularized optimization problems, since (\ref{eqn:exact norm1}) and (\ref{eqn:exact norm2})
are not decoupled. Thus, if we directly apply the ADMM to the discretized problem, the $z$-subproblem does not have a closed-form solution similar to (\ref{z-closed form solution}).
Hence, directly solving (\ref{equ:separable discrete problem}) cannot make full use of the advantages of the ADMM.
In order to overcome this bottleneck, we introduce nodal quadrature formulas to approximately discretize the \textsl{$L^2$}-norm and the \textsl{$L^1$}-norm. Let
\begin{eqnarray}
&&\|z_h\|_{L^{2}_h(\Omega_h)}:=\left(\sum\limits_{i=1}^{N_h}(z_i)^2\int_{\Omega_h}\phi_i(x)\mathrm{d}x\right)^\frac{1}{2},\label{eqn:approx norm2}\\
&&\|z_h\|_{L^{1}_h(\Omega_h)}:=\sum\limits_{i=1}^{N_h}|z_i|\int_{\Omega_h}\phi_i(x)\mathrm{d}x,\label{eqn:approx norm1}
\end{eqnarray}
and call them $L^{2}_h$- and $L^{1}_h$-norm, respectively.
It is obvious that the \textsl{$L^{2}_h$}-norm and the \textsl{$L^{1}_h$}-norm can be considered as a weighted $l^2$-norm and a weighted $l^1$-norm of the coefficients of $z_h$, respectively. Both of them are norms on $U_h$. In addition, the \textsl{$L^{2}_h$}-norm is induced by the following inner product:
\begin{equation}\label{eqn:approx inner product}
\langle z_h,v_h\rangle_{L^{2}_h(\Omega_h)}=\sum\limits_{i=1}^{N_h}(z_iv_i)\int_{\Omega_h}\phi_i(x)\mathrm{d}x\quad {\rm{for}}\ z_h,v_h\in U_h.
\end{equation}
More importantly, the following properties hold.
\begin{proposition}{\rm\textbf{\cite[Table 1]{Wathen}}}\label{eqn:martix properties}
For all $z_h\in U_h$, the following inequalities hold:
\begin{eqnarray}
\label{equ:martix properties1}&&\|z_h\|^2_{L^{2}(\Omega_h)}\leq\|z_h\|^2_{L^{2}_h(\Omega_h)}\leq c\|z_h\|^2_{L^{2}(\Omega_h)}, \quad {\rm where} \quad c=
\left\{ \begin{aligned}
&4 \quad {\rm if}\ n=2, \\
&5 \quad {\rm if}\ n=3.
\end{aligned} \right.
\\
\label{equ:martix properties2} &&\int_{\Omega_h}\big{|}\sum_{i=1}^{N_h}{z_i\phi_i(x)}\big{|}~\mathrm{d}x\leq\|z_h\|_{L^{1}_h(\Omega_h)}.
\end{eqnarray}
\end{proposition}
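In one dimension the two $L^2$-type norms can be compared explicitly. The following sketch is an illustrative assumption (the proposition above treats $n=2,3$; the analogous constant in 1D is $3$): for a piecewise-linear function on a uniform mesh, the exact element integral is $\frac{h}{3}(z_i^2+z_iz_{i+1}+z_{i+1}^2)$, while the lumped norm weights each $z_i^2$ by $\int\phi_i$.

```python
def exact_l2_sq(z, h):
    """Exact ||z_h||^2_{L^2} for a P1 function with nodal values z on a
    uniform 1D mesh of width h (per-element closed-form integral)."""
    return sum(h * (z[i] ** 2 + z[i] * z[i + 1] + z[i + 1] ** 2) / 3.0
               for i in range(len(z) - 1))

def lumped_l2_sq(z, h):
    """Lumped ||z_h||^2_{L^2_h} = sum_i z_i^2 * int(phi_i); in 1D the
    weight is h at interior nodes and h/2 at the two boundary nodes."""
    weights = [h if 0 < i < len(z) - 1 else h / 2.0 for i in range(len(z))]
    return sum(w * zi ** 2 for w, zi in zip(weights, z))
```

Per element, $\frac{h}{3}(a^2+ab+b^2)\leq\frac{h}{2}(a^2+b^2)\leq h(a^2+ab+b^2)$ follows from $(a-b)^2\geq0$ and $(a+b)^2\geq0$, which gives the 1D chain $\|z_h\|^2_{L^2}\leq\|z_h\|^2_{L^2_h}\leq 3\|z_h\|^2_{L^2}$.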
Thus, based on (\ref{eqn:approx norm2}) and (\ref{eqn:approx norm1}), we derive a new discretized optimal control problem:
\begin{equation}\label{equ:approx discretized}
\left\{ \begin{aligned}
&\min J_h(y_h,u_h,z_h)=\frac{1}{2}\|y_h-y_d\|_{L^2(\Omega_h)}^{2}+\frac{\alpha}{4}\|u_h\|_{L^2(\Omega_h)}^{2}
+\frac{\alpha}{4}\|z_h\|_{L^{2}_h(\Omega_h)}^{2}+\beta\|z_h\|_{L^{1}_h(\Omega_h)} \\
&\qquad\quad \quad{\rm{s.t.}}\qquad \quad~~ y_h=\mathcal{S}_h(u_h+y_c), \\
&\qquad \qquad \qquad\qquad \quad u_h=z_h,\\
&\qquad \qquad \qquad\qquad \quad z_h\in U_{ad,h}.
\end{aligned} \right.\tag{$\mathrm{\widetilde{DP}}_{h}$}
\end{equation}
It should be mentioned that the approximate $L^{1}_h$-norm was already used in \cite[Section 4.4]{WaWa}. However, different from their discretization scheme, in this paper, in order to keep the separability of the discretized $L^2$-norm with respect to $z$, we use (\ref{eqn:approx norm2}) to approximately discretize it.
In addition, although these nodal quadrature formulas incur additional discretization errors, it will be proven that these approximation steps do not change the order of the error estimates shown in (\ref{u-error-estimates}); see Theorem \ref{theorem:error1}. More importantly, these nodal quadrature formulas will turn out to be crucial in order to obtain formulas parallel to (\ref{equ:saddle point problems2}) and (\ref{z-closed form solution}) for the discretized problem (\ref{equ:approx discretized}), see Remark 4.4 below.
Analogous to the continuous problem (\ref{eqn:modified problems}), the discretized problem (\ref{equ:approx discretized}) is also strictly convex and hence uniquely solvable. We derive the following first-order optimality conditions, which are necessary and sufficient for the optimal solution of (\ref{equ:approx discretized}).
\begin{theorem}[{\rm \textbf{Discrete first-order optimality condition}}]
$(u_h, z_h, y_h)$ is the optimal solution of {\rm{(\ref{equ:approx discretized})}}, if and only if there exist an adjoint state $p_h$ and a Lagrange multiplier $\lambda_h$, such that the following conditions are satisfied
\begin{subequations}\label{eqn:DKKT}
\begin{eqnarray}
&&y_h=\mathcal{S}_h (u_h+y_c)\label{eqn1:DKKT}, \\
&&p_h=\mathcal{S}_h^*(y_h-y_d)\label{eqn2:DKKT},\\
&&\frac{\alpha}{2} u_h+p_h+\lambda_h=0\label{eqn3:DKKT},\\
&&u_h=z_h\label{eqn4:DKKT},\\
&&{z_h} \in U_{ad,h}\label{eqn5:DKKT},\\
&&{\left\langle\frac{\alpha}{2} z_h,\tilde{z}_h-z_h\right\rangle_{L^{2}_h(\Omega_h)}-\langle\lambda_h,\tilde{z}_h-z_h\rangle_{L^2(\Omega_h)}+\beta\left(\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}-\|z_h\|_{L^{1}_h(\Omega_h)}\right)}\geq0, \label{eqn6:DKKT}\\
\nonumber\qquad&&\forall \tilde{z}_h \in U_{ad,h}.
\end{eqnarray}
\end{subequations}
\end{theorem}
Now we are in a position to derive error estimates. Let $(y, u, z)$ be the optimal solution of problem (\ref{eqn:modified problems}), and $(y_h, u_h, z_h)$ be the optimal solution of problem (\ref{equ:approx discretized}). We have the following results.
\begin{theorem}\label{theorem:error2}
Let $(y, u, z)$ be the optimal solution of problem {\rm(\ref{eqn:modified problems})}, and $(y_h, u_h,z_h)$ be the optimal solution of problem {\rm(\ref{equ:approx discretized})}. For any $h>0$ small enough and $\alpha_0>0$, there is a constant $C$ such that: for all $0<\alpha\leq\alpha_0$,
\begin{eqnarray*}
\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega)}+\frac{1}{2}\|y-y_h\|^2_{L^2(\Omega)}\leq C(h^2+\alpha h^2+\alpha^{-1} h^2+h^3+\alpha^{-1}h^4+\alpha^{-2}h^4),
\end{eqnarray*} where $C$ is a constant independent of $h$ and $\alpha$.
\end{theorem}
\proof
Due to their optimality, $z$ and $z_h$ satisfy (\ref{eqn6:KKT}) and (\ref{eqn6:DKKT}), respectively. Using the test function $z_h\in U_{ad,h}\subset U_{ad}$ in (\ref{eqn6:KKT}) and the test function $\tilde{z}_h:=\Pi_hz\in U_{ad,h}$ in (\ref{eqn6:DKKT}), we have
\begin{eqnarray}
\label{eqn:error1}&&{\left\langle\frac{\alpha}{2} z-\lambda,z_h-z\right\rangle_{L^2(\Omega)}+\beta\left(\|z_h\|_{L^1(\Omega)}-\|z\|_{L^1(\Omega)}\right)}\geq0, \\
\label{eqn:error2}&&{\left\langle\frac{\alpha}{2} z_h,\tilde{z}_h-z_h\right\rangle_{L^{2}_h(\Omega_h)}-\langle\lambda_h,\tilde{z}_h-z_h\rangle_{L^2(\Omega_h)}+\beta\left(\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}-\|z_h\|_{L^{1}_h(\Omega_h)}\right)}\geq0.
\end{eqnarray}
Because $z_h=0$ on $\bar\Omega\backslash {\Omega}_h$, the integrals over $\Omega$ can be replaced by integrals over $\Omega_h$ in (\ref{eqn:error1}), and it can be rewritten as
\begin{eqnarray}\label{eqn:error3}
\hspace{0.3in}{\left\langle\frac{\alpha}{2} z-\lambda,z-z_h\right\rangle_{L^2(\Omega_h)}+\beta\left(\|z\|_{L^1(\Omega_h)}-\|z_h\|_{L^1(\Omega_h)}\right)}&\leq&\left\langle\lambda-\frac{\alpha}{2} z,z\right\rangle_{L^2(\Omega\backslash {\Omega}_h)}-\beta\|z\|_{L^1(\Omega\backslash {\Omega}_h)} \\
\nonumber&\leq& \langle\lambda,z\rangle_{L^2(\Omega\backslash {\Omega}_h)}\leq ch^2,
\end{eqnarray}
where the last inequality follows from the boundedness of $\lambda$ and $z$ and the assumption $|\Omega\backslash {\Omega}_h|\leq c h^2$.
By the definition of the quasi-interpolation operator in (\ref{equ:quasi-interpolation}) and (\ref{equ:martix properties1}) in Proposition \ref{eqn:martix properties}, we have
\begin{equation}\label{estimates inner product}
\begin{aligned}
\langle z_h,\tilde{z}_h- z_h\rangle_{L^{2}_h(\Omega_h)}&=\langle z_h,\tilde{z}_h\rangle_{L^{2}_h(\Omega_h)}-\|z_h\|^2_{L^{2}_h(\Omega_h)}
\leq\langle z_h,z-z_h\rangle_{L^2(\Omega_h)}.
\end{aligned}
\end{equation}
Thus, (\ref{eqn:error2}) can be rewritten as
\begin{eqnarray}
\label{eqn:error4}&&{\left\langle-\frac{\alpha}{2} z_h+\lambda_h,z-z_h\right\rangle_{L^{2}(\Omega_h)}+\langle\lambda_h,\tilde{z}_h-z\rangle_{L^2(\Omega_h)}-\beta\left(\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}-\|z_h\|_{L^{1}_h(\Omega_h)}\right)}\leq0.
\end{eqnarray}
Adding up and rearranging (\ref{eqn:error3}) and (\ref{eqn:error4}), we obtain
\begin{equation}\label{eqn:error estimat1}
\begin{aligned}
\frac{\alpha}{2}\|z-z_h\|^2_{L^2(\Omega_h)}\leq& \langle\lambda-\lambda_h,z-z_h\rangle_{L^2(\Omega_h)}-\langle\lambda_h,\tilde{z}_h-z\rangle_{L^2(\Omega_h)}\\
&+\beta\left(\|z_h\|_{L^1(\Omega_h)}-\|z\|_{L^1(\Omega_h)}+\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}-\|z_h\|_{L^{1}_h(\Omega_h)}\right)+ch^2\\
\leq&\begin{array}{c}
\underbrace{\left\langle\frac{\alpha}{2}(u_h-u)+p_h-p,z-z_h\right\rangle_{L^2(\Omega_h)}}\\
I_1 \end{array}
-\begin{array}{c}
\underbrace{\left\langle\frac{\alpha}{2}u_h+p_h,\tilde{z}_h-z\right\rangle_{L^2(\Omega_h)}}\\
I_2
\end{array}\\
&+\begin{array}{c}
\underbrace{\beta\left(\|z_h\|_{L^1(\Omega_h)}-\|z\|_{L^1(\Omega_h)}+\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}-\|z_h\|_{L^{1}_h(\Omega_h)}\right)}\\
I_3
\end{array}
+ch^2,
\end{aligned}
\end{equation}
where the second inequality follows from (\ref{eqn3:KKT}) and (\ref{eqn3:DKKT}).
Next, we first estimate the third term $I_3$. By (\ref{equ:martix properties2}) in Proposition \ref{eqn:martix properties}, we have $\|z_h\|_{L^1(\Omega_h)}\leq\|z_h\|_{L^{1}_h(\Omega_h)}$.
From the definition of $\tilde{z}_h=\Pi_h(z)$ and the non-negativity and partition-of-unity properties of the nodal basis functions, we get
\begin{equation}\label{equ:equation about l^1 norm}
\|\tilde{z}_h\|_{L^{1}_h(\Omega_h)}=\|\Pi_h(z)\|_{L^{1}_h(\Omega_h)}
=\sum\limits_{i=1}^{N_h}\left|\frac{\int_{\Omega_h}z(x)\phi_i{\rm{d}}x}{\int_{\Omega_h}\phi_i{\rm{d}}x}\right|{\int_{\Omega_h}\phi_i{\rm{d}}x}=\|z\|_{L^1(\Omega_h)}.
\end{equation}
Thus, we have $I_3\leq 0$.
For the terms $I_1$ and $I_2$, from $u=z$,~$u_h=z_h$, we get
\begin{equation*}
I_1-I_2=-\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega_h)}+\langle p_h-p,\tilde{z}_h-z_h\rangle_{L^2(\Omega_h)}+\left\langle\frac{\alpha}{2}u+p,\tilde{z}_h-z\right\rangle_{L^2(\Omega_h)}+
\frac{\alpha}{2}\langle u_h-u,\tilde{z}_h-z\rangle_{L^2(\Omega_h)}.
\end{equation*}
Then (\ref{eqn:error estimat1}) can be rewritten as
\begin{equation}\label{error estimat12}
\begin{aligned}
\frac{\alpha}{2}\|z-z_h\|^2_{L^2(\Omega_h)}+\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega_h)}&\leq
\begin{array}{c}
\underbrace{\langle p_h-p,\tilde{z}_h-z_h\rangle_{L^2(\Omega_h)}}\\
I_4
\end{array}
+\begin{array}{c}
\underbrace{\left\langle\frac{\alpha}{2}u+p,\tilde{z}_h-z\right\rangle_{L^2(\Omega_h)}}\\
I_5
\end{array}\\
&+
\begin{array}{c}
\underbrace{\frac{\alpha}{2}\langle u_h-u,\tilde{z}_h-z\rangle_{L^2(\Omega_h)}}\\
I_6
\end{array}+ch^2.
\end{aligned}
\end{equation}
For the term $I_4$, let $\tilde{p}_h=\mathcal{S}^*_h(y-y_d)$, we have
\begin{align*}
I_4&=\langle p_h-\tilde{p}_h+\tilde{p}_h-p,\tilde{z}_h-z_h\rangle_{L^2(\Omega_h)} \\
&=-\|y-y_h\|^2_{L^2(\Omega_h)}+\begin{array}{c}
\underbrace{\langle y_h-y,(\mathcal{S}_h-\mathcal{S})(\tilde{z}_h+y_c)-\mathcal{S}(z-\tilde{z}_h)\rangle_{L^2(\Omega_h)}}\\
I_7
\end{array}\\
&\quad + \begin{array}{c}
\underbrace{(y-y_d,(\mathcal{S}_h-\mathcal{S})(\tilde{z}_h-z_h))_{L^2(\Omega_h)}}\\
I_8
\end{array}
.
\end{align*}
Consequently,
\begin{equation}\label{error estimat13}
\frac{\alpha}{2}\|z-z_h\|^2_{L^2(\Omega_h)}+\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega_h)}+\|y-y_h\|^2_{L^2(\Omega_h)}\leq I_5+I_6+I_7+I_8+ch^2.
\end{equation}
In order to further estimate (\ref{error estimat13}), we discuss each of the terms $I_5$ to $I_8$ in turn. First, from the regularity of the optimal control $u$, i.e., $u\in H^1(\Omega)$, and (\ref{equ:z-piecepoint form}), we know that
\begin{equation}\label{eqn:exact function estimats1}
\|u\|_{H^1(\Omega)}\leq \frac{1}{\alpha}\|p\|_{H^1(\Omega)}+\left(\frac{\beta}{\alpha}+a+b\right)\mathcal{M}(\Omega),
\end{equation}
where $\mathcal{M}(\Omega)$ denotes the measure of $\Omega$. Then we have
\begin{equation*}
\left\|\frac{\alpha}{2}u+p\right\|_{H^1(\Omega)}\leq\frac{3}{2}\|p\|_{H^1(\Omega)}+\frac{1}{2}(\beta+\alpha a+\alpha b)\mathcal{M}(\Omega).
\end{equation*}
Moreover, due to the boundedness of the optimal control $u$, the state $y$, the adjoint state $p$ and the operator $\mathcal{S}$, we can choose a sufficiently large constant $L>0$ independent of $\alpha$ and $h$, and a constant $\alpha_0>0$, such that for all $0<\alpha\leq\alpha_0$ and $h>0$ the following inequality holds:
\begin{equation}\label{eqn:exact function estimats2}
\frac{3}{2}\|p\|_{H^1(\Omega)}+(\beta+\alpha a+\alpha b)\mathcal{M}(\Omega)+\|y-y_d\|_{L^2(\Omega)}+\|y_c\|_{L^2(\Omega)}+\|\mathcal{S}\|_{\mathcal{L}(H^{-1},L^2)}+\sup\limits_{u_h\in U_{ad,h}}{}\|u_h\|\leq L.
\end{equation}
From (\ref{eqn:exact function estimats2}) and $u=z$, we have $\|z\|_{H^1(\Omega)}\leq \alpha^{-1}L$. Thus, for the term $I_5$, utilizing Lemma \ref{eqn:lemma3}, we have
\begin{align}\label{error estimat14}
I_5\leq \|\frac{\alpha}{2}u+p\|_{H^1(\Omega_h)}\|\tilde{z}_h-z\|_{H^{-1}(\Omega_h)}\leq c_2 L \|z\|_{H^1(\Omega_h)}h^2 \leq c_2 L^2 \alpha^{-1}h^2.
\end{align}
For the terms $I_6$ and $I_7$, using H\"{o}lder's inequality, Lemma \ref{eqn:lemma1} and Lemma \ref{eqn:lemma3}, we have
\begin{align}\label{error estimat15}
I_6&\leq \frac{\alpha}{4}\|u_h-u\|^2_{L^2(\Omega_h)}+\frac{\alpha}{4}\|\tilde{z}_h-z\|^2_{L^2(\Omega_h)}\leq \frac{\alpha}{4}\|u_h-u\|^2_{L^2(\Omega_h)}+\frac{c_2^2L^2\alpha^{-1}}{4}h^2,
\end{align}
and
\begin{equation}\label{error estimat16}
\begin{aligned}
I_7&\leq\frac{1}{2}\|y-y_h\|^2_{L^2(\Omega_h)}+2\|\mathcal{S}_h-\mathcal{S}\|^2_{\mathcal{L}(L^{2},L^2)}
(\|\tilde{z}_h\|^2_{L^2(\Omega_h)}+\|y_c\|^2_{L^2(\Omega_h)})
+\|\mathcal{S}\|_{\mathcal{L}(H^{-1},L^2)}\|z-\tilde{z}_h\|^2_{H^{-1}(\Omega_h)}\\
&\leq\frac{1}{2}\|y-y_h\|^2_{L^2(\Omega_h)}+2c_1^2L^2h^4+c_2^2L^3\alpha^{-2}h^4.
\end{aligned}
\end{equation}
Finally, for the term $I_8$, we have
\begin{equation}\label{error estimat17}
\begin{aligned}
I_8&\leq \|y-y_d\|_{L^2(\Omega_h)}\|\mathcal{S}_h-\mathcal{S}\|_{\mathcal{L}(L^{2},L^2)}(\|\tilde{z}_h-z\|_{L^2(\Omega_h)}+\|z-z_h\|_{L^2(\Omega_h)})\\
&\leq c_1Lh^2(c_2L\alpha^{-1}h+\|z-z_h\|_{L^2(\Omega_h)})\\
&\leq \frac{\alpha}{4}\|z-z_h\|^2_{L^2(\Omega_h)}+c_1c_2\alpha^{-1}L^2h^3+4c_1^2L^2\alpha^{-1}h^4.
\end{aligned}
\end{equation}
Substituting (\ref{error estimat14}), (\ref{error estimat15}), (\ref{error estimat16}) and (\ref{error estimat17}) into (\ref{error estimat13}) and rearranging, we get
\begin{align*}
\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega_h)}+\frac{1}{2}\|y-y_h\|^2_{L^2(\Omega_h)}\leq C(h^2+\alpha^{-1} h^2+\alpha^{-1}h^3+\alpha^{-1}h^4+\alpha^{-2}h^4),
\end{align*}
where $C>0$ is a suitably chosen constant. Using again the assumption $|\Omega\backslash\Omega_h|\leq ch^2$, we obtain
\begin{align*}
\frac{\alpha}{2}\|u-u_h\|^2_{L^2(\Omega)}+\frac{1}{2}\|y-y_h\|^2_{L^2(\Omega)}\leq C(h^2+\alpha h^2+\alpha^{-1} h^2+h^3+\alpha^{-1}h^4+\alpha^{-2}h^4).
\end{align*}
\endproof
\begin{corollary}\label{corollary:error1}
Let $(y, u, z)$ be the optimal solution of problem {\rm(\ref{eqn:modified problems})}, and $(y_h, u_h, z_h)$ be the optimal solution of problem {\rm(\ref{equ:approx discretized})}. For every $h_0>0$ and $\alpha_0>0$, there is a constant $C>0$ such that for all $0<\alpha\leq\alpha_0$ and $0<h\leq h_0$ it holds that
\begin{eqnarray*}
\|u-u_h\|_{L^2(\Omega)}\leq C(\alpha^{-1}h+\alpha^{-\frac{3}{2}}h^2),
\end{eqnarray*}
where $C$ is a constant independent of $h$ and $\alpha$.
\end{corollary}
\section{An ihADMM algorithm and two-phase strategy for discretized problems}\label{sec:4}
In this section, we introduce an inexact ADMM algorithm and a two-phase strategy for the discretized problems. First, in order to establish relations parallel to (\ref{equ:saddle point problems2}) and (\ref{z-closed form solution}) for the discrete problem (\ref{equ:approx discretized}), we propose an inexact heterogeneous ADMM (ihADMM) algorithm aimed at solving (\ref{equ:approx discretized}) to moderate accuracy.
Furthermore, as mentioned above, if a more accurate solution is required, combining our ihADMM with the primal-dual active set (PDAS) method is a sensible choice. A two-phase strategy is therefore introduced: the solution generated by the ihADMM serves as a reasonably good initial point, and the PDAS method is used as a postprocessor of the ihADMM.
First, let us define the following stiffness and mass matrices:
\begin{eqnarray*}
K_h &=& \left(a(\phi_i, \phi_j)\right)_{i,j=1}^{N_h},\quad
M_h=\left(\int_{\Omega_h}\phi_i\phi_j{\mathrm{d}}x\right)_{i,j=1}^{N_h},
\end{eqnarray*}
where the bilinear form $a(\cdot,\cdot)$ is defined in (\ref{eqn:bilinear form}).
Due to the quadrature formulas (\ref{eqn:approx norm2}) and (\ref{eqn:approx norm1}), a lumped mass matrix
$ W_h={\rm{diag}}\left(\int_{\Omega_h}\phi_i(x)\mathrm{d}x\right)_{i=1}^{N_h}$
is introduced. Moreover, by (\ref{equ:martix properties1}) in Proposition \ref{eqn:martix properties}, we have the following results about the mass matrix $M_h$ and the lumped mass matrix $W_h$.
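For concreteness, these matrices can be assembled elementwise. The following sketch does so for linear ($P_1$) elements on a uniform one-dimensional mesh; this toy setting and all names are illustrative, not the general mesh of this paper.

```python
def assemble_1d_p1(n_el, length=1.0):
    """Assemble the stiffness matrix K, the consistent mass matrix M and
    the lumped mass diagonal W (row sums of M, i.e. the integrals of the
    hat functions) for P1 elements on a uniform 1D mesh."""
    h = length / n_el
    n = n_el + 1  # number of nodes
    K = [[0.0] * n for _ in range(n)]
    M = [[0.0] * n for _ in range(n)]
    for e in range(n_el):
        i, j = e, e + 1
        # element stiffness (1/h)*[[1, -1], [-1, 1]]
        K[i][i] += 1.0 / h; K[j][j] += 1.0 / h
        K[i][j] -= 1.0 / h; K[j][i] -= 1.0 / h
        # element mass (h/6)*[[2, 1], [1, 2]]
        M[i][i] += h / 3.0; M[j][j] += h / 3.0
        M[i][j] += h / 6.0; M[j][i] += h / 6.0
    W = [sum(row) for row in M]  # diagonal entries of the lumped mass matrix
    return K, M, W
```

On this mesh the lumped diagonal sums to the measure of the domain, matching $W_h=\mathrm{diag}(\int_{\Omega_h}\phi_i\,\mathrm{d}x)$.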
\subsection{An inexact heterogeneous ADMM algorithm}
Denoting by $y_{d,h}:=\sum\limits_{i=1}^{N_h}y_d^i\phi_i(x)$ and $y_{c,h}:=\sum\limits_{i=1}^{N_h}y_c^i\phi_i(x)$ the $L^2$-projections of $y_d$ and $y_c$ onto $Y_h$, respectively, and identifying discretized functions with their coefficient vectors, we can rewrite problem (\ref{equ:approx discretized}) in matrix-vector form:
\begin{equation}\label{equ:approx discretized matrix-vector form}
\left\{\begin{aligned}
&\min\limits_{(y,u,z)\in\mathbb{R}^{3N_h}}^{}~~ \frac{1}{2}\|y-y_d\|_{M_h}^{2}+\frac{\alpha}{4}\|u\|_{M_h}^{2}+\frac{\alpha}{4}\|z\|_{W_h}^{2}+\beta\|W_hz\|_1\\
&~~~ \quad {\rm{s.t.}}\qquad\quad K_hy=M_h(u+y_c),\\
&\ \qquad\quad\quad\quad\quad u=z,\\%\{v\in\mathbb{R}^n: a\leq v_i\leq b, i=1,...,n\}.
&\ \qquad\quad\quad\quad\quad z\in[a,b]^{N_h}.
\end{aligned} \right.\tag{$\overline{\mathrm{DP}}_{h}$}
\end{equation}
By Assumption \ref{equ:assumption:1}, the stiffness matrix $K_h$ is symmetric positive definite. Then problem (\ref{equ:approx discretized matrix-vector form}) can be rewritten in the following reduced form:
\begin{equation}\label{equ:reduced approx discretized matrix-vector form}
\left\{\begin{aligned}
&\min\limits_{(u,z)\in\mathbb{R}^{2N_h}}^{}~~ f(u)+g(z)\\
&~~ \quad {\rm{s.t.}} \quad \qquad u=z.
\end{aligned} \right.\tag{$\overline{\mathrm{RDP}}_{h}$}
\end{equation}
where
\begin{eqnarray}
\label{equ:fu function} f(u)&=& \frac{1}{2}\|K_h^{-1}M_h(u+y_c)-y_d\|_{M_h}^{2}+\frac{\alpha}{4}\|u\|_{M_h}^{2}, \quad
g(z) = \frac{\alpha}{4}\|z\|_{W_h}^{2}+\beta\|W_hz\|_1+\delta_{[a,b]^{N_h}}(z).
\end{eqnarray}
To solve (\ref{equ:reduced approx discretized matrix-vector form}) by an ADMM-type algorithm, we first introduce the augmented Lagrangian function for (\ref{equ:reduced approx discretized matrix-vector form}). According to the three possible choices of norm (the $\mathbb{R}^{N_h}$ norm, the $W_h$-weighted norm and the $M_h$-weighted norm), there are three versions of the augmented Lagrangian function: for given $\sigma>0$,
\begin{eqnarray}
\mathcal{L}^1_\sigma(u,z;\lambda)&:=&f(u)+g(z)+\langle\lambda,u-z\rangle+\frac{\sigma}{2}\|u-z\|^{2}, \label{aguLarg1}\\
\mathcal{L}^2_\sigma(u,z;\lambda)&:=&f(u)+g(z)+\langle\lambda,M_h(u-z)\rangle+\frac{\sigma}{2}\|u-z\|_{W_h}^{2}, \label{aguLarg2}\\
\mathcal{L}^3_\sigma(u,z;\lambda)&:=&f(u)+g(z)+\langle\lambda,M_h(u-z)\rangle+\frac{\sigma}{2}\|u-z\|_{M_h}^{2}\label{aguLarg3}.
\end{eqnarray}
Based on these three versions of the augmented Lagrangian function, we give the following four versions of ADMM-type algorithms for (\ref{equ:reduced approx discretized matrix-vector form}) at the $k$-th iteration: for given $\tau>0$ and $\sigma>0$,
\begin{equation}\label{inexact ADMM1}
\left\{\begin{aligned}
&u^{k+1}=\arg\min_u\ f(u)+\langle\lambda^k,u-z^k\rangle+\sigma/2\|u-z^k\|^{2},\\
&z^{k+1}=\arg\min_z\ g(z)+\langle\lambda^k,u^{k+1}-z\rangle+\sigma/2\|u^{k+1}-z\|^{2},\\
&\lambda^{k+1}=\lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{aligned}\right.\tag{ADMM1}
\end{equation}
\begin{equation}\label{inexact ADMM2}
~~~~~~~~ \left\{\begin{aligned}
&u^{k+1}=\arg\min_u\ f(u)+\langle\lambda^k,M_h(u-z^k)\rangle+\sigma/2\|u-z^k\|_{W_h}^{2},\\
&z^{k+1}=\arg\min_z\ g(z)+\langle\lambda^k,W_h(u^{k+1}-z)\rangle+\sigma/2\|u^{k+1}-z\|_{W_h}^{2},\\
&\lambda^{k+1}=\lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{aligned}\right.\tag{ADMM2}
\end{equation}
\begin{equation}\label{inexact ADMM3}
~~~~~~~~ \left\{\begin{aligned}
&u^{k+1}=\arg\min_u\ f(u)+\langle\lambda^k,M_h(u-z^k)\rangle+\sigma/2\|u-z^k\|_{M_h}^{2},\\
&z^{k+1}=\arg\min_z\ g(z)+\langle\lambda^k,M_h(u^{k+1}-z)\rangle+\sigma/2\|u^{k+1}-z\|_{M_h}^{2},\\
&\lambda^{k+1}=\lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{aligned}\right.\tag{ADMM3}
\end{equation}
\begin{equation}\label{inexact ADMM4}
~~~~~~~~ \left\{\begin{aligned}
&u^{k+1}=\arg\min_u\ f(u)+\langle\lambda^k,M_h(u-z^k)\rangle+{\color{blue}\sigma/2\|u-z^k\|_{M_h}^{2}},\\
&z^{k+1}=\arg\min_z\ g(z)+\langle\lambda^k,M_h(u^{k+1}-z)\rangle+{\color{red}\sigma/2\|u^{k+1}-z\|_{W_h}^{2}},\\
&\lambda^{k+1}=\lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{aligned}\right.\tag{ADMM4}
\end{equation}
As one may observe, (\ref{inexact ADMM1}) is simply the classical ADMM applied to (\ref{equ:reduced approx discretized matrix-vector form}). The remaining three ADMM-type algorithms are proposed based on the structure of (\ref{equ:reduced approx discretized matrix-vector form}). Let us now analyze and compare the advantages and disadvantages of the four algorithms.
Firstly, we focus on the $z$-subproblem in each algorithm. Since both the identity matrix $I$ and the lumped mass matrix $W_h$ are diagonal, it is clear that the $z$-subproblems in (\ref{inexact ADMM1}), (\ref{inexact ADMM2}) and (\ref{inexact ADMM4}) all have closed-form solutions, whereas the $z$-subproblem in (\ref{inexact ADMM3}) does not.
Specifically, for the $z$-subproblem in (\ref{inexact ADMM1}), the closed-form solution is given by:
\begin{equation}\label{equ:closed form solution for ADMM1}
z^{k+1}={\rm\Pi}_{U_{ad}}\left((\frac{\alpha}{2}W_h+\sigma I)^{-1}W_h{\rm soft}(W_h^{-1}(\sigma u^{k+1}+\lambda^k),\beta)\right).
\end{equation}
Similarly, for the $z$-subproblems in (\ref{inexact ADMM2}) and (\ref{inexact ADMM4}), the closed-form solution is given by:
\begin{equation}\label{equ:closed form solution for ADMM2 and ADMM4}
z^{k+1}={\rm\Pi}_{U_{ad}}\left(\frac{1}{\sigma+0.5\alpha}{\rm soft}\left(\sigma u^{k+1}+W_h^{-1}M_h\lambda^k,~\beta\right)\right).
\end{equation}
Fortunately, the expression (\ref{equ:closed form solution for ADMM2 and ADMM4}) is similar to (\ref{z-closed form solution}). As we have mentioned, from the viewpoint of both the numerical implementation and the convergence analysis of the algorithm, establishing such a parallel relation is important.
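In code, (\ref{equ:closed form solution for ADMM2 and ADMM4}) amounts to a componentwise soft-thresholding followed by a projection onto $[a,b]$. A minimal sketch (the argument \texttt{wl} stands for a precomputed vector $W_h^{-1}M_h\lambda^k$; the function names are illustrative):

```python
def soft(v, beta):
    """Componentwise soft-thresholding: sign(v_i) * max(|v_i| - beta, 0)."""
    return [(1.0 if x >= 0 else -1.0) * max(abs(x) - beta, 0.0) for x in v]

def z_update(u_new, wl, sigma, alpha, beta, a, b):
    """z^{k+1} = Proj_[a,b]( soft(sigma*u^{k+1} + W^{-1} M lambda^k, beta)
    / (sigma + alpha/2) ), applied componentwise."""
    s = soft([sigma * ui + wi for ui, wi in zip(u_new, wl)], beta)
    return [min(max(si / (sigma + 0.5 * alpha), a), b) for si in s]
```

Components whose shrunken value leaves $[a,b]$ are clipped to the nearest bound, and components with $|\sigma u_i^{k+1}+(W_h^{-1}M_h\lambda^k)_i|\leq\beta$ are set exactly to zero.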
Next, let us analyze the structure of the $u$-subproblem in each algorithm. For (\ref{inexact ADMM1}), the $u$-subproblem at the $k$-th iteration is equivalent to solving the following linear system:
\begin{equation}\label{eqn:saddle point1}
\left[
\begin{array}{ccc}
M_h & \quad0 & \quad K_h \\
0 & \quad\frac{\alpha}{2}M_h+\sigma I & \quad-M_h \\
K_h & \quad-M_h & \quad0 \\
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1} \\
p^{k+1} \\
\end{array}
\right]=\left[
\begin{array}{c}
M_hy_d \\
\sigma z^k-\lambda^k \\
M_hy_c \\
\end{array}
\right].
\end{equation}
Similarly, the $u$-subproblem in (\ref{inexact ADMM2}) can be converted into the following linear system:
\begin{equation}\label{eqn:saddle point2}
\left[
\begin{array}{ccc}
M_h & \quad0 & \quad K_h \\
0 & \quad\frac{\alpha}{2}M_h+\sigma W_h & \quad-M_h \\
K_h & \quad-M_h & \quad0 \\
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1} \\
p^{k+1} \\
\end{array}
\right]=\left[
\begin{array}{c}
M_hy_d \\
\sigma W_h(z^k-\lambda^k) \\
M_hy_c \\
\end{array}
\right].
\end{equation}
However, the $u$-subproblem in both (\ref{inexact ADMM3}) and (\ref{inexact ADMM4}) can be rewritten as:
\begin{equation}\label{eqn:saddle point3}
\left[
\begin{array}{ccc}
M_h & \quad0 & \quad K_h \\
0 & \quad(0.5\alpha+\sigma) M_h & \quad-M_h \\
K_h & \quad-M_h & \quad0 \\
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1} \\
p^{k+1} \\
\end{array}
\right]=\left[
\begin{array}{c}
M_hy_d \\
M_h(\sigma z^k-\lambda^k) \\
M_hy_c \\
\end{array}
\right].
\end{equation}
In (\ref{eqn:saddle point3}), since $p^{k+1}=(0.5\alpha+\sigma)u^{k+1}-\sigma z^k+\lambda^k$, it is obvious that (\ref{eqn:saddle point3}) can be reduced to the following system by eliminating the variable $p$ without any computational cost:
\begin{equation}\label{eqn:saddle point4}
\left[
\begin{array}{cc}
\frac{1}{0.5\alpha+\sigma}M_h & K_h \\
-K_h & M_h
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1}
\end{array}
\right]=\left[
\begin{array}{c}
\frac{1}{0.5\alpha+\sigma}(K_h(\sigma z^k-\lambda^k)+M_hy_d)\\
-M_hy_c
\end{array}
\right],
\end{equation}
while the reduced forms of (\ref{eqn:saddle point1}) and (\ref{eqn:saddle point2}) both involve the inversion of $M_h$.
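At small scale, (\ref{eqn:saddle point4}) can simply be solved directly; the point of the reduction is that no inverse of $M_h$ appears in the assembled blocks. A plain Gaussian-elimination sketch, standing in for the Krylov solvers one would use at scale:

```python
def solve_dense(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A small-scale stand-in; for fine meshes an iterative (Krylov)
    solver would be used instead."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x
```

For the scalar model $M_h=K_h=1$, $\alpha=2$, $\sigma=1$, $z^k=\lambda^k=0$, $y_d=1$, $y_c=0$, the blocks of (\ref{eqn:saddle point4}) give the system with matrix rows $(1/2,\,1)$ and $(-1,\,1)$ and right-hand side $(1/2,\,0)$, whose solution is $y=u=1/3$.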
For the reasons mentioned above, we prefer to use (\ref{inexact ADMM4}), which we call the heterogeneous ADMM (hADMM). However, in general it is expensive and unnecessary to compute the solution of the saddle point system (\ref{eqn:saddle point4}) exactly, even when this is feasible, especially in the early stage of the whole process. Based on the structure of (\ref{eqn:saddle point4}), it is natural to use iterative methods such as Krylov subspace methods. Hence, taking the inexactness of the solution of the $u$-subproblem into account, we propose a more practical inexact heterogeneous ADMM (ihADMM) algorithm.
Due to the inexactness of the proposed algorithm, we first introduce an error tolerance. Throughout this paper, let $\{\epsilon_k\}$ be a summable sequence of nonnegative numbers, and define
\begin{equation}\label{error sequence}
C_1:=\sum\limits^{\infty}_{k=0}\epsilon_k<\infty, \quad C_2:=\sum\limits^{\infty}_{k=0}\epsilon_k^2<\infty.
\end{equation}
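As a concrete example, the (assumed, not prescribed by this paper) choice $\epsilon_k=\epsilon_0/(k+1)^2$ is summable, and the resulting constants can be approximated by partial sums:

```python
def tolerance_constants(eps0=1.0, n_terms=100000):
    """Partial sums approximating C1 = sum_k eps_k and C2 = sum_k eps_k^2
    for the summable choice eps_k = eps0 / (k+1)^2."""
    eps = [eps0 / (k + 1) ** 2 for k in range(n_terms)]
    return sum(eps), sum(e * e for e in eps)
```

Here $C_1\rightarrow\pi^2/6\approx1.645$ and $C_2\rightarrow\pi^4/90\approx1.082$ as the number of terms grows.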
The details of our ihADMM algorithm for solving (\ref{equ:approx discretized matrix-vector form}) are given in Algorithm \ref{algo4:inexact heterogeneous ADMM for problem RHP}.
\begin{algorithm}[H]
\caption{Inexact heterogeneous ADMM algorithm for (\ref{equ:approx discretized matrix-vector form})}\label{algo4:inexact heterogeneous ADMM for problem RHP}
\textbf{Input}: {$(z^0, u^0, \lambda^0)\in {\rm dom} (\delta_{[a,b]}(\cdot))\times \mathbb{R}^n \times \mathbb{R}^n $ and parameters $\sigma>0$, $\tau>0$.
Set $k=1$.}\\
\textbf{Output}: {$ u^k, z^{k}, \lambda^k$}
\begin{description}
\item[Step 1] Find an (inexact) minimizer
\begin{eqnarray*}
u^{k+1}&=&\arg\min f(u)+(M_h\lambda^k,u-z^k)
+\frac{\sigma}{2}\|u-z^k\|_{M_h}^{2}-\langle\delta^k, u\rangle,
\end{eqnarray*}
where the error vector ${\delta}^k$ satisfies $\|{\delta}^k\|_{2} \leq {\epsilon_k}$.
\item[Step 2] Compute $z^{k+1}$ as follows:
\begin{eqnarray*}
z^{k+1}&=&\arg\min g(z)+(M_h\lambda^k,u^{k+1}-z)
+\frac{\sigma}{2}\|u^{k+1}-z\|_{W_h}^{2}
\end{eqnarray*}
\item[Step 3] Compute
\begin{eqnarray*}
\lambda^{k+1} &=& \lambda^k+\tau\sigma(u^{k+1}-z^{k+1}).
\end{eqnarray*}
\item[Step 4] If a termination criterion is not met, set $k:=k+1$ and go to Step 1.
\end{description}
\end{algorithm}
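To make Steps 1--4 concrete, the following toy sketch runs the ihADMM iteration on the scalar surrogate $K_h=M_h=W_h=1$ (an assumption made purely for illustration, not the finite element problem itself); the $u$-step is solved in closed form and then perturbed by a summable error $\delta^k$ to mimic the inexactness:

```python
def ihadmm_scalar(yd, yc, alpha, beta, a, b, sigma, tau, iters=5000):
    """Toy ihADMM for K = M = W = 1:
      f(u) = 0.5*(u + yc - yd)^2 + (alpha/4)*u^2,
      g(z) = (alpha/4)*z^2 + beta*|z| + indicator_[a,b](z)."""
    u = z = lam = 0.0
    for k in range(iters):
        # Step 1: inexact u-step = exact minimizer plus a summable error delta_k
        delta = 1e-3 / (k + 1) ** 2
        u = ((yd - yc) + sigma * z - lam + delta) / (1.0 + 0.5 * alpha + sigma)
        # Step 2: closed-form z-step (soft-thresholding + projection onto [a, b])
        v = sigma * u + lam
        s = (1.0 if v >= 0 else -1.0) * max(abs(v) - beta, 0.0)
        z = min(max(s / (sigma + 0.5 * alpha), a), b)
        # Step 3: multiplier update
        lam += tau * sigma * (u - z)
    return u, z, lam
```

With $y_d=1$, $y_c=0$, $\alpha=0.5$, $\beta=0.1$, $[a,b]=[-2,2]$ and the theorem's admissible parameters $\sigma=\alpha/4$, $\tau=1$, the iterates approach the minimizer $u^*=z^*=0.6$ of $\frac12(u-1)^2+\frac{\alpha}{2}u^2+\beta|u|$ on $[-2,2]$.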
\subsection{Convergence results of ihADMM}
For the ihADMM (Algorithm \ref{algo4:inexact heterogeneous ADMM for problem RHP}), in this section we establish global convergence and iteration complexity results, in the non-ergodic sense, for the generated sequence.
Before giving the proof of Theorem \ref{discrete convergence results}, we first provide a lemma, introduced in \cite{SunToh1}, which is useful for analyzing the non-ergodic iteration complexity of the ihADMM.
\begin{lemma}\label{complexity lemma}
Let $\{a_i\}\subset \mathbb{R}$ be a sequence satisfying
\begin{eqnarray*}
&&a_i\geq0 \ \text{for all}\ i\geq0\quad \text{and}\quad \sum\limits_{i=0}^{\infty}a_i=\bar a<\infty.
\end{eqnarray*}
Then we have $\min\limits^{}_{i=1,...,k}\{a_i\} \leq \frac{\bar a}{k}$ and $\lim\limits^{}_{k\rightarrow\infty} \{k\cdot\min\limits^{}_{i=1,...,k}\{a_i\}\} =0$.
\end{lemma}
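The bound $\min_{1\leq i\leq k}\{a_i\}\leq\bar a/k$ can be sanity-checked numerically on truncated summable sequences; the sketch below is a toy verification, not part of the proof:

```python
def check_min_bound(a):
    """For a nonnegative sequence a with (partial) sum abar = sum(a),
    verify min_{1<=i<=k} a_i <= abar / k for every prefix length k."""
    abar = sum(a)
    running_min = float("inf")
    for k, ai in enumerate(a, start=1):
        running_min = min(running_min, ai)
        if running_min > abar / k:
            return False
    return True
```

Because $k\cdot\min_{1\leq i\leq k}\{a_i\}\leq\sum_{i=1}^{k}a_i\leq\bar a$, the check holds along any prefix of a nonnegative summable sequence.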
For the convenience of the iteration complexity analysis below, we define the function $R_h: (u,z,\lambda)\rightarrow [0,\infty)$ by:
\begin{equation}\label{discrete KKT function}
R_h(u,z,\lambda)=\|M_h\lambda+\nabla f(u)\|^2+{\rm dist}^2(0, -M_h\lambda+\partial g(z))+\|u-z\|^2.
\end{equation}
By the definitions of $f(u)$ and $g(z)$ in (\ref{equ:fu function}), it is obvious that $f(u)$ and $g(z)$ are both closed, proper and convex functions. Since $M_h$ and $K_h$ are symmetric positive definite matrices, the gradient operator $\nabla f$ is strongly monotone, and we have
\begin{equation}\label{subdifferential strongly monotone}
\langle\nabla f(u_1)-\nabla f(u_2), u_1-u_2\rangle=\|u_1-u_2\|^2_{\Sigma_{f}},
\end{equation}
where ${\Sigma_{f}}=\frac{\alpha}{2} M_h+M_hK_h^{-1}M_hK_h^{-1}M_h$ is symmetric positive definite. Moreover, the subdifferential operator $\partial g$ is maximal and strongly monotone, i.e.,
\begin{equation}\label{subdifferential monotone}
\langle\varphi_1-\varphi_2, z_1-z_2\rangle\geq\frac{\alpha}{2}\|z_1-z_2\|^2_{ W_h} \quad \forall\ \varphi_1\in\partial g(z_1),\ \varphi_2\in \partial g(z_2).
\end{equation}
For the subsequent convergence analysis, we denote
\begin{eqnarray}
\label{equ:exact u}\bar{u}^{k+1}&:=&\arg\min f(u)+\langle M_h\lambda^k,u-z^k\rangle
+\frac{\sigma}{2}\|u-z^k\|_{M_h}^{2},\\
\label{equ:exact z} \bar z^{k+1}&:=&{\rm\Pi}_{U_{ad}}\left(\frac{1}{\sigma+0.5\alpha}{\rm soft}\left(\sigma \bar u^{k+1}+W_h^{-1}M_h\lambda^k{\text{,}}~\beta\right)\right),
\end{eqnarray}
which are the exact solutions at the $(k+1)$-th iteration of Algorithm \ref{algo4:inexact heterogeneous ADMM for problem RHP}. The following results quantify the gap between $(u^{k+1}, z^{k+1})$ and $(\bar u^{k+1}, \bar z^{k+1})$ in terms of the given error tolerance $\|{\delta}^k\|_{2} \leq {\epsilon_k}$.
\begin{lemma}\label{gap between exact and inexact solution}
Let $\{(u^{k+1}, z^{k+1})\}$ be the sequence generated by Algorithm {\rm\ref{algo4:inexact heterogeneous ADMM for problem RHP}}, and let $\{\bar u^{k+1}\}$ and $\{\bar z^{k+1}\}$ be defined as in {\rm(\ref{equ:exact u})} and {\rm(\ref{equ:exact z})}. Then for any $k\geq0$, we have
\begin{eqnarray}
\label{equ:error u}\|u^{k+1}-\bar u^{k+1}\| &=&\|(\sigma M_h+\Sigma_f)^{-1}\delta^k\|\leq \rho\epsilon_k, \\
\label{equ:error z}\|z^{k+1}-\bar z^{k+1}\| &\leq&\frac{\sigma}{\sigma+0.5\alpha}\|u^{k+1}-\bar u^{k+1}\|\leq \frac{\rho\sigma}{\sigma+0.5\alpha}\epsilon_k,
\end{eqnarray}
where $\rho:=\|(\sigma M_h+\Sigma_f)^{-1}\|$.
\end{lemma}
Next, for $k\geq0$, we define
\begin{eqnarray*}
&r^k=u^k-z^k,\quad \bar r^k=\bar u^k-\bar z^k& \\
&\tilde{\lambda }^{k+1}=\lambda^k+\sigma r^{k+1},\quad \bar{\lambda }^{k+1}=\lambda^k+\tau\sigma \bar r^{k+1}, \quad \hat{\lambda }^{k+1}=\lambda^k+\sigma \bar r^{k+1},&
\end{eqnarray*}
and give two inequalities which are essential for establishing both the global convergence and the iteration complexity of our ihADMM.
\begin{proposition}\label{descent proposition}
Let $\{(u^{k}, z^{k}, \lambda^{k})\}$ be the sequence generated by Algorithm {\rm\ref{algo4:inexact heterogeneous ADMM for problem RHP}} and $(u^{*}, z^{*}, \lambda^{*})$ be the KKT point of problem {\rm(\ref{equ:reduced approx discretized matrix-vector form})}. Then for $k\geq0$ we have
\begin{equation}\label{inequlaities property1}
\begin{aligned}
&\langle\delta^k,u^{k+1}-u^*\rangle +\frac{1}{2\tau\sigma}\|\lambda^k-\lambda^*\|^2_{M_h}+\frac{\sigma}{2}\|z^k-z^*\|^2_{M_h}
-\frac{1}{2\tau\sigma}\|\lambda^{k+1}-\lambda^*\|^2_{M_h}-\frac{\sigma}{2}\|z^{k+1}-z^*\|^2_{M_h}\\
&\geq\|u^{k+1}-u^*\|^2_{T}
+\frac{\sigma}{2}\|z^{k+1}-z^*\|^2_{2W_h-M_h} +\frac{\sigma}{2}\|r^{k+1}\|^2_{W_h-\tau M_h}
+\frac{\sigma}{2}\|u^{k+1}-z^k\|^2_{M_h},
\end{aligned}
\end{equation}
where $T:=\Sigma_f-\frac{\sigma}{2}(W_h-M_h)$.
\end{proposition}
\begin{proposition}\label{descent proposition2}
Let $\{(u^{k}, z^{k}, \lambda^{k})\}$ be the sequence generated by Algorithm {\rm\ref{algo4:inexact heterogeneous ADMM for problem RHP}}, $(u^{*}, z^{*}, \lambda^{*})$ be the KKT point of the problem {\rm(\ref{equ:reduced approx discretized matrix-vector form})} and $\{\bar u^k\}$ and $\{\bar z^k\}$ be two sequences defined in {\rm(\ref{equ:exact u})} and {\rm(\ref{equ:exact z})}, respectively. Then for $k\geq0$ we have
\begin{equation}\label{inequlaities property}
\begin{aligned}
&\frac{1}{2\tau\sigma}\|\lambda^k-\lambda^*\|^2_{M_h}+\frac{\sigma}{2}\|z^k-z^*\|^2_{M_h}
-\frac{1}{2\tau\sigma}\|\bar \lambda^{k+1}-\lambda^*\|^2_{M_h}-\frac{\sigma}{2}\|\bar z^{k+1}-z^*\|^2_{M_h}\\
\geq\ &\|\bar u^{k+1}-u^*\|^2_{T}
+\frac{\sigma}{2}\|\bar z^{k+1}-z^*\|^2_{2W_h-M_h} +\frac{\sigma}{2}\|\bar r^{k+1}\|^2_{W_h-\tau M_h}+\frac{\sigma}{2}\|\bar u^{k+1}-z^k\|^2_{M_h},
\end{aligned}
\end{equation}
where $T:=\Sigma_f-\frac{\sigma}{2}(W_h-M_h)$.
\end{proposition}
Based on the preceding results, we have the following convergence theorem.
\begin{theorem}\label{discrete convergence results}
Let $(y^*,u^*,z^*,p^*,\lambda^*)$ be the KKT point of {\rm(\ref{equ:approx discretized matrix-vector form})}, and let the sequence $\{(u^{k},z^{k},\lambda^k)\}$ be generated by Algorithm {\rm\ref{algo4:inexact heterogeneous ADMM for problem RHP}} with the associated state $\{y^k\}$ and adjoint state $\{p^k\}$. Then for any $\tau\in (0,1]$ and $\sigma\in (0, \frac{1}{4}\alpha]$, we have
\begin{eqnarray}
\label{discrete iteration squence convergence1}&&\lim\limits_{k\rightarrow\infty}^{}\{\|u^{k}-u^*\|+\|z^{k}-z^*\|+\|\lambda^{k}-\lambda^*\| \}= 0,\\
\label{discrete iteration squence convergence2}&& \lim\limits_{k\rightarrow\infty}^{}\{\|y^{k}-y^*\|+\|p^{k}-p^*\| \}= 0.
\end{eqnarray}
Moreover, there exists a constant $C$ depending only on the initial point ${(u^0,z^0,\lambda^0)}$ and the optimal solution ${(u^*,z^*,\lambda^*)}$ such that for $k\geq1$,
\begin{eqnarray}
\label{discrete iteration complexity1}&&\min\limits^{}_{1\leq i\leq k} \{R_h(u^i,z^i,\lambda^i)\}\leq\frac{C}{k}, \quad
\lim\limits^{}_{k\rightarrow\infty}\left(k\times\min\limits^{}_{1\leq i\leq k} \{R_h(u^i,z^i,\lambda^i)\}\right) =0.
\end{eqnarray}
where $R_h(\cdot)$ is defined as in {\rm(\ref{discrete KKT function})}.
\begin{proof}
It is easy to see that $(u^*,z^*)$ is the unique optimal solution of the discrete problem (\ref{equ:reduced approx discretized matrix-vector form}) if and only if there exists a Lagrange multiplier $\lambda^*$ such that the following Karush-Kuhn-Tucker (KKT) conditions hold:
\begin{subequations}
\begin{eqnarray}
\label{equ: exact variational inequalities1}&-M_h\lambda^*=\nabla f(u^*),\\
\label{equ: exact variational inequalities2}&M_h\lambda^*\in \partial g(z^*),\\
\label{equ: exact variational inequalities3}& u^*=z^*.
\end{eqnarray}
\end{subequations}
In the inexact heterogeneous ADMM iteration scheme, the optimality conditions for $(u^{k+1}, z^{k+1})$ are
\begin{subequations}
\begin{eqnarray}
\label{equ: inexact variational inequalities1}&\delta^k-(M_h\lambda^k+\sigma M_h(u^{k+1}-z^k))=\nabla f(u^{k+1}),\\
\label{equ: inexact variational inequalities2}&M_h\lambda^k+\sigma W_h(u^{k+1}-z^{k+1})\in \partial g(z^{k+1}).
\end{eqnarray}
\end{subequations}
Next, let us first prove the \textbf{global convergence of the iteration sequences}, i.e., establish (\ref{discrete iteration squence convergence1}) and (\ref{discrete iteration squence convergence2}).
The first step is to show that $\{(u^k, z^k, \lambda^k)\}$ is bounded. We define the sequences $\theta^k$ and $\bar\theta^k$ by:
\begin{equation}\label{iteration sequence}
\begin{aligned}
\theta^k &= \left(\frac{1}{\sqrt{2\tau\sigma}}M_h^{\frac{1}{2}}(\lambda^k-\lambda^*), \sqrt{\frac{\sigma}{2}}M_h^{\frac{1}{2}}(z^k-z^*)\right), \quad
\bar\theta^k = \left(\frac{1}{\sqrt{2\tau\sigma}}M_h^{\frac{1}{2}}(\bar\lambda^k-\lambda^*), \sqrt{\frac{\sigma}{2}}M_h^{\frac{1}{2}}(\bar z^k-z^*)\right).
\end{aligned}
\end{equation}
According to Proposition \ref{eqn:martix properties}, for any $\tau\in (0,1]$ and $\sigma\in (0, \frac{1}{4}\alpha]$, we have
$\Sigma_f-\frac{\sigma}{2}(W_h-M_h) \succ 0$ and
$W_h-\tau M_h \succ 0$.
Then, by Proposition \ref{descent proposition2}, we get $\|\bar\theta^{k+1}\|^2\leq\|\theta^k\|^2$. As a result, we have:
\begin{equation}\label{descent theta}
\begin{aligned}
\|\theta^{k+1}\| &\leq \|\bar\theta^{k+1}\|+\|\bar\theta^{k+1}-\theta^{k+1}\|
\leq \|\theta^{k}\|+\|\bar\theta^{k+1}-\theta^{k+1}\| .
\end{aligned}
\end{equation}
Employing Lemma \ref{gap between exact and inexact solution}, we get
\begin{equation}\label{descent theta and bartheta}
\begin{aligned}
\|\bar\theta^{k+1}-\theta^{k+1}\|^2 &= \frac{1}{2\tau\sigma}\|\bar\lambda^{k+1}-\lambda^{k+1}\|^2_{M_h}+\frac{\sigma}{2}\|\bar z^{k+1}-z^{k+1}\|^2_{M_h} \\
&\leq(2\tau+1/2)\sigma\|M_h\|\rho^2\epsilon_k^2\leq5/2\sigma\|M_h\|\rho^2\epsilon_k^2,
\end{aligned}
\end{equation}
which implies $\|\bar\theta^{k+1}-\theta^{k+1}\|\leq\sqrt{5/2\sigma\|M_h\|}\rho\epsilon_k$. Hence, for any $k\geq0$, we have
\begin{equation}\label{boundedness theta and bartheta}
\begin{aligned}
\|\theta^{k+1}\| &\leq \|\theta^k\|+\sqrt{5/2\sigma\|M_h\|}\rho\epsilon_k
\leq \|\theta^0\|+\sqrt{5/2\sigma\|M_h\|}\rho\sum\limits^{\infty}_{k=0}\epsilon_k=\|\theta^0\|+\sqrt{5/2\sigma\|M_h\|}\rho C_1\equiv\bar\rho.
\end{aligned}
\end{equation}
From $\|\bar\theta^{k+1}\|\leq\|\theta^{k}\|$, we also have $\|\bar\theta^{k+1}\|\leq\bar\rho$ for any $k\geq0$. Therefore, the sequences $\{\theta^k\}$ and $\{\bar \theta^k\}$ are bounded. From the definition of $\{\theta^k\}$ and the fact that $M_h\succ0$, we see that the sequences $\{\lambda^k\}$ and $\{z^k\}$ are bounded. Moreover, from the update formula for $\lambda^k$, we know that $\{u^k\}$ is also bounded. Thus, the bounded sequence $\{(u^{k}, z^{k}, \lambda^k)\}$ has a subsequence
$\{(u^{k_i}, z^{k_i}, \lambda^{k_i})\}$ which converges to an accumulation point $(\bar u, \bar z, \bar\lambda)$. Next we show that $(\bar u, \bar z, \bar\lambda)$ is a KKT point and equals $(u^*, z^*, \lambda^*)$.
Again employing Proposition \ref{descent proposition2}, we can derive
\begin{equation}\label{convergence inequlity}
\begin{aligned}
&\sum\limits^{\infty}_{k=0}\left(\|\bar u^{k+1}-u^*\|^2_{T}
+\frac{\sigma}{2}\|\bar z^{k+1}-z^*\|^2_{2W_h-M_h} +\frac{\sigma}{2}\|\bar r^{k+1}\|^2_{W_h-\tau M_h}+\frac{\sigma}{2}\|\bar u^{k+1}-z^k\|^2_{M_h}\right)\\
\leq &\sum\limits^{\infty}_{k=0}(\|\theta^k\|^2-\|\theta^{k+1}\|^2+\|\theta^{k+1}\|^2-\|\bar \theta^{k+1}\|^2)
\leq \|\theta^0\|^2+2\bar\rho\sqrt{5/2\sigma\|M_h\|}\rho C_1<\infty.
\end{aligned}
\end{equation}
Note that $T\succ0$, $2W_h-M_h\succ0$, $W_h-\tau M_h\succ0$ and $M_h\succ0$; then we have
\begin{equation}\label{limit convergence2}
\begin{aligned}
\lim\limits^{}_{k\rightarrow\infty}\|\bar u^{k+1}-u^*\|=0,\quad \lim\limits^{}_{k\rightarrow\infty}\|\bar z^{k+1}-z^*\|=0,\quad
\lim\limits^{}_{k\rightarrow\infty} \|\bar r^{k+1}\|=0,\quad
\lim\limits^{}_{k\rightarrow\infty}\|\bar u^{k+1}-z^k\|=0.
\end{aligned}
\end{equation}
From Lemma \ref{gap between exact and inexact solution}, we get
\begin{equation}\label{limit convergence3}
\begin{aligned}
&\|u^{k+1}-u^*\|\leq\|\bar u^{k+1}-u^*\|+\|u^{k+1}-\bar u^{k+1}\|\leq\|\bar u^{k+1}-u^*\|+\rho\epsilon_k,\\
&\|z^{k+1}-z^*\|\leq\|\bar z^{k+1}-z^*\|+\|z^{k+1}-\bar z^{k+1}\|\leq\|\bar z^{k+1}-z^*\|+\rho\epsilon_k.
\end{aligned}
\end{equation}
From the fact that $\lim\limits^{}_{k\rightarrow\infty} \epsilon_k=0$ and (\ref{limit convergence2}), by taking the limit of both sides of (\ref{limit convergence3}), we have
\begin{equation}\label{limit convergence4}
\begin{aligned}
\lim\limits^{}_{k\rightarrow\infty}\| u^{k+1}-u^*\|=0,\quad \lim\limits^{}_{k\rightarrow\infty}\| z^{k+1}-z^*\|=0,\quad
\lim\limits^{}_{k\rightarrow\infty} \| r^{k+1}\|=0,\quad
\lim\limits^{}_{k\rightarrow\infty}\| u^{k+1}-z^k\|=0.
\end{aligned}
\end{equation}
Now taking limits as $k_i\rightarrow\infty$ on both sides of (\ref{equ: inexact variational inequalities1}), we have
\begin{equation*}
\lim\limits^{}_{k_i\rightarrow\infty}\left(\delta^{k_i}-(M_h\lambda^{k_i}+\sigma M_h(u^{k_i+1}-z^{k_i}))\right)=\lim\limits^{}_{k_i\rightarrow\infty}\nabla f(u^{k_i+1}),
\end{equation*}
which results in $-M_h\bar\lambda=\nabla f(u^*)$. Then from (\ref{equ: exact variational inequalities1}), we know $\bar\lambda=\lambda^*$. Finally, to complete the proof, we need to show that $\lambda^*$ is the limit of the sequence $\{\lambda^k\}$. From
(\ref{boundedness theta and bartheta}), we have for any $k>k_i$,
$\|\theta^{k+1}\|\leq\|\theta^{k_i}\|+\sqrt{5/2\sigma\|M_h\|}\rho\sum\limits^{k}_{j={k_i}}\epsilon_j$.
Since $\lim\limits^{}_{k_i\rightarrow\infty}\|\theta^{k_i}\|=0$ and $\sum\limits_{k=0}^{\infty}\epsilon_k<\infty$, we have that $\lim\limits^{}_{k\rightarrow\infty}\|\theta^{k}\|=0$, which implies $\lim\limits^{}_{k\rightarrow\infty}\| \lambda^{k+1}-\lambda^*\|=0$.
Hence, we have proved the convergence of the sequence $\{(u^{k+1}, z^{k+1}, \lambda^{k+1})\}$, which completes the proof of (\ref{discrete iteration squence convergence1}). The proof of (\ref{discrete iteration squence convergence2}) follows easily from the definition of the sequence $\{(y^k, p^k)\}$, so we omit it.
Finally, we establish (\ref{discrete iteration complexity1}), i.e., \textbf{the iteration complexity results in the non-ergodic sense for the sequence generated by the ihADMM}.
Firstly, by the optimality conditions (\ref{equ: inexact variational inequalities1}) and (\ref{equ: inexact variational inequalities2}) for $(u^{k+1}, z^{k+1})$, we have
\begin{subequations}
\begin{eqnarray}
\label{equ:discrete inexact variational inequalities3}&\delta^k+(\tau-1)\sigma M_h r^{k+1}-\sigma M_h(z^{k+1}-z^k)=M_h\lambda^{k+1}+\nabla f(u^{k+1}),\\
\label{equ:discrete inexact variational inequalities4}&\sigma (W_h-\tau M_h)r^{k+1}\in -M_h\lambda^{k+1}+\partial g(z^{k+1}).
\end{eqnarray}
\end{subequations}
By the definition of $R_h$ and denoting $w^{k+1}:=(u^{k+1},z^{k+1},\lambda^{k+1})$, we derive
\begin{equation}\label{discrete KKT function for iteration}
\begin{aligned}
R_h(w^{k+1})&=\|M_h\lambda^{k+1}+\nabla f(u^{k+1})\|^2+{\rm dist}^2(0, -M_h\lambda^{k+1}+\partial g(z^{k+1}))+\|u^{k+1}-z^{k+1}\|^2\\
&\leq 2\|\delta^k\|^2+\eta\|r^{k+1}\|^2+4\sigma^2 \|M_h\|\|u^{k+1}-z^{k}\|_{M_h}^2,
\end{aligned}
\end{equation}
where $\eta:=2(\tau-1)^2\sigma^2 \|M_h\|^2+2\sigma^2 \|M_h\|^2+\sigma^2\| W_h-\tau M_h\|^2+1.$
In order to get an upper bound for $R_h(w^{k+1})$, we will use (\ref{inequlaities property1}) in Proposition
\ref{descent proposition}. First, by the definition of $\theta^{k}$ and (\ref{boundedness theta and bartheta}), for any $k\geq0$ we easily have
\begin{eqnarray*}
\|\lambda^k-\lambda^*\| \leq\bar\rho\sqrt{2\tau\sigma\|M_h^{-1}\|} , \quad
\|z^k-z^*\| \leq\bar\rho\sqrt{\frac{2\|M_h^{-1}\|}{\sigma}}.
\end{eqnarray*}
Next, we give an upper bound for $\langle\delta^k, u^{k+1}-u^*\rangle$:
\begin{equation}\label{discrete inner product estimates}
\begin{aligned}
\langle\delta^k, u^{k+1}-u^*\rangle&\leq\|\delta^k\|(\|u^{k+1}-z^{k+1}\|+\|z^{k+1}-z^*\|)
\leq\left(1+\frac{2}{\sqrt{\tau}}\right)\bar\rho\sqrt{\frac{2\|M_h^{-1}\|}{\sigma}}\|\delta^k\|\equiv \bar\eta\|\delta^k\|.
\end{aligned}
\end{equation}
Then by (\ref{inequlaities property1}) in Proposition \ref{descent proposition}, we have
\begin{equation}\label{summable1}
\begin{aligned}
\sum\limits^{\infty}_{k=0}\left(\frac{\sigma}{2}\|r^{k+1}\|^2_{W_h-\tau M_h}
+\frac{\sigma}{2}\|u^{k+1}-z^k\|^2_{M_h}\right)&\leq \sum\limits^{\infty}_{k=0}(\|\theta^{k}\|^2-\|\theta^{k+1}\|^2)
+\sum\limits^{\infty}_{k=0}\langle\delta^k,u^{k+1}-u^*\rangle \\
&\leq \|\theta^0\|^2+\bar\eta\sum\limits^{\infty}_{k=0}\|\delta^k\|
\leq\|\theta^0\|^2+\bar\eta\sum\limits^{\infty}_{k=0}\epsilon_k=\|\theta^0\|^2+\bar\eta C_1.
\end{aligned}
\end{equation}
Hence,
\begin{equation}\label{summable2}
\begin{aligned}
\sum\limits^{\infty}_{k=0}\|r^{k+1}\|^2\leq \frac{2(\theta^0+\bar\eta C_1)}{\sigma\|(W_h-\tau M_h)^{-1}\|},\quad
\sum\limits^{\infty}_{k=0}\|u^{k+1}-z^k\|^2_{M_h}\leq \frac{2(\theta^0+\bar\eta C_1)}{\sigma}.
\end{aligned}
\end{equation}
By substituting (\ref{summable2}) into (\ref{discrete KKT function for iteration}), we obtain
\begin{equation}\label{summable3}
\begin{aligned}
\sum\limits^{\infty}_{k=0}R_h(w^{k+1})&\leq2\sum\limits^{\infty}_{k=0}\|\delta^k\|^2
+\eta\sum\limits^{\infty}_{k=0}\|r^{k+1}\|^2+2\sigma^2 \|M_h\|\sum\limits^{\infty}_{k=0}\|u^{k+1}-z^{k}\|_{M_h}^2\\
&\leq C:=2C_2+\eta\frac{2(\theta^0+\bar\eta C_1)}{\sigma\|(W_h-\tau M_h)^{-1}\|}+2\sigma^2 \|M_h\|\frac{2(\theta^0+\bar\eta C_1)}{\sigma}.
\end{aligned}
\end{equation}
Thus, by Lemma \ref{complexity lemma}, we know that (\ref{discrete iteration complexity1}) holds. Combining this with the global convergence results obtained above completes the proof of Theorem \ref{discrete convergence results}.
\end{proof}
\end{theorem}
\subsection{Numerical computation of the $u$-subproblem of Algorithm \ref{algo4:inexact heterogeneous ADMM for problem RHP}}\label{subsection linear sysytem}
\subsubsection{Error analysis of the linear system {(\ref{eqn:saddle point4})}}
The linear system (\ref{eqn:saddle point4}) is a special case of the generalized saddle-point problem, so Krylov-based methods can be employed to solve it inexactly. Let $(r^k_1, r^k_2)$ be the residual error vector, that is,
\begin{equation}\label{eqn:saddle point4 with error}
\left[
\begin{array}{cc}
\frac{1}{0.5\alpha+\sigma}M_h & K_h \\
-K_h & M_h
\end{array}
\right]\left[
\begin{array}{c}
y^{k+1} \\
u^{k+1}
\end{array}
\right]=\left[
\begin{array}{c}
\frac{1}{0.5\alpha+\sigma}(K_h(\sigma z^k-\lambda^k)+M_hy_d)+r^k_1\\
-M_hy_c+r^k_2
\end{array}
\right],
\end{equation}
and $\delta^k=(0.5\alpha+\sigma)M_hK_h^{-1}r_1^k+M_hK_h^{-1}M_hK_h^{-1}r_2^k$, thus in the numerical implementation we require
\begin{equation}\label{error estimates1}
\|r^k_1\|_{2}+\|r^k_2\|_{2}\leq\frac{\epsilon_k}{\sqrt{2}\|M_hK_h^{-1}\|_2\max\{\|M_hK_h^{-1}\|_{2},0.5\alpha+\sigma\}}
\end{equation}
to guarantee that the error vector satisfies $\|{\delta}^k\|_{2} \leq {\epsilon_k}$.
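As a concrete illustration, the bound (\ref{error estimates1}) can be checked numerically. The sketch below uses small random SPD stand-ins for $M_h$ and $K_h$ (all sizes and parameter values are illustrative assumptions, not the matrices of the actual experiments) and verifies that inner residuals satisfying the stated tolerance indeed keep $\|\delta^k\|_2\leq\epsilon_k$.

```python
import numpy as np

# Toy dense SPD stand-ins for the mass matrix M_h and stiffness matrix K_h.
rng = np.random.default_rng(1)
n = 8
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)
K = Q.T @ Q + n * np.eye(n)

alpha, sigma = 0.5, 0.05
MKinv = M @ np.linalg.inv(K)
nrm = np.linalg.norm(MKinv, 2)                     # spectral norm ||M_h K_h^{-1}||_2
c = nrm * max(nrm, 0.5 * alpha + sigma)

eps_k = 1e-6
tol = eps_k / (np.sqrt(2) * c)                     # bound on ||r1|| + ||r2||

# Residuals that satisfy ||r1|| + ||r2|| <= tol ...
r1 = rng.standard_normal(n); r1 *= (0.5 * tol) / np.linalg.norm(r1)
r2 = rng.standard_normal(n); r2 *= (0.5 * tol) / np.linalg.norm(r2)
# ... give delta^k = (0.5*alpha+sigma) M K^{-1} r1 + (M K^{-1})^2 r2 with norm <= eps_k.
delta = (0.5 * alpha + sigma) * MKinv @ r1 + MKinv @ MKinv @ r2
print(np.linalg.norm(delta) <= eps_k)  # -> True
```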
\subsubsection{An efficient preconditioning technique for solving the linear systems}\label{precondition}
To solve (\ref{eqn:saddle point4}), in this paper we use the generalized minimal residual {\rm (GMRES)} method. In order to speed up its convergence, we employ the preconditioned variant of the modified Hermitian and skew-Hermitian splitting {\rm(PMHSS)} preconditioner introduced in {\rm \cite{Bai}}:
\begin{equation}\label{precondition matrix1}
\mathcal{P_{HSS}}=\frac{1}{\gamma}\left[
\begin{array}{lcc}
I & \quad\sqrt{\gamma}I \\
-\sqrt{\gamma}I & \quad\gamma I \\
\end{array}
\right]\left[
\begin{array}{lcc}
M_h+\sqrt{\gamma}K_h & 0 \\
0 & M_h+\sqrt{\gamma}K_h \\
\end{array}
\right],
\end{equation}
where $\gamma=0.5\alpha+\sigma$. Let $\mathcal{A}$ denote the coefficient matrix of linear system (\ref{eqn:saddle point4}).
In our numerical experiments, the approximation $\widehat{G}$ of the matrix $G:=M_h+\sqrt{\gamma}K_h$ is implemented by 20 steps of the Chebyshev semi-iteration when the parameter $\gamma$ is small, since in this case $G$ is dominated by the mass matrix and 20 Chebyshev steps give an appropriate approximation of the action of $G^{-1}$. For more details on the Chebyshev semi-iteration method we refer to {\rm\cite{ReDoWa,chebysevsemiiteration}}. For large values of $\gamma$, however, the stiffness matrix $K_h$ makes a significant contribution, and a fixed number of Chebyshev semi-iteration steps is no longer sufficient to approximate the action of $G^{-1}$. In this case, we approximate the action of $G^{-1}$ by two AMG V-cycles, obtained via the \textbf{amg} operator in the iFEM software package\footnote{\noindent \textrm{For more details about the iFEM software package, we refer to the website \url{http://www.math.uci.edu/~chenlong/programming.html} }}.
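To make the structure of the preconditioner concrete, the following sketch applies $\mathcal{P_{HSS}}^{-1}$ within SciPy's GMRES on a small 1D toy version of (\ref{eqn:saddle point4}). All matrices and parameter values are illustrative assumptions, and an exact LU factorization of $G=M_h+\sqrt{\gamma}K_h$ stands in for the Chebyshev semi-iteration/AMG approximations used in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, splu, LinearOperator

# Toy 1D FEM matrices on a uniform mesh (stand-ins for K_h and M_h).
n = 50
h = 1.0 / (n + 1)
e = np.ones(n)
K = sp.diags([-e[:-1], 2 * e, -e[:-1]], [-1, 0, 1], format="csr") / h   # stiffness
M = sp.diags([e[:-1], 4 * e, e[:-1]], [-1, 0, 1], format="csr") * h / 6  # mass

alpha, sigma = 0.5, 0.05
gamma = 0.5 * alpha + sigma
rg = np.sqrt(gamma)

# Coefficient matrix of the saddle-point system.
A = sp.bmat([[M / gamma, K], [-K, M]], format="csc")

# PMHSS preconditioner P = (1/gamma) * B * blkdiag(G, G) with
# B = [[I, rg*I], [-rg*I, gamma*I]] and G = M + rg*K; here det(B-block) = 2*gamma.
G_lu = splu((M + rg * K).tocsc())   # exact solve stands in for Chebyshev/AMG

def apply_Pinv(b):
    b1, b2 = b[:n], b[n:]
    # y = gamma * B^{-1} b, computed from the 2x2 block inverse of B.
    y1 = 0.5 * (gamma * b1 - rg * b2)
    y2 = 0.5 * (rg * b1 + b2)
    return np.concatenate([G_lu.solve(y1), G_lu.solve(y2)])

P_inv = LinearOperator((2 * n, 2 * n), matvec=apply_Pinv)
rhs = np.random.default_rng(0).standard_normal(2 * n)
x, info = gmres(A, rhs, M=P_inv)
print(info, np.linalg.norm(A @ x - rhs))   # info == 0 means GMRES converged
```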
\subsubsection{Terminal condition}\label{terminal condition}
Let $\epsilon$ be a given accuracy tolerance. We terminate our ihADMM method when $\eta\leq\epsilon$,
where
$\eta=\max{\{\eta_1,\eta_2,\eta_3,\eta_4,\eta_5\}}$,
in which
\begin{equation*}
\begin{aligned}
& \eta_1=\frac{\|K_hy- M_hu-M_h y_c\|}{1+\|M_hy_c\|},\quad \eta_2=\frac{\|M_h(u-z)\|}{1+\|u\|},\quad
\eta_3=\frac{\|M_h(y- y_d)+ K_hp\|}{1+\|M_hy_d\|},\\
&\eta_4=\frac{\|0.5\alpha M_hu -M_hp + M_h\lambda\|}{1+\|u\|},\quad
\eta_5=\frac{\|z-{\rm\Pi}_{[a,b]}\left(\frac{\alpha}{2}{\rm soft}(W_h^{-1}M_h\lambda, \beta)\right)\|}{1+\|u\|}.
\end{aligned}
\end{equation*}
\subsection{A two-phase strategy for discrete problems}\label{terminal condition for PDAS}
In this section, we introduce the primal-dual active set (PDAS) method as a Phase-II algorithm to solve the discretized problem.
For problem (\ref{equ:approx discretized matrix-vector form}), eliminating the artificial variable $z$, we obtain
\begin{equation}\label{approx discretized matrix-vector form for PDAS}
\left\{ \begin{aligned}
&\min \limits_{(y,u)\in\mathbb{R}^{2N_h}}^{}\frac{1}{2}\|y-y_d\|_{M_h}^{2}+\frac{\alpha}{4}\|u\|_{M_h}^{2}+\frac{\alpha}{4}\|u\|_{W_h}^{2}+\beta\|W_hu\|_1 \\
&\quad \quad {\rm{s.t.}}\qquad K_hy=M_hu+M_hy_c \\
&\qquad \qquad \quad\quad u\in [a,b]^{N_h}
\end{aligned} \right.\tag{$\mathrm{\overline{P}}_{h}$}
\end{equation}
The full numerical scheme is summarized in Algorithm \ref{algo1:discrete Primal-Dual Active Set (PDAS) method}:
\begin{algorithm}[H]
\caption{Primal-Dual Active Set (PDAS) method}
\label{algo1:discrete Primal-Dual Active Set (PDAS) method}
Initialization: Choose $y^0$, $u^0$, $p^0$ and $\mu^0$. Set $k=0$ and $c>0$.
\begin{description}
\item[Step 1] Determine the following subsets
\begin{equation*}\begin{aligned}
\mathcal{A}^{k+1}_a &= \{i: u_i^{k}+c(\mu_i^{k}+w_i\beta)-a<0\},\quad
\mathcal{A}^{k+1}_b = \{i: u^{k}_i+c(\mu^{k}_i-w_i\beta)-b>0\}, \\
\mathcal{A}^{k+1}_0 &= \{i: |u^{k}_i+c\mu^{k}_i|<cw_i\beta\}, \quad
\mathcal{I}^{k+1}_+ = \{i: cw_i\beta<u^{k}_i+c\mu^{k}_i<b+cw_i\beta\}, \\
\mathcal{I}^{k+1}_- &= \{i: a-cw_i\beta<u^{k}_i+c\mu^{k}_i<-cw_i\beta\}.
\end{aligned}
\end{equation*}
\item[Step 2] Solve the following system
\begin{equation*}
\left\{\begin{aligned}
&K_hy^{k+1} - M_hu^{k+1}-M_hy_c=0, \\
&K_hp^{k+1} + M_h(y^{k+1}-y_d)=0,\\
&\alpha T_hu^{k+1}-M_hp^{k+1}+\mu^{k+1} = 0,
\end{aligned}\right.
\end{equation*}
where $T_h=\frac{1}{2}(M_h+W_h)$, and
\begin{equation*}
u_i^{k+1}=\left\{\begin{aligned}
&a\quad {\rm on}~ \mathcal{A}^{k+1}_{a},\\
&b\quad {\rm on}~ \mathcal{A}^{k+1}_{b},\\
&0\quad {\rm on}~ \mathcal{A}^{k+1}_{0},
\end{aligned}\right. \qquad {\rm and}\quad \mu_i^{k+1}=\left\{\begin{aligned}
&-w_i\beta\quad {\rm on}~ \mathcal{I}^{k+1}_{-},\\
&w_i\beta\quad {\rm on}~ \mathcal{I}^{k+1}_{+},
\end{aligned}\right. \qquad i=1,2,\ldots,N_h.
\end{equation*}
\item[Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1.
\end{description}
\end{algorithm}
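A minimal Python sketch of Step 1 (the active-set partition); the variable names mirror the algorithm, and the data below are purely illustrative. For nondegenerate data the five index sets form a partition of $\{1,\dots,N_h\}$.

```python
import numpy as np

def pdas_sets(u, mu, w, a, b, beta, c=1.0):
    """Step 1 of the PDAS method: classify indices by v = u + c*mu."""
    v = u + c * mu
    A_a = v + c * w * beta < a                            # control at lower bound a
    A_b = v - c * w * beta > b                            # control at upper bound b
    A_0 = np.abs(v) < c * w * beta                        # sparsity set: u = 0
    I_p = (c * w * beta < v) & (v < b + c * w * beta)     # inactive, u > 0
    I_m = (a - c * w * beta < v) & (v < -c * w * beta)    # inactive, u < 0
    return A_a, A_b, A_0, I_p, I_m

rng = np.random.default_rng(0)
n = 1000
u, mu = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
w = np.ones(n)
sets = pdas_sets(u, mu, w, a=-0.5, b=0.5, beta=0.5)
# The five sets are pairwise disjoint and cover every index:
print(np.all(sum(s.astype(int) for s in sets) == 1))
```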
In actual numerical implementations, let $\epsilon$ be a given accuracy tolerance. We terminate our Phase-II algorithm (the PDAS method) when $\eta\leq\epsilon$,
where $\eta=\max{\{\eta_1,\eta_2,\eta_3\}}$ and
\begin{equation*}
\begin{aligned}
& \eta_1=\frac{\|K_hy- M_hu-M_h y_c\|}{1+\|M_hy_c\|},\quad\quad\eta_2=\frac{\|M_h(y- y_d)+ K_hp\|}{1+\|M_hy_d\|}, \\
&\eta_3=\frac{\|u-{\rm\Pi}_{[a,b]}\left(\frac{\alpha}{2}{\rm soft}(W_h^{-1}M_h(p-u), \beta)\right)\|}{1+\|u\|}.
\end{aligned}
\end{equation*}
\subsection{\textbf {Algorithms for comparison}}\label{subsection5.4}
In this section, in order to show the high efficiency of our ihADMM and the two-phase strategy, we give the details of the existing methods for sparse optimal control problems mentioned above.
As a comparison, one can employ the PDAS method alone to solve (\ref{approx discretized matrix-vector form for PDAS}). An important issue for the successful application of the PDAS scheme is the use of a robust line-search method for globalization purposes. However, since the objective function of (\ref{approx discretized matrix-vector form for PDAS}) contains the nonsmooth term $\beta\|W_hu\|_1$, it is not differentiable in the classical sense, and the classical Armijo, Wolfe and Goldstein line-search schemes cannot be used. To overcome this difficulty, an alternative approach, the derivative-free line search (DFLS), is used; for more details on DFLS, we refer to \cite{DFLS1}. This yields a globalized version of PDAS with DFLS.
In addition, as we have mentioned in Section \ref{intro}, instead of our ihADMM method and PDAS method, one can also apply the APG method \cite{FIP} to solve problem (\ref{approx discretized matrix-vector form for PDAS}) for the sake of numerical comparison, see \cite{FIP} for more details of the APG method.
\section{Numerical results}\label{sec:5}
In this section, we use the following examples to evaluate the numerical behaviour of our ihADMM and the two-phase strategy for problem (\ref{equ:approx discretized matrix-vector form}) and to verify the theoretical error estimates given in Section \ref{sec:3}. For comparison, we also show the numerical results obtained by the classical ADMM, the APG algorithm, and the PDAS method with line search.
\subsection{Algorithmic Details}\label{subsec:5.1}
\textbf{Discretization.} As shown in Section \ref{sec:3}, the discretization was carried out using piecewise linear and continuous finite elements.
The assembly of the mass and stiffness matrices, as well as the lumped mass matrix, was done with the iFEM software package. To present the finite element error estimates, it is convenient to introduce the experimental order of convergence (EOC), which for some positive error functional $E(h)$ with $h> 0$ is defined as follows: given two grid sizes $h_1\neq h_2$, let
\begin{equation}\label{EOC}
\mathrm{EOC}:=\frac{\log {E(h_1)}-\log {E(h_2)}}{\log{h_1}-\log {h_2}}.
\end{equation}
It follows from this definition that if $E(h)=\mathcal{O}(h^{\gamma})$ then $\mathrm{EOC}\approx\gamma$. The error functional $E(\cdot)$ investigated in the present section is $E_2(h):=\|u-u_h\|_{L^2{(\Omega)}}$.
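The EOC computation in (\ref{EOC}) is a one-liner; the sketch below (with illustrative values) confirms that for $E(h)=Ch^{\gamma}$ it recovers $\gamma$.

```python
import numpy as np

def eoc(E1, E2, h1, h2):
    """Experimental order of convergence between two mesh sizes h1 != h2."""
    return (np.log(E1) - np.log(E2)) / (np.log(h1) - np.log(h2))

# For E(h) = C * h^gamma the EOC equals gamma:
print(eoc(0.1 * 0.5**1.5, 0.1 * 0.25**1.5, 0.5, 0.25))  # -> 1.5 (up to rounding)
```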
\textbf{Initialization.} For all numerical examples, we choose $u^0=0$ as the initialization for all algorithms.
\textbf{Parameter Setting.} For the classical ADMM and our ihADMM, the penalty parameter $\sigma$ was chosen as $\sigma=0.1 \alpha$. For the step length $\tau$, we choose $\tau=1.618$ for the classical ADMM and $\tau=1$ for our ihADMM. For the PDAS method, the parameter in the active set strategy was chosen as $c=1$. For the APG method, we estimate an approximation of the Lipschitz constant $L$ with a backtracking method.
\textbf{Terminal Condition.} In our numerical experiments, we measure the accuracy of an approximate optimal solution by the corresponding KKT residual error for each algorithm. To show the efficiency of our ihADMM, we report the numerical results obtained by running the classical ADMM and the APG method alongside those obtained by our ihADMM. In this case, we terminate all the algorithms when $\eta<10^{-6}$, with the maximum number of iterations set at 500. Additionally, we employ our two-phase strategy to obtain a more accurate solution. As a comparison, a globalized version of the PDAS algorithm is also shown. In this case, we terminate our ihADMM when $\eta<10^{-3}$ to warm-start the PDAS algorithm, which is terminated when $\eta<10^{-10}$. Similarly, we terminate the PDAS algorithm with DFLS when $\eta<10^{-10}$.
\textbf{Computational environment.}
All our computational results are obtained with MATLAB Version 8.5 (R2015a) running on a computer with
a 64-bit Windows 7.0 operating system, an Intel(R) Core(TM) i7-5500U CPU (2.40GHz) and 8GB of memory.
\subsection{Examples}\label{subsec:5.2}
\begin{example}\label{example:1}
\begin{equation*}
\left\{ \begin{aligned}
&\min \limits_{(y,u)\in H^1_0(\Omega)\times L^2(\Omega)}^{}\ \ J(y,u)=\frac{1}{2}\|y-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^2(\Omega)}^{2}+\beta\|u\|_{L^1(\Omega)} \\
&\qquad\quad{\rm s.t.}\qquad \quad\quad\quad-\Delta y=u+y_c\quad \mathrm{in}\ \Omega,\\%=(0,1)\times(0,1) \\
&\qquad \qquad \quad\qquad \qquad \qquad ~y=0\quad \mathrm{on}\ \partial\Omega,\\
&\quad\qquad \qquad \qquad\qquad u\in U_{ad}=\{v(x)|a\leq v(x)\leq b, {\rm a.e }\ \mathrm{on}\ \Omega \}.
\end{aligned} \right.
\end{equation*}\end{example}
Here, we consider the problem with control $u\in L^2(\Omega)$ on the unit square $\Omega= (0, 1)^2$ with $\alpha=0.5$, $\beta=0.5$, $a=-0.5$ and $b=0.5$. This is a constructed problem: we set $y^*=\sin(\pi x_1)\sin(\pi x_2)$ and $p^*=2\beta\sin(2\pi x_1)\exp(0.5x_1)\sin(4\pi x_2)$. Then, through $u^*=\mathrm{\Pi}_{U_{ad}}\left(\frac{1}{\alpha}{\rm{soft}}\left(p^*,\beta\right)\right)$, $y_c=y^*-\mathcal{S}u^*$ and $y_d=\mathcal{S}^{-*}p^{*}+y^*$, we construct an example for which the exact solution is known.
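The construction of $u^*$ uses only the pointwise soft-thresholding operator and the projection onto $[a,b]$; a minimal NumPy sketch with the example's parameter values (the sample points of $p^*$ below are illustrative) is:

```python
import numpy as np

def soft(v, beta):
    """Soft-thresholding operator, the proximal map of beta*|.|."""
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def u_star(p, alpha, beta, a, b):
    """u* = Pi_{[a,b]}( soft(p, beta) / alpha ), applied pointwise."""
    return np.clip(soft(p, beta) / alpha, a, b)

p = np.array([0.3, 1.5, -2.0])           # sample values of the adjoint state p*
print(u_star(p, alpha=0.5, beta=0.5, a=-0.5, b=0.5))  # -> [ 0.   0.5 -0.5]
```

Note how the three regimes appear: $|p|\leq\beta$ gives a zero control (sparsity), while large $|p|$ is clipped by the box constraints.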
The discretized optimal control on the mesh $h=2^{-7}$ is shown in Figure \ref{example1fig:control on h=$2^{-7}$}. The error of the control $u$ w.r.t.\ the $L^2$-norm and the experimental order of convergence (EOC) for the control are presented in Table \ref{tab:1}. They confirm that the convergence rate is indeed of order $\mathcal{O}(h)$.
Numerical results for the accuracy of the solution, the number of iterations and the CPU time obtained by our ihADMM, the classical ADMM and the APG method are shown in Table \ref{tab:1}. As seen from Table \ref{tab:1}, our proposed ihADMM is an efficient algorithm for solving problem (\ref{equ:approx discretized matrix-vector form}) to medium accuracy. Moreover, our ihADMM clearly outperforms the classical ADMM and the APG method in terms of CPU time, especially on fine discretization levels. It is worth noting that although the APG method requires fewer iterations to satisfy the termination condition, it spends much time on the backtracking step aimed at finding an appropriate approximation of the Lipschitz constant. This is the reason that our ihADMM performs better than the APG method in actual numerical implementations. Furthermore, the numerical results in terms of iteration numbers illustrate the mesh-independent performance of the ihADMM and the APG method, in contrast to the classical ADMM.
In addition, to obtain a more accurate solution, we employ our two-phase strategy. The numerical results are shown in Table \ref{tab:2}. To demonstrate the power and importance of our two-phase framework, numerical results obtained by the PDAS method with line search are also shown in Table \ref{tab:2} as a comparison. It can be observed that our two-phase strategy is faster and more efficient than the PDAS with line search in terms of both iteration numbers and CPU time.
\begin{figure}
\caption{Optimal control $u_h$ on the square, $h=2^{-7}$.}
\label{example1fig:control on h=$2^{-7}$}
\end{figure}
\begin{example}\label{example:2}\rm{\cite[Example 1]{Stadler}}
\begin{equation*}
\left\{ \begin{aligned}
&\min \limits_{(y,u)\in Y\times U}^{}\ \ J(y,u)=\frac{1}{2}\|y-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^2(\Omega)}^{2}+{\beta}\|u\|_{L^1(\Omega)} \\
&\quad{\rm s.t.}\qquad -\Delta y=u,\quad \mathrm{in}\ \Omega=(0,1)\times(0,1) \\
&\qquad \qquad \qquad y=0,\quad \mathrm{on}\ \partial\Omega\\
&\qquad \qquad\qquad u\in U_{ad}=\{v(x)|a\leq v(x)\leq b, {\rm a.e }\ \mathrm{on}\ \Omega \},
\end{aligned} \right.
\end{equation*}
\end{example}
where the desired state $y_d=\frac{1}{6}\sin(2\pi x)\exp(2x)\sin(2\pi y)$ and the parameters are $\alpha=10^{-5}$, $\beta=10^{-3}$, $a=-30$ and $b=30$. In this case, the exact solution of the problem is unknown. Instead, we use the numerical solution computed on a grid with $h^*=2^{-10}$ as the reference solution.
The discretized optimal control on the mesh $h=2^{-7}$ is displayed in Figure \ref{fig:discretized control on h=$2^{-7}$}. The error of the control $u$ w.r.t.\ the $L^2$-norm with respect to the solution on the finest grid ($h^*=2^{-10}$) and the experimental order of convergence (EOC) for the control are presented in Table \ref{tab:3}. They confirm the linear rate of convergence w.r.t.\ $h$ proved in Theorem \ref{theorem:error2} and Corollary \ref{corollary:error1}.
Numerical results for the accuracy of the solution, the number of iterations and the CPU time obtained by our ihADMM, the classical ADMM and the APG method are also shown in Table \ref{tab:3}. The experimental results show that the ihADMM has an evident advantage over the classical ADMM and the APG method in computing time. Furthermore, the numerical results in terms of iteration numbers again illustrate the mesh-independent performance of our ihADMM. In addition, in Table \ref{tab:4}, we give the numerical results obtained by our two-phase strategy and the PDAS method with line search. From Table \ref{tab:4}, it can be observed that our two-phase strategy outperforms the PDAS with line search in terms of CPU time. These results demonstrate that our ihADMM is highly efficient in obtaining an approximate solution of moderate accuracy, and that our two-phase strategy represents an effective alternative to the PDAS method.
\begin{figure}
\caption{Optimal control $u_h$ on the square, $h=2^{-7}$.}
\label{fig:discretized control on h=$2^{-7}$}
\end{figure}
\section{Concluding remarks}\label{sec:6}
In this paper, elliptic PDE-constrained optimal control problems with $L^1$-control cost ($L^1$-EOCP) are considered. In order to give the discretized problems a decoupled form, instead of directly using the standard piecewise linear finite elements to discretize the problem, we utilize nodal quadrature formulas to approximately discretize the $L^1$-norm and the $L^2$-norm. It was proven that these approximation steps do not change the order of the error estimates. By taking advantage of the inherent structure of the problem, we proposed an inexact heterogeneous ADMM (ihADMM) to solve the discretized problems. Furthermore, theoretical results on the global convergence as well as the iteration complexity $o(1/k)$ of the ihADMM were given. Moreover, in order to obtain a more accurate solution, a two-phase strategy was introduced, in which the primal-dual active set (PDAS) method is used as a postprocessor of the ihADMM. Numerical results demonstrated the efficiency of our ihADMM and the two-phase strategy.
\section*{Acknowledgments}
The authors would like to thank Dr. Long Chen for the FEM package
iFEM \cite{Chen} in Matlab, and our colleagues for their valuable suggestions that led to improvements in this paper.
\begin{table}[H]
\caption{Example \ref{example:1}: The convergence behavior of our ihADMM, classical ADMM and APG for (\ref{equ:approx discretized matrix-vector form}). In the table, $\#$dofs stands for the number of degrees of freedom for the control variable on each grid level.}\label{tab:1}
\begin{center}
\begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|c|c|c|c|}
\hline
\hline
\multirow{2}{*}{$h$}&\multirow{2}{*}{$\#$dofs}&\multirow{2}{*}{$E_2$}&\multirow{2}{*}{EOC}& \multirow{2}{*}{Index} &\multirow{2}{*}{ihADMM} & \multirow{2}{*}{classical ADMM} & \multirow{2}{*}{APG} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{27} &\multirow{2}{*}{32} &\multirow{2}{*}{13} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-3}$} &\multirow{2}{*}{49}&\multirow{2}{*}{0.3075}&\multirow{2}{*}{--}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.15e-07} &\multirow{2}{*}{7.55e-07}
&\multirow{2}{*}{6.88e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{0.19} &\multirow{2}{*}{0.23} &\multirow{2}{*}{0.18} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{31} &\multirow{2}{*}{44} &\multirow{2}{*}{13} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-4}$} &\multirow{2}{*}{225}&\multirow{2}{*}{0.1237}&\multirow{2}{*}{1.3137}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{9.77e-07} &\multirow{2}{*}{9.91e-07}
&\multirow{2}{*}{8.23e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU times/s} &\multirow{2}{*}{0.37} &\multirow{2}{*}{0.66} &\multirow{2}{*}{0.32} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{31} &\multirow{2}{*}{58} &\multirow{2}{*}{12} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-5}$} &\multirow{2}{*}{961}&\multirow{2}{*}{0.0516}&\multirow{2}{*}{1.2870}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.41e-07}&\multirow{2}{*}{8.11e-07}
&\multirow{2}{*}{7.58e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{1.02} &\multirow{2}{*}{2.32} &\multirow{2}{*}{1.00} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{32} &\multirow{2}{*}{76} &\multirow{2}{*}{14} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-6}$} &\multirow{2}{*}{3969}&\multirow{2}{*}{0.0201}&\multirow{2}{*}{1.3112}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.26e-07}&\multirow{2}{*}{8.10e-07}
&\multirow{2}{*}{7.88e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{4.18} &\multirow{2}{*}{9.12} &\multirow{2}{*}{4.25} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{31} &\multirow{2}{*}{94} &\multirow{2}{*}{14} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-7}$} &\multirow{2}{*}{16129}&\multirow{2}{*}{0.0078}&\multirow{2}{*}{1.3252}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{ 5.33e-07}&\multirow{2}{*}{7.85e-07}
&\multirow{2}{*}{4.45e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{17.72} &\multirow{2}{*}{65.82} &\multirow{2}{*}{26.25} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{32} &\multirow{2}{*}{127} &\multirow{2}{*}{13} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-8}$} &\multirow{2}{*}{65025}&\multirow{2}{*}{0.0026}&\multirow{2}{*}{1.3772}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{6.88e-07}&\multirow{2}{*}{8.93e-07}
&\multirow{2}{*}{7.47e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{70.45} &\multirow{2}{*}{312.65} &\multirow{2}{*}{80.81} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{31} &\multirow{2}{*}{255} &\multirow{2}{*}{13} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-9}$} &\multirow{2}{*}{261121}&\multirow{2}{*}{0.0009}&\multirow{2}{*}{1.4027}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.43e-07}&\multirow{2}{*}{7.96e-07}
&\multirow{2}{*}{6.33e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{525.28} &\multirow{2}{*}{4845.31} &\multirow{2}{*}{620.55} \\
&&&&&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\caption{Example \ref{example:1}: The convergence behavior of our two-phase strategy, PDAS with line search.}\label{tab:2}
\begin{center}
\begin{tabular}{@{\extracolsep{\fill}}|c|c|c|cc|c|c|c|}
\hline
\hline
\multirow{2}{*}{$h$}&\multirow{2}{*}{$\#$dofs}& \multirow{2}{*}{Index of performance} &\multicolumn{2}{c|}{Two-Phase strategy}& \multirow{2}{*}{PDAS with line search} \\
\cline{4-5}
& & &ihADMM\ $+$\ PDAS&& \\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{13\quad $+$\quad 5} &&\multirow{2}{*}{21} \\
&&&&&\\
\multirow{2}{*}{$2^{-3}$} &\multirow{2}{*}{49}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{8.55e-12} & &\multirow{2}{*}{7.88e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{0.17} &&\multirow{2}{*}{0.32} \\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{13\quad $+$\quad 6} &&\multirow{2}{*}{22} \\
&&&&&\\
\multirow{2}{*}{$2^{-4}$} &\multirow{2}{*}{225}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{1.24e-11} & &\multirow{2}{*}{1.87e-11}
\\
&&&&&\\
&&\multirow{2}{*}{CPU times/s} &\multirow{2}{*}{0.27} &&\multirow{2}{*}{0.54} \\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{14\quad $+$\quad 5} &&\multirow{2}{*}{22} \\
&&&&&\\
\multirow{2}{*}{$2^{-5}$} &\multirow{2}{*}{961}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{8.10e-12} & &\multirow{2}{*}{8.42e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{0.95} &&\multirow{2}{*}{2.07} \\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{14\quad $+$\quad 6} &&\multirow{2}{*}{23} \\
&&&&&\\
\multirow{2}{*}{$2^{-6}$} &\multirow{2}{*}{3969}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{4.15e-12} & &\multirow{2}{*}{4.00e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{3.65} &&\multirow{2}{*}{6.98}\\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{15\quad $+$\quad 6} &&\multirow{2}{*}{23} \\
&&&&&\\
\multirow{2}{*}{$2^{-7}$} &\multirow{2}{*}{16129}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{ 1.43e-12} & &\multirow{2}{*}{1.52e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{22.10} &&\multirow{2}{*}{43.13} \\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{15\quad $+$\quad 5} &&\multirow{2}{*}{24} \\
&&&&&\\
\multirow{2}{*}{$2^{-8}$} &\multirow{2}{*}{65025}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{5.21e-12} & &\multirow{2}{*}{5.03e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{68.22} &&\multirow{2}{*}{140.18}\\
&&&&&\\
\hline
&&\multirow{2}{*}{iter} &\multirow{2}{*}{15\quad $+$\quad 6} &&\multirow{2}{*}{24} \\
&&&&&\\
\multirow{2}{*}{$2^{-9}$} &\multirow{2}{*}{261121}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{3.77e-12} & &\multirow{2}{*}{3.76e-12} \\
&&&&&\\
&&\multirow{2}{*}{CPU time/s}&\multirow{2}{*}{540.57} &&\multirow{2}{*}{1145.63}\\
&&&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\caption{Example \ref{example:2}: The convergence behavior of ihADMM, classical ADMM and APG for (\ref{equ:approx discretized matrix-vector form}).
}\label{tab:3}
\begin{center}
\begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|c|c|c|c|}
\hline
\hline
\multirow{2}{*}{$h$}&\multirow{2}{*}{$\#$dofs}&\multirow{2}{*}{$E_2$}&\multirow{2}{*}{EOC}& \multirow{2}{*}{Index} &\multirow{2}{*}{ihADMM} & \multirow{2}{*}{classical ADMM} & \multirow{2}{*}{APG} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{40} &\multirow{2}{*}{48} &\multirow{2}{*}{18} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-3}$} &\multirow{2}{*}{49}&\multirow{2}{*}{0.3075}&\multirow{2}{*}{--}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{8.22e-07} &\multirow{2}{*}{8.65e-07}
&\multirow{2}{*}{7.96e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{0.30} &\multirow{2}{*}{0.51} &\multirow{2}{*}{0.24} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{41} &\multirow{2}{*}{56} &\multirow{2}{*}{18} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-4}$} &\multirow{2}{*}{225}&\multirow{2}{*}{0.1237}&\multirow{2}{*}{1.3137}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.22e-07} &\multirow{2}{*}{8.01e-07}
&\multirow{2}{*}{7.58e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU times/s} &\multirow{2}{*}{0.45} &\multirow{2}{*}{0.71} &\multirow{2}{*}{0.44} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{40} &\multirow{2}{*}{69} &\multirow{2}{*}{19} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-5}$} &\multirow{2}{*}{961}&\multirow{2}{*}{0.0516}&\multirow{2}{*}{1.2870}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{8.12e-07}&\multirow{2}{*}{8.01e-07}
&\multirow{2}{*}{7.90e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{1.60} &\multirow{2}{*}{3.05} &\multirow{2}{*}{1.58} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{42} &\multirow{2}{*}{85} &\multirow{2}{*}{18} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-6}$} &\multirow{2}{*}{3969}&\multirow{2}{*}{0.0201}&\multirow{2}{*}{1.3112}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{6.11e-07}&\multirow{2}{*}{7.80e-07}
&\multirow{2}{*}{6.45e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{7.25} &\multirow{2}{*}{14.62} &\multirow{2}{*}{7.45} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{40} &\multirow{2}{*}{108} &\multirow{2}{*}{18} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-7}$} &\multirow{2}{*}{16129}&\multirow{2}{*}{0.0078}&\multirow{2}{*}{1.3252}&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{ 6.35e-07}&\multirow{2}{*}{7.11e-07}
&\multirow{2}{*}{5.62e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{33.85} &\multirow{2}{*}{101.36} &\multirow{2}{*}{34.39} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{41} &\multirow{2}{*}{132} &\multirow{2}{*}{19} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-8}$} &\multirow{2}{*}{65025}&\multirow{2}{*}{0.0026}&\multirow{2}{*}{1.3772}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{7.55e-07}&\multirow{2}{*}{7.83e-07}
&\multirow{2}{*}{7.57e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{158.62} &\multirow{2}{*}{508.65} &\multirow{2}{*}{165.75} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{42} &\multirow{2}{*}{278} &\multirow{2}{*}{18} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-9}$} &\multirow{2}{*}{261121}&\multirow{2}{*}{0.0009}&\multirow{2}{*}{1.4027}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{5.25e-07}&\multirow{2}{*}{5.56e-07}
&\multirow{2}{*}{4.85e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{1781.98} &\multirow{2}{*}{11788.52} &\multirow{2}{*}{1860.11} \\
&&&&&&&\\
\hline
&&&&\multirow{2}{*}{iter} &\multirow{2}{*}{41} &\multirow{2}{*}{500} &\multirow{2}{*}{19} \\
&&&&&&&\\
\multirow{2}{*}{$2^{-10}$} &\multirow{2}{*}{1046529}&\multirow{2}{*}{--}&\multirow{2}{*}{1.4027}
&\multirow{2}{*}{residual $\eta$}
&\multirow{2}{*}{8.78e-07}&\multirow{2}{*}{Error}
&\multirow{2}{*}{8.47e-07} \\
&&&&&&&\\
&&&&\multirow{2}{*}{CPU time/s} &\multirow{2}{*}{42033.79} &\multirow{2}{*}{Error} &\multirow{2}{*}{44131.27} \\
&&&&&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\caption{Example \ref{example:2}: The behavior of two-phase strategy and the PDAS method.}\label{tab:4}
\begin{center}
\begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|c|}
\hline
\multirow{2}{*}{$h$} & \multirow{2}{*}{$\#$dofs} & \multirow{2}{*}{Index of performance} & Two-Phase strategy & \multirow{2}{*}{PDAS with line search} \\
\cline{4-4}
& & & ihADMM\ $+$\ PDAS & \\
\hline
\multirow{3}{*}{$2^{-3}$} & \multirow{3}{*}{49} & iter & 18 $+$ 8 & 24 \\
& & residual $\eta$ & 4.45e-12 & 4.36e-12 \\
& & CPU time/s & 0.35 & 0.53 \\
\hline
\multirow{3}{*}{$2^{-4}$} & \multirow{3}{*}{225} & iter & 18 $+$ 8 & 25 \\
& & residual $\eta$ & 5.84e-12 & 6.01e-11 \\
& & CPU time/s & 0.68 & 1.02 \\
\hline
\multirow{3}{*}{$2^{-5}$} & \multirow{3}{*}{961} & iter & 19 $+$ 7 & 24 \\
& & residual $\eta$ & 6.89e-12 & 6.87e-12 \\
& & CPU time/s & 1.98 & 2.99 \\
\hline
\multirow{3}{*}{$2^{-6}$} & \multirow{3}{*}{3969} & iter & 18 $+$ 8 & 26 \\
& & residual $\eta$ & 2.15e-11 & 2.28e-11 \\
& & CPU time/s & 8.42 & 12.63 \\
\hline
\multirow{3}{*}{$2^{-7}$} & \multirow{3}{*}{16129} & iter & 19 $+$ 7 & 25 \\
& & residual $\eta$ & 4.06e-11 & 3.88e-11 \\
& & CPU time/s & 43.45 & 65.18 \\
\hline
\multirow{3}{*}{$2^{-8}$} & \multirow{3}{*}{65025} & iter & 20 $+$ 8 & 25 \\
& & residual $\eta$ & 8.45e-12 & 8.72e-12 \\
& & CPU time/s & 189.04 & 283.20 \\
\hline
\multirow{3}{*}{$2^{-9}$} & \multirow{3}{*}{261121} & iter & 20 $+$ 8 & 26 \\
& & residual $\eta$ & 7.33e-12 & 7.21e-12 \\
& & CPU time/s & 2155.01 & 3232.63 \\
\hline
\multirow{3}{*}{$2^{-10}$} & \multirow{3}{*}{1046529} & iter & 20 $+$ 8 & 26 \\
& & residual $\eta$ & 9.58e-12 & 9.73e-12 \\
& & CPU time/s & 58049.57 & 87035.63 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
\begin{document}
\title{Spontaneous Emission Near Superconducting Bodies}
\author{Bo-Sture K. Skagerstam}\email{[email protected]}
\author{Per Kristian Rekdal}\email{[email protected]}
\affiliation{Complex Systems and Soft Materials Research Group, Department of Physics,
The Norwegian University of Science and Technology, N-7491 Trondheim, Norway}
\date{\today}
\begin{abstract}
In the present paper we study the spontaneous photon emission due to a magnetic spin-flip transition
of a two-level atom in the vicinity of a dielectric body like a normal conducting metal or
a superconductor. For temperatures below the transition temperature $T_c$ of a superconductor,
the corresponding spin-flip lifetime is boosted by several orders of magnitude as
compared to the case of a normal conducting body. Numerical results of an exact formulation are also
compared to a previously derived approximate analytical
expression for the spin-flip lifetime, and we find excellent agreement.
We present results on how the spin-flip lifetime depends on the temperature $T$ of a superconducting body
as well as its thickness $H$. Finally, we study how non-magnetic impurities as well as possible Eliashberg
strong-coupling effects influence the spin-flip rate.
It is found that non-magnetic impurities as well as strong-coupling effects have no dramatic impact
on the spin-flip lifetime.
\end{abstract}
\pacs{34.50.Dy, 03.65.Yz, 03.75.Be, 42.50.Ct}
\maketitle
It is well-known that the rate of spontaneous emission of atoms will be modified due to the presence
of a dielectric body \cite{purcell_46}. In current investigations of atom microtraps this issue is of
fundamental importance since such decay processes have a direct bearing on the stability of e.g. atom chips.
In magnetic microtrap experiments, cold atoms are trapped due to the presence of magnetic field gradients created e.g. by
current carrying wires \cite{folman_02}.
Such microscopic traps provide a powerful tool for the control and manipulation of
cold neutral atoms over micrometer distances \cite{henkel_06}.
Unfortunately, this proximity of the cold atoms to a dielectric body introduces additional decay channels.
Most importantly, Johnson-noise currents in the material give rise to electromagnetic field fluctuations.
For dielectric bodies at room temperature made of normal conducting metals,
these fluctuations may be strong enough to deplete the quantum state of the atom and, hence,
expel the atom from the magnetic microtrap \cite{jones_03}.
Reducing this disturbance from the surface is therefore strongly desired.
In order to achieve this, the use of superconducting dielectric bodies instead of
normal conducting metals has been proposed \cite{scheel_05}. Some experimental work in this context
has been done as well, e.g. by Nirrengarten {\it et al.} \cite{haroche_06}, where cold atoms
were trapped near a superconducting surface.
In the present article we will consider the spin-flip rate when the electrodynamic properties
of the superconducting body are described in terms of either a simple two-fluid model or
in terms of the detailed microscopic Mattis-Bardeen \cite{mattis_58} and
Abrikosov-Gor'kov-Khalatnikov \cite{gorkov58} theory of weak-coupling BCS superconductors.
In addition, we will also study how non-magnetic impurities,
as well as strong coupling effects according to the low-frequency limit of
the Eliashberg theory \cite{eliashberg_60}, will affect the spontaneous emission rate.
Following Ref.\cite{rekdal_04} we consider an atom in an initial state $|i \rangle$ and trapped at position ${\bf r}_A= (0,0,z)$
in vacuum near a dielectric body. The rate $\Gamma_B$ of spontaneous and thermally stimulated magnetic spin-flip
transition into a final state $|f\rangle$ is then
\bea \label{gamma_B_generel}
\lefteqn{\Gamma_B = \, \mu_0 \, \frac{2 \, (\mu_B g_S)^2}{\hbar} \; \sum_{j,k} ~
S_j \, S_k^{\, *} }
\nonumber
\\
&& \; \times \;
\mbox{Im} \, [ \; \nabla \times \nabla \times
\bm{G}({\bf r}_A , {\bf r}_A , \omega ) \; ]_{jk} \; ( {\overline n} + 1 ) \, ,
\eea
where we have introduced the dimensionless components $S_j \equiv \langle f | \hat{S}_j/\hbar | i \rangle$
of the electron spin operators $\hat{S}_j$ with $j=x,y,z$.
Here $g_S \approx 2$ is the gyromagnetic factor of the electron,
and $\bm{G}({\bf r},{\bf r}',\omega )$ is the dyadic Green tensor of Maxwell's theory.
\eq{gamma_B_generel} follows from a consistent quantum-mechanical treatment of electromagnetic
radiation in the presence of an absorbing body \cite{dung_00,henry_96}. In this theory a
local response is assumed, i.e. the characteristic skin depth should be larger than the mean free path
of the electric charge carriers of the absorbing body.
Thermal excitations of the electromagnetic field modes are accounted for by the factor
$( {\overline n} + 1 )$, where ${\overline n} = 1 / ( e^{\hbar \omega/k_{\text{B}} T}- 1 )$ and
$\omega \equiv 2 \pi \, \nu$ is the angular frequency of the spin-flip transition.
Here $T$ is the temperature of the dielectric body, which is assumed to be in thermal equilibrium
with its surroundings.
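At the low transition frequencies considered here, the thermal factor is far from unity. As a rough illustration, the following sketch evaluates $\overline{n}$ for $\nu = 560$ kHz; the body temperature $T = 4.2$ K is an illustrative choice of ours and is not fixed by the text.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant [J s]
k_B = 1.380649e-23       # Boltzmann constant [J/K]

def n_bar(nu, T):
    """Mean thermal photon number at frequency nu [Hz] and temperature T [K]."""
    return 1.0 / math.expm1(hbar * 2.0 * math.pi * nu / (k_B * T))

# T = 4.2 K is an assumed, illustrative body temperature. Since
# hbar*omega << k_B*T here, n_bar is huge and well approximated by the
# Rayleigh-Jeans limit k_B*T/(hbar*omega).
nbar = n_bar(560e3, 4.2)
```

For these values $\overline{n}$ is of order $10^5$, so thermally stimulated emission completely dominates the spontaneous contribution.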
The dyadic Green tensor is the unique solution to the Helmholtz equation
\bea \label{G_Helm}
\nabla\times\nabla\times \bm{G}({\bf r},{\bf r}',\omega) - k^2
\epsilon({\bf r},\omega) \bm{G}({\bf r},{\bf r}',\omega) = \delta( {\bf r} - {\bf r}' ) \bm{1} \, ,\nonumber\\
\eea
with appropriate boundary conditions. Here $k=\omega/c$ is the wavenumber in vacuum, $c$ is the speed of light
and $\bm{1}$ the unit dyad. The tensor $\bm{G}({\bf r},{\bf r}',\omega)$ contains all
relevant information about the geometry of the material and, through the relative electric permittivity
$\epsilon({\bf r},\omega)$, about its dielectric properties.
The fluctuation-dissipation theorem is built into this theory \cite{dung_00,henry_96}.
The decay rate $ \Gamma^{\, 0}_{ B } $ of a magnetic spin-flip transition for an atom in
free-space is well-known (see e.g. Refs.\cite{dung_00}). This free-space decay rate is
$\Gamma^{\, 0}_B = \Gamma_B S^{\, 2}$, where $\Gamma_B = \mu_0 \, ( \mu_B g_S )^2 \, k^3 / ( 3 \pi \hbar )$
and where we have introduced the dimensionless spin factor $S^{\, 2} \equiv S_x^{\, 2} + S_y^{\, 2} + S_z^{\, 2}$.
The free-space lifetime corresponding to this magnetic spin-flip rate is $\tau^{\, 0}_{ B } \equiv 1/\Gamma^{\, 0}_{ B }$.
In the present paper we only consider $^{87}$Rb atoms that are initially pumped into the
$|5 S_{1/2},F=2,m_F=2\rangle \equiv |2,2\rangle$ state, and we assume the rate-limiting transition
$|2,2 \rangle \rightarrow |2,1 \rangle$ in correspondence with recent experiments \cite{vuletic_04,harber_03,hinds_03,haroche_06}.
The spin factor is $S^2=1/8$ (cf. Ref.\cite{rekdal_04}) and the frequency is $\nu = 560$ kHz.
The numerical value of the free-space lifetime is then $\tau^0_B = 1.14 \times 10^{25}$ s.
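The quoted lifetime can be checked directly from the free-space rate formula above; a minimal numerical sketch, using CODATA values for the physical constants:

```python
import math

mu_0 = 4.0e-7 * math.pi      # vacuum permeability [T m/A]
mu_B = 9.2740100783e-24      # Bohr magneton [J/T]
hbar = 1.054571817e-34       # reduced Planck constant [J s]
c = 2.99792458e8             # speed of light [m/s]

g_S, S2, nu = 2.0, 1.0 / 8.0, 560e3
k = 2.0 * math.pi * nu / c                       # vacuum wavenumber [1/m]
# Gamma_B = mu_0 (mu_B g_S)^2 k^3 / (3 pi hbar), free-space rate prefactor
Gamma_B = mu_0 * (mu_B * g_S) ** 2 * k ** 3 / (3.0 * math.pi * hbar)
tau_0 = 1.0 / (Gamma_B * S2)                     # free-space spin-flip lifetime [s]
```

The result reproduces $\tau^0_B \approx 1.14 \times 10^{25}$ s to better than one percent.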
In the following we will consider a geometry where an atom is trapped
at a distance $z$ away from a dielectric slab with thickness $H$.
Vacuum is on both sides of the slab, i.e. $\epsilon({\bf r},\omega) = 1$
for any position ${\bf r}$ outside the body. The slab can be e.g. a superconductor or
a normal conducting metal, described by a dielectric function $\epsilon(\omega)$.
The total transition rate for magnetic spontaneous emission
\bea \label{total_B}
\Gamma_{ B } = ( \Gamma^{\, 0}_{B} + \Gamma^{\, \rm{slab}}_{B} ) \, ( {\overline n} + 1 ) \, ,
\eea
can then be decomposed into a free part and a part purely due to the presence of the slab.
The latter contribution for an arbitrary spin orientation is then given by
\bea
\Gamma^{\, \rm{slab}}_{B} &=& \label{gamma_B_gen_ja}
2 \, \Gamma^{\, 0}_{B} \,
\bigg ( ~ ( S_x^{\, 2} \, + \, S_y^{\, 2} ) \, I_{\|} \, + \, S_z^{\, 2} \, I_{\perp} ~ \bigg ) ~ ,
\eea
\noi
with the atom-spin orientation dependent integrals
\bea \nonumber
I_{\|} &=&
\frac{3}{16 k z} \,
\textrm{Re}
\bigg \{
\int_{0}^{2 k z} d x \, e^{ i x }
\bigg [ {\cal C}_{N}(x) - ( \frac{x}{2 k z} )^2 {\cal C}_{M}(x) \bigg ]
\\ \label{I_para}
&& + \, \int_{0}^{\infty} d x \, e^{ - x } \frac{1}{i}
\bigg [ {\cal C}_{N}(ix) + ( \frac{x}{2 k z} )^2 {\cal C}_{M}(ix) \bigg ] \bigg \} \, ,
\\ \nonumber
\\ \nonumber
I_{\perp} &=&
\frac{3}{8 k z} \,
\textrm{Re}
\bigg \{ \,
\int_{0}^{2 k z} d x \, e^{ i x } \,
\bigg [ 1 - ( \frac{x}{2 k z} )^2 \bigg ] \, {\cal C}_{M}(x)
\\ \label{I_perp}
&& ~~~~~ + ~ \int_{0}^{\infty} d x \, e^{ - x } \, \frac{1}{i} \, \bigg [ 1 + ( \frac{x}{2 k z} )^2 \bigg ] \,
{\cal C}_{M}(ix) \bigg \} ,
\eea
\noi
where the scattering coefficients are given by \cite{li_94}
\bea
{{\cal C}}_{N}(x) = r_p(x) ~ \frac{1 - e^{ix \, H/z} }
{1 \, - \, r_p^2(x) \, e^{ix \, H/z} } ~ ,
\\ \nonumber
\\
{{\cal C}}_{M}(x) = r_s(x) ~ \frac{1 - e^{ix \, H/z} }
{1 \, - \, r_s^2(x) \, e^{ix \, H/z} } ~ .
\eea
\noi
The electromagnetic field polarization dependent Fresnel coefficients are
\bea
r_p(x) &=& \frac{\epsilon(\omega) \, x \, - \, \sqrt{\, ( 2 k z )^2 ( \, \epsilon(\omega) - 1 \, ) \, + \, x^2} }
{\epsilon(\omega) \, x \, + \, \sqrt{\, ( 2 k z )^2 ( \, \epsilon(\omega) - 1 \, ) \, + \, x^2} } ~ ,
\\ \nonumber
\\ \label{r_s}
r_s(x) &=& \frac{ x \, - \, \sqrt{\, ( 2 k z )^2 ( \, \epsilon(\omega) - 1 \, ) \, + \, x^2} }
{ x \, + \, \sqrt{\, ( 2 k z )^2 ( \, \epsilon(\omega) - 1 \, ) \, + \, x^2} } ~ .
\eea
\noi
For the special case $H=\infty$, the integrals in Eqs.(\ref{I_para}) and (\ref{I_perp}) are simply a convenient
re-writing of Eqs.(8)-(12) in Ref.\cite{rekdal_07}. Note that $I_{\perp} \approx 2 \, I_{\|}$ provided $k z \ll 1$.
Throughout this article, we use the same spin-orientation as in Refs.\cite{scheel_05,rekdal_06}, i.e. $S_y^2 = S_z^2$ and $S_x=0$.
\begin{figure}
\caption{$\tau_B$ of a trapped atom near a superconducting film as a function of the temperature $T/T_c$.
Both the solid and the dash-dotted lines correspond to the two-fluid model with the Gorter-Casimir temperature dependence. We use $\lambda_L(0) = 35 \,$nm and
$\delta(T_c) \approx 150 \, \mu$m \cite{pronin_98}.
\label{tau_sfa_t}}
\end{figure}
As the total current density responds linearly and locally to the electric field,
the dielectric function can be written
\bea \label{eps_j_2}
\epsilon(\omega) = 1 - \frac{\sigma_2(T)}{\epsilon_0 \, \omega} + i \, \frac{\sigma_1(T)}{\epsilon_0 \, \omega} \, .
\eea
Here $\sigma(T) \equiv \sigma_1(T) + i \sigma_2(T)$ is the complex optical conductivity.
We may now parameterize this complex conductivity in terms of the London penetration length
$\lambda_L(T) \equiv \sqrt{1/\omega \mu_0 \sigma_2(T)}$ and the skin depth
$\delta (T) \equiv \sqrt{2/\omega \mu_0 \sigma_1(T)}$.
In this case, the dielectric function is $\epsilon(\omega) = 1 - 1/k^2 \lambda^2_L(T) + i \, 2/k^2\delta^2(T)$.
If, in addition, we consider a non-zero and sufficiently small frequency
in the range $0 < \omega \ll \omega_g \equiv 2 \Delta(0)/\hbar$,
where $\Delta(0)$ is the energy gap of the superconductor at zero temperature,
the current density may be described in terms of a two-fluid model \cite{london_34_40}.
The London penetration length is $\lambda_L(T) = \lambda_L(0)/\sqrt{ n_s(T)/n_0 }$
and the skin depth is $\delta(T) = \delta(T_c)/\sqrt{n_n(T)/n_0}$.
Here the electron density in the superconducting and normal state are $n_s(T)$ and $n_n(T)$, respectively,
such that $n_s(T) + n_n(T) = n_0$ and $n_s(0) = n_n(T \geq T_c) = n_0$ \cite{london_34_40}.
A convenient summary of the two-fluid model is expressed by the relations
\bea \label{tfm_sigma1_2}
\sigma_1(T) = \sigma_n \, \frac{n_n(T)}{n_0} ~~ , ~~ \, \sigma_2(T) = \sigma_L \, \frac{n_s(T)}{n_0} \, ,
\eea
where $\sigma_n \equiv \sigma_1(T_c)$ and $\sigma_L \equiv 1/\omega\mu_0\lambda^2_L(0)$.
Considering, in particular, the Gorter-Casimir temperature dependence \cite{gorter_34}
for the current densities, the electron density in the normal state is $n_n(T)/n_0 = (T/T_c)^4$.
For niobium we use $\delta(T_c) = \sqrt{2/\omega \mu_0 \sigma_n} \approx 150 \, \mu$m as
$\sigma_n \approx 2 \times 10^7 (\Omega \textrm{m})^{-1}$ and
$\lambda_L(0) = 35 \,$nm according to Refs.\cite{pronin_98}. In passing, we remark that
the value of $\sigma_n$ as obtained in Ref.\cite{casalbuoni_05} is two orders of magnitude
larger than the corresponding value inferred from the data presented in Refs.\cite{pronin_98}.
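As a consistency check on these material parameters, the skin depth and the scale $\sigma_L \equiv 1/\omega\mu_0\lambda_L^2(0)$ can be recomputed from the values stated above; a small sketch:

```python
import math

mu_0 = 4.0e-7 * math.pi              # vacuum permeability [T m/A]
omega = 2.0 * math.pi * 560e3        # spin-flip angular frequency [rad/s]
sigma_n = 2.0e7                      # normal-state conductivity of Nb [1/(Ohm m)]
lambda_L0 = 35e-9                    # London penetration length at T = 0 [m]

# delta(T_c) = sqrt(2 / (omega mu_0 sigma_n)) and sigma_L = 1/(omega mu_0 lambda_L(0)^2)
delta_Tc = math.sqrt(2.0 / (omega * mu_0 * sigma_n))
sigma_L = 1.0 / (omega * mu_0 * lambda_L0 ** 2)
```

This reproduces $\delta(T_c) \approx 150\,\mu$m and $\sigma_L \approx 1.85 \times 10^{14}\,(\Omega\mathrm{m})^{-1}$, the latter value being quoted again in the dirty-limit discussion below.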
The lifetime $\tau_B \equiv 1/\Gamma_B$ for spontaneous emission as a function of $T$ is shown in \fig{tau_sfa_t}
for $H=0.9 \, \mu$m (solid line). We confirm the observation in Ref.\cite{rekdal_06} that for temperatures
below $T_c$ and for $H=\infty$ (dash-dotted line), the spin-flip lifetime is boosted
by several orders of magnitude.
In Ref.\cite{rekdal_06}, the spin-flip lifetime was, however, calculated by making use of the
approximate analytical expression
\bea \label{skagerstam06}
\frac{\tau^{\, 0}_B}{\tau_B } = ( {\bar n} + 1 )\left( 1 + (\frac{3}{4})^3 \sqrt{\epsilon_0 \omega} \,
\frac{\sigma_1(T)}{\sigma_2^{3/2}(T)} \, \frac{1}{(kz)^{4}} \right) \, ,
\eea
valid provided $\lambda_L(T) \ll \delta (T)$ and $\lambda_L(T) \ll z \ll \lambda$.
Comparing this analytical expression with the numerical results as presented in \fig{tau_sfa_t},
based on the exact equations Eqs.(\ref{total_B})-(\ref{r_s}), we find excellent agreement. This observation remains true
when $\sigma_1 (T)$ and $\sigma_2 (T)$ are obtained from more detailed and microscopic considerations to be discussed below.
For temperatures $T/T_c > 1$ we can neglect the $\sigma_2 (T)$ dependence and, for $\delta (T) \ll z$,
we confirm the result of Ref.\cite{scheel_05}, i.e.
\bea \label{knight05}
\frac{\tau^{\, 0}_B}{\tau_B } = ( {\bar n} + 1 )\left( 1 + (\frac{3}{4})^3 \sqrt{\frac{2\epsilon_0 \omega}{\sigma_1(T)}}
\, \frac{1}{(kz)^{4}} \right) \, .
\eea
For $T \simeq T_c$ we have to resort to numerical investigations.
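To convey the magnitudes involved, the sketch below evaluates Eq.(\ref{knight05}) for a normal conductor. The atom-surface distance $z$ and the temperature are illustrative choices of ours (they are not specified at this point in the text); the robust feature is the $z^4$ growth of the lifetime in the near-field regime.

```python
import math

eps_0 = 8.8541878128e-12    # vacuum permittivity [F/m]
hbar = 1.054571817e-34      # reduced Planck constant [J s]
k_B = 1.380649e-23          # Boltzmann constant [J/K]
c = 2.99792458e8            # speed of light [m/s]

nu, tau_0 = 560e3, 1.14e25  # transition frequency [Hz], free-space lifetime [s]
omega = 2.0 * math.pi * nu
k = omega / c

def lifetime_normal(sigma_1, z, T):
    """Spin-flip lifetime near a thick normal conductor, Eq. (knight05)."""
    n_bar = 1.0 / math.expm1(hbar * omega / (k_B * T))
    ratio = (n_bar + 1.0) * (1.0 + (3.0 / 4.0) ** 3
             * math.sqrt(2.0 * eps_0 * omega / sigma_1) / (k * z) ** 4)
    return tau_0 / ratio

# Illustrative, assumed inputs: niobium-like sigma_1 above T_c, distances of
# 50 and 100 micrometers, T = 10 K. In this regime the surface term dominates,
# so the lifetime scales essentially as z^4.
t1 = lifetime_normal(2.0e7, 50e-6, 10.0)
t2 = lifetime_normal(2.0e7, 100e-6, 10.0)
```

For these assumed parameters the lifetime is of order seconds, many orders of magnitude below the free-space value, which is the physical problem the superconducting surface is meant to cure.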
In contrast to the traditional Drude model, more realistic descriptions of a normal conducting metal in
terms of a permittivity include a significant real contribution to the dielectric function
in addition to an imaginary part.
One such description is discussed in Ref.\cite{brevik_05}, where
\bea \label{eps_iver}
\epsilon(\omega,T) = 1 - \frac{\omega_p^2}{\omega^2 + \nu(T)^2} + i \, \frac{\nu(T) \, \omega_p^2}{\omega \, ( \, \omega^2 + \nu(T)^2 \, ) } \, ,
\eea
and $\hbar\nu(T) = 0.0847 (T/\theta)^5\int_0^{\theta/T} dx \, x^5 e^x /(e^x - 1)^2$ eV using a Bloch-Gr\"{u}neisen
approximation. Here $\theta=175$ K for gold. The plasma frequency is $\hbar \omega_p = 9$ eV.
For temperatures $T\simeq 0.25 T_c$, we observe that Eq.(\ref{eps_iver}) leads to $\sigma_1(T) \simeq \sigma_2(T)$,
and that for lower temperatures $\sigma_2(T)$ will be the dominant contribution to the conductivity.
For temperatures $T/T_c \gtrsim 1$, when using Eq.(\ref{eps_iver}) we can set $\sigma_2(T) \simeq 0$ in
calculating the lifetime. For a bulk material of gold this leads to a lifetime almost two orders of magnitude longer
than for niobium, since for gold $\delta(T_c) \approx 1 \, \mu$m, using the parameters corresponding
to Fig.\ref{tau_sfa_t}. This finding is in accordance with Eq.(\ref{knight05}).
As seen from Fig.\ref{tau_sfa_t}, for a thin film and for $T/T_c \geq 1$ we find the opposite and remarkable result,
i.e. a decrease in conductivity can lead to a larger lifetime.
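The Bloch-Gr\"{u}neisen expression for $\hbar\nu(T)$ is straightforward to evaluate numerically. The sketch below does so for gold; the low-temperature limit of the integral, $\int_0^\infty x^5 e^x/(e^x-1)^2\,dx = 120\,\zeta(5) \approx 124.4$, serves as a check on the quadrature, and the evaluation temperature $300$ K is an illustrative choice of ours.

```python
import math

def bg_integral(upper, n=100000):
    """Midpoint-rule evaluation of int_0^upper x^5 e^x/(e^x-1)^2 dx."""
    h = upper / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += x ** 5 * math.exp(x) / math.expm1(x) ** 2
    return s * h

theta = 175.0   # Debye temperature of gold [K], as quoted in the text

def hbar_nu_eV(T):
    """Collision rate hbar*nu(T) in eV, Bloch-Grueneisen form."""
    return 0.0847 * (T / theta) ** 5 * bg_integral(theta / T)

rate_300K = hbar_nu_eV(300.0)   # a few tens of meV at room temperature
```

At room temperature (illustrative) the collision energy $\hbar\nu$ comes out near $0.04$ eV, far below the plasma energy $\hbar\omega_p = 9$ eV, as expected for a good metal.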
A much more detailed and often used description of the electrodynamic properties of superconductors than
the simple two-fluid model was developed by Mattis-Bardeen \cite{mattis_58}, and independently by
Abrikosov-Gor'kov-Khalatnikov \cite{gorkov58}, based on the weak-coupling BCS theory of superconductors.
In the clean limit, i.e. $l \gg \xi_0$, where $l$ is the electron mean free path and $\xi_0$ is the coherence
length of a pure material, the complex conductivity, normalized to $\sigma_n \equiv \sigma_1(T_c)$,
can be expressed in the form \cite{klein_94}
\bea \nonumber
\frac{\sigma(T)}{\sigma_n} &=& \int_{\Delta(T) - \hbar \omega}^{\infty} \frac{dx}{\hbar \omega} \,
\tanh \bigg ( \frac{x+\hbar \omega}{2 k_B T} \bigg ) \, g(x)
\\ \label{sigma_klein}
&-& \int_{\Delta(T)}^{\infty} \frac{dx}{\hbar \omega} \, \tanh \bigg ( \frac{x}{2 k_B T} \bigg ) \, g(x) \, , ~~~~~~~
\eea
\noi
where $g(x) = ( x^2 + \Delta^2(T) + \hbar \omega \, x )/u_1 \, u_2$
and $u_1 = \sqrt{x^2 - \Delta^2(T)}$, $u_2 = \sqrt{(x + \hbar \omega)^2 - \Delta^2(T)}$.
Here, the well-known BCS temperature dependence for the superconducting energy gap $\Delta(T)$ is given by \cite{fetter_71}
\bea \nonumber
&&\ln{ \left [ ~ \bigg( \, \hbar \omega_D + \sqrt{(\hbar \omega_D)^2 + \Delta^2(0)} \, \bigg) / \Delta(0) ~ \right ] }
\\ \label{trans_Debye}
&& =
\int_0^{\hbar \omega_D} \frac{d x}{\sqrt{x^2 + \Delta^2(T)}} \, \tanh \left[ \, \frac{\sqrt{x^2 + \Delta^2(T)}}{2 \, k_B T} \, \right ] \, , ~~~~
\eea
where $\omega_D$ is the Debye frequency and $\Delta(0) = 3.53 \, k_B T_c/2$.
For niobium, the Debye frequency is $\hbar \omega_D = 25$ meV.
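Equation (\ref{trans_Debye}) determines $\Delta(T)$ implicitly and can be solved by bisection, since its right-hand side decreases monotonically with $\Delta(T)$. A sketch for niobium follows; the value $T_c = 9.2$ K is our assumption for niobium and is not stated explicitly in the text.

```python
import math

hbar_omega_D = 25.0                 # Debye energy of niobium [meV]
k_B = 0.0861733                     # Boltzmann constant [meV/K]
T_c = 9.2                           # assumed critical temperature of Nb [K]
Delta0 = 3.53 * k_B * T_c / 2.0     # zero-temperature gap [meV]

# Left-hand side of Eq. (trans_Debye), fixed once Delta(0) is known.
lhs = math.log((hbar_omega_D + math.sqrt(hbar_omega_D ** 2 + Delta0 ** 2)) / Delta0)

def rhs(Delta, T, n=4000):
    # Midpoint quadrature of the gap integral on [0, hbar*omega_D].
    h = hbar_omega_D / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        e = math.sqrt(x * x + Delta * Delta)
        s += math.tanh(e / (2.0 * k_B * T)) / e
    return s * h

def gap(T):
    # rhs decreases with Delta, so bisect between a tiny gap and 2*Delta0.
    lo, hi = 1e-6, 2.0 * Delta0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid, T) > lhs else (lo, mid)
    return 0.5 * (lo + hi)

Delta_half = gap(0.5 * T_c)   # still within a few percent of Delta0
```

The solver reproduces the familiar BCS behaviour: the gap is essentially flat up to roughly $T_c/2$ and collapses rapidly as $T \to T_c$.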
According to a theorem of Anderson \cite{anderson_59,abrikosov_58}, the presence of non-magnetic impurities,
the only kind of impurities we consider in the present paper, will not modify the superconducting energy gap as given by \eq{trans_Debye}.
The complex conductivity will, however, in general be modified due to the presence of such impurities.
In the dirty limit where $l \ll \xi_0$, the complex conductivity has been examined within the framework
of the microscopic BCS theory (see e.g. Ref.\cite{chang_89}). In this case, the complex conductivity,
now normalized to $\sigma_L$, can conveniently be written in the form
\bea \nonumber
\frac{\sigma(T)}{\sigma_L} &=&
\int_{\Delta(T) - \hbar \omega}^{\infty} \frac{dx}{2} ~ \tanh \bigg ( \frac{x+\hbar \omega}{2 k_B T} \bigg )
\\ \nonumber
&& \times \bigg ( \frac{g(x) + 1}{u_2 - u_1 + i \hbar/\tau} -
\frac{g(x) - 1}{u_2 + u_1 - i \hbar/\tau} \bigg )
\\ \nonumber
&-& \int_{\Delta(T)}^{\infty} \frac{dx}{2} ~ \tanh \bigg ( \frac{x}{2 k_B T} \bigg )
\\ \label{sigma_chang}
&& \times \bigg ( \frac{g(x) + 1}{u_2 - u_1 + i \hbar/\tau} +
\frac{g(x) - 1}{u_2 + u_1 + i \hbar/\tau} \bigg ) \, . ~~~
\eea
Here we choose $\tau$ such that $\hbar/ \tau \Delta(0)=\pi\xi_0 /l = 13.61$, corresponding to the
experimental coherence length $\xi_0 = 39 $ nm and the mean free path $l(T \simeq 9 K) = 9$ nm.
The normalization constant is $\sigma_L = 1.85 \times 10^{14}\, (\Omega \mbox{m})^{-1}$ corresponding to
$\lambda_L(0) = 35 \,$nm for niobium \cite{pronin_98}.
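The quoted impurity parameter is simply $\pi\xi_0/l$ for the stated coherence length and mean free path; as a quick check:

```python
import math

# Impurity parameter hbar/(tau*Delta(0)) = pi*xi_0/l for the stated Nb values.
xi_0 = 39e-9   # experimental coherence length [m]
l = 9e-9       # electron mean free path near 9 K [m]
impurity_parameter = math.pi * xi_0 / l
```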
\begin{figure}
\caption{The complex conductivity $\sigma(T) \equiv \sigma_1(T) + i \sigma_2(T)$ as a function of the temperature $T/T_c$
with $\hbar/\tau \Delta(0)=13.61$ \cite{pronin_98}.
\label{sigma_sfa_t_FIG}}
\end{figure}
As the temperature decreases below $T_c$, Cooper pairs will be created. Despite the very small fraction of
Cooper pairs at temperatures just below $T_c$, the imaginary part of the conductivity as given by \eq{sigma_chang}
exhibits a vast increase (cf. \fig{sigma_sfa_t_FIG}).
Furthermore, due to the modification of the quasi-particle dispersion in the superconducting state, there is
an increase in $\sigma_1(T)$ as well just below $T_c$. This is the well-known coherence Hebel-Schlichter
peak \cite{hebel_59}. In contrast to the simple Gorter-Casimir temperature dependence,
both Eqs.(\ref{sigma_klein}) and (\ref{sigma_chang}) describe well the presence of the Hebel-Schlichter peak,
with a peak height less than $8 \, \sigma_n$ for both cases (cf. \fig{sigma_sfa_t_FIG}),
at least for the values of the physical parameters under consideration in the present paper.
In the opposite temperature limit, i.e. $T \ll T_c$, numerical studies of \eq{sigma_chang} show that
$\sigma_1(T)$ decreases exponentially fast. As seen in \fig{sigma_sfa_t_FIG}, the imaginary part
of the conductivity, on the other hand, is more or less constant for such temperatures.
In passing we observe that there is only a minor
difference in $\sigma_2(T)$ as obtained from Eqs.(\ref{sigma_klein}) and (\ref{sigma_chang}) respectively.
For temperatures around the Hebel-Schlichter peak, $\sigma_1(T)$ obtained
from Eq.(\ref{sigma_chang}) is, however, approximately twenty percent larger than $\sigma_1(T)$ as obtained
from Eq.(\ref{sigma_klein}). This difference has, nevertheless, only a small effect on the lifetime $\tau_B$.
Hence, computing $\tau_B$ using Eqs.(\ref{sigma_klein}) or (\ref{sigma_chang}) for the complex conductivity,
we conclude that the presence of non-magnetic impurities has no dramatic impact on the lifetime for
spontaneous emission (see \fig{tau_sfa_t_Hinf_FIG}). A comparison of the values of $\tau_B$ as obtained
using the two-fluid model for $H=\infty$ as presented in \fig{tau_sfa_t}
and the corresponding result as shown in \fig{tau_sfa_t_Hinf_FIG} shows, for our set of physical parameters,
that the two-fluid model overestimates $\tau_B$ by three orders of magnitude.
\begin{figure}
\caption{$\tau_B$ of a trapped atom near a superconducting bulk as a function of the temperature $T/T_c$.
The other relevant parameters are the same as in \fig{tau_sfa_t}.
\label{tau_sfa_t_Hinf_FIG}}
\end{figure}
For finite values of the impurity scattering time $\tau$ and for non-magnetic impurities we can also
investigate the validity of the two-fluid model approximation in terms of the lifetime $\tau_B$
for spontaneous emission processes. As we will now see, there are large deviations between
the microscopic theory and the two-fluid model approximation, in particular for low
temperatures.
see e.g. Ref.\cite{sadovskii_06} and references cited therein), the density of
superconducting electrons is given by
\bea \label{sigma_anderson}
\frac{n_s(T)}{n_0} \approx \frac{\pi \tau}{\hbar} \, \Delta(T) \, \tanh \left ( \, \frac{\Delta(T)}{2 \, k_B T} \, \right ) ~ ,
\eea
provided that $\tau \Delta(0)/\hbar \ll 1$.
We can now compute the dielectric function \eq{eps_j_2} using \eq{tfm_sigma1_2}.
We find that $\sigma_2(T)/\sigma_L$ obtained in this way agrees well with the
corresponding quantity obtained from \eq{sigma_chang}. There is, however, a considerable discrepancy
between the two-fluid expression for $\sigma_1(T)/\sigma_L$ and the corresponding expressions
obtained from the microscopic theory as given by \eq{sigma_chang}.
The numerical results for the lifetime in this case are illustrated by the dash-dotted lines
in \fig{tau_sfa_t_Hinf_FIG} and in \fig{tau_sfa_t_H09_FIG}.
\begin{figure}
\caption{$\tau_B$ of a trapped atom near a superconducting film as a function of the temperature $T/T_c$
with $H=0.9 \, \mu$m.
The other relevant parameters and labels are the same as in \fig{tau_sfa_t_Hinf_FIG}.
\label{tau_sfa_t_H09_FIG}}
\end{figure}
Since we are considering low frequencies $0 < \omega \ll \omega_g \equiv 2 \Delta(0)/\hbar$,
strong coupling effects can now be estimated by making use of the low-frequency limit of
the Eliashberg theory \cite{eliashberg_60} and its relation to the BCS theory (see e.g. Ref.\cite{carbotte_90}).
The so-called mass-renormalization factor $Z_N$, which in general is both frequency and temperature dependent,
is then replaced by its zero-temperature limit, which for niobium has the value $Z_N \approx 2.1$ \cite{carbotte_90}.
Using the strong-coupling expressions for the optical conductivity in a suitable form as e.g. given in Ref. \cite{klein_94},
we then find that the complex conductivity $\sigma (T)/\sigma_n$ is rescaled by
$\sigma_n \rightarrow \sigma_n/Z_N$, with the scattering time for non-magnetic impurities rescaled by
$\tau \rightarrow \tau/Z_N$. The change in the lifetime for spontaneous emission can
then e.g. be inferred from the relation Eq.(\ref{skagerstam06}), and we find only a minor
decrease of $\tau_B$ by the numerical factor $1/\sqrt{Z_N}\approx 0.69$, which also agrees well with more precise numerical evaluations.
The lifetime for spontaneous emission exhibits a minimum with respect to variation of the thickness $H$
of the superconducting film. This fact is illustrated in \fig{tau_sfa_H_FIG}.
Below the minimum at $H_{min} \approx 0.1 \, \mu$m,
a decrease of the thickness $H$ leads to an increase of the lifetime in proportion to $H^{-1}$. This happens despite the
growth in polarization noise, because the region generating the noise becomes thinner,
being limited by $H$ rather than by $\lambda_L(T)$.
Eventually, the lifetime reaches the free-space lifetime $\tau^0_B$ as $H$ tends to zero.
On the other hand, for large $H$, i.e. $H \gg \delta(T)$, the lifetime is constant with respect to $H$,
giving the same result as for an infinitely thick slab.
In the region between, i.e. $\lambda_L(T) \lesssim H \lesssim \delta(T)$, the lifetime is proportional to $H$.
Numerical studies show that a non-zero $\sigma_2(T)$ is important for
a well pronounced minimum of $\tau_B$ as a function of $H$.
\begin{figure}
\caption{$\tau_B$ as a function of the thickness $H$ of the film. The complex conductivity
is computed applying \eq{sigma_chang}.
\label{tau_sfa_H_FIG}}
\end{figure}
Some experimental work has been done using a superconducting body, e.g. by Nirrengarten {\it et al.} \cite{haroche_06},
where cold atoms were trapped near a superconducting surface. At a distance of $440 \, \mu$m from the chip surface,
the trap lifetime reaches $115$ s at low atomic densities and at a temperature of $40 \, \mu$K.
We believe the vast discrepancy between this experimental value and our theoretical calculations must be due to
effects that we have not taken into account in our analysis.
The use of a thin superconducting film may lead to the presence of vortex motion
and pinning effects (see e.g. Refs.\cite{larkin_94,villegas_05}). The presence of vortices will
in general modify the dielectric properties of the dielectric body. If we, as an example,
consider a vortex system in the liquid phase in a finite slab geometry, one expects a strongly
temperature dependent $\sigma_1 (T)$ with a peak value $\sigma_1 (T) \simeq 1.3\times 10^7/(H^2[\mu \textrm{m}]\, \nu [\textrm{kHz}]) \, (\Omega \textrm{m})^{-1}$ \cite{larkin_94}.
Close to this peak $\sigma_1 (T) \simeq \sigma_2 (T)$, and for $\nu \simeq 560 \,$kHz we find a lifetime
for spontaneous emission two orders of magnitude larger than for a film made of gold with the same geometry.
It is an interesting possibility that spontaneous emission processes close to thin superconducting films
could be used for an experimental study of the physics of vortex condensation. This possibility has also
been noticed in a related consideration, which has appeared during the preparation of the
present work \cite{fermani_2007}. There are also fabrication issues concerning the Nb-O
chemistry \cite{halbritter_99} which may have an influence on the lifetime for spontaneous emission.
To summarize, we have studied the rate for spontaneous photon emission, due to a magnetic
spin-flip transition, of a two-level atom in the vicinity of a normal conducting metal or a superconductor.
Our results confirm the conclusion in Ref.\cite{rekdal_06},
namely that the corresponding magnetic spin-flip lifetime will be boosted by several orders of magnitude by
replacing a normal conducting film with a superconducting body. This conclusion holds when
describing the electromagnetic properties of the superconductor in terms of a simple two-fluid model
as well as in terms of a more detailed and precise microscopic Mattis-Bardeen and
Abrikosov-Gor'kov-Khalatnikov theory.
For the set of physical parameters as used in Ref.\cite{rekdal_06} it so happens, more or less by chance,
that the two-fluid model results agree well with the results from the microscopic BCS theory.
We have, however, seen that even though the two-fluid model gives a qualitatively correct physical picture
for spontaneous photon emission, it, nevertheless, leads to large quantitative deviations when
compared to a detailed microscopic treatment. We therefore have to resort to the microscopic
Mattis-Bardeen \cite{mattis_58} and Abrikosov-Gor'kov-Khalatnikov \cite{gorkov58} theory in order
to obtain precise predictions. We have also shown that non-magnetic impurities as well as strong-coupling
effects have no dramatic impact on the rate for spontaneous photon emission. Vortex condensation in thin
superconducting films may, however, be of great importance. Finally, we stress the close relation
between the spin-flip rate for spontaneous emission and the complex conductivity, which indicates
a new method to experimentally study the electrodynamical properties of a superconductor or a normal conducting metal.
In such a context the parameter dependence for a bulk material as given by Eq.(\ref{skagerstam06}) may be useful.
\begin{center}ACKNOWLEDGEMENTS
\end{center}
This work has been supported in part by the Norwegian University of Science and Technology (NTNU)
and the Norwegian Research Council (NFR). One of the authors (B.-S.S.) wishes to thank Professors Y. Galperin,
T.H. Johansen, V. Shumeiko and G. Wendin for fruitful discussions.
\end{document}
\begin{document}
\title{Levi Problem in Complex Manifolds}
\begin{abstract}
Let $U $ be a pseudoconvex open set in a complex manifold $M$. When is $U$ a Stein manifold?
There are classical counter examples due to Grauert, even when $U$ has real-analytic boundary or has strictly pseudoconvex points.
We give new criteria for the Steinness of $U$ and we analyze the obstructions. The main tool is
the notion of Levi-currents. They are positive ${\partial\overline\partial}$-closed
currents $T$ of bidimension $(1,1)$ and of mass $1$, directed by the directions along which all continuous psh functions in $U$ have vanishing Levi-form. The extremal ones are supported on the sets where all continuous psh functions are constant.
We also construct, under geometric conditions, bounded strictly psh exhaustion functions, and hence we obtain Donnelly-Fefferman weights.
To any infinitesimally homogeneous manifold, we associate a foliation. The dynamics of the foliation
determines the solution of the Levi-problem.
Some of the results can be extended to the context of pseudoconvexity with respect to a Pfaff-system.
\end{abstract}
\noindent
{\bf Classification AMS 2010:} Primary: 32Q28, 32U10, 32U40, 32W05; Secondary: 37F75\\
\noindent
{\bf Keywords:} Levi-problem, ${\partial\overline\partial}$-closed currents, foliations.
\section{Introduction} \label{intro}
Let $(M,\omega)$ be a complex Hermitian manifold of dimension $n.$
Let $U\Subset M$ be a relatively compact domain with smooth boundary. We can assume that
$
U:=\left\lbrace z\in U_1:\ r(z)<0 \right\rbrace,
$
where $r$ is a function of class $\mathcal C^\infty$ in a neighborhood $U_1$ of $\overline{U},$
such that $dr$ is non-vanishing on $\partial U.$ Recall that $U$ is pseudoconvex if the Levi form
of $r$ is nonnegative on the complex tangent vectors to $\partial U.$ More precisely,
\begin{equation}\label{e:Levi}
\langle i{\partial\overline\partial} r(z),it\wedge\bar{t} \rangle\geq 0\qquad\text{if}\qquad \langle\partial r(z),t \rangle=0.
\end{equation}
Condition \eqref{e:Levi} is independent of the choice of $r.$
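As a standard illustration, consider the unit ball $U=\{z\in\mathbb{C}^n:\ |z|<1\}$ with $r(z):=|z|^2-1.$ Then
$$
i{\partial\overline\partial} r=i\sum_{j=1}^n dz_j\wedge d\bar z_j>0,
$$
so \eqref{e:Levi} holds for every $t,$ and the ball is strictly pseudoconvex (and of course Stein).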
The Levi problem asks whether a pseudoconvex domain is Stein, i.e., biholomorphic to a complex submanifold of $\mathbb{C}^N,$ see \cite{Ho}.
Grauert has characterized Stein manifolds as follows.
\begin{theorem} {\rm (\cite{G1})}
A complex manifold $M$ is Stein iff there is a strictly psh exhaustion function on $M.$
\end{theorem}
Narasimhan has given a similar characterization for Stein spaces \cite{N1}.
The Levi problem admits a positive solution in many cases, in particular, when $M$ is $\mathbb{C}^n$
or $\mathbb{P}^n.$ We refer to the surveys by Narasimhan \cite{N2}, Siu \cite{Siu}, Peternell \cite{P} and to the recent
discussion by Ohsawa \cite{O}. See also the book by H\"ormander \cite{Ho}.
In the general case, Grauert has given two remarkable examples.
Let $M:=\mathbb{C}^n/\Lambda$ be a complex torus. Assume that $e_1:=(1,0,\ldots,0)$ is the first vector
in the lattice $\Lambda.$ Let $\pi:\ \mathbb{C}^n\to M$ denote the canonical projection.
Then $U:=\pi(\{0<\mathrm{Re}\, z_1<1/2\})$ is pseudoconvex (Levi-flat) and $U$ is not Stein. Indeed, the compact set $\pi(\{\mathrm{Re}\, z_1=1/4\})$ is foliated by images of $\mathbb{C},$ hence holomorphic functions in $U,$ which are necessarily bounded
on such images, are constant.
Hirschowitz has analyzed such examples by introducing the notion of
infinitesimally homogeneous manifolds. A manifold $M$ is infinitesimally homogeneous if the global
holomorphic vector fields generate the tangent space at every point of $M.$ He then showed \cite{H1,H2}
\begin{theorem}{\rm (Hirschowitz) } Let $U$
be a domain in an infinitesimally homogeneous manifold. Assume $U$ satisfies the Kontinuit\"atssatz. Then $U$ admits a continuous
psh exhaustion function. If moreover $U$ does not contain a holomorphic image of $\mathbb{C},$ which is relatively compact in $U,$ then $U$ is Stein.
\end{theorem}
A second example of Grauert \cite{G2} shows that the boundary of $U$ can be strictly pseudoconvex
except on a small set and still $U$ is not Stein. In the present article we analyze the obstructions to being Stein for pseudoconvex domains with smooth boundary. Our main tool is the notion of Levi currents.
With the previous notations, a positive current $T$ of bidimension $(1,1)$ in $M,$ supported
on $\partial U$
is a Levi current if it satisfies the following Pfaff system
\begin{equation}\label{e:Pfaff}
T\wedge \partial r=0,\qquad T\wedge {\partial\overline\partial} r=0,\qquad i{\partial\overline\partial} T=0,\qquad \langle T,\omega\rangle=1.
\end{equation}
Observe that the support of the Levi current is very restricted, and that it is directed by the null space of the Levi form. We then obtain the following result.
\begin{theorem}\label{T:main_1}
Let $U\Subset M$ be a pseudoconvex domain with smooth boundary. If $\partial U$ has no Levi current, then $U$ is a modification of a Stein manifold. Moreover, there is a smooth function $v,$
such that if $\rho:=re^{-v},$ there is $\eta>0$ such that the function $\hat\rho:=-(-\rho)^\eta$
satisfies $i{\partial\overline\partial}\hat\rho\geq c|\hat\rho|\omega$ on $U\setminus K,$ where $K$ is compact.
\end{theorem}
Clearly, when $M=\mathbb{C}^n,$ or more generally a Stein manifold, Levi currents do not exist. Indeed, positive currents with compact support, satisfying $i{\partial\overline\partial} T=0,$
are necessarily $0.$ So the last part of the above theorem is an extension of the Diederich-Forn{\ae}ss theorem \cite{DF1}, which considers the case where $M=\mathbb{C}^n.$ A crucial point in their proof is that,
for a pseudoconvex domain $U$ in $\mathbb{C}^n,$ the function $-\log{\rm dist}(\cdot,\mathbb{C}^n\setminus \overline U)$
is psh. This tool is not available here.
A similar result was proved when $M=\mathbb{P}^n$ or more generally
for manifolds of positive holomorphic sectional curvature by Ohsawa and the author in \cite{OS}. It uses some geometric
inequalities satisfied by the distance to the boundary due to Takeuchi and Elencwajg \cite{T,E}.
The interest of constructing bounded exhaustion functions satisfying the above estimates is that the function $\psi:=-\log(-\hat\rho)$ is proper and satisfies the Donnelly-Fefferman condition; hence
one can apply their theorem \cite{DoF}, see also \cite{B}.
When the domain $U$ has real analytic boundary the non-existence of Levi currents is equivalent to the non-existence of a germ of holomorphic curve on the boundary of $U.$ This uses results
from \cite{DF2}.
In Section \ref{S:bounded_psh}, after proving the above results, we address the question of finding a bounded strictly
psh exhaustion function $\hat\rho,$ such that $\psi:=-\log(-\hat\rho)$ satisfies the Donnelly-Fefferman condition. We show in particular the following result.
\begin{theorem}\label{T:main_2}
Let $U\Subset M$ be a pseudoconvex domain with smooth boundary. Assume that there is a compact set $K\Subset U$ and a bounded function $v$ on $U\setminus K,$ such that $i{\partial\overline\partial} v\geq \omega$ on $U\setminus K.$ Then $U$ is a modification of a Stein manifold, and admits a bounded exhaustion function which is strictly psh out of a compact set.
\end{theorem}
In Section \ref{S:Steiness} we give a criterion for Steinness.
In Section \ref{S:Levi} we introduce the notion of Levi currents on an arbitrary complex manifold
(not just on the boundary of a pseudoconvex domain). This permits us to prove the following.
\begin{theorem}\label{T:main_3}
Let $U\Subset M$ be a pseudoconvex domain with smooth boundary. Assume it admits a continuous psh exhaustion function $\varphi.$ Assume $U$ is not a modification of a Stein manifold and that it contains at most finitely many compact varieties of positive dimension. Then there is a number $t_0,$
such that for every $t>t_0$ the level set $\{\varphi=t\}$ has a Levi-current $T_t.$
In particular, each $T_t$ is a positive ${\partial\overline\partial}$-closed current of mass one with compact support.
\end{theorem}
We also address briefly the general question of the existence of bounded strictly psh functions, through the notion of Liouville currents.
In Section \ref{S:manifolds_vector_fields}, we show that if $M$ is infinitesimally homogeneous and $U$ is not Stein, then $U$ is foliated by complex manifolds of fixed dimension $d>0,$
and the closure of each leaf is compact in $U.$
In Section \ref{S:Levi-Pfaff} we give a foliated version of the above results. More precisely, we
develop the notion of pseudoconvexity with respect to a Pfaff system.
\section{Bounded psh exhaustion functions}\label{S:bounded_psh}
The proof of Theorem \ref{T:main_1} is based on the following proposition.
\begin{proposition}\label{P:2.1}
Let $U\Subset M$ be a pseudoconvex domain with smooth boundary. There is no Levi current
on $\partial U$ iff there is a smooth strictly psh function $u$ in a neighborhood of $\partial U.$
\end{proposition}
\proof
If $u$ is a strictly psh function in a neighborhood of $\partial U$ and $T$ is a positive current
supported on $\partial U,$ then
$$
\langle T,i{\partial\overline\partial} u \rangle=\langle i{\partial\overline\partial} T,u \rangle.
$$
So if $i{\partial\overline\partial} T=0,$ we get that $T=0.$ Hence there is no Levi current.
We next show that any positive ${\partial\overline\partial}$-closed current $T$ of mass one supported on $\partial U$
is a Levi current. Since $T$ is ${\partial\overline\partial}$-closed, then
$\langle T, i{\partial\overline\partial} r^2\rangle=0.$ Expanding and using that $T$ is supported on $\{r=0\},$ we get
$T\wedge i\partial r\wedge\overline{\partial} r=0.$
Therefore, $T\wedge \partial r=0.$
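Explicitly, this step rests on the expansion
$$
i{\partial\overline\partial} (r^2)=2r\, i{\partial\overline\partial} r+2\, i\partial r\wedge{\overline\partial} r;
$$
pairing with $T,$ the first term vanishes since $T$ is supported on $\{r=0\},$ and the Cauchy-Schwarz inequality for the positive current $T$ then yields $T\wedge \partial r=0.$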
Let $\chi$ be a smooth non-negative function with compact support. Using that
$T\wedge \partial r=0,$ we get that:
$$
0=\langle T,i{\partial\overline\partial} (\chi r) \rangle=\langle T, \chi i{\partial\overline\partial} r \rangle.
$$
But $\partial U$ is pseudoconvex, i.e., $\langle i{\partial\overline\partial} r, it\wedge \bar{t} \rangle\geq 0$ when
$\langle \partial r,t\rangle=0.$
The current $T$ is directed by the complex tangent space to $\partial U$
because $T\wedge \partial r=0.$
It follows that $T\wedge\chi i{\partial\overline\partial} r=0$ for an arbitrary $\chi.$ Hence, $T$ is a Levi current
on $\partial U.$ So it is enough to show that if there is no ${\partial\overline\partial}$-closed positive current of mass one supported on $\partial U,$ there is a smooth strictly psh function in a neighborhood of
$\partial U.$
Let
$$
\mathcal C:=\left\lbrace T:\ T\geq 0\ \text{of bidimension}\ (1,1),\ \text{supported on}\ \partial U,\ \langle T,\omega\rangle=1 \right\rbrace,
$$
and
$$ Y:=\left\lbrace i{\partial\overline\partial} u,\ u\ \text{a smooth test function on } M \right\rbrace^\perp.$$
The space $Y$ is the space of $i{\partial\overline\partial}$-closed currents on $M.$
We have assumed that $\mathcal C\cap Y$ is empty. The convex compact set $\mathcal C$ lies in the dual of a reflexive space. The Hahn-Banach theorem implies that $\mathcal C$ and $Y$ are strongly separated.
Hence, there is $\delta>0$ and a test function $u,$ such that $\langle i{\partial\overline\partial} u,T\rangle \geq \delta,$
for every $T$ in $\mathcal C.$ So the function $u$ is strictly psh at all points of $\partial U,$ and hence in a
neighborhood of $\partial U.$ A similar use of the Hahn-Banach theorem occurs in \cite{S1,Su}.
\endproof
Since we have a strictly psh function in a neighborhood of $\partial U,$ Theorem \ref{T:main_1}
will be a consequence of the following theorem.
\begin{theorem}\label{T:2.2}
Let $K$ be a compact set in $U.$ Assume there is a function $v$ in $U\setminus K$ such that
one of the following conditions is satisfied:
\begin{enumerate}
\item[(i)] $v$ is bounded and $i{\partial\overline\partial} v\geq \omega;$
\item[(ii)] $v\geq 0,$ $i\partial v\wedge \overline{\partial}v\leq i{\partial\overline\partial} v,$ and
$i{\partial\overline\partial} v\geq \omega.$
\end{enumerate}
Then there is a psh exhaustion function $\hat\rho,$ vanishing on $\partial U$ and such that
on $U\setminus K,$
$i{\partial\overline\partial} \hat\rho \geq |\hat\rho|\omega.$
\end{theorem}
\proof
We know that for $z\in\partial U,$ $\langle i{\partial\overline\partial} r(z),it\wedge \bar{t} \rangle\geq 0,$ when
$\langle \partial r(z),t\rangle=0.$ It follows that there is a constant $C,$ such that for $z\in\partial U$ and $t$ arbitrary in the tangent space $T^{(1,0)}_z(M),$
$$
\langle i{\partial\overline\partial} r(z),it\wedge \bar{t} \rangle\geq-C|\langle \partial r(z),t\rangle| |t|.
$$
Choose a small neighborhood $V$ of $\partial U$ such that every point $z$ in $V$ projects to a point $z_1\in\partial U.$ Then $\partial r(z)=\partial r(z_1)+O(r).$ Hence,
\begin{equation}\label{e:2.1}
\begin{split}
\langle i{\partial\overline\partial} r(z),it\wedge \bar{t} \rangle&\geq \langle i{\partial\overline\partial} r(z_1),it\wedge \bar{t} \rangle
-C_0|r(z)||t|^2\\
&\geq-C_1 |\langle \partial r(z_1),t\rangle||t|-C_1|r(z)||t|^2\\
&\geq-C|\langle \partial r(z),t\rangle||t|-C|r(z)||t|^2.
\end{split}
\end{equation}
Define $\rho:=re^{-Av}$ and $\hat\rho:=-(-\rho)^\eta;$ the constants $A>0$ and $\eta>0$ will be chosen later.
Observe first that, by Richberg's theorem \cite{R}, we can assume that $v$ is smooth. We have
$$
\langle i{\partial\overline\partial} \hat\rho, it\wedge\bar{t}\rangle=\eta|r|^{\eta-2}e^{-A\eta v}\,D(t).
$$
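For the reader's convenience, the first step of this computation, before substituting $\rho=re^{-Av},$ is the chain rule applied to $\hat\rho=-(-\rho)^\eta:$
$$
i{\partial\overline\partial}\hat\rho=\eta(-\rho)^{\eta-1}\, i{\partial\overline\partial}\rho+\eta(1-\eta)(-\rho)^{\eta-2}\, i\partial\rho\wedge{\overline\partial}\rho.
$$
The quantity $D(t)$ collects the resulting terms, evaluated at $it\wedge\bar{t}.$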
To get that $i{\partial\overline\partial} \hat\rho\gtrsim|\hat\rho|\omega,$ we need to show that
\begin{equation}\label{e:star}
D(t)\gtrsim |r|^2|t|^2.
\end{equation}
Let $\mathcal Lv$ denote $\langle i{\partial\overline\partial} v(z),it\wedge\bar{t}\rangle.$ We then have
\begin{eqnarray*}
D(t)&=& Ar^2\big(\mathcal Lv -\eta A|\langle \partial v,t\rangle|^2\big)+|r|\big(\mathcal Lr-2\eta\,\mathrm{Re}\,\langle \partial r,t\rangle\overline{\langle \partial v,t\rangle}\big)\\
&+& (1-\eta) |\langle \partial r,t\rangle|^2.
\end{eqnarray*}
We also have, by the inequality $2ab\leq a^2+b^2,$
$$
2\eta|r|\, \big|\mathrm{Re}\,\langle \partial r,t\rangle \overline{\langle\partial v,t \rangle}\big|\leq r^2
|\langle\partial v,t \rangle|^2+\eta^2 |\langle\partial r,t \rangle|^2.
$$
So using relation \eqref{e:2.1} we get
\begin{eqnarray*}
D(t) &\geq & Ar^2\big (\mathcal L v -(\eta A+A^{-1}) |\langle\partial v,t \rangle|^2 \big)
+(1-\eta-\eta^2)|\langle\partial r,t \rangle|^2\\
&-& C|r||\langle\partial r,t \rangle||t|-Cr^2|t|^2.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
D(t) &\geq & Ar^2\Big(\mathcal Lv -{C\over A}|t|^2- \big(\eta A+ {1\over A}\big) |\langle\partial v,t \rangle|^2
- {1\over A\sqrt{\eta}}|t|^2 \Big)\\
&+& \big(1-\eta-\eta^2 -C\sqrt{\eta}\big) |\langle\partial r,t \rangle|^2.
\end{eqnarray*}
Since $i{\partial\overline\partial} v\geq \omega,$ and $i\partial v\wedge \overline{\partial} v\leq i{\partial\overline\partial} v,$ it suffices to take
$A\simeq {1\over 2\sqrt{\eta}}$ and $\eta$ small enough. If $v$ is bounded, we can assume $v\geq 0$
and replace $v$ by $Cv^2;$ then condition (ii) is satisfied.
\endproof
\begin{remark}\rm
(i) In particular, we obtain that $U$ is a modification of a Stein space.\\
(ii) Observe that when $v$ extends smoothly to $\partial U,$ as in Theorem \ref{T:main_1}, then
$\hat\rho$ is H\"older continuous.\\
(iii) The conditions on $v$ are of the type required for the Donnelly-Fefferman weights, except that
we do not ask for completeness, i.e., that $v\to\infty$ when we approach $\partial U.$
\end{remark}
\begin{example}
Let $(M,\omega)$ be a compact K\"ahler manifold. Let $T$ be a positive closed current of bidegree $(1,1),$ cohomologous to $\omega.$ Write $T-\omega=i{\partial\overline\partial} v;$ we can assume $v\leq 0.$
Assume $U$ is pseudoconvex and disjoint from the support of $T.$ Then on $U,$ we have
$\omega=i{\partial\overline\partial}(-v).$ The hypothesis of Theorem \ref{T:2.2} is satisfied if $T$ admits locally bounded
potentials. Otherwise the hypothesis of Theorem \ref{T:3.1} below is satisfied.
\end{example}
\begin{theorem}\label{T:2.3}
Let $U\Subset M$ be a smooth pseudoconvex domain with real-analytic boundary. Then $\partial U$ has no Levi current iff it contains no germ of holomorphic curve.
\end{theorem}
\proof
Suppose $\partial U$ has no Levi current. Then there is a smooth strictly psh function $v$ near $\partial U.$ Let $W$
denote the union of non-trivial germs of holomorphic discs on $\partial U$ and suppose $W$ is nonempty.
Consider the closure $\overline{W}$ and let $p\in\overline{W}$ be a point where the function $v$ reaches its maximum on $\overline{W}.$
According to the proof of Theorem 4 of \cite{DF2} there is a point $q$ close to $p$ and a nontrivial subvariety $V$ through $q,$
in a polydisc centered at $q$ of radius $\delta.$ Moreover, we can choose $q$ arbitrarily close to $p,$ without changing $\delta.$ Since $v$ is strictly psh,
we can assume that the maximum at $p$ is reached at an interior point of $V,$ a contradiction. So $W$ is empty.
Assume now that $\partial U$ does not have a non-trivial germ of holomorphic disc. It follows from Theorem 3 in \cite{DF2} that the holomorphic dimension of any real analytic
submanifold $N$ is zero. This means that for every $z\in N,$ $T^{(1,0)}_z(N)$ intersects
$$
M_z:=\left\lbrace t:\ t\in T^{(1,0)}_z(\partial U),\ \langle i{\partial\overline\partial} r(z),it\wedge\bar{t}\rangle=0 \right\rbrace
$$
only at $0.$ So for each non-zero vector $t\in T^{(1,0)}_z(N)$, $\langle i{\partial\overline\partial} r(z),it\wedge\bar{t}\rangle>0.$
The authors in \cite{DF2} state and prove their theorem in $\mathbb{C}^n$ but this part of the argument is of local nature.
Let $N_0$ denote the real analytic set where $\dim M_z>0.$ Then
$N_0\subset \bigcup_{k=1}^r N_k,$ where $N_k$ is a closed submanifold in $\partial U\setminus \bigcup_{j=1}^{k-1} N_j,$
moreover
$\langle i{\partial\overline\partial} r(z),it\wedge\bar{t}\rangle>0$ for any non-zero $t\in T^{(1,0)}_z(N_k).$ This follows from the Lojasiewicz stratification of real analytic sets and from the
above statement, see \cite{DF2}. A Levi current is a priori supported on $N_0.$
Let $\rho_j$ be a defining function of $N_j$ and let $\chi$ be a cutoff function. If we expand $\langle T,i{\partial\overline\partial} (\chi\rho_j^2)\rangle=0,$
we get that $T\wedge \partial \rho_j=0.$
Writing that $\langle T,i{\partial\overline\partial} (\chi\rho_j)\rangle=0,$ we get also that $T\wedge i{\partial\overline\partial} \rho_j=0.$ The non-degeneracy of $i{\partial\overline\partial} \rho_j$ on $M_z$ implies that
$T=0.$
\endproof
We recall the following form of the Donnelly-Fefferman theorem, see \cite{DoF} and \cite{B}.
\begin{theorem}\label{T:2.4}
Let $N$ be a complex manifold of dimension $n.$ Let $\Omega:=i{\partial\overline\partial}\varphi$ be a complete K\"ahler metric on $N.$
Assume there is $C_0>0$ such that
$i\partial \varphi\wedge \overline\partial{\varphi}\leq C_0 i{\partial\overline\partial}\varphi.$
Assume $p+q\not=n.$
Then, for any $(p,q)$-form $f$ in $L^2$ with $\overline\partial f=0,$ there is a solution $u,$ to the equation
$\overline\partial u=f$ with
$$
\| u\|^2_\Omega\leq C\|f\|^2_\Omega.
$$
\end{theorem}
The condition on $\varphi$ means just that $|d\varphi|_\Omega$ is bounded. The completeness means that $\varphi(z)\to\infty$ when $z\to\infty$ on $N.$
The following proposition permits us to apply the above theorem to the pseudoconvex domains considered previously. We just have to assume that $U$ does not contain analytic varieties of
positive dimension.
\begin{proposition}
Let $U\Subset M$ be a pseudoconvex domain with a negative exhaustion function $\hat\rho,$ satisfying
$$
i{\partial\overline\partial}\hat\rho \gtrsim |\hat\rho|\omega.
$$
Let $\varphi:=-\log(-\hat\rho).$ Then the metric $\Omega:= i{\partial\overline\partial}\varphi$ is complete and $|d\varphi|_\Omega$ is bounded. So Theorem \ref{T:2.4} applies.
\end{proposition}
\proof
We have
$$
i{\partial\overline\partial}\varphi={i{\partial\overline\partial}\hat\rho \over|\hat\rho|}+ {i\partial\hat\rho \wedge \overline\partial\hat\rho\over \hat\rho^2}
={i{\partial\overline\partial}\hat\rho \over |\hat\rho|}+i\partial\varphi \wedge \overline\partial\varphi\geq c\omega+i\partial\varphi \wedge \overline\partial\varphi.
$$
Moreover, since $\hat\rho\to 0$ when $z\to\partial U,$ the metric $\Omega$ is complete.
\endproof
\begin{remark}
In Theorem \ref{T:2.2}, we start with a potential $v$ satisfying $i\partial v\wedge\overline\partial v\leq i{\partial\overline\partial} v,$ but the metric is not necessarily complete. We end up
with a complete one associated to $\varphi:=-\log(-\hat\rho).$
\end{remark}
\section{A condition for Steinness of a pseudoconvex domain}\label{S:Steiness}
As recalled in the introduction, according to Grauert's theorem, to prove that a pseudoconvex domain is Stein, one should construct a strictly psh exhaustion function.
Here we give a quite weak assumption in order to construct such an exhaustion.
\begin{theorem}\label{T:3.1}
Let $U\Subset M$ be a pseudoconvex domain with smooth boundary. Assume there is a neighborhood $V$ of $\partial U,$
and a function $v$ on $U\cap V$ such that the following conditions are satisfied:
\begin{enumerate}
\item[ (i)] $i{\partial\overline\partial} v\geq \omega;$
\item[(ii)] If $r$ denotes a defining function for $\partial U,$ then for any $\epsilon>0,$
$v>\epsilon\log|r|$ when $r\to 0.$
\end{enumerate}
Then $U$ admits a bounded exhaustion function, with all level sets strictly pseudoconvex. Moreover, $U$ is a modification
of a Stein space.
\end{theorem}
When $v$ is defined near $\partial U$ and satisfies $i{\partial\overline\partial} v\geq \omega,$ Elencwajg \cite{E} showed that $U$ is a modification
of a Stein space (see also \cite{Siu}).
\proof
Define $\sigma:=re^{-Av}.$ According to Richberg's approximation theorem \cite{R} we can assume that $v$ is smooth. Condition (ii) implies that for every $A>0,$
$\sigma$ is an exhaustion function. We have
$\overline\partial \sigma=e^{-Av}(\overline\partial r-Ar\overline\partial v)$ and
$$
i{\partial\overline\partial} \sigma=e^{-Av}\Big( i{\partial\overline\partial} r- 2A\,\mathrm{Re}(i\partial r\wedge \overline\partial v)-Ar\,i{\partial\overline\partial} v+A^2r\,i\partial v\wedge \overline\partial v \Big).
$$
We are going to check that the level sets of $\sigma$ are strictly pseudoconvex. If $t$ is a $(1,0)$ tangent vector to a level set, then
$\langle \partial \sigma,t\rangle=0$ i.e. $\langle \partial r,t\rangle=Ar\langle \partial v,t\rangle.$
So
\begin{eqnarray*}
\langle i{\partial\overline\partial} \sigma, it\wedge \overline{t} \rangle&=&e^{-Av}\big ( \langle i{\partial\overline\partial} r, it\wedge \overline{t} \rangle+2A^2|r||\langle \partial v,t\rangle|^2\\
&+&A^2r |\langle \partial v,t\rangle|^2+A|r|\langle i{\partial\overline\partial} v, it\wedge \overline{t} \rangle\big).
\end{eqnarray*}
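Here we have substituted the tangency relation $\langle \partial r,t\rangle=Ar\langle \partial v,t\rangle$ into the expression for $i{\partial\overline\partial}\sigma$ and used that $r<0$ on $U;$ for instance, the cross term gives
$$
-2A\,\mathrm{Re}\big(\langle \partial r,t\rangle\overline{\langle \partial v,t\rangle}\big)=-2A^2r\,|\langle \partial v,t\rangle|^2=2A^2|r|\,|\langle \partial v,t\rangle|^2.
$$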
We also have near $\partial U$ that
$$
\langle i{\partial\overline\partial} r(z), it\wedge \overline{t} \rangle
\geq -C\big ( |\langle \partial r(z),t\rangle||t|+|r||t|^2 \big).
$$
So if $A>C$ and $A$ is large enough,
\begin{eqnarray*}
e^{Av} \langle i{\partial\overline\partial} \sigma, it\wedge \overline{t} \rangle &\geq& -CA |r|\,|\langle \partial v,t\rangle|^2 -2C|r| |t|^2\\
&+& A^2|r||\langle \partial v,t\rangle|^2 +A|r| \langle i{\partial\overline\partial} v, it\wedge \overline{t} \rangle\\
&\geq & |r| \big( A\langle i{\partial\overline\partial} v,it\wedge \bar{t}\rangle-2C|t|^2 \big)>0.
\end{eqnarray*}
It follows that there is a function $C(\sigma)$ such that
$$
i{\partial\overline\partial} \sigma \geq -{C(\sigma)\over 2}i\partial\sigma\wedge \overline\partial\sigma\qquad\text{near}\qquad \partial U.
$$
Let $\kappa(t):=\int_{-1}^t C(s)ds,$ and $\chi(t):=\int_{-1}^t e^{\kappa(s)}ds.$
Then $\chi'=e^{\kappa},$ so $\chi''(s)-C(s)\chi'(s)=0.$ Define $\rho:=\chi(\sigma).$ Then $\rho$ is a strictly psh exhaustion function. Indeed,
\begin{multline*}
i{\partial\overline\partial} \rho=\chi'(\sigma)i{\partial\overline\partial}\sigma+\chi''(\sigma)i\partial\sigma\wedge\overline\partial\sigma\\
> \big( {-C\over 2} \chi'(\sigma) +\chi''(\sigma) \big)i\partial\sigma\wedge\overline\partial\sigma\geq {C\over 2}\chi'(\sigma)i\partial\sigma\wedge\overline\partial\sigma.
\end{multline*}
On the other hand, when $\langle\partial\sigma,t \rangle=0,$ we also have strict positivity. So $\rho$ is a strictly psh exhaustion function.
\endproof
\section{An obstruction to Steiness: Levi currents}\label{S:Levi}
Let $U\Subset M$
be a locally Stein domain in a complex Hermitian manifold $(M,\omega).$ It is not clear whether there are non-constant psh functions in $U.$
When $U$ admits a continuous psh exhaustion function, the domain $U$ may not have strictly psh functions and hence is not necessarily Stein; this is the case in Grauert's examples \cite{G1,G2} and in the families described
by Ohsawa \cite{O}.
When $M$ is an infinitesimally homogeneous manifold \cite{H1,H2}, the domain $U$ has a psh exhaustion function, but may not have strictly psh functions and hence is not necessarily Stein.
In this section we discuss an obstruction to Steinness given by Levi currents with compact support in $U.$ In order to introduce this notion
we need to define $T\wedge i{\partial\overline\partial} v,$ when $T$ is a positive ${\partial\overline\partial}$-closed current and $v$ is a continuous psh function.
We recall a few results from \cite{DS1}.
Let $T$ be a positive current of bidegree $(p,p).$ Assume that $i{\partial\overline\partial} T$ is of order $0.$
When $T$ is a current of order $0,$ the mass of $T$ on a compact $K$ is denoted by $\| T\|_K.$
When $T$ is positive, and $M$ is of dimension $n,$ $\| T\|_K$ is equivalent to $\big|\int_K T\wedge \omega^{n-p}\big|.$
When $u$ is a smooth psh function on an open set $V\subset M$
we have
\begin{equation}\label{e:4.1}
i{\partial\overline\partial} u\wedge T:=u(i{\partial\overline\partial} T) -i{\partial\overline\partial}(uT)+i\partial({\overline\partial} u\wedge T)-i{\overline\partial}(\partial u\wedge T).
\end{equation}
The following estimate is proved in \cite{DS1}. Let $L\Subset K$ be two compact sets in $V.$ Assume $T$ is positive
and $i{\partial\overline\partial} T$ is of order $0.$ Then there is a constant $C_{K,L}>0$ such that for every smooth
bounded psh function $u$ on $V,$ we have
\begin{equation}\label{e:4.2}
\int_L i\partial u\wedge {\overline\partial} u\wedge T\wedge \omega^{n-p-1}\leq C_{K,L}\|u\|^2_{L^\infty(K)}\big (\|T\|_K+\|i{\partial\overline\partial} T\|_K\big)
\end{equation}
and
\begin{equation}\label{e:4.3}
\| i {\partial\overline\partial} u\wedge T\|_L\leq C_{K,L}\|u\|_{L^\infty(K)}\big (\|T\|_K+\|i{\partial\overline\partial} T\|_K\big).
\end{equation}
This permits us to extend relation \eqref{e:4.1} to $u$ psh and continuous. Moreover, when $u_n$ converges locally uniformly
to $u$ then
$$
I_{n,m}:=\int_L i\partial (u_n-u_m)\wedge {\overline\partial} (u_n-u_m)\wedge T\wedge \omega^{n-p-1}
$$
converges to $0.$ It is enough to prove this for a ball $B,$ assuming that $u_n$ and $u_m$ coincide near the boundary of $B,$ see \cite{DS1}. So
$$
2I_{n,m}=-2\int_B (u_n-u_m)\, i{\partial\overline\partial} (u_n-u_m)\wedge T+\int_B i{\partial\overline\partial} \big[(u_n-u_m)^2\big]\wedge T.
$$
Hence,
\begin{eqnarray*}
2I_{n,m}&\leq& \|u_n-u_m\|^2_{L^\infty(B)}\, \| i{\partial\overline\partial} T\|_B +2\int_B|u_n-u_m|\,i{\partial\overline\partial} u_n\wedge T\\
&+&2\int_B|u_n-u_m|\,i{\partial\overline\partial} u_m \wedge T.
\end{eqnarray*}
The convergence follows using \eqref{e:4.3}.
Estimate \eqref{e:4.2} also permits us to define $\partial u\wedge T.$ Then $\partial u_n\wedge T\to \partial u\wedge T,$ as currents of order $0,$ if $u_n$ converges to $u$ uniformly on compact sets.
\begin{definition}
A Levi current in $V$ is a nonzero positive current $T$ of bidimension $(1,1),$ such that $i{\partial\overline\partial} T=0$ and $T\wedge i{\partial\overline\partial} v=0$
for every continuous psh function $v$ in $V.$
A Liouville current in $V$ is a nonzero positive current $T$ of bidimension $(1,1),$ such that $i{\partial\overline\partial} T=0$ and $T\wedge i{\partial\overline\partial} v=0$
for every bounded continuous psh function $v$ in $V.$
\end{definition}
Observe that this implies that for a Levi current $T$ (resp. a Liouville current) we have that
$T\wedge {\overline\partial} v=0$ for every continuous psh function $v$ (resp. for every bounded continuous psh function $v$).
\begin{proposition}
Let $K$ be a compact set in $V.$ If $S$ is a positive current of bidimension $(1,1),$ supported in $K,$ such that $i{\partial\overline\partial} S=0,$ then $S$ is a Levi current. The convex set $\mathcal L(K)$ of Levi currents of mass $1,$ supported in $K$ is compact.
Let $T$ be a Levi current in $V$ and let $v$ be a non-negative continuous psh function; then the current $vT$ is a Levi current. If $T$ is an extremal Levi current in $V,$ then continuous psh functions in $V$ are constant on $H:={\rm supp}(T).$
A similar statement holds for Liouville currents in $V.$
\end{proposition}
\proof
Suppose $S$ is a positive current of bidimension $(1,1),$ ${\partial\overline\partial}$-closed and supported on $K.$
We observe first that for $u$ continuous and psh in a neighborhood of $K,$ the current ${\overline\partial} u\wedge S$ is well defined and of order $0.$ Then
we apply \eqref{e:4.1} to the function $1.$ This shows that ${\partial\overline\partial} u\wedge S= 0.$ So $S$ is a Levi current. It follows that $\mathcal L(K)$ is compact.
Assume $T$ is a Levi current in $V$. Let $v$ be a continuous psh function in $V.$ Let $h$ be a convex strictly increasing function. Since ${\partial\overline\partial} h(v)\wedge T=0,$ we get that ${\overline\partial} v\wedge T=0.$ We then apply formula \eqref{e:4.1} and get that $-{\partial\overline\partial} (vT)={\partial\overline\partial} v\wedge T=0.$
Assume $T$ is extremal. Let $u$ be continuous psh in $V.$ Suppose
$(u<0)$ and $(u>0)$ are two
nonempty open sets in $H.$ Let $\chi$ be a convex increasing function vanishing for $t<0$
and strictly increasing for $t>0.$ Then the current $S:=\chi(u)T$ is a Levi current as we have seen. This contradicts the extremality of $T.$
The proof for Liouville currents is similar.
\endproof
\begin{theorem}\label{T:4.3}
Let $N$ be a complex manifold with a psh continuous exhaustion $\varphi.$ Then $N$ is Stein iff there is no Levi current
with compact support in $N.$
If $N$ admits a bounded continuous psh exhaustion function, then it admits a bounded strictly
psh exhaustion function iff there is no Levi current with compact support in $N.$
\end{theorem}
\proof
If there is a strictly psh, continuous function $v$ in $N$ and $T$
is a positive current such that $T\wedge {\partial\overline\partial} v=0,$ then $T=0.$
We have to prove the converse. We show that if there is no Levi current on a compact set $K,$
then there is a smooth function $v_K, $ strictly psh in a neighborhood of $K.$
Suppose $S$ is a positive current of bidimension $(1,1)$ ${\partial\overline\partial}$-closed and supported on $K.$
As we have seen, for every $u$ continuous psh near $K,$
in particular, for $u$ continuous psh on $N,$ we have that $S\wedge i{\partial\overline\partial} u=0.$ So $S$ is a Levi current. Hence $S=0.$ The duality argument used in the proof of Proposition \ref{P:2.1} implies the existence of $v_K$ smooth and strictly psh near $K.$
For a compact $K$ let $\widehat{K}$ denote the hull with respect to continuous psh functions. Since there is a psh continuous exhaustion function we can choose $K_n\nearrow N,$ $K_n\Subset K_{n+1},$
and $K_n=\widehat{K}_n.$ Let $v_n$ be a continuous function strictly psh near $K_{n+1}.$ Let $\chi_n$ be a convex increasing function such that
$$
\chi_n(\varphi)<\inf_{K_n} v_n\quad\text{on}\quad K_{n-1},\quad \text{and} \quad \chi_n(\varphi)>\sup v_n\quad\text{near}\quad \partial K_n.
$$
Define
$$
u_n:=\sup(\chi_n(\varphi),v_n).
$$
Then $u_n$ is psh continuous on $N$ and strictly psh near $K_{n-1}.$ Moreover, it is an exhaustion.
If we choose $0<\epsilon_n< {1\over 2^n} \|u_n\|^{-1}_{K_n},$ then the function $u:=\sum_n\epsilon_nu_n$
is a strictly psh function. So the function $u+\chi(\varphi)$ is strictly psh and an exhaustion if $\chi$ is convex and increasing fast enough.
Suppose now that $\varphi<0$ is psh and $\varphi\to 0$
when $z\to\infty$ on $N.$ We consider $u_n$ as above and define
$w_n:=\epsilon_n(u_n-c_n)$ with $c_n:=\lim_{z\to\infty} u_n(z).$ It is clear that this limit exists.
If $\epsilon_n$ is small enough, $w:=\sum w_n$ is a bounded strictly psh exhaustion.
\endproof
\proof[Proof of Theorem \ref{T:main_3}]
We will need the following theorem of Grauert. If $N$ is a complex manifold
with a continuous psh exhaustion $\varphi,$ such that $\varphi$ is strictly psh out of a compact set
$K$ of $N,$ then $N$ is a proper modification of a Stein space. More precisely, one can blow down analytic sets
in $N$ to points and get holomorphic convexity for compact sets in the blow down.
Suppose $N$ is not a modification of a Stein space and that for a sequence $t_n\to\infty,$ there is no Levi current on $(\varphi=t_n).$ We have seen that this implies the existence of a smooth strictly psh
function near $(\varphi=t_n).$ Using the construction in the previous theorem, there is a continuous psh
exhaustion $\psi,$ strictly psh in a neighborhood of each $(\varphi=t_n).$ So $(\varphi< t_n)$ is a modification of a Stein space. In particular the compact analytic sets $A_j$ are necessarily in $(\varphi=s_j),$ for some
$s_j.$ From the finiteness
assumption the $A_j$ cannot accumulate near the boundary. So the $s_j$ are uniformly bounded. Hence, there is $t_1$ such that for $t>t_1,$ there is no Levi current with compact support on $(\varphi>t_1).$ Here again, we use Grauert's theorem. The argument in Theorem \ref{T:4.3} shows that we can construct a psh exhaustion function strictly psh on
$(\varphi>t_1).$ Hence $U$ is a modification of a Stein space, a contradiction. So, for $t$ large enough there is a Levi current on $(\varphi=t).$
\endproof
The following proposition describes the function theory near the support of a positive ${\partial\overline\partial}$-closed current in $\mathbb{P}^k.$
\begin{proposition}
Let $T$ be an extremal positive current of bidimension $(1,1),$ ${\partial\overline\partial}$-closed in $\mathbb{P}^k$ with support $K.$ Then there exists a fundamental sequence of open neighborhoods $(U_n)$ of $K$, such that every psh function in $U_n$ is constant.
\end{proposition}
\proof
Let $u$ be a psh function in a neighborhood $V$ of $K.$ Since $\mathbb{P}^k$ is homogeneous, we can assume that $u$ is smooth and satisfies $0<u<1.$ As we have seen $T$ is a Levi current, hence $uT$
is ${\partial\overline\partial}$-closed. The extremality of $T$ implies that $u$ is constant on $K.$ We can consider the images of $T$ by automorphisms of $\mathbb{P}^k$ close to the identity with a fixed point in $K.$ The function $u$
has to be constant on the images of $K.$ The proposition follows.
\endproof
\begin{remark}\rm
1) Grauert's example shows that we cannot replace in the previous statement the projective space by a
torus.
2) Similarly, if $K$ is a minimal compact laminated set in $\mathbb{P}^k$ with finitely many singular points, then one can construct a fundamental sequence $(U_n)$ of open neighborhoods of $K$ such that every psh function in $U_n$ is constant. For basics on laminations see for example the survey \cite{FS2}.
3) The complement of the support of a positive ${\partial\overline\partial}$-closed current of bidimension $(1,1)$ is
$1$-pseudoconvex (in dimension $2$ it is pseudoconvex). So the support is quite large, see \cite{FS,S2}.
The support $H$ of a positive ${\partial\overline\partial}$-closed current satisfies the local maximum principle for
psh functions near $H,$ see \cite{S1}.
4) If $U$ is not Stein but admits a continuous psh exhaustion function $\varphi$, the classes
$$C_a:=\left\lbrace z:\ u(z)=u(a)\quad\text{for every}\quad u\quad\text{psh continuous}\right\rbrace$$
are nontrivial and indeed are contained in $(\varphi=t);$ hence some sets $C_a$ are of Hausdorff dimension
at least $2.$ Finally, the extremal Levi currents in $U$ are Levi currents on $(\varphi=t),$
except that the level set is not necessarily smooth.
\end{remark}
We next address the relation with pseudoconcave manifolds.
We first recall some definitions; see \cite{G3}.
\begin{definition}
A real $\mathcal C^2$ function $u$ defined in a complex manifold $U$ is strictly $q$-convex if the complex Hessian (Levi form) has at least $q$ strictly positive eigenvalues at every point. A complex manifold $U$ is $q$-complete if it admits a strictly $q$-convex exhaustion function.
\end{definition}
\begin{definition}
A function $\rho$ is strictly $q$-convex with corners on $U$ if for every point $p\in U$ there is a neighborhood $U_p$ and finitely many strictly $q$-convex $\mathcal C^2$ functions $\{\rho_{p,j}\}_{j \leq \ell_p}$
on $U_p$ such that $\rho_{|U_p}= \max_{j\leq \ell_p} \{\rho_{p,j}\}.$
The manifold $U$ is strictly $q$-complete with corners if it admits an exhaustion function which is strictly $q$-convex with corners.
\end{definition}
\begin{theorem}\label{T:4.4}
Let $K$ be a compact set in a connected complex manifold $U.$ Assume $U\setminus K$ is strictly $2$-complete with corners. Then $U\setminus K$ admits no non-constant psh function.
Moreover, there is a non-zero positive ${\partial\overline\partial}$-closed current $T$ of bidimension $(1,1)$ supported in $K.$
\end{theorem}
\proof
Let $\rho$ denote the strictly $2$-convex exhaustion function with corners. For each $p$ in a level set
$(\rho=c)$, there is on a neighborhood $U_p,$ a strictly 2-convex function $\rho_{p}$, such that
$\rho_{p}(p)=c,$ and $\rho \geq \rho_{p}$ in $U_p.$ If at $p$ the gradient of $\rho_{p}$ is non-zero, there is still a strictly positive eigenvalue of the Levi form on the tangent space. Using the Taylor expansion at $p,$ one sees easily
that there is a holomorphic disc through $p,$ otherwise contained in $(\rho \geq c),$ which enters $(\rho>c),$ see
\cite{Ho} p.~51. If the gradient of $\rho_{p}$ vanishes at $p,$ the construction of an analytic disc with the above properties is even simpler. Hence if $u$ is psh in $(\rho>c),$ by the maximum principle $u$ is constant on each component of $(\rho>c).$ Since $c$ is arbitrary and $U$ is connected, the result follows.
In particular there are no strictly psh functions near $K$. The existence of $T$ follows from the duality principle we have already used.
\endproof
We end this section with a few remarks on bounded psh functions.
For a non-compact connected Riemann surface $N,$ the existence of a non-constant bounded subharmonic function is equivalent to the existence of a Green function with a pole at a point $p$ in $N.$ The notion seems much less explored in several complex variables. We give a few remarks on the question. We first introduce the following definition.
\begin{definition}
A connected complex manifold $N$ is Ahlfors hyperbolic iff it admits a smooth bounded
strictly psh function.
\end{definition}
Using Richberg's approximation Theorem \cite{R}, this is equivalent to the existence of a continuous
bounded strictly psh function. It is clear that such a manifold does not have non-zero Liouville currents.
We give a class of examples.
Let $\mathbb{P}^k$ denote the complex projective space of dimension $k$. Consider an endomorphism
$f:\mathbb{P}^k\rightarrow\mathbb{P}^k$ which is holomorphic and of algebraic degree $d$ strictly larger than $1.$
Let $\omega$ denote the Fubini-Study form on $\mathbb{P}^k.$ The Green current $T$ associated to $f$ is given by:
$T= \lim_{n\to\infty} d^{-n} (f^n)^*(\omega)= \omega + i{\partial\overline\partial} g.$ The function $g$ is H\"older continuous.
Moreover, the complement of ${\rm supp}(T)$ is the Fatou set \cite{DS2}. So any component $U$ of the Fatou set is Ahlfors hyperbolic.
More generally, let $(M, \omega)$ be a compact K\"ahler manifold. Let $T$ be a positive closed current of bidegree $(1,1)$ cohomologous to $\omega,$ with locally bounded potentials. Then the components of the complement of ${\rm supp}(T)$ are Ahlfors hyperbolic.
Let $a$ be a positive irrational number. Consider in $\mathbb{C}^2$ the domain
$$U_a:=\left\lbrace (z,w):\ |w||z|^a <1 \right\rbrace.$$ This domain is Stein, but is not an Ahlfors
hyperbolic domain.
Indeed any bounded psh function is constant on the level sets of the function $u(z,w)= |w||z|^a.$ It is easy to see that each such level set supports a non-zero positive closed Liouville current $T,$ unique up to a multiplicative constant. On the level set $(u=c),$ the current is given by
$T_c= i{\partial\overline\partial} \log (\sup(u,c)).$
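Indeed, the function $\log \sup(u,c)$ is psh in $U_a$: it equals the constant $\log c$ on $(u\leq c)$ and the pluriharmonic function $\log|w|+a\log|z|$ on $(u>c).$ Hence $i{\partial\overline\partial} \log \sup(u,c)$ vanishes outside the level set $(u=c),$ so the positive closed current $T_c$ is indeed supported on $(u=c).$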
In the definition we required the functions to be smooth in order to avoid examples like the following.
Let $\varphi$ be a subharmonic function in $\mathbb{C}$ taking the value $-\infty$ on a dense set.
Define $$U:=\left\lbrace (z,w):\ |w|\exp(\varphi(z)) <1 \right\rbrace.$$
Then $U$ admits non-constant bounded psh functions, but every continuous bounded psh function on $U$ is constant.
In \cite{OS} there is an example of a Stein domain $U$ with smooth boundary, relatively compact in a homogeneous manifold $M,$ such that all bounded psh functions in $U$ are constant. Indeed $U$ is
foliated by images of $\mathbb{C}$ which cluster on the boundary.
\begin{proposition}
Let $(M,\omega)$ be a compact K\"ahler manifold. Let $U \subset M$ be a domain with a non-constant continuous bounded psh function $u$ defined in $U,$ reaching its minimum $c$ in $U.$ Let $X_c:= (z\in U,\ u(z)= c).$
Either $\overline{X_c}$ supports a non-zero positive ${\partial\overline\partial}$-closed current, or there is in $U$ a bounded continuous psh function $v$ which is strictly psh in a neighborhood of
$X_c.$
\end{proposition}
\proof
Suppose there is no non-zero positive ${\partial\overline\partial}$-closed current supported on $X_c.$ Then there is a strictly psh function $w$ in a neighborhood $W$ of $\overline{X_c}.$ We can assume that $c=0$ and that $0<w<1$ on $W.$ Composing with a convex increasing function, we can assume that $u>1$ out of $W.$ It suffices to define $v= \sup(u,w).$ The function is well defined in $U$ and is strictly psh near $X_c.$
\endproof
\section{Manifolds with holomorphic vector fields}\label{S:manifolds_vector_fields}
In this section we discuss the Levi problem on manifolds $M,$ on which
the space $\mathcal V$ of holomorphic vector fields is of positive dimension.
Hirschowitz has considered manifolds $M$ on which at every point $p,$ $\mathcal V$ generates the tangent space of $M$ at $p.$
He called such manifolds
infinitesimally homogeneous. When $\dim \mathcal V\geq 1,$ we will say that
$M$ is partially infinitesimally homogeneous.
Let $D$ denote the open unit disc in $\mathbb{C}.$
Recall that a domain $U$ in a complex manifold $M$ satisfies the Kontinuit\"atssatz if the following holds: for any sequence $f_j:\ \overline{D}\to U$ of holomorphic maps
on $D,$ continuous on $\overline{D},$ such that $\bigcup_j f_j(\partial D)$ is relatively compact in $U,$
the union $\bigcup_j f_j(\overline D)$ is relatively compact in $U.$
Hirschowitz showed that if $U$ satisfies the Kontinuit\"atssatz in an infinitesimally homogeneous
manifold, then $U$ admits a continuous psh exhaustion function. We first refine his result, then we describe the pseudoconvex non-Stein domains in an infinitesimally homogeneous manifold. They carry
a holomorphic foliation with very special dynamics.
\begin{theorem}\label{T:5.1}
Let $U\subset M$ be a domain satisfying the Kontinuit\"atssatz. Assume that at each point $p\in\partial U,$
there is $Z\in \mathcal V$ transverse to the boundary at $p;$ we will write $\partial U\pitchfork\mathcal V.$ Then $U$ admits a
psh exhaustion function $\varphi.$
\end{theorem}
\proof
A vector field $Z$ is transverse at $p$ to $\partial U$ if the local orbit of $Z$ through $p$
passes through $U$ and through $M\setminus \overline{U}.$ For a vector field $Z$ on $M,$ we consider
the flow $g_Z(z,\zeta),$ such that $g_Z(z,0)=z.$
Let $\Omega_z$ denote the connected component containing $0$ of the open set
$(\zeta\in\mathbb{C}:\ g_Z(z,\zeta) \in U).$ Let
$$U_Z:=\left\lbrace (z,\zeta):\ z\in M,\ \zeta\in\mathbb{C},\ \zeta \in
\Omega_z \right\rbrace.
$$
We define $d_Z(z)$ as the distance of $z$ to the boundary along the vector field $Z.$ More precisely,
$$
d_Z(z):=\sup\{ |\zeta|: (z,\zeta)\in U_Z\}.
$$
\begin{lemma}
\label{L:5.2}
If $U$ is not invariant under the flow $g_Z,$ then $-\log d_Z$ is psh on $U.$
\end{lemma}
\begin{proof}
We observe that if the domain $U_Z$ satisfies the Kontinuit\"atssatz, it follows that $-\log$ of the distance
to the complement in the $\zeta$-direction is psh.
When $Z$ is transverse to the boundary at some point, the function is not identically $-\infty.$
Let $f_j:\ \overline{D}\to U_Z$ be holomorphic maps, continuous on $\overline{D},$ and assume $\bigcup_j f_j(\partial D)\Subset U_Z.$
Let $\pi:\ U_Z\to U$ be the projection. Since $U$ satisfies the Kontinuit\"atssatz,
$\bigcup_j \pi\circ f_j(\overline{ D})\Subset U.$ Compactness in the $\zeta$-direction follows from standard results on solutions of vector fields.
\end{proof}
\proof[End of the proof of Theorem \ref{T:5.1}]
When $U$ is relatively compact, we need only a finite family of vector fields $\mathcal V_0\subset\mathcal V$ such that
for every point $p\in \partial U,$ there is $Z\in\mathcal V_0$ transverse to $\partial U$ at $p.$
Then the function $\sup\{-\log d_Z(z):\ Z\in\mathcal V_0\}$ is a continuous psh exhaustion. When $\overline{U}$
is not compact, the construction can easily be adapted to preserve continuity. Indeed, we need only
finitely many vector fields on each compact subset of $\overline{U}.$
\endproof
\begin{remark}\label{R:5.3}
\rm
When $M$ is infinitesimally homogeneous, the hypothesis is always satisfied. Otherwise, observe that if it is satisfied for $U,$ it also holds for domains close enough
to $U,$ in the $\mathcal C^1$-topology.
\end{remark}
Let $v$ be a continuous subharmonic function in the unit disc $D.$ We will say that $v$ is
strictly subharmonic at $0$ if the Laplacian $\Delta v>0$ in a neighborhood of $0.$ This is equivalent to the fact that small $\mathcal C^2$ perturbations of $v$ in a neighborhood of $0$ are still subharmonic. For a continuous psh function $u$ in $U,$ we will write $\langle i{\partial\overline\partial} u(z), it_z\wedge \bar{t}_z \rangle=0$
iff $u\circ f$ is not strictly subharmonic at $0,$ for any holomorphic map $f:\ D\to U,$ $f(0)=z,$ $f'(0)=t_z.$
Let $\mathcal P(U)$ denote the continuous psh functions on $U.$ For a compact set $K\Subset U,$ let $\mathcal P(K)$ denote the cone of functions psh and continuous near $K.$
We define
$$\mathcal N_z(K):=\left\lbrace t_z:\ t_z\in T^{1,0}_z(U),\ \langle i{\partial\overline\partial} u(z), it_z\wedge \bar{t}_z \rangle=0\quad\text{for every } u \in \mathcal P(K) \right\rbrace. $$
Denote by $\mathcal N(K):= \bigcup_{z\in U}\mathcal N_z(K).$
Let $\mathcal N_z:=\bigcup_{K}\mathcal N_z(K)$ as $K\nearrow U,$ and $\mathcal N:=\bigcup \mathcal N_z.$
\begin{proposition}\label{P:5.4}
Suppose there is $Z\in\mathcal V$ with $Z(z_0)=t_{z_0}\in\mathcal N_{z_0}.$ Then $Z(z)\in\mathcal N_z$ along the orbit of $Z$ starting at $z_0.$
If moreover $\partial U\pitchfork\mathcal V,$ then the orbit is complete and is contained in a level set of every continuous psh function $u\in\mathcal P(U),$ in particular in the level sets of the exhaustion $\varphi.$
\end{proposition}
\proof
Let $g$ denote the complex flow of $Z.$ Then for $u\in \mathcal P(K),$ $u\circ g(z_0,\zeta)$ is also in $\mathcal P(K)$ if $\zeta$ is small enough. Since $g(\cdot,\zeta)$ is a local biholomorphism, the image of $t_{z_0}$ under its differential lies in $\mathcal N_{g(z_0,\zeta)}.$
We can approximate functions in $\mathcal P(K)$ by functions in $\mathcal P(K)$ which are smooth along
the orbits of $\mathcal V.$ Let $B$ be a neighborhood of $0$ in $\mathcal V$ and let $D$ denote the unit disc in $\mathbb{C}.$ Let $\rho$ be an approximation of the identity in $B\times D.$ It suffices to consider the approximation
$$\langle \rho(Z, \zeta), u (g_Z(z, \zeta)) \rangle.$$
Hence we can work with functions smooth on orbits. If $Z\in\mathcal N,$ we get that $\langle {\overline\partial} u,Z\rangle=0;$
indeed, it suffices to apply the definition to $\exp(u).$
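Explicitly, for a smooth $u\in\mathcal P(K)$ and $Z\in\mathcal N,$ the function $e^u$ also belongs to $\mathcal P(K),$ and
$$\langle i{\partial\overline\partial} e^u, iZ\wedge\overline Z\rangle = e^u\left(\langle i{\partial\overline\partial} u, iZ\wedge\overline Z\rangle + |\langle \partial u, Z\rangle|^2\right)=0.$$
The first term on the right-hand side vanishes by the definition of $\mathcal N,$ so $\langle \partial u, Z\rangle=0,$ i.e., the derivative of $u$ along $Z$ vanishes.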
So $u$ is constant along the orbit of $Z.$
Since $\varphi$ is an exhaustion, the orbit is contained in a compact level set of $\varphi$ and hence we have a holomorphic image of $\mathbb{C}$ in that level set.
\endproof
\begin{theorem}\label{T:5.6} Suppose $M$ is infinitesimally homogeneous. Let $U\subset M$
satisfy the Kontinuit\"atssatz. There is an integer $0\leq d<n,$ and a foliation $\mathcal F,$
with leaves of dimension $d$ on the level sets of the exhaustion $\varphi.$ If $d=0,$ then $U$ is Stein.
If $d\geq 1,$ then $(i{\partial\overline\partial} \varphi)^{n-d+1}=0$ and there is a positive closed Levi current $T_t,$ of bidimension $(d,d)$ and mass one, on each level set $(\varphi=t),$ for $t\geq t_0.$
When $U\Subset M$
and $U$ is non Stein there is a positive closed Levi current $S$ of bidimension $(d,d)$ on $\partial U.$
\end{theorem}
\begin{lemma}\label{L:5.7}
The bundle $\mathcal N$ is of rank $d,$ with $0\leq d<n.$ Its sections are stable under Lie bracket.
\end{lemma}
\proof
Since $M$ is infinitesimally homogeneous, for each non-zero vector $t_z\in\mathcal N_z$ there is a holomorphic vector field $Z$ in that direction. So we can apply Proposition \ref{P:5.4}. Moreover, the image by the flow $g$ of $\mathcal N$ is contained in $\mathcal N.$ So the support of $\mathcal N$ is $U.$ We show that $\mathcal N$ is stable under Lie bracket.
In an infinitesimally homogeneous manifold, continuous psh functions near $K$ are approximable by smooth ones \cite{H2}. So to analyze $\mathcal N(K),$
we can use smooth psh functions near a compact set $K.$
Let $X,$ $Y$ and $Z$ be $(1,0)$-holomorphic vector fields. If
$X,Y\in\mathcal N(K),$ then $\langle {\overline\partial} u,X\rangle=\langle {\overline\partial} u, Y\rangle=0$ and
$\langle {\partial\overline\partial} u, X\wedge \overline{Z} \rangle =0$ for every $Z\in\mathcal V,$ and for any smooth psh function in a neighborhood of $K.$ We also have for type reasons:
$$
\langle {\partial\overline\partial} u, [X,Y]\wedge\overline{Z}\rangle=-\langle \partial u, [[X,Y],\overline{Z}] \rangle.\quad\
$$
Jacobi's identity gives that
$$
[[X,Y],\overline{Z}] = [X,[Y,\overline{Z}]] -[Y,[X,\overline{Z}]].
$$
So if $X,Y\in\mathcal N(K),$ i.e. $\langle {\partial\overline\partial} u, X\wedge\overline{X}\rangle=\langle{\partial\overline\partial} u, Y\wedge\overline{Y}\rangle=0,$ then
$$\langle {\partial\overline\partial} u, X\wedge\overline{Z}\rangle=\langle{\partial\overline\partial} u, Y\wedge\overline{Z}\rangle=0$$
for every $Z\in\mathcal V.$ It follows that $[X,Y]\in \mathcal N(K).$ It is clear that the rank $d_z$ of $\mathcal N_z(K)$ increases as $K$ increases. Hence it stabilizes.
Observe that $d_z$ cannot drop since strict plurisubharmonicity is an open condition.
So $\mathcal N$ is a bundle of dimension $d,$ stable under Lie bracket.
\endproof
\proof[Proof of Theorem \ref{T:5.6}]
To the bundle $\mathcal N$ we associate a foliation $\mathcal F.$ Moreover, for every $Z \in \mathcal N$ we have
$\langle \partial u, Z \rangle =0.$ Hence the leaves are contained in the level sets of functions in
$\mathcal P(U);$ in particular, the leaves are contained in $(\varphi={\rm const}).$
Since there is an exhaustion function, necessarily $d<n.$ When $d=0,$ it is easy to construct a strictly psh exhaustion function.
Any continuous psh function $u$ is constant on the leaves, hence $(i{\partial\overline\partial} u)^{n-d+1}=0.$ In particular, $\varphi$ is constant on the leaves, hence $(i{\partial\overline\partial} \varphi)^{n-d+1}=0.$ We can replace
$\varphi$ by $\exp(\varphi),$ so it follows that $i\partial \varphi\wedge{\overline\partial} \varphi\wedge (i{\partial\overline\partial}\varphi)^{n-d}=0.$ Consequently, for every nonnegative function $\chi,$ the current
$\chi(\varphi)(i{\partial\overline\partial}\varphi)^{n-d}$ is closed; let $c_\chi$ denote its mass.
We can construct a positive closed current $T_{t_0}$ of mass $1$ on $(\varphi=t_0)$
as a limit of ${1\over c_{\chi_j}} \chi_j(\varphi)(i{\partial\overline\partial}\varphi)^{n-d}.$
It follows also that when $d>0$ there
is a positive closed current $S$ of bidimension $(d,d)$ and mass $1$ on $\partial U.$ It suffices to take a cluster point of the currents $T_t.$
\endproof
\begin{remark}\label{R:5.8} \rm
If $M$ is K\"ahler, then
$S$ is nef, i.e., it is a limit of smooth strictly positive closed forms.
\end{remark}
\begin{corollary}
Let $(M,\omega)$ be an infinitesimally homogeneous compact K\"ahler manifold. Let $U\subset M$
be a domain satisfying the Kontinuit\"atssatz. If $H^2_{\text{dR}}(U)=0,$ then $U$ is Stein.
\end{corollary}
\proof Let $T$ be a positive closed current of bidimension $(1,1)$ with compact support in $U.$
Let $\omega$ be a K\"ahler form. The cohomological hypothesis on the de Rham group, implies
that we can write $\omega=d\alpha,$ with $\alpha$ smooth.
Then
$$
\langle T,\omega\rangle=\langle T,d\alpha\rangle=-\langle dT,\alpha\rangle=0.
$$
So the dimension of the bundle $\mathcal N$ is $d=0$ and hence $U$ is Stein.
\endproof
\begin{remark}\label{R:5.9} \rm
Assume $U\Subset M,$ with $(M,\omega)$ compact K\"ahler, not infinitesimally homogeneous. Suppose $H^2_{\text{dR}}(U)=0,$
and that $U$ admits a continuous psh exhaustion. If $U$ is not Stein, then there are non-zero, non-closed but $i{\partial\overline\partial}$-closed currents with compact support in $U.$ This follows from the argument in the previous Corollary and from Theorem \ref{T:main_3}.
\end{remark}
\begin{corollary}\label{C:5.10}
Suppose $(M,U)$ are as in Theorem \ref{T:5.6}. Assume that $M$ is a compact K\"ahler manifold.
Suppose also that $U$ does not contain a compact variety of dimension $d.$
Let $T_t$ be the positive closed current constructed in Theorem \ref{T:5.6}.
Then $\{ T_t\}^2=0.$
If $d=n-1,$ all the currents $T_t$ are nef and are in the same cohomology class.
\end{corollary}
\proof
The currents $T_t$ are directed by a foliation without singularities and they are closed and diffuse. Then a result of Kaufman \cite{K} states that the cohomology class $\{ T_t\}$ of $T_t$ satisfies
$\{ T_t\}^2=0$ provided $2d\geq n.$
When $d=n-1,$ $\{ T_t\}^2=\{ T_{t'}\}^2=0.$ If $t\not=t',$ since the supports of $T_t$ and $T_{t'}$ are
disjoint, we get that $\{ T_t\}\smile \{ T_{t'}\}=0.$ As we have seen, $\{ T_t\}$ and $\{ T_{t'}\}$ are nef, so the Hodge-Riemann signature Theorem
implies that $\{ T_t\}$ and $\{ T_{t'}\}$ are proportional. If the two currents are on the same level set,
we use a current on another level set.
\endproof
\begin{remark}\label{R:5.11}\rm
Theorem \ref{T:5.6} can be improved as follows. Suppose $\mathcal V$ generates $T^{1,0}M$ at one point $z_0.$
Let $A$ be the analytic set where ${\rm rank}\, \mathcal V(z)\leq n-1.$ Assume $\partial U\pitchfork\mathcal V.$
Then there is a continuous psh exhaustion $\varphi.$
There is also an integer $d,$ $0\leq d<n,$ and a foliation with leaves of dimension $d$ on each $(\varphi=t)\setminus A.$ In particular, if $d=0$ and $A$ is Stein, then $U$ is Stein. If $U$ is not Stein, then there is a nontrivial holomorphic image of $\mathbb{C}$ which is relatively compact in $U.$ One shows, if $d=0,$ that any positive ${\partial\overline\partial}$-closed current has no mass outside $A.$ So if $A$ is Stein, it
has also no mass on $A.$ Hence, $U$ is Stein by Theorem \ref{T:main_3}.
\end{remark}
\begin{remark}\rm Let $U\Subset M$ be a pseudoconvex domain with smooth boundary in an infinitesimally
homogeneous manifold. Assume it is not Stein and let $d$ be the dimension of the leaves of the associated foliation $\mathcal F$. Then $\mathcal F$ extends to a foliation on $\partial U,$ with leaves of dimension $d.$
This is a case where the dimension of the leaves does not change.
\end{remark}
\begin{corollary}\label{C:5.12}
Let $U\subset M$ be as in Theorem \ref{T:5.6}. Assume $\partial U$
is smooth. Assume that at a point $p\in\partial U$ the rank of the Levi form is maximal and equal to $n-l.$
Then the dimension $d$ of the foliation satisfies, $ d\leq l-1.$ In particular, if $p$ is a point of strict pseudoconvexity, then $U$ is Stein.
\end{corollary}
\proof
Suppose $p$ is a point of strict pseudoconvexity, i.e., $l=1.$ Then there is a continuous psh function $\psi$ in $U$ with
$\psi(p)=0,$ $\psi<0$ on $B(p,r)\cap U,$ and $\psi$ strictly psh near $p.$ Such a function
implies that $d=0$ and hence $U$ is Stein. For the general case we can cut locally near $p,$ by a subspace $L_\alpha$ of dimension $n-l+1$ such that $L_\alpha\cap\partial U$ is strictly pseudoconvex
at $p.$ Then the foliation induces on $L_\alpha$ leaves of dimension $0.$ Hence, the dimension of the original foliation satisfies $ d\leq l-1.$
\endproof
I thank Masanori Adachi for pointing out a slip in the previous formulation of the above statement.
\begin{remark}
\rm
When $M$ is compact homogeneous and $U$ has a point of strict pseudoconvexity, the result is due to Michel \cite{M}.
\end{remark}
\section{Levi-problem for Pfaff systems}\label{S:Levi-Pfaff}
Let $(M,\omega)$ be a complex hermitian manifold. Fix $\mathcal S=(\alpha_j)_{j\leq m}$ a Pfaff system, i.e.,
the $\alpha_j$ are $(1,0)$-forms of class $\mathcal C^1$ on $M.$ Assume that for every
$z\in M,$ $\bigcap_{j\leq m} \ker \alpha_j(z)\not=\{0\}.$ We will say that $M$ is $\mathcal S$-strongly
pseudoconvex if it admits a smooth exhaustion $u$ such that outside of a compact set in $M,$
\begin{equation}\label{e:6.1}
\langle i{\partial\overline\partial} u(z), it_z\wedge \bar{t}_z \rangle>0\quad\text{for}\quad t_z\not=0,\quad \langle\alpha_j(z),t_z\rangle=0,\quad 1\leq j\leq m.
\end{equation}
Let $U\Subset M$ be a domain with smooth boundary and defining function $r.$ We will say that $U$ is
$\mathcal S$-pseudoconvex if
\begin{equation}\label{e:6.2}
\langle i{\partial\overline\partial} r(z), it_z\wedge \bar{t}_z \rangle\geq 0\quad\text{when}\quad \langle \partial r(z), t_z\rangle=\langle\alpha_j(z),t_z\rangle=0,\quad 1\leq j\leq m.
\end{equation}
The basic example of this situation is when the $\alpha_j$ are holomorphic and the system is integrable; then we get a notion of uniform pseudoconvexity along the leaves.
The question we address is the following: assuming that $U$ is $\mathcal S$-pseudoconvex in a complex manifold $M,$ under which conditions is $U$ $\mathcal S$-strongly
pseudoconvex?
It is natural to introduce the cone $\mathcal P_S$ of $\mathcal C^2$-smooth functions in $U$ such that
$\langle i{\partial\overline\partial} v, it\wedge \bar{t}\rangle\geq 0$ when $\langle \alpha_j,t\rangle=0$ for $1\leq j\leq m,$
see \cite{S1}. Denote by $\overline{\mathcal P}_S$ the space of continuous functions which are decreasing limits
of functions in $\mathcal P_S.$
We can extend some of the results from previous sections to this context. As observed in \cite{S1},
the estimates \eqref{e:4.2} and \eqref{e:4.3} are valid for positive ${\partial\overline\partial}$-closed currents directed by $\mathcal S,$ and for functions in $\overline{\mathcal P}_S.$
A Levi current $T$ for the system $\mathcal S$ on $U$ is a positive current of bidimension $(1,1)$ such that
$$
i{\partial\overline\partial} T=0,\quad T\wedge \alpha_j=0,\ \ 1\leq j\leq m, \quad T\wedge i{\partial\overline\partial} u=0\quad \text{for every}\quad
u\in\mathcal P_S.
$$
A Levi current for the system $\mathcal S$ on $(r=0)$ satisfies
\begin{equation}\label{e:6.3}
i{\partial\overline\partial} T=0,\quad T\wedge i{\partial\overline\partial} r=0,\quad T\wedge \partial r=0,\quad T\wedge \alpha_j=0,\ \ 1\leq j\leq m,\quad \langle T,\omega\rangle=1.
\end{equation}
We just state some extensions, leaving the proof to the reader.
\begin{theorem}\label{T:6.1}
Let $U\Subset M$ be an $\mathcal S$-pseudoconvex domain with smooth boundary. If there is no Levi current for the system $\mathcal S$ on $\partial U,$ then $U$ is $\mathcal S$-strongly pseudoconvex. Moreover, there is a smooth function $v,$ such that for $A$ positive large enough and $\eta>0$ small enough,
$\hat\rho:=-(-re^{-Av})^\eta$ satisfies $i{\partial\overline\partial} \hat\rho\gtrsim C\omega|\hat\rho| $ on vectors such that
$\langle \alpha_j(z),t_z\rangle=0,$ $1\leq j\leq m.$
\end{theorem}
\proof[Sketch of proof]
One shows that if $T\geq 0,$ $T\wedge \alpha_j=0$ for $ 1\leq j\leq m, $ and $i{\partial\overline\partial} T=0,$
then $T\wedge\partial r=0$ and $T\wedge i{\partial\overline\partial} r=0.$ So a duality argument implies that if there is no Levi current for the system $\mathcal S$ on $\partial U,$ there is a function $v\in\mathcal P_S$ which is strictly $\mathcal S$-psh, i.e., $\langle i{\partial\overline\partial} v(z),t_z\wedge \bar{t}_z\rangle >0$ when $\langle \alpha_j,t_z\rangle=0$ for $1\leq j\leq m.$ The proof then follows the lines of the proof of Theorem
\ref{T:2.2} using relations like
$$
\langle i{\partial\overline\partial} r(z),t_z\wedge \bar{t}_z\rangle \geq -C|\langle \partial r(z),t\rangle| |t| -C\sum_{j=1}^m|\langle \alpha_j(z),t\rangle||t|
$$
for $z$ in a neighborhood of $\partial U.$
\endproof
\begin{remark}\rm
In \cite {BerndtssonSibony} and \cite{S1}, H\"ormander type estimates, for ${\overline\partial}$ with respect to $\mathcal S$-directed currents with respect to $\mathcal S$-pseudoconvex functions
as weights, are given.
\end{remark}
\begin{theorem}\label{T:6.2}
Suppose $U\subset M$ admits a continuous exhaustion function which is $\mathcal S$-pseudoconvex. Then $U$
admits a strictly $\mathcal S$-pseudoconvex exhaustion if and only if there is no Levi current, for
the system $ \mathcal S,$ with compact support in $U.$
\end{theorem}
The proof is an adaptation of the proof in Section \ref{S:Levi}. We omit it.
\noindent
Nessim Sibony, Universit{\'e} Paris-Sud,\\
\noindent
and Korea Institute For Advanced Studies, Seoul \\
\noindent
{\tt [email protected]},
\end{document}
\begin{document}
\title{Wave equations for determining energy-level gaps of
quantum systems}
\author{Zeqian Chen}
\email{[email protected]}
\affiliation{
Research Center of Mathematical Physics, School of Mathematical
Sciences, Capital Normal University, 105 North Road, Xi-San-Huan,
Beijing}
\affiliation{
Wuhan Institute of Physics and Mathematics, Chinese Academy of
Sciences, 30 West District, Xiao-Hong-Shan, P.O.Box 71010, Wuhan}
\date{\today}
\begin{abstract}
A differential equation for wave functions is proposed, which is
equivalent to Schr\"{o}dinger's wave equation and can be used to
determine energy-level gaps of quantum systems. Contrary to
Schr\"{o}dinger's wave equation, this equation is on `bipartite'
wave functions. It is shown that those `bipartite' wave functions
satisfy all the basic properties of Schr\"{o}dinger's wave
functions. Further, it is argued that `bipartite' wave functions can
present a mathematical expression of wave-particle duality. This
provides an alternative approach to the mathematical formalism of
quantum mechanics.
\end{abstract}
\pacs{03.65.Ge, 03.65.Ud}
\maketitle
In the most general form, Heisenberg's equation \cite{H} and
Schr\"{o}dinger's equation \cite{S} can be written as
follows\begin{equation}i\hbar \frac{\partial \hat{O} (t)}{\partial
t } = \left [ \hat{O} (t), \hat{H} \right
],\end{equation}and\begin{equation}i\hbar \frac{\partial | \psi
(t) \rangle}{\partial t} = \hat{H} | \psi (t)
\rangle,\end{equation}respectively, where $\hat{H}$ is the
Hamiltonian of the system. As is well known, these two forms for
the equations of motion of quantum mechanics are equivalent. Of
these, the Schr\"{o}dinger form seems to be the more useful one
for practical problems, as it provides differential equations for
wave functions, while Heisenberg's equation involves as unknowns
the operators forming the representative of the dynamical
variable, which are far more numerous and therefore more difficult
to evaluate than the Schr\"{o}dinger unknowns.
On the other hand, determining the energy levels of various dynamical
systems is an important task in quantum mechanics, and solving
Schr\"{o}dinger's wave equation is the usual way to do so. Recently, Fan and Li
\cite{FL} showed that Heisenberg's equation can also be used to
deduce the energy levels of some systems. By introducing the
concept of an invariant `eigen-operator', they derived energy-level
gap formulas for some dynamic Hamiltonians. However, their
`invariant eigen-operator' equation involves operators as unknowns,
similar to Heisenberg's equation, and hence is also difficult to
evaluate in general.
In this article we propose a differential equation for wave
functions, which can be used to determine energy-level gaps of
quantum systems and is mathematically equivalent to
Schr\"{o}dinger's wave equation, that is, they can be solved from
one another. Contrary to Schr\"{o}dinger's wave equation, this
equation is on `bipartite' wave functions. It is shown that those
`bipartite' wave functions satisfy all the basic properties of
Schr\"{o}dinger's wave functions. In particular, it is argued that
`bipartite' wave functions can present a mathematical expression of
wave-particle duality. This provides an alternative approach to the
mathematical formalism of quantum mechanics.
For convenience, we deal with the quantum system of a single
particle. Note that the Hamiltonian for a single particle in an
external field is\begin{equation}\hat{H}(\vec{x}) = -
\frac{\hbar^2}{ 2 m} \nabla^2_{\vec{x}} + U(\vec{x}
),\end{equation}where $\nabla^2_{\vec{x}} = \partial^2/\partial
x^2_1 +
\partial^2/\partial x^2_2 + \partial^2/\partial x^2_3,$
$U(\vec{x})$ is the potential energy of the particle in the
external field, and $\vec{x} = (x_1, x_2, x_3) \in \mathbb{R}^3.$
Then, Schr\"{o}dinger's wave equation for a single particle in an
external field is\begin{equation}i\hbar \frac{\partial \psi
(\vec{x}, t) }{\partial t} = \hat{H}(\vec{x}) \psi (\vec{x}, t) =
- \frac{\hbar^2}{ 2 m} \nabla^2_{\vec{x}} \psi (\vec{x}, t) +
U(\vec{x} ) \psi (\vec{x}, t).\end{equation} On the other hand,
let $\psi (\vec{x}, t)$ and $\varphi (\vec{x}, t)$ both satisfy
Eq.(4). Then we have$$\begin{array}{lcl}i\hbar \frac{\partial (
\psi (\vec{x}, t) \varphi^* (\vec{y}, t)) }{\partial t} & = &
i\hbar \frac{\partial \psi (\vec{x}, t) }{\partial t} \varphi^*
(\vec{y}, t) + i\hbar \frac{\partial \varphi^* (\vec{y}, t)
}{\partial t} \psi (\vec{x}, t)\\[0.4mm]
& = & \left [ \hat{H}(\vec{x}) \psi (\vec{x}, t) \right ]
\varphi^* (\vec{y}, t)\\[0.4mm]
&~&~~ - \left [ \hat{H}(\vec{y}) \varphi
(\vec{y}, t) \right ]^* \psi (\vec{x}, t)\\[0.4mm]
& = & \left ( \hat{H}(\vec{x}) - \hat{H}(\vec{y}) \right ) (\psi
(\vec{x}, t) \varphi^* (\vec{y}, t)).
\end{array}$$This leads to the following wave equation
\begin{equation}i\hbar \frac{\partial \Psi
(\vec{x}, \vec{y}; t) }{\partial t} = \left (\hat{H}(\vec{x}) -
\hat{H}(\vec{y}) \right ) \Psi (\vec{x}, \vec{y};
t),\end{equation}where $\Psi (\vec{x}, \vec{y}; t ) \in
L^2_{\vec{x}, \vec{y}}.$ Contrary to Schr\"{o}dinger's wave
equation Eq.(4) for `one-partite' wave functions $\psi (\vec{x})
\in L^2_{\vec{x}},$ the wave equation Eq.(5) is a differential
equation for `bipartite' wave functions $\Psi (\vec{x}, \vec{y}),$
which, having $\hat{H}(\vec{x}) - \hat{H}(\vec{y})$ in place of
$\hat{H}(\vec{x}) + \hat{H}(\vec{y}),$ also differs from
Schr\"{o}dinger's wave equation for two particles.
Since$$\frac{\partial \left | \Psi (\vec{x}, \vec{y}; t) \right
|^2}{\partial t} = 2 \mathrm{Re} \left [ \Psi^* (\vec{x}, \vec{y};
t) \frac{\partial \Psi (\vec{x}, \vec{y}; t) }{\partial t} \right
],$$it is concluded from Eq.(5) that\begin{equation} \frac{\partial
}{\partial t} \int \left | \Psi (\vec{x}, \vec{y}; t) \right |^2
d^3\vec{x} d^3 \vec{y} = 0.\end{equation}This shows that Eq.(5)
preserves the total probability $\int \left | \Psi (\vec{x}, \vec{y};
t) \right |^2 d^3\vec{x} d^3 \vec{y}$ in time and means
that, if the wave function $\Psi$ is given at some instant, its
behavior at all subsequent instants is determined.
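The conservation law Eq.(6) admits a quick numerical check in a finite-dimensional analogue (our own illustrative sketch, not part of the text, with $\hbar = 1$): the `bipartite' wave function becomes a matrix $\Psi$, Eq.(5) becomes $i\,\partial_t \Psi = H\Psi - \Psi H$, and the solution $\Psi(t) = e^{-iHt}\,\Psi(0)\,e^{iHt}$ is a unitary conjugation, which preserves the norm. The matrices `H` and `Psi0` below are randomly generated assumptions.

```python
import numpy as np

# Finite-dimensional sketch of Eq.(5) with hbar = 1: the 'bipartite' wave
# function becomes a matrix Psi, and i dPsi/dt = H Psi - Psi H is solved by
# Psi(t) = U(t) Psi(0) U(t)^dagger with U(t) = exp(-i H t).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                      # Hermitian 'Hamiltonian'
E, V = np.linalg.eigh(H)

def U(t):
    # matrix exponential exp(-i H t) via the eigendecomposition of H
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

Psi0 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Psit = U(0.7) @ Psi0 @ U(0.7).conj().T

# Eq.(6): the total probability (squared norm of Psi) is conserved in time.
print(np.linalg.norm(Psi0), np.linalg.norm(Psit))
```

Since unitary conjugation preserves the Frobenius norm, the two printed values agree to machine precision.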
By Schmidt's decomposition theorem \cite{Schmidt}, for every $\Psi
(\vec{x}, \vec{y}) \in L^2_{\vec{x}, \vec{y}}$ there exist two
orthonormal sets $\{\psi_n \}$ and $\{\varphi_n \}$ in
$L^2_{\vec{x}}$ and $L^2_{\vec{y}}$ respectively, and a sequence of
positive numbers $\{ \mu_n \}$ satisfying $\sum_n \mu^2_n < \infty$
so that\begin{equation}\Psi (\vec{x}, \vec{y}) = \sum_n \mu_n \psi_n
(\vec{x}) \varphi^*_n (\vec{y}).\end{equation}Then, it is easy to
check that$$\Psi (\vec{x}, \vec{y}; t) = \sum_n \mu_n \psi_n
(\vec{x}, t) \varphi^*_n (\vec{y}, t)$$satisfies Eq.(5) with $\Psi
(\vec{x}, \vec{y}; 0) = \Psi (\vec{x}, \vec{y}),$ where both $\psi_n
(\vec{x}, t)$ and $\varphi_n (\vec{y}, t)$ satisfy Eq.(4) with
$\psi_n (\vec{x}, 0) = \psi_n (\vec{x})$ and $\varphi_n (\vec{y}, 0)
= \varphi_n (\vec{y}),$ respectively. Hence, the wave equation
Eq.(5) can be solved mathematically from Schr\"{o}dinger's wave
equation.
Given $\psi \in L^2_{\vec{x}},$ for every $t \geq 0$ define
operators $\varrho_t$ on $L^2_{\vec{x}}$
by\begin{equation}(\varrho_t \varphi ) (\vec{x}) = \int \Psi
(\vec{x}, \vec{y}; t) \varphi (\vec{y}) d^3
\vec{y},\end{equation}where $\Psi (\vec{x}, \vec{y}; t)$ is the
solution of Eq.(5) with $\Psi (\vec{x}, \vec{y}; 0) = \psi
(\vec{x}) \psi^* (\vec{y}).$ It is easy to check
that\begin{equation}i \hbar \frac{\partial \varrho_t}{\partial t } =
\left [ \hat{H}, \varrho_t \right ],~~\varrho_0 = | \psi \rangle \langle
\psi |.\end{equation}This is just Schr\"{o}dinger's equation in
density-operator form. Hence, Schr\"{o}dinger's wave
equation is a special case of the wave equation Eq.(5) with
initial values of product form $\Psi (\vec{x}, \vec{y}; 0) = \psi
(\vec{x}) \psi^* (\vec{y}).$ Therefore, the wave equation Eq.(5)
is mathematically equivalent to Schr\"{o}dinger's wave equation.
In the sequel, we consider the problem of stationary states. Let
$\psi_n$ be the eigenfunctions of the Hamiltonian operator
$\hat{H},$ i.e., those satisfying the equation\begin{equation}
\hat{H}(\vec{x}) \psi_n (\vec{x}) = E_n \psi_n
(\vec{x}),\end{equation}where $E_n$ are the eigenvalues of
$\hat{H}.$ Correspondingly, the wave equation
Eq.(5)$$\begin{array}{lcl}i\hbar \frac{\partial \Psi (\vec{x},
\vec{y}; t) }{\partial t} &=& \left (\hat{H}(\vec{x}) -
\hat{H}(\vec{y}) \right ) \Psi (\vec{x}, \vec{y}; t)\\& = &(E_n -
E_m ) \Psi (\vec{x}, \vec{y}; t)\end{array}$$with $\Psi (\vec{x},
\vec{y}; 0) = \psi_n (\vec{x}) \psi^*_m (\vec{y}),$ can be
integrated at once with respect to time and
gives\begin{equation}\Psi (\vec{x}, \vec{y}; t) =
e^{-i\frac{1}{\hbar} (E_n - E_m) t} \psi_n (\vec{x}) \psi^*_m
(\vec{y}).\end{equation}Since $\{ \psi_n (\vec{x}) \}$ is a
complete orthogonal set in $L^2_{\vec{x}},$ it is concluded that
$\{ \psi_n (\vec{x}) \psi^*_m (\vec{y}) \}$ is a complete
orthogonal set in $L^2_{\vec{x}, \vec{y}}.$ Then, for every $\Psi
(\vec{x}, \vec{y}) \in L^2_{\vec{x}, \vec{y}}$ there exists a
unique set of numbers $\{ c_{n,m}\}$ satisfying $\sum_{n,m} |
c_{n,m} |^2 < \infty$ so that
\begin{equation}\Psi (\vec{x}, \vec{y}) = \sum_{n,m} c_{n,m} \psi_n (\vec{x})
\psi^*_m (\vec{y}).\end{equation}Hence, for $\Psi (\vec{x},
\vec{y}; 0) = \sum_{n,m} c_{n,m} \psi_n (\vec{x}) \psi^*_m
(\vec{y})$ we have that\begin{equation}\Psi (\vec{x}, \vec{y}; t)
= \sum_{n,m} c_{n,m} e^{-i\frac{1}{\hbar} (E_n - E_m) t} \psi_n
(\vec{x}) \psi^*_m (\vec{y})\end{equation}for $t \geq 0.$ Now, if
$\Psi (\vec{x}, \vec{y}) \in L^2_{\vec{x}, \vec{y}}$ is an
eigenfunction of the operator $\hat{H}(\vec{x})- \hat{H}(\vec{y}),$
i.e., satisfying the equation\begin{equation} \left (
\hat{H}(\vec{x})- \hat{H}(\vec{y}) \right ) \Psi (\vec{x},
\vec{y}) = \lambda \Psi (\vec{x}, \vec{y}),\end{equation}where
$\lambda$ is an associated eigenvalue, then $\Psi (\vec{x},
\vec{y}; t) = e^{-i\frac{1}{\hbar} \lambda t} \Psi (\vec{x},
\vec{y})$ satisfies Eq.(5) and consequently, it is concluded from
Eq.(13) that $\lambda = E_n - E_m$ is an energy-level gap of the
system. Thus, the wave equation Eq.(5) can be used to determine
energy-level gaps of the system.
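In a finite-dimensional sketch (our own illustration with $\hbar = 1$; `H` is a randomly generated Hermitian matrix assumed for demonstration), the operator $\hat{H}(\vec{x}) - \hat{H}(\vec{y})$ of Eq.(14) acts on a matrix $\Psi$ as $\Psi \mapsto H\Psi - \Psi H$, i.e. as $I \otimes H - H^{T} \otimes I$ on the column-stacked vector $\mathrm{vec}(\Psi)$, and its spectrum consists exactly of the energy-level gaps $E_n - E_m$:

```python
import numpy as np

# Sketch: the spectrum of the 'gap operator' of Eq.(14), in Kronecker form,
# equals the set of all energy-level differences E_n - E_m of H.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
E = np.linalg.eigvalsh(H)                  # energy levels of H

n = H.shape[0]
# Psi -> H Psi - Psi H corresponds to kron(I, H) - kron(H.T, I) on vec(Psi)
L = np.kron(np.eye(n), H) - np.kron(H.T, np.eye(n))   # Hermitian

gaps = np.linalg.eigvalsh(L)                           # sorted ascending
expected = np.sort([En - Em for En in E for Em in E])
print(np.max(np.abs(gaps - expected)))                 # ~ 0
```

Note that the $n$ zero eigenvalues correspond to the trivial gaps $E_n - E_n$, as the text's Eq.(11) with $n = m$ predicts.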
It is well known that the basis of the mathematical formalism of
quantum mechanics lies in the proposition that the state of a
system can be described by a definite Schr\"{o}dinger's wave
function of coordinates \cite{vN,Dirac}. The square of the modulus
of this function determines the probability distribution of the
values of the coordinates \cite{Born}. Since the wave equation
Eq.(5) is mathematically equivalent to Schr\"{o}dinger's wave
equation, it seems that the state of a quantum system can also be
described by a definite `bipartite' wave function satisfying Eq.(5);
part of its physical meaning is that the `bipartite' wave functions
of stationary states determine the energy-level gaps of the system.
In fact, we can make the general assumption that if the measurement
of an observable $\hat{O}$ for the system in the `bipartite' state
corresponding to $\Psi$ is made a large number of times, the average
of all the results obtained will be\begin{equation} \langle \hat{O}
\rangle_{\Psi} = \mathrm{Tr} \left [ \varrho^{\dagger}_{\Psi}
\hat{O} \varrho_{\Psi} \right ],\end{equation}where $\varrho_{\Psi}$
is an operator on $L^2$ associated with $\Psi$ defined by
$(\varrho_{\Psi} \varphi ) (\vec{x}) = \int \Psi (\vec{x}, \vec{y})
\varphi (\vec{y}) d^3 \vec{y}$ for every $\varphi \in L^2,$ provided
$\Psi$ is normalized. That is, the expectation value of an
observable $\hat{O}$ in the `bipartite' state corresponding to
$\Psi$ is determined by Eq.(15). It is easy to check that if $\Psi
(\vec{x}, \vec{y}) = \psi (\vec{x}) \psi^* (\vec{y}),$
then\begin{equation} \langle \hat{O} \rangle_{\Psi} = \langle \psi |
\hat{O} | \psi \rangle.\end{equation}This shows that our
expression Eq.(15) agrees with the usual interpretation of
Schr\"{o}dinger's wave functions for calculating expectation values
of any chosen observable.
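The reduction Eq.(16) can be checked directly in a finite-dimensional sketch (our own illustration; `psi` and the Hermitian observable `O` are randomly generated assumptions): for a product state $\Psi = \psi \psi^*$, the operator $\varrho_\Psi$ of Eq.(15) is the projector $|\psi\rangle\langle\psi|$, and the trace formula collapses to $\langle \psi | \hat{O} | \psi \rangle$.

```python
import numpy as np

# Sketch of Eq.(16): for Psi = psi psi^*, Tr[rho^dagger O rho] = <psi|O|psi>
# whenever psi is normalized.
rng = np.random.default_rng(3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
O = (B + B.conj().T) / 2                 # a Hermitian observable

rho = np.outer(psi, psi.conj())          # rho_Psi for the product state
lhs = np.trace(rho.conj().T @ O @ rho)   # Eq.(15)
rhs = psi.conj() @ O @ psi               # <psi|O|psi>
print(lhs.real, rhs.real)                # equal
```

The agreement follows from $\varrho_\Psi^\dagger \hat{O} \varrho_\Psi = |\psi\rangle\langle\psi|\hat{O}|\psi\rangle\langle\psi|$ and $\langle\psi|\psi\rangle = 1$.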
Moreover, `bipartite' wave functions can present a mathematical
expression of wave-particle duality. Let us discuss the double-slit
experiment \cite{Feynman}. Let $\phi_1$ and $\phi_2$ be two
Schr\"{o}dinger's wave functions of the particle arriving through
slit 1 and slit 2, respectively. Then, the associated `bipartite'
wave function of the particle arriving through both slit 1 and slit 2
can be either\begin{equation}\Psi_{\mathrm{W}} (\vec{x}, \vec{y}) =
\left ( \phi_1 (\vec{x}) + \phi_2 (\vec{x}) \right ) \left (
\phi^*_1 (\vec{y}) + \phi^*_2 (\vec{y}) \right
),\end{equation}or\begin{equation}\Psi_{\mathrm{P}} (\vec{x},
\vec{y}) = \phi_1 (\vec{x}) \phi^*_1 (\vec{y}) + \phi_2 (\vec{x})
\phi^*_2 (\vec{y}).\end{equation}A single particle described by
$\Psi_{\mathrm{W}}$ behaves like a wave, while one described by
$\Psi_{\mathrm{P}}$ behaves like a particle. This is so because, by
Eq.(15), the probability density of position is proportional
to $|\phi_1 (\vec{x}) + \phi_2 (\vec{x})|^2$ for $\Psi_{\mathrm{W}}$ and to
$|\phi_1 (\vec{x})|^2 + |
\phi_2 (\vec{x})|^2$ for $\Psi_{\mathrm{P}}$, respectively. On the other hand,
$\Psi_{\mathrm{P}}$ is a `bipartite' entangled state \cite{EPR},
which means that a single particle can entangle with itself
\cite{Enk}, just as each photon can interfere with
itself \cite{Dirac}. It is then concluded
that a single particle behaves like a wave when it interferes with
itself, and like a particle when it entangles with itself. Thus,
wave-particle duality is just the complementarity of interference
and entanglement for a single particle. More details on this issue
will be given in future work. Since entanglement plays a crucial role
in quantum communication, cryptography, and computation \cite{B-G-N},
we may expect that the entanglement of a {\it single} particle will
play an important role in quantum information \cite{Babichev}.
We would like to mention that Eq.(5) has been presented by Landau
and Lifshitz \cite{LL} as giving the change of the density matrix with
time, in analogy with Schr\"{o}dinger's wave equation. However, we
regard Eq.(5) as a wave equation, not as an equation for density
functions. This is the key point distinguishing our work from \cite{LL}.
As shown above, Eq.(5) is a suitable equation of motion for quantum
mechanics as a `bipartite' wave equation.
In summary, we present a differential equation for wave functions,
which is equivalent to Schr\"{o}dinger's wave equation and can be
used to determine energy-level gaps of the system. Contrary to
Schr\"{o}dinger's wave equation, this equation acts on `bipartite'
wave functions. It is shown that those `bipartite' wave functions
satisfy all the basic properties of Schr\"{o}dinger's wave
functions. Further, it is argued that `bipartite' wave functions can
present a mathematical expression of wave-particle duality. Our
results shed considerable light on the mathematical basis of quantum
mechanics.
This work was supported by the National Natural Science Foundation
of China under Grant No.10571176, the National Basic Research
Programme of China under Grant No.2001CB309309, and also funds
from Chinese Academy of Sciences.
\end{document}
\begin{document}
\title{Clarke subgradients for directionally Lipschitzian stratifiable functions}
\begin{abstract}
Using a geometric argument, we show that under a reasonable continuity condition, the Clarke subdifferential of a semi-algebraic (or more generally stratifiable) directionally Lipschitzian function admits a simple form: the normal cone to the domain and limits of gradients generate the entire Clarke subdifferential. The characterization formula we obtain unifies various apparently disparate results that have appeared in the literature. Our techniques also yield a simplified proof that closed semialgebraic functions on $\R^n$ have a limiting subdifferential graph of uniform local dimension $n$. \end{abstract}
\normalsize
\section{Introduction.}
Variational analysis, a subject that has been vigorously developing for the past 40 years, has proven itself to be extremely effective at describing nonsmooth phenomena. The Clarke subdifferential (or generalized gradient) and the limiting subdifferential of a function are the earliest and most widely used constructions of the subject.
A key distinction between these two notions is that, in contrast to the limiting subdifferential, the Clarke subdifferential is always convex.
From a computational point of view, convexity of the Clarke subdifferential is a great virtue. To illustrate, by the classical Rademacher theorem, a {\em locally Lipschitz continuous} function $f$ on an open subset $U$ of $\R^n$ is differentiable almost everywhere on $U$, in the sense of Lebesgue measure. Clarke, in \cite{origin}, showed that for such functions, the Clarke subdifferential admits the simple presentation
\begin{equation}\label{eqn:formula}
\partial_c f(\bar{x})=\conv \{\lim_{i\to\infty} \nabla f(x_i): x_i\stackrel{\Omega}{\rightarrow} \bar{x}\}, \tag{LipR}
\end{equation}
where $\bar{x}$ is any point of $U$ and $\Omega$ is any full measure subset of $U$. Such a formula holds great computational promise since gradients are often cheap to compute. For example, utilizing (\ref{eqn:formula}), Burke, Lewis, and Overton developed an effective computational scheme for approximating the Clarke subdifferential by sampling gradients \cite{BLO}, and, motivated by this idea, developed a robust optimization algorithm \cite{alg}.
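As a hedged one-dimensional sketch of the gradient-sampling idea behind (\ref{eqn:formula}) (our own toy example, not the algorithm of \cite{BLO}): for $f(x) = |x|$ at $\bar{x} = 0$, sampling gradients at nearby points of differentiability and taking their convex hull recovers the Clarke subdifferential $[-1, 1]$.

```python
import numpy as np

# Toy sketch of gradient sampling for (LipR): approximate the Clarke
# subdifferential of f(x) = |x| at xbar = 0 by the convex hull of gradients
# sampled at nearby points of differentiability.  In one dimension the
# convex hull is just the interval [g.min(), g.max()].
rng = np.random.default_rng(4)
grad = lambda x: np.sign(x)              # gradient of |x| away from 0

xbar, delta = 0.0, 1e-3
samples = xbar + delta * (2.0 * rng.random(100) - 1.0)
samples = samples[samples != 0.0]        # f is differentiable off 0
g = grad(samples)

print(g.min(), g.max())   # hull approaches [-1, 1], the Clarke subdifferential
```

With both signs present among the samples (which holds with overwhelming probability), the hull is exactly $[-1,1] = \partial_c f(0)$.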
The authors of \cite{BLO} further extended Clarke's result to the class
of {\em finite-valued}, {\em continuous} functions $f\colon U\to \R$, defined on an open subset $U$ of $\R^n$, that are {\em absolutely continuous on lines}, and are {\em directionally Lipschitzian}; the latter means that the Clarke normal cone to the epigraph of $f$ is pointed. Under these assumptions on $f$, the authors derived the representation
\begin{equation}\label{eqn:lewis}
\partial_c f(\bar{x})=\bigcap_{\delta > 0} \cl \conv\Big(\nabla f\big(\Omega\cap B_\delta(\bar{x})\big)\Big), \tag{ACLR}
\end{equation}
where $B_\delta(\bar{x})$ is an open ball of radius $\delta$ around $\bar{x}$ and $\Omega$ is any full measure subset of $U$, and they extended their computational scheme to this more general setting.
One can easily see that this formula generalizes Clarke's result, since locally Lipschitz functions are absolutely continuous on lines, and for such functions (\ref{eqn:lewis}) reduces to (\ref{eqn:formula}). Pointedness of the Clarke normal cone is a common theoretical assumption. For instance, closed convex sets with nonempty interior have this property. Some results related to (\ref{eqn:lewis}) appear in \cite{CMW}.
In optimization theory, one is often interested in {\em extended real-valued} functions (functions that are allowed to take on the value $+\infty$), so as to model constraints, for instance. Results above are not applicable in such instances. An early predecessor of (\ref{eqn:formula}) and (\ref{eqn:lewis}) does rectify this problem, at least when convexity is present. Rockafellar \cite[Theorem 25.6]{rock} showed that for any closed {\em convex} function $f\colon\R^n\to\R\cup\{+\infty\}$, whose domain $\dom f$ has a nonempty interior, the convex subdifferential has the form
\begin{equation}\label{eqn:rock}
\partial f(\bar{x})=\conv \{\lim_{i\to\infty} \nabla f(x_i): x_i\to \bar{x}\}+N_{\sdom f}(\bar{x}),\tag{CoR}
\end{equation}
where $\bar{x}$ is any point in the domain of $f$ and $N_{\sdom f}(\bar{x})$ is the normal cone to the domain of $f$ at $\bar{x}$.
Our goal is to provide an intuitive and geometric proof of a representation formula unifying (\ref{eqn:formula}), (\ref{eqn:lewis}), and (\ref{eqn:rock}). To do so, we will impose a certain structural assumption on the functions $f$ that we consider. Namely, we will assume that the domain of $f$ can be locally ``stratified'' into a finite collection of smooth manifolds, so that $f$ is smooth on each such manifold. Many functions of practical importance in optimization and in nonsmooth analysis possess this property. All semi-algebraic functions (those functions whose graphs can be described as a union of finitely many sets, each defined by finitely many polynomial inequalities), and more generally, tame functions fall within this class \cite{tame_opt}. We will show (Theorem~\ref{thm:main}) that for a {\em directionally Lipschitzian}, {\em stratifiable} function $f\colon\R^n\to\R\cup\{+\infty\}$, that is continuous on its domain (for simplicity), the Clarke subdifferential admits the intuitive form
\begin{equation}\label{eqn:our}
\partial_c f(\bar{x})=\conv\{\lim_{i\to\infty} \nabla f(x_i):x_i \xrightarrow[]{\Omega}\bar{x}\} + \cone\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\xrightarrow[]{\Omega} \bar{x}\}+ N^{c}_{\sdom f}(\bar{x}),
\end{equation}
or equivalently,
$$\partial_c f(\bar{x})=\bigcap_{\delta >0} \cl\conv\Big(\nabla f\big(\Omega\cap B_\delta(\bar{x})\big)\Big)+ N^{c}_{\sdom f}(\bar{x}),$$
where $\Omega$ is any dense subset of $\dom f$ and $\cone$ denotes the convex conical hull. (In contrast to the aforementioned results, we do not require $\Omega$ to have full measure.)
This is significant both from theoretical and computational perspectives. Proofs of (\ref{eqn:formula}) and (\ref{eqn:lewis}) are based largely on Fubini's theorem and analysis of directional derivatives, and though the arguments are elegant, they do not shed light on the geometry driving such representations to hold. Similarly, Rockafellar's argument of (\ref{eqn:rock}) relies heavily on the well-oiled machinery of convex analysis. Consequently, a simple unified geometric argument is extremely desirable. From a practical point of view, representation (\ref{eqn:our}) decouples the behavior of the function from the geometry of the domain; consequently, when the domain is a simple set (polyhedral perhaps) and the behavior of the function on the interior of the domain is complex, our result provides a convenient method of calculating the Clarke subdifferential purely in terms of limits of gradients and the normal cone to the domain --- information that is often readily available. Furthermore, using (\ref{eqn:our}), the functions we consider in the current paper become amenable to the techniques developed in \cite{BLO}.
Whereas (\ref{eqn:our}) deals with pointwise estimation of the Clarke subdifferential, our second result addresses the geometry of subdifferential graphs, as a whole. In particular, we consider the size of subdifferential graphs, a feature that may have important algorithmic applications.
For instance, Robinson \cite{rob,newt} shows
computational promise for functions defined on $\R^n$ whose subdifferential
graphs are locally homeomorphic to an open subset of $\R^n$.
Due to the results of Minty~\cite{minty} and Poliquin-Rockafellar~\cite{prox_reg}, Robinson's techniques are applicable for convex, and more generally, for ``prox-regular'' functions.
Trying to understand the size of subdifferential graphs in the absence of convexity (or monotonicity), the authors in \cite{small} were led to consider the semi-algebraic setting. The authors proved that the {\em limiting subdifferential graph} of a closed, proper, {\em semi-algebraic} function on $\R^n$ has {\em uniform local dimension} $n$. Applications to sensitivity analysis were also discussed. We show how the techniques developed in the current paper drastically simplify the proof of this striking fact. Remarkably, this dimensional uniformity does not hold for the Clarke subdifferential graph.
The rest of the paper is organized as follows. In Section~\ref{sec:prelim}, we establish notation and recall some basic facts from variational analysis. In Section~\ref{sec:char}, we derive a characterization formula for the Clarke subdifferential of a directionally Lipschitzian, stratifiable function that possesses a certain continuity property on its domain, and in Section~\ref{sec:random} we relate our results to the gradient sampling framework.
In Section~\ref{sec:loc_dim}, we prove the theorem concerning the local dimension of semi-algebraic subdifferential graphs. We have designed this last section to be entirely independent from the previous ones (except for Section~\ref{sec:prelim}), since it does require a short foray into semi-algebraic geometry.
\section{Preliminary results.}\label{sec:prelim}
In this section, we summarize some of the fundamental tools used in variational analysis and nonsmooth optimization.
We refer the reader to the monographs Borwein-Zhu \cite{Borwein-Zhu}, Clarke-Ledyaev-Stern-Wolenski \cite{CLSW}, Mordukhovich \cite{Mord_1,Mord_2}, and Rockafellar-Wets \cite{VA}, for more details. Unless otherwise stated, we follow the terminology and notation of \cite{VA}.
The functions we consider will take their values in the extended real line $\overline{\R}:=\R\cup\{-\infty,\infty\}$. We say that an extended-real-valued function is {\em proper} if it is never $-\infty$ and is not always $+\infty$.
For a function $f\colon\R^n\rightarrow\overline{\R}$, the {\em domain} of $f$ is $$\mbox{\rm dom}\, f:=\{x\in\R^n: f(x)<+\infty\},$$ and the {\em epigraph} of $f$ is $$\mbox{\rm epi}\, f:= \{(x,r)\in\R^n\times\R: r\geq f(x)\}.$$
Throughout this work, we will only use Euclidean norms. Hence for a point $x\in\R^n$, the symbol $|x|$ will denote the standard Euclidean norm of $x$. Unless we state otherwise, the topology on $\R^n$ that we consider is induced by this norm.
Given a set $Q\subset\R^n$, the notation $x_i\stackrel{Q}{\rightarrow} \bar{x}$ will mean that the sequence $x_i$ converges to $\bar{x}$ and all the points $x_i$ lie in $Q$.
We let ``$o(|x-\bar{x}|)$ for $x\in Q$'' be shorthand for a function that satisfies
$\frac{o(|x-\bar{x}|)}{|x-\bar{x}|}\rightarrow 0$ whenever $x\stackrel{Q}{\rightarrow} \bar{x}$ with $x\neq\bar{x}$.
A function $f\colon\R^n\to\overline{\R}$ is {\em locally Lipschitz continuous at} a point $\bar{x}$ {\em relative to} a set $Q\subset\R^n$ containing $\bar{x}$, if $f(\bar{x})$ is finite and there exists a real number $\kappa\in [0,+\infty)$ with
$$|f(x)-f(y)|\leq \kappa |x-y|, \textrm{ for all } x,y\in Q \textrm{ near } \bar{x}.$$
If there exists an open neighborhood $Q$ of $\bar{x}$ so that the above conditions hold, then we simply say that $f$ is locally Lipschitz continuous at $\bar{x}$.
A {\em set-valued mapping} $F$ from $\R^n$ to $\R^m$, denoted by $F\colon\R^n\rightrightarrows\R^m$, is a mapping from $\R^n$ to the power set of $\R^m$. Hence for each point $x\in\R^n$, $F(x)$ is a subset of $\R^m$. For a set-valued mapping $F\colon\R^n\rightrightarrows\R^m$, the {\em domain} of $F$ is $$\mbox{\rm dom}\, F:=\{x\in\R^n:F(x)\neq\emptyset\},$$ and the {\em graph} of $F$ is $$\mbox{\rm gph}\, F:=\{(x,y)\in\R^n\times\R^m:y\in F(x)\}.$$
The {\em outer limit} of $F$ at $\bar{x}$ is
$$\operatornamewithlimits{Limsup}_{x\to\bar{x}} F(x):=\{v\in\R^m: \exists x_i\to\bar{x},~\exists v_i\to v \textrm{ with } v_i\in F(x_i) \}.$$
The mapping $F$ is {\em locally bounded} near $\bar{x}$ if the image $F(V)\subset\R^m$ is bounded, for some neighborhood $V$ of $\bar{x}$.
The following definition extends in two ways the classical notion of continuity to set-valued mappings.
\begin{definition}[Continuity]
{\rm Consider a set-valued mapping $F\colon\R^n\rightrightarrows\R^m$.
\begin{enumerate}
\item $F$ is {\em outer semicontinuous} at a point $\bar{x}\in\R^n$ if for any sequence of points $x_i\in\R^n$ converging to $\bar{x}$ and any sequence of vectors $v_i\in F(x_i)$ converging to $\bar{v}$, we must have $\bar{v}\in F(\bar{x})$.
\item $F$ is {\em inner semicontinuous} at $\bar{x}$ if for any sequence of points $x_i$ converging to $\bar{x}$ and any vector $\bar{v}\in F(\bar{x})$, there exist vectors $v_i\in F(x_i)$ converging to $\bar{v}$.
\end{enumerate}
If both properties hold, then we say that $F$ is {\em continuous} at $\bar{x}$.
}
\end{definition}
We let $\inter Q$, $\cl Q$, $\conv Q$, and $\cone Q$, denote the interior, closure, convex hull, and convex conical hull of a set $Q$, respectively. A cone $Q$ is said to be {\em pointed} if it contains no lines. An open ball of radius $r$ around a point $\bar{x}\in\R^n$ will be denoted by $B_r(\bar{x})$. We let ${\bf B}$ and $\overline{{\bf B}}$ be the open and closed unit balls, respectively.
The following is a standard result on preservation of continuity of set-valued mappings under a pointwise convex conical hull operation. We provide a proof for completeness.
\begin{lemma}[Preservation of Continuity]\label{lem:outer}
Consider a set-valued mapping $F\colon\R^n\rightrightarrows\R^m$ that is outer-semicontinuous at a point $\bar{x}\in\R^n$. Suppose $F$ is locally bounded near $\bar{x}$ and $0\notin F(\bar{x})$.
Then the mapping $$x\mapsto G(x):=\cone F(x)$$ is outer-semicontinuous at $\bar{x}$, provided that $G(\bar{x})$ is pointed.
\end{lemma}
\begin{proof}
Consider a sequence $(x_i,v_i)\to(\bar{x},\bar{v})$, with $v_i\in G(x_i)$ for each index $i$. By Carath\'{e}odory's theorem, we deduce
$$v_i=\sum_{j=1}^{m}\lambda^i_j y^i_j,$$
for some multipliers $\lambda^i_j\geq 0$ and vectors $y^i_j\in F(x_i)$, where $j=1,\ldots, m$.
Restricting to a subsequence, we may assume that there exist nonzero vectors $y_j\in F(\bar{x})$ satisfying
$$y_j=\lim_{i\to\infty} y^i_j, \textrm{ for each index } j.$$
We claim that the sequence of multipliers $\lambda^i_j$ is bounded. Indeed suppose this is not the case and let $m_i:=\max_{j} \lambda^i_j$. Then up to a subsequence, there exist multipliers $\lambda_j$, not all zero, such that $$\frac{\lambda^i_j}{m_i}\to\lambda_j \textrm{ as } i\to\infty, \textrm{ and }0=\sum_{j=1}^m \lambda_j y_j,$$ contradicting the fact that $G(\bar{x})$ is pointed. We conclude that the multipliers $\lambda^i_j$ are bounded.
Then up to a subsequence, we have $$\bar{v}=\lim_{i\to\infty} \sum_{j=1}^{m}\lambda^i_j y^i_j= \sum_{j=1}^{m}\lambda_j y_j\in G(\bar{x}),$$ for some real numbers $\lambda_j \geq 0$. We conclude that $G$ is outer-semicontinuous at $\bar{x}$.
\end{proof}
The distance of a point $x$ to a set $Q$ is $$d_Q(x):=\inf_{y\in Q}|x-y|,$$ and the projection of $x$ onto $Q$ is $$P_Q(x):=\{y\in Q:|x-y|=d_Q(x)\}.$$
We now consider normal cones, which are fundamental objects in variational geometry.
\begin{definition}[Proximal normals]
{\rm
Consider a set $Q\subset\R^n$ and a point $\bar{x}\in Q$. The {\em proximal normal cone} to $Q$ at $\bar x$, denoted
$N^{P}_Q(\bar x)$, consists of all vectors $v \in \R^n$ such that $\bar{x}\in P_Q(\bar{x}+\frac{1}{r}v)$ for some $r>0$.
}
\end{definition}
Geometrically, a vector $v\neq 0$ is a proximal normal to $Q$ at $\bar{x}$ precisely when there exists a ball touching $Q$ at $\bar{x}$ such that $v$ points from $\bar{x}$ towards the center of the ball. Furthermore, this condition amounts to
$$\langle v,x-\bar{x} \rangle \leq O(|x-\bar{x}|^2) ~~\textrm{ as } x\to\bar{x} \textrm{ in } Q.$$
Relaxing the inequality above, one obtains the following notion.
\begin{definition}[Fr\'{e}chet normals]
{\rm Consider a set $Q\subset\R^n$ and a point $\bar{x}\in Q$. The {\em Fr\'{e}chet normal cone} to $Q$ at $\bar x$, denoted
$\hat N_Q(\bar x)$, consists of all vectors $v \in \R^n$ such that $$\langle v,x-\bar{x} \rangle \leq o(|x-\bar{x}|) ~~\textrm{ as }x\to\bar{x} \textrm{ in } Q.$$
}
\end{definition}
Note that both $N^P_Q(\bar{x})$ and $\hat{N}_Q(\bar{x})$ are convex cones, while $\hat{N}_Q(\bar{x})$ is also closed. However, the set-valued mapping $x\mapsto \hat{N}_Q(x)$ is generally not outer-semicontinuous, and hence is not robust relative to perturbations in $x$. To correct for that, the following definition is introduced.
\begin{definition}[Limiting normals]
{\rm Consider a set $Q\subset\R^n$ and a point $\bar{x}\in Q$. The {\em limiting normal cone} to $Q$ at $\bar{x}$, denoted $N_Q(\bar{x})$, consists of all vectors $v\in\R^n$ such that there are sequences $x_i\stackrel{Q}{\rightarrow} \bar{x}$ and $v_i\rightarrow v$ with $v_i\in\hat{N}_Q(x_i)$.}
\end{definition}
The limiting normal cone, as defined above, consists of limits of Fr\'{e}chet normals. In fact, the same object arises if we only take limits of proximal normals \cite[Exercise 6.18]{VA}. Convexifying the limiting normal cone leads to the following definition.
\begin{definition}[Clarke normals]
{\rm
Consider a set $Q\subset\R^n$ and a point $\bar{x}\in Q$. The {\em Clarke normal cone} to $Q$ at $\bar{x}$ is
$$N_Q^{c}(\bar{x}):=\cl\conv N_Q(\bar{x}).$$ }
\end{definition}
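As a hedged illustration of these cones (our own example, not drawn from the text): for $Q$ the closed unit disk in $\R^2$, the defining condition $\bar{x}\in P_Q(\bar{x}+\frac{1}{r}v)$ certifies the radial direction as a proximal normal at a boundary point, while a tangential direction fails the test. (For this convex, smooth-boundary set the proximal, Fr\'{e}chet, limiting, and Clarke normal cones all coincide.)

```python
import numpy as np

# Illustration: proximal normals to Q = closed unit disk in R^2, checked
# through the defining condition  xbar in P_Q(xbar + v / r).
# The projection onto the disk is x / max(1, |x|).
def proj_disk(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

xbar = np.array([1.0, 0.0])    # boundary point of Q
v = np.array([2.0, 0.0])       # radial direction: a proximal normal
w = np.array([0.0, 1.0])       # tangential direction: not a normal

r = 1.0
print(np.allclose(proj_disk(xbar + v / r), xbar))   # True:  v in N^P_Q(xbar)
print(np.allclose(proj_disk(xbar + w / r), xbar))   # False: w not in N^P_Q(xbar)
```

Geometrically, $\bar{x}+v$ lies on the ray from the origin through $\bar{x}$, so it projects back onto $\bar{x}$, whereas $\bar{x}+w$ projects onto a different boundary point.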
Given any set $Q\subset\R^n$ and a mapping $F\colon Q\to \widetilde{Q}$, where $\widetilde{Q}\subset\R^m$, we say that $F$ is ${\bf C}^p$-{\em smooth} $(p\geq 2)$ if for each point $\bar{x}\in Q$, there is a neighborhood $U$ of $\bar{x}$ and a ${\bf C}^p$ mapping $\hat{F}\colon \R^n\to\R^m$ that agrees with $F$ on $Q\cap U$. Henceforth, to simplify notation, the word smooth will mean ${\bf C}^2$-smooth.
\begin{definition}[Smooth Manifolds]
{\rm We say that a set $M$ in $\R^n$ is a ${\bf C}^2$-{\em submanifold} of dimension $r$ if for each point $\bar{x}\in M$, there is an open neighborhood $U$ around $\bar{x}$ and a mapping $F\colon \R^n\to\R^{n-r}$ that is ${\bf C}^2$-smooth with $\nabla F(\bar{x})$ of full rank, satisfying
$M\cap U=\{x\in U: F(x)=0\}$.
}
\end{definition}
A good reference on smooth manifold theory is \cite{Lee}.
\begin{theorem}\cite[Example 6.8]{VA}\label{thm:clarke_man}
Consider a ${\bf C}^2$-manifold $M\subset\R^n$. Then at every point $x\in M$, the normal cone $N_M(x)$ is equal to the normal space to $M$ at $x$, in the sense of differential geometry.
\end{theorem}
In fact, the following stronger characterization holds.
\begin{theorem}[Prox-normal neighborhood]\label{thm:prox}
Consider a ${\bf C}^2$-manifold $M\subset\R^n$ and a point $\bar{x}\in M$. Then there exists an open neighborhood $U$ of $\bar{x}$, such that
\begin{enumerate}
\item the projection map $P_M$ is single-valued on $U$,
\item for any two points $x\in M\cap U$ and $v\in U$, the equivalence, $$v\in x+N_M(x) \Leftrightarrow x=P_M(v),$$ holds.
\end{enumerate}
\end{theorem}
Following the notation of \cite{Hare}, we call the set $U$ guaranteed to exist by Theorem~\ref{thm:prox} a {\em prox-normal neighborhood} of $M$ at $\bar{x}$. For more details about Theorem~\ref{thm:prox}, see \cite[Exercise 13.38]{VA} and \cite[Proposition 1.9]{CLSW}. We should note that the theorem above holds for all ``prox-regular'' sets $M$ \cite{prox_reg}.
Armed with the aforementioned facts from variational geometry, we can study variational properties of functions via their subdifferential mappings.
\begin{definition}[Subdifferentials]
{\rm Consider a function $f\colon\R^n\rightarrow\overline{\R}$ and a point $\bar{x}\in\R^n$, with $f(\bar{x})$ finite. The {\em limiting subdifferential} of $f$ at $\bar{x}$ is defined by
$$\partial f(\bar{x})= \{v\in\R^n: (v,-1)\in N_{\mbox{{\scriptsize {\rm epi}}}\, f}(\bar{x},f(\bar{x}))\}.$$
{\em Proximal}, {\em Fr\'{e}chet}, and {\em Clarke subdifferentials} are defined analogously.
}
\end{definition}
\noindent For $\bar{x}$ such that $f(\bar{x})$ is not finite, we follow the convention that $\partial_{P} f(\bar{x})=\hat{\partial}f(\bar{x})=\partial f(\bar{x})=\partial_c f(\bar{x})=\emptyset$.
The subdifferentials defined above fail to capture the horizontal normals to the epigraph. Hence to obtain a more complete picture, we consider the following.
\begin{definition}[Horizon subdifferential]
{\rm For a function $f\colon\R^n\to\overline{\R}$ that is finite at a point $\bar{x}$, the {\em horizon subdifferential} is given by
$$\partial^{\infty} f(\bar{x})= \{v\in\R^n: (v,0)\in N_{\mbox{{\scriptsize {\rm epi}}}\, f}(\bar{x},f(\bar{x}))\}.$$}
\end{definition}
For a set $Q\subset\R^n$, we define $\delta_Q\colon\R^n\to\overline{\R}$ to be a function that is $0$ on $Q$ and $+\infty$ elsewhere. We call $\delta_Q$ the {\em indicator function} of $Q$.
Then for a point $\bar{x}$, we have $N_Q(\bar{x})=\partial \delta_Q(\bar{x})$, with analogous statements holding for the other subdifferentials.
Often, we will work with discontinuous functions $f\colon\R^n\to\overline{\R}$. For such functions, it is useful to consider $f${\em-attentive} convergence of a sequence $x_i$ to a point $\bar{x}$, denoted $x_i \xrightarrow[f]{} \bar{x}$. In this notation we have
$$x_i \xrightarrow[f]{} \bar{x} \quad\Longleftrightarrow\quad x_i\to\bar{x} \textrm{ and } f(x_i)\to f(\bar{x}).$$ If in addition we have a set $Q\subset\R^n$, then $x_i \xrightarrow[f]{Q} \bar{x}$ will mean that $x_i$ converges $f$-attentively to $\bar{x}$ and the points $x_i$ all lie in $Q$.
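For instance, consider the lower-semicontinuous function

```latex
f(x)= \left\{
\begin{array}{ll}
0 & \textrm{if } x \leq 0\\
1 & \textrm{if } x > 0
\end{array}
\right.
```

Then the sequence $x_i=-1/i$ converges $f$-attentively to $0$, since $f(x_i)=0=f(0)$, whereas the sequence $x_i=1/i$ converges to $0$ only in the ordinary sense, since $f(x_i)=1\not\to f(0)$.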
It is immediate that the mappings $\partial f$ and $\partial^{\infty} f$ are outer-semicontinuous with respect to $f$-attentive convergence $x \xrightarrow[f]{} \bar{x}$.
Consider a function $f\colon\R^n\to\overline{\R}$ that is locally lower semi-continuous at a point $\bar{x}$, with $f(\bar{x})$ finite. Then $f$ is locally Lipschitz continuous around $\bar{x}$ if and only if the horizon subdifferential is trivial, that is the condition $\partial^{\infty} f(\bar{x})=\{0\}$ holds \cite[Theorem 9.13]{VA}. Weakening the latter condition to requiring $\partial^{\infty} f(\bar{x})$ to simply be pointed, we arrive at the following central notion \cite[Exercise 9.42]{VA}.
\begin{definition}[epi-Lipschitzian sets and directionally Lipschitzian functions]
\hspace{1 mm}
{\rm
\begin{enumerate}
\item A set $Q\subset\R^n$ is {\em epi-Lipschitzian} at one of its points $\bar{x}$ if $Q$ is locally closed at $\bar{x}$ and the normal cone $N_Q(\bar{x})$ is pointed.
\item A function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$, is {\em directionally Lipschitzian} at $\bar{x}$ if $f$ is locally lower-semicontinuous at $\bar{x}$ and the cone $\partial^{\infty} f(\bar{x})$ is pointed.
\end{enumerate}}
\end{definition}
Rockafellar \cite[Section 4]{Clarke_roc} proved that an epi-Lipschitzian set in $\R^n$, up to a rotation, locally coincides with an epigraph of a Lipschitz continuous function defined on $\R^{n-1}$. We should further note that the Clarke normal cone mapping of an epi-Lipschitzian set is outer-semicontinuous \cite[Proposition 6.8]{CLSW}.
It is easy to see that a function $f\colon\R^n\to\overline{\R}$ is directionally Lipschitzian at $\bar{x}$ if and only if the epigraph $\epi f$ is epi-Lipschitzian at $(\bar{x},f(\bar{x}))$.
Furthermore, for a set $Q$ that is locally closed at $\bar{x}$, the limiting normal cone $N_Q(\bar{x})$ is pointed if and only if the Clarke normal cone $N^c_Q(\bar{x})$ is pointed \cite[Exercise 9.42]{VA}.
Consider the two functions
\begin{displaymath}
f_1(x)=x ~~\textrm{ and }~~ f_2(x)= \left\{
\begin{array}{ll}
x & \textrm{if } x \leq 0\\
x+1 & \textrm{if } x > 0
\end{array}
\right.
\end{displaymath}
defined on the real line. Clearly both $f_1$ and $f_2$ are directionally Lipschitzian, and they have the same derivatives at each point of differentiability. However $\partial_c f_1(0)\neq \partial_c f_2(0)$; indeed, $\partial_c f_1(0)=\{1\}$, whereas $\partial_c f_2(0)=[1,\infty)$. Roughly speaking, this situation arises because some normal cones to the epigraph of a function $f$, namely those at points $(x,r)$ with $r>f(x)$, may not correspond to any subdifferential. Consequently, if we have any hope of deriving a characterization of the Clarke subdifferential purely in terms of gradients and the normal cone to the domain, we must eliminate the situation above. Evidently, an assumption of continuity of the function on its domain would do the trick. However, such an assumption would immediately eliminate many interesting convex functions from consideration. Rather than doing so, we identify a new condition, which arises naturally as a byproduct of our arguments. At the risk of sounding extravagant, we give this property a name.
\begin{definition}[Vertical continuity]
{\rm
We say that a function $f\colon\R^n\to\overline{\R}$ is {\em vertically continuous} at a point $\bar{x}\in\dom f$ if the equation
\begin{equation}
\operatornamewithlimits{Limsup}_{\substack{ x\to\bar{x},~r\to f(\bar{x}) \\ r>f(\bar{x}) } } N_{\sepi f}(x,r)= N_{\sdom f}(\bar{x})\times\{0\}, \label{eqn:strange}
\end{equation}
holds.
}
\end{definition}
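For instance, the indicator function $\delta_Q$ of a closed set $Q\subset\R^n$ is vertically continuous at every point $\bar{x}\in Q$. Indeed, since $\epi \delta_Q=Q\times[0,\infty)$, the epigraph coincides with $Q\times\R$ locally around any pair $(x,r)$ with $x\in Q$ and $r>0$, and hence

```latex
\operatornamewithlimits{Limsup}_{\substack{ x\to\bar{x},~r\to 0 \\ r>0 } }
N_{\sepi \delta_Q}(x,r)
=\operatornamewithlimits{Limsup}_{\substack{ x\to\bar{x} \\ x\in Q } } \big(N_Q(x)\times\{0\}\big)
= N_Q(\bar{x})\times\{0\}
= N_{\sdom \delta_Q}(\bar{x})\times\{0\},
```

by outer-semicontinuity of the normal cone mapping $N_Q$.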
To put this condition in perspective, we record the following observations.
\begin{proposition}[Properties of vertically continuous functions]\label{prop:norm}
Consider a proper function $f\colon\R^n\to\overline{\R}$ that is locally lower-semicontinuous at a point $\bar{x}$, with $f(\bar{x})$ finite.
\begin{enumerate}
\item \label{it:0} Suppose that whenever a pair $(x,r)\in\epi f$, with $r > f(x)$, is near $(\bar{x},f(\bar{x}))$ we have
$$N_{\sepi f}(x,r)= N_{\sdom f}(x)\times\{0\}.$$ Then $f$ is vertically continuous at $\bar{x}$.
\item \label{it:1} Suppose that $\bar{x}$ lies in the interior of $\dom f$ and that $f$ is vertically continuous at $\bar{x}$. Then $f$ is continuous at $\bar{x}$, in the usual sense.
\item \label{it:2} Suppose that $f$ is continuous on a neighborhood of $\bar{x}$, relative to the domain of $f$. Then $f$ is vertically continuous at all points of $\dom f$ near $\bar{x}$.
\item \label{it:3} If $f$ is convex, then $f$ is vertically continuous at every point $\bar{x}$ in $\dom f$.
\item \label{it:4} Suppose that $f$ is ``amenable'' at $\bar{x}$ in the sense of \cite{amen}; that is, $f$ is finite at $\bar{x}$ and there exists a neighborhood $V$ of $\bar{x}$ so that $f$ can be written as a composition $f=g\circ F$, for a ${\bf C}^1$ mapping $F\colon V\to\R^m$ and a proper, lower-semicontinuous, convex function $g\colon\R^m\to\overline{\R}$, so that the qualification condition
\begin{equation}\label{eqn:qual}
N_{\sdom g}(F(\bar{x}))\cap \kernal \nabla F(\bar{x})^{*}=\{0\},
\end{equation}
is satisfied. Then $f$ is vertically continuous at $\bar{x}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Claim \ref{it:0} is immediate from outer-semicontinuity of the normal cone map $N_{\sdom f}$.
To see \ref{it:1}, suppose that $f$ is vertically continuous at $\bar{x}\in\inter\dom f$. Since the normal cone to the domain of $f$ at $\bar{x}$ consists of the zero vector, we deduce $N_{\sepi f}(\bar{x},f(\bar{x})+\frac{1}{n})=\{0\}$, for all sufficiently large integers $n$. By \cite[Exercise 6.19]{VA}, we deduce that each such point $(\bar{x}, f(\bar{x})+\frac{1}{n})$ lies in the interior of the epigraph $\epi f$. Consequently for any sequence $x_i\to\bar{x}$, we have
$$\operatornamewithlimits{limsup}_{i\to\infty} f(x_i)\leq f(\bar{x})+\frac{1}{n},$$
for all large indices $n$. Letting $n$ tend to infinity, we deduce that $f$ is upper-semicontinuous at $\bar{x}$. The result follows.
To see \ref{it:2}, suppose that $f$ is continuous at $x\in\dom f$, relative to the domain of $f$. Then for any real $r>f(x)$ there exists an $\epsilon >0$ so that the epigraph $\epi f$ coincides with the product set, $\dom f\times [r-\epsilon,r+\epsilon]$, locally around $(x,r)$. In fact, this follows just from upper-semicontinuity of $f$ at $x$, relative to $\dom f$. We deduce $N_{\sepi f}(x,r)=N_{\sdom f}(x)\times \{0\}$. The result follows by \ref{it:0}.
To see \ref{it:3}, consider a pair $(\bar{x},r)$ with $r>f(\bar{x})$ and observe
\begin{align*}
(v,\alpha)\in N_{\sepi f}(\bar{x},r) &\Longleftrightarrow \langle (v,\alpha),(\bar{x},r)\rangle \geq \langle (v,\alpha),(x',r')\rangle ~~\textrm{ for all } (x',r')\in\epi f \\
&\Longleftrightarrow \alpha=0 \textrm{ and } v\in N_{\sdom f}(\bar{x}).
\end{align*}
Appealing to \ref{it:0}, we obtain the result.
The proof of \ref{it:4} follows from a standard nonsmooth chain rule. We outline the argument below. Without loss of generality, we can assume that the representation $f=g\circ F$ holds on all of $\R^n$. Observe $$\epi f=\{(x,r)\in\R^{n+1}: G(x,r)\in\epi g\},$$ for the mapping $G\colon\R^{n+1}\to\R^{m+1}$ defined by $G(x,r)=(F(x),r)$. We would like to use the chain rule appearing in \cite[Theorem 6.14]{VA} to compute the normal cone to $\epi f$. To this end, consider a pair $(x,r)\in \epi f$ and a vector $(y,\alpha)\in N_{\sepi g}(G(x,r))$. We have
\begin{align*}
0=\nabla G(x,r)^{*} (y,\alpha) &\Longleftrightarrow \alpha= 0 \textrm{ and } \nabla F(x)^{*} y=0\\
&\Longleftrightarrow y\in N_{\sdom g}(F(x)) \textrm{ and } \nabla F(x)^{*} y=0\\
&\Longleftrightarrow y=0,
\end{align*}
where the last equivalence follows from the qualification condition (\ref{eqn:qual}). Applying the chain rule \cite[Theorem 6.14]{VA}, we deduce
$$N_{\sepi f}(x,r)= \nabla G(x,r)^{*}N_{\sepi g}(G(x,r)),$$ for all pairs $(x,r)\in \epi f$. In particular, if we have $r > f(x)$, or equivalently $r> g(F(x))$, we deduce
$$N_{\sepi f}(x,r)=\nabla F(x)^{*}N_{\sdom g}(F(x))\times \{0\}.$$
The right hand side coincides with $N_{\sdom f}(x)\times\{0\}$ by \cite[Theorem 3.3]{amen}. The result follows by appealing to \ref{it:0}.
\end{proof}
As can be seen from the proposition above, vertical continuity bridges the gap between continuity of the function on the interior of the domain and continuity on the whole domain, and hence the name. In summary, all convex and amenable functions have this property, as do functions that are continuous on their domains. An illustrative example is provided by the proper, lower semi-continuous, convex (directionally Lipschitzian) function $f$ on $\R^2$, defined by
\begin{displaymath}
f(x,y) = \left\{
\begin{array}{ll}
y^2/(2x) &\textrm{if } x >0\\
0 &\textrm{if }x=0,~ y=0 \\
+\infty &\textrm{otherwise.}
\end{array}
\right.
\end{displaymath}
This function is discontinuous at the origin, despite being vertically continuous there.
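To verify the discontinuity directly, approach the origin along the parabola $x=t^2$, $y=t$:

```latex
f(t^2,t)=\frac{t^2}{2t^2}=\frac{1}{2} ~~\textrm{ for all } t>0,
\qquad\textrm{whereas}\qquad f(0,0)=0.
```

Vertical continuity at the origin, on the other hand, is guaranteed by convexity of $f$ and item \ref{it:3} of Proposition~\ref{prop:norm}.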
In the sequel, we will need the following basic result.
\begin{proposition}\label{prop:help}
Consider a set $M\subset \R^n$ and a function $f\colon\R^n\to\overline{\R}$ that is finite-valued and smooth on $M$.
Then, at any point $\bar{x}\in M$, we have $$\hat{\partial} f(\bar{x})\subset \nabla g(\bar{x})+N_M(\bar{x}),$$ where $g\colon\R^n\to\R$ is any smooth function agreeing with $f$ on $M$ on a neighborhood of $\bar{x}$.
\end{proposition}
\begin{proof}
Define a function $h:\R^n\to\overline{\R}$ agreeing with $f$ on $M$ and equalling plus infinity elsewhere.
It is standard to check that the chain of inclusions,
$$\hat{\partial} f(\bar{x})\subset\hat{\partial} h(\bar{x})=\hat{\partial} (g(\cdot)+\delta_M(\cdot))(\bar{x})\subset\nabla g(\bar{x})+N_M(\bar{x}),$$ holds.
\end{proof}
\section{Characterization of the Clarke Subdifferential.}\label{sec:char}
Directionally Lipschitzian functions play an important role in optimization and are close relatives of locally Lipschitz functions~\cite{BLO, VA}. Indeed, Rockafellar showed that epi-Lipschitzian sets, up to a change of coordinates, are epigraphs of Lipschitz functions \cite[Section 4]{Clarke_roc}.
As was mentioned in the introduction, a key feature of Clarke's construction is that the Clarke subdifferential of a locally Lipschitz function $f$, on $\R^n$, can be described purely in terms of gradient information. It is then reasonable to hope that the same property holds for continuous directionally Lipschitzian functions, but this is too good to be true. Though such functions are differentiable almost everywhere \cite{cone_mon}, their gradients may fail to generate the entire Clarke subdifferential.
A simple example is furnished by the classical ternary Cantor function --- a nondecreasing, continuous, and therefore directionally Lipschitzian function, with zero derivative at each point of differentiability. The Clarke subdifferential of this function does not identically consist of the zero vector \cite[Exercise 3.5.5]{Borwein-Zhu}, and consequently cannot be recovered from classical derivatives. This example notwithstanding, one does not expect the Cantor function to arise often in practice.
Nonsmoothness arises naturally in many applications, but not pathologically so. On the contrary, nonsmoothness is usually highly structured. Often such structure manifests itself through the existence of a {\em stratification}. In the current work, we consider so-called {\em stratifiable} functions. Roughly speaking, the domains of such functions can be decomposed into smooth manifolds (called strata), which fit together in a ``regular'' way, and so that the function is smooth on each such stratum. In particular, this rich class of functions includes all semi-algebraic, and more generally, all o-minimally defined functions. See for example \cite{DM}. We now make this notion precise.
\begin{definition}[Locally finite stratifications]
{\rm Consider a set $Q$ in $\R^n$. A {\em locally finite stratification} of $Q$ is a partition of $Q$ into disjoint manifolds $M_i$ (called strata) satisfying
\begin{itemize}
\item {\bf (frontier condition)} for each index $i$, the closure of $M_i$ in $Q$ is the union of some of the $M_j$'s, and
\item {\bf (local finiteness)} each point $x\in Q$ has a neighborhood that intersects only finitely many strata.
\end{itemize}
We say that a set $Q\subset\R^n$ is {\em stratifiable} if it admits a locally finite stratification.
}
\end{definition}
Observe that due to the frontier condition, a stratum $M_i$ intersects the closure of another stratum $M_j$ if and only if the inclusion $M_i\subset \cl M_j$ holds. Consequently, given a locally finite stratification of a set $Q$ into manifolds $\{M_i\}$, we can impose a natural partial order on the strata, namely
$$M_i\preceq M_j \Leftrightarrow M_i\subset \cl M_j.$$
A good example to keep in mind is the partition of a convex polyhedron into its open faces.
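Concretely, for the nonnegative quadrant in $\R^2$ this partition reads

```latex
\{(x,y): x\geq 0,~ y\geq 0\}=M_0\cup M_1\cup M_2\cup M_3,
```

where $M_0=\{(0,0)\}$, $M_1=\{(x,0): x>0\}$, $M_2=\{(0,y): y>0\}$, and $M_3=\{(x,y): x>0,~y>0\}$. The frontier condition holds since, for example, the closure of $M_3$ is the union of all four strata, and the induced partial order is $M_0\preceq M_1\preceq M_3$ and $M_0\preceq M_2\preceq M_3$.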
\begin{definition}[Stratifiable functions]
{\rm A function $f\colon\R^n\to\overline{\R}$ is {\em stratifiable} if there exists a locally finite stratification of $\dom f$ so that $f$ is smooth on each stratum. }
\end{definition}
The following result nicely illustrates the geometric insight one obtains by working with stratifications explicitly.
\begin{proposition}[Dense differentiability]\label{prop:cl_int}
Consider a proper stratifiable function $f\colon\R^n\to\overline{\R}$ that is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and let $\Omega$ be any dense subset of $\dom f$. Then the set $\Omega\cap \dom\nabla f$ is dense in the domain of $f$, in the $f$-attentive sense, locally near $\bar{x}$.
\end{proposition}
\begin{proof} Consider a locally finite stratification of $\dom f$ into manifolds $M_i$ so that $f$ is smooth on each stratum. Suppose for the sake of contradiction that there exists a point $x\in\dom f$ arbitrarily close to $\bar{x}$ and an $f$-attentive neighborhood $V=\{y\in\R^n:|y-x|<\epsilon, |f(y)-f(x)|<\delta\}$ so that $V\cap \Omega$ does not intersect any strata of dimension $n$. Shrinking $V$, we may assume that $V$ intersects only finitely many strata, say $\{M_j\}$ for $j\in J:=\{1,\ldots,k\}$, and that the inclusion $x\in\cl M_j$ holds for each index $j\in J$. Notice that since $f$ is continuous on each stratum, the set $V$ is a union of open subsets of the strata $M_j$ for $j\in J$.
Now among the strata $M_j$ with $j\in J$, choose a stratum $M$ that is maximal with respect to the partial order $\preceq$.
Clearly, we have
$$M\cap \cl M_j=\emptyset, \textrm{ for each } j\in J \textrm{ with } M_j\neq M.$$
Now let $y$ be any point of $V\cap M$ and observe that there exists a neighborhood $Y$ of $y$ so that the functions $f$ and $f+\delta_{M}$ coincide on $Y\cap M$. We deduce that $\partial f(y)$ is a nontrivial affine subspace. Since $f$ is directionally Lipschitzian at all points in $\dom f$ near $\bar{x}$, and in particular at $y$, we have arrived at a contradiction. Thus $\Omega\cap\dom\nabla f$ is dense (in the $f$-attentive sense) in the domain of $f$, locally near $\bar{x}$.
\end{proof}
In this section, we derive a characterization formula for the Clarke subdifferential of a stratifiable, vertically continuous, directionally Lipschitzian function $f\colon\R^n\to\overline{\R}$. This formula depends only on the gradients of $f$ and on the normal cone to the domain. It is important to note that the characterization formula we obtain is independent of any particular stratification of $\dom f$; one only needs to know that $f$ is stratifiable in order to apply our result. The argument we present is entirely constructive and is motivated by the following fact.
\begin{proposition}\label{prop:char}
Consider a closed, convex cone $Q\subset \R^n$, which is neither $\R^n$ nor a half-space. Then the equality,
$$Q=\cone (\bd Q),$$
holds.
\end{proposition}
Hence in light of Proposition~\ref{prop:char}, in order to obtain a representation formula for the Clarke subdifferential, it is sufficient to study the (relative) boundary structure of the Clarke normal cone. This is precisely the route we take.
\footnote{The idea to study the boundary structure of the Clarke normal cone in order to establish a convenient representation for the Clarke subdifferential is by no means new. For instance, the same idea was used by Rockafellar to establish the representation formula for the convex subdifferential \cite[Theorem 25.6]{rock}. While working on this paper, the authors became aware that the same strategy was also used to prove a representation formula for the subdifferential of finite-valued, continuous functions whose epigraph has positive reach
\cite[Theorem 4.9]{CM}, \cite[Theorem 2]{CMW}. In particular, Proposition~\ref{prop:char} also appears as \cite[Proposition 3]{CMW}.
}
\begin{lemma}[Fr\'{e}chet accessibility]\label{lem:access_set_main}
Consider a closed set $Q\subset\R^n$, a ${\bf C}^2$-manifold $M\subset Q$, and a point $\bar{x}\in M$. Recall that the inclusion $\hat{N}_Q(\bar{x})\subset N_M(\bar{x})$ holds. Suppose that a vector $\bar{v}\in \hat{N}_Q(\bar{x})$ lies in the boundary of $\hat{N}_Q(\bar{x})$, relative to the linear space $N_M(\bar{x})$. Then there exists a sequence $(x_i,v_i)\to(\bar{x},\bar{v})$, with $v_i\in N^P_Q(x_i)$, and so that all points $x_i$ lie outside of $M$.
\end{lemma}
\begin{proof} Choose a vector $\bar{w}\in N_M(\bar{x})$ in such a way as to guarantee $$\bar{v}+t\bar{w}\notin \hat{N}_Q(\bar{x}), ~~\textrm{ for all } t>0;$$ such a vector exists: since $\bar{v}$ lies in the boundary of the convex cone $\hat{N}_Q(\bar{x})$ relative to the subspace $N_M(\bar{x})$, it suffices to take $\bar{w}$ normal to a hyperplane in $N_M(\bar{x})$ supporting $\hat{N}_Q(\bar{x})$ at $\bar{v}$.
Consider the vectors
\begin{equation}\label{eq:proj}
y(t):=\bar{x}+t(\bar{v}+t\bar{w}),
\end{equation}
and observe $y(t)\notin \bar{x}+\hat{N}_Q(\bar{x})$ for every $t>0$. Consider a selection of the projection operator, $$x(t)\in P_Q(y(t)).$$ Clearly, $y(t)\to \bar{x}$ and $x(t)\to \bar{x}$, as $t\to 0$. Observe
\begin{align}
\frac{y(t)-x(t)}{t}&\in N^{P}_Q(x(t)), \notag\\
x(t)&\neq \bar{x},\label{eqn:neq}
\end{align}
for every $t$.
We claim that the points $x(t)$ all lie outside of $M$ for all sufficiently small $t>0$. Indeed, if this were not the case, then for sufficiently small $t$, the points $x(t)$ and $y(t)$ would lie in the prox-normal neighborhood of $M$ near $\bar{x}$, and we would deduce
$$x(t)=P_M(y(t))=\bar{x},$$ contradicting (\ref{eqn:neq}).
Thus all that is left is to show the convergence,
$\frac{y(t)-x(t)}{t}\to\bar{v}$.
To this end, observe that from (\ref{eq:proj}), we have
\begin{equation}\label{eqn:conv}
\frac{y(t)-\bar{x}}{t}\to \bar{v}.
\end{equation}
Hence it suffices to argue $\frac{x(t)-\bar{x}}{t}\to 0$.
By definition of $x(t)$, we have
\begin{equation}\label{eqn:bounded}
|y(t)-\bar{x}|\geq |(y(t)-\bar{x})+(\bar{x}-x(t))|.
\end{equation}
Squaring and simplifying the inequality above, we obtain
\begin{equation}\label{eq:acc2}
\Big\langle \frac{y(t)-\bar{x}}{t}, \frac{x(t)-\bar{x}}{t}\Big\rangle \geq \frac{1}{2}\Big|\frac{x(t)-\bar{x}}{t}\Big|^2.
\end{equation}
From (\ref{eqn:conv}) and (\ref{eqn:bounded}), we deduce that the vectors $\frac{x(t)-\bar{x}}{t}$ are bounded as $t\to 0$. Consider any limit point $\gamma \in\R^n$.
Taking the limit in (\ref{eq:acc2}), we obtain
\begin{equation}\label{eqn:ineq}
\langle \bar{v}, \gamma\rangle \geq \frac{1}{2}|\gamma|^2.
\end{equation}
Since $\bar{v}$ is a Fr\'{e}chet normal, we deduce
$$\langle \bar{v},x(t)-\bar{x} \rangle \leq o(|x(t)-\bar{x}|).$$
It immediately follows that
$$\langle \bar{v},\gamma\rangle \leq 0,$$
and in light of (\ref{eqn:ineq}),
we obtain $\gamma =0$. Hence
$$\frac{y(t)-x(t)}{t}\to\bar{v},$$ as we claimed.
\end{proof}
\begin{remark}
{\rm
We note, in passing, that an analogue of Lemma~\ref{lem:access_set_main} (with an identical proof) holds when $M$ is simply ``prox-regular'', in the sense of \cite{prox_reg}, around $\bar{x}$. In particular, the lemma is valid when $M$ is a convex set. }
\end{remark}
The combination of Lemma~\ref{lem:access_set_main} and Proposition~\ref{prop:char} yields dividends even in the simplest case when the manifold $M$ of Lemma~\ref{lem:access_set_main} is a singleton set. We should emphasize that in the following proposition, we do not even assume that the function in question is directionally Lipschitzian or stratifiable.
\begin{proposition}[Isolated singularity]\label{prop:iso}
Consider a continuous function $f\colon U\to\R$, defined on an open set $U\subset\R^n$. Suppose that $f$ is differentiable on $U\setminus \{\bar{x}\}$ for some point $\bar{x}\in U$, and that $\partial f(\bar{x})\neq \emptyset$. Then
$$\partial_c f(\bar{x})= \cl \Big(\conv\{\lim_{i\to\infty} \nabla f(x_i):x_i\to\bar{x}\} + \cone\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\to \bar{x}\} \Big),$$ under the convention that $\conv \emptyset = \{0\}$.
\end{proposition}
\begin{proof}
Define the two sets
\begin{equation*}
E:=\{\lim_{i\to\infty} \nabla f(x_i):x_i\to\bar{x}\},
~~~~H:= \{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\to \bar{x}\},
\end{equation*}
and consider the epigraph $Q:=\epi f$ and the singleton set $M:=\{(\bar{x},f(\bar{x}))\}$.
By Lemma~\ref{lem:access_set_main} and continuity of $f$, we have
\begin{equation}\label{eq:try2}
\bd \hat{N}_Q(\bar{x},f(\bar{x}))\subset \cone\big(E\times\{-1\}\big)\cup \big(H\times\{0\}\big).
\end{equation}
{\sc Case 1.} Suppose $\hat{N}_Q(\bar{x},f(\bar{x}))$ is not equal to $\R^n\times (-\infty,0]$.
Then from Proposition~\ref{prop:char} and (\ref{eq:try2}), we deduce
\begin{equation}\label{eqn:try1}
N^c_Q(\bar{x},f(\bar{x}))= \cl\cone\Big(\big(E\times\{-1\}\big)\cup \big(H\times\{0\}\big)\Big).
\end{equation}
From (\ref{eqn:try1}), we see that an inclusion $(v,-1)\in N^c_Q(\bar{x},f(\bar{x}))$ holds if and only if for every $\epsilon > 0$, there exist vectors $y_i\in E\cup H$, and real numbers $\lambda_i> 0$, for $1\leq i \leq n+1$, satisfying
\begin{align*}
\Big|v&-\Big(\sum_{i:y_i\in E} \lambda_i y_i+ \sum_{i:y_i\in H} \lambda_i y_i\Big)\Big|<\epsilon,\\
1&=\sum_{i: y_i\in E} \lambda_i.
\end{align*}
Thus $\partial_c f(\bar{x})=\cl (\conv E+\cone H)$, as we claimed.
{\sc Case 2.} Now suppose $\hat{N}_Q(\bar{x},f(\bar{x}))=\R^n\times (-\infty,0]$. Then from (\ref{eq:try2}), we deduce $H=\R^n$ and $\partial_c f(\bar{x})=\R^n=\conv E+\cone H$, under the convention $\conv \emptyset = \{0\}$.
\end{proof}
As an illustration, consider the following simple example.
\begin{example}
{\rm Consider the function $f(x,y):=\sqrt[4]{x^4+y^2}$ on $\R^2$. Clearly $f$
is differentiable on $\R^2\setminus \{(0,0)\}$. The gradient has the form
\[ \nabla f(x,y)=\frac{1}{(x^4+y^2)^{3/4}} \left( \begin{array}{c}
x^{3} \\
\frac{1}{2}y \end{array} \right).\]
From Proposition~\ref{prop:iso}, we obtain
$$\partial_c f(0,0)= \cl\Big(\conv\{\lim_{i\to\infty} \nabla f(x_i):x_i\to(0,0)\} + \cone\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\to (0,0)\}\Big).$$
Observe that the vectors $\nabla f(x,0)$ are equal to $(\pm 1,0)$ for $x\neq 0$, and the vectors $2\sqrt{|y|}\, \nabla f(0,y)$ are equal to $(0,\pm 1)$ for $y\neq 0$.
Thus we obtain
$$[-1,1]\times \{0\}\subset \conv\{\lim_{i\to\infty} \nabla f(x_i):x_i\to(0,0)\}
\quad\textrm{ and }\quad
\{0\}\times \R\subset\cone\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\to (0,0)\}.$$
Consequently, the inclusion $$[-1,1]\times\R \subset \partial_c f(0,0)$$ holds.
The absolute value of the first coordinate of $\nabla f(x,y)$ is always bounded by $1$, which implies the reverse inclusion above. Thus we have exact equality $\partial_c f(0,0)=[-1,1]\times\R$.
}
\end{example}
We record the following observation for ease of reference.
\begin{corollary}\label{cor:cont}
Consider a closed, convex cone $Q\subset\R^n$ with nonempty interior. Suppose that $\bd Q$ is contained in a proper linear subspace. Then $Q$ is either all of $\R^n$ or a half-space.
\end{corollary}
\begin{proof} Clearly if $Q$ were neither $\R^n$ nor a half-space, then by Proposition~\ref{prop:char}, we would deduce that $Q=\cone (\bd Q)$ has empty interior, which is a contradiction.
\end{proof}
Armed with Proposition~\ref{prop:char} and Lemma~\ref{lem:access_set_main}, we can now prove the main result of this section with ease.
\begin{theorem}[Characterization]\label{thm:main}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$ that is finite at $\bar{x}$. Suppose that $f$ is vertically continuous and directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$. Then for any dense subset $\Omega\subset\dom f$, we have
\begin{equation}\label{eqn:norm}
N^{c}_{\sepi f}(\bar{x},f(\bar{x}))=\cone \big\{\lim_{i\to\infty} \frac{(\nabla f(x_i),-1)}{\sqrt{1+|\nabla f(x_i)|^2}} : x_i\xrightarrow[f]{\Omega}
\bar{x}\big\}+\big(N^{c}_{\sdom f}(\bar{x})\times \{0\}\big).
\end{equation}
\noindent Consequently, the Clarke subdifferential admits the presentation
$$\partial_c f(\bar{x})=\conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\xrightarrow[f]{\Omega}\bar{x}\big\} + \cone\big\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\xrightarrow[f]{\Omega} \bar{x}\big\}+ N^{c}_{\sdom f}(\bar{x}).$$
\end{theorem}
\begin{proof}
We first prove (\ref{eqn:norm}). Observe that each limit of the normalized gradient vectors appearing in (\ref{eqn:norm}) is a limiting normal to $\epi f$, while since $f$ is vertically continuous at $\bar{x}$, we have $N^{c}_{\sdom f}(\bar{x})\times \{0\}\subset N^{c}_{\sepi f}(\bar{x},f(\bar{x}))$; hence the inclusion ``$\supset$'' holds.
Therefore we must establish the reverse inclusion. To this effect, intersecting the domain of $f$ with a small open ball around $\bar{x}$, we may assume that $f$ is directionally Lipschitzian and vertically continuous at each point $x\in \dom f$. For notational convenience, for a vector $v\in\R^{n}$, let $\overbracket[1pt]{v}:=\frac{(v,-1)}{\sqrt{1+|v|^2}}$. Define the set-valued mapping
\begin{equation}
F(x):= \cone \Big(\{\lim_{i\to\infty} \overbracket[1pt]{\nabla f(x_i)} : x_i\xrightarrow[f]{\Omega} x\}\cup (N_{\sdom f}(x)\cap {\bf B})\times\{0\}\Big).\label{eqn:set_val}
\end{equation}
By Proposition~\ref{prop:cl_int}, the set $\{\lim_{i\to\infty} \overbracket[1pt]{\nabla f(x_i)} : x_i\xrightarrow[f]{\Omega} x\}$ is nonempty. Furthermore, from the established inclusion ``$\supset$'', we see that $N_{\sdom f}(x)$ is pointed and hence the set
$\cone N_{\sdom f}(x)$ is closed for all $x\in\dom f$. Consequently, we deduce
\begin{equation*}
F(x)=\cone \{\lim_{i\to\infty} \overbracket[1pt]{\nabla f(x_i)}: x_i\xrightarrow[f]{\Omega} x\}+ \big(N^{c}_{\sdom f}(x)\times \{0\}\big).
\end{equation*}
Combining (\ref{eqn:set_val}) with Lemma~\ref{lem:outer}, we see that $F$ is outer-semicontinuous with respect to $f$-attentive convergence.
Now consider a stratification of $\dom f$ into manifolds $\{M_i\}$ having the property that $f$ is smooth on each stratum $M_i$. Restricting the domain of $f$, we may assume that the stratification $\{M_i\}$ consists of only finitely many sets. We prove the theorem by induction on the dimension of the strata $M_i$ in which the point $\bar{x}$ lies.
Clearly, the result holds for all strata of dimension $n$, since $f$ is smooth on such strata and $\Omega$ is dense in $\dom f$. As an inductive hypothesis, suppose that the claim holds for all strata that are strictly greater in the partial order $\preceq$ than a certain stratum $M$ and let $\bar{x}$ be an arbitrary point of $M$.
Since $f$ is smooth on $M$, we deduce that $\gph f\big|_M$ is a smooth manifold. Then by Lemma~\ref{lem:access_set_main}, for every vector $0\neq v\in\rb \hat{N}_{\sepi f}(\bar{x},f(\bar{x}))$, there exists a sequence $(x_l,r_l,v_l)\to(\bar{x},f(\bar{x}),v)$, with $v_l\in \hat{N}_{\sepi f}(x_l,r_l)$ and $(x_l,r_l)\notin \gph f\big|_M$. Suppose that there exists a subsequence satisfying $r_l\neq f(x_l)$ for each index $l$. Then since $f$ is vertically continuous at $\bar{x}$, we obtain
$$v=\lim_{l\to\infty} v_l\in \operatornamewithlimits{Limsup}_{l\to\infty}N_{\sepi f}(x_l,r_l)\subset N_{\sdom f}(\bar{x})\times \{0\}\subset F(\bar{x}).$$
On the other hand, if $r_l= f(x_l)$ for all large indices $l$, then restricting to a subsequence, we may assume that all the points $x_l$ lie in a stratum $M'$ with $M'\succ M$. The inductive hypothesis and $f$-attentive outer-semicontinuity of $F$ yield the inclusion $v\in F(\bar{x})$.
Thus we have established the inclusion,
$$\rb \hat{N}_{\sepi f}(\bar{x},f(\bar{x}))\subset F(\bar{x}).$$
Since $\partial^{\infty} f (\bar{x})$ is pointed, we deduce that the cone $\hat{N}_{\sepi f}(\bar{x},f(\bar{x}))$ is neither a linear subspace nor a half-subspace. Consequently by Lemma~\ref{lem:outer}, we deduce
\begin{equation}\label{eq:point}
\hat{N}_{\sepi f}(\bar{x},f(\bar{x}))=\cone\rb \hat{N}_{\sepi f}(\bar{x},f(\bar{x}))\subset F(\bar{x}).
\end{equation}
In fact, we have shown that (\ref{eq:point}) holds for all points $\bar{x}\in M$.
Finally consider a limiting normal $v\in N_{\sepi f}(\bar{x},f(\bar{x}))$. Then there exists a sequence $(x_l,r_l,v_l)\to (\bar{x},f(\bar{x}),v)$, with $v_l\in \hat{N}_{\sepi f}(x_l,r_l)$. It follows from (\ref{eq:point}), the inductive hypothesis, and $f$-attentive outer-semicontinuity of $F$ that the inclusion $v\in F(\bar{x})$ holds. Thus the induction is complete, as is the proof of (\ref{eqn:norm}).
To finish the proof of the theorem, define the two sets
\begin{equation*}
E:=\{\lim_{i\to\infty} \nabla f(x_i):x_i\xrightarrow[f]{\Omega}\bar{x}\},~~~~
H:= \{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\xrightarrow[f]{\Omega} \bar{x}\}.
\end{equation*}
Observe
$$\cone \{\lim_{i\to\infty} \overbracket[1pt]{\nabla f(x_i)} : x_i\xrightarrow[f]{\Omega} \bar{x}\}= \cone\Big(\big(E\times\{-1\}\big)\cup \big(H\times\{0\}\big)\Big).$$
Thus an inclusion $(v,-1)\in N^c_{\sepi f}(\bar{x},f(\bar{x}))$ holds if and only if there exist vectors $y_i\in E\cup H$ and $y\in N^{c}_{\sdom f}(\bar{x})$, and real numbers $\lambda_i> 0$, for $1\leq i \leq n+1$, satisfying
\begin{align*}
v&=\sum_{i:y_i\in E} \lambda_i y_i+ \sum_{i:y_i\in H} \lambda_i y_i +y,\\
1&=\sum_{i: y_i\in E} \lambda_i.
\end{align*}
The result follows.
\end{proof}
Recovering representation (\ref{eqn:lewis}) of the introduction, in the setting of stratifiable functions, is now an easy task.
\begin{corollary}\label{cor:cite}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$. Suppose that $f$ is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and is continuous near $\bar{x}$ relative to the domain of $f$. Then we have
$$\partial_c f(\bar{x})=\bigcap_{\delta >0} \cl\conv\Big(\nabla f\big(\Omega\cap B_\delta(\bar{x})\big)\Big)+ N^{c}_{\sdom f}(\bar{x}),$$
where $\Omega$ is any dense subset of $\dom f$.
\end{corollary}
\begin{proof}
Since the cone $N^c_{\sepi f}(\bar{x},f(\bar{x}))$ is pointed, one can easily verify, much along the lines of Lemma~\ref{lem:outer}, that the equation
$$\bigcap_{\delta >0}
\cl\cone\big\{\frac{(\nabla f(x),-1)}
{\sqrt{1+|\nabla f(x)|^2}}: x\in \Omega\cap B_{\delta}(\bar{x})\big\}=\cone \big\{\lim_{i\to\infty} \frac{(\nabla f(x_i),-1)}{\sqrt{1+|\nabla f(x_i)|^2}} : x_i\stackrel{\Omega}{\rightarrow} \bar{x}\big\},
$$
holds.
The result follows by an application of Theorem~\ref{thm:main}. We leave the details to the reader.
\end{proof}
Our next goal is to recover the representation of the convex subdifferential (\ref{eqn:rock}) of the introduction in the stratifiable setting. In fact, we will consider the more general case of amenable functions.
Before proceeding, recall that a proper, lower-semicontinuous, convex function is directionally Lipschitzian at some point if and only if its domain has nonempty interior. A completely analogous situation occurs for amenable functions.
\begin{lemma}\label{lem:amen_dir}
Consider a function $f\colon\R^n\to\overline{\R}$ that is amenable at $\bar{x}$.
Let $V$ be a neighborhood of $\bar{x}$ so that $f$ can be written as a composition $f=g\circ F$, for a ${\bf C}^1$ mapping $F\colon V\to\R^m$ and a proper, lower-semicontinuous, convex function $g\colon\R^m\to\overline{\R}$, so that the qualification condition
\begin{equation*}
N_{\sdom g}(F(\bar{x}))\cap \kernal \nabla F(\bar{x})^{*}=\{0\},
\end{equation*}
is satisfied.
Then there exists a neighborhood $U$ of $\bar{x}$ so that
\begin{enumerate}
\item \label{item:int} $F(U\cap\inter\dom f)\subset \inter\dom g$,
\item \label{item:bd} $U\cap F^{-1}(\inter\dom g) \subset \inter \dom f$.
\end{enumerate}
Furthermore $f$ is directionally Lipschitzian at $\bar{x}$ if and only if $\bar{x}$ lies in $\cl( \inter \dom f)$.
\end{lemma}
\begin{proof}
Let us first recall a few useful formulas. To this end, \cite[Theorem 3.3]{amen} shows that there exists a neighborhood $U$ of $\bar{x}$ so that
for all points $x\in U\cap\dom f$, we have
\begin{align}
\{0\}&=N_{\sdom g}(F(x))\cap \kernal \nabla F(x)^{*}, \label{align:prop} \\
\partial f(x)&=\nabla F(x)^{*}\partial g(F(x)),\\
N_{\sdom f}(x)&=\nabla F(x)^{*}N_{\sdom g}(F(x)). \label{align:prop2}
\end{align}
Furthermore a computation in the proof of Proposition~\ref{prop:norm} (item \ref{it:4}) shows that for any $x\in U\cap\dom f$ and any $r> f(x)$, we have
\begin{equation}\label{eqn:epi_amin}
N_{\sepi f}(x,r)=\nabla F(x)^{*}N_{\sdom g}(F(x))\times \{0\}.
\end{equation}
Observe for any $x\in U\cap\inter\dom f$, we have
$$0=N_{\sdom f}(x)=\nabla F(x)^{*}N_{\sdom g}(F(x)),$$
and consequently $N_{\sdom g}(F(x))=0$. We conclude $F(x)\in \inter\dom g$, thus establishing $\ref{item:int}$.
Now consider a point $x\in U\cap F^{-1}(\inter\dom g)$.
Using $(\ref{eqn:epi_amin})$, we deduce $N_{\sepi f}(x,r)=0$ for any $r >f(x)$. Hence by \cite[Exercise 6.19]{VA}, we conclude $(x,r)\in\inter \epi f$ and consequently $x\in\inter \dom f$, thus establishing $\ref{item:bd}$.
By \cite[Exercise 10.25 (a)]{VA}, we have $\partial^{\infty} f(\bar{x})=N_{\sdom f}(\bar{x})$, and in light of (\ref{align:prop}) and (\ref{align:prop2}) one can readily verify that the cone $N_{\sdom f}(\bar{x})$ is pointed if and only if $N_{\sdom g}(F(\bar{x}))$ is pointed.
Now suppose that $\bar{x}$ lies in $\cl(\inter \dom f)$.
Then by $\ref{item:int}$ the domain $\dom g$ has nonempty interior and consequently $N_{\sdom g}(F(\bar{x}))$ is pointed, as is $N_{\sdom f}(\bar{x})$.
Conversely suppose that $f$ is directionally Lipschitzian at $\bar{x}$. Then $N_{\sdom g}(F(\bar{x}))$ is pointed, and consequently $\dom g$ has nonempty interior. Observe since $F$ is continuous, the set $F^{-1}(\inter \dom g)\subset\dom f$ is open. Hence it is sufficient to argue that this set contains $\bar{x}$ in its closure.
Suppose this is not the case. Then there exists a neighborhood $U$ of $\bar{x}$ so that the image $F(U)$ does not intersect $\inter \dom g$. It follows that the range of the linearised mapping $w\mapsto F(\bar{x})+\nabla F(\bar{x})w$ can be separated from $\dom g$, thus contradicting $(\ref{align:prop})$. See \cite[Theorem 10.6]{VA} for a more detailed explanation of this latter assertion.
\end{proof}
We can now easily recover, in the stratifiable setting, representation (\ref{eqn:rock}) of the introduction. In fact, an entirely analogous formula holds more generally for amenable functions.
\begin{corollary}\label{cor:amen_rock}
Consider a proper stratifiable function $f\colon\R^n\to\overline{\R}$, that is amenable at a point $\bar{x}$, and so that $\bar{x}$ lies in the closure of the interior of $\dom f$. Let $\Omega$ be any dense subset of $\dom f$. Then the subdifferential admits the presentation
$$\partial f(\bar{x})=\conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\stackrel{\Omega}{\rightarrow}\bar{x}\big\} + N_{\sdom f}(\bar{x}).$$
\end{corollary}
\begin{proof} By \cite[Exercise 10.25]{VA}, we have $\partial^{\infty} f(\bar{x})=N_{\sdom f}(\bar{x})$.
Thus we have
$$\cone\big\{\lim_{\substack{i\to\infty\\ t_i \downarrow 0}}t_i\nabla f(x_i):x_i\xrightarrow[f]{\Omega} \bar{x}\big\}\subset N_{\sdom f}(\bar{x}).$$
Observe $f$ is amenable, directionally Lipschitzian (Lemma~\ref{lem:amen_dir}), and vertically continuous (Proposition~\ref{prop:norm}) at each point of $\dom f$ near $\bar{x}$.
Applying Theorem~\ref{thm:main}, we deduce
$$\partial f(\bar{x})=\conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\xrightarrow[f]{\Omega}\bar{x}\big\} + N_{\sdom f}(\bar{x}).$$ Noting that the subdifferential map $\partial f$ of an amenable function is outer-semicontinuous, the result follows.
\end{proof}
A natural question arises. Does the corollary above hold more generally without the stratifiability assumption? The answer turns out to be yes. This is immediate, in light of (\ref{eqn:rock}), for the subclass of {\em lower-${\bf C}^2$} functions (those functions that are locally representable as the difference of a convex function and a convex quadratic). A first attempt at a proof for general amenable functions might be to consider the representation $f=g\circ F$ and the chain rule
$$\partial f(\bar{x})=\nabla F(\bar{x})^{*}\partial g(F(\bar{x})).$$ One may then try to naively use Rockafellar's representation formula (\ref{eqn:rock}) for the convex subdifferential
$$\partial g(F(\bar{x}))=\conv\{\lim_{i\to\infty} \nabla g(y_i): y_i\to F(\bar{x})\}$$
to deduce the result. However, we immediately run into trouble since $F$ may easily fail to be surjective onto a neighborhood of $F(\bar{x})$ in $\dom g$. Hence a different, more sophisticated proof technique is required. For completeness, we present an argument below, which is a natural extension of the proof of \cite[Theorem 25.6]{rock}. It is furthermore instructive to emphasize how the stratifiability assumption allowed us in Corollary~\ref{cor:amen_rock} to bypass essentially all the technical details of the argument below.
\begin{theorem}
Consider a function $f\colon\R^n\to\overline{\R}$ that is amenable at a point $\bar{x}$ lying in $\cl(\inter \dom f)$. Then the subdifferential admits the presentation
$$\partial f(\bar{x})=\conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\stackrel{\Omega}{\rightarrow}\bar{x}\big\} + N_{\sdom f}(\bar{x}),$$
where $\Omega$ is any full-measure subset of $\dom f$.
\end{theorem}
\begin{proof}
Recall that $f$ is Clarke regular at $\bar{x}$, and therefore $\partial^{\infty} f(\bar{x})=N_{\sdom f}(\bar{x})$ is the recession cone of $\partial f(\bar{x})$. Combining this with the fact that the map $\partial f$ is outer-semicontinuous at $\bar{x}$, we immediately deduce the inclusion ``$\supset$''.
We now argue the reverse inclusion. To this end, let $V$ be a neighborhood of $\bar{x}$ so that $f$ can be written as a composition $f=g\circ F$, for a ${\bf C}^1$ mapping $F\colon V\to\R^m$ and a proper, lower-semicontinuous, convex function $g\colon\R^m\to\overline{\R}$, so that the qualification condition
\begin{equation*}
N_{\sdom g}(F(\bar{x}))\cap \kernal \nabla F(\bar{x})^{*}=\{0\},
\end{equation*}
is satisfied.
Since $f$ is directionally Lipschitzian at $\bar{x}$, the subdifferential $\partial f(\bar{x})$ is the sum of the convex hull of its extreme points and the recession cone $N_{\sdom f}(\bar{x})$. Furthermore every extreme point is a limit of exposed points. Thus
$$\partial f(\bar{x})= \conv (\cl E)+N_{\sdom f}(\bar{x}),$$
where $E$ is the set of all exposed points of $\partial f(\bar{x})$.
Hence to prove the needed inclusion, it suffices to argue the inclusion
$$E\subset \conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\stackrel{\Omega}{\rightarrow}\bar{x}\big\}.$$
Here, we should note that since $f$ is directionally Lipschitzian at $\bar{x}$, the set on the right hand side is closed.
To this end, let $\bar{v}$ be an arbitrary exposed point of $\partial f(\bar{x})$. By definition, there exists a vector $\bar{a}\in \R^n$ with $|\bar{a}|=1$ and satisfying
$$\langle \bar{a},\bar{v}\rangle > \langle \bar{a},v\rangle \textrm{ for all } v\in \partial f(\bar{x}) \textrm{ with } v\neq \bar{v}.$$
Since $N_{\sdom f}(\bar{x})$ is the recession cone of $\partial f(\bar{x})$, from above we deduce
$$\langle \bar{a}, z\rangle < 0 \textrm{ for all } 0\neq z\in N_{\sdom f}(\bar{x}),$$
and consequently
$$\langle \nabla F(\bar{x})\bar{a}, w\rangle < 0 \textrm{ for all } 0\neq w\in N_{\sdom g}(F(\bar{x})).$$
Consider the half-line $\{F(\bar{x})+t\nabla F(\bar{x})\bar{a}: t\geq 0\}$. We claim that this half-line cannot be separated from $\dom g$. Indeed, otherwise there would exist a nonzero vector $\bar{w}\in N_{\sdom g}(F(\bar{x}))$ so that for all $t>0$ and all $x\in \dom g$ we have
$$\langle x,\bar{w}\rangle\leq \langle F(\bar{x})+t\nabla F(\bar{x})\bar{a},\bar{w}\rangle < \langle F(\bar{x}),\bar{w}\rangle,$$
which is a contradiction. Hence by \cite[Theorem 11.3]{rock}, this half-line must meet the interior of $\dom g$. By convexity then there exists a real number $\alpha >0$ satisfying $$\{F(\bar{x})+t\nabla F(\bar{x})\bar{a}: 0 < t\leq \alpha\}\subset \inter(\dom g).$$
Consequently the points $F(\bar{x}+t\bar{a})$ lie in $\inter \dom g$ for all sufficiently small $t >0$.
By Lemma~\ref{lem:amen_dir}, we deduce that there exists a real number $\beta >0$
so that
$$\{\bar{x}+t\bar{a}: 0<t\leq\beta\}\subset \inter (\dom f).$$
Hence $f$ is Lipschitz continuous at each point $\bar{x}+t\bar{a}$ (for $0<t\leq\beta$), and so from (\ref{eqn:formula}) we obtain
\begin{equation}\label{eq:form}
\partial f(\bar{x}+t\bar{a})=\conv\{\lim_{j\to\infty}\nabla f(x_j):x_j\stackrel{\Omega}{\rightarrow}\bar{x}+t\bar{a}\}.
\end{equation}
Now choose a sequence $t_i\to 0$ and observe that by \cite[Theorem 24.6]{rock}, for any $\epsilon>0$ we have
\begin{align*}
\partial g(F(\bar{x}+t_i\bar{a}))&\subset \operatornamewithlimits{argmax}_{v\in\partial g(F(\bar{x}))}\langle \nabla F(\bar{x})\bar{a}, v\rangle+\epsilon{\bf B},\\
&= \operatornamewithlimits{argmax}_{v\in\partial g(F(\bar{x}))}\langle \bar{a}, \nabla F(\bar{x})^{*}v\rangle+\epsilon{\bf B},
\end{align*}
for all large $i$.
We deduce,
$$\nabla F(\bar{x})^{*}\partial g(F(\bar{x}+t_i\bar{a}))\subset \operatornamewithlimits{argmax}_{w\in\partial f(\bar{x})}\langle \bar{a}, w\rangle+\epsilon{\bf B}=\bar{v}+\epsilon{\bf B}.$$
Thus there exists a sequence $w_i\in \partial g(F(\bar{x}+t_i\bar{a}))$ with $\nabla F(\bar{x})^{*}w_i\to \bar{v}$. Consequently the vectors $\nabla F(\bar{x}+t_i\bar{a})^{*}w_i\in \partial f(\bar{x}+t_i\bar{a})$ converge to $\bar{v}$. The result now easily follows from (\ref{eq:form}) and the fact that $f$ is directionally Lipschitzian at $\bar{x}$.
\end{proof}
The following is a further illustration of the applicability of our results to a wide variety of situations.
\begin{corollary}
Consider a proper stratifiable function $f\colon\R^n\to\overline{\R}$ that is locally Lipschitz continuous at a point $\bar{x}$, relative to $\dom f$. Suppose furthermore that $\dom f$ is an epi-Lipschitzian set at $\bar{x}$. Then the formula
$$\partial f(\bar{x})=\conv\big\{\lim_{i\to\infty} \nabla f(x_i):x_i\stackrel{\Omega}{\rightarrow}\bar{x}\big\} + N^{c}_{\sdom f}(\bar{x}),$$
holds, where $\Omega$ is any dense subset of $\dom f$.
\end{corollary}
\begin{proof} Since $f$ is locally Lipschitz near $\bar{x}$ relative to $\dom f$, there exists a globally Lipschitz function $\tilde{f}\colon\R^n\to\R$, agreeing with $f$ on $\dom f$ near $\bar{x}$. Hence, we have
$$f(x)=\tilde{f}(x)+\delta_{\sdom f}(x), \textrm{ locally near } \bar{x}.$$
Combining this with \cite[Exercise 10.10]{VA}, we deduce
$$\partial^{\infty} f(x)\subset N_{\sdom f}(x), \textrm{ for } x \textrm{ near } \bar{x}.$$
We conclude that $f$ is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and furthermore since the gradients of $\tilde{f}$ are bounded near $\bar{x}$ so are the gradients of $f$.
The result follows immediately by an application of Proposition~\ref{prop:norm} and Theorem~\ref{thm:main}.
\end{proof}
\section{Comments on random sampling}\label{sec:random}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$, directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and is continuous near $\bar{x}$ relative to the domain of $f$. Let us suppose for the time being that the normal cone $N_{\sdom f}(\bar{x})$ is known exactly. Our interest lies in approximating the Clarke subdifferential $\partial_c f(\bar{x})$. In light of
Corollary~\ref{cor:cite}, one can quickly modify the method considered in \cite{BLO}, in order to achieve this goal. We now describe this procedure in more detail.
Fix a certain radius $\delta >0$ and a sample space $\Lambda=B_{\delta}(\bar{x})$, along with an associated probability measure that is absolutely continuous with respect to the Lebesgue measure $\mu$ on $\R^n$. We assume that the corresponding density $\theta$ is strictly positive almost everywhere on $\Lambda$. Observe that the set $\Lambda\cap \dom \nabla f$ has nonempty interior, and consequently has positive probability. We can consider a sequence of independent trials with outcomes $x_i\in\Lambda$ for $i=1,2,\ldots$, and form trial sets $$C_k=\conv\{\nabla f(x_i): x_i\in \Lambda\cap\dom \nabla f,~ 1\leq i\leq k\}.$$ One may then hope that the sets $D_k:=C_k+ N^{c}_{\sdom f}(\bar{x})$ approximate the Clarke subdifferential $\partial_c f(\bar{x})$ well, as $k$ tends to infinity. One can verify this rigorously via some modifications of arguments made in \cite{BLO}. The starting point is the following result. We omit the proof since it is identical to the argument in \cite[Theorem 2.1]{BLO}.
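For intuition, the construction of the trial sets $C_k$ can be sketched numerically. The following is our own illustration, not part of the formal development: the function $f(x,y)=|x|$ and the sampler are chosen only for concreteness, with the gradient oracle returning \texttt{None} off $\dom \nabla f$.

```python
import numpy as np

def trial_set(grad, center, radius, k, rng=None):
    """Collect gradients at k points sampled uniformly from the ball
    B_radius(center); grad returns None outside dom(grad f).  The convex
    hull of the returned gradients is the trial set C_k of the text."""
    rng = rng if rng is not None else np.random.default_rng(0)
    grads = []
    for _ in range(k):
        x = center + radius * rng.uniform(-1.0, 1.0, size=len(center))
        while np.linalg.norm(x - center) > radius:   # rejection sampling
            x = center + radius * rng.uniform(-1.0, 1.0, size=len(center))
        g = grad(x)
        if g is not None:
            grads.append(g)
    return np.array(grads)

# Example: f(x, y) = |x|, smooth off the (measure-zero) line x = 0.
def grad_abs_x(p):
    return None if p[0] == 0.0 else np.array([np.sign(p[0]), 0.0])

G = trial_set(grad_abs_x, np.zeros(2), 0.1, 200)
```

Here the sampled gradients take only the values $(\pm 1,0)$, so the hull $\conv C_k$ quickly stabilizes at the Clarke subdifferential $[-1,1]\times\{0\}$ at the origin; the normal cone term $N^{c}_{\sdom f}(\bar{x})$ is trivial since this $f$ is finite everywhere.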
\begin{theorem}[Limiting Approximation]\label{thm:la}
Consider a function $f\colon \R^n \to \overline{\R}$ and a point $\bar{x}\in\dom f$. Suppose that $f$ is continuously differentiable on an open set dense in $\dom f$. Then for any sampling radius $\delta$, we have
$$\cl \bigcup^{\infty}_{k=1} C_k = \cl\conv \nabla f(B_{\delta}(\bar{x})), \textrm{ almost surely}.$$
\end{theorem}
The following theorem establishes that $\cl\bigcup^{\infty}_{k=1} D_k$ is almost surely an outer approximation of $\partial_c f(\bar{x})$.
\begin{theorem}[Outer approximation]\label{theorem:out}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$. Suppose that $f$ is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and is continuous near $\bar{x}$ relative to the domain of $f$. Then
$$\partial_c f(\bar{x})\subset \cl\bigcup^{\infty}_{k=1} D_k, \textrm{ almost surely}.$$
\end{theorem}
\begin{proof} This is immediate from Theorem~\ref{thm:la} and Corollary~\ref{cor:cite}.
\end{proof}
On the other hand, the following lemma shows that $\cl \bigcup^{\infty}_{k=1} D_k$ is not too much bigger than $\partial_c f(\bar{x})$, as long as the radius is sufficiently small and we restrict ourselves to considering only bounded subsets of $\R^n$.
\begin{lemma}[Truncated Inner approximation]\label{lem:trunc:inner}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$. Suppose that $f$ is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and is continuous near $\bar{x}$ relative to the domain of $f$. Then for any compact subset $\Gamma \subset\R^n$ and a real number $\epsilon >0$ we have, for any sufficiently small sampling radius,
$$\Gamma \cap \cl \bigcup^{\infty}_{k=1} D_k\subset \Gamma \cap \partial_c f(\bar{x})+\epsilon {\bf B}, \textrm{ almost surely}.$$
If, in addition, $f$ is Lipschitz continuous on its domain, then for any real number $\epsilon >0$ we have, for any sufficiently small sampling radius,
$$\cl \bigcup^{\infty}_{k=1} D_k\subset \partial_c f(\bar{x})+\epsilon {\bf B}, \textrm{ almost surely}.$$
\end{lemma}
\begin{proof}
Since $f$ is directionally Lipschitzian at $\bar{x}$, one can easily check that the mapping $\partial_c f$ is outer-semicontinuous at $\bar{x}$. Consequently the truncated mapping $x\mapsto \Gamma\cap\partial_c f(x)$ is upper-semicontinuous at $\bar{x}$. We conclude that there exists a real number $\delta > 0$ such that $$\Gamma\cap \partial_c f(\bar{x}+\delta {\bf B})\subset \Gamma\cap \partial_c f(\bar{x})+ \epsilon {\bf B}.$$ Observe that $\cl \bigcup^{\infty}_{k=1} D_k$ is almost surely contained in $\Gamma\cap \cl \conv \partial_c f(\bar{x}+\delta {\bf B})$, and hence the result follows.
Now suppose that $f$ is Lipschitz continuous on its domain. Then, one can verify that there exists a real number $\delta > 0$ satisfying
$$\nabla f(B_{\delta}(\bar{x}))\subset \partial_c f(\bar{x})+\epsilon {\bf B}.$$ Since $\cl \bigcup^{\infty}_{k=1} D_k$ is almost surely contained in $\cl\conv\big(\nabla f(B_{\delta}(\bar{x}))+N_{\sdom f}(\bar{x})\big)$, the result follows.
\end{proof}
In particular, the distance of a fixed vector $v$ to $\cl \bigcup^{\infty}_{k=1} D_k$ can be made arbitrarily close to the distance of $v$ to $\partial_c f(\bar{x})$.
\begin{theorem}[Distance approximation]\label{thm:dist}
Consider a proper, stratifiable function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$. Suppose that $f$ is directionally Lipschitzian at all points of $\dom f$ near $\bar{x}$, and is continuous near $\bar{x}$ relative to the domain of $f$. Then for any vector $v\in\R^n$ and a real $\epsilon > 0$ we have, for any sufficiently small sampling radius,
$$\dist(v,\cl \bigcup^{\infty}_{k=1} D_k)\geq \dist(v,\partial_c f(\bar{x})) -\epsilon, \textrm{ almost surely},$$
and consequently
$$\lim_{k\to\infty}\big|\dist(v,\cl D_k)- \dist(v,\partial_c f(\bar{x}))\big|<\epsilon, \textrm{ almost surely}.$$
\end{theorem}
\begin{proof} Let $\gamma:= \dist(v,\partial_c f(\bar{x}))$ and $\Gamma=\overline{B}_{\gamma}(v)$. Then by Lemma~\ref{lem:trunc:inner}, we have for any sufficiently small radius,
$$\Gamma \cap \cl \bigcup^{\infty}_{k=1} D_k\subset \Gamma \cap \partial_c f(\bar{x})+\epsilon {\bf B}, \textrm{ almost surely}.$$ The result readily follows from the inclusion above and Theorem~\ref{theorem:out}.
\end{proof}
A natural test for optimality, using the sampling scheme, is to determine whether the inclusion $$0\in D_k,$$ holds. According to Theorem~\ref{theorem:out}, if $\bar{x}$
is a Clarke critical point, then $\dist(0,D_k)\to 0$ almost surely. On the other hand, if $\bar{x}$ is not Clarke-critical, then by Theorem~\ref{thm:dist}, for any sufficiently small radius, we have $\lim_{k\to\infty} \dist(0,\cl D_k) >0$, and hence the test $0\in D_k$ will not generate a false positive.
From a computational point of view, there is a certain difficulty we have not addressed. Suppose that for each radius $\delta$, the trial points $x_i$ are sampled with uniform distribution on the ball $B_{\delta}(\bar{x})$. One could worry that as the sampling radius decreases to zero, the proportion of points $x_i$ discarded (those that lie outside of the domain) to those that are in the domain could be arbitrarily large, with positive probability. This, for instance, could happen if the domain was the set $\{(x,y)\in{\R^2}:|y|\leq x^2, 0\leq x\}$. This pathology, however, does not occur in the directionally Lipschitzian setting.
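A quick Monte Carlo experiment (ours; the sample sizes and radii are arbitrary) makes the pathology concrete for this cusp domain:

```python
import numpy as np

def in_cusp(x, y):
    # the domain {(x, y) : |y| <= x^2, 0 <= x} from the text
    return x >= 0.0 and abs(y) <= x * x

def domain_fraction(delta, k=200_000, seed=1):
    """Monte Carlo estimate of mu(Q cap B_delta(0)) / mu(B_delta(0))."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-delta, delta, size=(k, 2))
    pts = pts[np.linalg.norm(pts, axis=1) <= delta]   # keep points in the ball
    hits = sum(in_cusp(x, y) for x, y in pts)
    return hits / len(pts)

fractions = [domain_fraction(d) for d in (1.0, 0.1, 0.01)]
```

For this domain the estimated fraction decays roughly linearly in $\delta$, reflecting the fact that the lower Lebesgue density of the cusp at the origin is zero; hence the proportion of discarded trial points grows without bound as the sampling radius shrinks.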
\begin{proposition}[Domain of a continuous directionally Lipschitzian function]
Consider a proper function $f\colon\R^n\to\overline{\R}$, that is finite at $\bar{x}$. Suppose that $f$ is directionally Lipschitzian at $\bar{x}$ and is continuous at $\bar{x}$, relative to $\dom f$. Then the domain of $f$ is epi-Lipschitzian at $\bar{x}$.
\end{proposition}
\begin{proof}
By continuity of $f$, the domain of $f$ is locally closed near $\bar{x}$. Now observe that for each real number $r$ satisfying $r> f(\bar{x})$, we have $N_{\sepi f}(\bar{x},r)=N_{\sdom f}(\bar{x})\times\{0\}$. Consequently the inclusion $$N_{\sdom f}(\bar{x})\times\{0\}\subset N_{\sepi f}(\bar{x},f(\bar{x})),$$ holds. We conclude that the normal cone $N_{\sdom f}(\bar{x})$ is pointed, and hence the domain of $f$ is an epi-Lipschitzian set at $\bar{x}$.
\end{proof}
The {\em lower Lebesgue density} of a set $Q\subset\R^n$ at a point $\bar{x}$ is
$$\mbox{\rm Dens}^{-}(Q,\bar{x}):=\operatornamewithlimits{liminf}_{\delta\to 0} \frac{\mu(Q\cap B_{\delta}(\bar{x}))}{\mu(B_{\delta}(\bar{x}))},$$ where we recall that $\mu$ denotes the Lebesgue measure on $\R^n$.
\begin{proposition}[Density of epi-Lipschitzian sets]
Consider a set $Q\subset\R^n$ that is epi-Lipschitzian at a point $\bar{x}$. Then we have
$$\mbox{\rm Dens}^{-}(Q,\bar{x}) > 0.$$
\end{proposition}
\begin{proof}
Since $Q$ is epi-Lipschitzian at $\bar{x}$, we may assume that $Q$ is the epigraph of a Lipschitz continuous function $f\colon\R^{n-1}\to\R$, with $\bar{x}=(0,0)$ (see for example \cite[Section 4]{Clarke_roc}). We deduce $f(x)\leq \kappa |x|$, for some constant $\kappa >0$ and for all points $x\in\R^{n-1}$. Consequently, $\epi f$ contains a convex cone with nonempty interior, a set that has strictly positive Lebesgue density. The result follows.
\end{proof}
The sampling scheme outlined in this section is effective whenever gradients are cheap to compute and the normal cone to the domain at the point of interest is known in advance. Now suppose that the normal cone is unknown to us, but nevertheless we can test whether a point is in the domain and we can project onto the domain easily. Then a slight modification of the sampling scheme can still be effective at approximating the Clarke subdifferential.
Namely during our sampling scheme, rather than discarding points lying outside of the domain, we may use these points to approximate the normal cone by projecting onto the domain. See \cite{har_lew} for more details. Using this information along with the sampled gradients, we may still hope to approximate the whole Clarke subdifferential effectively. We, however, do not pursue this further in our work.
\section{Local Dimension of semi-algebraic subdifferential graphs.}\label{sec:loc_dim}
In this section, we apply our methods to study the size of subdifferential graphs of semi-algebraic functions. We first establish notation and record some preliminary facts from semi-algebraic geometry.
\subsection{Semi-algebraic Geometry.}
A {\em semi-algebraic} set $S\subset\R^n$ is a finite union of sets of the form $$\{x\in \R^n: P_1(x)=0,\ldots,P_k(x)=0, Q_1(x)<0,\ldots, Q_l(x)<0\},$$ where $P_1,\ldots,P_k$ and $Q_1,\ldots,Q_l$ are polynomials in $n$ variables. In other words, $S$ is a union of finitely many sets, each defined by finitely many polynomial equalities and inequalities. A map $F\colon\R^n\rightrightarrows\R^m$ is said to be {\em semi-algebraic} if $\mbox{\rm gph}\, F\subset\R^{n+m}$ is a semi-algebraic set. Semi-algebraic sets enjoy many nice structural properties. Unless otherwise stated, we follow the notation of \cite{Coste-semi} and \cite{DM}.
A fundamental fact about semi-algebraic sets is provided by the Tarski-Seidenberg Theorem \cite[Theorem 2.3]{Coste-semi}. Roughly speaking, it states that a linear projection of a semi-algebraic set remains semi-algebraic. From this result, it follows that a great many constructions preserve semi-algebraicity. In particular, for a semi-algebraic function $f\colon\R^n\to\overline{\R}$, the set-valued mappings $\partial_P f$, $\hat{\partial} f$, $\partial f$, and $\partial_c f$ are semi-algebraic. See for example \cite[Proposition 3.1]{tame_opt}.
\begin{definition}[Compatibility]
{\rm Given finite collections $\{B_i\}$ and $\{C_j\}$ of subsets of $\R^n$, we say that $\{B_i\}$ is {\em compatible} with $\{C_j\}$ if for all $B_i$ and $C_j$, either $B_i\cap C_j=\emptyset$ or $B_i\subset C_j$.}
\end{definition}
\begin{definition}[Stratifications]\label{defn:whit}
{\rm Consider a set $Q$ in $\R^n$. A {\em stratification} of $Q$ is a finite partition of $Q$ into disjoint, connected manifolds $M_i$ (called strata) with the property that for each index $i$, the intersection of the closure of $M_i$ with $Q$ is the union of some $M_j$'s.}
\end{definition}
Remarkably, semi-algebraic sets always admit stratifications. Indeed, the following stronger result holds.
\begin{proposition}\label{prop:smooth}\cite[Theorem 4.8]{DM}
Consider a semi-algebraic function $f\colon\R^n\to\overline{\R}$. Then there exists a stratification $\mathcal{A}$ of $\dom f$ so that $f$ is smooth on each stratum $M_i\in \mathcal{A}$. Furthermore, if $\mathcal{B}$ is some other stratification of $\dom f$, then we can ensure that $\mathcal{A}$ is compatible with $\mathcal{B}$.
\end{proposition}
\begin{definition}[Dimension]
{\rm Let $Q\subset\R^n$ be a nonempty semi-algebraic set. Then the {\em dimension} of $Q$, denoted $\dim Q$, is the maximal dimension of a stratum in any stratification of $Q$. We adopt the convention that $\dim \emptyset=-\infty$.}
\end{definition}
It can be easily shown that the dimension does not depend on the particular stratification. Observe that the dimension of a semi-algebraic set only depends on the maximal dimensional manifold in a stratification. Hence, dimension is a crude measure of the size of the semi-algebraic set. This motivates a localized notion of dimension.
\begin{definition}[Local dimension]
{\rm Consider a semi-algebraic set $Q\subset \R^n$ and a point $\bar{x}\in Q$. We let the {\em local dimension} of $Q$ at $\bar{x}$ be $$\dim_Q(\bar{x}):=\inf_{\epsilon>0}\dim (Q\cap B_{\epsilon}(\bar{x})).$$ It is not difficult to see that there exists a real number $\bar{\epsilon}>0$ such that for every real number $0<\epsilon<\bar{\epsilon}$, we have $\dim_Q(\bar{x})=\dim (Q\cap B_{\epsilon}(\bar{x}))$.
}
\end{definition}
There is a straightforward connection between local dimension and dimension of strata. This is the content of the following proposition.
\begin{proposition}\cite[Proposition 3.4]{small}\label{prop:id}
Consider a semi-algebraic set $Q\subset\R^n$ and a point $\bar{x}\in Q$. Let $\{M_i\}$ be any stratification of $Q$. Then we have the identity $$\dim_Q(\bar{x})=\max_i\{\dim M_i: \bar{x}\in\cl M_i\}.$$
\end{proposition}
\begin{definition}[Maximal strata]
{\rm Given a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$, a stratum $M$ is {\em maximal} if it is not contained in the closure of any other stratum.}
\end{definition}
Using the defining property of a stratification, we can equivalently say that given a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$, a stratum $M_i$ is maximal if and only if it is disjoint from the closure of any other stratum.
\begin{proposition}\cite[Proposition 3.7]{small}\label{prop:loc_max}
Consider a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$. Then given any point $\bar{x}\in Q$, there exists a maximal stratum $M$ satisfying $\bar{x}\in \cl M$ and $\dim M=\dim_Q(\bar{x})$.
\end{proposition}
Semi-algebraic methods have recently found great uses in set-valued analysis. See for example \cite{Pang, dim, tame_opt, ioffe_tame, ioffe_strat}. A fact that will be particularly useful for us is that semi-algebraic set-valued mappings are ``generically'' inner-semicontinuous.
\begin{proposition}\label{prop:cont}\cite[Proposition 2.28, 2.30]{dim}
Consider a semi-algebraic, set-valued mapping $G\colon\R^n\rightrightarrows\R^m$. Then there exists a stratification of $\dom G$ into finitely many semi-algebraic manifolds $\{M_i\}$ such that on each stratum $M_i$, the mapping $G$ is inner-semicontinuous and the dimension of the images $G(x)$ is constant. If in addition $G$ is closed-valued, then we can ensure that the restriction $G\big|_{M_i}$ is also outer-semicontinuous for each index $i$.
\end{proposition}
For a more refined result along the lines of Proposition~\ref{prop:cont}, see \cite[Theorem 28]{Pang}.
The following result is standard.
\begin{proposition} \cite[Theorem 3.18]{Coste-min} \label{prop:const_gen}
Consider a semi-algebraic, set-valued mapping $F\colon\R^n\rightrightarrows\R^m$. Suppose there exists an integer $k$ such that $F(x)$ is $k$-dimensional for each point $x\in \dom F$. Then the equality, $$\dim\gph F= \dim \dom F +k,$$ holds.
\end{proposition}
We will need a version of Proposition~\ref{prop:const_gen} that pertains to local dimension.
\begin{proposition}\label{prop:loc_dim}
Consider a semi-algebraic mapping $F\colon\R^n\rightrightarrows\R^m$ that is inner-semicontinuous on its domain. Suppose that there exist constants $k$ and $l$ such that for each pair $(x,v)\in\gph F$, we have
$$\dim_{\sdom F}x=k, ~\dim_{F(x)}v=l.$$
Then $\gph F$ has local dimension $k+l$ around every pair $(x,v)\in\gph F$.
\end{proposition}
\begin{proof} Let $\pi\colon\R^n\times\R^m\to\R^n$ be the canonical projection onto $\R^n$. Consider any stratification $\mathcal{A}$ of $\gph F$ and let $M\in \mathcal{A}$ be any maximal stratum. Clearly
$$\dim M\leq \dim \gph F\leq k+l.$$
Consider an arbitrary point $x\in \pi(M)$. Since $M$ is maximal, the set $M\cap (\{x\}\times\R^m)$ is open relative to $\gph F\cap (\{x\}\times\R^m)$. Furthermore, since $\dim_{F(x)}v=l$ for each vector $v\in F(x)$, it easily follows that $\dim M\cap (\{x\}\times\R^m)=l$.
We now claim that $\dim \pi(M)= k$. Indeed, suppose this is not the case, that is, the strict inequality $\dim \pi(M)< k$ holds. Since $\dim_{\sdom F}x=k$, we deduce that there exists a sequence $x_i\to x$, with $x_i\in \dom F$ and $x_i\notin \pi(M)$ for each index $i$. Since $F$ is inner-semicontinuous on $M$, we deduce
$$M\cap (\{x\}\times\R^m)\subset \operatornamewithlimits{Limsup}_{i\to\infty} ~\{x_i\}\times F(x_i),$$ which contradicts maximality of $M$. Consequently, using Proposition~\ref{prop:const_gen}, we deduce $\dim M=k+l$. Since $M$ was an arbitrary maximal stratum, the result follows by an application of Proposition~\ref{prop:loc_max}.
\end{proof}
\subsection{Local dimension of semi-algebraic subdifferential graphs.}
We begin with a definition.
\begin{definition}[Subjets]
{\rm For a function $f\colon\R^n\to\overline{\R}$, the {\em limiting subjet} is given by
$$[\partial f]:=\{(x,f(x),v):v\in \partial f(x)\}.$$ Subjets corresponding to the other subdifferentials are defined analogously.}
\end{definition}
Much like $f$-attentive convergence, subjets are useful for keeping track of variational information in the absence of continuity.
In this section, we build on the following theorem. This result and its consequences for generic semi-algebraic optimization problems are discussed extensively in \cite{dim}.
\begin{theorem}\cite[Theorem 3.6]{dim}\label{thm:grd}
Let $f\colon\R^n\rightarrow\overline{\R}$ be a proper semi-algebraic function. Then the subjets $[\partial_P f]$, $[\hat{\partial} f]$, $[\partial f]$ and $[\partial_c f]$ have dimension exactly $n$.
\end{theorem}
An immediate question arises: Can the four subjets have local dimension smaller than $n$ at some of their points? In a recent paper \cite{small}, the authors showed that this indeed may easily happen for $[\partial_c f]$. Remarkably, the authors showed that the subjets $[\partial_P f]$, $[\hat{\partial} f]$, and $[\partial f]$ of a lower-semicontinuous, semi-algebraic function $f\colon\R^n\to\overline{\R}$ do have uniform local dimension $n$. The significance of this result and the relation to Minty's theorem were also discussed.
In this section, we provide a much simplified proof of this rather striking fact (Theorem~\ref{thm:loc_dim}).
The main tool we use is the following accessibility lemma, which is a special case of Lemma~\ref{lem:access_set_main}. Since the proof is much simpler than that of Lemma~\ref{lem:access_set_main}, we include the full argument below.
\begin{lemma}[Accessibility]\label{lem:access_prox}
Consider a closed set $Q\subset\R^n$, a manifold $M\subset Q$, and a point $\bar{x}\in M$. Recall that the inclusion $N^{P}_Q(\bar{x})\subset N_M(\bar{x})$ holds. Suppose that a proximal normal vector $\bar{v}\in N^{P}_Q(\bar{x})$ lies in the boundary of $N^{P}_Q(\bar{x})$, relative to the linear space $N_M(\bar{x})$. Then there exist sequences $x_i\to\bar{x}$ and $v_i\to\bar{v}$, with $v_i\in N_Q^{P}(x_i)$, and so that all the points $x_i$ lie outside of $M$.
\end{lemma}
\begin{proof} There exists a real number $\lambda >0$ so that $\bar{x}+\lambda\bar{v}$ lies in the prox-normal neighborhood $W$ of $M$ at $\bar{x}$ and such that the equality $P_Q(\bar{x}+\lambda\bar{v})=\bar{x}$ holds. Consider any sequence $v_i\in\R^n$ satisfying
$$v_i\to\bar{v},\qquad v_i\in N_M(\bar{x}),\qquad v_i\notin N_Q^{P}(\bar{x}).$$ Such a sequence exists because $\bar{v}$ lies in the boundary of $N_Q^{P}(\bar{x})$ relative to $N_M(\bar{x})$.
Choose arbitrary points $x_i\in P_Q(\bar{x}+\lambda v_i)$. We have
$$(\bar{x}- x_i)+\lambda v_i\in N_Q^{P}(x_i).$$
We deduce $x_i\neq \bar{x}$, since otherwise $v_i$ would lie in $N_Q^{P}(\bar{x})$. Clearly, the sequence $x_i$ converges to $\bar{x}$. We claim $x_i\notin M$ for all sufficiently large indices $i$. Indeed, if it were otherwise, then for large $i$, the points $\bar{x}+\lambda v_i$ would lie in $W$ and we would have $x_i\in P_M(\bar{x}+\lambda v_i)=\{\bar{x}\}$, which is a contradiction. Thus we have obtained a sequence $(x_i,\frac{1}{\lambda}(\bar{x}- x_i)+v_i)\in \gph N_Q^{P}$, with $x_i\notin M$, and satisfying $(x_i,\frac{1}{\lambda}(\bar{x}- x_i)+v_i)\to(\bar{x},\bar{v})$.
\end{proof}
The following is now immediate.
\begin{corollary}\label{cor:access_func}
Consider a lower semicontinuous function $f\colon\R^n\to\overline{\R}$, a manifold $M\subset\R^n$, and a point $\bar{x}\in M$. Suppose that $f$ is smooth on $M$ and the strict inequality $\dim \partial_P f(\bar{x})<\dim N_M(\bar{x})$ holds. Then for every vector $\bar{v}\in \partial_{P} f(\bar{x})$, there exist sequences $(x_i,f(x_i),v_i)\to(\bar{x},f(\bar{x}),\bar{v})$, with $v_i\in \partial_P f(x_i)$, and so that all the points $x_i$ lie outside of $M$.
\end{corollary}
\begin{proof} From the strict inequality, one can easily see that the normal cone $N_{\sepi f}^P(\bar{x},f(\bar{x}))$ has empty interior relative to the normal space $N_{\sgph}(\bar{x},f(\bar{x}))$. An application of Lemma~\ref{lem:access_prox} completes the proof.
\end{proof}
We can now prove the main result of this section.
\begin{theorem}\label{thm:loc_dim}
Consider a lower-semicontinuous, semi-algebraic function $f\colon\R^n\to\overline{\R}$. Then the subjets $[\partial_P f]$, $[\hat{\partial} f]$, and $[\partial f]$ have constant local dimension $n$ around each of their points.
\end{theorem}
\begin{proof} We first prove the theorem for the subjet $[\partial_P f]$.
Consider the semi-algebraic set-valued mapping
$$F(x):=\{f(x)\}\times \partial_P f(x),$$ whose graph is precisely $[\partial_P f]$. By Propositions~\ref{prop:smooth} and~\ref{prop:cont}, we may stratify the domain of $F$ into finitely many semi-algebraic manifolds $\{M_i\}$, so that on each stratum $M_i$, the mapping $F$ is inner-semicontinuous, the images $F(x)$ have constant dimension, and $f$ is smooth.
Consider a triple $(x,f(x),v)\in[\partial_P f]$. We prove the theorem by induction on the dimension of the strata $M$ in which the point $x$ lies. Clearly the result holds for the strata of dimension $n$, if there are any. As an inductive hypothesis, assume that the theorem holds for all points $(x,f(x),v)\in[\partial_P f]$ with $x$ lying in strata of dimension at least $k$, for some integer $k\geq 1$.
Now consider a stratum $M$ of dimension $k-1$ and a point $x\in M$. If $\dim F(x)=n-\dim M$, then recalling that $F$ is inner-semicontinuous on $M$ and applying Proposition~\ref{prop:loc_dim}, we see that the set $\gph F\Big|_M$ has local dimension $n$ around $(x,f(x),v)$ for any $v\in\partial_P f(x)$. The result follows in this case.
Now suppose $\dim F(x)< n-\dim M$. Then, by Corollary~\ref{cor:access_func}, for such a vector $v$, there exists a sequence $(x_i,f(x_i),v_i)\to (x,f(x),v)$ satisfying $(x_i,f(x_i),v_i)\in[\partial_P f]$ and $x_i\notin M$ for each index $i$. Restricting to a subsequence, we may assume that all the points $x_i$ lie in a stratum $K$ satisfying $\dim K\geq k$. By the inductive hypothesis, we deduce
$$\dim_{[\partial_P f]} (x,f(x),v)\geq \operatornamewithlimits{limsup}_{i\to\infty} \dim_{[\partial_P f]} (x_i,f(x_i),v_i)=n.$$ This completes the proof of the inductive step and of the theorem for the subjet $[\partial_P f]$.
Now observe that $[\partial_P f]$ is dense in $[\hat{\partial} f]$ and in $[\partial f]$. It follows that $[\hat{\partial} f]$ and $[\partial f]$ also have local dimension $n$ around each of their points.
\end{proof}
Surprisingly, the analogue of Theorem~\ref{thm:loc_dim} for $[\partial_c f]$ may fail, even for Lipschitz continuous functions.
\begin{example}
{\rm
Consider the function $f\colon\R^3\to\R,$ defined by
\begin{displaymath}
f(x,y,z) = \left\{
\begin{array}{ll}
\min\{x,y,z^2\} &\mbox{\rm if }(x,y,z) \in \R_{+}^3\\
\min\{-x,-y,z^2\} &\mbox{\rm if } (x,y,z) \in \R_{-}^3\\
0 & \mbox{\rm{otherwise}.}
\end{array}
\right.
\end{displaymath}
Let $\bar{x}\in\R^3$ be the origin and let $\Gamma:=\conv\{(1,0,0),(0,1,0),(0,0,0)\}$.
One can check that the local dimension of $\gph \partial_c f$ at $(\bar{x},\bar{v})$ is two for any vector $\bar{v}\in \big(\conv (\Gamma\cup -\Gamma)\big)\setminus (\Gamma\cup -\Gamma)$. For more details see \cite[Example 3.11]{small}.}
\end{example}
\end{document}
\begin{document}
\title{Signed Magic Arrays with a Certain Property}
\begin{abstract}
\noindent A signed magic array, $SMA(m, n;s,t)$, is an $m \times n$ array with the same number $s$ of filled cells in each row and the same number $t$ of filled cells in each column, filled with a set of numbers symmetric about zero, such that every row and every column sums to zero. We use the notation $SMA(m, n)$ if $m=t$ and $n=s$.
\noindent In this paper, we prove that for every even number $n\geq 2$ there exists an $SMA(m,n)$ such that
the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}$ if and only if
either $n=2$ and $m\equiv 0, 3\pmod 4$, or $n\geq 4$ and $m\geq 3$.
\noindent {Keywords: magic array, Heffter array, signed magic array, shiftable array}
\end{abstract}
\section{Introduction}\label{introduction}
An {\em integer Heffter array} $H(m, n; s, t)$ is an $m\times n$ array with entries from
$X=\{\pm1,\pm2,$ $\ldots,\pm ms\}$
such that each row contains $s$ filled cells and each column contains $t$ filled cells,
the elements in every row and column sum to 0 in ${\mathbb Z}$, and
for every $x\in X$, either $x$ or $-x$ appears in the array.
The notion of an integer Heffter array $H(m, n; s, t)$ was first defined by Archdeacon in \cite{arc1}.
Integer Heffter arrays with $m=n$ represent a type of magic square where each number from the set
$\{1,2,3,\ldots,n^2\}$ is used once up to sign. A Heffter array is {\em tight} if it has no empty cell; that is,
$n=s$ (and necessarily $m = t$). The proof of the following theorem can be found in \cite{arc2}.
\begin{theorem}\label{tightHeffter}
Let $m, n$ be integers at least 3.
There is a tight integer Heffter array $H(m,n)$ if and only if $mn\equiv 0, 3 \pmod 4$.
\end{theorem}
For more information on Heffter arrays consult \cite{arc1,arc2,ADDY,DW}.
A {\em signed magic array} $SMA(m,n;s,t)$ is an $m \times n$ array with entries from $X$, where
$X=\{0,\pm1,\pm2, \pm3,\ldots,\pm (ms-1)/2\}$ if $ms$ is odd and $X = \{\pm1,\pm2, \pm3,\ldots,\pm ms/2\}$ if $ms$ is even,
such that precisely $s$ cells in every row and $t$ cells in every column are filled,
every integer from set $X$ appears exactly once in the array and
the sum of each row and of each column is zero.
An $SMA(m,n;s,t)$ is called {\em tight}, and denoted $SMA(m,n)$, if it contains no empty cells; that is, $m=t$
(and necessarily $n=s$). Figure \ref{3x2.3x4.4x2} displays a tight $SMA(3,2)$, $SMA(3,4)$ and $SMA(4,2)$.
\begin{figure}
\caption{An $SMA(3,2)$, $SMA(3,4)$ and $SMA(4,2)$ }
\label{3x2.3x4.4x2}
\end{figure}
An $SMA(m, n; s, t)$ is {\em shiftable} if it contains the same number of positive as negative entries in every column and in every row.
These arrays are called \textit{shiftable} because they may be shifted to use different absolute values. By increasing the absolute value of each entry by $k$, we add $k$ to each positive entry and $-k$ to each negative entry. If the number of entries in a row is $2\ell$, this means that we add $\ell k + \ell(-k) = 0$ to each row, and the same argument applies to the columns. Thus, when shifted, the array retains the same row and column sums.
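The shifting argument above is easy to see in code. A minimal Python sketch, where the $4\times 4$ array and the helper names are ours for illustration (not taken from the paper's figures):

```python
# A shiftable array: each row and each column has equally many positive and
# negative entries, so adding k to the absolute value of every entry adds
# l*k + l*(-k) = 0 to each line and preserves the zero sums.
A = [
    [ 1, -1,  2, -2],
    [-3,  3, -4,  4],
    [-5,  5, -6,  6],
    [ 7, -7,  8, -8],
]

def shift(array, k):
    """Increase the absolute value of every entry by k, keeping the sign."""
    return [[x + k if x > 0 else x - k for x in row] for row in array]

def line_sums(array):
    rows = [sum(row) for row in array]
    cols = [sum(col) for col in zip(*array)]
    return rows, cols

for k in (0, 5, 100):
    rows, cols = line_sums(shift(A, k))
    assert all(s == 0 for s in rows + cols)
```

The same check works for any shiftable array; only the balanced sign counts in each row and column are used.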
The proof of the following theorem can be found in \cite{KSW}.
\begin{theorem} \label{TH:KSW1}
An $SMA(m,n)$ exists precisely when $m = n = 1$, or when $m = 2$ and $n \equiv 0, 3 \pmod4$, or when $n = 2$ and $m \equiv 0, 3 \pmod4$, or when $m, n > 2$.
\end{theorem}
\begin{corollary}\label{SMA(m,2)}
Let $n=2$. Then there exists an $SMA(m, 2)$ such that the entries $\pm x$ are in the same row for every
$x\in\{1, 2,3,\ldots, m\}$
if and only if $m\equiv 0, 3\pmod 4$.
\end{corollary}
We also note that if $A$ is an $m \times n$ tight integer
Heffter array, then the $m\times 2n$ array $[A,-A]$ is an $SMA(m,2n)$ with the property
that the entries $\pm x$ appear in the same row for every $x\in\{1, 2, \ldots, mn\}$.
For more information on signed magic arrays consult \cite{KLE1, KL,AE,KSW}.
In this paper, we prove that for every even number $n\geq 2$ there exists an $SMA(m,n)$ such that
the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}$ if and only if
either $n=2$ and $m\equiv 0, 3\pmod 4$, or $n\geq 4$ and $m\geq 3$.
For simplicity, we say an $SMA(m,n)$, with $n$ even, has the {\it required property} if
the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}$.
\section{The case $m$ and $n$ are even}
\begin{theorem}\label {m,n even>=4}
Let $m, n\geq 4$ be even. Then there exists a shiftable $SMA(m,n)$
such that the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}.$
\end{theorem}
\begin{proof}
Proceed by strong induction first on $n$ and then on $m$. As the base case, we provide arrays for $(m, n) = (4, 4)$, $(6, 4)$, $(4,6)$ and $(6,6)$ in Figures \ref{4x4,6x4} and \ref{4x6,6x6}, respectively.
Now, let $m \in \{4, 6\}$ and $n\geq 8$ be even, and assume that there exists a shiftable $SMA(m, n-4)$ with the required property. We may extend this array by adding four columns to create an $m \times n$ array. The empty $m\times 4$ subarray
may be filled by a shifted copy of the $SMA(4,4)$ or $SMA(6, 4)$ in Figure \ref{4x4,6x4}.
As the shifted copy has row and column sums of zero, it does not change the row sums of the $m \times (n - 4)$ array, and the sums of the new columns are zero as well. Moreover, the shifted copy has the required property. Therefore, a shiftable $SMA(m, n)$ with the required property exists. Hence, by strong induction on $n$,
a shiftable $SMA(m, n)$ exists for $m \in \{4, 6\}$ and even $n \geq 4$
such that the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}.$
See Figure \ref{6x10} for an illustration.
Now, let $m$ and $n$ both be even, with $m\geq 8$ and $n\geq 4$. Assume that there exists a shiftable $SMA(m - 4,n)$ with the required property. We may extend this array by adding four rows to create an $m \times n$ array.
The empty $4\times n$ subarray
may be filled by a shifted copy of a shiftable $SMA(4,n)$ with the required property, which exists by the above argument.
Hence, by strong induction on $m$,
a shiftable $SMA(m, n)$ exists for even $m, n\geq 4$
such that the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots, mn/2\}.$
\end{proof}
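The column-extension step of this proof can be sketched in a few lines of Python. The base $4\times 4$ array below is an easily checked shiftable array with the required property; it is our own illustration and not necessarily the array of Figure \ref{4x4,6x4}:

```python
# Glue a shiftable 4x4 array with entries {+-1,...,+-8} to a copy of itself
# shifted by 8 (entries {+-9,...,+-16}) to obtain a shiftable SMA(4,8)
# in which +-x always share a row.
A = [
    [ 1, -1,  2, -2],
    [-3,  3, -4,  4],
    [-5,  5, -6,  6],
    [ 7, -7,  8, -8],
]

def shift(array, k):
    """Increase the absolute value of every entry by k, keeping the sign."""
    return [[x + k if x > 0 else x - k for x in row] for row in array]

B = [row + srow for row, srow in zip(A, shift(A, 8))]  # the 4 x 8 array

assert all(sum(row) == 0 for row in B)
assert all(sum(col) == 0 for col in zip(*B))
# each of +-1,...,+-16 appears exactly once, and +x, -x share a row
assert sorted(x for row in B for x in row) == [x for x in range(-16, 17) if x != 0]
assert all({x, -x} <= set(row) for row in B for x in row)
```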
\begin{figure}
\caption{$SSMA(4, 4)$ and $SSMA(6,4)$ with the required property}
\label{4x4,6x4}
\end{figure}
\begin{figure}
\caption{$SSMA(4,6)$ and $SSMA(6,6)$ with the required property}
\label{4x6,6x6}
\end{figure}
\begin{figure}
\caption{$SSMA(6,10)$ with the required property obtained by the constructions given in Theorem \ref{m,n even>=4}}
\label{6x10}
\end{figure}
\section{The case $m$ odd and $n$ even}
Since the structure of the $SMA(3,n)$ given below is crucial in our constructions, we include the proof of this lemma here; it can also be found in \cite{KSW}.
\begin{lemma} \label{3xeven}
Let $n$ be even. Then there exists an $SMA(3,n)$ with the property that the entries $\pm x$ appear in the same row for every $x\in\{1, 2, 3,\ldots,(3n/2)\}$.
\end{lemma}
\begin{proof}
An $SMA(3,2)$ and an $SMA(3,4)$ are given in Figure \ref{3x2.3x4.4x2}.
Now let $n=2k\geq 6$ and $p_{j}=\lceil\frac{j}{2}\rceil$ for $1\leq j\leq 2k$. Define a $3\times n$ array $A=[a_{i,j}]$ as follows: for $1\leq j\leq 2k$,
$$a_{1,j}= \begin{cases}
- \left(\frac{3p_{j}-2}{2}\right) & j \equiv 0 \pmod4 \\
\frac{3p_{j}-1}{2}& j \equiv 1 \pmod4 \\
-\left(\frac{3p_{j}-1}{2}\right) & j \equiv 2 \pmod4 \\
\frac{3p_{j}-2}{2} & j \equiv 3 \pmod4. \\
\end{cases}$$
For the third row we define $a_{3,1}=-3k$, $a_{3,2k}=3k$ and when $2\leq j\leq 2k-1$
$$a_{3,j}= \begin{cases}
- 3(k-p_{j}) & j \equiv 0\pmod4 \\
3(k-p_{j}+1) & j \equiv 1 \pmod4 \\
-3(k-p_{j}) & j \equiv 2 \pmod4 \\
3(k-p_{j}+1) & j \equiv 3 \pmod4. \\
\end{cases}$$
Finally, $a_{2,j}=-(a_{1,j}+a_{3,j})$ for $1\leq j\leq2k$.
It is straightforward to see that the array $A$ is an $SMA(3,n)$
with the property that the entries $\pm x$
appear in the same row for every $x\in\{1, 2, 3,\ldots,(3n/2)\}$.
Figure \ref{SMA 3x12} in Appendix 1 displays an $SMA(3, 12)$ constructed by the above method.
\end{proof}
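The explicit construction in this proof is easy to verify by computer. A Python sketch (the function and variable names are ours) that builds the array $A$ for even $n\geq 6$ and checks the defining properties for $n=12$:

```python
import math

def sma_3xn(n):
    """Build the 3 x n array A of the lemma for even n = 2k >= 6."""
    assert n >= 6 and n % 2 == 0
    k = n // 2
    p = lambda j: math.ceil(j / 2)
    row1, row3 = [], []
    for j in range(1, n + 1):           # first row, four cases mod 4
        r = j % 4
        if r == 1:
            row1.append((3 * p(j) - 1) // 2)
        elif r == 2:
            row1.append(-(3 * p(j) - 1) // 2)
        elif r == 3:
            row1.append((3 * p(j) - 2) // 2)
        else:  # r == 0
            row1.append(-(3 * p(j) - 2) // 2)
    for j in range(2, n):               # interior entries of the third row
        if j % 4 in (1, 3):
            row3.append(3 * (k - p(j) + 1))
        else:
            row3.append(-3 * (k - p(j)))
    row3 = [-3 * k] + row3 + [3 * k]
    row2 = [-(a + c) for a, c in zip(row1, row3)]  # forces zero column sums
    return [row1, row2, row3]

A = sma_3xn(12)
assert all(sum(row) == 0 for row in A)
assert all(sum(col) == 0 for col in zip(*A))
# every value 1..3n/2 appears once with each sign, and +-x share a row
assert sorted(x for row in A for x in row) == [x for x in range(-18, 19) if x != 0]
assert all({x, -x} <= set(row) for row in A for x in row)
```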
\begin{lemma} \label{5xeven}
Let $n\geq 4$ be even. Then there exists an $SMA(5,n)$ with the property that the entries $\pm x$
appear in the same row for every $x\in\{1, 2, 3,\ldots,(5n/2)\}$.
\end{lemma}
\begin{proof}
We consider two cases.
\noindent {\bf Case 1: $n\equiv 0 \pmod 4$:}\quad
Let $A$ be an $SMA(3,n)$ constructed by Lemma \ref{3xeven}.
By construction, the entries in row one of $A$ are:
$$\{\pm(3i+1),\pm(3i+2)\mid 0\leq i\leq(n-4)/4\}\; \mbox{(See Figure \ref{SMA 3x12} in Appendix 1)}.$$
Switch $3i+1$ with $3i+2$ and $-(3i+1)$ with $-(3i+2)$ for $0\leq i\leq (n-4)/4$ to obtain the $3 \times n$ array
$B$. See Figure \ref{Array 3 by 12} in Appendix 1.
Note that the row sums are still zero in $B$ and the column sums consist of $n/2$ ones and $n/2$ negative ones.
We now extend array $B$ by adding two rows to create a $5 \times n$ array $C$. The empty $2\times n$ subarray
may be filled by members of $\{\pm(3n/2+j)\mid 1\leq j\leq n\}$ such that the row and column sums of the resulting
$5 \times n$ array are zero. See Figure \ref{SMA(5, 12)} in Appendix 1.
\noindent {\bf Case 2: $n\equiv 2\pmod 4$}: \quad Figure \ref{SMA(5, 6)}
displays an $SMA(5,6)$ with the required property.
Now let $n\geq 10$ and let $A$ be an $SMA(3,n)$ constructed by Lemma \ref{3xeven}.
By construction, the numbers in row one of $A$ are:
$$\{\pm(3i+1),\pm(3i+2)\mid 0\leq i\leq(n-6)/4\}\cup\{\pm (\frac{3n-2}{4})\}.$$
See Figure \ref{SMA (3x10)} in Appendix 2.
In row one of $A$, switch $3i+1$ with $3i+2$ and $-(3i+1)$ with $-(3i+2)$ for $0\leq i \leq (n-10)/4$, and
switch $\frac{3(n-6)}{4}+1$ with $\frac{3(n-6)}{4}+2$. Note that we switch every entry in row one except
the entries in columns $n-4, n-2, n-1$ and $n$.
Now we switch the entries of row two in columns $n-4, n-2, n-1$ and $n$ as follows:
switch $-\frac{3n+2}{4}$ with $-\frac{3n+10}{4}$ and $\frac{3n+2}{4}$ with $\frac{3n+10}{4}$
to obtain the $3 \times n$ array $B$. See Figure \ref{Array 3 by 10} in Appendix 2.
Note that the row sums are still zero in $B$ and the column sums consist of $\frac{n-4}{2}$ ones, $\frac{n-4}{2}$ negative ones, two $2$s and two $-2$s.
We now extend array $B$ by adding two rows to create a $5 \times n$ array, say $C$. The empty $2\times n$ subarray
may be filled by members of $\{\pm(3n/2+j)\mid 1\leq j\leq n\}$ such that the row and column sums of the resulting
$5 \times n$ array are zero.
See Figure \ref{MA(5, 10)} in Appendix 2.
\end{proof}
\begin{theorem}\label{maintheorem}
Let $m\geq 3$ be odd and $n$ be even. There exists an $SMA(m,n)$
such that the entries $\pm x$ appear in the same row for every $x\in\{1, 2,\ldots, mn/2\}$
if and only if either $n=2$ and $m\equiv 0, 3\pmod 4$, or $n\geq 4$.
\end{theorem}
\begin{proof}
For $n=2$ we apply Corollary \ref{SMA(m,2)}. Now let $m\geq 3$ and $n\geq 4$. We consider two cases.
\noindent {\bf Case 1: $m\equiv 3 \pmod 4$:} \quad
By Lemma \ref{3xeven} the statement is true for $m=3$. Let $m\geq 7$ and
let $A$ be an $SMA(3,n)$ constructed in Lemma \ref{3xeven}.
We extend array $A$ by adding $m-3$ rows to create an $m \times n$ array. The empty $(m-3)\times n$ array
may be filled by a shifted copy of the $SMA(m-3,n)$ given by Theorem \ref{m,n even>=4}.
As the shifted copies each have a row and column sum of zero, they do not change the row sums from the $3 \times n$ array, and the sums of the new columns will be zero as well. Moreover, the shifted copies have the required property. Therefore, an $SMA(m, n)$ exists with the property that the entries $\pm x$ appear in the same row for every
$x\in\{1, 2, 3, \ldots,mn/2\}$.
\noindent {\bf Case 2: $m\equiv 1 \pmod 4$:}\quad
By Lemma \ref{5xeven} the statement is true for $m=5$. Now let $m\geq 9$ and let
$A$ be an $SMA(5,n)$ constructed in Lemma \ref{5xeven}.
We extend array $A$ by adding $m-5$ rows to create an $m \times n$ array. The empty $(m-5)\times n$ array
may be filled by a shifted copy of the $SMA(m-5,n)$ given by Theorem \ref{m,n even>=4}.
As the shifted copies each have a row and column sum of zero, they do not change the row sums from the $5 \times n$ array, and the sums of the new columns will be zero as well. Moreover, the shifted copies have the required
property.
Therefore, an $SMA(m, n)$ exists with the property that the entries $\pm x$ appear in the same row for every
$x\in\{1, 2, 3, \ldots,mn/2\}$.
\end{proof}
\begin{figure}
\caption{$SMA(5, 4)$; $SMA(5,6)$ constructed by a $(5, 3)$ Heffter array and its opposite.}
\label{SMA(5, 6)}
\end{figure}
\begin{figure}
\caption{$SMA(5,8)$ obtained from $H(5, 4)$.}
\end{figure}
\centerline{{\bf Appendix 1:} An example for Lemma \ref{5xeven} case 1}
\begin{figure}
\caption{Array $A$: $SMA(3, 12)$ constructed by Lemma \ref{3xeven}}
\label{SMA 3x12}
\end{figure}
\begin{figure}\label{Array 3 by 12}
\end{figure}
\begin{figure}\label{SMA(5, 12)}
\end{figure}
\centerline{{\bf Appendix 2}: An example for Lemma \ref{5xeven} case 2}
\begin{figure}
\caption{Array $A$: $SMA(3, 10)$ constructed by Lemma \ref{3xeven}}
\label{SMA (3x10)}
\end{figure}
\begin{figure}\label{Array 3 by 10}
\end{figure}
\begin{figure}
\caption{$C$: An $SMA(5, 10)$ obtained by the construction given in Lemma \ref{5xeven}}
\label{MA(5, 10)}
\end{figure}
\end{document}
\begin{document}
\input amssym.def
\input amssym
\hfuzz=5.0pt
\def\vec#1{\mathchoice{\mbox{\boldmath$\displaystyle\bf#1$}}
{\mbox{\boldmath$\textstyle\bf#1$}}
{\mbox{\boldmath$\scriptstyle\bf#1$}}
{\mbox{\boldmath$\scriptscriptstyle\bf#1$}}}
\def\mbf#1{{\mathchoice {\hbox{$\rm\textstyle #1$}}
{\hbox{$\rm\textstyle #1$}} {\hbox{$\rm\scriptstyle #1$}}
{\hbox{$\rm\scriptscriptstyle #1$}}}}
\def\operatorname#1{{\mathchoice{\rm #1}{\rm #1}{\rm #1}{\rm #1}}}
\chardef\ii="10
\def\half{{1\over2}}
\def\viert{{1\over4}}
\def\dfrac#1#2{\frac{\displaystyle #1}{\displaystyle #2}}
\def\pathint#1{\int\limits_{#1(0)=#1'}^{#1(T)=#1''}{\cal D} #1(t)}
\def\pmb#1{\setbox0=\hbox{#1}
\kern-.025em\copy0\kern-\wd0
\kern.05em\copy0\kern-\wd0
\kern-.025em\raise.0433em\box0}
\def\bbbz{{\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\scriptstyle Z\kern-0.3em Z$}}
{\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}}
\begin{titlepage}
\centerline{\normalsize DESY 98--112
ISSN 0418 - 9833}
\centerline{\normalsize August 1998
}
\centerline{\normalsize quant-ph/9808060
}
\vskip.3in
\message{TITLE:}
\message{ON THE PATH INTEGRAL TREATMENT FOR AN AHARONOV--BOHM FIELD ON THE
HYPERBOLIC PLANE}
\begin{center}
{\Large ON THE PATH INTEGRAL TREATMENT FOR AN
\vskip.05in
AHARONOV--BOHM FIELD ON THE HYPERBOLIC PLANE}
\end{center}
\vskip.3in
\begin{center}
{\Large Christian Grosche}
\vskip.2in
{\normalsize\em II.\,Institut f\"ur Theoretische Physik}
\vskip.05in
{\normalsize\em Universit\"at Hamburg, Luruper Chaussee 149}
\vskip.05in
{\normalsize\em 22761 Hamburg, Germany}
\end{center}
\begin{center}
{ABSTRACT}
\end{center}
\noindent
{\small
In this paper I discuss by means of path integrals the quantum dynamics of a
charged particle on the hyperbolic plane under the influence of an Aharonov--Bohm
gauge field. The path integral can be solved in terms of an expansion of the
homotopy classes of paths. I discuss the interference pattern of scattering by
an Aharonov--Bohm gauge field in the flat space limit, yielding a characteristic
oscillating behavior in terms of the field strength. In addition, the cases of the
isotropic Higgs-oscillator and the Kepler--Coulomb potential on the hyperbolic
plane are briefly sketched.
}
\end{titlepage}
\normalsize
\section{Introduction}
\message{Introduction}
The Aharonov--Bohm gauge field has a long history, beginning in 1959 with the
classic paper by Aharonov and Bohm \cite{AB}. The effect has been well
studied and well confirmed \cite{Book:A-B}, but not necessarily well
understood. It describes the motion of charged particles, i.e.~electrons, which
are scattered by an infinitesimally thin solenoid. The magnetic vector potential
$\vec A$ of the solenoid produces a magnetic field which is essentially
$\delta$-like, i.e., its support is an infinitesimally thin
solenoid, and it vanishes everywhere else. Geometrically this experimental
set-up corresponds to the quantum motion of a particle (which we consider as
spin-less) in ${\rm I\!R}^2$ with a point removed, with the consequence
that ${\rm I\!R}^2$ is topologically no longer simply connected. Since the solenoid
is assumed impenetrable, the space of the particle motion ${\rm I\!M}$ is the
Euclidean plane minus the cross section of the solenoid. Everywhere in
${\rm I\!M}$, $\vec\nabla\times\vec A=0$ and hence $\vec A=\vec\nabla f(r)$,
where $f(r)$ is an arbitrary scalar function of $r=|\vec x|,\vec x\in{\rm I\!R}^2$.
Classically, a charged particle is not affected at all by the solenoid.
However, in quantum mechanics, the particle's wave function picks up
a phase factor in a scattering experiment according to
\begin{equation}
\Psi_\alpha(\vec x)=\Psi_0(\vec x)\exp\Bigg(\dfrac{\operatorname{i} e}{\hbar c}
\int_{\hbox{\small path $\alpha$}}\vec A\cdot\operatorname{d}\vec x\Bigg)\enspace,
\end{equation}
where $\Psi_0(\vec x)$ is the vector-potential-free solution. The wave-function
$\Psi$ relevant to a measurement is the sum of solutions corresponding
to inequivalent paths, i.e., $\Psi=\sum_\alpha\Psi_\alpha$. Topologically the
paths $\alpha$ can be distinguished by their winding numbers $n$, thus giving
rise to infinitely many homotopy classes designated by the number $n$.
Path integral treatments of the Aharonov--Bohm effect in the Euclidean plane are
due to Bernido and Inomata \cite{BEIN}, Gerry and Singh \cite{GSa}, Liang
\cite{LIANG}, and Schulman \cite{SCHUHc}. Harmonic interactions have been dealt
with in \cite{KICA}, the Coulomb--Kepler potential has been taken into
account by, e.g.\ \cite{CGHC,DRCAKI,HHKR}, \linebreak \cite{KNg,LINDHc,PARKb},
relativistic particles by, e.g.\ \linebreak \cite{BERNe,GARI,HHKR,LEVAN},
\linebreak \cite{LINDHc,PARKb},
and a more comprehensive bibliography can be found in, e.g.~\cite{Book:A-B,GRSh}.
Path integrals, e.g.\ \cite{FH,GROad,GRSh}, \linebreak \cite{KLEo}, and
\cite{SCHUHd}, provide us with global information about the quantum motion,
including the topological effects on the wave-function. If we want to study the
Aharonov--Bohm effect by means of path integrals \cite{BEIN,GSa,LIANG}, we consider
the time evolution from $t=0$ to $t=T$ of the wave-function of a particle
according to
\begin{equation}
\Psi_\alpha(\vec x'';T)=\sum_\beta\int K_{\alpha\beta}(\vec x'',\vec x';T)
\Psi_\beta(\vec x';0)\,\operatorname{d}\vec x'\enspace,
\end{equation}
where
\begin{equation}
K_{\alpha\beta}(\vec x'',\vec x';T)= K_0(\vec x'',\vec x';T)
\exp\Bigg[\dfrac{\operatorname{i} e}{\hbar c}\Bigg(\int_{\hbox{\small path $\alpha$}}^{\vec x''}-
\int_{\hbox{\small path $\beta$}} ^{\vec x'}\Bigg)\vec A\cdot\operatorname{d}\vec x\Bigg]
\enspace,
\end{equation}
and this leads us to the formal expression separating the sums over
$\alpha$ and $\beta$ (under the assumption that the separation is well-defined)
\begin{equation}
\sum_{\alpha,\beta}K_{\alpha\beta}\Psi_\beta=K\sum_\beta\Psi_\beta\enspace.
\end{equation}
Provided the paths $\alpha,\beta$ cover in an idealized experiment the whole range
from minus infinity to plus infinity, we can express the separation of the time
evolution of the particle according to
\begin{equation}
K(\vec x'',\vec x';T)=\sum_{n=-\infty}^\infty K_n(\vec x'',\vec x';T)\enspace,
\label{KvecxT}
\end{equation}
where $n=0$ denotes the unperturbed case in ${\rm I\!R}^2$, i.e., we obtain the free
propagator on the entire ${\rm I\!R}^2$. For the final result we obtain for the Feynman
kernel the following form, e.g.\ \cite{BEIN,GRSh,LIANG}
\begin{equation}
K(\vec x'',\vec x';T)=\dfrac{m}{2\pi\operatorname{i}\hbar T}
\exp\bigg(\dfrac{\operatorname{i} m}{2\hbar T}({r'}^2+{r''}^2)\bigg)
\sum_{n=-\infty}^\infty\operatorname{e}^{\operatorname{i} n(\varphi''-\varphi')}
I_{|n-\xi|} \bigg(\dfrac{mr'r''}{\operatorname{i}\hbar T}\bigg)\enspace.
\end{equation}
Here, two-dimensional polar coordinates $(r,\varphi)$ have been used, and $\xi=
e\Phi/2\pi\hbar c$ with $\Phi=B\times\hbox{area}$ the magnetic flux.
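As a consistency check of the winding-number expansion, in the flux-free case $\xi=0$ the sum over $n$ must collapse to the free propagator. After continuation to a real Bessel argument $x$, this is just the generating-function identity $\sum_{n=-\infty}^{\infty}\operatorname{e}^{\operatorname{i} n\theta}I_{|n|}(x)=\operatorname{e}^{x\cos\theta}$, which the following Python sketch (our own illustration, not part of the paper) confirms numerically:

```python
import math, cmath

def bessel_i(n, x, terms=40):
    """Modified Bessel function I_n(x), integer n >= 0, via its power series."""
    return sum((x / 2) ** (n + 2 * m) / (math.factorial(m) * math.factorial(n + m))
               for m in range(terms))

# For zero flux (xi = 0) the homotopy sum reduces to the generating function
#   sum_n e^{i n theta} I_{|n|}(x) = e^{x cos theta},
# i.e. the free-particle propagator is recovered.
x, theta = 1.7, 0.9
s = sum(cmath.exp(1j * n * theta) * bessel_i(abs(n), x) for n in range(-40, 41))
assert abs(s - cmath.exp(x * math.cos(theta))) < 1e-9
```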
\section{Aharonov--Bohm Field on the Hyperbolic Plane}
\message{Aharonov--Bohm Field on the Hyperbolic Plane}
In this paper I would like to give a path integral treatment of the
Aharonov--Bohm effect on the hyperbolic plane \cite{KURORU}, i.e., the
scattering of (spin-less) electrons by an Aharonov--Bohm field on leaky tori.
Such systems play an important r\^ole in the theory of quantum chaos, e.g.\
\cite{GUTc}. The hyperbolic plane, or Lobachevsky space, is defined as
one sheet of the double-sheeted hyperboloid
\begin{equation}
\vec u^2=u_0^2-u_1^2-u_2^2=R^2\enspace,\qquad u_0>0\enspace.
\end{equation}
The model of the upper half-plane $U=\{z=x+\operatorname{i} y\,|\,\Im(z)=y>0\}$ is endowed with the
metric (where I have set for simplicity $R=1$)
\begin{equation}
\operatorname{d} s^2=\dfrac{\operatorname{d} x^2+\operatorname{d} y^2}{y^2}\enspace,\qquad x\in{\rm I\!R},\ y>0\enspace.
\end{equation}
Alternatively I can also consider the unit disc model $D=\{z=r\,\operatorname{e}^{\operatorname{i}\vartheta}\,|\,
r<1,\vartheta\in[0,2\pi)\}$ with
\begin{equation}
\operatorname{d} s^2=4\,\dfrac{\operatorname{d} r^2+r^2\operatorname{d}\vartheta^2}{(1-r^2)^2}\enspace,\qquad
r<1,\ \vartheta\in[0,2\pi)\enspace,
\end{equation}
and the pseudosphere $\Lambda=\{z=\operatorname{i}\tanh(\tau/2)\,\operatorname{e}^{-\operatorname{i}\varphi}\,|\,
\tau>0,\varphi\in[0,2\pi)\}$ with
\begin{equation}
\operatorname{d} s^2=\operatorname{d}\tau^2+\sinh^2\tau\,\operatorname{d}\varphi^2\enspace,\qquad\tau>0,\ \varphi\in[0,2\pi)
\enspace.
\end{equation}
$U$, $D$ and $\Lambda$ are three of the nine coordinate-space representations
of the hyperbolic plane \cite{GROPOc,GROad,OLE}. Plane waves have the asymptotic
representation $\propto y^{1/2\pm\operatorname{i} k}$ (e.g.~on $U$, with $k$ the wave-number)
and $\operatorname{e}^{-(\pm\operatorname{i} k+1/2)\tau}$ (on $\Lambda$), and the
coordinate origin is $r=0$ (on $D$), $\tau=0$ (on $\Lambda$), and $z=\operatorname{i}$ (on $U$),
respectively. The isometries of the hyperbolic plane are M\"obius transformations
corresponding to the symmetry group $\operatorname{PSL}(2,{\rm I\!R})$, and magnetic fields give rise
to the consideration of automorphic forms in the
theory of the Selberg trace formula \cite{HEJb}.
Constant magnetic fields on the hyperbolic plane have been studied in, e.g.\
\cite{COM,FAY,PNUELI}, and by means of path integrals in \cite{GROb,GROd}.
The path integral formulation for a particle on the hyperbolic plane subject
to a constant magnetic field on $\Lambda$ has the form \cite{GROd}
(I implicitly assume that the constant negative curvature of the hyperbolic plane,
i.e., of the two-dimensional hyperboloid, equals one; $\vec u\in\Lambda$)
\begin{eqnarray} & &\!\!\!\!\!\!\!\!
K(\vec u'',\vec u';T)\equiv K(\tau'',\tau',\varphi'',\varphi';T)
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\pathint{\tau}\sinh\tau\pathint{\varphi}
\nonumber\\ & &\!\!\!\!\!\!\!\! \qquad\times
\exp\Bigg\{{\operatorname{i}\over\hbar}\int_0^T\bigg[{m\over2}(\dot\tau^2+\sinh^2\tau\,\dot\varphi^2)
-b(\cosh\tau-1)\dot\varphi
-\dfrac{\hbar^2}{8m}\bigg(1-\dfrac{1}{\sinh^2\tau}\bigg)\bigg]\operatorname{d} t\Bigg\}
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\exp\bigg(-{\operatorname{i}\hbar T\over8m}\bigg)\lim_{N\to\infty}
\bigg(\dfrac{m}{2\pi\operatorname{i}\hbar\epsilon}\bigg)^N\prod_{j=1}^{N-1}
\int_0^\infty\sinh\tau_j\,\operatorname{d}\tau_j\int_0^{2\pi}\operatorname{d}\varphi_j
\nonumber\\ & &\!\!\!\!\!\!\!\!\qquad\times
\exp\left[{\operatorname{i}\over\hbar}\sum_{j=1}^N\bigg({m\over2\epsilon}
\Big(\Delta^2\tau_j+\mathaccent"0362 {\sinh^2\tau_j}\Delta^2\varphi_j\Big)
-b(\mathaccent"0362 {\cosh\tau_j-1})\Delta\varphi_j-{\epsilon\hbar^2\over8m\sinh^2\tau_j}
\bigg)\right]
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\sum_{l=-\infty}^\infty\left[
\sum_{N=0}^{N_{\operatorname{max}}} \operatorname{e}^{-\operatorname{i} E_NT/\hbar}
\Psi^b_{Nl}(\tau'',\varphi'') \Psi_{Nl}^{b\,*}(\tau',\varphi')
+\int_0^\infty\!\!\operatorname{d} k\,\operatorname{e}^{-\operatorname{i} E_kT/\hbar}
\Psi^b_{kl}(\tau'',\varphi'') \Psi_{kl}^{b\,*}(\tau',\varphi')\right]\,.
\nonumber\\ & &
\label{Kpath-Bfeld}
\end{eqnarray}
Here $b=eB/\hbar c$, with $B$ the strength of the magnetic field and $c$ the
velocity of light. For the magnetic field $\vec B$ I have chosen the gauge
\begin{equation}
\vec A=\bigg(\begin{array}{c}A_\tau\\ A_\varphi\end{array}\bigg)=
B(\cosh\tau-1)\bigg(\begin{array}{c} 0\\ 1\end{array}\bigg)\enspace.
\label{Atau}
\end{equation}
Due to $\mathrm{d} B=(\partial_\tau A_\varphi-\partial_\varphi A_\tau)\,\mathrm{d}\tau\wedge\mathrm{d}\varphi=
B\sinh\tau\,\mathrm{d}\tau\wedge\mathrm{d}\varphi$, $\mathrm{d} B$ has the form {\it constant $\times$
volume form\/} and can thus indeed be interpreted as a constant magnetic field. In the
lattice formulation I have taken \cite{GROad,GRSh} $\Delta q_j=q_j-q_{j-1}$,
$q_j=q(t_j)$, $t_j=j\epsilon$, $j=1,\dots,N$, $\epsilon=T/N$, $N\to\infty$, and
$\widehat{f^2(q_j)}\equiv f(q_{j-1})f(q_j)$ for any function $f$ of the coordinates.
The bound state solutions are given by
\begin{eqnarray}
\Psi^b_{N,l}(\tau,\varphi)&=&
\bigg[{N!(2b+|l|)\Gamma(2b-N+|l|)\over4\pi(N+|l|)!\Gamma(2b-N)}\bigg]^{1\over2}
\nonumber\\ & &\times
\mathrm{e}^{\mathrm{i} l\varphi}\bigg(\tanh{\tau\over2}\bigg)^{|l|}
\bigg(1-\tanh^2{\tau\over2}\bigg)^{b-N}
P_N^{(|l|,2b-2N-1)}\bigg(1-2\tanh^2{\tau\over2}\bigg)\enspace,\qquad
\\
E_N&=&{\hbar^2\over2m}\Bigg[b^2+{1\over4}-\bigg(b-N-{1\over2}\bigg)^2\Bigg]\enspace,
\qquad(N=0,1,\dots\leq N_{\mathrm{max}}<b-\hbox{$\half$})\enspace.\qquad
\label{uplaneEN}
\end{eqnarray}
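As an illustrative numerical aside (my own check, not part of the original derivation), the condition $N_{\mathrm{max}}<b-\frac12$ can be used to count the discrete Landau levels; the helper name below is hypothetical.

```python
import math

def n_landau_levels(b):
    """Count the discrete Landau levels on the hyperbolic plane.

    Bound states have N = 0, 1, ... with N < b - 1/2 (cf. the spectrum E_N
    above), so a minimum field strength b > 1/2 is needed for any bound
    state to exist, and only finitely many occur.
    """
    if b <= 0.5:
        return 0
    # strict inequality N < b - 1/2
    return math.floor(b - 0.5) + 1 if (b - 0.5) % 1 != 0 else int(b - 0.5)

assert n_landau_levels(0.3) == 0   # field too weak: no bound states
assert n_landau_levels(2.7) == 3   # N = 0, 1, 2
```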
$P_n^{(a,b)}(x)$ are Jacobi polynomials \cite{GRA}. The energy levels (\ref{uplaneEN})
are the Landau levels on the hyperbolic plane. This is in complete analogy with the
flat space case, where the Landau levels are $E_n=\hbar\omega(n+{1\over2})$ with $\omega
=eB/mc$ the cyclotron frequency, and the bound states are described by Laguerre
polynomials, e.g.~\cite{GRSh}. The flat space limit can be recovered \cite{GROPOc}
by re-introducing the constant curvature $k=1/R$ $(R>0)$, redefining $E_N\to E_N/R^2,
b\to bR^2$ (note $b(\cosh\tau-1)\to br^2/2$, with $r>0$ the polar variable in ${\rm I\!R}^2$,
as $R\to\infty$), and considering the limit $R\to\infty$.
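This flat-space limit can be verified symbolically; the following sketch (my own check, not from the original text) applies the rescaling $E_N\to E_N/R^2$, $b\to bR^2$ to (\ref{uplaneEN}) and confirms that the flat Landau levels $\hbar\omega(N+\frac12)$ with $\omega=\hbar b/m=eB/mc$ emerge.

```python
import sympy as sp

hbar, m, b, R, N = sp.symbols('hbar m b R N', positive=True)

# Hyperbolic Landau levels (uplaneEN), then curvature restored via
# E_N -> E_N / R^2 and b -> b R^2, followed by R -> oo.
E_hyp = hbar**2/(2*m) * (b**2 + sp.Rational(1, 4) - (b - N - sp.Rational(1, 2))**2)
E_scaled = E_hyp.subs(b, b*R**2) / R**2

flat_limit = sp.limit(sp.expand(E_scaled), R, sp.oo)
# Expected: hbar*omega*(N + 1/2) with omega = hbar*b/m = eB/mc
assert sp.simplify(flat_limit - hbar**2*b/m*(N + sp.Rational(1, 2))) == 0
```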
For the continuous states I obtain for the wave-functions and the energy spectrum,
respectively,
\begin{eqnarray}
\Psi_{k,l}^b(\tau,\varphi)&=&{1\over\pi|l|!}
\sqrt{k\sinh2\pi k\over4\pi}\,
\Gamma\bigg({1+\mathrm{i} k\over2}+b+|l|\bigg)\Gamma\bigg({1+\mathrm{i} k\over2}-b\bigg)
\nonumber\\ & &\qquad\times
\mathrm{e}^{\mathrm{i} l\varphi}\bigg(\tanh{\tau\over2}\bigg)^{|l|}
\bigg(1-\tanh^2{\tau\over2}\bigg)^{{1\over2}+\mathrm{i} k}
\nonumber\\ & &\qquad\times
{_2}F_1\bigg({1\over2}-\mathrm{i} k+b+|l|,{1\over2}+\mathrm{i} k-b;1+|l|;
\tanh^2{\tau\over2}\bigg)\enspace,\qquad
\\
E_k&=&{\hbar^2\over2m}\bigg(k^2+b^2+{1\over4}\bigg)\enspace.
\end{eqnarray}
$_2F_1(a,b;c;z)$ is the hypergeometric function, and $k>0$ denotes the wave number.
I note that a minimum strength of $B$ is required in order that bound states can
occur, and that only a finite number of bound states can exist. For a vanishing
magnetic field I obtain \cite{GRSc} (see, e.g., \cite{GRA} for the relation of the
Legendre functions to the hypergeometric function)
\begin{eqnarray}
\Psi_{k,l}&=&\sqrt{k\sinh\pi k\over2\pi^2}\,\Gamma(\hbox{$\half$}+\mathrm{i} k+|l|)\,
\mathrm{e}^{\mathrm{i} l\varphi}{\cal P}^{-|l|}_{\mathrm{i} k-1/2}(\cosh\tau)\enspace,
\label{PSIkl}\\
E_k&=&{\hbar^2\over2m}\bigg(k^2+{1\over4}\bigg)\enspace.
\label{Ekl}
\end{eqnarray}
For instance, we have the relation \cite{ABS}
\begin{eqnarray}
{\cal P}_{\nu-1/2}^\mu(\cosh\tau)&=&\dfrac{1}{\Gamma(1-\mu)}
2^{2\mu}(1-\mathrm{e}^{-2\tau})^{-\mu}\mathrm{e}^{-(\nu+1/2)\tau}
\nonumber\\ & &\qquad\times
{_2}F_1\bigg({1\over2}-\mu,{1\over2}+\nu-\mu;1-2\mu;1-\mathrm{e}^{-2\tau}\bigg)\enspace.
\end{eqnarray}
For the vector potential of an Aharonov--Bohm gauge field, however, we need a
different Ansatz. Following \cite{KURORU} I take $\vec A=B\vec e_\varphi$ with
$B=\hbox{const}$. I then get for the classical Hamiltonian
\begin{equation}
{\cal H}=\dfrac{\hbar^2}{2m}\Bigg[p_\tau^2+\dfrac{1}{\sinh^2\tau}
\bigg(p_\varphi-\dfrac{eB}{\hbar c}\bigg)^2\Bigg]\enspace,
\end{equation}
and for the Lagrangian, respectively ($b=eB/\hbar c$),
\begin{equation}
{\cal L}=\dfrac{m}2(\dot\tau^2+\sinh^2\tau\dot\varphi^2)+\dfrac{e}{c}\vec A\cdot
\bigg(\begin{array}{c}\dot\tau\\ \dot\varphi\end{array}\bigg)
=\dfrac{m}2(\dot\tau^2+\sinh^2\tau\dot\varphi^2)+\xi\dot\varphi\enspace.
\end{equation}
Note that the vector potential in (\ref{Atau}) vanishes at $\tau=0$; here, by
contrast, $A_\varphi$ is a non-zero constant, whose value depends on the chosen
gauge.
With the momentum operators $p_\tau=(\hbar/\mathrm{i})(\partial_\tau+\coth\tau)$ and
$p_\varphi=(\hbar/\mathrm{i})\partial_{\varphi}$ we get for the quantum Hamiltonian
(together with the quantum potential $\propto\hbar^2$)
\begin{equation}
H=\dfrac{\hbar^2}{2m}\Bigg[p_\tau^2+\dfrac{1}{\sinh^2\tau}
\bigg(p_\varphi-\dfrac{eB}{\hbar c}\bigg)^2\Bigg]
+\dfrac{\hbar^2}{8m}\bigg(1-\dfrac{1}{\sinh^2\tau}\bigg)\enspace.
\end{equation}
The angular variable $\varphi$ varies in the interval $[0,2\pi)$, and therefore
one usually assumes $\varphi_j\in[0,2\pi)\ \forall j$. However, a path can loop around
the infinitesimally thin solenoid many times, with the consequence that in our case
$\varphi_j\in{\rm I\!R}\ \forall j$. Therefore, the path integral, if calculated according to
(\ref{Kpath-Bfeld}), gives only a partial propagator belonging to the class of paths
topologically constrained by $\varphi_j\in[0,2\pi)\ \forall j$. For the total propagator,
we have to take into account the paths from all homotopically different classes. This
can be done by considering the path integration over the angular variables $\varphi_j$
either remaining in the physical space ${\rm I\!M}$ with $\Delta\varphi_j=\varphi_j-\varphi_{j-1}+2\pi n$
($\varphi_j\in[0,2\pi)$, $n\in\bbbz$), or alternatively switching to the covering space
${\rm I\!M}^*$ with $\Delta\varphi_j=\varphi_j-\varphi_{j-1}$, where $\varphi_j\in{\rm I\!R}$.
I incorporate the effect of the infinitesimally thin solenoid by a
$\delta$-function constraint in the path integral, with an additional integration
$\int\mathrm{d}\varphi$ \cite{BEIN}, and therefore obtain (expanding the $\delta$-function;
$\xi=e\Phi/2\pi\hbar c$, with $\Phi$ the magnetic flux)
\begin{eqnarray} & &\!\!\!\!\!\!\!\!
K^{AB}(\tau'',\tau',\varphi'',\varphi';T)
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\int_{{\rm I\!R}}\mathrm{d}\varphi\pathint{\tau}\sinh\tau\pathint{\varphi}
\delta\bigg(\varphi-\int_0^T\dot\varphi\,\mathrm{d} t\bigg)
\nonumber\\ & &\!\!\!\!\!\!\!\!\qquad\times
\exp\Bigg\{{\mathrm{i}\over\hbar}\int_0^T\bigg[{m\over2}(\dot\tau^2+\sinh^2\tau\dot\varphi^2)+\xi\dot\varphi
-\dfrac{\hbar^2}{8m}\bigg(1-\dfrac{1}{\sinh^2\tau}\bigg)\bigg]\mathrm{d} t\Bigg\}
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\exp\bigg(-{\mathrm{i}\hbar T\over8m}\bigg)
\int_{{\rm I\!R}}\mathrm{d}\varphi \int_{{\rm I\!R}}\dfrac{\mathrm{d}\lambda}{2\pi}\,\mathrm{e}^{\mathrm{i}\lambda\varphi}
\lim_{N\to\infty}\bigg(\dfrac{m}{2\pi\mathrm{i}\hbar\epsilon}\bigg)^N\prod_{j=1}^{N-1}
\int_0^\infty\sinh\tau_j\,\mathrm{d}\tau_j\int_0^{2\pi}\mathrm{d}\varphi_j\qquad\qquad
\nonumber\\ & &\!\!\!\!\!\!\!\!\qquad\times
\exp\left[{\mathrm{i}\over\hbar}\sum_{j=1}^N\bigg({m\over2\epsilon}
\Big(\Delta^2\tau_j+\widehat{\sinh^2\tau_j}\,\Delta^2\varphi_j\Big)
+(\xi-\lambda)\Delta\varphi_j-{\epsilon\hbar^2\over8m\sinh^2\tau_j}
\bigg)\right]\qquad
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\int_{{\rm I\!R}}\mathrm{d}\varphi\int_{{\rm I\!R}}\dfrac{\mathrm{d}\lambda}{2\pi}\,\mathrm{e}^{\mathrm{i}\lambda\varphi}
\sum_{l=-\infty}^\infty\mathrm{e}^{\mathrm{i} l(\varphi''-\varphi')}K_{\lambda+l-\xi}(\tau'',\tau';T)
\enspace,
\label{Klblambda}
\end{eqnarray}
where
\begin{eqnarray} & &\!\!\!\!\!\!\!\!
K_{\lambda+l-\xi}(\tau'',\tau';T)
\nonumber\\ & &
=\mathrm{e}^{-\mathrm{i}\hbar T/8m}\pathint{\tau}\exp\Bigg[{\mathrm{i}\over\hbar}\int_0^T\bigg({m\over2}\dot\tau^2-
\dfrac{\hbar^2}{2m}\dfrac{(\lambda+l-\xi)^2-1/4}{\sinh^2\tau}\bigg)\mathrm{d} t\Bigg]\enspace.
\end{eqnarray}
Using Poisson's summation formula
\begin{equation}
\sum_{l=-\infty}^\infty\mathrm{e}^{\mathrm{i} l\theta}=2\pi\sum_{k=-\infty}^\infty
\delta(\theta+2\pi k)\enspace,
\end{equation}
I obtain (by changing the integration variable $\lambda\to\lambda+\xi-l$)
\begin{eqnarray} & &\!\!\!\!\!\!\!\!
K^{AB}(\tau'',\tau',\varphi'',\varphi';T)
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\dfrac{1}{2\pi} \int_{{\rm I\!R}}\mathrm{d}\varphi \int_{{\rm I\!R}}\mathrm{d}\lambda\,\mathrm{e}^{\mathrm{i}\lambda\varphi}
\sum_{l=-\infty}^\infty\mathrm{e}^{\mathrm{i} l(\varphi''-\varphi')}
K_{\lambda+l-\xi}(\tau'',\tau';T)\qquad\qquad
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\dfrac{1}{2\pi} \int_{{\rm I\!R}}\mathrm{d}\varphi \int_{{\rm I\!R}}\mathrm{d}\lambda
\sum_{l=-\infty}^\infty
\mathrm{e}^{\mathrm{i} l(\varphi''-\varphi'-\varphi)+\mathrm{i}(\lambda+\xi)\varphi} K_\lambda(\tau'',\tau';T)
\nonumber\\ & &\!\!\!\!\!\!\!\!
=\int_{{\rm I\!R}}\mathrm{d}\varphi \sum_{k=-\infty}^\infty\delta(\varphi''-\varphi'-\varphi+2\pi k)
\int_{{\rm I\!R}}\mathrm{d}\lambda\,\mathrm{e}^{\mathrm{i}(\lambda+\xi)\varphi}\,K_\lambda(\tau'',\tau';T)\enspace.
\label{KABn}
\end{eqnarray}
$K_\lambda$ is now given by
\begin{eqnarray}
K_\lambda(\tau'',\tau';T)
&=&
\mathrm{e}^{-\mathrm{i}\hbar T/8m}\pathint{\tau}\exp\Bigg[{\mathrm{i}\over\hbar}\int_0^T\bigg({m\over2}\dot\tau^2-
\dfrac{\hbar^2}{2m}\dfrac{\lambda^2-1/4}{\sinh^2\tau}\bigg)\mathrm{d} t\Bigg]
\label{pathtaulambda}
\nonumber\\ &=&
\int_0^\infty\mathrm{d} k\,\mathrm{e}^{-\mathrm{i} E_kT/\hbar}
\Psi_{k,\lambda}(\tau'') \Psi_{k,\lambda}^*(\tau')\enspace.
\label{Ktaulambda}
\end{eqnarray}
The wave-functions and the energy spectrum are given by (\ref{PSIkl},\,\ref{Ekl}),
respectively, with $l\to\lambda$. Performing the $\varphi$-integration in
(\ref{KABn}) yields
\begin{equation}
K^{AB}(\tau'',\tau',\varphi'',\varphi';T)=\sum_{n=-\infty}^\infty
\mathrm{e}^{\mathrm{i}\xi(\varphi''-\varphi'+2\pi n)}\int_{{\rm I\!R}}\mathrm{d}\lambda\,
\mathrm{e}^{\mathrm{i}\lambda(\varphi''-\varphi'+2\pi n)}K_\lambda(\tau'',\tau';T)\enspace,
\end{equation}
which displays the expansion in terms of winding numbers.
For $\xi=0$ the free Feynman kernel on $\Lambda$ is recovered.
If we want to study the effect of scattering by an Aharonov--Bohm solenoid we must
consider interference terms according to
\begin{equation}
I_{nl}=K^*_nK_l+K^*_lK_n\enspace.
\label{Inl}
\end{equation}
Unfortunately, a closed expression for the propagator (\ref{Ktaulambda}) does not
exist. We can either analyze (\ref{Ktaulambda}) by means of an asymptotic
expansion of the Legendre functions, i.e., $P_{\mathrm{i} p-1/2}^\mu(z)\propto (\Gamma(\mathrm{i} p)
/\Gamma(1/2+\mathrm{i} p-\mu))(2z)^{1/2-\mathrm{i} p}/\sqrt{\pi}+\hbox{c.c.}$, as $|z|\to\infty$,
which yields very complicated and analytically intractable integrals over
$\Gamma$-functions. Alternatively I can use the formula $\lim_{\nu\to\infty}\nu^\mu
{\cal P}_\nu^{-\mu}(\cosh (z/\nu))=I_\mu(z)$ \cite{GRA}, which corresponds to the flat
space limit of the hyperbolic space with constant curvature $R$. Restricting
therefore the evaluation of $I_{nl}$ to the flat space limit $R\to\infty$, I
re-introduce the constant curvature $R$ into the path integral (\ref{pathtaulambda})
by means of $m\dot\tau^2\to mR^2\dot\tau^2=m\dot r^2$ and $m\sinh^2\tau\to mR^2\sinh^2
\tau\to mR^2\tau^2=mr^2$ ($r=R\tau$ is the radial variable in Euclidean polar
coordinates), as $R\to\infty$ \cite{IPSWa}. In this limit $K_\lambda$ becomes
the usual free Feynman kernel in polar coordinates in ${\rm I\!R}^2$ \cite{GRSh,PI},
\begin{equation}
K_\lambda(\tau'',\tau';T)\simeq K_\lambda(r'',r';T)
=\dfrac{m}{2\pi\mathrm{i}\hbar T}\exp\bigg[\dfrac{\mathrm{i} m}{2\hbar T}({r'}^2+{r''}^2)\bigg]
I_{|\lambda|}\bigg(\dfrac{mr'r''}{\mathrm{i}\hbar T}\bigg)\enspace.
\end{equation}
Following \cite{BEIN} we can now evaluate $I_{nl}$. By means of the asymptotic
formula ($|z|\to\infty$, $\Re(z)>0$)
\begin{equation}
I_{\lambda}(z)\simeq\sqrt{1\over2\pi z}\,
\exp\bigg(z-\dfrac{\lambda^2-1/4}{2z}\bigg)\enspace,
\end{equation}
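This asymptotic formula is easy to check numerically (my own sanity check, not part of the original text): for large real $z$ the ratio of the exact modified Bessel function to the approximation should be close to one.

```python
import numpy as np
from scipy.special import iv

# Check I_lambda(z) ~ (2*pi*z)^(-1/2) * exp(z - (lambda^2 - 1/4)/(2z))
# for large real z; the agreement improves as z grows.
z = 200.0
for lam in [0.0, 1.0, 3.0]:
    exact = iv(lam, z)  # modified Bessel function of the first kind
    approx = np.exp(z - (lam**2 - 0.25)/(2*z)) / np.sqrt(2*np.pi*z)
    assert abs(exact/approx - 1) < 1e-2
```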
and a Gaussian integration we get the asymptotic expansion
\begin{equation}
\int_{-\infty}^\infty\mathrm{d}\lambda\,\mathrm{e}^{\mathrm{i}\lambda\Theta}I_\lambda(z)
\simeq\exp\bigg(z+\dfrac{1}{8z}-\dfrac{z}{2}\Theta^2\bigg)\enspace.
\end{equation}
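The Gaussian integration behind this expansion can also be checked numerically (my own check, not from the original text); the common factor $\mathrm{e}^z$ is divided out of both sides to avoid overflow.

```python
import numpy as np
from scipy.integrate import quad

def lhs(z, Theta):
    # Integrand: cos(lam*Theta) * e^{-(lam^2 - 1/4)/(2z)} / sqrt(2*pi*z);
    # the sine part vanishes by symmetry of the integrand in lam.
    f = lambda lam: (np.cos(lam*Theta)
                     * np.exp(-(lam**2 - 0.25)/(2*z)) / np.sqrt(2*np.pi*z))
    val, _ = quad(f, -np.inf, np.inf)
    return val

z, Theta = 50.0, 0.3
lhs_val = lhs(z, Theta)
rhs_val = np.exp(1/(8*z) - z*Theta**2/2)   # e^z stripped from both sides
assert abs(lhs_val/rhs_val - 1) < 1e-6
```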
Hence I obtain for the partial propagator $K_n$ (with $z=mr'r''/\mathrm{i}\hbar T$;
the condition $\Re(z)>0$ is ignored, cf.\ \cite{BEIN,GRSh,PI})
\begin{eqnarray} & &\!\!\!\!\!\!\!\!
K_n(\tau'',\tau',\varphi'',\varphi';T)\simeq
\dfrac{m}{2\pi\mathrm{i}\hbar T}\exp\bigg[\dfrac{\mathrm{i} mR^2}{2\hbar T}(\tau''-\tau')^2
\nonumber\\ & &\!\!\!\!\!\!\!\!\qquad\qquad
+\dfrac{\mathrm{i}\hbar T}{8mR^2\tau'\tau''}+\mathrm{i}\xi(\varphi''-\varphi'+2\pi n)
+\dfrac{\mathrm{i} mR^2\tau'\tau''}{2\hbar T}(\varphi''-\varphi'+2\pi n)^2\bigg]\enspace.
\end{eqnarray}
Consequently, I get for the interference term
\begin{eqnarray}
I_{nl}&\simeq&2\bigg(\dfrac{m}{2\pi\mathrm{i}\hbar T}\bigg)^2
\nonumber\\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\times
\cos\Bigg[2\pi(l-n)\bigg(\xi+\dfrac{mR^2\tau'\tau''}{\hbar T}(\varphi''-\varphi'-\pi)\bigg)
+2\pi^2\dfrac{mR^2\tau'\tau''}{\hbar T}(l-n)(l+n+1)\Bigg]\enspace.\qquad
\label{Inl2}
\end{eqnarray}
The principal feature of this result is that the interference pattern
depends not only on the initial point $(\tau',\varphi')$ and the final point
$(\tau'',\varphi'')$, but on the homotopy class numbers $n$ and $l$ as well, which
describe the windings around the infinitesimally thin solenoid. This flux-dependent
shift is a proper Aharonov--Bohm effect. The interference term vanishes for $n=l$.
The maximum contribution to the Aharonov--Bohm effect on the (hyperbolic) plane is
observed for the smallest non-vanishing value $|n-l|=1$. Therefore, the maximum
effect is observed for the interference of the winding numbers $l=0$ and $n=-1$, or
vice versa, yielding the interference term
\begin{equation}
I_{0,-1}=2\bigg(\dfrac{m}{2\pi\mathrm{i}\hbar T}\bigg)^2\cos(2\pi\xi)\enspace.
\end{equation}
This is the standard result; see, e.g., \cite{FH} and \cite{BEIN} and references therein.
\section{Higgs-Oscillator and Kepler--Coulomb Potential}
\message{Higgs-Oscillator and Kepler--Coulomb Potential}
Obviously, we can incorporate potential terms in the radial path integration over
$\tau$; e.g., we can include the Higgs-oscillator potential \cite{GROPOc,HIGGS}
\begin{equation}
V_{(\mathrm{Higgs})}(\vec u)={m\over2}\omega^2R^2{u_1^2+u_2^2\over u_0^2}
={m\over2}\omega^2R^2\tanh^2\tau\enspace,
\end{equation}
which is the analogue of the harmonic oscillator in a space of constant
curvature, or the Kepler--Coulomb potential \cite{BIJb,GROe,GROPOc}, respectively,
\begin{equation}
V_{(\mathrm{Coulomb})}(\vec u)=-{\alpha\over R}\left({u_0\over\sqrt{u_1^2+u_2^2}}-1\right)
=-{\alpha\over R}(\coth\tau-1)\enspace.
\end{equation}
For clarity, I have included the dependence on the constant curvature $R$
explicitly. In these cases, the result (\ref{Klblambda}) is more appropriate.
The combined $\mathrm{d}\varphi\,\mathrm{d}\lambda$-integration yields $\lambda=0$, and the total
propagator becomes
\begin{equation}
K^{AB}(\tau'',\tau',\varphi'',\varphi';T)=\sum_{l=-\infty}^\infty
\mathrm{e}^{\mathrm{i} l(\varphi''-\varphi')}\,K_{|l-\xi|}(\tau'',\tau';T)\enspace,
\end{equation}
and the effect of the solenoid manifests itself in a modification of the angular
momentum dependence of $K_{|l-\xi|}$. This feature, in turn, modifies the number of
bound states of the system with respect to the quantum number $l$. For instance, for
the Higgs-oscillator case this gives ($\nu^2=m^2\omega^2R^4/\hbar^2+1/4$)
\begin{eqnarray} & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\Psi_{nl}^{(\mathrm{Higgs})}(\tau,\varphi;R)=(2\pi\sinh\tau)^{-1/2}
S_n^{(\nu)}(\tau;R)\,\mathrm{e}^{\mathrm{i} l\varphi}\enspace,
\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!
S_n^{(\nu)}(\tau;R)={1\over\Gamma(|l-\xi|+1)}
\bigg[{2(\nu-|l-\xi|-2n-1)\Gamma(n+|l-\xi|+1)\Gamma(\nu-|l-\xi|)\over
R^2\Gamma(\nu-|l-\xi|-n)n!}\bigg]^{1/2}
\nonumber\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\qquad\qquad\times
(\sinh\tau)^{|l-\xi|+1/2}(\cosh\tau)^{n+1/2-\nu}
{_2}F_1(-|l-\xi|,\nu-n;1+|l-\xi|;\tanh^2\tau)\enspace,
\end{eqnarray}
with the discrete spectrum given by
\begin{equation}
E_n^{(\mathrm{Higgs})}=-\dfrac{\hbar^2}{2mR^2}\bigg[(2n+|l-\xi|-\nu+1)^2-{1\over4}\bigg]
+{m\over2}\omega^2R^2\enspace.
\end{equation}
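As a numerical cross-check of this spectrum (my own, not part of the original text), for $R\to\infty$ one expects the flat 2D oscillator levels $\hbar\omega(2n+|l-\xi|+1)$ with an Aharonov--Bohm-shifted angular momentum.

```python
import numpy as np

hbar = m = omega = 1.0  # units chosen for the check only

def E_higgs(n, q, R):
    # q stands for |l - xi|; nu^2 = m^2 omega^2 R^4 / hbar^2 + 1/4
    nu = np.sqrt(m**2*omega**2*R**4/hbar**2 + 0.25)
    return (-hbar**2/(2*m*R**2)*((2*n + q - nu + 1)**2 - 0.25)
            + m*omega**2*R**2/2)

# For large R the levels approach hbar*omega*(2n + q + 1).
for n, q in [(0, 0.0), (1, 0.5), (2, 1.3)]:
    assert abs(E_higgs(n, q, 1e4) - hbar*omega*(2*n + q + 1)) < 1e-3
```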
Only a finite number of bound states exists, with $N_{\mathrm{max}}=[\nu-|l-\xi|-1]\geq0$
($[x]$ denotes the integer part of $x\in{\rm I\!R}$). The continuous wave-functions have the form
\begin{eqnarray} & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\Psi_{kl}^{(\mathrm{Higgs})}(\tau,\varphi;R)=(2\pi\sinh\tau)^{-1/2}
S_k^{(\nu)}(\tau;R)\,\mathrm{e}^{\mathrm{i} l\varphi}\enspace,
\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!
S_k^{(\nu)}(\tau;R)={1\over\Gamma(|l-\xi|+1)}\sqrt{k\sinh\pi k\over2\pi^2R^2}\,
\Gamma\bigg({\nu-|l-\xi|+1-\mathrm{i} k\over2}\bigg)
\Gamma\bigg({|l-\xi|-\nu+1-\mathrm{i} k\over2}\bigg)
\nonumber\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\qquad\times
(\tanh\tau)^{|l-\xi|+1/2}(\cosh\tau)^{\mathrm{i} k}
\nonumber\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\qquad\times
{_2}F_1\bigg({\nu+|l-\xi|+1-\mathrm{i} k\over2},{|l-\xi|-\nu+1-\mathrm{i} k\over2}
;1+|l-\xi|;\tanh^2\tau\bigg)\enspace,\qquad
\end{eqnarray}
with the continuous energy spectrum given by
\begin{equation}
E_k^{(\mathrm{Higgs})}={\hbar^2\over2mR^2}\bigg(k^2+{1\over4}\bigg)+{m\over2}\omega^2R^2
\enspace.
\end{equation}
In the case of the Kepler--Coulomb problem on $\Lambda$ we obtain for the discrete
energy spectrum ($\tilde N=N+|l-\xi|+{1\over2}$, $N=0,1,2,\dots,N_{\mathrm{max}}=[\sqrt{R/a}-
|l-\xi|-{1\over2}]$, $a=\hbar^2/m\alpha$ the Bohr radius)
\begin{equation}
E_N^{(\mathrm{Coulomb})}={\alpha\over R}-\hbar^2{\tilde N^2-{1\over4}\over2mR^2}
-{m\alpha^2\over2\hbar^2\tilde N^2}\enspace.
\end{equation}
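A quick numerical check of the flat-space limit of this spectrum (my own, not from the original text): as $R\to\infty$ the levels approach the 2D Kepler--Coulomb values $-m\alpha^2/2\hbar^2\tilde N^2$ with the Aharonov--Bohm-shifted $\tilde N=N+|l-\xi|+\frac12$.

```python
hbar = m = alpha = 1.0  # units chosen for the check only

def E_coulomb(N, q, R):
    # q stands for |l - xi|; Ntil is the effective principal quantum number
    Ntil = N + q + 0.5
    return (alpha/R - hbar**2*(Ntil**2 - 0.25)/(2*m*R**2)
            - m*alpha**2/(2*hbar**2*Ntil**2))

# For large R only the Rydberg-like term survives.
for N, q in [(0, 0.0), (1, 0.3), (2, 1.0)]:
    Ntil = N + q + 0.5
    assert abs(E_coulomb(N, q, 1e8) + m*alpha**2/(2*hbar**2*Ntil**2)) < 1e-7
```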
I do not state the wave-functions explicitly, cf.\ \cite{GROPOc}; the continuous
states are again modified in their angular momentum dependence, i.e., $l\to l-\xi$.
However, the effect of the Aharonov--Bohm field is not restricted to a
modification of the discrete spectrum: its effect on the scattering states
appears through an interference term $I_{nl}$ similar to (\ref{Inl}), for the
Coulomb potential and the Higgs-oscillator as well. Again, a closed expression
for the radial propagator does not exist, and we are restricted to the investigation
of the limiting case along the lines following (\ref{Inl}); I do not repeat this
analysis here.
\vspace*{-1.0cm}\noindent
\section{Summary}
\message{Summary}
I have thus shown the admissibility of a path integral treatment of the Aharonov--Bohm
effect on the hyperbolic plane. It can be studied in a straightforward manner,
yielding results analogous to the flat space case. For scattering
states we find interference, due to the modification of the angular momentum
dependence according to $l\to l-\xi$, giving a $\cos$-like pattern in terms of the
strength of the vector potential, for the free motion, the Kepler--Coulomb problem,
and the Higgs-oscillator (the latter being absent in the flat space case); the bound state
wave-functions and the corresponding energy levels are modified in their
angular momentum dependence $l\to l-\xi$ as well, including an alteration of
the number of bound states. We found the usual expansion of the total propagator in
terms of the winding number $n$ of the homotopy classes of paths.
All these features are well known from the corresponding flat space cases. The
complicated interference expression (\ref{Inl}) could not be evaluated in closed
form, due to the curvature of the hyperbolic plane; this would involve an
analytically intractable integration over Legendre functions with respect to the order.
However, the investigation of the flat space limit gave the well-known result.
Therefore the effect of an Aharonov--Bohm gauge field on the hyperbolic plane, i.e.,
scattering on leaky tori, exhibits the same features as in the flat space case of
${\rm I\!R}^2$.
\begin{thebibliography}{99}
\message{Bibliography}
\small
\input cyracc.def
\font\tencyr=wncyr10
\font\tenitcyr=wncyi10
\font\tencpcyr=wncysc10
\def\cyr{\tencyr\cyracc}
\def\itcyr{\tenitcyr\cyracc}
\def\cpcyr{\tencpcyr\cyracc}
\bibitem[Abramowitz and Stegun (1984)]{ABS}
Abramowitz, M., Stegun, I.A.~(Editors): {\it Pocketbook of Mathematical
Functions}. Harry Deutsch, Frankfurt/Main, 1984.
\bibitem[Aharonov and Bohm (1959)]{AB}
Aharonov, Y., Bohm, D.: Significance of Electromagnetic Potentials in
the Quantum Theory. {\it Phys.\,Rev.}\ {\bf 115} (1959) 485--491.
\bibitem[Anandan and Safko (1994)]{Book:A-B}
Anandan, J.S., Safko, J.L.~(eds.):
{\it Quantum Coherence and Reality. Proceedings of the International
Conference on Fundamental Aspects of Quantum Theory in Celebration of
the 60th Birthday of Yakir Aharonov}, Columbia, USA, 1992.
World Scientific, Singapore, 1994.
\bibitem[Barut et al.~(1990)]{BIJb}
Barut, A.O., Inomata, A., Junker, G.: Path Integral Treatment of the
Hydrogen Atom in a Curved Space of Constant Curvature: II.~Hyperbolic
Space. {\it J.\,Phys.\,A: Math.\,Gen.}\ {\bf 23} (1990) 1179--1190.
\bibitem[Bernido (1993)]{BERNe}
Bernido, C.C.: Path Integral Treatment of the Gravitational Anyon in a
Uniform Magnetic Field.
{\it J.\,Phys.\,A: Math.\,Gen.}\ {\bf 26} (1993) 5461--5471.
\bibitem[Bernido and Inomata (1980)]{BEIN}
Bernido, C.C., Inomata, A.: Topological Shifts in the Aharonov--Bohm
Effect. {\it Phys.\,Lett.}\ {\bf A 77} (1980) 394--396.
Path Integrals with a Periodic Constraint: The Aharonov--Bohm Effect.
{\it J.\,Math.\,Phys.}\ {\bf 22} (1981) 715--718.
\bibitem[Chetouani et al.~(1989)]{CGHC}
Chetouani, L., Guechi, L., Hammann, T.F.: Exact Path Integral Solution
of the Coulomb Plus Aharonov--Bohm Potential.
{\it J.\,Math.\,Phys.}\ {\bf 30} (1989) 655--658.
\bibitem[Comtet (1987)]{COM}
Comtet, A.: On the Landau Levels on the Hyperbolic Plane.
{\it Ann.\,Phys.\,$($N.Y.$)$} {\bf 173} (1987) 185--209.
\bibitem[Dr\v ag\v anascu et al.~(1992)]{DRCAKI}
Dr\v ag\v anascu, Gh.E., Campigotto, C., Kibler, M.:
On a Generalized Aharonov--Bohm Plus Coulomb System.
{\it Phys.\,Lett.}\ {\bf A 170} (1992) 339--343.
\bibitem[Fay (1977)]{FAY}
Fay, J.D.: Fourier Coefficients of the Resolvent for a Fuchsian Group.
{\it J.\,Reine und Angew.\,Math.}\ {\bf 293} (1977) 143--203.
\bibitem[Feynman and Hibbs (1965)]{FH}
Feynman, R.P., Hibbs, A.: {\it Quantum Mechanics and Path Integrals}.
McGraw Hill, New York, 1965.
\bibitem[Gamboa and Rivelles (1991)]{GARI}
Gamboa, J., Rivelles, V.O.: Quantum Mechanics of Relativistic Particles in
Multiply Connected Spaces and the Aharonov--Bohm Effect.
{\it J.\,Phys.\,A: Math.\,Gen.}\ {\bf 24} (1991) L659--L666.
\bibitem[Gerry and Singh (1979)]{GSa}
Gerry, C.C., Singh, V.A.: Feynman Path-Integral Approach to the Aharonov--Bohm
Effect. {\it Phys.\,Rev.}\ {\bf D 20} (1979) 2550--2554.
Remarks on the Effects of Topology in the Aha\-ro\-nov--Bohm Effect.
{\it Nuovo Cimento} {\bf B 73} (1983) 161--170.
On the Experimental Consequences of the Winding Numbers of the
Aharonov--Bohm Effect. {\it Phys.\,Lett.}\ {\bf A 92} (1982) 11--12.
\bibitem[Gradshteyn and Ryzhik (1980)]{GRA}
Gradshteyn, I.S., Ryzhik, I.M.: {\it Table of Integrals, Series, and
Products}. Academic Press, New York, 1980.
\bibitem[Grosche (1988)]{GROb}
Grosche, C.: The Path Integral on the Poincar\'e Upper Half-Plane With
a Magnetic Field and for the Morse Potential.
{\it Ann.\,Phys.\,$($N.Y.$)$} {\bf 187} (1988) 110--134.
\bibitem[Grosche (1990a)]{GROd}
Grosche, C.: Path Integration on the Hyperbolic Plane With a Magnetic
Field. {\it Ann.\,Phys.\,$($N.Y.$)$} {\bf 201} (1990) 258--284.
\bibitem[Grosche (1990b)]{GROe}
Grosche, C.: The Path Integral for the Kepler Problem on the
Pseudosphere. {\it Ann.\,Phys.\,$($N.Y.$)$} {\bf 204} (1990) 208--222.
\bibitem[Grosche (1996)]{GROad}
Grosche, C.: {\it Path Integrals, Hyperbolic Spaces, and Selberg Trace
Formul\ae}. World Scientific, Singapore, 1996.
\bibitem[Grosche et al.~(1996)]{GROPOc}
Grosche, C., Pogosyan, G.S., Sissakian, A.N.: Path-Integral Approach to
Superintegrable Potentials on the Two-Dimensional Hyperboloid.
{\it Phys.\,Part.\,Nucl.}\ {\bf 27} (1996) 244--278.
\bibitem[Grosche and Steiner (1988)]{GRSc}
Grosche, C., Steiner, F.: The Path Integral on the Pseudosphere.
{\operatorname{i}t Ann. Phys.\,$($N.Y.$)$} {\bf 182} (1988) 120--156.
\bibitem[Grosche and Steiner (1998)]{GRSh}
Grosche, C., Steiner, F.: {\operatorname{i}t Handbook of Feynman Path Integrals}.
Springer, Berlin, Heidelberg, 1998.
\bibitem[Gutzwiller (1991)]{GUTc}
Gutzwiller, M.C.: {\operatorname{i}t Chaos in Classical and Quantum Mechanics}.
Springer, Berlin, Heidelberg, 1991.
\bibitem[Hejhal (1976)]{HEJb}
Hejhal, D.A.: {\operatorname{i}t The Selberg Trace Formula for $\operatorname{PSL}(2,{\rm I\!R})$}. Lecture
Notes in Physics {\bf 548}. Springer, Berlin, Heidelberg, 1976.
\bibitem[Higgs (1979)]{HIGGS}
Higgs, P.W.: Dynamical Symmetries in a Spherical Geometry.
{\operatorname{i}t J.\,Phys.\,A: Math.\,Gen.}\ {\bf 12} (1979) 309--323.
\bibitem[Hoang and Giang (1993)]{LEVAN}
Hoang, L.V., Giang, N.T.: On the Green Function for a Hydrogen-Like
Atom in the Dirac Monopole Field Plus the Aharonov--Bohm Field.
{\operatorname{i}t J.\,Phys.\,A: Math.\,Gen.}\ {\bf 26} (1993) 3333--3338.
\bibitem[Hoang et al.~(1992)]{HHKR}
Hoang, L.V., Hai, L.X., Komarov, L.I., Romaova, T.S.: Relativistic Analogy
of the Aharonov--Bohm Effect in the Presence of Coulomb Field and Magnetic
Charge. {\operatorname{i}t J.\,Phys.\,A: Math.\,Gen.}\ {\bf 25} (1992) 6461--6469.
\bibitem[Izmest'ev et al.~(1997)]{IPSWa}
Izmest'ev, A.A., Pogosyan, G.S., Sissakian, A.N., Winternitz, P.:
Contractions of Lie Algebras and Separation of Variables. Two-Dimensional
Hyperboloid. {\operatorname{i}t Int.\,J.\,Mod.\,Phys.}\ {\bf 12} (1997) 53--61.
\bibitem[Kibler and Campigotto (1993)]{KICA}
Kibler, M., Campigotto, C.: On a Generalized Aharonov--Bohm Plus Oscillator
System. {\operatorname{i}t Phys.\,Lett.}\ {\bf A 181} (1993) 1--6.
\end{thebibliography}
\end{document}
\begin{document}
\bibliographystyle{abbrv}
\title[FCLT for subgraph counting processes]
{Functional Central Limit Theorem for Subgraph Counting Processes}
\author{Takashi Owada}
\address{Faculty of Electrical Engineering\\
Technion-Israel Institute of Technology \\
Haifa, 32000, Israel}
\email{[email protected]}
\thanks{This research was supported by funding from the European Research Council under the European Union's
Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 320422.}
\subjclass[2010]{Primary 60G70, 60D05. Secondary 60G15, 60G18.}
\keywords{Extreme value theory, functional central limit theorem, geometric graph, regular variation, von Mises function.
}
\begin{abstract}
The objective of this study is to investigate the limiting
behavior of a subgraph counting process. The subgraph counting
process we consider counts the number of subgraphs having a
specific shape that exist outside an expanding ball as the sample
size increases. As underlying laws, we consider distributions with either a regularly varying tail or an
exponentially decaying tail. In both cases, the nature of the
resulting functional central limit theorem differs according to the
speed at which the ball expands. More specifically, the normalizations in the central limit theorems and the properties of the limiting Gaussian processes are all determined by whether or not
an expanding ball covers a region, called a weak core, in which
the random points are highly densely scattered and form a giant
geometric graph.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
The history of random geometric graphs started with Gilbert's
1961 study (\cite{gilbert:1961}) and, since then, it has received
much attention both in theory and applications. More formally,
given a finite set $\mathcal{X} \subset \reals^d$ and a real number
$r>0$, the geometric graph $G(\mathcal{X},r)$ is defined as an
undirected graph with vertex set $\mathcal{X}$ and edges $[x,y]$
for all pairs $x,y\in{\mathcal{X}}$ for which $\|x-y\| \leq r$. The theory of
geometric graphs has been applied mainly in large communication
network analysis, in which the connectivity of network agents
strongly depends on the distance between them; see
\cite{chen:jia:2001}, \cite{stojmenovic:seddigh:zunic:2002}, and
Chapter 3 of \cite{hekmat:2006}. On the purely theoretical side of
random geometric graphs, the monograph
\cite{penrose:2003} is probably the best known resource. It covers a wide range of topics, such as the asymptotics of the
number of subgraphs with a specific shape, the vertex degree, the
clique number, and the formation of a giant component. From among
these interesting subjects, the present study focuses on
establishing a functional central limit theorem (FCLT) for the number of
subgraphs isomorphic to a predefined connected graph $\Gamma$ with
finitely many vertices.
A typical setup in \cite{penrose:2003} is as follows. Let
$\mathcal X_n$ be a set of random points on $\reals^d$. Typically, this will be either an $\text{i.i.d.}$ random sample of $n$ points from $f$, or an
inhomogeneous Poisson point process with intensity $nf$, where $f$ is a probability density. We assume that the threshold radius $r_n$
depends on $n$ and decreases to $0$ as $n \to \infty$, but we do
not impose any restrictive assumptions on $f$ except for boundedness. Then, the asymptotic behavior of
the subgraph counts given by
\begin{equation} \label{e:count.intro1}
G_n := \sum_{{\mathcal{Y}} \subset \mathcal X_n} {\bf 1} \bigl\{ G({\mathcal{Y}}, r_n) \cong \Gamma \bigr\}\,,
\end{equation}
($\cong$ denotes graph isomorphism, and $\Gamma$ is a fixed
connected graph) splits into three different regimes. First, if
$nr_n^d \to 0$, called the \textit{subcritical} or \textit{sparse}
regime, the distribution of subgraphs isomorphic to $\Gamma$ is
sparse, and these subgraphs are mostly observed as isolated components. If $nr_n^d \to
\xi \in (0,\infty)$, called the \textit{critical} or
\textit{thermodynamic} regime, for which $r_n$ decreases to $0$ at
a slower rate than the subcritical regime, many of the isolated
subgraphs in $G(\mathcal X_n, r_n)$ become connected to one another.
Finally, if $nr_n^d \to \infty$ (the \textit{supercritical}
regime), the subgraphs are very highly connected and create a
large component.
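For concreteness (these particular sequences are our own illustrations, not taken from \cite{penrose:2003}): with $r_n = n^{-2/d}$ one has $nr_n^d = n^{-1} \to 0$ (subcritical); with $r_n = (\xi/n)^{1/d}$ one has $nr_n^d = \xi$ for every $n$ (thermodynamic); and with $r_n = n^{-1/d}\log n$ one has $nr_n^d = (\log n)^d \to \infty$ (supercritical).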
Historically, the research on the limiting behavior of subgraph
counts of the type \eqref{e:count.intro1} dates back to the
studies of \cite{hafner:1972}, \cite{silverman:brown:1978}, and
\cite{weber:1983}, all of which mainly treated the subcritical
regime. Furthermore, \cite{bhattacharya:ghosh:1992} adopted
an approach based on the martingale CLT for $U$-statistics and
proved a CLT under various conditions on $f$ and $r_n$. Relying on
the so-called Stein-Chen method, a set of extensive results for
all three regimes was nicely summarized in Chapter 3 of
\cite{penrose:2003}. Recently, as a higher-dimensional analogue of
a random geometric graph, there has been growing interest in the
asymptotics of the so-called random \v{C}ech complex. See, for
example, \cite{kahle:2011}, \cite{kahle:meckes:2013}, and
\cite{yogeshwaran:subag:adler:2014}, while
\cite{bobrowski:kahle:2014} provides an elegant review of that
direction.
Somewhat parallel to \eqref{e:count.intro1}, but more
important for the study of the geometric features of extreme sample clouds, is an alternative that we explore in this
paper. To set this up, we introduce a growing sequence $R_n \to
\infty$ and a threshold radius $t > 0$. The
following quantity, $G_n(t)$, counts the number of subgraphs
in $G(\mathcal X_n, t)$ isomorphic to $\Gamma$ that exist outside
a centered ball in $\reals^d$ with
radius $R_n$:
\begin{equation} \label{e:count.intro2}
G_n(t) := \sum_{{\mathcal{Y}} \subset \mathcal X_n } {\bf 1} \bigl\{ G({\mathcal{Y}}, t) \cong \Gamma \bigr\} \times {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n \bigr\}\,,
\end{equation}
where $m(x_1,\dots,x_k) = \min_{1 \leq i \leq k} \|x_i\|$, $x_i \in \reals^d$, and $\| \cdot \|$ is the usual Euclidean norm.
From the viewpoint of extreme value theory (EVT), it is important to investigate limit theorems for $G_n(t)$. Indeed, over the last decade or so there have been numerous papers treating geometric descriptions of multivariate extremes, among them \cite{balkema:embrechts:2007}, \cite{balkema:embrechts:nolde:2010}, and \cite{balkema:embrechts:nolde:2013}. In particular, Poisson limits of point processes possessing a U-statistic structure were investigated by \cite{dabrowski:dehling:mikosch:sharipov:2002} and \cite{schulte:thale:2012}, the latter also treating a number of examples in stochastic geometry. The main references for EVT are \cite{embrechts:kluppelberg:mikosch:1997},
\cite{resnick:1987}, and \cite{dehaan:ferreira:2006}.
The asymptotic behavior of \eqref{e:count.intro2} has been partially explored in \cite{owada:adler:2015}, where a growing sequence $R_n$ is taken in such a way that \eqref{e:count.intro2} has Poisson limits as $n\to\infty$. The main contribution in \cite{owada:adler:2015} is the discovery of a certain layered structure consisting of a collection of ``rings'' around the origin, with each ring containing extreme random points which exhibit different geometric and topological behavior. The object of the current study is to develop a fuller description of this ring-like structure, at least in a geometric graph model, by establishing a variety of FCLTs which describe geometric graph formation between the rings.
By construction, the subgraph counts \eqref{e:count.intro2} can be viewed as generating a stochastic process in the parameter $t\geq0$, while a process-level extension of \eqref{e:count.intro1} is much less obvious. Moreover, while \eqref{e:count.intro2} captures the dynamic evolution of geometric graphs as $t$ varies, \eqref{e:count.intro1} only describes the static geometry. Thus, the limits in the FCLT for \eqref{e:count.intro2} are intrinsically Gaussian processes, rather than one-dimensional Gaussian distributions.
One of the main results of this paper is that the limiting Gaussian processes can be classified into three distinct categories, according to how rapidly $R_n$ grows.
The most important condition for this classification is whether or not a ball centered at the origin with radius $R_n$, denoted by $B(0, R_n)$, asymptotically covers a \textit{weak core}. Weak cores are balls, centered at the origin with radii growing in $n$, in which the random points are densely scattered and form a highly connected geometric graph. This notion, along with the related notion of a \textit{core}, plays a crucial role in the classification of the limiting Gaussian processes. Indeed, if $B(0, R_n)$ grows so that it asymptotically covers a weak core, then the geometric graph outside $B(0, R_n)$ is ``sparse'' with many small disconnected components. In this case, the limit is expressed as the difference between two time-changed Brownian motions. In contrast, if $B(0,R_n)$ is asymptotically covered by a weak core, the geometric graph in the area between the outside of $B(0,R_n)$ and the inside of a weak core becomes ``dense'', and, accordingly, the limit becomes a degenerate Gaussian process with deterministic sample paths.
Finally, if $B(0,R_n)$ coincides with a weak core, then the limiting Gaussian process possesses a more complicated structure and is not even self-similar.
We want to emphasize that the nature of the FCLT depends not only on the growth rate of $R_n$ but also the tail property of $f$. This is in complete
contrast to \eqref{e:count.intro1}, because, as seen in Chapter 3
of \cite{penrose:2003}, the proper normalization, limiting
Gaussian distribution, etc. of the CLT are all robust to whether
$f$ has a heavy or a light tail. In this paper, we
deal with distributions with regularly varying tails and
(sub)exponential tails. However, we do not treat
distributions with a superexponential tail, e.g., a
multivariate normal distribution. The details of the FCLT in that case remain for a future study.
The remainder of the paper is organized as follows. First, in
Section 2 we provide a formal definition of the subgraph counting
process. Section 3 gives an overview of what was shown in the previous work \cite{owada:adler:2015} and what will be shown in this paper. Subsequently, in Section 4 we focus on the case in which
the underlying density has a regularly varying tail, including
power-law tails, and prove the required FCLT. We also investigate the
properties of the limiting Gaussian processes, in particular, in
terms of self-similarity and sample path continuity. In
Section 5, we do the same when the underlying density has an
exponentially decaying tail. To distinguish densities via their
tail properties, we need basic tools in EVT. In essence, the properties of the limiting Gaussian processes are determined by how rapidly $R_n$ grows to infinity, as well as how rapidly the tail of $f$ decays. Finally, Section 6 carefully examines both cores and weak cores for a large class of
densities.
Before commencing the main body of the paper, we remark that all the random points in this paper are assumed to be
generated by an inhomogeneous Poisson point process on $\reals^d$
with intensity $nf$. We expect that the FCLT in the main theorem
can be carried over to the usual $\text{i.i.d.}$ random sample setup by a
standard ``de-Poissonization" argument; see Section 2.5 in
\cite{penrose:2003}. This is, however, a little more technical and
challenging, and therefore, we decided to concentrate on the
simpler setup of an inhomogeneous Poisson point process. Furthermore, we consider only spherically symmetric distributions.
Although the spherical symmetry assumption is far from being
crucial, we adopt it to avoid unnecessary technicalities.
\section{Subgraph Counting Process}
Let $(X_i, \, i \geq 1)$ be $\text{i.i.d.}$ $\reals^d$-valued
random variables with spherically symmetric probability density
$f$. Given a Poisson random variable $N_n$ with mean $n$,
independent of $(X_i, \, i \geq 1)$, denote by $\mathcal{P}_n = \{
X_1, X_2, \dots, X_{N_n} \}$ a Poisson point process with
$|\mathcal{P}_n| := N_n$. We choose a positive integer $k$, which
remains fixed hereafter. We take $k \geq 2$, unless otherwise
stated, because many of the functions and objects to follow are degenerate in the case of $k=1$.
Let $\Gamma$ be a fixed connected graph with $k$
vertices and let $G$ denote a geometric graph; $\cong$ denotes
graph isomorphism.
We define
$$
h(x_1, \dots, x_k) := {\bf 1} \bigl\{ G \bigl( \{ x_1, \dots, x_k \}, \, 1 \bigr) \cong \Gamma \bigr\}\,, \ \ x_1,\dots,x_k \in \reals^d\,.
$$
Next, we define a collection of indicators $(h_t, \, t
\geq 0)$ by
\begin{equation} \label{e:geo.graph.dyna}
h_t(x_1,\dots,x_k) := h(x_1/t,\dots,x_k/t) = {\bf 1} \bigl\{ G \bigl( \{ x_1, \dots, x_k \}, \, t \bigr) \cong \Gamma \bigr\}\,,
\end{equation}
from which one can capture the manner in which a geometric
graph dynamically evolves as the threshold radius $t$ varies. Note, in particular, that $h_1(x_1,\dots,x_k) = h(x_1,\dots,x_k)$.
Clearly $h_t$ is shift invariant:
\begin{align}
h_t(x_1,\dots,x_k) &= h_t(x_1+y,\dots,x_k+y)\,,\ \ \ x_1,\dots,x_k,y \in \reals^d\,, \label{e:location.inv}
\end{align}
and, further,
\begin{equation} \label{e:close.enough}
h_t(0,x_1,\dots,x_{k-1}) = 0 \ \ \text{if } \|x_i\| > kt \ \text{for some } i=1,\dots,k-1\,.
\end{equation}
The latter condition implies that $h_t(x_1,\dots,x_k)=1$ only when all the points
$x_1,\dots,x_k$ are close enough to each other.
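As a simple illustration (ours, for concreteness), if $k=2$ and $\Gamma$ is a single edge, then
$$
h_t(x_1,x_2) = {\bf 1} \bigl\{ \|x_1 - x_2\| \leq t \bigr\}\,,
$$
and \eqref{e:close.enough} simply says that an edge cannot form between points more than $2t$ apart.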
Moreover, $h_t$ can be decomposed as follows. Suppose that $\Gamma$
has $k$ vertices and $j$ edges for some $j \in \bigl\{
k-1,\dots,k(k-1)/2 \bigr\}$. Letting $A_\ell$ be the set of
connected graphs with $k$ vertices and $\ell$ edges (up to graph
isomorphism), define for $x_1,\dots,x_k \in \reals^d$,
\begin{align*}
h_t^+ (x_1,\dots,x_k) &:= h_t(x_1,\dots,x_k) + \sum_{\ell=j+1}^{k(k-1)/2} \sum_{\Gamma^{\prime} \in A_\ell} {\bf 1} \bigl\{ G\bigl(\{ x_1,\dots,x_k \},t\bigr) \cong \Gamma^{\prime} \bigr\}\,, \\
h_t^- (x_1,\dots,x_k) &:= \sum_{\ell=j+1}^{k(k-1)/2} \sum_{\Gamma^{\prime} \in A_\ell} {\bf 1} \bigl\{ G\bigl(\{ x_1,\dots,x_k\},t\bigr) \cong \Gamma^{\prime} \bigr\}\,.
\end{align*}
Note that $h_t^+(x_1,\dots,x_k)=1$ if and only if a geometric graph $G \bigl( \{ x_1,\dots,x_k \}, t
\bigr)$ either coincides with $\Gamma$ (up to graph isomorphism) or has
more than $j$ edges, while $h_t^-(x_1,\dots,x_k) = 1$ only when $G \bigl( \{ x_1,\dots,x_k \}, t \bigr)$
has more than $j$ edges. It is then elementary to check that
$h_t^{\pm}$ are both indicators, taking values $0$ or $1$, and satisfying, for all $x_1,\dots,x_k \in \reals^d$ and $0 \leq s \leq t$,
\begin{align}
h_t (&x_1,\dots,x_k) = h_t^+ (x_1,\dots,x_k) - h_t^- (x_1,\dots,x_k)\,, \label{e:ind.decomp1} \\
&h_s^{+} (x_1,\dots,x_k) \leq h_t^{+} (x_1,\dots,x_k)\,, \label{e:ind.increase+} \\
&h_s^{-} (x_1,\dots,x_k) \leq h_t^{-} (x_1,\dots,x_k)\,, \notag \\
&h_t^{\pm}(0,x_1,\dots,x_{k-1}) = 0 \ \ \text{if } \|x_i\| > kt \ \text{for some } i=1,\dots,k-1\,. \label{e:close.enough.decomp.dyna}
\end{align}
In addition,
since $h_t$ is an indicator, it is always the case that
$$
h_t^-(x_1,\dots,x_k) \leq h_t^+(x_1,\dots,x_k)\,.
$$
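For orientation (an illustration of ours, not used in the proofs), take $k=3$ and let $\Gamma$ be the path on three vertices, so that $j=2$. The only connected graph on three vertices with three edges is the triangle, so $h_t^+$ is the indicator that $G\bigl(\{x_1,x_2,x_3\},t\bigr)$ is connected (a path or a triangle), $h_t^-$ is the indicator that it is a triangle, and $h_t^+ - h_t^-$ indeed recovers the indicator $h_t$ that the graph is a path. Both $h_t^{\pm}$ are nondecreasing in $t$, as in \eqref{e:ind.increase+}, while $h_t$ itself is not.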
The objective of this study is
to establish a functional central limit theorem (FCLT) for the
\textit{subgraph counting process} defined by
\begin{equation} \label{e:subgraph.count}
G_n(t) := \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_t({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n \bigr\}\,, \ \ t \geq 0\,,
\end{equation}
where $h_t$ is given in \eqref{e:geo.graph.dyna}, $m(x_1,\dots,x_k) = \min_{1 \leq i \leq k} \|x_i\|$, $x_i
\in \reals^d$, and $(R_n, \, n\geq1)$ is a properly chosen
normalizing sequence. Note that \eqref{e:subgraph.count} counts
the number of subgraphs in $G(\mathcal{P}_n, t)$ isomorphic to $\Gamma$ that
lie completely outside of $B(0,R_n)$. More
concrete definitions of $(R_n)$ are given in the subsequent
sections, where the sequence is shown to be dependent on the tail decay rate of $f$.
\section{Annuli Structure} \label{s:annuli}
The objective of this short section is to clarify what is already known and
what is new in this paper. Without any real loss of generality, we will do this via two simple examples, one of
which treats a power-law density and the other a density with a (sub)exponential tail.
Before this, however, we introduce two important notions. \begin{definition}(\cite{adler:bobrowski:weinberger:2014}) \label{def.core}
Given an inhomogeneous Poisson
point process $\mathcal{P}_n$ in $\reals^d$ with a spherically symmetric density $f$, a
centered ball $B(0,R_n)$, with $R_n \to \infty$, is called a
\textit{core} if
\begin{equation} \label{e:core.event}
B(0, R_n) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, R_n)} B(X, 1)\,.
\end{equation}
\end{definition}
In other words, a core is a centered ball in which random
points are densely scattered, so that placing unit balls around
them covers the ball itself. We usually wish to seek the
largest possible value of $R_n$ such that \eqref{e:core.event}
occurs asymptotically with probability $1$. A related notion, the
\textit{weak core}, plays a more decisive role in characterizing
the FCLT proven in this paper. It is shown later that a
weak core is generally larger but close in size to a core of maximum size.
\begin{definition} \label{def.weak.core}
Let $f$ be a spherically symmetric density on $\reals^d$ and $e_1 =
(1,0,\dots,0) \in \reals^d$. A \textit{weak core} is a centered ball
$B(0,R_n^{(w)})$ such that $nf(R_n^{(w)}e_1) \to 1$ as $n \to
\infty$.
\end{definition}
\begin{example} \label{ex:power.law.tail}
{\rm Consider the power-law density
\begin{equation} \label{e:simple.pdf.RV}
f(x) = C/\bigl( 1 + \|x\|^{\alpha} \bigr)\,, \ \ x \in \reals^d,
\end{equation}
for some $\alpha > d$ and a normalizing constant $C$. Using this density, we see how random
geometric graphs are formed in all of $\reals^d$. First, according
to \cite{adler:bobrowski:weinberger:2014}, there exists a sequence
$R_n^{(c)} \sim \text{constant} \times (n/\log n)^{1/\alpha}$, $n
\to \infty$, such that, if $R_n \leq R_n^{(c)}$,
\eqref{e:core.event} occurs asymptotically with probability $1$.
In addition, as for the radius of a weak core, it
suffices to take $R_n^{(w)} = (Cn)^{1/\alpha}$. Although
$R_n^{(w)}$ grows faster than $R_n^{(c)}$, they are seen to be
``close'' to each other in the sense that they have the same
regular variation exponent, $1/\alpha$.
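As a quick sanity check (a two-line computation of ours), the weak-core condition $nf(R_n^{(w)}e_1) \to 1$ for the density \eqref{e:simple.pdf.RV} reads
$$
nf(R_n^{(w)}e_1) = \frac{Cn}{1 + (R_n^{(w)})^{\alpha}} \to 1\,,
$$
which holds precisely when $(R_n^{(w)})^{\alpha} \sim Cn$, i.e., for $R_n^{(w)} = (Cn)^{1/\alpha}$.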
Beyond a weak core, however, the formation of random geometric
graphs drastically varies. In fact, the exterior of a weak core
can be divided into annuli of different radii, at which many
isolated subgraphs of finite vertices are asymptotically placed in
a specific fashion. To be more precise, let us fix connected
graphs $\Gamma_k$ with $k$ vertices for $k=2,3,\dots$ and let
$$
R_{k,n}^{(p)} := \bigl( C n\bigr)^{1/(\alpha - d/k)},
$$
which in turn implies that $R_n^{(w)} \ll \cdots \ll R_{k,n}^{(p)} \ll R_{k-1,n}^{(p)} \ll \cdots \ll R_{2,n}^{(p)}$, and
$$
n^k \bigl( R_{k,n}^{(p)} \bigr)^d f\bigl( R_{k,n}^{(p)} e_1 \bigr)^k \to 1\,, \ \ n \to \infty\,.
$$
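To see why this choice works (a short verification of ours, using $f(re_1) \sim Cr^{-\alpha}$ as $r \to \infty$), write $R = R_{k,n}^{(p)}$ and note that
$$
n^k R^d f(Re_1)^k \sim C^k n^k R^{d-k\alpha} = C^k n^k (Cn)^{(d-k\alpha)/(\alpha - d/k)} = C^k n^k (Cn)^{-k} = 1\,,
$$
since $(d-k\alpha)/(\alpha - d/k) = k(d-k\alpha)/(k\alpha - d) = -k$.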
Under this circumstance, \cite{owada:adler:2015} considered the subgraph counts given by
\begin{equation} \label{e:count.intro3}
\sum_{{\mathcal{Y}} \subset \mathcal{P}_n} {\bf 1} \bigl\{ G({\mathcal{Y}},t) \cong \Gamma_k \bigr\} \times {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_{k,n}^{(p)} \bigr\}\,,
\end{equation}
and showed that \eqref{e:count.intro3} weakly converges to a
Poisson distribution for each fixed $t$. To be more specific on
the geometric side, let Ann$(K,L)$ be an annulus with inner radius
$K$ and outer radius $L$. Then, we have, in an asymptotic sense,
\begin{itemize}
\item Outside $B\bigl(0,R_{2,n}^{(p)}\bigr)$, there are finitely
many graphs isomorphic to $\Gamma_2$, but none isomorphic to $\Gamma_3, \Gamma_4, \dots$. \item
Outside $B\bigl(0,R_{3,n}^{(p)}\bigr)$, equivalently inside
Ann$\bigl( R_{3,n}^{(p)}, R_{2,n}^{(p)} \bigr)$, there are
infinitely many graphs isomorphic to $\Gamma_2$ and finitely many graphs isomorphic to $\Gamma_3$, but none isomorphic to
$\Gamma_4, \Gamma_5, \dots$.
\end{itemize}
In general,
\begin{itemize}
\item Outside $B\bigl(0,R_{k,n}^{(p)}\bigr)$, equivalently inside
Ann$\bigl( R_{k,n}^{(p)}, R_{k-1,n}^{(p)} \bigr)$, there are
infinitely many graphs isomorphic to $\Gamma_2, \dots, \Gamma_{k-1}$ and finitely many
graphs isomorphic to $\Gamma_k$, but none isomorphic to $\Gamma_{k+1}, \Gamma_{k+2}, \dots$.
\end{itemize}
Section \ref{s:heavy} of the current paper considers the subgraph
counts of the form
\begin{equation} \label{e:count.intro4}
\sum_{{\mathcal{Y}} \subset \mathcal{P}_n} {\bf 1} \bigl\{ G({\mathcal{Y}},t) \cong \Gamma_k \bigr\} \times {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n \bigr\}\,,
\end{equation}
where $(R_n)$ satisfies
\begin{equation} \label{e:rate.clt}
n^k R_n^d f(R_ne_1)^k \to \infty\,, \ \ n \to \infty\,,
\end{equation}
in which case, $R_n \ll R_{k,n}^{(p)}$. As a consequence of
\eqref{e:rate.clt}, we may naturally anticipate that an
FCLT governs the asymptotic behavior of \eqref{e:count.intro4}. Since $(R_n)$ satisfying
\eqref{e:rate.clt} diverges more slowly than
$(R_{k,n}^{(p)})$, i.e., $R_n/R_{k,n}^{(p)} \to 0$, we may expect
that infinitely many subgraphs isomorphic to $\Gamma_k$ appear
asymptotically outside $B(0,R_n)$. This in turn implies that,
instead of a Poisson limit theorem, an FCLT governs the limiting
behavior of the subgraph counting process.
In analogy with the setup for \eqref{e:count.intro1}, when deriving an FCLT, the behavior
of \eqref{e:count.intro4} splits into three different regimes:
$$
(i)\ nf(R_ne_1) \to 0\,, \ \ (ii)\ nf(R_ne_1) \to \xi \in (0,\infty)\,, \ \ (iii)\ nf(R_ne_1) \to \infty\,.
$$
Specifically, if $nf(R_ne_1) \to 0$ (i.e., $B(0,R_n)$ contains a
weak core), many isolated components of subgraphs isomorphic to
$\Gamma_k$ are distributed outside $B(0,R_n)$. If
$nf(R_ne_1) \to \xi \in (0,\infty)$ (i.e., $B(0,R_n)$ agrees with a weak core), the subgraphs isomorphic
to $\Gamma_k$ outside $B(0,R_n)$ begin to be connected to one another. In particular, observing that $\lim_{k\to\infty} R_{k,n}^{(p)} = R_n^{(w)}$ for all $n$, we see that
\begin{itemize}
\item Outside of $B(0, R_n^{(w)})$, there are infinitely many graphs isomorphic to $\Gamma_j$ for every $j=2,3,\dots$.
\end{itemize}
If $nf(R_ne_1) \to \infty$ (i.e., $B(0,R_n)$ is contained in a weak core), the subgraphs isomorphic
to $\Gamma_k$ outside $B(0,R_n)$ become increasingly connected and form a large component.
In Section \ref{s:heavy}, we will see that the nature of the FCLT, including the normalizing constants and the properties of the limiting Gaussian processes, differs according to which regime one considers. Combining the results on the FCLT and the Poissonian results in \cite{owada:adler:2015}, we obtain a complete picture of the annuli structure formed by heavy tailed random variables.
\begin{figure}
\caption{{\footnotesize Layered structure of random geometric graphs for the
density \eqref{e:simple.pdf.RV}.}}
\end{figure}
}
\end{example}
\begin{example} \label{ex:subexp.tail}
{\rm Next, we turn to a density with a (sub)exponential tail
$$
f(x) = C e^{-\|x\|^{\tau}/\tau}, \ \ x \in \reals^d\,, \ 0 < \tau \leq 1\,,
$$
for which the radius of a maximum core is given by
$$
R_n^{(c)} = \bigl(\tau \log n - \tau \log \log (\tau \log n)^{1/\tau} + \text{constant} \bigr)^{1/\tau};
$$
see \cite{adler:bobrowski:weinberger:2014} and \cite{owada:adler:2015}. Obviously, one can
take $R_n^{(w)} = \bigl(\tau \log n + \tau \log C \bigr)^{1/\tau}$. As in the previous example, the exterior of a weak core is characterized by the
same kind of layered structure, for which the description in Figure 1 applies, except for the change in the values of $R_{k,n}^{(p)}$. Letting
$$
R_{k,n}^{(p)} = \bigl( \tau \log n + k^{-1} (d-\tau) \log (\tau \log n) + \tau \log C \bigr)^{1/\tau},
$$
we have, in an asymptotic sense,
$R_n^{(w)} \ll \cdots \ll R_{k,n}^{(p)} \ll R_{k-1,n}^{(p)} \ll \cdots \ll R_{2,n}^{(p)}$, and
$$
n^k \bigl( R_{k,n}^{(p)} \bigr)^{d-\tau} f\bigl( R_{k,n}^{(p)} e_1 \bigr)^k \to 1\,, \ \ n \to \infty\,.
$$
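As a brief check (analogous to the computation in the previous example, again ours), writing $R = R_{k,n}^{(p)}$ we have $R^{\tau} = \tau \log n + k^{-1}(d-\tau)\log(\tau\log n) + \tau \log C$, so that
$$
f(Re_1)^k = C^k e^{-kR^{\tau}/\tau} = n^{-k} (\tau \log n)^{-(d-\tau)/\tau}\,,
$$
while $R^{d-\tau} = (R^{\tau})^{(d-\tau)/\tau} \sim (\tau\log n)^{(d-\tau)/\tau}$; multiplying the factors gives $n^k R^{d-\tau} f(Re_1)^k \to 1$.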
Then, it was shown in \cite{owada:adler:2015} that
\eqref{e:count.intro3} converges weakly to a Poisson distribution
for each fixed $t$.
In Section \ref{s:light} of this paper, taking $(R_n)$ such that
$n^k R_{n}^{d-\tau} f( R_{n} e_1 )^k \to \infty$, we establish an
FCLT for the subgraph counting process \eqref{e:subgraph.count}. To this end, our
argument has to be split, once again, into the three different
regimes:
$$
(i)\ nf(R_ne_1) \to 0\,, \ \ (ii)\ nf(R_ne_1) \to \xi \in (0,\infty)\,, \ \ (iii)\ nf(R_ne_1) \to \infty\,.
$$
}
As in the last example, three different Gaussian limits may appear depending on the regime. This completes the full description of the annuli structure formed by random variables with an exponentially decaying tail, when combined with the Poisson limit theorems in \cite{owada:adler:2015}.
\end{example}
\section{Heavy Tail Case} \label{s:heavy}
\subsection{The Setup}
In this section, we explore the case in which the underlying
density $f$ on $\reals^d$ has a heavy tail under a more general setup than that in Example \ref{ex:power.law.tail}. Let $S_{d-1}$ be the $(d-1)$-dimensional unit sphere in $\reals^d$. We
assume that the density has a regularly varying tail (at infinity)
in the sense that for any $\theta \in S_{d-1}$ (equivalently, for
some $\theta \in S_{d-1}$ because of the spherical symmetry of
$f$), and for some $\alpha > d$,
$$
\lim_{r \to \infty} \frac{f(rt \theta)}{f(r \theta)} = t^{-\alpha} \ \ \text{for every } t>0\,.
$$
Denoting by $RV_{-\alpha}$ the collection of regularly varying
functions (at infinity) of exponent $-\alpha$, the above is
written as
\begin{equation} \label{e:RV.tail}
f \in RV_{-\alpha}\,.
\end{equation}
Clearly, the power-law density in Example \ref{ex:power.law.tail} satisfies \eqref{e:RV.tail}.
Let $k \geq 2$ be an integer that remains fixed throughout this
section. We remark that many of the functions and objects to
follow depend on $k$, but this dependence may not be indicated by
subscripts (or superscripts). Choosing the sequence $R_n
\to \infty$ so that
\begin{equation} \label{e:normalizing.heavy}
n^k R_n^d f(R_n e_1)^k \to \infty \ \ \text{as } n \to \infty\,,
\end{equation}
we consider the subgraph counting process given in
\eqref{e:subgraph.count}, whose behavior is, as argued in Example \ref{ex:power.law.tail}, expected to be governed by an FCLT.
The scaling constants for the FCLT, denoted by $\tau_n$, are shown
to depend on the limit value of $nf(R_ne_1)$ as $n \to \infty$.
More precisely, we take
\begin{equation} \label{e:tau.heavy}
\tau_n := \begin{cases} n^k R_n^d f(R_ne_1)^k & \text{if } nf(R_ne_1) \to 0\,, \\ R_n^d & \text{if } nf(R_ne_1) \to \xi \in (0,\infty)\,, \\ n^{2k-1} R_n^d f(R_ne_1)^{2k-1} & \text{if } nf(R_ne_1) \to \infty\,.
\end{cases}
\end{equation}
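As a quick consistency check (not needed in what follows), the three normalizations match, up to constants, at the boundary regime: if $nf(R_ne_1) \to \xi \in (0,\infty)$, then
$$
n^k R_n^d f(R_ne_1)^k \sim \xi^k R_n^d \quad \text{and} \quad n^{2k-1} R_n^d f(R_ne_1)^{2k-1} \sim \xi^{2k-1} R_n^d\,,
$$
so all three choices of $\tau_n$ are of the same order $R_n^d$.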
The reason we need three different normalizations is
deeply related to the connectivity of a random geometric graph. To
explain this, we need the notion of a \textit{weak core}; see Definition \ref{def.weak.core} for the formal definition. The main point is that the density
of random points differs completely between the outside and the inside of a weak core. In essence, random points inside a weak
core are highly densely scattered, and the corresponding random
geometric graph forms a single giant component. Beyond a weak
core, however, random points are distributed less densely, and as
a result, we observe many isolated geometric graphs of smaller
size. This disparity between the outside and inside of a weak
core requires different normalizations in $(\tau_n)$. In Section
\ref{s:connect.core}, a more detailed study in this direction is
presented.
\subsection{Limiting Gaussian Processes and the FCLT} \label{s:limit.heavy}
We introduce a family of Gaussian processes which function as the building blocks for the limiting Gaussian processes in the FCLT. For $\ell = 1,\dots, k$, let
$$
B_\ell = \frac{s_{d-1}}{\ell! \bigl( (k-\ell)! \bigr)^2 \bigl( \alpha(2k-\ell) - d \bigr)}\,,
$$
where $s_{d-1}$ is the surface area of the $(d-1)$-dimensional unit sphere in $\reals^d$.
For $\ell = 2,\dots,k$, write $\lambda_\ell$ for the Lebesgue measure on $(\reals^d)^{\ell-1}$, and denote by $G_\ell$ a \textit{Gaussian $B_\ell \lambda_\ell$-noise}, such that
$$
G_\ell(A) \sim \mathcal N \bigl( 0, B_\ell \lambda_\ell (A) \bigr)
$$
for measurable sets $A \subset (\reals^d)^{\ell-1}$ with $\lambda_\ell (A) < \infty$, and if $A \cap B= \emptyset$, then $G_\ell(A)$ and $G_\ell (B)$ are independent. For $\ell = 1$, we define $G_1$ as a Gaussian random variable with zero mean and variance $B_1$. We assume that $G_1, \dots, G_k$ are independent.
For $\ell = 2,\dots, k-1$, we define Gaussian processes ${\bf V}_\ell = \bigl(V_\ell (t), \, t \geq 0\bigr)$ by
$$
V_\ell (t) := \int_{(\reals^d)^{\ell-1}} \int_{(\reals^d)^{k-\ell}} h_t (0, {\bf y}, {\bf z}) \, d{\bf z}\, G_\ell (d{\bf y}), \ \ t\geq0.
$$
In addition, if $\ell = k$, define
$$
V_k(t) := \int_{(\reals^d)^{k-1}} h_t(0,{\bf y})\, G_k (d{\bf y}),
$$
and if $\ell = 1$, set
$$
V_1(t) := \int_{(\reals^d)^{k-1}} h_t(0,{\bf z})\, d{\bf z}\, G_1 = t^{d(k-1)} \int_{(\reals^d)^{k-1}} h(0,{\bf z})\, d{\bf z}\, G_1.
$$
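The second equality is just the change of variables ${\bf z} = t{\bf u}$ in \eqref{e:geo.graph.dyna}: since $h_t(0,{\bf z}) = h(0,{\bf z}/t)$ and ${\bf z}$ ranges over $(\reals^d)^{k-1}$,
$$
\int_{(\reals^d)^{k-1}} h_t(0,{\bf z})\, d{\bf z} = \int_{(\reals^d)^{k-1}} h(0,{\bf u})\, t^{d(k-1)}\, d{\bf u}\,.
$$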
Note that ${\bf V}_1$ is a degenerate Gaussian process with deterministic sample paths. These processes later turn out to be the
building blocks of the weak limits in the main theorem.
The covariance function of the process ${\bf V}_\ell$ is given by
\begin{align}
L_\ell(t,s) &:= \mathbb{E} \bigl\{ V_\ell(t) V_\ell(s) \bigr\} \label{e:cov.comp} \\
&= B_\ell \int_{(\reals^d)^{\ell-1}} \hspace{-10pt} d{\bf y} \int_{(\reals^d)^{k-\ell}}\hspace{-10pt} d{\bf z}_2 \int_{(\reals^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1\, h_t(0,{\bf y},{\bf z}_1)\, h_s(0,{\bf y},{\bf z}_2)\,, \ \ \ t,s \geq 0 \notag
\end{align}
(if $\ell =k$, we take ${\bf z}_i = \emptyset$, $i=1,2$, and if $\ell = 1$, we set ${\bf y} = \emptyset$).
Using the decomposition \eqref{e:ind.decomp1}, we can express ${\bf V}_\ell$ as the difference between two Gaussian processes; that is, for $\ell = 2,\dots,k-1$,
\begin{align*}
V_\ell (t) &= \int_{(\reals^d)^{\ell-1}} \int_{(\reals^d)^{k-\ell}} \hspace{-5pt}h_t^+ (0, {\bf y}, {\bf z}) \, d{\bf z}\, G_\ell (d{\bf y}) - \int_{(\reals^d)^{\ell-1}} \int_{(\reals^d)^{k-\ell}} \hspace{-5pt}h_t^- (0, {\bf y}, {\bf z}) \, d{\bf z}\, G_\ell (d{\bf y}) \\
&:= V_\ell^+(t) - V_\ell^-(t).
\end{align*}
The same decomposition is feasible in an analogous manner for ${\bf V}_1$ and ${\bf V}_k$.
The following proposition shows that the processes ${\bf V}_k^{+}$ and ${\bf V}_k^-$ can each be represented as a time-changed Brownian motion.
\begin{proposition} \label{p:limit.heavy1}
The process ${\bf V}_k^+$ can be expressed as
$$
\bigl( V_k^+(t), \, t\geq 0 \bigr) \stackrel{d}{=} \Bigl( B \bigl( K_k^+ \, t^{d(k-1)} \bigr), \, t \geq 0 \Bigr),
$$
where $B$ is the standard Brownian motion and $K_k^+ := B_k \int_{(\mathbb{R}^d)^{k-1}} h^+(0,{\bf y})\, d{\bf y}$. \\
Replacing $K_k^+$ with $K_k^- := B_k \int_{(\mathbb{R}^d)^{k-1}} h^-(0,{\bf y})\, d{\bf y}$, we obtain the same statement for ${\bf V}_k^-$.
\end{proposition}
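The covariance identity underlying this representation can be illustrated by simulation. In the sketch below, $d$, $k$, and the constant standing in for $K_k^+$ are illustrative values (the true $K_k^+$ involves the integral of $h^+$, which is not computed here); the empirical covariance of $B(K t^{d(k-1)})$ at times $s \leq t$ is compared with $s^{d(k-1)} K$.

```python
import numpy as np

# Monte Carlo check: cov of the time-changed Brownian motion B(K t^{d(k-1)})
# at s <= t equals K s^{d(k-1)}, matching the covariance of V_k^+.
d, k, K = 2, 3, 0.7              # illustrative; K stands in for K_k^+
a = d * (k - 1)                  # exponent of the deterministic clock

rng = np.random.default_rng(0)
times = np.array([0.5, 1.0, 1.5])
clock = K * times**a             # K t^{d(k-1)} at the three times
steps = np.sqrt(np.diff(np.concatenate([[0.0], clock])))

# independent Gaussian increments along the clock, summed into paths
paths = np.cumsum(rng.normal(0.0, steps, size=(200_000, 3)), axis=1)
emp_cov = float((paths[:, 0] * paths[:, 2]).mean())   # E{X(0.5) X(1.5)}
theory = K * 0.5**a                                   # K s^{d(k-1)}, s = 0.5
print(emp_cov, theory)
```

The two printed values agree up to Monte Carlo error, mirroring the proof of the proposition below.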
\begin{proof}
It is enough to verify that the covariance functions on both sides coincide. It follows from \eqref{e:ind.increase+} that for $0 \leq s \leq t$,
\begin{align*}
\mathbb{E} \bigl\{ V_k^+(t) V_k^+(s) \bigr\} &= B_k \int_{(\mathbb{R}^d)^{k-1}} h_t^+(0,{\bf y})\, h_s^+(0,{\bf y})\, d{\bf y} \\
&= s^{d(k-1)} K_k^+ \\
&= \mathbb{E} \bigl\{ B(K_k^+\, t^{d(k-1)})\, B(K_k^+\, s^{d(k-1)}) \bigr\}.
\end{align*}
\end{proof}
We also claim that the process ${\bf V}_\ell$ is self-similar and has a.s.\ H\"{o}lder continuous sample paths. Recall that a stochastic process $\bigl(X(t), \, t \geq 0 \bigr)$ is said to be self-similar with exponent $H$ if
$$
\bigl(X(ct_i), \, i=1,\dots,m \bigr) \stackrel{d}{=} \bigl(c^H X(t_i), \, i=1,\dots,m \bigr)
$$
for any $c>0$, $t_1,\dots,t_m \geq 0$, and $m \geq 1$.
\begin{proposition} \label{p:self.similar.heavy}
$(i)$ For $\ell=1,\dots,k$, the process ${\bf V}_\ell$ is self-similar with exponent $H=d(2k-\ell-1)/2$. \\
\noindent $(ii)$ For $\ell =1,\dots,k$ and every $T>0$, $\bigl( V_\ell(t), \, 0\leq t \leq T \bigr)$ has a modification, the sample paths of which are H\"{o}lder continuous of any order in $[0,1/2)$.
\end{proposition}
\begin{proof}
Part $(i)$ follows immediately from the scaling property
$$
L_\ell(ct,cs) = c^{d(2k-\ell-1)}L_\ell(t,s)\,, \ \ \ t,s\geq0\,, \ c>0\,.
$$
As for $(ii)$, the statement is obvious for $\ell = 1$ or $\ell = k$; therefore, we take $\ell \in \{ 2,\dots,k-1 \}$. By Gaussianity,
\begin{equation} \label{e:2m.moment}
\mathbb{E} \Bigl\{ \bigl( V_\ell(t) - V_\ell(s) \bigr)^{2m}\Bigr\} = \prod_{i=1}^m(2i-1)\, \Bigl( \mathbb{E} \Bigl\{ \bigl( V_\ell(t) - V_\ell(s) \bigr)^2 \Bigr\}\Bigr)^m, \ \ m=1,2,\dots
\end{equation}
We now show that there exists a constant $C>0$, depending on $T$, such that
\begin{equation} \label{e:est.2nd.moment}
\mathbb{E} \Bigl\{ \bigl(V_\ell(t) - V_\ell(s) \bigr)^2\Bigr\} \leq C(t-s) \ \ \text{for all } 0 \leq s \leq t \leq T\,.
\end{equation}
By virtue of the decomposition ${\bf V}_\ell = {\bf V}_\ell^+ - {\bf V}_\ell^-$, it suffices to show \eqref{e:est.2nd.moment} for each of ${\bf V}_\ell^+$ and ${\bf V}_\ell^-$. We handle ${\bf V}_\ell^+$ only, since ${\bf V}_\ell^-$ can be treated in the same manner. We have
\begin{align*}
\mathbb{E} \Bigl\{ \bigl( V_\ell^+(t) - V_\ell^+(s) \bigr)^2\Bigr\} &= B_\ell \int_{(\mathbb{R}^d)^{\ell-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1 \bigl\{ h_t^+(0,{\bf y},{\bf z}_1) - h_s^+(0,{\bf y},{\bf z}_1) \bigr\} \\
&\quad \times \bigl\{ h_t^+(0,{\bf y},{\bf z}_2) - h_s^+(0,{\bf y},{\bf z}_2) \bigr\}\,.
\end{align*}
Because of \eqref{e:close.enough.decomp.dyna}, the integral above is unchanged if the domain of integration is restricted to $(\mathbb{R}^d)^{\ell-1} \times (\mathbb{R}^d)^{k-\ell} \times \bigl( B(0,kT) \bigr)^{k-\ell}$. In addition, by \eqref{e:ind.increase+}, there exist constants $C_1, C_2>0$, both depending on $T$, such that
\begin{align*}
\mathbb{E} \Bigl\{ \bigl( V_\ell^+(t) - V_\ell^+(s) \bigr)^2\Bigr\} &\leq C_1 \int_{(\mathbb{R}^d)^{\ell-1}} \int_{(\mathbb{R}^d)^{k-\ell}} \bigl\{ h_t^+(0,{\bf y},{\bf z}) - h_s^+(0,{\bf y},{\bf z}) \bigr\}\, d{\bf z}\, d{\bf y} \\
&= C_1 \int_{(\mathbb{R}^d)^{\ell-1}} \int_{(\mathbb{R}^d)^{k-\ell}} h^+ (0,{\bf y},{\bf z})\, d{\bf z}\, d{\bf y}\, \bigl( t^{d(k-1)} - s^{d(k-1)} \bigr) \\
&\leq C_2 \int_{(\mathbb{R}^d)^{\ell-1}} \int_{(\mathbb{R}^d)^{k-\ell}} h^+ (0,{\bf y},{\bf z})\, d{\bf z}\, d{\bf y}\, (t-s) \ \ \text{for all } 0 \leq s \leq t \leq T\,,
\end{align*}
which verifies \eqref{e:est.2nd.moment}. \\
Combining \eqref{e:2m.moment} and \eqref{e:est.2nd.moment}, we obtain, for some $C_3>0$,
$$
\mathbb{E} \Bigl\{ \bigl(V_\ell(t) - V_\ell(s)\bigr)^{2m} \Bigr\} \leq C_3(t-s)^m \ \ \text{for all } 0 \leq s \leq t \leq T\,.
$$
It now follows from the Kolmogorov continuity theorem that there exists a modification of $\bigl( V_\ell(t), \, 0 \leq t \leq T \bigr)$ whose sample paths are H\"{o}lder continuous of any order in $\bigl[0,(m-1)/(2m) \bigr)$. Since $m$ is arbitrary, letting $m \to \infty$ completes the proof.
\end{proof}
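Two ingredients of this proof can be verified numerically: the Gaussian moment identity \eqref{e:2m.moment}, $\mathbb{E}\{X^{2m}\} = (2m-1)!!\, (\mathbb{E}\{X^2\})^m$ for a centred normal, and the fact that the H\"older orders $(m-1)/(2m)$ exhaust $[0,1/2)$. The variance and $m$ below are illustrative values.

```python
import math
import numpy as np

# (a) Gaussian 2m-th moment by quadrature vs. the double-factorial formula,
#     for a centred normal with illustrative variance sigma2 and m = 3.
sigma2, m = 1.7, 3
z = np.linspace(-30.0, 30.0, 2_000_001)
dz = z[1] - z[0]
density = np.exp(-z**2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
moment = float(np.sum(z ** (2 * m) * density) * dz)               # E{X^{2m}}
double_factorial = math.prod(2 * i - 1 for i in range(1, m + 1))  # (2m-1)!!

# (b) the Kolmogorov bound gives Holder exponents up to (m-1)/(2m) -> 1/2.
holder_orders = [(mm - 1) / (2 * mm) for mm in range(1, 100_000)]
print(moment, double_factorial * sigma2**m, max(holder_orders))
```

For $m=3$ the formula gives $15\,\sigma^6$, and the supremum of the H\"older orders approaches but never reaches $1/2$.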
We are now ready to state the FCLT for the subgraph counting process, suitably scaled and centered so that
$$
X_n(t) = \tau_n^{-1/2} \bigl( G_n(t) - \mathbb{E} \{ G_n(t) \} \bigr)\,, \ \ t \geq 0\,.
$$
In the following, $\Rightarrow$ denotes weak convergence. All weak convergence hereafter takes place in the space $\mathcal{D}[0,\infty)$ of right-continuous functions with left limits. The proof of the theorem is deferred to Section \ref{s:proof.heavy}.
\begin{theorem} \label{t:main.heavy}
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \bigl( V_k(t), \, t\geq 0 \bigr) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\noindent $(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \left( \sum_{\ell=1}^k \xi^{2k-\ell} V_\ell(t), \, t\geq 0 \right) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\noindent $(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \bigl( V_1(t), \, t\geq 0 \bigr) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\end{theorem}
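The trichotomy of the theorem can be made concrete for a regularly varying tail. Assuming the illustrative form $f(re_1) = r^{-\alpha}$ (so $n f(R_n e_1) = n R_n^{-\alpha}$), the regime is decided by whether $R_n$ grows faster than, like, or slower than $n^{1/\alpha}$; the numeric thresholds in this sketch are heuristic stand-ins for the limits $0$, $\xi$, and $\infty$.

```python
# Regime selection for the illustrative tail f(r e_1) = r^{-alpha}:
# n f(R_n e_1) = n * R_n^{-alpha}, evaluated at one large n as a proxy
# for its limit (thresholds 1e-3 and 1e3 are heuristic).
def regime(alpha, radius, n=10**12):
    x = n * radius(n) ** (-alpha)
    if x < 1e-3:
        return "V_k"                      # case (i): n f(R_n e_1) -> 0
    if x > 1e3:
        return "V_1"                      # case (iii): n f(R_n e_1) -> infinity
    return "mixture of V_1,...,V_k"       # case (ii): limit xi in (0, infinity)

alpha = 4.0
fast = regime(alpha, lambda n: n ** (1 / alpha + 0.1))  # R_n >> n^{1/alpha}
crit = regime(alpha, lambda n: 2.0 * n ** (1 / alpha))  # R_n comparable to n^{1/alpha}
slow = regime(alpha, lambda n: n ** (1 / alpha - 0.1))  # R_n << n^{1/alpha}
print(fast, crit, slow)
```

The three sample radii land in cases $(i)$, $(ii)$, and $(iii)$ respectively.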
The processes ${\bf V}_1, \dots, {\bf V}_k$ can be viewed as the building blocks of the limiting Gaussian processes; however, how many and which of them contribute to the limit depends on whether the ball $B(0,R_n)$ covers a weak core or not. If $B(0,R_n)$ covers a weak core, equivalently, $nf(R_ne_1) \to 0$, then ${\bf V}_k$ is the only process remaining in the limit. Although, as seen in Proposition \ref{p:limit.heavy1}, ${\bf V}_k$ is in general represented as the difference of two time-changed Brownian motions, it reduces to a \textit{single} time-changed Brownian motion when $h_t$ is increasing in $t$, i.e., $h_s({\mathcal{Y}}) \leq h_t({\mathcal{Y}})$ for all $0 \leq s \leq t$ and ${\mathcal{Y}} \in (\mathbb{R}^d)^k$. This is the case when $\Gamma$ is a complete graph, for which the negative part $h_t^{-}$ vanishes identically.
In contrast, the process ${\bf V}_1$, a degenerate Gaussian process with deterministic sample paths, appears in the limit only when $B(0,R_n)$ is contained in a weak core, i.e., $nf(R_ne_1) \to \infty$. Finally, if $B(0,R_n)$ agrees with a weak core (up to multiplicative constants), all of the processes ${\bf V}_1, \dots, {\bf V}_k$ contribute to the limit. Interestingly, only in this case do the weak limits fail to be self-similar.
\section{Exponentially Decaying Tail Case} \label{s:light}
\subsection{The Setup}
This section develops the FCLT for the subgraph counting process, suitably scaled and centered, when the underlying density on $\mathbb{R}^d$ possesses an exponentially decaying tail. Typically, in the spirit of extreme value theory, a class of multivariate densities with exponentially decaying tails can be formulated via the so-called \textit{von Mises functions}; see, for example, \cite{balkema:embrechts:2004} and \cite{balkema:embrechts:2007}. In particular, in the one-dimensional case ($d=1$), the von Mises function plays a decisive role in the characterization of the max-domain of attraction of the Gumbel law; see Proposition 1.4 in \cite{resnick:1987}. We assume that the density $f$ on $\mathbb{R}^d$ is given by
\begin{equation} \label{e:density.light}
f(x) = L \bigl( ||x|| \bigr) \exp \bigl\{ -\psi \bigl( ||x|| \bigr) \bigr\}\,, \ \ x \in \mathbb{R}^d.
\end{equation}
Here, $\psi: \mathbb{R}_+ \to \mathbb{R}$ is a $C^2$ function, referred to as a von Mises function, satisfying
\begin{equation} \label{e:von.Mises}
\psi^{\prime}(z) > 0, \ \ \psi(z) \to \infty, \ \ \bigl( 1/\psi^{\prime} \bigr)^{\prime}(z) \to 0
\end{equation}
as $z \to z_{\infty} \in (0,\infty]$. In this paper, we restrict ourselves to densities with unbounded support, i.e., $z_{\infty} \equiv \infty$. For notational ease, we introduce the function $a(z) = 1/\psi^{\prime}(z)$, $z > 0$. Since $a^{\prime}(z) \to 0$ as $z \to \infty$, the Ces\`{a}ro mean of $a^{\prime}$ converges as well:
\begin{equation} \label{e:auxi.slow}
\frac{a(z)}{z} = \frac{1}{z} \int_0^z a^{\prime}(r)\, dr \to 0\,, \ \ \text{as } z \to \infty\,.
\end{equation}
Suppose that a measurable function $L:\mathbb{R}_+ \to \mathbb{R}_+$ is \textit{flat} for $a$, that is,
\begin{equation} \label{e:flat}
\frac{L \bigl( t + a(t) v \bigr)}{L(t)} \to 1 \ \ \text{as } t \to \infty \ \text{uniformly on bounded } v\text{-sets}.
\end{equation}
This condition implies that $L$ behaves like a constant locally in the tail of $f$, so that only $\psi$ plays a dominant role in the characterization of the tail of $f$. We also need an extra technical condition on $L$: there exist $\gamma \geq 0$, $z_0 > 0$, and $C \geq 1$ such that
\begin{equation} \label{e:poly.upper}
\frac{L(zt)}{L(z)} \leq C\, t^{\gamma} \ \ \text{for all } t > 1, \, z \geq z_0\,.
\end{equation}
Since $L$ is negligible in the tail of $f$, it is reasonable to classify the density \eqref{e:density.light} in terms of the limit of $a$. If $a(z) \to \infty$ as $z \to \infty$, we say that $f$ belongs to the class of densities with \textit{subexponential} tail, because the tail of $f$ then decays more slowly than that of an exponential distribution. If instead $a(z) \to 0$ as $z \to \infty$, $f$ is said to have a \textit{superexponential} tail, and if $a(z) \to c \in (0,\infty)$, we say that $f$ has an exponential tail. To be more specific about the difference in tail behaviors, let us consider a slightly more general example than that in Example \ref{ex:subexp.tail}, for which $f(x) = L \bigl( ||x|| \bigr) \exp \bigl\{ -||x||^{\tau} /\tau \bigr\}$, $\tau > 0$, $x \in \mathbb{R}^d$. Clearly, the parameter $\tau$ governs the speed at which $f$ vanishes in the tail. Observe that $a(z)= z^{1-\tau} \to \infty$ as $z \to \infty$ if $0 < \tau <1$, so that in this case $f$ has a subexponential tail. If $\tau > 1$, then $a(z)$ decreases to $0$, in which case $f$ has a superexponential tail.
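The classification through $a(z) = 1/\psi'(z)$ can be sketched in code for the example just discussed, where $\psi(z) = z^\tau/\tau$ gives $a(z) = z^{1-\tau}$; the evaluation point and thresholds are illustrative proxies for the limit $z \to \infty$.

```python
# Tail classification of f(x) ~ exp{-||x||^tau / tau} via a(z) = 1/psi'(z):
# psi(z) = z^tau / tau gives psi'(z) = z^{tau-1}, hence a(z) = z^{1-tau}.
def a(z, tau):
    return z ** (1.0 - tau)

def tail_class(tau, z=1e8):          # large z as a proxy for z -> infinity
    val = a(z, tau)
    if val > 1e3:
        return "subexponential"      # a(z) -> infinity  (0 < tau < 1)
    if val < 1e-3:
        return "superexponential"    # a(z) -> 0         (tau > 1)
    return "exponential"             # a(z) -> 1         (tau = 1)

print(tail_class(0.5), tail_class(1.0), tail_class(2.0))
```

The three sample values of $\tau$ recover the three tail classes named above.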
An important assumption throughout most of this study is that there exists $c \in (0,\infty]$ such that
\begin{equation} \label{e:subexp.tail}
a(z) \to c \ \ \text{as } z \to \infty\,.
\end{equation}
In view of the classification described above, \eqref{e:subexp.tail} rules out densities with superexponential tail. As discovered in \cite{owada:adler:2015} and \cite{adler:bobrowski:weinberger:2014}, random points drawn from a superexponential law hardly form isolated geometric graphs outside a core, whereas random points drawn from a subexponential law do constitute a layer of isolated geometric graphs outside a core. Accordingly, it is highly likely that the nature of the FCLT differs according to whether the underlying density has a superexponential or a subexponential tail. The present work focuses on the (sub)exponential tail case; a more detailed study of the superexponential tail case remains for future work.
To give a more formal setup, let $k \geq 2$ be an integer, which remains fixed for the remainder of this section; once again, however, note that many of the functions and objects below implicitly depend on $k$. Define the sequence $R_n \to \infty$ so that
\begin{equation} \label{e:normalizing.light}
n^k a(R_n) R_n^{d-1} f(R_ne_1)^k \to \infty\,, \ \ \ n \to \infty\,.
\end{equation}
If one instead defines a sequence $R_{k,n}^{(p)} \to \infty$ for which
$$
n^k a\bigl(R_{k,n}^{(p)}\bigr) \bigl(R_{k,n}^{(p)}\bigr)^{d-1} f\bigl(R_{k,n}^{(p)}e_1\bigr)^k \to 1\,, \ \ \ n \to \infty\,,
$$
the subgraph counting process built from $R_{k,n}^{(p)}$ is known to converge weakly to a Poisson limit; see \cite{owada:adler:2015}. Since $R_n$ in \eqref{e:normalizing.light} grows more slowly than $R_{k,n}^{(p)}$, i.e., $R_n / R_{k,n}^{(p)} \to 0$, we may expect an FCLT, rather than a Poisson limit, to govern the asymptotic behavior of the subgraph counting process.
As in the last section, we recall the notion of a \textit{weak core}. Let $R_n^{(w)} \to \infty$ be a sequence such that $nf(R_n^{(w)}e_1) \to 1$ as $n \to \infty$; we then call the ball $B \bigl( 0,R_n^{(w)} \bigr)$ a weak core. Once again, the scaling constants $\tau_n$ of the FCLT must change depending on whether $B(0, R_n)$ covers a weak core or not. More specifically, we define
\begin{equation} \label{e:tau.light}
\tau_n := \begin{cases} n^k a(R_n) R_n^{d-1} f(R_ne_1)^k & \text{if } nf(R_ne_1) \to 0\,, \\
a(R_n)R_n^{d-1} & \text{if } nf(R_ne_1) \to \xi \in (0,\infty)\,, \\
n^{2k-1} a(R_n) R_n^{d-1} f(R_ne_1)^{2k-1} & \text{if } nf(R_ne_1) \to \infty\,.
\end{cases}
\end{equation}
\subsection{Limiting Gaussian Processes and the FCLT} \label{s:limit.proc.light}
The objective of this subsection is to formulate the limiting Gaussian processes and the FCLT. Let
\begin{equation} \label{e:def.D.ell}
D_\ell = \frac{s_{d-1}}{\ell ! \, \bigl( (k-\ell)! \bigr)^2}\,, \ \ \ \ell=1,\dots,k,
\end{equation}
and let $H_\ell$ be a Gaussian $\mu_\ell$-noise, where the $\mu_\ell$, for $\ell = 2,\dots,k$, satisfy
\begin{align*}
\mu_\ell(d\rho\, d{\bf y}) &= D_\ell\, e^{-\ell \rho - c^{-1} \sum_{i=1}^{\ell-1} \langle e_1, y_i \rangle}\, \\
&\qquad \times {\bf 1} \bigl\{ \, \rho + c^{-1} \langle e_1,y_i \rangle \geq 0, \ i = 1,\dots, \ell-1 \, \bigr\}\, d\rho\, d{\bf y}, \ \ \rho \geq 0, \, {\bf y} \in (\mathbb{R}^d)^{\ell -1},
\end{align*}
and
$$
\mu_1(d\rho) = D_1\, e^{-\rho}\, d\rho, \ \ \rho \geq 0.
$$
Assume that $H_1, \dots, H_k$ are independent.
We now define a collection of Gaussian processes needed for the construction of the limits in the FCLT. For $\ell = 2,\dots, k-1$, we define
\begin{align*}
W_\ell (t) &:= \int_{[0,\infty)\times (\mathbb{R}^d)^{\ell -1}} \int_{(\mathbb{R}^d)^{k-\ell}} e^{-\sum_{i=1}^{k-\ell} \bigl( \rho + c^{-1} \langle e_1, z_i \rangle \bigr) } \\
&\qquad \times {\bf 1} \bigl\{ \, \rho + c^{-1} \langle e_1,z_i \rangle \geq 0, \ i = 1,\dots, k-\ell \, \bigr\}\, h_t(0,{\bf y}, {\bf z})\, d{\bf z}\, H_\ell (d\rho\, d{\bf y}),
\end{align*}
and, accordingly,
\begin{align*}
W_1 (t) &:= \int_0^\infty \int_{(\mathbb{R}^d)^{k-1}} e^{-\sum_{i=1}^{k-1} \bigl( \rho + c^{-1} \langle e_1, z_i \rangle \bigr) } \\
&\qquad \times {\bf 1} \bigl\{ \, \rho + c^{-1} \langle e_1,z_i \rangle \geq 0, \ i = 1,\dots, k-1 \, \bigr\}\, h_t(0, {\bf z})\, d{\bf z}\, H_1 (d\rho), \\
W_k (t) &:= \int_{[0,\infty)\times (\mathbb{R}^d)^{k -1}} h_t(0,{\bf y})\, H_k (d\rho\, d{\bf y}).
\end{align*}
As in Section \ref{s:limit.heavy}, by the decomposition $h_t = h_t^+ - h_t^-$, one can write the process ${\bf W}_\ell$ as the corresponding difference ${\bf W}_\ell = {\bf W}_\ell^+ - {\bf W}_\ell^-$ for $\ell=1,\dots,k$.
It is easy to compute the covariance function of ${\bf W}_\ell$. We have, for $\ell = 1,\dots,k$ and $t,s\geq0$,
\begin{align}
M_\ell(t,s) &:= \mathbb{E} \bigl\{ W_\ell(t) W_\ell(s) \bigr\} \label{e:cov.comp.light} \\
&=D_\ell \int_0^\infty \int_{(\mathbb{R}^d)^{2k-\ell-1}} \hspace{-20pt} e^{ -(2k-\ell)\rho - c^{-1} \sum_{i=1}^{2k-\ell-1} \langle e_1,y_i \rangle } \notag \\
&\qquad \times {\bf 1} \bigl\{ \rho + c^{-1} \langle e_1, y_i \rangle \geq 0\,, \ i=1,\dots,2k-\ell-1 \bigr\}\, h_{t,s}^{(\ell)}(0,{\bf y})\, d{\bf y}\, d\rho\,, \notag
\end{align}
where
\begin{equation} \label{e:def.h.ell}
h_{t,s}^{(\ell)} (0,y_1,\dots,y_{2k-\ell-1}) := h_t(0,y_1,\dots,y_{k-1})\, h_s(0,y_1,\dots,y_{\ell-1}, y_{k}, \dots, y_{2k-\ell-1})\,,
\end{equation}
and, in particular, we set
$$
h_s(0,y_1,\dots,y_{\ell-1}, y_{k}, \dots, y_{2k-\ell-1}) := \begin{cases}
h_s(0,y_{k}, \dots, y_{2k-2}) & \text{if } \ell = 1\,, \\
h_s(0,y_{1}, \dots, y_{k-1}) & \text{if } \ell = k\,.
\end{cases}
$$
It is important to note that if $a(z) \to \infty$ as $z\to\infty$, then $M_\ell$ coincides with $L_\ell$ given in \eqref{e:cov.comp} up to a multiplicative factor, i.e.,
$$
M_\ell (t,s) = \bigl( \alpha - d(2k-\ell)^{-1} \bigr) L_\ell(t,s), \ \ t,s \geq0.
$$
This in turn implies that
$$
{\bf W}_\ell \stackrel{d}{=} \bigl( \alpha - d(2k-\ell)^{-1} \bigr)^{1/2} {\bf V}_\ell,
$$
in which case there is nothing further to explore, because the properties of ${\bf V}_\ell$ have already been studied in Section \ref{s:limit.heavy}.
In contrast, if $a(z) \to c \in (0,\infty)$ as $z\to\infty$, then $M_\ell$ is not directly related to $L_\ell$ as above, and, consequently, the process ${\bf W}_\ell$ exhibits properties different from those of ${\bf V}_\ell$. For example, although one may anticipate, by analogy with the process ${\bf V}_1$, that ${\bf W}_1$ is a degenerate Gaussian process, this is no longer the case.
\begin{proposition}
Suppose that $a(z) \to c\in(0,\infty)$ as $z\to\infty$. \\
$(i)$ ${\bf W}_1$ is a non-degenerate Gaussian process. \\
$(ii)$ For $\ell = 1,\dots,k$, ${\bf W}_\ell$ is not self-similar.
\end{proposition}
\begin{proof}
If $a(z)\to c\in (0,\infty)$ as $z\to \infty$, then $M_1(t,s)$ cannot be factored into a function of $t$ times a function of $s$, and therefore ${\bf W}_1$ is non-degenerate.
As for $(ii)$, $M_\ell$ no longer coincides with $L_\ell$ and loses the scale invariance, so ${\bf W}_\ell$ is not self-similar.
\end{proof}
Similarly to Proposition \ref{p:limit.heavy1}, however, the process ${\bf W}_k (= {\bf W}_k^+ - {\bf W}_k^-)$ can be represented in law as the difference of two time-changed Brownian motions, regardless of whether $a(z) \to \infty$ or $a(z) \to c\in (0,\infty)$ as $z \to \infty$. Furthermore, the sample paths of ${\bf W}_\ell$ are H\"{o}lder continuous.
\begin{proposition}
Irrespective of the limit of $a$, the following two results hold. \\
$(i)$ The process ${\bf W}_k^+$ can be represented in law as
$$
\bigl( W_k^+(t), \, t\geq 0 \bigr) \stackrel{d}{=} \biggl( B \Bigl( \int_{[0,\infty) \times (\mathbb{R}^d)^{k-1}} \hspace{-10pt}h_t^+(0,{\bf y})\, \mu_k(d\rho\, d{\bf y}) \Bigr), \, t \geq 0 \biggr),
$$
where $B$ is the standard Brownian motion. The same statement holds for ${\bf W}_k^-$, with $h_t^+$ replaced by $h_t^-$. \\
\noindent $(ii)$ For $\ell = 1,\dots,k$ and every $T>0$, $\bigl( W_\ell(t), \, 0\leq t \leq T \bigr)$ has a modification, the sample paths of which are H\"{o}lder continuous of any order in $[0,1/2)$.
\end{proposition}
\begin{proof}
The proof of $(i)$ is very similar to that of Proposition \ref{p:limit.heavy1}, so we omit it.
The proof of $(ii)$ is analogous to that of Proposition \ref{p:self.similar.heavy} $(ii)$; we only have to show that for some $C>0$,
\begin{equation*}
\mathbb{E} \Bigl\{ \bigl(W_\ell(t) - W_\ell(s)\bigr)^2 \Bigr\} \leq C(t-s) \ \ \text{for all } 0 \leq s \leq t \leq T\,.
\end{equation*}
Because of the decomposition ${\bf W}_\ell = {\bf W}_\ell^+ - {\bf W}_\ell^-$, it suffices to prove the above for each of ${\bf W}_\ell^+$ and ${\bf W}_\ell^-$. We check only the case of ${\bf W}_\ell^+$. We see that
\begin{align*}
\mathbb{E} &\Bigl\{ \bigl(W_\ell^+(t) - W_\ell^+(s) \bigr)^2\Bigr\} \\
&= \int_{[0,\infty)\times (\mathbb{R}^d)^{\ell -1}} \biggl( \int_{(\mathbb{R}^d)^{k-\ell}} e^{-\sum_{i=1}^{k-\ell} \bigl( \rho + c^{-1} \langle e_1, z_i \rangle \bigr)} {\bf 1} \bigl\{ \, \rho + c^{-1} \langle e_1,z_i \rangle \geq 0, \ i = 1,\dots, k-\ell \, \bigr\} \\
&\qquad \qquad \qquad \qquad \times \bigl( h_t^+(0,{\bf y},{\bf z}) - h_s^+(0,{\bf y},{\bf z}) \bigr)\, d{\bf z} \biggr)^2 \mu_\ell (d\rho\, d{\bf y}) \\
&\leq D_\ell\, B_\ell^{-1}\, \mathbb{E} \Bigl\{ \bigl( V_\ell^+ (t) - V_\ell^+(s) \bigr)^2 \Bigr\}.
\end{align*}
The rest of the argument is exactly the same as in Proposition \ref{p:self.similar.heavy} $(ii)$.
\end{proof}
Now we can state the FCLT for the centered and scaled subgraph counting process
$$
X_n(t) = \tau_n^{-1/2} \bigl( G_n(t) - \mathbb{E} \{ G_n(t) \} \bigr)\,, \ \ \ t \geq 0\,,
$$
where the normalizing sequence $(R_n)$ satisfies \eqref{e:normalizing.light} and $(\tau_n)$ is defined in \eqref{e:tau.light}. Interestingly, if $f$ has a subexponential tail, i.e., $a(z) \to \infty$, then the limiting Gaussian processes in the theorem below coincide (up to multiplicative constants) with those in Theorem \ref{t:main.heavy}. When $f$ has an exponential tail, i.e., $a(z) \to c \in (0,\infty)$, the limiting Gaussian processes are essentially different from those in Theorem \ref{t:main.heavy}.
The proof of the theorem is presented in Section \ref{s:proof.light}. For the reader's convenience, we summarize in Tables 1 and 2 the properties of the limiting Gaussian processes in Theorems \ref{t:main.heavy} and \ref{t:main.light}. These tables indicate that the limiting Gaussian processes are somewhat special when $f$ has an exponential tail. For example, in this case the limits always lose self-similarity, regardless of the asymptotics of $nf(R_ne_1)$, whereas in the regularly varying or subexponential tail case, self-similarity is lost only when $nf(R_ne_1)$ converges to a positive and finite constant. Furthermore, when $nf(R_ne_1) \to \infty$, a non-degenerate limit appears only in the exponential tail case.
\begin{theorem} \label{t:main.light}
Assume that the density \eqref{e:density.light} satisfies \eqref{e:von.Mises}, \eqref{e:flat}, \eqref{e:poly.upper}, and \eqref{e:subexp.tail}. \\
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \bigl( W_k(t), \, t\geq 0 \bigr) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\noindent $(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \left( \sum_{\ell = 1}^k \xi^{2k-\ell} W_\ell(t), \, t\geq 0 \right) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\noindent $(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
$$
\bigl( X_n(t), \, t\geq 0 \bigr) \Rightarrow \bigl( W_1(t), \, t\geq 0 \bigr) \ \ \text{in } \mathcal{D}[0,\infty)\,.
$$
\end{theorem}
\begin{table}[htb]
\begin{tabular}{c|ccc} \hline \\
& $nf(R_ne_1) \to 0$ & $nf(R_ne_1) \to \xi$ & $nf(R_ne_1) \to \infty$ \\[5pt] \hline \hline
\rule[0pt]{0pt}{18pt}
Regularly varying tail & $d(k-1)/2$ & Non-SS & $d(k-1)$ \\[10pt]
Subexponential tail & $d(k-1)/2$ & Non-SS & $d(k-1)$ \\[10pt]
Exponential tail & Non-SS & Non-SS & Non-SS \\[5pt] \hline \hline
\end{tabular}
\caption{Self-similarity exponents of the limiting Gaussian
processes. Non-SS means that the process is not self-similar. A
zero limit of $nf(R_ne_1)$ is equivalent to the case in which the
ball $B(0,R_n)$ contains a weak core, and $nf(R_ne_1) \to \infty$
if and only if $B(0,R_n)$ is contained in a weak core. If
$nf(R_ne_1) \to \xi \in (0,\infty)$, then $B(0,R_n)$ agrees with a
weak core (up to multiplicative constants). }
\end{table}
\begin{table}[htb]
\begin{tabular}{c|p{130pt}cp{100pt}} \hline \\
& $\hspace{30pt} nf(R_ne_1) \to 0$ & $nf(R_ne_1) \to \xi$ & $\hspace{15pt} nf(R_ne_1) \to \infty$ \\[10pt] \hline \hline
\rule[0pt]{0pt}{20pt}
Regularly varying tail & Difference of time-changed Brownian motions & New & Degenerate Gaussian process \\[15pt]
Subexponential tail & Difference of time-changed Brownian motions & New & Degenerate Gaussian process \\[15pt]
Exponential tail & Difference of time-changed Brownian motions & New & New \\[15pt] \hline \hline
\end{tabular}
\caption{Representation results for the limiting Gaussian
processes. ``New'' indicates that the limit constitutes a new class
of Gaussian processes.}
\end{table}
\section{Graph Connectivity in the Weak Core} \label{s:connect.core}
We start this section by recalling the \textit{weak core}, which was defined as a centered ball $B \bigl( 0,R_n^{(w)} \bigr)$ such that $nf \bigl( R_n^{(w)}e_1 \bigr) \to 1$ as $n \to \infty$. In addition, we need the related notion of the \textit{core}, given in Definition \ref{def.core}. Recall that, given a Poisson point process $\mathcal{P}_n$ on $\mathbb{R}^d$, a core is a centered ball $B(0, R_n)$ such that
\begin{equation} \label{e:def.core}
B(0, R_n) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, R_n)} B(X, 1)\,.
\end{equation}
In the following, we seek the largest possible sequence $R_n \to \infty$ such that the event \eqref{e:def.core} occurs asymptotically with probability $1$, and subsequently it is shown that the largest possible core and a weak core are ``close'' in size. However, the degree of this closeness depends on the tail of the underlying density $f$, and therefore we divide the argument into two cases.
We first assume that the density $f$ on $\mathbb{R}^d$ is spherically symmetric and has a regularly varying tail, as in \eqref{e:RV.tail}. For increased clarity, we impose the extra condition that $p(r) := f(re_1)$ is eventually non-increasing in $r$, that is, $p$ is non-increasing on $(r_0,\infty)$ for some large $r_0>0$. In this case, the radius of a weak core is clearly given by
\begin{equation} \label{e:weak.core.heavy}
R_n^{(w)} = \left(\frac{1}{p}\right)^{\leftarrow} (n) := \inf \biggl\{ s: \left( \frac{1}{p} \right)(s) \geq n \biggr\}.
\end{equation}
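The generalized inverse in \eqref{e:weak.core.heavy} can be computed numerically on a grid. The sketch below assumes the illustrative tail $p(r) = r^{-\alpha}$, for which $1/p(s) = s^\alpha$ and the weak-core radius has the closed form $R_n^{(w)} = n^{1/\alpha}$; the value of $\alpha$ and the grid are arbitrary choices.

```python
import numpy as np

# Generalized inverse (1/p)^{<-}(n) = inf{s : 1/p(s) >= n} on a grid,
# for the illustrative regularly varying tail p(r) = r^{-alpha}.
alpha = 3.0
grid = np.linspace(1e-6, 100.0, 1_000_000)

def weak_core_radius(n):
    over = grid**alpha >= n          # 1/p(s) = s^alpha
    return float(grid[over][0])      # smallest grid point with 1/p(s) >= n

n = 1000.0
print(weak_core_radius(n), n ** (1 / alpha))   # closed form: n^{1/alpha}
```

For $n = 1000$ and $\alpha = 3$, both the grid search and the closed form give $R_n^{(w)} = 10$.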
\begin{proposition} \label{p:connect.heavy}
Suppose that $p \in RV_{-\alpha}$ for some $\alpha > d$ and that $p$ is eventually non-increasing. Define
\begin{equation} \label{e:max.core.heavy}
R_n^{(c)} = \left(\frac{1}{p}\right)^{\leftarrow} \left( \frac{\delta_1 n}{\log n - \delta_2 \log \log n} \right)
\end{equation}
with $\delta_1 \in \bigl( 0, \alpha/(2^d d^{d/2+1}) \bigr)$ and $\delta_2 \in (0,1)$. If $R_n \leq R_n^{(c)}$, then
\begin{equation} \label{e:asym.con.heavy}
\mathbb{P} \left( \, B(0, R_n) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, R_n)} B(X, 1)\, \right) \to 1\,, \ \ \ n \to \infty\,.
\end{equation}
Furthermore, the sequences $(R_n^{(c)})$ in \eqref{e:max.core.heavy} and $(R_n^{(w)})$ in \eqref{e:weak.core.heavy} are both regularly varying sequences with exponent $1/\alpha$, and
\begin{equation} \label{e:comp.Rc.R0.heavy}
\frac{R_n^{(c)}}{R_n^{(w)}} - \left( \frac{\delta_1}{\log n - \delta_2 \log \log n} \right)^{1/\alpha} \to 0\,, \ \ \ n \to \infty\,.
\end{equation}
\end{proposition}
One can obtain a parallel result when the underlying density has an exponentially decaying tail, as in \eqref{e:density.light}. We simplify the situation a bit by assuming
\begin{equation} \label{e:density.light.sim}
f(x) = C \exp \bigl\{ -\psi \bigl( ||x|| \bigr) \bigr\}, \ \ \ x \in \mathbb{R}^d\,,
\end{equation}
where $C$ is a normalizing constant and $\psi: \mathbb{R}_+ \to \mathbb{R}$ is a $C^2$ function satisfying $\psi \in RV_v$ (at infinity) for some $v>0$ and $\psi^{\prime} >0$. It should be noted that we permit the case $v>1$; that is, unlike in the previous section, we do not rule out densities with superexponential tail. Evidently, the radius of a weak core is given by
\begin{equation} \label{e:weak.core.light}
R_n^{(w)} = \psi^{\leftarrow}(\log n + \log C)\,.
\end{equation}
\begin{proposition} \label{p:connect.light}
Assume that the probability density $f$ on $\mathbb{R}^d$ is given by \eqref{e:density.light.sim}. Define
\begin{equation} \label{e:max.core.light}
R_n^{(c)} = \psi^{\leftarrow } (\log n - \log \log \log n - \delta_1 - \delta_2)\,,
\end{equation}
where $\delta_1 = d \log 2 - \log v + (1+d/2) \log d - \log C$ and $\delta_2>0$. If $R_n \leq R_n^{(c)}$, then
\begin{equation} \label{e:asym.con.light}
\mathbb{P} \left( \, B(0, R_n) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, R_n)} B(X, 1)\, \right) \to 1\,, \ \ \ n \to \infty\,.
\end{equation}
Furthermore, the sequences $(R_n^{(c)})$ in \eqref{e:max.core.light} and $(R_n^{(w)})$ in \eqref{e:weak.core.light} are close in size in the sense that
\begin{equation} \label{e:comp.Rc.R0.light}
\frac{R_n^{(c)}}{R_n^{(w)}} - \left( 1-\frac{\log \log \log n + \delta_1 + \delta_2 + \log C}{\log Cn} \right)^{1/v} \to 0\,, \ \ \ n \to \infty\,.
\end{equation}
\end{proposition}
The following result is needed in preparation for the proofs of these propositions. The proof may be obtained by slightly modifying that of Theorem 2.1 in \cite{adler:bobrowski:weinberger:2014}, but to keep this paper self-contained, we repeat the argument.
\begin{lemma} \label{l:Qlemma}
Given a spherically symmetric density $f$ on $\mathbb{R}^d$, suppose that $p(r) = f(re_1)$ is eventually non-increasing. Let $g = 1/(2 d^{1/2})$. Suppose, in addition, that there exists a sequence $R_n \nearrow \infty$ such that $d \log R_n - g^d n f(R_ne_1) \to -\infty$ as $n \to \infty$. Then,
\begin{equation} \label{e:only.lower}
\mathbb{P} \left( \, B(0, R_n) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, R_n)} B(X, 1)\, \right) \to 1\,, \ \ \ n \to \infty\,.
\end{equation}
\end{lemma}
\begin{proof}
For $\rho>0$, let $\mathcal{Q}(\rho)$ be the collection of cubes of a grid of side length $g$ that are contained in $B(0, \rho)$. Then,
$$
\bigl\{ \, Q \cap \mathcal{P}_n \neq \emptyset \ \, \text{for all } Q \in \mathcal{Q}(\rho)\, \bigr\} \subset \Bigl\{ \, B(0, \rho) \subset \bigcup_{X \in \mathcal{P}_n \cap B(0, \rho)} B(X, 1)\, \Bigr\}
$$
for all $\rho>0$ and $n \geq 1$ (note that a cube of side length $g$ has diameter $g\sqrt{d} = 1/2 < 1$). It now suffices to show that
$$
\mathbb{P} \bigl( \, Q \cap \mathcal{P}_n = \emptyset \ \, \text{for some } Q \in \mathcal{Q}(R_n)\, \bigr) \to 0\,, \ \ \ n \to \infty\,.
$$
This probability is bounded above by
$$
\sum_{Q \in \mathcal{Q}(R_n)} \mathbb{P}(Q \cap \mathcal{P}_n = \emptyset) = \sum_{Q \in \mathcal{Q}(R_n)} \exp \Bigl\{ -n \int_Q f(x)\, dx \Bigr\}
$$
$$
\leq \sum_{Q \in \mathcal{Q}(R_n)} \exp \bigl\{ -n g^d f(R_n e_1) \bigr\} \leq g^{-d} R_n^d \exp \bigl\{ -g^d n f(R_ne_1) \bigr\}\,.
$$
For the first inequality, we used the fact that each $Q$ is contained in $B(0,R_n)$ and that $p$ is eventually non-increasing. Clearly, the rightmost term vanishes as $n \to \infty$.
\end{proof}
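Two numerical ingredients of this lemma are easy to check: the grid constant $g = 1/(2\sqrt{d})$ makes the cube diameter exactly $1/2$, and the final union bound $g^{-d} R_n^d \exp\{-g^d n f(R_n e_1)\}$ is tiny once $g^d n f(R_n e_1)$ dwarfs $d \log R_n$. The values of $d$, $n$, $R$, and $f(Re_1)$ below are made-up illustrations.

```python
import math

# (a) Grid constant: a cube of side g = 1/(2 sqrt(d)) has diameter
#     g * sqrt(d) = 1/2 < 1, so points sharing a cube are within unit distance.
d = 3
g = 1.0 / (2.0 * math.sqrt(d))
diameter = g * math.sqrt(d)

# (b) The union bound g^{-d} R^d exp{-g^d n f(R e_1)} from the proof,
#     evaluated at an illustrative scale where g^d * n * f(R e_1) >> d log R.
n, R, f_at_R = 1e6, 10.0, 1e-3
bound = g**(-d) * R**d * math.exp(-(g**d) * n * f_at_R)
print(diameter, bound)
```

Here $g^d n f(Re_1) \approx 24$ while $d \log R \approx 6.9$, so the bound is far below $1$, consistent with the hypothesis of the lemma.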
\begin{proof}(\textit{proof of Proposition \ref{p:connect.heavy}})
Observe that the assumption $p \in RV_{-\alpha}$ implies
$(1/p)^{\leftarrow} \in RV_{1/\alpha}$; see, e.g., Proposition 2.6 (v)
in \cite{resnick:2007}. Thus, \eqref{e:comp.Rc.R0.heavy} readily
follows from the uniform convergence of regularly varying
functions; see Proposition 2.4 in \cite{resnick:2007}. By Lemma
\ref{l:Qlemma}, it suffices to verify that $d \log R_n^{(c)} - g^d
nf \bigl( R_n^{(c)}e_1 \bigr) \to -\infty$ as $n \to \infty$.
Since $0 < \delta_2 < 1$, we have
\begin{align*}
d \log R_n^{(c)} &\leq d \left[ \log \left(\frac{1}{p}\right)^{\leftarrow} \left( \frac{\delta_1 n}{\log n - \delta_2 \log \log n} \right) \right] \left[ \log \left( \frac{\delta_1 n}{\log n - \delta_2 \log \log n} \right) \right]^{-1} \\
&\times (\log \delta_1 + \log n - \delta_2 \log \log n)\,,
\end{align*}
and $g^d nf \bigl( R_n^{(c)}e_1 \bigr) = g^d \delta_1^{-1} (\log n
- \delta_2 \log \log n)$. Using Proposition 2.6 (i) in
\cite{resnick:2007},
\begin{align*}
&d \left[ \log \left(\frac{1}{p}\right)^{\leftarrow} \left( \frac{\delta_1 n}{\log n - \delta_2 \log \log n} \right) \right] \left[ \log \left( \frac{\delta_1 n}{\log n - \delta_2 \log \log n} \right) \right]^{-1} - g^d \delta_1^{-1} \\
&\to d \alpha^{-1} - g^d \delta_1^{-1} < 0\,, \ \ \ n \to \infty\,.
\end{align*}
At the last inequality, we applied the constraint on $\delta_1$.
Therefore, we have $d \log R_n^{(c)} - g^d nf \bigl( R_n^{(c)}e_1
\bigr) \to -\infty$, $n \to \infty$, as required.
\end{proof}
\begin{proof}(\textit{proof of Proposition \ref{p:connect.light}})
Since $\psi^{\leftarrow} \in RV_{1/v}$, it is easy to show
\eqref{e:comp.Rc.R0.light}, and therefore, we prove only that $d
\log R_n^{(c)} - g^d nf \bigl( R_n^{(c)}e_1 \bigr) \to -\infty$ as
$n \to \infty$. We see that
$$
d \log R_n^{(c)} \leq d \log \psi^{\leftarrow} (\log n) \sim d v^{-1} \log \log n\,, \ \ \ n \to \infty\,,
$$
and that $g^d n f \bigl( R_n^{(c)}e_1 \bigr)= g^d C e^{\delta_1 +
\delta_2} \log \log n$. By virtue of the constraints on $\delta_1$
and $\delta_2$, we have $dv^{-1} - g^d C e^{\delta_1 + \delta_2} <
0$; thus, the claim is proved.
\end{proof}
\begin{remark}
The proof of Lemma \ref{l:Qlemma} merely estimated the probability
in \eqref{e:only.lower} from below. It therefore seems
possible that, in the propositions above, \eqref{e:asym.con.heavy}
and \eqref{e:asym.con.light} may hold for sequences $R_n \nearrow
\infty$ growing more quickly than $R_n^{(c)}$ but more slowly than
$R_n^{(w)}$, i.e., $R_n^{(c)} \leq R_n \leq R_n^{(w)}$; it is
unknown, however, to what extent we can make $R_n$ closer to
$R_n^{(w)}$.
\end{remark}
\section{Proof of Main Results} \label{s:proof.main}
This section presents the proof of the main results of this paper.
The proof is, however, rather long, and therefore, it is divided
into several parts. All the supplemental ingredients necessary are
collected in the Appendix, most of which are cited from
\cite{penrose:2003}.
Let $\text{Ann}(K,L)$ be the annulus with inner radius $K$ and outer radius
$L$. For $x_1,\dots,x_k \in \mathbb{R}^d$, define $\text{Max}(x_1,\dots,
x_k)$ as the function selecting an element with the largest
distance from the origin. That is, $\text{Max}(x_1,\dots, x_k) =
x_i$ if $||x_i|| = \max_{1 \leq j \leq k}||x_j||$. If multiple
$x_j$'s achieve the maximum, we choose the element with the
smallest subscript.
In the following, ${\mathcal{Y}}, {\mathcal{Y}}^{\prime}, {\mathcal{Y}}_i$, etc. always represent a
finite collection of $d$-dimensional real vectors. We use the
following shorthand notation: for ${\bf x} = (x_1,\dots,x_m)
\in (\mathbb{R}^d)^m$, $x \in \mathbb{R}^d$, and ${\bf y} = (y_1,\dots,y_{m-1})
\in (\mathbb{R}^d)^{m-1}$,
\begin{align*}
f({\bf x}) &:= f(x_1)\cdots f(x_m)\,, \\
f(x+{\bf y}) &:= f(x + y_1)\cdots f(x + y_{m-1})\,, \\
h(0,{\bf y}) &:= h(0,y_1,\dots,y_{m-1}) \ \ \text{etc}.
\end{align*}
Regarding the indicator $h_t :(\mathbb{R}^d)^k \to \{ 0,1 \}$ given in \eqref{e:geo.graph.dyna}, the following notations are used to save
space.
\begin{align}
h_{t,s} ({\bf x}) &:= h_t({\bf x}) - h_s ({\bf x})\,, \ \ 0 \leq s \leq t\,, \ {\bf x} \in (\mathbb{R}^d)^k, \label{e:def.hts} \\
h_{t,s}^{\pm} ({\bf x}) &:= h_t^{\pm}({\bf x}) - h_s^{\pm} ({\bf x})\,, \ \ 0 \leq s \leq t\,, \ {\bf x} \in (\mathbb{R}^d)^k, \notag \\
h_{n,t,s} ({\bf x}) &:= h_{t,s} ({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n \bigr\}\,, \ \ 0 \leq s \leq t\,, \ {\bf x} \in (\mathbb{R}^d)^k, \label{e:def.hnts}
\end{align}
and for $\ell \in \{ 0,\dots,k\}$,
\begin{equation} \label{e:def.htsl}
h_{t,s}^{(\ell)}({\bf x}) := h_t(x_1,\dots,x_k)\, h_s(x_1,\dots,x_\ell, x_{k+1}, \dots, x_{2k-\ell})\,, \ \ t,s \geq 0\,, \ {\bf x} \in (\mathbb{R}^d)^{2k-\ell}.
\end{equation}
\end{equation}
In particular, we set
$$
h_s(x_1,\dots,x_\ell, x_{k+1}, \dots, x_{2k-\ell}) := \begin{cases}
h_s(x_{k+1}, \dots, x_{2k}) & \text{if } \ell = 0\,, \\
h_s(x_{1}, \dots, x_{k}) & \text{if } \ell = k\,.
\end{cases}
$$
In Section \ref{s:proof.heavy}, we use, for $1 \leq K < L \leq \infty$, $n \in {\mathbb N}_+$ and $t \geq 0$,
\begin{align*}
h_{n,t,K,L}({\bf x}) &:= h_t ({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n\,, \ \text{Max}({\bf x}) \in \text{Ann}(KR_n,LR_n) \bigr\}\,, \\
h_{n,t,K,L}^{\pm}({\bf x}) &:= h_t ^{\pm}({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n\,, \ \text{Max}({\bf x}) \in \text{Ann}(KR_n,LR_n) \bigr\}\,.
\end{align*}
The same notations are retained in Section \ref{s:proof.light} to
represent, for $0 \leq K < L \leq \infty$, $n \in {\mathbb N}_+$ and $t \geq
0$,
\begin{align*}
h_{n,t,K,L}({\bf x}) &:= h_t ({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n\,, \ a(R_n)^{-1}\bigl(\text{Max}({\bf x}) - R_n \bigr) \in [K,L) \bigr\}\,, \\
h_{n,t,K,L}^{\pm}({\bf x}) &:= h_t^{\pm} ({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n\,, \ a(R_n)^{-1}\bigl(\text{Max}({\bf x}) - R_n \bigr) \in [K,L) \bigr\}\,.
\end{align*}
Finally, $C^*$ denotes a generic positive constant, which may
change between lines and does not depend on $n$.
In the following, we divide the argument into two subsections.
Section \ref{s:proof.heavy} treats the case in which the
underlying density has a regularly varying tail; our goal there is to
prove Theorem \ref{t:main.heavy}. Subsequently, Section
\ref{s:proof.light} provides the proof of Theorem
\ref{t:main.light}, where the density is assumed to have an
exponentially decaying tail. Before turning to the specific
subsections, however, we present some preliminary results that are
used in the tightness proofs of both subsections.
\begin{lemma} \label{l:used.for.tightness}
Let $h_t: (\mathbb{R}^d)^k \to \{ 0,1\} $ be an indicator given in \eqref{e:geo.graph.dyna}. Fix $T > 0$. Then, we have for $\ell \in \{ 1,\dots,k\}$,
\begin{align}
\int_{(\mathbb{R}^d)^{\ell-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1 h_{t,s}^{+}(0,{\bf y}, {\bf z}_1)\, h_{s,r}^{+}(0,{\bf y}, {\bf z}_2)
&\leq C^* (t - s) (s - r)\,, \label{e:triple.int} \\
\int_{(\mathbb{R}^d)^{\ell-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1 h_{t,s}^{-}(0,{\bf y}, {\bf z}_1)\, h_{s,r}^{-}(0,{\bf y}, {\bf z}_2)
&\leq C^* (t - s) (s - r) \notag
\end{align}
for all $0 \leq r \leq s \leq t \leq T$.
\end{lemma}
\end{lemma}
\begin{proof}
We only prove the first inequality. If $\ell = 1$ or $\ell = k$, the claim is trivial, and therefore,
we can take $2 \leq \ell \leq k-1$. It follows from
\eqref{e:close.enough.decomp.dyna} that the integral in \eqref{e:triple.int}
is not altered if the integral domain is restricted to $\bigl(
B(0,kT) \bigr)^{\ell-1} \times \bigl( B(0,kT) \bigr)^{k-\ell}
\times \bigl( B(0,kT) \bigr)^{k-\ell}$. With $\lambda$ being the
Lebesgue measure on $(\mathbb{R}^d)^{k-\ell}$, we see that for every
${\bf y} \in (\mathbb{R}^d)^{\ell -1}$,
\begin{align}
\int_{ \bigl( B(0,kT) \bigr)^{k-\ell}} h_{t,s}^+(0,{\bf y}, {\bf z}) d{\bf z} &= \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: h_t^+(0,{\bf y},{\bf z}) = 1\,, \ h_s^+(0,{\bf y},{\bf z}) = 0 \bigr\} \label{e:seq.volume} \\
&\leq \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||z_i - z_j|| \leq t \ \text{for some } i \neq j \bigr\} \notag \\
&\quad + \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||z_i - y_j|| \leq t \ \text{for some } i, j \bigr\} \notag \\
&\quad + \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||z_i|| \leq t \ \text{for some } i \bigr\} \notag \\
&\quad + \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||y_i - y_j|| \leq t \ \text{for some } i \neq j \bigr\} \notag \\
&\quad + \lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||y_i|| \leq t \ \text{for some } i \bigr\}. \notag
\end{align}
Observe that for $i \neq j$,
$$
\lambda \bigl\{ {\bf z} \in \bigl( B(0,kT) \bigr)^{k-\ell}: s < ||z_i - z_j|| \leq t \bigr\} \leq (kT)^{d(k-\ell-1)} (\omega_d)^{k-\ell} (t^d-s^d)\,,
$$
where $\omega_d$ is the volume of the $d$-dimensional unit ball.
Since the second and the third terms on the right-hand side of
\eqref{e:seq.volume} admit the same upper bound, we ultimately
obtain
\begin{align*}
\int_{ \bigl( B(0,kT) \bigr)^{k-\ell}} &h_{t,s}^{+}(0,{\bf y}, {\bf z}) d{\bf z} \\
&\leq C^* \biggl( t^d-s^d + \sum_{i,j=1, \ i \neq j}^{\ell - 1} {\bf 1} \bigl\{ s < ||y_i - y_j|| \leq t \bigr\} + \sum_{i=1}^{\ell-1} {\bf 1} \bigl\{ s < ||y_i|| \leq t \bigr\}\biggr).
\end{align*}
Therefore, the integral in \eqref{e:triple.int} is bounded above by
\begin{align*}
C^* \int_{\bigl( B(0,kT) \bigr)^{\ell-1}} &\biggl( t^d-s^d + \sum_{i,j=1, \ i \neq j}^{\ell - 1} {\bf 1} \bigl\{ s < ||y_i - y_j|| \leq t \bigr\} + \sum_{i=1}^{\ell-1} {\bf 1} \bigl\{ s < ||y_i|| \leq t \bigr\}\biggr) \\
&\times \biggl( s^d-r^d + \sum_{i,j=1, \ i \neq j}^{\ell - 1} {\bf 1} \bigl\{ r < ||y_i - y_j|| \leq s \bigr\} + \sum_{i=1}^{\ell-1} {\bf 1} \bigl\{ r < ||y_i|| \leq s \bigr\}\biggr) d{\bf y}\,.
\end{align*}
An elementary calculation shows that for all $i,j, i^{\prime},
j^{\prime} \in \{ 1,\dots,\ell-1 \}$ with $i>j$ and
$i^{\prime}>j^{\prime}$,
\begin{align*}
&\int_{\bigl( B(0,kT) \bigr)^{\ell-1}} {\bf 1} \bigl\{ s < ||y_i - y_j|| \leq t \bigr\}\, {\bf 1} \bigl\{ r < ||y_{i^{\prime}} - y_{j^{\prime}}|| \leq s \bigr\} d{\bf y} \\
&\quad \leq C^* (t^d-s^d)(s^d-r^d) \leq C^* (t-s)(s-r)\,.
\end{align*}
In particular, if $i=i^{\prime}$ and $j=j^{\prime}$, the integral
is identically zero. Applying the same manipulation to the
integrals of the other cross-terms, we conclude the claim of the
lemma.
\end{proof}
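For completeness, the passage from bounds of the form $C^*(t^d - s^d)(s^d - r^d)$ to $C^*(t-s)(s-r)$ in the proof above rests on the elementary mean-value estimate, valid for $0 \leq s \leq t \leq T$:
$$
t^d - s^d = d\,\zeta^{d-1}(t - s) \ \ \text{for some } \zeta \in (s,t)\,, \qquad \text{hence} \quad t^d - s^d \leq d\, T^{d-1} (t - s)\,,
$$
so that the factor $d\,T^{d-1}$ may be absorbed into the generic constant $C^*$.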
\subsection{Regularly Varying Tail Case} \label{s:proof.heavy}
Under the setup of Theorem \rightef{t:main.heavy}, we first define the
subgraph counting process with restricted domain. For $1 \leq K <
L \leq \infty$, $n \in {\mathbb N}_+$, and $t \geq 0$, let
\begin{align*}
G_{n,K,L}(t) &= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_t ({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n\,, \ \text{Max}({\mathcal{Y}}) \in \text{Ann}(KR_n,LR_n) \bigr\} \\
&:= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_{n,t,K,L} ({\mathcal{Y}})\,,
\end{align*}
and
\begin{align*}
G_{n,K,L}^{\pm}(t) &= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_t^{\pm} ({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n\,, \ \text{Max}({\mathcal{Y}}) \in \text{Ann}(KR_n,LR_n) \bigr\} \\
&:= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_{n,t,K,L}^{\pm} ({\mathcal{Y}})\,,
\end{align*}
where $(R_n)$ satisfies \eqref{e:normalizing.heavy}. For the
special case $K=1$ and $L=\infty$, we simply denote $G_n(t) =
G_{n,1,\infty}(t)$ and $G_n^{\pm}(t) = G_{n,1,\infty}^{\pm}(t)$.
The subgraph counting processes, centered and scaled, for which we
prove the FCLT, are given by
\begin{align}
X_n(t) &= \tau_n^{-1/2} \Bigl( G_n(t) - \mathbb{E}\bigl\{ G_n(t) \bigr\} \Bigr)\,, \notag \\
X_n^{\pm}(t) &= \tau_n^{-1/2} \Bigl( G_n^{\pm}(t) - \mathbb{E}\bigl\{ G_n^{\pm}(t) \bigr\} \Bigr)\,, \label{e:def.Xnpm.heavy}
\end{align}
where $(\tau_n)$ is determined by \eqref{e:tau.heavy} according to
which regime is considered. The first proposition below computes
the covariances of $\bigl( G_{n,K,L}(t) \bigr)$.
\begin{proposition} \label{p:cova.heavy}
Assume the conditions of Theorem \ref{t:main.heavy}. Let $1 \leq K < L \leq \infty$. \\
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to (K^{d-\alpha k} - L^{d - \alpha k}) L_k(t,s)\,, \ \ \ n\to \infty\,.
$$
$(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to \sum_{\ell=1}^k (K^{d-\alpha (2k-\ell)} - L^{d - \alpha (2k-\ell)}) \xi^{2k-\ell}L_\ell(t,s)\,, \ \ \ n\to \infty\,.
$$
$(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to (K^{d-\alpha (2k-1)} - L^{d - \alpha (2k-1)}) L_1(t,s)\,, \ \ \ n\to \infty\,.
$$
\end{proposition}
\end{proposition}
\begin{proof}
We start by writing
\begin{align*}
\mathbb{E} \bigl\{ &G_{n,K,L}(t)\, G_{n,K,L}(s) \bigr\} \\
&= \sum_{\ell=0}^k \mathbb{E} \Bigl\{ \, \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} h_{n,t, K, L} ({\mathcal{Y}}_1)\, h_{n,s, K, L} ({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\} \\
&:= \sum_{\ell=0}^k \mathbb{E} \{I_\ell\}\,.
\end{align*}
For $\ell=0$, applying Palm theory (see the Appendix) twice,
\begin{align*}
\mathbb{E} \{I_0\} &= \frac{n^{2k}}{(k!)^2}\, \mathbb{E} \bigl\{ h_{n,t, K, L} (X_1,\dots,X_k)\, h_{n,s, K, L} (X_{k+1},\dots,X_{2k})\bigr\} \\
&= \mathbb{E} \bigl\{G_{n,K,L}(t) \bigr\} \mathbb{E} \bigl\{ G_{n,K,L}(s)\bigr\}\,.
\end{align*}
Therefore, the multiple applications of Palm theory yield
\begin{align*}
\text{Cov} &\bigl( G_{n,K,L}(t)\,, G_{n,K,L}(s) \bigr) = \sum_{\ell=1}^k \mathbb{E}\{ I_\ell\} \\
&= \sum_{\ell=1}^k \frac{n^{2k-\ell}}{\ell ! \bigl( (k-\ell)! \bigr)^2}\, \mathbb{E} \Bigl\{h_{n,t, K, L} ({\mathcal{Y}}_1)\, h_{n,s, K, L} ({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\}.
\end{align*}
Define for $\ell \in \{1,\dots,k \}$,
\begin{multline*}
C_n^{(\ell)} (K,L) := \bigl\{ {\bf x} \in (\mathbb{R}^d)^{2k-\ell}: \text{Max}(x_1,\dots, x_k) \in \text{Ann}(KR_n,LR_n)\,, \\
\text{Max}(x_1,\dots, x_\ell, x_{k+1}, \dots, x_{2k-\ell}) \in \text{Ann}(KR_n,LR_n)\bigr\}\,.
\end{multline*}
By the change of variables ${\bf x} \rightarrow (x,x+{\bf y})$ with ${\bf x}
\in (\mathbb{R}^d)^{2k-\ell}$, $x \in \mathbb{R}^d$, ${\bf y} \in
(\mathbb{R}^d)^{2k-\ell-1}$, together with invariance
\eqref{e:location.inv}, while recalling notation
\eqref{e:def.htsl},
\begin{align*}
\mathbb{E} \Bigl\{ &h_{n,t,K,L}({\mathcal{Y}}_1)\, h_{n,s,K,L}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\} \\
&= \int_{(\mathbb{R}^d)^{2k-\ell}} f({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n \bigr\}\, h_{t,s}^{(\ell)} ({\bf x})\,{\bf 1} \bigl\{ {\bf x} \in C_n^{(\ell)}(K,L) \bigr\} d{\bf x} \\
&= \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{2k-\ell-1}} f(x)\, f(x+{\bf y})\, {\bf 1} \bigl\{ m(x, x + {\bf y}) \geq R_n \bigr\}\, h_{t,s}^{(\ell)} (0, {\bf y})\, \\
&\quad \times {\bf 1} \bigl\{ (x, x+{\bf y}) \in C_n^{(\ell)}(K,L) \bigr\} d{\bf y} dx\,.
\end{align*}
The polar coordinate transform $x \rightarrow (r,\theta)$ and the
additional change of variable $\rho = r/R_n$ yield
\begin{align}
\mathbb{E} \Bigl\{ &h_{n,t,K,L}({\mathcal{Y}}_1)\, h_{n,s,K,L}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\} \label{e:after.polar} \\
&= R_n^d f(R_ne_1)^{2k-\ell} \int_{S_{d-1}} \hspace{-7pt}J(\theta)d\theta \int_1^{\infty} d\rho \int_{(\mathbb{R}^d)^{2k-\ell-1}} \hspace{-5pt} d{\bf y}\, \rho^{d-1} \frac{f(R_n\rho e_1)}{f(R_ne_1)} \notag \\
&\quad \times \prod_{i=1}^{2k-\ell-1} \frac{f \bigl( ||R_n \rho \theta + y_i|| e_1 \bigr)}{f(R_ne_1)}\, {\bf 1} \bigl\{ ||\rho \theta + y_i / R_n|| \geq 1 \bigr\}\, h_{t,s}^{(\ell)}(0,{\bf y}) \notag \\
&\quad \times {\bf 1} \bigl\{ (R_n \rho \theta, R_n \rho \theta + {\bf y}) \in C_n^{(\ell)}(K,L) \bigr\} \,, \notag
\end{align}
where $S_{d-1}$ denotes the $(d-1)$-dimensional unit sphere in
$\mathbb{R}^d$ and $J(\theta)$ is the usual Jacobian
$$
J(\theta) = \sin^{d-2} (\theta_1) \sin^{d-3} (\theta_2) \cdots \sin (\theta_{d-2})\,.
$$
Note that by the regular variation of $f$
(with exponent $-\alpha$), for every $\rho > 1$, $\theta \in
S_{d-1}$, and $y_i$'s,
\begin{equation} \label{e:RV.result}
\frac{f(R_n\rho e_1)}{f(R_ne_1)} \to \rho^{-\alpha}\,, \quad \ \ \prod_{i=1}^{2k-\ell-1} \frac{f \bigl( ||R_n \rho \theta + y_i|| e_1 \bigr)}{f(R_ne_1)} \to \rho^{-\alpha(2k-\ell-1)}\,, \ \ \ n \to \infty
\end{equation}
and, furthermore,
\begin{equation} \label{e:conv.indicator}
{\bf 1} \bigl\{ (R_n \rho \theta, R_n \rho \theta + {\bf y}) \in C_n^{(\ell)}(K,L) \bigr\} \to {\bf 1} \{ K \leq \rho \leq L \}\,, \ \ \ n \to \infty\,.
\end{equation}
Substituting \eqref{e:RV.result} and \eqref{e:conv.indicator}
back into \eqref{e:after.polar}, while supposing
temporarily that the dominated convergence theorem is applicable,
we may conclude that
\begin{align}
\text{Cov} &\bigl( G_{n,K,L}(t)\,, G_{n,K,L}(s) \bigr) \label{e:cov.total} \\
&\sim \sum_{\ell=1}^k n^{2k-\ell} R_n^d f(R_ne_1)^{2k-\ell} \bigl( K^{d-\alpha(2k-\ell)} - L^{d-\alpha(2k-\ell)} \bigr) L_\ell(t,s)\,, \ \ \ n \to \infty\,. \notag
\end{align}
Observe that the limit value of $nf(R_ne_1)$ completely determines
which term on the right hand side of \eqref{e:cov.total} is
dominant. If $nf(R_ne_1) \to 0$, then the $k$th term, i.e., $\ell =
k$, in the sum grows fastest, while the first term, i.e., $\ell =
1$, grows fastest when $nf(R_ne_1) \to \infty$. Moreover, if
$nf(R_ne_1) \to \xi \in (0,\infty)$, then all the terms in the sum
grow at the same rate. This concludes the claim of the
proposition.
It now remains to establish an integrable upper bound for the
application of the dominated convergence theorem. First, condition \eqref{e:close.enough} provides
$$
h_{t,s}^{(\ell)}(0,{\bf y}) \leq {\bf 1} \bigl\{ ||y_i|| \leq k(t+s)\,, \ i=1,\dots,2k-\ell-1 \bigr\}\,.
$$
Next, appealing to
Potter's bound, e.g., Proposition 2.6 $(ii)$ in
\cite{resnick:2007}, for every $\varepsilon \in (0,\alpha-d)$ and
sufficiently large $n$,
$$
\frac{f(R_{n} \rho e_1)}{f(R_{n} e_1)}\, {\bf 1} \{ \rho \geq 1 \} \leq (1+\varepsilon) \, \rho^{-\alpha +\varepsilon}\, {\bf 1} \{ \rho \geq 1 \}
$$
and
\begin{align*}
\prod_{i=1}^{2k-\ell-1} \frac{f(||R_n \rho \theta + y_i || e_1)}{f(R_n e_1)}\, {\bf 1} \bigl\{|| \rho \theta + y_i/R_{n} || \geq 1 \bigr\} \leq (1+\varepsilon)^{2k-\ell-1}\,.
\end{align*}
Since $\int_{1}^{\infty} \rho^{d-1-\alpha+\varepsilon} d\rho < \infty$, we
are allowed to apply the dominated convergence theorem.
\end{proof}
The next proposition proves the weak convergence of Theorem
\ref{t:main.heavy} in a finite-dimensional sense.
\begin{proposition} \label{p:fidi.heavy}
Assume the conditions of Theorem \ref{t:main.heavy}. Then, weak
convergences $(i) - (iii)$ in the theorem hold in a
finite-dimensional sense.
Furthermore, let ${\bf X}_n^{\pm}$ be the processes defined in \eqref{e:def.Xnpm.heavy}. Then, the following results
also hold in a finite-dimensional sense. \\
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
\begin{equation} \label{e:bivariate1}
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow ({\bf V}_k^+, {\bf V}_k^-)\,.
\end{equation}
$(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
\begin{equation} \label{e:bivariate2}
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow \left(\sum_{\ell=1}^k \xi^{2k-\ell} {\bf V}_\ell^+, \, \sum_{\ell=1}^k \xi^{2k-\ell} {\bf V}_\ell^- \right)\,.
\end{equation}
$(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
\begin{equation} \label{e:bivariate3}
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow ({\bf V}_1^+, {\bf V}_1^-)\,.
\end{equation}
The limiting Gaussian processes $({\bf V}_\ell^+, {\bf V}_\ell^-)$, $\ell
= 1,\dots,k$ are all formulated in Section \ref{s:limit.heavy}.
\end{proposition}
\begin{proof}
The proofs of \eqref{e:bivariate1}, \eqref{e:bivariate2}, and
\eqref{e:bivariate3} are a bit more technical, but are very
similar to those of the corresponding results in Theorem
\ref{t:main.heavy}; therefore, we check only the finite-dimensional
weak convergences in Theorem \ref{t:main.heavy}. The argument here
is closely related to that in Theorem 3.9 of \cite{penrose:2003},
for which we rely on the so-called Cram\'er-Wold device. For $0
\leq t_1 < \dots < t_m < \infty$, $a_1,\dots, a_m \in \mathbb{R}$ and $m
\geq 1$, define $S_n := \sum_{j=1}^m a_j G_n (t_j)$. For $K>1$,
$S_n$ can be further decomposed into two parts:
\begin{align*}
S_n &= \sum_{j=1}^m a_j G_{n,1,K}(t_j) + \sum_{j=1}^m a_j G_{n,K,\infty}(t_j) \\
&:= T_n^{(K)} + U_n^{(K)}\,.
\end{align*}
We define a constant $\gamma_K$ as follows, according to the limit of $nf(R_ne_1)$.
$$
\gamma_K := \begin{cases} \sum_{i=1}^m \sum_{j=1}^m a_i a_j (1-K^{d-\alpha k}) L_k(t_i,t_j) & \text{if } nf(R_ne_1) \to 0\,, \\
\sum_{i=1}^m \sum_{j=1}^m a_i a_j \sum_{\ell=1}^k (1-K^{d-\alpha(2k-\ell)}) \xi^{2k-\ell} L_\ell(t_i,t_j) & \text{if } nf(R_ne_1) \to \xi \in (0,\infty)\,, \\
\sum_{i=1}^m \sum_{j=1}^m a_i a_j (1-K^{d-\alpha (2k-1)}) L_1(t_i,t_j) & \text{if } nf(R_ne_1) \to \infty\,.
\end{cases}
$$
Moreover, $\gamma := \lim_{K \to \infty} \gamma_K$.
It is then elementary to check that, regardless of the regime we consider,
$$
\tau_n^{-1} \text{Var} \{ T_n^{(K)} \} \to \gamma_K\,, \ \ \ \tau_n^{-1} \text{Var} \{ U_n^{(K)} \} \to \gamma - \gamma_K \ \ \ \text{as } n \to \infty\,.
$$
For the completion of the proof, we ultimately need to show that
$$
\tau_n^{-1/2} \bigl( S_n - \mathbb{E} \{ S_n \} \bigr) \Rightarrow N(0,\gamma)\,.
$$
By the standard approximation argument given on p. 64 of
\cite{penrose:2003}, it suffices to show that
\begin{equation} \label{e:int.CLT}
\tau_n^{-1/2} \Bigl( T_n^{(K)} - \mathbb{E} \bigl\{ T_n^{(K)} \bigr\} \Bigr) \Rightarrow N(0, \gamma_K) \ \ \text{for every } K > 1\,;
\end{equation}
equivalently,
\begin{equation} \label{e:int.normalized.CLT}
\frac{T_n^{(K)} - \mathbb{E} \bigl\{ T_n^{(K)} \bigr\}}{\sqrt{\text{Var} \bigl\{ T_n^{(K)} \bigr\}}} \Rightarrow N(0,1) \ \ \text{for every } K > 1\,.
\end{equation}
\end{equation}
Let $(Q_\ell: \ell \in {\mathbb N})$ be a collection of unit cubes covering $\mathbb{R}^d$. Define
$$
V_n := \bigl\{ \ell \in {\mathbb N}: Q_\ell \cap \text{Ann}(R_n,KR_n) \neq \emptyset \bigr\}\,,
$$
and note that $|V_n| \leq C^* R_n^d$.
Then, $T_n^{(K)}$ can be partitioned as follows.
\begin{align*}
T_n^{(K)} &= \sum_{\ell \in V_n} \sum_{j=1}^m a_j \sum_{{\mathcal{Y}} \subset \mathcal P_n} h_{t_j} ({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n, \ \text{Max}({\mathcal{Y}}) \in \text{Ann}(R_n,KR_n) \cap Q_\ell \bigr\} \\
&:= \sum_{\ell \in V_n} \eta_{\ell,n}\,.
\end{align*}
For $i,j \in V_n$, we put an edge between $i$ and $j$ (write $i
\sim j$) if $i \neq j$ and the distance between $Q_i$ and $Q_j$
is less than $2kt_m$. Then, $(V_n, \sim)$ gives a
\textit{dependency graph} with respect to $(\eta_{\ell, n}, \,
\ell \in V_n)$; that is, for any two disjoint subsets $I_1$, $I_2$
of $V_n$ with no edges connecting $I_1$ and $I_2$, $(\eta_{\ell,
n}, \, \ell \in I_1)$ is independent of $(\eta_{\ell, n}, \, \ell
\in I_2)$. Notice that the maximum degree of $(V_n, \sim)$ is
bounded by a finite constant that does not depend on $n$.
According to Stein's method for normal approximation (see Theorem 2.4 in \cite{penrose:2003}), \eqref{e:int.normalized.CLT} immediately follows if we
can show that for $p=3,4$,
\begin{equation} \label{e:normal.approx}
R_n^d\, \max_{\ell \in V_n} \frac{\mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^p}{\Bigl( \text{Var}\bigl\{ T_n^{(K)} \bigr\} \Bigr)^{p/2}} \to 0 \ \ \text{as } n \to \infty\,.
\end{equation}
Since the proof for showing this varies depending on the limit of
$nf(R_ne_1)$, we divide the argument into three different cases.
Suppose first that $nf(R_ne_1) \to 0$ as $n \to \infty$. Let
$Z_{\ell,n}$ denote the number of points in $\mathcal{P}_n$ lying in
$$
\text{Tube} (Q_\ell; kt_m) := \bigl\{ x \in \mathbb{R}^d: \inf_{y \in Q_{\ell}} ||x-y|| \leq kt_m \bigr\}\,.
$$
Then, $Z_{\ell,n}$ has a Poisson distribution with mean
$n\int_{\text{Tube}(Q_\ell; kt_m)}f(z)dz$. Using Potter's bound,
we see that $Z_{\ell,n}$ is stochastically dominated by another
Poisson random variable $Z_n$ with mean $C^* nf(R_ne_1)$.
Observing that
$$
|\eta_{\ell,n}| \leq C^* \begin{pmatrix} Z_{\ell,n} \\ k \end{pmatrix}\,,
$$
we have, for $q=1,2,3,4$,
$$
\mathbb{E}|\eta_{\ell,n}|^q \leq C^* \mathbb{E} \begin{pmatrix} Z_{\ell,n} \\ k \end{pmatrix}^q \leq C^* \mathbb{E} \begin{pmatrix} Z_n \\ k \end{pmatrix}^q \leq C^* \bigl( nf(R_ne_1) \bigr)^k,
$$
where in the last step we used the assumption $nf(R_ne_1) \to 0$.
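The last inequality in the display above also uses a standard Poisson fact, stated here for completeness: if $Z \sim \text{Poisson}(\mu)$ with $\mu \leq 1$, then for any fixed $k$ and $q$,
$$
\mathbb{E} \begin{pmatrix} Z \\ k \end{pmatrix}^q = \sum_{z \geq k} \begin{pmatrix} z \\ k \end{pmatrix}^q e^{-\mu}\, \frac{\mu^z}{z!} = \frac{\mu^k}{k!} + O(\mu^{k+1}) \leq C^* \mu^k\,,
$$
applied with $\mu = C^* nf(R_ne_1) \to 0$.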
It now follows that for $p=3,4$,
$$
\max_{\ell \in V_n} \mathbb{E} \bigl| \eta_{\ell,n} -\mathbb{E} \{ \eta_{\ell,n} \} \bigr|^p \leq C^* \bigl( nf(R_ne_1) \bigr)^k.
$$
Therefore,
\begin{align*}
R_n^d\, \max_{\ell \in V_n} \frac{\mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^p}{\Bigl( \text{Var}\bigl\{ T_n^{(K)} \bigr\} \Bigr)^{p/2}} &\leq C^* R_n^d \frac{\bigl( nf(R_ne_1) \bigr)^k}{ \bigl( n^k R_n^d f(R_ne_1)^k \gamma_K \bigr)^{p/2}} \\
&= \frac{C^*}{\gamma_K^{p/2}} \bigl( n^k R_n^d f(R_ne_1)^k \bigr)^{1-p/2} \to 0\,, \ \ \ n \to\infty\,,
\end{align*}
where the last convergence follows from \eqref{e:normalizing.heavy}.
In the case of $nf(R_ne_1) \to \xi \in (0,\infty)$, the argument
for proving \eqref{e:normal.approx} is very similar to, or even
easier than, the previous case, so we omit it.
Finally, suppose that $nf(R_ne_1) \to \infty$ as $n \to \infty$.
We begin by establishing an appropriate upper bound for the fourth
moment expectation
\begin{equation} \label{e:bino.expansion}
\mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^4 = \sum_{j=0}^4 \begin{pmatrix} 4 \\ j \end{pmatrix} (-1)^j \mathbb{E} \{ \eta_{\ell,n}^j \} \bigl( \mathbb{E} \{ \eta_{\ell,n} \} \bigr)^{4-j}.
\end{equation}
Letting
$$
g_{\ell,n}({\mathcal{Y}}) := \sum_{j=1}^m a_j h_{t_j}({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n, \ \text{Max}({\mathcal{Y}}) \in \text{Ann} (R_n,KR_n) \cap Q_{\ell} \bigr\}\,,
$$
we see that for every $j \in \{0,\dots,4\}$,
$$
F_n(j):=\mathbb{E} \{ \eta_{\ell,n}^j \}\bigl( \mathbb{E} \{ \eta_{\ell,n} \} \bigr)^{4-j}
$$
can be written as the expectation of a quadruple sum
\begin{equation} \label{e:quadruple}
\mathbb{E} \left\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n^{(1)}} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n^{(2)}} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n^{(3)}} \sum_{{\mathcal{Y}}_4 \subset \mathcal{P}_n^{(4)}} g_{\ell,n}({\mathcal{Y}}_1)\, g_{\ell,n}({\mathcal{Y}}_2)\, g_{\ell,n}({\mathcal{Y}}_3)\, g_{\ell,n}({\mathcal{Y}}_4)\, \right\},
\end{equation}
where each of $\mathcal{P}_n^{(1)}, \dots, \mathcal{P}_n^{(4)}$ is either equal to or
an independent copy of one of the others. By definition, each
${\mathcal{Y}}_i$ is a finite collection of $d$-dimensional vectors. If, in
particular, $|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4| = 4k$, i.e., any
two of ${\mathcal{Y}}_i$, $i=1,\dots,4$ have no common elements, then the
Palm theory given in the Appendix reveals that \eqref{e:quadruple}
is equal to $\bigl( \mathbb{E} \{ \eta_{\ell,n} \} \bigr)^4$. Then, in
this case, their overall contribution to \eqref{e:bino.expansion} is
identically zero, because
$$
\sum_{j=0}^4 \begin{pmatrix} 4 \\ j \end{pmatrix} (-1)^j \bigl( \mathbb{E} \{ \eta_{\ell,n} \} \bigr)^{4} = 0\,.
$$
Next, suppose that $|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4| = 4k-1$,
i.e., there is a pair $({\mathcal{Y}}_i, {\mathcal{Y}}_j)$, $i \neq j$ having exactly
one element in common and no other common elements between
${\mathcal{Y}}_i$'s are present. In this case, \eqref{e:quadruple} can be
written as
\begin{equation} \lefteftarrowbel{e:second.order.term}
\frac{n^{2k-1}}{\bigl( (k-1)! \bigr)^2}\, \mathbb{E} \Bigl\{ g_{\ell,n}({\mathcal{Y}}_1)\, g_{\ell,n}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2|=1 \bigr\} \Bigr\}
\left( \frac{n^k}{k!} \mathbb{E} \bigl\{ g_{\ell,n}({\mathcal{Y}}) \bigr\} \right)^2.
\end{equation}
In particular, \eqref{e:second.order.term} appears once in
$F_n(2)$, $\binom{3}{2}$ times in $F_n(3)$, and $\binom{4}{2}$ times
in $F_n(4)$. Thus, the total contribution to
\eqref{e:bino.expansion} sums up to
$$
\biggl\{ \begin{pmatrix} 4 \\ 2 \end{pmatrix} (-1)^2 + \begin{pmatrix} 4 \\ 3 \end{pmatrix} (-1)^3 \begin{pmatrix} 3 \\ 2 \end{pmatrix} + \begin{pmatrix} 4 \\ 4 \end{pmatrix} (-1)^4 \begin{pmatrix} 4 \\ 2 \end{pmatrix} \biggr\} \times \eqref{e:second.order.term} = 0\,.
$$
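For completeness, the numerical cancellation of the coefficients above is elementary:
$$
\begin{pmatrix} 4 \\ 2 \end{pmatrix} - \begin{pmatrix} 4 \\ 3 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \end{pmatrix} + \begin{pmatrix} 4 \\ 4 \end{pmatrix} \begin{pmatrix} 4 \\ 2 \end{pmatrix} = 6 - 4 \cdot 3 + 1 \cdot 6 = 0\,.
$$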
We may assume, therefore, that $|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup
{\mathcal{Y}}_4| \lefteq 4k-2$. Let us start with $|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3
\cup {\mathcal{Y}}_4| = 4k-2$, where we shall examine in particular the case
in which $\mathcal{P}_n^{(1)}=\mathcal{P}_n^{(2)}=\mathcal{P}_n^{(3)}=\mathcal{P}_n^{(4)}$, $|{\mathcal{Y}}_1 \cap
{\mathcal{Y}}_2|=2$ and no other common elements between ${\mathcal{Y}}_i$'s exist. The
argument for the other cases will be omitted because they can be
handled in the same manner. Then, by Palm theory,
\eqref{e:quadruple} is equal to
\begin{equation} \label{e:2overlaps}
\frac{n^{2k-2}}{2 \bigl( (k-2)! \bigr)^2}\, \mathbb{E} \Bigl\{ g_{\ell,n}({\mathcal{Y}}_1)\, g_{\ell,n}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = 2 \bigr\} \Bigr\} \left( \frac{n^k}{k!} \mathbb{E} \bigl\{ g_{\ell,n}({\mathcal{Y}}) \bigr\} \right)^2.
\end{equation}
Because of Potter's bound, together with the fact that $Q_\ell$
intersects $\text{Ann}(R_n,KR_n)$,
\begin{align*}
\Bigl| \mathbb{E} &\Bigl\{ g_{\ell,n}({\mathcal{Y}}_1)\, g_{\ell,n}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = 2 \bigr\} \Bigr\} \Bigr| \\
&\leq C^* \Bigl( \mathbb{P} \bigl\{ X_1 \in \text{Tube} (Q_\ell; kt_m) \bigr\} \Bigr)^{2k-2} \leq C^* f(R_ne_1)^{2k-2}.
\end{align*}
Similarly, we can obtain
$$
\Bigl| \mathbb{E} {\bf i}gl\{ g_{\ell,n}({\mathcal{Y}}) {\bf i}gr\} \Bigr| \lefteq C^* f(R_ne_1)^k,
$$
and therefore, the absolute value of \eqref{e:2overlaps},
equivalently that of \eqref{e:quadruple}, is bounded above by $C^*
{\bf i}gl( nf(R_ne_1) {\bf i}gr)^{4k-2} $.
A similar argument shows that if $|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup
{\mathcal{Y}}_4| = 4k-q$ for some $q \geq 3$, the absolute value of
\eqref{e:quadruple} is bounded above by $C^*\bigl( nf(R_ne_1)
\bigr)^{4k-q}$. Putting these facts together, while recalling that
$nf(R_ne_1) \to \infty$ as $n\to \infty$, we may conclude that
$$
\mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^4 \leq C^* \bigl( nf(R_ne_1) \bigr)^{4k-2}.
$$
Now, it is easy to check \eqref{e:normal.approx}.
For the third absolute moment $\mathbb{E} \bigl| \eta_{\ell,n}
- \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^3$, we apply H\"{o}lder's
inequality to obtain
$$
\mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^3 \leq \Bigl( \mathbb{E} \bigl| \eta_{\ell,n} - \mathbb{E} \{ \eta_{\ell,n} \} \bigr|^4 \Bigr)^{3/4} \leq C^* \bigl( nf(R_ne_1) \bigr)^{3k-3/2}.
$$
Again, it is easy to verify \eqref{e:normal.approx}.
We have thus obtained the CLT in \eqref{e:int.CLT} as required, regardless of the limit of $nf(R_ne_1)$.
\end{proof}
We now claim that once the tightness of ${\bf X}_n^+$
and ${\bf X}_n^-$ is established in the space $\mathcal D [0,\infty)$ equipped with the Skorohod $J_1$-topology, the proof of Theorem
\ref{t:main.heavy} is complete. To see this, suppose that
${\bf X}_n^+$ and ${\bf X}_n^-$ are both tight in $\mathcal D
[0,\infty)$. Then, the joint process $({\bf X}_n^+, {\bf X}_n^-)$ is tight
as well in $\mathcal D [0,\infty) \times \mathcal D [0,\infty)$,
endowed with the product topology. Because of the already
established finite-dimensional weak convergence of $({\bf X}_n^+,
{\bf X}_n^-)$, every subsequential limit of $({\bf X}_n^+, {\bf X}_n^-)$
coincides with the limiting process in Proposition
\ref{p:fidi.heavy}. This in turn implies the weak convergence of
$({\bf X}_n^+, {\bf X}_n^-)$ in $\mathcal D [0,\infty) \times \mathcal D
[0,\infty)$. Using the basic fact that the map $(x,y) \mapsto x-y$
from $\mathcal D [0,\infty) \times \mathcal D [0,\infty)$ to
$\mathcal D [0,\infty)$ is continuous at every $(x,y) \in \mathcal
C[0,\infty) \times \mathcal C[0,\infty)$, while recalling that the
limits in Proposition \ref{p:fidi.heavy} all have continuous
sample paths, the continuous mapping theorem gives weak
convergence of ${\bf X}_n = {\bf X}_n^+ - {\bf X}_n^-$ in $\mathcal D
[0,\infty)$.
\begin{proposition} \label{p:tightness.heavy}
The sequences $({\bf X}_n^+)$ and $({\bf X}_n^-)$ are both tight in
$\mathcal D [0,\infty)$, irrespective of the limit of
$nf(R_ne_1)$.
\end{proposition}
\begin{proof}
We prove the tightness of $({\bf X}_n^+)$ only, in the space $\mathcal D [0,L]$ for any fixed $L>0$. For notational ease, however, we omit the superscript ``+" from all the functions and objects during the proof. By Theorem 13.5 of \cite{billingsley:1999}, it is
sufficient to show that there exists $B>0$ such that
$$
\mathbb{E} \Bigl\{ \bigl( X_n(t) - X_n(s) \bigr)^2 \bigl( X_n(s) - X_n(r) \bigr)^2 \Bigr\} \leq B (t-r)^2
$$
for all $0 \leq r \leq s \leq t \leq L$ and $n \geq 1$.
For typographical convenience, we use the shorthand notations
\eqref{e:def.hts}, \eqref{e:def.hnts}, and further,
\begin{align*}
\xi_{n,t,s} &:= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_{n, t,s}({\mathcal{Y}})\,.
\end{align*}
Then,
\begin{align*}
\mathbb{E} \Bigl\{ \bigl( X_n(t) - X_n(s) \bigr)^2 \bigl( X_n(s) - X_n(r) \bigr)^2 \Bigr\}
&= \tau_n^{-2} \mathbb{E} \Bigl\{ \bigl( \xi_{n,t,s} - \mathbb{E} \{ \xi_{n,t,s} \} \bigr)^2 \bigl( \xi_{n,s,r} - \mathbb{E} \{ \xi_{n,s,r} \} \bigr)^2 \Bigr\} \\
&= \tau_n^{-2} \sum_{p=0}^2 \sum_{q=0}^2 \binom{2}{p} \binom{2}{q} (-1)^{p+q} F_n(p,q)\,,
\end{align*}
where
$$
F_n(p,q) = \mathbb{E} \{ \xi_{n,t,s}^p \xi_{n,s,r}^q \} \bigl( \mathbb{E} \{ \xi_{n,t,s} \} \bigr)^{2-p} \bigl( \mathbb{E} \{ \xi_{n,s,r} \} \bigr)^{2-q}.
$$
Note that for every $p, q \in \{ 0,1,2 \}$, $F_n(p,q)$ can be represented as
\begin{equation} \label{e:quadruple1}
\mathbb{E} \left\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n^{(1)}} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n^{(2)}} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n^{(3)}} \sum_{{\mathcal{Y}}_4 \subset \mathcal{P}_n^{(4)}} h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\, h_{n,s,r}({\mathcal{Y}}_3)\, h_{n,s,r}({\mathcal{Y}}_4) \right\},
\end{equation}
where each of $\mathcal{P}_n^{(1)}, \dots, \mathcal{P}_n^{(4)}$ is either equal to, or an independent copy of, one of the others.
According to the Palm theory given in the Appendix, if $|{\mathcal{Y}}_1 \cup
{\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4 | = 4k$, i.e., no two of the ${\mathcal{Y}}_i$'s have
common elements, then \eqref{e:quadruple1} reduces to $\bigl( \mathbb{E}
\{ \xi_{n,t,s} \} \bigr)^2 \bigl( \mathbb{E} \{ \xi_{n,s,r} \} \bigr)^2$.
The overall contribution in this case vanishes identically,
since
$$
\sum_{p=0}^2 \sum_{q=0}^2 \binom{2}{p} \binom{2}{q} (-1)^{p+q} \bigl( \mathbb{E} \{ \xi_{n,t,s} \} \bigr)^2 \bigl( \mathbb{E} \{ \xi_{n,s,r} \} \bigr)^2 = 0\,.
$$
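The vanishing here is immediate from the binomial theorem, since the double sum of signed coefficients factors:
$$
\sum_{p=0}^2 \sum_{q=0}^2 \binom{2}{p} \binom{2}{q} (-1)^{p+q} = \left( \sum_{p=0}^{2} \binom{2}{p}(-1)^p \right) \left( \sum_{q=0}^{2} \binom{2}{q}(-1)^q \right) = (1-1)^2 (1-1)^2 = 0\,.
$$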
In the following, we examine the case in which at least one common
element exists among the ${\mathcal{Y}}_i$'s. First, for $\ell = 1,\dots,k$, we
count the number of times
\begin{equation} \label{e:pattern1}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n}h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2|=\ell \bigr\} \Bigr\}
\bigl(\mathbb{E} \{ \xi_{n,s,r} \} \bigr)^2
\end{equation}
appears in each $F_n(p,q)$. Indeed, \eqref{e:pattern1} appears
exactly once in each of $F_n(2,0)$, $F_n(2,1)$, and $F_n(2,2)$. Therefore,
the total contribution amounts to
$$
\left[ \binom{2}{2}\binom{2}{0}(-1)^{2+0} + \binom{2}{2}\binom{2}{1}(-1)^{2+1} + \binom{2}{2}\binom{2}{2}(-1)^{2+2} \right] \times \eqref{e:pattern1} = 0\,.
$$
Similarly, for every $\ell = 1,\dots, k$, no contribution is made
by
$$
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n}h_{n,s,r}({\mathcal{Y}}_1)\, h_{n,s,r}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2|=\ell \bigr\} \Bigr\}
\bigl(\mathbb{E} \{ \xi_{n,t,s} \} \bigr)^2.
$$
Subsequently, for $\ell=1,\dots,k$, we explore the presence of
\begin{equation} \label{e:pattern2}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n}h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,s,r}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2|=\ell \bigr\} \Bigr\}
\mathbb{E} \{ \xi_{n,t,s} \} \mathbb{E} \{ \xi_{n,s,r} \}\,.
\end{equation}
One can immediately check that \eqref{e:pattern2} appears once in
$F_n(1,1)$, twice in $F_n(2,1)$, twice in $F_n(1,2)$, and four
times in $F_n(2,2)$. However, their total contribution again vanishes,
because
\begin{multline*}
\biggl[ \binom{2}{1}\binom{2}{1}(-1)^{1+1} + \binom{2}{2}\binom{2}{1}(-1)^{2+1} \cdot 2 \\ + \binom{2}{1}\binom{2}{2}(-1)^{1+2} \cdot 2
+ \binom{2}{2}\binom{2}{2}(-1)^{2+2} \cdot 4 \biggr] \times \eqref{e:pattern2} = 0\,.
\end{multline*}
Next, let $\ell_i \in \{ 0,\dots,k \}$, $i=1,2,3$, and $\ell \in \{
2,\dots,2k\}$ be such that at least two of the $\ell_i$'s are non-zero,
so that we should examine the appearance of
\begin{multline}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n} h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\,h_{n,s,r}({\mathcal{Y}}_3)\, \label{e:pattern3} \\
\times {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2 | =\ell_1, \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3 | =\ell_2, \, |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3 | =\ell_3, \, |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3| = 3k-\ell \bigr\} \Bigr\}\, \mathbb{E} \{ \xi_{n,s,r} \}\,.
\end{multline}
This appears once in $F_n(2,1)$ and twice in $F_n(2,2)$; therefore, the overall contribution is
$$
\biggl[ \binom{2}{2}\binom{2}{1}(-1)^{2+1} + \binom{2}{2}\binom{2}{2}(-1)^{2+2} \cdot 2 \biggr] \times \eqref{e:pattern3} = 0\,.
$$
For the same reason, we can ignore the presence of
\begin{multline*}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n} h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,s,r}({\mathcal{Y}}_2)\,h_{n,s,r}({\mathcal{Y}}_3)\, \\
\times {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2 | =\ell_1, \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3 | =\ell_2, \, |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3 | =\ell_3, \, |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3| = 3k-\ell \bigr\} \Bigr\}\, \mathbb{E} \{ \xi_{n,t,s} \}\,,
\end{multline*}
where $\ell_i \in \{ 0,\dots,k \}$, $i=1,2,3$, and $\ell \in \{
2,\dots,2k\}$ are such that at least two of the $\ell_i$'s are non-zero.
Putting these calculations together, we find that the tightness
follows once we can show that there exists $B>0$ such that
\begin{multline}
\tau_n^{-2} \mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_4 \subset \mathcal{P}_n} h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\,h_{n,s,r}({\mathcal{Y}}_3)\,h_{n,s,r}({\mathcal{Y}}_4)\, \label{e:pattern4} \\
\times {\bf 1} \bigl\{ \text{each } {\mathcal{Y}}_i \text{ has at least one common element with} \\
\text{at least one of the other three} \bigr\} \Bigr\} \leq B(t-r)^2
\end{multline}
for all $0 \leq r \leq s \leq t \leq L$ and $n \geq 1$. We need to check only the following possibilities. \\
$[\text{I}]$ $\ell := |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| \in \{ 1,\dots,k \}$, ${\ell^{\prime}} := |{\mathcal{Y}}_3 \cap {\mathcal{Y}}_4| \in \{ 1,\dots,k \}$,
and $({\mathcal{Y}}_1 \cup {\mathcal{Y}}_2) \cap ({\mathcal{Y}}_3 \cup {\mathcal{Y}}_4) = \emptyset$. \\
$[\text{I} \hspace{-1pt }\text{I}]$ $\ell := |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3| \in \{ 1,\dots,k \}$, ${\ell^{\prime}} := |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_4| \in \{ 1,\dots,k \}$, and $({\mathcal{Y}}_2 \cup {\mathcal{Y}}_3) \cap ({\mathcal{Y}}_1 \cup {\mathcal{Y}}_4) = \emptyset$. \\
$[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$ Each
${\mathcal{Y}}_i$ has at least one common element with at least one of the
other three, but neither $[\text{I}]$ nor $[\text{I}\hspace{-1pt
}\text{I}]$ holds.
For example, if $|{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2|=2$, $|{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3|=3$, $|{\mathcal{Y}}_2
\cap {\mathcal{Y}}_4|=1$, and there are no other common elements among the
${\mathcal{Y}}_i$'s, then this falls into category
$[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$, where,
unlike $[\text{I}]$ or $[\text{I} \hspace{-1pt }\text{I}]$, the
expectation in \eqref{e:pattern4} can no longer be separated by
Palm theory.
Denoting by $A$ the left-hand side of \eqref{e:pattern4}, let us
start with case $[\text{I}]$. As a result of Palm theory,
\begin{align*}
A &= \tau_n^{-1} \frac{n^{2k-\ell}}{\ell ! \bigl( (k-\ell)! \bigr)^2}\, \mathbb{E} \Bigl\{ h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell \bigr\} \Bigr\} \\
&\quad \times \tau_n^{-1} \frac{n^{2k-{\ell^{\prime}}}}{{\ell^{\prime}} ! \bigl( (k-{\ell^{\prime}})! \bigr)^2}\, \mathbb{E} \Bigl\{ h_{n,s,r}({\mathcal{Y}}_3)\, h_{n,s,r}({\mathcal{Y}}_4)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_3 \cap {\mathcal{Y}}_4| = {\ell^{\prime}} \bigr\} \Bigr\} \\
&=: A_1 \times A_2\,.
\end{align*}
Proceeding as in the calculation of Proposition
\ref{p:cova.heavy}, we obtain
\begin{align}
A_1 &\leq C^* \tau_n^{-1} n^{2k-\ell} R_n^d f(R_ne_1)^{2k-\ell} \int_{(\mathbb{R}^d)^{\ell-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1\, h_{t,s}(0,{\bf y}, {\bf z}_1)\, h_{t,s}(0,{\bf y}, {\bf z}_2)\,, \label{e:upper.E1}\\
A_2 &\leq C^* \tau_n^{-1} n^{2k-{\ell^{\prime}}} R_n^d f(R_ne_1)^{2k-{\ell^{\prime}}} \int_{(\mathbb{R}^d)^{{\ell^{\prime}}-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-{\ell^{\prime}}}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-{\ell^{\prime}}}} \hspace{-10pt} d{\bf z}_1\, h_{s,r}(0,{\bf y}, {\bf z}_1)\, h_{s,r}(0,{\bf y}, {\bf z}_2)\,. \label{e:upper.E2}
\end{align}
Notice that $h_t$ is increasing in $t$ in the sense of \eqref{e:ind.increase+} (recall that the superscript ``+" is suppressed during the proof).
It also follows from \eqref{e:close.enough.decomp.dyna} that the triple integral in
\eqref{e:upper.E1} is unchanged if the integral domain is
restricted to $\bigl( B(0,kL) \bigr)^{\ell-1} \times \bigl(
B(0,kL) \bigr)^{k-\ell} \times \bigl( B(0,kL) \bigr)^{k-\ell}$.
Therefore, with $\lambda$ being the
Lebesgue measure on $(\mathbb{R}^d)^{k-\ell}$,
\begin{align*}
\int_{(\mathbb{R}^d)^{\ell-1}} \hspace{-10pt} d{\bf y} &\int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-\ell}} \hspace{-10pt} d{\bf z}_1\, h_{t,s}(0,{\bf y}, {\bf z}_1)\, h_{t,s}(0,{\bf y}, {\bf z}_2) \\
&\leq \lambda \bigl\{ \bigl( B(0,kL) \bigr)^{k-\ell} \bigr\}\, \int_{(\mathbb{R}^d)^{\ell-1}}\int_{(\mathbb{R}^d)^{k-\ell}} h_{t,s}(0,{\bf y}, {\bf z}) d{\bf y} d{\bf z} \\
&= \lambda \bigl\{ \bigl( B(0,kL) \bigr)^{k-\ell} \bigr\}\, \bigl( t^{d(k-1)} - s^{d(k-1)} \bigr) \int_{(\mathbb{R}^d)^{k-1}} h(0,{\bf y}) d{\bf y} \\
&\leq C^*(t-r)\,.
\end{align*}
Applying the same manipulation to the triple integral in
\eqref{e:upper.E2}, we obtain
$$
A \leq C^* \tau_n^{-2} n^{4k-\ell-{\ell^{\prime}}} R_n^{2d} f(R_ne_1)^{4k-\ell-{\ell^{\prime}}} (t-r)^2.
$$
It remains to check that $\sup_n \tau_n^{-2} n^{4k-\ell-{\ell^{\prime}}}
R_n^{2d} f(R_ne_1)^{4k-\ell-{\ell^{\prime}}} < \infty$, which is, however,
easy to prove, irrespective of the definition of $\tau_n$. This
settles case $[\text{I}]$.
Next, we turn to case $[\text{I}\hspace{-1pt}\text{I}]$. Applying
the same operations as in case $[\text{I}]$, we obtain
the same upper bound for $A$ up to multiplicative constants.
Finally, we proceed to case
$[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$. Let $\ell
:= 4k-|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4|$; then, it must be that
$3 \leq \ell \leq 3k$. It follows from Palm theory that
$$
A = C^* \tau_n^{-2} n^{4k-\ell} \mathbb{E} \bigl\{ h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\,h_{n,s,r}({\mathcal{Y}}_3)\,h_{n,s,r}({\mathcal{Y}}_4) \bigr\}
$$
with $({\mathcal{Y}}_1,\dots,{\mathcal{Y}}_4)$ satisfying the requirements of case
$[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$. In
particular, $({\mathcal{Y}}_1 \cup {\mathcal{Y}}_2) \cap ({\mathcal{Y}}_3 \cup {\mathcal{Y}}_4)$ must be
non-empty; hence, we may assume without loss of generality that
${\mathcal{Y}}_1 \cap {\mathcal{Y}}_3\neq \emptyset$. Set ${\ell^{\prime}} := |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3| \in
\{ 1,\dots,k \}$. By \eqref{e:ind.increase+} and \eqref{e:close.enough.decomp.dyna}, we have
$$
A \leq C^* \tau_n^{-2} n^{4k-\ell} R_n^d f(R_ne_1)^{4k-\ell} \int_{(\mathbb{R}^d)^{{\ell^{\prime}}-1}} \hspace{-10pt} d{\bf y} \int_{(\mathbb{R}^d)^{k-{\ell^{\prime}}}} \hspace{-10pt} d{\bf z}_2 \int_{(\mathbb{R}^d)^{k-{\ell^{\prime}}}} d{\bf z}_1\, h_{t,s}(0,{\bf y}, {\bf z}_1)\, h_{t,s}(0,{\bf y}, {\bf z}_2)\,.
$$
By Lemma \ref{l:used.for.tightness},
\begin{align*}
A \leq C^* \tau_n^{-2} n^{4k-\ell} R_n^d f(R_ne_1)^{4k-\ell} (t-r)^2.
\end{align*}
Once again, verifying
$$
\sup_n \tau_n^{-2} n^{4k-\ell} R_n^d f(R_ne_1)^{4k-\ell} < \infty
$$
is elementary, and hence, we have completed the proof of
\eqref{e:pattern4} as required.
\end{proof}
\subsection{Exponentially Decaying Tail Case} \label{s:proof.light}
We start by defining a subgraph counting process with restricted
domain. For $0 \leq K < L \leq \infty$, we define
\begin{align*}
G_{n,K,L}(t) &= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_t ({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n\,, \ a(R_n)^{-1}\bigl(\text{Max}({\mathcal{Y}}) - R_n \bigr) \in [K,L) \bigr\} \\
&=: \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_{n,t,K,L} ({\mathcal{Y}})\,,
\end{align*}
and
\begin{align*}
G_{n,K,L}^{\pm}(t) &= \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_t^{\pm} ({\mathcal{Y}})\, {\bf 1} \bigl\{ m({\mathcal{Y}}) \geq R_n\,, \ a(R_n)^{-1}\bigl(\text{Max}({\mathcal{Y}}) - R_n \bigr) \in [K,L) \bigr\} \\
&=: \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h_{n,t,K,L}^{\pm} ({\mathcal{Y}})\,,
\end{align*}
where $(R_n)$ satisfies \eqref{e:normalizing.light}. For the
special case $K=0$ and $L=\infty$, we write $G_n(t) =
G_{n,0,\infty}(t)$ and $G_n^{\pm}(t) = G_{n,0,\infty}^{\pm}(t)$.
The centered and scaled versions of the subgraph counting process
are
\begin{align}
X_n(t) &= \tau_n^{-1/2} \Bigl( G_n(t) - \mathbb{E}\bigl\{ G_n(t) \bigr\} \Bigr)\,, \label{e:def.Xn}\\
X_n^{\pm}(t) &= \tau_n^{-1/2} \Bigl( G_n^{\pm}(t) - \mathbb{E}\bigl\{ G_n^{\pm}(t) \bigr\} \Bigr) \label{e:def.Xnpm}\,,
\end{align}
where $(\tau_n)$ is given in \eqref{e:tau.light}. As in the
regularly varying tail case, we first need to determine the growth
rate of the covariances of $G_{n,K,L}(t)$. Before presenting the
results, we introduce, for $\ell = 1,\dots,k$,
\begin{align*}
M_{\ell, K, L}(t,s) := D_\ell &\int_0^{\infty} \int_{(\mathbb{R}^d)^{2k-\ell-1}} \hspace{-10pt}e^{ -(2k-\ell)\rho - c^{-1} \sum_{i=1}^{2k-\ell-1} \langle e_1,y_i \rangle }\, \\
&\times {\bf 1} \bigl\{ {\bf y} \in E_{K,L}^{(\ell)} (\rho,e_1) \bigr\}\, h_{t,s}^{(\ell)}(0,{\bf y})\, d{\bf y} d\rho\,, \ \ t,s\geq 0\,,
\end{align*}
where $D_\ell $ is given in \eqref{e:def.D.ell}, $h_{t,s}^{(\ell)}(0,{\bf y})$ is defined in \eqref{e:def.h.ell}, and for $\rho > 0$ and $\theta \in S^{d-1}$,
\begin{align*}
E_{K,L}^{(\ell)}(\rho, \theta) = \Bigl\{ {\bf y} \in (\mathbb{R}^d)^{2k-\ell-1}:\, &\rho + c^{-1} \langle \theta, y_i \rangle \geq 0\,, \ i=1,\dots, 2k-\ell-1\,, \\
&K \leq \max \bigl\{ \rho\,, \rho + c^{-1} \hspace{-5pt} \max_{i = 1,\dots, k-1}\langle \theta, y_i \rangle \bigr\} < L\,, \\
&K \leq \max \bigl\{ \rho\,, \rho + c^{-1} \hspace{-5pt} \max_{i=1,\dots,\ell-1,k,\dots,2k-\ell-1}\langle \theta, y_i \rangle \bigr\} < L \, \Bigr\}\,.
\end{align*}
Note that $M_{\ell, 0, \infty}(t,s)$ coincides with \eqref{e:cov.comp.light}.
\begin{proposition} \label{p:cova.light}
Assume the conditions of Theorem \ref{t:main.light}. Let $0 \leq K < L \leq \infty$. \\
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to M_{k,K,L}(t,s)\,, \ \ \ n\to \infty\,.
$$
$(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to \sum_{\ell=1}^k \xi^{2k-\ell}M_{\ell, K, L}(t,s)\,, \ \ \ n\to \infty\,.
$$
$(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
$$
\tau_n^{-1} \text{Cov} \bigl( G_{n,K,L}(t), G_{n,K,L}(s) \bigr) \to M_{1,K,L}(t,s)\,, \ \ \ n\to \infty\,.
$$
\end{proposition}
\begin{proof}
As argued in Proposition \ref{p:cova.heavy}, with multiple
applications of Palm theory, one can write
$$
\text{Cov} \bigl( G_{n,K,L}(t)\,, G_{n,K,L}(s) \bigr)
= \sum_{\ell=1}^k \frac{n^{2k-\ell}}{\ell ! \bigl( (k-\ell)! \bigr)^2}\, \mathbb{E} \Bigl\{h_{n,t, K, L} ({\mathcal{Y}}_1)\, h_{n,s, K, L} ({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\}.
$$
Define, for $\ell \in \{ 1,\dots,k \}$,
\begin{multline*}
F_n^{(\ell)} (K,L) := \bigl\{ {\bf x} \in (\mathbb{R}^d)^{2k-\ell}: a(R_n)^{-1} \bigl( \text{Max}(x_1,\dots, x_k) - R_n \bigr) \in [K,L)\,, \\
a(R_n)^{-1} \bigl( \text{Max}(x_1,\dots, x_\ell, x_{k+1}, \dots, x_{2k-\ell}) - R_n \bigr) \in [K,L) \bigr\}\,.
\end{multline*}
By the change of variables ${\bf x} \rightarrow (x,x+{\bf y})$ with ${\bf x}
\in (\mathbb{R}^d)^{2k-\ell}$, $x \in \mathbb{R}^d$, ${\bf y} \in
(\mathbb{R}^d)^{2k-\ell-1}$, together with the invariance
\eqref{e:location.inv},
\begin{align*}
\mathbb{E} \Bigl\{ &h_{n,t,K,L}({\mathcal{Y}}_1)\, h_{n,s,K,L}({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell\, \bigr\} \Bigr\} \\
&= \int_{(\mathbb{R}^d)^{2k-\ell}} f({\bf x})\, {\bf 1} \bigl\{ m({\bf x}) \geq R_n \bigr\}\, h_{t,s}^{(\ell)} ({\bf x})\,{\bf 1} \bigl\{ {\bf x} \in F_n^{(\ell)}(K,L) \bigr\} d{\bf x} \\
&= \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{2k-\ell-1}} f(x)\, f(x+{\bf y})\, {\bf 1} \bigl\{ m(x, x + {\bf y}) \geq R_n \bigr\}\, h_{t,s}^{(\ell)} (0, {\bf y})\, \\
&\quad \times {\bf 1} \bigl\{ (x, x+{\bf y}) \in F_n^{(\ell)}(K,L) \bigr\} d{\bf y} dx\,.
\end{align*}
Let $J_k$ denote the last integral. Further calculation by the
polar coordinate transform $x \to (r,\theta)$ with $J(\theta) =
|\partial x / \partial \theta|$ and the change of variable $\rho =
a(R_n)^{-1} (r-R_n) $ yields
\begin{align}
J_k = &a(R_{n}) R_{n}^{d-1} f(R_{n}e_1)^{2k-\ell} \int_{S^{d-1}} \hspace{-7pt} J(\theta)d\theta \int_0^{\infty} d\rho \int_{(\mathbb{R}^d)^{2k-\ell-1}} d{\bf y}\, \label{e:Jk}\\
&\times \left( 1 + \frac{a(R_{n})}{R_{n}} \rho \right)^{d-1} \frac{f\Bigl( \bigl( R_{n} + a(R_{n}) \rho \bigr) e_1 \Bigr)}{f(R_ne_1)} \notag \\
&\times \prod_{i=1}^{2k-\ell-1} f(R_{n}e_1)^{-1} f\Bigl( \|(R_{n}+a(R_{n})\rho)\theta + y_i\|e_1 \Bigr)\, {\bf 1} \Bigl\{ \|(R_{n}+a(R_{n})\rho)\theta + y_i\| \geq R_n \Bigr\} \notag \\
&\times {\bf 1} \Bigl\{ \bigl( (R_n + a(R_n)\rho)\theta, \, (R_n + a(R_n)\rho)\theta + {\bf y} \bigr) \in F_n^{(\ell)} (K,L)\Bigr\}\, h_{t,s}^{(\ell)} (0,{\bf y})\,, \notag
\end{align}
where $S^{d-1}$ is the $(d-1)$-dimensional unit sphere in $\mathbb{R}^d$.
The following expansion will be used repeatedly. For each
$i=1,\dots,2k-\ell-1$,
\begin{equation*}
\bigl\| \bigl( R_{n} + a(R_{n})\rho \bigr)\theta + y_i \bigr\| = R_{n} + a(R_{n}) \rho + \langle \theta, y_i \rangle + \gamma_n(\rho, \theta, y_i)\,,
\end{equation*}
where $\gamma_n(\rho, \theta, y_i) \to 0$ uniformly in $\rho >
0$, $\theta \in S^{d-1}$, and $\|y_i\| \leq k(t+s)$.
For the application of the dominated convergence theorem, we need
to compute the limit of the expression under the integral sign,
while establishing an integrable upper bound. We first calculate
the limit of the indicator functions. For every $\rho >0$, $\theta
\in S^{d-1}$, and $\| y_i\| \leq k(t+s)$, $i=1,\dots,2k-\ell-1$,
\begin{align*}
\prod_{i=1}^{2k-\ell-1} &{\bf 1} \Bigl\{ \|(R_{n}+a(R_{n})\rho)\theta + y_i\| \geq R_n \Bigr\} \\
&\times {\bf 1} \Bigl\{ \bigl( (R_n + a(R_n)\rho)\theta, \, (R_n + a(R_n)\rho)\theta + {\bf y} \bigr) \in F_n^{(\ell)} (K,L)\Bigr\} \\
&\quad \to {\bf 1} \bigl\{ {\bf y} \in E_{K,L}^{(\ell)}(\rho, \theta)\, \bigr\}\,, \ \ \ n\to \infty\,.
\end{align*}
Next, it is clear that for every $\rho >0$, $\bigl( 1 + a(R_n)\rho
/R_n \bigr)^{d-1}$ tends to $1$ as $n \to \infty$ (see
\eqref{e:auxi.slow}) and is bounded above by $2 \bigl( \max \{ 1,
\rho \} \bigr)^{d-1}$.
As for the ratio of the densities in the second line of
\eqref{e:Jk}, we use the basic fact that $1/a$ is flat for $a$,
that is, as $n \to \infty$,
\begin{equation} \label{e:unif.conv.a}
\frac{a(R_{n})}{a\bigl( R_{n} + a(R_{n}) v \bigr)} \to 1\,, \ \ \text{uniformly on bounded } v \text{-sets};
\end{equation}
see p.~142 of \cite{embrechts:kluppelberg:mikosch:1997} for details.
Noting that $L$ is also flat for $a$, we have, for every $\rho >0$,
\begin{align*}
\frac{f\Bigl( \bigl( R_{n} + a(R_{n}) \rho \bigr) e_1 \Bigr)}{f(R_ne_1)} &= \frac{L\bigl( R_{n} + a(R_{n}) \rho \bigr)}{L(R_n)}\, \exp \Bigl\{ -\psi \bigl( R_{n} + a(R_{n})\rho \bigr) + \psi(R_{n}) \Bigr\} \\
&=\frac{L\bigl( R_{n} + a(R_{n}) \rho \bigr)}{L(R_n)} \exp \Bigl\{ - \int_0^\rho \frac{a(R_n)}{a \bigl( R_n + a(R_n)r \bigr)} dr \Bigr\} \\
&\to e^{-\rho}, \ \ \text{as } n \to \infty\,.
\end{align*}
To provide an upper bound for the ratio of the densities, let
$\bigl( q_m(n), \, m \geq 0, n \geq 1 \bigr)$ be a sequence
defined by
$$
q_m(n) = a(R_{n})^{-1} \Bigl( \psi^{\leftarrow} \bigl(\psi(R_{n}) + m \bigr) - R_{n} \Bigr)\,,
$$
equivalently,
$$
\psi \bigl( R_n + a(R_n) q_m(n) \bigr) = \psi(R_n) + m\,.
$$
Then, for $\epsilon \in \bigl( 0, (d+\gamma (2k-\ell))^{-1}
\bigr)$, there exists an integer $N_{\epsilon} \geq 1$ such that
$$
q_m(n) \leq e^{m\epsilon} / \epsilon \ \ \text{for all } n \geq N_{\epsilon}, \ m \geq 0\,.
$$
For the proof of this assertion, the reader may refer to Lemma 5.2
in \cite{balkema:embrechts:2004}; see also Lemma 4.7 of
\cite{owada:adler:2015}. Since $\psi$ is
non-decreasing, we have, for sufficiently large $n$,
\begin{align*}
&\exp \Bigl\{ -\psi \bigl( R_{n} + a(R_{n})\rho \bigr) + \psi(R_{n}) \Bigr\} \, {\bf 1} \{\rho > 0 \} \\
&= \sum_{m=0}^{\infty} \, {\bf 1} \bigl\{ q_m(n) < \rho \leq q_{m+1}(n) \bigr\}\, \exp \Bigl\{ -\psi \bigl( R_{n} + a(R_{n})\rho \bigr) + \psi(R_{n}) \Bigr\} \\
&\leq \sum_{m=0}^{\infty} \, {\bf 1} \bigl\{ 0 < \rho \leq \epsilon^{-1} e^{(m+1)\epsilon} \bigr\}\, e^{-m}.
\end{align*}
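Here, the last inequality uses the monotonicity of $\psi$ together with the defining relation for $q_m(n)$: on the event $\{ q_m(n) < \rho \leq q_{m+1}(n) \}$,
$$
\psi \bigl( R_{n} + a(R_{n})\rho \bigr) \geq \psi \bigl( R_{n} + a(R_{n}) q_m(n) \bigr) = \psi(R_{n}) + m\,,
$$
so the exponential factor is at most $e^{-m}$ there, while $\rho \leq q_{m+1}(n) \leq \epsilon^{-1} e^{(m+1)\epsilon}$ for $n \geq N_{\epsilon}$.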
Using the bound in \eqref{e:poly.upper},
\begin{align*}
L(&R_{n})^{-1} L \bigl( R_{n} + a(R_{n}) \rho \bigr) {\bf 1} \{ \rho >0 \} \leq C \left( 1 + \frac{a(R_{n})}{R_{n}} \rho \right)^{\gamma} \leq
2C \bigl( \max \{ \rho, 1\} \bigr)^{\gamma}.
\end{align*}
Combining these bounds,
$$
\frac{f\Bigl( \bigl( R_{n} + a(R_{n}) \rho \bigr) e_1 \Bigr)}{f(R_ne_1)}\, {\bf 1} \{ \rho >0 \} \leq 2C \bigl( \max \{ \rho, 1\} \bigr)^{\gamma} \sum_{m=0}^{\infty} \, {\bf 1} \bigl\{ 0 < \rho \leq \epsilon^{-1} e^{(m+1)\epsilon} \bigr\}\, e^{-m}.
$$
Finally, we turn to
\begin{align*}
\prod_{i=1}^{2k-\ell-1} \frac{f\Bigl( \|(R_{n}+a(R_{n})\rho)\theta + y_i\|e_1 \Bigr)}{f(R_ne_1)} = & \prod_{i=1}^{2k-\ell-1} \frac{L \Bigl( R_{n} + a(R_{n}) \bigl( \rho + \xi_n(\rho,\theta,y_i) \bigr) \Bigr)}{L(R_n)} \\
&\times \exp \left\{ -\int_0^{\rho + \xi_n(\rho,\theta,y_i)} \frac{a(R_{n})}{a \bigl( R_{n} + a(R_{n} )r \bigr)}\, dr \right\},
\end{align*}
where
$$
\xi_n(\rho, \theta, y) = \frac{\langle \theta, y \rangle + \gamma_n(\rho, \theta, y)}{a(R_{n})}\,.
$$
Since $c=\lim_{n \to \infty}a(R_n)>0$,
$$
A := \sup_{\substack{n \geq 1, \ \rho >0, \\[2pt] \theta \in S^{d-1}, \ \|y\| \leq k(t+s)}} \bigl| \xi_n(\rho, \theta, y) \bigr| < \infty\,.
$$
Therefore, because of the uniform convergence in
\eqref{e:unif.conv.a}, for every $\rho>0$, $\theta \in S^{d-1}$,
and $\| y_i \| \leq k(t+s)$,
$$
\prod_{i=1}^{2k-\ell-1} \frac{f\Bigl( \|(R_{n}+a(R_{n})\rho)\theta + y_i\|e_1 \Bigr)}{f(R_ne_1)} \to \exp \bigl\{ -(2k-\ell-1)\rho - c^{-1} \sum_{i=1}^{2k-\ell-1}\langle \theta, y_i \rangle \bigr\}.
$$
Subsequently, on the set
\begin{align*}
\Bigl\{ &\|(R_{n}+a(R_{n})\rho)\theta + y_i\| \geq R_n\,, \ i=1,\dots,2k-\ell-1 \Bigr\} \\
&=\bigl\{ \rho + \xi_n(\rho, \theta, y_i) \geq 0, \ i=1,\dots,2k-\ell-1 \bigr\}\,,
\end{align*}
we have the obvious upper bound
$$
\prod_{i=1}^{2k-\ell-1} \exp \left\{ -\int_0^{\rho + \xi_n(\rho,\theta,y_i)} \frac{a(R_{n})}{a \bigl( R_{n} + a(R_{n} )r \bigr)}\, dr \right\} \leq 1,
$$
from which, together with \eqref{e:poly.upper}, we see that
\begin{align*}
\prod_{i=1}^{2k-\ell-1} \frac{f\Bigl( \|(R_{n}+a(R_{n})\rho)\theta + y_i\|e_1 \Bigr)}{f(R_{n}e_1)} &\leq \prod_{i=1}^{2k-\ell-1} C \left( 1 + \frac{a(R_{n})}{R_{n}} \bigl( \rho + \xi_n(\rho,\theta,y_i) \bigr) \right)^{\gamma} \\
&\leq C^* \bigl( \max \{ \rho,1 \} \bigr)^{\gamma (2k-\ell-1)}.
\end{align*}
From the argument thus far, for every $\rho >0$, $\theta \in
S^{d-1}$, and $\| y_i \| \leq k(t+s)$, $i=1,\dots,2k-\ell-1$, the
expression under the integral sign in \eqref{e:Jk} eventually
converges to
$$
e^{ -(2k-\ell)\rho - c^{-1} \sum_{i=1}^{2k-\ell-1} \langle \theta,y_i \rangle }\, {\bf 1} \bigl\{ {\bf y} \in E_{K,L}^{(\ell)} (\rho,\theta) \bigr\}\, h_{t,s}^{(\ell)}(0,{\bf y})\,,
$$
while it possesses an upper bound of the form
$$
C^* \bigl( \max \{ \rho,1 \} \bigr)^{d-1+\gamma (2k-\ell)} \sum_{m=0}^{\infty} \, {\bf 1} \bigl\{ 0 < \rho \leq \epsilon^{-1} e^{(m+1)\epsilon} \bigr\}\, e^{-m} h_{t,s}^{(\ell)}(0,{\bf y})
$$
for sufficiently large $n$. Because of the restriction on
$\epsilon$, it is elementary to check that
$$
\int_{0}^{\infty}\bigl( \max \{ \rho,1 \} \bigr)^{d-1+\gamma (2k-\ell)} \sum_{m=0}^{\infty} {\bf 1} \bigl\{ 0 < \rho \leq \epsilon^{-1} e^{(m+1)\epsilon} \bigr\} e^{-m} d \rho < \infty\,.
$$
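One way to see this is to bound the $m$th summand directly: writing $\alpha = d-1+\gamma(2k-\ell)$, we have
$$
\int_0^{\epsilon^{-1} e^{(m+1)\epsilon}} \bigl( \max \{ \rho, 1 \} \bigr)^{\alpha}\, d\rho \leq C^* e^{(m+1)\epsilon (\alpha + 1)}\,,
$$
and the restriction $\epsilon < (\alpha+1)^{-1} = (d + \gamma(2k-\ell))^{-1}$ ensures that $\sum_{m=0}^{\infty} e^{(m+1)\epsilon(\alpha+1)} e^{-m} < \infty$.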
As a result of the dominated convergence theorem, we obtain, as $n\to\infty$,
\begin{align*}
J_k &\sim a(R_n)R_n^{d-1} f(R_ne_1)^{2k-\ell} \int_{S^{d-1}} \hspace{-7pt} J(\theta) d\theta \int_0^\infty d\rho \int_{(\mathbb{R}^d)^{2k-\ell-1}} \hspace{-10pt}d{\bf y} \\
&\qquad \qquad \times e^{ -(2k-\ell)\rho - c^{-1} \sum_{i=1}^{2k-\ell-1} \langle \theta,y_i \rangle }\, {\bf 1} \bigl\{ {\bf y} \in E_{K,L}^{(\ell)} (\rho,\theta) \bigr\}\, h_{t,s}^{(\ell)}(0,{\bf y}) \\
&= a(R_n)R_n^{d-1} f(R_ne_1)^{2k-\ell} \ell ! \bigl( (k-\ell)! \bigr)^2 M_{\ell, K, L}(t,s)\,,
\end{align*}
where the last step follows from the rotation invariance of $h_\cdot$. Hence, we have
\begin{align*}
\text{Cov} &\bigl( G_{n,K,L}(t)\,, G_{n,K,L}(s) \bigr) \sim \sum_{\ell=1}^k n^{2k-\ell} a(R_n)R_n^{d-1} f(R_ne_1)^{2k-\ell} M_{\ell, K, L}(t,s)\,, \ \ \ n \to \infty\,.
\end{align*}
If $nf(R_ne_1) \to 0$, then the $k$th term in the sum is
asymptotically dominant, which proves statement $(i)$ of the
proposition. If instead $nf(R_ne_1) \to \infty$, the first term
becomes dominant, which establishes statement $(iii)$. Finally, if
$nf(R_ne_1) \to \xi \in (0,\infty)$, all the terms in the sum grow
at the same rate, which completes statement $(ii)$.
\end{proof}
Next, we present results on the finite-dimensional weak convergence
of ${\bf X}_n$ and $({\bf X}_n^{+}, {\bf X}_n^-)$ defined in
\eqref{e:def.Xn} and \eqref{e:def.Xnpm}, which closely parallel
those of Proposition \ref{p:fidi.heavy}. The reader may return to
Section \ref{s:limit.proc.light} to recall the definition and
properties of the limit $({\bf W}_\ell^+, {\bf W}_\ell^-)$. We omit the
proofs, since the argument of Proposition \ref{p:fidi.heavy}
applies again with minor modifications.
\begin{proposition} \label{p:fidi.light}
Assume the conditions of Theorem \ref{t:main.light}. Then the weak
convergences $(i)$--$(iii)$ in the theorem hold in the
finite-dimensional sense.
Furthermore, the following results also hold in the finite-dimensional sense. \\
$(i)$ If $nf(R_ne_1) \to 0$ as $n \to \infty$, then
$$
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow ({\bf W}_k^+, {\bf W}_k^-)\,.
$$
$(ii)$ If $nf(R_ne_1) \to \xi \in (0,\infty)$ as $n \to \infty$, then
$$
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow \left(\sum_{\ell=1}^k \xi^{2k-\ell} {\bf W}_\ell^+, \, \sum_{\ell=1}^k \xi^{2k-\ell} {\bf W}_\ell^- \right)\,.
$$
$(iii)$ If $nf(R_ne_1) \to \infty$ as $n \to \infty$, then
$$
({\bf X}_n^+, {\bf X}_n^-) \Rightarrow ({\bf W}_1^+, {\bf W}_1^-)\,.
$$
\end{proposition}
For the same reason as discussed in the preceding subsection, the
next proposition completes the proof of Theorem
\ref{t:main.light}.
\begin{proposition}
The sequences $({\bf X}_n^+)$ and $({\bf X}_n^-)$ are both tight in
$\mathcal D [0,\infty)$, regardless of the limit of $nf(R_ne_1)$.
\end{proposition}
\begin{proof}
We prove only the tightness of $({\bf X}_n^+)$, suppressing the superscript ``+'' on the functions and objects involved throughout the proof.
Proceeding in the same manner as in Proposition
\ref{p:tightness.heavy}, it suffices to show that there exists
$B>0$ such that
\begin{multline}
\tau_n^{-2} \mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n}\sum_{{\mathcal{Y}}_4 \subset \mathcal{P}_n} h_{n,t,s}({\mathcal{Y}}_1)\, h_{n,t,s}({\mathcal{Y}}_2)\,h_{n,s,r}({\mathcal{Y}}_3)\,h_{n,s,r}({\mathcal{Y}}_4)\, \label{e:pattern5} \\
\times {\bf 1} \bigl\{ \text{each } {\mathcal{Y}}_i \text{ has at least one common element} \\
\text{with at least one of the other three} \bigr\} \Bigr\} \leq B(t-r)^2
\end{multline}
for all $0 \leq r \leq s \leq t \leq L$ and $n \geq 1$. There are three cases to consider. \\
$[\text{I}]$ $\ell := |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| \in \{ 1,\dots,k \}$, ${\ell^{\prime}} := |{\mathcal{Y}}_3 \cap {\mathcal{Y}}_4| \in \{ 1,\dots,k \}$, and $({\mathcal{Y}}_1 \cup {\mathcal{Y}}_2) \cap ({\mathcal{Y}}_3 \cup {\mathcal{Y}}_4) = \emptyset$. \\
$[\text{I} \hspace{-1pt }\text{I}]$ $\ell := |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3| \in \{ 1,\dots,k \}$, ${\ell^{\prime}} := |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_4| \in \{ 1,\dots,k \}$, and $({\mathcal{Y}}_2 \cup {\mathcal{Y}}_3) \cap ({\mathcal{Y}}_1 \cup {\mathcal{Y}}_4) = \emptyset$. \\
$[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$ Each
${\mathcal{Y}}_i$ has at least one common element with at least one of the
other three, but neither $[\text{I}]$ nor $[\text{I}\hspace{-1pt
}\text{I}]$ holds.
Let $B$ denote the left-hand side of \eqref{e:pattern5}. As for case
$[\text{I}]$, mimicking the argument in Proposition
\ref{p:tightness.heavy}, we obtain
\begin{align*}
B \leq C^* \tau_n^{-2} n^{4k-\ell-{\ell^{\prime}}} a(R_n)^2 R_n^{2(d-1)} f(R_ne_1)^{4k-\ell-{\ell^{\prime}}} (t-r)^2 \leq C^* (t-r)^2,
\end{align*}
which proves \eqref{e:pattern5}. Since case $[\text{I}
\hspace{-1pt }\text{I}]$ can be handled in an analogous way, we turn
to case $[\text{I}\hspace{-1pt}\text{I}\hspace{-1pt}\text{I}]$.
Letting
$\ell := 4k-|{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4| \in \{ 3,\dots,3k
\}$, the same argument as in Proposition \ref{p:tightness.heavy}
yields
\begin{align*}
B \leq C^* \tau_n^{-2} n^{4k-\ell} a(R_n) R_n^{d-1} f(R_ne_1)^{4k-\ell} (t-r)^2 \leq C^* (t-r)^2
\end{align*}
which verifies \eqref{e:pattern5}.
\end{proof}
\section{Appendix}
We collect supplementary but important results needed to complete
the main theorems. The following result is known as the Palm theory
of Poisson point processes and is applied a number of times
throughout the proofs.
\begin{lemma} (Palm theory for Poisson point processes, \cite{arratia:goldstein:gordon:1989},
Corollary B.2 in \cite{bobrowski:adler:2014}, see also Theorem 1.6 in \cite{penrose:2003}) \label{l:palm1}
Let $(X_i)$ be $\text{i.i.d.}$ $\mathbb{R}^d$-valued random variables with common
density $f$. Let $\mathcal{P}_n$ be a Poisson point process on $\mathbb{R}^d$ with
intensity $nf$. Let $h({\mathcal{Y}})$, $h_i({\mathcal{Y}})$, $i=1,2,3,4$ be measurable
bounded functions defined for ${\mathcal{Y}} \in (\mathbb{R}^d)^k$. Then,
\begin{align*}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}} \subset \mathcal{P}_n} h({\mathcal{Y}}) \Bigr\} &= \frac{n^k}{k!} \mathbb{E} \bigl\{ h({\mathcal{Y}}) \bigr\}\,,
\end{align*}
and for every $\ell \in \{ 0,\dots,k \}$,
\begin{align*}
\mathbb{E} \Bigl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell \bigr\} \Bigr\} &= \frac{n^{2k-\ell}}{\ell ! \bigl( (k-\ell)! \bigr)^2}\, \mathbb{E} \Bigl\{ h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell \bigr\} \Bigr\}\,.
\end{align*}
Moreover, for every $\ell_{1}, \ell_{2}, \ell_{3} \in \{ 0,\dots,k
\}$ and $\ell \in \{ 0,\dots,2k \}$, there exists a constant
$C>0$, depending only on the $\ell_{i}$, $\ell$, and $k$, such that
\begin{align*}
&\mathbb{E} \biggl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n} h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, h_3({\mathcal{Y}}_3)\, \\
&\quad \quad \quad \quad \times {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell_{1}, \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3| = \ell_{2}, \, |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3 |= \ell_{3}, \, |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3| = 3k-\ell \bigr\} \biggr\} \\
&\quad = C n^{3k-\ell} \mathbb{E} \Bigl\{ h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, h_3({\mathcal{Y}}_3)\,\\
&\quad \quad \quad \quad \times {\bf 1} \bigl\{ |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_2| = \ell_{1}, \, |{\mathcal{Y}}_1 \cap {\mathcal{Y}}_3| = \ell_{2}, \, |{\mathcal{Y}}_2 \cap {\mathcal{Y}}_3| = \ell_{3}, \, |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3| = 3k-\ell \bigr\} \Bigr\}\,.
\end{align*}
Similarly, for $\ell_{i,j} \in \{ 0,\dots,k \}$, $m_{p,q,r} \in \{
0,\dots,k \}$, $i,j,p,q,r \in \{1,2,3,4\}$ with $i \neq j, p \neq
q, p \neq r, q \neq r$, and $\ell \in \{0,\dots,3k\}$, there
exists a constant $C>0$, which depends only on $\ell_{i,j}$,
$m_{p,q,r}$, $\ell$, and $k$ such that
\begin{align*}
&\mathbb{E} \biggl\{ \sum_{{\mathcal{Y}}_1 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_2 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_3 \subset \mathcal{P}_n} \sum_{{\mathcal{Y}}_4 \subset \mathcal{P}_n} h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, h_3({\mathcal{Y}}_3)\, h_4({\mathcal{Y}}_4)\\
&\quad \quad \quad \times {\bf 1} \bigl\{ |{\mathcal{Y}}_i \cap {\mathcal{Y}}_j| = \ell_{i,j}, \, i, j \in \{ 1,2,3,4 \}, i \neq j\,, \\
&\quad \quad \quad \quad \quad \quad |{\mathcal{Y}}_p \cap {\mathcal{Y}}_q \cap {\mathcal{Y}}_r| = m_{p,q,r}, \, p,q,r \in \{ 1,2,3,4 \}, \, p \neq q, p \neq r, q \neq r, \\
&\quad \quad \quad \quad \quad \quad \quad |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4| = 4k-\ell \bigr\} \biggr\} \\
&\quad = C n^{4k-\ell} \mathbb{E} \Bigl\{ h_1({\mathcal{Y}}_1)\, h_2({\mathcal{Y}}_2)\, h_3({\mathcal{Y}}_3)\, h_4({\mathcal{Y}}_4)\\
&\quad \quad \quad \quad \quad \times {\bf 1} \bigl\{ |{\mathcal{Y}}_i \cap {\mathcal{Y}}_j| = \ell_{i,j}, \, i, j \in \{ 1,2,3,4 \}, i \neq j\,, \\
&\quad \quad \quad \quad \quad \quad \quad \quad |{\mathcal{Y}}_p \cap {\mathcal{Y}}_q \cap {\mathcal{Y}}_r| = m_{p,q,r}, \, p,q,r \in \{ 1,2,3,4 \}, \, p \neq q, p \neq r, q \neq r, \\
&\quad \quad \quad \quad \quad \quad \quad \quad \quad |{\mathcal{Y}}_1 \cup {\mathcal{Y}}_2 \cup {\mathcal{Y}}_3 \cup {\mathcal{Y}}_4| = 4k-\ell \bigr\} \Bigr\}\,.
\end{align*}
\end{lemma}
\end{document}
\begin{document}
\newtheorem{corollary}{Corollary}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{remark}{Remark}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{conjecture}{Conjecture}[section]
\newtheorem{problem}{Problem}[section]
\renewcommand{\proofname}{\textup{\textbf{Proof}}}
\title{Ordering connected graphs by their Kirchhoff indices\thanks{
The first author is supported by NNSF of China (No. 11201227), China Postdoctoral Science Foundation (2013M530253) and Natural Science Foundation of Jiangsu Province (BK20131357); the second author was supported by the National Research Foundation funded by the Korean government with grant No. 2013R1A1A2009341; the third author is supported by NNSF of China (No. 11271256).
\newline Email addresses: [email protected] (K. Xu), [email protected] (K. C. Das),
\newline [email protected] (X.D. Zhang).}}
\begin{center}
(Received Nov. 24, 2014)
\end{center}
\baselineskip=0.23in
\begin{abstract}
The Kirchhoff index
$Kf(G)$ of a graph $G$ is the sum of resistance distances between
all unordered pairs of vertices; it was introduced by Klein and Randi\'c. In this paper we characterize the extremal graphs with respect to the Kirchhoff index among all graphs obtained by deleting $p$ edges from a complete graph $K_n$ with $p\leq\lfloor\frac{n}{2}\rfloor$, and we obtain a sharp upper bound on the Kirchhoff index of these graphs. In addition, the graphs with the first to ninth maximal Kirchhoff indices are completely determined among all connected graphs of order $n>27$.
\end{abstract}
\textit{Keywords:} Graph; Distance (in graph);
Kirchhoff index; Laplacian spectrum\\
\textit{AMS Subject Classifications {\em(2010)}: }~05C50, 05C12,
05C35.
\baselineskip=0.30in
\section{Introduction}
\ \indent Let $G$ be a connected graph with vertices labeled as $v_1,\,v_2,
\ldots,\,v_n$. The distance between vertices $v_i$ and $v_j$,
denoted by $d_G(v_i,\,v_j$), is the length of a shortest path
between them. The famous Wiener index $W(G)$ \cite{Wiener} is the
sum of distances between all unordered pairs of vertices, that is,
$W(G) =\sum
_{i<j}d_G(v_i,\,v_j).$
In 1993, Klein and Randi\'c \cite{Klein93} introduced a new distance
function named resistance distance based on electrical network
theory. They viewed $G$ as an electrical network $N$ by replacing
each edge of $G$ with a unit resistor; the resistance distance
between $v_i$ and $v_j$, denoted by $r_G(v_i,\,v_j)$, is defined to
be the effective resistance between them in $N$. Like the
long-recognized shortest-path distance, the resistance distance is
intrinsic to the graph; it not only has nice purely mathematical
and physical interpretations \cite{Klein93, Klein97}, but also
substantial potential for chemical applications.
In fact, the shortest-path distance may be imagined to be more
relevant when there is corpuscular communication (along edges)
between two vertices, whereas the resistance distance may be more
relevant when the communication is wave- or fluid-like. That the
chemical communication in molecules is rather wave-like suggests
the utility of this concept in chemistry. Accordingly, in recent
years the resistance distance has been well studied in the
mathematical and chemical literature \cite{BKlein02,Bapat,Bonchev94,DA1,DA2,DXG2012,DChen2013,DChen2014}.
In analogy with the Wiener index, the Kirchhoff index (or resistance index)
\cite{Bonchev94} is defined as
$$Kf(G) =\sum _{i<j}r_G(v_i,\,v_j).$$
Although the Kirchhoff index is a useful structure-descriptor, its
computation is in general a hard problem \cite{BKlein02}; it may,
however, be computed for specific classes of graphs. Since the
Kirchhoff index and the Wiener index coincide for trees, it is
natural to study the Kirchhoff index of topological structures
containing cycles. Throughout this paper we denote by $P_n$ (resp. $C_n$, $K_n$) the path graph (resp. cycle graph, complete graph) on $n$
vertices. Some nice mathematical results can be found in \cite{Lukovits99,Yang2008}.
All graphs considered in this paper are finite and simple. For two
nonadjacent vertices $v_i$ and $v_j$, we use $G+e$ to denote the
graph obtained by inserting a new edge $e=v_i\,v_j$ in $G$.
Similarly, for $e\in E(G)$ of graph $G$, let $G-e$ be the subgraph
of $G$ obtained by deleting the edge $e$ from $E(G)$. The complement
of graph $G$ is always denoted by $\overline{G}$. For two vertex
disjoint graphs $G_1$ and $G_2$, we denote by $G_1\bigcup G_2$ the
graph which consists of two connected components $G_1$ and $G_2$.
The \textit{join} of $G_1$ and $G_2$, denoted by $G_1 \bigvee G_2$,
is the graph with vertex set $V (G_1)\bigcup V (G_2)$ and edge set
$E(G_1)\bigcup E(G_2)\bigcup \{u_iv_j : u_i\in V(G_1), v_j\in
V(G_2)\}$. For other undefined notation and terminology from graph
theory, the readers are referred to \cite{BM1976}.
For a graph $G$ with vertex set $V=\{v_1,\,v_2,\ldots,\,v_n\}$, we
denote by $d_i$ the degree of the vertex $v_i$ in $G$ for $i=1,\,2,
\ldots , n$. Assume that $A(G)$ is the $(0, 1)$-adjacency matrix of
$G$ and $D(G)$ is the diagonal matrix of vertex degrees. The
Laplacian matrix of $G$ is $L(G)=D(G)-A(G)$. The Laplacian
polynomial $Q(G, \lambda)$ of $G$ is the characteristic polynomial
of its Laplacian matrix, $Q(G,\lambda)=det(\lambda I_n-L(G))=
\sum\limits_{k=0}^{n}(-1)^kc_k\lambda^{n-k}$. The Laplacian matrix
$L(G)$ has nonnegative eigenvalues
$n\geq\mu_1\geq\mu_2\geq\cdots\geq\mu_n=0$ \cite{CMS1995}. Denote by
$S(G)=\{\mu_1,\,\mu_2,\ldots,\,\mu_n\}$ the spectrum of $L(G)$,
i.e., the Laplacian spectrum of $G$. If the eigenvalue $\mu_i$
appears $l_i>1$ times in $S(G)$, we write them as $\mu_i^{(l_i)}$
for the sake of convenience.
In 1996, Gutman and Mohar \cite{Gut1996} obtained the following nice result, which establishes a relation between the Kirchhoff index and the Laplacian spectrum:
\begin{equation}
Kf(G)=n\sum\limits_{i=1}^{n-1}\frac{1}{\mu_i}~\label{e1}
\end{equation}
for any connected graph of order $n\geq 2$.
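Equation (\ref{e1}) is straightforward to check numerically. The following sketch (ours, not part of the paper; it assumes the NumPy library and uses function names of our own choosing) computes $Kf(G)$ both from the Laplacian spectrum and from resistance distances obtained via the Moore-Penrose pseudoinverse of $L(G)$, and verifies that the two routes agree on the cycle $C_5$, for which the known formula $Kf(C_n)=(n^3-n)/12$ gives $10$.

```python
import numpy as np

def kirchhoff_index(A):
    """Kf(G) via equation (1): n times the sum of reciprocals of the
    nonzero Laplacian eigenvalues (G assumed connected)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the eigenvalue 0
    return n * np.sum(1.0 / mu)

def kirchhoff_via_resistance(A):
    """Kf(G) as the sum of resistance distances, computed from the
    pseudoinverse L+ of the Laplacian: r(i,j) = L+_ii + L+_jj - 2 L+_ij."""
    n = A.shape[0]
    Lp = np.linalg.pinv(np.diag(A.sum(axis=1)) - A)
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp
    return R[np.triu_indices(n, k=1)].sum()

# the cycle C_5, for which Kf(C_n) = (n^3 - n)/12 gives Kf(C_5) = 10
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
```

Both routes return $10$ for $C_5$ up to floating-point error.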
Let ${\cal{G}}(n)$ be the set of connected graphs of order $n$. In
this paper, we determine the graphs from ${\cal{G}}(n)$ with $n>9$
having the first to ninth minimal Kirchhoff indices, and we
characterize all graphs from ${\cal{G}}(n)$ with $n>27$ having the
first to ninth maximal Kirchhoff indices.
\section{Preliminaries}
\ \indent In this section we will list some known lemmas as necessary preliminaries.
\begin{lemma}\label{NEW0} {\em(\cite{GMS1990})} Let $G$ be a graph and $G'=G+e$ the graph obtained by inserting a new edge into $G$. Then we have
$$\mu_1(G')\geq \mu_1(G)\geq \mu_2(G')\geq\mu_2(G)\geq\cdots\geq\mu_n(G')=\mu_n(G)=0.$$
\end{lemma}
Combining Lemma \ref{NEW0} with the fact that
$\sum\limits_{i=1}^{n-1}
\mu_i(G+e)-\sum\limits_{i=1}^{n-1}
\mu_i(G) = 2$, and using equation (\ref{e1}),
the following lemma is easily obtained.
\begin{lemma}\label{NEW1} {\em(\cite{Lukovits99})} Let $G$ be a connected graph with $e\in E(G)$ and two nonadjacent
vertices $v_i$ and $v_j$ in $V(G)$. Then we have
\begin{itemize}
\item [$(1)$] $Kf(G-e)> Kf(G)$ where $G-e$ is connected;
\item [$(2)$] $Kf(G)>Kf(G+e')$ where $e'=v_iv_j$.
\end{itemize}
\end{lemma}
Based on Lemma \ref{NEW1} $(1)$, the corollary below follows immediately.
\begin{corollary} \label{CO0} Suppose that $G$ is a connected graph of order $n$ with $m\geq n$ edges, and let $T$ be a spanning tree of $G$. Then we have $Kf(G)<Kf(T)$.
\end{corollary}
\begin{lemma}\label{NEW2} {\em(\cite{Merris1994})} Let $G$ be a graph of order $n$ with $S(G)=\{\mu_1,\,\mu_2, \ldots ,\,\mu_{n-1}, 0\}$.
Then $S(\overline{G})=\{n-\mu_1,\, n-\mu_2,\ldots ,\,n-\mu_{n-1},
0\}$.
\end{lemma}
\begin{lemma} \label{NEW4} {\em(\cite{Klein93})}
Let $G$ be a connected graph. Then we have $W(G) \geq Kf(G),$ with
equality if and only if $G$ is a tree.
\end{lemma}
Before proceeding, we first introduce some necessary notation and
definitions. A vertex $v$ of a tree $T$ is called a
\textit{branching point} if $d(v) \geq 3$. A tree is said to be
starlike if exactly one of its vertices has degree greater than two.
Let $P_n$ denote the path on $n$ vertices. By
$T_n(n_1,\,n_2,\ldots,\,n_k)$ we denote the starlike tree which has
a vertex $v$ of degree $k\geq 3$ and which has the property
$$T_n(n_1,\,n_2,\ldots,\,n_k)-v=P_{n_1}\cup P_{n_2} \cup \cdots\cup P_{n_k}.$$
This tree has $n_1+n_2+\cdots+n_k+1=n$ vertices, and it is assumed
that $n_1\geq n_2\geq \cdots\geq n_k\geq 1$. We say that the starlike
tree $T_n(n_1,\,n_2,\ldots,\,n_k)$ has $k$ branches, the lengths of
which are $n_1,\,n_2,\ldots,\,n_k$, respectively.
\vspace*{3mm}
Note that any tree with exactly one branching point is a starlike
tree. Assume that $T$ is a tree of order $n$ with exactly two
branching points $v_1$ and $v_2$ with $d(v_1) = r$ and $d(v_2) = t$.
The orders of $r-1$ components, which are paths, of $T-v_1$ are
$p_1,\ldots,\, p_{r-1}$, the order of the component which is not a
path of $T-v_1$ is $p_r=n-p_1-\cdots-p_{r-1}-1$. The orders of $t-1$
components, which are paths, of $T-v_2$ are $q_1,\ldots, q_{t-1}$,
the order of the component which is not a path of $T-v_2$ is
$q_t=n-q_1-\cdots-q_{t-1}-1$. We denote this tree by $T=T_n(
p_1,\ldots,\, p_{r-1}; q_1,\ldots, \,q_{t-1})$, where $r\leq t$,
$p_1\geq \cdots \geq p_{r-1}$ and $q_1 \geq \cdots \geq q_{t-1}$.
\begin{figure}
\caption{The trees $T_{10}(5,\,3,\,1)$ and $T_9(1^2;\,1^2)$}
\label{G1}
\end{figure}
For convenience, when considering the trees
$T_n(n_1,\,n_2,\ldots,\,n_k,\ldots,\,n_m)$ or $T_n(
p_1,\ldots,\,p_k,\\\ldots,\,p_{r-1};$ $
q_1,\ldots,\,q_k,\ldots,\,q_{t-1})$, we use the symbols
$n_{k}^{l_{k}}$ or $p_{k}^{l_{k}}$ (resp. $q_{k}^{l_{k}}$) to
indicate that the number of $n_{k}$ or $p_k$ (resp. $q_k$) is
$l_{k}>1$ in the following. For example,
$T_{15}(2,\,2,\,3,\,3,\,4)$ will be written
as $T_{15}(2^{2},\,3^{2},\,4)$. As another two examples, the trees $T_{10}(5,\,3,\,1)$ and $T_9(1^2;\,1^2)$ are shown in Figure \ref{G1}.
In the following lemma a partial result from \cite{LLL2010} is summarized.
\begin{lemma}\label{LEM5} {\em(\cite{LLL2010})} ~Suppose that $T$ is a tree of order $n\geq 9$.
Then we have
\begin{eqnarray*}W(P_n)&>&W(T_n(n-3,\,1^2))>W(T_n(n-4,\,2,\,1))>W(T_n(1^2;\,1^2))>W(T_n(n-5,\,3,\,1))\\ &>&W(T_n(n-4,\,1^3))=W(T_n(1^2;\,2,\,1))>W(T_n(n-6,\,4,\,1))>W(T).\end{eqnarray*}\end{lemma}
Combining Lemmas \ref{NEW4} and \ref{LEM5}, the following corollary
can be easily obtained.
\begin{corollary} \label{CO1} Suppose that $T$ is a tree of order $n\geq 9$.
Then we have
\begin{eqnarray*}Kf(P_n)&>&Kf(T_n(n-3,\,1^2))>Kf(T_n(n-4,\,2,\,1))>Kf(T_n(1^2;\,1^2))>Kf(T_n(n-5,\,3,\,1))\\ &>&Kf(T_n(n-4,\,1^3))=Kf(T_n(1^2;\,2,\,1))>Kf(T_n(n-6,\,4,\,1))>Kf(T).\end{eqnarray*}
\end{corollary}
Let $P_n^k$ be the graph obtained by identifying a pendent vertex of the path $P_{n-k+1}$ with one vertex of a cycle $C_k$.
\begin{lemma}\label{LEM6} {\em(\cite{Yang2008})} For any connected graph $G$ of order $n>3$ with $n$ edges, we have
\begin{eqnarray*}
Kf(G)\leq \frac{n^3-11n+18}{6}
\end{eqnarray*}
with equality if and only if $G\cong P_n^3$.
\end{lemma}
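As an illustrative numerical check of Lemma \ref{LEM6} (ours, not from \cite{Yang2008}; it assumes NumPy and uses a hypothetical vertex labelling of $P_n^k$), the sketch below builds $P_5^3$ and confirms that its Kirchhoff index attains the bound $(n^3-11n+18)/6$.

```python
import numpy as np

def kf(A):
    """Kirchhoff index from the Laplacian spectrum, equation (1)."""
    n = A.shape[0]
    mu = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))[1:]
    return n * np.sum(1.0 / mu)

def lollipop(n, k):
    """P_n^k: a cycle C_k on vertices 0..k-1 with the path
    k-1, k, ..., n-1 attached at vertex k-1 (our labelling)."""
    A = np.zeros((n, n))
    for i in range(k):                        # cycle edges
        A[i, (i + 1) % k] = A[(i + 1) % k, i] = 1
    for i in range(k - 1, n - 1):             # path edges
        A[i, i + 1] = A[i + 1, i] = 1
    return A

n = 5
bound = (n**3 - 11*n + 18) / 6                # upper bound of Lemma 2.6
```

For $n=5$ the bound equals $44/3$, and $Kf(P_5^3)$ matches it exactly.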
Denote by $C_{p,q}^{l}$ the graph formed by two disjoint cycles $C_p$ and $C_q$ linked by a path of length $l$ (see Figure \ref{G2}). In \cite{ZhangYang09}, the authors
determined the graph maximizing the Kirchhoff index among all connected graphs of order $n$ with $n+1$ edges and exactly two independent cycles. Recently, Feng, Yu et al. and one of the
present authors \cite{FYXJ2014} completely characterized the extremal graph with maximal Kirchhoff index among all connected graphs of order $n$ with $n+1$ edges.
\begin{figure}
\caption{The graph $C_{p,q}^{l}$}
\label{G2}
\end{figure}
\begin{lemma}\label{LEM7} {\em(\cite{FYXJ2014})} Let $G$ be a connected graph of order $n$ and with $n+1$ edges $(n\geq 8)$. Then we have
\begin{eqnarray*}
Kf(G)\leq \frac{n^3-21n+36}{6}
\end{eqnarray*} with equality if and only if $G\cong C_{3,3}^{n-5}$.
\end{lemma}
An invariant related to the Kirchhoff index is defined in \cite{Yang2008} as follows: $Kf_{v_i}(G)=\sum\limits_{j\neq i}r_G(v_i,v_j)$. In the following lemma a nice formula is
presented for the Kirchhoff index of a graph with a cut vertex.
\begin{lemma}\label{LEM8}{\em(\cite{ZhangYang09})} Let $x$ be a cut vertex of connected graph $G$ such that $G=G_1\bigcup G_2$, $V(G_1)\bigcap V(G_2)=\{x\}$ and $|V(G_i)|=n_i$
for $i=1,2$. Then we have \begin{eqnarray*}Kf(G)=Kf(G_1)+Kf(G_2)+(n_1-1)Kf_x(G_2)+(n_2-1)Kf_x(G_1).\end{eqnarray*}
\end{lemma}
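Lemma \ref{LEM8} can be checked on a small example. The sketch below (ours; assuming NumPy) takes the ``bowtie'' graph, two triangles glued at a cut vertex $x$, and compares $Kf(G)$ computed from resistance distances with the right-hand side of the lemma, using $G_1=G_2=C_3$, $n_1=n_2=3$, $Kf(C_3)=2$ and $Kf_x(C_3)=4/3$.

```python
import numpy as np

def resistance_matrix(A):
    """Pairwise resistance distances via the Laplacian pseudoinverse."""
    Lp = np.linalg.pinv(np.diag(A.sum(axis=1)) - A)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# bowtie graph: two triangles G_1, G_2 glued at the cut vertex x = 0
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)]:
    A[i, j] = A[j, i] = 1
R = resistance_matrix(A)
kf_G = R[np.triu_indices(5, 1)].sum()

# right-hand side of the lemma with G_1 = G_2 = C_3 and n_1 = n_2 = 3
R3 = resistance_matrix(np.ones((3, 3)) - np.eye(3))
kf_C3 = R3[np.triu_indices(3, 1)].sum()      # Kf(C_3) = 2
kf_x = R3[0, 1:].sum()                       # Kf_x(C_3) = 4/3 at any vertex
rhs = 2 * kf_C3 + 2 * kf_x + 2 * kf_x
```

Both sides evaluate to $28/3$ for this example.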
Note that \cite{XLDGF2014} $P_n$ uniquely has the largest Wiener index among all trees of order $n$. From Lemma \ref{LEM8}, the corollary below follows immediately.
\begin{corollary} \label{CAD1} Let $G_0$ be a connected graph with $v_0\in V(G_0)$, and let $T_t$ be a tree of order $t\geq 2$ with $x\in V(T_t)$. Assume that $G$ is the graph obtained by identifying the vertex $v_0$ in $G_0$ with $x\in V(T_t)$ and that $G^{\prime}$ is obtained by identifying $v_0 \in V(G_0)$ with a pendent vertex of the path $P_t$. Then \begin{eqnarray*}Kf(G)\leq Kf(G^{\prime})\end{eqnarray*} with equality holding if and only if $G\cong G^{\prime}$, i.e., $T_t\cong P_t$ with $x$ being a pendent vertex of $T_t$. \end{corollary}
\begin{lemma}\label{LEA9} {\em(\cite{Yang2008})}
Among all connected graphs of order $n$ with $n$ edges and cycle length $k$, the graph $P_n^k$ uniquely has the maximal Kirchhoff index. \end{lemma}
\section{Main results}
\ \indent In this section, we order the graphs from ${\cal{G}}(n)$, with $n$ not very small, by their Kirchhoff indices. In what follows, we deal in turn with
two cases: graphs from ${\cal{G}}(n)$ with smaller Kirchhoff indices and those with larger Kirchhoff indices.
\subsection{The ordering of connected graphs with smaller Kirchhoff indices}
\ \indent Lukovits et al. \cite{Lukovits99}
showed that, among all connected graphs of order $n$, $Kf(G) \geq n -1$ with equality if and only if $G$ is the
complete graph $K_n$. It therefore suffices to order the graphs from ${\cal{G}}(n)\setminus \{K_n\}$ by their Kirchhoff indices.
For convenience, for a subgraph $G_0$ of $K_n$, we denote by $K_n-G_0$ the graph obtained by deleting all edges of $G_0$ from $K_n$. From the structure of $K_n-G_0$, we see
that $\overline{K_n-G_0}\cong \overline{K_{n-|V(G_0)|}}\bigcup G_0$. For consistency of notation, we write $G_1(n)=K_n$ and $G_2(n)=K_n-K_2$. Moreover, let $G_3(n)=K_n-2K_2$
and $G_4(n)=K_n-K_{1,2}$. Next we consider the graphs obtained by deleting three edges from $K_n$.
Assume that
$$G_5(n)=K_n-3K_2;~~~
G_6(n)=K_n-(K_{1,2}\cup K_2);~~~
G_7(n)=K_n-P_4;$$
$$G_8(n)=K_n-C_3;~~~
G_9(n)=K_n-K_{1,3}. $$
In the following theorem, the graphs from ${\cal{G}}(n)$ with $n\geq 11$ having the first to ninth minimal Kirchhoff indices are completely determined.
\begin{theorem}\label{TH1.0} {\em (\cite{DXG2012})} Let $n\geq 11$ and let $G\in {\cal{G}}(n)$ be a graph other than those in the set $\{G_i(n)\,|\,i\in \{1,2,\ldots,9\}\}$. Then we have
\begin{eqnarray*}Kf(G)>Kf(G_9(n))>Kf(G_8(n))>Kf(G_7(n))>Kf(G_6(n))>Kf(G_5(n))\\ \ \indent\ \indent>Kf(G_4(n))>Kf(G_3(n))>Kf(G_2(n))>Kf(G_1(n)).\end{eqnarray*} \end{theorem}
In view of Theorem \ref{TH1.0}, a related problem naturally arises:
\textit{For an integer $4\leq p\leq \Big\lfloor\displaystyle{\frac{n}{2}}\Big\rfloor$, which graph has the extremal Kirchhoff index among all connected graphs obtained by deleting $p$ edges from $K_n$?}
Before solving the above problem, we need a related lemma as follows:
\begin{lemma} {\em(\cite{DA0})} \label{m2}
Let $G$ be a connected graph with at least one edge. Then
\begin{equation} \label{t10}
\mu_1(G)\leq \max_{v_iv_j\in E(G)}|N_i\cup N_j|
\end{equation}
where $N_i$ is the neighbor set of vertex $v_i\in V(G)$\,. This upper bound for $\mu_1(G)$ does not exceed $n$.
\end{lemma}
In the following theorem we will give a complete solution of this problem for the minimal case.
\begin{theorem} \label{NEWTH1} For any integer $2\leq p\leq \Big\lfloor\displaystyle{\frac{n}{2}}\Big\rfloor$ and any graph $G$ obtained by deleting $p$ edges from $K_n$, we have
\begin{eqnarray}
Kf(G)\geq n-1+\frac{2p}{n-2}\label{ps1}
\end{eqnarray}
with equality holding in {\em(\ref{ps1})} if and only if $G\cong K_n-p\,K_2$.
\end{theorem}
\begin{proof} Denote by $\overline{\mu}_i$ with $i=1,\,2,\ldots,\,n$ the non-increasing Laplacian eigenvalues of $\overline{G}$. By Lemma \ref{NEW2}, we have
$\overline{\mu}_i=n-\mu_{n-i}$ for $i=1,\,2,\ldots,\,n-1$. Since $\overline{G}$ is the complement graph of $G$\,, we have $\overline{m}=p$ with
$2\leq p\leq \left\lfloor\frac{n}{2}\right\rfloor$, where $\overline{m}$ is the number of edges in $\overline{G}$. Since
$$\overline{m}=p\leq \left\lfloor\frac{n}{2}\right\rfloor\,,$$
$\overline{G}$ must be a disconnected graph. Let $k$ be the number of connected components in $\overline{G}$. Also let $\overline{n}_i$ and $\overline{m}_i$
be the number of vertices and number of edges in the $i$-th component of $\overline{G}$ such that $\overline{n}_1\geq \overline{n}_2\geq \cdots\geq
\overline{n}_{k-1}\geq \overline{n}_k$\,. Thus we have
$$\sum\limits^k_{i=1}\overline{n}_i=n\,\,\,\mbox{ and }\,\,\,\sum\limits^k_{i=1}\overline{m}_i=\overline{m}=p.$$
From the above, it follows that
$$p=\sum\limits^k_{i=1}\overline{m}_i\geq \sum\limits^k_{i=1}(\overline{n}_i-1)=n-k,\,\,\,\mbox{ that is, }\,\,k\geq n-p.$$
\vspace*{3mm}
Therefore there are at least $n-p$ Laplacian eigenvalues which are zero
in $\overline{G}$\,, that is,
\begin{equation}
\overline{\mu}_i=0,\,\,i=p+1,\,p+2,\ldots,\,n.\label{1e0}
\end{equation}
Using the above, we get
\begin{equation}
\sum\limits^{n-1}_{i=1}\overline{\mu}_i=\sum\limits^{p}_{i=1}\overline{\mu}_i=2p.\label{1e1}
\end{equation}
\vspace*{3mm}
Since $\overline{G}$ is disconnected, by Lemma \ref{m2}, we have
$$\overline{\mu}_i\leq n-1,\,\,\,i=1,\,2,\ldots,\,n-1.$$
Now we have
\begin{eqnarray}
Kf(G)&=&\sum\limits^{n-1}_{i=1}\frac{n}{\mu_i}\nonumber\\[2mm]
&=&\sum\limits^{n-1}_{i=1}\frac{n}{n-\overline{\mu}_{n-i}}\,\,\,\mbox{ as $\mu_i=n-\overline{\mu}_{n-i}$}\nonumber\\[2mm]
&=&n-1-p+\sum\limits^{p}_{i=1}\frac{n}{n-\overline{\mu}_i}\,\,\,\mbox{ by (\ref{1e0})}\nonumber\\[2mm]
&\geq&n-1-p+\frac{p^2}{\sum\limits^{p}_{i=1}\displaystyle{\frac{n-\overline{\mu}_i}{n}}}\,\,\,\mbox{ by the AM-HM inequality}\label{ps2}\\[2mm]
&=&n-p-1+\frac{p}{1-2/n}\,\,\,\mbox{ as }\sum\limits^{p}_{i=1}\overline{\mu}_i=2p\nonumber\\[2mm]
&=&n-1+\frac{2p}{n-2}\,.\nonumber
\end{eqnarray}
This proves the first part.
\vspace*{3mm}
Now suppose that the equality holds in (\ref{ps1}). Then all inequalities in the above argument must be equalities.
From the equality in (\ref{ps2}), we get
$$\frac{n}{n-\overline{\mu}_1}=\frac{n}{n-\overline{\mu}_2}=\cdots=\frac{n}{n-\overline{\mu}_p}\,,\,\,\,\mbox{ that is,
}\,\,\overline{\mu}_1=\overline{\mu}_2=\cdots=\overline{\mu}_p\,.$$
Using (\ref{1e1}), from the above, we get
$$\overline{\mu}_1=\overline{\mu}_2=\cdots=\overline{\mu}_p=2.$$
From the above, we conclude that each connected component of $\overline{G}$ with $\overline{n}_i\geq 2$ is isomorphic to
$K_2$; otherwise, the largest Laplacian eigenvalue of $\overline{G}$ would satisfy $\overline{\mu}_1\geq 3$, a
contradiction. Hence $\overline{G}\cong pK_2\cup (n-2p)K_1=pK_2\cup\overline{ K_{n-2p}}$\,,
that is, $G\cong K_n-p\,K_2$\,.
\vspace*{3mm}
Conversely, let $G$ be isomorphic to the graph $K_n-p\,K_2$\,. Then the Laplacian spectrum of $G$ is
$$S(G)=\{n^{(n-p-1)}\,,(n-2)^{(p)},\,0\}\,.$$
Hence the equality holds in (\ref{ps1}).
\end{proof}
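The equality case of Theorem \ref{NEWTH1} can be checked numerically. The sketch below (ours; assuming NumPy, with our own construction of the graphs) verifies for $n=10$, $p=3$ that $Kf(K_n-pK_2)$ equals the bound $n-1+2p/(n-2)$ of (\ref{ps1}), while deleting the same number of edges as a star $K_{1,p}$ gives a strictly larger Kirchhoff index.

```python
import numpy as np

def kf(A):
    """Kirchhoff index via equation (1)."""
    n = A.shape[0]
    mu = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))[1:]
    return n * np.sum(1.0 / mu)

def complete_minus_matching(n, p):
    """K_n with a matching pK_2 (p pairwise disjoint edges) deleted."""
    A = np.ones((n, n)) - np.eye(n)
    for i in range(p):
        A[2*i, 2*i + 1] = A[2*i + 1, 2*i] = 0
    return A

def complete_minus_star(n, p):
    """K_n with a star K_{1,p} (p edges at one vertex) deleted."""
    A = np.ones((n, n)) - np.eye(n)
    A[0, 1:p + 1] = A[1:p + 1, 0] = 0
    return A

n, p = 10, 3
lower_bound = n - 1 + 2*p / (n - 2)   # right-hand side of (ps1): 9.75
```

The matching deletion attains the bound exactly; the star deletion exceeds it, consistent with the uniqueness statement of the theorem.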
\begin{lemma} {\rm(\cite{Merris1994})} \label{dk7} Let $G$ be a simple graph on $n$ vertices which has at least one edge. Then
\begin{equation}
\mu_1\geq \Delta+1\,,\label{dlu8}
\end{equation}
where $\Delta$ is the maximum degree in $G$\,. Moreover, if $G$ is connected, then the
equality holds in {\em(\ref{dlu8})} if and only if $\Delta=n-1$.
\end{lemma}
Let $a_1,\,a_2,\ldots,\,a_n$ be positive real numbers. We define $A_k$ to be the average of all products of $k$ of the $a_i$'s, that
is,
\begin{eqnarray}
A_1&=&\frac{a_1+a_2+\cdots+a_n}{n}\nonumber\\
&&\nonumber\\
A_2&=&\frac{a_1a_2+a_1a_3+\cdots+a_1a_n+a_2a_3+\cdots+a_{n-1}a_n}{\frac{1}{2}\,n(n-1)}\nonumber\\
\vdots\nonumber\\
A_{n-1}&=&\frac{a_2\ldots a_{n-1}\,a_n+a_1a_3\ldots a_{n-1}\,a_n+\cdots+a_1a_2\ldots a_{n-2}\,a_n+a_1a_2\ldots a_{n-1}}{n}\nonumber\\
A_n&=&a_1a_2\ldots a_n\,.\nonumber
\end{eqnarray}
Hence the arithmetic mean is simply $A_1$ and the geometric mean is
$A_n^{1/n}$\,. The following result generalizes this:
\begin{lemma} {\rm (Maclaurin's Symmetric Mean Inequality \cite{BW})} \label{1m2}
For positive real numbers $a_1,~a_2,\ldots,~a_n$,
$$A_1\geq A_2^{1/2}\geq A_3^{1/3}\geq \ldots \geq A_{n-1}^{1/(n-1)}\geq A_n^{1/n}.$$
Equality holds if and only if $a_1=a_2=\cdots=a_n$.
\end{lemma}
\begin{theorem}\label{Weak} For any integer $2\leq p\leq \lfloor\frac{n}{2}\rfloor$ and any graph $G$ obtained by deleting $p$ edges from $K_n$, we have
\begin{eqnarray}
Kf(G)\leq n-1-p+\frac{n}{n-p-1}+\frac{(p-1)\,\delta\,n^{n-p-1}\,(n-1)^{p-2}}{t(G)}\,,\label{e1ps1}
\end{eqnarray}
where $t(G)$ is the number of spanning trees in $G$ and $\delta$ is the minimum degree in $G$. Moreover, the equality holds in $(\ref{e1ps1})$
if and only if $G\cong K_n-K_{1,\,p}$.
\end{theorem}
\begin{proof} For the sake of consistency, $\overline{\mu}_i$ with $i=1,\,2,\ldots,\,n$, $\overline{m}$, $\overline{m}_i$ and $\overline{n}_i$ are defined as in
the proof of Theorem \ref{NEWTH1}. Then $\overline{G}$ is a graph of order $n$ with $p$ edges and at least $n-p$ components. It follows that
\begin{equation}
\overline{\mu}_i=0,\,\,i=p+1,\,p+2,\ldots,\,n,~\mbox{ that is, }~\mu_i=n,~i=1,\,2,\ldots,\,n-p-1.\label{1ew11}
\end{equation}
Moreover, we have
\begin{equation}
\sum\limits^p_{i=1}\,\overline{\mu}_i=2p. \label{dase1}
\end{equation}
Now we assume that $\overline{G}=\bigcup\limits_{i=1}^{n-p}H_{i}$ and $\overline{\Delta}$ is the maximum degree in $\overline{G}$. Then, by Lemmas \ref{m2} and \ref{dk7}, we have
\begin{eqnarray}
\overline{\Delta}+1\leq \overline{\mu}_1=\max\limits_{1\leq i\leq n-p}\mu_1(H_i)\leq p+1.\label{1ew1}
\end{eqnarray}
Applying Lemma \ref{1m2} to the $p-1$ numbers $a_i=n-\overline{\mu}_{i+1}$, $i=1,\,2,\ldots,\,p-1$, we get $A_1\geq A_{p-2}^{1/(p-2)}$\,, that is,
\begin{equation}
\frac{\sum\limits^p_{i=2}\,(n-\overline{\mu}_i)}{p-1}\geq \left[\frac{\prod^p_{i=2}\,(n-\overline{\mu}_i)\,\sum\limits^{p}_{i=2}
\frac{1}{n-\overline{\mu}_i}}{p-1}\right]^{1/(p-2)}\,.\label{kcd1}
\end{equation}
It is well known that
$$t(G)=\frac{1}{n}\,\prod^{n-1}_{i=1}\,\mu_i\,.$$
\vspace*{3mm}
Since $n-\mu_{n-1}=\overline{\mu}_1\geq \overline{\Delta}+1$ and $n-\overline{\Delta}-1=\delta$, we have
$$\prod^p_{i=2}(n-\overline{\mu}_i)=\prod^p_{i=2}\,\mu_{n-i}=\frac{\prod^{n-1}_{i=1}\,\mu_{i}}{\prod^{n-p-1}_{i=1}\,\mu_{i}\cdot \mu_{n-1}}\geq
\frac{n\,t(G)}{n^{n-p-1}\,\delta}$$
and
$$\frac{\sum\limits^p_{i=2}\,(n-\overline{\mu}_i)}{p-1}=\frac{n(p-1)-(2p-\overline{\mu}_1)}{p-1}\leq
n-1~~\mbox{ as }\overline{\mu}_1\leq p+1.$$
\vspace*{3mm}
Using the above result in (\ref{kcd1}), we get
\begin{eqnarray}
\sum\limits^{p}_{i=2} \frac{1}{n-\overline{\mu}_i}&\leq& \frac{(p-1)}{\prod^p_{i=2}\,(n-\overline{\mu}_i)}\,(n-1)^{p-2}\nonumber\\[3mm]
&\leq&\frac{(p-1)\,(n-1)^{p-2}\,\delta\,n^{n-p-1}}{n\,t(G)}\,.\label{kcd2}
\end{eqnarray}
Therefore, we have
\begin{eqnarray}
Kf(G)&=&\sum\limits^{n-1}_{i=1}\frac{n}{\mu_i}\nonumber\\[2mm]
&=&\sum\limits^{n-1}_{i=1}\frac{n}{n-\overline{\mu}_{n-i}}\,\,\,\mbox{ as $\mu_i=n-\overline{\mu}_{n-i}$}\nonumber\\[2mm]
&=&n-1-p+\sum\limits^{p}_{i=1}\frac{n}{n-\overline{\mu}_i}\,\,\,\mbox{ by (\ref{1ew11})}\nonumber\\
&\leq&n-1-p+\frac{n}{n-p-1}+\sum\limits^{p}_{i=2}\frac{n}{n-\overline{\mu}_i}~~~\mbox{ by (\ref{1ew1})}\,.\nonumber
\end{eqnarray}
Using (\ref{kcd2}) in the above, we get the required result in (\ref{e1ps1}). This completes the first part of the proof.
\vspace*{3mm}
Now suppose that the equality holds in (\ref{e1ps1}). Then all inequalities in the above argument must be equalities.
From the equality in (\ref{kcd1}), we get $\overline{\mu}_2=\overline{\mu}_3=\cdots=\overline{\mu}_p$, by Lemma \ref{1m2}.
\vspace*{3mm}
From the equality in (\ref{kcd2}), we get $\overline{\mu}_1=p+1$. Using (\ref{dase1}) with the above results, we get $\overline{\mu}_2=\overline{\mu}_3=\cdots=\overline{\mu}_p=1$.
Thus $\overline{G}$ must consist of the star $K_{1,\,p}$ together with $n-p-1$ trivial components $K_1$. Equivalently, we deduce that $G=K_n-K_{1,\,p}$.
\vspace*{3mm}
Conversely, let $G\cong K_n-K_{1,\,p}$\,. Then we have $\mu_1=\mu_2=\cdots=\mu_{n-p-1}=n$, $\mu_{n-p}=\mu_{n-p+1}=\cdots=\mu_{n-2}=n-1$ and
$\mu_{n-1}=n-p-1$. Also we have $t(G)=(n-p-1)\,n^{n-p-2}\,(n-1)^{p-1}$ and $\delta=n-p-1$. Now,
\begin{eqnarray}
&&n-1-p+\frac{n}{n-p-1}+\frac{(p-1)\,\delta\,n^{n-p-1}\,(n-1)^{p-2}}{t(G)}\nonumber\\[3mm]
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=n-p-1+\frac{n}{n-1}\,(p-1)+\frac{n}{n-p-1}\nonumber\\[2mm]
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=Kf(K_n-K_{1,p})\,.\nonumber
\end{eqnarray}
This completes the proof.
\end{proof}
The following lemma was implicitly proved in \cite{Kel1974}.
\begin{lemma}{\em(\cite{Kel1974})} \label{NADD1} Let $G$ be a connected graph obtained by deleting $p\leq n-1$ edges from the complete graph $K_n$. Then we have \begin{eqnarray}
t(G)\geq n^{n-p-2}(n-1)^{p-1}(n-p-1)\,,\label{eADD1}
\end{eqnarray}
with equality holding if and only if $G\cong K_n-K_{1,p}$. \end{lemma}
Combining Lemma \ref{NADD1} and Theorem \ref{Weak}, we can easily deduce the following corollary.
\begin{corollary} \label{NCO1} For any integer $2\leq p\leq \lfloor\frac{n}{2}\rfloor$ and any graph $G$ obtained by deleting $p$ edges from $K_n$, we have
\begin{eqnarray}
Kf(G)\leq n-1-p+\frac{n}{n-p-1}+\frac{n\,(p-1)\,\delta}{(n-1)(n-p-1)}\,,\label{e1ps22}
\end{eqnarray}
where $\delta$ is the minimum degree in $G$. Moreover, the equality holds in $(\ref{e1ps22})$
if and only if $G\cong K_n-K_{1,\,p}$. \end{corollary}
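As a purely illustrative numerical sanity check (not part of the argument), the closed form in the equality case $G\cong K_n-K_{1,\,p}$ can be confirmed directly from the Laplacian spectrum, using the spectral formula $Kf(G)=n\sum_{i=1}^{n-1}\mu_i^{-1}$ quoted in the proof above; the helper function names below are ours.

```python
import numpy as np

def kirchhoff_index(A):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian
    eigenvalues, for a connected graph with adjacency matrix A."""
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))[1:]   # discard the single zero
    return n * float(np.sum(1.0 / mu))

def kn_minus_star(n, p):
    """K_n - K_{1,p}: delete the p edges of a star centred at vertex 0."""
    A = np.ones((n, n)) - np.eye(n)
    A[0, 1:p + 1] = A[1:p + 1, 0] = 0
    return A

n, p = 9, 3
kf = kirchhoff_index(kn_minus_star(n, p))
closed_form = n - p - 1 + n * (p - 1) / (n - 1) + n / (n - p - 1)
print(abs(kf - closed_form) < 1e-9)   # True
```

The check exploits the spectrum $n$ (multiplicity $n-p-1$), $n-1$ (multiplicity $p-1$), $n-p-1$ stated in the converse part of the proof.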
\subsection{The ordering of connected graphs with larger Kirchhoff indices}
\ \indent In this subsection we determine the graphs in ${\cal{G}}(n)$ ($n>27$) with the first to ninth largest Kirchhoff indices. By Lemma \ref{NEW1} $(1)$ and Corollary \ref{CO1}, the path $P_n$ has the largest Kirchhoff index among all graphs in ${\cal{G}}(n)$.
Before stating our main result, we first prove a lemma below.
\begin{lemma} \label{LEM3.1} For any connected graph $G$ of order $n$ and with $m>n+1$ edges, there exists a connected graph $G_1$ of order $n$ and with $n+1$ edges such that $Kf(G_1)>Kf(G)$.
\end{lemma}
\begin{proof} For any connected graph $G$ of order $n$ with $m>n+1$ edges, choosing and deleting one non-cut edge from $G$, we can get a connected graph $G'$ of order $n$
with $m-1$ edges and $Kf(G')>Kf(G)$ by Lemma \ref{NEW1} $(1)$.
Repeating this process $m-n-1$ times, we can obtain a
connected graph $G_1$ of order $n$ with $n+1$ edges and
$Kf(G_1)>Kf(G)$, completing the proof of this lemma.
\end{proof}
Now we denote by $Q_n^k$ (see Figure \ref{G3}
for the case when $k=3$) the graph obtained by attaching a pendent
edge to the unique neighbor of the pendent vertex in
$P_{n-1}^k$. Let $R_n^3$ be the graph, shown in Figure
\ref{G3}, obtained by attaching a pendent edge to the
vertex at distance $2$ from the pendent vertex in
$P_{n-1}^3$. The graph $CQ_n^{3}$ is obtained by attaching a pendent edge to a vertex of degree $2$ on $C_3$ in $Q_{n-1}^3$. Let $C_3(k_1,k_2)$ be the graph obtained by attaching a path of length $k_1$ to one vertex of $C_3$ and a path of length $k_2$ to another vertex of $C_3$. Denote by $C_3(k_1,k_2,k_3)$ the graph obtained by attaching three paths of lengths $k_1$, $k_2$ and $k_3$, respectively, to the three vertices of $C_3$. In the following we define two sets of graphs:
\begin{eqnarray*}{\cal{H}}(n)=\Big\{P_n^3,Q_n^3,R_n^3,C_3(1,n-4),C_3(2,n-5),CQ_n^{3} \Big\}, \end{eqnarray*} \begin{eqnarray*}{\cal{T}}^0(n)=\Big\{P_n,T_n(n-3,1^2),T_n(n-4,2,1),T_n(1^2;1^2),T_n(n-5,3,1),T_n(1^2;2,1)\Big\}. \end{eqnarray*}
\begin{figure}
\caption{ The graphs $Q_n^3$ and $R_n^3$}
\label{G3}
\end{figure}
It is not difficult to verify that any spanning tree of the graphs $C_3(1,1)$ and $C_3(1,1,1)$ must be in the set ${\cal{T}}^0(n)$.
\begin{lemma}\label{LEM3.2} Let $G$ be a connected graph of order $n$ $(n\geq 10)$ with $n$ edges, maximum degree $\Delta\geq 3$ and cycle length $k>4$. Then $G$ has a spanning tree $T$ with $T\notin {\cal{T}}^0(n)$.
\end{lemma}
\begin{proof} Assume that $G$ contains a cycle $C_k$ as a subgraph. According to the value of $\Delta$, we distinguish the following two cases.
\textbf{Case 1.} $\Delta\geq 4$.
In this case, choose an edge $e$ on the cycle of $G$ that is not incident with a vertex of degree $\Delta$. Then $G-e$ is a spanning tree of $G$ with maximum degree $\Delta\geq 4$, and hence $G-e\notin {\cal{T}}^0(n)$, since every tree in ${\cal{T}}^0(n)$ has maximum degree at most $3$.
\textbf{ Case 2.} $\Delta=3$.
Assume that $v$ is a vertex in $C_k$ of degree $3$ in $G$. Note that $k\geq 5$ from the condition in this lemma.
Now we choose an edge $e=v_1v_2$ on the cycle $C_k$ in
$G$ such that both $v_1$ and $v_2$ are at distance as large as
possible from the vertex $v$. Since $k\geq 5$, we have $d_G(v,v_1)\geq 2$ and
$d_G(v,v_2)\geq 2$. Then $G-e$ is a spanning tree of
$G$ with $G-e\notin {\cal{T}}^0(n)$, since neither neighbor of $v$
is a pendent vertex. \end{proof}
\begin{lemma}\label{LEM3.3} Let $G\notin {\cal{H}}(n)$ be a connected graph of order $n$ $(n\geq 8)$ with $n$ edges, maximum degree $\Delta\geq3$ and cycle length $k=3$. Then $G$ has a spanning tree $T$ with $T\notin {\cal{T}}^0(n)$.
\end{lemma}
\begin{proof} For the case $\Delta>3$, from a similar reasoning as that in Case 1 in the proof of Lemma \ref{LEM3.2}, our result follows immediately. Therefore it suffices to consider the case $\Delta=3$. Assume that $C_3=v_1v_2v_3v_1$ in $G$. Next we deal with the following three cases.
\textbf{Case 1.} There is only one vertex, say $v_1$, of $C_3$ in $G$ with degree $3$.
In this case, we choose the edge $e=v_2v_3$ in $C_3$. Then $G-e$ is a spanning tree of $G$ in which the vertex $v_1$ still has degree $3$; thus $G-e\ncong P_n$. If $G-e\cong T_n(n-3,1^2)$, then the supergraph $G$ obtained by inserting the edge $e$ into $T_n(n-3,1^2)$ is just $P_n^3$, contradicting the fact that $G\notin {\cal{H}}(n)$. Therefore $G-e\ncong T_n(n-3,1^2)$. By a similar reasoning, from the condition $G\ncong Q_n^3$ we conclude that $G-e\ncong T_n(1^2;1^2)$ for the edge $e$ defined as above. Moreover, if $G-e\cong T_n(1^2;2,1)$, then $G\cong R_n^3$, which is impossible since $G\notin {\cal{H}}(n)$. Therefore, we have $G-e\notin {\cal{T}}^0(n)$.
\textbf{Case 2. } There are exactly two vertices, say $v_1$ and $v_2$, of $C_3$ in $G$ with degree $3$.
In this case, without loss of generality, we assume that the eccentricity of $v_1$ in $G$ is not more than that of $v_2$. Let $e=v_2v_3$. Then $G-e$ is a spanning tree of $G$. Since $G\ncong C_3(1,n-4)$, we deduce that $G-e\ncong T_n(n-4,2,1)$. Similarly, we have $G-e\ncong T_n(n-5,3,1)$ from the condition $G\ncong C_3(2,n-5)$. Moreover, $G-e\ncong T_n(1^2;2,1)$, since $G\ncong CQ_n^3$. Note that, in $G-e$, there are at least two pendent vertices at distance at least $2$ from the degree-$3$ vertex $v_1$. Therefore we have $G-e\notin {\cal{T}}^0(n)$ as desired.
\textbf{Case 3.} All the vertices of $C_3$ in $G$ are of degree $3$.
Assume that $v_1$ has the smallest eccentricity among all the vertices of $C_3$ in $G$. Let $e=v_2v_3$. Then $G-e$ is a spanning tree of $G$ in which $v_1$ has degree $3$. Moreover, $G-e\notin {\cal{T}}^0(n)$, since there are at least three pendent vertices at distance at least $2$ from $v_1$ in $G-e$. This completes the proof of this case, and hence of the lemma.
\end{proof}
\begin{theorem}\label{TH1} Let $n>27$. Then we have
\begin{eqnarray*}Kf(P_n)>Kf(T_n(n-3,1^2))>Kf(P_n^3)>Kf(T_n(n-4,2,1))>Kf(T_n(1^2;1^2))>Kf(Q_n^3)\\
>Kf(T_n(n-5,3,1))>Kf(T_n(n-4,1^3))=Kf(T_n(1^2;2,1))>Kf(C_{3,3}^{n-5}).\end{eqnarray*}\end{theorem}
\begin{proof}In view of Corollary \ref{CO1} and Lemma \ref{LEM6}, and considering Lemma \ref{NEW1} $(2)$, it remains only to prove the following inequalities:
\begin{equation}Kf(P_n^3)>Kf(T_n(n-4,2,1)),~\label{e3} \end{equation}
\begin{equation}Kf(Q_n^3)>Kf(T_n(n-5,3,1)),~\label{e4}\end{equation}
\begin{equation}Kf(T_n(1^2;2,1))>Kf(C_{3,3}^{n-5}).~\label{e5} \end{equation}
From Lemma \ref{NEW4} and the results in \cite{LLL2010}, we have
\begin{eqnarray}Kf(T_n(n-4,2,1))=\displaystyle{{n+1\choose 3}}-2n+8=\displaystyle{\frac{n^3-13n+48}{6}},\nonumber\end{eqnarray} \begin{eqnarray}Kf(T_n(n-5,3,1))=\displaystyle{{n+1\choose 3}}-3n+15=\displaystyle{\frac{n^3-19n+90}{6}},\nonumber\end{eqnarray} \begin{eqnarray}Kf(T_n(1^2;2,1))=\displaystyle{{n+1\choose 3}}-3n+11=\displaystyle{\frac{n^3-19n+66}{6}}.\nonumber\end{eqnarray}
By Lemmas \ref{LEM6} and \ref{LEM7}, we arrive at the following results: $$Kf(P_n^3)=\displaystyle{\frac{n^3-11n+18}{6}}, ~~~ Kf(C_{3,3}^{n-5})=\displaystyle{\frac{n^3-21n+36}{6}}.$$
Some straightforward calculations show the validity of inequalities (\ref{e3}) and (\ref{e5}) for $n>27$.
Setting $T'=T_{n-2}(n-5,1^2)$ and applying Lemma \ref{LEM8} to the vertex of degree $3$ in $C_3$ of $Q_n^3$, we have
\begin{eqnarray*}
Kf(Q_n^3)&=&Kf(C_3)+Kf(T')+2Kf_x(T')+(n-3)Kf_x(C_3)\\[3mm]
&=&2+\frac{(n-2)^3-7(n-2)+18}{6}\\[3mm]
&&+2\Big[1+2+3+\cdots+(n-5)+2(n-4)\Big]+\frac{4}{3}(n-3)\\[3mm]
&=& \frac{n^3-17n+36}{6}.
\end{eqnarray*}
It can be easily checked that
$\displaystyle{\frac{n^3-17n+36}{6}}>\frac{n^3-19n+90}{6}$ when
$n>27$, i.e., the inequality (\ref{e4}) holds if $n>27$. This
completes the proof of this theorem.
\end{proof}
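The closed forms $Kf(P_n^3)=(n^3-11n+18)/6$ and $Kf(Q_n^3)=(n^3-17n+36)/6$ used in this proof can be checked numerically. The sketch below is illustrative only; it assumes, following the constructions above, that $P_n^l$ is the ``tadpole'' (a cycle $C_l$ with a path on the remaining $n-l$ vertices attached to one cycle vertex), and the function names are ours.

```python
import numpy as np

def kirchhoff_index(A):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues."""
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))[1:]
    return n * float(np.sum(1.0 / mu))

def tadpole(n, l=3):
    """P_n^l read as the tadpole: cycle on vertices 0..l-1,
    with a path 0-(l)-(l+1)-...-(n-1) attached at cycle vertex 0."""
    A = np.zeros((n, n))
    for i in range(l):
        A[i, (i + 1) % l] = A[(i + 1) % l, i] = 1
    prev = 0
    for v in range(l, n):
        A[prev, v] = A[v, prev] = 1
        prev = v
    return A

def q_n3(n):
    """Q_n^3: attach a pendent edge to the neighbour (vertex n-3)
    of the pendent vertex of P_{n-1}^3 (assumes n >= 6)."""
    A = np.zeros((n, n))
    A[:n - 1, :n - 1] = tadpole(n - 1)
    A[n - 3, n - 1] = A[n - 1, n - 3] = 1
    return A

for n in (6, 10, 28):
    assert abs(kirchhoff_index(tadpole(n)) - (n**3 - 11 * n + 18) / 6) < 1e-8
    assert abs(kirchhoff_index(q_n3(n)) - (n**3 - 17 * n + 36) / 6) < 1e-8
print("ok")
```

For instance, at $n=6$ the two formulas give $Kf(P_6^3)=28$ and $Kf(Q_6^3)=25$, which agree with a direct resistance-distance computation.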
Now we define a new set of graphs as follows:
\begin{eqnarray*}{\cal{G}}^0(n)={\cal{T}}^0(n)\bigcup\left\{T_n(n-4,1^3),P_n^3,Q_n^3,C_{3,3}^{n-5}\right\}.
\end{eqnarray*}
In the following theorem we order the graphs from ${\cal{G}}(n)$ with first to tenth largest Kirchhoff indices.
\begin{theorem}\label{TH2} Let $G$ be any graph from ${\cal{G}}(n)\setminus{\cal{G}}^0(n)$ with $n>27$. Then we have
\begin{eqnarray*}Kf(P_n)>Kf(T_n(n-3,1^2))>Kf(P_n^3)>Kf(T_n(n-4,2,1))>Kf(T_n(1^2;1^2))
>Kf(Q_n^3)\\>Kf(T_n(n-5,3,1))>Kf(T_n(n-4,1^3))=Kf(T_n(1^2;2,1))
>Kf(C_{3,3}^{n-5})>Kf(G).\end{eqnarray*}\end{theorem}
\begin{proof} By Theorem \ref{TH1}, it suffices to prove that $Kf(G)<Kf(C_{3,3}^{n-5})$ for any graph $G\in {\cal{G}}(n)\setminus{\cal{G}}^0(n)$ with $n>27$.
If $G\in {\cal{G}}(n)\setminus{\cal{G}}^0(n)$ has $m>n+1$ edges, by Lemma \ref{LEM3.1}, we conclude that there exists a connected graph $G_1$ of order $n$ and with $n+1$ edges
such that $Kf(G)<Kf(G_1)$. By Lemma \ref{LEM7}, we have $Kf(G)<Kf(G_1)\leq Kf(C_{3,3}^{n-5})$. Similarly, for any connected graph $G\in {\cal{G}}(n)\setminus{\cal{G}}^0(n)$ of order $n$ with exactly
$n+1$ edges, Lemma \ref{LEM7} gives $Kf(G)<Kf(C_{3,3}^{n-5})$, since $G\ncong C_{3,3}^{n-5}$.
Now we only need to consider the connected graphs of order $n$ and with $m\leq n$ edges. In the case when $m=n-1$ with $n>27$, for any graph $G\notin {\cal{T}}^0(n)\bigcup\{T_n(n-4,1^3)\}$
of order $n$ and with $n-1$ edges, i.e., $G$ is a tree, by Corollary \ref{CO1} and Lemma \ref{LEM8}, we have
\begin{eqnarray*}
Kf(G)&\leq&Kf(T_n(n-6,4,1))\\[3mm]
&=&{n+1\choose 3}-4n+24\\[3mm]
&=&\frac{n^{3}-25n+144}{6}\\[3mm]
&<&\frac{n^{3}-21n+36}{6}\\[3mm]
&=&Kf(C_{3,3}^{n-5}).
\end{eqnarray*}
Now we focus on the case when $m=n$. Combining Lemma
\ref{LEM3.2} and Corollaries \ref{CO0} and \ref{CO1}, we find that,
when $n>27$, for any connected graph $G$ of
order $n$ and with $n$ edges, maximum degree $\Delta\geq 3$ and
cycle length $k>4$, we have $Kf(G)\leq
Kf(T_n(n-6,4,1))<Kf(C_{3,3}^{n-5})$. By Lemma \ref{LEA9}, we have
$Kf(G)\leq Kf(P_n^4)$ for any connected graph $G$ of order $n$
and with $n$ edges and cycle length $4$. From Lemma \ref{LEM3.3}, Corollaries \ref{CO0} and \ref{CO1}, we have $Kf(G)\leq
Kf(T_n(n-6,4,1))<Kf(C_{3,3}^{n-5})$ for any graph $G\notin {\cal{H}}(n)$ of order $n$ with $n$ edges, cycle length $3$ and maximum degree $\Delta\geq 3$. Thus it remains in this case to show that $Kf(G)<Kf(C_{3,3}^{n-5})$ for any graph $G$ from the set $\{R_n^3,P_n^4,C_n,C_3(1,n-4),C_3(2,n-5),CQ_n^3\}$. From Corollary \ref{CAD1}, $Kf(CQ_n^3)<Kf(C_3(1,n-4))$. Note that $Kf(P_n^3)=\displaystyle{\frac{n^3-11n+18}{6}}$ and $Kf(P_n)=\displaystyle{\frac{n^3-n}{6}}$ (\cite{LLL2010}). Applying Lemma \ref{LEM8} to the degree-$3$ vertex of smaller eccentricity in $C_3$ of $C_3(1,n-4)$ and of $C_3(2,n-5)$, respectively, we have $$Kf(C_3(1,n-4))=\frac{n^3-27n+82}{6},~~~Kf(C_3(2,n-5))=\frac{n^3-25n+88}{6}, $$
both of which are less than $\displaystyle{\frac{n^{3}-21n+36}{6}}=Kf(C_{3,3}^{n-5})$ for $n>27$. Moreover, we have $Kf(CQ_n^3)<Kf(C_{3,3}^{n-5})$. By the formula
$$Kf(P_n^l)=\displaystyle{\frac{n^{3}-2n}{6}}+\frac{(1+2n)l}{4}+\frac{l^{3}}{4}-\frac{(3+2n)l^{2}}{6}$$
in \cite{Yang2008}, we can get
$$Kf(P_n^4)=\displaystyle{\frac{n^{3}-22n+54}{6}}<\frac{n^{3}-21n+36}{6}=Kf(C_{3,3}^{n-5})~~\mbox{ when }~n>27.$$
Also from \cite{Yang2008}, we have $Kf(C_n)=\displaystyle{\frac{n^3-n}{12}}$.
Therefore it follows that
\begin{eqnarray*}
Kf(C_n)&=&\frac{n^3-n}{12}\\[3mm]
&<&\frac{n^{3}-21n+36}{6}\\[3mm]
&=&Kf(C_{3,3}^{n-5})~~\mbox{ as }~~n^3-41n+72>0~~\mbox{ when }~~ n>27.
\end{eqnarray*}
Finally, setting $T''=T_{n-2}(n-6,2,1)$, by the application of Lemma \ref{LEM8} to the vertex, say $x$, of degree $3$ on the triangle $C_3$ of $R_n^3$, we have
\begin{eqnarray*}Kf(R_n^3)&=&Kf(C_3)+Kf(T'')+(n-3)Kf_x(C_3)+2Kf_x(T'')\\[3mm]
&=&2+\frac{(n-2)^3-13(n-2)+48}{6}+\frac{4}{3}(n-3)\\[3mm]
&&+2\Big[1+2+\cdots+(n-5)+(n-4)+(n-5)\Big]\\[3mm]
&=&\frac{n^3-23n+66}{6}.
\end{eqnarray*}
Obviously, we conclude that
$$Kf(R_n^3)=\displaystyle{\frac{n^3-23n+66}{6}}<\frac{n^{3}-21n+36}{6}=Kf(C_{3,3}^{n-5})~~\mbox{ if }~n>27.$$
Thus we complete the proof of this theorem.
\end{proof}
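Finally, the cycle formula $Kf(C_n)=(n^3-n)/12$ from \cite{Yang2008}, used in the proof of Theorem \ref{TH2}, admits the same kind of spectral sanity check (an illustrative sketch only, with our own helper name):

```python
import numpy as np

def kf_cycle(n):
    """Kf(C_n) computed from the Laplacian spectrum of the n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the zero eigenvalue
    return n * float(np.sum(1.0 / mu))

for n in (5, 12, 28):
    assert abs(kf_cycle(n) - (n**3 - n) / 12) < 1e-8
print("ok")
```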
\end{document}
\begin{document}
\large
\title{\textsc{A bicommutant theorem for dual Banach algebras}}
\author{Matthew Daws}
\maketitle
\begin{abstract}
A dual Banach algebra is a Banach algebra which is a dual space, with the
multiplication being separately weak$^*$-continuous. We show that given a unital
dual Banach algebra $\mathcal A$, we can find a reflexive Banach space $E$, and an
isometric, weak$^*$-weak$^*$-continuous homomorphism $\pi:\mathcal A\rightarrow\mathcal B(E)$
such that $\pi(\mathcal A)$ equals its own bicommutant.
\emph{Keywords:} dual Banach algebra, bicommutant, reflexive Banach space.
2000/2010 \emph{Mathematical Subject Classification:}
46H05, 46H15, 47L10 (primary), 46A32, 46B10
\end{abstract}
\section{Introduction}
Given a Banach space $E$, we write $\mathcal B(E)$ for the Banach algebra of operators
on $E$. Given a subset $X\subseteq\mathcal B(E)$, we write $X'$ for the commutant of $X$,
\[ X' = \{ T\in\mathcal B(E) : TS=ST \ (S\in X) \}. \]
The von Neumann bicommutant theorem tells us that if $E$ is a Hilbert space,
and $X$ is a $*$-closed, unital subalgebra, then $X''$ is the strong operator
topology closure of $X$ in $\mathcal B(E)$. If $X$ is not $*$-closed, then this result
may fail (consider the strictly upper-triangular two-by-two matrices). However, a result
of Blecher and Solel, \cite{BS}, shows, in particular, that if $X$ is weak$^*$-closed,
then we can find another Hilbert space $K$, and a completely isometric,
weak$^*$-weak$^*$-continuous homomorphism $\pi:X\rightarrow \mathcal B(K)$, such that
$\pi(X) = \pi(X)''$. That is, if we change the Hilbert space which our algebra
acts on, we do have a bicommutant theorem.
A dual Banach algebra is a Banach algebra which is a dual space, such that the
multiplication is separately weak$^*$-continuous. Building on work of Young and Kaiser, the
author showed in \cite{Daws} that given a dual Banach algebra $\mathcal A$, we can find
a reflexive Banach space $E$ and an isometric, weak$^*$-weak$^*$-continuous
homomorphism $\pi:\mathcal A\rightarrow\mathcal B(E)$. In this paper, we show that when
$\mathcal A$ is unital, we can choose $E$ and $\pi$ such that $\pi(\mathcal A) = \pi(\mathcal A)''$.
The method is similar to that used in \cite{BS} (although we follow the presentation
of \cite{BLM}) combined with an idea adapted from \cite[Section~6]{Daws}.
\subsection{Acknowledgments}
The author wishes to thank Stuart White and Allan Sinclair for suggesting this
problem, and to thank David Blecher for bringing \cite{BS} to his attention.
\section{Notation and preliminary results}
Given a Banach space $E$, let $E^*$ be the dual space to $E$. For $\mu\in E^*$
and $x\in E$, we write $\ip{\mu}{x} = \mu(x)$. For $X\subseteq E$, let
\[ X^\perp = \{ \mu\in E^* : \ip{\mu}{x}=0 \ (x\in X) \}. \]
For $Y\subseteq E^*$, let
\[ {}^\perp Y = \{ x\in E : \ip{\mu}{x}=0 \ (\mu\in Y) \}. \]
Then ${}^\perp(X^\perp)$ is the closure of the linear span of $X$, while
$({}^\perp Y)^\perp$ is the weak$^*$-closure of the linear span of $Y$.
We may canonically identify $X^*$ with $E^*/X^\perp$, and $(E/X)^*$ with
$X^\perp$. In particular, $Y$ is weak$^*$-closed if and only if
$Y = ({}^\perp Y)^\perp$, and in this case, the canonical predual of $Y$
is $E / {}^\perp Y$.
We write $E^*\widehat\otimes E$ for the projective tensor product of $E^*$ with $E$. This is
the completion of the algebraic tensor product $E^*\otimes E$ with respect to the norm
\[ \|\tau\|_\pi = \inf\Big\{ \sum_{k=1}^n \|\mu_k\| \|x_k\| :
\tau = \sum_{k=1}^n \mu_k\otimes x_k \Big\}. \]
Any element of $E^*\widehat\otimes E$ can be written as $\sum_k \mu_k\otimes x_k$ with
$\sum_k \|\mu_k\| \|x_k\|<\infty$. For further details, see \cite{Dales} or
\cite{PalBook}, for example.
The Banach algebra $\mathcal B(E)$ is a dual Banach algebra with respect to the
predual $E^*\widehat\otimes E$, the dual pairing being given by
\[ \ip{T}{\mu\otimes x} = \ip{\mu}{T(x)}
\qquad (T\in\mathcal B(E), \mu\otimes x\in E^*\widehat\otimes E), \]
and linearity and continuity. Indeed, under many circumstances, this is the
unique predual for $\mathcal B(E)$, see \cite[Theorem~4.4]{Daws}.
It follows that any weak$^*$-closed subalgebra of $\mathcal B(E)$ is also a dual
Banach algebra: then \cite[Corollary~3.8]{Daws} shows that every dual Banach
algebra arises in this way. If $X\subseteq\mathcal B(E)$, then $X'$ is a closed
subalgebra of $\mathcal B(E)$. Notice that $T\in X'$ if and only if $T$
annihilates all $\tau\in E^*\widehat\otimes E$ of the form
\[ \tau = \mu\otimes S(x) - S^*(\mu)\otimes x
\qquad (S\in X, \mu\in E^*, x\in E). \]
Hence $X' = Y^\perp = (E^*\widehat\otimes E / Y)^*$ is weak$^*$-closed, where
$Y$ is the closed linear span of such $\tau$. In particular, $X''$ is
a weak$^*$-closed subalgebra of $\mathcal B(E)$ containing $X$, and so $X''$
contains the weak$^*$-closed algebra generated by $X$.
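In finite dimensions the commutant is a concrete null-space computation, which makes these phenomena easy to experiment with. The following numpy sketch (illustrative only, and no substitute for the Banach-space arguments below) computes $X'$ and $X''$ for the strictly upper-triangular two-by-two example from the introduction; the function name and the vectorisation convention are ours.

```python
import numpy as np

def commutant(mats, d, tol=1e-10):
    """Basis (as d x d matrices) of {T : TS = ST for all S in mats}.
    With row-major vec, vec(TS - ST) = (I (x) S^T - S (x) I) vec(T),
    so the commutant is the joint null space, found via the SVD."""
    M = np.vstack([np.kron(np.eye(d), S.T) - np.kron(S, np.eye(d))
                   for S in mats])
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return [v.reshape(d, d) for v in Vh[rank:]]

# X = span{N}, N strictly upper triangular: an algebra, but not *-closed.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
Xp = commutant([N], 2)    # X'  = span{I, N}: dimension 2
Xpp = commutant(Xp, 2)    # X'' = span{I, N} again: dimension 2
print(len(Xp), len(Xpp))  # 2 2 -- so X'' strictly contains X = span{N}
```

Here $X''$ is strictly larger than the (automatically closed) algebra $X$ itself, illustrating why one must be allowed to change the space on which the algebra acts.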
We shall follow the ideas of \cite[Theorem~3.2.14]{BLM}; see \cite{BS}
for a fuller treatment. We first establish some preliminary results.
Given a Banach space $E$, we write $\ell^2(E)$ for the Banach space
consisting of sequences $(x_n)$ in $E$ with norm $\|(x_n)\|_2 =
\Big( \sum_n \|x_n\|^2 \Big)^{1/2}$. Throughout, we could instead work
with $\ell^p(E)$ for $1<p<\infty$, if we so wished.
Then $\ell^2(E)^* = \ell^2(E^*)$,
and $\ell^2(E)$ is reflexive if $E$ is. For each $n$, let $\iota_n:
E\rightarrow\ell^2(E)$ be the injection onto the $n$th co-ordinate, and
let $P_n:\ell^2(E)\rightarrow E$ be the projection onto the $n$th co-ordinate.
For $T\in\mathcal B(E)$, let $T^{(\infty)}\in\mathcal B(\ell^2(E))$ be the operator
given by applying $T$ to each co-ordinate. Notice that $T^{(\infty)} \iota_n
= \iota_n T$ and $P_n T^{(\infty)} = T P_n$, for each $n$.
For $X\subseteq\mathcal B(E)$,
let $X^{(\infty)} = \{ T^{(\infty)} : T\in X \}$. Given a homomorphism
$\pi:\mathcal A\rightarrow\mathcal B(E)$, let $\pi^{(\infty)}:\mathcal A
\rightarrow\mathcal B(\ell^2(E))$ be the homomorphism given by $\pi^{(\infty)}(a)
= \pi(a)^{(\infty)}$ for each $a\in\mathcal A$.
\begin{lemma}\label{lemma::one}
For a Banach space $E$, and $X\subseteq\mathcal B(E)$, we have that
$(X^{(\infty)})'' = (X'')^{(\infty)}$.
\end{lemma}
\begin{proof}
Let $Q\in(X^{(\infty)})'$. For $n,m\in\mathbb N$ and $S\in X$,
we have that $P_n Q \iota_m S = P_n Q S^{(\infty)} \iota_m
= P_n S^{(\infty)} Q \iota_m = S P_n Q \iota_m$.
Thus $P_n Q \iota_m \in X'$, for each $n,m$. Similarly,
one can show that for $Q\in\mathcal B(\ell^2(E))$, if $P_n Q \iota_m
\in X'$ for all $n,m$, then $Q\in(X^{(\infty)})'$.
So, given $T\in X''$ and $Q\in(X^{(\infty)})'$, we have that $TP_nQ\iota_m
= P_nQ\iota_mT$ for all $n,m$. Thus, for all $n,m$, it follows that
$P_n T^{(\infty)} Q \iota_m = P_n Q T^{(\infty)} \iota_m$, from which it
follows that $T^{(\infty)} Q = Q T^{(\infty)}$. Thus $(X'')^{(\infty)}
\subseteq (X^{(\infty)})''$.
For the converse, let $T \in (X^{(\infty)})''$. For each $n,m$, notice that
$\iota_n P_m \in (X^{(\infty)})'$, so that $T \iota_n P_m = \iota_n P_m T$.
Let $r\in\mathbb N$, so that
\[ T \iota_n \delta_{m,r} = T \iota_n P_m \iota_r = \iota_n P_m T \iota_r. \]
It follows that $T\iota_r = \iota_r R$ for some $R\in\mathcal B(E)$, and that $R$ does
not depend upon $r$. Thus there must exist $R\in\mathcal B(E)$ with $T=R^{(\infty)}$.
Now let $S\in X'$, so that $S^{(\infty)} \in (X^{(\infty)})'$, and hence
\[ (RS)^{(\infty)} = T S^{(\infty)} = S^{(\infty)} T = (SR)^{(\infty)}. \]
It follows that $R\in X''$, and hence that $(X^{(\infty)})'' \subseteq
(X'')^{(\infty)}$.
\end{proof}
\begin{lemma}\label{lemma::two}
Let $E$ be a reflexive Banach space, and let $X\subseteq\mathcal B(E)$ be a subalgebra.
Let $X_w$ be the weak$^*$-closure of $X$ in $\mathcal B(E)$, with respect to the
predual $E^*\widehat\otimes E$. Then $(X_w)^{(\infty)} = (X^{(\infty)})_w$.
\end{lemma}
\begin{proof}
Let $T\in (X^{(\infty)})_w$. For $x\in E,\mu\in E^*$ and $n\not=m$, certainly
$\iota_n(\mu) \otimes \iota_m(x) \in {}^\perp(X^{(\infty)})$, and so
\[ 0 = \ip{\iota_n(\mu)}{T\iota_m(x)} = \ip{\mu}{P_n T \iota_m(x)}. \]
Thus $P_n T \iota_m=0$ whenever $n\not=m$. For any $x,\mu,n$ and $m$,
we also have that
\[ \iota_n(\mu)\otimes\iota_n(x) - \iota_m(\mu)\otimes\iota_m(x) \in
{}^\perp(X^{(\infty)}). \]
It follows that $P_n T \iota_n = P_m T \iota_m$. Combining these results,
we conclude that $T=S^{(\infty)}$ for some $S\in\mathcal B(E)$.
Let $\tau \in {}^\perp X \subseteq E^*\widehat\otimes E$, say $\tau = \sum_k \mu_k\otimes x_k$.
For $R\in X$ and each $n$, we have that
\[ \ip{R^{(\infty)}}{\sum_k \iota_n(\mu_k) \otimes \iota_n(x_k)} = 0, \]
so that $\sigma=\sum_k \iota_n(\mu_k) \otimes \iota_n(x_k) \in {}^\perp (X^{(\infty)})$.
So
\[ 0 = \ip{T}{\sigma} = \ip{S^{(\infty)}}{\sigma} = \ip{S}{\tau}, \]
from which it follows that $S\in X_w$. So $(X^{(\infty)})_w \subseteq (X_w)^{(\infty)}$.
For the converse, let $T\in X_w$, and let $\tau\in {}^\perp (X^{(\infty)})$, say
$\tau = \sum_n \mu_n \otimes x_n$. By rescaling, we may suppose that $\sum_n \|\mu_n\|^2
= \sum_n \|x_n\|^2 <\infty$. For each $n$, we have that $\mu_n=(\mu^{(n)}_k)$, say,
where $\|\mu_n\|^2 = \sum_k \|\mu^{(n)}_k\|^2$. Thus $\sum_{n,k} \|\mu^{(n)}_k\|^2<\infty$.
Similarly, each $x_n = (x^{(n)}_k)$, and $\sum_{n,k} \|x^{(n)}_k\|^2<\infty$. We can
now compute that, for $S\in X$,
\[ 0 = \ip{S^{(\infty)}}{\tau} = \sum_n \ip{\mu_n}{S^{(\infty)}(x_n)}
= \sum_{n,k} \ip{\mu^{(n)}_k}{S(x^{(n)}_k)}, \]
so that $\sigma = \sum_{n,k} \mu^{(n)}_k \otimes x^{(n)}_k \in {}^\perp X$
(where this sum converges absolutely by an application of the Cauchy-Schwarz inequality).
Then $0 = \ip{T}{\sigma} = \ip{T^{(\infty)}}{\tau}$, from which it follows that
$T^{(\infty)} \in (X^{(\infty)})_w$. So $(X_w)^{(\infty)} \subseteq (X^{(\infty)})_w$.
\end{proof}
The following lemma is usually stated in terms of ``reflexivity'' of a subspace
of $\mathcal B(E)$, but this is a different meaning to that of a reflexive Banach space,
so we avoid this terminology.
\begin{lemma}\label{lemma::three}
Let $E$ be a reflexive Banach space, and let $X\subseteq\mathcal B(E)$ be a weak$^*$-closed
subspace. If $T\in\mathcal B(\ell^2(E))$ is such that, for each $x\in\ell^2(E)$,
we have that $T(x)$ is in the closure of $\{ S^{(\infty)}(x) : S\in X \}$, then
actually $T\in X^{(\infty)}$.
\end{lemma}
\begin{proof}
Let $T$ be as stated, so for each $n$, we have that the image of $T\iota_n$ is a
subset of the image of $\iota_n$. By considering what $T$ maps $(\iota_1+\cdots+\iota_n)(x)$
to, for any $x\in E$, we may conclude that $T=R^{(\infty)}$ for some $R\in\mathcal B(E)$.
Let $\tau \in {}^\perp X$, say $\tau = \sum_n \mu_n\otimes x_n$, where we may suppose
that $\sum_n \|\mu_n\|^2 = \sum_n \|x_n\|^2 < \infty$. Let $\mu=(\mu_n)\in\ell^2(E^*)$
and $x=(x_n)\in\ell^2(E)$, so that
\[ \ip{R}{\tau} = \ip{\mu}{R^{(\infty)}(x)} = \ip{\mu}{T(x)}. \]
However, notice that $\ip{\mu}{S^{(\infty)}(x)} = \ip{S}{\tau}=0$ for each $S\in X$,
so by the assumption on $T$, it follows also that $\ip{\mu}{T(x)}=0$, so $\ip{R}{\tau}=0$.
So $R\in ({}^\perp X)^\perp = X$, that is, $T\in X^{(\infty)}$.
\end{proof}
\section{The main result}
Let us introduce some temporary terminology, motivated by \cite{BLM}. Let
$\mathcal A$ be a Banach algebra, and $E$ be a left $\mathcal A$-module (which we assume to
be a Banach space with contractive actions). In this section,
we shall always suppose that $E$ is \emph{essential}, that is, the linear span of
$\{ a\cdot x: a\in\mathcal A, x\in E \}$ is dense in $E$.
We say that $E$ is \emph{cyclic} if there exists $x\in E$ with $\mathcal A\cdot x =
\{ a\cdot x: a\in\mathcal A\}$ being dense in $E$. We say that $E$ is
\emph{self-generating} if, for each closed cyclic submodule $K\subseteq E$, the
linear span of $\{ T(E) : T:E\rightarrow K \text{ is an $\mathcal A$-module homomorphism}\}$
is dense in $K$.
The following is very similar to the presentation in \cite{BLM}, but we check that the
details still work for reflexive Banach spaces, and not just Hilbert spaces.
\begin{theorem}\label{thm::one}
Let $\mathcal A$ be a unital Banach algebra, and let
$E$ be a reflexive Banach space with a bounded homomorphism $\pi:\mathcal A\rightarrow
\mathcal B(E)$. Use $\pi$ to turn $E$ into a left $\mathcal A$-module, and suppose that
$\ell^2(E)$ is self-generating. Then $\pi(\mathcal A)''$ agrees with the weak$^*$-closure
of $\pi(\mathcal A)$ in $\mathcal B(E)$.
\end{theorem}
\begin{proof}
Let $\mathcal B$ be the closure of $\pi(\mathcal A)$ in $\mathcal B(E)$, and let $\mathcal B_w$ be
the weak$^*$-closure of $\mathcal B$. We wish to show that $\mathcal B_w = \mathcal B''$.
Let $T\in (\mathcal B'')^{(\infty)} \subseteq \mathcal B(\ell^2(E))$, let
$x\in\ell^2(E)$ be non-zero, and let $K$ be the closure of $\mathcal B^{(\infty)}(x)$.
As $E$ is essential, it follows that the unit of $\mathcal A$ acts as the identity on
$E$, and hence also as the identity on $\ell^2(E)$, under $\pi^{(\infty)}$.
Thus $x\in K$. We shall show that $T(K)\subseteq K$.
Let $V:\ell^2(E)\rightarrow K$ be an $\mathcal A$-module homomorphism, and let
$\iota:K\rightarrow\ell^2(E)$ be the inclusion map. By continuity, and the density
of $\pi(\mathcal A)$ in $\mathcal B$, we see that $\iota V \in (\mathcal B^{(\infty)})'$.
Hence $T \iota V = \iota V T$, from which it follows that $TV(\ell^2(E)) =
VT(\ell^2(E)) \subseteq K$. Let $W$ be the linear span of the images of all such $V$.
As $\ell^2(E)$ is self-generating, it follows that $W$ is dense in $K$. However,
$T(W)\subseteq K$, and so by continuity, $T(K)\subseteq K$, as required.
So we have shown that for each $x\in\ell^2(E)$, we have that $T(x)$ is in the
closed linear span of $\mathcal B^{(\infty)}(x) \subseteq \mathcal B_w^{(\infty)}(x)$.
By Lemma~\ref{lemma::three}, we conclude that $T \in \mathcal B_w^{(\infty)}$.
So we have shown that $(\mathcal B'')^{(\infty)} \subseteq \mathcal B_w^{(\infty)}$. By
Lemma~\ref{lemma::one} and Lemma~\ref{lemma::two}, this shows that
$(\mathcal B^{(\infty)})'' \subseteq (\mathcal B^{(\infty)})_w$. However, we always have
that $(\mathcal B^{(\infty)})_w \subseteq (\mathcal B^{(\infty)})''$, and so
$(\mathcal B^{(\infty)})_w = (\mathcal B^{(\infty)})''$. Hence also
$(\mathcal B_w)^{(\infty)} = (\mathcal B'')^{(\infty)}$, from which it follows immediately
that $\mathcal B_w = \mathcal B''$, as required.
\end{proof}
By using the Cohen Factorisation theorem, see \cite[Corollary~2.9.25]{Dales},
a slightly more subtle argument would show that this theorem also holds for
Banach algebras with a bounded approximate identity.
The previous result is only useful if we have a good supply of self-generating
modules. The following is similar to an idea we used in \cite[Lemma~6.10]{Daws}.
\begin{proposition}\label{prop::one}
Let $\mathcal A$ be a Banach algebra, and let $E$ be a reflexive Banach space which is
a left $\mathcal A$-module. There exists a reflexive left $\mathcal A$-module $F$ such that:
\begin{enumerate}
\item $E$ is isomorphic to a one-complemented submodule of $F$;
\item each closed, cyclic submodule of $\ell^2(F)$ is isomorphic to
a one-complemented submodule of $F$;
\end{enumerate}
In particular, $\ell^2(F)$ is self-generating.
\end{proposition}
\begin{proof}
Let $\mathcal E_0=\{E\}$. We use transfinite induction to define $\mathcal E_\alpha$ to be
a set of reflexive left $\mathcal A$-modules, for each ordinal $\alpha\leq\aleph_1$.
If $\alpha$ is a limit ordinal, we simply define $\mathcal E_\alpha = \bigcup_{\beta<\alpha}
\mathcal E_\beta$.
Otherwise, we let $E_\alpha$ be the $\ell^2$ direct sum of the modules in
$\mathcal E_\alpha$, so that $E_\alpha$ is a reflexive left $\mathcal A$-module in the obvious
way. Let $\mathcal E_{\alpha+1}$ be the union of $\mathcal E_\alpha$ and the set of all closed cyclic
submodules of $\ell^2(E_\alpha)$.
Let $F$ be the $\ell^2$ direct sum of all the modules in $\mathcal E_{\aleph_1}$. As
$\{E\} = \mathcal E_0 \subseteq \mathcal E_{\aleph_1}$, condition (1) follows. Let $K$ be
a closed, cyclic submodule of $\ell^2(F)$, say $K$ is the closure of $\mathcal A\cdot x$.
Thus
\[ x\in\ell^2(F) \cong \ell^2-\bigoplus_{G\in\mathcal E_{\aleph_1}} \ell^2(G). \]
Say $x=(x_G)_{G\in\mathcal E_{\aleph_1}}$ where each $x_G \in \ell^2(G)$. As
$\|x\|^2 = \sum_G \|x_G\|^2 < \infty$, it follows that $x_G\not=0$ for at most
countably many $G$. As $\aleph_1$ is uncountable, we must actually have that there
exists $\alpha<\aleph_1$ with $x\in \ell^2-\bigoplus_{G\in\mathcal E_\alpha} \ell^2(G)
\cong \ell^2(E_\alpha)$. Then, by construction, $K\in\mathcal E_{\alpha+1}$, and so $K$
is a one-complemented submodule of $F$.
\end{proof}
Let $\mathcal A$ be a Banach algebra. Recall, for example from \cite{Daws}, that
$\operatorname{WAP}(\mathcal A^*)$ is the closed submodule of $\mathcal A^*$ consisting of those functionals
$\phi\in\mathcal A^*$ such that
\[ \mathcal A\rightarrow\mathcal A^*; \quad a \mapsto a\cdot\phi \]
is weakly-compact. Young's result, \cite{young}, shows that for each
$\phi\in\operatorname{WAP}(\mathcal A^*)$, there exists a reflexive Banach space $E$, a contractive
homomorphism $\pi:\mathcal A\rightarrow\mathcal B(E)$, and $x\in E,\mu\in E^*$ with
$\|\phi\| = \|x\| \|\mu\|$ and such that
\[ \ip{\phi}{a} = \ip{\mu}{\pi(a)(x)} \qquad (a\in\mathcal A). \]
Let $\mathcal A$ be a dual Banach algebra with predual $\mathcal A_*$. It is easy to show
(see \cite{Daws} for example) that $\mathcal A_*\subseteq\operatorname{WAP}(\mathcal A^*)$. We showed in
\cite[Section~3]{Daws} that Young's result holds for $\phi\in\mathcal A_*$, with
the additional condition that for any $\lambda\in E^*$ and $y\in E$, the
functional $\pi^*(\lambda\otimes y)$ is in $\mathcal A_*$, where
\[ \ip{\pi^*(\lambda\otimes y)}{a} = \ip{\lambda}{\pi(a)(y)}
\qquad (a\in\mathcal A). \]
Note that, a priori, Young's result only shows that $\pi^*(\lambda\otimes y)
\in \operatorname{WAP}(\mathcal A^*)$.
\begin{proposition}\label{prop::two}
With the notation of Proposition~\ref{prop::one}, we have that
$\pi^*(F^*\widehat\otimes F)$ is a subset of the closed submodule generated by
$\pi^*(E^*\widehat\otimes E)$.
\end{proposition}
\begin{proof}
The module $F$ is generated from $E$ by two constructions: (i) taking submodules;
and (ii) taking $\ell^2$-direct sums. For (i), let $K$ be a submodule of
$E$. The Hahn-Banach theorem shows that $\pi^*(K^*\widehat\otimes K) \subseteq
\pi^*(E^*\widehat\otimes E)$. For (ii), let $(K_i)$ be a family of submodules of $E$ with
$\pi^*(K_i^*\widehat\otimes K_i) \subseteq \pi^*(E^*\widehat\otimes E)$ for each $i$, and let
$F = \ell^2-\bigoplus_i K_i$. Let $\sum_n \mu_n\otimes x_n \in F^*\widehat\otimes F$,
which we may normalise so that $\sum_n \|\mu_n\|^2 = \sum_n \|x_n\|^2 < \infty$. For each $n$,
we have $\mu_n = (\mu^{(n)}_i)$ with $\|\mu_n\|^2 = \sum_i \| \mu^{(n)}_i \|^2$,
and $x_n = (x^{(n)}_i)$ with $\|x_n\|^2 = \sum_i \| x^{(n)}_i \|^2$. Then
\[ \sum_n \ip{\mu_n}{a\cdot x_n} = \sum_{n,i} \ip{\mu^{(n)}_i}{a\cdot x^{(n)}_i}
\qquad (a\in\mathcal A). \]
Hence
\[ \pi^*\Big( \sum_n \mu_n\otimes x_n \Big) = \pi^*\Big( \sum_{n,i} \mu^{(n)}_i
\otimes x^{(n)}_i \Big) \in \pi^*(E^*\widehat\otimes E). \]
Again, the Cauchy-Schwarz inequality shows that the sum on the right converges.
\end{proof}
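For clarity, the convergence claim at the end of the proof can be made explicit. With the normalisation $\sum_n \|\mu_n\|^2, \sum_n \|x_n\|^2 < \infty$ as above, two applications of the Cauchy--Schwarz inequality give:

```latex
\[
  \sum_{n,i} \|\mu^{(n)}_i\|\,\|x^{(n)}_i\|
  \le \sum_n \Big(\sum_i \|\mu^{(n)}_i\|^2\Big)^{1/2}
             \Big(\sum_i \|x^{(n)}_i\|^2\Big)^{1/2}
  = \sum_n \|\mu_n\|\,\|x_n\|
  \le \Big(\sum_n \|\mu_n\|^2\Big)^{1/2}
      \Big(\sum_n \|x_n\|^2\Big)^{1/2}
  < \infty.
\]
```

Here the first inequality is Cauchy--Schwarz over the index $i$, and the last is Cauchy--Schwarz over $n$.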
\begin{theorem}\label{thm::two}
Let $\mathcal A$ be a unital dual Banach algebra. There exists a reflexive Banach space
$E$ and an isometric, weak$^*$-weak$^*$-continuous homomorphism
$\pi:\mathcal A\rightarrow\mathcal B(E)$ such that $\pi(\mathcal A)'' = \pi(\mathcal A)$.
\end{theorem}
\begin{proof}
By \cite[Corollary~3.8]{Daws}, we may suppose that $\mathcal A\subseteq\mathcal B(E_0)$,
for some reflexive Banach space $E_0$.
By Proposition~\ref{prop::one}, we can find a self-generating, reflexive Banach
space $E$ and a contractive representation $\pi:\mathcal A\rightarrow \mathcal B(E)$.
As $E_0\subseteq E$, it follows that $\pi$ is an isometry. By
Proposition~\ref{prop::two}, $\pi$ is weak$^*$-weak$^*$-continuous.
The result now follows from Theorem~\ref{thm::one}.
\end{proof}
It is well-known that for any Banach algebra $\mathcal A$, we have that
$\operatorname{WAP}(\mathcal A^*)^*$ is a dual Banach algebra (see, for example,
\cite[Proposition~2.4]{Daws}). When $\mathcal A$ has a bounded approximate
identity, a weak$^*$-limit point of the approximate identity in
$\operatorname{WAP}(\mathcal A^*)^*$ will be a unit for $\operatorname{WAP}(\mathcal A^*)^*$.
\begin{corollary}
Let $\mathcal A$ be a Banach algebra with a bounded approximate identity.
There exists a reflexive Banach space
$E$ and a contractive homomorphism $\pi:\mathcal A\rightarrow\mathcal B(E)$ such that
$\pi(\mathcal A)''$ is isometrically, weak$^*$-weak$^*$-continuously isomorphic
to $\operatorname{WAP}(\mathcal A^*)^*$.
\end{corollary}
We also remark that Uygul showed in \cite{uygul} that, given a dual, completely
contractive Banach algebra $\mathcal A$, we can find a reflexive operator space $E$ and
a completely isometric, weak$^*$-weak$^*$-continuous homomorphism $\pi:\mathcal A
\rightarrow\mathcal B(E)$. Using this result, we can easily prove a version of
Theorem~\ref{thm::two} for completely contractive Banach algebras. Indeed, the
only thing to do is to equip $\ell^2$ direct sums with an operator space structure
such that the inclusion and projection maps are complete contractions.
This is worked out in detail in \cite{xu} (see also \cite{uygul}).
Finally, we remark that the space constructed in Theorem~\ref{thm::two} is
very abstract. For the measure algebra $M(G)$ of a locally compact group $G$, Young showed
in \cite{young} that $M(G)$ can be weak$^*$-represented on a direct sum of
$L^p(G)$ spaces; the analogous result for the Fourier algebra was shown by the
author in \cite{daws1}. For such concrete Banach algebras $\mathcal A$, it would be
interesting to know if ``nice'' reflexive Banach spaces $E$ could be found with
$\pi:\mathcal A\rightarrow\mathcal B(E)$ such that $\pi(\mathcal A)''=\pi(\mathcal A)$.
\noindent\emph{Author's Address:}
\parbox[t]{3in}{School of Mathematics,\\
University of Leeds\\
Leeds\\
LS2 9JT.}
\noindent\emph{Email:} \texttt{[email protected]}
\end{document}